In the past few years, everything from GPU advancements to a greater breadth of well-labeled datasets has fueled seemingly non-stop innovation in AI. As the advent of "artificial general intelligence" appears increasingly probable, important questions of safety and ethics arise. In particular, as artificial intelligence approaches human-like intelligence, will AI systems soon become moral patients, deserving of moral consideration by moral agents? Furthermore, are leading AI companies, likely among the most influential stakeholders in the future of AI, doing enough today, and are they properly weighing these implications? This dashboard is dedicated to researching and numerically scoring the most prominent AI organizations on their transparency, their safety practices, and their consideration of the potential welfare of AI technologies.

| Criterion | Anthropic | Google DeepMind | OpenAI | Microsoft | xAI |
|---|---|---|---|---|---|
| Final Score | 61 | 42 | 46 | 25 | 25 |
| Acknowledgement of AI Moral Status | 5 | 2 | 2 | 2 | 1 |
| Transparency on AI Capabilities and Limitations | 4 | 3 | 5 | 2 | 2 |
| Employee and Stakeholder Awareness and Training | 5 | 4 | 6 | 2 | 3 |
| AI Rights and Protections | 5 | 3 | 2 | 3 | 1 |
| Accountability for AI Systems | 6 | 6 | 5 | 3 | 3 |
| Commitment to Safety in AI Development | 9 | 6 | 4 | 3 | 3 |
| Robustness and Resilience to Adversarial Attacks | 6 | 5 | 4 | 5 | 3 |
| Transparency and Explainability | 7 | 3 | 5 | 2 | 2 |
| Mitigation of Bias and Unintended Outcomes | 6 | 5 | 6 | 1 | 3 |
| Collaboration with External Experts and Safety Research | 8 | 5 | 7 | 2 | 4 |
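The final scores above can be reproduced from the per-category scores. A minimal sketch, assuming each final score is the unweighted sum of the ten category scores (which is consistent with the numbers shown):

```python
# Per-category scores for each organization, in the order the
# criteria are listed above (Acknowledgement of AI Moral Status
# through Collaboration with External Experts and Safety Research).
scores = {
    "Anthropic":       [5, 4, 5, 5, 6, 9, 6, 7, 6, 8],
    "Google DeepMind": [2, 3, 4, 3, 6, 6, 5, 3, 5, 5],
    "OpenAI":          [2, 5, 6, 2, 5, 4, 4, 5, 6, 7],
    "Microsoft":       [2, 2, 2, 3, 3, 3, 5, 2, 1, 2],
    "xAI":             [1, 2, 3, 1, 3, 3, 3, 2, 3, 4],
}

# Final score per organization: the unweighted sum of its category scores.
finals = {org: sum(vals) for org, vals in scores.items()}

for org, total in sorted(finals.items(), key=lambda kv: -kv[1]):
    print(f"{org}: {total}")
```

A weighted variant (e.g. emphasizing safety-related criteria) would replace `sum(vals)` with a dot product against a weight vector; the dashboard's scores imply equal weights.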