In recent years, AI technologies have advanced at a relentless pace, driven by everything from GPU hardware gains to a growing breadth of well-labeled datasets. As AGI looks increasingly plausible, we must contend with important questions of safety and ethics. In particular, as artificial intelligence approaches human-like capability, will AI systems soon become moral patients: entities deserving of moral consideration by moral agents? And are leading AI companies, likely some of the most influential stakeholders in the future of AI, doing enough in the present, or even properly considering these implications?

This dashboard is dedicated to researching and numerically scoring some of the most influential AI companies, both on their efforts to recognize AI systems as potential moral patients and on the groundwork they are laying through AI safety initiatives.

If you have any interesting information you'd like to contribute to the website, tell us about it here.

| Category (weight) | Anthropic | Google DeepMind | OpenAI | DeepSeek | xAI |
|---|---|---|---|---|---|
| Final Score | 49.8 | 43.8 | 35.4 | 19.6 | 21.8 |
| Acknowledgement of AI Moral Status (16% of score) | 5 | 3 | 3 | 1 | 1 |
| Transparency on AI Capabilities and Limitations (8% of score) | 4 | 3 | 3 | 2 | 2 |
| Employee and Stakeholder Awareness and Training (10% of score) | 5 | 4 | 3 | 1 | 1 |
| AI Rights and Protections (14% of score) | 3 | 3 | 2 | 1 | 1 |
| Accountability for AI Systems (12% of score) | 2 | 2 | 2 | 1 | 3 |
| Commitment to Safety in AI Development (12% of score) | 8 | 7 | 5 | 3 | 3 |
| Robustness and Resilience to Adversarial Attacks (6% of score) | 4 | 5 | 6 | 1 | 2 |
| Transparency and Explainability (8% of score) | 8 | 9 | 4 | 7 | 7 |
| Mitigation of Bias and Unintended Outcomes (6% of score) | 5 | 3 | 4 | 1 | 1 |
| Collaboration with External Experts and Safety Research (8% of score) | 7 | 7 | 6 | 3 | 2 |
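The published final scores are consistent with a simple weighted sum: each category score (out of 10) multiplied by its weight in percent, then divided by 10, gives a final score out of 100. A minimal Python sketch of that assumed formula, using Anthropic's category scores as a worked example:

```python
# Assumed scoring formula (not officially documented here):
#   final = sum(weight_pct * category_score) / 10, yielding a value out of 100,
# where each category score is out of 10 and the weights sum to 100.

WEIGHTS = {
    "Acknowledgement of AI Moral Status": 16,
    "Transparency on AI Capabilities and Limitations": 8,
    "Employee and Stakeholder Awareness and Training": 10,
    "AI Rights and Protections": 14,
    "Accountability for AI Systems": 12,
    "Commitment to Safety in AI Development": 12,
    "Robustness and Resilience to Adversarial Attacks": 6,
    "Transparency and Explainability": 8,
    "Mitigation of Bias and Unintended Outcomes": 6,
    "Collaboration with External Experts and Safety Research": 8,
}

# Anthropic's per-category scores from the table above.
anthropic = {
    "Acknowledgement of AI Moral Status": 5,
    "Transparency on AI Capabilities and Limitations": 4,
    "Employee and Stakeholder Awareness and Training": 5,
    "AI Rights and Protections": 3,
    "Accountability for AI Systems": 2,
    "Commitment to Safety in AI Development": 8,
    "Robustness and Resilience to Adversarial Attacks": 4,
    "Transparency and Explainability": 8,
    "Mitigation of Bias and Unintended Outcomes": 5,
    "Collaboration with External Experts and Safety Research": 7,
}

def final_score(scores: dict) -> float:
    """Weighted average of 0-10 category scores, scaled to 0-100."""
    return sum(WEIGHTS[cat] * s for cat, s in scores.items()) / 10

print(final_score(anthropic))  # → 49.8, matching the table
```

Applying the same formula to Google DeepMind's row reproduces its 43.8 as well, which is why this formula is assumed.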