[Note: this is a draft version of this website. Its most recent substantive update was in September 2025. If you would be interested in maintaining this website and have a track record that would enable you to credibly commit to doing so in a responsible manner, please express interest here.]
In recent years, AI technology has advanced rapidly, driven by everything from GPU improvements to a growing breadth of well-labeled datasets. As AGI appears increasingly plausible, we must contend with important questions of safety and ethics. In particular, as artificial intelligence approaches human-like intelligence, will AI systems soon become moral patients, deserving of moral consideration by moral agents? And are leading AI companies, likely among the most influential stakeholders in the future of AI, doing enough in the present, or even seriously considering these implications?
This dashboard researches and numerically scores some of the most influential AI companies on their efforts to recognize AI systems as potential moral patients, and on the groundwork they have laid through AI safety initiatives.
If you have any interesting information you'd like to contribute to the website, tell us about it here.
Each category is scored out of 10 and weighted as shown; the final score is the weighted sum of category scores, out of 100. Company names are omitted here, so columns are labeled A through E in dashboard order.

| Category (weight) | A | B | C | D | E |
| --- | --- | --- | --- | --- | --- |
| **Final Score** | **52.6** | **43.8** | **35.4** | **19.6** | **21.8** |
| Acknowledgement of AI Moral Status (16%) | 5 | 3 | 3 | 1 | 1 |
| Transparency on AI Capabilities and Limitations (8%) | 4 | 3 | 3 | 2 | 2 |
| Employee and Stakeholder Awareness and Training (10%) | 5 | 4 | 3 | 1 | 1 |
| AI Rights and Protections (14%) | 5 | 3 | 2 | 1 | 1 |
| Ethical Accountability for AI Systems (12%) | 2 | 2 | 2 | 1 | 3 |
| Commitment to Safety in AI Development (12%) | 8 | 7 | 5 | 3 | 3 |
| Protection from Malicious Actors and Security Risks (6%) | 4 | 5 | 6 | 1 | 2 |
| Transparent and Explainable AI Systems (8%) | 8 | 9 | 4 | 7 | 7 |
| Mitigation of Manipulation and Stakeholder Biases (6%) | 5 | 3 | 4 | 1 | 1 |
| Collaboration with External Experts and Researchers (8%) | 7 | 7 | 6 | 3 | 2 |
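The final scores above are consistent with a simple weighted sum: each 0–10 category score is multiplied by its weight and the results are summed, giving a total out of 100 (for column A: 5 × 1.6 + 4 × 0.8 + 5 × 1.0 + 5 × 1.4 + 2 × 1.2 + 8 × 1.2 + 4 × 0.6 + 8 × 0.8 + 5 × 0.6 + 7 × 0.8 = 52.6). Below is a minimal Python sketch of that calculation; the `WEIGHTS` mapping and `final_score` helper are illustrative names, not part of the dashboard itself.

```python
# Minimal sketch of the weighted scoring shown in the table above.
# Category scores are out of 10; each is scaled by its weight
# (its percentage of the final score), yielding a total out of 100.

WEIGHTS = {
    "Acknowledgement of AI Moral Status": 0.16,
    "Transparency on AI Capabilities and Limitations": 0.08,
    "Employee and Stakeholder Awareness and Training": 0.10,
    "AI Rights and Protections": 0.14,
    "Ethical Accountability for AI Systems": 0.12,
    "Commitment to Safety in AI Development": 0.12,
    "Protection from Malicious Actors and Security Risks": 0.06,
    "Transparent and Explainable AI Systems": 0.08,
    "Mitigation of Manipulation and Stakeholder Biases": 0.06,
    "Collaboration with External Experts and Researchers": 0.08,
}

def final_score(category_scores: dict[str, float]) -> float:
    """Weighted sum of 0-10 category scores, scaled to 0-100."""
    return sum(WEIGHTS[cat] * score * 10 for cat, score in category_scores.items())

# Example: column A of the table above.
company_a = {
    "Acknowledgement of AI Moral Status": 5,
    "Transparency on AI Capabilities and Limitations": 4,
    "Employee and Stakeholder Awareness and Training": 5,
    "AI Rights and Protections": 5,
    "Ethical Accountability for AI Systems": 2,
    "Commitment to Safety in AI Development": 8,
    "Protection from Malicious Actors and Security Risks": 4,
    "Transparent and Explainable AI Systems": 8,
    "Mitigation of Manipulation and Stakeholder Biases": 5,
    "Collaboration with External Experts and Researchers": 7,
}

print(round(final_score(company_a), 1))  # 52.6, matching the first final score
```

The same function reproduces all five final scores from their columns, which is how the weights and the 0–10 scale were verified.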