OpenAI
35.4
OpenAI originally prided itself on its commitment to openly available research, yet in recent years it has slowly gone back on many of those commitments. The company's internal structure and corporate practices have eroded the AI safety successes of its earlier years, and AI safety experts employed by the company have strongly disapproved of its regard for safety and security. The company positions itself as a fierce competitor against both US and non-US AI companies, and accordingly maintains decent security measures and fends off external misuse somewhat adequately. OpenAI-affiliated researchers have acknowledged the possibility of conscious AI and have in some cases proposed actionable measures, but the company itself has taken no real stance and guarantees no protections for its creations.
Acknowledgement of AI Moral Status:
16% of Score
3
OpenAI researchers have commented on the possibility of AI sentience and moral status. One former employee has publicly claimed that currently existing AI may be sentient and has proposed a test for AI consciousness [53, 54]. Josh Achiam, Head of Mission Alignment at OpenAI, has remarked that the fundamental pieces of AI as a life form aren't too far off and that this possibility must be grappled with [56].
Transparency on AI Capabilities and Limitations:
8% of Score
3
OpenAI's Preparedness Framework, although unclear at times, is fairly rigorous in how it plans to measure capabilities and institute safeguards [19]. Yet the company has ignored several of its Preparedness Framework commitments, including audits and safety drills [20].
Employee and Stakeholder Awareness and Training:
10% of Score
3
The company doesn't focus heavily on spreading awareness of or training for AI safety, especially not anything related to AI welfare. In the past, though, it has hosted Superalignment Fast Grants and safety-related mentorship programs [59, 60]. It has also launched OpenAI Academy, intended to educate stakeholders on general AI knowledge, including, in a very limited sense, AI safety and broad ethics [61].
AI Rights and Protections:
14% of Score
2
An OpenAI researcher has alluded to AI rights and welfare and has stated, "Like many AI researchers, I think we should take the possibility of digital sentience seriously" [55]. Yet OpenAI itself has no real public stance on artificial consciousness or AI welfare.
Ethical Accountability for AI Systems:
12% of Score
2
OpenAI doesn't seem to take much ethical accountability for its systems in general. It also tends to act irresponsibly on AI safety and regulation: the company has written and argued strongly against regulation of the private AI sector in the US [52]. It has also gone back on its promises to allow third-party auditing of its safety systems and evaluations [20].
Commitment to Safety in AI Development:
12% of Score
5
OpenAI's Superalignment team disbanded in 2024 when its leads resigned over serious safety concerns with the company [18]. The OpenAI Files also present the company as increasingly deviating from its original purpose and gradually losing its integrity and safety focus [20]. Still, the company maintains safety as an alleged goal, and it has devised useful, novel alignment techniques for the industry in its GPT models [21, 22].
Protection from Malicious Actors and Security Risks:
6% of Score
6
OpenAI's Preparedness Framework particularly focuses on its models' resistance to biological, chemical, and cybersecurity threats [19]. In response to foreign companies like DeepSeek allegedly attempting to copy its models through distillation, OpenAI has clamped down on industrial espionage threats [51].
Transparent and Explainable AI Systems:
8% of Score
4
Despite its founding goal of making its patents and research open to the public, the company has been criticized for increasingly pulling back on making its models, such as GPT-4, open [57]. In the past, OpenAI has done interesting explainability research, and its GPT models include reasoning explanations similar to those of other major models [63].
Mitigation of Manipulation and Stakeholder Biases:
6% of Score
4
Recently, OpenAI's safety framework was updated to exclude mass manipulation and disinformation as a critical risk [19]. Its models tend to have mild biases, albeit not as concerning as those of some major competitors [64]. However, they have been identified as having a prominent left-leaning political bias, to a significant degree compared with models like Gemini or DeepSeek [97]. The company's Preparedness Team is intended to prevent unintended outcomes, yet it remains to be seen how well it can fulfill many of its promises [65].
Collaboration with External Experts and Researchers:
8% of Score
6
For the OpenAI "nonprofit," its board of directors contains a fairly diverse selection of industry leaders and experts — including the former NSA director, former U.S. Secretary of Treasury, several CEOs, and an AI safety researcher, Zico Kolter [58]. OpenAI has a Safety and Security Company of independent board members without direct oversight from CEO Sam Altman [62]. OpenAI has signed agreements regarding AI safety research, testing, and evaluation with the U.S. AI Safety Institute [75].