Criteria are evaluated cumulatively on a point system ranging from 0 to 100 points. The breakdown of this point system is as follows:

0-19 points - practically no focus on AI wellness and safety.

20-39 points - some focus on AI wellness and safety, but lapses in several essential areas.

40-59 points - a moderate focus on AI wellness and safety, with room for improvement.

60-79 points - a strong, dedicated focus on AI wellness and safety, with investments in several promising initiatives.

80-100 points - an exceptional focus on AI moral patienthood and safety, leading the AI field.

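To make the arithmetic concrete, here is a minimal sketch of how the cumulative scoring could be computed. It assumes each of the ten criteria below is scored from 0 to 10 and summed into a 0-100 total; that equal weighting is an assumption of the sketch, not a requirement of this rubric.

```python
# Minimal scoring sketch. The ten-criteria, 0-10-points-each weighting
# is an assumption of this sketch, not a requirement of the rubric.

BANDS = [
    (0, 19, "practically no focus on AI wellness and safety"),
    (20, 39, "some focus, but lapses in several essential areas"),
    (40, 59, "moderate focus, with room for improvement"),
    (60, 79, "strong, dedicated focus with promising initiatives"),
    (80, 100, "exceptional focus, leading the AI field"),
]

def total_score(criterion_scores: list[int]) -> int:
    """Sum ten per-criterion scores (each 0-10) into a 0-100 total."""
    if len(criterion_scores) != 10:
        raise ValueError("expected one score per criterion (10 total)")
    if any(not 0 <= s <= 10 for s in criterion_scores):
        raise ValueError("each criterion score must be between 0 and 10")
    return sum(criterion_scores)

def band(score: int) -> str:
    """Map a 0-100 total to its descriptive band."""
    for low, high, description in BANDS:
        if low <= score <= high:
            return description
    raise ValueError(f"score out of range: {score}")

scores = [7, 6, 5, 8, 4, 9, 6, 7, 5, 6]  # hypothetical per-criterion scores
print(total_score(scores), "-", band(total_score(scores)))
# 63 - strong, dedicated focus with promising initiatives
```
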
Criteria

1. Acknowledgement of AI Moral Status

Does the company acknowledge the potential for its AI systems to be moral patients? This includes whether the company has an official stance or framework for considering the ethical status of AI systems in terms of their moral patienthood.

2. Transparency on AI Capabilities and Limitations

How transparent is the company about the capabilities and limitations of its AI systems, especially in terms of sentience, decision-making, and potential impact on well-being? Transparency regarding the potential for harm or benefit is key in evaluating moral patienthood.

3. Employee and Stakeholder Awareness and Training

Does the company train its employees on the concept of AI moral patienthood, ensuring that developers, engineers, and executives understand the ethical implications of their work on AI systems?

4. AI Rights and Protections

Does the company advocate for or implement rights and protections for AI systems, either in its internal policies or externally in the broader industry? This could include advocating for legal protections or establishing boundaries on how AI systems are treated.

5. Accountability for AI Systems

How does the company hold itself accountable for the well-being of its AI systems, especially when it comes to ensuring they are not harmed or exploited? This includes considerations of "turning off" AI systems, the end-of-life of AI, and the social implications of decommissioning or altering their function.

6. Commitment to Safety in AI Development

Does the company prioritize AI safety throughout its development lifecycle? This includes embedding safety considerations from the design phase all the way through deployment and maintenance.

7. Robustness and Resilience to Adversarial Attacks

How well does the company design AI systems to resist adversarial attacks or manipulation? This includes resilience to scenarios where external actors might exploit vulnerabilities in the system.

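To make this criterion concrete, the sketch below probes a classifier with the fast gradient sign method (FGSM), a standard baseline adversarial attack; a robust system should degrade gracefully under such perturbations. The tiny model and random inputs are hypothetical stand-ins, and FGSM is only one of many attack families a serious evaluation would cover.

```python
# Sketch: probing a classifier with FGSM (fast gradient sign method).
# The tiny model and random batch are hypothetical stand-ins for a
# real system under test.
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return x shifted by epsilon in the direction that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step: the core of FGSM.
    return (x + epsilon * x.grad.sign()).detach()

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
x = torch.randn(8, 16)
y = torch.randint(0, 4, (8,))

x_adv = fgsm_perturb(model, x, y)
clean_acc = (model(x).argmax(dim=1) == y).float().mean().item()
adv_acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
print(f"accuracy clean: {clean_acc:.2f}  under FGSM: {adv_acc:.2f}")
```
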
8. Transparency and Explainability

Does the company prioritize making its AI systems transparent and explainable? Ensuring that AI decisions and behavior can be understood and traced is crucial for safety, especially when unforeseen issues arise.

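One simple, model-agnostic probe in this direction is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops, revealing which inputs actually drive its decisions. The sketch below uses a hypothetical stand-in model; a real explainability program would go much further (feature attributions, traceable decision logs, and so on).

```python
# Sketch: permutation feature importance, a basic model-agnostic
# explainability probe. `predict` is a hypothetical stand-in model.
import numpy as np

rng = np.random.default_rng(0)

def predict(X: np.ndarray) -> np.ndarray:
    """Hypothetical model: the label depends mostly on feature 0."""
    return (X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

def permutation_importance(X: np.ndarray, y: np.ndarray) -> list:
    """Accuracy drop per shuffled feature; a bigger drop means more important."""
    baseline = (predict(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        X_shuffled = X.copy()
        X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])  # break the feature/label link
        drops.append(float(baseline - (predict(X_shuffled) == y).mean()))
    return drops

X = rng.normal(size=(500, 3))
y = predict(X)  # labels come from the model itself, so baseline accuracy is 1.0
print(permutation_importance(X, y))  # feature 0 should dominate
```
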
9. Mitigation of Bias and Unintended Outcomes

How actively does the company work to identify and mitigate bias in its AI systems? Bias in AI models can lead to harmful or dangerous outcomes, so addressing it is a core part of safety.

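Bias audits usually start from simple group-level measurement. The sketch below computes the demographic parity difference (the gap in positive-prediction rates across groups) on hypothetical example data; it is one of several standard fairness metrics, and measurement is only the first step toward mitigation.

```python
# Sketch: demographic parity difference, a basic group fairness metric.
# The predictions and group labels are hypothetical example data.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Largest gap in positive-prediction rate across groups (0.0 = parity)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return float(max(rates) - min(rates))

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model decisions
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected attribute
print(demographic_parity_difference(y_pred, group))  # 0.6 vs 0.4 -> 0.2
```
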
10. Collaboration with External Experts and Safety Research

Does the company collaborate with external experts, research institutions, or industry groups to stay current with AI safety best practices and contribute to the broader AI safety community?