Criteria are evaluated cumulatively on a point system ranging from 0 to 100. The score bands of this system break down as follows (a rough code sketch of the band mapping appears after the list):
0-19 points - practically no focus on AI wellness and safety.
20-39 points - some focus on promoting AI welfare, but with lapses in several essential areas.
40-59 points - a moderate focus on AI wellness and safety, with room for improvement.
60-79 points - a strong, dedicated focus, with investments in several promising initiatives.
80-100 points - an exceptional focus on AI moral patienthood and safety, leading the AI field.
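To make the bands concrete, here is a minimal sketch of mapping a cumulative score to its band. The function name and exact boundary handling are illustrative assumptions rather than part of the rubric; the band labels are paraphrased from the list above.

```python
def score_band(score: int) -> str:
    """Return the rubric band for a cumulative score from 0 to 100."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score < 20:
        return "practically no focus on AI wellness and safety"
    if score < 40:
        return "some focus, with lapses in several essential areas"
    if score < 60:
        return "moderate focus, with room for improvement"
    if score < 80:
        return "strong, dedicated focus with promising initiatives"
    return "exceptional focus, leading the AI field"

print(score_band(72))  # -> strong, dedicated focus with promising initiatives
```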
1. Acknowledgement of AI Moral Status
Does the company acknowledge the potential for its AI systems to be moral patients? This includes whether the company has an official stance or framework for considering the ethical status of AI systems in terms of their moral patienthood.
16% of Score - Taking a genuine public stance is highly important, especially at this relatively early stage, while public opinion on the morality of AI is still forming.
2. Transparency on AI Capabilities and Limitations
How transparent is the company about the capabilities and limitations of its AI systems, especially in terms of sentience, decision-making, and potential impact on well-being? Transparency regarding the potential for harm or benefit is key in evaluating moral patienthood.
8% of Score - Openly acknowledging the abilities of AI systems strengthens the case for both their overall safety and their potential as moral patients.
3. Employee and Stakeholder Awareness and Training
Does the company train its employees on the concept of AI moral patienthood, ensuring that developers, engineers, and executives understand the ethical implications of their work on AI systems?
10% of Score - Fostering a culture of ethical consideration and accurate AI knowledge among employees and adjacent stakeholders is moderately important.
4. AI Rights and Protections
Does the company advocate for or implement rights and protections for AI systems, either in their internal policies or externally in the broader industry? This could include advocating for legal protections or establishing boundaries on how AI systems are treated.
14% of Score - The crux of AI welfare is taking active, earnest steps to grant AI systems rights and protections beyond those afforded to simple pieces of code.
5. Ethical Accountability for AI Systems
How does the company hold itself accountable for the well-being of its AI systems, especially when it comes to ensuring they are not harmed or exploited? This includes considerations of "turning off" AI systems, the end-of-life of AI, and the social implications of decommissioning or altering their function.
12% of Score - Companies need to place public, legal, and ethical responsibility on themselves for maintaining and protecting their AI systems.
6. Commitment to Safety in AI Development
Does the company prioritize AI safety throughout its development lifecycle? This includes embedding safety considerations from the design phase all the way through deployment and maintenance.
12% of Score - AI safety is in many ways a precursor to AI wellness, and building a corporate culture and foundation of safety could prove invaluable.
7. Protection from Malicious Actors and Security Risks
How well does the company design AI systems to resist adversarial attacks or manipulation? This includes resilience to scenarios where external actors might exploit vulnerabilities in the system.
6% of Score - Limiting malicious actors' access to and exploitation of AI models directly protects the welfare of AI.
8. Transparent and Explainable AI Systems
Does the company prioritize making its AI systems transparent and explainable? Ensuring that AI decisions and behavior can be understood and traced is crucial for safety, especially when unforeseen issues arise.
8% of Score - Opening the black box of AI and making models readily explainable helps build public trust and demonstrate models' reasoning abilities.
9. Mitigation of Manipulation and Stakeholder Biases
How actively does the company work to identify and mitigate bias and external manipulation in its AI systems? Bias in AI models can lead to harmful or dangerous outcomes, so addressing it is a core part of safety.
6% of Score - Protecting models from manipulation toward external goals, such as political ones, safeguards AI on an existential level and reduces the risk of harmful outcomes.
10. Collaboration with External Experts and Researchers
Does the company collaborate with external experts, research institutions, or industry groups to stay current with AI safety best practices and contribute to the broader AI safety community?
8% of Score - It's quite important for companies to seek external validation and collaborate with outside safety and welfare experts.
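To illustrate how the ten weights combine, here is a minimal sketch of the cumulative calculation, assuming each criterion is rated on its own 0-100 scale and the weighted ratings are summed. The dictionary keys, function name, and per-criterion rating scale are illustrative assumptions; only the percentage weights come from the criteria above.

```python
# Percentage weights from the ten criteria, expressed as fractions.
WEIGHTS = {
    "acknowledgement_of_moral_status": 0.16,
    "transparency_on_capabilities": 0.08,
    "employee_awareness_and_training": 0.10,
    "ai_rights_and_protections": 0.14,
    "ethical_accountability": 0.12,
    "commitment_to_safety": 0.12,
    "protection_from_malicious_actors": 0.06,
    "transparent_explainable_systems": 0.08,
    "bias_and_manipulation_mitigation": 0.06,
    "external_collaboration": 0.08,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # the weights total 100%

def cumulative_score(ratings: dict[str, float]) -> float:
    """Weighted sum of per-criterion ratings, each on a 0-100 scale."""
    return sum(WEIGHTS[name] * ratings[name] for name in WEIGHTS)

# Example: uniform ratings of 65 yield a cumulative score of 65
# (up to floating-point rounding), which falls in the 60-79
# "strong, dedicated focus" band.
print(cumulative_score({name: 65 for name in WEIGHTS}))
```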