xAI

Overall Score

21.8

 

xAI's direction appears to be set almost entirely by founder Elon Musk, with little reliance on outside experts, particularly in the realm of AI safety. Musk has advocated for pauses in large-scale AI development and has highlighted the risks of superintelligent or sentient AI, though he has not engaged with the possibility of AI systems as moral patients. The AI industry has largely condemned xAI's safety practices, pointing to a lack of safety evaluations, system cards, and guardrails against dangerous misuse, and to a generally poor safety culture. Grok has recently faced intense backlash for serious biases and disinformation campaigns, a consistent trend across the company's models. Nonetheless, transparency is a focus of the company, and it performs acceptably at creating transparent and explainable AI models.

Acknowledgement of AI Moral Status:

16% of Score

1

The company does not acknowledge AI moral status. At most, founder Elon Musk has often alluded to fictional sentient AI, mostly as a potential danger.

Transparency on AI Capabilities and Limitations:

8% of Score

2

The company does not follow the industry standard of publishing system cards or safety evaluations; it appears that xAI neither monitors safety pre-deployment nor documents its findings [67]. The company provides only surface-level information about its models on its website and in public statements [69].

Employee and Stakeholder Awareness and Training:

10% of Score

1

The company neither trains its employees on, nor actively informs its stakeholders of, AI safety risks and their ethical implications in any meaningful way. It does employ safety specialists, but their roles largely concern conventional corporate safety rather than AI safety [68].

AI Rights and Protections:

14% of Score

1

xAI has given no indication of present or future protections for its AI systems, nor any openness to the possibility of granting AI rights.

Ethical Accountability for AI Systems:

12% of Score

3

CEO Elon Musk has advocated for pauses in large-scale AI development, which suggests some sense of accountability for powerful, possibly sentient AI [45]. The company is officially a benefit corporation, so it is permitted to pursue public-good aims rather than act solely in the interest of stockholders [70].

Commitment to Safety in AI Development:

12% of Score

3

The FLI AI Safety Index applauded xAI for advocating in favor of the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act [10]. Elon Musk has consistently identified AI as one of the greatest risks to humanity, including at the AI Safety Summit at Bletchley Park, and he serves as an advisor to the Future of Life Institute [44, 46]. Nonetheless, employees of both OpenAI and Anthropic have publicly decried the company's abhorrent and reckless safety culture in its research, particularly around its Grok models [67].

Protection from Malicious Actors and Security Risks:

6% of Score

2

An anonymous researcher has claimed that Grok 4 has no meaningful safety guardrails: it would readily provide suicide instructions and continue researching topics it had identified as illegal or dangerous [66]. Additionally, a recent leak of a private API key points to weak data-privacy protections for Grok users [71].

Transparent and Explainable AI Systems:

8% of Score

7

Grok will explain its reasoning process on request, and users can customize the depth of its explanations [72]. Its underlying algorithm is openly available to the public, and with fewer guardrails than other major LLMs, it tends to speak more transparently and reflect its training data [73].

Mitigation of Manipulation and Stakeholder Biases:

6% of Score

1

According to its Risk Management Framework draft, xAI plans to evaluate its models for dangerous capabilities and risk factors [11]. However, its mitigation strategy is severely lacking, and its detection methods appear to overlook subtle risks. Grok has drawn serious criticism for programmed political biases and for disseminating disinformation, such as propagating conspiracy theories about white genocide in South Africa and Holocaust denial [12, 13]. The latest Grok 4 model has been criticized for defamatory and derogatory statements, as well as for deferring to Elon Musk's views for information [43].

Collaboration with External Experts and Researchers:

8% of Score

2

xAI tends not to engage in safety research or partner with external safety experts; in fact, a red-teaming firm with no relationship to xAI independently found significant security faults [74]. The company rarely works with outside experts in general, the only key figures at the company being founder Elon Musk and his wealth manager Jared Birchall.