A wide range of key actors - governments, nonprofits, research groups, companies - influence the developing AI space. Comparing two highly dissimilar actors against a single metric, however, would be both unfair and highly complex, so this dashboard limits its scope to AI-adjacent companies. To capture the state of the field with respect to AI welfare and safety as concisely as possible, the select few companies analyzed were chosen for their clear, strong importance to the topic at hand.

Who isn't included?

Even restricting attention to AI startups and major investors in AI, there remains a great breadth of options, and not all of them fall within the scope of this dashboard. Microsoft is excluded because its relationship to AI funding and research is largely tied to OpenAI; although factors like the contractual AGI clause between the two threaten that connection, for now most of the discussion relating to Microsoft is covered by simply analyzing OpenAI. Other companies, like Meta, Alibaba, Mistral AI, and Amazon, tend to lack either the scope or the currently demonstrated AI capabilities of those included; recent developments such as Meta's major investment in Scale AI look promising, but it remains to be seen whether those strategies will pay off.

Anthropic

Anthropic, founded by former OpenAI researchers, is dedicated to developing AI systems that are interpretable, aligned with human values, and safe for long-term use, goals it pursues through its Claude models. Its focus on AI alignment, especially for powerful models, positions it as a key contributor to AI safety. As a smaller company, however, Anthropic faces challenges in scaling its research and ensuring practical deployment.

Google DeepMind

DeepMind is a leader in AI research, known for breakthroughs like AlphaGo and AlphaFold, which have had significant impacts on game-playing AI and on healthcare research. The company is heavily invested in the pursuit of AGI and focuses on ensuring AI alignment and safety. However, this focus on AGI has sometimes raised concerns that the company overlooks more immediate ethical issues, such as biases in current AI systems and privacy concerns.

OpenAI

OpenAI is a clear leader in the AI boom, notably responsible for ChatGPT and DALL-E. Unlike the larger tech giants, OpenAI has a stated aim of creating "safe and beneficial" artificial general intelligence, and it often recruits interdisciplinary experts to further that goal. Yet it has faced several controversies in recent years, including over its approach to AI alignment and safety.

DeepSeek

A prominent newcomer as of early 2025, the Chinese company DeepSeek produces models that contend strongly with other major LLMs. DeepSeek stands out for its low training costs, transparency, and open-weight releases, but it has faced backlash over its built-in alignment with the Chinese government's positions, raising questions of censorship.

xAI

xAI aims to develop advanced models capable of strong reasoning and has openly acknowledged the potential existential impacts of AI systems. The company has found success with its Grok model, yet it remains fairly experimental and a newcomer relative to more established players in the AI space. Although Grok has been marketed as more open and honest, it has been implicated in spreading political disinformation in the past.