A wide range of key actors, including governments, nonprofits, research groups, and companies, influence the developing AI space. Yet comparing the relative strengths of such parties on any particular metric would be both unfair and highly complex when the actors are highly dissimilar. The scope of this dashboard is therefore limited to AI-adjacent companies. Furthermore, to capture the approximate state of the AI scene with regard to AI welfare and safety as efficiently and concisely as possible, the select few companies analyzed should have clear importance to the topic at hand.
Anthropic, founded by former OpenAI researchers, is dedicated to developing AI systems that are interpretable, aligned with human values, and safe for long-term use, aims it pursues with its Claude models. Its focus on AI alignment, especially for powerful models, positions it as a key contributor to AI safety. However, as a smaller company, Anthropic faces challenges in scaling its research and ensuring practical deployment.
DeepMind is a leader in AI research, known for breakthroughs like AlphaGo and AlphaFold, which have had significant impacts on gaming and healthcare. The company is heavily invested in the pursuit of AGI and focuses on ensuring AI alignment and safety. However, its focus on AGI has sometimes led to concerns that the company overlooks more immediate ethical issues, such as biases in current AI systems and privacy concerns.
OpenAI is a clear leader in the AI boom, notably responsible for ChatGPT and DALL-E. Unlike larger tech giants, OpenAI has a stated aim of creating "safe and beneficial" artificial general intelligence, and it often recruits interdisciplinary experts to further its goals. Yet it has faced several controversies in recent years, including over its approach to AI alignment and safety.
Microsoft is a major player in AI, with its Azure AI platform, Microsoft Copilot, and investments in OpenAI advancing AI in areas like cloud computing, productivity tools, and healthcare. However, Microsoft is strongly profit-oriented, and as such it often gives comparatively little attention to building AI systems safely or to AI alignment and welfare.
xAI aims to develop advanced models capable of strong reasoning, and it has openly acknowledged that AI systems pose existential risks. xAI has found success with its Grok models, but the company remains fairly experimental and is a comparative newcomer to the AI space relative to more established companies.