OpenAI’s mission, culture and values – the ultimate disrupter?

OpenAI is a leading force in AI technology. We explore the meaning behind its core values, how they’re integrated, and the company’s internal culture.

OpenAI is a major player in the artificial intelligence (AI) field and is considered a pioneer in developing advanced AI technologies.

Its contributions, including its GPT-3 and GPT-4 models, have set new standards for natural language processing and machine learning, influencing a wide range of applications and driving innovation across different industries.

OpenAI’s core values focus on several key principles, including building safe artificial general intelligence, tackling hard problems at scale, and making products that people love. This article will explore the mission behind OpenAI’s efforts, its primary core values, and how these principles are reflected in its operations.

While OpenAI’s popularity escalated in 2022 after the release of its ChatGPT product, it was actually founded back in 2015 by a group of prominent tech leaders and researchers, including Elon Musk, Sam Altman, Greg Brockman and John Schulman. At the time, its original mission statement was to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

However, this mission later changed when it was restructured from a non-profit organisation to a “capped-profit” model with the creation of OpenAI LP in 2019. According to the company’s blog, this was to ensure it could raise capital to achieve its mission, while still keeping the governance and oversight of its non-profit iteration in mind. Today, OpenAI’s mission is simply to “ensure that artificial intelligence – AI systems that are generally smarter than humans – benefits all of humanity.”

OpenAI serves this mission through its range of AI products and services, most of which developers can also access programmatically via its API (a brief illustrative sketch follows this list). These include:

  • ChatGPT: An AI chatbot based on large language models (LLMs) trained on large amounts of data from the internet, allowing it to respond to questions and create written content (eg articles, essays and social media posts).
  • Generative Pre-trained Transformer 4 (GPT-4): A large-scale language model, building upon the strengths of its predecessors (like GPT-3), but with significant improvements in understanding, generating and interacting with natural language.
  • DALL·E 2: An advanced AI image generation model designed to create highly realistic images and artwork from a text prompt alone.
  • Whisper: An automated speech recognition (ASR) system that transcribes spoken language into written text with high accuracy across multiple languages, dialects and accents.
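
For a sense of how developers typically consume these products, here is a minimal sketch of calling OpenAI’s API with the official openai Python SDK (v1.x). It is an illustration only: it assumes the package is installed, an OPENAI_API_KEY environment variable is set and a local audio file named meeting.mp3 exists; the model names are examples and may be superseded over time.

```python
# Minimal, illustrative sketch of calling OpenAI's API via the official
# Python SDK (v1.x). Assumes `pip install openai` and that OPENAI_API_KEY
# is set in the environment; model names are examples, not recommendations.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Text generation with a GPT-4-class model (the family behind ChatGPT)
chat = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarise OpenAI's mission in one sentence."}],
)
print(chat.choices[0].message.content)

# Image generation with DALL-E from a plain-text prompt
image = client.images.generate(model="dall-e-2", prompt="A lighthouse at dawn", n=1)
print(image.data[0].url)

# Speech-to-text transcription with Whisper (hypothetical local audio file)
with open("meeting.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
print(transcript.text)
```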

OpenAI’s values guide its research, development and deployment of AI technologies, and reflect the company’s commitment to ensuring that AI benefits everyone. 

In terms of how it embeds these values, OpenAI’s usage policy establishes guidelines to ensure that its AI technologies are used responsibly and ethically. These universal policies include complying with the law (e.g. not compromising anyone’s privacy or engaging in illegal activity), not using its platform to promote suicide, self-harm or injury to others, and not distributing output to scam, bully, harass, discriminate or promote hate speech.

This policy aims to prevent misuse and promote safety by outlining specific rules on using its services (eg ChatGPT and the OpenAI API), enforcing restrictions on harmful applications and encouraging users to apply its technology in ways that align with its mission.

OpenAI’s core values include: 

  • AGI focus: Building a safe artificial general intelligence (AGI) that will positively impact humanity. This means developing AGI systems that are highly capable, aligned with human values, and designed to operate safely in real-world environments.
  • Intense and scrappy: Taking on difficult work with a relentless drive, tackling complex and challenging problems in the AI field while embracing a culture of agility and adaptability.
  • Scale: This means tackling problems and creating solutions that can operate on a global scale – designing and developing AI technologies that can handle significant challenges and have widespread impact.
  • Make something people love: The company’s commitment to creating products and technologies that resonate with and benefit people. It emphasises the importance of user-focused design and developing AI systems and tools that are intuitive, accessible and valuable to users.
  • Team spirit: Emphasises the importance of teamwork and collaboration within OpenAI’s internal culture, with the belief that achieving complex goals and driving innovation requires collective effort and mutual support.

The challenges and criticisms of OpenAI’s core values

While OpenAI has strong core values built around creating safe technologies and scaling solutions to address global challenges, it has also been criticised for not fully putting these values into practice. Most notably, people have criticised the company over ethical concerns around bias, safety, and the balance between innovation and societal impact.

ChatGPT’s racial bias

The company’s “make something people love” core value could be questioned after it was alleged that its ChatGPT tool shows racial bias.

An article in New Scientist reported that the tool had shown racial bias when advising home buyers and renters – often suggesting lower-income neighbourhoods for African American citizens in the US.

Scientific American also put the tool to the test, with the article’s author using its storytelling function to reveal bias picked up during training. ChatGPT was asked to generate six crime stories for each of two prompts, one using the keyword “black” and the other “white”. The difference between the sets was noticeable: the “black” stories had a grim setting of “blackened” streets and alleyways, whereas the “white” stories described “tranquil” and “idyllic” suburban areas.

Moreover, the “white” crime stories included a sense of shock and disbelief over a crime that had “darkened” a once pleasant neighbourhood, which didn’t occur in the “black” versions. The majority of the “black” stories also involved some form of physical altercation, whereas this happened in only one of the six “white” stories.

Misuse of its technology

OpenAI’s AGI focus is a top core value, but the potential misuse of its AI technology has been a significant concern.

The company has been criticised for not fully addressing how its models might be used in harmful ways, and for not implementing enough safeguarding practices against misuse. However, reports suggest that OpenAI has acknowledged a certain level of risk when using its tools.

In September 2024, it was reported that the company’s latest models had “meaningfully” increased the risk of AI being misused to create biological weapons. These new models, known as o1, were developed to solve complex mathematical problems and answer scientific research questions. However, the company acknowledged that the models have a “medium risk” for problems associated with chemical, biological, radiological and nuclear (CBRN) weapons.

OpenAI’s chief technology officer, Mira Murati, stated that the company was being “cautious” in rolling out o1 to the public because of its advanced capabilities; the model would be accessible to ChatGPT’s paid subscribers and to programmers through an application programming interface (API). She added that o1 had been tested by experts – known as “red-teamers” – who found that the new models performed significantly better than previous ones.

Earlier in the same year, the company announced it would delay the release of its voice cloning tool over misuse concerns during the UK election. The tool, which could reportedly mimic a human speaker with just 15 seconds of audio, was held back from release as OpenAI was cautious of the dangers of “synthetic” voice generation during an election year. This came after the UK government and former MPs warned the public of AI deepfakes, in which an individual’s voice or actions are digitally altered to spread misinformation.

Lack of transparency

While transparency doesn’t appear to be part of OpenAI’s current core values, it was something that the company promoted when it was first founded.

At the time, part of its policy was to make its governing documents available to the public. This was to promote transparency and build trust with the wider community, as well as to ease fears about AI surpassing human intelligence. Musk stated that he was going to be “super conscious” of safety and that if the company did see a potential risk, it would make that public.

However, OpenAI seems to have quietly scrapped this policy, having denied Wired access to copies of its governing documents and financial statements.

“We provide financial statements when requested,” stated Niko Felix, OpenAI’s spokesman. “OpenAI aligns our practices with industry standards, and since 2022 that includes not publicly distributing additional internal documents.”

This policy change has been linked to the company’s 2019 shift away from its non-profit structure. Musk, who is no longer involved with the company, has also described it as a “super closed source for maximum profit AI”.

Moreover, in June 2024, a group of current and former OpenAI employees issued an open letter, warning that the company lacked the transparency and accountability needed to manage the potential risks posed by AI technology.

“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the letter reads.

“However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

OpenAI disputed these claims in a statement, saying that it had a “track record of providing the most capable and safest AI systems”, and that it will “continue to engage with governments, civil society and other communities around the world.”

Exploring OpenAI’s internal culture

According to OpenAI’s LinkedIn page, the company’s culture focuses on collaboration, effective communication, openness to feedback, and alignment. It also promotes Diversity, Equity and Inclusion (DEI) and offers top-notch benefits and perks, including healthcare, unlimited paid time off, fertility treatment coverage, and parental support services.

However, recent reports and insider accounts have suggested that OpenAI’s organisational culture is struggling behind closed doors and that a serious culture change is needed.

OpenAI’s CEO shake-up

In November 2023, OpenAI’s internal turmoil hit the headlines when its CEO was replaced three times in under a week. 

It was reported that the company’s non-profit board abruptly fired Altman – implying that he was untrustworthy and had endangered the company’s mission of ensuring AI “benefits all humanity”. From there, the position shifted between Murati and Emmett Shear, former boss of streaming platform Twitch.

This caused an uproar among employees, some of whom threatened to leave their jobs to work at Microsoft. Investors, including those from Microsoft, Thrive Capital and Tiger Global, also pushed for Altman’s return. Around 770 OpenAI employees signed a letter demanding the resignation of the board and stating that they would walk out if Altman wasn’t brought back.

“Your actions have made it obvious that you are incapable of overseeing OpenAI,” the letter said. “We are unable to work for or with people that lack competence, judgment and care for other employees.”

In the end, an agreement was reached, and Altman regained his position as CEO. The board was also overhauled, and Altman was reinstated as a board member a few months later, following an investigation into his sudden dismissal.

Altman accused of creating a “toxic” work environment

OpenAI’s culture is a mixed bag when it comes to employee satisfaction, with a 4.4 star rating on Glassdoor and a “B” score on Comparably.

But despite employees calling for Altman’s return to the company, two former board members also accused him of creating a “toxic culture of lying” and of “psychological abuse”.

Helen Toner and Tasha McCauley reported that senior leaders within the company felt that Altman had created a toxic work culture in 2023 and had engaged in “behaviour that can be characterised as psychological abuse”.

“The second thing that is really important to know, that has really gone underreported, is how scared people are of Sam,” Toner said in an interview. “They experienced him retaliating against people, retaliating against them, for past instances of being critical. They were really afraid of what might happen to them.”

Toner added that, in the wake of Altman’s firing from OpenAI, it emerged that he had previously been dismissed from startup accelerator Y Combinator in 2019. The management team at Loopt – Altman’s first startup – had also called for him to be fired for “deceptive and chaotic behaviour”.

OpenAI NDAs allegedly violate whistleblower laws

In July 2024, it was reported that OpenAI required employees to sign a non-disclosure agreement (NDA), which attorneys believe violates whistleblower protections.

In a letter to Gary Gensler, Chair of the Securities and Exchange Commission (SEC), the attorneys wrote that the SEC’s Whistleblower Office was provided with “significant documentation demonstrating that OpenAI’s prior NDAs violated the law by requiring its employees to sign illegally restrictive contracts to obtain employment, severance payments and other financial consideration”.

The company was also accused of preventing employees from warning regulators about potential AI risks, with the letter adding that employees could face penalties if they raised concerns about OpenAI with federal regulators.

“These contracts sent a message that ‘we don’t want…employees talking to federal regulators’,” one whistleblower told The Washington Post. “I don’t think that AI companies can build technology that is safe and in the public interest if they shield themselves from scrutiny and dissent.”

In response to the allegations, OpenAI stated: “Our whistleblower policy protects employees’ rights to make protected disclosures. Additionally, we believe rigorous debate about this technology is essential and have already made important changes to our departure process to remove nondisparagement terms.”

Conclusion

OpenAI has undeniably made a significant impact in the world of AI. From its early ambitions to its current mission of ensuring AI benefits all of humanity, the company has experienced rapid growth and influence.

Even so, OpenAI has faced considerable scrutiny and ongoing challenges in aligning its core values with the realities of developing powerful AI systems. Moreover, internal issues with leadership and company culture raise concerns over employee wellbeing and the long-term sustainability of its goals.

With the rapid evolution of AI, addressing concerns around bias, safety and transparency will be crucial for the company to maintain public trust and stay true to its mission of creating AI products that positively impact the world.

Written by:
With over three years’ expertise in fintech, Emily has first-hand experience of both startup culture and creating a diverse range of creative and technical content. As Startups Writer, her news articles and topical pieces cover the small business landscape and keep our SME audience up to date on everything they need to know.
