https://bdtechtalks.com/2020/08/17/openai-gpt-3-commercial-ai/

Greg Brockman (left), CTO of OpenAI, and Sam Altman (right), CEO of OpenAI (Photo by TechCrunch licensed under CC BY-SA 4.0)
This article is part of our series that explores the business of artificial intelligence.
A program that can automate website development. A bot that writes letters on behalf of nature. An AI-written blog that trended on Hacker News. Those are just some of the recent stories written about GPT-3, the latest contraption of artificial intelligence research lab OpenAI. GPT-3 is the largest language model ever made, and it has triggered much discussion about how AI will soon transform entire industries.
But what has been less discussed is how GPT-3 has transformed OpenAI itself. In the process of building the most successful natural language processing system to date, OpenAI has gradually morphed from a nonprofit AI lab into a company that sells AI services.
The lab is in a precarious position, torn between conflicting goals: developing profitable AI services and pursuing human-level AI for the benefit of all. And hanging in the balance is the very mission for which OpenAI was founded.
In March 2019, OpenAI announced that it would transition from a nonprofit lab to a “capped-profit” company. This opened the way for funding from investors and large tech companies, with the caveat that their returns would be capped at 100x their investment (talk about capped!).
But why the structural change? In a blog post, the company said the move was meant to “rapidly increase our investments in compute and talent while including checks and balances to actualize our mission.”
The key phrase here is “compute and talent.”
The costs of talent and compute are two of the key challenges of AI research. The talent pool for the kind of research OpenAI does is very small, and given the growing interest in commercial AI, there is fierce competition among large tech companies to recruit AI researchers for their own projects. This has triggered an arms race between tech giants, each offering higher salaries and perks to attract AI researchers.
Google and Facebook have managed to snatch Geoffrey Hinton and Yann LeCun, respectively, two of the three pioneers of deep learning. Ian Goodfellow, a well-respected AI researcher and the inventor of generative adversarial networks (GANs), works at Apple. Andrej Karpathy, another AI genius, works at Tesla.
There is still ample interest in academic and scientific research, but with most AI talent being drawn to companies that can dish out stellar salaries, nonprofit AI labs are finding it harder to fill their ranks unless they can match those salaries. According to a New York Times piece published in 2018, some of OpenAI’s researchers were making more than $1 million a year. DeepMind, another AI research lab, reported paying more than $483 million to its 700 employees in 2018.
Further increasing the cost of AI research are the computational requirements of artificial neural networks, the main component of deep learning algorithms. Before they can perform their tasks, neural networks must be trained on many examples, a process that requires expensive compute resources. In the past few years, OpenAI has engaged in several very costly AI projects, including a robotic hand that solves the Rubik’s Cube, a gaming bot that beat the champions of Dota 2, and a group of AI agents that played hide-and-seek 500 million times.
According to one estimate, training GPT-3 would cost at least $4.6 million. And to be clear, training deep learning models is not a clean, one-shot process; there is a lot of trial and error and hyperparameter tuning involved, which would probably increase the cost several-fold.
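To see where an estimate of that size comes from, here is a minimal back-of-envelope sketch in Python. It assumes the roughly 3.14×10^23 floating-point operations reported for training the 175-billion-parameter GPT-3, a sustained throughput of around 28 teraFLOPS per GPU (a plausible figure for a V100 in mixed precision), and a cloud price of about $1.50 per GPU-hour. The throughput and price are assumptions for illustration, not figures from OpenAI, and the real bill depends on hardware, utilization, and cloud pricing.

```python
# Back-of-envelope sketch of how a training-cost figure like the one above can be
# derived. Every number here is an assumption for illustration only: the total
# training compute reported for GPT-3, an assumed sustained per-GPU throughput,
# and a hypothetical cloud price per GPU-hour.

TOTAL_TRAINING_FLOPS = 3.14e23   # total compute reported for the 175B-parameter GPT-3
SUSTAINED_FLOPS_PER_GPU = 28e12  # assumed ~28 TFLOPS sustained on a V100 in mixed precision
PRICE_PER_GPU_HOUR = 1.50        # assumed cloud rental price in US dollars

gpu_seconds = TOTAL_TRAINING_FLOPS / SUSTAINED_FLOPS_PER_GPU
gpu_hours = gpu_seconds / 3600
gpu_years = gpu_hours / (24 * 365)
cost_usd = gpu_hours * PRICE_PER_GPU_HOUR

print(f"GPU-hours:  {gpu_hours:,.0f}")   # roughly 3.1 million GPU-hours
print(f"GPU-years:  {gpu_years:,.0f}")   # roughly 355 years on a single V100
print(f"Cost (USD): {cost_usd:,.0f}")    # roughly $4.7 million, in line with the estimate
```

Under these assumptions the arithmetic lands in the same ballpark as the published estimate, which is the point: even a single training run of a model this size consumes millions of dollars of compute before any experimentation is factored in.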
OpenAI is not the first AI research lab to adopt a commercial model. Facing similar problems, DeepMind accepted a $650-million acquisition proposal from Google in 2014.
