The recent exodus of co-founders and top executives from OpenAI, one of the world’s premier artificial intelligence companies, has sparked speculation — is this just routine leadership churn or a harbinger of deeper turmoil within the AI pioneer’s ranks? Those who said goodbye to OpenAI in May:
- Dr. Ilya Sutskever, OpenAI’s co-founder and chief scientist.
- Jan Leike, co-leader of the superalignment group alongside Dr. Sutskever.
- Evan Morikawa, lead engineer.
- Diane Yoon, vice president of people.
- Chris Clark, head of nonprofit and strategic initiatives.
Other notable departures in 2024:
- Andrej Karpathy (Feb), co-founder and research scientist.
- Daniel Kokotajlo (Feb), safety team member.
- William Saunders (Feb), superalignment group manager.
- Logan Kilpatrick (Mar), senior developer advocate.
- Leopold Aschenbrenner (Apr), superalignment team member.
- Pavel Izmailov (Apr), superalignment team member.
Why it matters: The departures may be a fire alarm, or they may be a chance for the company to reinvigorate itself by shedding dissent. Either way, caution is warranted when considering the adoption of OpenAI products such as the new ChatGPT Edu. OpenAI recently disbanded its superalignment division, the team charged with ensuring AI safety, opting to distribute those critical responsibilities throughout the organization.
Did Elon Musk predict the turmoil in 2018 when he stepped away from OpenAI’s board?
What they’re saying:
“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.” ~ Jan Leike on Twitter, now known as X.
“For years, Sam had made it really difficult for the board to actually do [its] job by withholding information, misrepresenting things that were happening at the company and in some cases outright lying.” ~ Helen Toner on The TED AI Show podcast.
Be sure to catch the example Bilawal Sidhu shares at 18:45.
“Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI.” ~ Daniel Kokotajlo wrote on his LessWrong profile.
The big picture: Innovation involves risk, but that risk should be measured and calculated.
The other side:
“We’re really grateful to Jan for everything he’s done for OpenAI, and… we wanted to explain a bit about how we think about our overall strategy.
First, we have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it…
Second, we have been putting in place the foundations needed for safe deployment of increasingly capable systems. Figuring out how to make a new technology safe for the first time isn’t easy…
Third, the future is going to be harder than the past. We need to keep elevating our safety work to match the stakes of each new model…” ~ Greg Brockman on X.
“Cohesive teams, the right combination of calmness and urgency, and unreasonable commitment are how things get finished. Long-term orientation is in short supply; try not to worry about what people think in the short term, which will get easier over time.” ~ Sam Altman said in a blog post.
“I think that you can just do stuff in the world. You don’t need to wait, you don’t need to get permission. You can — even if you’re totally unknown in the world, with almost no resources — you can still accomplish an amazing amount.” ~ Altman said in Harvard Magazine.
“Given enough eyeballs, all bugs are shallow.” ~ Eric Raymond, author of “The Cathedral and the Bazaar.”
There is enormous value in rolling out a product to gain wide exposure and garner mass feedback.
About Robert
Helping future-proof companies to lead tomorrow’s markets. I tackle complex problems, eliminate roadblocks, and provide a fresh perspective. Like my work? Let me know:
- Give this story a CLAP
- SUBSCRIBE to get my articles in your inbox
- Connect: rrsaum@boundlessliving.org
References
Altman, S. (2023, December 21). What I Wish Someone Had Told Me. Sam Altman. Retrieved May 31, 2024, from https://blog.samaltman.com/what-i-wish-someone-had-told-me
Brockman, G. [@gdb]. (2024, May 18). We’re really grateful to Jan for everything he’s done for OpenAI, and… we wanted to explain a bit about how we think about our overall strategy… [Tweet]. Twitter. https://twitter.com/gdb/status/1791869138132218351
Kokotajlo, D. (2024, April 18). Quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI. LessWrong. https://www.lesswrong.com/users/daniel-kokotajlo
Knight, W. (2024, May 17). OpenAI’s long-term AI risk team has disbanded. Wired. https://www.wired.com/story/openai-superalignment-team-disbanded/
Leike, J. [@janleike]. (2024, May 17). Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity [Tweet]. Twitter. https://twitter.com/janleike/status/1791498183543251017
Merrill, S. (2012, February 23). With many eyeballs, all bugs are shallow. TechCrunch. Retrieved May 31, 2024, from https://techcrunch.com/2012/02/23/with-many-eyeballs-all-bugs-are-shallow/
Metz, R., & Ghaffary, S. (2024, May 17). OpenAI dissolves high-profile safety team after Chief Scientist Sutskever’s exit. Bloomberg. https://www.bloomberg.com/news/articles/2024-05-17/openai-dissolves-key-safety-team-after-chief-scientist-ilya-sutskever-s-exit
Morris, C. (2024, May 17). Tracking the high-profile resignations at OpenAI. Fast Company. https://www.fastcompany.com/91126785/openai-resignations-are-reaching-an-alarming-level-here-are-11-key-people-who-have-left
Pasquini, N. (2024, May 2). Sam Altman’s vision for the future: OpenAI CEO on progress, safety, and policy. Harvard Magazine. Retrieved May 31, 2024, from https://www.harvardmagazine.com/2024/05/open-ai-ceo-sam-altman-harvard
Sidhu, B., & Toner, H. (2024, May 28). What really went down at OpenAI and the future of regulation w/ Helen Toner [Audio podcast episode]. The TED AI Show. Retrieved May 31, 2024, from https://link.chtbl.com/TEDAI