5 Revelations from Jan Leike's Resignation: OpenAI's Internal Struggles
Freshly Resigned Superalignment Head Criticizes OpenAI: Chasing Flashy Products, Neglecting AGI Safety
From left to right: Andrej Karpathy, Jan Leike and Ilya Sutskever
OpenAI Loses Two More Leaders—What Does That Mean for AI Safety?
Ilya Sutskever and Jan Leike left the company on May 15th, hours after the GPT-4o launch. Andrej Karpathy departed in February. All three are focused on responsible AI development.
Jan Leike Speaks Out After Resignation
Jan Leike, the former Head of Superalignment at OpenAI, spoke publicly after resigning.
Jan's statement is key to understanding both why Altman was briefly ousted last November and why Ilya and Jan have now left.
Reason for Resignation
Jan Leike had long disagreed with OpenAI's leadership on the company's core priorities. This disagreement reached a tipping point, making his resignation one of the hardest decisions of his life.
OpenAI Priorities
Leike believes OpenAI must figure out how to control AI systems smarter than humans. He sees it as a heavy responsibility for humanity.
In his view, urgent work is still needed on preparing for the next generations of models: making them safe, monitoring them, preparedness, adversarial robustness, alignment, confidentiality, societal impact, and related topics.
A Hopeful Message
Though he resigned, Leike hopes his colleagues at OpenAI will transform the company culture. He urges them to take their work seriously and understand the implications of general AI. The world relies on them, and so does he.
My Opinion
I believe Ilya and Jan resigned for the same reason: lack of resources. Their task was to research AGI risks.
With ChatGPT's success, much of the company's resources shifted to supporting the product and its growing user base, pulling focus away from the principles of safer, more responsible development.
Maybe Sam Altman wants to make money quickly. After all, without revenue, nothing else gets built. Or perhaps GPT-5 is truly nearing AGI. Time will tell.
Hi! My name is Meng. Thank you for reading and engaging with this piece. If you’re new here, make sure to follow me. (Tip: I’ll engage with and support your work. 🤫 Thank me later.)
Enjoy
Ilya Announces Departure, Superalignment Head Jan Resigns: OpenAI Splits
Today, OpenAI co-founder and chief scientist Ilya Sutskever announced his departure on Twitter: "After nearly 10 years at OpenAI, I've decided to leave. OpenAI's progress is amazing. I think it will keep making safe, helpful AGI with leaders like Sam, Greg, Mira, and Jakub."
Google Gemini 1.5 Flash: 5x Faster than GPT-4 for Olympiad Math
Google announced that Gemini 1.5 is a generational leap over Claude 3.0 and GPT-4 Turbo. Google first launched the multi-modal Gemini 1.5 in February; since then it has improved performance and speed through engineering and infrastructure optimizations and a Mixture-of-Experts (MoE) architecture, offering longer context, stronger reasoning, and better handling of multi-modal content.
GPT-5 and AGI: The Next Steps for AI
On May 16th, OpenAI CEO Sam Altman was interviewed by Logan Bartlett, a managing director at Redpoint, the well-known Silicon Valley venture capital firm. OpenAI announced GPT-4o this week, a model that can understand text, video, and audio.