OpenAI on a “Closed” Drama: Rethinking OpenAI’s Closed Model Approach

AI researcher Jiyun Hyo shares his personal analysis of OpenAI's drama, examining how the company's closed-model approach and the emergence of open-source alternatives will shape the future of AI

Jiyun Hyo
Contributor

December 3, 2023

When ChatGPT hit the world in late 2022, we were all shocked, and more than shocked: we were mesmerized by what it could do, in awe of what was suddenly possible. 2023 was like a honeymoon phase for all of us with ChatGPT. However, OpenAI, faced with mounting ethical and legal concerns, began adding guardrails to ChatGPT, which started to stir up discontent among its users.

Yet even with these signs of potential problems, OpenAI and Sam Altman still commanded a kind of mythic presence in the AI world. OpenAI, OpenAI, OpenAI… was all we heard across the media even as these problems began to surface. No one openly questioned OpenAI and its endeavors, even with signs suggesting they should. That is, until the drama that shocked the world, when the OpenAI board fired Sam Altman on November 17th, 2023.

This was the awakening moment for me and for many others around the world who had previously been blindly following OpenAI without questioning its authority and control.

Opening Our Eyes to the Closed Nature of OpenAI Dealings

I remember walking to the gym in NYC that very day, reading the news on my phone. I couldn’t believe it. I was shocked that Sam Altman, already an iconic figure in AI who also held a board seat at OpenAI, could be fired as CEO and removed from the board so easily by the rest of the directors.

I previously had not paid much attention to the structure of the company. Yet, according to the Startup Snapshot survey “How Open is OpenAI”, I wasn’t the only one. Tech stakeholders reported that they were only somewhat aware there was even a possibility the board could fire Altman, ranking their awareness at 5 on a 1-10 scale.

As with anything that shocks you to that extent, I started to question all the rosy views I had of OpenAI. I started to dig deeper into how OpenAI got started, how it was structured, and the chain of events that might have led to such a shocking turn. Once again, I wasn’t the only one: many startups, developers and researchers began looking at OpenAI through a more objective lens than before, questioning how open the company truly was.

I had the chance to discuss the issue with many founders from my Antler NYC6 cohort, and here are some common questions we all started asking: Is it safe to entrust OpenAI with all our data? How do we know OpenAI won’t introduce new guardrails that could ruin our products? How do we know they won’t use the data we send them to copy our products? Why do we have to rely on OpenAI to build our products at all?

OpenAI adopts a closed-model approach

All the questions we were asking above seemed to converge on one theme: OpenAI has way too much control over its AI models. The notion of having your product completely rely on a few select individuals started to make us feel uncomfortable. All our questions and concerns naturally led our attention to potential alternatives.

There are certainly many alternatives to OpenAI’s models. Meta is developing its Llama models, Google is working on Bard, Anthropic on Claude, Mistral on its own models, and so on (Seon Health is working on TherapyGPT). What’s interesting is that each of these companies has its own philosophy on how to approach AI models. Some have chosen a more “closed” approach, whereas others have adopted a more “open” one.

OpenAI hosts what we call “closed models”: the weights of its language models are not released to the public, which can only access them through an API. As a result, OpenAI makes its own judgment about which guardrails are best for humanity and has been constantly updating its models with new guardrails to keep them from delivering “harmful” content. Google and Anthropic have taken a similar approach with their models.
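In practice, “closed” means your product is just a network call to infrastructure someone else controls. Here is a minimal sketch of that workflow using the OpenAI Python SDK; the model name and prompt are illustrative, not anything from the survey or this article:

```python
# Closed-model workflow: you never see the weights, you only send
# requests to an API endpoint that OpenAI operates and can change.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",  # the weights behind this name live only on OpenAI's servers
    messages=[{"role": "user", "content": "Explain closed vs. open AI models in one sentence."}],
)
print(response.choices[0].message.content)
```

If OpenAI changes the guardrails or retires the model behind that name, every product built on this call changes with it, and there is nothing on the caller’s side to pin.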

On the other hand, Mistral and Meta took a different approach and made their models open source: anyone can access the weights. Open models have a clear advantage over closed models. Because the models are open, they can improve faster, and their guardrails are more likely to emerge from public consensus rather than from the decisions of a small, private group of people. The ousting of Sam Altman and the drama that ensued have only fueled the movement toward open-source models.
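By contrast, an open-weights model is just files you download and run yourself. A minimal sketch with the Hugging Face transformers library, using Mistral’s openly released 7B model (illustrative only; running it locally requires substantial RAM or a GPU):

```python
# Open-weights workflow: the model files are downloaded and run on
# hardware you control, so no third party can alter them under you.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # weights openly published on Hugging Face

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Open models differ from closed models because"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point is not the ten lines of code but who holds the weights: once they are on your disk, no board vote or silent API update can change how your product behaves.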

It is quite ironic that “Open”AI has adopted a “closed” model approach while other companies are tackling the problem in a truly “open” way. It is still too early to pass final judgment on what will happen. What is certain, though, is that the OpenAI drama has opened everyone’s eyes and made everyone start thinking about alternatives. You might say the religious faith in OpenAI has been broken.

As a result, people now perceive a higher risk in OpenAI’s products and have opened up to potential alternatives. Only time will tell what happens next, but I do believe the drama will ultimately benefit humanity. We humans have always benefited from different perspectives, especially in periods of exploration. LLMs are still very early in their development, and it will be interesting to see which approach comes out the winner in the end.

Jiyun Hyo

Co-Founder and CEO at VizualMind

Bio

Jiyun is an entrepreneur, AI researcher, and investor. He is the founder of the Robotics Lab at Duke University.

