OpenAI’s Move on Open Models Driven by China’s DeepSeek, Confirms Sam Altman
The OpenAI CEO admits that pressure from DeepSeek led to more Open Models.
Earlier this month, OpenAI released its first open-weight models since GPT-2, promising strong real-world performance at low cost. Ever since China's DeepSeek shook up the AI industry earlier this year with its open-source releases, it has been clear why OpenAI didn't want to be on what CEO Sam Altman called the "wrong side of history."
Innovative Pressures
For the first time since the launch of its GPT-OSS models, Altman has openly acknowledged the "China factor" behind the move. He admitted as much in an interview with CNBC, saying that competition from Chinese open-source models like DeepSeek played a major role in OpenAI's decision.
"It was clear that if we didn't do it, the world was gonna head to be mostly built on Chinese open source models," Altman told the publication.
"That was a factor in our decision, for sure. Wasn't the only one, but that loomed large," he added.
Exercising Control
The ChatGPT maker also weighed in on the US policy of restricting exports of powerful semiconductors to China. "My instinct is that doesn't work," he said.
"You can export-control one thing, but maybe not the right thing… maybe people build fabs or find other workarounds," he said.
DeepSeek is not alone, however; many other Chinese companies have gained prominence in tech circles thanks to their open-weight models. Alibaba's Qwen, for instance, has been releasing its latest foundation models under the Apache 2.0 license.
Meanwhile, Meta has long shipped its Llama models under community licenses while also integrating them directly into its social media platforms.

