OpenAI Suspects DeepSeek Is Using Its Models to Distill R1
OpenAI accuses DeepSeek of using its models to distill R1, sparking discussions on AI model distillation and intellectual property in the competitive AI industry.
"AI Disruption" publication New Year 30% discount link.
OpenAI has stated that it has evidence suggesting the Chinese AI company DeepSeek used its models to train a competitor.
Model distillation, in which a smaller "student" model is trained to reproduce the outputs of a larger "teacher" model, is a common industry technique. If DeepSeek used OpenAI's models this way to build a competing product, however, it would violate OpenAI's terms of service.
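For readers unfamiliar with the term, here is a minimal sketch of classic knowledge distillation with soft targets. The toy models and data below are hypothetical stand-ins, and distilling a model through an API would in practice mean fine-tuning on the teacher's generated text rather than its logits:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy networks standing in for a large teacher and a small student.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the distributions so more of the teacher's knowledge transfers

x = torch.randn(32, 16)  # placeholder batch of inputs

with torch.no_grad():
    teacher_logits = teacher(x)  # teacher predictions; no gradients needed

student_logits = student(x)

# Standard distillation loss: KL divergence between temperature-softened distributions.
loss = F.kl_div(
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
) * (temperature ** 2)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```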
OpenAI has declined to comment further or provide details on the evidence.
Interestingly, OpenAI CEO Sam Altman had just praised DeepSeek's R1 model, calling it strong in performance and very affordable.
The news has sparked considerable discussion on social media, and many commenters see nothing unusual in the practice. OpenAI itself has trained its models on data from X and plenty of other websites and companies.
Do you remember when Musk shut down free API access because the platform's data was being scraped?
One might wonder whether OpenAI will ever release the evidence it claims to have. My guess is that it won't, because the accusation looks more like an attempt to manage market expectations and save face.
Losers will do anything to avoid looking bad!
Yes, OpenAI has never stolen training data from the internet or other users. What a joke.
Ironically, OpenAI may become even more closed off as a result. It's unlikely they will ever publicly share o3's reasoning chain.