Sam Altman’s OpenAI Launches New o1 Version with Enhanced Reasoning

In its 12-day-long event, OpenAI plans to introduce ChatGPT Pro, which will be available at a $200 monthly subscription. The $200 subscription will give users unlimited access to OpenAI o1, GPT-4o and Advanced Voice mode.


OpenAI, led by Sam Altman, launched the full version of the o1 model, with enhanced reasoning and more accurate responses, on Thursday. The o1 model replaces the o1-preview variant and is now available on ChatGPT Plus. The new version was unveiled during OpenAI’s “12 days of shipmas” event, which kicked off on December 5.


“OpenAI o1 is a new series of AI models designed to spend more time thinking before they respond,” said the company. 

The latest model is faster and gives more concise and accurate responses than its predecessor. Additionally, it can “reason” about images. At the same event, the company also announced the ChatGPT Grant Program, which awards 10 ChatGPT Pro grants to medical researchers at top educational institutions.

The AI company plans to add options supporting web browsing and file uploads, but it hasn’t shared a specific timeline for when these features will arrive.



OpenAI o1 has a higher chance of deceiving humans

While the new version has its advantages, OpenAI’s o1 is also more capable of duping humans. According to red-team research published on Thursday, AI safety testers found that the latest version deceives human beings at a higher rate than GPT-4o and other AI models from Meta, Anthropic and Google. As per the research, the latest version at times pursued goals contrary to users’ instructions.

“While we find it exciting that reasoning can significantly improve the enforcement of our safety policies, we are mindful that these new capabilities could form the basis for dangerous applications,” said the red team in its research paper. 

The study highlighted that OpenAI should test its AI models before launching them.

“In our suite, o1 showed the most concerning instances of scheming but does not reveal its internal reasoning to the user and remains the most consistently deceptive after having taken scheming actions,” the research paper stated. 
