OpenAI has officially unveiled o1-pro, a more powerful version of its o1 “reasoning” AI model. The new model is now accessible through the OpenAI developer API, marking a notable expansion of the company’s reasoning-focused lineup.
According to OpenAI, the o1-pro model uses more computing resources than its predecessor, the o1, an upgrade designed to deliver consistently better responses, especially on complex problems. For now, access to o1-pro is limited to developers who have spent at least $5 on OpenAI’s API services, effectively restricting the initial rollout to an established user base.
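For developers who do have access, calling the model looks much like any other request to the developer API. Below is a minimal sketch using the official `openai` Python SDK’s Responses API; the model identifier `o1-pro` and the example prompt are assumptions for illustration, and the exact parameters may differ from OpenAI’s published documentation.

```python
# Minimal sketch: calling o1-pro through the OpenAI developer API.
# Assumes the openai Python SDK is installed and OPENAI_API_KEY is set;
# the model name "o1-pro" and the prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o1-pro",
    input="Prove that the square root of 2 is irrational, step by step.",
)

print(response.output_text)
```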
The pricing for the o1-pro model is notably high, reflecting the extra compute behind it. OpenAI has set the cost at $150 per million tokens (approximately 750,000 words) fed into the model and $600 per million tokens it generates. That is significantly steeper than OpenAI’s previous models: twice the input cost of GPT-4.5 and ten times the price of the standard o1.
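To put those figures in perspective, the short sketch below estimates the bill for a single request at the stated rates of $150 per million input tokens and $600 per million output tokens; the token counts used in the example are made-up numbers for illustration only.

```python
# Rough cost estimate at o1-pro's published API rates (illustrative only).
INPUT_RATE = 150.0 / 1_000_000   # dollars per input token  ($150 per 1M tokens)
OUTPUT_RATE = 600.0 / 1_000_000  # dollars per output token ($600 per 1M tokens)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of one o1-pro API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical request: a 10,000-token prompt with a 2,000-token answer.
print(f"${estimate_cost(10_000, 2_000):.2f}")  # -> $2.70
```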
OpenAI is optimistic that the improved performance of o1-pro will persuade developers to pay for the premium service. A spokesperson for OpenAI stated, “o1-pro in the API is a version of o1 that uses more computing to think harder and provide even better answers to the hardest problems.” This reflects the company’s commitment to enhancing the developer experience by offering more reliable responses through its API.
Despite the promising features of o1-pro, early impressions have been mixed. The model, which has been available to ChatGPT Pro subscribers since December, faced criticism for its performance on certain tasks, such as solving Sudoku puzzles. Users reported that it also struggled with simple optical illusion jokes, raising questions about its effectiveness in real-world applications.
As OpenAI continues to innovate with the o1-pro AI model, the balance between advanced technology and user experience will be crucial. The high pricing and initial feedback suggest that while the model aims to push the boundaries of AI reasoning, its practical performance will ultimately determine its success in the developer community.