Unlocking the Power of ChatGPT with OpenAI’s New API
Describing ChatGPT as merely a hit would be a significant understatement. This free text-generating AI, developed by San Francisco-based startup OpenAI, has attracted more than 100 million monthly active users, generated considerable media attention, inspired numerous social media memes, and been used to write hundreds of e-books for Amazon’s Kindle store. It has even been credited with co-authoring scientific papers.
Even as a capped-profit business, OpenAI needed to find a way to monetize ChatGPT to satisfy investors. The company launched ChatGPT Plus, a premium service, in February, and recently introduced an API that allows businesses to incorporate ChatGPT technology into their apps, websites, products, and services.
According to Greg Brockman, OpenAI’s president and chairman, an API was always part of the plan. The ChatGPT API is powered by the same AI model behind OpenAI’s ChatGPT, called “gpt-3.5-turbo,” which is optimized for greater responsiveness. It’s also priced at $0.002 per 1,000 tokens, or about 750 words, and can be used for a variety of experiences beyond just chat applications.
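That pricing works out to fractions of a cent per request for typical chat turns. A minimal sketch of the arithmetic, using the launch price quoted above and the rough 750-words-per-1,000-tokens ratio (an approximation, not an exact tokenizer count):

```python
# Rough cost estimate for gpt-3.5-turbo at the launch price of
# $0.002 per 1,000 tokens (roughly 750 words).
PRICE_PER_1K_TOKENS = 0.002

def estimated_cost(tokens: int) -> float:
    """Return the approximate dollar cost of processing `tokens` tokens."""
    return tokens / 1000 * PRICE_PER_1K_TOKENS

def words_to_tokens(words: int) -> int:
    """Very rough conversion: ~750 words per 1,000 tokens."""
    return round(words * 1000 / 750)

# Example: a 7,500-word document is roughly 10,000 tokens,
# or about two cents at the launch price.
doc_tokens = words_to_tokens(7500)
doc_cost = estimated_cost(doc_tokens)
```

Actual bills depend on the real tokenizer output for a given text, so this is only a back-of-the-envelope estimate.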
Early adopters of the ChatGPT API include Snapchat, Quizlet, Instacart, and Shopify. While the motivation behind developing gpt-3.5-turbo may have been to reduce ChatGPT’s compute costs, Brockman claims that it also offers other improvements. The API has the potential to power AI-powered tutors, allowing for a more interactive learning experience for students. Brockman believes that the API’s accessibility and usability will make it an invaluable tool for a wide range of applications.
The ChatGPT API is being used by various companies to create personalized assistants and chatbots for shopping recommendations and virtual tutoring. However, ChatGPT was initially trained on biased data and is susceptible to prompt-based attacks that get it to perform tasks outside its original objectives. To help guard against such behavior, OpenAI has introduced a new approach called Chat Markup Language (ChatML), which feeds text to the ChatGPT API as a sequence of messages together with metadata.
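In practice, that sequence-of-messages idea surfaces in the API as a list of role-tagged turns rather than one raw text blob, which makes it harder for user input to masquerade as system instructions. A minimal sketch, assuming the `openai` Python package and the model name from OpenAI's launch announcement; the helper function and prompt text are purely illustrative:

```python
# Sketch of the ChatML-style message sequence the ChatGPT API consumes.
# Each turn carries a role ("system", "user", or "assistant") plus content,
# so the model can distinguish developer instructions from user input.

def build_messages(system_prompt: str, user_turns: list) -> list:
    """Assemble a role-tagged message sequence for one conversation."""
    messages = [{"role": "system", "content": system_prompt}]
    for turn in user_turns:
        messages.append({"role": "user", "content": turn})
    return messages

messages = build_messages(
    "You are a helpful shopping assistant.",
    ["Suggest a vegetarian dinner for four."],
)

# Sending the conversation (requires an API key; shown for illustration only):
# import openai
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=messages,
# )
# print(response["choices"][0]["message"]["content"])
```

Keeping the system prompt in its own message, rather than concatenating it with user text, is what gives the structured format its resistance to prompt injection.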
Additionally, more frequent model updates and the introduction of dedicated capacity plans will provide customers with deeper control over system performance, enabling them to pay for an allocation of compute infrastructure to run an OpenAI model.
Dedicated capacity also gives customers the ability to enable features such as longer context limits, which might lead models like gpt-3.5-turbo to hallucinate less. While Brockman alluded to a general release of longer context windows in the future, it is not expected anytime soon.