OpenAI Announces Improved Models And APIs
Written by Kay Ewbank   
Monday, 13 November 2023

OpenAI has announced new and improved models and APIs at its first Developer Day Conference. The company also announced it is reducing pricing for parts of its platform.

The improved models start with a new GPT-4 Turbo model that the company says is more capable and cheaper, and supports a 128K context window.


GPT-4 Turbo also improves function calling, which lets you describe functions of your app or external APIs to the model and have it intelligently output a JSON object containing the arguments to call those functions. GPT-4 Turbo also has improved instruction following and a new JSON mode, which ensures the model will respond with valid JSON.
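As an illustration, here is a minimal sketch of function calling with the GPT-4 Turbo preview using OpenAI's Python library. The get_weather function and its schema are hypothetical, and the model name and request options reflect the preview available at launch and may change.

```python
# A minimal sketch of function calling with GPT-4 Turbo (openai Python library, 1.x).
# The get_weather function and its schema are hypothetical examples.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. London"}
            },
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",   # GPT-4 Turbo preview model name at launch
    messages=[{"role": "user", "content": "What's the weather like in London?"}],
    tools=tools,
)

# If the model decides to call the function, the arguments arrive as a JSON string.
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    args = json.loads(tool_calls[0].function.arguments)
    print(tool_calls[0].function.name, args)
```

JSON mode is requested separately, by passing response_format={"type": "json_object"} to the same call; the prompt itself also has to ask for JSON output.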

Alongside the new GPT-4 release, OpenAI also announced a new Assistants API with new tools. The Assistants API is described as a first step towards helping developers build agent-like experiences within their own applications. An assistant is a purpose-built AI that has specific instructions, makes use of extra knowledge, and can call models and tools to perform tasks. The new Assistants API includes tools such as Code Interpreter and Retrieval. Code Interpreter writes and runs Python code in a sandboxed execution environment, and can generate graphs and charts and process files with diverse data and formatting. Retrieval augments the assistant with knowledge from outside OpenAI's models, such as proprietary domain data, product information or documents provided by your users.

The Assistants API beta can be tested in OpenAI's Assistants playground.
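To give a feel for the shape of the beta API, the sketch below creates an assistant with the Code Interpreter tool and runs it on a thread using OpenAI's Python library. Since the API is in beta, the method names may change, and the assistant's name, instructions and message here are purely illustrative.

```python
# A sketch of the beta Assistants API (openai Python library, 1.x); details may change.
from openai import OpenAI

client = OpenAI()

# Create a purpose-built assistant with instructions and the Code Interpreter tool.
assistant = client.beta.assistants.create(
    name="Data helper",
    instructions="You are a data analyst. Run Python code to answer questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# Conversations happen on threads; add a user message, then run the assistant on it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Plot a histogram of the numbers 1 to 100.",
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
print(run.status)  # poll runs.retrieve(...) until the run completes
```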

Elsewhere in the API, OpenAI says GPT-4 Turbo can now accept images as inputs in the Chat Completions API, meaning it can be used for tasks such as generating captions, analyzing images in detail, and reading documents with figures. The company plans to add vision support to the main GPT-4 Turbo model as part of its stable release.
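As an illustration of image input, the sketch below sends an image URL alongside a text prompt to the vision-enabled preview model. The image URL is a placeholder, and the model name reflects the preview available at launch.

```python
# A sketch of image input via the Chat Completions API; the image URL is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",   # vision-enabled GPT-4 Turbo preview at launch
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Write a short caption for this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    max_tokens=100,
)
print(response.choices[0].message.content)
```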

Support has also been added for developers to integrate DALL·E 3 directly into their apps. DALL·E 3 is an OpenAI text-to-image model that can generate digital images from natural language descriptions, called prompts.
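A minimal sketch of generating an image with DALL·E 3 through the Images API might look like the following; the prompt and size are arbitrary examples.

```python
# A sketch of calling DALL·E 3 through the Images API; prompt and size are examples.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-3",
    prompt="A watercolour painting of a robot reading a newspaper",
    size="1024x1024",
    n=1,
)
print(result.data[0].url)  # URL of the generated image
```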

Text-to-speech support has also been added to the API. The new TTS model offers six preset voices to choose from and two model variants, one optimised for real-time use and one for quality.
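The sketch below generates speech from text and saves it to an MP3 file; the voice, input text and output filename are arbitrary examples, with tts-1 being the latency-optimised variant and tts-1-hd the quality-optimised one.

```python
# A sketch of the text-to-speech endpoint; voice, text and filename are examples.
from openai import OpenAI

client = OpenAI()

speech = client.audio.speech.create(
    model="tts-1",        # latency-optimised variant; tts-1-hd targets quality
    voice="alloy",        # one of the six preset voices
    input="OpenAI has announced new and improved models and APIs.",
)
speech.stream_to_file("announcement.mp3")
```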


More Information

OpenAI

Related Articles

AI Goes Open Source To The Tune Of $1 Billion 

OpenAI Recruiting Fellows 

Elon Musk Leaves OpenAI Over Conflict of Interest

OpenAI Five Dota 2 Bots Beat Top Human Players

OpenAI Bot Triumphant Playing Dota 2

Open AI And Microsoft Exciting Times For AI 

OpenAI Universe - New Way of Training AIs




Last Updated ( Monday, 13 November 2023 )