
Alex Denne
Growth @ Ƶ | Introduction to Contracts @ UCL Faculty of Laws | Serial Founder

GPT-4 Turbo 128k + OpenAI's Dev Day + GPT-4 Vision and GPT Store - Using AI Podcast - Episode 11

16th November 2023
3 min

We're back with series 2 of the Using AI Podcast. I'm joined once more by ML Research Scientist Alex Pap and AI Startup Founder Nitish Mutha. We're only a few weeks away from the anniversary of ChatGPT's release, and OpenAI have just hosted their Dev Day, with some big announcements.

Listen to Using AI on , and watch the episode .



So what did you miss from OpenAI's Dev Day? Here's a look at what we delve into in the episode, with some brief, illustrative code sketches after the list:

1. **The New GPT-4 Turbo:** This upgraded model is more capable than its predecessor while offering a larger 128K context window and lower prices. Its knowledge cutoff has been brought forward to April 2023, so responses reflect far more recent information. The model, currently in preview, will be made available as a stable production version shortly (a minimal call is sketched after this list).

2. **Function Calling Improvements:** With parallel function calling, a single message can now trigger multiple function calls at once, where previously each action required a separate round trip. GPT-4 Turbo is also more accurate at returning the right function parameters (see the sketch below).

3. **Advanced Instruction Following and JSON Mode:** GPT-4 Turbo is better than earlier models at following instructions precisely. The new JSON mode, particularly useful for developers working with the Chat Completions API, constrains the model to respond with valid JSON.

4. **Reproducible Outputs and Log Probabilities:** The new seed parameter gives developers greater control over model behavior by making outputs consistent and reproducible between runs. Log probabilities for the most likely output tokens, useful for building features like autocomplete in a search experience, are slated for release soon.

5. **Revamped GPT-3.5 Turbo:** An updated version of GPT-3.5 Turbo was launched with a 16K context window by default; improvements include JSON mode, parallel function calling and more.

6. **Assistants API, Retrieval, and Code Interpreter:** OpenAI released the Assistants API, designed to make the process of developing AI-assisted applications more straightforward. It introduces a series of tools that lighten developers' workload, enabling the creation of high-quality AI apps, from a natural-language data analysis app to a voice-controlled DJ (a minimal sketch follows the list).

7. **GPT-4 Turbo with vision:** GPT-4 Turbo can now accept images as inputs, opening up new possibilities like generating captions, detailed image analysis, and more.

8. **DALL·E 3 and TTS:** OpenAI's text-to-speech (TTS) API lets developers turn text into natural, human-like speech, while DALL·E 3, available through the Images API, can programmatically generate images and designs.

9. **Model Customization and Lower Prices:** OpenAI announced programs for GPT-4 fine-tuning and the creation of Custom Models, alongside reduced prices across the platform.
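Here are a few quick sketches of what some of these announcements look like in code, using OpenAI's `openai` Python SDK (v1.x). Treat them as illustrative sketches, not production code: the prompts are made up, and the preview model names (`gpt-4-1106-preview`, `gpt-4-vision-preview`) are the identifiers announced at Dev Day. First, a minimal GPT-4 Turbo call:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# Minimal GPT-4 Turbo (preview) call - same Chat Completions API, new model name.
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise OpenAI's Dev Day in one sentence."},
    ],
)
print(response.choices[0].message.content)
```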
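Parallel function calling (point 2) means a single model response can contain several tool calls at once. A sketch, where `get_weather` is a hypothetical function defined purely for illustration:

```python
from openai import OpenAI

client = OpenAI()

# Describe the callable tools to the model (get_weather is hypothetical).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "What's the weather in London and Paris?"}],
    tools=tools,
    tool_choice="auto",
)

# With parallel function calling, one response may contain several tool calls.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```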
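JSON mode (point 3) is switched on with the `response_format` parameter; note that the prompt itself still needs to mention JSON. A sketch:

```python
import json

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    response_format={"type": "json_object"},  # constrain the output to valid JSON
    messages=[
        {"role": "system", "content": "You are an assistant that replies in JSON."},
        {"role": "user", "content": "List three Dev Day announcements under the key 'items'."},
    ],
)

# Because of JSON mode, this parse should not fail on malformed output.
data = json.loads(response.choices[0].message.content)
print(data)
```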
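Reproducible outputs (point 4) come from the new `seed` parameter: with the same seed and identical request parameters, outputs should be largely deterministic, and the response's `system_fingerprint` indicates which backend configuration served the request. A sketch:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    seed=42,         # same seed + same parameters -> (mostly) reproducible output
    temperature=0,
    messages=[{"role": "user", "content": "Suggest a name for a legal AI assistant."}],
)
print(response.system_fingerprint)
print(response.choices[0].message.content)
```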
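The Assistants API (point 6) manages threads, tool use and conversation state for you. A minimal sketch of a Code Interpreter-backed assistant, assuming the beta endpoints as announced at Dev Day (the polling loop is deliberately simplified):

```python
import time

from openai import OpenAI

client = OpenAI()

# Create an assistant with the Code Interpreter tool enabled.
assistant = client.beta.assistants.create(
    name="Data analyst",
    instructions="Answer questions by analysing the data the user provides.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# Conversations live in threads; runs execute the assistant against a thread.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What is the average of 3, 7 and 20?"
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# Poll until the run finishes, then read the assistant's latest reply.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```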
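GPT-4 Turbo with vision (point 7) accepts images alongside text in the same Chat Completions call; the image URL below is just a placeholder:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[
        {
            "role": "user",
            # Content can mix text parts and image parts in one message.
            "content": [
                {"type": "text", "text": "Describe this contract page in one paragraph."},
                {"type": "image_url", "image_url": {"url": "https://example.com/contract-page.png"}},
            ],
        }
    ],
    max_tokens=300,
)
print(response.choices[0].message.content)
```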
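Finally, DALL·E 3 and text-to-speech (point 8) are each a single call; the `stream_to_file` helper below follows the pattern in OpenAI's launch docs, but again this is a sketch rather than gospel:

```python
from openai import OpenAI

client = OpenAI()

# Generate an image with DALL·E 3 via the Images API.
image = client.images.generate(
    model="dall-e-3",
    prompt="A minimalist illustration of a robot reviewing a contract",
    size="1024x1024",
)
print(image.data[0].url)

# Convert text into natural-sounding speech with the TTS API and save it.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Welcome back to the Using AI podcast.",
)
speech.stream_to_file("welcome.mp3")
```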

For more on this, refer to OpenAI's announcement on their DevDay at

To delve deeper into the GPT series, read OpenAI's .

To learn more about OpenAI's "Copyright Shield" and Whisper large-v3 model, visit the .

Check out the Using AI podcast on , and watch the episode .

Interested in joining our team? Explore career opportunities with us and be a part of the future of Legal AI.
