GPT-4o Mini: OpenAI’s Latest Artificial Intelligence (AI) Model
OpenAI has introduced GPT-4o Mini, an intelligent, affordable, “small” AI model. According to OpenAI, it is cheaper and smarter than GPT-3.5 Turbo while working just as fast. Unlike previous releases, in which OpenAI focused on large language models (LLMs), the company ventured into small language models with this one.
AI News: GPT-4o Mini Key Takeaways
- Regarding textual intelligence, GPT-4o Mini outperforms GPT-3.5 Turbo, scoring 82% on MMLU versus 69.8%. The model also excels in coding, math, and reasoning, outperforming competitors and previous ChatGPT versions.
- Improved multilingual understanding across a wide range of non-English languages compared with GPT-3.5 Turbo.
- AI also becomes more accessible, with GPT-4o Mini offered through OpenAI’s API at competitive pricing: 60% cheaper than GPT-3.5 Turbo, at $0.15 per 1 million input tokens and $0.60 per 1 million output tokens (see the cost sketch after this list).
- It supports text and vision, with audio and video inputs and outputs expected to be supported soon.
- Like GPT-4o, it supports up to 16K output tokens per request and has knowledge up to October 2023. Its large 128K-token context window enables near-real-time responses.
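To put the pricing into perspective, a rough cost estimate can be computed directly from token counts. The sketch below is illustrative only and assumes the published rates of $0.15 per 1 million input tokens and $0.60 per 1 million output tokens; actual billing depends on OpenAI’s current price list.

```python
# Rough cost estimate for GPT-4o Mini API usage (illustrative only).
# Assumes the published rates of $0.15 / 1M input tokens and $0.60 / 1M output tokens.

INPUT_RATE_PER_M = 0.15   # USD per 1,000,000 input tokens
OUTPUT_RATE_PER_M = 0.60  # USD per 1,000,000 output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in USD."""
    return (input_tokens / 1_000_000) * INPUT_RATE_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_RATE_PER_M

# Example: a 10,000-token prompt with a 1,000-token completion
print(f"${estimate_cost(10_000, 1_000):.4f}")  # ~$0.0021
```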
GPT-4o Mini Features
Based on our research, GPT-4o Mini lets users develop and build generative AI apps faster thanks to its huge context window, which enables apps to generate near-real-time responses.
As noted, the multimodal model currently supports text and images, and OpenAI plans to add other input options, such as audio and video, in the future.
It is also trained on data up to October 2023 and packs a massive 128K-token input context window, with a 16K output token limit per request.
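For readers who want to try the model, a minimal request might look like the sketch below. It assumes the official OpenAI Python SDK (the `openai` package), an `OPENAI_API_KEY` environment variable, and the `gpt-4o-mini` model id; the prompt text and image URL are placeholders.

```python
# Minimal sketch of a text + image request to GPT-4o Mini via the OpenAI Python SDK.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    max_tokens=1000,  # the model allows up to ~16K output tokens per request
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                # Image input is supported alongside text (placeholder URL below).
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```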
Because it uses the same tokenizer as GPT-4o, GPT-4o Mini also produces better responses to non-English prompts.
OpenAI also claims that GPT-4o Mini outperforms comparable small models in areas such as coding, math, multimodal reasoning, and textual intelligence, as the benchmarks below show.
- Reasoning: OpenAI says GPT-4o Mini excels at reasoning tasks involving both text and vision, scoring 82% on the Massive Multitask Language Understanding (MMLU) benchmark versus Gemini Flash (77.9%) and Claude Haiku (73.8%).
- Multimodal reasoning: On the Massive Multi-discipline Multimodal Understanding (MMMU) benchmark, GPT-4o Mini also performs strongly, scoring 59.4% versus Gemini Flash (56.1%) and Claude Haiku (50.2%).
- Coding: On HumanEval, it scored 87.2% versus Gemini Flash (71.5%) and Claude Haiku (75.9%). HumanEval measures coding proficiency by checking the functional correctness of programs synthesized from docstrings (a simplified example of the task format follows this list).
- Math: On the Multilingual Grade School Math (MGSM) benchmark, GPT-4o Mini scored 87% on math reasoning with grade-school-level problems, higher than Gemini Flash (75.5%) and Claude Haiku (71.7%).
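To give a sense of what HumanEval measures, the snippet below is a simplified, made-up illustration of the task format: the model receives a function signature and docstring and must produce a body that passes hidden unit tests. The function name and test are illustrative, not taken from the actual benchmark.

```python
# Simplified, made-up illustration of a HumanEval-style task (not from the real benchmark).
# The model is given the signature and docstring; the body is what it must generate.

def running_max(numbers: list[int]) -> list[int]:
    """Return a list where each element is the maximum of all inputs seen so far."""
    # A candidate completion generated by the model:
    result, current = [], float("-inf")
    for n in numbers:
        current = max(current, n)
        result.append(current)
    return result

# Functional correctness is then checked with unit tests like this one:
assert running_max([1, 3, 2, 5, 4]) == [1, 3, 3, 5, 5]
```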
Across these benchmarks, GPT-4o Mini focuses on delivering higher-quality, faster responses than comparable models. Stay tuned for the latest AI news and updates!