Mistral AI's Open-Source Mixtral 8x7B Outperforms GPT-3.5

Mistral AI recently released Mixtral 8x7B, a sparse mixture of experts (SMoE) large language model (LLM). The model contains 46.7B total parameters, but performs inference at the same speed and cost as models one-third that size. On several LLM benchmarks, it outperformed both Llama 2 70B and GPT-3.5, the model powering ChatGPT.

Mixtral 8x7B has a context length of 32k tokens and handles Spanish, French, Italian, German, and English. Besides the base Mixtral 8x7B model, Mistral AI also released Mixtral 8x7B Instruct, which is fine-tuned for instruction-following using direct preference optimization (DPO). Both models' weights are released under the Apache 2.0 license. Mistral AI also added support for the model to the vLLM open-source project. According to Mistral AI:

Mistral AI continues its mission to deliver the best open models to the developer community. Moving forward in AI requires taking new technological turns beyond reusing well-known architectures and training paradigms. Most importantly, it requires making the community benefit from original models to foster new inventions and usages.

Mixture of Experts (MoE) models are often used in LLMs as a way to increase model size while keeping training and inference cost low. The idea dates back to 1991, and Google applied it to Transformer-based LLMs in 2021. In 2022, InfoQ covered Google's image-text MoE model LIMoE, which outperformed CLIP. Later that year, InfoQ also covered Meta's NLLB-200 MoE translation model, which can translate between any of over 200 languages.

The key idea of MoE models is to replace the feed-forward layers of the Transformer block with a combination of a router plus a set of expert layers. During inference, the router in a Transformer block selects a subset of the experts to activate. In the Mixtral model, the router picks the top two experts for each token, and the block's output is the sum of those experts' outputs, weighted by a softmax over the router's scores for them.
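This routing scheme can be sketched in a few lines of NumPy. The function below is an illustrative simplification, not Mixtral's actual implementation: it processes a single token vector, uses plain linear maps as stand-in "experts," and all names (`moe_layer`, `router_w`, `experts`) are invented for the example.

```python
import numpy as np

def moe_layer(x, router_w, experts, k=2):
    """Sketch of a sparse MoE feed-forward layer with top-k routing.

    x: input vector for one token, shape (d,)
    router_w: router weights, shape (n_experts, d)
    experts: list of callables, each mapping (d,) -> (d,)
    """
    logits = router_w @ x                 # one router score per expert
    top = np.argsort(logits)[-k:]         # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()              # softmax over the selected experts only
    # Weighted sum of the chosen experts' outputs; the other experts
    # are never evaluated, which is where the inference savings come from.
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: 8 random linear "experts" on a 4-dimensional token.
rng = np.random.default_rng(0)
d, n_experts = 4, 8
experts = [lambda v, W=rng.standard_normal((d, d)): W @ v
           for _ in range(n_experts)]
router_w = rng.standard_normal((n_experts, d))
y = moe_layer(rng.standard_normal(d), router_w, experts)
```

Only two of the eight expert networks run per token, which is why Mixtral's per-token compute resembles that of a much smaller dense model even though all parameters must be resident in memory.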

The fine-tuned version of the model, Mixtral 8x7B Instruct, was trained using DPO instead of the RLHF technique used to train ChatGPT. This method was developed by researchers at Stanford University and "matches or improves response quality" compared to RLHF, while being much simpler to implement. DPO uses the same kind of dataset as RLHF, a set of paired responses with one ranked higher than the other, but doesn't require training a separate reward model or running an RL loop.
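The per-pair DPO objective can be written down directly. The sketch below assumes scalar log-probabilities for each full response (the argument names are illustrative); it shows how the loss rewards a policy that widens the preferred response's log-probability margin relative to a frozen reference model, with no explicit reward model.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-pair DPO loss: -log sigmoid of the beta-scaled margin between
    the policy's and the reference model's preference for the chosen response.
    """
    margin = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# If the policy prefers the chosen response more strongly than the
# reference does, the margin is positive and the loss is small.
loss = dpo_loss(policy_chosen_logp=-5.0, policy_rejected_logp=-9.0,
                ref_chosen_logp=-6.0, ref_rejected_logp=-8.0)
```

Because the loss depends only on log-probabilities the model already computes, DPO fine-tuning looks like ordinary supervised training over preference pairs.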

Mistral AI evaluated their models on benchmarks for several tasks, including code generation, reading comprehension, mathematics, reasoning, and knowledge. Mixtral 8x7B outperformed Llama 2 70B on nine of twelve benchmarks. It also outperformed GPT-3.5 on five benchmarks. According to Mistral AI, Mixtral 8x7B Instruct's score on the MT-Bench chatbot benchmark makes it "the best open-weights model as of December 2023." The LMSYS leaderboard currently ranks the model 7th, above GPT-3.5, Claude 2.1, and Gemini Pro.

In a discussion on Hacker News, several users pointed out that while all of the model's 46.7B parameters need to be loaded into RAM, inference speed would be comparable to a 13B parameter model. One user said:

This can fit into a Macbook Pro with integrated memory. With all the recent development in the world of local LLMs I regret I settled for only 24Gb RAM on my laptop - but the 13B models work great.

The Mixtral 8x7B and Mixtral 8x7B Instruct models are available on Hugging Face. Mistral AI also offers a hosted version of the model behind its mistral-small API endpoint.
