Unpacking the Hype: Meta's AI Benchmarks Under the Microscope

AI researchers say Meta’s benchmarks for its new AI models are a bit misleading.

Apr 7, 2025

One of the new flagship AI models Meta released on Saturday, Maverick, ranks second on LM Arena, a benchmark in which human raters compare the outputs of different models and choose which they prefer. But the version of Maverick that Meta deployed to LM Arena appears to differ from the version that’s widely available to developers.
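For context, leaderboards of this kind typically turn those pairwise human votes into a ranking with an Elo-style rating system (Chatbot Arena has described using a closely related Bradley–Terry model). The sketch below illustrates the mechanism only; it is not LM Arena's actual scoring code, and the votes and K-factor are invented.

```python
# Minimal Elo-style rating computed from pairwise preference votes,
# illustrating how a leaderboard of this kind can rank models.
# The votes and K-factor here are invented for illustration.

K = 32  # rating update step (assumed; real systems tune or replace this)

def expected_score(r_a: float, r_b: float) -> float:
    """Elo's predicted probability that the model rated r_a wins."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def record_vote(ratings: dict[str, float], winner: str, loser: str) -> None:
    """Nudge both ratings toward the observed human preference."""
    e = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += K * (1.0 - e)
    ratings[loser] -= K * (1.0 - e)

ratings = {"maverick": 1000.0, "rival": 1000.0}
# Each tuple is one human rater's preference: (preferred, rejected).
for winner, loser in [("maverick", "rival"), ("maverick", "rival"), ("rival", "maverick")]:
    record_vote(ratings, winner, loser)

print(ratings)  # the model preferred more often ends up rated higher
```

The relevant point for this story: the resulting rank reflects whichever model variant actually produced the answers raters saw, not necessarily the one developers can download.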

Observations

As several AI researchers pointed out on X, Meta noted in its announcement that the Maverick on LM Arena is an “experimental chat version.” A chart on the official Llama website, meanwhile, discloses that Meta’s LM Arena testing was conducted using “Llama 4 Maverick optimized for conversationality.”

AI companies generally haven’t customized or otherwise fine-tuned their models to score better on benchmarks like LM Arena, or at least haven’t admitted to doing so. The problem with tailoring a model to a benchmark, withholding it, and then releasing a “vanilla” variant of that same model is that it makes it challenging for developers to predict exactly how well the model will perform in particular contexts. It’s also misleading. Ideally, benchmarks, woefully inadequate as they are, provide a snapshot of a single model’s strengths and weaknesses across a range of tasks.

Researchers on X have observed stark differences in the behavior of the publicly downloadable Maverick compared with the model hosted on LM Arena. The LM Arena version seems to use a lot of emojis and give incredibly long-winded answers.
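One crude way to quantify differences like these is to collect responses to the same prompts from each deployment and compare surface statistics such as response length and emoji use. A minimal sketch, with invented sample outputs standing in for real ones:

```python
# Rough surface-level comparison of two model variants' responses
# to the same prompts. The sample strings below are invented;
# in practice you would collect real outputs from each deployment.

import unicodedata

def emoji_count(text: str) -> int:
    """Count characters in Unicode category 'So' (a crude emoji proxy)."""
    return sum(1 for ch in text if unicodedata.category(ch) == "So")

def style_stats(responses: list[str]) -> dict[str, float]:
    """Average response length and emoji use across a sample."""
    n = len(responses)
    return {
        "avg_chars": sum(len(r) for r in responses) / n,
        "avg_emoji": sum(emoji_count(r) for r in responses) / n,
    }

arena_style = ["Great question! 😄🎉 Here is a long, enthusiastic, multi-part answer..."]
public_style = ["Here is a concise answer."]

print("arena-hosted variant:", style_stats(arena_style))
print("public variant:      ", style_stats(public_style))
```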

So far, neither Meta nor Chatbot Arena, the organization that maintains LM Arena, has responded to a request for comment.