Today, Kaggle is launching Community Benchmarks, which lets the global AI community design, run and share their own custom benchmarks for evaluating AI models. This builds on last year's launch of Kaggle Benchmarks, which gives everyone trustworthy, transparent access to evaluations from top-tier research groups, such as Meta’s MultiLoKo and Google’s FACTS suite.
Why community-driven evaluation matters
AI capabilities have evolved so rapidly that it’s become difficult to evaluate model performance. Not long ago, a single accuracy score on a static dataset was enough to determine model quality. But today, as LLMs evolve into reasoning agents that collaborate, write code and use tools, those static metrics and simple evaluations are no longer sufficient.
These real-world use cases demand a more flexible and transparent evaluation framework. Kaggle’s Community Benchmarks provide a more dynamic, rigorous and continuously evolving approach to AI model evaluation, one shaped by the users building and deploying these systems every day.
They also give developers a transparent way to validate their specific use cases and bridge the gap between experimental code and production-ready applications.
How to build your own benchmarks on Kaggle
Benchmarks are built from tasks, which can range from evaluating multi-step reasoning and code generation to testing tool use or image recognition. Once you have created tasks, you can add them to a benchmark to evaluate and rank selected models by how they perform across those tasks.
Here’s how you can get started:
- Create a task: Tasks test an AI model’s performance on a specific problem. They allow you to run reproducible tests across different models to compare their accuracy and capabilities.
- Create a benchmark: Once you have created one or more tasks, you can group them into a benchmark. A benchmark allows you to run tasks across a suite of leading AI models and generate a leaderboard to track and compare their performance (a conceptual sketch follows below).
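To make the relationship between tasks and benchmarks concrete, here is a minimal, hypothetical sketch in Python. The `Task` class, the `run_benchmark` function and the toy scorers are illustrative assumptions, not Kaggle's actual interface; on Kaggle you create tasks and benchmarks through the site, and the platform handles running the models and building the leaderboard for you.

```python
# Hypothetical sketch only: these names do not come from Kaggle's product.
# They illustrate the idea of reproducible tasks plus a leaderboard
# aggregated across them.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Task:
    """A reproducible test: a fixed prompt plus a scoring rule."""
    name: str
    prompt: str
    score: Callable[[str], float]  # maps a model's response to a 0-1 score


def run_benchmark(tasks: List[Task],
                  models: Dict[str, Callable[[str], str]]) -> List[tuple]:
    """Run every task against every model; return a sorted leaderboard."""
    leaderboard = []
    for model_name, generate in models.items():
        scores = [task.score(generate(task.prompt)) for task in tasks]
        leaderboard.append((model_name, sum(scores) / len(scores)))
    return sorted(leaderboard, key=lambda row: row[1], reverse=True)


if __name__ == "__main__":
    # Two toy tasks: exact-match arithmetic and a keyword check for tool use.
    tasks = [
        Task("arithmetic", "What is 12 * 9? Answer with the number only.",
             lambda r: float(r.strip() == "108")),
        Task("tool_use", "Which tool would you call to fetch today's weather?",
             lambda r: float("weather" in r.lower())),
    ]
    # Stand-in "models" are plain functions here; on Kaggle the platform
    # calls the real models for you.
    models = {
        "model_a": lambda p: "108" if "12 * 9" in p else "call the weather API",
        "model_b": lambda p: "It is 108." if "12 * 9" in p else "no idea",
    }
    for rank, (name, avg) in enumerate(run_benchmark(tasks, models), start=1):
        print(f"{rank}. {name}: {avg:.2f}")
```

However the details differ on the platform, the core idea is the same: a task is a reproducible test with a fixed scoring rule, and a benchmark is a set of tasks whose scores are aggregated into a per-model leaderboard.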