
LocalScore

An open benchmark that helps you understand how well your computer can handle local AI tasks

Overview

GitHub
26 stars · 2 forks

More Information

LocalScore is an open-source benchmarking tool designed to measure how fast Large Language Models (LLMs) run on your specific hardware. It also serves as a public database of benchmark results.

Whether you’re wondering if your computer can smoothly run an 8 billion parameter model or trying to decide which GPU to buy for your local AI setup, LocalScore provides the data you need to make informed decisions.

The benchmark results are meant to be directly comparable to each other, and they should give a fairly good indication of the real-world performance you can expect from your hardware. The suite cannot cover every scenario (speculative decoding, etc.), but it provides a reasonable estimate of how well your hardware will perform.

Contribute

Join the Discord

Contributors

Other Projects