Run Large Language Models locally with LM Studio

LM Studio is a fantastic tool for running models locally. It supports multiple open-source models; see the library here. We recommend experimenting to find the model best suited to your use case. Here are some general recommendations:
- `llama3.3` models are good for most basic use cases.
- `qwen` models perform particularly well with tool use.
- `deepseek-r1` models have strong reasoning capabilities.
- `phi4` models are powerful while being very small in size.
Set up a model
Install LM Studio, download the model you want to use, and run it.

Example
After you have the model running locally, use the LM Studio model class to access it.
View more examples here.
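Under the hood, the model class talks to LM Studio's local server, which exposes an OpenAI-compatible HTTP API at the default `base_url` listed below. As a rough sketch of that request shape (standard library only; the endpoint path and payload follow the OpenAI chat-completions convention, not a documented Agno internal):

```python
import json
import urllib.request

# LM Studio serves an OpenAI-compatible API at this address by default
# (see the `base_url` parameter below).
BASE_URL = "http://127.0.0.1:1234/v1"

def build_chat_request(prompt: str, model: str = "qwen2.5-7b-instruct-1m"):
    """Build (but do not send) a chat-completions request for the local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("Share a two-sentence horror story.")
print(req.full_url)  # http://127.0.0.1:1234/v1/chat/completions
```

With the LM Studio server running, `urllib.request.urlopen(req)` would send the request and return the completion; the model class wraps this plumbing for you.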
Params
| Parameter | Type | Default | Description |
|---|---|---|---|
| `id` | `str` | `"qwen2.5-7b-instruct-1m"` | The id of the LM Studio model to use. |
| `name` | `str` | `"LM Studio "` | The name of this chat model instance. |
| `provider` | `str` | `"LM Studio " + id` | The provider of the model. |
| `base_url` | `str` | `"http://127.0.0.1:1234/v1"` | The base URL for API requests. |
LM Studio also supports the params of OpenAI.
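Because the server implements the OpenAI API, the usual OpenAI request parameters (`temperature`, `max_tokens`, `stream`, and so on) ride along in the same payload. A minimal sketch (the prompt and parameter values here are purely illustrative):

```python
import json

# OpenAI-style parameters pass through unchanged in the request body.
payload = {
    "model": "qwen2.5-7b-instruct-1m",
    "messages": [{"role": "user", "content": "Summarize this in one line."}],
    "temperature": 0.2,   # OpenAI-style sampling parameter
    "max_tokens": 128,    # OpenAI-style completion cap
    "stream": False,
}
body = json.dumps(payload)
print("temperature" in body and "max_tokens" in body)  # True
```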