# LM Studio

Access LM Studio models locally through NotPixel. Ideal for local development and testing without calling cloud APIs.

Learn more in the LM Studio documentation.
## Setup

### 1. Download LM Studio

- Go to the LM Studio website
- Download and install the app for your OS
- Download the models you want to use
- Start the Local Inference Server (defaults to `http://localhost:1234`)
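Before moving on, it can help to confirm the local server is actually reachable. A minimal sketch, assuming the default port; the `/v1/models` route is part of LM Studio's OpenAI-compatible API, but `checkServer` and `modelsEndpoint` are hypothetical helpers written for illustration, not part of NotPixel:

```typescript
// Assumed default LM Studio base URL; change it if you picked another port.
const LMSTUDIO_BASE = "http://localhost:1234";

function modelsEndpoint(base: string): string {
  // Strip any trailing slashes so we never produce "//v1/models".
  return `${base.replace(/\/+$/, "")}/v1/models`;
}

async function checkServer(base: string = LMSTUDIO_BASE): Promise<boolean> {
  try {
    // LM Studio's OpenAI-compatible server lists loaded models at /v1/models.
    const res = await fetch(modelsEndpoint(base));
    return res.ok;
  } catch {
    return false; // server not running, or wrong host/port
  }
}
```

If `checkServer()` resolves to `false`, start the Local Inference Server from the LM Studio UI before continuing.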
### 2. Configure Environment

`.env`:

```bash
# Optional; a mock value is used if not set. Only needed for local routing.
LMSTUDIO_API_KEY=not-needed-for-local-usage
```
### 3. Use in Code

```ts
import Ads from 'notpixel';

const ads = new Ads({
  publisherId: 'pub_xxx',
  model: 'lmstudio/openai/gpt-oss-20b',
  input: 'Hello from local AI!',
});

const response = await ads.offer();
console.log(response.text);
```

NotPixel uses LM Studio's OpenAI-compatible endpoint. If you are not using the default router, set your `baseURL` to your local LM Studio endpoint.
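Because the endpoint is OpenAI-compatible, the request NotPixel ultimately issues is a standard chat-completions call. A hedged sketch of that shape, for debugging against the local server directly; `buildChatRequest` is a hypothetical helper, and the stripping of the `lmstudio/` prefix reflects NotPixel's model-naming convention, which is an assumption here:

```typescript
interface ChatRequest {
  url: string;
  body: {
    model: string;
    messages: { role: "user" | "system" | "assistant"; content: string }[];
  };
}

function buildChatRequest(baseURL: string, model: string, input: string): ChatRequest {
  return {
    // LM Studio's OpenAI-compatible chat route.
    url: `${baseURL.replace(/\/+$/, "")}/v1/chat/completions`,
    body: {
      // The "lmstudio/" prefix is NotPixel routing (assumed); LM Studio
      // itself expects the bare model identifier shown in its UI.
      model: model.replace(/^lmstudio\//, ""),
      messages: [{ role: "user", content: input }],
    },
  };
}

// Example: POST req.body as JSON to req.url with fetch() to hit the server directly.
const req = buildChatRequest(
  "http://localhost:1234",
  "lmstudio/openai/gpt-oss-20b",
  "Hello from local AI!"
);
```

Sending this with `fetch(req.url, { method: "POST", headers: { "Content-Type": "application/json" }, body: JSON.stringify(req.body) })` should return a completion from the locally loaded model.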
## Available Models (Examples)

| Model | Context |
|---|---|
| `lmstudio/openai/gpt-oss-20b` | 131K |
| `lmstudio/qwen/qwen3-30b-a3b-2507` | 262K |
| `lmstudio/qwen/qwen3-coder-30b` | 262K |
## Environment Variable

| Variable | Description |
|---|---|
| `LMSTUDIO_API_KEY` | Your local API key (optional; LM Studio does not require one) |