New Feature · April 6, 2026

BunkerM Now Supports Local LLM via LM Studio


Your MQTT broker now has a built-in AI assistant that runs entirely on your own hardware. Connect BunkerM to any model loaded in LM Studio and control your entire IoT setup in plain English. No internet connection is required, and no data ever leaves your network.

Why This Matters

Until now, BunkerM's AI features required a BunkerAI Cloud subscription. That works well for most users, but a growing number of deployments cannot send data outside the network, whether due to compliance requirements, limited connectivity, or a preference for keeping infrastructure fully self-contained.

Local LLM mode solves this by routing all AI requests to a model running on your own machine via LM Studio, a free desktop app for running models locally. BunkerM injects live broker context into every request, so the model knows your connected clients, active topics, latest payloads, and statistics, and can act on them directly.
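Context injection can be pictured as prepending a broker snapshot to the conversation before it reaches the model. The sketch below is illustrative, not BunkerM's actual internals: the snapshot fields (`clients`, `topics`) and the request shape are assumptions, though the message format follows the OpenAI-style chat API that LM Studio serves.

```python
import json

def build_chat_request(model, user_message, broker_snapshot):
    """Assemble an OpenAI-style chat request with a live broker
    snapshot injected as a system message. Snapshot field names
    here are hypothetical, chosen for illustration."""
    context = "Current broker state:\n" + json.dumps(broker_snapshot, indent=2)
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": context},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.2,  # low temperature keeps tool-style replies consistent
    }

snapshot = {
    "clients": ["sensor-01", "sensor-02"],
    "topics": {"home/sensor/temperature": "21.4"},
}
request_body = build_chat_request(
    "qwen2.5-7b-instruct",
    "What is the current value of home/sensor/temperature?",
    snapshot,
)
```

Because the snapshot is rebuilt per message, the system prompt always reflects the broker's state at the moment you hit send.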

What It Can Do

The local AI has the same execution capabilities as the cloud version's web chat. You can ask it to create clients, publish messages, delete devices, and query live broker state, all in plain English. A few examples:

  • "Create 10 sensor clients with random credentials" produces 10 real entries in Mosquitto's dynamic security immediately.
  • "What is the current value of home/sensor/temperature?" reads the actual retained payload and returns it.
  • "Turn off the conveyor belt" publishes the correct stop payload to the right topic, based on your topic annotations.

The model receives a fresh snapshot of your broker on every message. There is no stale cache. It sees what your broker sees, right now.
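One plausible mechanism for turning chat replies into real broker operations is for the model to emit structured action blocks that the backend extracts and executes. BunkerM's actual action format is internal; the fenced-JSON convention and the `create_client` action shown here are assumptions for illustration.

```python
import json
import re

# Matches a JSON object inside a hypothetical ```action fenced block.
ACTION_RE = re.compile(r"```action\s*(\{.*?\})\s*```", re.DOTALL)

def extract_actions(model_output):
    """Pull JSON action blocks out of a model reply so the backend
    can validate and execute them. Fence format is hypothetical."""
    return [json.loads(m) for m in ACTION_RE.findall(model_output)]

reply = (
    "Creating the client now.\n"
    "```action\n"
    '{"type": "create_client", "username": "sensor-11"}\n'
    "```"
)
actions = extract_actions(reply)
```

A request like "create 10 sensor clients" would simply yield ten such blocks, each applied to Mosquitto's dynamic security in turn.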

Real-World Use Cases

Manufacturing and Industrial OT

Factory floors running SCADA and PLCs over MQTT often operate on isolated networks with no internet access by design. With local LLM, an operator can ask "which machines have not sent a heartbeat in the last 10 minutes?" or "disable all clients in the maintenance group" without any request touching the outside world. The AI understands the plant's topic structure through annotations set up once in BunkerM's settings.

Healthcare and Life Sciences

Medical device networks and patient monitoring systems face strict data residency requirements. HIPAA and GDPR both create barriers to sending device telemetry to third-party cloud AI services. A hospital using BunkerM to manage its medical IoT network can now query, configure, and automate its MQTT infrastructure with AI assistance while remaining fully compliant. Nothing leaves the hospital network.

Smart Agriculture

Farms and greenhouses are often in areas with poor or expensive connectivity. An agronomist managing irrigation sensors, soil monitors, and climate controllers via MQTT can run BunkerM on a local server or Raspberry Pi and use a small 3B model to control the entire setup with natural language. "Increase irrigation duration for zone 3 by 15 minutes" becomes a single chat message instead of a config file edit.

Home Automation and Enthusiasts

For home lab users already running LM Studio for other purposes, this is a zero-cost upgrade. Load a 7B model, point BunkerM at it, and your MQTT broker becomes conversational. Ask it to rename clients, reorganize topic permissions, or publish test messages while you debug a device, all without leaving the BunkerM interface.

Choosing a Model

BunkerM works with any model LM Studio can load. For broker management tasks, instruction-following models outperform base models significantly. A few that work well in practice:

  • Qwen2.5-7B-Instruct: reliable instruction following, handles batch client creation cleanly
  • Llama-3.2-3B-Instruct: fast on CPU, suitable for read-only queries on modest hardware
  • Mistral-7B-Instruct-v0.3: good balance of speed and accuracy for mixed workloads

Models smaller than 3B can work for simple queries but tend to add unnecessary action blocks to read-only requests despite the system prompt instruction. If you see unwanted actions being executed, switching to a larger model solves it in most cases.
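If a small model does attach an action block to a pure query, one way a backend could defend against it is a read-only guard that strips actions when the request looks like a question. This is a sketch of the idea only: the keyword heuristics and the ```action``` fence convention are assumptions, not BunkerM's documented safeguard.

```python
import re

# Crude heuristics for questions that should never trigger actions.
READ_ONLY_HINTS = ("what", "show", "list", "how many", "which", "status")

# Matches a hypothetical ```action fenced block anywhere in a reply.
ACTION_BLOCK_RE = re.compile(r"```action.*?```", re.DOTALL)

def strip_unwanted_actions(user_message, model_reply):
    """Drop action blocks from replies to read-only questions,
    leaving command-style requests untouched."""
    if user_message.lower().startswith(READ_ONLY_HINTS):
        return ACTION_BLOCK_RE.sub("", model_reply).strip()
    return model_reply
```

A guard like this trades a little flexibility for safety on small models; a larger instruction-tuned model usually makes it unnecessary.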

How to Get Started

The setup takes about five minutes:

  1. Install LM Studio, download a model, and start the local server on port 1234.
  2. In BunkerM, go to Settings → Integrations → Local LLM.
  3. Enter http://host.docker.internal:1234 as the server URL (or http://localhost:1234 if running without Docker).
  4. Click Fetch Models, select your loaded model, enable, and save.
  5. Open AI → Chat and switch to Local LLM mode.
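Before step 4, you can verify the server from step 1 is actually reachable: LM Studio's local server exposes an OpenAI-compatible `GET /v1/models` endpoint. A minimal check, assuming the default port:

```python
import json
import urllib.error
import urllib.request

def list_local_models(base_url="http://localhost:1234", timeout=3):
    """Query LM Studio's /v1/models endpoint.
    Returns a list of model IDs, or None if the server is unreachable."""
    url = base_url.rstrip("/") + "/v1/models"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
    except (urllib.error.URLError, OSError):
        return None
    return [m["id"] for m in data.get("data", [])]
```

If this returns `None`, make sure LM Studio's server is started (Developer tab) before continuing; inside Docker, use `http://host.docker.internal:1234` as the base URL instead.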

Full setup guide, troubleshooting tips, and model recommendations are in the Local LLM documentation.

Availability

Local LLM is available on all plans, including the free Community edition. It requires your own hardware running LM Studio. Cloud AI (BunkerAI) remains available on paid plans for users who prefer a zero-setup option with no hardware requirements.
