Your MQTT broker now has a built-in AI assistant that runs entirely on your own hardware. Connect BunkerM to any model loaded in LM Studio and control your entire IoT setup in plain English: no internet connection required, and no data ever leaving your network.
Until now, BunkerM's AI features required a BunkerAI Cloud subscription. That works well for most users, but a growing number of deployments cannot send data outside the network, whether due to compliance requirements, limited connectivity, or a preference for keeping infrastructure fully self-contained.
Local LLM mode solves this by routing all AI requests to a model running on your own machine via LM Studio, a free desktop app that runs models locally. BunkerM injects live broker context into every request, so the model knows your connected clients, active topics, latest payloads, and statistics, and can act on them directly.
The local AI has the same execution capabilities as the cloud version for web chat. You can ask it to create clients, publish messages, delete devices, and query live broker state, all in plain English.
The model receives a fresh snapshot of your broker on every message. There is no stale cache. It sees what your broker sees, right now.
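The per-message snapshot pattern is simple to picture: LM Studio exposes an OpenAI-compatible HTTP API on port 1234, and the current broker state can be injected as a system message on every turn. A minimal sketch of the idea, assuming illustrative function names and snapshot fields, not BunkerM's actual internals:

```python
import json
import urllib.request

def build_request(user_message: str, broker_snapshot: dict) -> dict:
    """Inject a fresh broker snapshot into the system prompt on every turn."""
    system = (
        "You manage an MQTT broker. Current broker state:\n"
        + json.dumps(broker_snapshot, indent=2)
    )
    return {
        "model": "local-model",  # LM Studio serves whichever model is loaded
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user_message},
        ],
    }

def ask(user_message: str, snapshot: dict,
        url: str = "http://localhost:1234/v1/chat/completions") -> str:
    """Send one chat turn to LM Studio's OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_request(user_message, snapshot)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because the snapshot is rebuilt for each request, the model never reasons over stale state; the trade-off is a slightly larger prompt per turn.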
Factory floors running SCADA and PLCs over MQTT often operate on isolated networks with no internet access by design. With local LLM, an operator can ask "which machines have not sent a heartbeat in the last 10 minutes?" or "disable all clients in the maintenance group" without any request touching the outside world. The AI understands the plant's topic structure through annotations set up once in BunkerM's settings.
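The heartbeat question above reduces to a timestamp comparison over last-seen data. A sketch of the logic such a query resolves to (client names and the `last_seen` structure are illustrative):

```python
from datetime import datetime, timedelta

def stale_clients(last_seen: dict[str, datetime], now: datetime,
                  window: timedelta = timedelta(minutes=10)) -> list[str]:
    """Return client IDs whose last heartbeat is older than the window."""
    return sorted(cid for cid, ts in last_seen.items() if now - ts > window)
```

The point is that the operator asks the question in plain language; the model translates it into exactly this kind of check against the injected broker snapshot.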
Medical device networks and patient monitoring systems face strict data residency requirements. HIPAA and GDPR both create barriers to sending device telemetry to third-party cloud AI services. A hospital using BunkerM to manage its medical IoT network can now query, configure, and automate its MQTT infrastructure with AI assistance while remaining fully compliant. Nothing leaves the hospital network.
Farms and greenhouses are often in areas with poor or expensive connectivity. An agronomist managing irrigation sensors, soil monitors, and climate controllers via MQTT can run BunkerM on a local server or Raspberry Pi and use a small 3B model to control the entire setup with natural language. "Increase irrigation duration for zone 3 by 15 minutes" becomes a single chat message instead of a config file edit.
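An instruction like the irrigation example ultimately becomes an MQTT publish. A sketch of that translation, with a hypothetical topic layout and payload schema (the real topics depend on how the farm's broker is organized):

```python
import json

def irrigation_command(zone: int, delta_minutes: int) -> tuple[str, str]:
    """Translate a chat instruction into an MQTT topic and JSON payload."""
    topic = f"farm/irrigation/zone{zone}/set"  # hypothetical topic layout
    payload = json.dumps({"duration_delta_min": delta_minutes})
    return topic, payload

# With a connected paho-mqtt client, publishing would then be:
#   client.publish(*irrigation_command(3, 15))
```

The value of the chat interface is precisely that the operator never has to know the topic string or payload schema; one sentence replaces the config edit.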
For home lab users already running LM Studio for other purposes, this is a zero-cost upgrade. Load a 7B model, point BunkerM at it, and your MQTT broker becomes conversational. Ask it to rename clients, reorganize topic permissions, or publish test messages while you debug a device, all without leaving the BunkerM interface.
BunkerM works with any model LM Studio can load. For broker management tasks, instruction-tuned (chat/instruct) models significantly outperform base models.
Models smaller than 3B can work for simple queries but tend to add unnecessary action blocks to read-only requests despite the system prompt instruction. If you see unwanted actions being executed, switching to a larger model solves it in most cases.
The setup takes about five minutes:

1. Download LM Studio and load a model.
2. Start LM Studio's local server (it listens on port 1234 by default).
3. In BunkerM's settings, enter http://host.docker.internal:1234 as the server URL (or http://localhost:1234 if running without Docker).

Full setup guide, troubleshooting tips, and model recommendations are in the Local LLM documentation.
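Before saving the URL, it is worth checking that the server is actually reachable. LM Studio's API is OpenAI-compatible, so /v1/models returns whatever is currently loaded. A small sketch, mirroring the Docker vs. non-Docker URL choice above:

```python
import json
import urllib.request

def server_url(bunkerm_in_docker: bool) -> str:
    """BunkerM inside Docker must use host.docker.internal to reach the host."""
    return ("http://host.docker.internal:1234" if bunkerm_in_docker
            else "http://localhost:1234")

def list_models(base_url: str) -> list[str]:
    """Query LM Studio's OpenAI-compatible /v1/models endpoint."""
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        data = json.loads(resp.read())
    return [m["id"] for m in data.get("data", [])]
```

If `list_models` returns an empty list or the connection is refused, load a model and start the server in LM Studio before configuring BunkerM.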
Local LLM is available on all plans, including the free Community edition. It requires your own hardware running LM Studio. Cloud AI (BunkerAI) remains available on paid plans for users who prefer a zero-setup option with no hardware requirements.