Your Broker Now Has an AI Assistant
AI Routes turn your Coreflux broker into a home for autonomous AI agents. An AGENT route connects a language model (local or cloud) to everything the broker can already see: MQTT topics, databases, industrial equipment, MCP tools, and your own LoT Actions. The agent can answer questions, trigger Routes, and carry out multi-step tasks — all through plain language.
When to use AI Routes
- Natural-language monitoring — ask “how many sensors are reporting?” instead of building a dashboard
- AI-assisted automation — let the agent decide which Route to trigger for a given request
- Chat UI backend — power a custom chat or assistant interface with MQTT as the transport
- Hands-on discovery — explore your broker’s topics and Routes by asking questions
Minimal Example
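As a sketch, assuming LoT Routes are declared with a DEFINE ROUTE ... WITH TYPE AGENT statement and configured through WITH clauses matching the parameter names documented below (the exact keyword layout is an assumption; check the LoT reference for your broker version), such a Route might look like:

```lot
DEFINE ROUTE Assistant WITH TYPE AGENT
    WITH PROVIDER "ollama"
    WITH MODEL "llama3.2"
```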
The shortest possible AI Route points at a local Ollama model and nothing else. It has no tools yet — it simply answers questions using the LLM.

This default assumes Ollama is running locally at http://localhost:11434. Pull the model first with ollama pull llama3.2 if you haven’t already.

Picking a Provider
AI Routes support four providers out of the box. Pick the one that matches where your model lives — local or cloud. Cloud providers need an API key, which you should store as a secret rather than pasting into the Route.
- Ollama (local)
- OpenAI
- Anthropic
- Mistral
Ollama runs entirely on your machine — no API key, no cloud dependency. Good for getting started and for privacy-sensitive workloads.
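For a cloud provider the shape is the same, with the key stored as a secret first. A sketch under the same assumed DEFINE ROUTE layout (the model name here is illustrative):

```lot
KEEP SECRET "OPENAI_KEY" WITH "sk-..."

DEFINE ROUTE Assistant WITH TYPE AGENT
    WITH PROVIDER "openai"
    WITH MODEL "gpt-4o"
    WITH API_KEY GET SECRET "OPENAI_KEY"
```

GET SECRET resolves the key at runtime, so the Route definition itself never contains the credential.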
Core Configuration
These are the parameters most users will touch. The AGENT route has more advanced knobs (history pruning, tracing, token limits, confirmation flows), but you don’t need them on day one.
| Parameter | Type | Default | What it does |
|---|---|---|---|
| PROVIDER | string | ollama | Which LLM backend to use: ollama, openai, anthropic, mistral |
| MODEL | string | llama3.2 | Model name for the chosen provider |
| API_KEY | string | — | API key for cloud providers (use GET SECRET) |
| TEMPERATURE | number | 0.7 | Creativity: 0.0 is deterministic, 1.0+ is more creative |
| SYSTEM_PROMPT | string | (built-in) | Sets the agent’s persona and rules |
| BROKER_TOOLS | boolean | false | Give the agent built-in tools to read/write MQTT and Routes |
| MCP_ROUTES | string | — | Comma-separated names of MCP Routes the agent can call |
| INTERACTION_MODE | string | autonomous | autonomous runs to completion, interactive can ask you questions |
| INTERACTION_TOPIC | string | (auto) | Base MQTT topic for interaction messages |
| AGENT_MODE | string | agent | agent (full control) or insight (read-only exploration) |
More options exist for production deployments — conversation history pruning, execution tracing, tool confirmation, max iteration limits. Start with the basics above, then add them as you need them.
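Combining the basics above, a day-one configuration could be sketched like this (the keyword layout and value quoting are assumptions, and the prompt text is illustrative):

```lot
DEFINE ROUTE FactoryAssistant WITH TYPE AGENT
    WITH PROVIDER "ollama"
    WITH MODEL "llama3.2"
    WITH TEMPERATURE "0.2"
    WITH SYSTEM_PROMPT "You are a concise factory assistant. Prefer broker data over guessing."
    WITH BROKER_TOOLS "true"
```

A low TEMPERATURE keeps answers consistent, which is usually what you want for monitoring-style questions.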
What Tools Can the Agent Use?
A fresh agent only knows how to chat. To make it actually do things, you give it tools. There are three sources, and you can mix all three.

Broker Tools
Setting BROKER_TOOLS to true unlocks built-in tools so the agent can interact with your broker directly — no extra configuration required.
- List MQTT topics and peek at their latest payloads
- Read a specific topic on demand
- Publish messages to any topic
- List your Routes to see what’s connected
MCP Tools
MCP (Model Context Protocol) servers expose tools like file systems, Slack, Google Drive, or custom integrations. Any MCP Route you’ve defined can be wired into an AI Route with MCP_ROUTES. If SlackMcp offers a send-message tool, the agent can call it when appropriate.
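A sketch of that wiring, assuming an MCP Route named SlackMcp already exists (the DEFINE ROUTE layout is an assumption):

```lot
DEFINE ROUTE Assistant WITH TYPE AGENT
    WITH PROVIDER "ollama"
    WITH MODEL "llama3.2"
    WITH MCP_ROUTES "SlackMcp"
```

With that in place, a request like “tell the ops channel the line is down” lets the agent pick the SlackMcp send-message tool on its own.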
Your Other Routes
The agent also sees every other Route you’ve already defined. If you have a PostgreSQL Route with an InsertReading event, the agent automatically gets a tool named SensorDB.trigger_InsertReading. Same for Modbus, REST, MongoDB, and so on — you don’t need to re-declare anything.
Insight vs. Agent Mode
Not every AI Route should be able to publish messages or trigger equipment. Use AGENT_MODE to control how much power the agent has:
| Mode | What it can do | Good for |
|---|---|---|
| insight | Read-only: list topics, read payloads, explore Routes | Exploring a broker, answering “what’s connected?” |
| agent (default) | Full: read and write, trigger Routes, modify configuration | Automation, chat assistants, hands-on operators |
Talking to Your Agent
Once a Route is defined, you can invoke it from LoT Actions or from any MQTT client. The usual pattern is to create a thin Action that receives a question, hands it to the agent, and publishes the reply.
- From a LoT Action
- From any MQTT client
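From a LoT Action, the wiring might look like the following sketch; the SET, CALL AGENT and PUBLISH forms are assumptions about LoT Action syntax, and Assistant is a hypothetical AGENT Route name:

```lot
DEFINE ACTION AskAssistant ON TOPIC "assistant/ask" DO
    SET "answer" WITH (CALL AGENT "Assistant" WITH PAYLOAD)
    PUBLISH TOPIC "assistant/reply" WITH {answer}
```

From any MQTT client, the flow is then a publish and a subscribe: send the question with mosquitto_pub -t assistant/ask -m "How many sensors are reporting?" and watch assistant/reply for the answer.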
The CALL AGENT statement runs the agent and returns the result into a variable. This Action takes whatever message lands on assistant/ask and publishes the answer to assistant/reply.

Interactive Mode
By default the agent runs autonomously: it picks tools, executes them, and returns a final answer. Sometimes it needs clarification — “which production line did you mean?”. Switch to interactive mode to let the agent pause and ask.
- It publishes the question to assistant/interact/ask
- Your UI (or a human with an MQTT client) replies to assistant/interact/reply/{request_id}
- The agent continues where it left off, using your answer
If no reply arrives within INTERACTION_TIMEOUT seconds, the agent moves on instead of hanging.
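Switching modes is a single parameter. A sketch, with the timeout value illustrative and the keyword layout assumed:

```lot
DEFINE ROUTE Assistant WITH TYPE AGENT
    WITH PROVIDER "ollama"
    WITH MODEL "llama3.2"
    WITH INTERACTION_MODE "interactive"
    WITH INTERACTION_TIMEOUT "60"
```

A human can then answer with any MQTT client, e.g. mosquitto_pub -t assistant/interact/reply/abc123 -m "Line 2", where abc123 stands in for the request_id carried in the question message.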
How It All Fits Together
The diagram below shows how a user question flows through the broker, gathers tools on the way, and produces a reply. Every box on the right is optional. A minimal agent with just MODEL still works — it’ll simply answer from the LLM’s own knowledge.
Best Practices
Store API keys as secrets
Never paste API keys directly into a Route definition. Use KEEP SECRET "MY_KEY" WITH "sk-..." once, then reference it with WITH API_KEY GET SECRET "MY_KEY". Secrets are never logged or exposed in stored definitions.
Start in insight mode
When you’re still shaping the system prompt or picking a model, run the agent with WITH AGENT_MODE "insight". It can still read topics and explore Routes, but it can’t publish or trigger anything — so a bad prompt won’t affect production data.
Write a focused system prompt
The default prompt is generic. Tell the agent who it is and what it should refuse to do:
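For instance, a focused prompt wired in via SYSTEM_PROMPT (the prompt wording is illustrative and the DEFINE ROUTE layout an assumption):

```lot
DEFINE ROUTE FloorAssistant WITH TYPE AGENT
    WITH PROVIDER "ollama"
    WITH MODEL "llama3.2"
    WITH SYSTEM_PROMPT "You are the monitoring assistant for plant A. Answer only questions about sensor topics. Refuse requests to publish messages or change configuration."
```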
Keep iteration limits sane
MAX_ITERATIONS caps how many tool calls the agent can make per task. The default (10) is fine for most questions. Lift it only if you have complex multi-step workflows — runaway loops are both slow and expensive on cloud providers.

Troubleshooting
Ollama not reachable
- Make sure Ollama is running: ollama list should respond without errors
- Confirm the model is pulled: ollama pull llama3.2
- If Ollama is on a different machine, set WITH BASE_URL "http://host:11434" explicitly
- Check firewall rules between the broker and the Ollama host
The agent never calls any tool
- Confirm BROKER_TOOLS "true" is set (or that MCP_ROUTES lists valid Routes)
- Smaller local models sometimes ignore tools — try a more capable model (e.g. llama3.2 over llama3:8b, or switch to a cloud provider)
- Sharpen the SYSTEM_PROMPT so the agent knows it should use tools for data questions
Cloud provider returns 401 or 403
- Verify the secret actually exists: publishing LIST SECRETS via the command console should show its name
- Check you’re using the right PROVIDER — Anthropic keys don’t work for OpenAI and vice versa
- For Anthropic, make sure your key has access to the specific model you set in MODEL
The reply is empty or gets cut off
- Raise MAX_TOKENS — long answers need more room (default is 4096)
- Check the model actually supports the request (some smaller models struggle with long context)
- If you chained several tool calls, the agent may have hit MAX_ITERATIONS — increase it and retry
Next Steps
MCP Routes
Plug external tools — Slack, file systems, Google Drive, custom servers — into your agent.
Route Examples
Browse real Route patterns you can combine with an AI Route.

