
Your Broker Now Has an AI Assistant

AI Routes turn your Coreflux broker into a home for autonomous AI agents. An AGENT route connects a language model (local or cloud) to everything the broker can already see: MQTT topics, databases, industrial equipment, MCP tools, and your own LoT Actions. The agent can answer questions, trigger Routes, and carry out multi-step tasks — all through plain language.
It's like hiring an assistant that lives inside your broker: it can see your MQTT topics, use connected tools (databases, Slack, files), and answer questions in plain language — no external middleware, no copy-pasting data into a separate chatbot.

When to use AI Routes

  • Natural-language monitoring — ask “how many sensors are reporting?” instead of building a dashboard
  • AI-assisted automation — let the agent decide which Route to trigger for a given request
  • Chat UI backend — power a custom chat or assistant interface with MQTT as the transport
  • Hands-on discovery — explore your broker’s topics and Routes by asking questions

Minimal Example

The shortest possible AI Route points at a local Ollama model and nothing else. It has no tools yet — it simply answers questions using the LLM:
DEFINE ROUTE Assistant WITH TYPE AGENT
    ADD AGENT_CONFIG
        WITH MODEL "llama3.2"
This default assumes Ollama is running locally at http://localhost:11434. Pull the model first with ollama pull llama3.2 if you haven’t already.

Picking a Provider

AI Routes support four providers out of the box. Pick the one that matches where your model lives — local or cloud. Cloud providers need an API key, which you should store as a secret rather than pasting into the Route.
Ollama, the default, runs entirely on your machine — no API key, no cloud dependency. It's a good choice for getting started and for privacy-sensitive workloads.
DEFINE ROUTE Assistant WITH TYPE AGENT
    ADD AGENT_CONFIG
        WITH PROVIDER "ollama"
        WITH BASE_URL "http://localhost:11434"
        WITH MODEL "llama3.2"
For cloud providers, set secrets ahead of time with KEEP SECRET "OPENAI_KEY" WITH "sk-..." and reference them with GET SECRET. The secret never appears in logs or stored Route definitions.
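For a cloud provider, the Route keeps the same shape: swap in the provider name, a model, and a secret-backed key. A sketch assuming an OpenAI account and a stored "OPENAI_KEY" secret (the model name is illustrative — use any model your key can access):

```
DEFINE ROUTE Assistant WITH TYPE AGENT
    ADD AGENT_CONFIG
        WITH PROVIDER "openai"
        WITH MODEL "gpt-4o"
        WITH API_KEY GET SECRET "OPENAI_KEY"
```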

Core Configuration

These are the parameters most users will touch. The AGENT route has more advanced knobs (history pruning, tracing, token limits, confirmation flows), but you don’t need them on day one.
| Parameter | Type | Default | What it does |
|---|---|---|---|
| PROVIDER | string | ollama | Which LLM backend to use: ollama, openai, anthropic, mistral |
| MODEL | string | llama3.2 | Model name for the chosen provider |
| API_KEY | string | — | API key for cloud providers (use GET SECRET) |
| TEMPERATURE | number | 0.7 | Creativity: 0.0 is deterministic, 1.0+ is more creative |
| SYSTEM_PROMPT | string | (built-in) | Sets the agent's persona and rules |
| BROKER_TOOLS | boolean | false | Give the agent built-in tools to read/write MQTT and Routes |
| MCP_ROUTES | string | — | Comma-separated names of MCP Routes the agent can call |
| INTERACTION_MODE | string | autonomous | autonomous runs to completion, interactive can ask you questions |
| INTERACTION_TOPIC | string | (auto) | Base MQTT topic for interaction messages |
| AGENT_MODE | string | agent | agent (full control) or insight (read-only exploration) |
More options exist for production deployments — conversation history pruning, execution tracing, tool confirmation, max iteration limits. Start with the basics above, then add them as you need them.
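Putting a few of these parameters together, here is a sketch of a more opinionated assistant (the prompt text and temperature value are illustrative, not required settings):

```
DEFINE ROUTE Assistant WITH TYPE AGENT
    ADD AGENT_CONFIG
        WITH MODEL "llama3.2"
        WITH TEMPERATURE "0.2"
        WITH SYSTEM_PROMPT "You are a concise plant-floor assistant. Answer from broker data when possible."
        WITH BROKER_TOOLS "true"
```

A low temperature keeps answers consistent between runs, which is usually what you want for operational questions.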

What Tools Can the Agent Use?

A fresh agent only knows how to chat. To make it actually do things, you give it tools. There are three sources, and you can mix all three.

Broker Tools

Setting BROKER_TOOLS true unlocks built-in tools so the agent can interact with your broker directly — no extra configuration required.
DEFINE ROUTE Assistant WITH TYPE AGENT
    ADD AGENT_CONFIG
        WITH MODEL "llama3.2"
        WITH BROKER_TOOLS "true"
With broker tools enabled, the agent can:
  • List MQTT topics and peek at their latest payloads
  • Read a specific topic on demand
  • Publish messages to any topic
  • List your Routes to see what’s connected
This is the fastest way to get an agent that can answer questions like “what’s the current temperature on factory/sensor/01?” or “which Routes are running right now?”.

MCP Tools

MCP (Model Context Protocol) servers expose tools like file systems, Slack, Google Drive, or custom integrations. Any MCP Route you’ve defined can be wired into an AI Route with MCP_ROUTES:
DEFINE ROUTE Assistant WITH TYPE AGENT
    ADD AGENT_CONFIG
        WITH MODEL "llama3.2"
        WITH MCP_ROUTES "FileMcp, SlackMcp"
The agent discovers every tool exposed by those MCP servers automatically. If SlackMcp offers a send-message tool, the agent can call it when appropriate.

Your Other Routes

The agent also sees every other Route you’ve already defined. If you have a PostgreSQL Route with an InsertReading event, the agent automatically gets a tool named SensorDB.trigger_InsertReading. Same for Modbus, REST, MongoDB, and so on — you don’t need to re-declare anything.
Define your data Routes first, then add the AI Route. The agent inherits all of them as tools the moment it starts.

Insight vs. Agent Mode

Not every AI Route should be able to publish messages or trigger equipment. Use AGENT_MODE to control how much power the agent has:
| Mode | What it can do | Good for |
|---|---|---|
| insight | Read-only: list topics, read payloads, explore Routes | Exploring a broker, answering "what's connected?" |
| agent (default) | Full: read and write, trigger Routes, modify configuration | Automation, chat assistants, hands-on operators |
Start in insight mode while you experiment. The agent can still explain your broker and read data — but it can’t accidentally publish to production topics while you’re tuning the prompt.
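Switching modes is a one-line change. A read-only variant of the broker-tools example (the Route name is illustrative):

```
DEFINE ROUTE Explorer WITH TYPE AGENT
    ADD AGENT_CONFIG
        WITH MODEL "llama3.2"
        WITH BROKER_TOOLS "true"
        WITH AGENT_MODE "insight"
```

When you're happy with the prompt and model, drop the AGENT_MODE line (or set it to "agent") to restore full control.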

Talking to Your Agent

Once a Route is defined, you can invoke it from LoT Actions or from any MQTT client. The usual pattern is to create a thin Action that receives a question, hands it to the agent, and publishes the reply.
The CALL AGENT statement runs the agent and returns the result into a variable. This Action takes whatever message lands on assistant/ask and publishes the answer to assistant/reply:
DEFINE ACTION AskAssistant
ON TOPIC "assistant/ask" DO
    CALL AGENT "Assistant.execute"
        WITH (task = PAYLOAD)
        RETURN AS {reply}

    PUBLISH TOPIC "assistant/reply" WITH {reply}

Interactive Mode

By default the agent runs autonomously: it picks tools, executes them, and returns a final answer. Sometimes it needs clarification — “which production line did you mean?”. Switch to interactive mode to let the agent pause and ask.
DEFINE ROUTE Assistant WITH TYPE AGENT
    ADD AGENT_CONFIG
        WITH MODEL "llama3.2"
        WITH BROKER_TOOLS "true"
        WITH INTERACTION_MODE "interactive"
        WITH INTERACTION_TOPIC "assistant/interact"
        WITH INTERACTION_TIMEOUT "120"
When the agent needs your input:
  1. It publishes the question to assistant/interact/ask
  2. Your UI (or a human with an MQTT client) replies to assistant/interact/reply/{request_id}
  3. The agent continues where it left off, using your answer
If no one replies within INTERACTION_TIMEOUT seconds, the agent moves on instead of hanging.

How It All Fits Together

The diagram below shows how a user question flows through the broker, gathers tools on the way, and produces a reply. Every box on the right is optional. A minimal agent with just MODEL still works — it’ll simply answer from the LLM’s own knowledge.

Best Practices

Never paste API keys directly into a Route definition. Use KEEP SECRET "MY_KEY" WITH "sk-..." once, then reference it with WITH API_KEY GET SECRET "MY_KEY". Secrets are never logged or exposed in stored definitions.
When you’re still shaping the system prompt or picking a model, run the agent with WITH AGENT_MODE "insight". It can still read topics and explore Routes, but it can’t publish or trigger anything — so a bad prompt won’t affect production data.
The default prompt is generic. Tell the agent who it is and what it should refuse to do:
WITH SYSTEM_PROMPT "You are a factory assistant. Answer questions about the production line using broker tools. Do not publish to any topic starting with 'control/' without explicit confirmation."
MAX_ITERATIONS caps how many tool calls the agent can make per task. The default (10) is fine for most questions. Lift it only if you have complex multi-step workflows — runaway loops are both slow and expensive on cloud providers.
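As a sketch, raising the cap for a workflow-heavy agent looks like any other config parameter (the value shown is illustrative):

```
DEFINE ROUTE Assistant WITH TYPE AGENT
    ADD AGENT_CONFIG
        WITH MODEL "llama3.2"
        WITH BROKER_TOOLS "true"
        WITH MAX_ITERATIONS "25"
```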

Troubleshooting

Agent doesn't respond (local Ollama):
  • Make sure Ollama is running: ollama list should respond without errors
  • Confirm the model is pulled: ollama pull llama3.2
  • If Ollama is on a different machine, set WITH BASE_URL "http://host:11434" explicitly
  • Check firewall rules between the broker and the Ollama host

Agent never uses its tools:
  • Confirm BROKER_TOOLS "true" is set (or that MCP_ROUTES lists valid Routes)
  • Smaller local models sometimes ignore tools — try a more capable model (e.g. llama3.2 over llama3:8b, or switch to a cloud provider)
  • Sharpen the SYSTEM_PROMPT so the agent knows it should use tools for data questions

API key errors with cloud providers:
  • Verify the secret actually exists: publishing LIST SECRETS via the command console should show its name
  • Check you're using the right PROVIDER — Anthropic keys don't work for OpenAI and vice versa
  • For Anthropic, make sure your key has access to the specific model you set in MODEL

Answers are cut off or incomplete:
  • Raise MAX_TOKENS — long answers need more room (default is 4096)
  • Check the model actually supports the request (some smaller models struggle with long context)
  • If you chained several tool calls, the agent may have hit MAX_ITERATIONS — increase it and retry

Next Steps

MCP Routes

Plug external tools — Slack, file systems, Google Drive, custom servers — into your agent.

Route Examples

Browse real Route patterns you can combine with an AI Route.