Animated sequence: opening the AI Assistant from the dock, typing a prompt, and the assistant responding and applying changes in Coreflux HUB

Your Broker, Now With a Teammate

The AI Assistant lives inside the same Coreflux MQTT broker that already carries your sensor, PLC, and cloud traffic. You describe what you want in everyday language—ask for insights from the data already flowing through your broker, build Dashboards with live widgets and KPIs, or develop LoT (Language of Things) Actions that run on the broker—and the assistant uses the broker’s own tools to answer, draft, refine, and deploy. It is not a separate chat app pasted on top; it works with your live system.
Like a colleague who already sits at your control desk — they can read the same screens you see, suggest the next step, and (when you allow it) wire up the change for you.

When to Use the AI Assistant

  • Ship integrations faster — Describe connections or logic you need; the assistant drafts and deploys Routes, Actions, and panels where your permissions allow.
  • Explore without guesswork — Ask what topics exist, which Routes are healthy, or how a KPI is calculated before you change anything.
  • Hand off safely — Use read-only mode for operators or auditors who should see answers but not apply changes.

Run Models On-Premise or in the Cloud

Where inference runs is an architecture and compliance choice. The AI Assistant supports edge (on-premise) models and cloud providers so you can keep prompts and payloads on your network when you need to, or use hosted APIs when policy allows.
AI Setup provider list grouped into Cloud (OpenAI selected, Anthropic, Mistral) and Local/Private (Ollama with NO API KEY tag)
  • On-premise (edge) — Run models on hardware you control, for example Ollama on the same host or LAN as the broker, so data does not leave your site.
  • Cloud providers — Connect to hosted models such as OpenAI, Anthropic, or Mistral when your organization allows traffic to leave the network.
The same Assistant panel and modes work whether you point at an edge endpoint or a cloud API—you are not choosing a different product, only where the model runs.
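The "only where the model runs" point can be made concrete with a small sketch. This is illustrative only (Coreflux HUB handles the provider wiring for you): both Ollama and the cloud providers expose an OpenAI-compatible chat API (Ollama serves one at `http://localhost:11434/v1` by default), so switching between edge and cloud is essentially a base URL and API key change, not a different request shape.

```python
# Illustrative sketch, not Coreflux internals: build the same chat request
# against a local Ollama endpoint or a hosted provider. Only the base URL
# and the presence of an API key differ.

def chat_request(provider, model, prompt, api_key=None):
    """Return (url, headers, body) for an OpenAI-compatible chat call."""
    base_urls = {
        "ollama": "http://localhost:11434/v1",   # local: data stays on your network
        "openai": "https://api.openai.com/v1",   # hosted: requires an API key
    }
    headers = {"Content-Type": "application/json"}
    if api_key:  # Ollama needs no key; cloud providers do
        headers["Authorization"] = f"Bearer {api_key}"
    body = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return f"{base_urls[provider]}/chat/completions", headers, body

# Same request shape, different destination:
url_edge, _, _ = chat_request("ollama", "llama3", "List my Routes")
url_cloud, _, _ = chat_request("openai", "gpt-4o", "List my Routes", api_key="sk-...")
```

The model names and URL map above are examples, not a list of what the wizard supports; use the provider and model you actually configured.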

Set up the AI Assistant

The first time you open the AI Assistant, a short wizard walks you through three choices—provider, model, and API key—and prepares your broker. You only do this once per broker; after that, clicking the Coreflux icon opens the assistant directly.
1

Open the AI Setup wizard

Click the Coreflux icon in the bottom dock. On a fresh broker, the AI Setup wizard slides in on the right and asks you to pick where the model will run—a Cloud provider (OpenAI, Anthropic, Mistral) or Local / Private (Ollama). Pick one and click Continue.
Coreflux HUB with the AI Setup wizard open on the right showing Cloud providers OpenAI, Anthropic, Mistral and a Local / Private Ollama option, with a Continue button
2

Pick a model

Choose a model from the suggested list, or type the exact model name your provider supports (for example gpt-5.4-mini) into the custom field. Click Continue when you are ready.
AI Setup wizard model step listing gpt-4o, gpt-4o mini, gpt-4.1, o4 mini and a custom model field with gpt-5.4-mini typed in
3

Add your API key

Paste the API key from your chosen provider. The key is stored as an encrypted secret on the broker and is never sent back to your browser. The wizard also shows a quick summary of what it is about to install so you know what to expect.
If you picked Ollama in the earlier step, the wizard skips this screen entirely—your local Ollama endpoint does not need a key.
AI Setup wizard API key step with an OpenAI API key input, a note that the key is stored as an encrypted secret on the broker, and an install summary card
Sign in to the provider you picked in the previous step and follow their docs to create a key. If you picked Ollama, no key is required—the broker talks to your local Ollama endpoint directly.
4

Install and you're ready

Click Install routes. The broker provisions everything the assistant needs in a few seconds and then closes the wizard automatically.
AI Setup wizard showing a Setting up AI Assistant progress indicator with Setting API key secret status
Once the wizard closes, the AI Assistant is live on this broker. From now on, clicking the Coreflux icon opens the assistant directly—no more setup.

Open the AI Assistant

Use the Coreflux icon in the bottom app dock—the first icon on the left. You can peek at the assistant without opening the full side panel, then click when you want the full workspace.
1

Hover to preview

Hover the Coreflux icon. A compact Coreflux AI bubble appears above the dock with an Ask anything… field and Insight / Agent mode pills so you can send a quick prompt or pick a mode before you commit to the full panel.
Coreflux HUB dock with the Coreflux icon highlighted and a compact Coreflux AI popover showing Ask anything placeholder and Insight and Agent toggles
2

Click to open the full panel

Click the Coreflux icon to pin the AI Assistant on the right. You get suggested prompts (for example Create a dashboard or Calculate a new KPI), a Sources area for dragging in panels such as the LoT Editor, and the full conversation thread with edit and close controls in the header.
Coreflux HUB with the full AI Assistant panel open showing suggested actions, Sources with LoT Editor, and Ask me anything input

Agent vs Insight mode

Pick the mode that matches how much change you want the assistant to make. You can switch at the bottom of the panel at any time.
  • Agent — Can read broker state and create or update things (Routes, Actions, Models, panels), subject to your account permissions. Use it when you want the assistant to build or change integrations, logic, or dashboards.
  • Insight — Read-only: inspects topics, Routes, payloads, and traces; answers questions; can show tables and charts. Use it when you want safe exploration, reporting, or to give users answers without deploy rights.
The mode pills sit below the main input so you always know which behavior you have selected before you send a message.
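One way to picture the Insight guarantee is as tool gating: in a read-only mode, write-capable tools are simply never offered to the model, so there is no deploy, publish, or execute path to misuse. The sketch below is a hypothetical illustration of that pattern, not Coreflux's actual implementation, and the tool names are made up.

```python
# Hypothetical sketch of mode-based tool gating (not Coreflux internals).
# Insight exposes only read tools; Agent adds write tools, which remain
# subject to the user's own permissions on the broker.

READ_TOOLS = {"list_topics", "list_routes", "read_payload", "trace_route"}
WRITE_TOOLS = {"deploy_action", "update_route", "publish", "create_dashboard"}

def tools_for(mode):
    """Return the tool set the assistant may call in the given mode."""
    if mode == "insight":
        return set(READ_TOOLS)              # read-only surface, nothing to deploy
    if mode == "agent":
        return READ_TOOLS | WRITE_TOOLS     # writes still pass permission checks
    raise ValueError(f"unknown mode: {mode}")
```

Gating at the tool level (rather than trusting the model to behave) is what makes a read-only mode safe to hand to operators or auditors.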

What you can do

These are typical starting points—phrase them in your own words; the assistant will ask follow-ups if something is missing.

Build a dashboard

Ask for live gauges, KPI tiles, or layouts bound to your MQTT topics; the assistant drafts and wires widgets you can refine in the panel.

Develop an Action in LoT

Describe triggers, conditions, and outputs in plain language; the assistant proposes LoT you review and deploy without leaving the conversation.

Calculate a new KPI

Describe the metric or inputs you care about; the assistant drafts logic, confirms details with you, and deploys an Action in Agent mode.

Explain your broker state

Ask what Routes or topics exist, why something errored, or how signals relate—natural in Insight mode when you want answers before any changes.
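The KPI flow above follows a trigger–condition–output pattern. The sketch below expresses that pattern in plain Python for clarity; the artifact the assistant actually deploys is a LoT Action on the broker, and the topic names and metric here are invented examples.

```python
# Plain-Python sketch of KPI logic the assistant might deploy as a LoT
# Action (the real artifact is LoT, not Python; topics are made up).
# Pattern: trigger on a new sample, compute the metric, publish the result.

def first_pass_yield(good_count, total_count):
    """Share of good parts as a percentage; 0.0 when no parts were counted."""
    if total_count <= 0:
        return 0.0
    return round(100.0 * good_count / total_count, 2)

def on_message(topic, payload, publish):
    # Trigger: a counter update arrives on the source topic.
    if topic == "line1/counters":
        kpi = first_pass_yield(payload["good"], payload["total"])
        # Output: publish the KPI where dashboard widgets can bind to it.
        publish("line1/kpi/first_pass_yield", kpi)
```

In the conversation flow, the formula and the source topic are exactly the "missing pieces" the assistant pauses to ask you for.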

Example: Build a KPI with Agent mode

Below is one real flow: you ask for a KPI, the assistant requests the missing pieces, then deploys an Action and offers a shortcut to the LoT Editor. Stay in Agent mode for this path.
1

Ask

Type a goal such as Calculate a new KPI. The assistant loads context and may offer quick actions that match what you said.
AI Assistant showing Agent mode selected and suggested prompts including Calculate a new KPI
2

Clarify

When the assistant needs a formula, source topic, or rule, it pauses and asks you directly—you reply in the thread instead of guessing what to type up front.
AI Assistant conversation where the assistant asks the user to provide KPI formula or source data with a Type your answer field
3

Deploy

After you supply enough detail, the assistant can draft LoT, deploy an Action, and show a status card. Use Open in LoT Editor when you want to inspect or tweak the definition yourself.
AI Assistant showing deployed CalculateKPI Action with Running status and Open in LoT Editor link
When you see Running (or equivalent success state) on the deployment card, the new Action is live on the broker—confirm in the LoT Editor if you want to read the exact definition.

Example: Explore safely with Insight mode

Switch to Insight before you send if you only want answers—not changes. You can still get structured output such as tables and follow-up suggestions.
AI Assistant in Insight mode responding to 'Please create a list of routes I have on broker' with a table showing Name, Type, Status, Description, and Events columns for Terminal, BrokerAgent, and MyS7Route
Try prompts like Create me a list of Routes or Which Routes are in a warning state? The assistant reads broker state and renders the response in the panel without using write tools.

Built-in safeguards

  • Your permissions apply — The assistant cannot read topics or change Routes beyond what your HUB user is allowed to do.
  • Insight mode is read-only — No deploy, publish, or execute path while Insight is selected.
  • Stop at any time — Long runs can be cancelled from the UI so you are never stuck waiting on a runaway answer.
  • Same deployment path as humans — Anything the assistant deploys still goes through the broker’s normal validation and permission checks.

Best practices

  • Use Insight to map topics, Routes, and health first. Switch to Agent only when you are ready for the assistant to create or update broker objects.
  • Open new Actions or Routes in the LoT Editor or Routes app to read names, triggers, and bindings before you rely on them in production.

Next steps

LoT Editor

Open and refine Actions, Models, and Rules the assistant creates—same editor, full control.

Routes overview

Add or monitor Routes the assistant can read and help you configure next.