
Build IIoT Solutions Without Memorizing a Single Command

You don’t need to be a LoT (Language of Things) expert to build industrial IoT solutions with Coreflux. With an AI assistant connected to the Coreflux MCP, you describe what you want in plain English, and the AI writes the LoT code for you — correctly, following best practices, using real syntax from the official documentation. This page walks you through the entire workflow: from setting up the MCP connection to deploying a working IIoT feature, using only natural language prompts. Whether you’re monitoring factory temperatures, logging production data to a database, or bridging sensor networks to the cloud — the process is the same.
It's like dictating a blueprint to an expert architect: you describe the building you want — "a sensor system that alerts when temperatures spike" — and the architect draws up structurally sound plans, handles the engineering codes, and hands you something ready to build.

When to Use This Guide

  • You’re new to Coreflux and want to build something real without learning LoT syntax first
  • You have an AI coding assistant (Cursor, Claude, Copilot) and want to use it effectively for IIoT
  • You want a step-by-step walkthrough of the AI-assisted development workflow
  • You’re exploring whether Coreflux fits your industrial automation or IoT data pipeline needs

Prerequisites

Before starting, make sure you have:
| Requirement | Details |
| --- | --- |
| Coreflux Broker | Installed and running. See the Installation Guide |
| AI Assistant with MCP | Cursor, Claude Desktop, Claude.ai, or VS Code with Copilot — connected to the Coreflux MCP |
| MQTT Client | MQTT Explorer or any MQTT client for verifying results |
If you haven’t connected the Coreflux MCP to your AI assistant yet, follow the MCP Setup Guide first. It takes under 5 minutes.
You do not need prior experience with LoT syntax, MQTT, or industrial protocols. The AI assistant handles the technical details — you focus on describing your goals.

The AI-Assisted Workflow

Every AI-assisted Coreflux project follows three phases. This isn’t a suggestion — it’s the workflow that consistently produces working results.
| Phase | You Do | The AI Does |
| --- | --- | --- |
| 1. Plan | Describe the feature in plain English | Asks clarifying questions, proposes an architecture |
| 2. Build | Review and approve the AI's output | Writes LoT code using the MCP for accurate syntax |
| 3. Verify | Test the result in MQTT Explorer | Explains what to check and helps debug issues |
The key insight is that you are the domain expert and the AI is the LoT expert. You know your factory floor, your sensor layout, your business rules. The AI knows the syntax, the patterns, and the best practices. Together, you build faster than either could alone.

Phase 1: Plan Your Feature

The most important step happens before any code is written. A clear plan gives the AI the context it needs to produce working LoT code on the first try.

Describe Your Goal, Not Your Code

Tell the AI what you want to achieve, not how to write it. Let it choose the right LoT building blocks.
The more specific you are about the components you are working with (for example: hardware brands, specific protocols, IP addresses, ports), the better the end result will tend to be.
A clear, goal-oriented description that gives the AI enough context to make architectural decisions:
I have temperature sensors from brand Y, using Modbus, publishing readings to MQTT topics like 
sensors/temp001/reading as JSON: {"value": 23.5, "unit": "celsius"}.

I want to:
1. Monitor these readings and trigger an alert when temperature exceeds 75°C
2. Store all readings in a PostgreSQL database
3. Convert Celsius to Fahrenheit and publish the converted value

The alert should publish to an alerts/ topic with the sensor ID and the value.

The Planning Checklist

Before you prompt your AI assistant, gather answers to these questions. You don’t need to answer all of them — but the more you provide, the better the result.
| Question | Why It Matters | Example Answer |
| --- | --- | --- |
| What data comes in? | Defines the trigger topics and payload format | `sensors/+/reading` with JSON `{"value": 23.5}` |
| What should happen to it? | Determines which LoT building blocks to use | Transform, alert, store, forward |
| Where should results go? | Defines output topics, databases, or external systems | PostgreSQL table, alert topic, cloud broker |
| What are the thresholds or rules? | Sets up conditionals in Actions | Alert when temp > 75°C |
| How many sensors/devices? | Affects topic structure and wildcard usage | 50 sensors, each with a unique ID |
You can share this checklist directly with the AI. Paste it into your prompt and fill in the answers — the AI will use them to design the complete system.

Ask the AI to Propose an Architecture

Once you’ve described your goal, ask the AI to plan before coding. This avoids wasted iteration.
Before writing any code, propose an architecture for this system. 
Tell me which LoT building blocks you'll use (Actions, Models, Routes, Rules) 
and why. List the MQTT topics you'll create.
The AI will consult the Coreflux MCP documentation and respond with something like:
“For this system, I recommend:
  • An Action to monitor temperatures and trigger alerts
  • A Model to structure sensor readings into consistent JSON
  • A Route to store readings in PostgreSQL
  • A callable Action for the Celsius-to-Fahrenheit conversion (reusable)
Topic structure: sensors/+/reading (input), processed/+/fahrenheit (converted), alerts/temperature/+ (alerts)”
Review this before saying “go ahead.” It’s much easier to adjust a plan than to rewrite code.
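The proposed topics rely on MQTT's single-level wildcard (`+`), which matches exactly one topic segment. If wildcard matching is new to you, this small Python sketch (purely illustrative, not part of Coreflux) shows the rule a filter like `sensors/+/reading` follows:

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """MQTT single-level wildcard: '+' matches exactly one topic segment.
    (The multi-level '#' wildcard is not handled in this sketch.)"""
    p_segs, t_segs = pattern.split("/"), topic.split("/")
    if len(p_segs) != len(t_segs):
        return False
    return all(p == "+" or p == t for p, t in zip(p_segs, t_segs))
```

So `sensors/+/reading` matches `sensors/temp001/reading` but not `sensors/a/b/reading`, which is why a single Action can serve every sensor.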

Phase 2: Build with AI

Now let’s walk through a complete guided demo. We’ll build a temperature monitoring system that reads sensor data, converts units, triggers alerts, and logs to a database — all by prompting the AI.

Step 1: Create the Core Logic (Actions)

Start with the Action that processes incoming data. Paste this prompt into your AI assistant:
Using the Coreflux MCP for reference, create a LoT Action called 
ProcessTemperature that:
- Triggers on topic "sensors/+/temperature" 
- Extracts the sensor ID from the topic
- Gets the temperature value from a JSON payload with key "value"
- If the temperature exceeds 75, publishes an alert to 
  "alerts/temperature/" followed by the sensor ID
- Publishes the reading to "processed/" followed by the sensor ID

Use proper LoT syntax with type casting.
The AI will consult the Coreflux documentation through the MCP and produce working LoT code. Here is what a correct result looks like:
DEFINE ACTION ProcessTemperature
ON TOPIC "sensors/+/temperature" DO
    SET "sensor_id" WITH TOPIC POSITION 2
    SET "temp" WITH (GET JSON "value" IN PAYLOAD AS DOUBLE)
    IF {temp} > 75 THEN
        PUBLISH TOPIC "alerts/temperature/" + {sensor_id} WITH "HIGH TEMP: " + {temp} + "°C"
    PUBLISH TOPIC "processed/" + {sensor_id} WITH {temp}
The AI uses the MCP’s consult_documentation and consult_internal_documentation tools behind the scenes to look up correct syntax. You don’t need to tell it to use the MCP — it does this automatically when it detects a Coreflux-related question.
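If it helps to see the Action's logic in familiar terms, here is a rough Python equivalent (purely illustrative; the real logic runs as LoT inside the broker, and the helper name is our own):

```python
import json

ALERT_THRESHOLD = 75.0  # same threshold as the Action above

def process_temperature(topic: str, payload: str) -> list:
    """Return the (topic, message) pairs the Action would publish."""
    sensor_id = topic.split("/")[1]             # TOPIC POSITION 2: second segment
    temp = float(json.loads(payload)["value"])  # GET JSON "value" ... AS DOUBLE
    publishes = []
    if temp > ALERT_THRESHOLD:
        publishes.append((f"alerts/temperature/{sensor_id}", f"HIGH TEMP: {temp}°C"))
    publishes.append((f"processed/{sensor_id}", str(temp)))
    return publishes
```

Publishing `{"value": 82.3}` to `sensors/temp001/temperature` would yield both an alert on `alerts/temperature/temp001` and the processed reading.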

Step 2: Add Unit Conversion (Callable Action)

Now ask the AI to add a reusable conversion utility:
Create a callable LoT Action called ConvertToFahrenheit that takes a Celsius 
value as input and returns the Fahrenheit equivalent. Then update 
ProcessTemperature to call it and publish the result to 
"processed/" + sensor ID + "/fahrenheit".
The AI will produce two connected Actions:
DEFINE ACTION ConvertToFahrenheit
INPUT celsius AS DOUBLE
DO
    SET "fahrenheit" WITH ({celsius} * 9 / 5 + 32)
RETURN
    OUTPUT fahrenheit

DEFINE ACTION ProcessTemperature
ON TOPIC "sensors/+/temperature" DO
    SET "sensor_id" WITH TOPIC POSITION 2
    SET "temp" WITH (GET JSON "value" IN PAYLOAD AS DOUBLE)
    IF {temp} > 75 THEN
        PUBLISH TOPIC "alerts/temperature/" + {sensor_id} WITH "HIGH TEMP: " + {temp} + "°C"
    PUBLISH TOPIC "processed/" + {sensor_id} WITH {temp}
    CALL ACTION ConvertToFahrenheit
        WITH celsius = {temp}
        RETURN temp_f
    PUBLISH TOPIC "processed/" + {sensor_id} + "/fahrenheit" WITH {temp_f}
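The conversion itself is the standard formula, which you can sanity-check anywhere; here it is as a Python one-liner (shown only to verify the arithmetic, not as LoT code):

```python
def convert_to_fahrenheit(celsius: float) -> float:
    # Same arithmetic as the callable Action: C * 9 / 5 + 32
    return celsius * 9 / 5 + 32
```

The 82.3 °C test value used in Phase 3 converts to 180.14 °F.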

Step 3: Structure the Data (Model)

Ask the AI to create a Model that formats raw sensor readings into consistent JSON:
Create a LoT Model called TemperatureReading that publishes structured JSON 
to "sensors/formatted/temperature". It should include the sensor ID as a 
static string "TEMP001", the raw value from the sensor topic as the trigger, 
the unit as "celsius", and a UTC timestamp.
The result:
DEFINE MODEL TemperatureReading WITH TOPIC "sensors/formatted/temperature"
    ADD STRING "sensor_id" WITH "TEMP001"
    ADD DOUBLE "value" WITH TOPIC "sensors/raw/temperature" AS TRIGGER
    ADD STRING "unit" WITH "celsius"
    ADD STRING "timestamp" WITH TIMESTAMP "UTC"
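The payload this Model produces has roughly the shape sketched below (Python, illustrative only; the field names match the Model above, but the broker's exact timestamp formatting may differ):

```python
import json
from datetime import datetime, timezone

def temperature_reading(raw_value: float) -> str:
    """Approximate shape of the JSON the TemperatureReading Model publishes."""
    return json.dumps({
        "sensor_id": "TEMP001",                               # static string
        "value": raw_value,                                   # from the trigger topic
        "unit": "celsius",                                    # static string
        "timestamp": datetime.now(timezone.utc).isoformat(),  # UTC timestamp
    })
```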

Step 4: Connect to a Database (Route)

Now bring in a Route to persist the data. Ask the AI:
Create a LoT Route called SensorDatabase that stores temperature readings 
in a PostgreSQL database. It should:
- Connect to a PostgreSQL server at "db.example.com" port 5432
- Use database "iot_data" with user "iot_writer" and password "secure_pass"
- Insert readings from "sensors/+/temperature" into a table called 
  "temperature_readings" with columns: timestamp, sensor_id, and value

Use proper LoT route syntax from the Coreflux documentation.
The AI generates:
DEFINE ROUTE SensorDatabase WITH TYPE POSTGRESQL
    ADD SQL_CONFIG
        WITH SERVER "db.example.com"
        WITH PORT '5432'
        WITH DATABASE "iot_data"
        WITH USERNAME "iot_writer"
        WITH PASSWORD "secure_pass"
    ADD EVENT StoreReading
        WITH SOURCE_TOPIC "sensors/+/temperature"
        WITH QUERY "INSERT INTO temperature_readings (timestamp, sensor_id, value) VALUES (NOW(), '{value.topic.2}', '{value.json.value}')"
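The `{value.topic.2}` and `{value.json.value}` placeholders pull the sensor ID from the topic and the reading from the JSON payload. This Python sketch (illustrative only; the substitution really happens inside the broker) shows the intended expansion:

```python
import json

QUERY_TEMPLATE = (
    "INSERT INTO temperature_readings (timestamp, sensor_id, value) "
    "VALUES (NOW(), '{sensor_id}', '{value}')"
)

def render_query(topic: str, payload: str) -> str:
    """Expand the Route's placeholders for one incoming message."""
    sensor_id = topic.split("/")[1]       # {value.topic.2}: second topic segment
    value = json.loads(payload)["value"]  # {value.json.value}: the "value" field
    return QUERY_TEMPLATE.format(sensor_id=sensor_id, value=value)
```

A reading published to `sensors/temp001/temperature` thus becomes one row with that sensor's ID and value.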

Step 5: Deploy Everything

Ask the AI how to deploy your code:
How do I deploy all of these LoT definitions to my Coreflux broker? 
I'm using VS Code with the LoT Notebooks extension.
The AI will explain the deployment process:
The recommended way to deploy LoT code is through the LoT Notebooks extension in VS Code or Cursor.
1. **Create a Notebook**: Create a new `.lotnb` file in your project. Each cell can contain one or more LoT definitions.
2. **Add Your Code**: Paste each definition into its own cell — this lets you deploy and test them individually.
3. **Run Each Cell**: Execute cells in order. The extension sends the LoT code to your connected Coreflux broker.
4. **Verify in MQTT Explorer**: Open MQTT Explorer and subscribe to `#` (all topics). Publish a test payload to `sensors/temp001/temperature` and watch the system respond.

Phase 3: Verify and Iterate

After deploying, test the system by publishing a simulated sensor reading.

Test Your System

Open MQTT Explorer (or any MQTT client) and follow these steps:
1. **Subscribe to All Topics**: Subscribe to `#` to see all messages flowing through the broker.
2. **Publish a Test Reading**: Publish to `sensors/temp001/temperature` with this payload: `{"value": 82.3}`
3. **Check the Results**: You should see messages appear on these topics:

   | Topic | Expected Value | Source |
   | --- | --- | --- |
   | `processed/temp001` | 82.3 | ProcessTemperature Action |
   | `processed/temp001/fahrenheit` | 180.14 | ConvertToFahrenheit callable |
   | `alerts/temperature/temp001` | HIGH TEMP: 82.3°C | Alert threshold exceeded |

4. **Test Below Threshold**: Publish to `sensors/temp001/temperature` with `{"value": 45.0}`. Verify that no alert is generated — only the processed and fahrenheit topics should update.

When Something Doesn’t Work

If the output isn’t what you expect, describe the problem to the AI:
I deployed the ProcessTemperature action, but when I publish 
{"value": 82.3} to sensors/temp001/temperature, I don't see 
any alert on alerts/temperature/temp001. What could be wrong?
The AI will consult the MCP documentation and help you debug — checking for type casting issues, topic mismatches, or deployment errors. You don’t need to debug LoT syntax yourself. Describe the symptom, and the AI diagnoses the cause.

Prompt Patterns That Work

After building dozens of IIoT features with AI, we've found that these prompt patterns consistently produce the best results.

Pattern 1: Context → Goal → Constraints

Provide the situation, then state what you want, then add any constraints:
Context: I have 20 pressure sensors publishing to "sensors/pressure/+/psi" 
with numeric payloads.
Goal: Alert when any sensor exceeds 150 PSI and log all readings to a 
PostgreSQL table.
Constraints: Alerts must include sensor ID and timestamp. Use a single 
Action for monitoring and a Route for database logging.

Pattern 2: Ask for Explanation Before Code

When learning, ask the AI to explain its choices:
I need to store sensor data in a database. Should I use a Route or an Action 
for this? Explain the trade-offs, then show me the LoT code for the 
recommended approach. Use the Coreflux documentation for reference.

Pattern 3: Incremental Building

Build complex systems one piece at a time rather than all at once:
Step 1: "Create an Action that reads temperature and publishes to a processed topic"
Step 2: "Now add an alert when temperature exceeds 75"
Step 3: "Add a Route to store readings in PostgreSQL"
Step 4: "Create a Model to structure the data as JSON with sensor ID and timestamp"
Each step is small enough for the AI to get right, and you verify as you go.

Pattern 4: Reference the MCP Explicitly

When you need maximum accuracy, tell the AI to consult the documentation:
Check the Coreflux documentation: what is the correct syntax for defining 
a Modbus TCP route that reads holding registers from a PLC at 192.168.1.10?
Give me a complete LoT route definition.
This triggers the AI to call consult_documentation or consult_internal_documentation through the MCP, ensuring the answer uses real, verified syntax.

Common Pitfalls

These mistakes happen frequently when developers start using AI for LoT development. Knowing them upfront saves hours of debugging.
| Pitfall | Why It Happens | How to Avoid It |
| --- | --- | --- |
| Vague prompts | AI fills in gaps with assumptions that may not match your system | Always specify topic structure, payload format, and thresholds |
| Skipping the plan | Jumping straight to "write me an Action" without context | Ask the AI to propose an architecture before writing code |
| Not verifying output | Trusting AI code without testing it | Always deploy and test with a sample payload in MQTT Explorer |
| Prompting without MCP | The AI invents plausible-looking but incorrect LoT syntax | Ensure the Coreflux MCP is connected — check your MCP settings |
| Building everything at once | One massive prompt produces code that's hard to debug | Build incrementally: one Action, one Model, one Route at a time |
| Ignoring type casting | Numeric operations fail silently without AS DOUBLE or AS INT | Ask the AI to always include type casts — or review the Best Practices |
If your AI assistant is not connected to the Coreflux MCP, it may generate LoT-like code that looks correct but uses invented syntax. Always verify the MCP connection before starting a session. Check that the consult_documentation and consult_internal_documentation tools appear in your assistant’s available tools.

Expanding Your System

Once the basic monitoring system works, you can extend it one prompt at a time. Here are natural next steps:
| Feature | Example Prompt |
| --- | --- |
| Email alerts | "Add a LoT email route that sends an email when `alerts/temperature/+` receives a message" |
| Cloud sync | "Create an MQTT bridge route that forwards all `processed/` topics to a cloud broker at cloud.example.com" |
| Industrial protocol | "Create a Modbus TCP route to read holding registers 0-10 from a PLC at 192.168.1.50 every 5 seconds" |
| Access control | "Create a LoT Rule that only allows admin users to publish to `config/` topics" |
| Data aggregation | "Create an Action that calculates a 5-minute rolling average of temperature readings" |
Each of these is a single prompt that the AI can handle using the MCP documentation. The workflow is always the same: describe → review → deploy → verify.
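As one example, the "data aggregation" prompt asks for a 5-minute rolling average. The underlying technique is a time-windowed buffer; this minimal Python sketch (our own illustration, not generated LoT code) shows the idea:

```python
from collections import deque
import time

class RollingAverage:
    """Average of the samples seen within the last `window_seconds`."""

    def __init__(self, window_seconds=300.0):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, value) pairs, oldest first

    def add(self, value, now=None):
        """Record a reading and return the current windowed average."""
        now = time.time() if now is None else now
        self.samples.append((now, value))
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()  # drop readings older than the window
        return sum(v for _, v in self.samples) / len(self.samples)
```

Feed each incoming reading to `add()` and publish the returned average; the AI-generated Action would express the same logic in LoT.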

Next Steps