Installing Coreflux with Docker
This guide covers how to install and run Coreflux MQTT Broker using Docker. We'll start with the simplest method and progress to more advanced setups.
Prerequisites
Before you begin, ensure you have Docker installed:
- Docker: Version 20.10 or later
- Install Docker on Linux
- Install Docker Desktop on Windows
- Install Docker Desktop on Mac
Verify your installation:
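For example:

```shell
docker --version
# Should report version 20.10 or later
```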
For the Docker Compose method, you'll also need Docker Compose (included with Docker Desktop):
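You can confirm Compose is available with:

```shell
docker compose version   # or: docker-compose --version on older installs
```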
Method 1: Simple Docker Run (Quickest Start)
The simplest way to run Coreflux is with a single docker run command:
Linux/Mac
```bash
docker run -d \
  --name coreflux-broker \
  -p 1883:1883 \
  -p 8883:8883 \
  -p 9001:9001 \
  -v $(pwd)/coreflux-data:/data \
  -e COREFLUX_LOG_LEVEL=INFO \
  --restart unless-stopped \
  coreflux/broker:latest
```
Windows (PowerShell)
```powershell
docker run -d `
  --name coreflux-broker `
  -p 1883:1883 `
  -p 8883:8883 `
  -p 9001:9001 `
  -v ${PWD}/coreflux-data:/data `
  -e COREFLUX_LOG_LEVEL=INFO `
  --restart unless-stopped `
  coreflux/broker:latest
```
Verify It's Running
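Check the container status and logs, using the container name from the command above:

```shell
docker ps --filter name=coreflux-broker
docker logs coreflux-broker
```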
That's it! Your Coreflux broker is now running and accessible on port 1883.
Method 2: Using a Dockerfile (Customizable)
If you need to customize the broker configuration or add your own scripts, create a Dockerfile.
1. Create the Dockerfile
Create a file named Dockerfile in your project directory with the following content:
```dockerfile
# Coreflux MQTT Broker - Standalone Dockerfile
FROM coreflux/broker:latest

# Metadata
LABEL maintainer="Coreflux Team"
LABEL description="Coreflux MQTT Broker with LOT Language Support"
LABEL version="1.7.2"

# Set working directory
WORKDIR /app

# Create necessary directories
RUN mkdir -p /data /config /logs /python-scripts

# Environment variables
ENV COREFLUX_DATA_DIR=/data \
    COREFLUX_CONFIG_DIR=/config \
    COREFLUX_LOG_DIR=/logs \
    COREFLUX_PYTHON_DIR=/python-scripts \
    COREFLUX_LOG_LEVEL=INFO \
    COREFLUX_MQTT_PORT=1883 \
    COREFLUX_MQTT_TLS_PORT=8883 \
    COREFLUX_WS_PORT=9001

# Expose ports
EXPOSE 1883 8883 9001

# Health check ($SYS is single-quoted so the shell does not expand it)
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
    CMD mosquitto_sub -t '$SYS/broker/uptime' -C 1 -W 2 || exit 1

# Volume mount points
VOLUME ["/data", "/config", "/logs", "/python-scripts"]

# Default command
CMD ["coreflux-broker", "--config", "/config/coreflux.conf"]
```
2. Build the Image
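From the directory containing the Dockerfile, build and tag the image (the tag `coreflux-broker:custom` matches the run command in the next step):

```shell
docker build -t coreflux-broker:custom .
```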
3. Run the Container
```bash
docker run -d \
  --name coreflux-broker \
  -p 1883:1883 \
  -p 8883:8883 \
  -p 9001:9001 \
  -v $(pwd)/coreflux-data:/data \
  -v $(pwd)/python-scripts:/python-scripts \
  coreflux-broker:custom
```
Method 3: Docker Compose (Full Development Stack)
For a complete development environment with data storage and visualization, use Docker Compose.
1. Create the docker-compose.yml File
Create a file named docker-compose.yml in your project directory with the following content:
```yaml
version: '3.8'

services:
  # Coreflux MQTT Broker - Main IoT platform
  coreflux-broker:
    image: coreflux/broker:latest
    container_name: coreflux-broker
    ports:
      - "1883:1883"   # MQTT
      - "8883:8883"   # MQTT over TLS
      - "9001:9001"   # WebSockets
    volumes:
      - ./coreflux-data:/data
      - ./coreflux-config:/config
    environment:
      - COREFLUX_LOG_LEVEL=INFO
      - COREFLUX_DATA_DIR=/data
      - COREFLUX_CONFIG_DIR=/config
    networks:
      - coreflux-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "mosquitto_sub", "-t", "$$SYS/broker/uptime", "-C", "1", "-W", "2"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # OpenSearch - Data storage for LOT examples
  opensearch:
    image: opensearchproject/opensearch:2.11.0
    container_name: coreflux-opensearch
    environment:
      - cluster.name=coreflux-cluster
      - node.name=opensearch-node1
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
      - DISABLE_SECURITY_PLUGIN=true   # For development only
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data:/usr/share/opensearch/data
    ports:
      - "9200:9200"   # REST API
      - "9600:9600"   # Performance Analyzer
    networks:
      - coreflux-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  # OpenSearch Dashboards - Visualization
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.11.0
    container_name: coreflux-dashboards
    ports:
      - "5601:5601"
    environment:
      - OPENSEARCH_HOSTS=http://opensearch:9200
      - DISABLE_SECURITY_DASHBOARDS_PLUGIN=true   # For development only
    networks:
      - coreflux-network
    depends_on:
      - opensearch
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:5601/api/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

networks:
  coreflux-network:
    driver: bridge
    name: coreflux-network

volumes:
  opensearch-data:
    driver: local
    name: coreflux-opensearch-data
```
2. Start All Services
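From the directory containing docker-compose.yml:

```shell
docker-compose up -d
```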
This will:
- Pull the necessary Docker images (first time only)
- Create and start the Coreflux broker, OpenSearch, and OpenSearch Dashboards
- Set up a Docker network for communication between the services
3. Verify All Services
Check that all services are running:
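For example:

```shell
docker-compose ps
```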
You should see three services running:
- coreflux-broker on ports 1883, 8883, 9001
- coreflux-opensearch on port 9200
- coreflux-dashboards on port 5601
View logs:
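For example, to follow the broker's logs:

```shell
docker-compose logs -f coreflux-broker
```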
Connecting to the Broker
Regardless of which method you used, the broker is now accessible.
Connection Details
- Host: localhost (or your Docker host IP)
- MQTT Port: 1883
- MQTT TLS Port: 8883
- WebSocket Port: 9001
- Default Username: root
- Default Password: coreflux
Using Mosquitto Client
Install mosquitto-clients if you don't have them:
```bash
# Ubuntu/Debian
sudo apt-get install mosquitto-clients

# macOS
brew install mosquitto

# Windows (using Chocolatey)
choco install mosquitto
```
Subscribe to a topic:
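Using the default credentials listed above (`test/topic` is just an example topic name):

```shell
mosquitto_sub -h localhost -p 1883 -u root -P coreflux -t 'test/topic'
```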
Publish a message:
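From another terminal, publish to the same example topic:

```shell
mosquitto_pub -h localhost -p 1883 -u root -P coreflux -t 'test/topic' -m 'Hello, Coreflux!'
```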
Testing LOT Actions
Deploy a simple heartbeat action:
```bash
mosquitto_pub -h localhost -p 1883 -u root -P coreflux \
  -t '$SYS/Coreflux/Command' \
  -m '-addAction DEFINE ACTION TestHeartbeat
ON EVERY 5 SECONDS DO
    PUBLISH TOPIC "system/heartbeat" WITH "alive"'
```
Subscribe to see the heartbeat:
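In another terminal:

```shell
mosquitto_sub -h localhost -p 1883 -u root -P coreflux -t 'system/heartbeat'
```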
You should see alive messages every 5 seconds.
Configuration Options
Environment Variables
You can configure the broker using environment variables:
| Variable | Default | Description |
|---|---|---|
| `COREFLUX_LOG_LEVEL` | `INFO` | Logging level (DEBUG, INFO, WARN, ERROR) |
| `COREFLUX_DATA_DIR` | `/data` | Data persistence directory |
| `COREFLUX_CONFIG_DIR` | `/config` | Configuration files directory |
| `COREFLUX_MQTT_PORT` | `1883` | MQTT port |
| `COREFLUX_MQTT_TLS_PORT` | `8883` | MQTT over TLS port |
| `COREFLUX_WS_PORT` | `9001` | WebSocket port |
Persistent Data
Data is automatically persisted in Docker volumes:
- Broker data: `./coreflux-data` - stores Actions, Models, Routes, and Rules
- Configuration: `./coreflux-config` - broker configuration files
- OpenSearch data: Docker-managed volume (docker-compose only)
Adding Python Scripts for LOT
LOT can integrate with Python for advanced processing. To add Python scripts:
1. Create a directory for Python scripts:
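For example:

```shell
mkdir -p python-scripts
```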
2. Create your Python script:
```bash
cat > python-scripts/Calculator.py << 'EOF'
# Script Name: Calculator

def add(a, b):
    """Add two numbers"""
    return a + b

def multiply(a, b):
    """Multiply two numbers"""
    return a * b

def safe_divide(a, b):
    """Safely divide with error handling"""
    try:
        return a / b
    except ZeroDivisionError:
        return {"error": "Division by zero"}
EOF
```
3. Mount the scripts directory:
For docker run, add the volume mount:
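For example, a trimmed-down sketch of the Method 1 command with the extra mount added (ports omitted for brevity):

```shell
docker run -d \
  --name coreflux-broker \
  -p 1883:1883 \
  -v $(pwd)/coreflux-data:/data \
  -v $(pwd)/python-scripts:/python-scripts \
  coreflux/broker:latest
```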
For docker-compose, add to the volumes section:
```yaml
services:
  coreflux-broker:
    volumes:
      - ./coreflux-data:/data
      - ./coreflux-config:/config
      - ./python-scripts:/python-scripts
```
4. Register the Python script with the broker:
```bash
mosquitto_pub -h localhost -p 1883 -u root -P coreflux \
  -t '$SYS/Coreflux/Command' \
  -m '-addPython /python-scripts/Calculator.py'
```
5. Use it in a LOT action:
```bash
mosquitto_pub -h localhost -p 1883 -u root -P coreflux \
  -t '$SYS/Coreflux/Command' \
  -m '-addAction DEFINE ACTION MathTest
ON TOPIC "math/calculate" DO
    CALL PYTHON "Calculator.add"
    WITH (5, 3)
    RETURN AS {result}
    PUBLISH TOPIC "math/result" WITH {result}'
```
Accessing OpenSearch Dashboards
If you're using Method 3 (Docker Compose with full stack):
- Open your browser to `http://localhost:5601`
- No credentials are needed (security is disabled for development)
- Create index patterns to visualize your LOT data stored via OpenSearch routes
Managing Docker Services
For docker run (Method 1 & 2):
Stop the broker:
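Using the container name from the examples above:

```shell
docker stop coreflux-broker
```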
Start the broker:
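To bring a stopped container back up:

```shell
docker start coreflux-broker
```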
Remove the container:
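This deletes the container (stop it first); data in the mounted volumes is preserved:

```shell
docker rm coreflux-broker
```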
View logs:
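Follow the broker's logs with:

```shell
docker logs -f coreflux-broker
```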
For docker-compose (Method 3):
Stop all services:
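Run from the directory containing docker-compose.yml:

```shell
docker-compose stop
```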
Start all services:
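To bring the stopped stack back up:

```shell
docker-compose start
```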
Restart services:
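Restart everything (or pass a single service name to restart just that one):

```shell
docker-compose restart
```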
Stop and remove containers:
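This removes the containers and the network, but keeps the named volumes:

```shell
docker-compose down
```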
Stop and remove containers with volumes:
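Note that this also deletes the OpenSearch data volume:

```shell
docker-compose down -v
```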
View logs:
```bash
# All services
docker-compose logs -f

# Specific service
docker-compose logs -f coreflux-broker

# Last 100 lines
docker-compose logs --tail=100 coreflux-broker
```
Troubleshooting
Broker won't start
Check the logs:
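For example:

```shell
docker logs coreflux-broker
# or, for the compose stack:
docker-compose logs coreflux-broker
```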
Common issues:
- Port already in use: another MQTT broker or service is using port 1883
  - Solution: change the port mapping in docker-compose.yml (e.g., `"11883:1883"`)
- Permission denied: volume mount permissions
  - Solution: ensure the mounted directories have the correct permissions
Cannot connect to broker
- Verify the broker is running:
- Check if ports are accessible:
- Check firewall settings
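The first two checks can be done with `docker ps` and a simple port probe (this assumes `nc` is installed; `telnet localhost 1883` works too):

```shell
docker ps --filter name=coreflux-broker
nc -zv localhost 1883
```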
OpenSearch won't start
OpenSearch requires sufficient memory. Increase Docker's memory limit:
- Docker Desktop: Settings → Resources → Memory (at least 4GB recommended)
- Linux: No limit by default, but check system resources
Python integration not working
- Verify the Python scripts are mounted in the container
- Check script syntax and headers:
  - Must start with `# Script Name: YourScriptName`
  - Must be valid Python
- Add the script via MQTT command:

```bash
mosquitto_pub -h localhost -p 1883 -u root -P coreflux \
  -t '$SYS/Coreflux/Command' \
  -m '-addPython /python-scripts/Calculator.py'
```
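To verify the scripts are mounted, list the directory inside the container:

```shell
docker exec coreflux-broker ls /python-scripts
```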
Production Deployment
For production use, you should:
- Enable Security:
  - Enable the OpenSearch security plugin
  - Configure TLS/SSL for MQTT
  - Change the default credentials
- Use Secrets Management:
  - Use Docker secrets or environment files
  - Don't commit credentials to version control
- Configure Resource Limits:

```yaml
services:
  coreflux-broker:
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          memory: 512M
```
- Enable Monitoring:
  - Add Prometheus/Grafana for metrics
  - Configure log aggregation
- Backup Strategy:
  - Take regular backups of volumes
  - Export LOT definitions
Next Steps
Now that you have Coreflux running in Docker:
- Learn LOT Syntax: LOT Language Introduction
- Create Actions: Working with Actions
- Define Models: Understanding Models
- Set up Routes: Configuring Routes
- Python Integration: Using Python with LOT
Additional Resources
Support
If you encounter issues:
- Check the Troubleshooting section
- Visit the Coreflux Community Discord
- Review the GitHub Issues