Installing Coreflux with Docker

This guide covers how to install and run Coreflux MQTT Broker using Docker. We'll start with the simplest method and progress to more advanced setups.

Prerequisites

Before you begin, ensure you have Docker installed.

Verify your installation:

docker --version

For the Docker Compose method, you'll also need Docker Compose (included with Docker Desktop):

docker-compose --version

Method 1: Simple Docker Run (Quickest Start)

The simplest way to run Coreflux is with a single docker run command:

Linux/Mac

docker run -d \
  --name coreflux-broker \
  -p 1883:1883 \
  -p 8883:8883 \
  -p 9001:9001 \
  -v $(pwd)/coreflux-data:/data \
  -e COREFLUX_LOG_LEVEL=INFO \
  --restart unless-stopped \
  coreflux/broker:latest

Windows (PowerShell)

docker run -d `
  --name coreflux-broker `
  -p 1883:1883 `
  -p 8883:8883 `
  -p 9001:9001 `
  -v ${PWD}/coreflux-data:/data `
  -e COREFLUX_LOG_LEVEL=INFO `
  --restart unless-stopped `
  coreflux/broker:latest

Verify It's Running

docker ps
docker logs coreflux-broker

That's it! Your Coreflux broker is now running and accessible on port 1883.


Method 2: Using a Dockerfile (Customizable)

If you need to customize the broker configuration or add your own scripts, create a Dockerfile.

1. Create the Dockerfile

Create a file named Dockerfile in your project directory with the following content:

# Coreflux MQTT Broker - Standalone Dockerfile
FROM coreflux/broker:latest

# Metadata
LABEL maintainer="Coreflux Team"
LABEL description="Coreflux MQTT Broker with LOT Language Support"
LABEL version="1.7.2"

# Set working directory
WORKDIR /app

# Create necessary directories
RUN mkdir -p /data /config /logs /python-scripts

# Environment variables
ENV COREFLUX_DATA_DIR=/data \
    COREFLUX_CONFIG_DIR=/config \
    COREFLUX_LOG_DIR=/logs \
    COREFLUX_PYTHON_DIR=/python-scripts \
    COREFLUX_LOG_LEVEL=INFO \
    COREFLUX_MQTT_PORT=1883 \
    COREFLUX_MQTT_TLS_PORT=8883 \
    COREFLUX_WS_PORT=9001

# Expose ports
EXPOSE 1883 8883 9001

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
  CMD mosquitto_sub -t '$SYS/broker/uptime' -C 1 -W 2 || exit 1

# Volume mount points
VOLUME ["/data", "/config", "/logs", "/python-scripts"]

# Default command
CMD ["coreflux-broker", "--config", "/config/coreflux.conf"]

2. Build the Image

docker build -t coreflux-broker:custom .

3. Run the Container

docker run -d \
  --name coreflux-broker \
  -p 1883:1883 \
  -p 8883:8883 \
  -p 9001:9001 \
  -v $(pwd)/coreflux-data:/data \
  -v $(pwd)/python-scripts:/python-scripts \
  coreflux-broker:custom

Method 3: Docker Compose (Full Development Stack)

For a complete development environment with data storage and visualization, use Docker Compose.

1. Create the docker-compose.yml File

Create a file named docker-compose.yml in your project directory with the following content:

version: '3.8'

services:
  # Coreflux MQTT Broker - Main IoT platform
  coreflux-broker:
    image: coreflux/broker:latest
    container_name: coreflux-broker
    ports:
      - "1883:1883"     # MQTT
      - "8883:8883"     # MQTT over TLS
      - "9001:9001"     # WebSockets
    volumes:
      - ./coreflux-data:/data
      - ./coreflux-config:/config
    environment:
      - COREFLUX_LOG_LEVEL=INFO
      - COREFLUX_DATA_DIR=/data
      - COREFLUX_CONFIG_DIR=/config
    networks:
      - coreflux-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "mosquitto_sub", "-t", "$$SYS/broker/uptime", "-C", "1", "-W", "2"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  # OpenSearch - Data storage for LOT examples
  opensearch:
    image: opensearchproject/opensearch:2.11.0
    container_name: coreflux-opensearch
    environment:
      - cluster.name=coreflux-cluster
      - node.name=opensearch-node1
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "OPENSEARCH_JAVA_OPTS=-Xms512m -Xmx512m"
      - DISABLE_SECURITY_PLUGIN=true  # For development only
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - opensearch-data:/usr/share/opensearch/data
    ports:
      - "9200:9200"   # REST API
      - "9600:9600"   # Performance Analyzer
    networks:
      - coreflux-network
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:9200/_cluster/health || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

  # OpenSearch Dashboards - Visualization
  opensearch-dashboards:
    image: opensearchproject/opensearch-dashboards:2.11.0
    container_name: coreflux-dashboards
    ports:
      - "5601:5601"
    environment:
      - OPENSEARCH_HOSTS=http://opensearch:9200
      - DISABLE_SECURITY_DASHBOARDS_PLUGIN=true  # For development only
    networks:
      - coreflux-network
    depends_on:
      - opensearch
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:5601/api/status || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 5

networks:
  coreflux-network:
    driver: bridge
    name: coreflux-network

volumes:
  opensearch-data:
    driver: local
    name: coreflux-opensearch-data
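
Before bringing anything up, you can sanity-check the file; docker-compose config validates the YAML and prints the fully resolved configuration:

docker-compose config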

2. Start All Services

docker-compose up -d

This will:

  • Pull the necessary Docker images (first time only)
  • Create and start the Coreflux broker, OpenSearch, and OpenSearch Dashboards
  • Set up a Docker network for communication between the services

3. Verify All Services

Check that all services are running:

docker-compose ps

You should see three services running:

  • coreflux-broker on ports 1883, 8883, 9001
  • coreflux-opensearch on port 9200
  • coreflux-dashboards on port 5601

View logs:

docker-compose logs -f coreflux-broker

Connecting to the Broker

Regardless of which method you used, the broker is now accessible.

Connection Details

Host: localhost (or your Docker host IP)
MQTT Port: 1883
MQTT TLS Port: 8883
WebSocket Port: 9001
Default Username: root
Default Password: coreflux

Using Mosquitto Client

Install mosquitto-clients if you don't have them:

# Ubuntu/Debian
sudo apt-get install mosquitto-clients

# macOS
brew install mosquitto

# Windows (using Chocolatey)
choco install mosquitto

Subscribe to a topic:

mosquitto_sub -h localhost -p 1883 -u root -P coreflux -t "test/topic" -v

Publish a message:

mosquitto_pub -h localhost -p 1883 -u root -P coreflux -t "test/topic" -m "Hello from Docker!"

Testing LOT Actions

Deploy a simple heartbeat action:

mosquitto_pub -h localhost -p 1883 -u root -P coreflux \
  -t '$SYS/Coreflux/Command' \
  -m '-addAction DEFINE ACTION TestHeartbeat
ON EVERY 5 SECONDS DO
    PUBLISH TOPIC "system/heartbeat" WITH "alive"'

Subscribe to see the heartbeat:

mosquitto_sub -h localhost -p 1883 -u root -P coreflux -t "system/heartbeat" -v

You should see alive messages every 5 seconds.

Configuration Options

Environment Variables

You can configure the broker using environment variables:

Variable                  Default    Description
COREFLUX_LOG_LEVEL        INFO       Logging level (DEBUG, INFO, WARN, ERROR)
COREFLUX_DATA_DIR         /data      Data persistence directory
COREFLUX_CONFIG_DIR       /config    Configuration files directory
COREFLUX_MQTT_PORT        1883       MQTT port
COREFLUX_MQTT_TLS_PORT    8883       MQTT over TLS port
COREFLUX_WS_PORT          9001       WebSocket port
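
For example, to start the broker from Method 1 with verbose logging (the TLS and WebSocket port mappings are omitted here for brevity):

docker run -d \
  --name coreflux-broker \
  -p 1883:1883 \
  -v $(pwd)/coreflux-data:/data \
  -e COREFLUX_LOG_LEVEL=DEBUG \
  --restart unless-stopped \
  coreflux/broker:latest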

Persistent Data

Data is automatically persisted in Docker volumes:

  • Broker data: ./coreflux-data - Stores Actions, Models, Routes, Rules
  • Configuration: ./coreflux-config - Broker configuration files
  • OpenSearch data: Docker managed volume (docker-compose only)
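
A simple backup sketch: the bind-mounted directories can be archived directly from the host, and the named OpenSearch volume can be exported through a throwaway container (stop the services first for a consistent copy; archive names here are just examples):

# Back up the bind-mounted broker directories
tar czf coreflux-backup-$(date +%F).tar.gz coreflux-data coreflux-config

# Export the named OpenSearch volume (docker-compose setups only)
docker run --rm \
  -v coreflux-opensearch-data:/source:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/opensearch-data-$(date +%F).tar.gz -C /source .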

Adding Python Scripts for LOT

LOT can integrate with Python for advanced processing. To add Python scripts:

1. Create a directory for Python scripts:

mkdir -p python-scripts

2. Create your Python script:

cat > python-scripts/Calculator.py << 'EOF'
# Script Name: Calculator
def add(a, b):
    """Add two numbers"""
    return a + b

def multiply(a, b):
    """Multiply two numbers"""
    return a * b

def safe_divide(a, b):
    """Safely divide with error handling"""
    try:
        return a / b
    except ZeroDivisionError:
        return {"error": "Division by zero"}
EOF

3. Mount the scripts directory:

For docker run, add the volume mount:

-v $(pwd)/python-scripts:/python-scripts

For docker-compose, add to the volumes section:

services:
  coreflux-broker:
    volumes:
      - ./coreflux-data:/data
      - ./coreflux-config:/config
      - ./python-scripts:/python-scripts

4. Register the Python script with the broker:

mosquitto_pub -h localhost -p 1883 -u root -P coreflux \
  -t '$SYS/Coreflux/Command' \
  -m '-addPython /python-scripts/Calculator.py'

5. Use it in a LOT action:

mosquitto_pub -h localhost -p 1883 -u root -P coreflux \
  -t '$SYS/Coreflux/Command' \
  -m '-addAction DEFINE ACTION MathTest
ON TOPIC "math/calculate" DO
    CALL PYTHON "Calculator.add"
        WITH (5, 3)
        RETURN AS {result}
    PUBLISH TOPIC "math/result" WITH {result}'

Accessing OpenSearch Dashboards

If you're using Method 3 (Docker Compose with full stack):

  1. Open your browser to http://localhost:5601
  2. No credentials needed (security is disabled for development)
  3. Create index patterns to visualize your LOT data stored via OpenSearch routes
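
You can also confirm that OpenSearch itself is reachable before building dashboards; the health endpoint below is the same one the Compose health check uses, and the second command lists any indices your routes have created:

# Cluster health (should report green or yellow)
curl "http://localhost:9200/_cluster/health?pretty"

# List indices created by LOT routes
curl "http://localhost:9200/_cat/indices?v"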

Managing Docker Services

For docker run (Method 1 & 2):

Stop the broker:

docker stop coreflux-broker

Start the broker:

docker start coreflux-broker

Remove the container:

docker rm -f coreflux-broker

View logs:

docker logs -f coreflux-broker

For docker-compose (Method 3):

Stop all services:

docker-compose stop

Start all services:

docker-compose start

Restart services:

docker-compose restart

Stop and remove containers:

docker-compose down

Stop and remove containers with volumes:

docker-compose down -v

View logs:

# All services
docker-compose logs -f

# Specific service
docker-compose logs -f coreflux-broker

# Last 100 lines
docker-compose logs --tail=100 coreflux-broker

Troubleshooting

Broker won't start

Check the logs:

docker-compose logs coreflux-broker

Common issues:

  • Port already in use: another MQTT broker or service is already listening on port 1883. Solution: change the host-side port mapping in docker-compose.yml (e.g., "11883:1883"); see the commands below to find the conflicting process.
  • Permission denied: the volume mount permissions are wrong. Solution: ensure the mounted directories have the correct permissions.
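
To find out which process is bound to port 1883 before changing the mapping, the usual host tools work (commands vary by platform):

# Linux
sudo ss -ltnp | grep 1883

# macOS
sudo lsof -iTCP:1883 -sTCP:LISTEN

# Windows (PowerShell)
netstat -ano | findstr 1883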

Cannot connect to broker

  1. Verify the broker is running:

docker-compose ps

  2. Check if ports are accessible:

# Linux/Mac
nc -zv localhost 1883

# Windows (PowerShell)
Test-NetConnection -ComputerName localhost -Port 1883

  3. Check firewall settings

OpenSearch won't start

OpenSearch requires sufficient memory. Increase Docker's memory limit:

  • Docker Desktop: Settings → Resources → Memory (at least 4GB recommended)
  • Linux: No limit by default, but check system resources
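
On Linux hosts, OpenSearch (like Elasticsearch) also typically requires a higher vm.max_map_count than the default; if the container exits with a bootstrap check error about max virtual memory areas, raising it usually resolves the issue:

# Apply immediately
sudo sysctl -w vm.max_map_count=262144

# Persist across reboots
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf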

Python integration not working

  1. Verify Python scripts are mounted:

docker-compose exec coreflux-broker ls -la /python-scripts

  2. Check script syntax and headers:
  • The file must start with # Script Name: YourScriptName
  • The file must be valid Python

  3. Add the script via MQTT command:

mosquitto_pub -h localhost -p 1883 -u root -P coreflux \
  -t '$SYS/Coreflux/Command' \
  -m '-addPython /python-scripts/Calculator.py'

Production Deployment

For production use, you should:

  1. Enable Security:
  • Enable the OpenSearch security plugin
  • Configure TLS/SSL for MQTT
  • Change the default credentials

  2. Use Secrets Management:
  • Use Docker secrets or environment files (see the .env sketch after this list)
  • Don't commit credentials to version control

  3. Configure Resource Limits:

services:
  coreflux-broker:
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          memory: 512M

  4. Enable Monitoring:
  • Add Prometheus/Grafana for metrics
  • Configure log aggregation

  5. Backup Strategy:
  • Take regular backups of volumes
  • Export LOT definitions
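
As a sketch of the secrets point above: docker-compose automatically reads an untracked .env file in the project directory and substitutes ${VAR} references in docker-compose.yml, so credentials never need to live in the committed file. The variable names below are only examples; map them to whatever settings your broker configuration actually uses:

# Create an untracked .env file next to docker-compose.yml
cat > .env << 'EOF'
MQTT_ADMIN_USER=changeme
MQTT_ADMIN_PASSWORD=changeme
EOF

# Make sure it never reaches version control
echo ".env" >> .gitignore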

Next Steps

Now that you have Coreflux running in Docker:

  1. Learn LOT Syntax: LOT Language Introduction
  2. Create Actions: Working with Actions
  3. Define Models: Understanding Models
  4. Set up Routes: Configuring Routes
  5. Python Integration: Using Python with LOT

Additional Resources

Support

If you encounter issues: