So you’re running Docker containers and need to schedule some tasks? Let me introduce you to Ofelia – think of it as cron’s cooler, Docker-savvy cousin. Built with Go, it’s lightweight and specifically made for containerized environments. In this guide, we’re going to set up Ofelia using Docker Compose with config files (not labels – we’ll talk about why in a sec).
What’s Ofelia All About?
Ofelia is a job scheduler for Docker that lets you run commands inside your containers, spin up temporary containers for quick jobs, or even run stuff on your host machine. Created by mcuadros, it basically wraps Docker’s API to work like docker exec and docker run, but on a schedule. Pretty neat, right?
What Can It Do?
- Run commands inside your existing containers
- Spin up brand new containers just for a task (then trash them when done)
- Execute stuff directly on your host machine
- Works with Docker Swarm if you’re into that
- Uses familiar cron scheduling syntax (or handy shortcuts like `@hourly`)
- Send notifications via email, save logs to files, or ping Slack
- Prevent jobs from running on top of each other with the no-overlap feature
Why Config Files Over Labels?
Ofelia can use either labels (stuck right in your docker-compose.yml) or separate config files. Here's why I prefer config files (for contrast, there's a label-based sketch right after this list):
- Everything in One Spot: All your scheduled jobs live in one file instead of scattered across multiple container definitions
- Git-Friendly: Changes to schedules don’t mess with your docker-compose.yml, making version control cleaner
- Keep Things Separate: Your app containers don’t need to know anything about when they’re being scheduled
- Way Easier to Debug: Trust me, reading a config.ini is much nicer than hunting through YAML labels
- Manage Multiple Environments: Swap config files for dev/staging/prod without touching anything else
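To make the contrast concrete, here's roughly what the same kind of scheduling looks like with labels – a minimal sketch using Ofelia's documented `ofelia.*` label names (the `cache-clear` job name is just an example; label mode also means starting Ofelia with `daemon --docker` instead of pointing it at a config file):

```yaml
services:
  app:
    image: your-app:latest
    labels:
      ofelia.enabled: "true"
      ofelia.job-exec.cache-clear.schedule: "@every 1h"
      ofelia.job-exec.cache-clear.command: "php artisan cache:clear"
```

One job in, this is already noisier than the equivalent INI section – and it only gets worse as jobs multiply across services.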
Let’s Set Up Docker Compose
Alright, time to get our hands dirty. Here’s a basic Docker Compose setup with Ofelia using a config file.
Basic Docker Compose Configuration
```yaml
version: '3.8'

services:
  ofelia:
    image: mcuadros/ofelia:latest
    container_name: ofelia
    restart: unless-stopped
    depends_on:
      - app
    command: daemon --config=/etc/ofelia/config.ini
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./ofelia/config.ini:/etc/ofelia/config.ini:ro
    networks:
      - app-network

  app:
    image: your-app:latest
    container_name: app
    restart: unless-stopped
    volumes:
      - ./app-data:/data
    networks:
      - app-network

networks:
  app-network:
    driver: bridge
```
The Important Bits
- Docker Socket: That `/var/run/docker.sock` mount is crucial – it's how Ofelia talks to Docker
- Config File: We're mounting our `config.ini` to `/etc/ofelia/config.ini` inside the container
- Read-Only Mounts: The `:ro` flag means read-only – good security practice
- Network: Make sure Ofelia's on the same network as the containers you want to schedule jobs on
How Config Files Work
The config file is just an INI file with sections for each job you want to run. Simple stuff.
The Four Job Types
Ofelia’s got four ways to run jobs:
- job-exec: Run commands in containers that are already running
- job-run: Spin up a new container, run something, then kill it
- job-local: Run commands where the Ofelia process itself lives – the host if Ofelia runs as a binary there, or the Ofelia container if it's dockerized
- job-service-run: For Docker Swarm folks (runs as a service)
Config File Examples (The Good Stuff)
Example 1: Basic Job-Exec
This is probably what you’ll use most – running stuff in containers that are already up and running.
```ini
[job-exec "database-backup"]
schedule = 0 2 * * *
container = postgres
# docker exec doesn't interpret ">" by itself, so wrap redirection in a shell
command = sh -c 'pg_dump -U postgres mydb > /backup/backup.sql'
user = postgres

[job-exec "cache-clear"]
schedule = @every 1h
container = app
command = php artisan cache:clear
```
Example 2: Job-Run
Need to spin up a fresh container just for a task? Job-run’s your friend.
```ini
[job-run "cleanup-task"]
schedule = 0 3 * * *
image = alpine:latest
command = sh -c 'find /data -type f -mtime +30 -delete'
volume = /var/app-data:/data:rw
environment = TZ=UTC
environment = LOG_LEVEL=info
delete = true

[job-run "data-processor"]
schedule = @every 15m
image = mycompany/data-processor:latest
command = python process.py
volume = /host/input:/app/input:ro
volume = /host/output:/app/output:rw
environment = API_KEY=your-api-key
environment = ENVIRONMENT=production
network = app-network
```
Example 3: Job-Local
Sometimes you just need to run something where Ofelia itself lives. One caveat: if Ofelia runs in a container (like in our compose setup), job-local commands execute inside that Ofelia container, not on the Docker host.
```ini
[job-local "disk-cleanup"]
schedule = 0 4 * * *
command = find /tmp -type f -mtime +7 -delete
dir = /tmp

[job-local "system-health-check"]
schedule = @every 10m
command = /usr/local/bin/health-check.sh
```
Example 4: Job-Service-Run
If you’re running Docker Swarm, here’s how you’d set up a service job.
```ini
[job-service-run "swarm-backup"]
schedule = 0 2 * * *
image = backup-image:latest
network = swarm_network
command = /backup.sh
```
Example 5: Kitchen Sink (Everything at Once)
Here’s what it looks like when you use all the bells and whistles.
```ini
# Global configuration for logging
[global]
save-folder = /var/log/ofelia
save-only-on-error = false

# Job with all available options
[job-exec "full-featured-job"]
schedule = 0 */6 * * *
container = myapp
command = /app/scripts/maintenance.sh
user = appuser
tty = false
no-overlap = true

# Multiple environment variables
[job-run "env-example"]
schedule = @daily
image = ubuntu:22.04
command = env
environment = VAR1=value1
environment = VAR2=value2
environment = VAR3=value3
volume = /host/path:/container/path:ro
network = custom-network
delete = true
```
What Options Can You Use?
Stuff That Works for All Job Types
- schedule: Your cron expression or shortcuts like `@hourly`, `@daily`, `@every 10s`
- no-overlap: Stops a job from running if it's already running (default: false)
Job-Exec Options
- container: Which container to run the command in (you gotta specify this)
- command: What to actually run (also required)
- user: Run as a specific user (optional)
- tty: Allocate a pseudo-TTY if you need it (default: false)
Job-Run Options
- image: Which Docker image to use (required)
- command: What command to run
- volume: Mount volumes (you can have multiple of these)
- network: Which network to connect to
- environment: Environment variables (multiple allowed)
- delete: Clean up the container when done (default: true)
- user: Run as a specific user
- tty: Allocate a pseudo-TTY (default: false)
Job-Local Options
- command: What to run on the host (required)
- dir: Where to run it from (working directory)
Job-Service-Run Options
- image: Docker image to use (required)
- network: Swarm network to use (required)
- command: What to execute
- delete: Clean up the service after (default: true)
Schedule Formats (Cron Cheat Sheet)
Ofelia uses the Go cron library's scheduling format, plus some nice shortcuts to make life easier. One version-dependent gotcha: some Ofelia releases expect a leading seconds field (six fields, e.g. 0 0 2 * * * for 2 AM daily), so if a five-field schedule gets rejected in the logs, try prepending a 0 for seconds:
```ini
# Standard cron expressions
schedule = 0 2 * * *      # Every day at 2 AM
schedule = */15 * * * *   # Every 15 minutes
schedule = 0 */6 * * *    # Every 6 hours
schedule = 0 0 * * 0      # Every Sunday at midnight
schedule = 0 9 * * 1-5    # Weekdays at 9 AM

# Shortcut expressions
schedule = @yearly        # Once a year (0 0 1 1 *)
schedule = @monthly       # Once a month (0 0 1 * *)
schedule = @weekly        # Once a week (0 0 * * 0)
schedule = @daily         # Once a day (0 0 * * *)
schedule = @hourly        # Once an hour (0 * * * *)

# Interval expressions
schedule = @every 30s     # Every 30 seconds
schedule = @every 5m      # Every 5 minutes
schedule = @every 2h      # Every 2 hours
schedule = @every 24h     # Every 24 hours
```
Cool Advanced Stuff
Stop Jobs From Stepping on Each Other
Got a long-running job that might still be going when the next one tries to start? Use no-overlap to prevent that mess:
```ini
[job-exec "long-running-task"]
schedule = @every 5m
container = worker
command = /app/process-queue.sh
no-overlap = true
```
Passing Environment Variables
For job-run stuff, you can pass environment variables into your containers:
```ini
[job-run "api-sync"]
schedule = @hourly
image = myapp/sync:latest
command = python sync.py
environment = API_URL=https://api.example.com
environment = API_TOKEN=${API_TOKEN}
environment = SYNC_MODE=incremental
```
Note: Environment variables from the host can be referenced using `${VAR_NAME}` syntax.
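For that substitution to resolve, the variable needs to be visible to the Ofelia process itself. A minimal sketch of passing it through in the compose file (assumes the setup from earlier; `API_TOKEN` is expected to come from your shell or an `.env` file next to docker-compose.yml):

```yaml
services:
  ofelia:
    image: mcuadros/ofelia:latest
    command: daemon --config=/etc/ofelia/config.ini
    environment:
      # Forwarded from the host shell or a .env file
      - API_TOKEN=${API_TOKEN}
```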
Mounting Volumes
Need to share files between your host and the container? Mount some volumes:
```ini
[job-run "log-processor"]
schedule = 0 1 * * *
image = logprocessor:latest
command = process-logs
volume = /var/log/app:/logs:ro
volume = /var/processed:/output:rw
```
Volume format: `host-path:container-path:mode`, where mode is `ro` (read-only) or `rw` (read-write).
Running Jobs on Multiple Containers
You can totally schedule different jobs on different containers:
```ini
[job-exec "app-cleanup"]
schedule = @daily
container = app
command = php artisan cleanup

[job-exec "db-optimize"]
schedule = @weekly
container = database
command = mysqlcheck --optimize --all-databases
```
Getting Notified When Stuff Happens
Ofelia’s got three ways to log things, all configured in the [global] section:
File Logging
```ini
[global]
save-folder = /var/log/ofelia
save-only-on-error = false
```
Email Notifications
```ini
[global]
smtp-host = smtp.example.com
smtp-port = 587
smtp-user = notifications@example.com
smtp-password = your-password
email-to = admin@example.com
email-from = ofelia@example.com
mail-only-on-error = true
```
Slack Notifications
```ini
[global]
slack-webhook = https://hooks.slack.com/services/YOUR/WEBHOOK/URL
slack-only-on-error = true
```
Full Working Example (Copy-Paste Ready)
Here’s everything put together – a complete setup you can actually use:
docker-compose.yml
```yaml
version: '3.8'

services:
  postgres:
    image: postgres:15
    container_name: postgres
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: secretpassword
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./backups:/backups
    networks:
      - app-network

  redis:
    image: redis:7-alpine
    container_name: redis
    networks:
      - app-network

  web:
    image: nginx:alpine
    container_name: web
    volumes:
      - ./app:/usr/share/nginx/html:ro
    networks:
      - app-network

  ofelia:
    image: mcuadros/ofelia:latest
    container_name: ofelia
    restart: unless-stopped
    depends_on:
      - postgres
      - redis
      - web
    command: daemon --config=/etc/ofelia/config.ini
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./ofelia/config.ini:/etc/ofelia/config.ini:ro
      - ./ofelia/logs:/var/log/ofelia
    networks:
      - app-network

volumes:
  postgres-data:

networks:
  app-network:
    driver: bridge
```
ofelia/config.ini
```ini
# Global logging configuration
[global]
save-folder = /var/log/ofelia
save-only-on-error = false

# Daily database backup at 2 AM
[job-exec "postgres-backup"]
schedule = 0 2 * * *
container = postgres
# Redirection and $(date) need a shell, so wrap the command in sh -c
# (no crontab-style \% escaping needed in an INI file)
command = sh -c 'pg_dump -U postgres myapp > /backups/myapp_$(date +%Y%m%d).sql'
user = postgres
no-overlap = true

# Clear Redis cache every 6 hours
[job-exec "redis-cleanup"]
schedule = 0 */6 * * *
container = redis
command = redis-cli FLUSHDB

# Rotate nginx logs daily
[job-exec "nginx-log-rotation"]
schedule = @daily
container = web
command = sh -c 'mv /var/log/nginx/access.log /var/log/nginx/access.log.old && nginx -s reopen'

# Clean up old backup files (runs in a temporary container);
# job-run host paths go straight to the Docker daemon, so use an
# absolute path to the project's backups directory, not ./backups
[job-run "backup-cleanup"]
schedule = 0 3 * * 0
image = alpine:latest
command = find /backups -name "*.sql" -type f -mtime +30 -delete
volume = /path/to/project/backups:/backups:rw
delete = true

# Container health check; since Ofelia is containerized here, this
# job-local runs inside the Ofelia container and assumes a docker CLI
# is available to it
[job-local "health-check"]
schedule = @every 5m
command = docker ps --format "table {{.Names}}\t{{.Status}}" | grep -v "Up"
```
Testing Before You Go Live
Don’t just deploy this straight to production – test it first:
1. Validate Config File Syntax: Start the containers and check logs:
```sh
docker-compose up -d
docker-compose logs -f ofelia
```
2. Test Individual Jobs: Modify schedules to run more frequently during testing:
```ini
schedule = @every 30s   # For testing
```
3. Check Job Execution: Monitor the logs to ensure jobs are executing:
```sh
docker-compose logs ofelia | grep "Started"
```
4. Verify Job Output: Check that your jobs are producing expected results.
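For the backup job above, for example, a quick spot-check might look like this (paths assume the full example's `./backups` bind mount):

```sh
# Did last night's dump actually land on the host?
ls -lh ./backups/

# Sanity-check that the newest dump isn't empty or truncated
head -n 5 "$(ls -t ./backups/*.sql | head -n 1)"
```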
Thoughts
Ofelia’s a solid choice for scheduling jobs in Docker. By using config files instead of labels, you keep your scheduling logic separate and your setup cleaner. Whether you’re backing up databases, clearing caches, or processing data, Ofelia’s got you covered.
The secret sauce is planning your schedules properly, testing everything in dev first, and keeping an eye on those logs. With the examples in this guide, you should be good to go. Happy scheduling!
FAQ
How do I run a cron job inside a Docker container?
To run a cron job inside a Docker container, you need to install cron in your Dockerfile, create a crontab file, and start the cron service. Here’s a basic example:
```dockerfile
FROM ubuntu:20.04

RUN apt-get update && apt-get install -y cron

COPY mycron /etc/cron.d/mycron
RUN chmod 0644 /etc/cron.d/mycron
RUN crontab /etc/cron.d/mycron
RUN touch /var/log/cron.log

CMD cron && tail -f /var/log/cron.log
```
The crontab file should end with a newline (LF format, not CRLF) for proper functionality.
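For reference, a minimal mycron could look like this – user-crontab format, since the Dockerfile above installs it via crontab (the echo job is just a placeholder):

```
# Runs every minute; the file must end with a trailing LF newline
* * * * * echo "cron is alive" >> /var/log/cron.log 2>&1
```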
Why isn’t my cron job running in Docker?
Common reasons include: cron service not started (use service cron start or cron -f), incorrect file permissions (crontab files need 0644 permissions), missing newline at end of crontab file, CRLF line endings instead of LF (Windows vs Unix format), or the container exiting immediately because no foreground process is running.
How do I access environment variables in Docker cron jobs?
Cron doesn’t automatically pass environment variables. The most common solution is to export your environment variables to /etc/environment before starting cron:
```sh
printenv > /etc/environment
cron -f
```
Alternatively, you can source environment variables directly in your cron script by reading from /proc/1/environ.
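A sketch of that second approach – reading PID 1's null-delimited environment at the top of your cron script (assumes bash; handles values that contain spaces):

```bash
#!/usr/bin/env bash
# Import the container's original environment (PID 1) into this cron job
while IFS= read -r -d '' entry; do
  export "$entry"
done < /proc/1/environ

# ... rest of the job, now with the container's env vars available
```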
What’s the difference between running cron inside a container vs. on the host?
Running cron inside the container keeps everything containerized and portable, but violates the “one process per container” principle. Running cron on the host system and executing Docker containers provides better resource management and easier monitoring. For example:
```
# Host crontab
0 2 * * * docker run --rm myapp/backup:latest
```
How do I keep my Docker container running with cron?
Docker containers need a foreground process to stay running. Use cron -f to run cron in the foreground, or combine cron with another command:
```dockerfile
CMD cron && tail -f /var/log/cron.log
```
The tail -f command keeps the container alive by continuously reading the log file.
What are the file permission requirements for cron in Docker?
Crontab files in /etc/cron.d/ must be owned by root and have 0644 permissions. Scripts must be executable (chmod +x). Wrong ownership or permissions will cause cron to reject the file with errors like “WRONG FILE OWNER”.
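In a Dockerfile, that usually boils down to one extra RUN after the COPY (mycron is the illustrative file name from earlier):

```dockerfile
COPY mycron /etc/cron.d/mycron
# cron insists on root ownership and 0644 permissions for /etc/cron.d/ files
RUN chown root:root /etc/cron.d/mycron && chmod 0644 /etc/cron.d/mycron
```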
How do I debug cron jobs in Docker containers?
Install rsyslog to get cron logs, use docker logs container_name to view output, redirect cron job output to a log file, and verify cron is running with service cron status. Run cron in foreground with debug flag for detailed output:
```sh
cron -f -d 8
```
Why does my crontab file format matter in Docker?
Cron requires Unix line endings (LF), not Windows line endings (CRLF). Files created on Windows may have CRLF endings that break cron. Additionally, crontab files must end with an empty newline or cron will reject them with “Missing newline before EOF” error.
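Normalizing the file before the image build is a one-liner with either of these (dos2unix may need installing first):

```sh
# Strip carriage returns in place
sed -i 's/\r$//' mycron
# or, equivalently
dos2unix mycron
```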
What’s the syntax difference for cron files in /etc/cron.d/ vs user crontabs?
Files in /etc/cron.d/ require a username field between the schedule and command, while user crontabs don’t:
```
# /etc/cron.d/ format (with user field)
* * * * * root /path/to/script.sh

# User crontab format (no user field)
* * * * * /path/to/script.sh
```
Should I use Kubernetes CronJobs instead of Docker cron?
For production environments with Kubernetes, Kubernetes CronJobs are generally recommended. They provide better scheduling, scaling, monitoring, and integration with the Kubernetes ecosystem. However, for simple single-container deployments or development, Docker cron works fine.
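For comparison, the host-crontab backup from earlier would look roughly like this as a Kubernetes CronJob (a minimal sketch; the image name is reused from that example):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup
spec:
  schedule: "0 2 * * *"   # same cron syntax as before
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: backup
              image: myapp/backup:latest
          restartPolicy: OnFailure
```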
How do I handle timezone issues with Docker cron?
Docker containers typically use UTC by default. To set a different timezone, link the appropriate timezone file:
```dockerfile
RUN ln -sf /usr/share/zoneinfo/America/New_York /etc/localtime
RUN dpkg-reconfigure -f noninteractive tzdata
```
Or set the TZ environment variable in your Dockerfile.
Why does Docker’s layered filesystem cause issues with cron?
Docker’s layered filesystem creates multiple hard links to files, and cron has a security policy rejecting files with hard link count greater than 1. The solution is to touch the crontab files before starting cron:
```sh
touch /etc/crontab /etc/cron.*/*
service cron start
```
This creates new instances of the files, breaking the hard links.
What’s the best practice for logging cron output in Docker?
Redirect cron job output to stdout/stderr or a log file that you tail to keep the container running. For example:
```
# In crontab file
* * * * * /script.sh >> /var/log/cron.log 2>&1

# Or redirect to container stdout
* * * * * /script.sh > /proc/1/fd/1 2>/proc/1/fd/2
```
Which base image should I use for Docker cron – Ubuntu or Alpine?
Ubuntu is more beginner-friendly with standard cron packages, while Alpine is lightweight but may require additional configuration. Alpine uses /var/spool/cron/crontabs/ instead of /etc/cron.d/ and has slightly different cron syntax (no username field in crontabs). Choose based on your size requirements and familiarity.
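For the Alpine route, a minimal sketch looks like this (busybox crond; note the different crontab location and the user-crontab format without a username field):

```dockerfile
FROM alpine:3.19
# busybox crond reads user crontabs from /var/spool/cron/crontabs/
COPY mycron /var/spool/cron/crontabs/root
RUN chmod 0600 /var/spool/cron/crontabs/root
# -f: run in the foreground (keeps the container alive); -l 2: log level
CMD ["crond", "-f", "-l", "2"]
```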
How do I run multiple services (cron + application) in one Docker container?
While it violates the one-process-per-container principle, you can run multiple services using a shell command:
```dockerfile
CMD ["sh", "-c", "cron && nginx -g 'daemon off;'"]
```
This starts cron in the background and nginx in the foreground. Consider using supervisord for more complex multi-process setups, or better yet, separate containers for better maintainability.
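If you go the supervisord route, the config stays short – a sketch (program names are arbitrary):

```ini
; supervisord.conf: supervisord runs as PID 1 and babysits both processes
[supervisord]
nodaemon=true

[program:cron]
command=cron -f

[program:nginx]
command=nginx -g 'daemon off;'
```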
