So, you’ve got containers running on your server, and they’re churning out log files like crazy? Yeah, that’s pretty much every Docker setup ever. If you don’t manage those logs, they’ll eventually eat up all your disk space, and nobody wants that surprise at 2:00 AM.
Enter blacklabelops/logrotate – a super handy Docker container that sits alongside your other containers and automatically rotates, compresses, and cleans up old log files. Think of it as a janitor for your logs. It’s based on the classic Linux logrotate utility but packaged in a container so you can easily deploy it anywhere.
What Does This Thing Actually Do?
This container crawls through directories you specify, finds log files (or whatever file types you tell it to look for), and rotates them based on your schedule. It can compress old logs, delete really old ones, and basically keep your disk usage under control without you having to manually clean things up every week.
The beauty of it? It’s a side-car container. Just point it at your log directories, set a few environment variables, and let it do its thing.
Docker Compose Setup
Here’s a basic docker-compose.yml to get you started. This example rotates Docker container logs daily and keeps the last 7 copies:
```yaml
version: '3.8'

services:
  logrotate:
    image: blacklabelops/logrotate:latest
    container_name: logrotate
    restart: unless-stopped
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers
      - /var/log/docker:/var/log/docker
    environment:
      - LOGS_DIRECTORIES=/var/lib/docker/containers /var/log/docker
      - LOGROTATE_INTERVAL=daily
      - LOGROTATE_COPIES=7
      - LOGROTATE_COMPRESSION=compress
      - TZ=America/New_York
```
Want something more comprehensive? Here’s a beefed-up version with more options:
```yaml
version: '3.8'

services:
  logrotate:
    image: blacklabelops/logrotate:latest
    container_name: logrotate
    restart: unless-stopped
    volumes:
      - /var/lib/docker/containers:/var/lib/docker/containers
      - /var/log/docker:/var/log/docker
      - /var/log/apps:/var/log/apps
      - ./logs:/logs
      - ./logrotate-status:/logrotate-status
    environment:
      # Directories to scan for logs
      - LOGS_DIRECTORIES=/var/lib/docker/containers /var/log/docker /var/log/apps
      # Rotation settings
      - LOGROTATE_INTERVAL=daily
      - LOGROTATE_COPIES=14
      - LOGROTATE_SIZE=100M
      # Compression
      - LOGROTATE_COMPRESSION=compress
      - LOGROTATE_DELAYCOMPRESS=true
      # Date formatting
      - LOGROTATE_DATEFORMAT=-%Y%m%d
      # Age and size limits
      - LOGROTATE_MAXAGE=30
      - LOGROTATE_MINSIZE=10M
      # Output directory for old logs
      - LOGROTATE_OLDDIR=/logs/archive
      # Status file location
      - LOGROTATE_STATUSFILE=/logrotate-status/logrotate.status
      # Logging
      - LOGROTATE_LOGFILE=/logs/logrotate.log
      - LOG_FILE=/logs/cron.log
      # File types to rotate
      - LOG_FILE_ENDINGS=log json
      # Timezone
      - TZ=America/New_York
```
Installation Steps
Getting this running is pretty straightforward:
1. Create a directory for your docker-compose file:
```bash
mkdir -p ~/logrotate
cd ~/logrotate
```
2. Create your docker-compose.yml file with one of the examples above (adjust paths and settings to your needs).
3. Create directories for logs and status if you’re using custom paths:
```bash
mkdir -p logs logrotate-status
```
4. Fire it up:
```bash
docker-compose up -d
```
5. Check if it’s running:
```bash
docker-compose ps
docker-compose logs -f logrotate
```
That’s it. The container will now run on its configured schedule and handle log rotation automatically.
Environment Variables Explained
Alright, let’s break down all the environment variables you can use to customize this thing. There are quite a few, but don’t worry – you don’t need to use them all.
LOGS_DIRECTORIES
What it does: Tells logrotate which directories to scan for log files.
Format: Space-separated list of absolute paths.
Example: LOGS_DIRECTORIES=/var/log/docker /var/log/apps /var/log/nginx
Default: None (you must specify this)
This is the most important setting. Point it at any directories where your containers are writing logs.
LOG_FILE_ENDINGS
What it does: Specifies which file extensions to rotate.
Format: Space-separated list of extensions (without the dot).
Example: LOG_FILE_ENDINGS=log json txt xml
Default: log
By default, it only looks for .log files. If your apps write to .json files or anything else, add them here.
LOGROTATE_INTERVAL
What it does: Sets how often logs get rotated.
Options: hourly, daily, weekly, monthly, yearly
Example: LOGROTATE_INTERVAL=daily
Default: daily
Most folks use daily, but if you have super chatty apps, hourly might be better. Or if logs are minimal, weekly works too.
LOGROTATE_COPIES
What it does: Number of rotated log files to keep before deleting old ones.
Format: Integer value.
Example: LOGROTATE_COPIES=10
Default: 5
So if you set this to 10, you’ll have 10 historical versions of each log file before the oldest one gets deleted. Balance this with your available disk space.
LOGROTATE_SIZE
What it does: Rotate logs when they exceed this size, regardless of the interval.
Format: Number followed by k (kilobytes), M (megabytes), or G (gigabytes).
Example: LOGROTATE_SIZE=100M
Default: None (size-based rotation is disabled by default)
This is great for high-traffic apps. Even if you rotate daily, logs won’t grow past this size because they’ll trigger an extra rotation.
LOGROTATE_COMPRESSION
What it does: Enables compression of rotated logs.
Options: compress or nocompress
Example: LOGROTATE_COMPRESSION=compress
Default: nocompress
Turn this on to save disk space. Compressed logs take up way less room, which is especially useful if you’re keeping many copies.
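To get a rough feel for the savings, you can compress a sample log on any Linux box. The file name and contents below are made up for illustration; the point is that repetitive log lines compress extremely well:

```shell
# Generate a fake, highly repetitive access log (~400 KB)
yes "10.0.0.1 - - GET /health HTTP/1.1 200" | head -n 10000 > sample.log

# Compress a copy (-k keeps the original, -f overwrites any old .gz) and compare
gzip -kf sample.log
ls -l sample.log sample.log.gz
```

On typical text logs, gzip shrinks the file to a small fraction of its original size, which is why compression pairs well with a high LOGROTATE_COPIES value.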
LOGROTATE_DELAYCOMPRESS
What it does: When compression is enabled, this delays compression of the most recent rotation by one cycle.
Options: true or false
Example: LOGROTATE_DELAYCOMPRESS=false
Default: true (when compression is enabled)
If you have scripts or tools that need to access the most recent rotated log without decompressing it, keep this as true.
LOGROTATE_MODE
What it does: Sets the rotation mode.
Options: copytruncate (default) or create <mode> <owner> <group>
Example: LOGROTATE_MODE=create 0644
Default: copytruncate
Most of the time, copytruncate works fine. It copies the log file and then truncates it, so your app can keep writing to the same file. Use create mode if your log collection tools need the file to be renamed and recreated.
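The copytruncate behavior is easy to mimic with plain shell commands (file names here are illustrative, not what the container uses internally):

```shell
# A writer appends to app.log; logrotate copies it, then empties it in place
echo "old entry" > app.log

cp app.log app.log.1   # copy the current contents to the rotated file
: > app.log            # truncate the original; the writer's open file handle stays valid

cat app.log.1          # the rotated copy holds the old entry
wc -c < app.log        # the live log is now empty (0 bytes)
```

Because the original file is never renamed or recreated, the writing process doesn’t need to reopen anything. The trade-off: any lines written in the small window between the copy and the truncate are lost, which is why copytruncate can drop a few entries under heavy write load.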
LOGROTATE_OLDDIR
What it does: Moves old rotated logs to a specific directory instead of keeping them alongside the active log.
Format: Full or relative path.
Example: LOGROTATE_OLDDIR=/logs/archive
Default: None (rotated logs stay in the same directory as the active log)
Great for keeping things organized. All your old logs end up in one place rather than scattered around.
LOGROTATE_DATEFORMAT
What it does: Adds a date extension to rotated log files.
Format: strftime format string.
Example: LOGROTATE_DATEFORMAT=-%Y%m%d
Default: None (files are numbered like .1, .2, .3 instead)
If you prefer log files named like app.log-20250115 instead of app.log.1, set this. Makes it way easier to find logs from a specific date.
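Since the value is a strftime pattern, you can preview the suffix a given pattern produces with plain date (the app.log name is just an example):

```shell
# -%Y%m%d is the same pattern used in LOGROTATE_DATEFORMAT above
suffix=$(date +-%Y%m%d)
echo "app.log${suffix}"   # e.g. app.log-20250115
```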
LOGROTATE_MAXAGE
What it does: Deletes rotated logs older than this many days.
Format: Integer (number of days).
Example: LOGROTATE_MAXAGE=60
Default: None
This works with LOGROTATE_COPIES. Even if you set 10 copies, logs older than maxage get deleted regardless.
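For example, the two limits can be combined like this (values are illustrative):

```yaml
environment:
  - LOGROTATE_COPIES=10   # keep at most 10 rotated files
  - LOGROTATE_MAXAGE=30   # but delete any rotation older than 30 days
```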
LOGROTATE_MINSIZE
What it does: Only rotate if the log is bigger than this size AND the time interval has passed.
Format: Number followed by k, M, or G.
Example: LOGROTATE_MINSIZE=10M
Default: None
This prevents rotation of tiny log files. If your daily logs are usually small, you can set minsize so rotation only happens when the file actually has content worth rotating.
LOGROTATE_CRONSCHEDULE
What it does: Overrides the default cron schedule.
Format: Cron schedule format (go-cron syntax).
Example: LOGROTATE_CRONSCHEDULE=0 3 * * * (runs at 3 AM daily)
Default: Determined by LOGROTATE_INTERVAL
Usually you don’t need this, but if you want precise control over when rotation happens (like “every day at 3 AM”), you can set a custom cron schedule.
LOGROTATE_STATUSFILE
What it does: Location for the logrotate status file, which tracks when each file was last rotated.
Format: Full path to the status file.
Example: LOGROTATE_STATUSFILE=/logrotate-status/logrotate.status
Default: Container volume (not persistent across container recreations)
If you want the status file to persist, mount a volume and point this variable to a path in that volume.
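A minimal compose fragment for that might look like this (host path assumed; adjust to your layout):

```yaml
services:
  logrotate:
    image: blacklabelops/logrotate:latest
    volumes:
      # Host directory survives container recreation
      - ./logrotate-status:/logrotate-status
    environment:
      - LOGROTATE_STATUSFILE=/logrotate-status/logrotate.status
```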
LOGROTATE_LOGFILE
What it does: Log file for logrotate’s own output.
Format: Full path.
Example: LOGROTATE_LOGFILE=/logs/logrotate.log
Default: None
Useful for debugging. You’ll see what logrotate is doing each time it runs.
LOG_FILE
What it does: Log file for the cron daemon’s output.
Format: Full path.
Example: LOG_FILE=/logs/cron.log
Default: None
This logs the cron execution itself, not logrotate’s output. Helps diagnose scheduling issues.
LOGROTATE_PARAMETERS
What it does: Command-line parameters passed to the logrotate command.
Options:
- v – Verbose output
- d – Debug mode (simulates rotation without actually doing it)
- f – Force rotation even if conditions aren’t met
Example: LOGROTATE_PARAMETERS=vdf
Default: None
Super handy for testing. Set it to vdf to see what would happen without actually rotating anything.
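As a sketch, a one-off dry run against a log directory could look like this (the mounted path is an example, and this of course needs a running Docker daemon):

```bash
docker run --rm \
  -v /var/log/apps:/var/log/apps \
  -e LOGS_DIRECTORIES=/var/log/apps \
  -e LOGROTATE_PARAMETERS=vdf \
  blacklabelops/logrotate:latest
```

The d flag means nothing is actually rotated; the verbose output shows what a real run would do.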
LOGROTATE_PREROTATE_COMMAND
What it does: Command to run before rotation happens.
Format: Full path to a script or command.
Example: LOGROTATE_PREROTATE_COMMAND=/scripts/stop-logging.sh
Default: None
Use this if you need to signal a process before rotating its logs, like pausing log writes temporarily.
LOGROTATE_POSTROTATE_COMMAND
What it does: Command to run after rotation completes.
Format: Full path to a script or command.
Example: LOGROTATE_POSTROTATE_COMMAND=/usr/bin/killall -HUP nginx
Default: None
Common use case: sending a SIGHUP to a service so it reopens its log files after rotation.
LOGROTATE_AUTOUPDATE
What it does: Enables or disables automatic updating of the logrotate configuration file.
Options: true or false
Example: LOGROTATE_AUTOUPDATE=false
Default: true
By default, the container regenerates its config file before each rotation to catch any new log files. If you want a static config, set this to false.
TZ
What it does: Sets the timezone for the container.
Format: Standard timezone string (e.g., America/New_York, Europe/London, Asia/Tokyo).
Example: TZ=America/Chicago
Default: UTC
This affects when time-based rotations happen. If you want daily rotation to happen at midnight in your local time, set this to your timezone.
Common Use Cases
Rotating Docker Container Logs
```yaml
environment:
  - LOGS_DIRECTORIES=/var/lib/docker/containers
  - LOGROTATE_INTERVAL=daily
  - LOGROTATE_COPIES=7
  - LOGROTATE_COMPRESSION=compress
```
High-Traffic Application (Rotate by Size)
```yaml
environment:
  - LOGS_DIRECTORIES=/var/log/apps
  - LOGROTATE_INTERVAL=hourly
  - LOGROTATE_SIZE=500M
  - LOGROTATE_COPIES=24
  - LOGROTATE_COMPRESSION=compress
```
Multiple File Types with Date Extensions
```yaml
environment:
  - LOGS_DIRECTORIES=/var/log/apps
  - LOG_FILE_ENDINGS=log json xml txt
  - LOGROTATE_INTERVAL=daily
  - LOGROTATE_DATEFORMAT=-%Y%m%d
  - LOGROTATE_COPIES=30
```
Archive Old Logs to Separate Directory
```yaml
environment:
  - LOGS_DIRECTORIES=/var/log/apps
  - LOGROTATE_OLDDIR=/logs/archive
  - LOGROTATE_INTERVAL=weekly
  - LOGROTATE_MAXAGE=90
  - LOGROTATE_COMPRESSION=compress
```
Final Thoughts
Setting up logrotate in a container is one of those things you set up once and then forget about – until it saves you from a disk space disaster. Take a few minutes to configure it properly, test it with debug mode, and then let it run.
The key is finding the right balance for your setup. If you have tons of disk space and logs aren’t growing fast, you can keep more copies and rotate less frequently. If you’re on a tight storage budget or have super chatty apps, rotate more often and compress everything.
Either way, future you will be grateful when those logs don’t eat up all your disk space at the worst possible time.
FAQ
What is Docker log rotation and why is it important?
Docker log rotation is the automatic process of managing container log files by limiting their size and quantity on the host filesystem. Without log rotation, Docker containers can continuously write logs until disk space is exhausted, potentially causing system failure. By default, Docker uses the json-file logging driver which does not automatically rotate logs, making manual configuration essential for production environments. Log rotation prevents disk space issues by cycling logs at predetermined intervals and removing old log files when storage limits are reached.
Where are Docker container logs stored on the host system?
Docker container logs are stored in different locations depending on your operating system. On Linux systems, logs are located at /var/lib/docker/containers/[container-id]/[container-id]-json.log. On Windows, they are stored at C:\ProgramData\docker\containers\[container-id]\[container-id]-json.log. For Docker Desktop on macOS, logs are stored within a Docker VM and can be accessed via screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty. Each container has its own dedicated log file named after its full container ID. You can find the exact log path for any container using the command docker inspect --format='{{.LogPath}}' container-name.
What are the differences between json-file and local logging drivers?
The json-file driver is Docker’s default logging driver that stores logs in JSON format but does not perform log rotation by default. It has significant overhead due to JSON marshaling and unmarshaling, and writes are non-atomic which can cause partial messages. The local logging driver uses a more efficient storage format, performs automatic log rotation by default, and is recommended by Docker for production environments to prevent disk exhaustion. However, both json-file and local drivers support the docker logs command for viewing logs, while other logging drivers may not. The local driver is specifically designed to be more storage-efficient and includes built-in rotation capabilities without additional configuration.
How do I configure Docker daemon for global log rotation?
To configure log rotation globally for all new containers, edit or create the daemon.json file located at /etc/docker/daemon.json on Linux or C:\ProgramData\docker\config\daemon.json on Windows. Add the following configuration:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
```
After saving the file, restart the Docker daemon with sudo systemctl restart docker on Linux or by restarting the Docker service on Windows. Note that this configuration only affects newly created containers; existing containers must be recreated to use the new settings.
How can I set log rotation for a specific container using Docker Compose?
In Docker Compose, you can specify logging options per service in your docker-compose.yml file using the logging key:
```yaml
version: "3.8"

services:
  web:
    image: nginx:latest
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "5"
        compress: "true"
```
The max-size option sets the maximum file size before rotation (e.g., 10m for 10 megabytes), max-file specifies how many rotated log files to retain, and compress enables gzip compression of rotated logs to save disk space. After modifying the docker-compose.yml file, recreate the containers using docker-compose up -d --force-recreate.
How do I configure log rotation for a single container using docker run?
You can configure log rotation when creating a container with the docker run command using log options:
```bash
docker run \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=5 \
  --log-opt compress=true \
  nginx:latest
```
This approach provides flexibility for tuning log handling on a deployment-by-deployment basis without affecting other containers. The settings specified at container creation are immutable and cannot be changed without recreating the container.
Why aren’t my existing containers using the new log rotation settings after I updated daemon.json?
Changes to daemon.json only affect newly created containers, not existing ones. This is because container configurations are static from the point of creation and logging driver settings are immutable container properties. To apply new log rotation settings to existing containers, you must stop, remove, and recreate them. Use docker-compose up -d --force-recreate for Compose-managed containers, or manually stop and recreate containers with docker stop, docker rm, and then docker run or docker create with the desired settings. Make sure to back up any important data stored in containers before recreating them, though data in named volumes or bind mounts will be preserved.
What are the recommended max-size and max-file values for log rotation?
Recommended values depend on your application’s logging volume and retention requirements. Common configurations include max-size of 10m to 50m (megabytes) and max-file of 3 to 10 files. For high-traffic applications, consider 50m with 5-7 files for approximately one week of retention. For development environments, 10m with 3 files is typically sufficient. Docker containers using default settings with max-file set to 3 and max-size of 10MB will keep approximately 30MB of logs per container. Calculate total storage by multiplying max-size × max-file × number of containers. Monitor your disk usage and adjust these values based on your actual logging patterns and storage capacity.
How can I check the current log driver for a running container?
You can inspect the logging driver of any container using the docker inspect command:
```bash
docker inspect -f '{{.HostConfig.LogConfig.Type}}' container-name
```
To see the full logging configuration including all options, use:
```bash
docker inspect --format='{{.HostConfig.LogConfig}}' container-name
```
You can also check the system-wide default logging driver with docker info | grep "Logging Driver" on Linux, or run docker info and look for the Logging Driver field.
How do I find which containers are using the most disk space for logs?
To identify containers with large log files on Linux, use the following command:
```bash
find /var/lib/docker/containers/ -name "*-json.log" -exec ls -lh {} \; | sort -k5 -hr | head -20
```
For a more detailed analysis including container names:
```bash
#!/bin/bash
for container in $(docker ps -aq); do
  cont_name=$(docker inspect --format='{{.Name}}' "$container" | sed 's/\///')
  log_path=$(docker inspect --format='{{.LogPath}}' "$container")
  log_size=$(sudo du -h "$log_path" 2>/dev/null | cut -f1)
  echo "$cont_name: $log_size"
done | sort -k2 -hr
```
This helps identify problematic containers that need immediate attention or log rotation configuration.
Can I manually delete Docker log files to free up disk space?
Yes, you can manually truncate log files without stopping containers. Note that a plain sudo cat /dev/null > file doesn’t work as you might expect, because the redirection runs in your unprivileged shell rather than under sudo; using truncate avoids that problem:

```bash
sudo truncate -s 0 "$(docker inspect --format='{{.LogPath}}' container-name)"
```
However, this is not recommended for production environments as it’s a temporary solution. The proper approach is to implement automated log rotation through daemon.json or container-specific logging configurations. Manual deletion can also cause issues with external tools that monitor or scrape these log files. Additionally, directly accessing or modifying files in /var/lib/docker/containers/ can interfere with Docker’s logging system and cause unexpected behavior. Always prefer Docker’s built-in rotation mechanisms over manual intervention.
What is the difference between blocking and non-blocking log delivery modes?
Blocking mode (default) means the application must wait for the Docker logging driver to process and accept each log message before continuing execution, ensuring all logs are delivered but potentially causing latency. Non-blocking mode places log messages in an in-memory ring buffer immediately, allowing the application to continue without delay, but risks losing logs if the buffer fills up. Non-blocking mode is configured with:
```bash
docker run \
  --log-opt mode=non-blocking \
  --log-opt max-buffer-size=4m \
  nginx:latest
```
Use blocking mode with fast local drivers like json-file or local. Use non-blocking mode with network-based drivers like fluentd, gelf, or awslogs to prevent application performance degradation. The max-buffer-size option controls the ring buffer size, with default of 1MB.
Should I use the compress option for log rotation?
Yes, enabling compression for rotated logs can significantly reduce disk space usage, typically shrinking files to 10-20% of their original size. Enable compression with:
```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5",
    "compress": "true"
  }
}
```
However, be aware that reading compressed log information requires decompression, which causes temporary disk usage increase and increased CPU usage. The docker logs command will automatically decompress rotated files when reading them. For systems with limited CPU resources, you may want to skip compression and instead reduce max-file count or move logs to external storage more quickly.
How do I implement logrotate within a Docker container?
You can integrate logrotate directly into your container for application-specific log management. Add to your Dockerfile:
```dockerfile
RUN apt-get update && apt-get install -y cron logrotate

# Create logrotate configuration
RUN echo '/var/log/app/*.log {\n\
    daily\n\
    rotate 7\n\
    size 100M\n\
    compress\n\
    delaycompress\n\
    missingok\n\
    notifempty\n\
    create 0644 appuser appgroup\n\
}' > /etc/logrotate.d/app

# Setup cron
COPY crontab /etc/cron.d/logrotate-cron
RUN chmod 0644 /etc/cron.d/logrotate-cron
RUN crontab /etc/cron.d/logrotate-cron
```
Create a crontab file with:
```
0 0 * * * /usr/sbin/logrotate /etc/logrotate.conf
```
This approach is useful for applications that write logs to files inside the container rather than stdout/stderr, though the recommended practice is to log to stdout/stderr and let Docker handle rotation.
What happens to logs when I restart or recreate a container?
When you restart a container (docker restart), the log file continues to grow in the same file, and the container retains its logging configuration. When you recreate a container (docker rm followed by docker run, or docker-compose up --force-recreate), a new container with a new ID is created, resulting in a new log file in a new directory. The old container’s log files remain on disk until you manually delete them or prune old container data with docker system prune. Log rotation settings from the previous container are not automatically carried over unless specified again in the docker run command or docker-compose.yml file. For production environments, use named volumes or external logging solutions to persist important log data across container lifecycle changes.
How can I configure time-based log rotation instead of size-based?
Docker’s built-in logging drivers (json-file and local) only support size-based rotation, not time-based rotation. To implement time-based rotation, you have several options: use the journald logging driver which supports time-based rotation through systemd configuration, use external logging solutions like Fluentd or Logstash that support time-based policies, or implement a host-level logrotate configuration. For host-level logrotate, create /etc/logrotate.d/docker-containers:
```
/var/lib/docker/containers/*/*.log {
    daily
    rotate 7
    size 100M
    compress
    delaycompress
    missingok
    notifempty
    dateext
    dateformat -%Y%m%d
    create 0644 root root
}
```
However, be cautious with this approach as external tools modifying Docker log files can interfere with Docker’s logging system. The recommended approach for time-based retention is to use centralized logging systems that can archive and manage logs based on time policies.
What are the available logging drivers in Docker and when should I use them?
Docker supports multiple logging drivers for different use cases:

- json-file (default) – stores logs locally as JSON; suitable for development and debugging
- local – uses an efficient storage format with automatic rotation; recommended for production
- journald – integrates with systemd’s journal; good for systems using systemd
- syslog – sends logs to a syslog server for centralized management
- fluentd – forwards logs to a Fluentd collector for aggregation
- gelf – sends logs to Graylog Extended Log Format endpoints
- awslogs – sends logs to Amazon CloudWatch
- gcplogs – sends logs to Google Cloud Logging
- splunk – sends logs to a Splunk logging server

For production environments without external logging infrastructure, use the local driver. For containerized environments requiring centralized logging, use fluentd or gelf. Only the json-file and local drivers support the docker logs command. Choose based on your infrastructure, monitoring strategy, and whether you need local log access or centralized aggregation.
How do I troubleshoot log rotation not working as expected?
First, verify the logging configuration is applied to your container with docker inspect -f '{{.HostConfig.LogConfig}}' container-name. If the configuration looks correct but rotation isn’t happening, ensure the container was created after the daemon.json changes, as existing containers don’t inherit new settings. Check that you restarted the Docker daemon after modifying daemon.json using sudo systemctl restart docker. Verify that max-size values are formatted correctly with units (e.g., "10m", not 10). For Docker Compose, ensure you recreated containers with docker-compose up -d --force-recreate. Check the actual log file size with docker inspect --format='{{.LogPath}}' container-name followed by ls -lh on that path. If using Docker installed via Snap, check for daemon.json in /var/snap/docker/current/config/daemon.json instead of /etc/docker/. Look for Docker daemon errors in system logs with journalctl -u docker.service. Common issues include JSON syntax errors in daemon.json, permission problems, and insufficient disk space preventing rotation.
Can I use different log rotation settings for different containers on the same host?
Yes, you can override daemon.json defaults on a per-container basis. The daemon.json settings serve as defaults for containers that don’t specify logging options. To use different settings for specific containers, configure logging in Docker Compose:
```yaml
version: "3.8"

services:
  high-volume-app:
    image: app:latest
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "10"

  low-volume-app:
    image: app:latest
    logging:
      driver: "json-file"
      options:
        max-size: "5m"
        max-file: "3"
```
Or with docker run commands using --log-opt flags. This flexibility allows you to tune storage usage based on each application’s logging characteristics. High-traffic applications generating many logs can have larger retention, while low-traffic applications can use minimal settings to conserve disk space.
What are best practices for Docker log management in production?
Production Docker log management best practices include:

- Always configure log rotation, either globally via daemon.json or per container, to prevent disk exhaustion
- Use the local logging driver as the default for its efficiency and built-in rotation
- Implement centralized logging using tools like Fluentd, the ELK Stack, or commercial solutions to aggregate logs from multiple containers
- Add metadata labels and tags to logs for better organization and filtering
- Configure appropriate max-size and max-file values based on your logging volume and retention requirements
- Monitor disk space proactively with automated alerts
- Use non-blocking delivery mode for network-based logging drivers to prevent application latency
- Enable compression for rotated logs to save disk space
- Ensure containers log to stdout and stderr rather than internal files
- Implement proper timestamp handling for accurate log correlation
- Use structured logging formats like JSON for easier parsing and analysis
- Regularly test log rotation by simulating high-volume logging
- Document your logging configuration in infrastructure-as-code
- Establish clear log retention policies that comply with regulatory requirements

Consider implementing log sampling for extremely high-volume applications to reduce storage costs while maintaining visibility.
How do I migrate from json-file to local logging driver?
To migrate to the local logging driver, update your daemon.json configuration:
```json
{
  "log-driver": "local",
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}
```
Restart the Docker daemon with sudo systemctl restart docker. For existing containers, you must recreate them to use the new driver. Plan a maintenance window to stop and recreate containers. For Docker Compose applications, use docker-compose down followed by docker-compose up -d. Note that you cannot access old json-file logs through docker logs after switching drivers, so export any important logs before migration using docker logs container-name > container-logs.txt. The local driver uses a different storage format optimized for efficiency, and logs will start accumulating in the new format. Verify the change with docker info | grep "Logging Driver" and docker inspect for individual containers. The local driver performs similarly to json-file but with better storage efficiency and automatic rotation, making it ideal for production workloads.
What storage space should I allocate for Docker logs?
Calculate storage requirements based on the formula: (max-size × max-file × number of containers) + overhead. For example, with 20 containers, max-size of 10MB, and max-file of 5, you need approximately 1GB (20 × 10MB × 5). Add 20-30% overhead for temporary space during rotation and compression. For production environments, monitor actual usage patterns over time and adjust accordingly. High-traffic applications may generate logs faster than expected, requiring larger allocations. Consider implementing disk space monitoring with alerts when usage exceeds 70-80% of allocated space. In cloud environments, ensure your Docker data volume (/var/lib/docker) has sufficient space separate from the OS volume. For Kubernetes, allocate sufficient node storage for container logs based on pod density. If disk space is limited, reduce max-file count or implement more aggressive log forwarding to external systems. Use docker system df to check current Docker disk usage including logs, and docker system prune to clean up old data. Regular capacity planning reviews should include log storage analysis to prevent unexpected disk full scenarios.
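The worst-case estimate from that formula is easy to script; the numbers below are the example values from above, not recommendations:

```shell
# Worst-case log storage per host: max-size x max-file x container count
MAX_SIZE_MB=10   # --log-opt max-size=10m
MAX_FILE=5       # --log-opt max-file=5
CONTAINERS=20

TOTAL_MB=$((MAX_SIZE_MB * MAX_FILE * CONTAINERS))
echo "Up to ${TOTAL_MB} MB of container logs"   # Up to 1000 MB of container logs
```

Remember to add the 20-30% overhead mentioned above for temporary space during rotation and compression.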
How do I handle logs for short-lived containers?
Short-lived containers present unique logging challenges since they may complete before logs are fully processed. For batch jobs and ephemeral containers, use synchronous logging with blocking mode to ensure all logs are captured before container termination. Configure logging to forward to external systems that persist after the container exits. Use labels and tags to identify short-lived container logs for easier tracking:
```bash
docker run \
  --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag="batch-job-{{.Name}}" \
  --log-opt labels=job_type,execution_id \
  batch-job:latest
```
Implement docker system prune regularly to clean up log files from stopped containers. Consider using a sidecar logging container pattern for critical short-lived jobs to ensure log collection completes. For Kubernetes CronJobs, configure TTL for finished jobs to automatically clean up resources including logs. Use centralized logging solutions with buffering to handle bursts of short-lived containers without losing data. Monitor log ingestion rates to ensure your logging infrastructure can handle container churn. For containers that complete in seconds, ensure your logging driver’s timeout settings are appropriate to capture all output before the container exits.
What are the performance implications of different log rotation settings?
Log rotation settings impact both I/O performance and application latency. Smaller max-size values cause more frequent rotations, increasing I/O overhead but maintaining smaller individual files. Larger values reduce rotation frequency but risk longer interruptions when rotation occurs. The compression option trades CPU usage for disk space savings—compression typically uses 5-10% additional CPU during rotation but reduces storage by 80-90%. More retained files (higher max-file) increase disk I/O during log reading operations. Blocking delivery mode (default) ensures reliability but may add 1-5ms latency per log line with local drivers, more with network drivers. Non-blocking mode eliminates this latency but requires sufficient buffer size to prevent log loss. The json-file driver has 10-30% more overhead than the local driver due to JSON formatting. For high-performance applications logging thousands of messages per second, use non-blocking mode with the local driver, larger buffer sizes (4-8MB), and disable compression. For typical applications, default settings provide good balance. Always test performance impact in staging environments before applying to production. Monitor application latency, disk I/O wait times, and CPU usage to identify logging bottlenecks.
