
My Top Self-Hosted Solutions with Docker for 2026

13 March 2026

A lot has evolved since I shared my top Docker picks in 2025. I’ve streamlined the whole setup, cut the noise, and focused on what I actually use — locally and externally. Better organised, better documented, and somehow more enjoyable than ever. I couldn’t stick to 10, so it’s 10++++!

My Top Universal Docker Solutions

1. Nginx Proxy Manager – Visual Reverse Proxy Management

Nginx Proxy Manager provides a clean web interface for managing Nginx reverse proxy hosts, SSL certificates, and redirects – without touching a config file. A practical alternative to manually maintaining Nginx or Traefik configurations.

Features

  • Web UI for managing proxy hosts, redirects, and 404 hosts
  • Automatic Let’s Encrypt SSL certificate provisioning and renewal
  • Access lists and basic HTTP authentication per host
  • Custom Nginx configuration snippets per proxy host

Docker Deployment

docker run -d -p 80:80 -p 81:81 -p 443:443 \
  -e DB_SQLITE_FILE=/data/database.sqlite \
  -v npm_data:/data \
  -v npm_letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager

2. Portainer – Simplified Docker Management

Portainer provides a user-friendly web interface for managing Docker containers, images, and volumes. I use it both locally and externally.

Features

  • Web-based UI for container management
  • Supports Docker Swarm and Kubernetes
  • Role-based access control
  • Application deployment templates

Docker Deployment

docker run -d -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce

3. BookStack – Self-Hosted Wiki and Documentation

BookStack is an open-source platform for organizing and storing information in a structured, book-like hierarchy. Ideal for internal documentation and knowledge bases.

Features

  • Hierarchical organization with Shelves, Books, Chapters, and Pages
  • Built-in WYSIWYG and Markdown editor
  • Full-text search across all content
  • Role-based permissions and LDAP/SAML support

Docker Deployment

docker run -d -p 8080:80 \
  -e APP_URL=http://localhost:8080 \
  -e DB_HOST=your_db_host \
  -e DB_DATABASE=bookstack \
  -e DB_USERNAME=bookstack \
  -e DB_PASSWORD=your_password \
  solidnerd/bookstack

4. n8n – Self-Hosted Workflow Automation

n8n is a powerful, self-hosted workflow automation tool that connects apps, APIs, and services through a visual node-based editor. A flexible alternative to Zapier or Make with full data ownership.

Features

  • Visual drag-and-drop workflow builder with 400+ integrations
  • Supports custom JavaScript and Python code nodes
  • Webhook triggers, cron scheduling, and event-based automation
  • Self-hosted with optional queue mode via Redis and PostgreSQL

Docker Deployment

docker run -d -p 5678:5678 \
  -e N8N_BASIC_AUTH_ACTIVE=true \
  -e N8N_BASIC_AUTH_USER=admin \
  -e N8N_BASIC_AUTH_PASSWORD=your_password \
  -v n8n_data:/home/node/.n8n \
  n8nio/n8n

5. Kopia – Fast and Secure Backup Tool

Kopia is an open-source backup and restore tool with client-side encryption, deduplication, and compression. A modern alternative to tools like Duplicati or Restic, offering both a CLI and a web UI for managing backup policies and snapshot repositories.

Features

  • Client-side end-to-end encryption with multiple cipher options
  • Content-addressable deduplication for efficient storage usage
  • Supports local, SFTP, S3-compatible, Google Drive, and rclone backends
  • Web UI and CLI interface with scheduled snapshot policies

Docker Deployment

docker run -d -p 51515:51515 \
  -e KOPIA_PASSWORD=your_repository_password \
  -v kopia_config:/app/config \
  -v kopia_cache:/app/cache \
  -v /path/to/backup/source:/data:ro \
  -v /path/to/repository:/repository \
  kopia/kopia server start \
  --insecure --address=0.0.0.0:51515

6. Monocker – Minimal Docker Container Status Notifications

Monocker is a lightweight, self-hosted Docker container monitoring tool that sends notifications whenever a container changes state. A no-frills solution for staying informed about container crashes or unexpected stops without the overhead of a full monitoring stack.

Features

  • Monitors all running containers for state changes in real time
  • Supports Telegram, Slack, Pushover, ntfy, and other notification channels
  • Per-container include/exclude filtering via labels or environment variables
  • Minimal footprint – single container with no external dependencies

Docker Deployment

docker run -d \
  --name monocker \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e SERVER_LABEL=my-server \
  -e NOTIFICATION_TYPE=ntfy \
  -e NTFY_URL=http://your_ntfy_host/topic \
  petersem/monocker

7. Duplicati – Self-Hosted Encrypted Cloud Backup

Duplicati is a free, open-source backup client with a web-based interface for scheduling encrypted, incremental backups to a wide range of local and cloud storage destinations. A reliable set-and-forget backup solution for self-hosted environments that prioritizes simplicity without sacrificing security.

Features

  • AES-256 client-side encryption before data leaves the machine
  • Incremental backups with deduplication to minimize storage usage
  • Supports S3, Backblaze B2, FTP, SFTP, WebDAV, and 30+ backends
  • Web UI with scheduling, retention policies, and email notifications

Docker Deployment

docker run -d -p 8200:8200 \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/Berlin \
  -v duplicati_config:/config \
  -v duplicati_backups:/backups \
  -v /path/to/source:/source:ro \
  lscr.io/linuxserver/duplicati

My Top Local Docker Solutions

1. Plex – Media Streaming Done Right

Plex enables you to create your own Netflix-like experience by organizing and streaming your media collection.

Features

  • Remote access to your media
  • Automatic media organization and metadata fetching
  • Support for multiple users and devices
  • Integration with streaming services

Docker Deployment

docker run -d --name=plex \
  -p 32400:32400 \
  -e PLEX_CLAIM=claim-token \
  -v plex_data:/config \
  plexinc/pms-docker

2. TimeTagger – Self-Hosted Time Tracking

TimeTagger is an open-source, self-hosted time tracking tool with a visual timeline interface. Designed for freelancers and developers who want a lightweight alternative to commercial time tracking services with full data ownership.

Features

  • Interactive visual timeline for logging and reviewing time entries
  • Tag-based organization with nested tag support
  • Reporting and export for invoicing and analysis
  • REST API for integration with external tools and automation

Docker Deployment

docker run -d -p 80:80 \
  -e TIMETAGGER_BIND=0.0.0.0:80 \
  -v timetagger_data:/root/_timetagger \
  ghcr.io/almarklein/timetagger

3. Firefly III – Self-Hosted Personal Finance Manager

Firefly III is a feature-rich, self-hosted personal finance manager for tracking income, expenses, budgets, and accounts. A privacy-first alternative to cloud-based tools like YNAB or Mint, with full control over your financial data.

Features

  • Multi-account tracking with support for assets, liabilities, and cash
  • Budget management, bill tracking, and recurring transaction rules
  • Detailed reports and charts for income, expenses, and net worth
  • REST API and Data Importer tool for automated bank transaction imports

Docker Deployment

docker run -d -p 8080:8080 \
  -e APP_KEY=your_32_char_app_key \
  -e APP_URL=http://localhost:8080 \
  -e DB_HOST=your_db_host \
  -e DB_DATABASE=firefly \
  -e DB_USERNAME=firefly \
  -e DB_PASSWORD=your_password \
  -v firefly_upload:/var/www/html/storage/upload \
  fireflyiii/core

4. Homebox – Self-Hosted Home Inventory Management

Homebox is a lightweight, self-hosted inventory and organization system designed for tracking household items, assets, and warranties. A practical tool for households and home labs that want a structured, searchable record of their belongings without relying on spreadsheets or cloud services.

Features

  • Item tracking with locations, labels, and custom fields
  • Warranty and purchase tracking with expiry reminders
  • QR code generation for physical labeling of items and locations
  • CSV import/export and REST API for data portability

Docker Deployment

docker run -d -p 3100:7745 \
  -e HBOX_LOG_LEVEL=info \
  -e HBOX_WEB_MAX_UPLOAD_SIZE=10 \
  -v homebox_data:/data \
  ghcr.io/hay-kot/homebox

5. Mealie – Self-Hosted Recipe Manager and Meal Planner

Mealie is a self-hosted recipe management and meal planning application with automatic recipe scraping from any URL. A clean, family-friendly alternative to bookmarking recipes across multiple sites, with full control over your culinary data.

Features

  • One-click recipe import by scraping any recipe URL automatically
  • Meal planning calendar with drag-and-drop weekly schedule
  • Shopping list generation from meal plans and individual recipes
  • Multi-user support with household groups and REST API access

Docker Deployment

docker run -d -p 9925:9000 \
  -e ALLOW_SIGNUP=true \
  -e BASE_URL=http://localhost:9925 \
  -v mealie_data:/app/data \
  ghcr.io/mealie-recipes/mealie

6. Paperless-ngx – Self-Hosted Document Management

Paperless-ngx is a self-hosted document management system that ingests, indexes, and archives scanned documents and PDFs. A community-maintained successor to the original Paperless project, turning a pile of physical and digital paperwork into a fully searchable, tagged document archive.

Features

  • Automatic OCR for scanned documents with full-text search
  • Tag, correspondent, and document type classification with auto-matching rules
  • Multi-user support with fine-grained permissions per document
  • REST API and email ingestion for automated document workflows

Docker Deployment

docker run -d -p 8000:8000 \
  -e PAPERLESS_REDIS=redis://your_redis_host:6379 \
  -e PAPERLESS_DBHOST=your_db_host \
  -e PAPERLESS_SECRET_KEY=your_secret_key \
  -e PAPERLESS_TIME_ZONE=Europe/Berlin \
  -v paperless_data:/usr/src/paperless/data \
  -v paperless_media:/usr/src/paperless/media \
  -v paperless_consume:/usr/src/paperless/consume \
  ghcr.io/paperless-ngx/paperless-ngx

7. Dashy – Self-Hosted Personal Dashboard

Dashy is a highly customizable, self-hosted start page and personal dashboard for organizing links, services, and widgets in one place. A polished home lab hub for quick access to all self-hosted tools and external services from a single browser tab.

Features

  • Fully customizable layout with sections, icons, and themes
  • Built-in status monitoring with live uptime indicators per service
  • Widgets for weather, RSS feeds, system stats, and more
  • YAML-based configuration with optional authentication and multi-user support

Docker Deployment

docker run -d -p 4000:80 \
  -v dashy_config:/app/user-data \
  lissy93/dashy

My Top External Docker Solutions

1. Nextcloud – Your Own Cloud Storage

If you need an alternative to Google Drive or Dropbox, Nextcloud is the best self-hosted solution. It allows you to store files, share documents, and even integrate calendar and email functions.

Features

  • File synchronization and sharing
  • Calendar and contacts integration
  • Built-in office suite (Collabora or OnlyOffice)
  • End-to-end encryption

Docker Deployment

docker run -d --name nextcloud \
  -p 8080:80 \
  -v nextcloud_data:/var/www/html \
  nextcloud

2. Bugsink – Self-Hosted Error Tracking

Bugsink is a lightweight, self-hosted error tracking platform compatible with the Sentry SDK. Designed for teams who want full control over their error data without external dependencies.

Features

  • Sentry SDK compatible – no client-side changes required
  • Supports Python, JavaScript, PHP, and more
  • Issue grouping, deduplication, and event history
  • Minimal dependencies – runs as a single Docker container

Docker Deployment

docker run -d -p 8000:8000 \
  -e SECRET_KEY=your_secret_key \
  -v bugsink_data:/var/lib/bugsink \
  bugsink/bugsink

3. ntfy – Self-Hosted Push Notifications

ntfy is a simple, self-hosted pub/sub notification service that sends push notifications to your phone or desktop via HTTP. No account or API key required – just publish to a topic and subscribe.

Features

  • HTTP-based publish/subscribe for instant push notifications
  • Native Android and iOS apps with background delivery
  • Supports priority levels, tags, icons, and action buttons
  • Easy integration with bash scripts, cron jobs, and monitoring tools

Docker Deployment

docker run -d -p 8080:80 \
  -v ntfy_data:/var/lib/ntfy \
  -v /etc/ntfy/server.yml:/etc/ntfy/server.yml \
  binwiederhier/ntfy serve
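
Publishing really is a one-liner. A quick sketch against the container above, using an arbitrary topic name:

curl -d "Backup finished successfully" http://localhost:8080/backups

Anyone subscribed to the backups topic, whether in the mobile app or a browser, receives the message instantly.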

4. Matomo – Self-Hosted Web Analytics

Matomo is a fully featured, self-hosted web analytics platform and the most widely used open-source alternative to Google Analytics. Full data ownership with GDPR compliance built in.

Features

  • Comprehensive visitor tracking, heatmaps, and funnel analysis
  • GDPR, CCPA, and ePrivacy compliant out of the box
  • No data sampling – full unfiltered access to raw analytics
  • Plugin ecosystem with tag manager, A/B testing, and SEO tools

Docker Deployment

docker run -d -p 8080:80 \
  -e MATOMO_DATABASE_HOST=your_db_host \
  -e MATOMO_DATABASE_DBNAME=matomo \
  -e MATOMO_DATABASE_USERNAME=matomo \
  -e MATOMO_DATABASE_PASSWORD=your_password \
  -v matomo_data:/var/www/html \
  matomo

5. Mosparo – Self-Hosted Spam Protection

Mosparo is an open-source, self-hosted spam protection solution for web forms. A privacy-friendly alternative to reCAPTCHA and hCaptcha that works without tracking, cookies, or sending data to third-party servers.

Features

  • GDPR-compliant spam filtering without third-party dependencies
  • Invisible honeypot and behavioral analysis – no CAPTCHA challenges
  • WordPress, Typo3, and framework integrations via official plugins
  • Customizable rulesets and submission log with detailed statistics

Docker Deployment

docker run -d -p 8080:80 \
  -e MOSPARO_DB_HOST=your_db_host \
  -e MOSPARO_DB_NAME=mosparo \
  -e MOSPARO_DB_USER=mosparo \
  -e MOSPARO_DB_PASSWORD=your_password \
  -v mosparo_data:/var/www/html/var \
  mosparo/mosparo

6. Uptime Kuma – Self-Hosted Uptime Monitoring

Uptime Kuma is a lightweight, self-hosted monitoring tool for tracking the availability of websites, services, and APIs. A polished alternative to services like UptimeRobot with a real-time dashboard and extensive notification support.

Features

  • Monitors HTTP, TCP, DNS, Docker containers, and more
  • Real-time dashboard with response time graphs and status history
  • 90+ notification integrations including ntfy, Slack, and Telegram
  • Public status pages with custom domains and branding

Docker Deployment

docker run -d -p 3001:3001 \
  -v uptime-kuma_data:/app/data \
  louislam/uptime-kuma

7. Rocket.Chat – Self-Hosted Team Messaging

Rocket.Chat is a feature-rich, self-hosted team communication platform with channels, direct messaging, video conferencing, and an extensible app engine. A robust open-source alternative to Slack or Microsoft Teams with full data sovereignty.

Features

  • Channels, threads, direct messages, and video/audio conferencing
  • Extensible Apps Engine for custom bots and integrations
  • Omnichannel support for live chat, email, and social messaging
  • LDAP, SAML, and OAuth authentication support

Docker Deployment

docker run -d -p 3000:3000 \
  -e MONGO_URL=mongodb://your_mongo_host:27017/rocketchat \
  -e ROOT_URL=http://localhost:3000 \
  -e PORT=3000 \
  -v rocketchat_uploads:/app/uploads \
  rocket.chat

8. VDO.Ninja – Self-Hosted Browser-Based Video Streaming

VDO.Ninja is a free, open-source tool that uses WebRTC to bring remote camera feeds directly into OBS or any browser – with near-zero latency and no dedicated software required on the sender’s side. Fully self-hostable for complete control over the signaling infrastructure.

Features

  • Browser-to-browser WebRTC video with near-zero latency
  • Direct OBS integration via Browser Source – no capture card needed
  • Supports multi-guest rooms, screen sharing, and audio mixing
  • Self-hostable signaling server and TURN relay for full data control

Docker Deployment

docker run -d -p 8080:8080 \
  -e PORT=8080 \
  -v vdoninja_data:/app/data \
  steveseguin/vdo.ninja

9. Watchtower – Automated Docker Container Updates

Watchtower is a lightweight Docker container that automatically monitors and updates running containers whenever a new image version is available. A set-and-forget solution for keeping a self-hosted Docker stack up to date without manual intervention.

Features

  • Automatically pulls and redeploys updated container images
  • Configurable schedules via cron expressions or polling intervals
  • Per-container opt-out via labels for selective update control
  • Notification support including email, Slack, ntfy, and Apprise

Docker Deployment

docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
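
To keep a sensitive container out of automatic updates, start it with Watchtower’s opt-out label. A sketch with a placeholder database container:

docker run -d --name critical-db \
  -e POSTGRES_PASSWORD=your_password \
  --label com.centurylinklabs.watchtower.enable=false \
  postgres:16-alpine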

10. Garage – Lightweight Self-Hosted S3-Compatible Object Storage

Garage is an open-source, self-hosted distributed object storage service with an S3-compatible API. Designed for small to medium infrastructure where a full MinIO setup would be overkill, offering resilient storage across multiple nodes with minimal resource usage.

Features

  • S3-compatible API – works with existing S3 clients and SDKs
  • Distributed across multiple nodes with configurable replication
  • Designed for geo-distributed and low-bandwidth environments
  • Lightweight binary with low memory footprint – no JVM or heavy runtime

Docker Deployment

docker run -d \
  --name garage \
  -p 3900:3900 -p 3901:3901 \
  -v garage_data:/var/lib/garage/data \
  -v garage_meta:/var/lib/garage/meta \
  -v /etc/garage/garage.toml:/etc/garage/garage.toml \
  dxflrs/garage

11. Mercure – Self-Hosted Real-Time Push Protocol

Mercure is an open-source protocol and server for pushing real-time updates to web browsers and other HTTP clients via Server-Sent Events. A lightweight alternative to WebSockets for live data feeds, notifications, and collaborative features built on standard HTTP.

Features

  • Server-Sent Events over HTTP/2 – no WebSocket upgrade required
  • JWT-based authorization for secure topic subscriptions and publishing
  • Topic-based pub/sub with wildcard and private topic support
  • Built-in metrics, health check endpoint, and high-performance Go runtime

Docker Deployment

docker run -d -p 3000:3000 \
  -e SERVER_NAME=:3000 \
  -e MERCURE_PUBLISHER_JWT_KEY=your_publisher_jwt_key \
  -e MERCURE_SUBSCRIBER_JWT_KEY=your_subscriber_jwt_key \
  -v mercure_data:/data \
  dunglas/mercure

12. Apprise – Universal Notification Gateway

Apprise is an open-source notification library and self-hostable API server that provides a single unified interface for sending alerts to virtually any notification service. A practical abstraction layer for centralizing notifications across a self-hosted infrastructure stack.

Features

  • 100+ supported notification services including Slack, Telegram, and ntfy
  • Simple URL-based configuration for defining notification targets
  • REST API server mode for sending notifications from any application or script
  • Persistent notification groups via configuration files or the web UI

Docker Deployment

docker run -d -p 8000:8000 \
  -v apprise_config:/config \
  caronc/apprise

13. GoAccess – Real-Time Web Log Analyzer

GoAccess is an open-source, real-time web log analyzer and interactive viewer that runs directly in the terminal or generates self-contained HTML reports. A fast, dependency-free alternative to sending access logs to external analytics platforms, keeping traffic analysis entirely on-premise.

Features

  • Real-time log parsing with live terminal dashboard and HTML report export
  • Supports Nginx, Apache, Caddy, and custom log formats out of the box
  • Visitor analytics including geolocation, browsers, OS, and referrers
  • WebSocket-powered live HTML reports with no external dependencies

Docker Deployment

docker run -d -p 7890:7890 \
  --name goaccess \
  -v /path/to/nginx/logs:/var/log/nginx:ro \
  -v goaccess_data:/srv/data \
  -e LANG=en_US.UTF-8 \
  allinurl/goaccess \
  --log-format=COMBINED \
  --real-time-html \
  --output=/srv/data/report.html \
  /var/log/nginx/access.log

14. Typebot – Self-Hosted Conversational Form Builder

Typebot is an open-source, self-hosted conversational form and chatbot builder with a visual flow editor. A privacy-friendly alternative to Typeform or Landbot for creating engaging multi-step forms, lead generation flows, and customer-facing chat interfaces without sending data to third-party servers.

Features

  • Visual drag-and-drop flow builder with conditional logic and branching
  • Embeddable as a popup, inline widget, or full-page chat interface
  • Native integrations with Google Sheets, Webhooks, Zapier, and more
  • Detailed analytics per block with drop-off rates and completion tracking

Docker Deployment

docker run -d -p 3000:3000 \
  -e DATABASE_URL=postgresql://user:password@your_db_host:5432/typebot \
  -e NEXTAUTH_URL=http://localhost:3000 \
  -e NEXTAUTH_SECRET=your_nextauth_secret \
  -e ENCRYPTION_SECRET=your_encryption_secret \
  -v typebot_data:/app/data \
  baptistearno/typebot-builder

15. Opengist – Self-Hosted Gist and Code Snippet Manager

Opengist is a lightweight, self-hosted pastebin and code snippet manager powered by Git. A privacy-friendly alternative to GitHub Gist with syntax highlighting, public and private snippets, and a familiar Git-based backend for version-controlled snippet history.

Features

  • Git-backed snippet storage with full revision history per gist
  • Syntax highlighting for 100+ languages with raw and download access
  • Public, unlisted, and private snippet visibility options
  • OAuth login support for GitHub, Gitea, and OpenID Connect

Docker Deployment

docker run -d -p 6157:6157 \
  -e OG_SECRET_KEY=your_secret_key \
  -e OG_EXTERNAL_URL=http://localhost:6157 \
  -v opengist_data:/opengist \
  ghcr.io/thomiceli/opengist

16. Imaginary – Self-Hosted Image Processing Microservice

Imaginary is a fast, self-hosted HTTP microservice for real-time image processing and transformation built on top of libvips. A lightweight alternative to cloud-based image APIs like Cloudinary or Imgix, designed for high-throughput resizing, cropping, and format conversion without external dependencies.

Features

  • Real-time image resizing, cropping, rotation, and format conversion via HTTP
  • Supports JPEG, PNG, WebP, AVIF, GIF, TIFF, and SVG input formats
  • URL-based and multipart form API for flexible integration
  • High performance via libvips – significantly faster than ImageMagick

Docker Deployment

docker run -d -p 9000:9000 \
  -e PORT=9000 \
  -e IMAGINARY_KEY=your_api_key \
  -e IMAGINARY_ALLOWED_ORIGINS=http://localhost \
  h2non/imaginary \
  -enable-url-source \
  -key your_api_key

17. Puppeteer Renderer – Self-Hosted Headless Browser Rendering Service

Puppeteer Renderer is a self-hosted HTTP service that uses a headless Chromium browser via Puppeteer to render JavaScript-heavy pages and generate PDFs or screenshots on demand. A practical microservice for server-side rendering, PDF generation, and web scraping without managing a browser runtime directly in your application.

Features

  • Full-page screenshots and PDF generation via simple HTTP requests
  • Server-side rendering of JavaScript-heavy Single Page Applications
  • Configurable viewport, wait conditions, and page timeout options
  • Lightweight HTTP API – easily callable from PHP, Node.js, or any backend

Docker Deployment

docker run -d -p 3000:3000 \
  --cap-add SYS_ADMIN \
  zenato/puppeteer-renderer:2.4.0

FAQ

What is the difference between a Docker image and a Docker container?

A Docker image is a read-only blueprint — it packages your application code, runtime, libraries, and configuration into a layered, portable file. A container is a live, running instance of that image with its own isolated process, filesystem, and network interface. The relationship is similar to a class and an object in programming: you can spin up many containers from a single image, each running independently. Images are built once and reused; containers are created, started, stopped, and destroyed as needed.
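
To make this concrete, here is a minimal sketch: two independent containers spun up from the same nginx image (the container names are arbitrary):

docker pull nginx:1.27-alpine
docker run -d --name web1 nginx:1.27-alpine
docker run -d --name web2 nginx:1.27-alpine

Both containers share the same read-only image layers; each gets its own writable layer, process tree, and network stack.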

How does Docker differ from a virtual machine?

Virtual machines run a full guest operating system on top of a hypervisor, with each VM requiring its own kernel, OS libraries, and allocated RAM — typically gigabytes per instance. Docker containers share the host operating system’s kernel directly and isolate only the application layer using Linux namespaces and cgroups. The result is that containers start in seconds rather than minutes, consume far less memory, and are far more portable. The trade-off is that containers are less isolated than VMs — a kernel-level vulnerability on the host can theoretically affect all containers running on it.

What is a Dockerfile and how does it work?

A Dockerfile is a plain text script that tells Docker how to build an image step by step. Each instruction — FROM, RUN, COPY, ENV, CMD — creates a new read-only layer on top of the previous one. Docker caches these layers, so unchanged steps are not rebuilt, which dramatically speeds up iterative builds. The FROM instruction always comes first and defines the base image (e.g. FROM php:8.3-fpm-alpine). The final CMD or ENTRYPOINT defines what runs when a container starts. Running docker build -t myapp:1.0 . in the same directory as the Dockerfile produces a tagged image ready to run or push to a registry.
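
As a minimal sketch, assuming a PHP-FPM application (the paths and the extension choice are purely illustrative):

# FROM always comes first and defines the base image
FROM php:8.3-fpm-alpine
# Working directory for all following instructions
WORKDIR /var/www/html
# Each RUN creates a new cached, read-only layer
RUN docker-php-ext-install pdo_mysql
# Copy the build context into the image
COPY . .
# What runs when a container starts
CMD ["php-fpm"]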

What is Docker Compose and when should I use it?

Docker Compose is a tool for defining and running multi-container applications from a single compose.yaml file. Instead of running several docker run commands with long lists of flags, you describe all services, networks, and volumes in one place and bring everything up with docker compose up -d. Use it any time your application has more than one container — a web app with a database and Redis cache is the classic example. As of 2026, the modern CLI plugin syntax is docker compose (with a space); the legacy docker-compose binary with a hyphen has been deprecated and removed.
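
A minimal sketch of that classic example, with the app image and credentials as placeholders:

services:
  app:
    image: myapp:1.0
    ports:
      - "8080:80"
    depends_on:
      - db
      - redis
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: your_password
    volumes:
      - pgdata:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine

volumes:
  pgdata:

A single docker compose up -d then replaces three long docker run commands.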

What is the difference between a named volume and a bind mount?

A named volume (-v mydata:/var/lib/data) is fully managed by Docker and stored inside Docker’s own storage area on the host. Docker handles creation, permissions, and lifecycle — it is the recommended approach for persistent application data like databases. A bind mount (-v /host/path:/container/path) maps a specific directory from your host filesystem directly into the container. Bind mounts are ideal during development for syncing source code in real time, but they expose host paths and permission issues more readily. For production data persistence, always prefer named volumes.
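
Side by side, with PostgreSQL and Nginx standing in as arbitrary images:

# Named volume: Docker creates and manages 'pgdata' in its own storage area
docker run -d -e POSTGRES_PASSWORD=your_password \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16-alpine

# Bind mount: a host directory is mapped directly into the container
docker run -d -p 8081:80 \
  -v "$(pwd)/site:/usr/share/nginx/html:ro" \
  nginx:1.27-alpine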

Why does my container lose all data when I restart or remove it?

By default, any data written inside a container’s filesystem exists only for the lifetime of that container. When the container is removed, the data goes with it. To persist data across restarts and container replacements, you must mount a volume or bind mount at the path where the application writes its data. For example, a PostgreSQL container needs -v pgdata:/var/lib/postgresql/data to retain its databases. This is one of the most common mistakes for Docker beginners — always check the image’s documentation for which paths need to be mounted.

How do container ports and port mapping work?

Containers run in an isolated network namespace with their own internal ports. To make a service accessible from your host or the internet, you must map a host port to a container port using the -p flag: -p 8080:80 maps host port 8080 to container port 80. The left side is always the host, the right side the container. To restrict access to localhost only — useful for services like databases that should never be publicly exposed — bind to the loopback interface: -p 127.0.0.1:5432:5432. In Docker Compose, the same syntax applies under the ports: key.

How do containers communicate with each other?

Containers on the same Docker user-defined network can reach each other by service name — Docker’s internal DNS resolver at 127.0.0.11 handles name resolution automatically. In a Compose stack, all services share a default network unless specified otherwise, so a PHP container can connect to a MySQL container simply using mysql as the hostname. Containers on separate networks or using the default bridge network cannot resolve each other by name. Never use localhost inside a container to reach another container — localhost refers to the container itself, not the host or any sibling service.
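
A minimal Compose sketch: the app container reaches MySQL via the service name (DB_HOST is simply whatever variable your application reads; credentials are placeholders):

services:
  app:
    image: php:8.3-apache
    environment:
      # 'mysql' resolves via Docker's internal DNS, never 'localhost'
      DB_HOST: mysql
  mysql:
    image: mysql:8.4
    environment:
      MYSQL_ROOT_PASSWORD: your_password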

Should I use the latest tag for images in production?

No. The latest tag is a floating pointer that changes whenever the maintainer pushes a new build — pulling it on different days can produce different images without any warning. In production, always pin images to a specific version tag, for example postgres:16.2-alpine3.19 or redis:7.2-alpine. This makes deployments reproducible and prevents unexpected breaking changes from an upstream update. Reserve latest for quick local experiments only, and use tools like Watchtower with include/exclude labels if you want automated controlled updates in production.

How do I pass environment variables into a container securely?

Use an .env file in the same directory as your Compose file and reference variables with ${VARIABLE_NAME} syntax in compose.yaml. Docker Compose loads .env automatically. Never hardcode credentials directly in compose.yaml or commit real .env files to version control — add .env to your .gitignore and commit only an .env.example template with placeholder values. For production environments, consider using Docker Secrets (in Swarm mode) or injecting variables through your CI/CD pipeline’s secret management rather than shipping .env files to the server.
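
A sketch of the pattern, shown as two files (values are placeholders):

# .env (gitignored; commit only an .env.example)
DB_PASSWORD=your_secret_password

# compose.yaml
services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}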

What restart policies are available and which should I use?

Docker offers four restart policies. no (the default) never restarts automatically. always restarts the container whenever it stops, including after a host reboot — useful for core infrastructure services. unless-stopped behaves like always but does not restart if you manually stopped the container before the reboot, which makes it the most practical choice for self-hosted services. on-failure restarts only on a non-zero exit code and accepts an optional maximum retry count. For most self-hosted Docker stacks, restart: unless-stopped is the right default in Compose files.
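
Applied to a service from this list, a minimal Compose sketch for Uptime Kuma:

services:
  uptime-kuma:
    image: louislam/uptime-kuma
    # survives host reboots, but respects a manual docker stop
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma_data:/app/data

volumes:
  uptime-kuma_data: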

How do I view logs from a running container?

Use docker logs <container_name> to print a container’s stdout and stderr output. Add -f to follow logs in real time (like tail -f), --tail 100 to show only the last 100 lines, or --since 1h to show only the last hour’s output. In a Compose stack, docker compose logs -f servicename tails a specific service. By default, Docker uses the json-file logging driver with no size limit — on busy containers this can fill your disk over time. Set a log rotation policy in /etc/docker/daemon.json or per-container in Compose using the logging key with max-size and max-file options.
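
A host-wide rotation sketch for /etc/docker/daemon.json (the values are sensible starting points, not mandates; restart the Docker daemon after changing it):

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}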

How do I get a shell inside a running container?

Run docker exec -it <container_name> /bin/bash to open an interactive bash shell inside a running container. If the image is based on Alpine Linux or another minimal distribution, bash may not be installed — use /bin/sh instead. The -i flag keeps stdin open and -t allocates a pseudo-TTY, which together give you an interactive terminal. From inside the shell you can inspect files, run commands, check environment variables with env, or test network connectivity with ping or curl. In Compose, the equivalent is docker compose exec servicename /bin/sh.

What are multi-stage builds and why should I use them?

Multi-stage builds use multiple FROM instructions in a single Dockerfile to separate the build environment from the runtime environment. A typical pattern builds the application in a full SDK image (with compilers, dev dependencies, and build tools) and then copies only the compiled output into a minimal runtime image like alpine or distroless. The final image contains none of the build toolchain — only what the application needs to run. This dramatically reduces image size, shrinks the attack surface, and speeds up pulls and deployments. For any production image, multi-stage builds should be the default approach rather than the exception.
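
A typical sketch for a Node front-end, assuming the project has a build script that emits a dist/ directory:

# Stage 1: full toolchain, used only at build time
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: minimal runtime; none of the toolchain survives
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html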

How do I make sure a dependent service is ready before my app container starts?

depends_on in Compose controls start order but only waits for the container to start — not for the service inside it to be ready to accept connections. A database container can be running while PostgreSQL is still initializing. The correct solution is to add a healthcheck to the dependency and use depends_on: condition: service_healthy in the dependent service. For example, define a healthcheck on your database service using pg_isready or mysqladmin ping, and Compose will hold the app container until the health check passes. Alternatively, use a lightweight wrapper script like wait-for-it.sh as the container entrypoint.
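
A sketch with PostgreSQL as the dependency (the app image and credentials are placeholders):

services:
  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_PASSWORD: your_password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  app:
    image: myapp:1.0
    depends_on:
      db:
        condition: service_healthy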

How do I reduce the size of my Docker images?

Start with a minimal base image — alpine variants are typically 5–10x smaller than their full Debian or Ubuntu counterparts. Use multi-stage builds to exclude build tools from the final image. Chain RUN commands with && and clean up package caches in the same layer to avoid bloating intermediate layers: RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*. Use a .dockerignore file to exclude node_modules, .git, test files, and local config from the build context. Finally, avoid installing unnecessary packages — every tool added to a runtime image is both storage overhead and a potential vulnerability.
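
A starter .dockerignore along those lines; adjust the entries to your stack:

.git
node_modules
*.log
.env
tests/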

How do I free up disk space taken up by Docker?

Docker accumulates unused images, stopped containers, dangling build cache, and unused networks over time. Run docker system prune to remove all stopped containers, dangling images, unused networks, and build cache in one command. Add -a to also remove images not referenced by any running container — use this carefully on production hosts. To target specific resource types: docker image prune -a for images, docker volume prune for unused volumes (this deletes data — be careful), and docker builder prune for build cache only. Run docker system df first to see exactly how much space each category is consuming.

Should I run containers as root?

No. Running containers as the root user is a significant security risk — if an attacker escapes the container, they have root access to the host. Define a non-root user in your Dockerfile using USER after installing dependencies: RUN addgroup -S appgroup && adduser -S appuser -G appgroup followed by USER appuser. Many official images already run as non-root by default (e.g. the Node.js image uses the node user). For containers that need access to host resources like /var/run/docker.sock, limit the scope to the minimum required rather than using --privileged mode, which effectively removes all container isolation.
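
Those two instructions in a minimal Dockerfile sketch (the binary path is illustrative):

FROM alpine:3.19
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --chown=appuser:appgroup ./app /app
# Everything from here on runs unprivileged
USER appuser
CMD ["/app/run"]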

What is the difference between docker compose up, start, and run?

docker compose up creates and starts all services defined in the Compose file, building images if necessary — it is the primary command for bringing a full stack online. Adding -d runs services in detached (background) mode. docker compose start only starts services that have already been created but are currently stopped — it does not create new containers. docker compose run starts a one-off container for a specific service, overriding the default command — commonly used to run database migrations, management commands, or interactive shells without starting the entire stack. For example: docker compose run --rm app php artisan migrate.

How do I update a running container to a new image version?

With Docker Compose, update the image tag in your compose.yaml, then run docker compose pull to fetch the new image and docker compose up -d to recreate only the containers whose image has changed. Compose handles the stop, remove, and recreate cycle automatically. For a single container managed with docker run, the manual process is: docker pull image:newtag, docker stop container, docker rm container, then docker run with the new tag and the same volume and network flags as before. Always verify your volumes are correctly mounted before removing the old container to avoid data loss.
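
The manual cycle, sketched with Uptime Kuma from this list and assuming the container was originally started with --name uptime-kuma:

# pull the new image, then recreate with the same named volume
docker pull louislam/uptime-kuma
docker stop uptime-kuma
docker rm uptime-kuma
docker run -d --name uptime-kuma -p 3001:3001 \
  -v uptime-kuma_data:/app/data \
  louislam/uptime-kuma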

What is /var/run/docker.sock and why do some containers need it?

/var/run/docker.sock is the Unix socket that the Docker daemon listens on. Mounting it into a container (-v /var/run/docker.sock:/var/run/docker.sock) gives that container the ability to communicate with the Docker daemon on the host — effectively allowing it to manage other containers, pull images, and inspect the Docker environment. Tools like Portainer, Watchtower, and Traefik require this to function. It is a significant security boundary: any container with access to the Docker socket has near-root-equivalent access to the host. Never expose it to untrusted containers or mount it in containers that process user input from the internet.

How do I limit CPU and memory usage for a container?

Resource limits prevent a misbehaving container from starving other services on the same host. In a Compose file, define limits under the deploy.resources key: set limits.memory: 512M to cap RAM and limits.cpus: '1.0' to cap CPU. You can also define reservations to guarantee a minimum allocation. With plain docker run, use --memory 512m and --cpus 1.0 flags. Without limits, a single runaway container (such as one processing a large file or stuck in a loop) can consume all available host resources and bring down your entire stack. Setting sensible limits is especially important on shared or production hosts.
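
In Compose, a sketch capping a placeholder service at one CPU and 512 MB of RAM:

services:
  app:
    image: myapp:1.0
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
        reservations:
          memory: 256M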

What is Docker Hub and are there alternatives?

Docker Hub is the default public registry where Docker pulls images from when no registry is specified. Typing docker pull nginx is shorthand for docker pull docker.io/library/nginx:latest. As of 2025, Docker Hub enforces rate limits for unauthenticated pulls and has tightened its free tier for image retention. Popular alternatives include the GitHub Container Registry (ghcr.io), which many open-source projects now use as their primary registry, as well as self-hosted options like Harbor or a plain Docker Registry container for fully private infrastructure. Always prefer images from verified publishers or official Docker Library images for production use.

When does Docker make sense and when should I use something else?

Docker excels at packaging applications with their dependencies, standardizing development environments, and running self-hosted services reliably on a single host with Docker Compose. It is the right tool for the vast majority of web apps, APIs, databases, and self-hosted tools. It becomes the wrong tool when you need high availability across multiple nodes — at that point, Kubernetes or Docker Swarm becomes relevant. Docker is not ideal for GUI desktop applications, workloads that require direct hardware access below what --device flags support, or anything that depends deeply on a specific kernel version. For single-server self-hosted infrastructure, Docker Compose handles most real-world needs without the operational complexity of an orchestrator.

Let’s Talk!

Looking for a reliable partner to bring your project to the next level? Whether it’s development, design, security, or ongoing support—I’d love to chat and see how I can help.

Get in touch,
and let’s create something amazing together!
