🐳 Basics (List, Pull, Run and Delete)
References:
https://docs.docker.com/get-started/introduction/whats-next/
https://www.youtube.com/watch?v=b0HMimUb4f0&ab_channel=mCoding
docker image ls #list all images
docker ps #list running containers
docker ps -a #list all containers, including stopped ones
#Pull Images
docker pull python #will take around 1GB
docker pull python:3.11-slim #will take around 140MB
docker pull python:3.11-alpine #will take around 50MB
#slim images - usually Debian-based Linux
#alpine images - Alpine Linux
#Stop Docker container
docker stop <container-name>
#Delete Docker Container/s
docker container prune #Remove all stopped containers
docker run -p 5000:80 -d --rm nginx # --rm automatically deletes the container when it stops
docker rm -f <container-name> # Force remove a container (running or stopped)
#Delete a Docker image
docker rmi <image_name>:<tag>
docker rmi <image_id>
docker rmi -f <image_name>:<tag>
Start and run a new container from a remote image
docker run --name=base-container -ti ubuntu
- -t: Allocates a pseudo-TTY (terminal) | terminal features like history and auto complete
- -i: Keeps STDIN open (interactive mode) | interactive terminal session
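When the shell exits, the container stops (the shell is its main process). A quick sketch of restarting and reattaching, using the container name above:
exit # leave the shell; the container stops
docker start -ai base-container # restart the stopped container and reattach interactively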
Build Custom Images with names/tags
#The most basic `docker build` command builds with no name/tag
#The final `.` in the command provides the path or URL to the build context. At this location,
#the builder will find the Dockerfile and other referenced files.
docker build .
#build with a name/tag
docker build -t my-username/my-image .
#add another tag for existing image
docker image tag my-username/my-image another-username/another-image:v1
Create and run containers using Images
#This builds a Docker image from a Dockerfile in the current directory (.), and tags it as mysite.
#You now have a custom Docker image called mysite on your machine.
docker build -t mysite .
#This creates and runs a container from the image mysite, gives it a name, and maps ports so you can access it from your browser.
docker run --name my-container-using-mysite -p 3000:80 mysite
docker run -p 3000:80 -d nginx
#-p : for port mapping
#-d : for detached mode | Run the container in the background
Port mapping with Nginx
docker run nginx #will pull the image if it's not already present locally
#We can see this running in Docker Desktop, but visiting localhost in the browser shows an error.
#This is because, by default, containers are isolated from the host machine.
#So we have to publish the port so the host can reach it.
docker run -p HOST_PORT:CONTAINER_PORT nginx
docker run -p 80:80 nginx
#If you omit the HOST_PORT, Docker maps the container's port 80 to an ephemeral port on the host; you can find it with:
docker ps
#now localhost works in the browser
docker run -P nginx #publish all of the exposed ports configured by the image
Debugging
docker exec -it <container-name-or-id> /bin/bash
#-i : interactive terminal session
#-t : terminal features like history and auto complete
docker logs <container-name-or-id> #get a container's logs
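To stream logs continuously while debugging, the -f (follow) flag keeps the output open:
docker logs -f <container-name-or-id> # follow log output in real time (Ctrl+C to stop)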
Publishing
docker push my-username/my-image
Dockerfile
FROM node:20-alpine #define your base image
WORKDIR /app # specifies where subsequent commands run and where files are copied inside the container image
COPY package.json yarn.lock ./ # copy only the dependency manifests first (see the note below)
RUN yarn install --production # install dependencies; this layer stays cached until the manifests change
COPY . . # copy the rest of the project files from your machine into the image
EXPOSE 3000
CMD ["node", "./src/index.js"] # start command
Note: For Node-based applications, dependencies are defined in the package.json file. You'll want to reinstall the dependencies if that file changes, but reuse the cached ones if it is unchanged. So copy only the manifest files first, install the dependencies, and finally copy everything else. That way the yarn dependencies are only reinstalled when package.json or yarn.lock changes.
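Relatedly, a .dockerignore file keeps local artifacts like node_modules out of the build context, so COPY . . doesn't invalidate the cache unnecessarily. A minimal example (the file contents are illustrative):
# .dockerignore
node_modules
.git
*.log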
Overriding container defaults
Overriding the network ports
#You can use the -p option with docker run to map container ports to host ports, allowing you to run multiple instances of the container without conflicts.
docker run -d -p HOST_PORT:CONTAINER_PORT postgres
Setting environment variables
docker run -e foo=bar postgres env # -e sets an environment variable; the trailing `env` command prints the container's environment so you can verify it
#A .env file is a convenient way to set environment variables for your containers without cluttering the command line with numerous -e flags. Pass it with the --env-file option to docker run.
docker run --env-file .env postgres env
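For illustration, a .env file is just KEY=value lines, one per line (the values here are hypothetical; POSTGRES_PASSWORD and POSTGRES_DB are variables recognized by the postgres image):
# .env
POSTGRES_PASSWORD=secret
POSTGRES_DB=mydb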
Restricting a container's resource consumption
#You can use the --memory and --cpus flags with the docker run command to restrict how much CPU and memory a container can use
docker run -e POSTGRES_PASSWORD=secret --memory="512m" --cpus="0.5" postgres
#You can use the docker stats command to monitor the real-time resource usage of running containers.
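For example, run this in another terminal while the container is up:
docker stats # live table of CPU, memory, and I/O usage per running container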
Run Postgres container in a controlled network
#By default, containers automatically connect to a special network called a bridge network when you run them. This bridge network acts like a virtual bridge, allowing containers on the same host to communicate with each other while keeping them isolated from the outside world and other hosts.
#You create a custom network with docker network create, and attach a container to it by passing the --network flag to docker run. All containers started without a --network flag are attached to the default bridge network.
docker network create mynetwork
docker network ls
#Connect Postgres to the custom network by using the following command:
docker run -d -e POSTGRES_PASSWORD=secret -p 5434:5432 --network mynetwork postgres
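Containers on the same custom network can reach each other by container name. A quick sketch, assuming you also name the Postgres container db (the --name flag is added here for illustration):
docker run -d --name db -e POSTGRES_PASSWORD=secret --network mynetwork postgres
docker run -it --rm --network mynetwork postgres psql -h db -U postgres # "db" resolves as a hostname on the shared network; enter the password when prompted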
Persisting container data
- For example, if you restart a database container, you might not want to start with an empty database. So, how do you persist files?
Container volumes
- Volumes are a storage mechanism that provides the ability to persist data beyond the lifecycle of an individual container. Think of it like providing a shortcut or symlink from inside the container to outside the container.
docker volume create log-data # create a volume named log-data
docker run -d -p 80:80 -v log-data:/logs docker/welcome-to-docker # the volume will be mounted (or attached) into the container at /logs
# Mounts a named volume called log-data to the container's /logs directory.
# Any files written by the container to /logs will be stored in the log-data volume on your host, which persists even if the container is removed.
docker volume ls
docker volume inspect log-data
- If the volume log-data doesn't exist, Docker will automatically create it for you.
- When the container runs, all files it writes into the /logs folder will be saved in this volume, outside of the container. If you delete the container and start a new container using the same volume, the files will still be there.
- You can attach the same volume to multiple containers to share files between containers.
- Volumes have their own lifecycle beyond that of containers and can grow quite large depending on the type of data and applications you're using.
docker volume ls # list all volumes
docker volume rm <volume-name-or-id> # remove a volume (only works when the volume is not attached to any containers)
docker volume prune # remove all unused (unattached) volumes
Sharing local files with containers
- Containers are isolated, meaning they don't access the host's filesystem by default.
- To persist data or share files between host and container, Docker provides:
- Volumes: Managed by Docker, ideal for persistent storage. Survive container restarts.
- Bind mounts: Link specific host directories/files to the container. Great for development and real-time updates.
🔹 -v vs --mount
- -v /host/path:/container/path: Simple, and auto-creates missing host dirs. Good for quick tasks.
- --mount type=bind,source=/host/path,target=/container/path: More readable and feature-rich. Fails if the host path doesn't exist. Recommended by Docker for better control.
Use volumes for persistent container data, and bind mounts to share real-time code or config files during development.
docker run -v /HOST/PATH:/CONTAINER/PATH -it nginx
docker run --mount type=bind,source=/HOST/PATH,target=/CONTAINER/PATH,readonly nginx
Docker Bind Mount Permissions & Performance Summary
- Bind mounts allow containers to access and share files from the host system.
- Use access flags with the mount to control permissions:
- :ro → Read-only: the container can read but not modify/delete host files.
- :rw → Read-write: the container can read, modify, or delete host files.
Example:
docker run -v /host/dir:/container/dir:rw nginx
- Best practice: Use :ro for config files to prevent accidental modification.
Synchronized File Share
- For large or frequently accessed codebases, bind mounts can be slow.
- Synchronized file shares enhance performance using filesystem caching, especially in VM-based Docker setups (like Docker Desktop).
- They ensure faster and more efficient file access between host and container in development environments.
mkdir public_html
cd public_html
touch index.html #then add some HTML content to the file
docker run -d --name my_site -p 8080:80 -v .:/usr/local/apache2/htdocs/ httpd:2.4
OR
docker run -d --name my_site -p 8080:80 --mount type=bind,source=./,target=/usr/local/apache2/htdocs/ httpd:2.4
# Now open Docker Desktop
# Open the container
# Go to the Files tab
# Browse to /usr/local/apache2/htdocs/ to see the files shared from the host
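You can also check from the command line (assuming curl is available on the host):
curl http://localhost:8080 # the served page reflects edits to index.html on the host immediately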
Caching
- Docker uses a layered file system: each instruction in a Dockerfile creates a new layer.
- When a layer changes, all layers after it are invalidated and must be rebuilt.
- Most instructions are cached by default to improve build speed.
- Using RUN commands to delete files doesn't remove them from earlier layers; it just creates a new layer.
🔒 Tip: Never store sensitive information in Docker images (e.g., API keys, passwords), as layers can be inspected.
docker build --no-cache . # Build image without using any cached layers
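Because deleted or secret files still live in earlier layers, you can inspect what each layer added:
docker history <image_name>:<tag> # list an image's layers and the instruction that created each one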
Multi-stage builds
- Used to separate build environment from the final runtime environment.
- Helps reduce image size and keep the final image clean.
- You can target a specific stage when building:
docker build -t mysite-frontend --target runner .
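A minimal sketch of such a multi-stage Dockerfile (the build stage name, yarn build step, and nginx runtime are illustrative assumptions; only the runner target appears above):
FROM node:20-alpine AS build # build stage: full toolchain
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn build # assumed to emit static files into /app/dist

FROM nginx:alpine AS runner # runtime stage: only the built assets
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80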
Docker Compose
- Managing multiple containers (frontend, backend, DB) individually is complex.
- Docker Compose simplifies orchestration of multi-container applications.
- Define all services in a single docker-compose.yml file.
Example docker-compose.yml
version: "3.8"
services:
  backend:
    image: mysite-backend
    container_name: mysite-backend
    pull_policy: never # Prevents accidentally pulling from Docker Hub
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: runner # Use a specific stage from the Dockerfile
    ports:
      - "8000:8000"
  frontend:
    image: mysite-frontend
    container_name: mysite-frontend
    pull_policy: never
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - "80:80"
Commands
docker compose build # Build all services
docker compose up # Start all services
docker compose down # Stop and remove containers
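A few other compose commands that come in handy during development:
docker compose up -d --build # rebuild images and start all services in the background
docker compose logs -f backend # follow the logs of a single service
docker compose ps # list the services in this compose project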
- pull_policy: never helps avoid pulling images with the same name from Docker Hub.
- Docker Compose lets you run the full stack with a single command, which is great for development and testing!