🐳 Basics (List, Pull, Run and Delete)


References:
https://docs.docker.com/get-started/introduction/whats-next/
https://www.youtube.com/watch?v=b0HMimUb4f0&ab_channel=mCoding

docker image ls #list all the images
docker ps #list the running containers
docker ps -a #list all the containers, including stopped ones

#Pull Images
docker pull python #will take around 1GB
docker pull python:3.11-slim #will take around 140MB
docker pull python:3.11-alpine #will take around 50MB
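
To compare the downloaded sizes, list the pulled images and check the SIZE column (exact numbers vary by release):

docker image ls python # filter the list to the python repository and compare the SIZE column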

#slim images - usually Debian-based Linux
#alpine images - Alpine Linux

#Stop Docker container
docker stop <container-name> 

#Delete Docker Container/s
docker container prune #Remove all stopped containers
docker run -p 5000:80 -d --rm nginx # --rm automatically deletes the container when it stops
docker rm -f <container-name> # Force remove a container (running or stopped)

#Delete a Docker image
docker rmi <image_name>:<tag>
docker rmi <image_id>
docker rmi -f <image_name>:<tag>
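
To clean up in bulk instead of deleting images one by one, the standard prune commands:

docker image prune # remove dangling images (untagged layers)
docker image prune -a # remove all images not used by any container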

Start and run a new container using a remote image


docker run --name=base-container -ti ubuntu # -t allocates a terminal, -i keeps STDIN open for interactive use
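
Exiting the shell stops the container but does not delete it; you can reattach to the same container later:

docker start -ai base-container # -a attaches the output, -i reattaches STDIN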

Build Custom Images with names/tags


#The most basic `docker build` command, with no name/tag.
#The final `.` provides the path or URL to the build context. At this location,
#the builder will find the Dockerfile and other referenced files.

docker build .

#build with a name/tag
docker build -t my-username/my-image .

#add another tag for existing image
docker image tag my-username/my-image another-username/another-image:v1

Create and run containers using Images


#This builds a Docker image from a Dockerfile in the current directory (.), and tags it as mysite.
#You now have a custom Docker image called mysite on your machine.
docker build -t mysite .

#This creates and runs a container from the image mysite, gives it a name, and maps ports so you can access it from your browser.
docker run --name my-container-using-mysite -p 3000:80 mysite

docker run -p 3000:80 -d nginx

#-p : for port mapping
#-d : for detached mode | Run the container in the background

Port mapping with nginx


docker run nginx #will pull the image if it's not already pulled

#We can see it running in Docker Desktop, but if we visit localhost in the browser it shows an error.
#This is because, by default, containers are isolated from the host machine.
#So we have to publish the port so the host can reach it.

docker run -p HOST_PORT:CONTAINER_PORT nginx
docker run -p 80:80 nginx

#If you omit the HOST_PORT, Docker maps the container's port 80 to an ephemeral port on the host; you can find it with:
docker ps
#now localhost works in the browser

docker run -P nginx #publish all of the exposed ports configured by the image

Debugging


docker exec -it <container-name-or-id> /bin/bash

#-i : keep STDIN open for an interactive session
#-t : allocate a pseudo-TTY, enabling terminal features like history and autocomplete

docker logs <container-name-or-id> #Get logs
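
For live debugging you will often want to follow logs as they arrive:

docker logs -f <container-name-or-id> # follow the log output, like tail -f
docker logs --tail 100 <container-name-or-id> # show only the last 100 lines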

Publishing


docker push my-username/my-image # requires docker login to the target registry first

Dockerfile


FROM node:20-alpine #define your base image

WORKDIR /app #specify where future commands run and where files are copied inside the container image

COPY package.json yarn.lock ./ #copy only the dependency manifests first (see the note below)

RUN yarn install --production #install dependencies; this layer stays cached until the manifests change

COPY . . #copy the rest of your project files into the container image

EXPOSE 3000

CMD ["node", "./src/index.js"] # start command

Note: For Node-based applications, dependencies are defined in the package.json file. You'll want to reinstall the dependencies when that file changes, but reuse the cached layer when it is unchanged. So start by copying only that file, then install the dependencies, and finally copy everything else. That way, the yarn dependencies are only reinstalled when package.json (or yarn.lock) changes.
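
Related to the COPY . . step: a .dockerignore file keeps unneeded files out of the build context. A minimal sketch; the exact entries depend on your project:

node_modules
.git
*.log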



Overriding container defaults



Overriding the network ports


#You can use the -p option with docker run to map container ports to host ports, allowing you to run multiple instances of a container without port conflicts.
docker run -d -p HOST_PORT:CONTAINER_PORT postgres

Setting environment variables


docker run -e foo=bar postgres env # -e sets an environment variable; the trailing `env` overrides the default command to print the environment

#The .env file is a convenient way to set environment variables for your Docker containers without cluttering your command line
#with numerous -e flags. To use a .env file, pass the --env-file option to the docker run command.

docker run --env-file .env postgres env
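
For illustration, a hypothetical .env file (these variable names are ones the postgres image understands, but the values here are just examples):

POSTGRES_PASSWORD=secret
POSTGRES_USER=postgres
POSTGRES_DB=mydb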

Restricting a container's resource consumption


#You can use the --memory and --cpus flags with the docker run command to restrict how much CPU and memory a container can use
docker run -e POSTGRES_PASSWORD=secret --memory="512m" --cpus="0.5" postgres

#You can use the docker stats command to monitor the real-time resource usage of running containers. 
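For example:

docker stats # live-updating table of CPU, memory, and I/O usage per container
docker stats --no-stream # print a single snapshot and exit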

Run Postgres container in a controlled network


#By default, containers automatically connect to a special network called a bridge network when you run them. This bridge network acts like a virtual bridge, allowing containers on the same host to communicate with each other while keeping them isolated from the outside world and other hosts.

#You can create a custom network with `docker network create` and attach a container to it by passing the --network flag to docker run. All containers started without a --network flag are attached to the default bridge network.

docker network create mynetwork
docker network ls

#Connect Postgres to the custom network by using the following command:
docker run -d -e POSTGRES_PASSWORD=secret -p 5434:5432 --network mynetwork postgres
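
Containers on the same user-defined network can reach each other by name via Docker's built-in DNS. A sketch, assuming we name the database container db:

docker run -d --name db -e POSTGRES_PASSWORD=secret --network mynetwork postgres
docker run -it --rm --network mynetwork postgres psql -h db -U postgres # a second container connects using the name "db"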

Persisting container data



Container volumes


docker volume create log-data # create a volume named log-data
docker run -d -p 80:80 -v log-data:/logs docker/welcome-to-docker # the volume will be mounted (or attached) into the container at /logs

# Mounts a named volume called log-data to the container’s /logs directory.

# Any files written by the container to /logs will be stored in the log-data volume on your host, which persists even if the container is removed.
docker volume ls # list all volumes
docker volume inspect log-data # show details such as the mountpoint on the host
docker volume rm <volume-name-or-id> # remove a volume (only works when the volume is not attached to any container)
docker volume prune # remove all unused (unattached) volumes
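
To see the persistence in action, a quick sketch using a throwaway alpine container:

docker run --rm -v log-data:/logs alpine sh -c 'echo hello > /logs/test.txt' # write into the volume; the container is removed afterwards
docker run --rm -v log-data:/logs alpine cat /logs/test.txt # a brand-new container still sees the file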

Sharing local files with containers



🔹 -v vs --mount


Use volumes for persistent container data, and bind mounts to share real-time code or config files during development.

docker run -v /HOST/PATH:/CONTAINER/PATH -it nginx
docker run --mount type=bind,source=/HOST/PATH,target=/CONTAINER/PATH,readonly nginx

Docker Bind Mount Permissions & Performance Summary



Synchronized File Share


mkdir public_html
cd public_html
touch index.html #add some html content in the file
docker run -d --name my_site -p 8080:80 -v .:/usr/local/apache2/htdocs/ httpd:2.4
# OR
docker run -d --name my_site -p 8080:80 --mount type=bind,source=./,target=/usr/local/apache2/htdocs/ httpd:2.4
# Now go to Docker Desktop
# Open the container
# Go to the Files tab
# Browse to /usr/local/apache2/htdocs/
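# Or verify from the terminal instead of Docker Desktop:
curl http://localhost:8080 # should return the contents of index.html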

Caching


docker build --no-cache .  # Build image without using any cached layers

Multi-container applications


docker build -t mysite-frontend --target runner . # --target builds only up to the named stage in a multi-stage Dockerfile
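
The --target flag refers to a named stage in a multi-stage Dockerfile. A minimal sketch of what such a frontend Dockerfile could look like (the stage names and paths here are assumptions, not taken from the project):

# build stage: install dependencies and compile the frontend
FROM node:20-alpine AS build
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . .
RUN yarn build

# runner stage: serve only the built assets with nginx
FROM nginx:alpine AS runner
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80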

Docker Compose



Example docker-compose.yml


version: "3.8"

services:
  backend:
    image: mysite-backend
    container_name: mysite-backend
    pull_policy: never  # Prevents accidentally pulling from Docker Hub
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: runner  # Use specific stage from Dockerfile
    ports:
      - "8000:8000"

  frontend:
    image: mysite-frontend
    container_name: mysite-frontend
    pull_policy: never
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - "80:80"

Commands


docker compose build       # Build all services
docker compose up          # Start all services
docker compose down        # Stop and remove containers
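
A few more day-to-day compose commands:

docker compose up -d --build   # rebuild images and start all services in detached mode
docker compose logs -f        # follow logs from all services
docker compose ps             # list the compose services and their status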