Docker Finger Food
Below you will find docker finger food. This is and will always be a work in progress post.
- Docker Finger Food
- Use an APT proxy from a container
- Delete dangling images
- Don't install apt recommends
- Non interactive apt
- Clean up your apt cache
- Clean up your pip cache
- Manage application logs
- Docker build
- Docker run
- Docker exec
- Docker prune
- Docker useradd
- Export and Import
Use an APT proxy from a container
If you are not using an APT proxy, you should. If you are, then you will find this useful.
In your `Dockerfile`:
ARG APT_PROXY
RUN echo 'Acquire::http { Proxy "'$APT_PROXY'"; }' \
| tee /etc/apt/apt.conf.d/01proxy &&\
apt-get update && apt-get -y install ...
Then, when you build your docker image,
docker build \
--build-arg APT_PROXY="http://apt-cacher:3142" -t your/image .
Credit --> run apt-get with proxy in Dockerfile. To install your apt-cacher container you can try this one, or this other one, or just build your own.
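If you want the same Dockerfile to build with or without a proxy, you can make the config conditional on the build arg. A sketch (the package list is left as an ellipsis, as above):

```dockerfile
ARG APT_PROXY
# only write the proxy config when APT_PROXY is provided,
# and drop it afterwards so the proxy is not baked into the image
RUN if [ -n "$APT_PROXY" ]; then \
      echo "Acquire::http { Proxy \"$APT_PROXY\"; }" > /etc/apt/apt.conf.d/01proxy; \
    fi && \
    apt-get update && apt-get -y install ... && \
    rm -f /etc/apt/apt.conf.d/01proxy
```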
Delete dangling images
As you work with your Dockerfile to build your dream image you will generate dangling images in the process. We all do it. This will help you,
docker rmi $(docker images -q --filter "dangling=true")
Credit --> Dangling images
Don't install apt recommends
In your Dockerfile
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y \
ca-certificates \
wget
The magic is done by the --no-install-recommends parameter.
Non interactive apt
Another trick to consider is to use DEBIAN_FRONTEND=noninteractive as part of the RUN line, which is the preferred option. It is not recommended to use ENV DEBIAN_FRONTEND=noninteractive --> Source.
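The reason is scoping: ENV persists into the final image and affects every later apt-get run, including ones executed inside a running container, while a variable set on the RUN line only lives for that one build step. A minimal sketch (tzdata is just an example of a package with interactive prompts):

```dockerfile
# Not recommended: the setting leaks into the final image
# ENV DEBIAN_FRONTEND=noninteractive

# Preferred: scoped to this single build step
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata
```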
Clean up your apt cache
You don't want to keep your .deb files in your docker image. So you delete them once your dependencies have been installed.
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -y ... && \
    rm -rf /var/lib/apt/lists/*
Clean up your pip cache
In the same way that we don't want to keep .deb files in our docker image, we should not bloat it with cached Python packages.
RUN python -m pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir requests
Manage application logs
In this section I will go through the steps to set up the JSON logging driver. There are multiple options to set up logging drivers; please see the references below.
First you need to set up the Docker daemon logging. To do this you need to edit /etc/docker/daemon.json,
which is the default location on Linux systems.
{
"log-driver": "json-file",
"log-level": "info",
"log-opts": {
"max-size": "5m",
"max-file": "5",
"compress": "true"
}
}
This tells the docker daemon to use json-file as the logging driver, with a maximum log file size of 5 MB; it will rotate logs, keep 5 files, and compress the rotated files.
Once the daemon.json
file is updated, you need to restart dockerd. Any new container will use these settings. All previously created containers will keep their old settings (most probably the defaults).
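On a systemd-based host the restart and a quick sanity check look like this (a sketch, assuming systemd and a local daemon):

```shell
sudo systemctl restart docker
# confirm the daemon picked up the new driver
docker info --format '{{.LoggingDriver}}'
```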
Another aspect to configure is the log delivery mode from the container to the log driver.
docker run -it --log-opt mode=non-blocking --log-opt max-buffer-size=4m ubuntu tail -f /var/log/syslog
The command above uses the non-blocking
mode, which is NOT the default, with a 4 MB buffer. The non-blocking
mode stores log messages in an intermediate per-container ring buffer for consumption by the driver, while the default direct
mode is blocking and delivers messages straight to the driver.
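To check which logging settings a given container was actually created with, you can inspect it (container_name is a placeholder):

```shell
docker inspect --format '{{json .HostConfig.LogConfig}}' container_name
```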
In docker-compose you would do:
version: "3.9"
services:
  some-service:
    image: some-service
    logging:
      driver: "json-file"
      options:
        max-size: "5m"
        max-file: "5"
        compress: "true"
        mode: "non-blocking"
        max-buffer-size: "4m"
Reference:
Docker build
Docker build cheat-sheet
docker build -t user/image:tag .
The -t
parameter sets the image name in the user/image:tag fashion. This tells Docker to build an image from the Dockerfile and context in the current directory (".").
Docker run
# fresh ubuntu
docker run -ti --name bionic ubuntu:latest
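A few more run flags that come up constantly; the image, container name and paths here are only illustrative:

```shell
# detached, auto-removed on exit, port published, read-only bind mount
docker run -d --rm --name web \
  -p 8080:80 \
  -v "$(pwd)/html:/usr/share/nginx/html:ro" \
  nginx
```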
Docker exec
Docker exec cheat-sheet.
docker exec -ti --user container_user container_name bash
This will run bash in interactive mode (-ti) as container_user (a user inside the container) in the container container_name.
Docker prune
This is a continuation of section Delete dangling images. The prune command will help you keep your host clean. As you work with Docker, containers, images, volumes and more will start to pile up.
# to clean up images not used by any container - BE CAREFUL
docker image prune
# to clean up stopped containers - BE CAREFUL
docker container prune
# to clean up unused volumes - not attached to a container - BE CAREFUL
docker volume prune
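Two related commands worth knowing: docker system df shows what is eating disk space, and docker system prune combines the cleanups above in one shot.

```shell
# overview of disk usage per object type
docker system df
# remove all stopped containers, unused networks, dangling images
# and build cache - BE CAREFUL
docker system prune
```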
Docker useradd
Managing the container user and uid/gid is tricky. There are different alternatives, like using the --user
parameter. However, this does not deal with the container's /etc/passwd,
so your container user will be homeless and whoami
will not work. So usually I try to solve the problem at build time. See below.
ARG USER
ARG USER_ID=1000
ARG USER_GID=1000
ARG BASEPATH=/opt/app
RUN groupadd --gid "${USER_GID}" "${USER}" && \
useradd -ms /bin/bash --uid ${USER_ID} --gid ${USER_GID} ${USER} &&\
echo "${USER} ALL=(ALL) NOPASSWD:ALL" | tee -a /etc/sudoers &&\
mkdir ${BASEPATH} && chown ${USER}:${USER} -R ${BASEPATH}
WORKDIR ${BASEPATH}
USER ${USER}
COPY --chown=${USER}:${USER} . .
This will create a user based on build arguments passed at build time, and set the permissions on the image filesystem.
docker build -t user/image \
--build-arg USER=username \
--build-arg USER_ID=$(id -u username) \
--build-arg USER_GID=$(id -g username) \
.
Then, when you run the container,
docker run -it user/image
In docker compose
# This is an example
version: '3.3'
services:
app:
build:
context: .
args:
USER: $USER
USER_ID: $USER_ID
USER_GID: $USER_GID
image: user/image:tag
container_name: "container"
This will require an .env
file in the working directory.
USER=dockeruser
USER_ID=1000
USER_GID=1000
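Rather than writing the .env file by hand, you can generate it from the current host user; this is just a sketch using id:

```shell
# generate .env from the current host user
cat > .env <<EOF
USER=$(id -un)
USER_ID=$(id -u)
USER_GID=$(id -g)
EOF
cat .env
```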
Then run docker-compose like this,
docker-compose up
When I'm working on top of an existing build, for example for PostgreSQL, I would build my image
FROM postgres:latest
ARG UID
ARG GID
ARG OLD_UID
ARG OLD_GID
RUN usermod -u $UID postgres && \
groupmod -g $GID postgres && \
find / -group $OLD_GID -exec chgrp -h postgres {} + && \
find / -user $OLD_UID -exec chown -h postgres {} +
This will keep the postgres user name within the container, and align the uid and gid with the host filesystem.
References:
- Understanding how uid and gid work in Docker containers
- How to add users to a container
- How to Change a USER and GROUP ID on Linux For All Owned Files
Export and Import
To export a container into a tarball
docker export container_name | gzip > container_name.tgz
To import the tarball back as an image
zcat container_name.tgz | docker import - container_name
docker run -i -t container_name bash
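Note that export/import flatten a container's filesystem and drop the image history and tags. If you want to move an image as-is, docker save and docker load are the counterparts:

```shell
# save keeps layers, history and tags, unlike export/import
docker save user/image:tag | gzip > image.tgz
zcat image.tgz | docker load
```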