Shipping code to the server should not be hard
Docker allows applications to be shipped to new machines and run the same way regardless of the local environment. That is, it isolates the application's scripts from the environment running them.
- Docker Hub: Repository of free public images that can be downloaded and run.
- Docker Volumes: The preferred way of storing data needed and created by containers.
- Docker Image: A single file with all the dependencies & config details required to run a program.
- Docker Container: An instance of an image with its own set of hardware resources. Runs the program.
- Docker Client: Tool that receives commands and passes them on to the Docker server.
- Docker Server: Tool that is responsible for creating images, running containers, etc
- Kernel: Governs access between programs running on a machine and the resources available to that machine: hard drive, CPU, & memory.
- Alpine: Used in image titles to indicate a filesystem with the bare minimum for executing a given program.
- Visit the install page and follow instructions
- Access Docker Hub
A Docker Image is a snapshot of a filesystem (e.g., installations of
python, chrome) plus a set of start-up commands (e.g., run Chrome).
Base images should be selected with `FROM` in the Dockerfile based on the
set of included default programs and the needs of the resulting
container.
An instance of a Docker Image that has specific access to the machine’s resources, to run specific programs. It combines 2 (Linux) OS features:
- Namespacing: A feature of operating systems that allows segmentation of hard disk resources (or any other resource: network access, inter-process visibility & communication, users) that are dedicated to specific programs (or group of processes/programs).
- Control Groups (AKA cgroups): Limits the amount of computational resources allotted to a process or group of processes/programs.

Combining these 2 features allows a container to limit which resources its processes can talk to, and the bandwidth they get on those resources. Note that Docker Containers share the kernel with the other processes on the machine, which use the rest of the machine's resources.
- Create a Dockerfile at the application root
- Create a Docker Image (using the Docker Client)
- Build container w/ Docker Image and Docker Client
A file stored at the application root directory. It is essentially a
list of shell-like commands that run in order and exit upon completion. It begins
with configurations (e.g., `pip install ...`), then commands (e.g.,
`RUN python ...`).
The Dockerfile has the following flow:
- Specify a `FROM` base image
- `RUN` some commands to install additional programs
- Specify a `CMD` to run on container startup
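For example, a minimal Dockerfile following this flow might look like the sketch below (the base image, dependency, and start command are illustrative, not from a specific project):

```dockerfile
# Specify a base image with a minimal filesystem
FROM python:3.11-alpine

# Run some commands to install additional programs
RUN pip install flask

# Specify a command to run on container startup
CMD ["python", "app.py"]
```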
- `COPY <root_path> <docker_path>`: Copy `root_path` from the directory where the Dockerfile is located to `docker_path` within the Docker environment. Often needed to make project files available inside the container. `--from=<tag>` allows `COPY`ing from another container phase in the `Dockerfile` (see tagging in `FROM`).
- `CMD`: Default command that is run if not overwritten in the `run`/`start` command. Note: this command is not run at the time of image creation, but rather at the time of container `start`.
- `ENV <name> <value>`: Set environment variable `name` to `value`. Use for permanent values in the Docker environment.
- `EXPOSE <port_num>`: Used by web services to open ports. Otherwise inactive, but may serve as a note to devs about which port to expose in the `run` command.
- `FROM <container-name>`: Define the container type as the pre-built `container-name`, searchable on Docker Hub, or a locally built image. Docker will check for images already downloaded first, then download if the image is missing. The `FROM` source may be tagged with the `AS` command.
- `MAINTAINER`: Who is responsible for this application.
- `RUN <cmd>`: Execute `cmd` within the `FROM` environment. Note: Docker will create a temporary container (out of the `FROM` image) to execute each of these commands as its own `CMD`, then shut that container down and delete it.
- `WORKDIR <dir>`: Change the working directory to `dir` so that any following commands operate on that directory instead of the root of the `FROM` image. This is best practice, as some `COPY` commands may move files with the same name as default files in the image root. If `dir` does not exist by default, Docker will create it.
- `#`: Comments! A `\` at the end of a line may be used to continue a command on the line below.
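As a sketch of the `AS`/`--from` tagging mentioned above (a hypothetical Node build copied into an nginx image; names and paths are illustrative):

```dockerfile
# First phase, tagged 'builder', produces the build artifacts
FROM node:alpine AS builder
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build

# Second phase copies only the built files out of the 'builder' phase
FROM nginx
COPY --from=builder /app/build /usr/share/nginx/html
```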
Docker will cache the intermediate images between `FROM`/`RUN`/`CMD` steps, and will
load those images from cache on future `docker build ...` commands
for any Dockerfile flows in the same order.
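A common way to exploit this caching is to order steps so that rarely-changing ones come first; a sketch (file names are illustrative):

```dockerfile
FROM python:3.11-alpine
WORKDIR /app

# The dependency manifest changes rarely, so this install layer stays cached
COPY requirements.txt .
RUN pip install -r requirements.txt

# Source changes only invalidate the layers from here down
COPY . .
CMD ["python", "app.py"]
```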
Simply executing `docker` will bring up abbreviated help.
`docker <CMD> --help` brings up specific docker help.
- `docker build <path>`: Build the application found at `path` with the enclosed `Dockerfile`.
  - The `-t <docker_id>/<project/repo_name>:<version_num>` option tags the resulting image for convenient use. For example, `docker build -t kdonavin/ostk_proj:latest .`
  - The `-f` option specifies the location/name of the `Dockerfile`. Useful for testing a dev `Dockerfile.dev`.
- `docker commit [-c <CMD>] <container_id>`: Manually create an image from a container that has any number of modifications. The `-c` option overrides the default command with a new `CMD`.
- `docker images`: Show a table of downloaded images.
- `docker run [-aitp] <image> <cmd>`: (very important command) Make and start a container from an image. If the image is not in the local cache, it will be sought on Docker Hub, or another specified repository. Equivalent to running `docker create` + `docker start`. Override default startup commands with `cmd`, e.g., `docker run busybox ls`.
  - `-a` tells Docker to attach any output from the container to the terminal window. Note: the default command for a pre-existing container cannot be replaced here.
  - `-e`: Set an environment variable in the Docker env.
  - `-i` triggers interactive mode, and `-t` attaches the terminal inside the container to the current terminal.
  - `-p <XXXX:YYYY>`: Route incoming requests on port `XXXX` to port `YYYY` inside the container.
  - `-v`: Volumes option. Creates alias links from the container to files on the host machine. E.g., `-v $(pwd):/app` maps all files in the current working directory as volumes attached to `/app`. Note: if any other files were previously installed in the `/app` directory, they will be overwritten by the volume mapping. If a directory should be preserved, the user can "bookmark" it with `-v /app/dir_to_bookmark` (no colon).
- `docker cp <file> <container_name>:/path/to/somewhere`: Copy `file` to `container_name` at the given path.
- `docker create`: Create a Docker container.
- `docker exec [-it] <containerID> <cmd>`: Execute `cmd` within a running container. `-i` allows interactive mode, and `-t` attaches the current terminal to the container's terminal. Without the attached terminal, `STDIN`/`STDOUT` will still be sent back and forth, but without the features of a terminal interface (e.g., auto-complete). To access a generic terminal shell, use `sh`/`bash`/`zsh` for the command.
- `docker logs <id>`: Return the logs of a container (including browser link and token). If a Docker container is started without the `-a` attach flag, `logs` can be used to retrieve the output.
- `docker ps [-a]`: Show running containers. The `-a`/`--all` flag additionally shows any containers ever run on the machine.
- `docker rm <name | id>`: Remove a container.
- `docker start [-a] <name | id>`: Start a pre-existing container (preferable to `run`, which creates a new container first).
- `docker stop/kill <name | id>`: Stop container `name`/`id`. `stop` sends the `SIGTERM` (terminate) signal, allowing container process cleanup (i.e., similar to `CTRL-C`). `stop` automatically transitions to `kill` after 10 seconds, which instead sends `SIGKILL`.
- `docker system prune`: Clean out stopped containers, "dangling" images, "dangling" build caches, and unused networks. Note that this will require re-downloading images from Docker Hub if they are needed again. It is good practice to run this command whenever the user is finished with a task requiring Docker, as containers and images take up disk space.
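Putting the common flags together, a typical build-then-run workflow might look like this (the tag, ports, and paths are illustrative):

```sh
# Build and tag the image from the Dockerfile in the current directory
docker build -t kdonavin/ostk_proj:latest .

# Start a container: attach a terminal, map host port 8080 to container
# port 5000, and mount the working directory at /app inside the container
docker run -it -p 8080:5000 -v $(pwd):/app kdonavin/ostk_proj:latest
```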
A separate CLI to streamline multi-container infrastructure. The tool
uses special syntax corresponding to Docker CLI commands, recorded in a
file called `docker-compose.yml`.
- `services:` The type of container(s) needed, along with the `<options>` required (e.g., `volumes:`, `build:`, etc.)
```yaml
services:
  nameOfService:
    ...<options>
```

Note: any images specified in the services are linked on the same
network and are free to talk to each other. That is, service names function as a
sort of domain name that can be used to gain inter-access between Docker
containers (e.g., access the nameOfService server on port 5000 at
`nameOfService:5000`).

- `build:` Specify the `context:` (project file location) and `dockerfile:` location, or simply allow docker-compose to infer these from `.`
- `restart:` Specify what to do on exit of processes running in the specific service. There are 4 options:
  - `"no"`: default, do not restart. Note the quotes in this case only, to avoid YAML interpreting `no` as a false boolean.
  - `always`: always restart on exit.
  - `on-failure`: only restart if the container stops with an error code (e.g., non-zero exit code).
  - `unless-stopped`: always restart unless we (the devs) forcibly stop it (e.g., `CTRL-C`).

  Examples where restarts are important to consider are a web server (`always` restart) vs. a processing application (`on-failure`, to avoid repeating the process over and over).
- `volumes:` Volumes to attach to the service's container (cf. the `-v` option of `docker run`).
- `command:` Commands to run upon up-ing this service. Written in an array, e.g., `["npm", "run", "test"]`.
- `environment:` A list of env variables to make available at run time (not available to the image on its own).
```yaml
...
environment:
  - REDIS_HOST=redis
  - REDIS_PORT=6379
  - ...
```

Note that `environment:` variables that are not set (i.e., no `=`) will
attempt to be copied from the parent environment (if it exists).
- `up [--build -d]`: Create and start containers. If no compose YAML is specified, Docker Compose uses `docker-compose.yml` in the current directory. Adding `--build` also (re)builds the containers specified in the compose YAML. The `-d` option stands for "detached" and runs the compose services without outputting to the master terminal or entering interactive mode.
- `down`: Stop and remove containers. If none are specified, this command acts on `services` found in the `docker-compose.yml`.
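Putting these options together, a minimal `docker-compose.yml` sketch (service names, ports, and the Redis image are illustrative):

```yaml
version: "3"
services:
  redis:
    image: redis:alpine            # pre-built image pulled from Docker Hub
  api:
    build:
      context: .
      dockerfile: Dockerfile.dev   # build from a dev Dockerfile
    restart: on-failure
    ports:
      - "8080:5000"                # host:container
    environment:
      - REDIS_HOST=redis           # service name doubles as the hostname
      - REDIS_PORT=6379
```

Then `docker-compose up --build -d` builds and starts both services together.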
- Travis CI: Testing tool run on the master branch of a code-base before it is (automatically) integrated with the Docker host.
A Continuous Integration/Continuous Deployment workflow tool built into GitHub. GitHub Actions uses YAML files to define workflows, which are automated processes that run one or more jobs. These jobs are sets of steps that execute on the same runner, which is a virtual machine. GitHub Actions revolves around 4 main concepts:
- triggers (when to run)
- jobs (what to do)
- steps (how to do it)
- actions (reusable units of code)
```
project-root/
├── .github/
│   └── workflows/
│       └── main.yml
└── (rest of your project files)
```
The .github/workflows directory in your repository root is where you
place your workflow files.
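A minimal `main.yml` sketch showing all four concepts (the trigger branch and the Node commands are illustrative assumptions):

```yaml
name: CI                            # workflow name shown in the Actions tab
on:
  push:
    branches: [main]                # trigger: when to run
jobs:
  test:                             # job: what to do, on one runner
    runs-on: ubuntu-latest
    steps:                          # steps: how to do it
      - uses: actions/checkout@v4   # action: reusable unit of code
      - run: npm install
      - run: npm test
```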
- The Elastic Beanstalk application is great for app deployment because it automatically scales up VMs running Docker with our container inside as request load increases.
NGINX is a versatile software platform used as a web server, reverse proxy, load balancer, and more. Particularly useful in multi-container Docker apps as a reverse proxy and a gateway between clients and backend (i.e., “upstream”) server containers.
Using an `nginx/default.conf` file in an `nginx` dir of a Docker Compose
app, a single gateway port may be specified:
```nginx
server {
    listen 80;
    ...
}
```

As well as rules about how to access different Docker containers within:
```nginx
server {
    ...
    location /api/ {
        rewrite /api/(.*) /$1 break;  # Remove /api prefix
        proxy_pass http://api;        # To the 'api' service
    }
}
```

The official Docker nginx image may be found here: https://hub.docker.com/_/nginx
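A minimal sketch of wiring this config into the app, assuming the `nginx/default.conf` above (the official image reads site config from `/etc/nginx/conf.d/default.conf`):

```dockerfile
# nginx/Dockerfile: start from the official image and replace
# the default site config with the reverse-proxy rules above
FROM nginx
COPY default.conf /etc/nginx/conf.d/default.conf
```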