How to debug and SSH into a Docker container

SSH into a local or remote Docker container, and enable SSH for a container running on a remote host.

SSH (Secure Shell) is a network protocol that provides administrators with a secure way to access a remote computer.

In this blog post, I’ll demonstrate how to SSH into a Docker container.

Before we go over how, let’s look at some of the reasons why we would need to SSH into a Docker container:

  1. Debugging a failed docker build
  2. Checking the content of the application and its build inside the docker container
  3. Checking the logs of the application
  4. Installing something inside the container at runtime in order to test something
  5. Checking why a specific process or task is failing inside the docker container

Prerequisites

  • Docker should be installed
  • Basic knowledge of Docker: Dockerfiles, containers, and images

Debugging a Failed Process

Let’s start by debugging a failed step in a docker build. We’ll build a sample React project inside a Node container, and then create an NGINX-based deployable Docker image from that build.

Create a `Dockerfile` at the root of an existing React project. You can also create a new React project by following the React docs.

Add the following content inside the Dockerfile.

FROM node:12-stretch AS builder
COPY .  /app
RUN npm install
RUN CI=true npm test
RUN npm run build
 
FROM nginx
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80

Now, run the following command at the root of the project to build the React app and create an NGINX image from that build.

docker build -t my-react-app .

The build of the application will fail with the following logs.

Sending build context to Docker daemon  634.4kB
Step 1/8 : FROM node:12-stretch AS builder
---> e0782a1551ac
Step 2/8 : COPY .  /app
---> Using cache
---> 88145a7c892c
Step 3/8 : RUN npm install
---> Using cache
---> 7c660577c5bc
Step 4/8 : RUN CI=true npm test
---> Running in 7163f4674319
npm ERR! code ENOENT
npm ERR! syscall open
npm ERR! path /package.json
npm ERR! errno -2
npm ERR! enoent ENOENT: no such file or directory, open '/package.json'
npm ERR! enoent This is related to npm not being able to find a file.
npm ERR! enoent
 
npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2021-02-28T08_15_46_672Z-debug.log
The command '/bin/sh -c CI=true npm test' returned a non-zero code: 254

Looking at the logs, we can see that npm couldn’t find the `package.json` file inside the project.

But we did copy the code from the current directory into the image during the build.

Why is it not able to find the package.json? Let’s dig deeper and see why the problem occurred.

In the output above, we can see that every step is executed in a separate intermediate container, and an image ID is printed for each step. That image is the result of executing the step’s command, and it becomes one layer of the overall image built at the end.

Our 3rd step passed, but the 4th one failed.

We will use the image from the 3rd step to run a container, and then SSH into it to see why the `CI=true npm test` command failed. 

Note that the ID of the image from the 3rd step is 7c660577c5bc in my case; yours will likely be different. You can find your ID in the output under step 3.

Basically, we’ll create a container from the image in its last good state, and then run the failing step on top of that image to see why it fails.
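If you save the build output to a file, this lookup can even be scripted. A minimal sketch, assuming the classic builder’s ` ---> <id>` log format shown above (BuildKit formats its output differently); the `last_good_layer` helper name is made up for illustration:

```shell
# Extract the image ID of the last step that completed successfully from a
# saved build log (e.g. docker build -t my-react-app . 2>&1 | tee build.log).
# Matches only " ---> <12-hex-id>" lines, skipping "---> Running in ..." lines.
last_good_layer() {
  grep -E '^ *---> [0-9a-f]{12}$' "$1" | tail -n 1 | awk '{print $2}'
}
```

You could then start the debug shell with `docker run --rm -it "$(last_good_layer build.log)" bash`.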

Run a container from the image ID mentioned above using the following command, which will drop you into a shell.

docker run --rm -it 7c660577c5bc bash

Now, if you run the same command that failed during the build inside this new shell, you will see the same error.

CI=true npm test

Let’s try to figure out why it is failing and why it can’t find the package.json file inside the current directory.

List the content of the current working directory using `ls`. You will see that you are not inside the React project directory, which pretty much explains why there was no `package.json` present when the `CI=true npm test` command ran.

Thanks to the interactive shell, we figured out what went wrong while running a specific command. The fix is easy: make sure the current working directory is the project root before running any project-related command.

You can leave the container shell with the `exit` command. Because the shell is the container’s main process, exiting it stops the container, and since we started the container with the `--rm` flag, the stopped container is removed automatically.

Now, let’s edit our `Dockerfile` to set the current working directory to project root.

The `Dockerfile` will now look like this.

FROM node:12-stretch AS builder
WORKDIR /app
COPY .  /app
RUN npm install
RUN CI=true npm test
RUN npm run build
 
FROM nginx
COPY --from=builder /app/build /usr/share/nginx/html
EXPOSE 80

Save the dockerfile and run the docker build command again.

docker build -t my-react-app .

This time, you will see that it installs the node modules, runs the tests, builds the React project, and finally copies the build into the NGINX image successfully.

The last few lines of build logs will look similar to this.

Removing intermediate container 929daead2e57
---> 9ecd14421459
Step 7/9 : FROM nginx
---> 992e3b7be046
Step 8/9 : COPY --from=builder /app/build /usr/share/nginx/html
---> Using cache
---> d40cc72df2ad
Step 9/9 : EXPOSE 80
---> Using cache
---> fad959cf4639
Successfully built fad959cf4639
Successfully tagged my-react-app:latest

Once the image is built successfully, it is ready to run locally, or to push to Docker Hub, AWS ECR, or any other registry.

If you would like to run it locally and see the application running, then you can do it like this.

docker run --rm -p 80:80 my-react-app

Now you can visit your React app at http://localhost/

So far, you have learned how to debug a failed build. Now let’s have a look at how to get inside a running Docker container to perform the following tasks:

  1. Checking the content of the application and its build inside the Docker container
  2. Checking the logs of the application
  3. Installing something inside the container at runtime in order to test something
  4. Checking why a specific process or task is failing inside the Docker container

If you have experience deploying, debugging, or managing applications on remote servers, you probably know how to do all of the above on a server. But what if you want to do the same with a Docker container?

Usually, to perform such debugging tasks on a remote server, you would SSH into that server first and then navigate around in a remote shell to find what you are looking for. But if the application is running inside a Docker container, you won’t find its logs or much else related to it on the host server (the server where the container is running).

So in that scenario, you can get a shell inside a Docker container by running the following command.

docker exec -it {container_id} bash

Or 

docker exec -it {container_name} bash

This command drops you into a shell, much like SSHing into a remote server, and you can perform all the same tasks there.

In order to get the docker container id or name, you can run the following command.

docker ps

It will display all running containers; find your container’s ID or name in the list.
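You can also script the lookup instead of scanning the table by eye. A small sketch using `docker ps --format` (a standard docker CLI flag with Go-template output); the `id_by_name` helper name is made up for illustration:

```shell
# Resolve a container ID from its name. Reads `docker ps --format
# '{{.ID}} {{.Names}}'` output on stdin, so it can be piped.
id_by_name() {
  awk -v name="$1" '$2 == name { print $1 }'
}

# Example:
#   docker ps --format '{{.ID}} {{.Names}}' | id_by_name react-container
```

Docker can also do the filtering itself: `docker ps -q --filter name=react-container` (note that this matches any name containing the string, not only exact matches).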

As an example, let’s run a container from the React image we built earlier, then get a shell inside it to inspect the NGINX config, the application logs, or the content of the build directory.

docker run -d --name react-container -p 80:80 my-react-app

We have created a container named `react-container` from our existing image `my-react-app`.

Now, let’s get inside that container to see the content of our build which is deployed inside the nginx directory.

docker exec -it react-container bash

List the content of the directory NGINX serves files from.

ls /usr/share/nginx/html/
50x.html  asset-manifest.json  favicon.ico  index.html  logo192.png  logo512.png  manifest.json  robots.txt  static

Since you are inside the container shell, you can now poke around: check the NGINX or application logs, or install a new package for testing. And don’t worry about messing anything up. If you change some configuration or mistakenly delete some files inside the container, don’t panic: just exit that container, remove it, and recreate it from the same image you used before, and everything will be right back.
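That recovery takes just two commands (using the container and image names from this post):

```shell
docker rm -f react-container    # stop and remove the broken container in one go
docker run -d --name react-container -p 80:80 my-react-app    # recreate it from the same image
```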

How to set up SSH for a remote Docker container and connect to it

It is quite easy to get a shell in a running Docker container when you have access to the host machine where the containers run. But sometimes it is very useful to SSH into a Docker container directly, without needing access to the host machine.

Here are some scenarios where remote SSH access into a Docker container makes sense.

  1. You are running a lot of applications in different Docker containers, and you need to get into a specific container directly without having to SSH into the remote server first.
  2. A developer might not have SSH access to the server, but you would like to give them access to a specific container so that they can’t interfere with other applications.
  3. You provide a hosting service where each user’s application runs in its own container, and you would like to give users shell access to their own application without exposing other users’ data. They get an interactive shell to manage their data, upgrade dependencies for their specific project, or debug their specific application.

In the scenarios above, access control becomes critical: you are giving SSH access to someone who shouldn’t have access to everything. Full SSH access to the server could be dangerous; someone installing unwanted packages or making careless changes could badly impact other running applications, or in some cases put the server in a really bad state.

So if we look at the options, giving access to a specific Docker container seems to be the best solution.

Now let’s look at how we can provide direct SSH access to a remote container.

Create a `Dockerfile` inside a temporary directory with the following content inside it.

FROM ubuntu:14.04
 
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:my_strong_password' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
 
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd
 
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
 
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

Build the docker image by running the following command inside the temporary folder where your Dockerfile is.

docker build -t remote_ssh .

After the image has been built successfully with the name `remote_ssh`, run the following command to start a container based on it.

docker run -d --name test -p 4022:22 remote_ssh

This will run the container and map port 22 of the docker container to port 4022 of the host machine.

Now you can connect to this container remotely using the following command.

ssh root@server_ip -p 4022

server_ip is the IP address of the remote server the container is running on. If you are just testing on the same machine, you can use `localhost` instead.

root is the user which we have specified in our Dockerfile.

4022 is the host port that we mapped to the container’s SSH port (22) above.

After running the above command, it will prompt you for a password; enter the one we set in the Dockerfile, `my_strong_password`.
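Password login for root works for a demo, but key-based authentication is safer. Here is a hedged sketch of the extra Dockerfile instructions (the `id_rsa.pub` filename is an assumption; copy your own public key next to the Dockerfile before building):

```dockerfile
# Assumes your public key sits next to the Dockerfile as id_rsa.pub.
RUN mkdir -p /root/.ssh && chmod 700 /root/.ssh
COPY id_rsa.pub /root/.ssh/authorized_keys
RUN chmod 600 /root/.ssh/authorized_keys
# Turn password login off once the key is in place.
RUN sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
```

You can then connect with `ssh -i ~/.ssh/id_rsa root@server_ip -p 4022` and skip the password prompt.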

If you would like to set up SSH in your own Dockerfile, you can either build an image from this Dockerfile and base your application image on top of that SSH base image, or simply copy the relevant instructions directly into your application’s Dockerfile.

Note that you can have only one `CMD` instruction in a Dockerfile. If your Dockerfile already has a `CMD`, you will need a process supervisor (such as supervisord) in order to run multiple daemon processes in the background when the container starts.
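As a rough sketch of that setup (the program names and the nginx command below are assumptions; point the second program at your own application), a minimal `supervisord.conf` could look like this:

```ini
; Keep supervisord in the foreground so it works as the container's CMD.
[supervisord]
nodaemon=true

; Run the SSH daemon alongside the application.
[program:sshd]
command=/usr/sbin/sshd -D

; Example second process; replace with your application's start command.
[program:app]
command=/usr/sbin/nginx -g "daemon off;"
```

Install it with `apt-get install -y supervisor`, `COPY` this file to `/etc/supervisor/conf.d/supervisord.conf`, and set `CMD ["/usr/bin/supervisord", "-n"]` in the Dockerfile.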