How to keep a Docker container running after starting several services? (Edited)

Hi everyone! My name is Stan

I’m new to Docker, and since I see many questions on this topic here, I’d like to ask: what methods do you use to keep a Docker container alive and running when you start multiple services in it?

I build a Docker container in which the script start-services.sh starts services such as a web server and a few additional daemons. In the Dockerfile I write:

CMD ["sh", "/opt/start-services.sh"]

The script starts the services successfully, but then it finishes and the container stops immediately. I have seen several solutions, but I am curious which of them you use in practice and for which tasks.

Please share any methods you know. Executable code examples are welcome. I look forward to your replies and comments. Also, feel free to upvote this question to improve its visibility.

Stan Keller

6 months ago

17 answers

133 views

Rating

08
Answer

Answers

Maria Witte

6 months ago

1 comment

Edited

Rating

00

I’d like to add three more variations:

1. Using Docker Compose with the options stdin_open and restart:

When running with Docker Compose, you can set options that keep the container running even if the main process finishes. For instance, in docker-compose.yml you can add:

services:
  app:
    image: myapp:latest
    stdin_open: true
    restart: always

Pros:

– Easy configuration through the compose file.

– Automatically restarts the container in case of failure.

Cons:

– It does not solve the core problem (the main process ending), it only softens it.

– It requires using Docker Compose.

2. Running the container with the flags -t and -d:

Running with the -t flag (pseudo-terminal) together with -d (detached mode) helps keep the container active:

docker run -td myapp:latest

Pros:

– Simple command.

– Keeping the pseudo-terminal may prevent immediate exit.

Cons:

– This method depends on the specific application; if PID 1 finishes, the container will still stop.

– It is not a solution for process management inside the container.

3. Using "wait" to track the main process’s lifecycle:

If your script starts a service, you can record its PID and run wait. Note that the shell builtin wait only works on the script’s own child processes, so the service must be started in the foreground and put in the background with &, rather than left to daemonize itself. For example:

#!/bin/bash
# Start nginx in the foreground so it stays a child of this script
nginx -g 'daemon off;' &
NGINX_PID=$!
# Block until nginx exits; the container stops with it
wait $NGINX_PID

Pros:

– The container will keep running as long as nginx is running.

– It allows the container to finish correctly when the service stops.

Cons:

– It requires precise configuration and tracking of the PID.

– If the service forks and the original PID exits, wait returns even though the service may still be running.
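A small variation on the wait approach adds signal forwarding, so that docker stop (which sends SIGTERM to PID 1) shuts the service down cleanly. In this sketch, sleep stands in for a real daemon such as nginx:

```shell
#!/bin/sh
# Stand-in for a real long-running daemon (replace with your service)
sleep 1 &
CHILD_PID=$!
# Forward termination signals to the child so `docker stop` works cleanly
trap 'kill -TERM "$CHILD_PID"' TERM INT
# Block until the child exits; the container lives as long as the service
wait "$CHILD_PID"
echo "service exited, container would stop now"
```

The trap matters because PID 1 in a container does not get default signal handlers; without it, docker stop waits for its timeout and then sends SIGKILL.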

In general, there are a lot of options in Docker...

Reply

    Tomaso Ruiz

    6 months ago

    1 comment

    Rating

    00

Using options like stdin_open, restart, or the -t/-d flags (from Maria) only works around the problem: none of them changes the fact that if the main process (PID 1) stops, the container stops. This does not solve the issue of properly managing services.

    Reply

      Maria Witte

      6 months ago

      Rating

      00

      Yes, without a doubt, monitoring and management need attention. But the question was not about service monitoring and management—it was about listing methods to solve the task.

      Reply

Johannes Martins

6 months ago

1 comment

Rating

00

I can recall a few more ways or variations of what has already been mentioned:

1. Running the container with a light init process, such as tini.

This init process is launched as the ENTRYPOINT and handles signal forwarding and reaping of child processes. Its benefits include proper signal handling (SIGTERM, SIGINT, etc.), simplicity, low overhead, and being a universal solution for Docker containers. However, by itself it does not solve the problem of multiple processes – you still need to organize the service startup (for instance, using Supervisor or another method).
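A minimal sketch of this setup, assuming a Debian/Ubuntu base where the tini package installs to /usr/bin/tini (the image tag and paths are assumptions):

```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y tini
COPY start-services.sh /opt/start-services.sh
# tini runs as PID 1, forwards signals, and reaps zombies;
# the start script runs as its child
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["sh", "/opt/start-services.sh"]
```

Alternatively, `docker run --init` gives the same behavior without modifying the image, since Docker bundles tini.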

2. An infinite loop in the script.

At the end of the service startup script, you can add a simple infinite loop. In bash it looks like this:

while true; do
  sleep 1000
done

The advantage of an infinite loop is its simplicity and clarity without needing extra software. This method guarantees that the main process (PID 1) does not exit. The clear disadvantage is the lack of a mechanism to manage or monitor child processes, and termination signals may not be handled properly. This method, like exec tail -f /dev/null, is usually used as a temporary solution for development, not for production.

3. Running systemd inside the container.

Running a full systemd in the container to manage services lets you use the features of a standard service manager. Its advantage is the ability to manage dependencies and logging through systemd, allowing you to start several services with detailed settings. The downside is a more complex container configuration, a larger image size, and increased startup time.
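A rough sketch of such an image, assuming an Ubuntu base (the package names and run flags are assumptions and vary by distribution and host):

```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y systemd apache2
# Tell systemd which services to start at boot
RUN systemctl enable apache2
# systemd itself must run as PID 1
CMD ["/lib/systemd/systemd"]
```

Booting systemd in a container usually needs extra privileges, for example something like `docker run -d --tmpfs /run --tmpfs /run/lock --cgroupns=host -v /sys/fs/cgroup:/sys/fs/cgroup:rw myimage`; the exact flags depend on the host’s cgroup version.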

Reply

    Tomaso Ruiz

    6 months ago

    Rating

    00

    With an infinite loop, it is hard to monitor failures, manage services, and maintain the proper state of the container.

    Reply

Stan Keller

6 months ago

Rating

00

Hello again! Thank you, everyone, for your answers!

Reply

Luca Bianchi

6 months ago

3 comments

Rating

00

According to modern recommendations, each container should do one task.

If possible, consider splitting your services into separate containers and combining them with Docker Compose. This makes scaling and managing dependencies easier.

docker-compose.yml:

version: "3.9"
services:
  web:
    image: mywebserver:latest
    ports:
      - "80:80"
  daemon:
    image: mydaemon:latest

Reply

    Maria Witte

    6 months ago

    Rating

    00

Splitting the services in Docker makes it easier to update, scale, and debug each part of your application. This approach fits the microservices concept that is widely used today. I think this is the best option for production.

    Reply

    Tomaso Ruiz

    6 months ago

    Rating

    00

I agree with Mark.

After several years of working with Docker, I'm convinced that for a beginner, following best practices is far better than searching for workarounds. There might be some very specific cases where putting several services in one container works best, but that is not for a Docker beginner.

    For that reason, I plan to downvote this question.

    Reply

    Stan Keller

    6 months ago

    1 comment

    Edited

    Rating

    00

I'm looking for ways to keep the Docker container running on a Raspberry Pi in conjunction with raspbx.

    However, I'm not sure if there will be enough hardware resources left to run Docker Compose, so I thought I'd pose my question to the world.

For many people it seems strange or even impossible that Docker can run there, but remember how Doom was run on a home fridge, with the picture shown on its screen =)

    Reply

      Adam Kowalski

      6 months ago

      Rating

      00

      Yeah, it looks pretty weird to me.

      Reply

Igor Schulz

6 months ago

1 comment

Edited

Rating

00

If you prefer not to modify the container’s architecture and want to continue launching services via a script, you can simply append an instruction at the end of your start-services.sh that never terminates:

#!/bin/bash
# Start necessary services
service apache2 start
service mydaemon start
# Keep the container active
exec tail -f /dev/null

Or you can change the Dockerfile so that the main process waits indefinitely:

CMD ["sleep", "infinity"]

Reply

    Maria Witte

    6 months ago

    1 comment

    Rating

    00

    This approach is suitable for development and testing, but for production it is better to use a process manager or Docker Compose. The command exec tail -f /dev/null does not use many resources and ensures that the main process (PID 1) keeps running.

    Reply

      Tomaso Ruiz

      6 months ago

      Rating

      00

      But here a problem arises with monitoring and managing service failures – if one process crashes, the container continues to run and hides the problem.
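One way to at least make such failures visible is a container healthcheck, so the container is marked unhealthy when a service dies. A hypothetical Compose sketch (the service name, image, and probed port are assumptions, and curl must exist in the image):

```yaml
services:
  app:
    image: myapp:latest
    healthcheck:
      # Mark the container unhealthy if the web server stops answering
      test: ["CMD", "curl", "-f", "http://localhost:80/"]
      interval: 30s
      timeout: 5s
      retries: 3
```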

      Reply

Stefano Bruns

6 months ago

1 comment

Edited

Rating

00

Modern Docker practice is to run a single process per container.

If you need multiple services, the best method is to use a process manager that keeps the Docker container running and manages the services.

As an illustration, you can use Supervisor in foreground mode.

Dockerfile:

FROM ubuntu:latest

# Install necessary packages
RUN apt-get update && apt-get install -y apache2 supervisor

# Copy Supervisor configuration file
COPY supervisord.conf /etc/supervisord.conf

# Copy scripts or other files if needed
COPY start-services.sh /opt/start-services.sh

# Run Supervisor in foreground mode
CMD ["/usr/bin/supervisord", "-n"]

supervisord.conf:

[supervisord]
nodaemon=true

[program:webserver]
command=/usr/sbin/apache2ctl -D FOREGROUND
autorestart=true

[program:daemon]
command=/usr/local/bin/mydaemon --run
autorestart=true

Reply

    Maria Witte

    6 months ago

    1 comment

    Rating

    00

I agree, using Supervisor in Docker helps to centrally manage multiple services and provides automatic restart if they fail. Still, I think this solution is somewhat limited for production environments, not least because it goes against the "one process per container" rule.

    Reply

      Tomaso Ruiz

      6 months ago

      Edited

      Rating

      00

      In this setup, the Docker container remains active as long as the process with PID 1 (Supervisor, in this case) is running.

      However, this does not ensure that every service is working correctly.

I think this can only be used in dev environments for experimental purposes.

      Reply