How to keep a Docker container running after starting several services?
Hi everyone! My name is Stan
I'm new to Docker, and since I see many questions on this topic here, I would like to ask: what methods do you use to keep a Docker container alive and running when it starts multiple services?
I build a Docker container in which a script, start-services.sh, starts services such as a web server and a few additional daemons. In the Dockerfile I write:
CMD ["sh", "/opt/start-services.sh"]
The script starts the services successfully, but then it finishes and the container stops immediately. I have seen several solutions, but I am curious which ones people use in practice and for which tasks.
Please share all the methods you know. Executable code examples are welcome. I look forward to your replies and comments. Also, feel free to upvote this question to improve its visibility.
Answers
Maria Witte
6 months ago
I'd like to add three more variations:
1. Using Docker Compose with the options stdin_open and restart:
When running with Docker Compose, you can set options to keep the container running even if the main process finishes. For instance, in docker-compose.yml you can add:
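A rough sketch (the service name web and the image are placeholders for your own):

services:
  web:
    image: my-web-image
    stdin_open: true         # keep STDIN open, the equivalent of docker run -i
    tty: true                # allocate a pseudo-terminal, the equivalent of docker run -t
    restart: unless-stopped  # restart the container if it exits unexpectedly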
Pros:
- Easy configuration through the compose file.
- Automatically restarts the container in case of failure.
Cons:
- It does not solve the core problem (the main process ending); it only softens it.
- It requires using Docker Compose.
2. Running the container with the flags -t and -d:
Running with the -t flag (pseudo-terminal) together with -d (detached mode) helps keep the container active:
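For example (my-services-image is a placeholder for your image name):

docker run -d -t --name my-services my-services-image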
Pros:
- Simple command.
- Keeping the pseudo-terminal may prevent immediate exit.
Cons:
- This method depends on the specific application; if PID 1 finishes, the container will still stop.
- It is not a solution for process management inside the container.
3. Using "wait" to track the main processās lifecycle:
If your script starts a service, you can record its PID and then call wait. For example:
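A sketch of start-services.sh (nginx is used because the pros below mention it; adapt the commands to your own services):

#!/bin/sh
nginx -g 'daemon off;' &   # run nginx in the foreground, backgrounded by the shell
NGINX_PID=$!               # remember the PID of the last background process
# ...start your additional daemons here...
wait "$NGINX_PID"          # block until nginx exits, then the script (PID 1) ends too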
Pros:
- The container will keep running as long as nginx is running.
- It allows the container to finish correctly when the service stops.
Cons:
- It requires precise configuration and tracking of the PID.
- If the process spawns child processes, it might not work perfectly.
In general, there are a lot of options in Docker...
Tomaso Ruiz
6 months ago
Using options like stdin_open, restart, or the -t/-d flags (from Maria's answer) only works around the problem: it does not change the fact that when the main process (PID 1) stops, the container stops too. It does not solve the issue of properly managing services.
Maria Witte
6 months ago
Yes, without a doubt, monitoring and management need attention. But the question was not about service monitoring and management; it was about listing methods to solve the task.
Johannes Martins
6 months ago
I can recall a few more ways or variations of what has already been mentioned:
1. Running the container with a lightweight init process, such as tini.
This init process is launched as the ENTRYPOINT and handles signal forwarding and child-process management. Its benefits are proper signal handling (SIGTERM, SIGINT, etc.), simplicity, low overhead, and being a universal solution for Docker containers. However, by itself it does not solve the problem of multiple processes: you still need to organize the service startup (for instance, with Supervisor or another method).
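A minimal sketch of such a Dockerfile (assuming a Debian-based image where tini is installed from apt):

FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y tini && rm -rf /var/lib/apt/lists/*
COPY start-services.sh /opt/start-services.sh
# tini becomes PID 1, reaps zombies, and forwards signals to the script
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["sh", "/opt/start-services.sh"]

Alternatively, docker run --init gives you the init binary bundled with Docker without changing the image.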
2. An infinite loop in the script.
At the end of the service startup script, you can add a simple infinite loop. In bash it looks like this:
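# appended to the end of start-services.sh: the loop never exits, so PID 1 stays alive
while true; do
  sleep 60   # wake up once a minute and do nothing
done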
The advantage of an infinite loop is its simplicity and clarity without needing extra software. This method guarantees that the main process (PID 1) does not exit. The clear disadvantage is the lack of a mechanism to manage or monitor child processes, and termination signals may not be handled properly. This method, like exec tail -f /dev/null, is usually used as a temporary solution for development, not for production.
3. Running systemd inside the container.
Running a full systemd in the container to manage services. This allows you to use the standard service manager features. Its advantage is the ability to manage dependencies and logging through systemd, allowing you to start several services with detailed settings. The downside is a more complex container configuration, a larger image size, and increased startup time.
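A very rough sketch of the idea (assuming an Ubuntu base image; the exact run flags depend on the host's cgroup configuration, so treat this as a starting point rather than a recipe):

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y systemd && rm -rf /var/lib/apt/lists/*
# enable the units you need here, e.g. RUN systemctl enable nginx (hypothetical)
CMD ["/lib/systemd/systemd"]

A commonly cited way to run it, giving systemd access to the host's cgroup filesystem:

docker run -d --privileged --cgroupns=host -v /sys/fs/cgroup:/sys/fs/cgroup image-with-systemd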
Tomaso Ruiz
6 months ago
With an infinite loop, it is hard to monitor failures, manage services, and maintain the proper state of the container.
Stan Keller
6 months ago
Hello again! Thank you, everyone, for your answers!
Luca Bianchi
6 months ago
According to modern recommendations, each container should do one task.
If possible, consider splitting your services into separate containers and combining them with Docker Compose. This makes scaling and managing dependencies easier.
docker-compose.yml:
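A sketch (the service names and images are placeholders for your own stack):

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    restart: unless-stopped
  worker:
    build: ./worker          # a hypothetical additional daemon
    restart: unless-stopped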
Maria Witte
6 months ago
Splitting the services in Docker makes it easier to update, scale, and debug each part of your application. This approach fits the microservices concept that is widely used today. I think this is the best option for production.
Tomaso Ruiz
6 months ago
I agree with Maria.
After several years of working with Docker, I'm convinced that for a beginner, following best practices is far better than searching for workarounds. There might be some very specific cases where putting several services in one container works best, but that is not for a Docker beginner.
For that reason, I plan to downvote this question.
Stan Keller
6 months ago
I'm looking for ways to keep the Docker container running on a Raspberry Pi in conjunction with raspbx.
However, I'm not sure if there will be enough hardware resources left to run Docker Compose, so I thought I'd pose my question to the world.
For many people it seems strange or even impossible that Docker can run there, but remember how Doom was run on a home fridge and displayed on its screen =)
Adam Kowalski
6 months ago
Yeah, it looks pretty weird to me.
Igor Schulz
6 months ago
If you prefer not to modify the container's architecture and want to continue launching services via a script, you can simply append an instruction at the end of your start-services.sh that never terminates:
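For example, as the very last line of the script:

# nothing after this line runs: exec replaces the shell with tail, which blocks forever
exec tail -f /dev/null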
Or you can change the Dockerfile so that the main process waits indefinitely:
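A sketch of the CMD (it chains your script with a command that never exits):

CMD ["sh", "-c", "/opt/start-services.sh && tail -f /dev/null"]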
Maria Witte
6 months ago
This approach is suitable for development and testing, but for production it is better to use a process manager or Docker Compose. The command exec tail -f /dev/null does not use many resources and ensures that the main process (PID 1) keeps running.
Tomaso Ruiz
6 months ago
But here a problem arises with monitoring and managing service failures: if one process crashes, the container continues to run and hides the problem.
Stefano Bruns
6 months ago
Modern Docker practice is to run a single process per container.
If you need multiple services, the best method is to use a process manager that keeps the Docker container running and manages the services.
As an illustration, you can use Supervisor in foreground mode.
Dockerfile:
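A sketch (assuming a Debian-based image; nginx stands in for your web server):

FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y supervisor nginx && rm -rf /var/lib/apt/lists/*
COPY supervisord.conf /etc/supervisor/conf.d/services.conf
# -n keeps supervisord in the foreground so it remains PID 1
CMD ["supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]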
supervisord.conf:
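A matching sketch (the program names and the second command are placeholders):

; run nginx in the foreground so Supervisor can track it
[program:nginx]
command=nginx -g "daemon off;"
autorestart=true

; a hypothetical additional daemon
[program:mydaemon]
command=/usr/local/bin/mydaemon
autorestart=true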
Maria Witte
6 months ago
I agree, using Supervisor in Docker helps to centrally manage multiple services and provides automatic restart if they fail. I think this solution is somewhat limited for production environments, since it goes against the "one process per container" rule.
Tomaso Ruiz
6 months ago
In this setup, the Docker container remains active as long as the process with PID 1 (Supervisor, in this case) is running.
However, this does not ensure that every service is working correctly.
I think this can only be used in dev for experimental purposes.