
Headaches with signal propagation when piping in a Docker container

Recently at work I've been thrown into running some Python scripts in a Docker container (all my previous Docker experience is limited to pulling images from container registries to host some stuff at home). It's a fairly simple script, but I want to do two things at once that I have so far been unable to accomplish: redirect some prints to a file while also letting the script run a cleanup process when it gets a SIGTERM. I'm posting this here because I think this is mainly a signal-handling thing in Linux, but maybe it's more Docker-specific (or even Docker Swarm-specific)?

I'm not on my work computer now, but the entrypoint in the Dockerfile is basically something like this:

    ENTRYPOINT ["/bin/bash", "-c", "python my_script.py | tee some_file.txt"]

Once I started piping, the signal handling in my script stopped working when the containers were shut down. If I've understood it correctly, it's because tee becomes the main process (or at least the main child of the main process, which is bash?), and Python gets pushed into the background and therefore never receives the signal to terminate gracefully. But surely there must be some elegant way to make sure it also gets it?
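
For reference, one pattern I ran into while reading is to exec python from bash and feed tee through process substitution, so that python (rather than bash) ends up as PID 1 and receives the SIGTERM directly. Something like the line below, with the same filenames as above; the -u flag is only there because Python apparently buffers its prints once stdout is no longer a terminal. I haven't actually tried this yet:

    # exec replaces bash with python, so python (with its own SIGTERM handler)
    # becomes PID 1 and gets the signal from Docker directly; tee still
    # receives a copy of stdout through the process substitution that bash
    # sets up before it execs.
    ENTRYPOINT ["/bin/bash", "-c", "exec python -u my_script.py > >(tee some_file.txt)"]

Is that roughly the right idea, or am I misunderstanding how exec interacts with the pipe/redirection?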

And yes, I understand I can rewrite my script to handle this directly, and that is my plan for work tomorrow, but I want to understand this better to broaden my Linux knowledge. My head was spinning after reading up on it (I got lost at trap), though, and I was hoping someone here had a succinct explanation of what is going on under the hood?
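
For what it's worth, the trap examples that lost me seem to boil down to a wrapper script along these lines: bash stays PID 1, catches the SIGTERM itself with trap, and forwards it to python's PID before waiting for it to finish. The entrypoint.sh name is made up and I haven't tested this, so treat it as my rough understanding rather than a working solution:

    #!/bin/bash
    # Hypothetical entrypoint.sh: bash stays PID 1 and forwards the signal.

    # Start python in the background, feeding tee via process substitution
    # so that $! is python's PID ("python | tee &" would make $! the PID of
    # tee, which is the wrong process to signal).
    python -u my_script.py > >(tee some_file.txt) &
    child=$!

    # docker stop sends SIGTERM to PID 1 (this bash); pass it on to python
    # so its cleanup handler gets a chance to run.
    trap 'kill -TERM "$child" 2>/dev/null' TERM INT

    # The first wait returns as soon as the trap fires; if python is still
    # running (doing its cleanup), wait once more so bash does not exit and
    # take the container down before python is done.
    wait "$child"
    if kill -0 "$child" 2>/dev/null; then
        wait "$child"
    fi

(The ENTRYPOINT would then point at that script instead of the inline bash -c.)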
