I'm new to the whole Docker container topic and am currently trying to run multiple Python scripts via a bash script (because that seemed to be the easiest way to run several Python scripts at the same time). Before that I built my image with the following Dockerfile:
FROM debian:buster-slim
ENV PACKAGES1="build-essential git python3"
RUN apt-get update && \
apt-get install -y $PACKAGES1
COPY /mnt /mnt
CMD [ "/bin/bash", "/mnt/setup_bash.sh" ]
to execute setup_bash.sh:
#! /bin/bash
python3 script1.py &
python3 script2.py &
After running the resulting container, it keeps restarting and doesn't stay up. Meanwhile the docker logs command doesn't display any errors, so I'm clueless about what the problem is.
The container's main process exits, so Docker stops the container. You are running both processes in the background, and the main bash script quits immediately. You could:
run one script in the foreground, or
run sleep infinity to keep the main script running, or
refactor it all, and for complex setups consider using a process manager like supervisord
For example, with option 2:
#! /bin/bash
python3 script1.py &
python3 script2.py &
sleep infinity # don't quit
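For option 1, a minimal sketch (assuming script1.py is the long-running main process and script2.py can stay in the background):
#! /bin/bash
python3 script2.py &
exec python3 script1.py   # foreground process keeps the container alive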
As I said in the comments, if your script is exiting before the processes are finished, you can use the wait command to wait for all the scripts to finish before continuing.
#! /bin/bash
python3 script1.py &
python3 script2.py &
wait
echo "Finished!"
Related
I'm using a Python script to send a websocket notification, as suggested here.
The script is _wsdump.py, and I have a script script.sh that is:
#!/bin/sh
set -o allexport
. /root/.env set
env
python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message" &
If I try to dockerize this script with this Dockerfile:
FROM python:3.8-slim-buster
RUN set -xe && \
pip install --upgrade pip wheel && \
pip3 install websocket-client
ENV TZ="Europe/Rome"
ADD utils/_wsdump.py /utils/_wsdump.py
ADD .env /root/.env
ADD script.sh /
ENTRYPOINT ["./script.sh"]
CMD []
I have a strange behaviour:
if I execute docker run -it --entrypoint=/bin/bash mycontainer and then run script.sh manually, everything works fine and I receive the notification.
if I run the container with docker run mycontainer, I see no errors but the notification doesn't arrive.
What could be the cause?
Your script doesn't launch a long-running process; it tries to start something in the background and then completes. Since the script completes, and it's the container's ENTRYPOINT, the container exits as well.
The easy fix is to remove the & from the end of the last line of the script to cause the Python process to run in the foreground, and the container will stay alive until the process completes.
There's a more general pattern of an entrypoint wrapper script that I'd recommend adopting here. If you look at your script, it does two things: (1) set up the environment, then (2) run the actual main container command. I'd suggest using the Docker CMD for that actual command:
# end of Dockerfile
ENTRYPOINT ["./script.sh"]
CMD python3 /utils/_wsdump.py "wss://mywebsocketserver:3000/message" -t "message"
You can end the entrypoint script with the magic line exec "$@" to run the CMD as the actual main container process. (Technically, it replaces the current shell script with a command constructed by replaying the command-line arguments; in a Docker context the CMD is passed as arguments to the ENTRYPOINT.)
#!/bin/sh
# script.sh
# set up the environment
. /root/.env set
# run the main container command
exec "$#"
With this setup you can debug the container by replacing the command part (only), for example
docker run --rm your-image env
to print out its environment. The alternate command env will replace the Dockerfile CMD but the ENTRYPOINT will remain in place.
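In the same spirit, you can get an interactive shell with the entrypoint's environment setup already applied (your-image is still a placeholder name):
docker run --rm -it your-image /bin/sh
Because the entrypoint ends with exec "$@", the shell starts only after . /root/.env has been sourced.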
You install script.sh to the root dir /, but your ENTRYPOINT is defined to run the relative path ./script.sh.
Try changing ENTRYPOINT to reference the absolute path /script.sh instead.
I need a cron job to run a single shell script. This shell script should activate the Python virtualenv and then execute Python scripts.
This is my cron:
0 0 * * 1 /home/ubuntu/crontab_weekly.sh
This is my crontab_weekly.sh file:
cd /home/ubuntu/virtualenvironment/scripts \
&& source /home/ubuntu/venv3.8/bin/activate \
&& python script1.py \
& python script2.py \
& python script3.py \
& python script4.py \
The idea is to enter the directory where the scripts are hosted, then activate the venv, and only then start executing the scripts. The scripts should be executed in parallel.
But in my case only script1.py is executed and the following scripts are not executed.
Where is my problem?
Remember that & means to run the entire previous command asynchronously. This includes anything before a &&. Commands that run asynchronously run in separate processes.
To take a simplified example of your problem, let's say we change directories and run pwd in the background, then run pwd again in the foreground.
#!/bin/sh
cd / && \
pwd \
& pwd
On my computer, this outputs:
/home/nick
/
The cd / was meant to affect both pwd calls, but it only affected the first one, because the second one runs in a different process. (They also printed out of order in this case, the second one first.)
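Just to illustrate the point, you could also group the commands with braces so that the cd applies to both pwd calls (a sketch of the simplified example, not of your full script):
#!/bin/sh
cd / && { pwd & pwd; }
Here both pwd calls run after the cd, so both print /.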
So, how can you write this script in a more robust fashion?
First, I would turn on strict error handling with set -e. This exits the script as soon as any (non-asynchronous) command returns a non-zero exit code. Second, I would avoid the use of &&, because strict error handling deals with this. Third, I would use wait at the end to make sure the script doesn't exit until all of the sub-scripts have exited.
#!/bin/sh
set -e
cd /
pwd &
pwd &
wait
The general idea is that you turn on strict error handling, do all of your setup in a synchronous fashion, then launch your four scripts asynchronously, and wait for all to finish.
To apply this to your program:
#!/bin/sh
set -e
cd /home/ubuntu/virtualenvironment/scripts
. /home/ubuntu/venv3.8/bin/activate  # "." is the POSIX form of "source" and works under /bin/sh
python script1.py &
python script2.py &
python script3.py &
python script4.py &
wait
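As a side note, a common cron-friendly variant is to call the virtualenv's interpreter directly instead of sourcing activate (a sketch, assuming the same paths as above):
#!/bin/sh
set -e
cd /home/ubuntu/virtualenvironment/scripts
/home/ubuntu/venv3.8/bin/python script1.py &
/home/ubuntu/venv3.8/bin/python script2.py &
/home/ubuntu/venv3.8/bin/python script3.py &
/home/ubuntu/venv3.8/bin/python script4.py &
wait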
I have a Python script. The script uses Selenium with Chrome, goes to a website, collects data, and writes it to a CSV file.
This is a very long-running job.
I put the script on the server and ran it, and everything works.
But I need the script to work in the background.
chmod +x createdb.py
nohup python ./createdb.py &
And I see
(env)$ nohup ./createdb.py &
[1] 32257
(env)$ nohup: ignoring input and appending output to 'nohup.out'
After I press Enter I see:
(env)$ nohup ./createdb.py &
[1] 32257
(env)$ nohup: ignoring input and appending output to 'nohup.out'
[1]+ Exit 1 nohup ./createdb.py
Then it runs and immediately writes errors to the file saying that Chrome did not start or a click failed.
I want to point out that if I start the script without nohup, everything works.
What am I doing wrong? How should I run the script?
Thank you very much.
You could create a background daemon (service).
You tagged Ubuntu 16.04, which means you have systemd. For more information on how to set it up, please visit this link.
Create a file called <my_service>.service and put it in /etc/systemd/system.
Your systemd unit could look like this:
[Unit]
Description=my service
After=graphical.target
[Service]
Type=simple
WorkingDirectory=/my_dir
ExecStart=python my_script.py
[Install]
WantedBy=multi-user.target
Then all you have to do is reload the systemd manager and start your service:
sudo systemctl daemon-reload
sudo systemctl start <my_service>
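If you also want the service to start automatically at boot, and to check that it is running, something like this should work (using the unit name from the file above):
sudo systemctl enable <my_service>
sudo systemctl status <my_service>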
You can use the screen command; it works perfectly.
Here is a very good link: https://www.rackaid.com/blog/linux-screen-tutorial-and-how-to/
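A minimal sketch of how that could look (the session name createdb is just an example):
screen -S createdb        # start a named session
python ./createdb.py      # run the script inside it
# detach with Ctrl-a d; the script keeps running
screen -r createdb        # reattach later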
You can use a simple command from the env directory:
(env)$ python /path/to/createdb.py > logger.txt 2>&1 &
This stores the program's output (stdout and stderr) in a file called "logger.txt".
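If you also log out of the shell session, combining this with nohup keeps the process from being stopped by SIGHUP (same placeholder path and log file):
(env)$ nohup python /path/to/createdb.py > logger.txt 2>&1 &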
I'm using Ubuntu 16.04's crontab with @reboot to run a python3 script. The script runs properly on reboot, as I can see from the logged output. However, my script's os.system command is not running. It runs fine when run outside of crontab. My scripts are all executable.
crontab -l output:
SHELL=/bin/bash
@reboot nohup /usr/bin/python3 -u /home/path/scheduler.py >> /path/log.out &
scheduler.py code:
#...(check if web server is running...if not restart)
os.system('nohup /usr/bin/python3 -u /path/webserver/main.py &')
print('this function ran')
When I logged the output of the os.system command, there was no output.
As a side note, I am running Python schedule commands to check the general health of a webserver. crontab doesn't seem to be the right tool for this, so I just use crontab to start my Python scheduler on reboot.
I am using Flask as the webserver, and would use gunicorn and systemctl if I could get them to work... but I couldn't, so this is my workaround.
The point is that the command called by os.system is not in the default PATH.
For example, tcpdump is not in /usr/bin/.
So you can solve the problem by using the full path of the command.
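A quick way to find the absolute path to put into the os.system() call (shown for python3 here; substitute whatever command your script actually invokes):
command -v python3    # prints e.g. /usr/bin/python3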
I was facing the same issue: when we try to run a Python script directly from crontab, it just bypasses the os.system() commands.
Make launcher.sh:
#!/bin/bash
cd /home/pi/
sudo python example.py
Then, make your script executable:
chmod 755 launcher.sh
And at last, add your script to crontab:
crontab -e
and add this line at the end:
@reboot sh /home/pi/launcher.sh
(I set the program to run at each reboot)
I have sample code to run a model in Python which needs to run 4000 times to complete the process. I have created a Docker build using the Dockerfile below.
FROM python:2
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
RUN pip install fbprophet
CMD ["python", "./startup.py"]
Inside the startup.py file I am creating one shell script which has 4000 nohup commands that need to run as Python scripts.
Here is an example of a nohup command that is started at the end of the startup.py script:
nohup python runprocess.py arg1 arg2
My problem is that when I start the container using the docker run command, say the image name is startup-build:
docker run startup-build
This creates the shell script inside the container but starts only 2 or 3 of the nohup commands from the file, not all of them. Ideally, it should start 100 processes at a time, because the script file has a 'wait' command after every 100 lines.
I don't know why this is happening. I am running this Docker image on a GCP Container-Optimized OS VM. The actual problem is that the container started by 'docker run' is not using all the resources available on the VM and is not completing the process on time.
Is it because a Docker image can't run shell commands inside the container in parallel? Or does the nohup command have some limitation?