Cron activate virtualenv and run multiple python scripts from shell script - python

I need a cron to run a single shell script. This shell script should activate the python virtualenv and then execute python scripts.
This is my cron:
0 0 * * 1 /home/ubuntu/crontab_weekly.sh
This is my crontab_weekly.sh file:
cd /home/ubuntu/virtualenvironment/scripts \
&& source /home/ubuntu/venv3.8/bin/activate \
&& python script1.py \
& python script2.py \
& python script3.py \
& python script4.py \
The idea is to enter the directory where the scripts are hosted, then activate the venv, and only then start executing the scripts. The scripts should be executed in parallel.
But in my case only script1.py is executed and the following scripts are not executed.
Where is my problem?

Remember that & means to run the entire previous command asynchronously. This includes anything before a &&. Commands that run asynchronously run in separate processes.
To take a simplified example of your problem, let's say we asynchronously change directories, run pwd, and asynchronously run pwd again.
#!/bin/sh
cd / && \
pwd \
& pwd
On my computer, this outputs:
/home/nick
/
The cd / was meant to affect both pwd calls, but it only affected the first one, because the second one runs in a different process. (They also printed out of order in this case, the second one first.)
So, how can you write this script in a more robust fashion?
First, I would turn on strict error handling with set -e. This exits the script as soon as any (non-asynchronous) command returns a non-zero exit code. Second, I would avoid the use of &&, because strict error handling deals with this. Third, I would use wait at the end to make sure the script doesn't exit until all of the sub-scripts have exited.
#!/bin/sh
set -e
cd /
pwd &
pwd &
wait
The general idea is that you turn on strict error handling, do all of your setup in a synchronous fashion, then launch your four scripts asynchronously, and wait for all to finish.
To apply this to your program:
#!/bin/sh
set -e
cd /home/ubuntu/virtualenvironment/scripts
. /home/ubuntu/venv3.8/bin/activate   # "." is the POSIX equivalent of "source"; /bin/sh is often dash, which has no "source" builtin
python script1.py &
python script2.py &
python script3.py &
python script4.py &
wait
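A common alternative sketch, using the same paths as the question: invoke the virtualenv's own interpreter directly, which makes sourcing activate unnecessary.
#!/bin/sh
set -e
cd /home/ubuntu/virtualenvironment/scripts
/home/ubuntu/venv3.8/bin/python script1.py &
/home/ubuntu/venv3.8/bin/python script2.py &
/home/ubuntu/venv3.8/bin/python script3.py &
/home/ubuntu/venv3.8/bin/python script4.py &
wait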

Related

Docker: Container permanently restarting

I'm new to the whole Docker container topic and am currently trying to run multiple Python scripts via a bash script (because it seemed to be the easiest way to run multiple Python scripts at the same time). Before that, I built my image via the following Dockerfile
FROM debian:buster-slim
ENV PACKAGES1="build-essential git python3"
RUN apt-get update && \
apt-get install -y $PACKAGES1
COPY /mnt /mnt
CMD [ "/bin/bash", "/mnt/setup_bash.sh" ]
to execute the setup_bash.sh
#! /bin/bash
python3 script1.py &
python3 script2.py &
After running the resulting container, it keeps restarting and doesn't stay active. Meanwhile the docker logs command doesn't display any errors, so I'm kind of clueless what the problem is.
The container's main process exits, so Docker stops the container. You are running both scripts in the background, and the main bash script quits immediately. You could:
run one script in the foreground, or
run sleep infinity to keep the main script running, or
refactor it all; for complex setups, consider using service management like supervisord (see the sketch after the examples below)
For example, with option 2:
#! /bin/bash
python3 script1.py &
python3 script2.py &
sleep infinity # don't quit
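For option 3, a minimal supervisord setup might look roughly like this. The file names, paths, and settings are assumptions for illustration, not from the question, and the supervisor package would need to be installed in the image first. A hypothetical /etc/supervisor/supervisord.conf:
[supervisord]
nodaemon=true

[program:script1]
command=python3 /mnt/script1.py
autorestart=true

[program:script2]
command=python3 /mnt/script2.py
autorestart=true
The Dockerfile would then run supervisord as the container's main (foreground) process, so the container stays up as long as supervisord does:
CMD ["supervisord", "-c", "/etc/supervisor/supervisord.conf"]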
As I said in the comments, if your script is exiting before the processes are finished, you can use the wait command to wait for all the scripts to finish before continuing.
#! /bin/bash
python3 script1.py &
python3 script2.py &
wait
echo "Finished!"

How to check if a script is being called from terminal or from another script

I am writing a Python script, and I want to execute some code only if the script is being run directly from the terminal and not from another script.
How can I do this in Ubuntu without using any extra command-line arguments?
The answer here doesn't work:
Determine if the program is called from a script in Python
Here's my directory structure
home
|-testpython.py
|-script.sh
script.sh contains
./testpython.py
When I run ./script.sh, I want one thing to happen.
When I run ./testpython.py directly from the terminal, without going through script.sh, I want something else to happen.
How do I detect the difference in how the script was called? Getting the parent process name always returns "bash" itself.
I recommend using command-line arguments.
script.sh
./testpython.py --from-script
testpython.py
import sys

if "--from-script" in sys.argv:
    ...  # From script
else:
    ...  # Not from script
You should probably be using command-line arguments instead, but this is doable. Simply check if the current process is the process group leader:
$ sh -c 'echo shell $$; python3 -c "import os; print(os.getpid.__name__, os.getpid()); print(os.getpgid.__name__, os.getpgid(0)); print(os.getsid.__name__, os.getsid(0))"'
shell 17873
getpid 17874
getpgid 17873
getsid 17122
Here, sh is the process group leader, and python3 is a process in that group because it is forked from sh.
Note that all processes in a pipeline are in the same process group and the leftmost is the leader.
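As a minimal sketch (an illustration of that check, not code from the linked answer), testpython.py could do:
import os

# An interactive shell with job control runs each command line in its own
# process group, so a script launched directly from the terminal is the
# group leader. When script.sh runs testpython.py, both share script.sh's
# process group, so testpython.py is not the leader.
if os.getpid() == os.getpgid(0):
    print("started directly from the terminal")
else:
    print("started from another script")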

Running Python process with cronjob and checking it is still running every minute

I have a Python script that I'd like to run from a cronjob and then check every minute to see if it is still running and if not then start it again.
Cronjob is:
/usr/bin/python2.7 /home/mydir/public_html/myotherdir/script.py
There is some info on this, but most answers don't really detail the full process clearly, e.g.:
Using cron job to check if python script is running
In that case, for example, it doesn't state how to run the initial process and record the PID. It leaves me with a lot of questions, unfortunately.
Therefore, could anyone give me a simple guide to how to do this?
e.g. full shell script required, what command to start the script, and so on.
There is now a better way to do this via a utility called flock, directly from the cron task, within a single line. flock acquires a lock before it runs your app and releases it once your app exits. The format is as follows:
* * * * * /usr/bin/flock -n /tmp/fcj.lockfile <your cron job>
For your case, this'd be:
* * * * * /usr/bin/flock -n /tmp/fcj.lockfile /usr/bin/python2.7 /home/mydir/public_html/myotherdir/script.py
-n tells it to fail immediately if it cannot acquire the lock. There is also a -w <seconds> option, which fails only if the lock cannot be acquired within the given time frame.
* * * * * /usr/bin/flock -w <seconds> /tmp/fcj.lockfile <your cron job>
It's not that hard. First, set up the crontab to run a checker every minute:
* * * * * /home/mydir/check_and_start_script.sh
Now in /home/mydir/check_and_start_script.sh,
#!/bin/bash
pid_file='/home/mydir/script.pid'
if [ ! -s "$pid_file" ] || ! kill -0 $(cat $pid_file) > /dev/null 2>&1; then
echo $$ > "$pid_file"
exec /usr/bin/python2.7 /home/mydir/public_html/myotherdir/script.py
fi
This checks to see if there's a file with the process id of the last run of the script. If it's there, it reads the pid and checks if the process is still running. If not, then it puts the pid of the currently running shell in the file and executes the python script in the same process, terminating the shell. Otherwise it does nothing.
Don't forget to make the script executable
chmod 755 /home/mydir/check_and_start_script.sh
You should actually take a step back and think about what you are doing and why.
Do you actually need to start your long_running program from cron? The reason I ask is that you then write another program (a watcher) that starts your long_running program. I hope you see that you do not actually need cron to start your long_running program.
So what you have is a watcher firing every minute and starting the long_running program if it is not running.
Now you need to understand what you are trying to do, because there are many ways this can be done. If this is an exercise, the watcher can call ps, work out from the output whether the program is running, and start it if not.
In bash you can do
[ -z "$(ps -ef | grep -v grep | grep myprogname)" ] && { echo runit; }
You just replace myprogname with the actual name, and replace echo runit with the command that starts your long_running code in the background; a sketch of such a watcher is shown below.
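A sketch of such a watcher, reusing the paths from the question (the log file location is an assumption):
#!/bin/bash
# run from cron every minute; start script.py only if it is not already running
if [ -z "$(ps -ef | grep -v grep | grep 'myotherdir/script.py')" ]; then
    nohup /usr/bin/python2.7 /home/mydir/public_html/myotherdir/script.py >> /home/mydir/script.log 2>&1 &
fi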
However, for anything real you should use a proper tool instead: systemd, daemontools, or similar.
In your crontab file, have this code:
command='/usr/bin/python2.7 /home/mydir/public_html/myotherdir/script.py'
*/1 * * * * pgrep -af "$command" || $command
Explanation:
Every minute, pgrep checks whether the Python script is already running. If it is, the second command is not called, because of the ||. If pgrep fails to find the Python script among the existing processes, the second command is executed and the Python script is started again.

Have python script run in background of unix

I have a python script that I want to execute in the background on my unix server. The catch is that I need the python script to wait for the previous step to finish before moving onto the next task, yet I want my job to continue to run after I exit.
I think I can set up as follows but would like confirmation:
An excerpt of the script looks like this, where command 2 is dependent on the output from command 1, since command 1 outputs an edited executable file in the same directory. I would like to point out that commands 1 and 2 do not have nohup/& included.
subprocess.call('unix command 1 with options', shell=True)
subprocess.call('unix command 2 with options', shell=True)
If when I initiate my python script like so:
% nohup python python_script.py &
Will my script run in the background since I explicitly did not put nohup/& in my scripted unix commands but instead ran the python script in the background?
Yes: by running your python script with nohup (no hangup), your script won't keel over when the network connection is severed, and the trailing & runs your script in the background.
You can still view the output of your script: nohup redirects stdout to the nohup.out file. You can babysit the output in real time by tailing that file:
$ tail -f nohup.out
A quick note about the nohup.out file, from the man page:
nohup.out    The output file of the nohup execution if standard output is a terminal and if the current directory is writable.
Alternatively, redirect the output yourself and append & to run the python script in the background, then tail your own log file:
$ nohup python python_script.py > my_output.log &
$ tail -f my_output.log
You can use nohup
chmod +x /path/to/script.py
nohup python /path/to/script.py &
Or
Instead of closing your terminal, use logout. A logout does not trigger a SIGHUP, so the shell won't send a SIGHUP to any of its children.

Starting/stopping a background Python process without nohup + ps aux grep + kill

I usually use:
nohup python -u myscript.py &> ./mylog.log & # or should I use nohup 2>&1 ? I never remember
to start a background Python process that I'd like to continue running even if I log out, and:
ps aux |grep python
# check for the relevant PID
kill <relevantPID>
It works, but it's annoying to do all these steps.
I've read some methods in which you need to save the PID in some file, but that's even more hassle.
Is there a clean method to easily start/stop a Python script? Something like:
startpy myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again the meantime
stoppy myscript.py
Or could this long part nohup python -u myscript.py &> ./mylog.log & be written in the shebang of the script, such that I could start the script easily with ./myscript.py instead of writing the long nohup line?
Note : I'm looking for a one or two line solution, I don't want to have to write a dedicated systemd service for this operation.
As far as I know, there are just two (or maybe three or maybe four?) solutions to the problem of running background scripts on remote systems.
1) nohup
nohup python -u myscript.py > ./mylog.log 2>&1 &
1 bis) disown
Same as above, but slightly different: it actually removes the job from the shell's job list, preventing the SIGHUP from being sent.
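A minimal sketch of the disown variant (same redirection as above, then dropping the job from the shell's job table):
python -u myscript.py > mylog.log 2>&1 &
disown   # the shell forgets the job, so no SIGHUP reaches it when you log out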
2) screen (or tmux as suggested by neared)
Here you will find a starting point for screen.
See this post for a great explanation of how background processes work. Another related post.
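As a quick illustration (a sketch, not taken from the linked posts), the screen workflow looks like this:
$ screen -S myscript          # start a named screen session
$ python -u myscript.py       # run the script inside it
# detach with Ctrl-a then d, log out, come back later...
$ screen -r myscript          # reattach to the still-running session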
3) Bash
Another solution is to write two bash functions that do the job:
mynohup () {
[[ "$1" = "" ]] && echo "usage: mynohup python_script" && return 0
nohup python -u "$1" > "${1%.*}.log" 2>&1 < /dev/null &
}
mykill() {
ps -ef | grep "$1" | grep -v grep | awk '{print $2}' | xargs kill
echo "process "$1" killed"
}
Just put the above functions in your ~/.bashrc or ~/.bash_profile and use them as normal bash commands.
Now you can do exactly what you asked for:
mynohup myscript.py # will automatically continue running in
# background even if I log out
# two days later, even if I logged out / logged in again the meantime
mykill myscript.py
4) Daemon
This daemon module is very useful:
python myscript.py start
python myscript.py stop
Do you mean log in and out remotely (e.g. via SSH)? If so, a simple solution is to install tmux (a terminal multiplexer). It creates a server for terminals that run underneath it as clients. You open tmux with tmux, type in your command, press CONTROL+B then D to 'detach' from tmux, and then type exit at the main terminal to log out. When you log back in, tmux and the processes running inside it will still be running.
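The same workflow as a command sketch (the session name is just an example):
$ tmux new -s myscript        # start a new named session
$ python -u myscript.py       # run your script inside it
# press Ctrl-b then d to detach, then exit the outer terminal as usual
$ tmux attach -t myscript     # later, reattach to the still-running session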
