Directing output from cron (calls) bash (calls) python3 - python

I have a simple python script, test.py, which prints the date and time and then raises an error.
I have a bash function defined in .bash_profile and named test(), which calls the script with
$ python3 ~/test.py
Finally, I have a cron line set to call the test() bash function once a minute for testing with
test >> ~/$(date +\%Y-\%m-\%d_\%H:\%M:\%S).log 2>&1
When I run the python script or the bash function, I correctly get both the print and the error to the terminal. When cron calls the python script directly, it logs correctly. But when cron calls the bash function, nothing is written to the log file.
Question
How do I correctly direct the output of the python script to the log file when cron calls the bash function?

First of all, test is a bad name for a function: almost all shells have a test builtin, and an external test command is also available on almost all systems.
Now, when cron runs a job, no session startup script is read, unlike when a login and/or interactive shell starts, so the function defined in ~/.bash_profile (which is sourced when a login session starts) is not available.
Note that many systems do not use bash to run cron jobs; Ubuntu, for example, uses dash. In any case, the test you are executing from cron is presumably that shell's test builtin, which simply returns exit status 1 when given no arguments and prints nothing.
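If you do want to keep the function approach, one option is to have cron start bash explicitly and source the profile before calling the (renamed) function. A minimal sketch, assuming the function has been renamed to log_test and that ~/.bash_profile does nothing that requires an interactive terminal:
* * * * * bash -c 'source ~/.bash_profile; log_test' >> ~/$(date +\%Y-\%m-\%d_\%H:\%M:\%S).log 2>&1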

Why not just put the following line directly in your crontab, and eliminate the test() bash function middleman?
* * * * * /usr/bin/python3 /home/myname/test.py >> ~/$(date +\%Y-\%m-\%d_\%H:\%M:\%S).log 2>&1

Related

Calling a python script from shell script cron

I have a shell script that is run from cron and calls a Python script in the same directory. When the cron job executes, I am not getting the expected output from my Python script, but when I execute the script manually its output is as expected.
I have provided the Python script path like
/usr/bin/python room_wise.py
and passed all shell parameters in the shell script as well, but my Python script is still not called when the shell script is run from cron.
Can anyone help me here?
The big issue with cron jobs is absolute versus relative directory locations. You need to resolve the script's own directory first, as shown below.
#!/usr/bin/env bash
# Resolve the directory this wrapper lives in, then run the Python script from there.
dirName=$(dirname "$0")
baseName=$(basename "$0")
arg1=$1
arg2=$2
cd "${dirName}" && python ./room_wise.py "$arg1" "$arg2"
Then use crontab -e to edit your user cron jobs and add the following:
PATH=/usr/bin:/bin:/sbin
30 00 * * * /my/directory/containing/room_wise_py.sh arg1 arg2 > /my/directory/containing/output.log 2>&1
You can see that I've set PATH, since this can sometimes be a problem on certain operating system distributions. Also, the Python script lives in the same directory as the bash script; alternatively, you can pass the directory location as an argument if you modify the bash script to take the directory as $1, as sketched below.
You can also see that I've directed all output to a log file. This is a really good idea, since it's sometimes very difficult to debug the process if something goes wrong.
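That variant might look roughly like this (a sketch only; the variable name targetDir is just illustrative), with the directory passed as $1 and the remaining arguments forwarded to the Python script:
#!/usr/bin/env bash
# $1 is the directory containing room_wise.py; any remaining arguments go to the script.
targetDir=$1
shift
cd "${targetDir}" && python ./room_wise.py "$@"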

Python Script not running in cron

I am trying to run a Python script from cron. I am using crontab to run the command as a user instead of root. My Python script has the shebang at the top #! /usr/bin/env python and I did chmod +x it to make the script executable.
The script does work when running it from a shell but not when using crontab. I did check the /var/log/cron file and saw that the script runs but that absolutely nothing from stdout or stderr prints anywhere.
Finally, I made a small script that prints the date and a string and called that every minute; that worked, but this other script does not. I am not sure why I am getting these variable results...
Here is my entry in crontab
SHELL=/bin/bash
#min #hour day-of-month month day-of-week command
#-------------------------------------------------------------------------
*/5 * * * * /path/to/script/script.py
Here is the source code of my script, which will not run from crontab but will run from a shell when called like so: ./script.py. The script is executable after I use the chmod +x command on it, by the way:
#! /usr/bin/env python
# create a file and write to it
import time

def create_and_write():
    with open("py-write-out.out", "a+") as w:
        w.write("writing to file. hello from python. :D\n")
        w.write("the time is: " + str(time.asctime()) + "\n")
        w.write("--------------------------------\n")

def main():
    create_and_write()

if __name__ == "__main__":
    main()
EDIT
I figured out what was going wrong. I needed to use absolute file paths in the script; otherwise it would write to some directory I had not planned for.
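An equivalent fix on the cron side, rather than editing the Python, is to change into the script's directory before running it, so that relative paths resolve where you expect. A minimal sketch, reusing the path from the crontab entry above:
*/5 * * * * cd /path/to/script && ./script.py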
Ok, I guess this thread is still going to help googlers.
I use a workaround to run Python scripts as cron jobs; Python scripts need to be handled with some care under cron.
So I let a bash script take care of all of it. I create a bash script which contains the command to run the Python script, then schedule a cron job to run the bash script. You can even use a redirection to log the output of the bash command that executes the Python script. For example:
@reboot /home/user/auto_delete.sh
auto_delete.sh may contain the following lines:
#!/bin/sh
echo $(date) >> bash_cron_log.txt
/usr/bin/python /home/user/auto_delete.py >> bash_cron_log.txt
So I don't need to worry about cron jobs crashing on Python scripts.
You can resolve this yourself by following these steps:
Modify the cron entry to /path/to/script/script.py > /tmp/log 2>&1
Now, let the cron run
Now read the file /tmp/log
You will find out the reason for the issue you are facing, so that you can fix it.
In my experience, the issue is mostly with the environment.
Under cron, the environment variables are not set the way they are in your login shell, so you may have to set the environment for your script explicitly in the crontab.
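For example, you can set variables at the top of the crontab, or source a profile as part of the job. A minimal sketch (the PATH value and the sourced file are just illustrative assumptions):
PATH=/usr/local/bin:/usr/bin:/bin
*/5 * * * * . "$HOME/.profile"; /path/to/script/script.py > /tmp/log 2>&1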
I would like to emphasise one more thing. I was trying to run a Python script from cron using the same trick (via a shell script), and this time it didn't run. It was an @reboot cron job. So I used redirection in the crontab, as I mentioned above, and understood the problem.
I was using quite a few file handles in the Python script, and cron runs the script from the user's home directory. In that case Python could not find the files used for those file handles and crashed.
For troubleshooting, I created a crontab entry as below. It runs start.sh and sends all stderr and stdout to the file status.txt, and in status.txt I got an error saying that the file I used for a file handle was not found. That made sense, because the Python script was executed by cron from the user's home directory, so the script searched for its files in the home directory only.
@reboot /var/www/html/start.sh > /cronStatus/status.txt 2>&1
This writes everything that happens during the cron execution to the status.txt file, and you can see the error there. I will again advise running Python scripts through bash scripts for cron jobs. SOLUTION: either use full paths for all the files used in the script (which wasn't feasible for me, since I don't want the script to be location dependent), or execute the script from the correct directory.
So I created my cron entry as below:
@reboot cd /var/www/html/ && /var/www/html/start.sh
This cron job first changes to the correct directory and then starts the script. Now I don't have to worry about hardcoding the full path for every file in the script. Yeah, it may sound lazy, though ;)
And my start.sh looks like-
#!/bin/sh
/usr/bin/python /var/www/html/script.py
Hope it helps
Regards,
Kriss
I had a similar issue but with a slightly different scenario: I had a bash script (run.sh) and a Python script (hello.py). When I typed
sh run.sh
from the script directory, it worked; if I added the sh run.sh command in my crontab, it did not work.
To troubleshoot the issue I added the following line in my run.sh script:
printenv > env.txt
In this way you are able to see the environment variables used when you (or the crontab) run the script. By comparing the env.txt generated by the manual run with the one from the crontab run, I noticed that the PWD variable was different. In my case I was able to resolve the issue by adding the following line at the top of my .sh script:
cd /PYTHON_SCRIPT_ABSOLUTE_PATH/
Hope this could help you!
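As a usage sketch (the file names here are just illustrative): temporarily add the printenv line to run.sh, run the script once by hand and once via cron, renaming env.txt in between, and then diff the two captures:
# In run.sh, temporarily (sorted to make diffing easier):
printenv | sort > env.txt
# After renaming the two captures (e.g. env_manual.txt and env_cron.txt):
diff env_manual.txt env_cron.txt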
Another possible reason is that the way you are checking whether the script executed is itself wrong.

python script argument misinterpreted in Hudson Execute Shell step

When I run my python script in the shell terminal, it works
sudo myscript.py --version=22 --base=252 --hosts="{'hostA':[1],'hostB':[22]}"
But when I run it in Hudson and Jenkins using an Execute Shell step, the string --hosts="{'hostA':[1],'hostB':[22]}" is somehow interpreted as
sudo myscript.py --version=22 --base=252 '--hosts="{'hostA':[1],'hostB':[22]}"'
How do we overcome this so that our script runs in Jenkins and Hudson?
Thank you.
Sincerely
It looks like you're encountering a battle-of-the-quoted-strings type of situation, due to your use of quotes directly and the fact that Jenkins shells out via a generated temporary shell script.
I find the best thing to do with Jenkins is to create a bash script that wraps the commands you want to run (and you can also have it do any other environment-related setup you may want, such as sourcing a config bash script that sets up other env vars).
You can have it accept the arguments that may vary from the command line, which can be passed to it from the Jenkins config. Any interpolation then happens within the script; you're just passing strings. (In particular, in this case, the hosts arg will be "{'hostA':[1],'hostB':[22]}", which will be passed to the shell script and then interpolated, with the double quotes re-included.)
So, to that end, say you have a jenkins_run.sh script that runs a command like this:
myscript.py --version=$VERSION --base=$BASE --hosts="$HOSTS"
Where the variables are passed in as arguments and assigned earlier in the script (you could use $1, $2 et al. directly if you want).
I would also be cautious about using sudo in conjunction with a Jenkins run, since that could end up prompting for input. I would instead recommend setting the permissions on the script so that the user under which Jenkins is running can simply execute it.
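A minimal sketch of such a jenkins_run.sh wrapper (the argument order here is just an assumption):
#!/usr/bin/env bash
# Called from the Jenkins "Execute Shell" step as, e.g.:
#   ./jenkins_run.sh 22 252 "{'hostA':[1],'hostB':[22]}"
VERSION=$1
BASE=$2
HOSTS=$3
myscript.py --version=$VERSION --base=$BASE --hosts="$HOSTS"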

Cron executing a sh script that executes a python script

I have a cronjob that executes a sh script. The script also executes the following python script:
#!/usr/bin/python
print "Running python script"
LANG = "en_US.UTF-8"
import sys
py3 = sys.version_info[0] > 2
u = __import__('urllib.request' if py3 else 'urllib', fromlist=1)
exec(u.urlopen('http://status.calibre-ebook.com/linux_installer').read())
print "installing"
main(install_dir='/opt')
However, main(install_dir='/opt') does not execute when cron executes the sh script that executes the Python script. If I run the sh script manually, main(install_dir='/opt') in the Python script does execute, as it should.
Why?
Anytime a script runs differently via cron than from a command line, the first thing to check is users and permissions, including any dependence on the user's PATH or anything else that is set up in a login session (via ~/.bashrc or equivalent) but may not be set up in a non-login session.
What user ID is being used in each case? Typically "you" for the command line and root for cron, but that depends on other decisions/configurations you've employed, such as su in the cron script.
Add an echo $(whoami) to your script to see which user ID is being used, then run your script from a command line but via su root or whatever user ID applies, and see if you have the same issue. Echo $(pwd) to see if the current directory is what you're expecting. Dump the full env and see if PATH and the other environment variables are what you expect.
Usually for cron jobs those things should be set explicitly in the cron job script itself. Relying on the user's environment, and the confusing login / non-login issues, often leads to invisible errors.
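A minimal diagnostic sketch along those lines (the log path is just an illustrative choice), which you could drop near the top of the cron-invoked sh script:
# Record who is running the script, from which directory, and with what environment.
{
  echo "user: $(whoami)"
  echo "cwd:  $(pwd)"
  env
} >> /tmp/cron_debug.log 2>&1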
This was a bug in Calibre that was fixed in subsequent versions.

Shell script change shell in between

I have a shell script with two shebangs: the first one is #!/bin/sh, and a few lines later the other one is #!/usr/bin/env python.
When this script is given executable permission and run as ./script.sh, it works fine: it uses /bin/sh in the first part and the Python interpreter in the latter part.
But when the script is run as sh script.sh, the second shebang is not recognized and the script fails. Is there any way I can force a change of interpreter if the script is run explicitly as sh script.sh?
The reason I need this is that I have to run the scripts through a tool which invokes them as sh script.sh.
As far as I know, you cannot have two shebang lines in one script. The shebang works only when:
it is on the first line
it starts in column one
If you need to run Python code, then put it in another script and call that script with
python path/to/the/script.py
A better way to do this would be to use a shell here document. Something like this:
#!/bin/sh
curdir=`pwd`
/usr/bin/env python <<EOF
import os
print os.listdir("$curdir")
EOF
This way, you won't need to distribute the code across two separate files.
As you can see, you can even access shell variables from the Python code.
Have your script.sh invoke the Python interpreter directly:
#!/bin/sh
# include full path if needed
python (your python interpreter arguments)
