Python script not sending database log to server from crontab

The issue is that when I execute the Python script normally from the terminal it works fine, but when the same file is executed from cron, there is no update at the server end.
File permissions have been set to 755. Earlier I was getting the error "No MTA installed, discarding output"; to suppress that I added >/dev/null 2>&1 at the end of my cron job. After that I get no error, but the issue remains the same. I have also specified the environment (the shebang line) at the top of my Python script.
The cron configuration is as follows:
* * * * * sudo python3 /home/pi/json_working/json_to_server_update.py >/dev/null 2>&1

The problem is solved now. I am using the user crontab, and I fixed it by using os.path.isfile(os.path.join("path", "file name")) rather than os.path.isfile("path of file"). The latter argument was actually a path and not a file, so the check always returned False and the sync was never made because of that. Now everything is working fine.
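For reference, a minimal sketch of the difference described above (the directory and file name below are hypothetical placeholders):

import os

# Hypothetical paths for illustration only.
data_dir = "/home/pi/json_working"
file_name = "data.json"

# Passing only the directory: os.path.isfile() returns False,
# so the sync branch never runs.
print(os.path.isfile(data_dir))               # False

# Joining the directory with the file name points at the actual file.
full_path = os.path.join(data_dir, file_name)
print(os.path.isfile(full_path))              # True if the file exists

if os.path.isfile(full_path):
    pass  # push the log to the server here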

Related

Cron job not running with python import modules

I need to run a cron job on a python script to generate basemap plots.
The script by itself runs ok manually.
A simple print("Hello") at the start of the program with the rest commented out also runs ok on cron with
*/10 * * * * /usr/bin/python3 ~/PythonFiles/TestScript.py > /dev/null 2>&1 >>log.txt
I made the file executable using chmod +x and added a shebang (#!/home/usr/anaconda3/bin/python) at the start of the program. I can monitor activity in the log file via a printed message at the start of the program too.
When I come to run the "normal" program, which imports modules (urllib.request, datetime, matplotlib, basemap, pygrib, numpy, ...), the script stops outputting anything to log.txt.
So I suspect it has to do with the modules and possibly their locations. I checked, and they seem to have been installed in various places (.../pkgs, .../conda-meta, .../site-packages, etc.).
First of all, is what I suspect correct?
Secondly, how do I fix it so that cron knows where to find all the libraries to run the job?
Many thanks!
I suspected it had to do with module location paths. After trawling through websites and tweaking the inputs to cron, the following works!
SHELL=/bin/sh
HOME=/home/stephane
PYTHONPATH=/home/stephane/anaconda3/bin/python
PATH=/home/stephane/anaconda3/lib/python3.6/site-packages
*/2 * * * * /home/stephane/anaconda3/bin/python ~/PythonFiles/TestScript.py >/dev/null 2>&1 >> log.txt
Note: matplotlib seems to need "import matplotlib as mpl; mpl.use('Agg')" to run off cron.
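For example, a minimal sketch of selecting the non-interactive backend before pyplot is imported (the plot and the output path are just placeholders):

# Select a backend that needs no display, so the script can run under cron.
import matplotlib as mpl
mpl.use('Agg')
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
fig.savefig('/tmp/test_plot.png')  # write to a file instead of opening a window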
Thanks to all!

how to properly run Python script with crontab on every system startup

I have a Python script that should open my Linux terminal, browser, file manager and text editor on system startup. I decided crontab is a suitable way to run the script automatically. Unfortunately, it didn't go well: nothing happened when I rebooted my laptop. So I captured the output of the script to a file to get some clues. It seems my script is only partially executed. I use Debian 8 (Jessie), and here's my Python script:
#!/usr/bin/env python3
import subprocess
import webbrowser

def action():
    subprocess.call('gnome-terminal')
    subprocess.call('subl')
    subprocess.call(('xdg-open', '/home/fin/Documents/Learning'))
    webbrowser.open('https://reddit.com/r/python')

if __name__ == '__main__':
    action()
Here's the entry in my crontab file:
@reboot python3 /home/fin/Labs/my-cheatcodes/src/dsktp_startup_script/dsktp_startup_script.py > capture_report.txt
Here's the content of the capture_report.txt file (I trimmed several lines since it's too long; it only prints my folder structure, which seems to come from the 'xdg-open' line in the Python script):
Directory list of /home/fin/Documents/Learning/
Type Format Sort
[Tree ] [Standard] [By Name] [Update]
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
/
... the rest of my dir structures goes here
I have no other clue what could possibly be going wrong here. I'd really appreciate your advice, guys. Thanks.
No, cron is not suitable for this. The cron daemon has no connection to your user's desktop session, which will not be running at system startup, anyway.
My recommendation would be to hook into your desktop environment's login scripts, which are responsible for starting various desktop services for you when you log in, anyway, and easily extended with your own scripts.
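As a sketch of that approach (the file location and the Name value are assumptions, and the exact mechanism depends on your desktop environment), a freedesktop autostart entry placed in ~/.config/autostart could look like this:

# saved as, e.g., ~/.config/autostart/dsktp_startup_script.desktop
[Desktop Entry]
Type=Application
Name=Desktop startup script
Exec=python3 /home/fin/Labs/my-cheatcodes/src/dsktp_startup_script/dsktp_startup_script.py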
I'd do as tripleee suggested, but your job might be failing because it requires an X session, since you're trying to open a browser. You should put export DISPLAY=:0; after the schedule in your cronjob, as in
@reboot export DISPLAY=:0; python3 /home/fin/Labs/my-cheatcodes/src/dsktp_startup_script/dsktp_startup_script.py > capture_report.txt
If this doesn't work, you could try replacing :0 with the output of echo $DISPLAY in a graphical terminal.

Cron and Facebook Friend Tracker not willing to work together

I've been trying to automatically execute the script facebook-online-friend-tracker, which opens Chrome, logs into Facebook and writes the number of online friends to a .csv file (https://github.com/bhamodi/facebook-online-friend-tracker).
I wrapped it into a script that I've called facebooktracker.
When I execute ./facebooktracker manually from the terminal, everything works fine. But since I want to collect some statistics, I have set up a cron job to work every 10 minutes.
By using: crontab -e, I've set:
*/10 * * * * /home/enrico/facebooktracker
and it does not work, meaning that it doesn't write to the .csv file (syslog shows that the command has been executed, though).
I have tried using cron to execute a simple script that writes "Hello world" to a file and it works fine; I've also tried using cron to open a GUI application and it works fine.
Therefore it looks like the script works fine and cron works fine, but they are not willing to work together.
Things that I have tried (yet to no avail) are:
*/10 * * * * env DISPLAY=:0 /home/enrico/facebooktracker
*/10 * * * * env DISPLAY=:0 /home/enrico/facebooktracker > /dev/null 2>&1
Directly use the script facebook-online-friend-tracker without wrapping, by setting in cron:
*/10 * * * * /home/enrico/anaconda2/bin/facebook-online-friend-tracker --user "username" --password "password" --path "path"
Add echo "Hello world" at the end of the facebooktracker script, setting the output to a .log file ( >> facebooktrackerlog.log) and it does write "Hello world", but still doesn't write the number of online facebook friends on the .csv file
I've run out of ideas. Anyone has a clue? I'd really appreciate it. Thanks!
When you run it manually, it runs as you, with your .profile etc. loaded. When cron runs it, your profile hasn't been loaded. Try loading your profile as the first part of the cron job.
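For example, a sketch of a crontab entry that sources the profile before running the script (assuming a POSIX shell and that ~/.profile exports whatever the tracker needs):

*/10 * * * * . "$HOME/.profile"; /home/enrico/facebooktracker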
I'm the author of the facebook-online-friend-tracker script. I see that you had some trouble setting up a cron job to execute the script every 10 minutes. You'll be happy to know that as of v2.0.0, I've implemented scheduling as part of the script. You no longer have to set up cron jobs or task schedulers. Simply run the script once and follow the prompts. To upgrade, simply run:
pip install facebook-online-friend-tracker --upgrade

In cron jobs (Python), what does the -m flag stand for and how is it used?

I am trying to set up a cron job which executes a Python file of mine. The Python file uses some manually installed modules. The cron job currently throws an error, as it 'cannot find' the specified module (yes, I tested it: when executed manually the script does work and has access to the module).
I have now received the cryptic advice (from the hoster's support) to 'try adding the -m flag to the command, followed by the path to the module that it cannot find.' Unfortunately I do not quite understand this advice.
Assuming that my cron job command (via cPanel) would out of the box be:
0 * * * * python /home/public_html/cgi-bin/cronrun.py
which works if the python script does not rely on external modules.
So my questions are:
Is the -m flag appropriate?
If so, how do I use it?
And what do I do, if there is more than just one additional module that the script needs?
Thank you very much in advance!
Your cron job likely won't be running with the same environment that you have. To see this, first run env > [somepath_that_you_can_reach]. Then set up a cron job to do the same thing in a shell script, using a different path, and compare the two. You will need your PYTHONPATH to be the same for the cron job to work. If that is the problem, then add this in your Python script:
import sys
sys.path.append('[the path part that you need for it to work]')
before your import statements.
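Alternatively, a sketch of setting PYTHONPATH in the crontab itself, so the interpreter picks up the manually installed modules (the site-packages path below is only a placeholder; use the path that shows up in your own env output):

PYTHONPATH=/home/username/.local/lib/python3.6/site-packages
0 * * * * python /home/public_html/cgi-bin/cronrun.py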

Python Script not running in cron

I am trying to run a Python script from cron. I am using crontab to run the command as a user instead of root. My Python script has the shebang at the top #! /usr/bin/env python and I did chmod +x it to make the script executable.
The script does work when running it from a shell but not when using crontab. I did check the /var/log/cron file and saw that the script runs but that absolutely nothing from stdout or stderr prints anywhere.
Finally, I made a small script that prints the date and a string and called that every minute; that one worked, but this other script does not. I am not sure why I am getting these variable results...
Here is my entry in crontab
SHELL=/bin/bash
#min #hour day-of-month month day-of-week command
#-------------------------------------------------------------------------
*/5 * * * * /path/to/script/script.py
Here is the source code of my script that will not run from crontab but will run from the shell it is in when called like this: ./script.py. The script is executable after I use the chmod +x command on it, by the way:
#! /usr/bin/env python
# create a file and write to it
import time

def create_and_write():
    with open("py-write-out.out", "a+") as w:
        w.write("writing to file. hello from python. :D\n")
        w.write("the time is: " + str(time.asctime()) + "\n")
        w.write("--------------------------------\n")

def main():
    create_and_write()

if __name__ == "__main__":
    main()
EDIT
I figured out what was going wrong. I needed to put absolute file paths in the script, otherwise it would write to some directory I had not planned for.
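One common way to do that (a sketch; not necessarily exactly what was done here) is to build the path from the script's own location, so the output file lands next to the script regardless of cron's working directory:

import os
import time

# Resolve the directory this script lives in, so the file name below
# does not depend on cron's working directory (usually $HOME).
script_dir = os.path.dirname(os.path.abspath(__file__))
out_path = os.path.join(script_dir, "py-write-out.out")

with open(out_path, "a+") as w:
    w.write("the time is: " + str(time.asctime()) + "\n")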
Ok, I guess this thread is still going to help googlers.
I use a workaround to run Python scripts with cron jobs. In fact, Python scripts need to be handled with delicate care in cron jobs.
So I let a bash script take care of all of it. I create a bash script containing the command to run the Python script, then I schedule a cron job to run the bash script. You can even use a redirect to log the output of the bash command executing the Python script. For example:
@reboot /home/user/auto_delete.sh
auto_delete.sh may contain the following lines:
#!/bin/sh
echo $(date) >> bash_cron_log.txt
/usr/bin/python /home/user/auto_delete.py >> bash_cron_log.txt
So I don't need to worry about cron jobs crashing on Python scripts.
You can resolve this yourself by following these steps:
Modify the cron entry as /path/to/script/script.py > /tmp/log 2>&1
Now, let the cron run
Now read the file /tmp/log
You will find out the reason of the issue you are facing, so that you can fix it.
In my experience, the issue is mostly with the environment.
In cron, the env variables are not set. So you may have to explicitly set the env for your script in cron.
I would like to emphasise one more thing. I was trying to run a Python script in cron using the same trick (a shell wrapper script) and this time it didn't run. It was a @reboot cron job. So I used redirection in the crontab, as mentioned above, and understood the problem.
I was using a lot of file handles in the Python script, and cron runs the script from the user's home directory. In that case Python could not find the files used for the file handles and would crash.
What I did for troubleshooting was to create a crontab entry as below. It runs start.sh and sends all the stderr and stdout to the file status.txt, and I got an error in status.txt saying that the file I used for a file handle was not found. That made sense, because the Python script was executed by cron from the user's home directory, so the script searched for the files in the home directory only.
@reboot /var/www/html/start.sh > /cronStatus/status.txt 2>&1
This writes everything that happens during cron execution to the status.txt file, so you can see the error there. I will again advise running Python scripts via bash scripts for cron jobs. SOLUTION: either use the full path for all the files used in the script (which wasn't feasible for me, since I don't want the script to be location dependent), or execute the script from the correct directory.
So I created my cron entry as below:
@reboot cd /var/www/html/ && /var/www/html/start.sh
This cron job first changes to the correct directory and then starts the script. Now I don't have to worry about hardcoding the full path for all files in the script. Yeah, it may sound lazy though ;)
And my start.sh looks like-
#!/bin/sh
/usr/bin/python /var/www/html/script.py
Hope it helps
Regards,
Kriss
I had a similar issue but with a slightly different scenario: I had a bash script (run.sh) and a Python script (hello.py). When I typed
sh run.sh
from the script directory, it worked; if I added the sh run.sh command in my crontab, it did not work.
To troubleshoot the issue I added the following line in my run.sh script:
printenv > env.txt
In this way you are able to see the environment variables used when you (or the crontab) run the script. By analyzing the differences between the env.txt generated by the manual run and by the crontab run, I noticed that the PWD variable was different. In my case I was able to resolve it by adding the following line at the top of my .sh script:
cd /PYTHON_SCRIPT_ABSOLUTE_PATH/
Hope this could help you!
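Putting it together, a sketch of the troubleshooting version of run.sh (the directory is the same placeholder as above, and the interpreter name is an assumption):

#!/bin/sh
# Dump the environment cron actually runs with, for comparison with a manual run.
printenv > /PYTHON_SCRIPT_ABSOLUTE_PATH/env.txt
# Make relative paths inside hello.py resolve as they do when run by hand.
cd /PYTHON_SCRIPT_ABSOLUTE_PATH/
python3 hello.py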
Another reason may be that the way we judged whether the script executed or not is itself wrong.
