python code to add jobs to crontab not working - python

I have written a small Python script to automate adding jobs to crontab, but the job added via the script does not run, while the same job added manually works fine.
HERE IS THE CODE:
#!/usr/bin/python3
def scheduler(time=["*", "*", "*", "*", "*"], message="no message set"):
    crontab_pointer = open('/var/spool/cron/crontabs/sky', 'a')
    schedule_string = "\n" + " ".join(time) + " " + message + "\n"
    crontab_pointer.write(schedule_string)
    crontab_pointer.close()

if __name__ == "__main__":
    scheduler(time=["52", "18", "*", "*", "*"], message="env DISPLAY=:0 /home/sky/scripts/notify2.sh")

Permissions
Make sure you are running your Python script as root. I did some quick testing, and other users can't access their /var/spool/cron/crontabs/$username files. This is by design, if I remember correctly; you're supposed to use the crontab -e command to edit your crontab.
sudo python editcron.py
Really, the Python you've written isn't exactly wrong. It opens the file, appends the string, then closes it. Nothing ground-breaking here. I just added some file system checks to make sure you can get to that file.
Code
import os

def scheduler(time=['*', '*', '*', '*', '*'], message='no message set', username='sky'):
    crontab_fn = '/var/spool/cron/crontabs/{!s}'.format(username)
    if not os.path.exists(crontab_fn):
        raise OSError("File {} missing".format(crontab_fn))
    if not os.access(crontab_fn, os.W_OK):
        raise PermissionError("Cannot write to file, run as root")
    crontab_fh = open(crontab_fn, 'a')
    schedule_string = "\n{t:s} {m:s}\n".format(
        t=' '.join(time),
        m=message
    )
    crontab_fh.write(schedule_string)
    crontab_fh.close()

if __name__ == "__main__":
    time = ["52", "18", "*", "*", "*"]
    message = "env DISPLAY=:0 /home/sky/scripts/notify2.sh"
    scheduler(time, message)

NOTES from man cron:
cron searches its spool area (/var/spool/cron/crontabs) for crontab files (which are named after accounts in /etc/passwd); crontabs found are loaded into memory. Note that crontabs in this directory should not be accessed directly - the crontab command should be used to access and update them.
Question: "... same job when given manually working fine"
I assume you used crontab <filename> there!
Search for a Python module, or use subprocess.run(...) to start crontab <filename> from within your .py; a minimal sketch follows below.
See the subprocess module documentation for details.
Come back and flag your question as answered if this works for you, or comment why not.
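For illustration, here is a minimal sketch of that approach. It appends an entry to the current user's crontab by feeding the whole table back through crontab -; the entry string is just the one from the question, and everything else here is an assumption rather than part of the original answer.

#!/usr/bin/python3
# Sketch: install a cron entry through the crontab command instead of writing
# to /var/spool/cron/crontabs directly.
import subprocess

def add_cron_job(entry):
    # Read the current crontab; a non-zero exit just means there is none yet.
    result = subprocess.run(["crontab", "-l"], capture_output=True, text=True)
    current = result.stdout if result.returncode == 0 else ""
    # Append the new entry and load the whole table back via "crontab -".
    new_table = current.rstrip("\n") + "\n" + entry + "\n"
    subprocess.run(["crontab", "-"], input=new_table, text=True, check=True)

if __name__ == "__main__":
    add_cron_job("52 18 * * * env DISPLAY=:0 /home/sky/scripts/notify2.sh")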

Related

Problem with broken backup and python script

Right up front, to be clear: I am not fluent in programming or Python, but I can generally accomplish what I need with some research. Please excuse any bad formatting, as this is my first post to a board like this.
I recently updated my laptop from Ubuntu 18.04 to 20.04. I created a full system backup with Dejadup which, due to a missing file, could not be restored. Research brought me to a post on here from 2019 about manually restoring these files. The process called for two scripts, one to unpack and a second to reconstruct the files, both created by Hamish Downer.
The first,
"for f in duplicity-full.*.difftar.gz; do echo "$f"; tar xf "$f"; done"
seemed to work well and did unpack the files.
The second,
#!/usr/bin/env python3
import argparse
from pathlib import Path
import shutil
import sys"
is the start of the re-constructor script. Using a terminal from within the directory I am trying to rebuild, I enter the first line and press return.
When I enter the second line of code, the terminal just "hangs" with no activity, and only comes back to the prompt if I double-click the cursor. I receive no errors or warnings. When I enter the third line of code
"from pathlib import Path"
and press return, I then get an error:
from: can't read /var/mail/pathlib
The problem seems to originate with the "import argparse" command and, I assume, is due to a symlink.
argparse is located in /usr/local/lib/python3.8/dist-packages (1.4.0)
python3 is located in /usr/bin/
Python came with the Ubuntu 20.04 distribution package.
Any help with reconstructing these files would be greatly appreciated, especially in a batch, as this script is meant to do, rather than trying to do them one file at a time.
Update: I have tried adding the "re-constructor" part of this script without success. This is a link to the script I want to use:
https://askubuntu.com/questions/1123058/extract-unencrypted-duplicity-backup-when-all-sigtar-and-most-manifest-files-are
Re-constructor script:
class FileReconstructor():

    def __init__(self, unpacked_dir, restore_dir):
        self.unpacked_path = Path(unpacked_dir).resolve()
        self.restore_path = Path(restore_dir).resolve()

    def reconstruct_files(self):
        for leaf_dir in self.walk_unpacked_leaf_dirs():
            target_path = self.target_path(leaf_dir)
            target_path.parent.mkdir(parents=True, exist_ok=True)
            with target_path.open('wb') as target_file:
                self.copy_file_parts_to(target_file, leaf_dir)

    def copy_file_parts_to(self, target_file, leaf_dir):
        file_parts = sorted(leaf_dir.iterdir(), key=lambda x: int(x.name))
        for file_part in file_parts:
            with file_part.open('rb') as source_file:
                shutil.copyfileobj(source_file, target_file)

    def walk_unpacked_leaf_dirs(self):
        """
        based on the assumption that all leaf files are named as numbers
        """
        seen_dirs = set()
        for path in self.unpacked_path.rglob('*'):
            if path.is_file():
                if path.parent not in seen_dirs:
                    seen_dirs.add(path.parent)
                    yield path.parent

    def target_path(self, leaf_dir_path):
        return self.restore_path / leaf_dir_path.relative_to(self.unpacked_path)


def parse_args(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument(
        'unpacked_dir',
        help='The directory with the unpacked tar files',
    )
    parser.add_argument(
        'restore_dir',
        help='The directory to restore files into',
    )
    return parser.parse_args(argv)


def main(argv):
    args = parse_args(argv)
    reconstuctor = FileReconstructor(args.media/jerry/ubuntu, args.media/jerry/Restored)
    return reconstuctor.reconstruct_files()


if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
I think you are typing the commands into the shell instead of the Python interpreter. Please check your prompt: the Python interpreter (started with python3) shows >>>.
Linux has an import command (part of ImageMagick) and accepts import argparse, but it does something completely different.
import - saves any visible window on an X server and outputs it as an image file. You can capture a single window, the entire screen, or any rectangular portion of the screen.
This matches the described behaviour. import waits for a mouse click and then creates a large output file. Check if there is a new file named argparse.
An executable script contains instructions to be processed by an interpreter, and there are many possible interpreters: several shells (bash and alternatives), languages like Perl and Python, and also some very specialized ones like nft for firewall rules.
If you execute a script from the command line, the shell reads its first line. If it starts with the #! characters (called a "shebang"), it uses the program listed on that line. (Note: /usr/bin/env there is just a helper to find the exact location of a program.)
But if you want to use an interpreter interactively, you need to start it explicitly. The shebang line has no special meaning in this situation; it is only honored as the very first line of a script. Anywhere else it is just a comment and is ignored.

Crontab Python Script not running

I know this question has been asked before but I still haven't been able to get it to work. My crontab file just has this:
0 5 * * * /home/harry/my_env/bin/python /home/harry/compile_stats/process_tonight.py
Here's what my process_tonight.py looks like:
import datetime
import sys
sys.path.append('/home/harry/compile_stats/')
import compile # Module in above path
print("Processing last night\n")
date = str(datetime.datetime.today().year) + "-" + str(datetime.datetime.today().month) + "-" + str(datetime.datetime.today().day-1)
compile.process(date, date)
This file works perfectly fine when I just run it regularly from the command line but doesn't work when I schedule it.
I also looked at my /var/log/syslog file and the task I'm looking to run isn't showing up there.
Any ideas?
EDIT:
The time it's set to run in my example (5 A.M.) is just a random time to put in. It's not running at any time I put in there.
EDIT 2#:
As per user speedyturkey, I simplified my Python script to better diagnose the problem:
import datetime
#import sys
#sys.path.append('/home/harry/compile_stats/')
#import compile # Module in above path
print("Processing last night\n")
date = str(datetime.datetime.today().year) + "-" + str(datetime.datetime.today().month) + "-" + str(datetime.datetime.today().day-1)
#compile.process(date, date)
Nothing is happening, so I guess the problem isn't with the import.
As per the comments, I believe the problem is in how you are calling the python script in the crontab. Run the exact command you've given crontab and fix any problems it returns.
Ok, I was able to get it to work by creating a specific cron file, putting the info in there, and loading it in.
So process_tonight.cron contains this:
0 5 * * * /home/harry/my_env/bin/python /home/harry/compile_stats/process_tonight.py
And I just loaded it into crontab:
crontab process_tonight.cron
Not really sure why this works and the other way doesn't (maybe someone else has an idea).
Instead of trying to modify the path from within your python script, you could do something like:
cd /home/harry/compile_stats/ && ./process_tonight.py
which would make it easier to import compile correctly. Note that this would also require making process_tonight.py executable (chmod +x process_tonight.py) and adding a shebang pointing to your Python interpreter (I guess... #!/home/harry/my_env/bin/python).
EDIT in response to Edit #2 above:
It is actually not possible to tell whether it is running from the code you have written - the print statements' output is not being redirected anywhere. I suggest changing the code to perform some kind of side effect that you can check. For example, import subprocess and then do something like:
subprocess.call("date > /home/harry/compile_stats/date.txt", shell=True)
If the script is executed properly, it will redirect the output of date to the file specified.
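If you would rather avoid shell=True, a sketch of the same side-effect check is to open the file from Python and hand it to subprocess as stdout (the path is just the one used above):

import subprocess

# Write the output of date into the check file without any shell redirection.
with open("/home/harry/compile_stats/date.txt", "w") as out:
    subprocess.call(["date"], stdout=out)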
I know it's a bit silly, but checking your system time/timezone might be helpful.
I set my job to run at 5 AM, and when I logged in at 8 AM there was no result from my script. I spent over an hour trying to figure out the problem before I noticed that the system time was incorrect and 5 AM hadn't come yet.
Have you tried running it from a shell script? I was having the same issue with my Python script. I ended up putting the command in a shell script and running that. It threw an error that the library wasn't imported, so I installed it with pip and the --user flag. Now cron runs the shell script with no issues.

Cron Job File Creation - Created File Permissions

I'm running an hourly cron job for testing. This job runs a Python file called "rotateLogs". Cron can't use extensions, so the first line of the file is #!/usr/bin/python. This Python file (fileA) then calls another Python file (fileB) elsewhere on the computer. fileB writes to a log file with a time stamp, etc. However, when fileB is run through fileA as a cron job, it creates its log files as rw-r--r-- files.
The problem is that if I then try to log to those files from fileB, it can't write to them unless it is run with sudo permissions. So I am looking for some way to deal with this. Ideally, it would be nice to simply create the files as rw-rw-r-- files, but I don't know how to do that with cron. Thank you for any help.
EDIT: rotateLogs(intentionally not .py):
#!/usr/bin/python
#rotateLogs
#Calls the rotateLog function in the Communote scripts folder
#Designed to be run as a daily log rotation cron job
import sys,logging
sys.path.append('/home/graeme/Communote/scripts')
import localLogging
localLogging.localLog("Hourly log",logging.error)
print "Hello"
There is no command in crontab, but it is running properly from the hourly cron (at 17 minutes past the hour).
FileB's relevant function:
def localLog(strToLog, severityLevel):
    # Allows other scripts to log easily
    # Takes their log request and appends it to the log file
    logging.basicConfig(filename=logDirPath + getFileName(currDate), format="%(asctime)s %(message)s")
    # Logs strToLog, such as logging.warning(strToLog)
    severityLevel(strToLog)
    return
I'm not sure how to find the user/group of the cron job, but the script is simply in /etc/cron.hourly, which I think runs as root?
It turns out that cron does not source any shell profiles (/etc/profile, ~/.bashrc), so the umask has to be set in the script that is being called by cron.
When using user-level crontabs (crontab -e), the umask can be simply set as follows:
0 * * * * umask 002; /path/to/script
This will work even if it is a Python script, since the process inherits its umask from the shell.
However, when placing a Python script in /etc/cron.hourly etc., there is no way to set the umask except in the Python script itself:
import os
os.umask(0o002)  # 0o002 works under Python 2.6+ and 3; plain 002 is Python 2-only syntax
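Putting that together, a minimal sketch of what fileB's logging could look like with the umask set first; the log path here is only a stand-in for the question's logDirPath + getFileName(currDate):

import logging
import os

# Set the umask before the log file is created so new files come out
# group-writable (rw-rw-r--).
os.umask(0o002)
logging.basicConfig(filename="/home/graeme/Communote/logs/hourly.log",
                    format="%(asctime)s %(message)s")
logging.error("Hourly log")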

Trying to run a python script with crontab every hour, but one section of the python code does not execute

I wrote a script using Python and Selenium that tries to register for a class called puppy play. Crontab runs the script every hour and sends any output to a file called "cronpup.log". This section of code is in my Python script; it just checks whether the registration was successful and appends the result to the file "pup.log".
# Pup Logger
f = open("pup.log", "a+")
f.write(time.strftime("%Y-%m-%d %H:%M:%S "))
if pups == 1:
    f.write("Pups!\n")
elif pups == 0:
    f.write("No Pups\n")
else:
    f.write("Ruh Roh, Something is wrong\n")
f.close()
This creates the "pup.log" file with entries like the following
$ pup.log
2014-10-17 17:49:18 No Pups
2014-10-17 19:37:28 No Pups
I can run the python script just fine from the terminal, but when crontab executes the script no new entries are made in "pup.log". I've checked the output from crontab and have found nothing. Here is crontab's output
$ cronpup.log
.
----------------------------------------------------------------------
Ran 1 test in 81.314s
OK
It seems like crontab is just ignoring that section of the code, but that seems pretty silly. Any ideas how to get this working?
The line
f = open("pup.log", "a+")
is your problem. open() looks in the current working directory for pup.log, creating it if necessary, and appends to it. If you run from the terminal while in the same directory as the Python script, that's where pup.log will appear. The cwd when running from cron is the home directory of the user the job runs as, so when run from cron it drops a pup.log file somewhere else on your system.
You can either hardcode a full path, or use
os.chdir(os.path.dirname(os.path.abspath(__file__)))
to set the current working directory to the directory the Python file is in, or modify the above to put pup.log wherever you like.
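For example, a small sketch that anchors pup.log next to the script itself, so cron's working directory no longer matters (the log line written here is just a placeholder):

import os
import time

# Build an absolute path to pup.log in the same directory as this script.
LOG_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "pup.log")
with open(LOG_PATH, "a+") as f:
    f.write(time.strftime("%Y-%m-%d %H:%M:%S ") + "No Pups\n")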

Python - send KDE knotify message with cron job on linux?

I'm trying to send a notification to KDE's knotify from a cron job. The code below works fine, but when I run it as a cron job the notification doesn't appear.
#!/usr/bin/python2
import dbus
import gobject
album = "album"
artist = "artist"
title = "title"
knotify = dbus.SessionBus().get_object("org.kde.knotify", "/Notify")
knotify.event("warning", "kde", [], title, u"by %s from %s" % (artist, album), [], [], 0, 0, dbus_interface="org.kde.KNotify")
Anyone know how I can run this as a cron job?
You need to supply an environment variable called DBUS_SESSION_BUS_ADDRESS.
You can get the value from a running kde session.
$ echo $DBUS_SESSION_BUS_ADDRESS
unix:abstract=/tmp/dbus-iHb7INjMEc,guid=d46013545434477a1b7a6b27512d573c
In your KDE startup (the autostart module in the configuration), create a script entry to run after your environment starts up. Output this environment variable's value to a temp file in your home directory; you can then set the environment variable within your cron job or Python script from the temp file.
#!/bin/bash
echo $DBUS_SESSION_BUS_ADDRESS > $HOME/tmp/kde_dbus.session
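On the Python side, a minimal sketch that restores the variable from that temp file before connecting might look like this; it assumes the bash snippet above wrote $HOME/tmp/kde_dbus.session, and the notification text is just a placeholder:

#!/usr/bin/python2
import os
import dbus

# Restore the session bus address saved at login so a cron-launched script can
# reach the session bus; assigning to os.environ also updates the process
# environment that libdbus reads.
with open(os.path.expanduser("~/tmp/kde_dbus.session")) as f:
    os.environ["DBUS_SESSION_BUS_ADDRESS"] = f.read().strip()

knotify = dbus.SessionBus().get_object("org.kde.knotify", "/Notify")
knotify.event("warning", "kde", [], "title", u"message body", [], [], 0, 0,
              dbus_interface="org.kde.KNotify")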
As of 2019 and KDE 5, it still works but gives slightly different results:
$ echo $DBUS_SESSION_BUS_ADDRESS
unix:path=/run/user/1863/bus
To test it, you can do the following:
$ qdbus org.freedesktop.ScreenSaver /ScreenSaver SimulateUserActivity
You may need to use qdbus-qt5 if you still have the old kde4 binaries installed along with kde5. You can determine which one you should use with the following:
export QDBUS_CMD=$(which qdbus-qt5 2> /dev/null || which qdbus || exit 1)
I run this with a sleep statement when I want to prevent my screensaver from engaging and it works. I run it remotely from another computer beside my main one.
For those who want to know how I lock and unlock the remote screensaver, it's a different command...
loginctl lock-session 1
or
loginctl unlock-session 1
That is assuming that your session is the first one. You can add scripts to the KDE notification events for screensaver start and stop. Hope this information helps someone who wants to synchronize their screen savers across more than one computer.
I know this is a long answer, but I wanted to provide an example for you to test with and a practical use case where I use it today.
