I have a variable list of programs that I want to kick off from a cron job. The solution I have settled on, at least for now, is to write the actual cron job in Python, then run through the list, starting each program with:
import subprocess

outf = open('the_command.log', 'w')
subprocess.Popen(['nohup', 'the_command', ...], stdout=outf)
outf.close()
The problem with this is that it creates a nohup.out file - the same one for each process, it seems. If I did this same thing from the command line, it might look like:
$ nohup the_command ... > the_command.log 2>&1
This works fine, except I get a message from nohup when I run it:
nohup: ignoring input and redirecting stderr to stdout
I have tried to redirect stderr to /dev/null, but the result is that the_command.log is empty. How can I solve this?
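For reference, the direct subprocess equivalent of that shell redirection, sending stderr to the same log via subprocess.STDOUT (the command name is a placeholder), would be something like:

import subprocess

# Mirrors: nohup the_command > the_command.log 2>&1
# With both streams redirected away from the terminal,
# nohup creates no nohup.out.
with open('the_command.log', 'w') as outf:
    subprocess.Popen(['nohup', 'the_command'],
                     stdout=outf,
                     stderr=subprocess.STDOUT)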
I solved this by using a different command, detach, from http://inglorion.net/software/detach/
But I now consider this to be improper. It would be better to use oneshot services started by your cron job script, or to make your cron entry start a oneshot service directly.
With this there would be no need to detach, as the processes aren't your script's children; rather, they are children of the supervisor. Any init that supports starting a normally-down service, and that does not restart it when it exits, can be used.
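For illustration only: with systemd as the supervisor, the cron entry could run systemctl start the_command.service against a oneshot unit along these lines (the unit name, paths, and the append: directive, which needs a recent systemd, are all assumptions):

[Unit]
Description=One-shot wrapper for the_command (illustrative)

[Service]
Type=oneshot
ExecStart=/usr/local/bin/the_command
StandardOutput=append:/var/log/the_command.log
StandardError=inherit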
Related
In Django I need to run a shell command at some point. The command takes 6-10 minutes, so I would like to get live stdout from the command in my Django view in order to track its progress live.
I know how to run the command and get live output with subprocess, but I have no clue how to pass the live output to the view.
I would suggest running the subprocess from cron and storing the output to a file or database where the view can read the progress from. Another option is to offload the work to a task queue such as Celery.
The problem is that the view is not persistent, so you cannot keep a hook to the output pipe across HTTP requests.
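A minimal sketch of the file-based approach, with long_task.sh and the log path as illustrative placeholders: the worker process streams output into a file, and the view reads that file on each request.

import subprocess
from django.http import HttpResponse

def run_task(logfile='/tmp/task_progress.log'):
    # Run outside the request cycle (cron, management command, ...),
    # streaming stdout and stderr into the progress file.
    with open(logfile, 'w') as log:
        subprocess.call(['long_task.sh'], stdout=log,
                        stderr=subprocess.STDOUT)

def progress(request):
    # In the Django view, return whatever has been written so far.
    with open('/tmp/task_progress.log') as f:
        return HttpResponse(f.read(), content_type='text/plain')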
My Python script needs to be killed and restarted every hour. I need to do this because it is sometimes possible (I take screenshots) that a browser window hangs because of a user login popup or something similar. Anyway, I created two files, 'reload.py' and 'screenshot.py'. I run reload.py from a cron job.
I thought something like this would work:
import os

# kill the process if it is still running
try:
    os.system("killall -9 screenshotTaker")
except:
    print 'nothing to kill'

# reload or start the process
os.execl("/path/to/script/screenshots.py", "screenshotTaker")
The problem is, as I also read elsewhere, that the second argument of execl (the given process name) doesn't work. How can I set a process name so the killall can do its work?
Thanks in advance!
The first argument to os.execl is the path to the executable. The remaining arguments are passed to that executable as if they were typed on the command line.
If you want "screenshotTaker" to become the name of the process, that is screenshots.py's responsibility. Do you do something special to that effect in that script?
BTW, a more common approach is to keep track of the PID of the running program (usually in /var/run/) and kill it by PID. This can be done in Python using os.kill; a sketch follows the man page excerpt below. At system level, some distributions have helpers for that exact purpose. For example, on Debian there is start-stop-daemon. Here is an excerpt of its man page:
start-stop-daemon(8)          dpkg utilities          start-stop-daemon(8)

NAME
       start-stop-daemon - start and stop system daemon programs

SYNOPSIS
       start-stop-daemon [options] command

DESCRIPTION
       start-stop-daemon is used to control the creation and termination of
       system-level processes. Using one of the matching options,
       start-stop-daemon can be configured to find existing instances of a
       running process.
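A minimal sketch of the PID-file approach using os.kill; the PID-file path and names are illustrative, and error handling is deliberately simple:

import os
import signal

PIDFILE = '/var/run/screenshot_taker.pid'

def kill_previous():
    # Kill the instance recorded in the PID file, if any.
    try:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
        os.kill(pid, signal.SIGKILL)
    except (IOError, ValueError, OSError):
        pass  # no PID file, stale contents, or process already gone

def record_pid():
    # Call this at startup in the long-running script.
    with open(PIDFILE, 'w') as f:
        f.write(str(os.getpid()))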
I have a bunch of files that I need to print via a PDF printer, and after each one is printed I need to perform additional tasks, but only once it has actually completed.
To do this, my Python script calls the command lpr path/to/file.doc -P PDF.
But this command returns 0 immediately, and I have no way to track when the printing process is finished, whether it was successful, and so on.
There is an option to send an email when printing is done, but waiting for an email after I start printing looks very hacky to me.
Do you have any ideas on how to get this done?
Edit 1
There are plenty of ways to check whether the printer is printing something at the current moment. So right after I start printing, I run the lpq command every 0.5 seconds to find out whether it is still printing. But this does not look to me like the best way to do it. I want to be alerted when the actual printing process has finished, and to know whether it was successful, etc.
If you have CUPS, you can use the System V-compatible lp instead of lpr. This prints, on stdout, a job id, e.g.
request id is PDF-5 (1 file(s))
(this is for the virtual printer cups-pdf). You can then grep for this id in the output of lpstat:
lpstat | grep '^PDF-5 '
If that produces no output, then your job is done. lpstat -l produces more status information, but its output will also be a bit harder to parse.
Obviously, there are cleaner Python solutions than running this actual shell code. Unfortunately, I couldn't find a way to check the status of a single job without plowing through the list of jobs.
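A minimal sketch of that approach from Python, assuming CUPS's lp and lpstat are available; the printer name, the parsing, and the polling interval are illustrative:

import subprocess
import time

def print_and_wait(path, printer='PDF', poll=0.5):
    # lp prints something like: "request id is PDF-5 (1 file(s))"
    out = subprocess.check_output(['lp', '-d', printer, path]).decode()
    job_id = out.split()[3]
    # Poll lpstat until the job disappears from the queue.
    while True:
        queue = subprocess.check_output(['lpstat']).decode()
        if not any(line.startswith(job_id + ' ')
                   for line in queue.splitlines()):
            return
        time.sleep(poll)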
You can check the state of the printer using the lpstat command (man lpstat). To wait for a process to finish, get the PID of the process and pass it to the wait command as an argument.
I am writing a Python program which runs a virtual terminal. Currently I am launching it like so:
import pexpect, thread

def create_input(child, scrollers=None, textlength=80, height=12):
    while 1:
        newtext = child.readline()
        print newtext

child = pexpect.spawn("bash", timeout=30000)
thread.start_new_thread(create_input, (child,))
This works, and I can send commands to it via child.send(command). However, I only get entire lines as output. This means that if I launch something like Nano or Links, I don't receive any output until the process has completed. I also can't see what I'm typing until I press enter. Is there any way to read the individual characters as bash outputs them?
You would need to change the output of whatever program bash is running to be unbuffered instead of line-buffered. A good number of programs have a command-line option for unbuffered output.
The expect project has a tool called unbuffer that looks like it can give you all bash output unbuffered. I have personally never used it, but there are other answers here on SO that also recommend it: bash: force exec'd process to have unbuffered stdout
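On the pexpect side, reading character by character rather than line by line also avoids waiting for newlines. A sketch using pexpect's read_nonblocking (the loop details are illustrative):

import pexpect

child = pexpect.spawn("bash", timeout=30000)
while True:
    try:
        # timeout=None blocks until at least one byte is available
        ch = child.read_nonblocking(size=1, timeout=None)
    except pexpect.EOF:
        break
    print(ch)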
The problem lies in something else. If you open an interactive shell, normally a terminal window is opened that runs bash, sh, csh or whatever. Note the word terminal!
In the old days we connected a terminal to a serial port (telnet does the same, but over IP); again, the word terminal.
Even a dumb terminal responds to ESC codes: to report its type, to set the cursor position, colors, clear the screen, etc.
So you are starting a subprocess with interactive output, but in this setup there is no way of telling the shell and its subprocesses that they are talking to a terminal, other than with bash startup parameters, if there are any.
I suggest you enable telnetd, but only on localhost (127.0.0.1).
Within your program, make a socket and connect to localhost:telnet, and look up how to emulate a proper terminal. If a program is in line mode you are fine, but if you go to full-screen editing, you will need an array of 80x24 or 132x24 or whatever size you want, to store its characters and colors. You also need to be able to shift lines up in that array.
I have not looked, but I cannot imagine there is no telnet client example in Python, and a terminal emulator must be out there too!
Another great thing is that telnet sessions clean themselves up if the IP connection is lost, eliminating ghost processes.
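For the client side, the standard library's telnetlib covers the basics. A minimal sketch, with the user name and password as placeholders; the terminal emulation itself would still be up to you:

import telnetlib

tn = telnetlib.Telnet('127.0.0.1')
tn.read_until(b'login: ')
tn.write(b'your_user\n')
tn.read_until(b'Password: ')
tn.write(b'your_password\n')
# From here, send commands and read the output over the socket.
tn.write(b'ls\n')
print(tn.read_until(b'$', timeout=5))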
Martijn
I have followed the suggestion in this question.
As I am using Django, I have set the script to store the date and time of each run in the DB, but no entry has been stored in the database yet.
Is there a way to figure out whether the cron job actually ran, other than typing "top" and searching through the output?
First, I would probably configure cron to mail yourself any output by using MAILTO:
In /etc/crontab:
MAILTO=username
Second, I usually add something to my script that (almost) cannot possibly fail, like the following:
#!/bin/sh
echo "$0 ran on `date +%c`" >> /tmp/crontab_test.log
# ... rest of program
If you're calling a python script directly from cron, you could do something similar or create a wrapper shell script.
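A minimal Python equivalent of that shell heartbeat, for a script cron invokes directly (the log path is illustrative):

import sys
import time

# Record a heartbeat before doing anything that could fail.
with open('/tmp/crontab_test.log', 'a') as log:
    log.write('%s ran on %s\n' % (sys.argv[0], time.strftime('%c')))

# ... rest of program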
If you have sendmail installed, you can add the following to /etc/aliases:
root: your_name@domain.com
After you do that, update the aliases by running this command:
sudo newaliases
Cron will automatically email you every time a job is run. No need to specify that in the crontab file.
Also, make sure you test your email capabilities (e.g. make sure you are able to send emails from the server) and lastly, create a trivial cronjob and test if you receive an email.
Do not assume!
In addition to setting up cron to send email, you can send the output of cron to a separate syslog log facility by adding the following to your /etc/syslog.conf:
# Log cron stuff
cron.* /var/log/cron.log
This should log a message to /var/log/cron.log each time a job is run.