I need to create a daemon in Python. I searched and found a good piece of code. The daemon should start automatically after the system boots, and it should be restarted if it closes unexpectedly. I went through the chapter about daemons in Advanced Programming in the UNIX Environment and have two questions.
To run the script automatically after boot, I need to put my daemon script into /etc/init.d. Is that correct?
What should I do to respawn the daemon? According to the book I need to add a respawn entry to /etc/inittab, but I don't have /etc/inittab on my system. Should I create it myself?
I suggest you look into Upstart if you're on Ubuntu. It's way better than inittab, but does involve a bit of a learning curve, to be honest.
Edit (by Blair): here is an adapted example of an upstart script I wrote for one of my own programs recently. A basic upstart script like this is fairly readable/understandable, though (like many such things) they can get complicated when you start doing fancy stuff.
description "mydaemon - my cool daemon"
# Start and stop conditions. Runlevels 2-5 are the
# multi-user (i.e., networked) levels. This means
# start the daemon when the system is booted into
# one of these runlevels and stop when it is moved
# out of them (e.g., when shut down).
start on runlevel [2345]
stop on runlevel [!2345]
# Allow the service to respawn automatically, but if
# crashes happen too often (10 times in 5 seconds)
# there's a real problem and we should stop trying.
respawn
respawn limit 10 5
# The program is going to daemonise (double-fork), and
# upstart needs to know this so it can track the change
# in PID.
expect daemon
# Set the mode the process should create files in.
umask 022
# Make sure the log folder exists.
pre-start script
mkdir -p -m0755 /var/log/mydaemon
end script
# Command to run it.
exec /usr/bin/python /path/to/mydaemon.py --logfile /var/log/mydaemon/mydaemon.log
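If you want to try this, note that Upstart jobs live in /etc/init (not /etc/init.d), so you would save the above as something like /etc/init/mydaemon.conf (mydaemon is just a placeholder name here); you can then control it with start mydaemon, stop mydaemon, and status mydaemon.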
To create a daemon, use double fork() as shown in the code you found.
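For reference, here is a minimal sketch of that double fork in Python (the run() function standing in for your daemon's main loop is hypothetical):
import os
import sys

def daemonize():
    # First fork: let the parent return to the shell, and make sure
    # the child is not a process group leader.
    if os.fork() > 0:
        sys.exit(0)
    os.setsid()  # Start a new session, detaching from the controlling terminal.
    # Second fork: the session leader exits, so the daemon can never
    # reacquire a controlling terminal.
    if os.fork() > 0:
        sys.exit(0)
    os.chdir('/')    # Don't keep any directory in use.
    os.umask(0o022)  # Reset the file-creation mask.
    # Redirect stdin, stdout, and stderr to /dev/null.
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):
        os.dup2(devnull, fd)

if __name__ == '__main__':
    daemonize()
    run()  # hypothetical: your daemon's main loop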
Then you need to write an init script for your daemon and copy it into /etc/init.d/.
http://www.novell.com/coolsolutions/feature/15380.html
There are many ways to specify how the daemon will be auto-started, e.g., chkconfig.
http://linuxcommand.org/man_pages/chkconfig8.html
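For example, assuming your init script is installed as /etc/init.d/mydaemon (a placeholder name), you would register it with chkconfig --add mydaemon and enable it with chkconfig mydaemon on.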
Or you can manually create the symlinks for certain runlevels.
Finally, you need to restart the service when it unexpectedly exits. You may include a respawn entry for the service in /etc/inittab.
http://linux.about.com/od/commands/l/blcmdl5_inittab.htm
Related
I have a task to write a Python script which has to parse a web page once a week. I wrote the script, but I do not know how I can make it run once a week. Could someone share advice and a possible solution?
Have a look at cron. It's not Python, but it fits the job much better in my opinion. For example, add this line to your crontab (crontab -e):
@weekly python path/to/your/script
A similar question was discussed here.
Whether the script itself should repeat a task periodically usually depends on how frequently the task should repeat. Once a week is usually better left to a scheduling tool like cron or at.
However, a simple method inside the script is to wrap your main logic in a loop that sleeps until the next desired start time, and then let the script run continuously. (Note that a script cannot reliably restart itself, and showing how to do so is beyond the scope of this question; prefer an external solution.)
Instead of
def main():
    ...

if __name__ == '__main__':
    main()
use
import time

one_week = 7 * 24 * 3600  # Seconds in a week

def main():
    ...

if __name__ == '__main__':
    while True:
        start = time.time()
        main()
        stop = time.time()
        elapsed = stop - start
        # Don't pass a negative value to sleep() if main() ever
        # takes longer than a week.
        time.sleep(max(0, one_week - elapsed))
Are you planning to run it locally? Are you working with a virtual environment?
Task scheduler option
If you are running it locally, you can use Task Scheduler on Windows. I found that setting up the task can be a bit tricky, so here is an overview:
Open Task Scheduler > Create Task (on right actions menu)
In tab "General" name the task
In tab "Triggers" define your triggers (i.e. when you want to schedule the tasks)
In tab "Actions" press on new > Start a program. Under Program/script point to the location (full path) of your python executable (python.exe). If you are working with a virtual environment it is typically in venv\Scripts\python.exe. The full path would be C:\your_workspace_folder\venv\Scripts\python.exe. Otherwise it will be most likely in your Program Files.
Within the same tab, under Add arguments, enter the full path to your python script. For instance: "C:\your_workspace_folder\main.py" (note that you need the ").
Press Ok and save your task.
Debugging
To test whether your schedule works, you could right-click the task in Task Scheduler and press Run. However, then you don't see the logs of what is happening. I therefore recommend opening a terminal (e.g. cmd) and typing the following:
C:\your_workspace_folder\venv\Scripts\python.exe "C:\your_workspace_folder\main.py"
This allows you to see the full trace of your code and whether it's running properly. Typical errors are related to file paths (e.g. using a relative path instead of the full path).
Sleeping mode
It can happen that some of the tasks do not run because you don't have administrator privileges and your computer goes into sleep mode. A workaround I found is to keep the computer from going into sleep mode using a .vbs script. Simply open Notepad and create a new file named idle.vbs (the extension should be .vbs, so make sure you select "All files"). Paste in the following code:
Dim objResult
Set objShell = WScript.CreateObject("WScript.Shell")

Do While True
    objResult = objShell.SendKeys("{NUMLOCK}{NUMLOCK}")
    WScript.Sleep (60000)
Loop
Is there a good way to automatically restart an instance if it reaches the end of a startup script?
I have a Python script that I want to run continuously on Compute Engine; it checks the Pub/Sub feed from a GAE instance that's running a cron job. I haven't figured out a good way to catch every possible error, and there are many edge cases that are hard to test (e.g. the instance running out of memory). It would be better if I could just restart the instance every time the script finishes (because it should never finish). The auto-restart option won't work because the instance doesn't shut down; it just stops running the script.
A simple shutdown -r now may be enough.
Or if you prefer gcloud:
gcloud compute instances reset $(hostname)
Mind that reset is a real reset, without a proper OS shutdown.
You might also want to check the documentation on resetting or restarting an instance before doing so.
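If you'd rather trigger the restart from inside the script itself, here is a minimal sketch (assuming Python 3.5+ and that the script runs with root privileges, so it is allowed to reboot the machine):
import subprocess

def main():
    # ... your Pub/Sub checking loop; it should never return ...
    pass

if __name__ == '__main__':
    try:
        main()
    finally:
        # If we ever get here, something is wrong: reboot the VM.
        subprocess.run(['shutdown', '-r', 'now'])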
I am using systemd on Raspbian to run a Python script, script.py. The my.service file looks like this:
[Unit]
Description=My Python Script
Requires=other.service
[Service]
Restart=always
ExecStart=/home/script.py
ExecStop=/home/script.py
[Install]
WantedBy=multi-user.target
When other.service (listed under Requires=) stops, I want my.service to stop immediately and also terminate the Python process running script.py.
However, when trying this out by stopping other.service and then monitoring the state of my.service using systemctl, it seems to take a good while for my.service to actually enter a 'failed' (stopped) state. It seems that calling ExecStop on the script is not enough to terminate my.service and the underlying script.py in a timely manner.
Just to be extra clear: I want the script to terminate pretty much immediately, in a way that is analogous to Ctrl + C. Basic Python clean-up is OK, but I don't want systemd to wait for a 'graceful' response timeout, or something like that.
Questions:
Is my interpretation of the delay correct, or is it just systemctl that is slow to update its status overview?
What is the recommended way to stop the service and terminate the script? Should I include some sort of SIGINT catching in the Python script? If so, how? Or is there something that can be done in my.service to expedite stopping the service and killing the script?
I think you should look into TimeoutStopSec and its default, DefaultTimeoutStopSec. At the provided links there is some more info about WatchdogSec and other options that you might find useful. It looks like DefaultTimeoutStopSec defaults to 90 seconds, which might be the delay you are experiencing?
Under the [Unit] section options you should use Requisite=other.service. This is similar to Requires=; however, if the units listed here are not already started, they will not be started and the transaction will fail immediately.
For triggering script execution again, under the [Unit] section you can use OnFailure=, a space-separated list of one or more units that are activated when this unit enters the 'failed' state.
Also, the BindsTo= option configures requirement dependencies very similar in style to Requires=; however, in addition to that behavior, it declares that this unit is stopped when any of the units listed suddenly disappears. Units can suddenly and unexpectedly disappear if a service terminates on its own, a device is unplugged, or a mount point is unmounted without involvement of systemd.
I think in your case BindsTo= is the option to use, since it causes the current unit to stop when the associated unit terminates.
From the systemd.unit man page.
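As for catching the stop signal in the script: by default systemd stops a unit by sending SIGTERM (not SIGINT), so a handler along these lines would let script.py do its basic clean-up and exit promptly. A minimal sketch:
import signal
import sys

def handle_sigterm(signum, frame):
    # Do any basic clean-up here, then exit right away so
    # systemd does not have to wait for its stop timeout.
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)

# ... main loop of script.py ...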
My current project contains quite a few custom commands inside an app which act as listeners on a bus, and each of the tasks is blocking, which means they have to run in their own processes.
[bus]
consume_pay_transaction_completed
consume_pay_transaction_declined
consume_pay_transaction_failed
This makes development/testing difficult because I will have to run each command individually to test the workflow.
I am wondering how easy it would be to write a master command that treats the other ones as slaves, monitors their health, and respawns them if necessary. Are there any existing utilities/libraries in Django or Python to assist me in writing a 'start_all' command?
[bus]
consume_pay_transaction_completed
consume_pay_transaction_declined
consume_pay_transaction_failed
start_all
The start_all command could be implemented with call_command.
Monitoring health and respawning them if necessary sounds like a job for something like celery.
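If you want to try it without extra dependencies first, here is a minimal sketch of such a start_all command; the file location and the respawn logic are assumptions, and the listener names are taken from your question:
# bus/management/commands/start_all.py (hypothetical location)
import multiprocessing
import time

from django.core.management import call_command
from django.core.management.base import BaseCommand

LISTENERS = [
    'consume_pay_transaction_completed',
    'consume_pay_transaction_declined',
    'consume_pay_transaction_failed',
]

def run_listener(name):
    call_command(name)  # Blocks for as long as the listener runs.

class Command(BaseCommand):
    help = 'Start all bus listeners and respawn them if they die.'

    def spawn(self, name):
        proc = multiprocessing.Process(target=run_listener, args=(name,))
        proc.start()
        return proc

    def handle(self, *args, **options):
        workers = {name: self.spawn(name) for name in LISTENERS}
        while True:
            time.sleep(5)
            for name, proc in workers.items():
                if not proc.is_alive():
                    self.stderr.write('%s died, respawning' % name)
                    workers[name] = self.spawn(name)
Note that each child process inherits the parent's database connections, so in practice you may want to close them (django.db.connections.close_all()) before forking.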
My Python script needs to be killed every hour, after which I need to restart it. I need this because sometimes (I create screenshots) a browser window hangs, for instance on a user login popup. Anyway, I created two files, 'reload.py' and 'screenshot.py', and I run reload.py from a cronjob.
I thought something like this would work
# kill process if still running
try:
    os.system("killall -9 screenshotTaker")
except:
    print 'nothing to kill'

# reload or start process
os.execl("/path/to/script/screenshots.py", "screenshotTaker")
The problem is (and this matches what I read as well) that the second argument of execl (the given process name) doesn't seem to work. How can I set a process name so the kill does its work?
Thanks in advance!
The first argument to os.execl is the path to the executable. The remaining arguments are passed to that executable as if they were typed on the command line.
If you want "screenshotTaker" become the name of the process, that is "screenshots.py" responsibility to do so. Do you do something special in that sense in that script?
BTW, a more common approach is to keep track (in /var/run/ usually) of the PID of the running program, and kill it by PID. This can be done in Python using os.kill. At system level, some distributions have helpers for that exact purpose; for example, on Debian there is start-stop-daemon. Here is an excerpt of the man page:
start-stop-daemon(8)          dpkg utilities          start-stop-daemon(8)

NAME
       start-stop-daemon - start and stop system daemon programs

SYNOPSIS
       start-stop-daemon [options] command

DESCRIPTION
       start-stop-daemon is used to control the creation and termination of
       system-level processes. Using one of the matching options,
       start-stop-daemon can be configured to find existing instances of a
       running process.
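To illustrate the PID-file approach in Python, here is a minimal sketch (the path /var/run/screenshottaker.pid and the structure are assumptions, not your actual script):
import os
import signal

PIDFILE = '/var/run/screenshottaker.pid'  # hypothetical path

def kill_previous():
    # Kill the previous instance, if a PID file exists for it.
    try:
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
        os.kill(pid, signal.SIGKILL)
    except (IOError, ValueError, OSError):
        pass  # No previous instance running, or a stale PID file.

def write_pidfile():
    with open(PIDFILE, 'w') as f:
        f.write(str(os.getpid()))

if __name__ == '__main__':
    kill_previous()
    write_pidfile()
    # ... take screenshots ...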