Restart Python.py when it stops working

I am using the CherryPy framework to run my Python code on a server, but the process stops working when the load increases.
Every time this happens I have to go and start the Python code manually. Is there any way I can use Gunicorn with CherryPy so that Gunicorn restarts the code automatically when it stops working?
Any other solution will also work in this case. I just want to make sure that the Python program does not stop working.

I use a cron job that checks the memory load every few minutes and resets CherryPy when the memory exceeds 500 MB, so that the web host doesn't complain to me with emails. Something on my server doesn't release memory when a function ends as it should, so this is a pragmatic workaround.
This hack may seem weird because I reset it using an HTTP request, but that's because I spent hours trying to figure out how to do this within Bash and gave up. It works.
CRON PART
*/2 * * * * /usr/local/bin/python2.7 /home/{mypath}/cron_reset_cp.py > $HOME/cron.log 2>&1
And code inside cron_reset_cp.py...
# cron for resetting cherrypy /cp/ when 500+ MB
import os

# assuming this starts in /home/my_username/
os.chdir('/home/my_username/cp/')
import mem  # imported after the chdir; it lives in cp/

C = mem.MemoryMonitor('my_username')  # this class adds up all the memory
memory = int(float(C.usage()))
if memory > 500:  # MB
    # Tried: pid = os.getpid()  (current process = cron job) -- that approach did not work for me
    import urllib2
    cp = urllib2.urlopen('http://myserver.com/cp?reset={password}')
Then I added this function to reset CherryPy via cron OR after a GitHub update, from any browser (assuming only I know the {password}).
The reset url would be http://myserver.com/cp?reset={password}
def index(self, **kw):
    if kw.get('reset') == '{password}':
        cherrypy.engine.restart()
        ip = cherrypy.request.headers["X-Forwarded-For"]  # get_client_ip
        return 'CherryPy RESETTING for duty, sir! requested by ' + str(ip)
The MemoryMonitor part is from here:
How to get current CPU and RAM usage in Python?

Python uses exceptions to control flow. Wrapping your main loop in a try/except statement lets you catch whatever is making your code stall -- memory exhaustion, a load spike, or any number of issues (hard to say without seeing the actual code).
In the except clause, you can release any resources you allocated and restart your process.
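That supervise-and-restart pattern can be sketched as follows (the function names are mine, not from the question):

```python
import time
import traceback

def run_forever(target, delay=5):
    """Call target(); if it raises, log the error and restart after a delay."""
    while True:
        try:
            target()
            return  # clean exit: stop restarting
        except Exception:
            traceback.print_exc()  # release/clean up resources here if needed
            time.sleep(delay)
```

Wrapping your CherryPy entry point in something like run_forever(start_server) keeps it respawning after any unhandled exception; keep the traceback logging, or the loop will silently mask real bugs.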

Depending on your OS, try the following logic:
Write your script's PID to /path/pid.file (via os.getpid())
Create a service script that connects to your web port
Try to fetch data
If no data was received, kill the PID stored in /path/pid.file
Restart the script
Your main script:
import os
with open('./pidfile.pid', 'wb') as fh:
    fh.write(str(os.getpid()))

# ... execute your code as per normal ...
service.py script:
from socket import socket
from os import kill
from signal import SIGTERM

s = socket()
try:
    s.connect(('127.0.0.1', 80))
    s.send('GET / HTTP/1.1\r\n\r\n')
    nbytes = len(s.recv(8192))  # don't shadow the builtin len()
    s.close()
except:
    nbytes = 0

if nbytes <= 0:
    # the server did not answer: kill the PID we stored earlier
    with open('/path/to/pidfile.pid', 'rb') as fh:
        kill(int(fh.read()), SIGTERM)  # os.kill takes (pid, signal)
And have a cronjob (execute in a console):
EDITOR=nano crontab -e
Now you're in the text-editor editing your cronjobs, write the following two lines at the bottom:
*/2 * * * * cd /path/to && python service.py
*/5 * * * * cd /path/to && python script.py
Press Ctrl+X and when asked to save changes, write Y and enjoy.
Also, instead of restarting your script from within the service.py script, I'd suggest that service.py only kills the PID stored in /path/pid.file and lets your OS handle starting your script back up whenever the PID file is missing; Linux at least has very nifty features for this.
Best practice Ubuntu
It's considered best practice to use the system's service machinery -- service apache2 status, for instance; service scripts let you reload, stop, start, and query job states.
Check out: http://upstart.ubuntu.com/cookbook/#service-job
And check the service scripts for all the other applications and not only use them as skeletons but make sure your application follows the same logic.
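On systemd-based distributions the equivalent of an upstart respawning job is a unit with Restart=; a minimal sketch (the unit name, paths, and interpreter are assumptions, not from the question):

```ini
# /etc/systemd/system/myscript.service  (hypothetical)
[Unit]
Description=My long-running Python script
After=network.target

[Service]
ExecStart=/usr/bin/python /path/to/script.py
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target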

Perhaps you need supervisord to monitor your gunicorn process and restart it when necessary:
http://www.onurguzel.com/managing-gunicorn-processes-with-supervisor/
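A minimal supervisord program section for that setup might look like this (the paths, module name, and port are assumptions):

```ini
; /etc/supervisor/conf.d/gunicorn.conf  (hypothetical paths)
[program:gunicorn]
command=/usr/local/bin/gunicorn myapp:app --bind 127.0.0.1:8000
directory=/home/my_username/myapp
autostart=true
autorestart=true
stderr_logfile=/var/log/gunicorn.err.log
```

After editing, `supervisorctl reread` followed by `supervisorctl update` picks up the new program.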

Related

Reading PC turn-on and shut-down times?

I'm writing a little routine to track my sleep cycle. Usually when I wake up I turn on my PC within minutes, so reading out when the system turns on and when it shuts down would be great. This program does the same thing: https://www.neuber.com/free/pctime/
I tried googling for a lib or function that can read these system events, but most of the results are about turning the PC on and off with a command, so my question is:
What would be the best way to get the times the PC turns on and off?
Thanks
If you're on Linux (I'll assume Systemd here), you could write a service that executes code on startup/shutdown. That code would write the current timestamp to a CSV file, along with an indicator "startup" or "shutdown".
Here's a Python 3 script that takes as its first argument the type of timestamp to log to "updownlog.csv":
import os
import sys
import time
def main():
    logfile = "updownlog.csv"
    write_header = False
    if len(sys.argv) != 2:
        sys.exit("Error: script takes exactly one argument")
    if sys.argv[1] != "shutdown" and sys.argv[1] != "startup":
        sys.exit("Error: first argument should be 'startup' or 'shutdown'")
    typ = sys.argv[1]
    if not os.path.exists(logfile):
        write_header = True
    with open(logfile, "a") as f:
        now = time.time()
        if write_header:
            f.write("type,timestamp\n")
        f.write("{},{}\n".format(typ, now))

if __name__ == "__main__":
    main()
Next, you'll need to create the system service triggering this script. I'm shamelessly copying a solution offered in an answer on the Unix & Linux Stack Exchange: all credit to "John 9631"! If you still use an init.d-based system, there are great answers in that thread too.
So, create the service file for your logging:
vim /etc/systemd/system/log_start_stop.service
and copy in the file content:
[Unit]
Description=Log startup and shutdown times

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/home/Sungod3k/log.py startup
ExecStop=/home/Sungod3k/log.py shutdown

[Install]
WantedBy=multi-user.target
Then enable the service with the command:
systemctl enable log_start_stop
Granted, this won't yet tell you whether you have a sleep deficit, so you'll need to do some post-processing, e.g. with Python or R, or even awk.
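For instance, a post-processing sketch in Python that pairs each shutdown with the following startup (it assumes the exact CSV layout written by the logger above; the function name is my invention):

```python
import csv
from datetime import datetime

def sleep_intervals(path):
    """Pair each 'shutdown' row with the next 'startup' row and return the
    gaps as timedeltas -- a rough proxy for time spent asleep."""
    gaps = []
    last_shutdown = None
    with open(path) as f:
        for row in csv.DictReader(f):
            ts = datetime.fromtimestamp(float(row["timestamp"]))
            if row["type"] == "shutdown":
                last_shutdown = ts
            elif row["type"] == "startup" and last_shutdown is not None:
                gaps.append(ts - last_shutdown)
                last_shutdown = None
    return gaps
```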

Restart python script

I have a script that collects data from the streaming API. I'm getting an error at random that I believe is coming from Twitter's end for whatever reason. It doesn't happen at a specific time; I've seen it as early as 10 minutes after running my script, and other times after 2 hours.
My question is: how do I create another script (outside the running one) that can catch when it terminates with an error, and restart it after a delay?
I did some searching, and most answers were related to using Bash on Linux; I'm on Windows. Other suggestions were to use the Windows Task Scheduler, but that can only be set for a known time.
I came across the following code:
import os, sys, time

def main():
    print "AutoRes is starting"
    executable = sys.executable
    args = sys.argv[:]
    args.insert(0, sys.executable)
    time.sleep(1)
    print "Respawning"
    os.execvp(executable, args)

if __name__ == "__main__":
    main()
If I'm not mistaken, that runs inside the code, correct? The issue with that is my script is currently collecting data and I can't terminate it to edit.
How about this?
from os import system
from time import sleep

while True:  # manually terminate when you want to stop streaming
    system('python streamer.py')
    sleep(300)  # sleep for 5 minutes before restarting
In the meantime, when something goes wrong in streamer.py, end it from there by invoking sys.exit(1).
Make sure this script and streamer.py are in the same directory.
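A variation on the loop above that only respawns after a failure, using the child's exit code (streamer.py is the asker's script; the helper name is mine):

```python
import subprocess
import sys
import time

def keep_alive(cmd, delay=300):
    """Re-run cmd whenever it exits with a non-zero code; stop on a clean exit."""
    while True:
        code = subprocess.call(cmd)
        if code == 0:
            break  # clean shutdown requested, don't respawn
        print("exited with %d, restarting in %d seconds" % (code, delay))
        time.sleep(delay)

# keep_alive([sys.executable, "streamer.py"])
```

With this shape, streamer.py signals "restart me" with sys.exit(1) and "stop for good" with sys.exit(0).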

Python script running in one system will perform the result in another system

I am trying to write a Python script which, when executed, will open a Maya file on another computer and create its playblast there. Is this possible? I should also add that the systems I use are all Windows. Thanks
Yes, it is possible; I do this all the time on several computers. First you need to access the computer (that part has been answered elsewhere). Then call Maya from within your shell as follows:
maya -command myblast -file filetoblast.ma
you will need myblast.mel somewhere in your script path
myblast.mel:
global proc myblast(){
    playblast -widthHeight 1920 1080 -percent 100
        -fmt "movie" -v 0 -f (`file -q -sn`+".avi");
    evalDeferred("quit -f");
}
Configure what you need in this file, such as shading options. Please note that launching the Maya GUI uses up one license, and playblast needs that GUI (you could shave off some seconds by not loading the default GUI).
In order to execute something on a remote computer, you've got to have some sort of service running there.
If it is a linux machine, you can simply connect via ssh and run the commands. In python you can do that using paramiko:
import paramiko
ssh = paramiko.SSHClient()
ssh.connect('127.0.0.1', username='foo', password='bar')
stdin, stdout, stderr = ssh.exec_command("echo hello")
Otherwise, you can use a python service, but you'll have to run it beforehand.
You can use Celery as previously mentioned, or ZeroMQ, or more simply use RPyC:
Simply run the rpyc_classic.py script on the target machine, and then you can run python on it:
conn = rpyc.classic.connect("my_remote_server")
conn.modules.os.system('echo foo')
Alternatively, you can create a custom RPyC service (see documentation).
A final option is using an HTTP server like previously suggested. This may be easiest if you don't want to start installing everything. You can use Bottle which is a simple HTTP framework in python:
Server-side:
from bottle import route, run

@route('/run_maya')
def index():
    # Do whatever
    return 'kay'

run(host='localhost', port=8080)
Client-side:
import requests
requests.get('http://remote_server/run_maya')
One last option for cheap RPC is to run maya.standalone from a Maya Python interpreter ("mayapy", usually installed next to the maya binary). The standalone runs inside a regular Python script, so it can use any of the remote-procedure tricks in KimiNewt's answer.
You can also create your own mini-server using basic python. The server could use the maya command port, or a wsgi server using the built in wsgiref module. Here is an example which uses wsgiref running inside a standalone to control a maya remotely via http.
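For illustration, a minimal wsgiref command server of the kind described; the /run_maya route and response strings are my inventions, and the actual Maya call is stubbed out:

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    """Tiny WSGI app: a GET to /run_maya would kick off the playblast."""
    if environ.get("PATH_INFO") == "/run_maya":
        body = b"started playblast"  # here you'd call into maya.standalone
    else:
        body = b"unknown command"
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

# To serve it (blocks forever):
# make_server("0.0.0.0", 8080, app).serve_forever()
```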
We've been dealing with the same issue at work. We're using Celery as the task manager and have code like this inside of the Celery task for playblasting on the worker machines. This is done on Windows and uses Python.
import os
import subprocess
import tempfile
import textwrap

MAYA_EXE = r"C:\Program Files\Autodesk\Maya2016\bin\maya.exe"
temp_dir = tempfile.gettempdir()  # or wherever the temporary script should live

def function_name():
    # the python code you want to execute in Maya
    pycmd = textwrap.dedent('''
        import time
        import pymel.core as pm
        # Your code here to load your scene and playblast
        # new scene to remove quicktimeShim which sometimes fails to quit
        # with Maya and prevents the subprocess from exiting
        pm.newFile(force=True)
        # wait a second to make sure quicktimeShim is gone
        time.sleep(1)
        pm.evalDeferred("pm.mel.quit('-f')")
    ''')

    # write the code into a temporary file
    tempscript = tempfile.NamedTemporaryFile(delete=False, dir=temp_dir)
    tempscript.write(pycmd)
    tempscript.close()

    # build a subprocess command
    melcmd = 'python "execfile(\'%s\')";' % tempscript.name.replace('\\', '/')
    cmd = [MAYA_EXE, '-command', melcmd]

    # launch the subprocess
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    proc.wait()

    # when the process is done, remove the temporary script
    try:
        os.remove(tempscript.name)
    except WindowsError:
        pass

Python application becomes non-responsive due to subprocess

I have written a Python application, using Flask, that serves a simple website that I can use to start playback of streaming video on my Raspberry Pi (microcomputer). Essentially, the application allows be to use my phone or tablet as a remote control.
I tested the application on Mac OS, and it works fine. After deploying it to the Raspberry Pi (with the Raspbian variant of Debian installed), it serves the website just fine, and starting playback also works as expected. But, stopping the playback fails.
Relevant code is hosted here: https://github.com/lcvisser/mlbviewer-remote/blob/master/remote/mlbviewer-remote.py
The subprocess is started like this:
cmd = 'python2.7 mlbplay.py v=%s j=%s/%s/%s i=t1' % (team, mm, dd, yy)
player = subprocess.Popen(cmd, shell=True, bufsize=-1, cwd=sys.argv[1])
This works fine.
The subprocess is supposed to stop after this:
player.send_signal(signal.SIGINT)
player.communicate()
This does work on Mac OS, but it does not work on the Raspberry Pi: the application hangs until the subprocess (started as cmd) is finished by itself. It seems like SIGINT is not sent or not received by the subprocess.
Any ideas?
(I have posted this question also here: https://unix.stackexchange.com/questions/133946/application-becomes-non-responsive-to-requests-on-raspberry-pi as I don't know if this is an OS problem or if it a Python/Flask-related problem.)
UPDATE:
Trying to use player.communicate() as suggested by Jan Vlcinsky below (and after finally seeing the warning here) did not help.
I'm thinking about using the solution proposed by Jan Vlcinsky, but if Flask does not even receive the request, I don't think that would resolve the issue.
UPDATE 2:
Yesterday night I was fortunate to have a situation in which I was able to exactly pinpoint the issue. Updated the question with relevant code.
I feel like the solution of Jan Vlcinsky will just move the problem to a different application, which will keep the Flask application responsive, but will let the new application hang.
UPDATE 3:
I edited the original part of the question to remove what I now know not to be relevant.
UPDATE 4: After the comments from @shavenwarthog, the following information might be very relevant:
On Mac, mlbplay.py starts something like this:
rmtpdump <some_options_and_url> | mplayer -
When sending SIGINT to mlbplay.py, it terminates the process group created by this piped command (if I understood correctly).
On the Raspberry Pi, I'm using omxplayer, but to avoid having to change the code of mlbplay.py (which is not mine), I made a script called mplayer, with the following content:
#!/bin/bash
MLBTV_PIPE=mlbpipe
if [ ! -p $MLBTV_PIPE ]
then
    mkfifo $MLBTV_PIPE
fi
cat <&0 > $MLBTV_PIPE | omxplayer -o hdmi $MLBTV_PIPE
I'm now guessing that this last line starts a new process group, which is not terminated by the SIGINT signal and thus making my app hang. If so, I should somehow get the process group ID of this group to be able to terminate it properly. Can someone confirm this?
UPDATE 5: omxplayer does handle SIGINT:
https://github.com/popcornmix/omxplayer/blob/master/omxplayer.cpp#L131
UPDATE 6: It turns out that somehow my SIGINT transforms into a SIGTERM somewhere along the chain of commands. SIGTERM is not handled properly by omxplayer, which appears to be the problem why things keep hanging. I solved this by implementing a shell script that manages the signals and translates them to proper omxplayer commands (sort-of a lame version of what Jan suggested).
SOLUTION: The problem was in player.send_signal(). The signal was not properly handled along the chain of commands, which caused the parent app to hang. The solution is to implement wrappers for commands that don't handle the signals well.
In addition, I used Popen(cmd.split()) rather than shell=True. This works a lot better when sending signals!
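If you do need shell=True (or any command that spawns a pipeline), another option is to start the child in its own process group and signal the whole group. This is a sketch with a stand-in sleep command rather than the real mlbplay invocation; it is POSIX-only (preexec_fn and killpg don't exist on Windows), which matches the Raspberry Pi setup here:

```python
import os
import signal
import subprocess

# setsid puts the shell and everything it spawns into a new process group,
# so one signal can reach a whole rmtpdump | mplayer style pipeline.
player = subprocess.Popen('sleep 123', shell=True, preexec_fn=os.setsid)

# ... later, to stop playback, signal the group rather than just the shell:
os.killpg(os.getpgid(player.pid), signal.SIGINT)
player.wait()
```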
The problem is marked in following snippet:
@app.route('/watch/<year>/<month>/<day>/<home>/<away>/')
def watch(year, month, day, home, away):
    global session
    global watching
    global player

    # Select video stream
    fav = config.get('favorite')
    if fav:
        fav = fav[0]  # TODO: handle multiple favorites
        if fav in (home, away):
            # Favorite team is playing
            team = fav
        else:
            # Use stream of home team
            team = home
    else:
        # Use stream of home team
        team = home

    # End session
    session = None

    # Start mlbplay
    mm = '%02i' % int(month)
    dd = '%02i' % int(day)
    yy = str(year)[-2:]
    cmd = 'python2.7 mlbplay.py v=%s j=%s/%s/%s' % (team, mm, dd, yy)
    # problem is here ----->
    player = subprocess.Popen(cmd, shell=True, cwd=sys.argv[1])
    # <----- problem is here

    # Render template
    game = {}
    game['away_code'] = away
    game['away_name'] = TEAMCODES[away][1]
    game['home_code'] = home
    game['home_name'] = TEAMCODES[home][1]
    watching = game
    return flask.render_template('watching.html', game=game)
You are starting up a new process to execute the shell command, but you do not wait until it completes. You seem to rely on the fact that there is only a single command-line process, but your frontend does not enforce that and can easily start another one.
Another problem could be that you do not call player.communicate(), so your process could block if stdout or stderr fills up with output.
Proposed solution - split process controller from web app
You are trying to create a UI for controlling a player. For this purpose, it would be practical to split your solution into a frontend and a backend. The backend would serve as the player controller and would offer methods like
start
stop
nowPlaying
To integrate front and backend, multiple options are available, one of them being zerorpc as shown here: https://stackoverflow.com/a/23944303/346478
Advantage would be, you could very easily create other frontends (like command line one, even remote one).
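A sketch of what such a backend controller might look like (the class and method names are mine; the frontend would call these instead of touching subprocess directly):

```python
import os
import signal
import subprocess

class PlayerController(object):
    """Owns the player process so the web frontend never blocks on it."""

    def __init__(self):
        self.proc = None
        self.game = None

    def start(self, cmd, game=None):
        self.stop()  # never run two players at once
        self.proc = subprocess.Popen(cmd, preexec_fn=os.setsid)
        self.game = game

    def stop(self):
        if self.proc is not None and self.proc.poll() is None:
            os.killpg(os.getpgid(self.proc.pid), signal.SIGINT)
            self.proc.wait()
        self.proc = None
        self.game = None

    def now_playing(self):
        if self.proc is not None and self.proc.poll() is None:
            return self.game
        return None
```

Starting a second stream implicitly stops the first, which also fixes the "frontend can easily start another one" problem described above.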
One more piece of the puzzle: proc.terminate() vs send_signal.
The following code forks a 'player' (just a shell with sleep in this case), then prints its process information. It waits a moment, terminates the player, then verifies that the process is no more, it has ceased to be.
Thanks to @Jan Vlcinsky for adding the proc.communicate() to the code.
(I'm running Linux Mint LMDE, another Debian variation.)
source
# pylint: disable=E1101
import subprocess, time

def show_procs(pid):
    print 'Process Details:'
    subprocess.call(
        'ps -fl {}'.format(pid),
        shell=True,
    )

cmd = '/bin/sleep 123'
player = subprocess.Popen(cmd, shell=True)
print '* player started, PID', player.pid
show_procs(player.pid)

time.sleep(3)

print '\n*killing player'
player.terminate()
player.communicate()
show_procs(player.pid)
output
* player started, PID 20393
Process Details:
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
0 S johnm 20393 20391 0 80 0 - 1110 wait 17:30 pts/4 0:00 /bin/sh -c /bin/sleep 123
*killing player
Process Details:
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD

Avoid python setup time

The image below shows that Python spends a lot of time in user space. Is it possible to reduce this time at all?
I will be running a script several hundred times. Is it possible to start Python so that it initializes once and doesn't have to do so on subsequent runs?
I just searched for the same and found this:
http://blogs.gnome.org/johan/2007/01/18/introducing-python-launcher/
Python-launcher does not solve the problem directly, but it points into an interesting direction: If you create a small daemon which you can contact via the shell to fork a new instance, you might be able to get rid of your startup time.
For example get the python-launcher and socat¹ and do the following:
PYTHONPATH="../lib.linux-x86_64-2.7/" python python-launcher-daemon &
echo pass > 1
for i in {1..100}; do
echo 1 | socat STDIN UNIX-CONNECT:/tmp/python-launcher-daemon.socket &
done
Todo: Adapt it to your program, remove the GTK stuff. Note the & at the end: Closing the socket connection seems to be slow.
The essential trick is to just create a server which opens a socket. Then it reads all the data from the socket. Once it has the data, it forks like the following:
pid = os.fork()
if pid:
    return

signal.signal(signal.SIGPIPE, signal.SIG_DFL)
signal.signal(signal.SIGCHLD, signal.SIG_DFL)

glob = dict(__name__="__main__")
print 'launching', program
execfile(program, glob, glob)
raise SystemExit
Running 100 programs that way took just 0.7 seconds for me.
You might have to switch from forking to just executing the code in-process if you want to be really fast.
(That’s what I also do with emacsclient… My emacs takes ~30s to start (due to excessive use of additional libraries I added), but emacsclient -c shows up almost instantly.)
¹: http://www.socat.org
Write the "do this several 100 times" logic in your Python script. Call it ONCE from that other language.
Use timeit instead:
http://docs.python.org/library/timeit.html
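For example (timeit measures the statement itself, inside one interpreter, so startup cost is paid only once):

```python
import timeit

# timeit runs the snippet many times in-process, so interpreter startup
# is excluded from the measurement
per_call = timeit.timeit("sum(range(100))", number=10000) / 10000
print("%.2f microseconds per call" % (per_call * 1e6))
```

The shell one-liner `python -m timeit "sum(range(100))"` does the same.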
