How to exit function with signal on Windows? - python

I have the following code, written in Python 2.7 on Windows. I want the script to check an FTP server for an update to itself, download the new version (preserving the filename) if there is one, and then execute the new script after terminating the current one via os.kill with SIGTERM.
I went with the exit-function approach, but I read that on Windows this only works with the atexit library and the default Python exit methods. So I used a combination of atexit.register() and a signal handler.
import os
import sys
import atexit
import signal
import subprocess
from ftplib import FTP
from StringIO import StringIO

filematch = 'test.py'
version = '0.0'
checkdir = os.path.abspath(".")
dircontent = os.listdir(checkdir)
r = StringIO()

def exithandler():
    try:
        try:
            if filematch in dircontent:
                os.remove(checkdir + '\\' + filematch)
        except Exception as e:
            print e
        ftp = FTP(ip_address)
        ftp.login(username, password)
        ftp.cwd('/Test')
        for filename in ftp.nlst(filematch):
            fhandle = open(filename, 'wb')
            ftp.retrbinary('RETR ' + filename, fhandle.write)
            fhandle.close()
        subprocess.Popen([sys.executable, "test.py"])
        print 'Test file successfully updated.'
    except Exception as e:
        print e

ftp = FTP(ip_address)
ftp.login(username, password)
ftp.cwd('/Test')
ftp.retrbinary('RETR version.txt', r.write)
if r.getvalue() != version:
    atexit.register(exithandler)
    somepid = os.getpid()
    signal.signal(SIGTERM, lambda signum, stack_frame: exit(1))
    os.kill(somepid, signal.SIGTERM)
print 'Successfully replaced and started the file'
Using the:
signal.signal(SIGTERM, lambda signum, stack_frame: exit(1))
I get:
Traceback (most recent call last):
  File "C:\Users\STiX\Desktop\Python Keylogger\test.py", line 50, in <module>
    signal.signal(SIGTERM, lambda signum, stack_frame: exit(1))
NameError: name 'SIGTERM' is not defined
Oddly, the job still gets done without a problem; but when I use the same code in a more complex script, it gives me the same error and terminates right away for some reason.
On the other hand, if I use it the correct way, signal.SIGTERM, the process goes straight to termination and the exit function is never executed. Why is that?
How can I make this work on Windows and get the outcome that I described above successfully?

What you are trying to do seems a bit complicated (and dangerous from an infosec perspective ;-). I would suggest handling the reload-file-when-updated part of the functionality by adding a controller class that imports the Python script you have now as a module, starts it, and reloads it when it is updated (based on a function return value or another technique). Look this way for inspiration: https://stackoverflow.com/a/1517072/1010991
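For illustration, a minimal sketch of that controller idea; the module name worker and its run() return contract are assumptions of mine, not part of the linked answer:

# Hypothetical layout: your current script's logic lives in worker.py,
# and worker.run() returns True once it has downloaded a new version
# of itself over FTP.
import worker

while True:
    updated = worker.run()
    if updated:
        # Pick up the new code without restarting the interpreter.
        # (reload is a Python 2 builtin; use importlib.reload on Python 3.)
        worker = reload(worker)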
Edit - what about exe?
Another hacky technique for manipulating the file of the currently running program is the shell ping trick. It can be used from any programming language. The trick is to send a shell command that is not executed until after the calling process has terminated. Use ping to create the delay and chain the other commands with &. For your use case it could look like this:
import subprocess
subprocess.Popen("ping -n 2 -w 2000 1.1.1.1 > Nul & del hack.py & rename hack_temp.py hack.py & hack.py ", shell=True)
Edit 2 - Alternative solution to original question
Since Python does not block write access to the currently running script, an alternative approach to the original question would be:
import subprocess

print "hello"
a = open(__file__, "r")
running_script_as_string = a.read()
a.close()
b = open(__file__, "w")
b.write(running_script_as_string)
b.write("\nprint 'updated version of hack'")
b.close()
subprocess.Popen("python hack.py")

Related

Correct way to implement auto-update feature for PyQt

I'm nearly finished with a PyQt application for Windows, which I would like to have automatically update from a ZIP archive hosted on a remote server.
I have most of the update script complete, with it correctly downloading the new source and extracting it. My final step is to stop the software, replace the old source and restart the app.
My question is about which of the following is more appropriate:
- run the updater script through a system call to Python, killing the software from Python;
- run the updater script using a system call to a batch file that kills the main software before overwriting its source;
- import the updater as a module and perform all of its actions in the same process as the main software.
I can provide the scripts if necessary.
UPDATE:
So I've been exploring all the ways to do this including using multiprocessing (which produced a child process that was killed along with the parent) and subprocess.
The latter apparently can run the child process separately, which will allow me to shutdown the main app before extracting the new source. Here's what I have working:
@staticmethod
def install(folder):
    # Stop Pierre, unpack newest version, then restart Pierre.
    try:
        with open('pierre.pid', mode='r') as pid:
            os.kill(int(pid.read()), signal.SIGINT)
        with zipfile.ZipFile(file=folder) as zipped:
            zipped.extractall(path='src')
        try:
            pierre = os.path.join(os.path.abspath(os.getcwd()), 'src/pierre.py')
            exec(pierre)
        except OSError as exc:
            logging.error("Restarting Pierre failed. " + str(exc))
        try:
            os.remove('src.zip')
        except OSError as exc:
            logging.error("Deletion of zip failed. " + str(exc))
    except zipfile.BadZipFile:
        logging.error("Pierre update file is corrupt.")
    except Exception as exc:
        logging.error("Pierre update install failed. " + str(exc))
What's not working:
@staticmethod
def update_process():
    # Begin the update process by spawning the updater script.
    script = 'python ' + os.getcwd() + '\\updater.py'
    subprocess.Popen([script])
The subprocess is producing a FileNotFoundError despite the path working when run manually in command prompt. (The second method is what starts the script, leading to the first method.)
I've got it. Here's what the process spawner looks like now:
@staticmethod
def update_process():
    # Begin the update process by spawning the updater script.
    script = os.path.join(os.getcwd() + '/updater.py')
    script = script.replace('\\', '/')
    subprocess.Popen([sys.executable, script], shell=True)
This launches the separate updater script.

Python subprocess in .exe

I'm creating a Python script that copies files and folders over the network. It's cross-platform, so I make an .exe file using cx_Freeze.
I used the Popen method of the subprocess module.
If I run the .py file it runs as expected, but when I create the .exe, the subprocess is never created on the system.
I've gone through all the documentation of the subprocess module, but I didn't find any solution.
Everything else (I am using Tkinter, which also works fine) works in the .exe except subprocess.
Any idea how I can call a subprocess from the .exe file?
This file calls another .py file:
def start_scheduler_action(self, scheduler_id, scheduler_name, list_index):
    scheduler_detail = db.get_scheduler_detail_using_id(scheduler_id)
    for detail in scheduler_detail:
        source_path = detail[2]
    if not os.path.exists(source_path):
        showerror("Invalid Path", "Please select valid path", parent=self.new_frame)
        return
    self.forms.new_scheduler.start_scheduler_button.destroy()
    # Create stop scheduler button
    if getattr(self.forms.new_scheduler, "stop_scheduler_button", None) == None:
        self.forms.new_scheduler.stop_scheduler_button = tk.Button(
            self.new_frame, text='Stop scheduler', width=10,
            command=lambda: self.stop_scheduler_action(scheduler_id, scheduler_name, list_index))
    self.forms.new_scheduler.stop_scheduler_button.grid(row=11, column=1, sticky=E, pady=10, padx=1)
    scheduler_id = str(scheduler_id)
    # Get python paths
    if sys.platform == "win32":
        proc = subprocess.Popen(['where', "python"], env=None, stdout=subprocess.PIPE)
    else:
        proc = subprocess.Popen(['which', "python"], env=None, stdout=subprocess.PIPE)
    out, err = proc.communicate()
    if err or not out:
        showerror("", "Python not found", parent=self.new_frame)
    else:
        try:
            paths = out.split(os.pathsep)
            # Create python path
            python_path = (paths[len(paths) - 1]).split('\n')[0]
            cmd = os.path.realpath('scheduler.py')
            #cmd = 'scheduler.py'
            if sys.platform == "win32":
                python_path = python_path.splitlines()
            else:
                python_path = python_path
            # Run the scheduler file using scheduler id
            proc = subprocess.Popen([python_path, cmd, scheduler_id], env=None, stdout=subprocess.PIPE)
            message = "Started the scheduler : %s" % (scheduler_name)
            showinfo("", message, parent=self.new_frame)
            # Add process id to scheduler table
            process_id = proc.pid
            #showinfo("pid", process_id, parent=self.new_frame)
            def get_process_id(name):
                child = subprocess.Popen(['pgrep', '-f', name], stdout=subprocess.PIPE, shell=False)
                response = child.communicate()[0]
                return [int(pid) for pid in response.split()]
            print(get_process_id(scheduler_name))
            # Add the process id in database
            self.db.add_process_id(scheduler_id, process_id)
            # Add the is_running status in database
            self.db.add_status(scheduler_id)
        except Exception as e:
            showerror("", e)
And this file is called:
def scheduler_copy():
    date = strftime("%m-%d-%Y %H %M %S", localtime())
    logFile = scheduler_name + "_" + scheduler_id + "_" + date + ".log"
    #file_obj = open(logFile, 'w')
    # Call __init__ method of xcopy file
    xcopy = XCopy(connection_ip, username, password, client_name, server_name, domain_name)
    check = xcopy.connect()
    # Create a log file for the scheduler
    file_obj = open(logFile, 'w')
    if check is False:
        file_obj.write("Problem in connection..Please check connection..!!")
        return
    scheduler_next_run = schedule.next_run()
    scheduler_next_run = "Next run at: " + str(scheduler_next_run)
    # If checkbox_value selected, copy all the files to a new directory
    if checkbox_value == 1:
        new_destination_path = xcopy.create_backup_directory(share_folder, destination_path, date)
    else:
        new_destination_path = destination_path
    # Call backup method for copying data from source to destination
    try:
        xcopy.backup(share_folder, source_path, new_destination_path, file_obj, exclude)
        file_obj.write("Scheduler completed successfully..\n")
    except Exception as e:
        # Write the error message of the scheduler to the log file
        file_obj.write("Scheduler failed to copy all data..\nProblem in connection..Please check connection..!!\n")
        # file_obj.write("Error while scheduling")
        # return
    # Write the details of the scheduler to the log file
    file_obj.write("Total skipped unmodified file:")
    file_obj.write(str(xcopy.skipped_unmodified_count))
    file_obj.write("\n")
    file_obj.write("Total skipped file:")
    file_obj.write(str(xcopy.skipped_file))
    file_obj.write("\n")
    file_obj.write("Total copied file:")
    file_obj.write(str(xcopy.copy_count))
    file_obj.write("\n")
    file_obj.write("Total skipped folder:")
    file_obj.write(str(xcopy.skipped_folder))
    file_obj.write("\n")
    # file_obj.write(scheduler_next_run)
    file_obj.close()
There is some awkwardness in your source code, but I won't spend time on that. For instance, if you want to find the source_path, it's better to use a for loop with break/else:
for detail in scheduler_detail:
    source_path = detail[2]
    break  # found
else:
    # not found: raise an exception
    ...
Some advice:
- Try to separate the user-interface code from the sub-processing; avoid mixing the two.
- Use exceptions and exception handlers.
- If you want portable code, avoid system calls (there is no pgrep on Windows).
- Since your application is packaged in a virtualenv (I assume cx_Freeze does this kind of thing), you have no access to the system-wide Python; you may not even have one on Windows. So you need to use the packaged Python (this is a best practice anyway).
- If you want to call a Python script as a subprocess, that means you have two packaged applications: you need to create an exe for the main application and another for the scheduler.py script. But then it's not easy to communicate with it.
Another solution is to use multiprocessing to spawn a new Python process. Since you don't want to wait for the end of processing (which may be long), you need to create daemon processes. The way to do that is explained in the multiprocessing module.
Basically:
import time
from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.daemon = True
    p.start()
    # let it live and die, don't call: `p.join()`
    time.sleep(1)
Of course, we need to adapt that with your problem.
Here is how I would do that (I removed UI-related code for clarity):
import scheduler

class SchedulerError(Exception):
    pass

class YourClass(object):
    def start_scheduler_action(self, scheduler_id, scheduler_name, list_index):
        scheduler_detail = db.get_scheduler_detail_using_id(scheduler_id)
        for detail in scheduler_detail:
            source_path = detail[2]
            break
        else:
            raise SchedulerError("Invalid Path", "Missing source path")
        if not os.path.exists(source_path):
            raise SchedulerError("Invalid Path", "Please select a valid path")
        p = Process(target=scheduler.scheduler_copy, args=(source_path,))
        p.daemon = True
        p.start()
        self.db.add_process_id(scheduler_id, p.pid)
To check if your process is still running, I recommend you to use psutil. It's really a great tool!
You can define your scheduler.py script like this:

def scheduler_copy(source_path):
    ...
Multiprocessing vs Threading Python
Quoting this answer: https://stackoverflow.com/a/3044626/1513933
The threading module uses threads, the multiprocessing module uses processes. The difference is that threads run in the same memory space, while processes have separate memory. This makes it a bit harder to share objects between processes with multiprocessing. Since threads use the same memory, precautions have to be taken or two threads will write to the same memory at the same time. This is what the global interpreter lock is for.
Here, the advantage of multiprocessing over multithreading is that you can kill (or terminate) a process; you can't kill a thread. You may need psutil for that.
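As a quick illustration of that difference (a minimal sketch, not tied to the scheduler code above):

import time
from multiprocessing import Process

def loop():
    while True:
        time.sleep(1)

if __name__ == '__main__':
    p = Process(target=loop)
    p.start()
    time.sleep(3)
    p.terminate()  # possible with processes; threading.Thread has no equivalent
    p.join()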
This is not the exact solution you are looking for, but the following suggestions should be preferred for two reasons:
- they are the more Pythonic way;
- subprocess is slightly expensive.

Suggestions you can consider:
- Don't use subprocess for fetching the system path. Try checking os.getenv('PATH') instead to see whether Python is on the path (see the sketch after this list). On Windows one has to add the Python path manually, or else you can check directly in Program Files, I guess.
- For checking process IDs you can try psutil. A wonderful answer is provided here: how do I get the process list in Python?
- Calling another script from a Python script does not look cool. Not bad, but I would not prefer it at all.
- In the above code, the line if sys.platform == "win32": has the same value in the if and else conditions, so you don't need a conditional statement there.
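A minimal sketch of the PATH-scanning suggestion (the helper name is mine, and it only covers the common executable names):

import os

def find_python():
    exe = 'python.exe' if os.name == 'nt' else 'python'
    for directory in os.getenv('PATH', '').split(os.pathsep):
        candidate = os.path.join(directory, exe)
        if os.path.isfile(candidate):
            return candidate
    return None  # Python is not on the PATH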
That said, you wrote pretty decent working code, I must say. Keep coding!
If you want to run a subprocess in an exe file, then you can use:

import subprocess

program = 'example'
arguments = '/command'
subprocess.call([program, arguments])

Check if a Python script is already running in Windows

What is the best/easiest way to check whether a specific Python script is already running in Windows?
I have a script that goes over all files in a folder and copies them to another folder (sort to Movie or TV Shows folder).
I want to make sure when the script starts that there isn't another process (of the same script) that is already running, so I wouldn't have issues with 2 scripts that are trying to move the same files.
I have tried creating a file at the start of the script and deleting it when the script finishes, but I ran into problems when the script crashed and/or threw an error.
I know that I can use psutil, but then I will get the process name (python.exe) and I'm looking for a way to distinguish whether the Python process is running my script or another program.
You can use psutil.Process().cmdline() to see the complete command line of a process.
Alternatively, you could lock the files you're working on. See the answer to this question how to do this on ms-windows. The thing with locks is that you have to be careful to remove them, especially when an error occurs.
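For example, a sketch along those lines, assuming your script is named sorter.py (the try/except is there because psutil may deny access to some system processes):

import os
import psutil

def another_instance_running(script_name='sorter.py'):
    for proc in psutil.process_iter():
        if proc.pid == os.getpid():
            continue  # skip ourselves
        try:
            cmdline = proc.cmdline()
        except (psutil.AccessDenied, psutil.NoSuchProcess):
            continue
        if any(script_name in part for part in cmdline):
            return True
    return False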
For Windows OS you can use a timestamp.txt file:
timestamp = 'timestamp.txt'
...
elif windows:
    try:
        new_timestamp = False
        if not os.path.exists(timestamp):
            new_timestamp = True
            try:
                with open(timestamp, 'a') as f_timestamp:
                    f_timestamp.write(str(int_t0))
            except IOError as e:
                out1 = 'M. Cannot open file for writing. Error: %s - %s.' \
                       % (e.logfile, e.strerror) + ' -> Exit code 3'
                logging.error(out1)
                sys.exit(3)
        if not new_timestamp and os.path.exists(timestamp):
            out1 = 'N. Script ' + __file__ + ' is already running.'
            print(out1)
            logging.error(out1)
            sys.exit(0)
    except IOError as e:
        out1 = 'J. Cannot open file. Error: %s - %s.' \
               % (e.filepath, e.strerror) + ' -> Exit code 4'
        logging.error(out1)
...
try:
    f_timestamp.close()
    os.remove(timestamp)
except OSError as e:
    logging.error('B. Cannot delete ' + timestamp +
                  ' Error: %s - %s.' % (e.filename, e.strerror))
Use lockfile. It is cross-platform, uses native OS functions, and is much more robust than any home-brewed lock-file scheme.
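Basic usage looks roughly like this (the lock path is illustrative):

from lockfile import LockFile

lock = LockFile('/tmp/myscript')  # illustrative lock path
with lock:
    # Only one process at a time gets past this point;
    # the lock is released when the with-block exits.
    print('doing the work')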
I have solved it by using an empty dummy file.
At the start of the process I check whether the file exists; if not, I create a new file, run the process, and delete it at the end (even if the process failed). If the file does exist, that means the process is already running, so I terminate the current (new) process.

how to prevent failure of subprocess stopping the main process in python

I wrote a Python script to run a command called "gtdownload" on a bunch of files with multiprocessing. The function "download" is where I am having trouble.
#!/usr/bin/env python
import os, sys, subprocess
from multiprocessing import Pool

def cghub_dnld_file(file1, file2, file3, np):
    <open files>
    <read in lines>
    p = Pool(int(np))
    map_args = [(line.rstrip(), name_lines[i].rstrip(), bar_lines[i].rstrip()) for i, line in enumerate(id_lines)]
    p.map(download_wrapper, map_args)

def download(id, name, bar):
    <check if file has been downloaded, if not download>
    <.....>
    link = "https://cghub.ucsc.edu/cghub/data/analysis/download/" + id
    dnld_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
    subprocess.call(dnld_cmd, shell=True)

def download_wrapper(args):
    return download(*args)

def main():
    <read in arguments>
    <...>
    cghub_dnld_file(file1, file2, file3, threads)

if __name__ == "__main__":
    main()
If this file does not exist in the database, gtdownload would quit which also kills my python job with the following error:
Traceback (most recent call last):
  File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 102, in <module>
    main()
  File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 98, in main
    cghub_dnld_file(file1,file2,file3,threads)
  File "/rsrch1/rists/djiao/bin/cghub_dnld.py", line 22, in cghub_dnld_file
    p.map(download_wrapper,map_args)
  File "/rsrch1/rists/apps/x86_64-rhel6/anaconda/lib/python2.7/multiprocessing/pool.py", line 250, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/rsrch1/rists/apps/x86_64-rhel6/anaconda/lib/python2.7/multiprocessing/pool.py", line 554, in get
    raise self._value
OSError: [Errno 2] No such file or directory
The actual error message from gtdownload :
Welcome to gtdownload-3.8.5a.
Ready to download
Communicating with GT Executive ...
Headers received from the client: 'HTTP/1.1 100 Continue
HTTP/1.1 404 Not Found
Date: Tue, 29 Jul 2014 18:49:57 GMT
Server: Apache/2.2.15 (Red Hat and CGHub)
Strict-Transport-Security: max-age=31536000
X-Powered-By: PHP/5.3.3
Content-Length: 669
Connection: close
Content-Type: text/xml
'
Error: You have requested to download a uuid which either does not exist within the system, or has not yet reached the 'live' state. The requested action will not be performed. Please double check the supplied uuid or contact thelpdesk for further assistance.
I would like the script to skip the file that does not exist and start gtdownload on the next one. I tried piping the stderr of subprocess.call and checking for the "error" keyword, but the script seems to stop at the exact subprocess.call command. The same happens with os.system.
I made a minimal test case without the multiprocessing, and subprocess did not kill the main process at all. It looks like multiprocessing messes things up, although I ran it with 1 thread just for testing.
#!/usr/bin/env python
import subprocess
# This is the id that gtdownload had problems with
id = "df1e073f-4485-4d5f-8659-cd8eaac17329"
link = "https://cghub.ucsc.edu/cghub/data/analysis/download/" + id
dlnd_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
print dlnd_cmd
subprocess.call(dlnd_cmd,shell=True)
print "done"
Clearly multiprocessing conflicts with subprocess.call, but it is not clear to me why.
What is the best way to avoid the failure of subprocess killing the main process?
Handle the exception in some appropriate way and move on, of course.
try:
    subprocess.call(dlnd_cmd)
except OSError as e:
    print 'failed to download: {!r}'.format(e)
However, this may not be appropriate here. The kinds of exceptions that subprocess.call raises are usually not transient things that you can just log and work around; if it's not working now, it will continue to not work forever until you fix the underlying problem (a bug in your script, or gtdownload not being installed right, or whatever).
For example, if the code you showed us is your actual code:
dlnd_cmd = "gtdownload -c ~/.cghub.key --max-children 4 -vv -d " + link + " > gt.out 2>gt.err"
subprocess.call(dlnd_cmd)
… then this is guaranteed to raise an OSError for the reason explained in dano's answer: call (without shell=True) will try to take that entire string—spaces, shell-redirection, etc.—as the name of an executable program to find on your $PATH. And there is no such program. So it will raise an OSError(errno.ENOENT). (Which is exactly what you're seeing.) Just logging that doesn't do you any good; it's a good thing that your entire process is exiting, so you can debug that problem.
subprocess.call should not kill the main process. Something else must be wrong with your script, or your conclusions about the script's behaviour are wrong. Did you try printing some trace output after the subprocess call?
You have to use shell=True to use subprocess.call with a string for an argument (and with shell redirection):
subprocess.call(dlnd_cmd, shell=True)
Without shell=True, subprocess tries to treat your entire command string like a single executable name, which of course doesn't exist, and leads to the No such file or directory exception.
See this answer for more info on when to use a string vs. when to use a sequence with subprocess.
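In short, the two valid ways to spawn the same command (using the command from the question; the uuid in the link is illustrative):

import os
import subprocess

link = "https://cghub.ucsc.edu/cghub/data/analysis/download/some-uuid"  # illustrative

# A single string is handed to the shell, which splits the arguments
# and performs the > and 2> redirections:
subprocess.call("gtdownload -c ~/.cghub.key -vv -d " + link + " > gt.out 2> gt.err",
                shell=True)

# Without shell=True, pass a list: each element is exactly one argument,
# and no shell features (redirection, ~ expansion) are available:
subprocess.call(["gtdownload", "-c", os.path.expanduser("~/.cghub.key"),
                 "-vv", "-d", link])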

Check to see if python script is running

I have a Python daemon running as part of my web app. How can I quickly check (using Python) whether my daemon is running and, if not, launch it?
I want to do it that way to fix any crashes of the daemon, so that the script does not have to be run manually; it will automatically run as soon as it is called and then stay running.
How can I check (using Python) if my script is running?
A technique that is handy on a Linux system is using domain sockets:
import socket
import sys
import time

def get_lock(process_name):
    # Without holding a reference to our socket somewhere it gets garbage
    # collected when the function exits
    get_lock._lock_socket = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        # The null byte (\0) means the socket is created
        # in the abstract namespace instead of being created
        # on the file system itself.
        # Works only in Linux
        get_lock._lock_socket.bind('\0' + process_name)
        print 'I got the lock'
    except socket.error:
        print 'lock exists'
        sys.exit()

get_lock('running_test')
while True:
    time.sleep(3)
It is atomic and avoids the problem of having lock files lying around if your process gets sent a SIGKILL.
You can read in the documentation for socket.close that sockets are automatically closed when garbage collected.
Drop a pidfile somewhere (e.g. /tmp). Then you can check to see if the process is running by checking to see if the PID in the file exists. Don't forget to delete the file when you shut down cleanly, and check for it when you start up.
#!/usr/bin/env python
import os
import sys

pid = str(os.getpid())
pidfile = "/tmp/mydaemon.pid"

if os.path.isfile(pidfile):
    print "%s already exists, exiting" % pidfile
    sys.exit()
file(pidfile, 'w').write(pid)
try:
    pass  # Do some actual work here
finally:
    os.unlink(pidfile)
Then you can check to see if the process is running by checking to see if the contents of /tmp/mydaemon.pid are an existing process. Monit (mentioned above) can do this for you, or you can write a simple shell script to check it for you using the return code from ps.
ps up `cat /tmp/mydaemon.pid ` >/dev/null && echo "Running" || echo "Not running"
For extra credit, you can use the atexit module to ensure that your program cleans up its pidfile on any normal exit or unhandled exception (though not if the process is forcibly killed).
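For instance, a sketch of that idea, reusing the pidfile path from the snippet above:

import atexit
import os

pidfile = "/tmp/mydaemon.pid"

def cleanup():
    try:
        os.unlink(pidfile)
    except OSError:
        pass  # already removed

atexit.register(cleanup)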
The pid library can do exactly this.
from pid import PidFile

with PidFile():
    do_something()
It will also automatically handle the case where the pidfile exists but the process is not running.
My solution is to check for the process name and command-line arguments. It is tested on Windows and Ubuntu Linux.

import psutil
import os

def is_running(script):
    for q in psutil.process_iter():
        if q.name().startswith('python'):
            if len(q.cmdline()) > 1 and script in q.cmdline()[1] and q.pid != os.getpid():
                print("'{}' Process is already running".format(script))
                return True
    return False

if not is_running("test.py"):
    n = input("What is Your Name? ")
    print("Hello " + n)
Of course the example from Dan will not work as it should.
Indeed, if the script crashes, raises an exception, or does not clean up the pid file, the script will be run multiple times.
I suggest the following, based on another website:
This checks whether a lock file already exists:
#!/usr/bin/env python
import os
import sys

if os.access(os.path.expanduser("~/.lockfile.vestibular.lock"), os.F_OK):
    # If the lockfile is already there then check the PID number
    # in the lock file
    pidfile = open(os.path.expanduser("~/.lockfile.vestibular.lock"), "r")
    pidfile.seek(0)
    old_pid = pidfile.readline()
    # Now we check the PID from lock file matches to the current
    # process PID
    if os.path.exists("/proc/%s" % old_pid):
        print "You already have an instance of the program running"
        print "It is running as process %s," % old_pid
        sys.exit(1)
    else:
        print "File is there but the program is not running"
        print "Removing lock file for the: %s as it can be there because of the program last time it was run" % old_pid
        os.remove(os.path.expanduser("~/.lockfile.vestibular.lock"))
This is the part of the code where we put the PID into the lock file:
pidfile = open(os.path.expanduser("~/.lockfile.vestibular.lock"), "w")
pidfile.write("%s" % os.getpid())
pidfile.close()
This code checks the value of the pid against the existing running processes, avoiding double execution.
I hope it helps.
There are very good packages for restarting processes on UNIX. One that has a great tutorial about building and configuring it is monit. With some tweaking you can have rock-solid, proven technology keeping your daemon up.
I came across this old question looking for a solution myself.
Use psutil:
import psutil
import sys
from subprocess import Popen

for process in psutil.process_iter():
    if process.cmdline() == ['python', 'your_script.py']:
        sys.exit('Process found: exiting.')

print('Process not found: starting it.')
Popen(['python', 'your_script.py'])
There are a myriad of options. One method is using system calls, or Python libraries that perform such calls for you. The other is simply to spawn a process like:

ps ax | grep processName

and parse the output. Many people choose this approach; it isn't necessarily a bad approach in my view.
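A sketch of that spawn-and-parse approach (it matches by substring, so similar names can produce false positives):

import subprocess

def process_running(name):
    # Python 2 returns str here; on Python 3, decode the bytes first.
    output = subprocess.check_output(['ps', 'ax'])
    return any(name in line for line in output.splitlines())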
I'm a big fan of Supervisor for managing daemons. It's written in Python, so there are plenty of examples of how to interact with or extend it from Python. For your purposes the XML-RPC process control API should work nicely.
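For example, a rough sketch of checking and starting a program through supervisord's XML-RPC interface; it assumes the inet_http_server is enabled on port 9001 and that a program named mydaemon is configured (both are assumptions):

import xmlrpclib  # xmlrpc.client on Python 3

server = xmlrpclib.ServerProxy('http://localhost:9001/RPC2')
info = server.supervisor.getProcessInfo('mydaemon')
if info['statename'] != 'RUNNING':
    server.supervisor.startProcess('mydaemon')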
Try this other version
def checkPidRunning(pid):
    '''Check for the existence of a unix pid.'''
    try:
        os.kill(pid, 0)
    except OSError:
        return False
    else:
        return True

# Entry point
if __name__ == '__main__':
    pid = str(os.getpid())
    pidfile = os.path.join("/", "tmp", __program__ + ".pid")

    if os.path.isfile(pidfile) and checkPidRunning(int(file(pidfile, 'r').readlines()[0])):
        print "%s already exists, exiting" % pidfile
        sys.exit()
    else:
        file(pidfile, 'w').write(pid)

    # Do some actual work here
    main()

    os.unlink(pidfile)
Rather than developing your own PID file solution (which has more subtleties and corner cases than you might think), have a look at supervisord -- this is a process control system that makes it easy to wrap job control and daemon behaviors around an existing Python script.
The other answers are great for things like cron jobs, but if you're running a daemon you should monitor it with something like daemontools.
ps ax | grep processName

Note that if you debug your script in PyCharm, the process is launched as something like:

pydevd.py --multiproc --client 127.0.0.1 --port 33882 --file processName

so grepping for the script name may not behave as expected.
try this:
#!/usr/bin/env python
import os, sys, atexit

try:
    # Set PID file
    def set_pid_file():
        pid = str(os.getpid())
        f = open('myCode.pid', 'w')
        f.write(pid)
        f.close()

    def goodby():
        pid = str('myCode.pid')
        os.remove(pid)

    atexit.register(goodby)
    set_pid_file()
    # Place your code here
except KeyboardInterrupt:
    sys.exit(0)
Here is more useful code (which also checks that it is exactly Python executing the script):
#!/usr/bin/env python
import os
from sys import exit

def checkPidRunning(pid):
    global script_name
    if pid < 1:
        print "Incorrect pid number!"
        exit()
    try:
        os.kill(pid, 0)
    except OSError:
        print "Abnormal termination of previous process."
        return False
    else:
        ps_command = "ps -o command= %s | grep -Eq 'python .*/%s'" % (pid, script_name)
        process_exist = os.system(ps_command)
        if process_exist == 0:
            return True
        else:
            print "Process with pid %s is not a Python process. Continue..." % pid
            return False

if __name__ == '__main__':
    script_name = os.path.basename(__file__)
    pid = str(os.getpid())
    pidfile = os.path.join("/", "tmp/", script_name + ".pid")
    if os.path.isfile(pidfile):
        print "Warning! Pid file %s existing. Checking for process..." % pidfile
        r_pid = int(file(pidfile, 'r').readlines()[0])
        if checkPidRunning(r_pid):
            print "Python process with pid = %s is already running. Exit!" % r_pid
            exit()
        else:
            file(pidfile, 'w').write(pid)
    else:
        file(pidfile, 'w').write(pid)

    # main program
    ....
    ....

    os.unlink(pidfile)
Here, the line:

ps_command = "ps -o command= %s | grep -Eq 'python .*/%s'" % (pid, script_name)

returns 0 if grep is successful, i.e. if a "python" process is currently running with the name of your script as a parameter.
A simple example, if you only want to check whether a process name exists or not:

import os

def pname_exists(inp):
    os.system('ps -ef > /tmp/psef')
    lines = open('/tmp/psef', 'r').read().split('\n')
    res = [i for i in lines if inp in i]
    return True if res else False
Result:
In [21]: pname_exists('syslog')
Out[21]: True
In [22]: pname_exists('syslog_')
Out[22]: False
I was looking for an answer to this and, in my case, a very easy and very good solution came to mind (a false positive is not possible here, I guess: how can the timestamp in the TXT file be updated if the program doesn't do it?).
Just keep writing the current timestamp to a TXT file at some interval, depending on your needs (here, every half hour was perfect).
If the timestamp in the TXT file is outdated relative to the current time when you check, then there was a problem with the program and it should be restarted, or whatever else you prefer to do.
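A minimal sketch of that heartbeat idea (the file name and interval are illustrative):

import time

HEARTBEAT_FILE = 'heartbeat.txt'
INTERVAL = 30 * 60  # half an hour, as in the description above

def beat():
    # Called periodically by the running program.
    with open(HEARTBEAT_FILE, 'w') as f:
        f.write(str(time.time()))

def looks_alive():
    # Called by whoever supervises the program.
    try:
        with open(HEARTBEAT_FILE) as f:
            last = float(f.read())
    except (IOError, ValueError):
        return False
    return time.time() - last < 2 * INTERVAL  # allow one missed beat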
A portable solution that relies on multiprocessing.shared_memory:
import atexit
from multiprocessing import shared_memory

_ensure_single_process_store = {}

def ensure_single_process(name: str):
    if name in _ensure_single_process_store:
        return
    try:
        shm = shared_memory.SharedMemory(name='ensure_single_process__' + name,
                                         create=True,
                                         size=1)
    except FileExistsError:
        print(f"{name} is already running!")
        raise
    _ensure_single_process_store[name] = shm
    atexit.register(shm.unlink)
Usually you wouldn't have to use atexit, but sometimes it helps to clean up upon abnormal exit.
Consider the following example to solve your problem:
#!/usr/bin/python
# -*- coding: latin-1 -*-
import os, sys, time, signal

def termination_handler(signum, frame):
    global running
    global pidfile
    print 'You have requested to terminate the application...'
    sys.stdout.flush()
    running = 0
    os.unlink(pidfile)

running = 1
signal.signal(signal.SIGINT, termination_handler)

pid = str(os.getpid())
pidfile = '/tmp/' + os.path.basename(__file__).split('.')[0] + '.pid'

if os.path.isfile(pidfile):
    print "%s already exists, exiting" % pidfile
    sys.exit()
else:
    file(pidfile, 'w').write(pid)

# Do some actual work here
while running:
    time.sleep(10)
I suggest this script because it can be executed only once.
Using bash to look for a process with the current script's name. No extra file.
import commands
import os
import time
import sys

def stop_if_already_running():
    script_name = os.path.basename(__file__)
    l = commands.getstatusoutput("ps aux | grep -e '%s' | grep -v grep | awk '{print $2}'| awk '{print $2}'" % script_name)
    if l[1]:
        sys.exit(0)
To test, add:

stop_if_already_running()
print "running normally"
while True:
    time.sleep(3)
This is what I use in Linux to avoid starting a script if already running:
import os
import sys

script_name = os.path.basename(__file__)
pidfile = os.path.join("/tmp", os.path.splitext(script_name)[0]) + ".pid"

def create_pidfile():
    if os.path.exists(pidfile):
        with open(pidfile, "r") as _file:
            last_pid = int(_file.read())
        # Checking if process is still running
        last_process_cmdline = "/proc/%d/cmdline" % last_pid
        if os.path.exists(last_process_cmdline):
            with open(last_process_cmdline, "r") as _file:
                cmdline = _file.read()
            if script_name in cmdline:
                raise Exception("Script already running...")
    with open(pidfile, "w") as _file:
        pid = str(os.getpid())
        _file.write(pid)

def main():
    """Your application logic goes here"""

if __name__ == "__main__":
    create_pidfile()
    main()
This approach works well without any dependency on an external module.
