I'm creating a make-release script in which the full application is prepared for deployment on the machine where it is needed. It should be possible to build it on a standard Linux build machine (with Docker).
The thing is that I'm struggling to find a way to make a single Python script independent of the build machine.
"""Return hash of the key, used for prometheus web-portal access configuration"""
import sys

try:
    import bcrypt
except ImportError as e:
    print(e)
    sys.exit(1)

args = sys.argv
if len(args) == 2:
    try:
        PASSWORD = str(args[1])
        hashed_password = bcrypt.hashpw(PASSWORD.encode("utf-8"), bcrypt.gensalt())
        print(hashed_password.decode())
        sys.exit(0)
    except (IndexError, SystemError, ValueError, RuntimeError, UnicodeError) as e:
        print(e)
        sys.exit(1)
else:
    print('not enough arguments given')
    sys.exit(1)
Depending on the return code, the printed output is either the hash or an error message.
It is therefore important to get the return code, so the caller can handle the script's output differently.
# get hashed key for prometheus
HASHED_SECRET=$(python3 src/generate_prometheus_passhash.py "${PROMETHEUS_WEB_PASS}")
RETURN_CODE=$?
if [ ${RETURN_CODE} -eq 0 ]; then
    <Use hash>
else
    echo "error: ${HASHED_SECRET}"
    exit 1
fi
Does anyone have a good solution for this?
EDIT:
In summary:
The Python script has to run in a Docker container to avoid library dependencies on the host machine.
I want to get both the return code (1 or 0 from the script) and the printed output (stdout) out of the Docker container in which the Python script runs.
The calls should be done via bash.
You could use || to set a placeholder value in HASH when the return code is not 0, and then check the value of $HASH with an if statement:
script.py
#!/usr/bin/env python3
import sys

args = sys.argv
if len(args) < 2:
    sys.exit(1)
else:
    print("ABCDEFG")
    sys.exit(0)
test.sh
#!/bin/sh
echo "Trying something that will fail"
HASH=$(python3 script.py || echo "BAD-HASH")
echo "HASH: $HASH"
echo "Trying something that will work"
HASH=$(python3 script.py a b c || echo "BAD-HASH")
echo "HASH: $HASH"
Output
$ ./test.sh
Trying something that will fail
HASH: BAD-HASH
Trying something that will work
HASH: ABCDEFG
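Applied to the Docker part of the question: docker run exits with the status of the command it runs inside the container, so the same command-substitution pattern works unchanged. A minimal sketch (the image name in the comment is a placeholder; a plain sh -c stand-in is used so the snippet runs anywhere):

```shell
#!/bin/sh
# With docker this would be (hypothetical image name):
#   HASH=$(docker run --rm build-image \
#       python3 /src/generate_prometheus_passhash.py "$PROMETHEUS_WEB_PASS")
# Stand-in command demonstrating that $? after a command substitution
# is the exit status of the substituted command:
HASH=$(sh -c 'echo my-hash; exit 0')
RC=$?
if [ "$RC" -eq 0 ]; then
    echo "hash: $HASH"
else
    echo "error: $HASH"
fi
```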
I'm spawning multiple CMD windows from a given Python file, using subprocess.Popen, each running a script that ends with an input(). The problem is that if any exception is raised in the code, the window just closes and I can't see what happened.
I want it either to stay open no matter the error, so I can see it, or to get the error back in the main window, like "this script failed to run because of ...".
I'm running this on Windows:
import sys
import platform
from subprocess import Popen,PIPE
pipelines = [("Name1","path1"),
("Name2","path2")]
# define a command that starts new terminal
if platform.system() == "Windows":
    new_window_command = "cmd.exe /c start".split()
else:  # XXX this can be made more portable
    new_window_command = "x-terminal-emulator -e".split()

processes = []
for i in range(len(pipelines)):
    # open new consoles, display messages
    echo = [sys.executable, "-c",
            "import sys; print(sys.argv[1]); from {} import {}; obj = {}(); obj.run(); input('Press Enter..')".format(
                pipelines[i][1], pipelines[i][0], pipelines[i][0])]
    processes.append(Popen(new_window_command + echo + [pipelines[i][0]]))

for proc in processes:
    proc.wait()
To see the error, wrap the relevant code fragment in try / except:
try:
    ...
except Exception as e:
    print(e)
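A sketch of that idea applied to the question's spawned command: build the -c payload so the real work happens inside try/except, the traceback is printed, and only then does input() run, so the window stays open on failure (the module and class names are placeholders from the question):

```python
import sys

# Multi-line -c payload: run the pipeline, print the traceback on any
# exception, and keep the console open with input() either way.
WRAPPER = (
    "import traceback\n"
    "try:\n"
    "    from {mod} import {cls}\n"
    "    {cls}().run()\n"
    "except Exception:\n"
    "    traceback.print_exc()\n"
    "input('Press Enter..')\n"
)

def build_command(class_name, module_name):
    return [sys.executable, "-c",
            WRAPPER.format(mod=module_name, cls=class_name)]

cmd = build_command("Name1", "path1")  # placeholder names from the question
```

Pass new_window_command + cmd to Popen exactly as before.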
I'm trying to find and kill all other running Python instances of the same script. I found an edge case where the path is not in psutil's cmdline list: when the process is started with ./myscript.py instead of python ./myscript.py.
the script's content is, note the shebang:
#!/bin/python
import os
import psutil
import sys
import time
for proc in psutil.process_iter():
    if "python" in proc.name():
        print("name", proc.name())
        script_path = sys.argv[0]
        proc_script_path = sys.argv[0]
        if len(proc.cmdline()) > 1:
            proc_script_path = proc.cmdline()[1]
        else:
            print("there's no path in cmdline")
        if script_path.startswith("." + os.sep) or script_path.startswith(".." + os.sep):
            script_path = os.path.normpath(os.path.join(os.getcwd(), script_path))
        if proc_script_path.startswith("." + os.sep) or proc_script_path.startswith(".." + os.sep):
            proc_script_path = os.path.normpath(os.path.join(proc.cwd(), proc_script_path))
        print("script_path", script_path)
        print("proc_script_path", proc_script_path)
        print("my pid", os.getpid())
        print("other pid", proc.pid)
        if script_path == proc_script_path and os.getpid() != proc.pid:
            print("terminating instance ", proc.pid)
            proc.kill()

time.sleep(300)
how can I get the script path of a python process when it's not in psutil's cmdline?
When invoking a Python script like this, the check if 'python' in proc.name() is the problem. For such scripts it will not show python or python3, but the script name instead. Try the following:
import psutil

for proc in psutil.process_iter():
    print('Script name: {}, cmdline: {}'.format(proc.name(), proc.cmdline()))

You should see something like:
Script name: myscript.py, cmdline: ['/usr/bin/python3', './myscript.py']
Hope this helps.
When the process is started as ./relative/or/absolute/path/to/script.py rather than python /relative/or/absolute/path/to/script.py,
psutil.Process.name() is script.py, not python.
To get the list of process paths running your script.py:
ps -eo pid,args|awk '/script.py/ && $2 != "awk" {print}'
To get the list of process paths running your script.py but not having psutil in the path, replace script.py and psutil in the following script:
ps -eo pid,args|awk '! /psutil/ && /script.py/ && $2 != "awk" {print}'
explanation:
ps -eo pid,args lists all processes, printing the process id and command line (args).
! /psutil/ matches all command lines not containing psutil.
&& /script.py/ and matches all command lines containing script.py.
&& $2 != "awk" and excludes this awk process itself.
{print} outputs the matched lines.
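The psutil equivalent, as a sketch: rather than filtering on the process name (which misses the ./script.py case), scan every process's cmdline for the script's basename, which covers both invocation styles:

```python
import os
import psutil

def find_script_pids(script_path):
    """Return pids of other processes whose command line mentions the script."""
    target = os.path.basename(script_path)
    pids = []
    for proc in psutil.process_iter():
        try:
            cmdline = proc.cmdline()
        except (psutil.NoSuchProcess, psutil.AccessDenied, psutil.ZombieProcess):
            continue  # process vanished or is off-limits; skip it
        # matches both "python ./myscript.py" and "./myscript.py"
        if any(os.path.basename(arg) == target for arg in cmdline):
            if proc.pid != os.getpid():
                pids.append(proc.pid)
    return pids
```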
When I run vi --version, I see VIM - Vi IMproved 7.3 and yet when I run the following script, it prints that I have version 7.2. Why?
The pathname used is vi. which vi prints /usr/local/bin/vim, and that binary's --version reports 7.3.
which gvim prints /usr/bin/gvim, and that one's --version prints a newer version of Vim as well.
echo $EDITOR prints vi.
#!/usr/bin/python
import os
import sys
import os.path
import errno
import subprocess
import tempfile

def exec_vimcmd(commands, pathname='', error_stream=None):
    """Run a list of Vim 'commands' and return the commands output."""
    try:
        perror = error_stream.write
    except AttributeError:
        perror = sys.stderr.write
    if not pathname:
        pathname = os.environ.get('EDITOR', 'gvim')
    args = [pathname, '-u', 'NONE', '-esX', '-c', 'set cpo&vim']
    fd, tmpname = tempfile.mkstemp(prefix='runvimcmd', suffix='.clewn')
    commands.insert(0, 'redir! >%s' % tmpname)
    commands.append('quit')
    for cmd in commands:
        args.extend(['-c', cmd])
    output = f = None
    try:
        try:
            print "args are"
            print args
            subprocess.Popen(args).wait()
            f = os.fdopen(fd)
            output = f.read()
            print "output is"
            print output
            print "that's the end of the output"
        except (OSError, IOError), err:
            if isinstance(err, OSError) and err.errno == errno.ENOENT:
                perror("Failed to run '%s' as Vim.\n" % args[0])
                perror("Please set the EDITOR environment variable or run "
                       "'pyclewn --editor=/path/to/(g)vim'.\n\n")
            else:
                perror("Failed to run Vim as:\n'%s'\n\n" % str(args))
                perror("Error: %s\n" % err)
            raise
    finally:
        if f is not None:
            f.close()

exec_vimcmd(['version'])
The args printed are
['vi', '-u', 'NONE', '-esX', '-c', 'set cpo&vim', '-c', 'redir! >/var/folders/86/062qtcyx2rxbnmn8mtpkyghs0r0r_z/T/runvimcmducLQCe.clewn', '-c', 'version', '-c', 'quit']
Find out what value is being assigned to pathname, and see if it agrees with which vim or which gvim entered at the command prompt. Your script is looking at your $EDITOR environment variable, but when you run (g)vim from the command line it searches your $PATH to find the first hit. For example, you may be running /opt/local/bin/vim from the CLI, but /usr/bin/vim from your script.
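A quick way to compare the two from a shell (a sketch; adjust the editor names to your system):

```shell
#!/bin/sh
# What the script will pick up:
echo "EDITOR=${EDITOR:-unset}"
# What the shell actually runs at the prompt (first match in PATH wins):
command -v vi || echo "vi not found"
command -v gvim || echo "gvim not found"
```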
I have a Python daemon running as part of my web app. How can I quickly check (using Python) whether my daemon is running and, if not, launch it?
I want to do it that way to recover from any crashes of the daemon, so the script does not have to be run manually; it will automatically run as soon as it is called and then stay running.
How can I check (using Python) if my script is running?
A technique that is handy on a Linux system is using domain sockets:
import socket
import sys
import time
def get_lock(process_name):
    # Without holding a reference to our socket somewhere it gets garbage
    # collected when the function exits
    get_lock._lock_socket = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    try:
        # The null byte (\0) means the socket is created
        # in the abstract namespace instead of being created
        # on the file system itself.
        # Works only in Linux
        get_lock._lock_socket.bind('\0' + process_name)
        print('I got the lock')
    except socket.error:
        print('lock exists')
        sys.exit()

get_lock('running_test')
while True:
    time.sleep(3)
It is atomic and avoids the problem of having lock files lying around if your process gets sent a SIGKILL
You can read in the documentation for socket.close that sockets are automatically closed when garbage collected.
Drop a pidfile somewhere (e.g. /tmp). Then you can check to see if the process is running by checking to see if the PID in the file exists. Don't forget to delete the file when you shut down cleanly, and check for it when you start up.
#!/usr/bin/env python
import os
import sys

pid = str(os.getpid())
pidfile = "/tmp/mydaemon.pid"

if os.path.isfile(pidfile):
    print("%s already exists, exiting" % pidfile)
    sys.exit()
open(pidfile, 'w').write(pid)
try:
    # Do some actual work here
    pass
finally:
    os.unlink(pidfile)
Then you can check to see if the process is running by checking to see if the contents of /tmp/mydaemon.pid are an existing process. Monit (mentioned above) can do this for you, or you can write a simple shell script to check it for you using the return code from ps.
ps up `cat /tmp/mydaemon.pid ` >/dev/null && echo "Running" || echo "Not running"
For extra credit, you can use the atexit module to ensure that your program cleans up its pidfile under any circumstances (when killed, exceptions raised, etc.).
The pid library can do exactly this.
from pid import PidFile

with PidFile():
    do_something()
It will also automatically handle the case where the pidfile exists but the process is not running.
My solution is to check the process and command line arguments.
Tested on Windows and Ubuntu Linux.
import psutil
import os

def is_running(script):
    for q in psutil.process_iter():
        if q.name().startswith('python'):
            if len(q.cmdline()) > 1 and script in q.cmdline()[1] and q.pid != os.getpid():
                print("'{}' Process is already running".format(script))
                return True
    return False

if not is_running("test.py"):
    n = input("What is Your Name? ")
    print("Hello " + n)
Of course the example from Dan will not work as it should.
Indeed, if the script crashes, raises an exception, or does not clean up its pid file, the script will be run multiple times.
I suggest the following, based on another website:
This checks whether a lock file already exists:
#!/usr/bin/env python
import os
import sys

if os.access(os.path.expanduser("~/.lockfile.vestibular.lock"), os.F_OK):
    # if the lockfile is already there then check the PID number
    # in the lock file
    pidfile = open(os.path.expanduser("~/.lockfile.vestibular.lock"), "r")
    pidfile.seek(0)
    old_pid = pidfile.readline().strip()
    # Now we check whether the PID from the lock file matches a
    # currently running process
    if os.path.exists("/proc/%s" % old_pid):
        print("You already have an instance of the program running")
        print("It is running as process %s" % old_pid)
        sys.exit(1)
    else:
        print("File is there but the program is not running")
        print("Removing lock file for %s as it was probably left behind by the last run" % old_pid)
        os.remove(os.path.expanduser("~/.lockfile.vestibular.lock"))
This is the part of the code where we put the PID in the lock file:
pidfile = open(os.path.expanduser("~/.lockfile.vestibular.lock"), "w")
pidfile.write("%s" % os.getpid())
pidfile.close()
This code checks the stored pid against the currently running processes, avoiding double execution.
I hope it will help.
There are very good packages for restarting processes on UNIX. One with a great tutorial on building and configuring it is monit. With some tweaking you can have rock-solid, proven technology keeping your daemon up.
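As an illustration, a minimal monit stanza for the pidfile approach might look like this (the daemon name and paths are hypothetical):

```
check process mydaemon with pidfile /tmp/mydaemon.pid
    start program = "/usr/bin/python3 /opt/app/mydaemon.py"
    stop program = "/bin/kill `cat /tmp/mydaemon.pid`"
```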
Came across this old question looking for solution myself.
Use psutil:
import psutil
import sys
from subprocess import Popen
for process in psutil.process_iter():
    if process.cmdline() == ['python', 'your_script.py']:
        sys.exit('Process found: exiting.')

print('Process not found: starting it.')
Popen(['python', 'your_script.py'])
There are a myriad of options. One method is to use system calls, or Python libraries that perform such calls for you. The other is simply to spawn a process like:
ps ax | grep processName
and parse the output. Many people choose this approach, and it isn't necessarily a bad one in my view.
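A sketch of that parsing step in Python, using only the standard library (the field layout assumed here is the usual ps ax output, with the pid in the first column):

```python
import os
import subprocess

def pids_matching(pattern):
    """Return pids of other processes whose ps line contains `pattern`."""
    out = subprocess.run(["ps", "ax"], capture_output=True, text=True).stdout
    pids = []
    for line in out.splitlines()[1:]:  # skip the header line
        if pattern in line:
            pid = int(line.split(None, 1)[0])
            if pid != os.getpid():  # don't count ourselves
                pids.append(pid)
    return pids
```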
I'm a big fan of Supervisor for managing daemons. It's written in Python, so there are plenty of examples of how to interact with or extend it from Python. For your purposes the XML-RPC process control API should work nicely.
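For example, supervisord exposes its process-control methods over XML-RPC; assuming the default inet_http_server on localhost:9001, a check-and-start sketch could look like this (the program name mydaemon is a placeholder):

```python
from xmlrpc.client import ServerProxy

# No connection is made until a method is actually called.
server = ServerProxy('http://localhost:9001/RPC2')

def ensure_running(name):
    # getProcessInfo / startProcess are part of supervisor's XML-RPC API
    info = server.supervisor.getProcessInfo(name)
    if info['statename'] != 'RUNNING':
        server.supervisor.startProcess(name)
```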
Try this other version:
import os
import sys

def checkPidRunning(pid):
    '''Check for the existence of a unix pid.'''
    try:
        os.kill(pid, 0)
    except OSError:
        return False
    else:
        return True

# Entry point
if __name__ == '__main__':
    pid = str(os.getpid())
    # __program__ is assumed to be defined elsewhere as the program's name
    pidfile = os.path.join("/", "tmp", __program__ + ".pid")

    if os.path.isfile(pidfile) and checkPidRunning(int(open(pidfile, 'r').readlines()[0])):
        print("%s already exists, exiting" % pidfile)
        sys.exit()
    else:
        open(pidfile, 'w').write(pid)

    # Do some actual work here
    main()

    os.unlink(pidfile)
Rather than developing your own PID file solution (which has more subtleties and corner cases than you might think), have a look at supervisord -- this is a process control system that makes it easy to wrap job control and daemon behaviors around an existing Python script.
The other answers are great for things like cron jobs, but if you're running a daemon you should monitor it with something like daemontools.
ps ax | grep processName
Note: if you debug your script in PyCharm, it will always exit, because under the debugger the process is actually run as:
pydevd.py --multiproc --client 127.0.0.1 --port 33882 --file processName
Try this:
#!/usr/bin/env python
import os, sys, atexit

try:
    # Set PID file
    def set_pid_file():
        pid = str(os.getpid())
        with open('myCode.pid', 'w') as f:
            f.write(pid)

    def goodbye():
        os.remove('myCode.pid')

    atexit.register(goodbye)
    set_pid_file()

    # Place your code here
except KeyboardInterrupt:
    sys.exit(0)
Here is more useful code (which also checks that it is exactly python executing the script):
#!/usr/bin/env python
import os
from sys import exit

def checkPidRunning(pid):
    global script_name
    if pid < 1:
        print("Incorrect pid number!")
        exit()
    try:
        os.kill(pid, 0)
    except OSError:
        print("Abnormal termination of previous process.")
        return False
    else:
        ps_command = "ps -o command= %s | grep -Eq 'python .*/%s'" % (pid, script_name)
        process_exist = os.system(ps_command)
        if process_exist == 0:
            return True
        else:
            print("Process with pid %s is not a Python process. Continue..." % pid)
            return False

if __name__ == '__main__':
    script_name = os.path.basename(__file__)
    pid = str(os.getpid())
    pidfile = os.path.join("/tmp", script_name + ".pid")
    if os.path.isfile(pidfile):
        print("Warning! Pid file %s exists. Checking for process..." % pidfile)
        r_pid = int(open(pidfile, 'r').readlines()[0])
        if checkPidRunning(r_pid):
            print("Python process with pid = %s is already running. Exit!" % r_pid)
            exit()
        else:
            open(pidfile, 'w').write(pid)
    else:
        open(pidfile, 'w').write(pid)

    # main program
    ...

    os.unlink(pidfile)
The key line is:
ps_command = "ps -o command= %s | grep -Eq 'python .*/%s'" % (pid, script_name)
It returns 0 if grep is successful, i.e. a "python" process is currently running with the name of your script as a parameter.
A simple example if you only are looking for a process name exist or not:
import os

def pname_exists(inp):
    os.system('ps -ef > /tmp/psef')
    lines = open('/tmp/psef', 'r').read().split('\n')
    res = [i for i in lines if inp in i]
    return bool(res)
Result:
In [21]: pname_exists('syslog')
Out[21]: True
In [22]: pname_exists('syslog_')
Out[22]: False
I was looking for an answer to this and came up with what is, in my opinion, a very easy and good solution (there can be no false positive here, I guess: how would the timestamp be updated if the program isn't doing it?):
just keep writing the current timestamp to a text file at some interval, depending on your needs (for me, each half hour was perfect).
If, when you check, the timestamp in the file is outdated relative to the current time, then something went wrong with the program and it should be restarted, or whatever else you prefer to do.
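A minimal sketch of that heartbeat idea (the file path and the half-hour threshold are just example choices):

```python
import os
import time

HEARTBEAT_FILE = "/tmp/myapp.heartbeat"   # example path
MAX_AGE_SECONDS = 30 * 60                 # "outdated" after half an hour

def beat():
    """Called periodically by the monitored program."""
    with open(HEARTBEAT_FILE, "w") as f:
        f.write(str(time.time()))

def is_alive():
    """Called by the watchdog: False means restart (or whatever you prefer)."""
    try:
        age = time.time() - os.path.getmtime(HEARTBEAT_FILE)
    except OSError:  # no heartbeat written yet
        return False
    return age < MAX_AGE_SECONDS
```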
A portable solution that relies on multiprocessing.shared_memory:
import atexit
from multiprocessing import shared_memory

_ensure_single_process_store = {}

def ensure_single_process(name: str):
    if name in _ensure_single_process_store:
        return
    try:
        shm = shared_memory.SharedMemory(name='ensure_single_process__' + name,
                                         create=True,
                                         size=1)
    except FileExistsError:
        print(f"{name} is already running!")
        raise
    _ensure_single_process_store[name] = shm
    atexit.register(shm.unlink)
Usually you wouldn't have to use atexit, but sometimes it helps to clean up upon abnormal exit.
Consider the following example to solve your problem:
#!/usr/bin/python
# -*- coding: latin-1 -*-
import os, sys, time, signal

def termination_handler(signum, frame):
    global running
    global pidfile
    print('You have requested to terminate the application...')
    sys.stdout.flush()
    running = 0
    os.unlink(pidfile)

running = 1
signal.signal(signal.SIGINT, termination_handler)

pid = str(os.getpid())
pidfile = '/tmp/' + os.path.basename(__file__).split('.')[0] + '.pid'

if os.path.isfile(pidfile):
    print("%s already exists, exiting" % pidfile)
    sys.exit()
else:
    open(pidfile, 'w').write(pid)

# Do some actual work here
while running:
    time.sleep(10)
I suggest this script because it ensures only one instance can be executed at a time.
Using bash to look for a process with the current script's name. No extra file.
import os
import subprocess  # replaces the Python 2-only commands module
import sys
import time

def stop_if_already_running():
    script_name = os.path.basename(__file__)
    # pids of all processes whose command line mentions this script
    status, output = subprocess.getstatusoutput(
        "ps aux | grep -e '%s' | grep -v grep | awk '{print $2}'" % script_name)
    # exclude our own pid, otherwise the check would always trigger
    pids = [p for p in output.split() if p.isdigit() and int(p) != os.getpid()]
    if pids:
        sys.exit(0)
To test, add
stop_if_already_running()
print("running normally")
while True:
    time.sleep(3)
This is what I use in Linux to avoid starting a script if already running:
import os
import sys

script_name = os.path.basename(__file__)
pidfile = os.path.join("/tmp", os.path.splitext(script_name)[0]) + ".pid"

def create_pidfile():
    if os.path.exists(pidfile):
        with open(pidfile, "r") as _file:
            last_pid = int(_file.read())

        # Checking if process is still running
        last_process_cmdline = "/proc/%d/cmdline" % last_pid
        if os.path.exists(last_process_cmdline):
            with open(last_process_cmdline, "r") as _file:
                cmdline = _file.read()
            if script_name in cmdline:
                raise Exception("Script already running...")

    with open(pidfile, "w") as _file:
        pid = str(os.getpid())
        _file.write(pid)

def main():
    """Your application logic goes here"""

if __name__ == "__main__":
    create_pidfile()
    main()
This approach works well without any dependency on an external module.