running and stopping a script in subprocess - python

I am not familiar with subprocesses and I would like some help with the following problem.
I have 3 apps. Let's say I am running them with commands like these:
python manage.py app1
python manage.py app2
python manage.py app3
I want to make a main script that controls them, with commands like run_app1 or stop_app1.
Everything runs on Linux.
My apologies for my poor explanation. I have dyslexia, also known as reading disorder, and it is sometimes hard for me to write down what I am thinking.
Thank you for reading, and for any help.

Using the subprocess Python module a first step could be something like this:
# Master
from multiprocessing.connection import Listener
from subprocess import Popen, PIPE
import sys

port = 10000
lstn = Listener(('localhost', port), authkey=b'secret')
proc = Popen((sys.executable, 'worker.py', str(port)), stdout=PIPE, stderr=PIPE)
conn = lstn.accept()
conn.send([1, 'Brian', None])
print(proc.stdout.readline())

# Worker
from multiprocessing.connection import Client
import sys

port = int(sys.argv[1])
conn = Client(('localhost', port), authkey=b'secret')
while True:
    try:
        msg = conn.recv()
        print('Received: %s' % msg)
        sys.stdout.flush()
    except EOFError:
        break
The master process initializes a listener and then opens the worker process. Messages can be sent to the worker via the connection object, and the worker's stdout and stderr are piped back to the master process.
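Building on that, a minimal sketch of the start/stop controller asked for in the question could keep the Popen handles in a dictionary and terminate them on demand. This assumes the apps really are launched as python manage.py appN on Linux; run_app and stop_app are just illustrative names:

import subprocess
import sys

processes = {}

def run_app(name):
    # Start an app as a child process if it is not already running.
    if name in processes and processes[name].poll() is None:
        print('%s is already running' % name)
        return
    processes[name] = subprocess.Popen([sys.executable, 'manage.py', name])
    print('started %s (pid %d)' % (name, processes[name].pid))

def stop_app(name):
    # Send SIGTERM to a previously started app and wait for it to exit.
    proc = processes.get(name)
    if proc is None or proc.poll() is not None:
        print('%s is not running' % name)
        return
    proc.terminate()
    proc.wait()
    print('stopped %s' % name)

if __name__ == '__main__':
    for app in ('app1', 'app2', 'app3'):
        run_app(app)
    stop_app('app1')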

Related

WinError6 The Handle is Invalid Python 3+ multiprocessing

I am running a Python 3.7 Flask application which uses flask_socketio to set up a socketio server for browser clients, another Python process to connect to a separate remote socketio server and exchange messages, and another Python process to read input from a PIR sensor.
Both Python processes communicate over multiprocessing.Queue, but the socketio process always gets either [WinError 6] Invalid Handle or [WinError 5] Permission Denied. I have absolutely no idea what I'm doing wrong.
Here's the top-level (server) code; it does not appear to have issues:
from shotsocket import init as shotsocket_init
from shotsocket import util as matchmaking_util
import multiprocessing, os, config, uuid
match_queue = multiprocessing.Queue()
shot_queue = multiprocessing.Queue()
app = Flask(__name__, static_url_path='', static_folder='templates')
socketio = SocketIO(app)
_rooms = [] # I don't plan to keep this in memory, just doing it for debug / dev
...
The above works fine and dandy. The 2nd to last line in the following block is the issue.
# THIS IS THE FUNC WHERE WE ARE TRYING TO USE
# THE BROKEN QUEUE
@socketio.on('connect')
def listen():
    room_key = str(uuid.uuid4())
    join_room(room_key)

    _rooms.append((room_key, request.sid))

    possible_match = matchmaking_util.match_pending_clients(_rooms)
    if possible_match:
        shot_queue.put_nowait(possible_match)
        print('put it in there')
Here's how I start these processes:
if __name__ == '__main__':
    debug = os.environ.get('MOONSHOT_DEBUG', False)
    try:
        proc = multiprocessing.Process(target=start, args=(debug, match_queue))
        proc.start()
        shot_proc = multiprocessing.Process(target=shotsocket_init, args=(shot_queue,))
        shot_proc.start()
        socketio.run(app, host='0.0.0.0')
    except KeyboardInterrupt:
        socketio.stop()
        proc.join()
        shot_proc.join()
And here's the entirety of shotsocket (the code that cannot read the queue)
import socketio, multiprocessing  # mp for the type

sio = socketio.Client(engineio_logger=True)
sio.connect('redacted woot', transports=['websocket'])

@sio.on('connect')
def connect():
    print("connected to shot server")

def init(queue: multiprocessing.Queue):
    while True:
        try:
            # WE NEVER GET PAST THIS LINE
            print(queue.get())
        except Exception as e:
            continue

        if not queue.empty():
            print('queue empty')

        shot = queue.get()
        print(shot)
        match_id, opponents = shot
        sio.emit('start', {'id': match_id, 'opponents': [opponents[0], opponents[1]]})
I'm pulling my hair out. What the heck am I doing wrong?
Solution
I have no idea why this fixes the problem, but switching from multiprocessing.Queue to queue.Queue and multiprocessing.Process to threading.Thread did it.
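For reference, a minimal sketch of that change to the startup block; start, shotsocket_init, socketio and app are the names from the question's code, and everything else is illustrative:

import os
import queue
import threading

match_queue = queue.Queue()
shot_queue = queue.Queue()

if __name__ == '__main__':
    debug = os.environ.get('MOONSHOT_DEBUG', False)
    # Threads share the Flask process's memory, so a plain queue.Queue works
    # without the handle inheritance that multiprocessing needs on Windows.
    proc = threading.Thread(target=start, args=(debug, match_queue), daemon=True)
    proc.start()
    shot_proc = threading.Thread(target=shotsocket_init, args=(shot_queue,), daemon=True)
    shot_proc.start()
    socketio.run(app, host='0.0.0.0')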

Create process in tornado web server

I have a multiprocessing Tornado web server and I want to create another process that will do some things in the background.
I have a server with the following code:
start_background_process()

app = Application([<someurls>])
server = HTTPServer(app)
server.bind(8888)
server.start(4)  # Forks multiple sub-processes
IOLoop.current().start()

def start_background_process():
    process = multiprocessing.Process(target=somefunc)
    process.start()
and everything is working great.
However, when I try to close the server (by Ctrl-C or by sending a signal) I get:
AssertionError: can only join a child process
I understand the cause of the problem: when I create a process with multiprocessing, a call to the process's join method is registered in atexit, and because Tornado does a simple fork, all of its children also call the join method of the process I created, but they can't, since that process is their sibling and not their child.
So how can I open a process normally in Tornado?
"HTTPTserver start" uses os.fork to fork the 4 sub-processes as it can be seen in its source code.
If you want your method to be executed by all the 4 sub-processes, you have to call it after the processes have been forked.
Having that in mind your code can be changed to look as below:
import multiprocessing

import tornado.web
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop

# A simple external handler as an example for completion
from handlers.index import IndexHandler

def method_on_sub_process():
    print("Executing in sub-process")

def start_background_process():
    process = multiprocessing.Process(target=method_on_sub_process)
    process.start()

def main():
    app = tornado.web.Application([(r"/", IndexHandler)])
    server = HTTPServer(app)
    server.bind(8888)
    server.start(4)
    start_background_process()
    IOLoop.current().start()

if __name__ == "__main__":
    main()
Furthermore, to keep the behavior of your program clean during a keyboard interrupt, surround the setup of the server with a try...except block as below:
def main():
    try:
        app = tornado.web.Application([(r"/", IndexHandler)])
        server = HTTPServer(app)
        server.bind(8888)
        server.start(4)
        start_background_process()
        IOLoop.current().start()
    except KeyboardInterrupt:
        IOLoop.instance().stop()

How to start a python-daemon over ssh connection?

I want to log into a remote computer using the Python library paramiko, then start a daemon process with the python-daemon library which keeps running as a kind of job queue after the program terminates.
This is my code so far (in this example the daemon just opens a file and prints some numbers into it):
# client.py
import paramiko

def main():
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect('machine1', username='user1')

    command = 'python server_daemon.py'
    stdin, stdout, stderr = ssh.exec_command(command)
    ssh.close()

if __name__ == "__main__":
    main()
# server_daemon.py
import time
import daemon

def main():
    with daemon.DaemonContext():
        s = [str(x) + "\n" for x in range(1000)]
        for i in s:
            with open("test.txt", "a") as f:
                f.write(i)
            time.sleep(0.4)
        while True:
            pass

if __name__ == "__main__":
    main()
Unfortunately this doesn't seem to do the trick. If I remove the daemonizing context from the script it seems to work, but then I have to wait for the server script to finish.
I also tried redirecting the output to /dev/null, and that didn't work either.
Thanks for any suggestions.

how do I diagnose a vanishing port listener?

I'm pulling data off a port using a Python process, launched as an upstart job on an Ubuntu server. The data is sent over TCP, with each client sending a single, relatively small string of information.
The upstart config:
start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 3 5
setuid takeaim
setgid takeaim
exec /home/takeaim/production/deploy/production/update_service_demon.sh
The update_service_demon.sh script (I found it easier to debug by separating this out of upstart):
#!/bin/bash
# Make sure we're in the right virtual env and location
source /home/takeaim/.virtualenvs/production/bin/activate
source /home/takeaim/.virtualenvs/production/bin/postactivate
cd /home/takeaim/production
exec python drupdate/dr_update_service.py
The python script (it dispatches the real work to a celery worker):
from collections import defaultdict
import select
import socket
from django.conf import settings
from drupdate.tasks import do_dr_update
def create_server_socket():
"""Set up the and return server socket"""
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.setblocking(0)
server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server_socket.bind(('0.0.0.0', settings.DRUPDATE['PORT']))
server_socket.listen(settings.DRUPDATE['MAX_CONNECT_REQUESTS'])
return server_socket
def serve(echo_only=False):
message_length = settings.DRUPDATE['MSG_LENGTH']
message_chunks = defaultdict(list)
server_socket = create_server_socket()
inputs = [server_socket]
while inputs:
readable, writable, exceptional = select.select(inputs, [], inputs)
for sock in readable:
if sock is server_socket:
client_socket, address = server_socket.accept()
client_socket.setblocking(0)
inputs.append(client_socket)
else:
chunk = sock.recv(message_length)
if chunk:
message_chunks[sock].append(chunk)
else:
# This client_socket is finished, hand off message for processing
message = ''.join(message_chunks[sock])
if echo_only:
print(message)
else:
do_dr_update.delay(message)
inputs.remove(sock)
sock.close()
for sock in exceptional:
inputs.remove(sock)
sock.close()
if sock is server_socket:
# replace bad server socket
server_socket = create_server_socket()
inputs.append(server_socket)
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description="Process incoming DR messages")
parser.add_argument('--echo', help='Just echo incoming messages to the console - no updates will take place',
dest='echo_only', action='store_true', default=False)
args = parser.parse_args()
serve(echo_only=args.echo_only)
The process disappears every now and then despite the respawn setting. I'm reluctant to make the respawns unlimited unless I can understand why the process disappears. A manual restart works fine... until it vanishes again. It can be up for days and then just disappear.
What is the best way to find out what is going on?
Add enough logging to the system to enable traces to be analysed after a failure.
Here are some suggestions for logging in order of verbosity:
Replace the exec python drupdate/dr_update_service.py call with the following snippet, which will log the exit code of your Python process to syslog on exit. The exit code may give some clues as to how the process terminated, e.g. if the process was terminated by a signal the exit code will be >= 128.
python drupdate/dr_update_service.py || logger "He's dead Jim, exit code $?"
Add a try/except block around your serve() call in __main__. In the exception handler, print the traceback to a file or a logging subsystem (a sketch follows after these suggestions).
If the above methods fail to provide clues, wrap your entire script with a call to strace -f -tt and divert the output to a log file. This will trace the entire set of system calls made by your program, along with their arguments and return codes, and will help debug issues related to system calls that return errors. Applying this method will slow down your process and generate a huge amount of output, which may in turn change the behaviour of your program and mask the underlying issue.
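As a sketch of the second suggestion, the __main__ block of dr_update_service.py could log the traceback before the process dies; the log file path and logging setup here are just placeholders:

import logging

logging.basicConfig(filename='/var/log/takeaim/dr_update_service.log',
                    level=logging.INFO)

if __name__ == "__main__":
    import argparse

    parser = argparse.ArgumentParser(description="Process incoming DR messages")
    parser.add_argument('--echo', dest='echo_only', action='store_true', default=False)
    args = parser.parse_args()

    try:
        serve(echo_only=args.echo_only)
    except Exception:
        # logging.exception records the full traceback so the failure
        # can be analysed after the fact.
        logging.exception('dr_update_service crashed')
        raise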

Interacting with long-running python script

I have a long-running Python script which collects tweets from Twitter, and I would like to know how it's doing every once in a while.
Currently, I am using the signal library to catch interrupts, at which point I call my print function. Something like this:
import functools
import os
import signal

def print_info(count):
    print "#Tweets:", count

# Print out the process ID so I can interrupt it for info
print 'PID:', os.getpid()

# Start listening for interrupts
signal.signal(signal.SIGUSR1, functools.partial(print_info, tweet_count))
And whenever I want my info, I open up a new terminal and issue my interrupt:
$ kill -USR1 <pid>
Is there a better way to do this? I am aware I could have my script print something at scheduled intervals, but I am more interested in knowing on demand, and potentially issuing other commands as well.
Sending a signal to the process interrupts it. Below is an approach that uses a dedicated thread to emulate a Python console; the console is exposed over a Unix socket.
import traceback
import importlib
from code import InteractiveConsole
import sys
import socket
import os
import threading
from logging import getLogger

# template used to generate file name
SOCK_FILE_TEMPLATE = '%(dir)s/%(prefix)s-%(pid)d.socket'

log = getLogger(__name__)


class SocketConsole(object):
    '''
    Ported from :eventlet.backdoor.SocketConsole:.
    '''
    def __init__(self, locals, conn, banner=None):  # pylint: disable=W0622
        self.locals = locals
        self.desc = _fileobject(conn)
        self.banner = banner
        self.saved = None

    def switch(self):
        self.saved = sys.stdin, sys.stderr, sys.stdout
        sys.stdin = sys.stdout = sys.stderr = self.desc

    def switch_out(self):
        sys.stdin, sys.stderr, sys.stdout = self.saved

    def finalize(self):
        self.desc = None

    def _run(self):
        try:
            console = InteractiveConsole(self.locals)
            # __builtins__ may either be the __builtin__ module or
            # __builtin__.__dict__; in the latter case typing
            # locals() at the backdoor prompt spews out lots of
            # useless stuff
            import __builtin__
            console.locals["__builtins__"] = __builtin__
            console.interact(banner=self.banner)
        except SystemExit:  # raised by quit()
            sys.exc_clear()
        finally:
            self.switch_out()
            self.finalize()


class _fileobject(socket._fileobject):
    def write(self, data):
        self._sock.sendall(data)

    def isatty(self):
        return True

    def flush(self):
        pass

    def readline(self, *a):
        return socket._fileobject.readline(self, *a).replace("\r\n", "\n")


def make_threaded_backdoor(prefix=None):
    '''
    :return: started daemon thread running :main_loop:
    '''
    socket_file_name = _get_filename(prefix)

    db_thread = threading.Thread(target=main_loop, args=(socket_file_name,))
    db_thread.setDaemon(True)
    db_thread.start()

    return db_thread


def _get_filename(prefix):
    return SOCK_FILE_TEMPLATE % {
        'dir': '/var/run',
        'prefix': prefix,
        'pid': os.getpid(),
    }


def main_loop(socket_filename):
    try:
        log.debug('Binding backdoor socket to %s', socket_filename)
        check_socket(socket_filename)
        sockobj = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sockobj.bind(socket_filename)
        sockobj.listen(5)
    except Exception, e:
        log.exception('Failed to init backdoor socket %s', e)
        return

    while True:
        conn = None
        try:
            conn, _ = sockobj.accept()
            console = SocketConsole(locals=None, conn=conn, banner=None)
            console.switch()
            console._run()
        except IOError:
            log.debug('IOError closing connection')
        finally:
            if conn:
                conn.close()


def check_socket(socket_filename):
    try:
        os.unlink(socket_filename)
    except OSError:
        if os.path.exists(socket_filename):
            raise
Example program:
make_threaded_backdoor(prefix='test')

while True:
    pass
Example session:
mmatczuk@cactus:~$ rlwrap nc -U /var/run/test-3196.socket
Python 2.7.6 (default, Mar 22 2014, 22:59:56)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
>>> import os
>>> os.getpid()
3196
>>> quit()
mmatczuk@cactus:~$
This is a pretty robust tool that can be used to:
dump threads,
inspect process memory,
attach a debugger on demand, including the pydev debugger (works for both Eclipse and PyCharm),
force GC,
monkeypatch function definitions on the fly,
and even more.
I personally write information to a file so that I have it afterwards, although this has the disadvantage of perhaps being slightly slower, because it has to write to a file every time (or every few times) it retrieves a tweet.
Anyway, if you write it to a file "output.txt", you can open up bash and either type tail output.txt for the last 10 lines printed to the file, or type tail -f output.txt, which continuously updates the terminal with the lines being written to the file. If you wish to stop, just press Ctrl-C.
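A minimal sketch of that approach, assuming a status line is appended to output.txt once per tweet (the file name and format are arbitrary):

import time

def log_status(count, path='output.txt'):
    # Append one status line per call; watch it with: tail -f output.txt
    with open(path, 'a') as f:
        f.write('%s  #Tweets: %d\n' % (time.strftime('%H:%M:%S'), count))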
Here's an example long-running program that also maintains a status socket. When a client connects to the socket, the script writes some status information to the socket.
#!/usr/bin/python

import os
import sys
import argparse
import random
import threading
import socket
import time
import select

val1 = 0
val2 = 0
lastupdate = 0
quit = False

# This function runs in a separate thread. When a client connects,
# we write out some basic status information, close the client socket,
# and wait for the next connection.
def connection_handler(sock):
    global val1, val2, lastupdate, quit

    while not quit:
        # We use select() with a timeout here so that we are able to catch the
        # quit flag in a timely manner.
        rlist, wlist, xlist = select.select([sock], [], [], 0.5)
        if not rlist:
            continue

        client, clientaddr = sock.accept()
        client.send('%s %s %s\n' % (lastupdate, val1, val2))
        client.close()

# This function starts the listener thread.
def start_listener():
    sock = socket.socket(socket.AF_UNIX)
    try:
        os.unlink('/var/tmp/myprog.socket')
    except OSError:
        pass

    sock.bind('/var/tmp/myprog.socket')
    sock.listen(5)

    t = threading.Thread(
        target=connection_handler,
        args=(sock,))
    t.start()

def main():
    global val1, val2, lastupdate

    start_listener()

    # Here is the part of our script that actually does "work".
    while True:
        print 'updating...'
        lastupdate = time.time()
        val1 = val1 + random.randint(1, 10)
        val2 = val2 + random.randint(100, 200)
        print 'sleeping...'
        time.sleep(5)

if __name__ == '__main__':
    try:
        main()
    except (Exception, KeyboardInterrupt, SystemExit):
        quit = True
        raise
You could write a simple Python client to connect to the socket, or you could use something like socat:
$ socat - unix:/var/tmp/myprog.socket
1403061693.06 6 152
I wrote a similar application before. Here is what I did:
When only a few commands are needed, I just use a signal as you did, to keep things simple. By "command" I mean something that you want your application to do, such as print_info in your post.
But when the application was updated and more commands were needed, I started using a dedicated thread that listens on a socket port, or reads a local file, to accept commands. Suppose the application needs to support print_info1, print_info2 and print_info3; then a client can connect to the target port and write print_info1 to make the application execute that command (or just write print_info1 to a local file if you are using the local-file mechanism); a rough sketch of the file-based variant follows below.
With the socket-port mechanism, the disadvantage is that it takes a bit more work to write a client to give commands; the advantage is that you can give orders from anywhere.
With the local-file mechanism, the disadvantage is that the thread has to check the file in a loop, which uses some resources; the advantage is that giving orders is very simple (just write a string to a file) and you don't need to write a client or a socket listener.
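Here is that rough sketch of the file-based variant; the file name, polling interval and command names are arbitrary:

import os
import threading
import time

def print_info1():
    print('info 1')

def print_info2():
    print('info 2')

COMMANDS = {'print_info1': print_info1, 'print_info2': print_info2}

def watch_command_file(path='commands.txt', interval=1.0):
    # Poll the command file in a background thread; when a known command
    # name is written to it, run the command and remove the file.
    def loop():
        while True:
            if os.path.exists(path):
                with open(path) as f:
                    name = f.read().strip()
                os.remove(path)
                if name in COMMANDS:
                    COMMANDS[name]()
            time.sleep(interval)

    t = threading.Thread(target=loop)
    t.daemon = True
    t.start()
    return t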
rpyc is the perfect tool for this task.
In short, you define an rpyc.Service class which exposes the commands you want to expose, and start an rpyc server thread.
Your client then connects to your process, and calls the methods which are mapped to the commands your service exposes.
It's as simple and clean as that. No need to worry about sockets, signals, object serialization.
It has other cool features as well, for example the protocol being symmetric.
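A minimal sketch of what that looks like with rpyc (the port number and the exposed method are made up for illustration):

import threading

import rpyc
from rpyc.utils.server import ThreadedServer

tweet_count = 0

class StatusService(rpyc.Service):
    # Methods prefixed with exposed_ become callable from the client.
    def exposed_tweet_count(self):
        return tweet_count

server = ThreadedServer(StatusService, port=18861)
threading.Thread(target=server.start, daemon=True).start()

# The long-running work continues here. From another terminal:
#   >>> import rpyc
#   >>> rpyc.connect('localhost', 18861).root.tweet_count()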
Your question relates to interprocess communication. You can achieve this by communicating over a Unix socket or TCP port, by using shared memory, or by using a message queue or cache system such as RabbitMQ or Redis.
This post talks about using mmap to achieve shared-memory interprocess communication.
Here's how to get started with Redis and RabbitMQ; both are rather simple to implement.
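As one concrete example, publishing status to Redis takes only a few lines (this assumes a local Redis server and the redis-py package; the key name is arbitrary):

import redis

r = redis.Redis(host='localhost', port=6379)

def publish_status(count):
    # Any other process can read this on demand, e.g.:
    #   redis-cli get tweet_count
    r.set('tweet_count', count)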
