Interprocess communication between Docker and host system - Python

I have a Python program that does some machine learning. It is supposed to be accessible over the network using HTTP. Since I want Apache to act as the server, I use a Python CGI script to forward the received data to my program using multiprocessing.connection.
For example, the sending script would be:
#!/usr/bin/python
from multiprocessing.connection import Client
import cgi
from job import *

# Parse the submitted form data and forward it to the listener
form = cgi.FieldStorage()
address = ('localhost', 6000)
conn = Client(address, authkey='secretpass')
conn.send(form)
conn.close()
And the receiving script would be:
from multiprocessing.connection import Listener
import threading

print "Starting listener"
address = ('localhost', 6000)
listener = Listener(address, authkey='secretpass')

while True:
    conn = listener.accept()
    msg = conn.recv()
    conn.close()
    # Do stuff with msg

listener.close()
Once I trigger the URL, Apache will call the first script, which will send the Python object to the second script. The second script will receive it and do the processing.
Now I would like to put the ML part into a Docker container while Apache stays on the host system. In that case, how will I communicate?
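For reference, one plausible approach (an assumption on my part, not something the answer below addresses) is to keep the Listener inside the container, bind it to all interfaces instead of localhost, and publish the port when starting the container (e.g. docker run -p 6000:6000 ...), so the host-side CGI script can keep connecting to localhost:6000 unchanged:

# Sketch of the receiving side inside the container; only the bind
# address changes. '0.0.0.0' makes the listener reachable through the
# port published by Docker.
from multiprocessing.connection import Listener

address = ('0.0.0.0', 6000)
listener = Listener(address, authkey='secretpass')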

As part of the multiprocessing library you will find the process Queue. This structure exists to allow messages to be passed between processes. If you are working on Linux it is a matter of setting up a global queue and pushing messages onto it. The pattern is usually: any process can post, and a single process reads. With two or more queues you can easily set up back-and-forth communication without worrying about collisions or lost messages.
This becomes harder on Windows and other more restrictive systems, as there are no globals shared between processes, and no way to pass a complex structure at the creation of a process. On Windows it is far easier to simply stick to threads.
Details of multiprocessing/threading in Python can be found here:
16.6. multiprocessing — Process-based “threading” interface
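A minimal sketch of that single-reader pattern, assuming Linux fork semantics (the queue is created before the child process starts); all names are illustrative:

from multiprocessing import Process, Queue

def reader(q):
    # Single consumer: drain messages until the None sentinel arrives
    while True:
        msg = q.get()
        if msg is None:
            break
        print("got: %s" % msg)

if __name__ == '__main__':
    q = Queue()
    p = Process(target=reader, args=(q,))
    p.start()
    for i in range(3):
        q.put("message %d" % i)  # any process may post
    q.put(None)                  # sentinel: tell the reader to stop
    p.join()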

Related

Create a scalable local server that redirects data (sending and receiving) between different client scripts, identifying each of them?

I am having difficulty creating a script that serves as a server interconnecting Python scripts locally; these take the role of clients and may or may not be active.
The goal is that if an operation running in script1.py requires sending data to or receiving data from script2.py (which must be running in parallel), server.py will relay the communication between those two scripts.
The problem I'm having is how to implement those communications. Each communication must contain the message as a character string and the name of the destination script from which information is expected to be received, or to which it should be sent.
In this case, script1.py will send a character string to the server indicating that the server should forward it to script2.py, which should be waiting for information. Then script2.py processes that information (in this case it just translates it from English to Spanish) and sends it to the server so that it forwards it to script1.py, which must be waiting for that reply.
This would be the flow, in pseudocode, that the server and its clients should follow:
# server.py
import socket, time

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # renamed so it doesn't shadow the socket module
sock.bind(('', 5555))   # assign an 'IP address' and a 'port number' to the socket instance
sock.listen(5)          # it accepts connections, but only generically; I need it to redirect the information

# script2.py
import socket
from threading import Thread
from unicodedata import normalize
from googletrans import Translator

server_addr = ('server_ip', ----)  # the server's address
I think that since only character strings will be exchanged, TCP could be a good fit: it is reliable, and the strings to be transmitted do not usually exceed three lines of an A4 sheet.
The requirement that the server be scalable is the difficult part for me; that is, the server must always act as a pure intermediary between scripts that only indicate whom to send to and the information to be sent.
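A minimal sketch of one way such a relay could work, assuming each client first sends its own name on one line and then frames every message as "destination|text"; the registration step and the framing are illustrative, not part of the question:

import socket
import threading

clients = {}                 # client name -> connected socket
clients_lock = threading.Lock()

def handle(conn):
    f = conn.makefile('r')
    name = f.readline().strip()       # first line: the client registers its name
    with clients_lock:
        clients[name] = conn
    try:
        for line in f:                # subsequent lines: "destination|message"
            dest, _, message = line.strip().partition('|')
            with clients_lock:
                target = clients.get(dest)
            if target is not None:
                # Forward as "sender|message" so the receiver knows the origin
                target.sendall('{}|{}\n'.format(name, message).encode())
    finally:
        with clients_lock:
            clients.pop(name, None)
        conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(('', 5555))
server.listen(5)
while True:
    conn, _ = server.accept()
    # One thread per client keeps the relay responsive for several clients
    threading.Thread(target=handle, args=(conn,), daemon=True).start()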

Keeping ports open in a Python script that is run continuously

I'm trying to develop a server script using Python 3.4 that runs perpetually and responds to client requests on up to 5 separate ports. My preferred platform is Debian 8.0, which currently runs on a virtual machine in the cloud. My script works fine when I run it from the command line - I now need to (1) keep it running once I log off the server and (2) keep several ports open through the script so that a Windows client can connect to them.
For (1),
After trying several options that didn't seem to work [upstart, adding the script to rc.local, nohup with & to run it off the terminal, etc.], I eventually found something that does seem to keep the script running, even if it's not very elegant - an hourly cron script that checks whether the script appears in the process list and, if not, executes it.
Whenever I login to the VM now, I see the following output when I type 'ps -ef':
root 22007 21992 98 Nov10 14-12:52:59 /usr/bin/python3.4 /home/userxyz/cronserver.py
I assume that the script is running based on the fact that there is an active process in the system. I mention this part because I suspect there could be a correlation with part (2) of my issue.
For (2),
The script is supposed to open ports 49100 - 49105 and listen for connection requests, etc. When I run the script from the terminal, zenmap from my client machine verifies that these ports are open. However, when the cron job initiates the script, these ports don't seem to stay open, and my Windows client program can't connect to the script either.
The Python code I use to listen on a port:
f = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
f.bind((serviceIP, 49101))
f.listen(5)

while True:
    scName, address = f.accept()
    # [code to handle request]
    scName.shutdown(socket.SHUT_WR)
    scName.close()
Any insight or assistance would be greatly appreciated!
What you ask is not easy because it depends on a variety of factors:
What is the frequency of the data received?
How many clients are expected to connect to this server?
Is there a chance two clients try to connect at the same time?
How long does it take to handle some received data?
What do you need to do with your data?
Write to a database?
Write to a file?
Calculate something?
Etc.
Depending on your answers, you'll have some design decisions to make for your solution.
But since you need an answer, here's a hack that represents one way to do things:
import socketserver
import threading
import datetime

class SleepyGaryReceptionHandler(socketserver.BaseRequestHandler):
    log_file_name = "/tmp/sleepygaryserver.log"

    def handle(self):
        # self.request is defined in BaseRequestHandler
        data_received = self.request.recv(1024)

        # self.client_address is also defined in BaseRequestHandler
        sender_address = self.client_address[0]

        # This is where you are supposed to do something with your data.
        # This is just an example:
        self.write_to_log('Someone from {} sent us "{}"'.format(sender_address,
                                                                data_received))

        # A way to stop the server from going on forever. You could do
        # this in other ways; it depends on what condition should cause
        # the shutdown.
        if data_received.startswith(b"QUIT"):
            finishing_thread = threading.Thread(target=self.finish_in_another_thread)
            finishing_thread.start()

    # This will be called in another thread to terminate the server.
    # self.server is also defined in BaseRequestHandler.
    def finish_in_another_thread(self):
        self.write_to_log("Shutting down the server")
        self.server.shutdown()

    # Write something (with a timestamp) to a text file so that we
    # know something is happening
    def write_to_log(self, message):
        timestamp = datetime.datetime.now()
        timestamp_text = timestamp.isoformat(sep=' ', timespec='seconds')
        with open(self.log_file_name, mode='a') as log_file:
            log_file.write("{}: {}\n".format(timestamp_text, message))

service_address = "localhost"
port_number = 49101

server = socketserver.TCPServer((service_address, port_number),
                                SleepyGaryReceptionHandler)
server.serve_forever()
I'm using the socketserver module here instead of listening directly on a socket. This standard library module was written to simplify writing a server, so use it!
All I do here is write what has been received to a text file; you would have to adapt it to your use.
To have it running continuously, use a cron job again, but one that starts the script at the startup of the computer. Since this script blocks until the server is stopped, we have to run it in the background. The crontab entry would look something like this:
@reboot /usr/bin/python3 /home/sleepygary/sleppys_server.py &
I have tested it, and after 5 hours it is still doing its thing.
Now, like I said, it is a hack. If you want to go all the way and behave like any other service on your computer, you have to program it as a proper daemon. You can find more information on this page: https://www.freedesktop.org/software/systemd/man/daemon.html
I'm really tired so there may be some errors here and there.

Python: how to host a websocket and interact with a serial port without blocking?

I am busy developing a Python system that uses websockets to send/receive data from a serial port.
For this to work I need to react to data from the serial port as it is received. The problem is that, to detect incoming data, the serial port needs to be queried continuously - most likely in a continuous loop. From previous experience with Flask (slow disk access plus heavy traffic), this sounds like it could cause the websockets to be blocked. Will this be the case, or is there a workaround?
I have looked at how NodeJS interacts with serial ports and it seems much nicer: it raises an event when there is incoming data instead of querying all the time. Is this an option in Python?
Extra Details:
For now it will only be run on Linux (Raspbian).
Flask was my first selection, but I am open to other Python frameworks.
pyserial for the serial connection. (It is the only option I know of.)
Python provides the select module in the stdlib, which can do what you want. It DOES depend on what operating system you are using, though, so since you haven't provided that information I can't be more specific. However, a simple example under Linux would be:
import os
import select

epoll = select.epoll()

# Do stuff to create the serial connection and websocket connection;
# websocket_file_descriptor and serial_file_descriptor come from that setup
epoll.register(websocket_file_descriptor, select.EPOLLIN)
epoll.register(serial_file_descriptor, select.EPOLLIN)

while True:
    events = epoll.poll(1)
    # Do stuff with the events
    for fileno, event in events:
        if fileno == serial_file_descriptor:
            data = os.read(serial_file_descriptor, 4096)
            os.write(websocket_file_descriptor, data)
        elif fileno == websocket_file_descriptor:
            data = os.read(websocket_file_descriptor, 4096)
            # Do something with the incoming data
That's a basic, incomplete example, but it should give you an idea of the general process of using a system like epoll.
Alternatively, simply start a subprocess that listens to the serial port and raises an event when it has a message, and have a separate subprocess for each web port that does the same.
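A minimal sketch of that idea, assuming pyserial; the device path, baud rate, and the hand-off to the websocket side are placeholders:

import multiprocessing

import serial  # pyserial

def serial_reader(q):
    # Dedicated process: block on the serial port and forward complete lines
    port = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)  # placeholder device
    while True:
        line = port.readline()
        if line:               # empty on timeout
            q.put(line)

if __name__ == '__main__':
    q = multiprocessing.Queue()
    proc = multiprocessing.Process(target=serial_reader, args=(q,))
    proc.daemon = True
    proc.start()
    while True:
        message = q.get()      # blocks until the reader process posts data
        print("serial data: %r" % message)  # hand off to the websocket here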

Constantly running Python script, calling functions via terminal

Quick question that I'm not even sure is possible :3
I have a Python script - a network script that connects to a server and remains connected until I either disconnect or it kicks me (which it normally shouldn't) - which is constantly receiving data and doing other tasks.
I was curious whether it's at all possible to trigger functions from within the script while it is running? Say, while the script was running, if I had the urge to send some sort of data to the server, I could type it up and send it to the function that handles this?
I wasn't quite sure if it was possible or not, as I've never had to attempt it or even seen it done. If it helps, I'm on Ubuntu Linux, running the script from the terminal.
The usual 'UNIX way' to solve such problems is to poll or select on both the socket and the standard input file descriptors. You then handle network input on an 'IN' event on the socket, and terminal input on an 'IN' event on the stdin file descriptor.
This is not portable to Windows (which sucks), but it is the most natural way to do it on UNIX-like systems. And you don't get all the problems that come with threads (which often need polling in Python too, as they become 'unkillable' otherwise).
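A minimal sketch of that approach, assuming an already-connected TCP socket; the address and the handling of both inputs are placeholders:

import select
import socket
import sys

sock = socket.create_connection(('example.com', 12345))  # placeholder address

while True:
    # Block until the socket or stdin has something to read
    readable, _, _ = select.select([sock, sys.stdin], [], [])
    for source in readable:
        if source is sock:
            data = sock.recv(4096)
            if not data:
                raise SystemExit("server closed the connection")
            print("from server: %r" % data)
        else:
            line = sys.stdin.readline()  # the user typed a command
            if line:
                sock.sendall(line.encode())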
Take a look at gevent:
gevent is a coroutine-based Python networking library that uses
greenlet to provide a high-level synchronous API on top of the
libevent event loop.
and gevent.socket.
Jacek Konieczny's solution is good and simple. Should you want more flexible message passing, consider ZeroMQ. This gives you lots of power to easily create various messaging solutions around your main program. Using a single thread, your main program would look something like this:
#!/usr/bin/env python
import zmq
from time import sleep

CTX = zmq.Context()

incoming = CTX.socket(zmq.PULL)
incoming.bind("tcp://127.0.0.1:3000")
outgoing = CTX.socket(zmq.PUB)
outgoing.bind("tcp://127.0.0.1:3001")

# Poller for the incoming messages
poller = zmq.Poller()
poller.register(incoming, zmq.POLLIN)

def main():
    while True:
        # Do things on the network
        print("[Did things on the network]")
        # Send messages if you want
        outgoing.send("Important message")
        # Poll for incoming messages
        socks = dict(poller.poll(zmq.NOBLOCK))
        if incoming in socks and socks[incoming] == zmq.POLLIN:
            message = incoming.recv()
            # Handle message
            print("[Handled message '%s']" % message)
        sleep(1)  # Only for this dummy program

if __name__ == "__main__":
    main()
You would then write a client (in any language that has ZeroMQ bindings) that pushes and subscribes to messages from the main program. Example pusher:
#!/usr/bin/env python
import zmq

CTX = zmq.Context()

pusher = CTX.socket(zmq.PUSH)
pusher.connect("tcp://127.0.0.1:3000")

def main():
    pusher.send("Message to main program")

if __name__ == "__main__":
    main()
Example subscriber:
#!/usr/bin/env python
import zmq

CTX = zmq.Context()

subscriber = CTX.socket(zmq.SUB)
subscriber.connect("tcp://127.0.0.1:3001")
subscriber.setsockopt(zmq.SUBSCRIBE, "")

def main():
    while True:
        msg = subscriber.recv()
        print("[Received message] %s" % msg)

if __name__ == "__main__":
    main()
It sounds like you will want to combine the pusher and subscriber programs into one. If you decide to use ZeroMQ, have a look at the excellent user guide.
You can of course also use ZeroMQ with multiple threads or processes (just be careful not to share individual ZeroMQ sockets between threads).
Without more details, I can only provide you with general ideas. In order to do two things at once (download from the server and wait for data to send) you will need to use either multiple threads or multiple processes. There is a tutorial with some examples of multiple threads here. If you use multiple processes, you would be using the multiprocessing package.
With either solution, you would need a similar setup. I'll use the term thread for the rest, but you could easily replace that with process if you use multiple processes instead. You would probably have (at least) one thread to send and receive data (this might be two threads) and a separate thread to wait for something to send. This is a simplified example of the producer/consumer problem: the thread that waits for commands/data produces data to send, while the thread that sends data consumes that data as it ships it to the server.
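A minimal sketch of that producer/consumer layout using threads; send_to_server() is a hypothetical stand-in for the real network code:

import queue
import threading

outbox = queue.Queue()

def read_commands():
    # Producer: wait for the user to type something to send
    while True:
        outbox.put(input("> "))

def send_to_server(command):
    print("sending: %s" % command)  # placeholder for the real socket write

def send_loop():
    # Consumer: pull queued commands and ship them to the server
    while True:
        send_to_server(outbox.get())

input_thread = threading.Thread(target=read_commands)
input_thread.daemon = True
input_thread.start()
send_loop()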
Stick your server stuff in another thread (investigate the threading module) and use the main thread for interaction with the user via raw_input/input.

Python xinetd client disconnection handling

This may or may not be a coding issue. It may also be an xinetd daemon issue, I do not know.
I have a Python script which is triggered from a Linux server running xinetd. Xinetd has been set up to only allow one instance, as I only want one machine to be able to connect to the service, which is therefore also limited by IP.
Currently, when the client connects to xinetd, the service works correctly and the script begins sending its output to the client machine. However, when the client disconnects (i.e. due to a reboot), the process is still alive on the server, and this blocks the client from connecting again once it has finished rebooting.
Q: How can I detect in Python that the client has disconnected? Perhaps I can test whether stdout is no longer being read by the client (and then exit the script), or is there a much easier way in xinetd to have the child process killed when the client disconnects?
(I'm using Python 2.4.3 on RHEL5 Linux - solutions for 2.4 are needed, but 3.1 solutions would be useful to know as well.)
Add a signal handler for SIGHUP. (x)inetd sends this upon the socket disconnecting.
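A minimal sketch of such a handler, assuming the desired behaviour is simply to let the child process exit when the client goes away:

import signal
import sys

def on_hangup(signum, frame):
    sys.exit(0)  # client disconnected: let this xinetd child die

signal.signal(signal.SIGHUP, on_hangup)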
Monitor the signals sent to your process. Maybe your script isn't responding to the SIGHUP sent by xinetd; monitor the signal and let the process die.
You don't seem to get a SIGHUP, but you do get a SIGPIPE, at least as long as you are attempting any IO on the connection. If the application spends long periods of time not doing any IO, then you could just start a thread reading stdin to ensure you get the SIGPIPE as soon as the disconnection occurs. This was good enough for my application, but then I didn't use any pipes other than the ones xinetd gave me.
I've seen several places on the net where people talk about the SIGHUP getting sent on client disconnection, so I've written an inetd Python script to test a couple of servers (one inetd and another xinetd); you can use it to check which signals actually get sent. It just logs what it finds to /var/log/test.log. Perhaps it will be useful.
#!/usr/bin/python
import os, signal, sys

# Signals we can't (or shouldn't) install handlers for
skip = ["SIGKILL", "SIG_DFL", "SIGSTOP", "SIG_IGN", "SIGCLD", "SIGCHLD"]

# Map signal numbers back to their names for logging
name_map = {}
identifiers = [i for i in dir(signal) if i.startswith("SIG") and i not in skip]
for i in identifiers:
    name_map[getattr(signal, i)] = i

def handler(num, frame):
    signame = name_map[num]
    os.system("echo handled %s >> /var/log/test.log" % signame)

if __name__ == "__main__":
    for id, name in name_map.iteritems():
        signal.signal(id, handler)
    while True:
        print sys.stdin.readline()
        sys.stdout.flush()
