Read from two serial ports asynchronously - python

I'd like to read from two (or more) serial ports (/dev/ttyUSB0 etc.) at the same time in Python on Linux. I want to read complete lines from each port (whichever has data) and process the results in the order received (without race conditions). As a simple example, the processing could just write the lines to a single merged file.
I assume the way to do this is based on pyserial, but I can't quite figure out how. Pyserial offers non-blocking reads using asyncio and using threads; the asyncio support is marked as experimental. I assume there wouldn't be any race conditions if the processing is done in asyncio.Protocol.data_received(). In the case of threads, the processing would probably have to be protected by a mutex.
Perhaps this can also be done without pyserial: the two serial ports can be opened as files and then read from when data is available, using select().
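For reference, here is a minimal sketch of that select()-based idea, assuming pyserial on Linux (Serial objects expose fileno() there); the merged-file name is made up, and each port keeps its own partial-line buffer so only complete lines are written, in arrival order:
import select
import serial

ports = [serial.Serial('/dev/ttyUSB0'),
         serial.Serial('/dev/ttyUSB1', baudrate=115200)]
buffers = {p: b'' for p in ports}   # partial-line buffer per port

with open('merged.log', 'ab') as log:
    while True:
        # Block until at least one port has data ready.
        readable, _, _ = select.select(ports, [], [])
        for p in readable:
            buffers[p] += p.read(p.in_waiting or 1)   # read whatever is available
            while b'\n' in buffers[p]:                # emit only complete lines
                line, buffers[p] = buffers[p].split(b'\n', 1)
                log.write(line + b'\n')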

Consider using aioserial.
Here's an example:
import asyncio
import concurrent.futures
import queue

import aioserial


async def readline_and_put_to_queue(
        aioserial_instance: aioserial.AioSerial,
        q: queue.Queue):
    while True:
        q.put(await aioserial_instance.readline_async())


async def process_queue(q: queue.Queue):
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
        while True:
            line: bytes = await asyncio.get_running_loop().run_in_executor(
                executor, q.get)
            print(line.decode(errors='ignore'), end='', flush=True)
            q.task_done()


q: queue.Queue = queue.Queue()

aioserial_ttyUSB0: aioserial.AioSerial = \
    aioserial.AioSerial(port='/dev/ttyUSB0')
aioserial_ttyUSB1: aioserial.AioSerial = \
    aioserial.AioSerial(port='/dev/ttyUSB1', baudrate=115200)

asyncio.run(asyncio.wait([
    readline_and_put_to_queue(aioserial_ttyUSB0, q),
    readline_and_put_to_queue(aioserial_ttyUSB1, q),
    process_queue(q),
]))
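Note: on Python 3.11 and later, asyncio.wait() no longer accepts bare coroutine objects, so on newer interpreters the same setup can be driven through asyncio.gather() inside a single entry coroutine instead, for example:
async def main():
    # Run both readers and the consumer concurrently until cancelled.
    await asyncio.gather(
        readline_and_put_to_queue(aioserial_ttyUSB0, q),
        readline_and_put_to_queue(aioserial_ttyUSB1, q),
        process_queue(q),
    )

asyncio.run(main())  # in place of the asyncio.run(asyncio.wait([...])) call above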

As suggested by @AlexHall in a comment, here is a solution that uses one thread for each serial port and a queue to synchronize access:
import queue
import threading

import serial

line_queue = queue.Queue(1000)

def serial_read(s):
    while True:
        line = s.readline()
        line_queue.put(line)

serial0 = serial.Serial('/dev/ttyUSB0')
serial1 = serial.Serial('/dev/ttyUSB1')

thread1 = threading.Thread(target=serial_read, args=(serial0,), daemon=True)
thread2 = threading.Thread(target=serial_read, args=(serial1,), daemon=True)
thread1.start()
thread2.start()

while True:
    line = line_queue.get(True, 1)
    print(line)
It may be possible to write this more elegantly, but it works.

You could also read from each port in turn and store the result in variables:
a = data1.read()
b = data2.read()
and then process whatever arrived, in order:
if len(a) != 0:
    process(a)
if len(b) != 0:
    process(b)
With this method, if one or both ports have data, it gets processed.
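A minimal sketch of that polling idea, assuming pyserial with a read timeout so readline() returns even when a port is quiet (note that a timed-out readline() may return a partial line):
import serial

data1 = serial.Serial('/dev/ttyUSB0', timeout=0.1)
data2 = serial.Serial('/dev/ttyUSB1', timeout=0.1)

while True:
    a = data1.readline()   # b'' if nothing arrived within the timeout
    b = data2.readline()
    if len(a) != 0:
        print('port 0:', a)
    if len(b) != 0:
        print('port 1:', b)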


How to check output of a sub process but also hide it? [duplicate]

NB. I have seen Log output of multiprocessing.Process - unfortunately, it doesn't answer this question.
I am creating a child process (on windows) via multiprocessing. I want all of the child process's stdout and stderr output to be redirected to a log file, rather than appearing at the console. The only suggestion I have seen is for the child process to set sys.stdout to a file. However, this does not effectively redirect all stdout output, due to the behaviour of stdout redirection on Windows.
To illustrate the problem, build a Windows DLL with the following code
#include <iostream>

extern "C"
{
    __declspec(dllexport) void writeToStdOut()
    {
        std::cout << "Writing to STDOUT from test DLL" << std::endl;
    }
}
Then create and run a python script like the following, which imports this DLL and calls the function:
from ctypes import *
import sys

print()
print("Writing to STDOUT from python, before redirect")
print()

sys.stdout = open("stdout_redirect_log.txt", "w")
print("Writing to STDOUT from python, after redirect")

testdll = CDLL("Release/stdout_test.dll")
testdll.writeToStdOut()
In order to see the same behaviour as me, it is probably necessary for the DLL to be built against a different C runtime than the one Python uses. In my case, python is built with Visual Studio 2010, but my DLL is built with VS 2005.
The behaviour I see is that the console shows:
> stdout_test.py
Writing to STDOUT from python, before redirect
Writing to STDOUT from test DLL
While the file stdout_redirect_log.txt ends up containing:
Writing to STDOUT from python, after redirect
In other words, setting sys.stdout failed to redirect the stdout output generated by the DLL. This is unsurprising given the nature of the underlying APIs for stdout redirection in Windows. I have encountered this problem at the native/C++ level before and never found a way to reliably redirect stdout from within a process. It has to be done externally.
This is actually the very reason I am launching a child process - it's so that I can connect externally to its pipes and thus guarantee that I am intercepting all of its output. I can definitely do this by launching the process manually with pywin32, but I would very much like to be able to use the facilities of multiprocessing, in particular the ability to communicate with the child process via a multiprocessing Pipe object, in order to get progress updates. The question is whether there is any way to both use multiprocessing for its IPC facilities and to reliably redirect all of the child's stdout and stderr output to a file.
UPDATE: Looking at the source code for multiprocessing.Process, it has a static member, _Popen, which looks like it can be used to override the class used to create the process. If it's set to None (default), it uses a multiprocessing.forking._Popen, but it looks like by saying
multiprocessing.Process._Popen = MyPopenClass
I could override the process creation. However, although I could derive this from multiprocessing.forking._Popen, it looks like I would have to copy a bunch of internal stuff into my implementation, which sounds flaky and not very future-proof. If that's the only choice I think I'd probably plump for doing the whole thing manually with pywin32 instead.
The solution you suggest is a good one: create your processes manually such that you have explicit access to their stdout/stderr file handles. You can then create a socket to communicate with the sub-process and use multiprocessing.connection over that socket (multiprocessing.Pipe creates the same type of connection object, so this should give you all the same IPC functionality).
Here's a two-file example.
master.py:
import multiprocessing.connection
import subprocess
import socket
import sys, os

## Listen for connection from remote process (and find free port number)
port = 10000
while True:
    try:
        l = multiprocessing.connection.Listener(('localhost', int(port)), authkey=b"secret")
        break
    except socket.error as ex:
        if ex.errno != 98:
            raise
        port += 1  ## if errno==98, then port is not available.

proc = subprocess.Popen((sys.executable, "subproc.py", str(port)),
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)

## open connection for remote process
conn = l.accept()
conn.send([1, "asd", None])
print(proc.stdout.readline())
subproc.py:
import multiprocessing.connection
import sys, os, time

port = int(sys.argv[1])
conn = multiprocessing.connection.Client(('localhost', port), authkey=b"secret")

while True:
    try:
        obj = conn.recv()
        print("received: %s\n" % str(obj))
        sys.stdout.flush()
    except EOFError:  ## connection closed
        break
You may also want to see the first answer to this question to get non-blocking reads from the subprocess.
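For reference, the usual pattern for such non-blocking reads is a drain thread plus a queue; here is a rough sketch (reusing proc from master.py above; the _drain helper name is mine):
import queue
import threading

def _drain(pipe, q):
    for line in iter(pipe.readline, b''):   # stops at EOF when the child exits
        q.put(line)
    pipe.close()

out_q = queue.Queue()
threading.Thread(target=_drain, args=(proc.stdout, out_q), daemon=True).start()

try:
    line = out_q.get(timeout=0.1)   # returns promptly whether or not output is ready
except queue.Empty:
    line = None                     # nothing available yet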
I don't think you have a better option than redirecting a subprocess to a file as you mentioned in your comment.
The way console stdin/out/err works in Windows is that each process, when it's created, has its standard handles defined. You can change them with SetStdHandle. When you modify Python's sys.stdout, you only change where Python prints things, not where other DLLs print. Part of the CRT in your DLL uses GetStdHandle to find out where to print. If you want, you can do whatever piping you want with the Windows API in your DLL, or in your Python script with pywin32. Though I do think it'll be simpler with subprocess.
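To illustrate that "simpler with subprocess" route, here is a minimal sketch (the child script name is made up): if the log file is installed as the child's stdout/stderr at process creation time, everything the child and any DLL it loads writes to those handles lands in the file.
import subprocess
import sys

with open("child_output.log", "w") as log:
    proc = subprocess.Popen(
        [sys.executable, "child_script.py"],   # hypothetical child script
        stdout=log,                            # child's stdout handle points at the file
        stderr=subprocess.STDOUT,              # merge stderr into the same file
    )
    proc.wait()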
Alternatively (and I know this might be slightly off-topic, but it helped in my case with the same problem), this can be resolved with screen on Linux:
screen -L -Logfile './logfile_%Y-%m-%d.log' python my_multiproc_script.py
This way there is no need to implement all the master-child communication.
I assume I'm off base and missing something, but for what it's worth here is what came to mind when I read your question.
If you can intercept all of the stdout and stderr (I got that impression from your question), then why not add or wrap that capture functionality around each of your processes? Then send what is captured through a queue to a consumer that can do whatever you want with all of the outputs?
In my situation I changed sys.stdout.write to write to a PySide QTextEdit. I couldn't read from sys.stdout and I didn't know how to change sys.stdout to be readable. I created two Pipes. One for stdout and the other for stderr. In the separate process I redirect sys.stdout and sys.stderr to the child connection of the multiprocessing pipe. On the main process I created two threads to read the stdout and stderr parent pipe and redirect the pipe data to sys.stdout and sys.stderr.
import sys
import contextlib
import threading
import multiprocessing as mp
import multiprocessing.queues
from queue import Empty
import time


class PipeProcess(mp.Process):
    """Process to pipe the output of the sub process and redirect it to this sys.stdout and sys.stderr.

    Note:
        The use_queue = True argument will pass data between processes using Queues instead of Pipes. Queues will
        give you the full output and read all of the data from the Queue. A pipe is more efficient, but may not
        redirect all of the output back to the main process.
    """
    def __init__(self, group=None, target=None, name=None, args=tuple(), kwargs={}, *_, daemon=None,
                 use_pipe=None, use_queue=None):
        self.read_out_th = None
        self.read_err_th = None
        self.pipe_target = target
        self.pipe_alive = mp.Event()

        if use_pipe or (use_pipe is None and not use_queue):  # Default
            self.parent_stdout, self.child_stdout = mp.Pipe(False)
            self.parent_stderr, self.child_stderr = mp.Pipe(False)
        else:
            self.parent_stdout = self.child_stdout = mp.Queue()
            self.parent_stderr = self.child_stderr = mp.Queue()

        args = (self.child_stdout, self.child_stderr, target) + tuple(args)
        target = self.run_pipe_out_target

        super(PipeProcess, self).__init__(group=group, target=target, name=name, args=args, kwargs=kwargs,
                                          daemon=daemon)

    def start(self):
        """Start the multiprocess and reading thread."""
        self.pipe_alive.set()
        super(PipeProcess, self).start()

        self.read_out_th = threading.Thread(target=self.read_pipe_out,
                                            args=(self.pipe_alive, self.parent_stdout, sys.stdout))
        self.read_err_th = threading.Thread(target=self.read_pipe_out,
                                            args=(self.pipe_alive, self.parent_stderr, sys.stderr))
        self.read_out_th.daemon = True
        self.read_err_th.daemon = True
        self.read_out_th.start()
        self.read_err_th.start()

    @classmethod
    def run_pipe_out_target(cls, pipe_stdout, pipe_stderr, pipe_target, *args, **kwargs):
        """The real multiprocessing target to redirect stdout and stderr to a pipe or queue."""
        sys.stdout.write = cls.redirect_write(pipe_stdout)  # , sys.__stdout__)  # Is redirected in main process
        sys.stderr.write = cls.redirect_write(pipe_stderr)  # , sys.__stderr__)  # Is redirected in main process

        pipe_target(*args, **kwargs)

    @staticmethod
    def redirect_write(child, out=None):
        """Create a function to write out a pipe and write out an additional out."""
        if isinstance(child, mp.queues.Queue):
            send = child.put
        else:
            send = child.send_bytes  # No need to pickle with child_conn.send(data)

        def write(data, *args):
            try:
                if isinstance(data, str):
                    data = data.encode('utf-8')

                send(data)
                if out is not None:
                    out.write(data)
            except:
                pass
        return write

    @classmethod
    def read_pipe_out(cls, pipe_alive, pipe_out, out):
        if isinstance(pipe_out, mp.queues.Queue):
            # Queue has better functionality to get all of the data
            def recv():
                return pipe_out.get(timeout=0.5)

            def is_alive():
                return pipe_alive.is_set() or pipe_out.qsize() > 0
        else:
            # Pipe is more efficient
            recv = pipe_out.recv_bytes  # No need to unpickle with data = pipe_out.recv()
            is_alive = pipe_alive.is_set

        # Loop through reading and redirecting data
        while is_alive():
            try:
                data = recv()
                if isinstance(data, bytes):
                    data = data.decode('utf-8')

                out.write(data)
            except EOFError:
                break
            except Empty:
                pass
            except:
                pass

    def join(self, *args):
        # Wait for process to finish (unless a timeout was given)
        super(PipeProcess, self).join(*args)

        # Trigger to stop the threads
        self.pipe_alive.clear()

        # Pipe must close to prevent blocking and waiting on recv forever
        if not isinstance(self.parent_stdout, mp.queues.Queue):
            with contextlib.suppress():
                self.parent_stdout.close()
            with contextlib.suppress():
                self.parent_stderr.close()

        # Close the pipes and threads
        with contextlib.suppress():
            self.read_out_th.join()
        with contextlib.suppress():
            self.read_err_th.join()


def run_long_print():
    for i in range(1000):
        print(i)
        print(i, file=sys.stderr)

    print('finished')


if __name__ == '__main__':
    # Example test write (My case was a QTextEdit)
    out = open('stdout.log', 'w')
    err = open('stderr.log', 'w')

    # Overwrite the write function and not the actual stdout object to prove this works
    sys.stdout.write = out.write
    sys.stderr.write = err.write

    # Create a process that uses pipes to read multiprocess output back into sys.stdout.write
    proc = PipeProcess(target=run_long_print, use_queue=True)  # If use_pipe=True Pipe may not write out all values
    # proc.daemon = True  # If daemon and use_queue not all output may be redirected to stdout

    proc.start()
    # time.sleep(5)  # Not needed unless use_pipe or daemon and all of stdout/stderr is desired

    # Close the process
    proc.join()  # For some odd reason this blocks forever when use_queue=False

    # Close the output files for this test
    out.close()
    err.close()
Here is a simple and straightforward way of capturing stdout for a multiprocessing.Process:
import app
import io
import sys
from multiprocessing import Process


def run_app(some_param):
    sys.stdout = io.TextIOWrapper(open(sys.stdout.fileno(), 'wb', 0), write_through=True)
    app.run()


app_process = Process(target=run_app, args=('some_param',))
app_process.start()
# Use app_process.terminate() for python <= 3.7.
app_process.kill()

Python update Database during multiprocessing

I am using multiprocessing to perform jobs in parallel. My goal is to use multiple CPU cores, which is why I chose the multiprocessing module instead of the threading module.
Now I have a method which uses the subprocess module to execute a Linux shell command; I need to filter its output and update the results to a DB.
The subprocess execution time may differ for each worker: for some inputs it may be 10 seconds, for others 15 seconds.
My concern is whether each worker will always get its own execution result, or whether results can get mixed up between workers and I have to go for a locking mechanism. If so, can you provide an example suitable for my requirement?
Below is the example code:
#!/usr/bin/env python
import json
from subprocess import check_output
import multiprocessing


class Test:

    # Convert bytes to UTF-8 string
    @staticmethod
    def bytes_to_string(string_convert):
        if not isinstance(string_convert, bytes) and isinstance(string_convert, str):
            return string_convert, True
        elif isinstance(string_convert, bytes):
            string_convert = string_convert.decode("utf-8")
        else:
            print("Passed in non-byte type to convert to string: {0}".format(string_convert))
            return "", False
        return string_convert, True

    # Execute commands in Linux shell
    @staticmethod
    def command_output(command):
        try:
            output = check_output(command)
        except Exception as e:
            return e, False

        output, state = Test.bytes_to_string(output)
        return output, True

    @staticmethod
    def run_multi(num):
        test_result, success = Test.command_output(["curl", "-sb", "-H", "Accept: application/json", "http://127.0.0.1:5500/stores"])
        out = json.loads(test_result)
        # Is it safe to update the database here, or do I need locks?


if __name__ == '__main__':
    test = Test()
    input_list = list(range(0, 1000))
    numberOfThreads = 100
    p = multiprocessing.Pool(numberOfThreads)
    p.map(test.run_multi, input_list)
    p.close()
    p.join()
Depends on what sort of updates you're doing in the database...
If it's a full database, it'll have its own locking mechanisms; you'll need to work with them, but other than that it's already designed to handle concurrent access.
For example, if the update involves inserting a row, you can just do that; the database will end up with all the rows, each exactly once.
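For example, a minimal sketch of the insert-a-row case; sqlite3 is used here purely for illustration (a server database such as PostgreSQL or MySQL handles many concurrent writers more gracefully), and the table and path names are made up:
import json
import multiprocessing
import sqlite3
from subprocess import check_output

DB_PATH = 'stores.db'   # hypothetical path


def fetch_and_store(num):
    raw = check_output(["curl", "-s", "-H", "Accept: application/json",
                        "http://127.0.0.1:5500/stores"])
    payload = raw.decode("utf-8")

    # One connection per worker process; the database's own locking handles
    # concurrent writers, so no explicit locks are needed in our code.
    conn = sqlite3.connect(DB_PATH, timeout=30)
    try:
        conn.execute("CREATE TABLE IF NOT EXISTS results (job INTEGER, payload TEXT)")
        conn.execute("INSERT INTO results (job, payload) VALUES (?, ?)", (num, payload))
        conn.commit()
    finally:
        conn.close()


if __name__ == '__main__':
    with multiprocessing.Pool(8) as pool:
        pool.map(fetch_and_store, range(100))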

Running into a Multithreading issue connecting to multiple devices at the same time

I define the main function with def get_info(); this function doesn't take any arguments. The program uses ArgumentParser to parse arguments from the command line; the argument provided is a CSV file, via the --csv option. It picks up the CSV file from the current directory, reads the lines (each containing an IP address), logs into the devices serially, runs a few commands, collects the output, and appends it to a text file. When the code runs, it removes the old text file from the directory and creates a new output text file.
Problem: I want to achieve this using the threading module so that it handles 5 devices in parallel and writes the output to a file. The problem I am running into is locking issues, as the same object is being used by the same process at the same time. Here is the sample code I have written. The threading concept is very new to me, so please bear with me.
import getpass
import csv
import time
import os
import netmiko
import paramiko
from argparse import ArgumentParser
from multiprocessing import Process, Queue


def get_ip(device_ip, output_q):
    try:
        ssh_session = netmiko.ConnectHandler(device_type='cisco_ios', ip=device_row['device_ip'],
                                             username=ssh_username, password=ssh_password)
        time.sleep(2)
        ssh_session.clear_buffer()
    except (netmiko.ssh_exception.NetMikoTimeoutException,
            netmiko.ssh_exception.NetMikoAuthenticationException,
            paramiko.ssh_exception.SSHException) as s_error:
        print(s_error)


def main():
    show_vlanfile = "pool.txt"
    if os.path.isfile(show_vlanfile):
        try:
            os.remove(show_vlanfile)
        except OSError as e:
            print("Error: %s - %s." % (e.filename, e.strerror))

    parser = ArgumentParser(description='Arguments for running oneLiner.py')
    parser.add_argument('-c', '--csv', required=True, action='store', help='Location of CSV file')
    args = parser.parse_args()

    ssh_username = input("SSH username: ")
    ssh_password = getpass.getpass('SSH Password: ')

    with open(args.csv, "r") as file:
        reader = csv.DictReader(file)
        output_q = Queue(maxsize=5)
        procs = []
        for device_row in reader:
            # print("+++++ {0} +++++".format(device_row['device_ip']))
            my_proc = Process(target=show_version_queue, args=(device_row, output_q))
            my_proc.start()
            procs.append(my_proc)

        # Make sure all processes have finished
        for a_proc in procs:
            a_proc.join()

    commands = ["terminal length 0", "terminal width 511", "show run | inc hostname",
                "show ip int brief | ex una", "show vlan brief", "terminal length 70"]
    output = ''
    for cmd in commands:
        output += "\n"
        output += ssh_session.send_command(cmd)
        output += "\n"

    with open("pool.txt", 'a') as outputfile:
        while not output_q.empty():
            output_queue = output_q.get()
            for x in output_queue:
                outputfile.write(x)


if __name__ == "__main__":
    main()
Somewhat different take...
I effectively run a main task and then just fire up a (limited) number of threads; they communicate via 2 data queues - basically "requests" and "responses".
Main task:
- dumps the requests into the request queue.
- fires up a number (i.e. 10 or so...) of worker tasks.
- sits on the "response" queue waiting for results. The results can be simple user info messages about status, error messages, or DATA responses to be written out to files.
- when all the threads finish, the program shuts down.
Workers basically:
- get a request. If none, just shut down.
- connect to the device.
- send a log message to the response queue that it's started.
- do what they have to do.
- put the result as DATA onto the response queue.
- close the connection to the device.
- loop back to the start.
This way you don't inadvertently flood the processing host, as you have a limited number of concurrent threads going, all doing exactly the same thing in their own sweet time, until there's nothing left to do.
Note that you DON'T do any screen/file IO in the threads, as it will get jumbled with the different tasks running at the same time. Each thread essentially only sees the input queue, the output queue, and the Netmiko sessions that get cycled through.
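A rough sketch of that request/response layout follows; collect_output() is a hypothetical stand-in for the Netmiko connect/send/disconnect work, and the device dictionaries are assumed to carry an 'ip' key:
import queue
import threading

NUM_WORKERS = 10
request_q = queue.Queue()
response_q = queue.Queue()


def worker():
    while True:
        try:
            device = request_q.get_nowait()        # get a request; if none, shut down
        except queue.Empty:
            response_q.put(('DONE', None))
            return
        response_q.put(('LOG', 'started %s' % device['ip']))
        try:
            output = collect_output(device)        # hypothetical: connect, run commands, disconnect
            response_q.put(('DATA', output))
        except Exception as exc:
            response_q.put(('ERROR', '%s: %s' % (device['ip'], exc)))


def main(devices):
    for dev in devices:
        request_q.put(dev)
    for _ in range(NUM_WORKERS):
        threading.Thread(target=worker, daemon=True).start()

    finished = 0
    with open('pool.txt', 'a') as outputfile:      # only the main thread touches the file
        while finished < NUM_WORKERS:
            kind, payload = response_q.get()
            if kind == 'DONE':
                finished += 1
            elif kind == 'DATA':
                outputfile.write(payload)
            else:
                print(kind, payload)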
It looks like you have code that is from a Django example:
def main():
    '''
    Use threads and Netmiko to connect to each of the devices in the database.
    '''
    devices = NetworkDevice.objects.all()
You should move the argument parsing into the main thread. You should read the CSV file in the main thread. You should have each child thread be the Netmiko-SSH connection.
I say this because your current solution has all of the SSH connections happening in one thread, which is not what you intended.
At a high-level main() should have argument parsing, delete old output file, obtain username/password (assuming these are the same for all the devices), loop over CSV file obtaining the IP address for each device.
Once you have the IP address, then you create a thread, the thread uses Netmiko-SSH to connect to each device and retrieve your output. I would then use a Queue to pass back the output from each device (back to the main thread).
Then back in the main thread, you would write all of the output to a single file.
It would look a bit like this:
https://github.com/ktbyers/netmiko/blob/develop/examples/use_cases/case16_concurrency/threads_netmiko.py
Here is an example using a queue (with multiprocessing) though you can probably adapt this using a thread-Queue pretty easily.
https://github.com/ktbyers/netmiko/blob/develop/examples/use_cases/case16_concurrency/processes_netmiko_queue.py

Python, How to break out of multiple threads

I am following one of the examples in a book I am reading ("Violent Python"). It is to create a zip file password cracker from a dictionary. I have two questions about it. First, it says to thread it as I have written in the code to increase performance, but when I timed it (I know time.time() is not great for timing) there was about a twelve second difference in favor of not threading. Is this because it is taking longer to start the threads? Second, if I do it without the threads I can break as soon as the correct value is found by printing the result and then calling exit(0). Is there a way to get the same result using threading, so that if I find the result I am looking for I can end all other threads simultaneously?
import zipfile
from threading import Thread
import time


def extractFile(z, password, starttime):
    try:
        z.extractall(pwd=password)
    except:
        pass
    else:
        z.close()
        print('PWD IS ' + password)
        print(str(time.time()-starttime))


def main():
    start = time.time()
    z = zipfile.ZipFile('test.zip')
    pwdfile = open('words.txt')
    pwds = pwdfile.read()
    pwdfile.close()
    for pwd in pwds.splitlines():
        t = Thread(target=extractFile, args=(z, pwd, start))
        t.start()
        #extractFile(z, pwd, start)
    print(str(time.time()-start))


if __name__ == '__main__':
    main()
In CPython, the Global Interpreter Lock ("GIL") enforces the restriction that only one thread at a time can execute Python bytecode.
So in this application, it is probably better to use the map method of a multiprocessing.Pool, since every try is independent of the others:
import multiprocessing
import zipfile


def tryfile(password):
    rv = password
    with zipfile.ZipFile('test.zip') as z:
        try:
            z.extractall(pwd=password)
        except:
            rv = None
    return rv


with open('words.txt') as pwdfile:
    data = pwdfile.read()
pwds = data.split()

p = multiprocessing.Pool()
results = p.map(tryfile, pwds)
results = [r for r in results if r is not None]
This will start (by default) as many processes as your computer has cores. It will keep running tryfile() with different passwords in these processes until the list pwds is exhausted, gather the results, and return them. The last list comprehension discards the None results.
Note that this code could be improved to shut down the map once the password is found. You'd probably have to use map_async and a shared variable in that case. It would also be nice to load the zip file only once and share it.
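As a variation on that early-stop idea (a sketch, not the book's code): imap_unordered yields results as workers finish, so the main process can terminate the pool as soon as one password works.
import multiprocessing
import zipfile


def tryfile(password):
    try:
        with zipfile.ZipFile('test.zip') as z:
            z.extractall(pwd=password.encode('utf-8'))   # extractall expects bytes
        return password                                  # success: this password opened the archive
    except Exception:
        return None                                      # wrong password (or any other failure)


if __name__ == '__main__':
    with open('words.txt') as pwdfile:
        pwds = pwdfile.read().split()

    found = None
    with multiprocessing.Pool() as pool:
        for result in pool.imap_unordered(tryfile, pwds):
            if result is not None:
                found = result
                pool.terminate()     # stop the remaining workers early
                break
    print('password:', found)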
This code is slow because Python has a Global Interpreter Lock, which means only one thread can execute Python bytecode at a time. This often makes CPU-bound multithreaded code run slower than serial code in Python. If you want a truly parallel application, you'd have to use the multiprocessing module.
To break out of the threads and get the return value, you can use os._exit(1). First, import the os module at the top of your file:
import os
Then, change your extractFile function to use os._exit(1):
def extractFile(z, password, starttime):
    try:
        z.extractall(pwd=password)
    except:
        pass
    else:
        z.close()
        print('PWD IS ' + password)
        print(str(time.time()-starttime))
        os._exit(1)

python os.mkfifo() for Windows

Short version (if you can answer the short version it does the job for me, the rest is mainly for the benefit of other people with a similar task):
In Python on Windows, I want to create 2 file objects, attached to the same file (it doesn't have to be an actual file on the hard drive), one for reading and one for writing, such that if the reading end tries to read it will never get EOF (it will just block until something is written). I think on Linux os.mkfifo() would do the job, but in Windows it doesn't exist. What can be done? (I must use file objects.)
Some extra details:
I have a python module (not written by me) that plays a certain game through stdin and stdout (using raw_input() and print). I also have a Windows executable playing the same game, through stdin and stdout as well. I want to make them play one against the other, and log all their communication.
Here's the code I can write (the get_fifo() function is not implemented, because that's the part I don't know how to do on Windows):
class Pusher(Thread):
    def __init__(self, source, dest, p1, name):
        Thread.__init__(self)
        self.source = source
        self.dest = dest
        self.name = name
        self.p1 = p1

    def run(self):
        while (self.p1.poll() is None) and\
              (not self.source.closed) and (not self.dest.closed):
            line = self.source.readline()
            logging.info('%s: %s' % (self.name, line[:-1]))
            self.dest.write(line)
            self.dest.flush()


exe_to_pythonmodule_reader, exe_to_pythonmodule_writer =\
    get_fifo()
pythonmodule_to_exe_reader, pythonmodule_to_exe_writer =\
    get_fifo()

p1 = subprocess.Popen(exe, shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

old_stdin = sys.stdin
old_stdout = sys.stdout
sys.stdin = exe_to_pythonmodule_reader
sys.stdout = pythonmodule_to_exe_writer

push1 = Pusher(p1.stdout, exe_to_pythonmodule_writer, p1, '1')
push2 = Pusher(pythonmodule_to_exe_reader, p1.stdin, p1, '2')
push1.start()
push2.start()

ret = pythonmodule.play()

sys.stdin = old_stdin
sys.stdout = old_stdout
Following the two answers above, I accidentally bumped into the answer. os.pipe() does the job. Thank you for your answers.
I'm posting the complete code in case someone else is looking for this:
import subprocess
from threading import Thread
import time
import sys
import logging
import tempfile
import os

import game_playing_module


class Pusher(Thread):
    def __init__(self, source, dest, proc, name):
        Thread.__init__(self)
        self.source = source
        self.dest = dest
        self.name = name
        self.proc = proc

    def run(self):
        while (self.proc.poll() is None) and\
              (not self.source.closed) and (not self.dest.closed):
            line = self.source.readline()
            logging.info('%s: %s' % (self.name, line[:-1]))
            self.dest.write(line)
            self.dest.flush()


def get_reader_writer():
    fd_read, fd_write = os.pipe()
    return os.fdopen(fd_read, 'r'), os.fdopen(fd_write, 'w')


def connect(exe):
    logging.basicConfig(level=logging.DEBUG,
                        format='%(message)s',
                        filename=LOG_FILE_NAME,
                        filemode='w')

    program_to_grader_reader, program_to_grader_writer =\
        get_reader_writer()
    grader_to_program_reader, grader_to_program_writer =\
        get_reader_writer()

    p1 = subprocess.Popen(exe, shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    old_stdin = sys.stdin
    old_stdout = sys.stdout
    sys.stdin = program_to_grader_reader
    sys.stdout = grader_to_program_writer

    push1 = Pusher(p1.stdout, program_to_grader_writer, p1, '1')
    push2 = Pusher(grader_to_program_reader, p1.stdin, p1, '2')
    push1.start()
    push2.start()

    game_playing_module.play()

    sys.stdin = old_stdin
    sys.stdout = old_stdout

    fil = open(LOG_FILE_NAME, 'r')
    data = fil.read()
    fil.close()
    return data


if __name__ == '__main__':
    if len(sys.argv) != 2:
        print('Usage: connect.py exe')
        print(sys.argv)
        exit()
    print(sys.argv)
    print(connect(sys.argv[1]))
On Windows, you are looking at (Named or Anonymous) Pipes.
A pipe is a section of shared memory that processes use for communication. The process that creates a pipe is the pipe server. A process that connects to a pipe is a pipe client. One process writes information to the pipe, then the other process reads the information from the pipe.
To work with Windows pipes, you can use Python for Windows extensions (pywin32), or the ctypes module. A special utility module, win32pipe, provides an interface to the win32 pipe APIs. It includes implementations of the popen[234]() convenience functions.
See how-to-use-win32-apis-with-python and similar SO questions (not specific to Pipes, but points to useful info).
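For a flavour of that API, here is a rough sketch from memory of serving a named pipe with win32pipe/win32file (the pipe name is made up; double-check argument order and constants against the pywin32 documentation):
import win32pipe
import win32file

PIPE_NAME = r'\\.\pipe\my_demo_pipe'   # hypothetical pipe name

# Create the server end of the pipe and wait for a client to connect.
handle = win32pipe.CreateNamedPipe(
    PIPE_NAME,
    win32pipe.PIPE_ACCESS_DUPLEX,
    win32pipe.PIPE_TYPE_MESSAGE | win32pipe.PIPE_READMODE_MESSAGE | win32pipe.PIPE_WAIT,
    1, 65536, 65536, 0, None)
win32pipe.ConnectNamedPipe(handle, None)

# Exchange some data, then close the handle.
win32file.WriteFile(handle, b'hello from the pipe server')
hr, data = win32file.ReadFile(handle, 65536)
win32file.CloseHandle(handle)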
For a cross-platform solution, I'd recommend building the file-like object on top of a socket on localhost (127.0.0.1) -- that's what IDLE does by default to solve a problem that's quite similar to yours.
os.pipe() returns an anonymous pipe, or a named pipe on Windows, which is very lightweight and efficient.
TCP sockets (as suggested by user1495323) are more heavyweight: you can see them with netstat for example, and each one requires a port number, and the number of available ports is limited to 64k per peer (e.g. 64k from localhost to localhost).
On the other hand, named pipes (on Windows) are limited because:
- You can't use select() for nonblocking I/O on Windows, because they're not sockets.
- There's no apparent way to read() with a timeout, and
- even making them non-blocking is difficult.
And sockets can be wrapped in Python-compatible filehandles using makefile(), which allows them to be used to redirect stdout or stderr. This makes this an attractive option for some use cases, such as sending stdout from one thread to another.
A socket can be constructed with an automatically-assigned port number like this (based on the excellent Python socket HOWTO):
import socket
from contextlib import closing

with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as input_socket:
    # Avoid socket exhaustion by setting SO_REUSEADDR <https://stackoverflow.com/a/12362623/648162>:
    input_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

    # localhost doesn't work if the definition is missing from the hosts file,
    # and 127.0.0.1 only works with IPv4 loopback, but socket.gethostname()
    # should always work:
    input_socket.bind((socket.gethostname(), 0))
    random_port_number = input_socket.getsockname()[1]
    input_socket.listen(1)

    # Do something with input_socket, for example pass it to another thread.

    output_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # close() should not strictly be necessary here, but since connect() could fail,
    # it avoids leaking fds in that case. "If a file descriptor is given, it is
    # closed when the returned I/O object is closed".
    with output_socket:
        output_socket.connect((socket.gethostname(), random_port_number))
The user of input_socket (e.g. another thread) can then do:
import select

with input_socket:
    while True:
        readables, _, _ = select.select([input_socket], [], [input_socket], 1.0)
        if len(readables) > 0:
            input_conn, addr = input_socket.accept()
            break

with input_conn:
    while True:
        readables, _, errored = select.select([input_conn], [], [input_conn], 1.0)
        if len(errored) > 0:
            print("connection errored, stopping")
            break
        if len(readables) > 0:
            read_data = input_conn.recv(1024)
            if len(read_data) == 0:
                print("connection closed, stopping")
                break
            else:
                print(f"read data: {read_data!r}")
