fifo - reading in a loop - python

I want to use os.mkfifo for simple communication between programs. I have a problem with reading from the fifo in a loop.
Consider this toy example, where I have a reader and a writer working with the fifo. I want to be able to run the reader in a loop to read everything that enters the fifo.
# reader.py
import os
import atexit

FIFO = 'json.fifo'

@atexit.register
def cleanup():
    try:
        os.unlink(FIFO)
    except OSError:
        pass

def main():
    os.mkfifo(FIFO)
    with open(FIFO) as fifo:
        # for line in fifo:              # closes after a single read
        # for line in fifo.readlines():  # closes after a single read
        while True:
            line = fifo.read()  # will return empty strings (non-blocking)
            print repr(line)

main()
And the writer:
# writer.py
import sys

FIFO = 'json.fifo'

def main():
    with open(FIFO, 'a') as fifo:
        fifo.write(sys.argv[1])

main()
If I run python reader.py and later python writer.py foo, "foo" will be printed, but the fifo will be closed and the reader will exit (or spin inside the while loop). I want the reader to stay in the loop, so I can execute the writer many times.
Edit
I use this snippet to handle the issue:
def read_fifo(filename):
    while True:
        with open(filename) as fifo:
            yield fifo.read()
but maybe there is some neater way to handle it, instead of repeatedly opening the file...
Related
Getting readline to block on a FIFO

You do not need to reopen the file repeatedly. You can use select to block until data is available.
import select

with open(FIFO_PATH) as fifo:
    while True:
        select.select([fifo], [], [fifo])
        data = fifo.read()
        do_work(data)
In this example you won't read EOF.
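If every writer does close, though, select will keep reporting the descriptor as readable while read() returns empty strings, so one way to combine this with the reopening approach from the question is the following sketch (reusing the json.fifo path):
import select

FIFO = 'json.fifo'  # path from the question

while True:
    with open(FIFO) as fifo:
        while True:
            select.select([fifo], [], [fifo])
            data = fifo.read()
            if data == '':   # all writers have closed: EOF
                break        # reopen and block until the next writer
            print(repr(data))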

A FIFO works (on the reader side) exactly this way: it can be read from, until all writers are gone. Then it signals EOF to the reader.
If you want the reader to continue reading, you'll have to open again and read from there. So your snippet is exactly the way to go.
If you have multiple writers, you'll have to ensure that each data portion written by them is smaller than PIPE_BUF in order not to mix up the messages.
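For instance, here is a sketch of a writer that enforces that limit (assuming newline-framed messages and the json.fifo path from the question; select.PIPE_BUF exposes the guaranteed-atomic write size, 4096 bytes on Linux and at least 512 per POSIX):
import select

def send(fifo, message):
    # frame each message with a newline and keep it under PIPE_BUF;
    # POSIX guarantees writes of at most PIPE_BUF bytes are atomic,
    # so concurrent writers cannot interleave partial messages
    data = (message + '\n').encode()
    assert len(data) <= select.PIPE_BUF
    fifo.write(data)
    fifo.flush()

# usage sketch:
# with open('json.fifo', 'wb') as fifo:
#     send(fifo, '{"msg": "hello"}')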

The following methods on the standard library's pathlib.Path class are helpful here:
Path.is_fifo()
Path.read_text/Path.read_bytes
Path.write_text/Path.write_bytes
Here is a demo:
# reader.py
import os
from pathlib import Path
fifo_path = Path("fifo")
os.mkfifo(fifo_path)
while True:
    print(fifo_path.read_text())  # blocks until data becomes available
# writer.py
import sys
from pathlib import Path
fifo_path = Path("fifo")
assert fifo_path.is_fifo()
fifo_path.write_text(sys.argv[1])
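Note that each read_text() call opens the FIFO, blocks until a writer opens it, and returns everything written up to the point the writer closes its end, so one loop iteration corresponds to one writer session.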

Related

How to read and write the same file at a time in python

There are three Python programs: a writer program (writer.py) writes to the file output.txt, and two reader programs (reader_1.py, reader_2.py) read from the same output.txt file at the same time.
What is the best way to achieve synchronization between these three programs?
How can the readers avoid reading while the writer is writing to the output file?
How can the single-writer/multiple-readers problem be handled efficiently in Python?
I tried to implement the fcntl locking mechanism, but the module was not found in my Python.
writer.py
#!/usr/bin/python
import subprocess
import time

cycle = 10
cmd = "ls -lrt"

def poll():
    with open("/home/output.txt", 'a') as fobj:
        fobj.seek(0)
        fobj.truncate()
        try:
            subprocess.Popen(cmd, shell=True, stdout=fobj)
        except Exception:
            print "Exception Occured"

# Poll the Data
def do_poll():
    count = int(time.time())
    while True:
        looptime = int(time.time())
        if (looptime - count) >= cycle:
            count = int(time.time())
            print('Begin polling cycle')
            poll()
            print('End polling cycle')

def main():
    do_poll()

if __name__ == "__main__":
    main()
reader_1.py
#!/usr/bin/python
with open("/home/output10.txt", 'r') as fobj:
    f = fobj.read()
    print f
reader_2.py
#!/usr/bin/python
with open("/home/output10.txt", 'r') as fobj:
    f = fobj.read()
    print f
Note: reader_1.py and reader_2.py run continuously in while loops.
For this reason, the same file is accessed by three programs at the same time.
Looking for ideas.
Solution #1: I added an fcntl locking mechanism to the writer.py program, but I am not sure it locks the file efficiently.
#!/usr/bin/python
import subprocess
import time
import os
import fcntl, os

report_cycle = 2
cmd = 'ls -lrt'

def poll(devnull):
    with open("/home/output10.txt", 'a') as fobj:
        try:
            fcntl.flock(fobj, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except IOError:
            print "flock() failed to hold an exclusive lock."
        fobj.seek(0)
        fobj.truncate()
        try:
            subprocess.call(cmd, shell=True, stdout=fobj, stderr=devnull)
        except Exception:
            print "Exception Occured"
        # Unlock file
        try:
            fcntl.flock(fobj, fcntl.LOCK_UN)
        except:
            print "flock() failed to unlock file."

How to change stdin in jupyter notebook?

I would like to assign a file to sys.stdin so that I can read its contents with input(). The code below runs as expected as a script, but it is problematic in a notebook: after calling input() it shows me an input textbox, which I do not want, since I reassigned stdin to a file. I expect the line in the file to be read instead.
import sys
file = open("input.in")
sys.stdin = file
val = input()
print(val)
It seems to me the notebook ignores sys.stdin. I couldn't find out why this happens or how to fix it.
Thanks.
Update
I ended up overriding the input function. It does the job, but I leave the question open in case someone has a better solution.
file = open("input.in")
input = file.readline
You can read the file into an in-memory buffer and then direct stdin to read from that. For example, to redisplay a file:
import sys
import io # in python2, import StringIO
input_file = open('myfile.txt', 'r')
sys.stdin = io.StringIO(input_file.read())
for line in sys.stdin:
    print(line, end='')
For your purpose, you may want
import sys
import io # in python2, import StringIO
input_file = open('myfile.txt', 'r')
sys.stdin = io.StringIO(input_file.read())
val = sys.stdin.readline()
# Rest of program using val

How to not write to file while reading and vice versa

I have a python program (say reader.py) which uses the file settings.py to read from:
while True:
    ...
    execfile('settings.py')
    ...
But there is another python program (say writer.py) that writes to this file:
...
try:
    settings = open('settings.py', 'w')
    settings.truncate()
    settings.write('some text')
except IOError:
    print('Cannot write to file')
finally:
    settings.close()
...
Note1: reader.py and writer.py do not "know" about each other.
Note2: reader.py reads settings.py cyclically, while writer.py writes to the file whenever the user wants to (not necessarily right after clicking "write"; there is no rule about when the write happens).
Question: What is the best way to make the two programs cooperate in order to avoid any contradiction? I know this might depend on the platform. I am using Linux (Ubuntu, Scientific Linux).
EDIT1: If I choose to use a FIFO I encounter the following problem: once the writer has written to the settings file, it may never write again, but the reader should still have access to the settings in that case. In other words, the reader should be able to read from the file without waiting for the writer; otherwise the reader has to wait for the writer.
Ordinary use of a FIFO does not allow the reader to read from the file until the writer writes. How can I deal with this problem?
You may be interested in using a named pipe for your interprocess communication. Available on Linux, it is a special type of file designed for client (writer.py) / server (reader.py) tasks. After writing to the pipe, the client will wait until the server has received the data. This allows you to sync the two processes somewhat.
Linux Manual for FiFo
Python doc: os.mkfifo(path[, mode])
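A minimal sketch of that arrangement (settings.fifo is a hypothetical pipe name; note that open() blocks until the other side opens the pipe, which is exactly the limitation raised in EDIT1):
import os

PATH = 'settings.fifo'  # hypothetical pipe name

# reader side
if not os.path.exists(PATH):
    os.mkfifo(PATH)
while True:
    with open(PATH) as fifo:    # blocks until a writer opens the pipe
        settings = fifo.read()  # one writer session's worth of data
    # ... apply the new settings ...

# writer side (a separate process):
# with open(PATH, 'w') as fifo:  # blocks until a reader opens the pipe
#     fifo.write('some text')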
I found the following solution which seems to be working. I use flock to create locks.
Reader:
import errno
import fcntl
from time import *

path = "testLock.py"
f = open(path, "r")
while True:
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        break
    except IOError as e:
        if e.errno != errno.EAGAIN:
            raise
        else:
            sleep(1)
            print 'Waiting...'

# reader's action
execfile(path)
# drop lock
fcntl.flock(f, fcntl.LOCK_UN)
Writer:
import errno
import fcntl
from time import *

path = "testLock.py"
f = open(path, "w")
while True:
    try:
        fcntl.flock(f, fcntl.LOCK_SH | fcntl.LOCK_NB)
        break
    except IOError as e:
        if e.errno != errno.EAGAIN:
            raise
        else:
            sleep(1)
            print 'Waiting...'

# writer's action
for i in (1, 10, 2):
    f.write('print "%d"' % i)
    sleep(1)
# drop lock
fcntl.flock(f, fcntl.LOCK_UN)
I have some questions here:
Question 1: Is this the correct usage of LOCK_EX and LOCK_SH? I mean, are they in the right places?
Question 2: Is the reader's action, i.e. execfile, correct here? If the file is already open, does execfile try to open it anyway?

send string between python scripts

I want to send 'hello world' to a Python script that is already running on Ubuntu.
The script that's always running is this one (part of it):
print("$ echo 'foobar' > {0}".format(get_ttyname()))
print("$ echo 'foobar' > /proc/{0}/fd/0".format(os.getpid()))
sys.stdin.readline()
It prints the pid of the running process so I can send stuff to it from a console with:
echo 'hello script!' > /proc/PID/fd/0
It will print it in the console! But I can't send \x15 or EOF or anything to break sys.stdin.readline() and do some other stuff in my script, for example:
def f(s):
    print 'we already read:', s

while True:
    s = sys.stdin.readline()
    print 'we break the readline'
    f(s)
    # ...blablabla some other stuff, and then we return to the top of the while to keep reading...
Does anyone know how to do it? The script that sends the string will not always be running, but the script that receives the info will always be running.
PROBLEM SOLVED!
Thanks to Rafael, this is the solution:
Reader:
import os
import sys

path = "/tmp/my_program.fifo"
try:
    os.mkfifo(path)
except OSError:
    pass

fifo = open(path, "r")
while True:
    for line in fifo:
        linea = line
        print "Received: " + linea,
    fifo.close()
    if linea == 'quit':
        break
    fifo = open(path, "r")
Sender:
# -*- coding: utf-8 -*-
import os
path = "/tmp/my_program.fifo"
fifo = open(path, "w")
fifo.write("Hello Wordl!!\n")
fifo.close()
Since you obviously don't have a problem with being limited to a Unix system, you can use named pipes to communicate with the program. It's a very Unix-y way to work.
Python provides the os.mkfifo function to ease creating named pipes; otherwise they work just like files.
Write to a text file that is read by the already running program. The two can interact via this file. For example, these two programs simultaneously read and write to an initially empty text file.
already.py
# executed 1st
import time

while True:
    text = open('file.txt').read()
    print 'File contents: ' + text
    time.sleep(5)
program.py
# executed 2nd
import time

while True:
    text = open('file.txt', 'a')
    text.write(raw_input('Enter data: '))
    text.close()
    time.sleep(5)
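One caveat with plain files is that the reader can observe a half-written file. A common remedy (my suggestion, not part of the answer above) is to write to a temporary file and rename it over the target; on POSIX, os.rename replaces the destination atomically, so readers see either the old or the new contents, never a partial write. A sketch:
import os
import tempfile

def write_atomically(path, data):
    # write to a temp file in the same directory, then rename it
    # over the target; the rename is atomic on POSIX filesystems
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
    with os.fdopen(fd, 'w') as f:
        f.write(data)
    os.rename(tmp, path)

write_atomically('file.txt', 'new contents\n')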

how to create log files for test execution

I am trying to create a test controller and want the output of test execution to be collected in a file.
I know about using tee and redirecting the test script's output to a certain file, but I am interested in doing it with Python on Linux.
So, in this case, whenever a test is executed, the log file should get created, and all the execution logs, including stdin, stdout and stderr, should get collected in this file.
Could somebody suggest how to implement this kind of idea?
Thanks
There are several good logging modules, starting with the built-in logging; here is the official cookbook. Among the more interesting third-party libraries is Logbook; here is a pretty bare example, just scratching the surface of its very cool features:
import logbook

def f(i, j):
    return i + j

logger = logbook.Logger('my application logger')
log = logbook.FileHandler('so.log')
log.push_application()

try:
    f(1, '2')
    logger.info('called ' + f.__name__)
except:
    logger.warn('failed on ')

try:
    f(1, 2)
    logger.info('called ' + f.__name__)
except:
    logger.warn('choked on, ')
so.log then looks like this:
[2011-05-19 07:40] WARNING: my application logger: failed on
[2011-05-19 07:40] INFO: my application logger: called f
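For comparison, a minimal sketch with the built-in logging module mentioned above (writing to the same so.log, with a format approximating Logbook's):
import logging

logging.basicConfig(filename='so.log', level=logging.INFO,
                    format='[%(asctime)s] %(levelname)s: %(name)s: %(message)s')
logger = logging.getLogger('my application logger')

logger.info('called f')
logger.warning('failed on ')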
Try this:
import sys
# Save the current stream
save_out = sys.stdout
# Define the log file
f = "a_log_file.log"
# Append to existing log file.
# Change 'a' to 'w' to recreate the log file each time.
fsock = open(f, 'a')
# Set stream to file
sys.stdout = fsock
###
# do something here
# any print function calls will send the stream to file f
###
# Reset back the stream to what it was
# any print function calls will send the stream to the previous stream
sys.stdout = save_out
fsock.close()
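On Python 3.4+, contextlib.redirect_stdout wraps the same save-and-restore dance in a context manager; a sketch, reusing a_log_file.log from above:
import contextlib

with open('a_log_file.log', 'a') as fsock, contextlib.redirect_stdout(fsock):
    print('this print call goes to the log file')
# sys.stdout is restored automatically here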
Open and write to a file:
mylogfile = 'bla.log'
f = open(mylogfile, 'a')
f.write('i am logging! logging logging!....loggin? timber!....')
f.close()
Look in the root directory of the script for 'bla.log', read it, and enjoy.
You can write a function like this:
def writeInLog(msg):
    with open("log", "a") as f:
        f.write(msg + "\n")
It will open the file "log", and append ("a") the message followed by a newline, then close the file.
