How to avoid writing to a file while reading it, and vice versa - python

I have a Python program (say reader.py) which reads from the file settings.py:
while True:
    ...
    execfile('settings.py')
    ...
But there is another Python program (say writer.py) that writes to this file:
...
settings = None
try:
    settings = open('settings.py', 'w')  # mode 'w' already truncates the file
    settings.write('some text')
except IOError:
    print('Cannot write to file')
finally:
    if settings:
        settings.close()
...
Note 1: reader.py and writer.py do not "know" about each other.
Note 2: reader.py reads settings.py cyclically, while writer.py writes to the file only when the user wants to (not necessarily right after clicking "write"; there is no rule about when writes happen).
Question: What is the best way to make the two programs cooperate so as to avoid any conflict? I know this might depend on the platform. I am using Linux; the distributions are Ubuntu and Scientific Linux.
EDIT 1: If I choose to use a FIFO I encounter the following problem: once the writer has written to the settings file, it will probably never write again, but the reader should still have access to the settings. In other words, in that case the reader should be able to read from the file without waiting for the writer; otherwise the reader has to wait for the writer.
Ordinary use of a FIFO does not allow the reader to read from the file until the writer has written. How can I deal with this problem?

You may be interested in using a named pipe for your interprocess communication. Available on Linux, it is a special type of file designed for client (writer.py) / server (reader.py) tasks. After writing to the pipe, the client will block until the server has received the data. This lets you synchronize the two processes somewhat.
Linux manual page for fifo(7)
Python doc: os.mkfifo(path[, mode])
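A minimal sketch of the idea (the FIFO path and payload are illustrative, not from the question):
# reader side: create the FIFO once, then block until a writer appears
import os

FIFO = 'settings.fifo'  # illustrative name
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)
with open(FIFO) as fifo:  # open() blocks until a writer opens the other end
    data = fifo.read()    # returns once the writer closes its end
    print(data)

# writer side (a separate process):
#   with open(FIFO, 'w') as fifo:
#       fifo.write('some text')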

I found the following solution, which seems to work. I use flock to create locks.
Reader:
import errno
import fcntl
from time import sleep

path = "testLock.py"
f = open(path, "r")
while True:
    try:
        fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
        break
    except IOError as e:
        if e.errno != errno.EAGAIN:
            raise
        else:
            sleep(1)
            print 'Waiting...'
# reader's action
execfile(path)
# drop lock
fcntl.flock(f, fcntl.LOCK_UN)
Writer:
import errno
import fcntl
from time import sleep

path = "testLock.py"
f = open(path, "w")
while True:
    try:
        fcntl.flock(f, fcntl.LOCK_SH | fcntl.LOCK_NB)
        break
    except IOError as e:
        if e.errno != errno.EAGAIN:
            raise
        else:
            sleep(1)
            print 'Waiting...'
# writer's action
for i in range(1, 10, 2):
    f.write('print "%d"\n' % i)
    sleep(1)
# drop lock
fcntl.flock(f, fcntl.LOCK_UN)
I have some questions here:
Question 1: Is this the correct usage of LOCK_EX and LOCK_SH, i.e. are they in the right places?
Question 2: Is the reader's action, i.e. execfile, correct here? If the file is already open, does execfile try to open it anyway?
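For reference, the conventional arrangement is the reverse of the code above: the writer takes the exclusive lock (so no reader sees a half-written file) and each reader takes a shared lock (so multiple readers can run concurrently). A minimal sketch:
# writer side: exclusive lock while modifying the file
fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)

# reader side: shared lock while reading; readers do not block each other
fcntl.flock(f, fcntl.LOCK_SH | fcntl.LOCK_NB)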

Related

How to read and write the same file at the same time in python

There are three Python programs: a writer program (writer.py) writes to the file output.txt, and two reader programs (reader_1.py, reader_2.py) read from the same output.txt file at the same time.
What is the best way to achieve synchronization between these three programs?
How can the readers avoid reading while the other program is writing to the output file?
How can the single-writer, multiple-readers problem be handled efficiently in Python?
I tried to implement the fcntl locking mechanism, but the module was not found in my Python.
writer.py
#!/usr/bin/python
import subprocess
import time

cycle = 10
cmd = "ls -lrt"

def poll():
    with open("/home/output.txt", 'a') as fobj:
        fobj.seek(0)
        fobj.truncate()
        try:
            subprocess.Popen(cmd, shell=True, stdout=fobj)
        except Exception:
            print "Exception occurred"

# Poll the data
def do_poll():
    count = int(time.time())
    while True:
        looptime = int(time.time())
        if (looptime - count) >= cycle:
            count = int(time.time())
            print('Begin polling cycle')
            poll()
            print('End polling cycle')

def main():
    do_poll()

if __name__ == "__main__":
    main()
reader_1.py
#!/usr/bin/python
with open("/home/output.txt", 'r') as fobj:
    f = fobj.read()
    print f
reader_2.py
#!/usr/bin/python
with open("/home/output.txt", 'r') as fobj:
    f = fobj.read()
    print f
Note: reader_1.py and reader_2.py run continuously in while loops, so the same file is accessed by all three programs at the same time.
Looking for ideas.
Solution #1: I added the fcntl locking mechanism to the writer.py program, but I am not sure it locks the file effectively.
#!/usr/bin/python
import subprocess
import time
import fcntl

report_cycle = 2
cmd = 'ls -lrt'

def poll(devnull):
    with open("/home/output.txt", 'a') as fobj:
        try:
            fcntl.flock(fobj, fcntl.LOCK_EX | fcntl.LOCK_NB)
        except IOError:
            print "flock() failed to hold an exclusive lock."
        fobj.seek(0)
        fobj.truncate()
        try:
            subprocess.call(cmd, shell=True, stdout=fobj, stderr=devnull)
        except Exception:
            print "Exception occurred"
        # Unlock file
        try:
            fcntl.flock(fobj, fcntl.LOCK_UN)
        except IOError:
            print "flock() failed to unlock file."

I need to prevent a python script from running twice

I need to prevent a python script from running more than once. So far I have:
import sys
import fcntl

def lockFile(lockfile):
    fp = open(lockfile, 'w')
    try:
        fcntl.flock(fp, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except IOError:
        return False
    return True

if not lockFile("myfile.lock"):
    sys.exit(0)
The problem is that sys.exit() never gets called, even if the file is there. Maybe this is a platform-dependent way of doing things? I just need to write a lock file, check for its existence, and if it's not there or is stale, create a new one. Ideas?
Opening a file for writing will create it if none exists; you could instead try to read the file first: if there is none, an error is raised and the file is written; if there is a file, the program exits.
import sys

try:
    with open('lockfile.txt', 'r') as f:
        lock = f.readline().strip().split()
        if lock and lock[0] == 'locked':
            print('exiting')
            sys.exit(0)
except FileNotFoundError:
    with open('lockfile.txt', 'w') as f:
        f.write('locked')
    print('file written')
If you need something more sophisticated, you could look up the atexit module
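For example, a minimal sketch (lock file name as in the snippet above) that removes the lock file on normal interpreter exit:
import atexit
import os

@atexit.register
def remove_lockfile():
    try:
        os.remove('lockfile.txt')  # clean up so the next run is not blocked
    except OSError:
        pass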
You can check whether your file exists using os.path.exists (docs here). If it does, you can call the sys.exit you mentioned before. If you need something more than sys.exit, try the atexit module @Reblochon suggested. The script then assumes the file is ready to lock, and the method reports its success back to the caller as a boolean.
import os
import sys
import fcntl

FILE_NAME = 'myfile.lock'
lock_fp = None  # keep a reference: closing (or garbage collecting) the file releases the lock

def lockFile(lockfile):
    global lock_fp
    lock_fp = open(lockfile, 'w')  # creates the file if it does not exist
    try:
        fcntl.flock(lock_fp, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return True   # the file has been locked; leave it open to hold the lock
    except IOError:
        lock_fp.close()
        return False  # another instance holds the lock

if os.path.exists(FILE_NAME):  # exit the script if the lock file exists
    sys.exit(0)
print('success', lockFile(FILE_NAME))

Exception for Python ftplib in my program?

I wrote this program to pull data from a text file in a website's directory (which is edited by the user on the site), but it seems to crash. A lot.
from sys import argv
import ftplib
import serial
from time import sleep

one = "0"
repeat = True
ser = serial.Serial("COM3", 9600)

while repeat == True:
    path = 'public_html/'
    filename = 'fileone.txt'
    ftp = ftplib.FTP("*omitted*")
    ftp.login("*omitted*", "*omitted*")
    ftp.cwd(path)
    ftp.retrbinary("RETR " + filename, open(filename, 'wb').write)
    ftp.quit()
    txt = open(filename)
    openup = txt.read()
    ser.write(openup)
    print(openup)
Does anyone know a way to stop it from crashing? I was thinking of using an exception, but I'm no Python expert. The program does what it's meant to do, by the way, and the address and login have been omitted for obvious reasons. I would also like an exception to stop the program from crashing when it disconnects from the serial port.
Thanks in advance!
Two things:
You might want to put all the ftplib-related code in a try-except block, like so:
try:
    # code related to ftplib
except Exception, e:  # you can fill this in after you encounter the exception once
    print str(e)
You also seem to be opening the file but not closing it when you're done. This might cause errors later. The best way to do this would be:
with open(filename, 'r') as txt:
    openup = txt.read()
This way the file will be closed automatically once you're outside the with block.
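Putting the two points together, a hedged sketch of the loop (host, credentials, and retry delay are placeholders; ftplib.all_errors covers both FTP and socket errors):
import ftplib
import serial
from time import sleep

ser = serial.Serial("COM3", 9600)
path = 'public_html/'
filename = 'fileone.txt'

while True:
    try:
        ftp = ftplib.FTP("ftp.example.com")  # placeholder host
        ftp.login("user", "password")        # placeholder credentials
        ftp.cwd(path)
        with open(filename, 'wb') as out:
            ftp.retrbinary("RETR " + filename, out.write)
        ftp.quit()
        with open(filename, 'r') as txt:
            openup = txt.read()
        ser.write(openup)
        print(openup)
    except ftplib.all_errors as e:       # FTP problem: log it and retry
        print(e)
    except serial.SerialException as e:  # serial port gone: stop the loop
        print(e)
        break
    sleep(5)                             # placeholder polling interval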

Shared file access between Python and Matlab

I have a Matlab application that writes to a .csv file and a Python script that reads from it. These operations happen concurrently, each at its own period (not necessarily the same). All of this runs on Windows 7.
I wish to know :
Would the OS inherently provide some sort of locking mechanism so that only one of the two applications (Matlab or Python) has access to the shared file at a time?
In the Python application, how do I check whether the file is already opened by the Matlab application? What loop structure would block the Python application until it can read the file?
I am not sure about the Windows API for locking files.
Here's a possible solution:
While Matlab has the file open, create an empty file called "data.lock" or something to that effect.
When Python tries to read the file, it first checks for the lock file; if it is there, it sleeps for a given interval.
When Matlab is done with the file, it deletes the "data.lock" file.
It's a programmatic solution, but it is simpler than digging through the Windows API and finding the right calls in Matlab and Python.
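On the Python side that could look like the following sketch (file names are illustrative):
import os
import time

LOCK_FILE = 'data.lock'  # created by Matlab while it writes the .csv
DATA_FILE = 'data.csv'

def read_when_free():
    while os.path.exists(LOCK_FILE):  # Matlab is still writing
        time.sleep(0.1)
    with open(DATA_FILE) as f:
        return f.read()
Note that this scheme is advisory and racy: Matlab could recreate the lock file between the existence check and the open, so it is only as safe as the two programs' timing allows.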
If Python is only reading the file, I believe you have to lock it in MATLAB, because a read-only open call from Python may not fail. I am not sure how to accomplish that; you may want to read this question: atomically creating a file lock in MATLAB (file mutex).
However, if you are simply consuming the data with Python, did you consider using a socket instead of a file?
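If you go the socket route, the Python consumer could be as small as the sketch below (the port number is a placeholder; the Matlab side would connect to it and write the data):
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('127.0.0.1', 50007))  # placeholder port
srv.listen(1)
conn, addr = srv.accept()       # blocks until Matlab connects
data = conn.recv(4096)          # read one chunk of data
conn.close()
srv.close()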
On Windows, on the Python side, CreateFile can be called (directly, or indirectly via the CRT) with a specific sharing mode. For example, if the desired sharing mode is FILE_SHARE_READ, the open will fail if the file is already open for writing. If that call instead succeeds, a future attempt to open the file for writing will fail (e.g. in Matlab).
The Windows CRT function _wsopen_s allows setting the sharing mode. You can call it with ctypes in a Python 3 opener:
import sys
import os
import ctypes
import ctypes.util

__all__ = ['shdeny', 'shdeny_write', 'shdeny_read']

_SH_DENYRW = 0x10   # deny read/write mode
_SH_DENYWR = 0x20   # deny write mode
_SH_DENYRD = 0x30   # deny read mode
_S_IWRITE = 0x0080  # for O_CREAT, a new file is not readonly

if sys.version_info[:2] < (3, 5):
    _wsopen_s = ctypes.CDLL(ctypes.util.find_library('c'))._wsopen_s
else:
    # find_library('c') may be deprecated on Windows in 3.5, if the
    # universal CRT removes named exports. The following probably
    # isn't future proof; I don't know how the '-l1-1-0' suffix
    # should be handled.
    _wsopen_s = ctypes.CDLL('api-ms-win-crt-stdio-l1-1-0')._wsopen_s

_wsopen_s.argtypes = (ctypes.POINTER(ctypes.c_int),  # pfh
                      ctypes.c_wchar_p,              # filename
                      ctypes.c_int,                  # oflag
                      ctypes.c_int,                  # shflag
                      ctypes.c_int)                  # pmode

def shdeny(file, flags):
    fh = ctypes.c_int()
    err = _wsopen_s(ctypes.byref(fh),
                    file, flags, _SH_DENYRW, _S_IWRITE)
    if err:
        raise IOError(err, os.strerror(err), file)
    return fh.value

def shdeny_write(file, flags):
    fh = ctypes.c_int()
    err = _wsopen_s(ctypes.byref(fh),
                    file, flags, _SH_DENYWR, _S_IWRITE)
    if err:
        raise IOError(err, os.strerror(err), file)
    return fh.value

def shdeny_read(file, flags):
    fh = ctypes.c_int()
    err = _wsopen_s(ctypes.byref(fh),
                    file, flags, _SH_DENYRD, _S_IWRITE)
    if err:
        raise IOError(err, os.strerror(err), file)
    return fh.value
For example:
if __name__ == '__main__':
    import tempfile
    filename = tempfile.mktemp()

    fw = open(filename, 'w')
    fw.write('spam')
    fw.flush()

    fr = open(filename)
    assert fr.read() == 'spam'

    try:
        f = open(filename, opener=shdeny_write)
    except PermissionError:
        fw.close()
        with open(filename, opener=shdeny_write) as f:
            assert f.read() == 'spam'

    try:
        f = open(filename, opener=shdeny_read)
    except PermissionError:
        fr.close()
        with open(filename, opener=shdeny_read) as f:
            assert f.read() == 'spam'

    with open(filename, opener=shdeny) as f:
        assert f.read() == 'spam'

    os.remove(filename)
In Python 2 you'll have to combine the above openers with os.fdopen, e.g.:
f = os.fdopen(shdeny_write(filename, os.O_RDONLY|os.O_TEXT), 'r')
Or define an sopen wrapper that lets you pass the share mode explicitly and calls os.fdopen to return a Python 2 file. This will require a bit more work to map the file mode to the passed-in flags, or vice versa.
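A minimal sketch of such a wrapper, assuming the definitions above are in scope (only the 'r' and 'w' mode strings are mapped here):
def sopen(filename, mode='r', shflag=_SH_DENYRW):
    # map a couple of common mode strings to CRT flags; extend as needed
    if mode == 'r':
        flags = os.O_RDONLY
    elif mode == 'w':
        flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
    else:
        raise ValueError('unsupported mode: %r' % mode)
    fh = ctypes.c_int()
    err = _wsopen_s(ctypes.byref(fh), filename, flags, shflag, _S_IWRITE)
    if err:
        raise IOError(err, os.strerror(err), filename)
    return os.fdopen(fh.value, mode)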

fifo - reading in a loop

I want to use os.mkfifo for simple communication between programs. I have a problem with reading from the fifo in a loop.
Consider this toy example, where I have a reader and a writer working with the fifo. I want to be able to run the reader in a loop to read everything that enters the fifo.
# reader.py
import atexit
import os

FIFO = 'json.fifo'

@atexit.register
def cleanup():
    try:
        os.unlink(FIFO)
    except OSError:
        pass

def main():
    os.mkfifo(FIFO)
    with open(FIFO) as fifo:
        # for line in fifo:              # closes after a single reading
        # for line in fifo.readlines():  # closes after a single reading
        while True:
            line = fifo.read()  # will return empty lines (non-blocking)
            print repr(line)

main()
And the writer:
# writer.py
import sys

FIFO = 'json.fifo'

def main():
    with open(FIFO, 'a') as fifo:
        fifo.write(sys.argv[1])

main()
If I run python reader.py and later python writer.py foo, "foo" will be printed, but the FIFO will be closed and the reader will exit (or spin inside the while loop). I want the reader to stay in the loop so I can run the writer many times.
Edit
I use this snippet to handle the issue:
def read_fifo(filename):
    while True:
        with open(filename) as fifo:
            yield fifo.read()
but maybe there is some neater way to handle it, instead of repeatedly opening the file...
Related
Getting readline to block on a FIFO
You do not need to reopen the file repeatedly. You can use select to block until data is available.
import select

with open(FIFO_PATH) as fifo:
    while True:
        select.select([fifo], [], [fifo])
        data = fifo.read()
        do_work(data)
In this example you won't read EOF.
A FIFO works (on the reader side) exactly this way: it can be read from until all writers are gone, at which point it signals EOF to the reader.
If you want the reader to continue reading, you'll have to open the FIFO again and read from there. So your snippet is exactly the way to go.
If you have multiple writers, you'll have to ensure that each data portion they write is smaller than PIPE_BUF in order not to mix up the messages.
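For reference, PIPE_BUF can be queried at run time (FIFO path as in the example above):
import os
print(os.pathconf('json.fifo', 'PC_PIPE_BUF'))  # typically 4096 on Linux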
The following methods on the standard library's pathlib.Path class are helpful here:
Path.is_fifo()
Path.read_text/Path.read_bytes
Path.write_text/Path.write_bytes
Here is a demo:
# reader.py
import os
from pathlib import Path

fifo_path = Path("fifo")
os.mkfifo(fifo_path)
while True:
    print(fifo_path.read_text())  # blocks until data becomes available
# writer.py
import sys
from pathlib import Path

fifo_path = Path("fifo")
assert fifo_path.is_fifo()
fifo_path.write_text(sys.argv[1])
