I have a file that an application updates every few seconds, and I want to extract a single number field in that file, and record it into a list for use later. So, I'd like to make an infinite loop where the script reads a source file, and any time it notices a change in a particular figure, it writes that figure to an output file.
I'm not sure why I can't get Python to notice that the source file is changing:
#!/usr/bin/python
import re
from time import gmtime, strftime, sleep

def write_data(new_datapoint):
    output_path = '/media/USBHDD/PythonStudy/torrent_data_collection/data_one.csv'
    outfile = open(output_path, 'a')
    outfile.write(new_datapoint)
    outfile.close()

forever = 0
previous_data = "0"

while forever < 1:
    input_path = '/var/lib/transmission-daemon/info/stats.json'
    infile = open(input_path, "r")
    infile.seek(0)
    contents = infile.read()
    uploaded_bytes = re.search('"uploaded-bytes":\s(\d+)', contents)
    if uploaded_bytes:
        current_time = strftime("%Y-%m-%d %X", gmtime())
        current_data = uploaded_bytes.group(1)
        if current_data != previous_data:
            write_data("," + current_time + "$" + uploaded_bytes.group(1))
            previous_data = uploaded_bytes.group(1)
        infile.close()
        sleep(5)
    else:
        print "couldn't write" + strftime("%Y-%m-%d %X", gmtime())
        infile.close()
        sleep(60)
As it is now, the (messy) script writes once correctly, and then, although I can see that my source file (stats.json) is changing, my script never picks up on any changes. It keeps running, but my output file doesn't grow.
I thought that an open() and a close() would do the trick, and then tried throwing in a .seek(0).
What file method am I missing to ensure that Python re-opens and re-reads my source file (stats.json)?
Unless you are implementing some synchronization mechanism or can somehow guarantee atomic reads and writes, I think you are asking for race conditions and subtle bugs here.
Imagine the "reader" accessing the file while the "writer" hasn't completed its write cycle. There is a risk of reading incomplete/inconsistent data. On "modern" systems you could also hit the cache and not see file modifications "live" as they happen.
I can think of two possible causes:
You forgot the parentheses on the close in the else of the infinite loop.
infile.close --> infile.close()
The program that is changing the JSON file is not closing the file, and therefore it is not actually changing.
Two problems I see:
Are you sure your file is really updated on the filesystem? I do not know what operating system you are playing with your code on, but caching may kick your a$$ in this case if the file is not flushed by the producer.
Your problem makes it worth considering a pipe instead of a file; however, I cannot guarantee what transmission will do if it gets stuck writing to the pipe because your consumer is dead.
To address your problems, consider using one of the following:
pyinotify
watchdog
watcher
These modules are intended to monitor changes on the filesystem and then call the proper actions. The approach in your example is primitive, carries a big performance penalty, and has a couple of other problems already mentioned in other answers.
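For example, a minimal sketch with watchdog might look like the following (the directory path is the one from the question; the handler class name is made up, and the event API should be double-checked against watchdog's documentation):

import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class StatsChangedHandler(FileSystemEventHandler):
    # Called by the observer thread whenever something in the watched
    # directory is modified.
    def on_modified(self, event):
        if event.src_path.endswith("stats.json"):
            print("stats.json changed; re-read and record the new value here")

observer = Observer()
observer.schedule(StatsChangedHandler(), path="/var/lib/transmission-daemon/info")
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()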
Ilya, would it help to check (with os.path.getmtime) whether stats.json changed before you process the file?
Moreover, I'd suggest taking advantage of the fact that it's a JSON file:
import json
import os
import sys

dir_name = '/home/klaus/.config/transmission/'
# stats.json of the daemon might be elsewhere
file_name = 'stats.json'

full_path = os.path.join(dir_name, file_name)
with open(full_path) as fp:
    data = json.load(fp)

print data['uploaded-bytes']
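A sketch of that mtime check combined with the JSON parsing might look like this (the path and the 5-second interval come from the original script; the CSV-writing step is left as a comment):

import json
import os
import time

full_path = '/var/lib/transmission-daemon/info/stats.json'
last_mtime = 0
previous_data = None

while True:
    mtime = os.path.getmtime(full_path)
    if mtime != last_mtime:          # only re-read the file when it changed
        last_mtime = mtime
        with open(full_path) as fp:
            data = json.load(fp)
        current_data = data['uploaded-bytes']
        if current_data != previous_data:
            previous_data = current_data
            # append current_data (with a timestamp) to the CSV, as in write_data()
    time.sleep(5)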
Thanks for all the answers; unfortunately my error was in the shell, not in the Python script.
The cause of the problem turned out to be the way I was putting the script in the background. I was pressing Ctrl+Z, which I thought would put the task in the background. But it does not: Ctrl+Z only suspends the task and returns you to the shell; a subsequent bg command is necessary for the script to keep running its infinite loop in the background.
Related
I need to lock a file for writing in Python. It will be accessed from multiple Python processes at once. I have found some solutions online, but most fail for my purposes as they are often only Unix based or Windows based.
Alright, so I ended up going with the code I wrote here, on my website (link is dead, view on archive.org; also available on GitHub). I can use it in the following fashion:
from filelock import FileLock

with FileLock("myfile.txt"):
    # work with the file as it is now locked
    print("Lock acquired.")
The other solutions cite a lot of external code bases. If you would prefer to do it yourself, here is some code for a cross-platform solution that uses the respective file locking tools on Linux / Windows systems.
try:
    # Posix based file locking (Linux, Ubuntu, MacOS, etc.)
    #   Only allows locking on writable files, might cause
    #   strange results for reading.
    import fcntl, os
    def lock_file(f):
        if f.writable(): fcntl.lockf(f, fcntl.LOCK_EX)
    def unlock_file(f):
        if f.writable(): fcntl.lockf(f, fcntl.LOCK_UN)
except ModuleNotFoundError:
    # Windows file locking
    import msvcrt, os
    def file_size(f):
        return os.path.getsize( os.path.realpath(f.name) )
    def lock_file(f):
        msvcrt.locking(f.fileno(), msvcrt.LK_RLCK, file_size(f))
    def unlock_file(f):
        msvcrt.locking(f.fileno(), msvcrt.LK_UNLCK, file_size(f))


# Class for ensuring that all file operations are atomic, treat
# initialization like a standard call to 'open' that happens to be atomic.
# This file opener *must* be used in a "with" block.
class AtomicOpen:
    # Open the file with arguments provided by user. Then acquire
    # a lock on that file object (WARNING: Advisory locking).
    def __init__(self, path, *args, **kwargs):
        # Open the file and acquire a lock on the file before operating
        self.file = open(path, *args, **kwargs)
        # Lock the opened file
        lock_file(self.file)

    # Return the opened file object (knowing a lock has been obtained).
    def __enter__(self, *args, **kwargs): return self.file

    # Unlock the file and close the file object.
    def __exit__(self, exc_type=None, exc_value=None, traceback=None):
        # Flush to make sure all buffered contents are written to file.
        self.file.flush()
        os.fsync(self.file.fileno())
        # Release the lock on the file.
        unlock_file(self.file)
        self.file.close()
        # Handle exceptions that may have come up during execution, by
        # default any exceptions are raised to the user.
        if (exc_type != None): return False
        else: return True
Now, AtomicOpen can be used in a with block where one would normally use an open statement.
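For example, a write that would otherwise use a plain open() call could be wrapped like this (a minimal usage sketch; the file name and the text written are made up):

# Append to a file while holding the advisory lock.
with AtomicOpen("shared.txt", "a") as f:
    f.write("one locked append\n")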
WARNINGS:
If running on Windows and Python crashes before __exit__ is called, I'm not sure what the lock behavior would be.
The locking provided here is advisory, not absolute. All potentially competing processes must use the "AtomicOpen" class.
As of Nov 9th, 2020, this code only locks writable files on Posix systems. At some point after the original posting and before that date, it became an error to use fcntl locking on read-only files.
There is a cross-platform file locking module here: Portalocker
Although as Kevin says, writing to a file from multiple processes at once is something you want to avoid if at all possible.
If you can shoehorn your problem into a database, you could use SQLite. It supports concurrent access and handles its own locking.
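For illustration, a minimal sketch of the SQLite route (the database path, table, and values are made up); each process opens its own connection and SQLite serializes the writers itself:

import sqlite3

conn = sqlite3.connect("shared.db", timeout=10)  # wait up to 10 s for a busy database
conn.execute("CREATE TABLE IF NOT EXISTS log (ts TEXT, message TEXT)")
with conn:  # commits on success, rolls back on error
    conn.execute("INSERT INTO log VALUES (datetime('now'), ?)", ("hello",))
conn.close()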
I have been looking at several solutions to do that and my choice has been
oslo.concurrency
It's powerful and relatively well documented. It's based on fasteners.
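A rough sketch of how an inter-process lock with oslo.concurrency can look; the lock name, lock_path, and file name below are made up, and the exact keyword arguments should be checked against the library's documentation:

from oslo_concurrency import lockutils

# external=True makes this a file-based lock shared across processes,
# not just across threads in one interpreter.
@lockutils.synchronized("my-data-file", external=True, lock_path="/tmp/locks")
def append_line(line):
    with open("shared.txt", "a") as f:
        f.write(line + "\n")

append_line("written while holding the lock")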
Other solutions:
Portalocker: requires pywin32, which is an exe installation, so not possible via pip
fasteners: poorly documented
lockfile: deprecated
flufl.lock: NFS-safe file locking for POSIX systems.
simpleflock : Last update 2013-07
zc.lockfile : Last update 2016-06 (as of 2017-03)
lock_file : Last update in 2007-10
I prefer lockfile — Platform-independent file locking
Locking is platform and device specific, but generally, you have a few options:
Use flock(), or an equivalent (if your OS supports it). This is advisory locking: unless every process checks for the lock, it is ignored (see the fcntl.flock sketch after this answer).
Use a lock-copy-move-unlock methodology, where you copy the file, write the new data, then move it (move, not copy - move is an atomic operation in Linux -- check your OS), and you check for the existence of the lock file.
Use a directory as a "lock". This is necessary if you're writing to NFS, since NFS doesn't support flock().
There's also the possibility of using shared memory between the processes, but I've never tried that; it's very OS-specific.
For all these methods, you'll have to use a spin-lock (retry-after-failure) technique for acquiring and testing the lock. This does leave a small window for mis-synchronization, but it's generally small enough not to be a major issue.
If you're looking for a solution that is cross platform, then you're better off logging to another system via some other mechanism (the next best thing is the NFS technique above).
Note that sqlite is subject to the same constraints over NFS that normal files are, so you can't write to an sqlite database on a network share and get synchronization for free.
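A minimal sketch of the first option using fcntl.flock (POSIX only; the file name and text are made up). Every cooperating process has to take the same lock, since a process that never asks for it can still write to the file:

import fcntl

with open("shared.txt", "a") as f:
    fcntl.flock(f, fcntl.LOCK_EX)   # blocks until the exclusive lock is free
    try:
        f.write("one locked append\n")
        f.flush()
    finally:
        fcntl.flock(f, fcntl.LOCK_UN)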
Coordinating access to a single file at the OS level is fraught with all kinds of issues that you probably don't want to solve.
Your best bet is to have a separate process that coordinates read/write access to that file.
Here's an example of how to use the filelock library, which is similar to Evan Fossmark's implementation:
from filelock import FileLock

lockfile = r"c:\scr.txt"
lock = FileLock(lockfile + ".lock")
with lock:
    file = open(lockfile, "w")
    file.write("123")
    file.close()
Any code within the with lock: block is protected by the lock, meaning that it will be finished before another process using the same lock can access the file.
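If you would rather fail than wait indefinitely, the filelock package also accepts a timeout; a small sketch (the one-second timeout and file names are made up, so check the package docs for the exact API):

from filelock import FileLock, Timeout

lock = FileLock(r"c:\scr.txt.lock")
try:
    with lock.acquire(timeout=1):   # give up after one second
        with open(r"c:\scr.txt", "w") as f:
            f.write("123")
except Timeout:
    print("Another process is holding the lock; try again later.")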
Locking a file is usually a platform-specific operation, so you may need to allow for the possibility of running on different operating systems. For example:
import os

def my_lock(f):
    if os.name == "posix":
        # Unix or OS X specific locking here
        pass
    elif os.name == "nt":
        # Windows specific locking here
        pass
    else:
        print "Unknown operating system, lock unavailable"
I have been working on a situation like this where I run multiple copies of the same program from within the same directory/folder and log errors. My approach was to write a "lock file" to the disk before opening the log file. The program checks for the presence of the "lock file" before proceeding, and waits for its turn if the "lock file" exists.
Here is the code:
from datetime import datetime
from os import remove, stat
from os.path import exists
from time import time

def errlogger(error):
    while True:
        if not exists('errloglock'):
            lock = open('errloglock', 'w')
            if exists('errorlog'): log = open('errorlog', 'a')
            else: log = open('errorlog', 'w')
            log.write(str(datetime.utcnow())[0:-7] + ' ' + error + '\n')
            log.close()
            remove('errloglock')
            return
        else:
            check = stat('errloglock')
            if time() - check.st_ctime > 0.01: remove('errloglock')
            print('waiting my turn')
EDIT---
After thinking over some of the comments about stale locks above, I edited the code to add a check for staleness of the "lock file." Timing several thousand iterations of this function on my system gave an average of 0.002066... seconds from just before:
lock = open('errloglock', 'w')
to just after:
remove('errloglock')
so I figured I would start with 5 times that amount as the staleness threshold and monitor the situation for problems.
Also, as I was working with the timing, I realized that I had a bit of code that was not really necessary:
lock.close()
which I had immediately following the open statement, so I have removed it in this edit.
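Note that the exists-then-open sequence above is not atomic: two processes can both see no lock file and both create one. A variation that makes the creation itself the test, using os.open with O_CREAT | O_EXCL (a sketch that keeps the same 'errloglock' name and staleness idea; the helper name is made up):

import os
import time

def acquire_errloglock(path='errloglock', stale_after=0.01):
    while True:
        try:
            # O_CREAT | O_EXCL fails if the file already exists, so the
            # existence check and the creation are a single atomic step.
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return
        except FileExistsError:
            # Someone else holds the lock; break it if it looks stale.
            try:
                if time.time() - os.stat(path).st_ctime > stale_after:
                    os.remove(path)
            except FileNotFoundError:
                pass  # the other process released it in the meantime
            time.sleep(0.001)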
This worked for me:
Do not keep large files open; split the data across several small ones.
You create a Temp file, delete file A, and then rename the Temp file to A.
import os
import json
import time

# File_Temp, File_A and DATA are assumed to be defined elsewhere.

def Server():
    i = 0
    while i == 0:
        try:
            with open(File_Temp, "w") as file:
                json.dump(DATA, file, indent=2)
            if os.path.exists(File_A):
                os.remove(File_A)
            os.rename(File_Temp, File_A)
            i = 1
        except OSError as e:
            print("file locked: ", str(e))
            time.sleep(1)

def Clients():
    i = 0
    while i == 0:
        try:
            if os.path.exists(File_A):
                with open(File_A, "r") as file:
                    DATA_Temp = file.read()
                DATA = json.loads(DATA_Temp)
                i = 1
        except OSError as e:
            print(str(e))
            time.sleep(1)
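On Python 3 the delete-and-rename pair can be collapsed into a single atomic os.replace() call, which removes the window during which File_A does not exist (a sketch; the function name is made up):

import json
import os

def write_atomically(data, temp_path, final_path):
    # Write the new contents to a temporary file first...
    with open(temp_path, "w") as f:
        json.dump(data, f, indent=2)
    # ...then atomically swap it into place, overwriting the old file.
    os.replace(temp_path, final_path)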
The scenario is like this:
The user requests a file to do something. If the user then sends the same request again, they are informed that the second request will not be done until the first request finishes. That's why I use a lock mechanism to handle this issue.
Here is my working code:
from lockfile import LockFile

# excerpt from a larger function (note the return at the end)
lock = LockFile(lock_file_path)
status = ""
if not lock.is_locked():
    lock.acquire()
    status = lock.path + ' is locked.'
    print status
else:
    status = lock.path + " is already locked."
    print status
return status
I found a simple implementation that (actually!) worked, from grizzled-python.
Simply using os.open(..., O_EXCL) + os.close() didn't work on Windows.
You may find pylocker very useful. It can be used to lock a file or for locking mechanisms in general and can be accessed from multiple Python processes at once.
If you simply want to lock a file here's how it works:
import uuid
from pylocker import Locker

# create a unique lock pass. This can be any string.
lpass = str(uuid.uuid1())

# create locker instance.
FL = Locker(filePath='myfile.txt', lockPass=lpass, mode='w')

# acquire the lock
with FL as r:
    # get the result
    acquired, code, fd = r
    # check if acquired.
    if fd is not None:
        print fd
        fd.write("I have successfully acquired the lock!")

# no need to release anything or to close the file descriptor,
# the with statement takes care of that. let's print fd and verify that.
print fd
If you just need Mac/POSIX this should work without external packages.
import sys
import stat
import os

filePath = "<PATH TO FILE>"

if sys.platform == 'darwin':
    flags = os.stat(filePath).st_flags
    # set the user-immutable flag if it is not already set
    if not flags & stat.UF_IMMUTABLE:
        os.chflags(filePath, flags | stat.UF_IMMUTABLE)
and if you want to unlock the file, just change it to:
if flags & stat.UF_IMMUTABLE:
    os.chflags(filePath, flags & ~stat.UF_IMMUTABLE)
I have a script which loads a YAML file as an object. The related part is very simple:
def run_test_spec(self, file_path):
    try:
        with open(file_path, 'r') as f:
            test_spec = yaml.load(f)
            if test_spec:
                do_test(test_spec)
            else:
                print("empty test_spec")
    except BaseException as err:
        print("error in loading yaml file:", file_path)
The file_path was passed in after finishing some comparisons on file entries with for entries in os.scandir(some_directory) (there is no break statement within the for loop).
It had been running fine until recently; now test_spec gets the value None after the first run. I debugged it with PyCharm. If the breakpoint is set at the line if test_spec:, test_spec is None, but if the breakpoint is set either at the with open(...) line or at yaml.load(), test_spec gets loaded properly. In the end, I added a time.sleep(0.2) statement before with open(...), and then it works all the time.
What is the likely cause of this? Is it a problem with with open(...) or with yaml.load()? How do I get it right without the sleep?
Edited on June 27, 2018:
I did further debugging, and found the line in the code which makes the difference. In file /usr/local/lib/python3.5/dist-packages/yaml/reader.py on my machine:
def update_raw(self, size=4096):
    data = self.stream.read(size)
    if self.raw_buffer is None:
        self.raw_buffer = data
    else:
        self.raw_buffer += data
    self.stream_pointer += len(data)
    if not data:
        self.eof = True
If the breakpoint is set on the first line (data = ...), data is read fine with the content of the file; however, if the breakpoint is set on the second line (if self.raw_buffer is None:), data is read in as an empty string, which causes a StreamEndEvent and thus the empty return from yaml.load().
I could not step in self.stream.read(size), which only got me to some code in /usr/lib/python3.5/codecs.py.
I don't think the Python library caused this problem; it probably has something to do with my code. I noticed this happens after I run a test that spawns two child processes connected as a pipe and kills the second process with terminate(). I checked the program with psutil: there is only one thread, no child processes, and no open files after the run, so it looks clean. But then the new request files could not be read unless I added a sleep or hit a breakpoint before the stream read. If the second process, also in a pipe, terminates by itself, the issue does not occur.
If no breakpoint is set but just print the f.tell() before calling yaml.load(f), it is always 0, whether the yaml.load(f) returns None or not.
PyYAML got a new release yesterday (2018-06-26). There was no announcement that this was an API break, but, as the major version number change suggests, there was one.
The (unsafe) load() that you use has been renamed
danger_load() by the merge of this PR
You can pin your PyYAML install to 3.x (pip install "pyyaml<4") or change your code to use danger_load(). The best solution would probably be to write explicit representers for the objects that are now dumped using !!python/path_to_your_type, so that you can use safe_load().
I could not find any announcement of possible breakage in the documentation.
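For reference, a minimal sketch of the two quick workarounds (the file name is made up; the representer route needs per-type code and is not shown):

import yaml

# Workaround 1: pin the old behaviour from the shell:
#   pip install "pyyaml<4"

# Workaround 2: load with the safe loader, which only builds plain Python
# types (dicts, lists, strings, numbers) and rejects python/object tags.
with open("test_spec.yaml") as f:
    test_spec = yaml.safe_load(f)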
I am using seek function to extract new lines in an updated file. My code looks like this:
import time

read_data = open('path-to-myfile', 'r')
read_data.seek(0, 2)
while True:
    time.sleep(sometime)       # 'sometime' is the polling interval in seconds
    new_data = read_data.readlines()
    # do something with new_data
myfile is a CSV file that is constantly updated.
The problem is that, usually after several iterations of the while loop, new_data returns nothing. The number of iterations varies. When I check myfile, it is still updating... So is there a problem with my code, or is there another way to do this?
Any help appreciated !!
You have two programs accessing the same file on disk? If that is the case, then the resource may be locked while it is being written. I set up an example script that writes to a file, and another that reads it for changes, based on the code you provided.
So in one instance of python:
import time

while True:
    time.sleep(2)
    with open('test.txt', 'a') as read_data:
        read_data.seek(0, 2)
        read_data.write("bibbity boopity\n")
And in another instance of python
import time

read_data = open('test.txt', 'r')
read_data.seek(0, 2)
while True:
    time.sleep(1)
    new_data = read_data.readlines()
    print(new_data)
In this case, the file is updated more slowly than it is read, so most reads printed by the bottom program will be empty lists. If I speed up the writes, I still see the changes, but there are some instances where not all the updates are seen.
You may want to use asynchronous file reading to catch all the changes. Python 3 asyncio library doesn't support async file read/write, but curio does.
See also this question
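One way to avoid losing updates without going asynchronous is a tail-style poll that remembers its file offset and only consumes complete lines, so a half-written last line is re-read on the next pass instead of being dropped; a minimal sketch (the function name and interval are made up):

import time

def follow(path, interval=1.0):
    with open(path, 'r') as f:
        f.seek(0, 2)                  # start at the current end of the file
        while True:
            pos = f.tell()
            line = f.readline()
            if not line or not line.endswith('\n'):
                f.seek(pos)           # incomplete line: rewind and retry later
                time.sleep(interval)
            else:
                yield line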
I currently have code that reads raw content from a file chosen by the user:
def choosefile():
    filec = tkFileDialog.askopenfile()
    # Wait a bit to prevent attempting to display the file's contents
    # before the entire file has been read.
    time.sleep(1)
    filecontents = filec.read()
But sometimes people open big files that take more than 2 seconds to read. Is there a callback for FileObject.read([size])? For people who don't know what a callback is, it's an operation executed once another operation has completed.
Slightly modified from the docs:
#!/usr/bin/env python
import signal, sys

def handler(signum, frame):
    print "You took too long"
    sys.exit(1)

f = open(sys.argv[1])

# Set the signal handler and a 2-second alarm
signal.signal(signal.SIGALRM, handler)
signal.alarm(2)

contents = f.read()

signal.alarm(0)  # Disable the alarm
print contents
Answer resolved by asker
Hm, I made a mistake at first. tkFileDialog.askopenfile() does not read the file; FileObject.read() reads the file and blocks the code. I found the solution thanks to kindall. I'm not a complete expert at Python, though.
Your question seems to assume that Python will somehow start reading your file while some other code executes, and therefore you need to wait for the read to catch up. This is not even slightly true; both open() and read() are blocking calls and will not return until the operation has completed. Your sleep() is not necessary and neither is your proposed workaround. Simply open the file and read it. Python won't do anything else while that is happening.
Thanks kindall! Resolved code:
def choosefile():
    filec = tkFileDialog.askopenfile()
    filecontents = filec.read()