How to solve a Python ilock FileNotFoundError?

I am trying to use ilock in Python as a system-wide lock, but after a few iterations my code raises the following error. What might cause such an error, and how can I start solving it?
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/ilock-bfe0d208735d8d5f20bb2b8abcf8bf67d696f23629b4ee2d4e7304f69063db61.lock'

I'm seeing the same error. Looks to me like it's a bug in the ilock library.
In short, when two ILock objects are created with the same unique name, they'll use the same file as the locking entity (passed to portalocker). They create the file (if it doesn't exist) using open(path, 'w') upon ILock.__enter__, and they call os.unlink(path) upon ILock.__exit__.
However consider the following scenario:
process1: ILock.__enter__ # file is created, lock acquired
process2: ILock.__enter__ # file already exists, lock pending
process1: does its thing under the lock
process1: ILock.__exit__ # file is unlinked, lock released
process2: does its thing under the lock
process2: ILock.__exit__ # Error: cannot unlink, file does not exist
On the surface, this could be fixed by silently allowing the unlink to fail, or perhaps by recreating the file as necessary after the lock has been acquired. I am not sure, though, whether portalocker would behave nicely in that case.
Perhaps the easiest workaround is to simply NEVER delete the file (get rid of os.unlink altogether).
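For illustration, here is a minimal sketch of that workaround built directly on portalocker (which ilock uses internally); the class name and the lock-file path are made up for this example:

import os
import portalocker

class SystemWideLock(object):
    """Hold an exclusive lock via a file that is never unlinked."""

    def __init__(self, name):
        # illustrative path; the file is intentionally never deleted,
        # so __exit__ can never race with another process's unlink
        self._path = os.path.join('/tmp', 'mylock-%s.lock' % name)

    def __enter__(self):
        self._file = open(self._path, 'w')
        portalocker.lock(self._file, portalocker.LOCK_EX)
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        portalocker.unlock(self._file)
        self._file.close()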

Related

Python: Can't save file/Windows Error 32

I have written a function which is called when my program is done with its job.
def allDone(self, event):
    dlg = wx.MessageBox("All done!", "Ask Alfred", wx.OK | wx.ICON_INFORMATION)
    os.unlink(self.fpath)
    os.rename(self.temp, self.fpath)
    self.pathBox.Clear()
However, it's not working as expected. It's supposed to delete the original file, then rename the temp file to the original file's path.
Instead, it's only executing the unlink, deleting the file at self.fpath.
The exact error I get is:
File "G:/AskNorbert/finder.py", line 151, in allDone
os.rename(self.temp, self.fpath)
WindowsError: [Error 32] The process cannot access the file because it is being used by another process
Make sure that you have called flush() and then close() on the temporary file before attempting to rename it, to ensure you are not locking it.
It can also be worth calling time.sleep(0.2) after the close and before the rename, to give the OS time to finish anything it is doing.
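As a hedged sketch of that ordering, assuming the temporary file was opened by this same object and kept in a hypothetical self.tempFile attribute:

import os
import time

def allDone(self, event):
    wx.MessageBox("All done!", "Ask Alfred", wx.OK | wx.ICON_INFORMATION)
    self.tempFile.flush()  # push buffered writes out to the OS
    self.tempFile.close()  # release the handle so the rename can succeed
    time.sleep(0.2)        # give the OS a moment to finish up
    os.unlink(self.fpath)
    os.rename(self.temp, self.fpath)
    self.pathBox.Clear()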

Lock file for access on Windows

Using portalocker we can lock a file for access through the following way:
f = open("M99", "r+")
portalocker.lock(f,portalocker.LOCK_EX)
The lock over the file can be removed using
f.close()  # or
portalocker.unlock(f)  # needs a reference to the file object it locked
Can this same thing be done in any other way in Python, wherein we can:
1. lock the file for access,
2. restart Python (so we no longer have the original Python file object or file number), and
3. unlock the file for access in the new process?
I cannot save f or the file object, so I can't use pickle or anything similar. Is there a way using the Python standard library or some win32api call?
Any Windows utility will also do; any command line from Windows?
It appears you want to lock access to resources where the lock persists between program invocations. You need a different strategy for that.
Create a lock file using exclusive create mode; in Python 2 this requires using the os.open() call (followed by os.fdopen() to produce a Python file object); in Python 3 you can use the 'x' mode with the built-in open().
In Python 2:
import os

LOCKFILE = r'some\path\to\lockfile'

class AlreadyLocked(Exception):
    pass

def lock():
    try:
        fd = os.open(LOCKFILE, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
    except OSError:
        # file already exists (os.open raises OSError, not IOError)
        raise AlreadyLocked()
    with os.fdopen(fd, 'w') as lockfile:
        # write the PID of the current process so you can debug
        # later if a lockfile can be deleted after a program crash
        lockfile.write(str(os.getpid()))

def unlock():
    os.remove(LOCKFILE)
In Python 3 the lock() function would be:
def lock():
    try:
        with open(LOCKFILE, 'x') as lockfile:
            # write the PID of the current process so you can debug
            # later if a lockfile can be deleted after a program crash
            lockfile.write(str(os.getpid()))
    except FileExistsError:
        # file already exists
        raise AlreadyLocked()
You need to use exclusive create mode to avoid race conditions: in exclusive create mode the file can only be created if it does not yet exist, and that condition is checked by the operating system rather than by a separate step in Python, which would leave a window for another program to create the lock as well.
Now you can lock and unlock without tracking the file descriptor. The lockfile is now a signal file; if it is present something has claimed a lock, and deleting the file means something is unlocked.
This does mean that access to the files or directories you are trying to protect is only protected because all your code honours this lock system, not because the OS is enforcing locks on those files or directories.
All of this means the scheme only works if every process accessing the shared resource cooperates in it. It cannot be used if another process doesn't honour the scheme; in that case your only option is OS-level locking, and you have to keep your process running for the full duration of the lock.
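A brief usage sketch of the lock()/unlock() helpers above; the body of the try block stands in for whatever the lock protects:

try:
    lock()
except AlreadyLocked:
    print('Another process holds the lock; try again later.')
else:
    try:
        pass  # ... work on the shared resource here ...
    finally:
        unlock()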
There is a method in win32api to set file attributes; have a read of the following:
python SetFileAttributes
MSDN file attributes
These give you the Python method to set file attributes:
win32api.SetFileAttributes(file, win32con.FILE_ATTRIBUTE_NORMAL)
where file is the name/path of the file, and not a file object
The second argument is an attribute mask; if you want to set several attributes at once, you can combine them with bitwise OR:
win32con.FILE_ATTRIBUTE_HIDDEN | win32con.FILE_ATTRIBUTE_READONLY
More constants are named on the MSDN page.
EDIT:
For file locking you can also look at the win32file.LockFileEx method.
I haven't used this before, so it may take some playing around, but it appears to need a file handle (not a path) passed to it, plus certain constants to set the access permissions; more info on the constants can be found on MSDN. A rough sketch follows.
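For illustration, here is a sketch adapted from the widely circulated portalocker recipe for Windows (it requires pywin32); I haven't verified it against every pywin32 version, so treat it as a starting point rather than a definitive implementation:

import win32con
import win32file
import pywintypes

__overlapped = pywintypes.OVERLAPPED()

def lock_exclusive(f):
    # translate the Python file object into a Windows file handle
    hfile = win32file._get_osfhandle(f.fileno())
    win32file.LockFileEx(hfile, win32con.LOCKFILE_EXCLUSIVE_LOCK,
                         0, -0x10000, __overlapped)

def unlock(f):
    hfile = win32file._get_osfhandle(f.fileno())
    win32file.UnlockFileEx(hfile, 0, -0x10000, __overlapped)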
You could use subprocess to open the file in notepad or excel:
import subprocess, time
subprocess.call('start excel.exe "\lockThisFile.txt\"', shell=True)
time.sleep(10) # if you need the file locked before executing the next commands, you may need to sleep it for a few seconds
or
subprocess.call('notepad > lockThisFile.txt', shell=True)
As written you need shell=True; otherwise Windows will give you a syntax error.
(subprocess.Popen() works as well)
You can then close the process later using:
subprocess.call('taskkill /f /im notepad.exe') # or excel.exe
Other options include:
- writing some C++ code and calling it from Python (https://msdn.microsoft.com/en-us/library/windows/desktop/aa365203(v=vs.85).aspx)
- calling 3rd-party programs with subprocess.call(): FileLocker http://www.jensscheffler.de/filelocker (see https://superuser.com/questions/294826/how-to-purposefully-exclusively-lock-a-file) or Easy File Locker http://www.xoslab.com/efl.html
- using Dispatch (from win32com.client import Dispatch), although this last choice is the most complex.

Can't access temporary files created with tempfile

I am using tempfile.NamedTemporaryFile() to store some text until the program ends. On Unix this works without any issues, but on Windows the returned file isn't accessible for reading or writing: Python gives Errno 13. The only way is to set delete=False and manually delete the file with os.remove(). Why?
The IOError occurs because, on Windows, the file can only be opened once after it is created.
The reason is that NamedTemporaryFile creates the file with the FILE_SHARE_DELETE flag on Windows. On Windows, when a file has been created or opened with a specific share flag, all subsequent open operations have to pass that share flag. That is not the case with Python's open function, which does not pass the FILE_SHARE_DELETE flag. See my answer on the How to create a temporary file that can be read by a subprocess? question for more details and a workaround.
Take a look: http://docs.python.org/2/library/tempfile.html
tempfile.NamedTemporaryFile([mode='w+b'[, bufsize=-1[, suffix=''[, prefix='tmp'[, dir=None[, delete=True]]]]]])
This function operates exactly as TemporaryFile() does, except that the file is guaranteed to have a visible name in the file system (on Unix, the directory entry is not unlinked). That name can be retrieved from the name attribute of the file object. Whether the name can be used to open the file a second time, while the named temporary file is still open, varies across platforms (it can be so used on Unix; it cannot on Windows NT or later). If delete is true (the default), the file is deleted as soon as it is closed.
Thanks to @Rnhmjoj, here is a working solution:
from tempfile import NamedTemporaryFile

f = NamedTemporaryFile(delete=False)
f.close()
You have to create the file with delete=False and then close it right after creation. This way Windows releases its handle on the file and you can do stuff with it!
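A minimal sketch of the full pattern, create, close, reuse by name, then clean up manually (since delete=False leaves that to you):

import os
from tempfile import NamedTemporaryFile

tmp = NamedTemporaryFile(delete=False)  # file persists after close
tmp.close()                             # release the Windows handle

with open(tmp.name, 'w') as f:          # now it can be reopened by name
    f.write('some text')

os.remove(tmp.name)                     # manual clean-up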

Python unit testing: os.remove fails on the file system

I am doing a bit of unit testing on a function which attempts to open a new file, but should fail if the file already exists. When the function runs successfully, the new file is created, so I want to delete it after every test run, but it doesn't seem to be working:
class MyObject_Initialisation(unittest.TestCase):

    def setUp(self):
        if os.path.exists(TEMPORARY_FILE_NAME):
            try:
                os.remove(TEMPORARY_FILE_NAME)
            except WindowsError:
                # TODO: can't figure out how to fix this...
                # time.sleep(3)
                # self.setUp()  # this just loops forever
                pass

    def tearDown(self):
        self.setUp()
Any thoughts? The WindowsError thrown seems to suggest the file is in use... could it be that the tests are run in parallel threads?
I've read elsewhere that it's 'bad practice' to use the filesystem in unit testing, but really? Surely there's a way around this that doesn't involve dummying the filesystem?
If you're just looking for a temporary file, have a look at tempfile - this should handle the clean-up all on its own.
Did you remember to explicitly close the file handle that operates on TEMPORARY_FILE_NAME?
From the Python documentation: "On Windows, attempting to remove a file that is in use causes an exception to be raised."
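If you go the tempfile route, here is a rough sketch of how setUp/tearDown might look; the class name mirrors the question's, and everything else is illustrative:

import os
import tempfile
import unittest

class MyObject_Initialisation(unittest.TestCase):

    def setUp(self):
        # let tempfile pick a unique name and hand back an open descriptor
        fd, self.temp_path = tempfile.mkstemp()
        # close the descriptor straight away: a still-open handle is
        # exactly what makes os.remove raise WindowsError later
        os.close(fd)

    def tearDown(self):
        if os.path.exists(self.temp_path):
            os.remove(self.temp_path)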

Detect file handle leaks in Python?

My program appears to be leaking file handles. How can I find out where?
My program uses file handles in a few different places: output from child processes, ctypes API calls (ImageMagick) that open files, and files that are copied.
It crashes in shutil.copyfile, but I'm pretty sure this is not the place where it is leaking.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Python25\Lib\site-packages\magpy\magpy.py", line 874, in main
magpy.run_all()
File "C:\Python25\Lib\site-packages\magpy\magpy.py", line 656, in run_all
[operation.operate() for operation in operations]
File "C:\Python25\Lib\site-packages\magpy\magpy.py", line 417, in operate
output_file = self.place_image(output_file)
File "C:\Python25\Lib\site-packages\magpy\magpy.py", line 336, in place_image
shutil.copyfile(str(input_file), str(self.full_filename))
File "C:\Python25\Lib\shutil.py", line 47, in copyfile
fdst = open(dst, 'wb')
IOError: [Errno 24] Too many open files: 'C:\\Documents and Settings\\stuart.axon\\Desktop\\calzone\\output\\wwtbam4\\Nokia_NCD\\nl\\icon_42x42_V000.png'
Press any key to continue . . .
I had similar problems, running out of file descriptors during subprocess.Popen() calls. I used the following script to debug on what is happening:
import os
import stat

_fd_types = (
    ('REG', stat.S_ISREG),
    ('FIFO', stat.S_ISFIFO),
    ('DIR', stat.S_ISDIR),
    ('CHR', stat.S_ISCHR),
    ('BLK', stat.S_ISBLK),
    ('LNK', stat.S_ISLNK),
    ('SOCK', stat.S_ISSOCK)
)

def fd_table_status():
    result = []
    for fd in range(100):
        try:
            s = os.fstat(fd)
        except:
            continue
        for fd_type, func in _fd_types:
            if func(s.st_mode):
                break
        else:
            fd_type = str(s.st_mode)
        result.append((fd, fd_type))
    return result

def fd_table_status_logify(fd_table_result):
    return ('Open file handles: ' +
            ', '.join(['{0}: {1}'.format(*i) for i in fd_table_result]))

def fd_table_status_str():
    return fd_table_status_logify(fd_table_status())

if __name__ == '__main__':
    print fd_table_status_str()
You can import this module and call fd_table_status_str() to log the file descriptor table status at different points in your code.
Also, make sure that subprocess.Popen instances are destroyed. Keeping references to Popen instances on Windows prevents the GC from collecting them, and while the instances are kept alive, the associated pipes are not closed. More info here.
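As a hedged illustration of that advice, here is one way to make sure a Popen instance is fully consumed and released; the command line is a hypothetical placeholder:

import subprocess

# 'convert' is just a placeholder command for this sketch
proc = subprocess.Popen(['convert', 'in.png', 'out.png'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()  # reads all output and closes both pipes
# communicate() also waits for the child, so the process is reaped here;
# dropping the last reference lets the GC reclaim the instance
del proc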
Use Process Explorer, select your process, View->Lower Pane View->Handles - then look for what seems out of place - usually lots of the same or similar files open points to the problem.
lsof -p <process_id> works well on several UNIX-like systems including FreeBSD.
Look at the output from ls -l /proc/$pid/fd/ (substituting the PID of your process, of course) to see which files are open [or, on win32, use Process Explorer to list open files]; then figure out where in your code you're opening them, and make sure that close() is being called. (Yes, the garbage collector will eventually close things, but it's not always fast enough to avoid running out of fds).
Checking for any circular references which might be preventing garbage collection is also a good practice. (The cycle collector will eventually dispose of these -- but it may not run frequently enough to avoid file descriptor exhaustion; I've been bitten by this personally).
While the OP has a Windows system, I'm sure plenty of people here (such as myself) are looking for answers on other platforms too (the question isn't even tagged Windows).
There is a psutil package (long hosted on Google Code) with a get_open_files() method. It looks like an excellent interface, but it seems it hasn't been maintained in a couple of years. I actually wrote an implementation for my own Python 2 project on Linux. I'm using it with unittest to make sure my functions clean up their resources.
import os

# calling this **synchronously** will accurately relay open files on Linux
def get_open_files(pid):
    # directory spawned by the Python process, containing its file descriptors
    path = "/proc/%d/fd" % pid
    # list the abspaths belonging to that directory
    links = ["%s/%s" % (path, f) for f in os.listdir(path)]
    # filter out the bad ones returned by os.listdir()
    valid_links = filter(lambda f: os.path.exists(f), links)
    # these links are fd integers, so map them to their actual file devices
    devices = map(lambda f: os.readlink(f), valid_links)
    # remove any that are stdin, stdout, stderr, etc.
    return filter(lambda f: "/dev/pts" not in f, devices)
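For example, a test can snapshot the table before and after the code under test runs; my_function here is a hypothetical stand-in:

import os

before = list(get_open_files(os.getpid()))
my_function()  # hypothetical code under test
after = list(get_open_files(os.getpid()))
assert len(after) == len(before), "file descriptors were leaked"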
Python's own test suite has a refleak module that utilizes fd_count. Works across operating systems and is available on full installs:
>>> from test.support.os_helper import fd_count
>>> fd_count()
27
On Python 3.9 and earlier, os_helper doesn't exist, so use from test.support import fd_count instead.
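As a quick illustration, fd_count() can back an assertion in a unittest; my_function is again a hypothetical stand-in for the code under test:

import unittest
from test.support.os_helper import fd_count  # plain test.support on <= 3.9

class LeakTest(unittest.TestCase):
    def test_no_fd_leak(self):
        before = fd_count()
        my_function()  # hypothetical code under test
        self.assertEqual(fd_count(), before)

if __name__ == '__main__':
    unittest.main()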
