I'm writing a large number of files across several hard disks. The files don't all fit on a single drive, so when the first disk runs out of space I continue writing on the next one. I catch IOError with errno 28 (ENOSPC) to detect this.
My exact problem is that when I try to remove the last, incomplete file written to the full disk, I get a new exception that I don't fully understand. It seems that the with block can't close the file because there is no space left on the disk.
I'm on Windows and the disks are formatted as NTFS.
Could someone please help me?
# Here's a sample script.
# I recommend first filling the disk almost full with a large dummy file.
# On Windows you could create a dummy file with
# 'fsutil file createnew large.txt 1000067000000'
import errno
import os

fill = 'J:/fill.txt'
try:
    with open(fill, 'wb') as f:
        while True:
            n = f.write(b"\0")
except IOError as e:
    if e.errno == errno.ENOSPC:
        os.remove(fill)
Here's the traceback:
Traceback (most recent call last):
File "nospacelef.py", line 8, in <module>
n = f.write(b"\0")
IOError: [Errno 28] No space left on device
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "nospacelef.py", line 8, in <module>
n = f.write(b"\0")
IOError: [Errno 28] No space left on device
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "nospacelef.py", line 11, in <module>
os.remove(fill)
WindowsError: [Error 32] The process cannot access the file because it is being used by another process: 'J:/fill.txt'
Answering my own question.
I filed a bug against Python [1][2]. It was already fixed in 3.3+; there is no fix for 3.2, which I was using. I upgraded my Python version, so I no longer suffer from this problem.
[1] http://bugs.python.org/issue25202
[2] http://bugs.python.org/issue16597
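For anyone stuck on 3.2, here is a minimal sketch of a workaround (my own assumption of how to sidestep the bug, not an official fix): manage the file handle manually, so that a failed close() (which flushes buffered data and can itself hit ENOSPC) cannot prevent deleting the partial file. The raised IOError below merely simulates the disk filling up, so the snippet runs without actually exhausting a drive.

```python
import errno
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "fill_demo.bin")
f = open(path, "wb")
try:
    for _ in range(1000):  # in the real case: while True
        f.write(b"\0")
    # Stand-in for the disk actually filling up:
    raise IOError(errno.ENOSPC, "No space left on device")
except IOError as e:
    if e.errno == errno.ENOSPC:
        try:
            f.close()  # close() flushes, so it can raise ENOSPC again
        except IOError:
            pass
        os.remove(path)  # the handle is closed now, so the delete succeeds

print(os.path.exists(path))  # prints False
```

The inner try/except around close() is the key difference from the with-block version: even if the flush fails, the OS-level handle is released, so Windows allows the remove.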
Related
Whenever I try to write or modify the data of a file in any way, I get this error every time:
OSError: [Errno 9] Bad file descriptor
Here's what I've been trying to do:
# output.txt is a file already created inside the same directory
with open(__file__.rsplit("\\", 1)[0] + "\\output.txt", "w") as f:
    f.write("this should write to 'output.txt'")
I have tried dumping with json and appending normal data, but I keep receiving the same error.
For the example above, here is the entire terminal output after execution:
C:\Users\USER\Documents\Programming\Code\Python\Testing>c:\Users\USER\Documents\Programming\Code\Python\Testing\z_3.py
OSError: [Errno 9] Bad file descriptor
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\USER\Documents\Programming\Code\Python\Testing\z_3.py", line 4, in <module>
f.write("this should write to 'output.txt'")
OSError: [Errno 9] Bad file descriptor
This is all very strange, because I've always been able to write to files normally; I just recently reinstalled Windows and now it won't work.
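As a side note (this is unlikely to be the cause of the EBADF, which usually points at a closed or otherwise invalid handle), the path built by slicing __file__ is fragile. A sketch of the same write using os.path instead, where out_path is just an illustrative name:

```python
import os

# Build the path next to this script with os.path rather than string slicing;
# abspath guards against __file__ being a bare filename with no directory part.
out_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "output.txt")
with open(out_path, "w") as f:
    f.write("this should write to 'output.txt'")
```

os.path.join also works regardless of whether the separators in __file__ are forward or backward slashes.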
I'm new to pyshark, and I wrote a sample piece of code by following the tutorial:
import pyshark
cap = pyshark.FileCapture("input.cap")
cap_1 = cap[0]
and then it gives me this error:
/Users/tingyugu/anaconda3/bin/python /Users/tingyugu/PycharmProjects/final/test.py
Traceback (most recent call last):
File "/Users/tingyugu/anaconda3/lib/python3.6/site-packages/pyshark/capture/file_capture.py", line 70, in __getitem__
next(self)
File "/Users/tingyugu/anaconda3/lib/python3.6/site-packages/pyshark/capture/file_capture.py", line 60, in __next__
packet = self._packet_generator.send(None)
StopIteration
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/tingyugu/PycharmProjects/final/test.py", line 5, in <module>
cap_1 = cap[0]
File "/Users/tingyugu/anaconda3/lib/python3.6/site-packages/pyshark/capture/file_capture.py", line 73, in __getitem__
raise KeyError('Packet of index %d does not exist in capture' % packet_index)
KeyError: 'Packet of index 0 does not exist in capture'
I know the reason is that there are no packets in the capture, but my friend can read the same file with pyshark.
I'm using Python 3.6.0 (Anaconda), and the pyshark in Anaconda is 0.3.7.
If you are on Jupyter, see this issue on the pyshark repo. I had the same problem; it seems pyshark does not play well with Jupyter. I'm going to assume it might have the same issues with IPython as well.
There are some pull requests, like this one on their repo, intended as a fix, but nothing has been merged yet.
Is there a way to get more specific error messages in Python? E.g. the full error code, or at least the line the error occurred on and the exact file that cannot be found, rather than a generic "The system cannot find the file specified".
for file in ['C:/AA/HA.csv', 'C:/AA1/HA1.csv']:
    try:
        os.remove(file)
    except OSError as e:
        pass
        print(getattr(e, 'message', repr(e)))
        #print(e.message)
        #print('File Not Removed')
The following prints twice:
FileNotFoundError(2, 'The system cannot find the file specified')
While this is great, is there a way to get more precise error messages for bug fixing?
The following stops the job, but reports in the console the exact line (855) as well as the file path 'C:/AC/HA.csv':
os.remove('C:/AA/HA.csv')
Traceback (most recent call last):
File "C:/ACA.py", line 855, in <module>
os.remove('C:/AC/HA.csv')
FileNotFoundError: [WinError 2] The system cannot find the file specified: ''C:/AC/HA.csv''
See the traceback module:
import os
import traceback

for file in ['C:/AA/HA.csv', 'C:/AA1/HA1.csv']:
    try:
        os.remove(file)
    except OSError as e:
        traceback.print_exc()
Output:
Traceback (most recent call last):
File "C:\test.py", line 6, in <module>
os.remove(file)
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:/AA/HA.csv'
Traceback (most recent call last):
File "C:\test.py", line 6, in <module>
os.remove(file)
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:/AA1/HA1.csv'
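Beyond printing the traceback, the exception object itself carries structured fields (errno, strerror, filename, and winerror on Windows), so there is no need to parse the message string. A minimal sketch, using a deliberately missing file name:

```python
import errno
import os

try:
    os.remove("definitely-missing.csv")
except OSError as e:
    print(e.errno == errno.ENOENT)  # prints True; machine-readable error code
    print(e.filename)               # prints definitely-missing.csv
    print(e.strerror)               # the OS-provided description
```

Comparing e.errno against the constants in the errno module lets you branch on the exact failure instead of matching message text.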
My Python interpreter (v2.6.5) raises the above error in the following code:
import os

fd = open("some_filename", "r")
fd.seek(-2, os.SEEK_END)  # the same happens if you replace the second argument with 2
data = fd.read(2)
The failing call is fd.seek():
Traceback (most recent call last):
File "bot.py", line 250, in <module>
fd.seek(iterator, os.SEEK_END);
IOError: [Errno 22] Invalid argument
The strange thing is that the exception occurs only when I execute my entire program, not when I run just the part that opens the file on its own.
When this part of the code runs, the opened file definitely exists, the disk is not full, and the variable iterator contains a correct value, as in the first code block.
What could be my mistake?
Thanks in advance.
From lseek(2):
EINVAL
whence is not one of SEEK_SET,
SEEK_CUR, SEEK_END; or the resulting
file offset would be negative, or
beyond the end of a seekable device.
So double-check the value of iterator.
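One common way to hit EINVAL with SEEK_END is a file shorter than the requested negative offset, which would make the resulting position negative. A sketch of a guard, with read_tail as a hypothetical helper name: clamp the offset to the file size, and open in binary mode, since end-relative seeks on text-mode files are unreliable.

```python
import os

def read_tail(path, n):
    """Return the last n bytes of path, clamping the end-relative offset so
    the resulting position can never be negative (which raises EINVAL)."""
    with open(path, "rb") as f:
        size = os.path.getsize(path)
        f.seek(-min(n, size), os.SEEK_END)
        return f.read(n)
```

For example, read_tail("some_filename", 2) returns the final two bytes, and still works when the file holds fewer than two.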
I am working with large matrices, so I am using NumPy's memmap. However, I am getting an error, as apparently the file descriptors used by memmap are not being closed.
import numpy
import os
import tempfile

counter = 0
while True:
    temp_fd, temporary_filename = tempfile.mkstemp(suffix='.memmap')
    map = numpy.memmap(temporary_filename, dtype=float, mode="w+", shape=1000)
    counter += 1
    print counter
    map.close()
    os.remove(temporary_filename)
From what I understand, the memmap file is closed when the close() method is called. However, the code above cannot loop forever, as it eventually throws a "[Errno 24] Too many open files" error:
1016
1017
1018
1019
Traceback (most recent call last):
File "./memmap_loop.py", line 11, in <module>
File "/usr/lib/python2.5/site-packages/numpy/core/memmap.py", line 226, in __new__
EnvironmentError: [Errno 24] Too many open files
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib/python2.5/site-packages/apport_python_hook.py", line 38, in apport_excepthook
ImportError: No module named packaging_impl
Original exception was:
Traceback (most recent call last):
File "./memmap_loop.py", line 11, in <module>
File "/usr/lib/python2.5/site-packages/numpy/core/memmap.py", line 226, in __new__
EnvironmentError: [Errno 24] Too many open files
Does anybody know what I am overlooking?
Since numpy.memmap takes the file name rather than the open file descriptor, I suppose you are leaking the temp_fd file descriptor. Does os.close(temp_fd) help?
Great that it works.
Since you can pass numpy.memmap a file-like object, you could instead create one from the file descriptor you already have, temp_fd:
fobj = os.fdopen(temp_fd, "w+")
map = numpy.memmap(fobj, dtype=float, mode="w+", shape=1000)
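Putting the fix together, a sketch of the loop with the leaked descriptor closed explicitly (bounded to 100 iterations here instead of while True, and written for Python 3):

```python
import os
import tempfile

import numpy

for counter in range(100):
    temp_fd, temporary_filename = tempfile.mkstemp(suffix=".memmap")
    os.close(temp_fd)  # mkstemp already opened the file; memmap reopens it by name
    m = numpy.memmap(temporary_filename, dtype=float, mode="w+", shape=1000)
    m.flush()
    del m  # release the memmap's own handle before unlinking the file
    os.remove(temporary_filename)

print(counter + 1)  # 100 iterations with no "Too many open files"
```

Each iteration now opens and closes exactly two descriptors (mkstemp's and the memmap's), so the count of open files stays constant no matter how long the loop runs.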