Is there a way to cache Python file handles? I have a function that takes a netCDF file path as input, opens the file, extracts some data from it, and closes it. The function gets called many times, and the overhead of opening the file on each call is high.
How can I make this faster, perhaps by caching the file handle? Is there a Python library to do this?
Yes, you can use the following Python libraries:
dill (required)
python-memcached (optional)
Let's walk through an example. You have two files:
# save.py - serializes a file handle object and stores it in memcached
import dill
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
file_handler = open('data.txt', 'r')
mc.set("file_handler", dill.dumps(file_handler))
print 'saved!'
and
# read_from_file.py - fetches the serialized file handle object from memcached,
# deserializes it, and reads lines from it
import dill
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
file_handler = dill.loads(mc.get("file_handler"))
print file_handler.readlines()
Now if you run:
python save.py
python read_from_file.py
you can get what you want.
Why does it work?
Because dill does not transfer the OS-level handle itself: it pickles the file's path and mode, and dill.loads() reopens that path on load. That is why the restored handle is usable even in a different process; note that it is a fresh handle onto the same file, not the original object kept alive in memory.
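For example, a minimal round-trip sketch of that mechanism (the file name and contents are arbitrary):
import dill

with open('data.txt', 'w') as f:
    f.write('hello\n')

fh = open('data.txt', 'r')
payload = dill.dumps(fh)       # records the path and mode, not the OS handle
restored = dill.loads(payload) # reopens 'data.txt' on load
print restored.readline()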
Solution
import dill
import memcache

mc = memcache.Client(['127.0.0.1:11211'], debug=0)
serialized = mc.get("file_handler")
if serialized:
    file_handler = dill.loads(serialized)
else:
    file_handler = open('data.txt', 'r')
    mc.set("file_handler", dill.dumps(file_handler))
print file_handler.readlines()
What about this?
filehandle = None

def get_filehandle(filename):
    global filehandle
    if filehandle is None or filehandle.closed:
        filehandle = open(filename, "r")
    return filehandle
You may want to encapsulate this into a class to prevent other code from messing with the filehandle variable.
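For instance, a minimal sketch of that encapsulation (the class name and the per-path dictionary are my own choices):
class FileHandleCache(object):
    """Keeps one open handle per path and reuses it across calls."""
    def __init__(self):
        self._handles = {}

    def get(self, filename):
        handle = self._handles.get(filename)
        if handle is None or handle.closed:
            handle = open(filename, "r")
            self._handles[filename] = handle
        return handle

    def close_all(self):
        # Close everything explicitly, e.g. at program exit.
        for handle in self._handles.values():
            handle.close()
        self._handles.clear()
For the netCDF case you would cache the netCDF dataset objects rather than raw handles, but the pattern is the same.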
Related
I have a Matlab application that writes to a .csv file and a Python script that reads from it. These operations happen concurrently, each at its own period (not necessarily the same). All of this runs on Windows 7.
I wish to know:
Would the OS inherently provide some sort of locking mechanism so that only one of the two applications, Matlab or Python, has access to the shared file?
In the Python application, how do I check whether the file has already been opened by the Matlab application? What loop structure would block the Python application until it gets access to read the file?
I am not sure about Windows' API for locking files.
Here's a possible solution:
While Matlab has the file open, you create an empty file called "data.lock" or something to that effect.
When Python tries to read the file, it will check for the lock file, and if it is there, it will sleep for a given interval.
When Matlab is done with the file, it can delete the "data.lock" file.
It's a programmatic solution, but it is simpler than digging through the Windows API and finding the right calls in Matlab and Python.
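A minimal sketch of the polling loop on the Python side (the lock-file name, the poll interval, and the function name are assumptions):
import os
import time

LOCK_PATH = "data.lock"  # created by Matlab before writing, deleted when done
POLL_SECONDS = 0.5

def read_when_unlocked(csv_path):
    # Block until the lock file disappears, then read the shared file.
    while os.path.exists(LOCK_PATH):
        time.sleep(POLL_SECONDS)
    with open(csv_path, "r") as f:
        return f.read()
Note that this is advisory only: Matlab could recreate the lock between the existence check and the open, so it suits low-stakes coordination rather than strict mutual exclusion.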
If Python is only reading the file, I believe you have to lock it in MATLAB, because a read-only open call from Python may not fail. I am not sure how to accomplish that; you may want to read this question: atomically creating a file lock in MATLAB (file mutex).
However, if you are simply consuming the data with Python, did you consider using a socket instead of a file?
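As a rough illustration of the socket idea, here is a minimal receiving side in Python; the port number and the one-record-per-line framing are my assumptions, with Matlab as the connecting sender:
import socket

HOST, PORT = "127.0.0.1", 9999  # Matlab would connect here

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, PORT))
server.listen(1)
conn, addr = server.accept()
try:
    buf = ""
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        buf += chunk
        while "\n" in buf:
            line, buf = buf.split("\n", 1)
            print line  # hand one CSV record to the consumer
finally:
    conn.close()
    server.close()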
In Windows on the Python side, CreateFile can be called (directly or indirectly via the CRT) with a specific sharing mode. For example, if the desired sharing mode is FILE_SHARE_READ, then the open will fail if the file is already open for writing. If the sharing-mode open succeeds instead, then a future attempt to open the file for writing will fail (e.g. in Matlab).
The Windows CRT function _wsopen_s allows setting the sharing mode. You can call it with ctypes in a Python 3 opener:
import sys
import os
import ctypes
import ctypes.util

__all__ = ['shdeny', 'shdeny_write', 'shdeny_read']

_SH_DENYRW = 0x10 # deny read/write mode
_SH_DENYWR = 0x20 # deny write mode
_SH_DENYRD = 0x30 # deny read
_S_IWRITE = 0x0080 # for O_CREAT, a new file is not readonly

if sys.version_info[:2] < (3,5):
    _wsopen_s = ctypes.CDLL(ctypes.util.find_library('c'))._wsopen_s
else:
    # find_library('c') may be deprecated on Windows in 3.5, if the
    # universal CRT removes named exports. The following probably
    # isn't future proof; I don't know how the '-l1-1-0' suffix
    # should be handled.
    _wsopen_s = ctypes.CDLL('api-ms-win-crt-stdio-l1-1-0')._wsopen_s

_wsopen_s.argtypes = (ctypes.POINTER(ctypes.c_int), # pfh
                      ctypes.c_wchar_p,             # filename
                      ctypes.c_int,                 # oflag
                      ctypes.c_int,                 # shflag
                      ctypes.c_int)                 # pmode

def shdeny(file, flags):
    fh = ctypes.c_int()
    err = _wsopen_s(ctypes.byref(fh),
                    file, flags, _SH_DENYRW, _S_IWRITE)
    if err:
        raise IOError(err, os.strerror(err), file)
    return fh.value

def shdeny_write(file, flags):
    fh = ctypes.c_int()
    err = _wsopen_s(ctypes.byref(fh),
                    file, flags, _SH_DENYWR, _S_IWRITE)
    if err:
        raise IOError(err, os.strerror(err), file)
    return fh.value

def shdeny_read(file, flags):
    fh = ctypes.c_int()
    err = _wsopen_s(ctypes.byref(fh),
                    file, flags, _SH_DENYRD, _S_IWRITE)
    if err:
        raise IOError(err, os.strerror(err), file)
    return fh.value
For example:
if __name__ == '__main__':
    import tempfile
    filename = tempfile.mktemp()

    fw = open(filename, 'w')
    fw.write('spam')
    fw.flush()

    fr = open(filename)
    assert fr.read() == 'spam'

    try:
        f = open(filename, opener=shdeny_write)
    except PermissionError:
        fw.close()
        with open(filename, opener=shdeny_write) as f:
            assert f.read() == 'spam'

    try:
        f = open(filename, opener=shdeny_read)
    except PermissionError:
        fr.close()
        with open(filename, opener=shdeny_read) as f:
            assert f.read() == 'spam'

    with open(filename, opener=shdeny) as f:
        assert f.read() == 'spam'

    os.remove(filename)
In Python 2 you'll have to combine the above openers with os.fdopen, e.g.:
f = os.fdopen(shdeny_write(filename, os.O_RDONLY|os.O_TEXT), 'r')
Or define an sopen wrapper that lets you pass the share mode explicitly and calls os.fdopen to return a Python 2 file. This will require a bit more work to derive the file mode from the passed-in flags, or vice versa.
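A rough sketch of such a wrapper, reusing _wsopen_s and the share-mode constants defined above (the name sopen, and the assumption that the caller keeps flags and mode consistent, are mine):
def sopen(filename, flags, shflag, mode='r'):
    # Open with an explicit share mode and wrap the CRT file
    # descriptor in a Python 2 file object.
    fh = ctypes.c_int()
    err = _wsopen_s(ctypes.byref(fh), filename, flags, shflag, _S_IWRITE)
    if err:
        raise IOError(err, os.strerror(err), filename)
    return os.fdopen(fh.value, mode)

# e.g. a read-only text open that denies other writers:
# f = sopen(u'data.txt', os.O_RDONLY | os.O_TEXT, _SH_DENYWR)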
The goal is to download a file from the internet and create from it a file object, or a file-like object, without it ever touching the hard drive. This is just for my knowledge; I want to know if it's possible or practical, particularly because I would like to see if I can avoid having to code a file deletion line.
This is how I would normally download something from the web, and map it to memory:
import requests
import mmap

u = requests.get("http://www.pythonchallenge.com/pc/def/channel.zip")

with open("channel.zip", "wb") as f: # I want to eliminate this, as this writes to disk
    f.write(u.content)

with open("channel.zip", "r+b") as f: # and this as well, because it reads from disk
    mm = mmap.mmap(f.fileno(), 0)
    mm.seek(0)
    print mm.readline()
    mm.close() # question: if I do not include this, does this become a memory leak?
r.raw (HTTPResponse) is already a file-like object (just pass stream=True):
#!/usr/bin/env python
import sys
import requests # $ pip install requests
from PIL import Image # $ pip install pillow
url = sys.argv[1]
r = requests.get(url, stream=True)
r.raw.decode_content = True # Content-Encoding
im = Image.open(r.raw) #NOTE: it requires pillow 2.8+
print(im.format, im.mode, im.size)
In general if you have a bytestring; you could wrap it as f = io.BytesIO(r.content), to get a file-like object without touching the disk:
#!/usr/bin/env python
import io
import zipfile
from contextlib import closing
import requests # $ pip install requests

r = requests.get("http://www.pythonchallenge.com/pc/def/channel.zip")
with closing(r), zipfile.ZipFile(io.BytesIO(r.content)) as archive:
    print({member.filename: archive.read(member) for member in archive.infolist()})
You can't pass r.raw to ZipFile() directly because the former is a non-seekable file.
I would like to see if I can circumvent having to code a file deletion line
tempfile can delete files automatically: f = tempfile.SpooledTemporaryFile(); f.write(u.content). Until the .fileno() method is called (if some API requires a real file) or maxsize is reached, the data is kept in memory. Even if the data is written to disk, the file is deleted as soon as it is closed.
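A minimal sketch of that approach (the 1 MiB spill threshold is an arbitrary choice):
import tempfile
import requests

u = requests.get("http://www.pythonchallenge.com/pc/def/channel.zip")

# Stays in memory until it grows past max_size, then spills to disk;
# either way any backing file disappears as soon as it is closed.
f = tempfile.SpooledTemporaryFile(max_size=1024 * 1024)
f.write(u.content)
f.seek(0)
print f.readline()
f.close()  # nothing left on disk to clean up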
Your answer is u.content. The content is in the memory. Unless you write it to a file, it won’t be stored on disk.
This is what I ended up doing.
import zipfile
import requests
import StringIO

u = requests.get("http://www.pythonchallenge.com/pc/def/channel.zip")
f = StringIO.StringIO()
f.write(u.content)

def extract_zip(input_zip):
    input_zip = zipfile.ZipFile(input_zip)
    return {i: input_zip.read(i) for i in input_zip.namelist()}

extracted = extract_zip(f)
I have the following view code that attempts to "stream" a zipfile to the client for download:
import os
import zipfile
import tempfile
from pyramid.response import FileIter

def zipper(request):
    _temp_path = request.registry.settings['_temp']
    tmpfile = tempfile.NamedTemporaryFile('w', dir=_temp_path, delete=True)
    tmpfile_path = tmpfile.name

    ## creating zipfile and adding files
    z = zipfile.ZipFile(tmpfile_path, "w")
    z.write('somefile1.txt')
    z.write('somefile2.txt')
    z.close()

    ## renaming the zipfile
    new_zip_path = _temp_path + '/somefilegroup.zip'
    os.rename(tmpfile_path, new_zip_path)

    ## re-opening the zipfile with new name
    z = zipfile.ZipFile(new_zip_path, 'r')

    response = FileIter(z.fp)
    return response
However, this is the Response I get in the browser:
Could not convert return value of the view callable function newsite.static.zipper into a response object. The value returned was .
I suppose I am not using FileIter correctly.
UPDATE:
Since updating with Michael Merickel's suggestions, the FileIter function is working correctly. However, a MIME type error still appears on the client (browser):
Resource interpreted as Document but transferred with MIME type application/zip: "http://newsite.local:6543/zipper?data=%7B%22ids%22%3A%5B6%2C7%5D%7D"
To better illustrate the issue, I have included a tiny .py and .pt file on Github: https://github.com/thapar/zipper-fix
FileIter is not a response object, just like your error message says. It is an iterable that can be used for the response body, that's it. Also the ZipFile can accept a file object, which is more useful here than a file path. Let's try writing into the tmpfile, then rewinding that file pointer back to the start, and using it to write out without doing any fancy renaming.
import os
import zipfile
import tempfile
from pyramid.response import FileIter

def zipper(request):
    _temp_path = request.registry.settings['_temp']
    fp = tempfile.NamedTemporaryFile('w+b', dir=_temp_path, delete=True)

    ## creating zipfile and adding files
    z = zipfile.ZipFile(fp, "w")
    z.write('somefile1.txt')
    z.write('somefile2.txt')
    z.close()

    # rewind fp back to start of the file
    fp.seek(0)

    response = request.response
    response.content_type = 'application/zip'
    response.app_iter = FileIter(fp)
    return response
I changed the mode on NamedTemporaryFile to 'w+b' as per the docs to allow the file to be written to and read from.
The current Pyramid version has two convenience classes for this use case: FileResponse and FileIter. The snippet below will serve a static file. I ran this code; the downloaded file is named "download", like the view name. To change the file name and more, set the Content-Disposition header or have a look at the arguments of pyramid.response.Response.
from pyramid.response import FileResponse
from pyramid.view import view_config

@view_config(name="download")
def zipper(request):
    path = 'path_to_file'
    return FileResponse(path, request) # passing request is required
docs:
http://docs.pylonsproject.org/projects/pyramid/en/latest/api/response.html#
Hint: extract the zip-building logic from the view if possible.
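For instance, a sketch of that separation (the helper name build_zip and the hard-coded file list are mine):
import tempfile
import zipfile
from pyramid.response import FileIter

def build_zip(fileobj, paths):
    # Write the named files into an already-open file object and rewind it.
    z = zipfile.ZipFile(fileobj, 'w')
    for path in paths:
        z.write(path)
    z.close()
    fileobj.seek(0)

def zipper(request):
    fp = tempfile.NamedTemporaryFile('w+b', delete=True)
    build_zip(fp, ['somefile1.txt', 'somefile2.txt'])
    response = request.response
    response.content_type = 'application/zip'
    response.app_iter = FileIter(fp)
    return response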
I am wondering whether there is a way to upload a zip file to a Django web server and put the zip's files into the Django database WITHOUT touching the actual file system in the process (e.g. extracting the files in the zip into a tmp dir and then loading them).
Django provides a function to convert a Python File to a Django File, so if there is a way to convert a ZipExtFile to a Python File, it should be fine.
Thanks for the help!
Django model:
from django.db import models
class Foo(models.Model):
    file = models.FileField(upload_to='somewhere')
Usage:
from zipfile import ZipFile
from django.core.exceptions import ValidationError
from django.core.files import File
from io import BytesIO
z = ZipFile('zipFile')
istream = z.open('subfile')
ostream = BytesIO(istream.read())
tmp = Foo(file=File(ostream))
try:
    tmp.full_clean()
except ValidationError, e:
    print e
Output:
{'file': [u'This field cannot be blank.']}
[SOLUTION] Solution using an ugly hack:
As correctly pointed out by Don Quest, file-like classes such as StringIO or BytesIO should represent the data as a virtual file. However, the Django File constructor only accepts the built-in file type and nothing else, even though the file-like classes would have done the job as well. The hack is to set the variables of the Django File manually:
buf = bytearray(OPENED_ZIP_OBJECT.read(FILE_NAME))
tmp_file = BytesIO(buf)
dummy_file = File(tmp_file) # this line actually fails
dummy_file.name = SOME_RANDOM_NAME
dummy_file.size = len(buf)
dummy_file.file = tmp_file
# dummy file is now valid
Please keep commenting if you have a better solution (except for custom storage)
There's an easier way to do this:
import zipfile
from django.core.files.base import ContentFile

uploaded_zip = zipfile.ZipFile(uploaded_file, 'r') # ZipFile
for filename in uploaded_zip.namelist():
    with uploaded_zip.open(filename) as f: # ZipExtFile
        my_django_file = ContentFile(f.read())
Using this, you can convert a file that was uploaded to memory directly to a Django file. For a more complete example, let's say you wanted to upload a series of image files inside of a zip to the file system:
# some_app/models.py
class Photo(models.Model):
    image = models.ImageField(upload_to='some/upload/path')
    ...

# Upload code
from some_app.models import Photo

for filename in uploaded_zip.namelist():
    with uploaded_zip.open(filename) as f: # ZipExtFile
        new_photo = Photo()
        new_photo.image.save(filename, ContentFile(f.read()), save=True)
Without knowing too much about Django, I can tell you to take a look at the "io" package.
You could do something like:
from zipfile import ZipFile
from io import StringIO
zname, zipextfile = 'zipcontainer.zip', 'file_in_archive'
istream = ZipFile(zname).open(zipextfile)
ostream = StringIO(istream.read())
And then do whatever you would like to do with your "virtual" ostream Stream/File.
I've used the following Django file class to avoid the need to read the ZipExtFile into another data structure (StringIO or BytesIO) while properly implementing what Django needs in order to save the file directly.
from django.core.files.base import File

class DjangoZipExtFile(File):
    def __init__(self, zipextfile, zipinfo):
        self.file = zipextfile
        self.zipinfo = zipinfo
        self.mode = 'r'
        self.name = zipinfo.filename
        self._size = zipinfo.file_size

    def seek(self, position):
        if position != 0:
            # this will raise an unsupported operation
            return self.file.seek(position)
        # TODO: if we have already done a read, reopen the file
from django.core.files.storage import DefaultStorage

zipextfile = archive.open(path, 'r')
zipinfo = archive.getinfo(path)
djangofile = DjangoZipExtFile(zipextfile, zipinfo)
storage = DefaultStorage()
result = storage.save(djangofile.name, djangofile)
I need to download a zip archive of text files, dispatch each text file in the archive to other handlers for processing, and finally write the unzipped text file to disk.
I have the following code. It uses multiple open/close on the same file, which does not seem elegant. How do I make it more elegant and efficient?
import urllib
import zipfile
import cStringIO

zipped = urllib.urlopen('http://www.abc.com/xyz.zip')
buf = cStringIO.StringIO(zipped.read())
zipped.close()
unzipped = zipfile.ZipFile(buf, 'r')
for f_info in unzipped.infolist():
    logfile = unzipped.open(f_info)
    handler1(logfile)
    logfile.close() ## Cannot seek(0). The file-like obj does not support seek()
    logfile = unzipped.open(f_info)
    handler2(logfile)
    logfile.close()
    unzipped.extract(f_info)
Your answer is in your example code. Just use StringIO to buffer the logfile:
import urllib
import zipfile
import cStringIO

zipped = urllib.urlopen('http://www.abc.com/xyz.zip')
buf = cStringIO.StringIO(zipped.read())
zipped.close()
unzipped = zipfile.ZipFile(buf, 'r')
for f_info in unzipped.infolist():
    logfile = unzipped.open(f_info)
    # Here's where we buffer:
    logbuffer = cStringIO.StringIO(logfile.read())
    logfile.close()
    for handler in [handler1, handler2]:
        handler(logbuffer)
        # StringIO objects support seek():
        logbuffer.seek(0)
    unzipped.extract(f_info)
You could say something like:
handler_dispatch(logfile)
and
def handler_dispatch(file):
    for line in file:
        handler1(line)
        handler2(line)
or even make it more dynamic by constructing a Handler class with multiple handlerN functions and applying each of them inside handler_dispatch, like:
class Handler:
    def __init__(self):
        self.handlers = []

    def add_handler(self, handler):
        self.handlers.append(handler)

    def handler_dispatch(self, file):
        for line in file:
            for handler in self.handlers:
                handler.handle(line)
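A possible usage sketch, assuming each registered handler is an object with a handle(line) method (the two handler classes here are purely illustrative):
class PrintHandler:
    def handle(self, line):
        print line

class CountHandler:
    def __init__(self):
        self.count = 0
    def handle(self, line):
        self.count += 1

dispatcher = Handler()
dispatcher.add_handler(PrintHandler())
dispatcher.add_handler(CountHandler())
dispatcher.handler_dispatch(logfile)  # logfile: any iterable of lines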
Open the zip file once, loop through all the names, extract the file for each name and process it, then write it to disk.
Like so:
for f_info in unzipped.infolist():
    file = unzipped.open(f_info)
    data = file.read()
    # If you need a file-like object, wrap it in a cStringIO
    fobj = cStringIO.StringIO(data)
    handler1(fobj)
    fobj.seek(0) # rewind so the next handler reads from the start
    handler2(fobj)
    with open(f_info.filename, "w") as fp:
        fp.write(data)
You get the idea.