Here is my problem: I have a file in HDFS which can potentially be huge (i.e., too large to fit entirely in memory).
What I would like to do is avoid having to cache this file in memory, and only process it line by line like I would do with a regular file:
for line in open("myfile", "r"):
    # do some processing
I am looking to see if there is an easy way to get this done right without using external libraries. I can probably make it work with libpyhdfs or python-hdfs, but I'd like, if possible, to avoid introducing new dependencies and untested libraries into the system, especially since both of these don't seem heavily maintained and state that they shouldn't be used in production.
I was thinking of doing this with the standard "hadoop" command-line tools using the Python subprocess module, but I can't seem to do what I need, since there are no command-line tools that would do my processing, and I would like to execute a Python function for every line in a streaming fashion.
Is there a way to apply Python functions as right operands of the pipes using the subprocess module? Or even better, open it like a file as a generator so I could process each line easily?
cat = subprocess.Popen(["hadoop", "fs", "-cat", "/path/to/myfile"], stdout=subprocess.PIPE)
If there is another way to achieve what I described above without using an external library, I'm also pretty open.
Thanks for any help!
You want xreadlines; it reads lines from a file without loading the whole file into memory.
Edit:
Now I see your question, you just need to get the stdout pipe from your Popen object:
cat = subprocess.Popen(["hadoop", "fs", "-cat", "/path/to/myfile"], stdout=subprocess.PIPE)
for line in cat.stdout:
    print line
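If you want the generator interface you asked about, a thin wrapper over that pipe will do. A minimal sketch (hdfs_lines is just an illustrative name), which also waits on the subprocess so you don't leave zombies around:

import subprocess

def hdfs_lines(path):
    """Yield lines from an HDFS file, streaming via `hadoop fs -cat`."""
    cat = subprocess.Popen(["hadoop", "fs", "-cat", path],
                           stdout=subprocess.PIPE)
    try:
        for line in cat.stdout:
            yield line
    finally:
        cat.stdout.close()
        cat.wait()

for line in hdfs_lines("/path/to/myfile"):
    pass  # do some processing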
If you want to avoid adding external dependencies at any cost, Keith's answer is the way to go. Pydoop, on the other hand, could make your life much easier:
import pydoop.hdfs as hdfs

with hdfs.open('/user/myuser/filename') as f:
    for line in f:
        do_something(line)
Regarding your concerns, Pydoop is actively developed and has been used in production for years at CRS4, mostly for computational biology applications.
Simone
In the last two years, there has been a lot of movement around Hadoop Streaming. It is pretty fast according to Cloudera: http://blog.cloudera.com/blog/2013/01/a-guide-to-python-frameworks-for-hadoop/ I've had good success with it.
You can use the WebHDFS Python Library (built on top of urllib3):
from hdfs import InsecureClient
from json import dump  # pickle.dump works the same way

client_hdfs = InsecureClient('http://host:port', user='root')
with client_hdfs.write(access_path) as writer:
    dump(records, writer)  # tested for pickle and json (doesn't work for joblib)
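For the original line-by-line reading use case, the same library also supports reading. A hedged sketch, assuming HdfsCLI's read() context manager with its delimiter option (access_path is the same placeholder path used above):

# sketch: line-by-line streaming read with the same InsecureClient
with client_hdfs.read(access_path, encoding='utf-8', delimiter='\n') as reader:
    for line in reader:
        do_something(line)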
Or you can use the requests package in Python as:

import requests
from json import dumps

params = (('op', 'CREATE'),
          ('buffersize', 256))
data = dumps(file)  # some file or object - also tested for the pickle library
response = requests.put('http://host:port/path', params=params, data=data)  # response 200 = successful
Hope this helps!
Related
I have a lot of pickle files. Currently I read them in a loop but it takes a lot of time. I would like to speed it up but don't have any idea how to do that.
Multiprocessing wouldn't work because, in order to transfer data from a child subprocess to the main process, the data needs to be serialized (pickled) and deserialized.
Using threading wouldn't help either because of the GIL.
I think that the solution would be some library written in C that takes a list of files to read and then runs multiple threads (without GIL). Is there something like this around?
UPDATE
Answering your questions:
Files are partial products of data processing for the purpose of ML
The files contain pandas.Series objects, but the dtype is not known upfront
I want to have many files because we want to pick any subset easily
I want to have many smaller files instead of one big file because deserialization of one big file takes more memory (at some point in time we hold both the serialized string and the deserialized objects)
The size of the files can vary a lot
I use python 3.7 so I believe it's cPickle in fact
Using pickle is very flexible because I don't have to worry about underlying types - I can save anything
I agree with what has been noted in the comments, namely that due to the constraints of Python itself (chiefly the GIL, as you noted), there may simply be no faster way of loading the information than what you are doing now. Or, if there is a way, it may be both highly technical and, in the end, give you only a modest increase in speed.
That said, depending on the datatypes you have, it may be faster to use quickle or pyrobuf.
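A hedged sketch of what using quickle might look like, assuming its pickle-style dumps/loads API:

# hedged sketch, assuming quickle exposes quickle.dumps / quickle.loads;
# note it handles most built-in types but, unlike pickle, not arbitrary
# user-defined classes
import quickle

blob = quickle.dumps({'a': [1, 2, 3], 'b': 'text'})
restored = quickle.loads(blob)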
I think that the solution would be some library written in C that takes a list of files to read and then runs multiple threads (without GIL). Is there something like this around?
In short: no. pickle is apparently good enough for enough people that there are no major alternative implementations fully compatible with the pickle protocol. As of Python 3, cPickle was merged into pickle (as the _pickle C module), and neither releases the GIL anyway, which is why threading won't help you (search for Py_BEGIN_ALLOW_THREADS in _pickle.c and you will find nothing).
If your data can be restructured into a simpler data format like CSV, or a binary format like numpy's npy, there will be less CPU overhead when reading your data. Pickle is built for flexibility first rather than speed or compactness. One possible exception to the more-complex-means-slower rule is the HDF5 format via h5py, which can be fairly complex, and which I have used to max out the bandwidth of a SATA SSD.
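For example, with numpy arrays the npy route is a one-liner each way (a sketch, not benchmarked here):

import numpy as np

arr = np.random.rand(1_000_000)
np.save('data.npy', arr)                     # simple binary format, low CPU overhead
loaded = np.load('data.npy', mmap_mode='r')  # optionally memory-mapped on load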
Finally, you mention you have many, many pickle files, and that itself is probably causing no small amount of overhead: each time you open a new file, there's some overhead involved from the operating system. Conveniently, you can combine pickle files by simply appending them together; you can then call Unpickler.load() until you reach the end of the file. Here's a quick example of combining two pickle files together using shutil:
import pickle, shutil, os

# some dummy data
d1 = {'a': 1, 'b': 2, 1: 'a', 2: 'b'}
d2 = {'c': 3, 'd': 4, 3: 'c', 4: 'd'}

# create two pickles
with open('test1.pickle', 'wb') as f:
    pickle.Pickler(f).dump(d1)
with open('test2.pickle', 'wb') as f:
    pickle.Pickler(f).dump(d2)

# combine list of pickle files
with open('test3.pickle', 'wb') as dst:
    for pickle_file in ['test1.pickle', 'test2.pickle']:
        with open(pickle_file, 'rb') as src:
            shutil.copyfileobj(src, dst)

# unpack the data
with open('test3.pickle', 'rb') as f:
    p = pickle.Unpickler(f)
    while True:
        try:
            print(p.load())
        except EOFError:
            break

# cleanup
os.remove('test1.pickle')
os.remove('test2.pickle')
os.remove('test3.pickle')
I think you should try using mmap (memory-mapped files), which is similar to open() but can be much faster.
Note: if each of your files is big, use mmap; if the files are small, use the regular methods.
I have written a sample that you can try.
import mmap
import pickle
from time import perf_counter as pf

def load_files(filelist):
    start = pf()  # for rough time calculations
    for filename in filelist:
        # pickle data is binary, so open the file in "rb" mode;
        # mmap only needs the file descriptor from the file object
        with open(filename, mode="rb") as file_obj:
            with mmap.mmap(file_obj.fileno(), length=0, access=mmap.ACCESS_READ) as mmap_file_obj:
                data = pickle.load(mmap_file_obj)
                print(data)
    print(f'Operation took {pf()-start} sec(s)')
Here mmap.ACCESS_READ maps the file read-only. The file_obj returned by open is just used to get the file descriptor, which is used to open the stream to the file via mmap as a memory-mapped file.
As you can see below in the documentation of os.open, it returns the file descriptor, or fd for short. So we don't have to do anything with the file_obj operation-wise; we just need its fileno() method to get its file descriptor. Also, we are not closing the file_obj before the mmap_file_obj; please take a proper look: we are closing the mmap block first, as you said in your comment.
os.open(file, flags[, mode])
Open the file file and set various flags according to flags and possibly its mode according to mode.
The default mode is 0777 (octal), and the current umask value is first masked out.
Return the file descriptor for the newly opened file.
Give it a try and see how much impact it has on your operation.
You can read more about mmap here. And about file descriptor here
You can try multiprocessing:
import os, pickle
from multiprocessing import Pool

pickle_list = os.listdir("pickles")
output_dict = dict.fromkeys(pickle_list, '')

def pickle_process_func(picklename):
    with open("pickles/" + picklename, 'rb') as file:
        dapickle = pickle.load(file)
    # if you need the previous file's output, wait for it
    # (caveat: a plain dict is NOT shared between Pool workers; for this
    # to actually work you would need e.g. a multiprocessing.Manager dict)
    while not output_dict[pickle_list[pickle_list.index(picklename) - 1]]:
        continue
    # then do something
    print("loaded")
    output_dict[picklename] = custom_func_i_dunno(dapickle)  # your processing function

with Pool(processes=10) as pool:
    pool.map(pickle_process_func, pickle_list)
Consider using HDF5 via h5py instead of pickle. The performance is generally much better than pickle for numerical data in pandas and numpy data structures, and it supports most common data types and compression.
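A minimal sketch of what that looks like (the dataset name and compression settings here are arbitrary choices):

import numpy as np
import h5py

# write a compressed dataset
with h5py.File('data.h5', 'w') as f:
    f.create_dataset('series', data=np.arange(1000), compression='gzip')

# read it back
with h5py.File('data.h5', 'r') as f:
    values = f['series'][:]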
I'm looking for the fastest way to generate fake files for a stress test, where it's possible to pass the file size. Currently I'm using a simple

with open("{}".format(i), 'wb') as f:
    f.write(os.urandom(FILE_SIZE))

but for my case, creating each file takes too long. It seems the Faker library does not have a method to generate fake files.
EDIT: The code above is just part of a larger script, so any CMD/OS commands are not a solution for my problem.
You can follow the link below for the command with an explanation, and then run the same command in a for loop:
How to create a file with ANY given size in Linux?
Wouldn't it be better to use OS commands for that?
dd if=/dev/urandom of=/tmp/x bs=1M count=1
You could start that using the subprocess module:
subprocess.check_call("dd if=/dev/urandom of=/tmp/y bs=1M count=1".split(" "))
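If OS commands are ruled out (as the edit to the question says), a pure-Python variation on the same idea is to seek to the desired size and write a single byte. This creates the file near-instantly, though the content is zeros (and likely sparse on disk) rather than random data:

def make_fake_file(path, size):
    # near-instant: most filesystems create this as a sparse file,
    # and the content is zeros rather than random bytes
    with open(path, 'wb') as f:
        f.seek(size - 1)
        f.write(b'\0')

make_fake_file('dummy.bin', 10 * 1024 * 1024)  # 10 MB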
This tutorial https://www.dataquest.io/blog/python-json-tutorial/ has a 600MB file that they work with, however when I run their code
import ijson

filename = "md_traffic.json"
with open(filename, 'r') as f:
    objects = ijson.items(f, 'meta.view.columns.item')
    columns = list(objects)
I'm running into 10+ minutes of waiting for the file to be read into ijson and I'm really confused how this is supposed to be reasonable. Shouldn't there be parsing? Am I missing something?
The main problem is not that you are creating a list after parsing (that only collects the individual results into a single structure), but that you are using the default pure-Python backend provided by ijson.
There are other backends that are way faster; ijson's homepage explains how to import them. The yajl2_cffi backend is currently the fastest available, but I've created a new yajl2_c backend (there's a pull request pending acceptance) that performs even better.
On my laptop (Intel(R) Core(TM) i7-5600U), using the yajl2_cffi backend your code runs in ~1.5 minutes. Using the yajl2_c backend, it runs in ~10.5 seconds (Python 3) and ~15 seconds (Python 2.7.12).
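For reference, per ijson's documentation, selecting a backend is just an import alias (assuming the yajl C library is installed); the rest of the code stays the same:

# import the backend module in place of the top-level package
import ijson.backends.yajl2_cffi as ijson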
Edit: #lex-scarisbrick is of course also right in that you can quickly break out of the loop if you are only interested in the column names.
This looks like a direct copy/paste of the tutorial found here:
https://www.dataquest.io/blog/python-json-tutorial/
The reason it's taking so long is the list() around the output of the ijson.items function. This effectively forces parsing of the entire file before returning any results. Taking advantage of ijson.items being a generator, the first result can be returned almost immediately:
import ijson

filename = "md_traffic.json"
with open(filename, 'r') as f:
    for item in ijson.items(f, 'meta.view.columns.item'):
        print(item)
        break
EDIT: The very next step in the tutorial is print(columns[0]), which is why I included printing the first item in the answer. Also, it's not clear whether the question was for Python 2 or 3, so the answer uses syntax that works in both, albeit inelegantly.
I tried running your code and I killed the program after 25 minutes. So yes, 10 minutes is reasonably fast.
Is there any library to show progress when adding files to a tar archive in Python, or alternatively, would it be possible to extend the functionality of the tarfile module to do this?
In an ideal world I would like to show the overall progress of the tar creation as well as an ETA as to when it will be complete.
Any help on this would be really appreciated.
Unfortunately it doesn't look like there is an easy way to get byte-by-byte numbers.
Are you adding really large files to this tar file? If not, I would update progress on a file-by-file basis so that as files are added to the tar, the progress is updated based on the size of each file.
Suppose all your filenames are in the variable toadd and tarfile is a TarFile object. How about:
import sys
from operator import attrgetter

# build the TarInfo objects up front so we can total up the sizes;
# you may want to change this depending on how you want to update the
# file info for your tarobjs
tarobjs = [tarfile.gettarinfo(name) for name in toadd]
total = sum(map(attrgetter('size'), tarobjs))
complete = 0.0
for name, tarobj in zip(toadd, tarobjs):
    sys.stdout.write("\rPercent Complete: {0:3.0f}%".format(complete))
    tarfile.add(name)
    complete += tarobj.size * 100.0 / total
sys.stdout.write("\rPercent Complete: {0:3.0f}%\n".format(complete))
sys.stdout.write("Job Done!\n")
Find or write a file-like object that wraps a real file and provides progress reporting, and pass it to TarFile.addfile() so that you can know how many bytes have been requested for inclusion in the archive. You may have to use/implement throttling in case TarFile tries to read the whole file at once.
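A sketch of that idea (ProgressFile and the file names are made up; the callback just rewrites a percentage on stdout):

import os
import sys
import tarfile

class ProgressFile(object):
    """Wraps a real file and reports cumulative bytes read via a callback."""
    def __init__(self, path, callback):
        self._f = open(path, 'rb')
        self._callback = callback
        self._read = 0

    def read(self, size=-1):
        data = self._f.read(size)
        self._read += len(data)
        self._callback(self._read)
        return data

    def close(self):
        self._f.close()

path = 'bigfile.bin'
total = os.path.getsize(path)
with tarfile.open('out.tar', 'w') as tar:
    info = tar.gettarinfo(path)
    src = ProgressFile(path, lambda n: sys.stdout.write("\r%3.0f%%" % (100.0 * n / total)))
    tar.addfile(info, src)
    src.close()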
I have recently written a wrapper library that provides a progress callback. Have a look at it on GitHub:
https://github.com/thomaspurchas/tarfile-Progress-Reporter
Feel free to ask for help if you need it.
Seems like you can use the filter parameter of tarfile.add()
def my_filter(tarinfo):
    # increment some count
    # add tarinfo.size to some byte counter
    return tarinfo

with tarfile.open(<tarball path>, 'w') as tarball:
    tarball.add(<some file>, filter=my_filter)
All information you can get from a TarInfo object is available to you.
How are you adding files to the tar file? Is it through add() with recursive=True? You could build the list of files yourself and call add() one by one, showing the progress as you go. If you're building from a stream/file, then it looks like you could wrap that fileobj to see the read status and pass that into addfile().
It does not look like you will need to modify tarfile.py at all.
I'm using xlwt in Python to create an Excel spreadsheet. You could swap this out for almost anything else that generates a file; it's what I want to do with the file that's important.
from xlwt import *

w = Workbook()
# ... do something
w.save('filename.xls')
I have two use cases for the file: I stream it out to the user's browser, or I attach it to an email. In both cases the file only needs to exist for the duration of the web request that generates it.
What I'm getting at, and the reason for starting this thread, is that saving to a real file on the filesystem has its own hurdles (preventing overwriting, cleaning up the file once done). Is there somewhere I could "save" it where it lives only in memory and only for the duration of the request?
cStringIO
(or mmap if it should be mutable)
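As a sketch: xlwt's Workbook.save() accepts a file-like object as well as a path, so a cStringIO buffer can stand in for the filename (Python 2, to match cStringIO):

from cStringIO import StringIO
from xlwt import Workbook

w = Workbook()
w.add_sheet('Sheet1').write(0, 0, 'hello')

buf = StringIO()
w.save(buf)                 # write the .xls bytes into memory
xls_bytes = buf.getvalue()  # stream to the browser or attach to an email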
Generalising the answer, as you suggested: If the "anything else that generates a file" won't accept a file-like object as well as a filepath, then you can reduce the hassle by using tempfile.NamedTemporaryFile
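A sketch of that fallback, reusing the Workbook w from the question (note: on Windows, reopening a NamedTemporaryFile by name while it is still open can fail):

import tempfile

# assumes the file-generating code only needs a path
with tempfile.NamedTemporaryFile(suffix='.xls') as tmp:
    w.save(tmp.name)
    tmp.seek(0)
    data = tmp.read()  # stream or attach these bytes
# the temporary file is deleted automatically when the block exits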