Write huge files dynamically in Python while avoiding 100% CPU usage

I am parsing a huge CSV file (approx. 2 GB) with the help of this great stuff. Now I have to generate a separate file for each column, using the column name as the file name. So I wrote this code to write the files dynamically:
def write_CSV_dynamically(self, header, reader):
    """
    :header - CSV's first row in string format
    :reader - CSV's all other rows in list format
    """
    try:
        headerlist = header.split(',')  # string headers
        zipof = lambda x, y: zip(x.split(','), y.split(','))
        filename = "{}.csv".format(self.dtstamp)
        filename = "{}_" + filename
        filesdct = {filename.format(k.strip()): open(filename.format(k.strip()), 'a')
                    for k in headerlist}
        for row in reader:
            for key, data in zipof(header, row):
                filesdct[filename.format(key.strip())].write(str(data) + "\n")
        for _, v in filesdct.iteritems():
            v.close()
    except Exception, e:
        print e
Right now it takes around 50 seconds to write these huge files, using 100% CPU. There are other heavy things running on my server, so I want to restrict my program to only 10% to 20% of the CPU while it writes these files, even if it then takes 10-15 minutes.
How can I change my code so that it limits itself to 10-20% CPU usage?

There are a number of ways to achieve this:
Nice the process - plain and simple.
cpulimit - just pass your script and cpu usage as parameters:
cpulimit -P /path/to/your/script -l 20
Python's resource package to set limits from the script. Bear in mind it works with absolute CPU time.
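As a rough illustration of the first and third options from inside the script itself (the limit values here are placeholders to tune, not recommendations):

import os
import resource

# Option 1: lower this process's scheduling priority so other workloads
# on the server get the CPU first. os.nice() adds to the current niceness.
os.nice(19)

# Option 3: cap the absolute CPU time (in seconds) the process may use.
# Note this does not throttle to a percentage; exceeding the soft limit
# sends SIGXCPU to the process.
soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
resource.setrlimit(resource.RLIMIT_CPU, (15 * 60, hard))  # e.g. 15 CPU-minutes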

Related

Python asynchronous file download + parsing + outputting to JSON

To briefly explain the context: I am downloading SEC prospectus data, for example. After downloading, I want to parse each file to extract certain data, then output the parsed dictionary to a JSON file consisting of a list of dictionaries. I would use a SQL database for the output, but the research cluster admins at my university are being slow about getting me access. If anyone has suggestions for how to store the data for easy reading/writing later, I would appreciate it; I was thinking about HDF5 as a possible alternative.
A minimal example of what I am doing, with the spots that I think need improvement labeled:
import json
import os
from math import ceil
from multiprocessing import Pool

import grequests
from tqdm import tqdm

BASE_URL = 'https://www.sec.gov/Archives/'  # same value as in getformurls.sh

def classify_file(doc):
    try:
        data = {
            'link': doc.url
        }
    except AttributeError:
        return {'flag': 'ATTRIBUTE ERROR'}
    # Do a bunch of parsing using regular expressions

if __name__ == "__main__":
    items = list()
    for d in tqdm([y + ' ' + q for y in ['2019'] for q in ['1']]):
        stream = os.popen('bash ./getformurls.sh ' + d)
        stacked = stream.read().strip().split('\n')
        # split each line into the fixed-width fields
        widths = (12, 62, 12, 12, 44)
        items += [[item[sum(widths[:j]):sum(widths[:j + 1])].strip()
                   for j in range(len(widths))] for item in stacked]
    urls = [BASE_URL + item[4] for item in items]

    resp = list()
    # PROBLEM 1
    filelimit = 100
    for i in range(ceil(len(urls) / filelimit)):
        print(f'Downloading: {i*filelimit/len(urls)*100:2.0f}%... ', end='\r', flush=True)
        resp += [r for r in grequests.map(
            (grequests.get(u) for u in urls[i * filelimit:(i + 1) * filelimit]))]

    # PROBLEM 2
    with Pool() as p:
        rs = p.map_async(classify_file, resp, chunksize=20)
        rs.wait()
        prospectus = rs.get()

    with open('prospectus_data.json', 'w') as f:
        json.dump(prospectus, f)
The getformurls.sh referenced above is a bash script I wrote; it was faster than doing it in Python since I could use grep. The code for that is:
#!/bin/bash
BASE_URL="https://www.sec.gov/Archives/"
INDEX="edgar/full-index/"
url="${BASE_URL}${INDEX}$1/QTR$2/form.idx"
out=$(curl -s ${url} | grep "^485[A|B]POS")
echo "$out"
PROBLEM 1: So I am currently pulling about 18k files in the grequests map call. I was running into an error about too many files being open so I decided to split up the urls list into manageable chunks. I don't like this solution, but it works.
PROBLEM 2: This is where my actual error is. This code runs fine on a smaller set of urls (~2k) on my laptop (uses 100% of my cpu and ~20GB of RAM ~10GB for the file downloads and another ~10GB when the parsing starts), but when I take it to the larger 18k dataset using 40 cores on a research cluster it spins up to ~100GB RAM and ~3TB swap usage then crashes after parsing about 2k documents in 20 minutes via a KeyboardInterrupt from the server.
I don't really understand why the swap usage is getting so crazy, but I think I really just need help with memory management here. Is there a way to create a generator of unsent requests that will be sent when I call classify_file() on them later? Any help would be appreciated.
Generally, when you have runaway memory usage with a Pool it's because the workers are being re-used and accumulating memory with each iteration. You can occasionally close and re-open the pool to prevent this, but it's such a common issue that Python now has a built-in parameter to do it for you...
Pool(...maxtasksperchild) is the number of tasks a worker process can complete before it will exit and be replaced with a fresh worker process, to enable unused resources to be freed. The default maxtasksperchild is None, which means worker processes will live as long as the pool.
There's no way for me to tell you what the right value is, but you generally want to set it low enough that resources can be freed fairly often but not so low that it slows things down. (Maybe a minute's worth of processing... just as a guess.)
with Pool(maxtasksperchild=5) as p:
    rs = p.map_async(classify_file, resp, chunksize=20)
    rs.wait()
    prospectus = rs.get()
For your first problem, you might consider just using requests and moving the call inside the worker process you already have. Pulling 18K worth of URLs and caching all that data up front is going to take time and memory. If it's all encapsulated in the worker, you'll minimize data usage and you won't need to spin up so many open file handles.
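A hedged sketch of that idea, reusing the existing classify_file and urls from the question (download_and_classify is a hypothetical helper, and plain requests replaces grequests since each worker now blocks on its own download):

import requests
from multiprocessing import Pool

def download_and_classify(url):
    # hypothetical helper: fetch one document inside the worker so response
    # bodies never pile up in the parent process
    try:
        doc = requests.get(url, timeout=30)
        doc.raise_for_status()
    except requests.RequestException:
        return {'flag': 'DOWNLOAD ERROR', 'link': url}
    return classify_file(doc)  # the parser from the question, unchanged

if __name__ == "__main__":
    # urls built exactly as before
    with Pool(maxtasksperchild=5) as p:
        prospectus = p.map(download_and_classify, urls, chunksize=20)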

Hitting AWS Lambda Memory limit in Python

I am looking for some advice on this project. My thought was to use Python and a Lambda to aggregate the data and respond to the website. The main parameters are date ranges and can be dynamic.
Project Requirements:
Read data from monthly return files stored in JSON (each file contains roughly 3000 securities and is 1.6 MB in size)
Aggregate the data into various buckets displaying counts and returns for each bucket (for our purposes here, let's say the buckets are Sectors and Market Cap ranges, which can vary)
Display aggregated data on a website
Issue I face
I have successfully implemented this in an AWS Lambda; however, when testing requests that span 20 years of data (and yes, I do get them), I begin to hit the memory limits in AWS Lambda.
Process I used:
All files are stored in S3, so I use the boto3 library to obtain the files, reading them into memory. This is still small and not of any real significance.
I use json.loads to convert the files into a pandas dataframe. I was loading all of the files into one large dataframe; this is where it runs out of memory.
I then pass the dataframe to custom aggregations using groupby to get my results. This part is not as fast as I would like but does the job of getting what I need.
The resulting dataframe is then converted back into JSON and is less than 500 MB.
This entire process, when it works locally outside the Lambda, takes about 40 seconds.
I have tried running this with threads, processing single frames one at a time, but the performance degrades to about 1 minute 30 seconds.
While I would rather not scrap everything and start over, I am willing to do so if there is a more efficient way to handle this. The old process did everything inside node.js without a Lambda and took almost 3 minutes to generate the results.
Code currently used
I had to clean this a little to pull out some items but here is the code used.
Read data from S3 into JSON; this results in a list of string data.
while not q.empty():
    fkey = q.get()
    try:
        obj = self.s3.Object(bucket_name=bucket, key=fkey[1])
        json_data = obj.get()['Body'].read().decode('utf-8')
        results[fkey[1]] = json_data
    except Exception as e:
        results[fkey[1]] = str(e)
    q.task_done()
Loop through the JSON files to build a dataframe for working
for k, v in s3Data.items():
    lstdf.append(buildDataframefromJson(k, v))

def buildDataframefromJson(key, json_data):
    tmpdf = pd.DataFrame(columns=['ticker', 'totalReturn', 'isExcluded', 'marketCapStartUsd',
                                  'category', 'marketCapBand', 'peGreaterThanMarket',
                                  'Month', 'epsUsd'])
    # Read the json into a dataframe
    tmpdf = pd.read_json(json_data,
                         dtype={
                             'ticker': str,
                             'totalReturn': np.float32,
                             'isExcluded': np.bool,
                             'marketCapStartUsd': np.float32,
                             'category': str,
                             'marketCapBand': str,
                             'peGreaterThanMarket': np.bool,
                             'epsUsd': np.float32
                         })[['ticker', 'totalReturn', 'isExcluded', 'marketCapStartUsd', 'category',
                             'marketCapBand', 'peGreaterThanMarket', 'epsUsd']]
    dtTmp = datetime.strptime(key.split('/')[3], "%m-%Y")
    dtTmp = datetime.strptime(str(dtTmp.year) + '-' + str(dtTmp.month), '%Y-%m')
    tmpdf.insert(0, 'Month', dtTmp, allow_duplicates=True)
    return tmpdf
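The aggregation step itself is not shown above; purely as an illustration (the bucket columns follow the dataframe built above, the aggregate names are made up), it might look roughly like this:

# assumes lstdf is the list of per-month dataframes built above
df = pd.concat(lstdf, ignore_index=True)

# bucket by sector ('category') and market-cap band, per month
agg = (df.groupby(['Month', 'category', 'marketCapBand'])
         .agg(count=('ticker', 'count'),
              avgReturn=('totalReturn', 'mean'))
         .reset_index())

result_json = agg.to_json(orient='records')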

Is there a way to speed this up - moving csvs from Azure Blob storage to vm, appending to single csv, using python

My job collects multiple times a day from a streaming job and drops a csv into blob storage. After a few weeks of collecting data, I will run a python script to do some machine learning. To set up the training data, I am first moving all data within a range into a single csv on the virtual machine so it can train on that single csv in one pass.
Using the code below, I find that moving the data from blob storage to the virtual machine using blob_service.get_blob_to_path() takes on average 25 seconds per file, even if they're small 3mb files. The appending portion goes incredibly fast at milliseconds per file.
Is there a better way to do this? I thought increasing max connections would help, but I do not see any performance improvement.
blob_service = BlockBlobService(account_name=ACCOUNT_NAME, account_key=ACCOUNT_KEY)

# get list of blob files
blobs = []
marker = None
while True:
    batch = blob_service.list_blobs(CONTAINER_NAME, marker=marker)
    blobs.extend(batch)
    if not batch.next_marker:
        break
    marker = batch.next_marker

for blob in blobs:
    print(time.time() - start_time)
    split_name = blob.name.split('/')
    # year/month/day...
    blob_date = pd.to_datetime(str(split_name[0]) + '-' + str(split_name[1]) + '-' + str(split_name[2]))
    # s=arg start date, e=arg end date
    if blob_date > s and blob_date <= e:
        print('Appending: ' + blob.name, end='')
        blob_service.get_blob_to_path(CONTAINER_NAME, blob.name,
                                      './outputs/last_blob.csv',
                                      open_mode='wb',
                                      max_connections=6)
        print(' ... adding to training csv ' + str(time.time() - start_time))
        with open('./outputs/all_training_data.csv', 'ab') as f_out:
            with open('./outputs/last_blob.csv', 'rb') as blob_in:
                for line in blob_in:
                    f_out.write(line)
    else:
        print('** SKIPPING: ' + blob.name)
Additional notes:
This is being done using Azure Machine Learning Workbench as part of my train.py process.
--edit--
Data Science VM and Storage Account are both in SC US.
DSVM is DS4_V2 standard (8c cpu, 28gb memory).
Total size of all blobs combined for my current test is probably close to 200MB.
I timed the copy and it goes very quickly; here's some sample output, where the time prints line up with the code up top. The first file takes 13 seconds to download and 0.01 to append. The second takes 6 seconds, then 0.013 to append. The third takes 24 seconds to download.
1666.6139023303986
Appending: 2017/10/13/00/b1.csv ... adding to training csv 1679.0256536006927
1679.03680062294
Appending: 2017/10/13/01/b2.csv ... adding to training csv 1685.968115568161
1685.9810137748718
Appending: 2017/10/13/02/b3.csv ... adding to training csv 1709.5959916114807
This is all happening in the Docker container running on the VM. I am not sure where it lands in terms of standard/premium/SSD storage. The VM itself has 56 GB of "local SSD" as the configuration for the DS4_V2.
##### BEGIN TRAINING PIPELINE
# Create the outputs folder - save any outputs you want managed by AzureML here
os.makedirs('./outputs', exist_ok=True)
I have not tried going the parallel route and would need a bit of guidance on how to tackle that.
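For what it's worth, one possible (untested) sketch of the parallel route is to hand the date-filtered blobs to a thread pool, download each to its own temporary file, and append afterwards; it reuses blob_service, CONTAINER_NAME, blobs, s and e from the code above, and the worker count of 8 is just a guess:

from concurrent.futures import ThreadPoolExecutor

def in_range(blob):
    # same year/month/day filter as the loop above
    split_name = blob.name.split('/')
    blob_date = pd.to_datetime('-'.join(split_name[:3]))
    return s < blob_date <= e

def fetch(blob):
    # each thread downloads to its own temp file to avoid clobbering
    local_path = './outputs/tmp_' + blob.name.replace('/', '_')
    blob_service.get_blob_to_path(CONTAINER_NAME, blob.name, local_path)
    return local_path

selected = [b for b in blobs if in_range(b)]
with ThreadPoolExecutor(max_workers=8) as pool:
    local_files = list(pool.map(fetch, selected))

# append sequentially, as before
with open('./outputs/all_training_data.csv', 'ab') as f_out:
    for path in local_files:
        with open(path, 'rb') as blob_in:
            f_out.write(blob_in.read())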

MPI in Python: load data from a file by line concurrently

I'm new to python as well as MPI.
I have a huge data file, 10 GB, and I want to load it into a list or whatever is more efficient (please suggest).
Here is the way I load the file content into a list
def load(source, size):
    data = [[] for _ in range(size)]
    ln = 0
    with open(source, 'r') as input:
        for line in input:
            ln += 1
            data[ln % size].sanitize(line)
    return data
Note:
source: is file name
size: is the number of concurrent processes; I divide the data into [size] sublists
for parallel computing using MPI in Python.
Please advise how to load the data more efficiently and faster. I've been searching for days but couldn't find any results that match my purpose; if something like this exists, please comment with a link here.
Regards
If I have understood the question, your bottleneck is not Python data structures. It is the I/O speed that limits the efficiency of your program.
If the file is written in contiguous blocks on the HDD, then I don't know of a way to read it faster than reading the file from the first byte to the end.
But if the file is fragmented, create multiple threads, each reading a part of the file. That would normally slow down reading, but modern HDDs implement a technique named NCQ (Native Command Queueing), which works by giving high priority to read operations on sectors with addresses near the current position of the HDD head, thus improving the overall speed of reading with multiple threads.
To recommend an efficient data structure in Python for your program, you need to say what operations you will perform on the data (delete, add, insert, search, append and so on) and how often.
By the way, if you use commodity hardware, 10 GB of RAM is expensive. Try to reduce the need for this amount of RAM by loading only the data necessary for a computation, then replacing it with new data for the next operation. You can overlap the computation with the I/O operations to improve performance.
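As a rough sketch of that last idea (the batch size and the process_batch hook are placeholders), you can read the file in bounded batches of lines instead of materialising all 10 GB:

from itertools import islice

def iter_batches(source, batch_size=100000):
    # yield successive lists of lines; only one batch lives in memory at a time
    with open(source, 'r') as f:
        while True:
            batch = list(islice(f, batch_size))
            if not batch:
                break
            yield batch

for batch in iter_batches('huge_file.txt'):
    process_batch(batch)  # placeholder for the actual computation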
(original) Solution using pickling
The strategy for your task can go this way:
split the large file into smaller ones, making sure they are divided on line boundaries
have Python code which can convert the smaller files into the resulting list of records and save them as a pickled file
run the Python code for all the smaller files in parallel (using Python or other means)
run integrating code, taking the pickled files one by one, loading the list from each and appending it to the final result.
To gain anything, you have to be careful, as overhead can overcome all possible gains from parallel runs:
as Python uses the Global Interpreter Lock (GIL), do not use threads for parallel processing, use processes. As processes cannot simply pass data around, you have to pickle the results and let the other (final integrating) part read them back.
try to minimize the number of loops. For this reason it is better to:
not split the large file into too many smaller parts. To use the power of your cores, it is best to fit the number of parts to the number of cores (or possibly twice as many, but going higher will spend too much time on switching between processes).
pickling allows saving particular items, but it is better to create a list of items (records) and pickle the list as one item. Pickling one list of 1000 items will be faster than pickling 1000 small items one by one.
some tasks (splitting the file, calling the conversion task in parallel) can often be done faster by existing tools in the system. If you have this option, use it.
In my small test, I created a file with 100 thousand lines with content "98-BBBBBBBBBBBBBB", "99-BBBBBBBBBBB" etc. and tested converting it to a list of numbers [..., 98, 99, ...].
For splitting I used the Linux command split, asking it to create 4 parts while preserving line borders:
$ split -n l/4 long.txt
This created smaller files xaa, xab, xac, xad.
To convert each smaller file I used the following script, which converts the content into a file with the extension .pickled containing the pickled list.
# chunk2pickle.py
import pickle
import sys

def process_line(line):
    return int(line.split("-", 1)[0])

def main(fname, pick_fname):
    with open(pick_fname, "wb") as fo:
        with open(fname) as f:
            pickle.dump([process_line(line) for line in f], fo)

if __name__ == "__main__":
    fname = sys.argv[1]
    pick_fname = fname + ".pickled"
    main(fname, pick_fname)
To convert one chunk of lines into a pickled list of records:
$ python chunk2pickle.py xaa
and it creates the file xaa.pickled.
But as we need to do this in parallel, I used the parallel tool (which has to be installed on the system):
$ parallel -j 4 python chunk2pickle.py {} ::: xaa xab xac xad
and I found new files with extension .pickled on the disk.
-j 4 asks it to run 4 processes in parallel; adjust it to your system, or leave it out and it will default to the number of cores you have.
parallel can also get the list of parameters (input file names in our case) by other means, like the ls command:
$ ls x?? |parallel -j 4 python chunk2pickle.py {}
To integrate the results, use the script integrate.py:
# integrate.py
import pickle

def main(file_names):
    res = []
    for fname in file_names:
        with open(fname, "rb") as f:
            res.extend(pickle.load(f))
    return res

if __name__ == "__main__":
    file_names = ["xaa.pickled", "xab.pickled", "xac.pickled", "xad.pickled"]
    # here you have the list of records you asked for
    records = main(file_names)
    print(records)
In my answer I have used a couple of external tools (split and parallel). You could do a similar task with Python too. My answer focuses only on providing you an option to keep the Python code for converting lines into the required data structures. A complete pure-Python answer is not covered here (it would get much longer and probably slower).
Solution using process Pool (no explicit pickling needed)
The following solution uses multiprocessing from Python. In this case there is no need to pickle results explicitly (I am not sure whether it is done by the library automatically, or whether it is not necessary and data are passed by other means).
# direct_integrate.py
from multiprocessing import Pool

def process_line(line):
    return int(line.split("-", 1)[0])

def process_chunkfile(fname):
    with open(fname) as f:
        return [process_line(line) for line in f]

def main(file_names, cores=4):
    p = Pool(cores)
    return p.map(process_chunkfile, file_names)

if __name__ == "__main__":
    file_names = ["xaa", "xab", "xac", "xad"]
    # here you have the list of records you asked for
    # warning: records are in groups.
    record_groups = main(file_names)
    for rec_group in record_groups:
        print(rec_group)
This updated solution still assumes that the large file is available in the form of four smaller files.

Reading JSON from gigabytes of .txt files and add to the same list

I have 300 txt files (each between 80-100 MB) that I have to put into a list object so I can use all of the content at the same time. I already created a working solution, but unfortunately it crashes due to a MemoryError when I load more than 3 txt files. I'm not sure whether it matters, but I have a lot of RAM, so I could easily load 30 GB into memory if that would solve the problem.
Basically I would like to loop through the 300 txt files inside the same for loop. Is it possible to create a list object that holds 30 GB of content? Or achieve this in a different way? I would really appreciate it if somebody could explain the ideal solution to me or give any useful tips.
Here is what I tried; it produces the MemoryError after loading 3 txt files.
import json

def addContentToList(filenm):
    with open(filenm, encoding="ISO-8859-1") as v:
        jsonContentTxt.extend(json.load(v))

def createFilenameList(name):
    for r in range(2, 300):
        file_str = "%s%s.txt" % (name, r,)
        filenames.append(file_str)

filename1 = 'log_1.txt'
filename2 = 'log_'
filenames = []
jsonContentTxt = []

with open(filename1, encoding="ISO-8859-1") as f:
    jsonContentTxt = json.load(f)

createFilenameList(filename2)

for x in filenames:
    addContentToList(x)

json_data = json.dumps(jsonContentTxt)
content_list = json.loads(json_data)
print(content_list)
Put down the chocolate-covered banana and step away from the European currency systems.
Text files are a really bad way to store data like this. You should use a database; I recommend PostgreSQL or SQLite.
Apart from that, your error is probably due to using a 32-bit version of Python (which caps your memory allocation at 2 GB); use 64-bit instead. Even so, I think you'd be better off using a more appropriate tool for the job rather than allocating 30 GB of memory.
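If SQLite is an option, here is a rough sketch of the idea, loading one log file at a time and storing each record as a row (the single-column table is just for illustration; the real columns depend on what your JSON records hold):

import json
import sqlite3

conn = sqlite3.connect('logs.db')
conn.execute('CREATE TABLE IF NOT EXISTS records (doc TEXT)')

for r in range(1, 301):
    with open('log_%d.txt' % r, encoding="ISO-8859-1") as f:
        records = json.load(f)  # only one ~100 MB file in memory at a time
    conn.executemany('INSERT INTO records (doc) VALUES (?)',
                     ((json.dumps(rec),) for rec in records))
    conn.commit()

conn.close()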
