Loading a large dataset into Pandas (Python)

I would like to load a large open-sourced .csv dataset (3.4m rows, 206k users) from InstaCart: https://www.instacart.com/datasets/grocery-shopping-2017
Basically, I have trouble loading orders.csv into a Pandas DataFrame. I would like to learn best practices for loading large files into Pandas/Python.

The best option is to read the data in chunks instead of loading the whole file into memory.
Luckily, the read_csv method accepts a chunksize argument:
for chunk in pd.read_csv('file.csv', chunksize=somesize):
    process(chunk)
Note: by specifying a chunksize to read_csv or read_table, the return value will be an iterable object of type TextFileReader.
Also see:
read_csv
Iterating through files chunk by chunk
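A minimal runnable sketch of the chunked pattern above (the file name, column names, and aggregation are made up for illustration; with a real file like orders.csv you would pass its path and your own processing):

```python
import pandas as pd

# Build a tiny CSV purely for illustration.
pd.DataFrame({"user_id": range(10), "total": [1.5] * 10}).to_csv(
    "orders_demo.csv", index=False)

running_total = 0.0
for chunk in pd.read_csv("orders_demo.csv", chunksize=4):
    # Each chunk is an ordinary DataFrame, so normal pandas code applies;
    # only `chunksize` rows are held in memory at a time.
    running_total += chunk["total"].sum()

print(running_total)  # 15.0
```

The key point is that aggregates (sums, counts, group tallies) can be accumulated across chunks without ever materializing the full file.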

When you have large data frames that might not fit in memory, dask is quite useful. The main page I've linked to has examples on how you can create a dask dataframe that has the same API as the pandas one but which can be distributed.

Depending on your machine, you may be able to read all of it into memory by specifying the data types while reading the csv file. When pandas reads a csv, the default data types it infers may not be the best ones. Using dtype you can specify the data types, which reduces the size of the data frame read into memory.
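A small sketch of the dtype idea (the column names here are invented for the demo, not taken from the InstaCart files):

```python
import io
import pandas as pd

csv_data = "user_id,order_number\n1,1\n1,2\n2,1\n"

# Default: both columns are inferred as int64 (8 bytes per value).
df_default = pd.read_csv(io.StringIO(csv_data))

# Explicit smaller dtypes cut per-value memory by a factor of 2-8.
df_small = pd.read_csv(io.StringIO(csv_data),
                       dtype={"user_id": "int32", "order_number": "int8"})

print(df_default.memory_usage(deep=True).sum()
      > df_small.memory_usage(deep=True).sum())  # True
```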

Related

How can I read a large number of files with Pandas?

Number of files: 894
Total file size: 22.2 GB
I have to do machine learning by reading many csv files. There is not enough memory to read them all at once.
Specifically to load a large number of files that do not fit in memory, one can use dask:
import dask.dataframe as dd
df = dd.read_csv('file-*.csv')
This will create a lazy version of the data, meaning the data will be loaded only when requested, e.g. df.head() will load the data from the first 5 rows only. Where possible pandas syntax will apply to dask dataframes.
For machine learning you can use dask-ml which has tight integration with sklearn, see docs.
You can read your files in chunks, but not during the training phase, so you have to select an algorithm appropriate for your files. However, having such big files for model training usually means you have to do some data preparation first, which will reduce the size of the files significantly.

From hdf5 files to csv files with Python

I have to process hdf5 files. Each of them contains data that can be loaded into a pandas DataFrame formed by 100 columns and almost 5E5 rows. Each hdf5 file weighs approximately 130MB.
So I want to fetch the data from the hdf5 file then apply some processing and finally save the new data in a csv file. In my case, the performance of the process is very important because I will have to repeat it.
So far I have focused on Pandas and Dask to get the job done. Dask is good for parallelization and I will get good processing times with a stronger PC and more CPUs.
Have some of you already encountered this problem and found the best optimization?
As others have mentioned in the comments, unless you have to move it to CSV, I'd recommend keeping it in HDF5. However, below is a description of how you might do it if you do have to carry out the conversion.
It sounds like you have a function for loading the HDF5 file into a pandas data frame. I would suggest using dask's delayed API to create a list of delayed pandas data frames, and then convert them into a dask data frame. The snippet below is copied from the linked page, with an added line to save to CSV.
import dask.dataframe as dd
from dask.delayed import delayed
from my_custom_library import load
filenames = ...
dfs = [delayed(load)(fn) for fn in filenames]
df = dd.from_delayed(dfs)
df.to_csv(filename, **kwargs)
See dd.to_csv() documentation for info on options for saving to CSV.

Is it possible to analyze data loaded in a database in Python without creating a data frame?

The data is stored in a database that I access through DBeaver. I would like to analyze my data through Python without creating a data frame, and Python is installed on my computer. As the data is huge, creating a data frame will consume my RAM and space.
So, is it possible to directly link my Python code to the database, do the necessary aggregation or data manipulation there, and gather only the output?
If you pull the raw data into Python directly, it will also consume a lot of RAM and space; and doing all of the analysis inside the database may lead to unexpected results.
Instead, you can use a Dask DataFrame (see the Dask official docs or the Dask Wikipedia page).
With a dask dataframe you can do data analysis even if you have a big dataset.
I don't know at what scale you want to work with your data or how big your dataset is, but if you are going to change the data at a large scale I would recommend creating a csv file which contains your dataset and working with pandas DataFrames. Reading csv files is fairly fast and easy. If you're interested, you can visit the link below and read the parts you need.
https://pandas.pydata.org/pandas-docs/stable/user_guide/10min.html
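To the original question of pushing the aggregation into the database: a minimal sketch is below, using the standard library's sqlite3 purely as a stand-in (the table and column names are invented). The same pattern works with any DB-API or SQLAlchemy connection to the server DBeaver points at, so only the small aggregated result ever becomes a DataFrame.

```python
import sqlite3
import pandas as pd

# Stand-in database with a toy table; replace with a connection to
# your real server in practice.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 7.5)])

# The GROUP BY runs inside the database; pandas only receives the
# aggregated rows, not the raw data.
summary = pd.read_sql_query(
    "SELECT user_id, SUM(amount) AS total FROM orders GROUP BY user_id",
    conn)
print(summary)
```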

What is the best choice for data storage in Pandas for mixed type data?

I am working on a large dataset that is stored as ndjson, where each row of the data is a json object. I read this in line by line, use pandas json_normalize() to flatten each one, save it in a list as a dataframe, and then concat this list afterwards.
The whole process takes ~2 hours on a high-powered machine, so I would like to save the result so I don't have to repeat it. However, I have tried using to_hdf and to_parquet and both have been failing. I believe it is due to the majority of columns having mixed data types (strings, floats, and ints), which is an unavoidable consequence of a messy data collection system.
What would be the most appropriate way of storing this unprocessed data prior to cleaning it?
I think pickle should help here.
To write the DataFrame/Series, use to_pickle.
To read it back, use read_pickle.
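A quick round-trip sketch (the column contents are invented to mimic the mixed-type situation described above):

```python
import pandas as pd

# A frame with deliberately mixed-type columns, like messy ndjson data.
df = pd.DataFrame({"a": [1, "two", 3.0], "b": ["x", 2, None]})

# Pickle stores the Python objects as-is, so mixed object columns
# survive, unlike to_parquet/to_hdf which need uniform column types.
df.to_pickle("raw_unprocessed.pkl")
restored = pd.read_pickle("raw_unprocessed.pkl")

print(restored.equals(df))  # True
```

Note the usual pickle caveats: files may not load across very different pandas versions, and you should only unpickle files you trust.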

Fastest way to parse large CSV files in Pandas

I am using pandas to analyse large CSV data files. They are around 100 megs in size.
Each load from csv takes a few seconds, and then more time to convert the dates.
I have tried loading the files, converting the dates from strings to datetimes, and then re-saving them as pickle files. But loading those takes a few seconds as well.
What fast methods could I use to load/save the data from disk?
As @chrisb said, pandas' read_csv is probably faster than csv.reader/numpy.genfromtxt/loadtxt. I don't think you will find anything better to parse the csv (as a note, read_csv is not a 'pure python' solution, as the CSV parser is implemented in C).
But, if you have to load/query the data often, a solution would be to parse the CSV only once and then store it in another format, e.g. HDF5. You can use pandas (with PyTables in the background) to query that efficiently (docs).
See here for a comparison of the io performance of HDF5, csv and SQL with pandas: http://pandas.pydata.org/pandas-docs/stable/io.html#performance-considerations
And a possibly relevant other question: "Large data" work flows using pandas
Posting this late in response to a similar question which found that simply using modin out of the box fell short. The answer will be similar with dask: use all of the below strategies in combination for best results, as appropriate for your use case.
The pandas docs on Scaling to Large Datasets have some great tips which I'll summarize here:
Load less data. Read in a subset of the columns or rows using the usecols or nrows parameters to pd.read_csv. For example, if your data has many columns but you only need the col1 and col2 columns, use pd.read_csv(filepath, usecols=['col1', 'col2']). This can be especially important if you're loading datasets with lots of extra commas (e.g. the rows look like index,col1,col2,,,,,,,,,,,). In this case, use usecols to make sure that the result only includes the columns you need.
Use efficient datatypes. By default, pandas stores all integer data as signed 64-bit integers, floats as 64-bit floats, and strings as objects or string types (depending on the version). You can convert these to smaller data types with tools such as Series.astype or pd.to_numeric with the downcast option.
Use Chunking. Parsing huge blocks of data can be slow, especially if your plan is to operate row-wise and then write it out or to cut the data down to a smaller final form. You can use the chunksize and iterator arguments to loop over chunks of the data and process the file in smaller pieces. See the docs on Iterating through files chunk by chunk for more detail. Alternately, use the low_memory flag to get Pandas to use the chunked iterator on the backend, but return a single dataframe.
Use other libraries. There are a couple great libraries listed here, but I'd especially call out dask.dataframe, which specifically works toward your use case, by enabling chunked, multi-core processing of CSV files which mirrors the pandas API and has easy ways of converting the data back into a normal pandas dataframe (if desired) after processing the data.
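The "efficient datatypes" tip above can be sketched as follows (the values are invented for the demo):

```python
import pandas as pd

s = pd.Series([1, 2, 3], dtype="int64")

# downcast picks the smallest integer type that can hold the values.
s_small = pd.to_numeric(s, downcast="integer")
print(s_small.dtype)  # int8

# String columns with few distinct values can be stored as categoricals,
# which keep one copy of each distinct string plus small integer codes.
city = pd.Series(["NY", "SF", "NY", "NY"]).astype("category")
print(city.dtype)  # category
```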
Additionally, there are some csv-specific things I think you should consider:
Specifying column data types. Especially if chunking, but even if you're not, specifying the column types can dramatically reduce read time and memory usage and highlight problem areas in your data (e.g. NaN indicators or flags that don't meet one of pandas' defaults). Use the dtype parameter with a single data type to apply to all columns, or a dict of column name/data type pairs to indicate the types to read in. Optionally, you can provide converters to format dates, times, or other numerical data if it's not in a format recognized by pandas.
Specifying the parser engine - pandas can read csvs in pure python (slow) or C (much faster). The python engine has slightly more features (e.g. currently the C parser can't read files with complex multi-character delimiters and it can't skip footers). Try using the argument engine='c' to make sure the C engine is being used. If your file can't be read by the C engine, I'd try fixing the file(s) first manually (e.g. stripping out a footer or standardizing the delimiters) and then parsing with the C engine, if possible.
Make sure you're catching all NaNs and data flags in numeric columns. This can be a tough one, and specifying specific data types in your inputs can be helpful in catching bad cases. Use the na_values, keep_default_na, date_parser, and converters arguments to pd.read_csv. Currently, the default list of values interpreted as NaN are ['', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan', '1.#IND', '1.#QNAN', '<NA>', 'N/A', 'NA', 'NULL', 'NaN', 'n/a', 'nan', 'null']. For example, if your numeric columns have non-numeric values coded as notANumber then this would be missed and would either cause an error (if you had dtypes specified) or would cause pandas to re-categorize the entire column as an object column (suuuper bad for memory and speed!).
Read the pd.read_csv docs over and over and over again. Many of the arguments to read_csv have important performance considerations. pd.read_csv is optimized to smooth over a large amount of variation in what can be considered a csv, and the more magic pandas has to be ready to perform (determine types, interpret nans, convert dates (maybe), skip headers/footers, infer indices/columns, handle bad lines, etc) the slower the read will be. Give it as many hints/constraints as you can and you might see performance increase a lot! And if it's still not enough, many of these tweaks will also apply to the dask.dataframe API, so this scales up further nicely.
Additionally, if you have the option, save the files in a stable binary storage format. Apache Parquet is a good columnar storage format with pandas support, but there are many others (see that Pandas IO guide for more options). Pickles can be a bit brittle across pandas versions (of course, so can any binary storage format, but this is usually less a concern for explicit data storage formats rather than pickles), and CSVs are inefficient and under-specified, hence the need for type conversion and interpretation.
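Putting several of the above hints together in one call (the column names and the notANumber flag are invented for the demo):

```python
import io
import pandas as pd

csv_data = "id,price,flag,notes\n1,9.99,notANumber,hello\n2,5.00,3.5,world\n"

# Give read_csv as many hints as possible: only the needed columns,
# explicit dtypes, the C engine, and the custom NaN marker.
df = pd.read_csv(io.StringIO(csv_data),
                 usecols=["id", "price", "flag"],
                 dtype={"id": "int32", "price": "float32"},
                 na_values=["notANumber"],
                 engine="c")

print(df["flag"].dtype)  # float64: the flag was caught as NaN,
                         # so the column stays numeric
```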
One thing to check is the actual performance of the disk system itself. Especially if you use spinning disks (not SSD), your practical disk read speed may be one of the explaining factors for the performance. So, before doing too much optimization, check if reading the same data into memory (by, e.g., mydata = open('myfile.txt').read()) takes an equivalent amount of time. (Just make sure you do not get bitten by disk caches; if you load the same data twice, the second time it will be much faster because the data is already in RAM cache.)
See the update below before believing what I write underneath
If your problem is really parsing of the files, then I am not sure if any pure Python solution will help you. As you know the actual structure of the files, you do not need to use a generic CSV parser.
There are three things to try, though:
Python csv package and csv.reader
NumPy genfromtxt
NumPy loadtxt
The third one is probably fastest if you can use it with your data. At the same time it has the most limited set of features. (Which actually may make it fast.)
Also, the suggestions given you in the comments by crclayton, BKay, and EdChum are good ones.
Try the different alternatives! If they do not work, then you will have to write something in a compiled language (either compiled Python or, e.g., C).
Update: I do believe what chrisb says below, i.e. the pandas parser is fast.
Then the only way to make the parsing faster is to write an application-specific parser in C (or other compiled language). Generic parsing of CSV files is not straightforward, but if the exact structure of the file is known there may be shortcuts. In any case parsing text files is slow, so if you ever can translate it into something more palatable (HDF5, NumPy array), loading will be only limited by the I/O performance.
Modin is an early-stage project at UC Berkeley’s RISELab designed to facilitate the use of distributed computing for Data Science. It is a multiprocess Dataframe library with an identical API to pandas that allows users to speed up their Pandas workflows.
Modin accelerates Pandas queries by 4x on an 8-core machine, only requiring users to change a single line of code in their notebooks.
Install it with:
pip install modin
If using the dask backend:
pip install modin[dask]
Then import modin by typing:
import modin.pandas as pd
It uses all CPU cores to import the csv file, and it is almost a drop-in replacement for pandas.
Most of the solutions are helpful here, I would like to say that parallelizing the loading can also help. Simple code below:
import os
import glob
import pandas as pd
from multiprocessing import Pool

path = r'C:\Users\data'  # or whatever your path is
data_files = glob.glob(os.path.join(path, "*.psv"))  # list of all the files to be read

def read_psv_all(file_name):
    return pd.read_csv(file_name,
                       delimiter='|',  # change this as needed
                       low_memory=False)

pool = Pool(processes=3)  # change 3 to the number of processes you want to utilize
df_list = pool.map(read_psv_all, data_files)
df = pd.concat(df_list, ignore_index=True, axis=0, sort=False)
Note that if you are using Windows/Jupyter, parallel processing can be a sinister combination. You might need to use an if __name__ == '__main__': guard in your code.
Along with this, do utilize the usecols and dtype parameters, which will definitely help.
