Python CSV module vs Pandas

I am using pandas to read CSV file data, but Python's built-in csv module can also handle CSV files.
So my questions are:
What is the difference between the two?
What are the cons of using pandas over the csv module?

Based on benchmarks:
The csv module is faster at loading data for smaller datasets (< 1K rows).
Pandas is several times faster for larger datasets.

csv is a built-in module, but pandas is not. If you only want to read a CSV file, you should not install pandas just for that, since adding dependencies to a project is not best practice.
If you want to analyze the data in a CSV file with pandas, pandas converts the CSV file to a DataFrame, which is what you need for manipulating data with pandas; you should not use the csv module in those cases.
If you have a large volume of data, you should consider libraries like numpy and pandas.

Pandas is better than csv for managing data and doing operations on it. The csv module doesn't provide the scientific data manipulation tools that pandas does.
If you are talking only about reading the file, it depends. You can look up both modules online, but generally I find it more comfortable to work with pandas; it also offers better readability, since printing a DataFrame is nicer than printing raw rows.
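To make the difference concrete, here is a minimal sketch (the file name and column layout are hypothetical) showing the same file read both ways:
import csv
import pandas as pd

# csv module: no extra dependency; rows come back as dicts of strings.
with open('data.csv', newline='') as f:
    rows = list(csv.DictReader(f))

# pandas: an extra dependency, but you get a typed DataFrame and analysis tools.
df = pd.read_csv('data.csv')
print(df.dtypes)      # pandas infers column types for you
print(df.describe())  # summary statistics the csv module does not offer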

Related

How can I read a SAS format datafile in Vaex without converting it to a pandas data frame first?

I was trying to load a 30GB SAS-format data file in pandas, but memory does not allow me to do so. I then found a Python library called Vaex, which is supposed to analyze big datasets without wasting memory. However, Vaex can only read data from certain file formats, such as CSV or HDF5. The method suggested on its website (below) converts the SAS file to a pandas DataFrame before it is converted to Vaex, which brings me back to my original problem: I cannot even open this big data file using pandas. Thanks in advance!
pandas_df = pd.read_sas('./data/io/sample_airline.sas7bdat')
df = vaex.from_pandas(pandas_df, copy_index=False)
df
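One possible workaround is only a sketch, assuming your pandas version's read_sas supports the chunksize argument and that the chunk file names below are acceptable: read the SAS file in chunks that fit in memory, export each chunk to HDF5 with vaex, then open all chunks together as one lazy DataFrame.
import pandas as pd
import vaex

# Hypothetical chunk size; pick something that fits your memory budget.
for i, chunk in enumerate(pd.read_sas('./data/io/sample_airline.sas7bdat',
                                      chunksize=500_000)):
    vaex.from_pandas(chunk, copy_index=False).export_hdf5(f'chunk_{i:04d}.hdf5')

# vaex can open many HDF5 files lazily as a single DataFrame.
df = vaex.open('chunk_*.hdf5')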

From hdf5 files to csv files with Python

I have to process hdf5 files. Each of them contains data that can be loaded into a pandas DataFrame with 100 columns and almost 5e5 rows. Each hdf5 file weighs approximately 130 MB.
So I want to fetch the data from the hdf5 file then apply some processing and finally save the new data in a csv file. In my case, the performance of the process is very important because I will have to repeat it.
So far I have focused on Pandas and Dask to get the job done. Dask is good for parallelization and I will get good processing times with a stronger PC and more CPUs.
Has anyone already encountered this problem and found a good optimization?
As others have mentioned in the comments, unless you have to move it to CSV, I'd recommend keeping it in HDF5. However, below is a description of how you might do it if you do have to carry out the conversion.
It sounds like you have a function for loading the HDF5 file into a pandas data frame. I would suggest using dask's delayed API to create a list of delayed pandas data frames, and then convert them into a dask data frame. The snippet below is copied from the linked page, with an added line to save to CSV.
import dask.dataframe as dd
from dask.delayed import delayed
from my_custom_library import load

filenames = ...  # list of HDF5 file paths

# Build one delayed pandas DataFrame per file, then combine them lazily.
dfs = [delayed(load)(fn) for fn in filenames]
df = dd.from_delayed(dfs)

# Write the result out as CSV.
df.to_csv(filename, **kwargs)
See dd.to_csv() documentation for info on options for saving to CSV.
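For instance (a sketch; the output name pattern is hypothetical), dask writes one CSV per partition unless told otherwise:
df.to_csv('output-*.csv')                    # one file per partition; '*' becomes the partition number
# df.to_csv('output.csv', single_file=True)  # newer dask versions can also write a single file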

Import CSV file as PySpark Dataset (NOT Dataframes)

How can I import CSV file into PySpark as a dataset? Note that I am NOT asking about how to import them into dataframes.
While reading this page from Databricks, I learned some benefits of datasets over dataframes.
https://databricks.com/blog/2016/07/14/a-tale-of-three-apache-spark-apis-rdds-dataframes-and-datasets.html
I want to learn how to work with them instead of RDDs and dataframes.
The linked blog post already gives you the answer: it is not possible, because of Python itself:
Note: Since Python and R have no compile-time type-safety, we only have untyped APIs, namely DataFrames.
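So in PySpark the closest you can get is reading the CSV into an (untyped) DataFrame. A minimal sketch, assuming a local file called data.csv:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("csv-example").getOrCreate()

# Python only exposes the untyped DataFrame API, so this is the practical route.
df = spark.read.csv("data.csv", header=True, inferSchema=True)
df.printSchema()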

python pandas memory error while merging big csv files

I had posted a question about memory errors while working with large csv files using a pandas dataframe. To be clearer, I'm asking a more specific question: I get memory errors while merging big csv files (more than 30 million rows). So, what is the solution for this? Thanks!
Using Python/Pandas to process datasets with tens of millions of rows isn't ideal. Rather than processing a massive CSV, consider warehousing your data into a database like Redshift where you'll be able to query and manipulate your data thousands of times faster than you could do with Pandas. Once your data is in a database you can use SQL to aggregate/filter/reshape your data into "bite size" exports and extracts for local analysis using Pandas if you'd like.
Long term, consider using Spark which is a distributed data analysis framework built on Scala. It definitely has a steeper learning curve than Pandas but borrows a lot of the core concepts.
Redshift: https://aws.amazon.com/redshift/
Spark: http://spark.apache.org/
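To make the "warehouse it, then query it" idea concrete on a smaller scale, here is a minimal sketch using SQLite instead of Redshift (the file names, table names, and join key are hypothetical):
import sqlite3
import pandas as pd

conn = sqlite3.connect("warehouse.db")

# Stream each large CSV into the database in chunks so it never has to fit in RAM.
for name in ("orders", "customers"):
    for chunk in pd.read_csv(f"{name}.csv", chunksize=1_000_000):
        chunk.to_sql(name, conn, if_exists="append", index=False)

# Let SQL do the merge, then pull back only a "bite size" slice for local analysis.
merged = pd.read_sql_query(
    "SELECT * FROM orders JOIN customers USING (customer_id) LIMIT 100000",
    conn,
)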

Fastest way to parse large CSV files in Pandas

I am using pandas to analyse large CSV data files. They are around 100 megs in size.
Each load from csv takes a few seconds, and then more time to convert the dates.
I have tried loading the files, converting the dates from strings to datetimes, and then re-saving them as pickle files. But loading those takes a few seconds as well.
What fast methods could I use to load/save the data from disk?
As @chrisb said, pandas' read_csv is probably faster than csv.reader/numpy.genfromtxt/loadtxt. I don't think you will find anything better to parse the csv (as a note, read_csv is not a 'pure Python' solution, as the CSV parser is implemented in C).
But, if you have to load/query the data often, a solution would be to parse the CSV only once and then store it in another format, e.g. HDF5. You can use pandas (with PyTables in the background) to query that efficiently (docs).
See here for a comparison of the io performance of HDF5, csv and SQL with pandas: http://pandas.pydata.org/pandas-docs/stable/io.html#performance-considerations
And a possibly relevant other question: "Large data" work flows using pandas
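A minimal sketch of the "parse once, store in HDF5" idea (the file names and date column are hypothetical; requires PyTables):
import pandas as pd

# Parse the CSV (and the dates) once...
df = pd.read_csv("large.csv", parse_dates=["date"])

# ...store it in HDF5...
df.to_hdf("large.h5", key="df", mode="w")

# ...and reload it much faster on subsequent runs.
df = pd.read_hdf("large.h5", key="df")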
Posting this late in response to a similar question which found that simply using modin out of the box fell short. The answer would be similar with dask: use all of the strategies below in combination for the best results, as appropriate for your use case.
The pandas docs on Scaling to Large Datasets have some great tips which I'll summarize here:
Load less data. Read in a subset of the columns or rows using the usecols or nrows parameters to pd.read_csv. For example, if your data has many columns but you only need the col1 and col2 columns, use pd.read_csv(filepath, usecols=['col1', 'col2']). This can be especially important if you're loading datasets with lots of extra commas (e.g. the rows look like index,col1,col2,,,,,,,,,,,). In this case, usecols makes sure the result only includes the columns you need, and nrows lets you read in just a sample first to inspect the file. (A combined sketch of these first three tips follows the list.)
Use efficient datatypes. By default, pandas stores all integer data as signed 64-bit integers, floats as 64-bit floats, and strings as objects or string types (depending on the version). You can convert these to smaller data types with tools such as Series.astype or pd.to_numeric with the downcast option.
Use Chunking. Parsing huge blocks of data can be slow, especially if your plan is to operate row-wise and then write it out or to cut the data down to a smaller final form. You can use the chunksize and iterator arguments to loop over chunks of the data and process the file in smaller pieces. See the docs on Iterating through files chunk by chunk for more detail. Alternately, use the low_memory flag to get Pandas to use the chunked iterator on the backend, but return a single dataframe.
Use other libraries. There are a couple great libraries listed here, but I'd especially call out dask.dataframe, which specifically works toward your use case, by enabling chunked, multi-core processing of CSV files which mirrors the pandas API and has easy ways of converting the data back into a normal pandas dataframe (if desired) after processing the data.
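A combined sketch of the first three tips (the column names, dtypes, chunk size, and filter are hypothetical):
import pandas as pd

chunks = pd.read_csv(
    "large.csv",
    usecols=["col1", "col2"],                    # load less data: only the columns you need
    dtype={"col1": "int32", "col2": "float32"},  # efficient datatypes instead of 64-bit defaults
    chunksize=100_000,                           # chunking: process the file in pieces
)

results = []
for chunk in chunks:
    results.append(chunk[chunk["col1"] > 0])     # hypothetical row-wise filter per chunk
df = pd.concat(results, ignore_index=True)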
Additionally, there are some csv-specific things I think you should consider:
Specifying column data types. Especially if chunking, but even if you're not, specifying the column types can dramatically reduce read time and memory usage and highlight problem areas in your data (e.g. NaN indicators or flags that don't meet one of pandas's defaults). Use the dtype parameter with a single data type to apply to all columns or a dict of column name/data type pairs to indicate the types to read in. Optionally, you can provide converters to format dates, times, or other numerical data if it's not in a format recognized by pandas. (A sketch combining these csv-specific suggestions follows these notes.)
Specifying the parser engine - pandas can read CSVs in pure Python (slow) or C (much faster). The Python engine has slightly more features (e.g. currently the C parser can't read files with complex multi-character delimiters and it can't skip footers). Try using the argument engine='c' to make sure the C engine is being used. If your file can't be read by the C engine, try fixing the file(s) first manually (e.g. stripping out a footer or standardizing the delimiters) and then parsing with the C engine, if possible.
Make sure you're catching all NaNs and data flags in numeric columns. This can be a tough one, and specifying specific data types in your inputs can be helpful in catching bad cases. Use the na_values, keep_default_na, date_parser, and converters arguments to pd.read_csv. Currently, the default list of values interpreted as NaN is ['', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan', '1.#IND', '1.#QNAN', '<NA>', 'N/A', 'NA', 'NULL', 'NaN', 'n/a', 'nan', 'null']. For example, if your numeric columns have non-numeric values coded as notANumber, then this would be missed and would either cause an error (if you had dtypes specified) or would cause pandas to re-categorize the entire column as an object column (super bad for memory and speed!).
Read the pd.read_csv docs over and over and over again. Many of the arguments to read_csv have important performance considerations. pd.read_csv is optimized to smooth over a large amount of variation in what can be considered a csv, and the more magic pandas has to be ready to perform (determine types, interpret nans, convert dates (maybe), skip headers/footers, infer indices/columns, handle bad lines, etc) the slower the read will be. Give it as many hints/constraints as you can and you might see performance increase a lot! And if it's still not enough, many of these tweaks will also apply to the dask.dataframe API, so this scales up further nicely.
Additionally, if you have the option, save the files in a stable binary storage format. Apache Parquet is a good columnar storage format with pandas support, but there are many others (see that Pandas IO guide for more options). Pickles can be a bit brittle across pandas versions (of course, so can any binary storage format, but this is usually less a concern for explicit data storage formats rather than pickles), and CSVs are inefficient and under-specified, hence the need for type conversion and interpretation.
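A sketch tying the csv-specific suggestions together (the column names, NaN flags, and file paths are hypothetical; Parquet output requires pyarrow or fastparquet):
import pandas as pd

df = pd.read_csv(
    "large.csv",
    engine="c",                                    # make sure the fast C parser is used
    dtype={"col1": "float32", "flag": "category"}, # specify column data types up front
    na_values=["notANumber", "-999"],              # catch custom NaN flags in numeric columns
    parse_dates=["date"],                          # let pandas convert the date column directly
)

# Save to a stable binary format so the expensive CSV parse only happens once.
df.to_parquet("large.parquet")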
One thing to check is the actual performance of the disk system itself. Especially if you use spinning disks (not SSD), your practical disk read speed may be one of the explaining factors for the performance. So, before doing too much optimization, check if reading the same data into memory (by, e.g., mydata = open('myfile.txt').read()) takes an equivalent amount of time. (Just make sure you do not get bitten by disk caches; if you load the same data twice, the second time it will be much faster because the data is already in RAM cache.)
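For instance (a sketch; the file name is hypothetical):
import time

start = time.perf_counter()
with open("myfile.txt", "rb") as f:
    raw = f.read()  # raw disk read, no parsing at all
print(f"raw read: {time.perf_counter() - start:.2f}s for {len(raw) / 1e6:.0f} MB")
If this is close to your pd.read_csv time, the disk, not the parser, is the bottleneck.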
See the update below before believing what I write underneath
If your problem is really parsing of the files, then I am not sure if any pure Python solution will help you. As you know the actual structure of the files, you do not need to use a generic CSV parser.
There are three things to try, though:
Python csv package and csv.reader
NumPy genfromtxt
NumPy loadtxt
The third one is probably fastest if you can use it with your data. At the same time it has the most limited set of features. (Which actually may make it fast.)
Also, the suggestions given you in the comments by crclayton, BKay, and EdChum are good ones.
Try the different alternatives! If they do not work, then you will have to write something in a compiled language (either compiled Python, e.g. Cython, or C).
Update: I do believe what chrisb says below, i.e. the pandas parser is fast.
Then the only way to make the parsing faster is to write an application-specific parser in C (or other compiled language). Generic parsing of CSV files is not straightforward, but if the exact structure of the file is known there may be shortcuts. In any case parsing text files is slow, so if you ever can translate it into something more palatable (HDF5, NumPy array), loading will be only limited by the I/O performance.
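For example, with a purely numeric file (a sketch; the file name is hypothetical), the one-time text parse can be cached as a binary NumPy array:
import numpy as np

# Parse the text once (slow)...
data = np.genfromtxt("data.csv", delimiter=",", skip_header=1)
np.save("data.npy", data)

# ...so later loads are essentially just disk I/O.
data = np.load("data.npy")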
Modin is an early-stage project at UC Berkeley’s RISELab designed to facilitate the use of distributed computing for Data Science. It is a multiprocess Dataframe library with an identical API to pandas that allows users to speed up their Pandas workflows.
Modin accelerates Pandas queries by 4x on an 8-core machine, only requiring users to change a single line of code in their notebooks.
pip install modin
If using dask as the backend:
pip install modin[dask]
Then import modin by typing:
import modin.pandas as pd
It uses all CPU cores to import the CSV file, and its API is almost identical to pandas.
Most of the solutions here are helpful; I would like to add that parallelizing the loading can also help. Simple code below:
import os
import glob
import pandas as pd
from multiprocessing import Pool

path = r'C:\Users\data'  # or whatever your path is
data_files = glob.glob(os.path.join(path, "*.psv"))  # list of all the files to be read

def read_psv_all(file_name):
    # read a single pipe-separated file into a DataFrame
    return pd.read_csv(file_name,
                       delimiter='|',  # change this as needed
                       low_memory=False)

pool = Pool(processes=3)  # change 3 to the number of processes you want to utilize
df_list = pool.map(read_psv_all, data_files)
df = pd.concat(df_list, ignore_index=True, axis=0, sort=False)
Note that if you are using Windows/Jupyter, parallel processing can be a tricky combination. You might need to guard the pool code with if __name__ == '__main__' (a sketch follows).
Along with this, do make use of the usecols and dtype arguments, which will definitely help.
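A minimal sketch of that guard, wrapping the pool code from above:
if __name__ == '__main__':
    # On Windows, child processes re-import the module, so the pool must be created here.
    pool = Pool(processes=3)
    df_list = pool.map(read_psv_all, data_files)
    df = pd.concat(df_list, ignore_index=True, axis=0, sort=False)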
