How to import large csv file and perform operations - python

I have a problem opening a large CSV file (>5 GB) and performing some simple operations on it. I have written this code:
import pandas as pd
import numpy as np
import os
import glob
os.chdir('C:\\Users\\data')
df = pd.read_csv('myfile.csv', low_memory=False, header=None,
                 names=['column1', 'column2', 'column3'])
df
Even with low_memory set to False it does not work. I used the following code that I found on this site, but it does not work either.
import pandas as pd
import numpy as np
import os
import glob
os.chdir('C:\\Users\\data')
mylist = []
for chunk in pd.read_csv('SME_all.csv', sep=';', chunksize=20000):
    mylist.append(chunk)
big_data = pd.concat(mylist, axis=0)
del mylist

df = pd.read_csv('myfile.csv', low_memory=False, header=None,
                 error_bad_lines=False, names=['column1', 'column2', 'column3'])
df
Any suggestions? Should I consider using another application, such as Apache Spark?

There are lots of approaches.
Perhaps the simplest is to split your CSV into multiple files. This only works if you don't need to aggregate the data in any way, such as with a groupby.
You can try specifying dtypes on import; otherwise pandas may interpret columns as objects, which take up more memory.
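For instance, a minimal sketch (the dtypes here are hypothetical; pick ones that match your data):
import pandas as pd

# declaring dtypes up front avoids the object fallback and reduces memory use
dtypes = {'column1': 'int32', 'column2': 'float32', 'column3': 'category'}
df = pd.read_csv('myfile.csv', dtype=dtypes)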
You can iterate over the CSV using Python's built-in csv reader and perform operations on each row, if that's the type of work you're trying to do.
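A sketch of that row-by-row approach with the standard library (assuming a hypothetical myfile.csv with a header row):
import csv

with open('myfile.csv', newline='') as f:
    reader = csv.reader(f)
    header = next(reader)  # skip the header row
    for row in reader:
        pass  # each row is a list of strings; put your per-row operation here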
You can look at Dask, or use PySpark on Google's Dataproc or Azure's Databricks.
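For example, a rough Dask sketch (assuming dask is installed; the file and column names are taken from the question):
import dask.dataframe as dd

# Dask splits the CSV into partitions and loads them lazily
df = dd.read_csv('myfile.csv')
# operations build a task graph; .compute() runs it and returns a pandas object
result = df.groupby('column1')['column2'].mean().compute()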

Related

change np to pandas to avoid MemoryError [duplicate]

I'm getting the following error:
MemoryError: Unable to allocate array with shape (118, 840983) and data type float64
in my Python code whenever I run pandas.read_csv() to read a text file. Why is this?
This is my code:
import pandas as pd
df = pd.read_csv("LANGEVIN_DATA.txt", delim_whitespace=True)
The MemoryError means your file is too large for read_csv to load in one go; you need to use the chunksize parameter to avoid the error, like so:
import pandas as pd

# with chunksize set, read_csv returns an iterator over DataFrames, not a single DataFrame
reader = pd.read_csv("LANGEVIN_DATA.txt", delim_whitespace=True, chunksize=1000)
You can read the official documentation for more help:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html

Is it possible to change similar libraries (Data Analysis) in Python within the same code?

I use the modin library for multiprocessing.
While the library is great for faster processing, it fails at merge, and I would like to revert to default pandas in between in the code.
I understand that per PEP 8 (E402) conventions, imports should be declared once, at the top of the code; however, my case would need otherwise.
import pandas as pd
import modin.pandas as mpd
import os
import ray
ray.init()
os.environ["MODIN_ENGINE"] = "ray"
df = mpd.read_csv()
# do stuff
Then I would like to revert to default pandas within the same code. But how would I do the lines below in pandas? There does not seem to be a clear way to switch between pd and mpd in them, and unfortunately modin seems to take precedence over pandas.
df = df.loc[:, df.columns.intersection(['col1', 'col2'])]
df = df.drop_duplicates()
df = df.sort_values(['col1', 'col2'], ascending=[True, True])
Is it possible? If yes, how?
You can simply do the following:
import modin.pandas as mpd
import pandas as pd
This way you have both modin and the original pandas in memory, and you can switch between them as needed.
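For instance (the file names here are hypothetical):
import pandas as pd
import modin.pandas as mpd

big = mpd.read_csv("big.csv")  # Modin parallelizes the heavy read
small = pd.read_csv("small.csv")  # plain pandas for the small lookup table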
Since many have posted answers: in this particular case, as pointed out by @Nin17 and this comment from the Modin GitHub, to convert from Modin to pandas for single-core processing of some operations like df.merge, you can use
import pandas as pd
import modin.pandas as mpd
import os
import ray
ray.init()
os.environ["MODIN_ENGINE"] = "ray"
df_modin = mpd.read_csv()  # read the dataframe into Modin for parallel processing
df_pandas = df_modin._to_pandas()  # convert the Modin dataframe to pandas for single-core processing
If you would like to convert the dataframe back to a Modin dataframe for parallel processing:
df_modin = mpd.DataFrame(df_pandas)
You can try the pandarallel package instead of modin; it is based on a similar concept: https://pypi.org/project/pandarallel/#description
Pandarallel benchmarks: https://libraries.io/pypi/pandarallel
As @Nin17 said in a comment on the question, this comment from the Modin GitHub describes how to convert a Modin dataframe to pandas. Once you have a pandas dataframe, you can call any pandas method on it. This other comment from the same issue describes how to convert the pandas dataframe back to a Modin dataframe.

combining multiple files into a single file with DataFrame

I have been able to generate several CSV files through an API. Now I am trying to combine all the CSVs into a unique master file so that I can then work on it. But it does not work. The code below is what I have attempted. What am I doing wrong?
import glob
import pandas as pd
from pandas import read_csv

files = glob.glob("*.csv")  # assumption: 'files' was not defined in the original post
master_df = pd.DataFrame()
for file in files:
    df = read_csv(file)
    master_df = pd.concat([master_df, df])
    del df
master_df.to_csv("./master_df.csv", index=False)
Although it is hard to tell what the precise problem is without more information (i.e., error message, pandas version), I believe it is that in the first iteration, master_df and df do not have the same columns. master_df is an empty DataFrame, whereas df has whatever columns are in your CSV. If this is indeed the problem, then I'd suggest storing all your data-frames (each of which represents one CSV file) in a single list, and then concatenating all of them. Like so:
import pandas as pd
df_list = [pd.read_csv(file) for file in files]
pd.concat(df_list, sort=False).to_csv("./master_df.csv", index=False)
Don't have time to find/generate a set of CSV files and test this right now, but am fairly sure this should do the job (assuming pandas version 0.23 or compatible).
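If the combined data is too large to hold in memory at once, a sketch of an append-as-you-go alternative (assuming all files share the same columns; the glob pattern is hypothetical):
import glob
import pandas as pd

files = sorted(glob.glob("*.csv"))
for i, file in enumerate(files):
    df = pd.read_csv(file)
    # write the header only for the first file, then append rows without it
    df.to_csv("./master_df.csv", mode="w" if i == 0 else "a", header=(i == 0), index=False)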

python/pandas "Kernel died, restarting" while loading a csv file

While trying to load a big CSV file (150 MB) I get the error "Kernel died, restarting". The only code that I use is the following:
import pandas as pd
from pprint import pprint
from pathlib import Path
from datetime import date
import numpy as np
import matplotlib.pyplot as plt
basedaily = pd.read_csv('combined_csv.csv')
It used to work before, but I do not know why it is not working anymore. I tried to fix it using engine="python" as follows:
basedaily = pd.read_csv('combined_csv.csv', engine='python')
But it gives me an "execution aborted" error.
Any help would be welcome!
Thanks in advance!
You may have gotten this error because of a lack of memory. You can split your data into many dataframes, do your work, and then re-merge them. Below is some useful code that you may use:
import pandas as pd

# the number of rows in each dataframe;
# you can put any value here according to your situation
chunksize = 1000

# the list that will contain all the dataframes
list_of_dataframes = []

for df in pd.read_csv('combined_csv.csv', chunksize=chunksize):
    # process your dataframe here,
    # then add the current dataframe to the list
    list_of_dataframes.append(df)

# if you want all the dataframes together, here it is
result = pd.concat(list_of_dataframes)
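If even the concatenated result does not fit in memory, you can reduce each chunk before keeping it; a sketch assuming a hypothetical column named 'value':
import pandas as pd

parts = []
for chunk in pd.read_csv('combined_csv.csv', chunksize=1000):
    # keep only the rows you need from each chunk ('value' is a made-up column)
    parts.append(chunk[chunk['value'] > 0])
result = pd.concat(parts)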

Problem reading data from a file with pandas in Python (pandas.io.parsers.TextFileReader)

I want to read a dataset from a file with pandas, but when I use pd.read_csv() the program reads it; however, when I try to view the dataframe, this appears:
pandas.io.parsers.TextFileReader at 0x1b3b6b3e198
As additional information, the file is quite large (around 9 GB).
The file uses vertical bars as the separator, and I tried using chunksize but it doesn't work.
import pandas as pd
df = pd.read_csv(r"C:\Users\dguerr\Documents\files\Automotive\target_file", iterator=True, sep='|',chunksize=1000)
I want to import my data in the traditional pandas dataframe format.
You can load it chunk by chunk by doing:
import pandas as pd
path_to_file = "C:/Users/dguerr/Documents/Acxiom files/Automotive/auto_model_target_file"
chunk_size = 1000
for chunk in pd.read_csv(path_to_file, chunksize=chunk_size):
    pass  # do your stuff with each chunk here
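If you want a single ordinary DataFrame at the end (and it fits in memory), one sketch is to concatenate the chunks, assuming the same file and the question's '|' separator:
import pandas as pd

path_to_file = "C:/Users/dguerr/Documents/Acxiom files/Automotive/auto_model_target_file"
# concatenating the chunks yields one regular pandas DataFrame
df = pd.concat(pd.read_csv(path_to_file, sep='|', chunksize=1000), ignore_index=True)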
You might also want to check the file's encoding. pd.read_csv defaults to utf-8; if your file is latin-1, for instance, this could potentially lead to such errors.
import pandas as pd
df = pd.read_csv('C:/Users/dguerr/Documents/Acxiom files/Automotive/auto_model_target_file',
                 encoding='latin-1', chunksize=1000)
