I'm writing a fixed-width file out to CSV. Because the file is too large to read at once, I'm reading it in chunks of 100,000 rows and appending to the CSV. This works, but it keeps adding an index column to the rows despite my having set index=False.
How can I write the CSV file without the index?
infile = filename
outfile = outfilename
cols = [(0,10), (12,19), (22,29), (34,41), (44,52), (54,64), (72,80), (82,106), (116,144), (145,152), (161,169), (171,181)]

for chunk in pd.read_fwf(infile, colspecs=cols, index=False, chunksize=100000):
    chunk.to_csv(outfile, mode='a')
The to_csv method has a header parameter that controls whether the column names are written; you probably do not want the header for any write other than the first one. The index column comes from to_csv as well, so pass index=False there rather than to read_fwf.
So you could do something like this:

for i, chunk in enumerate(pd.read_fwf(...)):
    first = i == 0
    chunk.to_csv(outfile, header=first, index=False, mode='a')
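Putting it together with the read_fwf call from your question (a sketch: the colspecs and chunksize come from your snippet, infile/outfile are your own variables, and index=False goes to to_csv, not read_fwf):

import pandas as pd

cols = [(0, 10), (12, 19), (22, 29), (34, 41), (44, 52), (54, 64),
        (72, 80), (82, 106), (116, 144), (145, 152), (161, 169), (171, 181)]

for i, chunk in enumerate(pd.read_fwf(infile, colspecs=cols, chunksize=100000)):
    # write the column header only for the first chunk, and never the row index
    chunk.to_csv(outfile, mode='a', header=(i == 0), index=False)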
I have some giant CSV files (around 23 GB each) and I want to do the following with their column headers:
If there is a column named SFID:
Rename column "Id" to "IgnoreId"
Rename column "SFID" to "Id"
else:
Do nothing
All the Google search results I see are about how to import the CSV into a dataframe, rename the column, and export it back into a CSV.
To me this feels like a giant waste of time/memory, because we are effectively only working with the very first row of the CSV file (the header row). I don't know if it is necessary to load the whole CSV as a dataframe and export it to a new CSV (or export it to the same CSV, effectively overwriting it).
Being huge CSVs, I have to load them in small chunks and perform the operation, which takes time and memory. Again, it feels like a waste of memory because apart from the headers, we are not really doing anything with the remaining chunks.
Is there a way to just load the header of a CSV file, make changes to it, and save it back into the same CSV file?
I am open to ideas that use something other than pandas as well. The only real constraint is that the CSV files are too big to just double-click and open.
Write the header row first and copy the data rows using shutil.copyfileobj.
shutil.copyfileobj took 38 seconds for a 0.5 GB file, whereas fileinput took 125 seconds for the same file.
Using shutil.copyfileobj
import shutil
import pandas as pd

df = pd.read_csv(filename, nrows=0)  # read only the header row
if 'SFID' in df.columns:
    # rename columns
    df.rename(columns={"Id": "IgnoreId", "SFID": "Id"}, inplace=True)
    # construct new header row
    header_row = ','.join(df.columns) + "\n"
    # modify header in csv file
    # note: this in-place overwrite is only safe if the new header line is exactly as
    # long as the old one; if the lengths differ, the data rows get shifted, so in that
    # case write to a temporary file instead
    with open(filename, "r+") as f1, open(filename, "r+") as f2:
        f1.readline()  # move the read pointer past the old header row
        f2.write(header_row)
        shutil.copyfileobj(f1, f2)  # copies the data rows
Using fileinput
import fileinput
import pandas as pd

df = pd.read_csv(filename, nrows=0)  # read only the header row
if 'SFID' in df.columns:
    # rename columns
    df.rename(columns={"Id": "IgnoreId", "SFID": "Id"}, inplace=True)
    # construct new header row
    header_row = ','.join(df.columns)
    # modify header in csv file
    f = fileinput.input(filename, inplace=True)
    for line in f:
        if fileinput.isfirstline():
            print(header_row)
        else:
            print(line, end='')
    f.close()
For huge files, a simple command-line solution with the stream editor sed might be faster than a Python script:
sed -e '1 {/SFID/ {s/Id/IgnoreId/; s/SFID/Id/}}' -i myfile.csv
This changes Id to IgnoreId and SFID to Id in the first line if it contains SFID. If other column headers also contain the string Id (e.g. ImportantId), then you'll have to refine the regexes in the s commands accordingly.
I have about 200 CSV files and I need to combine them on specific columns. Each CSV file contains 1000 filled rows in specific columns. My file names are like below:
csv_files = [en_tr_translated0.csv, en_tr_translated1000.csv, en_tr_translated2000.csv, ......... , en_tr_translated200000.csv]
My CSV file columns are described below.
The first two columns are prefilled with the same 200,000 rows/sentences in all the CSV files. Each en_tr_translated{ }.csv file contains 1000 translated sentences corresponding to its file name. For example:
the en_tr_translated1000.csv file contains translated sentences from row 0 to row 1000, the en_tr_translated2000.csv file contains translated sentences from row 1000 to row 2000, etc. The rest is NaN/empty. Below is an example image from the en_tr_translated3000.csv file.
I want to copy/merge/join the rows to get one full CSV file that contains all the translated sentences. I tried the code below:
out = pd.read_csv(path + 'en_tr_translated0.csv', sep='\t', names=['en_sentence', 'tr_sentence', 'translated_tr_sentence', 'translated_en_sentence'], dtype=str, encoding='utf-8', low_memory=False)

i = 1000
for _ in tqdm(range(200000)):
    new = pd.read_csv(path + f'en_tr_translated{i}.csv', sep='\t', names=['en_sentence', 'tr_sentence', 'translated_tr_sentence', 'translated_en_sentence'], dtype=str, encoding='utf-8', low_memory=False)
    out.loc[_, 'translated_tr_sentence'] = new.loc[_, 'translated_tr_sentence']
    out.loc[_, 'translated_en_sentence'] = new.loc[_, 'translated_en_sentence']
    if _ == i:
        i += 1000
Actually, it works fine, but my problem is that it takes 105 HOURS!!
Is there any faster way to do this? I have to do this for like 5 different datasets and this is getting very annoying.
Any suggestion is appreciated.
Your input files have exactly one row of data per line in the file, correct? Then it would probably be even faster not to use pandas at all, although if done correctly, 200,000 rows should still be very fast whether you use pandas or not.
To do it without pandas: just open each file, move to the right index, write 1000 lines to the output file, then move on to the next file. You might have to handle headers and watch out for shifts in the indices, but here is an idea of how to do that:
with open(path + 'en_tr_translated_combined.csv', 'w') as f_out:  # open the output file in write mode
    for filename_index in tqdm(range(0, 201000, 1000)):  # iterate over each index in steps of 1000 between 0 and 200000
        with open(path + f'en_tr_translated{filename_index}.csv') as f_in:  # open the file with that index
            for row_index, line in enumerate(f_in):  # iterate over its rows
                if row_index < filename_index:  # skip rows until you reach the ones with translated content
                    continue
                if row_index > filename_index + 1000:  # stop once you reach the part where the translations end
                    break
                f_out.write(line)  # for the rows in between: copy the content to the output file
I would load all the files, drop the rows that are not fully filled, and afterwards concatenate all of the dataframes.
Something like:
from pathlib import Path
import pandas as pd

dfs = []
for ff in Path('.').rglob('*.csv'):
    dfs.append(pd.read_csv(ff, names=['en_sentence', 'tr_sentence', 'translated_tr_sentence', 'translated_en_sentence'], dtype=str, encoding='utf-8', low_memory=True).dropna())
df = pd.concat(dfs)
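One caveat (an assumption on my part: rglob does not return the files in numeric order, and every file carries all 200,000 row positions): since each surviving row keeps its original positional index, you can restore the original sentence order by sorting on that index after concatenating.

# sort on the preserved row index so the sentences come out in their original order
df = pd.concat(dfs).sort_index()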
My program writes data into a CSV file using pandas' to_csv function. On the first run, the CSV file is empty and my code writes data into it (as it should). On the second run (note that I'm still using the same CSV file), my code writes data into it again (which is good). The problem is that there is a large number of empty rows between the data from the first run and the data from the second run.
Below is my code:
# place into a file
csvFile = open(file, 'a', newline='', encoding='utf-8')
if file_empty == True:
    df.to_csv(csvFile, sep=',', columns=COLS, index=False, mode='ab', encoding='utf-8')  # header true
else:
    df.to_csv(csvFile, sep=',', columns=COLS, header=False, index=False, mode='ab', encoding='utf-8')  # header false
I used the variable file_empty in order for the program to not write column headers if there is already data present in the CSV file.
Below is the sample output from the CSV file:
Last data from first run is in line 396 of CSV file,
first row data from second run is in line 1308 of the same CSV file.
So there are empty rows starting from line 397 up to line 1307. How can I remove them so that when the program is run again, there are no empty rows between them?
Here is code to append the data and remove the blank lines; the lines below may help you:
import pandas

conso_frame = pandas.read_csv('consofile1.csv')
df_2 = pandas.read_csv('csvfile2.csv')

# column names should be the same in both files
conso_frame = conso_frame.append(df_2)
print(conso_frame)

# drop the rows whose "Intent" column is blank
conso_frame.dropna(subset=["Intent"], inplace=True)
print(conso_frame)

conso_frame.to_csv('consofile1.csv', index=False)
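Note that DataFrame.append was deprecated and then removed in pandas 2.0, so on recent pandas the same thing is done with pd.concat. A sketch using the same file and column names as above:

import pandas as pd

conso_frame = pd.read_csv('consofile1.csv')
df_2 = pd.read_csv('csvfile2.csv')

# stack the two frames, then drop the rows whose "Intent" column is blank
conso_frame = pd.concat([conso_frame, df_2], ignore_index=True)
conso_frame.dropna(subset=["Intent"], inplace=True)
conso_frame.to_csv('consofile1.csv', index=False)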
I have 18 CSV files, each approximately 1.6 GB and each containing approximately 12 million rows. Each file represents one year's worth of data. I need to combine all of these files, extract data for certain geographies, and then analyse the time series. What is the best way to do this?
I have tried using pd.read_csv but I hit a memory limit. I have tried including a chunksize argument, but this gives me a TextFileReader object and I don't know how to combine these into a dataframe. I have also tried pd.concat, but this does not work either.
Here is an elegant way of using pandas to combine very large CSV files.
The technique is to load a fixed number of rows (defined as CHUNK_SIZE) into memory per iteration until the files are exhausted. These rows are appended to the output file in "append" mode.
import pandas as pd

CHUNK_SIZE = 50000
csv_file_list = ["file1.csv", "file2.csv", "file3.csv"]
output_file = "./result_merge/output.csv"

for csv_file_name in csv_file_list:
    chunk_container = pd.read_csv(csv_file_name, chunksize=CHUNK_SIZE)
    for chunk in chunk_container:
        chunk.to_csv(output_file, mode="a", index=False)
But if your files contain headers, then it makes sense to skip the header row in every file except the first one, as a repeated header is unexpected. In that case the solution is as follows:
import pandas as pd

CHUNK_SIZE = 50000
csv_file_list = ["file1.csv", "file2.csv", "file3.csv"]
output_file = "./result_merge/output.csv"

first_one = True
for csv_file_name in csv_file_list:
    if not first_one:  # if it is not the first csv file then skip the header row (row 0) of that file
        skip_row = [0]
    else:
        skip_row = []

    chunk_container = pd.read_csv(csv_file_name, chunksize=CHUNK_SIZE, skiprows=skip_row)
    for chunk in chunk_container:
        chunk.to_csv(output_file, mode="a", index=False)
    first_one = False
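One more thing to keep in mind: to_csv also writes the column header on every call by default, so each appended chunk adds another header row to the output file. A sketch (same placeholder file list, assuming every input file has an identical header row that read_csv consumes as the column names) that writes the output header exactly once:

import pandas as pd

CHUNK_SIZE = 50000
csv_file_list = ["file1.csv", "file2.csv", "file3.csv"]
output_file = "./result_merge/output.csv"

wrote_header = False
for csv_file_name in csv_file_list:
    for chunk in pd.read_csv(csv_file_name, chunksize=CHUNK_SIZE):
        # write the column names only before the very first chunk
        chunk.to_csv(output_file, mode="a", index=False, header=not wrote_header)
        wrote_header = True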
The memory limit is hit because you are trying to load the whole CSV into memory. An easy solution would be to read the files line by line (assuming your files all have the same structure), check each line, then write it to the target file:
filenames = ["file1.csv", "file2.csv", "file3.csv"]
sep = ";"

def check_data(data):
    # ... your tests
    return True  # << True if data should be written into target file, else False

with open("/path/to/dir/result.csv", "a+") as targetfile:
    for filename in filenames:
        with open("/path/to/dir/" + filename, "r") as f:
            next(f)  # << only if the first line contains headers
            for line in f:
                data = line.split(sep)
                if check_data(data):
                    targetfile.write(line)
Update: An example of the check_data method, following your comments:
def check_data(data):
    return data[n] == 'USA'  # << where n is the column holding the country
The TextFileReader object you get back when you pass chunksize is an iterator of DataFrames: each iteration yields one chunk as a regular DataFrame. You can then use pd.concat to concatenate the individual dataframes.
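A sketch of that idea (the file name and the "geography" column are placeholders for your data): iterate over the reader, filter each chunk down to the geographies you need, and concatenate only the filtered pieces, so the full 12 million rows per file never sit in memory at once.

import pandas as pd

filtered_chunks = []
for chunk in pd.read_csv("year_2001.csv", chunksize=100000):  # hypothetical file name
    # keep only the rows for the geography of interest before accumulating
    filtered_chunks.append(chunk[chunk["geography"] == "USA"])  # hypothetical column and value

df = pd.concat(filtered_chunks, ignore_index=True)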