Too many open output files while splitting a CSV - python

Very novice attempt at python here.
I tried implementing something like what was discussed in this question: Splitting csv file based on a particular column using Python
My goal is to take a file with 15 million lines covering 500 ticker symbols and put each ticker in its own file.
However, when I'm running it, I'm getting
OSError: [Errno 24] Too many open files: 'APH.csv'
All of the lines of data are in order (i.e. all of the lines for ticker "A" come one right after another), so I could close a file before going on to the next one. I'm not sure where in this code I would close the file before moving on to the next one. FYI, this is on a Mac, if that matters.
My code is
import csv

with open('WIKI_PRICES_big.csv') as fin:
    csvin = csv.DictReader(fin)
    # Category -> open file lookup
    outputs = {}
    for row in csvin:
        cat = row['ticker']
        # Open a new file and write the header
        if cat not in outputs:
            fout = open('{}.csv'.format(cat), 'w')
            dw = csv.DictWriter(fout, fieldnames=csvin.fieldnames)
            dw.writeheader()
            outputs[cat] = fout, dw
        # Always write the row
        outputs[cat][1].writerow(row)

# Close all the files
for fout, _ in outputs.values():
    fout.close()

Based on the file structure you describe, the following should do it.
The trick is that if the ticker values are always in order, you only need to keep a single output file open at any one time. You can close the old one and open a new one whenever you come across a new ticker value.
import csv

fout = False
with open('WIKI_PRICES_big.csv') as fin:
    csvin = csv.DictReader(fin)
    seen = []
    for row in csvin:
        cat = row['ticker']
        # Open a new file and write the header.
        if cat not in seen:
            seen.append(cat)
            if fout:  # Close old file if we have one.
                fout.close()
            fout = open('{}.csv'.format(cat), 'w')
            dw = csv.DictWriter(fout, fieldnames=csvin.fieldnames)
            dw.writeheader()
        # Always write the row
        dw.writerow(row)
fout.close()
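Since the rows are already grouped by ticker, itertools.groupby expresses the same idea and guarantees each output file is closed before the next one is opened. This is only a sketch of an alternative, assuming the input really is ordered as described in the question:

import csv
from itertools import groupby

with open('WIKI_PRICES_big.csv') as fin:
    csvin = csv.DictReader(fin)
    # groupby yields one batch of consecutive rows per ticker value.
    for cat, rows in groupby(csvin, key=lambda r: r['ticker']):
        with open('{}.csv'.format(cat), 'w', newline='') as fout:
            dw = csv.DictWriter(fout, fieldnames=csvin.fieldnames)
            dw.writeheader()
            dw.writerows(rows)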


How to add data to existing rows of a CSV file? [duplicate]

This question already has answers here:
How to add a string to each line in a file?
(3 answers)
Closed 9 months ago.
I already have an existing CSV file that I am accessing, and I want to append the data to the first rows, but it writes the data at the end of the file instead. (Screenshots of the output I am getting and the output I want are omitted here.)
Code I have done so far:
import csv

with open('explanation.csv', 'a', newline="") as file:
    myFile = csv.writer(file)
    myFile.writerow(["1"])
What you actually want to do is replace data in an existing CSV file with new values; however, in order to update a CSV file you must rewrite the whole thing.
One way to do that is to read the whole thing into memory, update the data, and then use it to overwrite the existing file. Alternatively, you could process the file a row at a time, store the results in a temporary file, and then replace the original with the temporary file when you've finished updating them all.
The code to do the latter is shown below:
import csv
import os
from pathlib import Path
from tempfile import NamedTemporaryFile

filepath = Path('explanation.csv')  # CSV file to update.

with open(filepath, 'r', newline='') as csv_file, \
     NamedTemporaryFile('w', newline='', dir=filepath.parent, delete=False) as tmp_file:
    reader = csv.reader(csv_file)
    writer = csv.writer(tmp_file)

    # Replace value in the first column of the first 5 rows.
    for data_value in range(1, 6):
        row = next(reader)
        row[0] = data_value
        writer.writerow(row)

    writer.writerows(reader)  # Copy remaining rows of original file.

# Replace original file with updated version.
os.replace(tmp_file.name, filepath)

print('CSV file updated')
You could read in the entire file, append your rows in memory, and then write the entire file:
import csv

def append(fname, data):
    # newline='' avoids blank rows on Windows, per the csv docs.
    with open(fname, newline='') as f:
        reader = csv.reader(f)
        data = list(reader) + list(data)
    with open(fname, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerows(data)
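For example, with a couple of made-up rows (the values are purely illustrative):

# Hypothetical usage: tack two new rows onto the end of explanation.csv.
append('explanation.csv', [['6', 'foo'], ['7', 'bar']])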

How to read two csv files and to concatenate them?

First, I need to import two csv files.
Then I need to remove the header in both files.
After that, I would like to take one column from each file and concatenate them.
I have tried to open the files, but I'm not sure how to concatenate them.
Can anyone give advice on how to proceed?
import csv

x = []
chamber_temperature = []

with open(r"C:\Users\mm02058\Documents\test.txt", 'r') as file:
    reader = csv.reader(file, delimiter='\t')
    with open(r"C:\Users\mm02058\Documents\test.txt", 'r') as file1:
        reader_1 = csv.reader(file1, delimiter='\t')
        for row in (reader):
            x.append(row[0])
            chamber_temperature.append(row[1])

for row in (reader_1):
    x.append(row[0])
    chamber_temperature.append(row[1])
The immediate bug is that you are trying to read from reader_1 outside the with block, which means Python has already closed the file.
But the nesting of the with calls is confusing and misleading anyway. Here is a generalization that should allow you to extend to more files easily.
import csv

x = []
chamber_temperature = []

for filename in (r"C:\Users\mm02058\Documents\test.txt",
                 r"C:\Users\mm02058\Documents\test.txt"):
    with open(filename, 'r') as file:
        for idx, row in enumerate(csv.reader(file, delimiter='\t')):
            if idx == 0:
                continue  # skip header line
            x.append(row[0])
            chamber_temperature.append(row[1])
Because of how you have structured your code, the context manager for file1 will close the file before it has been used by the for loop.
Use a single context manager to open both files, e.g.:

with open('file1', 'r') as file1, open('file2', 'r') as file2:
    # Your code in here
for row in (reader_1):
    x.append(row[0])
    chamber_temperature.append(row[1])
You are getting this error because you have placed this code block outside the second with block, so by the time it runs the file has already been closed.
You can either open both files at once with this:

with open('file1', 'r') as file1, open('file2', 'r') as file2:
    # Your code in here
or you can use pandas for opening and concatenating csv files
import pandas as pd
data = pd.read_csv(r'file.csv', header=None)
and then refer here Concatenate dataframes
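As a rough sketch of that pandas route (assuming both files are tab-separated and have a header row; the paths are the same placeholders used in the question):

import pandas as pd

# Read both files; the header row is consumed automatically.
df1 = pd.read_csv(r"C:\Users\mm02058\Documents\test.txt", sep='\t')
df2 = pd.read_csv(r"C:\Users\mm02058\Documents\test.txt", sep='\t')

# Stack them and pull out the first two columns,
# mirroring x and chamber_temperature above.
combined = pd.concat([df1, df2], ignore_index=True)
x = combined.iloc[:, 0].tolist()
chamber_temperature = combined.iloc[:, 1].tolist()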

Combine two rows into one in a csv file with Python

I am trying to combine multiple rows in a csv file into one. I could easily do it in Excel, but I want to do this for hundreds of files, so I need it as code. I have tried to store the rows in arrays, but it doesn't seem to work. I am using Python.
So let's say I have a csv file:
1,2,3
4,5,6
7,8,9
All I want is to have a csv file like this:
1,2,3,4,5,6,7,8,9
The code I have tried is this:

fin = open("C:\\1.csv", 'r+')
fout = open("C:\\2.csv", 'w')
for line in fin.xreadlines():
    new = line.replace(',', ' ', 1)
    fout.write(new)
fin.close()
fout.close()
Could you please help?
You should be using the csv module for this as splitting CSV manually on commas is very error-prone (single columns can contain strings with commas, but you would incorrectly end up splitting this into multiple columns). The CSV module uses lists of values to represent single rows.
import csv

def return_contents(file_name):
    with open(file_name) as infile:
        reader = csv.reader(infile)
        return list(reader)

data1 = return_contents('csv1.csv')
data2 = return_contents('csv2.csv')

print(data1)
print(data2)

combined = []
for row in data1:
    combined.extend(row)
for row in data2:
    combined.extend(row)

with open('csv_out.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    writer.writerow(combined)
That code gives you the basis of the approach but it would be ugly to extend this for hundreds of files. Instead, you probably want os.listdir to pull all the files in a single directory, one by one, and add them to your output. This is the reason that I packed the reading code into the return_contents function; we can repeat the same process millions of times on different files with only one set of code to do the actual reading. Something like this:
import csv
import os

def return_contents(file_name):
    with open(file_name) as infile:
        reader = csv.reader(infile)
        return list(reader)

all_files = os.listdir('my_csvs')
combined_output = []

for file in all_files:
    data = return_contents('my_csvs/{}'.format(file))
    for row in data:
        combined_output.extend(row)

with open('csv_out.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    writer.writerow(combined_output)
If you are dealing specifically with the CSV file format, I recommend you use the csv package for the file operations. If you also use the with...as statement, you don't need to worry about closing the file. You just need to define PATH and the program will iterate over all .csv files.
Here is what you can do:
import csv
import os

PATH = "your folder path"

def order_list():
    data_list = []
    for filename in os.listdir(PATH):
        if filename.endswith(".csv"):
            # Open each .csv file found in PATH.
            with open(os.path.join(PATH, filename)) as csvfile:
                read_csv = csv.reader(csvfile, delimiter=',', quoting=csv.QUOTE_NONNUMERIC)
                for row in read_csv:
                    data_list.extend(row)
    print(data_list)

if __name__ == '__main__':
    order_list()
Store your data in a pandas DataFrame:

import pandas as pd
df = pd.read_csv('file.csv')

Store the modified dataframe in a new one:

df_2 = df.groupby('Column_Name').agg(lambda x: ' '.join(x)).reset_index()  # write the name of your column

Write the df to a new csv:

df_2.to_csv("file_modified.csv")
You could also do it like this:
fIn = open("test.csv", "r")
fOut = open("output.csv", "w")
fOut.write(",".join([line for line in fIn]).replace("\n",""))
fIn.close()
fOut.close()
If you now want to run it on multiple files, you can run it as a script with arguments:
import sys
fIn = open(sys.argv[1], "r")
fOut = open(sys.argv[2], "w")
fOut.write(",".join([line for line in fIn]).replace("\n",""))
fIn.close()
fOut.close()
Now, assuming you use some Linux system and the script is called csvOnliner.py, you could call it with:
for i in *.csv; do python csvOnliner.py $i changed_$i; done
On Windows you could do it like this:
FOR %i IN (*.csv) DO csvOnliner.py %i changed_%i

Python- Can't overwrite first column of .csv file with new time stamp

I have a .csv file (see image):
The image shows a time column containing datetime strings. I have a program that takes this column and reads only the times H:M:S. In my program I am not only trying to read just the H:M:S time stamp from that column, but also trying to overwrite the time column of the first file, replacing it with only the H:M:S time stamp, in a new .csv, using the following code.
CODE:
import csv
import datetime as dt
import os

File = 'C:/Users/Alan Cedeno/Desktop/Test_Folder/HiSAM1_data_160215_164858.csv'
root, ext = os.path.splitext(File)
output = root + '-new.csv'

with open(File, 'r') as csvinput, open(output, 'w') as csvoutput:
    writer = csv.writer(csvoutput, lineterminator='\n')
    reader = csv.reader(csvinput)
    all = []

    row = next(reader)
    for line in reader:
        row.append(dt.datetime.strptime(line[0], '%m/%d/%Y %H:%M:%S').time())
        all.append(row)

    for row in reader:
        row.append(row[0])
        all.append(row)

    writer.writerows(all)
The program works, taking the datetime strings and replacing them with the H:M:S time stamp in a new .csv file. However, here is the problem: instead of replacing only the time column, the output file has every column replaced, producing a file that looks like the second image.
At this point I don't really know how to make the new output file look like the file in the first image, with the H:M:S format in the first column ONLY, not all scrambled like in the second image. Any suggestions?
SCREENSHOT FOR BAH:
See the K column; it should be column A of the first image, and columns B, C, D, E, F, G, I, and J should stay the same as in image 1.
Download LInk of .csv file: http://www.speedyshare.com/z2jwq/HiSAM1-data-160215-164858.csv
The main problem with your code seems to be that you keep appending, to the first row, the time from each line of the csv, which results in the second image posted in the question.
The idea is to keep track of the different lines and modify just the first element of each line. Also, you should keep the first line, which contains the column labels. With that fixed, the code would look like:
import csv
import datetime as dt
import os

File = 'C:/Users/Alan Cedeno/Desktop/Test_Folder/HiSAM1_data_160215_164858.csv'
root, ext = os.path.splitext(File)
output = root + '-new.csv'

with open(File, 'r') as csvinput, open(output, 'w') as csvoutput:
    writer = csv.writer(csvoutput, lineterminator='\n')
    reader = csv.reader(csvinput)
    rows = [next(reader)]
    for line in reader:
        line[0] = str(dt.datetime.strptime(line[0], '%m/%d/%Y %H:%M:%S').time())
        rows.append(line)
    writer.writerows(rows)
Note that the list rows holds the modified lines from csvinput.
The resulting output csv file (tested with the first data line in the question duplicated) has the H:M:S value in the first column only.
With some simplified data:
#!python3
import csv
import datetime as dt
import os

File = 'data.csv'
root, ext = os.path.splitext(File)
output = root + '-new.csv'

# csv module documents opening with `newline=''` mode in Python 3.
with open(File, 'r', newline='') as csvinput, open(output, 'w', newline='') as csvoutput:
    writer = csv.writer(csvoutput)
    reader = csv.reader(csvinput)

    # Copy the header
    row = next(reader)
    writer.writerow(row)

    # Edit the first column of each row.
    for row in reader:
        row[0] = dt.datetime.strptime(row[0], '%m/%d/%Y %H:%M:%S').time()
        writer.writerow(row)
Input:
Time,0.3(/L),0.5(/L)
02/15/2016 13:44:01,88452,16563
02/15/2016 13:44:02,88296,16282
Output:
Time,0.3(/L),0.5(/L)
13:44:01,88452,16563
13:44:02,88296,16282
If actually on Python 2, the csv module documents using binary mode. Replace the with line with:
with open(File,'rb') as csvinput,open(output, 'wb') as csvoutput:
You cannot overwrite a single row in the CSV file. You'll have to write all the rows you want to a new file and then rename it back to the original file name.
Your pattern of usage may fit a database better than a CSV file. Look into the sqlite3 module for a lightweight database.
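If you want to explore that suggestion, here is a minimal sketch of the sqlite3 approach; the database file, table, and column names are made up for illustration:

import sqlite3

conn = sqlite3.connect('readings.db')
conn.execute('CREATE TABLE IF NOT EXISTS readings (time TEXT, small TEXT, large TEXT)')
conn.execute('INSERT INTO readings VALUES (?, ?, ?)',
             ('02/15/2016 13:44:01', '88452', '16563'))

# Unlike a CSV file, a single column can be updated in place.
conn.execute("UPDATE readings SET time = substr(time, 12)")  # keep only H:M:S
conn.commit()
conn.close()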

add a new column to an existing csv file

I have a csv file with 5 columns and I want to add data in a 6th column. The data I have is in an array.
Right now, the code that I have will insert the data I would want in the 6th column only AFTER all the data that already exists in the csv file.
For instance I have:
wind, site, date, time, value
10, 01, 01-01-2013, 00:00, 5.1
89.6 ---> this is the value I want to add in a 6th column but it puts it after all the data from the csv file
Here is the code I am using:
import csv

csvfile = 'filename'
with open(csvfile, 'a') as output:
    writer = csv.writer(output, lineterminator='\n')
    for val in data:
        writer.writerow([val])
I thought using 'a' would append the data in a new column, but instead it just puts it after ('under') all the other data... I don't know what to do!
Appending writes data to the end of a file, not to the end of each row.
Instead, create a new file and append the new value to each row.
import csv

csvfile = 'filename'
with open(csvfile, 'r', newline='') as fin, open('new_' + csvfile, 'w', newline='') as fout:
    reader = csv.reader(fin)
    writer = csv.writer(fout, lineterminator='\n')
    if you_have_headers:
        writer.writerow(next(reader) + [new_heading])
    for row, val in zip(reader, data):
        writer.writerow(row + [val])
On Python 2.x, remove the newline='' arguments and change the filemodes from 'r' and 'w' to 'rb' and 'wb', respectively.
Once you are sure this is working correctly, you can replace the original file with the new one:
import os
os.remove(csvfile) # not needed on unix
os.rename('new_'+csvfile, csvfile)
The csv module does not support writing or appending a column, so the only thing you can do is read from one file, append the 6th column data, and write to another file. This is shown below:
with open('in.txt') as fin, open('out.txt', 'w') as fout:
    index = 0
    for line in fin:
        fout.write(line.replace('\n', ', ' + str(data[index]) + '\n'))
        index += 1
data is a list of ints.
I tested this code in Python and it runs fine.
We have a CSV file, data.csv, with the following contents:
#data.csv
1,Joi,Python
2,Mark,Laravel
3,Elon,Wordpress
4,Emily,PHP
5,Sam,HTML
Now we want to add a column to this csv file, and every entry in that column should contain the same value, i.e. Something text.
Example
from csv import writer
from csv import reader

new_column_text = 'Something text'

with open('data.csv', 'r') as read_object, \
     open('data_output.csv', 'w', newline='') as write_object:
    csv_reader = reader(read_object)
    csv_writer = writer(write_object)
    for row in csv_reader:
        row.append(new_column_text)
        csv_writer.writerow(row)
Output
#data_output.csv
1,Joi,Python,Something text
2,Mark,Laravel,Something text
3,Elon,Wordpress,Something text
4,Emily,PHP,Something text
5,Sam,HTML,Something text
The append mode of opening files is meant to add data to the end of a file. What you need to do is get random access for your file writing; you need to use the seek() method.
You can see an example here:
http://www.tutorialspoint.com/python/file_seek.htm
or read the Python docs on it here: https://docs.python.org/2.4/lib/bltin-file-objects.html which aren't terribly useful.
If you want to add to the end of a column, you may want to open the file, read a line to figure out its length, then seek to the end.
