First, I need to import two CSV files. Then I need to remove the header row from both files. After that, I would like to take one column from each file and concatenate them.
I have managed to open the files, but I'm not sure how to do the concatenation.
Can anyone advise how to proceed?
import csv

x = []
chamber_temperature = []

with open(r"C:\Users\mm02058\Documents\test.txt", 'r') as file:
    reader = csv.reader(file, delimiter='\t')
    with open(r"C:\Users\mm02058\Documents\test.txt", 'r') as file1:
        reader_1 = csv.reader(file1, delimiter='\t')
        for row in reader:
            x.append(row[0])
            chamber_temperature.append(row[1])

for row in reader_1:
    x.append(row[0])
    chamber_temperature.append(row[1])
The immediate bug is that you are trying to read from reader_1 outside the with block, which means Python has already closed the file.
But the nesting of the with calls is just confusing and misleading anyway. Here is a generalization which should allow you to easily extend to more new files:
import csv

x = []
chamber_temperature = []

for filename in (r"C:\Users\mm02058\Documents\test.txt",
                 r"C:\Users\mm02058\Documents\test.txt"):
    with open(filename, 'r') as file:
        for idx, row in enumerate(csv.reader(file, delimiter='\t')):
            if idx == 0:
                continue  # skip header line
            x.append(row[0])
            chamber_temperature.append(row[1])
Because of how you have structured your code, the context manager for file1 will close the file before it has been used by the for loop.
Use a single context manager to open both files, e.g.:
with open('file1', 'r') as file1, open('file2', 'r') as file2:
    # Your code in here
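Applied to the code from the question, a minimal sketch might look like this (the same file path is reused for both, as in the question):
import csv

x = []
chamber_temperature = []

# Both files stay open for the whole block, so both readers remain usable.
with open(r"C:\Users\mm02058\Documents\test.txt", 'r') as file1, \
     open(r"C:\Users\mm02058\Documents\test.txt", 'r') as file2:
    reader_1 = csv.reader(file1, delimiter='\t')
    reader_2 = csv.reader(file2, delimiter='\t')
    for row in reader_1:
        x.append(row[0])
        chamber_temperature.append(row[1])
    for row in reader_2:
        x.append(row[0])
        chamber_temperature.append(row[1])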
for row in reader_1:
    x.append(row[0])
    chamber_temperature.append(row[1])
You are getting this error because you have placed this code block outside the second with statement, so by the time it runs the file has already been closed.
You can either open both files at once with this:
with open('file1', 'r') as file1, open('file2', 'r') as file2:
    # Your code in here
or you can use pandas for opening and concatenating CSV files:
import pandas as pd

data = pd.read_csv(r'file.csv', header=None)
and then refer to Concatenate dataframes.
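As a rough sketch of the pandas route (the file paths and the column position are placeholders for your own):
import pandas as pd

# Read both tab-delimited files; the first line is consumed as the header.
df1 = pd.read_csv(r"C:\Users\mm02058\Documents\test.txt", sep='\t')
df2 = pd.read_csv(r"C:\Users\mm02058\Documents\test.txt", sep='\t')

# Concatenate the second column of each file into one long Series.
chamber_temperature = pd.concat([df1.iloc[:, 1], df2.iloc[:, 1]], ignore_index=True)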
I have an existing CSV file that I am accessing, and I want to append data to the first rows (one value per row, in the first column), but my code writes the data at the end of the file instead.
Code I have done so far:
import csv

with open('explanation.csv', 'a', newline='') as file:
    myFile = csv.writer(file)
    myFile.writerow(["1"])
What you actually want to do is replace data in an existing CSV file with new values; however, in order to update a CSV file you must rewrite the whole thing.
One way to do that is to read the whole file into memory, update the data, and then use it to overwrite the existing file. Alternatively, you could process the file a row at a time, store the results in a temporary file, and then replace the original with the temporary file when finished.
The code to do the latter is shown below:
import csv
import os
from pathlib import Path
from tempfile import NamedTemporaryFile

filepath = Path('explanation.csv')  # CSV file to update.

with open(filepath, 'r', newline='') as csv_file, \
     NamedTemporaryFile('w', newline='', dir=filepath.parent, delete=False) as tmp_file:
    reader = csv.reader(csv_file)
    writer = csv.writer(tmp_file)
    # Replace value in the first column of the first 5 rows.
    for data_value in range(1, 6):
        row = next(reader)
        row[0] = data_value
        writer.writerow(row)
    writer.writerows(reader)  # Copy remaining rows of original file.

# Replace original file with updated version.
os.replace(tmp_file.name, filepath)
print('CSV file updated')
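For comparison, a minimal sketch of the first approach (read everything into memory, then overwrite) might look like this, assuming the file has at least five rows:
import csv

# Read the whole file into a list of rows.
with open('explanation.csv', 'r', newline='') as f:
    rows = list(csv.reader(f))

# Replace the value in the first column of the first 5 rows.
for i in range(5):
    rows[i][0] = i + 1

# Overwrite the original file with the updated rows.
with open('explanation.csv', 'w', newline='') as f:
    csv.writer(f).writerows(rows)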
You could read in the entire file, append your rows in memory, and then write the entire file:
import csv

def append(fname, data):
    # Read the existing rows, then add the new ones in memory.
    with open(fname, newline='') as f:
        reader = csv.reader(f)
        data = list(reader) + list(data)
    # Rewrite the whole file with the combined rows.
    with open(fname, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerows(data)
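For example, a hypothetical call appending two rows to the file from the question:
append('explanation.csv', [['1', 'first'], ['2', 'second']])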
I have a large number of CSV files/dataframes that are too large to store together in memory. However, I noticed that the sets of columns differ between these dataframes. My columns are permutations of "ACGT" (DNA sequences). I followed the instructions from this question on how to write multiple CSVs with different columns, but I get the following error: AttributeError: 'str' object has no attribute 'keys'. I found this question to address the error, but I am unsure where to edit the code to make the 'line' object a dictionary. I am also worried that my CSV files, which have an index column without a header value, may be messing up my code, or that the format of my fieldnames (str derived from permutations) may be an issue. If there is a way to concat multiple CSV files with different columns in another language, I am amenable to that, but I have run into issues with this question as well.
import glob
import csv
import os

mydir = "test_csv/"
file_list = glob.glob(mydir + "/*.csv")  # Include slash or it will search in the wrong directory!!
file_list

import itertools

fieldnames = []
for p in itertools.product('ACGT', repeat=8):
    fieldnames.append("".join(p))

for filename in file_list:
    with open(filename, "r", newline="") as f_in:
        reader = csv.reader(f_in)
        headers = next(reader)

with open("Outcombined.csv", "w", newline="") as f_out:
    writer = csv.DictWriter(f_out, fieldnames=fieldnames)
    for filename in file_list:
        with open(filename, "r", newline="") as f_in:
            reader = csv.DictReader(f_in)
            for line in headers:
                writer.writerow(line)
You only need to write the header once, so do that before your file_list loop:
with open('Outcombined.csv', 'w', newline='') as f_out:
    writer = csv.DictWriter(f_out, fieldnames=fieldnames)
    writer.writeheader()  # write header based on `fieldnames`
    for filename in file_list:
        with open(filename, 'r', newline='') as f_in:
            reader = csv.DictReader(f_in)
            for line in reader:
                writer.writerow(line)
The DictWriter will take care of placing the values under the correct headers.
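As a small, self-contained illustration of that behaviour (the field names here are made up): a column missing from a given file is simply left empty, because DictWriter fills absent keys with its restval default of ''.
import csv
import io

fieldnames = ['AAAA', 'CCCC', 'GGGG']
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()

# A row from a file that only has the 'AAAA' and 'GGGG' columns.
writer.writerow({'AAAA': '3', 'GGGG': '7'})

print(buf.getvalue())
# AAAA,CCCC,GGGG
# 3,,7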
I am trying to combine multiple rows in a CSV file together. I could easily do it in Excel, but I want to do this for hundreds of files, so I need it as code. I have tried to store the rows in arrays but it doesn't seem to work. I am using Python.
So let's say I have a CSV file like this:
1,2,3
4,5,6
7,8,9
All I want is to have a CSV file like this:
1,2,3,4,5,6,7,8,9
The code I have tried is this:
fin = open("C:\\1.csv", 'r+')
fout = open("C:\\2.csv", 'w')

for line in fin.xreadlines():
    new = line.replace(',', ' ', 1)
    fout.write(new)

fin.close()
fout.close()
Could you please help?
You should be using the csv module for this as splitting CSV manually on commas is very error-prone (single columns can contain strings with commas, but you would incorrectly end up splitting this into multiple columns). The CSV module uses lists of values to represent single rows.
import csv

def return_contents(file_name):
    with open(file_name) as infile:
        reader = csv.reader(infile)
        return list(reader)

data1 = return_contents('csv1.csv')
data2 = return_contents('csv2.csv')

print(data1)
print(data2)

combined = []
for row in data1:
    combined.extend(row)
for row in data2:
    combined.extend(row)

with open('csv_out.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    writer.writerow(combined)
That code gives you the basis of the approach but it would be ugly to extend this for hundreds of files. Instead, you probably want os.listdir to pull all the files in a single directory, one by one, and add them to your output. This is the reason that I packed the reading code into the return_contents function; we can repeat the same process millions of times on different files with only one set of code to do the actual reading. Something like this:
import csv
import os

def return_contents(file_name):
    with open(file_name) as infile:
        reader = csv.reader(infile)
        return list(reader)

all_files = os.listdir('my_csvs')
combined_output = []

for file in all_files:
    data = return_contents('my_csvs/{}'.format(file))
    for row in data:
        combined_output.extend(row)

with open('csv_out.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    writer.writerow(combined_output)
If you are specifically dealing with the CSV file format, I recommend you use the csv package for the file operations. If you also use a with...as statement, you don't need to worry about closing the file, etc. You just need to define the PATH and the program will iterate over all .csv files.
Here is what you can do:
import csv
import os

PATH = "your folder path"

def order_list():
    data_list = []
    for filename in os.listdir(PATH):
        if filename.endswith(".csv"):
            # Open the current file, not a hard-coded name.
            with open(os.path.join(PATH, filename)) as csvfile:
                read_csv = csv.reader(csvfile, delimiter=',', quoting=csv.QUOTE_NONNUMERIC)
                for row in read_csv:
                    data_list.extend(row)
    print(data_list)

if __name__ == '__main__':
    order_list()
Store your data in a pandas DataFrame:
import pandas as pd

df = pd.read_csv('file.csv')
Store the modified dataframe into a new one:
df_2 = df.groupby('Column_Name').agg(lambda x: ' '.join(x)).reset_index()  # use the name of your column
Write the df to a new CSV:
df_2.to_csv("file_modified.csv")
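As a rough illustration of what that groupby/join does, with made-up data (the column names are placeholders):
import pandas as pd

df = pd.DataFrame({'Column_Name': ['a', 'a', 'b'],
                   'Value': ['1', '2', '3']})
df_2 = df.groupby('Column_Name').agg(lambda x: ' '.join(x)).reset_index()
print(df_2)
#   Column_Name Value
# 0           a   1 2
# 1           b     3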
You could also do it like this:
fIn = open("test.csv", "r")
fOut = open("output.csv", "w")
fOut.write(",".join([line for line in fIn]).replace("\n",""))
fIn.close()
fOut.close()
If you now want to run it on multiple files, you can run it as a script with arguments:
import sys
fIn = open(sys.argv[1], "r")
fOut = open(sys.argv[2], "w")
fOut.write(",".join([line for line in fIn]).replace("\n",""))
fIn.close()
fOut.close()
Now, assuming you use some Linux system and the script is called csvOnliner.py, you could call it with:
for i in *.csv; do python csvOnliner.py $i changed_$i; done
On Windows you could do it like this:
FOR %i IN (*.csv) DO csvOnliner.py %i changed_%i
I want to read in a csv file, sort it, then rewrite a new file. Any help?
You should probably take a look at the python documentation of the csv module:
https://docs.python.org/3.6/library/csv.html
You could also use pandas, but that might be overkill if you're new to python.
To give you some initial code to play with:
# file test.csv
2,a,x
0,b,y
1,c,z
Code:
import csv

csv_lines = []

# read csv
with open('test.csv', newline='') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        csv_lines.append(row)

# sort by first column
csv_lines_sorted = sorted(csv_lines, key=lambda x: x[0])

# write csv
with open('test_sorted.csv', 'w', newline='') as csvfile:
    writer = csv.writer(csvfile)
    for row in csv_lines_sorted:
        writer.writerow(row)
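If you do reach for pandas as mentioned above, a hedged one-liner version (assuming test.csv has no header row) could be:
import pandas as pd

df = pd.read_csv('test.csv', header=None)  # no header row in the file
df.sort_values(by=0).to_csv('test_sorted.csv', header=False, index=False)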
So I have a text file that looks like this:
1,989785345,"something 1",,234.34,254.123
2,234823423,"something 2",,224.4,254.123
3,732847233,"something 3",,266.2,254.123
4,876234234,"something 4",,34.4,254.123
...
I'm running this code right here:
file = open("file.txt", 'r')
readFile = file.readline()

lineID = readFile.split(",")
print(lineID[1])
This lets me split the content of my text file on "," but what I want is to separate it into columns, because I have a massive number of IDs and other values in each line. How would I go about splitting the text file into columns and accessing the values of each column one by one?
You have a CSV file; use the csv module to read it:
import csv

with open('file.txt', 'r', newline='') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        print(row)  # each row is a list of that line's column values
This still gives you the data by row, but with the zip() function you can transpose it to columns instead:
import csv

with open('file.txt', 'r', newline='') as csvfile:
    reader = csv.reader(csvfile)
    for column in zip(*reader):
        print(column)  # each column is a tuple of that column's values
Do be careful with the latter; the whole file will be read into memory in one go, and a large CSV file could eat up all your available memory in the process.
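If you only need a single column and want to stay memory-friendly, a minimal sketch that streams the rows instead (the column index 1 is just illustrative):
import csv

ids = []
with open('file.txt', 'r', newline='') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        ids.append(row[1])  # keep only the second column of each row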