How can I append columns from csv files to one file? [duplicate] - python

This question already has answers here:
How to add a new column to a CSV file?
(11 answers)
Closed 4 years ago.
I'm writing a script in Python. I have a bunch of csv files that each contain one column. Here is what the files might look like:
FirstFile.csv
First
a
b
c
SecondFile.csv
Second
a2
b2
c2
I want some resultant file (let's call it result.csv) to be created that looks like:
First Second
a a2
b b2
c c2
How can I combine all the csv's in a directory in Python, appending the columns side by side, so that I get a result.csv that looks like this (but, of course, with many more columns)?

You can try using Pandas.
import pandas as pd

result = pd.concat([pd.read_csv(f) for f in filenames], axis=1)
result.to_csv("result.csv", index=False)
Create a list of your file names (e.g. filenames; a sketch for building it follows)
Import Pandas
Use the concat function with a list comprehension
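For building the file list, a minimal sketch using glob (the pattern and directory are assumptions, adjust as needed):

import glob

filenames = sorted(glob.glob('*.csv'))  # every csv file in the current directory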

You can use the csv module:
Create 10 files:
filenames = []
for i in range(10):
    filenames.append(f"file_{i}.txt")
    with open(filenames[-1], "w") as f:
        f.write(f"Header{i}\n")
        for row in range(5):
            f.write(f"text_{i}_{row}\n")
Read in all files:
data = []
for f in filenames:  # filled when creating the files; you can use os.walk to fill yours
    with open(f) as r:
        data.append([x.strip() for x in r])

# data is a list of columns; we need a list of rows, so we transpose it:
transpose = zip(*data)

# write the joined file
import csv
with open("joined.txt", "w", newline="") as j:
    w = csv.writer(j)
    w.writerows(transpose)
Check if it is ok:
with open("joined.txt") as j:
print(j.read())
Output:
Header0,Header1,Header2,Header3,Header4,Header5,Header6,Header7,Header8,Header9
text_0_0,text_1_0,text_2_0,text_3_0,text_4_0,text_5_0,text_6_0,text_7_0,text_8_0,text_9_0
text_0_1,text_1_1,text_2_1,text_3_1,text_4_1,text_5_1,text_6_1,text_7_1,text_8_1,text_9_1
text_0_2,text_1_2,text_2_2,text_3_2,text_4_2,text_5_2,text_6_2,text_7_2,text_8_2,text_9_2
text_0_3,text_1_3,text_2_3,text_3_3,text_4_3,text_5_3,text_6_3,text_7_3,text_8_3,text_9_3
text_0_4,text_1_4,text_2_4,text_3_4,text_4_4,text_5_4,text_6_4,text_7_4,text_8_4,text_9_4
data looks like this:
[['Header0', 'text_0_0', 'text_0_1', 'text_0_2', 'text_0_3', 'text_0_4'], # one file's data
['Header1', 'text_1_0', 'text_1_1', 'text_1_2', 'text_1_3', 'text_1_4'],
['Header2', 'text_2_0', 'text_2_1', 'text_2_2', 'text_2_3', 'text_2_4'],
['Header3', 'text_3_0', 'text_3_1', 'text_3_2', 'text_3_3', 'text_3_4'],
['Header4', 'text_4_0', 'text_4_1', 'text_4_2', 'text_4_3', 'text_4_4'],
['Header5', 'text_5_0', 'text_5_1', 'text_5_2', 'text_5_3', 'text_5_4'],
['Header6', 'text_6_0', 'text_6_1', 'text_6_2', 'text_6_3', 'text_6_4'],
['Header7', 'text_7_0', 'text_7_1', 'text_7_2', 'text_7_3', 'text_7_4'],
['Header8', 'text_8_0', 'text_8_1', 'text_8_2', 'text_8_3', 'text_8_4'],
['Header9', 'text_9_0', 'text_9_1', 'text_9_2', 'text_9_3', 'text_9_4']]
Transposed it looks like:
[('Header0', 'Header1', 'Header2', 'Header3', 'Header4', 'Header5', 'Header6', 'Header7', 'Header8', 'Header9'),
('text_0_0', 'text_1_0', 'text_2_0', 'text_3_0', 'text_4_0', 'text_5_0', 'text_6_0', 'text_7_0', 'text_8_0', 'text_9_0'),
('text_0_1', 'text_1_1', 'text_2_1', 'text_3_1', 'text_4_1', 'text_5_1', 'text_6_1', 'text_7_1', 'text_8_1', 'text_9_1'),
('text_0_2', 'text_1_2', 'text_2_2', 'text_3_2', 'text_4_2', 'text_5_2', 'text_6_2', 'text_7_2', 'text_8_2', 'text_9_2'),
('text_0_3', 'text_1_3', 'text_2_3', 'text_3_3', 'text_4_3', 'text_5_3', 'text_6_3', 'text_7_3', 'text_8_3', 'text_9_3'),
('text_0_4', 'text_1_4', 'text_2_4', 'text_3_4', 'text_4_4', 'text_5_4', 'text_6_4', 'text_7_4', 'text_8_4', 'text_9_4')]

I'm sure there are more pythonic ways, but this will work (so long as all files have an identical number of lines).
input_files = ['FirstFile.csv', 'SecondFile.csv']
csv_separator = '\t'

data = []
for file in input_files:
    partial_data = []
    with open(file, 'r') as f:
        for line in f:
            partial_data.append(line.strip('\n'))
    data.append(partial_data)

with open('output.csv', 'w') as output:
    for item in range(len(data[0])):
        line = []
        for part in range(len(data)):
            line.append(data[part][item])
        output.write(csv_separator.join(line) + '\n')
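If the files can have different numbers of lines, a variant of the write step using itertools.zip_longest (a sketch; it pads the shorter columns with empty strings) could be:

import itertools

with open('output.csv', 'w') as output:
    # zip_longest transposes the columns, padding short ones with ''
    for line in itertools.zip_longest(*data, fillvalue=''):
        output.write(csv_separator.join(line) + '\n')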

If you're looking for a pure Python solution, it's probably best to use csv.DictReader and csv.DictWriter so you have more control over how the data is formatted. Also, everything is 'generated' on the fly, so it will be more memory efficient with very large files.
import csv

with open('csv1.csv') as csv1, open('csv2.csv') as csv2:
    r1 = csv.DictReader(csv1)
    r2 = csv.DictReader(csv2)
    with open('csv3.csv', 'w') as csv3:
        writer = csv.DictWriter(csv3,
                                fieldnames=["First", "Second"],
                                lineterminator='\n')
        writer.writeheader()
        writer.writerows({**x, **y} for x, y in zip(r1, r2))
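The fieldnames here are hardcoded to the two example columns. For a whole directory of one-column files, a sketch that discovers the headers dynamically might look like this (the glob pattern and the result.csv name are assumptions):

import csv
import glob

paths = sorted(glob.glob('*.csv'))
files = [open(p, newline='') for p in paths]  # one handle per input file
try:
    readers = [csv.DictReader(f) for f in files]
    fieldnames = [r.fieldnames[0] for r in readers]  # one column per file
    with open('result.csv', 'w', newline='') as out:
        writer = csv.DictWriter(out, fieldnames=fieldnames, lineterminator='\n')
        writer.writeheader()
        for rows in zip(*readers):  # one row from each file at a time
            merged = {}
            for row in rows:
                merged.update(row)
            writer.writerow(merged)
finally:
    for f in files:
        f.close()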

Related

Python: extracting data values from one file with IDs from a second file

I’m new to coding, and trying to extract a subset of data from a large file.
File_1 contains the data in two columns: ID and Values.
File_2 contains a large list of IDs, some of which may be present in File_1 while others will not be present.
If an ID from File_2 is present in File_1, I would like to extract those values and write the ID and value to a new file, but I’m not sure how to do this. Here is an example of the files:
File_1: data.csv
ID Values
HOT224_1_0025m_c100047_1 16
HOT224_1_0025m_c10004_1 3
HOT224_1_0025m_c100061_1 1
HOT224_1_0025m_c10010_2 1
HOT224_1_0025m_c10020_1 1
File_2: ID.xlsx
IDs
HOT224_1_0025m_c100047_1
HOT224_1_0025m_c100061_1
HOT225_1_0025m_c100547_1
HOT225_1_0025m_c100561_1
I tried the following:
import pandas as pd
data_file = pd.read_csv('data.csv', index_col = 0)
ID_file = pd.read_excel('ID.xlsx')
values_from_ID = data_file.loc[['ID_file']]
The following error occurs:
KeyError: "None of [['ID_file']] are in the [index]"
Not sure if I am reading in the excel file correctly.
I also do not know how to write the extracted data to a new file once I get the code to do it.
Thanks for your help.
With pandas:
import pandas as pd
data_file = pd.read_csv('data.csv', index_col=0, delim_whitespace=True)
ID_file = pd.read_excel('ID.xlsx', index_col=0)
res = data_file.loc[ID_file.index].dropna()
res.to_csv('result.csv')
Content of result.csv:
IDs,Values
HOT224_1_0025m_c100047_1,16.0
HOT224_1_0025m_c100061_1,1.0
In steps:
You need to read your csv as whitespace-delimited:
data_file = pd.read_csv('data.csv', index_col=0, delim_whitespace=True)
it looks like this:
>>> data_file
                          Values
ID
HOT224_1_0025m_c100047_1      16
HOT224_1_0025m_c10004_1        3
HOT224_1_0025m_c100061_1       1
HOT224_1_0025m_c10010_2        1
HOT224_1_0025m_c10020_1        1
Now, read your Excel file, using the ids as index:
ID_file = pd.read_excel('ID.xlsx', index_col=0)
and you use its index with loc to get the matching entries from your first dataframe. Drop the missing values with dropna():
res = data_file.loc[ID_file.index].dropna()
Finally, write to the result csv:
res.to_csv('result.csv')
You can do it using a simple dictionary in Python: make a dictionary from File_1, read the IDs from File_2, check each ID against the dictionary, and write only the matching ones to your output file. Something like this could work:
with open('data.csv', 'r') as f:
    lines = f.readlines()

# skip the CSV header
lines = lines[1:]
table = {l.split()[0]: l.split()[1] for l in lines if len(l.strip()) != 0}

with open('id.csv', 'r') as f:
    lines = f.readlines()

# skip the CSV header
lines = lines[1:]
matchedIDs = [(l.strip(), table[l.strip()]) for l in lines if l.strip() in table]
Now you will have your matched IDs and their values in a list of tuples called matchedIDs. You can write them in any format you like in a file.
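For example, a minimal sketch that writes them as CSV (the matched.csv name is just a placeholder):

import csv

with open('matched.csv', 'w', newline='') as out:
    writer = csv.writer(out)
    writer.writerow(['ID', 'Value'])  # header
    writer.writerows(matchedIDs)      # one (id, value) pair per row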
I'm also new to Python programming, so the code I used below might not be the most efficient. The situation I assumed is that we want the IDs that appear in both data.csv and id.csv; some IDs in data.csv may not be in id.csv and vice versa.
import pandas as pd

data = pd.read_csv('data.csv')
id2 = pd.read_csv('id.csv')
data.ID = data['ID']
id2.ID = id2['IDs']

d = []
for row in data.ID:
    d.append(row)

f = []
for row in id2.ID:
    f.append(row)

g = []
for i in d:
    if i in f:
        g.append(i)

data = pd.read_csv('data.csv', index_col='ID')
new_data = data.loc[g, :]
new_data.to_csv('new_data.csv')
This is the code I ended up using. It worked perfectly. Thanks to everyone for their responses.
import pandas as pd
data_file = pd.read_csv('data.csv', index_col=0)
ID_file = pd.read_excel('ID.xlsx', index_col=0)
res = data_file.loc[ID_file.index].dropna()
res.to_csv('result.csv')

Merge 2 csv file with one unique column but different header [duplicate]

This question already has answers here:
Merging two CSV files using Python
(2 answers)
Closed 7 years ago.
I want to merge 2 csv files using some scripting language (like bash script or Python).
1st.csv (this data is from a MySQL query)
member_id,name,email,desc
03141,ej,ej#domain.com,cool
00002,jes,jes#domain.com,good
00002,charmie,charm#domain.com,sweet
2nd.csv (from a MongoDB query)
id,address,create_date
00002,someCity,20150825
00003,newCity,20140102
11111,,20150808
The examples are not the actual data, though I know that some of the member_id values from MySQL and the id values from MongoDB are the same.
(And I would like my output to be something like this:)
desiredoutput.csv
member_id,name,email,desc,address,create_date
03141,ej,ej#domain.com,cool,,
00002,jes,jes#domain.com,good,someCity,20150825
00002,charmie,charm#domain.com,sweet,,
11111,,,,,20150808
Help will be much appreciated. Thanks in advance.
#########################################################################
#!/usr/bin/python
import csv
import itertools as IT

filenames = ['1st.csv', '2nd.csv']
handles = [open(filename, 'rb') for filename in filenames]
readers = [csv.reader(f, delimiter=',') for f in handles]

with open('desiredoutput.csv', 'wb') as h:
    writer = csv.writer(h, delimiter=',', lineterminator='\n')
    for rows in IT.izip_longest(*readers, fillvalue=['']*2):
        combined_row = []
        for row in rows:
            row = row[:1]  # column where I know there are identical data
            if len(row) == 1:
                combined_row.extend(row)
            else:
                combined_row.extend(['']*1)
        writer.writerow(combined_row)

for f in handles:
    f.close()
#########################################################################
I just read this code on this site too and tried to adapt it.
Since you haven't posted an attempt, I'll give you a general answer (using Python) to get you started.
Create a dict, d
Iterate over all the rows of the first file, convert each row into a list and store it in d using member_id as the key and the list as the value.
Iterate over all the rows of the second file, convert each row into a list leaving out the id column and update the list under d[id] with the new list if d[id] exists, otherwise store the new list under d[id].
Finally, iterate over the values in d and print them out comma separated to a file.
Edit
In your attempt, you are trying to use izip_longest to iterate over the rows of both files at the same time. But this would work only if there were an equal number of rows in both files and they were in the same order.
Anyhow, here is one way of doing it.
Note: This is using the Python 3.4+ csv module. For 2.7 it might look a little different.
import csv

d = {}

# member_id can repeat in the first file, so keep a list of rows per id;
# pad each row with two empty fields for address and create_date
with open("file1.csv", newline="") as f:
    for row in csv.reader(f):
        d.setdefault(row[0], []).append(row + [""] * 2)

# fill in the last two columns from the second file; ids that only appear
# there get a stub row (header lines are treated like ordinary rows here)
with open("file2.csv", newline="") as f:
    for row in csv.reader(f):
        rows = d.setdefault(row[0], [[row[0], "", "", "", "", ""]])
        for old_row in rows:
            old_row[4:] = row[1:]

with open("out.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for rows in d.values():
        writer.writerows(rows)
Here is a suggestion using pandas that I've got from this answer and the pandas doc about merging.
import pandas as pd
first = pd.read_csv('1st.csv')
second = pd.read_csv('2nd.csv')
merged = pd.concat([first, second], axis=1)
This will output:
 member_id     name             email   desc     id   address  create_date
      3141       ej     ej#domain.com   cool      2  someCity     20150825
         2      jes    jes#domain.com   good      3   newCity     20140102
         2  charmie  charm#domain.com  sweet  11111       NaN     20150808
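Note that concat with axis=1 only pastes the two frames side by side by row position; it does not align rows on the id. If you need the rows matched on the key columns, a merge sketch (assuming the key columns are named member_id and id as in the samples) could be:

import pandas as pd

first = pd.read_csv('1st.csv', dtype=str)   # dtype=str keeps the leading zeros
second = pd.read_csv('2nd.csv', dtype=str)

# an outer merge keeps ids that appear in only one of the files
merged = first.merge(second, left_on='member_id', right_on='id', how='outer')
merged.to_csv('desiredoutput.csv', index=False)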

Combining Multiple Json Objects as one DataFrame in Python Pandas

I'm not sure what I'm missing here, but I have 2 zip files that contain JSON files, and I'm trying to combine the data I extract from the files into one dataframe, but my loop keeps giving me separate records. I tried pd.concat, but I think my issue is more to do with the way I'm reading the files in the first place. Here is what I have prior to constructing the DataFrame.
data = []
for FileZips in glob.glob('*.zip'):
    with zipfile.ZipFile(FileZips, 'r') as myzip:
        for logfile in myzip.namelist():
            with myzip.open(logfile) as f:
                contents = f.readlines()[-2]
                jfile = json.loads(contents)
                print len(jfile)
returns:
40935
40935
You can use read_json (assuming it's valid json).
I would also break this up into more functions for readability:
import glob
import zipfile
import pandas as pd

def zip_to_df(zip_file):
    with zipfile.ZipFile(zip_file, 'r') as myzip:
        return pd.concat((log_as_df(logfile, myzip)
                          for logfile in myzip.namelist()),
                         ignore_index=True)

def log_as_df(logfile, myzip):
    with myzip.open(logfile, 'r') as f:
        contents = f.readlines()[-2]
        return pd.read_json(contents)

df = pd.concat(map(zip_to_df, glob.glob('*.zip')), ignore_index=True)
Note: This does more concats, but I think it's worth it for readability, you could do just one concat...
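For what it's worth, a sketch of the single-concat variant (reusing log_as_df from above) could flatten everything into one generator:

import glob
import zipfile
import pandas as pd

def iter_log_dfs():
    # yield one DataFrame per logfile across every zip in the directory
    for zip_file in glob.glob('*.zip'):
        with zipfile.ZipFile(zip_file, 'r') as myzip:
            for logfile in myzip.namelist():
                yield log_as_df(logfile, myzip)

df = pd.concat(iter_log_dfs(), ignore_index=True)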
I was able to get what I need with a small adjustment to my indentation!!
dfs = []
for FileZips in glob.glob('*.zip'):
    with zipfile.ZipFile(FileZips, 'r') as myzip:
        for logfile in myzip.namelist():
            with myzip.open(logfile, 'r') as f:
                contents = f.readlines()[-2]
                jfile = json.loads(contents)
                dfs.append(pd.DataFrame(jfile))

df = pd.concat(dfs, ignore_index=True)
print len(df)

Comparing two csv files and getting difference

I have two csv files I need to compare and then spit out the differences:
CSV FORMAT:
Name Produce Number
Adam Apple 5
Tom Orange 4
Adam Orange 11
I need to compare the two csv files and then tell me if there is a difference between Adam's apples on sheet 1 and sheet 2, and do that for all names and produce numbers. Both CSV files will be formatted the same.
Any pointers will be greatly appreciated
I have used csvdiff
$pip install csvdiff
$csvdiff --style=compact col1 a.csv b.csv
Link to package on pypi
I found this link useful
If your CSV files aren't so large that they'll bring your machine to its knees if you load them into memory, then you could try something like:
import csv

csv1 = list(csv.DictReader(open('file1.csv')))
csv2 = list(csv.DictReader(open('file2.csv')))

# dicts aren't hashable, so convert each row to a tuple of sorted items first
set1 = set(tuple(sorted(row.items())) for row in csv1)
set2 = set(tuple(sorted(row.items())) for row in csv2)

print set1 - set2 # in 1, not in 2
print set2 - set1 # in 2, not in 1
print set1 & set2 # in both
For large files, you could load them into a SQLite3 database and use SQL queries to do the same, or sort by relevant keys and then do a match-merge.
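A rough sketch of the SQLite route, assuming comma-separated files with a header row (all table and file names are made up for illustration):

import csv
import sqlite3

conn = sqlite3.connect(':memory:')  # use a file path for very large data
conn.execute('CREATE TABLE t1 (name TEXT, produce TEXT, number INTEGER)')
conn.execute('CREATE TABLE t2 (name TEXT, produce TEXT, number INTEGER)')

for table, path in (('t1', 'file1.csv'), ('t2', 'file2.csv')):
    with open(path, newline='') as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        conn.executemany('INSERT INTO %s VALUES (?, ?, ?)' % table, reader)

# rows present in both files whose numbers differ
query = '''SELECT t1.name, t1.produce, t1.number, t2.number
           FROM t1 JOIN t2 ON t1.name = t2.name AND t1.produce = t2.produce
           WHERE t1.number <> t2.number'''
for row in conn.execute(query):
    print(row)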
One of the best utilities for comparing two different files is diff.
See Python implementation here: Comparing two .txt files using difflib in Python
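For a quick line-oriented diff without leaving the standard library, a difflib sketch could be:

import difflib

with open('file1.csv') as f1, open('file2.csv') as f2:
    diff = difflib.unified_diff(f1.readlines(), f2.readlines(),
                                fromfile='file1.csv', tofile='file2.csv')
    for line in diff:
        print(line, end='')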
import csv

def load_csv_to_dict(fname, get_key, get_data):
    with open(fname, 'rb') as inf:
        incsv = csv.reader(inf)
        incsv.next()  # skip header
        return {get_key(row): get_data(row) for row in incsv}

def main():
    key = lambda r: tuple(r[0:2])
    data = lambda r: int(r[2])
    f1 = load_csv_to_dict('file1.csv', key, data)
    f2 = load_csv_to_dict('file2.csv', key, data)
    f1keys = set(f1.iterkeys())
    f2keys = set(f2.iterkeys())
    print("Keys in file1 but not file2:")
    print(", ".join(str(a)+":"+str(b) for a, b in (f1keys - f2keys)))
    print("Keys in file2 but not file1:")
    print(", ".join(str(a)+":"+str(b) for a, b in (f2keys - f1keys)))
    print("Differing values:")
    for k in (f1keys & f2keys):
        a, b = f1[k], f2[k]
        if a != b:
            print("{}:{} {} <> {}".format(k[0], k[1], a, b))

if __name__ == "__main__":
    main()
If you want to use Python's csv module along with a generator function, you can use nested loops to compare large .csv files. The example below compares each row using a cursory comparison:
import csv

def csv_lazy_get(csvfile):
    with open(csvfile) as f:
        r = csv.reader(f)
        for row in r:
            yield row

def csv_cmp_lazy(csvfile1, csvfile2):
    gen_2 = csv_lazy_get(csvfile2)
    for row_1 in csv_lazy_get(csvfile1):
        row_2 = next(gen_2)
        print("row_1: ", row_1)
        print("row_2: ", row_2)
        if row_2 == row_1:
            print("row_1 is equal to row_2.")
        else:
            print("row_1 is not equal to row_2.")
    gen_2.close()
Here's a start that does not use difflib. It is really just a point to build from, because maybe Adam and apples appear twice on the sheet; can you ensure that is not the case? Should the apples be summed, or is that an error?
import csv

fsock = open('sheet.csv', 'rU')
rdr = csv.reader(fsock)
sheet1 = {}
for row in rdr:
    name, produce, amount = row
    sheet1[(name, produce)] = int(amount)  # always an integer?
fsock.close()

# repeat the above for the second sheet, then compare
You get the idea?
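For instance, assuming the second sheet was loaded into sheet2 the same way, the comparison step might look like:

# report (name, produce) keys whose amounts differ between the sheets
for key in set(sheet1) | set(sheet2):
    a = sheet1.get(key)
    b = sheet2.get(key)
    if a != b:
        print("{} {}: {} <> {}".format(key[0], key[1], a, b))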

Python- Import Multiple Files to a single .csv file

I have 125 data files containing two columns and 21 rows of data and I'd like to import them into a single .csv file (as 125 pairs of columns and only 21 rows).
This is what my data files look like: [image of a two-column data file, not reproduced here]
I am fairly new to python but I have come up with the following code:
import glob

Results = glob.glob('./*.data')
fout = 'c:/Results/res.csv'
fout = open("res.csv", 'w')
for file in Results:
    g = open(file, "r")
    fout.write(g.read())
    g.close()
fout.close()
The problem with the above code is that all the data are copied into only two columns with 125*21 rows.
Any help is very much appreciated!
This should work:
import glob

files = [open(f) for f in glob.glob('./*.data')]  # make a list of open files
fout = open("res.csv", 'w')
for row in range(21):
    for f in files:
        fout.write(f.readline().strip())  # strip removes the trailing newline
        fout.write(',')
    fout.write('\n')
fout.close()
Note that this method will probably fail if you try a large number of files: operating systems limit how many file handles a process may hold open at once, often to a few hundred or a few thousand by default.
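If you do hit that limit, one workaround (a sketch, keeping the same res.csv output) is to read each file completely before opening the next, so only one handle is open at a time:

import glob

# read every file's lines up front so no handles stay open
columns = []
for path in glob.glob('./*.data'):
    with open(path) as f:
        columns.append([line.strip() for line in f])

with open("res.csv", 'w') as fout:
    for row in zip(*columns):  # transpose: one output row per input line
        fout.write(','.join(row) + '\n')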
You may want to try the python CSV module (http://docs.python.org/library/csv.html), which provides very useful methods for reading and writing CSV files. Since you stated that you want only 21 rows with 250 columns of data, I would suggest creating 21 python lists as your rows and then appending data to each row as you loop through your files.
something like:
import csv

rows = []
for i in range(0, 21):
    row = []
    rows.append(row)

# Not sure of the structure of your input files or how they are delimited,
# but for each one, as you have it open and iterate through its rows, you
# would want to append the values in each row to the end of the
# corresponding list contained within the rows list.

# Then, write each row to the new csv:
writer = csv.writer(open('output.csv', 'wb'), delimiter=',')
for row in rows:
    writer.writerow(row)
(Sorry, I cannot add comments, yet.)
[Edited later, the following statement is wrong!!!] "The davesnitty's generating-the-rows loop can be replaced by rows = [[]] * 21." It is wrong because this would create an outer list of 21 elements that all share one single empty inner list, instead of 21 independent lists.
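A quick demonstration of the sharing:

rows = [[]] * 3       # three references to the SAME inner list
rows[0].append('x')
print(rows)           # [['x'], ['x'], ['x']]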
My +1 to using the standard csv module. But the files should always be closed -- especially when you open that many of them. Also, there is a bug: the rows read from the input files are never stored anywhere, so the actual solution is missing. Basically, each row read from a file should be appended to the sublist corresponding to its line number, obtained via enumerate(reader), where reader is csv.reader(fin, ...).
[Added later] Try the following code, fixing the paths for your purpose:
import csv
import glob
import os

datapath = './data'
resultpath = './result'
if not os.path.isdir(resultpath):
    os.makedirs(resultpath)

# Initialize the empty rows. It does not check how many rows are
# in the file.
rows = []

# Read data from the files into the above matrix.
for fname in glob.glob(os.path.join(datapath, '*.data')):
    with open(fname, 'rb') as f:
        reader = csv.reader(f)
        for n, row in enumerate(reader):
            if len(rows) < n + 1:
                rows.append([])       # add another row
            rows[n].extend(row)       # append the elements from the file

# Write the data from memory to the result file.
fname = os.path.join(resultpath, 'result.csv')
with open(fname, 'wb') as f:
    writer = csv.writer(f)
    for row in rows:
        writer.writerow(row)
