I have a bunch of CSV files with the same columns but in a different order. We are trying to upload them with SQL*Plus, but we need the columns in a fixed arrangement.
Example
Required order: A B C D E F
CSV file: A C D E B (sometimes a column is missing from the CSV because it is not available)
Is this achievable with Python? We are currently using Access + macros to do it, but it is too time consuming.
PS. Apologies for my English.
You can use the csv module to read, reorder, and then write your file.
Sample File:
$ cat file.csv
A,B,C,D,E
a1,b1,c1,d1,e1
a2,b2,c2,d2,e2
Code
import csv

with open('file.csv', 'r', newline='') as infile, open('reordered.csv', 'w', newline='') as outfile:
    # output dict needs a list for new column ordering
    fieldnames = ['A', 'C', 'D', 'E', 'B']
    writer = csv.DictWriter(outfile, fieldnames=fieldnames)
    # reorder the header first
    writer.writeheader()
    for row in csv.DictReader(infile):
        # writes the reordered rows to the new file
        writer.writerow(row)
Output:
$ cat reordered.csv
A,C,D,E,B
a1,c1,d1,e1,b1
a2,c2,d2,e2,b2
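Since you mention that a column can be missing from some files, note that csv.DictWriter can fill absent columns via restval and ignore unexpected ones via extrasaction. A minimal sketch along those lines, assuming the full required order A-F (file names are the same placeholders as above):
import csv

required_order = ['A', 'B', 'C', 'D', 'E', 'F']

with open('file.csv', 'r', newline='') as infile, open('reordered.csv', 'w', newline='') as outfile:
    # restval='' writes an empty cell for any required column the CSV lacks;
    # extrasaction='ignore' silently drops columns not in required_order
    writer = csv.DictWriter(outfile, fieldnames=required_order,
                            restval='', extrasaction='ignore')
    writer.writeheader()
    for row in csv.DictReader(infile):
        writer.writerow(row)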
One way to tackle this problem is to use the pandas library, which can easily be installed with pip. Basically, you load the CSV file into a pandas DataFrame, reorder the columns, and save it back to a CSV file. For example, if your sample.csv looks like this:
A,C,B,E,D
a1,c1,b1,e1,d1
a2,c2,b2,e2,d2
Here is a snippet to solve the problem.
import pandas as pd
df = pd.read_csv('/path/to/sample.csv')
df_reorder = df[['A', 'B', 'C', 'D', 'E']] # rearrange column here
df_reorder.to_csv('/path/to/sample_reorder.csv', index=False)
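If some of your files are missing one of the required columns, plain df[['A', 'B', ...]] indexing will raise a KeyError. A hedged alternative is DataFrame.reindex, which keeps the requested order and fills missing columns (the column list here is just the example's A-F):
import pandas as pd

df = pd.read_csv('/path/to/sample.csv')
# reindex keeps the requested column order and creates any missing column
# filled with '' instead of raising a KeyError
df_reorder = df.reindex(columns=['A', 'B', 'C', 'D', 'E', 'F'], fill_value='')
df_reorder.to_csv('/path/to/sample_reorder.csv', index=False)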
csv_in = open("<input-filename>.csv", "r")
csv_out = open("<output-filename>.csv", "w")   # must be a different file than the input

for line in csv_in:
    field_list = line.rstrip('\n').split(',')  # split the line at commas
    output_line = ','.join([field_list[0],     # rejoin with commas, new order
                            field_list[2],
                            field_list[3],
                            field_list[4],
                            field_list[1]])
    csv_out.write(output_line + '\n')

csv_in.close()
csv_out.close()
You can use something similar to this to change the order; adjust the split/join delimiter (',' here) if your files use ';' instead.
Because you said you need to process multiple .csv files, you can use the glob module to get a list of them (a fuller sketch combining this with the reordering follows below):
import glob

for file_name in glob.glob('<Insert-your-file-filter-here>*.csv'):
    # Do the work here
    ...
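Here is a minimal sketch of that loop combined with the csv.DictWriter approach above; the '_reordered.csv' output naming and the bare '*.csv' pattern are my assumptions, so adjust them to your filenames:
import csv
import glob

required_order = ['A', 'B', 'C', 'D', 'E', 'F']

# Adjust the pattern so it matches only your input files, not the generated output.
for file_name in glob.glob('*.csv'):
    out_name = file_name[:-4] + '_reordered.csv'
    with open(file_name, 'r', newline='') as infile, \
         open(out_name, 'w', newline='') as outfile:
        writer = csv.DictWriter(outfile, fieldnames=required_order,
                                restval='', extrasaction='ignore')
        writer.writeheader()
        for row in csv.DictReader(infile):
            writer.writerow(row)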
The csv module allows you to read CSV files with their values associated with their column names. This in turn allows you to rearrange columns arbitrarily, without having to explicitly permute lists. Given the file foo.csv:
a,b,d,e,f
1,2,3,4,5
21,22,23,24,25
the loop
for row in csv.DictReader(open("foo.csv")):
    print(row["b"], row["a"])
prints:
2 1
22 21
I am trying to add a header to my CSV file.
I am importing data from a .csv file which has two columns of data, each containing float numbers. Example:
11 22
33 44
55 66
Now I want to add a header for both columns like:
ColA ColB
11 22
33 44
55 66
I have tried this:
import csv

with open('mycsvfile.csv', 'a') as f:
    writer = csv.writer(f)
    writer.writerow(('ColA', 'ColB'))
I used 'a' to append, but this added the header row at the bottom of the file instead of at the top. Is there any way I can fix it?
One way is to read all the data in, then overwrite the file with the header and write the data out again. This might not be practical with a large CSV file:
#!python3
import csv

with open('file.csv', newline='') as f:
    r = csv.reader(f)
    data = [line for line in r]

with open('file.csv', 'w', newline='') as f:
    w = csv.writer(f)
    w.writerow(['ColA', 'ColB'])
    w.writerows(data)
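For files too large to read into memory, one hedged alternative is to stream the rows through a temporary file and then swap it into place (the .tmp name and the use of os.replace are my assumptions, not part of the original answer):
import csv
import os

# Write the header plus the existing rows to a temp file, then replace the original.
with open('mycsvfile.csv', newline='') as src, open('mycsvfile.csv.tmp', 'w', newline='') as dst:
    writer = csv.writer(dst)
    writer.writerow(['ColA', 'ColB'])   # new header
    writer.writerows(csv.reader(src))   # copy the existing data rows unchanged

os.replace('mycsvfile.csv.tmp', 'mycsvfile.csv')  # swap the temp file into place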
I think you should use pandas to read the CSV file, insert the column headers/labels, and write out a new CSV file. Assuming your CSV file is comma-delimited, something like this should work:
from pandas import read_csv

df = read_csv('test.csv', header=None)  # header=None: the file has no header row yet
df.columns = ['a', 'b']
df.to_csv('test_2.csv', index=False)
I know the question was asked a long time back. But for others stumbling across this question, here's an alternative to Python.
If you have access to sed (you do if you are working on Linux or Mac; you can also download Ubuntu Bash on Windows 10 and sed will come with it), you can use this one-liner:
sed -i 1i"ColA,ColB" mycsvfile.csv
The -i flag makes sed edit in place, i.e. it overwrites the file with the header added at the top. This is risky, so keep a backup.
If you want to create a new file instead, do this:
sed 1i"ColA,ColB" mycsvfile.csv > newcsvfile.csv
In this case, you don't need the csv module; the fileinput module allows in-place editing:
import fileinput

for line in fileinput.input(files=['mycsvfile.csv'], inplace=True):
    if fileinput.isfirstline():
        print('ColA,ColB')
    print(line, end='')
In the above code, the print() calls write to the file because of the inplace=True parameter (stdout is redirected to the file being edited).
To avoid the first data row being consumed and then replaced by the header, pass header=None when reading (and index=False when writing, so an extra index column is not added):
import pandas as pd

df = pd.read_csv('file.csv', header=None)
df.to_csv('file.csv', header=['col1', 'col2'], index=False)
You can also pass the field names to csv.DictReader as a list; in your case:
import csv

with open('mycsvfile.csv', 'r', newline='') as fd:
    reader = csv.DictReader(fd, fieldnames=["ColA", "ColB"])
    for row in reader:
        print(row)
Note that this only labels the columns while reading; it does not add a header line to the file itself.
I need to reorder the columns in a CSV, but the new column headers need to come from a dictionary.
EXAMPLE:
Sample input csv File:
$ cat file.csv
A,B,C,D,E
a1,b1,c1,d1,e1
a2,b2,c2,d2,e2
Code
import csv

with open('file.csv', 'r') as infile, open('reordered.csv', 'a') as outfile:
    order_of_headers_should_be = ['A', 'C', 'D', 'E', 'B']
    dictionary = {'A': 'X1', 'B': 'Y1', 'C': 'U1', 'D': 'T1', 'E': 'K1'}
    writer = csv.DictWriter(outfile)
    # reorder the header first
    writer.writeheader()
    for row in csv.DictReader(infile):
        # writes the reordered rows to the new file
        writer.writerow(row)
The Output csv file needs to look like this:
$ cat reordered.csv
X1,U1,T1,K1,Y1
a1,c1,d1,e1,b1
a2,c2,d2,e2,b2
I'm trying to work out how to use the dictionary to build the new header and write the rows in that order.
You can do this by translating the keys through the dictionary when you are about to write each row, like so:
for row in csv.DictReader(infile):
    # writes the reordered rows, with renamed keys, to the new file
    writer.writerow({dictionary[i]: row[i] for i in row})
Note the use of a dictionary comprehension.
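For completeness, here is a hedged sketch of how the pieces could fit together, with the writer's fieldnames built from the dictionary so the header comes out as X1,U1,T1,K1,Y1 (this combines the question's code with the comprehension above; it is a sketch, not a verbatim answer from the thread):
import csv

order_of_headers_should_be = ['A', 'C', 'D', 'E', 'B']
dictionary = {'A': 'X1', 'B': 'Y1', 'C': 'U1', 'D': 'T1', 'E': 'K1'}

with open('file.csv', 'r', newline='') as infile, open('reordered.csv', 'w', newline='') as outfile:
    # translate the desired column order into the new header names
    new_fieldnames = [dictionary[h] for h in order_of_headers_should_be]
    writer = csv.DictWriter(outfile, fieldnames=new_fieldnames)
    writer.writeheader()  # writes X1,U1,T1,K1,Y1
    for row in csv.DictReader(infile):
        # rename the keys; DictWriter then emits them in fieldnames order
        writer.writerow({dictionary[k]: v for k, v in row.items()})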
I have a CSV file, and when I read it with the csv library I get output like this:
['exam', 'id_student', 'grade']
['maths', '573834', '7']
['biology', '573834', '8']
['biology', '578833', '4']
['english', '581775', '7']
# goes on...
I need to edit it by creating a 4th column called 'Passed' with two possible values: True or False depending on whether the grade of the row is >= 7 (True) or not (False), and then count how many times each student passed an exam.
If it's not possible to edit the CSV file that way, I would need to just read the CSV file and then create a dictionary of lists with the following output:
dict = {'id_student':[573834, 578833, 581775], 'passed_count': [2,0,1]}
# goes on...
Thanks
Try importing the CSV as a pandas DataFrame:
import pandas as pd
data=pd.read_csv('data.csv')
And then use:
data['passed'] = (data['grade'] >= 7).astype(bool)
And then save dataframe to csv as:
data.to_csv('final.csv',index=False)
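The question also asks for a per-student count of passed exams; with pandas that can be done with a groupby on the same DataFrame (a sketch of one possible approach, not part of the original answer):
import pandas as pd

data = pd.read_csv('data.csv')
data['passed'] = data['grade'] >= 7

# count how many exams each student passed (True sums as 1)
passed_count = data.groupby('id_student')['passed'].sum()
print(passed_count.to_dict())
# with the question's sample data: {573834: 2, 578833: 0, 581775: 1}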
It is totally possible to "edit" CSV.
Assuming you have a file students.csv with the following content:
exam,id_student,grade
maths,573834,7
biology,573834,8
biology,578833,4
english,581775,7
Iterate over input rows, augment the field list of each row with an additional item, and save it back to another CSV:
import csv

with open('students.csv', 'r', newline='') as source, open('result.csv', 'w', newline='') as result:
    csvreader = csv.reader(source)
    csvwriter = csv.writer(result)

    # Deal with the header
    header = next(csvreader)
    header.append('Passed')
    csvwriter.writerow(header)

    # Process data rows
    for row in csvreader:
        row.append(str(int(row[2]) >= 7))
        csvwriter.writerow(row)
Now result.csv has the content you need.
If you need to replace the original content, use os.remove() and os.rename() to do that:
import os
os.remove('students.csv')
os.rename('result.csv', 'students.csv')
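A small aside: os.replace can do the same swap in a single call, overwriting the destination if it already exists (my addition, not part of the original answer):
import os

# move result.csv over students.csv in one step
os.replace('result.csv', 'students.csv')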
As for counting, that can be done independently; you don't need to modify the CSV for it:
import csv
from collections import defaultdict

with open('students.csv', 'r', newline='') as source:
    csvreader = csv.reader(source)
    next(csvreader)  # Skip header

    stats = defaultdict(int)
    for row in csvreader:
        if int(row[2]) >= 7:
            stats[row[1]] += 1

print(stats)
You can fold the counting into the code above and have both pieces in one place. defaultdict (stats) has the same interface as dict if you need to access it that way. Note that students who never passed will not appear in stats at all; the sketch below shows one way to include them with a count of 0.
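If you want every student id to show up, even with a count of 0 (as in the expected output), a hedged variant is to add the bool result on every row so every student gets a key (again my addition, not from the original answer):
import csv
from collections import defaultdict

stats = defaultdict(int)
with open('students.csv', 'r', newline='') as source:
    csvreader = csv.reader(source)
    next(csvreader)  # Skip header
    for row in csvreader:
        stats[row[1]] += int(row[2]) >= 7  # True adds 1, False adds 0 but still creates the key

print(dict(stats))
# with the sample data: {'573834': 2, '578833': 0, '581775': 1}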
I am looking for a way to read just the header row of a large number of large CSV files.
Using Pandas, I have this method available, for each csv file:
>>> df = pd.read_csv(PATH_TO_CSV)
>>> df.columns
I could do this with just the csv module:
>>> reader = csv.DictReader(open(PATH_TO_CSV))
>>> reader.fieldnames
The problem with these is that each CSV file is 500MB+ in size, and it seems a gigantic waste to read each entire file just to pull out the header line.
My end goal of all of this is to pull out unique column names. I can do that once I have a list of column headers that are in each of these files.
How can I extract only the header row of a CSV file, quickly?
Expanding on the answer given by Jeff: it is now possible to use pandas without actually reading any data rows.
In [1]: import pandas as pd
In [2]: import numpy as np
In [3]: pd.DataFrame(np.random.randn(10, 4), columns=list('abcd')).to_csv('test.csv', mode='w')
In [4]: pd.read_csv('test.csv', index_col=0, nrows=0).columns.tolist()
Out[4]: ['a', 'b', 'c', 'd']
pandas can have the advantage that it deals more gracefully with CSV encodings.
I might be a little late to the party, but here's one way to do it using just the Python standard library. When dealing with text data, I prefer Python 3 because of its unicode handling. This is very close to your original suggestion, except that accessing fieldnames only reads the header line rather than the whole file.
import csv

with open(fpath, 'r', newline='') as infile:
    reader = csv.DictReader(infile)
    fieldnames = reader.fieldnames
Hopefully that helps!
One way is to collect the headers into a set; I've used iglob as an example to find the .csv files, so adjust as necessary, e.g.:
import csv
from glob import iglob

unique_headers = set()
for filename in iglob('*.csv'):
    with open(filename, 'r', newline='') as fin:
        csvin = csv.reader(fin)
        unique_headers.update(next(csvin, []))
Here's one way. You get 1 row.
In [9]: DataFrame(np.random.randn(10,4),columns=list('abcd')).to_csv('test.csv',mode='w')
In [10]: read_csv('test.csv',index_col=0,nrows=1)
Out[10]:
a b c d
0 0.365453 0.633631 -1.917368 -1.996505
What about:
pandas.read_csv(PATH_TO_CSV, nrows=1).columns
That'll read the first row only and return the columns found.
You missed the nrows=1 parameter to read_csv:
>>> df= pd.read_csv(PATH_TO_CSV, nrows=1)
>>> df.columns
It depends on what the header will be used for. If you need the headers for comparison purposes only (my case), this approach is simple and very fast: it reads the whole header as one string, and you can then transform the collected strings according to your needs:
import glob
import os

for filename in glob.glob(os.path.join(files_path, "*.csv")):
    with open(filename) as f:
        first_line = f.readline()
It is easy; you can use this:
import pandas as pd

df = pd.read_csv("path.csv", skiprows=0, nrows=2)
df.columns.to_list()
In this case you only read a few rows to get the header.
If you are only interested in the headers and would like to use pandas, the only extra thing you need to pass in, apart from the CSV file name, is nrows=0:
headers = pd.read_csv("test.csv", nrows=0)
import pandas as pd
get_col = list(pd.read_csv("first_test_pipe.csv", sep="|", nrows=1).columns)
print(get_col)
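Since the stated end goal is to collect the unique column names across many large files, here is a short hedged sketch combining nrows=0 with glob (the '*.csv' pattern is an assumption, adjust it to your files):
import glob
import pandas as pd

unique_columns = set()
for path in glob.glob('*.csv'):
    # nrows=0 parses just the header, so even 500MB+ files are cheap to scan
    unique_columns.update(pd.read_csv(path, nrows=0).columns)

print(sorted(unique_columns))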