Getting empty CSV in python

I have written a piece of code that compares data from two CSVs and writes the final output to a new CSV. The problem is that, except for the header, nothing else is being written to the CSV. Below is my code:
import csv

data_3B = open('3B_processed.csv', 'r')
reader_3B = csv.DictReader(data_3B)
data_2A = open('2A_processed.csv', 'r')
reader_2A = csv.DictReader(data_2A)

l_3B_2A = [["taxable_entity_id", "return_period", "3B", "2A"]]

for row_3B in reader_3B:
    for row_2A in reader_2A:
        if row_3B["taxable_entity_id"] == row_2A["taxable_entity_id"] and row_3B["return_period"] == row_2A["return_period"]:
            l_3B_2A.append([row_3B["taxable_entity_id"], row_3B["return_period"], row_3B["total"], row_2A["total"]])

with open("3Bvs2A_new.csv", "w") as csv_file:
    writer = csv.writer(csv_file)
    writer.writerows(l_3B_2A)
csv_file.close()
How do I solve this?
Edit:
2A_processed.csv sample:
taxable_entity_id,return_period,total
2d9cc638-5ed0-410f-9a76-422e32f34779,072019,0
2d9cc638-5ed0-410f-9a76-422e32f34779,062019,0
2d9cc638-5ed0-410f-9a76-422e32f34779,082019,0
e5091f99-e725-44bc-b018-0843953a8771,082019,0
e5091f99-e725-44bc-b018-0843953a8771,052019,41711.5
920da7ba-19c7-45ce-ba59-3aa19a6cb7f0,032019,2862.94
410ecd0f-ea0f-4a36-8fa6-9488ba3c095b,082018,48253.9
3B_processed.csv sample:
taxable_entity_id,return_period,total
1e5ccfbc-a03e-429e-b79a-68041b69dfb0,072017,0.0
1e5ccfbc-a03e-429e-b79a-68041b69dfb0,082017,0.0
1e5ccfbc-a03e-429e-b79a-68041b69dfb0,092017,0.0
f7d52d1f-00a5-440d-9e76-cb7fbf1afde3,122017,0.0
1b9afebb-495d-4516-96bd-1e21138268b7,072017,146500.0
1b9afebb-495d-4516-96bd-1e21138268b7,082017,251710.0

The csv.DictReader objects in your code can only read through the file once, because they are reading from file objects (created with open). Therefore, the second and subsequent times through the outer loop, the inner loop does not run, because there are no more row_2A values in reader_2A - the reader is at the end of the file after the first time.
The simplest fix is to read each file into a list first. We can make a helper function to handle this, and also ensure the files are closed properly:
def lines_of_csv(filename):
    with open(filename) as source:
        return list(csv.DictReader(source))

reader_3B = lines_of_csv('3B_processed.csv')
reader_2A = lines_of_csv('2A_processed.csv')

I put your code into a file test.py and created test files to simulate your csvs.
$ python3 ./test.py
$ cat ./3Bvs2A_new.csv
taxable_entity_id,return_period,3B,2A
1,2,3,2
$ cat ./3B_processed.csv
total,taxable_entity_id,return_period,3B,2A
3,1,2,3,4
3,4,3,2,1
$ cat ./2A_processed.csv
taxable_entity_id,return_period,2A,3B,total
1,2,3,4,2
4,3,2,1,2
So as you can see, the order of the columns doesn't matter, since they are accessed by name through the DictReader, and if the first row is a match your code works. But there are no rows left in the second CSV file after processing the first row from the first file. I suggest making a dictionary keyed by (taxable_entity_id, return_period) tuples: process the first CSV file by adding the totals into the dict, then run through the second one and look them up.
row_lookup = {}
for row in first_csv:
    row_lookup[(row['taxable_entity_id'], row['return_period'])] = row['total']

for row in second_csv:
    if (row['taxable_entity_id'], row['return_period']) in row_lookup:
        new_row = [row['taxable_entity_id'], row['return_period'], row['total'], row_lookup[(row['taxable_entity_id'], row['return_period'])]]
Of course that only works if pairs of taxable_entity_ids and return_periods are always unique... Hard to say exactly what you should do without knowing the exact nature of your task and full format of your csvs.
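Putting those pieces together, a complete sketch (assuming the column names from your samples) might look like this:

import csv

with open('2A_processed.csv', newline='') as f_2A:
    # Index the 2A totals by (taxable_entity_id, return_period) for O(1) lookups.
    lookup_2A = {(row['taxable_entity_id'], row['return_period']): row['total']
                 for row in csv.DictReader(f_2A)}

output = [["taxable_entity_id", "return_period", "3B", "2A"]]
with open('3B_processed.csv', newline='') as f_3B:
    for row in csv.DictReader(f_3B):
        key = (row['taxable_entity_id'], row['return_period'])
        if key in lookup_2A:
            output.append([row['taxable_entity_id'], row['return_period'], row['total'], lookup_2A[key]])

with open("3Bvs2A_new.csv", "w", newline='') as csv_file:
    csv.writer(csv_file).writerows(output)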

You can do this with pandas if the data frames are equal-sized, like this:
import pandas as pd

reader_3B = pd.read_csv('3B_processed.csv')
reader_2A = pd.read_csv('2A_processed.csv')
l_3B_2A = reader_3B[(reader_3B["taxable_entity_id"] == reader_2A["taxable_entity_id"]) & (reader_3B["return_period"] == reader_2A["return_period"])]
l_3B_2A.to_csv('3Bvs2A_new.csv')
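If the frames are not equal-sized, a more general approach is to merge on the two key columns instead (a sketch assuming the column names from the samples; dtype=str keeps return_period values like 072019 intact):

import pandas as pd

df_3B = pd.read_csv('3B_processed.csv', dtype=str)
df_2A = pd.read_csv('2A_processed.csv', dtype=str)

# Inner join keeps only the (taxable_entity_id, return_period) pairs present in both files.
merged = df_3B.merge(df_2A, on=["taxable_entity_id", "return_period"], suffixes=("_3B", "_2A"))
merged = merged.rename(columns={"total_3B": "3B", "total_2A": "2A"})
merged.to_csv('3Bvs2A_new.csv', index=False)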

Related

Converting JSON to CSV with part of JSON value as row headers

I have just started to learn Python and I have the task of converting a JSON file to a CSV file with a semicolon as the delimiter and with three constraints.
My JSON is:
{"_id": "5cfffc2dd866fc32fcfe9fcc",
"tuple5": ["system1/folder", "system3/folder"],
"tuple4": ["system1/folder/text3.txt", "system2/folder/text3.txt"],
"tuple3": ["system2/folder/text2.txt"],
"tuple2": ["system2/folder"],
"tuple1": ["system1/folder/text1.txt", "system2/folder/text1.txt"],
"tupleSize": 3}
The output CSV should be in a form:
system1 ; system2 ; system3
system1/folder ; ~ ; system3/folder
system1/folder/text3.txt ; system2/folder/text3.txt ; ~
~ ; system2/folder/text2.txt ; ~
~ ; system2/folder ; ~
system1/folder/text1.txt ; system2/folder/text1.txt ; ~
So the three constraints are: the tupleSize indicates the number of rows; the first part of each array element, i.e. system1, system2 and system3, gives the row headers; and only the elements belonging to a particular system have values in the CSV file (the rest is ~).
I found a few posts regarding the conversion in Python, like this and this. None of them had constraints in any way related to these, and I am unable to figure out how to approach this.
Can someone help?
EDIT: I should mention that the array elements are dynamic and thus the row headers may vary in the CSV file.
What you want to do is fairly substantial, so if it's just a Python learning exercise, I suggest you begin with more elementary tasks.
I also think you've got what most folks call rows and columns reversed — so be warned that everything below, including the code, is using them in the opposite sense to the way you used them in your question.
Anyway, the code below first preprocesses the data to determine what the columns or fieldnames of the CSV file are going to be and to make sure there are the right number of them as specified by the 'tupleSize' key.
Assuming that constraint is met, it then iterates through the data a second time and extracts the column/field values from each key's value, putting them into a dictionary whose contents represent a row to be written to the output file, and then writes them all out when finished.
Updated
Modified to remove all keys that start with "_id" in the JSON object dictionary.
import csv
import json
import re

SEP = '/'  # Value sub-component separator.

id_regex = re.compile(r"_id\d*")

json_string = '''
{"_id1": "5cfffc2dd866fc32fcfe9fc1",
 "_id2": "5cfffc2dd866fc32fcfe9fc2",
 "_id3": "5cfffc2dd866fc32fcfe9fc3",
 "tuple5": ["system1/folder", "system3/folder"],
 "tuple4": ["system1/folder/text3.txt", "system2/folder/text3.txt"],
 "tuple3": ["system2/folder/text2.txt"],
 "tuple2": ["system2/folder"],
 "tuple1": ["system1/folder/text1.txt", "system2/folder/text1.txt"],
 "tupleSize": 3}
'''

data = json.loads(json_string)  # Convert JSON string into a dictionary.

# Remove non-path items from dictionary.
tupleSize = data.pop('tupleSize')
_ids = {key: data.pop(key)
            for key in tuple(data.keys()) if id_regex.search(key)}
#print(f'_ids: {_ids}')

max_columns = int(tupleSize)  # Use to check a constraint.

# Determine how many columns are present and what they are.
columns = set()
for key in data:
    paths = data[key]
    if not paths:
        raise RuntimeError('key with no paths')
    for path in paths:
        comps = path.split(SEP)
        if len(comps) < 2:
            raise RuntimeError('component with no subcomponents')
        columns.add(comps[0])

if len(columns) > max_columns:
    raise RuntimeError('too many columns - conversion aborted')

# Create CSV file.
with open('converted_json.csv', 'w', newline='') as file:
    writer = csv.DictWriter(file, delimiter=';', restval='~',
                            fieldnames=sorted(columns))
    writer.writeheader()
    for key in data:
        row = {}
        for path in data[key]:
            column, *_ = path.split(SEP, maxsplit=1)
            row[column] = path
        writer.writerow(row)

print('Conversion complete')
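For reference, running this on the sample JSON above should write a converted_json.csv along these lines (rows follow the key order in the JSON, columns are sorted):
system1;system2;system3
system1/folder;~;system3/folder
system1/folder/text3.txt;system2/folder/text3.txt;~
~;system2/folder/text2.txt;~
~;system2/folder;~
system1/folder/text1.txt;system2/folder/text1.txt;~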

How to parse very big files in python?

I have a very big TSV file: 1.5 GB. I want to parse this file. I'm using the following function:
def readEvalFileAsDictInverse(evalFile):
    eval = open(evalFile, "r")
    evalIDs = {}
    for row in eval:
        ids = row.split("\t")
        if ids[0] not in evalIDs.keys():
            evalIDs[ids[0]] = []
        evalIDs[ids[0]].append(ids[1])
    eval.close()
    return evalIDs
It has been running for more than 10 hours and it is still working. I don't know how to speed this step up, or whether there is another method to parse such a file.
several issues here:
testing for keys with if ids[0] not in evalIDs.keys() takes forever in Python 2, because keys() is a list. .keys() is rarely useful anyway. A better way already is if ids[0] not in evalIDs, but...
why not use a collections.defaultdict instead?
why not use the csv module?
shadowing the eval built-in (well, not really an issue, seeing how dangerous eval is)
my proposal:
import csv, collections

def readEvalFileAsDictInverse(evalFile):
    with open(evalFile, "r") as handle:
        evalIDs = collections.defaultdict(list)
        cr = csv.reader(handle, delimiter='\t')
        for ids in cr:
            evalIDs[ids[0]].append(ids[1])
        return evalIDs
the magic evalIDs[ids[0]].append(ids[1]) creates the list if it doesn't already exist. It's also portable and very fast whatever the Python version, and saves an if.
I don't think it could be faster with default libraries, but a pandas solution probably would.
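If you want to try pandas, a minimal sketch (assuming every row has the same number of tab-separated fields and no header) could be:

import pandas as pd

def readEvalFileAsDictInverse(evalFile):
    # header=None: the file has no header row; dtype=str keeps IDs as plain strings.
    df = pd.read_csv(evalFile, sep='\t', header=None, dtype=str)
    # Group column 1 by column 0 and turn each group into a list per ID.
    return df.groupby(0)[1].apply(list).to_dict()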
Some suggestions:
Use a defaultdict(list) instead of creating inner lists yourself or using dict.setdefault().
dict.setdefault() will create the default value every time, and that's a time burner - defaultdict(list) does not; it is optimized:
from collections import defaultdict

def readEvalFileAsDictInverse(evalFile):
    eval = open(evalFile, "r")
    evalIDs = defaultdict(list)
    for row in eval:
        ids = row.split("\t")
        evalIDs[ids[0]].append(ids[1])
    eval.close()
    return evalIDs
If your keys are valid file names, you might want to investigate awk for much more performance than doing this in Python.
Something along the lines of
awk -F $'\t' '{print > $1}' file1
will create your split files much faster, and you can simply use the latter part of the following code to read from each file (assuming your keys are valid filenames) to construct your lists. (Attribution: here) - You would need to grab your created files with os.walk or similar means. Each line inside the files will still be tab-separated and contain the ID in front.
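If you go the awk route, reading the split files back could look something like this (a sketch assuming awk was run inside a dedicated directory, here called split_out, so it contains nothing but the per-key files):

import os

data = {}
for dirpath, dirnames, filenames in os.walk('split_out'):
    for fn in filenames:
        # awk's `print > $1` names each output file after the key in column 1.
        with open(os.path.join(dirpath, fn)) as handle:
            # Each line is still tab-separated with the key in front; keep column 2.
            data[fn] = [line.split('\t')[1] for line in handle if line.strip()]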
If your keys are not filenames in their own right, consider storing your different lines into different files and only keep a dictionary of key,filename around.
After splitting the data, load the files as lists again:
Create testfile:
with open("file.txt", "w") as w:
    w.write("""
1\ttata\ti
2\tyipp\ti
3\turks\ti
1\tTTtata\ti
2\tYYyipp\ti
3\tUUurks\ti
1\ttttttttata\ti
2\tyyyyyyyipp\ti
3\tuuuuuuurks\ti
""")
Code:
# f.e. https://stackoverflow.com/questions/295135/turn-a-string-into-a-valid-filename
def make_filename(k):
"""In case your keys contain non-filename-characters, make it a valid name"""
return k # assuming k is a valid file name else modify it
evalFile = "file.txt"
files = {}
with open(evalFile, "r") as eval_file:
for line in eval_file:
if not line.strip():
continue
key,value, *rest = line.split("\t") # omit ,*rest if you only have 2 values
fn = files.setdefault(key, make_filename(key))
# this wil open and close files _a lot_ you might want to keep file handles
# instead in your dict - but that depends on the key/data/lines ratio in
# your data - if you have few keys, file handles ought to be better, if
# have many it does not matter
with open(fn,"a") as f:
f.write(value+"\n")
# create your list data from your files:
data = {}
for key,fn in files.items():
with open(fn) as r:
data[key] = [x.strip() for x in r]
print(data)
Output:
# for my data: loaded from files called '1', '2' and '3'
{'1': ['tata', 'TTtata', 'tttttttata'],
'2': ['yipp', 'YYyipp', 'yyyyyyyipp'],
'3': ['urks', 'UUurks', 'uuuuuuurks']}
Change evalIDs to a collections.defaultdict(list). You can avoid the if that checks whether a key is already there.
Consider splitting the file externally using split(1), or even inside Python using a read offset. Then use multiprocessing.Pool to parallelise the loading.
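A rough sketch of that idea (split the file first, then load the chunks in parallel with multiprocessing.Pool; the chunks/part_* pattern is just an assumed naming from split):

import collections
import csv
import glob
import multiprocessing

def load_chunk(filename):
    """Parse one chunk into a dict of id -> list of values."""
    chunk = collections.defaultdict(list)
    with open(filename, newline='') as handle:
        for ids in csv.reader(handle, delimiter='\t'):
            chunk[ids[0]].append(ids[1])
    return chunk

if __name__ == '__main__':
    evalIDs = collections.defaultdict(list)
    with multiprocessing.Pool() as pool:
        for chunk in pool.map(load_chunk, glob.glob('chunks/part_*')):
            # Merge the per-chunk dicts; keys spanning chunk boundaries just extend.
            for key, values in chunk.items():
                evalIDs[key].extend(values)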
Maybe you can make it somewhat faster; change this:
if ids[0] not in evalIDs.keys():
    evalIDs[ids[0]] = []
evalIDs[ids[0]].append(ids[1])
to
evalIDs.setdefault(ids[0], []).append(ids[1])
The first solution searches the evalIDs dictionary three times.

Python: Loop through .csv of urls and save it as another column

New to python; I have read a bunch and watched a lot of videos. I can't get it to work and I'm getting frustrated.
I have a list of links like below:
"KGS ID","Latitude","Longitude","Location","Operator","Lease","API","Elevation","Elev_Ref","Depth_start","Depth_stop","URL"
"1002880800","37.2354869","-100.4607509","T32S R29W, Sec. 27, SW SW NE","Stanolind Oil and Gas Co.","William L. Rickers 1","15-119-00164","2705"," KB","2790","7652","http://www.kgs.ku.edu/WellLogs/32S29W/1043696830.zip"
"1002880821","37.1234622","-100.1158111","T34S R26W, Sec. 2, NW NW NE","SKELLY OIL CO","GRACE MCKINNEY 'A' 1","15-119-00181","2290"," KB","4000","5900","http://www.kgs.ku.edu/WellLogs/34S26W/1043696831.zip"
I'm trying to get python to go to "URL" and save it in a folder named "location" as filename "API.las".
ex) ......"location"/Section/"API".las
C://.../T32S R29W/Sec.27/15-119-00164.las
The file has hundreds of rows and links to download. I wanted to implement a sleep function as well, so as not to bombard the servers.
What are some of the different ways to do this? I've tried pandas and a few other methods... any ideas?
You will have to do something like this
import urllib

for link, file_name in zip(links, file_names):
    u = urllib.urlopen(link)
    udata = u.read()
    f = open(file_name + ".las", "wb")
    f.write(udata)
    f.close()
    u.close()
If the contents of your file are not what you wanted, you might want to look at a scraping library like BeautifulSoup for parsing.
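Building on that idea with Python 3's urllib.request, a fuller sketch that also builds the Location/Section folders and pauses between downloads might look like this (the input file name and base output folder here are hypothetical; the column names come from the sample above):

import csv
import os
import time
import urllib.request

base_dir = 'well_logs'                            # hypothetical output folder
with open('links.csv', newline='') as csvfile:    # hypothetical input file name
    for row in csv.DictReader(csvfile):
        # "T32S R29W, Sec. 27, SW SW NE" -> "T32S R29W" / "Sec. 27"
        township, section = [part.strip() for part in row['Location'].split(',')[:2]]
        target_dir = os.path.join(base_dir, township, section)
        os.makedirs(target_dir, exist_ok=True)
        # Save the download as <API>.las inside the folder built above.
        urllib.request.urlretrieve(row['URL'], os.path.join(target_dir, row['API'] + '.las'))
        time.sleep(2)  # pause between requests so the server isn't hammered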
This might be a little dirty, but it's a first pass at solving the problem. This is all contingent on each value in the CSV being encompassed in double quotes. If this is not true, this solution will need heavy tweaking.
Code:
import os

csv = """
"KGS ID","Latitude","Longitude","Location","Operator","Lease","API","Elevation","Elev_Ref","Depth_start","Depth_stop","URL"
"1002880800","37.2354869","-100.4607509","T32S R29W, Sec. 27, SW SW NE","Stanolind Oil and Gas Co.","William L. Rickers 1","15-119-00164","2705"," KB","2790","7652","http://www.kgs.ku.edu/WellLogs/32S29W/1043696830.zip"
"1002880821","37.1234622","-100.1158111","T34S R26W, Sec. 2, NW NW NE","SKELLY OIL CO","GRACE MCKINNEY 'A' 1","15-119-00181","2290"," KB","4000","5900","http://www.kgs.ku.edu/WellLogs/34S26W/1043696831.zip"
""".strip()  # trim excess space at top and bottom

root_dir = '/tmp/so_test'

lines = csv.split('\n')  # break CSV on newlines
header = lines[0].strip('"').split('","')  # grab first line and consider it the header

lines_d = []  # we're about to perform the core actions, and we're going to store it in this variable
for l in lines[1:]:  # we want all lines except the top line, which is a header
    line_broken = l.strip('"').split('","')  # strip off leading and trailing double-quote
    line_assoc = zip(header, line_broken)  # creates a tuple of tuples out of the line with the header at matching position as key
    line_dict = dict(line_assoc)  # turn this into a dict
    lines_d.append(line_dict)

    section_parts = [s.strip() for s in line_dict['Location'].split(',')]  # break Section value to get pieces we need
    file_out = os.path.join(root_dir, '%s%s%s%sAPI.las' % (section_parts[0], os.path.sep, section_parts[1], os.path.sep))  # format output filename the way I think is requested

    # stuff to show what's actually put in the files
    print file_out, ':'
    print ' ', '"%s"' % ('","'.join(header),)
    print ' ', '"%s"' % ('","'.join(line_dict[h] for h in header))
output:
~/so_test $ python so_test.py
/tmp/so_test/T32S R29W/Sec. 27/API.las :
"KGS ID","Latitude","Longitude","Location","Operator","Lease","API","Elevation","Elev_Ref","Depth_start","Depth_stop","URL"
"1002880800","37.2354869","-100.4607509","T32S R29W, Sec. 27, SW SW NE","Stanolind Oil and Gas Co.","William L. Rickers 1","15-119-00164","2705"," KB","2790","7652","http://www.kgs.ku.edu/WellLogs/32S29W/1043696830.zip"
/tmp/so_test/T34S R26W/Sec. 2/API.las :
"KGS ID","Latitude","Longitude","Location","Operator","Lease","API","Elevation","Elev_Ref","Depth_start","Depth_stop","URL"
"1002880821","37.1234622","-100.1158111","T34S R26W, Sec. 2, NW NW NE","SKELLY OIL CO","GRACE MCKINNEY 'A' 1","15-119-00181","2290"," KB","4000","5900","http://www.kgs.ku.edu/WellLogs/34S26W/1043696831.zip"
~/so_test $
Approach 1 :-
Suppose your file has 1000 rows.
Create a masterlist which has the data stored in this form ->
[row1, row2, row3 and so on]
Once done, loop through this masterlist. You will get a row in string format in every iteration.
Split it to make a list, slice out the last column, i.e. the URL at row[-1],
and append it to an empty list named result_url. Once it has run for all rows, save it to a file; you can easily create a directory using the os module and move your file over there, as in the sketch below.
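A minimal sketch of Approach 1 (using the csv module rather than manual string splitting; the input and output file names here are hypothetical):

import csv
import os

result_url = []
with open('data.csv') as csvfile:          # hypothetical input file
    reader = csv.reader(csvfile)
    next(reader)                           # skip the header row
    masterlist = list(reader)              # [row1, row2, row3, ...]

for row in masterlist:
    result_url.append(row[-1])             # the URL is the last column

os.mkdir('Locations')
with open(os.path.join('Locations', 'urls.txt'), 'w') as out:   # hypothetical output file
    out.write('\n'.join(result_url))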
Approach 2 :-
If the file is too huge, read it line by line in a try block and process your data (using the csv module you can get each row as a list, slice out the URL and write it to the file API.las every time).
Once your program moves past the last line it will move to the except block, where you can just 'pass' or write a print to get notified.
In Approach 2, you are not saving all the data in any data structure; you only store a single row while processing it, so it is faster.
import csv, os

directory_creater = os.mkdir('Locations')
fme = open('./Locations/API.las', 'w+')

with open('data.csv', 'r') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=',')
    print spamreader.next()
    while True:
        try:
            row = spamreader.next()
            get_url = row[-1]
            to_write = get_url + '\n'
            fme.write(to_write)
        except:
            print "Program has run. Check output."
            exit(1)
This code can do all that you mentioned efficiently in less time.

Combining files in python

I am attempting to combine a collection of 600 text files, where each line looks like
Measurement title Measurement #1
ebv-miR-BART1-3p 4.60618701
....
evb-miR-BART1-200 12.8327289
with 250 or so rows in each file. Each file is formatted that way, with the same data headers. What I would like to do is combine the files such that it looks like this
Measurement title Measurement #1 Measurement #2
ebv-miR-BART1-3p 4.60618701 4.110878867
....
evb-miR-BART1-200 12.8327289 6.813287556
I was wondering if there is an easy way in python to strip out the second column of each file, then append it to a master file? I was planning on pulling each line out, then using regular expressions to look for the second column, and appending it to the corresponding line in the master file. Is there something more efficient?
It is a small amount of data for today's desktop computers (around 150,000 measurements), so keeping everything in memory and dumping it to a single file will be easier than any other strategy. If it would not fit in RAM, maybe using SQL would be a nice approach there;
but as it is, you can create a single default dictionary, where each element is a list,
read all your files and collect the measurements into this dictionary, and then dump it to disk:
# create default list dictionary:
>>> from collections import defaultdict
>>> data = defaultdict(list)
# Read your data into it:
>>> from glob import glob
>>> import csv
>>> for filename in glob("my_directory/*csv"):
...     reader = csv.reader(open(filename))
...     # throw away header row:
...     next(reader)
...     for name, value in reader:
...         data[name].append(value)
...
>>> # and record everything down in another file:
...
>>> mydata = open("mydata.csv", "wt")
>>> writer = csv.writer(mydata)
>>> for name, values in sorted(data.items()):
...     writer.writerow([name] + values)
...
>>> mydata.close()
>>>
Use the csv module to read the files in, create a dictionary of the measurement names, and make the values in the dictionary a list of the values from the file.
I don't have comment privileges yet, therefore a separate answer.
jsbueno's answer works really well as long as you're sure that the same measurement IDs occur in every file (order is not important, but the sets should be equal!).
In the following situation:
file1:
measID,meas1
a,1
b,2
file2:
measID,meas1
a,3
b,4
c,5
you would get:
outfile:
measID,meas1,meas2
a,1,3
b,2,4
c,5
instead of the desired:
outfile:
measID,meas1,meas2
a,1,3
b,2,4
c,,5 # measurement c was missing in file1!
I'm using commas instead of spaces as delimiters for better visibility.
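One way to handle that case, following the same csv-module approach as the answer above (a sketch; it pads each row with empty cells so every measurement column lines up with its source file):

from collections import defaultdict
from glob import glob
import csv

data = defaultdict(list)
files = sorted(glob("my_directory/*csv"))

for file_index, filename in enumerate(files):
    with open(filename, newline='') as handle:
        reader = csv.reader(handle)
        next(reader)  # skip the header row
        for name, value in reader:
            row = data[name]
            # Pad with blanks for any earlier files that lacked this measurement ID.
            row.extend([''] * (file_index - len(row)))
            row.append(value)

with open("mydata.csv", "w", newline='') as out:
    writer = csv.writer(out)
    for name, values in sorted(data.items()):
        # Pad on the right as well, so every row has one cell per input file.
        values.extend([''] * (len(files) - len(values)))
        writer.writerow([name] + values)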

How do I handle closing double quotes in CSV column with python?

This is the python script:
f = open('csvdata.csv', 'rb')
fo = open('out6.csv', 'wb')

for line in f:
    bits = line.split(',')
    bits[1] = '"input"'
    fo.write(','.join(bits))

f.close()
fo.close()
I have a CSV file and I'm replacing the content of the 2nd column with the string "input". However, I need to grab some information from that column content first.
The content might look like this:
failurelog_wl","inputfile/source/XXXXXXXX"; "**X_CORD2**"; "Invoice_2M";
"**Y_CORD42**"; "SIZE_ID37""
It has a weird type of data as you can see; in particular, it has two double quotes at the end of the line instead of the single one you would expect.
I need to extract the XCORD and YCORD information, like XCORD = 2 and YCORD = 42, before replacing the column value. I then want to insert an extra column, named X_Y, which represents (2_42).
How can I modify my script to do that?
If I understand your question correctly, you can use a simple regular expression to pull out the numbers you want:
import re

f = open('csvdata.csv', 'rb')
fo = open('out6.csv', 'wb')

for line in f:
    bits = line.split(',')
    x_y_matches = re.match(r'.*X_CORD(\d+).*Y_CORD(\d+).*', bits[1])
    assert x_y_matches is not None, 'Line had unexpected format: {0}'.format(bits[1])
    x_y = '({0}_{1})'.format(x_y_matches.group(1), x_y_matches.group(2))
    bits[1] = '"input"'
    bits.append(x_y)
    fo.write(','.join(bits))

f.close()
fo.close()
Note that this will only work if column 2 always says 'X_CORD' and 'Y_CORD' immediately before the numbers. If it is sometimes a slightly different format, you'll need to adjust the regular expression to allow for that. I added the assert to give a more useful error message if that happens.
You mentioned wanting the column to be named X_Y. Your script appears to assume that there is no header, and my modified version definitely makes this assumption. Again, you'd need to adjust for that if there is a header line.
And, yes, I agree with the other commenters that using the csv module would be cleaner, in general, for reading and writing csv files.
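For reference, a csv-module version of the same idea might look like this (a sketch, assuming no header row and that column 2 always contains the X_CORD/Y_CORD markers; csv.writer takes care of the quoting):

import csv
import re

pattern = re.compile(r'X_CORD(\d+).*?Y_CORD(\d+)', re.DOTALL)

with open('csvdata.csv', newline='') as f, open('out6.csv', 'w', newline='') as fo:
    reader = csv.reader(f)
    writer = csv.writer(fo)
    for row in reader:
        match = pattern.search(row[1])
        assert match is not None, 'Row had unexpected format: {0}'.format(row[1])
        x_y = '({0}_{1})'.format(match.group(1), match.group(2))
        row[1] = 'input'     # csv.writer adds quotes only where needed
        row.append(x_y)
        writer.writerow(row)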
