I have extracted data using Python Selenium from the site below.
https://portfoliomanager.energystar.gov/pm/targetFinder;jsessionid=F6FC40FBDE075BDA3834643F9BD65E37?execution=e1s2
Please have a look at the table "Metrics Comparison for Your Design and/or Target".
I have extracted the table in text format.
Here is a sample of the extracted text:
Metric Design Project Design Target Median Property*
ENERGY STAR score (1-100) Not Available 75 50
Source EUI (kBtu/ft²) 3.1 Not Available 127.9
Site EUI (kBtu/ft²) 1.0 Not Available 40.7
Source Energy Use (kBtu) 314.0 Not Available 12,793.0
Site Energy Use (kBtu) 100.0 Not Available 4,074.2
Energy Cost ($) 2,000.00 Not Available 81,484.00
Total GHG Emissions (Metric Tons CO2e) 0.0 Not Available 0.5
I tried to convert the text into JSON:
import csv
import json

with open('file.txt', 'rb') as csvfile:
    filereader = csv.reader(csvfile, delimiter=' ')
    i = 0
    header = []
    out_data = []
    for row in filereader:
        row = [elem for elem in row if elem]
        if i == 0:
            i += 1
            header = row
        else:
            row[0:4] = [row[0]+" "+row[1]+" "+row[2]+" "+row[3]]
            _dict = {}
            for elem, header_elem in zip(row, header):
                _dict[header_elem] = elem
            out_data.append(_dict)
print json.dumps(out_data)
The JSON output I got was:
[{"Project": "75", "Metric": "ENERGY STAR score (1-100)", "Design": "50"}]
The JSON output should be in the form:
[{"Design Project": "Not Available", "Design Target": "75", "Metric": "ENERGY STAR score (1-100)", "Median Property*": "50"}]
You forgot to create header entries and data for the other JSON keys (like Design Project, Design Target, etc.).
This is a corrected version:
import csv
import json

with open('test.txt', 'r') as csvfile:  # open the file
    filereader = csv.reader(csvfile, delimiter=' ')
    i = 0
    header = []
    out_data = []
    for row in filereader:
        row = [elem for elem in row if elem]
        if i == 0:
            i += 1
            row[1:3] = [row[1]+" "+row[2]]  # Design Project key
            row[2:4] = [row[2]+" "+row[3]]  # Design Target key
            row[3:5] = [row[3]+" "+row[4]]  # Median Property* key
            header = row
        else:
            row[0:4] = [row[0]+" "+row[1]+" "+row[2]+" "+row[3]]  # Metric value
            if len(row) == 5:  # check conditions for better parse
                row[1:3] = [row[1]+" "+row[2]]  # Design Project value
            _dict = {}
            for elem, header_elem in zip(row, header):
                _dict[header_elem] = elem
            out_data.append(_dict)
print json.dumps(out_data)
It works only if the structure of your data is constant and each key/value consists of the same number of words.
You can add additional conditions, like the check I added for rows that still contain five elements:
if len(row) == 5: # check conditions for better parse
row[1:3] = [row[1]+" "+row[2]] # Design Project value
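If the extracted lines always end with exactly three values and the only multi-word value is "Not Available", a parse that works from the right end of each line is less fragile than counting words per column. A minimal sketch under those assumptions (the file name and header labels are taken from the question above):

import json

header = ["Metric", "Design Project", "Design Target", "Median Property*"]
out_data = []
with open('file.txt') as f:
    next(f)  # skip the header line
    for line in f:
        # Protect the only multi-word value so a plain split keeps it together.
        tokens = line.strip().replace("Not Available", "Not_Available").split()
        if not tokens:
            continue  # skip blank lines
        metric = " ".join(tokens[:-3])  # everything before the last three tokens
        values = [t.replace("Not_Available", "Not Available") for t in tokens[-3:]]
        out_data.append(dict(zip(header, [metric] + values)))
print(json.dumps(out_data, indent=2))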
Hey, I'm trying to extract a certain row from a CSV file with content in this form:
POS,Transaction id,Product,Quantity,Customer,Date
1,E100,TV,1,Test Customer,2022-09-19
2,E100,Laptop,3,Test Customer,2022-09-20
3,E200,TV,1,Test Customer,2022-09-21
4,E300,Smartphone,2,Test Customer,2022-09-22
5,E300,Laptop,5,New Customer,2022-09-23
6,E300,TV,1,New Customer,2022-09-23
7,E400,TV,2,ABC,2022-09-24
8,E500,Smartwatch,4,ABC,2022-09-25
The code I wrote is the following:
def csv_upload_view(request):
    print('file is being uploaded')
    if request.method == 'POST':
        csv_file = request.FILES.get('file')
        obj = CSV.objects.create(file_name=csv_file)
        with open(obj.file_name.path, 'r') as f:
            reader = csv.reader(f)
            reader.__next__()
            for row in reader:
                data = "".join(row)
                data = data.split(";")
                #data.pop()
                print(data[0], type(data))
                transaction_id = data[0]
                product = data[1]
                quantity = int(data[2])
                customer = data[3]
                date = parse_date(data[4])
In the console I then get the following output:
Quit the server with CONTROL-C.
[22/Sep/2022 15:16:28] "GET /reports/from-file/ HTTP/1.1" 200 11719
file is being uploaded
1E100TV1Test Customer2022-09-19 <class 'list'>
So I get the correct row, but everything is concatenated. If instead I put a space in " ".join(row), I get the entire row separated by spaces. What I would like to do is access this row with
transaction_id = data[0]
product = data[1]
quantity = int(data[2])
customer = data[3]
date = parse_date(data[4])
but I always get an
IndexError: list index out of range
I also tried with data.replace(" ",";") but this gives me another error and the data type becomes a string instead of a list:
ValueError: invalid literal for int() with base 10: 'E'
Can someone please show me what I'm missing here?
I'm not sure why you are joining and then re-splitting the row. And do you realize your split is using a semicolon?
I would expect something like this:
import csv
from collections import namedtuple

Transaction = namedtuple('Transaction', ['id', 'product', 'qty', 'customer', 'date'])

f_name = 'data.csv'

transactions = []  # to hold the result

with open(f_name, 'r') as src:
    src.readline()  # burn the header row
    reader = csv.reader(src)  # if you want to use csv reader
    for data in reader:
        # print(data)  <-- to see what the csv reader gives you...
        t = Transaction(data[1], data[2], int(data[3]), data[4], data[5])
        transactions.append(t)

for t in transactions:
    print(t)
The above "catches" results with a namedtuple, which is obviously optional. You could put them in lists, etc.
Also csv.reader will do the splitting (by comma) by default. I edited my previous answer.
As far as your question goes... You mention extracting a "certain row" but you gave no indication of how you would find such a row. If you know the row index/number, you could burn lines with readline or just keep a counter while you read. If you are looking for a keyword in the data, just put a conditional statement in either before or after splitting up the line.
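For example, a small sketch of the counter idea, assuming you already know which data row you want (the value 3 here is just an illustrative choice; 'data.csv' is the sample file from above):

import csv

wanted_index = 3  # hypothetical: the 1-based data row number you are after

with open('data.csv', newline='') as src:
    reader = csv.reader(src)
    next(reader)  # burn the header row
    for count, data in enumerate(reader, start=1):
        if count == wanted_index:
            print(data)  # e.g. ['3', 'E200', 'TV', '1', 'Test Customer', '2022-09-21']
            break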
This way you can split the rows (and find the row you want based on some provided value):
import csv

with open('data.csv') as csv_file:
    csv_reader = csv.reader(csv_file, delimiter=',')
    line_count = 0
    for row in csv_reader:
        # Line 0 is the header
        if line_count == 0:
            print(f'Column names are {", ".join(row)}')
            line_count += 1
        else:
            line_count += 1
            # Here you can check whether the row value equals what you're looking for
            # row[0] = POS
            # row[1] = Transaction id
            # row[2] = Product
            # row[3] = Quantity
            # row[4] = Customer
            # row[5] = Date
            if row[2] == "TV":
                # If you want to join all fields into a single string:
                data = ",".join(row)
                # Or assign each field to its own variable:
                transaction_id = row[1]
                product = row[2]
                quantity = int(row[3])
                customer = row[4]
                date = row[5]
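A DictReader variant of the same filter might also be worth considering (a sketch, not part of the answer above): it addresses columns by header name, which makes positional index mix-ups harder.

import csv

with open('data.csv', newline='') as csv_file:
    for row in csv.DictReader(csv_file):
        if row['Product'] == 'TV':
            transaction_id = row['Transaction id']
            quantity = int(row['Quantity'])
            print(transaction_id, quantity, row['Customer'], row['Date'])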
I have a syntax issue while using a dictionary for the first time. I hope you can help me with it. I'm reading election data from a .csv file. I'm trying to put the names and votes for each candidate into a dictionary so the display would look like this:
Election Results
-------------------------
Total Votes: 369711
-------------------------
Charles Casper Stockham: 23.049% (85213)
Diana DeGette: 73.812% (272892)
Raymon Anthony Doane: 3.139% (11606)
Here is my code below. I have already captured the votes for each candidate in a sum variable. I'm just trying to figure out the syntax for putting them into the dictionary so they produce the display above.
# Import the os module
import os
# Module for reading CSV files
import csv

# Store the file path associated with the file
csvpath = os.path.join('Resources', 'election_data.csv')

# Open the file in "read mode" and store contents in the variable
with open(csvpath) as csvfile:
    csvreader = csv.reader(csvfile, delimiter=',')
    print(csvreader)
    csv_header = next(csvreader)

    total_ballots = 0
    sumCharles = sumDiana = sumRaymon = 0

    candidates = dict()
    candidates_list = ["Charles Casper Stockham", "Diana DeGette", "Raymon Anthony Doane"]
    candidates["names"] = candidates_list
    votes_list = [sumCharles, sumDiana, sumRaymon]
    candidates["votes"] = votes_list

    for row in csvreader:
        total_ballots = total_ballots + 1
        if (row[2] == "Charles Casper Stockham"):
            sumCharles = sumCharles + 1
        if (row[2] == "Diana DeGette"):
            sumDiana = sumDiana + 1
        if (row[2] == "Raymon Anthony Doane"):
            sumRaymon = sumRaymon + 1

print(candidates)
print(total_ballots)
Assuming that the element in position 2 is the name of the candidate and that each line in the csv is a single vote:
# Import the os module
import os
# Module for reading CSV files
import csv

# Store the file path associated with the file
csvpath = os.path.join('Resources', 'election_data.csv')

# Open the file in "read mode" and store contents in the variable
with open(csvpath) as csvfile:
    csvreader = csv.reader(csvfile, delimiter=',')
    next(csvreader)  # skip the header row

    candidates = dict()
    total_ballots = 0
    for row in csvreader:
        total_ballots += 1
        # get() supplies 0 the first time a candidate is seen
        candidates[row[2]] = candidates.get(row[2], 0) + 1

print("Election Results")
print("-" * 25)
print("Total Votes:", total_ballots)
print("-" * 25)
for candidate, votes in candidates.items():
    percentage = votes / total_ballots * 100
    print("{}: {:.3f}% ({})".format(candidate, percentage, votes))
I have the following CSV file about family expenses:
Family, Medical, Travel, Education
Smith, 346, 566, 45
Taylor, 56,837,848
I want to be able to calculate the row totals and column totals. For example:
Smith = 346+566+45
Taylor = 56+837+848
Medical = 346+56
Travel = 566+837
Education = 45+848
I have the following so far:
import csv

file = open('Family expenses.csv', newline='')
reader = csv.reader(file)
header = next(reader)
data = [row for row in header]
ndata = []
x = 0
for x in range(0, 3):
    for i in data[x]:
        i.split(',')
        x += 1
        ndata.append(i)
rdata = [int(s) if s.isdecimal() else s for s in ndata]
There's no need for pandas for this; using DictReader makes it easy:
import csv

file = open("Family expenses.csv", newline="")
reader = csv.DictReader(file, skipinitialspace=True)

results = {}
for row in reader:
    results[row["Family"]] = 0  # initialize result for each family name
    for key, value in row.items():
        if key == "Family":
            continue
        if key not in results:  # initialize result for each category
            results[key] = 0
        results[key] += float(value)  # add value for category
        results[row["Family"]] += float(value)  # add value for family name

for key, result in results.items():
    print(key, result)
I used skipinitialspace because there was some whitespace in your CSV data.
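For reference, with the sample file above this should print something like (one total per family and per category, as floats):
Smith 957.0
Medical 402.0
Travel 1403.0
Education 893.0
Taylor 1741.0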
Using a plain list in Python. Here you go:
import csv

file = open('Family expenses.csv', newline='')
reader = csv.reader(file)
header = next(reader)        # read and skip the first row (header)
header.pop(0)                # drop the "Family" label so header holds only category names
num_of_cols = len(header)    # number of category columns
sum_col = [0] * num_of_cols  # a list for the column-wise sums

for row in reader:
    sum_row = 0
    print(row[0])                  # family name
    for i in range(1, len(row)):
        sum_row += int(row[i])
        sum_col[i-1] += int(row[i])
    print(sum_row)                 # row-wise total for this family

print(header)
print(sum_col)
I have a large dataset that looks like the following
party,cp,qualifier,amount
ABC,DEF,GOOGLE_2,100
ABC,DEF,GOOGLE_2,200
GHI,JKL,FACEBOOK_1,500
GHI,JKL,FACEBOOK_1,-600
I would like to output:
ABC,DEF,GOOGLE,300
GHI,JKL,FACEBOOK,-100
Here is my Python code so far:
headers = ["valuation_date","party_group_name","type","party_name","cp_group_name","cp_name","qualifier","amount"]
data = {}
with open(t1file,'rb') as f:
reader = csv.reader(f)
headers = reader.next()
for row in reader:
party = row[headers.index('party')]
cp = row[headers.index('cp')]
qualifier = row[headers.index('qualifier')]
amount = row[headers.index('amount')]
if row[headers.index('type')] == "Equity":
new_qualifier = qualifier.split("_")[0]
if party in data.keys():
if cp in data.keys():
if new_qualifier in data.keys():
data[party][cp][new_qualifier] += float(amount)
else:
data[party][cp][qualifier][amount] = data[party][cp][new_qualifier][amount]
else:
data[cp] = cp
else:
data[party] = party
When I run the above code I get the following error:
data[party][cp][qualifier][amount] = data[party][cp][new_qualifier][amount]
TypeError: string indices must be integers, not str
I'm very rusty with Python, apologies if it's glaringly obvious, but any insights as to what I'm doing wrong?
Thanks!
You can use pandas.drop_duplicates to drop duplicates over multiple columns and combine it with pandas.groupby() and sum to get the desired result:
>>>import pandas as pd
>>>#read file using pandas.read_csv()
>>>df
party cp qualifier amount
0 ABC DEF GOOGLE_2 100
1 ABC DEF GOOGLE_2 200
2 GHI JKL FACEBOOK_1 500
3 GHI JKL FACEBOOK_1 -600
>>>df['Total'] = df.groupby(['party','cp','qualifier'])['amount'].transform('sum')
>>>print(df.drop_duplicates(subset=['party','cp','qualifier'], keep='last'))
party cp qualifier amount Total
1 ABC DEF GOOGLE_2 200 300
3 GHI JKL FACEBOOK_1 -600 -100
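If you also want to strip the trailing "_N" from the qualifier so the output matches the requested format exactly, a variant sketch (the file names here are just placeholders):

import pandas as pd

df = pd.read_csv('t1file.csv')  # placeholder file name
df['qualifier'] = df['qualifier'].str.rsplit('_', n=1).str[0]
out = df.groupby(['party', 'cp', 'qualifier'], as_index=False)['amount'].sum()
out.to_csv('out.csv', index=False, header=False)
print(out)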
Below is an approach using plain file reading and a defaultdict:
from collections import defaultdict

PARTY_IDX = 0
CP_IDX = 1
QUALIFIER_IDX = 2
AMOUNT_IDX = 3

data = defaultdict(int)

with open('del-me.csv') as f:
    lines = [l.strip() for l in f.readlines()]
    for idx, line in enumerate(lines):
        if idx > 0:
            fields = line.split(',')
            party = fields[PARTY_IDX]
            cp = fields[CP_IDX]
            qualifier = fields[QUALIFIER_IDX]
            qualifier = qualifier[:qualifier.find('_')]
            key = ','.join([party, cp, qualifier])
            amount = int(fields[AMOUNT_IDX])
            data[key] += amount

with open('out.csv', 'w') as f:
    for k, v in data.items():
        f.write('{},{}\n'.format(k, v))
del-me.csv
party,cp,qualifier,amount
ABC,DEF,GOOGLE_2,100
ABC,DEF,GOOGLE_2,200
GHI,JKL,FACEBOOK_1,500
GHI,JKL,FACEBOOK_1,-600
out.csv
ABC,DEF,GOOGLE,300
GHI,JKL,FACEBOOK,-100
You already have enough answers, but let me correct your own code to help you derive the answer and understand the original issue:
import csv

headers = ["valuation_date","party_group_name","party_name","cp_group_name","cp_name","qualifier","amount"]

data = {}
with open('test_data.csv', 'rt', encoding='utf-8') as f:
    reader = csv.reader(f)
    headers = next(reader)
    for row in reader:
        party = row[headers.index('party')]
        cp = row[headers.index('cp')]
        qualifier = row[headers.index('qualifier')]
        amount = row[headers.index('amount')]
        if row[headers.index('type')] == "Equity":
            new_qualifier = qualifier.split("_")[0]
            if party in data.keys():
                cp_ = data[party]
                if cp in cp_.keys():
                    qualifier_ = data[party][cp]
                    if new_qualifier in qualifier_.keys():
                        data[party][cp][new_qualifier] += float(amount)
                    else:
                        data[party][cp][qualifier][amount] = {}
                else:
                    data[cp] = {}
            else:
                data[party] = {}
                data[party][cp] = {}
                data[party][cp][qualifier.split("_")[0]] = float(amount)

print(data)
This gives you
{'ABC': {'DEF': {'GOOGLE': 300.0}}, 'GHI': {'JKL': {'FACEBOOK': -100.0}}}
The problem was how you were populating your dictionary and how you were accessing it.
To simplify things, you might use just one key for the dict, composed of the identifying parts of a given line.
You might have to extract values by header name as you already did. The following is based on the specified input; rsplit is used to split each line once at the end so that the party,cp,qualifier combination serves as the key and the amount can be extracted.
def sumUp():
    d = {}
    with open(t1file, 'r') as f:
        for line in f:
            if 'party' in line:
                continue  # skip header
            key, value = line.rsplit(',', 1)  # split once at the end
            d[key] = d[key] + int(value) if key in d else int(value)
    return d
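A possible way to call it and write the result out (a sketch; it assumes sumUp returns its dict d as in the function above, and it strips the numeric suffix so the output matches the format you asked for):

totals = sumUp()
with open('out.csv', 'w') as out:
    for key, amount in totals.items():
        party, cp, qualifier = key.split(',')
        out.write('{},{},{},{}\n'.format(party, cp, qualifier.rsplit('_', 1)[0], amount))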
You can do it like this:
from csv import DictReader, DictWriter

map_dic = dict()

with open('test1.csv', 'r') as fr:
    csv_reader = DictReader(fr, delimiter=',')
    for line in csv_reader:
        key = '{}_{}_{}'.format(line['party'], line['cp'], line['qualifier'])
        if key not in map_dic.keys():
            map_dic[key] = {'party': line['party'], 'cp': line['cp'], 'qualifier': line['qualifier'], 'amount': int(line['amount'])}
        else:
            map_dic[key]['amount'] = map_dic[key]['amount'] + int(line['amount'])

with open('test2.csv', 'w') as csvfile:
    writer = DictWriter(csvfile, fieldnames=['party', 'cp', 'qualifier', 'amount'])
    writer.writeheader()
    for key, data in map_dic.items():
        writer.writerow(data)
I have a question about removing duplicates in Python. I've read a bunch of posts but have not yet been able to solve it. I have the following csv file:
EDIT
Input:
ID, Source, 1.A, 1.B, 1.C, 1.D
1, ESPN, 5,7,,,M
1, NY Times,,10,12,W
1, ESPN, 10,,Q,,M
Output should be:
ID, Source, 1.A, 1.B, 1.C, 1.D, duplicate_flag
1, ESPN, 5,7,,,M, duplicate
1, NY Times,,10,12,W, duplicate
1, ESPN, 10,,Q,,M, duplicate
1, NY Times, 5 (or 10 doesn't matter which one),7, 10, 12, W, not_duplicate
In words: if the ID is the same, take values from the row with source "NY Times"; if the "NY Times" row has a blank value and a duplicate "ESPN" row has a value for that cell, take the value from the "ESPN" row. For output, flag the original lines as duplicates and append a merged line.
To clarify a bit further, since I need to run this script on many different csv files with different column headers, I can't do something like:
def main():
    with open(input_csv, "rb") as infile:
        input_fields = ("ID", "Source", "1.A", "1.B", "1.C", "1.D")
        reader = csv.DictReader(infile, fieldnames=input_fields)
        with open(output_csv, "wb") as outfile:
            output_fields = ("ID", "Source", "1.A", "1.B", "1.C", "1.D", "d_flag")
            writer = csv.DictWriter(outfile, fieldnames=output_fields)
            writer.writerow(dict((h, h) for h in output_fields))
            next(reader)
            first_row = next(reader)
            for next_row in reader:
                #stuff
Because I want the program to run on the first two columns independently of whatever other columns are in the table. In other words, "ID" and "Source" will be in every input file, but the rest of the columns will change depending on the file.
Would greatly appreciate any help you can provide! FYI, "Source" can only be: NY Times, ESPN, or Wall Street Journal and the order of priority for duplicates is: take NY Times if available, otherwise take ESPN, otherwise take Wall Street Journal. This holds for every input file.
The below code reads all of the records into a big dictionary whose keys are their identifiers and whose values are dictionaries mapping source names to entire data rows. Then it iterates through the dictionary and gives you the output you asked for.
import csv

header = None
idfld = None
sourcefld = None

record_table = {}

with open('input.csv', 'rb') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        row = [x.strip() for x in row]
        if header is None:
            header = row
            for i, fld in enumerate(header):
                if fld == 'ID':
                    idfld = i
                elif fld == 'Source':
                    sourcefld = i
            continue
        key = row[idfld]
        sourcename = row[sourcefld]
        if key not in record_table:
            record_table[key] = {sourcename: row, "all_rows": [row]}
        else:
            if sourcename in record_table[key]:
                cur_row = record_table[key][sourcename]
                for i, fld in enumerate(row):
                    if cur_row[i] == '':
                        record_table[key][sourcename][i] = fld
            else:
                record_table[key][sourcename] = row
            record_table[key]["all_rows"].append(row)

print ', '.join(header) + ', duplicate_flag'
for recordid in record_table:
    rowdict = record_table[recordid]
    final_row = [''] * len(header)
    rowcount = len(rowdict["all_rows"])  # number of raw rows seen for this ID
    for sourcetype in ['NY Times', 'ESPN', 'Wall Street Journal']:
        if sourcetype in rowdict:
            row = rowdict[sourcetype]
            for i, fld in enumerate(row):
                if final_row[i] != '':
                    continue
                if fld != '':
                    final_row[i] = fld
    if rowcount > 1:
        for row in rowdict["all_rows"]:
            print ', '.join(row) + ', duplicate'
    print ', '.join(final_row) + ', not_duplicate'