Python 3.8
I am trying to write a CSV file from a list of dictionaries. I followed the example on the official website. The writer writes the header, but it does not write the rows built from the dictionaries that I loop over in the list.
csv_path = "/home/tmp/file.csv"
with open(csv_path, 'w') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=["RequestTimestamp", "MerchantId", "MerchantName", "ChannelName",
                                                 "MerchantUSN", "BatchNumber", "UniqueId", "TransactionId", "InvoiceId",
                                                 "ReferenceTimestamp", "CustomerName", "AccountHolder", "PaymentType",
                                                 "Currency", "Debit", "Credit", "SettledAmount", "Result", "bankResult", "ReconStatus"])
    writer.writeheader()
    for data in recons:
        result_dict = utils.data_formatter_handler(data)
        writer.writerow(data)
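It looks like the loop formats each record into result_dict but then passes the raw data to writerow, so its keys may not match the fieldnames above. A minimal sketch of the presumably intended loop (assuming utils.data_formatter_handler returns a dict keyed by those fieldnames):

    for data in recons:
        result_dict = utils.data_formatter_handler(data)
        # write the formatted dict, whose keys should match the fieldnames
        writer.writerow(result_dict)

Opening the file with open(csv_path, 'w', newline='') also avoids blank lines between rows on Windows.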
I am trying to convert a CSV file to JSON, but one of the headers in my CSV is empty. Is there a way to name it when outputting it to JSON?
Example data
"" Calories Fat Sodium
Bread 100 10 23
I got this code from geeksforgeeks
import csv
import json

# Function to convert a CSV to JSON
# Takes the file paths as arguments
def make_json(csvFilePath, jsonFilePath):
    # create a dictionary
    data = {}

    # Open a csv reader called DictReader
    with open(csvFilePath, encoding='utf-8') as csvf:
        csvReader = csv.DictReader(csvf)

        # Convert each row into a dictionary
        # and add it to data
        for rows in csvReader:
            # Using the blank first column as the primary key
            key = rows['']
            data[key] = rows

    # Open a json writer, and use the json.dumps()
    # function to dump data
    with open(jsonFilePath, 'w', encoding='utf-8') as jsonf:
        jsonf.write(json.dumps(data, indent=4))

# Driver Code
# Decide the two file paths according to your
# computer system
csvFilePath = r'Names.csv'
jsonFilePath = r'Names.json'

# Call the make_json function
make_json(csvFilePath, jsonFilePath)
I did this and it gets the first row, but I'm not sure how to rename it when I output it to JSON.
It appears as "":"Bread" in the JSON file.
key = rows['']
Thanks in advance if anyone can help!
Edit: Expected output
{
"Food": "Bread",
"Calories": "45",
"Fat (g)": "0",
"Carb. (g)": "11",
"Fiber (g)": "0",
"Protein": "0",
"Sodium": "10"
}
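One way to handle this (a sketch, assuming the blank column should be called Food, as in the expected output) is to rename the key on each row inside the loop before storing it:

    for rows in csvReader:
        # rename the blank header; 'Food' is assumed from the expected output
        row = {(k if k else 'Food'): v for k, v in rows.items()}
        data[row['Food']] = row

Building a new dict this way keeps the column order intact; the rest of the script stays the same.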
Currently, I'm trying to import a JSON file that was created from a CSV file by the following Python script.
import csv, json

csvFilePath = 'USvideos.csv'
jsonFilePath = 'USvideos.json'

data = {}
with open(csvFilePath, encoding='utf8') as csvFile:
    csvReader = csv.DictReader(csvFile)
    for csvRow in csvReader:
        video_id = csvRow['video_id']
        data[video_id] = csvRow

with open(jsonFilePath, 'w') as jsonFile:
    jsonFile.write(json.dumps(data, indent=4))
Problem statement: I need to get a JSON file without the part shown in parentheses below, by modifying the Python script it comes from.
("2kyS6SvSYSE": ) {
"video_id": "2kyS6SvSYSE",
"trending_date": "17.20.11",
"title": "WE WANT TO TALK ABOUT OUR MARRIAGE"
},
("1ZAPwfrtAFY":) {
"video_id": "1ZAPwfrtAFY",
"trending_date": "17.20.11"
}
Purpose of solving it
I need to solve this because I want to import the data into MongoDB properly.
I'm guessing at the output JSON format you need, but can you give this a try?
import csv, json

csvFilePath = 'USvideos.csv'
jsonFilePath = 'USvideos.json'

data = []
with open(csvFilePath, encoding='utf8') as csvFile:
    csvReader = csv.DictReader(csvFile)
    for csvRow in csvReader:
        data.append(csvRow)

with open(jsonFilePath, 'w') as jsonFile:
    jsonFile.write(json.dumps(data, indent=4))
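This produces a plain JSON array instead of an object keyed by video_id, so (using the example above) the file should look like:

[
    {
        "video_id": "2kyS6SvSYSE",
        "trending_date": "17.20.11",
        "title": "WE WANT TO TALK ABOUT OUR MARRIAGE"
    },
    {
        "video_id": "1ZAPwfrtAFY",
        "trending_date": "17.20.11"
    }
]

An array like this can typically be imported into MongoDB with mongoimport using its --jsonArray option.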
I'm using the Poloniex trader API to get real-time market ticker info. It works perfectly in the console. Whenever I request the market ticker, this is what I get back:
{'last': '0.07269671', 'high24hr': '0.07379970', 'quoteVolume': '71582.56540639', 'isFrozen': '0', 'lowestAsk': '0.07277290', 'percentChange': '-0.00551274', 'id': 148, 'low24hr': '0.07124645', 'highestBid': '0.07269671', 'baseVolume': '5172.41552885'}
The problem is that it only stores the item names, such as low24hr, lowestAsk, highestBid, etc., not their values, like low24hr: 0.07124645.
import csv
from poloniex import Poloniex  # assuming the python-poloniex client

polo = Poloniex()
ticker_data = polo('returnTicker')['BTC_ETH']
out = csv.writer(open("myfile.csv", "w"), delimiter=',', quoting=csv.QUOTE_ALL)
out.writerow(ticker_data)
print(ticker_data)
Here is what my CSV file looks like:
Your problem is that out.writerow(ticker_data) takes only the keys of the dictionary and writes them to the file. Try using a csv.DictWriter instead:
with open('myfile.csv', 'w', newline='') as csv_file:
    # Pass the keys of the `ticker_data` as the fieldnames.
    writer = csv.DictWriter(csv_file, fieldnames=ticker_data, quoting=csv.QUOTE_ALL)
    # Write the fieldnames.
    writer.writeheader()
    # Write the values.
    writer.writerow(ticker_data)
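Because DictWriter looks up each value by its fieldname, the file should now contain a quoted header row followed by a matching row of values instead of only the key names.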
with open('my_file.csv', 'w') as f:
    for key, value in ticker_data.items():
        f.write('{0},{1}\n'.format(key, value))
From here.
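Note that this second approach produces a different layout: one key,value pair per row rather than a header row plus a single row of values, so pick whichever shape you need downstream.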
I took a Python beginners course last year. Now I am trying to build a CSV to JSON converter. I searched for quite some time and adapted and changed some of the code I found until the output looked similar to what I want. I am using Python 3.4.2.
@kvorobiev this is an excerpt of my CSV, but it will do for this case. The first conversion will work. After running it a second time you will see that the order of the headings changes within the JSON file.
The csv file looks like this
Document;Item;Category
4;10;C
What I am getting in the output file as of now (after applying the changes from kvorobiev):
[
{
"Item": "10",
"Category": "C",
"Document": "4"
};
]
The json string I want to get in the output file should look like:
[
{
"Document": "4",
"Item": "10",
"Category": "C"
},
]
You will notice the headings are in the wrong order.
Here is the code:
import json
import csv

csvfile = open('file1.csv', 'r')
jsonfile = open('file1.csv'.replace('.csv', '.json'), 'w')
jsonfile.write('[' + '\n' + ' ')
fieldnames = csvfile.readline().replace('\n', '').split(';')
num_lines = sum(1 for line in open('file.csv')) - 1
reader = csv.DictReader(csvfile, fieldnames)
i = 0
for row in reader:
    i += 1
    json.dump(row, jsonfile, indent=4, sort_keys=False)
    if i < num_lines:
        jsonfile.write(',')
    jsonfile.write('\n')
jsonfile.write(' ' + ']')
print('Done')
Thanks for helping.
Replace line
reader = csv.DictReader(csvfile, fieldnames)
with
reader = csv.DictReader(csvfile, fieldnames, delimiter=';')
Also, you open file1.csv but later count the lines of file.csv:
num_lines = sum(1 for line in open('file.csv')) -2
Your solution could be reduced to
import json
import csv

csvfile = open('file1.csv', 'r')
jsonfile = open('file1.csv'.replace('.csv', '.json'), 'w')
jsonfile.write('{\n[\n')
fieldnames = csvfile.readline().replace('\n', '').split(';')
reader = csv.DictReader(csvfile, fieldnames, delimiter=';')
for row in reader:
    json.dump(row, jsonfile, indent=4)
    jsonfile.write(';\n')
jsonfile.write(']\n}')
If you want to preserve the order of the columns from the CSV, you could use:
from collections import OrderedDict
...
for row in reader:
    json.dump(OrderedDict([(f, row[f]) for f in fieldnames]), jsonfile, indent=4)
    jsonfile.write(';\n')
jsonfile.write(']\n}')
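As a side note: on Python 3.6 and newer, csv.DictReader already returns rows that preserve column order, so the OrderedDict step is mainly needed on older versions such as the 3.4.2 used here. If standard JSON (a comma-separated array) is acceptable instead of the ';'-separated format above, a minimal sketch would be:

    import csv
    import json

    with open('file1.csv', 'r') as csvfile:
        reader = csv.DictReader(csvfile, delimiter=';')  # the header row supplies the fieldnames
        rows = list(reader)

    with open('file1.json', 'w') as jsonfile:
        json.dump(rows, jsonfile, indent=4)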
I have a project that was using ScraperWiki to write to their SQLite store, but I need to just write a CSV. The catch is that all the data is stored in a dict, which works fine for writing to SQLite:
scraperwiki.sqlite.save(unique_keys = ['somekey'], data = data, table_name='fancy')
I run that after I scrape each row. There's no order to a dict, so I can't just write the values to CSV. I've been looking over csv.DictWriter and collections.defaultdict, and I'm still wrapping my head around how to refactor my code so that I can write data, which is a dictionary, to CSV instead. Here's an example of my code as structured now:
def store_exception(exception, line_number, some_string):
    data = {
        'timestamp': datetime.now(),
        'line_number': line_number,
        'message': exception,
        'string': some_string
    }
    scraperwiki.sqlite.save(unique_keys=['timestamp'], data=data, table_name='error_log')
I think I want something like this though:
def store_exception(exception, line_number, some_string):
    data = {
        'timestamp': datetime.now(),
        'line_number': line_number,
        'message': exception,
        'string': some_string
    }
    d = defaultdict(lambda: "")
    d_order = d['timestamp'], d['line_number'], d['message'], d['string']
    with open('some/path.csv', 'w') as csvfile:
        linewriter = csv.DictWriter(csvfile, d_order, delimiter='|',
                                    quotechar='"', quoting=csv.QUOTE_MINIMAL)
        linewriter.writerow(data)
That seems inefficient, though. Do I need both collections.defaultdict and csv.DictWriter?
This should do it. You don't need the defaultdict at all; d_order just needs to be a list of field names, or in this case, the dict keys:
def store_exception(exception, line_number, some_string):
    data = {
        'timestamp': datetime.now(),
        'line_number': line_number,
        'message': exception,
        'string': some_string
    }
    d_order = ['timestamp', 'line_number', 'message', 'string']
    with open('some/path.csv', 'w') as csvfile:
        linewriter = csv.DictWriter(csvfile, d_order, delimiter='|',
                                    quotechar='"', quoting=csv.QUOTE_MINIMAL)
        linewriter.writerow(data)
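One caveat (an assumption about how this gets called, not something stated in the question): if store_exception runs once per scraped row, opening the file in 'w' mode will overwrite the previous line every time. A sketch that appends instead and writes the header only for a new file:

    import csv
    import os
    from datetime import datetime

    def store_exception(exception, line_number, some_string):
        data = {
            'timestamp': datetime.now(),
            'line_number': line_number,
            'message': exception,
            'string': some_string
        }
        d_order = ['timestamp', 'line_number', 'message', 'string']
        new_file = not os.path.exists('some/path.csv')
        with open('some/path.csv', 'a') as csvfile:
            linewriter = csv.DictWriter(csvfile, d_order, delimiter='|',
                                        quotechar='"', quoting=csv.QUOTE_MINIMAL)
            if new_file:
                # write the header once, when the file is first created
                linewriter.writeheader()
            linewriter.writerow(data)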