I just want to write the dictionary values into a text file, one per line. I can write the whole dictionary to the file using:
log_disk={}
log=open('log.txt','w')
log.write(str(log_disk))
log.close()
Any help will be appreciated. In addition, I want to skip the keys whose value is 'Empty' when writing to the file.
Just loop over the values then:
with open('log.txt', 'w') as log:
    for value in log_disk.values():
        log.write('{}\n'.format(value))
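If you also want to skip the entries whose value is 'Empty', as asked above, one sketch is to add a check inside the loop (this assumes 'Empty' is stored as a plain string):
with open('log.txt', 'w') as log:
    for value in log_disk.values():
        if value == 'Empty':  # skip placeholder entries
            continue
        log.write('{}\n'.format(value))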
Write the data as an unformatted JSON string
You may dump the data as JSON; without extra formatting options, the JSON data is written on a single line.
Do not forget to append a newline yourself:
>>> import json
>>> fname = "log.txt"  # any output path will do
>>> data = {}
>>> with open(fname, "a") as f:
...     json.dump(data, f)
...     f.write("\n")
...
Try it with other data:
>>> data = {"a": "aha", "b": "bebe"}
>>> with open(fname, "a") as f:
...     json.dump(data, f)
...     f.write("\n")
...
It does not have to be a dictionary; lists work too:
>>> data = range(10)
>>> data
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> with open(fname, "a") as f:
...     json.dump(data, f)
...     f.write("\n")
...
Reading data line by line
>>> with open(fname) as f:
...     for line in f:
...         print json.loads(line)
...
{}
{u'a': u'aha', u'b': u'bebe'}
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
You just need to convert log_disk.values() to a list and you can write it out directly:
import json

with open(filename, 'w') as openfile:
    json.dump(list(log_disk.values()), openfile)
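To read it back later, a minimal sketch (assuming the same filename variable) is:
import json

with open(filename) as openfile:
    values = json.load(openfile)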
Related
I've been struggling a bit with getting my input file into the right format for my algorithm.
I want to read this text file:
1 -> 7,8
11 -> 1,19
219 -> 1,9,8
Into this dictionary:
{ 1: [7, 8], 11: [1, 19], 219: [1, 9, 8]}
I've tried this code:
with open("file.txt", "r+") as f:
f.write(f.read().replace("->", " "))
f.close()
d = {}
with open("file.txt") as file:
for line in file:
(key, val) = line.split()
d[key] = val
But with this code it gets stuck on the fact that there can be more than two values in the second column. How can I make a list out of the elements in the second column and use that list as the value for each key?
There is no need to do that pre-processing step to remove the '->'. Simply use:
d = {}
with open("file.txt") as file:
    for line in file:
        left, right = line.split('->')
        d[int(left)] = [int(v) for v in right.split(',')]
You can even use a dictionary comprehension and make it a one-liner:
with open("file.txt") as file:
    d = {int(left): [int(v) for v in right.split(',')]
         for left, right in (line.split('->') for line in file)}
This gives:
>>> d
{1: [7, 8], 11: [1, 19], 219: [1, 9, 8]}
Nest a generator expression with str.split in a dictionary comprehension, converting the key to an integer and mapping the value to integers:
with open('file.txt') as f:
    result = {int(k): list(map(int, v.split(','))) for k, v in (line.split(' -> ') for line in f)}
I have over a thousand array categories in a text file, for example Category A1 and Category A2 (arrays in MATLAB code):
A1={[2,1,2]};
A1={[4,2,1,2,3]};
A2={[3,3,2,1]};
A2={[4,4,2,2]};
A2={[2,2,1,1,1]};
I would like to use Python to help me read the file and group them into:
A1=[{[2,1,2]} {[4,2,1,2,3]}];
A2=[{[3,3,2,1]} {[4,4,2,2]} {[2,2,1,1,1]}];
Use a dict to group. I presume you mean group them as strings, since they are not valid Python containers, coming from a .mat MATLAB file:
from collections import OrderedDict

od = OrderedDict()
with open("infile") as f:
    for line in f:
        name, data = line.split("=")
        od.setdefault(name, []).append(data.rstrip(";\n"))

from pprint import pprint as pp
pp(od.values())
[['{[2,1,2]}', '{[4,2,1,2,3]}'],
['{[3,3,2,1]}', '{[4,4,2,2]}', '{[2,2,1,1,1]}']]
To group the data in your file, just write the content back out:
with open("infile", "w") as f:
    for k, v in od.items():
        f.write("{}=[{}];\n".format(k, " ".join(v)))
Output:
A1=[{[2,1,2]} {[4,2,1,2,3]}];
A2=[{[3,3,2,1]} {[4,4,2,2]} {[2,2,1,1,1]}];
This is actually your desired output: the semicolons are removed from each sub-array, the elements are grouped, and a single semicolon is added to the end of each group to keep the data valid in your MATLAB file.
The collections.OrderedDict keeps the order from your original file, whereas a normal dict gives no ordering guarantee.
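As a quick illustration of that difference (standalone sketch; the keys here are just examples, not taken from your file):
from collections import OrderedDict

plain = dict([("A2", 1), ("A1", 2)])
ordered = OrderedDict([("A2", 1), ("A1", 2)])

print(list(ordered))  # always ['A2', 'A1'] -- insertion order is preserved
print(list(plain))    # order is arbitrary on Python 2 (and on CPython < 3.6)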
A safer approach when updating a file is to write to a temp file, then replace the original with the updated version using NamedTemporaryFile and shutil.move:
from collections import OrderedDict
from tempfile import NamedTemporaryFile
from shutil import move

od = OrderedDict()
with open("infile") as f, NamedTemporaryFile(dir=".", delete=False) as temp:
    for line in f:
        name, data = line.split("=")
        od.setdefault(name, []).append(data.rstrip("\n;"))
    for k, v in od.items():
        temp.write("{}=[{}];\n".format(k, " ".join(v)))
move(temp.name, "infile")
If the code errored in the loop or your computer crashed during the write, your original file would be preserved.
You can first loop over your lines, split each line on '=', then use ast.literal_eval and str.strip to extract the list inside the braces, and finally use a dictionary with the setdefault method to get your expected result:
import ast

d = {}
with open('file_name') as f:
    for line in f:
        var, set_ = line.split('=')
        d.setdefault(var, []).append(ast.literal_eval(set_.strip("{}\n;")))
print d
Result:
{'A1': [[2, 1, 2], [4, 2, 1, 2, 3]], 'A2': [[3, 3, 2, 1], [4, 4, 2, 2], [2, 2, 1, 1, 1]]}
If you want the result to be exactly in your expected format, you can do:
d = {}
with open('ex.txt') as f, open('new', 'w') as out:
    for line in f:
        var, set_ = line.split('=')
        d.setdefault(var, []).append(set_.strip(";\n"))
    print d
    for i, j in d.items():
        out.write('{}=[{}];\n'.format(i, ' '.join(j)))
At last you'll have the following result in the new file:
A1=[{[2,1,2]} {[4,2,1,2,3]}];
A2=[{[3,3,2,1]} {[4,4,2,2]} {[2,2,1,1,1]}];
I would like to write a Python dictionary inside a CSV file.
My code is:
import csv
cluster = {}
cluster['cluster0'] = [0,'value1','value2','value3']
cluster['cluster1'] = [1,'value1','value2','value3']
csvfile2 = "//home/tom/Desktop/cluster.csv"
with open(csvfile2, "w") as output:
    writer = csv.writer(output, lineterminator='\n')
    writer.writerows(cluster)
But instead of getting:
0,value1,value2,value3
1,value1,value2,value3
I have inside my CSV file:
c,l,u,s,t,e,r,0
c,l,u,s,t,e,r,1
Any suggestion please?
Instead of passing the dictionary itself, you should call its .values() method:
with open(csvfile2, "w") as output:
    writer = csv.writer(output, lineterminator='\n')
    writer.writerows(cluster.values())
As an example:
>>> d = {1: [1,2,3], 2: [4,5,6]}
>>> d.keys()
[1, 2]
>>> d.values()
[[1, 2, 3], [4, 5, 6]]
I have a CSV file:
col1, col2, col3
1, 2, 3
4, 5, 6
I want to create a list of dictionaries from this CSV.
Desired output:
a= [{'col1':1, 'col2':2, 'col3':3}, {'col1':4, 'col2':5, 'col3':6}]
How can I do this?
Use csv.DictReader:
import csv
with open('test.csv') as f:
    a = [{k: int(v) for k, v in row.items()}
         for row in csv.DictReader(f, skipinitialspace=True)]
This will result in:
[{'col2': 2, 'col3': 3, 'col1': 1}, {'col2': 5, 'col3': 6, 'col1': 4}]
Another simpler answer:
import csv
with open("configure_column_mapping_logic.csv", "r") as f:
reader = csv.DictReader(f)
a = list(reader)
print a
Using the csv module and a list comprehension:
import csv
with open('foo.csv') as f:
    reader = csv.reader(f, skipinitialspace=True)
    header = next(reader)
    a = [dict(zip(header, map(int, row))) for row in reader]
print a
Output:
[{'col3': 3, 'col2': 2, 'col1': 1}, {'col3': 6, 'col2': 5, 'col1': 4}]
Answering here after a long time, as I don't see any updated/relevant answers.
import pandas as pd

df = pd.read_csv('Your csv file path')
data = df.to_dict('records')
print(data)
# similar solution via namedtuple:
import csv
from collections import namedtuple

with open('foo.csv', 'rU') as f:
    # skipinitialspace drops the space after each comma so the headers are valid field names
    fh = csv.reader(f, delimiter=',', dialect=csv.excel_tab, skipinitialspace=True)
    headers = next(fh)
    Row = namedtuple('Row', headers)
    list_of_dicts = [Row._make(i)._asdict() for i in fh]
Well, while other people were out doing it the smart way, I implemented it naively. I suppose my approach has the benefit of not needing any external modules, although it will probably fail with weird configurations of values. Here it is just for reference:
a = []
with open("csv.txt") as myfile:
    firstline = True
    for line in myfile:
        if firstline:
            mykeys = "".join(line.split()).split(',')
            firstline = False
        else:
            values = "".join(line.split()).split(',')
            a.append({mykeys[n]: values[n] for n in range(0, len(mykeys))})
Simple method to parse CSV into a list of dictionaries:
data = []
with open('/home/mitul/Desktop/OPENEBS/test.csv', 'rb') as infile:
    header = infile.readline().split(",")
    for line in infile:
        fields = line.split(",")
        entry = {}
        for i, value in enumerate(fields):
            entry[header[i].strip()] = value.strip()
        data.append(entry)
I have looked at other questions on SO like this one but they are too techy for me to understand (only been learning a few days).
I am making a phonebook and I am trying to save a dictionary like so:
numbers = {}

def save(a):
    x = open("phonebook.txt", "w")
    for l in a:
        x.write(l, a[l])
    x.close()
But I get an error that write() only takes 1 argument, and obviously I'm passing 2. So my question is: how can I do this in a beginner-friendly way, and could you describe it in a non-techy way?
Thanks a lot.
It's better to use the json module for dumping/loading a dictionary to/from a file:
>>> import json
>>> numbers = {'1': 2, '3': 4, '5': 6}
>>> with open('numbers.txt', 'w') as f:
...     json.dump(numbers, f)
...
>>> with open('numbers.txt', 'r') as f:
...     print json.load(f)
...
{u'1': 2, u'3': 4, u'5': 6}
While JSON is a good choice and is cross-language and supported by browsers, Python has its own serialization format called pickle that is much more flexible.
import pickle

data = {'Spam': 10, 'Eggs': 5, 'Bacon': 11}

# pickle data is binary, so open the files in binary mode
with open('/tmp/data.pickle', 'wb') as pfile:
    pickle.dump(data, pfile)

with open('/tmp/data.pickle', 'rb') as pfile:
    read_data = pickle.load(pfile)

print(read_data)
Pickle is Python-specific and does not work with other languages. Also, be careful never to load pickle data from untrusted sources (such as over the web), as it is not considered "safe".
Pickle works for other data types too, including instances of your own classes.
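For example, a minimal sketch of round-tripping an instance of your own class (the Contact class and file path here are just illustrative):
import pickle

class Contact(object):
    def __init__(self, name, number):
        self.name = name
        self.number = number

bob = Contact('Bob', '555-0199')

# the class definition must be importable when the pickle is loaded back
with open('/tmp/contact.pickle', 'wb') as pfile:
    pickle.dump(bob, pfile)

with open('/tmp/contact.pickle', 'rb') as pfile:
    restored = pickle.load(pfile)

print(restored.name)  # 'Bob'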
You need to use the json module and JSON-encode your dict; then you can write the resulting string to a file.
When you read the file back, you need to JSON-decode it to convert it back into a Python dict.
>>> import json
>>> d = {1:1, 2:2, 3:3}
>>> d
{1: 1, 2: 2, 3: 3}
>>> json.JSONEncoder().encode(d)
'{"1": 1, "2": 2, "3": 3}'
>>> with open('phonebook.txt', 'w') as f:
...     f.write(json.JSONEncoder().encode(d))
...
>>> with open('phonebook.txt', 'r') as f:
...     print f.readlines()
...
['{"1": 1, "2": 2, "3": 3}']