There are similar questions/answers on SO, but this question concerns a specific error; I have gone through the relevant SO topics to try to solve it, but with no luck.
The code I have seeks to retrieve lines from a text file and read them into a dictionary. It works, but as you can see below, not completely.
File
"['a', 5]"
"['b', 2]"
"['c', 3]"
"['d', 0]"
Code
def readfiletodict():
with open("testfile.txt","r",newline="") as f:
mydict={} #create a dictionary called mydict
for line in f:
(key,val) = line.split(",")
mydict[key]=val
print(mydict) #test
for keys in mydict:
print(keys) #test to see if the keys are being retrieved correctly
readfiletodict()
Desired output:
I want the dictionary to hold the keys a, b, c, d and the corresponding values as shown in the file, without the unwanted characters. Similarly, I need the values to be stored in the dictionary as integers (so that they can be worked with later).
For quick replication see: https://repl.it/KgQe/0 for the whole code and problem
Current (erroneous) output:
Python 3.6.1 (default, Dec 2015, 13:05:11)
[GCC 4.8.2] on linux
{'"[\'a\'': ' 5]"\r\n', '"[\'b\'': ' 2]"\r\n', '"[\'c\'': ' 3]"\r\n', '"[\'d\'': ' 0]"\r\n'}
"['a'
"['b'
"['c'
"['d'
The Stack Overflow answer I have used in my current code is from: Python - file to dictionary?, but it doesn't quite work for me...
Your code slightly modified - the key is to strip out all the chars that we don't care about ([Python]: str.strip([chars])):
def readfiletodict():
with open("testfile.txt", "r") as f:
mydict = {} #create a dictionary called mydict
for line in f:
key, val = line.strip("\"\n[]").split(",")
mydict[key.strip("'")] = val.strip()
print(mydict) #test
for key in mydict:
print(key) #test to see if the keys are being retrieved correctly
readfiletodict()
Output:
(py35x64_test) c:\Work\Dev\StackOverflow\q46041167>python a.py
{'d': '0', 'c': '3', 'a': '5', 'b': '2'}
d
c
a
b
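Since the question also asks for the values to be stored as integers, one small tweak on top of this (my addition, not reflected in the output above) is to convert val when storing it:
mydict[key.strip("'")] = int(val)  # int() copes with the leading space left by the split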
The efficient way to do this would be using Python lists as suggested by @Tico.
However, if for some reason you can't, you can try this.
import re
...
lineFormat = re.sub('[^A-Za-z0-9,]+', '', line)
This will transform "['a', 5]" into a,5. Now you can apply your split function.
(key,val) = lineFormat.split(",")
mydict[key]=val
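Putting those pieces together, a minimal end-to-end sketch (my own assembly of this regex-cleanup approach, with an added int conversion) might look like:
import re

def readfiletodict():
    mydict = {}
    with open("testfile.txt", "r") as f:
        for line in f:
            # keep only letters, digits and the comma separating key and value
            lineFormat = re.sub('[^A-Za-z0-9,]+', '', line)
            key, val = lineFormat.split(",")
            mydict[key] = int(val)  # store the value as an integer
    print(mydict)  # e.g. {'a': 5, 'b': 2, 'c': 3, 'd': 0}

readfiletodict()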
It's much easier if you transform your string_list into a real Python list, so you don't need parsing. Use json.loads:
import json
...
list_line = json.loads(line)
...
Hope it helps!
You can use regex and a dict-comprehension to do that:
#!/usr/bin/env python
import re
with open('file.txt', 'r') as f: l = f.read().splitlines()
d = {''.join(re.findall('[a-zA-Z]+', i)): int(''.join(re.findall(r'\d', i))) for i in l}
Result:
{'a': 5, 'c': 3, 'b': 2, 'd': 0}
Using only a very basic knowledge of Python:
>>> mydict = {}
>>> with open('temp.txt') as the_input:
...     for line in the_input:
...         values = line.replace('"', '').replace("'", '').replace(',', '').replace('[', '').replace(']', '').rstrip().split(' ')
...         mydict[values[0]] = int(values[1])
...
>>> mydict
{'a': 5, 'b': 2, 'c': 3, 'd': 0}
In other words, discard all of the punctuation, leaving only the blank between the two values needed for the dictionary. Split on that blank, then put the pieces from the split into the dictionary.
Edit: In a similar vein, using a regex. The re.sub looks for the various alternative characters given by its first argument, and any that are found are replaced by its second argument, an empty string. The alternatives are delimited by the '|' character in a regex pattern. Some of the alternatives, such as the '[', must be escaped with a '\' because on their own they have special meanings within a regex.
>>> import re
>>> mydict = {}
>>> with open('temp.txt') as the_input:
...     for line in the_input:
...         values = re.sub(r'"|\'|\,|\[|\]|,', '', line).split(' ')
...         mydict[values[0]] = int(values[1])
...
>>> mydict
{'a': 5, 'b': 2, 'c': 3, 'd': 0}
You were almost there, missing two things:
stripping the keys
converting the values
The following code does what you need (I think):
#!/usr/bin/env python
# -*- coding: utf-8 -*-
output = dict()
with open('input', 'r') as inputfile:
    for line in inputfile:
        line = line.strip('"[]\n')
        key, val = line.split(',')
        output[key.strip("'")] = int(val)
Be careful, however, since this code is very brittle: it won't correctly process any variation on the input format you have provided. To build on top of this, I'd recommend at least wrapping the int conversion in a try/except ValueError and thinking about the stripping characters again.
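For instance, a hedged sketch of that guard (my addition; malformed lines are simply skipped here):
output = dict()
with open('input', 'r') as inputfile:
    for line in inputfile:
        line = line.strip('"[]\n')
        try:
            key, val = line.split(',')
            output[key.strip("'")] = int(val)
        except ValueError:
            # line didn't match the expected "['x', N]" shape; skip it
            pass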
Related
I have one string as below:
key_val = "count=2, name=['hello', 'hi'], word='Dial::100', roll=12"
I need to get the dictionary from the string as below:
d_key_val = {'count': 2, 'name': ['hello', 'hi'], 'word': 'Dial::100', 'roll': 12}
I tried with the following:
regx = r'(?P<key>\w+)=(?P<value>\[.+?\]|\d+|\S+)'
r_key_val = re.findall(regx, key_val)
for key, value in r_key_val:
    d_key_val[key] = value
But it is storing all the values as strings:
d_key_val = {'count': '2', 'name': "['hello', 'hi']", 'word': "'Dial::100'", 'roll': '12'}
Is there any way, or a regex, to store the values with the same data types they have in the string?
If you are 100% sure that the data is "safe", you could eval it as the parameters to dict:
>>> key_val = "count=2, name=['hello', 'hi'], word='Dial::100', roll=12"
>>> eval("dict(%s)" % key_val)
{'count': 2, 'name': ['hello', 'hi'], 'roll': 12, 'word': 'Dial::100'}
If you are not sure, better don't use eval, though.
Alternatively, you could use your regex and use ast.literal_eval to evaluate the value:
>>> import re, ast
>>> regx = r'(?P<key>\w+)=(?P<value>\[.+?\]|\d+|\S+)'
>>> {k: ast.literal_eval(v) for k, v in re.findall(regx, key_val)}
{'count': 2, 'name': ['hello', 'hi'], 'roll': 12, 'word': ('Dial::100',)}
(Note: I did not check your regex in detail.) You could also try to apply ast.literal_eval to the entire expression, instead of the less safe eval, but this would require some preprocessing, e.g. replacing = with : and adding quotes to the keys, which might not work well with e.g. string values containing those symbols.
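A rough sketch of that preprocessing idea (my own illustration, subject to the caveat above about values that themselves contain '=' or quotes):
import re
import ast

key_val = "count=2, name=['hello', 'hi'], word='Dial::100', roll=12"

# turn every "key=" into "'key': " and wrap the whole thing in braces,
# so the result is a dict literal that ast.literal_eval can parse safely
as_dict_literal = "{" + re.sub(r"(\w+)=", r"'\1': ", key_val) + "}"
d_key_val = ast.literal_eval(as_dict_literal)
print(d_key_val)
# {'count': 2, 'name': ['hello', 'hi'], 'word': 'Dial::100', 'roll': 12}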
regex cannot do that, but you can! You can write a function like the following that takes the values regex writes out and converts them to the appropriate type.
def type_converter(v):
    if v[0] == '[' and v[-1] == ']':
        v = v.replace('[', '').replace(']', '')
        return [type_converter(x) for x in v.split(',')]
    try:
        v = int(v)
    except ValueError:
        try:
            v = float(v)
        except ValueError:
            pass
    finally:
        return v
To add this to your code, simply do:
regx = r'(?P<key>\w+)=(?P<value>\[.+?\]|\d+|\S+)'
r_key_val = re.findall(regx, key_val)
for key, value in r_key_val:
    d_key_val[key] = type_converter(value) # <- this
Example:
lst = ['2', '1.2', 'foo', '[1, 2]']
print([type(type_converter(x)) for x in lst])
# [<class 'int'>, <class 'float'>, <class 'str'>, <class 'list'>]
Note that the order in which the try-blocks are written is very important, since float('1') does not raise any errors but the correct type is int!
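A quick illustration (interactive session) of why int has to be attempted before float in the converter above:
>>> float('1')   # succeeds, but we would lose the fact that '1' is an integer
1.0
>>> int('1')     # the type we actually want for '1'
1
>>> int('1.2')   # fails, so the converter falls through to the float attempt
Traceback (most recent call last):
  ...
ValueError: invalid literal for int() with base 10: '1.2'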
For some reason my code refuses to convert to uppercase and I can't figure out why. I'm trying to then write the dictionary to a file, with the uppercase dictionary values being inserted into a sort of template file.
#!/usr/bin/env python3
import fileinput
from collections import Counter
#take every word from a file and put into dictionary
newDict = {}
dict2 = {}
with open('words.txt', 'r') as f:
    for line in f:
        k,v = line.strip().split(' ')
        newDict[k.strip()] = v.strip()
print(newDict)
choice = input('Enter 1 for all uppercase keys or 2 for all lowercase, 3 for capitalized case or 0 for unchanged \n')
print("Your choice was " + choice)
if choice == 1:
    for k,v in newDict.items():
        newDict.update({k.upper(): v.upper()})
if choice == 2:
    for k,v in newDict.items():
        dict2.update({k.lower(): v})
#find keys and replace with word
print(newDict)
with open("tester.txt", "rt") as fin:
with open("outwords.txt", "wt") as fout:
for line in fin:
fout.write(line.replace('{PETNAME}', str(newDict['PETNAME:'])))
fout.write(line.replace('{ACTIVITY}', str(newDict['ACTIVITY:'])))
myfile = open("outwords.txt")
txt = myfile.read()
print(txt)
myfile.close()
In Python 3 you cannot do that:
for k,v in newDict.items():
    newDict.update({k.upper(): v.upper()})
because it changes the dictionary while iterating over it, and Python doesn't allow that (it doesn't happen in Python 2 because items() used to return a copy of the items as a list). Besides, even if it worked, it would keep the old keys (also: it's very slow to create a dictionary at each iteration...)
Instead, rebuild your dict in a dict comprehension:
newDict = {k.upper():v.upper() for k,v in newDict.items()}
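A minimal sketch of how that looks (with hypothetical sample data, not taken from the question's words.txt):
newDict = {'petname:': 'rex', 'activity:': 'fetch'}
newDict = {k.upper(): v.upper() for k, v in newDict.items()}
print(newDict)  # {'PETNAME:': 'REX', 'ACTIVITY:': 'FETCH'}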
You should not change dictionary items as you iterate over them. The docs state:
Iterating views while adding or deleting entries in the dictionary may raise a RuntimeError or fail to iterate over all entries.
One way to update your dictionary as required is to pop values and reassign in a for loop, iterating over a snapshot of the items. For example:
d = {'abc': 'xyz', 'def': 'uvw', 'ghi': 'rst'}
for k, v in list(d.items()):
    d[k.upper()] = d.pop(k).upper()
print(d)
{'ABC': 'XYZ', 'DEF': 'UVW', 'GHI': 'RST'}
An alternative is a dictionary comprehension, as shown by @Jean-FrançoisFabre.
I have a text file and its content is something like this:
A:3
B:5
C:7
A:8
C:6
I need to print:
A numbers: 3, 8
B numbers: 5
C numbers: 7, 6
I'm a beginner so if you could give some help I would appreciate it. I have made a dictionary but that's pretty much all I know.
You could use an approach that keeps the values in a dictionary:
d = {} # create an empty dictionary
for line in open(filename): # opens the file
    k, v = line.strip().split(':') # split each line on ':' into the part before and after
    if k in d: # add the values to the dictionary
        d[k].append(v)
    else:
        d[k] = [v]
This gives you a dictionary containing your file in a format that you can utilize to get the desired output:
for key, values in sorted(d.items()):
    print(key, 'numbers:', ', '.join(values))
The sorted is required because dictionaries are unordered.
Note that using collections.defaultdict instead of a normal dict could simplify the approach somewhat. The:
d = {}
...
if k in d: # add the values to the dictionary
    d[k].append(v)
else:
    d[k] = [v]
could then be replaced by:
from collections import defaultdict
d = defaultdict(list)
...
d[k].append(v)
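Put together, a sketch of the defaultdict variant (my assembly of the pieces above, reusing filename from the first snippet) would be:
from collections import defaultdict

d = defaultdict(list)
for line in open(filename):
    k, v = line.strip().split(':')
    d[k].append(v)  # a missing key automatically starts out as an empty list

for key, values in sorted(d.items()):
    print(key, 'numbers:', ', '.join(values))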
Short version (which should sort in alphabetical order)
d = {}
lines = [line.rstrip('\n') for line in open('filename.txt')]
[d.setdefault(line[0], []).append(line[2]) for line in lines]
[print(key, 'numbers:', ', '.join(values)) for key,values in sorted(d.items())]
Or if you want to maintain the order in which they appear in the file (file order)
from collections import OrderedDict
d = OrderedDict() # Empty dict
lines = [line.rstrip('\n') for line in open('filename.txt')] # Get the lines
[d.setdefault(line[0], []).append(line[2]) for line in lines] # Add lines to dictionary
[print(key, 'numbers:', ', '.join(values)) for key,values in d.items()] # Print lines
Tested with Python 3.5.
You can treat your file as CSV (comma-separated values), so you can use the csv module to parse the file in one line. Then use defaultdict, passing the list class to its constructor so that a new list is created whenever a key doesn't exist yet. Then use the OrderedDict class, because a standard dictionary doesn't keep the order of your keys.
import csv
from collections import defaultdict, OrderedDict
values = list(csv.reader(open('your_file_name'), delimiter=":")) #[['A', '3'], ['B', '5'], ['C', '7'], ['A', '8'], ['C', '6']]
dct_values = defaultdict(list)
for k, v in values:
    dct_values[k].append(v)
dct_values = OrderedDict(sorted(dct_values.items()))
Then you can simply print by iterating over the dictionary.
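For instance (a small sketch matching the output format asked for in the question):
for k, v in dct_values.items():
    print(k, 'numbers:', ', '.join(v))
# A numbers: 3, 8
# B numbers: 5
# C numbers: 7, 6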
A very easy way to group by key is with an external library; if you are interested, try PyFunctional.
I have over a thousand array categories in a text file, for example Category A1 and Category A2 (arrays in MATLAB code):
A1={[2,1,2]};
A1={[4,2,1,2,3]};
A2={[3,3,2,1]};
A2={[4,4,2,2]};
A2={[2,2,1,1,1]};
I would like to use Python to help me read the file and group them into:
A1=[{[2,1,2]} {[4,2,1,2,3]}];
A2=[{[3,3,2,1]} {[4,4,2,2]} {[2,2,1,1,1]}];
Use a dict to group. I presume you mean grouping them as strings, as they are not valid Python containers, coming from a MATLAB .mat file:
from collections import OrderedDict
od = OrderedDict()
with open("infile") as f:
    for line in f:
        name, data = line.split("=")
        od.setdefault(name,[]).append(data.rstrip(";\n"))
from pprint import pprint as pp
pp(od.values())
[['{[2,1,2]}', '{[4,2,1,2,3]}'],
['{[3,3,2,1]}', '{[4,4,2,2]}', '{[2,2,1,1,1]}']]
To group the data in your file just write the content:
with open("infile", "w") as f:
    for k, v in od.items():
        f.write("{}=[{}];\n".format(k, " ".join(v)))
Output:
A1=[{[2,1,2]} {[4,2,1,2,3]}];
A2=[{[3,3,2,1]} {[4,4,2,2]} {[2,2,1,1,1]}];
Which is actually your desired output, with the semicolons removed from each sub-array, the elements grouped, and a semicolon added to the end of the group to keep the data valid in your MATLAB file.
The collections.OrderedDict will keep the order from your original file, whereas a normal dict gives no guaranteed order.
A safer approach when updating a file is to write to a temp file and then replace the original file with the updated one, using a NamedTemporaryFile and shutil.move:
from collections import OrderedDict
od = OrderedDict()
from tempfile import NamedTemporaryFile
from shutil import move
with open("infile") as f, NamedTemporaryFile(dir=".", delete=False) as temp:
    for line in f:
        name, data = line.split("=")
        od.setdefault(name, []).append(data.rstrip("\n;"))
    for k, v in od.items():
        temp.write("{}=[{}];\n".format(k, " ".join(v)))
move(temp.name, "infile")
If the code errored in the loop or your computer crashed during the write, your original file would be preserved.
You can first loop over your lines, split each line on =, then use ast.literal_eval and str.strip to extract the list within the braces, and finally use a dictionary with the setdefault method to get your expected result:
import ast
d={}
with open('file_name') as f:
    for line in f:
        var, set_ = line.split('=')
        d.setdefault(var, []).append(ast.literal_eval(set_.strip("{}\n;")))
print d
result :
{'A1': [[2, 1, 2], [4, 2, 1, 2, 3]], 'A2': [[3, 3, 2, 1], [4, 4, 2, 2], [2, 2, 1, 1, 1]]}
If you want the result to be exactly in your expected format, you can do:
d={}
with open('ex.txt') as f, open('new', 'w') as out:
    for line in f:
        var, set_ = line.split('=')
        d.setdefault(var, []).append(set_.strip(";\n"))
    print d
    for i, j in d.items():
        out.write('{}=[{}];\n'.format(i, ' '.join(j)))
At last you'll have the following result in new file :
A1=[{[2,1,2]} {[4,2,1,2,3]}];
A2=[{[3,3,2,1]} {[4,4,2,2]} {[2,2,1,1,1]}];
I have a text file with eight names, sorted by name, like this:
Anna
David
Dennis
Morgan
Lana
Peter
Joanna
Karen
And now I want to put them into a dictionary and add a different key to each of the names.
The names are on new lines. What I want to add to the names in the dict are different binary numbers from 000-111.
How can I do this?
I have tried stuff like this:
with open ('tennis.txt', 'r') as f:
    for line in f:
        dict={}
        for line in open('file.txt'):
            bin[0]=next(f)
            bin[1]=next(f)
            bin[2]=next(f)
            bin[3]=next(f)
            bin[4]=next(f)
            bin[5]=next(f)
            bin[6]=next(f)
            bin[7]=next(f)
Based on andybuckley's answer, you can get it done like this:
d = {}
f = open("tennis.txt")
for i, l in enumerate(f):
    # cut the '0b' chars, so you will get your dict keys just like you want
    bin_num = bin(i)[2:]
    # if the key is shorter than 3 chars, add 0 to the beginning
    while len(bin_num) < 3:
        bin_num = '0' + bin_num
    d[bin_num] = l[:-1]
f.close()
for i in sorted(d.items()):
    print i
EDIT: Thanks to @pepr - remember to close the opened file.
Output:
('000', 'Anna')
('001', 'David')
('010', 'Dennis')
('011', 'Morgan')
('100', 'Lana')
('101', 'Peter')
('110', 'Joanna')
('111', 'Karen')
It's a bit hard to know what you want: I've interpreted the question as wanting to read names from a text file, and to insert each into a dict with an increasing binary key. Here's an interactive Python3 session which does that and shows the populated dictionary:
>>> d = {}
>>> for i, l in enumerate(open("tennis.txt")):
...     d[bin(i)] = l[:-1]
...
>>> d
{'0b10': 'Dennis', '0b11': 'Morgan', '0b110': 'Joanna', '0b0': 'Anna', '0b1': 'David', '0b101': 'Peter', '0b100': 'Lana', '0b111': 'Karen'}
Note that I've used "d" rather than "dict" as the name for the dictionary variable, since I don't want the variable name to hide the class name: it's always a good idea to avoid using the same names for variables and classes, although Python will not object.
Use a dict comprehension, zfill, and enumerate:
with open('/tmp/names.txt') as f:
    print({bin(k)[2:].zfill(3): v.strip() for k,v in enumerate(f)})
Prints:
{'000': 'Anna', '001': 'David', '011': 'Morgan', '010': 'Dennis', '101': 'Peter', '100': 'Lana', '110': 'Joanna', '111': 'Karen'}
If you don't know how many lines there are in the file in order to use the right number for zfill, you can just count them first:
with open(fn) as f:
    i=max(ln for ln,line in enumerate(f) if line.strip())
    print(i, bin(i)[2:])
    fill=len(bin(i)[2:])
    f.seek(0)
    print({bin(k)[2:].zfill(fill): v.strip() for k,v in enumerate(f) if v.strip()})