I am having a problem updating values in a dictionary in Python. I am trying to update a nested value (either an int or a list) for a single first-level key, but instead I end up updating that value for all first-level keys.
I start by creating the dictionary:
kmerdict = {}
innerdict = {'endcover':0, 'coverdict':{}, 'coverholder':[], 'uncovered':0, 'lowstart':0, 'totaluncover':0, 'totalbases':0}

for kmer in kmerlist: # build kmerdict
    kmerdict[kmer] = {}
    for chrom in fas: #open file and read line
        chromnum = chrom[3:-3]
        kmerdict[kmer][chromnum] = innerdict
Then I walk through chromosomes (plain text files) from a list (fas, not shown), taking 7-mer strings (k=7) as the key. If that key is in the list of keys I am looking for (kmerlist), I try to use it to reference a single nested value in the dictionary:
for chrom in fas: #open file and read line
    chromnum = chrom[3:-3]
    p = 0 #chromosome position counter
    thisfile = "/var/store/fa/" + chrom
    thischrom = open(thisfile)
    thischrom.readline()
    thisline = thischrom.readline()
    thisline = string.strip(thisline.lower())
    l = 0 #line counter
    workline = thisline
    while(thisline):
        if len(workline) > k-1:
            thiskmer = ''
            thiskmer = workline[0:k] #read k bases
            if thiskmer in kmerlist:
                thisuncovered = kmerdict[thiskmer][chromnum]['uncovered']
                thisendcover = kmerdict[thiskmer][chromnum]['endcover']
                thiscoverholder = kmerdict[thiskmer][chromnum]['coverholder']
                if p >= thisendcover:
                    thisuncovered += (p - thisendcover)
                    thisendcover = ((p+k) + ext)
                    thiscoverholder.append(p)
                elif p < thisendcover:
                    thisendcover = ((p+k) + ext)
                    thiscoverholder.append(p)
                print kmerdict[thiskmer]
            p += 1
            workline = workline[1:]
        else:
            thisline = thischrom.readline()
            thisline = string.strip(thisline.lower())
            workline = workline + thisline
            l += 1

print kmerdict
But when I print the dictionary, all "thiskmer" levels have been updated with the same values. I'm not very good with dictionaries and I can't see the error of my ways, but it is clearly profound! Can anyone enlighten me?
Hope I've been clear enough. I've been tinkering with this code for too long now :(
Confession: I haven't spent the time to figure out all of your code, only the first part. The first problem you have is in the setup:
kmerdict = {}
innerdict = {'endcover':0, 'coverdict':{}, 'coverholder':[], 'uncovered':0,
             'lowstart':0, 'totaluncover':0, 'totalbases':0}

for kmer in kmerlist: # build kmerdict
    kmerdict[kmer] = {}
    for chrom in fas: #open file and read line
        chromnum = chrom[3:-3]
        kmerdict[kmer][chromnum] = innerdict
You create innerdict once and then proceed to use the same dictionary over and over again. In other words, every kmerdict[kmer][chromnum] refers to the same object. Perhaps changing the last line to:
kmerdict [kmer][chromnum] = copy.deepcopy(innerdict)
would help (with an appropriate import of copy at the top of your file)? Alternatively, you could just move the creation of innerdict into the inner loop as pointed out in the comments:
def get_inner_dict():
    return {'endcover':0, 'coverdict':{}, 'coverholder':[], 'uncovered':0,
            'lowstart':0, 'totaluncover':0, 'totalbases':0}

kmerdict = {}
for kmer in kmerlist: # build kmerdict
    kmerdict[kmer] = {}
    for chrom in fas: #open file and read line
        chromnum = chrom[3:-3]
        kmerdict[kmer][chromnum] = get_inner_dict()
-- I decided to use a function to make it easier to read :).
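If it helps, here is a tiny self-contained demonstration of the aliasing (a toy example, not your data), showing why copy.deepcopy gives you independent inner dictionaries:

import copy

shared = {'count': 0}
outer = {'a': shared, 'b': shared}        # both keys point at the SAME dict object
outer['a']['count'] += 1
print(outer['b']['count'])                # 1 -- 'b' appears to change too
print(outer['a'] is outer['b'])           # True

fresh = {'count': 0}
outer = {'a': copy.deepcopy(fresh), 'b': copy.deepcopy(fresh)}   # two independent copies
outer['a']['count'] += 1
print(outer['b']['count'])                # 0 -- 'b' is unaffected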
Related
I want to calculate document frequencies of text documents. First I created the term dictionary and calculated the term frequencies. I have no problems in these steps, but when I try to use the function below it gives an error:
def computeDF(docList):
    df = {}
    df = dict.fromkeys(docList[0].keys(), 0)
    for doc in docList:
        for word, val in doc.items():
            if val > 0:
                df[word] += 1
    for word, val in df.items():
        df[word] = float(val)
    return df
Called the function like this:
dictList = []
for i in range(N):
    # creating dictionary for all documents
    tokens = processed_text[i]
    dictionary = dict.fromkeys(tokens, 0)
    # calculation of term frequencies for all documents
    for word in tokens:
        dictionary[word] += 1
    tf = termFreq(dictionary, tokens)
    dictList.append(dictionary)

df = computeDF(dictList)
I called the function with a list of 10 dictionaries, since it works on a list object.
N = 10 (num of documents)
Error:
line 155, in <module>
    df = computeDF(dictList)
line 134, in computeDF
    df[word] += 1
KeyError: 'flagstaff'
The function works when I try it in a different Python file with the same object types. I don't understand what the problem is. How can I solve this?
Where you have df = dict.fromkeys(docList[0].keys(), 0) you need something like
keys = set()
for doc in docList:
    keys = keys.union(set(doc.keys()))
df = dict.fromkeys(keys, 0)
That way you have keys for all of your docs, not just the first one. If you want to do it in one line you can do it like this:
keys = set().union(*[set(doc.keys()) for doc in docList])
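Putting that fix into your function, a minimal sketch (the two sample term-count dictionaries at the bottom are invented just to show the behaviour):

def computeDF(docList):
    # collect every term that appears in ANY document, not just the first
    keys = set().union(*(doc.keys() for doc in docList))
    df = dict.fromkeys(keys, 0)
    for doc in docList:
        for word, val in doc.items():
            if val > 0:
                df[word] += 1
    return {word: float(val) for word, val in df.items()}

docs = [{'flagstaff': 2, 'snow': 1}, {'snow': 3, 'desert': 1}]   # hypothetical term counts
print(computeDF(docs))   # e.g. {'flagstaff': 1.0, 'snow': 2.0, 'desert': 1.0} (key order may vary)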
I wrote code that reads a whole DNA genome and returns a dictionary of all the 8-mer primers with their locations. I want to loop through this dictionary and sort these k-mers into 4 other dictionaries based on the letter they start with: A, T, G or C.
But I couldn't figure out how to check the first letter of each key.
This is my code:
"""
Generating all the possible 8-codon primers.
saving them in a text file with their locations.
"""
import csv
##MAIN FUNCTION:
def k_mer(Text, k):
dictionary = {}
for i in range (len(Text) - k + 1):
if(Text[i: i+k] in dictionary):
dictionary[Text[i: i+k]].append(i)
else:
dictionary[Text[i: i+k]] = [i]
return dictionary
##INPUT:
# open the file with the original sequence
myfile = open('Vibrio_cholerae.txt')
# set the file to the variable Text to read and scan
Text = myfile.read()
result = k_mer(Text.strip(), 8)
with open("result.txt","w") as f:
from collections import Counter
wr = csv.writer(f,delimiter=":")
wr.writerows(Counter(result).items())
Given your dictionary, it's just this. It's not a complicated problem.
groupby = {'A': {}, 'C': {}, 'G': {}, 'T': {}}

for k, v in dictionary.items():
    groupby[k[0]][k] = v
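A quick illustration with a made-up k_mer result, plus a defaultdict variant in case the sequence ever contains a character other than A/C/G/T (e.g. an 'N'), which would otherwise raise a KeyError:

from collections import defaultdict

dictionary = {'ATGCATGC': [0, 12], 'GGGTTTAA': [4]}   # hypothetical k_mer output

groupby = {'A': {}, 'C': {}, 'G': {}, 'T': {}}
for k, v in dictionary.items():
    groupby[k[0]][k] = v
print(groupby['A'])   # {'ATGCATGC': [0, 12]}

groupby2 = defaultdict(dict)                          # buckets created on demand
for k, v in dictionary.items():
    groupby2[k[0]][k] = v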
I'm new to programming and Python, and I'm looking for a way to distinguish between two input formats in the same text file. For example, let's say I have an input file like this, where the values are comma-separated:
5
Washington,A,10
New York,B,20
Seattle,C,30
Boston,B,20
Atlanta,D,50
2
New York,5
Boston,10
The format is N followed by N lines of Data1, then M followed by M lines of Data2. I tried opening the file, reading it line by line and storing it into one single list, but I'm not sure how to go about producing two lists for Data1 and Data2, such that I would get:
Data1 = ["Washington,A,10", "New York,B,20", "Seattle,C,30", "Boston,B,20", "Atlanta,D,50"]
Data2 = ["New York,5", "Boston,10"]
My initial idea was to iterate through the list until I found an integer i, remove the integer from the list, and continue for the next i iterations while storing the subsequent values in a separate list, until I found the next integer, then repeat. However, this would destroy my initial list. Is there a better way to separate the two data formats into different lists?
You could use itertools.islice and a list comprehension:
from itertools import islice
string = """
5
Washington,A,10
New York,B,20
Seattle,C,30
Boston,B,20
Atlanta,D,50
2
New York,5
Boston,10
"""
result = [[x for x in islice(parts, idx + 1, idx + 1 + int(line))]
          for parts in [string.split("\n")]
          for idx, line in enumerate(parts)
          if line.isdigit()]
print(result)
This yields
[['Washington,A,10', 'New York,B,20', 'Seattle,C,30', 'Boston,B,20', 'Atlanta,D,50'], ['New York,5', 'Boston,10']]
For a file, you need to change it to:
with open("testfile.txt", "r") as f:
result = [[x for x in islice(parts, idx + 1, idx + 1 + int(line))]
for parts in [f.read().split("\n")]
for idx, line in enumerate(parts)
if line.isdigit()]
print(result)
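If the nested comprehension above is hard to follow, this is an equivalent plain-loop version of the same idea (same logic, just written out):

result = []
parts = string.split("\n")          # or f.read().split("\n") for the file variant
for idx, line in enumerate(parts):
    if line.isdigit():              # a count line starts a new block
        n = int(line)
        result.append(parts[idx + 1: idx + 1 + n])   # take the next n lines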
You're definitely on the right track.
If you want to preserve the original list here, you don't actually have to remove integer i; you can just go on to the next item.
Code:
originalData = []
formattedData = []
with open("data.txt", "r") as f :
f = list(f)
originalData = f
i = 0
while i < len(f): # Iterate through every line
try:
n = int(f[i]) # See if line can be cast to an integer
originalData[i] = n # Change string to int in original
formattedData.append([])
for j in range(n):
i += 1
item = f[i].replace('\n', '')
originalData[i] = item # Remove newline char in original
formattedData[-1].append(item)
except ValueError:
print("File has incorrect format")
i += 1
print(originalData)
print(formattedData)
The following code will produce a list results which is equal to [Data1, Data2].
The code assumes that the number of entries specified is exactly the amount that there is. That means that for a file like this, it will not work.
2
New York,5
Boston,10
Seattle,30
The code:
# get the data from the text file
with open('filename.txt', 'r') as file:
    lines = file.read().splitlines()

results = []
index = 0
while index < len(lines):
    # Find the start and end values.
    start = index + 1
    end = start + int(lines[index])
    # Everything from the start up to and excluding the end index gets added
    results.append(lines[start:end])
    # Update the index
    index = end
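For the sample file shown at the top of the question, you could then unpack the result like this (the contents match the Data1/Data2 lists given above):

print(results)
# [['Washington,A,10', 'New York,B,20', 'Seattle,C,30', 'Boston,B,20', 'Atlanta,D,50'],
#  ['New York,5', 'Boston,10']]
Data1, Data2 = results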
I've been playing around with a program that will take in information from two files and then write the information out to a single file in sorted order.
So what I did was store each line of the file as an element in a list. I created another function that splits each element into a 2D list where I can easily access the name variables. From there I want a nested for loop that, as it iterates, checks for the highest value in the list, removes that value from the list and appends it to a new list, until there's a sorted list.
I think I am about 90% of the way there, but I am having trouble wrapping my head around the logic of sorting algorithms. It seems like the problem just keeps getting more complex, and I keep wanting to use pointers. If someone could shine some light on the subject I would greatly appreciate it.
import os
from http.cookiejar import DAYS
from macpath import split

# This program reads a given input file and finds its longest line.

class Employee:
    def __init__(self, EmployeeID, name, wage, days):
        self.EmployeeID = EmployeeID
        self.name = name
        self.wage = wage
        self.days = days

def Extraction(file, file2):
    employList = []
    while True:
        line1 = file.readline().strip()
        line2 = file2.readline().strip()
        #print(type(line1))
        employList.append(line1)
        #print(line1)
        employList.append(line2)
        #print(line2)
        if line1 == '' or line2 == '':
            break
    return employList

def Sort(mylist):
    splitlist = []
    sortedlist = []
    print(len(mylist))
    for items in range(len(mylist)):
        #print(mylist[items].split())
        splitlist.append(mylist[items].split())
    print(splitlist)
    #print(splitlist[1][1])
    #print(splitlist[1][2])
    highest = "z"
    print(highest)
    sortingLength = len(splitlist)
    for i in range(10):
        for items in range(len(splitlist)-2):
            if highest > splitlist[items][2]:
                istrue = highest < splitlist[items][2]
                highest = splitlist[items][1]
                print(items)
                print(istrue)
                print('marker')
                print(splitlist[items][2])
            if items == (len(splitlist)-2):
                print("End of list", splitlist[items][2])
    print(highest)
    print(splitlist.index(highest))
    print(splitlist[len(splitlist)-1][2])
    print(sortingLength)

fPath = 'C:/Temp'
fileName = 'payroll1.txt'
fullFileName = os.path.join(fPath, fileName)
fileName2 = 'payroll2.txt'
fullFileName2 = os.path.join(fPath, fileName2)
f = open(fullFileName, 'r')
f2 = open(fullFileName2, 'r')

employeeList = Extraction(f, f2)  # pulling out each line in the file and placing into a list
Sort(employeeList)

ReportName = "List of Employees:"
marker = '-' * len(ReportName)
print(ReportName + ' \n' + marker)
total = 0
f.close()
I am having trouble with the final step: once I have the highest value, appending it to sortedlist, removing it from splitlist, and re-running the loop.
Using the built-in sorted method is much easier, per Joran's suggestion. I've edited your reading method so that it builds two lists of tuples, holding the line number and the length of each line, and merges them. sorted will return a list ordered by the key (line length) in descending order (reverse=True).
import os
from operator import itemgetter

class Employee:
    def __init__(self, EmployeeID, name, wage, days):
        self.EmployeeID = EmployeeID
        self.name = name
        self.wage = wage
        self.days = days

def Extraction(file, file2):
    employList = []
    # (line_number, line_length, source_file) for every line in each file
    mylines = [(i, len(l.strip()), 'file1') for i, l in enumerate(file.readlines())]
    mylines2 = [(i, len(l.strip()), 'file2') for i, l in enumerate(file2.readlines())]
    employList = [*mylines, *mylines2]
    return employList

fPath = 'C:/Temp'
fileName = 'payroll1.txt'
fullFileName = os.path.join(fPath, fileName)
fileName2 = 'payroll2.txt'
fullFileName2 = os.path.join(fPath, fileName2)
f = open(fullFileName, 'r')
f2 = open(fullFileName2, 'r')

employeeList = Extraction(f, f2)  # pulling out the line number and length of each line into a list
f.close()
f2.close()

# itemgetter(1) sorts on the second element of the tuple, len(line),
# and reverse=True puts it in descending order
ReportName = sorted(employeeList, key=itemgetter(1), reverse=True)
EDIT: I've added markers in the tuples so that you can keep track of which lines came from which file. It might be a bit confusing without them.
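If the end goal is to sort the employee records themselves rather than line lengths, here is a minimal sketch along the same lines; the whitespace-separated field order "EmployeeID name wage days" is an assumption, so adjust it to match your payroll files:

from operator import attrgetter

def read_employees(path):
    # Assumed line format: "EmployeeID name wage days", whitespace separated.
    employees = []
    with open(path) as fh:
        for line in fh:
            if line.strip():
                emp_id, name, wage, days = line.split()
                employees.append(Employee(emp_id, name, float(wage), int(days)))
    return employees

records = read_employees(fullFileName) + read_employees(fullFileName2)
by_name = sorted(records, key=attrgetter('name'))                  # alphabetical by name
by_wage = sorted(records, key=attrgetter('wage'), reverse=True)    # highest wage first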
I want to create a dictionary in Python using a for loop, where each key ('CUI' in my case) is associated with a list of values, but the output I obtain is a dictionary where each key yields just one of the values in my list. Here is my code:
import numpy as np

data2 = open('pathways.dat', 'r', errors = 'ignore')
pathways = data2.readlines()

special_line_indexes = []
PWY_ID = []
line_cont = []
L_PRMR = []  # Left primary
dict_comp = dict()

# i is the line number (first element of enumerate), while line is the line content (2nd elem of enumerate)
for CUI in just_compound_id:
    for i, line in enumerate(pathways):
        if '//' in line:
            # find the indexes of the lines containing //
            special_line_indexes = i + 1
        elif 'REACTION-LAYOUT -' in line:
            if CUI in line:
                PWY_ID.append(special_line_indexes)
                dict_comp[CUI] = special_line_indexes
print(PWY_ID)
You need to move the dictionary assignment out of the inner for loop and assign it the whole PWY_ID list:
import numpy as np

data2 = open('pathways.dat', 'r', errors = 'ignore')
pathways = data2.readlines()

special_line_indexes = []
line_cont = []
L_PRMR = []  # Left primary
dict_comp = dict()

# i is the line number (first element of enumerate), while line is the line content (2nd elem of enumerate)
for CUI in just_compound_id:
    PWY_ID = []
    for i, line in enumerate(pathways):
        if '//' in line:
            # find the indexes of the lines containing //
            special_line_indexes = i + 1
        elif 'REACTION-LAYOUT -' in line:
            if CUI in line:
                PWY_ID.append(special_line_indexes)
    dict_comp[CUI] = PWY_ID
    print(PWY_ID)

print(dict_comp)
EDIT
The reason is that you were overwriting the dictionary entry for CUI every time with a single value (special_line_indexes) instead of a list of values. What you need is to build the list inside the inner for loop (with PWY_ID.append), adding one element on each iteration, and once that loop is finished, assign the whole list to the dictionary (dict_comp[CUI] = PWY_ID).
The PWY_ID = [] gives you a fresh, empty list before the inner for loop on each pass of the outer loop.
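An equivalent, slightly more compact sketch using collections.defaultdict (just_compound_id and pathways are assumed to exist exactly as in your code):

from collections import defaultdict

dict_comp = defaultdict(list)
for CUI in just_compound_id:
    special_line_indexes = 0
    for i, line in enumerate(pathways):
        if '//' in line:
            special_line_indexes = i + 1
        elif 'REACTION-LAYOUT -' in line and CUI in line:
            # append to this CUI's list instead of overwriting it
            dict_comp[CUI].append(special_line_indexes)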