I have a data structure Line whose outline is:
class Line:
    x1
    y1
    x2
    y2
    m
    c
    id
    # other functions pertaining to the class
In the main loop I have a list of lines which is already populated at this point. What I want to do is consolidate lines whose m and c values are very close, so that I get a single line instead of the multiple lines the detection produces:
for line1 in allLines:
    consolidateLines = []
    for line2 in allLines:
        if line1.id() == line2.id():
            continue
        if abs(line1.m() - line2.m()) < someValue:
            if abs(line1.c() - line2.c()) < someOtherValue:
                consolidateLines.append(line2)
    consolidateLines.append(line1)
    # I want to remove all the lines in consolidateLines.
    # But since I am already iterating over the list, that is a problem.
    # How do I accomplish this?
Explaining the problem:
I have a list of lines. Since these lines are detected using a computer vision algorithm (Hough transforms), some of them are very close to each other, which is not ideal. So I am trying to consolidate all the lines that are very close and have a similar orientation. If a line is represented by y = mx + c, I'm trying to:
consolidate all lines within the list with nearly the same values of m and c (there may be, say, 5 lines which are close by) and get one line for those,
remove all the consolidated lines,
add the new line that I get to the list.
To remove duplicates from a list you basically need to compare every element with every other element in the list. In order not to compare pairs twice, start the second loop at the position of the first loop + 1.
The following code does that, and if it finds a duplicate it skips the first of the two values (via the break statement):
consolidateLines = []
for i, line1 in enumerate(allLines):
    for line2 in allLines[i+1:]:
        if (abs(line1.m() - line2.m()) < someValue and
                abs(line1.c() - line2.c()) < someOtherValue):
            break  # found a duplicate later in the list, skip this occurrence
    else:
        # no duplicate found -> add to list
        consolidateLines.append(line1)
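Note that this only filters out near-duplicates; the question also asks for one merged line per group. Below is a minimal sketch of that grouping step, assuming Line exposes m() and c() as above. The make_line factory that rebuilds a Line from an averaged slope and intercept is hypothetical, since constructing endpoints from m and c depends on the rest of the class:
def consolidate(allLines, mTol, cTol):
    groups = []
    for line in allLines:
        for group in groups:
            ref = group[0]
            # close enough in slope and intercept -> same group
            if (abs(line.m() - ref.m()) < mTol and
                    abs(line.c() - ref.c()) < cTol):
                group.append(line)
                break
        else:
            groups.append([line])  # no close group found: start a new one
    result = []
    for group in groups:
        avg_m = sum(l.m() for l in group) / len(group)
        avg_c = sum(l.c() for l in group) / len(group)
        result.append(make_line(avg_m, avg_c))  # hypothetical factory
    return result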
I have this code that works great and does what I want; however, it runs in linear time, which is way too slow for the size of my data files, so I want to convert it to a logarithmic (binary) search. I tried the code below and many others posted here, but still no luck getting it to work. I will post both sets of code and give examples of what I expect.
import fileinput

'''This code runs fine and does what I expect, removing duplicates from the big
file that are in the small file. However, it is a linear scan.'''
with open('small.txt') as fin:
    exclude = set(line.rstrip() for line in fin)
for line in fileinput.input('big.txt', inplace=True):
    if line.rstrip() not in exclude:
        print(line, end='')
    else:
        print('')
'''This code is my attempt at conversion to a log function.'''
def log_search(small, big):
    first = 0
    last = len(big.txt) - 1
    while first <= last:
        mid = (first + last) / 2
        if str(mid) == small.txt:
            return True
        elif small.txt < str(mid):
            last = mid - 1
        else:
            first = mid + 1
    with open('small.txt') as fin:
        exclude = set(line.rstrip() for line in fin)
    for line in fileinput.input('big.txt', inplace=True):
        if line.rstrip() not in exclude:
            print(line, end='')
        else:
            print('')
    return log_search(small, big)
The big file has millions of lines of int data; the small file has hundreds of lines of int data. I want to compare the data and remove duplicated data from the big file, but leave the line blank where a number is removed.
Running the first block of code works, but it takes too long to search through the big file. Maybe I am approaching the problem in the wrong way. My attempt at converting it to a log search runs without error but does nothing.
I don't think there is a better or faster way to do this than what you are currently doing in your first approach. (Update: There is, see below.) Storing the lines from small.txt in a set and iterating over the lines in big.txt, checking whether they are in that set, will have complexity O(b), with b being the number of lines in big.txt.
What you seem to be trying to do is reduce this to O(s*log b), with s being the number of lines in small.txt, by using binary search to check for each line of small.txt whether it is in big.txt, and removing/overwriting it then.
This would work well if you had all the lines in a list with random access, but you have just the file, which does not allow random access to any line. It does, however, allow random access to any character with file.seek, which (at least in some cases?) seems to be O(1). But then you still have to find the previous line break before that position before you can actually read the line. Also, you cannot just replace lines with empty lines; you have to overwrite the number with the same number of characters, e.g. spaces.
So, yes, theoretically it can be done in O(s*log b), if you do the following:
implement binary search, searching not on the lines, but on the characters of the big file
for each position, backtrack to the last line break, then read the line to get the number
try again in the lower/upper half as usual with binary search
if the number is found, replace with as many spaces as there are digits in the number
repeat with the next number from the small file
On my system, reading and writing a file with 10 million lines of numbers only took 3 seconds each, or about 8 seconds with fileinput.input and print. Thus, IMHO, this is not really worth the effort, but of course this may depend on how often you have to do this operation.
Okay, so I got curious myself --and who needs a lunch break anyway?-- so I tried to implement this... and it works surprisingly well. This will find the given number in the file and replace it with a matching number of - characters (not just a blank line, which is impossible without rewriting the entire file). Note that I did not thoroughly test the binary-search algorithm for edge cases, off-by-one errors, etc.
import os

def getlineat(f, pos):
    # seek to pos, then step backwards to just after the previous line break
    pos = f.seek(pos)
    while pos > 0 and f.read(1) != "\n":
        pos = f.seek(pos-1)
    return pos+1 if pos > 0 else 0

def bsearch(f, num):
    # binary search over byte positions, snapping each probe to the start of its line
    lower = 0
    upper = os.stat(f.name).st_size - 1
    while lower <= upper:
        mid = (lower + upper) // 2
        pos = getlineat(f, mid)
        line = f.readline()
        if not line:
            break  # end of file
        val = int(line)
        if val == num:
            return (pos, len(line.strip()))
        elif num < val:
            upper = mid - 1
        elif num > val:
            lower = mid + 1
    return (-1, -1)

def overwrite(filename, to_remove):
    with open(filename, "r+") as f:
        # find all positions first, then overwrite in a second pass
        positions = [bsearch(f, n) for n in to_remove]
        for n, (pos, length) in sorted(zip(to_remove, positions)):
            print(n, pos)
            if pos != -1:
                f.seek(pos)
                f.write("-" * length)

import random
to_remove = [random.randint(-500, 1500) for _ in range(10)]
overwrite("test.txt", to_remove)
This will first collect all the positions to be overwritten, and then do the actual overwriting in a second step; otherwise the binary search will have problems when it hits one of the previously "removed" lines. I tested this with a file holding all the numbers from 0 to 1,000 in sorted order and a list of random numbers (both in- and out-of-bounds) to be removed, and it worked just fine.
Update: Also tested it with a file of random numbers from 0 to 100,000,000 in sorted order (944 MB) and overwriting 100 random numbers, and it finished immediately, so this should indeed be O(s*log b), at least on my system (the complexity of file.seek may depend on file system, file type, etc.).
The bsearch function could also be generalized to accept another parameter value_function instead of hardcoding val = int(line). Then it could be used for binary-searching arbitrary files, e.g. huge dictionaries, gene databases, CSV files, etc., as long as the lines are sorted by that same value function.
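For illustration, a minimal sketch of that generalization, reusing getlineat from above; the value_function parameter and the example lambda are assumptions, not part of the tested code:
def bsearch(f, key, value_function=int):
    # Same search as above, but the line-to-value conversion is pluggable.
    # The file must be sorted by whatever value_function extracts.
    lower = 0
    upper = os.stat(f.name).st_size - 1
    while lower <= upper:
        mid = (lower + upper) // 2
        pos = getlineat(f, mid)
        line = f.readline()
        if not line:
            break  # end of file
        val = value_function(line)
        if val == key:
            return (pos, len(line.strip()))
        elif key < val:
            upper = mid - 1
        else:
            lower = mid + 1
    return (-1, -1)

# e.g. for a tab-separated file sorted by its first column:
# bsearch(f, "some_key", value_function=lambda line: line.split("\t")[0])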
So I have a file with
first name(space)last name(tab)grade, as such.
Example:
Wanda Barber 96
I'm having trouble reading this in as a list and then editing the number.
My current code is:
def TopStudents(n):
    original = open(n)
    contents = original.readlines
    x = contents.split('/t')
    for y in x[::2]
        y - 100
        if y > 0: (????)
Here is the point where I'm confused. I am just trying to get the first and last names of students who scored over 100%. I thought of creating a new list for students that meet this qualification, but I'm not sure how I would write out the corresponding first and last names. I know I need to take a stride of every other location in the list, as the odd positions will always hold the first and last names. Thank you in advance for the help!
There are several things wrong with your code:
- The open file must be closed (#1)
- readlines must be made a function call, using () to call it (#2)
- The split uses the forward slash ('/t') instead of the backslash escape ('\t') (#3)
- The way you loop through the list is not optimal if you want to access all the members (#4)
- for loops must end in a : (#5)
- You must store the result of the calculation somewhere (#6)
def TopStudents(n):
    original = open(n) #1
    contents = original.readlines #2
    x = contents.split('/t') #3
    for y in x[::2] #4, #5
        y - 100 #6
        if y > 0:
That said, a fixed version could be:
original = open(n, 'r')
for line in original:
    name, score = line.split('\t')
    # If needed, you could split the name into first and last name:
    # first_name, last_name = name.split(' ')
    # 'score' is a string; we must convert it to an int before comparing, so...
    score = int(score)
    if score > 100:
        print("The student " + name + " has the score " + str(score))
original.close() #1 - Closed the file
Note: I have focused on readability, with several comments to help you understand the code.
I always prefer to use 'with open()' because it closes the file automatically. I used a txt file with comma separations for simplicity, but you can just replace the comma with \t.
def TopStudents():
    with open('temp.txt', 'r') as original:
        contents = list(filter(None, (line.strip() for line in original)))
        x = list(part.split(',') for part in contents)
        for y in x:
            if int(y[1]) > 100:
                print(y[0], y[1])

TopStudents()
This opens the file and loads all non-blank lines into contents as a list, stripping line breaks. Then it splits each line, producing a list of lists.
You then iterate through each list in x, looking at the second value (y[1]), which is the grade. If its int() is greater than 100, print each segment of y.
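And since the question's file is tab-separated, here is the same approach with '\t' swapped in; this is just a sketch, and the filename grades.txt is assumed:
def TopStudents():
    with open('grades.txt', 'r') as original:
        # drop blank lines and line breaks, then split each row on the tab
        contents = list(filter(None, (line.strip() for line in original)))
        rows = list(part.split('\t') for part in contents)
        for name, score in rows:
            if int(score) > 100:
                print(name, score)

TopStudents()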
EDIT: My question was answered on reddit. Here is the link if anyone is interested in the answer to this problem https://www.reddit.com/r/learnpython/comments/42ibhg/how_to_match_fields_from_two_lists_and_further/
I am attempting to get the pos and alt strings from file1 to match up with what is in file2, fairly simple. However, file2 has values from the 17th split element/column to the last element/column (the 340th) which contain strings such as 1/1:1.2.2:51:12, and I also want to filter on these.
I want to extract the rows from file2 that contain/match the pos and alt from file1. Thereafter, I want to further filter the matched results to only those that contain certain values in the 17th split element/column onwards. But to do so, the values have to be split by ":" so I can filter for split[0] == "1/1" and split[2] > 50. The problem is I have no idea how to do this.
I imagine I will have to iterate over these and split, but I am not sure how, as the code is presently in a loop and the values I want to filter are in columns, not rows.
Any advice would be greatly appreciated; I have sat with this problem since Friday and have yet to find a solution.
import itertools

file1 = open("file1.txt", "r")
file2 = open("file2.txt", "r")
matched = []

for x, y in itertools.product(file2, file1):
    if not x.startswith("#"):
        cells_y = y.split("\t")
        pos_y = cells_y[0]
        alt_y = cells_y[3]
        cells_x = x.split("\t")
        pos_x = cells_x[0]+":"+cells_x[1]
        alt_x = cells_x[4]
        if pos_y in pos_x and alt_y in alt_x:
            matched.append(x)

for z in matched:
    cells_z = z.split("\t")
    if cells_z[16:len(cells_z)]:
Your requirement is not clear, but you might mean this:
for x, y in itertools.product(file2, file1):
    if x.startswith("#"):
        continue
    cells_y = y.split("\t")
    pos_y = cells_y[0]
    alt_y = cells_y[3]
    cells_x = x.split("\t")
    pos_x = cells_x[0]+":"+cells_x[1]
    alt_x = cells_x[4]
    if pos_y != pos_x: continue
    if alt_y != alt_x: continue
    extra_match = False
    for f in range(16, len(cells_x)):  # the 17th through 340th columns are indexes 16..339
        x_extra = cells_x[f].split(':')  # these columns live in the file2 row (x)
        if len(x_extra) < 3: continue
        if x_extra[0] != '1/1': continue
        if int(x_extra[2]) <= 50: continue
        extra_match = True
        break
    if not extra_match: continue
    xy = x + y
    matched.append(xy)
I chose to concatenate x and y into the matched list, since I wasn't sure whether or not you would want all the data. If not, feel free to go back to appending just x or y.
You may want to look into the csv library, which can use tab as a delimiter. You can also use a generator and/or guards to make the code a bit more pythonic and efficient. I think your approach with indexes works pretty well, but it would be easy to break when you try to modify it down the road, or when your file lines change shape. You may wish to create objects (I use namedtuples in the last part) to represent your lines and make them much easier to read/refine down the road.
Lastly, remember that Python short-circuits the conditions of an 'if',
for example:
if x_evaluation and y_evaluation:
    do some stuff
When x_evaluation returns False, Python skips y_evaluation entirely. In your code, cells_x[0]+":"+cells_x[1] is evaluated every single time you iterate the loop. Instead of storing this value, I wait until the easier alt comparison evaluates to True before doing this (comparatively) heavier/uglier check.
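Here is a small self-contained illustration of that short-circuit behavior; the function names are invented for the example:
def cheap():
    print("cheap check")
    return False

def expensive():
    print("expensive check")  # never printed below
    return True

# 'and' stops at the first False, so expensive() is never called here
if cheap() and expensive():
    print("both passed")
# Output: only "cheap check"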
import csv

def filter_matching_alt_and_pos(first_file, second_file):
    for x in csv.reader(open(first_file, 'rb'), delimiter='\t'):
        for y in csv.reader(open(second_file, 'rb'), delimiter='\t'):
            # continue will skip the rest of this loop and go to the next value for y
            # this way, we can abort as soon as one value isn't what we want
            # .. todo:: we could make a filter function and even use the filter() built-in depending on needs!
            if x[3] == y[4] and x[0] == ":".join(y[:2]):
                yield x

def match_datestamp_and_alt_and_pos(first_file, second_file):
    for z in filter_matching_alt_and_pos(first_file, second_file):
        for element in z[16:]:
            # I am not sure I fully understood your filter needs for the 2nd half.
            # Here, I split each element from the 17th onward and look for the two
            # cases you mentioned. This seems like it might be heavy, but at least
            # we're using generators!
            parts = element.split(":")
            # WARNING: if you aren't 100% sure the third part is an int, the
            # int() call below is dangerous
            # same idea as before: abort as early as possible; the continue keyword
            # and the negative checks help eliminate excess overhead, and the
            # lighter check comes before the heavier one
            if len(parts) < 3:
                continue
            if parts[0] != "1/1":
                continue
            if not int(parts[2]) > 50:
                continue
            yield z
            break  # one matching element is enough for this row

if __name__ == '__main__':
    first_file = "first.txt"
    second_file = "second.txt"
    # match_datestamp_and_alt_and_pos returns a generator; loop through it
    # to get the lines which matched all the conditions
    for row in match_datestamp_and_alt_and_pos(first_file=first_file, second_file=second_file):
        print(row)
namedtuples for the first part:
from collections import namedtuple

FirstFileElement = namedtuple("FirstFileElement", "pos unused1 unused2 alt")
SecondFileElement = namedtuple("SecondFileElement", "pos1 pos2 unused2 unused3 alt")

def filter_matching_alt_and_pos(first_file, second_file):
    for x in csv.reader(open(first_file, 'rb'), delimiter='\t'):
        for y in csv.reader(open(second_file, 'rb'), delimiter='\t'):
            # continue will skip the rest of this loop and go to the next value for y
            # this way, we can abort as soon as one value isn't what we want
            # .. todo:: we could make a filter function and even use the filter() built-in depending on needs!
            # slice the rows so the namedtuple field counts match
            x_element = FirstFileElement(*x[:4])
            y_element = SecondFileElement(*y[:5])
            if x_element.alt == y_element.alt and x_element.pos == ":".join([y_element.pos1, y_element.pos2]):
                yield x
I'm learning Python and for practicing purposes I'm writing a script that reads a file (containing a graph in Trivial Graph Format) and runs a couple of graph algorithms on the graph.
I thought about storing the graph in a list of n dictionaries, where n is the number of vertexes and all the edges of a vertex would be stored in a dictionary.
I tried this
edges = [{} for i in xrange(num_vertexes)]
for line in file:
    args = line.split(' ')
    vertex1 = int(args[0])
    vertex2 = int(args[1])
    label = int(args[2])
    edges[vertex1][vertex2] = label
but I'm getting this error for the last line:
IndexError: list index out of range
It looks like vertex1 is probably running past the end of the list. Given that Python indexes from 0 and the example on the format's wiki numbers vertexes from 1, the last line's vertex number is probably 1 higher than the largest valid list index (I'd need to see the file to know for sure, of course). In Python, lst[0] is the first element and lst[n-1] is the last, whereas for the vertexes 1 is the first and n is the last.
So the fix here is to use vertex1 = int(args[0]) - 1, and likewise vertex2 = int(args[1]) - 1.
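Applied to the code from the question, the fix looks like this, assuming the file really does number vertexes from 1:
edges = [{} for i in xrange(num_vertexes)]
for line in file:
    args = line.split(' ')
    vertex1 = int(args[0]) - 1  # TGF vertex IDs start at 1, list indexes at 0
    vertex2 = int(args[1]) - 1
    label = int(args[2])
    edges[vertex1][vertex2] = label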
The issue is somewhere in your data; add some validation to make sure your code doesn't choke on bad input. Currently your code will fail if a line contains non-numbers, fewer than three numbers, or if vertex1 >= len(edges).
edges = [{} for i in xrange(num_vertexes)]
for line in file:
    args = line.split(' ')
    if len(args) >= 3:
        try:
            vertex1 = int(args[0])
            vertex2 = int(args[1])
            label = int(args[2])
            if vertex1 < len(edges):
                edges[vertex1][vertex2] = label
            else:
                # value for vertex1 is too large
                pass
        except ValueError:
            # you got some non-number data
            pass
    else:
        # you got a line with not enough data
        pass
Replace any of those pass statements with logging if needed (you can also remove the two else blocks if you don't intend to use them).
I'm making a for loop within a for loop. I'm looping through a list and finding a specific string that matches a regular-expression pattern. Once I find that line, I need to search onward for the next line matching a certain pattern. I need to store both lines so I can parse out the time from each of them. I've created a counter to keep track of the index number of the list as the outer for loop runs. Can I use a construction like this to find the second line I need?
index = 0
for lineString in summaryList:
    match10secExp = re.search('taking 10 sec. exposure', lineString)
    if match10secExp:
        startPlate = lineString
        for line in summaryList[index:index+10]:
            matchExposure = re.search('taking \d\d\d sec. exposure', lineString)
            if matchExposure:
                endPlate = line
                break
    index = index + 1
The code runs, but I'm not getting the result I'm looking for.
Thanks.
matchExposure = re.search('taking \d\d\d sec. exposure', lineString)
should probably be
matchExposure = re.search('taking \d\d\d sec. exposure', line)
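For what it's worth, here is a sketch of the corrected loop; replacing the manual counter with enumerate and starting the inner slice at index+1 are my adjustments, not part of the original question:
import re

# enumerate tracks the position, so the manual index counter goes away
for index, lineString in enumerate(summaryList):
    match10secExp = re.search(r'taking 10 sec\. exposure', lineString)
    if match10secExp:
        startPlate = lineString
        # look only at the next 10 lines, as in the original code
        for line in summaryList[index+1:index+11]:
            matchExposure = re.search(r'taking \d\d\d sec\. exposure', line)
            if matchExposure:
                endPlate = line
                break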
Depending on your exact needs, you can just use an iterator on the list, or two of them as made by itertools.tee. I.e., if you want to search the lines following the first pattern only for the second pattern, a single iterator will do:
theiter = iter(thelist)
for aline in theiter:
    if re.search(somestart, aline):
        for another in theiter:
            if re.search(someend, another):
                yield aline, another # or print, whatever
                break
This will not search lines from aline to the ending another for somestart, only for someend. If you need to search them for both purposes, i.e., leave theiter itself intact for the outer loop, that's where tee can help:
for aline in theiter:
    if re.search(somestart, aline):
        _, anotheriter = itertools.tee(theiter)
        for another in anotheriter:
            if re.search(someend, another):
                yield aline, another # or print, whatever
                break
This is an exception to the general rule about tee which the docs give:
Once tee() has made a split, the original iterable should not be used anywhere else; otherwise, the iterable could get advanced without the tee objects being informed.
because the advancing of theiter and that of anotheriter occur in disjoint parts of the code, and anotheriter is always rebuilt afresh when needed (so the advancement of theiter in the meantime is not relevant).
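To make the single-iterator version concrete, here is a small self-contained demo; the sample lines and patterns are invented for illustration:
import re

def find_pairs(thelist, somestart, someend):
    # the single-iterator version from above, wrapped in a generator function
    theiter = iter(thelist)
    for aline in theiter:
        if re.search(somestart, aline):
            for another in theiter:
                if re.search(someend, another):
                    yield aline, another
                    break

lines = [
    "12:00:01 taking 10 sec. exposure",
    "12:00:05 some chatter",
    "12:00:09 taking 300 sec. exposure",
]
for start, end in find_pairs(lines, r'taking 10 sec\. exposure',
                             r'taking \d\d\d sec\. exposure'):
    print(start, "->", end)
# prints: 12:00:01 taking 10 sec. exposure -> 12:00:09 taking 300 sec. exposure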