Merging arrays containing the same values - python

I need to find the groups of highly correlated variables in a correlation coefficient matrix, keep one member of each group and exclude the others. But I don't know how to do this gracefully and efficiently.
Here is a similar question, but I am hoping this can be done with vectorized matrix operations:
Merge arrays if they contain one or more of the same value
For example:
a = np.array([[1,0,0,0,0,1],
              [0,1,0,1,0,0],
              [0,0,1,0,1,1],
              [0,1,0,1,0,0],
              [0,0,1,0,1,0],
              [1,0,1,0,0,1]])
Diagonal:
(0,0),(1,1),(2,2)...(5,5)
Other:
(0,5),(1,3),(2,4),(2,5)
These three pairs are merged into one group because they share common indices:
(0,2,4,5) = (0,5),(2,4),(2,5)
So ultimately I need the output:
(I will use the results to index other data, and I have decided to keep the largest index in each group.)
out = [(0,2,4,5),(1,3)]
I think the simplest approach is a nested loop that iterates through all the elements multiple times, but I would like a more concise and efficient way to achieve this. Thank you.
This is my loop implementation; I'm sorry that it is hard to read:
a = np.array([[1,0,0,0,0,1],
              [0,1,0,1,0,0],
              [0,0,1,0,1,1],
              [0,1,0,1,0,0],
              [0,0,1,0,1,0],
              [1,0,1,0,0,1]])
a[np.tril_indices(6, -1)] = 0
a[np.diag_indices(6)] = 0
g = list(np.c_[np.where(a)])
p = {}; index = 1
while len(g) > 0:
    x = g.pop(0)
    if not p:
        p[index] = list(x)
        for i, l in enumerate(g):
            if np.in1d(l, x[0]).any() | np.in1d(l, x[1]).any():
                n = list(g.pop(i))
                p[index].extend(n)
    else:
        T = False
        for key, v in p.items():
            if np.in1d(v, x[0]).any() | np.in1d(v, x[1]).any():
                v.extend(list(x))
                T = True
        if T == False:
            index += 1; p[index] = list(x)
            for i, l in enumerate(g):
                if np.in1d(l, x[0]).any() | np.in1d(l, x[1]).any():
                    n = list(g.pop(i))
                    p[index].extend(n)

for key, v in p.items():
    print key, np.unique(v)
out:
1 [0 2 4 5]
2 [1 3]

The central problem of merging / consolidating the pairs with common extrema can be solved using this answer.
Hence, the above code may be rewritten as:
a = np.array([[1,0,0,0,0,1],
              [0,1,0,1,0,0],
              [0,0,1,0,1,1],
              [0,1,0,1,0,0],
              [0,0,1,0,1,0],
              [1,0,1,0,0,1]])
a[np.tril_indices(6, -1)] = 0
a[np.diag_indices(6)] = 0
g = np.c_[np.where(a)].tolist()

def consolidate(items):
    items = [set(item.copy()) for item in items]
    for i, x in enumerate(items):
        for j, y in enumerate(items[i + 1:]):
            if x & y:
                items[i + j + 1] = x | y
                items[i] = None
    return [sorted(x) for x in items if x]

p = {i + 1: x for i, x in enumerate(sorted(consolidate(g)))}
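For reference (my addition, not part of the original answer), inspecting the consolidated groups and keeping the largest index in each group, as mentioned in the question, could then look like this:

print(p)                                     # {1: [0, 2, 4, 5], 2: [1, 3]}
keep = [max(group) for group in p.values()]  # keep only the largest index per group
print(keep)                                  # [5, 3]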


Is it possible to store matrices in a list or as iterable elements in Python?

My Python code generates some matrices (one at a time) through a loop over some index called i.
Storing matrices with names like mat_0, mat_1, ..., mat_i is straightforward, but I was wondering if it is somehow possible to store the matrices as iterable elements like mat[0], mat[1], ..., mat[i]?
Note: The matrices are stored in scipy sparse coo_matrix format.
Edit 1 : The index i does not necessarily follow a proper sequence and may loop over some random numbers like 0,2,3,7,... In that case the matrices have to be stored as mat[0], mat[2], mat[3], mat[7],... and so on.
Edit 2: Minimal working code
import numpy as np
from math import sqrt
from scipy.sparse import coo_matrix, csr_matrix
primesqt = np.array([1.41421356, 1.73205080, 2.23606797, 2.64575131, 3.31662479, 3.60555127, 4.12310562, 4.35889894, 4.79583152, 5.38516480, 5.56776436, 6.08276253, 6.40312423, 6.55743852, 6.85565460, 7.28010988, 7.68114574, 7.81024967, 8.18535277, 8.42614977, 8.54400374, 8.88819441, 9.11043357, 9.43398113, 9.84885780, 10.04987562, 10.14889156, 10.34408043, 10.44030650, 10.63014581, 11.26942766, 11.44552314, 11.70469991, 11.78982612, 12.20655561, 12.28820572, 12.52996408, 12.76714533, 12.92284798, 13.15294643, 13.37908816, 13.45362404, 13.82027496, 13.89244398, 14.03566884, 14.10673597, 14.52583904, 14.93318452, 15.06651917, 15.13274595])
def bg(n, k, min_elem, max_elem):
    allowed = range(max_elem, min_elem-1, -1)
    def helper(n, k, t):
        if k == 0:
            if n == 0:
                yield t
        elif k == 1:
            if n in allowed:
                yield t + (n,)
        elif min_elem * k <= n <= max_elem * k:
            for v in allowed:
                yield from helper(n - v, k - 1, t + (v,))
    return helper(n, k, ())

def BinarySearch(lys, val):
    first = 0
    last = len(lys)-1
    index = -1
    while (first <= last) and (index == -1):
        mid = (first+last)//2
        if lys[mid] == val:
            index = mid
        else:
            if val < lys[mid]:
                last = mid - 1
            else:
                first = mid + 1
    return index

m = 4
dim = 16
nmax = 1
a = []
for n in range(0, (nmax*m)+1):
    for x in bg(n, m, 0, nmax):
        a.append(x)

T = np.zeros(dim)
for ii in range(dim):
    for jj in range(m):
        T[ii] += primesqt[jj]*float(a[ii][jj])
ind = np.argsort(T)
T = sorted(T)

all_bs = [0, 2, 3, 7] # i_list
# Evaluate 'mat_ee' for each 'ee' given in the list 'all_bs'
for ee in all_bs:
    row = []
    col = []
    val = []
    for ii in range(m):
        for vv in range(dim):
            Tg = 0
            if a[vv][ii]+1 < nmax+1:
                k = np.copy(a[vv])
                elem = sqrt(float(k[ii]+1.0)) + ee
                k[ii] = k[ii]+1
                # Generate tag Tg for elem != 0
                for jj in range(m):
                    Tg += float((primesqt[jj])*k[jj])
                # Search location of non-zero element in sorted T
                location = BinarySearch(T, Tg)
                uu = ind[location]
                row.append(uu)
                col.append(vv)
                val.append(elem)
    mat_ee = (coo_matrix((val, (row, col)), shape=(dim, dim)).tocsr()) # To be stored as mat[0], mat[2], mat[3], mat[7]
    print(mat_ee)
A dictionary would allow you to reference an object using an arbitrary (but immutable) object. In your case, you could store the matrices mat_ee in each iteration of the outer loop (for ee in all_bs:) using that ee index:
csr_matrices = {}

for ee in all_bs:
    # your inner loops, all the way to…
    mat_ee = (coo_matrix((val, (row, col)),
                         shape=(dim, dim))
              .tocsr())

    csr_matrices[ee] = mat_ee
From that moment on, you can access the elements of the dictionary using the indices you had in all_bs:
print(csr_matrices[2])
and when you inspect the dictionary, you’ll notice it only contains the keys you specified:
print(csr_matrices.keys())
You could use a list of your objects.
items_list = []
for ee in all_bs:
    # ... inner loops building mat_ee, as above ...
    items_list.append(mat_ee)
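One caveat (my note, not part of the original answers): with a plain list, the position in the list will not match the ee values from Edit 1 (0, 2, 3, 7). A small sketch of keeping them paired, reusing the names from the question:

indexed_mats = []
for ee in all_bs:
    # ... inner loops building mat_ee, as in the question ...
    indexed_mats.append((ee, mat_ee))

mat_by_ee = dict(indexed_mats)  # turn into a dict when you need lookup by ee
print(mat_by_ee[2])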

How to sum adjacent tuples/lists

I apologise for the terrible description, and if this is a duplicate; I have no idea how to phrase this question. Let me explain what I am trying to do. I have a list consisting of 0s and 1s that is 3600 elements long (1 hour of time-series data). I used itertools.groupby() to get a list of consecutive (key, count) pairs. I need (0,1) to be counted as (1,1) and summed with the flanking tuples.
so
[(1,8),(0,9),(1,5),(0,1),(1,3),(0,3)]
becomes
[(1,8),(0,9),(1,5),(1,1),(1,3),(0,3)]
which should become
[(1,8),(0,9),(1,9),(0,3)]
Right now, what I have is:
def counter(file):
    list1 = list(dict[file]) # make a list of the data currently working on
    graph = dict.fromkeys(list(range(0, 3601))) # make a graphing dict, x = key, y = value
    for i in list(range(0, 3601)):
        graph[i] = 0 # set all the values/y from None to 0
    for i in list1:
        graph[i] += 1 # populate the values in the graphing dict
    x, y = zip(*graph.items()) # unpack graphing dict into lists, x = 0 to 3600 and y = time where it bit
    z = [(x[0], len(list(x[1]))) for x in itertools.groupby(y)] # make a new list z where consecutive y is in format (value, count)
    z[:] = [list(i) for i in z]
    for i in z[:]:
        if i == [0, 1]:
            i[0] = 1
    return(z)
dict is a dictionary where the keys are filenames and the values are the lists of numbers to be used in the function counter(). This gives me something like the following, but much longer:
[[1,8],[0,9],[1,5], [1,1], [1,3],[0,3]]
Edit:
Solved it with the help of a friend:
while (0,1) in z:
    idx = z.index((0,1))
    if idx == len(z)-1:
        break
    z[idx] = (1, 1 + z[idx-1][1] + z[idx+1][1])
    del z[idx+1]
    del z[idx-1]
I'm not sure exactly what it is that you need, but this is my best attempt at understanding it.
def do_stuff(original_input):
    new_original = []
    new_original.append(original_input[0])
    for el in original_input[1:]:
        if el == (0, 1):
            el = (1, 1)
        if el[0] != new_original[-1][0]:
            new_original.append(el)
        else:
            (a, b) = new_original[-1]
            new_original[-1] = (a, b + el[1])
    return new_original

# check
print(do_stuff([(1,8),(0,9),(1,5),(0,1),(1,3),(0,3)]))
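Since the question already uses itertools.groupby(), here is an alternative sketch (my addition, not from the original answer) that replaces every (0, 1) run with (1, 1) and then re-groups, summing the counts of adjacent runs with the same key:

from itertools import groupby

def merge_after_flip(z):
    # treat every (0, 1) run as a 1-run, then re-group and sum the counts
    flipped = [(1, c) if (k, c) == (0, 1) else (k, c) for k, c in z]
    return [(k, sum(c for _, c in grp))
            for k, grp in groupby(flipped, key=lambda t: t[0])]

print(merge_after_flip([(1,8),(0,9),(1,5),(0,1),(1,3),(0,3)]))
# [(1, 8), (0, 9), (1, 9), (0, 3)]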

How to check if two permutations are symmetric?

Given two permutations A and B of L different elements, with L even, let's call these permutations "symmetric" (for lack of a better term) if there exist n and m, m > n, such that (in Python notation):
- A[n:m] == B[L-m:L-n]
- B[n:m] == A[L-m:L-n]
- all other elements are in place
Informally, consider
A = 0 1 2 3 4 5 6 7
Take any slice of it, for example 1 2. It starts at the second index and its length is 2. Now take the slice symmetric to it: it ends at the penultimate index and is also 2 elements long, so it's 5 6. Swapping these slices gives
B = 0 5 6 3 4 1 2 7
Now, A and B are "symmetric" in the above sense (n=1, m=3). On the other hand
A = 0 1 2 3 4 5 6 7
B = 1 0 2 3 4 5 7 6
are not "symmetric" (no n,m with above properties exist).
How can I write an algorithm in python that finds if two given permutations (=lists) are "symmetric" and if yes, find the n and m? For simplicity, let's consider only even L (because the odd case can be trivially reduced to the even one by eliminating the middle fixed element) and assume correct inputs (set(A)==set(B), len(set(A))==len(A)).
(I have no problem bruteforcing all possible symmetries, but looking for something smarter and faster than that).
Fun fact: the number of symmetric permutations for the given L is a Triangular number.
I use this code to test out your answers.
Bounty update: many excellent answers here. @Jared Goguen's solution appears to be the fastest.
Final timings:
testing 0123456789 L= 10
test_alexis ok in 15.4252s
test_evgeny_kluev_A ok in 30.3875s
test_evgeny_kluev_B ok in 27.1382s
test_evgeny_kluev_C ok in 14.8131s
test_ian ok in 26.8318s
test_jared_goguen ok in 10.0999s
test_jason_herbburn ok in 21.3870s
test_tom_karzes ok in 27.9769s
Here is the working solution for the question:
def isSymmetric(A, B):
    L = len(A) #assumed equal to len(B); handling unequal lengths is as simple as checking len(A) != len(B) and returning []
    la = L//2 # half-list length
    Al = A[:la]
    Ar = A[la:]
    Bl = B[:la]
    Br = B[la:]
    for i in range(la):
        lai = la - i #just to reduce the number of computations we need to perform
        for j in range(1, lai + 1):
            k = lai - j #same here, reduce computation
            if Al[i] != Br[k] or Ar[k] != Bl[i]: #the key to efficient computation is here: do not proceed unnecessarily
                continue
            n = i #written only for the sake of clarity. i is n, and we could use i directly
            m = i + j
            if A[n:m] == B[L-m:L-n] and B[n:m] == A[L-m:L-n]: #possibly symmetric
                if A[0:n] == B[0:n] and A[m:L-m] == B[m:L-m] and A[L-n:] == B[L-n:]:
                    return [n, m]
    return []
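For reference (my addition, not part of the original answer), running it on the two examples from the question:

A = [0, 1, 2, 3, 4, 5, 6, 7]
B = [0, 5, 6, 3, 4, 1, 2, 7]
print(isSymmetric(A, B))                                  # [1, 3]
print(isSymmetric([0,1,2,3,4,5,6,7], [1,0,2,3,4,5,7,6]))  # []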
As you mentioned, although the idea looks simple, it is actually quite tricky. Once we see the patterns, however, the implementation is straightforward.
The central idea of the solution is this single line:
if Al[i] != Br[k] or Ar[k] != Bl[i]: #the key for efficient computation is here: do not proceed unnecessarily
All other lines are either direct code translations of the problem statement or optimizations for more efficient computation.
There are a few steps involved in finding the solution:
First, we need to split both list A and list B into two half-lists (called Al, Ar, Bl, and Br). Each half-list contains half of the members of the original list:
Al = A[:la]
Ar = A[la:]
Bl = B[:la]
Br = B[la:]
Secondly, to make the evaluation efficient, the goal is to find what I would call the pivot index, which decides whether a position in the list (an index) is worth evaluating to check if the lists are symmetric. This pivot index is the central idea for finding an efficient solution, so I will try to elaborate on it quite a bit:
Consider the left half of list A; suppose it has members like this:
Al = [al1, al2, al3, al4, al5, al6]
We can imagine that there is a corresponding index list for the mentioned list like this
Al = [al1, al2, al3, al4, al5, al6]
iAl = [0, 1, 2, 3, 4, 5 ] #corresponding index list, added for explanation purpose
(Note: I mention imagining a corresponding index list purely for ease of explanation.)
Likewise, we can imagine that the other three lists have similar index lists. Let's name them iAr, iBl, and iBr respectively; they all have members identical to iAl.
It is the indexes of the lists that really matter for solving the problem.
Here is what I mean: suppose we have two parameters:
index (let's give a variable name i to it, and I would use symbol ^ for current i)
length (let's give a variable name j to it, and I would use symbol == to visually represent its length value)
for each evaluation of an index element in iAl, each evaluation means:
Given an index value i and a length value j in iAl, do
something to determine whether it is worth checking for the symmetry
qualifications starting from that index and with that length
(hence the name pivot index).
Now, let's take example of one evaluation when i = 0 and j = 1. The evaluation can be illustrated as follow:
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 0
== <-- now this has length (j) of 1
In order for that index i and length j to be worth evaluating further, the counterpart iBr must have the same item value with the same length, but at a different index (let's name it index k):
iBr = [0, 1, 2, 3, 4, 5]
^ <-- must compare the value in this index to what is pointed by iAl
== <-- must evaluate with the same length = 1
For example, for the above case, this is a possible "symmetric" permutation just for the two lists Al-Br (we will consider the other two lists Ar-Bl later):
Al = [0, x, x, x, x, x] #x means don't care for now
Br = [x, x, x, x, x, 0]
At this moment, it is good to note that
it is not worth evaluating further if even the above condition is not
true.
And this is what makes the algorithm more efficient: we selectively evaluate only the few possible cases among all possible cases. And how do we find the few possible cases?
By trying to find the relationship between the indexes and lengths of the
four lists. That is, for a given index i and length j in a
list (say Al), what must be the index k in the counterpart
list (in this case Br)? The length for the counterpart list need not
be found because it is the same as in the first list (that is, j).
Knowing that, let's now proceed further to see if we can find more patterns in the evaluation process.
Consider now the effect of length (j). For example, if we evaluate from index 0 but the length is 2, then the counterpart list needs a different index k evaluated than when the length is 1:
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 0
===== <-- now this has length (j) of 2
iBr = [0, 1, 2, 3, 4, 5]
^ <-- must compare the value in this index to what is pointed by iAl
===== <-- must evaluate with the same length = 2
Or, for the illustration above, what really matters for i = 0 and y = 2 is something like this:
# when i = 0 and y = 2
Al = [0, y, x, x, x, x] #x means don't care for now
Br = [x, x, x, x, 0, y] #y means to be checked later
Notice that the above pattern is a bit different from when i = 0 and y = 1 - the index position of the 0 value in the example is shifted:
# when i = 0 and y = 1, k = 5
Al = [0, x, x, x, x, x] #x means don't care for now
Br = [x, x, x, x, x, 0]
# when i = 0 and y = 2, k = 4
Al = [0, y, x, x, x, x] #x means don't care for now
Br = [x, x, x, x, 0, y] #y means to be checked later
Thus, the length shifts the index of the counterpart list that must be checked. In the first case, when i = 0 and y = 1, then k = 5. But in the second case, when i = 0 and y = 2, then k = 4. Thus we have found the pivot-index relationship when we change the length j for a fixed index i (in this case 0) with respect to the counterpart list index k.
Now, consider the effect of index i with a fixed length j on the counterpart list index k. For example, let's fix the length as y = 4; then for index i = 0, we have:
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 0
========== <-- now this has length (j) of 4
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 1
========== <-- now this has length (j) of 4
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 2
========== <-- now this has length (j) of 4
#And no more needed
In the above example, it can be seen that we need to evaluate 3 possibilities for the given i and j, but if the index i is changed to 1 with the same length j = 4:
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 1
========== <-- now this has length (j) of 4
iAl = [0, 1, 2, 3, 4, 5]
^ <-- now evaluate this index (i) = 2
========== <-- now this has length (j) of 4
Note that we only need to evaluate 2 possibilities. Thus the increase of index i decreases the number of possible cases to be evaluated!
With all the above patterns found, we have almost found all the basis we need to make the algorithm work. But to complete it, we need to find the relationship between the indexes that appear in the Al-Br pair for a given [i, j] => [k, j] and the indexes in the Ar-Bl pair for the same [i, j].
Now, we can actually see that they simply mirror the relationship we found in the Al-Br pair!
(IMHO, this is really beautiful! And thus I think the term "symmetric" permutation is not far from the truth.)
For example, if we have the following Al-Br pair evaluated with i = 0 and y = 2
Al = [0, y, x, x, x, x] #x means don't care for now
Br = [x, x, x, x, 0, y] #y means to be checked later
Then, to make it symmetric, we must have the corresponding Ar-Bl:
Ar = [x, x, x, x, 3, y] #x means don't care for now
Bl = [3, y, x, x, x, x] #y means to be checked later
The indexing of Al-Br pair is mirroring (or, is symmetric to) the indexing of Ar-Bl pair!
Therefore, combining all the patterns we found above, we can now find the pivot indexes for evaluating Al, Ar, Bl, and Br.
We only need to check the values of the lists at the pivot indexes
first. If the values of Al, Ar, Bl, and Br at the pivot indexes
match in the evaluation, then and only then do we need to check
the symmetry criteria (thus making the computation efficient!).
Putting all the knowledge above into code, the following is the resulting for-loop Python code to check for symmetry:
for i in range(len(Al)): #for every index in the list
    lai = la - i #just simplification
    for j in range(1, lai + 1): #get the length from 1 to la - i + 1
        k = lai - j #get the mirror index
        if Al[i] != Br[k] or Ar[k] != Bl[i]: #if the values at the pivot indexes do not match
            continue #skip, no need to evaluate
        #from this point onwards, the values at the pivot indexes match
        n = i #assign n
        m = i + j #assign m
        #test if the first two conditions for symmetry are passed
        if A[n:m] == B[L-m:L-n] and B[n:m] == A[L-m:L-n]: #possibly symmetric
            #if they pass, test the third condition: the rest of the elements must stay in place
            if A[0:n] == B[0:n] and A[m:L-m] == B[m:L-m] and A[L-n:] == B[L-n:]:
                return [n, m] #if all three conditions pass, symmetric lists are found! return [n, m] immediately!
        #reaching this point without returning means that
        #one of the 3 conditions for symmetry failed,
        #even though the values at the pivot indexes match; simply continue
return [] #nothing can be found - asymmetric lists
And there you go with the symmetry test!
(OK, this was quite a challenge, and it took quite a while for me to figure out how.)
I rewrote the code without some of the complexity (and errors).
def test_o_o(a, b):
    L = len(a)
    H = L//2
    n, m = 0, H-1

    # find the first difference in the left-side
    while n < H:
        if a[n] != b[n]: break
        n += 1
    else: return

    # find the last difference in the left-side
    while m > -1:
        if a[m] != b[m]: break
        m -= 1
    else: return

    # for slicing, we want end_index+1
    m += 1

    # compare each slice for equality
    # order: beginning, block 1, block 2, middle, end
    if (a[0:n] == b[0:n] and
            a[n:m] == b[L-m:L-n] and
            b[n:m] == a[L-m:L-n] and
            a[m:L-m] == b[m:L-m] and
            a[L-n:L] == b[L-n:L]):
        return n, m
The implementation is both elegant and efficient.
The break into else: return structures ensure that the function returns at the soonest possible point. They also validate that n and m have been set to valid values, but this does not appear to be necessary when explicitly checking the slices. These lines can be removed with no noticeable impact on the timing.
Explicitly comparing the slices will also short-circuit as soon as one evaluates to False.
Originally, I checked whether a permutation existed by transforming b into a:
b = b[:]
b[n:m], b[L-m:L-n] = b[L-m:L-n], b[n:m]

if a == b:
    return n, m
But this is slower than explicitly comparing the slices. Let me know if the algorithm doesn't speak for itself and I can offer further explanation (maybe even proof) as to why it works and is minimal.
I tried to implement 3 different algorithms for this task. All of them have O(N) time complexity and require O(1) additional space. Interesting fact: all other answers (known so far) implement 2 of these algorithms (though they do not always keep the optimal asymptotic time/space complexity). Here is a high-level description of each algorithm:
Algorithm A
Compare the lists, group "non-equal" intervals, and make sure there are exactly two such intervals (with a special case when the intervals meet in the middle).
Check if the "non-equal" intervals are positioned symmetrically, and their contents are also "symmetrical".
Algorithm B
Compare the first halves of the lists to guess where the "intervals to be exchanged" are.
Check if the contents of these intervals are "symmetrical". And make sure the lists are equal outside of these intervals.
Algorithm C
Compare the first halves of the lists to find the first mismatched element.
Find this mismatched element of the first list in the second one. This hints at the position of the "intervals to be exchanged".
Check if the contents of these intervals are "symmetrical". And make sure the lists are equal outside of these intervals.
There are two alternative implementations for step 1 of each algorithm: (1) using itertools, and (2) using plain loops (or list comprehensions). itertools is efficient for long lists but relatively slow on short lists.
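For contrast, a plain-loop version of the "find the first mismatch in the half-lists" step (my sketch, not part of the original answer) could look like this:

def first_mismatch(a, b):
    # index of the first differing element in the first halves, or None if the halves are equal
    half = len(a) // 2
    for i in range(half):
        if a[i] != b[i]:
            return i
    return None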
Here is algorithm C with the first step implemented using itertools. It looks simpler than the other two algorithms (which are at the end of this post). And it is pretty fast, even for short lists:
import itertools as it
import operator as op

def test_C(a, b):
    length = len(a)
    half = length // 2
    mismatches = it.imap(op.ne, a, b[:half]) # compare half-lists
    try:
        n = next(it.compress(it.count(), mismatches))
        nr = length - n
        mr = a.index(b[n], half, nr)
        m = length - mr
    except StopIteration:
        return None
    except ValueError:
        return None
    if a[n:m] == b[mr:nr] and b[n:m] == a[mr:nr] \
            and a[m:mr] == b[m:mr] and a[nr:] == b[nr:]:
        return (n, m)
This could be done using mostly itertools:
def test_A(a, b):
    equals = it.imap(op.eq, a, b) # compare lists
    e1, e2 = it.tee(equals)
    l = it.chain(e1, [True])
    r = it.chain([True], e2)
    borders = it.imap(op.ne, l, r) # delimit equal/non-equal intervals
    ranges = list(it.islice(it.compress(it.count(), borders), 5))
    if len(ranges) == 4:
        n1, m1 = ranges[0], ranges[1]
        n2, m2 = ranges[2], ranges[3]
    elif len(ranges) == 2:
        n1, m1 = ranges[0], len(a) // 2
        n2, m2 = len(a) // 2, ranges[1]
    else:
        return None
    if n1 == len(a) - m2 and m1 == len(a) - n2 \
            and a[n1:m1] == b[n2:m2] and b[n1:m1] == a[n2:m2]:
        return (n1, m1)
A high-level description of this algorithm is already provided in the OP comments by @j_random_hacker. Here are some details:
Start with comparing the lists:
A 0 1 2 3 4 5 6 7
B 0 5 6 3 4 1 2 7
= E N N E E N N E
Then find borders between equal/non-equal intervals:
= E N N E E N N E
B _ * _ * _ * _ *
Then determine ranges for non-equal elements:
B _ * _ * _ * _ *
[1 : 3] [5 : 7]
Then check if there are exactly 2 ranges (with special case when both ranges meet in the middle), the ranges themselves are symmetrical, and their contents too.
Another alternative is to use itertools to process only half of each list. This allows a slightly simpler (and slightly faster) algorithm because there is no need to handle a special case:
def test_B(a, b):
    equals = it.imap(op.eq, a, b[:len(a) // 2]) # compare half-lists
    e1, e2 = it.tee(equals)
    l = it.chain(e1, [True])
    r = it.chain([True], e2)
    borders = it.imap(op.ne, l, r) # delimit equal/non-equal intervals
    ranges = list(it.islice(it.compress(it.count(), borders), 2))
    if len(ranges) != 2:
        return None
    n, m = ranges[0], ranges[1]
    nr, mr = len(a) - n, len(a) - m
    if a[n:m] == b[mr:nr] and b[n:m] == a[mr:nr] \
            and a[m:mr] == b[m:mr] and a[nr:] == b[nr:]:
        return (n, m)
This does the right thing:
Br = B[L//2:] + B[:L//2]
same_full = [a == b for (a, b) in zip(A, Br)]
same_part = [a + b for (a, b) in zip(same_full[L//2:], same_full[:L//2])]

for n, vn in enumerate(same_part):
    if vn != 2:
        continue
    m = n
    for vm in same_part[n+1:]:
        if vm != 2:
            break
        m += 1
    if m > n:
        print("n=", n, "m=", m+1)
I'm pretty sure you could do the counting a bit better, but... meh.
I believe the following pseudocode should work:
Find the first element i for which A[i] != B[i], set n = i. If no such element, return success. If n >= L/2, return fail.
Find the first element i > n for which A[i] == B[i], set m = i. If no such element or m > L/2, set m = L/2.
Check so A[0:n] == B[0:n], A[n:m] == B[L-m:L-n], B[n:m] == A[L-m:L-n], A[m:L-m] == B[m:L-m] and A[L-n:L] == B[L-n:L]. If all are true, return success. Else, return fail.
Complexity is O(n) which should be the lowest possible as one always needs to compare all elements in the lists.
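A minimal sketch of that pseudocode (my addition; it assumes even L and equal-length, valid inputs) might look like this:

def check_symmetric(A, B):
    L = len(A)
    # Step 1: first index where A and B differ
    n = next((i for i in range(L) if A[i] != B[i]), None)
    if n is None:
        return True                     # identical lists: success
    if n >= L // 2:
        return False
    # Step 2: first index > n where A and B agree again, capped at L/2
    m = min(next((i for i in range(n + 1, L) if A[i] == B[i]), L // 2), L // 2)
    # Step 3: check all five slice conditions
    return (A[0:n] == B[0:n] and
            A[n:m] == B[L-m:L-n] and
            B[n:m] == A[L-m:L-n] and
            A[m:L-m] == B[m:L-m] and
            A[L-n:] == B[L-n:])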
I build a map of where the characters are in list B, then use that to determine the implied subranges in list A. Once I have the subranges, I can sanity check some of the info, and compare the slices.
If A[i] == x, then where does x appear in B? Call that position p.
I know i, the start of the left subrange.
I know L (= len(A)), so I know L-i, the end of the right subrange.
If I know p, then I know the implied start of the right subrange, assuming that B[p] and A[i] are the start of a symmetric pair of ranges. Thus, the OP's L - m would be p if the lists were symmetric.
Setting L-m == p gives me m, so I have all four end points.
Sanity tests are:
n and m are in left half of list(s)
n <= m (note: OP did not prohibit n == m)
L-n is in right half of list (computed)
L-m is in right half (this is a good check for quick fail)
If all those check out, compare A[left] == B[right] and B[left] == A[right]. Return left if true.
def find_symmetry(a: list, b: list) -> slice or None:
    assert len(a) == len(b)
    assert set(a) == set(b)
    assert len(set(a)) == len(a)

    length = len(a)
    assert length % 2 == 0
    half = length // 2

    b_loc = {bi: n for n, bi in enumerate(b)}

    for n, ai in enumerate(a[:half]):
        L_n = length - 1 - n    # L - n
        L_m = b_loc[ai]         # L - m (speculative)
        if L_m < half:          # Sanity: bail if on wrong side
            continue
        m = b_loc[a[L_n]]       # If A[n] starts range, A[m] ends it.
        if m < n or m > half:   # Sanity: bail if backwards or wrong side
            continue
        left = slice(n, m+1)
        right = slice(L_m, L_n+1)
        if a[left] == b[right] and \
                b[left] == a[right]:
            return left

    return None
res = find_symmetry(
    [10, 11, 12, 13, 14, 15, 16, 17],
    [10, 15, 16, 13, 14, 11, 12, 17])
assert res == slice(1, 3)

res = find_symmetry(
    [0, 1, 2, 3, 4, 5, 6, 7],
    [1, 0, 2, 3, 4, 5, 7, 6])
assert res is None

res = find_symmetry("abcdefghijklmn", "nbcdefghijklma")
assert res == slice(0, 1)

res = find_symmetry("abcdefghijklmn", "abjklfghicdmen")
assert res == slice(3, 4)

res = find_symmetry("abcdefghijklmn", "ancjkfghidelmb")
assert res == slice(3, 5)

res = find_symmetry("abcdefghijklmn", "bcdefgaijklmnh")
assert res is None

res = find_symmetry("012345", "013245")
assert res == slice(2, 3)
Here's an O(N) solution which passes the test code:
def sym_check(a, b):
    cnt = len(a)
    ml = [a[i] == b[i] for i in range(cnt)]
    sl = [i for i in range(cnt) if (i == 0 or ml[i-1]) and not ml[i]]
    el = [i+1 for i in range(cnt) if not ml[i] and (i == cnt-1 or ml[i+1])]
    assert(len(sl) == len(el))
    range_cnt = len(sl)
    if range_cnt == 1:
        start1 = sl[0]
        end2 = el[0]
        if (end2 - start1) % 2 != 0:
            return None
        end1 = (start1 + end2) // 2
        start2 = end1
    elif range_cnt == 2:
        start1, start2 = sl
        end1, end2 = el
    else:
        return None
    if end1 - start1 != end2 - start2:
        return None
    if start1 != cnt - end2:
        return None
    if a[start1:end1] != b[start2:end2]:
        return None
    if b[start1:end1] != a[start2:end2]:
        return None
    return start1, end1
I only tested it with Python 2, but I believe it will also work with Python 3.
It identifies the ranges where the two lists differ. It looks for two such ranges (if there is only one such range, it tries to divide it in half). It then checks that both ranges are the same length and in the proper positions relative to each other. If so, then it checks that the elements in the ranges match.
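As a quick sanity check (my addition, not part of the original answer), on the examples from the question:

print(sym_check([0,1,2,3,4,5,6,7], [0,5,6,3,4,1,2,7]))   # (1, 3)
print(sym_check([0,1,2,3,4,5,6,7], [1,0,2,3,4,5,7,6]))   # None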
Yet another version:
def compare(a, b):
    i_zip = list(enumerate(zip(a, b)))
    llen = len(a)
    hp = llen // 2

    def find_index(i_zip):
        for i, (x, y) in i_zip:
            if x != y:
                return i
        return i_zip[0][0]

    # n and m are determined by the unmoved items:
    n = find_index(i_zip[:hp])
    p = find_index(i_zip[hp:])
    m = llen - p
    q = llen - n

    # Symmetric?
    if a[:n] + a[p:q] + a[m:p] + a[n:m] + a[q:] != b:
        return None
    return n, m
This solution is based on:
All validly permuted list pairs A, B adhering to the symmetry requirement will have the structure:
A = P1 + P2 + P3 + P4 + P5
B = P1 + P4 + P3 + P2 + P5
         ^n   ^m^hp^p   ^q   <- indexes
with len(P1) == len(P5) and len(P2) == len(P4).
Therefore, the last 3 lines of the above function determine the correct solution, provided the indexes n, m are correctly determined (p and q are just the mirror indexes of m and n).
Finding n is a matter of determining where the items of A and B start to diverge. Next, the same method is applied to finding p, starting from the midpoint hp. m is just the mirror index of p. All involved indexes are found and the solution emerges.
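To make that structure concrete (my illustration, not from the original answer), on the example from the question:

A = [0, 1, 2, 3, 4, 5, 6, 7]
B = [0, 5, 6, 3, 4, 1, 2, 7]
# P1=[0], P2=[1,2], P3=[3,4], P4=[5,6], P5=[7]  ->  n=1, m=3, hp=4, p=5, q=7
print(compare(A, B))   # (1, 3)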
Make a list (ds) of indices where the first halves of the two lists differ.
A possible n is the first such index, the last such index is m - 1.
Check if valid symmetry. len(ds) == m - n makes sure there aren't any gaps.
import itertools as it
import operator as op

def test(a, b):
    sz = len(a)
    ds = list(it.compress(it.count(), map(op.ne, a[:sz//2], b[:sz//2])))
    n, m = ds[0], ds[-1]+1
    if a[n:m] == b[sz-m:sz-n] and b[n:m] == a[sz-m:sz-n] and len(ds) == m - n:
        return n, m
    else:
        return None
Here's a simple solution that passes my tests, and yours:
Compare the inputs, looking for a subsequence that does not match.
Transform A by transposing the mismatched subsequence according to the rules. Does the result match B?
The algorithm is O(N); there are no embedded loops, explicit or implicit.
In step 1, I need to detect the case where the swapped substrings are adjacent. This can only happen in the middle of the string, but I found it easier to just look out for the first element of the moved piece (firstval). Step 2 is simpler (and hence less error-prone) than explicitly checking all the constraints.
def compare(A, B):
    same = True
    for i, (a, b) in enumerate(zip(A, B)):
        if same and a != b: # Found the start of a presumed transposition
            same = False
            n = i
            firstval = a    # First element of the transposed piece
        elif (not same) and (a == b or b == firstval): # end of the transposition
            m = i
            break

    # Construct the transposed string, compare it to B
    origin = A[n:m]
    if n == 0:  # swap begins at the edge
        dest = A[-m:]
        B_expect = dest + A[m:-m] + origin
    else:
        dest = A[-m:-n]
        B_expect = A[:n] + dest + A[m:-m] + origin + A[-n:]

    return bool(B_expect == B)
Sample use:
>>> compare("01234567", "45670123")
True
Bonus: I believe the name for this relationship would be "symmetric block transposition". A block transposition swaps two subsequences, taking ABCDE to ADCBE. (See definition 4 here; I actually found this by googling "ADCBE"). I've added "symmetric" to the name to describe the length conditions.

What does this program do?

This is a simple program, but I am having difficulty understanding how it actually works.
I have a database whose records have 3 fields.
import matplotlib.pyplot as plt

queries = {}
rewrites = {}
urls = {}

for line in open("data.tsv"):
    q, r, u = line.strip().split("\t")
    queries.setdefault(q, 0)
    queries[q] += 1
    rewrites.setdefault(r, 0)
    rewrites[r] += 1
    urls.setdefault(u, 0)
    urls[u] += 1

sQueries = []
sQueries = [x for x in rewrites.values()]
sQueries.sort()

x = range(len(sQueries))
line, = plt.plot(x, sQueries, '-', linewidth=2)
plt.show()
This is the whole program. Now:
queries.setdefault(q,0)
This command sets the value for key q to 0 if q is not already in the dictionary.
queries[q] += 1
This command then increments the value for key q by 1.
We do the same for the other fields (r and u).
Then,
sQueries = [x for x in rewrites.values()]
Then we store the values of the dictionary rewrites in the list sQueries.
x = range(len(sQueries))
I don't understand what this command is doing. Can anyone please explain?
len(sQueries)
gives the number of elements in your list sQueries
x = range(len(sQueries))
will create x containing the integers 0, 1, ... up to (but not including) the length of your sQueries list
This:
sQueries = []
sQueries = [x for x in rewrites.values()]
sQueries.sort()
is an obtuse way of writing
sQueries = rewrites.values()
sQueries = sorted(sQueries)
in other words, sort the values of the rewrites dictionary. If, for the sake of argument, sQueries == [2, 3, 7, 9], then len(sQueries) == 4 and range(4) == [0, 1, 2, 3].
So, now you're plotting (0,2), (1,3), (2,7), (3,9), which doesn't seem very useful to me. It seems more likely that you would want the keys of rewrites on the x-axis, which would be the distinct values of r that you read from the TSV file.
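If that is the goal, a sketch along those lines (my addition; it reuses the rewrites dictionary and plt from the code above) could put the rewrite strings on the x-axis:

items = sorted(rewrites.items(), key=lambda kv: kv[1])   # (rewrite, count) pairs sorted by count
labels, counts = zip(*items)
plt.plot(range(len(counts)), counts, '-', linewidth=2)
plt.xticks(range(len(labels)), labels, rotation=90)      # label the x-axis with the rewrite strings
plt.show()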
length = len(sQueries)   # this is the length of sQueries
r = range(length)        # this means 0, 1, ..., length-1
so
x = range(len(sQueries)) # means x goes from 0 to len(sQueries) - 1

How to mathematically subtract items in two lists in Python and only output those items which meet a condition?

I have two lists that are already sorted from low to high:
A=['40','60','80']
B=['10','42','100']
I want to compare each item in A with every item in B. If the absolute difference between a pair of values is less than 5, I want to delete both items from their respective lists, so the output should be:
A=['60','80']
B=['10','100']
Sometimes the lists are unequal in length, and sometimes there is only 1 item in each list.
I have found many ways to subtract items in lists, but I do not know how to retrieve and delete the specific items from the specified lists; the approaches I found only subtract each item from the corresponding item in the other list.
Using lambda:
if list(imap(lambda m, n: m-n < 5, A, B)) == True:
Using imap, sub:
list(imap(sub, A, B)):
Using Numpy
M = np.array([A])
N = np.array([B])
c = abs(M-N)
Many thanks.
Without using numpy:
A = ["40", "60", "80"]
B = ["10", "42", "100"]
newA = filter(lambda a: all([abs(int(a) - int(b)) >= 5 for b in B]), A)
newB = filter(lambda b: all([abs(int(a) - int(b)) >= 5 for a in A]), B)
print newA
print newB
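In Python 3, filter returns an iterator rather than a list, so a list-comprehension variant of the same idea (my adaptation) may be clearer:

newA = [a for a in A if all(abs(int(a) - int(b)) >= 5 for b in B)]
newB = [b for b in B if all(abs(int(a) - int(b)) >= 5 for a in A)]
print(newA)   # ['60', '80']
print(newB)   # ['10', '100']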
A_dict = {}
B_dict = {}
for i in xrange(len(A)):
    if A[i] not in A_dict:
        A_dict[A[i]] = []
    A_dict[A[i]].append(i)
for i in xrange(len(B)):
    if B[i] not in B_dict:
        B_dict[B[i]] = []
    B_dict[B[i]].append(i)

for x in B_dict:
    for i in xrange(6):
        if x - i in A_dict:
            B_dict[x] = []
            A_dict[x-i] = []

A_new_idx = []
B_new_idx = []
for x in A_dict:
    A_new_idx.extend(A_dict[x])
for x in B_dict:
    B_new_idx.extend(B_dict[x])

A_new = [A[i] for i in sorted(A_new_idx)]
B_new = [B[i] for i in sorted(B_new_idx)]
This has running time O(n log n), since the maximum difference you want to remove is 5 (a constant). It should be much faster than the naive O(n^2) approach.
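Since the question also mentioned NumPy, here is a broadcasting sketch (my addition; it assumes the values are converted to integers first):

import numpy as np

A = np.array([40, 60, 80])
B = np.array([10, 42, 100])
diff = np.abs(A[:, None] - B[None, :])   # all pairwise absolute differences, shape (len(A), len(B))
keepA = (diff >= 5).all(axis=1)          # keep items of A at least 5 away from every item of B
keepB = (diff >= 5).all(axis=0)
print(A[keepA])   # [60 80]
print(B[keepB])   # [ 10 100]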
