I am trying to do an array assignment in Python, but it is very slow. Is there any way to accelerate it?
simi_matrix_img = np.zeros((len(annot), len(annot)), dtype='float16')
for i in range(len(annot)):
    for j in range(i + 1):
        score = 0
        times = 0
        if i != j:
            x_idx = [p1 for (p1, q1) in enumerate(annot[i]) if np.abs(q1 - 1) < 1e-5]
            y_idx = [p2 for (p2, q2) in enumerate(annot[j]) if np.abs(q2 - 1) < 1e-5]
            for idx in itertools.product(x_idx, y_idx):
                score += simi_matrix_word[idx]
                times += 1
                simi_matrix_img[i, j] = score/times
        else:
            simi_matrix_img[i, j] = 1.0
"annot" is a numpy array. Is there any way to accelerate it?
I think the indentation of this line is wrong:
simi_matrix_img[i, j] = score/times
You want to perform that assignment after all the product iterations. But since it's the last assignment that takes effect, the results will be the same.
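That is, presumably the intended placement is:
for idx in itertools.product(x_idx, y_idx):
    score += simi_matrix_word[idx]
    times += 1
simi_matrix_img[i, j] = score/times  # moved out of the product loop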
Here's a partial reworking of your code
def foo1(annot, simi_matrix_word):
    N = annot.shape[0]
    simi_matrix_img = np.zeros((N,N))
    for i in range(N):
        for j in range(i + 1):
            if i != j:
                x_idx = np.nonzero(annot[i])[0]
                y_idx = np.nonzero(annot[j])[0]
                idx = np.ix_(x_idx, y_idx)
                # print(idx, simi_matrix_word[idx])
                score = simi_matrix_word[idx].mean()
                simi_matrix_img[i, j] = score
            else:
                simi_matrix_img[i, j] = 1.0
    return simi_matrix_img
For a small test case, it returns the same thing:
annot=np.array([[1,0,1],[0,1,1]])
simi_matrix_word = np.arange(12, dtype=float).reshape(3,4)
[[ 1.  0.]
 [ 7.  1.]]
That gets rid of all the inner iterations. The next step would be to reduce the outer iterations. For example, start with np.eye(N), and just iterate on the lower triangle indices:
In [169]: np.eye(2)
Out[169]:
array([[ 1.,  0.],
       [ 0.,  1.]])
In [170]: np.tril_indices(2,-1)
Out[170]: (array([1]), array([0]))
Note that for a 2-row annot, we are only calculating one score, at [1, 0].
Replacing nonzero with boolean indexing:
def foo3(annot, simi_matrix_word):
    N = annot.shape[0]
    A = annot.astype(bool)
    simi_matrix_img = np.eye(N, dtype=float)
    for i, j in zip(*np.tril_indices(N, -1)):
        score = simi_matrix_word[A[i], :][:, A[j]]
        simi_matrix_img[i, j] = score.mean()
    return simi_matrix_img
or this might speed up the indexing a bit:
def foo4(annot, simi_matrix_word):
    N = annot.shape[0]
    A = annot.astype(bool)
    simi_matrix_img = np.eye(N, dtype=float)
    for i in range(1, N):
        x = simi_matrix_word[A[i], :]
        for j in range(i):
            score = x[:, A[j]]
            simi_matrix_img[i, j] = score.mean()
    return simi_matrix_img
Since the number of nonzero values in each row of annot can differ, the number of terms summed for each score also differs, which makes it hard to vectorize the remaining loop directly.
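That said, if annot really contains only 0/1 indicators and simi_matrix_word is square (M x M), the summed score over all index pairs equals a bilinear product, so the whole loop nest collapses to matrix multiplications. A sketch under those assumptions (foo_vec is a name introduced here, and it assumes every row of annot has at least one nonzero entry, otherwise the division produces NaNs):
import numpy as np

def foo_vec(annot, simi_matrix_word):
    A = annot.astype(float)                  # (N, M) 0/1 indicator matrix
    counts = A.sum(axis=1)                   # nonzero count per row
    sums = A @ simi_matrix_word @ A.T        # sums[i, j] = sum of W over all pairs of set indices
    simi = sums / np.outer(counts, counts)   # turn sums into means
    np.fill_diagonal(simi, 1.0)
    return np.tril(simi)                     # keep the lower triangle, like foo1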
(1) You could use generators instead of list comprehensions where possible. For example:
x_idx = (p1 for (p1, q1) in enumerate(annot[i]) if np.abs(q1 - 1) < 1e-5)
y_idx = (p2 for (p2, q2) in enumerate(annot[j]) if np.abs(q2 - 1) < 1e-5)
With this, you iterate only once over those items (in for idx in itertools.product(x_idx, y_idx)), as opposed to twice (once when constructing the list, then again in said for loop).
(2) What Python version are you using? If < 3, I have a hunch that a significant part of the problem is your use of range(), which can be expensive with really large ranges (as I'm assuming you have here). In Python 2.7, range() actually constructs lists (not so in Python 3), which can be an expensive operation. Try achieving the same result with a simple while loop. For example, instead of for i in range(len(annot)), do:
i = 0
while i < len(annot):
    # ... do stuff with i ...
    i += 1
(3) Why call len(annot) so many times? It doesn't seem like you're mutating annot. Although len(annot) is a fast O(1) operation, you could store the length in a variable, e.g. annot_len = len(annot), and just reference that. It wouldn't shave off much, though, I'm afraid.
Consider we have 2 arrays of size N, with their values in the range [0, N-1]. For example:
a = np.array([0, 1, 2, 0])
b = np.array([2, 0, 3, 3])
I need to produce a new array c which contains exactly N/2 elements from a and b respectively, i.e. the values must be taken evenly/equally from both parent arrays.
(For odd length, this would be (N-1)/2 and (N+1)/2. Can also ignore odd length case, not important).
Taking equal number of elements from two arrays is pretty trivial, but there is an additional constraint: c should have as many unique numbers as possible / as few duplicates as possible.
For example, a solution to a and b above is:
c = np.array([b[0], a[1], b[2], a[3]])
>>> c
array([2, 1, 3, 0])
Note that the position/order is preserved: each element of a and b that we took to form c is in the same position. If element i in c is from a, then c[i] == a[i], and likewise for b.
A straightforward solution for this is simply a sort of path traversal, easy enough to implement recursively:
def traverse(i, a, b, path, n_a, n_b, best, best_path):
    if n_a == 0 and n_b == 0:
        score = len(set(path))
        return (score, path.copy()) if score > best else (best, best_path)
    if n_a > 0:
        path.append(a[i])
        best, best_path = traverse(i + 1, a, b, path, n_a - 1, n_b, best, best_path)
        path.pop()
    if n_b > 0:
        path.append(b[i])
        best, best_path = traverse(i + 1, a, b, path, n_a, n_b - 1, best, best_path)
        path.pop()
    return best, best_path
Here n_a and n_b are how many values we will take from a and b respectively; they're 2 and 2, as we want to take the 4 items evenly.
>>> score, best_path = traverse(0, a, b, [], 2, 2, 0, None)
>>> score, best_path
(4, [2, 1, 3, 0])
Is there a way to implement the above in a more vectorized/efficient manner, possibly through numpy?
The algorithm is slow mainly because it runs in exponential time. There is no straightforward way to vectorize this algorithm using only NumPy because of the recursion. Even if it were possible, the huge number of combinations would cause most NumPy implementations to be inefficient (due to the large arrays to compute). Additionally, there is AFAIK no vectorized operation to count the number of unique values in many rows efficiently (the usual way is to use np.unique, which is not efficient in this case and cannot be used without a loop). As a result, there are two possible strategies to speed this up:
trying to find an algorithm with a reasonable complexity (e.g. <= O(n^4));
using compilation methods, micro-optimizations and tricks to write a faster brute-force implementation.
Since finding a correct sub-exponential algorithm turned out not to be easy, I chose the other approach (though the first one would be preferable).
The idea is to:
remove the recursion by generating all possible solutions using a loop iterating over integers;
write a fast way to count the unique items of an array;
use the Numba JIT compiler to optimize code that is only efficient once compiled.
Here is the final code:
import numpy as np
import numba as nb

# Naive way to count unique items.
# This is a slow fallback implementation.
@nb.njit
def naive_count_unique(arr):
    count = 0
    for i in range(len(arr)):
        val = arr[i]
        found = False
        for j in range(i):
            if arr[j] == val:
                found = True
                break
        if not found:
            count += 1
    return count

# Optimized way to count unique items on small arrays.
# Count items 2 by 2.
# Fast on small arrays.
@nb.njit
def optim_count_unique(arr):
    count = 0
    for i in range(0, len(arr), 2):
        if arr[i] == arr[i+1]:
            tmp = 1
            for j in range(i):
                if arr[j] == arr[i]: tmp = 0
            count += tmp
        else:
            val1, val2 = arr[i], arr[i+1]
            tmp1, tmp2 = 1, 1
            for j in range(i):
                val = arr[j]
                if val == val1: tmp1 = 0
                if val == val2: tmp2 = 0
            count += tmp1 + tmp2
    return count

@nb.njit
def count_unique(arr):
    if len(arr) % 2 == 0:
        return optim_count_unique(arr)
    else:
        # Odd case: not optimized yet
        return naive_count_unique(arr)

# Count the number of bits in a 32-bit integer
# See https://stackoverflow.com/questions/71097470/msb-lsb-popcount-in-numba
@nb.njit('int_(uint32)', inline='always')
def popcount(v):
    v = v - ((v >> 1) & 0x55555555)
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333)
    c = np.uint32((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24
    return c

# Count the number of bits in a 64-bit integer
@nb.njit(inline='always')
def bit_count(n):
    if n < (1 << 30):
        return popcount(np.uint32(n))
    else:
        return popcount(np.uint32(n)) + popcount(np.uint32(n >> 32))

# Mutate `out` so as not to create an expensive new temporary array
@nb.njit
def int_to_path(n, out, a, b):
    for i in range(len(out)):
        out[i] = a[i] if ((n >> i) & 1) else b[i]

@nb.njit(['(int32[:], int32[:], int64, int64)', '(int64[:], int64[:], int64, int64)'])
def traverse_fast(a, b, n_a, n_b):
    # This assertion is needed because the paths are encoded using 64-bit integers.
    # This should not be a problem in practice since the number of solutions to
    # test would be impracticably huge for this algorithm anyway.
    assert n_a + n_b < 62
    max_iter = 1 << (n_a + n_b)
    path = np.empty(n_a + n_b, dtype=a.dtype)
    score, best_score, best_i = 0, 0, 0
    # Iterate over all cases (more than the set of possible solutions)
    for i in range(max_iter):
        # Filter the possible solutions
        if bit_count(i) != n_b:
            continue
        # Analyse the score of the solution
        int_to_path(i, path, a, b)
        score = count_unique(path)
        # Store it if it is better than the previous one
        if score > best_score:
            best_score = score
            best_i = i
    int_to_path(best_i, path, a, b)
    return best_score, path
This implementation is about 30 times faster on arrays of size 8 on my machine. One could use several cores to speed it up even further. However, I think it is better to focus on finding a sub-exponential implementation so as to avoid wasting more computing resources. Note that the resulting path differs from the initial function's, but the score is the same on random arrays. This can help others test their implementations on larger arrays without waiting a long time.
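A usage sketch with the arrays from the question (the dtype must match one of the compiled signatures, e.g. int64):
a = np.array([0, 1, 2, 0], dtype=np.int64)
b = np.array([2, 0, 3, 3], dtype=np.int64)
score, path = traverse_fast(a, b, 2, 2)
print(score, path)  # 4 [2 1 3 0]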
Test this heavily.
import numpy as np
from numpy.random import default_rng

rand = default_rng(seed=1)
n = 16
a = rand.integers(low=0, high=n, size=n)
b = rand.integers(low=0, high=n, size=n)
uniques = np.setxor1d(a, b)
print(a)
print(b)
print(uniques)

def limited_uniques(arr: np.ndarray) -> np.ndarray:
    choose = np.zeros(shape=n, dtype=bool)
    # First occurrences in arr of the values that appear in only one of the two arrays
    _, idx, _ = np.intersect1d(arr, uniques, return_indices=True)
    idx = idx[:n//2]
    choose[idx] = True
    # If fewer than n//2 positions were chosen, mark the earliest unchosen ones as well
    n_missing = n//2 - len(idx)
    counts = choose.cumsum()
    diffs = np.arange(n) - counts
    at = np.searchsorted(diffs, n_missing)
    choose[:at] = True
    return arr[choose]

a_half = limited_uniques(a)
uniques = np.union1d(uniques, np.setdiff1d(a, a_half))
interleaved = np.empty_like(a)
interleaved[0::2] = a_half
interleaved[1::2] = limited_uniques(b)
print(interleaved)
[ 7 8 12 15 0 2 13 15 3 4 13 6 4 13 4 6]
[10 8 1 0 13 12 13 8 13 5 7 12 1 4 1 7]
[ 1 2 3 5 6 10 15]
[ 7 10 8 8 12 1 15 0 0 13 2 12 3 5 6 4]
I am used to writing code in C++, but now I am trying to learn Python. Python is very popular, so I thought I'd give it a shot.
Currently I am preparing for company interview questions and am able to solve most of them in C++. Alongside that, I am trying to write the code for the same problems in Python. For the things I am not familiar with, I do a Google search, watch tutorials, etc.
While writing Python code for one of my previously solved easy interview questions, I ran into a problem.
Problem: Given an array of integers, return the indices of the two numbers such that they add up to a specific target.
You may assume that each input would have exactly one solution, and you may not use the same element twice.
def twoNum(*arr, t):
    cur = 0
    x = 0
    y = 0
    for i in range(len(arr) - 1):
        for j in range(len(arr) - 1):
            if(i == j):
                break
            cur = arr[i] + arr[j]
            if(t == cur):
                x = arr[i]
                y = arr[j]
                break
        if(t == cur):
            break
    print(f"{x} + {y} = {x+y} ")
arr = [3, 5, -4, 8, 11, 1, -1, 6]
target = 10
twoNum(arr, t=target)
So here is the problem: I have defined x and y in the function, then assigned x = arr[i] and y = arr[j], and I am printing those values.
The output coming is: 0 + 0 = 10 (where the target is 10).
I guess this is because I initially set x = 0 and y = 0 in the function, and their values are never updated. In the Outline section of VS Code I saw that x and y are declared twice: once at the start of the function and again in the for loop.
Can anyone explain to me what is going on here?
For reference, here is an image of the code I wrote in C++
Change this:
def twoNum(*arr, t):
to this:
def twoNum(arr, t):
* is used to indicate that there will be a variable number of arguments, see this. It is not for pointers as in C++.
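To see what the star does in practice, here is a small illustration:
def f(*arr, t):
    print(arr, len(arr))

f([3, 5, -4, 8, 11, 1, -1, 6], t=10)
# ([3, 5, -4, 8, 11, 1, -1, 6],) 1
With *arr, the whole list arrives as the single element of a tuple, so len(arr) - 1 == 0, both range(len(arr) - 1) loops are empty, and x and y keep their initial value 0 -- which is exactly the 0 + 0 output you saw.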
Basically, what you are trying to do is write C code in Python.
I would instead focus first on writing Python code in a 'pythonic' way. But for your question, here is a brute-force solution done your way in Python:
In [173]: def two_num(arr, t):
     ...:     for i, n in enumerate(arr):
     ...:         for m in arr[i + 1:]:
     ...:             if n + m == t:
     ...:                 print(f"{n} + {m} = {t}")
     ...:                 return
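For example (output of the two_num above):
In [174]: two_num([3, 5, -4, 8, 11, 1, -1, 6], 10)
11 + -1 = 10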
Here's a way to implement a brute force approach using a list comprehension:
arr = [1,3,5,7,9]
target = 6
i,j = next((i,j) for i,n in enumerate(arr[:-1]) for j,m in enumerate(arr[i+1:],i+1) if n+m==target)
output:
print(f"arr[{i}] + arr[{j}] = {arr[i]} + {arr[j]} = {target}")
# arr[0] + arr[2] = 1 + 5 = 6
Perhaps even more pythonic would be to use iterators:
from itertools import tee
iArr = enumerate(arr)
i,j = next((i,j) for i,n in iArr for j,m in tee(iArr,1)[0] if n+m==target)
When you get to implementing an O(n) solution, you should look into dictionaries:
d = { target-n:j for j,n in enumerate(arr) }
i,j = next( (i,d[m]) for i,m in enumerate(arr) if m in d and d[m] != i )
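For example, with the array from the question:
arr = [3, 5, -4, 8, 11, 1, -1, 6]
target = 10
d = {target - n: j for j, n in enumerate(arr)}
i, j = next((i, d[m]) for i, m in enumerate(arr) if m in d and d[m] != i)
print(f"arr[{i}] + arr[{j}] = {arr[i]} + {arr[j]} = {target}")
# arr[4] + arr[6] = 11 + -1 = 10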
I am currently wondering how we could make an efficient implementation of the LCS (longest common subsequence) problem.
I found a way to find consecutive matches (i.e. n-gram matches) with tensor operations, by comparing and shifting.
With two sequences x (len: n), y (len: m), the matrix:
e = x.eq(y.unsqueeze(1)) # [n x m]
we have e[i, j] == 1 <=> x[i] == y[j]; an n-gram match will be visible as a diagonal of ones.
Thus we can do the following:
# match_1 = [n x m] of {0, 1}
match_1 = x.eq(y.unsqueeze(1))
# match_2 = [(n-1) x (m-1)] matrix of {0, 1}
match_2 = match_1[:-1, :-1] * match_1[1:, 1:]
# etcetc
The LCS problem is more complicated, as we allow gaps. It can be implemented using dynamic programming, in O(n x m), but that is not really portable to PyTorch, right?
I tried; it's super slow.
# considering two "sentences" (LongTensors of word indices)
# _ts, _tr of respective lengths n and m
table = torch.zeros(n+1, m+1)
_ts, _tr = ts, tr
for i in range(1, n+1):
    for j in range(1, m+1):
        if _ts[i-1] == _tr[j-1]:
            table[i, j] = table[i-1, j-1] + 1
        else:
            table[i, j] = max(table[i-1, j], table[i, j-1])
lcs = table[n][m]
Any idea to make it more efficient?
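One possible direction (a sketch I'm adding here, not a definitive answer): the LCS recurrence can be relaxed to table[i, j] = max(table[i-1, j-1] + match, table[i-1, j], table[i, j-1]) without changing the result, and in that form each row is a running maximum over candidates computed from the previous row, so the inner loop vectorizes with torch.cummax:
import torch

def lcs_length(x, y):
    # x: (n,) LongTensor, y: (m,) LongTensor
    n, m = len(x), len(y)
    e = x.unsqueeze(1).eq(y.unsqueeze(0)).long()  # (n, m) match matrix
    prev = torch.zeros(m + 1, dtype=torch.long)   # row i-1 of the DP table
    for i in range(n):
        # candidate[j] = max(table[i-1, j], table[i-1, j-1] + match)
        cand = torch.maximum(prev[1:], prev[:-1] + e[i])
        # table[i, j] = max(candidate[1..j]) -- the running max absorbs table[i, j-1]
        row = torch.cummax(cand, dim=0).values
        prev = torch.cat([torch.zeros(1, dtype=torch.long), row])
    return prev[-1].item()
This still loops over n, but each row is a handful of vectorized tensor ops instead of m Python iterations.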
A non-empty zero-indexed array A consisting of N integers is given. Array A represents numbers on a tape. Any integer P, such that 0 < P < N, splits this tape into two non-empty parts: A[0], A[1], ..., A[P − 1] and A[P], A[P + 1], ..., A[N − 1]. The difference between the two parts is the value of: |(A[0] + A[1] + ... + A[P − 1]) − (A[P] + A[P + 1] + ... + A[N − 1])| In other words, it is the absolute difference between the sum of the first part and the sum of the second part.
def solution(A):
    N = len(A)
    my_list = []
    for i in range(1, N):
        first_tape = sum(A[:i - 1]) + A[i]
        second_tape = sum(A[i - 1:]) + A[i]
        difference = abs(first_tape - second_tape)
        my_list.append(difference)
    print(min(my_list))
    return min(my_list)
My solution gets 100% on Correctness but 0% on Performance.
I think it is supposed to be O(N), but my solution's time complexity is O(N*N).
Can anyone please give me some advice?
You can change your code to something like the one below to get O(N) complexity. Since right_sum == s - left_sum, the difference |left_sum - right_sum| equals |2*left_sum - s|, so a single running sum is enough.
def solution(A):
    s = sum(A)
    m = float('inf')
    left_sum = 0
    for i in A[:-1]:
        left_sum += i
        m = min(abs(s - 2*left_sum), m)
    return m
Functional approach, as @darkvalance wrote, but with comments:
from itertools import accumulate

def solution(A):
    array_sum = sum(A)  # saving the sum of all elements to keep an O(n) complexity
    # accumulate returns accumulated sums
    # e.g. for input: [3, 1, 2, 4] it returns: [3, 4, 6, 10]
    # we are passing a copy of the array without the last element
    # including the last element doesn't make sense, because
    # list(accumulate(A))[-1] == array_sum
    accumulated_list = accumulate(A[:-1])
    return min([abs(2*x - array_sum) for x in accumulated_list])
To answer your question: it's O(N*N) because the sum() function is O(N) and you are calling it inside a for loop over N elements, which is also O(N).
So the resulting time complexity of the algorithm is O(N*N).
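For example, with the sample input from the Codility task (the minimum is 1, for the split [3, 1, 2] vs [4, 3]):
print(solution([3, 1, 2, 4, 3]))  # 1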
My Java code, O(N):
class Solution {
    public int solution(int[] arr) {
        int sum = 0;
        for(int i = 0; i < arr.length; i++){
            sum = sum + arr[i];
        }
        int minSum = 100000;
        int tempSum = 0;
        int previousSum = 0;
        for(int i = 0; i < arr.length - 1; i++){
            previousSum = previousSum + arr[i];
            tempSum = Math.abs(previousSum - (sum - previousSum));
            if(minSum > tempSum){
                minSum = tempSum;
            }
        }
        return minSum;
    }
}
My Python code, O(N):
def solution(A):
    # write your code in Python 3.6
    mini = float('inf')
    check = A[0]
    total = sum(A) - check
    for i in range(1, len(A)):
        diff = abs(check - total)
        total -= A[i]
        check += A[i]
        if diff < mini:
            mini = diff
    return mini
Functional approach, O(N). accumulate provides the cumulative running sum of a list. We can compute the difference between the two parts from the total sum and the cumulative sum at each split point.
from itertools import accumulate

def solution(A):
    s = sum(A)
    l = list(accumulate(A[:-1]))
    return min([abs(2*x - s) for x in l])
def solution(A):
    res = []
    left_sum = 0
    right_sum = sum(A)
    for i in range(0, len(A)-1):
        left_sum += A[i]
        right_sum = right_sum - A[i]
        res.append(abs(right_sum - left_sum))
    return min(res)
Currently you are calculating the sum again and again for first_tape and second_tape. What you need to do instead is store the total sum and derive each part's sum by difference. For example, if your array is [1,2,3,4], the total sum is 10. Assume your first tape has size 1, i.e. it is [1], so the sum of the first tape is 1. Then the sum of the remaining second tape is
total_sum - first_tape_sum
and the difference is
|first_tape_sum - (total_sum - first_tape_sum)|
You can maintain the first tape's sum within the same loop by doing something like previous_sum += i (where i is the current array element).
So the solution is of order N.
Here I also add a check for when an element of the array is 0:
def MinimialDiff(A):
    if len(A) == 2:
        return abs(A[0] - A[1])
    tot_sum = sum(A)
    min_value = float('inf')
    left_sum = 0
    for x in range(0, len(A)-1):
        if A[x] == 0:
            continue
        left_sum += A[x]
        temp = abs(2*left_sum - tot_sum)
        min_value = min(min_value, temp)
    return min_value
This is my original solution for the Tape Equilibrium problem:
def solution(A) :
    import numpy as np
    # Check if the supplied array is empty or single element
    if len( A ) < 2 :
        return -1
    # Otherwise, create two NumPy Arrays of (non-linear) accumulated sums:
    # All but last, Start to End
    Array_Sum_Accumulated = np.array( list( np.cumsum( A[ 0 : -1 : 1 ] ) )[ : : 1 ] )
    # All but first, End to Start
    Array_Sum_Acc_Reversed = np.array( list( np.cumsum( A[ -1 : 0 : -1 ] ) )[ : : -1 ] )
    # Array of Absolute Differences using fast (precompiled) and simple NumPy magic
    Array_Sum_Difference = abs( Array_Sum_Accumulated - Array_Sum_Acc_Reversed )
    # For debugging only
    if len( A ) <= 20 :
        print( "%s\n%s\n%s" % ( Array_Sum_Accumulated, Array_Sum_Acc_Reversed, Array_Sum_Difference ) )
    return min( Array_Sum_Difference )
Unfortunately, Codility does not permit import of the NumPy module for this particular lesson. Hence, this is the solution, importing the IterTools module instead of NumPy, that (finally) yielded 100% result across the board:
def solution(A):
    from itertools import accumulate
    # Check if the supplied array is empty or single element
    if len( A ) < 2 :
        return -1
    # If only two elements, return the Absolute Difference of them
    if len( A ) == 2 :
        return abs( A[ 0 ] - A[ 1 ] )
    # Otherwise, create two lists of (non-linear) accumulated sums:
    # All but last, Start to End
    Array_Sum_Accumulated = list( accumulate( A[ 0 : -1 : 1 ] ) )[ : : 1 ]
    # All but first, End to Start
    Array_Sum_Acc_Reversed = list( accumulate( A[ -1 : 0 : -1 ] ) )[ : : -1 ]
    # List of Absolute Differences using the slower (interpreted) loop
    Array_Sum_Difference = [ ]
    for i in range( 0, len( Array_Sum_Accumulated ) ) :
        Array_Sum_Difference.append( abs( Array_Sum_Accumulated[ i ] - Array_Sum_Acc_Reversed[ i ] ) )
    # For debugging only
    if len( A ) <= 20 :
        print( "%s\n%s\n%s" % ( Array_Sum_Accumulated, Array_Sum_Acc_Reversed, Array_Sum_Difference ) )
    return min( Array_Sum_Difference )
Thanks to darkvalance for his itertools solution, and to TenaciousRaptor for the (very enlightening) clarification of the logic used.
Thanks also to Jun Jang for attempting the two-split-tapes solution, which shows that the non-linear accumulation can provide multiple 'pairs of tapes', because the same minimum absolute difference can appear at multiple equilibrium points on 'the tape'.
The itertools solution provided by darkvalance not only produces exactly the same results, it looks extremely Pythonic and outperforms the three-NumPy-arrays solution in more than 97% of tests (after 100,000 tests of arrays of 100,000 elements).
Congratulations. I hope that one day my code will look something like yours.
This code snippet is also a possible solution:
def solution(A):
    # write your code in Python 3.6
    d = []
    for i in range(1, len(A)):
        d.append(abs(sum(A[:i]) - sum(A[i:])))
    return min(d)
In Sage, trying to define a matrix with conditions for the cells by:
matrix([[(if gcd(i, j) == 0: log(radical((i+j)*i*j)) else: -1.0) for j in srange(1, 5)] for i in srange(1, 5)])
I get a syntax error:
...
matrix([[(if gcd(i, j) == _sage_const_0 : log(radical((i+j)*i*j)) else: -_sage_const_1p0 ) for j in srange(_sage_const_1 , _sage_const_5 )] for i in srange(_sage_const_1 , _sage_const_5 )])
^
SyntaxError: invalid syntax
What is the problem here, and how can I fix it?
Your problem is really a Python one, not Sage per se. Python has conditional expressions usable inside list comprehensions, but the syntax doesn't look like this. See e.g. this question.
So let's try it:
matrix([[log(radical((i+j)*i*j)) if gcd(i,j)==0 else -1.0 for j in srange(1,5)] for i in srange(1,5)])
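In general, the conditional expression syntax is value_if_true if condition else value_if_false, and it can be nested inside a comprehension. A plain-Python illustration:
[x if x % 2 == 0 else -x for x in range(5)]  # [0, -1, 2, -3, 4]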
By the way, did you really want if gcd(i,j)==1? Unlikely you'll get a gcd of zero in this one!
Here is another possibility.
sage: f = lambda i, j: log(radical((i + j)*i*j)) if gcd(i,j) == 1 else -1
sage: m = matrix(SR, 4, lambda i, j: f(i + 1, j + 1))
sage: m
[ log(2) log(6) log(6) log(10)]
[ log(6) -1 log(30) -1]
[ log(6) log(30) -1 log(42)]
[log(10) -1 log(42) -1]
This uses a different syntax for matrix initialization, in which we first specify the base ring and the matrix size, and then a function of (i, j) for the coefficients. Note that since Sage indexes rows and columns from 0, we have to apply our function to i + 1 and j + 1.
Putting -1 for non-coprime (i, j) might work better than -1.0 for exact computations.