How to make nested list behave like numpy array? - python

I'm trying to implement an algorithm to count subsets with a given sum in Python:
import numpy as np
maxN = 20
maxSum = 1000
minSum = 1000
base = 1000
dp = np.zeros((maxN, maxSum + minSum))
v = np.zeros((maxN, maxSum + minSum))
# Function to return the required count
def findCnt(arr, i, required_sum, n):
    # Base case
    if i == n:
        if required_sum == 0:
            return 1
        else:
            return 0
    # If the state has been solved before,
    # return the value of the state
    if v[i][required_sum + base]:
        return dp[i][required_sum + base]
    # Setting the state as solved
    v[i][required_sum + base] = 1
    # Recurrence relation
    dp[i][required_sum + base] = findCnt(arr, i + 1, required_sum, n) + findCnt(arr, i + 1, required_sum - arr[i], n)
    return dp[i][required_sum + base]
arr = [ 2, 2, 2, 4 ]
n = len(arr)
k = 4
print(findCnt(arr, 0, k, n))
And it gives the expected result, but I was asked to not use numpy, so I replaced numpy arrays with nested lists like this :
#dp = np.zeros((maxN, maxSum + minSum)) replaced by
dp = [[0]*(maxSum + minSum)]*maxN
#v = np.zeros((maxN, maxSum + minSum)) replaced by
v = [[0]*(maxSum + minSum)]*maxN
But now the program always gives me 0 in the output. I think this is because of some behavior difference between numpy arrays and nested lists, but I don't know how to fix it.
EDIT :
thanks to @venky__ who provided this solution in the comments:
[[0 for i in range( maxSum + minSum)] for i in range(maxN)]
and it worked, but I still don't understand the difference between it and what I was doing before. I tried:
print( [[0 for i in range( maxSum + minSum)] for i in range(maxN)] == [[0]*(maxSum + minSum)]*maxN )
And the result is True, so how was this able to fix the problem?

It turns out that I was using nested lists the wrong way to represent 2D arrays: Python was not creating separate objects; every index of the outer list was referring to the same inner list object. For a better explanation please read this.
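To see what was happening, here is a small demo of my own (using a 2x3 toy grid instead of the full dp table):

```python
grid = [[0] * 3] * 2          # repetition copies the REFERENCE, not the list
grid[0][0] = 99               # write through one row...
print(grid)                   # [[99, 0, 0], [99, 0, 0]] -- both rows changed!
print(grid[0] is grid[1])     # True: both entries are the same inner list

grid = [[0] * 3 for _ in range(2)]  # the comprehension builds a fresh row each time
grid[0][0] = 99
print(grid)                   # [[99, 0, 0], [0, 0, 0]]
print(grid[0] is grid[1])     # False
```

`==` compares values element by element, which is why the two expressions compared equal even though one of them aliases its rows; `is` exposes the difference.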

Taking equal number of elements from two arrays, such that the taken values have as few duplicates as possible

Suppose we have 2 arrays of size N, with their values in the range [0, N-1]. For example:
a = np.array([0, 1, 2, 0])
b = np.array([2, 0, 3, 3])
I need to produce a new array c which contains exactly N/2 elements from a and b respectively, i.e. the values must be taken evenly/equally from both parent arrays.
(For odd length, this would be (N-1)/2 and (N+1)/2. Can also ignore odd length case, not important).
Taking equal number of elements from two arrays is pretty trivial, but there is an additional constraint: c should have as many unique numbers as possible / as few duplicates as possible.
For example, a solution to a and b above is:
c = np.array([b[0], a[1], b[2], a[3]])
>>> c
array([2, 1, 3, 0])
Note that the position/order is preserved. Each element of a and b that we took to form c is in the same position: if element i in c is from a, then c[i] == a[i], and the same for b.
A straightforward solution for this is simply a sort of path traversal, easy enough to implement recursively:
def traverse(i, a, b, path, n_a, n_b, best, best_path):
    if n_a == 0 and n_b == 0:
        score = len(set(path))
        return (score, path.copy()) if score > best else (best, best_path)
    if n_a > 0:
        path.append(a[i])
        best, best_path = traverse(i + 1, a, b, path, n_a - 1, n_b, best, best_path)
        path.pop()
    if n_b > 0:
        path.append(b[i])
        best, best_path = traverse(i + 1, a, b, path, n_a, n_b - 1, best, best_path)
        path.pop()
    return best, best_path
Here n_a and n_b are how many values we will take from a and b respectively, it's 2 and 2 as we want to evenly take 4 items.
>>> score, best_path = traverse(0, a, b, [], 2, 2, 0, None)
>>> score, best_path
(4, [2, 1, 3, 0])
Is there a way to implement the above in a more vectorized/efficient manner, possibly through numpy?
The algorithm is slow mainly because it runs in exponential time. There is no straightforward way to vectorize this algorithm using only Numpy because of the recursion. Even if it were possible, the huge number of combinations would cause most Numpy implementations to be inefficient (due to the large Numpy arrays to compute). Additionally, there is AFAIK no vectorized operation to count the number of unique values of many rows efficiently (the usual way is to use np.unique, which is not efficient in this case and cannot be used without a loop). As a result, there are two possible strategies to speed this up:
trying to find an algorithm with a reasonable complexity (eg. <= O(n^4));
using compilation methods, micro-optimizations and tricks to write a faster brute-force implementation.
Since finding a correct sub-exponential algorithm turns out not to be easy, I chose the other approach (though the first approach would be the better one).
The idea is to:
remove the recursion by generating all possible solutions with a loop iterating over integers (each integer encodes one candidate path as a bitmask);
write a fast way to count the unique items of an array;
use the Numba JIT compiler to optimize code that is only efficient once compiled.
Here is the final code:
import numpy as np
import numba as nb

# Naive way to count unique items.
# This is a slow fallback implementation.
@nb.njit
def naive_count_unique(arr):
    count = 0
    for i in range(len(arr)):
        val = arr[i]
        found = False
        for j in range(i):
            if arr[j] == val:
                found = True
                break
        if not found:
            count += 1
    return count

# Optimized way to count unique items on small arrays.
# Count items 2 by 2.
# Fast on small arrays.
@nb.njit
def optim_count_unique(arr):
    count = 0
    for i in range(0, len(arr), 2):
        if arr[i] == arr[i+1]:
            tmp = 1
            for j in range(i):
                if arr[j] == arr[i]: tmp = 0
            count += tmp
        else:
            val1, val2 = arr[i], arr[i+1]
            tmp1, tmp2 = 1, 1
            for j in range(i):
                val = arr[j]
                if val == val1: tmp1 = 0
                if val == val2: tmp2 = 0
            count += tmp1 + tmp2
    return count

@nb.njit
def count_unique(arr):
    if len(arr) % 2 == 0:
        return optim_count_unique(arr)
    else:
        # Odd case: not optimized yet
        return naive_count_unique(arr)

# Count the number of set bits in a 32-bit integer
# See https://stackoverflow.com/questions/71097470/msb-lsb-popcount-in-numba
@nb.njit('int_(uint32)', inline='always')
def popcount(v):
    v = v - ((v >> 1) & 0x55555555)
    v = (v & 0x33333333) + ((v >> 2) & 0x33333333)
    c = np.uint32((v + (v >> 4) & 0xF0F0F0F) * 0x1010101) >> 24
    return c

# Count the number of set bits in a 64-bit integer
@nb.njit(inline='always')
def bit_count(n):
    if n < (1 << 30):
        return popcount(np.uint32(n))
    else:
        return popcount(np.uint32(n)) + popcount(np.uint32(n >> 32))

# Mutate `out` so as not to create an expensive new temporary array
@nb.njit
def int_to_path(n, out, a, b):
    for i in range(len(out)):
        out[i] = a[i] if ((n >> i) & 1) else b[i]

@nb.njit(['(int32[:], int32[:], int64, int64)', '(int64[:], int64[:], int64, int64)'])
def traverse_fast(a, b, n_a, n_b):
    # This assertion is needed because the paths are encoded using 64 bits.
    # This should not be a problem in practice since the number of solutions
    # would be impracticably huge to test with this algorithm anyway.
    assert n_a + n_b < 62
    max_iter = 1 << (n_a + n_b)
    path = np.empty(n_a + n_b, dtype=a.dtype)
    score, best_score, best_i = 0, 0, 0
    # Iterate over all cases (more than the set of possible solutions)
    for i in range(max_iter):
        # Filter the possible solutions
        if bit_count(i) != n_b:
            continue
        # Analyse the score of the solution
        int_to_path(i, path, a, b)
        score = count_unique(path)
        # Store it if it is better than the previous one
        if score > best_score:
            best_score = score
            best_i = i
    int_to_path(best_i, path, a, b)
    return best_score, path
This implementation is about 30 times faster on arrays of size 8 on my machine. One could use several cores to speed this up even further. However, I think it is better to focus on finding a sub-exponential implementation so as to avoid wasting more computing resources. Note that the path is different from the one found by the initial function, but the score is the same on random arrays. This can help others test their implementations on larger arrays without waiting a long time.
Test this heavily.
import numpy as np
rand = np.random.default_rng(seed=1)
n = 16
a = rand.integers(low=0, high=n, size=n)
b = rand.integers(low=0, high=n, size=n)
uniques = np.setxor1d(a, b)
print(a)
print(b)
print(uniques)
def limited_uniques(arr: np.ndarray) -> np.ndarray:
    choose = np.zeros(shape=n, dtype=bool)
    _, idx, _ = np.intersect1d(arr, uniques, return_indices=True)
    idx = idx[:n//2]
    choose[idx] = True
    n_missing = n//2 - len(idx)
    counts = choose.cumsum()
    diffs = np.arange(n) - counts
    at = np.searchsorted(diffs, n_missing)
    choose[:at] = True
    return arr[choose]
a_half = limited_uniques(a)
uniques = np.union1d(uniques, np.setdiff1d(a, a_half))
interleaved = np.empty_like(a)
interleaved[0::2] = a_half
interleaved[1::2] = limited_uniques(b)
print(interleaved)
[ 7 8 12 15 0 2 13 15 3 4 13 6 4 13 4 6]
[10 8 1 0 13 12 13 8 13 5 7 12 1 4 1 7]
[ 1 2 3 5 6 10 15]
[ 7 10 8 8 12 1 15 0 0 13 2 12 3 5 6 4]

how to find max value and its index number in Numpy? [duplicate]

This question already has an answer here:
Numpy argmax. How to compute both max and argmax?
(1 answer)
Closed 2 years ago.
In Python, I'm trying to find the max value of a numpy array together with its index.
In MATLAB, [val, I] = max(v) gives val = "maximum value" and I = "index number", where v is a 1x12 double.
How could I implement this in numpy?
also, I try this :
import numpy as np
v = np.empty([v_size,1])
for j in range(1, v_size):
    x1 = int(tmp[j - 1, 0])
    x2 = int(tmp[j - 1, 1])
    x3 = int(tmp[j + 1, 0])
    x4 = int(tmp[j + 1, 1])
    v[j] = image[x1, x2] - image[x3, x4]
if np.all(v == 0) == 0:
    val, I = np.amax(v)
I would appreciate it if anybody could help me.
Use where and amax:
maxElement = numpy.amax(arr)
result = numpy.where(arr == numpy.amax(arr))
result[0]  # the array of indices where the max occurs
Or equivalent:
max_num, index = [numpy.amax(arr), numpy.where(arr == numpy.amax(arr))[0]]
Check numpy documentation... There are at least two immediate ways to get what you want:
maxvalue = array.max()
maxindex = array.argmax()
Or:
maxvalue = np.max(array)
maxindex = np.argmax(array)
We can use np.argmax and use the index to get the value. Performant way.
a = np.asarray((1, 3, 8, 5, 4))
m = np.argmax(a)
n = a[m]
print("Max value: {} and its id: {}".format(n, m))
Max value: 8 and its id: 2

Two number Sum program in python O(N^2)

I am used to writing code in C++ but now I am trying to learn Python. I came to know that the Python language is very popular, so I thought, let's give it a shot.
Currently I am preparing for company interview questions and am able to solve most of them in C++. Alongside that, I am trying to write the code for the same in Python. For the things I am not familiar with, I do a Google search, watch tutorials, etc.
While writing code in Python for my previously solved easy interview questions, I encountered a problem.
Code : Given an array of integers, return indices of the two numbers such that they add up to a specific target.
You may assume that each input would have exactly one solution, and you may not use the same element twice.
def twoNum(*arr, t):
    cur = 0
    x = 0
    y = 0
    for i in range(len(arr) - 1):
        for j in range(len(arr) - 1):
            if i == j:
                break
            cur = arr[i] + arr[j]
            if t == cur:
                x = arr[i]
                y = arr[j]
                break
        if t == cur:
            break
    print(f"{x} + {y} = {x+y} ")

arr = [3, 5, -4, 8, 11, 1, -1, 6]
target = 10
twoNum(arr, t=target)
So here is the problem: I have defined x, y in the function, then used x = arr[i] and y = arr[j], and I am printing those values.
The output is: 0 + 0 = 10 (where target is 10)
I guess this is because I set x = 0 and y = 0 initially in the function and it seems the x and y values are never updated. In the Outline section of VSCode I saw that x and y are declared twice, once at the start of the function and again in the for loop.
Can anyone explain to me what is going on here?
For reference, here is an image of the code I wrote in C++
Change this:
def twoNum(*arr, t):
to this:
def twoNum(arr, t):
* is used to indicate that there will be a variable number of arguments, see this. It is not for pointers as in C++.
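A small demo of the difference (my own function names, purely for illustration):

```python
def with_star(*arr, t):
    return arr               # arr collects ALL positional args into a tuple

def without_star(arr, t):
    return arr               # arr is exactly the object that was passed

nums = [3, 5, -4, 8]
print(with_star(nums, t=10))     # ([3, 5, -4, 8],) -- a 1-tuple wrapping the list
print(without_star(nums, t=10))  # [3, 5, -4, 8]
```

With `*arr`, `len(arr)` is 1 inside the function, so `range(len(arr) - 1)` is empty, the loops never run, and x and y keep their initial value 0.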
Basically what you are trying to do is to write C code in python.
I would instead focus first on how to write Python code in a 'pythonic' way. But for your question, solving it your way using brute force in Python:
def two_num(arr, t):
    for i, a in enumerate(arr):
        for b in arr[i + 1:]:
            if a + b == t:
                print(f"{a} + {b} = {t}")
                return
Here's a way to implement a brute force approach using a list comprehension:
arr = [1,3,5,7,9]
target = 6
i,j = next((i,j) for i,n in enumerate(arr[:-1]) for j,m in enumerate(arr[i+1:],i+1) if n+m==target)
output:
print(f"arr[{i}] + arr[{j}] = {arr[i]} + {arr[j]} = {target}")
# arr[0] + arr[2] = 1 + 5 = 6
Perhaps even more pythonic would be to use iterators:
from itertools import tee
iArr = enumerate(arr)
i,j = next((i,j) for i,n in iArr for j,m in tee(iArr,1)[0] if n+m==target)
When you get to implementing an O(n) solution, you should look into dictionaries:
d = { target-n:j for j,n in enumerate(arr) }
i,j = next( (i,d[m]) for i,m in enumerate(arr) if m in d and d[m] != i )
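For comparison, the dictionary idea can also be written as a more conventional single-pass lookup (a sketch; the function name and structure here are mine, not from the answer above):

```python
def two_sum(arr, target):
    """Return indices (i, j) with arr[i] + arr[j] == target, or None."""
    seen = {}                       # value -> index where it was first seen
    for j, n in enumerate(arr):
        if target - n in seen:      # complement seen earlier?
            return seen[target - n], j
        seen[n] = j
    return None

arr = [3, 5, -4, 8, 11, 1, -1, 6]
print(two_sum(arr, 10))             # (4, 6): 11 + (-1) == 10
```

Each element is visited once and dictionary lookups are O(1) on average, giving O(n) overall.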

number of subsequences whose sum is divisible by k

I just did a coding challenge for a company and was unable to solve this problem. Problem statement goes like:
Given an array of integers, find the number of subsequences in the array whose sum is divisible by k, where k is some positive integer.
For example, for [4, 1, 3, 2] and k = 3, the solution is 5. [[3], [1, 2], [4, 3, 2], [4, 2], [1, 3, 2]] are the subsequences whose sum is divisible by k, i.e. (current_sum + nums[i]) % k == 0, where nums[i] is the current element in the array.
I tried to solve this recursively, however, I was unable to pass any test cases. My recursive code followed something like this:
def kSum(nums, k):
    def kSum(cur_sum, i):
        if i == len(nums): return 0
        sol = 1 if (cur_sum + nums[i]) % k == 0 else 0
        return sol + kSum(cur_sum, i+1) + kSum(cur_sum + nums[i], i+1)
    return kSum(0, 0)
What is wrong with this recursive approach, and how can I correct it? I'm not interested in an iterative solution, I just want to know why this recursive solution is wrong and how I can correct it.
Are you sure that test case is actually failing? For example:
[4, 1, 3, 2], k = 3
has
4+2 = 6, 1+2=3, 3, 1+2+3=6, 4+2+3 = 9
So, your function is right (it gives me 5) and I don't see a major problem with your function.
Here is a javascript reproduction of what you wrote with some console logs to help explain its behavior.
function kSum(nums, k) {
    let recursive_depth = 1;
    function _kSum(cur_sum, i) {
        recursive_depth++;
        if (i == nums.length) {
            recursive_depth--;
            return 0;
        }
        let sol = 0;
        if (((cur_sum + nums[i]) % k) === 0) {
            sol = 1;
            console.log(`Found valid sequence ending with ${nums[i]} with sum = ${cur_sum + nums[i]} with partial sum ${cur_sum} at depth ${recursive_depth}`);
        }
        const _kSum1 = _kSum(cur_sum, i+1);
        const _kSum2 = _kSum(cur_sum + nums[i], i+1);
        const res = sol + _kSum1 + _kSum2;
        recursive_depth--;
        return res;
    }
    return _kSum(0, 0);
}

let arr = [4, 1, 3, 2], k = 3;
console.log(kSum(arr, k));
I think this code actually gets the right answer. I'm not fluent in Python, but I might have inadvertently fixed a bug in your code by adding parentheses around (cur_sum + nums[i]) % k.
It seems to me that your solution is correct. It reaches the answer by trying all subsequences, which has 2^n complexity. We could formulate it recursively in an O(n*k) search space, although it could be more efficient to tabulate. Let f(A, k, i, r) represent how many subsequences leave remainder r when their sum is divided by k, using elements up to A[i]. Then:
function f(A, k, i=A.length-1, r=0){
    // 1 if A[i] alone leaves remainder r
    // when divided by k
    const c = A[i] % k == r ? 1 : 0;
    if (i == 0)
        return c;
    return c +
        // All previous subsequences
        // whose sum leaves remainder r
        // when divided by k
        f(A, k, i - 1, r) +
        // All previous subsequences whose
        // sum, when combined with A[i],
        // leaves remainder r when
        // divided by k
        f(A, k, i - 1, (k + r - A[i]%k) % k);
}

console.log(f([1,2,1], 3));
console.log(f([2,3,5,8], 5));
console.log(f([4,1,3,2], 3));
console.log(f([3,3,3], 3));
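The same recurrence translates directly to Python with memoization; here is a sketch (`count_divisible` is my own name for the wrapper):

```python
from functools import lru_cache

def count_divisible(A, k):
    @lru_cache(maxsize=None)
    def f(i, r):
        # 1 if A[i] alone leaves remainder r when divided by k
        c = 1 if A[i] % k == r else 0
        if i == 0:
            return c
        # subsequences not using A[i], plus those whose sum needed
        # remainder (r - A[i]) % k before A[i] was appended
        return c + f(i - 1, r) + f(i - 1, (r - A[i]) % k)
    return f(len(A) - 1, 0)

print(count_divisible([4, 1, 3, 2], 3))  # 5
print(count_divisible([3, 3, 3], 3))     # 7
```

Python's % already returns a non-negative remainder, so `(r - A[i]) % k` matches the `(k + r - A[i]%k) % k` term in the JavaScript version.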

Codility - Tape equilibrium training using Python

A non-empty zero-indexed array A consisting of N integers is given. Array A represents numbers on a tape. Any integer P, such that 0 < P < N, splits this tape into two non-empty parts: A[0], A[1], ..., A[P − 1] and A[P], A[P + 1], ..., A[N − 1]. The difference between the two parts is the value of: |(A[0] + A[1] + ... + A[P − 1]) − (A[P] + A[P + 1] + ... + A[N − 1])| In other words, it is the absolute difference between the sum of the first part and the sum of the second part.
def solution(A):
    N = len(A)
    my_list = []
    for i in range(1, N):
        first_tape = sum(A[:i - 1]) + A[i]
        second_tape = sum(A[i - 1:]) + A[i]
        difference = abs(first_tape - second_tape)
        my_list.append(difference)
    print(min(my_list))
    return min(my_list)
My solution gets 100% on Correctness but 0% on Performance.
I think it is supposed to be O(N) but my time complexity is O(N*N).
Can anyone please give me some advice?
You can change your code to something like below to have complexity O(N).
def solution(A):
    s = sum(A)
    m = float('inf')
    left_sum = 0
    for i in A[:-1]:
        left_sum += i
        m = min(abs(s - 2*left_sum), m)
    return m
Functional approach as @darkvalance wrote, but with comments:
from itertools import accumulate

def solution(A):
    array_sum = sum(A)  # saving the sum of all elements keeps the whole thing O(n)
    # accumulate returns the accumulated sums,
    # e.g. for input [3, 1, 2, 4] it returns [3, 4, 6, 10]
    # we pass the array without its last element;
    # including the last element doesn't make sense, because
    # list(accumulate(A))[-1] == array_sum
    accumulated_list = accumulate(A[:-1])
    return min([abs(2*x - array_sum) for x in accumulated_list])
To answer your question: it's O(N*N) because the sum() function is O(N) and you are calling it inside a for loop over N elements, which is also O(N).
So the resulting time complexity of the algorithm is O(N*N).
My Java code, O(N):
class Solution {
    public int solution(int[] arr) {
        int sum = 0;
        for (int i = 0; i < arr.length; i++) {
            sum = sum + arr[i];
        }
        int minSum = 100000;
        int tempSum = 0;
        int previousSum = 0;
        for (int i = 0; i < arr.length - 1; i++) {
            previousSum = previousSum + arr[i];
            tempSum = Math.abs(previousSum - (sum - previousSum));
            if (minSum > tempSum) {
                minSum = tempSum;
            }
        }
        return minSum;
    }
}
My Python code, O(N):
def solution(A):
    # write your code in Python 3.6
    mini = float('inf')
    check = A[0]
    total = sum(A) - check
    for i in range(1, len(A)):
        diff = abs(check - total)
        total -= A[i]
        check += A[i]
        if diff < mini:
            mini = diff
    return mini
Functional approach, O(N). accumulate provides the cumulative running sums of a list. We can compute the difference between the two parts from the total sum of the list and the cumulative sum at each point.
from itertools import accumulate

def solution(A):
    s = sum(A)
    l = list(accumulate(A[:-1]))
    return min([abs(2*x - s) for x in l])
def solution(A):
    res = []
    left_sum = 0
    right_sum = sum(A)
    for i in range(0, len(A) - 1):
        left_sum += A[i]
        right_sum = right_sum - A[i]
        res.append(abs(right_sum - left_sum))
    return min(res)
Currently you are calculating the sum again and again in first_tape and second_tape. What you need to do is store the total sum and then calculate each part's sum by difference. E.g. if your array is [1,2,3,4], the total sum is 10. Assume your first tape is of size 1, i.e. [1], so the sum of the first tape is 1. Then the sum of the remaining second tape is
total sum - first_tape sum
and the difference is
first_tape sum - (total sum - first_tape sum)
You can maintain the first_tape sum within the same loop by doing something like:
previous_sum += i (where i is the current array element)
So the solution is of order N.
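Putting that explanation into runnable form (a sketch; the function and variable names here are mine):

```python
def tape_difference(A):
    total = sum(A)               # one O(N) pass for the total sum
    first = 0                    # running sum of the first tape
    best = float('inf')
    for p in range(len(A) - 1):  # split after index p; both parts non-empty
        first += A[p]
        # the second tape sums to total - first, so the difference is:
        best = min(best, abs(first - (total - first)))
    return best

print(tape_difference([3, 1, 2, 4, 3]))  # 1 (split [3, 1, 2] | [4, 3])
```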
Here I also add a check for when an element of the array is 0:
def MinimialDiff(A):
    if len(A) == 2:
        return abs(A[0] - A[1])
    tot_sum = sum(A)
    min_value = float('inf')
    left_sum = 0
    for x in range(0, len(A) - 1):
        if A[x] == 0:
            continue
        left_sum += A[x]
        temp = abs(2*left_sum - tot_sum)
        min_value = min(min_value, temp)
    return min_value
This is my original solution for the Tape Equilibrium problem:
def solution(A):
    import numpy as np
    # Check if the supplied array is empty or single-element
    if len(A) < 2:
        return -1
    # Otherwise, create two NumPy arrays of (non-linear) accumulated sums:
    # all but last, start to end
    Array_Sum_Accumulated = np.array(list(np.cumsum(A[0:-1:1]))[::1])
    # all but first, end to start
    Array_Sum_Acc_Reversed = np.array(list(np.cumsum(A[-1:0:-1]))[::-1])
    # Array of absolute differences using fast (precompiled) and simple NumPy magic
    Array_Sum_Difference = abs(Array_Sum_Accumulated - Array_Sum_Acc_Reversed)
    # For debugging only
    if len(A) <= 20:
        print("%s\n%s\n%s" % (Array_Sum_Accumulated, Array_Sum_Acc_Reversed, Array_Sum_Difference))
    return min(Array_Sum_Difference)
Unfortunately, Codility does not permit import of the NumPy module for this particular lesson. Hence, this is the solution, importing the IterTools module instead of NumPy, that (finally) yielded 100% result across the board:
def solution(A):
    from itertools import accumulate
    # Check if the supplied array is empty or single-element
    if len(A) < 2:
        return -1
    # If only two elements, return their absolute difference
    if len(A) == 2:
        return abs(A[0] - A[1])
    # Otherwise, create two lists of (non-linear) accumulated sums:
    # all but last, start to end
    Array_Sum_Accumulated = list(accumulate(A[0:-1:1]))[::1]
    # all but first, end to start
    Array_Sum_Acc_Reversed = list(accumulate(A[-1:0:-1]))[::-1]
    # List of absolute differences using the slower (interpreted) loop
    Array_Sum_Difference = []
    for i in range(0, len(Array_Sum_Accumulated)):
        Array_Sum_Difference.append(abs(Array_Sum_Accumulated[i] - Array_Sum_Acc_Reversed[i]))
    # For debugging only
    if len(A) <= 20:
        print("%s\n%s\n%s" % (Array_Sum_Accumulated, Array_Sum_Acc_Reversed, Array_Sum_Difference))
    return min(Array_Sum_Difference)
Thanks to darkvalance for his itertools solution, and to TenaciousRaptor for the (very enlightening) clarification of the logic used.
Thanks also to Jun Jang for attempting the two-split-tapes solution, which shows that the non-linear accumulation can provide multiple 'pairs of tapes', because the same minimum absolute difference can appear at multiple equilibrium points on 'the tape'.
The itertools solution provided by darkvalance not only produces exactly the same results, it looks extremely Pythonic and outperforms the three-NumPy-arrays solution in more than 97% of tests (after 100,000 tests of arrays of 100,000 elements).
Congratulations. I hope that one day my code will look something like yours.
This code snippet is also a possible solution:
def solution(A):
    # write your code in Python 3.6
    d = []
    for i in range(1, len(A)):
        d.append(abs(sum(A[:i]) - sum(A[i:])))
    return min(d)
