I have this method which creates a list of lists containing zeros and ones.
For example, the output for unit = 3 is: [[1,0,0],[0,1,0],[0,0,1]]
How can I do it in fewer lines with a list comprehension? I think one line is enough.
major_list = []  # contains lists full of zeros and ones; converted to a matrix at the end
for i in range(unit):
    major_list.append([0] * unit)
    major_list[i][i] = 1
You can't get any faster than using numpy.identity():
np.identity(3)
Code:
import numpy as np
unit = 3
major_array = np.identity(unit)
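Note that np.identity returns a float array by default; if you need integer entries you can pass the dtype directly:
major_array = np.identity(unit, dtype=int)
print(major_array)
# [[1 0 0]
#  [0 1 0]
#  [0 0 1]]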
With a list comprehension you can join three sublists:
major_list = [[0] * i + [1] + [0] * (unit - i - 1) for i in range(unit)]
print(major_list)
Or better, use the performant numpy way:
major_list = numpy.identity(3)
print(major_list)
Testing the performance of the different methods suggested here, and assuming the required final result is a list of lists (not a numpy array), the fastest, at 2.091 seconds for unit = 10k, is:
major_list = [[0] * i + [1] + [0] * (unit - i - 1) for i in range(unit)]
The numpy method becomes:
major_list = numpy.identity(unit).astype(int).tolist()
and is second fastest, with 2.359 sec.
My method:
major_list = [[1 if i == index else 0 for i in range(unit)]
              for index in range(unit)]
is far behind, with 6.960 sec.
And last:
major_list = [[int(c==r) for c in range(unit)] for r in range(unit)]
comes in at 17.732 sec.
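For reference, a minimal timing harness along these lines (a sketch; the exact benchmark setup behind the numbers above is an assumption):
from timeit import timeit

unit = 10000

def sliced():
    return [[0] * i + [1] + [0] * (unit - i - 1) for i in range(unit)]

def conditional():
    return [[1 if i == index else 0 for i in range(unit)]
            for index in range(unit)]

print(timeit(sliced, number=1))       # one full build of a 10k x 10k matrix
print(timeit(conditional, number=1))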
If by optimize you mean reduce the number of lines (not necessarily make it faster), you can use the following:
unit = 4
major_list = [
    [0] * i +
    [1] +
    [0] * (unit - i - 1)
    for i in range(unit)
]
for i in major_list:
    print(i)
[1, 0, 0, 0]
[0, 1, 0, 0]
[0, 0, 1, 0]
[0, 0, 0, 1]
The following makes it rather concise:
major_list = [[int(c==r) for c in range(unit)] for r in range(unit)]
This puts 1 where column index equals row index, 0 everywhere else.
You can try this:
def frame_matrix(unit):
    return [[1 if i == j else 0 for i in range(unit)] for j in range(unit)]
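For example:
print(frame_matrix(3))  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]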
I have two arrays of size 15 : A = [a_0, ... , a_14] and B = [b_0, ..., b_14]
Goal: obtain the array C of size 8 resulting from
C = [a_0] * [b_7, ..., b_14] + [a_1, a_2] * [b_3, b_4, b_5, b_6] + [a_3, a_4, a_5, a_6] * [b_1, b_2] + [a_7, ..., a_14] * [b_0]
where * is the outer product np.outer. Note that:
each sub-array is of length 2^i for i between 0 and 3.
from the outer products, we obtain two vectors of size 8 and two matrices of shapes (2, 4) and (4, 2). We flatten immediately after each product, so that the four results can be summed into a single vector of size 8.
My implementation is the following:
inds = [0, 1, 3, 7, 15]
C = np.zeros(8)
d = 4
for i in range(d):
    left = A[inds[i]:inds[i+1]]
    right = B[inds[d-i-1]:inds[d-i]]
    C += (left[:, None]*right[None, :]).ravel()  # same as np.outer(left, right).ravel()
Question: what is the fastest way to obtain C? That is, is there a way to avoid this for loop for performing the summation?
If not, what are my options? Code in C++? Cython?
NB: this is to be generalized to loops over range(L+1) with L any integer. In the example above I have illustrated the case L=3 for clarity. FYI, the generalized code would look like this:
L = 3
inds = np.cumsum([2**k for k in range(0, L+1)])
inds = np.concatenate(([0], inds))
# Input arrays A and B are of size inds[-1]
C = np.zeros(2**L)
d = L+1
for i in range(d):
    left = A[inds[i]:inds[i+1]]
    right = B[inds[d-i-1]:inds[d-i]]
    C += (left[:, None]*right[None, :]).ravel()  # same as np.outer(left, right).ravel()
I think you can simply do:
C = np.outer(A[0], B[7:]).ravel() +\
    np.outer(A[[1, 2]], B[[3, 4, 5, 6]]).ravel() +\
    np.outer(A[[3, 4, 5, 6]], B[[1, 2]]).ravel() +\
    np.outer(A[7:], B[0]).ravel()
Am I wrong?
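A quick sanity check of the unrolled expression against the original loop (a sketch with random inputs):
import numpy as np

A = np.random.rand(15)
B = np.random.rand(15)

# original loop
inds = [0, 1, 3, 7, 15]
C_loop = np.zeros(8)
d = 4
for i in range(d):
    left = A[inds[i]:inds[i+1]]
    right = B[inds[d-i-1]:inds[d-i]]
    C_loop += np.outer(left, right).ravel()

# unrolled version
C_unrolled = (np.outer(A[0], B[7:]).ravel()
              + np.outer(A[[1, 2]], B[[3, 4, 5, 6]]).ravel()
              + np.outer(A[[3, 4, 5, 6]], B[[1, 2]]).ravel()
              + np.outer(A[7:], B[0]).ravel())

print(np.allclose(C_loop, C_unrolled))  # True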
I wonder what the best algorithm is to sort a binary array with the fewest swaps (e.g. turning [0,0,0,1,0,1,0] into [0,0,0,0,0,1,1]).
I implemented some bubble sorts but wonder how to optimise those.
My code is in Python, but any language is welcome. If anyone has a solution with the fewest swaps, it would really improve my program and I would really appreciate it!
If you really want to do it using swaps, you can start from both ends and swap the 1s you find on the left side going forward with the 0s you find on the right side going backward.
A = [0,0,0,1,0,1,0]
left1s = (i for i,b in enumerate(A) if b==1)
right0s = (len(A)-j for j,b in enumerate(reversed(A),1) if b==0)
swapCount = 0
for i,j in zip(left1s,right0s):
    if i>=j: break
    A[i],A[j] = A[j],A[i]
    swapCount += 1
print(A) # [0, 0, 0, 0, 0, 1, 1]
print(swapCount,"swaps") # 1 swaps
Note that the same logic can be written without the use of iterators and zip:
A = [0,0,0,1,0,1,0]
swaps = 0
f,b = 0,len(A)-1 # forward and backward indexes
while f<b:                # until forward meets backward
    if A[f]==0: f += 1    # skip 0s forward
    elif A[b]==1: b -= 1  # skip 1s backward
    else: swaps,A[f],A[b] = swaps+1,A[b],A[f]  # swap misplaced bits
print(A) # [0, 0, 0, 0, 0, 1, 1]
print(swaps,"swaps") # 1 swaps
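Both versions also perform the minimum possible number of swaps: each swap fixes one misplaced 1 and one misplaced 0 at the same time, so the minimum equals the number of 1s sitting in the region that should hold only 0s. You can compute that count directly (a small sketch):
A = [0, 0, 0, 1, 0, 1, 0]
zeros = A.count(0)          # the first `zeros` positions should all be 0
min_swaps = sum(A[:zeros])  # 1s currently inside that region
print(min_swaps)            # 1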
If you don't want to use .sort() or similar, I can think of this solution:
arr = [0,0,0,1,0,1,0]
print([0] * arr.count(0) + [1] * arr.count(1))
which ends up as [0, 0, 0, 0, 0, 1, 1].
Edit:
l = len(arr)
z = arr.count(0)
print([0]*z + [1] * (l - z))
seems to be faster with timeit.timeit
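For example, a rough comparison (a sketch; exact numbers will vary by machine):
from timeit import timeit

arr = [0, 0, 0, 1, 0, 1, 0] * 1000

def two_counts():
    return [0] * arr.count(0) + [1] * arr.count(1)

def one_count():
    l = len(arr)
    z = arr.count(0)
    return [0] * z + [1] * (l - z)

print(timeit(two_counts, number=1000))  # counts the list twice
print(timeit(one_count, number=1000))   # counts the list once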
You don't need to sort a binary array, especially with python's small integer interning.
The count of ones is given by
ones = sum(lst)
The count of zeros is the length minus that:
zeros = len(lst) - ones
You can construct the right list with
[0] * zeros + [1] * ones
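Put together, a runnable version using the question's example as input:
lst = [0, 0, 0, 1, 0, 1, 0]
ones = sum(lst)
zeros = len(lst) - ones
print([0] * zeros + [1] * ones)  # [0, 0, 0, 0, 0, 1, 1]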
Many languages have their own implementation.
Example for JavaScript (note that the default sort compares elements as strings, which happens to order 0s and 1s correctly):
[0,0,0,1,0,1,0].sort();
returns
[ 0, 0, 0, 0, 0, 1, 1 ]
Edit: adding my own implementation in JavaScript.
The strategy is to scan from the beginning and stop on finding a 1, then scan from the end to find a 0 to swap with.
It's one of multiple possible implementations.
function sortBinaryArray(binArr){
    let i = 0;
    let j = binArr.length;
    let swapCount = 0;
    while (i < j) {
        if (binArr[i] === 0) {
            // found 0 at position i; sorted so far
            i++;
            continue;
        }
        // found 1 at position i; search from the end for a 0 to swap
        j--;
        while (i < j) {
            if (binArr[j] === 0) {
                // found a position to swap
                binArr[i] = 0;
                binArr[j] = 1;
                i++;
                swapCount++;
                break;
            }
            j--;
        }
    }
    console.log('swapCount=' + swapCount);
}
var myArr = [0,0,0,1,0,1,0];
sortBinaryArray(myArr);
console.log(myArr)
output:
swapCount=1
Array(7) [ 0, 0, 0, 0, 0, 1, 1 ]
I'm trying to implement an algorithm in Python to count subsets with a given sum:
import numpy as np
maxN = 20
maxSum = 1000
minSum = 1000
base = 1000
dp = np.zeros((maxN, maxSum + minSum))
v = np.zeros((maxN, maxSum + minSum))
# Function to return the required count
def findCnt(arr, i, required_sum, n):
    # Base case
    if i == n:
        if required_sum == 0:
            return 1
        else:
            return 0
    # If the state has been solved before,
    # return the value of the state
    if v[i][required_sum + base]:
        return dp[i][required_sum + base]
    # Setting the state as solved
    v[i][required_sum + base] = 1
    # Recurrence relation
    dp[i][required_sum + base] = findCnt(arr, i + 1, required_sum, n) + findCnt(arr, i + 1, required_sum - arr[i], n)
    return dp[i][required_sum + base]
arr = [ 2, 2, 2, 4 ]
n = len(arr)
k = 4
print(findCnt(arr, 0, k, n))
And it gives the expected result, but I was asked not to use numpy, so I replaced the numpy arrays with nested lists like this:
#dp = np.zeros((maxN, maxSum + minSum)) replaced by
dp = [[0]*(maxSum + minSum)]*maxN
#v = np.zeros((maxN, maxSum + minSum)) replaced by
v = [[0]*(maxSum + minSum)]*maxN
but now the program always outputs 0. I think this is due to some behavioral difference between numpy arrays and nested lists, but I don't know how to fix it.
EDIT:
thanks to @venky__, who provided this solution in the comments:
[[0 for i in range( maxSum + minSum)] for i in range(maxN)]
and it worked, but I still don't understand the difference between it and what I was doing before. I tried:
print( [[0 for i in range( maxSum + minSum)] for i in range(maxN)] == [[0]*(maxSum + minSum)]*maxN )
and the result is True, so how was this able to fix the problem?
It turns out that I was using nested lists the wrong way to represent 2D arrays: Python was not creating separate sublists; every row was a reference to the same inner list object, so writing to one row wrote to all of them. (The == comparison prints True because list equality compares values, not object identity.) For a better explanation, please read this.
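A quick demonstration of the aliasing:
m = [[0] * 3] * 3                # three references to the SAME inner list
m[0][0] = 1
print(m)                         # [[1, 0, 0], [1, 0, 0], [1, 0, 0]]
m = [[0] * 3 for _ in range(3)]  # three independent inner lists
m[0][0] = 1
print(m)                         # [[1, 0, 0], [0, 0, 0], [0, 0, 0]]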
In Python, I'm trying to find the maximum value of a numpy array together with its index.
In MATLAB, [val, I] = max(v) gives val = the maximum value and I = its index, where v is a 1x12 double.
How can I do the same with numpy?
I also tried this:
import numpy as np
v = np.empty([v_size,1])
for j in range(1, v_size):
    x1 = int(tmp[j - 1, 0])
    x2 = int(tmp[j - 1, 1])
    x3 = int(tmp[j + 1, 0])
    x4 = int(tmp[j + 1, 1])
    v[j] = image[x1, x2] - image[x3, x4]
if np.all(v == 0) == 0:
    val, I = np.amax(v)
I would appreciate it if anybody could help me.
Use where and amax:
maxElement = numpy.amax(arr)
result = numpy.where(arr == numpy.amax(arr))
result[0]  # will be the indices of the max (the max value itself is maxElement)
Or equivalent:
max_num, index = [numpy.amax(arr), numpy.where(arr == numpy.amax(arr))[0]]
Check numpy documentation... There are at least two immediate ways to get what you want:
maxvalue = array.max()
maxindex = array.argmax()
Or:
maxvalue = np.max(array)
maxindex = np.argmax(array)
We can use np.argmax and then use the index to get the value; this is a performant way.
import numpy as np

a = np.asarray((1, 3, 8, 5, 4))
m = np.argmax(a)
n = a[m]
print("Max value: {} and its id: {}".format(n, m))
# Max value: 8 and its id: 2
I have a performance issue when coding in Python.
Let's say I have two very large (N x 2) arrays of strings, with N = 12,000,000, and two variables label_a and label_b which are also strings. Here is the code:
import numpy as np
import time
indices = np.array([np.random.choice(np.arange(5000).astype(str),size=10000000),np.random.choice(np.arange(5000).astype(str),size=10000000)]).T
costs = np.random.uniform(size=10000000)
label_a = '2'
label_b = '9'
t0 = time.time()
costs = costs[(indices[:,0]!=label_a)*(indices[:,0]!=label_b)*(indices[:,1]!=label_a)*(indices[:,1]!=label_b)]
indices = indices[(indices[:,0]!=label_a)*(indices[:,0]!=label_b)*(indices[:,1]!=label_a)*(indices[:,1]!=label_b)]
t1 = time.time()
toseq = t1-t0
print(toseq)
The above code segment takes 3 seconds every time it is run. I would like to achieve the same thing while reducing the computational cost:
I am using a boolean mask to retrieve only the rows in the costs and indices arrays where the values are neither label_a nor label_b.
As indicated in the comments, computing the values of the indices you're after only once, and combining them only once would save time.
(I've also changed the way of timing, just for brevity - the results are the same)
import numpy as np
from timeit import timeit

r = 5000
n = 10000000

indices = np.array([
    np.random.choice(np.arange(r).astype(str), size=n),
    np.random.choice(np.arange(r).astype(str), size=n)
]).T

costs = np.random.uniform(size=n)

label_a = '2'
label_b = '9'

n_indices = np.array([
    np.random.choice(np.arange(r), size=n),
    np.random.choice(np.arange(r), size=n)
]).T

def run():
    global indices
    global costs
    _ = costs[(indices[:, 0] != label_a)*(indices[:, 0] != label_b) *
              (indices[:, 1] != label_a)*(indices[:, 1] != label_b)]
    _ = indices[(indices[:, 0] != label_a)*(indices[:, 0] != label_b) *
                (indices[:, 1] != label_a)*(indices[:, 1] != label_b)]

def run_faster():
    global indices
    global costs
    # only compute these once
    not_a0 = indices[:, 0] != label_a
    not_b0 = indices[:, 0] != label_b
    not_a1 = indices[:, 1] != label_a
    not_b1 = indices[:, 1] != label_b
    _ = costs[not_a0 * not_b0 * not_a1 * not_b1]
    _ = indices[not_a0 * not_b0 * not_a1 * not_b1]

def run_even_faster():
    global indices
    global costs
    # also combine them only once
    cond = ((indices[:, 0] != label_a) * (indices[:, 0] != label_b) *
            (indices[:, 1] != label_a) * (indices[:, 1] != label_b))
    _ = costs[cond]
    _ = indices[cond]

def run_sep_mask():
    global indices
    global costs
    global cond
    # just the masking part of run_even_faster
    cond = ((indices[:, 0] != label_a) * (indices[:, 0] != label_b) *
            (indices[:, 1] != label_a) * (indices[:, 1] != label_b))

def run_sep_index():
    global indices
    global costs
    global cond
    # just the indexing part of run_even_faster
    _ = costs[cond]
    _ = indices[cond]

def run_even_faster_numerical():
    global indices
    global costs
    # use int values and n_indices instead of indices
    a = int(label_a)
    b = int(label_b)
    cond = ((n_indices[:, 0] != a) * (n_indices[:, 0] != b) *
            (n_indices[:, 1] != a) * (n_indices[:, 1] != b))
    _ = costs[cond]
    _ = indices[cond]

def run_all(funcs):
    for f in funcs:
        print('{:.4f} : {}()'.format(timeit(f, number=1), f.__name__))

run_all([run, run_faster, run_even_faster, run_sep_mask, run_sep_index, run_even_faster_numerical])
Note that I also added an example where the operation is not based on strings, but on numbers instead. If you can avoid the values being strings, but get numbers instead, you'd get a performance boost as well.
This boost gets substantial if you start comparing longer labels - in the end it might even be worth converting the strings to numbers before the filtering, if the strings get long enough.
These are my results:
0.9711 : run()
0.7065 : run_faster()
0.6983 : run_even_faster()
0.2657 : run_sep_mask()
0.4174 : run_sep_index()
0.4536 : run_even_faster_numerical()
The two sep entries show that the indexing takes roughly twice as long as building the mask does for run_even_faster, so you can only expect so much improvement from tuning the mask further.
However, they also show that building the mask from integers adds less than 0.04 seconds on top of the actual indexing, compared to about 0.26 seconds for building it from strings. So that's the room you have for improvement.
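A minimal sketch of that up-front conversion (assuming, as in this example, that all the labels are decimal integer strings):
# one-time conversion cost; every comparison afterwards is numeric
int_indices = indices.astype(int)
a = int(label_a)
b = int(label_b)
cond = ((int_indices[:, 0] != a) * (int_indices[:, 0] != b) *
        (int_indices[:, 1] != a) * (int_indices[:, 1] != b))
costs = costs[cond]
indices = indices[cond]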