Possible differences between list and iterator - python

I have a weird problem with iterators which I can't figure out. I have a complicated numerical routine that returns a generator object (or, after some changes to the code, an islice). Afterwards I check the results, as I know that they must have a negative imaginary part:
import numpy as np
threshold = 1e-8 # just check up to some numerical accuracy
results = result_generator(**inputs)
is_valid = [np.all(_result.imag < threshold) for _result in results]
print("Number of valid results: ", is_valid.count(True))
(Sorry for not giving executable code, but I can't come up with a simple example at the moment.)
The problem is that this reports one valid solution. If I change the code to
import numpy as np
threshold = 1e-8 # just check up to some numerical accuracy
results = list(result_generator(**inputs))
is_valid = [np.all(_result.imag < threshold) for _result in results]
print("Number of valid results: ", is_valid.count(True))
using a list instead of a generator, I get zero valid solutions. I cannot wrap my head around what is different, and thus I have no idea how to debug the problem.
If I go through the debugger and print out the result at the corresponding index, the results even differ: the one from the generator is correct, the one from the list is wrong.
Here is the numerical function:
from itertools import islice
import numpy as np

def result_generator(z, iw, coeff, n_min, n_max):
    assert n_min >= 1
    assert n_min < n_max
    if n_min % 2:
        # index must be even
        n_min += 1
    id1 = np.ones_like(z, dtype=complex)
    A0, A1 = 0.*id1, coeff[0]*id1
    A2 = coeff[0] * id1
    B2 = 1. * id1
    multiplier = np.subtract.outer(z, iw[:-1])*coeff[1:]
    multiplier = np.moveaxis(multiplier, -1, 0).copy()

    def _iteration(multiplier_im):
        multiplier_im = multiplier_im/B2
        A2[:] = A1 + multiplier_im*A0
        B2[:] = 1. + multiplier_im
        A0[:] = A1
        A1[:] = A2 / B2
        return A1

    complete_iterations = (_iteration(multiplier_im) for multiplier_im in multiplier)
    return islice(complete_iterations, n_min, n_max, 2)

You're yielding the same array over and over instead of making new arrays. When you call list, you get a list of references to the same array, and that array is in its final state. When you don't call list, you examine the array in the state the generator yields it, each time it's yielded.
Stop reusing the same array over and over.
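To see the effect in isolation, here is a minimal made-up sketch (a toy generator, not the routine above) of how yielding one reused buffer behaves differently when consumed lazily versus via list:

import numpy as np

buf = np.zeros(1)

def gen():
    for i in range(3):
        buf[:] = i
        yield buf  # always the same array object

print([a[0] for a in gen()])        # read at yield time: [0.0, 1.0, 2.0]
print([a[0] for a in list(gen())])  # three references to one buffer: [2.0, 2.0, 2.0]

Applied to the question's code, one possible fix is to return a snapshot from _iteration, so that every yielded result is an independent array:

    def _iteration(multiplier_im):
        multiplier_im = multiplier_im/B2
        A2[:] = A1 + multiplier_im*A0
        B2[:] = 1. + multiplier_im
        A0[:] = A1
        A1[:] = A2 / B2
        return A1.copy()  # snapshot instead of a reference to the reused buffer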

How to ignore implicit zeros using scipy.sparse.csr_matrix.minimum?

I have two matrices mat1 and mat2 that are sparse (most entries are zero) and I'm not interested in the zero-valued entries: I look at the matrices from a graph-theoretical perspective where a zero means that there is no edge between the nodes.
How can I efficiently get the minimum values between non-zero entries only using scipy.sparse matrices?
I.e. an equivalent of mat1.minimum(mat2) that would ignore implicit zeros.
Using dense matrices, it is fairly easy to do:
import numpy as np
nnz = np.where(np.multiply(mat1, mat2))
m = mat1 + mat2
m[nnz] = np.minimum(mat1[nnz], mat2[nnz])
But this would be very inefficient with sparse matrices.
NB: a similar question has been asked before but did not get any relevant answer, and there is a related PR on the scipy repo that proposes an implementation of this for (arg)min/max, but not for minimum.
EDIT: to specify a bit more, the desired behavior would be commutative, i.e. this nonzero-minimum would take all values present in only one of the two matrices, and the min of the entries that are present in both matrices.
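For illustration, a small made-up example of the desired behavior (the matrices here are hypothetical):

import numpy as np
import scipy.sparse as sp

mat1 = sp.csr_matrix(np.array([[2., 0.], [5., 0.]]))
mat2 = sp.csr_matrix(np.array([[3., 4.], [0., 0.]]))
# desired nonzero-minimum:
#   entry (0,0) is nonzero in both matrices -> min(2, 3) = 2
#   entries (0,1) and (1,0) appear in only one matrix -> kept as-is
# expected result:
# [[2. 4.]
#  [5. 0.]]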
Just in case someone else is also looking for this, my current implementation is below.
However, I'd appreciate any proposal that would either speed this up or reduce the memory footprint.
# indicator of the entries that are nonzero in both matrices
s = mat1.multiply(mat2)
s.data[:] = 1.
# indicator of the entries that are nonzero in mat1 only
a1 = mat1.copy()
a1.data[:] = 1.
a1 = (a1 - s).maximum(0)
# indicator of the entries that are nonzero in mat2 only
a2 = mat2.copy()
a2.data[:] = 1.
a2 = (a2 - s).maximum(0)
# values unique to mat1, plus values unique to mat2,
# plus the element-wise minimum where both are present
res = mat1.multiply(a1) + mat2.multiply(a2) + \
      mat1.multiply(s).minimum(mat2.multiply(s))
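A quick sanity check of this implementation on the toy matrices from the edit above:

print(res.toarray())
# [[2. 4.]
#  [5. 0.]]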
If the sparse nonzeros are all positive, an alternative is to exploit the fact that maximum already takes the UNION of the two sparsity patterns: since min(a, b) = M - max(M - a, M - b) for M = max over both matrices, you can reverse the ordering while staying positive, take the maximum, and reverse back.
Following your lead of mucking with .data explicitly, I found:
def sp_min_nz_positive(asp, bsp):  # asp and bsp: scipy sparse matrices
    amax = asp.max()
    bmax = bsp.max()
    abmaxplus = max(amax, bmax)  # + 1.0 : surprise! not needed.
    # invert the direction, while remaining positive
    arev = asp.copy()
    arev.data[:] = abmaxplus - asp.data[:]
    brev = bsp.copy()
    brev.data[:] = abmaxplus - bsp.data[:]
    out = arev.maximum(brev)
    # revert the direction of these positives
    out.data[:] = abmaxplus - out.data[:]
    return out
There may be some inexactness due to roundoff.
There was also a suggestion to use sparse internals. A rather generic tool is sp.find, which returns the nonzero elements of anything.
So you could also try out a minimum that handles negative values too, with something like:
import scipy.sparse as sp

def sp_min_union(a, b):
    assert a.shape == b.shape
    assert sp.issparse(a) and sp.issparse(b)
    (ra, ca, _) = sp.find(a)  # over nonzeros only
    (rb, cb, _) = sp.find(b)  # over nonzeros only
    setab = set(zip(ra, ca)).union(zip(rb, cb))  # row-column union of nonzeros
    r = []
    c = []
    v = []
    for (rr, cc) in setab:
        r.append(rr)
        c.append(cc)
        anz = a[rr, cc]
        bnz = b[rr, cc]
        assert anz != 0 or bnz != 0  # they came from *some* sp.find
        if anz == 0:
            anz = bnz
        elif bnz != 0:       # both nonzero: take the smaller
            anz = min(anz, bnz)
        # (if bnz == 0, anz is already the value to keep)
        v.append(anz)
    # choose whatever sparse output format you want; many can be
    # constructed like this:
    return sp.csr_matrix((v, (r, c)), shape=a.shape)
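A quick check against the same toy matrices (a sketch, assuming the mat1 and mat2 from the edit above):

a = sp.csr_matrix([[2., 0.], [5., 0.]])
b = sp.csr_matrix([[3., 4.], [0., 0.]])
print(sp_min_union(a, b).toarray())
# [[2. 4.]
#  [5. 0.]]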

Walk along 2D numpy array as long as values remain the same

Short description
I want to walk along a numpy 2D array, starting from different points and moving in specified directions (either 1 or -1), until the value in another column changes (see below).
Current code
First let's generate a dataset:
import numpy as np

# Generate big random dataset
# first column is id and second one is a number
np.random.seed(123)
c1 = np.random.randint(0, 100, size=1000000)
c2 = np.random.randint(0, 20, size=1000000)
c3 = np.random.choice([1, -1], 1000000)
m = np.vstack((c1, c2, c3)).T
m = m[m[:, 0].argsort()]
Then I wrote the following code that starts at specific rows in the matrix (start_points) then keeps extending in the specified direction (direction_array) until the metadata changes:
def walk(mat, start_array):
    start_mat = mat[start_array]
    metadata = start_mat[:, 1]
    direction_array = start_mat[:, 2]
    walk_array = start_array
    while True:
        walk_array = np.add(walk_array, direction_array)
        try:
            walk_mat = mat[walk_array]
            walk_metadata = walk_mat[:, 1]
            if sorted(metadata) != sorted(walk_metadata):
                raise IndexError
        except IndexError:
            return start_mat, mat[walk_array + (direction_array * -1)]
import time

s = time.time()
for i in range(100000):
    start_points = np.random.randint(0, 1000000, size=3)
    res = walk(m, start_points)
Question
While the above code works fine, I think there must be an easier/more elegant way to walk along a numpy 2D array from different start points until the value of another column changes. For example, this requires me to slice the input array at every step of the while loop, which seems quite inefficient (especially when I have to run walk millions of times).
You don't have to index the whole input array inside the while loop; you can just use the column whose values you want to check.
I also refactored your code a bit so that there is no while True statement, and no if that raises an error for no particular reason.
Code:
def walk(mat, start_array):
    start_mat = mat[start_array]
    metadata = sorted(start_mat[:, 1])
    direction_array = start_mat[:, 2]
    data = mat[:, 1]
    walk_array = np.add(start_array, direction_array)
    try:
        while metadata == sorted(data[walk_array]):
            walk_array = np.add(walk_array, direction_array)
    except IndexError:
        pass
    return start_mat, mat[walk_array - direction_array]
In this particular case, if len(start_array) is a big number (thousands of elements), you could use collections.Counter instead of sorting, as it will be much faster.
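For instance, a minimal sketch of the multiset comparison with Counter (made-up lists):

from collections import Counter

a = [3, 1, 2, 3]
b = [1, 3, 3, 2]
print(sorted(a) == sorted(b))    # True, O(n log n)
print(Counter(a) == Counter(b))  # True, roughly O(n)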
I was also thinking of another approach: precomputing an array of the desired slices, each running in the correct direction.
This approach seems rather dirty, but I'll post it anyway in case you find it useful.
Code:
def walk(mat, start_array):
    start_mat = mat[start_array]
    metadata = sorted(start_mat[:, 1])
    direction_array = start_mat[:, 2]
    data = mat[:, 1]
    walk_slices = zip(*[
        data[start_array[i] + direction_array[i]::direction_array[i]]
        for i in range(len(start_array))
    ])
    for step, walk_metadata in enumerate(walk_slices):
        if metadata != sorted(walk_metadata):
            break
    return start_mat, mat[start_array + (direction_array * step)]
To perform the operation starting from a single row, define the following class:
class Walker:
    def __init__(self, tbl, row):
        self.tbl = tbl
        self.row = row
        self.dir = self.tbl[self.row, 2]

    # How many rows can I move from "row" in the indicated direction
    # while the metadata doesn't change?
    def numEq(self):
        # Metadata from "row" onwards in the required direction
        md = self.tbl[self.row::self.dir, 1]
        return ((md != md[0]).cumsum() == 0).sum() - 1

    # Get the row "n" positions from "row" in the indicated direction
    def getRow(self, n):
        return self.tbl[self.row + n * self.dir]
Then, to get the result, run:
def walk_2(m, start_points):
    # Create a walker for each starting point
    wlk = [Walker(m, n) for n in start_points]
    # How many rows can all walkers move?
    dist = min([w.numEq() for w in wlk])
    # Return the rows at the stopping positions
    return np.vstack([w.getRow(dist) for w in wlk])
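A usage sketch, with m and start_points as defined in the question:

start_points = np.random.randint(0, 1000000, size=3)
print(walk_2(m, start_points))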
The execution time of my code is roughly the same as yours,
but in my opinion my code is more readable and concise.

Optimize code for step function using only NumPy

I'm trying to optimize the function 'pw' in the following code using only NumPy functions (or perhaps list comprehensions).
from time import time
import numpy as np

def pw(x, udata):
    """
    Creates the step function

                  | 1, if d0 <= x < d1
                  | 2, if d1 <= x < d2
    pw(x, data) = | ...
                  | N, if d(N-1) <= x < dN
                  | 0, otherwise

    where di is the ith element in data.
    INPUT:  x -- interval which the step function is defined over
            data -- an ordered set of data (without repetitions)
    OUTPUT: pw_func -- an array of size x.shape[0]
    """
    vals = np.arange(1, udata.shape[0]+1).reshape(udata.shape[0], 1)
    pw_func = np.sum(np.where(np.greater_equal(x, udata)*np.less(x, np.roll(udata, -1)), vals, 0), axis=0)
    return pw_func
N = 50000
x = np.linspace(0,10,N)
data = [1,3,4,5,5,7]
udata = np.unique(data)
ti = time()
pw(x,udata)
tf = time()
print(tf - ti)
import cProfile
cProfile.run('pw(x,udata)')
cProfile.run tells me that most of the overhead comes from np.where (about 1 ms), but I'd like to create faster code if possible. It seems that performing the operations row-wise versus column-wise makes some difference, unless I'm mistaken, but I think I've accounted for that. I know that list comprehensions can sometimes be faster, but I couldn't figure out a faster way than what I'm already doing.
Searchsorted seems to yield better performance but that 1 ms still remains on my computer:
(modified)
def pw(xx, uu):
    """
    Creates the step function

                  | 1, if d0 <= x < d1
                  | 2, if d1 <= x < d2
    pw(x, data) = | ...
                  | N, if d(N-1) <= x < dN
                  | 0, otherwise

    where di is the ith element in data.
    INPUT:  x -- interval which the step function is defined over
            data -- an ordered set of data (without repetitions)
    OUTPUT: pw_func -- an array of size x.shape[0]
    """
    inds = np.searchsorted(uu, xx, side='right')
    vals = np.arange(1, uu.shape[0]+1)
    pw_func = vals[inds[inds != uu.shape[0]]]
    num_mins = np.sum(xx < np.min(uu))
    num_maxs = np.sum(xx > np.max(uu))
    pw_func = np.concatenate((np.zeros(num_mins), pw_func, np.zeros(xx.shape[0]-pw_func.shape[0]-num_mins)))
    return pw_func
This answer using piecewise seems pretty close, but that's for scalar x0 and x1. How would I do it with arrays? And would it be more efficient?
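For reference, a hedged sketch of what np.piecewise looks like on array input (made-up small data; I haven't benchmarked it against the versions above):

import numpy as np

x = np.linspace(0, 10, 11)
udata = np.array([1., 3., 4., 5., 7.])
conds = [(x >= lo) & (x < hi) for lo, hi in zip(udata[:-1], udata[1:])]
vals = list(range(1, len(udata)))  # values 1..N-1 for the N-1 intervals
y = np.piecewise(x, conds, vals)   # 0 wherever no condition holds
print(y)  # [0. 1. 1. 2. 3. 4. 4. 0. 0. 0. 0.]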
Understandably, x may be pretty big but I'm trying to put it through a stress test.
I'm still learning, though, so any hints or tricks that could help me out would be great.
EDIT
There seems to be a mistake in the second function, since its resulting array doesn't match the one from the first function (which I'm confident works):
N1 = pw1(x,udata.reshape(udata.shape[0],1)).shape[0]
N2 = np.sum(pw1(x,udata.reshape(udata.shape[0],1)) == pw2(x,udata))
print(N1 - N2)
yields
15000
data points that are not the same. So it seems that I don't know how to use 'searchsorted'.
EDIT 2
Actually I fixed it:
pw_func = vals[inds[inds != uu.shape[0]]]
was changed to
pw_func = vals[inds[inds[(inds != uu.shape[0])*(inds != 0)]-1]]
so at least the resulting arrays match. But the question still remains on whether there's a more efficient way of going about doing this.
EDIT 3
Thanks Tin Lai for pointing out the mistake. This one should work
pw_func = vals[inds[(inds != uu.shape[0])*(inds != 0)]-1]
Maybe a more readable way of presenting it would be
non_endpts = (inds != uu.shape[0])*(inds != 0) # only consider the points in between the min/max data values
shift_inds = inds[non_endpts]-1 # searchsorted side='right' includes the left end point and not right end point so a shift is needed
pw_func = vals[shift_inds]
I think I got lost in all those brackets! I guess that's the importance of readability.
A very abstract yet interesting problem! Thanks for entertaining me, I had fun :)
p.s. I'm not sure about your pw2; I wasn't able to get it to output the same as pw1.
For reference, the original pws:
def pw1(x, udata):
    vals = np.arange(1, udata.shape[0]+1).reshape(udata.shape[0], 1)
    pw_func = np.sum(np.where(np.greater_equal(x, udata)*np.less(x, np.roll(udata, -1)), vals, 0), axis=0)
    return pw_func

def pw2(xx, uu):
    inds = np.searchsorted(uu, xx, side='right')
    vals = np.arange(1, uu.shape[0]+1)
    pw_func = vals[inds[inds[(inds != uu.shape[0])*(inds != 0)]-1]]
    num_mins = np.sum(xx < np.min(uu))
    num_maxs = np.sum(xx > np.max(uu))
    pw_func = np.concatenate((np.zeros(num_mins), pw_func, np.zeros(xx.shape[0]-pw_func.shape[0]-num_mins)))
    return pw_func
My first attempt utilised a lot of broadcasting operations from numpy:
def pw3(x, udata):
    # the None slice is to create a new axis
    step_bool = x >= udata[None, :].T
    # we exploit the fact that bools have integer value 1,
    # skipping the last value in "data"
    step_vals = np.sum(step_bool[:-1], axis=0)
    # for the step_bool that we skipped in the previous step (last index),
    # we set the value to zero so that the result is zeroed out once we
    # reach the last value in "data"
    step_vals[step_bool[-1]] = 0
    return step_vals
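The broadcasting step can be seen on a tiny made-up example:

import numpy as np

x = np.array([0.5, 2.0, 6.0])
udata = np.array([1., 3., 4., 5., 7.])
step_bool = x >= udata[None, :].T      # shape (5, 3): row i is "x >= udata[i]"
print(step_bool.astype(int))
# [[0 1 1]
#  [0 0 1]
#  [0 0 1]
#  [0 0 1]
#  [0 0 0]]
print(np.sum(step_bool[:-1], axis=0))  # [0 1 4]: the interval index of each x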
After looking at the searchsorted in your pw2, I came up with a new approach that utilises it with much higher performance:
def pw4(x, udata):
    inds = np.searchsorted(udata, x, side='right')
    # fix-ups the last data if x is already out of range of data[-1]
    if x[-1] > udata[-1]:
        inds[inds == inds[-1]] = 0
    return inds
Plots with:
import matplotlib.pyplot as plt

plt.plot(pw1(x, udata.reshape(udata.shape[0], 1)), label='pw1')
plt.plot(pw2(x, udata), label='pw2')
plt.plot(pw3(x, udata), label='pw3')
plt.plot(pw4(x, udata), label='pw4')
(plot with data = [1,3,4,5,5,7] omitted)
(plot with data = [1,3,4,5,5,7,11] omitted)
pw1, pw3 and pw4 are all identical:
print(np.all(pw1(x,udata.reshape(udata.shape[0],1)) == pw3(x,udata)))
>>> True
print(np.all(pw1(x,udata.reshape(udata.shape[0],1)) == pw4(x,udata)))
>>> True
Performance (repeat times the statement 3 times; each value is the total for number=1000 runs):
print(timeit.Timer('pw1(x,udata.reshape(udata.shape[0],1))', "from __main__ import pw1, x, udata").repeat(number=1000))
>>> [3.1938983199979702, 1.6096494779994828, 1.962694135003403]
print(timeit.Timer('pw2(x,udata)', "from __main__ import pw2, x, udata").repeat(number=1000))
>>> [0.6884554479984217, 0.6075002400029916, 0.7799002879983163]
print(timeit.Timer('pw3(x,udata)', "from __main__ import pw3, x, udata").repeat(number=1000))
>>> [0.7369808239964186, 0.7557657590004965, 0.8088172269999632]
print(timeit.Timer('pw4(x,udata)', "from __main__ import pw4, x, udata").repeat(number=1000))
>>> [0.20514375300263055, 0.20203858999957447, 0.19906871100101853]

Python optimization with multiple and changing variables

I'm new to Python and looking to find an optimized solution with a number of constraints, where those constraints are based on functions of the outputs.
First two constraints are straightforward:
1) output1 +output2 + output3 = 1
2) output1, output2, and output3 must all be >= 0
Last constraint needs functions of the outputs to be EQUAL:
3) f(output1) == f(output2) == f(output3)
In this case the function array is produced by a matrix multiplication with the outputs:
F = cov.dot(array([output1, output2, output3])) * array([output1, output2, output3])
f(output1) = F[0], f(output2) = F[1], f(output3) = F[2]
Hopefully I've described the problem clearly... Eventually I want to extend this to more than 3 outputs.
What I have below gives me output values that don't appear to follow the constraints at all (it gives me a negative value). I assume I'm entering the constraints wrong... or perhaps there is an easier way to do this with np.linalg.solve?
import numpy as np
from scipy.optimize import fsolve

cov = np.array([0.04, 0.0015, 0.03,
                0.0015, 0.0025, 0.000625,
                0.03, 0.000625, 0.0625]).reshape(3, 3)
weights = np.array([0.3, 0.2, 0.5])

def RC(w):
    return cov.dot(w)*w

riskcont = RC(weights)

def PV(riskcont):
    return np.sqrt(riskcont.sum())

portvol = PV(riskcont)

def ERC(z):
    w1 = z[0]
    w2 = z[1]
    w3 = z[2]
    # 1) weights sum to 100%
    out = w1 + w2 + w3 - 1
    # 2) weights above zero
    out.append((w1*w2*w3) > 0)
    # 3) riskcont must all be equal
    out.append(riskcont[0] == riskcont[1] == riskcont[2])  # == riskcont(w4)
    return out

z = fsolve(ERC, [1/3, 1/3, 1/3])
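For what it's worth, a hedged sketch of how the system could be phrased for fsolve: fsolve finds roots of a vector of residuals, so each constraint has to be written as an expression that equals zero at the solution and recomputed from the candidate weights. (The non-negativity constraint cannot be expressed this way; that would need a bounded method such as scipy.optimize.minimize.)

import numpy as np
from scipy.optimize import fsolve

cov = np.array([0.04, 0.0015, 0.03,
                0.0015, 0.0025, 0.000625,
                0.03, 0.000625, 0.0625]).reshape(3, 3)

def ERC_residuals(w):
    F = cov.dot(w) * w          # risk contributions, recomputed from w
    return [w.sum() - 1.0,      # 1) outputs sum to 1
            F[0] - F[1],        # 3) f(output1) == f(output2) ...
            F[1] - F[2]]        # ... == f(output3), as pairwise differences

w = fsolve(ERC_residuals, np.array([1/3, 1/3, 1/3]))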

python -- IndexError: list index out of range / dividing lists

I'm not great at programming and I've been driving myself crazy trying to figure this out.
I have a program to calculate binding energies that stores values in lists. At a certain point one list is divided by another, but I keep getting this error:
Traceback (most recent call last):
File "semf.py", line 76, in <module>
BpN = BpN(A, Z)
File "semf.py", line 68, in BpN
bper = B[i]/A[i]
IndexError: list index out of range
The relevant code is below, sorry there's so much of it:
import numpy as np
# (the constants a1..a4 are defined elsewhere in the program)

A = 0.0

def mass_A(Z):
    """
    ranges through all A values Z, ..., 3Z+1 for Z ranging from 1 to 100
    """
    a = 0.0
    a = np.arange(Z, 3*Z+1)
    return a

def semf(A, Z):
    """
    The semi-empirical mass formula (SEMF) calculates the binding energy
    of the nucleus. N is the number of neutrons.
    """
    i = 0
    E = []
    for n in A:
        # if statement to determine value of a5
        if np.all(Z % 2 == 0 and (A - Z) % 2 == 0):
            a5 = 12.0
        elif np.all(Z % 2 != 0 and (A - Z) % 2 != 0):
            a5 = -12.0
        else:
            a5 = 0
        B = a1*A[i] - a2*A[i]**(2/3) - a3*(Z**2 / A[i]**(1/3)) \
            - a4*((A[i] - 2*Z)**2 / A[i]) + a5 / A[i]**(1/2)
        i += 1
        E.append(B)
    return E

def BpN(A, Z):
    """
    function to calculate the binding energy per nucleon (B/A)
    """
    i = 0
    R = []
    for n in range(1, 101):
        bper = B[i]/A[i]
        i += 1
        R.append(bper)
    return R

for Z in range(1, 101):
    A = mass_A(Z)
    B = semf(A, Z)
    BpN = BpN(A, Z)
It seems like somehow the two lists A and B aren't the same length, but I'm not sure how to fix that issue.
Please help.
Thanks
In Python, list indices start from zero and not from one.
It's hard to be sure without seeing your code in its entirety, but range(1,101) looks suspect. If the list has 100 elements, the correct bounds for the loop are range(0,100) or, equivalently, range(100) or, better still, range(len(A)).
P.S. Since you're using Numpy already, you should look into rewriting your code using Numpy arrays instead of using lists and loops. If A and B were Numpy arrays, your entire troublesome function could become:
return B / A
(This is element-wise division of B by A.)
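For instance, a minimal sketch with made-up arrays:

import numpy as np

A = np.arange(1.0, 6.0)                # [1. 2. 3. 4. 5.]
B = np.array([2., 6., 12., 20., 30.])
print(B / A)                           # [2. 3. 4. 5. 6.]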
