Sum of elements of numpy array not same as total - python

I'm trying to count pairs and store them in two different histograms: one keeps a separate histogram per parent object, and the other just keeps the total. That means I have a loop that looks like this:
for k in range(N_parents):
    pair_hist[k, bin] += 1
    total_pair_hist[bin] += 1
where pair_hist and total_pair_hist are defined as
pair_hist = np.zeros((N_parents, bins.shape[0]), dtype=np.uint64)
total_pair_hist = np.zeros(bins.shape[0], dtype=np.uint64)
I'd expect that summing the elements of pair_hist across all parents (axis=0) would give the total histogram. The funny thing is, when I take that sum:
onehalo_sum_ind = np.sum(pair_hist, axis=0)
I don't get exactly total_pair_hist, but something slightly different:
total_pair_hist = [ 287248245 448773033 695820015 1070797576 1634146741 2466680801
3667159080 5334307986 7524739978 10206208064 13237161068 16466436715
19231751113 20949333183 21254336387 19497450101 16459529579 13038604111
9783826702 7006904025 4813946458 3207605915 2097437543 1355158303
869077173 555036759 353732683 225171870 143179912 0]
pair_hist = [ 287267022 448887401 696415932 1073435699 1644677789 2503693266
3784008845 5665555755 8380564635 12201977310 17382403650 23929909625
31103373709 36859534246 38146287402 33454446858 25689430007 18142721164
12224099624 8035266046 5211441720 3353187036 2147027818 1370663213
873519714 556182465 353995293 225224668 143189173 0]
Any idea of what's going on? Thank you in advance :)

Sorry for the late reply, but I didn't have time to work on it before. The problem was caused by numba: I was using the parallel=True flag to parallelise one of the loops, and that caused the error.
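For anyone hitting the same symptom: with parallel=True, concurrent in-place updates such as total_pair_hist[bin] += 1 from different threads can race, so some increments get lost, which would explain why the summed histogram disagrees with the total. Below is a minimal sketch of one common workaround (my own illustration, not the poster's code; bin_ids, n_bins, and n_threads are hypothetical names): give each thread a private histogram and merge afterwards.

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def count_pairs_partial(bin_ids, n_bins, n_threads):
    # one private histogram per thread avoids racing on shared bins
    partial = np.zeros((n_threads, n_bins), dtype=np.uint64)
    chunk = (bin_ids.size + n_threads - 1) // n_threads
    for t in prange(n_threads):
        start = t * chunk
        stop = min(start + chunk, bin_ids.size)
        for i in range(start, stop):
            partial[t, bin_ids[i]] += np.uint64(1)
    return partial

# merge outside the jitted function:
# total_pair_hist = count_pairs_partial(bin_ids, n_bins, 8).sum(axis=0)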

Improving loop in loops with Numpy

I am using numpy arrays instead of pandas for speed. However, I haven't been able to improve my code with broadcasting, fancy indexing, etc. Instead I am using nested loops, as below. It works, but it seems ugly and inefficient to me.
Basically, I am imitating pandas' groupby at the step mydata[mydata[:,1]==i]; you can think of column 1 as a firm id. Then, against the lookup data, I check whether all of a lookup entry's values occur within the selected firm at the step all(np.isin(lookup[u], d[:,3])). But as I said at the beginning, I feel uncomfortable about this.
out = []
for i in np.unique(mydata[:,1]):
    d = mydata[mydata[:,1]==i]
    for u in range(0, len(lookup)):
        control = all(np.isin(lookup[u], d[:,3]))
        if control:
            out.append(d[np.isin(d[:,3], lookup[u])])
It takes about 0.27 seconds, but there must be a cleverer alternative.
I also tried Numba's jit(), but it does not work.
Could anyone help me with this?
Thanks in advance!
Fake Data:
a = np.repeat(np.arange(100) + 5000, np.random.randint(50, 100, 100))
b = np.random.randint(100, 200, len(a))
c = np.random.randint(10, 70, len(a))
index = np.arange(len(a))
mydata = np.vstack((index, a, b, c)).T
lookup = []
for i in range(0, 60):
    lookup.append(np.random.randint(10, 70, np.random.randint(3, 6, 1)))
I had some trouble understanding the goal of your program, but I got a decent performance improvement by refactoring your second for loop. I was able to compress your code to three or four lines.
f = (
    lambda lookup: out.append(d[np.isin(d[:, 3], lookup)])
    if all(np.isin(lookup, d[:, 3]))
    else None
)
out = []
for i in np.unique(mydata[:, 1]):
    d = mydata[mydata[:, 1] == i]
    list(map(f, lookup))
This resolves to the same output list you received previously, and the code runs almost twice as fast (at least on my machine).
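For comparison, here is a further sketch of my own (untested against your full data): precompute a Python set of the column-3 values per firm, so each lookup test becomes a cheap subset check instead of an np.isin scan over d.

out = []
for i in np.unique(mydata[:, 1]):
    d = mydata[mydata[:, 1] == i]
    col3 = set(d[:, 3])                        # O(1) membership test per element
    for lu in lookup:
        if set(lu) <= col3:                    # all lookup values present in this firm
            out.append(d[np.isin(d[:, 3], lu)])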

Getting wrong results with np.argpartition, while selecting maximum n values from an array

So I was using this answer to the question 'How do I get indices of the N maximum values in a NumPy array?'. I used it on my ML model, which outputs LogSoftmax layer values, and I wanted the top 4 classes for each output. In most cases it sorted and returned the values correctly, but in a very few cases I see partially unsorted results like this:
arr = np.array([-3.0302, -2.7103, -7.4844, -3.4761, -5.3009, -5.2121, -3.7549, -4.7834,
                -5.8870, -3.4839, -5.0104, -3.0992, -4.8823, -0.3319, -6.8084])
ind = np.argpartition(arr, -4)[-4:]
print(arr[ind])
and the output is
[-3.0992 -3.0302 -0.3319 -2.7103]
which is only partially sorted; the maximum values should come last, but that is not the case here. I checked other examples and it works fine there. Like
arr = np.array([45, 35, 67.345, -34.5555, 66, -0.23655, 11.0001, 0.234444444])
ind = np.argpartition(arr, -4)[-4:]
print(arr[ind])
output
[35. 45. 66. 67.345]
What could be the reason? Did I miss anything?
If you're not planning on actually utilizing the sorted indices, why not just use np.sort?
>>> arr = np.array([-3.0302, -2.7103, -7.4844, -3.4761, -5.3009, -5.2121, -3.7549,
...                 -4.7834, -5.8870, -3.4839, -5.0104, -3.0992, -4.8823, -0.3319, -6.8084])
>>> np.sort(arr)[-4:]
array([-3.0992, -3.0302, -2.7103, -0.3319])
Alternatively, as read here, you could pass a range as the kth option of np.argpartition, so that each of the last four positions lands in its sorted place (argpartition returns indices, so index back into arr to see the values):
>>> ind = np.argpartition(arr, range(-4, 0))[-4:]
>>> arr[ind]
array([-3.0992, -3.0302, -2.7103, -0.3319])
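The reason for the "partially unsorted" output: np.argpartition only guarantees that the kth element ends up in its sorted position, with smaller values before it and larger values after it; the order within the top-k slice itself is arbitrary. If you want both the indices and sorted values, a small follow-up sort of just those four elements is enough (a sketch using arr from above):

ind = np.argpartition(arr, -4)[-4:]    # indices of the 4 largest, in arbitrary order
top4 = ind[np.argsort(arr[ind])]       # reorder those indices ascending by value
print(arr[top4])                       # [-3.0992 -3.0302 -2.7103 -0.3319]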

Python - masking in a for loop?

I have three arrays, r_vals, Tgas_vals, and n_vals. They are all numpy arrays of shape (9998,). The arrays have repeated values, and I want to iterate over the unique values of r_vals and find the corresponding values of Tgas_vals and n_vals, so I can use the last two arrays to calculate a weighted average. This is what I have right now:
def calc_weighted_average(r_vals, Tgas_vals, n_vals):
    for r in r_vals:
        mask = r == r_vals
        count = 0
        count += 1
        for t in Tgas_vals[mask]:
            print(count, np.average(Tgas_vals[mask]*n_vals[mask]))

weighted_average = calc_weighted_average(r_vals, Tgas_vals, n_vals)
The problem I am running into is that the function only loops through once. Did I implement the mask incorrectly, or is the problem somewhere else in the for loop?
I'm not sure exactly what you plan to do with all the averages, so I'll toss this out there and see if it's helpful. The following code calculates one weighted average per unique value of r_vals and stores them all in a dictionary (which is then printed out).
def calc_weighted_average(r_vals, z_vals, Tgas_vals, n_vals):
    weighted_vals = {}  # new variable to store rval => weighted average
    for r in np.unique(r_vals):
        mask = r_vals == r  # I think yours was backwards
        weighted_vals[r] = np.average(Tgas_vals[mask]*n_vals[mask])
    return weighted_vals

weighted_averages = calc_weighted_average(r_vals, z_vals, Tgas_vals, n_vals)
for rval in weighted_averages:
    print('%i : %0.4f' % (rval, weighted_averages[rval]))  # assuming rval is an integer
Alternatively, you may want to factor z_vals in somehow; your question was not clear on this.
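As a side note, and purely my assumption about the intent: if "weighted average" means averaging Tgas_vals with n_vals as the weights, np.average computes that directly via its weights argument, e.g.

weighted_vals[r] = np.average(Tgas_vals[mask], weights=n_vals[mask])

which divides by the sum of the weights, unlike the plain mean of the products above.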

How to efficiently mutate certain num of values in an array?

Given an initial 2-D array:
initial = [
[0.6711999773979187, 0.1949000060558319],
[-0.09300000220537186, 0.310699999332428],
[-0.03889999911189079, 0.2736999988555908],
[-0.6984000205993652, 0.6407999992370605],
[-0.43619999289512634, 0.5810999870300293],
[0.2825999855995178, 0.21310000121593475],
[0.5551999807357788, -0.18289999663829803],
[0.3447999954223633, 0.2071000039577484],
[-0.1995999962091446, -0.5139999985694885],
[-0.24400000274181366, 0.3154999911785126]]
The goal is to multiply some random values inside the array by a random percentage. Let's say only 3 random numbers get multiplied by a random multiplier; we should get something like this:
output = [
[0.6711999773979187, 0.52],
[-0.09300000220537186, 0.310699999332428],
[-0.03889999911189079, 0.2736999988555908],
[-0.6984000205993652, 0.6407999992370605],
[-0.43619999289512634, 0.5810999870300293],
[0.84, 0.21310000121593475],
[0.5551999807357788, -0.18289999663829803],
[0.3447999954223633, 0.2071000039577484],
[-0.1995999962091446, 0.21],
[-0.24400000274181366, 0.3154999911785126]]
I've tried doing this:
def mutate(array2d, num_changes):
    for _ in range(num_changes):
        row, col = array2d.shape
        rand_row = np.random.randint(row)
        rand_col = np.random.randint(col)
        cell_value = array2d[rand_row][rand_col]
        array2d[rand_row][rand_col] = random.uniform(0, 1) * cell_value
    return array2d
And that works for 2-D arrays, but there's a chance that the same value gets mutated more than once =(
I also don't think it's efficient, and it only works on 2-D arrays.
Is there a way to do such a "mutation" on an array of any shape, and more efficiently?
There's no restriction on which values the "mutation" can pick, but the number of mutations must match the user-specified number exactly.
One fairly simple way would be to work with a raveled view of the array. You can generate all your numbers at once that way, and make it easier to guarantee that you won't process the same index twice in one call:
def mutate(array_anyd, num_changes):
    raveled = array_anyd.reshape(-1)
    indices = np.random.choice(raveled.size, size=num_changes, replace=False)
    values = np.random.uniform(0, 1, size=num_changes)
    raveled[indices] *= values
I use array_anyd.reshape(-1) in favor of array_anyd.ravel() because according to the docs, the former is less likely to make an inadvertent copy.
There is of course still such a possibility. You can add an extra check and write back if you need to. A more efficient way would be to use np.unravel_index to avoid creating a view to begin with:
def mutate(array_anyd, num_changes):
    indices = np.random.choice(array_anyd.size, size=num_changes, replace=False)
    indices = np.unravel_index(indices, array_anyd.shape)
    values = np.random.uniform(0, 1, size=num_changes)
    array_anyd[indices] *= values
There is no need to return anything because the modification is done in-place. Conventionally, such functions do not return anything. See for example list.sort vs sorted.
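A quick usage sketch (my own example; initial is the list from the question). Because the updates happen through a view into the caller's array, the array changes in place:

arr = np.array(initial)
mutate(arr, 3)   # exactly 3 distinct entries scaled by factors drawn from U(0, 1)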
Using shuffle instead of np.random.choice, this would be a different solution. It works on an array of any shape.
def mutate(arrayIn, num_changes):
    mult = np.zeros(arrayIn.ravel().shape[0])
    mult[:num_changes] = np.random.uniform(0, 1, num_changes)
    np.random.shuffle(mult)
    mult = mult.reshape(arrayIn.shape)
    arrayIn = arrayIn + mult*arrayIn
    return arrayIn

Append Rows of Different Lengths to the Same Variable

I am trying to append a lengthy list of rows to the same variable. It works great for the first thousand or so iterations in the loop (all of which have the same length), but near the end of the file the rows get a bit shorter; I still want to append them, but I am not sure how to handle it.
The script gives me an out of range error, as expected.
Here is what the part of code in question looks like:
ii = 0
NNCat = []
NNCatelogue = []
while ii <= len(lines):
    NNCat = (ev_id[ii], nn1[ii], nn2[ii], nn3[ii], nn4[ii], nn5[ii], nn6[ii],
             nn7[ii], nn8[ii], nn9[ii], nn10[ii], nn11[ii])
    NNCatelogue.append(NNCat)
    ii = ii + 1
print NNCatelogue, ii
Any help on this would be greatly appreciated!
I'll answer the question you didn't ask first ;) : how can this code be more pythonic?
Instead of
ii = 0
NNCat = []
NNCatelogue = []
while ii <= len(lines):
    NNCat = (ev_id[ii], nn1[ii], nn2[ii], nn3[ii], nn4[ii], nn5[ii], nn6[ii],
             nn7[ii], nn8[ii], nn9[ii], nn10[ii], nn11[ii])
    NNCatelogue.append(NNCat)
    ii = ii + 1
you should do
NNCat = []
NNCatelogue = []
for ii, line in enumerate(lines):
    NNCat = (ev_id[ii], nn1[ii], nn2[ii], nn3[ii], nn4[ii], nn5[ii], nn6[ii],
             nn7[ii], nn8[ii], nn9[ii], nn10[ii], nn11[ii])
    NNCatelogue.append(NNCat)
During each pass ii will be incremented by one for you and line will be the current line.
As for your short lines, you have two choices:
1. Use a special value (such as None) to fill in when you don't have a real value.
2. Check the lengths of nn1, nn2, ..., nn11 to see if they are long enough.
The second solution will be much more verbose, hard to maintain, and confusing. I strongly recommend using None (or another special value you create yourself) as a placeholder when there is no data.
def gvop(vals, indx):  # get value or padding
    return vals[indx] if indx < len(vals) else None

NNCatelogue = [(gvop(ev_id, ii), gvop(nn1, ii), gvop(nn2, ii), gvop(nn3, ii), gvop(nn4, ii),
                gvop(nn5, ii), gvop(nn6, ii), gvop(nn7, ii), gvop(nn8, ii), gvop(nn9, ii),
                gvop(nn10, ii), gvop(nn11, ii)) for ii in xrange(0, len(lines))]
By defining this other function to return either the correct value or padding, you can ensure rows are the same length. You can change the padding to anything, if None is not what you want.
Then the list comp creates a list of tuples as before, except containing padding in cases where some of the lines in the input are shorter.
from itertools import izip_longest
NNCatelogue = list(izip_longest(ev_id, nn1, nn2, ... nn11, fillvalue=None))
See here for the documentation of izip_longest. Do yourself a favour and skip the list() around the iterator if you don't need it. In many cases the iterator works just as well as the list, and you save a lot of memory, especially with long lists like the ones you're grouping together here.
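A tiny illustration of the fill behaviour, with toy data of my own (Python 2, matching the code above):

from itertools import izip_longest
print list(izip_longest([1, 2, 3], ['a', 'b'], fillvalue=None))
# [(1, 'a'), (2, 'b'), (3, None)]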
