I am trying to loop from 0.01 to 10: between 0.01 and 0.1 use 0.01 as the step, between 0.1 and 1.0 use 0.1 as the step, and between 1.0 and 10.0 use 1.0 as the step.
I have the while loop code written, but want to make it more pythonic.
i = 0.01
while i < 10:
    # do something
    print i
    if i < 0.1:
        i += 0.01
    elif i < 1.0:
        i += 0.1
    else:
        i += 1
This will produce
0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 2, 3, 4, 5, 6, 7, 8, 9
A special-purpose generator function might be the right way to go. This would effectively separate the boring part (getting the list of numbers right) from the interesting part (the # do something in your example).
def my_range():
    for j in .01, .1, 1.:
        for i in range(1, 10, 1):
            yield i * j

for x in my_range():
    print x
One approach would be to use two loops: one for the order of magnitude, and one for the values from 1 to 9:
for exp in range(-2, 1):
    for i in range(1, 10):
        print("{:.2f}".format(i * 10 ** exp))
You could use a nested loop: the outer one iterates over the precision and the inner one is just range(1, 10):
for precision in (0.01, 0.1, 1):
    for i in range(1, 10):
        i *= precision
        print(i)
However, floats are probably not going to work here, as this prints values like 0.30000000000000004 on my machine. For precise decimal values you would want to use the decimal module:
import decimal

for precision in ("0.01", "0.1", "1"):
    for i in range(1, 10):
        i *= decimal.Decimal(precision)
        print(i)
Just a single line of code using a list comprehension -
for k in [i * j for j in (0.01, 0.1, 1) for i in range(1, 10)]:
    print(k)  # do something with each value
Can't be more pythonic!
Just in case you wished to replace the loop with vectorized code...
In [63]: np.ravel(10.**np.arange(-2, 1)[:,None] * np.arange(1, 10)[None,:])
Out[63]:
array([ 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09,
0.1 , 0.2 , 0.3 , 0.4 , 0.5 , 0.6 , 0.7 , 0.8 , 0.9 ,
1. , 2. , 3. , 4. , 5. , 6. , 7. , 8. , 9. ])
I'd recommend a generator function as well, but if the steps are not such convenient powers of each other, I'd write it like this:
def my_range():
    i = 0
    while i < 0.1:
        i += 0.01
        yield i
    while i < 1:
        i += 0.1
        yield i
    while i < 10:
        i += 1
        yield i

for x in my_range():
    print x
It might be a bit more repetitive, but it illustrates much better what is going on and that the yielded values are monotonically increasing (regardless of what numbers you put in).
If it gets too repetitive, use
def my_range():
    i = 0
    for (end, step) in [(0.1, 0.01), (1, 0.1), (10, 1)]:
        while i < end:
            i += step
            yield i
You could do something like:
import numpy as np
list1 = np.arange(0.01, 0.1, 0.01)
list2 = np.arange(0.1, 1, 0.1)
list3 = np.arange(1, 10, 1)
i_list = np.concatenate((list1, list2, list3)) # note the double parenthesis
for i in i_list:
    ...
Basically you create the entire list of values that you need up front, i_list, then just iterate through them in your for loop.
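If the float noise from np.arange bothers you (values like 0.30000000000000004), a small optional tweak, assuming two decimal places are enough for these step sizes, is to round the concatenated array before looping:
import numpy as np

i_list = np.concatenate((np.arange(0.01, 0.1, 0.01),
                         np.arange(0.1, 1, 0.1),
                         np.arange(1, 10, 1)))
i_list = np.round(i_list, 2)  # e.g. 0.30000000000000004 becomes 0.3
for i in i_list:
    ...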
Related
I have two numpy arrays, x and y. I want to be able to extract the value of x that is closest to 1 that also has a y value greater than 0.96, and then get the index of that value.
x = [0.5, 0.8, 0.99, 0.8, 0.85, 0.9, 0.91, 1.01, 10, 20]
y = [0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.99, 0.99, 0.99, 0.85]
In this case the x value would be 1.01 because it is closest to 1 and has a y value of 0.99.
Ideal result would be:
idx = 7
I know how to find the point nearest to 1 and how to get the index of it but I don't know how to add the second condition.
This code also works when there are multiple indexes satisfying the condition.
import numpy as np
x = [0.5, 0.8, 0.99, 0.8, 0.85, 0.9, 0.91, 1.01, 10, 20]
y = [0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.99, 0.99, 0.99, 0.85]
# differences
first_check = np.abs(np.array(x) - 1)
# extract the index of the value of x closest to 1
# (indexes, in case two or more values are equally close to 1)
indexes = np.where(first_check == np.min(first_check))[0]
indexes = [index for index in indexes if y[index] > 0.96]
print(indexes)
OUTPUT:
[7]
You can use np.argsort(abs(x - 1)) to sort the indices according to the closest value to 1. Then, grab the first y index that satisfies y > 0.96 using np.where.
import numpy as np
x = np.array([0.5, 0.8, 0.99, 0.8, 0.85, 0.9, 0.91, 1.01, 10, 20])
y = np.array([0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.99, 0.99, 0.99, 0.85])
closest_inds = np.argsort(abs(x - 1))
idx = closest_inds[np.where(y[closest_inds] > 0.96)][0]
This would give:
idx = 7
For short arrays (shorter than, say, 10k elements), the above solution can be slow because numpy has no findfirst yet; look at this long-awaited feature request.
So, in this case, the following loop will be much faster and gives the same result:
for i in closest_inds:
    if y[i] > 0.96:
        idx = i
        break
This will work on multiple conditions and lists.
x = [0.5, 0.8, 0.99, 0.8, 0.85, 0.9, 0.91, 1.01, 10, 20]
y = [0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.99, 0.99, 0.99, 0.85]
condition1 = 1.0
condition2 = 0.96
def convert(*args):
    """
    Returns a list of tuples generated from multiple lists and tuples
    """
    for x in args:
        if not isinstance(x, list) and not isinstance(x, tuple):
            return []
    size = float("inf")
    for x in args:
        size = min(size, len(x))
    result = []
    for i in range(size):
        result.append(tuple([x[i] for x in args]))
    print(result)
    return result
result = convert(x, y)
closest = min([tpl for tpl in result if tpl[0] >= condition1 and tpl[1] > condition2], key=lambda x: x[1])
index = result.index(closest)
print(f'The index of the closest numbers of x-list to 1 and y-list to 0.96 is {index}')
Output
[(0.5, 0.7), (0.8, 0.75), (0.99, 0.8), (0.8, 0.85), (0.85, 0.9), (0.9, 0.95), (0.91, 0.99), (1.01, 0.99), (10, 0.99), (20, 0.85)]
The index of the closest numbers of x-list to 1 and y-list to 0.96 is 7
I want to create 9 random values that sum up to 1. Each of the 9 values has to be within a specific range that is stored in array s:
s = np.array([
[0.1 , 0.3 ],
[0.05, 0.2 ],
[0.15, 0.2 ],
[0.15, 0.3 ],
[0.05, 0.15],
[0.07, 0.15],
[0.1 , 0.2 ],
[0.05, 0.15],
[0.01, 0.1 ]])
In array s the first column is the lower bound of the range and the second column is the upper bound. Hence the first of the 9 values has to fall between 0.1 and 0.3, the second between 0.05 and 0.2, etc., in such a way that all the values sum up to 1.
This is my latest try
def randomtosum(ranges, total):
    result = []
    for x, y in ranges:
        result.append(random.uniform(x, y))
    result.append(total - sum(result))
    return result
r = randomtosum(s,1)
but this always creates a 10th negative value...
How can I solve this?
A short, hacky way is to use your "total - sum" value as the last value and check whether it fits the bounds; if not, retry the whole list:
import numpy as np
import random
s = np.array([
[0.1, 0.3],
[0.05, 0.2],
[0.15, 0.2],
[0.15, 0.3],
[0.05, 0.15],
[0.07, 0.15],
[0.1, 0.2],
[0.05, 0.15],
[0.01, 0.1]])
def is_possible(ranges, total) -> bool:
    mins = 0
    maxs = 0
    for x, y in ranges:
        mins += x
        maxs += y
    return mins < total < maxs

def randomtosum(ranges, total) -> list:
    if is_possible(ranges, total):
        return get_randomtosum(ranges, total)
    else:
        return []

def get_randomtosum(ranges, total) -> list:
    result = []
    for x, y in ranges[:-1]:
        result.append(random.uniform(x, y))
    tmp = total - sum(result)
    if ranges[-1][0] < tmp < ranges[-1][1]:
        result.append(tmp)
        return result
    else:
        return randomtosum(ranges, total)
r = randomtosum(s, 1)
print(r)
Quick and dirty solution:
import numpy as np
import random
s = np.array([
[0.1 , 0.3 ],
[0.05, 0.2 ],
[0.15, 0.2 ],
[0.15, 0.3 ],
[0.05, 0.15],
[0.07, 0.15],
[0.1 , 0.2 ],
[0.05, 0.15],
[0.01, 0.1 ]])
raw_rands = [random.uniform(x, y) for x, y in s]
while sum(raw_rands) >= 1:
    raw_rands = [random.uniform(x, y) for x, y in s]
If the idea of a while loop running until it meets the condition scares you as much as it scares me, you could try a for loop with a range that limits the number of iterations.
raw_rands = [random.uniform(x, y) for x, y in s]
for i in range(1000):
    if sum(raw_rands) <= 1:
        break
    new_raw_rands = [random.uniform(x, y) for x, y in s]
    if sum(new_raw_rands) < sum(raw_rands):
        raw_rands = new_raw_rands
If it doesn't get the random values to sum to less than or equal to 1, at least it will keep the iteration with the smallest sum.
Use the individual ranges to weight the random numbers.
Normalise to 1 minus the sum of the lows.
Add to the lows.
import numpy as np
s = np.array([
[0.1 , 0.3 ],
[0.05, 0.2 ],
[0.15, 0.2 ],
[0.15, 0.3 ],
[0.05, 0.15],
[0.07, 0.15],
[0.1 , 0.2 ],
[0.05, 0.15],
[0.01, 0.1 ]])
low = s[ :, 0] # Array of low limits
rng = s[ :, 1] - low # Array of the range of each row
free = 1.0 - low.sum() # the free value in play 1 - sum lo
np.random.seed( 1234 ) # Make reproducible, remove in the real version.
rnd = np.random.random( size = len(rng)) * rng # random weighted by the ranges
rnd = rnd * free / rnd.sum() # Normalise to free
result = low + rnd
print( result, 'Sum ', result.sum() )
# [0.11829863 0.09457931 0.16045562 0.20627753 0.08726121
# 0.08041789 0.11320732 0.08830725 0.05119523] Sum 1.0
Making this a function.
def sum_to_one( s ):
    low = s[ :, 0]                                    # Array of low limits
    rng = s[ :, 1] - low                              # Array of the range of each row
    free = 1.0 - low.sum()                            # The free value in play: 1 - sum of lows
    rnd = np.random.random( size = len(rng)) * rng    # Random, weighted by the ranges
    rnd = rnd * free / rnd.sum()                      # Normalise to free
    return low + rnd
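A usage sketch (my addition, assuming the array s defined above and numpy already imported):
np.random.seed(1234)   # optional, only for a reproducible draw
r = sum_to_one(s)
print(r, r.sum())      # the returned values add up to 1.0 (up to float rounding)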
I would like to count the number of elements of an array that are greater than a certain value t, and save the counts in a vector a. I want to do this for different values of t.
e.g.
My vector: c = [0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]
I would like to count the number of elements of c that are greater than t=0.9, than t=0.8, than t=0.7, etc. I then want to save the counts for each value of t in a vector.
My code (not working) is:
for t in range(0,10,1):
    for j in range(0, len(c)):
        if c[j]>t/10:
            a.append(sum(c[j]>t))
my vector a should be of dimension 10, but it isn't!
Can anybody help me out?
I made a function that loops over the array and just counts whenever the value is greater than the supplied threshold
c = [0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]

def num_bigger(threshold):
    count = 0
    for num in c:
        if num > threshold:
            count += 1
    return count

thresholds = [x/10.0 for x in range(10)]
for thresh in thresholds:
    print thresh, num_bigger(thresh)
Note that the function checks for strictly greater, which is why, for example, the result is 0 when the threshold is .9.
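If you instead wanted to include values equal to the threshold (an assumed variant, not part of the answer), only the comparison changes:
def num_bigger_or_equal(threshold):
    # same idea, but counts values >= threshold instead of strictly greater
    return sum(1 for num in c if num >= threshold)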
There are a few things wrong with your code.
my vector a should be of dimension 10, but it isn't!
That's because you don't append only 10 elements in your list. Look at your logic.
for t in range(0,10,1):
    for j in range(0, len(c)):
        if c[j]>t/10:
            a.append(sum(c[j]>t))
For each threshold, t, you iterate over all 12 items in c one at a time and you append something to the list. Overall, you get 120 items. What you should have been doing instead is (in pseudocode):
for each threshold:
    count = how many elements in c are greater than threshold
    a.append(count)
numpy.where() gives you the indices in an array where a condition is satisfied, so you just have to count how many indices you get each time. We'll get to the full solution in a moment.
Another potential error is t/10, which in Python 2 is integer division and will return 0 for all thresholds. The correct way would be to force float division with t/10.0. If you're on Python 3, you get float division by default, so this might not be a problem. Notice, though, that you do c[j] > t, where t is between 0 and 10; overall, your c[j] > t logic is wrong. You want to use a counter for all elements, like other answers have shown you, or collapse it all down to a one-liner list comprehension.
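For instance, a sketch of that one-liner (my addition; the names are illustrative, not from the question):
c = [0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]
a = [sum(1 for x in c if x > t / 10.0) for t in range(10)]
print(a)  # [12, 10, 8, 5, 5, 3, 2, 1, 1, 0]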
Finally, here's a solution fully utilising numpy.
import numpy as np

c = np.array([0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1])
thresh = np.arange(0, 1, 0.1)
counts = np.empty(thresh.shape, dtype=int)
for i, t in enumerate(thresh):
    counts[i] = len(np.where(c > t)[0])
print counts
Output:
[12 10 8 5 5 3 2 1 1 0]
Letting numpy take care of the loops under the hood is faster than Python-level loops. For demonstration:
import timeit
head = """
import numpy as np
c = np.array([0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1])
thresh = np.arange(0, 1, 0.1)
"""
numpy_where = """
for t in thresh:
    len(np.where(c > t)[0])
"""
python_loop = """
for t in thresh:
    len([element for element in c if element > t])
"""
n = 10000
for test in [numpy_where, python_loop]:
    print timeit.timeit(test, setup=head, number=n)
Which on my computer results in the following timings.
0.231292377372
0.321743753994
Your problem is here:
if c[j]>t/10:
Notice that both t and 10 are integers and so you perform integer division.
The easiest solution with the least changes is to change it to:
if c[j]>float(t)/10:
to force float division
So the whole code would look something like this:
a = []
c = [0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]
for i in range(10):  # cutoffs 0.0 through 0.9
    count = 0
    cutoff = float(i)/10
    for ele in c:
        if ele > cutoff:
            count += 1
    a.append(count)
print(len(a))  # prints 10, one count per cutoff from 0.0 to 0.9
print(a)       # prints the counts for each cutoff
You have to divide t / 10.0 so the result is a float; the result of t / 10 is an integer.
a = []
c = [0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]
for t in range(0, 10, 1):
    count = 0
    for j in range(0, len(c)):
        if c[j] > t/10.0:
            count = count + 1
    a.append(count)
for t in range(0, 10, 1):
    print(str(a[t]) + ' elements in c are bigger than ' + str(t/10.0))
Output:
12 elements in c are bigger than 0.0
10 elements in c are bigger than 0.1
8 elements in c are bigger than 0.2
5 elements in c are bigger than 0.3
5 elements in c are bigger than 0.4
3 elements in c are bigger than 0.5
2 elements in c are bigger than 0.6
1 elements in c are bigger than 0.7
1 elements in c are bigger than 0.8
0 elements in c are bigger than 0.9
If you simplify your code, bugs won't have places to hide!
c = [0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]
a = []
for t in [x/10 for x in range(10)]:
    a.append((t, len([x for x in c if x > t])))
a
[(0.0, 12),
(0.1, 10),
(0.2, 8),
(0.3, 5),
(0.4, 5),
(0.5, 3),
(0.6, 2),
(0.7, 1),
(0.8, 1),
(0.9, 0)]
or even this one-liner
[(r/10,len([x for x in c if x>r/10])) for r in range(10)]
It depends on the sizes of your arrays, but your current solution has O(m*n) complexity, m being the number of values to test and n the size of your array. You may be better off with O((m+n)*log(n)) by first sorting your array in O(n*log(n)) and then using binary search to find the m values in O(m*log(n)). Using numpy and your sample c list, this would be something like:
>>> c
[0.3, 0.2, 0.3, 0.6, 0.9, 0.1, 0.2, 0.5, 0.3, 0.5, 0.7, 0.1]
>>> thresholds = np.linspace(0, 1, 10, endpoint=False)
>>> thresholds
array([ 0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
>>> len(c) - np.sort(c).searchsorted(thresholds, side='right')
array([12, 10, 8, 5, 5, 3, 2, 1, 1, 0])
Ok let's imagine that I have a list of values like so:
list = [-0.23, -0.5, -0.3, -0.8, 0.3, 0.6, 0.8, -0.9, -0.4, 0.1, 0.6]
I would like to loop over this list and, when the sign changes, get the absolute difference between the maximum (minimum if negative) of the first interval and the maximum (minimum if negative) of the next interval.
For example on the previous list, we would like to have a result like so:
[1.6, 1.7, 1.5]
The tricky part is that it has to work also for lists like:
list = [0.12, -0.23, 0.52, 0.2, 0.6, -0.3, 0.4]
Which would return :
[0.35, 0.83, 0.9, 0.7]
It's tricky because some intervals are 1 value long, and I'm having difficulties with managing this.
How would you solve this with the least possible number of lines?
Here is my current code, but it's not working at the moment.
list is a list of 6 lists, where each of these 6 lists contains either a nan or a np.array of 1024 values (the values I want to evaluate):
Hmax = []
for c in range(0, 6):
    Hmax_tmp = []
    for i in range(len(list[c])):
        if not np.isnan(list[c][i]).any():
            zero_crossings = np.where(np.diff(np.sign(list[c][i])))[0]
            if not zero_crossings[0] == 0:
                zero_crossings = [0] + zero_crossings.tolist() + [1023]
            diff = []
            for j in range(1, len(zero_crossings) - 2):
                if
                diff.append(max(list[c][i][np.arange(zero_crossings[j-1], zero_crossings[j])].min(), list[c][i][np.arange(zero_crossings[j]+1, zero_crossings[j+1])].max(), key=abs) - max(list[c][i][np.arange(zero_crossings[j+1], zero_crossings[j+2])].min(), list[c][i][np.arange(zero_crossings[j+1], zero_crossings[j+2])].max(), key=abs))
            Hmax_tmp.append(np.max(diff))
        else:
            Hmax_tmp.append(np.nan)
    Hmax.append(Hmax_tmp)
This type of grouping operation can be greatly simplified using itertools.groupby. For example:
>>> from itertools import groupby
>>> lst = [-0.23, -0.5, -0.3, -0.8, 0.3, 0.6, 0.8, -0.9, -0.4, 0.1, 0.6] # the list
>>> minmax = [min(v) if k else max(v) for k,v in groupby(lst, lambda a: a < 0)]
>>> [abs(j-i) for i,j in zip(minmax[:-1], minmax[1:])]
[1.6, 1.7000000000000002, 1.5]
And the second list:
>>> lst2 = [0.12, -0.23, 0.52, 0.2, 0.6, -0.3, 0.4] # the list
>>> minmax = [min(v) if k else max(v) for k,v in groupby(lst2, lambda a: a < 0)]
>>> [abs(j-i) for i,j in zip(minmax[:-1], minmax[1:])]
[0.35, 0.83, 0.8999999999999999, 0.7]
So here, the list is grouped into consecutive intervals of negative/positive values. The min/max is computed for each group and stored in a list minmax. Lastly, a list comprehension finds the differences.
Excuse the inexact representations of floating point numbers!
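If those long representations are a nuisance, one option (my addition, assuming two decimals are enough here) is to round the differences, continuing the second session:
>>> [round(abs(j - i), 2) for i, j in zip(minmax[:-1], minmax[1:])]
[0.35, 0.83, 0.9, 0.7]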
It would be straightforward to retrieve max/min values of intervals, and then do the calculation.
def difference(nums):
    if not nums:
        return []
    pivots = []
    last_sign = nums[0] >= 0
    current = 0
    for x in nums:
        current_sign = x >= 0
        if current_sign != last_sign:
            pivots.append(current)
            current = 0
            last_sign = current_sign
        current = max(current, x) if current_sign else min(current, x)
    pivots.append(current)
    result = []
    for idx in xrange(len(pivots)):
        if idx + 1 < len(pivots):
            result.append(abs(pivots[idx] - pivots[idx + 1]))
    return result
>>> print difference([-0.23, -0.5, -0.3, -0.8, 0.3, 0.6, 0.8, -0.9, -0.4, 0.1, 0.6])
[1.6, 1.7000000000000002, 1.5]
>>> print difference([0.12, -0.23, 0.52, 0.2, 0.6, -0.3, 0.4])
[0.35, 0.83, 0.8999999999999999, 0.7]
I have several lists such as:
A = [0.02,0.02,0.02,0.03,0.03,0.04,0.04,0.04,0.04,1,0,0,1,0,1,1,1,0]
Each float value corresponds to an integer, in order. The floats represent a category/label, so I will not need to perform calculations on those values.
I need to find the average of the integers corresponding to each category. For example: 0.02 = 0.33, since (0 + 0 + 1) / 3 = 0.33, and 0.03 = 0.5, since (0 + 1) / 2 = 0.5. The average for a category will never be 0.
Then, I need to replace the integer values in my list with those averages, so:
A = [0.02,0.02,0.02,0.03,0.03,0.04,0.04,0.04,0.04,1,0,0,1,0,1,1,1,0]
becomes
A = [0.02,0.02,0.02,0.03,0.03,0.04,0.04,0.04,0.04,0.33,0.33,0.33,0.5,0.5,0.75,0.75,0.75,0.75]
I've tried splitting the list into categories and integers, zipping the two together, iterating over them to gather all of the integer values for each category, and then calculating averages. Unfortunately it quickly went over my head and I was not able to troubleshoot my multiple nested for loops and if statements.
If anyone could point me in the right direction I would be very thankful!
If your data is presented as such, then one pure Python way is:
from itertools import groupby, izip, chain

def float_int_avg(sequence):
    def _do_grouping(sequence):
        for k, g in groupby(izip(*izip(*[iter(sequence)] * (len(sequence) // 2))), lambda L: L[0]):
            vals = [el[1] for el in g]
            avg = sum(vals, 0.0) / len(vals)
            for i in xrange(len(vals)):
                yield k, avg
    return list(chain.from_iterable(izip(*_do_grouping(sequence))))

A = [0.02, 0.02, 0.02, 0.03, 0.03, 0.04, 0.04, 0.04, 0.04, 1, 0, 0, 1, 0, 1, 1, 1, 0]
result = float_int_avg(A)
# [0.02, 0.02, 0.02, 0.03, 0.03, 0.04, 0.04, 0.04, 0.04, 0.3333333333333333, 0.3333333333333333, 0.3333333333333333, 0.5, 0.5, 0.75, 0.75, 0.75, 0.75]
Nicer approach:
from itertools import groupby, izip, chain, repeat
from operator import itemgetter

def float_int_avg(sequence):
    floats, ints = sequence[:len(sequence) // 2], sequence[len(sequence) // 2:]
    def _group(sequence):
        for k, g in groupby(izip(floats, ints), itemgetter(0)):
            vals = [el[1] for el in g]
            yield repeat(sum(vals, 0.0) / len(vals), len(vals))
    return floats + list(chain.from_iterable(_group(sequence)))
You can use fancy indexing on a np.array with boolean masks:
In [248]: a = np.array(A[:len(A)//2])

In [249]: b = np.array(A[len(A)//2:], dtype=float)

In [250]: for i in set(a):
     ...:     t = b[a==i]
     ...:     b[a==i] = sum(t)*1.0/len(t)
     ...: print b
[ 0.33333333  0.33333333  0.33333333  0.5         0.5         0.75        0.75
  0.75        0.75      ]
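To rebuild the full list in the question's layout (my addition, not part of the original answer), one could concatenate the label half with the averaged half:
full = np.concatenate((a, b))
# array([0.02, 0.02, 0.02, 0.03, 0.03, 0.04, 0.04, 0.04, 0.04,
#        0.33333333, 0.33333333, 0.33333333, 0.5, 0.5, 0.75, 0.75, 0.75, 0.75])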
Let's put that list into a NumPy array:
>>> import numpy as np
>>> a = np.asarray(A)
>>> a
array([ 0.02, 0.02, 0.02, 0.03, 0.03, 0.04, 0.04, 0.04, 0.04,
1. , 0. , 0. , 1. , 0. , 1. , 1. , 1. , 0. ])
"Each float value corresponds to an integer, in order." We can split these up using numpy.split:
>>> labels, values = np.split(a, 2)
"I need to find the average of the integers corresponding to each category." This is a job for scipy.ndimage.measurements.mean:
>>> import scipy.ndimage
>>> avgs = scipy.ndimage.measurements.mean(values, labels, labels)
>>> avgs
array([ 0.33333333, 0.33333333, 0.33333333, 0.5 , 0.5 ,
0.75 , 0.75 , 0.75 , 0.75 ])
"Then, I need to replace the integer values in my list with those averages". It's easiest to assemble a new array using numpy.hstack:
>>> np.hstack((labels, avgs))
array([ 0.02 , 0.02 , 0.02 , 0.03 , 0.03 ,
0.04 , 0.04 , 0.04 , 0.04 , 0.33333333,
0.33333333, 0.33333333, 0.5 , 0.5 , 0.75 ,
0.75 , 0.75 , 0.75 ])
Putting all that together:
labels, values = np.split(np.asarray(A), 2)
avgs = scipy.ndimage.measurements.mean(values, labels, labels)
A = np.hstack((labels, avgs))