I need to perform something similar to the built-in torch.argmax() function on a one-dimensional tensor, but instead of picking the index of the first of the maximum values, I want to be able to pick a random index of one of the maximum values. For example:
my_tensor = torch.tensor([0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.1])
index_1 = random_max_val_index_fn(my_tensor)
index_2 = random_max_val_index_fn(my_tensor)
print(f"{index_1}, {index_2}")
> 5, 1
You can get the indices of all the maxima first and then choose one of them at random:
import numpy as np
import torch

def rand_argmax(tens):
    max_inds, = torch.where(tens == tens.max())  # indices of all maximum values
    return np.random.choice(max_inds)
sample runs:
>>> my_tensor = torch.tensor([0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.1])
>>> rand_argmax(my_tensor)
2
>>> rand_argmax(my_tensor)
5
>>> rand_argmax(my_tensor)
2
>>> rand_argmax(my_tensor)
1
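If you want to stay entirely in PyTorch (for example to keep the random draw on the same device and RNG as the rest of your code), here is a minimal sketch of the same idea; rand_argmax_torch is just an illustrative name:
import torch

def rand_argmax_torch(tens):
    # indices of all elements equal to the maximum
    max_inds, = torch.where(tens == tens.max())
    # draw one of them uniformly at random
    return max_inds[torch.randint(len(max_inds), (1,))].item()

my_tensor = torch.tensor([0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.1])
print(rand_argmax_torch(my_tensor))  # one of 1, 2 or 5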
I think this should work:
import numpy as np
import torch
your_tensor = torch.tensor([0.1, 0.2, 0.2, 0.1, 0.1, 0.2, 0.1])
argmaxes = np.argwhere(your_tensor==torch.max(your_tensor)).flatten()
rand_argmax = np.random.choice(argmaxes)
print(rand_argmax)
If you draw more than one index at a time, keep in mind that np.random.choice samples with replacement by default (pass replace=False if you don't want that).
Below is the code I am using. I would like to get a result like 0.1, 0.2, 0.3, 0.4 ..., but instead I get [0.0, 0.1, 0.2, 0.30000000000000004, 0.4, 0.5, 0.6000000000000001, 0.7000000000000001, 0.8, 0.9]. How can I get rid of those extra digits after the decimal point?
squares = []
for i in range(10):
    squares.append(i * 0.1)
print(squares)
You can use something like this:
>>> ['{:.2}'.format(i * 0.1) for i in range(10)]
['0.0', '0.1', '0.2', '0.3', '0.4', '0.5', '0.6', '0.7', '0.8', '0.9']
Use the str.format method to control how many digits are displayed ('.2' here means two significant digits; use a spec like '{:.1f}' for a fixed number of decimal places). Note that this gives you strings rather than floats.
squares = []
for i in range(10):
    squares.append(i * 0.1)
print(*["{:.1f}".format(s) for s in squares], sep=', ')
0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9
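The same formatting works with an f-string if you are on Python 3.6+ (just a more compact spelling of the idea above):
squares = [i * 0.1 for i in range(10)]
print(", ".join(f"{s:.1f}" for s in squares))
# 0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9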
squares = []
for i in range(10):
    squares.append(round(i * 0.1, 1))  # round each value to one decimal place
print(squares)
Check out the built-in round function.
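The same idea also fits in a one-liner list comprehension (round still returns floats, just rounded to one decimal place):
squares = [round(i * 0.1, 1) for i in range(10)]
print(squares)
# [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]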
I am trying to write a function that returns the smallest integer a list of floats must be multiplied by so that every element becomes an integer. I tried implementing something with the least common multiple, but I'm not sure the math checks out...
Say I have the following list (or list-like object) of float values:
example = [0.5, 0.4, 0.2, 0.1]
How could I write a function that returns func(example) = 10 ?
Another example would be...
example = [0.05, 0.1, 0.7, 0.8]
> func(example)
20
Since...
> 20 * np.array(example)
array([ 1.,  2., 14., 16.])
And all the values are whole numbers.
Find the largest number of decimal places, scale the list by the corresponding power of ten, take the gcd of the resulting integers, and divide to get the minimum integer multiplier.
import numpy as np
import decimal
from math import gcd
from functools import reduce

def find_gcd(lst):
    x = reduce(gcd, lst)
    return x

example = [0.05, 0.1, 0.7, 0.8, 0.9]

decimal_places = min([decimal.Decimal(str(val)).as_tuple().exponent for val in example])
x1 = np.array(example)
multiplier = 1 / (10 ** decimal_places)
gcd_val = find_gcd(map(int, x1 * multiplier))
min_multiplier = int(multiplier / gcd_val)
print('Minimum Integer Multiplier:', min_multiplier)
If you prefer not to use Decimal, you can count the decimal places from the string representation instead:
example = [0.05, 0.1, 0.7, 0.8, 0.9]

n_places = max(len(str(val).split('.')[1]) for val in example)
multiplier = 10 ** n_places
x1 = np.array(example)
gcd_val = find_gcd(map(int, x1 * multiplier))
min_multiplier = int(multiplier / gcd_val)
print('Minimum Integer Multiplier:', min_multiplier)
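For convenience, here is a sketch wrapping the same steps into one reusable function (min_int_multiplier is just an illustrative name; note the round before int, which guards against float error such as 0.29 * 100 == 28.999999999999996):
import decimal
from math import gcd
from functools import reduce

def min_int_multiplier(vals):
    # smallest power of ten that clears every decimal place
    decimal_places = min(decimal.Decimal(str(v)).as_tuple().exponent for v in vals)
    multiplier = 10 ** (-decimal_places)
    # round before truncating so float error cannot shift a value down
    ints = [int(round(v * multiplier)) for v in vals]
    return multiplier // reduce(gcd, ints)

print(min_int_multiplier([0.5, 0.4, 0.2, 0.1]))   # 10
print(min_int_multiplier([0.05, 0.1, 0.7, 0.8]))  # 20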
If you have an upper bound max_den on plausible denominators, the fractions.Fraction class has a handy limit_denominator method.
For example:
import fractions
max_den = 1000
fractions.Fraction(1/3)
# probably not what we want
# Fraction(6004799503160661, 18014398509481984)
fractions.Fraction(1/3).limit_denominator(max_den)
# better
# Fraction(1, 3)
import sympy
example = [0.5, 0.4, 0.2, 0.1]
sympy.lcm([fractions.Fraction(x).limit_denominator(max_den).denominator for x in example])
# 10
example = [0.05, 0.1, 0.7, 0.8]
sympy.lcm([fractions.Fraction(x).limit_denominator(max_den).denominator for x in example])
# 20
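If you'd rather not pull in sympy, the standard library's math.lcm (Python 3.9+) works just as well on the Fraction denominators; a sketch:
import math
from fractions import Fraction

def func(vals, max_den=1000):
    # LCM of the limited denominators is the smallest integer multiplier
    return math.lcm(*(Fraction(v).limit_denominator(max_den).denominator for v in vals))

print(func([0.5, 0.4, 0.2, 0.1]))   # 10
print(func([0.05, 0.1, 0.7, 0.8]))  # 20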
I have a Numpy array, and I need to find the N maximum product subarrays of M elements. For example, I have the array p = [0.1, 0.2, 0.8, 0.5, 0.7, 0.9, 0.3, 0.5] and I want to find the 5 highest product subarrays of 3 elements. Is there a "fast" way to do that?
Here is another quick way to do it:
import numpy as np

p = [0.1, 0.2, 0.8, 0.5, 0.7, 0.9, 0.3, 0.5]
n = 5
m = 3

# Cumulative product (starting with 1)
pc = np.cumprod(np.r_[1, p])

# Product of each length-m window: ratio of cumulative products
w = pc[m:] / pc[:-m]

# Indices of the first element of the top n windows
idx = np.argpartition(w, -n)[-n:]
print(idx)
# e.g. [1 2 5 4 3] -- the five window start indices, in arbitrary order
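One caveat with the cumulative-product trick: the division breaks down if the array contains zeros, and long windows can lose precision. Working in log space with a cumulative sum avoids the division entirely (a sketch, assuming all values are positive):
import numpy as np

p = np.array([0.1, 0.2, 0.8, 0.5, 0.7, 0.9, 0.3, 0.5])
n, m = 5, 3

# sliding sums of log(p) are the logs of the window products
cs = np.cumsum(np.r_[0.0, np.log(p)])
log_w = cs[m:] - cs[:-m]

# indices of the first element of the top n windows (unordered)
idx = np.argpartition(log_w, -n)[-n:]
print(idx)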
Approach #1
We can create sliding windows, perform a prod reduction along each window, and finally use np.argpartition to get the top N among them -
from skimage.util.shape import view_as_windows

def topN_windowed_prod(a, W, N):
    w = view_as_windows(a, W)
    return w[w.prod(1).argpartition(-N)[-N:]]
Sample run -
In [2]: p = np.array([0.1, 0.2, 0.8, 0.5, 0.7, 0.9, 0.3, 0.5])
In [3]: topN_windowed_prod(p, W=3, N=2)
Out[3]:
array([[0.8, 0.5, 0.7],
[0.5, 0.7, 0.9]])
Note that np.argpartition does not maintain order, so if we need the top N in descending order of prod values, sort the selected windows by their products afterwards (or pass a range of kth values to argpartition).
Approach #2
For smaller window lengths, we can simply slice and get our desired result, like so -
def topN_windowed_prod_with_slicing(a, W, N):
    w = view_as_windows(a, W)
    L = len(a) - W + 1
    acc = a[:L].copy()
    for i in range(1, W):
        acc *= a[i:i+L]
    idx = acc.argpartition(-N)[-N:]
    return w[idx]
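If you prefer to avoid the scikit-image dependency, NumPy (1.20+) provides np.lib.stride_tricks.sliding_window_view, which can stand in for view_as_windows; a sketch of Approach #1 with it:
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def topN_windowed_prod_np(a, W, N):
    # all length-W windows as a (len(a)-W+1, W) view, then pick the N largest products
    w = sliding_window_view(a, W)
    return w[w.prod(1).argpartition(-N)[-N:]]

p = np.array([0.1, 0.2, 0.8, 0.5, 0.7, 0.9, 0.3, 0.5])
print(topN_windowed_prod_np(p, W=3, N=2))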
I am looking to get :
input:
arange(0.0,0.6,0.2)
output:
0.,0.4
but I want
0., 0.2, 0.4, 0.6
How do I achieve this using range or arange? If that's not possible, what is the alternative?
A simpler approach to get the desired output is to add the step size to the upper limit. For instance,
np.arange(start, end + step, step)
would allow you to include the end point as well. In your case:
np.arange(0.0, 0.6 + 0.2, 0.2)
would result in
array([0. , 0.2, 0.4, 0.6]).
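One small caveat: because np.arange itself can overshoot with float steps (for example, arange(1, 1.3, 0.1) can already include 1.3), adding a whole step can occasionally yield one element too many. Adding half a step is a slightly safer variant of the same idea:
import numpy as np

start, stop, step = 0.0, 0.6, 0.2
print(np.arange(start, stop + step / 2, step))
# [0.  0.2 0.4 0.6]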
In short
I wrote a function crange, which does what you require.
In the example below, orange does the job of numpy.arange
crange(1, 1.3, 0.1) >>> [1. 1.1 1.2 1.3]
orange(1, 1.3, 0.1) >>> [1. 1.1 1.2]
crange(0.0, 0.6, 0.2) >>> [0. 0.2 0.4 0.6]
orange(0.0, 0.6, 0.2) >>> [0. 0.2 0.4]
Background information
I ran into your problem a few times as well. I usually quick-fixed it by adding a small value to stop. As mentioned by Kasrâmvd in the comments, the issue is a bit more complex, as floating point rounding errors can occur in numpy.arange (see here and here).
Unexpected behavior can be found in this example:
>>> numpy.arange(1, 1.3, 0.1)
array([1. , 1.1, 1.2, 1.3])
To clear things up a bit for myself, I decided to stop using numpy.arange unless I specifically need it. Instead I use my self-defined function orange to avoid unexpected behavior. It combines numpy.isclose and numpy.linspace.
Here is the Code
Enough bla bla - here is the code ^^
import numpy as np

def cust_range(*args, rtol=1e-05, atol=1e-08, include=[True, False]):
    """
    Combines numpy.linspace and numpy.isclose to mimic
    open, half-open and closed intervals.
    Also avoids floating point rounding errors as with
    >>> numpy.arange(1, 1.3, 0.1)
    array([1. , 1.1, 1.2, 1.3])

    args: [start, ]stop, [step, ]
        as in numpy.arange
    rtol, atol: floats
        floating point tolerance as in numpy.isclose
    include: boolean list-like, length 2
        if start and end point are included
    """
    # process arguments
    if len(args) == 1:
        start = 0
        stop = args[0]
        step = 1
    elif len(args) == 2:
        start, stop = args
        step = 1
    else:
        assert len(args) == 3
        start, stop, step = tuple(args)

    # determine number of segments
    n = (stop - start) / step + 1

    # do rounding for n
    if np.isclose(n, np.round(n), rtol=rtol, atol=atol):
        n = np.round(n)

    # correct for start/end being excluded
    if not include[0]:
        n -= 1
        start += step
    if not include[1]:
        n -= 1
        stop -= step

    return np.linspace(start, stop, int(n))

def crange(*args, **kwargs):
    return cust_range(*args, **kwargs, include=[True, True])

def orange(*args, **kwargs):
    return cust_range(*args, **kwargs, include=[True, False])

print('crange(1, 1.3, 0.1) >>>', crange(1, 1.3, 0.1))
print('orange(1, 1.3, 0.1) >>>', orange(1, 1.3, 0.1))
print('crange(0.0, 0.6, 0.2) >>>', crange(0.0, 0.6, 0.2))
print('orange(0.0, 0.6, 0.2) >>>', orange(0.0, 0.6, 0.2))
Interesting that you get that output. Running arange(0.0,0.6,0.2) I get:
array([0. , 0.2, 0.4])
Regardless, from the numpy.arange docs: Values are generated within the half-open interval [start, stop) (in other words, the interval including start but excluding stop).
Also from the docs: "When using a non-integer step, such as 0.1, the results will often not be consistent. It is better to use numpy.linspace for these cases."
The only thing I can suggest to achieve what you want is to modify the stop parameter and add a very small amount, for example
np.arange(0.0, 0.6 + 0.001 ,0.2)
Returns
array([0. , 0.2, 0.4, 0.6])
Which is your desired output.
Anyway, it is better to use numpy.linspace and set endpoint=True
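For this case that boils down to the following (num=4 is picked by hand here; a later answer shows how to compute it from start, stop and step):
import numpy as np

print(np.linspace(0.0, 0.6, num=4, endpoint=True))
# [0.  0.2 0.4 0.6]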
Old question, but it can be done much more easily.
import numpy as np

def arange(start, stop, step=1, endpoint=True):
    arr = np.arange(start, stop, step)
    if endpoint and arr[-1] + step == stop:
        arr = np.concatenate([arr, [stop]])
    return arr

print(arange(0, 4, 0.5, endpoint=True))
print(arange(0, 4, 0.5, endpoint=False))
which gives
[0. 0.5 1. 1.5 2. 2.5 3. 3.5 4. ]
[0. 0.5 1. 1.5 2. 2.5 3. 3.5]
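Note that arr[-1] + step == stop is an exact float comparison, so it can miss cases where the last generated value only approximately lands one step short of stop. A more tolerant variant using np.isclose (a sketch):
import numpy as np

def arange_inclusive(start, stop, step=1, endpoint=True):
    arr = np.arange(start, stop, step)
    # append stop when the grid ends (approximately) one step short of it
    if endpoint and arr.size and np.isclose(arr[-1] + step, stop):
        arr = np.append(arr, stop)
    return arr

print(arange_inclusive(0.0, 0.6, 0.2))  # [0.  0.2 0.4 0.6]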
A simple example using np.linspace (mentioned numerous times in other answers, but no simple examples were present):
import numpy as np
start = 0.0
stop = 0.6
step = 0.2
num = round((stop - start) / step) + 1 # i.e. length of resulting array
np.linspace(start, stop, num)
>>> array([0.0, 0.2, 0.4, 0.6])
Assumption: stop - start is an integer multiple of step. round is necessary to correct for floating point error.
OK, I will leave this solution here. The first step is to calculate the fractional portion of the number of items given the bounds [a, b] and the step amount. Next, calculate an appropriate amount to add to the end that will not affect the size of the resulting numpy array, and then call np.arange().
import numpy as np

def np_arange_fix(a, b, step):
    nf = (lambda n: n - int(n))((b - a) / step + 1)
    bb = (lambda x: step * max(0.1, x) if x < 0.5 else 0)(nf)
    arr = np.arange(a, b + bb, step)
    if int((b - a) / step + 1) != len(arr):
        raise ValueError('Expected {} items, got {} items: {}'.format(int((b - a) / step + 1), len(arr), arr))
    return arr
print(np_arange_fix(1.0, 4.4999999999999999, 1.0))
print(np_arange_fix(1.0, 4 + 1/3, 1/3))
print(np_arange_fix(1.0, 4 + 1/3, 1/3 + 0.1))
print(np_arange_fix(1.0, 6.0, 1.0))
print(np_arange_fix(0.1, 6.1, 1.0))
Prints:
[1. 2. 3. 4.]
[1. 1.33333333 1.66666667 2. 2.33333333 2.66666667
3. 3.33333333 3.66666667 4. 4.33333333]
[1. 1.43333333 1.86666667 2.3 2.73333333 3.16666667
3.6 4.03333333]
[1. 2. 3. 4. 5. 6.]
[0.1 1.1 2.1 3.1 4.1 5.1 6.1]
If you want to compact this down to a function:
def np_arange_fix(a, b, step):
    b += (lambda x: step * max(0.1, x) if x < 0.5 else 0)((lambda n: n - int(n))((b - a) / step + 1))
    return np.arange(a, b, step)