Is it possible to generate random numbers that are almost equally spaced, but not exactly the same as the numpy.linspace output?
I looked into the numpy.random.uniform function, but it does not give the required results.
Moreover, the sum of the generated values should equal the sum of the values generated by numpy.linspace.
Code:
import numpy as np

np.random.seed(42)  # note: random.seed() would not affect np.random
data = np.random.uniform(2, 4, 10)
print(data)
You might consider drawing random samples around the output of numpy.linspace. Using those values as the means of a normal distribution with a small standard deviation will generate numbers close to the numpy.linspace output. For example,
>>> import numpy as np
>>> exact_numbers = np.linspace(2.0, 10.0, num=5)
>>> exact_numbers
array([ 2., 4., 6., 8., 10.])
>>> approximate_numbers = np.random.normal(exact_numbers, np.ones(5) * 0.1)
>>> approximate_numbers
array([2.12950013, 3.9804745 , 5.80670316, 8.07868932, 9.85288221])
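Note that independent normal perturbations will not preserve the sum exactly. If the total must match the numpy.linspace sum, one option is to recenter the noise so it sums to zero; a minimal sketch of that idea, using the same 5-point example:
import numpy as np

exact_numbers = np.linspace(2.0, 10.0, num=5)
noise = np.random.normal(0.0, 0.1, size=exact_numbers.size)
noise -= noise.mean()  # recenter so the perturbations sum to zero
approximate_numbers = exact_numbers + noise
# the total is preserved up to floating-point rounding
assert np.isclose(approximate_numbers.sum(), exact_numbers.sum())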
Maybe this trick helps: combine numpy.linspace and numpy.random.uniform, then repeatedly choose two random indices and increase one entry while decreasing the other by the same amount. Because each transfer is zero-sum, the total is preserved exactly.
(You can change size=10 and threshold=0.1 to control how much the numbers deviate.)
import numpy as np
size = 10
threshold = 0.1
r = np.linspace(2, 4, size)  # r.sum() == 30
# array([2.        , 2.22222222, 2.44444444, 2.66666667, 2.88888889,
#        3.11111111, 3.33333333, 3.55555556, 3.77777778, 4.        ])
c = np.random.uniform(0, threshold, size)
# array([0.02246768, 0.08661081, 0.0932445 , 0.00360563, 0.06539992,
#        0.0107167 , 0.06490493, 0.0558159 , 0.00268924, 0.00070247])
s = np.random.choice(range(size), size + 1)
# array([5, 5, 8, 3, 6, 4, 1, 8, 7, 1, 7])
for idx, (i, j) in enumerate(zip(s, s[1:])):
    r[i] += c[idx]
    r[j] -= c[idx]
print(r)
print(r.sum())
Output:
[2. 2.27442369 2.44444444 2.5770278 2.83420567 3.19772192
3.39512762 3.50172642 3.77532244 4. ]
30
I would like to improve the speed of my code by computing a function once on a NumPy array instead of looping over a function from this Python library (galsim). The function is as follows:
import numpy as np
import galsim
from math import sqrt

M200 = 1e14
conc = 6.9

def func(M200, conc):
    halo_z = 0.2
    halo_pos = [1200., 3769.7]
    halo_pos = galsim.PositionD(x=halo_pos[0], y=halo_pos[1])
    nfw = galsim.NFWHalo(mass=M200, conc=conc, redshift=halo_z,
                         halo_pos=halo_pos, omega_m=0.3, omega_lam=0.7)
    for i in range(len(shear_z)):  # pos_arcsec and shear_z are globals (see below)
        shear_pos = galsim.PositionD(x=pos_arcsec[i, 0], y=pos_arcsec[i, 1])
        model_g1, model_g2 = nfw.getShear(pos=shear_pos, z_s=shear_z[i])
        l = np.sum(model_g1 - model_g2) / sqrt(np.pi)
    return l
Here pos_arcsec is a two-dimensional array of shape 24000x2 and shear_z is a 1D array with 24000 elements as well.
The main problem is that I want to evaluate this function on a grid where M200 = np.arange(13., 16., 0.01) and conc = np.arange(3, 10, 0.01). I don't know how to broadcast this function so it is evaluated over this two-dimensional grid of M200 and conc. The code takes a long time to run, and I am looking for the best approaches to speed up these calculations.
This should work when pos is an array of shape (n, 2):
import numpy as np
def f(pos, z):
    r = np.sqrt(pos[..., 0]**2 + pos[..., 1]**2)  # length of each (x, y) vector
    return np.log(r) * (z + 1)
Example:
z = np.arange(10)
pos = np.arange(20).reshape(10,2)
f(pos,z)
# array([ 0. , 2.56494936, 5.5703581 , 8.88530251,
# 12.44183436, 16.1944881 , 20.11171117, 24.17053133,
# 28.35353608, 32.64709419])
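For the grid over M200 and conc, galsim's NFWHalo has to be rebuilt for each (mass, concentration) pair, so true broadcasting isn't available there. A minimal sketch using np.meshgrid and np.vectorize (this only tidies the loop, it does not remove the per-cell cost; the placeholder func and the log10 reading of np.arange(13., 16., 0.01) are assumptions):
import numpy as np

def func(M200, conc):
    # placeholder for the question's galsim-based function; returns one float
    return M200 / (1.0 + conc)

# the question's np.arange(13., 16., 0.01) looks like log10(mass); adjust if not
M200_grid, conc_grid = np.meshgrid(10 ** np.arange(13.0, 16.0, 0.01),
                                   np.arange(3.0, 10.0, 0.01),
                                   indexing="ij")
vfunc = np.vectorize(func)            # elementwise convenience wrapper, still a Python loop
result = vfunc(M200_grid, conc_grid)  # shape (300, 700), one value per grid cell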
Use numpy.linalg.norm
If you have an array:
import numpy as np
import numpy.linalg as la
a = np.array([[3, 4], [5, 12], [7, 24]])
then you can determine the magnitude of each row vector (sqrt(x^2 + y^2)) with
b = la.norm(a, axis=1)
>>> print(b)
[ 5. 13. 25.]
I have a large array of thousands of values in numpy. I want to decrease its size by averaging adjacent values.
For example:
a = [2,3,4,8,9,10]
#average down to 2 values here
a = [3,9]
#it averaged 2,3,4 and 8,9,10 together
So, basically, I have n elements in an array, and I want to tell it to average down to X values, averaging as above.
Is there some way to do that with numpy (already using it for other things, so I'd like to stick with it).
Using reshape and mean, you can average every m adjacent values of a 1D array of size N*m, with N being any positive integer. For example:
import numpy as np
m = 3
a = np.array([2, 3, 4, 8, 9, 10])
b = a.reshape(-1, m).mean(axis=1)
#array([3., 9.])
1) a.reshape(-1, m) creates a 2D view of the array without copying data:
array([[ 2,  3,  4],
       [ 8,  9, 10]])
2) Taking the mean along the second axis (axis=1) then computes the mean value of each row, resulting in:
array([3., 9.])
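To phrase this as the question does ("average down to X values"), a minimal sketch built on the same reshape/mean idea, assuming len(a) is divisible by X:
import numpy as np

def average_down(a, x):
    """Average a 1D array down to x values; len(a) must be divisible by x."""
    a = np.asarray(a, dtype=float)
    return a.reshape(x, -1).mean(axis=1)

average_down([2, 3, 4, 8, 9, 10], 2)
# array([3., 9.])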
Try this:
n_averaged_elements = 3
averaged_array = []
a = np.array([2, 3, 4, 8, 9, 10])
for i in range(0, len(a), n_averaged_elements):
    slice_from_index = i
    slice_to_index = slice_from_index + n_averaged_elements
    averaged_array.append(np.mean(a[slice_from_index:slice_to_index]))

>>> averaged_array
[3.0, 9.0]
Looks like a simple non-overlapping moving window average to me, how about:
In [3]:
import numpy as np
a = np.array([2,3,4,8,9,10])
window_sz = 3
a[:len(a)//window_sz*window_sz].reshape(-1,window_sz).mean(1)
# the [:len(a)//window_sz*window_sz] part truncates a to a multiple of window_sz so it reshapes cleanly
Out[3]:
array([ 3., 9.])
In this example, I presume that a is the 1D numpy array that needs to be averaged. In the method given below, we first find the factors of the length of a, and then choose an appropriate factor as the step size for averaging the array.
Here is the code.
import numpy as np
from functools import reduce
''' Function to find the factors of a given number n '''
def factors(n):
    return sorted(set(reduce(list.__add__,
                             ([i, n//i] for i in range(1, int(n**0.5) + 1) if n % i == 0))))
a = [2,3,4,8,9,10] #Given array.
'''fac: list of factors of length of a.
In this example, len(a) = 6. So, fac = [1, 2, 3, 6] '''
fac = factors(len(a))
'''step: choose an appropriate step size from the list 'fac'.
In this example, we choose one of the middle numbers in fac
(3). '''
step = fac[int( len(fac)/3 )+1]
'''avg: initialize an empty array. '''
avg = np.array([])
for i in range(0, len(a), step):
    avg = np.append(avg, np.mean(a[i:i+step]))  # append averaged values to avg

print(avg)  # prints the final result
# [3. 9.]
This is my first question here on Stack Overflow, and I hope I will not make huge mistakes.
I am analyzing a set of time series with a sampling rate of 1 Hz. I need to plot their Fourier transforms in order to study their spectra.
Here is my piece of code:
from obspy.core import read
import numpy as np
import matplotlib.pyplot as plt
st = read('../SC_noise/*HEC_109C*_s', format='SAC')
stp = st.copy()
stp.detrend('linear')
stp.taper('cosine')
for tr in stp:
    dataonly = tr.data
    spec = np.fft.rfft(dataonly)
    plt.plot(abs(spec))
    plt.show()
This works just fine: the plot is the same one I get using SAC. But the x-axis does not show frequencies. I've looked around a bit and found different ideas, but none of them is working.
For example, in the case of an FFT (here I am using rfft), this should do the job:
samp_rate=1
freq = np.fft.fftfreq(len(spec), d=1./samp_rate)
But if I use it, it gives me negative frequencies.
Does anybody have an idea?
Thank you very much in advance for all the help!
Piero
If your NumPy version is new enough (1.8 or better), use numpy.fft.rfftfreq. Otherwise, here is the definition:
import numpy as np

def rfftfreq(n, d=1.0):
    """
    Return the Discrete Fourier Transform sample frequencies
    (for usage with rfft, irfft).

    The returned float array `f` contains the frequency bin centers in cycles
    per unit of the sample spacing (with zero at the start). For instance, if
    the sample spacing is in seconds, then the frequency unit is cycles/second.

    Given a window length `n` and a sample spacing `d`::

        f = [0, 1, ..., n/2-1, n/2] / (d*n)         if n is even
        f = [0, 1, ..., (n-1)/2-1, (n-1)/2] / (d*n) if n is odd

    Unlike `fftfreq` (but like `scipy.fftpack.rfftfreq`)
    the Nyquist frequency component is considered to be positive.

    Parameters
    ----------
    n : int
        Window length.
    d : scalar, optional
        Sample spacing (inverse of the sampling rate). Defaults to 1.

    Returns
    -------
    f : ndarray
        Array of length ``n//2 + 1`` containing the sample frequencies.

    Examples
    --------
    >>> signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5, -3, 4], dtype=float)
    >>> fourier = np.fft.rfft(signal)
    >>> n = signal.size
    >>> sample_rate = 100
    >>> freq = np.fft.fftfreq(n, d=1./sample_rate)
    >>> freq
    array([ 0., 10., 20., 30., 40., -50., -40., -30., -20., -10.])
    >>> freq = np.fft.rfftfreq(n, d=1./sample_rate)
    >>> freq
    array([ 0., 10., 20., 30., 40., 50.])
    """
    if not isinstance(n, (int, np.integer)):
        raise ValueError("n should be an integer")
    val = 1.0 / (n * d)
    N = n // 2 + 1
    results = np.arange(0, N, dtype=int)
    return results * val
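Applied to the question's loop, a minimal sketch (reusing dataonly, spec, and samp_rate from the snippets above; rfftfreq returns exactly one frequency per rfft bin, so the lengths match):
# reusing dataonly, spec, and samp_rate from the question's code (samp_rate = 1 Hz)
freq = np.fft.rfftfreq(len(dataonly), d=1.0 / samp_rate)
plt.plot(freq, abs(spec))  # x-axis now shows frequencies in Hz
plt.show()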
I'm new to Python and having some problems finding the minimum and maximum values of a tuple of tuples. I need them to normalise my data. So, basically, I have a list that is a row of 13 numbers, each representing something. Each number makes a column in the list, and I need the max and min for each column. I tried indexing/iterating through but keep getting an error of
max_j = max(j)
TypeError: 'float' object is not iterable
any help would be appreciated!
The code is below (assuming data_set_tup is a tuple of tuples, e.g. ((1,3,4,5,6,7,...),(5,6,7,3,6,73,2...)...(3,4,5,6,3,2,2...))). I also want to make a new list using the normalised values.
normal_list = []
for i in data_set_tup:
    for j in i[1:]:  # first column doesn't need to be normalised
        max_j = max(j)
        min_j = min(j)
        normal_j = (j-min_j)/(max_j-min_j)
        normal_list.append(normal_j)

normal_tup = tuple(normal_list)
You can transpose rows to columns and vice versa with zip(*...). (Use list(zip(*...)) in Python 3)
cols = list(zip(*data_set_tup))
normal_cols = [cols[0]]  # first column doesn't need to be normalised
for j in cols[1:]:
    max_j = max(j)
    min_j = min(j)
    normal_cols.append(tuple((k-min_j)/(max_j-min_j) for k in j))

normal_list = list(zip(*normal_cols))
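A quick illustration of the whole transpose-normalise-transpose round trip (the sample data here is made up):
data_set_tup = ((1, 10, 100), (2, 20, 200), (3, 30, 300))

cols = list(zip(*data_set_tup))  # columns: (1, 2, 3), (10, 20, 30), (100, 200, 300)
normal_cols = [cols[0]]          # first column left as-is
for j in cols[1:]:
    max_j, min_j = max(j), min(j)
    normal_cols.append(tuple((k - min_j) / (max_j - min_j) for k in j))
normal_list = list(zip(*normal_cols))
# [(1, 0.0, 0.0), (2, 0.5, 0.5), (3, 1.0, 1.0)]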
This really sounds like a job for the non-builtin numpy module, or maybe the pandas module, depending on your needs.
Adding an extra dependency to your application should not be done lightly, but if you do a lot of work on matrix-like data, your code will likely be both faster and more readable if you use one of these modules throughout your application.
I do not recommend converting a list of lists to a numpy array and back again just to get this single result; it's better to use the pure Python method of Jannes' answer. Also, seeing that you're a Python beginner, numpy may be overkill right now. But I think your question deserves an answer pointing out that this is an option.
Here's a step-by-step console illustration of how this would work in numpy:
>>> import numpy as np
>>> a = np.array([[1,3,4,5,6],[5,6,7,3,6],[3,4,5,6,3]], dtype=float)
>>> a
array([[ 1., 3., 4., 5., 6.],
[ 5., 6., 7., 3., 6.],
[ 3., 4., 5., 6., 3.]])
>>> min = np.min(a, axis=0)
>>> min
array([ 1.,  3.,  4.,  3.,  3.])
>>> max = np.max(a, axis=0)
>>> max
array([ 5.,  6.,  7.,  6.,  6.])
>>> normalized = (a - min) / (max - min)
>>> normalized
array([[ 0. , 0. , 0. , 0.66666667, 1. ],
[ 1. , 1. , 1. , 0. , 1. ],
[ 0.5 , 0.33333333, 0.33333333, 1. , 0. ]])
So in actual code:
import numpy as np
def normalize_by_column(a):
    # column-wise min and max (axis=0 reduces over rows)
    col_min = np.min(a, axis=0)
    col_max = np.max(a, axis=0)
    return (a - col_min) / (col_max - col_min)
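For example, feeding in the array from the console session above reproduces the normalized output:
a = np.array([[1, 3, 4, 5, 6], [5, 6, 7, 3, 6], [3, 4, 5, 6, 3]], dtype=float)
print(normalize_by_column(a))  # same result as the step-by-step session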
We have nested_tuple = ((1, 2, 3), (4, 5, 6), (7, 8, 9)).
First of all we need to flatten it. The Pythonic way:
flat_tuple = [x for row in nested_tuple for x in row]
Output: [1, 2, 3, 4, 5, 6, 7, 8, 9]  # it's a list
Convert it to a tuple with tuple(flat_tuple), get the max value with max(flat_tuple), and the min value with min(flat_tuple). (Note this gives the global min/max over all elements, not per-column values.)
Given a 3 times 3 numpy array
a = numpy.arange(0,27,3).reshape(3,3)
# array([[ 0, 3, 6],
# [ 9, 12, 15],
# [18, 21, 24]])
To normalize the rows of the 2-dimensional array I thought of
row_sums = a.sum(axis=1) # array([ 9, 36, 63])
new_matrix = numpy.zeros((3,3))
for i, (row, row_sum) in enumerate(zip(a, row_sums)):
    new_matrix[i,:] = row / row_sum
There must be a better way, isn't there?
Perhaps to clarify: by normalizing I mean that the sum of the entries per row must be one. But I think that will be clear to most people.
Broadcasting is really good for this:
row_sums = a.sum(axis=1)
new_matrix = a / row_sums[:, numpy.newaxis]
row_sums[:, numpy.newaxis] reshapes row_sums from shape (3,) to shape (3, 1). When you do a / b, a and b are broadcast against each other.
You can learn more about broadcasting in the NumPy documentation.
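As a quick sanity check that broadcasting did what we wanted, every row of new_matrix should now sum to 1 (using the example array a from the question):
import numpy as np

a = np.arange(0, 27, 3).reshape(3, 3)
row_sums = a.sum(axis=1)
new_matrix = a / row_sums[:, np.newaxis]
print(new_matrix.sum(axis=1))  # [1. 1. 1.]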
Scikit-learn offers a function normalize() that lets you apply various normalizations. "Make it sum to 1" is the L1 norm. Therefore:
from sklearn.preprocessing import normalize
matrix = numpy.arange(0,27,3).reshape(3,3).astype(numpy.float64)
# array([[ 0., 3., 6.],
# [ 9., 12., 15.],
# [ 18., 21., 24.]])
normed_matrix = normalize(matrix, axis=1, norm='l1')
# [[ 0. 0.33333333 0.66666667]
# [ 0.25 0.33333333 0.41666667]
# [ 0.28571429 0.33333333 0.38095238]]
Now your rows will sum to 1.
I think this should work,
a = numpy.arange(0,27.,3).reshape(3,3)
a /= a.sum(axis=1)[:,numpy.newaxis]
In case you are trying to normalize each row such that its magnitude is one (i.e. each row is a unit vector: the sum of the squares of its elements is one):
import numpy as np
a = np.arange(0,27,3).reshape(3,3)
result = a / np.linalg.norm(a, axis=-1)[:, np.newaxis]
# array([[ 0. , 0.4472136 , 0.89442719],
# [ 0.42426407, 0.56568542, 0.70710678],
# [ 0.49153915, 0.57346234, 0.65538554]])
Verifying:
np.sum( result**2, axis=-1 )
# array([ 1., 1., 1.])
I think you can normalize the row sums to 1 with:
new_matrix = a / a.sum(axis=1, keepdims=1)
Column normalization can be done with new_matrix = a / a.sum(axis=0, keepdims=1). Hope this can help.
You could use the built-in NumPy function to compute the row norms and then divide by them:
a = a / np.linalg.norm(a, axis=1, keepdims=True)
It appears that this also works, once the row sums are reshaped so the division broadcasts per row:
def normalizeRows(M):
    row_sums = M.sum(axis=1)
    return M / row_sums[:, np.newaxis]  # (n,) -> (n, 1) so each row is divided by its own sum
You could also use matrix transposition:
(a.T / row_sums).T
Here is one more possible way using reshape:
a_norm = (a/a.sum(axis=1).reshape(-1,1)).round(3)
print(a_norm)
Or using None works too:
a_norm = (a/a.sum(axis=1)[:,None]).round(3)
print(a_norm)
Output:
array([[0. , 0.333, 0.667],
[0.25 , 0.333, 0.417],
[0.286, 0.333, 0.381]])
Use
a = a / np.linalg.norm(a, ord=2, axis=1, keepdims=True)
Due to broadcasting, it will work as intended, giving each row unit 2-norm (use axis=0 instead to normalize columns).
Or using a lambda function:
>>> import numpy as np
>>> vec = np.arange(0, 27, 3).reshape(3, 3)
>>> norm_vec = np.array(list(map(lambda row: row / np.linalg.norm(row), vec)))
Each row of norm_vec will have unit norm. (In Python 3, map returns an iterator, hence the list(...) call.)
We can achieve the same effect by premultiplying by the diagonal matrix whose main diagonal holds the reciprocals of the row sums:
A = np.diag(1.0 / A.sum(axis=1)) @ A
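As a quick check of the diagonal trick (using the question's array, cast to float so the division behaves):
import numpy as np

A = np.arange(0, 27, 3).reshape(3, 3).astype(float)
A = np.diag(1.0 / A.sum(axis=1)) @ A  # premultiply by diag of reciprocal row sums
print(A.sum(axis=1))  # [1. 1. 1.]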