Find minimum of two values in Python

How do I program the expression min{cos(2πx), 1/2} in Python?
I have tried:
x = np.array([1,2,3,4,5,3,2,5,7])
solution = np.min(np.cos(2*x*np.pi), 1/2)
But it does not work, and I get the following error:
TypeError: 'float' object cannot be interpreted as an integer

I have tried your code with np.minimum like this:
import numpy as np
x = np.array([1, 2, 3, 4, 5, 3, 2, 5, 7])
solution = np.minimum(np.cos(2*x*np.pi), 1/2)
print(solution)
which gives:
[0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]
np.minimum compares its two inputs element-wise and returns an array of the smaller values; see the numpy.minimum documentation. The original call failed because np.min's second positional argument is axis, which must be an integer, so passing 1/2 raises the TypeError.
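If you prefer a single call, np.clip can impose the upper limit of 1/2 directly by clipping the cosine from above; a minimal sketch using the same x as above:

import numpy as np

x = np.array([1, 2, 3, 4, 5, 3, 2, 5, 7])
# min(cos(2*pi*x), 1/2) is the cosine clipped from above at 0.5
solution = np.clip(np.cos(2 * x * np.pi), None, 0.5)
print(solution)  # [0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]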

Related

How can I fix the problem that numpy.arange is not working properly?

I am trying to create two lists with numpy.arange via two input parameters, and then I want to assign them to rows of arrays initialized via np.zeros as 3x3 matrices. The problem is that the assignment only works for 0.1 and I don't see what I am doing wrong. My code:
import numpy as np
from time import sleep

def Stabilizer(px, pz):
    ss = 0.05
    # initialize arrays for data acquisition: 1st row is for countrate,
    # 2nd and 3rd rows for x and z values in V
    values_x = np.zeros((3, 3), dtype=float)
    values_z = np.zeros((3, 3), dtype=float)
    sleep(5)
    values_x[2] = pz
    values_z[1] = px
    x_range = np.arange(px-ss, px+ss, ss)
    z_range = np.arange(pz-ss, pz+ss, ss)
    print(x_range)
    print(z_range)
    values_x[1] = x_range
    values_z[2] = z_range
    for i, x_value in enumerate(x_range):
        # change_pos(x_channel, x_value)
        sleep(1)
        start = 1000
        stop = 1 + i
        countrate = stop - start
        values_x[0, i] = countrate
        print(x_value)
    print(values_x)

Stabilizer(0.1, 0.2)
which creates this output on console:
Traceback (most recent call last):
File "C:/Users/x/PycharmProjects/NV_centre/test.py", line 46, in <module>
Stabilizer(0.1,0.2)
File "C:/Users/x/PycharmProjects/NV_centre/test.py", line 35, in Stabilizer
values_z[2] = z_range
ValueError: could not broadcast input array from shape (2) into shape (3)
[0.05 0.1 0.15]
[0.15 0.2 ]
In theory np.arange(px-ss, px+ss, ss) creates a list with the output [0.05 0.1]. When I use np.arange(px-ss, px+2*ss, ss), in theory the output would be [0.05 0.1 0.15], but it is [0.05 0.1 0.15 0.2]. And for z_range = np.arange(pz-ss, pz+2*ss, ss) the output is [0.15 0.2 0.25], which is correct. I don't understand why the difference occurs, since both lists are created in the same way.
The results of numpy.arange are not consistent when using a non-integer step (you have used 0.05). Using numpy.linspace instead gives more consistent results.
ref: https://numpy.org/doc/stable/reference/generated/numpy.arange.html
np.arange() does not work well with floating-point steps because floating-point operations have rounding errors. Refer to this for more details.
It's better to use np.linspace() in such cases, so change the following lines to:
x_range = np.linspace(px-ss, px+ss, 3)
z_range = np.linspace(pz-ss, pz+ss, 3)
This will work fine.
This is the best solution I can think of:
x_range = [round(i, 2) for i in np.arange(px-ss, px+2*ss, ss) if i < px+2*ss]
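The difference between the two calls comes from how arange computes its endpoints in floating point: 0.1 + 0.05 rounds to a value slightly above 0.15, so the endpoint sneaks into the range, while 0.2 + 0.05 rounds to 0.25 exactly, which arange excludes as the stop value. A minimal sketch reproducing both cases (exact outputs can vary with platform rounding):

import numpy as np

ss = 0.05
px, pz = 0.1, 0.2

# 0.1 + 0.05 rounds slightly above 0.15, so 0.15 is included: 3 elements
print(np.arange(px - ss, px + ss, ss))   # [0.05 0.1  0.15]
# 0.2 + 0.05 rounds to 0.25, which is excluded as the stop value: 2 elements
print(np.arange(pz - ss, pz + ss, ss))   # [0.15 0.2 ]

# linspace takes an explicit element count, so the length is always as asked
print(np.linspace(px - ss, px + ss, 3))  # [0.05 0.1  0.15]
print(np.linspace(pz - ss, pz + ss, 3))  # [0.15 0.2  0.25]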

Strange behaviour of the Python minimize procedure

Working with scipy.optimize.minimize I'm seeing some strange behaviour of the minimize procedure. Below is test code to show my results:
import numpy as np
import pandas as pd
from scipy.optimize import minimize
def SES(good, a, h):
    print('good is : {}'.format(good))
    print('a is : {}'.format(a))
    print('h is : {}'.format(h))
    return 0

good = [1, 2, 3, 4, 5, 6]
a = minimize(SES, x0=good, args=(0.1, 1), method='L-BFGS-B',
             bounds=[[0.1, 0.3]]*len(good))
I'm expecting the SES function to print the values [1, 2, 3, 4, 5, 6] for the good parameter, but I'm receiving the following output:
good is : [0.3 0.3 0.3 0.3 0.3 0.3]
a is : 0.1
h is : 1
If I remove the bounds parameter then I receive the output I expect:
a = minimize(SES, x0 = good, args=(0.1, 1), method = 'L-BFGS-B')
good is : [1. 2. 3. 4. 5. 6.]
a is : 0.1
h is : 1
Could you explain what I'm doing wrong?
It seems I found where the problem is: the values in good are outside the bounds, so they get clipped to the nearest bound, which is why I get this result.
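That is indeed what happens: L-BFGS-B projects any out-of-bounds component of x0 onto the feasible box before the first function evaluation, so every starting value ends up at the upper bound 0.3. A minimal sketch confirming this, with hypothetical bounds chosen wide enough to contain x0:

import numpy as np
from scipy.optimize import minimize

def SES(good, a, h):
    print('good is : {}'.format(good))
    return 0.0

good = [1, 2, 3, 4, 5, 6]
# hypothetical bounds that contain x0, unlike [0.1, 0.3]
a = minimize(SES, x0=good, args=(0.1, 1), method='L-BFGS-B',
             bounds=[[0.1, 10.0]] * len(good))
# first line printed: good is : [1. 2. 3. 4. 5. 6.]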

Why is an assignment operation still giving a MemoryError for large arrays in Python?

I have a large array K (29000 x 29000):
K = numpy.random.random((29000, 29000))
I want to apply the following operation to K:
output = K * (1.5 - 0.5 * K * K)
To try to prevent a MemoryError, I am doing my computations as suggested in the answer from this thread.
However, when I try to do the assignment operation on the large array as follows, I still get the MemoryError:
K *= 1.5 - 0.5 * K * K
Any help welcome.
NOTE: this is not a duplicate post. There is a suggestion in that post to use Cython, but I am looking for alternative solutions that do not rely on Cython.
You can do the assignment in blocks of, say, 1000 rows. The additional array this creates will be 1/29 the size of your array, and having a for loop run 29 times shouldn't be much of a speed problem. A typical memory/speed tradeoff.
block = 1000  # the size of row blocks to use
K = np.random.random((29000, 29000))
for i in range(int(np.ceil(K.shape[0] / block))):
    K[i*block:(i+1)*block, :] *= 1.5 - 0.5 * K[i*block:(i+1)*block, :]**2
Since there was some concern about the performance on smaller matrices, here is a test for those:
block = 1000
K = np.arange(9).astype(float).reshape((3, 3))
print(1.5 * K - 0.5 * K**3)
for i in range(int(np.ceil(K.shape[0] / block))):
    K[i*block:(i+1)*block, :] *= 1.5 - 0.5 * K[i*block:(i+1)*block, :]**2
print(K)
This prints
[[   0.    1.   -1.]
 [  -9.  -26.  -55.]
 [ -99. -161. -244.]]
twice.
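If adding a dependency is acceptable, the numexpr package evaluates a whole expression in cache-sized blocks and can write the result back into K, avoiding the full-size temporaries altogether. A minimal sketch, assuming numexpr is installed (a small array stands in for the 29000 x 29000 one):

import numpy as np
import numexpr as ne

K = np.random.random((4, 4))  # small stand-in for the 29000 x 29000 array
# numexpr evaluates blockwise internally; out=K stores the result in place
ne.evaluate("K * (1.5 - 0.5 * K * K)", out=K)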

numpy 2D-array division precision loss

I ran into some trouble doing a NumPy 2D-array division.
I have a 2D numpy array A (shape=(N, N)). I divide it by its row sums (axis=1) and get a 2D array B, but when I compute the row sums (axis=1) of B, they are not equal to one in some rows. The code follows:
(python2.7.x)
from __future__ import division
import numpy as np
A = np.array([[x_11, x_12, ..., x_1N],
              [x_21, x_22, ..., x_2N],
              [ ...                 ],
              [x_N1, x_N2, ..., x_NN]])  # x_ij are some np.float64 values
B = A / np.sum(A, axis=1, keepdims=True)
Expected result:
np.count_nonzero(np.sum(B, axis=1) != 1)
# it should be 0
Actual result:
np.count_nonzero(np.sum(B, axis=1) != 1)
# something bigger than 0
I believe the reason is precision loss, even though I use dtype=np.float64, because in my project the A 2D-array (shape=(N, N), N > 8000) has mostly very small values (e.g. 1.0) together with very big ones (e.g. 2000) in the same row.
I have tried adding the losses back:
while np.count_nonzero(np.sum(B, axis=1) != 1) != 0:
    losts = 1 - np.sum(B, axis=1)
    B[:, i] += losts  # the i may change by some conditions
Though this finally works around the problem, it is not good for the next step in my project.
Could anyone help me? Thanks a lot!!!
When working with floating-point numbers you lose precision, and floating-point results rarely match natural numbers exactly.
A simple test to demonstrate this is:
>>> 0.1 + 0.2 == 0.3
False
This is because the floating-point representation of 0.1 + 0.2 is 0.30000000000000004.
To solve this you just need to switch to np.isclose or np.allclose:
import numpy as np
N = 100
A = np.random.randn(N, N)
B = A / np.sum(A, axis=1, keepdims=True)
Then:
>>> np.count_nonzero(np.sum(B, axis=1) != 1)
79
whereas
>>> np.allclose(np.sum(B, axis=1), 1)
True
In short, your rows are properly normalized, they just don't sum exactly to 1.
From the documentation np.isclose(a, b) is equivalent to:
absolute(a - b) <= (atol + rtol * absolute(b))
with atol = 1e-8 and rtol = 1e-5 (by default), which is the proper way of checking that two floating-point numbers represent the same number (or at least, approximately the same number).
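As a drop-in replacement for the exact comparison in the question, the same count can be made tolerance-aware; a short sketch using the B from above:

import numpy as np

N = 100
A = np.random.randn(N, N)
B = A / np.sum(A, axis=1, keepdims=True)

# count rows whose sum is NOT approximately 1; should print 0
print(np.count_nonzero(~np.isclose(np.sum(B, axis=1), 1.0)))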

python program bezier function

So I am trying to program a Bezier curve in Python, and unlike previous posts I have been able to find, I need to program it in such a way that the summation adjusts to however many splines there are, since I need to be able to remove or add splines.
I am basing my programming on this wiki article, http://en.wikipedia.org/wiki/B%C3%A9zier_curve, right after the headline "Explicit definition".
so far this is what I have made:
from __future__ import division
import math
import numpy as np
from pylab import *

fact = math.factorial

def binormal(n, i):
    # binomial coefficient "n choose i"
    koef = fact(n) / float(fact(i) * fact(n - i))
    return koef

def bernstein(n, i, t):
    bern = binormal(n, i) * (1 - t)**(n - i) * (t**i)
    return bern

f = open('polaerekoordinator.txt', 'r')
whole_thing = f.read().splitlines()
f.close()
# these are the coordinates I am trying to use for now:
# 0.000       49.3719597
# 9.0141211   49.6065178
# 20.2151089  50.9161568
# 32.8510895  51.3330612
# 44.5151596  45.5941772
# 50.7609444  35.3062477
# 51.4409332  23.4890251
# 49.9188042  11.8336229
# 49.5664711  0.000

alle = []
for entry in whole_thing:
    alle.append(entry.split(" "))

def bezier(t):  # where t holds the parameter values of the points on the curve
    n = len(alle) - 1  # curve degree: number of control points minus one
    x = y = 0
    for i, entry in enumerate(alle):
        x += float(entry[0]) * bernstein(n, i, t)
    for i, entry in enumerate(alle):
        y += float(entry[1]) * bernstein(n, i, t)
    return x, y

bezier(np.arange(0, 1, 0.01))
my problem right now is that I need to do a summation of the x and y coordinates, so they become something like this:
y = [y0, y0+y1, y0+y1+y2, ..., y0+y1+y2+...+yn]
and the same for x.
Any pointers?
I think you can use np.cumsum (http://docs.scipy.org/doc/numpy/reference/generated/numpy.cumsum.html):
>>> y = np.arange(0, 1, 0.1)
>>> print(y)
[0.  0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]
>>> y_cum = np.cumsum(y)
>>> print(y_cum)
[0.  0.1 0.3 0.6 1.  1.5 2.1 2.8 3.6 4.5]
Edit:
Using your example coordinates I get the following outputs:
x,y = bezier(np.arange(0,1,0.01))
plot(x,y)
plot(np.cumsum(x),np.cumsum(y))
Assuming this is what you are looking for!
I'm not very clear on what you're trying to accomplish, but it's probably something like this:
for i, v in enumerate(y):
    y2.append(sum(y[:i+1]))
Demo:
>>> y = [1, 2, 3]
>>> y2 = []
>>> for i, v in enumerate(y):
...     y2.append(sum(y[:i+1]))
...
>>> y2
[1, 3, 6]
Or, as a shortcut, using a list comprehension:
y2 = [sum(y[:i+1]) for i, _ in enumerate(y)]
Well, hope it helps!
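Putting the pieces together, here is a self-contained sketch of the bezier function with the running sums applied via np.cumsum; the three control points are hypothetical stand-ins for the contents of polaerekoordinator.txt:

import numpy as np
from math import factorial

# hypothetical control points (x, y)
points = [(0.0, 49.3719597),
          (9.0141211, 49.6065178),
          (20.2151089, 50.9161568)]

def bernstein(n, i, t):
    coef = factorial(n) / (factorial(i) * factorial(n - i))
    return coef * (1 - t)**(n - i) * t**i

def bezier(t):
    n = len(points) - 1  # curve degree
    x = sum(px * bernstein(n, i, t) for i, (px, _) in enumerate(points))
    y = sum(py * bernstein(n, i, t) for i, (_, py) in enumerate(points))
    return x, y

t = np.arange(0, 1, 0.01)
x, y = bezier(t)
x_cum, y_cum = np.cumsum(x), np.cumsum(y)  # the requested running sums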
