Working with scipy.optimize.minimize, I'm seeing strange behavior from the minimize procedure. Below is test code that shows my results:
import numpy as np
import pandas as pd
from scipy.optimize import minimize

def SES(good, a, h):
    print('good is : {}'.format(good))
    print('a is : {}'.format(a))
    print('h is : {}'.format(h))
    return 0

good = [1, 2, 3, 4, 5, 6]
a = minimize(SES, x0=good, args=(0.1, 1), method='L-BFGS-B', bounds=[[0.1, 0.3]] * len(good))
I expect the SES function to print the values [1, 2, 3, 4, 5, 6] for the good parameter, but I receive the following output:
good is : [0.3 0.3 0.3 0.3 0.3 0.3]
a is : 0.1
h is : 1
If I remove the bounds parameter, I receive the output I expect:
a = minimize(SES, x0 = good, args=(0.1, 1), method = 'L-BFGS-B')
good is : [1. 2. 3. 4. 5. 6.]
a is : 0.1
h is : 1
Could you explain what I'm doing wrong?
Edit: it seems I've found the problem. The initial guess good lies outside the bounds, so L-BFGS-B clips it into the feasible box before the first evaluation; that is why SES sees [0.3 0.3 0.3 0.3 0.3 0.3].
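A minimal sketch of that clipping behaviour, reusing the question's toy objective (the call to np.clip is my illustration of what the solver does to x0, not scipy's internal code):

import numpy as np
from scipy.optimize import minimize

def SES(good, a, h):
    print('good is : {}'.format(good))
    return 0.0

good = [1, 2, 3, 4, 5, 6]
bounds = [(0.1, 0.3)] * len(good)

# Every component of good exceeds the upper bound 0.3, so projecting the
# vector onto the box reproduces exactly what the question observed:
print(np.clip(good, 0.1, 0.3))  # [0.3 0.3 0.3 0.3 0.3 0.3]

# An initial guess that already satisfies the bounds is used as given.
minimize(SES, x0=[0.2] * len(good), args=(0.1, 1), method='L-BFGS-B', bounds=bounds)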
I am trying to create two lists with numpy.arange from two input parameters, and then I want to assign them into arrays initialized via np.zeros as 3x3 matrices. The problem is that the assignment only works for 0.1, and I don't see what I am doing wrong. My code:
import numpy as np
from time import sleep

def Stabilizer(px, pz):
    ss = 0.05
    # initialize arrays for data acquisition: 1st row is for count rate,
    # 2nd and 3rd rows for x and z values in V
    values_x = np.zeros((3, 3), dtype=float)
    values_z = np.zeros((3, 3), dtype=float)
    sleep(5)
    values_x[2] = pz
    values_z[1] = px
    x_range = np.arange(px - ss, px + ss, ss)
    z_range = np.arange(pz - ss, pz + ss, ss)
    print(x_range)
    print(z_range)
    values_x[1] = x_range
    values_z[2] = z_range
    for i, x_value in enumerate(x_range):
        # change_pos(x_channel, x_value)
        sleep(1)
        start = 1000
        stop = 1 + i
        countrate = stop - start
        values_x[0, i] = countrate
        print(x_value)
    print(values_x)

Stabilizer(0.1, 0.2)
which creates this output on console:
Traceback (most recent call last):
File "C:/Users/x/PycharmProjects/NV_centre/test.py", line 46, in <module>
Stabilizer(0.1,0.2)
File "C:/Users/x/PycharmProjects/NV_centre/test.py", line 35, in Stabilizer
values_z[2] = z_range
ValueError: could not broadcast input array from shape (2) into shape (3)
[0.05 0.1 0.15]
[0.15 0.2 ]
In theory, np.arange(px-ss, px+ss, ss) should create a list with the output [0.05 0.1]. When I use np.arange(px-ss, px+2*ss, ss), in theory the output would be [0.05 0.1 0.15], but it is [0.05 0.1 0.15 0.2]. And for z_range = np.arange(pz-ss, pz+2*ss, ss) the output is [0.15 0.2 0.25], which is correct. I don't understand why the difference occurs, since both lists are created in the same way.
The results of numpy.arange are not consistent when using a non-integer step (you have used 0.05). Using numpy.linspace instead gives more consistent results.
ref: https://numpy.org/doc/stable/reference/generated/numpy.arange.html
np.arange() does not work well with floating-point steps because floating-point operations have rounding error; the arange documentation itself recommends linspace for this case. It's better to use np.linspace() here, so change the following lines to:

x_range = np.linspace(px - ss, px + ss, 3)
z_range = np.linspace(pz - ss, pz + ss, 3)

This will work fine.
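A quick demonstration of why the two calls behave differently (a sketch; the culprit is that 0.1 + 0.05 rounds slightly above 0.15 in binary, so arange's exclusive stop is never reached, while 0.2 + 0.05 rounds to exactly 0.25, which is excluded):

import numpy as np

ss = 0.05
print(np.arange(0.1 - ss, 0.1 + ss, ss))   # [0.05 0.1  0.15] -- 3 elements
print(np.arange(0.2 - ss, 0.2 + ss, ss))   # [0.15 0.2]       -- only 2

# linspace takes an explicit element count, so the length is always right.
print(np.linspace(0.1 - ss, 0.1 + ss, 3))  # [0.05 0.1  0.15]
print(np.linspace(0.2 - ss, 0.2 + ss, 3))  # [0.15 0.2  0.25]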
This is the best solution I can think of:

x_range = [round(i, 2) for i in np.arange(px - ss, px + 2*ss, ss) if i < px + 2*ss]
I am writing code to identify which combination of the values available in an array best adds up to a given value, as below:
import numpy as np

def find_nearest(array, value):
    array = np.asarray(array)
    idx = (np.abs(array - value)).argmin()
    return array[idx]

thickness = np.array([0.1, 0.2, 0.4, 0.8, 1.6, 3.2, 6.4, 12.8, 25.6, 51.2])
b = np.array([])
a = 100
c = 48.4

while c >= 0 and a > 0.1:
    a = find_nearest(thickness, c)
    if a > c:
        g = np.where(thickness == a)
        f = g[0] - 1
        a = thickness[f]
    else:
        a = a
    c = c - a
    print(c)
    if c == 0.1:
        break
    b = np.append(b, a)
    itemindex = np.where(thickness == a)
    itemindex = itemindex[0]
    upper_limit = len(thickness) + 1
    hj = np.arange(itemindex, upper_limit)
    thickness = np.delete(thickness, hj, None)
    print(thickness)

slots_sum = np.sum(b)
print("It will be used the following slots: ", b, "representing a total of {:.2f} mm".format(slots_sum))
However, for some reason I cannot figure out, when the code tries to find the proper combination of values to reach 48.4, it skips the value 0.4 in the array and selects 0.2 and 0.1, which results in a sum of 48.3 instead of the correct 48.4. I have been banging my head over this for days; I will appreciate any help.
[22.8]
[ 0.1 0.2 0.4 0.8 1.6 3.2 6.4 12.8]
[10.]
[0.1 0.2 0.4 0.8 1.6 3.2 6.4]
[3.6]
[0.1 0.2 0.4 0.8 1.6 3.2]
[0.4]
[0.1 0.2 0.4 0.8 1.6]
[0.2]
[0.1]
[0.1]
[]
It will be used the following slots: [25.6 12.8 6.4 3.2 0.2 0.1] representing a total of 48.30 mm.
Multiply your inputs by 10 to give integer values, and the answer is what you expect.
You will need to compensate for the inexact nature of floating-point values whenever you compare sums of floating-point numbers. Here, after subtracting 25.6, 12.8, 6.4, and 3.2, the remainder c is slightly below 0.4 rather than exactly 0.4; find_nearest still returns 0.4, but the a > c branch then steps down to the previous slot, 0.2, which is exactly where 0.4 gets skipped.
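A sketch of the integer-scaling idea (note this is a greedy largest-first pass, not the question's find_nearest loop, and it assumes every thickness is a multiple of 0.1 mm): working in integer tenths of a millimetre makes every comparison and subtraction exact.

import numpy as np

thickness = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256, 512])  # tenths of a mm
target = 484  # 48.4 mm, scaled by 10

chosen = []
for t in sorted(thickness, reverse=True):
    if t <= target:          # exact integer comparison, no rounding drift
        chosen.append(t)
        target -= t

print([t / 10 for t in chosen], "total: {:.2f} mm".format(sum(chosen) / 10))
# [25.6, 12.8, 6.4, 3.2, 0.4] total: 48.40 mm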
How do I program this expression in Python?
min{cos(2xπ), 1/2}
I have tried:
x = np.array([1,2,3,4,5,3,2,5,7])
solution = np.min(np.cos(2*x*np.pi), 1/2)
But it does not work, and I get the following error:
TypeError: 'float' object cannot be interpreted as an integer.
I have tried your code with np.minimum, like this:
import numpy as np
x = np.array([1,2,3,4,5,3,2,5,7])
solution = np.minimum(np.cos(2*x*np.pi), 1/2)
print(solution)
which gives something like this:
[ 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5 0.5]
np.minimum compares its two arguments element-wise and returns an array of the per-element minima; see the numpy.minimum documentation for details.
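For anyone hitting the same TypeError, a minimal sketch of the difference between the two functions: np.min reduces a single array (its second positional argument is axis, which is why passing 1/2 fails), while np.minimum is the element-wise comparison the question needs.

import numpy as np

x = np.array([1, 2, 3, 4, 5, 3, 2, 5, 7])

# np.min collapses one array to a scalar; np.min(arr, 1/2) tries to
# interpret 0.5 as an axis and raises the TypeError from the question.
print(np.min(np.cos(2 * x * np.pi)))            # a single number

# np.minimum compares two arguments element-wise.
print(np.minimum(np.cos(2 * x * np.pi), 0.5))   # one value per element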
I need to normalize a list of values to fit in a probability distribution, i.e. values between 0.0 and 1.0 that sum to 1.0.
I understand how to normalize, but was curious if Python had a function to automate this.
I'd like to go from:
raw = [0.07, 0.14, 0.07]
to
normed = [0.25, 0.50, 0.25]
Use:

norm = [float(i) / sum(raw) for i in raw]

to normalize against the sum, which ensures the normalized values always sum to 1.0 (or as close to it as floating point allows).

Use:

norm = [float(i) / max(raw) for i in raw]

to normalize against the maximum.

If your list has negative numbers, this is how you would normalize it:

a = range(-30, 31, 5)
norm = [(float(i) - min(a)) / (max(a) - min(a)) for i in a]
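A quick check of the first two variants with the question's own numbers, showing how they differ (sum normalization yields a probability distribution, max normalization pins the peak to 1.0):

raw = [0.07, 0.14, 0.07]

by_sum = [float(i) / sum(raw) for i in raw]  # ~[0.25, 0.5, 0.25] -- sums to 1.0
by_max = [float(i) / max(raw) for i in raw]  # [0.5, 1.0, 0.5]    -- peak is 1.0
print(by_sum, by_max)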
For those who want to use scikit-learn, you can use:
from sklearn.preprocessing import normalize
x = [1,2,3,4]
normalize([x]) # array([[0.18257419, 0.36514837, 0.54772256, 0.73029674]])
normalize([x], norm="l1") # array([[0.1, 0.2, 0.3, 0.4]])
normalize([x], norm="max") # array([[0.25, 0.5 , 0.75, 1.]])
How long is the list you're going to normalize?

def psum(it):
    "This function makes explicit how many calls to sum() are done."
    print("Another call!")
    return sum(it)

raw = [0.07, 0.14, 0.07]

print("How many calls to sum()?")
print([r / psum(raw) for r in raw])

print("\nAnd now?")
s = psum(raw)
print([r / s for r in raw])

# if one doesn't want auxiliary variables, it can be done inside
# a list comprehension, but in my opinion it's quite Baroque
print("\nAnd now?")
print([r / s for s in [psum(raw)] for r in raw])
Output
# How many calls to sum()?
# Another call!
# Another call!
# Another call!
# [0.25, 0.5, 0.25]
#
# And now?
# Another call!
# [0.25, 0.5, 0.25]
#
# And now?
# Another call!
# [0.25, 0.5, 0.25]
Try:

>>> normed = [i / sum(raw) for i in raw]
>>> normed
[0.25, 0.5, 0.25]
There isn't any function in the standard library (to my knowledge) that will do it, but there are absolutely modules out there which have such functions. However, it's easy enough to write your own:

def normalize(lst):
    s = sum(lst)
    return [float(x) / s for x in lst]
Sample output:
>>> normed = normalize(raw)
>>> normed
[0.25, 0.5, 0.25]
If you are willing to use numpy, you can get a much faster solution:

import random, time
import numpy as np

a = random.sample(range(1, 20000), 10000)

since = time.time(); b = [i / sum(a) for i in a]; print(time.time() - since)
# 0.7956490516662598
since = time.time(); c = np.array(a); d = c / sum(a); print(time.time() - since)
# 0.001413106918334961

Note that the pure-Python version recomputes sum(a) once per element, which accounts for much of that gap.
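For a fairer comparison (a sketch), hoist sum(a) out of the comprehension so it is computed once; numpy is still ahead, but by far less than the thousandfold gap above:

import random, time
import numpy as np

a = random.sample(range(1, 20000), 10000)

since = time.time()
s = sum(a)              # computed once, not once per element
b = [i / s for i in a]
print(time.time() - since)

since = time.time()
c = np.array(a)
d = c / c.sum()         # fully vectorized
print(time.time() - since)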
Try this:

raw = [0.07, 0.14, 0.07]

def norm(input_list):
    norm_list = list()
    if isinstance(input_list, list):
        sum_list = sum(input_list)
        for value in input_list:
            tmp = value / sum_list
            norm_list.append(tmp)
    return norm_list

print(norm(raw))
This will do what you asked. But I would also suggest trying min-max normalization:
def min_max_norm(dataset):
    if isinstance(dataset, list):
        norm_list = list()
        min_value = min(dataset)
        max_value = max(dataset)
        for value in dataset:
            tmp = (value - min_value) / (max_value - min_value)
            norm_list.append(tmp)
        return norm_list
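For reference, applying the min_max_norm defined above to the question's list gives a different shape of result than sum normalization (a quick check):

raw = [0.07, 0.14, 0.07]
print(min_max_norm(raw))  # [0.0, 1.0, 0.0] -- min maps to 0, max to 1; no longer sums to 1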
If you are working with data, pandas often makes this simple.
This particular code puts raw into one column, then normalizes column-wise. (We could equally put it into a row and normalize row-wise; just change the axis values, where 0 is for rows and 1 is for columns.)
import pandas as pd
raw = [0.07, 0.14, 0.07]
raw_df = pd.DataFrame(raw)
normed_df = raw_df.div(raw_df.sum(axis=0), axis=1)
normed_df
where normed_df will display like:
0
0 0.25
1 0.50
2 0.25
and then can keep playing with the data, too!
Here is a not-terribly-inefficient one-liner similar to the top answer (it performs the summation only once):
norm = (lambda the_sum:[float(i)/the_sum for i in raw])(sum(raw))
A similar one-liner handles a list with negative numbers:
norm = (lambda the_max, the_min: [(float(i)-the_min)/(the_max-the_min) for i in raw])(max(raw),min(raw))
Use scikit-learn:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

data = np.array([1, 2, 3]).reshape(-1, 1)
scaler = MinMaxScaler()
scaler.fit(data)
print(scaler.transform(data))
So I am trying to program a Bezier curve in Python, and unlike previous posts I have been able to find, I need to program it in such a way that the summation adjusts to however many control points there are, since I need to be able to remove or add points.
I am basing my programming on the Wikipedia article http://en.wikipedia.org/wiki/B%C3%A9zier_curve, right after the heading "Explicit definition".
So far this is what I have:
from __future__ import division
import math
import numpy as np
from pylab import *

fact = math.factorial

def binormal(n, i):
    koef = fact(n) / float(fact(i) * fact(n - i))
    return koef

def bernstein(n, i, t):
    bern = binormal(n, i) * (1 - t)**(n - i) * (t**i)
    return bern

f = open('polaerekoordinator.txt', 'r')
whole_thing = f.read().splitlines()
f.close()
# these are the coordinates I am trying to use for now:
# 0.000      49.3719597
# 9.0141211  49.6065178
# 20.2151089 50.9161568
# 32.8510895 51.3330612
# 44.5151596 45.5941772
# 50.7609444 35.3062477
# 51.4409332 23.4890251
# 49.9188042 11.8336229
# 49.5664711 0.000

alle = []
for entry in whole_thing:
    alle.append(entry.split(" "))

def bezier(t):  # where t is how many points there are on the bezier curve
    n = len(alle)
    x = y = 0
    for i, entry in enumerate(alle):
        x += float(entry[0]) * bernstein(n, i, t) + x
    for i, entry in enumerate(alle):
        y += float(entry[1]) * bernstein(n, i, t) + y
    return x, y

bezier(np.arange(0, 1, 0.01))
My problem right now is that I need a running summation of the x and y coordinates, so that they become something like this:
y = [y0, y0+y1, y0+y1+y2, ..., y0+y1+y2+...+yn]
and the same for x.
Any pointers?
I think you can use np.cumsum (http://docs.scipy.org/doc/numpy/reference/generated/numpy.cumsum.html):

>>> y = np.arange(0, 1, 0.1)
>>> print(y)
[0.  0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9]
>>> y_cum = np.cumsum(y)
>>> print(y_cum)
[0.  0.1 0.3 0.6 1.  1.5 2.1 2.8 3.6 4.5]
Edit:
Using your example coordinates I get the following outputs:
x,y = bezier(np.arange(0,1,0.01))
plot(x,y)
plot(np.cumsum(x),np.cumsum(y))
Assuming this is what you are looking for!
I'm not very clear on what you're trying to accomplish, but it's probably something like this:
for i, v in enumerate(y):
    y2.append(sum(y[:i+1]))
Demo:

>>> y = [1, 2, 3]
>>> y2 = []
>>> for i, v in enumerate(y):
...     y2.append(sum(y[:i+1]))
...
>>> y2
[1, 3, 6]
Or, the shortcut using list comprehension:
y2 = [sum(y[:i+1]) for i,_ in enumerate(y)]
Well, hope it helps!
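As a side note, sum(y[:i+1]) re-sums the whole prefix at every step, which is quadratic overall. For plain Python lists, itertools.accumulate in the standard library produces the same running totals in a single pass (a sketch):

from itertools import accumulate

y = [1, 2, 3]
y2 = list(accumulate(y))  # running totals in one O(n) pass
print(y2)                 # [1, 3, 6]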