Python script to generate gradients not working

I have this Python script to generate x, y, z lists and u, v, w lists such that u[i], v[i], w[i] is the gradient vector at the point x[i], y[i], z[i].
It doesn't seem to be producing the right values. Does anyone know what's wrong?
from math import *

def coordinates(lst, f, gradx, grady, gradz):
    lst = lst[1:-1].split(",")
    lst = [float(x.strip()) for x in lst]
    xlst = []
    ylst = []
    zlst = []
    ulst = []
    vlst = []
    wlst = []
    for x in lst:
        for y in lst:
            xlst.append(str(x))
            ylst.append(str(y))
            zlst.append(str(f(x, y)))
            ulst.append(str(gradx(x, y)))
            vlst.append(str(grady(x, y)))
            wlst.append(str(gradz(x, y)))
    string = "xlst=[" + ",".join(xlst) + "]\n" + \
             "ylst=[" + ",".join(ylst) + "]\n" + \
             "zlst=[" + ",".join(zlst) + "]\n" + \
             "ulst=[" + ",".join(ulst) + "]\n" + \
             "vlst=[" + ",".join(vlst) + "]\n" + \
             "wlst=[" + ",".join(wlst) + "]\n"
    return string
lst = "{0, 2, 4, 6, 8, 10}"
# get the function in the form f(x,y)=z; here it's y^2 - x^2 - z = 0
f = lambda x,y: y**2 - x**2
# get the three gradient functions (df/dx, df/dy, df/dz)
gx = lambda x,y: -2*x
gy = lambda x,y: 2*y
gz = lambda x,y: -1
c = coordinates(lst, f, gx, gy, gz)
print c
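A minimal way to sanity-check the analytic gradients, assuming the surface is the level set F(x, y, z) = y**2 - x**2 - z = 0 (so the gradient should be (-2x, 2y, -1)), is to compare them against central finite differences:

# Sketch: compare the analytic gradient of F(x, y, z) = y**2 - x**2 - z
# against central finite differences at a few sample points.
def F(x, y, z):
    return y**2 - x**2 - z

def numeric_grad(x, y, z, h=1e-6):
    dFdx = (F(x + h, y, z) - F(x - h, y, z)) / (2 * h)
    dFdy = (F(x, y + h, z) - F(x, y - h, z)) / (2 * h)
    dFdz = (F(x, y, z + h) - F(x, y, z - h)) / (2 * h)
    return dFdx, dFdy, dFdz

for x, y in [(0.0, 2.0), (4.0, 6.0)]:
    z = y**2 - x**2                  # a point on the surface
    print(numeric_grad(x, y, z))     # should be close to (-2*x, 2*y, -1)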

Related

Check equality using conditions sympy

I want to prove that (x/a)^2 + (y/b)^2 + (z/c)^2 == 1, given the conditions x/a + y/b + z/c == 1 and a/x + b/y + c/z == 0. I know that, for example, in Maple I can simply write
eq1 := x/a + y/b + z/c = 1;
eq2 := a/x + b/y + c/z = 0;
f := x^2/a^2 + y^2/b^2 + z^2/c^2 = 1;
simplify(lhs(f)-rhs(f), {eq1, eq2});
But I'm struggling to come up with solution using sympy.
Without loss of generality, let x stand for x/a, y for y/b, and z for z/c. Then:
>>> from sympy import symbols, Eq, solve
>>> x, y, z = symbols('x y z')
>>> e1 = Eq(x + y + z, 1)
>>> e2 = Eq(1/x + 1/y + 1/z, 0)
>>> e3 = Eq(x**2 + y**2 + z**2, 1)
>>> [e3.subs(i).expand() for i in solve((e1, e2))]
[True, True]
Thus, e3 is true for all values that satisfy e1 and e2
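A quick way to see why this works: squaring e1 gives (x + y + z)**2 = x**2 + y**2 + z**2 + 2*(x*y + y*z + z*x), while multiplying e2 through by x*y*z shows that x*y + y*z + z*x = 0, so the squares must sum to 1. A minimal sympy check of that expansion identity:

from sympy import symbols, expand

x, y, z = symbols('x y z')
# (x + y + z)**2 - (x**2 + y**2 + z**2) - 2*(x*y + y*z + z*x) should be 0
print(expand((x + y + z)**2 - (x**2 + y**2 + z**2) - 2*(x*y + y*z + z*x)))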

Custom math function on a pandas dataframe with two columns

I would like to get this custom function for a pandas dataframe to work.
It is a simple function with two inputs,
wordCount
imageCount
which is supposed to calculate the reading time of a text in a pandas dataframe:
c = imageCount
x = wordCount
((5.717938 + (12.03401 - 5.717938)/(1 + (c/3.579499)^4.092419)) * c) + x * 0.0037736111111111113
I tried it in a couple of ways, but could not get it to work properly.
def readingT(df, y="imageCount", x="wordCount"):
    readingTimeImage = (5.717938 + (12.03401 - 5.717938)/(1 + (c/3.579499)^4.092419)) * c
    readingTimeWords = 0.0037736111111111113 * x
    return readingTimeImage + readingTimeWords

def readingT2(c="imageCount", w="wordCount"):
    return ((5.717938 + (12.03401 - 5.717938)/(1 + (c/3.579499)^4.092419)) * c + 0.0037736111111111113 * w)

readingT2.apply(readingT, c="imageCount", w="wordCount")

#Try next
def readingT3(x, y):
    (((5.717938 + (12.03401 - 5.717938)/(1 + (x/3.579499)**4.092419)) * x) + 0.0037736111111111113 * y)

readingT3.apply(lambda x: rule(x["imageCount"], x["wordCount"]), axis=1)
Every single one of them throws an error.
Cheers in advance for any help.
def f(c, x):
    return (5.717938 + (12.03401 - 5.717938)/(1 + (c/3.579499)**4.092419)) * c + x * 0.0037736111111111113

df['reading_time'] = df.apply(lambda x: f(x.imageCount, x.wordCount), axis=1)
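For example, with a small made-up frame (the column names imageCount and wordCount come from the question; the values here are invented purely for illustration):

import pandas as pd

# hypothetical sample data just to illustrate the apply() call above
df = pd.DataFrame({'imageCount': [0, 2, 5], 'wordCount': [300, 1200, 50]})
df['reading_time'] = df.apply(lambda row: f(row.imageCount, row.wordCount), axis=1)
print(df)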

Dynamic query creation using python reduce function?

Currently, the code below dynamically creates the query shown after it.
code:
zip_cols = list(zip(['name', 'address'],
                    ['name_1', 'address_1']))
self.matches = self.features[
    (
        [
            reduce(
                lambda x, y: x + y,
                [self.features[a + "_" + c[0] + "_" + c[1]] for a in self._algos],
            )
            for c in zip_cols
        ][0]
        > (self.input_args.get('threshold', 0.7) * 4)
    )
    & (
        [
            reduce(
                lambda x, y: x + y,
                [self.features[a + "_" + c[0] + "_" + c[1]] for a in self._algos],
            )
            for c in zip_cols
        ][1]
        > (self.input_args.get('threshold', 0.7) * 4)
    )
].copy()
query:
matches = features[
    (
        (
            (features['fw_name_name_1'] / 100)
            + features['sw_name_name_1']
            + features['jw_name_name_1']
            + features['co_name_name_1']
        ) > 2.8
    )
    &
    (
        (
            (features['fw_address_address_1'] / 100)
            + features['sw_address_address_1']
            + features['jw_address_address_1']
            + features['co_address_address_1']
        ) > 2.8
    )
].copy()
But this query only works if there are exactly 2 column pairs in source_compare_names; it fails for 1, or for more than 2. How can we fix that here?
With the minimal input and context given, this should get you started. The idea is to build up the filter criteria dynamically as strings, join them, and evaluate the result.
threshold = self.input_args.get('threshold', 0.7) * 4
column_selection = [reduce(lambda x, y: x + y,
                           [self.features[a + "_" + c[0] + "_" + c[1]] for a in self._algos])
                    for c in zip_cols]
size = len(column_selection)  # number of filter criteria you need
total_filter_list = []
for i in range(size):
    # build the filter criteria as a list of strings
    total_filter_list.append(f'(column_selection[{i}] > {threshold})')
# join the list of strings with '&', building the total filter criteria as a string
total_filter_string = ' & '.join(total_filter_list)
# evaluate the filter
self.features[eval(total_filter_string)]
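As a side note, the same filtering can be done without eval by combining the boolean masks directly; a minimal sketch, assuming column_selection and threshold as defined above:

from functools import reduce
import operator

# one boolean mask per column pair, then AND them all together
masks = [cols > threshold for cols in column_selection]
self.matches = self.features[reduce(operator.and_, masks)].copy()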

scipy.optimize minimize not changing value. think it's due to late binding but unsure how to change...

I'm fairly new to this but will try and be as clear as possible.
Essentially I have 5 different lists of lists. 4 are imported from txt files and the 5th is a merger of the 4. Each inner list contains a value at index position 3. My objective is to maximize the sum of those values by picking appropriately.
I also have a couple of constraints:
The sum of the values at index position 6 can't exceed 50000.
I pick 2 items from set C, 3 from set W, 2 from set D, 1 from set G, and 1 from set U (the combined set), and I can't pick the same item twice within a set, i.e. each pick in W has to be different.
My code is below. The trouble is that the optimizer just spits out my initial list of picks. Looking at the data, though, I know for sure there are better solutions. I read that the issue may be related to late binding, but I'm not sure if that's right, and if it is, I'm not sure how to fix it either. I appreciate any help. Thanks!
Read: Scipy.optimize.minimize SLSQP with linear constraints fails
import numpy as np
from scipy.optimize import minimize

C = open('C.txt', 'r').read().splitlines()
W = open('W.txt', 'r').read().splitlines()
D = open('D.txt', 'r').read().splitlines()
G = open('G.txt', 'r').read().splitlines()

def splitdata(file):
    for index, line in enumerate(file):
        file[index] = line.split('\t')
    return file

def objective(x, sign=-1.0):
    x = list(map(int, x))
    pos = 3
    Cvalue = float(C[x[0]][pos]) + float(C[x[1]][pos])
    Wvalue = float(W[x[2]][pos]) + float(W[x[3]][pos]) + float(W[x[4]][pos])
    Dvalue = float(D[x[5]][pos]) + float(D[x[6]][pos])
    Gvalue = float(G[x[7]][pos])
    Uvalue = float(U[x[8]][pos])
    grand_value = sign * (Cvalue + Wvalue + Dvalue + Gvalue + Uvalue)
    #print(grand_value)
    return grand_value

def constraint_cost(x):
    x = list(map(int, x))
    pos = 6
    Ccost = int(C[x[0]][pos]) + int(C[x[1]][pos])
    Wcost = int(W[x[2]][pos]) + int(W[x[3]][pos]) + int(W[x[4]][pos])
    Dcost = int(D[x[5]][pos]) + int(D[x[6]][pos])
    Gcost = int(G[x[7]][pos])
    Ucost = int(U[x[8]][pos])
    grand_cost = 50000 - (Ccost + Wcost + Dcost + Gcost + Ucost)
    #print(grand_cost)
    return grand_cost

def constraint_C(x):
    if x[0] == x[1]:
        return 0
    else:
        return 1

def constraint_W(x):
    if x[2] == x[3] or x[2] == x[4] or x[3] == x[4]:
        return 0
    else:
        return 1

def constraint_D(x):
    if x[5] == init[6]:
        return 0
    else:
        return 1

con1 = {'type': 'ineq', 'fun': constraint_cost}
con2 = {'type': 'ineq', 'fun': constraint_C}
con3 = {'type': 'ineq', 'fun': constraint_W}
con4 = {'type': 'ineq', 'fun': constraint_D}
con = [con1, con2, con3, con4]

c0 = [0, 1]
w0 = [0, 1, 2]
d0 = [0, 1]
g0 = [0]
u0 = [0]
init = c0 + w0 + d0 + g0 + u0

C = splitdata(C)
W = splitdata(W)
D = splitdata(D)
G = splitdata(G)
U = C + W + D + G

sol = minimize(objective, init, method='SLSQP', constraints=con)
print(sol)
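One thing worth checking (just a guess from reading the code, not a confirmed diagnosis): objective() and the constraint functions truncate x with int(...), so the tiny perturbations SLSQP uses to estimate gradients never change the result. The solver then sees a completely flat function and stops at the initial guess. A toy sketch of that effect:

from scipy.optimize import minimize

# Toy objective that, like the one above, truncates its input to an integer
# index. SLSQP's finite-difference gradient of such a step function is zero,
# so the solver reports convergence at the starting point.
values = [3.0, 7.0, 1.0, 9.0]

def toy(x):
    return -values[int(x[0])]  # "maximize" by picking an index

print(minimize(toy, [0.0], method='SLSQP', bounds=[(0, 3)]))
# x stays at [0.] even though index 3 would give a larger value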

generate polynomial in python

I am trying to make a function which can print a polynomial of order n of x,y
i.e. poly(x,y,1) will output c[0] + c[1]*x + c[2]*y
i.e. poly(x,y,2) will output c[0] + c[1]*x + c[2]*y + c[3]*x**2 + c[4]*y**2 + c[5]*x*y
Could you give me some ideas? Maybe itertools?
You could try to start from something like
def poly(x, y, n):
    counter = 0
    for nc in range(n+1):
        for i in range(nc+1):
            print "c[", counter, "]",
            print " * ", x, "**", i,
            print " * ", y, "**", nc-i,
            print " + ",
            counter += 1
For example
poly("x", "y", 2)
will produce
c[ 0 ] * x ** 0 * y ** 0 + c[ 1 ] * x ** 0 * y ** 1 + c[ 2 ] * x ** 1 * y ** 0 + c[ 3 ] * x ** 0 * y ** 2 + c[ 4 ] * x ** 1 * y ** 1 + c[ 5 ] * x ** 2 * y ** 0 +
Build in some ifs if you want to suppress the undesired output (for example, the ** 0 factors).
Since you wanted a functional solution with itertools, here's a one-liner:
import itertools as itt
from collections import Counter
n = 3
xy = ("x", "y") # list of variables may be extended indefinitely
poly = '+'.join(itt.starmap(lambda u, t: u+"*"+t if t else u, zip(map(lambda v: "C["+str(v)+"]", itt.count()),map(lambda z: "*".join(z), map(lambda x: tuple(map(lambda y: "**".join(map(str, filter(lambda w: w!=1, y))), x)), map(dict.items, (map(Counter, itt.chain.from_iterable(itt.combinations_with_replacement(xy, i) for i in range(n+1))))))))))
That would give you
C[0]+C[1]*x+C[2]*y+C[3]*x**2+C[4]*y*x+C[5]*y**2+C[6]*x**3+C[7]*y*x**2+C[8]*y**2*x+C[9]*y**3
Note, the order of coefficients is slightly different. This will work not only for any n, but also for any number of variables (x, y, z, etc...)
Just for laughs
Slightly more generalized:
from itertools import product

def make_clause(c, vars, pows):
    c = ['c[{}]'.format(c)]
    vp = (['', '{}', '({}**{})'][min(p, 2)].format(v, p) for v, p in zip(vars, pows))
    return '*'.join(c + [s for s in vp if s])

def poly(vars, max_power):
    res = (make_clause(c, vars, pows) for c, pows in enumerate(product(*(range(max_power+1) for v in vars))))
    return ' + '.join(res)
then poly(['x', 'y'], 2) returns
"c[0] + c[1]*y + c[2]*(y**2) + c[3]*x + c[4]*x*y + c[5]*x*(y**2) + c[6]*(x**2) + c[7]*(x**2)*y + c[8]*(x**2)*(y**2)"
