How to map a function over numpy array - python

I would like to be able to apply a generic function to scalars, NumPy 1-D arrays, or NumPy 2-D arrays.
The case in point is:
import numpy as np
from math import sqrt, pi, sin, cos, atan2, radians, degrees

def stress2d_lefm_cyl(KI, r, qdeg):
    """Compute stresses in Mode I around a 2D crack, according to LEFM.
    q should be input in degrees."""
    sfactor = KI / sqrt(2*pi*r)
    q = radians(qdeg)
    q12 = q/2; q32 = 3*q/2
    sq12 = sin(q12); cq12 = cos(q12)
    sq32 = sin(q32); cq32 = cos(q32)
    af11 = cq12 * (1 - sq12*sq32); af22 = cq12 * (1 + sq12*sq32)
    af12 = cq12 * sq12 * cq32
    return sfactor * np.array([af11, af22, af12])

def stress2d_lefm_rect(KI, x, y):
    """Compute stresses in Mode I around a 2D crack, according to LEFM."""
    r = sqrt(x**2 + y**2)   # <-- Error line
    q = atan2(y, x)
    return stress2d_lefm_cyl(KI, r, degrees(q))
delta = 0.5
x = np.arange(-10.0, 10.01, delta)
y = np.arange(0.0, 10.01, delta)
X, Y = np.meshgrid(x, y)
KI = 1
# I want to pass a scalar KI, and either scalar, 1D, or 2D arrays for X,Y (of the same shape, of course)
Z = stress2d_lefm_rect(KI, X, Y)
TypeError: only size-1 arrays can be converted to Python scalars
(I mean to use this for a contour plot).
If I now change to
def stress2d_lefm_rect(KI, x, y):
    """Compute stresses in Mode I around a 2D crack, according to LEFM."""
    r = lambda x, y: x**2 + y**2   # <-- Now this works
    q = lambda x, y: atan2(y, x)   # <-- Error line
    return stress2d_lefm_cyl(KI, r(x, y), degrees(q(x, y)))
Z = stress2d_lefm_rect(KI, X, Y)
TypeError: only size-1 arrays can be converted to Python scalars
which boils down to
x = np.array([1.0, 2, 3, 4, 5])
h = lambda x, y: atan2(y, x)   # <-- Error
print(h(0, 1))                 # <-- Works
print(h(x, x))                 # <-- Error
1.5707963267948966
TypeError: only size-1 arrays can be converted to Python scalars
A "similar" question was posted, Most efficient way to map function over numpy array
The differences are:
1. I have two (or possibly more) arguments (x, y), which should have the same shape.
2. I am also combining with a scalar argument (KI).
3. atan2 seems to be less "tolerant" than **2; I mean this to work with a generic function.
4. I am chaining two functions.
Can this be worked out?
Perhaps point 2 can be overcome by multiplying the result somewhere else.

You should use NumPy's own functions, which apply your operation to every element of an array.
Ex:
import numpy as np
np.sqrt(np.square(x) + np.square(y))
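Applied to the functions above, that means swapping the math-module calls for their NumPy equivalents, which broadcast over scalars, 1-D, and 2-D arrays alike. A minimal sketch (assuming KI stays a scalar; np.hypot and np.arctan2 stand in for sqrt/atan2):
import numpy as np

def stress2d_lefm_cyl(KI, r, qdeg):
    """Mode I LEFM stresses around a 2D crack; q in degrees (vectorized)."""
    sfactor = KI / np.sqrt(2 * np.pi * r)
    q = np.radians(qdeg)
    q12, q32 = q / 2, 3 * q / 2
    sq12, cq12 = np.sin(q12), np.cos(q12)
    sq32, cq32 = np.sin(q32), np.cos(q32)
    af11 = cq12 * (1 - sq12 * sq32)
    af22 = cq12 * (1 + sq12 * sq32)
    af12 = cq12 * sq12 * cq32
    # np.stack preserves the input shape and adds a leading axis of length 3
    return sfactor * np.stack([af11, af22, af12])

def stress2d_lefm_rect(KI, x, y):
    r = np.hypot(x, y)        # element-wise sqrt(x**2 + y**2)
    q = np.arctan2(y, x)      # element-wise atan2, no size-1 conversion error
    return stress2d_lefm_cyl(KI, r, np.degrees(q))

X, Y = np.meshgrid(np.arange(-10.0, 10.01, 0.5), np.arange(0.0, 10.01, 0.5))
Z = stress2d_lefm_rect(1, X, Y)   # shape (3,) + X.shape: s11, s22, s12
The same code then accepts scalars, 1-D arrays, or meshgrid output without any explicit mapping.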

Related

Is there a more elegant way to apply complicated functions to meshgrid?

My goal is to plot the function $f(x,y) = 3^{\min\{x,2\}} t_0 y$, where $t_0$ is the smallest solution of $2t^x - 3t - 1 = 0$ (it's just a MWE; I didn't check the existence of the root(s)).
I did it, but in a somewhat ugly way.
My approach is to apply the function element-wise. Here is my code:
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
def equ(t, *data):  # 2t^x - 3t - 1
    x = data
    return 2*(t ** x) - 3*t - 1

def t0(t):  # call fsolve to solve the equation
    return fsolve(equ, 1.5, args=(t))[0]  # 1.5 is chosen arbitrarily
xx = np.linspace(1,3,50) # range of x
yy = np.linspace(1,3,50) # range of y
ff = [] # the value of f at each (x, y)
# compute the value of f element-wise
for x in xx:
    for y in yy:
        f = (3 ** np.minimum(x, 2)) * t0(x) * y
        ff.append(f)
fv = np.split(np.array(ff), np.size(xx)) # split ff to make it conform with meshgrid
xv, yv = np.meshgrid(xx, yy)
plt.pcolormesh(xv, yv, fv)
plt.colorbar()
plt.show()
But I think the elegancy of numpy is to avoid writing loops manually and to manipulate lists as a whole. So, is there a more elegant way to do that?
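One possible direction (a sketch, not from the original thread): wrap the scalar-only root finder with np.vectorize so the whole expression evaluates directly on the meshgrid, with no explicit loops.
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import fsolve

def equ(t, x):                     # 2t^x - 3t - 1
    return 2 * (t ** x) - 3 * t - 1

# element-wise root finder; np.vectorize makes it accept arrays
t0 = np.vectorize(lambda x: fsolve(equ, 1.5, args=(x,))[0])

xv, yv = np.meshgrid(np.linspace(1, 3, 50), np.linspace(1, 3, 50))
fv = (3 ** np.minimum(xv, 2)) * t0(xv) * yv   # whole grid at once

plt.pcolormesh(xv, yv, fv)
plt.colorbar()
plt.show()
Note that np.vectorize is still a Python-level loop under the hood, so this mainly buys readability rather than speed.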

How to calculate the divergence of a vector in sympy?

I want to calculate the divergence of a given vector with sympy. Is there any function in Python for this? I looked for something in the functions of einsteinpy, but I still haven't found any that help.
Basically I want to calculate $\nabla_\mu (n v^\mu) = 0$ from a given vector $v$; $n$ being a constant number.
$\nabla_\mu (n v^\mu) = 0$ represents a divergence, where $\mu$ takes the derivative with respect to x, y or z of the corresponding vector component. For example:
$\nabla_\mu (n v^\mu) = \partial_x (u^x) + \partial_y (u^y) + \partial_z (u^z)$
where $u$ can be something like $(2x, 4y, 6z)$.
I appreciate any help.
As shown by @mikuszefski, you can use the module sympy.vector, which provides an implementation of the divergence in a given coordinate system.
Another way to do what you want is to use the function derive_by_array to get a tensor and then do an Einstein contraction.
import sympy as sp
x, y, z = sp.symbols("x y z") # dim = 3
# Now the functions that you want:
u, v, w = 2*x, 4*y, 6*z
# In a more general way, you can do:
u = sp.Function("u")(x, y, z)
v = sp.Function("v")(x, y, z)
w = sp.Function("w")(x, y, z)
U = sp.Array([u, v, w]) # U is a vector of dim = 3 (or sympy.Array)
X = sp.Array([x, y, z]) # X is a vector of dim = 3 (or sympy.Array)
dUdX = sp.derive_by_array(U, X) # dUdX is a tensor of dim = 3 and order = 2
# First way:
divU = sp.trace(sp.Matrix(sp.derive_by_array(U, X))) # Limited
# Second way:
divU = sp.tensorcontraction(sp.derive_by_array(U, X), (0, 1)) # More general
This solution also works fine when dim = 2, for example, but you must have len(X) == len(U).
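With the concrete components from the question, the contraction reduces to the expected constant (a quick check, reusing the x, y, z and X defined above):
U = sp.Array([2*x, 4*y, 6*z])
divU = sp.tensorcontraction(sp.derive_by_array(U, X), (0, 1))
print(divU)   # 12, i.e. d(2x)/dx + d(4y)/dy + d(6z)/dz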

Evaluating a numpy array

I have a function that returns a numpy array as follows:
import numpy as np
from scipy.misc import derivative
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + y**2

def grad(f):
    exp = sp.expand(f)
    dfdx = sp.diff(exp, x)
    dfdy = sp.diff(exp, y)
    global grad
    Df = np.array([dfdx, dfdy])
    return Df
I'm using the variable Df in another function and doing some computations with it.
As you may have guessed, the results come out still containing the symbols x and y. However, I need the results to be evaluated each time with the initial values I choose for x and y, instead of the symbols.
I was wondering if there is something like .subs() in sympy, but that works on a numpy array rather than on a sympy expression?
Sympy and numpy are two separate worlds that aren't easy to bring together.
With sympy's lambdify, sympy expressions can be turned into functions that work on numpy arguments. When arrays are used as arguments, they all need to be 1D and of the same size. The function np_grad_1 below is the standard way this works: it returns an array with two subarrays.
To get your desired functionality, a wrapper can take a 2D numpy input and convert the result back to a 2D numpy array:
import sympy as sp
import numpy as np

x, y = sp.symbols('x y')
f = x ** 2 + y ** 2

def grad(f, x, y):
    exp = sp.expand(f)
    dfdx = sp.diff(exp, x)
    dfdy = sp.diff(exp, y)
    return [dfdx, dfdy]

np_grad_1 = sp.lambdify([x, y], grad(f, x, y))
np_grad_2 = lambda points: np.array(np_grad_1(points[:, 0], points[:, 1])).T

points = np.random.uniform(-1, 1, (5, 2))
np_grad_1(points[:, 0], points[:, 1])  # returns an array with 2 subarrays
np_grad_2(points)                      # returns an Nx2 array
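Since f = x**2 + y**2 here, the gradient is (2x, 2y), which gives an easy sanity check at a hand-picked point (the point below is illustrative, not from the original post):
pt = np.array([[0.5, -0.25]])   # one (x, y) point, shape (1, 2)
print(np_grad_2(pt))            # [[ 1.   -0.5 ]], i.e. (2*0.5, 2*(-0.25))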

Function that computes projection and reconstruction error using numpy - python

I want to create a function that calculates and returns the projection of a vector x on a vector b as well as the reconstruction error.
My code is the following:
def reconstruction_error(x, b):
    '''The function calculates the projection and reconstruction error
    from projecting a vector x onto a vector b'''
    b = np.matrix(b)
    x_projection_on_b = (b.T @ b / float(b @ b.T)) @ x
    reconstruction_error = (x - x_projection_on_b) @ (x - x_projection_on_b).T
    return (x_projection_on_b, float(reconstruction_error))
However the reconstruction error is not correct. E.g.,
x = np.array([1,1,1])
b = np.array([5, 10, 10])
a, error = reconstruction_error(x, b)
a
matrix([[0.55555556, 1.11111111, 1.11111111]])
error
0.2222222222222222
Not sure about terminology, but if "reconstruction error" is length of "rejection vector" (original vector minus its projection), then you would have:
import numpy as np
from numpy.linalg import norm

a = np.array([1, 1, 1])
b = np.array([5, 10, 10])

def projection(x, on):
    return np.dot(x, on) / np.dot(on, on) * on

def rejection(x, on):
    return x - projection(x, on)

def reconstruction_error(x, on):
    return norm(rejection(x, on))

>>> reconstruction_error(a, b)
0.4714045207910317
You are correct in your approach:
reconstruction_error = (x - x_projection_on_b) @ (x - x_projection_on_b).T
However, what you have calculated above is the squared norm.
Apply np.sqrt(reconstruction_error) to get the right answer.
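A quick numeric check with the values from the question (the squared error computed above was 2/9):
import numpy as np
print(np.sqrt(2/9))   # ~0.4714045207910317, matching the norm-based answer above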

Create a matrix using values from a tuple with numpy

I'm trying to create a matrix with values based on x,y values I have stored in a tuple. I use a loop to iterate over the tuple and perform a simple calculation on the data:
import numpy as np

# Trying to fit a quadratic equation to the measured dots
N = 6
num_of_params = 3
# x values
x = (1, 4, 3, 5, 2, 6)
# y values
y = (3.96, 24.96, 14.15, 39.8, 7.07, 59.4)

# X is an N * 3 matrix with the x values to the powers {0, 1, 2}
X = np.zeros((N, 3))
Y = np.zeros((N, 1))
print X, "\n\n", Y

for i in range(len(x)):
    for p in range(num_of_params):
        X[i][p] = x[i]**(num_of_params - p - 1)
    Y[i] = y[i]

print "\n\n"
print X, "\n\n", Y
Can this be achieved in an easier way? I'm looking for some way to initialize the matrix, something like X = np.zeros((N,3), read_values_from = x).
Is that possible? Is there another simple way?
Python 2.7
Extend array version of x to 2D with a singleton dim (dim with length=1) along the second one using np.newaxis/None. This lets us leverage NumPy broadcasting to get the 2D output in a vectorized manner. Similar philosophy for y.
Hence, the implementation would be -
X = np.asarray(x)[:,None]**(num_of_params - np.arange(num_of_params) - 1)
Y = np.asarray(y)[:,None]
Or use the built-in outer method for np.power to get X that takes care of the array conversion under the hoods -
X = np.power.outer(x, num_of_params - np.arange(num_of_params) - 1)
Alternatively, for Y, use np.expand_dims -
Y = np.expand_dims(y,1)
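As a side note (not part of the answer above), NumPy also has a builtin that constructs exactly this decreasing-powers matrix, which may read more clearly:
import numpy as np

x = (1, 4, 3, 5, 2, 6)
y = (3.96, 24.96, 14.15, 39.8, 7.07, 59.4)
X = np.vander(x, 3)         # columns are x**2, x**1, x**0
Y = np.asarray(y)[:, None]  # same N x 1 column vector as before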
