I am looking for a way to visualise the proximity of points to a 4-dimensional sphere. For a circle I can simply use a scatter plot and observe the distribution of points near the unit circle as shown below. For a 3D sphere I can do something similar. However, how would I go about visualising this for a 4-dimensional sphere?
Is there a way to reduce the dimensionality to project the entire space into 3D? Obviously I can just take the norm of the points and see how close it is to 1, but I would like to have a visual aid of some sort.
Here is one way to convert 4-dimensional coordinates into 3-dimensional coordinates that will give you a visualization of the distances of the points from the 4D sphere. Since you show no code or equations of your own I'll just give an overview; if you share more details of your own work, I can expand on mine.
Take a point in 4 dimensions, let's say (x, y, z, w). Then convert those Cartesian coordinates to 4D spherical coordinates
(r, t1, t2, t3), where r is the distance of the point to the origin and t1, t2, t3 are reference angles. Formulas for the conversion are in Wikipedia's entry for n-sphere, though in my preferred transformation I would reverse the order of the Cartesian coordinates. In other words, we get the relations
w = r * cos(t1)
z = r * sin(t1) * cos(t2)
y = r * sin(t1) * sin(t2) * cos(t3)
x = r * sin(t1) * sin(t2) * sin(t3)
We now map that point to a point in 3D space by changing angle t1 to 90° (or pi/2 radians). This has the effect of "rotating" the point away from the w axis down into 3D space in regular spherical coordinates. The distances from the origin and from any 4-sphere centered at the origin were not changed. Now convert to 3D Cartesian coordinates with
z = r * cos(t2)
y = r * sin(t2) * cos(t3)
x = r * sin(t2) * sin(t3)
Now graph those as usual. Since distances to the origin and to the 4-sphere were not changed, this should be a useful visualization.
Looking at those equations, we realize that the values of x, y, and z were all divided by sin(t1). That means you could optimize the calculations by finding only sin(t1) with the formula
sin(t1) = sqrt((x*x + y*y + z*z) / (x*x + y*y + z*z + w*w))
There is no need to find r, t2, or t3 or even t1 itself. You need to be careful for the special case sin(t1) == 0.0, which happens only when x == y == z == 0. I would then map the 4D point (0, 0, 0, w) to the 3D point (w, 0, 0) and the visualization should still work well.
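Here is a minimal NumPy sketch of this mapping, including the special case; the function name and array layout are just illustrative choices:
import numpy as np
def project_4d_to_3d(p):
    """Map 4D points (x, y, z, w) to 3D points with the same distance
    from the origin (and hence from any origin-centered 4-sphere).
    `p` has shape (n, 4)."""
    p = np.asarray(p, dtype=float)
    x, y, z, w = p.T
    xyz_sq = x*x + y*y + z*z
    with np.errstate(invalid='ignore'):
        sin_t1 = np.sqrt(xyz_sq / (xyz_sq + w*w))
    out = np.zeros((len(p), 3))
    regular = sin_t1 > 0
    out[regular] = p[regular, :3] / sin_t1[regular, None]
    # special case x == y == z == 0: map (0, 0, 0, w) to (w, 0, 0)
    out[~regular, 0] = w[~regular]
    return out
# quick check: distances to the origin are preserved
pts4 = np.random.randn(1000, 4)
pts3 = project_4d_to_3d(pts4)
assert np.allclose(np.linalg.norm(pts4, axis=1), np.linalg.norm(pts3, axis=1))
You can then feed pts3 to an ordinary 3D scatter plot and judge how close each original point is to the unit 4-sphere.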
There are other, similar transformations you could use that may be more useful, such as changing angle t3 to zero rather than changing t1. This slightly reduces the calculations but you would need to permute the coordinates and the visualization uses only half the 3-sphere, I believe.
Of course, one way to graph that 3D point on a flat computer display is to now set t2 to 90° to get
y = r * cos(t3)
x = r * sin(t3)
and you will get a graph very much like the one you show in your question.
I'm currently working with matplotlib in order to create a model of a specific vector field using matplotlib.pyplot.streamplot.
After creating and coloring the lines in the streamplot, I'm trying to color the whitespace around the divergence points in the plot: I want a color gradient that depends on the distance of each white pixel from the divergence point.
The streamplot in question is built according to:
xs=np.linspace(-10,10,2000)
ys=np.linspace(-10,10,2000)
Therefore, if the divergence is located (for demonstration purposes) at (0,0) it will be located exactly in the middle of the plot.
Now, the only method I can think of for coloring according to distance from it is kind of clunky, since it requires me to:
add a matplotlib.patches.Rectangle on top of the divergence point in a specific color that is not in the image yet.
convert the plot, with the streamlines and rectangles (one rectangle for each divergence point in the streamplot), to an np.array
find the new coordinates of the colors of the rectangles (they represent the location of the divergence point in the new np.array created from the streamplot).
calculate the pixel colors I want based on those colored pixels.
This whole method feels way too clunky and over-complicated, and obviously slower than it could be. I'm sure there's a way to convert the coordinates from the matplotlib plot to the ones in the np.array somehow, or perhaps to handle the coloring in matplotlib directly, which would be even easier.
Sadly, I couldn't find a solution that answers this specific need yet.
Thanks in advance for any help given!
EDIT
I'm adding an example (not my code, but a representation of what I wish to achieve).
I want to clarify that the solution of adding a patches.circle on top of a circle patch is not my go-to, since I'm looking to keep my painting options more dynamic.
If you can define the color intensity you want as a 2-dimensional function, you can plot that function with plt.imshow() and then put the streamplot on top of it. You just need to transform the coordinates linearly to match the image coordinates.
Here is an example:
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [10, 10]
# plot 2d function
grid = np.arange(-1, 1, 0.001)
x, y = np.meshgrid(grid, grid)
z = 1 - (x ** 2 + y ** 2) ** 0.5
plt.imshow(z, cmap='Blues')
# streamplot example from matplotlib docs (modified)
w = 3
Y, X = np.mgrid[-w:w:100j, -w:w:100j]
U = Y ** 2
V = X ** 2
# transform according to previous plot
n = len(grid) / 2
scale = n / w
X = (X + w) * scale
Y = (Y + w) * scale
U = (U + w) * scale
V = (V + w) * scale
plt.xticks(ticks=[0, n, 2*n],
           labels=[-w, 0, w])
plt.yticks(ticks=[0, n, 2*n],
           labels=[-w, 0, w])
plt.streamplot(X, Y, U, V);
I'm trying to use ray casting to gather all the surfaces in a room and determine its volume.
I have a centroid location where the rays will be coming from, but I'm drawing a blank on how to get the rays in all 360 degrees (in 3D space).
I'm not getting any points on the floors or ceilings, it's like it's doing a 60 degree spread rotated about the Z axis.
I think I have the rest of it working, but this is stumping me.
for y in range(360):
    for x in range(360):
        vector = DB.XYZ(math.sin(math.radians(x)), math.cos(math.radians(x)), math.cos(math.radians(y))).Normalize()
        prox = ri.FindNearest(origin, direction).Proximity
        point = origin + (direction * prox)
Look at it this way: x and y of vector are created from angle x (-> a circle in the plane) and then you add a z component which lies between -1 and 1 (which cos does). So it's obvious that you end up with a cylindrical distribution.
What you might want are spherical coordinates. Modify your code like this:
for y in range(-90, 91):
    for x in range(360):
        vector = DB.XYZ(math.sin(math.radians(x)) * math.cos(math.radians(y)),
                        math.cos(math.radians(x)) * math.cos(math.radians(y)),
                        math.sin(math.radians(y)))  # Normalize unnecessary, since vector² = sin²*cos² + cos²*cos² + sin² = 1
        prox = ri.FindNearest(origin, direction).Proximity
        point = origin + (direction * prox)
But be aware that the angular distribution of rays is not uniform when using spherical coordinates this way: the rays are much denser near the poles than at the equator. You can mitigate this, e.g., by scaling the number of x samples down depending on y. The surface element scales with cos(y), so scaling the azimuthal sample count by cos(y) gives a roughly even spread.
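For illustration, here is a small sketch of that correction, using plain tuples in place of DB.XYZ (adapt it to whatever vector type your API expects); the number of azimuth steps shrinks with cos(latitude), so the rays come out roughly evenly spread:
import math
def ray_directions(n_lat=180, n_lon_equator=360):
    """Unit direction vectors covering the sphere roughly evenly:
    fewer azimuth steps near the poles, scaled by cos(latitude)."""
    dirs = []
    for iy in range(n_lat + 1):
        lat = math.radians(-90 + 180 * iy / n_lat)
        n_lon = max(1, round(n_lon_equator * math.cos(lat)))
        for ix in range(n_lon):
            lon = 2 * math.pi * ix / n_lon
            dirs.append((math.cos(lon) * math.cos(lat),
                         math.sin(lon) * math.cos(lat),
                         math.sin(lat)))
    return dirs
directions = ray_directions()  # each entry is already normalized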
I am trying to sample around 1000 points from a 3-D ellipsoid, uniformly. Is there some way to code it such that we can get points starting from the equation of the ellipsoid?
I want points on the surface of the ellipsoid.
Theory
Using this excellent answer to the MSE question How to generate points uniformly distributed on the surface of an ellipsoid? we can
generate a point uniformly on the sphere, apply the mapping f :
(x,y,z) -> (x'=ax,y'=by,z'=cz) and then correct the distortion
created by the map by discarding the point randomly with some
probability p(x,y,z).
Assuming that the 3 axes of the ellipsoid are named such that
0 < a < b < c
We discard a generated point with
p(x,y,z) = 1 - mu(x,y,z)/mu_max
probability, i.e. we keep it with mu(x,y,z)/mu_max probability, where
mu(x,y,z) = ((acy)^2 + (abz)^2 + (bcx)^2)^0.5
and
mu_max = bc
Implementation
import numpy as np
np.random.seed(42)
# Function to generate a random point on a uniform sphere
# (relying on https://stackoverflow.com/a/33977530/8565438)
def randompoint(ndim=3):
    vec = np.random.randn(ndim, 1)
    vec /= np.linalg.norm(vec, axis=0)
    return vec
# Give the length of each axis (example values):
a, b, c = 1, 2, 4
# Function to scale up generated points using the function `f` mentioned above:
f = lambda x,y,z : np.multiply(np.array([a,b,c]),np.array([x,y,z]))
# Keep the point with probability `mu(x,y,z)/mu_max`, ie
def keep(x, y, z, a=a, b=b, c=c):
    mu_xyz = ((a * c * y) ** 2 + (a * b * z) ** 2 + (b * c * x) ** 2) ** 0.5
    return mu_xyz / (b * c) > np.random.uniform(low=0.0, high=1.0)
# Generate points until we have, let's say, 1000 points:
n = 1000
points = []
while len(points) < n:
    [x], [y], [z] = randompoint()
    if keep(x, y, z):
        points.append(f(x, y, z))
Checks
Check if all points generated satisfy the ellipsoid condition (ie that x^2/a^2 + y^2/b^2 + z^2/c^2 = 1):
for p in points:
    pscaled = np.multiply(p, np.array([1/a, 1/b, 1/c]))
    assert np.allclose(np.sum(np.dot(pscaled, pscaled)), 1)
Runs without raising any errors. Visualize the points:
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
points = np.array(points)
ax.scatter(points[:, 0], points[:, 1], points[:, 2])
# set aspect ratio for the axes using https://stackoverflow.com/a/64453375/8565438
ax.set_box_aspect((np.ptp(points[:, 0]), np.ptp(points[:, 1]), np.ptp(points[:, 2])))
plt.show()
These points seem evenly distributed.
Problem with currently accepted answer
Generating a point on a sphere and then just reprojecting it onto the ellipsoid without any further correction will result in a distorted distribution. This is essentially the same as setting this post's p(x,y,z) to 0. Imagine an ellipsoid where one axis is orders of magnitude bigger than another; this makes it easy to see that naive reprojection is not going to work.
Consider using Monte-Carlo simulation: generate a random 3D point; check if the point is inside the ellipsoid; if it is, keep it. Repeat until you get 1,000 points.
P.S. Since the OP changed their question, this answer is no longer valid.
J.F. Williamson, "Random selection of points distributed on curved surfaces", Physics in Medicine & Biology 32(10), 1987, describes a general method of choosing a uniformly random point on a parametric surface. It is an acceptance/rejection method that accepts or rejects each candidate point depending on its stretch factor (norm-of-gradient). To use this method for a parametric surface, several things have to be known about the surface, namely—
x(u, v), y(u, v) and z(u, v), which are functions that generate 3-dimensional coordinates from two dimensional coordinates u and v,
The ranges of u and v,
g(point), the norm of the gradient ("stretch factor") at each point on the surface, and
gmax, the maximum value of g for the entire surface.
The algorithm is then:
Generate a point on the surface, xyz.
If g(xyz) >= RNDU01()*gmax, where RNDU01() is a uniform random variate in [0, 1), accept the point. Otherwise, repeat this process.
Chen and Glotzer (2007) apply the method to the surface of a prolate spheroid (one form of ellipsoid) in "Simulation studies of a phenomenological model for elongated virus capsid formation", Physical Review E 75(5), 051504 (preprint).
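A minimal sketch of that accept/reject scheme for a triaxial ellipsoid could look like this, using the parametrization (a*sin(t)*cos(p), b*sin(t)*sin(p), c*cos(t)) and the simple bound gmax <= a*b, which holds when a >= b >= c; the function name is mine:
import numpy as np
rng = np.random.default_rng(0)
def williamson_surface_points(a, b, c, n):
    """Accept/reject on the stretch factor g = |dr/dt x dr/dp| of the
    parametrization above; a*b is an easy upper bound on g when a >= b >= c."""
    pts = []
    while len(pts) < n:
        t = rng.uniform(0, np.pi)        # u, uniform over its range
        p = rng.uniform(0, 2 * np.pi)    # v, uniform over its range
        st, ct = np.sin(t), np.cos(t)
        sp, cp = np.sin(p), np.cos(p)
        g = st * np.sqrt((b * c * st * cp) ** 2 +
                         (a * c * st * sp) ** 2 +
                         (a * b * ct) ** 2)
        if g >= rng.random() * a * b:    # keep with probability g / gmax
            pts.append((a * st * cp, b * st * sp, c * ct))
    return np.array(pts)
points = williamson_surface_points(4, 2, 1, 1000)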
Here is a generic function to pick a random point on the surface of a sphere, spheroid or any triaxial ellipsoid with parameters a, b and c. Note that generating the angles directly will not provide a uniform distribution and will cause an excessive population of points along the z direction. Instead, phi is obtained as the inverse of a randomly generated cos(phi).
import numpy as np
def random_point_ellipsoid(a, b, c):
    u = np.random.rand()
    v = np.random.rand()
    theta = u * 2.0 * np.pi
    phi = np.arccos(2.0 * v - 1.0)
    sinTheta = np.sin(theta)
    cosTheta = np.cos(theta)
    sinPhi = np.sin(phi)
    cosPhi = np.cos(phi)
    rx = a * sinPhi * cosTheta
    ry = b * sinPhi * sinTheta
    rz = c * cosPhi
    return rx, ry, rz
This function is adapted from this post: https://karthikkaranth.me/blog/generating-random-points-in-a-sphere/
One way of doing this which generalises to any shape or surface is to convert the surface to a voxel representation at arbitrarily high resolution (the higher the resolution the better, but also the slower). Then you can easily select the voxels randomly however you want, and then you can select a point on the surface within the voxel using the parametric equation. The voxel selection should be completely unbiased, and the selection of the point within the voxel will suffer the same biases that come from using the parametric equation, but if there are enough voxels then these biases will be very small.
You need high-quality cube-intersection code, but with something like an ellipsoid that can be optimised quite easily. I'd suggest stepping through the bounding box subdivided into voxels. A quick distance check will eliminate most cubes, and you can do a proper intersection check for the ones where an intersection is possible. For the point within the cube I'd be tempted to do something simple like picking a random XYZ offset from the centre and then casting a ray from the centre of the ellipsoid; the selected point is where the ray intersects the surface. As I said above, it will be biased, but with small voxels the bias will probably be small enough.
There are libraries that do convex shape intersection very efficiently, and cube/ellipsoid will be one of the options. They will be highly optimised, but I think the distance culling would probably be worth doing by hand regardless. And you will need a library that differentiates between a surface intersection and one object being totally inside the other.
And if you know your ellipsoid is axis-aligned, then you can do the voxel/edge intersection very easily as a stack of 2D square-intersects-ellipse problems, with the set of squares to be tested defined as those adjacent to the ones in the layer above. That might be quicker.
One of the things that makes this approach more manageable is that you do not need to write all the code for edge cases (it is a lot of work to get around issues with floating-point inaccuracies that can lead to missing or doubled voxels at the intersection). That's because these will be very rare, so they won't affect your sampling.
It might even be quicker to simply find all the voxels inside the ellipsoid and then throw away all the voxels with 6 neighbours... Lots of options. It all depends on how important performance is. This will be much slower than the other suggestions, but if you want ~1000 points then ~100,000 voxels feels about the minimum for the surface, so you probably need ~1,000,000 voxels in your bounding box. However, even testing 1,000,000 intersections is pretty fast on modern computers.
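Here is a rough sketch of the voxel idea for an axis-aligned ellipsoid; the function name and resolution are arbitrary choices. It keeps voxels whose corner values of (x/a)^2 + (y/b)^2 + (z/c)^2 - 1 straddle zero, picks surface voxels at random, and projects a random point inside each chosen voxel onto the surface along a ray from the centre (so the small per-voxel bias discussed above remains):
import numpy as np
rng = np.random.default_rng(0)
def sample_ellipsoid_via_voxels(a, b, c, n, res=100):
    """Approximate surface sampling via a res**3 voxel grid over the bounding box."""
    edges = [np.linspace(-s, s, res + 1) for s in (a, b, c)]
    X, Y, Z = np.meshgrid(*edges, indexing='ij')
    F = (X / a) ** 2 + (Y / b) ** 2 + (Z / c) ** 2 - 1.0
    # min/max of F over the 8 corners of every voxel
    corners = np.stack([F[i:res + i, j:res + j, k:res + k]
                        for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    surface = (corners.min(axis=0) <= 0) & (corners.max(axis=0) >= 0)
    idx = np.argwhere(surface)                      # indices of surface voxels
    picks = idx[rng.integers(len(idx), size=n)]     # random surface voxels
    lows = np.stack([edges[d][picks[:, d]] for d in range(3)], axis=1)
    sizes = np.array([2 * a, 2 * b, 2 * c]) / res
    pts = lows + rng.random((n, 3)) * sizes         # random point inside each voxel
    # push onto the surface along the ray from the centre
    scale = np.sqrt((pts[:, 0] / a) ** 2 + (pts[:, 1] / b) ** 2 + (pts[:, 2] / c) ** 2)
    return pts / scale[:, None]
points = sample_ellipsoid_via_voxels(4, 2, 1, 1000)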
Depending on what "uniformly" refers to, different methods are applicable. In any case, we can use the parametric equations in spherical coordinates (from Wikipedia):
x = a*s*sin(theta)*cos(phi)
y = b*s*sin(theta)*sin(phi)
z = c*s*cos(theta)
where s = 1 refers to the ellipsoid given by the semi-axes a > b > c. From these equations we can derive the relevant volume/area element and generate points such that their probability of being generated is proportional to that volume/area element. This will provide constant volume/area density across the surface of the ellipsoid.
1. Constant volume density
This method generates points on the surface of an ellipsoid such that their volume density across the surface of the ellipsoid is constant. A consequence of this is that the one-dimensional projections (i.e. the x, y, z coordinates) are uniformly distributed; for details see the plot below.
The volume element for a triaxial ellipsoid in these coordinates is (see here):
dV = a*b*c*s^2*sin(theta) ds dtheta dphi
and is thus proportional to sin(theta) (for 0 <= theta <= pi). We can use this as the basis for a probability distribution that indicates "how many" points should be generated for a given value of theta: where the density is low/high, the probability for generating a corresponding value of theta should be low/high, too.
Hence, we can use the function f(theta) = sin(theta)/2 as our probability distribution on the interval [0, pi]. The corresponding cumulative distribution function is F(theta) = (1 - cos(theta))/2. Now we can use Inverse transform sampling to generate values of theta according to f(theta) from a uniform random distribution. The values of phi can be obtained directly from a uniform distribution on [0, 2*pi].
Example code:
import matplotlib.pyplot as plt
import numpy as np
from numpy import sin, cos, pi
rng = np.random.default_rng(seed=0)
a, b, c = 10, 3, 1
N = 5000
phi = rng.uniform(0, 2*pi, size=N)
theta = np.arccos(1 - 2*rng.random(size=N))
x = a*sin(theta)*cos(phi)
y = b*sin(theta)*sin(phi)
z = c*cos(theta)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x, y, z, s=2)
plt.show()
which produces the following plot:
The following plot shows the one-dimensional projections (i.e. density plots of x, y, z):
import seaborn as sns
sns.kdeplot(data=dict(x=x, y=y, z=z))
plt.show()
2. Constant area density
This method generates points on the surface of an ellipsoid such that their area density is constant across the surface of the ellipsoid.
Again, we start by calculating the corresponding area element. For simplicity we can use SymPy:
from sympy import cos, sin, symbols, Matrix
a, b, c, t, p = symbols('a b c t p')
x = a*sin(t)*cos(p)
y = b*sin(t)*sin(p)
z = c*cos(t)
J = Matrix([
[x.diff(t), x.diff(p)],
[y.diff(t), y.diff(p)],
[z.diff(t), z.diff(p)],
])
print((J.T @ J).det().simplify())
This yields
-a**2*b**2*sin(t)**4 + a**2*b**2*sin(t)**2 + a**2*c**2*sin(p)**2*sin(t)**4 - b**2*c**2*sin(p)**2*sin(t)**4 + b**2*c**2*sin(t)**4
and further simplifies to (dividing by (a*b)**2 and taking the sqrt):
sin(t)*np.sqrt(1 + ((c/b)**2*sin(p)**2 + (c/a)**2*cos(p)**2 - 1)*sin(t)**2)
Since for this case the area element is more complex, we can use rejection sampling:
import matplotlib.pyplot as plt
import numpy as np
from numpy import cos, sin
def f_redo(t, p):
    return (
        sin(t)*np.sqrt(1 + ((c/b)**2*sin(p)**2 + (c/a)**2*cos(p)**2 - 1)*sin(t)**2)
        < rng.random(size=t.size)
    )
rng = np.random.default_rng(seed=0)
N = 5000
a, b, c = 10, 3, 1
t = rng.uniform(0, np.pi, size=N)
p = rng.uniform(0, 2*np.pi, size=N)
redo = f_redo(t, p)
while redo.any():
    t[redo] = rng.uniform(0, np.pi, size=redo.sum())
    p[redo] = rng.uniform(0, 2*np.pi, size=redo.sum())
    redo[redo] = f_redo(t[redo], p[redo])
x = a*np.sin(t)*np.cos(p)
y = b*np.sin(t)*np.sin(p)
z = c*np.cos(t)
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(x, y, z, s=2)
plt.show()
which yields the following distribution:
The following plot shows the corresponding one-dimensional projections (x, y, z):
First of all, I would appreciate it if someone could give me the proper term for an "annulus with a shifted hole"; see exactly what kind of shape I mean in the picture below.
Back to the main question: I want to pick a random point in the orange area; a uniform distribution is not required. For a usual annulus I would have picked a random radius in the (r:R) range and a random angle, then transformed those to x, y and been done. But for this unusual shape... is there even a "simple" formula for that, or should I approach it with some kind of polygonal approximation of the shape?
I'm interested in a general approach but will appreciate an example in python, javascript or any coding language of your choice.
Here's a simple method that gives a uniform distribution with no resampling.
For simplicity assume that the center of the outer boundary circle (radius r_outer) is at (0, 0) and that the center of the inner circular boundary (radius r_inner) lies at (x_inner, y_inner).
Write D for the outer disk, H1 for the subset of the plane given by the off-center inner hole, and H2 for the central disk of radius r_inner, centered at (0, 0).
Now suppose that we ignore the fact that the inner circle is not central, and instead of sampling from D-H1 we sample from D-H2 (which is easy to do uniformly). Then we've made two mistakes:
there's a region A = H1 - H2 that we might sample from, even though those samples shouldn't be in the result.
there's a region B = H2 - H1 that we never sample from, even though we should
But here's the thing: the regions A and B are congruent: given any point (x, y) in the plane, (x, y) is in H2 if and only if (x_inner - x, y_inner - y) is in H1, and it follows that (x, y) is in A if and only if (x_inner - x, y_inner - y) is in B! The map (x, y) -> (x_inner - x, y_inner - y) represents a rotation by 180 degrees around the point (0.5*x_inner, 0.5*y_inner). So there's a simple trick: generate from D - H2, and if we end up with something in H1 - H2, rotate to get the corresponding point of H2 - H1 instead.
Here's the code. Note the use of the square root of a uniform distribution to choose the radius: this is a standard trick. See this article, for example.
import math
import random
def sample(r_outer, r_inner, x_inner, y_inner):
    """
    Sample uniformly from (x, y) satisfying:
    x**2 + y**2 <= r_outer**2
    (x-x_inner)**2 + (y-y_inner)**2 > r_inner**2
    Assumes that the inner circle lies inside the outer circle;
    i.e., that hypot(x_inner, y_inner) <= r_outer - r_inner.
    """
    # Sample from a normal annulus with radii r_inner and r_outer.
    rad = math.sqrt(random.uniform(r_inner**2, r_outer**2))
    angle = random.uniform(-math.pi, math.pi)
    x, y = rad*math.cos(angle), rad*math.sin(angle)
    # If we're inside the forbidden hole, reflect.
    if math.hypot(x - x_inner, y - y_inner) < r_inner:
        x, y = x_inner - x, y_inner - y
    return x, y
And an example plot, generated by the following:
import matplotlib.pyplot as plt
samples = [sample(5, 2, 1.0, 2.0) for _ in range(10000)]
xs, ys = zip(*samples)
plt.scatter(xs, ys, s=0.1)
plt.axis("equal")
plt.show()
Do you really need exact sampling? Because with acceptance/rejection it should work just fine. I assume the big orange circle is located at (0, 0).
import math
import random
def sample_2_circles(xr, yr, r, R):
    """
    R - big radius
    r, xr, yr - small radius and its position
    """
    x = xr
    y = yr
    cnd = True
    while cnd:
        # sample uniformly in whole orange circle
        phi = 2.0 * math.pi * random.random()
        rad = R * math.sqrt(random.random())
        x = rad * math.cos(phi)
        y = rad * math.sin(phi)
        # check condition - if True we continue in the loop with sampling
        cnd = ((x - xr)**2 + (y - yr)**2 < r*r)
    return (x, y)
Since you have shown no equation, algorithm, or code of your own, but just an outline of an algorithm for center-aligned circles, I'll also just give the outline of an algorithm here for the more general case.
The smaller circle is the image of the larger circle under a similarity transformation. I.e. there is a fixed point in the larger circle and a ratio (which is R/r, greater than one) such that you can take any point on the smaller circle, examine the vector from the fixed point to that point, and multiply that vector by the ratio, then the end of that vector when it starts from the fixed point is a point on the larger circle. This transformation is one-to-one.
So you can choose a random point on the smaller circle (choose the angle at random between 0 and two-pi) and choose a ratio at random between 1 and the proportionality ratio R/r between the circles. Then use the similarity transformation with the same fixed point but with the random ratio to get the image point of the just-chosen point on the smaller circle. This is a random point in your desired region.
This method is fairly simple. In fact, the hardest mathematical part is finding the fixed point of the similarity transformation. But this is pretty easy, given the centers and radii of the two circles. Hint: the transformation takes the center of the smaller circle to the center of the larger circle.
Ask if you need more detail. My algorithm does not yield a uniform distribution: the points will be more tightly packed where the circles are closest together and less tightly packed where the circles are farthest apart.
Here is some untested Python 3.6.2 code that does the above. I'll test it and show a graphic for it when I can.
import math
import random
def rand_pt_between_circles(x_inner,
                            y_inner,
                            r_inner,
                            x_outer,
                            y_outer,
                            r_outer):
    """Return a random floating-point 2D point located between the
    inner and the outer circles given by their center coordinates and
    radii. No error checking is done on the parameters."""
    # Find the fixed point of the similarity transformation from the
    # inner circle to the outer circle.
    x_fixed = x_inner - (x_outer - x_inner) / (r_outer - r_inner) * r_inner
    y_fixed = y_inner - (y_outer - y_inner) / (r_outer - r_inner) * r_inner
    # Find a random transformation ratio between 1 and r_outer / r_inner
    # and a random point on the inner circle
    ratio = 1 + (r_outer / r_inner - 1) * random.random()
    theta = 2 * math.pi * random.random()
    x_start = x_inner + r_inner * math.cos(theta)
    y_start = y_inner + r_inner * math.sin(theta)
    # Apply the similarity transformation to the random point.
    x_result = x_fixed + (x_start - x_fixed) * ratio
    y_result = y_fixed + (y_start - y_fixed) * ratio
    return x_result, y_result
The acceptance/rejection method as described by Severin Pappadeux is probably the simplest.
For a direct approach, you can also work in polar coordinates, with the center of the hole as the pole.
The polar equation (Θ, σ) (sorry, no rho) of the external circle will be
(σ cosΘ - xc)² + (σ sinΘ - yc)² = σ² - 2(cosΘ xc + sinΘ yc)σ + xc² + yc² = R²
This is a quadratic equation in σ that you can easily solve in terms of Θ. Then you can draw an angle in [0, 2π) and draw a radius between r and σ.
This won't give you a uniform distribution, because the range of σ is a function of Θ and because of the polar bias. This might be fixed by computing a suitable transfer function, but this is a little technical and probably not tractable analytically.
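For illustration, here is a minimal sketch of that direct polar approach (non-uniform, as noted above); (xc, yc) is the centre of the outer circle measured from the hole centre, and the names are mine:
import math
import random
def sample_shifted_annulus(r, R, xc, yc):
    """Pick a point in the region, working in polar coordinates centred on the hole."""
    theta = random.uniform(0.0, 2.0 * math.pi)
    # positive root of sigma^2 - 2*(xc*cos(theta) + yc*sin(theta))*sigma + xc^2 + yc^2 - R^2 = 0
    m = xc * math.cos(theta) + yc * math.sin(theta)
    sigma_outer = m + math.sqrt(m * m - (xc * xc + yc * yc - R * R))
    sigma = random.uniform(r, sigma_outer)
    # coordinates relative to the hole centre
    return sigma * math.cos(theta), sigma * math.sin(theta)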
I am working in Python. I have a previous coordinate (x_prev, y_prev) = (1.5, 3), a current coordinate (x, y) = (2, 3.2), and the angle difference between them, and I want the next coordinate to be at a certain distance d with the same orientation as the current (x, y) coordinate. I have tried using the rotation and translation formula, but it fails to give the proper answer. Here is the code of what I have tried so far.
d = 0.5
angle = np.arctan2((y - y_prev), (x - x_prev))
x_ = x * np.cos(angle) - y * np.sin(angle) + (d * np.sinc(angle_/2)* np.cos(angle/2))
y_ = x * np.sin(angle) + y * np.cos(angle) + (d * np.sinc(angle_/2)* np.sin(angle/2))
The expected coordinate is approximately (x_, y_) = (2.5, 3.6) with the same orientation as the current one, but the code gives the wrong coordinate, so is there anything I am missing?
Thanks in advance
I partly agree with @ImportanceOfBeingErnest that your question is a geometrical one. However, I'm adding an answer because numpy lets you avoid all that trigonometric work that you are trying to do in the first place.
What you want is to find the point (x_new,y_new) based on (x_prev,y_prev) and (x_now,y_now) such that the three points lie on the same line and the distance between (x_prev,y_prev) and (x_new,y_new) is a preset d.
You don't need trigonometry if you can work with proper two-dimensional vectors. You can normalize the vector (x_now,y_now) - (x_prev,y_prev) to get an orientation vector of the line along which you need to move from (x_prev,y_prev) in order to end up at (x_new,y_new). Numpy lets you handle this elegantly:
import numpy as np
x_prev,y_prev = (1.5, 3)
x_now,y_now = (2, 3.2)
d = 0.5
# use 2d arrays for elegant vector operations
# of course we can directly define these from coordinates if we want to
p_prev = np.array([x_prev,y_prev])
p_now = np.array([x_now,y_now])
# compute the unit direction vector for p_new - p_prev
t = p_now - p_prev
t /= np.linalg.norm(t) # use Euclidean norm by default
# p_new is simple now:
p_new = p_prev + d*t
print(p_prev)
print(p_now)
print(p_new)
The above results in (x_new,y_new)=(1.96423835,3.18569534). Your points are actually such that (x_now,y_now) is almost at 0.5 distance from (x_prev,y_prev), so the resulting vector is hardly different from the original one. But anyway, the above procedure will always give you a new point which is at the same angle from (x_prev,y_prev) as (x_now,y_now) but at the fixed distance.