I used various sources of information to determine the GPS coordinates of a traffic sign, and plotted them using plotly.express.scatter_mapbox and add_scattermapbox as follows:
The orange dot is a high end, "reference" measurement and the others are from different sources.
The numeric coordinates in this example are:
red: 51.4001213° 12.4291356°
purple: 51.400127° 12.429187°
green: 51.400106346232° 12.429278003005°
orange: 51.4000684461437° 12.4292323627949°
How can I calculate an area around the orange dot (e.g. 5 meters), find which coordinates lie inside this area, and plot that area on my map?
Does this answer help: https://gis.stackexchange.com/a/25883
This is tricky for two reasons: first, limiting the points to a circle instead of a square; second, accounting for distortions in the distance calculations.
Many GISes include capabilities that automatically and transparently handle both complications. However, the tags here suggest that a GIS-independent description of an algorithm may be desirable.
To generate points uniformly, randomly, and independently within a circle of radius r around a location (x0, y0), start by generating two independent uniform random values u and v in the interval [0, 1). (This is what almost every random number generator provides you.) Compute
w = r * sqrt(u)
t = 2 * Pi * v
x = w * cos(t)
y = w * sin(t)
The desired random point is at location (x+x0, y+y0).
When using geographic (lat, lon) coordinates, x0 (longitude) and y0 (latitude) will be in degrees, but r will most likely be in meters (or feet or miles or some other linear measure). First, convert the radius r into degrees as if you were located near the equator; there are about 111,300 meters in a degree.
Second, after generating x and y as in step (1), adjust the x-coordinate for the shrinking of the east-west distances:
x' = x / cos(y0)
The desired random point is at location (x'+x0, y+y0). This is an approximate procedure. For small radii (less than a few hundred kilometers) that do not extend over either pole of the earth, it will usually be so accurate you cannot detect any error even when generating tens of thousands of random points around each center (x0,y0).
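Here is a small Python sketch of that procedure (my own transcription, not code from the linked answer); the function name and the 5 m example around the orange reference point are mine:

import math
import random

def random_point_in_circle(lat0, lon0, radius_m):
    """Uniform random point within radius_m meters of (lat0, lon0).
    Approximation for small radii, as described above."""
    # convert the radius from meters to degrees (near the equator)
    r = radius_m / 111300.0
    u, v = random.random(), random.random()
    w = r * math.sqrt(u)      # sqrt gives a uniform distribution over the disc
    t = 2 * math.pi * v
    x = w * math.cos(t)       # east-west offset in degrees
    y = w * math.sin(t)       # north-south offset in degrees
    # stretch the east-west offset to account for the convergence of meridians
    x_adj = x / math.cos(math.radians(lat0))
    return (lat0 + y, lon0 + x_adj)

# e.g. a random point within 5 m of the orange reference measurement
print(random_point_in_circle(51.4000684461437, 12.4292323627949, 5))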
Related
I have the coordinates of 6 points in an image
(170.01954650878906, 216.98866271972656)
(201.3812255859375, 109.42137145996094)
(115.70114135742188, 210.4272918701172)
(45.42426300048828, 97.89037322998047)
(167.0367889404297, 208.9329833984375)
(70.13690185546875, 140.90538024902344)
I have a point as center [89.2458, 121.0896]. I am trying to re-calculate the position of the points in Python using 4 rotation angles (0, 90, -90, 180 degrees) and 7 scaling factors (0.5, 0.75, 1, 1.10, 1.25, 1.35, 1.5).
My question is: how can I rotate and scale the above-mentioned points relative to the center point and get the new coordinates of those 6 points?
Your help is really appreciated.
Mathematics
A mathematical approach would be to represent this data as vectors from the center to the image-points, translate these vectors to the origin, apply the transformation and relocate them around the center point. Let's look at how this works in detail.
Representation as vectors
We can show these vectors in a grid, which produces the following image:
This image provides a nice way to look at these points, so we can see our actions happening in a visual way. The center point is marked with a dot at the beginning of all the arrows, and the end of each arrow is the location of one of the points supplied in the question.
A vector can be seen as a list of the values of the coordinates of the point so
my_vector = [point[0], point[1]]
could be a representation for a vector in python, it just holds the coordinates of a point, so the format in the question could be used as is! Notice that I will use the position 0 for the x-coordinate and 1 for the y-coordinate throughout my answer.
I have only added this representation as a visual aid; we can look at any set of two points as a vector. No calculation is needed, this is only a different way of looking at those points.
Translation to origin
The first calculations happen here. We need to translate all these vectors to the origin. We can very easily do this by subtracting the location of the center point from all the other points, for example (can be done in a simple loop):
point_origin_x = point[0] - center_point[0] # Xvalue point - Xvalue center
point_origin_y = point[1] - center_point[1] # Yvalue point - Yvalue center
The resulting points can now be rotated around the origin and scaled with respect to the origin. The new points (as vectors) look like this:
In this image, I deliberately left the scale untouched, so that it is clear that these are exactly the same vectors (arrows), in size and orientation, only shifted to be around (0, 0).
Why the origin
So why translate these points to the origin? Well, rotations and scaling actions are easy to do (mathematically) around the origin and not as easy around other points.
Also, from now on, I will only include the 1st, 2nd and 4th point in these images to save some space.
Scaling around the origin
A scaling operation is very easy around the origin. Just multiply the coordinates of the point with the factor of the scaling:
scaled_point_x = point[0] * scaling_factor
scaled_point_y = point[1] * scaling_factor
In a visual way, that looks like this (scaling all by 1.5):
Where the blue arrows are the original vectors and the red ones are the scaled vectors.
Rotating
Now for rotating. This is a little bit harder, because a rotation is most generally described by a matrix multiplication with this vector.
The matrix to multiply with is the following
(from wikipedia: Rotation Matrix)
So if V is the vector, then we need to perform V_r = R(t) * V to get the rotated vector V_r. This rotation will always be counterclockwise! In order to rotate clockwise, we simply need to use R(-t).
Because only multiples of 90° are needed in the question, the matrix becomes almost trivial. For a rotation of 90° counterclockwise, the matrix is:
Which is basically in code:
rotated_point_x = -point[1] # new x is negative of old y
rotated_point_y = point[0] # new y is old x
Again, this can be nicely shown in a visual way:
Where I have matched the colors of the vectors.
A rotation of 90° clockwise will then be
rotated_counter_point_x = point[1] # x is old y
rotated_counter_point_y = -point[0] # y is negative of old x
A rotation of 180° will just be taking the negative of both coordinates, or you could just scale by a factor of -1, which is essentially the same.
As a last point about these operations, note that you can scale and/or rotate as many times as you want in a sequence to get the desired result.
Translating back to the center point
After the scaling actions and/or rotations, the only thing left is to translate the vectors back to the center point.
retranslated_point_x = new_point[0] + center_point_x
retranslated_point_y = new_point[1] + center_point_y
And all is done.
Just a recap
So to recap this long post:
Subtract the coordinates of the center point from the coordinates of the image-point
Scale by a factor with a simple multiplication of the coordinates
Use the idea of the matrix multiplication to think about the rotation (you can easily find these things on Google or Wikipedia).
Add the coordinates of the center point to the new coordinates of the image-point
I realize now that I could have just given this recap, but now there is at least some visual aid and a slight mathematical background in this post, which is also nice. I really believe that such problems should be looked at from a mathematical angle; the mathematical description can help a lot.
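Putting the recap together, here is a sketch in Python (my own, not part of the original answer), applied to the points and center from the question; numpy is used for the matrix multiplication:

import numpy as np

points = np.array([
    (170.01954650878906, 216.98866271972656),
    (201.3812255859375, 109.42137145996094),
    (115.70114135742188, 210.4272918701172),
    (45.42426300048828, 97.89037322998047),
    (167.0367889404297, 208.9329833984375),
    (70.13690185546875, 140.90538024902344),
])
center = np.array([89.2458, 121.0896])

def transform(points, center, angle_deg, scale):
    """Scale and rotate points (counterclockwise) around a center point."""
    t = np.radians(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    translated = points - center   # step 1: move the center to the origin
    scaled = translated * scale    # step 2: scale around the origin
    rotated = scaled @ R.T         # step 3: rotate around the origin
    return rotated + center        # step 4: move back to the center point

# e.g. rotate by 90° counterclockwise and scale by 1.5
print(transform(points, center, angle_deg=90, scale=1.5))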
I have some points that are located in the same place, with WGS84 latlngs, and I want to 'jitter' them randomly so that they don't overlap.
Right now I'm using this crude method, which jitters them within a square:
r['latitude'] = float(r['latitude']) + random.uniform(-0.0005, 0.0005)
r['longitude'] = float(r['longitude']) + random.uniform(-0.0005, 0.0005)
How could I adapt this to jitter them randomly within a circle?
I guess I want a product x*y = 0.001 where x and y are random values. But I have absolutely no idea how to generate this!
(I realise that really I should use something like this to account for the curvature of the earth's surface, but in practice a simple circle is probably fine :) )
One simple way to generate random samples within a circle is to just generate square samples as you are, and then reject the ones that fall outside the circle.
The basic idea is: you generate a vector with x = the radius of the circle and y = 0.
You then rotate the vector by a random angle between 0 and 360, or 0 to 2 pi radians.
You then apply this displacement vector and you have your random jitter in a circle.
An example from one of my scripts:
import math
from random import random

def get_randrad(pos, radius):
    radius = random() * radius
    angle = random() * 2 * math.pi
    return (int(pos[0] + radius * math.cos(angle)),
            int(pos[1] + radius * math.sin(angle)))
pos being the target location and radius being the "jitter" range.
As pjs pointed out, use
radius *= math.sqrt(random())
in place of the plain radius = random() * radius for a uniform distribution.
Merely culling results that fall outside your circle will be sufficient.
If you don't want to throw out some percentage of random results, you could choose a random angle and distance, to ensure all your values fall within the radius of your circle. It's important to note, with this solution, that the precision of the methods you use to extrapolate an angle into a vector will skew your distribution to be more concentrated in the center.
If you make a vector out of your x,y values, and then do something like randomize the length of said vector to fall within your circle, your distribution will no longer be uniform, so I would steer clear of that approach, if uniformity is your biggest concern.
The culling approach is the most evenly distributed, of the three I mentioned, although the random angle/length approach is usually fine, except in cases involving very fine precision and granularity.
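For completeness, here is a small sketch of the culling (rejection) approach (my own code, with the 0.0005-degree offset from the question as the default radius):

import random

def jitter_in_circle(lat, lon, max_offset=0.0005):
    """Jitter a point uniformly within a circle of radius max_offset (in degrees),
    by rejection sampling: draw from the square, keep only points inside the circle."""
    while True:
        dx = random.uniform(-max_offset, max_offset)
        dy = random.uniform(-max_offset, max_offset)
        if dx * dx + dy * dy <= max_offset * max_offset:
            return lat + dy, lon + dx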
See my previous question (Integral of Intensity function in python).
You can see the diffraction model in the image below:
I want to calculate the integral of the intensity over each pixel (a square), so I can't use R and Theta as variables. How can I do this in X-Y coordinates?
Our function:
instead of sin(theta) we can use:
sintheta= (np.sqrt((x)**2 + (y)**2)/(np.sqrt((x)**2 + (y)**2 + d**2)))
Other constants:
lamb=550*10**(-9)
k=2.0*np.pi/lamb
a=5.5*2.54*10**(-2)
d=2.8
When you plot the function, the result is something like this (the image above is a view from the top):
The method in the previous topic:
integrate the function over (0.0, dist) and then multiply by 2*np.pi*x, where x = k*a*np.sin(theta). But now I want to integrate over each pixel, and the previous method doesn't work because this is in X-Y coordinates, not polar.
Actually, the integration in Cartesian coordinates is rather straightforward. Now that you have the intensity function, you have to express the radius r by coordinates x and y. A trivial thing which you have actually done in your question.
So, the function to be integrated (without some constants) is:
import numpy as np
from scipy import special as sp

# Fraunhofer intensity function (spherical aperture)
def f(x, y):
    r = np.sqrt(x**2 + y**2)
    return (sp.j1(r)/r)**2
Or by using the fact that 2 J1(x)/x = J0(x) + J2(x) [thanks, Jaime!]:
def f(x, y):
    r = np.sqrt(x**2 + y**2)
    return (sp.j0(r) + sp.jn(2, r))**2
This form is better in the sense that it does not have a singularity anywhere.
Now, I do not use any constant factors. You may add them if you want, but I find it easier to normalize with the integration result over infinite area. Otherwise it is too easy to just forget some constant (I do, usually).
The integration can be carried out with scipy.integrate.nquad. It accepts a multi-dimensional function to integrate. So, in this case:
import scipy.integrate
integral = scipy.integrate.nquad(f, ([-d/2, d/2], [-d/2, d/2]))[0]
However, as your function is very clearly symmetric, you might consider integrating over one quadrant only and then multiply by four:
4. * scipy.integrate.nquad(f, ([0, d/2], [0, d/2]))[0]
By using these the full intensity is:
>>> 4. * scipy.integrate.nquad(f, [[0, np.inf], [0, np.inf]])[0]
12.565472446489999
(Which looks very much like 4 pi, BTW.) Of course, you can also use polar coordinates to calculate the full value, as the function has circular symmetry (as outlined in Integral of Intensity function in python). The different values are due to different scaling (2 pi omitted in the polar integration, a factor of 2 because I am using the sum form of the Bessel functions here).
For example, for a square area from -1..1 in both directions, the normalized power (divided by the full power value above) over the square area is:
>>> 4*scipy.integrate.nquad(f, [[0,1],[0,1]])[0] / 12.565472446489999
0.27011854108867
So, approximately 27 % of the incoming light shines onto the square photodetector.
When it comes to your constants, it seems that something (at least units) is missing. My guess:
wavelength: 550 nm
circular aperture diameter: 0.0055" = 0.14 mm
distance from the aperture to the sensor: 2.8 mm
square sensor size 5.4 um x 5.4 um
The last one I just guessed from the image. As the sensor size is very much smaller than the distance, sin(ϴ) is very close to y/d, where d is distance and y displacement from the optical axis. By using these numbers x = ka sin(ϴ) = kay / d ≈ 1.54. For that number the intensity integral gives approximately 0.52 (or 52 %).
If you are comparing this to some experimental value, remember that there are numerous error sources. The image on the image plane is a Fourier transform of the aperture. If there are small imperfections in the aperture edge, they may change the resulting spot. Airy rings are seldom as beautiful as astronomers think...
I have some geo data (the image below shows the path of a river as red dots) which I want to approximate using a multi segment cubic bezier curve. Through other questions on stackoverflow here and here I found the algorithm by Philip J. Schneider from "Graphics Gems". I successfully implemented it and can report that even with thousands of points it is very fast. Unfortunately that speed comes with some disadvantages, namely that the fitting is done quite sloppily. Consider the following graphic:
The red dots are my original data and the blue line is the multi segment bezier created by the algorithm by Schneider. As you can see, the input to the algorithm was a tolerance which is at least as high as the green line indicates. Nevertheless, the algorithm creates a bezier curve which has too many sharp turns. You see two of these unnecessary sharp turns in the image. It is easy to imagine a bezier curve with fewer sharp turns for the shown data while still maintaining the maximum tolerance condition (just push the bezier curve a bit in the direction of the magenta arrows). The problem seems to be that the algorithm picks data points from my original data as end points of the individual bezier curves (the magenta arrows indicate some suspects). With the endpoints of the bezier curves restricted like that, it is clear that the algorithm will sometimes produce rather sharp curvatures.
What I am looking for is an algorithm which approximates my data with a multi segment bezier curve with two constraints:
the multi segment bezier curve must never be more than a certain distance away from the data points (this is provided by the algorithm by Schneider)
the multi segment bezier curve must never create curvatures that are too sharp. One way to check for this criterion would be to roll a circle with the minimum curvature radius along the multi segment bezier curve and check whether it touches all parts of the curve along its path, though it seems there is a better method involving the cross product of the first and second derivative.
The solutions I found which create better fits sadly either work only for single bezier curves (and omit the question of how to find good start and end points for each bezier curve in the multi segment bezier curve) or do not allow a minimum curvature constraint. I feel that the minimum curvature constraint is the tricky condition here.
Here is another example (this is hand drawn and not 100% precise):
Let's suppose that figure one shows both the curvature constraint (the circle must fit along the whole curve) and the maximum distance of any data point from the curve (which happens to be the radius of the circle in green). A successful approximation of the red path in figure two is shown in blue. That approximation honors the curvature condition (the circle can roll inside the whole curve and touches it everywhere) as well as the distance condition (shown in green). Figure three shows a different approximation to the path. While it honors the distance condition, it is clear that the circle does not fit into the curvature any more. Figure four shows a path which is impossible to approximate with the given constraints because it is too pointy. This example is supposed to illustrate that to properly approximate some pointy turns in the path, it is necessary that the algorithm chooses control points which are not part of the path. Figure three shows that if control points along the path were chosen, the curvature constraint cannot be fulfilled any more. This example also shows that the algorithm must give up on some inputs, as it is not possible to approximate them with the given constraints.
Does there exist a solution to this problem? The solution does not have to be fast. If it takes a day to process 1000 points, then that's fine. The solution does also not have to be optimal in the sense that it must result in a least squares fit.
In the end I will implement this in C and Python but I can read most other languages too.
I found the solution that fulfills my criteria. The solution is to first find a B-spline that approximates the points in the least squares sense and then convert that spline into a multi segment bezier curve. B-splines have the advantage that, in contrast to bezier curves, they will not pass through the control points, and they provide a way to specify a desired "smoothness" of the approximation curve. The needed functionality to generate such a spline is implemented in the FITPACK library, to which scipy offers a Python binding. Let's suppose I read my data into the lists x and y, then I can do:
import matplotlib.pyplot as plt
import numpy as np
from scipy import interpolate
tck,u = interpolate.splprep([x,y],s=3)
unew = np.arange(0,1.01,0.01)
out = interpolate.splev(unew,tck)
plt.figure()
plt.plot(x,y,out[0],out[1])
plt.show()
The result then looks like this:
If I want the curve to be smoother, I can increase the s parameter to splprep. If I want the approximation closer to the data, I can decrease the s parameter for less smoothness. By going through multiple s parameters programmatically I can find a good parameter that fits the given requirements.
The question though is how to convert that result into a bezier curve. The answer is in this email by Zachary Pincus. I will replicate his solution here to give a complete answer to my question:
def b_spline_to_bezier_series(tck, per=False):
    """Convert a parametric b-spline into a sequence of Bezier curves of the same degree.

    Inputs:
      tck : (t,c,k) tuple of b-spline knots, coefficients, and degree returned by splprep.
      per : if tck was created as a periodic spline, per *must* be true, else per *must* be false.

    Output:
      A list of Bezier curves of degree k that is equivalent to the input spline.
      Each Bezier curve is an array of shape (k+1,d) where d is the dimension of the
      space; thus the curve includes the starting point, the k-1 internal control
      points, and the endpoint, where each point is of d dimensions.
    """
    from scipy.interpolate import insert  # FITPACK's knot insertion, exposed through scipy
    from numpy import asarray, unique, split, sum, transpose
    t, c, k = tck
    t = asarray(t)
    try:
        c[0][0]
    except Exception:
        # I can't figure out a simple way to convert nonparametric splines to
        # parametric splines. Oh well.
        raise TypeError("Only parametric b-splines are supported.")
    new_tck = tck
    if per:
        # ignore the leading and trailing k knots that exist to enforce periodicity
        knots_to_consider = unique(t[k:-k])
    else:
        # the first and last k+1 knots are identical in the non-periodic case, so
        # no need to consider them when increasing the knot multiplicities below
        knots_to_consider = unique(t[k+1:-k-1])
    # For each unique knot, bring its multiplicity up to the next multiple of k+1.
    # This removes all continuity constraints between each of the original knots,
    # creating a set of independent Bezier curves.
    desired_multiplicity = k + 1
    for x in knots_to_consider:
        current_multiplicity = sum(t == x)
        remainder = current_multiplicity % desired_multiplicity
        if remainder != 0:
            # add enough knots to bring the current multiplicity up to the desired multiplicity
            number_to_insert = desired_multiplicity - remainder
            new_tck = insert(x, new_tck, number_to_insert, per)
    tt, cc, kk = new_tck
    # strip off the last k+1 knots, as they are redundant after knot insertion
    bezier_points = transpose(cc)[:-desired_multiplicity]
    if per:
        # again, ignore the leading and trailing k knots
        bezier_points = bezier_points[k:-k]
    # group the points into the desired bezier curves
    return split(bezier_points, len(bezier_points) // desired_multiplicity, axis=0)
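A minimal usage sketch (my own, not from the email), feeding in the tck returned by the splprep call above:

bezier_segments = b_spline_to_bezier_series(tck)
for segment in bezier_segments:
    # each segment is a (k+1, 2) array: start point, internal control points, end point
    print(segment)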
So B-Splines, FITPACK, numpy and scipy saved my day :)
polygonize data
Find the order of the points: just find the closest points to each other and try to connect them 'by lines'. Avoid looping back to the origin point.
compute the derivative along the path
It is the change of direction of the 'lines'; where you hit a local min or max, there is your control point ... Do this to reduce your input data (keep just the control points).
curve
Now use these points as control points. I strongly recommend an interpolation polynomial for both x and y separately, for example something like this:
x=a0+a1*t+a2*t*t+a3*t*t*t
y=b0+b1*t+b2*t*t+b3*t*t*t
where a0..a3 are computed like this:
d1=0.5*(p2.x-p0.x);
d2=0.5*(p3.x-p1.x);
a0=p1.x;
a1=d1;
a2=(3.0*(p2.x-p1.x))-(2.0*d1)-d2;
a3=d1+d2+(2.0*(-p2.x+p1.x));
b0..b3 are computed in the same way, but using the y coordinates of course
p0..p3 are the control points for the cubic interpolation curve
t = <0.0, 1.0> is the curve parameter from p1 to p2
This ensures that the position and the first derivative are continuous (C1). You can also use BEZIER, but it will not be as good a match as this.
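For illustration, a small Python sketch of these coefficient formulas and the curve evaluation (my own translation; the example values are made up):

def cubic_coeffs(p0, p1, p2, p3):
    """Coefficients of the interpolation cubic between p1 and p2 (one coordinate at a time)."""
    d1 = 0.5 * (p2 - p0)
    d2 = 0.5 * (p3 - p1)
    a0 = p1
    a1 = d1
    a2 = 3.0 * (p2 - p1) - 2.0 * d1 - d2
    a3 = d1 + d2 + 2.0 * (p1 - p2)
    return a0, a1, a2, a3

def eval_cubic(coeffs, t):
    """Evaluate a0 + a1*t + a2*t^2 + a3*t^3 for t in [0, 1]."""
    a0, a1, a2, a3 = coeffs
    return a0 + t * (a1 + t * (a2 + t * a3))

# example: x coordinates of four consecutive control points p0..p3 (made-up values)
x0, x1, x2, x3 = 0.0, 1.0, 3.0, 4.0
ax = cubic_coeffs(x0, x1, x2, x3)
print(eval_cubic(ax, 0.5))  # x position halfway between p1 and p2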
[edit1] too-sharp edges are a BIG problem
To solve it you can remove points from your dataset before obtaining the control points. I can think of two ways to do it right now ... choose whichever is better for you:
remove points from the dataset with too high a first derivative
dx/dl or dy/dl, where x, y are coordinates and l is the curve length (along the path). The exact computation of the curvature radius from the curve derivatives is tricky.
remove points from the dataset that lead to too small a curvature radius
Compute the intersection of the perpendicular axes (red lines in the image) through the midpoints of neighboring line segments (black lines); the distance from that intersection to the joint point (blue line) is your curvature radius. When the curvature radius is smaller than your limit, remove that point ...
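One way to code that check in Python (my own sketch; it computes the same radius as the circumradius of three consecutive points, which is equivalent to intersecting the perpendicular bisectors described above):

import math

def curvature_radius(p0, p1, p2):
    """Radius of the circle through three consecutive points (the joint point is p1).
    Returns infinity for collinear points."""
    a = math.dist(p0, p1)
    b = math.dist(p1, p2)
    c = math.dist(p0, p2)
    # twice the signed triangle area, via the cross product
    cross = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
    if cross == 0:
        return math.inf
    return (a * b * c) / (2.0 * abs(cross))

def drop_sharp_points(points, min_radius):
    """Keep only points whose local curvature radius is at least min_radius."""
    kept = [points[0]]
    for p0, p1, p2 in zip(points, points[1:], points[2:]):
        if curvature_radius(p0, p1, p2) >= min_radius:
            kept.append(p1)
    kept.append(points[-1])
    return kept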
Now, if you really need only BEZIER cubics, then you can convert my interpolation cubic to a BEZIER cubic like this:
// ---------------------------------------------------------------------------
// x=cx[0]+(t*cx[1])+(tt*cx[2])+(ttt*cx[3]); // cubic x=f(t), t = <0,1>
// ---------------------------------------------------------------------------
// cubic matrix bz4 = it4
// ---------------------------------------------------------------------------
// cx[0]= ( x0) = ( X1)
// cx[1]= (3.0*x1)-(3.0*x0) = (0.5*X2) -(0.5*X0)
// cx[2]= (3.0*x2)-(6.0*x1)+(3.0*x0) = -(0.5*X3)+(2.0*X2)-(2.5*X1)+( X0)
// cx[3]= ( x3)-(3.0*x2)+(3.0*x1)-( x0) = (0.5*X3)-(1.5*X2)+(1.5*X1)-(0.5*X0)
// ---------------------------------------------------------------------------
const double m=1.0/6.0;
double x0,y0,x1,y1,x2,y2,x3,y3;
x0 = X1; y0 = Y1;
x1 = X1-(X0-X2)*m; y1 = Y1-(Y0-Y2)*m;
x2 = X2+(X1-X3)*m; y2 = Y2+(Y1-Y3)*m;
x3 = X2; y3 = Y2;
In case you need the reverse conversion see:
Bezier curve with control points within the curve
The question was posted long ago, but here is a simple solution based on splprep, finding the minimal value of s that fulfills a minimum curvature radius criterion.
route is the set of input points, the first dimension being the number of points.
import numpy as np
from scipy.interpolate import splprep, splev

# The minimum curvature radius we want to enforce
minCurvatureConstraint = 2000
# Relative tolerance on the radius
relTol = 1.e-6
# Initial values for bisection search, should bound the solution
s_0 = 0
minCurvature_0 = 0
s_1 = 100000000  # Should be high enough to produce curvature radius larger than constraint
s_1 *= 2
minCurvature_1 = float('inf')
while np.abs(minCurvature_0 - minCurvature_1) > minCurvatureConstraint * relTol:
    s = 0.5 * (s_0 + s_1)
    tck, u = splprep(np.transpose(route), s=s)
    smoothed_route = splev(u, tck)
    # Compute radius of curvature
    derivative1 = splev(u, tck, der=1)
    derivative2 = splev(u, tck, der=2)
    xprim = derivative1[0]
    xprimprim = derivative2[0]
    yprim = derivative1[1]
    yprimprim = derivative2[1]
    curvature = 1.0 / np.abs((xprim*yprimprim - yprim*xprimprim) / np.power(xprim*xprim + yprim*yprim, 3 / 2))
    minCurvature = np.min(curvature)
    print("s is %g => Minimum curvature radius is %g" % (s, np.min(curvature)))
    # Perform bisection
    if minCurvature > minCurvatureConstraint:
        s_1 = s
        minCurvature_1 = minCurvature
    else:
        s_0 = s
        minCurvature_0 = minCurvature
It may require some refinements, such as iterating to find a suitable s_1, but it works.
I have a set of GPS coordinates in decimal notation, and I'm looking for a way to find the coordinates in a circle with variable radius around each location.
Here is an example of what I need. It is a circle with 1km radius around the coordinate 47,11.
What I need is the algorithm for finding the coordinates of the circle, so I can use it in my kml file using a polygon. Ideally for python.
See also Adding distance to a GPS coordinate for simple relations between lat/lon and short-range distances.
this works:
import math

# inputs
radius = 1000.0     # m - the following code is an approximation that stays reasonably accurate for distances < 100km
centerLat = 30.0    # latitude of circle center, decimal degrees
centerLon = -100.0  # longitude of circle center, decimal degrees

# parameters
N = 10  # number of discrete sample points to be generated along the circle

# generate points
circlePoints = []
for k in range(N):
    # compute
    angle = math.pi * 2 * k / N
    dx = radius * math.cos(angle)
    dy = radius * math.sin(angle)
    point = {}
    point['lat'] = centerLat + (180 / math.pi) * (dy / 6378137)
    point['lon'] = centerLon + (180 / math.pi) * (dx / 6378137) / math.cos(centerLat * math.pi / 180)
    # add to list
    circlePoints.append(point)
print(circlePoints)
Use the formula for "Destination point given distance and bearing from start point" here:
http://www.movable-type.co.uk/scripts/latlong.html
with your centre point as start point, your radius as distance, and loop over a number of bearings from 0 degrees to 360 degrees. That will give you the points on a circle, and will work at the poles because it uses great circles everywhere.
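For reference, a Python sketch of that formula (my own transcription of the standard great-circle destination-point equations, not code from the linked page; the helper name and the 1 km example around (47, 11) are mine):

import math

def destination_point(lat, lon, bearing_deg, distance_m, earth_radius=6371000.0):
    """Great-circle destination point given a start point, bearing and distance."""
    lat1 = math.radians(lat)
    lon1 = math.radians(lon)
    theta = math.radians(bearing_deg)
    delta = distance_m / earth_radius  # angular distance
    lat2 = math.asin(math.sin(lat1) * math.cos(delta) +
                     math.cos(lat1) * math.sin(delta) * math.cos(theta))
    lon2 = lon1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(lat1),
                             math.cos(delta) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# 36 points on a 1 km circle around (47, 11)
circle = [destination_point(47.0, 11.0, b, 1000.0) for b in range(0, 360, 10)]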
It is a simple trigonometry problem.
Set your coordinate system XOY at your circle centre. Start from y = 0 and x = r. Then just rotate your radius around the origin by an angle a (in radians); you can find the coordinates of each next point on the circle with Xi = r * cos(a), Yi = r * sin(a). Repeat the last step 2 * Pi / a times.
That's all.
UPDATE
Taking the comment of #poolie into account, the problem can be solved in the following way (assuming the Earth is a perfect sphere). Consider a cross section of the Earth with its largest diameter D through our point (call it L). The 1 km diameter of our circle then becomes a chord (call it AB) of the Earth's cross-section circle. The length of the arc AB is then (AB) = (D / 2) * Theta, where Theta = 2 * arcsin(|AB| / D). Further, it is easy to find all other dimensions.