When reading TLEs and calculating satellite orbits, does PyEphem treat the Earth as a sphere or as an ellipsoid?
The underlying astronomy library beneath PyEphem is named libastro, and here is its code for doing satellite computations:
https://github.com/brandon-rhodes/pyephem/blob/master/libastro-3.7.5/earthsat.c
It looks like it simply considers the Earth a sphere; the only place I see the shape of the Earth even coming into the calculation is where the satellite's height is produced from its distance from the Earth's centre, and there it just subtracts a constant radius instead of anything fancier:
#if SSPELLIPSE
#else
*Height = r - EarthRadius;
#endif
So I think your answer is “sphere.”
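Purely as an illustration, here is a rough Python rendering of that spherical height calculation; the constant and the function name are placeholders of my own, not part of PyEphem's API:
# Placeholder sketch, not PyEphem API: height above a spherical Earth is just
# the satellite's distance from the geocentre minus one fixed radius.
EARTH_RADIUS_KM = 6378.16  # a single constant radius, no flattening applied

def satellite_height_km(distance_from_geocentre_km):
    return distance_from_geocentre_km - EARTH_RADIUS_KM

print(satellite_height_km(6778.16))  # -> 400.0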
I am looking to create a repetitive pattern from a single shape (in the example below, the starting shape would be the smallest centre star) using Python. The pattern would look something like this:
To give context, I am working on a project that uses a camera to detect a shape on a rectangle of sand. The idea is that the ripple pattern is drawn out around the object using a pen plotter-type mechanism in the sand to create a zen garden-type feature.
Currently, I am running the Canny edge detection algorithm to create a png (in this example it would be the smallest star). I am able to convert this into an SVG using potrace, but am not sure how to create the ripple pattern (and at what stage, i.e. before converting to an SVG, or after).
Any help would be appreciated!
Here's how I did it:
In the end, I ran a vertex detection algorithm to calculate the shape's vertices.
Then, I sorted them in a clockwise order around the centroid coordinate. Using the svgwrite library, I recreated the shapes using lines.
I 'drew' a circle with a set radius around each vertex and calculated the intersection between the circle and a straight line from the centroid through the vertex.
This gave me two potential solutions (a +ve and a -ve). I chose the point furthest away from the centroid, iterated this method for each vertex and joined the points to create an outline of the shape.
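Roughly, the offsetting step looked like the sketch below; the vertex list, radii, and file name are made-up placeholders, and the real vertices came from the detection step described above:
import math
import svgwrite

# Hypothetical vertices, already sorted clockwise around the centroid;
# in the real project these come from the vertex-detection step.
vertices = [(100, 20), (160, 80), (140, 160), (60, 160), (40, 80)]
cx = sum(x for x, _ in vertices) / len(vertices)
cy = sum(y for _, y in vertices) / len(vertices)

def offset_outline(points, radius):
    """Push each vertex 'radius' further out along the centroid-to-vertex ray."""
    out = []
    for x, y in points:
        dx, dy = x - cx, y - cy
        d = math.hypot(dx, dy)
        out.append((x + dx / d * radius, y + dy / d * radius))
    return out

dwg = svgwrite.Drawing("ripples.svg", size=(300, 300))
dwg.add(dwg.polygon(points=vertices, fill="none", stroke="black"))
for r in (15, 30, 45):  # three successive ripples
    dwg.add(dwg.polygon(points=offset_outline(vertices, r), fill="none", stroke="black"))
dwg.save()
The offset-along-the-ray trick is equivalent to the circle/line intersection described above, keeping the intersection point furthest from the centroid.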
Assuming you are using turtle (very beginner friendly), you can use this:
import turtle, math

turtle.title("Stars!")
t = turtle.Turtle()
t.speed(900)    # make it go fast
t.hideturtle()  # hide turtle
t.width(1.5)    # make lines nice & thick

def drawstar(size):
    t.up()                            # make turtle not draw while repositioning
    t.goto(0, size * math.sin(144))   # center star at 0, 0
    t.setheading(216)                 # make star flat
    t.down()                          # make turtle draw
    for i in range(5):                # draw 5 spikes
        t.forward(size)
        t.right(144)
        t.forward(size)
        t.right(288)

drawstar(250)
drawstar(200)
drawstar(150)
drawstar(100)

input()  # stop turtle from exiting
which creates this:
I am using gluLookAt with a camera whose coordinates are xCam, yCam and zCam. The coordinates of the object the camera is looking at are xPos, yPos, and zPos. There are variables named mouseturnX and mouseturnY, which measure the deviation of the mouse from the middle of the screen along the x-axis and the y-axis. The variable camdist describes the distance between the camera and the object it looks at.
The code for the camera position is this:
xCam = sin(mouseturnX)*camdist+xPos
yCam = mouseturnY+yPos
zCam = cos(mouseturnX)*camdist+zPos
I now made a polygon object, which I rotate with:
glRotatef(mouseturnX,0,1,0)
It should only ever show me the back side of the object, no matter what position the camera has. But it does not turn correctly. I tried other rotation axes, where it works fine, but with the y-axis it just does not want to work. I tried changing camdist from positive to negative, and mouseturnX in the glRotatef call from positive to negative and back to positive again. It just does not work. I used glPushMatrix before the rotation command and glPopMatrix after it. One line before the rotation command I used the translate function to set a fixed point for the polygon.
Edit: The polygon actually spins, but not by the right amount. It seems like I have to multiply the rotation of the polygon by something.
I found the multiplier by trial and error. It is 56.5. It is not perfect, but it works.
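For what it's worth, 56.5 is suspiciously close to 180/π ≈ 57.3, which would be the exact factor if mouseturnX is in radians (as sin and cos assume) while glRotatef expects degrees. A minimal sketch of that idea, assuming PyOpenGL and the variable names from the question:
import math
from OpenGL.GL import glRotatef  # assuming PyOpenGL; call this inside your render loop

mouseturnX = 0.5  # example value in radians, as consumed by sin()/cos() in the camera code

# glRotatef takes degrees, so convert explicitly instead of using a magic multiplier;
# math.degrees(x) is x * 180/pi, i.e. roughly the 56.5 found by trial and error.
glRotatef(math.degrees(mouseturnX), 0, 1, 0)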
Here's some background:
I'm working on a game engine and I've recently added a physics engine (PhysX in this case). The problem is that my transform class uses Euler angles for rotation, while the physics engine's transform class uses quaternions, so I implemented a method to convert my transform class to the physics engine's transform and back. It's working well, but I've discovered a weird bug.
Behavior I get:
When the yaw (the second element of the Euler vector) of the rotation goes above 90 degrees, the object stops rotating on the y-axis and the pitch and roll start misbehaving (weird shaking that skips from 0 to 180 and back a lot).
The debugging tools show that the rotation doesn't go above 91 degrees, but does reach about 90.0003 at most. I do convert the degrees to radians.
Example:
To show this bug I have a cube with a Python script rotating it:
from TOEngine import *

class rotate:
    direction = vec3(0, 10, 0)

    def Start(self):
        pass

    def Update(self, deltaTime):
        transform.Rotate(self.direction * deltaTime * 5)
        pass
The engine itself is written in C++, but I have a scripting system working with embedded Python. TOEngine is just my module, and the script itself just rotates the cube every frame.
The cube starts at (0, 0, 0) rotation and rotates fine, but stops at 90 degrees yaw and starts shaking.
This only happens when the physics system is enabled, so I know the bug must be in the method that transfers the rotation from Euler angles to a quaternion and back every frame using glm.
Here's the actual problematic code:
void RigidBody::SetTransform(Transform transform)
{
    glm::vec3 axis = transform.rotation;
    rigidbody->setGlobalPose(PxTransform(*(PxVec3*)&transform.position,
                                         *(PxQuat*)&glm::quat(glm::radians(transform.rotation)))); // Attention over here
}

Transform RigidBody::GetTransform()
{
    auto t = rigidbody->getGlobalPose();
    return Transform(*(glm::vec3*)&t.p, glm::degrees(glm::eulerAngles(*(glm::quat*)&t.q)), entity->transform.scale);
}
Ignore the weird type punning; PxQuat has essentially the same layout as glm::quat, and PxVec3 the same as glm::vec3. I expect this code to convert between the physics engine's transform class and my transform class by changing the rotation from Euler angles in degrees to a quaternion in radians (the hard part).
And inside the physics system:
void PreUpdate(float deltaTime) override { // Set physics simulation changes to the scene
    mScene->fetchResults(true);
    for (auto entity : Events::scene->entities)
        for (auto component : entity->components)
            if (component->GetName() == "RigidBody")
                entity->transform = ((RigidBody*)component)->GetTransform(); // This is running on the cube entity
}

void PostUpdate(float deltaTime) override { // Set scene changes to the physics simulation
    for (auto entity : Events::scene->entities)
        for (auto component : entity->components)
            if (component->GetName() == "RigidBody")
                ((RigidBody*)component)->SetTransform(entity->transform); // This is running on the cube entity
    mScene->simulate(deltaTime);
}
PreUpdate runs before the update every frame, and PostUpdate runs after the update every frame. The Update method (shown in the script above) runs, as its name suggests, in between PreUpdate and PostUpdate. The cube has a RigidBody component.
What I expect to get:
A rotating cube that doesn't stop rotating when it reaches 90 degrees yaw.
I know this one is a bit complex, and I tried my best to explain the bug. I believe the problem is in changing the Euler angles to a quaternion.
Regarding the conversion from PxQuat to glm::quat, do read the documentation at https://en.cppreference.com/w/cpp/language/explicit_cast and https://en.cppreference.com/w/cpp/language/reinterpret_cast and look for undefined behavior on the reinterpret_cast page. As far as I can tell, that C-style cast is not guaranteed to work, nor even desirable. I am digressing at this point, but remember that you have two options for this conversion.
glm::quat glmQuat = GenerateQuat();
physx::PxQuat someQuat = *(physx::PxQuat*)(&glmQuat); //< (1)
physx::PxQuat someOtherQuat = ConvertGlmQuatToPxQuat(glmQuat); //< (2)
(1) This option has the potential to result in undefined behavior but, more importantly, you did not save a copy. That statement is certain to result in one copy-constructor invocation.
(2) This option, on account of return value optimization, will also result in a single construction of physx::PxQuat.
So in effect, by taking option (1), you do not save any cost but you risk undefined behavior. With option (2), the cost is the same but the code is now standards compliant. Now back to the original point.
I would ordinarily do everything in my capacity to avoid using Euler angles, as they are error prone and far more confusing than quaternions. That said, here is a simple test you can set up to check your quaternion conversion from Euler angles (keep PhysX out of this for the time being).
You need to generate the following methods.
glm::mat3 CreateRotationMatrix(glm::vec3 rotationDegrees);
glm::mat3 CreateRotationMatrix(glm::quat inputQuat);
glm::quat ConvertEulerAnglesToQuat(glm::vec3 rotationDegrees);
and then the pseudo-code for the test looks like this:
for (auto angles : allPossibleAngleCombinations) {
    auto expectedRotationMatrix = CreateRotationMatrix(angles);
    auto convertedQuat = ConvertEulerAnglesToQuat(angles);
    auto actualRotationMatrix = CreateRotationMatrix(convertedQuat);
    ASSERT(expectedRotationMatrix, actualRotationMatrix);
}
Only if this test passes can you then look at the next problem of converting to PxQuat. I would guess that this test is going to fail for you. The one piece of advice I would give is that one of the input angles (which one depends on convention) needs to be range limited. Say, if you keep your yaw angle limited between -90 and 90 degrees, chances are your test might succeed. This is because multiple combinations of Euler angles can result in the same rotation matrix.
The pybox2d manual states the following:
pybox2d uses radians for angles. The body rotation is stored in radians and may grow unbounded. Consider normalizing the angle of your bodies if the magnitude of the angle becomes too large (use b2Body.SetAngle).
However, when I try to implement something to 'normalize' the angle I get the following error:
AttributeError: 'b2Body' object has no attribute 'SetAngle'
Code snippet:
def update_outputs(self):
    # This is necessary to prevent the angle
    # from getting too large or small
    self.body.SetAngle(self.body.angle % 2*pi)
Looks like the library has been pythonized since those docs were written. angle is a property of Body:
@angle.setter
def angle(self, angle):
    self._xf.angle = angle
    self._transform_updated()
You should be able to simply set it with something like:
def update_outputs(self):
    # This is necessary to prevent the angle
    # from getting too large or small
    self.body.angle %= (2 * pi)  # note the parentheses: "angle % 2*pi" would compute (angle % 2) * pi
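A minimal self-contained sketch of that, assuming pybox2d is installed as the Box2D module (the world and body here are made up just to have something to normalize):
from math import pi
from Box2D import b2World  # pybox2d

world = b2World(gravity=(0, -10))
body = world.CreateDynamicBody(position=(0, 4), angle=0.0)

world.Step(1.0 / 60, 6, 2)  # advance the simulation one step
body.angle %= (2 * pi)      # assigning through the property keeps the angle bounded
print(body.angle)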
Given land polygons as a Shapely MultiPolygon, I want to find the (Multi-)Polygon that represents, for example, the 12-nautical-mile buffer around the coastlines.
Using the Shapely buffer method does not work, since it uses Euclidean calculations.
Can somebody tell me how to calculate geodesic buffers in python?
This is not a Shapely problem, since Shapely explicitly states in its documentation that the library is for planar computation only. Nevertheless, in order to answer your question, you should specify the coordinate system you are using for your multipolygons.
Assuming you are using WGS84 coordinates (lat, lon), this is a recipe I found in another SO question (fix-up-shapely-polygon-object-when-discontinuous-after-map-projection). You will need the pyproj library.
import pyproj
from shapely.geometry import MultiPolygon, Polygon
from shapely.ops import transform as sh_transform
from functools import partial

wgs84_globe = pyproj.Proj(proj='latlong', ellps='WGS84')

def pol_buff_on_globe(pol, radius):
    _lon, _lat = pol.centroid.coords[0]
    aeqd = pyproj.Proj(proj='aeqd', ellps='WGS84', datum='WGS84',
                       lat_0=_lat, lon_0=_lon)
    project_pol = sh_transform(partial(pyproj.transform, wgs84_globe, aeqd), pol)
    return sh_transform(partial(pyproj.transform, aeqd, wgs84_globe),
                        project_pol.buffer(radius))

def multipol_buff_on_globe(multipol, radius):
    return MultiPolygon([pol_buff_on_globe(g, radius) for g in multipol])
The pol_buff_on_globe function does the following. First, it builds an azimuthal equidistant projection centred on the polygon's centroid. Then it transforms the polygon's coordinates into that projection, builds the buffer there, and finally transforms the buffered polygon back to the WGS84 coordinate system.
Some special care is needed:
You will need to translate the distance you want into the units used by the aeqd projection (see the sketch after this list).
Be careful not to buffer across the poles (see the mentioned SO question).
The fact that we are using the centroid of the polygon to center the projection should guarantee the answer is good enough, but if you have specific precision requirements you should NOT USE this solution, or at least characterize the error for the typical polygons you are using.
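On the first point, the aeqd projection works in metres, so a 12-nautical-mile buffer is simply passed in metres (1 nautical mile = 1852 m). A hypothetical usage sketch, where land is assumed to be your land MultiPolygon in WGS84 lon/lat and multipol_buff_on_globe is the function defined above:
NAUTICAL_MILE_M = 1852                  # metres per nautical mile
buffer_distance = 12 * NAUTICAL_MILE_M  # aeqd coordinates are in metres

territorial_waters = multipol_buff_on_globe(land, buffer_distance)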