How to draw orthographic projection from equirectangular projection - python

I have this image :
I don’t know exactly what kind of projection it is; I guess equirectangular or Mercator from the shape. It's the texture for an attitude indicator.
I want to draw an orthographic projection, or maybe a General Perspective projection (whichever looks better), of it according to a direction vector defined by two angles (heading and pitch). This direction defines a point on the sphere, and that point should be the center of the projection.
I want it to look from the pilot point of view, so only half of the sphere should be drawn.
I use Python and have not yet chosen a graphics library; I will probably be using pygame, though.
I’ve found something related: http://www.pygame.org/project-Off-Center+Map+Projections-2881-.html but it uses OpenGL and I have no experience with it. I can try it if needed.
How should I do that? I could probably draw it manually by computing every pixel from the projection formulas, but I suspect there are library tools to do this efficiently (hardware accelerated, probably?).

For an all-Python solution (using numpy/scipy array ops, which will be faster than any explicit per-pixel looping), this:
#!/usr/bin/env python
import math
import numpy as np
import scipy
import scipy.misc
import scipy.ndimage.interpolation
import subprocess

src = scipy.misc.imread("ji80w.png")

size = 256
frames = 50

for frame in xrange(0, frames):

    # Image pixel co-ordinates
    px = np.arange(-1.0, 1.0, 2.0/size) + 1.0/size
    py = np.arange(-1.0, 1.0, 2.0/size) + 1.0/size
    hx, hy = scipy.meshgrid(px, py)

    # Compute z of sphere hit position, if the pixel's ray hits
    r2 = hx*hx + hy*hy
    hit = (r2 <= 1.0)
    hz = np.where(
        hit,
        -np.sqrt(1.0 - np.where(hit, r2, 0.0)),
        np.NaN
    )

    # Some spin and tilt to make things interesting
    spin = 2.0*np.pi*(frame + 0.5)/frames
    cs = math.cos(spin)
    ss = math.sin(spin)
    ms = np.array([[cs, 0.0, ss], [0.0, 1.0, 0.0], [-ss, 0.0, cs]])
    tilt = 0.125*np.pi*math.sin(2.0*spin)
    ct = math.cos(tilt)
    st = math.sin(tilt)
    mt = np.array([[1.0, 0.0, 0.0], [0.0, ct, st], [0.0, -st, ct]])

    # Rotate the hit points
    xyz = np.dstack([hx, hy, hz])
    xyz = np.tensordot(xyz, mt, axes=([2], [1]))
    xyz = np.tensordot(xyz, ms, axes=([2], [1]))
    x = xyz[:, :, 0]
    y = xyz[:, :, 1]
    z = xyz[:, :, 2]

    # Compute map position of hit
    latitude  = np.where(hit, (0.5 + np.arcsin(y)/np.pi)*src.shape[0], 0.0)
    longitude = np.where(hit, (1.0 + np.arctan2(z, x)/np.pi)*0.5*src.shape[1], 0.0)
    latlong = np.array([latitude, longitude])

    # Resample, and zap non-hit pixels
    dst = np.zeros((size, size, 3))
    for channel in [0, 1, 2]:
        dst[:, :, channel] = np.where(
            hit,
            scipy.ndimage.interpolation.map_coordinates(
                src[:, :, channel],
                latlong,
                order=1
            ),
            0.0
        )

    # Save to f0000.png, f0001.png, ...
    scipy.misc.imsave('f{:04}.png'.format(frame), dst)

# Use imagemagick to make an animated gif
subprocess.call('convert -delay 10 f????.png anim.gif', shell=True)
will get you a sequence of frames f0000.png, f0001.png, ... and an animated anim.gif of the spinning, tilting globe.
OpenGL is really the place to be doing this sort of pixel wrangling though, especially if it's for anything interactive.

I glanced at the code in the "Off-Center Map Projections" stuff you linked...
As a starting point, I'd say it's pretty good, especially if you want any sort of efficiency in pygame: offloading per-pixel operations to OpenGL will be much faster than they'll ever be in Python.
Obviously, to get any further you'll need to understand OpenGL; the projection is implemented in main.py's GLSL code (the string passed to mod_program.ShaderFragment). The atan and asin there shouldn't be a surprise if you've read up on equirectangular projections.
However, to get to what you want, you'll have to figure out how to render a sphere instead of the viewport-filling quad (rendered in main.py at glBegin(GL_QUADS)). Alternatively, stick with the screen-filling quad and do a ray-sphere intersection in the shader code too (which is effectively what the Python code in my other answer does).

Related

Rasterization algorithms with anti-aliasing in Python

I have vector graphics. (In my first case, it's the epigraph of a function whose formula is given. So it is a shape whose outline is given by a parametric curve.)
I want to rasterize this image with anti-aliasing. So I want raster graphics, i.e. a numpy array. I want to obtain this array in a low-level way, avoiding libraries that are meant for object-oriented interactive GUI visualizations with plot axes, etc. I just want an array. The only problem with doing something like Y,X=np.ogrid(...) and then picture = Y>f(X) is that that's not anti-aliased. (Note that blurring that binary picture is worse than a good dedicated anti-aliasing algorithm.) How can I rasterize with anti-aliasing in Python without any overkill GUI-centered libraries?
If the curve is given by an implicit equation F(x,y)=0, evaluate the value of the function at the four corners of every pixel. If the signs are the same, the pixel is wholly outside or inside. If the signs vary, the area inside the polygon formed by the corners and the points along the edges where the function vanishes (find these by a mere linear interpolation) tells you the mixture of background and foreground colors (alpha blending coefficient).
Tracing the polygon isn't that difficult: traverse the four edges of the square and keep the positive vertices and the zero points in the order you meet them. You will get anything from a triangle to a hexagon. The area is then obtained by the shoelace formula.
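As an illustration, here is a minimal, deliberately un-vectorized sketch of that corner-sampling idea for an implicit region F(x, y) <= 0 (the function name and the example disc are only illustrative):

import numpy as np

def coverage(F, x0, y0, x1, y1, nx, ny):
    # Anti-aliased rasterization of the region F(x, y) <= 0.
    # For each pixel, F is evaluated at the four corners; where the signs
    # differ, the covered area is the polygon formed by the inside corners
    # and the linearly interpolated zero crossings (shoelace formula).
    xs = np.linspace(x0, x1, nx + 1)
    ys = np.linspace(y0, y1, ny + 1)
    img = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            corners = [(xs[i], ys[j]), (xs[i+1], ys[j]),
                       (xs[i+1], ys[j+1]), (xs[i], ys[j+1])]
            vals = [F(x, y) for x, y in corners]
            poly = []
            for k in range(4):
                (xa, ya), va = corners[k], vals[k]
                (xb, yb), vb = corners[(k+1) % 4], vals[(k+1) % 4]
                if va <= 0.0:
                    poly.append((xa, ya))          # inside corner
                if (va < 0.0) != (vb < 0.0):       # edge crosses the curve
                    t = va / (va - vb)             # linear interpolation
                    poly.append((xa + t*(xb - xa), ya + t*(yb - ya)))
            if len(poly) >= 3:
                area = 0.0                         # shoelace formula
                for k in range(len(poly)):
                    xa, ya = poly[k]
                    xb, yb = poly[(k+1) % len(poly)]
                    area += xa*yb - xb*ya
                img[j, i] = abs(area) / (2.0 * (xs[1]-xs[0]) * (ys[1]-ys[0]))
    return img

# example: anti-aliased disc of radius 0.8 on a 64x64 grid
img = coverage(lambda x, y: x*x + y*y - 0.64, -1.0, -1.0, 1.0, 1.0, 64, 64)

A pixel entirely inside keeps all four corners and gets coverage 1; a pixel entirely outside gets 0; boundary pixels get the fractional area, which is exactly the alpha-blending coefficient described above.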
The case of a parametric function is a little harder. You need to find the intersections of the curve with the grid lines, and perform the area estimation in all cells that are traversed. For this, draw the curve as a polyline (this is called flattening), and slice the polyline with horizontals, then verticals.
Manim might be able to rasterize epigraphs and parametric curves well and fast. Its community edition seems actively maintained.
Edits/comments with details are welcome.

Intersection between 2d image point and 3d mesh

Given: Mesh, Source Camera - I have intrinsic and extrinsic parameters, Image coordinate 2d
Output: 3D point, which is the intersection of a ray from camera center, through the 2d point on the image plane and the mesh. (I'm trying to find the 3d point on the mesh)
This is the process:
From Multiple View Geometry in Computer Vision book:
I have constructed the equation (6.14).
I'm not sure how to continue and get the 3d point that lies on the mesh (I also need the point that is closest to the camera).
I thought that it can be done in the following way:
Iterate over all the vertices and find the distance between each vertex and the line; the vertices whose distance is zero (or close to zero) lie on the line. Then, to find the closest one, I guess I take the magnitude of the vector between the camera center and each of those vertices, and the smallest one means that point is the closest?
Quick update: This repo does seem to work with the rays: github.com/szabolcsdombi/python-mesh-raycast
I guess the bug now lies in getting the point D right..
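For context, a minimal sketch of building such a ray from the intrinsics K and extrinsics R, t (assuming the usual pinhole convention x = K[R|t]X from the book; the helper name is illustrative):

import numpy as np

def pixel_to_ray(K, R, t, u, v):
    # Back-project pixel (u, v) to a ray in world coordinates.
    # The camera centre C = -R^T t is the ray origin; the direction is
    # R^T K^-1 (u, v, 1), normalized.
    C = -R.T @ t
    d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    return C, d / np.linalg.norm(d)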
As Grillteller pointed out in the comments, this is a ray intersection problem with the 3D mesh. As far as I know, there is no quick way to determine the intersection for an arbitrary mesh without extra data structures. In your problem context you should use ray tracing, as also pointed out by Grillteller; however, this has serious performance issues, although it gives a lot of shading possibilities.
To find the intersection of a ray and a mesh, the Ray Tracing algorithm typically uses different acceleration structures. Often such structures are a partition of space by trees:
KD-tree for Ray Tracing https://graphics.stanford.edu/papers/gpu_kdtree/kdtree.pdf
BSP-tree for Ray Tracing https://www.sci.utah.edu/publications/ize08/BSP_RT08.pdf
Octree for Ray Tracing https://www.researchgate.net/publication/3410767_Octree-R_An_Adaptive_Octree_for_Efficient_Ray_Tracing
This presentation explains some of these and other approaches very well.
P.S.: If you only need a simple visualization, it would be better to reverse the problem: for each mesh element, perform rasterisation.
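To make the baseline concrete, here is a hedged sketch of the brute-force version those structures accelerate: a Möller-Trumbore test against every triangle, keeping the closest positive hit (function names are illustrative; in practice a KD-tree/BVH replaces the linear scan):

import numpy as np

def ray_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore test; returns the distance t along the ray
    # orig + t*direction to the hit point, or None on a miss.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                   # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None        # reject hits behind the camera

def closest_hit(orig, direction, triangles):
    # Brute force over all triangles; keep the smallest positive t,
    # which is the point closest to the camera.
    best = None
    for v0, v1, v2 in triangles:
        t = ray_triangle(orig, direction, v0, v1, v2)
        if t is not None and (best is None or t < best):
            best = t
    return best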
I found another implementation called trimesh using python.
You need to read the installation guide, and then you can load your meshes via:
import numpy as np
import trimesh
# attach to logger so trimesh messages will be printed to console
trimesh.util.attach_to_log()
mesh = trimesh.load('models/CesiumMilkTruck.glb', force='mesh')
I found the relevant lines to import a camera into the scene as trimesh.scene.Camera.
Then you can use the function cameras_to_rays(camera) (line 417) to "return one ray per pixel, as set in camera.resolution".
So now you have the rays for every pixel and the mesh, and can create a RayMeshIntersector as shown in ray_triangle.py. Then you can use intersects_location (line 75) to get the Cartesian coordinates of the points where the respective rays hit the mesh.
I found an example for your purpose here:
"""
raytrace.py
----------------
A very simple example of using scene cameras to generate
rays for image reasons.
Install `pyembree` for a speedup (600k+ rays per second)
"""
from __future__ import division
import PIL.Image
import trimesh
import numpy as np
if __name__ == '__main__':
# test on a simple mesh
mesh = trimesh.load('../models/featuretype.STL')
# scene will have automatically generated camera and lights
scene = mesh.scene()
# any of the automatically generated values can be overridden
# set resolution, in pixels
scene.camera.resolution = [640, 480]
# set field of view, in degrees
# make it relative to resolution so pixels per degree is same
scene.camera.fov = 60 * (scene.camera.resolution /
scene.camera.resolution.max())
# convert the camera to rays with one ray per pixel
origins, vectors, pixels = scene.camera_rays()
# do the actual ray- mesh queries
points, index_ray, index_tri = mesh.ray.intersects_location(
origins, vectors, multiple_hits=False)
# for each hit, find the distance along its vector
depth = trimesh.util.diagonal_dot(points - origins[0],
vectors[index_ray])
# find pixel locations of actual hits
pixel_ray = pixels[index_ray]
# create a numpy array we can turn into an image
# doing it with uint8 creates an `L` mode greyscale image
a = np.zeros(scene.camera.resolution, dtype=np.uint8)
# scale depth against range (0.0 - 1.0)
depth_float = ((depth - depth.min()) / depth.ptp())
# convert depth into 0 - 255 uint8
depth_int = (depth_float * 255).round().astype(np.uint8)
# assign depth to correct pixel locations
a[pixel_ray[:, 0], pixel_ray[:, 1]] = depth_int
# create a PIL image from the depth queries
img = PIL.Image.fromarray(a)
# show the resulting image
img.show()
# create a raster render of the same scene using OpenGL
# rendered = PIL.Image.open(trimesh.util.wrap_as_stream(scene.save_image()))
The problem in the question is to find the point on the 3D mesh, closest to the camera, that is visible at a specific 2D point of the screen; this is part of the ray tracing technique. The ray in question is uniquely defined by the camera location (the ray's origin) and the location of the pixel that the ray penetrates. Knowing both allows one to specify the ray and find its intersection (if any) with the triangular surface.
It is a rather computationally expensive task, especially for high-resolution screens (millions of pixels) and detailed meshes (millions of triangles), so a number of highly optimized software libraries have been developed for it, for example:
Nvidia OptiX uses the GPU for fast ray-surface intersection queries. One can find a wrapper library for Python.
Intel Embree does the same on x86 processors. Python wrappers: python-embree and pyembree. The latter is a dependency of trimesh for fast queries.
And there are libraries with a Python interface, not only from hardware vendors, that can quickly find ray-mesh collisions, e.g. MeshLib.

Method to determine polygon surface rotation from top-down camera

I have a webcam looking down on a surface which rotates about a single-axis. I'd like to be able to measure the rotation angle of the surface.
The camera position and the rotation axis of the surface are both fixed. The surface is a distinct solid color right now, but I do have the option to draw features on the surface if it would help.
Here's an animation of the surface moving through its full range, showing the different apparent shapes:
My approach thus far:
Record a series of "calibration" images, where the surface is at a known angle in each image
Threshold each image to isolate the surface.
Find the four corners with cv2.approxPolyDP(). I iterate through various epsilon values until I find one that yields exactly 4 points.
Order the points consistently (top-left, top-right, bottom-right, bottom-left)
Compute the angles between each pair of points with atan2.
Use the angles to fit a sklearn linear_model.LinearRegression()
This approach is getting me predictions within about 10% of actual with only 3 training images (covering full positive, full negative, and middle position). I'm pretty new to both opencv and sklearn; is there anything I should consider doing differently to improve the accuracy of my predictions? (Probably increasing the number of training images is a big one??)
I did experiment with cv2.moments directly as my model features, and then some values derived from the moments, but these did not perform as well as the angles. I also tried using a RidgeCV model, but it seemed to perform about the same as the linear model.
If I understand correctly, you want to estimate the rotation of the polygon with respect to the camera. If you know the dimensions of the object in 3D, you can use solvePnP to estimate the pose of the object, from which you can get its rotation.
Steps:
Calibrate your webcam and get the intrinsic matrix and distortion matrix.
Get the 3D measurements of the object corners and find the corresponding points in 2d. Let me assume a rectangular planar object and the corners in 3d will be (0,0,0), (0, 100, 0), (100, 100, 0), (100, 0, 0).
Use solvePnP to get the rotation and translation of the object
The rotation will be the rotation of your object about its axis. Here you can find an example of estimating the pose of a head; you can modify it to suit your application.
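A hedged sketch of those steps (the camera matrix, distortion and corner coordinates below are placeholder values, and the angle extraction assumes the rotation axis is the object's x axis):

import cv2
import numpy as np

# 3D corners of the planar rectangle in object coordinates
object_pts = np.array([[0, 0, 0], [0, 100, 0],
                       [100, 100, 0], [100, 0, 0]], dtype=np.float64)

# corresponding image corners from approxPolyDP, same order (placeholder values)
image_pts = np.array([[321, 184], [315, 352],
                      [517, 360], [525, 190]], dtype=np.float64)

# camera matrix and distortion from cv2.calibrateCamera (placeholder values)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)                        # 3x3 rotation matrix
# rotation about the x axis (one possible Euler decomposition)
angle = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
print(angle)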
Your first step is good -- everything after that becomes way way way more complicated than necessary (if I understand correctly).
Don't think of it as 'learning,' just think of it as a reference. Every time you're in a particular position where you DON'T know the angle, take a picture, and find the reference picture that looks most like it. Guess it's THAT angle. You're done! (There may well be indeterminacies, maybe the relationship isn't bijective, but that's where I'd start.)
You can consider this a 'nearest-neighbor classifier,' if you want, but that's just to make it sound better. Measure a simple distance (Euclidean! Why not!) between the uncertain picture, and all the reference pictures -- meaning, between the raw image vectors, nothing fancy -- and choose the angle that corresponds to the minimum distance between observed, and known.
If this isn't working -- and maybe, do this anyway -- stop throwing away so much information! You're stripping things down, then trying to re-estimate them, propagating error all over the place for no obvious (to me) benefit. So when you do a nearest neighbor with reference pictures and all that, why not just use the full picture? (Maybe other elements will change in it? That's a more complicated question, but basically, throw away as little as possible -- it should all be useful later in accurately choosing your 'nearest neighbor.')
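A minimal sketch of that nearest-neighbour lookup on raw pixels (the array names are illustrative; references is an (N, H, W) stack of calibration images and angles holds the known angle for each):

import numpy as np

def estimate_angle(query, references, angles):
    # Euclidean distance between the raw image vectors; return the
    # known angle of the closest reference image.
    refs = references.reshape(len(references), -1).astype(np.float64)
    diffs = refs - query.ravel().astype(np.float64)
    return angles[np.argmin(np.einsum('ij,ij->i', diffs, diffs))]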
Another option that is rather easy to implement, especially since you've already done part of the job, is the following (I've used it to compute the orientation of a cylindrical part from three images acquired while the tube was rotating):
Threshold each image to isolate the surface.
Find the four corners with cv2.approxPolyDP(); alternatively, you could find the four sides of your part with LineSegmentDetector (available from OpenCV 3).
Compute the angle alpha, as depicted on the image hereunder
When your part is rotating, this angle alpha will follow a sine curve. That is, you will measure alpha(theta) = A sin(theta + B) + C. Given alpha you want to know theta, but first you need to determine A, B and C.
You've acquired many "calibration" or reference images; you can use all of these to fit a sine curve and determine A, B and C.
Once this is done, you can determine theta from alpha.
Notice that you have to deal with the ambiguity of the sine, sin(pi - a) = sin(a): a single measurement of alpha gives two candidate thetas. It is not a problem if you acquire more than one image sequentially; if you have a single static image, you have to use an extra mechanism.
Hope I'm clear enough, the implementation really shouldn't be a problem given what you have done already.
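If it helps, a possible sketch of the fit and the inversion with scipy.optimize.curve_fit (the calibration arrays below are synthetic placeholders; substitute your measured thetas and alphas):

import numpy as np
from scipy.optimize import curve_fit

def model(theta, A, B, C):
    return A * np.sin(theta + B) + C

# known calibration angles and the alphas measured in those images (placeholders)
thetas = np.radians([-45.0, 0.0, 45.0])
alphas = model(thetas, 0.8, 0.1, 0.05)

(A, B, C), _ = curve_fit(model, thetas, alphas, p0=[1.0, 0.0, 0.0])

def theta_from_alpha(alpha):
    # invert alpha = A*sin(theta + B) + C; returns one of the two arcsin branches
    s = np.clip((alpha - C) / A, -1.0, 1.0)
    return np.arcsin(s) - B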

Smoothing HEALPix maps with `healpy`: Why does the output map appear "patchy"?

I have a HEALPix all-sky map from the AKARI Far Infrared Surveyor database (publicly released). I have tried to "smooth" the map using healpy, but the result looks very strange. Is there a better way? My question, however, relates to any all-sky HEALPix map (e.g. IRAS, Planck, WISE, WMAP).
My objective is to "smooth" the effective point-spread function of this AKARI map to an angular resolution of 1-degree (the original data has a PSF of about 1 arcminute). This is so that I can compare the far infrared AKARI map to lower resolution microwave maps (specifically, those of the anomalous microwave foreground).
In my example below, I'm using a degraded version of the map so that it's small enough to upload to GitHub. This means that the pixels are about 3.42 arcminutes. Normally I wouldn't degrade the pixel scale this much before PSF smoothing, but this is just an example:
#Load the packages needed for visualization, and HEALPix processing
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import healpy as hp
import healpy.projector as pro
#Loads the HEALPix .FITS file into an array
map_in = hp.read_map("akari_WideL_1_1024.fits", nest = True)
#Visualizes the all-sky map, before any processing is done.
hp.mollview(map_in, title='AKARI All-Sky Map:', nest = True, norm = 'hist')
#Smooths the map with a 1-degree FWHM Gaussian (fwhm given in radians).
map_out = hp.sphtfunc.smoothing(map_in, fwhm = 0.017, iter = 1)
#Visualizes the map after smoothing
hp.mollview(map_out, title='AKARI All-Sky Map:', nest = True, norm = 'hist')
I have tried the healpy.sphtfunc.smoothing routine (https://healpy.readthedocs.org/en/latest/generated/healpy.sphtfunc.smoothing.html#healpy.sphtfunc.smoothing). As far as I understand, smoothing converts the map into spherical harmonics, convolves with the Gaussian, and then converts it back into a spatial map.
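For reference, my understanding corresponds to doing those steps explicitly with healpy's alm functions, something like this sketch (using the map_in loaded above; I believe these routines expect a RING-ordered map):

import healpy as hp
import numpy as np

fwhm = np.radians(1.0)                 # 1-degree FWHM in radians
nside = hp.get_nside(map_in)
alm = hp.map2alm(map_in)               # map -> spherical harmonic coefficients
alm = hp.smoothalm(alm, fwhm=fwhm)     # multiply by the Gaussian beam window
map_smoothed = hp.alm2map(alm, nside)  # coefficients -> smoothed map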
I've saved the ipython notebook as well as the low-res .FITS HEALpix map in a github repository, here:
https://github.com/aaroncnb/healpy_smoothing_test
(You'll need to have the healpy package installed)
By running the code in the notebook, you can easily visualize the trouble I'm having: after smoothing the map, there are some strange "artifacts", as if the pixels had been iteratively box-averaged rather than smoothed with a circular Gaussian profile. What I expect to see is just a blurrier version of the input map.
I think I'm missing something fundamental about the conversion to spherical harmonics, before the smoothing is done.
Has anyone tried to do this kind of all-sky smoothing before, on a HEALPix map?
I believe another option is to convert the map to a standard, rectangular array, and then conduct the smoothing. However I remain curious about solving the problem without leaving the HEALPix format.
It appears smoothing works on a RINGed map only (it kind of makes sense to me, since this seems a bit easier to handle mathematically). Thus, you'll need to convert your input map to a RINGed format:
map_ring = hp.pixelfunc.reorder(map_in, inp='NEST', out='RING')
map_out = hp.sphtfunc.smoothing(map_ring, fwhm = 0.17, iter = 1)
hp.mollview(map_out, title='AKARI All-Sky Map:', nest = False, norm = 'hist')
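If you need the result back in NESTED ordering afterwards (to match the input convention), the same reorder call can be reversed:

map_out_nest = hp.pixelfunc.reorder(map_out, inp='RING', out='NEST')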
This answer comes from a bit of trial and error, because I can't find anything definitive on it in the documentation, and I haven't dived into the source code (though, given the result above, it may be easy to verify whether my assumption is correct by looking through the relevant source code).
Or, you may want to ask the healpix/healpy people directly.
(I'd suggest it is in fact a shortcoming in the documentation: the docs for healpy.sphtfunc.smoothing don't mention the required form for the input. I guess that's a healpy issue/PR for another day.)
Btw, bonus points for creating a SSCCE as a notebook file on Github! (Now if only StackOverflow also rendered notebooks.)

Gradient alpha polygon with pygame

I have a scene, and I need to be able to overlay it with translucent polygons (which can be done easily using pygame.gfxdraw.filled_polygon, which supports drawing with alpha). The catch is that the amount of translucency has to fade over a distance: for example, if the alpha value is 255 at one end of the polygon, it is 0 at the other end, blending from 255 to 0 across the polygon. I've implemented drawing shapes with gradients by drawing the gradient and then drawing a mask on top, but I've never come across a situation like this, so I have no clue what to do. I need a solution that can run in real time. Does anyone have any ideas?
It is possible that you have already thought of this and have decided against it, but it would obviously run far better in real time if the polygons were pre-drawn. Presuming there aren't very many different types of polygons, you could even resize them however you need and you would be saving CPU.
Also, assuming that all of the polygons are regular, you could just have several different equilateral triangles with gradients going in various directions on them to produce the necessary shapes.
Another thing you could do is define the polygon you are drawing, then draw an image of a gradient saved on your computer inside that shape.
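A rough sketch of that gradient-masked-by-a-polygon idea with plain pygame surfaces (the left-to-right fade, the white colour and the function name are my own choices, not a fixed recipe):

import pygame

def gradient_polygon(size, points, color=(255, 255, 255)):
    # Build an alpha gradient on a per-pixel-alpha surface, then keep it
    # only inside the polygon by multiplying with a polygon mask.
    w, h = size
    grad = pygame.Surface(size, pygame.SRCALPHA)
    for x in range(w):                                   # horizontal alpha ramp
        alpha = 255 - int(255 * x / max(w - 1, 1))
        pygame.draw.line(grad, color + (alpha,), (x, 0), (x, h - 1))
    mask = pygame.Surface(size, pygame.SRCALPHA)
    pygame.draw.polygon(mask, (255, 255, 255, 255), points)
    mask.blit(grad, (0, 0), special_flags=pygame.BLEND_RGBA_MULT)
    return mask

# usage (assuming a display surface called screen already exists):
# screen.blit(gradient_polygon((200, 150), [(10, 10), (190, 40), (100, 140)]), (300, 200))

Since the result is an ordinary surface, you can cache it and just blit it each frame, which keeps the per-frame cost low.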
The final thing you could do is build your program (or certain CPU-intensive parts of it) in C or C++. Being compiled and optimized ahead of time, these languages are significantly faster than Python and better suited to what you are trying to do.
