Smoothing HEALPix maps with `healpy`: Why does the output map appear "patchy"? - python

I have a HEALPix all-sky map from the AKARI Far Infrared Surveyor database (publicly released). I have tried to "smooth" the map using healpy, but the result looks very strange. Is there a better way? My question, however, relates to any all-sky HEALPix map (e.g. IRAS, Planck, WISE, WMAP).
My objective is to "smooth" the effective point-spread function of this AKARI map to an angular resolution of 1-degree (the original data has a PSF of about 1 arcminute). This is so that I can compare the far infrared AKARI map to lower resolution microwave maps (specifically, those of the anomalous microwave foreground).
In my example below, I'm using a degraded version of the map so that it's small enough to upload to GitHub. This means the pixels are about 3.42 arcminutes. Normally I wouldn't degrade the pixel scale this much before PSF smoothing, but this is just an example:
#Load the packages needed for visualization, and HEALPix processing
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import healpy as hp
import healpy.projector as pro
#Loads the HEALPix .FITS file into an array
map_in = hp.read_map("akari_WideL_1_1024.fits", nest = True)
#Visualizes the all-sky map, before any processing is done.
hp.mollview(map_in, title='AKARI All-Sky Map:', nest = True, norm = 'hist')
#Smoothes the map with a 1-degree FWHM Gaussian (fwhm given in radians).
map_out = hp.sphtfunc.smoothing(map_in, fwhm = 0.017, iter = 1)
#Visualizes the map after smoothing
hp.mollview(map_out, title='AKARI All-Sky Map:', nest = True, norm = 'hist')
I have tried the healpy.sphtfunc.smoothing routine (https://healpy.readthedocs.org/en/latest/generated/healpy.sphtfunc.smoothing.html#healpy.sphtfunc.smoothing). As far as I understand, smoothing converts the map into spherical harmonics, convolves with the Gaussian, and then converts it back into a spatial map.
I've saved the ipython notebook as well as the low-res .FITS HEALpix map in a github repository, here:
https://github.com/aaroncnb/healpy_smoothing_test
(You'll need to have the healpy package installed)
By running the code in the notebook, you can easily visualize the trouble I'm having: after smoothing the map, there are some strange "artifacts", as if the pixels had been iteratively box-averaged rather than smoothed with a circular Gaussian profile. What I expect to see is just a blurrier version of the input map.
I think I'm missing something fundamental about the conversion to spherical harmonics, before the smoothing is done.
Has anyone tried to do this kind of all-sky smoothing before, on a HEALPix map?
I believe another option is to convert the map to a standard, rectangular array, and then conduct the smoothing. However I remain curious about solving the problem without leaving the HEALPix format.
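Something like this sketch is what I have in mind (assuming cartview can hand back the projected 2D array via return_projected_map, and using scipy for the smoothing instead; the xsize and sigma values are just illustrative):
import numpy as np
import scipy.ndimage as ndi

# Project the HEALPix map onto a rectangular (Cartesian) grid.
rect = hp.cartview(map_in, nest=True, xsize=2000, return_projected_map=True)

# 2000 pixels span 360 degrees of longitude, so convert the 1-degree FWHM
# into a sigma in pixels (FWHM = 2.355 * sigma). Note the kernel is only
# truly circular on the sky near the equator of this projection.
pix_per_deg = 2000 / 360.0
sigma_pix = 1.0 * pix_per_deg / 2.355
rect_smoothed = ndi.gaussian_filter(rect, sigma=sigma_pix)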

It appears smoothing works on a RING-ordered map only (which kind of makes sense to me, since that seems a bit easier to handle mathematically). Thus, you'll need to convert your input map to RING ordering:
map_ring = hp.pixelfunc.reorder(map_in, inp='NEST', out='RING')
map_out = hp.sphtfunc.smoothing(map_ring, fwhm = 0.017, iter = 1)
hp.mollview(map_out, title='AKARI All-Sky Map:', nest = False, norm = 'hist')
This answer is from a bit of trial and error, because I can't find anything definitive on it in the documentation, and I haven't dived into the source code (though, with the below result, it may be easy to verify whether my assumption is correct by looking through the relevant source code).
Or, you may want to ask the healpix/healpy people directly.
(I'd suggest it is in fact a shortcoming in the documentation: the docs for healpy.sphtfunc.smoothing don't mention the required form for the input. I guess that's a healpy issue/PR for another day.)
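For what it's worth, the harmonic-space round trip that smoothing performs can be spelled out explicitly (a minimal sketch, assuming the RING-ordered map_ring from above):
import numpy as np

fwhm = np.radians(1.0)                  # 1-degree FWHM, in radians
alm = hp.map2alm(map_ring, iter=1)      # map -> spherical harmonic coefficients
alm = hp.smoothalm(alm, fwhm=fwhm)      # multiply by a Gaussian beam window
map_out = hp.alm2map(alm, nside=hp.get_nside(map_ring))  # back to a RING-ordered map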
Btw, bonus points for creating an SSCCE as a notebook file on GitHub! (Now if only Stack Overflow also rendered notebooks.)

Related

Scikit-Image Questions (specifically re: `blob_log`)

I'm trying to use blob detection in scikit-image. blob_log is supposed to return an N×3 array for a 2D image, or an N×4 array for a 3D image(?). For a 2D image the columns are (x, y, sigma), and for a 3D image they are (p, x, y, sigma).
I'm attempting to load this image into my code; it looks like it has quite a few observable blobs, and it is a 2D image.
I've got a few questions:
(1) blob_log is returning an N×4 array, which means it's loading the image as 3D. When I try to print it, it looks like just a bunch of empty arrays, which I don't understand, because when I plt.show() it, it's a 2D image.
(2) If N is the number of blobs it has found, then it is giving me less than 10% of the blobs in the image. I believe this is because the image is on a white background, making it more difficult for blob_log to notice them. Is that correct?
(3) I don't understand how the for loop at the end of the blob documentation works. How is it plotting the circles over the image? I'm sorry if this is an elementary question, but it's frustrating me, because I think understanding it would help me with some of the other things I'm wondering about.
Attempts to figure out what is going on:
(1) Loading data.coins() and printing it gives me a nice array of values, which I assume is the 2D data; it still doesn't explain why the image I want to load isn't being recognized as 2D.
(2) I tried loading data.coins(), which is clearly a 2D image with circular objects, and played around with the sigma and threshold settings; I'm getting a variety of different results depending on the settings. Is there a good way to figure out which values are best, without fiddling with the settings until something works?
Due to the length of my code and my question, below are just the applicable parts, but my entire code can be found here
from skimage import data, feature, exposure, io
import matplotlib
import matplotlib.pyplot as plt
img = data.coins()
#img = io.imread('gfp_test.png') #this is the image I linked above just in my dir
print(img)
print(type(img))
A = feature.blob_log(img, max_sigma = 30, num_sigma = 10, threshold = .4)
print (A)
Thank you for your help!
(1) You have a color image, while blob_* expect a grayscale image. Use skimage.color.rgb2gray to convert your image to grayscale before using the blob finding functions. See our crash course on NumPy for images for more details.
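For example, a minimal sketch (assuming your file is the gfp_test.png from your question; the threshold value is just a starting point to tune):
from skimage import io, color, feature

img = io.imread('gfp_test.png')          # hypothetical path, as in the question
if img.ndim == 3:                        # color (possibly RGBA) image
    img = color.rgb2gray(img[..., :3])   # keep the RGB channels, convert to 2D grayscale
# rgb2gray returns floats in [0, 1], so the threshold needs to be on that scale.
blobs = feature.blob_log(img, max_sigma=30, num_sigma=10, threshold=0.1)
print(blobs.shape)                       # now (N, 3): one (row, col, sigma) triple per blob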
(2) Let's see if the above fixes your problem. I think blob finding is a local operation, so the white frame around the edges is probably not a problem.
(3) Yes, the variable naming could be clearer. The key is here: sequence = zip(blobs_list, colors, titles). If you look at what those individual variables are, they are length-3 lists with the results from the three different blob-finding methods, three different colors, and three different titles (the names of the three methods). So the outer for-loop is iterating through the methods, and the three panels of the figure. (You should look at the matplotlib documentation for subplots for more on this.)
The inner loop, then, is going through the results of a single blob-finding method and putting a circle around each result. You'll see the x/y transposed, and this is a consequence of the different coordinate conventions between our images (see the crash course linked above) and the matplotlib canvas. Then we create a circle with the appropriate radius for each blob, and add it to the matplotlib axes. See the examples linked from the Circle documentation for more information on adding patches.
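Stripped down to a single method, the inner loop amounts to something like this (a sketch using the img and blobs arrays from the snippet above):
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.imshow(img, cmap='gray')
for y, x, sigma in blobs:
    # blob_log reports (row, col, sigma); the blob radius is roughly sqrt(2) * sigma.
    circle = plt.Circle((x, y), sigma * 2 ** 0.5, color='red', fill=False, linewidth=2)
    ax.add_patch(circle)
plt.show()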
Hope this helps!

PSF (point spread function) for an image (2D)

I'm new to image analysis (with Python) and I would like to apply richardson_lucy deconvolution (from skimage) to my data (CT scans). To that end, I estimated the PSF in "number of voxels" by means of dedicated software; its value is roughly 6.73 voxels, but I don't know how to use it as a parameter in the function.
The function expects the PSF parameter as an ndarray, so I tried it this way:
from skimage import io
from pylab import array
img = io.imread ("Slice1.tif")
import skimage.restoration as rst
PSF = array (6.7)
img_dbl = rst.richardson_lucy (img, PSF, iterations=10)
It shows me this error: IndexError: too many indices for array
In CT scans, blurring between two different materials can be modelled with a Gaussian PSF. If you have more tips for deblurring (maybe better than RL), just write them.
Can anyone please help me?
I have a similar problem and am still researching it. In my case it didn't work unless I used np.uint8 as the data type. CT data should be 16-bit but only uses the first 12 bits (which are mapped to values between -1024 and 3096). So I had to rescale my image data to [0, 255] before getting anything other than black or white out of it.
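A minimal sketch of that rescaling (assuming the raw values really span -1024 to 3096):
import numpy as np

def ct_to_uint8(ct, lo=-1024, hi=3096):
    # Linearly map the CT value range onto [0, 255] and cast to uint8.
    scaled = (ct.astype(np.float64) - lo) / (hi - lo)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)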
If I understood that correctly, the sum of the PSF should always be 1. What I can guess from your question is that you assume the point spread function to be a Gaussian with a meaningful (95% of values?) spread of 6.7 pixels. In that case you would have to model the PSF as a Gaussian (that's what I came here for).
You can create one with the method described by FuzzyDuck in this post:
PSF = gkern(5,2)
This would create a 5x5 Gaussian kernel with sum 1, using the method proposed by FuzzyDuck with a sigma of 2. Note that point spread functions can be applied several times, so you have to experiment a little with the values (or use an algorithm to estimate them).
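If you'd rather not depend on that linked answer, here is a plain-NumPy equivalent that builds a normalized Gaussian kernel and feeds it to richardson_lucy (a sketch; the 5 and 2 are the same illustrative values as above, and Slice1.tif is the file from the question):
import numpy as np
from skimage import io
import skimage.restoration as rst

def gaussian_kernel(size, sigma):
    # Build a 2D Gaussian kernel normalized to sum to 1.
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return kernel / kernel.sum()

img = io.imread("Slice1.tif")
PSF = gaussian_kernel(5, 2)
img_dbl = rst.richardson_lucy(img, PSF, iterations=10)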

How to draw orthographic projection from equirectangular projection

I have this image:
I don't know exactly what kind of projection it is; I guess equirectangular or Mercator, judging by the shape. It's the texture for an attitude indicator.
I want to draw an orthographic projection, or maybe a general perspective projection (whichever looks better), of it according to a direction vector defined by two angles (heading and pitch). This direction defines a point on the sphere, and that point should be the center of the projection.
I want it to look like the pilot's point of view, so only half of the sphere should be drawn.
I use Python and have not yet chosen a graphics library; I will probably be using pygame, though.
I've found something related: http://www.pygame.org/project-Off-Center+Map+Projections-2881-.html but it uses OpenGL and I have no experience with it. I can try it if needed.
How should I do that? I could probably draw it manually by computing every pixel from the projection formulas, but I think there are library tools to do that efficiently (hardware accelerated, probably?).
For an all-Python solution (using numpy/scipy array ops, which will be faster than any explicit per-pixel looping), this:
#!/usr/bin/env python
import math
import numpy as np
import scipy
import scipy.misc
import scipy.ndimage.interpolation
import subprocess

src = scipy.misc.imread("ji80w.png")
size = 256
frames = 50

for frame in xrange(0, frames):
    # Image pixel co-ordinates
    px = np.arange(-1.0, 1.0, 2.0/size) + 1.0/size
    py = np.arange(-1.0, 1.0, 2.0/size) + 1.0/size
    hx, hy = scipy.meshgrid(px, py)

    # Compute z of sphere hit position, if pixel's ray hits
    r2 = hx*hx + hy*hy
    hit = (r2 <= 1.0)
    hz = np.where(
        hit,
        -np.sqrt(1.0 - np.where(hit, r2, 0.0)),
        np.NaN
    )

    # Some spin and tilt to make things interesting
    spin = 2.0*np.pi*(frame + 0.5)/frames
    cs = math.cos(spin)
    ss = math.sin(spin)
    ms = np.array([[cs, 0.0, ss], [0.0, 1.0, 0.0], [-ss, 0.0, cs]])
    tilt = 0.125*np.pi*math.sin(2.0*spin)
    ct = math.cos(tilt)
    st = math.sin(tilt)
    mt = np.array([[1.0, 0.0, 0.0], [0.0, ct, st], [0.0, -st, ct]])

    # Rotate the hit points
    xyz = np.dstack([hx, hy, hz])
    xyz = np.tensordot(xyz, mt, axes=([2], [1]))
    xyz = np.tensordot(xyz, ms, axes=([2], [1]))
    x = xyz[:, :, 0]
    y = xyz[:, :, 1]
    z = xyz[:, :, 2]

    # Compute map position of hit
    latitude  = np.where(hit, (0.5 + np.arcsin(y)/np.pi)*src.shape[0], 0.0)
    longitude = np.where(hit, (1.0 + np.arctan2(z, x)/np.pi)*0.5*src.shape[1], 0.0)
    latlong = np.array([latitude, longitude])

    # Resample, and zap non-hit pixels
    dst = np.zeros((size, size, 3))
    for channel in [0, 1, 2]:
        dst[:, :, channel] = np.where(
            hit,
            scipy.ndimage.interpolation.map_coordinates(
                src[:, :, channel],
                latlong,
                order=1
            ),
            0.0
        )

    # Save to f0000.png, f0001.png, ...
    scipy.misc.imsave('f{:04}.png'.format(frame), dst)

# Use imagemagick to make an animated gif
subprocess.call('convert -delay 10 f????.png anim.gif', shell=True)
will get you the resulting animation (image not reproduced here).
OpenGL is really the place to be doing this sort of pixel wrangling, though, especially if it's for anything interactive.
I glanced at the code in the "Off-Center Map Projections" stuff you linked...
As a starting point, I'd say it's pretty good, especially if you want to achieve this with any sort of efficiency in PyGame, as offloading per-pixel operations to OpenGL will be much faster than they'll ever be in Python.
Obviously, to get any further you'll need to understand OpenGL; the projection is implemented in main.py's GLSL code (the stuff in the string passed to mod_program.ShaderFragment) - the atan and asin there shouldn't be a surprise if you've read up on equirectangular projections.
However, to get what you want, you'll have to figure out how to render a sphere instead of the viewport-filling quad (rendered in main.py at glBegin(GL_QUADS);). Alternatively, stick with the screen-filling quad and do a ray-sphere intersection in the shader code too (which is effectively what the Python code in my other answer does).

OpenCV HOGDescriptor Python

I was wondering if anyone knew why there is no documentation for HOGDescriptors in the Python bindings of OpenCV.
Maybe I've just missed it, but the only code I've found is in this thread: Get HOG image features from OpenCV + Python?
If you scroll down in that thread, this code is found in there:
import cv2
hog = cv2.HOGDescriptor()
im = cv2.imread(sample)
h = hog.compute(im)
I've tested this and it works -- so the Python bindings do exist; it's just the documentation that doesn't. I was wondering if anyone knew why documentation for the Python bindings for HOG is so difficult to find / non-existent. Does anyone know of a tutorial I can read anywhere about HOG (especially via the Python bindings)? I'm new to HOG and would like to see a few examples of how OpenCV does things before I start writing my own stuff.
1. Get Inbuilt Documentation:
The following command in your Python console will show you the structure of the HOGDescriptor class:
import cv2
help(cv2.HOGDescriptor())
2. Example code:
Here is a snippet of code to initialize a cv2.HOGDescriptor with different parameters (the terms used here are standard terms that are well defined in the OpenCV documentation here):
import cv2
image = cv2.imread("test.jpg",0)
winSize = (64,64)
blockSize = (16,16)
blockStride = (8,8)
cellSize = (8,8)
nbins = 9
derivAperture = 1
winSigma = 4.
histogramNormType = 0
L2HysThreshold = 2.0000000000000001e-01
gammaCorrection = 0
nlevels = 64
hog = cv2.HOGDescriptor(winSize,blockSize,blockStride,cellSize,nbins,derivAperture,winSigma,
                        histogramNormType,L2HysThreshold,gammaCorrection,nlevels)
#compute(img[, winStride[, padding[, locations]]]) -> descriptors
winStride = (8,8)
padding = (8,8)
locations = ((10,20),)
hist = hog.compute(image,winStride,padding,locations)
3. Reasoning:
The resulting HOG descriptor will have dimension:
9 orientations x (4 corner cells that get 1 normalization + 6x4 edge cells that get 2 normalizations + 6x6 interior cells that get 4 normalizations) = 9 x 196 = 1764, since I have given only one location to hog.compute().
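You can sanity check this against the descriptor itself (assuming the hog and hist objects from the snippet above):
print(hog.getDescriptorSize())   # 1764 for the window/block/cell sizes above
print(hist.shape)                # should be (1764, 1), since only one location was passed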
4. Different way to initialize HOGDescriptor:
One more way to initialize is from an XML file which contains all the parameter values:
hog = cv2.HOGDescriptor("hog.xml")
To get such an XML file, one can do the following:
hog = cv2.HOGDescriptor()
hog.save("hog.xml")
and then edit the respective parameter values in the XML file.
I was wondering the same. Almost no documentation can be found for OpenCV's HOGDescriptor, other than the C++ source code.
Scikit-image has a good example page on extracting and illustrating HOG features. It provides an alternative way to explore HOG, and it is documented here.
However, there is one thing to point out about scikit-image's hog implementation. Its Python code for the hog function does not implement a weighted vote for histogram orientation binning; it simply assigns each gradient's full magnitude to the single bin its orientation falls into (see its hog_histogram function). This does not exactly follow Dalal and Triggs's paper.
Actually, I found that object detection based on OpenCV's implementation of HOG is more accurate than with the API from scikit-image. That makes sense to me, because the weighted vote is important. By casting weighted votes to bins, variation in the histogram is greatly reduced when a gradient's orientation falls on or around a bin boundary. Chris McCormick wrote a very insightful blog post on HOG, in which orientation binning is clearly described as follows:
For each gradient vector, its contribution to the histogram is given by the magnitude of the vector (so stronger gradients have a bigger impact on the histogram). We split the contribution between the two closest bins. So, for example, if a gradient vector has an angle of 85 degrees, then we add 1/4th of its magnitude to the bin centered at 70 degrees, and 3/4ths of its magnitude to the bin centered at 90.
I believe the intent of splitting the contribution is to minimize the problem of gradients which are right on the boundary between two bins. Otherwise, if a strong gradient was right on the edge of a bin, a slight change in the gradient angle (which nudges the gradient into the next bin) could have a strong impact on the histogram.
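In code, that split vote looks roughly like this (a toy sketch for unsigned gradients, with 9 bins of 20 degrees and bin centers at 10, 30, ..., 170 degrees to match the example in the quote; this is not OpenCV's actual implementation):
import numpy as np

def orientation_vote(angle_deg, magnitude, nbins=9, bin_width=20.0):
    # Split one gradient's magnitude between the two nearest orientation bins.
    hist = np.zeros(nbins)
    lower = int(np.floor((angle_deg - 10.0) / bin_width)) % nbins   # bin whose center is at or below the angle
    upper = (lower + 1) % nbins                                     # next bin (wraps around at 180 degrees)
    lower_center = 10.0 + lower * bin_width
    frac = ((angle_deg - lower_center) % 180.0) / bin_width         # how far the angle sits toward the upper bin
    hist[lower] += magnitude * (1.0 - frac)
    hist[upper] += magnitude * frac
    return hist

print(orientation_vote(85.0, 1.0))   # 0.25 in the 70-degree bin, 0.75 in the 90-degree bin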
So, use OpenCV to compute HOG if possible (I haven't dug into its code and don't feel like doing so, but I suppose OpenCV's HOG implementation is more appropriate). Not only did I find an improvement in detection accuracy, it also runs faster. Compared to scikit-image's hog code with its wonderful comments, OpenCV's documentation is almost nonexistent. Yet it is still feasible to get OpenCV's version working in practice: it's a matter of passing the right parameters for window size, cell size, block size, block stride, number of orientations, etc. For the other parameters I just went with the defaults.

Differences in projecting satellite images with cartopy and pyresample

I have written a Python script to project and overlay geostationary satellite images from the University of Dundee so the resulting image can be used by xplanet to render the surface of the Earth. The source code of the tool can be found at https://github.com/jmozmoz/cloudmap/tree/cartopy (this is the branch with cartopy support).
The tool supports two different Python libraries to project the geostationary images onto a flat map: pyresample and cartopy.
I have found the following differences/problems:
pyresample is much faster than cartopy (depending on the size of the output image up to a factor of 10)
The output images differ: The results using pyresample show a stronger contrast.
For examples see the debug directory at https://github.com/jmozmoz/cloudmap/tree/cartopy/debug
If the multiprocessing library is used to do the projection in parallel, the cartopy version crashes with the following error message:
Fatal Python error: PyEval_RestoreThread: NULL tstate
So why is cartopy so much slower? Is pyresample doing the work in C code? Should cartopy support multiprocessing? And how to fix the problem with the contrast?
Thank you for your help
1. pyresample is much faster than cartopy (depending on the size of the output image up to a factor of 10)
The cartopy reprojection functionality hasn't been optimized in any way, and although it is using scipy's cKDTree functionality under the hood, the algorithm itself is written in Python. I seem to remember that a quick win was to use https://pypi.python.org/pypi/kdtree, which from memory gave quite a reasonable speedup with little work; cartopy.img_transform would be the place where changes would be needed.
Cartopy's re-projection functionality is probably also paying the cost of being very general - you can provide an image in any projection, and it will put it into any other projection, dealing with discontinuities and tears without a problem. It would be really cool to hook into pyresample's functionality though (and GDAL's for that matter) to give users the opportunity to speed up the reprojection in certain cases.
2. The output images differ: The results using pyresample show a stronger contrast.
Looks like you're creating a matplotlib figure to resample the image and using mpl's savefig functionality. It is possible that this process is causing the contrast to be lost. I'd advise just using cartopy's reprojection functionality without adding an image to a figure and saving the figure (example at the end).
3. If the multiprocessing library is used to do the projection in parallel, the cartopy version crashes with the following error message:
This really surprised me as there is no C code in cartopy which is doing the reprojecting. Therefore you've either found a bug with scipy, or more likely you are hitting a problem with numpy/matplotlib (google brings up a few results with your exception and matplotlib and/or numpy, e.g. https://github.com/numpy/numpy/issues/1270).
Ok, so here is how I would do the reprojection without using matplotlib at all:
import cartopy.crs as ccrs
from cartopy.img_transform import warp_array
import numpy as np
import PIL.Image
# I've downloaded the file from https://github.com/jmozmoz/cloudmap/blob/78923d15ad906eaa6d1dcab168a6364643d3fc94/debug/2014_8_7_1800_GOES15_4_S1.jpeg
# and clipped the image.
fname = '2014_8_7_1800_GOES15_4_S1.jpeg'
img = PIL.Image.open(fname)
result_array, extent = warp_array(np.array(img),
                                  source_proj=ccrs.Geostationary(),
                                  target_proj=ccrs.PlateCarree(),
                                  target_res=(4000, 2000))
result = PIL.Image.fromarray(result_array)
result.save('reprojected.jpeg')
With the resulting image (eventually) looking something like:
There are some real possibilities for optimisation with this functionality - quite a large amount of work is done creating the kdtree in the first place (which could potentially be cached), and another large chunk of the work is computing the indices from the original image (again, this caches very well), which would essentially reduce repeat reprojections to a numpy indexing problem.
If you want to look into the performance possibilities or the contrast issue (which I'm uncertain whether my solution fixes or not) please feel free to open up an issue on the github repo and we can talk through some of the options.
Thanks for asking, and HTH!
