When working with healpy, I am able to plot a Healpix map in Mollview using
import healpy
map = healpy.read_map('filename.fits')
healpy.visufunc.mollview(map)
or as in the tutorial
>>> import numpy as np
>>> import healpy as hp
>>> NSIDE = 32
>>> m = np.arange(hp.nside2npix(NSIDE))
>>> hp.mollview(m, title="Mollview image RING")
which outputs a full-sky Mollweide plot of the map.
Is there a way to display only certain regions of the map? For example, only the upper hemisphere, or only the left side?
What I have in mind is viewing only small patches of the sky to see small point sources, or something like the "half-sky" projection from LSST
You can use a mask, which is a boolean map of the same size, where pixels set to 1 (True) are masked and pixels set to 0 (False) are shown:
http://healpy.readthedocs.org/en/latest/tutorial.html#masked-map-partial-maps
Example:
import numpy as np
import healpy as hp
NSIDE = 32
# Wrap the map as a masked array
m = hp.ma(np.arange(hp.nside2npix(NSIDE), dtype=np.double))
# Build the mask: hide every pixel in the southern hemisphere (theta > pi/2)
mask = np.zeros(hp.nside2npix(NSIDE), dtype=bool)
pixel_theta, pixel_phi = hp.pix2ang(NSIDE, np.arange(hp.nside2npix(NSIDE)))
mask[pixel_theta > np.pi / 2] = 1
m.mask = mask
hp.mollview(m)
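For the small-patch use case mentioned in the question, healpy also provides hp.gnomview, which zooms into a gnomonic projection around a chosen point instead of masking the full-sky plot. A minimal sketch (the rotation, size and resolution values below are only illustrative):
# Zoom into a patch a few degrees across, centred on (lon, lat) = (30, 45) degrees;
# reso is the size of a projected pixel in arcmin.
hp.gnomview(m, rot=(30, 45), xsize=200, reso=1.0, title="Small patch")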
I have an array of values in the range 1500 to 4500.
I managed to convert the data to an RGBA image using a matplotlib colormap. The code is as follows:
import matplotlib.pyplot as plt
import numpy as np
norm = plt.Normalize(vmin=1500, vmax=4500)
jet = plt.cm.jet
# generate a 100x100 array with values in the range 1500-4500
original = np.random.randint(1500, 4500, (100, 100))
# array in shape (100,100)
# convert the array to rgba image
converted = jet(norm(original))
# image in shape (100,100,4)
How can I get the original array back from the converted image?
Some rounding will take place because of the limited number of colors in the colormap, so a perfect reversal is not possible.
But you can get close by simply inverting the colormap and the subsequent normalization.
Starting with some sample data:
import matplotlib as mpl
import numpy as np
rng = np.random.default_rng(seed=0)
data = rng.integers(1500,4500, (3,3))
# array([[4051, 3410, 3033],
# [2309, 2423, 1622],
# [1725, 1549, 2025]], dtype=int64)
Which can be converted to RGBA:
norm = mpl.colors.Normalize(vmin=1500, vmax=4500)
cmap = mpl.colormaps["jet"].copy()
data_rgb = cmap(norm(data))
Converting the colormap to a lookup table (I'll drop the alpha channel for simplicity since this colormap doesn't use it):
lut = np.zeros((256,) * 3, dtype=np.uint8)
for i in range(cmap.N):
    # Store the colormap index at the position given by its RGB bytes
    r, g, b, a = cmap(i)
    lut[int(r * 255), int(g * 255), int(b * 255)] = i
The lookup table can then be indexed with the RGB expressed as bytes:
data_rgb_byte = (data_rgb * 255).astype(np.uint16)
data_inv_norm = lut[
    data_rgb_byte[:, :, 0],
    data_rgb_byte[:, :, 1],
    data_rgb_byte[:, :, 2],
] / 255
data_recovered = norm.inverse(data_inv_norm).data
data_recovered
# array([[4052.94117647, 3405.88235294, 3029.41176471],
# [2311.76470588, 2417.64705882, 1617.64705882],
# [1723.52941176, 1547.05882353, 2017.64705882]])
I guess the loss in accuracy relates to the range of the initial normalization (4500 - 1500 = 3000) compared to the resolution of the colormap (N = 256), so the quantization step is about 3000/256 ≈ 11.7.
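To quantify the round-trip error for this small example, you can compare the arrays directly (a quick check, not part of the original code; the worst case should be on the order of that ~11.7 quantization step or smaller):
err = np.abs(data_recovered - data)
print(err.max())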
Hi all, I have managed to reconstruct a shape using the carve-from-silhouette voxel carving function in Open3D.
How do I count the total number of voxels contained in the grid that makes up the carved 3D model?
You can find the total number of voxels in a VoxelGrid with len(voxel_grid.get_voxels()). Here is a complete example:
>>> import open3d as o3d
>>> import numpy as np
>>>
>>> N = 2000
>>> armadillo_data = o3d.data.ArmadilloMesh()
>>> pcd = o3d.io.read_triangle_mesh(armadillo_data.path).sample_points_poisson_disk(N)
>>> pcd.scale(1 / np.max(pcd.get_max_bound() - pcd.get_min_bound()), center=pcd.get_center())
PointCloud with 2000 points.
>>> voxel_grid = o3d.geometry.VoxelGrid.create_from_point_cloud(pcd, voxel_size=0.05)
>>>
>>> len(voxel_grid.get_voxels())
737
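If the count is meant to feed a size estimate (an assumption on my part), each voxel is a cube of side voxel_size, so a rough occupied-volume figure is just the product:
# With the 737 voxels and voxel_size=0.05 from above, this is roughly 0.092
volume = len(voxel_grid.get_voxels()) * 0.05 ** 3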
I am trying to perform PCA on an image and then output an image with pixels coloured based on the cluster they fall into in the PCA. I am doing unsupervised PCA. The ultimate goal is seen at this link: Forward PC rotation
I am currently using the pandas library (if people have more elegant solutions I am all ears) as well as OpenCV for image manipulation.
I am trying to load the b, g, r bands as my columns, with the index being the pixel, giving a table with a row for every pixel in the image (each row with a column per colour band).
When populating the data I ultimately have 3 million+ pixels in my image, and while it does populate, it takes about 5 seconds per pixel, so I can't even tell if I am doing it correctly. Is there a better way? Also, if people understand how to use PCA with images, I would greatly appreciate the help.
Code:
import pandas as pd
import numpy as np
import random as rd
from sklearn.decomposition import PCA
from sklearn import preprocessing
import matplotlib.pyplot as plt
import cv2
#read in image
img = cv2.imread('/Volumes/EXTERNAL/Stitched-Photos-for-Chris/p7_0015_20161005-949am-75m-pass-1.jpg.png',1)
row,col = img.shape[:2]
print(row , col)
#get a unique pixel ID for each pixel
pixel = ['pixel-' + str(i) for i in range(0,row*col)]
bBand = ['bBand']
gBand = ['gBand']
rBand = ['rBand']
data = pd.DataFrame(columns=[bBand,gBand,rBand],index = pixel)
#populate data for each band
b,g,r = cv2.split(img)
#each index value
indexCount = row*col
for index in range(indexCount):
    i = int(index/row)
    j = index%row
    data.loc[pixel,'bBand'] = b[i,j]
    data.loc[pixel,'gBand'] = g[i,j]
    data.loc[pixel,'rBand'] = r[i,j]
print(data.head())
Yes, that for loop you have there can take a long time.
Use np.ravel (for a 1D view), ndarray.flatten (for a 1D copy), or ndarray.flat (for a 1D iterator) to convert the 2D band arrays into columns.
Also, creating a string index with x, y encoded in it can be expensive. I would either use the row number as the index and compute x, y as row_num // col and row_num % col, or use a MultiIndex of (x, y), depending on how frequently x, y are used in your calculations. A vectorized version is sketched below.
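A minimal sketch of the vectorized approach, assuming the goal is a per-pixel table of the three bands followed by PCA (the file path and the use of three components are placeholders, not from the question):
import cv2
import pandas as pd
from sklearn.decomposition import PCA

img = cv2.imread('image.png', 1)          # placeholder path
b, g, r = cv2.split(img)

# ravel() gives flat 1D views of the 2D band arrays, so no per-pixel loop is needed
data = pd.DataFrame({'bBand': b.ravel(),
                     'gBand': g.ravel(),
                     'rBand': r.ravel()})

pca = PCA(n_components=3)
scores = pca.fit_transform(data.values)   # one row of PC scores per pixel

# Reshape the first principal component back onto the image grid for display
pc1_image = scores[:, 0].reshape(img.shape[:2])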
I have a HEALPix plot, made with healpy, as in Healpy: From Data to Healpix map (with fewer pixels, for instance taking nside=2; see the code below).
import healpy as hp
import numpy as np
import matplotlib.pyplot as plt
# Set the number of sources and the coordinates for the input
nsources = int(1.e4)
nside = 2
npix = hp.nside2npix(nside)
# Coordinates and the density field f
thetas = np.random.random(nsources) * np.pi
phis = np.random.random(nsources) * np.pi * 2.
fs = np.random.randn(nsources)
# Go from HEALPix coordinates to indices
indices = hp.ang2pix(nside, thetas, phis)
# Initialize the map and accumulate the field values into their pixels
hpxmap = np.zeros(npix, dtype=float)
np.add.at(hpxmap, indices, fs)
# Inspect the map
hp.mollview(hpxmap)
example plot
How can I write text with a value at the center of each HEALPix pixel on the plot?
For example, how can I write an identifier for each 'pixel', using an array like range(len(hpxmap))?
Thanks a lot in advance for your help!
I've constructed an image from some FITS files, and I want to save the resultant masked image as another FITS file. Here's my code:
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
#from astropy.nddata import CCDData
from ccdproc import CCDData
hdulist1 = fits.open('wise_neowise_w1-MJpersr.fits')
hdulist2 = fits.open('wise_neowise_w2-MJpersr.fits')
data1_raw = hdulist1[0].data
data2_raw = hdulist2[0].data
# Hide negative values in order to take logs
# Where {condition}==True, return data_raw, else return np.nan
data1 = np.where(data1_raw >= 0, data1_raw, np.nan)
data2 = np.where(data2_raw >= 0, data2_raw, np.nan)
# Calculation and image subtraction
w1mag = -2.5 * (np.log10(data1) - 9.0)
w2mag = -2.5 * (np.log10(data2) - 9.0)
color = w1mag - w2mag
## Find upper and lower 5th %ile of pixels
mask_percent = 5
masked_value_lower = np.nanpercentile(color, mask_percent)
masked_value_upper = np.nanpercentile(color, (100 - mask_percent))
## Mask out the upper and lower 5% of pixels
## Need to hide values outside the range [lower, upper]
color_masked = np.ma.masked_outside(color, masked_value_lower, masked_value_upper)
color_masked = np.ma.masked_invalid(color_masked)
plt.imshow(color)
plt.title('color')
plt.savefig('color.png')
plt.imshow(color_masked)
plt.title('color_masked')
plt.savefig('color_masked.png')
fits.writeto('color.fits',
             color,
             overwrite = True)
ccd = CCDData(color_masked, unit = 'adu')
ccd.write('color_masked.fits', overwrite = True)
hdulist1.close()
hdulist2.close()
When I use matplotlib.pyplot to imshow the images color and color_masked, they look as I expect.
However, my two output files are identical: color_masked.fits == color.fits. I think somehow I'm not quite understanding the masking process properly. Can anyone see where I've gone wrong?
astropy.io.fits only handles normal arrays and that means it just ignores/discards the mask of your MaskedArray.
Depending on your use-case you have different options:
Saving the file so other FITS programs recognize the mask
I actually don't think that's possible. But some programs like DS9 can handle NaNs, so you could just set the masked values to NaN for the purpose of displaying them:
data_naned = np.where(color_masked.mask, np.nan, color_masked)
fits.writeto(filename, data_naned, overwrite=True)
They do still show up as "bright white spots" but they don't affect the color-scale.
If you want to take this a step further you could replace the masked pixels using a convolution filter before writing them to a file. Not sure if there's one in astropy that only replaces masked pixels though.
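One candidate for that (my addition, not part of the original answer) is astropy.convolution.interpolate_replace_nans, which fills only the NaN pixels by interpolating with a kernel. A sketch, assuming the data_naned array from above and a hypothetical output filename:
from astropy.convolution import Gaussian2DKernel, interpolate_replace_nans

# Replace only the NaN (masked) pixels by interpolating over their neighbours
kernel = Gaussian2DKernel(x_stddev=1)
data_filled = interpolate_replace_nans(data_naned, kernel)
fits.writeto('color_filled.fits', data_filled, overwrite=True)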
Saving the mask as extension so you can read them back
You could use astropy.nddata.CCDData (available since astropy 2.0) to save it as FITS file with mask:
from astropy.nddata import CCDData
ccd = CCDData(color_masked, unit='adu')
ccd.write('color_masked.fits', overwrite=True)
Then the mask will be saved in an extension called 'MASK' and it can be read using CCDData as well:
ccd2 = CCDData.read('color_masked.fits')
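If you want to confirm what was written (a quick check, not part of the original answer), the extra extension is visible with plain astropy.io.fits:
from astropy.io import fits

# The file should contain the primary image HDU plus an image extension named 'MASK'
with fits.open('color_masked.fits') as hdul:
    hdul.info()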
The CCDData behaves like a masked array in normal NumPy operations but you could also convert it to a masked-array by hand:
import numpy as np
marr = np.asanyarray(ccd2)
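The mask itself is also available as a boolean array on the object you read back, so you can check that it round-trips (assuming the write above):
# Count how many pixels came back masked from the 'MASK' extension
print(ccd2.mask.sum(), "masked pixels")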