Fourier transform and Full Width at Half Maximum (FWHM) - Python

I'm trying to calculate the Fourier transform of three muon polarization signals, which are simply cosine functions multiplied by an exponential decay.
In the Fourier transform, then, I expect to see broadened peaks centered at the corresponding frequencies.
I have already tried to compute the Fourier transform, but I don't know if it's correct; furthermore, I'm trying to calculate the FWHM from the 2nd central moment via the scipy.stats.moment function: is that correct?
Can you tell me if the code is correct?
I've put the three signals in .npy files here, along with the code used for the Fourier analysis.
The signals are signal[0], signal[1] and signal[2], each an array of length 10: each signal[k] contains 10 polarization functions (one for each applied magnetic field), and each of those is a signal of 400 points.
The corresponding files are signal_100, signal_110, signal_111, provided here:
https://github.com/JonathanFrassineti/UNDI-examples.
The frequencies range from 0 Hz to 40 MHz.
Thank you!
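For reference, a minimal loading sketch; the file names come from the repository linked above, but the per-file array shape of (10, 400) is my assumption based on the description:

import numpy as np

# Hypothetical loading step: each file is assumed to hold a (10, 400) array,
# i.e. 10 applied fields x 400 time points.
signal = np.array([np.load('signal_%d.npy' % k) for k in (100, 110, 111)])
fields = signal.shape[1]  # 10 applied magnetic fields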
import numpy as np
from scipy.stats import moment

N = 400        # Number of signal points.
T = 1. / 800.  # Sampling spacing.

# The frequency axis must match the length of the rfft output, so it is built
# from N; rfftfreq(40000000, T) would not line up with a 400-point rfft.
xf = np.fft.rfftfreq(N, T)

# Each array needs its own allocation: a chained assignment such as
# yf1 = FWHM1 = sigma1 = ... = np.zeros(...) would bind every name to the
# same underlying array.
yf1 = np.zeros((fields, N // 2 + 1), dtype=complex)
yf2 = np.zeros_like(yf1)
yf3 = np.zeros_like(yf1)
FWHM1, FWHM2, FWHM3 = (np.zeros(fields, dtype=complex) for _ in range(3))
sigma1, sigma2, sigma3 = (np.zeros(fields) for _ in range(3))
delta1, delta2, delta3 = (np.zeros(fields) for _ in range(3))
bhar1, bhar2, bhar3 = (np.zeros(fields) for _ in range(3))

# gamma_Cu, a, angtom and hbar are assumed to be defined elsewhere.
for j in range(fields):
    # Fourier transform.
    yf1[j] = np.fft.rfft(signal[0][j])
    yf2[j] = np.fft.rfft(signal[1][j])
    yf3[j] = np.fft.rfft(signal[2][j])
    # Second central moment of each (complex-valued) spectrum.
    FWHM1[j] = moment(yf1[j], moment=2)
    FWHM2[j] = moment(yf2[j], moment=2)
    FWHM3[j] = moment(yf3[j], moment=2)
    # Width from the Gaussian relation FWHM = 2.355 * sigma
    # (sigma1 previously read FWHM3, an apparent copy-paste typo).
    sigma1[j] = np.sqrt(np.abs(FWHM1[j])) / 2.355
    sigma2[j] = np.sqrt(np.abs(FWHM2[j])) / 2.355
    sigma3[j] = np.sqrt(np.abs(FWHM3[j])) / 2.355
    delta1[j] = sigma1[j] / gamma_Cu
    delta2[j] = sigma2[j] / gamma_Cu
    delta3[j] = sigma3[j] / gamma_Cu
    bhar1[j] = (((a * angtom)**3) / (1e-7 * gamma_Cu * hbar)) * delta1[j]
    bhar2[j] = (((a * angtom)**3) / (1e-7 * gamma_Cu * hbar)) * delta2[j]
    bhar3[j] = (((a * angtom)**3) / (1e-7 * gamma_Cu * hbar)) * delta3[j]
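As a cross-check on the moment-based width, here is a minimal sketch that measures the FWHM directly from the magnitude spectrum by locating the half-maximum crossings; it assumes a single dominant peak and is my illustration, not part of the original analysis:

import numpy as np

def fwhm_from_spectrum(xf, yf):
    # Estimate the FWHM of the dominant peak in a magnitude spectrum.
    mag = np.abs(yf)
    half = mag.max() / 2.0
    above = np.where(mag >= half)[0]  # assumes one contiguous peak region
    return xf[above[-1]] - xf[above[0]]

# e.g. width = fwhm_from_spectrum(xf, yf1[j]) for each field j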

I'm currently working on a Python project with the same goal. I have a set of magnetic field data B(x, y, z); I think the ideal approach is to organize your data as periodic samples of an event and deduce fe (the sampling rate).

import numpy as np
from numpy.fft import fft, fftfreq

fe = 1.0  # sampling rate, assumed here

def f(A, t):
    return A * (np.cos(2 * np.pi * fe * t) - np.sin(2 * np.pi * fe * t))

B = [50, 50, 10, 3]  # each entry is |B| sampled once per second
res = [f(a, t) for t, a in enumerate(B)]
fourier_transform = fft(res)
frequency = fftfreq(len(B))  # an fftfreq is also provided by scipy

Please consider starring or contributing to the RFSignalToolkit project on GitHub.

Related

How to compute Dimitrov spectral fatigue index in Python?

I have an existing Python script that calculates the mean power frequency. Additionally, I would like to calculate Dimitrov's spectral fatigue index. Its formula differs slightly from the mean power frequency: instead of the ratio of the spectral moments of order 1 and 0, it uses the moments of order -1 and 5. I thought I could simply swap in the moments of interest, but I get only inf values in the renamed FInsm5 function.
[mean power frequency][1]
[Dimitrov's spectral fatigue index][2]
This is the working mean frequency function:
import numpy as np
from scipy.signal import periodogram

def get_mean_freq(emg_sig, sfreq, epoch_duration=0.5):
    '''
    Parameters
    ----------
    emg_sig : array
        pre-filtered EMG data.
    sfreq : int
        EMG sampling frequency, in Hz.
    epoch_duration : float
        epoch (time window) duration, in seconds.

    Returns
    -------
    mean_freq: array
        mean frequency at each epoch
    time_points: array
        time point at the center of each evaluated epoch
    samples: array
        sample numbers at the center of each evaluated epoch

    Method according to:
    https://stackoverflow.com/questions/37922928/difference-in-mean-frequency-in-python-and-matlab
    '''
    ons = range(0, len(emg_sig), int(epoch_duration * sfreq))
    mean_freq = np.empty((len(ons),))
    samples = np.empty((len(ons),))
    time_points = np.empty((len(ons),))
    for i, on in enumerate(ons):
        off = ons[i + 1] - 1 if i + 1 < len(ons) else len(emg_sig)
        processing_window = emg_sig[on:off]
        mid_point = (on + off) / 2
        samples[i] = mid_point
        time_points[i] = mid_point / sfreq
        # Power spectrum (PSD) of the processing window
        f, Pxx_den = periodogram(np.array(processing_window), fs=float(sfreq))
        Pxx_den = np.reshape(Pxx_den, (1, -1))
        # Bin width; Pxx_den is 2D after the reshape, so its last axis is 1
        # (indexing shape[2] here would raise an IndexError)
        width = np.tile(f[2] - f[0], (1, Pxx_den.shape[1]))
        f = np.reshape(f, (1, -1))
        P = Pxx_den * width
        pwr = np.sum(P)
        mean_freq[i] = np.dot(P, f.T) / pwr
    return mean_freq, time_points, samples
For the fatigue index, I changed the last line to:
mean_freq[i] = np.dot(P, f.T**-1) / np.dot(P, f.T**5)
[1]: https://i.stack.imgur.com/He32L.png
[2]: https://i.stack.imgur.com/o6gGD.png
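A likely source of the inf values, offered as a guess: the first periodogram bin is at f = 0, and f**-1 is infinite there, so the zero-frequency bin must be excluded before forming the moment ratio. A minimal sketch (my own helper, not the original script):

import numpy as np
from scipy.signal import periodogram

def fatigue_index_FInsm5(emg_window, sfreq):
    # Dimitrov's FInsm5: spectral moment of order -1 over moment of order 5.
    f, Pxx = periodogram(np.asarray(emg_window), fs=float(sfreq))
    f, Pxx = f[1:], Pxx[1:]  # drop the f = 0 bin, where f**-1 -> inf
    return np.sum(Pxx * f**-1) / np.sum(Pxx * f**5)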

Predicting a 2D profile knowing the 1D profiles

For a project I have to predict the 2D profile of a certain function C(x, y) by sampling many "rows", i.e. C(x, 0), where each row depends on a parameter alpha drawn from a uniform distribution on an interval decided a priori.
To set things up:
x = linspace(0,1,50)
y = linspace(0,1,50)
Once I have defined the function I want to predict through its 1D profiles, I want to use the least-squares method to find a numerical solution. First, I define a matrix M:
from numpy import empty, linspace, newaxis, pi, r_, random, linalg

def matrix_of_profiles(x, y):  # x is an array, y is float64
    M = empty((0, 50))
    for i in range(0, 5000):  # generate the 1D profiles of C(x, y)
        random_alpha = random.uniform(1, 3 + 1e-12)  # pick a different distribution?
        riga = C(x, y, random_alpha)  # C is defined elsewhere
        M = r_[M, [riga]]
    return M
Then, the polynomial (which I found through a Taylor expansion):

def predicting_poly(x, shift):
    y = 1 + 4*x + (1 - shift)/(2*pi)
    return y
My final goal is to obtain a 50 x 50 matrix which should resemble the 2D profile I want to reconstruct. But now: if I run:
from matplotlib.pyplot import plot

M = matrix_of_profiles(x, 0)
t = linspace(0, 1, 5000)
value_poly = predicting_poly(t, 0)
value_poly = value_poly[:, newaxis]
col_1 = linalg.lstsq(M, value_poly, rcond=None)
plot(x, col_1[0])
then col_1 behaves as I wished (a sinusoid). Whereas if I go for:
def predicted_profile(x):
    t = linspace(0, 1, 5000)
    prediction = empty((50, 50))
    M = matrix_of_profiles(x, 0)
    for j in range(0, 50):
        shift = -1 + j/50
        value_poly = predicting_poly(t, shift)
        value_poly = value_poly[:, newaxis]
        predicted_value = linalg.lstsq(M, value_poly, rcond=None)
        predicted_value = predicted_value[0].reshape(50,)
        prediction[:, j] = predicted_value[0]
    return prediction
each column of the new matrix prediction should behave similarly to what I previously defined as col_1, but it does not: it is now just a flat line, and I do not understand why. Did I mess up the last function?
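One likely culprit, offered as a reading of the code rather than a tested fix: after predicted_value = predicted_value[0].reshape(50,), the name already holds the 50-element solution vector, so prediction[:, j] = predicted_value[0] assigns only its first element, a scalar that NumPy broadcasts across the whole column, which would produce exactly the flat line described. The fix would be:

# assign the whole solution vector, not just its first element
prediction[:, j] = predicted_value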

Creating a 2D Gaussian random field from a given 2D variance

I've been trying to create a 2D map of blobs of matter (a Gaussian random field) using a variance I have calculated. This variance is a 2D array. I have tried using numpy.random.normal, since it allows a 2D input for the spread, but it doesn't really create a map with the trend I expect from the input parameters. One important input constant, lambda_c, should manifest itself as the physical size (diameter) of the blobs. However, when I change lambda_c, the size of the blobs barely changes, if at all. For example, if I set lambda_c = 40 parsecs, the map should have blobs that are 40 parsecs in diameter. An MWE to produce the map using my variance:
import numpy as np
import random
import matplotlib.pyplot as plt
from matplotlib.pyplot import show, plot
import scipy.integrate as integrate
from scipy.interpolate import RectBivariateSpline
n = 300
c = 3e8
G = 6.67e-11
M_sun = 1.989e30
pc = 3.086e16 # parsec
Dds = 1097.07889283e6*pc
Ds = 1726.62069147e6*pc
Dd = 1259e6*pc
FOV_arcsec_original = 5.
FOV_arcmin = FOV_arcsec_original/60.
pix2rad = ((FOV_arcmin/60.)/float(n))*np.pi/180.
rad2pix = 1./pix2rad
x_pix = np.linspace(-FOV_arcsec_original/2/pix2rad/180.*np.pi/3600.,FOV_arcsec_original/2/pix2rad/180.*np.pi/3600.,n)
y_pix = np.linspace(-FOV_arcsec_original/2/pix2rad/180.*np.pi/3600.,FOV_arcsec_original/2/pix2rad/180.*np.pi/3600.,n)
X_pix,Y_pix = np.meshgrid(x_pix,y_pix)
conc = 10.
M = 1e13*M_sun
r_s = 18*1e3*pc
lambda_c = 40*pc ### The important parameter that doesn't seem to manifest itself in the map when changed
rho_s = M/((4*np.pi*r_s**3)*(np.log(1+conc) - (conc/(1+conc))))
sigma_crit = (c**2*Ds)/(4*np.pi*G*Dd*Dds)
k_s = rho_s*r_s/sigma_crit
theta_s = r_s/Dd
Renorm = (4*G/c**2)*(Dds/(Dd*Ds))
#### Here I just interpolate and zoom into my field of view to get better resolutions
A = np.sqrt(X_pix**2 + Y_pix**2)*pix2rad/theta_s
A_1 = A[100:200,0:100]
n_x = n_y = 100
FOV_arcsec_x = FOV_arcsec_original*(100./300)
FOV_arcmin_x = FOV_arcsec_x/60.
pix2rad_x = ((FOV_arcmin_x/60.)/float(n_x))*np.pi/180.
rad2pix_x = 1./pix2rad_x
FOV_arcsec_y = FOV_arcsec_original*(100./300)
FOV_arcmin_y = FOV_arcsec_y/60.
pix2rad_y = ((FOV_arcmin_y/60.)/float(n_y))*np.pi/180.
rad2pix_y = 1./pix2rad_y
x1 = np.linspace(-FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,n_x)
y1 = np.linspace(-FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,n_y)
X1,Y1 = np.meshgrid(x1,y1)
n_x_2 = 500
n_y_2 = 500
x2 = np.linspace(-FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,n_x_2)
y2 = np.linspace(-FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,n_y_2)
X2,Y2 = np.meshgrid(x2,y2)
interp_spline = RectBivariateSpline(y1,x1,A_1)
A_2 = interp_spline(y2,x2)
A_3 = A_2[50:450,0:400]
n_x_3 = n_y_3 = 400
FOV_arcsec_x = FOV_arcsec_original*(100./300)*400./500.
FOV_arcmin_x = FOV_arcsec_x/60.
pix2rad_x = ((FOV_arcmin_x/60.)/float(n_x_3))*np.pi/180.
rad2pix_x = 1./pix2rad_x
FOV_arcsec_y = FOV_arcsec_original*(100./300)*400./500.
FOV_arcmin_y = FOV_arcsec_y/60.
pix2rad_y = ((FOV_arcmin_y/60.)/float(n_y_3))*np.pi/180.
rad2pix_y = 1./pix2rad_y
x3 = np.linspace(-FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,n_x_3)
y3 = np.linspace(-FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,n_y_3)
X3,Y3 = np.meshgrid(x3,y3)
n_x_4 = 1000
n_y_4 = 1000
x4 = np.linspace(-FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,FOV_arcsec_x/2/pix2rad_x/180.*np.pi/3600.,n_x_4)
y4 = np.linspace(-FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,FOV_arcsec_y/2/pix2rad_y/180.*np.pi/3600.,n_y_4)
X4,Y4 = np.meshgrid(x4,y4)
interp_spline = RectBivariateSpline(y3,x3,A_3)
A_4 = interp_spline(y4,x4)
############### Function to calculate the variance
variance = np.zeros((len(A_4), len(A_4)))

def variance_fluctuations(x):
    for i in range(len(x)):
        for j in range(len(x)):
            if x[j][i] < 1.:
                variance[j][i] = (k_s**2)*(lambda_c/r_s)*((np.pi/x[j][i]) - (1./(x[j][i]**2 - 1)**3.)*(((6.*x[j][i]**4. - 17.*x[j][i]**2. + 26)/3.) + (((2.*x[j][i]**6. - 7.*x[j][i]**4. + 8.*x[j][i]**2. - 8)*np.arccosh(1./x[j][i]))/(np.sqrt(1 - x[j][i]**2.)))))
            elif x[j][i] > 1.:
                variance[j][i] = (k_s**2)*(lambda_c/r_s)*((np.pi/x[j][i]) - (1./(x[j][i]**2 - 1)**3.)*(((6.*x[j][i]**4. - 17.*x[j][i]**2. + 26)/3.) + (((2.*x[j][i]**6. - 7.*x[j][i]**4. + 8.*x[j][i]**2. - 8)*np.arccos(1./x[j][i]))/(np.sqrt(x[j][i]**2. - 1)))))

variance_fluctuations(A_4)
#### Creating the map
mean = 0
# Note: np.random.normal's second argument is the standard deviation (scale),
# not the variance.
delta_kappa = np.random.normal(mean, variance, A_4.shape)
xfinal = np.linspace(-FOV_arcsec_x*np.pi/180./3600.*Dd/pc/2,FOV_arcsec_x*np.pi/180./3600.*Dd/pc/2,1000)
yfinal = np.linspace(-FOV_arcsec_x*np.pi/180./3600.*Dd/pc/2,FOV_arcsec_x*np.pi/180./3600.*Dd/pc/2,1000)
Xfinal, Yfinal = np.meshgrid(xfinal,yfinal)
plt.contourf(Xfinal,Yfinal,delta_kappa,100)
plt.show()
The map looks like this, with the density of blobs increasing towards the right. However, the size of the blobs doesn't change, and the map looks virtually the same whether I use lambda_c = 40*pc or lambda_c = 400*pc.
I'm wondering whether the np.random.normal function is really doing what I expect it to do. The pixel scale of the map and the way samples are drawn seem to have no link to the size of the blobs. Maybe there is a better way to create the map using the variance; I would appreciate any insight.
I expect the map to look something like this, with the blob sizes changing based on the input parameters of my variance:
This is quite a well visited problem in (surprise surprise) astronomy and cosmology.
You could use lenstools: https://lenstools.readthedocs.io/en/latest/examples/gaussian_random_field.html
You could also try here:
https://andrewwalker.github.io/statefultransitions/post/gaussian-fields
Not to mention:
https://github.com/bsciolla/gaussian-random-fields
I am not reproducing code here because all credit goes to the above authors; however, they all came straight out of a Google search :/
Easiest of all is probably the Python module FyeldGenerator, apparently designed for this exact purpose:
https://github.com/cphyc/FyeldGenerator
So (adapted from github example):
pip install FyeldGenerator
from FyeldGenerator import generate_field
from matplotlib import use
use('Agg')
import matplotlib.pyplot as plt
import numpy as np

plt.figure()

# Helper that generates a power-law power spectrum
def Pkgen(n):
    def Pk(k):
        return np.power(k, -n)
    return Pk

# Draw samples from a normal distribution
def distrib(shape):
    a = np.random.normal(loc=0, scale=1, size=shape)
    b = np.random.normal(loc=0, scale=1, size=shape)
    return a + 1j * b

shape = (512, 512)
field = generate_field(distrib, Pkgen(2), shape)
plt.imshow(field, cmap='jet')
plt.savefig('field.png', dpi=400)
plt.close()
This gives:
Looks pretty straightforward to me :)
PS: the FoV implied a telescope observation of the Gaussian random field :)
A completely different and much quicker way may be simply to blur the delta_kappa array with a Gaussian filter. Try adjusting the sigma parameter to alter the blob size.

from scipy.ndimage import gaussian_filter

dk_gf = gaussian_filter(delta_kappa, sigma=20)
Xfinal, Yfinal = np.meshgrid(xfinal, yfinal)
plt.contourf(Xfinal, Yfinal, dk_gf, 100, cmap='jet')
plt.show()

This is the image with sigma=20.
This is the image with sigma=2.5.
ThunderFlash, try this code to draw the map:
# function to produce blobs:
from scipy.stats import multivariate_normal

def blob(positions, mean=(0, 0), var=1):
    cov = [[var, 0], [0, var]]
    return multivariate_normal(mean, cov).pdf(positions)

# Now prepare for blob generation.
# Note that I use a less dense grid to pick the blob centers (regulated by
# `step`); this makes the blobs more pronounced and saves calculation time.
# Use this part instead of your code section below the comment
# "#### Creating the map".
delta_kappa = np.random.normal(0, variance, A_4.shape)  # same
step = 10
dk2 = delta_kappa[::step, ::step]  # taking every 10th element
x2, y2 = xfinal[::step], yfinal[::step]
field = np.dstack((Xfinal, Yfinal))
print(field.shape, dk2.shape, x2.shape, y2.shape)
# >> (1000, 1000, 2), (100, 100), (100,), (100,)

result = np.zeros(field.shape[:2])
for x in range(len(x2)):
    for y in range(len(y2)):
        res2 = blob(field, mean=(x2[x], y2[y]), var=10000)*dk2[x, y]
        result += res2
# The loop above took over 20 minutes on a Ryzen 2700X; it could presumably
# be accelerated by vectorization.

plt.contourf(Xfinal, Yfinal, result, 100)
plt.show()
You may want to play with the var parameter in blob() to smooth the image, and with step to make it more compressed.
Here is the image that I got using your code (somehow the axes are flipped and the denser areas are at the top):
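On the vectorization remark in the code above: since the result is a sum of identical Gaussians weighted by dk2, it is equivalent (up to an overall normalization constant) to a single convolution. A sketch of that idea; note that gaussian_filter's sigma is in pixels, so the coordinate-space var=10000 has to be converted through the grid spacing first:

import numpy as np
from scipy.ndimage import gaussian_filter

sparse = np.zeros_like(delta_kappa)                   # (1000, 1000)
sparse[::step, ::step] = delta_kappa[::step, ::step]  # blob centres and weights
pixel_size = xfinal[1] - xfinal[0]                    # coordinate units per pixel
sigma_pix = np.sqrt(10000) / pixel_size               # var -> sigma, in pixels
result = gaussian_filter(sparse, sigma=sigma_pix)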

Splitting integrated probability density into two spatial regions

I have some probability density function:
T = 10000
tmin = 0
tmax = 10**20
t = np.linspace(tmin, tmax, T)
time = np.asarray(t)  # this line may be redundant; linspace already returns an array
# probdensity_func, x and initial_state are defined elsewhere;
# timedep_PD needs preallocation, one spatial profile per time step.
timedep_PD = np.empty((T, len(x)))
for j in range(T):
    timedep_PD[j] = probdensity_func(x, time[j], initial_state)
I want to integrate it over two distinct regions of x. I tried the following to split the timedep_PD array into two spatial regions and then proceeded to integrate:
step = abs(xmin - xmax) / T
l1 = int(np.floor((abs(ab - xmin) * T) / abs(xmin - xmax)))
l2 = int(np.floor((abs(bd - ab) * T) / abs(xmin - xmax)))

# For spatial region 1
Pd1 = np.empty([T, l1])
R1 = x[:l1]
for i in range(T):
    Pd1[i] = Pd[i][:l1]

# For spatial region 2
Pd2 = np.empty([T, l2])
R2 = x[l1:l1+l2]
for i in range(T):
    Pd2[i] = Pd[i][l1:l1+l2]

# Integrating over each spatial region
P = np.empty([2, T])
for i in range(T):
    P[0][i] = np.trapz(Pd1[i], R1)
    P[1][i] = np.trapz(Pd2[i], R2)
Is there an easier or clearer way to split a probability density function into two spatial regions and then integrate within each spatial region at each time step?
The loops can be eliminated by using vectorized operations instead. It's not clear whether Pd is a 2D NumPy array; if it's something else (e.g., a list of lists), it should be converted to a 2D NumPy array with np.array(...). After that you can do this:
Pd1 = Pd[:, :l1]
Pd2 = Pd[:, l1:l1+l2]
No need to loop over the time index; the slicing happens for all times at once (having : in place of an index means "all valid indices").
Similarly, np.trapz can integrate all time slices at once:
P1 = np.trapz(Pd1, R1, axis=1)
P2 = np.trapz(Pd2, R2, axis=1)
P1 and P2 are now each a time series of integrals. The axis parameter determines along which axis Pd1 gets integrated; here it's the second axis, i.e., space.
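Putting it together, a self-contained sketch with made-up sizes (the names mirror the question; the uniform density is just a stand-in):

import numpy as np

T, nx = 100, 1000
x = np.linspace(0.0, 1.0, nx)
Pd = np.ones((T, nx))  # stand-in for the real time-by-space density
l1, l2 = 400, 600      # sizes of the two spatial regions

P1 = np.trapz(Pd[:, :l1], x[:l1], axis=1)            # region 1, all times at once
P2 = np.trapz(Pd[:, l1:l1+l2], x[l1:l1+l2], axis=1)  # region 2
print(P1.shape, P2.shape)  # (100,) (100,)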

How to remove rings from convolve healpix map?

I'm applying convolution techniques to convolve two datasets: a HEALPix map with nside = 256 and a primary beam of shape (256, 256), in order to measure the total intensity from the convolved HEALPix map. My problem is that after convolving my map with the primary beam I get rings in the convolved map. I've tried normalizing it with either a Lanczos or a Gaussian kernel to take care of the rings, but all these approaches have failed.
In my code below, I used the query function in scipy to search for the nearest pixels in my HEALPix map within a given radius, and I take the sum of the product of the corresponding pixels in the primary beam using map coordinates. The final image I get has rings in it. Can anyone please help me solve this problem? Thanks in advance.
import copy
import pickle
import numpy as np
import healpy as hp
from scipy import spatial, ndimage

def query_npix(nside, npix, radius):
    print('searching for nearest pixels:......')
    t1, t2 = hp.pix2ang(nside, np.arange(npix))
    tree = spatial.cKDTree(list(zip(t1, t2)))
    dist, ipix_indx = tree.query(list(zip(t1, t2)), k=150, distance_upper_bound=radius)
    r1, r2 = hp.pix2ang(nside, ipix_indx)
    ra = r1.T - t1
    dec = r2.T - t2
    print('Done searching')
    return np.array(dist), np.array(ipix_indx), np.array(ra.T), np.array(dec.T)

def fullSky_convolve(healpix_map, primary_beam_fits, ipix_indx, dist, radius, r1, r2):
    measured_map = []
    hdulist = openFitsFile(primary_beam_fits)  # openFitsFile is defined elsewhere
    beam_data = hdulist[0].data
    header = hdulist[0].header
    nside = hp.get_nside(healpix_map[0, ...])
    npix = hp.get_map_size(healpix_map[0, ...])  # total number of pixels must be 12 * nside**2
    crpix1, crval1, cdelt1 = [header.get(x) for x in ("CRPIX1", "CRVAL1", "CDELT1")]
    crpix2, crval2, cdelt2 = [header.get(x) for x in ("CRPIX2", "CRVAL2", "CDELT2")]
    # beam centres in pixel coordinates
    xc = crpix1 - 1 + (np.rad2deg(r1.ravel()) - crval1)/(256*cdelt1)
    yc = crpix2 - 1 + (np.rad2deg(r2.ravel()) - crval2)/(256*cdelt2)
    for j in range(4):
        print('started Stokes: %d' % j)
        for iter in range(0 + j, 16, 4):
            outpt = np.zeros(shape=npix, dtype=np.float64)
            # mask the beam
            bm_data = beam_data[iter]
            shape = bm_data.shape
            rad = np.linspace(-shape[0]/2, shape[-1]/2, shape[0])
            rad2d = np.sqrt(rad[np.newaxis, :]**2 + rad[:, np.newaxis]**2)
            mask = rad2d <= radius/abs(cdelt2)
            masked_beam = bm_data*mask
            s1 = ndimage.map_coordinates(masked_beam, [xc, yc], mode='constant')
            bm_map = s1.reshape(dist.shape[0], dist.shape[-1])
            for itr in range(npix):
                g_xy = (1.0/(np.sqrt(2*np.pi)*np.std(dist[itr])))*np.exp(-(dist[itr])**2/(2*np.var(dist[itr])))
                weighted_healpix_map = ndimage.convolve(healpix_map[j, ...][ipix_indx[itr]], g_xy/g_xy.sum(), mode='reflect')
                #outpt[itr] = np.sum(weighted_healpix_map*(bm_map[itr]/bm_map[itr].sum()))
                outpt[itr] = np.sum(weighted_healpix_map*(bm_map[itr]))
            # filename made consistent with the loading loop below (it was 'pap%d.save')
            with open('stripp%d.save' % iter, 'wb') as alpha:
                pickle.dump(outpt, alpha, protocol=pickle.HIGHEST_PROTOCOL)
            print('Just dumped stripp%d.save:-------' % iter)
    print('Loading dumped files:-------')
    loaded_objects = []
    for itr4 in range(16):
        with open('stripp%d.save' % itr4, 'rb') as alpha:
            loaded_objects.append(pickle.load(alpha))
    measured_map.append(copy.deepcopy(loaded_objects))
    return measured_map
Remember that HEALPix maps can be in either "RING" or "NESTED" ordering. It sounds like you may need to add the keyword nest=True to your healpy functions such as hp.pix2ang; if your input maps are in NESTED ordering, this keyword is needed.
For example: I recently tried the healpy.smoothing() function and found that my resulting image had rings (perhaps like the ones you describe) when viewing the output map with healpy.mollview(). The rings disappeared and the image was presented as I expected after running mollview with the nest=True keyword. Check which ordering scheme your input files use.
Reference:
http://healpy.readthedocs.org/en/latest/tutorial.html#creating-and-manipulating-maps
HEALPix supports two different ordering schemes, RING or NESTED. By default, healpy maps are in RING ordering. In order to work with NESTED ordering, all map-related functions support the nest keyword, for example: hp.mollview(m, nest=True, title="Mollview image NESTED")
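A minimal sketch of both options, assuming healpy is installed and m is a map in NESTED ordering:

import numpy as np
import healpy as hp

nside = 256
m = np.arange(hp.nside2npix(nside), dtype=float)  # stand-in NESTED map

# Option 1: tell each function about the ordering
theta, phi = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)), nest=True)
hp.mollview(m, nest=True, title='Mollview image NESTED')

# Option 2: convert the map to RING ordering once, then use the defaults
m_ring = hp.reorder(m, n2r=True)
hp.mollview(m_ring, title='Mollview image RING')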
