I am new to Python and to FITS image files, so I am running into issues. I have two FITS files: the first contains pixels/counts, and the second (a calibration file) contains pixels/wavelength. I need to convert pixels/counts into wavelength/counts, and then write the wavelength/counts out as a new FITS file for further analysis. So far I have managed to load the required data into arrays, as shown in the code below.
import numpy as np
from astropy.io import fits
# read the images
image_file = ("run_1.fits")
image_calibration = ("cali_1.fits")
hdr = fits.getheader(image_file)
hdr_c = fits.getheader(image_calibration)
# print headers
sp = fits.open(image_file)
print('\n\nHeader of the spectrum :\n\n', sp[0].header, '\n\n')
sp_c = fits.open(image_calibration)
print('\n\nHeader of the spectrum :\n\n', sp_c[0].header, '\n\n')
# generation of arrays with the wavelengths and counts
count = np.array(sp[0].data)
wave = np.array(sp_c[0].data)
I do not understand how to save two separate arrays into one FITS file. I tried an alternative approach by creating a list, as shown in this code:
file_list = fits.open(image_file)
calibration_list = fits.open(image_calibration)
image_data = file_list[0].data
calibration_data = calibration_list[0].data
# make a list to hold images
img_list = []
img_list.append(image_data)
img_list.append(calibration_data)
# list to numpy array
img_array = np.array(img_list)
# save the array as fits - image cube
fits.writeto('mycube.fits', img_array)
However, this only saves the data as a cube, which is not correct because I just need the wavelength and counts data. Also, I lost all the headers in the newly created FITS file. To say I am lost is an understatement! Could someone point me in the right direction please? Thank you.
I am still working on this problem. I have now managed (I think) to produce a FITS file containing the wavelength and counts using this website:
https://www.mubdirahman.com/assets/lecture-3---numerical-manipulation-ii.pdf
This is my code:
# Making a Primary HDU (required):
primaryhdu = fits.PrimaryHDU(flux)  # or, if you already have a header: primaryhdu = fits.PrimaryHDU(arr1, header=head1)
# If you have additional extensions:
secondhdu = fits.ImageHDU(wave)
# Making a new HDU List:
hdulist1 = fits.HDUList([primaryhdu, secondhdu])
# Writing the file:
hdulist1.writeto("filename.fits", overwrite=True)
image = ("filename.fits")
hdr = fits.open(image)
image_data = hdr[0].data
wave_data = hdr[1].data
I am sure this is not the correct format for wavelength/counts. I need both wavelength and counts to be contained in hdr[0].data
If you are working with spectral data, it might be useful to look into specutils which is designed for common tasks associated with reading/writing/manipulating spectra.
It's common to store spectral data in FITS files using tables, rather than images. For example you can create a table containing wavelength, flux, and counts columns, and include the associated units in the column metadata.
The docs include an example on how to create a generic "FITS table" writer with wavelength and flux columns. You could start from this example and modify it to suit your exact needs (which can vary quite a bit from case to case, which is probably why a "generic" FITS writer is not built-in).
You might also be able to use the fits-wcs1d format.
If you prefer not to use specutils, that example still might be useful as it demonstrates how to create an Astropy Table from your data and output it to a well-formatted FITS file.
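For example, a minimal sketch of that approach (assuming wave and count are the 1-D arrays read in your first snippet; the column names, units and output file name are just illustrative):

import astropy.units as u
from astropy.table import Table

spec_table = Table()
spec_table['wavelength'] = wave * u.AA     # the unit is stored in the column metadata
spec_table['counts'] = count * u.count

# Writing a Table to a .fits file produces a binary table HDU;
# the units end up in the TUNITn header keywords.
spec_table.write('spectrum_table.fits', overwrite=True)

# Reading it back gives both columns from a single HDU:
back = Table.read('spectrum_table.fits')
print(back['wavelength'][:5], back['counts'][:5])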
I'm editing a .fits file in Python, but I want the header to stay exactly the same. This is the code:
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
# read in the fits file
im = fits.getdata('myfile.fits')
header = fits.getheader('myfile.fits')
ID = 1234
newim = np.copy(im)
newim[newim == ID] = 0
newim[newim == 0] = -99
newim[newim > -99] = 0
newim[newim == -99] = 1
plt.imshow(newim,cmap='gray', origin='lower')
plt.colorbar()
hdu = fits.PrimaryHDU(newim)
hdu.writeto('mynewfile.fits')
All this is fine and does exactly what I want it to do, except that it does not preserve the header when it saves the new file. Is there any way to fix this so that the original header is not lost?
First of all don't do this:
im = fits.getdata('myfile.fits')
header = fits.getheader('myfile.fits')
As explained in the warning here, this kind of usage is discouraged (newer versions of the library have a caching mechanism that makes it less inefficient than it used to be, but it's still a problem). The reason is that the former returns just the data array from the file and the latter returns just the header. At that point there is no longer any association between them; each is just a plain Numpy ndarray or a plain Header, and their association with a specific file is not tracked.
You can return the full HDUList data structure which represents the HDUs in a file, and for each HDU there's an HDU object associating headers with their arrays.
In your example you can just open the file, modify the data array in-place, and then use the .writeto method on it to write it to a new file, or if you open it with mode='update' you can modify the existing file in-place. E.g.
hdul = fits.open('old.fits')
# modify the data in the primary HDU; this is only an in-memory operation
# and does not change the data on disk
hdul[0].data += 1
hdul.writeto('new.fits')
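If you open the file with mode='update' instead, the same kind of in-place change is flushed back to the original file when it is closed; a minimal sketch:

with fits.open('old.fits', mode='update') as hdul:
    hdul[0].data += 1   # written back to 'old.fits' when the context manager exits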
There's also no clear reason for doing this in your code
newim = np.copy(im)
Unless you have a specific reason to keep an unmodified copy of the original array in memory, you can just directly modify the original array in-place.
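Applied to the example in the question (file name and ID value taken from there), a minimal sketch that keeps the original header could look like this:

from astropy.io import fits

ID = 1234
with fits.open('myfile.fits') as hdul:
    im = hdul[0].data                 # still attached to its header
    im[im == ID] = 0                  # same remapping as in the question
    im[im == 0] = -99
    im[im > -99] = 0
    im[im == -99] = 1
    hdul.writeto('mynewfile.fits', overwrite=True)  # data and the original header are written together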
I have a large input file which consists of data frames: each frame holds a data series (complex64) plus an identifying header. The file is larger than my available memory. The headers repeat, but are randomly ordered, so for example the input file could look like:
<FRAME header={0}, data={**first** 500 numbers...}>,
<FRAME header={18}, data={first 500 numbers...}>,
<FRAME header={4}, data={first 500 numbers...}>,
<FRAME header={0}, data={**next** 500 numbers...}>
...
I want to order the data into a new file that is a numpy array of shape (len(headers), len(data_series)). It has to build the output file as it reads the frames, because I can't fit it all in memory.
I've looked at numpy.savetxt and the python csv package but for disk size, precision, and speed reasons I would prefer for the output file to be binary. numpy.save is good except that I can't figure out how to make it append to an unknown array size.
I have to work in Python 2.7 because of some dependencies needed to read these frames. What I have done so far is write a function that appends all of the frames with a matching header to a single binary file:
input_data = Funky_Data_Reader_that_doesnt_matter(input_filename)

with open("singleFrameHeader", 'ab') as f:
    current_data = input_data.readFrame()  # This loads the next frame in the file
    if current_data.header == 0:
        float_arr = np.array(current_data.data).view(float)
        float_arr.tofile(f)
This works great, but I need to extend it to two dimensions. I'm starting to look at h5py as an option, but was hoping there is a simpler solution.
What would be great is something like
input_data = Funky_Data_Reader_that_doesnt_matter(input_filename)

with open("bigMatrix", 'ab') as f:
    current_data = input_data.readFrame()  # This loads the next frame in the file
    index = current_data.header
    float_arr = np.array(current_data.data).view(float)
    float_arr.tofile(f, index)
Any help is appreciated. I thought this would be a more common use-case to read and write to a 2D binary file in append mode.
You have two problems: one is that a file contains sequential data, and the other is that numpy binary files don't store shape information.
A simple way to start solving this would be to carry through with your initial idea of converting the data into files by header, then combining all the binary files into one large product (if you still feel the need to do so).
You could maintain a map of the headers you've found so far to their output files, data size, etc. This will allow you to combine the data more intelligently, if for example, there are missing chunks or headers or something.
import sys
from contextlib import ExitStack
from os import remove
from shutil import copyfileobj
from tempfile import NamedTemporaryFile

import numpy as np


class Header:
    __slots__ = ('id', 'count', 'file', 'name')

    def __init__(self, id):
        self.id = id
        self.count = 0
        self.file = NamedTemporaryFile(delete=False)
        self.name = self.file.name

    def write_frame(self, frame):
        data = np.array(frame.data).view(float)
        self.count += data.size
        data.tofile(self.file)


input_data = Funky_Data_Reader_that_doesnt_matter(input_filename)
file_map = {}

with ExitStack() as stack:
    while True:
        frame = input_data.next_frame()
        if frame is None:
            break  # recast this loop as necessary
        if frame.header not in file_map:
            header = Header(frame.header)
            stack.enter_context(header.file)
            file_map[frame.header] = header
        else:
            header = file_map[frame.header]
        header.write_frame(frame)

num_headers = max(file_map) + 1                        # header ids are assumed to start at 0
max_count = max(h.count for h in file_map.values())    # longest header dataset, in elements

with open('singleFrameHeader', 'wb') as output:
    output.write(num_headers.to_bytes(8, sys.byteorder))
    output.write(max_count.to_bytes(8, sys.byteorder))
    for i in range(num_headers):
        if i in file_map:
            h = file_map[i]
            with open(h.name, 'rb') as input:
                copyfileobj(input, output)
            remove(h.name)
            if h.count < max_count:
                # pad short header datasets with NaNs
                np.full(max_count - h.count, np.nan, dtype=np.float64).tofile(output)
        else:
            # missing headers become all-NaN rows
            np.full(max_count, np.nan, dtype=np.float64).tofile(output)
The first 16 bytes will be the int64 number of headers and number of elements per header, respectively. Keep in mind that the file is in native byte order, whatever that may be, and is therefore not portable.
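For what it's worth, a file written in that layout could be read back with something like this sketch (not part of the original answer; it assumes the two int64 header fields and float64 data described above, in native byte order):

import sys
import numpy as np

with open('singleFrameHeader', 'rb') as f:
    num_headers = int.from_bytes(f.read(8), sys.byteorder)
    count_per_header = int.from_bytes(f.read(8), sys.byteorder)
    data = np.fromfile(f, dtype=np.float64)

matrix = data.reshape(num_headers, count_per_header)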
Alternative
If (and only if) you know the exact size of a header dataset ahead of time, you can do this in one pass, with no temporary files. It also helps if the headers are contiguous. Otherwise, missing swaths will be zero-filled. You will still need to maintain a dictionary of your current position within a header, but you will no longer have to keep a separate file pointer around for each one. All-in-all, this is a much better alternative than the original solution, if your use-case allows it:
header_size = 500 * N  # size in bytes of one header's dataset; you must know this up front

input_data = Funky_Data_Reader_that_doesnt_matter(input_filename)
header_map = {}

with open('singleFrameHeader', 'wb') as output:
    # the number of headers and elements per header must also be known up front in this variant
    output.write(num_headers.to_bytes(8, sys.byteorder))
    output.write(max_count.to_bytes(8, sys.byteorder))
    while True:
        frame = input_data.next_frame()
        if frame is None:
            break
        if frame.header not in header_map:
            header_map[frame.header] = 0
        data = np.array(frame.data).view(float)
        output.seek(16 + frame.header * header_size + header_map[frame.header])
        data.tofile(output)
        header_map[frame.header] += data.size * data.dtype.itemsize
I asked a question regarding this sort of out-of-order write pattern as a consequence of this answer: What happens when you seek past the end of a file opened for writing?
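For reference, the behaviour that question is about is easy to demonstrate: seeking past the current end of a file opened for writing and then writing leaves a zero-filled gap (a small sketch, the file name is arbitrary):

with open('sparse_demo.bin', 'wb') as f:
    f.seek(32)            # jump past the (empty) end of the file
    f.write(b'\x01')      # the skipped bytes read back as zeros

with open('sparse_demo.bin', 'rb') as f:
    print(f.read())       # b'\x00' * 32 + b'\x01'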
I am relatively new to Python. As part of my astronomy project work, I have to deal with binary files (which of course are also new to me). I was given a binary file and a Python script that reads data from it, and was then asked by my professor to understand how the code works on the binary file. I spent a couple of days trying to figure it out, but nothing helped. Can anyone here help me with the code?
import numpy as np

# Read the binary opacity file
f = open(file, "rb")
# read file dimension sizes
a = np.fromfile(f, dtype=np.int32, count=16)
NX, NY, NZ = a[1], a[4], a[7]
# read the time and time step
time, time_step = np.fromfile(f, dtype=np.float64, count=2)
# number of iterations
nite = np.fromfile(f, dtype=np.int32, count=1)
# radius array
trash = np.fromfile(f, dtype=np.float64, count=1)
rad = np.fromfile(f, dtype=np.float64, count=a[1])
# phi array
trash = np.fromfile(f, dtype=np.float64, count=1)
phi = np.fromfile(f, dtype=np.float64, count=a[4])
# close the file
f.close()
The binary file, as far as I know, contains several parameters (e.g. radius, phi, sound speed, radiation energy) and their many values. The above code extracts the values of two parameters, radius and phi, from the binary file. Both radius and phi have more than 100 values. The program works, but I am not able to understand how it works. Any help would be appreciated.
The binary file is essentially just a long list of continuous data; you need to tell np.fromfile() both where to look and what type of data to expect.
Perhaps it's easiest to understand if you create your own file:
import numpy as np

with open('numpy_testfile', 'w+b') as f:
    ## we create a "header" line, which collects the lengths of all relevant arrays
    ## you can then use this header line to tell np.fromfile() *how long* the arrays are
    dimensions = np.array([0, 10, 0, 0, 10, 0, 3, 10], dtype=np.int32)
    dimensions.tofile(f)  ## write to file

    a = np.arange(0, 10, 1)  ## some fake data, length 10
    a.tofile(f)  ## write to file
    print(a.dtype)

    b = np.arange(30, 40, 1)  ## more fake data, length 10
    b.tofile(f)  ## write to file
    print(b.dtype)

    ## more interesting data, this time it's of type float, length 3
    c = np.array([3.14, 4.22, 55.0], dtype=np.float64)
    c.tofile(f)  ## write to file
    print(c.dtype)

    a.tofile(f)  ## just for fun, let's write "a" again

with open('numpy_testfile', 'r+b') as f:
    ### what's important to know about this step is that
    # numpy is "seeking" the file automatically, i.e. it is considering
    # the first count=8, then the next count=10, and so on
    # as "continuous data"
    dim = np.fromfile(f, dtype=np.int32, count=8)
    print(dim)  ## our header line: [ 0 10 0 0 10 0 3 10]

    a = np.fromfile(f, dtype=np.int64, count=dim[1])  ## read the dim[1]=10 numbers
    b = np.fromfile(f, dtype=np.int64, count=dim[4])  ## and the next 10

    ## now it's dim[6]=3 numbers, and the dtype is float64
    c = np.fromfile(f, dtype=np.float64, count=dim[6])

    ## read "the rest", unspecified length, let's hope it's all int64 actually!
    d = np.fromfile(f, dtype=np.int64)

    print(a)
    print(b)
    print(c)
    print(d)
Addendum: the numpy documentation is quite explicit when it comes to discouraging the use of np.tofile() and np.fromfile():
Do not rely on the combination of tofile and fromfile for data storage, as the binary files generated are not platform independent. In particular, no byte-order or data-type information is saved. Data can be stored in the platform independent .npy format using save and load instead.
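A small illustration of the recommended alternative (the array contents are arbitrary): np.save stores dtype and shape in the .npy header, so nothing has to be tracked by hand:

import numpy as np

c = np.array([3.14, 4.22, 55.0], dtype=np.float64)
np.save('numpy_testfile.npy', c)        # portable .npy file
c_back = np.load('numpy_testfile.npy')
print(c_back.dtype, c_back.shape)       # float64 (3,)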
Personal side note: if you spent a couple of days trying to understand this code, don't feel discouraged about learning Python; we all start somewhere. I'd suggest being honest with your professor about the obstacles you've hit (if this comes up in conversation), as she/he should then be able to correctly assess where you're at when it comes to programming. :-)
from astropy.io import ascii

data = ascii.read('/directory/filename')
column1data = data['nameofcolumn1']
column2data = data['nameofcolumn2']
# etc.
column1data is now an array of all the values under that header.
I use this method to import SourceExtractor .dat files, which are in ASCII format.
I believe this is a more elegant way to import data from ASCII files.
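For SExtractor catalogues specifically there is also a dedicated reader; a hedged example (the file name and the MAG_AUTO column are illustrative) that picks the column names up from the '#' header lines of the catalogue:

from astropy.io import ascii

cat = ascii.read('catalog.cat', format='sextractor')
mags = cat['MAG_AUTO']   # assumes the catalogue contains a MAG_AUTO column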
I have two files, one an esri shapefile (.shp), the other a point cloud (.las).
Using laspy and shapefile modules I've managed to find which points of the .las file fall within specific polygons of the shapefile. What I now wish to do is to add an index number that enables identification between the two datasets. So e.g. all points that fall within polygon 231 should get number 231.
The problem is that as of yet I'm unable to append anything to the list of points when writing the .las file. The piece of code that I'm trying to do it in is here:
outFile1 = laspy.file.File("laswrite2.las", mode = "w",header = inFile.header)
outFile1.points = truepoints
outFile1.points.append(indexfromshp)
outFile1.close()
The error I'm getting now is: AttributeError: 'numpy.ndarray' object has no attribute 'append'. I've tried multiple things already including np.append but I'm really at a loss here as to how to add anything to the las file.
Any help is much appreciated!
There are several ways to do this.
LAS files have a classification field; you could store the indexes in this field:
las_file = laspy.file.File("las.las", mode="rw")
las_file.classification = indexfromshp
However, if the LAS file has version <= 1.2, the classification field can only store values in the range [0, 35]; in that case you can use the 'user_data' field instead, which can hold values in the range [0, 255].
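A minimal sketch of that option, assuming indexfromshp already fits in the 0-255 range:

las_file = laspy.file.File("las.las", mode="rw")
las_file.user_data = indexfromshp
las_file.close()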
Or, if you need to store values higher than 255, or you simply want a separate field, you can define a new dimension (see laspy's docs on how to add extra dimensions).
Your code should then be close to something like this:
outFile1 = laspy.file.File("laswrite2.las", mode="w", header=inFile.header)

# copy fields
for dimension in inFile.point_format:
    dat = inFile.reader.get_dimension(dimension.name)
    outFile1.writer.set_dimension(dimension.name, dat)

outFile1.define_new_dimension(
    name="index_from_shape",
    data_type=7,  # uint64_t
    description="Index of corresponding polygon from shape file"
)
outFile1.index_from_shape = indexfromshp
outFile1.close()
Is it possible, with Python + netCDF4, to open an existing NetCDF file and change one of the dimensions from fixed size to an unlimited dimension, such that I can append data to it?
I found this question/answer, which lists several options for doing this with NCO/xarray, but I'm specifically looking for a method using the netCDF4 package.
Below is a minimal example which creates a NetCDF file with a fixed dimension (this part, of course, does not exist in reality, otherwise I could simply create the file with an unlimited dimension...), and then re-opens it in an attempt to modify the time dimension. The netCDF4.Dimension time_dim has a function/method isunlimited() to test whether the dimension is unlimited or not, but nothing like e.g. make_unlimited(), which I was hoping for.
import netCDF4 as nc4
import numpy as np
# Create dataset for testing
nc1 = nc4.Dataset('test.nc', 'w')
dim = nc1.createDimension('time', 10)
var = nc1.createVariable('time', 'f8', 'time')
var[:] = np.arange(10)
nc1.close()
# Read it back
nc2 = nc4.Dataset('test.nc', 'a')
time_dim = nc2.dimensions['time']
# ...
# (1) Make time_dim unlimited
# (2) Append some data
# ...
nc2.close()