I'm editing a .fits file I have in Python, but I want the header to stay exactly the same. This is the code:
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt
# read in the fits file
im = fits.getdata('myfile.fits')
header = fits.getheader('myfile.fits')
ID = 1234
newim = np.copy(im)
newim[newim == ID] = 0
newim[newim == 0] = -99
newim[newim > -99] = 0
newim[newim == -99] = 1
plt.imshow(newim, cmap='gray', origin='lower')
plt.colorbar()
hdu = fits.PrimaryHDU(newim)
hdu.writeto('mynewfile.fits')
All this is fine and does exactly what I want it to do, except that it does not preserve the header when it saves the new file. Is there any way to fix this so that the original header is not lost?
First of all, don't do this:
im = fits.getdata('myfile.fits')
header = fits.getheader('myfile.fits')
As explained in the warning here, this kind of usage is discouraged (newer versions of the library have a caching mechanism that makes this less inefficient than it used to be, but it's still a problem). The former returns just the data array from the file, and the latter returns just the header. At that point there is no longer any association between them: you have a plain NumPy ndarray and a plain Header, and their connection to a specific file is not tracked.
Instead, fits.open returns the full HDUList data structure, which represents the HDUs in the file; each HDU object keeps its header and its data array together.
In your example you can just open the file, modify the data array in place, and then use the .writeto method on the HDUList to write it to a new file; or, if you open the file with mode='update', you can modify the existing file in place. E.g.
hdul = fits.open('old.fits')
# modify the data in the primary HDU; this is just an in-memory operation and will not change the data on disk
hdul[0].data += 1
hdul.writeto('new.fits')
There's also no clear reason for doing this in your code:
newim = np.copy(im)
Unless you have a specific reason to keep an unmodified copy of the original array in memory, you can just directly modify the original array in-place.
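Applied to your case, a minimal sketch (same file names and ID as in your snippet; overwrite=True is added only so the sketch can be rerun) might look like this:
from astropy.io import fits

ID = 1234

# open the file so the data and the header stay associated
with fits.open('myfile.fits') as hdul:
    newim = hdul[0].data
    # same masking steps as in the question, applied in place
    newim[newim == ID] = 0
    newim[newim == 0] = -99
    newim[newim > -99] = 0
    newim[newim == -99] = 1
    # the primary HDU still carries the original header, so it is written out
    # unchanged along with the modified data
    hdul.writeto('mynewfile.fits', overwrite=True)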
I have a problem with Whoosh. I want to build the index incrementally, in separate sessions, because the database query that extracts the data is heavy. I have fixed almost all of the problems, but I can't get past one issue: every time I reopen the index to add new documents, the index file is wiped instead of the new documents simply being appended. I tried using update_document instead of add_document, and FileStorage.open_index instead of index.open_dir, but nothing changed: I always end up with an index file much smaller than expected.
if is_new_index_file:
    if os.path.isdir(<dirname>):
        rmtree(<dirname>)
        os.mkdir(<dirname>)
    else:
        os.mkdir(<dirname>)
    schema = TranslationSchema()
    index.create_in(<dirname>, <schema>, indexname=<indexname>)
    ix = index.open_dir(<dirname>, indexname=<indexname>, schema=<schema>)
else:
    # open an existing index object
    # ix = index.open_dir(<dirname>, indexname=<indexname>)
    # open file storage
    ix = FileStorage(<dirname>)
    ix.open_index(indexname=<indexname>)

...
list-of-fields = <query-to-the-database-to-extract-fields>
...

writer = ix.writer()
# writer.add_document(<list-of-fields>)
writer.update_document(<list-of-fields>)
writer.commit(merge=False, optimize=True)
ix.close()
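For reference, the usual create-once / open-later pattern with Whoosh looks roughly like the sketch below (the directory, schema, and field names here are hypothetical, not the ones from the code above):
import os
from whoosh import index
from whoosh.fields import Schema, ID, TEXT

# hypothetical schema and index directory, just to show the pattern
schema = Schema(doc_id=ID(stored=True, unique=True), body=TEXT(stored=True))
dirname = 'indexdir'

if not os.path.isdir(dirname):
    os.mkdir(dirname)
    ix = index.create_in(dirname, schema)  # create the index only once
else:
    ix = index.open_dir(dirname)           # later sessions reopen the same index

writer = ix.writer()
writer.add_document(doc_id=u'42', body=u'some text')  # or update_document for upserts
writer.commit()
ix.close()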
I am new to Python and FITS image files, and as such I am running into issues. I have two FITS files; the first FITS file is pixels/counts and the second FITS file (the calibration file) is pixels/wavelength. I need to convert pixels/counts into wavelength/counts. Once this is done, I need to output wavelength/counts as a new FITS file for further analysis. So far I have managed to read the required data into arrays, as shown in the code below.
import numpy as np
from astropy.io import fits
# read the images
image_file = ("run_1.fits")
image_calibration = ("cali_1.fits")
hdr = fits.getheader(image_file)
hdr_c = fits.getheader(image_calibration)
# print headers
sp = fits.open(image_file)
print('\n\nHeader of the spectrum :\n\n', sp[0].header, '\n\n')
sp_c = fits.open(image_calibration)
print('\n\nHeader of the spectrum :\n\n', sp_c[0].header, '\n\n')
# generation of arrays with the wavelengths and counts
count = np.array(sp[0].data)
wave = np.array(sp_c[0].data)
I do not understand how to save two separate arrays into one FITS file. I tried an alternative approach by creating a list, as shown in this code:
file_list = fits.open(image_file)
calibration_list = fits.open(image_calibration)
image_data = file_list[0].data
calibration_data = calibration_list[0].data
# make a list to hold images
img_list = []
img_list.append(image_data)
img_list.append(calibration_data)
# list to numpy array
img_array = np.array(img_list)
# save the array as fits - image cube
fits.writeto('mycube.fits', img_array)
However, I could only save it as a cube, which is not correct because I just need the wavelength and counts data. Also, I lost all the headers in the newly created FITS file. To say I am lost is an understatement! Could someone point me in the right direction please? Thank you.
I am still working on this problem. I have now managed (I think) to produce a FITS file containing the wavelength and counts using this website:
https://www.mubdirahman.com/assets/lecture-3---numerical-manipulation-ii.pdf
This is my code:
# Making a Primary HDU (required):
primaryhdu = fits.PrimaryHDU(flux)  # Makes a header
# or, if you have a header that you've created: primaryhdu = fits.PrimaryHDU(arr1, header=head1)
# If you have additional extensions:
secondhdu = fits.ImageHDU(wave)
# Making a new HDU List:
hdulist1 = fits.HDUList([primaryhdu, secondhdu])
# Writing the file:
hdulist1.writeto("filename.fits", overwrite=True)
image = ("filename.fits")
hdr = fits.open(image)
image_data = hdr[0].data
wave_data = hdr[1].data
I am sure this is not the correct format for wavelength/counts. I need both wavelength and counts to be contained in hdr[0].data
If you are working with spectral data, it might be useful to look into specutils, which is designed for common tasks associated with reading/writing/manipulating spectra.
It's common to store spectral data in FITS files using tables, rather than images. For example you can create a table containing wavelength, flux, and counts columns, and include the associated units in the column metadata.
The docs include an example on how to create a generic "FITS table" writer with wavelength and flux columns. You could start from this example and modify it to suit your exact needs (which can vary quite a bit from case to case, which is probably why a "generic" FITS writer is not built-in).
You might also be able to use the wcs1d-fits format.
If you prefer not to use specutils, that example still might be useful as it demonstrates how to create an Astropy Table from your data and output it to a well-formatted FITS file.
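For instance, a minimal sketch of the table approach with plain Astropy (file names taken from the question; the column names are just placeholders) might look like this:
import numpy as np
from astropy.io import fits
from astropy.table import Table

# read the two arrays (counts and the wavelength calibration)
with fits.open('run_1.fits') as sp, fits.open('cali_1.fits') as sp_c:
    counts = np.asarray(sp[0].data).ravel()
    wavelength = np.asarray(sp_c[0].data).ravel()

# one row per pixel, stored as a FITS binary table in a single extension
spectrum = Table([wavelength, counts], names=('wavelength', 'counts'))
spectrum.write('spectrum.fits', format='fits', overwrite=True)

# reading it back: both columns come from the same table HDU
readback = Table.read('spectrum.fits', format='fits')
print(readback['wavelength'], readback['counts'])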
I have a large input file which consists of data frames (each frame is a data series of complex64 values with an identifying header). It is larger than my available memory.
<FRAME header={0}, data={**first** 500 numbers...}>,
<FRAME header={18}, data={first 500 numbers...}>,
<FRAME header={4}, data={first 500 numbers...}>,
<FRAME header={0}, data={**next** 500 numbers...}>
...
I want to order the data into a new file that is a numpy array of shape (len(headers), len(data_series)). It has to build the output file as it reads the frames, because I can't fit it all in memory.
I've looked at numpy.savetxt and the Python csv package, but for disk size, precision, and speed reasons I would prefer the output file to be binary. numpy.save is good except that I can't figure out how to make it append to an array of unknown final size.
I have to work in Python 2.7 because of some dependencies needed to read these frames. What I have done so far is write a function that can dump all of the frames with a matching header to a single binary file:
input_data = Funky_Data_Reader_that_doesnt_matter(input_filename)
with open("singleFrameHeader", 'ab') as f:
current_data = input_data.readFrame() # This loads the next frame in the file
if current_data.header == 0:
float_arr = np.array(current_data.data).view(float)
float_arr.tofile(f)
This works great, but I need to extend it to be two-dimensional. I'm starting to look at h5py as an option, but was hoping there is a simpler solution.
What would be great is something like
input_data = Funky_Data_Reader_that_doesnt_matter(input_filename)
with open("bigMatrix", 'ab') as f:
current_data = input_data.readFrame() # This loads the next frame in the file
index = current_data.header
float_arr = np.array(current_data.data).view(float)
float_arr.tofile(f, index)
Any help is appreciated. I thought reading and writing a 2D binary file in append mode would be a more common use case.
You have two problems: one is that a file is an inherently sequential container, and the other is that raw numpy binary files don't store shape information.
A simple way to start solving this would be to carry through with your initial idea of converting the data into files by header, then combining all the binary files into one large product (if you still feel the need to do so).
You could maintain a map of the headers you've found so far to their output files, data size, etc. This will allow you to combine the data more intelligently, if for example, there are missing chunks or headers or something.
import sys
from contextlib import ExitStack
from os import remove
from tempfile import NamedTemporaryFile
from shutil import copyfileobj

import numpy as np


class Header:
    __slots__ = ('id', 'count', 'file', 'name')

    def __init__(self, id):
        self.id = id
        self.count = 0
        self.file = NamedTemporaryFile(delete=False)
        self.name = self.file.name

    def write_frame(self, frame):
        data = np.array(frame.data).view(float)
        self.count += data.size
        data.tofile(self.file)


input_data = Funky_Data_Reader_that_doesnt_matter(input_filename)
file_map = {}

with ExitStack() as stack:
    while True:
        frame = input_data.next_frame()
        if frame is None:
            break  # recast this loop as necessary
        if frame.header not in file_map:
            header = Header(frame.header)
            stack.enter_context(header.file)
            file_map[frame.header] = header
        else:
            header = file_map[frame.header]
        header.write_frame(frame)

num_headers = max(file_map) + 1  # header IDs start at zero
max_count = max(h.count for h in file_map.values())

with open('singleFrameHeader', 'wb') as output:
    output.write(num_headers.to_bytes(8, sys.byteorder))
    output.write(max_count.to_bytes(8, sys.byteorder))
    for i in range(num_headers):
        if i in file_map:
            h = file_map[i]
            # append this header's temporary file to the combined output
            with open(h.name, 'rb') as chunk:
                copyfileobj(chunk, output)
            remove(h.name)
            if h.count < max_count:
                # pad short rows so every header occupies the same number of elements
                np.full(max_count - h.count, np.nan, dtype=float).tofile(output)
        else:
            # no frames were seen for this header at all
            np.full(max_count, np.nan, dtype=float).tofile(output)
The first 16 bytes will be the int64 number of headers and number of elements per header, respectively. Keep in mind that the file is in native byte order, whatever that may be, and is therefore not portable.
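To read the combined file back into a 2-D array, a short sketch (assuming the float64 layout produced above) could be:
import sys
import numpy as np

with open('singleFrameHeader', 'rb') as f:
    num_headers = int.from_bytes(f.read(8), sys.byteorder)
    max_count = int.from_bytes(f.read(8), sys.byteorder)
    # the remainder of the file is num_headers rows of max_count float64 values
    matrix = np.fromfile(f, dtype=np.float64).reshape(num_headers, max_count)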
Alternative
If (and only if) you know the exact size of a header dataset ahead of time, you can do this in one pass, with no temporary files. It also helps if the headers are contiguous; otherwise, missing swaths will be zero-filled. You will still need to maintain a dictionary of your current position within each header, but you will no longer have to keep a separate file pointer around for each one. All in all, this is a much better alternative than the original solution, if your use case allows it:
header_size = 500 * N  # bytes of data per header; you must know this up front
num_headers = M        # the number of headers must also be known up front in this variant
max_count = header_size // 8  # float64 elements per header row

input_data = Funky_Data_Reader_that_doesnt_matter(input_filename)
header_map = {}

with open('singleFrameHeader', 'wb') as output:
    output.write(num_headers.to_bytes(8, sys.byteorder))
    output.write(max_count.to_bytes(8, sys.byteorder))
    while True:
        frame = input_data.next_frame()
        if frame is None:
            break
        if frame.header not in header_map:
            header_map[frame.header] = 0
        data = np.array(frame.data).view(float)
        # jump to this header's row, offset by how many bytes of it were already written
        output.seek(16 + frame.header * header_size + header_map[frame.header])
        data.tofile(output)
        header_map[frame.header] += data.size * data.dtype.itemsize
I asked a question regarding this sort of out-of-order write pattern as a consequence of this answer: What happens when you seek past the end of a file opened for writing?
I have a large dataset: 20,000 x 40,000 as a numpy array. I have saved it as a pickle file.
Instead of reading this huge dataset into memory, I'd like to only read a few (say 100) rows of it at a time, for use as a minibatch.
How can I read only a few randomly-chosen (without replacement) lines from a pickle file?
You can write pickles incrementally to a file, which allows you to load them
incrementally as well.
Take the following example. Here, we iterate over the items of a list, and
pickle each one in turn.
>>> import cPickle
>>> myData = [1, 2, 3]
>>> f = open('mydata.pkl', 'wb')
>>> pickler = cPickle.Pickler(f)
>>> for e in myData:
... pickler.dump(e)
<cPickle.Pickler object at 0x7f3849818f68>
<cPickle.Pickler object at 0x7f3849818f68>
<cPickle.Pickler object at 0x7f3849818f68>
>>> f.close()
Now we can do the same process in reverse and load each object as needed. For
the purpose of example, let's say that we just want the first item and don't
want to iterate over the entire file.
>>> f = open('mydata.pkl', 'rb')
>>> unpickler = cPickle.Unpickler(f)
>>> unpickler.load()
1
At this point, the file stream has only advanced as far as the first
object. The remaining objects weren't loaded, which is exactly the behavior you
want. For proof, you can try reading the rest of the file and see the rest is
still sitting there.
>>> f.read()
'I2\n.I3\n.'
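If you do want to keep reading objects beyond the first one, a small sketch (same mydata.pkl and cPickle module as above) is to keep calling load() until the stream is exhausted:
objects = []
with open('mydata.pkl', 'rb') as f:
    unpickler = cPickle.Unpickler(f)
    while True:
        try:
            objects.append(unpickler.load())  # one pickled object per call
        except EOFError:
            break  # no more objects in the stream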
Since you cannot rely on the internal workings of pickle for random access, you need to use another storage method. The script below uses the tobytes() function to save the data line-wise in a raw file.
Since the length of each line is known, its offset in the file can be computed and accessed via seek() and read(). After that, the line is converted back to an array with the frombuffer() function.
The big disclaimer, however, is that the size of the array is not saved (this could be added as well, but requires some more work) and that this method might not be as portable as a pickled array.
As @PadraicCunningham pointed out in his comment, a memmap would be an alternative and elegant solution.
Remark on performance: After reading the comments I did a short benchmark. On my machine (16GB RAM, encrypted SSD) I was able to do 40000 random line reads in 24 seconds (with a 20000x40000 matrix of course, not the 10x10 from the example).
from __future__ import print_function
import numpy
import random


def dumparray(a, path):
    lines, _ = a.shape
    with open(path, 'wb') as fd:
        for i in range(lines):
            fd.write(a[i, ...].tobytes())


class RandomLineAccess(object):
    def __init__(self, path, cols, dtype):
        self.dtype = dtype
        self.fd = open(path, 'rb')
        self.line_length = cols * dtype.itemsize

    def read_line(self, line):
        offset = line * self.line_length
        self.fd.seek(offset)
        data = self.fd.read(self.line_length)
        return numpy.frombuffer(data, self.dtype)

    def close(self):
        self.fd.close()


def main():
    lines = 10
    cols = 10
    path = '/tmp/array'

    a = numpy.zeros((lines, cols))
    dtype = a.dtype

    for i in range(lines):
        # add some data to distinguish lines
        numpy.ndarray.fill(a[i, ...], i)

    dumparray(a, path)
    rla = RandomLineAccess(path, cols, dtype)

    line_indices = list(range(lines))
    for _ in range(20):
        line_index = random.choice(line_indices)
        print(line_index, rla.read_line(line_index))


if __name__ == '__main__':
    main()
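For comparison, the memmap alternative mentioned above could look roughly like this (a sketch reusing the /tmp/array file and the 10x10 float64 layout from the example; the shape has to be supplied because the raw file does not store it):
import numpy

lines, cols = 10, 10
mm = numpy.memmap('/tmp/array', dtype=numpy.float64, mode='r', shape=(lines, cols))
print(mm[3])  # touches only the pages backing row 3, not the whole file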
Thanks everyone. I ended up finding a workaround (a machine with more RAM so I could actually load the dataset into memory).
I have an array I would like to add a header to.
This is what I have now:
0.0,1.630000e+01,1.990000e+01,1.840000e+01
1.0,1.630000e+01,1.990000e+01,1.840000e+01
2.0,1.630000e+01,1.990000e+01,1.840000e+01
This is what I want:
SP,1,2,3
0.0,1.630000e+01,1.990000e+01,1.840000e+01
1.0,1.630000e+01,1.990000e+01,1.840000e+01
2.0,1.630000e+01,1.990000e+01,1.840000e+01
Notes:
"SP" will always be 1st followed by the numbering of the columns which may vary
here is my existing code:
fmt = ",".join(["%s"] + ["%10.6e"] * (my_array.shape[1]-1))
np.savetxt('final.csv', my_array, fmt=fmt, delimiter=",")
As of NumPy 1.7.0, three parameters have been added to numpy.savetxt for exactly this purpose: header, footer and comments. So the code to do what you want can easily be written as:
import numpy
a = numpy.array([[0.0,1.630000e+01,1.990000e+01,1.840000e+01],
[1.0,1.630000e+01,1.990000e+01,1.840000e+01],
[2.0,1.630000e+01,1.990000e+01,1.840000e+01]])
fmt = ",".join(["%s"] + ["%10.6e"] * (a.shape[1]-1))
numpy.savetxt("temp", a, fmt=fmt, header="SP,1,2,3", comments='')
Note: this answer was written for an older version of numpy, relevant when the question was written. With modern numpy, makhlaghi's answer provides a more elegant solution.
Since numpy.savetxt can also write to file objects, you can open the file yourself and write your header before the data:
import numpy
a = numpy.array([[0.0,1.630000e+01,1.990000e+01,1.840000e+01],
[1.0,1.630000e+01,1.990000e+01,1.840000e+01],
[2.0,1.630000e+01,1.990000e+01,1.840000e+01]])
fmt = ",".join(["%s"] + ["%10.6e"] * (a.shape[1]-1))
# numpy.savetxt, at least as of numpy 1.6.2, writes bytes
# to file, which doesn't work with a file open in text mode. To
# work around this deficiency, open the file in binary mode, and
# write out the header as bytes.
with open('final.csv', 'wb') as f:
    f.write(b'SP,1,2,3\n')
    # f.write(bytes("SP," + lists + "\n", "UTF-8"))
    # Used this line for a variable list of numbers
    numpy.savetxt(f, a, fmt=fmt, delimiter=",")
It is also possible to save things other than NumPy arrays to a file using the savez or savez_compressed functions. With the load function you can then retrieve all the data and access it like a dict.
import numpy as np
np.savez("filename.npz", array_to_save=np.array([0.0, 0.0]), header="Some header")
data = np.load("filename.npz")
array = data["array_to_save"]
header = str(data["header"])