How do I resize a matrix in Python, say np.random.randn(100,100), to shape (50,50)? I would like interpolation like what you expect from an image resize, e.g. cv2.resize, but I can't find a decent way to do this for a plain matrix.
I think np.resize doesn't serve this purpose, since it does no interpolation to keep the new matrix "looking" like the old one when viewed with plt.matshow().
import numpy as np
import matplotlib.pyplot as plt
a = np.random.randn(100,100)
# how to do this?
b = resize(a, (50, 50))  # something like this
plt.matshow(a)
plt.matshow(b)
plt.show()
--> a and b should look quite alike, just at different sizes
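One possible approach (a sketch, assuming SciPy is installed) is scipy.ndimage.zoom, which rescales an array with spline interpolation:
import numpy as np
from scipy.ndimage import zoom
a = np.random.randn(100,100)
# a zoom factor of 0.5 along both axes turns 100x100 into 50x50;
# order=1 is bilinear interpolation, order=3 (the default) is cubic
b = zoom(a, 0.5, order=1)
print(b.shape)  # (50, 50)
cv2.resize(a, (50, 50)) or skimage.transform.resize(a, (50, 50)) behave similarly if OpenCV or scikit-image is preferred.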
I have two images that I downloaded from FITS files; the code is below. Now I want to create a function that takes both of those image arrays, subtracts one from the other, and plots the difference as a new picture. Can anyone help me out? I'm really stuck; I've been working at it for four hours to no avail.
import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
Image_file = fits.open('https://raw.githubusercontent.com/msu-cmse-courses/cmse202-S21-student/master/data/m42_40min_ir.fits')
fourty_min_ir = Image_file[0].data
type(fourty_min_ir)
Image_file = fits.open('https://raw.githubusercontent.com/msu-cmse-courses/cmse202-S21-student/master/data/m42_40min_red.fits')
fourty_min_red = Image_file[0].data
type(fourty_min_red)
results_img=fourty_min_ir-fourty_min_red
plt.imshow(results_img)
plt.show()
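One way to wrap this in a function (a sketch; plot_difference is just an illustrative name):
import numpy as np
import matplotlib.pyplot as plt
def plot_difference(image_a, image_b):
    # subtract one image array from the other and display the result
    difference = image_a - image_b
    plt.imshow(difference)
    plt.colorbar()
    plt.show()
    return difference
results_img = plot_difference(fourty_min_ir, fourty_min_red)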
I made a 3D array consisting of integers (0-4). What I want is to save the 3D array as a stack of 2D images (if possible, as a *.tiff file). How should I do this?
import numpy as np
a = np.random.randint(0,5, size=(100,100,100))
a = a.astype('int8')
Actually, I figured it out. This is my code.
With this code I don't need to stack a series of 2D images (arrays); I just make a 3D array and save it. That is all I did.
import numpy as np
import tifffile as tif  # skimage.external.tifffile has been removed from scikit-image; use the standalone tifffile package
a = np.random.randint(0,5, size=(100,100,100))
a = a.astype('int8')
tif.imwrite('a.tif', a, bigtiff=True)
This should work. I haven't tested it, but I have separated color images into RGB slices using this method, and it should work much the same way here, assuming you don't want to do anything with those pixel values first. (They will be very close to the same color in an image.)
import imageio
import numpy as np
a = np.random.randint(0,5, size=(100,100,100))
a = a.astype('int8')
for i in range(100):
    # write the i-th slice along the third axis out as its own file
    newimage = a[:, :, i]
    imageio.imwrite("path/to/image%d.tiff" % i, newimage)
What exactly do you mean by "stack"? Since you refer to TIFF as the output format, I assume you want your data in one file as a multi-frame TIFF.
This can easily be done with imageio's mimwrite() function:
# import numpy as np
# a = np.random.randint(0,5, size=(100,100,100))
# a = a.astype('int8')
import imageio
imageio.mimwrite("image.tiff", a)
Note that this function expects the frame counter as the first axis, with x and y following. See also its documentation.
However, if I'm wrong and you want n (e.g. 100) separate TIFF files, you can also use the normal imwrite() function in a loop:
n = len(a)
for i in range(n):
    imageio.imwrite(f'image_{i:03}.tiff', a[i])
I am using the PIL package in Python, and I want to import the pixels into a matrix after converting the image to grayscale. This is my code:
from PIL import Image
import numpy as np
imo = Image.open("/home/gauss/Pictures/images.jpg")
imo2 = imo.convert('L')
dim = imo2.size
pic_mat = np.zeros(shape=(dim[0] , dim[1]))
for i in range(dim[0]):
    for j in range(dim[1]):
        pic_mat[i][j] = imo2.getpixel((i, j))
My question is about the size attribute. It returns a tuple (a, b) where a is the width of the picture and b is the height, but doesn't that mean a corresponds to the columns of the matrix and b to the rows? I'm wondering about this to check whether I set up my matrix properly.
Thank you
Try just doing
pic_mat = np.array(imo.convert('L'))
You can also avoid things like shape=(dim[0], dim[1]) by slicing the size tuple, e.g. shape=dim[:2] (the :2 is even redundant in this case, but I like to be careful...)
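Note that the axes come out swapped relative to imo.size: PIL reports (width, height) while the NumPy array has shape (height, width). A quick sketch (reusing the path from the question) to check this:
from PIL import Image
import numpy as np
imo = Image.open("/home/gauss/Pictures/images.jpg")
pic_mat = np.array(imo.convert('L'))
print(imo.size)       # (width, height)
print(pic_mat.shape)  # (height, width) -- rows are y, columns are x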
I have x, y, and height variables for building a contour in Python.
I created a Triangulation grid (x, y, height, and triang are NumPy arrays) using:
tri = Tri.Triangulation(x, y, triang)
Then I made a contour using tricontourf:
tricontourf(tri,height)
How can I get the output of tricontourf into a NumPy array? I can display the image using pyplot, but I don't want to.
When I tried this:
triout = tricontourf(tri, height)
print(triout)
I got:
<matplotlib.tri.tricontour.TriContourSet instance at 0xa9ab66c>
I need to get the image data, and if I could get a NumPy array it would be easy for me.
Is it possible to do this?
If it's not possible, can I do what tricontourf does without matplotlib in Python?
You should try this:
cs = tricontourf(tri, height)
for collection in cs.collections:
    for path in collection.get_paths():
        print(path.to_polygons())
as I learned on:
https://github.com/matplotlib/matplotlib/issues/367
(it is better to use path.to_polygons())
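If the end goal is a NumPy array, a small follow-up sketch (variable names are illustrative) that gathers each polygon's vertices into a list of (N, 2) arrays:
import numpy as np
polygons = []
for collection in cs.collections:
    for path in collection.get_paths():
        # to_polygons() returns a list of (N, 2) vertex arrays for each path
        for poly in path.to_polygons():
            polygons.append(np.asarray(poly))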
I've looked all over the place and am not finding a solution to this issue. I feel like it should be fairly straightforward, but we'll see.
I have a FITS-format data cube and I need to collapse it into a 2D FITS image. The data cube has two spatial dimensions and one spectral/velocity dimension.
I'm just looking for a simple Python routine to load in the cube and flatten all these layers (i.e. integrate them along the spectral/velocity axis). Thanks for any help.
This tutorial on pyfits is a little old but still basically correct. The key point is that opening a FITS cube with pyfits (or astropy.io.fits) gives you a three-dimensional NumPy array.
import pyfits
# if you are using astropy then for this example
# from astropy.io import fits as pyfits
data_cube, header_data_cube = pyfits.getdata("data_cube.fits", 0, header=True)
data_cube.shape
# (Z, X, Y)
You then have to decide how to flatten/integrate the cube along the Z axis, and there are plenty of resources out there to help you choose the right way (hopefully grounded in some analysis framework) to do that.
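As a minimal sketch (a plain sum over the spectral axis; whether a sum, mean, or properly weighted integration is appropriate depends on your analysis):
import numpy as np
# collapse along the first (Z / spectral) axis; nansum ignores NaN pixels
flat_image = np.nansum(data_cube, axis=0)
flat_image.shape
# (X, Y)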
OK, this seems to work:
import pyfits
import numpy as np
hdulist = pyfits.open(filename)
header = hdulist[0].header
data = hdulist[0].data
data = np.nan_to_num(data)
new_data = data[0]
for i in range(1, data.shape[0]):  # loop over the remaining layers/pages of the cube
new_data += data[i]
hdu = pyfits.PrimaryHDU(new_data)
hdu.writeto(new_filename)
One problem with this routine is that WCS coordinates (which are attached to the original data cube) are lost during this conversion.
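One possible way to keep the sky coordinates (a sketch assuming astropy; the .celestial attribute extracts just the two celestial axes from the cube's WCS):
from astropy.io import fits
from astropy.wcs import WCS
w = WCS(header).celestial                      # keep the RA/Dec axes, drop the spectral axis
hdu = fits.PrimaryHDU(new_data, header=w.to_header())
hdu.writeto(new_filename)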
This is a bit of an old question, but spectral-cube now provides a better solution for this.
Example, based on Teachey's answer:
from spectral_cube import SpectralCube
cube = SpectralCube.read(filename)
summed_image = cube.sum(axis=0)
summed_image.hdu.writeto(new_filename)