I'm having issues trying to set a default cell size for polygon to raster conversion. I need to convert a buffered stream (polygon) to a raster layer, so that I can burn the stream into a DEM. I'd like to automate this process to include it in a larger script.
My main problem is that the PolygonToRaster_conversion() tool is not allowing me to set the cell size to a raster layer value. It's also not obeying the default raster cell size I'm trying to set in the environment. Instead, it consistently uses the default "extent divided by 250".
Here is my script for this process:
import arcpy

# Input Data
Input_DEM = "C:\\GIS\\DEM\\dem_30m.grid"
BufferedStream = "C:\\GIS\\StreamBuff.shp"
# Environment Settings
arcpy.env.cellSize = Input_DEM
# Convert to Raster
StreamRaster = "C:\\GIS\\Stream_Rast.grid"
arcpy.PolygonToRaster_conversion(BufferedStream, "FID", StreamRaster, "CELL_CENTER", "NONE", Input_DEM)
This produces the following error:
"Cell size must be greater than zero."
The same error occurs if I type out the path for the DEM layer.
I've also tried manually typing in a number for the cell size. This works, but I want to generalize the usability of this tool.
What I really don't understand is that when I set the DEM layer as the cell size manually through the ArcGIS interface, it worked perfectly!
Any help will be greatly appreciated!!!
There are several options here. First, you can use the Describe function's raster properties to extract the cell size and pass that value into the PolygonToRaster function. Second, try using the MINOF keyword in the cell size environment setting (sketched after the example below).
import arcpy
# Input Data
Input_DEM = "C:\\GIS\\DEM\\dem_30m.grid"
BufferedStream = "C:\\GIS\\StreamBuff.shp"
# Use the describe function to get at cell size
desc = arcpy.Describe(Input_DEM)
cellsize = desc.meanCellWidth
# Convert to Raster
StreamRaster = "C:\\GIS\\Stream_Rast.grid"
arcpy.PolygonToRaster_conversion(BufferedStream, "FID", StreamRaster, "CELL_CENTER", "NONE", cellsize)
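For the second option, here is a minimal sketch (reusing the inputs defined above, and assuming nothing else in your larger script overrides the environment): set the cell size through the environment and leave the tool's cell size argument out so the environment value applies.
import arcpy

# "MINOF" uses the smallest cell size of the rasters involved; a plain number
# (e.g. the DEM's cell size from Describe) can be assigned here as well.
arcpy.env.cellSize = "MINOF"
# arcpy.env.cellSize = arcpy.Describe(Input_DEM).meanCellWidth  # alternative

# Omit the tool's cell size argument so the environment setting is used
arcpy.PolygonToRaster_conversion(BufferedStream, "FID", StreamRaster,
                                 "CELL_CENTER", "NONE")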
I have a large tiff file (around 2GB) containing a map. I have been able to successfully read the data and even display it using the following python code:
import rasterio
from rasterio.plot import show
with rasterio.open("image.tif") as img:
show(img)
data = img.read()
This works just fine. However, I need to be able to display specific parts of this map without having to load the entire file into memory (as it takes up too much RAM and is not doable on many other PCs). I tried using the Window class of rasterio to do that, but when I tried to display the map, the outcome was different from how the full map is displayed (as if it caused data loss):
import rasterio
from rasterio.plot import show
from rasterio.windows import Window
with rasterio.open("image.tif") as img:
data = img.read(window=Window(0, 0, 100000, 100000))
show(data)
So my question is: how can I display part of the map without loading the entire file into memory, while making it look as if it had been cropped from the full map image?
thanks in advance :)
The reason that it displays nicely in the first case, but not in the second, is that in the first case you pass an instance of rasterio.DatasetReader to show (show(img)), but in the second case you pass in a numpy array (show(data)). The DatasetReader contains additional information, in particular an affine transformation and color interpretation, which show uses.
The additional things show does in the first case (for RGB data) can be recreated for the windowed case like so:
import rasterio
from rasterio.enums import ColorInterp
from rasterio.plot import show
from rasterio.windows import Window
with rasterio.open("image.tif") as img:
window = Window(0, 0, 100000, 100000)
# Lookup table for the color space in the source file
source_colorinterp = dict(zip(img.colorinterp, img.indexes))
# Read the image in the proper order so the numpy array will have the colors in the
# order expected by matplotlib (RGB)
rgb_indexes = [
source_colorinterp[ci]
for ci in (ColorInterp.red, ColorInterp.green, ColorInterp.blue)
]
data = img.read(rgb_indexes, window=window)
# Also pass in the affine transform corresponding to the window in order to
# display the correct coordinates and possibly orientation
show(data, transform=img.window_transform(window))
(I figured out what show does by looking at its source code.)
In case of data with a single channel, the underlying matplotlib library used for plotting scales the color range based on the min and max value of the data. To get exactly the same colors as before, you'll need to know the min and max of the whole image, or some values that come reasonably close.
Then you can explicitly tell matplotlib's imshow how to scale:
with rasterio.open("image.tif") as img:
window = Window(0, 0, 100000, 100000)
data = img.read(window=window, masked=True)
# adjust these
value_min = 0
value_max = 255
show(data, transform=img.window_transform(window), vmin=value_min, vmax=value_max)
Additional kwargs (like vmin and vmax above) will be passed on to matplotlib.axes.Axes.imshow.
From the matplotlib documentation:
vmin, vmax: float, optional
When using scalar data and no explicit norm, vmin and vmax define the data range that the colormap covers. By default, the colormap covers the complete value range of the supplied data. It is deprecated to use vmin/vmax when norm is given. When using RGB(A) data, parameters vmin/vmax are ignored.
That way you could also change the colormap it uses etc.
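For instance, a minimal sketch for a single-band window, reusing value_min and value_max from above (the "gray" colormap is just an arbitrary choice): any extra imshow keyword such as cmap can be passed straight through.
with rasterio.open("image.tif") as img:
    window = Window(0, 0, 100000, 100000)
    data = img.read(window=window, masked=True)
    # cmap, vmin and vmax are all forwarded to matplotlib's imshow
    show(data, transform=img.window_transform(window),
         vmin=value_min, vmax=value_max, cmap="gray")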
I am working with a set of DICOM images. I would like to create a new image with a header similar to an existing image. However, I already hold the images in numpy arrays, so to avoid duplicating the pixel data I read the headers without PixelData:
metadata = pydicom.filereader.dcmread(image_path[l],stop_before_pixels=True)
In a separate function, I want to attach a different image (an ROI) to the modified metadata:
ds = metadata
ds.PixelData = roi.astype(np.int16).tostring() # A numpy array converted to the same datatype as pixel_array was
ds.save_as(os.path.join(write_dir,'ROI'+str(slice+1))+'.dcm')
This results in the error message below, which seems to indicate that the PixelData VR is not set in the dictionary? Thanks for your suggestions.
ValueError: Cannot write ambiguous VR of 'OB or OW' for data element
with tag (7fe0, 0010). Set the correct VR before writing, or use an
implicit VR transfer syntax
The VR for Pixel Data is ambiguous in the DICOM Standard. Depending on the exact nature of your dataset the required VR is either OB or OW. Because you're adding a brand new Pixel Data element to an existing dataset pydicom defaults the VR to 'OB or OW'. Normally this isn't an issue if your dataset is conformant because during write pydicom will automatically fix this so the correct VR is used (using the correct_ambiguous_vr() function). If your dataset isn't conformant then:
If your Pixel Data uses a compressed transfer syntax, like JPEG, then it should be OB.
Otherwise, it should be OB if Bits Allocated <= 8 and OW if > 8.
# Set the VR manually
ds['PixelData'].VR = 'OW'
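Putting the two rules together, here is a minimal sketch (assuming an uncompressed transfer syntax, and reusing ds, roi, write_dir and slice from the question):
import os
import numpy as np

# Convert the ROI array and pick the VR from Bits Allocated, per the rules above
ds.PixelData = roi.astype(np.int16).tobytes()
ds['PixelData'].VR = 'OW' if ds.BitsAllocated > 8 else 'OB'
ds.save_as(os.path.join(write_dir, 'ROI' + str(slice + 1) + '.dcm'))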
I'm trying to work out how to create a batch operation tool in ArcCatalog, based on all .img raster files in a directory. I do not need to change the code, but I need to set the correct parameters.
Here's my code:
'''This script uses map algebra to find values in an
elevation raster greater than a specified value.'''
import os
import arcpy
# Switch on Spatial Analyst
arcpy.CheckOutExtension('Spatial')
# Load the Spatial Analyst module
from arcpy.sa import *
# Overwrite any previous files of the same name
arcpy.env.overwriteOutput = True
# Specify the input folder and cut-offs
inDirectory = arcpy.GetParameterAsText(0)
cutoffElevation = int(arcpy.GetParameterAsText(1))
for i in os.listdir(inDirectory):
    if os.path.splitext(i)[1] == '.img':
        inRaster = os.path.join(inDirectory, i)
        outRaster = os.path.join(inDirectory, os.path.splitext(i)[0] + '_above_' + str(cutoffElevation) + '.img')
        # Make a map algebra expression and save the resulting raster
        tmpRaster = Raster(inRaster) > cutoffElevation
        tmpRaster.save(outRaster)
# Switch off Spatial Analyst
arcpy.CheckInExtension('Spatial')
In the parameters I have selected:
Input Raster: Raster Dataset - direction Input, Multivalue yes
Output Raster: Raster Dataset - direction Output
Cut off elevation: String - direction Input
I add the images I want in the input raster, select the output raster and cut off elevation. But I get the error:
line 13, in
cutoffElevation =int(arcpy.GetparameterAsText(1)).
ValueError: invalid literal for int() with base 10
Does anybody know how to fix this?
You have three input parameters shown in that dialog box screenshot, but only two are described in the script. (The output raster outRaster is built inside the loop, not read in as a parameter.)
The error you're getting is because the output raster (presumably a file path and file name) can't be converted to an integer.
There are two ways to solve that:
Change the input parameters within that tool definition, so you're only feeding in input raster (parameter 0) and cut off elevation (parameter 1).
Change the code so it's looking for the correct parameters that are currently defined -- input raster (parameter 0) and cut off elevation (parameter 2).
inDirectory = arcpy.GetParameterAsText(0)
cutoffElevation = int(arcpy.GetParameterAsText(2))
Either way, you're making sure that the GetParameterAsText command is actually referring to the parameter you really want.
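If you are unsure which index ends up holding which value in your tool dialog, one hypothetical way to check is to echo every parameter back to the geoprocessing messages before using any of them:
import arcpy

# Print each parameter index and value to the tool's messages window
for idx in range(arcpy.GetArgumentCount()):
    arcpy.AddMessage("Parameter {0}: {1}".format(idx, arcpy.GetParameterAsText(idx)))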
I'm having a problem with FITS file manipulation in the astropy package, and I'm in need of some help.
I essentially want to take an image I have in FITS format and create a new file into which I need to start entering correction factors, plus a new image that can then be produced from the correction factors and the original image. Each of these will have the same dimensions.
Starting with this:
from astropy.io import fits
# Compute the size of the images (you can also do this manually rather than calling these keywords from the header):
#URL: /Users/UCL_Astronomy/Documents/UCL/PHASG199/M33_UVOT_sum/UVOTIMSUM/M33_sum_epoch1_um2_norm.img
nxpix_um2_ext1 = fits.open('...')[1]['NAXIS1']
nypix_um2_ext1 = fits.open('...')[1]['NAXIS2']
#nxpix_um2_ext1 = 4071 #hima_sk_um2[1].header['NAXIS1'] # IDL: nxpix_uw1_ext1 = sxpar(hima_sk_uw1_ext1,'NAXIS1')
#nypix_um2_ext1 = 4321 #hima_sk_um2[1].header['NAXIS2'] # IDL: nypix_uw1_ext1 = sxpar(hima_sk_uw1_ext1,'NAXIS2')
# Make a new image file with the same dimensions (and headers, etc) to save the correction factors:
coicorr_um2_ext1 = ??[nxpix_um2_ext1,nypix_um2_ext1]
# Make a new image file with the same dimensions (and headers, etc) to save the corrected image:
ima_sk_coicorr_um2_ext1 = ??[nxpix_um2_ext1,nypix_um2_ext1]
Can anyone give me the obvious knowledge I am missing to do this? The last two lines are just there to outline what is missing; I have included ?? to signal that I need something else there, perhaps fits.writeto() or something similar...
The astropy documentation takes you through this task step by step: create an array with shape (NAXIS2, NAXIS1), put the data in the primary HDU, make an HDUList and write it to disk:
import numpy as np
from astropy.io import fits
data = np.zeros((NAXIS2,NAXIS1))
hdu = fits.PrimaryHDU(data)
hdulist = fits.HDUList([hdu])
hdulist.writeto('new.fits')
I think @VinceP's answer is correct, but I'll add some more information because I think you are not using the capabilities of astropy well here.
First of all, Python is zero-based, so the primary extension has the number 0. Maybe you got that wrong, maybe you didn't, but it's uncommon to access the second HDU, so I thought I'd better mention it.
hdu_num = 0 # Or use = 1 if you really want the second hdu.
First you do not need to open the same file twice, you can open it once and close it after extracting the relevant values:
with fits.open('...') as hdus:
    nxpix_um2_ext1 = hdus[hdu_num].header['NAXIS1']
    nypix_um2_ext1 = hdus[hdu_num].header['NAXIS2']
# Continue without indentation and the file will be closed again.
or if you want to keep the whole header (for saving it later) and the data you can use:
with fits.open('...') as hdus:
    hdr = hdus[hdu_num].header
    data = hdus[hdu_num].data  # I'll also take the data for comparison.
I'll continue with the second approach because I think it's a lot cleaner and you'll have all the data and header values ready.
new_data = np.zeros((hdr['NAXIS2'], hdr['NAXIS1']))
Please note that Python interprets the axes differently than IRAF (and I think IDL, but I'm not sure), so you need axis 2 as the first element and axis 1 as the second.
So do a quick check that the shapes are the same:
print(new_data.shape)
print(data.shape)
If they are not equal, then I got the Python axis order confused (again), but I don't think so. Instead of creating a new array based on the header values, you can also create a new array by just using the old shape:
new_data_2 = np.zeros(data.shape)
That will ensure the dimensions and shape are identical. Now you have an empty image. If you would rather have a copy, then you can, but do not need to, explicitly copy the data (except if you opened the file explicitly in write/append/update mode; then you should always copy it, but that's not the default).
new_data = data # or = data.copy() for explicitly copying.
Do your operations on it, and if you want to save it again you can use what @VinceP suggested:
hdu = fits.PrimaryHDU(new_data, header=hdr) # This ensures the same header is written to the new file
hdulist = fits.HDUList([hdu])
hdulist.writeto('new.fits')
Please note that you don't have to alter the shape-related header keywords even if you changed the data's shape, because during writeto astropy will update these (by default).
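As a quick sanity check (a small sketch, assuming the 'new.fits' file written above), you can read the file back and confirm that the NAXIS keywords now match the new data shape:
with fits.open('new.fits') as check:
    # astropy rewrote NAXIS1/NAXIS2 from the data during writeto
    print(check[0].header['NAXIS1'], check[0].header['NAXIS2'])
    print(check[0].data.shape)  # numpy shape is (NAXIS2, NAXIS1)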
I am looking to store pixel values from satellite imagery into an array. I've been using
np.empty((image_width, image_length))
and it worked for smaller subsets of an image, but when using it on the entire image (3858 x 3743) the code terminates very quickly and all I get is an array of zeros.
I load the image values into the array using a loop and opening the image with gdal
img = gdal.Open(os.path.join(fn + "\{0}".format(fname))).ReadAsArray()
but when I include print img_array I end up with just zeros.
I have tried almost every single dtype that I could find in the numpy documentation but keep getting the same result.
Is numpy unable to load this many values or is there a way to optimize the array?
I am working with 8-bit tiff images that contain NDVI (decimal) values.
Thanks
Not certain what type of images you are trying to read, but in the case of RADARSAT-2 images you can do the following:
dataset = gdal.Open("RADARSAT_2_CALIB:SIGMA0:" + inpath + "product.xml")
S_HH = dataset.GetRasterBand(1).ReadAsArray()
S_VV = dataset.GetRasterBand(2).ReadAsArray()
# gets the intensity (Intensity = re**2+imag**2), and amplitude = sqrt(Intensity)
self.image_HH_I = numpy.real(S_HH)**2+numpy.imag(S_HH)**2
self.image_VV_I = numpy.real(S_VV)**2+numpy.imag(S_VV)**2
But that is specifically for that type of image (in this case each image contains several bands, so I need to read in each band separately with GetRasterBand(i) and then do ReadAsArray()). If there is a specific GDAL driver for the type of images you want to read in, life gets very easy.
If you give some more info on the type of images you want to read in, I can maybe help more specifically.
Edit: did you try something like this? (Not sure if that will work on tiff, or how many bytes the header is, hence the something:)
A=open(filename,"r")
B=numpy.fromfile(A,dtype='uint8')[something:].reshape(3858,3743)
C=B*1.0
A.close()
Edit: The problem was solved by using 64-bit Python instead of 32-bit, due to memory errors at 2 GB with the 32-bit Python version.
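As a quick way to check which interpreter you are actually running (a small, generic sketch, not specific to GDAL):
import struct
import sys

print(sys.version)               # full interpreter version string
print(struct.calcsize("P") * 8)  # 64 on a 64-bit Python, 32 on a 32-bit one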