Is there an NCO or netCDF4 command that can be used to extend the lat-lon dimensions of a netCDF file?
E.g. I have a netCDF file with the following dimensions:
dimensions(sizes): time(10), var(1), latitude(1674), longitude(4320)
I want to extend the latitude dimension to cover the entire globe, i.e. instead of 1674 it should be 2160. Is there a way to do that? The new cells should be assigned a user-specified value, say 0.0.
You could generate a new grid of the size you want and then remap your original data to that grid with ncremap. It's a fairly sophisticated feature, but then so is what you want to do :). Or you could open your file in ncap2, define new dimensions of the target sizes, use hyperslab subscripting to copy your original data into a corner of the new array, and then use ncks to extract only the new fields/dimensions from that file.
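The core array operation behind the hyperslab-copy approach is simple padding; a minimal numpy sketch with the dimension sizes from the question (the same indexing idea applies inside ncap2, or when rewriting the file with the netCDF4 Python module):

```python
import numpy as np

# Original grid has 1674 latitude rows; target is 2160 (sizes from the question).
old_lat, new_lat, nlon = 1674, 2160, 4320

data = np.random.rand(old_lat, nlon)  # stand-in for one time slice of the variable

# Allocate the full-globe array pre-filled with the user-specified value (0.0)...
extended = np.full((new_lat, nlon), 0.0, dtype=data.dtype)

# ...and copy the original data into a "corner" of it, as ncap2 would do
# with hyperslab subscripting.
extended[:old_lat, :] = data

print(extended.shape)  # (2160, 4320)
```

The untouched rows keep the fill value, and the original rows are preserved exactly.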
I have problems writing data into a .tif file with the gdal module in Python.
I want to extract data (a numpy array) from a tif file and modify some of its values before saving it to a new file, with the new file functioning normally. I use the following script:
import numpy as np
from osgeo import gdal

tif = gdal.Open('data/pre_heilj_mean90_15.tif') #original tif file
imwidth = tif.RasterXSize
imheight = tif.RasterYSize
data = tif.ReadAsArray()
data[100][100] = 100 #modify value
data = data.astype(np.float32)
driver = gdal.GetDriverByName("GTiff")
dataset = driver.Create('data/res.tif', imwidth, imheight, 1, gdal.GDT_Float32)
dataset.SetSpatialRef(tif.GetSpatialRef())
dataset.SetGeoTransform(tif.GetGeoTransform())
dataset.SetProjection(tif.GetProjection())
dataset.GetRasterBand(1).WriteArray(data)
dataset.FlushCache()
dataset=None
data=None
tif=None
I am certain that the data in the original tif file is 2-D and of float32 type.
However, the new tif file (res.tif) is all black in ArcMap:
[screenshot: res.tif]
Here is how the original tif file displays in ArcMap:
[screenshot: original tif file]
And the sizes of the two files differ a lot: the original is 5287 KB and the new one is 4633 KB.
I want to know what goes wrong. (Forgive my poor English, please.)
You probably forgot to write the nodata value in the metadata of the output file. The fact that it's "black" is probably just due to stretching; if you stretch the output similarly (min ≈ 406) it should look similar.
For example get the nodata value with:
nodata_value = tif.GetRasterBand(1).GetNoDataValue()
Then write/assign it with:
dataset.GetRasterBand(1).SetNoDataValue(nodata_value)
Keep in mind that this is a property of a band, so multiple bands in a single file can potentially have different nodata values.
I used this Python code to convert a NIfTI file to .vtk polydata meshes:
import itk
import vtk
input_filename = '/home/nour/Bureau/7ans/244/whatIneed/244seg_pve_2.nii.gz'
reader = itk.ImageFileReader[itk.Image[itk.UC, 3]].New()
reader.SetFileName(input_filename)
reader.Update()
itkToVtkFilter = itk.ImageToVTKImageFilter[itk.Image[itk.UC, 3]].New()
itkToVtkFilter.SetInput(reader.GetOutput())
myvtkImageData = itkToVtkFilter.GetOutput()
print("myvtkImageData")
and for saving and writing the .vtk file I used:
writer = vtk.vtkPolyDataWriter()
writer.SetInputData()
writer.SetFileName("/home/nour/Bureau/7ans/244/whatIneed/Output.vtk")
writer.Write()
and here is the error:
ERROR: In /work/standalone-x64-build/VTK-source/Common/ExecutionModel/vtkDemandDrivenPipeline.cxx, line 809
vtkCompositeDataPipeline (0x4d9cac0): Input for connection index 0 on input port index 0 for algorithm vtkPolyDataWriter(0x4de3ea0) is of type vtkImageData, but a vtkPolyData is required.
I was wondering as to what would be a good way of writing a vtk Polydata file.
thanks
You need to transform your image (a regular grid) into a polygonal mesh to be able to save it as a .vtk file.
For 2D meshes, this can be done using the vtkExtractSurface filter.
Another solution would be to use another format (.vtk is a legacy format):
If you want to save a regular grid, you can use the vtkXMLImageDataWriter, which uses the .vti extension.
If you want an unstructured mesh, you can use the vtkXMLPolyDataWriter, which uses the .vtp extension and gives a polygonal mesh. You can also use the vtkXMLUnstructuredGridWriter, which uses the .vtu extension and can contain 3D cells.
Images and polygonal meshes are fundamentally different types of data. You can't just cast an image into a mesh.
To get a mesh you would need to do some type of iso-surface extraction. Typically you would select some image intensity as the value of your surface, and then you would use an algorithm such as Marching Cubes to create a mesh of that value.
In VTK you can use the vtkContourFilter to create a mesh from an image. There are a number of examples on the VTK Example web site that show how to use the filter. Here is one:
https://lorensen.github.io/VTKExamples/site/Python/ImplicitFunctions/Sphere/
thank you very much. So as I understand it:
1 read the NIfTI file (the segmented file)
2 apply the ITK-to-VTK filter
3 create the meshes (using the vtkContourFilter)
4 finally get the polydata and save it to a .vtk file
Is that right?
I have a large 40 MB (about 173,397 lines) .dat file filled with binary data (random symbols). It is an astronomical photograph. I need to read and display it with Python. I am using a binary file because I will need to extract pixel-value data from specific regions of the image. But for now I just need to ingest it into Python, something like the READU procedure in IDL. I tried numpy and matplotlib but nothing worked. Suggestions?
You need to know the data type and dimensions of the binary file. For example, if the file contains float data, use numpy.fromfile like:
import numpy as np
data = np.fromfile(filename, dtype=float)
Then reshape the array to the dimensions of the image, dims, using numpy.reshape (the equivalent of REFORM in IDL):
im = np.reshape(data, dims)
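Putting the two steps together, a self-contained sketch (the float32 dtype and the dimensions here are assumptions; substitute whatever your file was actually written with):

```python
import os
import tempfile

import numpy as np

# Fake "image" written to disk as raw binary, standing in for the .dat file.
dims = (4, 5)  # (rows, cols) -- you must know these for your file
original = np.arange(20, dtype=np.float32).reshape(dims)

path = os.path.join(tempfile.mkdtemp(), "image.dat")
original.tofile(path)

# Read it back: the dtype must match what was written, or the values are garbage.
data = np.fromfile(path, dtype=np.float32)
im = np.reshape(data, dims)

print(im.shape)  # (4, 5)
```

From there `matplotlib.pyplot.imshow(im)` will display it, and plain slicing (`im[r0:r1, c0:c1]`) extracts pixel values from specific regions.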
Hello, I have a problem converting a TIFF file to a numpy array.
I have a 16-bit signed raster file and I want to convert it to a numpy array.
I am using the gdal library for this.
import numpy
from osgeo import gdal
ds = gdal.Open("C:/.../dem.tif")
dem = numpy.array(ds.GetRasterBand(1).ReadAsArray())
At first glance everything converts well, but I compared the result obtained in Python with the result in GIS software and got different results.
[screenshot: Python result]
[screenshot: ArcMap result]
I found many values in the numpy array that lie outside 91 and 278 (the real min and max values), which should not exist.
GDAL already returns a Numpy array, and wrapping it in np.array by default creates a copy of that array, which is an unnecessary performance hit. Just use:
dem = ds.GetRasterBand(1).ReadAsArray()
Or, if it's a single-band raster, simply:
dem = ds.ReadAsArray()
Regarding the statistics, are you sure ArcMap shows the absolute high/low values? I know QGIS, for example, often draws the statistics from a sample of the dataset (for performance) and, depending on the settings, sometimes uses a percentile (e.g. 1%, 99%).
Edit: BTW, is this a public dataset, like an SRTM tile? It might help if you list the source.
I used a builders' level to get x,y,z coordinates on a 110' x 150' building lot.
They are not in equally spaced rows and columns, but are randomly placed.
I have found a lot of info on mapping and I'm looking forward to learning about GIS. And how to use the many free software utilities out there.
Where should I start?
Now the data is in a csv file format, but I could change that.
It seems that I want to get the information I have into a "shapefile" or a raster format.
I suppose I could look up the formats and do this, but it seems that I haven't come across the proper utility for this part of the process.
Thank You Peter
You can convert your coordinates into a shapefile to display them in QGIS, ArcMap, or similar GIS programs. You probably want a polygon shapefile.
One easy way to do this is with the PySAL library:
>>> import pysal
>>> coords = [(0,0), (10,0), (10,10), (0,10), (0,0)]
>>> pts = list(map(pysal.cg.Point, coords))
>>> polygon = pysal.cg.Polygon(pts)
>>> shp = pysal.open('myPolygon.shp','w')
>>> shp.write(polygon)
>>> shp.close()
Note: pysal currently doesn't support Z coordinates, but there are plenty of similar libraries that do.
Also notice the first and last point are the same, indicating a closed polygon.
If your x,y,z coordinates are GPS coordinates, you'll be able to align your data with other GIS data easily by telling the GIS what projection your data is in (WGS84, UTM Zone #, etc.). If your coordinates are local coordinates (not tied to a grid like UTM, etc.), you'll need to "georeference" your coordinates in order to align them with other data.
Finally, using the ogr2ogr command you can easily export your data from shapefile to other formats like KML:
ogr2ogr -f KML myPolygon.kml myPolygon.shp
You can convert a CSV file into any OGR supported format. All you need is a header file for the CSV file.
Here you have an example:
<OGRVRTDataSource>
    <OGRVRTLayer name="bars">
        <SrcDataSource>bars.csv</SrcDataSource>
        <GeometryType>wkbPoint</GeometryType>
        <LayerSRS>EPSG:4326</LayerSRS>
        <GeometryField encoding="PointFromColumns" x="longitude" y="latitude">
        </GeometryField>
    </OGRVRTLayer>
</OGRVRTDataSource>
In the SrcDataSource field you set the CSV file name.
In your case you have points, so the example is OK.
The LayerSRS field indicates the projection of the coordinates. If you have longitude and latitude, this one is OK.
The GeometryField element must contain the x and y properties, which define the columns in the CSV file that contain the coordinates. The CSV file must have a first line defining the field names.
Save the file with a .vrt extension.
Once you have this, use the ogr2ogr program, which you have if GDAL is installed.
If you want to convert the file to a Shapefile, just type in a console:
ogr2ogr -f "ESRI Shapefile" bars.shp bars.vrt
If your question is what to do with the data, you can check the gdal_grid utility program, which converts scattered data (like yours) to raster data. You can use the CSV with the VRT header file as the input, without changing the format.