Creating a Python batch image operation using ArcCatalog

I'm trying to work out how to create a batch operation tool in ArcCatalog, based on all .img raster files in a directory. I do not need to change the code, but I need to set the correct parameters.
Here's my code:
'''This script uses map algebra to find values in an
elevation raster greater than a specified value.'''
import os
import arcpy
#switches on Spatial Analyst
arcpy.CheckOutExtension('Spatial')
#loads the spatial analyst module
from arcpy.sa import *
#overwrites any previous files of same name
arcpy.env.overwriteOutput = True
# Specify the input folder and cut-offs
inDirectory = arcpy.GetParameterAsText(0)
cutoffElevation = int(arcpy.GetParameterAsText(1))
for i in os.listdir(inDirectory):
    if os.path.splitext(i)[1] == '.img':
        inRaster = os.path.join(inDirectory, i)
        outRaster = os.path.join(inDirectory, os.path.splitext(i)[0] + '_above_' + str(cutoffElevation) + '.img')
        # Make a map algebra expression and save the resulting raster
        tmpRaster = Raster(inRaster) > cutoffElevation
        tmpRaster.save(outRaster)
# Switch off Spatial Analyst
arcpy.CheckInExtension('Spatial')
In the parameters I have selected:
Input Raster - Raster Dataset, direction: Input, MultiValue: Yes
Output Raster - Raster Dataset, direction: Output
Cut off elevation - String, direction: Input
I add the images I want in the input raster, select the output raster and cut off elevation. But I get the error:
line 13, in <module>
cutoffElevation = int(arcpy.GetParameterAsText(1))
ValueError: invalid literal for int() with base 10
Does anybody know how to fix this?

You have three input parameters shown in that dialog box screenshot, but only two are described in the script. (The output raster outRaster is being defined in line 15, not as an input parameter.)
The error you're getting is because the output raster (presumably a file path and file name) can't be converted to an integer.
There are two ways to solve that:
Change the input parameters within that tool definition, so you're only feeding in input raster (parameter 0) and cut off elevation (parameter 1).
Change the code so it's looking for the correct parameters that are currently defined -- input raster (parameter 0) and cut off elevation (parameter 2).
inDirectory = arcpy.GetParameterAsText(0)
cutoffElevation = int(arcpy.GetParameterAsText(2))
Either way, you're making sure that the GetParameterAsText command is actually referring to the parameter you really want.
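A quick way to confirm which index maps to which value is to echo every parameter back before using any of them. This is only a minimal debugging sketch (not part of the original script), using arcpy.AddMessage so the values show up in the geoprocessing messages of the tool dialog:
import arcpy

# Print each parameter the tool passes in, together with its index
for idx in range(arcpy.GetArgumentCount()):
    arcpy.AddMessage("Parameter {}: {}".format(idx, arcpy.GetParameterAsText(idx)))
Run the tool once with this at the top of the script and the messages will tell you immediately whether index 1 holds the cut-off elevation or the output raster path.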

Related

Reading channel locations in MNE Python

I am new to MNE Python and I am working with .set files from EEGLAB (Matlab) for source estimation analysis. The data were recorded from 66 channels (64 EEG and 2 EOG) with EasyCaps, using the 10-20 international system. In Matlab, EEG.chanlocs correctly shows the coordinates of each electrode (labels, type, theta, radius, X, Y, Z, sph_theta, sph_phi, sph_radius, urchan, ref). But it seems that I cannot read these locations in MNE Python.
import mne
#The .set files are imported ok
data_path = r"D:\EEGdata"
fname = data_path + r'\ppt10.set'
mydata = mne.io.read_epochs_eeglab(fname)
#The data look ok, and channel labels are correctly displayed
mydata
mydata.plot()
mydata.ch_names
#But the channel locations are not found
mydata.plot_sensors() #RuntimeError: No valid channel positions found
Any suggestion on how to read the channel locations from the .set files? Or alternatively, how to manually create the locations based on the coordinates from EEG.chanlocs?
I have also tried to use the default montage 10-20, selecting only the channels I used, but I cannot make it work.
#Create a montage based on the standard 1020, which includes 94 electrode labels in upper case
montage = mne.channels.make_standard_montage('standard_1020')
[ch_name.upper() for ch_name in mydata.ch_names] #this correctly converts the channel labels into upper case
mydata.ch_names = [ch_name.upper() for ch_name in mydata.ch_names] #doesn't work
#File "<ipython-input-62-69a7053dc310>", line 1, in <module>
#mydata.ch_names=[ch_name.upper() for ch_name in mydata.ch_names]
#AttributeError: can't set attribute
montage = mne.channels.make_standard_montage('standard_1020', mydata.ch_names)
I also thought I could use a conversion tool to convert the .set files into .fif files. I have checked the online documentation, but I cannot find such a tool. Any ideas?
I had a similar problem that I fixed by adding a line calling mydata.set_montage(montage) before running mydata.plot_sensors(). You don't need to convert the channel names to upper case, as they are case-insensitive in MNE.
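For reference, here is a minimal sketch of that fix end to end, assuming the .set file loads as in the question and that the channel names match the standard_1020 labels (if the two EOG channels use non-standard names, you may need to mark them with set_channel_types or, on recent MNE versions, pass on_missing='ignore' to set_montage):
import mne

# Load the epochs as in the question (path taken from the original post)
mydata = mne.io.read_epochs_eeglab(r"D:\EEGdata\ppt10.set")

# Build the standard 10-20 template montage and attach it to the data
montage = mne.channels.make_standard_montage('standard_1020')
mydata.set_montage(montage)

# With the montage set, the sensor positions are available
mydata.plot_sensors()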

How can I cut a portion of a satellite image based on coordinates? (gdal)

I have a 7-channel satellite image (basically seven .tif files, one for each band), and a .csv file with coordinates of points of interest that fall in the region covered by the image. I want to cut out a small portion of the image around each coordinate point. How could I do that?
As I don't have fully working code right now, the exact size of those small portions doesn't really matter. For the purposes of this question, let's say I want them to be 15x15 pixels. So for the moment, my objective is to obtain a 15x15x7 array for every coordinate point in the .csv file (the "7" in "15x15x7" is because the image has 7 channels). And that is what I am stuck with.
Just to give some background in case it's relevant: I will use those vectors later to train a CNN model in keras.
This is what I did so far: (I am using jupyter notebook, anaconda environment)
imported gdal, numpy, matplotlib, geopandas, among other libraries.
Opened the .tif files using gdal and converted them into arrays
Opened the .csv file using pandas.
Created a numpy array called "imagen" of shape (7931, 7901, 7) that will host the 7 bands of the satellite image (as numbers). At this point I just need to know which row and column of the array "imagen" correspond to each coordinate point. In other words, I need to convert every coordinate point into a pair of numbers (row, column). And that is what I am stuck with.
After that, I think that the "cutting part" will be easy.
#I import libraries
from osgeo import gdal, gdal_array
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import geopandas
from geopandas import GeoDataFrame
from shapely.geometry import Point
#I access the satellite images (I just show one here to make it short)
b1 = r"E:\Imágenes Satelitales\2017\226_86\1\LC08_L1TP_226086_20170116_20170311_01_T1_sr_band1.tif"
band1 = gdal.Open(b1, gdal.GA_ReadOnly)
#I open the .csv file
file_svc = "C:\\Users\\Administrador\Desktop\DeepLearningInternship\Crop Yield Prediction\Crop Type Classification model - CNN\First\T28_Pringles4.csv"
df = pd.read_csv(file_svc)
print(df.head())
That prints something like this:
Lat1 Long1 CropingState
-37.75737 -61.14537 Barbecho
-37.78152 -61.15872 Verdeo invierno
-37.78248 -61.17755 Barbecho
-37.78018 -61.17357 Campo natural
-37.78850 -61.18501 Campo natural
#I create the array "imagen" (I only show one channel here to make it short)
imagen = (np.zeros(7931*7901*7, dtype = np.float32)).reshape(7931,7901,7)
imagen[:,:,0] = band1.ReadAsArray().astype(np.float32)
#And then I can plot it:
plt.imshow(imagen[:,:,0], cmap = 'hot')
plt.plot()
Which plots something like this:
(https://github.com/jamesluc007/DeepLearningInternship/blob/master/Crop%20Yield%20Prediction/Crop%20Type%20Classification%20model%20-%20CNN/First/red_band.png)
I want to transform those (-37, -61) coordinates into something like (2230, 1750) row/column indices, but I haven't figured out how yet. Any clues?
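What this needs is the raster's geotransform: GDAL stores an affine mapping between pixel indices and projected coordinates, so once the lat/lon points are reprojected into the raster's CRS you can invert that mapping. The following is only a hedged sketch (not from the original post), reusing b1 and imagen from the question's code; note that Landsat surface-reflectance bands are normally in a projected CRS such as UTM, and that the lon/lat argument order of TransformPoint changed between GDAL 2 and GDAL 3:
from osgeo import gdal, osr

ds = gdal.Open(b1, gdal.GA_ReadOnly)
gt = ds.GetGeoTransform()  # (originX, pixelWidth, 0, originY, 0, -pixelHeight)

# Build a transformation from WGS84 lat/lon (the CSV) to the raster's CRS
src = osr.SpatialReference()
src.ImportFromEPSG(4326)
dst = osr.SpatialReference()
dst.ImportFromWkt(ds.GetProjection())
to_raster_crs = osr.CoordinateTransformation(src, dst)

def latlon_to_rowcol(lat, lon):
    # GDAL 3 expects (lat, lon) for EPSG:4326; GDAL 2 expects (lon, lat)
    x, y, _ = to_raster_crs.TransformPoint(lat, lon)
    col = int((x - gt[0]) / gt[1])
    row = int((y - gt[3]) / gt[5])
    return row, col

row, col = latlon_to_rowcol(-37.75737, -61.14537)
patch = imagen[row - 7:row + 8, col - 7:col + 8, :]  # a 15x15x7 window
Once you have (row, col) for each point, the "cutting part" is indeed just an array slice centred on that pixel, as in the last line.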

pythonOCC set default units to inches

PLEASE SEE EDIT FOR SHORT VERSION
Been hunting through the pythonOCC documentation for this.
I have a .step file in inches. Here are the relevant lines in the .step file for confirmation:
#50 = ( CONVERSION_BASED_UNIT( 'INCH', #122 )LENGTH_UNIT( )NAMED_UNIT( #125 ) );
#51 = ( NAMED_UNIT( #127 )PLANE_ANGLE_UNIT( )SI_UNIT( $, .RADIAN. ) );
#52 = ( NAMED_UNIT( #127 )SI_UNIT( $, .STERADIAN. )SOLID_ANGLE_UNIT( ) );
~~~
#122 = LENGTH_MEASURE_WITH_UNIT( LENGTH_MEASURE( 25.4000000000000 ), #267 );
The file reads and displays in the window. When I use manual coordinates to make a bounding box, I find my box is way off. The position is off because the STEP model is not at 0,0,0. It turns out pythonOCC automatically converts everything into mm: when I manually enter box dimensions in inches, it reads them as mm. I've tried to deal with this by converting everything manually (inches * 25.4), but that is problematic and ugly.
I know pythonOCC uses line #122 of the STEP file as the conversion ratio, because I've changed it from the above to:
#122 = LENGTH_MEASURE_WITH_UNIT( LENGTH_MEASURE( 1.0 ), #267 );
When I do, my bounding box and STEP model line up perfectly... but I still know pythonOCC thinks it's converting to mm.
Anyone have any experience changing the default units for pythonocc?
I've tried to find in the following occ packages:
OCC.STEPControl, OCC.Display, OCC.AIS
and many others.
EDIT:
When I draw my box using my own coordinates like this:
minPoint = gp_Pnt(minCoords)
maxPoint = gp_Pnt(maxCoords)
my_box = AIS_Shape(BRepPrimAPI_MakeBox(minPoint, maxPoint).Shape())
display.Context.Display(my_box.GetHandle())
My coordinates are in Inches, but pythonOCC reads them as MM. If I can get my own coordinates to be read in Inches, this would be solved. Can't find anything in OCC.Display for how my coordinates are interpreted. Something like "OCC.Display.inputUnitsAre("INCHES")"?
EDIT 2:
Getting closer looking here:
https://dev.opencascade.org/doc/refman/html/class_units_a_p_i.html
Under UnitsAPI_SystemUnits and SetCurrentUnit... though I'm not sure how to implement this in Python yet to test. Working on it.
You'll find documentation for the units here
Take a look at the OCC.Extend.DataExchange module; you'll see the following function:
def write_step_file(a_shape, filename, application_protocol="AP203"):
    """ exports a shape to a STEP file
    a_shape: the topods_shape to export (a compound, a solid etc.)
    filename: the filename
    application protocol: "AP203" or "AP214"
    """
    # a few checks
    assert not a_shape.IsNull()
    assert application_protocol in ["AP203", "AP214IS"]
    if os.path.isfile(filename):
        print("Warning: %s file already exists and will be replaced" % filename)
    # creates and initialise the step exporter
    step_writer = STEPControl_Writer()
    Interface_Static_SetCVal("write.step.schema", "AP203")
    # transfer shapes and write file
    step_writer.Transfer(a_shape, STEPControl_AsIs)
    status = step_writer.Write(filename)
    assert status == IFSelect_RetDone
    assert os.path.isfile(filename)
By default, OCC writes units in millimeters, so I'm curious what function / method was used to export your STEP file.
Interface_Static_SetCVal("write.step.unit", "MM")
The docs, though, state that this method "Defines a unit in which the STEP file should be written. If set to a unit other than MM, the model is converted to these units during the translation.", so having to explicitly set this unit is unexpected.
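If the goal is to keep the model in inches when reading rather than writing, the STEP translator has a matching static parameter on the import side. This is only a hedged sketch, assuming a recent pythonocc-core (older releases drop the .Core prefix in the imports) and assuming the xstep.cascade.unit parameter is exposed in your build; my_part.step is a placeholder filename:
from OCC.Core.STEPControl import STEPControl_Reader
from OCC.Core.Interface import Interface_Static_SetCVal
from OCC.Core.IFSelect import IFSelect_RetDone

# Ask the translator to convert the model to inches instead of the default mm
Interface_Static_SetCVal("xstep.cascade.unit", "INCH")

reader = STEPControl_Reader()
status = reader.ReadFile("my_part.step")  # placeholder path
assert status == IFSelect_RetDone
reader.TransferRoots()
shape = reader.OneShape()
With that set, coordinates you type in (for the bounding box, for example) and the imported geometry should be in the same unit, so no manual * 25.4 conversion is needed.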

Setting Default Cell Size

I'm having issues trying to set a default cell size for polygon to raster conversion. I need to convert a buffered stream (polygon) to a raster layer, so that I can burn the stream into a DEM. I'd like to automate this process to include it in a larger script.
My main problem is that the PolygonToRaster_conversion() tool is not allowing me to set the cell size to a raster layer value. It's also not obeying the default raster cell size I'm trying to set in the environment. Instead, it consistently uses the default "extent divided by 250".
Here is my script for this process:
import arcpy

# Input Data
Input_DEM = "C:\\GIS\\DEM\\dem_30m.grid"
BufferedStream = "C:\\GIS\\StreamBuff.shp"
# Environment Settings
arcpy.env.cellSize = Input_DEM
# Convert to Raster
StreamRaster = "C:\\GIS\\Stream_Rast.grid"
arcpy.PolygonToRaster_conversion(BufferedStream, "FID", StreamRaster, "CELL_CENTER", "NONE", Input_DEM)
This produces the following error:
"Cell size must be greater than zero."
The same error occurs if I type out the path for the DEM layer.
I've also tried manually typing in a number for the cell size. This works, but I want to generalize the usability of this tool.
What I really don't understand is that I used the DEM layer as the cell size manually through the ArcGIS interface and this worked perfectly!!
Any help will be greatly appreciated!!!
There are several options here. First, you can use the raster band properties to extract the cell size and insert that into the PolygonToRaster function. Second, try using the MINOF parameter in the cell size environment setting.
import arcpy
# Input Data
Input_DEM = "C:\\GIS\\DEM\\dem_30m.grid"
BufferedStream = "C:\\GIS\\StreamBuff.shp"
# Use the describe function to get at cell size
desc = arcpy.Describe(Input_DEM)
cellsize = desc.meanCellWidth
# Convert to Raster
StreamRaster = "C:\\GIS\\Stream_Rast.grid"
arcpy.PolygonToRaster_conversion(BufferedStream, "FID", StreamRaster, "CELL_CENTER", "NONE", cellsize)
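And here is a minimal sketch of the second option, driving the cell size through the environment instead of passing a value to the tool. Whether "MINOF" actually picks up the DEM depends on the inputs involved, so the Describe approach above is generally more predictable; it reuses Input_DEM, BufferedStream and StreamRaster from the code above:
import arcpy

# "MINOF" means "smallest cell size of all input rasters"; you can also
# point the environment at the DEM itself.
arcpy.env.cellSize = "MINOF"
# arcpy.env.cellSize = Input_DEM   # alternative

# With no cell size argument, the tool falls back to the environment setting
arcpy.PolygonToRaster_conversion(BufferedStream, "FID", StreamRaster,
                                 "CELL_CENTER", "NONE")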

Check if a geopoint with latitude and longitude is within a shapefile

How can I check if a geopoint is within the area of a given shapefile?
I managed to load a shapefile in python, but can't get any further.
Another option is to use Shapely (a Python library based on GEOS, the engine for PostGIS) and Fiona (which is basically for reading/writing files):
import fiona
import shapely.geometry

with fiona.open("path/to/shapefile.shp") as fiona_collection:
    # In this case, we'll assume the shapefile only has one record/layer (e.g., the shapefile
    # is just for the borders of a single country, etc.).
    shapefile_record = fiona_collection.next()
    # Use Shapely to create the polygon
    shape = shapely.geometry.asShape(shapefile_record['geometry'])
    point = shapely.geometry.Point(32.398516, -39.754028)  # longitude, latitude
    # Alternative: if point.within(shape)
    if shape.contains(point):
        print "Found shape for point."
Note that doing point-in-polygon tests can be expensive if the polygon is large/complicated (e.g., shapefiles for some countries with extremely irregular coastlines). In some cases it can help to use bounding boxes to quickly rule things out before doing the more intensive test:
minx, miny, maxx, maxy = shape.bounds
bounding_box = shapely.geometry.box(minx, miny, maxx, maxy)
if bounding_box.contains(point):
    ...
Lastly, keep in mind that it takes some time to load and parse large/irregular shapefiles (unfortunately, those types of polygons are often expensive to keep in memory, too).
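If you are testing many points against the same polygon, Shapely's prepared geometries are another easy win: preparing the shape once builds internal indexes that make repeated contains() calls much cheaper. A small sketch, assuming shape is the polygon built above and points is a hypothetical list of shapely Points:
from shapely.prepared import prep

prepared_shape = prep(shape)  # do this once per polygon

# Keep only the points that fall inside the polygon
matching = [p for p in points if prepared_shape.contains(p)]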
This is an adaptation of yosukesabai's answer.
I wanted to ensure that the point I was searching for was in the same projection system as the shapefile, so I've added code for that.
I couldn't understand why he was doing a contains test on ply = feat_in.GetGeometryRef() (in my testing things seemed to work just as well without it), so I removed that.
I've also improved the commenting to better explain what's going on (as I understand it).
#!/usr/bin/python
import ogr
from IPython import embed
import sys
drv = ogr.GetDriverByName('ESRI Shapefile') #We will load a shape file
ds_in = drv.Open("MN.shp") #Get the contents of the shape file
lyr_in = ds_in.GetLayer(0) #Get the shape file's first layer
#Put the title of the field you are interested in here
idx_reg = lyr_in.GetLayerDefn().GetFieldIndex("P_Loc_Nm")
#If the latitude/longitude we're going to use is not in the projection
#of the shapefile, then we will get erroneous results.
#The following assumes that the latitude longitude is in WGS84
#This is identified by the number "4326", as in "EPSG:4326"
#We will create a transformation between this and the shapefile's
#projection, whatever it may be
geo_ref = lyr_in.GetSpatialRef()
point_ref=ogr.osr.SpatialReference()
point_ref.ImportFromEPSG(4326)
ctran=ogr.osr.CoordinateTransformation(point_ref,geo_ref)
def check(lon, lat):
    #Transform incoming longitude/latitude to the shapefile's projection
    [lon, lat, z] = ctran.TransformPoint(lon, lat)
    #Create a point
    pt = ogr.Geometry(ogr.wkbPoint)
    pt.SetPoint_2D(0, lon, lat)
    #Set up a spatial filter such that the only features we see when we
    #loop through "lyr_in" are those which overlap the point defined above
    lyr_in.SetSpatialFilter(pt)
    #Loop through the overlapped features and display the field of interest
    for feat_in in lyr_in:
        print lon, lat, feat_in.GetFieldAsString(idx_reg)

#Take command-line input and do all this
check(float(sys.argv[1]), float(sys.argv[2]))
#check(-95,47)
A few online references on EPSG:4326 and coordinate transformations were helpful for getting the projection check right.
Here is a simple solution based on pyshp and shapely.
Let's assume that your shapefile only contains one polygon (but you can easily adapt for multiple polygons):
import shapefile
from shapely.geometry import shape, Point
# read your shapefile
r = shapefile.Reader("your_shapefile.shp")
# get the shapes
shapes = r.shapes()
# build a shapely polygon from your shape
polygon = shape(shapes[0])
def check(lon, lat):
    # build a shapely point from your geopoint
    point = Point(lon, lat)
    # the contains function does exactly what you want
    return polygon.contains(point)
I did almost exactly what you are doing yesterday, using GDAL's ogr with the Python bindings. It looked like this:
import ogr
# load the shape file as a layer
drv = ogr.GetDriverByName('ESRI Shapefile')
ds_in = drv.Open("./shp_reg/satreg_etx12_wgs84.shp")
lyr_in = ds_in.GetLayer(0)
# field index for which i want the data extracted
# ("satreg2" was what i was looking for)
idx_reg = lyr_in.GetLayerDefn().GetFieldIndex("satreg2")
def check(lon, lat):
    # create point geometry
    pt = ogr.Geometry(ogr.wkbPoint)
    pt.SetPoint_2D(0, lon, lat)
    lyr_in.SetSpatialFilter(pt)
    # go over all the polygons in the layer and see if one includes the point
    for feat_in in lyr_in:
        # roughly subsets features, instead of going over everything
        ply = feat_in.GetGeometryRef()
        # test
        if ply.Contains(pt):
            # TODO do what you need to do here
            print(lon, lat, feat_in.GetFieldAsString(idx_reg))
Check out http://geospatialpython.com/2011/01/point-in-polygon.html and http://geospatialpython.com/2011/08/point-in-polygon-2-on-line.html
One way to do this is to read the ESRI Shapefile using the OGR library and then use the GEOS geometry library (http://trac.osgeo.org/geos/) to do the point-in-polygon test. This requires some C/C++ programming.
There is also a Python interface to GEOS at http://sgillies.net/blog/14/python-geos-module/ (which I have never used). Maybe that is what you want?
Another solution is to use the http://geotools.org/ library. That is in Java.
I also have my own Java software to do this (which you can download from http://www.mapyrus.org plus jts.jar from http://www.vividsolutions.com/products.asp). You need only a text command file inside.mapyrus containing the following lines to check if a point lies inside the first polygon in the ESRI Shapefile:
dataset "shapefile", "us_states.shp"
fetch
print contains(GEOMETRY, -120, 46)
And run with:
java -cp mapyrus.jar:jts-1.8.jar org.mapyrus.Mapyrus inside.mapyrus
It will print a 1 if the point is inside, 0 otherwise.
You might also get some good answers if you post this question on https://gis.stackexchange.com/
If you want to find out which polygon (from a shapefile full of them) contains a given point (and you have a bunch of points as well), the fastest way is to use PostGIS. I actually implemented a Fiona-based version using the answers here, but it was painfully slow (even using multiprocessing and checking the bounding box first): 400 minutes of processing for 50k points. Using PostGIS, that took less than 10 seconds. Spatial (GiST) indexes are efficient!
shp2pgsql -s 4326 shapes.shp > shapes.sql
That will generate an SQL file with the information from the shapefile. Create a database with PostGIS support, run that SQL, and create a GiST index on the geom column. Then, to find the name of the polygon containing a point:
sql="SELECT name FROM shapes WHERE ST_Contains(geom,ST_SetSRID(ST_MakePoint(%s,%s),4326));"
cur.execute(sql,(x,y))
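For completeness, here is a hedged sketch of the Python side with psycopg2; the database name, credentials, and the name column are assumptions that need to match your own setup:
import psycopg2

# Connect to the PostGIS-enabled database that shp2pgsql populated
conn = psycopg2.connect(dbname="gisdb", user="postgres",
                        password="secret", host="localhost")
cur = conn.cursor()

x, y = -120.0, 46.0  # longitude, latitude of the point to test
sql = ("SELECT name FROM shapes "
       "WHERE ST_Contains(geom, ST_SetSRID(ST_MakePoint(%s, %s), 4326));")
cur.execute(sql, (x, y))
row = cur.fetchone()
print(row[0] if row else "no containing polygon")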
