Divide geometry bounding box into equal smaller boxes - python

I have a bounding box with coordinates:
bottom_left = [10.7510994291, 106.5517721598]
bottom_right = [10.7510994291, 106.7500970722]
top_right = [10.9005609767, 106.7500970722]
top_left = [10.9005609767, 106.5517721598]
I'm trying to divide it into smaller boxes that all have the same area using Python. I'm able to create two lists using this code:
cols = np.linspace(bottom_left[1], bottom_right[1], num=15)
rows = np.linspace(bottom_left[0], top_left[0], num=15)
Here is the result:
[106.55177216 106.56593822 106.58010429 106.59427036 106.60843642
106.62260249 106.63676855 106.65093462 106.66510068 106.67926675
106.69343281 106.70759888 106.72176494 106.73593101 106.75009707]
[10.75109943 10.76177525 10.77245108 10.7831269 10.79380273 10.80447855
10.81515438 10.8258302 10.83650603 10.84718185 10.85785768 10.8685335
10.87920933 10.88988515 10.90056098]
I'm trying to combine the lat/long values to create the boxes; here is an example of two small boxes:
[[106.55177216,10.75109943],[106.55177216,10.76177525],[106.56593822,10.75109943],[106.56593822,10.76177525]]
[[106.55177216,10.75109943],[106.55177216,10.76177525],[106.580104,10.751099],[106.580104,10.761775]]
I know that a loop can handle this case, but I'm still trying to find a better way. Any help is appreciated. Many thanks.
PS: I'm new to Python and don't know much about the libraries in the Python ecosystem.

So, if I now understand correctly, you want to take items from each list, two at a time, and generate the four points that will make up the corners of a box.
In this case the solution is itertools.product():
from itertools import product

out = []
for x in range(0, len(lats), 2):
    out.append(list(product(lats[x:x+2], longs[x:x+2])))
out
[[(106.55177216, 10.75109943), (106.55177216, 10.76177525), (106.56593822, 10.75109943), (106.56593822, 10.76177525)], [(106.58010429, 10.77245108), (106.58010429, 10.7831269), (106.59427036, 10.77245108), (106.59427036, 10.7831269)], [(106.60843642, 10.79380273), (106.60843642, 10.80447855), (106.62260249, 10.79380273), (106.62260249, 10.80447855)], [(106.63676855, 10.81515438), (106.63676855, 10.8258302), (106.65093462, 10.81515438), (106.65093462, 10.8258302)], [(106.66510068, 10.83650603), (106.66510068, 10.84718185), (106.67926675, 10.83650603), (106.67926675, 10.84718185)], [(106.69343281, 10.85785768), (106.69343281, 10.8685335), (106.70759888, 10.85785768), (106.70759888, 10.8685335)], [(106.72176494, 10.87920933), (106.72176494, 10.88988515), (106.73593101, 10.87920933), (106.73593101, 10.88988515)], [(106.75009707, 10.90056098)]]
Note: I'm assuming lats and longs are lists. Also, since we take items two at a time and each list has 15 entries, the final slice holds a single value, which is why the last entry above is a lone point rather than a full box.
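If instead you want every cell of the full grid rather than one box per pair of slices, a minimal sketch (assuming the same cols and rows arrays built with np.linspace above) pairs each consecutive column interval with each consecutive row interval:

from itertools import product
import numpy as np

cols = np.linspace(106.5517721598, 106.7500970722, num=15)
rows = np.linspace(10.7510994291, 10.9005609767, num=15)

# One box per grid cell: 14 x 14 = 196 boxes, each a list of
# four (lon, lat) corner tuples.
boxes = [list(product(cols[i:i+2], rows[j:j+2]))
         for i in range(len(cols) - 1)
         for j in range(len(rows) - 1)]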

Related

Can morphology.remove_small_objects remove big objects?

I was wondering if morphology.remove_small_objects could be used to remove big objects. I am using this tool to detect the objects as seen in the figure.
However, there are big objects, as seen on the left. Is there any way I could use morphology.remove_small_objects with a threshold, for example:
mask = morphology.remove_small_objects(maske, 30)
Could I use a range, say between 30 and 200, so I can ignore the red detections in the image?
Otherwise, I will just count the white pixels of each object and remove the ones with the highest counts.
This might be a good contribution to the scikit-image library itself, but for now, you need to roll your own. As suggested by Christoph, you can subtract the result of remove_small_objects from the original image to remove large objects. So, something like:
from skimage.morphology import remove_small_objects

def filter_objects_by_size(label_image, min_size=0, max_size=None):
    # Drop everything smaller than min_size.
    small_removed = remove_small_objects(label_image, min_size)
    if max_size is not None:
        # Dropping everything smaller than max_size leaves only the
        # large objects; subtracting them keeps just the mid-sized ones.
        mid_removed = remove_small_objects(small_removed, max_size)
        large_removed = small_removed - mid_removed
        return large_removed
    else:
        return small_removed
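Hypothetical usage on the question's detection, assuming mask is a boolean array; the subtraction above needs an integer label image, which skimage.measure.label provides:

from skimage.measure import label

labels = label(mask)  # integer-labelled connected components
kept = filter_objects_by_size(labels, min_size=30, max_size=200)
kept_mask = kept > 0  # back to a boolean mask if needed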

Convert LineString / MultiLineString geometries to lat lon

I am using this Mapillary endpoint: https://tiles.mapillary.com/maps/vtp/mly1_public/2/{zoom_level}/{x}/{y}?access_token={} and getting responses like the one in the photo back. Also, here is the Mapillary documentation.
It is not quite clear to me what the nested coordinate lists in the response represent. By the looks of it, I initially thought they might be pixel coordinates, but judging by the context (the API documentation) and the endpoint I am using, I would say that is not the case. Also, I am not sure the JSON response you see in the picture is valid GeoJSON; some online formatters did not accept it as valid.
I would like to find the bounding box of the "sequence". For context, that would be the minimal-area rectangle, defined by two (lat, lon) positions, that fully encompasses the geometry of the so-called "sequence"; a "sequence" is basically a series of photos taken during a vehicle or on-foot trip, together with the metadata associated with the photos (the metadata is available via another endpoint, but that is just for context).
My question is: is it possible to turn the coordinates you see in the pictures into (lat, lon)? Having those, it would be easy for me to find the bounding box of the sequence. And if so, how? Also, please note that some of the nested lists are of type LineString while others are MultiLineString (I read about the difference here: help.arcgis.com, hope this helps).
Minimal reproducible code snippet:
import json
import requests
import mercantile
import mapbox_vector_tile as mvt

ACCESS_TOKEN = 'XXX'  # can be provided from here: https://www.mapillary.com/dashboard/developers

# us_west_coast_bbox
z6_tiles = list(mercantile.tiles(
    west=-125.066423,
    south=42.042594,
    east=-119.837770,
    north=49.148042,
    zooms=6,
))
# pprint(z6_tiles)

vector_tiles_url = 'https://tiles.mapillary.com/maps/vtp/mly1_public/2/{}/{}/{}?access_token={}'
for tile in z6_tiles:
    res = requests.get(vector_tiles_url.format(tile.z, tile.x, tile.y, ACCESS_TOKEN))
    res_json = mvt.decode(res.content)
    with open('idea.json', 'w+') as f:
        json.dump(res_json, f, indent=4)
I think this get_normalized_coordinates function is the solution I was looking for. Please take it with a grain of salt, as I have not fully tested it yet; I will try to, and then I will update my answer. Also, please be cautious: for tiles closer to either the South or the North Pole, the Z14_TILE_DMD_HEIGHT constant will not be the one you see below, but something more like 0.0018958715374282065 (a tile's width in degrees is the same at every latitude for a given zoom, but its height in degrees shrinks toward the poles).
Z14_TILE_DMD_WIDTH = 0.02197265625
Z14_TILE_DMD_HEIGHT = 0.018241950298914844

def get_normalized_coordinates(bbox: mercantile.LngLatBbox,
                               target_lat: int,
                               target_lon: int,
                               extent: int = 4096):  # 4096 is Mapillary's default
    """
    Returns a (lon, lat) tuple representing the real position on the
    world map of a map feature.
    """
    min_lon, min_lat, _, _ = bbox
    return (min_lon + target_lon / extent * Z14_TILE_DMD_WIDTH,
            min_lat + target_lat / extent * Z14_TILE_DMD_HEIGHT)
And if you are wondering how I came up with the constants you see: I simply iterated over the list of tiles I am interested in and checked that they all have the same width/height (this might not have been the case, given what I mentioned above about tiles closer to one of the poles; I think this is called "distortion", but I am not sure). Also, for context: the tiles I iterated over are within this bbox: (-125.024414, 31.128199, -108.896484, 49.152970) (min_lon, min_lat, max_lon, max_lat; the US west coast), which I believe is also why all the tiles have the same width/height.
set_test = set()
for tile in relevant_tiles_set:
    curr_bbox = mercantile.bounds(tile)
    dm_width_diff: float = curr_bbox.east - curr_bbox.west
    dm_height_diff: float = curr_bbox.north - curr_bbox.south
    set_test.add((dm_width_diff, dm_height_diff))
set_test
output:
{(0.02197265625, 0.018241950298914844)}
UPDATE: I forgot to mention that you do not actually need to compute those WIDTH and HEIGHT constants: you can just replace them with (max_lon - min_lon) and (max_lat - min_lat), respectively. What I did with the constants was for testing purposes only.
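A minimal sketch of that update, with a hypothetical z14 tile and a made-up decoded vertex at tile-local coordinates (3000, 1024); the bounding box of the whole sequence is then just the min/max over all converted vertices:

tile = mercantile.Tile(x=2746, y=1884, z=14)  # hypothetical tile
min_lon, min_lat, max_lon, max_lat = mercantile.bounds(tile)
extent = 4096
target_lon, target_lat = 3000, 1024  # tile-local units from mvt.decode

lon = min_lon + target_lon / extent * (max_lon - min_lon)
lat = min_lat + target_lat / extent * (max_lat - min_lat)

# With all vertices converted into a list coords of (lon, lat) pairs:
# lons, lats = zip(*coords)
# bbox = (min(lons), min(lats), max(lons), max(lats))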

Remove elements from array of arrays

I have an array of arrays from which I want to remove specific elements according to a logical condition.
The array has the form galaxies = [[z1, ra1, dec1, ..., distance1], [z2, ra2, dec2, ..., distance2], ...], and I want to remove all entries whose distance term (index 4) is greater than 1. I've tried to write "from galaxies[i], remove all galaxies such that galaxies[i][4] > 1".
My code right now is:
galaxies_in_cluster = []
for i in range(len(galaxies)):
    galacticcluster = galaxies[~(galaxies[i][4] <= 1)]
    galaxies_in_cluster.append(galacticcluster)
where
galaxies = [array([1.75000000e-01, 2.43794800e+02, 5.63820000e+01, 6.80000000e+00, 7.07290131e-02]),
            array([1.75000000e-01, 2.40898000e+02, 5.15900000e+01, 7.10000000e+00, 5.60800387e+00]),
            array([1.80000000e-01, 2.43792000e+02, 5.63990000e+01, 6.50000000e+00, 5.00059297e+02]),
            array([1.75000000e-01, 2.43805000e+02, 5.62190000e+01, 7.80000000e+00, 2.16588562e-01])]
I want it to return
galaxies_in_cluster = [array([1.75000000e-01, 2.43794800e+02, 5.63820000e+01, 6.80000000e+00, 7.07290131e-02]),
                       array([1.75000000e-01, 2.43805000e+02, 5.62190000e+01, 7.80000000e+00, 2.16588562e-01])]
(basically eliminating the second and third entries), but it's returning the first and second entries twice, which doesn't make sense to me, especially since, for the second entry, galaxies[2][4] > 1.
Any help would be much appreciated.
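For what it's worth, a minimal sketch of the intended filter (assuming galaxies is the list of 1-D arrays shown above, with the distance term at index 4):

import numpy as np

# Keep only the entries whose distance term is at most 1.
galaxies_in_cluster = [g for g in galaxies if g[4] <= 1]

# Equivalently, stacking into a 2-D array and using a boolean mask:
arr = np.vstack(galaxies)
galaxies_in_cluster = arr[arr[:, 4] <= 1]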

How can I set the length of an edge in Maya via Python?

I have a number of edges with different lengths. I get the length of one of the edges and then want to scale a selection of edges to that length.
I get the length via this code (not my code, by the way: http://forums.cgsociety.org/archive/index.php?t-846300.html):
import math
import maya.cmds as cmds

global length
length = []
sel = cmds.ls(sl=True, fl=True)
cmds.ConvertSelectionToVertices()
p = cmds.xform(sel, q=True, t=True, ws=True)
# Distance between the edge's two vertices.
length = math.sqrt(math.pow(p[0] - p[3], 2) + math.pow(p[1] - p[4], 2) + math.pow(p[2] - p[5], 2))
cmds.select(sel)
cmds.selectMode(co=True)
cmds.selectType(eg=True)
print 'Edge Length=', length
This scales all selected edges along the component axis y direction, which is how I want them to scale:
cmds.scale(0, 1, 0, cs=True)
Now, to set the length of an edge to the length I got, I've tried the following, but that doesn't work:
cmds.scale(length, cs=True)
Can someone point me in the right direction?
OK, in case someone needs this working in the future, here is the entire code:
import math
import maya.cmds as cmds

def getEdgeLengthFunc(*pArgs):
    global referenceLength
    sel1 = cmds.ls(sl=True, fl=True)
    cmds.ConvertSelectionToVertices()
    p = cmds.xform(sel1, q=True, t=True, ws=True)
    # Distance between the reference edge's two vertices.
    referenceLength = math.sqrt(math.pow(p[0] - p[3], 2) + math.pow(p[1] - p[4], 2) + math.pow(p[2] - p[5], 2))
    cmds.select(sel1)
    cmds.selectMode(co=True)
    cmds.selectType(eg=True)

def setEdgeLengthFunc(*pArgs):
    sel2 = cmds.ls(sl=True, fl=True)
    cmds.ConvertSelectionToVertices()
    p = cmds.xform(sel2, q=True, t=True, ws=True)
    initialLength = math.sqrt(math.pow(p[0] - p[3], 2) + math.pow(p[1] - p[4], 2) + math.pow(p[2] - p[5], 2))
    cmds.select(sel2)
    cmds.selectMode(co=True)
    cmds.selectType(eg=True)
    # Factor that scales this edge to the reference length.
    lengthToSet = abs(referenceLength / initialLength)
    cmds.scale(1, lengthToSet, 1, cs=True, a=True)
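As a quick sanity check on lengthToSet: if referenceLength is 2.0 and the selected edge's initialLength is 0.5, the factor is 4.0, so cmds.scale(1, 4.0, 1, cs=True, a=True) stretches the edge along the component Y axis to four times its current length, matching the reference edge.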

Count number of points in multipolygon shapefile using Python

I have a polygon shapefile of the U.S. made up of individual states, with the state name as an attribute value. In addition, I have arrays storing the latitude and longitude values of point events that I am also interested in. Essentially, I would like to 'spatially join' the points and polygons (or perform a check to see which polygon, i.e., state, each point is in), then sum the number of points in each state to find out which state has the most 'events'.
I believe the pseudocode would be something like:
Read in US.shp
Read in lat/lon points of events
Loop through each state in the shapefile and find number of points in each state
print 'Here is a list of the number of points in each state: '
Any libraries or syntax would be greatly appreciated.
Based on what I can tell, the OGR library is what I need, but I am having trouble with the syntax:
from osgeo import ogr

dsPolygons = ogr.Open('US.shp')
polygonsLayer = dsPolygons.GetLayer()

# Iterating all the polygons
polygonFeature = polygonsLayer.GetNextFeature()
k = 0
while polygonFeature:
    k = k + 1
    print "processing " + polygonFeature.GetField("STATE") + "-" + str(k) + " of " + str(polygonsLayer.GetFeatureCount())
    geometry = polygonFeature.GetGeometryRef()

    # Read in some points?
    geomcol = ogr.Geometry(ogr.wkbGeometryCollection)
    point = ogr.Geometry(ogr.wkbPoint)
    point.AddPoint(-122.33, 47.09)
    point.AddPoint(-110.11, 33.33)
    # geomcol.AddGeometry(point)
    print point.ExportToWkt()
    print point

    numCounts = 0.0
    while pointFeature:
        if pointFeature.GetGeometryRef().Within(geometry):
            numCounts = numCounts + 1
        pointFeature = pointsLayer.GetNextFeature()
    polygonFeature = polygonsLayer.GetNextFeature()

# Loop through to see how many events in each state
I like the question. I doubt I can give you the best answer, and I definitely can't help with OGR, but FWIW I'll tell you what I'm doing right now.
I use GeoPandas, a geospatial extension of pandas. I recommend it: it's high-level and does a lot, giving you everything in Shapely and Fiona for free. It is in active development by @kajord and others.
Here's a version of my working code. It assumes you have everything in shapefiles, but it's easy to generate a geopandas.GeoDataFrame from a list.
import geopandas as gpd

# Read the data.
polygons = gpd.GeoDataFrame.from_file('polygons.shp')
points = gpd.GeoDataFrame.from_file('points.shp')

# Make a copy because I'm going to drop points as I
# assign them to polys, to speed up subsequent search.
pts = points.copy()

# We're going to keep a list of how many points we find.
pts_in_polys = []

# Loop over polygons with index i.
for i, poly in polygons.iterrows():
    # Keep a list of points in this poly
    pts_in_this_poly = []

    # Now loop over all points with index j.
    for j, pt in pts.iterrows():
        if poly.geometry.contains(pt.geometry):
            # Then it's a hit! Add it to the list,
            # and drop it so we have less hunting.
            pts_in_this_poly.append(pt.geometry)
            pts = pts.drop([j])

    # We could do all sorts, like grab a property of the
    # points, but let's just append the number of them.
    pts_in_polys.append(len(pts_in_this_poly))

# Add the number of points for each poly to the dataframe.
polygons['number of points'] = gpd.GeoSeries(pts_in_polys)
The developer tells me that spatial joins are 'new in the dev version', so if you feel like poking around in there, I'd love to hear how that goes! The main problem with my code is that it's slow.
import geopandas as gpd

# Read the data.
polygons = gpd.GeoDataFrame.from_file('polygons.shp')
points = gpd.GeoDataFrame.from_file('points.shp')

# Spatial join
pointsInPolygon = gpd.sjoin(points, polygons, how="inner", op='intersects')

# Add a field with 1 as a constant value
pointsInPolygon['const'] = 1

# Group by the column on which you want to aggregate data
pointsInPolygon.groupby(['statename']).sum()
The ['const'] column will give you the count of points in your multipolygons.
If you want to see other columns as well, just type something like this:
pointsInPolygon = pointsInPolygon.groupby('statename').agg({'columnA': 'first', 'columnB': 'first', 'const': 'sum'}).reset_index()
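Note: in newer GeoPandas releases the op keyword of sjoin has been renamed, so an equivalent call would look like this (assuming GeoPandas 0.10 or later):
pointsInPolygon = gpd.sjoin(points, polygons, how="inner", predicate="intersects")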
