Folium - Extracting an nxn image of mxm meters - python

I am using Folium in Python to extract maps. Given a coordinate, I want to extract an image of the mxm meter square around that coordinate. So, using pyproj, I project the coordinate to UTM (which is in meters), create the mxm square, and project the corners back to lat/lon to get the coordinates of the bounding box's corners.
Then I used fit_bounds with those corners to get my nxn picture. However, the output is still a rectangle. Sure, I can use Pillow to crop the image after the fact, but I need more control over how many meters the image actually covers... and right now I am not sure what I am getting.
What is the best way to extract a square image using Folium? Let's say, for example, that I want to extract a map of the 100x100 meter area centered at the coordinates (48.8584, 2.2945). What is the best approach to get this map?
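For reference, here is a minimal sketch of that projection round-trip with pyproj and fit_bounds (the UTM zone is derived from the longitude, and the northern hemisphere is assumed):

```python
import folium
from pyproj import Transformer

def square_corners(lat, lon, size_m):
    """SW and NE corners of a size_m x size_m square centered on (lat, lon)."""
    zone = int((lon + 180) // 6) + 1          # UTM zone from longitude
    utm = f"EPSG:{32600 + zone}"              # northern hemisphere assumed
    to_utm = Transformer.from_crs("EPSG:4326", utm, always_xy=True)
    to_wgs = Transformer.from_crs(utm, "EPSG:4326", always_xy=True)

    x, y = to_utm.transform(lon, lat)         # center point in meters
    half = size_m / 2
    lon_sw, lat_sw = to_wgs.transform(x - half, y - half)
    lon_ne, lat_ne = to_wgs.transform(x + half, y + half)
    return [[lat_sw, lon_sw], [lat_ne, lon_ne]]

corners = square_corners(48.8584, 2.2945, 100)
m = folium.Map(location=[48.8584, 2.2945])
m.fit_bounds(corners)
```

Note that fit_bounds only guarantees the bounds are visible; the rendered map keeps the aspect ratio of the enclosing figure, so the output stays rectangular unless the figure itself is square (for example by adding the map to a folium.Figure with equal width and height).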

I figured out how to control the scale of the extracted image.
OpenStreetMap's wiki has a page with information about the different zoom levels. It provides formulas for figuring out how much of the real world is covered by a single pixel, as a function of the zoom level and the latitude at which the map is extracted:

s_pixel = C * cos(latitude) / (2 ** (zoomlevel + 8))

where C ≈ 40,075,016.686 m is the Earth's equatorial circumference.
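As a minimal sketch of that formula in Python (the latitude and zoom range below are just example values), you can tabulate how many pixels a 100 m square spans at each zoom level and pick the one that matches the image size you want:

```python
import math

C = 40075016.686  # Earth's equatorial circumference in meters (OSM wiki)

def meters_per_pixel(lat_deg, zoom):
    """Ground distance covered by one tile pixel at this latitude and zoom."""
    return C * math.cos(math.radians(lat_deg)) / (2 ** (zoom + 8))

lat = 48.8584
for zoom in range(15, 20):
    px = 100 / meters_per_pixel(lat, zoom)
    print(f"zoom {zoom}: a 100 m square spans {px:.0f} px")
```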

Related

Python: correcting offset of points on google maps (due to satellite image tilt)

I'm using the Google Maps Static API to get top-view satellite images of objects for which I have the surface coordinates (LoD1 / LoD2).
The points are always slightly off; I think this is due to a small tilt in the satellite image itself (is that a correct assumption?).
For example, in this image I have the building shape, but the points are slightly off. Is there a way to correct this for all objects?
The red markers are the standard Google Maps API pointers, the center of the original image (here it is cropped) is the center of the building, and the white line is a cv2.polylines rendering of the object shape.
Just shifting by n pixels will not help, since the offset depends on the angle between the satellite and the object, and on the shape of that object.
I am using the pyproj library to transform the coordinates, and then convert the coordinates to pixel values (by setting the center point as the center pixel; with the difference in coordinate space, one can calculate the edge points' pixel values too).
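For reference, that conversion looks roughly like this as a minimal sketch; the meters-per-pixel scale, image size, UTM zone, and center coordinate are all placeholder values, and a locally flat UTM approximation is assumed:

```python
from pyproj import Transformer

M_PER_PX = 0.15                     # placeholder scale from the zoom level
IMG_W, IMG_H = 640, 640             # placeholder image size
CENTER_LAT, CENTER_LON = 52.0, 4.0  # placeholder center (in UTM zone 31N)

to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32631", always_xy=True)
cx, cy = to_utm.transform(CENTER_LON, CENTER_LAT)

def lonlat_to_pixel(lon, lat):
    """Offset from the image center in meters, scaled to pixels (y flipped)."""
    x, y = to_utm.transform(lon, lat)
    px = IMG_W / 2 + (x - cx) / M_PER_PX
    py = IMG_H / 2 - (y - cy) / M_PER_PX
    return px, py
```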
So, the good news is that there is no need to "correct" this for all objects, because there is no way to do it without using 3D models and textures.
Google (and most map platforms, for that matter) doesn't actually use satellite images; they use aerial images taken from planes. The planes don't fly directly over the top of every building (imagine how tight and redundant their flight paths would be if they did!).
Instead, the plane takes images from an angle, and then, through the wonders of photogrammetric processing, the images are corrected and ortho-rectified so that the ground surface is in the right place everywhere.
What can't (and shouldn't) be corrected in a 2D image is the location of objects above ground height, like the roof in your image. For a more extreme example, look at a skyscraper and you'll realise you can never get the pixels correct above ground level:
https://goo.gl/maps/4tLSrd7yXQYWZPTy7

Transform Triangle Mesh OpenCV

I am trying to transform a picture with OpenCV in Python.
For this, I have a grid of points placed on the image that I can also move.
I then split each grid rectangle into two triangles, so for each triangle I have its coordinates both where it was at the beginning and where it is after I moved some points around.
Now I want to transform the image so it fits the new mesh, but without visible lines at the triangle edges or image pieces getting ripped apart by being transformed differently.
Help!
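A common way to do this is the piecewise affine warp used in face morphing: each triangle gets its own affine transform, is warped into its destination bounding box, masked to the triangle, and blended into the output. A minimal OpenCV sketch, where the meshes and the input path are placeholders:

```python
import cv2
import numpy as np

def warp_triangle(src_img, dst_img, src_tri, dst_tri):
    """Warp one triangular patch of src_img into dst_img in place."""
    src_tri, dst_tri = np.float32(src_tri), np.float32(dst_tri)
    x1, y1, w1, h1 = cv2.boundingRect(src_tri)   # source bounding box
    x2, y2, w2, h2 = cv2.boundingRect(dst_tri)   # destination bounding box

    # Triangle coordinates relative to their bounding boxes.
    src_rel = np.float32(src_tri - [x1, y1])
    dst_rel = np.float32(dst_tri - [x2, y2])

    # Affine transform mapping the source triangle onto the destination one.
    M = cv2.getAffineTransform(src_rel, dst_rel)
    patch = cv2.warpAffine(src_img[y1:y1 + h1, x1:x1 + w1], M, (w2, h2),
                           flags=cv2.INTER_LINEAR,
                           borderMode=cv2.BORDER_REFLECT_101)

    # Mask out everything outside the destination triangle and blend it in.
    mask = np.zeros((h2, w2, 3), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_rel), (1, 1, 1))
    roi = dst_img[y2:y2 + h2, x2:x2 + w2]
    dst_img[y2:y2 + h2, x2:x2 + w2] = roi * (1 - mask) + patch * mask

img = cv2.imread("input.png")                                 # placeholder path
out = img.copy()
src_triangles = [np.float32([[0, 0], [200, 0], [0, 200]])]    # mesh before
dst_triangles = [np.float32([[10, 5], [210, 20], [5, 215]])]  # mesh after
for s, d in zip(src_triangles, dst_triangles):
    warp_triangle(img, out, s, d)
cv2.imwrite("warped.png", out)
```

Because neighbouring triangles share their edge vertices, both affine transforms map the shared edge to the same destination pixels, which is what keeps the seams from showing; skimage.transform.PiecewiseAffineTransform does the same thing in a single call if you prefer.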

How can I get the coordinates of arbitrary points on the sides of a square-ish object in an image in Python?

I'm trying to get the coordinates of points on a billiards table's sides.
Here is the original image. In it, I want to find the coordinates of the black points marked on the table's sides (one black point is highlighted in each of the two cropped views).
I need to get the edge coordinates even when something hides part of the image; that way, if I can get two arbitrary points on one side of the table, I can calculate the edge points.
But I have no idea how to get those points...
Please help, and thanks for reading.
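One common approach, sketched below with placeholder parameters you would need to tune: threshold the table's felt color, run a Hough transform on the mask's edges to find the straight sides, and intersect adjacent sides to recover the corners, which works even when parts of a side are hidden:

```python
import cv2
import numpy as np

img = cv2.imread("table.jpg")                     # placeholder path
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Assumed green felt; adjust the HSV range for your table's color.
mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))

edges = cv2.Canny(mask, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=50)

def intersection(l1, l2):
    """Intersection of two infinite lines, each given as (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = l1
    x3, y3, x4, y4 = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return px, py
```

You would still need to group the detected segments into the four sides (for example by angle and distance from the center) before intersecting neighbouring sides to get the corners.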

How to find the largest empty rectangle using OpenCV?

I need to find the coordinates of the largest empty rectangle in a PNG image. The rectangle should consist of light colors (if that is too difficult, white pixels only are fine) and should be axis-aligned.
I am new to computer vision and found out about OpenCV. I am currently using its Python interface and started tackling this problem with the SimpleBlobDetector interface, but that only gives me the center of each blob and a radius.
Can anyone point me in the right direction for this?
EDIT: I need to do this with a regular colored PNG image, not a binary matrix
You can use a contour extractor; with the resulting point list, you can check each rectangle's size by checking the sizes of the lists, assuming all the rectangles are parallel to the cardinal axes. If they are not, you need to compute the distance from each pixel to the next along the contour, using the x and y coordinates of each.
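Beyond the contour approach: if you binarize the image first ("light" pixels become 1), this is the classic maximal-rectangle problem, solvable in O(width x height) with a running histogram per row and a monotonic stack. A sketch, where the threshold of 200 is an arbitrary stand-in for "light colors":

```python
import cv2
import numpy as np

def largest_rectangle(binary):
    """Largest axis-aligned rectangle of 1s; returns (x, y, w, h)."""
    h, w = binary.shape
    heights = np.zeros(w, dtype=int)
    best, best_area = (0, 0, 0, 0), 0
    for row in range(h):
        # Histogram of consecutive 1s ending at this row, per column.
        heights = np.where(binary[row] > 0, heights + 1, 0)
        stack = []                                # (start_column, bar_height)
        for col in range(w + 1):
            cur = heights[col] if col < w else 0  # sentinel flushes the stack
            start = col
            while stack and stack[-1][1] >= cur:
                s, bar = stack.pop()
                if bar * (col - s) > best_area:
                    best_area = bar * (col - s)
                    best = (s, row - bar + 1, col - s, bar)
                start = s
            stack.append((start, cur))
    return best

img = cv2.imread("image.png")                     # placeholder path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 200, 1, cv2.THRESH_BINARY)
x, y, rw, rh = largest_rectangle(binary)
```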

Perform spherical projection of image in python

I am writing a program using PyGTK that displays a gtk.Image. The desktop is projected onto the inside of a spherical dome, so an image that is rectangular on the screen gets distorted once projected onto the sphere.
To help picture this: the desktop itself is square. The center pixel of the desktop projects to the zenith, and a circle inscribed inside the square desktop becomes the horizon (0 degrees elevation in polar coordinates). Everything outside that circle (the corners of the desktop) is not displayed.
I would like to somehow modify the gtk.Image such that it still appears rectangular on the spherical surface. I'm sure there are lots of details in how this projection could be done, but very simplistically I have to convert the rectangular image into a curved trapezoid. Converting to a range of polar coordinates (e.g., map this rectangle to the area between two azimuth and two elevation angles) would be a good first approximation, though you can imagine if the elevation angles are 0 and 90, the resulting image will be a wedge of the sphere and not look rectangular at all.
How can I apply transformations like this to a gtk.Image (or its underlying Pixbuf)? Is there a package already that can do this? If not, how should I go about writing it from scratch? Presumably I would have to pull out the pixel values, map them to some new grid, and replace the original image. I just don't want to reinvent something that has already been done.
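There may be a library for this, but the remapping itself is small enough to write by hand: pull the pixels into a numpy array, compute for each desktop pixel which source pixel it should sample, and interpolate. A sketch of that inverse mapping for the geometry described above, borrowing cv2.remap purely for the interpolation step (the azimuth/elevation ranges are parameters of the placement):

```python
import cv2
import numpy as np

def place_on_dome(src, canvas_size, az_range, el_range):
    """Map a rectangular image onto an azimuth/elevation patch of a square
    dome canvas (zenith at the center, horizon = the inscribed circle)."""
    n = canvas_size
    cx = cy = (n - 1) / 2.0
    radius = n / 2.0

    # Polar coordinates of every canvas pixel.
    ys, xs = np.mgrid[0:n, 0:n].astype(np.float32)
    dx, dy = xs - cx, ys - cy
    r = np.sqrt(dx * dx + dy * dy)
    az = np.arctan2(dy, dx)                    # azimuth, -pi..pi
    el = (1.0 - r / radius) * (np.pi / 2)      # zenith = pi/2, horizon = 0

    az0, az1 = az_range
    el0, el1 = el_range
    # Normalized source coordinates; pixels outside the patch sample nothing.
    u = (az - az0) / (az1 - az0)
    v = (el1 - el) / (el1 - el0)
    inside = (u >= 0) & (u <= 1) & (v >= 0) & (v <= 1) & (r <= radius)

    h, w = src.shape[:2]
    map_x = np.where(inside, u * (w - 1), -1).astype(np.float32)
    map_y = np.where(inside, v * (h - 1), -1).astype(np.float32)
    return cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)
```

If I remember the PyGTK API correctly, Pixbuf.get_pixels_array() and gtk.gdk.pixbuf_new_from_array() can bridge between the gtk.Image's Pixbuf and the numpy array; treat those names as something to verify.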
