I have recently been looking at using the Bing Maps API to return images of properties and was very interested in the oblique imagery Bing provides.
The API does offer this imagery and there are examples of it working. However, when I implement it myself, the Bird's Eye view does not return the building of interest: it returns a view off centre from the point passed in the URL.
An example is below, showing the URLs for the Leadenhall Building in the City of London:
aerial_url = http://dev.virtualearth.net/REST/V1/Imagery/Map/Aerial/51.5138,-0.0821/18?key={api_key}
birdseye_url = http://dev.virtualearth.net/REST/V1/Imagery/Map/BirdsEyeV2/51.5138,-0.0821/18?key={api_key}
You will notice that the building is in the top right of the second image, rather than in the centre as in the first.
Is anyone able to help me resolve this, as ideally I would want it to be centre of the image?
Thanks
I took a look at it, and it appears to be centred correctly; it is just hard to tell when comparing the two at the same zoom level. Setting the aerial image to zoom level 17 provides a wider overview so you can match the images more easily. Note that the coordinates are based on the ground, not building heights: aerial images are captured looking straight down, while Bird's Eye images are captured at approximately a 45 degree angle.
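If it helps to reproduce the comparison, here is a minimal Python sketch (using the requests library, with a placeholder key) that fetches both views at the zoom levels suggested above:

    import requests

    base = "http://dev.virtualearth.net/REST/V1/Imagery/Map"
    point = "51.5138,-0.0821"
    api_key = "YOUR_BING_MAPS_KEY"  # placeholder, substitute your own key

    # Aerial at zoom 17 for the wider overview, Bird's Eye at zoom 18
    for imagery, zoom in [("Aerial", 17), ("BirdsEyeV2", 18)]:
        url = f"{base}/{imagery}/{point}/{zoom}?key={api_key}"
        resp = requests.get(url)
        resp.raise_for_status()
        with open(f"{imagery.lower()}.jpg", "wb") as f:
            f.write(resp.content)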
Related
My goal is to find a way to download (with Python) satellite images given coordinates describing a rectangle. I've never really found a precise and free solution (no business here, just school work).
At first I tried the Google Maps API, which worked perfectly but turned out to require payment after a certain time. I then considered using OpenStreetMap, but again I had a lot of trouble finding information on how to obtain such images.
Can you please help me with a simple solution?
OpenStreetMap only provides map data; it doesn't have aerial imagery, and thus no satellite imagery API either.
If you are looking for free aerial imagery then take a look at OpenAerialMap.
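As a rough sketch of how you might query it from Python: OpenAerialMap exposes a catalogue API whose /meta endpoint accepts a bounding box. Treat the endpoint and parameter names below as assumptions and check the current OAM API documentation before relying on them:

    import requests

    # Bounding box as min_lon,min_lat,max_lon,max_lat (assumed format)
    bbox = "-0.09,51.50,-0.07,51.52"
    resp = requests.get("https://api.openaerialmap.org/meta",
                        params={"bbox": bbox, "limit": 5})
    resp.raise_for_status()
    for item in resp.json().get("results", []):
        print(item.get("title"), item.get("uuid"))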
I intend to build a 3D model from multi-view stereo images (2D images of the same object from different angles and orientations) in Blender, from scratch. However, I am new to Blender.
I wanted to know if there are any tutorials, or failing that any documentation, on how to project a single pixel or point into Blender's 3D environment using Python. I am still learning about this whole 3D reconstruction thing and am pretty new to it, so I am not sure: are these points perhaps stored as a three-dimensional matrix/array?
Basically I want to implement 3D reconstruction based on a paper written by some researchers. Almost every such project is in C++; I want to do it in Python inside Blender and, if I am capable enough, release the resulting libraries as open source.
Please suggest any prerequisites you think would help. I have just started the 3rd year of a BSc Computer Science course and am very new to the world of computer graphics.
(My skillset is C, Java and Python.)
I would be very glad and appreciate any help.
Thank You
Link: https://vision.in.tum.de/research/image-based_3d_reconstruction/multiviewreconstruction
Yes, it can very likely be done in Blender, and in Python, at least for small geometries / low resolutions.
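For the narrower question of placing a single point in Blender's 3D space from Python, a minimal sketch with the bpy API (Blender 2.8+, run from the Scripting tab; the mesh and object names are arbitrary) looks like this:

    import bpy

    # Create a mesh containing a single vertex at (x, y, z)
    mesh = bpy.data.meshes.new("point_cloud")
    mesh.from_pydata([(1.0, 2.0, 3.0)], [], [])  # vertices, edges, faces
    mesh.update()

    # Wrap it in an object and link it into the current collection
    obj = bpy.data.objects.new("point_cloud", mesh)
    bpy.context.collection.objects.link(obj)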
A valid approach for the kind of scenario you seem to want to play with is based on the idea of "space carving" or "silhouette projection". A good description is in an old paper by Kutulakos and Seitz, which was based in part on earlier work by Szeliski.
Given a good estimation of the silhouettes, these methods can correctly reconstruct all convex portions of the object's surface, and the subset of concavities that are resolved in the photo hull. The remaining concavities are "patched over" and need to be reconstructed using a different method (e.g. stereo, or structured light). For the surfaces that can be reconstructed, space carving is generally more robust than stereo (since it is insensitive to the colour and surface texture of the object), and can work on surfaces where structured light struggles (e.g. surfaces with specularities, or very dark objects with low reflectance for a laser stripe).
The basic idea is to use the silhouettes of the projection of the object in cameras around it to "remove" mass from an initial volume (e.g. a box) encompassing the object, a bit like a sculptor carving a statue by removing material from a block of marble.
Computationally, you can do this by representing the volume of space of interest using an octree, initialized with a minimal level of subdivision and then progressively refined. The refinement consists of projecting the vertices of the octree leaves into the cameras and identifying which leaves are completely outside or partially inside the silhouettes. The former are pruned, while the latter are split, and the process continues until no more leaves can be split or a maximum level of subdivision is reached. The hull of the octree is then extracted as a "watertight" mesh using standard methods.
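To make the idea concrete, here is a minimal NumPy sketch that carves a dense voxel grid rather than an octree (simpler, but the same principle; the projection matrices and silhouette masks are assumed inputs from your calibration and segmentation steps):

    import numpy as np

    def carve(voxel_centers, projections, masks):
        """Keep only voxels whose projection lies inside every silhouette.

        voxel_centers: (N, 3) array of world-space points
        projections:   list of 3x4 camera matrices (world -> homogeneous pixel)
        masks:         list of boolean silhouette images (True = object)
        """
        keep = np.ones(len(voxel_centers), dtype=bool)
        homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])
        for P, mask in zip(projections, masks):
            pix = (P @ homog.T).T
            pix = pix[:, :2] / pix[:, 2:3]          # perspective divide
            u = np.round(pix[:, 0]).astype(int)
            v = np.round(pix[:, 1]).astype(int)
            h, w = mask.shape
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            hit = np.zeros(len(voxel_centers), dtype=bool)
            hit[inside] = mask[v[inside], u[inside]]
            keep &= hit                             # carve away the rest
        return keep

    # Initial bounding volume: a 64^3 grid spanning [-1, 1]^3
    axes = [np.linspace(-1.0, 1.0, 64)] * 3
    centers = np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, 3)
    # surviving = carve(centers, P_list, mask_list)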
Apart from the above paper, a far more detailed description can be found in an old patent by Geometrix, which sold a scanner based on these ideas around the year 2000.
I have a pair of coordinates (lat, long).
I need to generate an image displaying these coordinates on a map.
And then generate such images for other coordinates in the future, without Internet access.
Please tell me: are there solutions that allow displaying coordinates on a map offline?
Update: is there any way to download maps for offline use, e.g. GPS tracker maps or something like that?
thank you
This is not possible while offline. To generate an image of the coordinate location you would most likely be doing something like

    import os
    # 'open' launches the URL in the default browser (macOS-specific)
    os.system('open "https://www.google.nl/maps/place/' + location + '"')

and then capturing an image of the page that opens. That is impossible to do while offline, I am very sorry.
The question is too broad to give a good answer. However:
There are several companies, such as TeleAtlas and NavTeq, that sell map data. I have no idea what buying the world from them at 1:1M resolution would cost, but I'd guess several thousand USD.
You could download data, or pre-rendered rasters, from Natural Earth. However, they don't have quite the resolution required for good 1:1M maps.
You could download data from OpenStreetMap. The data is free (as in beer, and as in speech), but using it is a major undertaking.
There are companies that offer pre-rendered maps in various formats from OpenStreetMap data. OpenMapTiles is the one I happen to have at the top of my head, but there are others.
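If you go the OpenStreetMap route with pre-downloaded tiles, plotting a coordinate offline is mostly tile arithmetic. A minimal sketch with Pillow, assuming you have already fetched tiles into a local tiles/{z}/{x}/{y}.png directory while online:

    import math
    from PIL import Image, ImageDraw

    def deg2num(lat, lon, zoom):
        """Standard OSM slippy-map tile numbering (fractional)."""
        n = 2 ** zoom
        x = (lon + 180.0) / 360.0 * n
        y = (1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n
        return x, y

    def plot_point(lat, lon, zoom, tile_dir="tiles"):
        x, y = deg2num(lat, lon, zoom)
        xt, yt = int(x), int(y)
        tile = Image.open(f"{tile_dir}/{zoom}/{xt}/{yt}.png").convert("RGB")
        draw = ImageDraw.Draw(tile)
        # Pixel position of the coordinate inside this 256x256 tile
        px, py = (x - xt) * 256, (y - yt) * 256
        draw.ellipse([px - 5, py - 5, px + 5, py + 5], fill="red")
        return tile

    # plot_point(51.5138, -0.0821, 16).save("map.png")

A more complete version would stitch the neighbouring tiles around the point, but the tile-index maths stays the same.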
I thought about tackling a new project in which I use the TensorFlow Object Detection API to detect Euro pallets (e.g. this picture).
My ultimate goal is to know how far away I am from a pallet and my position relative to it. So I thought about first detecting the Euro pallet in the RGB feed of a Kinect camera and then using the Kinect's depth data to get the distance to the pallet.
But how do I determine the relative position of the pallet? I could create different classes, for example "front view lying pallet" and "side view lying pallet", etc., but I think for that to be accurate I'd need quite a few pictures for each class, maybe 200 per class?
My guess is that no such labelled dataset exists yet, so it would be quite a pain to create one myself.
Another way I could think of: if I label my pallets with segmentation instead of bounding boxes, maybe there is a way to derive my relative position to the pallet from that? I have never done semantic segmentation labelling myself; can anyone name any good programs I could use?
I'm hoping someone can help point me in the right direction. Any help would be appreciated.
Some ideas: assuming detection and segmentation with classifier(s) work, one could then try feature detection such as edges/lines to obtain clues about the pallet's orientation within the bounding box (see the sketch below).
Of course this will be tricky for simple feature detection because of very different surfaces (wood, dirt), backgrounds and lighting.
Also, "markerless tracking" (a topic in augmented reality) and "bin picking" (actually applied in the automation industry) may be keywords for similar problems, although you are probably not starting with an unordered pile of pallets.
What I want to do is to generate a static image (e.g. a png) using python and using openstreetmap tiles as a background.
Matplotlib and Basemap are almost what I'm looking for. The problem is being able to use OSM tiles as the background. I'm not pleased by the approach suggested in http://stevendkay.wordpress.com/2010/02/24/plotting-points-on-an-openstreetmap-export/
The closest I found is in this answer, but using R rather than Python: Plotting points from a data.frame using OpenStreetMap
Did I miss any obvious and easy solution?
Thanks for your help
EDIT: this question suggests many tools, but none seems to match my needs: How can I display OSM tiles using Python?
You overlooked the "Export" tab at the OSM website, which is capable of generating a static image with the dimensions and map extents you want. Have a look at http://wiki.openstreetmap.org/wiki/Export
Please be advised that generating static images is a resource-intensive process, and the OSM sysadmins will frown upon you if you do a large number of requests or abuse this feature. Unfortunately this means you'll have to find another solution if you're trying to do lots of images.
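For the occasional scripted export, the Export tab boils down to an HTTP request. The endpoint and parameters below are an assumption based on the wiki page and may well have changed; verify against the wiki before using it, and keep the usage warning above in mind:

    import requests

    # Assumed export endpoint and parameters -- verify against the OSM wiki.
    params = {
        "bbox": "-0.09,51.50,-0.07,51.52",  # min_lon,min_lat,max_lon,max_lat
        "scale": "25000",
        "format": "png",
    }
    resp = requests.get("https://render.openstreetmap.org/cgi-bin/export",
                        params=params)
    resp.raise_for_status()
    with open("export.png", "wb") as f:
        f.write(resp.content)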
By the way, the data you're plotting on top is properly projected into EPSG:3857 and not just raw lat/lon coordinates, right? Raw lat/lon data will look distorted at large zoom levels.
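If you don't want to pull in a projection library just for that, the spherical Web Mercator forward projection is short enough to inline; a minimal sketch:

    import math

    def lonlat_to_webmercator(lon, lat):
        """Project WGS84 lon/lat (degrees) to EPSG:3857 metres."""
        R = 6378137.0  # sphere radius used by Web Mercator
        x = R * math.radians(lon)
        y = R * math.log(math.tan(math.pi / 4 + math.radians(lat) / 2))
        return x, y

    print(lonlat_to_webmercator(-0.0821, 51.5138))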