Working with shapefiles in Python

I am currently working on the problem of generating roads from OpenStreetMap files. I use these OSM files to create a shapefile of all the different road types, with attributes such as lane count, width and so on.
Later on, I would also like to use digital orthophotos (DOPs) to eliminate inconsistencies and errors in the OSM data.
In the appendix you can see an example image of a DOP (with corrected shadows) and a generated mask; in addition to the mask there is the generated shapefile (with the different information it contains).
So much for the introduction, now for the main question:
In order to generate road markings on the generated roads, it is first necessary to split the shapefile geometries into individual lanes. How can that be done in Python?
Many thanks and greetings
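One way to approach this, sketched below with GeoPandas and Shapely (both assumptions, as are the "lanes", "width" and "osm_id" attribute names and the file paths), is to offset each road centerline once per lane:

import geopandas as gpd

# Hypothetical input: the generated road shapefile, in a projected CRS (metres),
# with per-road "lanes" and "width" attributes; adjust names to your schema.
roads = gpd.read_file("roads.shp")

lane_rows = []
for _, road in roads.iterrows():
    n_lanes = int(road.get("lanes", 2))
    lane_width = float(road.get("width", 7.0)) / n_lanes
    centerline = road.geometry
    for i in range(n_lanes):
        # Lane i is centred at (i - (n_lanes - 1) / 2) * lane_width from the centreline.
        offset = (i - (n_lanes - 1) / 2) * lane_width
        # offset_curve needs Shapely 2.x; on 1.x use parallel_offset instead.
        lane_geom = centerline if offset == 0 else centerline.offset_curve(offset)
        lane_rows.append({"road_id": road.get("osm_id"), "lane": i, "geometry": lane_geom})

lanes = gpd.GeoDataFrame(lane_rows, geometry="geometry", crs=roads.crs)
lanes.to_file("lanes.shp")

Note that the offsets are in the units of the layer's CRS, so the data should be reprojected to a metric CRS before offsetting; this is only a sketch of the idea, not a complete lane model.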

Related

Generate schematic (geographic) diagram from graph

I would like to know how best to generate a schematic diagram, something like this, from a graph (created using the Python NetworkX library) that contains the latitude and longitude of each node (city) and the lines connecting them in the Indian railway network.
The cities (nodes) should be located reasonably close to their actual position, but not necessarily exactly. I am OK with using the plate carrée projection that simply maps lat/long onto X/Y in the diagram.
The rail lines (edges) can be straight lines or even curves if it fits better.
The diagram should display the cities (preferably as dots) along with a short label (max 4 characters) for each, the lines connecting them, and a single label for each line (the given example has quite long labels for the lines).
Preferably the amount of manual tweaking of coordinates to get things to fit should be minimised.
Using Graphviz was my first idea, but I don't know how well neato/fdp (required for fixed positioning of nodes) will perform with large numbers of nodes/edges. Also, making Graphviz display labels separately outside the nodes (rather than inline) seems to need a lot of manual positioning of each label, which would be pretty tedious. Is there a better way to get this kind of layout?
Doable (https://forum.graphviz.org/t/another-stupid-graphviz-trick-geographic-graphs/256), but it does not seem to use many Graphviz features. In addition to the tools mentioned in the link, maybe consider pikchr (https://pikchr.org/home/doc/trunk/homepage.md).
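If you stay inside NetworkX/Matplotlib instead, a minimal sketch of the plate carrée layout might look like this (the node names, coordinates and the edge label are only illustrative):

import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
G.add_node("NDLS", lat=28.64, lon=77.22)
G.add_node("BCT", lat=18.97, lon=72.82)
G.add_edge("NDLS", "BCT", label="example line")

# Plate carree: use (lon, lat) directly as x/y positions.
pos = {n: (d["lon"], d["lat"]) for n, d in G.nodes(data=True)}

fig, ax = plt.subplots(figsize=(8, 10))
nx.draw_networkx_edges(G, pos, ax=ax)
nx.draw_networkx_nodes(G, pos, node_size=20, ax=ax)
# Draw the short city labels slightly offset from the dots so they don't overlap them.
label_pos = {n: (x + 0.3, y + 0.3) for n, (x, y) in pos.items()}
nx.draw_networkx_labels(G, label_pos, labels={n: n[:4] for n in G}, font_size=8, ax=ax)
nx.draw_networkx_edge_labels(G, pos, edge_labels=nx.get_edge_attributes(G, "label"), font_size=7, ax=ax)
ax.set_aspect("equal")
plt.savefig("railway_schematic.png", dpi=200)

This keeps nodes at their true lat/long, so no layout engine is involved; label collisions would still need either manual offsets or an adjustment library on top.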

Multiview Stereo 3D construction in Blender

I intend to make a 3D model based on multi-view stereo images (basically 2D images of the same object from different angles and orientations) inside Blender from scratch. However, I am new to Blender.
I wanted to know if there are any tutorials on how to project a single pixel or point into Blender's 3D environment using Python. If not a tutorial, any documentation would help. I am still learning about this whole 3D reconstruction thing and am pretty new to it, so I am not sure: maybe these points are represented using a three-dimensional matrix/array?
Basically, I want to implement 3D reconstruction based on a paper written by some researchers. Almost every such project is in C++. I want to do it in Python in Blender and, if I am capable enough, make these libraries open source.
Please suggest any prerequisites you think would help. I have just started the 3rd year of my BSc Computer Science course and am very new to the world of computer graphics.
(My skillset is C, Java and Python.)
I would be very glad and appreciate any help.
Thank You
Link: https://vision.in.tum.de/research/image-based_3d_reconstruction/multiviewreconstruction
Yes, it can very likely be done in Blender, and in Python, at least for small geometries / low resolutions.
A valid approach for the kind of scenarios you seem to want to play with is based on the idea of "space carving" or "silhouette projection". A good description is in an old paper by Kutulakos and Seitz, which was based in part on earlier work by Szeliski.
Given a good estimation of the silhouettes, these methods can correctly reconstruct all convex portions of the object's surface, and the subset of concavities that are resolved in the photo hull. The remaining concavities are "patched" over and need to be reconstructed using a different method (e.g. stereo, or structured light). For the surfaces that can be reconstructed, space carving is generally more robust than stereo (since it is insensitive to the color and surface texture of the object), and can work on surfaces where structured light struggles (e.g. surfaces with specularities, or very dark objects with low reflectance for a laser stripe).
The basic idea is to use the silhouettes of the object's projections in cameras placed around it to "remove" mass from an initial volume (e.g. a box) encompassing the object, a bit like a sculptor carving a statue by removing material from a block of marble.
Computationally, you can do this by representing the volume of space of interest with an octree, initialized at a minimal level of subdivision and then progressively refined. The refinement consists of projecting the vertices of the octree leaves into the cameras and identifying which leaves are completely outside or partially inside the silhouettes. The former are pruned, while the latter are split, and the process continues until no more leaves can be split or a maximum level of subdivision is reached. The hull of the octree is then extracted as a "watertight" mesh using standard methods.
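To get a feel for the carving test itself, here is a simplified Python sketch on a regular voxel grid rather than an octree (the octree version refines the same test hierarchically). The 3x4 projection matrices and binary silhouette masks are assumed to come from your calibration step:

import numpy as np

def carve(voxel_centers, views):
    """voxel_centers: (N, 3) array of points; views: list of (P, mask) pairs,
    where P is a 3x4 projection matrix and mask a binary silhouette image.
    Returns a boolean array marking voxels that project inside every silhouette."""
    keep = np.ones(len(voxel_centers), dtype=bool)
    homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])  # (N, 4)
    for P, mask in views:
        proj = homog @ P.T                           # (N, 3) homogeneous image points
        u = (proj[:, 0] / proj[:, 2]).round().astype(int)
        v = (proj[:, 1] / proj[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        hit = np.zeros(len(voxel_centers), dtype=bool)
        hit[inside] = mask[v[inside], u[inside]] > 0
        keep &= hit                                  # carve away anything outside a silhouette
    return keep

The surviving voxel centres approximate the visual hull; inside Blender you could instance small cubes at those points with bpy, or export them and mesh the hull with marching cubes.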
Apart from the paper above, a much more detailed description can be found in an old patent by Geometrix, which sold a scanner based on these ideas around the year 2000.

how to locate and extract coordinates/data/sub-components of charts/map image data?

I'm working on creating a tile server from some raster nautical charts (maps) I've paid for access to, and I'm trying to post-process the raw image data these charts are distributed as, prior to geo-referencing them and slicing them up into tiles.
I've got two sets of tasks and would greatly appreciate any help or even sample code on how to get them done in an automated way. I'm no stranger to Python/Jupyter notebooks, but I have zero experience with this kind of data science, i.e. image analysis/processing using things like OpenCV or machine learning (or a better toolkit/library I'm not even aware of yet).
I have some sample images (the originals are PNG but too big to upload, so I encoded them as high-quality JPEGs to follow along / provide sample data). Here's what I'm trying to get done:
Validation of all image data. The first chart (as well as the last four) demonstrates what properly formatted chart images should look like (I manually added a few colored rectangles to the first to highlight different parts of the image for the bonus section below).
Some images will have missing tile data, as in the 2nd sample image. These are ALWAYS chunks of 256x256 image data, so it should be straightforward to identify black boxes of this exact size (a sketch follows after these items).
Some images will have corrupt/misplaced tiles, as in the 3rd image (notice the large colorful semi-circle/arcs in the center/upper half of the image; they are slightly duplicated beneath, and if you look along horizontally you can see the image data is shifted, so these tiles have been corrupted somehow).
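For the missing-tile check, a minimal sketch along these lines might work, assuming missing data shows up as near-black 256x256 blocks aligned to a 256-pixel grid (the filename and threshold below are placeholders):

import cv2

def find_missing_tiles(path, tile=256, max_mean=2.0):
    """Return (x, y) pixel offsets of 256x256 blocks that are essentially all black."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    missing = []
    for y in range(0, img.shape[0] - tile + 1, tile):
        for x in range(0, img.shape[1] - tile + 1, tile):
            block = img[y:y + tile, x:x + tile]
            if block.mean() < max_mean:   # raise the threshold slightly for JPEG noise
                missing.append((x, y))
    return missing

print(find_missing_tiles("chart_sample_2.jpg"))  # hypothetical filename

Detecting the shifted/duplicated tiles is harder; one direction would be comparing each 256x256 block against its neighbours for abrupt discontinuities along tile boundaries, but that needs tuning against your real data.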
Extraction of information. Ultimately, once all the image data is verified to be valid (the above steps are ensured), there are a few bits of data I really need pulled out of the image, the most important of which are:
The 4 coordinates (upper left, upper right, lower left, lower right) of the internal chart frame. In the first image they are highlighted with a small pink box at each corner (the other images don't have them, but the corners are located in a similar way). NOTE: because these are geographic coordinates and involve projections, they are NOT always 100% horizontally/vertically aligned with each other.
The critical bit is that SOME images contain more than one "chartlet"; I really need to obtain the above 4 coordinates for EACH chartlet (some charts have no chartlets, some have two to several, and they are not always simple rectangular shapes). I may be able to supply the number of chartlets as input if that helps.
If possible, it would also help to extract each chartlet as a separate image (each of these has a single capital letter, A, B, C, in a circle, which would be good to have in the filename).
As a bonus, if there were a way to also extract the sections sampled in the first sample image (in the lower left corner), that would probably involve recognizing where/if this appears in the image (it would probably appear only once per file, but I'm not certain) and then extracting based on its coordinates?
Mainly, the most important part is inside a green box and represents a pair of tables (the left table is an example and I believe would always be the same; the right one has a variable number of columns).
Also, the table in the orange box would be good to get the text from, as it's related,
as would the small overview map in the blue box, which can be left as an image.
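For locating the internal chart frame (and candidate chartlets), one rough, untuned approach is to look for large four-sided contours; the filename and thresholds below are placeholders and real charts will certainly need adjustment:

import cv2
import numpy as np

img = cv2.imread("chart_sample_1.jpg")               # hypothetical filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))  # close small gaps in the frame lines

contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
frames = []
for c in contours:
    approx = cv2.approxPolyDP(c, 0.01 * cv2.arcLength(c, True), True)
    # Keep only large quadrilaterals; each surviving one is a candidate chart frame.
    if len(approx) == 4 and cv2.contourArea(approx) > 0.1 * img.shape[0] * img.shape[1]:
        frames.append(approx.reshape(4, 2))

print(frames)  # pixel coordinates of candidate frame corners, not yet ordered

This only yields pixel corners; mapping them to geographic coordinates would still require OCR of the printed corner values or manual entry, and non-rectangular chartlets would need a more general contour test than the four-point check above.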
I have been looking at tutorials on OpenCV and image recognition, but the content so far has been highly elementary, not to mention there is an overwhelming, endless list of algorithms for different operations (and, again, I don't know which ones I'd even need), so I'm not sure how it relates to what I'm trying to do. Really, I don't even know where to begin structuring the steps needed to undertake all these tasks, or how each should be broken down further to ease the processing.

How to update specific raster tiles which are part of a very big basemap covering multiple countries using gdal2tiles.py or rasterio?

I pre-rendered a custom basemap spanning multiple countries using gdal2tiles.py and it works like a charm. But I don't know how to update only specific tiles when there is a data update for, let's say, 1/3 of the basemap. So I have two questions here:
1) How do I update specific tiles across multiple zoom levels for a pre-rendered basemap?
2) Is it possible to dynamically render raster tiles (from satellite images) at query time?
Any kind of help is appreciated
I have already tried to render only the specific tiles for the updated data, but it ended up breaking the tiles shared between the two different datasets (for example, two Landsat tiles).
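One way to limit re-rendering, sketched below with the mercantile package (an assumption, as are the bounds and zoom range), is to keep a VRT that mosaics all source rasters so shared tiles are always rendered from both datasets, enumerate only the z/x/y tiles intersecting the updated extent, and regenerate just those:

import mercantile

# Hypothetical updated area and zoom range; gdal2tiles' mercator profile uses the
# same XYZ scheme in EPSG:3857, so these indices map directly onto the tile tree.
updated_bounds = (76.0, 8.0, 78.0, 10.0)   # (west, south, east, north) in lon/lat
zoom_levels = range(8, 15)

affected = [t for z in zoom_levels for t in mercantile.tiles(*updated_bounds, zooms=z)]
for t in affected:
    print(f"re-render {t.z}/{t.x}/{t.y}.png")
    # e.g. delete this tile and re-run gdal2tiles.py against the mosaic VRT with its
    # resume option so only the missing tiles are regenerated, or render the single
    # tile yourself with gdal.Warp / a dynamic tiler.

For question 2, dynamic rendering at query time is possible with a tile server that reads cloud-optimized GeoTIFFs on the fly, at the cost of per-request CPU instead of pre-rendered files.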

Drawing upon openstreetmap in python

What I want to do is to generate a static image (e.g. a png) using python and using openstreetmap tiles as a background.
Matplotlib and Basemap are almost what I'm looking for. The problem is being able to use OSM tiles as the background. I'm not pleased with the approach suggested in http://stevendkay.wordpress.com/2010/02/24/plotting-points-on-an-openstreetmap-export/
The closest I found is in this answer, but it uses R rather than Python: Plotting points from a data.frame using OpenStreetMap
Did I miss any obvious and easy solution?
Thanks for your help
EDIT: this question suggests many tools, but none seems to match my needs: How can I display OSM tiles using Python?
You overlooked the "Export" tab at the OSM website, which is capable of generating a static image with the dimensions and map extents you want. Have a look at http://wiki.openstreetmap.org/wiki/Export
Please be advised that generating static images is a resource-intensive process, and the OSM sysadmins will frown upon you if you do a large number of requests or abuse this feature. Unfortunately this means you'll have to find another solution if you're trying to do lots of images.
By the way, the data you're plotting on top is properly projected into EPSG:3857 and not just raw lat/lon coordinates, right? Raw lat/lon data will look distorted at large zoom levels.
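For reference, a tiny sketch of that reprojection step with pyproj (the example coordinates are arbitrary):

from pyproj import Transformer

# Convert lon/lat (EPSG:4326) points to Web Mercator (EPSG:3857) before plotting
# them over an OSM tile or exported map image.
to_mercator = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)

points_lonlat = [(2.3522, 48.8566), (13.4050, 52.5200)]   # e.g. Paris, Berlin
points_3857 = [to_mercator.transform(lon, lat) for lon, lat in points_lonlat]
print(points_3857)  # metres in Web Mercator, matching the OSM tile projection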
