I have a series of images of a structure with different z values (each photo is taken 5µm higher than the last one).
I wrote a program that calculates the area of the structure for each photo. I also have a list that stores, for each photo, this area as a binary image (background = black, area = white).
Since I basically have all 3 coordinates for my structure, I think it should be possible to create an STL file to plot this structure in 3D space.
Since I never did anything in terms of 3D programming I don't really know how to do this.
I would appreciate any help.
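For what it's worth, here is a minimal sketch of one way to do this, assuming your masks are equally sized NumPy arrays, using scikit-image's marching cubes plus the numpy-stl package (the `masks` variable and the 1 µm pixel pitch are placeholders):

```python
import numpy as np
from skimage import measure
from stl import mesh  # pip install numpy-stl scikit-image

# `masks` is assumed to be your list of 2D binary images (area = white).
volume = np.stack(masks).astype(np.float32)  # shape: (z, y, x)

# Extract the iso-surface at 0.5; `spacing` encodes the 5 µm z-step
# and the pixel pitch in y/x (1.0 here is a placeholder -- use yours).
verts, faces, _, _ = measure.marching_cubes(volume, level=0.5,
                                            spacing=(5.0, 1.0, 1.0))

# Copy the triangles into an STL mesh and save it.
solid = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, face in enumerate(faces):
    solid.vectors[i] = verts[face]
solid.save("structure.stl")
```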
I have a large model of a house, with every internal object such as walls, tables, doors, and TVs inside. The file is a 3D object, either a .obj or a .fbx file. I also have a point cloud from a 180-degree lidar scanner that scanned from somewhere inside the house. I know where the scanner stood to within about 3 meters, and I want to find out what my point cloud corresponds to in my 3D model. In other words, I want to find the translation and rotation required to move my point cloud to the correct position in the model.
I have tried to turn the 3D model into a point cloud and then use ICP (iterative closest point), but as the points I generate do not necessarily correspond with those from the scanner, I get quite weird results from time to time.
I have also looked at this question: Match 3D point cloud to CAD model, but in my case I only have a scan of a small portion of the full model.
Does anyone have any advice on how to do this in Python?
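Not a full answer, but here is a minimal sketch of the mesh-to-cloud ICP pipeline with Open3D, assuming hypothetical filenames and using the ~3 m position estimate as the initial guess (cropping the model cloud to a region around that estimate first may also help, since the scan only covers a small portion of the model):

```python
import numpy as np
import open3d as o3d  # pip install open3d

# Sample the house model into a point cloud so ICP has points to match.
mesh = o3d.io.read_triangle_mesh("house.obj")        # hypothetical filename
model = mesh.sample_points_uniformly(number_of_points=500_000)
model.estimate_normals()

scan = o3d.io.read_point_cloud("scan.ply")           # hypothetical filename

# Seed ICP with the rough scanner position (known to ~3 m).
init = np.eye(4)
init[:3, 3] = [12.0, 4.0, 1.5]                       # placeholder guess

# Point-to-plane ICP tends to be more forgiving than point-to-point
# when the two clouds were sampled differently.
result = o3d.pipelines.registration.registration_icp(
    scan, model, max_correspondence_distance=0.5, init=init,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(result.transformation)  # 4x4 pose of the scan in model coordinates
```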
I'm using the Google Maps Static API to get top-view satellite images of objects for which I have the surface coordinates (LoD1 / LoD2).
The points are always slightly off; I think this is due to a small tilt in the satellite image itself (is that a correct assumption?).
For example in this image I have the building shape, but the points are slightly off. Is there a way to correct this for all objects?
The red markers are the standard Google Maps API pointers, the center of the original image (here it is cropped) is the center of the building, and the white line is a cv2.polylines rendering of the object shape.
Just shifting by n pixels will not help since the offset depends on the angle between the satellite and object and the shape of that object.
I am using the pyproj library to transform the coordinates, and then convert them to pixel values (by setting the center point as the center pixel and using the difference in coordinate space to calculate the edge points' pixel values as well).
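For reference, a minimal sketch of that coordinate-to-pixel step, assuming the image is centered on the object and uses standard Web Mercator (EPSG:3857) tiling; the function name is mine:

```python
import math
from pyproj import Transformer

# WGS84 lon/lat -> Web Mercator metres, the projection Google tiles use.
to_mercator = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)

def lonlat_to_pixel(lon, lat, center_lon, center_lat, zoom, img_w, img_h):
    """Pixel position of (lon, lat) in an image of size (img_w, img_h)
    centred on (center_lon, center_lat) at the given zoom level."""
    mx, my = to_mercator.transform(lon, lat)
    cx, cy = to_mercator.transform(center_lon, center_lat)
    # Metres per pixel at this zoom (256 px base tile, WGS84 radius).
    metres_per_pixel = 2 * math.pi * 6378137.0 / (256 * 2 ** zoom)
    px = img_w / 2 + (mx - cx) / metres_per_pixel
    py = img_h / 2 - (my - cy) / metres_per_pixel  # image y grows downward
    return px, py
```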
So - the good news is that there is no need to "correct" this for all objects, because there is no way to do that without using 3D models & textures.
Google (and most map platforms, for that matter) doesn't actually use satellite images; it uses aeroplane images. The planes don't fly directly over the top of every building (imagine how tight/redundant their flight path would be if they did!).
Instead, the plane will take an image from some kind of angle, and then, through the wonders of photogrammetric processing, the images are all corrected and ortho-rectified so the ground surface is in the right place everywhere.
What can't (and shouldn't) be corrected in a 2D image is the location of objects above ground height, like the roof in your image. For a more extreme example, just look at a skyscraper, and you'll realise you can't ever get the pixels correct above the ground:
https://goo.gl/maps/4tLSrd7yXQYWZPTy7
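For a rough sense of the scale involved: the horizontal displacement of a point at height h above the ground, imaged at an off-nadir angle θ, is about h · tan(θ). So a roof 10 m above ground in an image taken just 10° off-nadir already lands roughly 1.8 m away from its true footprint (a back-of-the-envelope figure, not something measured from the screenshot).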
I have a shirt displayed as a 3D model in the "obj" or "fbx" file format. I would like to calculate the object's width at a specific height. It would be best if I had the coordinates of all points at a specific height. Can anyone suggest a Python or JavaScript framework for that, or how I can calculate this manually?
If you're using the OBJ format, then you have no unit data. It's triangles, but with no absolute scale.
What you're looking for should be easy to moderately difficult to determine. 3D-printing slicer software does exactly this to calculate the toolpath for 3D printers. You'll take your 3D model, make sure it's oriented so "up" makes sense (the neck of the shirt in your example), then run the slicer on it at various heights.
You'll get a 2D slice of the 3D object as the intersection of a plane with the model at that height. You'll then have to compute the bounding box around the slice and scale the width according to whatever units your model is in.
A good place to start might be this library: https://pypi.org/project/meshcut/
or else look for open source 3D printer slicing software.
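A minimal sketch of that slice-and-measure step with meshcut, assuming you've already loaded verts/faces from the OBJ (e.g. with trimesh) and that y is the shirt's "up" axis; the function name is mine:

```python
import numpy as np
import meshcut  # pip install meshcut

def width_at_height(verts, faces, y):
    """Width (x-extent) of the mesh where a horizontal plane at height y
    cuts it. `verts` is (N, 3) floats, `faces` is (M, 3) vertex indices."""
    sections = meshcut.cross_section(verts, faces,
                                     plane_orig=(0, y, 0),
                                     plane_normal=(0, 1, 0))
    # Each section is a (K, 3) polyline; take the x-extent over all of them.
    xs = np.concatenate([np.asarray(s)[:, 0] for s in sections])
    return xs.max() - xs.min()
```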
I'm working on creating a tile server from some raster nautical charts (maps) I've paid for access to, and I'm trying to post-process the raw image data these charts are distributed as, prior to geo-referencing them and slicing them up into tiles.
I've got two sets of tasks and would greatly appreciate any help, or even sample code, on how to get these done in an automated way. I'm no stranger to Python/Jupyter notebooks, but I have zero experience with this type of image analysis/processing using things like OpenCV or machine learning (or whatever better toolkit/library I'm not even yet aware of).
I have some sample images (the originals are PNG but too big to upload, so I encoded them as high-quality JPEGs to follow along / provide sample data). Here's what I'm trying to get done:
Validation of all image data. The first chart (as well as the last four) demonstrates what properly formatted chart images should look like (I manually added a few colored rectangles to the first, to highlight different parts of the image in the bonus section below).
Some images will have missing tile data, as in the 2nd sample image. These are ALWAYS chunks of 256x256 image data, so it should be straightforward to identify black boxes of this exact size (see the sketch at the end of this question).
Some images will have corrupt/misplaced tiles, as in the 3rd image (notice the large colorful semi-circle/arcs in the center/upper half of the image: it is slightly duplicated beneath, and if you look along horizontally you can see the image data is shifted, so these tiles have been corrupted somehow).
Extraction of information. Once all image data is verified to be valid (the above steps are ensured), there are a few bits of data I really need pulled out of the image, the most important of which are:
The 4 coordinates (upper left, upper right, lower left, lower right) of the internal chart frame. In the first image they are highlighted with a small pink box at each corner (the other images don't have the boxes, but the coordinates are located in a similar way). NOTE: because these are geographic coordinates and involve projections, they are NOT always 100% horizontal/vertical of each other.
The critical bit is that SOME images contain more than one "chartlet"; I really need to obtain the above 4 coordinates for EACH chartlet (some charts have no chartlets, some have two to several, and they are not always simple rectangular shapes). I may be able to provide the number of chartlets as input if that helps.
If possible, it would also help to extract each chartlet as a separate image (each of these has a single capital letter, A, B, C, in a circle, which would be good to include in the filename).
As a bonus, if there were a way to also extract the sections highlighted in the first sample image (in the lower left corner), that would probably involve recognizing where/if this appears in the image (it would probably appear only once per file, but I'm not certain) and then extracting based on its coordinates.
The most important part is inside the green box and represents a pair of tables (the left table is an example and I believe would always be the same; the right has a variable number of columns).
The table in the orange box would also be good to get the text from, as it's related.
As would the small overview map in the blue box; that can be left as an image.
I have been looking at tutorials on OpenCV and image-recognition processes, but the content so far has been highly elementary, not to mention an overwhelming, endless list of algorithms for different operations (and again, I don't know which I'd even need), so I'm not sure how it relates to what I'm trying to do. Really, I don't even know where to begin to structure the steps needed for all these tasks, or how each should be broken down further to ease the processing.
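As a starting point for the missing-tile check mentioned above, a minimal sketch with OpenCV/NumPy, assuming the tiles are aligned to a 256-pixel grid starting at the top-left corner (the filename is a placeholder):

```python
import cv2
import numpy as np

TILE = 256
img = cv2.imread("chart.png")  # placeholder filename

h, w = img.shape[:2]
missing = []
for y in range(0, h - TILE + 1, TILE):
    for x in range(0, w - TILE + 1, TILE):
        block = img[y:y + TILE, x:x + TILE]
        # A missing tile renders as a pure-black 256x256 block.
        if block.max() == 0:
            missing.append((x, y))

print(f"found {len(missing)} candidate missing tiles at:", missing)
```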
I'm currently involved in a hardware project where I am mapping triangular shaped LED to traditional bitmap images. I'd like to overlay a triangle vector onto an image and get the average pixel data within the bounds of that vector. However, I'm unfamiliar with the math needed to calculate this. Does anyone have an algorithm or a link that could send me in the right direction? (I tagged this as Python, which is preferred, but I'd be happy with the general algorithm!)
I've created a basic image of what I'm trying to capture here: http://imgur.com/Isjip.gif
Will this work: http://www.blackpawn.com/texts/pointinpoly/default.html ?
You can rasterize the triangle's edges to determine, for each horizontal scanline, which pixels lie within the triangle. Sum their RGB values and divide by the pixel count to get the average.
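In Python, OpenCV can do the rasterization for you: fill the triangle into a mask and average the masked pixels (the image path and vertices below are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("image.png")  # placeholder filename
# Triangle vertices as (x, y) pixel coordinates -- placeholders.
tri = np.array([[50, 10], [10, 90], [90, 90]], dtype=np.int32)

# Rasterize the triangle into a binary mask, then average the pixels
# the mask covers.
mask = np.zeros(img.shape[:2], dtype=np.uint8)
cv2.fillPoly(mask, [tri], 255)
mean_bgr = cv2.mean(img, mask=mask)[:3]  # OpenCV images are BGR
print("average colour (B, G, R):", mean_bgr)
```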