Difficulty in classification of roads from satellite images [closed] - python

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 days ago.
I am trying to create a machine-learning algorithm that categorizes the types of roads (highway, city road) in a satellite image.
I've tried using image segmentation to extract the roads from satellite images, and thought of a process as follows:
- Based on the color of the road, dirt roads can be predicted
- Based on the thickness of the road, highways and city roads can be predicted
The problem is that I don't know how to differentiate the roads based on these criteria.
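One way to act on the thickness criterion, assuming you already have a binary road mask from the segmentation step: a distance transform gives, for each road pixel, the distance to the nearest non-road pixel, so twice its maximum approximates the road width. A minimal sketch with numpy/scipy — the synthetic masks, the width threshold, and the class names are all assumptions for illustration, not a definitive pipeline:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def classify_road(mask, highway_min_width=8):
    """Classify a binary road mask by its estimated width in pixels.

    The Euclidean distance transform gives each road pixel its distance
    to the nearest background pixel; twice the maximum (reached on the
    centreline) approximates the road width.
    """
    dist = distance_transform_edt(mask)
    width = 2 * dist.max()          # rough width estimate in pixels
    return "highway" if width >= highway_min_width else "city road"

# Synthetic example: a ~5-pixel-wide road vs. a ~12-pixel-wide road.
narrow = np.zeros((40, 40), dtype=bool)
narrow[10:15, :] = True
wide = np.zeros((40, 40), dtype=bool)
wide[10:22, :] = True

print(classify_road(narrow))        # city road
print(classify_road(wide))          # highway
```

The color criterion (dirt vs. paved) could be handled the same way: compute a mean color over the masked pixels and threshold it, with the threshold tuned on labeled examples.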

Related

How to write this equation into Python code? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 17 hours ago.
I am writing a program for real-time road congestion detection based on image texture analysis. It calculates traffic density as the sum of the average energy and average entropy of the grey-level co-occurrence matrix (GLCM) of a greyscale image.
I am using scikit-image to compute the GLCM and the energy feature, but it cannot compute the entropy:
skimage.feature.graycomatrix
skimage.feature.graycoprops
How can I write the equation for the entropy feature in Python? I am not good at math and I am a beginner in Python.

Can I find a trained deep learning model which compares two persons' images and returns whether they are the same person or not? [closed]

Closed 2 years ago.
I need a trained deep learning model that can compare two images of two persons and tell me whether the two images show the same person or not.
I guess the faces of the two persons are visible in the images. In that case you can try FaceNet (see the FaceNet paper).
You can find an implementation, for example, here: link

Character recognition from image with low contrast [closed]

Closed 3 years ago.
I have images (about 1000) with different numbers. Using opencv I extracted ROI from these images. Here's a small sample:
I don't know how to extract or identify these numbers; OpenCV thresholding leaves only a small margin here. I tried a VGG net in Keras (I rotated each image by 1 degree to create 360 images as input for TensorFlow), but the control image was mostly not recognized. Does anyone have an idea?
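If rotation augmentation is the route taken, generating the 360 rotated copies can be sketched with scipy as below; the random 28x28 placeholder stands in for a real ROI crop, and the step size is the 1-degree value from the question:

```python
import numpy as np
from scipy.ndimage import rotate

def rotation_augment(image, step_deg=1):
    """Yield copies of `image` rotated by 0, step, 2*step, ... degrees.

    reshape=False keeps every output the same shape as the input,
    which is what a fixed-size network input layer requires.
    """
    for angle in range(0, 360, step_deg):
        yield rotate(image, angle, reshape=False, order=1, mode="constant")

# Placeholder crop (random noise standing in for a real digit image).
img = np.random.rand(28, 28)
augmented = list(rotation_augment(img))
print(len(augmented), augmented[0].shape)  # 360 copies, each 28x28
```

Note that for characters, aggressive rotation augmentation can hurt: a 6 rotated 180 degrees looks like a 9, so restricting the angle range may work better than a full 360-degree sweep.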

How to use OCR to read characters engraved on a metallic plate if both the background and the foreground are of the same color [closed]

Closed 3 years ago.
Character engraved on a metal plate
How to extract the characters engraved on a metallic plate?
OCR (pytesseract) is unable to give good results. I tried ridge detection, but in vain. No form of thresholding seems to work because the background and the foreground are the same color. Is there a series of steps I can follow for such a use case?
I think binarization won't work on your image. Even if some preprocessing improves the quality of this particular image, that doesn't mean the same method will work on all the images you have.
So my suggestion is to create your own custom OCR using machine learning or a CNN.
You can convert each digit into a 28x28 image matrix, reshape it into a 1x784 vector, and train the way the MNIST dataset is trained.
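The 28x28 → 1x784 step described above can be sketched as follows; here a scikit-learn logistic regression stands in for whatever classifier is actually trained, and the random images and labels are placeholders for real character crops:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder data: 100 fake 28x28 character crops with labels 0-9.
images = rng.random((100, 28, 28))
labels = rng.integers(0, 10, size=100)

# Flatten each 28x28 image into a 1x784 feature vector, MNIST-style.
X = images.reshape(len(images), -1)   # shape (100, 784)

clf = LogisticRegression(max_iter=1000)
clf.fit(X, labels)

# At inference time, flatten the new crop the same way before predicting.
new_crop = rng.random((28, 28))
pred = clf.predict(new_crop.reshape(1, -1))
print(X.shape, int(pred[0]))
```

With real engraved-digit crops, a small CNN usually beats a linear model, but the flatten-and-train shape of the pipeline stays the same.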

I am taking CCD images through my telescope and am wondering what I can do analytically with those images [closed]

Closed 8 years ago.
This weekend I'll be taking around 50 CCD images with my Celestron CPC 8in. telescope and would like to use Python to analyze the images. Does anyone have any experience doing this?
Check out the python wrapper for OpenCV:
http://docs.opencv.org/trunk/doc/py_tutorials/py_tutorials.html
This should provide all the power you need to do any image processing tasks you require.
And there are some great tutorials to help you get started.
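As a taste of what "analytically" can mean for CCD frames, here is a small numpy-only sketch (OpenCV or astropy would be used to load the real files, and offer far more): estimate the sky background and flag pixels that stand well above it as star candidates. The synthetic frame and the 5-sigma cutoff are assumptions for illustration:

```python
import numpy as np

def star_candidates(frame, nsigma=5.0):
    """Return a boolean mask of pixels more than nsigma above background.

    The frame-wide median and standard deviation are a crude background
    model, but they work as a first pass on sparse star fields.
    """
    background = np.median(frame)
    noise = frame.std()
    return frame > background + nsigma * noise

# Synthetic CCD frame: flat noisy background plus one bright "star".
rng = np.random.default_rng(1)
frame = rng.normal(loc=100.0, scale=2.0, size=(64, 64))
frame[30:33, 30:33] += 500.0        # injected 3x3 star

mask = star_candidates(frame)
print(mask.sum())                    # number of pixels flagged as the star
```

From there one can count stars, measure their brightness (sum of flagged pixels minus background), or track positions across the 50 frames.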
