I want to calculate how irregular the surface is for various images. I attach one of the images below:
The idea would be to calculate the direction vectors every few samples and see how similar they are to each other (using the scalar product, for example). If they are very similar, the surface is quite regular and the scalar product will be close to 1; if the irregularity is huge, it will be close to 0. Something like this:
I would be grateful for any help. Thank you!!!
Here's the first part of a solution for you – computing the "roughness" is up to you...
import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt
# Read, convert, threshold image
im = cv.imread('5FH31.png')
imgray = cv.cvtColor(im, cv.COLOR_BGR2GRAY)
ret, thresh = cv.threshold(imgray, 127, 255, 0)
# Find the outermost contour
contours, hierarchy = cv.findContours(thresh, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
# Decimate the contour to an approximation, squeeze to get an N x 2 array
dec_points = np.squeeze(cv.approxPolyDP(contours[0], 3, True))
# Compute vectors between consecutive points (each point minus its predecessor)
vecs = dec_points - np.roll(dec_points, 1, 0)
# TODO: compute "roughness"? :)
# Debug: show our image and data
plt.imshow(im)
xs, ys = zip(*dec_points)
plt.plot(xs, ys, 'y')
plt.plot(xs, ys, 'r+')
plt.tight_layout()
plt.show()
The debug image will look something like this - you can see each point and the contour they form.
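If it helps, here is one way the TODO could be filled in, staying with the scalar-product idea from the question. This is only a minimal sketch, assuming the vecs array from above contains no zero-length vectors:
# Normalize every direction vector to unit length
unit = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)
# Scalar product of each unit vector with its predecessor
dots = np.sum(unit * np.roll(unit, 1, 0), axis=1)
# Mean similarity is near 1 for a regular outline, lower for a rough one
roughness = 1.0 - np.mean(dots)
print('Roughness:', roughness)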
I am new to Python and I have a question about how to get the boundaries of a figure plotted with matplotlib's imshow. For example, the following figure was plotted with imshow, and I want to get the coordinates of the boundaries and plot them over the same image. I hope you can help me.
Here is the code that I wrote for the first two lines:
df = pd.read_csv('data.txt', sep='\t')
plt.imshow(df, origin='lower')
Stack Overflow does not allow me to upload the raw data, but you can download it from my Google Drive folder:
https://drive.google.com/file/d/1ZPxtAz7vjsdFjeRmfop2cgpfcuGlO5PI/view?usp=sharing
Here is your original array:
One option is to compute a threshold on your array and then calculate the difference between the binary dilation and the threshold:
from scipy.ndimage import binary_dilation  # the scipy.ndimage.morphology path is deprecated
a_thresh = np.array(array < 0.8, dtype='int')
plt.imshow(binary_dilation(a_thresh) - a_thresh)
To perform the overlay, you need to mask the second array with numpy.ma.masked_where:
a_cnt = (binary_dilation(a_thresh)-a_thresh)
a_cnt_masked = np.ma.masked_where(a_cnt==0, a_cnt)
plt.imshow(array)
plt.imshow(a_cnt_masked, cmap='hsv', interpolation='none')
You can also use a specialized library, such as opencv:
import cv2
im = cv2.threshold(array, 1, 255, cv2.THRESH_BINARY)[1]
cnt = cv2.findContours(im.astype('uint8'), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]  # [-2] is the contour list in both OpenCV 3 and 4
im2 = sum([cv2.drawContours(np.zeros(array.shape), [c], 0, (255,0,0), 1) for c in cnt])
plt.imshow(im2)
You might need to play around with the different parameters of opencv.
I'm working with binary images representing contours (obtained through cv2.Canny), and I want to get the coordinates of each contour clockwise, starting from the first point at the intersection of the contour and a horizontal line located in the center of the image. Assuming that the image I want to use is a circular contour, I would like to get something like this (assuming Y decreases vertically, as matplotlib.pyplot.imshow does):
I tried with the following code:
indices = numpy.where(edges == [255]) #edges is the contour image
print(indices)
But this solution sorts the coordinates from the upper side of the image. I tried other solutions found on the web too, but none of them seems to be useful for this task.
I will recycle my idea from that answer, incorporating the arctan2 function from NumPy.
Given is an input image like this:
The output will be a plot like this:
Here's the code, which is hopefully self-explaining:
import cv2
import numpy as np
import matplotlib.pyplot as plt
# Generate artificial image
img = np.zeros((400, 400), np.uint8)
center = (150, 150)
img = cv2.circle(img, center, 100, 255, 1)
cv2.imwrite('images/img.png', img)
# Find contour(s)
cnts, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Center contour
cnt = np.squeeze(cnts[0]) - center
# Calculate atan2 values, and sort
angles = np.arctan2(cnt[:, 0], cnt[:, 1])
idx = np.argsort(angles)
val = angles[idx]
# atan2 uses (1, 0) as starting point, so correct by 1/2 * pi
corr = np.where(val <= (-0.5 * np.pi))[0][-1]
# Build final indices
indFinal = np.concatenate((idx[corr - 1:], idx[0:corr]))
x = cnt[indFinal, 0]
y = cnt[indFinal, 1]
# Generate plot
ax = plt.subplot(121)
plt.plot(x)
plt.title('x')
ax = plt.subplot(122)
plt.plot(y)
plt.title('y')
plt.savefig('images/plot.png')
Caveat: Concave contours will likely cause corrupted results, since sorting by angle assumes each angle occurs only once along the centered contour.
I am trying to use Python along with OpenCV, NumPy and Matplotlib to do some computer vision for a robot which will use a railing to navigate. I am currently extremely stuck and have run out of places to look. My current code is:
import cv2
import numpy as np
import matplotlib.pyplot as plt
image = cv2.imread('railings.jpg')
railing_image = np.copy(image)
resized_image = cv2.resize(railing_image,(881,565))
gray = cv2.cvtColor(resized_image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
canny = cv2.Canny(blur, 85, 255)
cv2.imshow('test',canny)
image_array = np.array(canny)
ncols, nrows = image_array.shape
count = 0
scan = np.array
for x in range(0, image_array.shape[1]):
    for y in range(0, image_array.shape[0]):
        if image_array[y, x] == 0:
            count += 1
    scan = [scan, count]
print(scan)
plt.plot([0, count])
plt.axis([0, nrows, 0, ncols])
plt.show()
cv2.waitKey(0)
I am using a Canny image which is stored in an array of 1s and 0s; the image I need represented is:
The final result should look something like the following image.
I've tried using a histogram function but I've only managed to get that to output essentially a count of the number of times a 1 or 0 appears.
If anyone could help me or point me in the right direction toward producing a graph that represents the image pixels within the height and width dimensions, I would be grateful.
Thank you
I'm not sure how general this is, but you could just use NumPy's argmax to get the location of the maximum (like this) in your case. You should avoid loops, as they will be very slow; it is better to use NumPy functions. I've imported your image and used the cutoff criterion that 200 or more in the yellow channel is railing:
import cv2
import numpy as np
import matplotlib.pyplot as plt
#This loads the canny image you uploaded
image = cv2.imread('uojHJ.jpg')
#Trim off the top taskbar
trimimage = image[100:, :,0]
#Use argmax with 200 cutoff colour in one channel
maxindex = np.argmax(trimimage > 200, axis=0)
#Plot graph
plt.plot(trimimage.shape[0] - maxindex)
plt.show()
The resulting plot looks as follows:
I am trying to classify whether an image mostly contains black and white or color; to be precise, it is a photo of a photocopy (think Xerox), which is mostly black and white. The image is NOT a single-channel image, but a 3-channel image.
I just want to know if there are any obvious ways to solve this that I'm missing.
For now I'm trying to plot histograms and maybe do a pixel count, but that does not look very promising; any suggestions on this would be really helpful.
Thanks in advance.
I am unsure of the exact use case, but having experienced similar issues, I used this rather helpful article:
https://www.alanzucconi.com/2015/05/24/how-to-find-the-main-colours-in-an-image/
The GitHub containing the full code is found here: https://gist.github.com/jayapal/077f63f3163abbfb3c50c7d209524cc6
If this is for your own visual inspection, the histogram should be enough. If you are attempting to automate, however, it may be helpful to round the color values up or down; this would tell you whether the image is darker or lighter than a certain value.
What are you using this code for from a larger perspective? Maybe that will help provide more adequate information.
Edit: The code above also provides the ability to define a region of the image; hopefully this will make your selection more accurate.
Adding the code directly:
from sklearn.cluster import KMeans
from sklearn import metrics
import cv2
import numpy as np
image = cv2.imread("red.png")
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# Resize it
h, w, _ = image.shape
w_new = int(100 * w / max(w, h) )
h_new = int(100 * h / max(w, h) )
image = cv2.resize(image, (w_new, h_new))
# Reshape the image to be a list of pixels
image_array = image.reshape((image.shape[0] * image.shape[1], 3))
print(image_array)
# Clusters the pixels
clt = KMeans(n_clusters = 3)
clt.fit(image_array)
def centroid_histogram(clt):
    # grab the number of different clusters and create a histogram
    # based on the number of pixels assigned to each cluster
    numLabels = np.arange(0, len(np.unique(clt.labels_)) + 1)
    (hist, _) = np.histogram(clt.labels_, bins=numLabels)
    # normalize the histogram, such that it sums to one
    hist = hist.astype("float")
    hist /= hist.sum()
    # return the histogram
    return hist
# Finds how many pixels are in each cluster
hist = centroid_histogram(clt)
# Sort the clusters according to how many pixel they have
zipped = sorted(zip(hist, clt.cluster_centers_), reverse=True, key=lambda x: x[0])
hist, clt.cluster_centers_ = zip(*zipped)
# By Adrian Rosebrock
bestSilhouette = -1
bestClusters = 0
for clusters in range(2, 10):
    # Cluster colours
    clt = KMeans(n_clusters=clusters)
    clt.fit(image_array)
    # Validate clustering result
    silhouette = metrics.silhouette_score(image_array, clt.labels_,
                                          metric='euclidean')
    # Find the best one
    if silhouette > bestSilhouette:
        bestSilhouette = silhouette
        bestClusters = clusters
print(bestSilhouette)
print(bestClusters)
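If full clustering is more than you need for the black-and-white vs. color decision, a much simpler heuristic is possible. This is my addition, not part of the code above: a grayscale photocopy has nearly equal B, G and R values at every pixel, so the spread between channels is a cheap colorfulness score. The filename is reused from above and the cutoff of 10 is a placeholder to tune:
import cv2
import numpy as np
image = cv2.imread("red.png").astype(np.int16)
# Per-pixel spread between the largest and smallest channel value;
# this is 0 everywhere for a perfectly gray image
spread = image.max(axis=2) - image.min(axis=2)
print("color" if spread.mean() > 10 else "black and white")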
I am trying to write a script to calculate the angle between two bones given an x-ray.
A sample x-ray would look like the following:
I am trying to calculate the midline of each bone, essentially a line following the midpoints of the two sides of a bone, and then compare the angle between the two midlines.
I have tried using OpenCV to get the outline of the bones, but it does not seem accurate enough and gets lots of extra data. I am stuck on how to move next and how I would calculate the midline. I am quite new to image processing but have experience with Python.
Getting edges using OpenCV results:
Code for OpenCV:
import cv2
# Load the image
img = cv2.imread("xray-3.jpg")
# Find the contours
imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(imgray, 60, 200)
contours, hierarchy = cv2.findContours(edges, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]  # [-2:] works in OpenCV 3 and 4
hierarchy = hierarchy[0] # get the actual inner list of hierarchy descriptions
# Draw all the contours over the original image
cv2.drawContours(img, contours, -1, (0,255,0), 3)
# Finally show the image
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
If it's not cheating, I'd recommend cropping the image to include as little of the labels and scales as possible without removing any areas of interest.
That being said, I think your method of getting the contours will be usable if you do some preprocessing to the image. One algorithm that might do the trick is a Difference of Gaussians (DoG) filter, which will bring out the edges a little more. I slightly modified this code, which computes the DoG filter using a few different sigma and k values.
from skimage import io, feature, color, filters, img_as_float
from matplotlib import pyplot as plt
raw_img = io.imread('xray-3.jpg')
original_image = img_as_float(raw_img)
img = color.rgb2gray(original_image)
k = 1.6
plt.subplot(2,3,1)
plt.imshow(original_image)
plt.title('Original Image')
for idx, sigma in enumerate([4.0, 8.0, 16.0, 32.0]):
    s1 = filters.gaussian(img, k * sigma)
    s2 = filters.gaussian(img, sigma)
    # Difference of the two Gaussian-blurred images
    dog = s1 - s2
    plt.subplot(2, 3, idx + 2)
    print("min: {} max: {}".format(dog.min(), dog.max()))
    plt.imshow(dog, cmap='RdBu')
    plt.title('DoG with sigma=' + str(sigma) + ', k=' + str(k))
ax = plt.subplot(2, 3, 6)
blobs_dog = [(x[0], x[1], x[2]) for x in feature.blob_dog(img, min_sigma=4, max_sigma=32, threshold=0.5, overlap=1.0)]
# skimage has a bug in my version where only maxima were returned by the above
blobs_dog += [(x[0], x[1], x[2]) for x in feature.blob_dog(-img, min_sigma=4, max_sigma=32, threshold=0.5, overlap=1.0)]
#remove duplicates
blobs_dog = set(blobs_dog)
img_blobs = color.gray2rgb(img)
for blob in blobs_dog:
    y, x, r = blob
    c = plt.Circle((x, y), r, color='red', linewidth=2, fill=False)
    ax.add_patch(c)
plt.imshow(img_blobs)
plt.title('Detected DoG Maxima')
plt.show()
At first glance, it appears that sigma=8.0, k=1.6 might be your best bet, as this seems to best exaggerate the edges of the lower leg while getting rid of the noise across it, particularly over the subject's left (image right) leg. Give your edge detection another go, play around with k and sigma, and let me know what you get :)
If the results look good, you should be able to get a center point between the detected edges for either leg in each row of the image. Then just find the line of best fit for the midpoints of either leg and you should be good to go; a sketch of that step follows below. You will also need to isolate one leg from the other, so again, if it's not cheating, maybe crop the image down the middle into two images.
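For that last step, here is a rough sketch of how the midlines and the angle between them could be computed. It assumes you have already isolated each leg into its own binary edge mask; left_mask and right_mask are hypothetical names:
import numpy as np
def midline_angle(mask):
    # Midpoint between the leftmost and rightmost edge pixel in each row
    ys, mids = [], []
    for y in range(mask.shape[0]):
        cols = np.flatnonzero(mask[y])
        if cols.size >= 2:  # the row must contain both sides of the bone
            ys.append(y)
            mids.append((cols[0] + cols[-1]) / 2.0)
    # Line of best fit through the midpoints, x as a function of y
    slope, _ = np.polyfit(ys, mids, 1)
    return np.degrees(np.arctan(slope))  # degrees away from vertical
# The angle between the two bones is the difference of the midline angles:
# print(abs(midline_angle(left_mask) - midline_angle(right_mask)))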