Applying the k-nearest neighbors algorithm causes an issue with the train method - Python

When I tried to implement k-nearest neighbors on my training dataset, I created the data the same way as in this photo:
Python version: 3.7.6
OpenCV version: 4.2.0
[screenshot of the training data referenced above]
I also followed the same code, but instead of training only on handwritten numbers I did it for characters and numbers grouped by font type. I followed all the steps carefully and all the generated arrays look correct; only knn.train has a problem. I found some older posts saying this is a problem with old Python versions, but I have also heard that cv2.ml.KNearest_create() still works, so have I done something wrong?
# KNN
knn = cv2.ml.KNearest_create()
knn.train(cells, cv2.ml.ROW_SAMPLE, cells_labels)
ret, result, neighbours, dist = knn.findNearest(test_cells, k=3)
It raises a strange error. Is it incompatible with Python 3.7.6?
Traceback (most recent call last):
File "knn-apply.py", line 38, in <module>
knn.train(cells, cv2.ml.ROW_SAMPLE, cells_labels)
TypeError: Expected Ptr<cv::UMat> for argument 'responses'
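(For what it's worth, this TypeError usually appears when the samples or responses passed to cv2.ml are plain Python lists or have the wrong dtype; the ml module wants contiguous NumPy float32 samples and a float32/int32 response column. Below is a minimal self-contained sketch with dummy data standing in for the question's arrays; the shapes and label range are assumptions.)
import numpy as np
import cv2

# Dummy stand-ins for the question's arrays (hypothetical shapes):
# 100 training glyphs of 20x20 pixels, 10 test glyphs, 36 classes.
cells = np.random.randint(0, 255, (100, 20, 20)).astype(np.float32)
cells_labels = np.random.randint(0, 36, (100, 1)).astype(np.float32)
test_cells = np.random.randint(0, 255, (10, 20, 20)).astype(np.float32)

# train() wants 2-D float32 samples (one flattened sample per row) and a
# NumPy response column; passing a plain Python list as responses is what
# typically triggers "Expected Ptr<cv::UMat> for argument 'responses'".
samples = cells.reshape(len(cells), -1)

knn = cv2.ml.KNearest_create()
knn.train(samples, cv2.ml.ROW_SAMPLE, cells_labels)
ret, result, neighbours, dist = knn.findNearest(test_cells.reshape(len(test_cells), -1), k=3)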

Related

Following PyGSLIB demo - with my own data

I have followed the PyGSLIB tutorial (https://opengeostat.github.io/pygslib/Tutorial.html) and am now ready to try running the code on my own data. I have successfully imported the drillhole tables, created the drillhole objects... all the way to tagging my samples with the domain code.
I encounter a problem when I get to this code:
>># creating a partial model by filtering out blocks with zero proportion inside the solid
>>mymodel.set_blocks(mymodel.bmtable[mymodel.bmtable['D1']> 0])
>># export partial model to a vtk unstructured grid (*.vtu)
>>mymodel.blocks2vtkUnstructuredGrid(path='model.vtu')
Here the kernel dies, specifically when trying to export the model to the VTU file format. To explore further I tried the declustering step, but when I entered the code:
>>wtopt,vrop,wtmin,wtmax,error, \
xinc,yinc,zinc,rxcs,rycs,rzcs,rvrcr = pygslib.gslib.declus(parameters_declus)
The kernel did not complete the task, and I was not able to plot the declustered optimization results.
---------------------------------------------------------------------------
NameError                                 Traceback (most recent call last)
Input In [27], in
      1 # Plotting declustering optimization results
----> 2 plt.plot(rxcs, rvrcr, '-o')
      3 plt.xlabel('X cell size')
      4 plt.ylabel('declustered mean')
NameError: name 'rxcs' is not defined
ERROR! Session/line number was not unique in database. History logging moved to new session 118
As far as I can tell, this is because I have not set up an appropriate wireframe: the STL file I have does not seem to have its geospatial data in meters (which I think the tutorial data uses), and I can't seem to find a way to convert the data from my STL file into a usable format.
My question is: how do I convert an STL file of a 3D portion of the Earth's surface, downloaded from 3D-mapper.com, into a usable wireframe so I can continue with the PyGSLIB code?
If anyone could shed some light on this issue, I would really appreciate it.
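(Not a full answer, but if the only problem really is units, one approach is to read the STL with VTK's Python bindings and rescale the coordinates into meters before building the wireframe. The file name and scale factor below are assumptions; I have not verified how 3D-mapper.com encodes its units.)
import vtk

# read the downloaded STL surface (hypothetical file name)
reader = vtk.vtkSTLReader()
reader.SetFileName('my_surface.stl')
reader.Update()

# rescale the coordinates, e.g. if the STL turns out to be in kilometers
transform = vtk.vtkTransform()
transform.Scale(1000.0, 1000.0, 1000.0)  # assumed conversion factor to meters

scaler = vtk.vtkTransformPolyDataFilter()
scaler.SetInputConnection(reader.GetOutputPort())
scaler.SetTransform(transform)
scaler.Update()

surface = scaler.GetOutput()  # vtkPolyData surface, now in meters
How this vtkPolyData is then fed into the PyGSLIB block-tagging step is the part I have not verified against the tutorial.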

Python OpenCV - how to find ONLY rotation and translation needed to align two images given two sets of points? (no Affine, no warping)

I have two sets of matching points, e.g.:
# first set of points
[[696.0, 971.3333333333334], [1103.3333333333333, 934.6666666666666], ...]
# second set of points
[[475.0, 458.6666666666667], [1531.3333333333333, 524.0], ...]
from two images. Right now I'm using this piece of code to align images:
points_source = np.array(source_coordinates)
points_destination = np.array(destination_coordinates)
h, status = cv2.findHomography(points_destination, points_source, cv2.RANSAC)
aligned_image = cv2.warpPerspective(destination_image, h, (source_image.shape[1], source_image.shape[0]))
It works well most of the time, but sometimes it warps the image and aligns it badly. I found the estimateRigidTransform function, which would be the best for me because it only translates and rotates the image, but it's deprecated, and when I try to use it, it throws an error:
Traceback (most recent call last):
File "align.py", line 139, in <module>
align(image, image2, source_coordinates, destination_coordinates)
File "align.py", line 111, in align
m = cv2.estimateRigidTransform(points_destination, points_source, fullAffine=False)
AttributeError: module 'cv2' has no attribute 'estimateRigidTransform'
I couldn't find any other solution than estimateRigidTransform. Is there any other function that would work for me? Maybe I can use warpPerspective to change only rotation and translation? I don't want to use the getAffineTransform function because it accepts only three points and I want to use many more. My OpenCV version is 4.0.1-1.
The function I needed is: cv2.estimateAffinePartial2D()
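For reference, a rough sketch of how it can replace the homography step, reusing the variable names from the question; note that estimateAffinePartial2D fits rotation, translation and a uniform scale (a similarity transform), not a strictly rigid one:
points_source = np.array(source_coordinates)
points_destination = np.array(destination_coordinates)

# 2x3 matrix with rotation, translation and uniform scale, estimated robustly
m, inliers = cv2.estimateAffinePartial2D(points_destination, points_source,
                                         method=cv2.RANSAC)

aligned_image = cv2.warpAffine(destination_image, m,
                               (source_image.shape[1], source_image.shape[0]))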
Instead of using plain OpenCV, I would recommend linking your project with another library that has the algorithms you are looking for (and much more). Probably the best solution would be the Insight Toolkit (ITK) or the Visualization Toolkit (VTK). The former is much more complex and also much harder to learn, but the latter is actually very simple. They both use CMake and there is no problem compiling/linking them.
ITK is designed specifically for image processing. It includes so-called landmark-based registration, which is exactly what you need, and a complete working example is available. Unfortunately, the library seems very complex at the beginning.
On the other hand, VTK also implements the same algorithm, but it can be used very simply (from the example):
vtkSmartPointer<vtkLandmarkTransform> landmarkTransform = vtkSmartPointer<vtkLandmarkTransform>::New();
landmarkTransform->SetSourceLandmarks(sourcePoints);
landmarkTransform->SetTargetLandmarks(targetPoints);
landmarkTransform->SetModeToRigidBody();
landmarkTransform->Update();
std::cout << landmarkTransform->GetMatrix() << std::endl;
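Since the question is in Python, the same class is also available from VTK's Python bindings; a rough sketch, using the two sample point pairs from the question as placeholders:
import vtk

source_points = vtk.vtkPoints()
target_points = vtk.vtkPoints()
# matching 2-D points with z = 0 (values taken from the question as examples)
source_points.InsertNextPoint(475.0, 458.6666666666667, 0.0)
source_points.InsertNextPoint(1531.3333333333333, 524.0, 0.0)
target_points.InsertNextPoint(696.0, 971.3333333333334, 0.0)
target_points.InsertNextPoint(1103.3333333333333, 934.6666666666666, 0.0)

landmark_transform = vtk.vtkLandmarkTransform()
landmark_transform.SetSourceLandmarks(source_points)
landmark_transform.SetTargetLandmarks(target_points)
landmark_transform.SetModeToRigidBody()  # rotation + translation only
landmark_transform.Update()

print(landmark_transform.GetMatrix())  # 4x4 homogeneous transform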

Dilation image-recognition algorithms cityScapes model

I am trying to use this image-recognition algorithm with the Cityscapes model:
https://github.com/fyu/dilation
However, I keep on getting the following error:
bash-4.2$ python predict.py cityscapes sunny_1336601.png --gpu 0
Using GPU 0
WARNING: Logging before InitGoogleLogging() is written to STDERR
Traceback (most recent call last):
File "predict.py", line 133, in <module>
main()
File "predict.py", line 129, in main
predict(args.dataset, args.input_path, args.output_path)
File "predict.py", line 98, in predict
color_image = dataset.palette[prediction.ravel()].reshape(image_size)
ValueError: cannot reshape array of size 12582912 into shape (1090,1920,3)
I tried reshaping the image to every common resolution I could think of, including 640x480, but I have been getting the same error.
Any help or tips are highly appreciated.
Thanks!
I don't have enough reputation to comment, so I am posting my hunch as an answer (forgive me if I'm wrong): the given size 12582912 has to be the product of the three numbers in the tuple. A quick factorisation shows 12582912 = 1024*768*16 = 2048*1536*4. So if the image is a 4-channel image, the resolution is 2048 x 1536, which is the standard 4:3 aspect ratio.
It turns out that the Cityscapes model only takes a specific input size: the width should be twice the height.
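If that is the constraint, a quick way to test it is to resize the input so its width is exactly twice its height before running predict.py; the 2048x1024 target below is an assumption based on the native Cityscapes resolution:
import cv2

image = cv2.imread('sunny_1336601.png')
resized = cv2.resize(image, (2048, 1024))  # dsize is (width, height), width = 2 * height
cv2.imwrite('sunny_1336601_resized.png', resized)
# then: python predict.py cityscapes sunny_1336601_resized.png --gpu 0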
If you know Python well, you will see that this ValueError is an internal code error; it has nothing to do with missing dependencies or the environment.
It comes from the fact that the image had one total size, and is then cast to an array and reshaped into different dimensions.
That is not something that can or should be fixed by tampering with the input data, but by addressing the bug in the provided library itself.
It is very common to have this kind of restriction with an NN classifier: once the layers are trained they can't be changed, so the input must have a very specific shape. Of course, the input can still be "cooked" before it is given to the NN, but that is usually non-destructive, basic scaling, so the proportions must be preserved, which is what the library gets wrong.

How to fit multiple sequences with GMMHMM?

I have a problem with the Python hmmlearn library: I have several training sets and I would like to fit one Gaussian mixture HMM model to all of them.
Here is an example working with multiple sequences.
X = np.concatenate([X1, X2])
lengths = [len(X1), len(X2)]
hmm.GaussianHMM(n_components=3).fit(X, lengths)
When I change GaussianHMM to GMMHMM, it returns the following error:
hmm.GMMHMM(n_components=3).fit(X, lengths)
Traceback (most recent call last):
File "C:\Users\Cody\workspace\QuickSilver_HMT\hmm_list_sqlite.py", line 141, in hmm_list_pickle
hmm.GMMHMM(n_components=3).fit(X, lengths)
File "build\bdist.win32\egg\hmmlearn\hmm.py", line 998, in fit
raise ValueError("'lengths' argument is not supported yet")
ValueError: 'lengths' argument is not supported yet
How can one fit multiple sequences with GMMHMM?
The current master version contains a rewrite of GMMHMM, which at one point did not support multiple sequences. Now it does, so updating should help, as #ppasler suggested.
The re-write is still a work-in-progress. Please report any issues you encounter on the hmmlearn issue tracker.
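For completeness, a minimal sketch of the multi-sequence fit on a recent hmmlearn where the rewrite has landed; the toy data and hyperparameters are arbitrary:
import numpy as np
from hmmlearn import hmm

# two toy observation sequences with 2-D features (arbitrary data)
X1 = np.random.randn(100, 2)
X2 = np.random.randn(150, 2)

X = np.concatenate([X1, X2])
lengths = [len(X1), len(X2)]

# after the rewrite, GMMHMM accepts lengths just like GaussianHMM does
model = hmm.GMMHMM(n_components=3, n_mix=2, covariance_type="diag")
model.fit(X, lengths)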

Unrecognized or unsupported array type in function cvGetMat in python opencv

I am trying to write code in Python with OpenCV 2.4.3, and it is giving me the error below:
Traceback (most recent call last):
File "/home/OpenCV-2.4.3/cam_try.py", line 6, in <module>
cv2.imshow('video test',im)
error: /home/OpenCV-2.4.3/modules/core/src/array.cpp:2482: error: (-206) Unrecognized or unsupported array type in function cvGetMat
I don't understand what that means. Can anybody help me out?
Thank you.
The relevant snippet of the error message is Unrecognized or unsupported array type in function cvGetMat. The cvGetMat() function converts arrays into a Mat, the matrix data type that OpenCV uses in the C/C++ world (note: the Python OpenCV interface you are using works with NumPy arrays, which are converted behind the scenes into Mat arrays). With that background in mind, the problem appears to be that the array im you're passing to cv2.imshow() is poorly formed. Two ideas:
This could be caused by quirky behavior of your webcam: on some cameras, null frames are returned from time to time. Before you pass the im array to imshow(), try ensuring that it is not None.
If the error occurs on every frame, eliminate some of the processing you are doing and call cv2.imshow() immediately after you grab the frame from the webcam. If that still doesn't work, you'll know it's a problem with your webcam. Otherwise, add your processing back line by line until you isolate the problem. For example, start with this:
import cv2

capture = cv2.VideoCapture(0)  # open the default webcam
i = 0
while True:
    # Grab frame from webcam
    retVal, image = capture.read()  # note: ignore retVal
    # faces = cascade.detectMultiScale(image, scaleFactor=1.2, minNeighbors=2, minSize=(100,100), flags=cv.CV_HAAR_DO_CANNY_PRUNING)
    # Draw rectangles on image, and then show it
    # for (x, y, w, h) in faces:
    #     cv2.rectangle(image, (x, y), (x+w, y+h), 255)
    cv2.imshow("Video", image)
    i += 1
    if cv2.waitKey(1) & 0xFF == ord('q'):  # imshow needs waitKey to refresh the window
        break
source: Related Question: OpenCV C++ Video Capture does not seem to work
I was having the same error, and after about an hour of searching I found that the path to the image was improperly defined. That solved my problem; maybe it will solve yours.
I solved the problem by using a BGR picture; the one from my camera was YUYV by default!
I am working on Windows with OpenCV 2.3.1 and Python 2.7.2 and had the same problem. I solved it by pasting the following DLL files into the Python installation folder: opencv_ffmpeg.dll and opencv_ffmpeg_64.dll. Maybe this points you to a similar solution on Ubuntu.
For me, as Gab Hum did, copying opencv_ffmpeg245.dll to my Python code folder made it work.
Check your image array (NumPy array), for example by printing it, to see whether you are passing an array of images in one shot instead of passing one image at a time.
A single image array would look like :
[[[ 76 85 103] ... [ 76 85 103]], ... ]
Each image is a matrix of pixels: the rows enclose the pixel columns, and the image encloses the rows. An extra level of nesting means you are passing several images at once.
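A quick way to check is to print the array's type and shape; an extra leading dimension means you are passing a batch instead of a single image:
import cv2

im = cv2.imread('test.jpg')  # or a frame returned by capture.read()
print(type(im), None if im is None else im.shape)
# a single BGR image prints something like (480, 640, 3); None means the read
# failed, and a 4-D shape such as (N, 480, 640, 3) means a batch of images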
It is always good to have a sanity check to be sure your camera is working. In my case (a Raspberry Pi camera) I use:
raspistill -o test.jpg
