This is for a homework question on implementing clustering algorithms. The code has already been given to me, but it's implemented in MATLAB, and since I am using Python I don't know what to make of it. I think I'll have to write it from scratch.
I've been given a text file which contains feature vectors for an image.
data = np.loadtxt("filename").T
# data.shape = (n, 4)
where the first two features are the chrominance values and the last two are the coordinates of a pixel.
I have another file which contains some information about the image:
offset: 3
sx: 321
sy: 481
stepsize: 7
winsize: 7
Could anyone tell me how to form an image from a set of feature vectors?
Also, could anyone point me to some online resources for learning image segmentation with Python? Thanks.
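Here is a minimal sketch of the kind of reconstruction I have in mind, assuming the last two features are integer (x, y) pixel coordinates on the sx-by-sy grid from the metadata file (that interpretation is my guess):

import numpy as np

data = np.loadtxt("filename").T  # shape (n, 4)

sx, sy = 321, 481  # image dimensions from the metadata file
img = np.zeros((sy, sx, 2))  # one plane per chrominance feature

# Assuming column 2 is x and column 3 is y, as integer pixel indices (a guess)
xs = data[:, 2].astype(int)
ys = data[:, 3].astype(int)
img[ys, xs, 0] = data[:, 0]
img[ys, xs, 1] = data[:, 1]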
OpenImageIO is a very good place to start. It's used by many professional imaging applications like The Foundry's Nuke and others.
As of 1.37, they've got an all-new Python API which can create images in all kinds of amateur and professional formats (like DPX, EXR, etc.) and all kinds of colorspaces (YCbCr, xvYCC, RGB, etc.).
It's worth a look.
I'm comparing some open-source face recognition frameworks running with Python (dlib), and I wanted to create ROC and DET curves for them. For creating match scores I'm using the CASIA FaceV5 dataset. Everything is for educational purposes only.
My question is:
What's the best way to generate these kinds of curves? (Any good libraries for that?)
I found this via Google (scikit-learn), but I still don't know how I should use it for face recognition.
I mean, what information do I have to pass? I know that a ROC curve uses the true match rate and the false match rate, but from a developer's point of view I just don't know how to feed that information into the scikit-learn function.
My Test:
I'm creating genuine match scores for every person in the CASIA dataset. For this I compare different pictures of the same person. I save these scores in the array "genuineScores".
Example:
Person1_Picture1.jpg compared with Person1_Picture2.jpg
Person2_Picture1.jpg compared with Person2_Picture2.jpg, etc.
I'm also creating impostor match scores. For this I use two pictures of different persons. I save these scores in the array "impostorScores".
Example:
Person1_Picture1.jpg compared with Person2_Picture1.jpg
Person2_Picture1.jpg compared with Person3_Picture1.jpg, etc.
Now I'm just looking for a library where I can pass in the two arrays and it creates a ROC curve for me.
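Something like this sketch is what I imagine, using scikit-learn's roc_curve (labeling genuine comparisons 1 and impostor comparisons 0 is my assumption, and the score values below are placeholders):

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

# Placeholder scores standing in for my real arrays
genuineScores = np.array([0.91, 0.87, 0.95, 0.80])
impostorScores = np.array([0.30, 0.45, 0.22, 0.51])

# Label genuine pairs 1 and impostor pairs 0, then let roc_curve
# sweep a decision threshold over the combined scores.
scores = np.concatenate([genuineScores, impostorScores])
labels = np.concatenate([np.ones_like(genuineScores), np.zeros_like(impostorScores)])

# roc_curve returns false positive rate and true positive rate,
# which correspond to the false match rate and true match rate here.
fmr, tmr, thresholds = roc_curve(labels, scores)
plt.plot(fmr, tmr)
plt.xlabel("False match rate")
plt.ylabel("True match rate")
plt.show()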
Or is there another method for doing so?
I appreciate any kind of help. Thank you.
I'm working on a project to break down 3D models, but I'm quite lost. I hope you can help me.
I'm getting a 3D model from Autodesk BIM, and the format could be native or a generic CAD format (.stp, .igs, .x_t, .stl). Then I need to somehow "measure" the maximum dimensions to model a raw-material body, which will always have the shape of a huge panel. Once I have both bodies, I will take the difference to extract the solids I need to analyze; and, on each of these bodies, I need to extract the faces, and then the lines or curves of each face.
This sounds like something really easy to do in CAD software, but the idea is to automate the process. I was looking into OpenSCAD, but it seems to work only for modeling geometry and doesn't handle imported solids well. I'm leaving a picture with the idea of what I need to do in the link below.
So, any idea how I can do this? Which language and library could help with this project?
I can see this automation being possible with a few in-between steps:
1. OpenSCAD can handle differences well, so your "Extract Bodies" step seems plausible.
1.5. Before going further, you'll have to explain how you "filtered out" the cylinder. Will you do this manually? If you don't, it will be considered in the analysis and you will end up with a lot of faces.
2. I don't think OpenSCAD provides you a vertex array. However, it can save to .STL, which is fairly easy to parse with the programming language of your choice; you'll have to study the .STL file structure a bit (this sounds much more frightening than it is: if you open an .STL file in a text editor, you will probably immediately see what's happening). See the sketch after this list.
3. Once you've parsed the file, you can calculate the lines with high-school math.
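For illustration, a minimal sketch of parsing an ASCII .STL in Python (the file name is a placeholder; "vertex" is a keyword of the ASCII STL format):

import numpy as np

# Minimal ASCII .STL parser: collect the three vertices of each facet.
triangles = []
current = []
with open("panel.stl") as f:  # placeholder file name
    for line in f:
        parts = line.split()
        if parts and parts[0] == "vertex":
            current.append([float(v) for v in parts[1:4]])
        if len(current) == 3:
            triangles.append(current)
            current = []

triangles = np.array(triangles)  # shape: (number of facets, 3 vertices, 3 coords)
# Each pair of vertices within a facet is a candidate edge/line to analyze.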
This is not an easy, GUI-based way to do what you ask, but if you have a few programming skills you'll have your automation, and depending on the number of your projects it may be worth it.
I have been working on this kind of project, and found that the library "trimesh" is better suited to solving this problem. Give it a shot and save some time.
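For example, a minimal sketch of what trimesh gives you out of the box (the file name is a placeholder):

import trimesh

mesh = trimesh.load("panel.stl")  # placeholder file name
print(mesh.bounds)  # axis-aligned min/max corners -> maximum dimensions for the raw-material panel
print(mesh.vertices.shape, mesh.faces.shape)  # vertex array and triangle faces, ready to analyze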
I have read a lot of essays and articles about image compression algorithms. There are many algorithms, and I can only understand some of them because I'm a student and I haven't gone to high school yet. I read this article, which helped me a lot: Article. On page 3, in the part about run-length coding: it's a very easy and helpful algorithm, but I don't know how to make a new image format out of it. I am a Python developer, but I don't know how to make a new format that has its own algorithm and program, like .jpeg, .jpg, .png, .bmp.
(Sorry, I have only studied English for one year, so please excuse any problems with grammar or vocabulary.)
Sure, you can make your own image file format. Choose a filename extension, define how it will be stored, and write Python code to:
read the format from disk into a Numpy array, and
write an image contained in a Numpy array to disk
That way you will be interoperable with all the major image processing libraries such as OpenCV, scikit-image, PIL, and wand.
Have a look at how NetPBM works to get started with a simple format. Maybe look at the PCX format if you like the thought of RLE.
Read up on how to write binary to a file with Python.
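As a sketch of what that might look like, here is a toy run-length-encoded format for 8-bit grayscale images (the .rli extension and the 4-byte header are made up for illustration):

import struct
import numpy as np

def write_rli(path, img):
    # Write a uint8 grayscale image as (count, value) run-length pairs.
    h, w = img.shape
    flat = img.flatten()
    with open(path, "wb") as f:
        f.write(struct.pack("<HH", h, w))  # made-up 4-byte header: height, width
        i = 0
        while i < len(flat):
            run = 1
            while run < 255 and i + run < len(flat) and flat[i + run] == flat[i]:
                run += 1
            f.write(struct.pack("BB", run, int(flat[i])))
            i += run

def read_rli(path):
    # Read the toy format back into a numpy array.
    with open(path, "rb") as f:
        h, w = struct.unpack("<HH", f.read(4))
        out = []
        while True:
            pair = f.read(2)
            if len(pair) < 2:
                break
            run, val = struct.unpack("BB", pair)
            out.extend([val] * run)
    return np.array(out, dtype=np.uint8).reshape(h, w)

Round-tripping an array through write_rli and read_rli should give back exactly the same pixels, which makes the format easy to test.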
I'm doing a little project with neural networks. I've read about digit recognition with the MNIST dataset and wondered if it's possible to make a similar dataset, but with regular objects we see every day.
So here's the algorithm (if we can call it that):
Everything is done with the OpenCV library for Python.
1) Get contours from the image. These are not literally contours, but something that looks like them.
I've done this with this code:
import cv2

def findContour(self):
    # Convert to grayscale, smooth while preserving edges, then detect edges.
    gray = cv2.cvtColor(self.image, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 11, 17, 17)
    self.image = cv2.Canny(gray, 30, 200)
2) Next, I need to create a training set.
I copy and edit this image: change its rotation and flip it -- now we have about 40 images, which consist of rotated contours.
3) Now I'm going to dump these images to a CSV file.
These images are represented as 3D arrays, so I flatten them using the .flatten() function from NumPy. Then each flattened vector is written to the CSV file, with the label as the last element.
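Here's a minimal sketch of steps 2 and 3 as I imagine them (the file name, rotation step, and label value are placeholders):

import csv
import cv2

# Step 1: edge map of the object (placeholder file name).
image = cv2.imread("object.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(cv2.bilateralFilter(gray, 11, 17, 17), 30, 200)

# Steps 2 and 3: rotate and flip to get ~40 variants, then dump to CSV.
rows = []
label = 0  # placeholder class label
h, w = edges.shape
for angle in range(0, 360, 18):  # 20 rotations
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    rotated = cv2.warpAffine(edges, m, (w, h))
    for img in (rotated, cv2.flip(rotated, 1)):  # plus a horizontal flip -> 40 images
        rows.append(list(img.flatten()) + [label])

with open("dataset.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)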
This is what I've done, and I want to ask: will it work out?
Next I want to use everything except the last element as the input x vector, and the last element as the y label (like here).
Recognition will be done the same way: we get the contours of an image and feed them to the neural network; the output will be the label.
Is it even possible, or better not to try?
There is plenty of room for experimentation. However, you should not reinvent the wheel, except as a learning exercise. Research the paradigm, learn what already exists, and then go make your own wheel improvements.
I strongly recommend that you start with image recognition in CNNs (convolutional neural networks). A lot of wonderful work has been done with the ILSVRC 2012 image data set (a.k.a. ImageNet files). In fact, a large part of today's NN popularity comes from Alex Krizhevsky's breakthrough (resulting in AlexNet, the first NN to win the ILSVRC) and ensuing topologies (ResNet, GoogleNet, VGG, etc.).
The simple answer is to let your network "decide" what's important in the original photo. Certainly, flatten the image and feed it contours, but don't be surprised if a training run on the original images produces superior results.
Search for resources on "Image Recognition introduction" and pick a few of the hits that match your current reading and topic interests. There are plenty of good ones out there.
When you get to programming your own models, I strongly recommend that you use an existing framework, rather than building all that collateral from scratch. Dump the CSV format; there are better ones with pre-packaged I/O routines and plenty of support. The idea is to let you design your network, rather than manipulating data all the time.
Popular frameworks include Caffe, TensorFlow, Torch, Theano, and CNTK, among others. So far, I've found Caffe and Torch to have the easiest overall learning curves, although there's not so much difference that I'd actually recommend one over another in general. Look for one that has good documentation and examples in your areas of interest.
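For instance, here is a minimal sketch of the kind of pre-packaged I/O a framework gives you in place of hand-rolled CSV (PyTorch's torchvision, assuming images are sorted into one folder per class; the path and sizes are placeholders):

import torch
from torchvision import datasets, transforms

# Images laid out as data/train/<class_name>/*.jpg are read and
# labeled automatically; no CSV manipulation needed.
dataset = datasets.ImageFolder(
    "data/train",  # placeholder path
    transform=transforms.Compose([
        transforms.Grayscale(),
        transforms.Resize((64, 64)),
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
images, labels = next(iter(loader))  # one ready-to-train batch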
I have available a database of medical-grade CT scans. Each scan series consists of about 30 ordered slices, each parallel to the next (dimensions consistent among images). I would like to generate a three-dimensional (preferably rotatable) rendering of these images.
I anticipate that I'll be able to accomplish my goal by joining a series of thresholded binary images into a numpy/scipy array ([image1, image2, ... imageN]) and feeding the object into some sort of display function. Does anyone know of a function that fits this description?
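For reference, here is a minimal sketch of the kind of pipeline I have in mind (scikit-image's marching_cubes plus a matplotlib 3D axis; the slice file names are placeholders):

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
from skimage import measure

# Stack the ~30 ordered binary slices into one (N, H, W) volume.
volume = np.stack([np.load(f"slice_{i}.npy") for i in range(30)])  # placeholder files

# Extract a surface mesh at the binary boundary and render it on a rotatable 3D axis.
verts, faces, normals, values = measure.marching_cubes(volume.astype(float), level=0.5)
fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
ax.add_collection3d(Poly3DCollection(verts[faces], alpha=0.7))
ax.set_xlim(0, volume.shape[0])
ax.set_ylim(0, volume.shape[1])
ax.set_zlim(0, volume.shape[2])
plt.show()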
I apologize if my question is poorly articulated -- please kindly request further clarification if needed.
I'm a first time poster and by no means an expert in image processing, computer vision, etc.