Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 8 years ago.
I have the Stanford Movie Review Dataset. It contains both training and testing data, all as text files split into two folders, positive and negative.
How do I implement text classification on it with the SVM algorithm, using a Python library?
Check out scikit-learn; it's a great machine learning framework. Have a look at its Working With Text Data tutorial, the Classification of text documents using sparse features example, and the Feature extraction documentation.
There is also NLTK, but it's not very powerful for your case; check this thread for more details.
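A minimal sketch of the scikit-learn approach, using a TfidfVectorizer feeding a LinearSVC. The toy reviews below are made-up stand-ins for the Stanford data; in practice you would load the positive/negative folders with sklearn.datasets.load_files instead:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for the review texts; replace with
# sklearn.datasets.load_files("train/") on the real folders.
train_texts = [
    "a wonderful, moving film",
    "great acting and a great plot",
    "terrible, boring movie",
    "a dull and awful waste of time",
]
train_labels = ["pos", "pos", "neg", "neg"]

# TF-IDF features + linear SVM in one pipeline.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(train_texts, train_labels)

print(clf.predict(["what a great film"])[0])
```

The same pipeline object can then score the held-out test folder with `clf.score(test_texts, test_labels)`.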
Closed 2 years ago.
I need a trained deep learning model that can compare two images of two persons and tell me whether the two images show the same person or not.
I assume the faces of the two persons are visible in the images. In that case you can try FaceNet (see the FaceNet Paper).
You can find an implementation, for example, here: link
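With a FaceNet-style model, the final comparison reduces to thresholding the distance between two face embeddings. A minimal sketch of just that step, with toy vectors standing in for real model outputs (a threshold around 1.1 is commonly used for L2 distance on unit-normalised FaceNet embeddings, but you should tune it on your own data):

```python
import numpy as np

def same_person(emb1, emb2, threshold=1.1):
    """Return True if two unit-normalised face embeddings are
    closer than the threshold in L2 distance."""
    return float(np.linalg.norm(emb1 - emb2)) < threshold

# Toy 3-d embeddings standing in for the 128-d output of a real
# face-embedding model; real embeddings come from the network.
a = np.array([0.10, 0.90, 0.20]); a /= np.linalg.norm(a)
b = np.array([0.12, 0.88, 0.21]); b /= np.linalg.norm(b)  # near a
c = np.array([0.90, 0.10, 0.40]); c /= np.linalg.norm(c)  # far from a

print(same_person(a, b))  # close vectors: same person
print(same_person(a, c))  # distant vectors: different person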
Closed 3 years ago.
I have been working on an algorithmic trading project where I used R to fit a random forest on historical data, while the real-time trading system is written in Python.
I have fitted a model I'd like to use in R and am now wondering how I can use this model for prediction in the Python system.
Thanks.
There are several options:
(1) Random forest is a well-researched algorithm and is available in Python through scikit-learn. Consider implementing it natively in Python if that is the end goal.
(2) If that is not an option, you can call R from within Python using the rpy2 library. There is plenty of online help available for this library, so just do a Google search for it.
Hope this helps.
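To illustrate option (1), a minimal sketch of refitting the model natively with scikit-learn; the features and signal below are synthetic stand-ins for your historical data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for historical features and trade labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Fit the random forest in Python, so the real-time system can
# call model.predict(...) directly with no R bridge.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)
print(model.score(X, y))
```

Once retrained, persist it with `joblib.dump(model, "rf.joblib")` and load it in the trading process at startup.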
Closed 5 years ago.
I need to identify songs from human humming. What are the best methodologies and algorithms I can use to achieve that? I searched for code samples but couldn't find any. Please help me.
You could begin with a Python program that uses TensorFlow to deep-learn the correspondence between humming and songs; this kind of work falls under Magenta, an umbrella initiative by Google Brain.
Of course, for deep learning you would need a large corpus of examples to learn from.
Closed 7 years ago.
I have a phrase like this one:
Apple iPhone 5 White 16GB
and I want to tag in this way
B M C S
where
B=Brand (Apple)
M=Model (iPhone 5)
C=Color (White)
S=Size (16GB)
A classifier must learn the sequence pattern... I think I will use an SVM or a CRF.
My question is: what is the best way to tag a phrase like this? I will use the NLTK library for Python.
What do you think of {Apple}\B {iPhone 5}\M...? Is that the best way?
Is there also a way to use a seed dictionary (of brands, for example) to let NLTK tag a list of phrases for me automatically?
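To illustrate the seed-dictionary idea: a common representation is a list of (token, tag) tuples, which is also the format NLTK's taggers work with. A minimal sketch with hypothetical dictionaries and rules, usable for bootstrapping training data for an SVM or CRF:

```python
# Hypothetical seed dictionaries for bootstrapping labels.
BRANDS = {"Apple", "Samsung", "Nokia"}
COLORS = {"White", "Black", "Blue"}

def seed_tag(tokens):
    """Tag tokens via seed dictionaries and a simple size pattern;
    anything unrecognised defaults to the model tag 'M'."""
    tagged = []
    for tok in tokens:
        if tok in BRANDS:
            tagged.append((tok, "B"))
        elif tok in COLORS:
            tagged.append((tok, "C"))
        elif tok.endswith("GB"):
            tagged.append((tok, "S"))
        else:
            tagged.append((tok, "M"))
    return tagged

print(seed_tag("Apple iPhone 5 White 16GB".split()))
# [('Apple', 'B'), ('iPhone', 'M'), ('5', 'M'), ('White', 'C'), ('16GB', 'S')]
```

The auto-tagged phrases can then be hand-corrected and fed to a sequence classifier as training data.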
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
I am trying to find the top 100/1000 words based on the TfidfVectorizer output of Python's scikit-learn library. Is there a way to do it with a function from the scikit-learn libraries?
Thanks for the help.
What do you mean by top 100/1000 words? The most frequent words in a dataset? You can use the Counter class from Python's standard library for that; no need for scikit-learn.