I have been working on an algorithmic trading project where I used R to fit a random forest on historical data, while the real-time trading system is written in Python.
I have fitted the model I'd like to use in R and am now wondering how I can use this model for prediction in the Python system.
Thanks.
There are several options:
(1) Random forest is a well-researched algorithm and is available in Python through scikit-learn. Consider implementing it natively in Python if that is the end goal (a minimal sketch follows below).
(2) If that is not an option, you can call R from within Python using the rpy2 library. There is plenty of online help available for this library, so just do a Google search for it (a second sketch below shows the basic idea).
Hope this helps.
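A rough sketch of option (1), assuming the historical features and labels are available as CSV files (the file names, column handling, and forest settings below are all hypothetical):

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# hypothetical files holding the same historical features/labels used to fit the R model
X = pd.read_csv("historical_features.csv")
y = pd.read_csv("historical_labels.csv").squeeze()

# refit the forest natively in Python
model = RandomForestClassifier(n_estimators=500, random_state=42)
model.fit(X, y)

# predict on the most recent rows as a stand-in for live data
print(model.predict(X.tail(10)))
```

And a sketch of option (2) with rpy2, assuming the model was fitted with the R randomForest package and exported with saveRDS(model, "rf_model.rds") (both are assumptions, as are the feature names):

```python
import pandas as pd
import rpy2.robjects as ro
from rpy2.robjects import pandas2ri
from rpy2.robjects.conversion import localconverter
from rpy2.robjects.packages import importr

importr("randomForest")  # R package the model was fitted with (assumption)

# assumption: the fitted model was exported from R with saveRDS(model, "rf_model.rds")
ro.r('rf_model <- readRDS("rf_model.rds")')

# hypothetical live feature row; column names must match the R training data
live = pd.DataFrame({"feature1": [0.12], "feature2": [3.4]})

# convert the pandas data frame to an R data.frame and call R's predict()
with localconverter(ro.default_converter + pandas2ri.converter):
    preds = ro.r["predict"](ro.r["rf_model"], newdata=live)
print(preds)
```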
I'm doing a project where I have data from 100 sensors and the number of cycles each one ran until it broke. The data show a number of characteristics up to the failure, and then show them for the replacement sensor. With this data I have to build a model that predicts how long a sensor will keep working until it fails, but using only part of the data, not the full cycle. I have no idea what machine learning model is suitable for this.
The type of problem you are describing is known as survival analysis. A wide range of both statistical and machine learning methods are available to help you solve this type of problem.
What is great about these methods is that they also allow you to use data points where the event you are interested in has not yet occurred (censored observations). In your example, this means you can extend your dataset by including data from sensors that have not failed yet.
When you look at the methods, I suggest you also spend some time examining how to evaluate these types of models, since the evaluation methods are slightly different from those in typical machine learning problems.
A comprehensive range of techniques is available at: http://dmkd.cs.vt.edu/TUTORIAL/Survival/Slides.pdf
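As a minimal sketch of the idea, using the Python lifelines library (an assumption -- any survival package would do) and a toy table with one row per sensor, where made-up covariates stand in for your sensor characteristics:

```python
import pandas as pd
from lifelines import CoxPHFitter

# one row per sensor: observed cycles, whether it actually failed (1) or is
# still running, i.e. censored (0), plus whatever sensor readings you have
df = pd.DataFrame({
    "cycles":      [120, 340, 80, 500, 260, 410],
    "failed":      [1,   1,   1,  0,   1,   0],
    "temperature": [70.2, 65.1, 80.0, 68.4, 72.5, 66.3],  # hypothetical covariates
    "vibration":   [0.12, 0.08, 0.30, 0.10, 0.15, 0.09],
})

# Cox proportional hazards model: relates covariates to time-to-failure
cph = CoxPHFitter()
cph.fit(df, duration_col="cycles", event_col="failed")
cph.print_summary()

# predicted median time-to-failure given each sensor's covariates
print(cph.predict_median(df.drop(columns=["cycles", "failed"])))
```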
I want to get the list of all the hyperparameters for a machine learning algorithm, so that I can give it as an input to a grid search. I am currently using a Jupyter Notebook. Is there any command to get this list of all the hyperparameters?
The first option is using Shift + Tab.
Moreover, using Shift + Tab + Tab makes it much more user friendly.
Sometimes the signature/documentation shown with Shift + Tab may not work; in that case you can also type <function>? and execute the cell.
This will also lead you to the documentation, where you can see all the hyperparameters.
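For example, in a Jupyter cell, assuming a scikit-learn estimator such as RandomForestClassifier (the estimator is only for illustration):

```python
from sklearn.ensemble import RandomForestClassifier

# the trailing ? is IPython/Jupyter syntax: it opens the docstring,
# which lists every hyperparameter with its default value
RandomForestClassifier?

# get_params() returns the hyperparameters and their current values as a dict,
# which is a convenient starting point for building a grid-search parameter grid
RandomForestClassifier().get_params()
```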
I came across some answers regarding the possibility of image recognition (and further matching) in Python and other languages, as well as papers describing a machine learning process for matching signatures, but none in R regarding the possibility of recognizing signatures. In Python, a language I might try my hand at, I found the OpenCV library for face recognition, but that is still not exactly what I need.
For an R user (and someone willing to teach himself Python), what is the best strategy -- considering time, noting that it should not take more than a few days of learning -- to create an algorithm for signature matching in R or Python?
I want to create my own simple CNN, but I need some ready implementations as a reference.
Can you share links or articles where I can find ready implementations of a CNN (without using any frameworks such as Keras, but possibly with numpy/scipy), so I can see how each operation, like matrix multiplication and so on, is implemented?
Yes, it is possible to implement your own bare-bones CNN without the help of any frameworks like Keras, TF, etc. You can check out this simple implementation of CNNs using numpy/cython and the code repository here.
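To give a flavour of what such an implementation looks like, here is a minimal numpy-only sketch of a single convolution + ReLU + max-pooling forward pass (the layer sizes and random inputs are arbitrary):

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    # naive "valid" convolution (really cross-correlation, as in most CNN libraries)
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)  # element-wise multiply, then sum
    return out

def relu(x):
    return np.maximum(0, x)

def max_pool(x, size=2):
    # crop to a multiple of the pool size, then take the max over each size x size block
    h, w = x.shape
    x = x[:h - h % size, :w - w % size]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# toy forward pass: an 8x8 "image" and one 3x3 filter
img = np.random.rand(8, 8)
kern = np.random.rand(3, 3)
features = max_pool(relu(conv2d(img, kern)))
print(features.shape)  # (3, 3)
```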
I have a dataset from Stanford on Movie Reviews - Movie Review Dataset. It has both training data and testing data - all of them are text files in 2 folders - positive and negative.
How do I implement text classification on it using the SVM algorithm? (Using a Python library)
Check out scikit-learn, it's a great machine learning framework. Have a look at the Working With Text Data tutorial, the "Classification of text documents using sparse features" example, and the feature extraction documentation.
There is also NLTK, but it's not very powerful in your case; check this thread for more details.
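As a rough sketch with scikit-learn, assuming the dataset has been extracted so that the train and test folders each contain pos and neg subfolders (the aclImdb paths below are an assumption):

```python
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# load_files treats each subfolder (pos, neg) as a class label
train = load_files("aclImdb/train", categories=["pos", "neg"],
                   encoding="utf-8", decode_error="replace")
test = load_files("aclImdb/test", categories=["pos", "neg"],
                  encoding="utf-8", decode_error="replace")

# TF-IDF features + a linear-kernel SVM
model = make_pipeline(TfidfVectorizer(sublinear_tf=True), LinearSVC())
model.fit(train.data, train.target)

preds = model.predict(test.data)
print("accuracy:", accuracy_score(test.target, preds))
```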