I'm wondering what the set_weights method of the Maxent class in NLTK is used for (or, more specifically, how to use it). As I understand it, it allows you to manually assign weights to certain features? Could somebody provide a basic example of the type of parameter that would be passed into it?
Thanks
Alex
It apparently allows you to set the coefficient matrix of the classifier. This may be useful if you have an external MaxEnt/logistic regression learning package from which you can export the coefficients. The train_maxent_classifier_with_gis and train_maxent_classifier_with_iis learning algorithms call this function.
If you don't know what a coefficient matrix is: it's the β mentioned in Wikipedia's treatment of MaxEnt.
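Here is a minimal sketch of how you might call it; the toy featuresets and the zero placeholder vector are mine, the point being that the vector you pass must match the length of the classifier's feature encoding:

```python
import numpy as np
from nltk.classify import maxent

# Toy training set (hypothetical): (featureset, label) pairs
train = [(dict(a=1, b=1), "x"),
         (dict(a=1, b=0), "y"),
         (dict(a=0, b=1), "x")]

# Train normally first, so the classifier builds its feature encoding
clf = maxent.MaxentClassifier.train(train, algorithm="gis", max_iter=5)

# One coefficient per joint (feature, label) id in the encoding
w = clf.weights()
print(len(w), w)

# Overwrite the coefficients, e.g. with values exported from an external
# logistic-regression package (zeros here are just a placeholder)
clf.set_weights(np.zeros(len(w)))
```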
(To be honest, it looks like NLTK is either leaking implementation details here, or has a very poorly documented API.)
I've read in the documentation that sklearn uses the CART algorithm for trees.
Are there specific attributes to change so that it becomes similar to a c4.5 implementation?
CART and C4.5 are somewhat similar algorithms, but there are fundamental differences that won't let you tweak sklearn's implementation into a C4.5 without a lot of work.
C4.5 splits on information-based criteria (gain ratio), allows multi-way splits on categorical attributes, and post-processes the tree into rule sets, whereas CART uses strictly binary splits with a numerical splitting criterion (Gini impurity by default in sklearn).
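If you just want to nudge sklearn toward C4.5, the closest built-in knob is the split criterion; a minimal sketch (the iris data is just illustrative):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Switch from CART's default Gini impurity to entropy, the information-gain
# family that C4.5's gain ratio belongs to. The tree is still binary-split
# CART underneath: no gain ratio, no multi-way splits, no rule post-pruning.
clf = DecisionTreeClassifier(criterion="entropy").fit(X, y)
```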
You can take a look at this implementation of C4.5.
I need to create a general-linear polynomial model in Python. Since definitions for this type of model vary, I'll note that I'm referring to this reference by NI. I believe MATLAB's implementation is quite similar.
I am particularly interested in creating an Output-Error (OE) model with its initialization handled by the Prediction Error Method (PEM).
I've been looking through scikit, statsmodels, and some time-series and stats libraries on GitHub, but failed to find one that addresses this exact task.
I would be grateful for both:
suggestions of ready-made modules/libs
(if none exists) advice on creating my own lib: perhaps building on top of NumPy, SciPy, or one of the libraries mentioned above.
Thank you.
P.S.: A module just for OE/PEM would be sufficient, but I doubt one exists separately from other linear polynomial model libs.
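To make the target concrete, here is a rough sketch of the kind of OE fit I have in mind, built directly on NumPy/SciPy by minimizing the simulation error with least_squares. The orders, function names, and synthetic data are just illustrative; a proper PEM implementation would seed theta from an ARX or instrumental-variable estimate rather than zeros:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import lfilter

def oe_residuals(theta, u, y, nb, nf):
    # Output-Error model: y(t) = B(q)/F(q) u(t) + e(t)
    b = theta[:nb]                     # B(q) = b1 q^-1 + ... + b_nb q^-nb
    f = theta[nb:]                     # F(q) = 1 + f1 q^-1 + ... + f_nf q^-nf
    num = np.concatenate(([0.0], b))   # leading zero encodes the input delay
    den = np.concatenate(([1.0], f))
    y_sim = lfilter(num, den, u)       # simulated (noise-free) output
    return y - y_sim                   # simulation error to be minimized

# Synthetic data for illustration: a known OE system plus measurement noise
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = lfilter([0.0, 0.5, 0.2], [1.0, -0.8], u) + 0.05 * rng.standard_normal(500)

nb, nf = 2, 1
theta0 = np.zeros(nb + nf)
res = least_squares(oe_residuals, theta0, args=(u, y, nb, nf))
print("B:", res.x[:nb], "F:", res.x[nb:])
```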
Is there a way to hand an x,y pair dataset to a function that will return a list of curve-fit models and their coefficients? The program DataFit does this with about 200 different models, but we are looking for a pythonic way, covering everything from exponentials to inverse polynomials, etc.
I have seen many posts on manually coding each model with scipy, but this is not feasible for the number of models we want to test.
The closest I found was pyeq2, but it does not return the list of functions, and seems to be a rabbit hole to code for.
If R has this available we could use that, but Python is really the goal.
Below is an example of the data; we want to find the best way to describe this curve.
You can try the splines library in R. I have used it for higher-order curve fitting of univariate data. You can tweak it to achieve something similar and compare the corresponding R^2 errors.
You can decide to do one of the following:
Choose a model and fit its parameters. The model should be based on a single independent variable; this can be done with Python's scipy.optimize curve_fit function. You could choose something like a hyperbola (see the sketch below).
Choose a model that is complex and likely represents an underlying mechanism of something at work, like the system of ODEs from a disease SIR model. Fitting the parameters will be no easy task; it is typically done with Markov chain Monte Carlo (MCMC) methods. This is VERY difficult.
Realise that you have data and can use machine learning via scikit-learn to predict from your data. This approach doesn't require a parametric model.
Machine learning and neural networks don't fit an explicit model and can't really tell you about the underlying mechanism, but they can make predictions just as a best-fit model would... dare I say even better.
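To make option 1 concrete, here is a rough sketch of the brute-force approach: keep a catalogue of candidate models and rank the fits by R^2. The catalogue, function names, and synthetic data below are purely illustrative; DataFit's ~200 models would just mean a bigger dictionary.

```python
import numpy as np
from scipy.optimize import curve_fit

# A small, hypothetical catalogue of candidate models
MODELS = {
    "linear":      lambda x, a, b: a * x + b,
    "exponential": lambda x, a, b: a * np.exp(b * x),
    "power":       lambda x, a, b: a * np.power(x, b),
    "hyperbola":   lambda x, a, b: a / x + b,
}

def rank_models(x, y):
    """Fit every candidate and rank by R^2, skipping fits that fail."""
    results = []
    for name, f in MODELS.items():
        try:
            popt, _ = curve_fit(f, x, y, maxfev=10000)
        except RuntimeError:        # no convergence within maxfev
            continue
        ss_res = np.sum((y - f(x, *popt)) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        results.append((1.0 - ss_res / ss_tot, name, popt))
    return sorted(results, key=lambda t: t[0], reverse=True)

# Illustrative data: a noisy exponential
x = np.linspace(1, 10, 50)
y = 3.0 * np.exp(0.2 * x) + np.random.normal(0, 0.1, x.size)
for r2, name, popt in rank_models(x, y):
    print(f"{name:12s} R^2 = {r2:.4f}  params = {popt}")
```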
In the end, we found that Eureqa software was able to achieve this. https://www.nutonian.com/products/eureqa/
I'm a beginner with statsmodels & I'm also open to other Python-based methods of solving my problem:
I have a data set with ~85 features, some of which are highly correlated.
When I run the OLS method I get a helpful 'strong multicollinearity problems' warning as I might expect.
I've previously run this data through Weka, which as part of the regression classifier has an eliminateColinearAttributes option.
How can I do the same thing, i.e. get the model to choose which attributes to use instead of having them all in the model?
Thanks!
To run multivariate regression, note that scipy.stats.linregress won't do (it only handles a single explanatory variable); statsmodels' OLS or numpy.linalg.lstsq is the tool for the multivariate case. Check out this nice example, which has a good explanation.
The eliminateColinearAttributes option in the software you've mentioned is just some algorithm implemented in that software to fight the problem. Here, you need to implement an iterative algorithm yourself: eliminate the highly correlated variable with the highest p-value, run the regression again, and repeat until the multicollinearity is gone (see the sketch below).
There's no single right way here; there are different techniques. It's also good practice to choose manually, from each set of highly correlated variables, which one to omit, so that the choice also makes sense domain-wise.
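A minimal sketch of that elimination loop, assuming pandas inputs; the function name, the alpha cutoff, and p-values as the elimination signal are my choices (VIF-based elimination is a common alternative):

```python
import statsmodels.api as sm

def backward_eliminate(X, y, alpha=0.05):
    """Repeatedly refit OLS and drop the feature with the highest p-value
    until every remaining p-value is below `alpha`.
    X: pandas DataFrame of features, y: target Series."""
    cols = list(X.columns)
    while cols:
        model = sm.OLS(y, sm.add_constant(X[cols])).fit()
        pvals = model.pvalues.drop("const")   # don't eliminate the intercept
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return model, cols                # all survivors are significant
        cols.remove(worst)
    return None, []

# Usage (hypothetical): model, kept = backward_eliminate(X, y)
```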
I am trying to train a Hidden Markov Model (HMM) using the GHMM library. So far, I have been able to train both a discrete model, and a continuous model using a single Gaussian for each of the states.
There are really good examples on how to do it here.
However, I would like to train a continuous HMM with a single covariance matrix tied across all states (instead of having one for each state). Is that possible with GHMM lib? If it is, I would love to see some examples. If not, could somebody point me to some other code, or refer me to another HMM python/c library that can actually do it?
Thank you!
So, I have found this great package in C that has an HMM implementation exactly the way I wanted: the Queen Mary Digital Signal Processing Library. More specifically, the HMM implementation is in these files. No need to use the GHMM lib anymore.