Several days ago, I used sklearn's multilayer perceptron module for predictions.
Now I am trying to change the cost function used by the neural network, in the hope of making the predictions more accurate. I have added the new cost function to '_base.py' and also changed some code in 'multilayer_perceptron.py'. However, when I try to import the package and the module, I get a 'No module named ...' error. I have tried several things to solve this, like checking the '__init__.py' file and checking 'PYTHONPATH', but they don't work.
So, could you please give me some guidance on how to change the cost function? I would appreciate it, and thank you so much.
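(For what it's worth, here is a hedged sketch of an alternative that avoids editing the installed files, and hence the import problem: registering a custom loss at runtime in scikit-learn's private LOSS_FUNCTIONS dict in sklearn/neural_network/_base.py. This is an internal API that can change between versions, and the "huber" name and function below are purely illustrative.)

    # Hedged sketch: register a custom loss at runtime instead of editing
    # the installed sklearn sources. Relies on the private module
    # sklearn.neural_network._base and its LOSS_FUNCTIONS dict, so it may
    # break across versions. The "huber" loss here is illustrative.
    import numpy as np
    from sklearn.neural_network import _base

    def huber_loss(y_true, y_pred, delta=1.0):
        # Like the built-in losses in _base.py, return a single float.
        err = np.abs(y_true - y_pred)
        quad = np.minimum(err, delta)
        return np.mean(0.5 * quad ** 2 + delta * (err - quad))

    _base.LOSS_FUNCTIONS["huber"] = huber_loss

As far as I can tell, the output-layer gradient in 'multilayer_perceptron.py' is computed separately from this dict, so a registered loss only changes the value being reported; the gradient code must be adapted to match, which is presumably what your edits to that file attempt.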
When machine learning is viewed mathematically, we have cost functions to reduce the prediction error, and we keep optimizing the parameters of the equation(s) used in the particular algorithm.
I wonder where this optimization happens in the scikit-learn library.
As far as I know, there is no single function for doing this job; rather, there are a bunch of algorithms exposed as functions.
Can someone please tell me how to optimize those parameters in scikit-learn? Is there a way to do it in the library, or is it just for learning purposes?
I looked at the library's logistic regression code but got nothing out of it.
Any effort is appreciated.
I got it.
GridSearchCV is the answer; that's what I was looking for.
I think it lets us choose the values of alpha, C, and the number of iterations, so it doesn't allow altering the weights directly, and I think that's fine: that's how we'd assign values to those parameters after carrying out the same process independently.
This article helped me understand it well.
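For anyone landing here, a minimal sketch of the kind of GridSearchCV usage meant above (the dataset and parameter grid are just illustrative):

    # Minimal GridSearchCV sketch: cross-validated search over C and the
    # iteration budget of LogisticRegression. Dataset and grid values are
    # illustrative.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    X, y = load_iris(return_X_y=True)
    param_grid = {"C": [0.01, 0.1, 1, 10], "max_iter": [100, 500, 1000]}
    search = GridSearchCV(LogisticRegression(), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)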
I am using the Estimator API in TensorFlow 1.8 with Python 3.6 to build up a neural network for my reinforcement learning project. I noticed that every time you call estimator.predict(), TensorFlow loads the checkpoint under model_dir. This is extremely inefficient if you have to call the function many times for the same checkpoint; for example, in reinforcement learning I may need to predict the next action based on the current state, and the next state is realized only after a specific action is chosen, so it is commonplace to call this function thousands of times.
So my question is: how do I call this function without loading the (same) checkpoint every time?
Thank you.
Well, I think I've just found a good answer to my own question. A good solution to this problem is to construct a tf.data.Dataset from a generator. The link is here.
The generator keeps your estimator.predict call open, so you won't need to keep loading the checkpoint. The only thing you need to do is to change the yielded object in this FastPredict object (self.next_feature in this case) when necessary.
However, I should mention: if your ultimate goal is to turn the whole thing into a service or something like that, you may need something like TensorFlow Serving, so I suggest you go that way directly. I wasted a lot of time in the process, so I hope this answer helps you save yours.
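A hedged sketch of the pattern (TF 1.x Estimator API; the FastPredict class and next_feature attribute follow the linked gist's idea but are illustrative here): the input_fn yields from a generator that never exhausts, so the predict() iterator stays open and the checkpoint is loaded only on the first call.

    # Hedged sketch of the generator trick. The generator never ends, so
    # estimator.predict() stays open and the checkpoint is loaded once.
    import tensorflow as tf

    class FastPredict:
        def __init__(self, estimator):
            self.estimator = estimator
            self.next_feature = None
            self.predictions = None  # the open predict() iterator

        def _generator(self):
            while True:
                yield self.next_feature  # re-read before every next()

        def _input_fn(self):
            ds = tf.data.Dataset.from_generator(
                self._generator, output_types={"state": tf.float32})
            return ds.batch(1)

        def predict(self, feature_dict):
            # feature_dict: e.g. {"state": np.array(..., dtype=np.float32)}
            self.next_feature = feature_dict
            if self.predictions is None:
                self.predictions = self.estimator.predict(input_fn=self._input_fn)
            return next(self.predictions)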
Maybe this is a stupid question, but I recently switched from basic TensorFlow to tflearn, and while I knew little of TensorFlow, I know even less of tflearn, as I have just begun to experiment with it. I was able to create a network, train it, and generate a model that achieved a satisfactory metric. I did all of this without using a TensorFlow session, because (a) none of the documentation I was looking at suggested it and (b) I didn't even think to use one.
However, I would like to predict a value for a single input (the model performs regression on images, so I'm trying to get a value for a single image), and now I'm getting an error that the convolutional layers need to be initialized (specifically, "FailedPreconditionError: Attempting to use uninitialized value Conv2D/W").
The only things I've added, though, are these two lines:

    model = Evaluator(network)
    model.predict(feed_dict={input_placeholder: image_data})
I'm asking this as a general question because my actual code is a bit troublesome to post here; admittedly, I've been very sloppy in writing it. I will mention, however, that even if I start a session and initialize all variables before that second line, then run the line inside the session, I get the same error.
Succinctly put: does tflearn require a session if I haven't used TensorFlow directly anywhere in my code? If so, does the model need to be trained in that session? And if not, what about those two lines would cause such an error?
I'm hoping it isn't necessary to post more code, but if this isn't a general issue and is actually specific to my code, then I can try to format it to be understandable here and edit the post.
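(For reference, a hedged sketch of the session-free flow tflearn seems to intend, where tflearn.DNN manages its own session and restores trained weights; the model path and the network/image_data variables are placeholders for your own:)

    # Hedged sketch: tflearn.DNN wraps the network in its own session, so
    # no manual session or variable initialization is needed. The model
    # path "my_model.tflearn" is a placeholder.
    import tflearn

    model = tflearn.DNN(network)         # network: the layers built earlier
    model.load("my_model.tflearn")       # restore the trained weights
    value = model.predict([image_data])  # predict for one image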
I'm trying to write something similar to Google's wide and deep learning model after running into difficulties doing multi-class classification (12 classes) with the sklearn API. I've tried to follow the advice in a couple of posts (linked below) and used tf.group(logistic_regression_optimizer, deep_model_optimizer). It seems to work, but I was trying to figure out how to get predictions out of this model. I'm hoping that with the tf.group operator the model learns to weight the logistic and deep models differently, but I don't know how to get those weights out so I can find the right combination of the two models' predictions. Thanks in advance for any help.
https://groups.google.com/a/tensorflow.org/forum/#!topic/discuss/Cs0R75AGi8A
How to set layer-wise learning rate in Tensorflow?
tf.group() creates a node that forces a list of other nodes to run using control dependencies. It's really just a handy way to package up logic that says "run this set of nodes, and I don't care about their output". In the discussion you point to, it's just a convenient way to create a single train_op from a pair of training operators.
If you're interested in the value of a Tensor (e.g., weights), you should pass it to session.run() explicitly, either in the same call as the training step, or in a separate session.run() invocation. You can pass a list of values to session.run(), for example, your tf.group() expression, as well as a Tensor whose value you would like to compute.
Hope that helps!
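A runnable TF 1.x sketch of both points (the two toy variables and optimizers below are illustrative stand-ins for the wide and deep parts):

    # Hedged sketch: group two optimizers into one train op, then fetch the
    # op and the weights in a single session.run() call. The toy variables
    # stand in for the wide and deep model parameters.
    import tensorflow as tf

    w_wide = tf.Variable(1.0)
    w_deep = tf.Variable(2.0)
    loss = tf.square(w_wide - 3.0) + tf.square(w_deep + 1.0)
    wide_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss, var_list=[w_wide])
    deep_op = tf.train.AdamOptimizer(0.01).minimize(loss, var_list=[w_deep])
    train_op = tf.group(wide_op, deep_op)  # "run both, ignore outputs"

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # One call: take a training step and read the current weights.
        _, wide_val, deep_val = sess.run([train_op, w_wide, w_deep])
        print(wide_val, deep_val)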
I'd like to have two separate networks running in Theano at the same time, where the first network trains on the results of the second. I could embed both networks in the same structure, but that would be a real mess in the entire forward pass (and probably wouldn't even work, because of the shared variables, etc.).
The problem is that when I define a Theano function I don't specify the model it applies to, meaning that if I have a predict function and a train function, they'll both work on the first model I define.
Is there a way to overcome that issue?
In a rather simplified way, I've managed to find a nice solution. The trick was to create one model and define its function, then create the other model and define the second function. Works like a charm.
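A hedged Theano sketch of that ordering (build_model is a hypothetical helper; the point is that each theano.function is compiled right after its model is built, so it binds to that model's graph):

    # Hedged sketch: build model 1 and compile its function, then build
    # model 2 and compile its function. build_model is a hypothetical
    # helper returning an input variable and an output expression.
    import numpy as np
    import theano
    import theano.tensor as T

    def build_model(n_in, n_out, rng):
        x = T.matrix("x")
        W = theano.shared(rng.randn(n_in, n_out), name="W")
        b = theano.shared(np.zeros(n_out), name="b")
        return x, T.nnet.softmax(T.dot(x, W) + b)

    rng = np.random.RandomState(0)

    x1, y1 = build_model(4, 3, rng)
    predict1 = theano.function([x1], y1)  # bound to model 1's graph

    x2, y2 = build_model(3, 2, rng)
    predict2 = theano.function([x2], y2)  # bound to model 2's graph

    # The second network consumes the first network's predictions.
    out = predict2(predict1(np.ones((4, 4))))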