Autoencoder reconstructed images (output) are not as clear as I want [closed] - python

I am using an autoencoder. Is it acceptable if the reconstructed images look like this, given that the input images have lost a lot of quality?
reconstructed image
What should I do to get an output that looks more like the input? I will be using the output images for face recognition.
Should I adjust the epochs, the batch size, or something else?

One of the go-to ways to improve performance is to change the learning rate. You can do this by creating your own optimizer with a different learning rate. The RMSProp optimizer defaults to a learning rate of 0.001. If your images are in [0, 1] then I suggest trying a higher learning rate, maybe 0.1. If they are in [0, 255], maybe 0.0001. Experiment!
Another issue might be that you have too many max pooling layers in the encoder, decimating spatial information. When I use max pooling, I try to keep it at less than 1 pooling layer per 2 convolutional layers. You could replace the max pooling with stride 2 convolutions.
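For illustration, here is a rough Keras sketch of both ideas; the 64x64 grayscale input shape, the layer sizes, and the learning rate are assumptions, not taken from your model:

from tensorflow.keras import layers, models, optimizers

# Encoder: stride-2 convolutions do the downsampling instead of max pooling.
inputs = layers.Input(shape=(64, 64, 1))                                     # assumed input size
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.Conv2D(32, 3, strides=2, padding='same', activation='relu')(x)    # replaces MaxPooling2D
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
encoded = layers.Conv2D(64, 3, strides=2, padding='same', activation='relu')(x)

# Decoder: transposed convolutions upsample back to the input resolution.
x = layers.Conv2DTranspose(64, 3, strides=2, padding='same', activation='relu')(encoded)
x = layers.Conv2DTranspose(32, 3, strides=2, padding='same', activation='relu')(x)
outputs = layers.Conv2D(1, 3, padding='same', activation='sigmoid')(x)       # assumes inputs scaled to [0, 1]

autoencoder = models.Model(inputs, outputs)
# RMSprop defaults to learning_rate=0.001; try larger or smaller values as described above.
autoencoder.compile(optimizer=optimizers.RMSprop(learning_rate=0.01), loss='mse')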

Related

How to insert a pre-trained model inside a model to train? [closed]

I'm trying to build a deep-learning model in Keras. In this model, I want to insert some copies of a "pre-trained unit". This unit corresponds to a Keras model model_1, whose parameters I optimized at an earlier stage.
Now I want to properly define model_2 (to train) such that it includes some copies of model_1. The final layout of model_2 will be something like this, where each box contains a copy of model_1
and the red connections correspond to parameters extracted from pre-training.
Thus, the weights and biases associated with the red connections are known numbers.
I want to train model_2 by keeping fixed these parameters and optimizing the other parameters (i.e. weights and biases associated with the black connections).
I have not found any similar example in the literature. Is it possible to realize such a neural network architecture using Keras? How can I properly define model_2?
Of course, I can manually extract the pre-trained parameters par (as a numpy.array) from model_1 and put them into model_2 using something like model_2.set_weights(par). The problem is that I have a very large number of neurons/parameters, so Python is unable to allocate memory for a numpy.array of that size.
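For what it's worth, one common way to do this in Keras is to call the pre-trained model as a layer and mark it non-trainable, rather than copying weight arrays around manually. Below is a minimal sketch under the assumption that model_1 takes a flat vector of size n_in; the two-copy layout and the Dense head are only illustrative, not the actual architecture from the question:

import tensorflow as tf
from tensorflow.keras import layers, models

def make_frozen_copy(model_1):
    # clone_model copies the architecture; set_weights copies the trained parameters.
    copy = tf.keras.models.clone_model(model_1)
    copy.set_weights(model_1.get_weights())
    copy.trainable = False          # keep the pre-trained ("red") parameters fixed
    return copy

def build_model_2(model_1, n_in):
    # Two frozen copies of model_1 feeding a small trainable head (layout is illustrative).
    inp_a = layers.Input(shape=(n_in,))
    inp_b = layers.Input(shape=(n_in,))
    out_a = make_frozen_copy(model_1)(inp_a)
    out_b = make_frozen_copy(model_1)(inp_b)
    merged = layers.concatenate([out_a, out_b])
    out = layers.Dense(1, activation='sigmoid')(merged)   # trainable "black" connections
    return models.Model([inp_a, inp_b], out)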

How many words are lemmatized? [closed]

In a data frame with 1000 texts, after lemmatization during preprocessing, how can I find out how many words have been lemmatized in each text?
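The question does not say which library was used; one way to count the words that lemmatization actually changed is to compare each token with its lemma. A minimal sketch, assuming spaCy and a pandas DataFrame with a 'text' column (both assumptions, not from the question):

import pandas as pd
import spacy

# Hypothetical setup: a DataFrame with a 'text' column holding the texts.
df = pd.DataFrame({"text": ["The cats were running", "She walks quickly"]})
nlp = spacy.load("en_core_web_sm")

def count_lemmatized(text):
    doc = nlp(text)
    # Count tokens whose lemma differs from the surface form,
    # i.e. words that were actually changed by lemmatization.
    return sum(1 for tok in doc if tok.is_alpha and tok.lemma_.lower() != tok.text.lower())

df["n_lemmatized"] = df["text"].apply(count_lemmatized)
print(df)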
Why did you run your model for just 3 epochs? I would suggest running it for about 20 epochs and then checking whether the validation accuracy is still improving. You also need to change this line of code:
model.add(Embedding(300000, 60, input_length=300))
To this:
model.add(Embedding(k, 60, input_length=300))
Here you can set k to 256 or 512, or a number close to that; 300000 is far too large. With a value that big, the network puts most of its capacity into the embedding layer, when the main work should be done by the encoder and decoder.
Another thing: you should increase the LSTM units (to something like 128 or 256) in both the encoder and the decoder, and remove the recurrent_dropout parameter (since you are already applying dropout with the Dropout layer after the encoder). If that still doesn't help, you can also add Batch Normalization layers to your model.
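Put together, those suggestions might look roughly like this; the exact unit counts, the assumed vocabulary size k, and the output layer are guesses, since the full model is not shown in the question:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dropout, BatchNormalization, Dense

k = 512  # assumed vocabulary size; use your real vocabulary size here
model = Sequential()
model.add(Embedding(k, 60, input_length=300))
model.add(LSTM(256, return_sequences=True))   # encoder: more units, no recurrent_dropout
model.add(Dropout(0.3))                       # regular dropout after the encoder instead
model.add(BatchNormalization())               # optional, if training is still unstable
model.add(LSTM(256))                          # decoder
model.add(Dense(1, activation='sigmoid'))     # output layer is a placeholder guess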

Analysis of loss curve : raw data VS normalized data (Machine learning / Keras) [closed]

I'm trying to configure my CNN, and for this I need to analyze the loss function results. I'm working with a VGG-16 (CNN). The input data are grayscale images, so each pixel has a value in [0, 255].
For the first part, I subtracted the mean: pixel - 127, so each image has a range of [-127, 128]. Here are the loss/accuracy results with this configuration:
In this case there is some noise at the beginning (epochs 0 to 25), so I thought it could be resolved by normalizing the data.
So I changed each pixel with (pixel - 127)/128 to normalize in a simple way first. Here are the curves for the same configuration:
The noise disappears, but now the training curve has a behaviour I have never seen before. Could someone tell me whether this behaviour is usual and why? I would also like to know a good way to analyze these kinds of curves.
Thank you.
It looks as if you are reaching convergence very fast and then jumping out of your minima. Try lowering the learning rate, putting decay on your learning rate, or using early stopping.
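In Keras, those three suggestions could look roughly like this; model, x_train, y_train, x_val and y_val are assumed to exist already, and the loss and the numeric values are illustrative, not tuned:

from tensorflow.keras import callbacks, optimizers

opt = optimizers.SGD(learning_rate=1e-3)   # lower learning rate than before
reduce_lr = callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=5)   # decay when the validation loss stalls
early_stop = callbacks.EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)

model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, callbacks=[reduce_lr, early_stop])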
It might also be interesting to do K-fold cross-validation. It could be that your training set has 'hard samples' that are not in your test set and are creating those spikes.
Hope it helps.

How to use a convolutional neural network for non-image data [closed]

I have searched a lot, but I cannot find anything that shows how to apply a convolutional neural network to a continuous dataset such as breastCancer. All the documents are about image or text classification.
Can you please help me construct the neural network?
CNNs are useful for datasets where the features have strong temporal or spatial correlation. For instance, in the case of images, the value of a pixel is highly correlated to the neighboring pixels. If you randomly permute the pixels, then this correlation goes away, and convolution no longer makes sense.
For the breast cancer dataset, you have only 10 attributes which are not spatially correlated in this way. Unlike the previous image example, you can randomly permute these 10 features and no information is lost. Therefore, CNNs are not directly useful for this problem domain.

Understanding scikit neural network parameters [closed]

I've been trying to train a neural network to recognise the three types of tags I have images of (circle, rectangle and blank). I used the example setup for recognising the digits dataset provided here and found that I got a 75% correct prediction rate with barely any tweaking (provided my images had a certain level of preprocessing with filters etc.).
What I'm interested in understanding more about is the classifier section (code below). I'm not sure what the different convolution and layer options do and what options I have for tweaking them. Does anyone have any advice on other convolutions or layers I could use to try to improve my prediction accuracy, and what they mean? Apologies for being vague; this is the first time I've touched a NN and I'm struggling to get my head around it.
nn = Classifier(
    layers=[
        # 'Rectifier' is a ReLU activation; channels is the number of filters,
        # kernel_shape the filter size, and border_mode the padding
        # ('full' keeps the whole convolution output, 'valid' only positions that fit entirely).
        Convolution('Rectifier', channels=12, kernel_shape=(3, 3), border_mode='full'),
        Convolution('Rectifier', channels=8, kernel_shape=(3, 3), border_mode='valid'),
        # Fully connected layer with 64 ReLU units.
        Layer('Rectifier', units=64),
        # Softmax output layer, one unit per class.
        Layer('Softmax')],
    learning_rate=0.002,  # gradient-descent step size
    valid_size=0.2,       # fraction of the training data held out for validation
    n_stable=10,          # stop when the validation error hasn't improved for 10 epochs
    verbose=True)
I would recommend the excellent video course by Hugo Larochelle on YouTube. The 9th chapter is about convolutional networks and explains all the parameters. You might start with the first two chapters; they explain how neural networks work in general, and you will get used to terms like softmax and rectifier.
Another good resource: Andrej Karpathy's lecture notes
