neural networks in pytorch [closed] - python

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I am using PyTorch and need to implement this as part of a neural network. Is there a particular way to code the layers shown in purple?
def forward(self, images: torch.Tensor) -> torch.Tensor:
    # pass the input through the first fully connected layer
    x = self.fc1(images)
    return x

Combining all my comments into the answer.
To split the output vector from the first layer, which has shape [batch_size, 4608], you can use the torch.split function as follows:
batch = torch.zeros((10, 4608))
sub_batch_1, sub_batch_2 = torch.split(batch, 2304, dim=1)
print(sub_batch_1.shape, sub_batch_2.shape)
This prints the shapes of the two resulting tensors:
torch.Size([10, 2304]) torch.Size([10, 2304])
I am not sure what MAX logic you need. If it is about taking the elementwise maximum of the two tensors obtained from the split, then the torch.maximum function can be used to do so.
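Assuming the elementwise-maximum interpretation is the right one, the split-then-max step can be sketched like this:

```python
import torch

batch = torch.zeros((10, 4608))
# split the 4608-wide output into two 2304-wide halves
sub_batch_1, sub_batch_2 = torch.split(batch, 2304, dim=1)
# elementwise maximum of the two halves; shape stays [10, 2304]
merged = torch.maximum(sub_batch_1, sub_batch_2)
print(merged.shape)  # torch.Size([10, 2304])
```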

Related

Tensorflow - What is the reason of difference between train size and trained values [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I have an Excel file with 1000 different values, and I am trying to train my model with them. With a test size of 0.33, the model should be trained on 670 values, but only 21 values are trained. What is the source of the problem?
You probably mean the number of batches trained per epoch by fit. Every batch comprises 32 items by default, so 670 training samples give 21 steps per epoch, which is the number the progress bar shows.
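The arithmetic, as a quick check (32 is Keras's default batch_size in model.fit):

```python
import math

total = 1000
test_size = 0.33
train_samples = int(round(total * (1 - test_size)))  # 670
batch_size = 32  # Keras default in model.fit

steps_per_epoch = math.ceil(train_samples / batch_size)
print(steps_per_epoch)  # 21 -- the number shown in the training log
```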

How many words are lemmatized? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
In a data frame with 1000 texts, after doing lemmatization during preprocessing, how can I find out how many words were lemmatized in each text?
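One way to count this, assuming you kept both the raw and the lemmatized token lists for each text (the function name here is illustrative):

```python
def count_lemmatized(original_tokens, lemmatized_tokens):
    """Number of tokens the lemmatizer actually changed."""
    return sum(o != l for o, l in zip(original_tokens, lemmatized_tokens))

# two of three tokens were changed by lemmatization
print(count_lemmatized(["the", "cats", "ran"], ["the", "cat", "run"]))  # 2
```

Applied per row of the data frame, this gives a per-text count of lemmatized words.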
Why did you run your model for just 3 epochs? I would suggest running it for about 20 epochs and then checking whether the validation accuracy has stopped improving. One thing I can tell you is that you need to change this line of code:
model.add(Embedding(300000, 60, input_length=300))
To this:
model.add(Embedding(k, 60, input_length=300))
where you can set k to 256 or 512 or a number close to them; 300000 would be far too much. With a vocabulary that large, your network would spend most of its capacity on the embedding layer, when the main job belongs to the encoder and decoder.
Another thing: you should increase your LSTM units (maybe to 128 or 256) in both the encoder and decoder, and remove the recurrent_dropout parameter (since you are already dropping out using the Dropout layer after the encoder). If that still doesn't help, you can even add Batch Normalization layers to your model.
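The suggested changes can be sketched as follows; the question uses Keras, but here they are expressed in PyTorch (the document's other framework), with all sizes illustrative:

```python
import torch
import torch.nn as nn

vocab_size = 512           # the suggested "k", far smaller than 300000
emb_dim, hidden = 60, 128  # wider LSTM than before

embed = nn.Embedding(vocab_size, emb_dim)
lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
drop = nn.Dropout(0.2)     # a separate Dropout layer instead of recurrent_dropout

tokens = torch.randint(0, vocab_size, (4, 300))  # batch of 4 sequences, length 300
out, _ = lstm(embed(tokens))
out = drop(out)
print(out.shape)  # torch.Size([4, 300, 128])
```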

autoencoder reconstructed image (output) are not clear as i want [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I am using an autoencoder. Is it okay if the reconstructed image looks like this? The input image has lost a lot of quality.
reconstructed image
What should I do to get an output that looks more like the input? I will use the output image for face recognition.
Should I adjust the epochs, batch size, etc.?
One of the go-to ways to improve performance is to change the learning rate. You can do this by creating your own optimizer with a different learning rate. The RMSProp optimizer defaults to a learning rate of 0.001. If your images are in [0, 1] then I suggest trying a higher learning rate, maybe 0.1. If they are in [0, 255], maybe 0.0001. Experiment!
Another issue might be that you have too many max pooling layers in the encoder, decimating spatial information. When I use max pooling, I try to keep it at less than 1 pooling layer per 2 convolutional layers. You could replace the max pooling with stride 2 convolutions.
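The pooling-replacement idea can be sketched in PyTorch (channel counts are illustrative): both paths below halve the spatial size, but the stride-2 convolution learns its own downsampling instead of discarding values.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 16, 32, 32)  # one 32x32 feature map with 16 channels

# (a) convolution followed by max pooling
pool_path = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.MaxPool2d(2))
# (b) a single stride-2 convolution
stride_path = nn.Conv2d(16, 32, 3, stride=2, padding=1)

print(pool_path(x).shape)    # torch.Size([1, 32, 16, 16])
print(stride_path(x).shape)  # torch.Size([1, 32, 16, 16])
```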

how to use convolutional neural network for non-image data [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have searched a lot, but I cannot find anything showing how to handle a continuous dataset such as breastCancer; all the documents are about image or text classification.
Can you please help me construct a neural network?
CNNs are useful for datasets where the features have strong temporal or spatial correlation. For instance, in the case of images, the value of a pixel is highly correlated to the neighboring pixels. If you randomly permute the pixels, then this correlation goes away, and convolution no longer makes sense.
For the breast cancer dataset, you have only 10 attributes which are not spatially correlated in this way. Unlike the previous image example, you can randomly permute these 10 features and no information is lost. Therefore, CNNs are not directly useful for this problem domain.
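Since the 10 attributes have no spatial order, a plain fully connected network is the usual choice instead; a minimal PyTorch sketch (layer widths are illustrative):

```python
import torch
import torch.nn as nn

# small MLP for 10 unordered tabular features
model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 2),  # two output classes, e.g. benign / malignant
)

x = torch.randn(8, 10)   # a batch of 8 samples with 10 features each
print(model(x).shape)    # torch.Size([8, 2])
```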

How to create bounding boxes around the ROIs using keras [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 5 years ago.
I have two questions regarding Keras and convolutional neural networks:
1. How to create bounding boxes using Keras and a convolutional neural network?
2. How to retrain the Keras VGG16 application?
