Module object has no attribute leaky_relu - python

I am trying to run the code from here, which is an implementation of Generative Adversarial Networks using Keras in Python. I followed the instructions and installed all the requirements. Then I tried to run the code for DCGAN. However, it seems that there is some compatibility issue between the libraries. I am receiving the following message when I run the code:
AttributeError: 'module' object has no attribute 'leaky_relu'
File "main.py", line 176, in <module>
dcgan = DCGAN()
File "main.py", line 25, in __init__
self.discriminator = self.build_discriminator()
File "main.py", line 84, in build_discriminator
model.add(LeakyReLU(alpha=0.2))
File "/opt/libraries/anaconda2/lib/python2.7/site-packages/keras/models.py", line 492, in add
output_tensor = layer(self.outputs[0])
File "/opt/libraries/anaconda2/lib/python2.7/site-packages/keras/engine/topology.py", line 617, in __call__
output = self.call(inputs, **kwargs)
File "/opt/libraries/anaconda2/lib/python2.7/site-packages/keras/layers/advanced_activations.py", line 46, in call
return K.relu(inputs, alpha=self.alpha)
File "/opt/libraries/anaconda2/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 2918, in relu
x = tf.nn.leaky_relu(x, alpha)
I am using Keras version 2.1.3 with TensorFlow version 1.2.1 and Theano version 1.0.1+40.g757b4d5.
Any idea why am I receiving that issue?
EDIT:
The error is located at line 84, in the build_discriminator function: `model.add(LeakyReLU(alpha=0.2))`

According to this answer, tf.nn.leaky_relu was added to TensorFlow in version 1.4. So you might want to check whether your TensorFlow installation is at least at version 1.4.
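If you want to confirm that locally before upgrading, a minimal check (just a sketch, nothing specific to the GAN code) would be:

import tensorflow as tf

print(tf.__version__)                 # 1.2.1 in your setup
print(hasattr(tf.nn, 'leaky_relu'))   # False before TF 1.4, True from 1.4 on

If it prints False, upgrading TensorFlow to at least 1.4 (for example with pip install --upgrade tensorflow) should make Keras 2.1.3's backend call succeed.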

Related

Problem with executing python machine learning code I found on GitHub

I need some clear instructions on how to execute some code.
Context:
This is a python machine learning peptide binding script, but you don't need to know biology to help me.
I am trying to recreate this scientific paper to test its validity and see whether I can use it. I work in the biotech industry and am only somewhat familiar with C# and Python.
The paper links to a GitHub page, and the GitHub page has some instructions on how to execute the code. But every time I try to execute this code as instructed, it gives me an error. I already installed its requirements (the latest PyTorch, NumPy, and scikit-learn); I also switched between GPU and CPU, but neither worked. I don't know what to do at this point.
Paper Title:
"Prediction of Specific TCR-Peptide Binding From Large Dictionaries of TCR-Peptide Pairs" by Ido Springer, Hanan Besser. etc.
Paper's Github8 (found in the paper's abstract):
https://github.com/louzounlab/ERGO
These are the example commands I entered in the terminal. They were found in a comment at the end of ERGO.py.
GPU ver:
python ERGO.py train lstm mcpas specific cuda:0 --model_file=model.pt --train_data_file=train_data --test_data_file=test_data
GPU code results:
Traceback (most recent call last):
File "D:\D Download\ERGO-master\ERGO.py", line 437, in <module>
main(args)
File "D:\D Download\ERGO-master\ERGO.py", line 141, in main
model, best_auc, best_roc = lstm.train_model(train_batches, test_batches, args.device, arg, params)
File "D:\D Download\ERGO-master\lstm_utils.py", line 163, in train_model
model.to(device)
File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 927, in to
return self._apply(convert)
File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
param_applied = fn(param)
File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 925, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\cuda\__init__.py", line 211, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
CPU ver (only replaced specific cuda:0 with specific cpu):
python ERGO.py train lstm mcpas specific cpu --model_file=model.pt --train_data_file=train_data --test_data_file=test_data
CPU code results:
epoch: 1
C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\functional.py:1960: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
Traceback (most recent call last):
File "D:\D Download\ERGO-master\ERGO.py", line 437, in <module>
main(args)
File "D:\D Download\ERGO-master\ERGO.py", line 141, in main
model, best_auc, best_roc = lstm.train_model(train_batches, test_batches, args.device, arg, params)
File "D:\D Download\ERGO-master\lstm_utils.py", line 173, in train_model
loss = train_epoch(batches, model, loss_function, optimizer, device)
File "D:\D Download\ERGO-master\lstm_utils.py", line 137, in train_epoch
loss = loss_function(probs, batch_signs)
File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\loss.py", line 613, in forward
return F.binary_cross_entropy(input, target, weight=self.weight, reduction=self.reduction)
File "C:\Users\username\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\functional.py", line 3074, in binary_cross_entropy
raise ValueError(
ValueError: Using a target size (torch.Size([50])) that is different to the input size (torch.Size([50, 1])) is deprecated. Please ensure they have the same size.
Looking at the ValueError, it seems that what you're trying to do is deprecated in PyTorch, so you have a more recent version of the package than the one the code was developed against. I suggest you try
pip install torch==1.4.0
on the command line.
I'm not familiar with PyTorch, but managing tensor shapes in TensorFlow is the biggest pain in the a** for me. What actually looks like the problem here is that the input has one more dimension than it should, so you would have to reshape it manually.
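A minimal sketch of that reshape, assuming (from the traceback) that probs comes out of the model with shape (50, 1) while batch_signs has shape (50); the tensor names are taken from lstm_utils.py in the traceback and everything else here is illustrative:

import torch
import torch.nn as nn

# Stand-ins for the tensors named in the traceback: probs is the model output
# for one batch, batch_signs the matching labels.
loss_function = nn.BCELoss()
probs = torch.sigmoid(torch.randn(50, 1))         # shape (50, 1)
batch_signs = torch.randint(0, 2, (50,)).float()  # shape (50,)

# Either drop the trailing dimension from the predictions...
loss = loss_function(probs.squeeze(-1), batch_signs)
# ...or add it to the targets instead; both give matching shapes.
loss = loss_function(probs, batch_signs.unsqueeze(-1))
print(loss.item())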

TypeError: __array__() takes 1 positional argument but 2 were given

I've been doing the pytorch tutorial (https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html) and have been getting this error that I don't know how to fix. The full error is below:
Traceback (most recent call last):
File "main.py", line 146, in <module>
main()
File "main.py", line 138, in main
train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=10)
File "/engine.py", line 26, in train_one_epoch
for images, targets in metric_logger.log_every(data_loader, print_freq, header):
File "/utils.py", line 180, in log_every
for obj in iterable:
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/usr/local/lib/python3.6/dist-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataset.py", line 311, in __getitem__
return self.dataset[self.indices[idx]]
File "main.py", line 64, in __getitem__
img, target = self.transforms(img, target)
File "/transforms.py", line 26, in __call__
image, target = t(image, target)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/transforms.py", line 50, in forward
image = F.to_tensor(image)
File "/usr/local/lib/python3.6/dist-packages/torchvision/transforms/functional.py", line 129, in to_tensor
np.array(pic, mode_to_nptype.get(pic.mode, np.uint8), copy=True)
TypeError: __array__() takes 1 positional argument but 2 were given
I believe it means that somewhere I'm calling an array with 2 arguments, which isn't allowed, but I don't really know whereabouts that is happening - perhaps in one of their pre-written libraries?
I can share the code in full if desired, but thought it's a bit unwieldy. Does anyone know what might be causing this error?
PyTorch has already looked at this issue; it does not seem to be a PyTorch problem.
As xwang233 mentioned in the issue, it can be fixed by downgrading Pillow:
pip install pillow==8.2.0
This issue could be fixed as well by upgrading Pillow from version 8.3.0 to 8.3.1. I had the same issue with
torch==1.9.0+cu111
torchvision==0.10.0+cu111
Pillow==8.3.0
After Pillow was upgraded to 8.3.1 (with no change to torch and torchvision) as below, the issue is gone:
pip install --upgrade pillow
Thanks to DRTorresRuiz for providing the clue about Pillow.
I had the same error when using:
torch==1.9.0
torchvision==0.10.0
In my requirements.txt file I downgraded the torch library, which forced me to downgrade torchvision, and that fixed the error for me. The library versions I ended up using that did not raise the error were:
torch==1.8.1
torchvision==0.9.1
Change your code from:
np.array(pic, np.float32)
to:
np.array(pic).astype('float32')
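For context, a small sketch of why the two-step conversion helps (the affected Pillow release reportedly rejects the extra dtype argument that NumPy forwards to Image.__array__; the image here is just a stand-in):

import numpy as np
from PIL import Image

pic = Image.new("RGB", (4, 4))  # stand-in for the PIL image from the dataset

# np.array(pic, np.float32) makes NumPy call pic.__array__(dtype), which is
# what triggers "takes 1 positional argument but 2 were given" on the broken
# Pillow version. Converting first and casting afterwards avoids that call:
arr = np.array(pic).astype('float32')
print(arr.dtype, arr.shape)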

How to avoid "RuntimeError: error in LoadLibraryA" for torch.cat?

I am running a PyTorch solution for wireframe detection. I am receiving a "RuntimeError: error in LoadLibraryA" when the solution executes "return torch.cat(outputs, 1)" in forward.
I am not able to provide a minimal reproducible example. Therefore the question: is it possible to produce this type of error in a Microsoft library through Python programming errors, or is this most likely a version problem (of Python, PyTorch, CUDA, ...) or a bug in my installation?
I am using Windows 10, Python 3.8.1 and PyTorch 1.4.0.
File "main.py", line 144, in <module>
main()
File "main.py", line 137, in main
trainer.train(train_loader, val_loader=None)
File "D:\Dev\Python\Projects\wireframe\wireframe\junc\trainer\balance_junction_trainer.py", line 75, in train
self.step(epoch, train_loader)
File "D:\Dev\Python\Projects\wireframe\wireframe\junc\trainer\balance_junction_trainer.py", line 176, in step
) = self.model(input_var, junc_conf, junc_res, bin_conf, bin_res)
File "D:\Dev\Python\Environment\Environments\pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "D:\Dev\Python\Projects\wireframe\wireframe\junc\model\inception.py", line 41, in forward
base_feat = self.base_net(im_data)
File "D:\Dev\Python\Environment\Environments\pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "D:\Dev\Python\Projects\wireframe\wireframe\junc\model\networks\inception_v2.py", line 63, in forward
x = self.Mixed_3b(x)
File "D:\Dev\Python\Environment\Environments\pytorch\lib\site-packages\torch\nn\modules\module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "D:\Dev\Python\Projects\wireframe\wireframe\junc\model\networks\inception_v2.py", line 97, in forward
return torch.cat(outputs, 1)
RuntimeError: error in LoadLibraryA
Try this workaround: run the following code after import torch (should be fixed in 1.5):
import ctypes
ctypes.cdll.LoadLibrary('caffe2_nvrtc.dll')
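For example, placed at the top of the entry script (main.py in this project; the placement is just a suggestion, the ordering follows the answer above):

import torch   # import torch first, as suggested above, so the DLL can be located
import ctypes

# Eagerly load the library that otherwise fails inside torch.cat
# (workaround reported as fixed in PyTorch 1.5).
ctypes.cdll.LoadLibrary('caffe2_nvrtc.dll')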
It was possible to avoid this error by downgrading to Python 3.7.6.
Remark: Unfortunately, the first step of the overall processing (run time 3 days on my GPU) creates intermediate results in pickle format 5, which is new in Python 3.8. Therefore, I either have to re-run the first step for 3 days or find another solution. The files with the intermediate results cannot be used with Python 3.7.6.

Keras: IndexError: tuple index out of range when loading custom model

I have an .h5 model that was built with tensorflow==1.13.1 and Keras==2.2.4 on a host to which I don't have access. I'm trying to load that model using keras.models.load_model as follows:
model.py:
from keras.models import load_model
import numpy as np
encoder = load_model('encoder.h5')
encoder.summary()
This throws a stacktrace that points to a source file (implicit_delta.py) I cannot open:
Duhaime:web doug$ python model.py
Using TensorFlow backend.
WARNING: Logging before flag parsing goes to stderr.
W0620 09:18:29.064763 140735739011968 deprecation_wrapper.py:119] From /Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
W0620 09:18:29.130089 140735739011968 deprecation_wrapper.py:119] From /Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
Traceback (most recent call last):
File "model.py", line 8, in <module>
encoder = load_model('../pose-enc-raymond.h5')
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/saving.py", line 225, in _deserialize_model
model = model_from_config(model_config, custom_objects=custom_objects)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/saving.py", line 458, in model_from_config
return deserialize(config, custom_objects=custom_objects)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/layers/__init__.py", line 55, in deserialize
printable_module_name='layer')
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/utils/generic_utils.py", line 145, in deserialize_keras_object
list(custom_objects.items())))
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/network.py", line 1032, in from_config
process_node(layer, node_data)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/network.py", line 991, in process_node
layer(unpack_singleton(input_tensors), **kwargs)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/engine/base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "/Users/doug/anaconda/envs/3.5/lib/python3.5/site-packages/keras/layers/core.py", line 687, in call
return self.function(inputs, **arguments)
File "/home/cshimmin/jupyter/dance/implicit_delta.py", line 95, in <lambda>
IndexError: tuple index out of range
I have tried installing other versions of tensorflow and keras but so far haven't had luck working around this. Is there any trick I can do to figure out how to load this model? Any suggestions or hacks are appreciated!
This thread helped me realize I needed to load the model using the version of python that was used to create the model:
conda create -n 3.7.3 python=3.7.3
conda activate 3.7.3
Then pip install everything and the model will boot!
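As a quick sanity check after recreating the environment (and pinning the versions the question mentions, tensorflow==1.13.1 and Keras==2.2.4), something along these lines confirms everything matches before calling load_model again; it is a sketch, not part of the original answer:

import sys
import tensorflow as tf
import keras

print(sys.version)        # should match the Python used on the host that built the model
print(tf.__version__)     # expected: 1.13.1 per the question
print(keras.__version__)  # expected: 2.2.4 per the question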

tensorflow attributeerror module has no attribute per_image_standardization

I'm trying to go through the tutorial on convolutional neural nets using cifar10. The CNN builds fine (cifar10.py), but when I try to run cifar10_train.py I get the following error:
Traceback (most recent call last):
File "cifar10_train.py", line 115, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "cifar10_train.py", line 111, in main
train()
File "cifar10_train.py", line 58, in train
images, labels = cifar10.distorted_inputs()
File "/home/brennus/workspace/python/cifar/cifar10.py", line 141, in distorted_inputs
batch_size=FLAGS.batch_size)
File "/home/brennus/workspace/python/cifar/cifar10_input.py", line 177, in distorted_inputs
float_image = tf.image.per_image_standardization(distorted_image)
AttributeError: 'module' object has no attribute 'per_image_standardization'
According to https://github.com/tensorflow/tensorflow/blob/master/tensorflow/g3doc/api_docs/python/image.md, there is indeed a per_image_standardization attribute but it looks like my tensorflow doesn't have it. I'm not sure what version I have and not sure where to find it, but I built it from source from the repository so I imagine it's the current one.
I can't find anyone else who is having this problem so I'm stymied. Maybe I have to write my own?
I reinstalled tensorflow and solved the problem. Thanks, all!
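For anyone hitting the same error, a quick check of the installed build (a minimal sketch; older releases exposed this op under the name tf.image.per_image_whitening, which is why an outdated source build lacks per_image_standardization):

import tensorflow as tf

print(tf.__version__)
# True on recent builds; False on older builds that still use the
# per_image_whitening name, in which case reinstalling/upgrading helps.
print(hasattr(tf.image, 'per_image_standardization'))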
