TensorFlow session stops when trying to print values - Python

I have a problem with the TensorFlow session or with Python.
Every time I try to print some tensor values to check that the network works well,
the program stops with this error:
" Process finished with exit code -1073741819 (0xC0000005) "
I am using Python 3.5 and tensorflow-gpu 1.14.0.
I also tried very simple code like the one below, and the process stopped without printing any values.
I can't find out what's wrong.
Can anyone help me? Thanks.
import tensorflow as tf

input = tf.constant([2, 3, 4])
with tf.compat.v1.Session() as sess:
    print(sess.run(input))
I also see this message, which I have never seen before. Is this the problem?
" GPU libraries are statically linked, skip dlopen check."

I solved the problem by creating a new PyCharm project.
As for the GPU message,
"GPU libraries are statically linked, skip dlopen check."
→ I downgraded tensorflow-gpu to 1.13.1 because 1.14.0 may not be compatible with CUDA 10.0.
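If it helps, a quick sanity check after the downgrade (a sketch assuming a TF 1.x install with GPU support; not part of the original answer) is to print the installed version and whether TensorFlow can actually see the GPU:

import tensorflow as tf

print(tf.__version__)                 # expect 1.13.1 after the downgrade
print(tf.test.is_built_with_cuda())   # True if this wheel was built against CUDA
print(tf.test.is_gpu_available())     # True if a GPU is actually usable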

Related

VS Code fails to resolve tensorflow functions in the editor

I have code like this:
import tensorflow as tf
mapp = tf.nest.map_structure(lambda x: x.reshape([-1] + list(x.shape[2:])), data)
No problem happens when running this code, but in the VS Code editor the resolved result for nest and map_structure is Incomplete, and autocomplete also fails for tensorflow.
Additionally, it shows the same problem for all other tensorflow functions. IntelliSense for tensorflow only offers the data types and the Tensor and Variable classes, etc.
But for other modules like numpy, everything works fine. And in Jupyter, in the same Python environment, it works fine even for tensorflow.
So I think the problem is in VS Code, but I have no idea where exactly.
Any idea about this problem is appreciated!
Version
vscode : 1.75.1
python : 3.8.10
tensorflow : 2.11
More Info
I checked the installed extensions and the resolve results for tensorflow and numpy (screenshots omitted).
I have tried many times and found a possible solution.
This may be because your Python language server is not set, so Pylance was not enabled.
You can add the following setting to your settings.json to enable it:
"python.languageServer": "Pylance",

Error when running code using pickle load

I'm writing my code exactly the same as in https://www.youtube.com/watch?v=y1ZrOs9s2QA&t=4124s at minute 1:11:09.
I wrote this code:
import pickle

pickle_in = open("venv/model_trained.p", "rb")
model = pickle.load(pickle_in)
It showed an error like this when I tried to run it.
Error that I got after running the code (screenshot omitted).
Is someone having the same issue as me?
Thank you.
Best Regards,
Bhetrand
Never mind, I solved it. The problem was the Python version, which doesn't support TensorFlow 2.0.0. I guess pickle works fine with TensorFlow 2.0.0, but you need Python 3.5-3.7 to run TensorFlow 2.0.0.
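A quick way to confirm the environment (a sketch, not from the original answer) is to print both versions before loading the pickle:

import sys
import tensorflow as tf

print(sys.version)      # should be 3.5-3.7 for TensorFlow 2.0.0
print(tf.__version__)   # should match the version the model was trained and saved with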

How to disable TensorFlow GPU?

I first wrote my TensorFlow code in Python on my GPU using:
import tensorflow-gpu as tf
I used it for training and everything went very well. But now I want to deploy my Python scripts on a device without a GPU. So I uninstalled tensorflow-gpu with pip and imported the normal TensorFlow:
import tensorflow as tf
But when I run the script, it is still using the GPU:
I tried this code:
try:
    # Disable all GPUs
    tf.config.set_visible_devices([], 'GPU')
    visible_devices = tf.config.get_visible_devices()
    print(visible_devices)
    for device in visible_devices:
        assert device.device_type != 'GPU'
except Exception as e:
    # Invalid device or cannot modify virtual devices once initialized.
    print(e)
But it is still not working (even though the GPU seems disabled, as you can see in white in the screenshot).
I just want to return to the default TensorFlow installation without GPU features.
I tried to uninstall and reinstall tensorflow, and to remove the virtual environment and create a new one; nothing worked.
Thanks for your help!
TensorFlow 2.x and above has GPU support by default. To know more, please visit the TensorFlow site.
As per the above screenshot, it is showing only the CPU.
Also note the log line Adding visible GPU devices: 0.
If you still want a CPU-only TensorFlow, use tensorflow==1.15.
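Another common way to force CPU-only execution (a standard workaround, not mentioned in the answer above) is to hide the GPUs from CUDA before TensorFlow is imported:

# Must run before the first "import tensorflow"
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'

import tensorflow as tf
print(tf.config.get_visible_devices('GPU'))   # expect an empty list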

tensorflow cuDNN compatibility

I'm using Keras to build a model.
While compiling, my model doesn't work and an error message pops up: tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
My computer's specs are as follows:
GPU : RTX2070,
Tensorflow version : 1.13.1,
Python version : 3.6.5,
CUDA : 10.0,
cuDNN : 7.4.2
I tried cuDNN 7.5.0 and this link: cannot train Keras convolution network on GPU, but changing the cuDNN version doesn't work for me.
So, I tried this code:
>>>import tensorflow as tf
>>>a = tf.constant([1])
>>>b = tf.constant([2])
>>>sess = tf.Session()
>>>with tf.device('/gpu:0'):
... print(sess.run(a+b))
...
[3]
It works! Does anyone know why I'm having this problem?
This issue might be of help: https://github.com/tensorflow/tensorflow/issues/24828
Try checking which versions of cuDNN and TensorFlow you have.
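On newer TensorFlow releases (roughly 2.3 and later, so not the 1.13.1 used here), the wheel itself reports the CUDA/cuDNN versions it was built against; a quick check might look like this:

import tensorflow as tf

# Prints the build metadata for this wheel, including the CUDA/cuDNN versions on GPU builds
print(tf.sysconfig.get_build_info())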
I solved this problem with conda install tensorflow-gpu. It automatically installed cuDNN 7.3.1, and the problem was solved.

Python keeps crashing whenever a CNN model is being trained

I am really looking for your help.
I have a GTX 1070, which has 8 GB of VRAM.
I installed tensorflow-gpu, CUDA 9.0, and cuDNN 7.0 for CUDA 9.0.
Everything works fine with DNNs, and the GPU is also working fine.
But whenever I try to train any model that works with images, it crashes.
Currently I am working with Keras' pre-trained VGG16.
I tried using a smaller batch size and resized the images down to 64x64.
When I watch the process, GPU usage is at 0%, then spikes up to 100%, and then it crashes.
Spyder says "kernel died, restarting".
Is the GTX 1070 really that short on memory, or am I missing something?
Thanks for reading
The first thing I would try is to download cuDNN 7.1.
These are good instructions to follow, and you may consider reinstalling CUDA 9. I had to do the same at one point; it was frustrating, but I haven't had a problem since I got it right.
Installation Instructions
I had a similar crashing problem before. The cause was a version mismatch between my cuDNN 7.1 and tensorflow-gpu (precompiled against cuDNN 7.0.5). Once that was taken care of, there were no more problems.
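If it helps narrow things down, a tiny convolution run in isolation (a sketch assuming a TF 1.x GPU install; not part of either answer) will fail the same way if the cuDNN/TensorFlow pairing is broken, without involving Keras or VGG16:

# Minimal cuDNN smoke test: a single small conv2d on the GPU
import numpy as np
import tensorflow as tf

with tf.device('/gpu:0'):
    images = tf.constant(np.random.rand(1, 64, 64, 3), dtype=tf.float32)
    kernel = tf.constant(np.random.rand(3, 3, 3, 8), dtype=tf.float32)
    conv = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding='SAME')

with tf.Session() as sess:
    print(sess.run(conv).shape)   # expect (1, 64, 64, 8) if cuDNN initializes correctly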
