I've got a Poetry project. My environment is Conda 22.9.0 on a Windows machine with Poetry version 1.2.2.
This is my pyproject.toml file:
[tool.poetry]
name = "myproject"
version = "0.1.0"
description = ""
[tool.poetry.dependencies]
# REVIEW DEPENDENCIES
python = ">=3.7,<3.11"
numpy = "*"
tensorflow = "^2.8"
[build-system]
requires = ["poetry>=0.12"]
build-backend = "poetry.masonry.api"
[tool.poetry.scripts]
start = "myproject.main:start"
The myproject\main.py module contains:
import tensorflow as tf
def start():
    if tf.test.is_gpu_available():
        print("TensorFlow is using a GPU.")
    else:
        print("TensorFlow is NOT using a GPU.")
If I do poetry install, it seems to work fine:
Creating virtualenv myproject in D:\Projects\myproject\dev\myproject-series-forecast\.venv
Updating dependencies
Resolving dependencies...
Writing lock file
Package operations: 41 installs, 0 updates, 0 removals
• Installing certifi (2022.12.7)
• Installing charset-normalizer (2.1.1)
• Installing idna (3.4)
• Installing pyasn1 (0.4.8)
• Installing urllib3 (1.26.13)
• Installing cachetools (5.2.0)
• Installing oauthlib (3.2.2)
• Installing rsa (4.9)
• Installing six (1.16.0)
• Installing zipp (3.11.0)
• Installing requests (2.28.1)
• Installing pyasn1-modules (0.2.8)
• Installing google-auth (2.15.0)
• Installing importlib-metadata (5.2.0)
• Installing requests-oauthlib (1.3.1)
• Installing markupsafe (2.1.1)
• Installing absl-py (1.3.0)
• Installing grpcio (1.51.1)
• Installing numpy (1.21.6)
• Installing tensorboard-data-server (0.6.1)
• Installing markdown (3.4.1)
• Installing tensorboard-plugin-wit (1.8.1)
• Installing protobuf (3.19.6)
• Installing werkzeug (2.2.2)
• Installing google-auth-oauthlib (0.4.6)
• Installing astunparse (1.6.3)
• Installing flatbuffers (22.12.6)
• Installing gast (0.4.0)
• Installing google-pasta (0.2.0)
• Installing h5py (3.7.0)
• Installing keras (2.11.0)
• Installing tensorflow-estimator (2.11.0)
• Installing packaging (22.0)
• Installing opt-einsum (3.3.0)
• Installing libclang (14.0.6)
• Installing tensorboard (2.11.0)
• Installing tensorflow-io-gcs-filesystem (0.29.0)
• Installing termcolor (2.1.1)
• Installing typing-extensions (4.4.0)
• Installing wrapt (1.14.1)
• Installing tensorflow (2.11.0)
Installing the current project: myproject (0.1.0)
But when I execute poetry run start, I get an import error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "D:\Python\Anaconda3\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "D:\Projects\myproject\dev\myproject-series-forecast\myproject\main.py", line 3, in <module>
import tensorflow as tf
ModuleNotFoundError: No module named 'tensorflow'
I have a problem installing the PyQt5 Python package. I am in a Yocto Linux environment (Hardknott, kernel 5.10.35) on a Variscite board (DART-MX8M-PLUS). This is the log when I try to install it with pip:
root@imx8mp-var-dart:~# pip install pyqt5
Collecting pyqt5
Using cached PyQt5-5.15.6.tar.gz (3.2 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [29 lines of output]
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 156, in prepare_metadata_for_build_wheel
hook = backend.prepare_metadata_for_build_wheel
AttributeError: module 'sipbuild.api' has no attribute 'prepare_metadata_for_build_wheel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
main()
File "/usr/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/usr/lib/python3.9/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 160, in prepare_metadata_for_build_wheel
whl_basename = backend.build_wheel(metadata_directory, config_settings)
File "/tmp/pip-build-env-j1_dul47/overlay/lib/python3.9/site-packages/sipbuild/api.py", line 51, in build_wheel
project = AbstractProject.bootstrap('pep517')
File "/tmp/pip-build-env-j1_dul47/overlay/lib/python3.9/site-packages/sipbuild/abstract_project.py", line 83, in bootstrap
project.setup(pyproject, tool, tool_description)
File "/tmp/pip-build-env-j1_dul47/overlay/lib/python3.9/site-packages/sipbuild/project.py", line 594, in setup
self.apply_user_defaults(tool)
File "/tmp/pip-install-w8dpcxmz/pyqt5_db6cfa3b68b641d3a6209736257b28c5/project.py", line 63, in apply_user_defaults
super().apply_user_defaults(tool)
File "/tmp/pip-build-env-j1_dul47/overlay/lib/python3.9/site-packages/pyqtbuild/project.py", line 70, in apply_user_defaults
super().apply_user_defaults(tool)
File "/tmp/pip-build-env-j1_dul47/overlay/lib/python3.9/site-packages/sipbuild/project.py", line 241, in apply_user_defaults
self.builder.apply_user_defaults(tool)
File "/tmp/pip-build-env-j1_dul47/overlay/lib/python3.9/site-packages/pyqtbuild/builder.py", line 67, in apply_user_defaults
raise PyProjectOptionException('qmake',
sipbuild.pyproject.PyProjectOptionException
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
root@imx8mp-var-dart:~#
For reference, this is the list of currently installed Python packages:
root@imx8mp-var-dart:~# pip list
Package Version
----------------- --------------
attrs 20.3.0
btrfsutil 5.10.1
cycler 0.11.0
decorator 5.0.7
deepview-rt 2.4.25
fonttools 4.29.1
future 0.18.2
gpg 1.15.1-unknown
iniparse 0.4
kiwisolver 1.3.2
libcomps 0.1.15
matplotlib 3.5.1
mne 0.24.1
numpy 1.20.1
packaging 21.3
Pillow 8.2.0
pip 22.0.3
psutil 5.8.0
pyarmnn 24.0.0
pycairo 1.20.0
PyGObject 3.38.0
pyparsing 3.0.7
python-dateutil 2.8.2
scipy 1.8.0
setuptools 60.9.3
six 1.15.0
tflite-runtime 2.4.1
toml 0.10.2
torch 1.7.1
torchvision 0.8.2
tvm 0.7.0
typing-extensions 3.7.4.3
root@imx8mp-var-dart:~#
How can it be solved? Thanks in advance!
Do not bother installing packages natively on the board; PyQt5 is already supported by Yocto in the meta-qt5 layer (link to recipe).
Just add meta-qt5 to your bblayers.conf and add:
IMAGE_INSTALL_append = " python3-pyqt5"
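For reference, a minimal sketch of the layer registration; the path below is an assumption and should match wherever meta-qt5 is checked out in your BSP:
# conf/bblayers.conf -- register the layer (path is an assumption)
BBLAYERS += " ${BSPDIR}/sources/meta-qt5 "
The IMAGE_INSTALL_append line above goes in conf/local.conf (or in your image recipe); then rebuild and redeploy the image with bitbake rather than installing the package with pip on the target.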
I'm trying to use pip-compile to build my requirements.txt file and I get the following error.
Traceback (most recent call last):
File "/Users/foobar/.pyenv/versions/3.7.0/bin/pip-compile", line 11, in <module>
sys.exit(cli())
File "/Users/foobar/.pyenv/versions/3.7.0/lib/python3.7/site-packages/click/core.py", line 764, in __call__
return self.main(*args, **kwargs)
File "/Users/foobar/.pyenv/versions/3.7.0/lib/python3.7/site-packages/click/core.py", line 717, in main
rv = self.invoke(ctx)
File "/Users/foobar/.pyenv/versions/3.7.0/lib/python3.7/site-packages/click/core.py", line 956, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/foobar/.pyenv/versions/3.7.0/lib/python3.7/site-packages/click/core.py", line 555, in invoke
return callback(*args, **kwargs)
File "/Users/foobar/.pyenv/versions/3.7.0/lib/python3.7/site-packages/click/decorators.py", line 17, in new_func
return f(get_current_context(), *args, **kwargs)
File "/Users/foobar/.pyenv/versions/3.7.0/lib/python3.7/site-packages/piptools/scripts/compile.py", line 305, in cli
for ireq in filter(is_pinned_requirement, ireqs):
File "/Users/foobar/.pyenv/versions/3.7.0/lib/python3.7/site-packages/piptools/utils.py", line 122, in is_pinned_requirement
if ireq.editable:
AttributeError: 'ParsedRequirement' object has no attribute 'editable'
pip==21.2.4
pip-tools==6.2.0
I tried downgrading pip and pip-tools together, according to the pip-tools PyPI page, but I can't seem to overcome this error. If anyone has suggestions, they'd be much appreciated.
Pip List
Package Version
----------------------- -----------
aiohttp 3.7.4.post0
alembic 1.4.3
asn1crypto 1.4.0
async-timeout 3.0.1
attrs 21.2.0
certifi 2021.5.30
chardet 3.0.4
click 8.0.1
colorama 0.4.3
coverage 5.5
flake8 3.8.3
greenlet 1.1.0
idna 2.10
importlib-metadata 4.6.1
Mako 1.1.4
MarkupSafe 2.0.1
mccabe 0.6.1
more-itertools 8.8.0
multidict 5.1.0
packaging 21.0
pep517 0.11.0
pg8000 1.20.0
pip 21.0.1
pip-tools 6.1.0
pluggy 0.13.1
psycopg2 2.8.5
py 1.10.0
pycodestyle 2.6.0
pyflakes 2.2.0
PyJWT 1.7.1
pyparsing 2.4.7
pytest 5.4.3
pytest-cov 2.12.0
pytest-mock 3.1.1
python-dateutil 2.8.2
python-editor 1.0.4
requests 2.24.0
scramp 1.4.0
setuptools 39.0.1
six 1.16.0
slack-sdk 3.11.0
SQLAlchemy 1.4.20
testing.common.database 2.0.3
testing.postgresql 1.3.0
toml 0.10.2
tomli 1.2.1
typing-extensions 3.10.0.0
urllib3 1.25.11
wcwidth 0.2.5
wheel 0.37.0
yarl 1.6.3
zipp 3.5.0
The error comes from pip-tools trying to access the editable attribute on the ParsedRequirement class, whereas the correct attribute name on that class is is_editable. With previous versions of pip, the objects yielded as ireq were of type InstallRequirement, which does have an editable attribute.
Try pip==20.0.2; that seems to be the last version that returned InstallRequirement instead of ParsedRequirement from the relevant method (parse_requirements).
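As a concrete sketch of that suggestion (run inside the same virtualenv; requirements.in is assumed to be your input file):
# downgrade pip to the last version whose parse_requirements returns InstallRequirement
python -m pip install "pip==20.0.2"
# re-run the compile
pip-compile requirements.in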
I have previously installed TensorFlow on several machines but am stuck installing it on my new laptop with an RTX 2060. No matter what combination of versions I try, I get the same error. I found similar issues online, and it seems the problem is a version conflict between cuDNN and TensorFlow.
Here's my installation and the error.
Currently I have CUDA v10.0.130 and cudnn-10.0-windows10-x64-v7.6.0.64 to match the TensorFlow installation, as in the image. tf.__version__ = 1.13.1 and the Python version is 3.6. The cuDNN libraries are copied into the CUDA installation folder. I also tried TensorFlow 1.14 with Python 3.7 and get the same results.
I'm installing TensorFlow with Anaconda:
conda install tensorflow-gpu
Traceback (most recent call last):
File "<ipython-input-1-c77ea08f5c30>", line 1, in <module>
runfile('C:/Users/mazat/Documents/Python/MVTools/player_detector/player_detector_testing.py', wdir='C:/Users/mazat/Documents/Python/MVTools/player_detector')
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/mazat/Documents/Python/MVTools/player_detector/player_detector_testing.py", line 399, in <module>
player_detector_run()
File "C:/Users/mazat/Documents/Python/MVTools/player_detector/player_detector_testing.py", line 392, in player_detector_run
glavnaya(dropbox_folder,gamename,mvstatus)
File "C:/Users/mazat/Documents/Python/MVTools/player_detector/player_detector_testing.py", line 247, in glavnaya
__,box1,score = yolo_class.detect_images(im2[ii].astype('uint8'))
File "C:\Users\mazat\Documents\Python\MVTools\player_detector\yolo3\yolo3.py", line 181, in detect_images
K.learning_phase(): 0
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d_1/convolution (defined at C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\keras\backend\tensorflow_backend.py:3650) ]]
Caused by op 'conv2d_1/convolution', defined at:
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\spyder_kernels\console\__main__.py", line 11, in <module>
start.main()
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\spyder_kernels\console\start.py", line 318, in main
kernel.start()
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\ipykernel\kernelapp.py", line 563, in start
self.io_loop.start()
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tornado\platform\asyncio.py", line 148, in start
self.asyncio_loop.run_forever()
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\asyncio\base_events.py", line 438, in run_forever
self._run_once()
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\asyncio\base_events.py", line 1451, in _run_once
handle._run()
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\asyncio\events.py", line 145, in _run
self._callback(*self._args)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tornado\ioloop.py", line 690, in <lambda>
lambda f: self._run_callback(functools.partial(callback, future))
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tornado\ioloop.py", line 743, in _run_callback
ret = callback()
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tornado\gen.py", line 787, in inner
self.run()
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tornado\gen.py", line 748, in run
yielded = self.gen.send(value)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\ipykernel\kernelbase.py", line 365, in process_one
yield gen.maybe_future(dispatch(*args))
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\ipykernel\kernelbase.py", line 272, in dispatch_shell
yield gen.maybe_future(handler(stream, idents, msg))
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\ipykernel\kernelbase.py", line 542, in execute_request
user_expressions, allow_stdin,
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tornado\gen.py", line 209, in wrapper
yielded = next(result)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\ipykernel\ipkernel.py", line 294, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\ipykernel\zmqshell.py", line 536, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\IPython\core\interactiveshell.py", line 2855, in run_cell
raw_cell, store_history, silent, shell_futures)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\IPython\core\interactiveshell.py", line 2881, in _run_cell
return runner(coro)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\IPython\core\async_helpers.py", line 68, in _pseudo_sync_runner
coro.send(None)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\IPython\core\interactiveshell.py", line 3058, in run_cell_async
interactivity=interactivity, compiler=compiler, result=result)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\IPython\core\interactiveshell.py", line 3249, in run_ast_nodes
if (await self.run_code(code, result, async_=asy)):
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-1-c77ea08f5c30>", line 1, in <module>
runfile('C:/Users/mazat/Documents/Python/MVTools/player_detector/player_detector_testing.py', wdir='C:/Users/mazat/Documents/Python/MVTools/player_detector')
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "C:/Users/mazat/Documents/Python/MVTools/player_detector/player_detector_testing.py", line 399, in <module>
player_detector_run()
File "C:/Users/mazat/Documents/Python/MVTools/player_detector/player_detector_testing.py", line 392, in player_detector_run
glavnaya(dropbox_folder,gamename,mvstatus)
File "C:/Users/mazat/Documents/Python/MVTools/player_detector/player_detector_testing.py", line 137, in glavnaya
yolo_class=YOLO(model_name,script_dir, res)
File "C:\Users\mazat\Documents\Python\MVTools\player_detector\yolo3\yolo3.py", line 39, in __init__
self.boxes, self.scores, self.classes = self.generate()
File "C:\Users\mazat\Documents\Python\MVTools\player_detector\yolo3\yolo3.py", line 68, in generate
if is_tiny_version else yolo_body(Input(shape=(None,None,3)), num_anchors//3, num_classes)
File "C:\Users\mazat\Documents\Python\MVTools\player_detector\yolo3\model.py", line 72, in yolo_body
darknet = Model(inputs, darknet_body(inputs))
File "C:\Users\mazat\Documents\Python\MVTools\player_detector\yolo3\model.py", line 48, in darknet_body
x = DarknetConv2D_BN_Leaky(32, (3,3))(x)
File "C:\Users\mazat\Documents\Python\MVTools\player_detector\yolo3\utils.py", line 16, in <lambda>
return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
File "C:\Users\mazat\Documents\Python\MVTools\player_detector\yolo3\utils.py", line 16, in <lambda>
return reduce(lambda f, g: lambda *a, **kw: g(f(*a, **kw)), funcs)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\keras\layers\convolutional.py", line 171, in call
dilation_rate=self.dilation_rate)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\keras\backend\tensorflow_backend.py", line 3650, in conv2d
data_format=tf_data_format)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 851, in convolution
return op(input, filter)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 966, in __call__
return self.conv_op(inp, filter)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 591, in __call__
return self.call(inp, filter)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 208, in __call__
name=self.name)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 1026, in conv2d
data_format=data_format, dilations=dilations, name=name)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 3300, in create_op
op_def=op_def)
File "C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
UnknownError (see above for traceback): Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[node conv2d_1/convolution (defined at C:\Users\mazat\Anaconda3\envs\tf_gpu\lib\site-packages\keras\backend\tensorflow_backend.py:3650) ]]
Here's also the result of conda list.
I get the same issue if I don't force the Python or TensorFlow versions and instead install the default TensorFlow 1.14 and Python 3.7.
(tf_gpu_tds) C:\Users\mazat>conda list
# packages in environment at C:\Users\mazat\Anaconda3\envs\tf_gpu_tds:
#
# Name Version Build Channel
_tflow_select 2.1.0 gpu
absl-py 0.8.0 py37_0
alabaster 0.7.12 py37_0
asn1crypto 0.24.0 py37_0
astor 0.8.0 py37_0
astroid 2.3.1 py37_0
attrs 19.1.0 py37_1
babel 2.7.0 py_0
backcall 0.1.0 py37_0
blas 1.0 mkl
bleach 3.1.0 py37_0
ca-certificates 2019.9.11 hecc5488_0 conda-forge
certifi 2019.9.11 py37_0
cffi 1.12.3 py37h7a1dbc1_0
chardet 3.0.4 py37_1003
cloudpickle 1.2.2 py_0
colorama 0.4.1 py37_0
cryptography 2.7 py37h7a1dbc1_0
cudatoolkit 10.0.130 0
cudnn 7.6.0 cuda10.0_0
cycler 0.10.0 py_1 conda-forge
cytoolz 0.10.0 py37hfa6e2cd_0 conda-forge
dask-core 2.5.0 py_0 conda-forge
decorator 4.4.0 py37_1
defusedxml 0.6.0 py_0
docutils 0.15.2 py37_0
entrypoints 0.3 py37_0
freetype 2.9.1 ha9979f8_1
gast 0.3.2 py_0
grpcio 1.16.1 py37h351948d_1
h5py 2.9.0 py37h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha66f8fd_1
idna 2.8 py37_0
imageio 2.5.0 py37_0 conda-forge
imagesize 1.1.0 py37_0
intel-openmp 2019.4 245
ipykernel 5.1.2 py37h39e3cac_0
ipython 7.8.0 py37h39e3cac_0
ipython_genutils 0.2.0 py37_0
isort 4.3.21 py37_0
jedi 0.15.1 py37_0
jinja2 2.10.1 py37_0
joblib 0.13.2 py37_0
jpeg 9b hb83a4c4_2
jsonschema 3.0.2 py37_0
jupyter_client 5.3.3 py37_1
jupyter_core 4.5.0 py_0
keras-applications 1.0.8 py_0
keras-base 2.2.4 py37_0 anaconda
keras-gpu 2.2.4 0 anaconda
keras-preprocessing 1.1.0 py_1
keyring 18.0.0 py37_0
kiwisolver 1.1.0 py37he980bc4_0 conda-forge
lazy-object-proxy 1.4.2 py37he774522_0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.9.2 h7bd577a_0
libsodium 1.0.16 h9d3ae62_0
libtiff 4.0.10 hb898794_2
markdown 3.1.1 py37_0
markupsafe 1.1.1 py37he774522_0
matplotlib-base 3.1.1 py37h2852a4a_1 conda-forge
mccabe 0.6.1 py37_1
mistune 0.8.4 py37he774522_0
mkl 2019.4 245
mkl-service 2.3.0 py37hb782905_0
mkl_fft 1.0.14 py37h14836fe_0
mkl_random 1.1.0 py37h675688f_0
nbconvert 5.6.0 py37_1
nbformat 4.4.0 py37_0
networkx 2.3 py_0 conda-forge
numpy 1.16.5 py37h19fb1c0_0
numpy-base 1.16.5 py37hc3f5095_0
numpydoc 0.9.1 py_0
olefile 0.46 py37_0
openssl 1.1.1c hfa6e2cd_0 conda-forge
packaging 19.2 py_0
pandas 0.25.1 py37ha925a31_0 anaconda
pandoc 2.2.3.2 0
pandocfilters 1.4.2 py37_1
parso 0.5.1 py_0
pickleshare 0.7.5 py37_0
pillow 6.1.0 py37hdc69c19_0
pip 19.2.3 py37_0
prompt_toolkit 2.0.9 py37_0
protobuf 3.9.2 py37h33f27b4_0
psutil 5.6.3 py37he774522_0
pycodestyle 2.5.0 py37_0
pycparser 2.19 py37_0
pyflakes 2.1.1 py37_0
pygments 2.4.2 py_0
pylint 2.4.2 py37_0
pyopenssl 19.0.0 py37_0
pyparsing 2.4.2 py_0
pyqt 5.9.2 py37h6538335_2
pyreadline 2.1 py37_1
pyrsistent 0.15.4 py37he774522_0
pysocks 1.7.1 py37_0
python 3.7.4 h5263a28_0
python-dateutil 2.8.0 py37_0
pytz 2019.2 py_0
pywavelets 1.0.3 py37h452e1ab_1 conda-forge
pywin32 223 py37hfa6e2cd_1
pyyaml 5.1.2 py37he774522_0 anaconda
pyzmq 18.1.0 py37ha925a31_0
qt 5.9.7 vc14h73c81de_0
qtawesome 0.6.0 py_0
qtconsole 4.5.5 py_0
qtpy 1.9.0 py_0
requests 2.22.0 py37_0
rope 0.14.0 py_0
scikit-image 0.15.0 py37he350917_2 conda-forge
scikit-learn 0.21.3 py37h6288b17_0
scipy 1.3.1 py37h29ff71c_0
setuptools 41.2.0 py37_0
sip 4.19.8 py37h6538335_0
six 1.12.0 py37_0
snowballstemmer 1.9.1 py_0
sphinx 2.2.0 py_0
sphinxcontrib-applehelp 1.0.1 py_0
sphinxcontrib-devhelp 1.0.1 py_0
sphinxcontrib-htmlhelp 1.0.2 py_0
sphinxcontrib-jsmath 1.0.1 py_0
sphinxcontrib-qthelp 1.0.2 py_0
sphinxcontrib-serializinghtml 1.1.3 py_0
spyder 3.3.6 py37_0
spyder-kernels 0.5.2 py37_0
sqlite 3.29.0 he774522_0
tensorboard 1.14.0 py37he3c9ec2_0
tensorflow 1.14.0 gpu_py37h5512b17_0
tensorflow-base 1.14.0 gpu_py37h55fc52a_0
tensorflow-estimator 1.14.0 py_0
tensorflow-gpu 1.14.0 h0d30ee6_0
termcolor 1.1.0 py37_1
testpath 0.4.2 py37_0
tk 8.6.8 hfa6e2cd_0
toolz 0.10.0 py_0 conda-forge
tornado 6.0.3 py37he774522_0
traitlets 4.3.2 py37_0
urllib3 1.24.2 py37_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_0
wcwidth 0.1.7 py37_0
webencodings 0.5.1 py37_1
werkzeug 0.16.0 py_0
wheel 0.33.6 py37_0
win_inet_pton 1.1.0 py37_0
wincertstore 0.2 py37_0
wrapt 1.11.2 py37he774522_0
xz 5.2.4 h2fa13f4_4
yaml 0.1.7 vc14h4cb57cf_1 [vc14] anaconda
zeromq 4.3.1 h33f27b4_3
zlib 1.2.11 h62dcd97_3
zstd 1.3.7 h508b16e_0
In the end, I got my answer from this GitHub issue. After several reinstalls and restarts, these lines of code started making all the difference:
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
tf.keras.backend.set_session(tf.Session(config=config))
I'm still not sure what was happening, but my suggestion would be to make sure you restart your computer often enough. Now both environments, TF 1.13.1 + Python 3.6 and TF 1.14 + Python 3.7, work for me.
Please note that, for compatibility reasons, the code for newer (>= 2.0) versions of TensorFlow is:
import tensorflow as tf
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
tf.compat.v1.keras.backend.set_session(tf.compat.v1.Session(config=config))
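(A possible alternative on TF 2.x, not part of the original answer, is the per-GPU memory-growth setting in tf.config; a minimal sketch:)
import tensorflow as tf

# Must run before TensorFlow has initialized any GPU.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)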
Since a compatibility issue is the most probable cause, as we discussed in the comments, I looked up the tested TensorFlow versions with respect to CUDA and cuDNN. You can find them here.
Please feel free to update the status of your problem after you set up your environment according to the given link.
I hope it gets resolved.
EDIT: In case you're on Windows, here is an updated link.
I am trying to import TensorFlow without success. Each package was installed with Anaconda Navigator or from the Anaconda prompt with conda install *package* in the env that I created previously.
For the test I found this script useful: https://www.southampton.ac.uk/~fangohr/blog/installation-of-python-spyder-numpy-sympy-scipy-pytest-matplotlib-via-anaconda.html#running-the-tests-with-spyder
and all the other packages seem to work.
Something is wrong with the installation of TensorFlow.
import tensorflow as tf
Output in the console:
import tensorflow as tf
Traceback (most recent call last):
File "<ipython-input-17-64156d691fe5>", line 1, in <module>
import tensorflow as tf
File "C:\Users\Megaport\Anaconda3\envs\venv\lib\site-packages\tensorflow\__init__.py", line 34, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "C:\Users\Megaport\Anaconda3\envs\venv\lib\site-packages\tensorflow\python\__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "C:\Users\Megaport\Anaconda3\envs\venv\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "C:\Users\Megaport\Anaconda3\envs\venv\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Users\Megaport\Anaconda3\envs\venv\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Megaport\Anaconda3\envs\venv\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
File "C:\Users\Megaport\Anaconda3\envs\venv\lib\imp.py", line 242, in load_module
return load_dynamic(name, filename, file)
File "C:\Users\Megaport\Anaconda3\envs\venv\lib\imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: DLL load failed: Impossibile trovare il modulo specificato. (English: "The specified module could not be found.")
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors
for some common reasons and solutions. Include the entire stack trace
above this error message when asking for help.
Thanks in advance.
This is the list of what is installed in the env. The OS is x64.
All the solutions found on Stack Overflow (AVX support of the CPU, Microsoft Visual C++ 2015, Python path) have already been tried.
# packages in environment at C:\Users\Megaport\Anaconda3\envs\venv:
#
# Name Version Build Channel
_py-xgboost-mutex 2.0 cpu_0
_tflow_select 2.3.0 mkl
absl-py 0.8.0 pypi_0 pypi
astor 0.8.0 pypi_0 pypi
atomicwrites 1.3.0 py37_1
attrs 19.1.0 py37_1
backcall 0.1.0 py37_0
blas 1.0 mkl
ca-certificates 2019.5.15 1
certifi 2019.6.16 py37_1
cloudpickle 1.2.1 py_0
colorama 0.4.1 py37_0
cycler 0.10.0 py37_0
decorator 4.4.0 py37_1
fastcache 1.1.0 py37he774522_0
freetype 2.9.1 ha9979f8_1
gast 0.2.2 pypi_0 pypi
google-pasta 0.1.7 pypi_0 pypi
grpcio 1.23.0 pypi_0 pypi
h5py 2.10.0 pypi_0 pypi
hdf5 1.10.2 hac2f561_1
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha66f8fd_1
importlib_metadata 0.19 py37_0
intel-openmp 2019.4 245
ipykernel 5.1.2 py37h39e3cac_0
ipython 7.8.0 py37h39e3cac_0
ipython_genutils 0.2.0 py37_0
jedi 0.15.1 py37_0
joblib 0.13.2 py37_0
jpeg 9b hb83a4c4_2
jupyter_client 5.3.1 py_0
jupyter_core 4.5.0 py_0
keras-applications 1.0.8 py_0
keras-base 2.2.4 py37_0
keras-preprocessing 1.1.0 py_1
kiwisolver 1.1.0 py37ha925a31_0
libmklml 2019.0.5 0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.8.0 h7bd577a_0
libsodium 1.0.16 h9d3ae62_0
libxgboost 0.90 0
m2w64-gcc-libgfortran 5.3.0 6
m2w64-gcc-libs 5.3.0 7
m2w64-gcc-libs-core 5.3.0 7
m2w64-gmp 6.1.0 2
m2w64-libwinpthread-git 5.0.0.4634.697f757 2
markdown 3.1.1 py37_0
matplotlib 3.1.1 py37hc8f65d3_0
mkl 2019.4 245
mkl-service 2.0.2 py37he774522_0
mkl_fft 1.0.14 py37h14836fe_0
mkl_random 1.0.2 py37h343c172_0
more-itertools 7.2.0 py37_0
mpmath 1.1.0 py37_0
msys2-conda-epoch 20160418 1
numpy 1.17.2 pypi_0 pypi
numpy-base 1.16.4 py37hc3f5095_0
openssl 1.1.1c he774522_1
opt-einsum 3.0.1 pypi_0 pypi
packaging 19.1 py37_0
pandas 0.25.1 py37ha925a31_0
parso 0.5.1 py_0
pickleshare 0.7.5 py37_0
pip 19.2.2 py37_0
pluggy 0.12.0 py_0
prompt_toolkit 2.0.9 py37_0
protobuf 3.9.1 pypi_0 pypi
py 1.8.0 py37_0
py-xgboost 0.90 py37_0
py-xgboost-cpu 0.90 py37_0
pygments 2.4.2 py_0
pyparsing 2.4.2 py_0
pyqt 5.9.2 py37h6538335_2
pytest 5.0.1 py37_0
python 3.7.4 h5263a28_0
python-dateutil 2.8.0 py37_0
pytz 2019.2 py_0
pyyaml 5.1.2 py37he774522_0
pyzmq 18.1.0 py37ha925a31_0
qt 5.9.7 vc14h73c81de_0
scikit-learn 0.21.2 py37h6288b17_0
scipy 1.3.1 py37h29ff71c_0
setuptools 41.2.0 pypi_0 pypi
sip 4.19.8 py37h6538335_0
six 1.12.0 pypi_0 pypi
spyder-kernels 0.5.1 py37_0
sqlite 3.29.0 he774522_0
sympy 1.4 py37_0
tb-nightly 1.15.0a20190806 pypi_0 pypi
tensorboard 1.14.0 py37he3c9ec2_0
tensorflow 1.14.0 mkl_py37h7908ca0_0
tensorflow-base 1.14.0 mkl_py37ha978198_0
tensorflow-estimator 1.14.0 py_0
termcolor 1.1.0 pypi_0 pypi
tornado 6.0.3 py37he774522_0
traitlets 4.3.2 py37_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_0
wcwidth 0.1.7 py37_0
werkzeug 0.15.6 pypi_0 pypi
wheel 0.33.6 pypi_0 pypi
wincertstore 0.2 py37_0
wrapt 1.11.2 py37he774522_0
yaml 0.1.7 hc54c509_2
zeromq 4.3.1 h33f27b4_3
zipp 0.5.2 py_0
zlib 1.2.11 h62dcd97_3
Installing Spyder again inside the environment was the solution.
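For reference, a minimal sketch of that fix; the environment name venv is taken from the paths above, so adjust it if yours differs:
# reinstall Spyder into the env that holds tensorflow
conda install -n venv spyder
Then launch Spyder from that environment (or activate it first with conda activate venv) so the kernel runs where TensorFlow is installed.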