I've been using TensorFlow for quite some time. Recently my scripts have become very slow to start (> 80 sec), compared to earlier (< 1 sec). I narrowed the issue down to import tensorflow, which alone takes all the time (all other libraries and operations run in << 1 sec).
I might have a trace, but I don't know what to do with it: when I interrupt the execution with Ctrl+C during the 80-second import, this is usually what comes up:
Traceback (most recent call last):
File "/.../py_env/tf_unet/lib/python3.5/site.py", line 703, in <module>
main()
File "/.../py_env/tf_unet/lib/python3.5/site.py", line 694, in main
execsitecustomize()
File "/.../py_env/tf_unet/lib/python3.5/site.py", line 548, in execsitecustomize
import sitecustomize
File "/usr/lib/python3.5/sitecustomize.py", line 3, in <module>
import apport_python_hook
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 954, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 896, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1139, in find_spec
File "<frozen importlib._bootstrap_external>", line 1113, in _get_spec
File "<frozen importlib._bootstrap_external>", line 1225, in find_spec
File "<frozen importlib._bootstrap_external>", line 1264, in _fill_cache
KeyboardInterrupt
Does this mean something is wrong with "filling the caches" (_fill_cache)? Does anyone have experience with this? Can I fix it somehow?
What I've tried so far:
I broke import tensorflow down to only the modules I need (from tensorflow import train / python_io / compat), with no improvement.
I found other people complaining about slow import tensorflow times here, here, and in the corresponding SO question, but those were in the range of < 10 sec and referred to specific modules (tf.contrib or tf.learn), so there is not much to learn from them. Also, I am using tensorflow 1.4.0, which apparently fixed the problems described there.
Just for reference, I am using this little piece of code to determine the speed:
from timeit import default_timer as timer
print('import tensorflow')
start = timer()
import tensorflow
end = timer()
print('Elapsed time: ' + str(end - start))
This is probably not the only possible cause, but in my experience it certainly plays a role. I had serious slowness when importing TensorFlow because my TF virtual environment was on a network drive. Moving the virtual environment to a local hard drive helped quite a bit in this respect.
You could try something similar that applies to your environment.
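As an aside (my addition, not part of the original answer): on Python 3.7+, CPython ships a built-in import profiler that prints a per-module timing breakdown to stderr, which shows exactly which submodule is slow or sitting on slow storage. Note this flag does not exist on the Python 3.5 used in the question:
python3 -X importtime -c "import tensorflow" 2> imports.log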
Related
I am learning to import packages in Python and am facing an issue importing custom packages while debugging code in the Thonny IDE.
The issue does not occur if I simply run the program.
My relative directory structure is
Compilation\Scripts\tesing_pkg_import.py
The contents of tesing_pkg_import.py are:
import pandas as pd
def tes_func():
    # Checking for the same column name in a single dataframe
    testDpCol = [(11, 'jack', 34, 'Sydney', 5)]
    testDfObj = pd.DataFrame(testDpCol, columns=['ID', 'Name', 'Age', 'Name', 'Experience'])
    print(testDfObj.head())
Then in the Compilation folder I have tesing_pkg_import_main.py, the contents of which are:
import Scripts.tesing_pkg_import as test
test.tes_func()
I have verified:
My parent path is present in sys.path
The program runs successfully
The issue occurs only when I start the debugger in Thonny
An __init__.py file is present in the Compilation\Scripts\ folder
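For reference, this is the layout those points describe (assuming the usual package structure):
Compilation\
    tesing_pkg_import_main.py
    Scripts\
        __init__.py
        tesing_pkg_import.py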
The issue logs are printed as follows:
Traceback (most recent call last):
File "D:\***Masked Manually*****\Compilation\tesing_pkg_import_main.py", line 1, in <module>
import Scripts.tesing_pkg_import as test
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 851, in exec_module
File "<frozen importlib._bootstrap_external>", line 988, in get_code
IndexError: list index out of range
Any help is appreciated.
It seems someone else also faced this issue and raised it on GitHub. The link to the issue is:
https://github.com/thonny/thonny/issues/1920
The solution there seems to work for me as well: fast debug (Shift-F5) is working, but normal debug (Ctrl-F5) is not. Anyone who needs help with this can watch the GitHub issue linked above.
I have recently performed various software updates as suggested by the "Software Center" on my Ubuntu machine (Ubuntu 18.04.5 LTS).
Now, when I try to import numba (numba==0.51.2) via
python3 -c 'import numba'
I get the following error
double free or corruption (top)
Aborted (core dumped)
The same happens when I create a new conda environment with a fresh numba install.
I have looked at the core dump via
gdb -c core
with
thread apply all bt full
but I only get memory address information. I use python 3.6.9 on my machine, but I have also tried 3.8 in a new conda environment, where I get the same error.
I suspect that the software update is the reason for the error described above. But I might be mistaken and something else is going on here.
Is there any other way to get more info on where python crashes? I really don't want to go through the updated libraries one by one and roll them back to find the error.
At least I have now found the library that causes this error.
These are the steps I took:
put import numba into a file, e.g. importNumba.py
locate python3.X-gdb.py via locate --regex python3.*-gdb.py. In my case it is in /usr/share/gdb/auto-load/usr/bin/python3.6-gdb.py
run python in debug mode via gdb python3 - the gdb console opens
execute source /usr/share/gdb/auto-load/usr/bin/python3.6-gdb.py in the gdb console - this will load the python extensions into gdb
execute run importNumba.py in the gdb console - this will produce the above error
execute py-bt in the gdb console
This gives
Traceback (most recent call first):
File "/usr/local/lib/python3.6/dist-packages/llvmlite/binding/ffi.py", line 113, in __call__
return self._cfn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/llvmlite/binding/dylib.py", line 29, in load_library_permanently
_encode_string(filename), outerr):
File "/usr/local/lib/python3.6/dist-packages/numba/__init__.py", line 151, in _try_enable_svml
llvmlite.binding.load_library_permanently("libsvml.so")
File "/usr/local/lib/python3.6/dist-packages/numba/__init__.py", line 201, in <module>
config.USING_SVML = _try_enable_svml()
<built-in method exec of module object at remote 0x7ffff7fb7638>
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "importNumba.py", line 1, in <module>
import numba
So it seems something is wrong with libsvml.so.
I found out that numba allows disabling SVML by setting the environment variable NUMBA_DISABLE_INTEL_SVML to something other than 0; see https://numba.pydata.org/numba-doc/dev/reference/envvars.html
Changing importNumba.py to
import os
# note that this must be executed before 'import numba'
os.environ['NUMBA_DISABLE_INTEL_SVML'] = '1'
import numba
and running it via python3 importNumba.py now works without error.
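Equivalently, the variable can be set for a single run without editing the script (standard shell behavior, assuming a POSIX shell):
NUMBA_DISABLE_INTEL_SVML=1 python3 importNumba.py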
These were a few useful resources that I used:
https://fedoraproject.org/wiki/Features/EasierPythonDebugging#New_gdb_commands
https://wiki.python.org/moin/DebuggingWithGdb
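As an additional general-purpose diagnostic (my addition, not one of the resources above): CPython's built-in faulthandler can often print a Python-level traceback on hard crashes such as the SIGABRT from a double free, without needing gdb:
python3 -X faulthandler importNumba.py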
Here is my problem: I am working on a German text classification project. I use spaCy for it and decided to fine-tune its pretrained BERT model to get better results. However, when I try to load the model in my code, I get errors.
Here is what I've done:
Installed spacy-transformers: pip install spacy-transformers
Downloaded German BERT model: python -m spacy download de_trf_bertbasecased_lg. It was downloaded successfully and showed me: ✔ Download and installation successful
You can now load the model via spacy.load('de_trf_bertbasecased_lg')
Wrote the following code:
import spacy
nlp = spacy.load('de_trf_bertbasecased_lg')
And the output was:
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
nlp = spacy.load('de_trf_bertbasecased_lg')
File "C:\Python\Python37\lib\site-packages\spacy\__init__.py", line 30, in load
return util.load_model(name, **overrides)
File "C:\Python\Python37\lib\site-packages\spacy\util.py", line 164, in load_model
return load_model_from_package(name, **overrides)
File "C:\Python\Python37\lib\site-packages\spacy\util.py", line 185, in load_model_from_package
return cls.load(**overrides)
File "C:\Python\Python37\lib\site-packages\de_trf_bertbasecased_lg\__init__.py", line 12, in load
return load_model_from_init_py(__file__, **overrides)
File "C:\Python\Python37\lib\site-packages\spacy\util.py", line 228, in load_model_from_init_py
return load_model_from_path(data_path, meta, **overrides)
File "C:\Python\Python37\lib\site-packages\spacy\util.py", line 196, in load_model_from_path
cls = get_lang_class(lang)
File "C:\Python\Python37\lib\site-packages\spacy\util.py", line 70, in get_lang_class
if lang in registry.languages:
File "C:\Python\Python37\lib\site-packages\catalogue.py", line 56, in __contains__
has_entry_point = self.entry_points and self.get_entry_point(name)
File "C:\Python\Python37\lib\site-packages\catalogue.py", line 140, in get_entry_point
return entry_point.load()
File "C:\Python\Python37\lib\site-packages\importlib_metadata\__init__.py", line 94, in load
module = import_module(match.group('module'))
File "C:\Python\Python37\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "C:\Python\Python37\lib\site-packages\spacy_transformers\__init__.py", line 1, in <module>
from .language import TransformersLanguage
File "C:\Python\Python37\lib\site-packages\spacy_transformers\language.py", line 5, in <module>
from .util import is_special_token, pkg_meta, ATTRS, PIPES, LANG_FACTORY
File "C:\Python\Python37\lib\site-packages\spacy_transformers\util.py", line 2, in <module>
import transformers
File "C:\Python\Python37\lib\site-packages\transformers\__init__.py", line 20, in <module>
from .file_utils import (TRANSFORMERS_CACHE, PYTORCH_TRANSFORMERS_CACHE, PYTORCH_PRETRAINED_BERT_CACHE,
File "C:\Python\Python37\lib\site-packages\transformers\file_utils.py", line 37, in <module>
import torch
File "C:\Python\Python37\lib\site-packages\torch\__init__.py", line 81, in <module>
ctypes.CDLL(dll)
File "C:\Python\Python37\lib\ctypes\__init__.py", line 356, in __init__
self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found
If I run the same code in PyCharm, it also shows me these two lines before all of those above:
2020-05-19 18:00:55.132721: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2020-05-19 18:00:55.132990: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
If I got it right, these two lines complain that I don't have a GPU. However, according to the docs, I should be able to use BERT even without a GPU.
So I am really stuck right now and looking for your help.
I should also mention that I used the de_core_news_sm model before and it worked fine.
I have also already tried several solutions, but none of them worked. I tried:
this and this. I have also tried uninstalling all spacy-related libraries and installing them again. That didn't help either.
I am working with:
Windows 10 Home
Python: 3.7.2
Spacy: 2.2.4
Spacy-transformers: 0.5.1
Would appreciate any help or advice!
It's probably a problem with your installation of torch. Start in a clean virtual environment and install torch using the instructions here with CUDA as None: https://pytorch.org/get-started/locally/. Then install spacy-transformers with pip.
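A minimal sketch of that setup on Windows (the torch version pin is only an example; take the exact command from the selector on the PyTorch page with CUDA set to None):
python -m venv clean_env
clean_env\Scripts\activate
pip install torch==1.5.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
pip install spacy-transformers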
Try:
python -m spacy download de_trf_bertbasecased_lg
python -m spacy download de_trf_bertbasecased_lg-2.2.0
python -m spacy link de_trf_bertbasecased_lg
python -m spacy link de_trf_bertbasecased_lg-2.2.0
Or:
python -m spacy download de
python -m spacy.de.download all
Or download directly from:
https://deepset.ai/german-bert
Then load:
nlp = spacy.load(r'Path_To_File\de_trf_bertbasecased_lg-2.2.0')
I am trying to set up Python in Vim, but I have failed to get it to work. It always throws a UnicodeDecodeError exception.
I have installed gvim on Windows 10, and also installed Python 3 in the corresponding version.
Vim can find the python37.dll and the command
:echo has('python3')
returns 1 as expected.
Vim's Python works only as long as no modules other than the built-in ones are imported.
For example:
:py3 print('a')
works pretty well.
:py3 import vim
or
:py3 import sys
are also working.
However, if I write a simple python script vim_test.py like this
print('This is for vim test')
then try to import it in vim as
:py3 import vim_test
it will give an exception:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 963, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 906, in _find_spec
File "<frozen importlib._bootstrap_external>", line 1280, in find_spec
File "<frozen importlib._bootstrap_external>", line 1252, in _get_spec
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb3 in position 9: invalid start byte
Vim cannot import any Python module this way.
But it can run this script directly by
:py3file vim_test.py
if the script file vim_test.py is in the current directory.
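One thing worth checking (an assumption on my side, not a confirmed cause): the traceback fails while importlib scans the directories on sys.path, so an entry whose name is not valid UTF-8 in the filesystem encoding might be the culprit. The list can be printed from inside Vim with:
:py3 import sys; print(sys.path)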
What could be the reason for this problem?
And how can I solve it?
I expected to be able to use Vim plugins written in Python. With this problem, I cannot achieve that.
After installing Anaconda for Python 3.4 on my Mac I get constant messages saying:
Error in sitecustomize; set PYTHONVERBOSE for traceback:
KeyError: 'PYTHONPATH'
As suggested by a user on another question, I used
PYTHONVERBOSE=1 conda update --all
And received the traceback:
Traceback (most recent call last):
File "/Users/user/anaconda/lib/python3.4/site.py", line 506, in execsitecustomize
import sitecustomize
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2226, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 1200, in _load_unlocked
File "<frozen importlib._bootstrap>", line 1129, in _exec
File "<frozen importlib._bootstrap>", line 1471, in exec_module
File "<frozen importlib._bootstrap>", line 321, in _call_with_frames_removed
File "/usr/local/lib/python2.7/site-packages/sitecustomize.py", line 15, in <module>
str(sys.version_info[0]) + '.x!\n PYTHONPATH is currently: "' + str(os.environ['PYTHONPATH']) + '"\n' +
File "/Users/user/anaconda/lib/python3.4/os.py", line 633, in __getitem__
raise KeyError(key) from None
KeyError: 'PYTHONPATH'
# destroy sitecustomize
I have looked around and found that 'PYTHONPATH' does not exist as a key in os.environ.
If your PYTHONPATH environment variable is set, unset it. You can check with echo $PYTHONPATH. If it is set it is probably coming from something in ~/.profile or ~/.bash_profile.
The issue is the file /usr/local/lib/python2.7/site-packages/sitecustomize.py. You may want to check what that file is and where it comes from, but removing it should fix the problem.
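Alternatively, if you want to keep the file, the line shown in the traceback can be made defensive instead (a sketch; the surrounding code in that sitecustomize.py may differ):
import os
# os.environ['PYTHONPATH'] raises KeyError when the variable is unset;
# os.environ.get() returns a default value instead
pythonpath = os.environ.get('PYTHONPATH', '')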
Going to necro-answer here with more detail for folks that might hit this page after searching for the error shown…
If your Mac has messages referencing /usr/local/, I'm going to go ahead and assume you've used homebrew to install something. In this case, Python.
When Anaconda's Python distribution is installed, one of the things it'll check is if there are any site customizations applied to your existing Python installation. If you installed any version of Python using Homebrew, you likely have such a site customization.
Running conda info -a | grep dirs will print your Anaconda install info and filter it for lines containing dirs. Only one line should match, if it exists:
user site dirs: ~/.local/lib/python3.5
If it does exist, cd to that directory (whatever it is), and get a directory listing (ls). You'll then (likely) find a file called homebrew.pth.
Remove that file, and the error goes away.
Reason: Anaconda is referencing that homebrew.pth file, which then goes on to include the sitecustomize.py from your earlier homebrew-installed version of Python.
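If conda info -a does not show such a line, the same location can be asked of Python directly via the standard site module (generic Python, not Anaconda-specific):
python3 -c "import site; print(site.getusersitepackages())"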