pyinstaller GLIBC_2.25 not found, however another script works - python

I created an executable out of a simple Python script using pyinstaller on Ubuntu 18.04, and tested it on a different computer (also with Ubuntu 18), where it worked perfectly.
However, when trying the same with a more complex script (more library imports), the executable fails on the other computer with the error
ImportError: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found
This can't be a Python incompatibility (see https://github.com/pyinstaller/pyinstaller/issues/4758), as the other script worked fine. So it is most likely caused by one of the libraries the second script imports.
How can I include the imported libraries in the executable made by PyInstaller (if that is even the cause of this error)?

Solution A
I have not confirmed this solution, but it sometimes helps: delete the ./build and ./dist directories, then create the executable again with pyinstaller.
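A minimal sketch of the same clean rebuild driven from Python (myscript.py is just a placeholder for your entry script; --clean additionally clears PyInstaller's cache):

# Hypothetical clean-rebuild helper: wipe old artifacts, then rebuild.
import shutil
import PyInstaller.__main__

for stale in ("build", "dist"):
    shutil.rmtree(stale, ignore_errors=True)   # same as deleting ./build and ./dist by hand

PyInstaller.__main__.run(["--clean", "myscript.py"])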
Solution B
The solution, for me at least, is to build your executable on an older version of your OS.
I was seeing the same error.
ImportError: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /tmp/_MEIjdcWu4/./libX11.so.6)
[32614] Failed to execute script 'test_executable' due to unhandled exception!
I built my executable with PyInstaller on Ubuntu 22.04. Then I copied the executable to the older Ubuntu 20.04 and running it produced the error above.
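If you want to confirm the mismatch, a quick check you can run on both the build machine and the target machine is:

# Print the C library flavour and version the interpreter is linked against.
import platform

print(platform.libc_ver())   # e.g. ('glibc', '2.31')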
Per the comment quoted below, this may be a compatibility issue: an executable built on a newer OS is not necessarily compatible with older OS versions.
"For what is worth, the issue could be that the libraries bundled with
the built program conflict with the system libraries, preventing the
DRI driver from properly loading.
The culprit could be either standard c/c++ libraries (libgcc_s.so.1,
libstdc++.so.6) or maybe the X11 libraries (libX11.so.6, libXau.so.6,
libXdmcp.so.6, libXext.so.6, `libXrender.so.1˙). Perhaps more likely
former than the latter.
For example, if libstdc++.so.6 on the build system is older than the
one used by the target system, then the non-bundled libraries will
fail to load due to missing symbols (which are present in the newer,
system version of the library, but not in the bundled one). This is
actually quite a common issue with binary-only software on linux,
especially on more bleeding edge distributions. In those cases,
removing the bundled version of the offending library may help.
(You have a similar issue with system libgvfsdbus.so, which is missing
a symbol that is not available in the bundled libglib-2.0.so.0, which
is probably older than the glib library available on the system)."
Source:
https://github.com/cryptoadvance/specter-desktop/issues/373#issuecomment-694476451
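If one of the bundled libraries really is the culprit, a minimal sketch of that approach is to filter it out of the generated .spec file (the library names below are examples, not a confirmed fix; the filter goes after the Analysis(...) call and before EXE/COLLECT, where a is the Analysis object):

# Drop selected bundled shared libraries so the system copies are used instead.
a.binaries = [entry for entry in a.binaries
              if not entry[0].startswith(("libstdc++", "libgcc_s"))]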

Related

Cython and clang on mac, "Python.h not found"

I'm running clang on Mac to compile a C file created by running a very simple program through Cython, but the compiler always gives me a "Python.h not found" fatal error. I've tried every solution I could find: reinstalling Python 3.9, using the -I/path/to/headerfile method, and rewriting the include statement in the code to contain the full filepath, but nothing has worked. When I do include the full filepath, I get fatal error: 'cpython/initconfig.h' file not found. What could the issue possibly be, and how would I fix it? The program itself works fine in the standard Python interpreter, pyinstaller, and nuitka.
Today I needed to embed Python into a C application developed in Xcode with the Clang compiler, and I ran into the same problems and solved them. Let me share the tips:
1. Python.h not found error
You should install a Python framework on your macOS and attach it in the Frameworks and Libraries project settings tab. Also, specify its path in Framework Search Paths inside the Build Settings tab if Xcode hasn't done it automatically.
2. Undefined symbols / Implicit declaration errors
Verify you're using a modern version of the Python framework. Versions prior to 3.6 had a different ABI, which leads to the mentioned errors.
3. cpython/initconfig.h file not found
Versions prior to ~3.9 had a header inclusion problem with the cpython/initconfig.h file (issues 40642, 39026).
Check which Python builds your OS currently has. I found Python installations in the following paths:
# Default 2.7 MacOSX installation is here
/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/System/Library/Frameworks/
/Applications/Xcode.app/Contents/Developer/Library/Frameworks/
/Library/Frameworks/
Or install a recent Python version. I recommend using the "universal binary" build; see the docs.
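If you are not sure which header directory to hand to clang, the interpreter you intend to build against can tell you; a small sketch (pass the printed include directory to clang with -I):

# Ask the interpreter where its own headers and installation prefix live.
import sysconfig

print(sysconfig.get_paths()["include"])     # directory that contains Python.h
print(sysconfig.get_config_var("prefix"))   # installation / framework prefix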

Is it necessary to stop all python scripts when upgrading python packages?

If a Python script using package X is running and package X is being upgraded, will it lead to a permission problem that will cause the upgrade to fail?
I am using Windows 10, Anaconda v5.2 running on Python v3.6, and conda to update packages.
Although pure Python files are compiled in memory when imported, and the source is (almost) no longer relevant after that, that's not the end of the story.
Packages may have extra assets that are lazily loaded, or your program or its dependencies may load dependent modules on demand, so, if running during an upgrade, it may load unexpected versions of packages/resources, or even halfway-upgraded packages.
Also, native (i.e. not pure-Python) modules - .pyd files on Windows - are DLLs that are loaded into the importing process. Because DLLs are memory-mapped with no write sharing, replacing them while they are loaded is not allowed, so this can block the upgrade of the relevant packages.
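A small Windows-only sketch of that locking behaviour, using _ctypes from the standard library as a stand-in for any imported extension module:

# Once a .pyd is imported it is mapped into the process; opening it for writing
# (which an upgrade would need to do) fails for as long as the process lives.
import _ctypes

try:
    open(_ctypes.__file__, "r+b").close()
except PermissionError as exc:
    print("Locked while this script is running:", exc)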
I tried a simple test just now. I ran a Python script that loops forever and uses numpy. Then I tried to install a Python package (pytorch) that requires downgrading the numpy version. While the script was running, the installation failed with a "no permission" error message. After I stopped the script, the installation succeeded.
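For reference, a minimal sketch of the kind of looping test script described above (assumes numpy is installed):

# Keep numpy imported and busy so a concurrent upgrade/downgrade can be attempted.
import time
import numpy as np

while True:
    print(np.random.rand(3, 3).sum())
    time.sleep(5)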
Based on the results of this experiment, the conclusion is that you should stop all Python scripts before performing a Python package upgrade.
When you import a package, you create a local instance of it in RAM for the running process, so upgrading your packages should not affect scripts that are already running.
You can look at from importlib import reload to reload your packages while your scripts are still running.
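A small sketch of what that looks like (json here is just a stand-in for whatever package was upgraded; note that reload works best for pure-Python modules and will not swap out C extensions that are already loaded):

# Re-import an already-loaded module so a long-running script picks up the new version.
import importlib
import json

json = importlib.reload(json)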

Configure error while installing graph-tool on ubuntu 14.04

So I spent a whole day trying to find the solution for this. I am trying to install graph-tool on my machine running Ubuntu 14.04. Initially I couldn't succeed because I didn't have gcc 5 on my machine. After installing it, I am trying the following:
./configure CXX='g++5'
and I get the following error:
===========================
Using python version: 2.7.6
===========================
checking for boostlib >= 1.54.0... configure: We could not detect the boost libraries (version 1.54 or higher). If you have a staged boost library (still not installed) please specify $BOOST_ROOT in your environment and do not give a PATH to --with-boost option. If you are sure you have boost installed, then check your version number looking in <boost/version.hpp>. See http://randspringer.de/boost for more documentation.
checking whether the Boost::Python library is available... no
configure: error: No usable boost::python found
I see no solution on the mailing list of graph-tool or stackoverflow about this problem. I would be really grateful if somebody could help me with this.
Thanks in advance.
In Debian, the libraries are almost always split into two packages: one containing the shared object and another one with a "-dev" suffix which contains the header files. For cairomm you need to install the libcairomm-1.0-dev package, in addition to libcairomm-1.0.
And cairo support is optional. If you want to disable it, just pass --disable-cairo to the configure script.
Source: https://lists.skewed.de/pipermail/graph-tool/2013-November/001094.html
There are some issues with the boost package on Ubuntu 14.04 and some of the graph-tool functions (see graph-tool - k-shortest path - boost::coroutine was not found at compile-time and http://main-discussion-list-for-the-graph-tool-project.982480.n3.nabble.com/Debian-package-and-boost-at-compile-time-td4026383.html). At present it seems necessary to compile boost from source, until a newer version of boost is uploaded to the repository, in order for graph-tool to work fully.
Once this bug is fixed (https://bugs.launchpad.net/ubuntu/+source/boost1.54/+bug/1529289) it will no longer be a problem.

"R6034 An application has made an attempt to load the C runtime library incorrectly" after pygtk being installed

I'm using Python 2.7.9 and encountered a problem when installing pygtk.
It displayed "Runtime error!...R6034 An application has made an attempt to load the C runtime library incorrectly" when installing numpy/scipy after pygtk had been installed.
I tried to figure it out by searching it in stackoverflow and found two similar questions: Runtime error R6034 in embedded Python application and An application has made an attempt to load the C runtime library incorrectly.
So, following the first one, I deleted the path corresponding to msvcr90.dll; however, it still did not work. Then I chose to simply delete msvcr90.dll; after that, the error no longer appeared when installing numpy/scipy, but these two modules no longer worked when simply importing numpy/scipy.
I also renamed gtk-2.0 following the second one. Then numpy and scipy could be installed successfully, but it displayed "Error processing line 3 of C:\Python27\lib\site-packages\pygtk.pth" when installing matplotlib using pip.
I'm really confused about it. Can anybody provide some methods to fix it?
I've installed Python and PyGTK on 5+ machines, at least two of them brand new, clean builds of Win 7.
I've got the An application has made an attempt to load the C runtime library incorrectly error whenever I installed a Python package via a Windows installer (rather than using pip) on all these machines. It's annoying, but it has never made a jot of difference; both Python and GTK function correctly.
You've deleted msvcr90.dll, and that is why you get your Error processing line 3... If you look at this file, you'll see that line 3 is import runtime, and if you look further into the 'runtime' package, you'll see that this then tries to find the missing DLL.
I think your best bet is to try to restore the missing file. If it's still in your recycle bin - great!
If not, the best thing to do is reinstall the Visual C++ runtime library.
I made this video to show my way: https://www.youtube.com/watch?v=s6jhR1VBfeU. I use Anaconda to embed Python in my C++ application. I simply renamed "msvcr90.dll" to "msvcr90.dll_hihi" in 3 folders:
C:\Users\your user\Anaconda2\Library\bin, C:\Users\your user\Anaconda2 and C:\Program Files\Intel\iCLS Client (for x64)

pySide: ExtensionLoader_Pyside_QtGUI.py specified module could not be found

I'm using CXFreeze with PySide (QT). I get an error:
cx_Freeze: Python error in main script.
myscript.py line 33, in
File ExtensionLoader_Pyside_QtGUI.py, line 11, in
Import Error: DLL load failed: The specified module could not be found
This happens when running on a fresh install of Windows Server 2008.
I'm running the frozen EXE package (with the folder). It seems to work on my own system and other systems. What might be the issue?
After reading online, I tried to replace the Qt4Gui file, but this didn't solve the issue.
The Python version is 2.7.
Based on your Import Error: DLL load failed, it is most likely an installation issue causing the missing DLL. To figure out exactly which DLL you are missing, use http://www.dependencywalker.com/. Run the .exe and open the .pyd file for File ExtensionLoader_Pyside_QtGUI.py, and it will show you exactly which DLLs are missing and, more importantly, the locations where they should be. You can probably then track down the missing DLL online.
There are known issues with PySide 1.2.0 and cx_Freeze. All should be fixed in the development version (available in the git repo). Please build PySide from the latest sources yourself or wait for PySide version 1.2.1. Build instructions are here [1].
[1] https://github.com/PySide/pyside-setup#building-pyside-on-a-windows-system
I used Py2exe instead of CXFreeze and it worked perfectly.
Also, apparently Python requires the MS Visual C++ Dependency Files:
http://www.microsoft.com/en-us/download/details.aspx?id=29
So any bundling needs that as well, if it's a fresh install. (Although I think they are now bundled with newer Windows versions.)
Other Notes:
In my experience, you should sometimes try cx_Freeze, py2exe and PyInstaller quickly and see which one works best. As ideal as cx_Freeze is for cross-platform use, it just isn't going to work perfectly everywhere.
Also, while I don't know if this was a factor, I set up a Windows 2000 Pro virtual machine and ran Py2exe on that. That was to ensure compatibility for all older Windows versions, and seemed to work well. (NOTE: Many things won't even run on Win2000 anymore so be careful that your other tools and libraries will run on it.)
Finally, be extra careful to match the bit level (32 vs 64) of all your libraries, and your Python install itself. If you have 32-bit python, ensure that your PySide, CXFreeze and any other libraries you use are 32-bit. (Or 64-bit if you're using 64-bit python.)
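A quick way to confirm the bit level of the interpreter you freeze with (run it with that exact python.exe):

# Report the interpreter's pointer size and architecture.
import struct
import platform

print(struct.calcsize("P") * 8, "bit")   # 32 or 64
print(platform.architecture()[0])        # e.g. '64bit'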
