Python terminal call doesn't load appropriate libraries - python

I am running a program which utilizes the OpenMPI libraries on Fedora 20.
When I run the command from terminal:
../bin/boxfit ../settings/boxfitsettings.txt | tee boxoutput.log
it is successful.
When I run it through the Python console, I get an error:
os.system('../bin/boxfit ../settings/boxfitsettings2.txt | tee boxoutput.log')
../bin/boxfit: error while loading shared libraries: libmpi_cxx.so.1: cannot open shared object file: No such file or directory
The same error results with
subprocess.call(args, shell=True)
I have the paths set the same, so it should have access to the same libraries. Is there internal Python functionality that I need to be aware of to get around this error? Or is it perhaps a program compilation error that means the program's libraries can't talk to Python?

It looks like it is looking for the file relative to Python's own working directory, not relative to your current one.
What you could do, for example, is:
path = os.path.abspath("..")
os.system('%s/bin/boxfit %s/settings/boxfitsettings2.txt | tee boxoutput.log' % (path, path))
to get the absolute path of where you are, and then format that into your command.
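If absolute paths alone don't fix it, note that the error is about a shared library (libmpi_cxx.so.1), not the program path: an interactive shell often has LD_LIBRARY_PATH set (e.g. via .bashrc or an environment module) while the Python console does not. A minimal sketch of passing an explicit environment to the child process, assuming the OpenMPI libraries live in /usr/lib64/openmpi/lib (adjust to wherever libmpi_cxx.so.1 actually is on your system):

import os
import subprocess

env = os.environ.copy()
# Assumed location of the OpenMPI runtime libraries; verify where libmpi_cxx.so.1 really is.
env["LD_LIBRARY_PATH"] = "/usr/lib64/openmpi/lib:" + env.get("LD_LIBRARY_PATH", "")

# shell=True keeps the pipe to tee working; env= gives the child the extended library path.
subprocess.call("../bin/boxfit ../settings/boxfitsettings2.txt | tee boxoutput.log",
                shell=True, env=env)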

Related

subprocess.popen not able to run command [duplicate]

This question already has answers here:
How can I specify working directory for a subprocess
(2 answers)
Closed 1 year ago.
I have an exe located at C:\Users\srinivast6\Documents\Cipia\Cipia\DriverSenseCLI-v7.4.3-win64.
I can run this exe in a terminal (non-admin mode, Windows) without any problem.
I am calling it programmatically using subprocess.Popen as below.
process = subprocess.Popen(['myApplication.exe'])
But it is giving the following error: it is not able to read the license file. What might be the cause of this? Do I need to run it in admin mode?
Cannot open license file : license.dat
License 284
Error initializing library:
license is not valid
EDIT1: After changing the directory from where I was running the Python script to the directory where the exe is located, I no longer see any error. It launches as expected. But I am packaging this Python script as a standalone executable, so the user might launch myApplication.exe from any directory; I cannot put that restriction on the user.
So is it possible to set the current working directory programmatically to the path where myApplication.exe is located?
Some more information needs to be shared, but a guess at what could be happening is that you are executing the Python script from a directory other than where you normally execute myApplication.exe, and that the call to open the file in myApplication uses a relative path (which seems to be the case here). That would mean that when you execute myApplication with Popen, its working directory is set to wherever you executed the Python script, and the relative path is resolved relative to that. If that is the case, try executing the script from the same directory where you would usually execute myApplication.exe, or change the path passed to the file open call in myApplication to use an absolute path.
Example:
Directory Structure
C:\Users\user
|--popen.py
|--somedir
|  |--license.dat
|  |--myApplication.exe
|  |--myApplication.py
Contents of popen.py:
import subprocess

process = subprocess.Popen(['myApplication.exe'])
Contents of myApplication.py (I realize python files don't get compiled to executables, but it is only for the sake of an example):
f = open('license.dat', 'r')
Now, this wouldn't work:
cwd: C:\Users\user
$ python popen.py # File not found error.
Either execute the script from somedir:
cwd: C:\Users\user\somedir
$ python ..\popen.py
Or alternatively, change the path passed to open in myApplication.py:
f = open(r'C:\Users\user\somedir\license.dat', 'r')
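To answer the edited question directly: subprocess.Popen accepts a cwd argument, so you can set the child's working directory programmatically without changing your own. A short sketch, where exe_path is a hypothetical absolute path to the installed application:

import os
import subprocess

# Hypothetical location of the exe; in a packaged script you might read this from a config file.
exe_path = r'C:\Users\user\somedir\myApplication.exe'

# cwd= starts the child process in the exe's own directory, so its relative
# 'license.dat' lookup resolves correctly no matter where the script is run from.
process = subprocess.Popen([exe_path], cwd=os.path.dirname(exe_path))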

Fix "Fatal Python error: Py_Initialize: can't initialize sys standard streams"

I have two python environments with different versions running in parallel. When I execute a python script (test2.py) from one python environment in the other python environment, I get the following error:
Fatal Python error: Py_Initialize: can't initialize sys standard streams
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\envs\GMS_VENV_PYTHON\lib\io.py", line 52, in <module>
File "C:\ProgramData\Miniconda3\envs\GMS_VENV_PYTHON\lib\abc.py", line 147
print(f"Class: {cls.__module__}.{cls.__qualname__}", file=file)
^
SyntaxError: invalid syntax
So my setup is this:
Python 3.7
(test.py)
│
│ Python 3.5.6
├───────────────────────────────┐
┆ │
┆ execute test2.py
┆ │
┆ 🗲 Error
How can I fix this?
For dm-script-people: How can I execute a module with a different python version in Digital Micrograph?
Details
I have two python files.
File 1 (test.py):
# execute in Digital Micrograph
import os
import subprocess
command = ['C:\\ProgramData\\Miniconda3\\envs\\legacy\\python.exe',
os.path.join(os.getcwd(), 'test2.py')]
print(*command)
result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print("Subprocess result: '{}', '{}'".format(result.stdout.decode("utf-8"), result.stderr.decode("utf-8")))
and File 2 (test2.py)
# only executable in python 3.5.6
print("Hi")
in the same directory. test.py is executing test2.py with a different python version (python 3.5.6, legacy environment).
My Python script (test.py) is running in the Python interpreter of a third-party program (Digital Micrograph). This program installs a miniconda Python environment called GMS_VENV_PYTHON (Python version 3.7.x), which can be seen in the above traceback. The legacy miniconda environment is used only for running test2.py (from test.py) with Python version 3.5.6.
When I run test.py from the command line (also in the conda GMS_VENV_PYTHON environment), I get the expected output from test2.py in test.py. When I run the exact same file in Digital Micrograph, I get the response
Subprocess result: '', 'Fatal Python error: Py_Initialize: can't initialize sys standard streams
Traceback (most recent call last):
File "C:\ProgramData\Miniconda3\envs\GMS_VENV_PYTHON\lib\io.py", line 52, in <module>
File "C:\ProgramData\Miniconda3\envs\GMS_VENV_PYTHON\lib\abc.py", line 147
print(f"Class: {cls.__module__}.{cls.__qualname__}", file=file)
^
SyntaxError: invalid syntax
'
This tells me the following (I guess):
test2.py is called, since this is the error output of the subprocess call, so the subprocess.run() invocation itself seems to work fine.
The paths point into the GMS_VENV_PYTHON environment, which is wrong in this case; since this is test2.py, they should point into the legacy environment.
There is a SyntaxError because an f-string (literal string interpolation) is used, which was introduced in Python 3.6. So the executing Python version is older than 3.6, which means the legacy Python environment is the one actually running.
test2.py uses neither io nor abc (I don't know what to conclude here; are those modules loaded by default when executing Python?).
So I guess this means that the standard modules are loaded (probably because they are always loaded) from the wrong location.
How can I fix this? (See What I've tried > PATH for more details)
What I've tried so far
Encoding
I came across the post "Fatal Python error: Py_Initialize: can't initialize sys standard streams LookupError: unknown encoding: 65001", which suggests there might be problems with the encoding. I know that Digital Micrograph internally uses ISO 8859-1. I tried python -X utf8 and python -X utf8=0 (test2.py doesn't care about UTF-8, it is ASCII only) as shown below, but neither of them worked:
command = ['C:\\ProgramData\\Miniconda3\\envs\\legacy\\python.exe',
"-X", "utf8=0",
os.path.join(os.getcwd(), 'test2.py')]
PATH
As far as I can tell, this is the problem. The answer "https://stackoverflow.com/a/31877629/5934316" to the post "PyCharm: Py_Initialize: can't initialize sys standard streams" suggests changing the PYTHONPATH.
So to specify my question:
Is this the way to go?
How can I set the PYTHONPATH for only the subprocess (while executing python with other libraries in the main thread)?
Is there a better way to have two different python versions at the same time?
Thank you for your help.
Background
I am currently writing a program for handling an electron microscope. I need the "environment" (the graphical interface, the help tools, but also hardware access) from Digital Micrograph, so there is no way around using it. And Digital Micrograph only supports Python 3.7.
On the other hand, I need an external module which is only available for Python 3.5.6. There is also no way around this module, since it controls other hardware.
Both rely on Python C modules. Since they are already compiled, there is no way to check whether they work with other versions. They also control highly sensitive apertures whose code one does not want to touch. In short: I need two Python versions in parallel.
I was actually quite close. The problem is that Python imports modules from the wrong location; in my case, modules were imported from another Python installation because of a wrong path. Modifying the PYTHONPATH according to "https://stackoverflow.com/a/4453495/5934316" works for my example.
import os
import subprocess

my_env = os.environ.copy()
my_env["PYTHONHOME"] = "C:\\ProgramData\\Miniconda3\\envs\\legacy"
my_env["PYTHONPATH"] = "C:\\ProgramData\\Miniconda3\\envs\\legacy;"
my_env["PATH"] = my_env["PATH"].replace("C:\\ProgramData\\Miniconda3\\envs\\GMS_VENV_PYTHON",
                                        "C:\\ProgramData\\Miniconda3\\envs\\legacy")

path = os.getcwd()  # directory containing test2.py, as in test.py above
command = ["C:\\ProgramData\\Miniconda3\\envs\\legacy\\python.exe",
           os.path.join(path, "test2.py")]
result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=my_env)
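Because the modified variables are passed via env=my_env, only the child process sees the changed PYTHONHOME, PYTHONPATH and PATH; the environment of the interpreter running test.py (and therefore Digital Micrograph itself) stays untouched, which is exactly the "set the PYTHONPATH for only the subprocess" behaviour asked about above.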
For Digital Micrograph users: the Python environment path is saved in the global tags under "Private:Python:Python Path", so replace the hard-coded GMS_VENV_PYTHON path above with the value read from there:
import DigitalMicrograph as DM
# ...
success, gms_venv = DM.GetPersistentTagGroup().GetTagAsString("Private:Python:Python Path")
if not success:
raise KeyError("Python path is not set.")
my_env["PATH"] = my_env["PATH"].replace(gms_venv, "C:\\ProgramData\\Miniconda3\\envs\\legacy")
I had set "PYTHONPATH" as "D:\ProgramData\Anaconda3" for my python (base) python environment before, but i found when I had changed to another env my python still import basic python package from "D:\ProgramData\Anaconda3",which means it get the wrong basic package with the wrong "System environment variables" config.
so I delete "PYTHONPATH" from my windows "System environment variables", and that will be fixed.

Calling other scripts in PyPI package

I have a Python package that I have uploaded to PyPI. The script calls two additional R scripts to run. I have verified that the required R scripts are also uploaded to PyPI (by physically downloading the latest version and seeing them present in the directory). I can also successfully install and run the main Python script.
However, I am having trouble figuring out how to call the R scripts from within the Python script. That is, what directory structure do I use? Here is the command I use to run it:
$ python_script -f file1.txt -g file2.txt
and I get this error:
Fatal error: cannot open file 'script.r': No such file or directory
In the Python script, here is how I am calling the R script:
cmd = [ 'Rscript', 'python_script/Rscript.r' ]
output = subprocess.Popen(cmd, stderr=subprocess.PIPE).communicate()
result = output[1].decode('utf-8')
But nothing I try works: I've tried just 'Rscript.r' and './Rscript.r'
I'm at a loss as to how to correctly call this script. It is in the same directory as the main python_script I am running.
The path here would be relative to where you're invoking python_script from, but your R scripts exist in a directory relative to where your package has been installed.
You can use __file__ to determine the full path to the file which is being executed. By splitting this, you can get a path to the directory where the package was installed, and then add any additional directories/filenames to get a full path to your R script:
import os
import subprocess

this_dir, this_filename = os.path.split(__file__)
RSCRIPT_PATH = os.path.join(this_dir, "Rscript.r")

cmd = ['Rscript', RSCRIPT_PATH]
output = subprocess.Popen(cmd, stderr=subprocess.PIPE).communicate()
result = output[1].decode('utf-8')
Note: Best practice to ensure cross-platform compatibility here is to use os.path.join('path', 'to', 'file.txt') to generate a path instead of path/to/file.txt, since not all platforms use / as a path separator.
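An equivalent sketch using pathlib, purely as a stylistic alternative to the os.path version above:

from pathlib import Path
import subprocess

# Resolve the R script relative to this module's own location, not the caller's working directory.
rscript_path = Path(__file__).resolve().parent / "Rscript.r"
output = subprocess.Popen(['Rscript', str(rscript_path)], stderr=subprocess.PIPE).communicate()
result = output[1].decode('utf-8')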

Using dlopen to load one .so in Python says it can't find another in the same directory

I connected to another computer yesterday over SSH and tried to load, through Python, an .so file (compiled C). Here is what I got in the CLI:
The file that is being requested (libLMR_Demodulator.so) next to "OSError:" is in the same dir as the file I want to load (libDemodulatorJNI_lmr.so).
The python code (v3.5.2) is the following one:
import ctypes
sh_obj = ctypes.cdll.LoadLibrary('./libLMR_Demodulator.so')
actual_start_frequency = sh_obj.getActualStartFrequency(ctypes.c_long(0))
print('The Current Actual Frequency Is: ' + str(actual_start_frequency))
@Charles Duffy is right. The issue comes from dependencies. You can verify this with the command:
ldd libLMR_Demodulator.so
You have several ways to fix this issue:
Put all the libs in /lib or /usr/lib, or install them directly to your system.
Put the libs' path in /etc/ld.so.conf, then run ldconfig to refresh the cache.
Use LD_LIBRARY_PATH to add the libs' path, then try to run your script:
LD_LIBRARY_PATH=[..path] python [script.py]
or
export LD_LIBRARY_PATH=[..path]
python [script.py]
You can check the manual of dlopen for more details.
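A related trick that sometimes works from inside Python (a sketch, not part of the original answer): load the missing dependency first by an explicit path so it is already in the process when the main library asks for it. Here 'libMissingDep.so' is a hypothetical name standing in for whatever library the OSError reports:

import ctypes

# Hypothetical dependency name; use the file the OSError actually complains about.
ctypes.CDLL('./libMissingDep.so', mode=ctypes.RTLD_GLOBAL)

# Now the main library can be loaded as before.
sh_obj = ctypes.cdll.LoadLibrary('./libLMR_Demodulator.so')

This only helps if the dependency itself can be found by an explicit path; for anything deeper, use LD_LIBRARY_PATH or ldconfig as described above.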
I got here looking for how to ensure that a module / package with a .so file is able to load another .so file that it depends on. Changing the current directory to the location of the first .so file (i.e. the directory where the module is) seems to work for me:
import os,sys,inspect
cwd = os.getcwd()
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
os.chdir(currentdir)
import _myotherlib
os.chdir(cwd) # go back
This might also work for the OP's case?
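If you use this chdir-and-restore pattern in more than one place, wrapping it in a context manager keeps the pair together; a sketch, assuming the same _myotherlib module as above:

import os
import contextlib
import importlib

@contextlib.contextmanager
def pushd(path):
    # Temporarily change the working directory, then restore it even on error.
    prev = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(prev)

with pushd(os.path.dirname(os.path.abspath(__file__))):
    _myotherlib = importlib.import_module("_myotherlib")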

Unpacking PyInstaller packed files

I currently have a PyInstaller-packed ELF file and I'm looking to unpack it into the original .py file(s). I have been using PyInstaller Extractor, but it reports that the archive is not a PyInstaller archive.
Here is an example of what I've been doing:
$ cat main.py
#! /usr/bin/python3
print ("Hello %s" % ("World"))
I pack it in the file dist/main/main with the command:
pyinstaller main.py
Which outputs the file:
$ file dist/main/main
dist/main/main: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=373ec5dee826653796e927ac3d65c9a8ec7db9da, stripped
Now, when I want to unpack it:
$ python pyinstxtractor.py dist/main/main
[*] Processing dist/main/main
[*] Error : Unsupported pyinstaller version or not a pyinstaller archive
I don't understand why the file cannot be unpacked; I've been looking through many posts saying this should be possible, and I'm beginning to doubt it.
Is unpacking the ELF file actually possible?
Am I doing it the right way?
According to the GitHub page, this script is applicable only to Windows binaries. There is an archive_viewer.py script distributed with PyInstaller itself that lets you view the binary's contents and extract them. If you get a .pyz file after extraction, use archive_viewer.py on it again. IIRC, in the end you will get .pyc files, which have to be decompiled.
On my system (Manjaro Linux) I found this script at /lib/python3.6/site-packages/PyInstaller/utils/cliutils.
It is also available as pyi-archive_viewer (at /usr/bin/pyi-archive_viewer) after installing PyInstaller into the global interpreter.
Using the pyi-archive_viewer CLI seems to be the supported approach; the flags below print only the module names, recursively, and quit instead of prompting:
$ pyi-archive_viewer --log --recursive --brief build/PYZ-00.pyz
['__future__',
'_aix_support',
---SNIP---
'zipfile',
'zipimport']
But if you don't want to parse or unsafely eval() the CLI output, it seems to work to use the library directly:
from PyInstaller.utils.cliutils import archive_viewer
archive = archive_viewer.get_archive('build/PYZ-00.pyz')
output = []
archive_viewer.get_content(archive, recursive=True, brief=True, output=output)
# Now, output is ['__future__', '_aix_support', ---SNIP--- 'zipfile', 'zipimport']
This use of the library is undocumented, but it is essentially the same as what the CLI does when given those flags.
