I tried executing a python command using ksh in a SAP BODS script to run a program called "zzz.py" on the BODS server:
print(exec('ksh', '-c "python --version"', 8));
print(exec('ksh', '-c "python zzz.py"', 8));
However, upon executing the script, I got the following output:
3850 2990602048 PRINTFN 11/2/2017 4:26:17 PM 0: Python 2.7.9
3850 2990602048 PRINTFN 11/2/2017 4:26:17 PM 1: Could not find platform independent libraries <prefix> Could not find platform dependent libraries <exec_prefix>
3850 2990602048 PRINTFN 11/2/2017 4:26:17 PM Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>] ImportError: No module named site
I then added export PYTHONHOME=/usr/bin/python, but when I ran the printenv command, the PYTHONHOME variable was not shown.
I went ahead and used SSH (via PuTTY) to access the server, and executing the command there works perfectly. However, running python --version shows version 2.7.5, as opposed to the one shown in BODS. I tried setting the PYTHONHOME path there as well, but it did not help in BODS; instead, I could no longer run the python command in my SSH session, so I unset it and the SSH session works normally again.
Can anyone help with this? Thanks!
Managed to solve this:
When executing from BODS, a different user is used (as opposed to root, which was used for SSH). I had to set export LD_LIBRARY_PATH=/usr/local/lib before executing python, and it works.
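For reference, the working call ends up looking roughly like this, using the same exec() pattern as above (a sketch; the exact quoting may need adjusting for your BODS version, and the second line is just an optional diagnostic):
print(exec('ksh', '-c "export LD_LIBRARY_PATH=/usr/local/lib; python zzz.py"', 8));
# optional: compare the job server's user and environment with the SSH session
print(exec('ksh', '-c "id; which python; printenv LD_LIBRARY_PATH"', 8));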
I am trying to run some code on a server. On that server, we use docker images to create notebooks inside directories, with commands like:
docker run -it --gpus "device=1" -p 8886:8886 -v /folder/directory:/workspace/work --name container-name --shm-size=60g --ulimit memlock=-1 --ulimit stack=67108864 --rm imageid jupyter notebook --port=8886 --ip=0.0.0.0 --allow-root --no-browser
Once the notebook is created from an image, there are two environments with two different Python versions in the folder, designed to execute the code inside /folder/directory: venv3.6 and venv3.7.
Even though I didn't create them, I am confident that the environments worked at some point (there are checkpoints obtained from the execution of the code by a colleague who worked on it before me). However, something must have broken at some point, perhaps after some modifications to the libraries of the docker image.
The problem is that whenever I activate venv3.7 with source ./venv3.7/bin/activate and run a script with python script_name.py, the Python version that is executed is not 3.7 but 3.6.10. Inside /venv3.7/bin, the python, python3 and python3.7 files cannot be accessed, moved or executed (e.g., entering /venv3.7/bin/python3.7 in the terminal gives a "file not found" error).
When the environment is activated:
root@XXXX:/workspace/work/path# which python
/opt/conda/bin/python
root@XXXX:/workspace/work/path# source ./venv3.7/bin/activate
(venv3.7) root@XXXX:/workspace/work/path#
Following this Stack Overflow post, I made the following checks:
(venv3.7) root@XXXX:/workspace/work/path# python -V
Python 3.6.10 :: Anaconda, Inc.
(venv3.7) root@XXXX:/workspace/work/path# echo $PATH
/workspace/work/path/venv3.7/bin:/usr/local/nvm/versions/node/v15.0.1/bin:/opt/conda/bin:/opt/cmake-3.14.6-Linux-x86_64/bin/:/usr/local/mpi/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/ucx/bin:/opt/tensorrt/bin
(venv3.7) root@XXXX:/workspace/work/path# which python
/opt/conda/bin/python
(venv3.7) root@XXXX:/workspace/work/path# alias
alias egrep='egrep --color=auto'
alias fgrep='fgrep --color=auto'
alias grep='grep --color=auto'
alias l='ls -CF'
alias la='ls -A'
alias ll='ls -alF'
alias ls='ls --color=auto'
This shows that the path is added correctly and that there is no alias for python that could be interfering with the activation; still, the python command uses the version from /opt/conda/bin/python instead of /workspace/work/path/venv3.7/bin.
I have also checked that the VIRTUAL_ENV path in the activate script (venv3.7/bin/activate) is correct.
I also noticed that the file /venv3.7/pyvenv.cfg contains:
home = /usr/bin
include-system-site-packages = false
version = 3.7.5
When I go to the directory /usr/bin, which should contain the Python the environment is based on, it only has python2.7 files. Could that mean that when the first directory in $PATH is searched, no valid version of Python is found?
My guess is that the python files (python, python3, python3.7) were symlinks that broke because the Python version in /usr/bin changed. However, I don't want to risk updating the Python version in that directory, because it would probably change the default python in /opt/conda/bin/python instead, and I don't know much about docker images. Do you think it would work? If so, how would I do it?
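A quick, non-destructive way to confirm that guess is to look at what the links point to (a sketch; paths as in the question):
ls -l /workspace/work/path/venv3.7/bin/python*    # shows each symlink and its target
file /workspace/work/path/venv3.7/bin/python3.7   # reports "broken symbolic link" if the target is missing
Since which skips entries it cannot execute, a broken symlink at the front of $PATH is silently passed over, which would explain why the lookup falls through to /opt/conda/bin/python.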
As additional info, the Python files inside venv3.6/bin seem to work fine (they can be executed and copied), but maybe that is because /venv3.6/pyvenv.cfg points to the default python (/opt/conda/bin/python) instead. Also, after asking the original creator of the code, she doesn't know how to solve this issue either.
I need the environment to work, and recreating it is problematic, since many libraries were downloaded from different places (it was delicate work).
What do you suggest?
EDIT
I have tried recreating the environment with the Python version I need (3.7.5). Do you know an easy way to install the same libraries as in the other environment, considering that I can't activate it?
I was thinking of using the library folders located in /venv3.7/lib, but it is not straightforward. Any idea how to do it?
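One possible way to recover the package list from the dead environment is to read the *.dist-info directory names inside its site-packages (a rough sketch; the exact site-packages path is an assumption, packages recorded as .egg-info or installed from source won't be captured, and the resulting list should be reviewed by hand):
ls /workspace/work/path/venv3.7/lib/python3.7/site-packages \
  | grep '\.dist-info$' \
  | sed -E 's/^(.+)-([^-]+)\.dist-info$/\1==\2/' > requirements.txt
# then, inside a freshly created 3.7 environment:
# pip install -r requirements.txt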
Also, would you recommend creating the new environment with virtualenv (to have a separate Python version) or, rather, with anaconda?
Thank you so much for reading!
After checking the python3.7 file in the environment:
root@XXXX:/# cd workspace/work/path/venv3.7/bin
root@XXXX:/workspace/work/path/venv3.7/bin# stat python
File: python -> python3.7
Size: 9 Blocks: 0 IO Block: 4096 symbolic link
Device: XXXX Inode: XXXX Links: 1
Access: (XXXX) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2022-12-06 10:31:18.165001523 +0000
Modify: 2022-05-20 12:28:37.481538688 +0000
Change: 2022-05-20 12:28:37.481538688 +0000
Birth: -
root@XXXX:/workspace/work/path/venv3.7/bin# stat python3.7
File: python3.7 -> /usr/bin/python3.7
Size: 18 Blocks: 0 IO Block: 4096 symbolic link
Device: XXXX Inode: XXXX Links: 1
Access: (XXXX) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2022-12-06 10:31:18.165001523 +0000
Modify: 2022-05-20 12:28:37.481538688 +0000
Change: 2022-05-20 12:28:37.481538688 +0000
Birth: -
It became obvious that, as stated in the post, /usr/bin is the directory where python3.7 should be installed, which means the problem could be solved by installing it in that folder.
As I didn't know that was the default folder for the Python installation, I first tried installing Python from source as described in several guides. However, even though the environment then started picking up python3.7 from that folder, that installation didn't work either.
So I just tried apt-get install python3.7. It took around 10 seconds and, when I tried the code again, it worked!
Next time your environment fails because the wrong Python version is executed, and the aliases and $PATH are right (see this post for more details), just remember to check where the Python files in the environment point to and verify that the Python installation they point to is actually there!
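In command form, the checks boil down to something like this (a sketch; adjust the paths to your environment):
readlink -f venv3.7/bin/python    # follow the symlink chain to its final target
ls -l /usr/bin/python3.7          # confirm that the target interpreter actually exists
venv3.7/bin/python -V             # should now report the expected version (3.7.5 here)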
I hope this is useful for you.
I'm using macOS Mojave (10.14.6, latest version with all updates) and I'm running Gimp 2.10. I made a Python plugin that works great from the command-line terminal, but I can't get it to work from a shell script. The plugin opens an XCF template, adds an external JPG as a new layer, positions the external JPG using x and y offsets, flattens, and then exports a new JPG that shows the external JPG in the template.
Background info:
In OSX just typing "gimp" at a terminal did not launch Gimp, so I created an alias in the .bash_profile in my home directory (/Users/TimB) using the steps described here: https://mattmazur.com/2012/01/27/how-to-add-terminal-aliases-in-mac-os-x-lion/. The alias reads as follows:
alias gimp="/Applications/GIMP-2.10.app/Contents/MacOS/gimp"
My alias works great from the command line terminal, but not from a shell script.
In my shell script, just putting "gimp" on a line does not launch Gimp, so I believe my alias is not recognized in the script. Therefore, to launch Gimp along with my Python command-line arguments I do this:
/applications/gimp-2.10.app/contents/macos/gimp -idf --batch-interpreter python-fu-eval -b "import sys;sys.path=['.']+sys.path;import OAFE_PARAM;OAFE_PARAM.open_add_flatten_export('/Users/TimB/Desktop/xcftemplate.xcf', '/Users/TimB/Desktop/jpg_to_add.jpg', 2060, 410, '/Users/TimB/Desktop/')" -b "pdb.gimp_quit(1)"
This does not work. My command line arguments are ignored, and I see a message "GIMP-Warning: The batch interpreter 'python-fu-eval' is not available. Batch mode disabled."
To troubleshoot I tried just launching the Gimp UI from a shell script and even that doesn't work properly. It loads strangely with broken icons (see image below). Any ideas what I can do to fix this? Am I doing something wrong in how I'm trying to launch Gimp?
Gimp UI screenshot
Here's my script to launch Gimp that fails:
#!/bin/bash
arg1="/Users/TimB/Desktop/xcf_template.xcf" #XCF file to open
arg2="/Users/TimB/Desktop/jpg_to_add.jpg" # JPG to insert
arg3=2060 # x_offset
arg4=410 # y_offset
arg5="/Users/TimB/desktop/" # save location
echo
echo "arg1 is" $arg1
echo "arg2 is" $arg2
echo "arg3 is" $arg3
echo "arg4 is" $arg4
echo "arg5 is" $arg5
/applications/gimp-2.10.app/contents/macos/gimp -idf --batch-interpreter python-fu-eval -b "import sys;sys.path=['.']+sys.path;import OAFE_PARAM;OAFE_PARAM.open_add_flatten_export('/Users/TimB/Desktop/xcftemplate.xcf', '/Users/TimB/Desktop/jpg_to_add.jpg', 2060, 410, '/Users/TimB/Desktop/')" -b "pdb.gimp_quit(1)"
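For what it's worth, aliases defined in ~/.bash_profile aren't available inside a #!/bin/bash script (the script runs a non-interactive shell that doesn't read that file or expand aliases); one way around that is to keep the full path in a variable instead (a minimal sketch, using the path from my alias above):
#!/bin/bash
# aliases are not expanded in non-interactive shells, so call the binary by its full path
GIMP="/Applications/GIMP-2.10.app/Contents/MacOS/gimp"
"$GIMP" --version    # quick sanity check that the binary is reachable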
UPDATE (3-27-2021):
I followed Mark's suggestion by removing my alias, ensuring that /usr/local/bin is on my PATH, and creating a symlink with sudo ln -s /applications/gimp-2.10.app/contents/macos/gimp /usr/local/bin/gimp. That still lets me launch Gimp by typing "gimp" in the terminal, but it still fails from within a shell script.
The shell script is very simple:
#!/bin/bash
gimp
Here's the error I get:
../../../../gtk/source/babl-0.1.78/babl/babl-internal.h:214 void babl_log(const char *, ...)()
WARNING: the babl installation seems broken, no extensions found in queried
BABL_PATH (/Users/distiller/gtk/inst/lib/babl-0.1) this means no SIMD/instructions/special case fast paths and
only slow reference conversions are available, applications might still
run but software relying on babl for conversions will be slow
2021-03-27 11:27:22.180 gimp[85955:1451980] *** WARNING: Method userSpaceScaleFactor in class NSView is deprecated on 10.7 and later. It should not be used in new applications. Use convertRectToBacking: instead.
Cannot spawn a message bus without a machine-id: Unable to load /var/lib/dbus/machine-id or /etc/machine-id: Failed to open file “/var/lib/dbus/machine-id”: No such file or directory
../../../../gtk/source/babl-0.1.78/babl/babl-internal.h:222 void babl_fatal(const char *, ...)()
const Babl *babl_format(const char *)("CIE Lab double"): not found
sh: gdb: command not found
I got the same error today on macOS 12.4, Monterey. Gimp and babl were installed through brew. Even when I run it as:
BABL_PATH=/usr/local/lib/babl-0.1 gimp
it still fails.
../babl/babl-internal.h:222 void babl_fatal(const char *, ...)() const Babl *babl_model(const char *)("RGBA"): you must call babl_init first
Unable to find Mach task port for process-id 17902: (os/kern) failure (0x5).
(please check gdb is codesigned - see taskgated(8))
/tmp/babl.gdb:1: Error in sourced command file:
No stack.
The gdb error is not related; I just haven't signed it yet. If I start Gimp by clicking on the icon, it works, but not from the command line.
I'm trying to execute a Python script call from PHP with the command below:
$output = shell_exec('python /var/www/html/sna/server/userManagement.py '. $user.' '. $pass .' \''.$action.'\' 2>&1');
But when I execute it I get this
sh: 1: python: not found
But python is correctly installed in my env.
If I type
type -a python
I get the paths of python in this env, as below (not sure why there are two):
python is /home/leonardo/miniconda2/bin/python
python is /home/leonardo/miniconda2/envs/sna/bin/python
At the very beginning of the Python script I have included
#! /usr/bin/env python
But I always receive the same error. How can I solve this?
EDIT
I tried to add the python path to $PATH with the command
export $PATH:/home/leonardo/miniconda2/envs/sna/bin/python
But I get the same error anyway.
Your binary is not in the PATH for the webserver's user account.
PHP inherits the PATH from Apache, and most distros set a fairly restrained default:
SetEnv PATH /bin:/usr/bin
Either change that, use putenv() in PHP, or use absolute paths instead.
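For example, the quickest route here is usually the absolute path, reusing the paths already shown in the question (a sketch):
command -v python    # e.g. /home/leonardo/miniconda2/envs/sna/bin/python
# then hard-code that absolute path in place of the bare "python", e.g.
# shell_exec('/home/leonardo/miniconda2/envs/sna/bin/python /var/www/html/sna/server/userManagement.py ...');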
I have a simple Python script which I want to start as a daemon service in the background in a docker container:
/sbin/start-stop-daemon --start --user root --make-pidfile --pidfile /var/lock/subsys/my-application.pid --exec 'python /opt/app/uc/monitor/bin/my-application.py'
when I execute this command in a shell I get
/sbin/start-stop-daemon: unable to stat //python /opt/app/uc/monitor/bin/my-application.py (No such file or directory)
However when execute just the below command in shell it works
python /opt/app/uc/monitor/bin/my-application.py
I'm sure python is installed and all the links have been set up.
Thanks for the help
That error message implies that start-stop-daemon is looking for a file to open (the stat operation is a check before it opens the file) and is treating your whole 'python ...' argument as if it were a single file name.
See this example which confirms this. You may need to read the man page for start-stop-daemon, for your Ubuntu version, to check what a valid command would be for your setup.
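For instance, on Debian-derived images one form that is commonly accepted is to point --exec at the interpreter itself and pass the script as trailing arguments after -- (a sketch, not verified on your image; confirm the interpreter path with which python first):
/sbin/start-stop-daemon --start --user root \
  --make-pidfile --pidfile /var/lock/subsys/my-application.pid \
  --exec /usr/bin/python -- /opt/app/uc/monitor/bin/my-application.py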
The simplest solution is probably to create a shell script (say /opt/app/uc/monitor/bin/run-my-application.sh) and put this into it:
#!/bin/bash
python /opt/app/uc/monitor/bin/my-application.py
Be sure to do chmod +x on this file. If python is not found, use which python to find the path to python and use that in the script.
Now try:
/sbin/start-stop-daemon --start --user root --make-pidfile --pidfile /var/lock/subsys/my-application.pid --exec '/opt/app/uc/monitor/bin/run-my-application.sh'
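If it starts cleanly, the pidfile named in the command gives a quick way to verify the daemon (same paths as above):
cat /var/lock/subsys/my-application.pid              # PID written by --make-pidfile
ps -p "$(cat /var/lock/subsys/my-application.pid)"   # confirm the process is actually running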
I am trying to compile and execute my homework .cpp file using the Python script that our lecturer gave us. The how-to-manual.pdf he sent says to use:
c:\>python ./submit.pyc problemID -u username -p password -b //submit.pyc is already given to us
and here is the manifest.txt we are given:
[main]
problem = gc
build =
g++ main.cpp -o solver
run =
./solver %f
my cpp file works normally like this:
./solver input_file
However, I have to do this under Windows. I have Python 2.7.x installed and python.exe is on the command PATH. I can't run it on the Linux SSH system because Python 2.4.x is installed there and I can't touch it (it's the school's system).
Anyway, when I execute the line above, it returns:
Command execution failed:
g++ solver.cpp -o solver
I think I've told you everything I can. So, any idea what else I have to do, apart from asking the lecturer? :)
For the above to work, it needs to be able to find g++, so you need to add the directory it resides in to the PATH environment variable. This can be done from within your Python script or on the command line with:
path=Where\g++\lives;%path%
This will only apply within the current DOS session.
Or you can add it permanently through System Settings -> Advanced Settings -> Environment Variables.
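Either way, a quick sanity check before re-running the submit script is to confirm the compiler is reachable from that same session (assuming a MinGW-style g++ on Windows):
g++ --version    # should print the compiler's version once its directory is on PATH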
You could also look at using a Python virtual environment on the school's Linux system.