I'm trying to create a virtualenv for the nodepool user using Ansible, but it is failing as outlined below. I want to become the nodepool user because it uses Python 3.5, whereas all other users use the server default, 2.7.5. It seems that Ansible cannot source the 3.5 version.
The play is:
- name: Create nodepool venv
  become: true
  become_user: nodepool
  become_method: su
  command: virtualenv-3.5 /var/lib/nodepool/npvenv
The error is:
fatal: [ca-o3lscizuul]: FAILED! => {"changed": false, "cmd": "virtualenv-3.5 /var/lib/nodepool/npvenv", "failed": true, "msg": "[Errno 2] No such file or directory", "rc": 2}
It works from shell.
[root@host ~]# su nodepool
[nodepool@host root]$ virtualenv-3.5 /var/lib/nodepool/npvenv
Using base prefix '/opt/rh/rh-python35/root/usr'
New python executable in /var/lib/nodepool/npvenv/bin/python3
Also creating executable in /var/lib/nodepool/npvenv/bin/python
Installing setuptools, pip, wheel...done.
I worked around the issue as follows:
shell: source /var/lib/nodepool/.bashrc && virtualenv-3.5 /var/lib/nodepool/npvenv creates="/var/lib/nodepool/npvenv"
It is not how I'd like to do it, but it will do. If anyone knows how I might do it as originally posted, please advise. Perhaps it's not possible, since the command module doesn't pick up paths etc.
I threw in the creates option as it prevents redoing the venv if it already exists.
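For reference, the workaround expressed as a complete task looks roughly like this (a sketch assembled from the play above; only the shell line and the creates argument differ from the original):
- name: Create nodepool venv
  become: true
  become_user: nodepool
  become_method: su
  shell: source /var/lib/nodepool/.bashrc && virtualenv-3.5 /var/lib/nodepool/npvenv
  args:
    creates: /var/lib/nodepool/npvenv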
Related
I have a problem trying to ping two machines using Ansible; one is Fedora 35 and the second is Ubuntu 21.
when I run
ansible all -i inventory -m ping -u salam -k
I get the following warnings
[WARNING]: Unhandled error in Python interpreter discovery for host myubuntuIP: unexpected output from Python interpreter discovery
[WARNING]: sftp transfer mechanism failed on [myubuntuIP]. Use ANSIBLE_DEBUG=1 to see detailed information
[WARNING]: scp transfer mechanism failed on [myubuntuIP]. Use ANSIBLE_DEBUG=1 to see detailed information
myubuntuIP | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
[WARNING]: Platform unknown on host myfedoraIP is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change the meaning of that path. See https://docs.ansible.com/ansible-core/2.14/reference_appendices/interpreter_discovery.html for more information.
myfedoraIP | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
When I run
which python3
on both machines, I get two different paths: /usr/bin/python3 on the Fedora box and /bin/python3 on the Ubuntu box.
I understand from one thread here that we should indicate the path of Python in the ansible.cfg file. Can I indicate two different paths in ansible.cfg? If so, how? And why is Ansible not able to find the Python path?
First, the error on your Ubuntu system appears unrelated to this question; it says:
[WARNING]: sftp transfer mechanism failed on [myubuntuIP]
[WARNING]: scp transfer mechanism failed on [myubuntuIP]
I suspect that to diagnose that issue you'll need to follow the instructions in the error message (set ANSIBLE_DEBUG=1), and if the cause isn't immediately obvious, open a new question here for that particular issue.
I understand from one thread here that we should indicate the path of Python in the ansible.cfg file. Can I indicate two different paths in ansible.cfg? If so, how?
You don't set this in your ansible.cfg (unless you really do want a single setting for all your hosts); you set this in your Ansible inventory or in your host_vars or group_vars directory. For example, to set this on a specific host in your inventory, you might do something like this:
all:
  hosts:
    host1:
      ansible_python_interpreter: /usr/bin/python3
    host2:
    host3:
You could accomplish the same thing by placing:
ansible_python_interpreter: /usr/bin/python3
in host_vars/host1.yaml.
If the same configuration applies to more than one host, you can group them and then apply the setting as a group variable. For example, to apply the setting only to a subset of your hosts:
all:
  hosts:
    host1:
  children:
    fedora_hosts:
      vars:
        ansible_python_interpreter: /usr/bin/python3
      hosts:
        host2:
        host3:
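Equivalently, you could put the variable in a group_vars file named after the group (group_vars/fedora_hosts.yaml, mirroring the hypothetical group above):
ansible_python_interpreter: /usr/bin/python3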
Or to apply it globally:
all:
  vars:
    ansible_python_interpreter: /usr/bin/python3
  hosts:
    host1:
    host2:
    host3:
And why is Ansible not able to find the Python path?
That's not what the warning is telling you -- it was able to find the Python path (/usr/bin/python), but "future installation of another Python interpreter could change the meaning of that path" (because /usr/bin/python, depending on your distribution, could actually be Python 2 instead of Python 3, etc.).
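If you really did want a single global setting for every host, it could go in ansible.cfg instead; a minimal sketch:
[defaults]
interpreter_python = /usr/bin/python3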
I am trying to deploy the below task via Ansible AWX (Tower) and having some issues with the aws_s3 module.
---
- hosts: all
  become: yes
  tasks:
    - name: Setting host facts for Python interpreter
      set_fact:
        ansible_python_interpreter: "/usr/bin/python3"

    - name: 01 - Download file locally
      aws_s3:
        bucket: temp-buck-0001
        object: /test/quiz.sh
        dest: /tmp/quiz.sh
        mode: get

    - name: 02 - Change the file permissions of the shell script to allow execute
      file:
        path: /tmp/quiz.sh
        mode: "u+x"

    - name: 03 - Change the working directory to tmp before executing the command
      shell: ./quiz.sh >> quizlog.txt
      args:
        chdir: /tmp
I'm getting the below error when trying to deploy the above Ansible play. It seems to have a problem with Python dependencies, in particular boto3 and botocore, so I attempted to install these manually for testing purposes.
However, I'm receiving the below errors. They concern dependencies for the aws_s3 module. I'm not sure if I have set up my Python interpreter correctly. Any help would be much appreciated.
Also, if anybody could suggest how to write a task to install Python 3.x and the dependencies required for the aws_s3 module, that would also be much appreciated.
TASK [01 - Download Metricbeats file locally] **********************************
fatal: [Dev-02]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 20.10.12.114 closed.\r\n", "module_stdout": "/bin/sh: /usr/bin/python3: No such file or directory\r\n", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
and the below output:
{
    "module_stdout": "/bin/sh: /usr/bin/python3: No such file or directory\r\n",
    "module_stderr": "Shared connection to 20.10.12.114 closed.\r\n",
    "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error",
    "rc": 127,
    "_ansible_no_log": false,
    "changed": false
}
For context, this is the version of Python I have installed:
[ec2-user@ip-20-10-12-114 ~]$ pip --version
pip 21.0.1 from /usr/lib/python3.6/site-packages/pip (python 3.6)
[ec2-user@ip-20-10-12-114 ~]$ which pip
/usr/bin/pip
[ec2-user@ip-20-10-12-114 ~]$ python --version
Python 2.7.18
[ec2-user@ip-20-10-12-114 ~]$ which python
/usr/bin/python
[ec2-user@ip-20-10-12-114 ~]$ python3 --version
Python 3.6.2
[ec2-user@ip-20-10-12-114 ~]$ which python3
/usr/bin/python3
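Regarding the last part of the question (a task to install Python 3.x and the aws_s3 dependencies), a rough sketch for a yum-based host might look like this; the package names and pip options below are assumptions and vary by distribution:
- name: Install Python 3 and pip (assumed package names)
  yum:
    name:
      - python3
      - python3-pip
    state: present
- name: Install boto3 and botocore for the aws_s3 module
  pip:
    name:
      - boto3
      - botocore
    executable: pip3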
I am trying to setup a project that uses the shiny new Jenkins pipelines, more specifically a multibranch project.
I have a Jenkinsfile created in a test branch as below:
node {
    stage 'Preparing VirtualEnv'
    if (!fileExists('.env')) {
        echo 'Creating virtualenv ...'
        sh 'virtualenv --no-site-packages .env'
    }
    sh '. .env/bin/activate'
    sh 'ls -all'
    if (fileExists('requirements/preinstall.txt')) {
        sh 'pip install -r requirements/preinstall.txt'
    }
    sh 'pip install -r requirements/test.txt'

    stage 'Unittests'
    sh './manage.py test --noinput'
}
It's worth noting that preinstall.txt will update pip.
I am getting an error as below:
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/pip'
It looks like it's trying to update pip in the global environment instead of inside the virtualenv, and it looks like each sh step runs in its own context. How do I make them execute within the same context?
What you are trying to do will not work. Every time you call the sh command, Jenkins will create a new shell.
This means that if you use .env/bin/activate in an sh step, it will only be sourced in that shell session. The result is that in a new sh command you have to source the file again (if you take a closer look at the console output, you will see that Jenkins actually creates temporary shell scripts each time you run a command).
So you should either source the .env/bin/activate file at the beginning of each shell command (you can use triple quotes for multiline strings), like so:
if (fileExists('requirements/preinstall.txt')) {
    sh """
        . .env/bin/activate
        pip install -r requirements/preinstall.txt
    """
}
...
sh """
    . .env/bin/activate
    pip install -r requirements/test.txt
"""
}
stage("Unittests") {
    sh """
        . .env/bin/activate
        ./manage.py test --noinput
    """
}
or run it all in one shell:
sh """
    . .env/bin/activate
    if [[ -f requirements/preinstall.txt ]]; then
        pip install -r requirements/preinstall.txt
    fi
    pip install -r requirements/test.txt
    ./manage.py test --noinput
"""
Like Rik posted, virtualenvs don't work well within the Jenkins Pipeline Environment, since a new shell is created for each command.
I created a plugin that makes this process a little less painful, which can be found here: https://wiki.jenkins.io/display/JENKINS/Pyenv+Pipeline+Plugin. It essentially just wraps each call in a way that activates the virtualenv prior to running the command. This in itself is tricky, as some methods of running multiple commands inline are split into two separate commands by Jenkins, causing the activated virtualenv no longer to apply.
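If I remember correctly, usage of the plugin looks roughly like this (a sketch from memory; the withPythonEnv step name and its argument should be checked against the plugin page):
withPythonEnv('python3') {
    sh 'pip install -r requirements/test.txt'
    sh './manage.py test --noinput'
}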
I'm new to Jenkins files. Here's how I've been working around the virtual environment issue. (I'm running Python3, Jenkins 2.73.1)
Caveat: Just to be clear, I'm not saying this is a good way to solve the problem, nor have I tested this enough to stand behind this approach, but here is what is working for me today:
I've been playing around with bypassing the venv 'activate' by calling the virtual environment's python interpreter directly. So instead of:
source ~/venv/bin/activate
one can use:
~/venv/bin/python3 my_script.py
I pass the path to my virtual environment's Python interpreter via the shell's rc file (in my case, ~/.bashrc). In theory, every shell Jenkins calls should read this resource file. In practice, I must restart Jenkins after making changes to the shell resource file.
HOME_DIR=~
export VENV_PATH="$HOME_DIR/venvs/my_venv"
export PYTHON_INTERPRETER="${VENV_PATH}/bin/python3"
My Jenkinsfile looks similar to this:
pipeline {
    agent {
        label 'my_slave'
    }
    stages {
        stage('Stage1') {
            steps {
                // sh 'echo $PYTHON_INTERPRETER'
                // sh 'env | sort'
                sh "$PYTHON_INTERPRETER my_script.py"
            }
        }
    }
}
So when the pipeline is run, the sh steps have the $PYTHON_INTERPRETER environment variable set.
Note that one shortcoming of this approach is that the Jenkinsfile no longer contains all the information necessary to run the script correctly. Hopefully this will get you off the ground.
I am trying to provision a CoreOS box using Ansible. First I bootstrapped the box using https://github.com/defunctzombie/ansible-coreos-bootstrap
This seems to work, but pip (located in /home/core/bin) is not added to the path. In the next step I am trying to run a task that installs docker-py:
- name: Install docker-py
  pip: name=docker-py
As pip's folder is not in the path, I added it using Ansible:
environment:
  PATH: /home/core/bin:$PATH
If I am trying to execute this task I get the following error:
fatal: [192.168.0.160]: FAILED! => {"changed": false, "cmd": "/home/core/bin/pip install docker-py", "failed": true, "msg": "\n:stderr: /home/core/bin/pip: line 2: basename: command not found\n/home/core/bin/pip: line 2: /root/pypy/bin/: No such file or directory\n"}
What I'm asking is: where does /root/pypy/bin/ come from? It seems this is the problem. Any ideas?
You can't use shell-style variable expansion when setting Ansible variables. In this statement...
environment:
  PATH: /home/core/bin:$PATH
...you are setting your PATH environment variable to the literal value /home/core/bin:$PATH. In other words, you are blowing away any existing value of $PATH, which is why you're getting "command not found" errors for basic things like basename.
Consider installing pip somewhere in your existing $PATH, modifying $PATH before calling Ansible, or calling pip from a shell script:
- name: install something with pip
  shell: |
    PATH="/home/core/bin:$PATH"
    pip install some_module
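Another way to keep the existing remote PATH, not from the original answer but a common pattern, is to let Ansible expand it from gathered facts (this assumes gather_facts is enabled so that ansible_env is populated):
- name: Install docker-py
  pip:
    name: docker-py
  environment:
    PATH: "/home/core/bin:{{ ansible_env.PATH }}"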
The problem lies in the /home/core/bin/pip script, which is literally:
#!/bin/bash
LD_LIBRARY_PATH=$HOME/pypy/lib:$LD_LIBRARY_PATH $HOME/pypy/bin/$(basename $0) $@
When run as root by Ansible, the $HOME variable expands to /root and not to /home/core.
Replace $HOME with /home/core and it should work.
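For illustration, the adjusted wrapper might look like this (a sketch; the pypy paths under /home/core are assumed from the error output):
#!/bin/bash
# Hard-code the core user's home instead of relying on $HOME,
# which expands to /root when Ansible runs the task as root.
LD_LIBRARY_PATH=/home/core/pypy/lib:$LD_LIBRARY_PATH /home/core/pypy/bin/$(basename $0) "$@"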
I am installing Python 2.7 on CentOS 5. I built and installed Python as follows
./configure --enable-shared --prefix=/usr/local
make
make install
When I try to run /usr/local/bin/python, I get this error message
/usr/local/bin/python: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory
When I run ldd on /usr/local/bin/python, I get
ldd /usr/local/bin/python
libpython2.7.so.1.0 => not found
libpthread.so.0 => /lib64/libpthread.so.0 (0x00000030e9a00000)
libdl.so.2 => /lib64/libdl.so.2 (0x00000030e9200000)
libutil.so.1 => /lib64/libutil.so.1 (0x00000030fa200000)
libm.so.6 => /lib64/libm.so.6 (0x00000030e9600000)
libc.so.6 => /lib64/libc.so.6 (0x00000030e8e00000)
/lib64/ld-linux-x86-64.so.2 (0x00000030e8a00000)
How do I tell Python where to find libpython?
Try the following:
LD_LIBRARY_PATH=/usr/local/lib /usr/local/bin/python
Replace /usr/local/lib with the folder where you have installed libpython2.7.so.1.0 if it is not in /usr/local/lib.
If this works and you want to make the changes permanent, you have two options:
1. Add export LD_LIBRARY_PATH=/usr/local/lib to your .profile in your home directory (this works only if you are using a shell which loads this file when a new shell instance is started). This setting will affect your user only.
2. Add /usr/local/lib to /etc/ld.so.conf and run ldconfig. This is a system-wide setting, of course.
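For example, the second option boils down to something like this (a sketch; adjust the path if libpython was installed elsewhere):
echo '/usr/local/lib' | sudo tee -a /etc/ld.so.conf
sudo ldconfig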
Putting on my gravedigger hat...
The best way I've found to address this is at compile time. Since you're the one setting the prefix anyway, you might as well tell the executable explicitly where to find its shared libraries. Unlike OpenSSL and other software packages, Python doesn't give you nice configure directives to handle alternate library paths (not everyone is root, you know...). In the simplest case, all you need is the following:
./configure --enable-shared \
--prefix=/usr/local \
LDFLAGS="-Wl,--rpath=/usr/local/lib"
Or if you prefer the non-linux version:
./configure --enable-shared \
--prefix=/usr/local \
LDFLAGS="-R/usr/local/lib"
The "rpath" flag tells python it has runtime libraries it needs in that particular path. You can take this idea further to handle dependencies installed to a different location than the standard system locations. For example, on my systems since I don't have root access and need to make almost completely self-contained Python installs, my configure line looks like this:
./configure --enable-shared \
--with-system-ffi \
--with-system-expat \
--enable-unicode=ucs4 \
--prefix=/apps/python-${PYTHON_VERSION} \
LDFLAGS="-L/apps/python-${PYTHON_VERSION}/extlib/lib -Wl,--rpath=/apps/python-${PYTHON_VERSION}/lib -Wl,--rpath=/apps/python-${PYTHON_VERSION}/extlib/lib" \
CPPFLAGS="-I/apps/python-${PYTHON_VERSION}/extlib/include"
In this case I am compiling the libraries that python uses (like ffi, readline, etc) into an extlib directory within the python directory tree itself. This way I can tar the python-${PYTHON_VERSION} directory and land it anywhere and it will "work" (provided you don't run into libc or libm conflicts). This also helps when trying to run multiple versions of Python on the same box, as you don't need to keep changing your LD_LIBRARY_PATH or worry about picking up the wrong version of the Python library.
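To confirm that the rpath actually made it into the binary, you can inspect its dynamic section after the build; a quick check (readelf ships with binutils):
readelf -d /usr/local/bin/python | grep -iE 'rpath|runpath'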
Edit: I forgot to mention that the compile will complain and fail to build some modules if you don't set the PYTHONPATH environment variable to the prefix you use. For example, to extend the example above, set PYTHONPATH to that prefix with export PYTHONPATH=/apps/python-${PYTHON_VERSION}...
I had the same problem and I solved it this way:
If you know where libpython resides (I suppose it would be /usr/local/lib/libpython2.7.so.1.0 in your case), you can just create a symbolic link to it:
sudo ln -s /usr/local/lib/libpython2.7.so.1.0 /usr/lib/libpython2.7.so.1.0
Then try running ldd again and see if it worked.
I installed Python 3.5 by Software Collections on CentOS 7 minimal. It all worked fine on its own, but I saw the shared library error mentioned in this question when I tried running a simple CGI script:
tail /var/log/httpd/error_log
AH01215: /opt/rh/rh-python35/root/usr/bin/python: error while loading shared libraries: libpython3.5m.so.rh-python35-1.0: cannot open shared object file: No such file or directory
I wanted a systemwide permanent solution that works for all users, so that excluded adding export statements to .profile or .bashrc files. There is a one-line solution, based on the Red Hat solutions page. Thanks for the comment that points it out:
echo 'source scl_source enable rh-python35' | sudo tee --append /etc/profile.d/python35.sh
After a restart, it's all good on the shell, but sometimes my web server still complains. There's another approach that always worked for both the shell and the server, and is more generic. I saw the solution here and then realized it's actually mentioned in one of the answers here as well! Anyway, on CentOS 7, these are the steps:
vim /etc/ld.so.conf
Which on my machine just had:
include ld.so.conf.d/*.conf
So I created a new file:
vim /etc/ld.so.conf.d/rh-python35.conf
And added:
/opt/rh/rh-python35/root/usr/lib64/
And to manually rebuild the cache:
sudo ldconfig
That's it, scripts work fine!
This was a temporary solution, which didn't work across reboots:
sudo ldconfig /opt/rh/rh-python35/root/usr/lib64/ -v
The -v (verbose) option was just to see what was going on. I saw that it did:
/opt/rh/rh-python35/root/usr/lib64:
libpython3.so.rh-python35 -> libpython3.so.rh-python35
libpython3.5m.so.rh-python35-1.0 -> libpython3.5m.so.rh-python35-1.0
This particular error went away. Incidentally, I had to chown the user to apache to get rid of a permission error after that.
Note that I used find to locate the directory for the library. You could also do:
sudo yum install mlocate
sudo updatedb
locate libpython3.5m.so.rh-python35-1.0
Which on my VM returns:
/opt/rh/rh-python35/root/usr/lib64/libpython3.5m.so.rh-python35-1.0
Which is the path I need to give to ldconfig, as shown above.
This worked for me...
$ sudo apt-get install python2.7-dev
On Solaris 11
Use LD_LIBRARY_PATH_64 to point to the Python libs.
In my case, for Python 3.6, LD_LIBRARY_PATH didn't work but LD_LIBRARY_PATH_64 did.
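For example, something along these lines (a sketch; the library path is an assumption and depends on where Python was installed):
LD_LIBRARY_PATH_64=/usr/local/lib /usr/local/bin/python3.6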
Hope this helps.
Regards
This answer would be helpful to those who have limited auth access on the server.
I had a similar problem for python3.5 in HostGator's shared hosting. Python3.5 had to be enabled every single damn time after login. Here are my 10 steps for resolution:
1. Enable Python through the scl script python_enable_3.5 or with scl enable rh-python35 bash.
2. Verify that it's enabled by executing python3.5 --version. This should give you your Python version.
3. Execute which python3.5 to get its path. In my case, it was /opt/rh/rh-python35/root/usr/bin/python3.5. You can use this path to get the version again (just to verify that the path is working for you).
4. Now exit the scl shell.
5. Get the version again through the complete path: /opt/rh/rh-python35/root/usr/bin/python3.5 --version.
6. It won't give you the version but an error. In my case, it was:
/opt/rh/rh-python35/root/usr/bin/python3.5: error while loading shared libraries: libpython3.5m.so.rh-python35-1.0: cannot open shared object file: No such file or directory
7. As mentioned in Tamas' answer, we have to find that .so file. locate doesn't work in shared hosting, and you can't install it either.
8. Use the following command to find where that file is located:
find /opt/rh/rh-python35 -name "libpython3.5m.so.rh-python35-1.0"
It prints the complete path (second line) of the file once located. In my case, the output was:
find: `/opt/rh/rh-python35/root/root': Permission denied
/opt/rh/rh-python35/root/usr/lib64/libpython3.5m.so.rh-python35-1.0
9. Here is the complete command for python3.5 to work in such shared hosting, which gives the version:
LD_LIBRARY_PATH=/opt/rh/rh-python35/root/usr/lib64 /opt/rh/rh-python35/root/usr/bin/python3.5 --version
10. Finally, for shorthand, append the following alias to your ~/.bashrc:
alias python351='LD_LIBRARY_PATH=/opt/rh/rh-python35/root/usr/lib64 /opt/rh/rh-python35/root/usr/bin/python3.5'
For verification, reload .bashrc with source ~/.bashrc and execute python351 --version.
Well, there you go: now whenever you log in again, you have python351 to welcome you.
This is not limited to python3.5; it can be helpful for other scl-installed software as well.
I installed using the command:
./configure --prefix=/usr \
--enable-shared \
--with-system-expat \
--with-system-ffi \
--enable-unicode=ucs4 &&
make
Now, as the root user:
make install &&
chmod -v 755 /usr/lib/libpython2.7.so.1.0
Then I tried to execute python and got the error:
/usr/local/bin/python: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory
Then I logged out of the root user, tried executing Python again, and it worked successfully.
All it needs is the installation of the libpython [3 or 2] dev files.
Just install python-lib (python27-lib). It will install libpython2.7.so.1.0. There is no need to set anything manually.