How do I create python environment variables in puppet using python::virtualenv? - python

I'm running a Python script that interacts with Slack. I'm getting the Slack API token into the Python script with
the_token = os.environ.get('SLACK_TOKEN')
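(Note that os.environ.get returns None rather than raising when the variable is unset, so a quick standard-library check can confirm what the script actually sees; a minimal sketch:)

import os

# Fail fast if the variable never made it into this process's environment.
token = os.environ.get('SLACK_TOKEN')
if token is None:
    raise SystemExit('SLACK_TOKEN is not set for this process')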
I tried to Puppetize the Python environment with
$var_name = 'SLACK_TOKEN'
$token = 'xxxx-xxxxxxxxxx-xxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxx'
python::virtualenv { $virtualenv_path:
  ensure       => present,
  requirements => '/opt/<dir>/<dir>/<dir>/requirements.txt',
  owner        => $::local_username,
  version      => '3',
  require      => [Class['<class>']],
  environment  => ["${var_name}=${token}"],
}
I thought the last line of the 'virtualenv' block would set the environment variable, but apparently not.

In manifests/virtualenv.pp there is an exec that's running pip commands to install/start virtual environments (sorry, I'm no expert on Python virtual environments).
exec { "python_requirements_initial_install_${requirements}_${venv_dir}":
command => "${pip_cmd} --log ${venv_dir}/pip.log install ${pypi_index} ${proxy_flag} --no-binary :all: -r ${requirements} ${extra_pip_args}",
refreshonly => true,
timeout => $timeout,
user => $owner,
subscribe => Exec["python_virtualenv_${venv_dir}"],
environment => $environment, <----- HERE
cwd => $cwd,
}
When the exec runs, Puppet opens a shell, pipes in the environment variables, runs the command, and closes the shell, so the environment variables exist within the shell Puppet started but not in the Python environment it created. Its intended function is probably to set up paths to commands if they are in a different place, or to pass in proxy configuration so pip can pull packages from external sites.
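(The same scoping is easy to demonstrate in plain Python, if that helps: variables passed to a child process exist only in that child, never in the parent or in later processes. A minimal standard-library sketch:)

import os
import subprocess

# The child process sees SLACK_TOKEN because we pass it explicitly...
subprocess.run(['env'], env={**os.environ, 'SLACK_TOKEN': 'xxxx'})

# ...but the parent, and any process started afterwards, does not.
print(os.environ.get('SLACK_TOKEN'))  # prints None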
I verified the exec handles what you're sending it correctly using this:
class test {
  $var_name = 'test'
  $token = 'xxxx-xxxxxxxxxx-xxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxx'
  exec { "test":
    command     => "/bin/env > /tmp/env.txt",
    environment => ["${var_name}=${token}"],
  }
}
But you'll notice I had to dump the exec's environment to a file just to see it.
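(One possible way around this, sketched here as an assumption rather than anything python::virtualenv does for you: have Puppet manage a token file and let the script fall back to reading it when the variable is absent. The file path below is hypothetical:)

import os
from pathlib import Path

def get_slack_token(token_file='/etc/myapp/slack_token'):  # hypothetical path
    # Prefer the environment variable if a wrapper exported it...
    token = os.environ.get('SLACK_TOKEN')
    if token:
        return token
    # ...otherwise fall back to a file Puppet can manage directly.
    path = Path(token_file)
    if path.exists():
        return path.read_text().strip()
    raise RuntimeError('SLACK_TOKEN is not set and no token file was found')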

Related

Show Python virtualenv name and a short path in PowerShell

I want to simplify my PowerShell prompt when using a Python virtual environment: I want to see the name of the current environment and the current directory only (not the full path).
For example, I want to convert this
(venv) PS C:\User\me\Desktop\project\subfolder>
to this:
(venv) PS subfolder>
I tried many PowerShell profile.ps1 configurations, such as
function prompt {
  $p = Split-Path -leaf -path (Get-Location)
  "$p> "
}
But this command hides the current virtual environment.
Has anyone already managed to do what I want?

VS Code Jupyter integration does not consider custom LD_LIBRARY_PATH

I recently set up a fresh EC2 instance for development running Amazon Linux 2. To run the recent version of prefect (https://orion-docs.prefect.io/) I had to install an up-to-date version of SQLite3, which I compiled from source. I then set the LD_LIBRARY_PATH environment variable to "/usr/local/lib", and installed Python 3.10.5 with the LDFLAGS and CPPFLAGS compiler arguments including that folder as well, so that the new SQLite libraries are found by Python. All good so far: when running the Jupyter notebook server or the prefect orion server from the terminal, everything works fine. But if I want to use the integrated Jupyter environment in VS Code, I run into the issue that the kernel does not start:
Failed to start the Kernel.
ImportError: /home/mickelj/.pyenv/versions/3.10.5/lib/python3.10/lib-dynload/_sqlite3.cpython-310-x86_64-linux-gnu.so: undefined symbol: sqlite3_trace_v2.
This leads me to believe that the system sqlite library is used, as this is the same error I get when I unset the LD_LIBRARY_PATH env variable. However, when calling
ldd /home/mickelj/.pyenv/versions/3.10.5/lib/python3.10/lib-dynload/_sqlite3.cpython-310-x86_64-linux-gnu.so
I get the following:
linux-vdso.so.1 (0x00007ffcde9c8000)
libsqlite3.so.0 => /usr/local/lib/libsqlite3.so.0 (0x00007f96a3339000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f96a311b000)
libc.so.6 => /lib64/libc.so.6 (0x00007f96a2d6e000)
libz.so.1 => /lib64/libz.so.1 (0x00007f96a2b59000)
libm.so.6 => /lib64/libm.so.6 (0x00007f96a2819000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f96a2615000)
/lib64/ld-linux-x86-64.so.2 (0x00007f96a3870000)
Where the new sqlite3 library is correctly referenced. If I unset the LD_LIBRARY_PATH variable the second line changes to:
libsqlite3.so.0 => /lib64/libsqlite3.so.0 (0x00007f9dce52e000)
So my guess is that the VS Code Jupyter integration does not consider environment variables. My question is: is there a way to specify them (in particular LD_LIBRARY_PATH) globally for VS Code, for the built-in Jupyter server at runtime, or anywhere else, to fix this?
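(One way to check from inside the failing kernel which SQLite library was actually loaded, using only the standard library; the stale system library should report an older version than the freshly compiled one:)

import sqlite3
import _sqlite3

print(sqlite3.sqlite_version)  # version of the SQLite shared library actually loaded
print(_sqlite3.__file__)       # path of the C extension module that loads it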
Recently, Jupyter has been fixing .env-related problems.
You can try installing VS Code Insiders and the pre-release version of the Jupyter extension.
Using ipykernel to create a custom kernel spec with an env variable solved this for me.
Steps:
Create a kernelspec with your environment.
conda activate myenv # checkout that venv, using conda as an example
# pip install ipykernel # in case you don't have one
python -m ipykernel install --user --name myenv_ldconf
Edit the kernelspec file, adding the env variable to the object:
nano ~/.local/share/jupyter/kernels/myenv_ldconf/kernel.json
You will see something like this:
{
  "argv": [
    "/home/alice/miniconda3/envs/myenv/bin/python",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "myenv_ldconf",
  "language": "python",
  "metadata": {
    "debugger": true
  }
}
After adding the env variable:
{
  "argv": [
    "/home/alice/miniconda3/envs/myenv/bin/python",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "myenv_ldconf",
  "language": "python",
  "env": {"LD_LIBRARY_PATH": "/home/alice/miniconda3/envs/myenv/lib"},
  "metadata": {
    "debugger": true
  }
}
Ref: How to set env variable in Jupyter notebook
Change the kernel in VS Code to myenv_ldconf.
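(To confirm the env block took effect, a quick sanity check from a cell running on the new kernel, standard library only:)

import os
import sqlite3

print(os.environ.get('LD_LIBRARY_PATH'))  # should match the value from kernel.json
print(sqlite3.sqlite_version)             # should now report the newer SQLite version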

Why can I not create virtualenv using ansible?

I'm trying to create a virtualenv for the nodepool user using Ansible, but it is failing as outlined below. I want to become the nodepool user because it uses Python 3.5, whereas all other users get the server default, 2.7.5. It seems that it cannot source the 3.5 version.
The play is:
- name: Create nodepool venv
  become: true
  become_user: nodepool
  become_method: su
  command: virtualenv-3.5 /var/lib/nodepool/npvenv
The error is:
fatal: [ca-o3lscizuul]: FAILED! => {"changed": false, "cmd": "virtualenv-3.5 /var/lib/nodepool/npvenv", "failed": true, "msg": "[Errno 2] No such file or directory", "rc": 2}
It works from a shell:
[root@host ~]# su nodepool
[nodepool@host root]$ virtualenv-3.5 /var/lib/nodepool/npvenv
Using base prefix '/opt/rh/rh-python35/root/usr'
New python executable in /var/lib/nodepool/npvenv/bin/python3
Also creating executable in /var/lib/nodepool/npvenv/bin/python
Installing setuptools, pip, wheel...done.
I worked around the issue as follows:
shell: source /var/lib/nodepool/.bashrc && virtualenv-3.5 /var/lib/nodepool/npvenv creates="/var/lib/nodepool/npvenv"
It is not how I'd like to do it, but it will do. If anyone knows how I might do it as originally posted, please advise. Perhaps it's not possible, as it doesn't pick up paths etc.
I threw in the creates option because it prevents the command from re-running if the virtualenv already exists.

SSH keys in build environment when using multibranch pipeline Jenkinsfile

I have a project being built on Jenkins using the multibranch pipeline plugin. I am using the declarative pipeline syntax and my Jenkinsfile looks something like this:
pipeline {
  agent { label 'blah' }
  options {
    timeout(time: 2, unit: 'HOURS')
    buildDiscarder(logRotator(numToKeepStr: '5'))
  }
  triggers { pollSCM('H/5 * * * *') }
  stages {
    stage('Prepare') {
      steps {
        sh '''
          echo "Building environment"
          python3 -m venv venv && \
          pip install git+ssh://git@my_private_repo.git
        '''
      }
    }
  }
}
When the build is run on the Jenkins box the build fails and when I check the console output it is failing on the pip install command with the error:
Permission denied (publickey).
fatal: Could not read from remote repository.
I am guessing that I need to set the required ssh key into jenkins build environment, but am not sure how to do this.
You need to install the SSH Agent plugin and use it to wrap the actions in the steps directive in order to be able to pull from a private repository. You enable the SSH Agent with the sshagent directive, which takes an argument identifying a valid key with read permissions to the git repository. The key needs to be available in the global credentials view of Jenkins (Jenkins -> Credentials in the left-hand side menu; look at the ID field of the right key), e.g.:
stage('Prepare') {
  steps {
    sshagent(['<hash_for_your_key>']) {
      echo "Building environment"
      sh "python3.5 -m venv venv"
      sh "venv/bin/python3.5 venv/bin/pip install git+ssh://git@my_private_repo.git"
    }
  }
}
N.B.: Because the actions under the steps directive are executed as subprocesses, you'll need to call the virtual environment's executables explicitly, using their full paths.

Jenkinsfile and Python virtualenv

I am trying to setup a project that uses the shiny new Jenkins pipelines, more specifically a multibranch project.
I have a Jenkinsfile created in a test branch as below:
node {
  stage 'Preparing VirtualEnv'
  if (!fileExists('.env')) {
    echo 'Creating virtualenv ...'
    sh 'virtualenv --no-site-packages .env'
  }
  sh '. .env/bin/activate'
  sh 'ls -all'
  if (fileExists('requirements/preinstall.txt')) {
    sh 'pip install -r requirements/preinstall.txt'
  }
  sh 'pip install -r requirements/test.txt'
  stage 'Unittests'
  sh './manage.py test --noinput'
}
It's worth noting that preinstall.txt will update pip.
I am getting error as below:
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/pip'
It looks like it's trying to update pip in the global env instead of inside the virtualenv, and it looks like each sh step runs in its own context. How do I make them execute within the same context?
What you are trying to do will not work. Every time you call the sh command, Jenkins will create a new shell.
This means that if you use .env/bin/activate in an sh step, it will only be sourced in that shell session. The result is that in a new sh command you have to source the file again (if you take a closer look at the console output, you will see that Jenkins actually creates temporary shell files each time you run the command).
So you should either source the .env/bin/activate file at the beginning of each shell command (you can use triple quotes for multiline strings), like so:
if (fileExists('requirements/preinstall.txt')) {
  sh """
    . .env/bin/activate
    pip install -r requirements/preinstall.txt
  """
}
...
sh """
  . .env/bin/activate
  pip install -r requirements/test.txt
"""
}
stage("Unittests") {
  sh """
    . .env/bin/activate
    ./manage.py test --noinput
  """
}
or run it all in one shell:
sh """
  . .env/bin/activate
  if [[ -f requirements/preinstall.txt ]]; then
    pip install -r requirements/preinstall.txt
  fi
  pip install -r requirements/test.txt
  ./manage.py test --noinput
"""
As Rik posted, virtualenvs don't work well within the Jenkins Pipeline environment, since a new shell is created for each command.
I created a plugin that makes this process a little less painful: https://wiki.jenkins.io/display/JENKINS/Pyenv+Pipeline+Plugin. It essentially wraps each call in a way that activates the virtualenv prior to running the command. This in itself is tricky, as some methods of running multiple commands inline are split into two separate commands by Jenkins, causing the activated virtualenv to no longer apply.
I'm new to Jenkinsfiles. Here's how I've been working around the virtual environment issue. (I'm running Python 3, Jenkins 2.73.1.)
Caveat: Just to be clear, I'm not saying this is a good way to solve the problem, nor have I tested this approach enough to stand behind it, but here is what is working for me today:
I've been playing around with bypassing the venv 'activate' step by calling the virtual environment's python interpreter directly. So instead of:
source ~/venv/bin/activate
one can use:
~/venv/bin/python3 my_script.py
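(If you want to confirm which interpreter, and therefore which site-packages, a script is actually running under, a quick standard-library check inside the script works; a minimal sketch:)

import sys

print(sys.executable)  # e.g. /home/jenkins/venvs/my_venv/bin/python3
print(sys.prefix)      # points at the virtualenv root when running inside one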
I pass the path to my virtual environment's python interpreter via the shell's rc file (in my case, ~/.bashrc). In theory, every shell Jenkins calls should read this resource file. In practice, I must restart Jenkins after making changes to it.
HOME_DIR=~
export VENV_PATH="$HOME_DIR/venvs/my_venv"
export PYTHON_INTERPRETER="${VENV_PATH}/bin/python3"
My Jenkinsfile looks similar to this:
pipeline {
  agent {
    label 'my_slave'
  }
  stages {
    stage('Stage1') {
      steps {
        // sh 'echo $PYTHON_INTERPRETER'
        // sh 'env | sort'
        sh "$PYTHON_INTERPRETER my_script.py"
      }
    }
  }
}
So when the pipeline runs, the sh step has the $PYTHON_INTERPRETER environment variable set.
Note that one shortcoming of this approach is that the Jenkinsfile no longer contains all the information necessary to run the script correctly. Hopefully this will get you off the ground.
