SSH keys in build environment when using multibranch pipeline Jenkinsfile - python

I have a project being built on Jenkins using the multibranch pipeline plugin. I am using the declarative pipeline syntax and my Jenkinsfile looks something like this:
pipeline {
    agent { label 'blah' }
    options {
        timeout(time: 2, unit: 'HOURS')
        buildDiscarder(logRotator(numToKeepStr: '5'))
    }
    triggers { pollSCM('H/5 * * * *') }
    stages {
        stage('Prepare') {
            steps {
                sh '''
                    echo "Building environment"
                    python3 -m venv venv && \
                    pip install git+ssh://git@my_private_repo.git
                '''
            }
        }
    }
}
When the build runs on the Jenkins box it fails, and when I check the console output it is failing on the pip install command with the error:
Permission denied (publickey).
fatal: Could not read from remote repository.
I am guessing that I need to add the required SSH key to the Jenkins build environment, but I am not sure how to do this.

You need to install the SSH Agent plugin and use it to wrap the actions in the steps directive in order to be able to pull from a private repository. You enable the SSH agent with the sshagent directive, passing in the ID of credentials that hold a valid key with read access to the git repository. The key needs to be available in the global credentials view of Jenkins (Jenkins -> Credentials in the left-hand menu; look at the ID field of the relevant key), e.g.:
stage('Prepare') {
    steps {
        sshagent(['<credentials_id_of_your_key>']) {
            echo "Building environment"
            sh "python3.5 -m venv venv"
            sh "venv/bin/python3.5 venv/bin/pip install git+ssh://git@my_private_repo.git"
        }
    }
}
N.B.: Because the actions under the steps directive are executed as separate subprocesses, you need to call the virtual environment's executables explicitly, using their full paths.
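Alternatively, since each sh call gets a fresh shell, you can keep everything in one multi-line sh block inside the sshagent wrapper so that the venv's own pip is used for the install. A minimal sketch, assuming your key is stored under the (hypothetical) credentials ID my-repo-key:
stage('Prepare') {
    steps {
        sshagent(['my-repo-key']) {   // hypothetical credentials ID
            sh '''
                echo "Building environment"
                python3 -m venv venv
                # call the venv's pip directly instead of relying on "activate"
                venv/bin/pip install git+ssh://git@my_private_repo.git
            '''
        }
    }
}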

Related

Azure Functions deployment fails because pip install cannot find environment variable / app setting

EDIT: My flaw when deploying to Azure was that I was not saving the new Application settings I was introducing (facepalm). Saving PIP_EXTRA_INDEX_URL=https://<token>@raw.githubusercontent.com/<gituser> allows pip to finally install my private library.
However, it still doesn't work when locally debugging with F5 on VSCode, regardless of my local.settings.json. Below I'll describe the hack that makes it work locally, but I would prefer to resolve why it doesn't work from my local settings.
My requirements.txt indicates the private library as <package_name> @ https://raw.githubusercontent.com/<gituser>/<repo>/<branch>/dist/<package_name>-0.1.11-py3-none-any.whl. Based on the fact that adding a PIP_EXTRA_INDEX_URL in Azure allows the deployment, I am trying to imitate that in my local.settings.json:
{
  "IsEncrypted": false,
  "Values": {
    ...
    "PIP_EXTRA_INDEX_URL": "https://<token>@raw.githubusercontent.com/<gituser>",
    ...
  }
}
However, I get a 404 error when Azure's pip tries to install from requirements.txt.
Locally, in tasks.json, if I set that env var before I run pip install, the F5 starts working:
{
  "version": "2.0.0",
  "tasks": [
    {
      "type": "func",
      "command": "host start",
      "problemMatcher": "$func-python-watch",
      "isBackground": true,
      "dependsOn": "pip install (functions)"
    },
    {
      "label": "pip install (functions)",
      "type": "shell",
      "osx": {
        ...
      },
      "windows": {
        "command": "$env:PIP_EXTRA_INDEX_URL = 'https://<token>@raw.githubusercontent.com/<gituser>'; ${config:azureFunctions.pythonVenv}\\Scripts\\python -m pip install -r requirements.txt"
      },
      ...
    }
  ]
}
In other words, my F5 is not reading the env var from local.settings.json before the pip install task runs (note that the function itself is properly receiving the env vars as evidenced by successful os.environ[] calls). Also note that locally it does work if I set the env var in tasks.json, which is not ideal. I prefer my local.settings.json to mirror my Application settings on Azure.
In this issue it has been confirmed that this is expected behavior and that my workaround is valid:
"local.settings.json" is unique to Core Tools and thus only applies during the func start command. Since "pip install" happens before func start, you would have to set the env var elsewhere (and yes, tasks.json is a good place).

How do I create python environment variables in puppet using python::virtualenv?

I'm running a python script that interacts with Slack. I'm getting the Slack api token into the python script with
the_token = os.environ.get('SLACK_TOKEN')
I tried to puppetize the python environment with
$var_name = 'SLACK_TOKEN'
$token = 'xxxx-xxxxxxxxxx-xxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxx'

python::virtualenv { $virtualenv_path:
  ensure       => present,
  requirements => '/opt/<dir>/<dir>/<dir>/requirements.txt',
  owner        => $::local_username,
  version      => '3',
  require      => [Class['<class>']],
  environment  => ["${var_name}=${token}"],
}
I thought the last line of the 'virtualenv' block would set the environment variable, but apparently not.
In manifests/virtualenv.pp there is an exec that's running pip commands to install/start virtual environments (sorry, I'm no expert on Python virtual environments).
exec { "python_requirements_initial_install_${requirements}_${venv_dir}":
command => "${pip_cmd} --log ${venv_dir}/pip.log install ${pypi_index} ${proxy_flag} --no-binary :all: -r ${requirements} ${extra_pip_args}",
refreshonly => true,
timeout => $timeout,
user => $owner,
subscribe => Exec["python_virtualenv_${venv_dir}"],
environment => $environment, <----- HERE
cwd => $cwd,
}
When the exec runs, Puppet opens a shell, pipes in the environment variables, runs the command and closes the shell, so the environment variables exist within the shell Puppet started but not in the Python environment it created. Its intended function is probably to set up paths to commands if they are in a different place, or to pass in proxy configuration so pip can pull packages from external sites.
I verified that the exec handles what you're sending it correctly using this:
class test {
  $var_name = 'test'
  $token = 'xxxx-xxxxxxxxxx-xxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxx'

  exec { "test":
    command     => "/bin/env > /tmp/env.txt",
    environment => ["${var_name}=${token}"],
  }
}
But you'll notice I had to dump the environment the exec was running in out to a file to see it.

Install multiple private packages from Github using Poetry and deploy keys

I want to install multiple private GitHub repositories as Python packages into my repository using Poetry. This works fine when running poetry install locally, as I have my public SSH key added to GitHub, which allows Poetry to access the private repos. The problem is that I want to install these same private packages in my CI/CD pipeline, and for that I need to add one deploy key per repo. The correct deploy key needs to be used with the correct repository, and for that to work I need to set up an SSH config with aliases in the following format (I haven't actually gotten far enough to know whether this will work or not):
// ~/.ssh/config
Host repo-1.github.com
    IdentityFile ~/.ssh/repo-1-deploy-key

Host repo-2.github.com
    IdentityFile ~/.ssh/repo-2-deploy-key
Where repo-1 and repo-2 are the names of the private repositories I need to install. When running locally, the pyproject.toml dependencies need to be set up in the following format:
// pyproject.toml
...
[tool.poetry.dependencies]
repo-1 = { git = "ssh://git@github.com/equinor/repo-1.git" }
repo-2 = { git = "ssh://git@github.com/equinor/repo-2.git" }
...
This allows developers to install the packages without any configuration (given that they have access). For the CI/CD pipeline, however, the URLs need to match the aliases in the SSH config file, so it needs to look something like this:
// pyproject.toml
...
[tool.poetry.dependencies]
repo-1 = { git = "ssh://git@repo-1.github.com/equinor/repo-1.git" }
repo-2 = { git = "ssh://git@repo-2.github.com/equinor/repo-2.git" }
...
Now, where I seem to be stuck is how to include two different git paths for the same packages in the pyproject file. I tried the following:
// pyproject.toml
[tool.poetry.dependencies]
repo-1 = { git = "ssh://git@repo-1.github.com/equinor/repo-1.git", optional = true }
repo-2 = { git = "ssh://git@repo-2.github.com/equinor/repo-2.git", optional = true }

[tool.poetry.dev-dependencies]
repo-1 = { git = "ssh://git@repo-1.github.com/equinor/repo-1.git" }
repo-2 = { git = "ssh://git@repo-2.github.com/equinor/repo-2.git" }

[tool.poetry.extras]
cicd_modules = ["repo-1", "repo-2"]
The idea was that poetry install locally would use the dev dependencies, and poetry install --no-dev --extras cicd_modules in the CI/CD pipeline would use the aliased paths. Sadly, this gives me a CalledProcessError, as Poetry seems to try to install the optional packages despite the optional flag being set to true.
What am I doing wrong here; am I using the optional flag incorrectly somehow? Is there a better way to solve this? In summary, I just want to be able to install multiple private repositories as packages using Poetry and GitHub deploy keys in a CI/CD pipeline without breaking local install behaviour. Whether that's done this way or some other, better way, I don't have a strong opinion.

How to pass AWS credential when building Docker image in Jenkins?

Hi, I am working in Jenkins to build my AWS CDK project. I have created my Dockerfile as below.
FROM python:3.7.4-alpine3.10
ENV CDK_VERSION='1.14.0'
RUN mkdir /cdk
COPY ./requirements.txt /cdk/
COPY ./entrypoint.sh /usr/local/bin/
COPY ./aws /cdk/
WORKDIR /cdk
RUN apk -uv add --no-cache groff jq less
RUN apk add --update nodejs npm
RUN apk add --update bash && rm -rf /var/cache/apk/*
RUN npm install -g aws-cdk
RUN pip3 install -r requirements.txt
RUN ls -la
ENTRYPOINT ["entrypoint.sh"]
RUN cdk synth
RUN cdk deploy
In Jenkins I am building this Docker image as below:
stages {
    stage('Dev Code Deploy') {
        when {
            expression {
                return BRANCH_NAME == 'Develop'
            }
        }
        agent {
            dockerfile {
                additionalBuildArgs "--build-arg 'http_proxy=${env.http_proxy}' --build-arg 'https_proxy=${env.https_proxy}'"
                filename 'Dockerfile'
                args '-u root:root'
            }
        }
In the above code I am not supplying AWS credentials, so when cdk synth is executed I get the error Need to perform AWS calls for account 1234567, but no credentials found. Tried: default credentials.
In Jenkins I have my AWS credentials and I can access them like this:
steps {
    withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "${env.PROJECT_ID}-aws-${env.ENVIRONMENT}"]]) {
        sh 'ls -la'
        sh "bash ./scripts/build.sh"
    }
}
But how can I pass this credentialsId when building the Docker image? Can someone help me figure it out? Any help would be appreciated. Thanks.
I am able to pass credentials like below.
steps {
    script {
        node {
            checkout scm
            withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', credentialsId: "${env.PROJECT_ID}-aws-${CFN_ENVIRONMENT}"]]) {
                abc = docker.build('cdkimage', "--build-arg http_proxy=${env.http_proxy} --build-arg https_proxy=${env.https_proxy} .")
                abc.inside {
                    sh 'ls -la'
                    sh "bash ./scripts/build.sh"
                }
            }
        }
    }
}
I have added the below code in build.sh:
cdk synth
cdk deploy
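If the bound credentials are not visible inside the container, here is a sketch of one way to forward them explicitly: AmazonWebServicesCredentialsBinding exposes AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (plus AWS_SESSION_TOKEN for temporary credentials) as environment variables, and a bare -e VAR docker run flag forwards a variable's current value, so they can be passed through inside() without interpolating secrets into the Groovy string:
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',
                  credentialsId: "${env.PROJECT_ID}-aws-${CFN_ENVIRONMENT}"]]) {
    def image = docker.build('cdkimage', '.')
    // bare -e flags forward the variables' current values into the container
    image.inside('-e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN') {
        sh 'bash ./scripts/build.sh'
    }
}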
You should install the "Amazon ECR" plugin and restart Jenkins. Configure the plugin with your credentials and reference them in your pipeline.
You can find all the documentation here: https://wiki.jenkins.io/display/JENKINS/Amazon+ECR
If you're using a Jenkins pipeline, maybe you can try the withAWS step.
This should provide a way to access a Jenkins AWS credential and then pass it as a Docker environment variable while running the Docker container.
ref:
https://github.com/jenkinsci/pipeline-aws-plugin
https://jenkins.io/doc/book/pipeline/docker/
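For reference, a minimal sketch of the withAWS approach, assuming the Pipeline: AWS Steps plugin is installed; the credentials ID and region below are placeholders:
steps {
    withAWS(credentials: "${env.PROJECT_ID}-aws-${env.ENVIRONMENT}", region: 'eu-west-1') {
        // AWS_* environment variables are set inside this block,
        // so commands run here (or forwarded into a container) can authenticate
        sh 'bash ./scripts/build.sh'
    }
}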

Jenkinsfile and Python virtualenv

I am trying to setup a project that uses the shiny new Jenkins pipelines, more specifically a multibranch project.
I have a Jenkinsfile created in a test branch as below:
node {
    stage 'Preparing VirtualEnv'
    if (!fileExists('.env')) {
        echo 'Creating virtualenv ...'
        sh 'virtualenv --no-site-packages .env'
    }
    sh '. .env/bin/activate'
    sh 'ls -all'
    if (fileExists('requirements/preinstall.txt')) {
        sh 'pip install -r requirements/preinstall.txt'
    }
    sh 'pip install -r requirements/test.txt'

    stage 'Unittests'
    sh './manage.py test --noinput'
}
It's worth noting that preinstall.txt will update pip.
I am getting an error as below:
OSError: [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/pip'
It looks like it's trying to update pip in the global environment instead of inside the virtualenv, and it looks like each sh step runs in its own context. How do I make them execute within the same context?
What you are trying to do will not work. Every time you call the sh command, Jenkins will create a new shell.
This means that if you use .env/bin/activate in an sh step, it will only be sourced in that shell session. The result is that in a new sh command you have to source the file again (if you take a closer look at the console output, you will see that Jenkins actually creates temporary shell files each time you run a command).
So you should either source the .env/bin/activate file at the beginning of each shell command (you can use triple quotes for multiline strings), like so
if (fileExists('requirements/preinstall.txt')) {
    sh """
        . .env/bin/activate
        pip install -r requirements/preinstall.txt
    """
}
...
    sh """
        . .env/bin/activate
        pip install -r requirements/test.txt
    """
}
stage("Unittests") {
sh """
. .env/bin/activate
./manage.py test --noinput
"""
}
or run it all in one shell
sh """
. .env/bin/activate
if [[ -f requirements/preinstall.txt ]]; then
pip install -r requirements/preinstall.txt
fi
pip install -r requirements/test.txt
./manage.py test --noinput
"""
Like Rik posted, virtualenvs don't work well within the Jenkins Pipeline Environment, since a new shell is created for each command.
I created a plugin that makes this process a little less painful, which can be found here: https://wiki.jenkins.io/display/JENKINS/Pyenv+Pipeline+Plugin. It essentially wraps each call in a way that activates the virtualenv prior to running the command. This in itself is tricky, as some methods of running multiple commands inline are split into two separate commands by Jenkins, causing the activated virtualenv to no longer apply.
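For reference, usage looks roughly like this (a sketch based on the plugin's documented withPythonEnv step; the interpreter name is whatever is available on your agent):
node {
    // withPythonEnv creates/reuses a virtualenv for the given interpreter
    // and runs the wrapped steps with that environment on the PATH
    withPythonEnv('python3') {
        sh 'pip install -r requirements/test.txt'
        sh './manage.py test --noinput'
    }
}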
I'm new to Jenkins files. Here's how I've been working around the virtual environment issue. (I'm running Python3, Jenkins 2.73.1)
Caveat: Just to be clear, I'm not saying this is a good way to solve the problem, nor have I tested this approach enough to stand behind it, but here is what is working for me today:
I've been playing around with bypassing the venv 'activate' by calling the virtual environment's python interpreter directly. So instead of:
source ~/venv/bin/activate
one can use:
~/venv/bin/python3 my_script.py
I pass the path to my virtual environment's Python interpreter via the shell's rc file (in my case, ~/.bashrc). In theory, every shell Jenkins calls should read this resource file. In practice, I must restart Jenkins after making changes to the shell resource file.
HOME_DIR=~
export VENV_PATH="$HOME_DIR/venvs/my_venv"
export PYTHON_INTERPRETER="${VENV_PATH}/bin/python3"
My Jenkinsfile looks similar to this:
pipeline {
    agent {
        label 'my_slave'
    }
    stages {
        stage('Stage1') {
            steps {
                // sh 'echo $PYTHON_INTERPRETER'
                // sh 'env | sort'
                sh "$PYTHON_INTERPRETER my_script.py "
            }
        }
    }
}
So when the pipeline runs, the sh step has the $PYTHON_INTERPRETER environment variable set.
Note that one shortcoming of this approach is that the Jenkinsfile no longer contains all the information necessary to run the script correctly. Hopefully this will get you off the ground.
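If you want the Jenkinsfile itself to carry that information instead of a shell rc file, one option (a sketch, with a hypothetical venv path) is a declarative environment block:
pipeline {
    agent { label 'my_slave' }
    environment {
        // hypothetical path to the virtualenv's interpreter on the agent
        PYTHON_INTERPRETER = '/home/jenkins/venvs/my_venv/bin/python3'
    }
    stages {
        stage('Stage1') {
            steps {
                // the environment block exports the variable to every sh step
                sh '"$PYTHON_INTERPRETER" my_script.py'
            }
        }
    }
}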
