I am running a Jenkins pipeline where I have a Flask app to run and then test (using nose2).
The current Jenkinsfile I am using is:
pipeline {
agent { docker { image 'python:3.8.0' } }
stages {
stage('build') {
steps {
withEnv(["HOME=${env.WORKSPACE}"]) {
sh 'python --version'
sh 'python -m pip install --upgrade pip'
sh 'pip install --user -r requirements.txt'
sh script:'''
#!/bin/bash
echo "This is start $(pwd)"
cd ./flask/section5
echo "This is $(pwd)"
python create_table.py
python myapp.py &
nose2
'''
}
}
}
}
}
Jenkins creates the Docker image and downloads the Python requirements, but I get an error when running nose2 from the shell script:
+ pwd
+ echo This is start .jenkins/workspace/myflaskapptest_master
This is start .jenkins/workspace/myflaskapptest_master
+ cd ./flask/section5
+ pwd
+ echo This is .jenkins/workspace/myflaskapptest_master/flask/section5
This is .jenkins/workspace/myflaskapptest_master/flask/section5
+ python create_table.py
DONE
+ nose2+
python myapp.py
.jenkins/workspace/myflaskapptest_master#tmp/durable-1518e85e/script.sh: 8: .jenkins/workspace/myflaskapptest_master#tmp/durable-1518e85e/script.sh: nose2: not found
What am I missing?
I fixed it (i.e. the Jenkins build now works) by changing the Jenkinsfile this way:
sh script:'''
#!/bin/bash
echo "This is start $(pwd)"
cd ./flask/section5
echo "This is $(pwd)"
python create_table.py
python myapp.py &
cd ./tests
python test.py
'''
I would have preferred to run nose2 like I do on my machine. If anyone knows how to fix this, that would be great.
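For what it's worth, the likely reason nose2 is "not found": with pip install --user and HOME pointed at the workspace, console scripts land in $HOME/.local/bin, which the python Docker image does not put on PATH. A sketch of a fix (untested, using Jenkins' PATH+IDENTIFIER prefix syntax in withEnv):

```
withEnv(["HOME=${env.WORKSPACE}", "PATH+USERBIN=${env.WORKSPACE}/.local/bin"]) {
    sh 'pip install --user -r requirements.txt'
    sh '''
        cd ./flask/section5
        python create_table.py
        python myapp.py &
        nose2
    '''
}
```

With the user-level bin directory prepended to PATH, nose2 should resolve the same way it does on a local machine.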
I have a text string in Node.js that I want to send to Python, process with the transformers library, and then retrieve back into Node.js. I then want to write a Dockerfile and containerise the integrated code in a single container.
The Dockerfile, Python script, and Node.js code are below.
The Node.js code:
const { spawn } = require('child_process'); // required import was missing
const questions = JSON.stringify('Hello World machine learning dynamic learning advanced neural nets techs')
// pass the text as an argument so the Python side can read sys.argv[1]
const pythonprocess = spawn('python3',["./py_mo2.py", questions],{
shell: true,
env:{
...process.env
}
});
console.log("before process")
pythonprocess.stdout.on('data',(data) => {
console.log(data.toString());
});
// surface Python-side errors; otherwise failures are silent
pythonprocess.stderr.on('data',(data) => {
console.error(data.toString());
});
console.log("nodejs is working")
The Python code:
import sys
import json
import warnings
warnings.filterwarnings("ignore")
text=sys.argv[1]
from transformers import pipeline
summarizer=pipeline('summarization',model='lidiya/bart-base-samsum')
print(json.dumps(summarizer(text)))
The Dockerfile:
FROM python:3.7
ENV VIRTUAL_ENV=/app/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 16.19.0
RUN mkdir -p $NVM_DIR
WORKDIR $NVM_DIR
RUN apt-get update && curl -fsSL https://deb.nodesource.com/setup_16.x | bash -
RUN apt-get install -y nodejs
ENV NODE_PATH $NVM_DIR/node/lib/node_modules
ENV PATH="$NVM_DIR/node/bin:$PATH"
WORKDIR /app/venv
COPY requirements.txt /app/venv/
RUN . /app/venv/bin/activate && pip3 install -r requirements.txt
COPY package*.json /app/venv/
RUN npm install
COPY . /app/venv/
CMD /usr/bin/node /app/venv/nomo.js
I tried experimenting with multi-stage builds in the Dockerfile, installing Python on a Node image and vice versa. I am able to get simple text output from the Python code in the absence of the transformers library. Once I use it, the container just exits after running the Node.js code, with no output from the Python code.
Thank you for the help! kudos
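One thing worth checking, as a sketch: if the Python script is ever spawned without a command-line argument, sys.argv[1] raises IndexError, and when only stdout is read on the Node side the traceback is invisible. A minimal defensive version of the argument handling (the function name is illustrative, not from the original script):

```python
import json
import sys

def read_input(argv):
    # If the Node side spawns the script without passing the text,
    # argv[1] raises IndexError and the traceback goes to stderr,
    # which a stdout-only listener never sees. Emit a JSON error on
    # stdout instead so the failure is visible in the Node logs.
    if len(argv) < 2:
        print(json.dumps({"error": "no input text given"}))
        raise SystemExit(1)
    return argv[1]
```

Calling read_input(sys.argv) at the top of py_mo2.py would make the "container exits with no output" case distinguishable from a transformers failure.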
This is my Jenkinsfile:
stage('Build') {
agent {
docker {
label '1.63slave'
image 'linux-c4702-image'
args '-u root:root'
}
}
when {
expression {params.build == 'yes'}
beforeAgent true
}
steps {
script{
sh """#!/bin/bash
cd ${env.WORKSPACE} && mv -f /home/nanopb-0.3.6-linux-x86.tar.gz ${env.WORKSPACE}/adsp_proc/ssc/tools
cd ${env.WORKSPACE}/adsp_proc/ssc/tools && tar -zxvf nanopb-0.3.6-linux-x86.tar.gz
cd ${env.WORKSPACE}/adsp_proc
pwd
python ssc/build/config_nanopb_dependency.py -f nanopb-0.3.6-linux-x86
python ./build/build.py -c sm6150 -o all -f aDSP
cd ${env.WORKSPACE} && source set_env.sh
cd ${env.WORKSPACE} && sh buildNonhlos.sh all
cd ${env.WORKSPACE} && bitbake qti-multimedia-image
cd ${env.WORKSPACE} && sh buildNonhlos.sh nonhlos
"""
}
}
}
This is where my Python goes wrong:
#Check that boot_images folder exists, there are dependencies on this
if BOOT_IMAGES_DIR not in os.environ["WORKSPACE"]:
raise NameError("ERROR: buildex::setup_environment: " + \
"Build root folder 'boot_images' is missing. Please ensure this folder exist.")
This happens because Jenkins' global environment variables conflict with what the Python script expects: the Jenkins log shows WORKSPACE as ${env.WORKSPACE}, but the WORKSPACE the Python script requires is ${env.WORKSPACE}/boot_images. I have no problem running the Python script outside Jenkins.
Using a Jenkins environment {} block or withEnv to change it was useless.
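The failing check boils down to a substring test on the WORKSPACE value; a minimal reproduction (the paths are hypothetical):

```python
import os

BOOT_IMAGES_DIR = "boot_images"  # constant assumed from the build script

# Jenkins sets WORKSPACE to the job workspace root...
os.environ["WORKSPACE"] = "/var/jenkins/workspace/myjob"
assert BOOT_IMAGES_DIR not in os.environ["WORKSPACE"]  # the NameError path

# ...but the script expects the boot_images subfolder to be part of it.
os.environ["WORKSPACE"] = "/var/jenkins/workspace/myjob/boot_images"
assert BOOT_IMAGES_DIR in os.environ["WORKSPACE"]  # the check passes
```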
You can prepend an environment variable definition in front of a command:
WORKSPACE="${env.WORKSPACE}/boot_images" python <script.py> <arguments...>
will set that environment variable just for that command.
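In a Jenkinsfile this translates to prepending the override inside the sh step itself; a sketch (the script name is hypothetical):

```
sh "WORKSPACE='${env.WORKSPACE}/boot_images' python buildex.py"
```

Because the assignment is part of the command line, it bypasses whatever Jenkins injected into the environment, which is why this works where environment {} and withEnv did not.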
I ran my pytest command:
python -m pytest tests/encoding/unit/* --verbose --junit-xml 'results.xml'
and I got the following error:
ERROR: script returned exit code 2
Please find my Jenkinsfile below:
//#!/bin/bash
//#!groovy
def withConda(Closure body) {
//CONDA INSTALLATION-------------------------------------
env.CONDA_ZIP = 'miniconda3.sh'
env.CERTS_ZIP = 'certs.zip'
env.CONDA_PATH = "${WORKSPACE}/bin/anaconda3"
env.CONDA_BIN_DIR = "${env.CONDA_PATH}/bin"
//GENERIC DIRECTORIES------------------------------------
env.DOWNLOADS = "${WORKSPACE}/downloads"
env.WORKSPACE_BIN = "${WORKSPACE}/bin"
if ( fileExists("${env.CONDA_PATH}/tls-ca-bundle.pem") ) {
echo "=====================>> CONDA ALREADY INSTALLED! <<====================="
} else {
echo "=====================>> INSTALLING CONDA! <<====================="
stage('Installing Miniconda3') {
dir("${env.DOWNLOADS}") {
sh "curl -k -o ${env.CONDA_ZIP} https://nexus.com/nexus/content/repositories/third-party-applications/continuum/miniconda3/4.4.10/Miniconda3-latest-Linux-x86_64.sh --insecure"
sh "chmod +x ${env.CONDA_ZIP}"
sh "curl -k -o ${env.CERTS_ZIP} 'https://nexus.com/nexus/service/local/repositories/ib-cto-releases/content/com/cacerts/java-cacerts/3.1.0/java-cacerts-3.1.0.zip' --insecure"
dir("${env.CONDA_PATH}") {
sh "unzip -d . -u ${env.DOWNLOADS}/${env.CERTS_ZIP}"
//sh 'ls -lart'
sh "bash ${env.DOWNLOADS}/${env.CONDA_ZIP} -b -f -p ."
}
}
}
}
stage('Setting Up Conda Configuration') {
dir ("${env.CONDA_PATH}") {
sh "mv ${env.JENKINS_PIPELINES_PATH}/.condarc ./.condarc"
sh "sed -i 's+TLS_PEM_BUNDLE_PATH+${env.CONDA_PATH}/tls-ca-bundle.pem+' .condarc"
}
withEnv(["PATH+CONDA_BIN_DIR=${env.CONDA_BIN_DIR}"]) {
sh 'python -m conda update conda -y'
//sh 'conda update conda -y'
//sh 'conda update --all -y'
sh 'conda env list'
//sh 'conda remove --name dev --all -y'
withCredentials([[$class: 'UsernamePasswordMultiBinding', credentialsId: "d398a781-1860-4c2b-96b6-dbd5442f9a82", usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD']]) {
body()
}
}
}
}
def withSpark(Closure body) {
//JAVA INSTALLATION-------------------------------------
//env.JAVA_ZIP = 'java.sh'
stage('Setting Up Spark Configuration') { // body braces were missing
dir("${env.DOWNLOADS}") {
echo "=====================>> INSTALLING JAVA! <<====================="
// cannot find java on Nexus
sh "curl -k -o ${env.JAVA_ZIP} https://nexus.com/nexus/content/repositories/third-party-applications/?? --insecure"
//echo "=====================>> INSTALLING SPARK! <<====================="
//sh "curl -k -o ${env.JAVA_ZIP} https://nexus.com/nexus/content/repositories/third-party-applications/?? --insecure"
}
}
}
def pullRepos() {
//BITBUCKET SETUP----------------------------------------
env.BBS_REPO_URL = 'https://stash.com:8443/scm/qafrdrnd/dataprocessing.git'
env.BBS_REPO_PATH = "${WORKSPACE}/bitbucket/jenkins"
env.JENKINS_PIPELINES_PATH = "${env.BBS_REPO_PATH}"
// Download the repos regardless to pull in any new changes...
stage("Checkout BBS RM Repo: ${env.BBS_REPO_URL}") {
//Setup some useful environment variables to used through the pipeline...
dir ("${env.BBS_REPO_PATH}") {
checkout([$class: 'GitSCM',
branches: [[name: '']],
doGenerateSubmoduleConfigurations: false,
extensions: [],
submoduleCfg: [],
//userRemoteConfigs: [[credentialsId: "${params.AD_CREDENTIALS}", url: "${env.BBS_REPO_URL}"]]])
userRemoteConfigs: [[credentialsId: "d398a781-1860-4c2b-96b6-dbd5442f9a82", url: "${env.BBS_REPO_URL}"]]])
}
}
}
//machine that executes an entire workflow
node('Linux') {
//GENERIC DIRECTORIES------------------------------------
env.DOWNLOADS = "${WORKSPACE}/downloads"
env.WORKSPACE_BIN = "${WORKSPACE}/bin"
//Initialise the pipeline environment, bringing in any required scripts and configuration files...
//deleteDir()
pullRepos()
withConda {
stage("Running unit-test with pytest") {
dir("${env.BBS_REPO_PATH}") {
sh 'conda install pytest -y'
sh 'conda install ipython -y'
sh 'conda install numpy -y'
sh 'conda install pandas -y'
sh 'conda install pyspark -y'
sh '''
python -m pytest tests/encoding/unit/* --verbose --junit-xml 'results.xml'
'''
}
}
}
}
If memory serves right, pytest does not take the wildcard * character. The way you invoked --junit-xml is also wrong, I think. If you want to run everything in the tests/encoding/unit folder, you should run:
python -m pytest tests/encoding/unit --verbose --junit-xml=results.xml
You may refer to Pytest Usage for more information.
Hi, I am working in Jenkins to build my AWS CDK project. I have created my Dockerfile as below.
FROM python:3.7.4-alpine3.10
ENV CDK_VERSION='1.14.0'
RUN mkdir /cdk
COPY ./requirements.txt /cdk/
COPY ./entrypoint.sh /usr/local/bin/
COPY ./aws /cdk/
WORKDIR /cdk
RUN apk -uv add --no-cache groff jq less
RUN apk add --update nodejs npm
RUN apk add --update bash && rm -rf /var/cache/apk/*
RUN npm install -g aws-cdk
RUN pip3 install -r requirements.txt
RUN ls -la
ENTRYPOINT ["entrypoint.sh"]
RUN cdk synth
RUN cdk deploy
In Jenkins I am building this Docker image as below:
stages {
stage('Dev Code Deploy') {
when {
expression {
return BRANCH_NAME == 'Develop' // comparison, not assignment
}
}
agent {
dockerfile {
additionalBuildArgs "--build-arg 'http_proxy=${env.http_proxy}' --build-arg 'https_proxy=${env.https_proxy}'"
filename 'Dockerfile'
args '-u root:root'
}
}
In the above code I am not supplying AWS credentials, so when cdk synth is executed I get the error Need to perform AWS calls for account 1234567 but no credentials found. Tried: default credentials.
In Jenkins I have my AWS credentials and I can access them like this:
steps {
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',credentialsId: "${env.PROJECT_ID}-aws-${env.ENVIRONMENT}"]]) {
sh 'ls -la'
sh "bash ./scripts/build.sh"
}
}
But how can I pass this credentialsId when building the Docker image? Can someone help me figure it out? Any help would be appreciated. Thanks
I am able to pass credentials like below.
steps {
script {
node {
checkout scm
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',credentialsId: "${env.PROJECT_ID}-aws-${CFN_ENVIRONMENT}"]]) {
abc = docker.build('cdkimage', "--build-arg http_proxy=${env.http_proxy} --build-arg https_proxy=${env.https_proxy} .")
abc.inside{
sh 'ls -la'
sh "bash ./scripts/build.sh"
}
}
}
}
}
I have added the below code in build.sh:
cdk synth
cdk deploy
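For reference, this works because AmazonWebServicesCredentialsBinding exports AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY into the environment, and the CDK CLI picks those up automatically. A sketch of what build.sh can look like under that assumption:

```
#!/usr/bin/env bash
# Runs inside the withCredentials block, so the AWS keys are already
# exported as environment variables that the CDK CLI reads.
set -euo pipefail
cdk synth
cdk deploy --require-approval never
```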
You should install the "Amazon ECR" plugin and restart Jenkins.
Configure the plugin with your credentials and specify them in the pipeline.
You can find all the documentation here: https://wiki.jenkins.io/display/JENKINS/Amazon+ECR
If you're using a Jenkins pipeline, you can try the withAWS step.
This provides a way to access Jenkins AWS credentials and pass them into the Docker environment while running the container.
ref:
https://github.com/jenkinsci/pipeline-aws-plugin
https://jenkins.io/doc/book/pipeline/docker/
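A minimal sketch of the withAWS approach, assuming the pipeline-aws-plugin is installed and 'my-aws-creds' is the id of a stored AWS credential (both names are placeholders):

```
withAWS(credentials: 'my-aws-creds', region: 'eu-west-1') {
    sh 'cdk synth'
    sh 'cdk deploy --require-approval never'
}
```

Inside the block the plugin exports the usual AWS_* environment variables, so the CDK CLI finds credentials without them being baked into the image.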
Trying to run collectstatic on deployment, but running into the following error:
pipeline.exceptions.CompressorError: /usr/bin/env: yuglify: No such
file or directory
When I run collectstatic manually, everything works as expected:
Post-processed 'stylesheets/omnibase-v1.css' as
'stylesheets/omnibase-v1.css' Post-processed 'js/omnijs-v1.js' as
'js/omnijs-v1.js'
I've installed Yuglify globally. If I run 'heroku run yuglify', the interface pops up and runs as expected. I'm only running into an issue with deployment. I'm using the multibuildpack, with NodeJS and Python. Any help?
My package, just in case:
{
"author": "One Who Sighs",
"name": "sadasd",
"description": "sadasd Dependencies",
"version": "0.0.0",
"homepage": "http://sxaxsaca.herokuapp.com/",
"repository": {
"url": "https://github.com/heroku/heroku-buildpack-nodejs"
},
"dependencies": {
"yuglify": "~0.1.4"
},
"engines": {
"node": "0.10.x"
}
}
I should maybe mention that Yuglify is not in my requirements.txt, just in my package.json.
I ran into the same problem and ended up using a custom buildpack such as this one, plus bash scripts to install node and yuglify:
https://github.com/heroku/heroku-buildpack-python
After setting the buildpack, I created a few bash scripts to install node and yuglify. The buildpack has hooks to call these post-compile scripts. Here's a good example of how to do this, which I followed:
https://github.com/nigma/heroku-django-cookbook
These scripts are placed under bin in your root folder.
In the post_compile script, I added a call to install yuglify.
post_compile script
if [ -f bin/install_nodejs ]; then
echo "-----> Running install_nodejs"
chmod +x bin/install_nodejs
bin/install_nodejs
if [ -f bin/install_yuglify ]; then
echo "-----> Running install_yuglify"
chmod +x bin/install_yuglify
bin/install_yuglify
fi
fi
install_yuglify script
#!/usr/bin/env bash
set -eo pipefail
npm install -g yuglify
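The install_nodejs script referenced above is not shown; a minimal sketch, assuming a prebuilt tarball is acceptable (the version and paths are placeholders to be matched to the engines field in package.json):

```
#!/usr/bin/env bash
# Sketch: fetch a prebuilt Node.js and put its bin directory on PATH
# so the subsequent npm install -g yuglify call can run.
set -eo pipefail
NODE_VERSION="0.10.26"   # placeholder version
curl -fsSL "https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.gz" \
  | tar -xz -C "$HOME"
export PATH="$HOME/node-v${NODE_VERSION}-linux-x64/bin:$PATH"
```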
If that doesn't work, you can have a look at this post:
Yuglify compressor can't find binary from package installed through npm