Running Poetry in a Jenkinsfile - Python

My setup is Jenkins running in Kubernetes. I want to lint my code, run my tests, then build a container. I'm having trouble getting Poetry to install/run in one of my build steps.
podTemplate(inheritFrom: 'k8s-slave', containers: [
    containerTemplate(name: 'py38', image: 'python:3.8.4-slim-buster', ttyEnabled: true, command: 'cat')
]) {
    node(POD_LABEL) {
        stage('Checkout') {
            checkout scm
            sh 'ls -lah'
        }
        container('py38') {
            stage('Poetry Configuration') {
                sh 'apt-get update && apt-get install -y curl'
                sh "curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python"
                sh "$HOME/.poetry/bin/poetry install --no-root"
                sh "$HOME/.poetry/bin/poetry shell --no-interaction"
            }
            stage('Lint') {
                sh 'pre-commit install'
                sh "pre-commit run --all"
            }
        }
    }
}
Poetry install works fine, but when I go to activate the shell, it fails.
+ /root/.poetry/bin/poetry shell --no-interaction
Spawning shell within /root/.cache/pypoetry/virtualenvs/truveris-version-Zr2qBFRU-py3.8
[error]
(25, 'Inappropriate ioctl for device')

The issue here is that Jenkins runs a non-interactive shell and you are trying to start an interactive shell. The --no-interaction option doesn't mean a non-interactive shell; rather, it means the shell won't ask you any questions:
-n (--no-interaction) Do not ask any interactive question
This answer explains it.
I would not call the shell at all and just use the poetry run command:
podTemplate(inheritFrom: 'k8s-slave', containers: [
    containerTemplate(name: 'py38', image: 'python:3.8.4-slim-buster', ttyEnabled: true, command: 'cat')
]) {
    node(POD_LABEL) {
        stage('Checkout') {
            checkout scm
            sh 'ls -lah'
        }
        container('py38') {
            stage('Poetry Configuration') {
                sh 'apt-get update && apt-get install -y curl'
                sh "curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python"
                sh "$HOME/.poetry/bin/poetry install --no-root"
            }
            stage('Lint') {
                sh "$HOME/.poetry/bin/poetry run pre-commit install"
                sh "$HOME/.poetry/bin/poetry run pre-commit run --all"
            }
        }
    }
}

Install Poetry at the container level and then export the dependencies from the poetry.lock file with
poetry export --without-hashes --dev -f requirements.txt -o requirements.txt
Then install the dependencies with pip install -r requirements.txt INSTEAD of poetry install.
Then you don't have to run your commands in the virtual env.
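A sketch of how the pipeline's shell steps could look under this approach (the Poetry install path matches the answer above; the exact stage layout is an assumption):
# In the 'Poetry Configuration' stage: export locked deps and install them with pip
$HOME/.poetry/bin/poetry export --without-hashes --dev -f requirements.txt -o requirements.txt
pip install -r requirements.txt
# In the 'Lint' stage: the tools are now on the system PATH, no poetry run needed
pre-commit install
pre-commit run --all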

Related

Run file in conda inside docker

I have Python code that I am attempting to wrap in a Docker image:
FROM continuumio/miniconda3
# Python 3.9.7 , Debian (use apt-get)
ENV TARGET=dev
RUN apt-get update
RUN apt-get install -y gcc
RUN apt-get install -y dos2unix
RUN apt-get install -y awscli
RUN conda install -y -c anaconda python=3.7
WORKDIR /app
COPY . .
RUN conda env create -f conda_env.yml
RUN echo "conda activate tensorflow_p36" >> ~/.bashrc
RUN pip install -r prod_requirements.txt
RUN pip install -r ./architectures/mask_rcnn/requirements.txt
RUN chmod +x aws_pipeline/set_env_vars.sh
RUN chmod +x aws_pipeline/start_gpu_aws.sh
RUN dos2unix aws_pipeline/set_env_vars.sh
RUN dos2unix aws_pipeline/start_gpu_aws.sh
RUN aws_pipeline/set_env_vars.sh $TARGET
Building the image works fine, and running it with the following command also works fine:
docker run --rm --name d4 -dit pd_v2 sh
My OS is Windows 11; when I use the Docker Desktop "CLI" button to enter the container, all I need to do is type "bash" and the conda environment "tensorflow_p36" is activated and I can run my code.
When I try docker exec in the following manner:
docker exec d4 bash && <path_to_sh_file>
I get an error that the file doesn't exist.
What is missing here? Thanks
Won't bash && <path_to_sh_file> enter a bash shell, successfully exit it, and then try to run your sh file in a new shell? I think it would be better to put #!/usr/bin/bash as the top line of your sh file, and to be sure the sh file has executable permissions (chmod a+x <path_to_sh_file>).
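For what it's worth, a sketch of invocations that keep everything inside the container (the script path placeholder is the question's own):
# Quoting the command keeps it in the container's shell instead of the host shell
docker exec d4 bash -c "<path_to_sh_file>"
# Or, once the file is executable and has a shebang line, run it directly
docker exec d4 <path_to_sh_file>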

pip install failing due to no 'setup.py' nor 'pyproject.toml' found

I have sh steps (as part of a Jenkinsfile Groovy script) which do
sh "python3 -m venv venv"
sh "source venv/bin/activate"
withCredentials([usernamePassword(credentialsId: XXXXXXX,
usernameVariable: 'XXXXXXX',
passwordVariable: 'XXXXXXX')]) {
sh "pip install --extra-index-url 'https://${XXXXXXX}:${XXXXXX}#atifactory-url-base/artifactory/api/pypi/pypi-release-local/simple' -e ."
}
sh "pip freeze >> requirements.txt"
However, the above fails with
ERROR: file:///home/jenkins/workspace/XXXXXXXXXXX does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.
The project I have has no setup.py or requirements.txt file at the top level - how can I do this without adding the current Python project for installation using -e?
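One caveat about the snippet itself (separate from the missing setup.py/pyproject.toml error): each Jenkins sh step runs in its own shell, so source venv/bin/activate in one step does not affect the pip commands in later steps. For the venv to matter, the commands would need to run in a single shell, e.g. (a sketch, keeping the question's placeholders):
python3 -m venv venv
. venv/bin/activate
pip install --extra-index-url "https://${XXXXXXX}:${XXXXXXX}@artifactory-url-base/artifactory/api/pypi/pypi-release-local/simple" -e .
pip freeze >> requirements.txt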

How to solve CDK CLI version mismatch

I'm getting following error:
This CDK CLI is not compatible with the CDK library used by your application. Please upgrade the CLI to the latest version.
(Cloud assembly schema version mismatch: Maximum schema version supported is 8.0.0, but found 9.0.0)
after issuing the cdk diff command.
I ran npm install -g aws-cdk@latest, after which I successfully installed new versions of the packages with pip install -r requirements.txt: Successfully installed aws-cdk.assets-1.92.0 aws-cdk.aws-apigateway-1.92.0 aws-cdk.aws-apigatewayv2-1.92.0 ... etc.
However after typing cdk --version I'm still getting 1.85.0 (build 5f44668).
The relevant part of my setup.py is as follows:
install_requires=[
    "aws-cdk.core==1.92.0",
    "aws-cdk.aws-ec2==1.92.0",
    "aws-cdk.aws_ecs==1.92.0",
    "aws-cdk.aws_elasticloadbalancingv2==1.92.0"
],
And I'm stuck now, as downgrading the packages in setup.py to 1.85.0 throws ImportError: cannot import name 'CapacityProviderStrategy' from 'aws_cdk.aws_ecs'.
Help :) - I would like to use the newest package versions.
I encountered this issue with a TypeScript package, after upgrading the CDK in package.json. As Maciej noted, upgrading did not seem to work. I am installing the CDK CLI with npm, and an uninstall followed by an install fixed the issue.
npm -g uninstall aws-cdk
npm -g install aws-cdk
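To confirm the reinstall took effect, a few standard checks (not part of the original answer):
which cdk           # path of the binary actually on the PATH
cdk --version       # should now report the new version
npm ls -g aws-cdk   # what npm thinks is installed globally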
So I've fixed it, but the steps were too chaotic to describe exactly.
It seems like there are problems with the symlink
/usr/local/bin/cdk
which was pointing to version 1.85.0 and not the one I updated to 1.92.0.
I removed the aws-cdk from the node_modules and installed it again, then removed the symlink /usr/local/bin/cdk and recreated it manually with
ln -s /usr/lib/node_modules/aws-cdk/bin/cdk /usr/local/bin/cdk
Nothing helped on macOS except this command:
yarn global upgrade aws-cdk@latest
For those coming here who are not using a global install (using cdk from node_modules) and using a mono-repo: this issue is due to the aws-cdk package in devDependencies not matching the version of the aws-cdk-lib dependency used by the packages.
I was using "aws-cdk": "2.18.0" in my root package.json, but all my packages were using "aws-cdk-lib": "2.32.1" as their dependency. Updating the root package.json to use "aws-cdk": "2.31.1" solved the issue.
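In shell terms the fix amounts to something like this (a sketch; it assumes the intent is to match the CLI to the library version used by the packages):
# From the repo root: pin the aws-cdk CLI devDependency to the aws-cdk-lib version
npm install --save-dev aws-cdk@2.32.1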
I have been experiencing this a few times as well, so I am just dropping the solution that helps me resolve version mismatches, particularly on existing projects.
For Python:
What you need to do is modify the setup.py to specify the latest version.
Either implicitly:
install_requires=[
    "aws-cdk.core",
    "aws-cdk.aws-ec2",
    "aws-cdk.aws_ecs",
    "aws-cdk.aws_elasticloadbalancingv2"
],
or explicitly:
install_requires=[
    "aws-cdk.core==1.xx.x",
    "aws-cdk.aws-ec2==1.xx.x",
    "aws-cdk.aws_ecs==1.xx.x",
    "aws-cdk.aws_elasticloadbalancingv2==1.xx.x"
],
Then from the project root, run:
python setup.py install
For TypeScript:
Modify package.json:
"dependencies": {
    "@aws-cdk/core": "latest",
    "source-map-support": "^0.5.16"
}
Then run from project root:
npm install
I hope this helps! Please let me know if I need to elaborate or provide more details.
Uninstall the CDK version:
npm uninstall -g aws-cdk
Install the specific version your application is using. For example, for CDK 1.158.0:
npm install -g aws-cdk@1.158.0
If you've made it down here, there are 2 options to overcome this issue:
1. Provision a Cloud9 env and run the code there.
2. Run the app in a container:
- Move the cdk app into a folder so that you have a parent folder in which to put a Dockerfile, a Makefile and a docker-compose file.
- Fire up Docker Desktop.
- cd into the root folder from the terminal and then run make.
Dockerfile contents
FROM ubuntu:20.04 as compiler
WORKDIR /app/
RUN apt-get update -y \
    && apt install python3 -y \
    && apt install python3-pip -y \
    && apt install python3-venv -y \
    && python3 -m venv venv
ARG NODE_VERSION=16
RUN ls
RUN apt-get update
RUN apt-get install -y xz-utils
RUN apt-get -y install curl
RUN apt-get update -y && \
    apt-get upgrade -y && \
    curl -sL https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - && \
    apt install -y nodejs
RUN apt-get install -y python-is-python3
RUN npm install -g aws-cdk
RUN python -m venv /opt/venv
docker-compose.yaml contents
version: '3.6'
services:
  cdk-base:
    build: .
    image: cdk-base
    command: ${COMPOSE_COMMAND:-bash}
    volumes:
      - .:/app
      - /var/run/docker.sock:/var/run/docker.sock # Needed so a docker container can be run from inside a docker container
      - ~/.aws/:/root/.aws:ro
Makefile content
SHELL=/bin/bash
CDK_DIR=cdk_app/
COMPOSE_RUN = docker-compose run --rm cdk-base
COMPOSE_UP = docker-compose up
PROFILE = --profile default

all: pre-reqs synth

pre-reqs: _prep-cache container-build npm-install

_prep-cache: # This resolves Error: EACCES: permission denied, open 'cdk.out/tree.json'
	mkdir -p cdk_websocket/cdk.out/

container-build: pre-reqs
	docker-compose build

container-info:
	${COMPOSE_RUN} make _container-info

_container-info:
	./containerInfo.sh

clear-cache:
	${COMPOSE_RUN} rm -rf ${CDK_DIR}cdk.out && rm -rf ${CDK_DIR}node_modules

cli: _prep-cache
	docker-compose run cdk-base /bin/bash

npm-install: _prep-cache
	${COMPOSE_RUN} make _npm-install

_npm-install:
	cd ${CDK_DIR} && ls && python3 -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt && npm -v && python --version

npm-update: _prep-cache
	${COMPOSE_RUN} make _npm-update

_npm-update:
	cd ${CDK_DIR} && npm update

synth: _prep-cache
	${COMPOSE_RUN} make _synth

_synth:
	cd ${CDK_DIR} && source .venv/bin/activate && pip install -r requirements.txt && cdk synth --no-staging ${PROFILE} && cdk deploy --require-approval never ${PROFILE}

bootstrap: _prep-cache
	${COMPOSE_RUN} make _bootstrap

_bootstrap:
	cd ${CDK_DIR} && source .venv/bin/activate && pip install -r requirements.txt && cdk bootstrap ${PROFILE}

deploy: _prep-cache
	${COMPOSE_RUN} make _deploy

_deploy:
	cd ${CDK_DIR} && cdk deploy --require-approval never ${PROFILE}

destroy:
	${COMPOSE_RUN} make _destroy

_destroy:
	cd ${CDK_DIR} && source .venv/bin/activate && pip install -r requirements.txt && cdk destroy --force ${PROFILE}

diff: _prep-cache
	${COMPOSE_RUN} make _diff

_diff: _prep-cache
	cd ${CDK_DIR} && cdk diff ${PROFILE}

test:
	${COMPOSE_RUN} make _test

_test:
	cd ${CDK_DIR} && npm test
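With the three files in place, typical usage from the parent folder would be (targets as defined in the Makefile above):
make          # default target: pre-reqs + synth (build container, install deps, synth & deploy)
make diff     # run cdk diff inside the container
make cli      # drop into a bash shell inside the container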
This worked for me in my dev environment: since I had re-cloned the source, I needed to re-run the npm install command. A sample package.json might look like this (update the versions as needed):
{
    "dependencies": {
        "aws-cdk": "2.27.0",
        "node": "^16.14.0"
    }
}
I changed the "typescript": "^4.7.3" version and it worked.
Note: CDK version 2 ("aws-cdk-lib": "^2.55.0").

How do I setup only python 2.7 in a docker container?

I have a Node app, and in one use case I am calling a Python script from Node using python-shell. I am trying to set up this app on Docker and my Dockerfile looks something like this:
FROM debian:latest
# replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# update the repository sources list
# and install dependencies
RUN apt-get update \
    && apt-get install -y curl \
    && apt-get -y autoclean
# nvm environment variables
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 10.15.3
# install nvm
# https://github.com/creationix/nvm#install-script
RUN curl --silent -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash
# install node and npm
RUN source $NVM_DIR/nvm.sh \
    && nvm install $NODE_VERSION \
    && nvm alias default $NODE_VERSION \
    && nvm use default
# add node and npm to path so the commands are available
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# confirm installation
RUN node -v
RUN npm -v
RUN apt-get -y install python2.7
COPY package.json .
RUN npm install
COPY . .
CMD ["npm","run","start"]
After building and running this container, when I try to invoke the use case where the Python script gets called from Node, I get this error:
null
events.js:174
throw er; // Unhandled 'error' event
^
Error: spawn /usr/lib/python2.7 EACCES
at Process.ChildProcess._handle.onexit (internal/child_process.js:240:19)
at onErrorNT (internal/child_process.js:415:16)
at process._tickCallback (internal/process/next_tick.js:63:19)
Emitted 'error' event at:
at Process.ChildProcess._handle.onexit (internal/child_process.js:246:12)
at onErrorNT (internal/child_process.js:415:16)
at process._tickCallback (internal/process/next_tick.js:63:19)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
How can I set up just Python 2.7 in a Docker container?
You can use the Python base image:
FROM python:2.7
This base image will have Python pre-configured, and you don't need to install Python separately. Hope it helps.
Here is the list of available images.
For quick reference please check
https://blog.realkinetic.com/building-minimal-docker-containers-for-python-applications-37d0272c52f3
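As a quick sanity check of that base image (a one-off command, not from the answer):
docker run --rm python:2.7 python --version   # prints a Python 2.7.x version string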
You can use "FROM python:2.7" , the base image.
FROM python:2.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./your-daemon-or-script.py" ]
Please find some examples in the image's documentation.
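Assuming the Dockerfile above sits next to your script and requirements.txt, building and running could look like this (the image tag my-py27-app is hypothetical):
docker build -t my-py27-app .
docker run --rm my-py27-app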

How to deploy a custom docker image on Elastic Beanstalk?

Looking at this blog, step 5 ("Create Dockerfile"), it seems I had to create a new Dockerfile pointing to my private image on Docker.io.
And since the last command must start an executable (or the Docker image would end up in nirvana), there is the supervisord at the end:
# This is the location of our docker container.
FROM flux7/wp-site
RUN apt-get install supervisor
RUN mkdir -p /var/log/supervisor
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 80
CMD supervisord -c /etc/supervisor/conf.d/supervisord.conf
This is a bit confusing to me, because I have a fully tested custom Docker image that ends with supervisord, see below:
FROM ubuntu:14.04.2
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
RUN apt-get -y update && apt-get upgrade -y
RUN apt-get install supervisor python build-essential python-dev python-pip python-setuptools -y
RUN apt-get install libxml2-dev libxslt1-dev python-dev -y
RUN apt-get install libpq-dev postgresql-common postgresql-client -y
RUN apt-get install openssl openssl-blacklist openssl-blacklist-extra -y
RUN apt-get install nginx -y
RUN pip install "pip>=7.0"
RUN pip install virtualenv uwsgi
RUN mkdir -p /var/log/supervisor
ADD canonicaliser_api /home/ubuntu/canonicaliser_api
ADD config_local.py /home/ubuntu/canonicaliser_api/config/config_local.py
RUN virtualenv /home/ubuntu/canonicaliser_api/venv
RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && pip install -r /home/ubuntu/canonicaliser_api/requirements.txt
RUN export CFLAGS=-I/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/numpy/core/include/
RUN source /home/ubuntu/canonicaliser_api/venv/bin/activate && cd /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/ && python setup.py build_ext --inplace
RUN cp /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/canonicaliser/cython_extensions/*.so /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions
RUN rm -rf /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/canonicaliser
RUN rm -r /home/ubuntu/canonicaliser_api/canonicaliser/cython_extensions/build
RUN mkdir /var/run/flask-uwsgi
RUN chown -R www-data:www-data /var/run/flask-uwsgi
RUN mkdir /var/log/flask-uwsgi
ADD flask-uwsgi.ini /etc/flask-uwsgi/
ADD supervisord.conf /etc/supervisor/conf.d/supervisord.conf
EXPOSE 8888
CMD ["/usr/bin/supervisord"]
So how do I serve my custom image (CMD ?) instead of using supervisord? Unless I'm overlooking something....
UPDATE
I have applied the suggested updates, but it fails to authenticate to the private repo on Docker Hub.
[2015-08-11T14:02:10.489Z] INFO [1858] - [CMD-Startup/StartupStage0/AppDeployPreHook/03build.sh] : Activity execution failed, because: WARNING: Invalid auth configuration file
Pulling repository houmie/canon
time="2015-08-11T14:02:08Z" level="fatal" msg="Error: image houmie/canon:latest not found"
Failed to pull Docker image houmie/canon:latest, retrying...
WARNING: Invalid auth configuration file
The dockercfg inside a folder called docker inside the S3 bucket is:
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "xxxx",
            "email": "xxx@gmail.com"
        }
    }
}
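Worth noting as a possible cause of the "Invalid auth configuration file" warning (an educated guess, not a confirmed fix): version 1 of Dockerrun.aws.json historically expected the older .dockercfg layout, without the auths wrapper, i.e. something like:
{
    "https://index.docker.io/v1/": {
        "auth": "xxxx",
        "email": "xxx@gmail.com"
    }
}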
The Dockerrun.aws.json is:
{
    "AWSEBDockerrunVersion": "1",
    "Authentication": {
        "Bucket": "dd-xxx-ir-01",
        "Key": "docker/dockercfg"
    },
    "Image": {
        "Name": "houmie/canon",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "8888"
        }
    ]
}
When deploying containers with Elastic Beanstalk, you can tell it to build your image locally on each host from a Dockerfile defined by you, or to use a pre-built image from a registry.
You don't necessarily have to recreate your image, you may just use one you already have (be it on Docker Hub or a private registry).
If your application runs on an image that is available in a hosted repository, you can specify the image in a Dockerrun.aws.json file and omit the Dockerfile.
If your registry account demands authentication, then you need to provide a .dockercfg file in an S3 bucket, which will be pulled by the Docker hosts (so you need the proper permissions given to the instances via an IAM role).
Declare the .dockercfg file in the Authentication parameter of the Dockerrun.aws.json file. Make sure that the Authentication parameter contains a valid Amazon S3 bucket and key. The Amazon S3 bucket must be hosted in the same region as the environment that is using it. Elastic Beanstalk will not download files from Amazon S3 buckets hosted in other regions. Grant permissions for the action s3:GetObject to the IAM role in the instance profile.
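For illustration, a minimal instance-profile policy statement granting that permission might look like this (the bucket name and key match the example below and are otherwise hypothetical):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::myBucket/.dockercfg"
        }
    ]
}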
So, your Dockerrun.aws.json may look like this (considering your image is hosted on Docker Hub):
{
    "AWSEBDockerrunVersion": "1",
    "Authentication": {
        "Bucket": "myBucket",
        "Key": ".dockercfg"
    },
    "Image": {
        "Name": "yourRegistryUser/yourImage",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "1234"
        }
    ],
    "Volumes": [
        {
            "HostDirectory": "/var/app/mydb",
            "ContainerDirectory": "/etc/mysql"
        }
    ],
    "Logging": "/var/log/nginx"
}
Look at the official documentation for further details about the configuration and available options.
As for what command you run (supervisord, whatever), it doesn't matter.
