Hi, I am working in Jenkins to build my AWS CDK project. I have created my Dockerfile as below:
FROM python:3.7.4-alpine3.10
ENV CDK_VERSION='1.14.0'
RUN mkdir /cdk
COPY ./requirements.txt /cdk/
COPY ./entrypoint.sh /usr/local/bin/
COPY ./aws /cdk/
WORKDIR /cdk
RUN apk -uv add --no-cache groff jq less
RUN apk add --update nodejs npm
RUN apk add --update bash && rm -rf /var/cache/apk/*
RUN npm install -g aws-cdk
RUN pip3 install -r requirements.txt
RUN ls -la
ENTRYPOINT ["entrypoint.sh"]
RUN cdk synth
RUN cdk deploy
In Jenkins I am building this Docker image as below:
stages {
stage('Dev Code Deploy') {
when {
expression {
return BRANCH_NAME == 'Develop'
}
}
agent {
dockerfile {
additionalBuildArgs "--build-arg 'http_proxy=${env.http_proxy}' --build-arg 'https_proxy=${env.https_proxy}'"
filename 'Dockerfile'
args '-u root:root'
}
}
In the above code I am not supplying AWS credentials, so when cdk synth is executed I get the error: Need to perform AWS calls for account 1234567, but no credentials found. Tried: default credentials.
In Jenkins I have my AWS credentials and I can access them like this:
steps {
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',credentialsId: "${env.PROJECT_ID}-aws-${env.ENVIRONMENT}"]]) {
sh 'ls -la'
sh "bash ./scripts/build.sh"
}
}
But how can I pass this credentialsId when building the Docker image? Can someone help me figure it out? Any help would be appreciated. Thanks.
I am able to pass the credentials as below:
steps {
script {
node {
checkout scm
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding',credentialsId: "${env.PROJECT_ID}-aws-${CFN_ENVIRONMENT}"]]) {
abc = docker.build('cdkimage', "--build-arg http_proxy=${env.http_proxy} --build-arg https_proxy=${env.https_proxy} .")
abc.inside {
sh 'ls -la'
sh "bash ./scripts/build.sh"
}
}
}
}
}
I have added the below code in build.sh:
cdk synth
cdk deploy
You should install the "Amazon ECR" plugin and restart Jenkins.
Configure the plugin with your credentials and specify them in the pipeline.
You can find all the documentation here: https://wiki.jenkins.io/display/JENKINS/Amazon+ECR
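For example, a rough sketch of how this is typically wired up together with the Docker Pipeline plugin (the account ID, region and credentials ID below are placeholders, not values from the question):
script {
    // Assumes the Amazon ECR and Docker Pipeline plugins are installed.
    // 'ecr:<region>:<credentials-id>' tells the ECR plugin which stored AWS
    // credential to use when authenticating against the registry.
    docker.withRegistry('https://123456789012.dkr.ecr.eu-west-1.amazonaws.com', 'ecr:eu-west-1:my-aws-credentials-id') {
        def image = docker.build('cdkimage')
        image.push('latest')
    }
}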
If you're using a Jenkins pipeline, maybe you can try the withAWS step.
This should provide a way to access the Jenkins AWS credentials and then pass them as Docker environment variables while running the Docker container.
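For example, a minimal sketch (assuming the pipeline-aws-plugin is installed; the credentials ID and region are placeholders, and the plugin is expected to expose the bound credentials as environment variables inside the block):
script {
    // withAWS comes from the pipeline-aws-plugin; it should export
    // AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (and AWS_SESSION_TOKEN where
    // applicable) for the nested steps, which are forwarded into the container.
    withAWS(credentials: 'my-aws-credentials-id', region: 'eu-west-1') {
        docker.build('cdkimage').inside("-e AWS_ACCESS_KEY_ID=${env.AWS_ACCESS_KEY_ID} -e AWS_SECRET_ACCESS_KEY=${env.AWS_SECRET_ACCESS_KEY} -e AWS_DEFAULT_REGION=eu-west-1") {
            sh 'cdk synth'
            sh 'cdk deploy'
        }
    }
}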
ref:
https://github.com/jenkinsci/pipeline-aws-plugin
https://jenkins.io/doc/book/pipeline/docker/
I work with the nsq_to_file utility while running some automation code, and I want to run that utility as a docker-compose service. I can't find any documentation about using this utility with Docker. I use it as follows:
./nsq_to_file --lookupd-http-address=<http_address> --topic=ta-gcp-test -output-dir=/path/to/local/dir -filename-format=local_file_name
Does anyone have any input on that?
You can build a Docker image with the nsq_to_file executable; it would look like this:
#
# build container
#
FROM golang:1.17-alpine as builder
RUN apk update && apk add git
RUN git clone https://github.com/nsqio/nsq
RUN cd nsq/apps/nsq_to_file/ && CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -ldflags="-w -s" -o /nsq_to_file .
#
# scratch release container
#
FROM scratch as scratch
COPY --from=builder /nsq_to_file /nsq_to_file
COPY --from=builder /etc/ssl/certs /etc/ssl/certs
# Run as non-root user for secure environments
USER 59000:59000
ENTRYPOINT [ "/nsq_to_file" ]
You can then build it and run it:
docker build -t oliver006/nsq_to_file -f Dockerfile .
docker run --rm oliver006/nsq_to_file --lookupd-http-address=<http_address> --topic=ta-gcp-test ...
I'm trying to mount a volume in a Docker container, but I have some problems with it. I have a simple Python script in a Docker container that creates a file "links.json", and I would like to access this file from the host filesystem.
This is my Dockerfile:
FROM python:3.6-slim
COPY . /srv/app
WORKDIR /srv/app
RUN pip install -r requirements.txt --src /usr/local/src
CMD [ "python", "main.py" ]
I have created the volume with:
docker volume create my-data
And I'm running this container with the command:
docker run --mount source=my-data,target=/srv/app 3743b8d3b043
I've tried it on macOS.
When I run docker volume inspect my-data, I get this result:
[
{
"CreatedAt": "2019-08-15T08:30:48Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/my-data/_data",
"Name": "my-data",
"Options": {},
"Scope": "local"
}
]
But all directories like /var/lib/docker/volumes, and the directories of this code, are empty. Do you have any idea where the problem is?
Thanks a lot!
You are overwriting all the data in /srv/app that you added during the build process. You may change your mount to use a target other than /srv/app.
Update
Start your container using:
docker run -v /full/path/folder:/srv/app/data IMAGE
I have a Node.js application that works on my machine, since I have Python installed and it's in the global PATH (also in process.env.PATH), so I can run:
const spawn = require("child_process").spawn;
console.log('PATH:::::');
console.log(process.env.PATH);
const pythonProcess = spawn('python', ["./detect_shapes.py", './example2.png']);
pythonProcess.stdout.on('data', (data) => {
console.log('DATA::::');
console.log(data);
res.render('index', {data});
});
The script above basically runs a separate Python script inside my Node.js application and returns a response to it. I can run basic commands that can be found on any machine like this: const pythonProcess = spawn('ls');. This line of code will run the ls command and return the files, as it is expected to do.
I also have a Dockerfile like this:
FROM node:9-slim
WORKDIR /app
COPY . /app
RUN npm install
EXPOSE 3000
CMD ["node", "index.js"]
I have created Node.js applications with this exact Dockerfile config before and it worked. Since I am using child_process.spawn, maybe it doesn't know about Python or its path, so I am getting this error:
Error: spawn python ENOENT
at Process.ChildProcess._handle.onexit (internal/child_process.js:201:19)
at onErrorNT (internal/child_process.js:379:16)
at process._tickCallback (internal/process/next_tick.js:178:19)
Emitted 'error' event at:
at Process.ChildProcess._handle.onexit (internal/child_process.js:207:12)
at onErrorNT (internal/child_process.js:379:16)
at process._tickCallback (internal/process/next_tick.js:178:19)
I tried adding RUN apt-get install python -y in my Dockerfile so that it installs Python in my Docker image and I can use it, but it doesn't work. Do I have to add another FROM <image> that can install Python (I am thinking that node:9-slim doesn't know how to install Python since it's not meant for that) in the Dockerfile, so Docker knows how to download Python and I can use it?
Also, when I print process.env.PATH in my Docker container I get this: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin. How can I know whether Python is working in my image, and/or how can I add it to my PATH if that is the problem?
I am new to Docker. I learned it yesterday, so if I didn't make things clear or you need more information please PM me or leave a comment.
In fact, this is not a Docker question, just a Debian question. You always need to run apt-get update before installing a package. So, for your scenario, it should be:
RUN apt-get update || : && apt-get install python -y
As per your comments:
W: Failed to fetch http://deb.debian.org/debian/dists/jessie-updates/InRelease Unable to find expected entry 'main/binary-amd64/Packages' in Release file (Wrong sources.list entry or malformed file) E: Some index files failed to download. They have been ignored, or old ones used instead. The command '/bin/sh -c apt-get update && apt-get install python -y' returned a non-zero code: 100
So you can add || : after apt-get update to ignore the error; by that point the Python package metadata has already finished downloading from the other URLs that were hit, so you can bypass the error.
Update:
A complete working solution, in case you want to compare:
a.py:
print("success")
index.js:
const spawn = require("child_process").spawn;
console.log('PATH:::::');
console.log(process.env.PATH);
const pythonProcess = spawn('python', ['/app/a.py']);
pythonProcess.stdout.on('data', (data) => {
console.log('DATA::::');
console.log(data.toString());
});
pythonProcess.stderr.on('data', (data) => {
console.log("wow");
console.log(data.toString());
});
Dockerfile:
FROM node:9-slim
RUN apt-get update || : && apt-get install python -y
WORKDIR /app
COPY . /app
CMD ["node", "index.js"]
Try the commands:
orange@orange:~/gg$ docker build -t abc:1 .
Sending build context to Docker daemon 4.096kB
...
Successfully built 756b13952760
Successfully tagged abc:1
orange@orange:~/gg$ docker run abc:1
PATH:::::
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DATA::::
success
I want to run my Python program on IBM Cloud Functions; because of its dependencies, this needs to be done in an OpenWhisk Docker action. I've changed my code so that it accepts a JSON argument:
json_input = json.loads(sys.argv[1])
INSTANCE_NAME = json_input['INSTANCE_NAME']
I can run it from the terminal:
python main/main.py '{"INSTANCE_NAME": "example"}'
I've added this Python program to OpenWhisk with this Dockerfile:
# Dockerfile for example whisk docker action
FROM openwhisk/dockerskeleton
ENV FLASK_PROXY_PORT 8080
### Add source file(s)
ADD requirements.txt /action/requirements.txt
RUN cd /action; pip install -r requirements.txt
# Move the file to
ADD ./main /action
# Rename our executable Python action
ADD /main/main.py /action/exec
CMD ["/bin/bash", "-c", "cd actionProxy && python -u actionproxy.py"]
But now, if I run it using the IBM Cloud CLI, I just get my JSON back:
ibmcloud fn action invoke --result e2t-whisk --param-file ../env_var.json
# {"INSTANCE_NAME": "example"}
And if I run it from the IBM Cloud Functions website with the same JSON input, I get an error as if it's not even there:
stderr: INSTANCE_NAME = json_input['INSTANCE_NAME']",
stderr: KeyError: 'INSTANCE_NAME'"
What could be wrong, given that the code runs when invoked directly but not from the OpenWhisk container?
I have a project being built on Jenkins using the multibranch pipeline plugin. I am using the declarative pipeline syntax and my Jenkinsfile looks something like this:
pipeline {
agent { label 'blah' }
options {
timeout(time: 2, unit: 'HOURS')
buildDiscarder(logRotator(numToKeepStr: '5'))
}
triggers { pollSCM('H/5 * * * *') }
stages {
stage('Prepare') {
steps {
sh '''
echo "Building environment"
python3 -m venv venv && \
pip install git+ssh://git@my_private_repo.git
'''
}
}
}
}
When the build is run on the Jenkins box, it fails, and when I check the console output it is failing on the pip install command with the error:
Permission denied (publickey).
fatal: Could not read from remote repository.
I am guessing that I need to add the required SSH key to the Jenkins build environment, but I am not sure how to do this.
You need to install the SSH Agent plugin and use it to wrap the actions in the steps directive in order to be able to pull from a private repository. You enable the SSH Agent with the sshagent directive, where you need to pass in an argument representing the hash for a valid key with read permissions to the git repository. The key needs to be available in the global credentials view of Jenkins (Jenkins -> Credentials [on the left-hand side menu], search for the ID field of the right key), e.g.:
stage('Prepare') {
steps {
sshagent(['<hash_for_your_key>']) {
echo "Building environment"
sh "python3.5 -m venv venv"
sh "venv/bin/python3.5 venv/bin/pip install git+ssh://git#my_private_repo.git
}
}
}
N.B.: Because the actions under the steps directive are executed as subprocesses, you'll need to explicitly call the executable files from the virtual environment, using the long syntax.