How do I set up only Python 2.7 in a Docker container?

I have a Node app, and in one use case I call a Python script from Node using python-shell. I am trying to set up this app in Docker, and my Dockerfile looks something like this:
FROM debian:latest
# replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# update the repository sources list
# and install dependencies
RUN apt-get update \
&& apt-get install -y curl \
&& apt-get -y autoclean
# nvm environment variables
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 10.15.3
# install nvm
# https://github.com/creationix/nvm#install-script
RUN curl --silent -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.2/install.sh | bash
# install node and npm
RUN source $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
# add node and npm to path so the commands are available
ENV NODE_PATH $NVM_DIR/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# confirm installation
RUN node -v
RUN npm -v
RUN apt-get -y install python2.7
COPY package.json .
RUN npm install
COPY . .
CMD ["npm","run","start"]
After building and running this container, when I try to invoke the use case where the Python script gets called from Node, I get this error:
null
events.js:174
throw er; // Unhandled 'error' event
^
Error: spawn /usr/lib/python2.7 EACCES
at Process.ChildProcess._handle.onexit (internal/child_process.js:240:19)
at onErrorNT (internal/child_process.js:415:16)
at process._tickCallback (internal/process/next_tick.js:63:19)
Emitted 'error' event at:
at Process.ChildProcess._handle.onexit (internal/child_process.js:246:12)
at onErrorNT (internal/child_process.js:415:16)
at process._tickCallback (internal/process/next_tick.js:63:19)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
Any help on setting up just Python 2.7 in a Docker container?

You can use the Python base image:
FROM python:2.7
This base image will have Python pre-configured, so you don't need to install Python separately. Hope it helps.
Here is the list of available images.
For quick reference please check
https://blog.realkinetic.com/building-minimal-docker-containers-for-python-applications-37d0272c52f3
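Since the app in the question also needs Node, another option (a rough sketch of my own, not from the original answer; the Node version and setup script are assumptions) is to start from python:2.7 and install Node.js on top of it:
FROM python:2.7
# Python 2.7 is already available in this base image
# install Node.js 10.x via the NodeSource setup script (curl is included in the full python:2.7 image)
RUN curl -fsSL https://deb.nodesource.com/setup_10.x | bash - \
&& apt-get install -y nodejs
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "run", "start"]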

You can use "FROM python:2.7" , the base image.
FROM python:2.7
WORKDIR /usr/src/app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD [ "python", "./your-daemon-or-script.py" ]
Please find some examples here (documentation link).

Related

AppDynamics Serverless APM for Lambda. Auto instrument docker layer extension. Python app. Error = Extension.Crash

I'm trying to implement APM Serverless for AWS Lambda in a Python function. The function is deployed via a container image, so the extension is built as a layer in the Dockerfile.
Firstly, in case someone is trying to auto-instrument this process: the extension must be unzipped into /opt/extensions/, not into /opt/ as suggested in the docs. Otherwise, the Lambda won't see the extension.
This is the Dockerfile:
FROM amazon/aws-cli:2.2.4 AS downloader
ARG version_number=10
ARG region=
ENV AWS_REGION=${region}
ENV VERSION_NUMBER=${version_number}
RUN yum install -y jq curl
WORKDIR /aws
RUN aws lambda get-layer-version-by-arn --arn arn:aws:lambda:$AWS_REGION:716333212585:layer:appdynamics-lambda-extension:$VERSION_NUMBER | jq -r '.Content.Location' | xargs curl -o extension.zip
# set base image (host OS)
FROM python:3.6.8-slim
ENV APPDYNAMICS_PYTHON_AUTOINSTRUMENT=true
# set the working directory for AppD
WORKDIR /opt
RUN apt-get clean \
&& apt-get -y update \
&& apt-get -y install python3-dev \
python3-psycopg2 \
&& apt-get -y install build-essential
COPY --from=downloader /aws/extension.zip .
RUN apt-get install -y unzip && unzip extension.zip -d /opt/extensions/ && rm -f extension.zip
# set the working directory in the container
WORKDIR /code
RUN pip install --upgrade pip \
&& pip install awslambdaric && pip install appdynamics-lambda-tracer
# copy the dependencies file to the working directory
COPY requirements.txt .
# install dependencies
RUN pip install -r requirements.txt
# copy the content of the local src directory to the working directory
COPY /src .
RUN chmod 644 $(find . -type f) \
&& chmod 755 $(find . -type d)
# command to run on container start
ENTRYPOINT [ "/usr/local/bin/python", "-m", "awslambdaric" ]
CMD [ "app.lambda_handler" ]
However, when executing the function, I get the following error:
AWS_Execution_Env is not supported, running lambda with default arguments.
EXTENSION Name: appdynamics-extension-script State: Started Events: []
Start RequestId <requestid>
End RequestId <requestid>
Error: exit code 0 Extension.Crash
There is no further information.
When I try to implement the tracer manually, by installing appdynamics-lambda-tracer with pip and importing the module, I do see the AppDynamics logs in CloudWatch, but they don't report to the controller.
Any idea what could be causing said crash?

run mitmproxy-node inside of a Docker container?

Please note that the Docker container is run inside a Jenkins pipeline. I am trying to run a specific npm package, mitmproxy-node, in a Docker container; it has a dependency on the Python mitmproxy package. I think I need to have the Python dependency inside the Node container so that the Node code can find and run/instantiate mitmproxy at runtime (it is turned on via a runtime process.env flag).
How do I construct/fix a Dockerfile to build the container so that the Node container, where the test runner exists, can find and use the Python mitmproxy code?
I have something like this.
FROM node:14.15-buster
COPY .package*.json .package*.json
COPY .npmrc .npmrc
is it here? RUN apt-get install python ?
RUN npm install
COPY . .
CMD ("npm", "test")
When trying to instantiate mitmproxy, it throws an error
Error in beforeSession:
Error: mitmdump, which is an executable that ships with mitmproxy, is not on your PATH.
Please ensure that you can run mitmdump --version successfully from your command line.
I am pretty new when it comes to Docker so your help is appreciated.
Here's the full Dockerfile that got me to where I am going:
FROM node:14.15.0-buster
WORKDIR /usr/src/app
RUN apt-get update || : && apt-get install python3 -y -V
ENV PATH="${PATH}:/usr/bin/python3"
RUN apt-get install python3-pip -y -V
RUN pip3 install mitmproxy
ENV PATH="${PATH}:/usr/bin/mitmproxy"
ARG NPM_TOKEN
COPY package*.json ./
COPY .npmrc .
RUN npm install
COPY . .
CMD ["npm", "test"]

How to solve CDK CLI version mismatch

I'm getting the following error:
This CDK CLI is not compatible with the CDK library used by your application. Please upgrade the CLI to the latest version.
(Cloud assembly schema version mismatch: Maximum schema version supported is 8.0.0, but found 9.0.0)
after issuing the cdk diff command.
I did run npm install -g aws-cdk@latest, after which I successfully installed new versions of the packages with pip install -r requirements.txt: Successfully installed aws-cdk.assets-1.92.0 aws-cdk.aws-apigateway-1.92.0 aws-cdk.aws-apigatewayv2-1.92.0 ... etc.
However, after typing cdk --version I'm still getting 1.85.0 (build 5f44668).
The relevant part of my setup.py is as follows:
install_requires=[
    "aws-cdk.core==1.92.0",
    "aws-cdk.aws-ec2==1.92.0",
    "aws-cdk.aws_ecs==1.92.0",
    "aws-cdk.aws_elasticloadbalancingv2==1.92.0"
],
And I'm stuck now, as downgrading the packages in setup.py to 1.85.0 throws ImportError: cannot import name 'CapacityProviderStrategy' from 'aws_cdk.aws_ecs'.
Help :), I would like to use the newest package versions.
I encountered this issue with a TypeScript package after upgrading the CDK in package.json. As Maciej noted, upgrading did not seem to work. I am installing the CDK CLI with npm, and an uninstall followed by an install fixed the issue.
npm -g uninstall aws-cdk
npm -g install aws-cdk
So I've fixed it, but it is too chaotic to describe all the steps.
It seems like there were problems with the symlink /usr/local/bin/cdk, which was pointing to version 1.85.0 and not the 1.92.0 I updated to.
I removed aws-cdk from node_modules and installed it again, then removed the symlink /usr/local/bin/cdk and recreated it manually with:
ln -s /usr/lib/node_modules/aws-cdk/bin/cdk /usr/local/bin/cdk
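To confirm which cdk binary is actually being picked up and what it points at (my own addition, not from the original answer), something like this helps:
which cdk
ls -l $(which cdk)
cdk --version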
Nothing helped on macOS except this command:
yarn global upgrade aws-cdk@latest
For those coming here who are not using a global install (using cdk from node_modules) and are using a mono-repo: this issue is due to the aws-cdk package version in devDependencies not matching the aws-cdk-lib version in the packages' dependencies.
I was using "aws-cdk": "2.18.0" in my root package.json, but all my packages were using "aws-cdk-lib": "2.32.1" as their dependency. Updating the root package.json to use "aws-cdk": "2.31.1" solved the issue.
I have experienced this a few times as well, so I am just dropping this solution, which helps me resolve version mismatches, particularly on existing projects.
For Python:
What you need to do is modify the setup.py to specify the latest version.
Either implicitly:
install_requires=[
    "aws-cdk.core",
    "aws-cdk.aws-ec2",
    "aws-cdk.aws_ecs",
    "aws-cdk.aws_elasticloadbalancingv2"
],
or explicitly:
install_requires=[
    "aws-cdk.core==1.xx.x",
    "aws-cdk.aws-ec2==1.xx.x",
    "aws-cdk.aws_ecs==1.xx.x",
    "aws-cdk.aws_elasticloadbalancingv2==1.xx.x"
],
Then, from the project root, run:
python setup.py install
For TypeScript:
Modify package.json:
"dependencies": {
    "@aws-cdk/core": "latest",
    "source-map-support": "^0.5.16"
}
Then run from project root:
npm install
I hope this helps! Please let me know if I need to elaborate or provide more details.
Uninstall the CDK version:
npm uninstall -g aws-cdk
Install the specific version which your application is using. For example, for CDK 1.158.0:
npm install -g aws-cdk@1.158.0
If you've made it down here, there are two options to overcome this issue:
Provision a Cloud9 environment and run the code there.
Run the app in a container (the approach described below).
Move the CDK app into a folder so that you have a parent folder in which to put a Dockerfile, a Makefile and a docker-compose file.
Fire up Docker Desktop.
cd into the root folder from the terminal and then run make.
Dockerfile contents
FROM ubuntu:20.04 as compiler
WORKDIR /app/
RUN apt-get update -y \
&& apt install python3 -y \
&& apt install python3-pip -y \
&& apt install python3-venv -y \
&& python3 -m venv venv
ARG NODE_VERSION=16
RUN ls
RUN apt-get update
RUN apt-get install -y xz-utils
RUN apt-get -y install curl
RUN apt-get update -y && \
apt-get upgrade -y && \
apt-get install -y && \
curl -sL https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - && \
apt install -y nodejs
RUN apt-get install -y python-is-python3
RUN npm install -g aws-cdk
RUN python -m venv /opt/venv
docker-compose.yaml contents
version: '3.6'
services:
  cdk-base:
    build: .
    image: cdk-base
    command: ${COMPOSE_COMMAND:-bash}
    volumes:
      - .:/app
      - /var/run/docker.sock:/var/run/docker.sock # needed so a Docker container can be run from inside a Docker container
      - ~/.aws/:/root/.aws:ro
Makefile content
SHELL=/bin/bash
CDK_DIR=cdk_app/
COMPOSE_RUN = docker-compose run --rm cdk-base
COMPOSE_UP = docker-compose up
PROFILE = --profile default
all: pre-reqs synth
pre-reqs: _prep-cache container-build npm-install
_prep-cache: # this resolves Error: EACCES: permission denied, open 'cdk.out/tree.json'
	mkdir -p cdk_websocket/cdk.out/
container-build: pre-reqs
	docker-compose build
container-info:
	${COMPOSE_RUN} make _container-info
_container-info:
	./containerInfo.sh
clear-cache:
	${COMPOSE_RUN} rm -rf ${CDK_DIR}cdk.out && rm -rf ${CDK_DIR}node_modules
cli: _prep-cache
	docker-compose run cdk-base /bin/bash
npm-install: _prep-cache
	${COMPOSE_RUN} make _npm-install
_npm-install:
	cd ${CDK_DIR} && ls && python3 -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt && npm -v && python --version
npm-update: _prep-cache
	${COMPOSE_RUN} make _npm-update
_npm-update:
	cd ${CDK_DIR} && npm update
synth: _prep-cache
	${COMPOSE_RUN} make _synth
_synth:
	cd ${CDK_DIR} && source .venv/bin/activate && pip install -r requirements.txt && cdk synth --no-staging ${PROFILE} && cdk deploy --require-approval never ${PROFILE}
bootstrap: _prep-cache
	${COMPOSE_RUN} make _bootstrap
_bootstrap:
	cd ${CDK_DIR} && source .venv/bin/activate && pip install -r requirements.txt && cdk bootstrap ${PROFILE}
deploy: _prep-cache
	${COMPOSE_RUN} make _deploy
_deploy:
	cd ${CDK_DIR} && cdk deploy --require-approval never ${PROFILE}
destroy:
	${COMPOSE_RUN} make _destroy
_destroy:
	cd ${CDK_DIR} && source .venv/bin/activate && pip install -r requirements.txt && cdk destroy --force ${PROFILE}
diff: _prep-cache
	${COMPOSE_RUN} make _diff
_diff: _prep-cache
	cd ${CDK_DIR} && cdk diff ${PROFILE}
test:
	${COMPOSE_RUN} make _test
_test:
	cd ${CDK_DIR} && npm test
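With the Dockerfile, docker-compose.yaml and Makefile in place, typical usage (my reading of the targets above, run from the parent folder) would be:
make            # build the container, install dependencies, synth and deploy
make bootstrap  # one-time CDK bootstrap for the target account/region
make diff       # run cdk diff inside the container
make destroy    # tear the stack down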
This worked for me in my dev environment: since I had re-cloned the source, I needed to re-run the npm install command. A sample package.json might look like this (update the versions as needed):
{
  "dependencies": {
    "aws-cdk": "2.27.0",
    "node": "^16.14.0"
  }
}
I changed the typescript version ("typescript": "^4.7.3") and it worked.
Note: CDK version 2 ("aws-cdk-lib": "^2.55.0").

/bin/sh python not found error when running docker base image

I am trying to run a Docker base image but am encountering the error /bin/sh: 1: python: not found. I am first building a parent image and then modifying it, using the bash script below
#!/usr/bin/env bash
docker build -t <image_name>:latest .
docker run <image_name>:latest
docker push <image_name>:latest
and the Dockerfile
FROM ubuntu:18.04
# Installing Python
RUN apt-get update \
&& apt-get install -y python3-pip python3-dev \
&& cd /usr/local/bin \
&& ln -s /usr/bin/python3 python \
&& pip3 install Pillow boto3
WORKDIR /app
After that, I run the following script to create and run the base image:
#!/usr/bin/env bash
docker build -t <base_image_name>:latest .
docker run -it <base_image_name>:latest
with the following Dockerfile:
FROM <image_name>:latest
COPY app.py /app
# Run app.py when the container launches
CMD python /app/app.py
I have also tried installing python through the Dockerfile of the base image, but I still get the same error.
IMHO a better solution would be to use one of the official python images.
FROM python:3.9-slim
RUN pip install --no-cache-dir Pillow boto3
WORKDIR /app
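The derived image from the question could then stay almost unchanged; a sketch (image and file names taken from the question, the exec-form CMD is my own choice):
FROM <image_name>:latest
COPY app.py /app/
# exec form; "python" resolves to Python 3.9 in the official image
CMD ["python", "/app/app.py"]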
To fix the issue of python not being found: instead of
cd /usr/local/bin \
&& ln -s /usr/bin/python3 python
OP should symlink to /usr/bin/python, not /usr/local/bin/python as they did in the original post. Another way to do this is with an absolute symlink, as below:
ln -s /usr/bin/python3 /usr/bin/python
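Applied to the parent-image Dockerfile from the question, the change would look roughly like this (same packages as the original, only the symlink line differs):
FROM ubuntu:18.04
# install Python 3 and create an absolute symlink so that plain "python" resolves to python3
RUN apt-get update \
&& apt-get install -y python3-pip python3-dev \
&& ln -s /usr/bin/python3 /usr/bin/python \
&& pip3 install Pillow boto3
WORKDIR /app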

Docker 'run' not finding file

I'm attempting to run a Python file from a Docker container but receive the error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "exec: \"./models/PriceNotifications.py\": stat ./models/PriceNotifications.py: no such file or directory": unknown.
I build and run using commands:
docker build -t pythonstuff .
docker tag pythonstuff adr/test
docker run -t adr/test
Dockerfile:
FROM ubuntu:16.04
COPY . /models
# add the bash script
ADD install.sh /
# change rights for the script
RUN chmod u+x /install.sh
# run the bash script
RUN /install.sh
# prepend the new path
ENV PATH /root/miniconda3/bin:$PATH
CMD ["./models/PriceNotifications.py"]
install.sh:
apt-get update # updates the package index cache
apt-get upgrade -y # updates packages
# installs system tools
apt-get install -y bzip2 gcc git # system tools
apt-get install -y htop screen vim wget # system tools
apt-get upgrade -y bash # upgrades bash if necessary
apt-get clean # cleans up the package index cache
# INSTALL MINICONDA
# downloads Miniconda
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O Miniconda.sh
bash Miniconda.sh -b # installs it
rm -rf Miniconda.sh # removes the installer
export PATH="/root/miniconda3/bin:$PATH" # prepends the new path
# INSTALL PYTHON LIBRARIES
conda install -y pandas # installs pandas
conda install -y ipython # installs IPython shell
# CUSTOMIZATION
cd /root/
wget http://hilpisch.com/.vimrc # Vim configuration
I've tried modifying the CMD within the Dockerfile to:
CMD ["/models/PriceNotifications.py"]
but the same error occurs.
The file structure is as follows:
How should I modify the Dockerfile or dir structure so that models/PriceNotifications.py is found and executed ?
Thanks to earlier comments, using the path:
CMD ["/models/models/PriceNotifications.py"]
instead of
CMD ["./models/PriceNotifications.py"]
allows the Python script to run.
I would have thought CMD ["python /models/models/PriceNotifications.py"] should be used instead of CMD ["/models/models/PriceNotifications.py"] to invoke the Python interpreter, but the interpreter seems to already be available as part of the Docker run.
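As a side note (my addition, not from the thread): in exec form each argument should be its own array element, so the usual way to invoke the interpreter explicitly would be:
CMD ["python", "/models/models/PriceNotifications.py"]
A single string like "python /models/models/PriceNotifications.py" is treated as one executable name, which is why that quoted form would fail; running the script path directly works here presumably because the script has a shebang line and is executable.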
