How to solve CDK CLI version mismatch - python

I'm getting the following error:
This CDK CLI is not compatible with the CDK library used by your application. Please upgrade the CLI to the latest version.
(Cloud assembly schema version mismatch: Maximum schema version supported is 8.0.0, but found 9.0.0)
after issuing the cdk diff command.
I ran npm install -g aws-cdk@latest, and with pip install -r requirements.txt I successfully installed new versions of the packages: Successfully installed aws-cdk.assets-1.92.0 aws-cdk.aws-apigateway-1.92.0 aws-cdk.aws-apigatewayv2-1.92.0 ... etc.
However, after typing cdk --version I'm still getting 1.85.0 (build 5f44668).
The relevant part of my setup.py is as follows:
install_requires=[
    "aws-cdk.core==1.92.0",
    "aws-cdk.aws-ec2==1.92.0",
    "aws-cdk.aws_ecs==1.92.0",
    "aws-cdk.aws_elasticloadbalancingv2==1.92.0"
],
And I'm stuck now, as downgrading the packages in setup.py to 1.85.0 throws ImportError: cannot import name 'CapacityProviderStrategy' from 'aws_cdk.aws_ecs'.
Help :), I would like to use the newest package versions.
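For context, the mismatch can be confirmed by comparing the globally installed CLI with the library version pip actually installed (a quick check, assuming the aws-cdk.core package name used above):
cdk --version                          # global CLI, here 1.85.0
pip show aws-cdk.core | grep Version   # library from requirements.txt, here 1.92.0
The CLI must be at least as new as the library for the cloud assembly schema to be accepted.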

I encountered this issue with a TypeScript package after upgrading the CDK in package.json. As Maciej noted, upgrading alone did not seem to work. I install the CDK CLI with npm, and an uninstall followed by a fresh install fixed the issue.
npm -g uninstall aws-cdk
npm -g install aws-cdk

So I've fixed it, but the steps were too chaotic to describe precisely.
It seems like there were problems with the symlink
/usr/local/bin/cdk
which was pointing to version 1.85.0 and not the 1.92.0 version I had updated to.
I removed aws-cdk from node_modules and installed it again, then removed the symlink /usr/local/bin/cdk and recreated it manually with
ln -s /usr/lib/node_modules/aws-cdk/bin/cdk /usr/local/bin/cdk
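If you hit the same thing, a quick way to see whether a stale binary or symlink is being picked up (generic shell commands, nothing CDK-specific) is:
which -a cdk           # every cdk on the PATH, in resolution order
ls -l $(which cdk)     # where the first one actually points
npm root -g            # where npm puts global packages on this machine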

Nothing helped for macOS except this command:
yarn global upgrade aws-cdk@latest

For those coming here who are not using a global install (using cdk from node_modules) in a mono-repo: this issue is caused by the aws-cdk package in the devDependencies not matching the aws-cdk-lib version used in the packages' dependencies.
I was using "aws-cdk": "2.18.0" in my root package.json, but all my packages were using "aws-cdk-lib": "2.32.1" as their dependency. Updating the root package.json to use "aws-cdk": "2.32.1" solved the issue.
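In other words, the aws-cdk CLI entry in the root package.json must be at least as new as the aws-cdk-lib the packages depend on. A minimal sketch of the aligned layout (fragments only; the constructs entry is an illustrative assumption):
Root package.json:
"devDependencies": {
    "aws-cdk": "2.32.1"
}
Each package's package.json:
"dependencies": {
    "aws-cdk-lib": "2.32.1",
    "constructs": "^10.0.0"
}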

I have been experiencing this a few times as well, so I am just dropping the solution that helps me resolve version mismatches, particularly on existing projects.
For Python:
What you need to do is modify setup.py to specify the latest version.
Either implicitly:
install_requires=[
    "aws-cdk.core",
    "aws-cdk.aws-ec2",
    "aws-cdk.aws_ecs",
    "aws-cdk.aws_elasticloadbalancingv2"
],
or explicitly:
install_requires=[
    "aws-cdk.core==1.xx.x",
    "aws-cdk.aws-ec2==1.xx.x",
    "aws-cdk.aws_ecs==1.xx.x",
    "aws-cdk.aws_elasticloadbalancingv2==1.xx.x"
],
Then, from the project root, run:
python setup.py install
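If you prefer not to invoke setup.py directly (direct setup.py invocation is deprecated in newer setuptools), roughly the same re-resolution can be done with pip from the project root, assuming a standard setup.py layout:
python -m pip install --upgrade .
python -m pip freeze | grep aws-cdk   # confirm the resolved versions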
For TypeScript:
Modify package.json:
"dependencies": {
"#aws-cdk/core" : "latest",
"source-map-support": "^0.5.16"
}
Then run from project root:
npm install
I hope this helps! Please let me know if I need to elaborate or provide more details.

Uninstall the CDK version:
npm uninstall -g aws-cdk
Install the specific version that your application is using. For example, for CDK 1.158.0:
npm install -g aws-cdk@1.158.0
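Another way to avoid the global CLI drifting from the project is to pin it per project and invoke it through npx, so each app uses its own matching CLI (a sketch; the version is whatever your app's library uses):
npm install --save-dev aws-cdk@1.158.0
npx cdk diff    # resolves to ./node_modules/.bin/cdk, not the global install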

If you've made it down here, there are two options to overcome this issue.
Provision a Cloud9 environment and run the code there.
Run the app in a container:
Move the CDK app into a folder so that you have a parent folder in which to put a Dockerfile, a Makefile and a docker-compose file.
Fire up Docker Desktop.
cd into the root folder from the terminal and then run make.
Dockerfile contents
FROM ubuntu:20.04 as compiler
WORKDIR /app/
RUN apt-get update -y \
    && apt install -y python3 python3-pip python3-venv \
    && python3 -m venv venv
ARG NODE_VERSION=16
RUN apt-get update -y \
    && apt-get upgrade -y \
    && apt-get install -y curl xz-utils \
    && curl -sL https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - \
    && apt install -y nodejs \
    && apt-get install -y python-is-python3
RUN npm install -g aws-cdk
RUN python -m venv /opt/venv
docker-compose.yaml contents
version: '3.6'
services:
  cdk-base:
    build: .
    image: cdk-base
    command: ${COMPOSE_COMMAND:-bash}
    volumes:
      - .:/app
      - /var/run/docker.sock:/var/run/docker.sock # Needed so a docker container can be run from inside a docker container
      - ~/.aws/:/root/.aws:ro
Makefile content
SHELL=/bin/bash
CDK_DIR=cdk_app/
COMPOSE_RUN = docker-compose run --rm cdk-base
COMPOSE_UP = docker-compose up
PROFILE = --profile default
all: pre-reqs synth
pre-reqs: _prep-cache container-build npm-install
_prep-cache: # This resolves "Error: EACCES: permission denied, open 'cdk.out/tree.json'"
    mkdir -p ${CDK_DIR}cdk.out/
container-build: pre-reqs
    docker-compose build
container-info:
    ${COMPOSE_RUN} make _container-info
_container-info:
    ./containerInfo.sh
clear-cache:
    ${COMPOSE_RUN} rm -rf ${CDK_DIR}cdk.out && rm -rf ${CDK_DIR}node_modules
cli: _prep-cache
    docker-compose run cdk-base /bin/bash
npm-install: _prep-cache
    ${COMPOSE_RUN} make _npm-install
_npm-install:
    cd ${CDK_DIR} && ls && python3 -m venv .venv && source .venv/bin/activate && pip install -r requirements.txt && npm -v && python --version
npm-update: _prep-cache
    ${COMPOSE_RUN} make _npm-update
_npm-update:
    cd ${CDK_DIR} && npm update
synth: _prep-cache
    ${COMPOSE_RUN} make _synth
_synth:
    cd ${CDK_DIR} && source .venv/bin/activate && pip install -r requirements.txt && cdk synth --no-staging ${PROFILE} && cdk deploy --require-approval never ${PROFILE}
bootstrap: _prep-cache
    ${COMPOSE_RUN} make _bootstrap
_bootstrap:
    cd ${CDK_DIR} && source .venv/bin/activate && pip install -r requirements.txt && cdk bootstrap ${PROFILE}
deploy: _prep-cache
    ${COMPOSE_RUN} make _deploy
_deploy:
    cd ${CDK_DIR} && cdk deploy --require-approval never ${PROFILE}
destroy:
    ${COMPOSE_RUN} make _destroy
_destroy:
    cd ${CDK_DIR} && source .venv/bin/activate && pip install -r requirements.txt && cdk destroy --force ${PROFILE}
diff: _prep-cache
    ${COMPOSE_RUN} make _diff
_diff: _prep-cache
    cd ${CDK_DIR} && cdk diff ${PROFILE}
test:
    ${COMPOSE_RUN} make _test
_test:
    cd ${CDK_DIR} && npm test
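With those three files in place in the parent folder, a typical session looks roughly like this (target names are the ones defined in the Makefile above):
make            # default target: pre-reqs + synth (builds the image, installs deps, runs cdk synth and deploy)
make diff       # run cdk diff inside the container
make destroy    # tear the stack down again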

This worked for me in my dev environment: since I had re-cloned the source, I needed to re-run npm install. A sample package.json might look like this (update the versions as needed):
{
  "dependencies": {
    "aws-cdk": "2.27.0",
    "node": "^16.14.0"
  }
}
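After a fresh clone, something along these lines restores the locally pinned CLI (assuming the package.json above):
npm install          # installs the pinned aws-cdk into node_modules
npx cdk --version    # should now match the aws-cdk-lib your app imports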

I changed the "typescript": "^4.7.3" version and it worked.
Note: CDK version 2 ("aws-cdk-lib": "^2.55.0").
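For reference, a CDK v2 package.json where the pieces are kept in step might look roughly like this (the constructs entry and exact ranges are illustrative assumptions):
{
  "devDependencies": {
    "aws-cdk": "2.55.0",
    "typescript": "^4.7.3"
  },
  "dependencies": {
    "aws-cdk-lib": "^2.55.0",
    "constructs": "^10.0.0"
  }
}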

Related

How can I set up a persistent Python virtual environment in a Dockerfile?

I'm building Python 3.7.4 (it's a hard requirement for other software) on a base Ubuntu 20.04 image using a Dockerfile, following this guide.
Everything works fine if I run the image and follow the guide manually, but I want to set up my virtual environment in the Dockerfile and have the pip requirements persist when running the image.
Here's the relevant part of my Dockerfile:
...
RUN echo =============== Building and Install Python =============== \
&& cd /tmp \
&& wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz \
&& tar xvf ./Python-3.7.4.tgz \
&& cd Python-3.7.4 \
&& ./configure --enable-optimizations --with-ensurepip=install \
&& make -j 8 \
&& sudo make install
ENV VIRTUAL_ENV=/opt/python-3.7.4
ENV PATH="$VIRTUAL_ENV:$PATH"
COPY "./hourequirements.txt" /usr/local/
RUN echo =============== Setting up Python Virtual Environment =============== \
&& python3 -m venv $VIRTUAL_ENV \
&& source $VIRTUAL_ENV/bin/activate \
&& pip install --upgrade pip \
&& pip install --no-input -r /usr/local/hourequirements.txt
...
The Dockerfile builds without errors, but when I run the image the environment doesn't exist and python 3.7.4 doesn't show any of the installed requirements.
How can I install Python modules in the virtual environment using PIP in the Dockerfile and have them persist when the docker image runs?
As usual, I found the answer just after posting.
I changed:
ENV PATH="$VIRTUAL_ENV:$PATH"
to:
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
in the Dockerfile, and it started working correctly.
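The reason this works: a venv keeps its python and pip executables in its bin/ directory, so putting $VIRTUAL_ENV/bin on PATH makes every later RUN (and the running container) use the venv without needing source activate. A minimal sketch of the pattern, independent of the Python 3.7.4 build above:
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# pip and python now resolve to the venv for the rest of the build and at runtime
COPY requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt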

Installing Python in a Dockerfile without using a Python image as the base

I have a Python script that uses DigitalOcean tools (doctl and kubectl) that I want to containerize. This means my container will need Python, doctl, and kubectl installed. The trouble is, I can't figure out how to install both Python and the DigitalOcean tools in the Dockerfile.
I can install Python using the base image python:3, and I can also install the DigitalOcean tools using the base image alpine/doctl. However, the rule is you can only use one base image in a Dockerfile.
So I can include the python base image and install the DigitalOcean tools another way:
FROM python:3
RUN <somehow install doctl and kubectl>
RUN pip install firebase-admin
COPY script.py
CMD ["python", "script.py"]
Or I can include the alpine/doctl base image and install python3 another way.
FROM alpine/doctl
RUN <somehow install python>
RUN pip install firebase-admin
COPY script.py
CMD ["python", "script.py"]
Unfortunately, I'm not sure how I would do this. Any help in how I can get all these tools installed would be great!
Just add this along with anything else you want to apt-get install:
RUN apt-get update && apt-get install -y \
    python3.6 \
    python3-pip
In alpine it should be something like:
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python \
    && python3 -m ensurepip \
    && pip3 install --no-cache --upgrade pip setuptools
This Dockerfile worked for me:
FROM alpine/doctl
ENV PYTHONUNBUFFERED=1
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools
This answer comes from here: https://stackoverflow.com/a/62555259/7479816 (I don't have enough street cred to comment).
You can try a multi-stage build as shown below.
Also check your COPY statement: you need to define where you want the script.py file to be copied as the second parameter; "." will copy it to the working directory (the image root here).
FROM alpine/doctl
FROM python:3.6-slim-buster
ENV PYTHONUNBUFFERED 1
RUN pip install firebase-admin
COPY script.py .
CMD ["python", "script.py"]

Can I copy a directory from a location outside the Docker build context into my Docker image?

I have installed a library called fastai==1.0.59 via a requirements.txt file inside my Dockerfile.
But the purpose of running the Django app is not achieved because of one error. To solve that error, I need to manually edit the files /site-packages/fastai/torch_core.py and site-packages/fastai/basic_train.py inside this library folder, which I don't intend to do by hand on every build.
Therefore I'm trying to copy the fastai folder itself from my host machine to the location inside the docker image.
Source location: /Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai/
Destination location: ../venv/lib/python3.6/site-packages/ which is inside my docker image.
Being new to Docker, I tried this using the COPY command like:
COPY /Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai/ ../venv/lib/python3.6/site-packages/
which gave me an error:
ERROR: Service 'app' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder583041406/Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai: no such file or directory.
I tried referring to this: How to include files outside of Docker's build context?
but it seems like it bounced off my head a bit.
Please help me tackle this. Thanks.
Dockerfile:
FROM python:3.6-slim-buster AS build
MAINTAINER model1
ENV PYTHONUNBUFFERED 1
RUN python3 -m venv /venv
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git && \
apt-get install -y build-essential && \
apt-get install -y awscli && \
apt-get install -y unzip && \
apt-get install -y nano && \
apt-get install -y libsm6 libxext6 libxrender-dev
RUN apt-cache search mysql-server
RUN apt-cache search libmysqlclient-dev
RUN apt-get install -y libpq-dev
RUN apt-get install -y postgresql
RUN apt-cache search postgresql-server-dev-9.5
RUN apt-get install -y libglib2.0-0
RUN mkdir -p /model/
COPY . /model/
WORKDIR /model/
RUN pip install --upgrade awscli==1.14.5 s3cmd==2.0.1 python-magic
RUN pip install -r ./requirements.txt
EXPOSE 8001
RUN chmod -R 777 /model/
COPY /Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai/ ../venv/lib/python3.6/site-packages/
CMD python3 -m /venv/activate
CMD /model/my_setup.sh development
CMD export API_ENV = development
CMD cd server && \
python manage.py migrate && \
python manage.py runserver 0.0.0.0:8001
Short Answer
No
Long Answer
When you run docker build the current directory and all of its contents (subdirectories and all) are copied into a staging area called the 'build context'. When you issue a COPY instruction in the Dockerfile, docker will copy from the staging area into a layer in the image's filesystem.
As you can see, this precludes copying files from directories outside the build context.
Workaround
Either download the files you want from their golden source directly into the image during the build process (this is why you often see a lot of curl statements in Dockerfiles), or copy the files (dirs) you need into the build tree and check them into source control as part of your project. Which method you choose depends entirely on the nature of your project and the files you need.
Notes
There are other workarounds documented for this, but all of them, without exception, break the intent of 'portability' of your build. The only quality solutions are those documented here (though I'm happy to add to this list if I've missed any that preserve portability).
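As a concrete example of the second workaround, the fastai case above would be handled by copying the package into the build tree first and then COPYing from there (a sketch; the in-image destination should match wherever the image's site-packages actually lives):
# on the host, next to the Dockerfile (inside the build context)
cp -r /Users/AjayB/anaconda3/envs/MyDjangoEnv/lib/python3.6/site-packages/fastai ./fastai
# then, in the Dockerfile
# COPY fastai/ /venv/lib/python3.6/site-packages/fastai/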

Prevent docker-compose from reinstalling requirements.txt while using a built image

I have an app ABC which I want to run in a docker environment. I built a Dockerfile and got the image abcd1234, which I used in docker-compose.yml.
But on trying to build with docker-compose, all the packages from requirements.txt are getting reinstalled. Can it not use the already existing image and avoid the time spent reinstalling them?
I'm new to docker and trying to understand all the parameters. Also, is the 'context' correct in docker-compose.yml, or should it contain a path inside the image?
PS: my docker-compose.yml is not in the same directory as the project because I'll be using multiple images to expose more ports.
docker-compose.yml:
services:
  app:
    build:
      context: /Users/user/Desktop/ABC/
    ports:
      - "8000:8000"
    image: abcd1234
    command: >
      sh -c "python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    environment:
      - PROJECT_ENV=development
Dockerfile:
FROM python:3.6-slim-buster AS build
MAINTAINER ABC
ENV PYTHONUNBUFFERED 1
RUN python3 -m venv /venv
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git && \
apt-get install -y build-essential && \
apt-get install -y awscli && \
apt-get install -y unzip && \
apt-get install -y nano
RUN apt-get install -y libsm6 libxext6 libxrender-dev
COPY . /ABC/
RUN apt-cache search mysql-server
RUN apt-cache search libmysqlclient-dev
RUN apt-get install -y libpq-dev
RUN apt-get install -y postgresql
RUN apt-cache search postgresql-server-dev-9.5
RUN pip install --upgrade awscli==1.14.5 s3cmd==2.0.1 python-magic
RUN pip install -r /ABC/requirements.txt
WORKDIR .
Please guide me on how to tackle these 2 scenarios. Thanks!
The context: directory is the directory on your host system that includes the Dockerfile. It's the same directory you would pass to docker build, and it frequently is just the current directory ..
Within the Dockerfile, Docker can cache individual build steps so that it doesn't repeat them, but only until it reaches the point where something has changed. That "something" can be a changed RUN line, but at the point of your COPY, if any file at all changes in your local source tree that also invalidates the cache for everything after it.
For this reason, a typical Dockerfile has a couple of "phases"; you can repeat this pattern in other languages too. You can restructure your Dockerfile in this order:
# 1. Base information; this almost never changes
FROM python:3.6-slim-buster AS build
MAINTAINER ABC
ENV PYTHONUNBUFFERED 1
WORKDIR /ABC
# 2. Install OS packages. Doesn't depend on your source tree.
# Frequently just one RUN line (but could be more if you need
# packages that aren't in the default OS package repository).
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get upgrade -y && \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
build-essential unzip libxrender-dev libpq-dev
# 3. Copy _only_ the file that declares language-level dependencies.
# Repeat starting from here only if this file changes.
COPY requirements.txt .
RUN pip install -r requirements.txt
# 4. Copy the rest of the application in. In a compiled language
# (Javascript/Webpack, Typescript, Java, Go, ...) build it.
COPY . .
# 5. Explain how to run the application.
EXPOSE 8000
CMD python manage.py migrate && \
python manage.py runserver 0.0.0.0:8000
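With that ordering, rebuilds are cheap as long as requirements.txt is untouched: docker-compose reuses the cached pip-install layer and only re-copies the source. Roughly:
docker-compose build   # first run installs requirements; later runs hit the cache unless requirements.txt changed
docker-compose up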

How to Get NaCl Working Properly on AWS Lambda?

I am using the Pychromeless repo with success on AWS Lambda.
Now I need to use the NaCl dependency to decrypt a string, but I am getting
Unable to import module 'lambda_function': /var/task/lib/nacl/_sodium.abi3.so
followed by
invalid ELF header
when running the function on AWS Lambda.
I know it is a problem specifically related to the AWS Lambda environment, because I can run the function on my Mac inside docker.
Here's my requirements.txt file
boto3==1.6.18
botocore==1.9.18
selenium==2.53.6
chromedriver-install==1.0.3
beautifulsoup4==4.6.1
certifi==2018.11.29
chardet==3.0.4
editdistance==0.5.3
future==0.17.1
idna==2.7
python-telegram-bot==10.1.0
requests==2.19.1
soupsieve==1.7.3
urllib3==1.23
PyNaCl==1.3.0
Here is the Dockerfile:
FROM lambci/lambda:python3.6
MAINTAINER tech@21buttons.com
USER root
ENV APP_DIR /var/task
WORKDIR $APP_DIR
COPY requirements.txt .
COPY bin ./bin
COPY lib ./lib
RUN mkdir -p $APP_DIR/lib
RUN pip3 install -r requirements.txt -t /var/task/lib
And the Makefile:
clean:
    rm -rf build build.zip
    rm -rf __pycache__
fetch-dependencies:
    mkdir -p bin/
    # Get chromedriver
    curl -SL https://chromedriver.storage.googleapis.com/2.37/chromedriver_linux64.zip > chromedriver.zip
    unzip chromedriver.zip -d bin/
    # Get Headless-chrome
    curl -SL https://github.com/adieuadieu/serverless-chrome/releases/download/v1.0.0-37/stable-headless-chromium-amazonlinux-2017-03.zip > headless-chromium.zip
    unzip headless-chromium.zip -d bin/
    # Clean
    rm headless-chromium.zip chromedriver.zip
docker-build:
    docker-compose build
docker-run:
    docker-compose run lambda src/lambda_function.lambda_handler
build-lambda-package: clean fetch-dependencies
    mkdir build
    cp -r src build/.
    cp -r bin build/.
    cp -r lib build/.
    pip install -r requirements.txt -t build/lib/.
    cd build; zip -9qr build.zip .
    cp build/build.zip .
    rm -rf build
Without the decryption part, the code works great. So the issue is 100% related to PyNaCl.
Any help on solving this?
I think you may try to set up PyNaCl like so:
SODIUM_INSTALL=system pip3 install pynacl
That will force PyNaCl to use the version of libsodium provided by AWS (see this).
In its latest version, PyNaCl was updated to libsodium 1.0.16, so maybe it is not compatible with AWS.
So you may remove PyNaCl from requirements.txt and add this to your Dockerfile:
RUN SODIUM_INSTALL=system pip3 install pynacl -t /var/task/lib
or maybe set up the Dockerfile like this and keep PyNaCl in requirements.txt:
ARG SODIUM_INSTALL=system
Also try to set up libsodium before installing PyNaCl:
RUN wget https://download.libsodium.org/libsodium/releases/libsodium-1.0.15.tar.gz \
&& tar xzf libsodium-1.0.15.tar.gz \
&& cd libsodium-1.0.15 \
&& ./configure \
&& make install
OK, this is how I did it. I had to build everything on an EC2 Amazon Linux 2 instance:
amzn2-ami-hvm-2.0.20190823.1-x86_64-gp2 (ami-0a1f49a762473adbd)
After launching the instance, I used this script to install Python 3.6 (and pip) and to create and activate a virtual environment.
For the docker part, I followed this tutorial, not without some troubles on the way. I had to run
sudo yum install polkit
and
sudo usermod -a -G docker ec2-user
and then reboot, followed by
sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
and
sudo chmod +x /usr/local/bin/docker-compose
before docker and docker-compose worked.
But anyway, I managed to work with docker on the EC2 instance, building the zip file and uploading it to the Lambda environment, where everything worked fine, as I expected.
I thought docker was an environment independent from the host, but I guess that is not the case.
