I'm trying to create a Docker image that has Python 3 and Node v4.4.7, so I can use it as a container for a project that needs both Python and that specific version of Node.
# Pull base image.
FROM python:3-onbuild
CMD [ "python", "./hello.py" ]
# Install Node.js
RUN \
cd /tmp && \
wget http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz && \
tar xvzf node-v4.4.7.tar.gz && \
rm -f node-v4.4.7.tar.gz && \
cd node-v* && \
./configure && \
CXX="g++ -Wno-unused-local-typedefs" make && \
CXX="g++ -Wno-unused-local-typedefs" make install && \
cd /tmp && \
rm -rf /tmp/node-v* && \
npm install -g npm && \
print '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bashrc
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["bash"]
When I first tried it, the build complained about not having a Python script to run, so I added a basic Python file, hello.py,
that just contains:
print "Hello, Python!"
Then it complained about not having a requirements.txt file, so I added an empty requirements.txt.
Now when I run docker build -t isaacweathersnet/sampledockerimage . the build fails during the Node install with:
node-v4.4.0/benchmark/arrays/zero-int.js
File "./configure", line 446
'''
^
SyntaxError: Missing parentheses in call to 'print'
The command '/bin/sh -c cd /tmp && wget http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz && tar xvzf node-v4.4.7.tar.gz && rm -f node-v4.4.7.tar.gz && cd node-v* && ./configure && CXX="g++ -Wno-unused-local-typedefs" make && CXX="g++ -Wno-unused-local-typedefs" make install && cd /tmp && rm -rf /tmp/node-v* && npm install -g npm && print '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bashrc' returned a non-zero code: 1
Found a solution on GitHub that has both Python and Node. No luck with Python 3+, but it worked well with 2.7. That fits the error above: Node v4.4.7's ./configure is a Python 2 script, so it throws the "Missing parentheses in call to 'print'" SyntaxError when python resolves to Python 3.
https://github.com/nsdont/python-node/blob/master/Dockerfile
FROM python:2.7
RUN \
cd /tmp && \
wget http://nodejs.org/dist/v4.4.7/node-v4.4.7.tar.gz && \
tar xvzf node-v4.4.7.tar.gz && \
rm -f node-v4.4.7.tar.gz && \
cd node-v* && \
./configure && \
CXX="g++ -Wno-unused-local-typedefs" make && \
CXX="g++ -Wno-unused-local-typedefs" make install && \
cd /tmp && \
rm -rf /tmp/node-v* && \
npm install -g npm && \
echo -e '\n# Node.js\nexport PATH="node_modules/.bin:$PATH"' >> /root/.bashrc
# Define working directory.
WORKDIR /data
# Define default command.
CMD ["bash"]
There are nodejs-python and python-nodejs (the latter is built on top of nodejs-python). It's worth having a look at those.
python-nodejs provides Node 10.x, npm 6.x, stable yarn, and the latest Python, pip, and pipenv. The versions used should be adjustable to your needs: use the Dockerfile as a basis and adjust the RUN section
RUN \
echo "deb https://deb.nodesource.com/node_10.x stretch main" > /etc/apt/sources.list.d/nodesource.list && \
wget -qO- https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - && \
echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list && \
wget -qO- https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
apt-get update && \
apt-get install -yqq nodejs yarn && \
pip install -U pip && pip install pipenv && \
npm i -g npm@^6 && \
rm -rf /var/lib/apt/lists/*
to the Node version you need. The yarn part (yarn is a dependency-management alternative to npm) can be removed if you don't need it.
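For reference, a minimal sketch of the same approach on a Python 3 base. The base tag, Debian codename, and Node major version below are assumptions, so adjust them to your needs (the codename in the NodeSource line must match the Debian release of the base image):
FROM python:3.9-slim
# Node.js from the NodeSource apt repository, pinned to one major version
RUN apt-get update && \
    apt-get install -y --no-install-recommends wget gnupg ca-certificates && \
    wget -qO- https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - && \
    echo "deb https://deb.nodesource.com/node_10.x buster main" > /etc/apt/sources.list.d/nodesource.list && \
    apt-get update && \
    apt-get install -yqq nodejs && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /data
CMD ["bash"]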
Related
For a Docker container I am making a build of OpenCV that should work together with CUDA.
Here is an excerpt from the Dockerfile:
ARG OPENCV_VERSION=4.7.0
RUN cd /opt/ &&\
# Download and unzip OpenCV and opencv_contrib and delete the zip files
wget https://github.com/opencv/opencv/archive/$OPENCV_VERSION.zip &&\
unzip $OPENCV_VERSION.zip &&\
rm $OPENCV_VERSION.zip &&\
wget https://github.com/opencv/opencv_contrib/archive/$OPENCV_VERSION.zip &&\
unzip ${OPENCV_VERSION}.zip &&\
rm ${OPENCV_VERSION}.zip &&\
# Create build folder and switch to it
mkdir /opt/opencv-${OPENCV_VERSION}/build && cd /opt/opencv-${OPENCV_VERSION}/build &&\
# Cmake configure
cmake \
-DOPENCV_EXTRA_MODULES_PATH=/opt/opencv_contrib-${OPENCV_VERSION}/modules \
-DWITH_CUDA=ON \
-DCMAKE_BUILD_TYPE=RELEASE \
# Install path will be /usr/local/lib (lib is implicit)
-DCMAKE_INSTALL_PREFIX=/usr/local \
.. &&\
# Make
make -j"$(nproc)" && \
# Install to /usr/local/lib
make install && \
ldconfig &&\
# Remove OpenCV sources and build folder
rm -rf /opt/opencv-${OPENCV_VERSION} && rm -rf /opt/opencv_contrib-${OPENCV_VERSION}
So far this works, but I need the final container to be very small; with OpenCV built for CUDA I end up at 2.7 GB with all the data etc.
I would now like to make this smaller. My idea was to omit the opencv_contrib extensions, but then I can no longer build OpenCV with CUDA, since the libraries needed for it are apparently in there.
Is there a way to do this without those libraries, or can I delete them and "tell" OpenCV not to look for them? If I simply leave them out, OpenCV produces a lot of error messages.
Here is the whole Dockerfile, it is not perfect:
FROM ghcr.io/ifm/ifm3d:latest-l4t-arm64 AS buildstage
COPY requirements.txt /tmp/
COPY python /tmp/python
USER root
ARG JETPACK_VERSION_BASE="r32.4"
ARG JETPACK_VERSION="${JETPACK_VERSION_BASE}.3"
ARG BASE_IMAGE="nvcr.io/nvidia/l4t-base:${JETPACK_VERSION}"
ARG SOC="t186"
ADD --chown=root:root https://repo.download.nvidia.com/jetson/jetson-ota-public.asc /etc/apt/trusted.gpg.d/jetson-ota-public.asc
RUN chmod 644 /etc/apt/trusted.gpg.d/jetson-ota-public.asc \
&& apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
&& echo "deb https://repo.download.nvidia.com/jetson/common ${JETPACK_VERSION_BASE} main" > /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
&& echo "deb https://repo.download.nvidia.com/jetson/${SOC} ${JETPACK_VERSION_BASE} main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
&& cat /etc/apt/sources.list.d/nvidia-l4t-apt-source.list \
&& apt-get update \
&& rm -rf /var/lib/apt/lists/*
ARG CUDA=10.2
ENV CUDA=${CUDA}
ENV PATH /usr/local/cuda-$CUDA/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/cuda-$CUDA/targets/aarch64-linux/lib:${LD_LIBRARY_PATH}
RUN ldconfig
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all
ARG OPENCV_VERSION=4.7.0
RUN apt-get update && \
apt-get install -y \
bc \
bzip2 \
language-pack-en-base \
python3-distutils \
python3-pip \
build-essential \
cuda-libraries-dev-${CUDA} \
cuda-cudart-dev-${CUDA} \
cuda-compiler-${CUDA} \
libnvinfer-samples \
ca-certificates \
python-dev \
git \
cmake \
wget \
unzip \
yasm \
pkg-config \
libswscale-dev \
libtbb2 \
libtbb-dev \
libjpeg-dev \
libpng-dev \
libtiff-dev \
libavformat-dev \
libpq-dev \
libxine2-dev \
libglew-dev \
libtiff5-dev \
zlib1g-dev \
libjpeg-dev \
libavcodec-dev \
libavformat-dev \
libavutil-dev \
libpostproc-dev \
libswscale-dev \
libeigen3-dev \
libtbb-dev \
libgtk2.0-dev \
pkg-config \
## Python
python3-numpy \
libopencv-dev \
g++ \
libcudnn8 \
nvidia-cudnn8 \
libsm6 \
libxrender-dev \
&& rm -rf /var/lib/apt/lists/*
USER ifm
RUN python3 -m pip install --upgrade pip
RUN pip3 install --requirement /tmp/requirements.txt
USER root
ARG OPENCV_VERSION=4.7.0
RUN cd /opt/ &&\
# Download and unzip OpenCV and opencv_contrib and delete the zip files
wget https://github.com/opencv/opencv/archive/$OPENCV_VERSION.zip &&\
unzip $OPENCV_VERSION.zip &&\
rm $OPENCV_VERSION.zip &&\
wget https://github.com/opencv/opencv_contrib/archive/$OPENCV_VERSION.zip &&\
unzip ${OPENCV_VERSION}.zip &&\
rm ${OPENCV_VERSION}.zip &&\
# Create build folder and switch to it
mkdir /opt/opencv-${OPENCV_VERSION}/build && cd /opt/opencv-${OPENCV_VERSION}/build &&\
# Cmake configure
cmake \
-DOPENCV_EXTRA_MODULES_PATH=/opt/opencv_contrib-${OPENCV_VERSION}/modules \
-DWITH_CUDA=ON \
-DCMAKE_BUILD_TYPE=RELEASE \
# Install path will be /usr/local/lib (lib is implicit)
-DCMAKE_INSTALL_PREFIX=/usr/local \
.. &&\
# Make
make -j"$(nproc)" && \
# Install to /usr/local/lib
make install && \
ldconfig &&\
# Remove OpenCV sources and build folder
rm -rf /opt/opencv-${OPENCV_VERSION} && rm -rf /opt/opencv_contrib-${OPENCV_VERSION}
#RUN ls /usr/local/lib/libopencv*
FROM ghcr.io/ifm/ifm3d:latest-l4t-arm64
USER root
#COPY --from=buildstage /usr/local/include/opencv4/ /usr/local/include/opencv4/
COPY --from=buildstage /home/ifm/venv/lib/python3.9/site-packages /home/ifm/venv/lib/python3.9/site-packages
COPY --from=buildstage /usr/local/lib/python3.9/site-packages/cv2/ /home/ifm/venv/lib/python3.9/site-packages/cv2/
COPY --from=buildstage /usr/local/lib/libopencv* /usr/local/lib/
COPY --from=buildstage /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/
#RUN ldconfig
COPY python /tmp/python
The base container is an L4T image for the Jetson TX2.
I have tried several builds of the container, but currently OpenCV only works without CUDA.
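One option worth trying for the size problem: OpenCV's CMake supports a BUILD_LIST option that compiles only the modules you name (plus their dependencies), along with separate switches to drop tests, examples, and docs. A sketch of the configure step under that assumption; the module list here is illustrative, so extend it to whatever the code actually imports:
cmake \
    -DOPENCV_EXTRA_MODULES_PATH=/opt/opencv_contrib-${OPENCV_VERSION}/modules \
    -DWITH_CUDA=ON \
    -DBUILD_LIST=core,imgproc,imgcodecs,cudaarithm,cudafilters,python3 \
    -DBUILD_TESTS=OFF \
    -DBUILD_PERF_TESTS=OFF \
    -DBUILD_EXAMPLES=OFF \
    -DBUILD_DOCS=OFF \
    -DCMAKE_BUILD_TYPE=RELEASE \
    -DCMAKE_INSTALL_PREFIX=/usr/local \
    ..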
We are using the python:3.9 image as the base and run some commands on top of it.
Base Image
########################
# Base Image Section #
########################
#
# Creates an image with the common requirements for a flask app pre-installed
# Start with a smol OS
FROM python:3.9
# Install basic requirements
RUN apt-get -q update -o Acquire::Languages=none && apt-get -yq install --no-install-recommends \
apt-transport-https \
ca-certificates && \
apt-get autoremove -yq && apt-get clean && rm -rf "/var/lib/apt/lists"/*
# Install CA certs
# Prefer the mirror for package downloads
COPY ["ca_certs/*.crt", "/usr/local/share/ca-certificates/"]
RUN update-ca-certificates && \
mv /etc/apt/sources.list /etc/apt/sources.list.old && \
printf 'deb https://mirror.company.com/debian/ buster main contrib non-free\n' > /etc/apt/sources.list && \
cat /etc/apt/sources.list.old >> /etc/apt/sources.list
# Equivalent to `cd /app`
WORKDIR /app
# Fixes a host of encoding-related bugs
ENV LC_ALL=C.UTF-8
# Tells `apt` and others that no one is sitting at the keyboard
ENV DEBIAN_FRONTEND=noninteractive
# Set a more helpful shell prompt
ENV PS1='[\u@\h \W]\$ '
#####################
# ONBUILD Section #
#####################
#
# ONBUILD commands take effect when another image is built using this one as a base.
# Ref: https://docs.docker.com/engine/reference/builder/#onbuild
#
#
# And that's it! The base container should have all your dependencies and ssl certs pre-installed,
# and will copy your code over when used as a base with the "FROM" directive.
ONBUILD ARG BUILD_VERSION
ONBUILD ARG BUILD_DATE
# Copy our files into the container
ONBUILD ADD . .
# pre_deps: packages that need to be installed before code installation and remain in the final image
ONBUILD ARG pre_deps
# build_deps: packages that need to be installed before code installation, then uninstalled after
ONBUILD ARG build_deps
# COMPILE_DEPS: common packages needed for building/installing Python packages. Most users won't need to adjust this,
# but you could specify a shorter list if you didn't need all of these.
ONBUILD ARG COMPILE_DEPS="build-essential python3-dev libffi-dev libssl-dev python3-pip libxml2-dev libxslt1-dev zlib1g-dev g++ unixodbc-dev"
# ssh_key: If provided, writes the given string to ~/.ssh/id_rsa just before Python package installation,
# and deletes it before the layer is written.
ONBUILD ARG ssh_key
# If our python package is installable, install system packages that are needed by some python libraries to compile
# successfully, then install our python package. Finally, delete the temporary system packages.
ONBUILD RUN \
if [ -f setup.py ] || [ -f requirements.txt ]; then \
install_deps="$pre_deps $build_deps $COMPILE_DEPS" && \
uninstall_deps=$(python3 -c 'all_deps=set("'"$install_deps"'".split()); to_keep=set("'"$pre_deps"'".split()); print(" ".join(sorted(all_deps-to_keep)), end="")') && \
apt-get -q update -o Acquire::Languages=none && apt-get -yq install --no-install-recommends $install_deps && \
if [ -n "${ssh_key}" ]; then \
mkdir -p ~/.ssh && chmod 700 ~/.ssh && printf "%s\n" "${ssh_key}" > ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa && \
printf "%s\n" "StrictHostKeyChecking=no" > ~/.ssh/config && chmod 600 ~/.ssh/config || exit 1 ; \
fi ; \
if [ -f requirements.txt ]; then \
pip3 install --no-cache-dir --compile -r requirements.txt || exit 1 ; \
elif [ -f setup.py ]; then \
pip3 install --no-cache-dir --compile --editable . || exit 1 ; \
fi ; \
if [ -n "${ssh_key}" ]; then \
rm -rf ~/.ssh || exit 1 ; \
fi ; \
fi
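For context, a downstream Dockerfile that consumes this base needs little more than a FROM line, since the ONBUILD ADD and ONBUILD RUN above fire automatically during its build; the image name and args below are placeholders:
FROM registry.company.com/python-flask-base:latest
# the ONBUILD instructions have already copied the code into /app
# and installed requirements.txt / setup.py if present
CMD ["python3", "app.py"]
built with, for example: docker build --build-arg build_deps="libpq-dev" -t my-flask-app .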
We built this image last year and it was working fine, but when we decided to pick up the latest changes and build a new base image, the build started failing on the last RUN command.
DEBU[0280] Deleting in layer: map[]
INFO[0281] Cmd: /bin/sh
INFO[0281] Args: [-c if [ -f setup.py ] || [ -f requirements.txt ]; then install_deps="$pre_deps $build_deps $COMPILE_DEPS" && uninstall_deps=$(python3 -c 'all_deps=set("'"$install_deps"'".split()); to_keep=set("'"$pre_deps"'".split()); print(" ".join(sorted(all_deps-to_keep)), end="")') && apt-get -q update -o Acquire::Languages=none && apt-get -yq install --no-install-recommends $install_deps && if [ -n "${ssh_key}" ]; then mkdir -p ~/.ssh && chmod 700 ~/.ssh && printf "%s\n" "${ssh_key}" > ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa && printf "%s\n" "StrictHostKeyChecking=no" > ~/.ssh/config && chmod 600 ~/.ssh/config || exit 1 ; fi ; if [ -f requirements.txt ]; then pip3 install --no-cache-dir --compile -r requirements.txt || exit 1 ; elif [ -f setup.py ]; then pip3 install --no-cache-dir --compile --editable . || exit 1 ; fi ; if [ -n "${ssh_key}" ]; then rm -rf ~/.ssh || exit 1 ; fi ; fi]
INFO[0281] Running: [/bin/sh -c if [ -f setup.py ] || [ -f requirements.txt ]; then install_deps="$pre_deps $build_deps $COMPILE_DEPS" && uninstall_deps=$(python3 -c 'all_deps=set("'"$install_deps"'".split()); to_keep=set("'"$pre_deps"'".split()); print(" ".join(sorted(all_deps-to_keep)), end="")') && apt-get -q update -o Acquire::Languages=none && apt-get -yq install --no-install-recommends $install_deps && if [ -n "${ssh_key}" ]; then mkdir -p ~/.ssh && chmod 700 ~/.ssh && printf "%s\n" "${ssh_key}" > ~/.ssh/id_rsa && chmod 600 ~/.ssh/id_rsa && printf "%s\n" "StrictHostKeyChecking=no" > ~/.ssh/config && chmod 600 ~/.ssh/config || exit 1 ; fi ; if [ -f requirements.txt ]; then pip3 install --no-cache-dir --compile -r requirements.txt || exit 1 ; elif [ -f setup.py ]; then pip3 install --no-cache-dir --compile --editable . || exit 1 ; fi ; if [ -n "${ssh_key}" ]; then rm -rf ~/.ssh || exit 1 ; fi ; fi]
error building image: error building stage: failed to execute command: starting command: fork/exec /bin/sh: exec format error
We label each image with its build date so we know when it was working; the base image built on 12-09-22 works fine.
Something new in python:3.9 causes this issue; the same script was working before.
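An exec format error usually means the kernel was handed a binary for the wrong CPU architecture, and python:3.9 is a multi-arch tag, so one plausible culprit is that the new build pulled a different variant than the build host can run. A quick way to check with the Docker CLI (kaniko, which the logs above come from, has a --customPlatform flag for the same purpose):
docker image inspect python:3.9 --format '{{.Os}}/{{.Architecture}}'
# if it does not match the build host, pin the platform explicitly:
docker build --platform linux/amd64 -t base-image .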
I need to create a Dockerfile that emulates a normal workspace.
We have a virtual machine where we train models.
We use R and Python 3.
I want to automate some of the processes without changing the codebase,
e.g. ~ must point to /home/<some user>.
The biggest problem is Anaconda3 in Docker, because every RUN is a standalone shell session.
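Concretely, conda activate in one RUN does not carry over to the next, because each RUN gets a fresh shell. Two common workarounds, sketched with assumed paths (the Dockerfile below uses the first one):
# Option 1: put conda's binaries on PATH for all later RUN/CMD instructions
ENV ANACONDA_PATH=/opt/anaconda3
ENV PATH=${ANACONDA_PATH}/bin:${PATH}
# Option 2: run every RUN through a login shell so ~/.bashrc
# ("conda activate base") is sourced each time
SHELL ["/bin/bash", "--login", "-c"]
RUN conda --version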
Basis for my answer: https://github.com/xychelsea/anaconda3-docker/blob/main/Dockerfile
I've created my own mini R package installer:
install_r_packages.sh
#!/bin/bash
input="r-requirements.txt"
# remotes provides remotes::install_version() used below
Rscript -e "install.packages('remotes')"
# r-requirements.txt holds one "package=version" entry per line
IFS='='
while IFS= read -r line; do
  read -r package version <<<"$line"
  # strip trailing whitespace around the '=' separator
  package=$(echo "$package" | sed 's/ *$//g')
  version=$(echo "$version" | sed 's/ *$//g')
  # skip comment and blank lines
  if ! [[ ($package =~ ^#.*) || (-z $package) ]]; then
    Rscript -e "remotes::install_version('$package', version = '$version')"
  fi
done <"$input"
r-requirements.txt
# packages for rmarkdown
htmltools=0.5.2
jsonlite=1.7.2
...
rmarkdown=2.11
# more packages
...
Dockerfile
FROM debian:bullseye
RUN apt-get update
# install R
RUN apt-get install -y r-base r-base-dev libatlas3-base r-recommended libssl-dev openssl \
libcurl4-openssl-dev libfontconfig1-dev libxml2-dev xml2 pandoc lua5.3 clang
ENV ARROW_S3=ON \
LIBARROW_MINIMAL=false \
LIBARROW_BINARY=true \
RSTUDIO_PANDOC=/usr/lib/rstudio-server/bin/pandoc \
TZ=Etc/UTC
COPY r-requirements.txt .
COPY scripts/install_r_packages.sh scripts/install_r_packages.sh
RUN bash scripts/install_r_packages.sh
# create user
ENV REPORT_USER="reporter"
ENV PROJECT_HOME=/home/${REPORT_USER}/<project>
RUN useradd -ms /bin/bash ${REPORT_USER} \
&& mkdir /data \
&& mkdir /opt/mlflow \
&& chown -R ${REPORT_USER}:${REPORT_USER} /data \
&& chown -R ${REPORT_USER}:${REPORT_USER} /opt/mlflow
# copy project files
WORKDIR ${PROJECT_HOME}
COPY src src
... bla bla bla ...
COPY requirements.txt .
RUN chown -R ${REPORT_USER}:${REPORT_USER} ${PROJECT_HOME}
# Install python Anaconda env
ENV ANACONDA_PATH="/opt/anaconda3"
ENV PATH=${ANACONDA_PATH}/bin:${PATH}
ENV ANACONDA_INSTALLER=Anaconda3-2021.11-Linux-x86_64.sh
RUN mkdir ${ANACONDA_PATH} \
&& chown -R ${REPORT_USER}:${REPORT_USER} ${ANACONDA_PATH}
RUN apt-get install -y wget
USER ${REPORT_USER}
RUN wget https://repo.anaconda.com/archive/${ANACONDA_INSTALLER} \
&& /bin/bash ${ANACONDA_INSTALLER} -b -u -p ${ANACONDA_PATH} \
&& chown -R ${REPORT_USER} ${ANACONDA_PATH} \
&& rm -rvf ~/${ANACONDA_INSTALLER} \
&& echo ". ${ANACONDA_PATH}/etc/profile.d/conda.sh" >> ~/.bashrc \
&& echo "conda activate base" >> ~/.bashrc
RUN pip3 install --upgrade pip \
&& pip3 install -r requirements.txt \
&& pip3 install awscli
# run training and report
ENV PYTHONPATH=/home/${REPORT_USER}/<project> \
MLFLOW_TRACKING_URI=... \
MLFLOW_EXPERIMENT_NAME=...
CMD dvc config core.no_scm true \
&& dvc repro
I have followed the tutorial for Azure Functions using Python.
Everything went smoothly.
For the next step I need to add a C-compiled dependency.
I just added the C compiler plus the rows that build the dependency.
I have edited the Dockerfile and it now looks like this:
FROM mcr.microsoft.com/azure-functions/python:3.0-python3.7
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY . /home/site/wwwroot
FROM julia:1.3
RUN apt-get update && apt-get install -y gcc g++ && rm -rf /var/lib/apt/lists/*
FROM python:3.7
RUN pip install numpy
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
tar -xvzf ta-lib-0.4.0-src.tar.gz && \
cd ta-lib/ && \
./configure --prefix=/usr && \
make && \
make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
When I build this Dockerfile it looks good,
but when I run it, it just opens up a GCC prompt.
What am I doing wrong?
Thanks
I found an issue with your multi-stage FROM statements: each FROM starts a new build stage, and only the last stage ends up in the final image, so everything installed in the earlier stages was being thrown away. Also, you needed to add apt-get install make.
The following works:
FROM mcr.microsoft.com/azure-functions/python:3.0-python3.7
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
COPY requirements.txt /
RUN pip install -r /requirements.txt
COPY . /home/site/wwwroot
# Adding "apt-get install make" here
RUN apt-get update && apt-get install -y make gcc g++ && rm -rf /var/lib/apt/lists/*
RUN pip install numpy
RUN wget http://prdownloads.sourceforge.net/ta-lib/ta-lib-0.4.0-src.tar.gz && \
tar -xvzf ta-lib-0.4.0-src.tar.gz && \
cd ta-lib/ && \
./configure --prefix=/usr && \
make && \
make install
RUN rm -R ta-lib ta-lib-0.4.0-src.tar.gz
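To sanity-check the image before deploying, it helps to confirm the TA-Lib shared library actually landed under the /usr prefix; the image tag below is a placeholder:
docker build -t my-func-app .
docker run --rm my-func-app ls -l /usr/lib/libta_lib.so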
Build failure with a non-zero code: 2.
The Dockerfile is provided below:
FROM ubuntu:bionic
RUN \
apt-get update \
&& apt-get install -y -q curl gnupg \
&& curl -sSL 'http://p80.pool.sks-keyservers.net/pks/lookup?op=get&search=0x8AA7AF1F1091A5FD' | apt-key add - \
&& echo 'deb [arch=amd64] http://repo.sawtooth.me/ubuntu/chime/stable bionic universe' >> /etc/apt/sources.list \
&& apt-get update
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get install -y --allow-unauthenticated -q \
python3-pip \
python3-sawtooth-sdk \
python3-sawtooth-rest-api \
python3-sawtooth-cli \
cron-apt \
curl
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - \
&& apt-get install -y nodejs
RUN pip3 install \
pylint \
pycodestyle \
grpcio-tools==1.29.0 \
nose2 \
bcrypt \
pycrypto \
rethinkdb \
sanic \
swagger-ui-py \
itsdangerous
WORKDIR /project/sawtooth-marketplace
COPY sawbuck_app/package.json /project/sawtooth-marketplace/sawbuck_app/
RUN cd sawbuck_app/ && npm install
ENV PATH $PATH:/project/sawtooth-marketplace/bin
# Note that the context must be set to the project's root directory
COPY . .
RUN market-protogen
The following is the log recorded from the server. I don't know why the build is failing; can anybody guide me?
Service 'market-shell' failed to build: The command '/bin/sh -c apt-get update && apt-get install -y -q curl gnupg
&& curl -sSL 'http://p80.pool.sks-keyservers.net/pks/lookup?op=get&search=0x8AA7AF1F1091A5FD' | apt-key add -
&& echo 'deb [arch=amd64] http://repo.sawtooth.me/ubuntu/chime/stable bionic universe'>> /etc/apt/sources.list
&& apt-get update' returned a non-zero code: 2
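When a long && chain like this fails, replaying the steps one at a time in an interactive container usually pinpoints which command returns the non-zero code and why (plain Docker usage, nothing project-specific):
docker run --rm -it ubuntu:bionic bash
# then, inside the container, run each step by hand:
apt-get update
apt-get install -y -q curl gnupg
curl -sSL 'http://p80.pool.sks-keyservers.net/pks/lookup?op=get&search=0x8AA7AF1F1091A5FD' | apt-key add -
echo 'deb [arch=amd64] http://repo.sawtooth.me/ubuntu/chime/stable bionic universe' >> /etc/apt/sources.list
apt-get update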