Python script can't access some environment variables - python

I am building a Docker image FROM node:8.9.3-alpine (which is Alpine Linux) and then running it as usual, passing parameters like this:
docker run -dt \
-e lsRegion=${bamboo_lsRegion} \
-e lsCluster=${bamboo_lsCluster} \
Then inside that container I export some variables, and when I echo them I can see the proper values:
export lsEnv=${lsEnv:-'dev'}
Later in my scripts I run a Python script, and when I call print(os.environ) I can see all the variables passed via docker run, like lsRegion, but I do not see the newly exported ones, like lsEnv.
I already found Python: can't access newly defined environment variables and tried to solve this by calling source ~/.bashrc, but I cannot find that file.
I have tried
~/.bashrc
/etc/bash.bashrc
/root/.bashrc
But none of those exist (and I'm not sure this would solve my problem anyway), and it ends with this error message: /app/deploy.sh: source: line 16: can't open '/root/.bashrc'
A more reproducible example:
Dockerfile
FROM node:8.9.3-alpine
RUN apk add --no-cache \
python \
py-pip \
ca-certificates \
openssl \
groff \
less \
bash \
curl \
jq \
git \
zip \
build-base \
&& pip install --no-cache-dir --upgrade pip awscli \
&& aws configure set preview.cloudfront true
ENV TERRAFORM_VERSION 0.11.10
RUN wget -O terraform.zip https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip && \
unzip terraform.zip -d /usr/local/bin && \
rm -f terraform.zip
RUN apk -v --update add python py-pip
RUN pip install --upgrade awscli
RUN pip install --upgrade boto3
COPY ./build.variables /app/build.variables
COPY ./aws/taskdef/template.json /app/template.json
COPY ./deploy.sh /app/deploy.sh
COPY ./deploy.py /app/deploy.py
COPY ./terraform /app/terraform
CMD ["sh", "/app/deploy.sh"]
deploy.sh
#!/bin/bash -x
cd /app/terraform
./run-terraform.sh
cd ..
python /app/deploy.py
terraform/run-terraform.sh
...
export lsEnv="NotThere"
...
deploy.py
#!/usr/bin/env python
import os
print(os.environ)
The print shows lsRegion and lsCluster, but it does not show lsEnv.

Inside deploy.sh, you need to source run-terraform.sh if you want to affect the environment of the process that runs deploy.py, rather than the environment created for the process that runs run-terraform.sh.
#!/bin/bash -x
cd /app/terraform
source ./run-terraform.sh
cd ..
python /app/deploy.py
(You could also use . ./run-terraform.sh; source is a more readable bash synonym for the POSIX . command, but . is necessary if you are using some other POSIX-compliant shell that doesn't support source.)
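A minimal sketch of the difference, runnable outside Docker (the /tmp/child.sh path and DEMO_VAR name are only for illustration):
printf 'export DEMO_VAR=set-by-child\n' > /tmp/child.sh
bash /tmp/child.sh                          # runs in a child process; the export is lost
echo "after bash:   ${DEMO_VAR:-<unset>}"   # prints <unset>
. /tmp/child.sh                             # runs in the current shell; the export survives
echo "after source: ${DEMO_VAR:-<unset>}"   # prints set-by-child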

I solved it by calling this command in terraform/run-terraform.sh for each environment variable I need in the Python script:
echo "export lsTargetGroup=$lsTargetGroup" >> ~/.bashrc
And then in deploy.sh I just add source ~/.bashrc before calling the Python script.
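For completeness, a sketch of how the two scripts fit together with this workaround (the lsTargetGroup value below is hypothetical):
# terraform/run-terraform.sh -- persist the value for later scripts
export lsTargetGroup="my-target-group"
echo "export lsTargetGroup=$lsTargetGroup" >> ~/.bashrc
# deploy.sh -- pick the value back up before calling Python
source ~/.bashrc
python /app/deploy.py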

Related

How to import pandas in java application to run python script using dockerfile [duplicate]

I need both Java and Python in my Docker container to run some code. It works perfectly if I don't add the FROM openjdk:slim line.
This is my Dockerfile:
#get python
FROM python:3.6-slim
RUN pip install --trusted-host pypi.python.org flask
#get openjdk
FROM openjdk:slim
COPY . /targetdir
WORKDIR /targetdir
# Make port 81 available to the world outside this container
EXPOSE 81
CMD ["python", "test.py"]
And the test.py app is in the same directory:
from flask import Flask
import os
app = Flask(__name__)
@app.route("/")
def hello():
    html = "<h3>Test:{test}</h3>"
    test = os.environ['JAVA_HOME']
    return html.format(test=test)

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=81)
I'm getting this error:
D:\MyApps\Docker Toolbox\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"python\": executable file not found in $PATH": unknown.
What exactly am I doing wrong here? I'm new to docker, perhaps I'm missing a step.
Additional details
My goal
I have to run a python program that runs a Java file. The python library I'm using requires the path to JAVA_HOME.
My issues:
I do not know Java, so I cannot run the file properly.
My entire code is in Python, except this Java bit
The Python wrapper runs the file in a way I need it to run.
An easier solution to the above issue is to use a multi-stage Docker build, where you can copy content from one image into another. In this case you can use openjdk:slim as the base image and copy the content of a Python image into it, as follows:
FROM openjdk:slim
COPY --from=python:3.6 / /
...
<normal instructions for python container continues>
...
Multi-stage builds are available as of Docker 17.05, and they let you do more than this, such as copying only the content you need from one stage into another.
Reference documentation
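As a quick sanity check after building (the java-python-app tag is only an example), you can confirm that both runtimes ended up in the final image:
docker build -t java-python-app .
docker run --rm java-python-app java -version
docker run --rm java-python-app python --version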
OK it took me a little while to figure it out. And my thanks go to this answer.
I think my approach didn't work because I did not have a basic version of Linux.
So it goes like this:
Get Linux (I'm using Alpine because it's barebones)
Get Java via the package manager
Get Python, PIP
OPTIONAL: find and set JAVA_HOME
Find the path to JAVA_HOME. Perhaps there is a better way to do this, but I did it by running the container, looking inside it with docker exec -it [CONTAINER ID] bin/bash, and finding the path there.
Set JAVA_HOME in dockerfile and build + run it all again
Here is the final Dockerfile ( it should work with the python code in the question) :
### 1. Get Linux
FROM alpine:3.7
### 2. Get Java via the package manager
RUN apk update \
&& apk upgrade \
&& apk add --no-cache bash \
&& apk add --no-cache --virtual=build-dependencies unzip \
&& apk add --no-cache curl \
&& apk add --no-cache openjdk8-jre
### 3. Get Python, PIP
RUN apk add --no-cache python3 \
&& python3 -m ensurepip \
&& pip3 install --upgrade pip setuptools \
&& rm -r /usr/lib/python*/ensurepip && \
if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi && \
if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi && \
rm -r /root/.cache
### Get Flask for the app
RUN pip install --trusted-host pypi.python.org flask
####
#### OPTIONAL : 4. SET JAVA_HOME environment variable, uncomment the line below if you need it
#ENV JAVA_HOME="/usr/lib/jvm/java-1.8-openjdk"
####
EXPOSE 81
ADD test.py /
CMD ["python", "test.py"]
I'm new to Docker, so this may not be the best possible solution. I'm open to suggestions.
UPDATE: COMMON ISSUES
Difficulty using Python packages
As Joabe Lucena pointed out here, Alpine can have issues with certain Python packages.
I recommend that you use a Linux distro that works best for you, e.g. CentOS.
Another alternative is to simply use docker-java-python image from docker hub. https://hub.docker.com/r/rappdw/docker-java-python
FROM rappdw/docker-java-python:openjdk1.8.0_171-python3.6.6
RUN java -version
RUN python --version
I found Sunny Pal's answer very useful but I made the copy more specific and added the necessary environment variables and update-alternatives lines so that Java was accessible from the command line in the Python container.
FROM python:3.9-slim
COPY --from=openjdk:8-jre-slim /usr/local/openjdk-8 /usr/local/openjdk-8
ENV JAVA_HOME /usr/local/openjdk-8
RUN update-alternatives --install /usr/bin/java java /usr/local/openjdk-8/bin/java 1
...
Oh, let me add my five cents. I took python:slim as a base image. Then I found the openjdk-11 base image code (note: openjdk-10 will fail because it is not supported) and copy-pasted it into my Dockerfile.
Note, copy-paste driven development is cool... ONLY when you understand each line you use in your code!!!
And here it is!
FROM python:3.7.2-slim
# Do your stuff, install python.
# and now Jdk
RUN rm -rf /var/lib/apt/lists/* && apt-get clean && apt-get update && apt-get upgrade -y \
&& apt-get install -y --no-install-recommends curl ca-certificates \
&& rm -rf /var/lib/apt/lists/*
ENV JAVA_VERSION jdk-11.0.2+7
COPY slim-java* /usr/local/bin/
RUN set -eux; \
ARCH="$(dpkg --print-architecture)"; \
case "${ARCH}" in \
ppc64el|ppc64le) \
ESUM='c18364a778b1b990e8e62d094377af48b000f9f6a64ec21baff6a032af06386d'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.1%2B13/OpenJDK11U-jdk_ppc64le_linux_hotspot_11.0.1_13.tar.gz'; \
;; \
s390x) \
ESUM='e39aacc270731dadcdc000aaaf709adae7a08113ccf5b4a045bc87fc13458d71'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11%2B28/OpenJDK11-jdk_s390x_linux_hotspot_11_28.tar.gz'; \
;; \
amd64|x86_64) \
ESUM='d89304a971e5186e80b6a48a9415e49583b7a5a9315ba5552d373be7782fc528'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.2%2B7/OpenJDK11U-jdk_x64_linux_hotspot_11.0.2_7.tar.gz'; \
;; \
aarch64|arm64) \
ESUM='b66121b9a0c2e7176373e670a499b9d55344bcb326f67140ad6d0dc24d13d3e2'; \
BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.1%2B13/OpenJDK11U-jdk_aarch64_linux_hotspot_11.0.1_13.tar.gz'; \
;; \
*) \
echo "Unsupported arch: ${ARCH}"; \
exit 1; \
;; \
esac; \
curl -Lso /tmp/openjdk.tar.gz ${BINARY_URL}; \
sha256sum /tmp/openjdk.tar.gz; \
mkdir -p /opt/java/openjdk; \
cd /opt/java/openjdk; \
echo "${ESUM} /tmp/openjdk.tar.gz" | sha256sum -c -; \
tar -xf /tmp/openjdk.tar.gz; \
jdir=$(dirname $(dirname $(find /opt/java/openjdk -name javac))); \
mv ${jdir}/* /opt/java/openjdk; \
export PATH="/opt/java/openjdk/bin:$PATH"; \
apt-get update; apt-get install -y --no-install-recommends binutils; \
/usr/local/bin/slim-java.sh /opt/java/openjdk; \
apt-get remove -y binutils; \
rm -rf /var/lib/apt/lists/*; \
rm -rf ${jdir} /tmp/openjdk.tar.gz;
ENV JAVA_HOME=/opt/java/openjdk \
PATH="/opt/java/openjdk/bin:$PATH"
ENV JAVA_TOOL_OPTIONS="-XX:+UseContainerSupport"
Now, the references:
https://github.com/AdoptOpenJDK/openjdk-docker/blob/master/11/jdk/ubuntu/Dockerfile.hotspot.releases.slim
https://hub.docker.com/_/python/
https://hub.docker.com/r/adoptopenjdk/openjdk11/
I used them to answer this question, which may help you sometime.
Running Python and Java in Docker
I believe that by adding the FROM openjdk:slim line, you tell Docker to execute all of your subsequent commands in the openjdk container (which does not have Python).
I would approach this by creating two separate containers for openjdk and python and specify individual sets of commands for them.
Docker is made to modularize your solutions and mashing everything into one container is usually a bad practice.
I tried pajamas's answer, which worked very well for creating this image. However, when trying to install packages like gensim, pandas or others, I faced errors like: don't know how to compile Fortran code on platform 'posix'. I searched and tried this, this and that, but none worked for me.
So, based on pajamas's answer I decided to convert his image from Alpine to CentOS, which worked very well. Here's a Dockerfile that might help someone who's struggling in this scenario like I was:
# Get Linux
FROM centos:7
# Install Java
RUN yum update -y \
&& yum install java-1.8.0-openjdk -y \
&& yum clean all \
&& rm -rf /var/cache/yum
# Set JAVA_HOME environment var
ENV JAVA_HOME="/usr/lib/jvm/jre-openjdk"
# Install Python
RUN yum install python3 -y \
&& pip3 install --upgrade pip setuptools wheel \
&& if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi \
&& if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi \
&& yum clean all \
&& rm -rf /var/cache/yum
CMD ["bash"]
You should have only one FROM in your Dockerfile (unless you use a multi-stage build).
I think I found the easiest way to mix Java JDK 17 and Python 3. It does not work with Python 2.
FROM openjdk:17.0.1-jdk-slim
RUN apt-get update && \
apt-get install -y software-properties-common && \
apt-get install -y python3-pip
software-properties-common brings in a lightweight Python 3 (version 3.9.1).
You can also install libraries like this:
RUN python3 -m pip install --upgrade pip && \
python3 -m pip install numpy && \
python3 -m pip install opencv-python
OR
RUN apt-get update && \
apt-get install -y ffmpeg
The easiest approach is to start from a Python image and add OpenJDK. Note that FROM openjdk has been deprecated and replaced with eclipse-temurin.
FROM python:3.10
ENV JAVA_HOME=/opt/java/openjdk
COPY --from=eclipse-temurin:17-jre $JAVA_HOME $JAVA_HOME
ENV PATH="${JAVA_HOME}/bin:${PATH}"
RUN pip install --trusted-host pypi.python.org flask
See How to use this Image - Using a different base Image section of https://hub.docker.com/_/eclipse-temurin for details.
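As a rough check that this works end to end (the flask-java tag is only an example), you can confirm both that Java runs and that JAVA_HOME is visible from Python:
docker build -t flask-java .
docker run --rm flask-java java -version
docker run --rm flask-java python -c 'import os; print(os.environ["JAVA_HOME"])'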
Instead of using FROM openjdk:slim you can install Java separately; see the example below:
# Install OpenJDK-8
RUN apt-get update && \
apt-get install -y openjdk-8-jdk && \
apt-get install -y ant && \
apt-get clean;
# Fix certificate issues
RUN apt-get update && \
apt-get install ca-certificates-java && \
apt-get clean && \
update-ca-certificates -f;
# Setup JAVA_HOME -- useful for docker commandline
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
RUN export JAVA_HOME

standard_init_linux.go:211: exec user process caused "exec format error" when build tiktokapi:latest [duplicate]

docker started throwing this error:
standard_init_linux.go:178: exec user process caused "exec format error"
whenever I run a specific Docker container with CMD or ENTRYPOINT, regardless of any changes to the file other than removing CMD or ENTRYPOINT. Here is the Dockerfile I have been working with, which worked perfectly until about an hour ago:
FROM buildpack-deps:jessie
ENV PATH /usr/local/bin:$PATH
ENV LANG C.UTF-8
RUN apt-get update && apt-get install -y --no-install-recommends \
tcl \
tk \
&& rm -rf /var/lib/apt/lists/*
ENV GPG_KEY 0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D
ENV PYTHON_VERSION 3.6.0
ENV PYTHON_PIP_VERSION 9.0.1
RUN set -ex \
&& buildDeps=' \
tcl-dev \
tk-dev \
' \
&& apt-get update && apt-get install -y $buildDeps --no-install-recommends && rm -rf /var/lib/apt/lists/* \
\
&& wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" \
&& wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPG_KEY" \
&& gpg --batch --verify python.tar.xz.asc python.tar.xz \
&& rm -r "$GNUPGHOME" python.tar.xz.asc \
&& mkdir -p /usr/src/python \
&& tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz \
&& rm python.tar.xz \
\
&& cd /usr/src/python \
&& ./configure \
--enable-loadable-sqlite-extensions \
--enable-shared \
&& make -j$(nproc) \
&& make install \
&& ldconfig \
\
&& if [ ! -e /usr/local/bin/pip3 ]; then : \
&& wget -O /tmp/get-pip.py 'https://bootstrap.pypa.io/get-pip.py' \
&& python3 /tmp/get-pip.py "pip==$PYTHON_PIP_VERSION" \
&& rm /tmp/get-pip.py \
; fi \
&& pip3 install --no-cache-dir --upgrade --force-reinstall "pip==$PYTHON_PIP_VERSION" \
&& [ "$(pip list |tac|tac| awk -F '[ ()]+' '$1 == "pip" { print $2; exit }')" = "$PYTHON_PIP_VERSION" ] \
\
&& find /usr/local -depth \
\( \
\( -type d -a -name test -o -name tests \) \
-o \
\( -type f -a -name '*.pyc' -o -name '*.pyo' \) \
\) -exec rm -rf '{}' + \
&& apt-get purge -y --auto-remove $buildDeps \
&& rm -rf /usr/src/python ~/.cache
RUN cd /usr/local/bin \
&& { [ -e easy_install ] || ln -s easy_install-* easy_install; } \
&& ln -s idle3 idle \
&& ln -s pydoc3 pydoc \
&& ln -s python3 python \
&& ln -s python3-config python-config
RUN pip install uwsgi
RUN mkdir /config
RUN mkdir /logs
ENV HOME /var/www
WORKDIR /config
ADD conf/requirements.txt /config
RUN pip install -r /config/requirements.txt
ADD conf/wsgi.py /config
ADD conf/wsgi.ini /config
ADD conf/__init__.py /config
ADD start.sh /bin/start.sh
RUN chmod +x /bin/start.sh
EXPOSE 8000
ENTRYPOINT ["start.sh", "uwsgi", "--ini", "wsgi.ini"]
I forgot to put
#!/bin/bash
at the top of the sh file, problem solved.
This can happen if you're trying to run an x86 built image on an arm64/aarch64 machine.
You'll need to rebuild the image using the corresponding architecture
This error could also occur if an image was built on a MacBook Pro with an Apple M1 Pro chip, which is ARM-based, so by default the Docker build command targets arm64.
Docker in fact detects the Apple M1 Pro platform as linux/arm64/v8
Specifying the platform to both the build command and version tag was enough:
# Build for ARM64 (default)
docker build -t <image-name>:<version>-arm64 .
# Build for ARM64
docker build --platform=linux/arm64 -t <image-name>:<version>-arm64 .
# Build for AMD64
docker build --platform=linux/amd64 -t <image-name>:<version>-amd64 .
Environment
Chip: Apple M1 Pro, 10 Cores (8 performance and 2 efficiency)
Docker version 20.10.12, build e91ed57
Add this code
#!/usr/bin/env bash
at the top of your script file.
If the Docker image is built on an M1 chip and uploaded to be deployed by Fargate then you’ll notice this container error in Fargate:
standard_init_linux.go:228: exec user process caused: exec format error
There’s a couple ways to work around this. You can either:
Build your docker image using:
docker buildx build --platform=linux/amd64 -t image-name:version .
Update your Dockerfile’s FROM statements with
FROM --platform=linux/amd64 BASE_IMAGE:VERSION
I got the same error; I was building an ARM image. After changing to AMD, the issue was fixed.
That error usually means you're trying to run this amd64 image on a non-amd64 host (such as 32-bit or ARM).
Try building with buildx and specifying --platform linux/amd64.
Sample Command
docker buildx build -t ranjithkumarmv/node-12.13.0-awscli . --platform linux/amd64
If you're getting this in AWS ECS, you probably built the image with an Apple M1 Pro chip. In your Dockerfile, you can add the following:
FROM --platform=linux/amd64 <image>:<tag>.
If you're using a sub-image, ex: FROM <parent_image_you_created>:<tag> you'll want to make sure the <parent_image_you_created>:<tag> was built with FROM --platform=linux/amd64 <image>:<tag>.
Another possible reason for this could be if the file is saved with Windows line endings (CRLF). Save it with Unix line endings (LF) and the file will be found.
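One way to check for and fix CRLF endings (start.sh follows the question's script name; dos2unix may not be installed, so a sed alternative is shown):
file start.sh               # reports "with CRLF line terminators" if affected
dos2unix start.sh           # or, without dos2unix:
sed -i 's/\r$//' start.sh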
Extending to the accepted answer:
For an alpine (without bash) image:
#!/bin/ash
at the top of the sh file, solves the problem.
I'm currently rocking an M1 Mac, and I was also running into this issue a little earlier. I actually realized that a Fargate task I had deployed as part of a stack had not been running for over a month, because I had deployed it from my M1 Mac laptop. The deploy script had worked fine on my old Intel-based Mac.
The error I noticed in the CloudWatch logs (again, the task had been failing for over a month) was exactly as follows:
standard_init_linux.go:228: exec user process caused: exec format error
To fix it, in all the docker build steps -- since I had one for building a Lambda layer, and another for building the Fargate task -- I just added --platform=linux/amd64. Please note, it did not work for me when I added an architecture suffix to the tag after the -t argument, for example img-test:0.1-amd64. I wonder if that could be because I was referencing the :latest tag later in my script.
In any case, these were the changes necessitated. You'll notice, all I really did was add the --platform argument; everything else stayed the same.
ecr_repository="some/app"
docker build -t ${ecr_repository} \
--platform=linux/amd64 \
--build-arg SOME_ARG='value' \
.
And I'm not sure if it's technically required, but just to be on the safe side, I also updated all the FROM statements in my Dockerfiles:
FROM --platform=linux/amd64 ubuntu:<version>
For those trying to build images for aarch64 or armv7l architectures on an amd64 Linux system and running into the same error: check whether the qemu-user-static package is installed. If not, install it with sudo apt install qemu-user-static on Ubuntu/Debian/Mint etc., or with sudo dnf install qemu-user-static on Fedora.
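Once qemu-user-static is installed, a quick way to confirm that emulation works (alpine is only a throwaway test image; --platform on docker run needs a reasonably recent Docker):
docker run --rm --platform linux/arm64 alpine uname -m    # should print aarch64
docker run --rm --platform linux/arm/v7 alpine uname -m   # should print armv7l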
None of this worked for me. Why is no one mentioning the BOM?
This error occurs if you have a Byte Order Mark at the start of your file.
You can check with:
head -n 1 yourscript | LC_ALL=C od -tc
In Notepad++ you can save text in UTF-8 without the BOM by selecting the appropriate option in the Encoding menu:
Encoding > UTF-8
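If a BOM is present, one way to strip it in place (assumes GNU sed):
sed -i '1s/^\xEF\xBB\xBF//' yourscript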
I faced the same issue on RHEL 7.3 with Docker 17.05-ce when running an offline-loaded image. It appeared the default storage driver of RHEL/CentOS had changed from devicemapper to overlay. Reverting the driver to devicemapper fixed the problem.
dockerd --storage-driver=devicemapper
or
/etc/docker/daemon.json
{
"storage-driver": "devicemapper"
}
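After changing daemon.json, restart the daemon so the storage driver change takes effect (systemd assumed), then confirm the active driver:
sudo systemctl restart docker
docker info | grep -i 'storage driver'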
One more possibility is that #!/bin/bash is not in the very first line. There must be really nothing before it (no empty lines, nothing).
Not a direct answer to the question asked, but I got the error while calling "docker-compose up" to bring my Node.js application up. I realized that in my Dockerfile I had CMD ["./server.js"].
To fix it, I replaced that with CMD ["npm", "start"], which solved the issue. If someone lands here for this exception, I hope they find this useful.
standard_init_linux.go:228: exec user process caused: exec format error
For me, it was because there was no main package in my Go project. Hope it helps someone.
If you have buildx installed and still run into this error, it might be because cross-platform emulators are missing.
These can be installed like this:
docker run --privileged --rm tonistiigi/binfmt --install all
Source: https://docs.docker.com/build/building/multi-platform/#building-multi-platform-images
Happened to me when trying to build a linux/arm64 image via Gitlab pipeline running on a Gitlab runner with platform linux/amd64.
In my case, I "drained" my ECS instances and "activated" them again, and thereafter the error vanished.
If you are using an IBR1700 router, which runs containers, you may see a similar error in the router command line after running container logs test (where test is the name of the container).
To fix this you need to build the application so it runs on a different platform; the router uses linux/arm/v7.
docker run -it --rm --privileged docker/binfmt:a7996909642ee92942dcd6cff44b9b95f08dad64
docker buildx create --name mybuilder
docker buildx use mybuilder
docker buildx build --platform linux/arm/v7 --no-cache -t <username/repository>:<tag> . --push
Pushing to the repository with this build means it can run on the router.
https://github.com/cradlepoint/container-samples
For me, my ECS cluster was arm64 architecture, but my docker image was showing amd64 architecture. I rebuilt my docker image: https://docs.docker.com/desktop/multi-arch/
I had a similar problem (standard_init_linux.go:228: exec user process caused: exec format error), but nothing from the answers helped. Eventually I found that the cause was the old Docker version 17.09.0-ce, which is also the default on CircleCI; after changing it to the most recent one, the problem was solved.

A more elegant docker run command

I have built a docker image using a Dockerfile that does the following:
FROM my-base-python-image
WORKDIR /opt/data/projects/project/
RUN mkdir files
RUN rm -rf /etc/yum.repos.d/*.repo
COPY rss-centos-7-config.repo /etc/yum.repos.d/
COPY files/ files/
RUN python -m venv /opt/venv && . /opt/venv/bin/activate
RUN yum install -y unzip
WORKDIR files/
RUN unzip file.zip && rm -rf file.zip && . /opt/venv/bin/activate && python -m pip install *
WORKDIR /opt/data/projects/project/
That builds an image that allows me to run a custom command. In a terminal, for instance, here is the command I run after activating my project venv:
python -m pathA.ModuleB -a inputfile_a.json -b inputfile_b.json -c
Arguments a & b are custom tags to identify input files. -c calls a block of code.
So to run the built image successfully, I run the container and map local files to input files:
docker run --rm -it -v /local/inputfile_a.json:/opt/data/projects/project/inputfile_a.json -v /local/inputfile_b.json:/opt/data/projects/project/inputfile_b.json image-name:latest bash -c 'source /opt/venv/bin/activate && python -m pathA.ModuleB -a inputfile_a.json -b inputfile_b.json -c'
Besides shortening file paths, is there anything I can do to shorten the docker run command? I'm thinking that adding a CMD and/or ENTRYPOINT to the Dockerfile would help, but I cannot figure out how to do it, as I get errors.
There are a couple of things you can do to improve this.
The simplest is to run the application outside of Docker. You mention that you have a working Python virtual environment. A design goal of Docker is that programs in containers can't generally access files on the host, so if your application is all about reading and writing host files, Docker may not be a good fit.
Your file paths inside the container are fairly long, and this is bloating your -v mount options. You don't need an /opt/data/projects/project prefix; it's very typical just to use short paths like /app or /data.
You're also installing your application into a Python virtual environment, but inside a Docker image, which provides its own isolation. As you're seeing in your docker run command and elsewhere, the mechanics of activating a virtual environment in Docker are a little hairy. It's also not necessary; just skip the virtual environment setup altogether. (You can also directly run /opt/venv/bin/python and it knows it "belongs to" a virtual environment, without explicitly activating it.)
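If you do keep the virtual environment, a sketch of invoking it without any activation step (paths and image name follow the question's layout):
docker run --rm -it \
-v /local/inputfile_a.json:/opt/data/projects/project/inputfile_a.json \
-v /local/inputfile_b.json:/opt/data/projects/project/inputfile_b.json \
image-name:latest \
/opt/venv/bin/python -m pathA.ModuleB -a inputfile_a.json -b inputfile_b.json -c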
Finally, in your setup.py file, you can use a setuptools entry_points declaration to provide a script that runs your named module.
That can reduce your Dockerfile to more or less
FROM my-base-python-image
# OS-level setup
RUN rm -rf /etc/yum.repos.d/*.repo
COPY rss-centos-7-config.repo /etc/yum.repos.d/
RUN yum install -y unzip
# Copy the application in
WORKDIR /app/files
COPY files/ ./
RUN unzip file.zip \
&& rm file.zip \
&& pip install *
# Typical runtime metadata
WORKDIR /app
CMD main-script --help
And then when you run it, you can:
# just map the entire local directory with -v
docker run --rm -it \
-v /local:/data \
image-name:latest \
main-script -a /data/inputfile_a.json -b /data/inputfile_b.json -c
You can also consider the docker run -w /data option to change the current directory, which would add a Docker-level argument but slightly shorten the script command line.
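That last variant might look like this (same example image name and paths as above):
docker run --rm -it \
-w /data \
-v /local:/data \
image-name:latest \
main-script -a inputfile_a.json -b inputfile_b.json -c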

Docker python output csv file

I have a Python script which should output a CSV file. I'm trying to get this file into the current working directory, but without success.
This is my Dockerfile
FROM python:3.6.4
RUN apt-get update && apt-get install -y libaio1 wget unzip
WORKDIR /opt/oracle
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
    unzip instantclient-basiclite-linuxx64.zip && rm -f instantclient-basiclite-linuxx64.zip && \
    cd /opt/oracle/instantclient* && \
    rm -f *jdbc* *occi* *mysql* *README *jar uidrvci genezi adrci && \
    echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && \
    ldconfig
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install pystan
RUN apt-get -y update && python3 -m pip install cx_Oracle --upgrade
RUN pip install -r requirements.txt
CMD [ "python", "Main.py" ]
And I run the container with the following command:
docker container run -v $pwd:/home/learn/rstudio_script/output image
It is bad practice to bind-mount a volume just to get one file from your container onto your host.
Instead, what you should leverage is the docker cp command:
docker cp <containerId>:/file/path/within/container /host/path/target
You can have bash run this command automatically after your docker run.
So something like:
#!/bin/bash
# this stores the container id
CONTAINER_ID=$(docker run -dit img)
docker cp $CONTAINER_ID:/some_path host_path
If you are adamant about using a bind volume then, as others have pointed out, the issue is most likely that your Python script isn't writing the CSV to the correct path.
Your script Main.py is probably not trying to write to /home/learn/rstudio_script/output. The working directory in the container is /app because of the last WORKDIR directive in the Dockerfile. You can override that at runtime with --workdir but then the CMD would have to be changed as well.
One solution is to have your script write files to /output/ and then run it like this:
docker container run -v $PWD:/output/ image
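A quick way to confirm the mount behaves as expected before wiring it into Main.py (the echo only stands in for the script's CSV output; result.csv is a placeholder name):
docker container run --rm -v $PWD:/output/ image sh -c 'echo "col1,col2" > /output/result.csv'
ls -l result.csv            # the file should now exist in the current host directory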

