I have a Java application that should interact with Python agents installed on remote machines. My main application has SSH access to these remote machines. To make users' lives simpler, I want to automate the agent installation process, so a user can just push a button and the main application connects, installs, and runs the agent remotely.
I expect that:
the remotes will be different UNIX systems for now (Windows later);
the SSH user may not be root;
Python is installed on the remote machine;
the remote machine may not have an Internet connection.
My Python agents use virtualenv and pip and have a number of dependencies. I've made a script that downloads virtualenv and the dependencies as tar.gz / zip archives:
#!/usr/bin/env bash
# Should be the same in dist.sh and install_dist.sh.
VIRTUALENV_VERSION=13.0.3
VIRTUALENV_BASE_URL=https://pypi.python.org/packages/source/v/virtualenv
echo "Recreating dist directory..."
if [ -d "dist" ]; then
rm -rf dist
fi
mkdir -p dist
echo "Copying agent sources..."
cp -r src dist
cp requirements.txt dist
cp agent.sh dist
cp install_dist.sh dist
echo "Downloading virtualenv..."
curl -o dist/virtualenv-$VIRTUALENV_VERSION.tar.gz $VIRTUALENV_BASE_URL/virtualenv-$VIRTUALENV_VERSION.tar.gz
echo "Downloading requirements..."
mkdir dist/libs
./env/bin/pip install --download="dist/libs" --no-binary :all: -r requirements.txt
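# NOTE: newer pip versions removed "pip install --download"; the equivalent there is:
#   ./env/bin/pip download --no-binary :all: -d dist/libs -r requirements.txt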
echo "Packing dist directory..."
tar cvzf dist.tar.gz dist
When installation starts, my main app scps the archive to the remote machine, which then installs virtualenv and the requirements and copies the required scripts:
#!/usr/bin/env bash
# Runs remotely.
SCRIPT_DIR="$(dirname "$0")"
SCRIPT_DIR="$(cd "$SCRIPT_DIR" && pwd)"
HOME_DIR="$(cd "$SCRIPT_DIR/.." && pwd)"
VIRTUALENV_VERSION=13.0.3
echo "Installing virtualenv..."
tar xzf $SCRIPT_DIR/virtualenv-$VIRTUALENV_VERSION.tar.gz --directory $SCRIPT_DIR
python $SCRIPT_DIR/virtualenv-$VIRTUALENV_VERSION/virtualenv.py $HOME_DIR/env
if [ $? -ne 0 ]
then
echo "Unable to create virtualenv."
exit 1
fi
$HOME_DIR/env/bin/pip install $SCRIPT_DIR/virtualenv-$VIRTUALENV_VERSION.tar.gz
if [ $? -ne 0 ]
then
echo "Unable to install virtualenv."
exit 1
fi
echo "Installing requirements..."
$HOME_DIR/env/bin/pip install --no-index --find-links="$SCRIPT_DIR/libs" -r $SCRIPT_DIR/requirements.txt
if [ $? -ne 0 ]
then
echo "Unable to install requirements."
exit 1
fi
echo "Copying agent sources..."
cp -r $SCRIPT_DIR/src $HOME_DIR
if [ $? -ne 0 ]
then
echo "Unable to copy agent sources."
exit 1
fi
cp -r $SCRIPT_DIR/agent.sh $HOME_DIR
if [ $? -ne 0 ]
then
echo "Unable to copy agent.sh."
exit 1
fi
cp -r $SCRIPT_DIR/requirements.txt $HOME_DIR
echo "Cleaning installation files..."
rm -rf $HOME_DIR/dist.tar.gz
rm -rf $SCRIPT_DIR
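For reference, the driver side of the main application boils down to the equivalent of the following (a sketch with placeholder host and user; the real app does this through its SSH library rather than a shell):
scp dist.tar.gz user@remote-host:
ssh user@remote-host 'tar xzf dist.tar.gz && bash dist/install_dist.sh'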
I've run into a problem building some of the dependencies remotely; for example, they may require gcc or other libraries that have to be installed manually with sudo. Maybe I should use pre-compiled wheels where possible, providing them per target system? Or do you see a better way to implement this?
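One direction I'm considering: pre-build binary wheels on a build machine that matches each target platform, then install them offline. A sketch of what the two scripts would gain (assuming pip with wheel support; dist/wheels is an arbitrary name):
# in dist.sh, on a build box matching the target:
./env/bin/pip wheel -r requirements.txt -w dist/wheels
# in install_dist.sh, on the target (no compiler needed):
$HOME_DIR/env/bin/pip install --no-index --find-links="$SCRIPT_DIR/wheels" -r $SCRIPT_DIR/requirements.txt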
Maybe deliver fully packaged applications? Have a look at Freeze and py2exe.
Also, if you plan on delivering to Windows environments, note that compiling there requires a lot of setup and is quite annoying, so prefer shipping pre-compiled libraries with your app. For Linux environments, it mostly depends on the target, so you will probably have to stick with building dependencies.
Note: From your comment:
providing distribution for each target system requires building them
on each target system
Note that you can cross-compile, which allows you to build for a different system (even Windows) without needing a running machine with that environment.
I would go ahead with cross-compiling. Worst case, a few hosts will be troublesome and won't be able to use your binaries; in that scenario, just solve the problem case by case.
Best of luck :)
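For illustration, a minimal freeze sketch with PyInstaller (a tool in the same family as Freeze/py2exe; src/agent.py is an assumed entry point, and PyInstaller builds for the OS it runs on, so you would run this once per target platform):
pip install pyinstaller
pyinstaller --onefile src/agent.py
The resulting binary in dist/ runs without Python installed on the target.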
Related
Docker started throwing this error:
standard_init_linux.go:178: exec user process caused "exec format error"
whenever I run a specific Docker container with CMD or ENTRYPOINT, regardless of any changes to the file other than removing CMD or ENTRYPOINT. Here is the Dockerfile I have been working with, which worked perfectly until about an hour ago:
FROM buildpack-deps:jessie
ENV PATH /usr/local/bin:$PATH
ENV LANG C.UTF-8
RUN apt-get update && apt-get install -y --no-install-recommends \
tcl \
tk \
&& rm -rf /var/lib/apt/lists/*
ENV GPG_KEY 0D96DF4D4110E5C43FBFB17F2D347EA6AA65421D
ENV PYTHON_VERSION 3.6.0
ENV PYTHON_PIP_VERSION 9.0.1
RUN set -ex \
&& buildDeps=' \
tcl-dev \
tk-dev \
' \
&& apt-get update && apt-get install -y $buildDeps --no-install-recommends && rm -rf /var/lib/apt/lists/* \
\
&& wget -O python.tar.xz "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz" \
&& wget -O python.tar.xz.asc "https://www.python.org/ftp/python/${PYTHON_VERSION%%[a-z]*}/Python-$PYTHON_VERSION.tar.xz.asc" \
&& export GNUPGHOME="$(mktemp -d)" \
&& gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$GPG_KEY" \
&& gpg --batch --verify python.tar.xz.asc python.tar.xz \
&& rm -r "$GNUPGHOME" python.tar.xz.asc \
&& mkdir -p /usr/src/python \
&& tar -xJC /usr/src/python --strip-components=1 -f python.tar.xz \
&& rm python.tar.xz \
\
&& cd /usr/src/python \
&& ./configure \
--enable-loadable-sqlite-extensions \
--enable-shared \
&& make -j$(nproc) \
&& make install \
&& ldconfig \
\
&& if [ ! -e /usr/local/bin/pip3 ]; then : \
&& wget -O /tmp/get-pip.py 'https://bootstrap.pypa.io/get-pip.py' \
&& python3 /tmp/get-pip.py "pip==$PYTHON_PIP_VERSION" \
&& rm /tmp/get-pip.py \
; fi \
&& pip3 install --no-cache-dir --upgrade --force-reinstall "pip==$PYTHON_PIP_VERSION" \
&& [ "$(pip list |tac|tac| awk -F '[ ()]+' '$1 == "pip" { print $2; exit }')" = "$PYTHON_PIP_VERSION" ] \
\
&& find /usr/local -depth \
\( \
\( -type d -a -name test -o -name tests \) \
-o \
\( -type f -a -name '*.pyc' -o -name '*.pyo' \) \
\) -exec rm -rf '{}' + \
&& apt-get purge -y --auto-remove $buildDeps \
&& rm -rf /usr/src/python ~/.cache
RUN cd /usr/local/bin \
&& { [ -e easy_install ] || ln -s easy_install-* easy_install; } \
&& ln -s idle3 idle \
&& ln -s pydoc3 pydoc \
&& ln -s python3 python \
&& ln -s python3-config python-config
RUN pip install uwsgi
RUN mkdir /config
RUN mkdir /logs
ENV HOME /var/www
WORKDIR /config
ADD conf/requirements.txt /config
RUN pip install -r /config/requirements.txt
ADD conf/wsgi.py /config
ADD conf/wsgi.ini /config
ADD conf/__init__.py /config
ADD start.sh /bin/start.sh
RUN chmod +x /bin/start.sh
EXPOSE 8000
ENTRYPOINT ["start.sh", "uwsgi", "--ini", "wsgi.ini"]
I forgot to put
#!/bin/bash
at the top of the sh file, problem solved.
This can happen if you're trying to run an x86-built image on an arm64/aarch64 machine.
You'll need to rebuild the image for the corresponding architecture.
This error can also occur if the image was built on a MacBook Pro with an Apple M1 Pro chip, which is ARM-based; by default the Docker build command then targets arm64.
Docker in fact detects the Apple M1 Pro platform as linux/arm64/v8
Specifying the platform to both the build command and version tag was enough:
# Build for ARM64 (default)
docker build -t <image-name>:<version>-arm64 .
# Build for ARM64
docker build --platform=linux/arm64 -t <image-name>:<version>-arm64 .
# Build for AMD64
docker build --platform=linux/amd64 -t <image-name>:<version>-amd64 .
Environment
Chip: Apple M1 Pro, 10 Cores (8 performance and 2 efficiency)
Docker version 20.10.12, build e91ed57
Add this code
#!/usr/bin/env bash
at the top of your script file.
If the Docker image is built on an M1 chip and uploaded to be deployed by Fargate, then you'll notice this container error in Fargate:
standard_init_linux.go:228: exec user process caused: exec format error
There are a couple of ways to work around this. You can either:
Build your docker image using:
docker buildx build --platform=linux/amd64 -t image-name:version .
Update your Dockerfile’s FROM statements with
FROM --platform=linux/amd64 BASE_IMAGE:VERSION
Got the same error; I was building an ARM image. After changing to AMD, the issue was fixed.
That error usually means you're trying to run an amd64 image on a non-amd64 host (such as 32-bit or ARM).
Try building with buildx and specifying --platform linux/amd64.
Sample Command
docker buildx build -t ranjithkumarmv/node-12.13.0-awscli . --platform linux/amd64
If you're getting this in AWS ECS, you probably built the image with an Apple M1 Pro chip. In your Dockerfile, you can add the following:
FROM --platform=linux/amd64 <image>:<tag>
If you're using a sub-image, ex: FROM <parent_image_you_created>:<tag> you'll want to make sure the <parent_image_you_created>:<tag> was built with FROM --platform=linux/amd64 <image>:<tag>.
Another possible reason for this could be if the file is saved with Windows line endings (CRLF). Save it with Unix line endings (LF) and the file will be found.
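A quick way to check and fix that (assuming GNU sed is available; dos2unix works as well):
file start.sh               # reports "with CRLF line terminators" if affected
sed -i 's/\r$//' start.sh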
Extending the accepted answer:
For an alpine (without bash) image:
#!/bin/ash
at the top of the sh file, solves the problem.
I'm currently rocking an M1 Mac, and I was also running into this issue a little earlier. I actually realized a Fargate task that I had deployed as part of a stack had not been running for over a month, because I had deployed it from my M1 Mac laptop. The deploy script had worked fine on my old Intel-based Mac.
The error I was seeing in the CloudWatch logs (again, the task had been failing for over a month) was exactly as follows:
standard_init_linux.go:228: exec user process caused: exec format error
To fix it, in all the docker build steps (I had one for building a Lambda layer and another for building the Fargate task) I just added --platform=linux/amd64. Please note, it did not work for me if I added the tag after the -t argument, for example img-test:0.1-amd64. I wonder if that could be because I was referencing the :latest tag later in my script.
In any case, these were the changes needed. You'll notice all I really did was add the --platform argument; everything else stayed the same.
ecr_repository="some/app"
docker build -t ${ecr_repository} \
--platform=linux/amd64 \
--build-arg SOME_ARG='value' \
.
And I'm not sure if it's technically required, but just to be on the safe side, I also updated all the FROM statements in my Dockerfiles:
FROM --platform=linux/amd64 ubuntu:<version>
For those trying to build images for aarch64 or armv7l architectures on an amd64 Linux system and running into the same error: check whether the qemu-user-static package is installed. If not, install it with sudo apt install qemu-user-static on Ubuntu/Debian/Mint etc., or with sudo dnf install qemu-user-static on Fedora.
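Once installed, a quick sanity check that foreign-architecture binaries can run (image and platform here are just examples):
docker run --rm --platform linux/arm64 alpine uname -m   # should print aarch64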
None of this worked for me. Why is no one mentioning the BOM?
This error occurs if you have a Byte Order Mark at the start of your file.
You can check with:
head -n 1 yourscript | LC_ALL=C od -tc
In Notepad++ you can save text in UTF-8 without the BOM by selecting the appropriate option in the Encoding menu:
Encoding > UTF-8
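If the BOM is there, one way to strip it in place (assuming GNU sed):
sed -i '1s/^\xEF\xBB\xBF//' yourscript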
I faced the same issue on RHEL 7.3 with Docker 17.05-ce when running an offline-loaded image. It appears the default storage driver of RHEL/CentOS changed from devicemapper to overlay. Reverting the driver to devicemapper fixed the problem.
dockerd --storage-driver=devicemapper
or
/etc/docker/daemon.json
{
"storage-driver": "devicemapper"
}
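After editing daemon.json, restart the daemon so the setting takes effect (assuming systemd):
sudo systemctl restart docker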
One more possibility is that #!/bin/bash is not on the very first line. There must be absolutely nothing before it (no empty lines, nothing).
Not a direct answer to the question asked, but I got this error while calling docker-compose up to bring my Node.js application up. I realized that in my Dockerfile I had CMD ["./server.js"].
To fix it, I replaced that with CMD ["npm", "start"], which solved the issue. I hope someone who lands here for this exception finds this useful.
standard_init_linux.go:228: exec user process caused: exec format error
For me, it was because there was no main package in my Go project. Hope it helps someone.
If you have buildx installed and still run into this error, it might be because cross-platform emulators are missing.
These can be installed like this:
docker run --privileged --rm tonistiigi/binfmt --install all
Source: https://docs.docker.com/build/building/multi-platform/#building-multi-platform-images
This happened to me when trying to build a linux/arm64 image via a GitLab pipeline running on a GitLab runner with platform linux/amd64.
In my case, I "drained" my ECS instances and "activated" them again, and the error vanished.
If you are using an IBR1700 router, which runs containers, you may get a similar error in the router command line after running container logs test (where test is the name of the container).
To fix this you need to build the application so it runs on the router's platform, which is linux/arm/v7.
docker run -it --rm --privileged docker/binfmt:a7996909642ee92942dcd6cff44b9b95f08dad64
docker buildx create --name mybuilder
docker buildx use mybuilder
docker buildx build --platform linux/arm/v7 --no-cache -t <username/repository>:<tag> . --push
Pushing to the repository with this build means it can run on the router.
https://github.com/cradlepoint/container-samples
For me, my ECS cluster was arm64 architecture, but my Docker image was built as amd64. I rebuilt my Docker image: https://docs.docker.com/desktop/multi-arch/
I had a similar problem, standard_init_linux.go:228: exec user process caused: exec format error, but nothing from the answers helped. Eventually I found the culprit was an old Docker version, 17.09.0-ce, which is also the default on Circle CI; after changing it to the most recent one, the problem was solved.
I am trying to create a Singularity image and recipe that creates an Anaconda environment and then activates that environment, so I can build the Python wheel of a project inside it and have it 100% installed and functional after the singularity build completes.
Bootstrap: docker
From: nvidia/cuda:9.0-cudnn7-devel-ubuntu16.04
%environment
# use bash as default shell
SHELL=/bin/bash
# add CUDA paths
CPATH="/usr/local/cuda/include:$CPATH"
PATH="/usr/local/cuda/bin:$PATH"
LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
CUDA_HOME="/usr/local/cuda"
# add Anaconda path
PATH="/usr/local/anaconda3/bin:$PATH"
export PATH LD_LIBRARY_PATH CPATH CUDA_HOME
export MKL_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1
%setup
# runs on host
# the path to the image is $SINGULARITY_ROOTFS
%post
# post-setup script
# load environment variables
. /environment
# use bash as default shell
echo "\n #Using bash as default shell \n" >> /environment
echo 'SHELL=/bin/bash' >> /environment
# make environment file executable
chmod +x /environment
# default mount paths
mkdir /scratch /data
#Add CUDA paths
echo "\n #Cuda paths \n" >> /environment
echo 'export CPATH="/usr/local/cuda/include:$CPATH"' >> /environment
echo 'export PATH="/usr/local/cuda/bin:$PATH"' >> /environment
echo 'export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"' >> /environment
echo 'export CUDA_HOME="/usr/local/cuda"' >> /environment
# updating and getting required packages
apt-get update
apt-get install -y wget git vim build-essential cmake
# creates a build directory
mkdir build
cd build
# download and install Anaconda
CONDA_INSTALL_PATH="/usr/local/anaconda3"
wget https://repo.continuum.io/archive/Anaconda3-5.0.1-Linux-x86_64.sh
chmod +x Anaconda3-5.0.1-Linux-x86_64.sh
./Anaconda3-5.0.1-Linux-x86_64.sh -b -p $CONDA_INSTALL_PATH
# download and install CaImAn
git clone https://github.com/flatironinstitute/CaImAn.git
cd CaImAn
conda env create -n caiman -f environment.yml
source activate caiman
pip install .
caimanmanager.py install
source deactivate
%runscript
# executes with the singularity run command
# delete this section to use existing docker ENTRYPOINT command
%test
# test that script is a success
I've tried both conda activate and source activate and get the same error for both.
+ source activate caiman
/bin/sh: 41: source: not found
ABORT: Aborting with RETVAL=255
Cleaning up...
Is this just something I have to do afterwards by making the image writable?
That would be the fallback solution, but it would be nice if the recipe just worked.
*Edit 1
. activate caiman returns:
+ . activate caiman
+ [[ -n ]]
/bin/sh: 4: /usr/local/anaconda3/bin/activate: [[: not found
+ [[ -n ]]
/bin/sh: 7: /usr/local/anaconda3/bin/activate: [[: not found
+ echo Only bash and zsh are supported
Only bash and zsh are supported
+ return 1
ABORT: Aborting with RETVAL=255
Cleaning up...
*Edit 2
By using a newer version of Anaconda, the "not found" error goes away. All I did was change the Anaconda distribution I fetched with wget, and I also forced an update just to be doubly sure.
# download and install Anaconda
CONDA_INSTALL_PATH="/usr/local/anaconda3"
wget https://repo.continuum.io/archive/Anaconda3-5.3.1-Linux-x86_64.sh
chmod +x Anaconda3-5.3.1-Linux-x86_64.sh
./Anaconda3-5.3.1-Linux-x86_64.sh -b -p $CONDA_INSTALL_PATH
conda update -n base -c defaults conda
pip install --upgrade pip
If I am not wrong (which is totally possible), the same happens with virtualenv.
The problem is that source is a bash built-in, not a POSIX command, and the %post section runs under /bin/sh. Try:
. activate caiman
instead of
source activate caiman
Editing after the updated question: check https://github.com/conda/conda/issues/6639 ; you might want to investigate what your activate script is doing (it seems to be looking for files that don't exist).
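Another workaround sketch, not from the original answer: since %post runs under /bin/sh, run the conda steps under bash explicitly (executed from the CaImAn checkout; the activate path matches the recipe above):
bash -c 'source /usr/local/anaconda3/bin/activate caiman && pip install . && caimanmanager.py install'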
I need to force a virtualenv to use a Python compiled from source on my CI server (long story short: Travis CI supports Python 2.7.3, Heroku runs 2.7.6, and we insist on testing in the same environment as production). But I can't get virtualenv to run against it.
Travis first runs this script:
if [ ! -d ./compiled ]; then
echo "creating compiled folder"
mkdir compiled
else
echo "compiled exists"
fi
cd compiled
if [ ! -e Python-2.7.6.tar.xz ]; then
echo "Downloading python and compiling"
wget http://www.python.org/ftp/python/2.7.6/Python-2.7.6.tar.xz
tar xf Python-2.7.6.tar.xz
cd Python-2.7.6
./configure
make
chmod +x ./python
else
echo "Compiled python exists!"
fi
and then:
- virtualenv -p ./python ./compiled/python276
- source ./compiled/python276/bin/activate
but then python --version shows 2.7.3 instead of 2.7.6.
I guess I'm missing something. Thanks for the help!
Go to the virtualenv folder and open the bin/ folder:
~/.Virtualenv/my_project/bin
Remove the python file, and create a symbolic link to the Python executable that you want to use:
cd ~/.Virtualenv/my_project/bin
mv python python-bkp
ln -s /usr/bin/python .
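Also worth noting: in the original Travis script, -p ./python is resolved relative to the current directory, which is likely not the Python-2.7.6 build directory. A sketch using an absolute path to the freshly compiled interpreter (paths assume the layout the script creates):
virtualenv -p "$PWD/compiled/Python-2.7.6/python" ./compiled/python276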
I would like to install a nodejs script (lessc) into a virtualenv.
How can I do that ?
Thanks
Natim
I like shorrty's answer; he recommended using nodeenv, see:
is there an virtual environment for node.js?
I followed this guide:
http://calvinx.com/2013/07/11/python-virtualenv-with-node-environment-via-nodeenv/
All I had to do myself was:
. ../bin/activate # switch to my Python virtualenv first
pip install nodeenv # then install nodeenv (nodeenv==0.7.1 was installed)
nodeenv --python-virtualenv # Use current python virtualenv
npm install -g less # install lessc in the virtualenv
Here is what I have used so far, though I think it could be optimized.
Install nodejs
wget http://nodejs.org/dist/v0.6.8/node-v0.6.8.tar.gz
tar zxf node-v0.6.8.tar.gz
cd node-v0.6.8/
./configure --prefix=/absolute/path/to/the/virtualenv/
make
make install
Install npm (Node Package Manager)
. /absolute/path/to/the/virtualenv/bin/activate
curl https://npmjs.org/install.sh | sh
Install lesscss
npm install less -g
When you activate your virtualenv you can use lessc
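A quick way to confirm it resolves to the virtualenv's copy:
which lessc    # should print /absolute/path/to/the/virtualenv/bin/lessc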
I created a bash script to automate Natim's solution.
Makes sure your Python virtualenv is active and just run the script. NodeJS, NPM and lessc will be downloaded and installed into your virtualenv.
http://pastebin.com/wKLWgatq
#!/bin/sh
#
# This script will download NodeJS, NPM and lessc, and install them into your Python
# virtualenv.
#
# Based on a post by Natim:
# http://stackoverflow.com/questions/8986709/how-to-install-lessc-and-nodejs-in-a-python-virtualenv
NODEJS="http://nodejs.org/dist/v0.8.3/node-v0.8.3.tar.gz"
# Check dependencies
# (error handlers use { ...; } groups, not ( ... ) subshells, so that exit actually aborts the script)
for dep in gcc wget curl tar make; do
    which $dep > /dev/null || { echo "ERROR: $dep not found"; exit 10; }
done
# Must be run from virtual env
if [ "$VIRTUAL_ENV" = "" ]; then
    echo "ERROR: you must activate the virtualenv first!"
    exit 1
fi
echo "1) Installing nodejs in current virtual env"
echo
cd "$VIRTUAL_ENV"
# Create temp dir
if [ ! -d "tmp" ]; then
    mkdir tmp
fi
cd tmp || { echo "ERROR: entering tmp directory failed"; exit 4; }
echo -n "- Entered temp dir: "
pwd
# Download
fname=`basename "$NODEJS"`
if [ -f "$fname" ]; then
    echo "- $fname already exists, not downloading"
else
    echo "- Downloading $NODEJS"
    wget "$NODEJS" || { echo "ERROR: download failed"; exit 2; }
fi
echo "- Extracting"
tar -xvzf "$fname" || { echo "ERROR: tar failed"; exit 3; }
cd `basename "$fname" .tar.gz` || { echo "ERROR: entering source directory failed"; exit 4; }
echo "- Configure"
./configure --prefix="$VIRTUAL_ENV" || { echo "ERROR: configure failed"; exit 5; }
echo "- Make"
make || { echo "ERROR: build failed"; exit 6; }
echo "- Install"
make install || { echo "ERROR: install failed"; exit 7; }
echo
echo "2) Installing npm"
echo
curl https://npmjs.org/install.sh | sh || { echo "ERROR: install failed"; exit 7; }
echo
echo "3) Installing lessc with npm"
echo
npm install less -g || { echo "ERROR: lessc install failed"; exit 8; }
echo "Congratulations! lessc is now installed in your virtualenv"
I will provide my generic solution for working with Gems and NPM packages inside a virtualenv.
Both gem and npm can be redirected via environment settings: GEM_HOME and npm_config_prefix.
You can stick the snippet below in your postactivate or activate script (which one matters depends on whether you use virtualenvwrapper or not):
export GEM_HOME="$VIRTUAL_ENV/lib/gems"
export GEM_PATH=""
PATH="$GEM_HOME/bin:$PATH"
export npm_config_prefix=$VIRTUAL_ENV
export PATH
Now, inside your virtualenv, all libraries installed via gem install or npm install -g will be installed in your virtualenv, with their binaries added to your PATH.
If you are using virtualenvwrapper, you can make the change global to all your virtualenvs by modifying the postactivate script in your $VIRTUALENVWRAPPER_HOOK_DIR.
This solution doesn't cover installing nodejs inside the virtualenv, but I think it's better to delegate that task to the packaging system (apt, yum, brew...) and install node and npm globally.
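For example, after activating a virtualenv with this snippet in place (less is just an example package):
npm install -g less
which lessc    # should print $VIRTUAL_ENV/bin/lessc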
Edit:
I recently created two plugins for virtualenvwrapper that do this automatically, one for gem and one for npm:
http://pypi.python.org/pypi/virtualenvwrapper.npm
http://pypi.python.org/pypi/virtualenvwrapper.gem