I am trying to run a Docker base image but am encountering the error /bin/sh: 1: python: not found. I first build a parent image and then modify it using the bash script below:
#!/usr/bin/env bash
docker build -t <image_name>:latest .
docker run <image_name>:latest
docker push <image_name>:latest
and this Dockerfile:
FROM ubuntu:18.04
# Installing Python
RUN apt-get update \
&& apt-get install -y python3-pip python3-dev \
&& cd /usr/local/bin \
&& ln -s /usr/bin/python3 python \
&& pip3 install Pillow boto3
WORKDIR /app
After that, I run the following script to create and run the base image:
#!/usr/bin/env bash
docker build -t <base_image_name>:latest .
docker run -it <base_image_name>:latest
with the following Dockerfile:
FROM <image_name>:latest
COPY app.py /app
# Run app.py when the container launches
CMD python /app/app.py
I have also tried installing python through the Dockerfile of the base image, but I still get the same error.
IMHO a better solution would be to use one of the official python images.
FROM python:3.9-slim
RUN pip install --no-cache-dir Pillow boto3
WORKDIR /app
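To complete the image, you would append the application files after the snippet above (a sketch; app.py is taken from the question):

```dockerfile
# app.py is the script name from the question
COPY app.py .
# python:3.9-slim already provides `python` on PATH, so no symlink is needed
CMD ["python", "app.py"]
```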
To fix the issue of python not being found: instead of
cd /usr/local/bin \
&& ln -s /usr/bin/python3 python
the OP should symlink to /usr/bin/python rather than /usr/local/bin/python as in the original post. Another way to do this is with an absolute symlink, as below.
ln -s /usr/bin/python3 /usr/bin/python
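Applied to the parent image, the install step would become (a sketch of the fix, not the OP's exact file):

```dockerfile
FROM ubuntu:18.04
RUN apt-get update \
 && apt-get install -y python3-pip python3-dev \
 # absolute symlink so `python` resolves regardless of the current directory
 && ln -s /usr/bin/python3 /usr/bin/python \
 && pip3 install Pillow boto3
WORKDIR /app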
Related
I'm building Python 3.7.4 (It's a hard requirement for other software) on a base Ubuntu 20.04 image using a Dockerfile. I'm following this guide.
Everything works fine if I run the image and follow the guide, but I want to setup my virtual environment in the Dockerfile and have the pip requirements persistent when running the image.
Here's the relevant part of my Dockerfile:
...
RUN echo =============== Building and Install Python =============== \
&& cd /tmp \
&& wget https://www.python.org/ftp/python/3.7.4/Python-3.7.4.tgz \
&& tar xvf ./Python-3.7.4.tgz \
&& cd Python-3.7.4 \
&& ./configure --enable-optimizations --with-ensurepip=install \
&& make -j 8 \
&& sudo make install
ENV VIRTUAL_ENV=/opt/python-3.7.4
ENV PATH="$VIRTUAL_ENV:$PATH"
COPY "./hourequirements.txt" /usr/local/
RUN echo =============== Setting up Python Virtual Environment =============== \
&& python3 -m venv $VIRTUAL_ENV \
&& source $VIRTUAL_ENV/bin/activate \
&& pip install --upgrade pip \
&& pip install --no-input -r /usr/local/hourequirements.txt
...
The Dockerfile builds without errors, but when I run the image the environment doesn't exist and python 3.7.4 doesn't show any of the installed requirements.
How can I install Python modules in the virtual environment using PIP in the Dockerfile and have them persist when the docker image runs?
As usual, I found the answer just after posting.
I changed:
ENV PATH="$VIRTUAL_ENV:$PATH"
to:
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
in the Dockerfile, and it started working correctly.
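The reason this matters: a venv's python and pip executables live in its bin/ subdirectory, and each RUN line runs in a fresh shell, so `source activate` in one RUN does not carry over to the next. Putting bin/ on PATH fixes both (a sketch using the paths from the question):

```dockerfile
ENV VIRTUAL_ENV=/opt/python-3.7.4
# the venv's python/pip live under bin/, not at the venv root
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# with bin/ on PATH, later RUN steps resolve pip to the venv's pip;
# no `source activate` is needed
RUN pip install --no-input -r /usr/local/hourequirements.txt
```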
I have a python script that uses DigitalOcean tools (doctl and kubectl) that I want to containerize. This means my container will need python, doctl, and kubectl installed. The trouble is, I can't figure out how to install both Python and the DigitalOcean tools in the Dockerfile.
I can install Python using the base image "python:3", and I can also install the DigitalOcean tools using the base image "alpine/doctl". However, the rule is that you can only use one base image in a Dockerfile.
So I can include the python base image and install the DigitalOcean tools another way:
FROM python:3
RUN <somehow install doctl and kubectl>
RUN pip install firebase-admin
COPY script.py
CMD ["python", "script.py"]
Or I can include the alpine/doctl base image and install python3 another way.
FROM alpine/doctl
RUN <somehow install python>
RUN pip install firebase-admin
COPY script.py
CMD ["python", "script.py"]
Unfortunately, I'm not sure how I would do this. Any help in how I can get all these tools installed would be great!
Just add this, along with anything else you want to apt-get install:
RUN apt-get update && apt-get install -y \
    python3.6 \
    python3-pip
In Alpine it should be something like:
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python && \
    python3 -m ensurepip && \
    pip3 install --no-cache --upgrade pip setuptools
This Dockerfile worked for me:
FROM alpine/doctl
ENV PYTHONUNBUFFERED=1
RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python
RUN python3 -m ensurepip
RUN pip3 install --no-cache --upgrade pip setuptools
This answer comes from https://stackoverflow.com/a/62555259/7479816 (I don't have enough street cred to comment).
You can try a multi-stage build, as shown below.
Also check your COPY statement: you need to give the destination you want script.py copied to as the second parameter; "." will copy it to the current (root) directory.
FROM alpine/doctl AS doctl
FROM python:3.6-slim-buster
ENV PYTHONUNBUFFERED 1
# carry the doctl binary over from the first stage
# (the /app/doctl source path is an assumption; verify it inside the alpine/doctl image)
COPY --from=doctl /app/doctl /usr/local/bin/doctl
RUN pip install firebase-admin
COPY script.py .
CMD ["python", "script.py"]
I have an app, ABC, which I want to run in a Docker environment. I built a Dockerfile and got the image abcd1234, which I used in my docker-compose.yml.
But when building with docker-compose, all the requirements.txt packages get reinstalled. Can it not use the already existing image and save the time spent reinstalling them?
I'm new to Docker and trying to understand all the parameters. Also, is the 'context' correct in docker-compose.yml, or should it contain a path inside the image?
PS: my docker-compose.yml is not in the same directory as the project because I'll be using multiple images to expose more ports.
docker-compose.yml:
services:
app:
build:
context: /Users/user/Desktop/ABC/
ports:
- "8000:8000"
image: abcd1234
command: >
sh -c "python manage.py migrate &&
python manage.py runserver 0.0.0.0:8000"
environment:
- PROJECT_ENV=development
Dockerfile:
FROM python:3.6-slim-buster AS build
MAINTAINER ABC
ENV PYTHONUNBUFFERED 1
RUN python3 -m venv /venv
RUN apt-get update && \
apt-get upgrade -y && \
apt-get install -y git && \
apt-get install -y build-essential && \
apt-get install -y awscli && \
apt-get install -y unzip && \
apt-get install -y nano
RUN apt-get install -y libsm6 libxext6 libxrender-dev
COPY . /ABC/
RUN apt-cache search mysql-server
RUN apt-cache search libmysqlclient-dev
RUN apt-get install -y libpq-dev
RUN apt-get install -y postgresql
RUN apt-cache search postgresql-server-dev-9.5
RUN pip install --upgrade awscli==1.14.5 s3cmd==2.0.1 python-magic
RUN pip install -r /ABC/requirements.txt
WORKDIR .
Please guide me on how to tackle these 2 scenarios. Thanks!
The context: directory is the directory on your host system that includes the Dockerfile. It's the same directory you would pass to docker build, and it frequently is just the current directory ..
Within the Dockerfile, Docker can cache individual build steps so that it doesn't repeat them, but only until it reaches the point where something has changed. That "something" can be a changed RUN line, but at the point of your COPY, if any file at all changes in your local source tree that also invalidates the cache for everything after it.
For this reason, a typical Dockerfile has a couple of "phases"; you can repeat this pattern in other languages too. You can restructure your Dockerfile in this order:
# 1. Base information; this almost never changes
FROM python:3.6-slim-buster AS build
MAINTAINER ABC
ENV PYTHONUNBUFFERED 1
WORKDIR /ABC
# 2. Install OS packages. Doesn't depend on your source tree.
# Frequently just one RUN line (but could be more if you need
# packages that aren't in the default OS package repository).
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get upgrade -y && \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
build-essential unzip libxrender-dev libpq-dev
# 3. Copy _only_ the file that declares language-level dependencies.
# Repeat starting from here only if this file changes.
COPY requirements.txt .
RUN pip install -r requirements.txt
# 4. Copy the rest of the application in. In a compiled language
# (Javascript/Webpack, Typescript, Java, Go, ...) build it.
COPY . .
# 5. Explain how to run the application.
EXPOSE 8000
CMD python manage.py migrate && \
python manage.py runserver 0.0.0.0:8000
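With this ordering, a change to application source invalidates the cache only from step 4 onward; an illustrative rebuild sequence (requires a Docker daemon, so not runnable as-is):

```shell
docker build -t abc .    # first build: every step runs
touch manage.py          # change only application source
docker build -t abc .    # apt-get and pip install layers are served from cache
```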
I have a Python script that should output a CSV file. I'm trying to have the file appear in the current working directory, but without success.
This is my Dockerfile
FROM python:3.6.4
RUN apt-get update && apt-get install -y libaio1 wget unzip
WORKDIR /opt/oracle
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
    unzip instantclient-basiclite-linuxx64.zip && \
    rm -f instantclient-basiclite-linuxx64.zip && \
    cd /opt/oracle/instantclient* && \
    rm -f *jdbc* *occi* *mysql* *README *jar uidrvci genezi adrci && \
    echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && \
    ldconfig
RUN pip install --upgrade pip
COPY . /app
WORKDIR /app
RUN pip install --upgrade pip
RUN pip install pystan
RUN apt-get -y update && python3 -m pip install cx_Oracle --upgrade
RUN pip install -r requirements.txt
CMD [ "python", "Main.py" ]
And I run the container with the following command:
docker container run -v $pwd:/home/learn/rstudio_script/output image
It's bad practice to bind-mount a volume just to get one file from your container onto your host.
Instead, you should leverage the docker cp command:
docker cp <containerId>:/file/path/within/container /host/path/target
You can have this command run automatically from a bash script, right after your docker run.
So something like:
#!/bin/bash
# this stores the container id
CONTAINER_ID=$(docker run -dit img)
docker cp $CONTAINER_ID:/some_path host_path
If you are adamant about using a bind volume, then as the others have pointed out, the issue is most likely that your Python script isn't writing the CSV to the correct path.
Your script Main.py is probably not trying to write to /home/learn/rstudio_script/output. The working directory in the container is /app because of the last WORKDIR directive in the Dockerfile. You can override that at runtime with --workdir but then the CMD would have to be changed as well.
One solution is to have your script write files to /output/ and then run it like this:
docker container run -v $PWD:/output/ image
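For that to line up, the script's output path inside the container must match the mount point. An illustrative invocation (assumes Main.py writes its CSV somewhere under /output; requires Docker, so not runnable as-is):

```shell
# run from the host directory where the CSV should appear;
# Main.py inside the container writes its CSV under /output (assumed)
docker container run --rm -v "$PWD":/output image
```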
I created a Docker image in order to run some Python code on a schedule, so I use the python-crontab module. How can I solve a permission-denied problem?
Ubuntu 16.04.6 LTS
python 3.5.2
I created sche.py, which triggers weather.py.
It works locally, but it fails when packaged into a Docker image.
```
#dockerfile
FROM python:3.5.2
WORKDIR /weather
ENTRYPOINT ["/weather"]
ADD . /weather
RUN chmod u+x sche.py
RUN chmod u+x weather.py
RUN mkdir /usr/bin/crontab
#add due to /usr/bin/crontab not found
RUN pip3 install python-crontab
RUN pip3 install -r requirements.txt
EXPOSE 80
#ENV NAME World
CMD ["sudo"]
#CMD ["python", "sche.py"] ## build step fail
ENTRYPOINT ["python","sche.py"]
## can build same as "RUN ["python","sche.py"] "
```
I expect it to run inside the Docker image rather than having to run each Python file by itself.
Try adding USER root after the FROM python:3.5.2 line.
Remove CMD ["sudo"] and ENTRYPOINT ["/weather"].
Updated
Replace RUN mkdir /usr/bin/crontab with:
RUN apt-get update \
&& apt-get install -y cron \
&& apt-get autoremove -y
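Putting those changes together, a revised Dockerfile might look like this (a sketch based on the question's files; note cron still needs to be started, e.g. by sche.py itself):

```dockerfile
FROM python:3.5.2
USER root
WORKDIR /weather
ADD . /weather
# install the cron binary instead of mkdir-ing a fake /usr/bin/crontab
RUN apt-get update \
 && apt-get install -y cron \
 && apt-get autoremove -y
RUN pip3 install python-crontab \
 && pip3 install -r requirements.txt
CMD ["python", "sche.py"]
```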