Running a simple Python Container on Ubuntu - python

So I have a directory that contains just a Dockerfile. I want to experiment with the poetry package in Python, so I don't want any Python files inside the directory initially; I want to create a poetry project from scratch inside it. I went ahead and built an image called local:python3.10ubuntu from it. When I ran a container with the command docker run --name py3.10ubuntu local:python3.10ubuntu, I saw that the container was not running. Why is that, and how can I get it to run? When I check the docker logs for the container, they show nothing either. Besides starting the container, how can I run a shell within the container and run Python files?
Directory Structure:
.
└── Dockerfile
Dockerfile contents
FROM python:3.10-slim
RUN pip install --no-cache-dir poetry

The image's default command is the interactive Python interpreter. When you docker run it without a terminal attached, python hits end-of-file on stdin and exits immediately, which is also why the logs are empty. Run it interactively instead:
docker run -ti <your_image>
or if you want more than just python:
docker run -ti --entrypoint /bin/bash <your_image>
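For the original goal of bootstrapping a poetry project from scratch, a session inside the container might look like this (myproject is a hypothetical project name):
docker run -ti --entrypoint /bin/bash local:python3.10ubuntu
# inside the container:
poetry new myproject
cd myproject
poetry run python -c "print('hello')"
Note that anything created this way lives only in that container's filesystem; bind-mount your host directory (docker run -v "$PWD":/work -w /work ...) if you want the project to persist on the host.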

Your Dockerfile is not configured properly; refer to the website below on how to create a Dockerfile for poetry.
Dockerfile for poetry
Also, a shell script can be run from the Dockerfile if your code has a .sh file. You could use RUN, ENTRYPOINT, CMD, or a combination of these to run the shell script. Refer to the answer below:
Using ENTRYPOINT to run shell script in dockerfile
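As a rough sketch of the ENTRYPOINT approach, assuming a hypothetical start.sh next to the Dockerfile (for example, a script that runs poetry install and then starts the app):
FROM python:3.10-slim
RUN pip install --no-cache-dir poetry
# start.sh is a hypothetical helper script you provide
COPY start.sh /start.sh
RUN chmod +x /start.sh
ENTRYPOINT ["/start.sh"]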
To run a docker container you can use:
docker run -it <docker-image>
Use the exec command to run something inside an already-running container:
docker exec -it <container-name-or-id> bash

Related

How to install a Python package inside a docker image?

Is there a way to install a python package without rebuilding the docker image? I have tried it this way:
docker compose run --rm web sh -c "pip install requests"
but if I list the packages using
docker-compose run --rm web sh -c "pip freeze"
I don't get the new one.
It looks like it is installed in the container but not in the image.
My question is what is the best way to install a new python package after building the docker image?
Thanks in advance
docker-compose is used to run multi-container applications with Docker.
It seems that in your case you use a Docker image with python installed as the entrypoint to do some further work.
After building docker image you can run it:
$ docker run -dit --name my_container_name image_name
And then run:
$ docker exec -ti my_container_name bash
or
$ docker exec -ti my_container_name sh
in case there is no bash in the docker image.
This will give you shell access to the container you just created. Then, if pip is installed inside the container, you can install whatever Python package you need, just as you would on your OS.
Take note that everything you install is only persisted inside the container you created. If you delete this container, all the things you installed manually will be gone.
I don't know too much about Docker, but when you execute your command, the Docker engine spins up a new container based on your web image and runs pip install requests in it. After executing the command, the container has nothing more to do and stops. Since you specified the --rm flag, the Docker engine removes the container once it has stopped, so the container, and with it the installed packages, are removed.
AFAIK you cannot add packages without rebuilding the image.
I know that you can run the command without removing the container, and that you can also make images from your containers (those images should then include the packages).
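A sketch of that container-to-image route, assuming the web service from the question (my-image:with-requests is a hypothetical tag):
docker compose run web sh -c "pip install requests"    # no --rm, so the stopped container survives
docker ps -a                                           # find the stopped container's ID
docker commit <container-id> my-image:with-requests    # bake the installed package into a new image
Rebuilding the image with the package added to requirements.txt is still the cleaner, reproducible approach.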

Why are there differences in container contents depending on whether I `docker run ...` or `docker-compose run ...`?

I'm experiencing differences with the contents of a container depending on whether I open a bash shell via docker run -i -t <container> bash or docker-compose run <container> bash and I don't know/understand how this is possible.
To aid in the explanation, please see this screenshot from my terminal. In both instances, I am running the image called blaze, which has been built from the Dockerfile in my code. One of the steps during the build is to create a virtualenv called venv; however, when I open a bash shell via docker-compose this virtualenv doesn't seem to exist, unlike when I run docker run ....
I am relatively new to setting up my own builds with Docker, but surely if they are both referencing the same image, the output of ls within a bash shell should be the same? I would greatly appreciate any help or guidance to resources that would explain what exactly is going wrong here...
As an additional point, running docker images shows that both commands must be using the same image...
Thanks in advance!
This is my Dockerfile:
FROM blaze-base-image:latest
# add an URL that PIP automatically searches (e.g., Azure Artifact Store URL)
ARG INDEX_URL
ENV PIP_EXTRA_INDEX_URL=$INDEX_URL
# Copy source code to docker image
RUN mkdir /opt/app
COPY . /opt/app
RUN ls /opt/app
# Install Blaze pip dependencies
WORKDIR /opt/app
RUN python3.7 -m venv /opt/app/venv
RUN /opt/app/venv/bin/python -m pip install --upgrade pip
RUN /opt/app/venv/bin/python -m pip install keyring artifacts-keyring
RUN touch /opt/app/venv/pip.conf
RUN echo $'[global]\nextra-index-url=https://www.index.com' > /opt/app/venv/pip.conf
RUN /opt/app/venv/bin/python -m pip install -r /opt/app/requirements.txt
RUN /opt/app/venv/bin/python -m spacy download en_core_web_sm
# Comment
CMD ["echo", "Container build complete"]
And this is my docker-compose.yml:
version: '3'
services:
  blaze:
    build: .
    image: blaze
    volumes:
      - .:/opt/app
There are two intersecting things going on here:
When you have a Compose volumes: or docker run -v option mounting host content over a container directory, the host content completely replaces what's in the image. If you don't have a ./venv directory on the host, then there won't be a /opt/app/venv directory in the container. That's why, when you docker-compose run blaze ..., the virtual environment is missing.
If you docker run a container, the only options that are considered are those in that specific docker run command. docker run doesn't know about the docker-compose.yml file and won't take options from there. That means there isn't this volume mount in the docker run case, which is why the virtual environment reappears.
Typically in Docker you don't need a virtual environment at all: the Docker image is isolated from other images and Python installations, and so it's safe and normal to install your application into the "system" Python. You also typically want your image to be self-contained and not depend on content from the host, so you wouldn't generally need the bind mount you show.
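You can see the difference directly; a quick check, assuming the image and service names from the question:
docker-compose run --rm blaze ls /opt/app    # bind mount in place: shows the host directory, so no venv
docker run --rm blaze ls /opt/app            # no mount: shows the image's own filesystem, including venv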
That would simplify your Dockerfile to:
FROM blaze-base-image:latest
# Any ARG will automatically appear as an environment variable to
# RUN directives; this won't be needed at run time
ARG PIP_EXTRA_INDEX_URL
# Creates the directory if it doesn't exist
WORKDIR /opt/app
# Install the Python-level dependencies
RUN pip install --upgrade pip
COPY requirements.txt .
RUN pip install -r requirements.txt
# The requirements.txt file should list every required package
# Install the rest of the application
COPY . .
# Set the main container command to run the application
CMD ["./app.py"]
The docker-compose.yml file can be similarly simplified to
version: '3.8' # '3' means '3.0'
services:
  blaze:
    build: .
    # Compose picks its own image name
    # Do not need volumes:, the image is self-contained
and then it will work consistently with either docker run or docker-compose run (or docker-compose up).

How to run a python program using Singularity from a docker container?

I have created a docker container for my pure python program and have set python main.py to be executed when the container is run. Running the container works as expected on my local machine. However, I want to run the container on my institution's high-performance cluster. The cluster machines use Singularity, which I am using to pull my docker image hosted on Dockerhub (the repo is darshank11/ga_paci_final). However, when I try to run the Singularity container, I get the following error: python3: can't open file 'main.py': [Errno 2] No such file or directory.
I've tried to change the base image in the Dockerfile, for example from FROM python:latest to FROM ubuntu:latest. I've made sure the docker container worked on my local machine, and then got one of my co-workers to pull the container from Dockerhub and run it too. Everything works fine until I get to Singularity.
Here is my Dockerfile:
FROM ubuntu:16.04
RUN apt-get update -y && \
    apt-get install -y python3-pip python3-dev
RUN mkdir src
WORKDIR /src
COPY . /src
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
CMD ["python3", "-u", "main.py"]
You're getting that error because the execution context is not what you're expecting. The run path in Singularity is the current directory on the host OS (e.g., ~/ga_paci_final), which has been mounted into the Singularity image.
As mentioned in the comments, one solution is to give the full path to the python file in the Docker CMD statement. Another option is to modify the %runscript block of the Singularity definition file to something like:
%runscript
cd /src
python3 -u main.py
That way you ensure the run environment is identical between Docker and Singularity.
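The first option, giving the full path in CMD, is a one-line change to the Dockerfile shown above:
CMD ["python3", "-u", "/src/main.py"]
With an absolute path, the command no longer depends on whichever directory Singularity happens to start in.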

How to setup python environment in docker in my case?

I am trying to set up my python environment in Docker.
My docker image is like this:
FROM python:2.7
# updating repository
RUN apt-get update
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt requirements.txt
RUN pip install --no-cache -r requirements.txt
EXPOSE 8888
COPY . .
CMD ["python", "test.py"]
with this build command:
docker build -t ml-python-2.7 .
After the image is built,
I ran
docker run -it --rm --name ml-container ml-python-2.7 python test.py
My sample test.py
print('test here')
It works when I first run this command:
docker run -it --rm --name ml-container ml-python-2.7 python test.py
but after I change test.py to print('second test')
and run the above command again, it still outputs test here.
How do I make sure it picks up the changes automatically, or is there a more elegant way to do this?
Thanks!
Docker does not store the changes you make to files inside the container unless you commit it. If you want to keep them, you need to do a docker commit like:
docker commit <CONTAINER NAME HERE>
Or you could mount a local folder from the host into the container like this:
docker run -ti -v ~/folder_in_host:/var/log/folder_in_container <IMAGE NAME HERE>
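Applied to your setup, mounting the current directory over the image's WORKDIR makes every run see the test.py you just edited:
docker run -it --rm -v $(pwd):/usr/src/app --name ml-container ml-python-2.7 python test.py
Alternatively, rebuild the image (docker build -t ml-python-2.7 .) after each change so the COPY . . step picks up the new file.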

paho MQTT is not responding with docker container

I am new to Docker and making my first application; I would be very thankful if someone points me in the right direction.
I built the image, and when I run it I get no response from the docker run command; it just keeps loading. Below is the python script:
When I interrupt it (Ctrl+C) through the keyboard it immediately shows the output (the print statements); otherwise it does not perform anything.
The Dockerfile is:
FROM python:2.7-slim
WORKDIR /root/
ADD . /root
RUN pip install numpy
COPY app.py app.py
ENTRYPOINT []
CMD ["python", "app.py"]
Docker run command:
docker run ImageName
Please help!
This is probably because Python buffers stdout by default. Edit your Dockerfile to add the -u flag to the python command line:
CMD ["python", "-u", "app.py"]
This was the solution to my problem: the docker run command takes the -it flags.
sudo docker run -it imageName
