In my local machine, I am installing an unofficial python package (not released) from a git repo using the following command
python -m pip install "mod1 # git+https://github.com/....git"
How can I install the library inside a docker file?
If the docker container is running, and you are not running interactively inside the container, you can use the exec command:
docker exec -it <container_id_or_name> python -m pip install "mod1 @ git+https://github.com/....git"
If you are inside a container, you should be able to use the command as is, assuming Python is installed in the container (otherwise you will get a "command 'python' not found" error).
From a dockerfile, depending on your needs, you could use the RUN command.
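For example, a minimal Dockerfile sketch (the base image tag is an assumption, and the repository URL is left elided as in your question):
FROM python:3.9
# install the unreleased package directly from its git repository at build time
RUN python -m pip install "mod1 @ git+https://github.com/....git"
This bakes the package into the image, so every container started from it already has the library installed.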
Is there a way to install a python package without rebuilding the docker image? I have tried in this way:
docker compose run --rm web sh -c "pip install requests"
but if I list the packages using
docker-compose run --rm web sh -c "pip freeze"
I don't get the new one.
It looks like it is installed in the container but not in the image.
My question is what is the best way to install a new python package after building the docker image?
Thanks in advance
docker-compose is used to run multi-container applications with Docker.
It seems that in your case you use a Docker image with Python installed as an entrypoint to do some further work.
After building docker image you can run it:
$ docker run -dit --name my_container_name image_name
And then run:
$ docker exec -ti my_container_name bash
or, in case there is no bash in the docker image:
$ docker exec -ti my_container_name sh
This will give you shell access to the container you just created. Then, if pip is installed inside your container, you can install whatever Python package you need, just like you would on your OS.
Take note that everything you install is only persisted inside the container you created. If you delete this container, all the things you installed manually will be gone.
I don't know too much about Docker, but when you execute your command, the Docker engine spins up a new container based on your web image and runs the pip install requests command in it. After executing the command, the container has nothing more to do and stops. Since you specified the --rm flag, the Docker engine removes your new container after it has stopped, so the whole container, and thus also the installed packages, are removed.
AFAIK you cannot add packages without rebuilding the image.
I know that you can run the command without removing the container, and that you can also make images from your containers (those images should then include the packages), for example:
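A rough sketch of that commit-based approach (the container and image names here are hypothetical):
# start a long-lived container from your image
docker run -dit --name web_tmp web_image
# install the package inside the running container
docker exec -it web_tmp pip install requests
# snapshot the container into a new image that includes the package
docker commit web_tmp web_image:with-requests
Keep in mind that rebuilding the image from an updated requirements.txt is more reproducible than committing containers.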
Below is my Dockerfile:
FROM python:3.9.0
ARG WORK_DIR=/opt/quarter_1
RUN apt-get update && apt-get install cron -y && apt-get install -y default-jre
# Install python libraries
COPY requirements.txt /tmp/requirements.txt
RUN pip install --upgrade pip && pip install -r /tmp/requirements.txt
WORKDIR $WORK_DIR
EXPOSE 8888
VOLUME /home/data/quarter_1/
# Copy etl code
# copy code on container under your workdir "/opt/quarter_1"
COPY . .
I connected to the server, then I did the build with docker build -t my-python-app . When I tried to run the container from the built image, I got nothing and was not able to do it:
docker run -p 8888:8888 -v /home/data/quarter_1/:/opt/quarter_1 image_id
The working directory here is /opt/quarter_1.
Update based on comments
If I understand everything you've posted correctly, my suggestion here is to use a base Docker Jupyter image, modify it to add your pip requirements, and then add your files to the work path. I've tested the following:
Start with a dockerfile like below
FROM jupyter/base-notebook:python-3.9.6
COPY requirements.txt /tmp/requirements.txt
RUN pip install --upgrade pip && pip install -r /tmp/requirements.txt
COPY ./quarter_1 /home/jovyan/quarter_1
The above assumes you are running the build from the folder containing the dockerfile, "requirements.txt", and the "quarter_1" folder with your build files.
Note "home/joyvan" is the default working folder in this image.
Build the image
docker build -t biwia-jupyter:3.9.6 .
Start the container with open port to 8888. e.g.
docker run -p 8888:8888 biwia-jupyter:3.9.6
Connect to the container to access the token. There are a few ways to do this, but for example:
docker exec -it CONTAINER_NAME bash
jupyter notebook list
Copy the token in the URL and connect using your server IP and port. You should be able to paste the token there, and afterwards access the folder you copied into the build, as I did below.
Jupyter screenshot
If you are deploying the image to different hosts, this is probably the best way to do it using COPY/ADD etc. Otherwise, look at using Docker Volumes, which give you access to a folder (for example quarter_1) from the host, so you don't constantly have to rebuild during development.
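For instance, something like the following (the host path is an assumption; adjust it to wherever your quarter_1 folder lives):
# mount the host folder over the copied one so edits show up without a rebuild
docker run -p 8888:8888 -v /path/to/quarter_1:/home/jovyan/quarter_1 biwia-jupyter:3.9.6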
Second edit for Python 3.9.0 request
Using the method above, 3.9.0 is not immediately available from Docker Hub. I doubt you'll have many compatibility issues between 3.9.0 and 3.9.6, but we'll build it anyway. We can clone the dockerfile folder from GitHub, update a build argument to create our own variant with 3.9.0, and then proceed as above.
Assuming you have git. Otherwise download the repo manually.
Download the Jupyter Docker stack repo
git clone https://github.com/jupyter/docker-stacks
Change into the base-notebook directory of the cloned repo
cd ./base-notebook
Build the image with python 3.9.0 instead
docker build --build-arg PYTHON_VERSION=3.9.0 -t jupyter-base-notebook:3.9.0 .
Create the version with your copied folders and the 3.9.0 base from the steps above, replacing the first line in the dockerfile with:
FROM jupyter-base-notebook:3.9.0
I've tested this and it works, running Python 3.9.0 without issue.
There are lots of ways to build Jupyter images, this is just one method. Check out docker hub for Jupyter to see their variants.
I'm running a Python script inside a Docker container. Depending on the passed parameters, my script should show different information. I want to pass these parameters through the docker run {my_image_name} {parameters} command, where instead of {parameters} I want to type some custom values that my script expects to receive. I found some info about arguments and env variables, but I don't understand it. Can anybody explain how to do this to resolve my issue? What should I add to the Dockerfile?
Dockerfile content:
FROM python:3.8.2-buster
WORKDIR /usr/src/app
COPY metrics.py .
RUN pip install --upgrade pip
RUN python -m pip install psutil
CMD python /usr/src/app/metrics.py
When I run docker run {my_image_name} {my_script_name} {parameter} I'm getting:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"metrics.py\": executable file not found in $PATH": unknown.
I work on Windows.
You are missing an ENTRYPOINT.
Your Dockerfile should be something like this:
FROM python:3.8.2-buster
WORKDIR /usr/src/app
COPY metrics.py .
RUN pip install --upgrade pip
RUN python -m pip install psutil
ENTRYPOINT ["python"]
CMD ["/usr/src/app/metrics.py"]
I'm trying to automate the deployment of my Python-Flask app on Ubuntu 18.04 using Bash by going through the motion of preparing all the necessary files/directories and cloning the source code from Github followed by creating the virtual environment, installing the pre-requisite modules and etc.
Now because I have to execute my Bash script using sudo, the entire script is executed as root except where I specify otherwise using sudo -u myuser. When it comes to activating my virtual environment, I get the following output: sudo: source: command not found, and my subsequent pip installs all end up outside of the virtual environment. Excerpts of my code below:
#!/bin/bash
...
sudo -u "$user" python3 -m venv .env
sudo -u $SUDO_USER source /srv/www/www.mydomain.com/.env/bin/activate
sudo -u "$user" pip install wheel
sudo -u "$user" pip install uwsgi
sudo -u "$user" pip install -r requirements.txt
...
Now for the life of me, I can't figure out how to activate the virtual environment as a separate user, if that makes any sense.
I've scoured the web, and most of the questions/answers I found revolve around how to activate the virtual environment in a Bash script, but not how to activate it as a separate user within a Bash script that was executed with sudo.
That's because source is not an executable file, but a built-in bash command. It won't work with sudo, since the latter accepts a program name (i.e. an executable file) as its argument.
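One common workaround, sketched here with the paths from your excerpt, is to run a single shell as the target user and do the activation and the installs inside that one invocation:
sudo -u "$user" bash -c 'source /srv/www/www.mydomain.com/.env/bin/activate && pip install wheel uwsgi && pip install -r requirements.txt'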
P.S. It's not clear why you have to execute the whole script as root. If you need to execute only a few commands as root (e.g. for starting/stopping a service) and run the remaining majority as a regular user, you can use sudo for those commands only. E.g. the following script
#!/bin/bash
# The `whoami` command outputs the current username. Unlike `source`, this is
# a full-fledged executable file, not a built-in command
whoami
sudo whoami
sudo -u postgres whoami
on my machine outputs
trolley813
root
postgres
P.P.S. You probably don't need to activate an environment as root.
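In fact, you can skip activation entirely and call the virtual environment's own executables by their full path; activation mostly just puts the venv's bin directory first on PATH. A sketch using the paths from your excerpt:
# pip from inside the venv installs into the venv, no activation needed
sudo -u "$user" /srv/www/www.mydomain.com/.env/bin/pip install wheel
sudo -u "$user" /srv/www/www.mydomain.com/.env/bin/pip install uwsgi
sudo -u "$user" /srv/www/www.mydomain.com/.env/bin/pip install -r requirements.txt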
I want to install Apache Superset on my Windows machine. But I don't want to use any virtual environment of Python.
With the help of python-3.8.10-embed-amd64.zip, you can do the job.
Here is the solution in detail: https://github.com/alitrack/superset_app
preparation
download these 3 files:
python-3.8.10-embed-amd64.zip
get-pip.py
python_geohash-0.8.5-cp38-cp38-win_amd64.whl
unzip python-3.8.10-embed-amd64.zip and go into the unzipped folder
modify python38._pth so that it contains (note that import site is uncommented):
python38.zip
.
# Uncomment to run site.main() automatically
import site
install pip
python ..\get-pip.py
install python_geohash
python -m pip install ..\python_geohash-0.8.5-cp38-cp38-win_amd64.whl
install apache superset and pillow
pip install apache-superset pillow
pip install markdown==3.2.2
init superset
# initialize the database:
superset db upgrade
# Create an admin user
set FLASK_APP=superset
superset fab create-admin
# Load some data to play with
superset load_examples
# Create default roles and permissions
superset init
run superset
# To start a development web server on port 8088, use -p to bind to another port
superset run -p 8088 --with-threads --reload --debugger
You don't need to install Python or create a virtual env. Install Docker and docker-compose.
git clone https://github.com/apache/incubator-superset/
cd incubator-superset/contrib/docker
# prefix with SUPERSET_LOAD_EXAMPLES=yes to load examples:
docker-compose run --rm superset ./docker-init.sh
# you can run this command every time you need to start superset now:
docker-compose up