I am new to Docker and want to know whether there is a slim or lightweight Python winamd64 image. My Dockerfile is as follows:
FROM python:3.8-slim-buster
# Create project directory (workdir)
WORKDIR /SMART
# Add requirements.txt to WORKDIR and install dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt
# Add source code files to WORKDIR
ADD . .
# Application port (optional)
EXPOSE 8000
# Container start command
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
It works well in the Linux container mode, but I need my container to be Windows-based. I run into a problem when I switch to Windows containers, even after setting "experimental" to true. Therefore, I decided to use a Python winamd64 image, but all of the official images on the website are around 1 GB. I need a small Python image like python:3.8-slim-buster. How could I solve this problem? Or how could I change the first line of the Dockerfile (FROM python:3.8-slim-buster) to use a Python image that I push to my own hub?
New to Docker here. I'm trying to create a basic Dockerfile where I run a Python script that runs some queries in Postgres through psycopg2. I currently have a pgpass file set up in my environment variables so that I can run these tools without supplying a password in the code. I'm trying to replicate this in Docker. I have Windows on my local machine.
FROM datascienceschool/rpython as base
RUN mkdir /test
WORKDIR /test
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY main_airflow.py /test
RUN cp C:\Users\user.name\AppData\Roaming\postgresql\pgpass.conf /test
ENV PGPASSFILE="test/pgpass.conf"
ENV PGUSER="user"
ENTRYPOINT ["python", "/test/main_airflow.py"]
This is what I've tried in my Dockerfile. I've tried to copy over my pgpass file and set it as my environment variable. Apologies if I have the forward/backslashes or syntax wrong. I'm very new to Docker, Linux, etc.
Any help or alternatives would be appreciated
It's better to pass your secrets into the container at runtime than it is to include the secret in the image at build-time. This means that the Dockerfile doesn't need to know anything about this value.
For example
$ export PGPASSWORD=<postgres password literal>
$ docker run -e PGPASSWORD <image ref>
Now in that example, I've used PGPASSWORD, which is an alternative to PGPASSFILE. It's a little more complicated to do the same if you're using a file, but it would look something like this:
The plan will be to mount the credentials as a volume at runtime.
FROM datascienceschool/rpython as base
RUN mkdir /test
WORKDIR /test
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY main_airflow.py /test
ENV PGPASSFILE="/credentials/pgpass.conf"
ENV PGUSER="user"
ENTRYPOINT ["python", "/test/main_airflow.py"]
As I said above, we don't want to include the secrets in the image. We are going to indicate where the file will be in the image, but we don't actually include it yet.
Now, when we start the image, we'll mount a volume containing the file at the location specified in the image, /credentials
$ docker run --mount src="<host path to directory>",target="/credentials",type=bind <image ref>
I haven't tested this, so you may need to adjust the exact paths and such, but this is the idea of how to set sensitive values in a Docker container.
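Inside the container, the script can then pick the secret up from the environment at runtime; a minimal sketch, assuming the PGPASSWORD variable from the example above (the connection call in the comment is illustrative, not from the original script):

```python
import os

def get_pg_password():
    # Read the secret injected at runtime via `docker run -e PGPASSWORD`;
    # fail loudly if it was not provided rather than connecting without it.
    password = os.environ.get("PGPASSWORD")
    if password is None:
        raise RuntimeError("PGPASSWORD is not set; pass it with `docker run -e PGPASSWORD`")
    return password

# Usage would be something like:
# psycopg2.connect(user=os.environ.get("PGUSER", "user"),
#                  password=get_pg_password(), ...)
```

Because the value never appears in the Dockerfile or image layers, `docker history` won't leak it.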
I have a Django site that parses PDFs using tika-python and stores the parsed PDF content in an Elasticsearch index. It works fine on my local machine. I want to run this setup using Docker. However, tika-python does not work, as it requires Java 8 to run the REST server in the background.
My Dockerfile:
FROM python:3.6.5
WORKDIR /site
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
EXPOSE 9200
ENV PATH="/site/poppler/bin:${PATH}"
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
requirements.txt file:
django==2.2
beautifulsoup4==4.6.0
json5==0.8.4
jsonschema==2.6.0
django-elasticsearch-dsl==0.5.1
tika==1.19
sklearn
Where (Dockerfile or requirements) and how should I add the Java 8 required for Tika to make it work in Docker? Online tutorials/examples contain Java+Tika in a container, which is easy to achieve. Unfortunately, I couldn't find a similar solution on Stack Overflow either.
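One common approach is to install a JRE alongside Python in the same image, via apt in the Dockerfile (requirements.txt only handles Python packages). A sketch, untested: the exact package name is an assumption based on the Debian Stretch base of python:3.6.5, and those repositories may since have moved to the Debian archive:

```dockerfile
FROM python:3.6.5
# Install Java 8 so tika-python can start its REST server in the background.
# openjdk-8-jre-headless is the headless JRE package on Debian Stretch;
# adjust the package name if your base image uses a different release.
RUN apt-get update && \
    apt-get install -y --no-install-recommends openjdk-8-jre-headless && \
    rm -rf /var/lib/apt/lists/*
WORKDIR /site
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
ENV PATH="/site/poppler/bin:${PATH}"
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```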
I have 3 Python scripts which I want to run at the same time through a batch file on Docker CE for Windows.
I have created 3 containers for the 3 Python scripts. All 3 Python scripts require input files.
python_script_1 : docker-container-name
python_script_2 : docker-container-name
python_script_3 : docker-container-name
The Dockerfiles for the 3 Python scripts are:
Docker_1
FROM python:3.7
RUN pip install pandas
COPY input.csv ./
COPY script.py ./
CMD ["python", "./script.py", "input.csv"]
Docker_2
FROM python:3.7
RUN pip install pandas
COPY input_itgear.txt ./
COPY script.py ./
CMD ["python", "./script.py", "input_itgear.txt"]
Docker_3
FROM python:3.7
RUN pip install pandas
COPY sale_new.txt ./
COPY store.txt ./
COPY script.py ./
CMD ["python", "./script.py", "sale_new.txt", "store.txt"]
I want to run all three scripts at the same time on Docker through a batch file. Any help will be greatly appreciated.
So here is the gist:
https://gist.github.com/karlisabe/9f0d43fe09536efa8035092ccbb593d4
You need to place your csv and py files accordingly so that they can be copied into the images. What will happen is that when you run docker-compose up -d, it should build the images (or use the cache if no changes were made) and run all 3 services. In essence it's like using docker run my-image 3 times, but with some additional features available; for example, all 3 containers will be on a special network created by Docker. You should read more about docker-compose here: https://docs.docker.com/compose/
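In case the gist is unavailable, the compose file follows this general shape; a minimal sketch, assuming your three Dockerfiles keep the names Docker_1/Docker_2/Docker_3 from the question and sit next to their input files:

```yaml
version: "3"
services:
  script1:
    build:
      context: .
      dockerfile: Docker_1   # the image that copies input.csv and script.py
  script2:
    build:
      context: .
      dockerfile: Docker_2   # copies input_itgear.txt
  script3:
    build:
      context: .
      dockerfile: Docker_3   # copies sale_new.txt and store.txt
```

Running `docker-compose up -d` from the directory containing this file then builds and starts all three containers in one command, which you can also call from a batch file.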
So basically I have a Python script that will write to a file once it is done running. How do I access this file? My end goal is to run the Docker image on Jenkins and then read the XML file that the Python script generates.
FROM python:3
ADD WebChecker.py /
ADD requirements.txt /
ADD sites.csv /
RUN pip install -r requirements.txt
CMD [ "python", "./WebChecker.py" ]
That is my Dockerfile. I have a print("Finished") in there, and it is printing, so that means everything is working fine. It's just that now I need to see my output.xml file.
You should have it working by now if you followed the comments above. In case you're still stuck, you may give this a try:
Build:
docker build -t some_tag_name_to_your_image .
After the build is completed, you may run a container and get the XML file as below:
Write the output file to a bind volume
Run your container as below:
docker run -d --rm --name my_container \
-v ${WORKSPACE}:/path/to/xml/file/in/container \
some_tag_name_to_your_image
Once the XML file is generated, it will be available on the Jenkins host at ${WORKSPACE}.
Notes:
${WORKSPACE} is an environment variable set by Jenkins. Read more about Jenkins env-vars here
Read more about bind mount here
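For completeness, the script's side of this is just writing into the directory that gets bind-mounted. A minimal sketch with the standard library's xml.etree; the /output path and the result fields are assumptions for illustration, not taken from the original WebChecker.py:

```python
import os
import xml.etree.ElementTree as ET

def write_report(results, out_dir="/output"):
    """Write results ([(name, status), ...]) to output.xml in out_dir.

    In the container, out_dir would be the target of the bind mount,
    so the file appears under ${WORKSPACE} on the Jenkins host.
    """
    root = ET.Element("results")
    for name, status in results:
        ET.SubElement(root, "site", name=name, status=status)
    path = os.path.join(out_dir, "output.xml")
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)
    return path
```

As long as the script writes into the mounted directory, nothing else in the Dockerfile needs to change.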
I want to supply an external config file via a volume and pass it like:
docker run MyImage -v /home/path/my_config.conf:folder2/(is that right btw?)
But I have no idea how to link this volume to the argument for main.py...
My Dockerfile:
FROM python:3.6-jessie
MAINTAINER Vladislav Ladenkov
WORKDIR folder1/folder2
COPY folder2/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY folder2/*.py ./
?? how to link volume ??
ENTRYPOINT ["python3", "main.py", "#??volume??"]
You want to use a folder name to map the volume:
docker run -v /home/path/:/folder1/folder2/ MyImage
So now /home/path folder on the host machine is mounted to /folder1/folder2 inside the container.
(Note that the -v option must come before the image name.) Then just pass the path of the conf file, as seen from within the container, to the command:
ENTRYPOINT ["python3", "main.py", "/folder1/folder2/myconf.conf"]
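On the Python side, main.py then only needs to read that path from its argument list. A minimal sketch, assuming an INI-style conf file (the format of myconf.conf is an assumption; adjust the parser to whatever format you actually use):

```python
import sys
import configparser

def load_conf(path):
    # Parse an INI-style config file whose path is passed on the command line.
    parser = configparser.ConfigParser()
    read_ok = parser.read(path)  # returns the list of files actually read
    if not read_ok:
        raise FileNotFoundError(f"config file not found: {path}")
    return parser

if __name__ == "__main__":
    conf = load_conf(sys.argv[1])
```

With the mount in place, the file the script opens at /folder1/folder2/myconf.conf is the one living at /home/path/my_config.conf on the host.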