Docker Container Not Executing Python Commands - python

I have a simple script writecsv.py which queries an external API, parses the response, and writes to 2 CSVs.
My Dockerfile reads:
FROM python:3.8.5-slim-buster
COPY . /src
RUN pip install -r /src/requirements.txt
CMD ["python", "./src/writecsv.py"]
In the directory containing my Dockerfile, I have 4 files:
writecsv.py # My script that queries API and writes 2 csvs
keys.yaml # Stores my API keys which are read by writecsv.py
requirements.txt
Dockerfile
When I build this image, I use docker build -t write-to-csv-application . and to run this image, I use docker run write-to-csv-application
I am able to show that the script runs and that the 2 CSV files are successfully created, by printing the contents of the current working directory before and after calling csv.DictWriter.
So far, so good. Now I'd like to expose these files on localhost:5000, to be downloaded.
My current approach isn't working. I don't know Docker very well, so any suggestions are welcomed. Here's where things go wrong:
I then add two more lines to my Dockerfile, an EXPOSE 5000 and an http.server CMD, to get:
FROM python:3.8.5-slim-buster
COPY . /src
RUN pip install -r /src/requirements.txt
EXPOSE 5000/tcp
CMD ["python", "./src/writecsv.py"]
CMD python -m http.server -dp ./src/ 5000
Now when I build this image (again using docker build -t write-to-csv-application .) and run it using docker run -p 5000:5000 write-to-csv-application, I don't get any of the command-line output from writecsv.py that I saw previously. I am able to access localhost:5000 and browse the image's file structure, but I find that the files weren't created. The command line hangs indefinitely (which I would expect, as I don't have anything to terminate the http server).
What I've tried:
I wrote the files to ./src/data/ and pointed the http.server at /src/data/, which turns out not to exist.
I pointed the http.server at ./ and checked the entire file structure: the files aren't being written anywhere when run with docker run -p 5000:5000 write-to-csv-application.

This worked for me; I neglected the requirements.txt and just made a very basic example. The key point is that only the last CMD in a Dockerfile takes effect, so in your version the http.server CMD silently replaced the writecsv.py CMD and the script never ran. "The main purpose of a CMD is to provide defaults for an executing container". See the official docs for more information. Below, the script runs at build time via RUN, and the server is the container's single CMD.
writecsv.py:
import csv

fields = ['Name', 'Branch', 'Year', 'CGPA']
rows = [['Nikhil', 'COE', '2', '9.0'],
        ['Sanchit', 'COE', '2', '9.1'],
        ['Aditya', 'IT', '2', '9.3'],
        ['Sagar', 'SE', '1', '9.5'],
        ['Prateek', 'MCE', '3', '7.8'],
        ['Sahil', 'EP', '2', '9.1']]

filename = "university_records.csv"
with open(filename, 'w') as csvfile:
    csvwriter = csv.writer(csvfile)
    csvwriter.writerow(fields)
    csvwriter.writerows(rows)
Dockerfile:
FROM python:3.8.5-slim-buster
WORKDIR /src
COPY . /src
EXPOSE 5000/tcp
RUN ["python", "./writecsv.py"]
CMD python -m http.server -d . 5000
docker build -t python-test .
docker run --rm -it -p 5000:5000 --name python-test python-test
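Once the container is up, the CSV written at build time can be fetched from the host; for example:
curl -O http://localhost:5000/university_records.csv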

Related

Run docker image with json file as variable

I have the following Dockerfile:
FROM python:3.8-slim
WORKDIR /app
# copy the dependencies file to the working directory
COPY requirements.txt .
COPY model-segmentation-512.h5 .
COPY run.py .
# TODO add python dependencies
# install pip deps
RUN apt update
RUN pip install --no-cache-dir -r requirements.txt
RUN mkdir /app/input
RUN mkdir /app/output
# copy the content of the local src directory to the working directory
#COPY src/ .
# command to run on container start
ENTRYPOINT [ "python", "run.py"]
and then I would like to run my image using the following command, where json_file.json is a file on my machine that I can update whenever I want; it is read by run.py to import all the required parameters for the python script:
docker run -v /local/input:/app/input -v /local/output:/app/output/ -t docker_image python3 run.py model-segmentation-512.h5 json_file.json
However when I do this I get a FileNotFoundError: [Errno 2] No such file or directory: 'path/json_file.json', so I think I'm not passing in my json file properly. What should I change so that my docker image reads an updated json file (just like a variable) every time I run it?
I think you are using ENTRYPOINT in the wrong way. Please see this question and read more about ENTRYPOINT and CMD. In short, whatever you specify after the image name when you run docker becomes the CMD and is passed to the ENTRYPOINT as a list of arguments. See the next example:
Dockerfile:
FROM python:3.8-slim
WORKDIR /app
COPY run.py .
ENTRYPOINT [ "python", "run.py"]
run.py:
import sys
print(sys.argv[1:])
When you run it:
> docker run -it --rm run-docker-image-with-json-file-as-variable arg1 arg2 arg3
['arg1', 'arg2', 'arg3']
> docker run -it --rm run-docker-image-with-json-file-as-variable python3 run.py arg1 arg2 arg3
['python3', 'run.py', 'arg1', 'arg2', 'arg3']
Map the json file into the container using something like -v $(pwd)/json_file.json:/mapped_file.json and pass the mapped filename to your program, so you get
docker run -v $(pwd)/json_file.json:/mapped_file.json -v /local/input:/app/input -v /local/output:/app/output/ -t docker_image python3 run.py model-segmentation-512.h5 /mapped_file.json
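For reference, here is a minimal sketch of how run.py might consume those two arguments (the question doesn't show run.py, so the parameter handling is an assumption):
import json
import sys

# assumed argument order, matching the docker run command above:
# argv[1] is the model weights path, argv[2] is the JSON config path
model_path, config_path = sys.argv[1], sys.argv[2]
with open(config_path) as f:
    params = json.load(f)
print(model_path, params)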

run two python scripts with docker compose

My folder structure had the Dockerfile at the root and a src folder containing main.py and requirements.txt.
My Dockerfile looked like this:
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
CMD [ "python", "main.py"]
When I ran these commands:
docker build --tag FinTechExplained_Python_Docker .
docker run free
my main.py file ran and gave the correct print statements as well. Now I have added another file, tests.py, in the src folder. I want to run tests.py first and then main.py.
I tried modifying the CMD within my Dockerfile like this:
CMD [ "python", "test.py"] && [ "python", "main.py"]
but then it gives me the print statements from only the first file, test.py.
I read about docker-compose and added this docker-compose.yml file to the root folder:
version: '3'
services:
  tests:
    image: free
    command: >
      /bin/sh -c 'python tests.py'
  main:
    image: free
    command: >
      /bin/sh -c 'python main.py'
then I changed my Dockerfile by removing the CMD:
FROM python:3.8-slim-buster
WORKDIR /src
COPY src/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ .
Then I ran the following commands:
docker compose build
docker compose run tests
docker compose run main
When I run these commands separately, I get the correct print statements for both tests and main. However, I am not sure if I am using docker-compose correctly.
Am I supposed to run both scripts separately? Or is there a way to run one after another using a single docker command?
How should my Dockerfile look if I am running the python scripts from the docker-compose.yml instead?
Edit:
Ideally looking for solutions based on docker-compose
In the Bourne shell, in general, you can run two commands in sequence by putting && between them. It sounds like you're already aware of this.
# without Docker, at a normal shell prompt
python test.py && python main.py
The Dockerfile CMD has two syntactic forms. The JSON-array form does not run a shell, and so it is slightly more efficient and has slightly more consistent escaping rules. If it's not a JSON array then Docker automatically runs it via a shell. So for your use you can use the shell form:
CMD python test.py && python main.py
In comments to other answers you ask about providing this as an override in the docker-compose.yml file. Compose will not normally run a shell for you, so you need to explicitly specify it as part of the command: override.
command: /bin/sh -c 'python test.py && python main.py'
Your Dockerfile should generally specify a CMD and the docker-compose.yml often will not include a command:. This makes it easier to run the image in other contexts (via docker run without Compose; in Kubernetes) since you won't have to retype the command every different way you want to run the container. The entrypoint wrapper pattern highlighted in #sytech's answer is very useful in general and it's easy to add to a container that uses a CMD without an ENTRYPOINT; but it requires the Dockerfile to use CMD as a normal well-formed shell command.
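Under that recommendation, a minimal docker-compose.yml (using the image name from the question) needs no command: at all:
version: '3'
services:
  main:
    image: free
    # no command: needed; the image's CMD (python test.py && python main.py) runs both scripts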
You have to change CMD to ENTRYPOINT, and run the first script in the background using &.
ENTRYPOINT ["/docker_entrypoint.sh"]
docker_entrypoint.sh
#!/bin/bash
set -e
# run the tests in the background, then hand PID 1 over to main.py
python tests.py &
exec python main.py
In general, it is a good rule of thumb that a container should run only a single process, and that essential process should be PID 1.
Using an entrypoint can help you do multiple things at runtime and optionally run user-defined commands using exec, as described in the best practices guide. For example, suppose you always want the tests to run whenever the container starts, followed by whatever command CMD defines.
First, create an entrypoint script (be sure to make it executable with chmod +x):
#!/usr/bin/env bash
# always run tests first
python /src/tests.py
# then run user-defined command
exec "$@"
Then configure the Dockerfile to copy the script and set it as the entrypoint:
#...
COPY entrypoint.sh /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["python", "main.py"]
Then when you build an image from this Dockerfile and run it, the entrypoint will first execute the tests and then run the command, which runs main.py.
The command can also still be overridden by the user when running the image like docker run ... myimage <new command> which will still result in the entrypoint tests being executed, but the user can change the command being run.
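For example (the image name myimage and the script other.py are hypothetical):
docker run myimage                    # entrypoint runs the tests, then the default main.py
docker run myimage python other.py    # entrypoint runs the tests, then other.py instead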
You can achieve this by creating a bash script (let's name it entrypoint.sh) containing the python commands. If you want, you can run those as background processes.
#!/usr/bin/env bash
set -e
python tests.py
python main.py
Edit your Dockerfile as follows:
FROM python:3.8-slim-buster
# Create workDir
RUN mkdir code
WORKDIR code
ENV PYTHONPATH=/code
#upgrade pip if you like here
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy Code
COPY . .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
In the docker compose file, add the following line to the service.
entrypoint: [ "./entrypoint.sh" ]
Have you tried this in your docker-compose.yaml?
version: '3'
services:
  main:
    image: free
    command: >
      /bin/sh -c 'python3 tests.py & python3 main.py & wait'
Both will run in the background, and the wait keeps the container alive until they finish.
then run in terminal
docker-compose up --build

Using Docker and Python, how to access csv file created in volume?

Edit
Added suggestions from daudnadeem.
Created a folder called temp_folder in the directory with my Dockerfile.
Updated the last line of the python file to be df.to_csv('/temp_folder/temp.csv').
Ran the docker build and then the new run command docker run -v temp_folder:/temp_folder alexd/myapp.
I have a very simple Python example using Docker. The code runs fine, but I can't figure out how to access the CSV file created by the Python code. I have created a volume in Docker and used docker inspect to try to access the CSV file but I'm unsure of the syntax and can't find an example online that makes sense to me.
Python Code
import pandas as pd
import numpy as np
import os
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=['A', 'B', 'C', 'D'])
df.to_csv('temp.csv')
Dockerfile
FROM python:3.7.1
RUN mkdir -p /var/docker-example
WORKDIR /var/docker-example
COPY ./ /var/docker-example
RUN pip install -r requirements.txt
ENTRYPOINT python /var/docker-example/main.py
Docker commands
$ docker build -t alexf/myapp -f ./Dockerfile .
$ docker volume create temp-vol
$ docker run -v temp-vol alexf/myapp .
$ docker inspect -f temp.csv temp-vol
temp.csv
Your temp.csv lives in the ephemeral filesystem of the Docker container. In order for you to access it outside of the container, the best thing to do is expose a volume.
In the directory where you have your Dockerfile, make a folder called "this_folder".
Then when you run your image, bind-mount that folder into your container: docker run -v $(pwd)/this_folder:/this_folder <image-name> (the host path must be absolute; with a bare name like this_folder:, Docker creates a named volume instead of mounting your folder).
Then change this code to:
import pandas as pd
import numpy as np
import os
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=['A', 'B', 'C', 'D'])
df.to_csv('/this_folder/temp.csv')
"this_folder" is now a "volume" which is mutually accessible by your docker container and the host machine. So outside of your docker container, if you ls /this_folder you should see temp.csv lives there now.
If you don't want to mount a volume, you could upload the file somewhere, and download it later. But in a local env, just mount the folder and use it to share files between your container and your local machine.
Edit
When stuff is not going as planned with docker, you may want to poke around in the container interactively, i.e. 'ssh' into your docker container.
You do that with: docker run -it pandas_example /bin/bash
When I logged in, I saw that the file temp.csv was being created in the same folder as main.py.
How you solve the issue from here is up to you: you need to move the temp.csv file into the directory that is shared with your local machine.
FROM python:3.7.1
RUN pip3 install pandas
COPY test.py /my_working_folder/
CMD python3 /my_working_folder/test.py
For a quick fix add
import subprocess
subprocess.call("mv temp.csv /temp_folder/", shell=True)
to the end of main.py. But this is not recommended.
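A cleaner option, in line with the edit at the top of this question, is to have the script write straight into the mounted folder instead of moving the file afterwards:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 100, size=(100, 4)), columns=['A', 'B', 'C', 'D'])
# write directly into the volume-mounted folder, not the container's own filesystem
df.to_csv('/temp_folder/temp.csv')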
Let's make things simple. If your only goal is to understand how volumes work and where to find, on the host, the file created by Python code inside the container:
Dockerfile:
FROM python:3.7.1
RUN mkdir -p /var/docker-example
WORKDIR /var/docker-example
COPY . /var/docker-example
ENTRYPOINT python /var/docker-example/main.py
main.py - will create /tmp/temp.txt inside the container with hi inside
with open('/tmp/temp.txt', 'w') as f:
    f.write('hi')
Docker commands (run inside the project folder):
Build image:
docker build -t alexf/myapp .
Use a named volume vol mapped to the /tmp folder inside the container:
Run container: docker run -d -v vol:/tmp alexf/myapp
Inspect the volume: docker inspect vol
[
    {
        "CreatedAt": "2019-11-05T22:07:02+02:00",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/vol/_data",
        "Name": "vol",
        "Options": null,
        "Scope": "local"
    }
]
Bash commands run on the docker host
sudo ls /var/lib/docker/volumes/vol/_data
temp.txt
sudo cat /var/lib/docker/volumes/vol/_data/temp.txt
hi
You can also use bind mounts and anonymous volumes to achieve the same result.
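For example, a bind-mount variant of the same run might look like this (the host folder out/ is illustrative):
docker run -v "$PWD/out:/tmp" alexf/myapp
cat out/temp.txt
hi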
Docs

Creating files (pdf, xls) with a python script inside a docker container

Trying to create simple files with a python script called "scriptfile.py". When I run it, it outputs a pdf with a sine wave and an xls file containing a 3x10 dataframe that was initially imported from a csv file called "csv_file.csv". In addition, the sine wave plot is shown. This all works fine.
Now I've created a Dockerfile, based on the app.py example in the Docker documentation. I build an image using
sudo docker build --tag=testrun .
and run it using
sudo docker run -p 4000:80 testrun
The console output is normal, but no files are created and no plot is displayed. The code of the Dockerfile and scriptfile.py is given below.
The Dockerfile reads:
FROM python:3
WORKDIR /app
COPY . /app
ADD scriptfile.py /
RUN pip install matplotlib
RUN pip install xlwt
RUN pip install pandas
EXPOSE 80
ENV NAME DockerTester
CMD ["python","/scriptfile.py"]
The scriptfile.py reads
import math
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv('csv_file.csv', sep=",", header=None)
df.to_excel(r'xlx_file.xls')

print("plotting ...")
sinusoid = []
for i in range(100):
    sinusoid.append(math.sin(i))

f = plt.figure()
plt.plot(sinusoid)
plt.show()
f.savefig("sin.pdf")
plt.close()
print("... success")
Question: Where are the files?
There are multiple ways to do this, here are some.
Using docker cp
First figure out your containerid by using docker ps -a, then do:
docker cp <containerid>:/app /tmp/mydir
You will find the content on your host at /tmp/mydir.
Using Dockerfile VOLUME
Add this line to your Dockerfile after your COPY:
VOLUME /app
Now run your container like you are:
docker run -p 4000:80 testrun
Now do:
docker inspect -f '{{ .Mounts }}' <containerid>
Where <containerid> is obtained from docker ps -a. You will see something like:
[{volume 511961d95cd5de9a32afe3358c7b9af3eabd50179846fdebd9c882d50c7ffee7 /var/lib/docker/volumes/511961d95cd5de9a32afe3358c7b9af3eabd50179846fdebd9c882d50c7ffee7/_data /app local true }]
As you can see there is a path:
/var/lib/docker/volumes/511961d95cd5de9a32afe3358c7b9af3eabd50179846fdebd9c882d50c7ffee7/_data
That is where the container's /app directory contents are located.
Using docker run -v
Change your python script to write a location other than /app, something like f.savefig("/tmp/sin.pdf").
Then run docker like this:
docker run -it -v /tmp/share/:/tmp -p 4000:80 testrun
Now you will find your file on your host at /tmp/share/

Run multiple python programs through a batch file on docker CE desktop(windows)

I have 3 python scripts which I want to run at the same time through a batch file on Docker CE for Windows.
I have created 3 containers for the 3 python scripts. All 3 scripts require input files:
python_script_1 : docker-container-name
python_script_2 : docker-container-name
python_script_3 : docker-container-name
The Dockerfiles for the 3 python scripts are:
Docker_1
FROM python:3.7
RUN pip install pandas
COPY input.csv ./
COPY script.py ./
CMD ["python", "./script.py", "input.csv"]
Docker_2
FROM python:3.7
RUN pip install pandas
COPY input_itgear.txt ./
COPY script.py ./
CMD ["python", "./script.py", "input_itgear.txt"]
Docker_3
FROM python:3.7
RUN pip install pandas
COPY sale_new.txt ./
COPY store.txt ./
COPY script.py ./
CMD ["python", "./script.py", "sale_new.txt", "store.txt"]
I want to run all three scripts at the same time on Docker through a batch file. Any help will be greatly appreciated.
So here is the gist:
https://gist.github.com/karlisabe/9f0d43fe09536efa8035092ccbb593d4
You need to place your csv and py files accordingly so that they can be copied into the images. When you run docker-compose up -d, it will build the images (or use the cache if nothing has changed) and run all 3 services. In essence it's like using docker run my-image 3 times, but there are some additional features available; for example, all 3 containers will be on a shared network created by Docker. You should read more about docker-compose here: https://docs.docker.com/compose/
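A minimal sketch of what such a docker-compose.yml could look like (the service names and build-context folders are assumptions; the gist above contains the actual file):
version: "3"
services:
  script1:
    build: ./script1    # folder holding Docker_1, script.py, input.csv
  script2:
    build: ./script2    # folder holding Docker_2, script.py, input_itgear.txt
  script3:
    build: ./script3    # folder holding Docker_3, script.py, sale_new.txt, store.txt
Then a single docker-compose up -d, which can be the one line in the Windows batch file, builds and starts all three containers.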
