I am trying to write a text (.txt) file to a local Desktop folder on Windows 10 after building a Docker image (docker_run_test). The Docker build seems to work seamlessly using "docker build -t docker_run_test .", run from the working directory on the Desktop where the Dockerfile and Python script reside (C:/Users/mdl518/Desktop/docker_tests/). The script simply prints a statement and writes it to a .txt file, but I cannot locate the output .txt file. Below are the associated Dockerfile and Python script.
The Dockerfile:
FROM python:3
ADD docker_test.py ./
RUN pip install pandas
CMD ["python3","./docker_test.py"]
The Python script (docker_test.py):
import os
print("This is a Docker test.")
with open('docker_test.txt', 'w') as f:
    f.write("This is a Docker test.")
I have searched the contents of the Docker image as well, but cannot locate the output .txt within the image either. Any assistance is most appreciated!
You have to mount/bind the folder where you want to see the results into the container.
Change the output filename so that the script writes to another folder, say /output:
with open('/output/docker_test.txt', 'w') as f:
Then ask Docker to bind the host folder %HOMEPATH%\Desktop to /output in the container:
docker run -v %HOMEPATH%\Desktop:/output docker_run_test
I'm not sure about the %HOMEPATH% syntax, as I'm a Linux user.
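Putting it together, a minimal sketch of the adjusted script and run command, using the absolute Desktop path from the question instead of %HOMEPATH% (adjust the path to your machine):

# docker_test.py - writes into the bind-mounted /output folder
print("This is a Docker test.")
with open('/output/docker_test.txt', 'w') as f:
    f.write("This is a Docker test.")

docker build -t docker_run_test .
docker run -v C:/Users/mdl518/Desktop:/output docker_run_test

After the run, docker_test.txt should appear directly on the Desktop.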
Related
I have a Python script that takes an array of BeautifulSoup tags, loops through them, prints the text property of each tag, and writes it to a .txt file. Unfortunately, this .txt file never gets created. This is my code:
with open('output/scrape.txt', 'a+') as text_file:
    for headline in soup.select('#top-news span[class*="headline"]'):
        text_file.write(headline.text)
        print(headline.text)
The print statement is working, but the attempt to create and write to the file is not. I've tried a simpler write operation, and that doesn't work either:
text_file = open("output/scrape.txt", "a+")
text_file.write("Test")
text_file.close()
The output folder exists, and I've confirmed that it allows reading and writing.
The script is executed from a Dockerfile:
FROM python:3.8.2-slim
WORKDIR /src
RUN pip install --upgrade -v pip \
    lxml \
    requests \
    beautifulsoup4
COPY ./scrape.py .
RUN mkdir -p /src/output
CMD python scrape.py
The Dockerfile you have provided copies the ./scrape.py script into the container, but it does not share any files between your container and your host computer. You will need to set up a way of sharing the files that get created inside your container with your host. This is usually done with volumes or bind mounts; see the Docker documentation on volumes.
The command I used to mount a volume into the container at run time pointed at the wrong directory. I was using docker run -v output:/output nuprizm when I should've been using docker run -v output:/src/output nuprizm.
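As a side note, -v output:/src/output mounts a named Docker volume called output rather than the project's local output folder. If the goal is to see scrape.txt appear in the host folder directly, a bind mount with an absolute path is one option (a sketch, assuming a Unix-style shell and that the command is run from the project root):

docker run -v "$(pwd)/output:/src/output" nuprizm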
I have a problem with my Dockerfile.
I want to execute my Python script during the image build (the script creates a few files).
But the script is not executed during the image build - the files created by the script are not in the container.
Here is my Dockerfile:
FROM ubuntu:latest
RUN python script1.py
If you need to interact with files during the image build, you must add those files to the image first; otherwise, how would Docker know where those files are?
FROM ubuntu:latest
ADD script1.py /tmp/script1.py
RUN python /tmp/script1.py
Of course, I'll leave the paths to you.
FROM ubuntu:latest
# add/copy files from some path
ADD script1.py /tmp/script1.py
# run python with the provided script as its argument
CMD ["python", "/tmp/script1.py"]
I have a Python script, python_script.py, that reads an HDF5 file, hdf_file.h5, on my local machine. The directory path to the files is
folder1
    folder2
        python_script.py
        hdf_file.h5
I have the following sample code:
from pandas import read_hdf
df = read_hdf('hdf_file.h5')
When I run this code on my local machine, it works fine.
However, I need to place the Python script inside a Docker container, keep the HDF file out of the container, and have the code read the file. I want to have something like the following directory path for the container:
folder1
    folder2
        hdf_file.h5
        docker-folder
            python_script.py
            requirements.txt
            Dockerfile
I use the following Dockerfile:
FROM python:3
WORKDIR /project
COPY ./requirements.txt /project/requirements.txt
RUN pip install -r requirements.txt
COPY . /project
CMD [ "python", "python_script.py" ]
I am new to Docker and am having a lot of trouble figuring out how to get a Python script inside a container to read a file outside the container. What commands do I use or code changes do I make to be able to do this?
It seems you need to use docker volumes (https://docs.docker.com/storage/volumes/).
Try the following:
docker run -v /path/where/the/hdf5/lives:/path/to/your/project/folder your_docker_image:your_tag
The part before the : refers to the host machine, and the part after refers to the container.
Hope it helps!
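For the layout in the question, a concrete sketch could mount the folder holding the HDF5 file to a separate directory such as /data (the name is arbitrary, and mounting over /project itself would hide the copied script), with folder1/folder2 replaced by its absolute path on your machine:

from pandas import read_hdf

# read the file from the bind-mounted host folder
df = read_hdf('/data/hdf_file.h5')

docker run -v /absolute/path/to/folder1/folder2:/data your_docker_image:your_tag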
I am trying to run a very simple python file that simply prints "woof" within a docker container. As far as I know I have created a docker container called:
c5d3c4c383d1
I then run the following command, in an attempt to tell myself what directory I am running things from in docker:
sudo docker run c5d3c4c383d1 pwd
This returns the following value:
/
Which I assume to be my root directory, so I go to my root directory. Typing pwd shows:
/
I then create a file called meow.py via the nano command and enter in this a single line that is:
print("Woof!")
I save this and confirm this is in the / directory with an ls command.
I then enter the following:
sudo docker run c5d3c4c383d1 python meow.py
Which returns:
python: can't open file 'meow.py': [Errno 2] No such file or directory
I don't understand this. Apparently I am not in the root directory when running a command with Docker, because the meow.py file is DEFINITELY in the root directory, yet it says the file cannot be found. What the heck... As I said, when I run pwd within the Docker container it says I am in the / directory, but I am still given this file-not-found error.
docker is running your command inside a container ... that / is the container's own root directory ... think of it like a totally different machine that you would normally ssh into. Try something like this:
docker run -it c5d3c4c383d1 bash
That's basically like you have just ssh'd into your remote machine.
Go ahead and try some commands (ls, pwd, etc.).
Now run echo 'print("hello world")' > test.py
Now run ls and you should see your test.py ... go ahead and run it with python test.py
Now you can exit your container ... if you start the same container again, you should still have your test.py file there... although I think it's more common for people to write a Dockerfile that sets up their environment and then treat each session as disposable, as opposed to keeping the same container around.
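For example, a minimal Dockerfile along those lines might look like this (a sketch, assuming a python base image rather than whatever c5d3c4c383d1 was built from):

FROM python:3
WORKDIR /app
# copy the script into the image so it exists inside the container
COPY meow.py .
CMD ["python", "meow.py"]

Built and run with docker build -t meow . followed by docker run meow, it prints "Woof!" without relying on files from the host at run time.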
A very simple Python program. Suppose the current directory is /PYTHON. I want to pass file.txt as an argument to the Python script boot.py. Here is my Dockerfile:
FROM python
COPY boot.py ./
COPY file.txt ./
RUN pip install numpy
CMD ["python", "boot.py", "file.txt"]
Then I build the Docker image with:
docker build -t boot:latest .
Then run the container
docker run -t boot:latest python boot.py file.txt
I got the correct results.
But if I copy another file, file1.txt, into the current directory from a different directory (not /PYTHON), and then run the container again:
docker run -t boot:latest python boot.py file1.txt
I got the following error:
FileNotFoundError: [Errno 2] No such file or directory: 'file1.txt'
So the error is due to the fact that file1.txt is not in the container. But if I share this image with a friend and the friend wants to pass a completely different file as an argument, how do I write the Dockerfile so that anybody with my image can pass arbitrary files as arguments without errors?
It won't work that way. Like you said, file1.txt is not in the container.
The workaround is to use Docker volumes to inject files from your host machine to the container when running it.
Something like this:
docker run -v /local/path/to/file1.txt:/container/path/to/file1.txt -t boot:latest python boot.py /container/path/to/file1.txt
Here /local/path/to/file1.txt is the path on your host machine, which will be mounted over /container/path/to/file1.txt in the container.
You may also make your script read from STDIN and then pass the data to docker using cat. Have a look at "How to get a Docker container to read from stdin?"
The trick is to keep STDIN open even if not attached, using the --interactive (or -i) option of docker run.
Something like:
cat /path/to/file | docker run -i --rm boot python boot.py
Or:
docker run -i --rm boot python boot.py < /path/to/file
In both cases the script reads until EOF, which marks the end of the input.
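For illustration, a minimal sketch of what a STDIN-reading boot.py could look like (an assumption about the script, not the asker's actual code):

import sys

# read everything piped in on STDIN until EOF
data = sys.stdin.read()
print(f"Received {len(data)} characters")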
If I understand the question correctly, you are acknowledging that the file isn't in the container, and you are asking how best to share your container with the world, allowing people to add their own content to it.
You have a couple of options. One is Docker volumes, which allow your friends (and other interested parties) to mount local folders inside your Docker containers. That is, you can overlay a folder on your local filesystem onto a folder inside the container (this is generally quite nifty when you are developing locally as well).
Or, again, depending on the purpose of your container, somebody could extend your image. For example, a Dockerfile like
FROM yourdockerimage:latest
COPY file1.txt ./
CMD ["python", "boot.py", "file1.txt"]
Choose whichever option suits your project the best.
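With the extended-image route, your friend would then build and run their own image with something like the following (the image name friend/boot is hypothetical):

docker build -t friend/boot .
docker run friend/boot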
One option is to make use of volumes.
This way, all collaborators on the project can mount their own files into the container.
You could change your Dockerfile to:
FROM python
COPY boot.py ./
COPY file.txt ./
RUN pip install numpy
ENTRYPOINT ["python", "boot.py"]
And then run it to read from STDIN:
docker run -i boot:latest - < file1.txt
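For this to work, boot.py has to treat the - argument as "read from STDIN". A minimal sketch of that convention (again an assumption about the script, not the original code):

import sys

# use STDIN when the argument is '-', otherwise read the named file
if len(sys.argv) > 1 and sys.argv[1] != '-':
    with open(sys.argv[1]) as f:
        data = f.read()
else:
    data = sys.stdin.read()
print(data)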