I am using the Docker SDK for Python to run Docker from Python.
Here is the docker command I tried to reproduce with the Python package:
docker run -v /c/Users/msagovac/pdf_ocr:/home/docker jbarlow83/ocrmypdf-polyglot --skip-text 0ce9d58432bf41174dde7148486854e2.pdf output.pdf
Here is a python code:
import docker
client = docker.from_env()
client.containers.run('jbarlow83/ocrmypdf-polyglot', '--skip-text "0ce9d58432bf41174dde7148486854e2.pdf" "output.pdf"', "-v /c/Users/msagovac/pdf_ocr:/home/docker")
The error says file not found. I am not sure where to set the run options:
-v /c/Users/msagovac/pdf_ocr:/home/docker
Try with named parameters:
client.containers.run(
    image='jbarlow83/ocrmypdf-polyglot',
    command='--skip-text "0ce9d58432bf41174dde7148486854e2.pdf" "output.pdf"',
    volumes={'/c/Users/msagovac/pdf_ocr': {'bind': '/home/docker', 'mode': 'rw'}},
)
Also, the path of the volume to mount looks incorrect; try C:/Users/msagovac/pdf_ocr instead.
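Putting both fixes together, a minimal sketch (paths and filenames are taken from the question; `working_dir` is an assumption so that the relative PDF filenames resolve inside the mount):

```python
# Same run() call with named parameters and a Windows-style host path.
host_dir = 'C:/Users/msagovac/pdf_ocr'  # not /c/Users/... on Docker for Windows
volumes = {host_dir: {'bind': '/home/docker', 'mode': 'rw'}}
command = '--skip-text 0ce9d58432bf41174dde7148486854e2.pdf output.pdf'

if __name__ == '__main__':
    import docker

    client = docker.from_env()
    client.containers.run(
        image='jbarlow83/ocrmypdf-polyglot',
        command=command,
        volumes=volumes,
        working_dir='/home/docker',  # so the relative filenames are found
    )
```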
Related
I am trying to mount a directory from the host into a container and at the same time run Jupyter from that directory. What am I doing wrong here that makes Docker complain with "file not found"?
docker run -it --rm -p 8888:8888 tensorflow/tensorflow:nightly-jupyter -v $HOME/mytensor:/tensor --name TensorFlow python:3.9 bash
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-v": executable file not found in $PATH: unknown.
I tried removing the Python version but still get the same problem. I searched extensively online and couldn't find an answer.
Basically I want to mount that directory, which is a git clone where I have the tensor files. At the same time, I want to run a Jupyter notebook where I can see the files and run them. With so many issues between the Apple M1 processor and TensorFlow, I thought going the Docker route would be better, but am I not surprised :)
Appreciate the help
The docker run command syntax is:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
The image name tensorflow/tensorflow:nightly-jupyter should come after the options (-v, -p, --name, etc.) and before the command.
docker run -it --rm -p 8888:8888 -v $HOME/mytensor:/tensor --name TensorFlow tensorflow/tensorflow:nightly-jupyter bash
I have been struggling for the past 3 hrs to add environment variables to my container :( I have looked through the docker run docs but haven't managed to get it to work.
I have built my image using docker build -t sellers_json_analysis . which works fine.
I then go to run it with: docker run -d --env-file ./env sellers_json_analysis
As per the docs: $ docker run --env-file ./env.list ubuntu bash but I get the following error:
docker: open ./env: no such file or directory.
The .env file is in my root directory
But when running docker run --help I am unable to find anything about env variables; it only provides the following:
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
So I'm not sure whether I am placing things incorrectly. I could add my variables to the Dockerfile, but I want to keep it a public repo as it's a project I would like to display.
Your problem is a wrong path: use either .env or ./.env. When you write ./env, it means a file named env in the current directory:
docker run -d --env-file .env sellers_json_analysis
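For reference, a minimal sketch of the file itself: --env-file expects plain KEY=VALUE lines, with no "export" and no quoting needed (the variable names below are made-up placeholders):

```shell
# Create a minimal env file in the current directory.
cat > .env <<'EOF'
API_KEY=abc123
MODE=production
EOF

# Then pass it at run time (image name from the question):
#   docker run -d --env-file .env sellers_json_analysis
cat .env
```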
I want to run the following Docker container via a Python script.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --run-once
So far I've gotten this using the docker-py documentation:
bindVolume = {'/var/run/docker.sock': {'bind': '/var/run/docker.sock', 'mode': 'rw'}}
client.containers.run('containrrr/watchtower', name="watchtower", volumes=bindVolume, auto_remove=True)
But how do I call --run-once?
Try this:
client.containers.run('containrrr/watchtower',command=["--run-once"], name="watchtower", volumes = bindVolume, auto_remove=True)
As the docker-py documentation for containers.run describes: command (str or list) – The command to run in the container.
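To make the mapping explicit, here is a sketch of how each CLI flag from the original command corresponds to a run() keyword (the run_kwargs dict is just a way to show the parameters side by side):

```python
# docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
#     containrrr/watchtower --run-once
bind_volume = {
    '/var/run/docker.sock': {'bind': '/var/run/docker.sock', 'mode': 'rw'},
}
run_kwargs = dict(
    command=['--run-once'],  # args after the image name -> command parameter
    name='watchtower',
    volumes=bind_volume,     # -v mapping -> volumes parameter
    auto_remove=True,        # --rm -> auto_remove parameter
)

if __name__ == '__main__':
    import docker

    client = docker.from_env()
    client.containers.run('containrrr/watchtower', **run_kwargs)
```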
I'm trying to pass a file as an argument to a Python 3 CLI app (which uses argparse for parsing) hosted in Docker. But I'm getting OSError: Error opening b'input_file.txt' when I run docker run -t with input_file.txt.
I tried:
docker run -t docker_image_name input_file.txt
My Dockerfile has the entrypoint:
ENTRYPOINT [ "python", "/src/cli_app.py" ]
You're telling your python application to look for input_file.txt, but that file doesn't exist in the container. You're not passing a file as is, just an argument/parameter. Try the following to mount your local file (I'm assuming it's in your working directory) into the container:
docker run -it -v $(pwd)/input_file.txt:/tmp/input_file.txt docker_image_name /tmp/input_file.txt
docker run -e IP_FILE_NAME="input_file.txt" docker_image_name
Note the -e option must come before the image name, or Docker will pass it to your entrypoint as an argument. In your Python code, read the filename from the IP_FILE_NAME environment variable. This assumes input_file.txt is present in the container; use the COPY instruction to copy the file from your build machine into the image:
COPY input_file.txt /src/input_file.txt
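On the Python side, the environment variable is read with os.environ rather than shell-style $IP_FILE_NAME. A minimal sketch (the default value is an assumption so the script still works when -e is omitted):

```python
import os

# IP_FILE_NAME is the variable name suggested in the answer above;
# fall back to a default path baked into the image by COPY.
input_path = os.environ.get('IP_FILE_NAME', 'input_file.txt')
print(input_path)
```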
I am using this:
Specify docker containers in /etc/ansible/hosts file
to run my ansible playbooks against a docker container.
But is there any way to avoid having a physical /etc/ansible/hosts file with the container information? E.g., run it from code where this information can be configured?
I looked at:
Running ansible-playbook using Python API
but when looking at the answers I see variables pointing to physical files, e.g.:
inventory = Inventory(loader=loader, sources='/home/slotlocker/hosts2')
playbook_path = '/home/slotlocker/ls.yml'
So I'm not really sure why that is better than simply calling ansible-playbook from the command line without the Python API.
Maybe install Ansible in the Docker container and then run it locally inside the container. For example, in the Dockerfile include:
# Install Ansible
RUN pip install ansible
COPY ansible /tmp/ansible
# Run Ansible to configure the machine
RUN cd /tmp/ansible && ansible-playbook -i inventory/docker example_playbook.yml
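Alternatively, if you want to keep running Ansible from the host without a physical hosts file, one sketch is an inline inventory: a trailing comma makes ansible-playbook treat the argument as a host list instead of a file path, and the docker connection plugin talks to the container directly ("mycontainer" is a placeholder for your running container's name):

```shell
# No /etc/ansible/hosts needed: the trailing comma marks an inline host list.
ansible-playbook -i 'mycontainer,' -c docker example_playbook.yml
```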