I have code in Python that periodically takes pictures (using OpenCV). I created an image to execute this code in a container. On Linux, it works perfectly when I run the following command:
docker run -it --device=/dev/video0:/dev/video0 myImage
After a little searching, the equivalent command on Windows (10 Pro) would be
docker run --privileged -v /dev/bus/usb:/dev/bus/usb myImage
But I am getting an Error response from daemon: Mount denied error.
I already tried enabling the Shared Drives option in the Docker app, but the same error continued.
I also tried some similar commands, but the same error continued.
The command that generated a different error was:
docker run -it --device /dev/video0 myImage
generating the error:
Error response from daemon: error gathering device information while adding custom device "C": not a device node.
I'm trying to train on Azure ML using a custom Docker container with the Azure CLI, using the command below:
az ml job create -f train.yaml --resource-group DefaultResourceGroup-EUS2 --workspace-name test1234
and the train.yaml is:
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
type: command
environment:
  image: user2001.azurecr.io/test/train:latest
command: >-
  python test_local.py
compute: azureml:test1234
Upon running the above command, I get this error from the Azure ML job:
Error: python: can't open file 'test_local.py': [Errno 2] No such file or directory
I have checked my Docker image and test_local.py is present. I have also tried "./test_local.py" and "/test_local.py".
The error persists; I can't figure out where I'm going wrong.
Edit: docker run -it user2001.azurecr.io/test/train:latest python test_local.py
When I run this command the container executes, but the same thing doesn't work on Azure ML.
Adding the /app prefix solved the issue, so the final statement is:
command: >-
  python /app/test_local.py
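For reference, the complete train.yaml with the corrected path would then look something like this (same fields as the original question, only the script path changes):
$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
type: command
environment:
  image: user2001.azurecr.io/test/train:latest
command: >-
  python /app/test_local.py
compute: azureml:test1234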
I am trying to mount a directory from the host into a container and at the same time run Jupyter from that directory. What am I doing wrong here that makes Docker complain that a file is not found?
docker run -it --rm -p 8888:8888 tensorflow/tensorflow:nightly-jupyter -v $HOME/mytensor:/tensor --name TensorFlow python:3.9 bash
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-v": executable file not found in $PATH: unknown.
I tried removing the Python version but still have the same problem. I searched extensively online and couldn't find an answer.
Basically I want to mount that directory, which is a git clone where I have my tensor files. At the same time, I want to run a Jupyter notebook where I can see the files and run them. With so many issues between the Apple M1 processor and TensorFlow, I thought going the Docker route would be better, but am I not surprised :)
Appreciate the help
The docker run command syntax is:
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
The image name tensorflow/tensorflow:nightly-jupyter should come after the options (-v, -p, --name, etc.) and before the command.
docker run -it --rm -p 8888:8888 -v $HOME/mytensor:/tensor --name TensorFlow tensorflow/tensorflow:nightly-jupyter bash
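Note that the trailing bash overrides the image's default command, which in the -jupyter tags normally starts the notebook server. If the goal is to get Jupyter rather than a shell, a likely variant is to drop bash entirely:
docker run -it --rm -p 8888:8888 -v $HOME/mytensor:/tensor --name TensorFlow tensorflow/tensorflow:nightly-jupyter
Then open http://localhost:8888 and use the token printed in the container log.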
To use TensorFlow serving, I had to use docker.
I downloaded the TensorFlow image using
docker pull tensorflow/serving
After that, I had to start tf serving and map my directories.
$ docker run -it -v D:\softwares\software saved file\GITHUB\potato-disease\Plant-disease-classification-using-Convolution-Neural-Networks:/tf_serving -p 8605:8605 --entrypoint /bin/bash tensorflow/serving
As a result I get this error:
Unable to find image 'saved:latest' locally
docker: Error response from daemon: pull access denied for saved, repository does not exist or may require 'docker login': denied: requested access to the resource is denied.
See 'docker run --help'.
The volume path contained spaces; putting quotes ("") around the path should solve the error.
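For example (untested, but using the exact path from the question), the quoted command would look like:
docker run -it -v "D:\softwares\software saved file\GITHUB\potato-disease\Plant-disease-classification-using-Convolution-Neural-Networks:/tf_serving" -p 8605:8605 --entrypoint /bin/bash tensorflow/serving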
In this case, I changed the name of the directory instead.
Answered by #ai2ys
Attempting to run the ray Docker image on an M1 results in:
$ docker run -p 10001:10001 -p 8265:8265 -p 33963:33963 rayproject/ray:latest
> WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
I've tried to use DOCKER_DEFAULT_PLATFORM=linux/amd64, but then nothing happens:
$ DOCKER_DEFAULT_PLATFORM=linux/amd64 docker run -p 10001:10001 -p 8265:8265 -p 33963:33963 rayproject/ray:latest
>
$ docker ps
> CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
The latest tag has digest 744f499644cc
The image has /bin/bash defined as the command to run when it starts. When you run it, you don't attach a TTY, so the container exits immediately.
I'm not familiar with the image, so I don't know the correct way to run it, and your port mappings confuse me a bit. But one way to run it is
docker run -it rayproject/ray:latest
That will put you at a prompt inside the container and you can explore the contents.
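If you also want to keep the port mappings and silence the platform warning, one sketch is to request the amd64 platform explicitly and attach a TTY so the default /bin/bash does not exit immediately:
docker run --platform linux/amd64 -it -p 10001:10001 -p 8265:8265 -p 33963:33963 rayproject/ray:latest
On an M1 this runs under emulation, so expect it to be noticeably slower than a native image.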
Trying to run this docker command:
nvidia-docker run -d -p 8888:8888 -e PASSWORD="123abcChangeThis" theano_secure start-notebook.sh
# Then open your browser at http://HOST:8888
taken from https://github.com/nouiz/Theano-Docker
returns error :
Error: image library/theano_secure:latest not found
It appears the theano_secure image is not currently available?
Searching for theano_secure :
$ nvidia-docker search theano_secure:latest
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
The output of this command is empty, so is the image not available?
If so, is there an alternative Theano Docker image from NVIDIA?
Update :
building from source :
docker build -t theano_secure -f Dockerfile.0.8.X.jupyter.cuda.secure .
returns :
Err http://developer.download.nvidia.com Release.gpg
Unable to connect to developer.download.nvidia.com:http: [IP: 184.24.98.231 80]
and :
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease
Manually checking the URLs, http://developer.download.nvidia.com and http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease are both unavailable. Should I build with an alternative Dockerfile?
Update 2 :
I think this error is occurring as http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease does not exist. However http://archive.ubuntu.com/ubuntu/dists/trusty/Release does exist.
Can the Docker build be modified to use http://archive.ubuntu.com/ubuntu/dists/trusty/Release instead of http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease?
OS version :
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.4 LTS
Release: 14.04
Codename: trusty
Update 3 :
"you are supposed to docker build first", before nvidia-docker run" I did try
docker build -t theano_secure -f Dockerfile.0.8.X.jupyter.cuda.secure .
which returns :
Err http://developer.download.nvidia.com Release.gpg Unable to connect to developer.download.nvidia.com:http: [IP: 184.24.98.231 80]
I can pull the image with docker pull kaixhin/theano, but it does not run via a Jupyter notebook in the same way as nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu, documented at https://hub.docker.com/r/tensorflow/tensorflow/. There does not appear to be a Docker Jupyter Theano container available.
How can I expose the kaixhin/theano Docker instance via a Jupyter notebook?
I tried nvidia-docker run -d -p 8893:8893 -v --name theano2 kaixhin/theano start-notebook.sh but receive this error:
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247:
starting container process caused \"exec: \\\"start-notebook.sh\\\": executable file not found in $PATH\"\n".
How can the kaixhin/theano Docker container be modified in order to expose it via a Jupyter notebook?
Error: image library/theano_secure:latest not found
Because theano_secure, unlike ubuntu or centos, is not an official repository on Docker Hub, you need to build it yourself.
Err http://developer.download.nvidia.com Release.gpg Unable to connect to developer.download.nvidia.com:http: [IP: 184.24.98.231 80]
Please check your internet connection first: telnet 184.24.98.231 80.
Maybe you are on a restricted network; try again from behind a proxy. You may want to take a look at how to build an image behind a proxy.
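As an illustration (the proxy address below is a placeholder; replace it with your own), the proxy settings can be passed into the build like this:
docker build --build-arg http_proxy=http://proxy.example.com:3128 --build-arg https_proxy=http://proxy.example.com:3128 -t theano_secure -f Dockerfile.0.8.X.jupyter.cuda.secure .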
From what I understand of the nouiz/Theano-Docker README, you are supposed to docker build first, before nvidia-docker run.
But since the build is tricky, I would instead try docker pull kaixhin/theano (from kaixhin/cuda-theano), which is much more recent (3 days ago) and based on the theano Dockerfile.
That image does rely on CUDA and needs to be run on an Ubuntu host OS with NVIDIA Docker installed. The driver requirements can be found on the NVIDIA Docker wiki.
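The start-notebook.sh error above happens because that script comes from the Theano-Docker setup referenced earlier and is not present in kaixhin/theano. One possible way to expose kaixhin/theano through Jupyter - a sketch, assuming pip is available inside the image and the installed Jupyter version accepts these flags - is to install and launch the notebook server yourself:
nvidia-docker run -it -p 8888:8888 kaixhin/theano bash -c "pip install jupyter && jupyter notebook --ip=0.0.0.0 --no-browser --allow-root"
Then open http://HOST:8888, as with the tensorflow/tensorflow image.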