Nvidia Theano docker image not available - python

Trying to run the docker command:
nvidia-docker run -d -p 8888:8888 -e PASSWORD="123abcChangeThis" theano_secure start-notebook.sh
# Then open your browser at http://HOST:8888
taken from https://github.com/nouiz/Theano-Docker
returns the error:
Error: image library/theano_secure:latest not found
It appears the theano_secure image is not currently available?
Searching for theano_secure:
$ nvidia-docker search theano_secure:latest
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
The command returns nothing, so the image is not available?
If so, is there an alternative Theano docker image from NVIDIA?
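As a quick check, Docker Hub can also be searched for community Theano images (a sketch; the results are whatever the registry returns, not an endorsement):
$ docker search theano
# optionally restrict to images with a few stars:
$ docker search --filter=stars=3 theano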
Update:
Building from source:
docker build -t theano_secure -f Dockerfile.0.8.X.jupyter.cuda.secure .
returns:
Err http://developer.download.nvidia.com Release.gpg
Unable to connect to developer.download.nvidia.com:http: [IP: 184.24.98.231 80]
and:
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease
Manually checking the URLs: http://developer.download.nvidia.com and http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease are both unavailable. Should I build with an alternative Dockerfile?
Update 2:
I think this error is occurring because http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease does not exist. However, http://archive.ubuntu.com/ubuntu/dists/trusty/Release does exist.
Can the docker build be modified to use http://archive.ubuntu.com/ubuntu/dists/trusty/Release instead of http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease ?
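For what it's worth, apt normally falls back from InRelease to Release plus Release.gpg on its own, so forcing Release may not be necessary; the failures above look like connectivity problems. A common Dockerfile-level workaround is to switch apt to a reachable mirror before the first apt-get update. A minimal sketch (the mirror hostname is only an example; substitute one that works from your network):
RUN sed -i 's|http://archive.ubuntu.com/ubuntu|http://us.archive.ubuntu.com/ubuntu|g' /etc/apt/sources.list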
OS version:
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.4 LTS
Release: 14.04
Codename: trusty
Update 3:
Regarding "you are supposed to docker build first, before nvidia-docker run": I did try
docker build -t theano_secure -f Dockerfile.0.8.X.jupyter.cuda.secure .
which returns:
Err http://developer.download.nvidia.com Release.gpg Unable to connect to developer.download.nvidia.com:http: [IP: 184.24.98.231 80]
I can pull an image with docker pull kaixhin/theano, but this does not run via Jupyter notebook in the same way as nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu, documented at https://hub.docker.com/r/tensorflow/tensorflow/ . There does not appear to be a Jupyter Theano docker container available.
How can the docker instance kaixhin/theano be exposed via a Jupyter notebook?
I tried: nvidia-docker run -d -p 8893:8893 -v --name theano2 kaixhin/theano start-notebook.sh but received the error:
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247:
starting container process caused \"exec: \\\"start-notebook.sh\\\": executable file not found in $PATH\"\n".
How should the kaixhin/theano docker container be modified in order to expose it via a Jupyter notebook?

Error: image library/theano_secure:latest not found
theano_secure, unlike ubuntu or centos, is not an official repository on Docker Hub, so you need to build it yourself.
Err http://developer.download.nvidia.com Release.gpg Unable to connect to developer.download.nvidia.com:http: [IP: 184.24.98.231 80]
Please check your internet connection first: telnet 184.24.98.231 80.
Maybe you are on a restricted network; try again from behind a proxy. You may want to take a look at how to build an image behind a proxy.
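For reference, docker build forwards the predefined http_proxy/https_proxy build arguments into the RUN steps, so a build behind a proxy can look like this (proxy.example.com:3128 is a placeholder for your actual proxy):
docker build -t theano_secure \
    --build-arg http_proxy=http://proxy.example.com:3128 \
    --build-arg https_proxy=http://proxy.example.com:3128 \
    -f Dockerfile.0.8.X.jupyter.cuda.secure .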

From what I understand of the nouiz/Theano-Docker README, you are supposed to docker build first, before nvidia-docker run.
But since the build is tricky, I would instead try docker pull kaixhin/theano (from kaixhin/cuda-theano/), which is much more recent (3 days ago) and based on the theano Dockerfile.
That image does rely on CUDA and needs to be run on an Ubuntu host OS with NVIDIA Docker installed. The driver requirements can be found on the NVIDIA Docker wiki.
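As for exposing kaixhin/theano through Jupyter: start-notebook.sh appears to come from the jupyter/docker-stacks images and is not shipped in kaixhin/theano, which is why the exec fails. A rough sketch of doing it by hand, assuming pip is available inside the image:
nvidia-docker run -it -p 8888:8888 kaixhin/theano bash
# then, inside the container:
pip install jupyter
jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser --allow-root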

Related

Docker run failing when mounting host dir inside a container

I am trying to mount a directory from the host into a container and at the same time run Jupyter from that directory. What am I doing wrong here that docker is complaining about a file not found?
docker run -it --rm -p 8888:8888 tensorflow/tensorflow:nightly-jupyter -v $HOME/mytensor:/tensor --name TensorFlow python:3.9 bash
WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
docker: Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "-v": executable file not found in $PATH: unknown.
I tried removing the Python version but still the same problem. I searched extensively online and couldn't get an answer.
Basically I want to mount that directory, which is a git clone where I have my tensor files. At the same time, I want to run a Jupyter notebook where I can see the files and run them. With so many issues between the Apple M1 processor and TensorFlow, I thought going the docker route would be better, but am I not surprised :)
Appreciate the help
The docker run command syntax is
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
The image name tensorflow/tensorflow:nightly-jupyter should come after the options (-v, -p, --name, etc.) and before the command:
docker run -it --rm -p 8888:8888 -v $HOME/mytensor:/tensor --name TensorFlow tensorflow/tensorflow:nightly-jupyter bash
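Note also that tensorflow/tensorflow:nightly-jupyter starts Jupyter as its default command, so if the goal is the notebook rather than a shell, drop the trailing bash (which overrides that default) and open http://localhost:8888 afterwards:
docker run -it --rm -p 8888:8888 -v $HOME/mytensor:/tensor --name TensorFlow tensorflow/tensorflow:nightly-jupyter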

AWS CDK Python docker throwing invalid bind mount error when trying to bundle code

I'm trying to deploy a python lambda function with dependencies and I'm getting an error from the docker daemon (on Centos linux) that there is an invalid bind mount spec. The error is "/path//to/my/code:/asset-input:z,delegated": invalid mode: delegated
The following is what my code looks like for the lambda function:
python_function = Function(
    self,
    id="PythonFunction",
    runtime=Runtime.PYTHON_3_9,
    handler="app.main.lambda_handler",
    timeout=Duration.seconds(20),
    code=Code.from_asset(
        path=str(python_function_path.resolve()),
        bundling=BundlingOptions(
            image=Runtime.PYTHON_3_9.bundling_image,
            command=[
                "bash",
                "-c",
                "pip install -r requirements.txt -t /asset-output && cp -au . /asset-output",
            ],
        ),
    ),
    memory_size=128,
    log_retention=RetentionDays.TWO_WEEKS,
)
This works just fine on my Mac, but trying to deploy from Centos is unsuccessful.
Your docker version is out of date. You need to be running at least Docker CE 17.04 (the version in which support for the delegated mount mode was added), though ideally you should install a much more recent version.
As stated in the comments, your current version is 1.13.1, which does not support this mode.
To resolve this, update your docker version.
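On CentOS that typically means replacing the distribution package with Docker CE from Docker's own repository. A sketch following Docker's documented CentOS steps (package names can vary by release):
sudo yum remove -y docker docker-client docker-common
sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker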

Docker and fcntl OSError Errno 22 Invalid argument

I've encountered a weird problem and I do not know how to proceed.
I have docker 18.09.2, build 6247962, on a VMware ESXi 6.5 virtual machine running Ubuntu 18.04, and docker 19.03.3, build a872fc2f86, on an Azure virtual machine running Ubuntu 18.04. I have the following little test script that I run on both hosts and in different docker containers:
#!/usr/bin/python3
import fcntl
import struct

image_path = 'foo.img'
f_obj = open(image_path, 'rb')
# FIGETBSZ ioctl (request number 2): ask the kernel for the block size
# of the file-system holding the file
binary_data = fcntl.ioctl(f_obj, 2, struct.pack('I', 0))
bsize = struct.unpack('I', binary_data)[0]
print('bsize={0}'.format(bsize))
exit(0)
I run "ps -ef >foo.img" to get the foo.img file. The output of the above script on both virtual machines is bsize=4096.
I have the following Dockerfile on both VMs:
FROM ubuntu:19.04
RUN apt-get update && \
    apt-get install -y \
        python \
        python3 \
        vim
WORKDIR /root
COPY testfcntl01.py foo.img ./
RUN chmod 755 testfcntl01.py
If I create a docker image with the above Dockerfile on the VM running docker 18.09.2, the above gives me the same results as the host.
If I create a docker image with the above Dockerfile on the VM running docker 19.03.3, the above gives me the following error:
root@d317404714a6:~# ./testfcntl01.py
Traceback (most recent call last):
File "./testfcntl01.py", line 9, in <module>
binary_data = fcntl.ioctl(f_obj, 2, struct.pack('I', 0))
OSError: [Errno 22] Invalid argument
I compared the docker directory structure, the daemon.json file, the logs, the "docker info" between the hosts. They look to be identical. I tried with a FROM ubuntu:18.04 as well as ubuntu:19.04. I've tried with python2 as well as python3. Same results.
I do not know why the fcntl fails only on a docker container on the Azure VM running docker 19.03.3. Did something change in docker between 18 and 19 that might have caused this? Is there some configuration change that I need to make to get this to work? Something else I'm missing?
Any help would be greatly appreciated.
Thank you
Lewis Muhlenkamp
UPDATE01:
I was following the steps here to prepare my own custom Ubuntu 18.04 VHD to use in Azure. I started with a generic install of Ubuntu Server 18.04 using ubuntu-18.04.3-live-server-amd64.iso, which I had just downloaded from Ubuntu's website. The test below works just fine on that freshly installed VM. I then finish the step
sudo apt-get install linux-generic-hwe-18.04 linux-cloud-tools-generic-hwe-18.04
and my test fails. So I believe there is some issue with these hardware enablement packages.
I had a pretty similar error and found that if the file is in a mounted volume, at least one owned by the host, it won't fail, i.e.:
docker run -it -v $PWD:/these_work ubuntu:18.04 bash
Files under the /these_work directory in the container worked; however, other files that were only accessible from within the container resulted in [Errno 22] Invalid argument.
I came here from a Yocto build error caused by a nearly identical method of accessing the block size within filemap.py:
# Get the block size of the host file-system for the image file by calling
# the FIGETBSZ ioctl (number 2).
try:
    binary_data = fcntl.ioctl(file_obj, 2, struct.pack('I', 0))
except OSError:
    raise IOError("Unable to determine block size")
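Applied to the test script from the question, the workaround looks like this. A sketch, assuming testfcntl01.py and foo.img are in the current directory and that python3 is present in the image:
docker run -it --rm -v "$PWD:/these_work" -w /these_work ubuntu:18.04 python3 ./testfcntl01.py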

How to use camera in docker container

I have Python code that periodically takes pictures (using OpenCV). I created an image to execute this code in a container. On Linux, it works perfectly when I execute the following command:
docker run -it --device=/dev/video0:/dev/video0 myImage
After a little searching, the equivalent command on Windows (10 Pro) would be
docker run --privileged -v /dev/bus/usb:/dev/bus/usb myImage
But I am getting an Error response from daemon: Mount denied error.
I already tried enabling the shared drives option in the Docker app, but the same error continued.
I also tried some similar commands, but the same error continued.
The command that generated a different error was:
docker run -it --device /dev/video0 myImage
generating the error:
  Error response from daemon: error gathering device information while adding custom device "C": not a device node.

Errno 13 while running docker-compose up

I'm building an application using Django and I wanted to add docker to this project.
I'm trying to run
sudo docker-compose up
Which gives me this output:
ERROR: .IOError: [Errno 13] Permission denied: './docker-compose.yml'
I checked the permissions using the GUI. Everything is fine.
I'm trying to run my app from a mounted drive. I also tested it on other drives. The only drive where this problem does not appear is my main drive running Ubuntu 18.04.
Looking forward to some answers.
I found a working solution.
Don't use the snap installation and do this instead (tested on Ubuntu 20.04):
apt install docker.io docker-compose
Then add the directory where you are running your docker-compose.yml using the AppArmor reconfigure tool:
$ sudo dpkg-reconfigure apparmor
You need to update your AppArmor configuration:
Snap Docker installs are heavily confined by AppArmor.
To diagnose whether this is really the case, check the last lines of the syslog after you have triggered the error:
dmesg | grep docker-compose
You should see a snap.docker entry that was denied:
kernel: [ ] audit: type=1400 audit(....):
apparmor="DENIED" operation="exec" profile="snap.docker.dockerd"
name="/bin/kmod" pid=7213 comm="exe" requested_mask="x"
denied_mask="x" fsuid=0 ouid=0
To correct this, go to AppArmor's tunables directory:
cd /etc/apparmor.d/tunables
and edit the HOMEDIRS variable in the 'home' file, for example from:
@{HOMEDIRS}=/home/
to
@{HOMEDIRS}=/home/ /media/aUser/Linux/
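Note that the edited tunable usually only takes effect once the AppArmor profiles are reloaded (or after a reboot); on a systemd-based Ubuntu something like this should do it:
sudo systemctl reload apparmor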
hope that helps.
All the other answers didn't work for me.
docker --version
Docker version 20.10.17, build 100c701
docker-compose -v
docker-compose version 1.29.2, build unknown
Instead of
docker-compose up
please use
docker compose up
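docker compose (with a space) is the Compose V2 plugin that ships with recent Docker releases; a quick way to check whether it is available:
docker compose version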
