Serverless - Numpy - Unable to find good bind path format - python

I've been beating on this for over a week, have been through all sorts of forum issues and posts, and cannot resolve it. I'm trying to package numpy in a function, building requirements individually (I have multiple functions with multiple requirements that I'd like to keep separate).
Environment:
Windows 10 Home
Docker Toolbox for Windows:
Client:
  Version:       18.03.0-ce
  API version:   1.37
  Go version:    go1.9.4
  Git commit:    0520e24302
  Built:         Fri Mar 23 08:31:36 2018
  OS/Arch:       windows/amd64
  Experimental:  false
  Orchestrator:  swarm
Server: Docker Engine - Community
  Engine:
    Version:       18.09.0
    API version:   1.39 (minimum version 1.12)
    Go version:    go1.10.4
    Git commit:    4d60db4
    Built:         Wed Nov 7 00:52:55 2018
    OS/Arch:       linux/amd64
    Experimental:  false
Serverless Version:
serverless version 6.4.1
serverless-python-requirements version 6.4.1
Directory Structure:
|-test
  |-env.yml
  |-serverless.yml
  |-Dockerfile
  |-functions
    |-f1
      |-index.py
      |-requirements.txt
      |-sub_function_1.py
      |-sub_function_2.py
    |-f2
      |-index.py
      |-requirements.txt
      |-sub_function_3.py
      |-sub_function_4.py
serverless.yml
service: test

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    zip: true
    dockerFile: Dockerfile
    dockerizePip: non-linux

provider:
  name: aws
  runtime: python3.6
  stage: dev
  environment: ${file(./env.yml):${opt:stage, self:provider.stage}.env}
  region: ${file(./env.yml):${opt:stage, self:provider.stage}.aws.region}
  profile: ${file(./env.yml):${opt:stage, self:provider.stage}.aws.profile}

package:
  individually: true

functions:
  f1:
    handler: index.handler
    module: functions/f1
  f2:
    handler: index.handler
    module: functions/f2
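For context, serverless.yml pulls stage-keyed values out of env.yml. A minimal env.yml sketch that would satisfy the references above (all keys and values here are illustrative placeholders, not from the original post):
dev:
  env:
    EXAMPLE_VAR: example-value
  aws:
    region: us-west-2
    profile: default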
I have my project files in C:\Serverless\test. I run npm init, followed by npm i --save serverless-python-requirements, accepting all defaults. I then get the errors below on sls deploy -v, even though I've added C:\ to Shared Folders on the running default VM in VirtualBox and selected auto-mount and permanent.
If I comment out both dockerizePip and dockerFile, I get the following, as expected based on here and other SO posts:
Serverless: Invoke invoke
{
"errorMessage": "Unable to import module 'index'"
}
If I comment out only dockerFile, I get:
Serverless: Docker Image: lambci/lambda:build-python3.6
Error --------------------------------------------------
error during connect: Get https://XXXXXX/v1.37/version: dial tcp
XXXXXXXXXX: connectex: A connection attempt failed because the
connected party did not properly respond after a period of time, or
established connection failed because connected host has failed to
respond.
at dockerCommand (C:\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:20:11)
at getBindPath (C:\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:100:3)
With the Dockerfile:
# AWS Lambda execution environment is based on Amazon Linux 1
FROM amazonlinux:1
# Install Python 3.6
RUN yum -y install python36 python36-pip
# Install your dependencies
RUN curl -s https://bootstrap.pypa.io/get-pip.py | python3
RUN yum -y install python3-devel mysql-devel gcc
# Set the same WORKDIR as default image
RUN mkdir /var/task
WORKDIR /var/task
I get:
Serverless: Building custom docker image from Dockerfile...
Serverless: Docker Image: sls-py-reqs-custom
Error --------------------------------------------------
Unable to find good bind path format
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Stack Trace --------------------------------------------
Error: Unable to find good bind path format
at getBindPath (C:\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:142:9)
at installRequirements (C:\Serverless\test\node_modules\serverless-python-requirements\lib\pip.js:152:7)
at installRequirementsIfNeeded (C:\Serverless\test\node_modules\serverless-python-requirements\lib\pip.js:451:3)
If I move my project to C:\Users\, I get this instead:
Serverless: Docker Image: sls-py-reqs-custom
Serverless: Trying bindPath /c/Users/Serverless/test/.serverless/requirements (run,--rm,-v,/c/Users/Serverless/test/.serverless/requirements:/test,alpine,ls,/test/requirements.txt)
Serverless: /test/requirements.txt
Error --------------------------------------------------
docker: Error response from daemon: create "/c/Users/Serverless/test/.serverless/requirements": "\"/c/Users/Serverless/test/.serverless/requirements\"" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Stack Trace --------------------------------------------
Error: docker: Error response from daemon: create "/c/Users/Serverless/test/.serverless/requirements": "\"/c/Users/Serverless/test/.serverless/requirements\"" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path.
See 'docker run --help'.
at dockerCommand (C:\Users\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:20:11)
at getDockerUid (C:\Users\Serverless\test\node_modules\serverless-python-requirements\lib\docker.js:162:14)
I've seen the Makefile-style recommendation from @brianz here, but I'm not sure how to adapt it to this (Makefiles are not my strong suit). I'm a bit at a loss as to what to do next and advice would be greatly appreciated. TIA.

I was unable to make the plugin work, but I found a better solution anyhow - Lambda Layers. This is a bonus because it reduces the size of the lambda and allows code/file reuse. There are pre-built lambda layers for numpy and scipy that you can use, but I built my own to teach myself how it all works. Here's how I made it work:
Create a layer package:
Spin up an EC2 instance, an Ubuntu box, or any other Linux machine - this is needed so the runtime binaries are compiled for the Lambda (Linux) environment
Make a dependencies package zip - you must use the directory structure python/lib/python3.6/site-packages for Python to find the packages at runtime
mkdir -p tmpdir/python/lib/python3.6/site-packages
pip install -r requirements.txt --no-deps -t tmpdir/python/lib/python3.6/site-packages
cd tmpdir
zip -r ../py_dependencies.zip .
cd ..
rm -r tmpdir
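For reference, the requirements.txt used here just pins the dependency to install into the layer; for example (the exact version is illustrative):
numpy==1.15.4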
Push layer zip to AWS - requires latest awscli
sudo pip install awscli --upgrade --user
sudo aws lambda publish-layer-version \
--layer-name py_dependencies \
--description "Python 3.6 dependencies [numpy==1.15.4]" \
--license-info "MIT" \
--compatible-runtimes python3.6 \
--zip-file fileb://py_dependencies.zip \
--profile python_dev_serverless
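To confirm the layer was published (and to look up the version ARN later), you can list its versions with the AWS CLI:
aws lambda list-layer-versions --layer-name py_dependencies --profile python_dev_serverless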
To use it in any function that requires numpy, just reference the ARN that is shown in the console or in the output of the upload above:
f1:
  handler: index.handler_f_use_numpy
  include:
    - functions/f_use_numpy.py
  layers:
    - arn:aws:lambda:us-west-2:XXXXX:layer:py_dependencies:1
As an added bonus, you can push common files, like constants, to a layer as well. Here's how I did it so the same code works for local testing on Windows and on the Lambda:
import json
import platform

# Set common path: layer contents are mounted under /opt on Lambda (Linux)
COMMON_PATH = "../../layers/common/"
if platform.system() == "Linux":
    COMMON_PATH = "/opt/common/"

def handler_common(event, context):
    # Read from a constants.json file
    with open(COMMON_PATH + 'constants.json') as f:
        return json.load(f)
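For that to work, the common layer zip must contain constants.json under a top-level common/ directory, since layer contents are extracted under /opt at runtime. A minimal sketch of building such a layer (file contents and zip name are placeholders):
mkdir -p common
echo '{"example_constant": 42}' > common/constants.json
zip -r common_layer.zip common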

When I got the same issue, I opened Docker, went to Settings > Shared Drives, chose Reset credentials, then applied my changes, and this cleared the error.

I fixed this issue by temporarily disabling Windows Firewall.

Related

Error deploying Python package to AWS lambda using Serverless framework

I've followed this tutorial from the serverless website to try to deploy my first AWS Lambda function with a package dependency.
The error I get is STDERR: ERROR: Invalid requirement: '��' (from line 1 of /var/task/requirements.txt). I haven't been able to find a solution using Google. Having gone through the tutorial various times, the same error keeps recurring, sometimes as ERROR: Invalid requirement: '\x00' or ERROR: Invalid requirement: '\x00\x01' or something similar. It seems to me that the serverless-python-requirements plugin is writing its own requirements file with the wrong encoding, but I just don't know.
My requirements.txt when I have no dependencies is empty, which then translates to a serverless-generated .serverless\requirements.txt:
��
When my requirements.txt is
numpy==1.19.2
this translates to a .serverless/requirements.txt as follows:
��n u m p y = = 1 . 1 9 . 2
I have gone through each step of the tutorial, and have not run into any problems until I run serverless deploy. This is the stack trace I get:
Serverless: Invoke deploy
Serverless: Invoke package
Serverless: Invoke aws:common:validate
Serverless: Invoke aws:common:cleanupTempDir
Serverless: Generated requirements from C:\Users\...\Documents\Serverless\my\requirements.txt in C:\Users\...\Documents\Serverless\my\.serverless\requirements.txt...
Serverless: Installing requirements from C:\Users\...\AppData\Local\UnitedIncome\serverless-python-requirements\Cache\943a69dded6372ca37aaaacaf21570a18766193003231d5130a067451373395d_slspyc\requirements.txt ...
Serverless: Docker Image: lambci/lambda:build-python3.8
Serverless: Trying bindPath C:/Users/.../AppData/Local/UnitedIncome/serverless-python-requirements/Cache/943a69dded6372ca37aaaacaf21570a18766193003231d5130a067451373395d_slspyc (run,--rm,-v,C:/Users/.../AppData/Local/UnitedIncome/serverless-python-requirements/Cache/943a69dded6372ca37aaaacaf21570a18766193003231d5130a067451373395d_slspyc:/test,alpine,ls,/test/requirements.txt)
Serverless: /test/requirements.txt
Serverless: Using download cache directory C:\Users\...\AppData\Local\UnitedIncome\serverless-python-requirements\Cache\downloadCacheslspyc
Serverless: Trying bindPath C:/Users/.../AppData/Local/UnitedIncome/serverless-python-requirements/Cache/downloadCacheslspyc (run,--rm,-v,C:/Users/.../AppData/Local/UnitedIncome/serverless-python-requirements/Cache/downloadCacheslspyc:/test,alpine,ls,/test/requirements.txt)
Serverless: /test/requirements.txt
Serverless: Running docker run --rm -v C\:/Users/.../AppData/Local/UnitedIncome/serverless-python-requirements/Cache/4870b1f009d955f0e7d5138512661e3ec4364d6a9c1e3c6cadc9d51a7e3b8dd2_slspyc\:/var/task\:z -v C\:/Users/.../AppData/Local/UnitedIncome/serverless-python-requirements/Cache/downloadCacheslspyc\:/var/useDownloadCache\:z -u 0 lambci/lambda\:build-python3.8 python -m pip install -t /var/task/ -r /var/task/requirements.txt --cache-dir /var/useDownloadCache...
Error --------------------------------------------------
Error: STDOUT:
STDERR: ERROR: Invalid requirement: '��' (from line 1 of /var/task/requirements.txt)
at C:\Users\...\Documents\Serverless\my\node_modules\serverless-python-requirements\lib\pip.js:325:13
at Array.forEach (<anonymous>)
at installRequirements (C:\Users\...\Documents\Serverless\my\node_modules\serverless-python-requirements\lib\pip.js:312:28)
at installRequirementsIfNeeded (C:\Users\...\Documents\Serverless\my\node_modules\serverless-python-requirements\lib\pip.js:556:3)
at ServerlessPythonRequirements.installAllRequirements (C:\Users\...\Documents\Serverless\my\node_modules\serverless-python-requirements\lib\pip.js:635:29)
From previous event:
at PluginManager.invoke (C:\Users\...\AppData\Roaming\npm\node_modules\serverless\lib\classes\PluginManager.js:498:22)
at PluginManager.spawn (C:\Users\...\AppData\Roaming\npm\node_modules\serverless\lib\classes\PluginManager.js:518:17)
at C:\Users\...\AppData\Roaming\npm\node_modules\serverless\lib\plugins\deploy\deploy.js:122:50
From previous event:
at Object.before:deploy:deploy [as hook] (C:\Users\...\AppData\Roaming\npm\node_modules\serverless\lib\plugins\deploy\deploy.js:102:22)
at C:\Users\...\AppData\Roaming\npm\node_modules\serverless\lib\classes\PluginManager.js:498:55
From previous event:
at PluginManager.invoke (C:\Users\...\AppData\Roaming\npm\node_modules\serverless\lib\classes\PluginManager.js:498:22)
at C:\Users\...\AppData\Roaming\npm\node_modules\serverless\lib\classes\PluginManager.js:533:24
From previous event:
at PluginManager.run (C:\Users\...\AppData\Roaming\npm\node_modules\serverless\lib\classes\PluginManager.js:533:8)
at C:\Users\...\AppData\Roaming\npm\node_modules\serverless\lib\Serverless.js:168:33
From previous event:
at Serverless.run (C:\Users\...\AppData\Roaming\npm\node_modules\serverless\lib\Serverless.js:155:74)
at C:\Users\...\AppData\Roaming\npm\node_modules\serverless\scripts\serverless.js:50:26
at processImmediate (internal/timers.js:456:21)
From previous event:
at Object.<anonymous> (C:\Users\...\AppData\Roaming\npm\node_modules\serverless\scripts\serverless.js:50:4)
at Module._compile (internal/modules/cjs/loader.js:1137:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1157:10)
at Module.load (internal/modules/cjs/loader.js:985:32)
at Function.Module._load (internal/modules/cjs/loader.js:878:14)
at Module.require (internal/modules/cjs/loader.js:1025:19)
at require (internal/modules/cjs/helpers.js:72:18)
at Object.<anonymous> (C:\Users\...\AppData\Roaming\npm\node_modules\serverless\bin\serverless.js:47:1)
at Module._compile (internal/modules/cjs/loader.js:1137:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1157:10)
at Module.load (internal/modules/cjs/loader.js:985:32)
at Function.Module._load (internal/modules/cjs/loader.js:878:14)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
at internal/main/run_main_module.js:17:47
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: win32
Node Version: 12.18.4
Framework Version: 2.1.1
Plugin Version: 4.0.4
SDK Version: 2.3.2
Components Version: 3.1.3
Fix:
Converting the requirements.txt to UTF-8 fixes this issue (tested on Linux). Converting it to ASCII also works.
This is a problem with the requirements.txt file's encoding.
Detailed explanation
This is an open issue as of 10 March 2022.
The serverless plugin rewrites the file into the .serverless directory and assumes UTF-8 encoding when it reads the file.
The problem occurs when serverless reads a file saved with another encoding and dumps it into the .serverless folder unchanged; the '��' and the spaced-out 'n u m p y' are the signature of a UTF-16 file with a byte-order mark, which several Windows tools produce by default.
Serverless-python-requirements issue
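A minimal sketch of the conversion, assuming the file is UTF-16 with a BOM as the output above suggests (run it next to requirements.txt):
import codecs

# Read the raw bytes so we can inspect the byte-order mark shown in the error ('��').
raw = open('requirements.txt', 'rb').read()
if raw.startswith(codecs.BOM_UTF16_LE) or raw.startswith(codecs.BOM_UTF16_BE):
    text = raw.decode('utf-16')      # decoding as utf-16 strips the BOM
else:
    text = raw.decode('utf-8-sig')   # already UTF-8; drop a UTF-8 BOM if present

# Rewrite as plain UTF-8 so the plugin (and pip) can parse it.
open('requirements.txt', 'w', encoding='utf-8', newline='\n').write(text)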
Adding dockerizePip under pythonRequirements makes this error go away:
custom:
  wsgi:
    app: handler.app
    pythonBin: python # Some systems with Python3 may require this
    packRequirements: false
  pythonRequirements:
    dockerizePip: non-linux
Alternatively, delete requirements.txt, run serverless deploy, then restore requirements.txt and run serverless deploy again.

How to create a docker build and Dockerfile after pulling from Docker Hub (Error response from daemon: unexpected error reading Dockerfile)

I am new to Docker and I am unable to run the code I pulled from Docker Hub.
1. Windows 10 Pro, 64-bit
2. Able to run hello-world
3. C:\Program Files\Docker\Docker\Dockerfile>docker version
Client: Docker Engine - Community
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:22:37 2019
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 633a0ea
Built: Wed Nov 13 07:29:19 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683
4. C:\Program Files\Docker\Docker\Dockerfile>docker push mydockerhub/shoprite:latest
pushed successfully
5. PROBLEM:
C:\Program Files\Docker\newfile>docker build -t mydockerhub/shoprite:latest .
Sending build context to Docker daemon 4.096kB
Error response from daemon: unexpected error reading Dockerfile: read /var/lib/docker/tmp/docker-builder712136353/Dockerfile: is a directory
6. Error:
The commands I used are here: https://docs.google.com/document/d/1aRt71Vtx13IbGLlcMzH-CyESvudLrCAnHg6fG9Zx3eU/edit
Q1. How do I create a Dockerfile and run docker build?
You can already see the problem at step 3: you created Dockerfile as a directory, but it must be a file. Delete the directory and make sure you have a file called Dockerfile in the directory where you run docker build, and make sure that Dockerfile is correctly edited.
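For illustration, a minimal Dockerfile sketch (the contents are placeholders; the point is that Dockerfile is a plain text file, not a directory):
FROM alpine:latest
COPY . /app
WORKDIR /app
CMD ["echo", "hello from the image"]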
If your Dockerfile is located in some other directory, first change to that directory, e.g. cd ./onedrive/desktop/newfolder/docker/.
Then run docker build . -t <your imagename>:<your image tag>
Here . is the build context: all the files in the local directory get tarred and sent to the Docker daemon. -t gives your image a name and a tag, separated by :.
You then need to use the docker run command, i.e. docker run <options> imagename.
It gives you a writable container layer over the specified image, and then starts it using the specified command.
Use docker ps to see if you have created a container with the image name you used. If not, there is probably a problem with how you are doing it or with your PC environment.
P.S. You need to understand that Dockerfile is a file, not a directory. So if you see that your current directory is something/something/docker/dockerfile, you made a mistake; the Dockerfile goes inside the docker folder.
Good luck
If you are using docker-compose.yml, just specify the Dockerfile name:
Wrong
dockerfile: ./
Correct
dockerfile: ./Dockerfile
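For context, a minimal docker-compose.yml sketch showing where that key lives (the service name is a placeholder):
version: "3"
services:
  app:
    build:
      context: .
      dockerfile: ./Dockerfile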

Local GitLab runner freezes while Shared GitLab.com runner succeeds

EDIT: As Rekovni pointed out, using a GitLab runner with Docker on a Windows machine is a problem. Installing the runner in a Linux-based virtual machine solved the problem.
I am developing a Python program using a conda environment. It is hosted on GitLab.com and I am using GitLab-CI to generate the documentation.
I configured the following .gitlab-ci.yml file for it:
image: continuumio/miniconda3:latest
before_script:
# Update conda and create environment, which is then activated.
- conda update -vvv -y -c conda-forge conda
- conda env create -f helpers/NAME.yml
- source activate NAME
# Correct installation.
- conda install -q -y gsl=2.2.1
pages:
script:
# Install make.
- apt-get update
- apt-get install -q -y build-essential
# Install Spinx-related packages.
- conda install -q -y sphinx sphinx_rtd_theme
# Create documentation.
- cd REPO/doc
- sphinx-apidoc -o source/ ../REPO --force --separate
- make html
# Transfer documentation to public pages folder.
- mv build/html/ ../../public/
artifacts:
paths:
- public
# only:
# - master
Running this script with a shared GitLab runner that is supplied with GitLab.com works and the documentation is generated and placed in the public folder.
For future unit tests (which take much longer), I want to provide a local runner on a Win 10 machine in my network. For this, I installed the gitlab-runner.exe and Docker Desktop. I successfully registered the runner with the project on GitLab.com.
The runner is using the following config.toml configuration file:
concurrent = 1
check_interval = 0
log_level = "info"
[session_server]
session_timeout = 1800
[[runners]]
name = "NAME"
url = "https://gitlab.com"
token = "TOKEN"
executor = "docker"
[runners.custom_build_dir]
[runners.docker]
tls_verify = false
image = "alpine:latest"
privileged = false
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["/cache"]
shm_size = 0
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
The problem is now that the local runner freezes during the execution of the above script without producing any error messages and I am at a loss on how to debug it. What I have is
The log of the script that is shown on the Job page on GitLab.com; and
The console output of the gitlab-runner.exe on the local machine.
Regarding 1., I see
Running with gitlab-runner 11.10.0 (3001a600)
...
Checking out COMMIT_HASH as BRANCH_NAME...
...
$ conda update -vvv -y -c conda-forge conda
DEBUG conda.gateways.logging:set_verbosity(148): verbosity set to 3
...
...
...
TRACE conda.gateways.disk.update:rename(52): renaming /opt/conda/share/doc/openssl/html/man3/OSSL_STORE_LOADER_new.html => /opt/conda/share/doc/openssl/html/man3/OSSL_STORE_LOADER_new.html.c~
TRACE conda.core.path_actions:execute(1041): renaming share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_close.html => share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_close.html.c~
TRACE conda.gateways.disk.update:rename(52): renaming /opt/conda/share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_close.html => /opt/conda/share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_close.html.c~
TRACE conda.core.path_actions:execute(1041): renaming share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_ctrl.html => share/doc/openssl/html/man3/OSSL_STORE_LOADER_set_ctrl.html.c~
where it abruptly stops without reaching the - conda env create -f helpers/NAME.yml line.
Regarding 2., I see
C:\GitLab-Runner>gitlab-runner.exe --debug run
Runtime platform arch=amd64 os=windows pid=14116 revision=3001a600 version=11.10.0
Starting multi-runner from C:\GitLab-Runner\config.toml ... builds=0
Checking runtime mode GOOS=windows uid=-1
Configuration loaded builds=0
...
Feeding runners to channel builds=0
Checking for jobs... nothing runner=TOKEN
Feeding runners to channel builds=0
Checking for jobs... received job=203033130 repo_url=REPO_URL.git runner=TOKEN
...
Attaching to container HASH ... job=203033130 project=6249897 runner=TOKEN
Starting container HASH ... job=203033130 project=6249897 runner=TOKEN
Waiting for attach to finish HASH ... job=203033130 project=6249897 runner=TOKEN
Waiting for container HASH ... job=203033130 project=6249897 runner=TOKEN
Appending trace to coordinator... ok code=202 job=203033130 job-log=0-10348 job-status=running runner=TOKEN sent-log=1801-10347 status=202 Accepted
Appending trace to coordinator... ok code=202 job=203033130 job-log=0-19445 job-status=running runner=TOKEN sent-log=10348-19444 status=202 Accepted
...
Appending trace to coordinator... ok code=202 job=203033130 job-log=0-933150 job-status=running runner=TOKEN sent-log=241860-933149 status=202 Accepted
Submitting job to coordinator... ok code=200 job=203033130 job-status= runner=TOKEN
Submitting job to coordinator... ok code=200 job=203033130 job-status= runner=TOKEN
where the switch from Appending trace to coordinator to Submitting job to coordinator seems to happen around the time the job gets stuck.
After this, 1. is not updated with any further information and 2. is stuck in a Submitting job to coordinator loop.
Does anyone know:
What the reason for the failure with a local runner could be (when the same script works with a shared runner)?
What I could do to debug this problem?
Thanks and all the best,
Thomas
GitLab CI doesn't currently offer a solution for using its runner with Docker in a Windows environment; however, there is an epic at the moment which is tracking progress on this.
In one of the issues of the epic, a contributor has managed to get a working version of a gitlab-runner which uses Docker for Windows; more details can be found here.
A more common (and potentially easier) way of using Docker in a Windows environment, would be to install the gitlab-runner as a Shell runner, and call the Docker commands manually to run your tests.
Conversely, if you just want to keep using the same CI script, you could install a Linux VM on your Windows 10 machine, and have that host the docker runner!
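As a rough sketch of the Shell-runner approach (the job and tag names are placeholders, and it assumes the runner's shell is cmd, hence %CD%; the script simply drives Docker directly instead of using the docker executor):
pages:
  tags:
    - windows-shell
  script:
    # The shell executor runs this on the Windows host, so Docker is invoked explicitly.
    - docker run --rm -v "%CD%:/build" -w /build continuumio/miniconda3:latest bash -c "conda env create -f helpers/NAME.yml"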

Errno 13 while running docker-compose up

I'm building an application using Django and I wanted to add Docker to the project.
I'm trying to run
sudo docker-compose up
Which gives me this output:
ERROR: .IOError: [Errno 13] Permission denied: './docker-compose.yml'
I checked the permissions using GUI. Everything is fine.
I'm trying to run my app from a mounted drive. I also tested it on other drives; the only drive where this problem does not appear is my main drive running Ubuntu 18.04.
Looking forward to some answers
I found a working solution.
Don't use the snap installation; install via apt instead (tested on Ubuntu 20.04):
apt install docker.io docker-compose
It also worked after adding the directory where I run my docker-compose.yml via the AppArmor reconfigure tool:
$ sudo dpkg-reconfigure apparmor
You need to update your AppArmor configuration:
Snap Dockers are heavily controlled by AppArmor.
To diagnose whether this is really the case, check the last lines of the syslog after you trigger the error:
dmesg | grep docker-compose
You should see a snap.docker that was denied:
kernel: [ ] audit: type=1400 audit(....):
apparmor="DENIED" operation="exec" profile="snap.docker.dockerd"
name="/bin/kmod" pid=7213 comm="exe" requested_mask="x"
denied_mask="x" fsuid=0 ouid=0
To correct this, just go to AppArmor's tunables:
cd /etc/apparmor.d/tunables
And edit the HOMEDIRS variable in the 'home' file, for example from:
#{HOMEDIRS}=/home/
to
#{HOMEDIRS}=/home/ /media/aUser/Linux/
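As a hedged extra step: after editing the tunables, the AppArmor profiles typically need to be reloaded before the change takes effect, e.g.:
sudo systemctl reload apparmor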
Hope that helps.
None of the other answers worked for me.
docker --version
Docker version 20.10.17, build 100c701
docker-compose -v
docker-compose version 1.29.2, build unknown
Instead of
docker-compose up
use
docker compose up
(docker compose, with a space, is the Compose V2 plugin bundled with recent Docker releases, while docker-compose 1.29.2 is the older standalone implementation).

Nvidia Theano docker image not available

Trying to run this docker command:
nvidia-docker run -d -p 8888:8888 -e PASSWORD="123abcChangeThis" theano_secure start-notebook.sh
# Then open your browser at http://HOST:8888
taken from https://github.com/nouiz/Theano-Docker
returns the error:
Error: image library/theano_secure:latest not found
It appears the theano_secure image is not currently available?
Searching for theano_secure:
$ nvidia-docker search theano_secure:latest
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
The return of this command is empty, so the image is not available?
If so, is there an alternative Theano docker image from NVIDIA?
Update :
building from source :
docker build -t theano_secure -f Dockerfile.0.8.X.jupyter.cuda.secure .
returns :
Err http://developer.download.nvidia.com Release.gpg
Unable to connect to developer.download.nvidia.com:http: [IP: 184.24.98.231 80]
and :
W: Failed to fetch http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease
Manually checking the URLs: http://developer.download.nvidia.com and http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease are both unavailable. Should I build with an alternative docker file?
Update 2 :
I think this error is occurring because http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease does not exist, while http://archive.ubuntu.com/ubuntu/dists/trusty/Release does exist.
Can the docker build be modified to use http://archive.ubuntu.com/ubuntu/dists/trusty/Release instead of http://archive.ubuntu.com/ubuntu/dists/trusty/InRelease?
OS version :
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.4 LTS
Release: 14.04
Codename: trusty
Update 3 :
"you are supposed to docker build first", before nvidia-docker run" I did try
docker build -t theano_secure -f Dockerfile.0.8.X.jupyter.cuda.secure .
which returns :
Err http://developer.download.nvidia.com Release.gpg Unable to connect to developer.download.nvidia.com:http: [IP: 184.24.98.231 80]
I can pull the image with docker pull kaixhin/theano, but this does not run via Jupyter notebook in the same way as nvidia-docker run -it -p 8888:8888 tensorflow/tensorflow:latest-gpu documented at https://hub.docker.com/r/tensorflow/tensorflow/. There does not appear to be a Jupyter Theano container available.
How do I expose the docker instance kaixhin/theano via a Jupyter notebook?
I tried : nvidia-docker run -d -p 8893:8893 -v --name theano2 kaixhin/theano start-notebook.sh but receive error :
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247:
starting container process caused \"exec: \\\"start-notebook.sh\\\": executable file not found in $PATH\"\n".
How can the kaixhin/theano docker container be modified to expose it via a Jupyter notebook?
Error: image library/theano_secure:latest not found
Because theano_secure, unlike ubuntu or centos, is not an official repository on Docker Hub, you need to build it yourself.
Err http://developer.download.nvidia.com Release.gpg Unable to connect to developer.download.nvidia.com:http: [IP: 184.24.98.231 80]
Please check your internet connection first: telnet 184.24.98.231 80.
Maybe you are on a restricted network; try again behind a proxy. You may want to take a look at how to build an image behind a proxy.
From what I understand of the nouiz/Theano-Docker README, you are supposed to run docker build first, before nvidia-docker run.
But since the build is tricky, I would instead try docker pull kaixhin/theano (from kaixhin/cuda-theano/), which is much more recent (3 days ago) and is based on the theano Dockerfile.
That image does rely on CUDA and needs to be run on an Ubuntu host OS with NVIDIA Docker installed. The driver requirements can be found on the NVIDIA Docker wiki.
