I have a small Python application that I'd like to run on Linux in Docker (using boot2docker for now). The application reads some data from my Windows network share; this works fine on Windows using the network path, but fails on Linux. After some research I figured out how to mount a Windows share on Ubuntu. I'm attempting to set up the share in the Dockerfile, but have been unsuccessful so far. Below is my current approach, which fails with "Operation not permitted" at the mount command during the build.
#Sample Python functionality
import os
folders = os.listdir(r"\\myshare\folder name")
#Dockerfile (base image assumed)
FROM ubuntu
RUN apt-get update && apt-get install -y cifs-utils
RUN mkdir -p "//myshare/folder name"
RUN mount -t cifs "//myshare/folder name" "//myshare/folder name" -o username=MyUserName,password=MyPassword
#Error at mount during docker build
#"mount: error(1): Operation not permitted"
#Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Edit
Not a duplicate of Mount SMB/CIFS share within a Docker container. The accepted solution there applies the fix at docker run; --privileged doesn't help me if the docker build step is what fails.
Q: What is the correct way to mount a Windows network share inside a Docker container?
Docker only abstracts away applications; mounting filesystems happens at the kernel level, so it can't be restricted to happen only inside the container. When using --privileged, the mount happens on the host and is then passed through into the container.
Really the only way to do this is to have the host mount the share (put it in /etc/fstab on a Linux machine, or map it to a drive letter on a Windows machine), then make it available to the container as you would any other volume.
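As a sketch (the share name and credentials come from the question; the /mnt/myshare mount point and the my-python-image name are made up for illustration), the host-side setup could look like:

# /etc/fstab on the Docker host; spaces in fstab paths are escaped as \040
//myshare/folder\040name  /mnt/myshare  cifs  username=MyUserName,password=MyPassword  0  0

# mount it on the host, then pass it through to the container as a volume
sudo mkdir -p /mnt/myshare
sudo mount /mnt/myshare
docker run -v /mnt/myshare:/mnt/myshare my-python-image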
Also bear in mind that mkdir -p "//myshare/folder name" is a semi-invalid path - most shells will condense the // into / so you may not have access to a folder called /myshare/folder name since the root directory of a Linux system is not normally where you put files. You might have better success using /mnt/myshare-foldername or similar instead.
An alternative could be to find a way to access the files without mounting them at all. For example, you could use the smbclient command to transfer files between the Docker container and the SMB/CIFS share; this works inside a Docker container, much as wget or curl are commonly used in Dockerfiles to download files.
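For example, a sketch of fetching a single file into the container without any mount (the filename is hypothetical; the share and credentials come from the question):

# install the client (Debian/Ubuntu)
apt-get update && apt-get install -y smbclient
# copy one file from the share into the container
smbclient "//myshare/folder name" -U MyUserName%MyPassword -c 'get "some-file.csv" /tmp/some-file.csv'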
You are correct that you can only use --privileged during docker run. You cannot perform mount operations without --privileged, ergo, you cannot perform mount operations during the docker build process.
This is probably by design: the goal is that a Dockerfile is largely self-contained; anyone should be able to use your Dockerfile and other contents in the same directory to generate the same image; linking things to an external mount would violate this restriction.
However, your question says that you have an application that needs to read some data from a share so it's not clear why you need the share mounted during docker build. It sounds like you would be better off building an image that will launch your application as part of docker run, and possibly use docker volumes to access the share rather than attempting to mount it inside the container.
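A sketch of that shape, assuming the host already mounts the share at /mnt/myshare and an entry point app.py (both names made up here):

# Dockerfile: self-contained image, nothing mounted at build time
FROM python:3
WORKDIR /usr/src/app
COPY . .
CMD [ "python", "./app.py" ]

# at docker run time, pass the host-mounted share through as a volume
docker run -v /mnt/myshare:/data my-python-image

The Python code would then read os.listdir("/data") instead of the Windows UNC path.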
Related
I have a Python script inside a container that needs to continuously read changing values inside a file located on the host file system. Using a volume to mount the file won't work because that only captures a snapshot of the values in the file at that moment. I know it's possible since the node_exporter container is able to read files on the host filesystem using custom methods in Golang. Does anyone know a general method to accomplish this?
I have a Python script [...] that needs to continuously read changing values inside a file located on the host file system.
Just run it. Most Linux systems have Python preinstalled. You don't need Docker here. You can use tools like Python virtual environments if your application has Python library dependencies that need to be installed.
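For example, assuming the dependencies are listed in a requirements.txt next to the script (script.py stands in for your entry point):

python3 -m venv venv
. venv/bin/activate
pip install -r requirements.txt
python script.py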
Is it possible to get a Docker container to read a file from host file system not using Volumes?
You need some kind of mount, perhaps a bind mount; docker run -v /host/path:/container/path image-name. Make sure to not overwrite the application's code in the image when you do this, since the mount will completely hide anything in the underlying image.
Without a bind mount, you can't access the host filesystem at all. This filesystem isolation is a key feature of Docker.
...the [Prometheus] node_exporter container...
Reading from the GitHub link in the question, "It's not recommended to deploy it as a Docker container because it requires access to the host system." The docker run example there uses a bind mount to access the entire host filesystem, circumventing Docker's filesystem isolation.
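For reference, the run command documented there looked roughly like this at the time (the exact flags may have changed since):

docker run -d --net="host" --pid="host" -v "/:/host:ro,rslave" quay.io/prometheus/node-exporter --path.rootfs=/host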
I'm trying to create a container to run a program. I'm using a pre-configured image, and now I need to run the program. However, it's a machine learning program and I need a dataset from my computer to run it.
The file is too large to be copied to the container. It would be best if the program running in the container searched the dataset in a local directory of my computer, but I don't know how I can do this.
Is there any way to set up this reference with some docker command? Or in the Dockerfile?
Yes, you can do this. What you are describing is a bind mount. See https://docs.docker.com/storage/bind-mounts/ for documentation on the subject.
For example, if I want to mount a folder from my home directory into /mnt/mydata in a container, I can do:
docker run -v /Users/andy/mydata:/mnt/mydata myimage
Now, /mnt/mydata inside the container will have access to /Users/andy/mydata on my host.
Keep in mind, if you are using Docker for Mac or Docker for Windows there are specific directories on the host that are allowed by default:
If you are using Docker Machine on Mac or Windows, your Docker Engine daemon has only limited access to your macOS or Windows filesystem. Docker Machine tries to auto-share your /Users (macOS) or C:\Users (Windows) directory.
Update July 2019:
I've updated the documentation link and naming to be correct. This type of mount is called a "bind mount". The snippet about Docker for Mac or Windows no longer appears in the documentation but it should still apply. I'm not sure why they removed it (my Docker for Mac still has an explicit list of allowed mounting paths on the host).
I'm working on an OpenCV project and, as many know, installing it on Windows is irritating. So what I want to do is run the project in a Docker container and store the output in a folder on the host computer. In simple terms it is something like this:
Write the Python / OpenCV code
Build the Docker image
Run the Docker image --> it saves the output data somewhere
In some way, get access to the output data on the host.
Now, I have been trying to find ways of doing this, and I will probably send the data by other means at a later time. However, for development I need this slightly more direct approach. It also has something to do with collaboration with others.
Simple Dockerfile that can be used as the base:
FROM python:3
WORKDIR /usr/src/app
COPY . .
CMD [ "python", "./script.py" ]
Let's say that script.py creates a file called output.txt. I want that output.txt stored on my E: drive.
How can I do this automatically, without having to run multiple command-line operations?
TL;DR: How do I get files from a Docker container to the host? Goal: the file physically stored on E:
There are two ways to do this. The first is to mount a Docker volume (here, a bind mount of a host directory).
docker run --name=mycontainer -d -v /path/on/host:/path/in/container myimage
By mounting a volume like this, whatever you write in the directory you've mounted is written on the host automatically. Check this for more information on volumes.
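Applied to the question, assuming Docker Desktop with the E: drive shared and script.py adjusted to write into an output/ subdirectory (so the mount doesn't hide the application code):

docker run -v e:/output:/usr/src/app/output myimage
# output.txt then shows up as E:\output\output.txt on the host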
The second way is to copy the files from the container to the host, like this:
docker cp <containerId>:/file/path/within/container /host/path/target
Check docs here for more info on this.
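Applied to the question, that might look like this (the container name is made up):

docker cp mycontainer:/usr/src/app/output.txt E:\output.txt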
Well, I have made the shared folder from my machine appear in the container using docker run -it -v ~/Volumes/Data/Studies/PhD\Work/gitlab/J2/ydk-py:/ydk-py ydkdev/ydk-py, but none of the files in the ydk-py folder are shown. I understand this hiding may be the safe, usually-desired behavior, but for development and instance setup it would be immensely useful to have access to the existing file structure.
docker run with -v will automatically mount sub-directories. In your case you are using a relative path; you need to use an absolute path, as per this documentation.
So change your command from
docker run -it -v ~/Volumes/Data/Studies/PhD\Work/gitlab/J2/ydk-py:/ydk-py ydkdev/ydk-py
to
docker run -it -v /home/<what ever user>/Volumes/Data/Studies/PhD\Work/gitlab/J2/ydk-py:/ydk-py ydkdev/ydk-py
it will work.
Make sure you have sufficient permissions on the directory that you are trying to mount.
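A quick way to check on the host, reusing the path from the command above:

ls -ld /home/<what ever user>/Volumes/Data/Studies/PhD\Work/gitlab/J2/ydk-py
# if needed, grant read and traverse access (a blunt sketch):
chmod -R a+rX /home/<what ever user>/Volumes/Data/Studies/PhD\Work/gitlab/J2/ydk-py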
I need to be able to restart/revert a container to its original image state. Simply doing a docker restart will not work (files created during a session, for example, are still persisted).
Currently I have the following python script that does the work:
import subprocess

# Placeholder names; substitute your own image and container.
IMAGE_NAME = 'myimage'
CONTAINER_NAME = 'mycontainer'

# Stop and remove a named container if it exists (i.e. is running or has exited).
def resetContainer(imageName, containerName):
    containerExists = subprocess.check_output(
        ['docker', 'ps', '-aq', '-f', 'name=%s' % containerName])
    if containerExists:
        print('Stop and remove container')
        subprocess.call(['docker', 'stop', containerName])
        subprocess.call(['docker', 'rm', containerName])

resetContainer(IMAGE_NAME, CONTAINER_NAME)

# Finally re-create it from the image
subprocess.call(['docker', 'run', '-d', '--name', CONTAINER_NAME, IMAGE_NAME,
                 'tail', '-f', '/dev/null'])
But is there a better way than this?
I have looked at:
https://docker-py.readthedocs.io/en/stable/containers.html
but from what I can see I would end up with about the same number of lines, plus the additional "overhead" of introducing another layer on top of the native docker commands.
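For comparison, a minimal docker-py sketch of the same reset (same placeholder names as above):

import docker

client = docker.from_env()
try:
    # stop and remove the container if it already exists
    old = client.containers.get(CONTAINER_NAME)
    old.stop()
    old.remove()
except docker.errors.NotFound:
    pass
# re-create it from the image
client.containers.run(IMAGE_NAME, 'tail -f /dev/null', name=CONTAINER_NAME, detach=True)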
Additional information on @larsks' comment and OkieOth's answer:
Containers are considered to be ephemeral, so you are already doing the right thing, which is:
to stop and remove the old container
to run a new one
from: Best practices for writing Dockerfiles
General guidelines and recommendations
Containers should be ephemeral
The container produced by the image your Dockerfile defines should be as ephemeral as possible. By “ephemeral,” we mean that it can be stopped and destroyed and a new one built and put in place with an absolute minimum of set-up and configuration.
If you use plain docker or docker-compose, then you need to remove any existing container on your machine and start a new one to get a fresh copy.
# stop a container
docker stop CONTAINER_NAME
# remove the container
docker rm -f CONTAINER_NAME
If your container uses external volumes (on the host or in other containers), you also need to remove them. This could be the case if you work with databases.
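For example (the volume name is hypothetical):

# find and remove volumes the old container used
docker volume ls
docker volume rm my_db_volume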
I'm using docker stack on my machines. This approach doesn't need additional dependencies (unlike docker-compose), and it does a complete container reset for you.
# halt and remove the existing containers
docker stack rm "$STACK_NAME"
# bring up a fresh, clean copy of the stack
docker stack deploy -c "$composeFile" "$STACK_NAME"
Note that docker stack doesn't clean mounted host directories.
IMO docker stack has much cleaner handling of container state than plain docker run.