I'm running into an issue with volume mounting, combined with directory creation in Python.
Essentially, inside my container I'm writing to some path /opt/…, and I may have to create the path first (which I do with os.makedirs).
If I mount a host path with bad permissions, like -v /opt:/opt, where the container does not appear to be able to write to the host side, creating the path inside the container DOES NOT FAIL. os.makedirs(P) succeeds because, inside the container, the process runs as root and can make the directory just fine. However, nothing gets written on the host at /opt/…. The data silently just isn't there, and no exception is ever raised.
If I mount a path with proper/open permissions, like -v /tmp:/opt, then the data shows up on the host machine at /tmp/… as expected.
So, how do I avoid failing silently when there are no write permissions on the host side (the left side of the -v argument)?
EDIT: my question is "how do I detect this bad deployment scenario, crash, and fail fast inside the container if the person deploying the container does it wrong?" Silently not writing data isn't acceptable.
Is the bad mount owned by root on the host, and the good mount by a user in the docker group on the host? Can you check the user/group of the mounted /opt? It should differ from that of /tmp.
You could expect a sentinel file to exist in /opt and fail if it's not present:

if not os.path.exists(PATH):
    raise FileNotFoundError(f"could not find {PATH}: file missing or failed to mount /opt")
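Building on the sentinel idea, a fuller fail-fast check run at container startup could both require the sentinel and round-trip a probe file to confirm writes actually stick. This is only a sketch; the function name, the sentinel filename MOUNTED, and the probe naming are all made up for illustration:

```python
import os
import uuid

def assert_mount_ok(data_dir, sentinel="MOUNTED"):
    """Fail fast if data_dir is missing its sentinel file or isn't writable."""
    sentinel_path = os.path.join(data_dir, sentinel)
    if not os.path.exists(sentinel_path):
        raise RuntimeError(f"{sentinel_path} missing: host path probably not mounted")
    # Round-trip a uniquely named probe file to confirm writes are readable back.
    probe = os.path.join(data_dir, f".probe-{uuid.uuid4().hex}")
    try:
        with open(probe, "w") as f:
            f.write("ok")
        with open(probe) as f:
            if f.read() != "ok":
                raise RuntimeError(f"probe round-trip failed in {data_dir}")
    finally:
        if os.path.exists(probe):
            os.remove(probe)
```

Calling this once before the application starts writing turns the silent-misdeployment case into an immediate crash with a clear message.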
Related
I have a Python script inside a container that needs to continuously read changing values inside a file located on the host file system. Using a volume to mount the file won't work because that only captures a snapshot of the values in the file at that moment. I know it's possible since the node_exporter container is able to read files on the host filesystem using custom methods in Golang. Does anyone know a general method to accomplish this?
I have a Python script [...] that needs to continuously read changing values inside a file located on the host file system.
Just run it. Most Linux systems have Python preinstalled. You don't need Docker here. You can use tools like Python virtual environments if your application has Python library dependencies that need to be installed.
Is it possible to get a Docker container to read a file from host file system not using Volumes?
You need some kind of mount, probably a bind mount: docker run -v /host/path:/container/path image-name. Make sure not to overwrite the application's code in the image when you do this, since the mount will completely hide anything at that path in the underlying image.
Without a bind mount, you can't access the host filesystem at all. This filesystem isolation is a key feature of Docker.
...the [Prometheus] node_exporter container...
Reading from the GitHub link in the question, "It's not recommended to deploy it as a Docker container because it requires access to the host system." The docker run example there uses a bind mount to access the entire host filesystem, circumventing Docker's filesystem isolation.
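Note that a bind-mounted file reflects live changes from the host; it is the same file, not a snapshot, so a simple polling reader inside the container can work. A minimal sketch (the path and polling interval are just placeholders; max_polls exists so the loop can terminate in tests):

```python
import time

def poll_file(path, interval=1.0, handle=print, max_polls=None):
    """Re-read `path` every `interval` seconds; call `handle` when content changes."""
    last = None
    polls = 0
    while max_polls is None or polls < max_polls:
        try:
            # Re-opening each time picks up in-place writes. Caveat: if the host
            # editor *replaces* the file (new inode), bind-mount the containing
            # directory rather than the file, or the container keeps the old inode.
            with open(path) as f:
                content = f.read()
        except FileNotFoundError:
            content = None
        if content != last:
            handle(content)
            last = content
        polls += 1
        time.sleep(interval)
```

The "snapshot" behavior mentioned in the question usually comes from bind-mounting a single file that tools then replace by rename; mounting the parent directory avoids it.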
I'm trying to access multiple Windows CIFS shares from Python code that will run in a Docker container. I've seen that there are multiple SMB libraries, like pysmb and smbprotocol, that claim they can be used to access CIFS shares, but I haven't managed to get them to work, and I haven't seen a single example online where they are used to access CIFS shares.
I know one solution would be to mount the shares on the host and then mount them into the container, but I'd rather avoid that if possible, as the code will need to access multiple shares, not all of which will be known when the container starts.
Am I missing something? Is there a good way, or a good example online, to access a CIFS share from Python code running on Linux? (I know that on Windows you can simply open the folder, but I need it to work on Linux.)
What is known not to work is calling mount inside the container, unless the container was started with privileges. However, client code can also connect to CIFS shares without first mounting the directory (for Java, e.g., there is jcifs-ng).
Find out how the library you use works internally. If it can connect directly, go ahead. Otherwise, you can also add smbclient to your container and call it to access files on the CIFS side.
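If the library can't connect directly, shelling out to smbclient (installed in the image, e.g. via apt-get install smbclient) is a workable fallback. A rough sketch, with the command construction split out so it can be inspected; the server, share, and credentials are placeholders:

```python
import subprocess

def smbclient_cmd(server, share, username, password, smb_command):
    """Build an smbclient argv, e.g. smb_command="ls" or "get remote.txt local.txt"."""
    return ["smbclient", f"//{server}/{share}",
            "-U", f"{username}%{password}", "-c", smb_command]

def run_smb(server, share, username, password, smb_command):
    """Run the command and return smbclient's stdout; raises on a non-zero exit."""
    argv = smbclient_cmd(server, share, username, password, smb_command)
    return subprocess.run(argv, capture_output=True, text=True, check=True).stdout
```

For example, run_smb("fileserver", "Data", "user", "secret", "ls") would list the share's root, assuming the container can resolve and reach fileserver. Since each call names its own server and share, this also sidesteps the "not all shares are known at container start" problem.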
There's no CIFS-specific example because it works the same as SMB on Linux. I struggled for a while with both libraries (pysmb and smbprotocol), and I have made both work.
For pysmb, the only pitfall I've found is the domain: examples usually say you can leave it empty, or don't mention it at all. On Windows, it is the workgroup. If leaving it empty doesn't work, you can look it up by right-clicking "My Computer" and selecting "Properties"; you'll find it there together with the machine name.
Here's a working example with pysmb from Linux to a Windows machine:
from smb.SMBConnection import SMBConnection

conn = SMBConnection(username="user", password="my_passwrd", my_name="name",
                     remote_name="WINDOWS-MACHINE", domain="WORKGROUP", use_ntlm_v2=True)
conn.connect(my_server_ip_addr)  # the Windows machine's IP address, e.g. "192.168.1.20"
for shared_file in conn.listPath("shared_dir", "/"):  # share name, then path within the share
    print(shared_file.filename)
conn.close()
You can check an example using smbprotocol here (where "server" is your IP address or host name and "share" is your shared directory's name). You can make it work without remote_name or domain.
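For comparison, here is a minimal sketch of the smbprotocol route via its high-level smbclient module. The server address, share, and credentials are placeholders, and the import is deferred so the UNC-path helper works even without the package installed:

```python
def unc_path(server, share, *parts):
    """Build a UNC path like \\\\server\\share\\sub, as the smbclient helpers expect."""
    return "\\\\" + "\\".join((server, share) + parts)

def list_share(server, share, username, password):
    """List the root of a share using smbprotocol's high-level API."""
    import smbclient  # the high-level module shipped with the smbprotocol package
    smbclient.register_session(server, username=username, password=password)
    return smbclient.listdir(unc_path(server, share))
```

So list_share("192.168.1.20", "shared_dir", "user", "my_passwrd") would be the rough smbprotocol equivalent of the pysmb listPath example above.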
I'm trying to read a file from the host inside the container, using a Python script. That script reads a file from a folder under the root directory,
e.g. /a/b/c/log.txt (this is a dynamic file, so I can't add it to the Dockerfile).
I need to access this in the Docker container. From this platform I got the hint that I need to use a volume:
docker run -v /path/from/host:/path/to/container sntalarmdocker_snt
(sntalarmdocker_snt is the name of the image)
The main thing I'm confused about is this path/to/container.
Is it the path where the Dockerfile exists?
Is it /var/snap/docker/common/var-lib-docker/containers/f901e49b67375d4b1105309569c92afae415309ac1787afa2a565a9c08708b18?
This question is related to Python file writing is not working inside docker
How can I resolve this issue? In short, I need to read a file from the host and write a file back to the host, and I can't add the file to the Dockerfile. Thanks in advance for your time and help.
/path/to/container is a poor choice of words. It is actually the path within the container where you would like to mount /path/from/host.
For example, if you had a directory on the host, /home/edward/data, and you wanted the contents of that directory to be available in the container at /data, you would use -v /home/edward/data:/data.
In the container's process, you can then read and/or write files in the /data directory and they will be read from/written to /home/edward/data on the host.
Bind mounts are explained in detail in the documentation.
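To make that concrete, a container process might wrap its file I/O against the mount point like this; the /data default and the helper names are just illustrations of the example above, with data_dir parameterized so the code is testable outside a container:

```python
import os

DATA_DIR = os.environ.get("DATA_DIR", "/data")  # the container-side mount path

def write_report(name, text, data_dir=DATA_DIR):
    """Write text under the mounted directory; it lands on the host side too."""
    path = os.path.join(data_dir, name)
    with open(path, "w") as f:
        f.write(text)
    return path

def read_report(name, data_dir=DATA_DIR):
    """Read a file back from the mounted directory."""
    with open(os.path.join(data_dir, name)) as f:
        return f.read()
```

With -v /home/edward/data:/data, write_report("log.txt", "hello") in the container produces /home/edward/data/log.txt on the host.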
/path/to/container is the directory path inside the container where you want your host's data to be bind mounted.
It can be any directory path, like /data or /root/data.
(/data does not need to already exist in the container.)
Whatever updates take place in the container's directory through read/write operations will show up at your host's path as well.
The only thing to check is the host machine's path to the specified folder/file.
After bind mounting, you can use docker exec -it to enter the container's interactive shell and then use ls to list files/folders.
The path provided at bind-mount time will be visible there.
I want to check if a certain file, fake-file.txt, exists on a shared folder //123.456.7.890/Data/, and if it does, I want to remove it and write a new file, real-file.txt. I do not want to use the paramiko module for this task, and I got it working on a Windows machine like this:
filename = '//123.456.7.890/Data/fake-file.txt'
if os.path.exists(filename):
    os.remove(filename)
    # and so on
However, this method does not work on a Unix-based machine (CentOS in this case). I get an IOError saying the file doesn't exist. I'm not really familiar with Unix-based machines, so there is probably something going wrong with the reference. How can I fix this problem? If something is unclear, let me know!
PS. the folder is password protected and I am able to ssh to it from the terminal
It's not really a Python question. On Linux you access remote filesystems by mounting them onto a local empty directory (which may require privileges you don't have) and then accessing them through that directory (then known as a mountpoint). Something like:
$ mkdir ./1234567890
$ sudo mount -t cifs //123.456.7.890/Data ./1234567890 -o username=username,password=password
If this succeeds, the Linux filename you use inside your Python program will be ./1234567890/fake-file.txt.
Some Linux systems may be configured with an automounter, so that particular filestore references automatically trigger the mount for you. You'll need to talk to your local system administrators to establish whether and how this is implemented locally.
PS: 123.456.7.890 is not a valid IP address, but I'm assuming you chose it to hide the real address you were actually using. 123.45.67.89 is a better choice for a "random" IP address, or 192.168.22.33 (a random private address).
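Back to the mount itself: before touching the files from Python, you can verify that something is actually mounted on the directory; os.path.ismount catches the case where the mount command failed or was never run. A small sketch (the helper name is made up):

```python
import os

def require_mounted(path):
    """Raise early if `path` is missing or no filesystem is mounted on it."""
    if not os.path.isdir(path):
        raise FileNotFoundError(f"{path} does not exist or is not a directory")
    if not os.path.ismount(path):
        raise RuntimeError(f"{path} exists but nothing is mounted on it")
    return path
```

Calling require_mounted("./1234567890") right after the mount step turns a silently empty directory into an immediate, explicit error.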
This question already has answers here:
Mount SMB/CIFS share within a Docker container
(5 answers)
Closed 7 years ago.
I have a small Python application that I'd like to run on Linux in Docker (using boot2docker for now). This application reads some data from my Windows network share, which works fine on Windows using the network path but fails on Linux. After doing some research, I figured out how to mount a Windows share on Ubuntu. I'm attempting to write the Dockerfile so that it sets up the share for me, but have been unsuccessful so far. Below is my current approach, which fails with "Operation not permitted" at the mount command during the build process.
#Sample Python functionality
import os
folders = os.listdir(r"\\myshare\folder name")
#Dockerfile
RUN apt-get install cifs-utils -y
RUN mkdir -p "//myshare/folder name"
RUN mount -t cifs "//myshare/folder name" "//myshare/folder name" -o username=MyUserName,password=MyPassword
#Error at mount during docker build
#"mount: error(1): Operation not permitted"
#Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Edit
Not a duplicate of Mount SMB/CIFS share within a Docker container. The solution to that question applies a fix during docker run; I can't use --privileged there if the docker build process already fails.
Q: What is the correct way to mount a Windows network share inside a Docker container?
Docker only abstracts away applications, whereas mounting filesystems happens at the kernel level, so it can't be restricted to happen only inside the container. When using --privileged, the mount happens on the host and is then passed through into the container.
Really, the only way you can do this is to have the share available on the host (put it in /etc/fstab on a Linux machine, or map it to a drive letter on a Windows machine), let the host mount it, and then make it available to the container as you would any other volume.
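For the /etc/fstab route on a Linux host, an entry might look like the line below. The server name, share, mountpoint, uid/gid, and credentials file are all placeholders; a credentials file is preferable to an inline password since fstab is world-readable:

```
//fileserver/Data  /mnt/winshare  cifs  credentials=/root/.smbcredentials,uid=1000,gid=1000  0  0
```

After mount -a (or a reboot), the share can then be handed to the container like any other directory: docker run -v /mnt/winshare:/data image-name.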
Also bear in mind that mkdir -p "//myshare/folder name" is a dubious path: the leading // generally collapses to /, so you may actually be creating /myshare/folder name, and the root directory of a Linux system is not normally where you put files. You might have better success using /mnt/myshare-foldername or similar instead.
An alternative could be to find a way to access the files without mounting them at all. For example, you could use the smbclient command to transfer files between the Docker container and the SMB/CIFS share without mounting it; this works inside a Docker container, just as wget or curl are commonly used in Dockerfiles to download files.
You are correct that you can only use --privileged during docker run. You cannot perform mount operations without --privileged, ergo, you cannot perform mount operations during the docker build process.
This is probably by design: the goal is that a Dockerfile is largely self-contained, so anyone can use your Dockerfile plus the other contents of the same directory to generate the same image; tying the build to an external mount would violate this.
However, your question says you have an application that needs to read some data from a share, so it's not clear why you need the share mounted during docker build. You'd likely be better off building an image that launches your application at docker run time, and using Docker volumes to access the share rather than attempting to mount it inside the container.