Suppose the following setup:
Website written in PHP / Laravel
User uploads a file (text, doc, or pdf)
We have a Docker container which contains a Python script for converting text into a NumPy array.
I want to take this uploaded data and pass it to the python script.
I can't find anything which explains how to pass dynamically generated inputs into a container.
Can this be done by executing a shell script from inside the Laravel app, with the uploaded file passed as a variable to the Dockerfile's ENTRYPOINT?
Are there any other ways of doing this?
I would strongly recommend using TCP/IP for this. Among other benefits:
You can detect whether your Python service is online.
You can move the Python container to another machine.
The implementation is really simple. You can choose any framework; Twisted suits me well here. Implement your Python script as follows:
from twisted.internet import reactor
from twisted.internet.protocol import Factory
from twisted.protocols.basic import LineReceiver

class DataProcessor(LineReceiver):
    def lineReceived(self, line):
        # line contains your data (bytes in Python 3)
        pass

factory = Factory()
factory.protocol = DataProcessor

reactor.listenTCP(8080, factory)
reactor.run()
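The web application then only needs to open a TCP connection to the container and write a line. Here is a minimal client sketch (in Python for brevity; the Laravel side would do the equivalent with PHP's socket functions), assuming the service above is reachable on localhost:8080:

import socket

def send_line(text, host="localhost", port=8080):
    # LineReceiver splits on \r\n by default, so terminate the payload with it.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(text.encode("utf-8") + b"\r\n")

send_line("contents of the uploaded file")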
... a python script for ...
Just run it; don't package it into a Docker container. That's doubly true if its inputs and outputs are both local files, and it expects to do its thing and exit promptly: the filesystem isolation Docker provides works against you here.
This is, of course, technically possible. Depending on how exactly the support program container is set up, the "command" at the end of docker run will be visible to the Python script in sys.argv, like any other command-line options. You can use a docker run -v option to publish parts of the host's filesystem into the container. So you might be able to run something like
docker run --rm -v "$PWD/files:/data" \
    converter_image \
    python convert.py /data/in.txt /data/out.pkl
where all of the /data paths are in the container's private filesystem space.
There are two big caveats:
The host paths in the docker run -v option are paths specifically on the physical host. If your HTTP service is also running in a container you need to know some host-system path you can write to that's also visible in your container filesystem.
Running any docker command at all effectively requires root privileges. If any of the filenames or paths involved are dynamic, shell injection attacks can compromise your system. Be very careful with how you run this from a network-accessible script.
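If you do go this route, avoid building the docker command line as a single shell string. Below is a minimal sketch of the safer pattern, shown in Python for illustration (the same idea applies to PHP's escapeshellarg / proc_open); converter_image and the paths mirror the hypothetical example above:

import subprocess

def convert(upload_dir, in_name, out_name):
    # Passing an argument list (no shell=True) means the filenames are never
    # interpreted by a shell, which avoids shell injection. You should still
    # validate that in_name/out_name are plain file names with no "/" or "..".
    subprocess.run(
        [
            "docker", "run", "--rm",
            "-v", f"{upload_dir}:/data",
            "converter_image",
            "python", "convert.py", f"/data/{in_name}", f"/data/{out_name}",
        ],
        check=True,
    )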
One way to do this would be to upload the files to a directory to which the Docker container has access, and then poll that directory for new files from the Python script. You can access local directories from Docker containers using "bind mounts". Google something like "How to share data between a Docker container and host system" to read more about bind mounts and sharing volumes.
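A minimal polling sketch, assuming the upload directory is bind-mounted into the container at /data/incoming (a hypothetical path):

import os
import time

WATCH_DIR = "/data/incoming"   # mounted with e.g. docker run -v /host/uploads:/data/incoming ...

def process(path):
    # placeholder: replace with the actual text-to-NumPy conversion
    print("processing", path)

def poll_forever(interval=5):
    seen = set()
    while True:
        for name in os.listdir(WATCH_DIR):
            path = os.path.join(WATCH_DIR, name)
            if path not in seen and os.path.isfile(path):
                seen.add(path)
                process(path)
        time.sleep(interval)

poll_forever()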
Related
I have a Python script inside a container that needs to continuously read changing values inside a file located on the host file system. Using a volume to mount the file won't work because that only captures a snapshot of the values in the file at that moment. I know it's possible since the node_exporter container is able to read files on the host filesystem using custom methods in Golang. Does anyone know a general method to accomplish this?
I have a Python script [...] that needs to continuously read changing values inside a file located on the host file system.
Just run it. Most Linux systems have Python preinstalled. You don't need Docker here. You can use tools like Python virtual environments if your application has Python library dependencies that need to be installed.
Is it possible to get a Docker container to read a file from host file system not using Volumes?
You need some kind of mount, perhaps a bind mount; docker run -v /host/path:/container/path image-name. Make sure to not overwrite the application's code in the image when you do this, since the mount will completely hide anything in the underlying image.
Without a bind mount, you can't access the host filesystem at all. This filesystem isolation is a key feature of Docker.
...the [Prometheus] node_exporter container...
Reading from the GitHub link in the question, "It's not recommended to deploy it as a Docker container because it requires access to the host system." The docker run example there uses a bind mount to access the entire host filesystem, circumventing Docker's filesystem isolation.
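With a bind mount in place, the file inside the container is the same file as on the host, so the script sees every change rather than a one-time snapshot. A minimal sketch, assuming the host file is mounted at /data/values.txt (a hypothetical path):

import os
import time

PATH = "/data/values.txt"   # e.g. docker run -v /host/path/values.txt:/data/values.txt ...

def watch(interval=1):
    last_mtime = None
    while True:
        mtime = os.path.getmtime(PATH)
        if mtime != last_mtime:
            last_mtime = mtime
            with open(PATH) as f:
                print("new contents:", f.read())
        time.sleep(interval)

# Note: if the host tool replaces the file (write-new-then-rename) rather than
# rewriting it in place, bind-mount the containing directory instead of the
# single file, or the container will keep seeing the old inode.
watch()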
I have a scenario where I have created an emulator in Python3 of a test node that can be launched in a Docker container.
So basically, on one server running Ubuntu 18.04, I have 50 ~ 100 containers, each one emulating a node and performing a basic file transfer task.
Each container is running a Python3 application that emulates a node. For logging purposes, I have the following:
import logging
logging.basicConfig(format='%(asctime)s : %(message)s', filename='test.log', datefmt='%Y-%m-%d %H:%M:%S', level=logging.DEBUG)
So basically by executing:
logging.error("File transfer failed")
I get a log file test.log with the proper formatted time stamp and error message.
The issue is this is occurring inside the container, and for that matter, inside 50 ~ 100 containers.
Is there a way to have all the containers log to a single log file on the local host where the containers run? I have looked at log handlers in Python but cannot seem to wrap my head around getting out of the container and writing to a file on the local host.
How about using a Docker volume? Docker volumes can be used to persist data to an external file system, so your containers can read and write to your local hard drive instead of creating log files inside the containers themselves.
But you may have to find a way to avoid race conditions when writing to the shared location.
Read about Docker volumes in the official docs. It's pretty easy.
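One simple way to sidestep the contention is to give each container its own file inside the shared volume. A sketch, assuming every container mounts the same volume at /logs (a hypothetical mount point):

import logging
import socket

# The container's hostname defaults to its short container ID, so each
# container gets its own file inside the shared /logs volume.
logging.basicConfig(
    format='%(asctime)s : %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
    level=logging.DEBUG,
    filename=f"/logs/test-{socket.gethostname()}.log",
)

logging.error("File transfer failed")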
The default idiom for Docker logging is to log to stdout. So just don't specify a filename when you call basicConfig() and the logs will go to the console by default (stderr, which Docker captures just like stdout).
You can then access those logs with the docker logs command.
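For example, keeping the same format but pointing the handler at stdout explicitly:

import logging
import sys

logging.basicConfig(
    format='%(asctime)s : %(message)s',
    datefmt='%Y-%m-%d %H:%M:%S',
    level=logging.DEBUG,
    stream=sys.stdout,   # without this, basicConfig logs to stderr; docker logs shows both
)

logging.error("File transfer failed")

docker logs <container> (or docker logs -f <container> to follow) then shows the same formatted messages for each of the 50 ~ 100 containers.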
I am building a webapp (a simple flask site) that uses docker. I want my development code to not reside within docker, but be executed by the development environment (using python3) I have defined in my dockerfile. I know that I can use the COPY . . syntax in a dockerfile to copy my source code into the image for execution, but that violates my aim of separating the container from my source. Is there a way to have a docker container read and execute the code that it is in the directory I run the docker container run command from?
Right now my container uses the COPY command to copy all the source code into the container. It then uses the CMD instruction to automatically run the Flask app:
CMD [ "python", "flask_app/server.py" ]
(I'm storing all my flask code in a directory called flask_app). I'm assuming this works because all this has been copied into the container (according to the specifications given in the dockerfile) and is being executed when I run the container. I would like for the container to instead access and execute flask_app/server.py without copying this information into itself -- is this possible? If so, how?
Instead of using COPY to move the code into the container, you'll use a "bind mount" (https://docs.docker.com/storage/bind-mounts/).
When you run the container, you'll do it with a command like this:
docker run --mount type=bind,source=<path_outside_container>,target=<path_inside_container> <image_tag>
For portability, I recommend putting this line in a script intended to be run from the repository root, and making <path_outside_container> be "$(pwd)", so that it will work on other people's computers. You'll need to adjust <path_inside_container> and your CMD depending on where you want the code to live inside the container.
(Obviously you can also put whatever other options you'd like on the command, like -it, --rm, or -p <whatever>.)
This question already has answers here:
Mount SMB/CIFS share within a Docker container
I have a small Python application that I'd like to run on Linux in Docker (using boot2docker for now). This application reads some data from my Windows network share, which works fine on Windows using the network path but fails on Linux. After doing some research I figured out how to mount a Windows share on Ubuntu. I'm attempting to write the Dockerfile so that it sets up the share for me, but have been unsuccessful so far. Below is my current approach, which fails with "Operation not permitted" at the mount command during the build process.
#Sample Python functionality
import os
folders = os.listdir(r"\\myshare\folder name")
#Dockerfile
RUN apt-get install cifs-utils -y
RUN mkdir -p "//myshare/folder name"
RUN mount -t cifs "//myshare/folder name" "//myshare/folder name" -o username=MyUserName,password=MyPassword
#Error at mount during docker build
#"mount: error(1): Operation not permitted"
#Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Edit
Not a duplicate of Mount SMB/CIFS share within a Docker container. The solution for that question references a fix during docker run. I can't run --privileged if the docker build process fails.
Q: What is the correct way to mount a Windows network share inside a Docker container?
Docker only abstracts away applications, whereas mounting filesystems happens at the kernel level and so can't be restricted to only happen inside the container. When using --privileged, the mount happens on the host and then it's passed through into the container.
Really the only way you can do this is to have the share available on the host (put it in /etc/fstab on a Linux machine, or mount it to a drive letter on a Windows machine) and have the host mount it, then make it available to the container as you would with any other volume.
Also bear in mind that mkdir -p "//myshare/folder name" is a semi-invalid path - most shells will condense the // into / so you may not have access to a folder called /myshare/folder name since the root directory of a Linux system is not normally where you put files. You might have better success using /mnt/myshare-foldername or similar instead.
An alternative could be to find a way to access the files without needing to mount them. For example, you could use the smbclient command to transfer files between the Docker container and the SMB/CIFS share without mounting it. That works fine inside a Docker container, just as downloading files with wget or curl does, as is commonly done in Dockerfiles.
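As a rough sketch of that approach, shelling out to smbclient from the Python application (the share name, credentials, and file names are the placeholders from the question, and the smbclient package must be installed in the image):

import subprocess

def fetch(remote_name, local_path):
    # Copies one file from the share into the container's own filesystem
    # without any mount, so no --privileged is needed.
    subprocess.run(
        [
            "smbclient", "//myshare/folder name",
            "-U", "MyUserName%MyPassword",
            "-c", f'get "{remote_name}" "{local_path}"',
        ],
        check=True,
    )

fetch("some-file.txt", "/tmp/some-file.txt")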
You are correct that you can only use --privileged during docker run. You cannot perform mount operations without --privileged, ergo, you cannot perform mount operations during the docker build process.
This is probably by design: the goal is that a Dockerfile is largely self-contained; anyone should be able to use your Dockerfile and the other contents of the same directory to generate the same image, and linking things to an external mount would violate this.
However, your question says that you have an application that needs to read some data from a share, so it's not clear why you need the share mounted during docker build. It sounds like you would be better off building an image that launches your application as part of docker run, and possibly using Docker volumes to access the share rather than attempting to mount it inside the container.
I have two applications:
a Python console script that does a short(ish) task and exits
a Flask "frontend" for starting the console app by passing it command line arguments
Currently, the Flask project carries a copy of the console script and runs it using subprocess when necessary. This works great in a Docker container but they are too tightly coupled. There are situations where I'd like to run the console script from the command line.
I'd like to separate the two applications into separate containers. To make this work, the Flask application needs to be able to start the console script in a separate container (which could be on a different machine). Ideally, I'd like to not have to run the console script container inside the Flask container, so that only one process runs per container. Plus I'll need to be able to pass the console script command line arguments.
Q: How can I spawn a container with a short lived task from inside a container?
You can just give the container access to execute docker commands. It will either need direct access to the Docker socket, or it will need the various TCP environment variables and files (client certs, etc.). Obviously it will also need a docker client installed in the container.
A simple example of a container that can execute docker commands on the host:
docker run -v /var/run/docker.sock:/var/run/docker.sock your_image
It's important to note that this is not the same as running a docker daemon in a container. For that you need a solution like jpetazzo/dind.
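Since the frontend here is a Python/Flask application, a sketch using the Docker SDK for Python over that same socket might look like the following (the docker package must be installed in the Flask image, and "console-script-image" and the arguments are placeholders):

import docker

def run_task(args):
    # Talks to the daemon through the mounted /var/run/docker.sock.
    client = docker.from_env()
    # Starts the short-lived container, waits for it to exit, removes it,
    # and returns its stdout as bytes.
    return client.containers.run(
        "console-script-image",
        command=args,
        remove=True,
    )

output = run_task(["--input", "some-file.txt"])

This keeps one process per container: the Flask container only asks the host's Docker daemon to start a sibling container, which runs the console script with the given arguments and exits.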