I am using the Docker Python SDK (docker-py) to write a script that starts one or multiple containers, selected via a program argument (e.g. script.py --all or script.py --specific_container). It has to be possible to start each container with its own configuration (image, container_name, etc.), just like in a typical docker-compose.yml file.
So basically, I'm trying to do the same thing docker-compose does, just with the Python Docker SDK.
I've read that some people stick with docker-compose by calling it via subprocess, but that is not recommended and I would like to avoid it.
I have searched for existing libraries for this but haven't found anything yet. Do you know of anything I could use?
Another option would be to store the configuration for the "specific_container" profiles and for the "all" profile as JSON (?), then parse it and feed the options into the Docker SDK's containers.run method, which accepts all the options you can also set in a docker-compose file.
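Something like this rough sketch is what I have in mind (the containers.json layout is just my idea; its keys would mirror the keyword arguments of client.containers.run()):

    # Sketch: parse a JSON "profile" file and start containers via docker-py.
    # containers.json (hypothetical) maps a profile name to a list of dicts
    # whose keys mirror the keyword arguments of client.containers.run().
    import argparse
    import json

    import docker

    def start_profile(profile_name, config_path="containers.json"):
        with open(config_path) as f:
            profiles = json.load(f)
        client = docker.from_env()
        started = []
        for options in profiles[profile_name]:
            image = options.pop("image")  # "image" is a positional arg of run()
            # detach=True returns a Container instead of blocking on its logs
            started.append(client.containers.run(image, detach=True, **options))
        return started

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        parser.add_argument("--profile", default="all")
        args = parser.parse_args()
        for container in start_profile(args.profile):
            print(container.name, container.status)

A profile entry could then look like {"image": "redis:alpine", "name": "cache", "ports": {"6379/tcp": 6379}}.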
Maybe someone knows another, better solution?
Thanks in advance guys.
The current workflow I have is this: I created many images of different Python setups, which people can pull from if they want to test a Python script with a certain configuration. They build a container from the image and transfer their scripts, data, etc. from their local machine into the container. Next they run everything in the container, and finally they transfer the results back to their local machine.
I'm new to Docker, so is there a better way to go about this? Something I have in mind that would be convenient is a central machine or Docker container where people could save the Python scripts and data they need for their tests, run them in the image of the Python environment they want, and save the results. Is this possible? I've been reading about volumes and think I can maybe do something with that, but I don't know...
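For example, I imagine something like this rough docker-py sketch (the shared directory and image tag are made up):

    # Sketch: run a user's script in a chosen Python-environment image, with a
    # shared host directory mounted so scripts/data go in and results come out.
    import docker

    client = docker.from_env()

    # Hypothetical shared directory on the central machine, bind-mounted
    # read-write at /workspace so results written there persist on the host.
    output = client.containers.run(
        "python:3.11",                  # whichever environment image they pick
        command="python /workspace/test_script.py",
        volumes={"/srv/experiments/alice": {"bind": "/workspace", "mode": "rw"}},
        remove=True,                    # clean up the container afterwards
    )
    print(output.decode())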
I am running a dockerized Django app and I am looking for a way to run one or more directives every time before I build a Docker container. More concretely, I would like to run docker-compose -f production.yml run --rm django python manage.py check --deploy each time before I either build or up the production.yml file, and stop the build process if any errors occur. Like a pre-hook.
I know I could achieve this with a bash script, yet I was wondering if there is a way of doing this inside the docker-compose file. I can't find anything about it in the Docker documentation (except events, but I don't understand whether they serve what I want to achieve), and I assume that this is not possible. Yet maybe it is in fact possible, or maybe there is a hacky workaround?
Thanks in advance for any tips.
Currently, this is not possible. There have been multiple requests to add such functionality, but the maintainers do not consider this a good idea.
See:
https://github.com/docker/compose/issues/468
https://github.com/docker/compose/issues/1341
https://github.com/docker/compose/issues/6736
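For reference, the bash-script workaround mentioned in the question could look like this as a small Python wrapper (only the docker-compose commands come from the question; the rest is a sketch):

    # Sketch of a "pre-hook" wrapper: run the deploy check first and only
    # build/up the stack if it passes.
    import subprocess
    import sys

    CHECK = ["docker-compose", "-f", "production.yml", "run", "--rm",
             "django", "python", "manage.py", "check", "--deploy"]

    if subprocess.run(CHECK).returncode != 0:
        sys.exit("Deploy check failed; aborting build.")
    subprocess.run(["docker-compose", "-f", "production.yml", "build"], check=True)
    subprocess.run(["docker-compose", "-f", "production.yml", "up", "-d"], check=True)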
I'm creating microservices using Docker Containers.
Initially, I run one Docker container and it provides me with some output that I need as input for a second docker container.
So the flow of steps would be:
Run Docker container;
Get output;
Trigger running of second Docker container with previous output.
I have looked into Kubernetes, cloud functions and pub/sub on Google Cloud. Though I would like to run this locally first.
Unfortunately, I haven't found a solution; my processes are more like scripts than web-based applications.
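Locally, I picture something like this rough docker-py sketch (the image names and the environment-variable hand-off are just placeholders):

    # Sketch of a local pipeline: run the first container, capture its stdout,
    # then feed that output into the second container.
    import docker

    client = docker.from_env()

    # Without detach=True, run() blocks until the container exits and
    # returns its stdout as bytes.
    first_output = client.containers.run("producer-image", remove=True)

    # Hypothetical hand-off: pass the output in as an environment variable
    # (a bind-mounted file would work just as well).
    result = client.containers.run(
        "consumer-image",
        environment={"INPUT_DATA": first_output.decode().strip()},
        remove=True,
    )
    print(result.decode())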
If you are asking from an auto-devops point of view, you can try utilizing the kubectl wait command for this.
Prepare YAML files for each of the containers (e.g. pod, deployment or statefulset).
Write a shell script that runs kubectl apply -f on the files.
Use kubectl wait and jsonpath to pass info from previous pods to the next once they are in the Ready/Running state.
Find the detailed docs here.
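A rough Python version of such a script (pod names, file names, and the jsonpath field are placeholders):

    # Sketch: apply the first pod, wait until it is Ready, read a field off it
    # with jsonpath, then substitute that value into the next pod's manifest.
    import subprocess

    def sh(*args):
        return subprocess.run(args, check=True, capture_output=True, text=True).stdout

    sh("kubectl", "apply", "-f", "first-pod.yaml")
    sh("kubectl", "wait", "--for=condition=Ready", "pod/first-pod", "--timeout=120s")

    # Pull whatever the next step needs off the ready pod, e.g. its IP.
    pod_ip = sh("kubectl", "get", "pod", "first-pod", "-o", "jsonpath={.status.podIP}")

    with open("second-pod.yaml") as f:
        manifest = f.read().replace("__FIRST_POD_IP__", pod_ip)
    subprocess.run(["kubectl", "apply", "-f", "-"], input=manifest, text=True, check=True)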
You can have a look at Kubernetes Jobs. It might do the trick, although Kubernetes was not really made for running scripts in sequence and sharing dependencies between containers.
This thread is similar to what you need. One tool mentioned that intrigued me was brigade.
I have a server and a front end. I would like to get Python code from the user in the front end and execute it securely on the backend.
I read this article explaining the problematic aspects of each approach, e.g. the PyPy sandbox, Docker, etc.
I can define the inputs the code needs and the outputs, and can put all of them in a directory. What's important to me is that this code should not be able to hurt my filesystem or cause a disk, memory, or CPU overflow (I need to be able to set timeouts and prevent it from accessing files that are not devoted to it).
What is the best practice here? Docker? Kubernetes? Is there a python module for that?
Thanks.
You can have a Python Docker container with a volume mount. The volume mount maps a directory on the local system, containing the code, into the Docker container. By doing this you isolate the user-supplied code to the container while it runs.
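With docker-py you can also bolt on the resource limits from the question; a sketch (the image, paths, and limit values are arbitrary):

    # Sketch: run untrusted user code with a dedicated directory mounted,
    # no network, and memory/CPU/process caps, then kill it on timeout.
    import docker

    client = docker.from_env()

    container = client.containers.run(
        "python:3.11",
        command="python /sandbox/user_code.py",
        volumes={"/srv/sandbox/job42": {"bind": "/sandbox", "mode": "rw"}},
        network_disabled=True,   # no network access
        mem_limit="256m",        # memory cap
        nano_cpus=500_000_000,   # 0.5 CPU
        pids_limit=64,           # curb fork bombs
        detach=True,
    )
    try:
        container.wait(timeout=30)   # raises (via requests) if it runs too long
    except Exception:
        container.kill()
    finally:
        print(container.logs().decode())
        container.remove(force=True)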
Scan your Python container against the CIS Docker Benchmark for better security.
I am doing a project with regard to fog computing. I would like to use one Docker container to simulate a fog node which can process data, store it in a database, and send it to the cloud. I need Ubuntu, Python, and Redis to develop my application.
I was wondering, is it possible to install them in a single container? So far I can only install them separately in different containers by using the 'docker pull' command.
Can anyone help me out here?
Thanks!
This is bad practice and very time consuming. Anyway, if you really want to go down that path, you have to create your own image: follow the instructions to install Redis and put each step inside your new Dockerfile (or you can try to adapt the redis Dockerfile: https://github.com/docker-library/redis/blob/99a06c057297421f9ea46934c342a2fc00644c4f/3.2/Dockerfile).
Once this is done, you simply add the commands to install Python and build the image.
Good luck.