I have multiple pipelines with scripts defined in my Azure DevOps.
I would like to be able to run them from my local machine using Python.
Is this possible?
If so, how can I achieve it?
Regards,
Maciej
You can't run them that way: you can't take the YAML code, put it into Python (or any other language, for that matter) and run it locally. You need a build agent to run your pipeline. So you can create an agent pool, install the agent on your local machine, and change your pipeline to use that pool.
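Once the agent is registered and your pipeline targets that pool, "running it from Python" really comes down to queuing a run through the Azure DevOps REST API. A rough sketch only (the organization, project, pipeline id and PAT environment variable are placeholders, and the api-version may need adjusting for your organization):

```python
import os
import requests

# Placeholders - replace with your organization, project and pipeline id
ORG = "my-org"
PROJECT = "my-project"
PIPELINE_ID = 42
PAT = os.environ["AZURE_DEVOPS_PAT"]  # personal access token with build permissions

url = (
    f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/"
    f"{PIPELINE_ID}/runs?api-version=7.1-preview.1"
)

# Basic auth with an empty user name and the PAT as the password.
# An empty JSON body queues a run with the pipeline's default settings.
resp = requests.post(url, json={}, auth=("", PAT))
resp.raise_for_status()
print("Queued run:", resp.json()["id"])
```

The pipeline still executes on the agent in your pool, not inside the Python process; the script only triggers it.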
I'm building a simple kind of 'automation' for a container image builder.
I wrote it in Python and use nixpacks via Python's subprocess module.
The docker-py Python module is used to run the nixpacks result with Docker, push it to a registry, and then delete the nixpacks result.
Currently it runs well on my laptop.
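Roughly, the current flow on my laptop looks like this (a simplified sketch; the source path, image and registry names are just placeholders):

```python
import subprocess
import docker

APP_PATH = "./my-app"                          # placeholder source directory
IMAGE = "registry.example.com/my-app:latest"   # placeholder registry/image name

# 1. Build the image with the nixpacks CLI
subprocess.run(["nixpacks", "build", APP_PATH, "--name", IMAGE], check=True)

client = docker.from_env()

# 2. Quick check that the built image actually runs (blocks until the container exits)
client.containers.run(IMAGE, remove=True)

# 3. Push it to the registry, then delete the local copy
client.images.push(IMAGE)
client.images.remove(IMAGE)
```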
Now I want to containerize my whole script and also nixpacks, to be run inside my Kubernetes cluster.
For that I need to build/have an image that has a 'docker-server' inside.
The 'docker-server' itself doesn't need to be accessible from the outside world, just locally by nixpacks and the Python script.
My question is: which Docker image currently has a docker-server in it?
Is this enough?
Sincerely
-bino-
I am working on a project that allows users to upload a Python script to an API and run it on a schedule. Currently, I'm trying to figure out a way to limit the functionality of the script so that it cannot access local files, mess with the Flask server running the API, etc. Do you have any ideas on how I can achieve this? Is there any way to make it so only specific libraries are available for importing?
Running other people's scripts on your server is a serious security issue. If you are trying to run a Python interpreter behind your web application, you can try something like judge0 (on GitHub). It is free if you deploy it yourself, and it will run scripts safely inside containers.
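As a rough idea of what calling it looks like once judge0 is running (assuming a self-hosted instance on localhost:2358; the Python language id varies by deployment, so confirm it via GET /languages first):

```python
import requests

JUDGE0_URL = "http://localhost:2358"   # assumed self-hosted judge0 instance
PYTHON_LANGUAGE_ID = 71                # example id only - confirm via GET /languages

user_code = "print('hello from the sandbox')"

# Submit the script and wait for the result; it executes inside judge0's container sandbox
resp = requests.post(
    f"{JUDGE0_URL}/submissions?base64_encoded=false&wait=true",
    json={"source_code": user_code, "language_id": PYTHON_LANGUAGE_ID},
)
resp.raise_for_status()
result = resp.json()
print(result.get("stdout"), result.get("status"))
```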
The simplest way is to ensure the user running the script is not root, but a user created specifically for this task (e.g. part of a group that can only read and not write or execute). This means at minimum you should ensure all files have the appropriate mode. Then you can just use a pipe or something to run the script.
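A minimal sketch of that approach, assuming Python 3.9+ on Linux, a pre-created low-privilege account (here called sandboxrunner), and placeholder paths:

```python
import subprocess

def run_user_script(path: str) -> str:
    # Execute the uploaded script as the unprivileged user, capture its stdout
    # through a pipe, and kill it if it runs too long.
    result = subprocess.run(
        ["python3", path],
        user="sandboxrunner",   # the API server needs enough privilege to switch users
        capture_output=True,
        text=True,
        timeout=30,
        cwd="/tmp",             # keep the working directory away from the app's own files
    )
    return result.stdout

print(run_user_script("/srv/uploads/job_123.py"))
```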
Alternatively, you could use a runtime that's not "local", like a VM or a compute service (AWS Lambda, etc.). The latter would be simplest, and there are lots of vendors who offer compute services with a programmatic API.
I am using the Docker Python SDK docker-py to create a script that allows starting one or multiple containers (depending on a program argument, e.g. script.py --all or script.py --specific_container), and it has to be possible to start each container with its own configuration (image, container_name, etc.), just like in typical docker-compose.yml files.
So basically, I'm trying to do the same thing docker-compose does, just with the Python Docker SDK.
I've read that some people stick with docker-compose by calling it via subprocess, but that is not recommended and I would like to avoid it.
I've been searching for existing libraries for this but haven't found anything yet. Do you know of anything I could use?
Another option would be to store configuration files for the "specific_container" profiles and for the "all" profile as JSON (?), then parse them and use them to populate the SDK's containers.run method, which lets you pass the options you would otherwise set in a docker-compose file.
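To illustrate, the JSON idea would look roughly like this (a sketch; the profile file and its contents are made up, the keys just mirror the keyword arguments of containers.run):

```python
import json
import docker

# profiles.json (made-up example):
# {
#   "web":   {"image": "nginx:alpine", "name": "web", "ports": {"80/tcp": 8080}},
#   "cache": {"image": "redis:7", "name": "cache"}
# }

def start(profile_names):
    client = docker.from_env()
    with open("profiles.json") as f:
        profiles = json.load(f)
    for name in profile_names:
        cfg = profiles[name]
        # Every key in the profile maps 1:1 to a containers.run keyword argument
        client.containers.run(detach=True, **cfg)

# script.py --all -> start all profiles; --specific_container -> start(["web"])
start(["web", "cache"])
```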
Maybe someone knows another, better solution?
Thanks in advance guys.
The current workflow I have is that I created many images of different Python setups, which people can pull from if they want to test a Python script with a certain configuration. Then they build a container from the image and transfer their scripts, data, etc. from their local machine to the container. Next they run everything in the container, and finally they transfer the results back to their local machine.
I'm new to Docker, so is there a better way to go about this? Something that would be convenient is a central machine or Docker container where people could save the Python scripts and data they need for their tests, run them in the image of the Python environment they want, and save the results. Is this possible? I've been reading about volumes and think I can maybe do something with that, but I don't know...
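For example, is a bind mount roughly the idea? Something like this (a rough sketch with docker-py; the image tag and paths are made up):

```python
import docker

client = docker.from_env()

# Mount the user's project directory into the container; the script reads its
# inputs from /work and writes results back to the same mount, so nothing has
# to be copied into or out of the container afterwards.
client.containers.run(
    "my-python-env:3.11",                      # one of the prepared environment images (placeholder tag)
    command=["python", "/work/run_test.py"],
    volumes={"/home/alice/project": {"bind": "/work", "mode": "rw"}},
    working_dir="/work",
    remove=True,
)
```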
I am doing a project on fog computing. I would like to use one Docker container to simulate a fog node which can process data, store data in a database, and send data to the cloud. I need Ubuntu, Python, and Redis to develop my application.
I was wondering, is it possible to install them all in a single container? So far I can only get them separately, in different containers, by using the 'docker pull' command.
Can anyone help me out here?
Thanks!
This is bad practice and very time-consuming. Anyway, if you really want to go down that road, you have to create your own image: follow the instructions to install Redis and put each step inside your new Dockerfile (or you can try to adapt the official redis Dockerfile - https://github.com/docker-library/redis/blob/99a06c057297421f9ea46934c342a2fc00644c4f/3.2/Dockerfile).
Once this is done, you simply add the commands to install Python and build the image.
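For example, the Dockerfile could start from the official redis image and add Python on top. A rough sketch that builds it with the Docker SDK for Python (the tag, package list and image name are assumptions; you could just as well save the string to a Dockerfile and run docker build):

```python
import io
import docker

# Minimal Dockerfile: reuse the official Redis image (Debian-based) and add Python
dockerfile = """
FROM redis:3.2
RUN apt-get update && \\
    apt-get install -y --no-install-recommends python3 python3-pip && \\
    rm -rf /var/lib/apt/lists/*
"""

client = docker.from_env()
image, logs = client.images.build(
    fileobj=io.BytesIO(dockerfile.encode("utf-8")),
    tag="fog-node:latest",
)
for line in logs:
    print(line.get("stream", ""), end="")
```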
Good luck.