How to run a docker server inside a Kubernetes container? - python

I'm building a simple kind of 'automation' for a container image builder.
I wrote it in Python, and it calls nixpacks via Python's subprocess.
The docker-py module is used to docker run the nixpacks result, push it to a registry, and delete the nixpacks result.
Currently it runs well on my laptop.
Now I want to containerize my script, together with nixpacks, so it can run inside my Kubernetes cluster.
For that I need to build/have an image that has a 'docker server' inside.
The 'docker server' itself doesn't need to be accessible from the outside world, just 'locally' by nixpacks and the Python script.
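For reference, a minimal sketch of the kind of workflow described above; the paths, image name, and registry are illustrative assumptions, not taken from my actual script:

import subprocess
import docker

# Build the image with nixpacks (invoked as an external CLI via subprocess).
app_path = "./my-app"                      # hypothetical source directory
image_tag = "registry.example.com/my-app"  # hypothetical registry/repository
subprocess.run(["nixpacks", "build", app_path, "--name", image_tag], check=True)

# Use docker-py to talk to the local Docker daemon: test run, push, clean up.
client = docker.from_env()
client.containers.run(image_tag, remove=True)  # run the built image once (blocks until it exits)
client.images.push(image_tag)                  # push the result to the registry
client.images.remove(image_tag)                # delete the local copy

Everything below docker.from_env() assumes a Docker daemon is reachable, which is exactly what I'm missing inside the cluster.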
My question is: which docker images currently have a docker server in them?
Is this enough?
Sincerely
-bino-

Related

run tests on a lambda container image?

I'm using Lambda container images to package complicated libraries like opencv and pdf2image in Python.
Is there a way to run unit tests against it so I can get a code coverage for tools like Sonar?
With normal code, I could do the following:
python -m unittest -v
But not sure how to do that if the code is inside a container image.
I'm using Bitbucket Pipelines as well.
There are different ways to run unit tests inside a container, and they mostly depend on where you are going to run the image.
Assuming you are going to run the image locally and you are comfortable with Docker, you could build your image with the development dependencies included and then override the image's entrypoint so that it runs your unit test suite. I suggest bind-mounting a host path so that you can retrieve any junit.xml (or other test report) files. This way, you just do:
docker run --entrypoint="/path/to/my/ut/runner" -v /tmp/testout:/container/path myimage
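As an illustration, the entrypoint above could be a small Python script like the following, which runs the unittest suite and writes a plain-text report to the bind-mounted directory (the test and report paths are assumptions, not from the original answer):

# run_tests.py - hypothetical test-runner entrypoint for the docker run command above
import sys
import unittest

# Discover all tests packaged into the image (the directory is an assumption).
suite = unittest.defaultTestLoader.discover("/var/task/tests")

# Write a plain-text report to the path that is bind-mounted from the host.
with open("/container/path/report.txt", "w") as report:
    result = unittest.TextTestRunner(stream=report, verbosity=2).run(suite)

# Propagate failure through the container's exit code so CI can detect it.
sys.exit(0 if result.wasSuccessful() else 1)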
Assuming you want to run the lambda remotely, I suggest either binding it to an API Gateway and performing a remote API call (maybe create an endpoint just for development purposes that returns the test report), or using additional tools like Terratest to invoke the lambda remotely without any additional infrastructure (note that this requires using Terraform). Finally, note that the AWS Serverless Application Model documentation gives additional examples of how you can run your tests inside a lambda.

Upload docker image to GCR through python

I'm working on Python code where I need to build a Docker image and push it to GCR (Google Container Registry). I'm able to create a Docker image using the Docker SDK for Python, but I'm not able to find a way to push it to GCR. I was looking into the docker_client.images.push() method, but I don't see a way to connect to GCR with it. I can build the Docker image using docker_client.images.build(), but I can't find any way to push it to Google Container Registry. There are ways to push to a Docker registry in general, but I need to push specifically to GCR.
I have already implemented this using the gcloud CLI and through Azure DevOps, but now I'm trying to do the same from a Python application.
Any help/suggestion is appreciated.
This seems to have worked for me using the Docker SDK:
import docker
client = docker.from_env()
# Build the image from the directory that contains your Dockerfile
image, logs = client.images.build(path="<path/to/your/docker-repo>")
# Tag the image with its GCR destination, then push it
image_dest = "gcr.io/<your-gcp-project>/<your-repo-name>"
image.tag(image_dest)
client.api.push(image_dest)
I also had to add permissions with gcloud auth configure-docker.
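If you would rather not rely on gcloud auth configure-docker (for example in a CI job), docker-py can also pass registry credentials explicitly on the push. A rough sketch using a GCR service-account key file (the file name is an assumption):

import docker

client = docker.from_env()
image_dest = "gcr.io/<your-gcp-project>/<your-repo-name>"

# GCR accepts the literal username "_json_key" with the contents of a
# service-account key file as the password.
with open("service-account.json") as f:  # hypothetical key file
    key = f.read()

for line in client.images.push(
    image_dest,
    auth_config={"username": "_json_key", "password": key},
    stream=True,
    decode=True,
):
    print(line)  # surface push progress and any auth errors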
Have you looked at the container registry python library?
https://github.com/google/containerregistry
Or better yet, have you considered switching to Go?
https://github.com/google/go-containerregistry

Starting multiple containers with configs using the Python Docker SDK

I am using the Docker Python SDK docker-py to create a script that can start one or more containers (depending on a program argument, e.g. script.py --all or script.py --specific_container), and each container has to be able to start with its own configuration (image, container_name, etc.), just like in a typical docker-compose.yml file.
So basically, I'm trying to do the same thing docker-compose does, just with the Python Docker SDK.
I've read that some people stick with docker-compose by calling it through subprocess, but that is not recommended and I would like to avoid it.
I've been searching for existing libraries for this, but I haven't found anything yet. Do you know of anything I could use?
Another option would be to store configuration files for the "specific_container" profiles and for the "all" profile as JSON (or some other format), parse them, and use them to populate the Docker SDK's containers.run() method, which accepts most of the options you can also set in a docker-compose file (a rough sketch of this idea follows below).
Maybe someone knows another, better solution?
Thanks in advance guys.
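A minimal sketch of that JSON-config idea, in case it helps frame the question; the file name, keys, and images are made up for illustration:

import json
import docker

client = docker.from_env()

# containers.json (hypothetical) maps container names to keyword arguments
# for client.containers.run(), e.g.
# {"web": {"image": "nginx:latest", "ports": {"80/tcp": 8080}},
#  "db":  {"image": "postgres:15", "environment": {"POSTGRES_PASSWORD": "secret"}}}
with open("containers.json") as f:
    profiles = json.load(f)

def start(names):
    for name in names:
        cfg = profiles[name]
        client.containers.run(name=name, detach=True, **cfg)

start(profiles.keys())   # script.py --all
# start(["web"])         # script.py --specific_container web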

How do I upload my own binary (Python module) as a resource for my Kubernetes application?

I have my own Python module (an '.so' file that I'm able to import locally) that I want to make available to my application running in Kubernetes. I am wholly unfamiliar with Kubernetes and Helm, and the documentation and attempts I've made so far haven't gotten me anywhere.
I looked into ConfigMaps, trying kubectl.exe create configmap mymodule --from-file=MyModule.so, but kubectl says "Request entity too large: limit is 3145728". (My binary file is ~6mb.) I don't know if this is even the appropriate way to get my file there. I've also looked at Helm Charts, but I see nothing about how to package up a file to upload. Helm Charts look more like a way to configure deployment of existing services.
What's the appropriate way to package up my file, upload it, and use it within my application (ensuring that Python will be able to import MyModule successfully when running in my AKS cluster)?
The Python module should be added to the container image that runs in Kubernetes, not to Kubernetes itself.
The container image currently running in Kubernetes has a build process, usually controlled by a Dockerfile. That container image is then published to an image registry, from which the container runtime in Kubernetes pulls the image and runs the container.
If you don't currently build this container yourself, you may need to create your own build process to add the Python module to the existing image. In a Dockerfile you start FROM the existing image and then add your content:
FROM old/image:1.7.1
COPY MyModule.so /app/
Publish the new container image to a registry your cluster can pull from (for AKS, typically Azure Container Registry) so it is available for use in your AKS cluster.
The only change you might need to make in Kubernetes then is to set the image for the deployment to the newly published image.
Here is a simple end-to-end guide for a Python application.
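As a quick sanity check before updating the deployment, something like the following can confirm that the module imports correctly inside the new image (the image tag is a placeholder; MyModule is the module from the question):

import docker

client = docker.from_env()
new_image = "myregistry.example.com/myapp:with-mymodule"  # placeholder tag

# Run the new image once and try to import the module inside it before
# pointing the Kubernetes deployment at this tag.
output = client.containers.run(
    new_image,
    command=["python", "-c", "import MyModule; print('MyModule OK')"],
    remove=True,
)
print(output.decode())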
Kubernetes is a container orchestration engine. A binary / package from that perspective is part of the application running within a container. As such, normally you would add your package to the container during its build process and then deploy new version of the container.
Details will depend on your application and build pipeline, but generally you would make the package available on the build agent in some way (from a package repository or by copying it manually), then copy it into the container and install it within the Dockerfile.

docker: is it possible to run containers from an other container?

I am running a dockerized Python web application that has to run long tasks on certain requests (i.e. running some R scripts that take around one minute to complete). At the moment I put everything in one container and simply run it like this.
However, I think it would be faster and cleaner to separate the web app and the R scripts (one process = one container). I was therefore wondering if there is a way to run a container from within another container (i.e. being able to call docker run [...] on the host from the already-dockerized web application).
I tried to search for this and found some useful information on linking containers together, but in my case I'd be more interested in being able to create single-use containers on the fly.
I quite like this solution: Run docker inside a docker container?, which basically lets you use the Docker daemon that's already running on the host.
But if you really want to run docker in docker, here is the official solution using the dind image: https://blog.docker.com/2013/09/docker-can-now-run-within-docker/
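For the first approach (reusing the host's Docker daemon), a minimal sketch with docker-py, assuming the host socket is mounted into the web-app container with -v /var/run/docker.sock:/var/run/docker.sock; the image name and paths are illustrative:

import docker

# Talk to the host's Docker daemon through the bind-mounted socket
# (docker.from_env() also works with the default DOCKER_HOST).
client = docker.DockerClient(base_url="unix://var/run/docker.sock")

# Single-use container for a long-running R script. Note that because the
# daemon lives on the host, volume paths here refer to host paths, not to
# paths inside the calling container.
logs = client.containers.run(
    "rocker/r-base",
    command=["Rscript", "/scripts/long_task.R"],
    volumes={"/scripts": {"bind": "/scripts", "mode": "ro"}},
    remove=True,  # clean up the container when it exits
)
print(logs.decode())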
