Upload a Docker image to GCR through Python

I'm working on Python code where I need to build a Docker image and push it to GCR (Google Container Registry). I'm able to create a Docker image using the Docker SDK for Python, but I can't find a way to push it to GCR. I looked into the docker_client.images.push() method, but I don't see a way to connect to GCR with it. I can build the Docker image using docker_client.images.build(), but I can't find any way to push it to Google Container Registry. There are ways to push to the Docker registry, but I need it specifically for GCR.
I have already implemented this using the gcloud CLI and through Azure DevOps, but now I'm trying to do the same from a Python application.
Any help/suggestion is appreciated.

This seems to have worked for me using the Docker SDK:
import docker
client = docker.from_env()
# build the image from your local Dockerfile
image, logs = client.images.build(path="<path/to/your/docker-repo>")
# tag it with the GCR destination and push it
image_dest = "gcr.io/<your-gcp-project>/<your-repo-name>"
image.tag(image_dest)
client.api.push(image_dest)
I also had to add permissions with gcloud auth configure-docker.
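If you'd rather not rely on gcloud auth configure-docker, a possible alternative (only a sketch, not tested against every credential setup) is to log in to gcr.io directly from Python with an OAuth access token obtained via the google-auth library; the project and repo placeholders are the same as above:

import docker
import google.auth
import google.auth.transport.requests

# obtain an access token from the application default credentials
credentials, project = google.auth.default()
credentials.refresh(google.auth.transport.requests.Request())

client = docker.from_env()
client.login(
    username="oauth2accesstoken",   # literal username expected by GCR for token auth
    password=credentials.token,
    registry="https://gcr.io",
)
client.images.push("gcr.io/<your-gcp-project>/<your-repo-name>")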

Have you looked at the container registry Python library?
https://github.com/google/containerregistry
Or better yet, have you considered switching to Go?
https://github.com/google/go-containerregistry

Related

How to run a docker server in a Kubernetes container?

I'm building a simple kind of 'automation' around a container image builder.
I wrote it in Python and use nixpacks via subprocess.
The docker-py module is used to run the nixpacks result, push it to a registry, and delete the nixpacks result.
Currently it runs well on my laptop.
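For context, the local workflow is roughly the following (a minimal sketch; the nixpacks invocation, the image name, and the registry are placeholders):

import subprocess
import docker

IMAGE = "registry.example.com/myapp:latest"  # placeholder registry/image

# build the image with the nixpacks CLI
subprocess.run(["nixpacks", "build", ".", "--name", IMAGE], check=True)

# push the result and clean up via the local docker server
# (assumes docker is already logged in to the registry)
client = docker.from_env()
client.images.push(IMAGE)
client.images.remove(IMAGE)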
Now I want to containerize all my scripts, and also nixpacks, to run inside my Kubernetes cluster.
For that I need to build/have an image that has a 'docker server' inside.
The 'docker server' itself doesn't need to be accessible from the outside world, just locally by nixpacks and the Python script.
My question is: which Docker images currently have a docker server in them?
Is this enough?
Sincerely
-bino-

Azure cloud: publish with Docker greyed out

I'm new to Azure; I have used AWS for many years. I want to create an Azure Function from a Docker image (ideally published to Docker Hub), but when I try to manually create the function in the portal (new account), the Docker publish option is greyed out no matter how I vary the options. I have set up billing.
The answer is that, as of September 2021, Azure is unable to run Docker-based Functions on the Consumption service plan. That kind of sucks, but I'll see where that puts me.
The option magically stopped being greyed out.

Run a gsutil command in a Google Cloud Function

I would like to run a gsutil command every x minutes as a cloud function. I tried the following:
# main.py
import os

def sync():
    line = "gsutil -m rsync -r gs://some_bucket/folder gs://other_bucket/other_folder"
    os.system(line)
While the Cloud Function gets triggered, the execution of the line does not work (i.e. the files are not copied from one bucket to another). However, it works fine when I run it locally in PyCharm or from cmd. What is different about Cloud Functions?
You can use Cloud Run for this. You need to make very few changes to your code.
Create a container with gsutil and Python installed, for example using gcr.io/google.com/cloudsdktool/cloud-sdk as the base image.
Take care of the service account used when you deploy to Cloud Run; grant it the correct permissions to access your bucket.
Let me know if you need more guidance.
Cloud Functions server instances don't have gsutil installed. It works on your local machine because you do have it installed and configured there.
I suggest trying to find a way to do what you want with the Cloud Storage SDK for Python. Alternatively, you could figure out how to deploy gsutil with your function and how to configure and invoke it from your code, but that might be very difficult.
There's no straightforward option for that.
I think the best approach for Cloud Functions is to use the google-cloud-storage Python library.
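For example, the rsync in the question could be approximated with google-cloud-storage (only a sketch: it copies objects under a prefix but does not delete objects removed from the source, and the HTTP-triggered function signature is an assumption; bucket and folder names are taken from the question):

# main.py
from google.cloud import storage

def sync(request):  # HTTP-triggered entry point; adjust for your trigger type
    client = storage.Client()
    source_bucket = client.bucket("some_bucket")
    destination_bucket = client.bucket("other_bucket")

    # copy every object under the source prefix to the destination prefix
    for blob in client.list_blobs("some_bucket", prefix="folder/"):
        new_name = blob.name.replace("folder/", "other_folder/", 1)
        source_bucket.copy_blob(blob, destination_bucket, new_name)
    return "done"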

How do I upload my own binary (Python module) as a resource for my Kubernetes application?

I have my own Python module (an '.so' file that I'm able to import locally) that I want to make available to my application running in Kubernetes. I am wholly unfamiliar with Kubernetes and Helm, and the documentation and attempts I've made so far haven't gotten me anywhere.
I looked into ConfigMaps, trying kubectl.exe create configmap mymodule --from-file=MyModule.so, but kubectl says "Request entity too large: limit is 3145728". (My binary file is ~6mb.) I don't know if this is even the appropriate way to get my file there. I've also looked at Helm Charts, but I see nothing about how to package up a file to upload. Helm Charts look more like a way to configure deployment of existing services.
What's the appropriate way to package up my file, upload it, and use it within my application (ensuring that Python will be able to import MyModule successfully when running in my AKS cluster)?
The python module should be added to the container image that runs in Kubernetes, not to Kubernetes itself.
The current container image running in Kubernetes has a build process, usually controlled by a Dockerfile. That container image is then published to an image repository where the container runtime in Kubernetes can pull the image in from and run the container.
If you don't currently build this container, you may need to create your own build process to add the Python module to the existing container. In a Dockerfile, you start FROM the existing image and then add your content:
FROM old/image:1.7.1
COPY MyModule.so /app/
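To check the result, something along these lines (a sketch using the Docker SDK for Python; the image tag is a placeholder, and importing MyModule assumes /app is on the image's PYTHONPATH or working directory) builds the image and confirms the module imports inside the container:

import docker

client = docker.from_env()
# build the image from the Dockerfile above (tag is a placeholder)
image, _ = client.images.build(path=".", tag="myapp:with-module")

# smoke test: import the module inside a throwaway container
output = client.containers.run(
    "myapp:with-module",
    'python -c "import MyModule; print(MyModule.__file__)"',
    remove=True,
)
print(output.decode())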
Publish the new container image to a container registry (for an AKS cluster this is typically ACR, Azure Container Registry) so it is available for use in your cluster.
The only change you might need to make in Kubernetes then is to set the image for the deployment to the newly published image.
Here is a simple end-to-end guide for a Python application.
Kubernetes is a container orchestration engine. A binary/package, from that perspective, is part of the application running within a container. As such, you would normally add your package to the container during its build process and then deploy a new version of the container.
Details will depend on your application and build pipeline, but generally you would make the package available on the build agent in some way (using a package repository or copying it manually), then copy it into the container and install it within the Dockerfile.

What are the ways to deploy Python code on AWS EC2?

I have a Python project and I want to deploy it on an AWS EC2 instance. My project has dependencies on other Python libraries and uses programs installed on my machine. What are the alternatives for deploying my project on an AWS EC2 instance?
Further details: my project consists of a Celery periodic task that uses ffmpeg and Blender to create short videos.
I have checked Elastic Beanstalk, but it seems it is tailored for web apps. I don't know if containerizing my project via Docker is a good idea...
The manual (and cheapest) way to do it would be:
1- Launch a spot instance
2- git clone the project
3- Install the libraries via pip
4- Install all dependent programs
5- Launch the periodic task
I am looking for a more automatic way to do it.
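A rough idea of scripting those steps with boto3 (only a sketch: the region, AMI ID, key pair, repository URL, and install commands are all placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")  # placeholder region

# user data re-creating the manual steps on first boot (commands are placeholders)
user_data = """#!/bin/bash
yum install -y git
git clone https://github.com/me/my-project.git /opt/my-project
pip3 install -r /opt/my-project/requirements.txt
# install ffmpeg / blender here, then start the periodic task
cd /opt/my-project && celery -A tasks beat --detach
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key",                  # placeholder key pair
    UserData=user_data,
    InstanceMarketOptions={"MarketType": "spot"},  # request a spot instance
)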
Thanks.
Beanstalk is certainly an option. You don't necessarily have to use it for web apps, and you can configure all of the dependencies needed via .ebextensions.
Containerization is usually my go-to strategy now. If you get it working within Docker locally, you have several deployment options, and the whole thing gets much easier since you don't have to worry about setting up all the dependencies within the AWS instance.
Once you have it running in Docker you could use Beanstalk, ECS or CodeDeploy.
