I have deployed my Django application on AWS using Kubernetes. The application is deployed in Docker containers.
I have created a custom management command, let's say:
python manage.py customcommand
I want to execute it at the Kubernetes level.
To get the pods I am using:
kubectl get pods
I have been trying to find a solution but have not succeeded.
Thank you.
You can access a shell in your running pod by using a command similar to the following:
kubectl exec -ti <podname> sh
From the shell prompt you get, you can run your administrative command.
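If you just need to run the custom management command (rather than a full interactive shell), you can also invoke it directly with kubectl exec. A minimal sketch, assuming manage.py sits in the container's working directory and <podname> comes from kubectl get pods:
kubectl exec <podname> -- python manage.py customcommand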
It's been asked before, but I haven't been able to find the answer so far. I have a script which is called via a Flask app. It's Dockerized and I used docker-compose.yml. The docker command, which worked outside of Docker, creates an HTML file using openscad. As you can see below, it takes a variable path:
cmd_args = f"docker run -v '{path}':/documents/ --rm --name manual-asciidoc-to-html " \
f"asciidoctor/docker-asciidoctor asciidoctor -D /documents *.adoc"
Popen(cmd_args, shell=True)
time.sleep(1)
When the script executes, the printout in the terminal shows:
myapp | /bin/sh: 1: docker: not found
How can I get this docker command to run in my already running docker container?
I don't really get what you are trying to say here, but I'm assuming you want to run the docker command from within your container. You don't really do it that way. The way to communicate with the Docker daemon from within a container is to mount the host's Docker Unix socket into the container, either with the -v flag when starting the container or by adding it to the volumes section of your docker-compose file:
volumes:
  - /var/run/docker.sock:/var/run/docker.sock
After doing that you should be able to use the Docker API client (https://github.com/docker/docker-py) to connect to the daemon from within the container and perform the actions you want. You should be able to convert the command you initially wanted to execute into simple Docker API calls.
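A minimal sketch of what that could look like with docker-py, assuming the socket is mounted as above; the container name mirrors the one from the question, and path is the host directory variable the Flask app already has:
import docker

client = docker.from_env()  # talks to the daemon through the mounted socket

# Rough equivalent of the original `docker run ...` call; the command goes
# through `sh -c` so the *.adoc glob is expanded inside the container.
client.containers.run(
    "asciidoctor/docker-asciidoctor",
    'sh -c "asciidoctor -D /documents /documents/*.adoc"',
    volumes={path: {"bind": "/documents", "mode": "rw"}},
    name="manual-asciidoc-to-html",
    remove=True,
)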
Regards
Dominik
Context
I am running Apache Airflow and trying to run a sample Docker container using Airflow's DockerOperator. I am testing using docker-compose and deploying to Kubernetes (EKS). Whenever I run my task, I receive the error: ERROR - Error while fetching server API version. The error happens both on docker-compose and on EKS (Kubernetes).
I guess your Airflow Docker container is trying to launch a worker container on the same Docker machine where it is running. To do so, you need to give Airflow's container special permissions and, as you said, access to the Docker socket. This is called Docker in Docker (DIND). There is more than one way to do it; in this tutorial three different ways are explained. It also depends on where those containers are run: Kubernetes, Docker machines, external services (like GitLab or GitHub), etc.
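For the docker-compose case, for example, that usually means mounting the host's Docker socket into the service that runs the Airflow scheduler/worker. A rough sketch (the service name and image tag are only illustrative):
services:
  airflow-scheduler:
    image: apache/airflow:2.7.1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
With the socket mounted, the DockerOperator should be able to reach the daemon through its default unix://var/run/docker.sock URL (or whatever you pass as docker_url).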
There is a Python script in my repo that I would like to run whenever I call an API. This Python script merely transfers data from one database to another. The Jenkins server for the project is currently used for builds/pipelines/running tests, and I was wondering if I could use this Jenkins service to run the script when I call an API, since I found that Jenkins allows you to remotely trigger scripts via REST.
I was wondering if I could use Jenkins's remote-trigger feature to run this Python script in my repo when I need to. The Python script is built from a Python image in the Dockerfile, so Docker sets up the dependencies/Python needed to run it. The command Jenkins runs is something like docker build and docker run.
Yes you can.
Just set up a pipeline that:
Runs in Docker (with your image). Have a look at this.
Does a git clone of your repository.
Runs your Python script with something like: sh "python <your script>"
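To trigger that pipeline via REST, you can enable "Trigger builds remotely" on the job and have your API call the build endpoint. A rough sketch, assuming a job name and tokens you configure yourself:
# Kick off the Jenkins job from your API (or any HTTP client)
curl -X POST "https://<jenkins-host>/job/<job-name>/build?token=<trigger-token>" \
     --user "<user>:<api-token>"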
I am trying to run Django in a Docker container, using SQLite as the db and the Django dev server. So far I was able to launch the Django server locally:
python .\manage.py runserver
I can build the Docker image using a Dockerfile:
docker build . -t pythocker
But when I run the image with docker run -p 8000:8000 pythocker, no output is shown and the server is not reachable; I have to kill the running container.
If I add the -it flag to the docker run command, then the server runs and I can go to http://192.168.99.100:8000 and see the Django welcome page. Why is this flag mandatory here?
docker logs on the container gives nothing. I also tried to add custom logging inside manage.py, but it is not displayed in the console or in the Docker logs.
I am using Docker Toolbox for Windows, as I only have a Windows Home computer.
I want to run a Python script 'indefinitely' on an EB instance, automatically once deployed (I don't want to SSH in). To do that, I think I should run
source /opt/python/run/venv/bin/activate && nohup python myscriptname.py &
after the Elastic Beanstalk instance is deployed. But where do I put the above command to automatically run it post-deployment?
I have looked at How do I install a Python script on Amazon's Elastic Beanstalk? but I don't think a cron job is suitable for me.
I have used container_commands before but they seem to run pre-deployment.
Container commands run after the application and web server have been set up and the application version archive has been extracted, but before the application version is deployed.
How do I run a post-deploy script?
As always, there are a few ways to do this in AWS. You could start an instance, customize away, then create an AMI from this image. You could then tell Beanstalk to use this custom AMI.
Or you could use ebextensions to customize your instance on creation and app restart. I'm currently using this to install (if not already installed) Logstash and download the latest GeoLiteCity db when my app starts up.
Here's a sample from the docs:
container_commands:
  collectstatic:
    command: "django-admin.py collectstatic --noinput"
  01syncdb:
    command: "django-admin.py syncdb --noinput"
    leader_only: true
  02migrate:
    command: "django-admin.py migrate"
    leader_only: true
  99customize:
    command: "scripts/customize.sh"