Kubernetes python client equivalent of "kubectl wait --for " command - python

I am using kubernetes-client/python and want to write a method that blocks until a set of Pods is in the Ready state (Running phase). I found that kubectl supports this via the wait --for command. Can someone please help me achieve the same functionality using the Kubernetes Python client?
To be precise, I am mostly interested in the equivalent of:
kubectl wait --for condition=Ready pod -l 'app in (kafka,elasticsearch)'

You can use the watch functionality available in the client library.
import time
import logging

from kubernetes import client, config, watch

logger = logging.getLogger(__name__)

config.load_kube_config()
core_v1 = client.CoreV1Api()

def wait_for_pod_running(namespace, label, full_name):
    """Block until a pod matching the label selector reaches the Running phase."""
    start_time = time.time()
    w = watch.Watch()
    for event in w.stream(func=core_v1.list_namespaced_pod,
                          namespace=namespace,
                          label_selector=label,
                          timeout_seconds=60):
        if event["object"].status.phase == "Running":
            w.stop()
            end_time = time.time()
            logger.info("%s started in %0.2f sec", full_name, end_time - start_time)
            return
        # event["type"] can be ADDED, MODIFIED or DELETED
        if event["type"] == "DELETED":
            # Pod was deleted while we were waiting for it to start.
            logger.debug("%s deleted before it started", full_name)
            w.stop()
            return
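Note that checking status.phase == "Running" is not quite the same as kubectl wait --for condition=Ready; a pod can be Running before its readiness probes pass. A minimal sketch that inspects the Ready condition instead (reusing the core_v1 client from above; the helper name is my own) could look like this:

def wait_for_ready(namespace, label, timeout=60):
    """Block until every pod matching the label selector reports condition Ready=True."""
    w = watch.Watch()
    for event in w.stream(func=core_v1.list_namespaced_pod,
                          namespace=namespace,
                          label_selector=label,
                          timeout_seconds=timeout):
        conditions = event["object"].status.conditions or []
        if any(c.type == "Ready" and c.status == "True" for c in conditions):
            # This pod is Ready; re-list to confirm all matching pods are Ready too.
            pods = core_v1.list_namespaced_pod(namespace, label_selector=label).items
            if all(any(c.type == "Ready" and c.status == "True"
                       for c in (p.status.conditions or []))
                   for p in pods):
                w.stop()
                return True
    return False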

Related

fabric 2 traffic generation with non-blocking commands

I need to run some tests with a traffic generator that has different client and server commands. I would like to roll this into a fabric2 script which executes the traffic generation commands while cd'd into /root.
I have public-key authentication on the iperf machines. How can I run this traffic generation test under fabric2?
This was a little interesting to get running because the fabric2 docs don't include much information about run() arguments; you need to look at the invoke Runner.run() documentation to see all the fabric run() keywords.
The key to making iperf work in this case was setting pty=True and asynchronous=True when running the iperf server command. If the iperf server were not run asynchronously, it would block execution of the iperf client command.
# Save this script as run_iperf.py and run with "python run_iperf.py"
from getpass import getuser
import os
#from fabric import Config, SerialGroup, ThreadingGroup, exceptions, runners
#from fabric.exceptions import GroupException
from fabric import Connection

server_vm = "10.1.0.1"
client_vm = "10.2.0.1"

# This matters because my user .ssh/id_rsa.pub is authorized on the remote systems
assert getuser() == "mpenning"

hosts = list()
conn1 = Connection(host=client_vm, user="root",
                   connect_kwargs={"key_filename": os.path.expanduser("~/.ssh/id_rsa")})
conn2 = Connection(host=server_vm, user="root",
                   connect_kwargs={"key_filename": os.path.expanduser("~/.ssh/id_rsa")})
hosts.append(conn1)
hosts.append(conn2)

iperf_udp_client_cmd = "nice -19 iperf3 --plus-more-client-commands"
iperf_udp_server_cmd = "nice -19 iperf3 --plus-more-server-commands"

# ThreadingGroup is optional for this use case, but the iperf commands
# definitely require pty and asynchronous (server-side)...
# ThreadingGroup() is required for concurrent fabric commands.
#
# Uncomment below to use ThreadingGroup()...
# t_hosts = ThreadingGroup.from_connections(hosts)
#
# also ref invoke Runner.run() for more run() args:
# -> https://github.com/pyinvoke/invoke/blob/master/invoke/runners.py
with conn2.cd("/root"):
    conn2.run(iperf_udp_server_cmd, pty=True, asynchronous=True, disown=False, echo=True)
with conn1.cd("/root"):
    conn1.run("sleep 1;%s" % iperf_udp_client_cmd, pty=True, asynchronous=False, echo=True)
This script was loosely based on this answer:
https://stackoverflow.com/a/53763786/667301

How do i send a signal FROM container TO host on some event?

I have a mariadb running in a container. On 'docker run', an import script (from a db dump) is run by mariadb, which creates users, builds the schema, etc.
As the size of that dump script grows, the time to do the import increases. At this point it's about 8-10 seconds, but I expect the amount of data to increase substantially, and the import time will become more difficult to predict.
I'd like to be able to send a signal from the container to the host, to let it know that the data has been loaded and that the db is ready to be used. So far I have found info on how to send a signal from one container to another container, but there's no information on how to send a signal from a container to the host. Also, I need to be able to do this programmatically, as creating the container is part of a larger pipeline.
Ideally, I'd like to be able to do something like this:
client = docker.from_env()
db_c = client.containers.run('my_db_image', ....)
# check for signal from db_c container
# do other things
Thank you!
AFAIK you cannot send signals from the container to a process running on the host, but there are other ways to know when the import has finished. I think the easiest is to start the container in detached mode and wait until a specific line gets logged. The following script, for example, waits until the line done is logged:
import docker

client = docker.from_env()
container = client.containers.run(
    'ubuntu:latest',
    'bash -c "for i in {1..10}; do sleep 1; echo working; done; echo done"',
    detach=True)
print('container started')
for line in container.logs(stream=True):
    line = line.strip().decode('utf-8')
    print(line)
    if line == 'done':
        break
print('continue....')
If the output of the import script goes to stdout it could contain a simple print at the end:
select 'The import has finished' AS '';
Wait for this string in the python script.
Another approach is to use some other form of inter-process communication. An example using named pipes:
import errno
import os

import docker

client = docker.from_env()
FIFO = '/tmp/apipe'

# create the pipe
try:
    os.mkfifo(FIFO)
except OSError as oe:
    if oe.errno != errno.EEXIST:
        raise

# start the container sharing the pipe
container = client.containers.run(
    'ubuntu:latest',
    'bash -c "sleep 5; echo done > /tmp/apipe"',
    volumes={FIFO: {'bind': FIFO, 'mode': 'rw'}},
    detach=True)
print("container started")

with open(FIFO) as fifo:
    print("FIFO opened")
    while True:
        data = fifo.read()
        if len(data) == 0:
            print("Writer closed")
            break
        print('Read: "{0}"'.format(data))
print("continue...")
The host shares the named pipe with the container. In the python script the read call to the FIFO is blocked until some data is available in the pipe.
In the container the import script writes to the pipe, notifying the program that the data has been loaded. The MySQL system command \! (which executes an external shell command) might come in handy in this situation. You could simply add this to the end of the script:
\! echo done > /tmp/apipe
In a similar way you could use IPC sockets (aka Unix sockets) or shared memory but things get a bit more complicated.
Yet another solution is to add a health-check to the container. The health status can be polled on the host by inspecting the container. See How to wait until docker start is finished?
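For the health-check route, a rough docker-py sketch might look like this (the health-check test command, image name, and polling logic are assumptions to adapt to your setup; the Docker API expects durations in nanoseconds):

import time

import docker
from docker.types import Healthcheck

client = docker.from_env()

NS = 1_000_000_000   # nanoseconds per second
container = client.containers.run(
    'my_db_image',   # image name taken from the question; adapt as needed
    detach=True,
    healthcheck=Healthcheck(
        test=["CMD-SHELL", "mysqladmin ping -h localhost || exit 1"],  # assumed check command
        interval=2 * NS,
        timeout=1 * NS,
        retries=30,
    ))

# poll the health status that "docker inspect" reports
while True:
    container.reload()
    status = container.attrs['State']['Health']['Status']   # starting / healthy / unhealthy
    if status == 'healthy':
        break
    if status == 'unhealthy':
        raise RuntimeError("database container failed its health check")
    time.sleep(1)
print("db is ready")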
Edited:
The above approaches assume the container is initialized and accepting connections. If the script is executed as part of the initialization process (Initializing a fresh instance), which seems to be the case here, the database is not yet accepting connections when the import completes. During initialization the server is temporarily started with --skip_networking (allowing only local clients), and only after the initialization completes is it restarted and made available remotely.
You can add this code to check whether the db is ready to accept connections:
import time

import MySQLdb

db = None
while db is None:
    try:
        db = MySQLdb.connect(host='MYHost', user='MYNAME', passwd='PASS', db='MYDB')
    except MySQLdb.OperationalError:
        print("Still waiting for the DB")
        time.sleep(10)

Google Pubsub emulator with python

Does anyone have a very basic pub/sub example for Python that uses the emulator?
Here is my subscriber code:
## setup subscribers
from google.cloud import pubsub

print("subscribing to topic")
subscriber = pubsub.SubscriberClient()
subscription_path = subscriber.subscription_path(app.config['PUB_SUB_PROJECT'], app.config['PUB_SUB_TOPIC'])

def callback(message):
    print('Received message: {}'.format(message))

subscriber.subscribe(subscription_path, callback=callback)
And then here is my code for publishing:
from google.cloud import pubsub

publisher = pubsub.PublisherClient()
topic_path = publisher.topic_path(app.config['PUB_SUB_PROJECT'], app.config['PUB_SUB_TOPIC'])
try:
    topic = publisher.create_topic(topic_path)
except Exception:
    app.logger.info("Topic already exists")

data = "ein test"
data = data.encode('utf-8')
publisher.publish(topic_path, data=data)
print("published topic")
It seems that publishing works, but I think it's actually publishing to the cloud queue and not to the emulator. Therefore my subscriber never receives anything.
Any tips and tricks are welcome. I believe it's as simple as ensuring that the publisher publishes to the emulator and the subscriber reads from the emulator.
In Python you don't need to make any code changes to use the emulator. Instead, you must have the PUBSUB_EMULATOR_HOST and PUBSUB_PROJECT_ID environment variables defined.
The easiest way to set them is to run $(gcloud beta emulators pubsub env-init) before starting your program. If you are using Google App Engine locally, run that command and then start your app with dev_appserver.py app.yaml --env_var PUBSUB_EMULATOR_HOST=${PUBSUB_EMULATOR_HOST}.
This is documented at https://cloud.google.com/pubsub/docs/emulator
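If you prefer to set the variables from Python instead of the shell, a minimal sketch (assuming the emulator is running on its default localhost:8085 and using a made-up project id) would be:

import os

# Must be set before the pub/sub clients are created
os.environ["PUBSUB_EMULATOR_HOST"] = "localhost:8085"   # default emulator address (assumption)
os.environ["PUBSUB_PROJECT_ID"] = "my-project"          # hypothetical project id

from google.cloud import pubsub

publisher = pubsub.PublisherClient()    # now talks to the emulator, not the cloud
subscriber = pubsub.SubscriberClient()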

How can I list or discover queues on a RabbitMQ exchange using python?

I need to have a python client that can discover queues on a restarted RabbitMQ server exchange, and then start up clients to resume consuming messages from each queue. How can I discover queues from some RabbitMQ compatible python api/library?
There does not seem to be a direct AMQP way to manage the server, but there is a way you can do it from Python. I would recommend using the subprocess module combined with the rabbitmqctl command to check the status of the queues.
I am assuming that you are running this on Linux. From a command line, running:
rabbitmqctl list_queues
will result in:
Listing queues ...
pings 0
receptions 0
shoveled 0
test1 55199
...done.
(well, it did in my case due to my specific queues)
In your code, use this to get the output of rabbitmqctl:
import subprocess

proc = subprocess.Popen("/usr/sbin/rabbitmqctl list_queues", shell=True, stdout=subprocess.PIPE)
stdout_value = proc.communicate()[0]
print(stdout_value)
Then, just come up with your own code to parse stdout_value for your own use.
As far as I know, there isn't any way of doing this. That's nothing to do with Python, but because AMQP doesn't define any method of queue discovery.
In any case, in AMQP it's clients (consumers) that declare queues: publishers publish messages to an exchange with a routing key, and consumers determine which queues those routing keys go to. So it does not make sense to talk about queues in the absence of consumers.
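To illustrate that point, a consumer normally declares and binds the queue it wants to read from when it starts; a small pika sketch (the queue and exchange names here are made up) looks like this:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# The consumer declares the queue it wants to read from and binds it to an exchange;
# the broker creates the queue if it does not already exist.
channel.exchange_declare(exchange='events', exchange_type='topic')
channel.queue_declare(queue='my_consumer_queue', durable=True)
channel.queue_bind(queue='my_consumer_queue', exchange='events', routing_key='events.#')

def on_message(ch, method, properties, body):
    print("received:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='my_consumer_queue', on_message_callback=on_message)
channel.start_consuming()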
You can enable the rabbitmq_management plugin:
sudo /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
Then use the REST API:
import requests

def rest_queue_list(user='guest', password='guest', host='localhost', port=15672, virtual_host=None):
    url = 'http://%s:%s/api/queues/%s' % (host, port, virtual_host or '')
    response = requests.get(url, auth=(user, password))
    queues = [q['name'] for q in response.json()]
    return queues
I'm using the requests library in this example, but it is not essential; any HTTP client will do.
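One usage note for the function above: to list only the queues of the default vhost /, the vhost name must be percent-encoded as %2F in the URL, for example:

from urllib.parse import quote

# list only the queues in the default vhost "/" (encoded as %2F in the URL path)
print(rest_queue_list(virtual_host=quote('/', safe='')))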
Also, I found a library that does this for us: pyrabbit
from pyrabbit.api import Client
cl = Client('localhost:15672', 'guest', 'guest')
queues = [q['name'] for q in cl.get_queues()]
Since I am a RabbitMQ beginner, take this with a grain of salt, but there's an interesting Management Plugin, which exposes an HTTP interface: "From here you can manage exchanges, queues, bindings, virtual hosts, users and permissions. Hopefully the UI is fairly self-explanatory."
http://www.rabbitmq.com/blog/2010/09/07/management-plugin-preview-release/
I use https://github.com/bkjones/pyrabbit. It talks directly to the API of RabbitMQ's management plugin, and is very handy for interrogating RabbitMQ.
Management features are due in a future version of AMQP, so for now you will have to wait for a new version that comes with that functionality.
I found this works for me, /els being my demo vhost name:
rabbitmqctl list_queues --vhost /els
pyrabbit didn't work so well for me; however, the Management Plugin itself has its own command line script that you can download from your own admin GUI and use later on (for example, I downloaded mine from http://localhost:15672/cli/ for local use).
I would simply use this. Just replace the user (default guest), passwd (default guest) and port with your values.
import requests
import json

def call_rabbitmq_api(host, port, user, passwd):
    url = 'http://%s:%s/api/queues' % (host, port)
    r = requests.get(url, auth=(user, passwd))
    return r

def get_queue_name(json_list):
    res = []
    for item in json_list:          # renamed so the json module is not shadowed
        res.append(item["name"])
    return res

if __name__ == '__main__':
    host = 'rabbitmq_host'
    port = 55672                    # management API port; 15672 on RabbitMQ 3.x and later
    user = 'guest'
    passwd = 'guest'
    res = call_rabbitmq_api(host, port, user, passwd)
    print("--- dump json ---")
    print(json.dumps(res.json(), indent=4))
    print("--- get queue name ---")
    q_name = get_queue_name(res.json())
    print(q_name)
Referred from here: https://gist.github.com/hiroakis/5088513#file-example_rabbitmq_api-py-L2

tornado - transferring a file to cdn without blocking

I have the nginx upload module handling site uploads, but still need to transfer files (let's say 3-20mb each) to our cdn, and would rather not delegate that to a background job.
What is the best way to do this with tornado without blocking other requests? Can I do this in an async callback?
You may find it useful in the overall architecture of your site to add a message queuing service such as RabbitMQ.
This would let you complete the upload via the nginx module, then in the tornado handler, post a message containing the uploaded file path and exit. A separate process would watch for these messages and handle the transfer to your CDN. This type of service would be useful for many other tasks that could be handled offline (sending emails, etc.). As your system grows, this also provides you a mechanism to scale by moving queue processing to separate machines.
I am using an architecture very similar to this. Just make sure to add your message consumer process to supervisord or whatever you are using to manage your processes.
In terms of implementation, if you are on Ubuntu installing RabbitMQ is a simple:
sudo apt-get install rabbitmq-server
On CentOS w/EPEL repositories:
yum install rabbitmq-server
There are a number of Python bindings to RabbitMQ. Pika is one of them and it happens to be created by an employee of LShift, who is responsible for RabbitMQ.
Below is a bit of sample code from the Pika repo. You can easily imagine how the handle_delivery method would accept a message containing a filepath and push it to your CDN.
import sys
import asyncore

import pika

# Note: this sample uses the asyncore adapter from older pika releases.
conn = pika.AsyncoreConnection(pika.ConnectionParameters(
        sys.argv[1] if len(sys.argv) > 1 else '127.0.0.1',
        credentials=pika.PlainCredentials('guest', 'guest')))
print('Connected to %r' % (conn.server_properties,))

ch = conn.channel()
ch.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False)

should_quit = False

def handle_delivery(ch, method, header, body):
    print("method=%r" % (method,))
    print("header=%r" % (header,))
    print(" body=%r" % (body,))
    ch.basic_ack(delivery_tag=method.delivery_tag)
    global should_quit
    should_quit = True

tag = ch.basic_consume(handle_delivery, queue='test')
while conn.is_alive() and not should_quit:
    asyncore.loop(count=1)
if conn.is_alive():
    ch.basic_cancel(tag)
conn.close()
print(conn.connection_close)
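Current pika releases no longer ship the asyncore adapter used above; a rough modern equivalent using BlockingConnection (a sketch, not code from the Pika repo) would be:

import sys

import pika

conn = pika.BlockingConnection(pika.ConnectionParameters(
    host=sys.argv[1] if len(sys.argv) > 1 else '127.0.0.1',
    credentials=pika.PlainCredentials('guest', 'guest')))
ch = conn.channel()
ch.queue_declare(queue="test", durable=True, exclusive=False, auto_delete=False)

def handle_delivery(ch, method, properties, body):
    # Here you would read the file path from the body and push the file to your CDN.
    print("received: %r" % (body,))
    ch.basic_ack(delivery_tag=method.delivery_tag)
    ch.stop_consuming()   # stop after one message, mirroring the original sample

ch.basic_consume(queue="test", on_message_callback=handle_delivery)
ch.start_consuming()
conn.close()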
Advice on the tornado google group points to using an async callback (documented at http://www.tornadoweb.org/documentation#non-blocking-asynchronous-requests) to move the file to the cdn.
The nginx upload module writes the file to disk and then passes parameters describing the upload(s) back to the view. Therefore the file isn't in memory, but the time it takes to read it from disk (which would cause the request process to block itself, but not other tornado processes, afaik) is negligible.
That said, anything that doesn't need to be processed online shouldn't be, and should be deferred to a task queue like celeryd or similar.
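As a rough illustration of that approach with current tornado versions (which use coroutines rather than explicit callbacks; the CDN URL, auth header, and the form field carrying the temp file path are all placeholders), the handler could push the file to the CDN with the non-blocking AsyncHTTPClient:

import tornado.web
from tornado.httpclient import AsyncHTTPClient

class UploadDoneHandler(tornado.web.RequestHandler):
    async def post(self):
        # The nginx upload module passes the path of the file it already wrote to disk;
        # the parameter name below is hypothetical and depends on your nginx config.
        tmp_path = self.get_argument("file.path")
        with open(tmp_path, "rb") as f:
            body = f.read()
        # Non-blocking PUT to the CDN; other requests keep being served while we await.
        client = AsyncHTTPClient()
        await client.fetch("https://cdn.example.com/uploads/some-object",  # placeholder URL
                           method="PUT",
                           body=body,
                           headers={"Authorization": "Bearer <token>"},    # placeholder auth
                           request_timeout=300)
        self.write("ok")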
