I am following the Google Cloud Functions Python testing example here:
https://cloud.google.com/functions/docs/testing/test-http
import os
import subprocess
import uuid

import requests
from requests.packages.urllib3.util.retry import Retry


def test_args():
    name = str(uuid.uuid4())
    port = os.getenv('PORT', 8080)  # Each functions framework instance needs a unique port

    process = subprocess.Popen(
        [
            'functions-framework',
            '--target', 'hello_http',
            '--port', str(port)
        ],
        cwd=os.path.dirname(__file__),
        stdout=subprocess.PIPE
    )

    # Send HTTP request simulating Pub/Sub message
    # (GCF translates Pub/Sub messages to HTTP requests internally)
    BASE_URL = f'http://localhost:{port}'

    retry_policy = Retry(total=6, backoff_factor=1)
    retry_adapter = requests.adapters.HTTPAdapter(
        max_retries=retry_policy)

    session = requests.Session()
    session.mount(BASE_URL, retry_adapter)

    name = str(uuid.uuid4())
    res = session.post(
        BASE_URL,
        json={'name': name}
    )

    assert res.text == 'Hello {}!'.format(name)

    # Stop the functions framework process
    process.kill()
    process.wait()
After running pytest against this test file, the test fails and pytest exits almost immediately, but it doesn't appear to have killed the functions-framework process:
± ps -ef | grep functions
thoraxe 11661 1985 49 20:30 pts/0 00:00:02 /home/thoraxe/.pyenv/versions/3.9.13/envs/avogadro-trainer-3-9-13/bin/python3.9 /home/thoraxe/.pyenv/versions/3.9.13/envs/avogadro-trainer-3-9-13/bin/functions-framework --target train --port 8080
thoraxe 11684 11661 0 20:30 pts/0 00:00:00 /home/thoraxe/.pyenv/versions/3.9.13/envs/avogadro-trainer-3-9-13/bin/python3.9 /home/thoraxe/.pyenv/versions/3.9.13/envs/avogadro-trainer-3-9-13/bin/functions-framework --target train --port 8080
I'm using Python 3.9.13 in a virtual environment on Fedora.
As this is sample code from Google, I'd expect it to work, but something is definitely not working here. Can someone suggest what I might be doing wrong?
When a Python assertion fails, the test function exits immediately at that point. The kill/wait calls are never reached unless the test succeeds. This is a major bummer because the functions framework keeps running in the background, and new code changes apparently aren't picked up on subsequent pytest runs.
Using a wrapper framework like https://github.com/okken/pytest-check solves the problem, because all steps are performed even if some of them fail.
However, note that legitimate Python failures/errors/explosions will still leave the functions framework running.
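An alternative to pytest-check, sketched below under the same assumptions as the example above (a hello_http target served by functions-framework on a local port), is to put the cleanup in a try/finally so it runs no matter how the assertion turns out:
import os
import subprocess
import uuid

import requests
from requests.packages.urllib3.util.retry import Retry


def test_args_with_cleanup():
    port = os.getenv('PORT', 8080)
    process = subprocess.Popen(
        ['functions-framework', '--target', 'hello_http', '--port', str(port)],
        cwd=os.path.dirname(__file__),
        stdout=subprocess.PIPE,
    )
    try:
        base_url = f'http://localhost:{port}'
        retry_adapter = requests.adapters.HTTPAdapter(
            max_retries=Retry(total=6, backoff_factor=1))
        session = requests.Session()
        session.mount(base_url, retry_adapter)

        name = str(uuid.uuid4())
        res = session.post(base_url, json={'name': name})
        assert res.text == 'Hello {}!'.format(name)
    finally:
        # Runs whether the assertion passed or failed, so the subprocess
        # is always stopped.
        process.kill()
        process.wait()
The same idea can be packaged as a pytest fixture that starts the process, yields the base URL, and kills the process during teardown.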
I've spent the last hour and a half trying and failing to debug this test and I am utterly stumped. To simplify the process of testing the Flask server I am building, I have made a relatively simple script which starts the server, then runs pytest, kills the server, writes the outputs to files, and exits with Pytest's exit code. This code was working perfectly until today, and I haven't modified it since (aside from debugging this issue).
Here's the problem: when it gets to a certain point in the tests, it hangs. The weird thing is that this does not happen if I run my tests in any other way.
Debugging my server in VS Code, and running tests in the terminal: works
Running my server using the same code used in the test script and running pytest manually: works
Running pytest using the test script and running the server through the start server script (which uses the same code for running the server as the test script does) in a second terminal: works
Here's the other interesting thing: the tests always hang in the same place, part way through the setup fixture. It sends the clear command, and an echo request to the server (which prints the name of the current test). The database clears successfully, and the server echoes the correct information, but the echo route never exits - my tests never get a response. This echo route behaves perfectly for the 50 or so tests that happen before this point. If I comment out the test that is causing it to fail, it fails on the next test. If I comment out the call to the echo then it hangs on a later test on a completely different request to a different route. When it hangs, the server cannot be killed using a SIGTERM, but instead requires a SIGKILL.
Here is my echo route:
@debug.get('/echo')
def echo() -> IEcho:
    """
    Echo an input. This returns the given value, but also prints it to stdout
    on the server. Useful for debugging tests.

    ## Params:
    * `value` (`str`): value to echo
    """
    try:
        value = request.args['value']
    except KeyError:
        raise http_errors.BadRequest('echo route requires a `value` argument')

    to_print = f'{Fore.MAGENTA}[ECHO]\t\t{value}{Fore.RESET}'
    # Print it to both stdout and stderr to ensure it is seen across all logs
    # Otherwise it could be more difficult to figure out what's up with server
    # output
    print(to_print)
    print(to_print, file=sys.stderr)

    return {'value': value}
And here is my code that sends the requests:
def get(token: JWT | None, url: str, params: dict) -> dict:
    """
    Returns the response to a GET web request

    This also parses the response to help with error checking

    ### Args:
    * `url` (`str`): URL to request to
    * `params` (`dict`): parameters to send

    ### Returns:
    * `dict`: response data
    """
    return handle_response(requests.get(
        url,
        params=params,
        headers=encode_headers(token),
        timeout=3
    ))


def echo(value: str) -> IEcho:
    """
    Echo an input. This returns the given value, but also prints it to stdout
    on the server. Useful for debugging tests.

    ## Params:
    * `value` (`str`): value to echo
    """
    return cast(IEcho, get(None, f"{URL}/echo", {"value": value}))
@pytest.fixture(autouse=True)
def before_each(request: pytest.FixtureRequest):
    """Clear the database between tests"""
    clear()
    echo(f"{request.module.__name__}.{request.function.__name__}")
    print("After echo")  # This never prints
Here is my code for running Pytest in my test script
def pytest():
    pytest = subprocess.Popen(
        [sys.executable, '-u', '-m', 'pytest', '-v', '-s'],
    )
    # Wait for tests to finish
    print("🔨 Running tests...")
    try:
        ret = pytest.wait()
    except KeyboardInterrupt:
        print("❗ Testing cancelled")
        pytest.terminate()
        # write_outputs(pytest, None)
        # write_outputs(pytest, "pytest")
        raise
    # write_outputs(pytest, "pytest")
    if ret == 0:
        print("✅ It works!")
    else:
        print("❌ Tests failed")
    return bool(ret)
And here is my code for running my server in my test script:
def backend(debug=False, live_output=False):
    env = os.environ.copy()
    if debug:
        env.update({"ENSEMBLE_DEBUG": "TRUE"})
        debug_flag = ["--debug"]
    else:
        debug_flag = []

    if live_output is False:
        outputs = subprocess.PIPE
    else:
        outputs = None

    flask = subprocess.Popen(
        [sys.executable, '-u', '-m', 'flask'] + debug_flag + ['run'],
        env=env,
        stderr=outputs,
        stdout=outputs,
    )
    if outputs is not None and (flask.stderr is None or flask.stdout is None):
        print("❗ Can't read flask output", file=sys.stderr)
        flask.kill()
        sys.exit(1)

    # Request until we get a success, but crash if we failed to start in 10
    # seconds
    start_time = time.time()
    started = False
    while time.time() - start_time < 10:
        try:
            requests.get(
                f'http://localhost:{os.getenv("FLASK_RUN_PORT")}/debug/echo',
                params={'value': 'Test script startup...'},
            )
        except requests.ConnectionError:
            continue
        started = True
        break

    if not started:
        print("❗ Server failed to start in time")
        flask.kill()
        if outputs is not None:
            write_outputs(flask, None)
        sys.exit(1)
    else:
        if flask.poll() is not None:
            print("❗ Server crashed during startup")
            if outputs is not None:
                write_outputs(flask, None)
            sys.exit(1)

    print("✅ Server started")
    return flask
So, in summary, does anyone have any idea what on earth is happening? It freezes on such a simple route that I'm very concerned; I think I may have found some crazy bug in Flask or in the requests library or something.
Even if you don't know what's happening with this, it'd be really helpful to have any ideas as to how I can debug this further, as I have absolutely no idea what is going on.
It turns out that my server output was filling up the buffer of the stdout/stderr pipes, so the server blocked waiting for the buffer to be drained. Meanwhile, my test script was waiting for the tests to exit, and the tests could not progress unless the server responded, so the three processes reached a deadlock. I fixed it by redirecting the server output to a file, where the limited pipe buffer size isn't a problem.
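A minimal sketch of that fix, with hypothetical log file names; the key point is that subprocess.PIPE is replaced with real file handles, which never block the child when nobody reads them:
import subprocess
import sys

# Hypothetical log file names; any regular file works, because a file,
# unlike subprocess.PIPE, does not fill up and block the child process.
with open("flask_stdout.log", "w") as out, open("flask_stderr.log", "w") as err:
    flask = subprocess.Popen(
        [sys.executable, '-u', '-m', 'flask', 'run'],
        stdout=out,
        stderr=err,
    )
    # ... start pytest, wait for it to finish, then stop the server ...
    flask.terminate()
    flask.wait()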
I need to run some tests with a traffic generator that has different client and server commands. I would like to roll this into a fabric2 script which executes the traffic generation commands while cd'd into /root.
I have public-key authentication on the iperf machines. How can I run this traffic generation test under fabric2?
This was a little interesting to get running because the fabric2 docs don't include much information about run() arguments... you need to look at the invoke Runner.run() documentation to see all fabric run() keywords.
The key to making iperf work in this case was setting pty=True and asynchronous=True when I run the iperf server commands. If I did not run the iperf server as asynchronous, it would block execution of the iperf client command.
# Save this script as run_iperf.py and run with "python run_iperf.py"
from getpass import getuser
import os
#from fabric import Config, SerialGroup, ThreadingGroup, exceptions, runners
#from fabric.exceptions import GroupException
from fabric import Connection
server_vm = "10.1.0.1"
client_vm = "10.2.0.1"
# This matters because my user's .ssh/id_rsa.pub is authorized on the remote systems
assert getuser()=="mpenning"
hosts = list()
conn1 = Connection(host=client_vm, user="root",
                   connect_kwargs={"key_filename": os.path.expanduser("~/.ssh/id_rsa")})
conn2 = Connection(host=server_vm, user="root",
                   connect_kwargs={"key_filename": os.path.expanduser("~/.ssh/id_rsa")})
hosts.append(conn1)
hosts.append(conn2)
iperf_udp_client_cmd = "nice -19 iperf3 --plus-more-client-commands"
iperf_udp_server_cmd = "nice -19 iperf3 --plus-more-server-commands"
# ThreadingGroup is optional for this use case, but the iperf commands
# definitely require pty and asynchronous (server-side)...
# ThreadingGroup() is required for concurrent fabric commands.
#
# Uncomment below to use ThreadingGroup()...
# t_hosts = ThreadingGroup.from_connections(hosts)
#
# also ref invoke Runner.run() for more run() args:
# -> https://github.com/pyinvoke/invoke/blob/master/invoke/runners.py
with conn2.cd("/root"):
conn2.run(iperf_udp_server_cmd, pty=True, asynchronous=True, disown=False, echo=True)
with conn1.cd("/root"):
conn1.run("sleep 1;%s" % iperf_udp_client_cmd, pty=True, asynchronous=False, echo=True)
This script was loosely based on this answer:
https://stackoverflow.com/a/53763786/667301
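If you also need the server-side iperf output after the client run, invoke's Runner.run() documentation describes the asynchronous mode as returning a Promise you can join. A hedged sketch reusing the connections and commands above (and assuming the server command eventually exits, e.g. via iperf3 --one-off):
with conn2.cd("/root"):
    # asynchronous=True should return immediately with a Promise-like object
    server_promise = conn2.run(iperf_udp_server_cmd, pty=True, asynchronous=True, echo=True)

with conn1.cd("/root"):
    conn1.run("sleep 1;%s" % iperf_udp_client_cmd, pty=True, echo=True)

# Block until the server-side command finishes, then inspect its Result
server_result = server_promise.join()
print(server_result.stdout)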
Flow of the program is:
Connect to OpenSSH server on Linux machine using Paramiko library
Open X11 session
Run xterm executable
Run some other program (e.g. Firefox) by typing executable name in the terminal and running it.
I would be grateful if someone could explain how to make some executable run in a terminal that was opened using the following code, and provide sample source code (source):
import select
import sys
import paramiko
import Xlib.support.connect as xlib_connect
import os
import socket
import subprocess
# run xming
XmingProc = subprocess.Popen("C:/Program Files (x86)/Xming/Xming.exe :0 -clipboard -multiwindow")
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(SSHServerIP, SSHServerPort, username=user, password=pwd)
transport = ssh_client.get_transport()
channelOppositeEdges = {}
local_x11_display = xlib_connect.get_display(os.environ['DISPLAY'])
inputSockets = []
def x11_handler(channel, (src_addr, src_port)):
    local_x11_socket = xlib_connect.get_socket(*local_x11_display[:3])
    inputSockets.append(local_x11_socket)
    inputSockets.append(channel)
    channelOppositeEdges[local_x11_socket.fileno()] = channel
    channelOppositeEdges[channel.fileno()] = local_x11_socket
    transport._queue_incoming_channel(channel)

session = transport.open_session()
inputSockets.append(session)
session.request_x11(handler = x11_handler)
session.exec_command('xterm')
transport.accept()

while not session.exit_status_ready():
    readable, writable, exceptional = select.select(inputSockets, [], [])
    if len(transport.server_accepts) > 0:
        transport.accept()
    for sock in readable:
        if sock is session:
            while session.recv_ready():
                sys.stdout.write(session.recv(4096))
            while session.recv_stderr_ready():
                sys.stderr.write(session.recv_stderr(4096))
        else:
            try:
                data = sock.recv(4096)
                counterPartSocket = channelOppositeEdges[sock.fileno()]
                counterPartSocket.sendall(data)
            except socket.error:
                inputSockets.remove(sock)
                inputSockets.remove(counterPartSocket)
                del channelOppositeEdges[sock.fileno()]
                del channelOppositeEdges[counterPartSocket.fileno()]
                sock.close()
                counterPartSocket.close()

print 'Exit status:', session.recv_exit_status()
while session.recv_ready():
    sys.stdout.write(session.recv(4096))
while session.recv_stderr_ready():
    sys.stdout.write(session.recv_stderr(4096))
session.close()
XmingProc.terminate()
XmingProc.wait()
I was thinking about running the program in child thread, while the thread running the xterm is waiting for the child to terminate.
Well, this is a bit of a hack, but hey.
What you can do on the remote end is the following: inside the xterm, you run netcat, listen for any data coming in on some port, and pipe whatever you get into bash. It's not quite the same as typing it into xterm directly, but it's almost as good as typing it into bash directly, so I hope it'll get you a bit closer to your goal. If you really want to interact with xterm directly, you might want to read this.
For example:
terminal 1:
% nc -l 3333 | bash
terminal 2 (type echo hi here):
% nc localhost 3333
echo hi
Now you should see hi pop out of the first terminal. Now try it with xterm&. It worked for me.
Here's how you can automate this in Python. You may want to add some code that enables the server to tell the client when it's ready, rather than using the silly time.sleeps.
import select
import sys
import paramiko
import Xlib.support.connect as xlib_connect
import os
import socket
import subprocess
# for connecting to netcat running remotely
from multiprocessing import Process
import time
# data
import getpass
SSHServerPort=22
SSHServerIP = "localhost"
# get username/password interactively, or use some other method..
user = getpass.getuser()
pwd = getpass.getpass("enter pw for '" + user + "': ")
NETCAT_PORT = 3333
FIREFOX_CMD="/path/to/firefox &"
#FIREFOX_CMD="xclock&"#or this :)
def run_stuff_in_xterm():
    time.sleep(5)
    s = socket.socket(socket.AF_INET6 if ":" in SSHServerIP else socket.AF_INET, socket.SOCK_STREAM)
    s.connect((SSHServerIP, NETCAT_PORT))
    s.send("echo \"Hello there! Are you watching?\"\n")
    s.send(FIREFOX_CMD + "\n")
    time.sleep(30)
    s.send("echo bye bye\n")
    time.sleep(2)
    s.close()
# run xming
XmingProc = subprocess.Popen("C:/Program Files (x86)/Xming/Xming.exe :0 -clipboard -multiwindow")
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(SSHServerIP, SSHServerPort, username=user, password=pwd)
transport = ssh_client.get_transport()
channelOppositeEdges = {}
local_x11_display = xlib_connect.get_display(os.environ['DISPLAY'])
inputSockets = []
def x11_handler(channel, (src_addr, src_port)):
    local_x11_socket = xlib_connect.get_socket(*local_x11_display[:3])
    inputSockets.append(local_x11_socket)
    inputSockets.append(channel)
    channelOppositeEdges[local_x11_socket.fileno()] = channel
    channelOppositeEdges[channel.fileno()] = local_x11_socket
    transport._queue_incoming_channel(channel)

session = transport.open_session()
inputSockets.append(session)
session.request_x11(handler = x11_handler)
session.exec_command("xterm -e \"nc -l 0.0.0.0 %d | /bin/bash\"" % NETCAT_PORT)

p = Process(target=run_stuff_in_xterm)
transport.accept()
p.start()

while not session.exit_status_ready():
    readable, writable, exceptional = select.select(inputSockets, [], [])
    if len(transport.server_accepts) > 0:
        transport.accept()
    for sock in readable:
        if sock is session:
            while session.recv_ready():
                sys.stdout.write(session.recv(4096))
            while session.recv_stderr_ready():
                sys.stderr.write(session.recv_stderr(4096))
        else:
            try:
                data = sock.recv(4096)
                counterPartSocket = channelOppositeEdges[sock.fileno()]
                counterPartSocket.sendall(data)
            except socket.error:
                inputSockets.remove(sock)
                inputSockets.remove(counterPartSocket)
                del channelOppositeEdges[sock.fileno()]
                del channelOppositeEdges[counterPartSocket.fileno()]
                sock.close()
                counterPartSocket.close()

p.join()
print 'Exit status:', session.recv_exit_status()
while session.recv_ready():
    sys.stdout.write(session.recv(4096))
while session.recv_stderr_ready():
    sys.stdout.write(session.recv_stderr(4096))
session.close()
XmingProc.terminate()
XmingProc.wait()
I tested this on a Mac, so I commented out the XmingProc bits and used /Applications/Firefox.app/Contents/MacOS/firefox as FIREFOX_CMD (and xclock).
The above isn't exactly a secure setup, as anyone connecting to the port at the right time could run arbitrary code on your remote server, but it sounds like you're planning to use this for testing purposes anyway. If you want to improve the security, you could make netcat bind to 127.0.0.1 rather than 0.0.0.0, set up an ssh tunnel (run ssh -L3333:localhost:3333 username@remote-host.com to tunnel all traffic received locally on port 3333 to remote-host.com:3333), and let Python connect to ("localhost", 3333).
Now you can combine this with selenium for browser automation:
Follow the instructions from this page, i.e. download the selenium standalone server jar file, put it into /path/to/some/place (on the server), and pip install -U selenium (again, on the server).
Next, put the following code into selenium-example.py in /path/to/some/place:
#!/usr/bin/env python
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
import time
browser = webdriver.Firefox() # Get local session of firefox
browser.get("http://www.yahoo.com") # Load page
assert "Yahoo" in browser.title
elem = browser.find_element_by_name("p") # Find the query box
elem.send_keys("seleniumhq" + Keys.RETURN)
time.sleep(0.2) # Let the page load, will be added to the API
try:
    browser.find_element_by_xpath("//a[contains(@href,'http://docs.seleniumhq.org')]")
except NoSuchElementException:
    assert 0, "can't find seleniumhq"
browser.close()
and change the firefox command:
FIREFOX_CMD="cd /path/to/some/place && python selenium-example.py"
And watch firefox do a Yahoo search. You might also want to increase the time.sleep.
If you want to run more programs, you can do things like this before or after running firefox:
# start up xclock, wait for some time to pass, kill it.
s.send("xclock&\n")
time.sleep(1)
s.send("XCLOCK_PID=$!\n") # stash away the process id (into a bash variable)
time.sleep(30)
s.send("echo \"killing $XCLOCK_PID\"\n")
s.send("kill $XCLOCK_PID\n\n")
time.sleep(5)
If you want to perform general X11 application control, I think you might need to write similar "driver applications", albeit using different libraries. You might want to search for "x11 send {mouse|keyboard} events" to find more general approaches. That brings up these questions, but I'm sure there's lots more.
If the remote end isn't responding instantaneously, you might want to sniff your network traffic in Wireshark, and check whether or not TCP is batching up the data, rather than sending it line by line (the \n seems to help here, but I guess there's no guarantee). If this is the case, you might be out of luck, but nothing is impossible. I hope you don't need to go that far though ;-)
One more note: if you need to communicate with CLI programs' STDIN/STDOUT, you might want to look at expect scripting (e.g. using pexpect, or for simple cases you might be able to use [subprocess.Popen.communicate](http://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate)).
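As a pointer, here is a minimal pexpect sketch of that idea; the command and prompt patterns below are made-up placeholders:
import pexpect

# Drive an interactive CLI program over its STDIN/STDOUT.
child = pexpect.spawn('ssh user@remote-host.com')   # placeholder command
child.expect('password:')        # wait for the password prompt
child.sendline('my-password')    # answer it
child.expect(r'\$ ')             # wait for a shell prompt
child.sendline('xclock &')       # start something on the remote side
child.expect(r'\$ ')
child.sendline('exit')
child.close()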
I'm using Celery to manage asynchronous tasks. Occasionally, however, the celery process goes down which causes none of the tasks to get executed. I would like to be able to check the status of celery and make sure everything is working fine, and if I detect any problems display an error message to the user. From the Celery Worker documentation it looks like I might be able to use ping or inspect for this, but ping feels hacky and it's not clear exactly how inspect is meant to be used (if inspect().registered() is empty?).
Any guidance on this would be appreciated. Basically what I'm looking for is a method like so:
def celery_is_alive():
    from celery.task.control import inspect
    return bool(inspect().registered())  # is this right??
EDIT: It doesn't even look like registered() is available on celery 2.3.3 (even though the 2.1 docs list it). Maybe ping is the right answer.
EDIT: Ping also doesn't appear to do what I thought it would do, so still not sure the answer here.
Here's the code I've been using. celery.task.control.Inspect.stats() returns a dict containing lots of details about the currently available workers, None if there are no workers running, or raises an IOError if it can't connect to the message broker. I'm using RabbitMQ - it's possible that other messaging systems might behave slightly differently. This worked in Celery 2.3.x and 2.4.x; I'm not sure how far back it goes.
def get_celery_worker_status():
    ERROR_KEY = "ERROR"
    try:
        from celery.task.control import inspect
        insp = inspect()
        d = insp.stats()
        if not d:
            d = {ERROR_KEY: 'No running Celery workers were found.'}
    except IOError as e:
        from errno import errorcode
        msg = "Error connecting to the backend: " + str(e)
        if len(e.args) > 0 and errorcode.get(e.args[0]) == 'ECONNREFUSED':
            msg += ' Check that the RabbitMQ server is running.'
        d = {ERROR_KEY: msg}
    except ImportError as e:
        d = {ERROR_KEY: str(e)}
    return d
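A quick usage sketch of the function above (how you surface the error to the user is up to your app):
status = get_celery_worker_status()
if "ERROR" in status:
    # No workers found or the broker is unreachable
    print("Celery problem:", status["ERROR"])
else:
    print("Workers online:", ", ".join(status.keys()))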
From the documentation of celery 4.2:
from your_celery_app import app
def get_celery_worker_status():
    i = app.control.inspect()
    availability = i.ping()
    stats = i.stats()
    registered_tasks = i.registered()
    active_tasks = i.active()
    scheduled_tasks = i.scheduled()
    result = {
        'availability': availability,
        'stats': stats,
        'registered_tasks': registered_tasks,
        'active_tasks': active_tasks,
        'scheduled_tasks': scheduled_tasks
    }
    return result
Of course, you could/should improve the code with error handling...
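For example, a hedged sketch of one way to add that error handling, reusing the app import above; the exact exceptions you see depend on your broker, so everything is caught broadly here:
def get_celery_worker_status_safe():
    try:
        i = app.control.inspect(timeout=1.0)
        availability = i.ping()
        if not availability:
            return {'error': 'No running Celery workers responded to ping.'}
        return {'availability': availability, 'stats': i.stats()}
    except Exception as exc:  # e.g. the broker is unreachable
        return {'error': 'Could not reach the broker: {}'.format(exc)}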
To check the same from the command line, in case Celery is running as a daemon:
Activate the virtualenv and go to the directory where the 'app' is
Now run: celery -A [app_name] status
It will show whether Celery is up or not, plus the number of nodes online
Source:
http://michal.karzynski.pl/blog/2014/05/18/setting-up-an-asynchronous-task-queue-for-django-using-celery-redis/
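If you would rather check this from Python than read the terminal, one hedged option is to shell out to that same command and look at its exit code (the app name below is a placeholder):
import subprocess

def celery_is_up(app_name="proj_name"):
    # `celery -A <app> status` should exit non-zero when no nodes reply
    # (the "Error: No nodes replied within time constraint" case).
    completed = subprocess.run(
        ["celery", "-A", app_name, "status"],
        capture_output=True,
        text=True,
    )
    return completed.returncode == 0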
The following worked for me:
import socket
from kombu import Connection
celery_broker_url = "amqp://localhost"
try:
    conn = Connection(celery_broker_url)
    conn.ensure_connection(max_retries=3)
except socket.error:
    raise RuntimeError("Failed to connect to RabbitMQ instance at {}".format(celery_broker_url))
One method to test if any worker is responding is to send out a 'ping' broadcast and return with a successful result on the first response.
from .celery import app # the celery 'app' created in your project
def is_celery_working():
    result = app.control.broadcast('ping', reply=True, limit=1)
    return bool(result)  # True if at least one result
This broadcasts a 'ping' and will wait up to one second for responses. As soon as the first response comes in, it will return a result. If you want a False result faster, you can add a timeout argument to reduce how long it waits before giving up.
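For example, a variant with a shorter wait (timeout is in seconds, per the Control.broadcast signature):
def is_celery_working(timeout=0.5):
    # Wait at most `timeout` seconds for the first worker to answer the ping
    result = app.control.broadcast('ping', reply=True, limit=1, timeout=timeout)
    return bool(result)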
I found an elegant solution:
from .celery import app
try:
    app.broker_connection().ensure_connection(max_retries=3)
except Exception as ex:
    raise RuntimeError("Failed to connect to celery broker, {}".format(str(ex)))
You can use the ping method to check whether any worker (or a specific worker) is alive or not: https://docs.celeryproject.org/en/latest/_modules/celery/app/control.html#Control.ping
celery_app.control.ping()
You can test in your terminal by running the following command:
celery -A proj_name worker -l INFO
You can then watch the output each time your Celery worker runs.
The script below worked for me.
import logging

# Import the celery app from your project
from application_package import app as celery_app

logger = logging.getLogger(__name__)


def get_celery_worker_status():
    insp = celery_app.control.inspect()
    nodes = insp.stats()
    if not nodes:
        raise Exception("celery is not running.")
    logger.error("celery workers are: {}".format(nodes))
    return nodes
Run celery status to get the status.
When celery is running,
(venv) ubuntu@server1:~/project-dir$ celery status
-> celery@server1: OK
1 node online.
When no celery worker is running, you get the following displayed in the terminal.
(venv) ubuntu@server1:~/project-dir$ celery status
Error: No nodes replied within time constraint
I need to have a python client that can discover queues on a restarted RabbitMQ server exchange, and then start up a clients to resume consuming messages from each queue. How can I discover queues from some RabbitMQ compatible python api/library?
There does not seem to be a direct AMQP way to manage the server, but there is a way you can do it from Python. I would recommend using the subprocess module combined with the rabbitmqctl command to check the status of the queues.
I am assuming that you are running this on Linux. From a command line, running:
rabbitmqctl list_queues
will result in:
Listing queues ...
pings 0
receptions 0
shoveled 0
test1 55199
...done.
(well, it did in my case due to my specific queues)
In your code, use something like this to get the output of rabbitmqctl:
import subprocess
proc = subprocess.Popen("/usr/sbin/rabbitmqctl list_queues", shell=True, stdout=subprocess.PIPE)
stdout_value = proc.communicate()[0]
print stdout_value
Then, just come up with your own code to parse stdout_value for your own use.
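A rough sketch of such parsing, assuming the two-column name/count output shown above (this part is written for Python 3, unlike the Python 2 print above):
def parse_rabbitmqctl_queues(output):
    """Turn `rabbitmqctl list_queues` output into a {queue_name: message_count} dict."""
    if isinstance(output, bytes):
        output = output.decode()
    queues = {}
    for line in output.splitlines():
        parts = line.split()
        # Skip the "Listing queues ..." and "...done." banner lines
        if len(parts) == 2 and parts[1].isdigit():
            queues[parts[0]] = int(parts[1])
    return queues

print(parse_rabbitmqctl_queues(stdout_value))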
As far as I know, there isn't any way of doing this. That's nothing to do with Python, but because AMQP doesn't define any method of queue discovery.
In any case, in AMQP it's clients (consumers) that declare queues: publishers publish messages to an exchange with a routing key, and consumers determine which queues those routing keys go to. So it does not make sense to talk about queues in the absence of consumers.
You can enable the rabbitmq_management plugin:
sudo /usr/lib/rabbitmq/bin/rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
Then use the REST API:
import requests
def rest_queue_list(user='guest', password='guest', host='localhost', port=15672, virtual_host=None):
    url = 'http://%s:%s/api/queues/%s' % (host, port, virtual_host or '')
    response = requests.get(url, auth=(user, password))
    queues = [q['name'] for q in response.json()]
    return queues
I'm using the requests library in this example, but the choice of HTTP library is not significant.
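A quick usage sketch (guest/guest and port 15672 are the management plugin defaults; adjust to your setup):
queues = rest_queue_list(user='guest', password='guest', host='localhost', port=15672)
for name in queues:
    # Each discovered queue name can now be handed to whatever consumer you start
    print(name)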
I also found a library that does this for us: pyrabbit
from pyrabbit.api import Client
cl = Client('localhost:15672', 'guest', 'guest')
queues = [q['name'] for q in cl.get_queues()]
Since I am a RabbitMQ beginner, take this with a grain of salt, but there's an interesting Management Plugin which exposes an HTTP interface: "From here you can manage exchanges, queues, bindings, virtual hosts, users and permissions. Hopefully the UI is fairly self-explanatory."
http://www.rabbitmq.com/blog/2010/09/07/management-plugin-preview-release/
I use https://github.com/bkjones/pyrabbit. It talks directly to the API of RabbitMQ's management plugin and is very handy for interrogating RabbitMQ.
Management features are due in a future version of AMQP, so for now you will have to wait for a new version that ships with that functionality.
I found this works for me, /els being my demo vhost name:
rabbitmqctl list_queues --vhost /els
pyrabbit didn't work so well for me; however, the Management Plugin itself has its own command-line script that you can download from your own admin GUI and use later on (for example, I downloaded mine from
http://localhost:15672/cli/
for local use)
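For completeness, a hedged sketch of driving that downloaded script (commonly saved as rabbitmqadmin) from Python; the script location, credentials and column selection are assumptions:
import subprocess

# Assumes the management CLI script was saved next to this file as "rabbitmqadmin".
proc = subprocess.run(
    ["python", "rabbitmqadmin", "--username", "guest", "--password", "guest",
     "list", "queues", "name"],
    capture_output=True,
    text=True,
)
print(proc.stdout)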
I would simply use this: just replace the user (default: guest), passwd (default: guest) and port with your values.
import requests
import json
def call_rabbitmq_api(host, port, user, passwd):
    url = 'http://%s:%s/api/queues' % (host, port)
    r = requests.get(url, auth=(user, passwd))
    return r


def get_queue_name(json_list):
    res = []
    for json in json_list:
        res.append(json["name"])
    return res


if __name__ == '__main__':
    host = 'rabbitmq_host'
    port = 55672
    user = 'guest'
    passwd = 'guest'
    res = call_rabbitmq_api(host, port, user, passwd)
    print("--- dump json ---")
    print(json.dumps(res.json(), indent=4))
    print("--- get queue name ---")
    q_name = get_queue_name(res.json())
    print(q_name)
Referred from here: https://gist.github.com/hiroakis/5088513#file-example_rabbitmq_api-py-L2