I am trying to perform a load test using the Locust library against an API endpoint. Here, I am running Locust as a library instead of using the locust command. I want to perform global setup and global teardown so that a global state is created initially, shared by all the users, and then cleared on teardown (e.g. downloading S3 files once and removing them at the end).
There are built-in event hooks such as init and quitting that provide this functionality when running a locustfile with the locust command. But I am unable to trigger these events when running Locust as a library. From Locust's source code, I can see that these events are fired in main.py, which is not executed when running as a library.
How can I add such events when running Locust as a library? I have tried the two approaches below. Is adding an event listener and manually calling event.fire() the correct approach, or is it better to create and call custom methods directly instead of using events?
In general, should the init and quitting events be used for setting up a global state and clearing it at the end, or can the test_start and test_stop events be used in their place?
Source Code for reference:
Approach - 1 (Using event hooks)
import gevent
from locust import HttpUser, task, between
from locust.env import Environment
from locust.stats import stats_printer, stats_history
from locust.log import setup_logging
from locust import events

setup_logging("INFO", None)


def on_init(environment, **kwargs):
    print("Perform global setup to create a global state")


def on_quit(environment, **kwargs):
    print("Perform global teardown to clear the global state")


events.quitting.add_listener(on_quit)
events.init.add_listener(on_init)


class User(HttpUser):
    wait_time = between(1, 3)
    host = "https://docs.locust.io"

    @task
    def my_task(self):
        self.client.get("/")

    @task
    def task_404(self):
        self.client.get("/non-existing-path")


# setup Environment and Runner
env = Environment(user_classes=[User], events=events)
runner = env.create_local_runner()

# fire init event now that the environment and local runner have been instantiated
env.events.init.fire(environment=env, runner=runner)  # Is this the correct approach?

# start a WebUI instance
env.create_web_ui("127.0.0.1", 8089)

# start a greenlet that periodically outputs the current stats
gevent.spawn(stats_printer(env.stats))

# start a greenlet that saves current stats to history
gevent.spawn(stats_history, env.runner)

# start the test
env.runner.start(1, spawn_rate=10)

# stop the runner in 5 seconds
gevent.spawn_later(5, lambda: env.runner.quit())

# wait for the greenlets
env.runner.greenlet.join()

# fire quitting event as the locust process is exiting
env.events.quitting.fire(environment=env, reverse=True)  # Is this the correct approach?

# stop the web server for good measure
env.web_ui.stop()
Approach - 2 (Creating custom methods and calling these directly)
import gevent
from locust import HttpUser, task, between
from locust.env import Environment
from locust.stats import stats_printer, stats_history
from locust.log import setup_logging

setup_logging("INFO", None)


class User(HttpUser):
    wait_time = between(1, 3)
    host = "https://docs.locust.io"

    @classmethod
    def perform_global_setup(cls):
        print("Perform global setup to create a global state")

    @classmethod
    def perform_global_teardown(cls):
        print("Perform global teardown to clear the global state")

    @task
    def my_task(self):
        self.client.get("/")

    @task
    def task_404(self):
        self.client.get("/non-existing-path")


# setup Environment and Runner
env = Environment(user_classes=[User])
runner = env.create_local_runner()

# perform global setup
for cls in env.user_classes:
    cls.perform_global_setup()  # Is this the correct approach?

# start a WebUI instance
env.create_web_ui("127.0.0.1", 8089)

# start a greenlet that periodically outputs the current stats
gevent.spawn(stats_printer(env.stats))

# start a greenlet that saves current stats to history
gevent.spawn(stats_history, env.runner)

# start the test
env.runner.start(1, spawn_rate=10)

# stop the runner in 5 seconds
gevent.spawn_later(5, lambda: env.runner.quit())

# wait for the greenlets
env.runner.greenlet.join()

# perform global teardown
for cls in env.user_classes:
    cls.perform_global_teardown()  # Is this the correct approach?

# stop the web server for good measure
env.web_ui.stop()
Both approaches are fine. Using event hooks makes more sense if you think you might want to run in the normal (not as-a-library) way in the future; if that is unlikely to happen, choose the approach that feels most natural to you.
init/quitting only differ from test_start/test_stop in a meaningful way when doing multiple runs in web UI mode (where test_start/test_stop may fire multiple times). Use whichever is appropriate for what you are doing in the event handler; there is no other guideline.
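For intuition, the add_listener/fire mechanics the snippets above rely on can be sketched in plain Python. This is a simplified stand-in, not locust's actual EventHook implementation, but it shows why add_listener works both as a call and as a decorator, and what reverse=True does:

```python
class EventHook:
    """Simplified stand-in for a locust-style event hook: a list of callbacks."""

    def __init__(self):
        self._listeners = []

    def add_listener(self, listener):
        self._listeners.append(listener)
        return listener  # returning the function lets this double as a decorator

    def fire(self, reverse=False, **kwargs):
        # teardown-style hooks (like quitting) fire in reverse registration order
        listeners = reversed(self._listeners) if reverse else self._listeners
        for listener in listeners:
            listener(**kwargs)


init = EventHook()
calls = []

@init.add_listener
def on_init(environment, **kwargs):
    calls.append(environment)

init.fire(environment="env-1")
print(calls)  # ['env-1']
```

The key point for the library use case: a hook only runs its listeners when something calls fire(), which is exactly what locust's main.py does for you in command-line mode and what you have to do yourself when embedding.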
Related
The following code is from the tutorial. I just added some code to fire the test_start event (not sure if I fire it in the right place?) and listen to both the init and test_start events.
import gevent
from locust import HttpUser, task, events
from locust.env import Environment
from locust.stats import stats_printer, stats_history
from locust.log import setup_logging

setup_logging("INFO", None)


class MyUser(HttpUser):
    host = "https://docs.locust.io"

    @task
    def t(self):
        self.client.get("/")


@events.init.add_listener
def on_locust_init(**kwargs):
    print("on locust init ...")


@events.test_start.add_listener
def on_test_start(**kwargs):
    print("on test start ...")


# setup Environment and Runner
env = Environment(user_classes=[MyUser])
runner = env.create_local_runner()

# start a WebUI instance
web_ui = env.create_web_ui("127.0.0.1", 8089)

# execute init event handlers (only really needed if you have registered any)
env.events.init.fire(environment=env, runner=runner, web_ui=web_ui)

# start a greenlet that periodically outputs the current stats
gevent.spawn(stats_printer(env.stats))

# start a greenlet that saves current stats to history
gevent.spawn(stats_history, env.runner)

# start the test
runner.start(1, spawn_rate=1)

# execute test_start event handlers (only really needed if you have registered any)
env.events.test_start.fire(environment=env, runner=runner, web_ui=web_ui)

# stop the runner in 10 seconds
gevent.spawn_later(10, lambda: runner.quit())

# wait for the greenlets
runner.greenlet.join()

# stop the web server for good measure
web_ui.stop()
When I ran it as a library (e.g. python use_as_lib.py), the two listener messages didn't print. But if I remove the run-as-lib code and run it as a tool (e.g. locust -f use_as_lib.py --headless -u 1 -r 1 -t=10s), the messages are printed to the console. It seems I missed something...
Here's my locust version.
locust 2.13.0 from /Users/myuser/workspace/tmp/try_python/venv/lib/python3.8/site-packages/locust (python 3.8.12)
Any ideas? Thanks!
After checking the source code a bit, I found I need to pass locust.events when initializing the Environment():
# ...
env = Environment(user_classes=[MyUser], events=events)
I think the use_as_lib example needs to be updated.
I have a working pytest test here with multiple patches and fixtures, like the code here.
I want to wrap the setup of the test case, including the patches, into a single decorator so that test cases in other repos can use it.
Ideally, my vision of the code with the decorator would look like this:
import pytest
from module.dummy_microservice import DummyMicroservice


@my_custom_decorator(microservice=DummyMicroservice, ini_file="tests/Microservice/appsettings.ini")
def test_receive_message(test_resource: Resources):
    # ARRANGE
    test_resource.start()
    # ACT
    test_resource.transport...
    # ASSERT


class Resources:
    def __init__(self, microservice: Microservice, transport: Transport) -> None:
        self.microservice = microservice
        self.transport = transport

    def start(self, delay=0.05):
        t1 = Thread(target=self.microservice.start)
        t1.daemon = True  # run the thread in the background
        t1.start()
        sleep(delay)
where the setup of the test and the generation of Resources happen behind the scenes in my_custom_decorator.
I can pass DummyMicroservice to the microservice parameter, so my_custom_decorator can create a DummyMicroservice instance.
I am new to Python and I am not sure where to start with this.
Can it be done in Python? If yes, can you show me an example?
I'd like to use events subscription / notification together with multithreading. It sounds like it should just work in theory and the documentation doesn't include any warnings. The events should be synchronous, so no deferring either.
But in practice, when I notify off the main thread, nothing comes in:
def run():
    logging.config.fileConfig(sys.argv[1])
    with bootstrap(sys.argv[1]) as env:
        get_current_registry().notify(FooEvent())  # <- works
        Thread(target=thread).start()  # <- doesn't work


def thread():
    get_current_registry().notify(FooEvent())
Is this not expected to work? Or am I doing something wrong?
I also tried the suggested solution. It doesn't print the expected event.
class Foo:
    pass


@subscriber(Foo)
def metric_report(event):
    print(event)


def run():
    with bootstrap(sys.argv[1]) as env:
        def foo(env):
            try:
                with env:
                    get_current_registry().notify(Foo())
            except Exception as e:
                print(e)

        t = Thread(target=foo, args=(env,))
        t.start()
        t.join()
get_current_registry() tries to access the threadlocal variable Pyramid registers when processing requests or config, to tell the thread which Pyramid app is currently active IN THAT THREAD. The gotcha here is that get_current_registry() always returns a registry, just not the one you want, so it's hard to see why it's not working.
When spawning a new thread, you need to register your Pyramid app as the current threadlocal. The best way to do this is with pyramid.scripting.prepare. The "easy" way is just to run bootstrap again in your thread. I'll show the "right" way though.
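The underlying threadlocal behavior can be illustrated with the stdlib alone. This is a minimal sketch of the mechanism, not Pyramid's actual registry code: a value stored in a threading.local in the main thread is simply invisible to a worker thread.

```python
import threading

registry = threading.local()
registry.app = "my_pyramid_app"  # set in the main thread only

seen = []

def worker():
    # the worker thread gets its own empty threading.local storage,
    # so the value set in the main thread is not visible here
    seen.append(getattr(registry, "app", None))

t = threading.Thread(target=worker)
t.start()
t.join()

print(seen)  # [None] - the main thread's value did not propagate
```

This is why the new thread needs the registry handed to it explicitly and the threadlocals re-established there.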
def run():
    pyramid.paster.setup_logging(sys.argv[1])
    get_current_registry().notify(FooEvent())  # doesn't work, just like in the thread
    with pyramid.paster.bootstrap(sys.argv[1]) as env:
        registry = env['registry']
        registry.notify(FooEvent())  # works
        get_current_registry().notify(FooEvent())  # works
        Thread(target=thread_main, args=(env['registry'],)).start()


def thread_main(registry):
    # works, but threadlocals are not set up if other code triggered by this
    # invokes get_current_request() or get_current_registry()
    registry.notify(FooEvent())
    # so let's set up threadlocals
    with pyramid.scripting.prepare(registry=registry) as env:
        registry.notify(FooEvent())  # works
        get_current_registry().notify(FooEvent())  # works
pyramid.scripting.prepare is what bootstrap uses under the hood, and is a lot more efficient than running bootstrap multiple times because it shares the registry and all of your app configuration instead of making a completely new copy of your app.
Is it just that the 'with' context applies only to the Thread() creation statement and does not propagate to the thread() function? I.e. in the working case the get_current_registry() call runs inside the 'with' env context, but that context does not propagate to the point where the thread runs get_current_registry(). So you need to propagate the env to the thread, perhaps by creating a simple callable class that takes the env in its init method.
class X:
    def __init__(self, env):
        self.env = env

    def __call__(self):
        with self.env:
            get_current_registry().notify(FooEvent())


def run():
    logging.config.fileConfig(sys.argv[1])
    with bootstrap(sys.argv[1]) as env:
        get_current_registry().notify(FooEvent())
        Thread(target=X(env)).start()
I created a simple Django app. Inside this app I have a single checkbox. I save this checkbox's state to the database: if it's checked, the database holds a True value; if it's unchecked, a False value. There is no problem with this part. Now I have created a function that prints this checkbox's state from the database every 10 seconds, all the time.
I put the function into the views.py file and it looks like this:
def get_value():
    while True:
        value_change = TurnOnOff.objects.first()
        if value_change.turnOnOff:
            print("true")
        else:
            print("false")
        time.sleep(10)
The point is that the function should run all the time. For example, if in models.py I write checkbox = models.BooleanField(default=False), then after I run the command python manage.py runserver it should give me output like:
Performing system checks...
System check identified no issues (0 silenced).
January 04, 2019 - 09:19:47
Django version 2.1.3, using settings 'CMS.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CTRL-BREAK.
true
true
true
true
Then if I visit the website and change the state, it should print false; this is obvious. But as you can see, the problem is how to start this function. It should work all the time, even if I haven't visited the website yet. And this part confuses me. How do I do this properly?
I have to admit that I tried some solutions:
put this function at the end of the manage.py file,
put this function into def ready(self),
create a middleware class and put the function there (example code below).
But these solutions don't work.
middleware class :
class SimpleMiddleware:
    def __init__(self, get_response):
        self.get_response = get_response
        get_value()
You can achieve this by using the AppConfig.ready() hook and combining it with a sub-process/thread.
Here is an example apps.py file (based on the tutorial Polls app):
import time
from multiprocessing import Process

from django.apps import AppConfig
from django import db


class TurnOnOffMonitor(Process):
    def __init__(self):
        super().__init__()
        self.daemon = True

    def run(self):
        # This import needs to be delayed. It needs to happen after apps are
        # loaded, so we put it into the method here (it won't work as a
        # top-level import).
        from .models import TurnOnOff

        # Because this is a subprocess, we must ensure that we get new
        # connections dedicated to this process to avoid interfering with the
        # main connections. Closing any existing connection *should* ensure
        # this.
        db.connections.close_all()

        # We can do an endless loop here because we flagged the process as
        # a "daemon". This ensures it will exit when the parent exits.
        while True:
            value_change = TurnOnOff.objects.first()
            if value_change.turnOnOff:
                print("true")
            else:
                print("false")
            time.sleep(10)


class PollsConfig(AppConfig):
    name = 'polls'

    def ready(self):
        monitor = TurnOnOffMonitor()
        monitor.start()
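For ready() to run at all, Django has to load this AppConfig subclass. A sketch of the corresponding settings entry, assuming the tutorial's polls app layout:

```python
# settings.py (fragment)
INSTALLED_APPS = [
    'polls.apps.PollsConfig',  # reference the AppConfig subclass explicitly
    # ...
]
```

Note that runserver's autoreloader imports the project in more than one process, so ready() (and therefore the monitor) may start twice during development; running with --noreload avoids that.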
Celery is the thing that best suits your needs from what you've described.
Celery is an asynchronous task queue/job queue based on distributed message passing. It is focused on real-time operation, but supports scheduling as well.
The execution units, called tasks, are executed concurrently on a single or more worker servers using multiprocessing, Eventlet, or gevent. Tasks can execute asynchronously (in the background) or synchronously (wait until ready).
You need to create a task and run it periodically, and call it directly if you want to trigger it manually (in some view/controller).
NOTE: do not use time.sleep(10)
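A rough sketch of how this could look with a Celery beat schedule; the module path polls/tasks.py and the Redis broker URL are assumptions, and the task body mirrors the question's get_value():

```python
# polls/tasks.py (sketch; broker URL and module names are assumptions)
from celery import Celery

app = Celery('polls', broker='redis://localhost:6379/0')

# run check_value every 10 seconds via celery beat instead of time.sleep(10)
app.conf.beat_schedule = {
    'check-value-every-10s': {
        'task': 'polls.tasks.check_value',
        'schedule': 10.0,
    },
}


@app.task
def check_value():
    # delayed import, for the same reason as in the AppConfig example above
    from polls.models import TurnOnOff
    value_change = TurnOnOff.objects.first()
    print("true" if value_change and value_change.turnOnOff else "false")
```

You would run a worker (celery -A polls.tasks worker) plus the scheduler (celery -A polls.tasks beat), and a view can trigger the same task on demand with check_value.delay().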
I'm using the vSphere API; here are the lines I'm dealing with:
task = vm.PowerOff()
while task.info.state not in [vim.TaskInfo.State.success, vim.TaskInfo.State.error]:
    time.sleep(1)
    log.info("task {} is running".format(task))
log.info("task {} is done".format(task))
The problem is that this blocks execution completely while the task is not finished. I would like the logging part to run "in parallel", so I can start other tasks.
I thought about creating a function that accepts a task as a parameter and polls the info.state attribute just like now, but how do I make this non-blocking?
EDIT: I'm using Python 2.7
You could use asyncio and create an event loop. You can use asyncio.async() to create an asynchronous task that won't block the event loop execution. (Note that asyncio requires Python 3; on Python 2.7 you would need the trollius backport.)
Here is an example of using the threading module:
import threading


class VMShutdownThread(threading.Thread):
    def __init__(self, vm):
        super(VMShutdownThread, self).__init__()  # don't forget to call the base initializer
        self.vm = vm

    def run(self):
        task = self.vm.PowerOff()
        while task.info.state not in [vim.TaskInfo.State.success, vim.TaskInfo.State.error]:
            time.sleep(1)
            log.info("task {} is running".format(task))
        log.info("task {} is done".format(task))


vm_shutdown_thread = VMShutdownThread(vm)
vm_shutdown_thread.start()
If you create a logger, you can configure it to print the thread name.