How to use multiple hosts in multiple classes in Locust - python

I need to test several APIs that have different addresses. I have created a locustfile for the Locust tool as shown below, but only api1 is being exercised; the endpoints in api2 are never called.
from locust import HttpUser, task, between

class api1(HttpUser):
    host = 'http://localhost:6001'
    wait_time = between(2, 4)

    @task
    def api1_ep1(self):
        self.client.post('/ep1')

    @task
    def api1_ep2(self):
        self.client.post('/ep2')

class api2(HttpUser):
    host = 'http://localhost:6002'
    wait_time = between(2, 4)

    @task
    def api2_ep1(self):
        self.client.post('/ep1')

    @task
    def api2_ep2(self):
        self.client.post('/ep2')
I tried the suggestions from issue #150 and used the full path, e.g. self.client.post('http://localhost:6001/ep1'), but the same problem persists.

I was spawning a single user; spawning more users fixed the problem.
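For context: with only one simulated user, Locust instantiates just one of the available User classes, so the other host never receives traffic. A minimal sketch of one way to handle this (flag names vary slightly between Locust versions) is to spawn at least one user per class and, optionally, use the weight attribute to control the split:

from locust import HttpUser, task, between

class Api1User(HttpUser):
    host = 'http://localhost:6001'
    wait_time = between(2, 4)
    weight = 1  # relative share of spawned users

    @task
    def ep1(self):
        self.client.post('/ep1')

class Api2User(HttpUser):
    host = 'http://localhost:6002'
    wait_time = between(2, 4)
    weight = 1

    @task
    def ep1(self):
        self.client.post('/ep1')

# Start with enough users for both classes, e.g.:
#   locust -f locustfile.py --users 2 --spawn-rate 2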

Related

When using airflow I get "UserWarning: MongoClient opened before fork" + best practice tips to instantiate mongo client in python

I have created a Python flow that executes various steps. I also use MongoDB, which for now is only used for configuration but will later be used for persistence as well.
When I execute my code from PyCharm everything is fine, but when I run it through Airflow I get the fork warning "UserWarning: MongoClient opened before fork. Create MongoClient only after forking.".
At first I thought the problem was that I open many instances of the Mongo client, so I used the singleton pattern so that the connection is instantiated only once (please see the code below).
The only way to get rid of the warning is to add the connect=False parameter, as you can see commented out in my code, but according to the documentation that seems to be a workaround rather than a proper solution.
So what is the problem here? Does it have something to do with Airflow, since I am not using the mongo_hook?
In addition, please let me know whether using the singleton pattern to instantiate the Mongo client only once is good practice. The client is called from various modules.
Note that Mongo and Airflow each run in their own Docker container.
from pymongo import MongoClient

def singleton(class_):
    instances = {}

    def get_instance(*args, **kwargs):
        if class_ not in instances:
            instances[class_] = class_(*args, **kwargs)
        return instances[class_]

    return get_instance

@singleton
class MongoPersistence(Persistence):
    def __init__(self, driver, host, user, password, port, db):
        self._uri = '{driver}://{host}:{port}'.format(driver=driver, host=host, port=port)
        self._client = MongoClient(self._uri,
                                   serverSelectionTimeoutMS=3000,  # 3 second timeout
                                   username=user,
                                   password=password,
                                   # connect=False
                                   )

    def find_one(self, **kwargs):
        return self._client[kwargs.get('db_name')][kwargs.get('collection_name')].find_one(kwargs.get('query'))
Thank you,
Dina
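For reference, a minimal sketch of the deferred-connection variant mentioned in the question (pymongo's connect=False defers opening sockets until the first operation, which then happens in the already-forked worker process); the helper name here is illustrative:

from pymongo import MongoClient

def make_client(host, port, user, password):
    # connect=False defers socket creation until the first operation,
    # so the actual connection is opened after Airflow has forked.
    return MongoClient(
        host=host,
        port=port,
        username=user,
        password=password,
        serverSelectionTimeoutMS=3000,
        connect=False,
    )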

Can't reach Locust WebInterface "ERR_CONNECTION_REFUSED"

I wanted to test Locust for my project on Windows 10.
The script seems to run properly (no errors in CMD), but I can't connect to the web interface at http://127.0.0.1:8089 (ERR_CONNECTION_REFUSED).
I am guessing that this has to do with browser/Windows configuration, but I can't find the problem. I have no proxy set up in the LAN settings, I get my IP from DNS, and I have no changes in my hosts file.
locustfile.py
from locust import HttpLocust, TaskSet, task

class UserBehavior(TaskSet):
    def on_start(self):
        """ on_start is called when a Locust starts, before any task is scheduled """
        self.login()

    def on_stop(self):
        """ on_stop is called when the TaskSet is stopping """
        self.logout()

    def login(self):
        self.client.post("/login", {"username": "tester", "password": "abcd1234"})

    def logout(self):
        self.client.post("/logout", {"username": "ellen_key", "password": "education"})

    @task(2)
    def index(self):
        self.client.get("/")

    @task(1)
    def profile(self):
        self.client.get("/profile")

class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 5000
    max_wait = 9000
    host = "http://google.com"
CMD
D:\workspace\WebTesting>locust
CMD result:
[2019-05-13 09:49:45,069] LB6-001-DTMH/INFO/locust.main: Starting web monitor at *:8089
[2019-05-13 09:49:45,070] LB6-001-DTMH/INFO/locust.main: Starting Locust 0.11.0
When I interrupt the script in the command line I get the "KeyboardInterrupt" message and some statistics without data.
python -m http.server 8089 seems to work
Try going to http://localhost:8089/
I am not quite sure why, but I can't reach the Locust web interface through http://127.0.0.1 either (other web servers run fine locally at that address); going to localhost does work for me.
I faced a similar issue. Instead of using the IP address, try using localhost:
http://localhost:portnumber -> http://localhost:8089. This solved the issue for me.
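If localhost works while 127.0.0.1 does not, it is often an IPv4/IPv6 binding quirk. As a further sketch (the flag exists in Locust, though older releases may behave differently), you can also bind the web UI to an explicit address:

locust -f locustfile.py --web-host=127.0.0.1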

How to detect Celery Connection failure and switch to failover then back?

So our use case might be out of the remit of what Celery can do, but I thought I'd ask...
Use Case
We are planning on using a hosted/managed RabbitMQ cluster which Celery will be using for its broker.
We want to ensure that our app has zero downtime (obviously), so we're trying to figure out how to handle the case where our upstream cluster has a catastrophic failure and the entire cluster becomes unavailable.
Our thought is to have a standby Rabbit cluster, so that when the connection drops we can automatically switch Celery to use that connection instead.
In the meantime, Celery determines whether the master cluster is up and running, and when it is, all of the publishers reconnect to the master, the workers drain the backup cluster and, when it is empty, switch back onto the master.
The issue
What I'm having difficulty with is capturing the connection failure: it seems to happen deep within Celery, as the exception doesn't bubble up to the app.
I can see that Celery has a BROKER_FAILOVER_STRATEGY configuration property, which would handle the initial swap, but it (seemingly) is only utilised when failover occurs, which doesn't fit our use case of swapping back to the master when it is back up.
I've also come across Celery's "bootsteps", but these are applied after Celery's own "Connection" bootstep which is where the exception is being thrown.
I have a feeling this approach is probably not the best one given the limitations I've been finding, but has anyone got any ideas on how I'd go about overriding the default Connection bootstep or achieving this via a different means?
It's quite old, but maybe this is useful to someone. I'm using FastAPI with Celery 5.2.
run_api.py file:
import uvicorn

if __name__ == "__main__":
    port = 8893
    print("Starting API server on port {}".format(port))
    uvicorn.run("endpoints:app", host="localhost", port=port, access_log=False)
endpoints.py file:
import threading
import time
import os
from celery import Celery
from fastapi import FastAPI
import itertools
import random

# Create object for FastAPI
app = FastAPI()

# Create and configure Celery to manage queues
# ----
celery = Celery(__name__)
celery.conf.broker_url = ["redis://localhost:6379"]
celery.conf.result_backend = "redis://localhost:6379"
celery.conf.task_track_started = True
celery.conf.task_serializer = "pickle"
celery.conf.result_serializer = "pickle"
celery.conf.accept_content = ["pickle"]

def random_failover_strategy(servers):
    # The next lines are necessary for this to work, even if you don't use them:
    it = list(servers)  # don't modify caller's list
    shuffle = random.shuffle
    for _ in itertools.repeat(None):
        # Do whatever action is required here to obtain the new url.
        # As an example, pick a random value:
        ra = random.randint(0, 100)
        it = [f"redis://localhost:{str(ra)}"]
        celery.conf.result_backend = it[0]
        shuffle(it)
        yield it[0]

celery.conf.broker_failover_strategy = random_failover_strategy

# Start the celery worker. I start it in a separate thread, so fastapi can run in parallel
worker = celery.Worker()

def start_worker():
    worker.start()

ce = threading.Thread(target=start_worker)
ce.start()
# ----

@app.get("/", tags=["root"])
def root():
    return {"message": ""}

@app.post("/test")
def test(num: int):
    task = test_celery.delay(num)
    print(f'task id: {task.id}')
    return {
        "task_id": task.id,
        "task_status": "PENDING"}

@celery.task(name="test_celery", bind=True)
def test_celery(self, num):
    self.update_state(state='PROGRESS')
    print("ENTERED PROCESS", num)
    time.sleep(100)
    print("EXITING PROCESS", num)
    return {'number': num}

@app.get("/result")
def result(id: str):
    task_result = celery.AsyncResult(id)
    if task_result.status == "SUCCESS":
        return {
            "task_status": task_result.status,
            "task_num": task_result.result['number']
        }
    else:
        return {
            "task_status": task_result.status,
            "task_num": None
        }
Place both files in the same folder. Run python3 run_api.py.
Enjoy!
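For a quick manual check, something like the following should exercise the endpoints above (this assumes the requests library and the default port 8893):

import requests

BASE = "http://localhost:8893"

# Kick off a task; `num` is a query parameter of the /test endpoint above
resp = requests.post(BASE + "/test", params={"num": 5})
task_id = resp.json()["task_id"]

# Poll the result endpoint with the returned task id
status = requests.get(BASE + "/result", params={"id": task_id}).json()
print(status)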

Django celery task run at once on startup of celery server

I need to find out how to specify a kind of initial Celery task that will start all other tasks in a specially defined way. This initial task should run exactly once, immediately on Celery server startup, and never run again.
How about using the celeryd_after_setup or celeryd_init signal?
Following is the example code from the documentation:
from celery.signals import celeryd_init

@celeryd_init.connect(sender='worker12@example.com')
def configure_worker12(conf=None, **kwargs):
    ...
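As a sketch of how this could apply to the question (the task import is hypothetical), you can connect without a sender filter and dispatch the start-up task from the handler; note that the signal fires once per worker start, so with multiple workers you would still need a guard to ensure it runs only once overall:

from celery.signals import celeryd_after_setup
from myapp.tasks import start_everything  # hypothetical bootstrap task

@celeryd_after_setup.connect
def run_initial_task(sender, instance, **kwargs):
    # Enqueue the bootstrap task as soon as this worker is set up
    start_everything.delay()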
I found a way to do this. It has one downside: it is impossible to specify the current year, so the task will run again after a year. But usually the server restarts more often than that.
from datetime import datetime, timedelta
from celery.schedules import crontab
from celery.task import PeriodicTask

class InitialTasksStarter(PeriodicTask):
    starttime = datetime.now() + timedelta(minutes=1)
    run_every = crontab(month_of_year=starttime.month, day_of_month=starttime.day, hour=starttime.hour, minute=starttime.minute)

    def run(self, **kwargs):
        ....
        return True

Best practice of testing django-rq ( python-rq ) in Django

I'll start using django-rq in my project.
Django integration with RQ, a Redis-based Python queuing library.
What is the best practice for testing Django apps that use RQ?
For example, if I want to test my app as a black box: after the user performs some actions, I want to execute all jobs in the current queue and then check all the results in my DB. How can I do that in my Django tests?
I just found django-rq, which allows you to spin up a worker in a test environment that executes any tasks on the queue and then quits.
from django.test import TestCase
from django_rq import get_worker

class MyTest(TestCase):
    def test_something_that_creates_jobs(self):
        ...  # Stuff that creates jobs.
        get_worker().work(burst=True)  # Processes all jobs, then stops.
        ...  # Assert that the job-related work is done.
I separated my rq tests into a few pieces.
1. Test that I'm correctly adding things to the queue (using mocks).
2. Assume that if something gets added to the queue, it will eventually be processed (rq's test suite should cover this).
3. Test that, given the correct input, my tasks work as expected (normal code tests).
Code being tested:
def handle(self, *args, **options):
    uid = options.get('user_id')

    # Need to exclude users who have gotten an email within $window days.
    if uid is None:
        uids = User.objects.filter(is_active=True, userprofile__waitlisted=False).values_list('id', flat=True)
    else:
        uids = [uid]

    q = rq.Queue(connection=redis.Redis())
    for user_id in uids:
        q.enqueue(mail_user, user_id)
My tests:
class DjangoMailUsersTest(DjangoTestCase):
    def setUp(self):
        self.cmd = MailUserCommand()

    @patch('redis.Redis')
    @patch('rq.Queue')
    def test_no_userid_queues_all_userids(self, queue, _):
        u1 = UserF.create(userprofile__waitlisted=False)
        u2 = UserF.create(userprofile__waitlisted=False)
        self.cmd.handle()
        self.assertItemsEqual(queue.return_value.enqueue.mock_calls,
                              [call(ANY, u1.pk), call(ANY, u2.pk)])

    @patch('redis.Redis')
    @patch('rq.Queue')
    def test_waitlisted_people_excluded(self, queue, _):
        u1 = UserF.create(userprofile__waitlisted=False)
        UserF.create(userprofile__waitlisted=True)
        self.cmd.handle()
        self.assertItemsEqual(queue.return_value.enqueue.mock_calls, [call(ANY, u1.pk)])
I committed a patch that lets you do:
from django.test import TestCase
from django_rq import get_queue

class MyTest(TestCase):
    def test_something_that_creates_jobs(self):
        queue = get_queue(async=False)
        queue.enqueue(func)  # func will be executed right away
        # Test for job completion
This should make testing RQ jobs easier. Hope that helps!
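Note that in newer versions of RQ and django-rq the async argument was renamed, since async became a reserved word in Python. If the call above fails, the equivalent (assuming a recent django-rq) is:

queue = get_queue(is_async=False)
queue.enqueue(func)  # still executed immediately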
Just in case this is helpful to anyone: I used a patch with a custom mock object whose enqueue runs the job right away.
# patch django_rq.get_queue
with patch('django_rq.get_queue', return_value=MockBulkJobGetQueue()) as mock_django_rq_get_queue:
    # Perform the web operation that starts the job. In my case, a POST to a URL.
Then the mock object just has one method:
class MockBulkJobGetQueue(object):
    def enqueue(self, f, *args, **kwargs):
        # Call the function directly with the job's keyword arguments
        f(
            **kwargs.pop('kwargs', {})
        )
What I've done for this case is to detect whether I'm testing, and use fakeredis during tests. Finally, in the test itself, I enqueue the Redis worker task in sync mode.
First, define a function that detects whether you're testing:
import sys

TESTING = len(sys.argv) > 1 and sys.argv[1] == 'test'

def am_testing():
    return TESTING
Then, in the file that uses Redis to queue up tasks, manage the queue this way.
You could extend get_queue to specify a queue name if needed:
if am_testing():
    from fakeredis import FakeStrictRedis
    from rq import Queue

    def get_queue():
        return Queue(connection=FakeStrictRedis())
else:
    import django_rq

    def get_queue():
        return django_rq.get_queue()
Then, enqueue your task like so:
queue = get_queue()
queue.enqueue(task_mytask, arg1, arg2)
Finally, in your test program, run the task you are testing in sync mode, so that it runs in the same process as your test. As a matter of practice, I first clear the fakeredis queue, but I don't think it's necessary since there are no workers:
from rq import Queue
from fakeredis import FakeStrictRedis
FakeStrictRedis().flushall()
queue = Queue(async=False, connection=FakeStrictRedis())
queue.enqueue(task_mytask, arg1, arg2)
My settings.py has the normal django_rq settings, so django_rq.get_queue() uses these when deployed:
RQ_QUEUES = {
    'default': {
        'HOST': env_var('REDIS_HOST'),
        'PORT': 6379,
        'DB': 0,
        # 'PASSWORD': 'some-password',
        'DEFAULT_TIMEOUT': 360,
    },
    'high': {
        'HOST': env_var('REDIS_HOST'),
        'PORT': 6379,
        'DB': 0,
        'DEFAULT_TIMEOUT': 500,
    },
    'low': {
        'HOST': env_var('REDIS_HOST'),
        'PORT': 6379,
        'DB': 0,
    }
}
None of the answers above really solved how to test without having Redis installed and configured in the Django settings. I found that including the following code in the tests does not impact the project itself, yet gives everything needed.
The code uses fakeredis to pretend there is a Redis service available, and sets up the connection before django-rq reads the settings.
The connection must be the same one each time, because fakeredis connections do not share state; therefore a singleton object is used to reuse it.
import django_rq
from fakeredis import FakeStrictRedis, FakeRedis

class FakeRedisConn:
    """Singleton FakeRedis connection."""

    def __init__(self):
        self.conn = None

    def __call__(self, _, strict):
        if not self.conn:
            self.conn = FakeStrictRedis() if strict else FakeRedis()
        return self.conn

django_rq.queues.get_redis_connection = FakeRedisConn()

def test_case():
    ...
You'll need your tests to pause while there are still jobs in the queue. To do this, you can check Queue.is_empty(), and suspend execution if there are still jobs in the queue:
import time
from django.utils.unittest import TestCase
import django_rq

class TestQueue(TestCase):
    def test_something(self):
        # simulate some User actions which will queue up some tasks

        # Wait for the queued tasks to run
        queue = django_rq.get_queue('default')
        while not queue.is_empty():
            time.sleep(5)  # adjust this depending on how long your tasks take to execute

        # queued tasks are done, check the state of the DB
        self.assert(.....)
I came across the same issue. In addition, my jobs executed some mailing functionality, for example, and I then wanted to check the Django test mailbox for any emails. However, since with django-rq the jobs are not executed in the same context as the Django test, the emails that are sent do not end up in the test mailbox.
Therefore I needed to execute the jobs in the same context. This can be achieved by:
from django_rq import get_queue

queue = get_queue('default')
queue.enqueue(some_job_callable)

# execute input watcher
jobs = queue.get_jobs()

# execute in the same context as the test
while jobs:
    for job in jobs:
        queue.remove(job)
        job.perform()
    jobs = queue.get_jobs()

# check no jobs left in the queue
assert not jobs
Here you just get all the jobs from the queue and execute them directly in the test. One can nicely implement this in a TestCase class and reuse this functionality, as sketched below.
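As one possible shape for that reuse (the mixin name is made up), the draining loop above could be wrapped in a small helper:

from django_rq import get_queue

class RunQueuedJobsMixin:
    """Hypothetical test mixin that drains a django-rq queue in-process."""

    def run_queued_jobs(self, queue_name='default'):
        queue = get_queue(queue_name)
        jobs = queue.get_jobs()
        while jobs:
            for job in jobs:
                queue.remove(job)
                job.perform()
            jobs = queue.get_jobs()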
