I have an API which performs its task like this:
Client sends the request.
The task is put on the message queue and picked up by a Celery worker.
The response is sent back as soon as the request is received, while the task executes in the background.
Now I am load testing my application with JMeter. In my server logs, whenever I send a request to the server, the response comes back immediately and the task is executed. But JMeter waits for the whole query to finish executing and only then reports the response to me. I want JMeter to measure the response as soon as the request is acknowledged, not after the whole query is executed, so that I can record the server's response times when multiple queries are sent.
How can I achieve this? Can anybody help me with this?
Thanks
I have two machines, A and B, and A can send a RESTful request to B as follows:
curl -XPOST -H "Content-type: application/json" -d '{"data":"python /tmp/demo.py","action":"demo"}' 'http://192.168.95.8:51888/api/host'
I have deployed an API service on B, and when such a request is received, B will execute the Python script /tmp/demo.py; the execution may last 0.5-3 hours.
My question is:
1) How to write a job on A that keeps tracking the status of the task running on B and ends itself when the task finishes successfully or fails?
2) In the tracking job, how to add a module that can kill itself after exceeding a pre-set time threshold?
Treat the job as an HTTP resource. When you do POST /api/host, that request creates a new id for that job and returns it. For good use of HTTP, the response would contain a Location header with the URL of the resource where the job's status can be checked, e.g.:
POST /api/host
Content-type: application/json
{"data":"python /tmp/demo.py","action":"demo"}
HTTP/1.1 201 Created
Location: /api/host/jobs/c2de232b-f63e-4178-a053-d3f3459ab538
You can now GET /api/host/jobs/c2de232b-f63e-4178-a053-d3f3459ab538 at any time and see what status the job has, e.g.:
{"status": "pending"}
You may POST commands to that resource, e.g. for cancelling it.
How exactly your HTTP API would get the status of that Python script is obviously up to you. Perhaps it can communicate with it over a socket, or the job itself will periodically write its status to some database or file.
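On machine A's side, the tracking job from questions 1) and 2) can then be a simple polling loop with a deadline. This is only a sketch: the status strings and the `fetch_status` callable (imagine it GETs /api/host/jobs/&lt;id&gt; and returns the "status" field) are assumptions, not part of the original API.

```python
import time

def track_job(fetch_status, poll_interval=30, timeout=3 * 3600):
    """Poll fetch_status() until the job reaches a terminal state, or
    give up once the pre-set time threshold elapses.

    fetch_status is any callable returning the job's status string,
    e.g. one that GETs /api/host/jobs/<id> and reads the "status" field.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("succeeded", "failed"):
            return status  # job finished; the tracking job ends itself
        time.sleep(poll_interval)
    return "timed_out"  # threshold exceeded; stop tracking
```

Calling `track_job` with a `fetch_status` that wraps an HTTP GET covers question 1); the `timeout` parameter covers question 2).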
I have a Django App that accepts messages from a remote device as a POST message.
This fits well with Django's framework! I used the generic View class (from django.views import View) and defined my own POST function.
But the remote device requires a special reply that I cannot generate in Django (yet). So, I use the Requests library to re-create the POST message and send it up to the manufacturer's cloud server.
That server processes the data and responds with the special message in the body. Ideally, the entire HTTP response message should go back to the remote device. If it does not get a valid reply, it will re-send the message. Which would be annoying!
I've been googling, but am having trouble getting a clear picture on how to either:
(a): Reply back in Django with the Requests.response object without any edits.
(b): Build a Django response and send it back.
Actually, I think I do know how to do (b), but it's work. I would rather do (a) if it's possible.
Thanks in Advance!
Rich.
Thanks for the comments and questions!
The perils of late night programming: you might over-think something, or miss the obvious. I was so focused on finding a way to return the request.response without any changes/edits that I did not even sketch out what option (b) would be.
Well, it turns out it's pretty simple:
import logging
import time

from django.http import HttpResponse
from requests import Request, Session
from requests.exceptions import RequestException

logger = logging.getLogger(__name__)

s = Session()
# Populate POST to cloud with data from remote device request:
req = Request('POST', url, data=data, headers=headers)
prepped = req.prepare()
timeout = 10
retries = 3
resp = None
while retries > 0:
    try:
        logger.debug("POST data to remote host")
        resp = s.send(prepped, timeout=timeout)
        break
    except RequestException:
        logger.debug("remote host connection failed, retry")
        retries -= 1
        logger.debug("retries left: %d", retries)
        time.sleep(.3)
if resp is None:
    pass  # There isn't anything I can do if this fails repeatedly...

# Build reply to remote device:
r = HttpResponse(resp.content,
                 content_type=resp.headers['Content-Type'],
                 status=resp.status_code,
                 reason=resp.reason,
                 )
r['Server'] = resp.headers['Server']
r['Connection'] = resp.headers['Connection']
logger.debug("Returning Server response to remote device")
return r
The Session "s" allows the use of "prepped" and "send", which lets one tweak the request object before it's sent and retry the send. I think at least some of it can be removed in a refactor, making this process even simpler.
There are three HTTP objects at play here:
"req" is the POST I send up to the cloud server to get back a special (encrypted) reply.
"resp" is the reply back from the cloud server. The body (.content) contains the special reply.
"r" is the Django HTTP response I need to send back to the remote device that started this ball rolling by POSTing data to my view.
It's pretty simple to populate the response with the data and set the headers to the values returned by the cloud server.
I know this works because the remote device does not POST the same data twice! If there were a mistake anywhere in this process, it would re-send the same data over and over. I copied the while/try loop from a socket repeater module; I don't know if that is really applicable to HTTP. I have been testing this on live hardware for over 48 hours and so far it has never failed. Timeouts are a question mark too, in that I know the remote device and cloud server have strict limits. So if there is an error in my "repeater", re-trying may not work if the process takes too long. It might be better to just discard/give up on the current POST and wait for the remote device to re-try. Sorry, refactoring out loud...
I have several Flask servers which handle POST requests and return some values. I need to run an infinite process which sends requests to all servers, waits until the first response, updates internal state based on that response, and sends a new request to that server again. Here is pseudocode for this:
obj = SomeObject()
requests = [obj.make_request() for _ in range(10)]
responses = grequests.imap(requests, size=10)
for response in responses:
    obj.update_state(response)
    requests.append(obj.make_request())
What is the proper implementation of such logic in python?
I have a Scrapy project with a lot of spiders. There is a server-side solution that restarts HMA VPN in order to change the interface IP (so that we get a different IP and don't get blocked).
There is a custom download middleware that sends a corresponding socket message for each request and response so that the server-side solution can trigger the VPN restart. Obviously Scrapy must NOT yield any new requests when a VPN restart is about to happen; we control that by having a lock file. Scrapy, however, must handle all not-yet-received responses before the VPN restart can actually happen.
Putting a sleep in the download middleware stops Scrapy completely. Is there a way to handle responses but hold off new requests (until the lock file gets removed)?
This is obviously only an issue when more than one concurrent request is yielded.
The following middleware code is used:
class CustomMiddleware(object):

    def process_request(self, request, spider):
        while os.path.exists(LOCK_FILE_PATH):
            time.sleep(10)
        # Send corresponding socket message("OPEN")

    def process_response(self, request, response, spider):
        # Send corresponding socket message("CLOSE")
        return response
It turned out the solution is very simple:
if os.path.exists(LOCK_FILE_PATH):
    return request
This way the request is passed back through the middleware chain over and over until it can be executed.
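Put together with the middleware from the question, the fix looks roughly like this; the lock-file path and the socket calls are placeholders, not part of the original code:

```python
import os
import tempfile

# Assumed lock-file location; in the real setup this comes from the
# server-side VPN-restart tooling.
LOCK_FILE_PATH = os.path.join(tempfile.gettempdir(), "vpn_restart.lock")

class CustomMiddleware(object):

    def process_request(self, request, spider):
        # While a VPN restart is pending, reschedule the request instead of
        # sleeping: returning the request sends it back into the middleware
        # chain, so Scrapy keeps handling already-received responses.
        if os.path.exists(LOCK_FILE_PATH):
            return request
        # Send corresponding socket message("OPEN") here

    def process_response(self, request, response, spider):
        # Send corresponding socket message("CLOSE") here
        return response
```

Returning the request (rather than None) from `process_request` is what tells Scrapy to re-schedule it instead of downloading it now.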
I receive posted data and immediately return an empty 200 OK response; after that, I want to process the received data. I'm considering how to do it with a teardown function, but I can't find how to pass it the received data:
@app.route('/f', methods=['POST'])
def f():
    data = request.stream.read()
    return ''

@app.teardown_request
def teardown_request(exception=None):
    # How to use the posted data here?
    pass
Flask version is 0.10.1
I'm trying to implement a Paypal IPN listener
https://developer.paypal.com/webapps/developer/docs/classic/ipn/gs_IPN/#overview
Notice that the listener's HTTP 200 response happens before the listener's IPN message.
You are overcomplicating things; just send the request to Paypal from within your Flask request handler. Paypal IPN notifications just require an empty 200 response, and Paypal does not mandate that you send the 200 OK before you can send the HTTP request to their servers.
The overview page is indeed confusing, but the PHP code posted won't close the request until the Paypal IPN post back to their server has completed either.
If this were a hard requirement (making this a terrible design), you'd have to handle the request back to Paypal asynchronously. You can do this with a separate thread and a queue: push in the data you received from the IPN, and have a separate thread poll the queue and communicate with Paypal from that thread. Or you could use Celery to simplify the job (push out a task to be handled asynchronously). Either way, this would let you close the incoming request early.
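A minimal sketch of the thread-plus-queue variant. The names here (`handle_ipn`, `verify_with_paypal`) are illustrative, and the actual postback to Paypal's verification endpoint is replaced by a placeholder:

```python
import queue
import threading

ipn_queue = queue.Queue()
verified = []  # stand-in for "posted back to Paypal"

def verify_with_paypal(payload):
    # Placeholder: real code would POST the payload back to Paypal's
    # verification endpoint (e.g. with requests) and check the reply.
    verified.append(payload)

def ipn_worker():
    # Background thread: drain the queue and talk to Paypal outside
    # the incoming request/response cycle.
    while True:
        payload = ipn_queue.get()
        try:
            verify_with_paypal(payload)
        finally:
            ipn_queue.task_done()

threading.Thread(target=ipn_worker, daemon=True).start()

def handle_ipn(raw_body):
    # Inside the Flask view you'd call this with request.get_data(),
    # then immediately return '' (the empty 200 OK Paypal expects).
    ipn_queue.put(raw_body)
    return ''
```

The view closes the incoming request as soon as the payload is queued; the worker thread does the slow part on its own schedule.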