grpc python measure response time

How can I measure the full time grpc-python takes to handle a request?
So far the best I can do is:
def Run(self, request, context):
    start = time.time()
    # service code...
    end = time.time()
    return myservice_stub.Response()
But this doesn't measure how much time gRPC takes to serialize the request and the response, to transfer them over the network, and so on. I'm looking for a way to "hook" into these steps.

You can measure on the client side:
start = time.time()
response = stub.Run(request)
total_end_to_end = time.time() - start
Then you can get the total overhead (serialization, transfer) by subtracting the computation time of the Run method.
To automate the process, you can add (at least for the sake of the test) the computation time as a field to the myservice_stub.Response.
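As a rough automation of that subtraction, assuming the server reports its own computation time in a response field (the helper name and the reporting mechanism are assumptions, not part of grpc-python):

```python
import time

def measure_overhead(call_rpc, server_compute_seconds):
    # Hypothetical helper: time the whole client-side call, then subtract
    # the server-reported computation time to estimate the gRPC overhead
    # (serialization + network transfer).
    start = time.time()
    call_rpc()
    end_to_end = time.time() - start
    return end_to_end - server_compute_seconds
```

Here `call_rpc` would be something like `lambda: stub.Run(request)` and `server_compute_seconds` the value of the extra response field.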

Related

Faster way to iterate over dataframe?

I have a dataframe where each row is a record and I need to send each record in the body of a post request. Right now I am looping through the dataframe to accomplish this. I am constrained by the fact that each record must be posted individually. Is there a faster way to accomplish this?
Iterating over the data frame is not the issue here. The issue is that you have to wait for the server to respond to each of your requests. A network request takes eons compared to the CPU time needed to iterate over the data frame. In other words, your program is I/O bound, not CPU bound.
One way to speed it up is to use coroutines. Let's say you have to make 1000 requests. Instead of firing one request, waiting for the response, then firing the next request and so on, you fire all 1000 requests at once and tell Python to wait until you have received all 1000 responses.
Since you didn't provide any code, here's a small program to illustrate the point:
import aiohttp
import asyncio
import numpy as np
import time
from typing import List

async def send_single_request(session: aiohttp.ClientSession, url: str):
    async with session.get(url) as response:
        return await response.json()

async def send_all_requests(urls: List[str]):
    async with aiohttp.ClientSession() as session:
        # Make 1 coroutine for each request
        coroutines = [send_single_request(session, url) for url in urls]
        # Wait until all coroutines have finished
        return await asyncio.gather(*coroutines)

# We will make 10 requests to httpbin.org. Each request will take at least d
# seconds. If you were to fire them sequentially, they would have taken at least
# delays.sum() seconds to complete.
np.random.seed(42)
delays = np.random.randint(0, 5, 10)
urls = [f"https://httpbin.org/delay/{d}" for d in delays]

# Instead, we will fire all 10 requests at once, then wait until all 10 have
# finished.
t1 = time.time()
result = asyncio.run(send_all_requests(urls))
t2 = time.time()

print(f"Expected time: {delays.sum()} seconds")
print(f"Actual time: {t2 - t1:.2f} seconds")
Output:
Expected time: 28 seconds
Actual time: 4.57 seconds
You have to read up a bit on coroutines and how they work, but for the most part they are not too complicated for your use case. This comes with a couple of caveats:
All your requests must be independent from each other.
The rate limit on the server must be sufficient to handle your workload. For example, if it restricts you to 2 requests per minute, there is no way around that other than upgrading to a different service tier.
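If the server does impose a limit on concurrent requests, one common mitigation (my own sketch, not part of the answer above) is to cap how many requests are in flight at once with an asyncio.Semaphore; asyncio.sleep stands in for the real HTTP call here:

```python
import asyncio

async def send_with_limit(semaphore, coro_factory):
    # Only max_in_flight coroutines may pass this point at once.
    async with semaphore:
        return await coro_factory()

async def send_all_limited(coro_factories, max_in_flight=5):
    semaphore = asyncio.Semaphore(max_in_flight)
    tasks = [send_with_limit(semaphore, factory) for factory in coro_factories]
    # gather preserves the order of the input coroutines.
    return await asyncio.gather(*tasks)

async def fake_request(i):
    # Stand-in for a real HTTP call (asyncio.sleep instead of aiohttp).
    await asyncio.sleep(0.01)
    return i

results = asyncio.run(
    send_all_limited([lambda i=i: fake_request(i) for i in range(20)])
)
```

With a real client you would pass factories like `lambda: send_single_request(session, url)` instead of `fake_request`.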

How can I implement a stopwatch in my socket Python program

I am creating a very simple ping program in Python that will send 16-64 bytes of information to a local server; once the server has received all the bytes, it will send a 1-byte message back to the client. (We are testing wifi speeds...) It is very crucial that my client program measures how much time it took to send these bytes. I need a "stopwatch" to measure, in milliseconds, how much time it took to get the 1-byte message back from the server.
My question is, how can I do this?
I know that there is the time library, but there is no function in there that can help me measure in milliseconds like I need. Thank you.
Also, I am using Python 3.4 for this project.
You can use the timeit module, or use a decorator to get the execution time of a function:
import time

def timer(func):
    def func_wrapper(*args, **kwargs):
        before = time.time()
        call = func(*args, **kwargs)
        after = time.time()
        print('%s function took %0.5f milliseconds' % (func.__name__, (after - before) * 1000.0))
        return call
    return func_wrapper

@timer
def test():
    return [i**2 for i in range(10000)]

test()
Output:
test function took 3.99995 milliseconds
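The timeit route mentioned above looks like this; it averages over many runs, which is more robust than a single measurement:

```python
import timeit

# timeit runs the statement many times and returns the total seconds;
# dividing by `number` gives the per-call time, here converted to ms.
total_seconds = timeit.timeit("[i**2 for i in range(10000)]", number=100)
per_call_ms = total_seconds / 100 * 1000.0
print('%0.5f milliseconds per call' % per_call_ms)
```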

Set delay between tasks in group in Celery

I have a python app where a user can initiate a certain task.
The whole purpose of a task is to execute a given number of POST/GET requests with a particular interval to a given URL.
So the user gives N, the number of requests, and V, the number of requests per second.
What is the best way to design such a task, taking into account that due to I/O latency the actual requests-per-second rate could be bigger or smaller?
First of all, I decided to use Celery with Eventlet, because otherwise I would need dozens of workers, which is not acceptable.
My naive approach:
Client starts a task using task.delay()
Inside task I do something like this:
@task
def task(number_of_requests, time_period):
    for _ in range(number_of_requests):
        start = time.time()
        params_for_concrete_subtask = ...
        # .... do some IO with monkey_patched eventlet requests library
        elapsed = (time.time() - start)
        # If we completed this subtask too fast
        if elapsed < time_period / number_of_requests:
            eventlet.sleep(time_period / number_of_requests)
A working example is here.
If we are too fast, we wait to keep the desired speed. If we are too slow, that's ok from the client's perspective: we do not violate the requests/second requirement. But will this resume correctly if I restart Celery?
I think this should work but I thought there is a better way.
In Celery I can define a task with a particular rate limit, which almost matches the guarantee I need. So I could use Celery's group feature and write:
@task(rate_limit=...)
def task(...):
    # ...

task_executor = task.s(number_of_requests, time_period)
group(task_executor(params_for_concrete_task) for params_for_concrete_task in ...).delay()
But here I hardcode the rate_limit, which is dynamic, and I do not see a way of changing it. I saw an example:
task.s(....).set(... params ...)
But when I tried to pass rate_limit to the set method, it did not work.
Another, maybe better, idea was to use Celery's periodic task scheduler. But with the default implementation, the periods and the tasks to be executed periodically are fixed.
I need to be able to dynamically create tasks which run periodically a given number of times with a specific rate limit. Maybe I need to run my own scheduler which will take tasks from a DB? But I do not see any documentation around this.
Another approach was to try to use a chain function, but I could not figure out whether there is a delay-between-tasks parameter.
If you want to adjust the rate_limit dynamically, you can do it using the following code. It also creates the chain() at runtime.
Run this and you will see that we successfully override the rate_limit from 5/sec to 0.5/sec.
test_tasks.py
from celery import Celery, signature, chain
import datetime as dt

app = Celery('test_tasks')
app.config_from_object('celery_config')

@app.task(bind=True, rate_limit=5)
def test_1(self):
    print(dt.datetime.now())

app.control.broadcast('rate_limit',
                      arguments={'task_name': 'test_tasks.test_1',
                                 'rate_limit': 0.5})

test_task = signature('test_tasks.test_1').set(immutable=True)
l = [test_task] * 100
task_chain = chain(*l)
res = task_chain()
I also tried to override the attribute from within the class, but IMO the rate_limit is set when the task is registered by the worker; that is why .set() has no effect. I'm speculating here; one would have to check the source code.
Solution 2
Implement your own waiting mechanism using the end time of the previous call, in the chain the return of the function is passed to the next one.
So it would look like this:
from celery import Celery, signature, chain
import datetime as dt
import time

app = Celery('test_tasks')
app.config_from_object('celery_config')

@app.task(bind=True)
def test_1(self, prev_endtime=dt.datetime.now(), wait_seconds=5):
    wait = dt.timedelta(seconds=wait_seconds)
    print(dt.datetime.now() - prev_endtime)
    wait = wait - (dt.datetime.now() - prev_endtime)
    wait = wait.seconds
    print(wait)
    time.sleep(max(0, wait))
    now = dt.datetime.now()
    print(now)
    return now

# app.control.rate_limit('test_tasks.test_1', '0.5')
test_task = signature('test_tasks.test_1')
l = [test_task] * 100
task_chain = chain(*l)
res = task_chain()
I think this is actually more reliable than the broadcast.
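The waiting mechanism in Solution 2 can be illustrated without Celery at all; a minimal sketch (the function name is illustrative): after each call, sleep only for what is left of the interval, so the overall rate stays near 1/interval_seconds calls per second.

```python
import time

def paced_calls(calls, interval_seconds):
    # Run each call, then sleep for the remainder of the interval.
    # If a call already took longer than the interval, don't sleep at all.
    results = []
    for call in calls:
        start = time.time()
        results.append(call())
        elapsed = time.time() - start
        if elapsed < interval_seconds:
            time.sleep(interval_seconds - elapsed)
    return results
```

This is also the fix for the naive approach in the question, which sleeps for the full interval instead of the remainder.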

Timing a HTTP / database request: time.time() or time.clock()?

I am interested in measuring the time elapsed during a (synchronous) HTTP request and/or a (synchronous) request to a database on a remote server. After reading this page, my understanding is that time.clock() is an accurate measure of the processor time. But I don't know if "processor time" is relevant in my case, since the CPU would be idling while waiting for the response. In other words:
s0 = time.time()
# send a HTTP request
s1 = time.time()
t0 = time.clock()
# send a HTTP request
t1 = time.clock()
Which one actually measures what I want?
For measuring HTTP response time, I think time.time() is enough.
As others suggested, use timeit if you want to do benchmarking.
I personally haven't used time.clock() before, but after reading the example:
#!/usr/bin/python
import time

def procedure():
    time.sleep(2.5)

# measure process time (time.clock() was removed in Python 3.8;
# use time.process_time() there)
t0 = time.clock()
procedure()
print(time.clock() - t0, "seconds process time")

# measure wall time
t0 = time.time()
procedure()
print(time.time() - t0, "seconds wall time")
I don't think time.clock() is appropriate for measuring HTTP response time.
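For what it's worth, on Python 3.3+ the usual choice for this is time.perf_counter(), a monotonic, high-resolution wall clock; a small sketch where time.sleep stands in for the blocking HTTP request:

```python
import time

# Unlike processor time, perf_counter() keeps ticking while the CPU
# idles waiting on I/O, which is what you want for an HTTP request.
start = time.perf_counter()
time.sleep(0.05)  # stands in for the blocking HTTP request
elapsed = time.perf_counter() - start
print(f"{elapsed:.3f} seconds wall time")
```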
One approach is to use New Relic for Python. You just install it and enable it in your application. After that, you will be able to see such charts in your New Relic account. It has a free plan.

Is this the right way to measure round-trip time?

I need to compare a few CDN services, so I wrote a short Python script to repeatedly send GET requests to resources deployed on these CDNs and record the round-trip time. I run the script on several PCs in different cities.
This is how I did it:
t0 = time.clock()
r = requests.get(test_cdn_url)
t1 = time.clock()
roundtrip = t1-t0 # in seconds
For most requests, the round-trip time is within 1 second: 200-500 ms. But occasionally it reports a request that finishes in several seconds: 3-5 seconds, once 9 seconds.
Is this just the way it is, or am I using the wrong tool to measure? In other words, does the requests lib do something (caching or some heavyweight operations) that makes the metric totally wrong?
The Response object provides an elapsed attribute:
The amount of time elapsed between sending the request and the arrival
of the response (as a timedelta)
Your code would then look like:
r = requests.get(test_cdn_url)
roundtrip = r.elapsed.total_seconds()
If you're worried that requests is doing anything heavyweight (or caching), you could always use urllib (urllib.request in Python 3):
import time
import urllib.request

nf = urllib.request.urlopen(url)
t0 = time.time()
page = nf.read()
t1 = time.time()
nf.close()
roundtrip = t1 - t0
Alternatively, if you include a Cache-Control: no-cache header along with your request, then that should ensure that no caching happens along the way - and your original code should time the request effectively.
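Combining both suggestions, a no-cache timing call could look like this (a sketch; round_trip_seconds is a hypothetical wrapper, and passing the get function in just makes it easy to test without a network):

```python
def round_trip_seconds(get, url):
    # Send a Cache-Control: no-cache header so intermediate caches
    # don't skew the timing, then return requests' own elapsed
    # measurement (a timedelta) in seconds.
    response = get(url, headers={"Cache-Control": "no-cache"})
    return response.elapsed.total_seconds()

# In real use: round_trip_seconds(requests.get, test_cdn_url)
```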
