Hello, I would like to do a simple bandwidth test. The size of the default HTML page is 10 MB. The upload speed of the server is 5 Mbps, so 10 MB cannot possibly be transferred within 10 seconds (5 Mbps is roughly 0.625 MB/s, or about 6.25 MB in 10 seconds). My plan is to start a timer when the GET request arrives, and 10 seconds later read either the percentage or the total number of bytes sent to that particular client. So my question is: how do I get that percentage or byte count?
Approach 1
The simplest solution would be to use this open-source speedtest, which shows in the browser the download speed clients are getting from your specific server, as well as their upload speed to it. That particular solution uses PHP and Perl on the server side. There is a related question, Python speedtest.net, or equivalent, that didn't have an answer along the lines you are looking for.
Approach 2
You can do the download test using JavaScript and AJAX, which lets you monitor the download progress of a file from the web browser. For the upload portion you use the same technique, monitoring an AJAX upload.
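If you want to test from a script rather than a browser, here is a rough client-side sketch of the same idea in Python: stream the 10 MB page for 10 seconds and count the bytes that arrive (the URL is a placeholder).

import time
import requests

url = 'http://your-server.example/10mb-test-page.html'  # placeholder test URL
total_size = 10 * 1024 * 1024  # the 10 MB test page
received = 0
start = time.time()

with requests.get(url, stream=True) as r:
    for chunk in r.iter_content(chunk_size=8192):
        received += len(chunk)
        if time.time() - start >= 10:
            break

print('bytes received in 10 s:', received)
print('percentage of the page:', 100.0 * received / total_size)
print('throughput (Mbps):', received * 8 / 10 / 1e6)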
Related
I would like to know the most elegant way to scrape live webcam (traffic) data, ideally using Python. The webcam feed is exposed through an API, with each GET request yielding an image of the currently available feed from the webcam. The feed in question has a 2-3 second delay, so roughly 30 images per minute can be requested.
My current (trivial) solution simply queries the API in a loop (perhaps with a sleep timer) and then cleans up any duplicated images. However, this seems quite dirty and I was wondering if there is a cleaner/more elegant solution available.
In principle I would like the solution to (if at all possible) avoid:
downloading duplicated images
sleep timers
looping
Is something like this possible?
To avoid sleep timers in your own code, you can write a process that is triggered by cron, which will run your script at defined intervals. Note, though, that standard cron's finest granularity is one minute, so for an interval like every 2 seconds (60 s / 30 images per minute) you would need a systemd timer or a small dispatcher that re-invokes the script within each minute.
An example process might call the API using requests. Assuming an image is passed back, the following example code might work. If a JSON string is passed back then you will need to parse it and extract the image URL.
import requests

r = requests.get('https://traffic-cam-site.com/cam', auth=('user', 'pass'))
if r.status_code == 200:  # requests exposes the HTTP status as status_code
    image = r.content
To avoid downloading duplicate images, you need to know when a new image is present on the API, so you will have to check the cam site periodically. Store a hash of each collected image in a database (or text file). If the API lets you send a hash with your request (for example via an If-None-Match/ETag header), you can skip the transfer entirely when the current image matches one you already have; otherwise, hash the image after fetching it and, if the hash matches a stored one, don't save it to your server.
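As a rough sketch of the hashing idea (without server-side support you still transfer the bytes, you just avoid storing duplicates; fetch_if_new, the URL, and the credentials are placeholders):

import hashlib
import requests

seen_hashes = set()  # in practice, persist these in a database or text file

def fetch_if_new(url):
    # fetch the current cam image; return its bytes only if it is a new frame
    r = requests.get(url, auth=('user', 'pass'))
    if r.status_code != 200:
        return None
    digest = hashlib.sha256(r.content).hexdigest()
    if digest in seen_hashes:
        return None  # same frame as last time, skip it
    seen_hashes.add(digest)
    return r.content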
Alternatively, if the cam site API supports push notifications, then you may be notified when a new image is present.
I have written a Python script that fetches prices from Alpha Vantage using an API key. I have two different Alpha Vantage API keys (mine and my brother's), and the script requests data separately with both keys, but from the same laptop. Even though I request separately with separate keys, I still get the maximum-API-call-frequency-exceeded error (5 API calls per minute per key).
My best guess is that Alpha Vantage knows the requests are coming from the same location or the same system. Is there any workaround for this problem? Maybe bounce my signal (I sort of found an answer, but I don't know if that's the actual problem!) or pretend the requests are coming from different systems?
Your IP is limited to 5 requests per minute regardless of how many keys you cycle through, so you would need to cycle IPs as well (for example, by routing requests through proxies).
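For illustration, a hedged sketch of routing the second key's requests through a proxy with requests (the proxy address and SECOND_KEY are placeholders; you would need access to an actual proxy):

import requests

proxies = {'https': 'http://203.0.113.10:8080'}  # placeholder proxy

r = requests.get(
    'https://www.alphavantage.co/query',
    params={'function': 'GLOBAL_QUOTE', 'symbol': 'AAPL', 'apikey': 'SECOND_KEY'},
    proxies=proxies,
)
print(r.json())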
I went through the same issue and just ended up buying their premium key; it's well worth it. Alternatively, you can add a time delay between calls and run the script in the background so it never exceeds the limit.
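A minimal sketch of the time-delay approach, assuming the standard Alpha Vantage query endpoint and a placeholder key:

import time
import requests

API_KEY = 'YOUR_ALPHAVANTAGE_KEY'  # placeholder
symbols = ['AAPL', 'MSFT', 'GOOG']

for symbol in symbols:
    r = requests.get(
        'https://www.alphavantage.co/query',
        params={'function': 'GLOBAL_QUOTE', 'symbol': symbol, 'apikey': API_KEY},
    )
    print(r.json())
    time.sleep(13)  # 5 calls/minute means at least 12 s between calls; 13 s adds a margin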
You can also try the Polygon API; it offers 60 requests per minute for free:
https://polygon.io/pricing
I have a Python script running continuously as a WebJob on Azure. Roughly every 3 minutes it generates a new set of data. Once the data is generated, we want to send it to the UI (Angular) in real time.
What would be the ideal (fastest) approach to get this functionality?
The generated data is a JSON containing 50 key-value pairs. I read about SignalR, but can I use SignalR directly from my Python code? Is there any other approach, like sockets?
What you need is called WebSocket: a protocol that allows back-end servers to push data to connected web clients.
There are implementations of WebSocket for Python (a quick search found me this one).
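For example, a minimal sketch using the third-party websockets library (pip install websockets); the host, port, and payload are placeholders:

import asyncio
import json
import websockets

async def push_data(websocket):
    # push a fresh payload to the connected client roughly every 3 minutes
    while True:
        data = {'key_1': 'value_1', 'key_2': 'value_2'}  # your ~50 key-value pairs
        await websocket.send(json.dumps(data))
        await asyncio.sleep(180)

async def main():
    # note: with older versions of the library the handler takes (websocket, path)
    async with websockets.serve(push_data, 'localhost', 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())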
Once you have a WebSocket going, you can create a service in your Angular project to handle the messages pushed from your Python service, most likely using observables.
Hopefully this sets you on the right path.
How do I get the properties shown in the image (Blocked, DNS resolution, Connecting, ...) after sending the request?
From Firefox, the waiting time is ~650 ms.
From Python, requests.Response.elapsed.total_seconds() gives ~750 ms.
Since the results differ, I want a more detailed breakdown like the one shown in Firefox developer mode.
You can only get the total time of the request, because the response itself doesn't know any more than that.
More detailed timings are only logged by the program that handles the request and starts/stops a timer for each step.
You would need to track times inside your connection framework, or have a look at the Firefox APIs for "timings" (there are several related APIs, so you may find something usable for your case). The main point is that you can't do it directly from your script alone, because the request and response are fired/caught, and the timing data collected, by other components in between.
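That said, if you want a per-phase breakdown from Python itself, one option is pycurl (pip install pycurl), which exposes timings comparable to the browser's waterfall; the URL is a placeholder:

from io import BytesIO

import pycurl

buffer = BytesIO()
c = pycurl.Curl()
c.setopt(c.URL, 'https://example.com/')  # placeholder URL
c.setopt(c.WRITEDATA, buffer)
c.perform()

print('DNS resolution:', c.getinfo(pycurl.NAMELOOKUP_TIME))
print('Connect:', c.getinfo(pycurl.CONNECT_TIME))
print('TLS handshake:', c.getinfo(pycurl.APPCONNECT_TIME))
print('Time to first byte:', c.getinfo(pycurl.STARTTRANSFER_TIME))
print('Total:', c.getinfo(pycurl.TOTAL_TIME))
c.close()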
My frontend web app is calling my Python Flask API on an endpoint that is cached and returns a JSON response that is about 80,000 lines long and 1.7 megabytes.
It takes my UI about 7.5 seconds to download all of it.
It takes Chrome when calling the path directly about 6.5 seconds.
I know that I can split up this endpoint for performance gains, but out of curiosity, what are some other great options to improve the download speed of all this content?
Options I can think of so far:
1) compressing the content. But then I would have to decompress it on the frontend
2) Use something like gRPC
Further info:
My Flask server uses WSGIServer from gevent, and the endpoint code is below. PROJECT_DATA_CACHE is the already-JSONified data that is returned:
@blueprint_2.route("/projects")
def getInitialProjectsData():
    global PROJECT_DATA_CACHE
    if PROJECT_DATA_CACHE:
        return PROJECT_DATA_CACHE
    else:
        LOGGER.debug('No cache available for GET /projects')
        updateProjectsCache()
        return PROJECT_DATA_CACHE
Maybe you could stream the file? I cannot see any way to transfer a file 80,000 lines long without some kind of download or wait.
This would be an opportunity to compress and decompress it, like you suggested. Definitely make sure that the JSON is minified.
One way to minify a JSON: https://www.npmjs.com/package/json-minify
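On the compression point: if you use the standard Content-Encoding mechanism, the browser decompresses the response transparently, so there is no extra frontend work. A minimal sketch using the third-party Flask-Compress extension (pip install Flask-Compress):

from flask import Flask
from flask_compress import Compress

app = Flask(__name__)
Compress(app)  # gzips responses (JSON included) for clients that send Accept-Encoding: gzip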
Streaming a file:
https://blog.al4.co.nz/2016/01/streaming-json-with-flask/
It also really depends on the project; maybe you could have users download the whole file once up front?
The best way to do this is to break your JSON into chunks and stream it by passing a generator to the Response. You can then render the data as you receive it, or show a progress bar displaying the percentage that is done. I have an example of how to stream data as a file is being downloaded from AWS S3 here. That should point you in the right direction.
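For illustration, a minimal sketch of passing a generator to Flask's Response (the /projects-stream route is hypothetical, and it assumes the cached projects are a list of dicts rather than one pre-serialized string):

import json

from flask import Response

@blueprint_2.route("/projects-stream")
def streamProjectsData():
    def generate():
        # emit the list one element at a time instead of one huge string
        yield '['
        for i, project in enumerate(PROJECT_DATA_CACHE):
            if i:
                yield ','
            yield json.dumps(project)
        yield ']'
    return Response(generate(), mimetype='application/json')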