I have a big threaded feed retrieval script in python.
My question is, how can I load balance outgoing requests so that I don't hit any one host too often?
This is a big problem with feedburner, since a large percentage of sites proxy their RSS through feedburner, and to further complicate matters many sites alias a subdomain on their own domain to feedburner to obscure the fact that they're using it (e.g. "mysite" sets its RSS url to feeds.mysite.com/mysite, where feeds.mysite.com bounces to feedburner). Sometimes it blocks me for a while and redirects to their "automated requests" error page.
You should probably do a one-time request (per week/month, whatever fits) for each feed and follow redirects to get the "true" address. Regardless of your throttling situation at the time, you should be able to resolve all feeds, save that data, and then just do it once for every new feed you add to the list. You can use the geturl() method of the response urllib gives you, as it returns the final URL after redirects for the URL you put into it. When you do ping the feeds, be sure to use the original URL (keep the "real" one only for load balancing) so that it still redirects properly if the user has moved it or similar.
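For illustration, here's a minimal sketch of that one-time resolution step using Python 3's urllib (the resolve_feed_url name is mine, not part of any library):

```python
# Resolve each feed URL once and cache the final host so it can be used
# for per-host throttling later on.
from urllib.request import urlopen
from urllib.parse import urlparse

def resolve_feed_url(feed_url, timeout=10):
    """Follow redirects and return the final URL and its host."""
    with urlopen(feed_url, timeout=timeout) as response:
        final_url = response.geturl()            # URL after all redirects
    return final_url, urlparse(final_url).netloc
```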
Once that is done, you can simply devise a throttling mechanism such as only X requests per hour for a given domain, going through each feed and skipping feeds whose hosts have hit the limit. If feedburner publishes their limits (not likely) you can use that for X; otherwise you will just have to pick a conservative value that you know to be below the limit. Knowing Google, however, their limits might be based on request patterns rather than a specific hard number.
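A rough sketch of such a per-host throttle might look like this (the class name and the 60-per-hour figure are made up, not a known feedburner limit):

```python
# Track request timestamps per host and refuse anything over the hourly budget.
import time
from collections import defaultdict, deque

class HourlyThrottle:
    def __init__(self, max_per_hour=60):
        self.max_per_hour = max_per_hour
        self.history = defaultdict(deque)        # host -> recent request timestamps

    def allow(self, host):
        now = time.time()
        window = self.history[host]
        while window and now - window[0] > 3600:
            window.popleft()                     # drop requests older than an hour
        if len(window) < self.max_per_hour:
            window.append(now)
            return True
        return False
```

In the fetch loop you would skip (and revisit later) any feed whose resolved host is currently over its budget.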
Edit: Added suggestion from comment.
If your problem is related to Feedburner "throttling you", it almost certainly does this based on the source IP of your bot. The way to "load balance to Feedburner" would be to have multiple different source IPs to start from.
There are numerous ways of achieving this, two of them being:
Multi-homed server: multiple IPs on the same machine
Multiple discrete machines
Of course, don't go and put a NAT box in front of them now ;-)
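As a rough illustration of the multi-IP idea, Python's http.client lets you pick the local source address per connection; the addresses below are placeholders for IPs actually configured on the machine:

```python
# Rotate the source IP used for outgoing requests on a multi-homed box.
import itertools
import http.client

SOURCE_IPS = itertools.cycle(["192.0.2.10", "192.0.2.11"])   # example addresses

def fetch(host, path="/"):
    ip = next(SOURCE_IPS)
    conn = http.client.HTTPConnection(host, source_address=(ip, 0))
    try:
        conn.request("GET", path, headers={"User-Agent": "my-feed-bot/1.0"})
        return conn.getresponse().read()
    finally:
        conn.close()
```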
The above takes care of the possible "throttling problems", now for the "scheduling part". You should maintain a "virtual scheduler" per "destination" and make sure not to exceed the parameters of the Web Service (e.g. Feedburner) in question. Now, the tricky part is to get hold of these "limits"... sometimes they are advertised and sometimes you need to figure them out experimentally.
I understand this is "high level architectural guidelines" but I am not ready to be coding this for you... I hope you forgive me ;-)
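For illustration only, though, a very small sketch of the "virtual scheduler per destination" idea; the limits below are made up:

```python
# Each destination gets a next-allowed timestamp derived from whatever
# limit you have figured out (or guessed) for it.
import time

LIMITS = {"feedburner.com": 10.0, "default": 2.0}    # seconds between requests
next_allowed = {}

def ready(destination):
    return time.time() >= next_allowed.get(destination, 0.0)

def mark_requested(destination):
    interval = LIMITS.get(destination, LIMITS["default"])
    next_allowed[destination] = time.time() + interval
```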
"how can I load balance outgoing requests so that I don't hit any one host too often?"
Generally, you do this by designing a better algorithm.
For example, randomly scramble your requests.
Or shuffle them 'fairly' so that you round-robin through the sources. That would be a simple list of queues where you dequeue one request from each host per pass.
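A minimal sketch of that list-of-queues round-robin (the helper names are illustrative):

```python
# One FIFO per host; each pass over the hosts dequeues at most one request,
# so no single host gets hammered.
from collections import defaultdict, deque

queues = defaultdict(deque)                  # host -> pending requests

def enqueue(host, url):
    queues[host].append(url)

def round_robin():
    """Yield (host, url) pairs, one per host per pass."""
    while any(queues.values()):
        for host, q in list(queues.items()):
            if q:
                yield host, q.popleft()
```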
Related
Hi I am quite new to web application development. I have been designing an application where a user uploads a file, some calculation is done and an output table will be shown. This process takes approximately 5-6 seconds.
I am saving my data in sessions like this:
request.session['data'] = resultDATA
And loading the data whenever I need from sessions like this:
resultDATA = request.session['data']
I don't need the data once the user is signed out. So is this approach correct for saving user data (not involving passwords)?
My biggest problem is: if n users upload their files at the exact same moment, does the last user have to wait n*6 seconds for his calculation to complete? If so, is there any solution for this?
Right now I am using django built-in web server.
Do I have to use a different server to solve this problem?
There are quite a few questions in this question; however, I think they are related and concise enough to deserve an answer:
So is approach correct to save user data (not involving passwords)?
I don't see any problem with this approach, since it's volatile data and it's not sensitive.
My biggest problem is if n number of users upload their files at exact moment do the last user have to wait for n*6 seconds for his calculation to complete?
This shouldn't be an issue the way you put it. Obviously, if your server is handling huge amounts of traffic it will slow down, and it will take a bit longer than your usual 5-6 seconds. However, it won't be n*6; the server should be able to handle multiple requests at once.
Do I have to use a different server to solve this problem?
No, but kind of yes... What I mean is that in development the built-in server is great. It does everything you need it to do; however, when you decide to push the app into production, you'll need a proper server for it.
As a side note, try to see if you can improve the data collection time, because right now everything is running on your own PC, which means it will probably be faster than when you push it to production. When you "upload" a file to localhost it takes a lot less time than when you upload it to an actual server over the internet, so that's a thing to keep in mind.
I'm working on my project, which is to make a streaming client on top of libtorrent.
I'm using the Python client (Python binding).
I searched a lot about the functions set_sequential_download() and set_piece_deadline() and couldn't find a good answer on how to force downloading pieces in order, meaning first piece 1, then 2, 3, 4, etc.
I saw people asking this in forums, but none of them got a good answer on the changes that need to be made for it to succeed.
I understood that set_sequential_download() just asks for the pieces in order, but in fact they are downloaded out of order. I tried to change the deadline of the pieces using set_piece_deadline(), incrementing it for each piece, but it doesn't work for me at all.
UPDATE: The goal I'm trying to accomplish is downloading one piece at a time, so I can stream through torrents.
I hope some of you can help me.
Thanks, Ben.
set_sequential_download() will request pieces in order. However:
not all peers have all pieces. If the next piece you want to download is 3 and one of your peers doesn't have 3 but the next one it has is 5, libtorrent will start requesting blocks from piece 5 from that peer.
peers provide varying upload rates, which means that some peers will satisfy your request sooner than others.
This makes it possible for the pieces to complete out-of-order.
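For reference, a minimal Python-binding sketch of turning sequential mode on (the file name is illustrative); as noted, pieces can still complete out of order:

```python
# Enable sequential download on a torrent via the libtorrent Python binding.
import libtorrent as lt

ses = lt.session()
info = lt.torrent_info("movie.torrent")                  # illustrative .torrent file
handle = ses.add_torrent({"ti": info, "save_path": "."})
handle.set_sequential_download(True)                     # request pieces in order
```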
set_piece_deadline() is a more flexible way to specify piece priority. It supports arbitrary range requests (as described by Jacob Zelek). Its main feature, though, is that it uses a different approach to requesting blocks. Instead of considering a peer at a time, and asking "what should I request from this peer", it considers a piece at a time, asking "which peer should I request this block from".
This makes it deliberately attempt to make pieces complete in the order of their deadlines. It is still an estimate based on historical download rates from peers, and if the bottleneck for download rates is your own download capacity, it may be very difficult to make predictions of future download rates for peers. A few important things to keep in mind when using the `set_piece_deadline()` API are:
It's not important that the deadline is in the future. If the deadline cannot be met given the current download or upload capacity, the pieces will be prioritized in the order they were asked to be completed.
If a deadline is far out in the future, libtorrent may wait to prioritize it until it believes it needs to request it to make the deadline. If you're streaming a large file and you know the bit-rate, you can set up deadlines for every piece, and if your capacity is higher than the bit-rate, you'll still request some pieces in rarest-first order. This improves swarm quality.
When streaming data, it's absolutely critical to read-ahead. If you don't set the deadline until you want the piece, you'll always fall behind. There's typically a pretty long round-trip between requesting a piece and completing it. If you don't keep the request pipe full of deadline-pieces, libtorrent will start requesting other pieces again, and you'll get non-prioritized pieces interleaved with your high-priority pieces. You should probably keep a few seconds and at least a few pieces as read-ahead. For video, I would imagine tens of megabytes is appropriate (but experimentation and measurement is the best way to tweak it).
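A hedged sketch of such a read-ahead window; READ_AHEAD and the 50 ms spacing are assumptions you would tune against the video bit-rate, not recommendations:

```python
# Keep a window of deadline pieces ahead of the playback position.
READ_AHEAD = 8                                   # assumed window size

def refresh_deadlines(handle, current_piece, num_pieces):
    have = handle.status().pieces                # bitfield of completed pieces
    for offset in range(READ_AHEAD):
        piece = current_piece + offset
        if piece < num_pieces and not have[piece]:
            # earlier pieces get earlier deadlines (milliseconds)
            handle.set_piece_deadline(piece, 50 * (offset + 1))
```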
If you are in fact looking to stream video to a player or web browser over HTTP, you may want to take a look at (or use and submit pull requests to):
https://github.com/arvidn/libtorrent-webui/blob/master/src/file_downloader.cpp
That's a file-downloader provider that fits into the simple HTTP framework in that repository.
UPDATE:
If all you want is to guarantee that piece 1 completes before piece 2 (at any cost, specifically very poor performance), you can set the priority of all pieces to 0, except for the one piece you want to download. Once it completes, you'll be notified by an alert and you can set the priority of the next piece you want to 1. And so on.
This will be incredibly slow, since you'll pause the download constantly, and be in constant end-game mode (where you may download the same block from multiple peers, if one is slow). For instance, if you have more peers than there are blocks in one piece, you will leave download bandwidth unused, by not being able to request from all peers.
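A sketch of that one-piece-at-a-time approach (slow by design) might look like the following; depending on your libtorrent version you may need get_torrent_info() instead of torrent_file() and pop_alert() instead of pop_alerts(), and the session's alert mask must include piece/progress alerts:

```python
# Zero all priorities, enable a single piece, wait for its piece_finished_alert,
# then move on to the next piece.
import time
import libtorrent as lt

def download_in_strict_order(ses, handle):
    num_pieces = handle.torrent_file().num_pieces()
    handle.prioritize_pieces([0] * num_pieces)       # request nothing at first
    for piece in range(num_pieces):
        handle.piece_priority(piece, 1)              # allow only this piece
        while True:
            alerts = ses.pop_alerts()
            if any(isinstance(a, lt.piece_finished_alert) and a.piece_index == piece
                   for a in alerts):
                break
            time.sleep(0.1)
```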
I've run into the same problem as you. Setting a torrent to sequential download means the pieces will be downloaded in a somewhat ordered fashion. This may be the intuitive solution for streaming. However, streaming video is more complicated than just downloading all the pieces in order.
Video files come in different containers (e.g. mkv, mp4, avi) and different codecs (h264, theora, etc.). Some codecs/containers store metadata/headers in different locations in a file. I can't remember off the top of my head, but a certain container/codec stores all header information at the end of the file. Such a file may not stream well if downloaded sequentially.
Unless you write the code for determining which pieces are needed to start streaming, you will have to rely on an existing mechanism. Take for example Peerflix, which spawns a browser video player, VLC, or MPlayer. These applications have a good idea of what byte ranges they need for various containers/codecs. When Peerflix launches VLC to play, let's say, an AVI file, VLC will attempt to read the first several bytes and last several bytes (headers).
The genius behind Peerflix is that it serves the video file through its own web server and therefore knows what byte ranges of the file VLC is seeking. It then determines which pieces the byte ranges fall into and prioritizes those pieces. Peerflix uses some Node.js BitTorrent library, whose exact piece prioritization mechanisms are unknown to me. However, in the case of libtorrent-rasterbar, the set_piece_deadline() function allows you to signal the library about which pieces you need. In my experience, once I determined the pieces needed, I would call set_piece_deadline() with a short deadline (50ms or so) and wait for their arrival. Please note that using set_piece_deadline() is incompatible with sequential downloads (just set sequential download to false).
One thing to note, libtorrent-rasterbar will not write the piece to the hard drive as soon as it gets it. This is a trap I fell into because I tried to read that byte range from the file when the piece arrived. For this you will need to run a thread to catch the alerts that libtorrent-rasterbar passes to your application. More specifically you will receive the raw binary data for that piece in a read_piece_alert.
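A hedged sketch of that Peerflix-style flow, mapping a requested byte range to pieces and reading the data back through alerts (torrent_file() may be get_torrent_info() on older versions, and the alert mask must include progress alerts):

```python
# Deadline the pieces covering a byte range, then fetch their data in memory
# via read_piece() once each piece finishes.
import libtorrent as lt

wanted = set()                                   # pieces we still need to read back

def prioritize_byte_range(handle, start_byte, end_byte):
    piece_len = handle.torrent_file().piece_length()
    for piece in range(start_byte // piece_len, end_byte // piece_len + 1):
        wanted.add(piece)
        handle.set_piece_deadline(piece, 50)     # short deadline, as described above

def on_alert(handle, alert):
    """Call this from the alert-handling thread mentioned above."""
    if isinstance(alert, lt.piece_finished_alert) and alert.piece_index in wanted:
        handle.read_piece(alert.piece_index)     # ask for the in-memory piece data
    elif isinstance(alert, lt.read_piece_alert):
        wanted.discard(alert.piece)
        return alert.piece, alert.buffer         # piece index and its raw bytes
```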
I'm about to scrape some 50,000 records of a real estate website (with Scrapy).
The programming has been done and tested, and the database properly designed.
But I want to be prepared for unexpected events.
So how do I go about actually running the scrape flawlessly and with minimal risk of failure and loss of time?
More specifically :
Should I carry it out in phases (scraping in smaller batches) ?
What and how should I log ?
Which other points of attention should I take into account before launching ?
First of all, study the following topics to have a general idea on how to be a good web-scraping citizen:
Web scraping etiquette
Screen scraping etiquette
In general, first, you need to make sure you are legally allowed to scrape this particular web-site and follow their Terms of Use. Also, check the web-site's robots.txt and respect the rules listed there (for example, there can be a Crawl-delay directive set). It would also be a good idea to contact the web-site's owners and let them know what you are going to do, or ask for permission.
Identify yourself by explicitly specifying a User-Agent header.
See also:
Is this Anti-Scraping technique viable with Robots.txt Crawl-Delay?
What will happen if I don't follow robots.txt while crawling?
Should I carry it out in phases (scraping in smaller batches) ?
This is what the DOWNLOAD_DELAY setting is about:
The amount of time (in secs) that the downloader should wait before
downloading consecutive pages from the same website. This can be used
to throttle the crawling speed to avoid hitting servers too hard.
CONCURRENT_REQUESTS_PER_DOMAIN and CONCURRENT_REQUESTS_PER_IP are also relevant.
Tweak these settings for not hitting the web-site servers too often.
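For example, a settings.py sketch pulling the points above together; the values are placeholders to tune per site, not recommendations:

```python
# Scrapy settings.py excerpt: identify yourself, respect robots.txt,
# and throttle requests per domain/IP.
USER_AGENT = "myestatecrawler (+http://www.example.com/contact)"   # illustrative
ROBOTSTXT_OBEY = True

DOWNLOAD_DELAY = 2.0                     # seconds between requests to the same site
CONCURRENT_REQUESTS_PER_DOMAIN = 2
CONCURRENT_REQUESTS_PER_IP = 2
```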
What and how should I log ?
The information that Scrapy puts on the console is pretty extensive, but you may want to log all the errors and exceptions being raised while crawling. I personally like the idea of listening for the spider_error signal to be fired, see:
how to process all kinds of exception in a scrapy project, in errback and callback?
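A small sketch of that, wiring the signal up inside the spider (the spider name is illustrative):

```python
# Log every spider_error raised while crawling.
import logging

from scrapy import Spider, signals

class EstateSpider(Spider):                      # illustrative spider
    name = "estate"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.on_error, signal=signals.spider_error)
        return spider

    def on_error(self, failure, response, spider):
        logging.error("Error on %s: %s", response.url, failure.getErrorMessage())
```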
Which other points of attention should I take into account before launching?
You still have several things to think about.
At some point, you may get banned. There is always a reason for this, the most obvious being that you are still crawling them too hard and they don't like it. There are certain techniques/tricks to avoid getting banned, like rotating IP addresses, using proxies, web-scraping in the cloud, etc., see:
Avoiding getting banned
Another thing to worry about might be the crawling speed and scaling; at this point you may want to think about distributing your crawling process. This is where scrapyd would help, see:
Distributed crawls
Still, make sure you are not crossing the line and are staying on the legal side.
I'd like to do the following:
the queries on a django site (first server) are sent to a second server (for performance and security reasons)
the query is processed on the second server using sqlite
the python search function has to keep a lot of data in memory. A simple CGI would always have to reread data from disk, which would further slow down the search process, so I guess I need some daemon to run on the second server.
the search process is slow, and I'd like to send partial results back and show them as they arrive.
This looks like a common task, but somehow I don't get it.
I tried Pyro first, which exposes the search class (and then I needed a workaround to avoid sqlite threading issues). I managed to get the complete search results onto the first server, but only as a whole. I don't know how to "yield" the results one by one (as generators cannot be pickled), and I wouldn't know how to write them one by one onto the search result page anyway.
I may need some "push technology", says this thread: https://stackoverflow.com/a/5346075/1389074, which talks about some different frameworks. But which one?
I don't seem to be searching for the right terms. Maybe someone can point me to some discussions or frameworks that address this task?
Thanks a lot in advance!
You can use Python Tornado WebSockets. This will allow you to establish a two-way connection from the client side to the server and return data as it arrives. Tornado is an async framework written in Python.
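A minimal sketch of that; run_search below is a stand-in for the real in-memory/sqlite search, and a production version would run it off the IO loop (e.g. in a thread) so it doesn't block other clients:

```python
# Push partial search results to the browser over a WebSocket as they are produced.
import json

import tornado.ioloop
import tornado.web
import tornado.websocket

def run_search(query):
    """Placeholder for the real search; yields partial results one by one."""
    for rank in range(3):
        yield {"query": query, "rank": rank}

class SearchSocket(tornado.websocket.WebSocketHandler):
    def on_message(self, message):
        query = json.loads(message)["query"]
        for partial in run_search(query):
            self.write_message(json.dumps(partial))      # send each result as it arrives
        self.write_message(json.dumps({"done": True}))

if __name__ == "__main__":
    app = tornado.web.Application([(r"/search", SearchSocket)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```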
I develop a client-server style, database based system and I need to devise a way to stress / load test the system. Customers inevitably want to know such things as:
• How many clients can a server support?
• How many concurrent searches can a server support?
• How much data can we store in the database?
• Etc.
Key to all these questions is response time. We need to be able to measure how response time and performance degrade as new load is introduced, so that we could, for example, produce some kind of pretty graph we could throw at clients to give them an idea of what kind of performance to expect with a given hardware configuration.
Right now we just put our fingers in the air and make educated guesses based on what we already know about the system from experience. As the product is put under more demanding conditions, though, this is proving to be inadequate for our needs going forward.
I've been given the task of devising a method to get such answers in a meaningful way. I realise that this is not a question that anyone can answer definitively but I'm looking for suggestions about how people have gone about doing such work on their own systems.
One thing to note is that we have full access to our client API via the Python language (courtesy of SWIG) which is a lot easier to work with than C++ for this kind of work.
So there we go, I throw this to the floor: really interested to see what ideas you guys can come up with!
Test 1: Connect and disconnect clients like mad, to see how well you handle session setup and teardown and how well your server survives load spikes. While doing this, measure how many clients fail to connect; that is very important.
Test 2: Connect clients and keep them logged on for, say, a week, doing random actions (fuzz test). Time the round-trip of each action. Also keep a record of the order of actions, because this way your "clients" will find loopholes in your use cases (very important, and VERY hard to test rationally).
Test 3 & 4: Determine the major use cases for your system and write up scripts that perform these tasks. Then run several clients doing the same task (test 3), and also several clients doing different tasks (test 4).
Series:
Now the other dimension you need here is the number of clients.
A nice series would be:
5,10,50,100,500,1000,5000,10000,...
This way you can get data for each series of tests with different workloads.
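A rough stub-client sketch of running that series; HOST/PORT and the PING action are placeholders, and with your SWIG bindings you would call the client API instead of a raw socket:

```python
# For each client count in the series, spin up stub clients, count connection
# failures and measure a round-trip time.
import socket
import time
from concurrent.futures import ThreadPoolExecutor

HOST, PORT = "testserver.example", 9000          # placeholders
SERIES = [5, 10, 50, 100, 500, 1000]

def one_client(_):
    start = time.time()
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            sock.sendall(b"PING\n")              # replace with a real client action
            sock.recv(1024)
        return time.time() - start, True
    except OSError:
        return None, False

for n in SERIES:
    with ThreadPoolExecutor(max_workers=n) as pool:
        results = list(pool.map(one_client, range(n)))
    failed = sum(1 for _, ok in results if not ok)
    times = [t for t, ok in results if ok]
    avg = sum(times) / len(times) if times else float("nan")
    print("%5d clients: %d failed, avg round-trip %.3fs" % (n, failed, avg))
```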
Also, congrats on SWIGing your client API to Python! That is a great way to get things ready.
Note: IBM has a sample of fuzz testing in Java, which is unrelated to your case but will help you design a good fuzz test for your system.
If you are comfortable coding tests in Python, I've found funkload to be very capable. You don't say your server is http-based, so you may have to adapt their test facilities to your own client/server style.
Once you have a test in Python, funkload can run it on many threads, monitoring response times, and summarizing for you at the end of the test.
For performance you are looking at two things: latency (the responsiveness of the application) and throughput (how many ops per interval). For latency you need to have an acceptable benchmark. For throughput you need to have a minimum acceptable throughput.
These are your starting points. To tell a client how many xyz's you can do per interval, you are going to need to know the hardware and software configuration. Knowing the production hardware is important to getting accurate figures. If you do not know the hardware configuration, then you need to devise a way to map your figures from the test hardware to the eventual production hardware.
Without knowledge of the hardware, you can really only observe trends in performance over time rather than absolutes.
Knowing the software configuration is equally important. Do you have a clustered server configuration? Is it load balanced? Is there anything else running on the server? Can you scale your software, or do you have to scale the hardware to meet demand?
To know how many clients you can support, you need to understand what a standard set of operations looks like. A quick test is to remove the client, write a stub client, and then spin up as many of these as you can. Have each one connect to the server. You will eventually reach the server's connection resource limit. Without connection pooling or better hardware you can't get higher than this. Often you will hit an architectural issue before that point, but in either case you have an upper bound.
Take this information and design a script that your client can enact. You need to map how long your script takes to perform the action with respect to how long it will take the expected user to do it. Start increasing your numbers as mentioned above until you hit the point where the increase in clients causes a greater decrease in performance.
There are many ways to stress test but the key is understanding expected load. Ask your client about their expectations. What is the expected demand per interval? From there you can work out upper loads.
You can do a soak test with many clients operating continuously for many hours or days. You can also try to connect as many clients as you can, as fast as you can, to see how well your server handles high demand (essentially a DoS attack).
Concurrent searches should be done through your standard behaviour scripts acting on behalf of the client, or you can write a script that establishes a semaphore on which many threads wait, then release them all at once. This is fun and punishes your database. When performing searches you need to take into account any caching layers that may exist. You need to test both with and without caching (in scenarios where everyone makes unique search requests).
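A small sketch of the "release them all at once" idea using a Barrier; run_search is a stand-in for the real client API search call:

```python
# All search threads block on the barrier, then fire at the same moment.
import threading
import time

N_THREADS = 50
barrier = threading.Barrier(N_THREADS)
timings = []

def run_search(query):
    time.sleep(0.1)                              # stand-in for the real search call

def worker(query):
    barrier.wait()                               # everyone starts searching together
    start = time.time()
    run_search(query)
    timings.append(time.time() - start)

threads = [threading.Thread(target=worker, args=("query-%d" % i,))
           for i in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("worst search time: %.3fs" % max(timings))
```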
Database storage is based on physical space; you can determine row size from the field lengths and expected data population. Extrapolate this out statistically or create a data generation script (useful for your load testing scenarios and should be an asset to your organisation) and then map the generated data to business objects. Your clients will care about how many "business objects" they can store while you will care about how much raw data can be stored.
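A tiny data-generation sketch along those lines; the schema and field sizes are entirely illustrative:

```python
# Generate plausible rows into sqlite and extrapolate storage per "business object".
import random
import sqlite3
import string

def random_row(i):
    name = "".join(random.choices(string.ascii_letters, k=32))
    return (i, name, random.uniform(0, 1e6))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE objects (id INTEGER, name TEXT, value REAL)")
conn.executemany("INSERT INTO objects VALUES (?, ?, ?)",
                 (random_row(i) for i in range(10000)))
conn.commit()

page_size = conn.execute("PRAGMA page_size").fetchone()[0]
page_count = conn.execute("PRAGMA page_count").fetchone()[0]
bytes_per_object = page_size * page_count / 10000
print("estimated MB for 1M objects: %.1f" % (bytes_per_object * 1e6 / 2**20))
```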
Other things to consider: What is the expected availability? How long does it take to bring a server back online? 99.9% availability is not good if it takes two days to bring the server back online the one time it does go down. On the flip side, lower availability is more acceptable if it takes 5 seconds to reboot and you have failover.
If you have the budget, LoadRunner would be perfect for this.
On a related note: Twitter recently open-sourced their load-testing framework. Could be worth a go :)