Python & web scraping performance - python

I am trying to do some Python-based web scraping where execution time is pretty critical.
I've tried PhantomJS, Selenium, and PyQt4 now, and all three libraries have given me similar response times. I'd post example code, but my problem affects all three, so I believe the problem lies either in a shared dependency or outside of my code. At around 50 concurrent requests, we see a huge degradation in response time. It takes about 40 seconds to get back all 50 pages, and that time gets dramatically worse with greater page demands. Ideally I'm looking for ~200+ requests in about 10 seconds. I used multiprocessing to spawn each instance of PhantomJS/PyQt4/Selenium, so each URL request gets its own instance and I'm not blocked by single threading.
I don't believe it's a hardware bottleneck: it's running on 32 dedicated CPU cores (64 threads), and CPU usage doesn't typically spike above 10-12%. Bandwidth also sits comfortably at around 40-50% of my total throughput.
I've read about the GIL, which I believe I've addressed by using multiprocessing. Is web scraping just an inherently slow thing? Should I stop expecting to pull 200-ish web pages in ~10 seconds?
My overall question is, what is the best approach to high performance web scraping, where evaluating js on the webpage is a requirement?
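
For reference, a minimal sketch of the pattern described above (one process, and therefore one browser instance, per URL), assuming Selenium with headless Chrome stands in for the PhantomJS/PyQt4 variants; the URLs and the pool size are placeholders, since the original code was not posted.

from multiprocessing import Pool
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def fetch(url):
    opts = Options()
    opts.add_argument("--headless")
    driver = webdriver.Chrome(options=opts)  # each process pays the full browser start-up cost
    try:
        driver.get(url)                      # the page's JavaScript runs inside the browser here
        return url, driver.page_source
    finally:
        driver.quit()

if __name__ == "__main__":
    urls = ["https://example.com/page/%d" % i for i in range(50)]  # placeholders
    with Pool(50) as pool:                   # one worker process per URL, as described above
        results = pool.map(fetch, urls)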

"evaluating js on the webpage is a requirement" <- I think this is your problem right here. Simply downloading 50 web pages is fairly trivially parallelized and should only take as long as the slowest server takes to respond.
Now, spawning 50 JavaScript engines in parallel (which is essentially what I guess you are doing) to run the scripts on every page is a different matter. Imagine firing up 50 Chrome browsers at the same time.
Anyway: profile and measure the parts of your application to find where the bottleneck lies. Only then can you see whether you're dealing with an I/O bottleneck (sounds unlikely), a CPU bottleneck (more likely) or a global lock somewhere that serializes everything (also likely, but impossible to say without any code posted).
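
One quick way to test that split (a sketch with placeholder URLs): time a plain-HTTP fetch of the same 50 pages with no JavaScript engine involved. If this finishes in a few seconds, the time is going into browser start-up and JS evaluation, not into I/O.

import time
from concurrent.futures import ThreadPoolExecutor
import requests

def fetch(url):
    # Plain download only; no JavaScript is executed.
    return requests.get(url, timeout=30).status_code

urls = ["https://example.com/page/%d" % i for i in range(50)]  # placeholder URLs
start = time.time()
with ThreadPoolExecutor(max_workers=50) as pool:
    statuses = list(pool.map(fetch, urls))
print("Fetched %d pages in %.1fs" % (len(statuses), time.time() - start))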

Related

Python Selenium Web Scraping with Same URL but Multiple Instances (Multiprocessing?)

Running a Python Selenium script using ChromeDriver that goes to one (and only one) URL, enters data into the form, then scrapes the results. I would like to do this in parallel by entering different data into the same URL's form and getting the different results.
In researching multiprocessing and multithreading I have found that multithreading is best for I/O-bound tasks and multiprocessing is best for CPU-bound tasks.
The overall amount of data I'm scraping is small (select text only), so I don't believe it is I/O bound. Does this sound correct? From what I've gathered, web scrapers are in general I/O intensive; maybe my scenario is just an exception?
Running my current (sequential, non-parallel) script, Resource Monitor shows Chrome's CPU usage ramp up, and across all four cores. So is Chrome using multiprocessing by default, and is the advantage of multiprocessing within Python really just in being able to apply the script's function to each Chrome instance? Maybe I've got this all wrong...
Also, is a script that opens multiple URLs at once and interacts with them inherently CPU bound due to the fact that it runs a lot of Chrome instances? Assuming the data scraped is small, and ignoring headless for now.
Image attached of CPU usage; the spike in the middle (across all 4 CPUs) is when Chrome is launched.
Any comments or advice appreciated, including any pseudo code on how you might implement something like this. I didn't share the base code because the question is more about the structure of all this.
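
Since pseudo code was invited, here is a rough multiprocessing sketch of one possible structure. The URL, form field name, and result selector are hypothetical placeholders, and one ChromeDriver instance is started per worker process.

from multiprocessing import Pool
from selenium import webdriver
from selenium.webdriver.common.by import By

FORM_URL = "https://example.com/form"          # hypothetical URL
INPUTS = ["alpha", "beta", "gamma", "delta"]   # the different data to submit

def submit_and_scrape(value):
    driver = webdriver.Chrome()                # one ChromeDriver per worker process
    try:
        driver.get(FORM_URL)
        field = driver.find_element(By.NAME, "query")        # hypothetical field name
        field.send_keys(value)
        field.submit()
        result = driver.find_element(By.ID, "result").text   # hypothetical selector
        return value, result
    finally:
        driver.quit()

if __name__ == "__main__":
    with Pool(4) as pool:                      # roughly one worker per core
        for value, result in pool.map(submit_and_scrape, INPUTS):
            print(value, result)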

How to run multithreaded Python scripts

I wrote a Python web scraper yesterday and ran it in my terminal overnight. It only got through 50k pages, so now I just have a bunch of terminals open, concurrently running the script with different start and end points. This works fine because the main lag is obviously opening web pages and not actual CPU load. Is there a more elegant way to do this, especially if it can be done locally?
You have an I/O bound process, so to speed it up you will need to send requests concurrently. This doesn't necessarily require multiple processors, you just need to avoid waiting until one request is done before sending the next.
There are a number of solutions for this problem. Take a look at this blog post or check out gevent, asyncio (backports to pre-3.4 versions of Python should be available) or another async IO library.
However, when scraping other sites, you must remember: you can send requests very fast with concurrent programming, but depending on what site you are scraping, this may be very rude. You could easily bring a small site serving dynamic content down entirely, forcing the administrators to block you. Respect robots.txt, try to spread your efforts between multiple servers at once rather than focusing your entire bandwidth on a single server, and carefully throttle your requests to single servers unless you're sure you don't need to.
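
As an illustration of that advice, here is a small asyncio sketch using aiohttp (one common choice; the answer does not prescribe a particular library). The semaphore is the throttle that keeps the request rate polite, and the URLs are placeholders.

import asyncio
import aiohttp

async def fetch(session, semaphore, url):
    async with semaphore:                      # caps how many requests are in flight at once
        async with session.get(url) as resp:
            return url, resp.status, await resp.text()

async def main(urls, limit=10):
    semaphore = asyncio.Semaphore(limit)       # keep this low per target server
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch(session, semaphore, u) for u in urls))

if __name__ == "__main__":
    urls = ["https://example.com/page/%d" % i for i in range(100)]  # placeholders
    results = asyncio.run(main(urls))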

Extreme django performance issues after launching

We have recently launched a Django site which, amongst other things, has a screen representing all sorts of data. A request to the server is sent every 10 seconds to get new data. The average response size is 10 KB.
The site is serving approximately 30 clients, meaning every client sends a GET request every 10 seconds.
When testing locally, responses came back after 80 ms. After deployment with ~30 users, we're seeing response times of up to 20 seconds!
So the initial thought is that my code sucks. I went through all my queries and did everything I could to optimize them and reduce calls to the database (which was hard; nearly everything is something like object.filter(id=num), and my tables have fewer than 5k rows at the moment...).
But then I noticed the same issue occurs in the admin panel, which is clearly optimized and doesn't contain my perhaps inefficient code, since I didn't write it. Opening the users tab takes 30 seconds on certain requests!
So, what is it? Do I argue with the company sysadmins and demand a better server? They say we don't need better hardware (running on a dual-core 2.67 GHz machine with 4 GB RAM, which isn't a lot, but still shouldn't be THAT slow).
Doesn't the fact that the admin site is slow imply that this is a hardware issue?
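
One way to check whether the database is really the culprit before arguing about hardware is to count and time the queries behind a single slow request. Below is a sketch using Django's test client and CaptureQueriesContext, intended to be run from something like manage.py shell; the /dashboard/data/ endpoint is a placeholder for the slow view.

from django.db import connection
from django.test import Client
from django.test.utils import CaptureQueriesContext

client = Client()
with CaptureQueriesContext(connection) as ctx:
    response = client.get("/dashboard/data/")   # placeholder URL for the slow view

total = sum(float(q["time"]) for q in ctx.captured_queries)
print("status %s: %d queries, %.3fs spent in the database" %
      (response.status_code, len(ctx.captured_queries), total))
# If the query count and time are small while the deployed response still takes
# seconds, the slowdown is happening outside the ORM queries themselves.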

Diagnosing HTTP Request Speed

I am making HEAD requests on anywhere between 100,000 and 500,000 URLs to return the size and the HTTP status code. I have tried four different methods: a thread pool, an asynchronous Twisted client, a grequests implementation, and a concurrent.futures based solution. In a previous question similar to this one, the thread pool implementation is said to finish in 6 to 10 minutes. Trying the exact code and feeding it a dummy list of 100,000 URLs takes over 4 hours on my machine. My Twisted solution (different from the one mentioned in the linked question) similarly takes around 3.5 hours to complete, the same with the concurrent.futures solution.
I am relatively confident I have written the implementations correctly, especially in the case of copying and pasting the code from a previous example. How can I diagnose where the slowdown is occurring? My guess is that it is when making the connection, but I have no idea how to prove this or fix it if it is a problem. I am pretty certain it is not a CPU-bound problem, as the CPU time after 100,000 URLs is only 3 minutes. Any help in figuring out how to diagnose the issue, and in turn fix it, would be greatly appreciated.
Some more information:
Using Requests to make the requests, or treq with Twisted.
Appending the results to a list (with the garbage collector off) or a pandas dataframe does not seem to make a speed difference.
I have experimented with anywhere between 4 and 200 workers/threads in my various tests, and 15 seems to be optimal.
The machine I am using has 16 cores and a high-speed (100 Mbps) internet connection.
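
One way to diagnose this is to record per-request timing, so slow hosts, DNS stalls and timeouts show up as outliers instead of disappearing into the average. Below is a sketch using a thread pool and Requests, with placeholder URLs and the 15 workers mentioned above.

import time
from concurrent.futures import ThreadPoolExecutor
import requests

def head(url):
    start = time.time()
    try:
        resp = requests.head(url, timeout=10, allow_redirects=True)
        return url, resp.status_code, resp.headers.get("Content-Length"), time.time() - start
    except requests.RequestException:
        return url, None, None, time.time() - start

urls = ["http://example.com/%d" % i for i in range(1000)]  # placeholder list
with ThreadPoolExecutor(max_workers=15) as pool:
    results = list(pool.map(head, urls))

# Print the 20 slowest requests; a cluster of near-timeout entries points at
# slow or dead hosts rather than at the client-side implementation.
for url, status, size, elapsed in sorted(results, key=lambda r: r[-1], reverse=True)[:20]:
    print("%6.2fs  %s  %s  %s" % (elapsed, status, size, url))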

How do I improve scrapy's download speed?

I'm using scrapy to download pages from many different domains in parallel. I have hundreds of thousands of pages to download, so performance is important.
Unfortunately, when I profile Scrapy's speed, I'm only getting a couple of pages per second; really, about 2 pages per second on average. I've previously written my own multithreaded spiders that do hundreds of pages per second, so I thought for sure Scrapy's use of Twisted, etc. would be capable of similar magic.
How do I speed scrapy up? I really like the framework, but this performance issue could be a deal-breaker for me.
Here's the relevant part of the settings.py file. Is there some important setting I've missed?
LOG_ENABLED = False
CONCURRENT_REQUESTS = 100
CONCURRENT_REQUESTS_PER_IP = 8
A few parameters:
Using scrapy version 0.14
The project is deployed on an EC2 large instance, so there should be plenty of memory, CPU, and bandwidth to play with.
I'm scheduling crawls using the JSON protocol, keeping the crawler topped up with a few dozen concurrent crawls at any given time.
As I said at the beginning, I'm downloading pages from many sites, so remote server performance and CONCURRENT_REQUESTS_PER_IP shouldn't be a worry.
For the moment, I'm doing very little post-processing. No xpath; no regex; I'm just saving the url and a few basic statistics for each page. (This will change later once I get the basic performance kinks worked out.)
I had this problem in the past...
And a large part of it I solved with a 'dirty' old trick:
run a local caching DNS server.
Mostly, when you see this high CPU usage while hitting many remote sites simultaneously, it is because Scrapy is spending its time resolving the URLs' hostnames.
And please remember to change your DNS settings on the host (/etc/resolv.conf) to point at your LOCAL caching DNS server.
The first requests will be slow, but as soon as it starts caching and resolving becomes more efficient, you are going to see HUGE improvements.
I hope this helps with your problem!
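
As a settings-level sketch of the same idea: the DNS-related options below exist in later Scrapy releases and may not all be available in 0.14, where an OS-level caching resolver (e.g. dnsmasq, pointed to by /etc/resolv.conf) is the dependable route.

LOG_ENABLED = False
CONCURRENT_REQUESTS = 100
CONCURRENT_REQUESTS_PER_IP = 8

DNSCACHE_ENABLED = True            # cache resolved hostnames inside Scrapy
DNSCACHE_SIZE = 10000              # room for many distinct domains
DNS_TIMEOUT = 10                   # fail fast on unresponsive resolvers
REACTOR_THREADPOOL_MAXSIZE = 20    # DNS lookups run in this thread pool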
