Why does a page keep loading in the loop? - python

In Python (Selenium):

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://www.baidu.com")
for keywords in open('klist', 'r'):
    driver.get("https://www.baidu.com")
    driver.find_element_by_class_name('...').click()
    ...
Although the whole page appears, it just hangs and keeps loading, so a lot of time is wasted.
It doesn't freeze every time, but when it does, it can hang for several minutes before moving on to the next step.

I guess it hangs because some resource loads slowly. You can emulate this behavior manually by throttling the bandwidth in the Network tab of Chrome's developer tools.
To find exactly which resource is causing the problem, in case it isn't reproducible by hand, you can use a proxy such as Fiddler, BrowserMob, or whatever your favorite proxy is.
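For example, a rough sketch using the browsermob-proxy Python bindings to record a HAR and spot the slow request might look like this (the binary path is a placeholder, and the Chrome setup is assumed from the question):

from browsermobproxy import Server
from selenium import webdriver

server = Server("/path/to/browsermob-proxy")  # path to the BrowserMob binary (placeholder)
server.start()
proxy = server.create_proxy()

options = webdriver.ChromeOptions()
options.add_argument("--proxy-server={}".format(proxy.proxy))
driver = webdriver.Chrome(chrome_options=options)

proxy.new_har("baidu")  # start recording a HAR for this page load
driver.get("https://www.baidu.com")
for entry in proxy.har["log"]["entries"]:
    print(entry["time"], entry["request"]["url"])  # per-request load time in ms

driver.quit()
server.stop()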

Related

Python Selenium Web Scraping with Same URL but Multiple Instances (Multiprocessing?)

I'm running a Python Selenium script using ChromeDriver that goes to one (and only one) URL, enters data into a form, then scrapes the results. I would like to do this in parallel by entering different data into the same URL's form and getting the different results.
In researching multiprocessing and multithreading, I have found that multithreading is best for I/O-bound tasks and multiprocessing best for CPU-bound tasks.
The overall amount of data I'm scraping is small (select text only), so I don't believe it's I/O bound? Does this sound correct? From what I've gathered, web scrapers are generally I/O intensive; maybe my scenario is just an exception?
Running my current (sequential, non-parallel) script, Resource Monitor shows the Chrome instance's CPU usage ramp up AND across all (4) cores. So is Chrome using multiprocessing by default, and is the advantage of multiprocessing within Python really just being able to apply the script's function to each Chrome instance? Maybe I've got this all wrong...
Also, is a script that wants to open multiple URLs at once and interact with them inherently CPU bound due to the fact that it runs a lot of Chrome instances? Assuming the data scraped is small. Ignoring headless for now.
Image attached of CPU usage; the spike in the middle (across all 4 CPUs) is when Chrome is launched.
Any comments or advice appreciated, including any pseudo code on how you might implement something like this. I didn't share my base code; the question is more about the structure of all this.
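Since no answer is shown here, a minimal sketch of one common structure: each worker owns its own ChromeDriver instance, and because the Python side mostly waits on the browser (I/O bound from Python's point of view), a thread pool is usually enough. The URL, element ids, and helper below are placeholders, not the asker's code:

from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver

FORM_URL = "https://example.com/form"  # placeholder

def scrape_one(payload):
    driver = webdriver.Chrome()  # one browser per worker; never share drivers
    try:
        driver.get(FORM_URL)
        box = driver.find_element_by_id("query")  # hypothetical field id
        box.send_keys(payload)
        box.submit()
        return driver.find_element_by_id("result").text  # hypothetical result id
    finally:
        driver.quit()

inputs = ["data1", "data2", "data3", "data4"]
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(scrape_one, inputs))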

Python Selenium invisible RAM usage until it hits 99% over time

EDIT: Fixed after implementing page refreshes instead of opening new browsers.
I believe code isn't needed for this question, and since it's way too long I'll try to explain the problem without it.
I made a Selenium bot that checks a website for freshly added content. It simply starts chromedriver, checks the necessary page, quits the driver via driver.quit(), and repeats forever in a loop. But after about 24 hours, RAM usage climbs above 95% and simply locks the computer. Yet in Task Manager, nothing is shown using the RAM. RAM-cleaning programs can't get rid of the invisible RAM usage either.
What I am wondering is: is it because of my unlimited number of browser starts and stops, even though I am closing the browser with driver.quit()? Or do I need to change the code to refresh the page instead of turning the browser completely off and on all the time?
Just looking for ideas, not code. Thanks a lot.
Edit: The computer in the photo has 6 GB of RAM, but it's at 93% when nothing is using it. It becomes like that over time. When the computer is restarted, RAM usage is at normal levels, like any idle computer. The app on top is my bot.
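A minimal sketch of the refresh-based fix from the EDIT, with all URLs and selectors hypothetical: one browser is started once and reused, instead of being torn down and relaunched each cycle.

import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com/feed")  # placeholder URL
seen = set()
while True:
    driver.refresh()  # reuse the same browser instead of quit()/relaunch
    for item in driver.find_elements_by_css_selector(".post"):  # hypothetical selector
        key = item.get_attribute("data-id") or item.text  # hypothetical dedupe key
        if key not in seen:
            seen.add(key)
            # handle the freshly added content here
    time.sleep(60)  # poll once a minute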

Selenium using too much RAM with Firefox

I am using Selenium with Firefox to automate some tasks on Instagram. It basically goes back and forth between user profiles and the notifications page and does tasks based on what it finds.
It has one infinite loop that keeps the task going. I have sleep() calls every few steps, but the memory usage keeps increasing. I have something like this in Python:
while True:
    expected_conditions()
    ...doTask()
    driver.back()
    expected_conditions()
    ...doAnotherTask()
    driver.forward()
    expected_conditions()
I never close the driver because that would slow the program down a lot, as it has a lot of queries to process. Is there any way to keep the memory usage from increasing over time without closing or quitting the driver?
EDIT: Added explicit conditions, but that did not help either. I am using Firefox in headless mode.
Well, this is a serious problem I've been going through for some days, but I have found a solution. You can add some flags to optimize your memory usage:
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("start-maximized")
options.add_argument("disable-infobars")
options.add_argument("--disable-extensions")
options.add_argument('--no-sandbox')
options.add_argument('--disable-application-cache')
options.add_argument('--disable-gpu')
options.add_argument("--disable-dev-shm-usage")
These are the flags I added. Before I added them, RAM usage kept increasing; once it crossed 4 GB (my machine has 8 GB), the machine got stuck. After I added these flags, memory usage didn't cross 500 MB. And as DebanjanB's answer says, if you are running a for or while loop, try putting a few seconds of sleep after each iteration to give the system time to kill unused threads.
To start with, Selenium has very little control over the amount of RAM used by Firefox. As you mentioned, the browser client (i.e., Mozilla Firefox) goes back and forth between user profiles and the notifications page on Instagram and performs tasks based on what it finds, which is too broad for a single use case. So the first and foremost task would be to break the infinite loop for your use case up into smaller tests.
time.sleep()
Inducing time.sleep() virtually puts a blanket over the underlying issue. However, when using Selenium and WebDriver to execute tests through your automation framework, using time.sleep() without a specific condition defeats the purpose of automation and should be avoided at all costs. As per the documentation:
time.sleep(secs) suspends the execution of the current thread for the given number of seconds. The argument may be a floating point number to indicate a more precise sleep time. The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal’s catching routine. Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.
You can find a detailed discussion in How to sleep webdriver in python for milliseconds
Analysis
There have been previous instances when Firefox consumed about 80% of the RAM.
However, as per this discussion, some users feel that the more memory is used the better, because it means no RAM is wasted; Firefox uses RAM to make its processes faster, since application data is transferred much faster in RAM.
Solution
You can implement any or all of the following generic/specific steps:
Upgrade Selenium to the current level, version 3.141.59.
Upgrade GeckoDriver to the GeckoDriver v0.24.0 level.
Upgrade Firefox to the Firefox v65.0.2 level.
Clean your Project Workspace through your IDE and Rebuild your project with required dependencies only.
If your base web client version is too old, uninstall it and install a recent GA (generally available) release of the web client.
Some extensions allow you to block such unnecessary content, as an example:
uBlock Origin allows you to hide ads on websites.
NoScript allows you to selectively enable and disable all scripts running on websites.
To open the Firefox client with an extension, download the extension (i.e., the XPI file) from https://addons.mozilla.org and use the add_extension(extension='webdriver.xpi') method to add the extension to a FirefoxProfile as follows:
from selenium import webdriver
profile = webdriver.FirefoxProfile()
profile.add_extension(extension='extension_name.xpi')
driver = webdriver.Firefox(firefox_profile=profile, executable_path=r'C:\path\to\geckodriver.exe')
If your tests don't require CSS, you can disable it following this discussion.
Use Explicit Waits or Implicit Waits (see the sketch after this list).
Use driver.quit() to close all the browser windows and terminate the WebDriver session, because if you do not call quit() at the end of the program, the WebDriver session will not be closed properly and its files will not be cleared from memory. This may result in memory leak errors.
Creating a new Firefox profile and reusing it every time you run your test cases in Firefox will eventually improve execution performance. Without it, a new profile is created on every run and caching information accumulates there; and if driver.quit() somehow does not get called before a failure, you end up with yet another new profile full of cached information each time, which consumes memory.
// ------------ Creating a new firefox profile -------------------
1. If Firefox is open, close Firefox.
2. Press Windows +R on the keyboard. A Run dialog will open.
3. In the Run dialog box, type in firefox.exe -P
Note: You can use -P or -ProfileManager(either one should work).
4. Click OK.
5. Create a new profile and set its location to the RAM drive.
// ----------- Associating Firefox profile -------------------
ProfilesIni profile = new ProfilesIni();
FirefoxProfile myprofile = profile.getProfile("automation_profile");
WebDriver driver = new FirefoxDriver(myprofile);
Please share the execution performance with the community if you plan to implement it this way.
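For the "Explicit Waits" step above, a minimal sketch replacing a blanket time.sleep() (the locator is a placeholder):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
driver.get("https://www.instagram.com/")
# Wait up to 10 seconds, but return as soon as the element is clickable
element = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.CSS_SELECTOR, "a.notification"))  # hypothetical locator
)
element.click()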
There is no fix for that as of now.
I suggest you use the driver.close() approach.
I was also struggling with the RAM issue, and what I did was count the number of loops; when the loop count reached a certain number (for me it was 200), I called driver.close(), started the driver back up, and reset the count.
This way I did not need to close the driver on every iteration of the loop, and it has less effect on performance too.
Try this; maybe it will help in your case too.
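A sketch of that counter-based restart (the per-iteration work is a placeholder; driver.quit() is used here so the whole session, not just the window, is torn down):

from selenium import webdriver

def do_task(driver):
    pass  # your per-iteration work (placeholder)

driver = webdriver.Firefox()
count = 0
while True:
    do_task(driver)
    count += 1
    if count >= 200:  # the threshold used above
        driver.quit()  # release the old browser's memory
        driver = webdriver.Firefox()
        count = 0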

Time out issues with chrome and flask

I have a web application which acts as an interface to an offsite server which runs a very long task. The user enters information and hits submit, and Chrome then waits for the response, loading a new webpage when it receives it. However, depending on the network and the user's input, the task can take a pretty long time, and occasionally Chrome loads a "no data received" page before the data is returned (even though the task is still running).
Is there a way to put up a temporary page while my task is thinking, or simply force Chrome to continue waiting? Thanks in advance.
While you could change your timeout on the server or use other tricks to try to keep the page "alive", keep in mind that there may be parts of the connection you have no control over that could time out the request (such as the timeout value of the browser, or any proxy between the browser and server). Also, you might need to keep raising your timeout value as the task takes longer to complete (because it becomes more advanced, or just slower as more people use it).
In the end, this sort of problem is typically solved by a change in your architecture.
Use a Separate Process for Long-Running Tasks
Rather than submitting the request and running the task in the handling view, the view starts the running of the task in a separate process, then immediately returns a response. This response can bring the user to a "Please wait, we're processing" page. That page can use one of the many push technologies out there to determine when the task was completed (long-polling, web-sockets, server-sent events, an AJAX request every N seconds, or the dead-simplest: have the page reload every 5 seconds).
Have your Web Request "Kick Off" the Separate Process
Anyway, as I said, the view handling the request doesn't do the long action: it just kicks off a background process to do the task for it. You can create this background-process dispatch yourself (check out this Flask snippet for possible ideas), or use a library like Celery or RQ.
Once the task is complete, you need some way of notifying the user. This will be dependent on what sort of notification method you picked above. For a simple "ajax request every N seconds", you need to create a view that handles the AJAX request that checks if the task is complete. A typical way to do this is to have the long-running task, as a last step, make some update to a database. The requests for checking the status can then check this part of the database for updates.
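A minimal sketch of this pattern (all names are assumptions; a plain thread and an in-memory dict stand in for a real queue like Celery/RQ and the database mentioned above):

import threading, uuid
from flask import Flask, jsonify, request

app = Flask(__name__)
PENDING = object()
results = {}  # task_id -> result; stand-in for the database mentioned above

def long_task(task_id, form_data):
    # ... the slow offsite work goes here ...
    results[task_id] = "finished"

@app.route("/submit", methods=["POST"])
def submit():
    task_id = str(uuid.uuid4())
    results[task_id] = PENDING
    threading.Thread(target=long_task, args=(task_id, dict(request.form))).start()
    # return immediately with a "please wait" page that polls /status/<task_id>
    return "Processing... poll /status/%s" % task_id, 202

@app.route("/status/<task_id>")
def status(task_id):
    # the "AJAX request every N seconds" check described above
    return jsonify(done=results.get(task_id) is not PENDING)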
Advantages and Disadvantages
Using this method (rather than trying to fit the long-running task into a request) has a few benefits:
1.) Handling long-running web requests is a tricky business, since there are multiple points that could time out (besides the browser and server). With this method, all your web requests are very short and much less likely to time out.
2.) Flask (and other frameworks like it) is designed to support only a certain number of threads for responding to web queries. Assume it has 8 threads: if four of them are handling long requests, that leaves only four to handle more typical requests (like a user getting their profile page). Half of your web server could be tied up doing something that is not serving web content! At worst, you could have all eight threads running a long process, meaning your site is completely unable to respond to web requests until one of them finishes.
The main drawback: there is a little more set up work in getting a task queue up and running, and it does make your entire system slightly more complex. However, I would highly recommend this strategy for long-running tasks that run on the web.
I believe this is due to your web server (Apache in most cases) having a timeout that is too small. Try increasing this value.
For Apache, have a look at the timeout option.
EDIT: I don't think you can set this timeout in Chrome (see this topic on the Google forums, even though it's really old).
In Firefox, on the about:config page, type "timeout" and you'll see some options you can set. I have no idea about Internet Explorer.
Let's assume:
This is not a server issue, so we don't have to go fiddle with Apache, nginx, etc. timeout settings.
The delay is minutes, not hours or days, just to make the scenario manageable.
You control the web page on which the user hits submit, and from which user interaction is managed.
If those hold, I'd suggest not using a standard HTML form submission, but rather having the submit button kick off a JavaScript function to oversee processing. It would put up a "please be patient...this could take a little while" style message, then use jQuery.ajax, say, to call the long-time-taking server with a long timeout value. jQuery timeouts are measured in milliseconds, so 60000 = 60 seconds. If it's longer than that, increase your specified timeout accordingly. I have seen reports that not all clients will allow super-extra-long timeouts (e.g., Safari on iOS apparently has a 60-second limitation), but in general this will give you a platform from which to manage the interactions (with your user, and with the slow server) rather than being at the mercy of simple web form submission.
There are a few edge cases to consider here. The web server timeouts may indeed need to be adjusted upward (Apache defaults to 300 seconds, aka 5 minutes, and nginx to less, IIRC). Your client timeouts (on iOS, say) may have maximums too low for the delays you're seeing. Those cases would require either adjusting the server or adopting a different interaction strategy. But an AJAX-managed interaction is where I would start.

Set up a real timeout for loading page in Selenium WebDriver?

I'm testing a site with lots of proxies, and the problem is some of those proxies are awfully slow. Therefore my code is stuck at loading pages every now and then.
from selenium import webdriver
browser = webdriver.Firefox()
browser.get("http://example.com/example-page.php")
element = browser.find_element_by_id("someElement")
I've tried lots of things like explicit waits and implicit waits and have been searching around for quite a while, but I still haven't found a solution or workaround. Nothing seems to affect the page-loading line browser.get("http://example.com/example-page.php"), and that's why it's always stuck there.
Anybody got a solution for this?
Update 1:
JimEvans' answer solved my previous problem, and here you can find the Python patch for this new feature.
New problem:
browser = webdriver.Firefox()
browser.set_page_load_timeout(30)
browser.get("http://example.com/example-page.php")
element = browser.find_element_by_id("elementA")
element.click() ## assume it's a link to a new page http://example.com/another-example.php
another_element = browser.find_element_by_id("another_element")
As you can see, browser.set_page_load_timeout(30) only affects browser.get("http://example.com/example-page.php"): if that page takes more than 30 seconds to load, a timeout exception is thrown. But it has no power over page loads triggered by element.click(). Although click() does not block until the new page loads entirely, another_element = browser.find_element_by_id("another_element") is the new pain in the ass, because both explicit and implicit waits wait for the whole page to load before starting to look for that element. In some extreme cases this can take HOURS. What can I do about it?
You could try using the page load timeout introduced in the library. The implementation of it is not universal, but it's exposed for certain by the .NET and Java bindings, has now been implemented in the Firefox driver, and will be in the IE driver in the forthcoming 2.22. In Java, to set the page load timeout to 15 seconds, the code would look like this:
driver.manage().timeouts().pageLoadTimeout(15, TimeUnit.SECONDS);
If it's not exposed in the Python language bindings, I'm sure the maintainer would eagerly accept a patch that implemented it.
You can still speed up your script execution by waiting for the presence (not the visibility) of the expected element for, say, 5-8 seconds, and then sending a window.stop() JavaScript call (to stop loading further elements) without waiting for the entire page to load; or by catching the page-load timeout exception after 5-8 seconds and then calling window.stop().
If the page has not adopted a lazy-loading technique (loading only the visible elements and loading the rest only after scrolling), it loads every element before reporting the window's ready state, so it will be slower if any element takes a long time to render.
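A sketch of that approach in Python, combining a short page-load timeout with window.stop() (the timeout values are illustrative; the URL and element id are from the question):

from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Firefox()
browser.set_page_load_timeout(8)  # give the page only a few seconds
try:
    browser.get("http://example.com/example-page.php")
except TimeoutException:
    browser.execute_script("window.stop();")  # stop loading the remaining resources
# wait for presence (not visibility) of the expected element
element = WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.ID, "someElement"))
)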
