I realise that this question is very similar to this, and other SO questions. However, I've played around with the screen size, and also the wait times, and that has not fixed the problem.
I am:
-starting a driver
-opening a website and logging in with Selenium
-scraping specific values from the home page
This is the code that isn't working in PhantomJS (but works fine if I use chromedriver):
time.sleep(60)
driver.set_window_size(1024,768)
todays_points = driver.find_elements_by_xpath("//div/a[contains(text(),'Today')]/preceding-sibling::span")
total = 0
for today in todays_points:
    driver.set_window_size(1024,768)
    points = today.find_elements_by_class_name("stream_total_points")[0].text
    points = int(points[:-4])
    total += points
driver.close()
print total
The HTML that I'm trying to access is inside a div element:
<span class="stream-type">tracked 7 Minutes-Recode for <span class="stream_total_points">349 pts</span></span>
<a class="action_time gray_link" href="/entry/42350781/">Today</a>
I want to grab the '349 pts' text. However, with PhantomJS the value returned is always 0, so I think it's not finding that element.
EDIT: When I print the HTML source using print(driver.page_source), I get the correct page output, but the element is not in it. Checking in Chrome with the View Source tool, I can't see the element there either (though I can with Inspect Element). Could this be why PhantomJS cannot access the element?
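An explicit wait on the specific element, rather than the fixed time.sleep(60), may make the failure easier to diagnose under PhantomJS. A minimal sketch, reusing the XPath from the code above:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 60 seconds for at least one matching span to appear in the DOM
wait = WebDriverWait(driver, 60)
todays_points = wait.until(EC.presence_of_all_elements_located(
    (By.XPATH, "//div/a[contains(text(),'Today')]/preceding-sibling::span")))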
Related
I am trying to print by ID using Selenium. As far as I can tell, "a" is the tag and "title" is the attribute. See HTML below.
When I run the following code:
print(driver.find_elements(By.TAG_NAME, "a")[0].get_attribute('title'))
I get the output:
Zero Tolerance
So I'm getting the first attribute correctly. When I increment the code above:
print(driver.find_elements(By.TAG_NAME, "a")[1].get_attribute('title'))
My expected output is:
Aaliyah Love
However, I'm just getting a blank line. No errors. What am I doing incorrectly? Please don't suggest using XPath or CSS; I'm trying to learn Selenium tag locators.
HTML:
<a class=" Link ScenePlayer-ChannelName-Link styles_1lHAYbZZr4 Link ScenePlayer-ChannelName-Link styles_1lHAYbZZr4" href="/en/channel/ztfilms" title="Zero Tolerance" rel="">Zero Tolerance</a>
...
<a class=" Link ActorThumb-ActorImage-Link styles_3dXcTxVCON Link ActorThumb-ActorImage-Link styles_3dXcTxVCON" href="/[RETRACTED]/Aaliyah-Love/63565" title="Aaliyah Love"
Selenium locators are a toolbox, and you're saying you only want to use a screwdriver (By.TAG_NAME) for all jobs. We aren't saying that you shouldn't use By.TAG_NAME; we're saying that you should use the right tool for the right job, and sometimes (most times) By.TAG_NAME is not the right tool. CSS selectors are WAY more powerful locators because they can match not only tags but also classes, attributes, and so on.
It's hard to say for sure what's going on without access to the site/page. It could be that the entire page isn't loaded and you need to add a wait for the page to finish loading (maybe count links expected on the page?). It could be that your locator isn't specific enough and is catching other A tags that don't have a title attribute.
I would start by doing some debugging.
links = driver.find_elements(By.TAG_NAME, "a")
for link in links:
print(link.get_attribute('title'))
What does this print?
If it prints some blank lines sprinkled throughout the actual titles, your locator is probably not specific enough. Try a CSS selector
links = driver.find_elements(By.CSS_SELECTOR, "a[title]")
for link in links:
    print(link.get_attribute('title'))
If instead it returns some titles and then nothing but blank lines, the page is probably not fully loaded. Try something like
from selenium.webdriver.support.ui import WebDriverWait

count = 20  # the number of expected links on the page
link_locator = (By.TAG_NAME, "a")
WebDriverWait(driver, 10).until(lambda wd: len(wd.find_elements(*link_locator)) == count)
links = driver.find_elements(*link_locator)
for link in links:
    print(link.get_attribute('title'))
Context
This is a repost of Get a page with Selenium but wait for element value to not be empty, which was closed without any valid reason so far as I can tell.
The linked answers in the closure reasoning both rely on knowing what the expected text value will be. In each answer, the expected text is explicitly hardcoded into the WebDriverWait call. Furthermore, neither of the linked answers even remotely touches upon the final part of my question:
[whether the expected conditions] come before or after the page Get
"Duplicate" Questions
How to extract data from the following html?
Assert if text within an element contains specific partial text
Original Question
I'm grabbing a web page using Selenium, but I need to wait for a certain value to load. I don't know what the value will be, only what element it will be present in.
It seems that using the expected condition text_to_be_present_in_element_value or text_to_be_present_in_element is the most likely way forward, but I'm having difficulty finding any actual documentation on how to use these and I don't know if they come before or after the page Get:
webdriver.get(url)
Rephrase
How do I get a page using Selenium but wait for an unknown text value to populate an element's text or value before continuing?
I'm sure that my answer is not the best one, but here is part of my own code, which helped me with a problem similar to yours.
In my case I had trouble with the loading time of the DOM: sometimes it took 5 seconds, sometimes 1 second, and so on.
url = 'www.somesite.com'
browser.get(url)
Because in my case browser.implicitly_wait(7) was not enough, I made a simple for loop to check whether the content is loaded.
# some code...
for try_html in range(7):
    """Make 7 tries to check if the element is loaded."""
    browser.implicitly_wait(7)
    html = browser.page_source
    soup = BeautifulSoup(html, 'lxml')
    raw_data = soup.find_all('script', type='application/ld+json')
    # If 'sku' is not found in the HTML page we skip to another
    # iteration of the loop; otherwise we stop retrying and scrape the page.
    if 'sku' not in html:
        continue
    else:
        scrape(raw_data)
        break
It's not perfect, but you can try it.
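For the original question of waiting for an unknown text value, an explicit wait with a custom condition is another option, and it goes after the get. A minimal sketch, assuming a placeholder URL and element ID:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait

def element_has_text(locator):
    """Condition: the element found by `locator` has non-empty text."""
    def _predicate(driver):
        element = driver.find_element(*locator)
        return element if element.text.strip() else False
    return _predicate

driver = webdriver.Chrome()
driver.get('https://www.somesite.com')  # placeholder URL; the wait comes after the get
value_element = WebDriverWait(driver, 10).until(
    element_has_text((By.ID, 'some-id')))  # placeholder locator
print(value_element.text)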
I'm trying to create a program that extracts all the people I follow on Instagram. I'm using Python, Selenium and Chromedriver.
To do so, I first get the number of followed people and click on the 'following' button:
nb_abonnements = int(webdriver.find_element_by_xpath('/html/body/span[1]/section[1]/main/div[1]/header/section[1]/ul/li[3]/a/span').text)
sleep(randrange(1,3))
abonnements = webdriver.find_element_by_xpath('/html/body/span[1]/section[1]/main/div[1]/header/section[1]/ul/li[3]/a')
abonnements.click()
I then use the following code to get the followers and scroll the popup page in case I can't find one:
followers_panel = webdriver.find_element_by_xpath('/html/body/div[3]/div/div/div[2]')
while i < nb_abonnements:
    try:
        print(i)
        followed = webdriver.find_element_by_xpath('/html/body/div[3]/div/div/div[2]/ul/div/li[{}]/div/div[2]/div/div/div/a'.format(i+1)).text
        # the followed users are in a ul list
        i += 1
        followed_list.append(followed)
    except NoSuchElementException:
        webdriver.execute_script(
            "arguments[0].scrollBy(0,400)", followers_panel
        )
        sleep(7)
The problem is that once i reaches 12, the program raises the exception and scrolls. From there, it still can't find the next follower and is stuck in a loop where it does nothing but scroll. I've checked the source code of the IG page, and it turns out the path is still good, but apparently I can no longer access the elements the way I do, probably because the ul list in which I am accessing them has become too long (line 5 of the program).
I can't work out how to solve this. I hope you can be of some help.
UPDATE: the DOM looks like this:
html
body
span
script
...
div[3]
div
...
div
div
div[2]
ul
div
li
li
li
li
...
li
The ul is the list of the followers.
The li elements contain the info I'm trying to extract (the username). Even when I go to the webpage myself, open the popup window, scroll a little, and let everything load, I can't find the element I'm looking for by typing the XPath into the DOM search bar manually, although the path is correct; I can check it by looking at the DOM.
I've tried various webdrivers for Selenium; currently I am using chromedriver 2.45.615291. I've also added an explicit wait for the element to show (WebDriverWait(webdriver, 10).until(EC.presence_of_element_located((By.XPATH, '/html/body/div[3]/div/div/div[2]/ul/div/li[{}]/div/div[2]/div/div/div/a'.format(i+1))))), but I just get a timeout exception: selenium.common.exceptions.TimeoutException: Message:.
It just seems like once the ul list is too long (which is from the moment I've scrolled down enough to load new people), I can't access any element of the list by its XPATH, even the elements that were already loaded before I began scrolling.
Instead of using an XPath for each child element, find the ul element first, then find all of its child elements with something like ul_element.find_elements_by_tag_name('li'). Then iterate through each element in the collection and get the required text, as in the sketch below.
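A rough sketch of that idea, reusing the ul XPath from the question (the inner lookup for the username link is a guess and may need adjusting to the real markup):
# Locate the ul that holds the followed users, then work relative to it
followers_list = webdriver.find_element_by_xpath('/html/body/div[3]/div/div/div[2]/ul')
followed_list = []
for item in followers_list.find_elements_by_tag_name('li'):
    # each li holds one username; adjust this inner lookup if the markup differs
    followed_list.append(item.find_element_by_tag_name('a').text)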
I've found a solution: I just access the element through an XPath like this: find_element_by_xpath("(//*[@class='FPmhX notranslate _0imsa '])[{}]".format(i)). I don't know why it didn't work the other way, but like this it works just fine.
Webscraping results from Indeed.com
-Searching 'Junior Python' in 'Los Angeles, CA' (Done)
-Sometimes a popup window opens; close the window if it appears. (Done)
-Top 3 results are sponsored so skip these and go to real results
-Click on result summary section which opens up side panel with full summary
-Scrape the full summary
-When a result summary is clicked, the URL changes. Rather than opening a new window, I would like to scrape the full summary from the side panel
-Each real result is under ('div':{'data-tn-component':'organicJob'}). I am able to get the job title, company, and short summary using BeautifulSoup. However, I would like to get the full summary on the side panel.
Problem
1) When I try to click on the link using Selenium (the job title or the short summary, which opens up the side panel), the code only ends up clicking on the 1st link, which is a 'sponsored' one. I'm unable to locate and click on a real result under id='jobOrganic'.
2) Once a real result is clicked on (manually), I can see that the full summary side panel is under <td id='auxCol'> and, within this, under <div id='vjs-desc'>. The full summary is contained within the <p> tags. When I try to scrape the full summary via Selenium using findAll('div', {'id':'vjs-desc'}), all I get is a blank result [].
3) The URL also changes when the side panel is opened. I tried using Selenium to have the driver get the new URL and then soup that URL to get results, but all I'm getting is the 1st sponsored result, which is not what I want. I'm not sure why BeautifulSoup keeps getting the sponsored result, even when I'm running the code under the id='jobOrganic' real results.
Here is my code. I've been working on this for almost the past two days, going through Stack Overflow, documentation, and Google, but I've been unable to find the answer. I'm hoping someone can point out what I'm doing wrong and why I'm unable to get the full summary.
Thanks and sorry for this being so long.
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup as bs
url = 'https://www.indeed.com/'
driver = webdriver.Chrome()
driver.get(url)
whatinput = driver.find_element_by_id('text-input-what')
whatinput.send_keys('Junior Python')
whereinput = driver.find_element_by_id('text-input-where')
whereinput.click()
whereinput.clear()
whereinput.send_keys('Los Angeles, CA')
findbutton = driver.find_element_by_xpath('//*[@id="whatWhere"]/form/div[3]/button')
findbutton.click()
try:
    popup = driver.find_element_by_id('prime-popover-close-button')
    popup.click()
except:
    pass
This is where I'm stuck. The result summary is under the div with data-tn-component='organicJob', in the span with class='summary'. Once I click on this, the side panel opens up.
soup = bs(driver.page_source,'html.parser')
contents = soup.findAll('div',{"data-tn-component":"organicJob"})
for each in contents:
    summary = driver.find_element_by_class_name('summary')
    summary.click()
This opens the side panel, but it clicks the first sponsored link on the whole page, not the real result. This basically goes outside the 'organicJob' result set for some reason.
url = driver.current_url
driver.get(url)
I tried to get the new URL after clicking on the (sponsored) link to test whether I can even get the side panel full summary (albeit for a sponsored result, for testing purposes).
soup=bs(driver.page_source,'html.parser')
fullsum = soup.findAll('div',{"id":"vjs-desc"})
print(fullsum)
This actually prints out the full summary of the side panel, but it keeps printing the same 1st result over and over throughout the whole loop, instead of moving on to the next one.
The problem is that you are fetching divs using Beautiful Soup but clicking using Selenium, which is not aware of your collected divs.
Because you are using the find_element_by_class_name() method of the driver object, it searches the whole page instead of your intended div object each (in the for loop). Thus, it ends up fetching the same first result from the whole page in each iteration.
One quick workaround is possible using only Selenium (this will be slower, though):
elements = driver.find_elements_by_tag_name('div')
for element in elements:
    # get_attribute() returns None for divs without the attribute, so guard before the membership test
    component = element.get_attribute("data-tn-component")
    if component and "organicJob" in component:
        summary = element.find_element_by_class_name('summary')
        summary.click()
The above code will search for all the divs and iterate over them to find divs whose data-tn-component attribute contains organicJob. Once it finds one, it will search for the element with class name summary and click on that element.
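If speed matters, a single CSS selector can do the same narrowing in one call; a sketch built from the attribute and class names quoted in the question:
# Find the summary element inside each organic job card directly
summaries = driver.find_elements_by_css_selector('div[data-tn-component="organicJob"] .summary')
for summary in summaries:
    # clicking updates the side panel; re-locate between clicks if elements go stale
    summary.click()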
I've never studied HTML seriously, so probably what I'm going to say is not right.
While I was writing my Selenium code, I noticed that some buttons on some webpages do not redirect to other pages, but instead change the structure of the first one. From what I've understood, this happens because there's some JavaScript code that modifies it.
So, when I want to get some data which is not present on the first page load, I just have to click the right sequence of buttons to obtain it, right?
The page I wanted to load is https://watch.nba.com/, and what I want to get is the match list for a given day. The fastest path to get it is to open the calendar:
calendary = driver.find_element_by_class_name("calendar-btn")
calendary.click()
and click the selected day:
page = calendary.find_element_by_xpath("//*[contains(text(), '" + time.strftime("%d") + "')]")
page.click()
running this code, I get this error:
selenium.common.exceptions.ElementNotVisibleException
I read somewhere that the problem is that the page is not correctly loaded, or the element is not visible/clickable, so I tried with this:
wait = WebDriverWait(driver, 10)
page = wait.until(EC.visibility_of_element_located((By.XPATH, "//*[contains(text(), '" + time.strftime("%d") + "')]")))
page.click()
But this time I get this error:
selenium.common.exceptions.TimeoutException
Can you help me to solve at least one of these two problems?
The reason you are getting such behavior is that this page is loaded with iframes (I can see 15 on the main page). Once you click the calendar button, you will need to switch your context to either the default content or the specific iframe where the calendar resides. There is plenty of code out there that shows you how to switch into and out of an iframe. Hope this helps.
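A minimal sketch of that switching pattern (the iframe locator here is a placeholder; inspect the page to find the frame that actually contains the calendar):
import time

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)

# Switch into the iframe that holds the calendar (placeholder locator)
wait.until(EC.frame_to_be_available_and_switch_to_it(
    (By.CSS_SELECTOR, "iframe.calendar-frame")))

# Now the day cell can be located and clicked inside that frame
day = wait.until(EC.element_to_be_clickable(
    (By.XPATH, "//*[contains(text(), '" + time.strftime("%d") + "')]")))
day.click()

# Switch back to the top-level document when done
driver.switch_to.default_content()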