Using Selenium to download files - Python

Guys, I need to write a script that uses Selenium to go over the pages of a website and download each page to a file.
This is the website I need to go through, and I want to download all 10 pages of reviews.
This is my code:
import urllib2,os,sys,time
from selenium import webdriver
browser=urllib2.build_opener()
browser.addheaders=[('User-agent', 'Mozilla/5.0')]
url='http://www.imdb.com/title/tt2948356/reviews?ref_=tt_urv'
driver = webdriver.Chrome('chromedriver.exe')
driver.get(url)
time.sleep(2)
if not os.path.exists('reviewPages'):os.mkdir('reviewPages')
response=browser.open(url)
myHTML=response.read()
fwriter=open('reviewPages/'+str(1)+'.html','w')
fwriter.write(myHTML)
fwriter.close()
print 'page 1 done'
page=2
while True:
    cssPath = '#tn15content > table:nth-child(4) > tbody > tr > td:nth-child(2) > a:nth-child(11) > img'
    try:
        button = driver.find_element_by_css_selector(cssPath)
    except:
        error_type, error_obj, error_info = sys.exc_info()
        print 'STOPPING - COULD NOT FIND THE LINK TO PAGE: ', page
        print error_type, 'Line:', error_info.tb_lineno
        break
    button.click()
    time.sleep(2)
    response = browser.open(url)
    myHTML = response.read()
    fwriter = open('reviewPages/' + str(page) + '.html', 'w')
    fwriter.write(myHTML)
    fwriter.close()
    time.sleep(2)
    print 'page', page, 'done'
    page += 1
But the program just stops after downloading the first page. Could someone help? Thanks.

So, a few things are causing this.
The first thing I think is causing you issues is:
table:nth-child(4)
When I go to that website, I think you just want:
table >
The second error is the break statement in your except block. This says: when I get an error, stop looping.
So what's happening is that your CSS selector is not quite correct, the find_element call fails, and execution goes to your except block, where you are telling it to stop looping.

Instead of that very complex CSS path, try this simpler XPath ('//a[child::img[@alt="[Next]"]]/@href'), which will return the URL associated with the little triangular 'next' button on each page.
Or notice that each page has 10 reviews and the URLs for pages 2 to 10 just give the starting review number, i.e. http://www.imdb.com/title/tt2948356/reviews?start=10 is the URL for page 2. Simply calculate the URL for the next page and stop when it doesn't fetch anything.
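For the second suggestion, here is a minimal sketch (in Python 2, to match the question's code) that builds each page's URL from the start= offset and saves the rendered source with Selenium alone, skipping urllib2. The ?start=N pattern and the '#tn15content p' emptiness check used to stop early are assumptions to verify against the live page.
import os
from selenium import webdriver

driver = webdriver.Chrome('chromedriver.exe')

if not os.path.exists('reviewPages'):
    os.mkdir('reviewPages')

page = 1
while page <= 10:
    # page 1 -> start=0, page 2 -> start=10, ...
    driver.get('http://www.imdb.com/title/tt2948356/reviews?start=%d' % ((page - 1) * 10))
    # Stop early if the page no longer contains any review paragraphs
    # ('#tn15content' is borrowed from the question's own selector).
    if not driver.find_elements_by_css_selector('#tn15content p'):
        break
    with open('reviewPages/%d.html' % page, 'w') as fwriter:
        fwriter.write(driver.page_source.encode('utf-8'))
    print 'page %d done' % page
    page += 1

driver.quit()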

Related

Webscraping Click Button Selenium

I am trying to webscrape indeed.com to search for jobs using Python, with Selenium and BeautifulSoup. I want to click the next page but can't seem to figure out how to do this. I have looked at many threads, but it is unclear to me which element I am supposed to act on. Here is the web page HTML; the code marked in grey comes up when I inspect the next button.
Also, just to mention, I first tried to follow what happens to the URL when mousedown is executed. After reading the addppurlparam function, adding the strings from the function, and using that URL, I just get thrown back to page one.
Here is my code for the class with selenium meant to click on the button:
from selenium import webdriver
from selenium.webdriver import ActionChains
driver = webdriver.Chrome("C:/Users/alleballe/Downloads/chromedriver.exe")
driver.get("https://se.indeed.com/Internship-jobb")
print(driver.title)
#assert "Python" in driver.title
elem = driver.find_element_by_class_name("pagination-list")
elem = elem.find_element_by_xpath("//li/a[@aria-label='Nästa']")
print(elem)
assert "No results found." not in driver.page_source
assert elem
action = ActionChains(driver).click(elem)
action.perform()
print(elem)
driver.close()
The Indeed site is formatted so that it shows 10 results per page.
Your photo shows the wrong section of the HTML; instead, you can see that the links contain start=0 for the first page, start=10 for the second, start=20 for the third, and so on.
You could use this knowledge to write code like this:
i = 0
while True:
    driver.get(f'https://se.indeed.com/jobs?q=Internship&start={i}')
    # code here
    i = i + 10
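As written, the loop above never exits. One way to stop it, sketched below, is to break when the page reports no results; the "No results found." text is borrowed from the question's own assert, so whether it matches an empty results page exactly is an assumption.
from selenium import webdriver

driver = webdriver.Chrome("C:/Users/alleballe/Downloads/chromedriver.exe")

i = 0
while True:
    driver.get(f'https://se.indeed.com/jobs?q=Internship&start={i}')
    if "No results found." in driver.page_source:
        break  # no more result pages
    # ... scrape the 10 results on this page here ...
    i += 10

driver.quit()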
But, to directly answer your question, you should do:
next_page_link = driver.find_element_by_xpath('/html/head/link[6]')
driver.get(next_page_link.get_attribute('href'))
This will find the link element, read its href, and then load it.
This works; it paginates to the next page:
driver.find_element_by_class_name("pagination-list").find_element_by_tag_name('a').click()

Instagram crawling with scrolling down...with python selenium

# driver is assumed to be an already-initialized Selenium webdriver
from bs4 import BeautifulSoup
import time

total_link = []
temp = ['a']
total_num = 0
while driver.find_element_by_tag_name('div'):
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    Divs = driver.find_element_by_tag_name('div').text
    html = driver.page_source
    soup = BeautifulSoup(html, 'html.parser')
    my_titles = soup.select(
        'div._6d3hm > div._mck9w'
    )
    for title in my_titles:
        try:
            if title in temp:
                # print('duplicate')
                pass
            else:
                # print('not a duplicate')
                link = str(title.a.get("href"))  # grab the link!
                total_link.append(link)
                # print(link)
        except:
            pass
    print("Number collected so far: " + str(len(total_link)))
    temp = my_titles
    time.sleep(2)
    if 'End of Results' in Divs:
        print('end')
        break
    else:
        continue
Hello, I was scraping Instagram data with tags in Korean.
My code consists of the following steps:
scroll down the page
using bs4 and requests, get the page's HTML
locate the time log, picture src, text, tags, and ID
select them all and crawl them
once it is done with the HTML on the page, scroll down again
repeat until the end
By doing this, and using code from people on this site, it seemed to work...
but after a few scrolls down, at certain points, scrolling stops with the error message
'읽어드리지 못합니다', or in English 'Unable to read'.
Can I know why the error pops up and how to solve the problem?
I am using Python and Selenium.
Thank you for your answer.
Instagram tries to protect itself against malicious activity such as scraping and other automated access. This often happens when you access Instagram pages abnormally fast, so you have to call time.sleep() more frequently or with longer delays.
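A minimal sketch of that idea: a hypothetical polite_scroll helper that replaces the fixed time.sleep(2) in the loop above with a longer, randomized pause (the 4-8 second range is a guess; tune it to what Instagram tolerates).
import random
import time

def polite_scroll(driver, min_delay=4.0, max_delay=8.0):
    """Scroll to the bottom, then pause for a longer, randomized interval so
    page loads do not arrive at a suspiciously fast, regular rate."""
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(random.uniform(min_delay, max_delay))
Calling polite_scroll(driver) in place of the scroll-and-sleep pair in the loop keeps the rest of the code unchanged.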

Python: how can I print all the source code using Selenium

driver.page_source doesn't return all the source code. It prints only some parts of the code in detail, but a big part of the code is missing. How can I fix this?
This is my code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
def htmlToLuna():
    url = 'https://codefights.com/tournaments/Xph7eTJQssbXjDLzP/A'
    driver = webdriver.Chrome('C:\\Python27\\chromedriver\\chromedriver.exe')
    driver.get(url)
    web = open('web.txt', 'w')
    web.write(driver.page_source)
    print driver.page_source
    web.close()

print htmlToLuna()
Here is a simple piece of code: all it does is open the URL, get the length of the page source, wait five seconds, and then get the length of the page source again.
from selenium import webdriver
import time

if __name__ == "__main__":
    browser = webdriver.Chrome()
    browser.get("https://codefights.com/tournaments/Xph7eTJQssbXjDLzP/A")
    initial = len(browser.page_source)
    print(initial)
    time.sleep(5)
    new_source = browser.page_source
    print(len(new_source))
See the output:
15722
48800
You see that the length of the page source increases after the wait? You must make sure that the page is fully loaded before getting the source. But this is not a proper implementation, since it blindly waits.
Here is a nicer way to do this: the browser will wait until the element of your choice is found. The timeout is set to 10 seconds.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

if __name__ == "__main__":
    browser = webdriver.Chrome()
    browser.get("https://codefights.com/tournaments/Xph7eTJQssbXjDLzP/A")
    try:
        # Wait up to 10 seconds for the editor's textarea to appear
        WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.CSS_SELECTOR, '.CodeMirror > div:nth-child(1) > textarea:nth-child(1)')))
        print("Result:")
        print(len(browser.page_source))
    except TimeoutException:
        print("Your exception message here!")
The output: Result: 52195
Reference:
https://stackoverflow.com/a/26567563/7642415
http://selenium-python.readthedocs.io/locating-elements.html
Hold on! Even that won't make any guarantees about getting the full page source, since individual elements are loaded dynamically. If the browser finds the element, it moves on. So make sure you pick the proper element to ensure the page has been loaded fully.
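For what it's worth, one way to be a bit more thorough is to wait for the document's own readyState as well as for an element you know is rendered late. A rough sketch is below (the .CodeMirror selector reuses the one from the code above, and this still cannot guarantee that every dynamically injected element has arrived):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get("https://codefights.com/tournaments/Xph7eTJQssbXjDLzP/A")

# First wait until the document itself reports it has finished loading...
WebDriverWait(browser, 10).until(
    lambda d: d.execute_script("return document.readyState") == "complete"
)
# ...then wait for an element that is rendered late.
WebDriverWait(browser, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, '.CodeMirror'))
)
print(len(browser.page_source))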
P.S. Mine is Python 3 and the webdriver is on my environment PATH, so my code needs to be modified a bit to work with Python 2.x; I guess only the print statements need to change.

Python with Selenium: pagination issue

I am trying to scrape with Selenium in Python, looping through landing pages on bigkinds.or.kr by clicking on the increasing page-number buttons.
The next page link is located in the following HTML, according to the Chrome Inspector:
<div class="newsPage">
<div class="btmDelBtn">
...</div>
<span>
1
2
3
4
5
6
</span>
I am not having any success crawling by clicking through to the next page. Please help me.
Here is my code:
url = "https://www.bigkinds.or.kr/main.do"
browser.get(url)
...
currentPageElement = browser.find_element_by_xpath("//*[@id='content']/div/div/div[2]/div[7]/span/a[2]")
print(currentPageElement)
currentPageNumber = int(currentPageElement.text)
print(currentPageNumber)
In the XPath, "/span/a[2]" is the page number. How can I make a loop over this XPath?
Try to use the code below:
from selenium.common.exceptions import NoSuchElementException

url = "https://www.bigkinds.or.kr/main.do"
browser.get(url)

page_count = 1
while True:
    # Increase page_count by 1 on each iteration
    page_count += 1
    # Do what you need to do on each page
    # Code goes here
    try:
        # Clicking on "2" in the pagination on the first iteration, "3" on the second...
        browser.find_element_by_link_text(str(page_count)).click()
    except NoSuchElementException:
        # Stop the loop if no more pages are available
        break
Update
If you still want to search by XPath, you might need to replace the line
browser.find_element_by_link_text(str(page_count)).click()
with the line
browser.find_element_by_xpath('//a[@onclick="getSearchResultNew(%s)"]' % page_count).click()
...or if you want to use your absolute XPath (not the best idea), you can try
browser.find_element_by_xpath("//*[@id='content']/div/div/div[2]/div[7]/span/a[%s]" % page_count).click()
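Putting the update together with the loop above, here is a sketch that drives the pagination through the onclick-based XPath (getSearchResultNew comes from the answer's selector; whether it still matches the live markup is an assumption):
from selenium.common.exceptions import NoSuchElementException

page_count = 1
while True:
    page_count += 1
    # Do what you need to do on the current page here
    try:
        # Click "2" on the first iteration, "3" on the second, ...
        browser.find_element_by_xpath('//a[@onclick="getSearchResultNew(%s)"]' % page_count).click()
    except NoSuchElementException:
        # Stop when no further page link exists
        break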

Scraper: Try skips code in while loop (Python)

I am working on my first scraper and ran into an issue. My scraper accesses a website and saves links from each result page. Now, I only want it to go through 10 pages. The problem comes when the search results have fewer than 10 pages. I tried using a while loop along with a try statement, but it does not seem to work. After the scraper goes through the first page of results, it does not return any links from the successive pages; however, it does not give me an error, and it stops once it reaches 10 pages or hits the exception.
Here is a snippet of my code:
links = []
page = 1
while(page <= 10):
    try:
        # Get information from the propertyInfo class
        properties = WebDriverWait(driver, 10).until(lambda driver: driver.find_elements_by_xpath('//div[@class = "propertyInfo item"]'))
        # For each listing
        for p in properties:
            # Find all elements with a tags
            tmp_link = p.find_elements_by_xpath('.//a')
            # Get the link from the second element to avoid error
            links.append(tmp_link[1].get_attribute('href'))
        page += 1
        WebDriverWait(driver, 10).until(lambda driver: driver.find_element_by_xpath('//*[@id="paginador_siguiente"]/a').click())
    except ElementNotVisibleException:
        break
I really appreciate any pointers on how to fix this issue.
You are explicitly catching the ElementNotVisibleException exception and stopping on it. This way you won't see any error message. The error is probably in this line:
WebDriverWait(driver, 10).until(lambda driver: driver.find_element_by_xpath('//*[@id="paginador_siguiente"]/a').click())
I assume the lambda here should be a test that is run until it succeeds, so it shouldn't perform an action like a click. I actually believe that you don't need to wait here at all; the page should already be fully loaded, so you can just click on the link:
driver.find_element_by_xpath('//*[@id="paginador_siguiente"]/a').click()
This will either move to the next page (and the WebDriverWait at the start of the loop will wait for it) or raise an exception if no next link is found.
Also, you had better minimize the try ... except scope; this way you won't capture something unintentionally. E.g., here you only want to surround the next-link-finding code, not the whole loop body:
# ...
while(page <= 10):
    # Scrape this page
    properties = WebDriverWait(driver, 10).until(...)
    for p in properties:
        # ...
    page += 1

    # Try to pass to the next page
    try:
        driver.find_element_by_xpath('//*[@id="paginador_siguiente"]/a').click()
    except ElementNotVisibleException:
        # Break if no next link is found
        break
