I am trying to save Selenium web elements to a separate file so that my program can later find those elements easily on the same website. I tried pickle, and I keep getting the following error:
AttributeError: Can't pickle local object '_createenviron.<locals>.encodekey'
I tried making the local variable global, but still no luck.
Here is example code demonstrating what I am trying to achieve:
import pickle
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('http://website.com/')

anchors = driver.find_elements(By.TAG_NAME, 'a')
for anchor in anchors:
    with open('anchors_pickle', 'wb') as pick_file:
        pickle.dump(anchor, pick_file)  # Error
Since this is not possible with JSON either, I assume pickle is the best option here. Is there any way to overcome this error?
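For context, a WebElement is only a reference into the live driver session, which is why pickling it fails; a common workaround is to serialize a locator you can re-find later rather than the element itself. Below is a minimal, hypothetical sketch of that idea, using the anchors' href values and a file name ('anchors.json') chosen purely for illustration:

import json
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('http://website.com/')

# Store a re-usable locator (here: the href attribute) instead of the live element.
anchor_hrefs = [a.get_attribute('href') for a in driver.find_elements(By.TAG_NAME, 'a')]
with open('anchors.json', 'w') as out_file:
    json.dump(anchor_hrefs, out_file)

# Later, in the same or a new session, re-locate the element from the stored data.
with open('anchors.json') as in_file:
    stored_hrefs = json.load(in_file)
first_anchor = driver.find_element(By.CSS_SELECTOR, f'a[href="{stored_hrefs[0]}"]')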
Related
I'm trying to take a screenshot of a <canvas> element in Google Chrome using the Selenium WebDriver and Python.
I tried the code below:
driver.find_element_by_css('#canvas-xyz').save_screenshot('canvas.png')
It returned:
AttributeError: 'WebElement' object has no attribute 'save_screenshot'
I also tried this in the dev tools:
document.querySelector('#canvas-xyz').toDataURL()
It returned the following data URI, which decodes to a blank image:
data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABq8AAAAhCAYAAABHnTGeAAAEQ0lEQVR4Xu3ZMREAAAwCseLfdG38kCrgUjZ2jgABAgQIECBAgAABAgQIECBAgAABAgQIECBAgEBEYJEcYhAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQIECBA445USECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIZASMV5lXCEKAAAECBAgQIECAAAECBAgQIECAAAECBAgQIGC80gECBAgQIECAAAECBAgQIECAAAECBAgQIECAAIGMgPEq8wpBCBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQIEjFc6QIAAAQIECBAgQIAAAQIECBAgQIAAAQIECBAgkBEwXmVeIQgBAgQIECBAgAABAgQIECBAgAABAgQIECBAgIDxSgcIECBAgAABAgQIECBAgAABAgQIECBAgAABAgQyAsarzCsEIUCAAAECBAgQIECAAAECBAgQIECAAAECBAgQMF7pAAECBAgQIECAAAECBAgQIECAAAECBAgQIECAQEbAeJV5hSAECBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQLGKx0gQIAAAQIECBAgQIAAAQIECBAgQIAAAQIECBDICBivMq8QhAABAgQIECBAgAABAgQIECBAgAABAgQIECBAwHilAwQIECBAgAABAgQIECBAgAABAgQIECBAgAABAhkB41XmFYIQIECAAAECBAgQIECAAAECBAgQIECAAAECBAgYr3SAAAECBAgQIECAAAECBAgQIECAAAECBAgQIEAgI2C8yrxCEAIECBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAeOVDhAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQIECGQEjFeZVwhCgAABAgQIECBAgAABAgQIECBAgAABAgQIECBgvNIBAgQIECBAgAABAgQIECBAgAABAgQIECBAgACBjIDxKvMKQQgQIECAAAECBAgQIECAAAECBAgQIECAAAECBIxXOkCAAAECBAgQIECAAAECBAgQIECAAAECBAgQIJARMF5lXiEIAQIECBAgQIAAAQIECBAgQIAAAQIECBAgQICA8UoHCBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQIEMgLGq8wrBCFAgAABAgQIECBAgAABAgQIECBAgAABAgQIEDBe6QABAgQIECBAgAABAgQIECBAgAABAgQIECBAgEBGwHiVeYUgBAgQIECAAAECBAgQIECAAAECBAgQIECAAAECxisdIECAAAECBAgQIECAAAECBAgQIECAAAECBAgQyAgYrzKvEIQAAQIECBAgQIAAAQIECBAgQIAAAQIECBAgQMB4pQMECBAgQIAAAQIECBAgQIAAAQIECBAgQIAAAQIZAeNV5hWCECBAgAABAgQIECBAgAABAgQIECBAgAABAgQIGK90gAABAgQIECBAgAABAgQIECBAgAABAgQIECBAICPwDiwAIhIPZYcAAAAASUVORK5CYII=
Is it possible to take a screenshot of a single element using chromedriver and Selenium in Python? I am aware that Chrome dev tools lets you take a screenshot of a particular element. Even if it is a JavaScript method, I can get the data URI using driver.execute_script().
WebElement doesn't have save_screenshot. You can use the screenshot_as_png property and save it yourself:
element = driver.find_element_by_css('#canvas-xyz')
screenshot = element.screenshot_as_png
with open('canvas.png', 'wb') as f:
    f.write(screenshot)
Use the WebElement.screenshot() method:
def screenshot(self, filename):
    """
    Saves a screenshot of the current element to a PNG image file. Returns
    False if there is any IOError, else returns True. Use full paths in
    your filename.

    :Args:
     - filename: The full path you wish to save your screenshot to. This
       should end with a `.png` extension.

    :Usage:
        element.screenshot('/Screenshots/foo.png')
    """
So, in your example, that would be:
driver.find_element_by_css('#canvas-xyz').screenshot('canvas.png')
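Since the question also mentions pulling the data URI via driver.execute_script(), here is a hedged sketch of that route as well: run toDataURL() in the page and decode the base64 payload. This assumes the canvas has actually been drawn to; otherwise the image will be blank, as in the data URI shown above.

import base64

# Ask the page for the canvas contents as a data URI, then decode the base64 payload.
data_uri = driver.execute_script(
    "return document.querySelector('#canvas-xyz').toDataURL('image/png');")
png_bytes = base64.b64decode(data_uri.split(',', 1)[1])
with open('canvas.png', 'wb') as f:
    f.write(png_bytes)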
I found a tutorial and I'm trying to run this script; I have not worked with Python before.
tutorial
I've already checked what is running via logging.debug, verified that it connects to Google, and tried creating a CSV file with other scripts.
from urllib.parse import urlencode, urlparse, parse_qs
from lxml.html import fromstring
from requests import get
import csv

def scrape_run():
    with open('/Users/Work/Desktop/searches.txt') as searches:
        for search in searches:
            userQuery = search
            raw = get("https://www.google.com/search?q=" + userQuery).text
            page = fromstring(raw)
            links = page.cssselect('.r a')
            csvfile = '/Users/Work/Desktop/data.csv'
            for row in links:
                raw_url = row.get('href')
                title = row.text_content()
                if raw_url.startswith("/url?"):
                    url = parse_qs(urlparse(raw_url).query)['q']
                    csvRow = [userQuery, url[0], title]
                    with open(csvfile, 'a') as data:
                        writer = csv.writer(data)
                        writer.writerow(csvRow)
            print(links)

scrape_run()
The TL;DR of this script is that it does three basic things:
Locates and opens your searches.txt file.
Uses those keywords to search the first page of Google for each result.
Creates a new CSV file and prints the results (keyword, URLs, and page titles).
Solved
Google added a captcha because I sent too many requests; it works when I use mobile internet.
First, verify that the links variable is actually populated with data.
If it is empty, test the request you are making; it may return something different from what you expected.
Other than that, I think you just need to tweak your file handling a little.
https://www.guru99.com/reading-and-writing-files-in-python.html
There you can find some guidelines on file handling in Python.
In my view, you need to make sure you create the file first.
Start with a script that only creates a file.
After that, enhance the script so it can write to and append to the file.
From there, I think you are good to go and can continue with your script.
Beyond that, you would probably prefer opening the file only once instead of on every loop iteration; it could mean much faster execution (see the sketch below).
Let me know if something is not clear.
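To illustrate that last point, here is a rough, unverified sketch of how the posted script could be rearranged so the CSV file is opened once per run rather than once per result row; it keeps the original selectors and paths:

from urllib.parse import urlparse, parse_qs
from lxml.html import fromstring
from requests import get
import csv

def scrape_run():
    csvfile = '/Users/Work/Desktop/data.csv'
    # Open the output file once for the whole run instead of once per result row.
    with open(csvfile, 'a', newline='') as data:
        writer = csv.writer(data)
        with open('/Users/Work/Desktop/searches.txt') as searches:
            for search in searches:
                userQuery = search.strip()
                raw = get("https://www.google.com/search?q=" + userQuery).text
                page = fromstring(raw)
                links = page.cssselect('.r a')
                for row in links:
                    raw_url = row.get('href')
                    title = row.text_content()
                    if raw_url.startswith("/url?"):
                        url = parse_qs(urlparse(raw_url).query)['q']
                        writer.writerow([userQuery, url[0], title])

scrape_run()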
I have written a script in Python that uses send_keys to type text into a textarea on this webpage. However, send_keys is really slow because my text is quite large.
from selenium import webdriver

text = "gckugcgaygartty"
link_url = "http://www.bioinformatics.org/sms2/translate.html"

options = webdriver.ChromeOptions()  # the options were not shown in the original snippet
driver = webdriver.Chrome('chromedriver', chrome_options=options)
driver.get(link_url)
driver.find_element_by_tag_name("textarea").clear()
driver.find_element_by_tag_name("textarea").send_keys("gckugcgaygartty")
I then tried to replace send_keys with execute_script() as follows, but it didn't work (no errors, but nothing changed on the webpage). Could anyone give me some advice?
driver.execute_script("document.getElementById('main_form').getElementsByTagName('textarea')[0].click()")
driver.execute_script("document.getElementById('main_form').getElementsByTagName('textarea')[0].setAttribute('value', 'gckugcgaygartty' )")
Modification: replaced the setAttribute call with the value property.
Use the following code:
driver.execute_script("document.getElementsByTagName('textarea')[0].value='your_lengthy_data'")
OR
driver.execute_script("document.getElementById('main_form').getElementsByTagName('textarea')[0].value='your_lengthy_data'")
I am working on a larger program that will display the links from the results of a Google Newspaper search and then analyze those links for certain keywords, context, and data. I've gotten everything up to this one part to work, but when I try to iterate through the pages of results I run into a problem. I'm not sure how to do this without an API, which I do not know how to use. I just need to be able to iterate through multiple pages of search results so that I can then apply my analysis to them. It seems like there should be a simple solution to iterating through the pages of results, but I am not seeing it.
Are there any suggestions on how to approach this problem? I am somewhat new to Python and have been teaching myself all of these scraping techniques, so I'm not sure if I'm just missing something simple here. I know this may be an issue with Google restricting automated searches, but even pulling in the first 100 or so links would be beneficial. I have seen examples of this for regular Google searches but not for Google Newspaper searches.
Here is the body of the code. If there are any lines where you have suggestions, that would be helpful. Thanks in advance!
import csv
import requests
from lxml import html

def get_page_tree(url):
    page = requests.get(url=url, verify=False)
    return html.fromstring(page.text)

def find_other_news_sources(initial_url):
    forwarding_identifier = '/url?q='
    google_news_search_url = "https://www.google.com/search?hl=en&gl=us&tbm=nws&authuser=0&q=ohio+pay-to-play&oq=ohio+pay-to-play&gs_l=news-cc.3..43j43i53.2737.7014.0.7207.16.6.0.10.10.0.64.327.6.6.0...0.0...1ac.1.NAJRCoza0Ro"
    google_news_search_tree = get_page_tree(url=google_news_search_url)
    other_news_sources_links = [a_link.replace(forwarding_identifier, '').split('&')[0] for a_link in google_news_search_tree.xpath('//a//@href') if forwarding_identifier in a_link]
    return other_news_sources_links

links = find_other_news_sources("https://www.google.com/search?hl=en&gl=us&tbm=nws&authuser=0&q=ohio+pay-to-play&oq=ohio+pay-to-play&gs_l=news-cc.3..43j43i53.2737.7014.0.7207.16.6.0.10.10.0.64.327.6.6.0...0.0...1ac.1.NAJRCoza0Ro")

with open('textanalysistest.csv', 'wt') as myfile:
    wr = csv.writer(myfile, quoting=csv.QUOTE_ALL)
    for row in links:
        print(row)
I'm looking into building a parser for a site with a structure similar to Google's (i.e. a bunch of consecutive results pages, each with a table of the content of interest).
A combination of the Selenium package (for page-element-based site navigation) and BeautifulSoup (for HTML parsing) seems like the weapon of choice for harvesting written content. You may find them useful too, although I have no idea what kinds of defenses Google has in place to deter scraping.
A possible implementation for Mozilla Firefox using selenium, beautifulsoup and geckodriver:
from bs4 import BeautifulSoup, SoupStrainer
from bs4.diagnose import diagnose
from os.path import isfile
from time import sleep
import codecs
from selenium import webdriver

def first_page(link):
    """Takes a link, and scrapes the desired tags from the html code"""
    driver = webdriver.Firefox(executable_path='C://example/geckodriver.exe')  # Specify the appropriate driver for your browser here
    counter = 1
    driver.get(link)
    html = driver.page_source
    filter_html_table(html)
    counter += 1
    return driver, counter

def nth_page(driver, counter, max_iter):
    """Takes a driver instance, a counter to keep track of iterations, and max_iter for the maximum number of iterations. Looks for a page element matching the current iteration (how you need to program this depends on the html structure of the page you want to scrape), navigates there, and calls scrape_page to scrape."""
    while counter <= max_iter:
        pageLink = driver.find_element_by_link_text(str(counter))  # For other strategies to retrieve elements from a page, see the selenium documentation
        pageLink.click()
        scrape_page(driver)
        counter += 1
    else:
        print("Done scraping")
    return

def scrape_page(driver):
    """Takes a driver instance, extracts html from the current page, and calls the function that extracts tags from the html of the whole page"""
    html = driver.page_source  # Get html from page
    filter_html_table(html)  # Call function to extract desired html tags
    return

def filter_html_table(html):
    """Takes a full page of html, filters the desired tags using beautifulsoup, and calls the function that writes them to file"""
    only_td_tags = SoupStrainer("td")  # Specify which tags to keep
    filtered = BeautifulSoup(html, "lxml", parse_only=only_td_tags).prettify()  # Specify how to represent content
    write_to_file(filtered)  # Function call to store extracted tags in a local file.
    return

def write_to_file(output):
    """Takes the scraped tags, opens a new file if the file does not exist or appends to the existing file, and writes the extracted tags to it."""
    fpath = "<path to your output file>"
    if isfile(fpath):
        f = codecs.open(fpath, 'a')  # using 'codecs' to avoid problems with utf-8 characters in ASCII format.
        f.write(output)
        f.close()
    else:
        f = codecs.open(fpath, 'w')  # using 'codecs' to avoid problems with utf-8 characters in ASCII format.
        f.write(output)
        f.close()
    return
After this, it is just a matter of calling:
link = <link to site to scrape>
driver, n_iter = first_page(link)
nth_page(driver, n_iter, 1000) # the 1000 lets us scrape 1000 of the result pages
Note that this script assumes that the result pages you are trying to scrape are sequentially numbered, and those numbers can be retrieved from the scraped page's html using 'find_element_by_link_text'. For other strategies to retrieve elements from a page, see the selenium documentation here.
Also, note that you need to download the packages this depends on, plus the driver that Selenium needs in order to talk to your browser (in this case geckodriver): download geckodriver, place it in a folder, and then point 'executable_path' at the executable.
If you do end up using these packages, it can help to spread out your server requests using the time package (native to python) to avoid exceeding a maximum number of requests allowed to the server off of which you are scraping. I didn't end up needing it for my own project, but see here, second answer to the original question, for an implementation example with the time module used in the fourth code block.
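For example, a minimal sketch of the nth_page loop above with a pause added between page requests might look like this (the 2-second delay is an arbitrary placeholder, not a recommendation):

from time import sleep

def nth_page(driver, counter, max_iter, delay=2):
    """Same loop as above, but sleeps between page requests to throttle the scraper."""
    while counter <= max_iter:
        pageLink = driver.find_element_by_link_text(str(counter))
        pageLink.click()
        scrape_page(driver)
        sleep(delay)  # pause (in seconds) before requesting the next results page
        counter += 1
    print("Done scraping")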
I am trying to learn Python and also create a web utility. One task I am trying to accomplish is creating a single HTML file which can be run locally but links to everything it needs to look like the original web page. (If you are going to ask why I want this, it's because it may become part of a utility I am creating, or, if not, it's just for education.) So I have two questions, a theoretical one and a practical one:
1) Is this possible for visual (as opposed to functional) purposes? Can an HTML page work offline while linking to everything it needs online, or is there something fundamental about having the HTML file served from the web server that makes this impossible? How far can I go with it?
2) I have started a Python script which de-relativises (made that one up) linked elements on an HTML page, but I am a noob, so I most likely missed some elements or attributes which would also link to outside resources. After trying a few pages, I have noticed that the one in the code below does not work properly; there appears to be a .js file which is not linking correctly (the first of many problems to come). Assuming the answer to my first question is at least a partial yes, can anyone help me fix the code for this website?
Thank you.
Update: I missed the script tag, but even after I added it, it still does not work correctly.
import lxml
import sys
from lxml import etree
from StringIO import StringIO
from lxml.html import fromstring, tostring
import urllib2
from urlparse import urljoin

site = "www.script-tutorials.com/advance-php-login-system-tutorial/"
output_filename = "output.html"

def download(site):
    response = urllib2.urlopen("http://"+site)
    html_input = response.read()
    return html_input

def derealitivise(site, html_input):
    active_html = lxml.html.fromstring(html_input)
    for element in tags_to_derealitivise:
        for tag in active_html.xpath(str(element+"[@"+"src"+"]")):
            tag.attrib["src"] = urljoin("http://"+site, tag.attrib.get("src"))
        for tag in active_html.xpath(str(element+"[@"+"href"+"]")):
            tag.attrib["href"] = urljoin("http://"+site, tag.attrib.get("href"))
    return lxml.html.tostring(active_html)

active_html = ""
tags_to_derealitivise = ("//img", "//a", "//link", "//embed", "//audio", "//video", "//script")

print "downloading..."
active_html = download(site)
active_html = derealitivise(site, active_html)

print "writing file..."
output_file = open(output_filename, "w")
output_file.write(active_html)
output_file.close()
Furthermore, I could make the code more thorough by checking all of the elements...
It would look something like this, but I do not know the exact way to iterate through all of the elements. This is a separate problem, and I will most likely figure it out by the time anyone responds:
def derealitivise(site, html_input):
    active_html = lxml.html.fromstring(html_input)
    for element in active_html.xpath:
        for tag in active_html.xpath(str(element+"[@"+"src"+"]")):
            tag.attrib["src"] = urljoin("http://"+site, tag.attrib.get("src"))
        for tag in active_html.xpath(str(element+"[@"+"href"+"]")):
            tag.attrib["href"] = urljoin("http://"+site, tag.attrib.get("href"))
    return lxml.html.tostring(active_html)
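For what it's worth, one possible way to cover every element, as a sketch not tested against the site in question, is to XPath on the attributes directly rather than looping over tag names:

import lxml.html
from urlparse import urljoin  # Python 2, matching the rest of the script

def derealitivise(site, html_input):
    active_html = lxml.html.fromstring(html_input)
    # "//*[@src]" matches any element carrying a src attribute, whatever its tag.
    for tag in active_html.xpath("//*[@src]"):
        tag.attrib["src"] = urljoin("http://" + site, tag.attrib["src"])
    for tag in active_html.xpath("//*[@href]"):
        tag.attrib["href"] = urljoin("http://" + site, tag.attrib["href"])
    return lxml.html.tostring(active_html)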
Update
Thanks to Burhan Khalid's solution, which seemed too simple to be viable at first glance, I got it working. The code is so simple most of you will most likely not require it, but I will post it anyway in case it helps:
import lxml
import sys
from lxml import etree
from StringIO import StringIO
from lxml.html import fromstring, tostring
import urllib2
from urlparse import urljoin

site = "http://www.script-tutorials.com/advance-php-login-system-tutorial/"  # include the scheme so urlopen and <base href> both work
output_filename = "output.html"

def download(site):
    response = urllib2.urlopen(site)
    html_input = response.read()
    return html_input

def derealitivise(site, html_input):
    active_html = html_input.replace('<head>', '<head> <base href='+site+'>')
    return active_html

active_html = ""

print "downloading..."
active_html = download(site)
active_html = derealitivise(site, active_html)

print "writing file..."
output_file = open(output_filename, "w")
output_file.write(active_html)
output_file.close()
Despite all of this, and its great simplicity, the .js file used by the website listed in the script still will not load correctly. Does anyone know if this is possible to fix?
while I am trying to make only the html file offline, while using the linked resources over the web.
This is a two step process:
Copy the HTML file and save it to your local directory.
Add a BASE tag in the HEAD section, and point the href attribute of it to the absolute URL.
Since you want to learn how to do it yourself, I will leave it at that.
@Burhan has an easy answer using the <base href="..."> tag in the <head>, and it works, as you have found out. I ran the script you posted, and the page downloaded fine. As you noticed, some of the JavaScript now fails. This can be for multiple reasons.
If you are opening the HTML file as a local file:/// URL, the page may not work. Many browsers heavily sandbox local HTML files, not allowing them to perform network requests or examine local files.
The page may perform XMLHttpRequests or other network operations to the remote site, which will be denied for cross-domain scripting reasons. Looking in the JS console, I see the following error for the script you posted:
XMLHttpRequest cannot load http://www.script-tutorials.com/menus.php?give=menu. Origin http://localhost:8000 is not allowed by Access-Control-Allow-Origin.
Unfortunately, if you do not have control of www.script-tutorials.com, there is no easy way around this.