Web scraping from a script - Python

I'm trying to extract the language proportion spoken at companies, using python's BeautifulSoup.
Yet, the information seems to come from a script, not from HTML, and I'm having some trouble.
For instance, from the following page, when I try
webpage ="https://www.zippia.com/amazon-com-careers-487/"
page = requests.get(webpage)
soup = BeautifulSoup(page.content, 'lxml')
for links in soup.find_all('div', {'class':'companyEducationDegrees'}):
raw_text = links.get_text()
lines = raw_text.split('\n')
print(lines)
print('-------------------')
I don't get any result, while the ideal result should be Spanish 61.1%, French 9.7%, etc.

As you already found out, the data is put into the page via JavaScript. However, you can still get it, because the entire dataset for the company is always loaded with the page. You can access this data via requests + BeautifulSoup + json (+ re):
import json
import re

import requests
from bs4 import BeautifulSoup

webpage = "https://www.zippia.com/amazon-com-careers-487/"
page = requests.get(webpage)
soup = BeautifulSoup(page.content, 'lxml')

for script in soup.find_all('script', {'type': 'text/javascript'}):
    if 'getCompanyInfo' in script.text:
        match = re.search("{[^\n]*}", script.text)
        data = json.loads(match.group())
        print(data["companyDiversity"]["languages"])
        # Only if you want the data written to a file in a readable format
        # (e.g. if you want to find the path to an entry)
        json.dump(data, open("test.json", "w"), indent=2)


How to webscrape old school website that uses frames

I am trying to web scrape a government site that uses a frameset.
Here is the URL - https://lakecounty.in.gov/departments/voters/election-results-c/2022GeneralElectionResults/index.htm
I've tried using splinter/selenium
url = "https://lakecounty.in.gov/departments/voters/election-results-c/2022GeneralElectionResults/index.htm"
browser.visit(url)
time.sleep(10)
full_xpath_frame = '/html/frameset/frameset/frame[2]'
tree = browser.find_by_xpath(full_xpath_frame)
for i in tree:
print(i.text)
It just returns an empty string.
I've tried using the requests library.
import requests
from lxml import html

url = "https://lakecounty.in.gov/departments/voters/election-results-c/2022GeneralElectionResults/index.htm"

# get response object
response = requests.get(url)

# get byte string
data = response.content
print(data)
And it returns this
b"<html>\r\n<head>\r\n<meta http-equiv='Content-Type'\r\ncontent='text/html; charset=iso-
8859-1'>\r\n<title>Lake_ County Election Results</title>\r\n</head>\r\n<FRAMESET rows='20%,
*'>\r\n<FRAME src='titlebar.htm' scrolling='no'>\r\n<FRAMESET cols='20%, *'>\r\n<FRAME
src='menu.htm'>\r\n<FRAME src='Lake_ElecSumm_all.htm' name='reports'>\r\n</FRAMESET>
\r\n</FRAMESET>\r\n<body>\r\n</body>\r\n</html>\r\n"
I've also tried using Beautiful Soup and it gave me the same thing. Is there another Python library I can use in order to get the data that's inside the second table?
Thank you for any feedback.
As mentioned, you could go for the frames and their src:
BeautifulSoup(r.text).select('frame')[1].get('src')
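For example, a minimal sketch (using the index page from the question) that resolves the chosen frame's relative src against the page URL and fetches it:
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

index_url = 'https://lakecounty.in.gov/departments/voters/election-results-c/2022GeneralElectionResults/index.htm'
r = requests.get(index_url)

# the second <frame> in document order is menu.htm on this page
frame_src = BeautifulSoup(r.text, 'html.parser').select('frame')[1].get('src')

# resolve the relative src against the index URL and request the frame's page
frame_url = urljoin(index_url, frame_src)
frame_soup = BeautifulSoup(requests.get(frame_url).text, 'html.parser')
print(frame_soup.title)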
or directly to the menu.htm:
import requests
from bs4 import BeautifulSoup

r = requests.get('https://lakecounty.in.gov/departments/voters/election-results-c/2022GeneralElectionResults/menu.htm')
link_list = ['https://lakecounty.in.gov/departments/voters/election-results-c/2022GeneralElectionResults'+a.get('href') for a in BeautifulSoup(r.text).select('a')]

for link in link_list[:1]:
    r = requests.get(link)
    soup = BeautifulSoup(r.text)
    ### ...scrape what is needed

scrape book body text from project gutenberg de

I am new to Python and I am looking for a way to extract, with Beautiful Soup, existing open-source books that are available on Gutenberg-DE, such as this one.
I need to use them for further analysis and text mining.
I tried this code, found in a tutorial, and it extracts metadata, but instead of the body content it gives me a list of the "pages" I need to scrape the text from.
import requests
from bs4 import BeautifulSoup
# Make a request
page = requests.get(
"https://www.projekt-gutenberg.org/keller/heinrich/")
soup = BeautifulSoup(page.content, 'html.parser')
# Extract title of page
page_title = soup.title
# Extract body of page
page_body = soup.body
# Extract head of page
page_head = soup.head
# print the result
print(page_title, page_head)
I suppose I could use that as a second step to extract it then? I am not sure how, though.
Ideally I would like to store them in a tabular way and be able to save them as CSV, preserving the metadata author, title, year, and chapter. Any ideas?
What happens?
First of all, you get a list of pages because you are not requesting the right URL. Change it to:
page = requests.get('https://www.projekt-gutenberg.org/keller/heinrich/hein101.html')
If you are looping over all the URLs, I recommend storing the content in a list of dicts and pushing it to CSV or pandas or ...
Example
import requests
from bs4 import BeautifulSoup

data = []

# Make a request
page = requests.get('https://www.projekt-gutenberg.org/keller/heinrich/hein101.html')
soup = BeautifulSoup(page.content, 'html.parser')

data.append({
    'title': soup.title,
    'chapter': soup.h2.get_text(),
    'text': ' '.join([p.get_text(strip=True) for p in soup.select('body p')[2:]])
})

data
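If you then want the tabular CSV output the question asks for, a minimal sketch (assuming the data list built above; the file name is just an example, and author/year would have to be added to each dict while scraping) could be:
import pandas as pd

# turn the list of dicts into a DataFrame and write it to disk
df = pd.DataFrame(data)
df.to_csv('heinrich.csv', index=False)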

How to get CDATA from HTML using Beautiful Soup

I am trying to get a value from a webpage. In the source code of the webpage, the data is in CDATA format and also comes from jQuery. I have managed to write the code below, which gets a large amount of text, where index 21 contains the information I need. However, this output is large and not in a format I understand. Within the output I need to isolate and output "redshift":"0.06" but don't know how. What is the best way to solve this?
import requests
from bs4 import BeautifulSoup
link = "https://wis-tns.weizmann.ac.il/object/2020aclx"
html = requests.get(link).text
soup = BeautifulSoup(html, "html.parser")
res = soup.findAll('b')
print(soup.find_all('script')[21])
It can be done using the current approach you have. However, I'd advise against it. There's a neater way to do it by observing that the redshift value is present in a few convenient places on the page itself.
The following approach should work for you. It looks for tables on the page with the class "atreps-results-table" -- of which there are two. We take the second such table and look for the table cell with the class "cell-redshift". Then, we just print out its text content.
from bs4 import BeautifulSoup
import requests
link = 'https://wis-tns.weizmann.ac.il/object/2020aclx'
html = requests.get(link).text
soup = BeautifulSoup(html, 'html.parser')
tab = soup.find_all('table', {'class': 'atreps-results-table'})[1]
redshift = tab.find('td', {'class': 'cell-redshift'})
print(redshift.text)
Try simply:
soup.select_one('div.field-redshift > div.value>b').text
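A minimal sketch around that selector (fetching the same page as in the question):
import requests
from bs4 import BeautifulSoup

html = requests.get('https://wis-tns.weizmann.ac.il/object/2020aclx').text
soup = BeautifulSoup(html, 'html.parser')

# the redshift value is rendered as a <b> tag inside div.field-redshift
print(soup.select_one('div.field-redshift > div.value > b').text)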
If you view the Page Source of the URL, you will find that there are two script elements that are having CDATA. But the script element in which you are interested has jQuery in it. So you have to select the script element based on this knowledge. After that, you need to do some cleaning to get rid of CDATA tags and jQuery. Then with the help of json library, convert JSON data to Python Dictionary.
import requests
from bs4 import BeautifulSoup
import json

page = requests.get('https://wis-tns.weizmann.ac.il/object/2020aclx')
htmlpage = BeautifulSoup(page.text, 'html.parser')
scriptelements = htmlpage.find_all('script')

for script in scriptelements:
    if 'CDATA' in script.text and 'jQuery' in script.text:
        scriptcontent = script.text.replace('<!--//--><![CDATA[//>', '').replace('<!--', '').replace('//--><!]]>', '').replace('jQuery.extend(Drupal.settings,', '').replace(');', '')
        break

jsondata = json.loads(scriptcontent)
print(jsondata['objectFlot']['plotMain1']['params']['redshift'])

How to scrape a multipage website with Python and export the data into a .csv file?

I would like to scrape the following website using Python and need to export the scraped data into a CSV file:
http://www.swisswine.ch/en/producer?search=&&
This website consists of 154 pages for the relevant search. I need to call every page and scrape its data, but my script doesn't move on to the next pages; it only scrapes data from one page.
Here I set the condition i < 153, but the script ran only for the 154th page and gave me 10 entries. I need data from the 1st to the 154th page.
How can I scrape the data from all pages in one run of the script, and how can I export the data as a CSV file?
My script is as follows:
import csv
import requests
from bs4 import BeautifulSoup

i = 0
while i < 153:
    url = ("http://www.swisswine.ch/en/producer?search=&&&page=" + str(i))
    r = requests.get(url)
    i=+1

r.content
soup = BeautifulSoup(r.content)
print(soup.prettify())
g_data = soup.find_all("ul", {"class": "contact-information"})
for item in g_data:
    print(item.text)
You should put your HTML parsing code under the loop as well. And you are not incrementing the i variable correctly (thanks @MattDMo):
import csv
import requests
from bs4 import BeautifulSoup

i = 0
while i < 153:
    url = ("http://www.swisswine.ch/en/producer?search=&&&page=" + str(i))
    r = requests.get(url)
    i += 1

    soup = BeautifulSoup(r.content)
    print(soup.prettify())

    g_data = soup.find_all("ul", {"class": "contact-information"})
    for item in g_data:
        print(item.text)
I would also improve the following:
use requests.Session() to maintain a web-scraping session, which will also bring a performance boost (a short sketch follows below):
if you're making several requests to the same host, the underlying TCP connection will be reused, which can result in a significant performance increase
be explicit about an underlying parser for BeautifulSoup:
soup = BeautifulSoup(r.content, "html.parser") # or "lxml", or "html5lib"
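A minimal sketch of the Session-based variant, keeping the same URL pattern and parsing as above (page numbers 0 through 153 are assumed, matching the loop in the question):
import requests
from bs4 import BeautifulSoup

with requests.Session() as session:
    for i in range(154):  # pages 0..153
        url = "http://www.swisswine.ch/en/producer?search=&&&page=" + str(i)
        r = session.get(url)  # the session reuses the underlying TCP connection
        soup = BeautifulSoup(r.content, "html.parser")
        for item in soup.find_all("ul", {"class": "contact-information"}):
            print(item.text)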

How to go through a list of urls to retrieve page data - Python

In a .py file, I have a variable that's storing a list of urls. How do I properly build a loop to retrieve the code from each url, so that I can extract specific data items from each page?
This is what I've tried so far:
import requests
import re
from bs4 import BeautifulSoup
import csv

# Read csv
csvfile = open("gymsfinal.csv")
csvfilelist = csvfile.read()
print(csvfilelist)

# Get data from each url
def get_page_data():
    for page_data in csvfilelist.splitlines():
        r = requests.get(page_data.strip())
        soup = BeautifulSoup(r.text, 'html.parser')
        return soup

pages = get_page_data()
print(pages)
By not using the csv module, you are reading the gymsfinal.csv file as a plain text file. Read through the documentation on reading/writing CSV files here: CSV File Reading and Writing.
Also, you will only get the first page's soup content from your current code, because the get_page_data() function returns after creating the first soup. For your current code, you can yield from the function like:
def get_page_data():
    for page_data in csvfilelist.splitlines():
        r = requests.get(page_data.strip())
        soup = BeautifulSoup(r.text, 'html.parser')
        yield soup

pages = get_page_data()

# iterate over the generator
for page in pages:
    print(page)
Also close the file you just opened.
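Putting both suggestions together, a minimal sketch (assuming gymsfinal.csv holds one URL per row in its first column; the helper below is only illustrative) could look like this:
import csv

import requests
from bs4 import BeautifulSoup

def get_page_data(filename):
    # the with-statement takes care of closing the file
    with open(filename) as csvfile:
        for row in csv.reader(csvfile):
            r = requests.get(row[0].strip())
            yield BeautifulSoup(r.text, 'html.parser')

for page in get_page_data("gymsfinal.csv"):
    print(page.title)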
