Very fast webpage scraping (Python)

So I'm trying to work through a list of URLs (potentially in the hundreds) and filter out every article whose body is less than X number of words (ARTICLE_LENGTH). But when I run my application, it takes an unreasonable amount of time, so much so that my hosting service times out. I'm currently using Goose (https://github.com/grangier/python-goose) with the following filter function:
def is_news_and_data(url):
    """A function that returns a list of the form
    [True, title, meta_description]
    or
    [False]
    """
    result = []
    if url == None:
        return False
    try:
        article = g.extract(url=url)
        if len(article.cleaned_text.split()) < ARTICLE_LENGTH:
            result.append(False)
        else:
            title = article.title
            meta_description = article.meta_description
            result.extend([True, title, meta_description])
    except:
        result.append(False)
    return result
This is called in the context of the following (don't mind the debug prints and messiness; tweepy is my Twitter API wrapper):
def get_links(auth):
    """Returns a list of t.co links from a list of given tweets"""
    api = tweepy.API(auth)
    page_list = []
    tweets_list = []
    links_list = []
    news_list = []
    regex = re.compile('http://t.co/.[a-zA-Z0-9]*')
    for page in tweepy.Cursor(api.home_timeline, count=20).pages(1):
        page_list.append(page)
    for page in page_list:
        for status in page:
            tweet = status.text.encode('utf-8','ignore')
            tweets_list.append(tweet)
    for tweet in tweets_list:
        links = regex.findall(tweet)
        links_list.extend(links)
    #print 'The length of the links list is: ' + str(len(links_list))
    for link in links_list:
        news_and_data = is_news_and_data(link)
        if True in news_and_data:
            news_and_data.append(link)
            #[True, title, meta_description, link]
            news_list.append(news_and_data[1:])
    print 'The length of the news list is: ' + str(len(news_list))
Can anyone recommend a perhaps faster method?

This code is probably causing your slow performance:
len(article.cleaned_text.split())
This is performing a lot of work, most of which is discarded. I would profile your code to confirm this is the culprit; if so, replace it with something that just counts spaces, like so:
article.cleaned_text.count(' ')
That won't give you exactly the same result as your original code, but it will be very close. To get closer you could use a regular expression to count words, but it won't be quite as fast.
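For reference, a minimal sketch of that regular-expression approach: counting runs of non-whitespace gives the same number as split() without building the intermediate list of word strings, though it still won't beat the plain count(' ') shortcut.

import re

_WORD_RE = re.compile(r'\S+')

def word_count(text):
    # count whitespace-separated tokens without materialising a list of words
    return sum(1 for _ in _WORD_RE.finditer(text))

This could stand in for len(article.cleaned_text.split()) in the filter if you want an exact word count rather than a space count.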

I'm not saying this is the absolute best you can do, but it will be faster. You'll have to redo some of your code to fit this new function.
It will at least give you fewer function calls.
You'll have to pass in the whole URL list.
def is_news_in_data(listings):
    new_listings = {}
    is_news = {}
    for url in listings:
        is_news[url] = 0
        article = g.extract(url=url).cleaned_text
        tmp_listing = ''
        for s in article:  # iterates over the characters of the cleaned text
            is_news[url] += 1
            tmp_listing += s
            if is_news[url] > ARTICLE_LENGTH:
                new_listings[url] = tmp_listing
                del is_news[url]
                break  # threshold reached, stop accumulating for this url
    return new_listings
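A minimal usage sketch, assuming links_list is the list built in get_links and g is the same Goose extractor used above:

filtered = is_news_in_data(links_list)
for url, text in filtered.items():
    print 'Kept %s (%d characters of cleaned text)' % (url, len(text))

Note that this variant counts characters of cleaned text rather than words, so ARTICLE_LENGTH would need to be scaled accordingly.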

Related

How to find the total number of pages on a website with BeautifulSoup?

Context: I'm working on pagination of this website: https://skoodos.com/schools-in-uttarakhand. When I inspected it, this website has no proper page count visible, only a next button that appends ?page=2 to the URL. Also, searching for page-link gave me the number 20 at the end, so I assumed the total number of pages was 20; upon checking manually, I learnt that only 11 pages exist.
After much trial and error, I finally decided to go with just indexing from 0 until 12 (12 being excluded by Python's range, however).
What I want to know is: how would you go about figuring out the number of pages on a website that shows nothing but previous and next buttons, and how can I optimize for that?
Here's my solution to pagination. Any way to optimize this other than manually finding the number of pages?
from myWork.commons import url_parser, write

def data_fetch(url):
    school_info = []
    for page_number in range(0, 4):
        next_web_page = url + f'?page={page_number}'
        soup = url_parser(next_web_page)
        search_results = soup.find('section', {'id': 'search-results'}).find(class_='container').find(class_='row')
        # rest of the code
    for page_number in range(4, 12):
        next_web_page = url + f'?page={page_number}'
        soup = url_parser(next_web_page)
        search_results = soup.find('section', {'id': 'search-results'}).find(class_='container').find(class_='row')
        # rest of the code

def main():
    url = "https://skoodos.com/schools-in-uttarakhand"
    data_fetch(url)

if __name__ == "__main__":
    main()
Each of your pages (except the last one) will have an element like this:
<a class="page-link"
   href="https://skoodos.com/schools-in-uttarakhand?page=2"
   rel="next">Next »</a>
E.g. you can extract the link as follows (here for the first page):
link = soup.find('a', class_='page-link', href=True, rel='next')
print(link['href'])
https://skoodos.com/schools-in-uttarakhand?page=2
So, you could make your function recursive. E.g. use something like this:
import requests
from bs4 import BeautifulSoup

def data_fetch(url, results = list()):
    resp = requests.get(url)
    soup = BeautifulSoup(resp.content, 'lxml')
    search_results = soup.find('section', {'id': 'search-results'})\
        .find(class_='container').find(class_='row')
    results.append(search_results)
    link = soup.find('a', class_='page-link', href=True, rel='next')
    # link will be `None` for last page (i.e. `page=11`)
    if link:
        # just adding some prints to show progress of iteration
        if not 'page' in url:
            print('getting page: 1', end=', ')
        url = link['href']
        # subsequent page nums being retrieved
        print(f'{url.rsplit("=", maxsplit=1)[1]}', end=', ')
        # recursive call
        return data_fetch(url, results)
    else:
        # `page=11` with no link, we're done
        print('done')
        return results

url = 'https://skoodos.com/schools-in-uttarakhand'
data = data_fetch(url)
So, a call to this function will print progress as:
getting page: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, done
And you'll end up with data containing 11 bs4.element.Tag elements, one for each page.
print(len(data))
11
print(set([type(d) for d in data]))
{<class 'bs4.element.Tag'>}
Good luck with extracting the required info; the site is very slow, and the HTML is particularly sloppy and inconsistent. (e.g. you're right to note there is a page-link elem, which suggests there are 20 pages. But its visibility is set to hidden, so apparently this is just a piece of deprecated/unused code.)
There's a bit at the top that says "Showing the 217 results as per selected criteria". You can write code to extract the number from that, then count the number of results per page and divide to get the expected number of pages (don't forget to round up).
If you want to double check, add more code to go to the calculated last page and
if there's no such page, keep decrementing the total and checking until you hit a page that exists
if there is such a page, but it has an active/enabled "Next" button, keep going to Next page until reaching the last (basically as you are now)
(Remember that the two listed above are contingencies and wouldn't be executed in an ideal scenario.)
So, just to find the number of pages, you could do something like:
import requests
from bs4 import BeautifulSoup
import math

def soupFromUrl(scrapeUrl):
    req = requests.get(scrapeUrl)
    if req.status_code == 200:
        return BeautifulSoup(req.text, 'html.parser')
    else:
        raise Exception(f'{req.reason} - failed to scrape {scrapeUrl}')

def getPageTotal(url):
    soup = soupFromUrl(url)
    #totalResults = int(soup.find('label').get_text().split('(')[-1].split(')')[0])
    totalResults = int(soup.p.strong.get_text())  # both searches should work
    perPageResults = len(soup.select('.m-show'))  # probably always 20
    print(f'{perPageResults} of {totalResults} results per page')
    if not (perPageResults > 0 and totalResults > 0):
        return 0
    lastPageNum = math.ceil(totalResults / perPageResults)

    # Contingencies - will hopefully never be needed
    lpSoup = soupFromUrl(f'{url}?page={lastPageNum}')
    if lpSoup.select('.m-show'):  # page exists
        while lpSoup.select_one('a[rel="next"]'):
            nextLink = lpSoup.select_one('a[rel="next"]')['href']
            lastPageNum = int(nextLink.split('page=')[-1])
            lpSoup = soupFromUrl(nextLink)
    else:  # page does not exist
        while not (lpSoup.select('.m-show') or lastPageNum < 1):
            lastPageNum = lastPageNum - 1
            lpSoup = soupFromUrl(f'{url}?page={lastPageNum}')
    # end Contingencies section

    return lastPageNum
However, it looks like you only want the total number of pages in order to start the for-loop, and it's not even necessary to use a for-loop at all; a while-loop might be better:
def data_fetch(url):
    school_info = []
    nextUrl = url
    while nextUrl:
        soup = soupFromUrl(nextUrl)
        #GET YOUR DATA FROM PAGE
        nextHL = soup.select_one('a[rel="next"]')
        nextUrl = nextHL.get('href') if nextHL else None
    # code after fetching all pages' data
Although, you could still use a for-loop if you had a maximum page number in mind:
def data_fetch(url, maxPages):
    school_info = []
    for p in range(1, maxPages+1):
        soup = soupFromUrl(f'{url}?page={p}')
        if not soup.select('.m-show'):
            break
        #GET YOUR DATA FROM PAGE
    # code after fetching all pages' data [up to max]

Only items from first Beautiful Soup object are being added to my lists

I suspect this isn't very complicated, but I can't seem to figure it out. I'm using Selenium and Beautiful Soup to parse Petango.com. The data will be used to help a local shelter understand how they compare to other area shelters on different metrics, so the next step will be taking these data frames and doing some analysis.
I grab detail urls from a different module and import the list here.
My issue is that my lists are only showing the values from the HTML for the first dog. I was stepping through and noticed my len values differ between the soup iterations, so I realize my error is somewhere after that, but I can't figure out how to fix it.
Here is my code so far (running the whole process vs using a cached page)
from bs4 import BeautifulSoup
from selenium import webdriver
import pandas as pd
from Petango import pet_links

headings = []
values = []
ShelterInfo = []
ShelterInfoWebsite = []
ShelterInfoEmail = []
ShelterInfoPhone = []
ShelterInfoAddress = []
Breed = []
Age = []
Color = []
SpayedNeutered = []
Size = []
Declawed = []
AdoptionDate = []

# to access sites, change url list to pet_links (break out as needed) and change if false to true. false looks to the html file
url_list = (pet_links[4], pet_links[6], pet_links[8])
#url_list = ("Petango.html", "Petango.html", "Petango.html")

for link in url_list:
    page_source = None
    if True:
        #pet page = link should populate links from above, hard code link was for 1 detail page, =to hemtl was for cached site
        PetPage = link
        #PetPage = 'https://www.petango.com/Adopt/Dog-Terrier-American-Pit-Bull-45569732'
        #PetPage = Petango.html
        PetDriver = webdriver.Chrome(executable_path='/Users/paulcarson/Downloads/chromedriver')
        PetDriver.implicitly_wait(30)
        PetDriver.get(link)
        page_source = PetDriver.page_source
        PetDriver.close()
    else:
        with open("Petango.html",'r') as f:
            page_source = f.read()

    PetSoup = BeautifulSoup(page_source, 'html.parser')
    print(len(PetSoup.text))

    #get the details about the shelter and add to lists
    ShelterInfo.append(PetSoup.find('div', class_ = "DNNModuleContent ModPethealthPetangoDnnModulesShelterShortInfoC").find('h4').text)
    ShelterInfoParagraphs = PetSoup.find('div', class_ = "DNNModuleContent ModPethealthPetangoDnnModulesShelterShortInfoC").find_all('p')
    First_Paragraph = ShelterInfoParagraphs[0]
    if "Website" not in First_Paragraph.text:
        raise AssertionError("first paragraph is not about site")
    ShelterInfoWebsite.append(First_Paragraph.find('a').text)
    Second_Paragraph = ShelterInfoParagraphs[1]
    ShelterInfoEmail.append(Second_Paragraph.find('a')['href'])
    Third_Paragraph = ShelterInfoParagraphs[2]
    ShelterInfoPhone.append(Third_Paragraph.find('span').text)
    Fourth_Paragraph = ShelterInfoParagraphs[3]
    ShelterInfoAddress.append(Fourth_Paragraph.find('span').text)

    #get the details about the pet
    ul = PetSoup.find('div', class_='group details-list').ul  # Gets the ul tag
    li_items = ul.find_all('li')  # Finds all the li tags within the ul tag
    for li in li_items:
        heading = li.strong.text
        headings.append(heading)
        value = li.span.text
        if value:
            values.append(value)
        else:
            values.append(None)

    Breed.append(values[0])
    Age.append(values[1])
    print(Age)
    Color.append(values[2])
    SpayedNeutered.append(values[3])
    Size.append(values[4])
    Declawed.append(values[5])
    AdoptionDate.append(values[6])

ShelterDF = pd.DataFrame(
    {
        'Shelter': ShelterInfo,
        'Shelter Website': ShelterInfoWebsite,
        'Shelter Email': ShelterInfoEmail,
        'Shelter Phone Number': ShelterInfoPhone,
        'Shelter Address': ShelterInfoAddress
    })

PetDF = pd.DataFrame(
    {'Breed': Breed,
     'Age': Age,
     'Color': Color,
     'Spayed/Neutered': SpayedNeutered,
     'Size': Size,
     'Declawed': Declawed,
     'Adoption Date': AdoptionDate
     })

print(PetDF)
print(ShelterDF)
Output from printing the length and the Age value as the loop progresses:
12783
['6y 7m']
10687
['6y 7m', '6y 7m']
10705
['6y 7m', '6y 7m', '6y 7m']
Could someone please point me in the right direction?
Thank you for your help!
Paul
You need to change the find() method to find_all() in BeautifulSoup so it locates all the elements.
values is global, so you only ever append the first pet's value from this list to Age:
Age.append(values[1])
The same problem applies to your other global lists (static index, whether 1 or 2, etc.).
You need a way to track the appropriate index to use, perhaps through a counter, or other logic to ensure the current value is added, e.g. for the current Age, is it the second li in the loop? Or just append PetSoup.select_one("[data-bind='text: age']").text.
It looks like each item of interest, e.g. colour, spayed, carries a data-bind attribute, so you can use those with the appropriate attribute value to select each value and avoid looping over li elements; see the sketch below.
e.g. current_colour = PetSoup.select_one("[data-bind='text: color']").text
It's best to assign the result to a variable and test that it is not None before accessing .text.
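A minimal sketch of that per-page approach (the age and color selectors come from above; the breed selector is an assumption and would need checking against the actual Petango markup):

def data_bind_text(soup, name):
    # returns the element's text, or None if the selector matches nothing
    el = soup.select_one("[data-bind='text: %s']" % name)
    return el.text.strip() if el else None

# inside the existing `for link in url_list:` loop, after PetSoup is built:
Age.append(data_bind_text(PetSoup, 'age'))
Color.append(data_bind_text(PetSoup, 'color'))
Breed.append(data_bind_text(PetSoup, 'breed'))  # hypothetical attribute name

Appending inside the loop like this keeps each list in step with the pages visited, one entry per pet.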

Python - print 2nd argument

I am new to Python and I've written this test code for practice, in order to find and print email addresses from various web pages:
def FindEmails(*urls):
    for i in urls:
        totalemails = []
        req = urllib2.Request(i)
        aResp = urllib2.urlopen(req)
        webpage = aResp.read()
        patt1 = '(\w+[-\w]\w+#\w+[.]\w+[.\w+]\w+)'
        patt2 = '(\w+[\w]\w+#\w+[.]\w+)'
        regexlist = [patt1,patt2]
        for regex in regexlist:
            match = re.search(regex,webpage)
            if match:
                totalemails.append(match.group())
                break
        #return totalemails
    print "Mails from webpages are: %s " % totalemails

if __name__== "__main__":
    FindEmails('https://www.urltest1.com', 'https://www.urltest2.com')
When I run it, it prints only one argument.
My goal is to print the emails acquired from webpages and store them in a list, separated by commas.
Thanks in advance.
The problem here is the line totalemails = []. Here, you are re-instantiating the variable totalemails to have zero entries on every iteration. So, in each iteration, it only ever holds one entry, and after the last iteration you'll end up with just the last entry in the list. To get a list of all emails, you need to move the variable outside of the for loop.
Example:
def FindEmails(*urls):
    totalemails = []
    for i in urls:
        req = urllib2.Request(i)
        ....
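For completeness, a sketch of the full function with only that change applied (keeping the question's urllib2 and regex patterns as-is) and joining the results with commas, as asked:

import re
import urllib2

def FindEmails(*urls):
    totalemails = []  # created once, so matches accumulate across all URLs
    for i in urls:
        req = urllib2.Request(i)
        aResp = urllib2.urlopen(req)
        webpage = aResp.read()
        patt1 = '(\w+[-\w]\w+#\w+[.]\w+[.\w+]\w+)'  # patterns copied from the question
        patt2 = '(\w+[\w]\w+#\w+[.]\w+)'
        for regex in [patt1, patt2]:
            match = re.search(regex, webpage)
            if match:
                totalemails.append(match.group())
                break
    print "Mails from webpages are: %s" % ', '.join(totalemails)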

Link Scraping Program Redundancy?

I am attempting to create a small script that takes a given website along with a keyword, follows all the links a certain number of times (only links on the website's domain), and finally searches all the found links for the keyword, returning any successful matches. Ultimately, its goal is this: if you remember a website where you saw something and know a good keyword that the page contained, this program might be able to help find the link to the lost page. Now my bug: upon looping through all these pages, extracting their URLs, and creating a list of them, it seems to end up redundantly going over and removing the same links from the list. I did add a safeguard for this, but it doesn't seem to be working as expected. I feel like some URLs are mistakenly being duplicated in the list and end up being checked an infinite number of times.
Here's my full code (sorry about the length); the problem area seems to be at the very end, in the for loop:
import bs4, requests, sys

def getDomain(url):
    if "www" in url:
        domain = url[url.find('.')+1:url.rfind('.')]
    elif "http" in url:
        domain = url[url.find("//")+2:url.rfind('.')]
    else:
        domain = url[:url.rfind(".")]
    return domain

def findHref(html):
    '''Will find the link in a given BeautifulSoup match object.'''
    link_start = html.find('href="')+6
    link_end = html.find('"', link_start)
    return html[link_start:link_end]

def pageExists(url):
    '''Returns true if url returns a 200 response and doesn't redirect to a dns search.
    url must be a requests.get() object.'''
    response = requests.get(url)
    try:
        response.raise_for_status()
        if response.text.find("dnsrsearch") >= 0:
            print response.text.find("dnsrsearch")
            print "Website does not exist"
            return False
    except Exception as e:
        print "Bad response:",e
        return False
    return True

def extractURLs(url):
    '''Returns list of urls in url that belong to same domain.'''
    response = requests.get(url)
    soup = bs4.BeautifulSoup(response.text)
    matches = soup.find_all('a')
    urls = []
    for index, link in enumerate(matches):
        match_url = findHref(str(link).lower())
        if "." in match_url:
            if not domain in match_url:
                print "Removing",match_url
            else:
                urls.append(match_url)
        else:
            urls.append(url + match_url)
    return urls

def searchURL(url):
    '''Search url for keyword.'''
    pass

print "Enter homepage:(no http://)"
homepage = "http://" + raw_input("> ")
homepage_response = requests.get(homepage)
if not pageExists(homepage):
    sys.exit()
domain = getDomain(homepage)
print "Enter keyword:"
#keyword = raw_input("> ")
print "Enter maximum branches:"
max_branches = int(raw_input("> "))
links = [homepage]
for n in range(max_branches):
    for link in links:
        results = extractURLs(link)
        for result in results:
            if result not in links:
                links.append(result)
Partial output (about .000000000001%):
Removing /store/apps/details?id=com.handmark.sportcaster
Removing /store/apps/details?id=com.handmark.sportcaster
Removing /store/apps/details?id=com.mobisystems.office
Removing /store/apps/details?id=com.mobisystems.office
Removing /store/apps/details?id=com.mobisystems.office
Removing /store/apps/details?id=com.mobisystems.office
Removing /store/apps/details?id=com.mobisystems.office
Removing /store/apps/details?id=com.mobisystems.office
Removing /store/apps/details?id=com.joelapenna.foursquared
Removing /store/apps/details?id=com.joelapenna.foursquared
Removing /store/apps/details?id=com.joelapenna.foursquared
Removing /store/apps/details?id=com.joelapenna.foursquared
Removing /store/apps/details?id=com.joelapenna.foursquared
Removing /store/apps/details?id=com.joelapenna.foursquared
Removing /store/apps/details?id=com.dashlabs.dash.android
Removing /store/apps/details?id=com.dashlabs.dash.android
Removing /store/apps/details?id=com.dashlabs.dash.android
Removing /store/apps/details?id=com.dashlabs.dash.android
Removing /store/apps/details?id=com.dashlabs.dash.android
Removing /store/apps/details?id=com.dashlabs.dash.android
Removing /store/apps/details?id=com.eweware.heard
Removing /store/apps/details?id=com.eweware.heard
Removing /store/apps/details?id=com.eweware.heard
You are looping over the same links multiple times with your outer loop:
for n in range(max_branches):
    for link in links:
        results = extractURLs(link)
I would also be careful appending to a list you are iterating over, or you could well end up with an infinite loop; one way around both issues is sketched below.
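A minimal sketch of that idea, using a visited set and a separate frontier per branch depth (the names here are illustrative, building on homepage, max_branches, and extractURLs from the question):

visited = set([homepage])
frontier = [homepage]
for n in range(max_branches):
    next_frontier = []
    for link in frontier:
        for result in extractURLs(link):
            if result not in visited:
                visited.add(result)          # never revisit or re-queue a link
                next_frontier.append(result)
    frontier = next_frontier                 # only newly found links get expanded next round

Because each pass only iterates over the previous round's frontier, nothing is appended to a list while it is being iterated.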
Okay, I found a solution. All I did was change the links variable to a dictionary, with the value 0 representing a link not yet searched and 1 representing a searched link. Then I iterated through a copy of the keys in order to preserve the branches and not let it wildly follow every link that is added during the loop. And finally, if a link is found that is not already in links, it is added and set to 0 to be searched.
links = {homepage: 0}
for n in range(max_branches):
    for link in links.keys()[:]:
        if not links[link]:
            results = extractURLs(link)
            links[link] = 1  # mark this link as searched
            for result in results:
                if result not in links:
                    links[result] = 0

Recursive function gives no output

I'm scraping all the URLs of my domain with a recursive function.
But it outputs nothing, without any error.
#usr/bin/python
from bs4 import BeautifulSoup
import requests
import tldextract

def scrape(url):
    for links in url:
        main_domain = tldextract.extract(links)
        r = requests.get(links)
        data = r.text
        soup = BeautifulSoup(data)
        for href in soup.find_all('a'):
            href = href.get('href')
            if not href:
                continue
            link_domain = tldextract.extract(href)
            if link_domain.domain == main_domain.domain:
                problem.append(href)
            elif not href == '#' and link_domain.tld == '':
                new = 'http://www.' + main_domain.domain + '.' + main_domain.tld + '/' + href
                problem.append(new)
    return len(problem)
    return scrape(problem)

problem = ["http://xyzdomain.com"]
print(scrape(problem))
When I create a new list, it works, but I don't want to make a list every time for every loop.
You need to structure your code so that it meets the pattern for recursion, as your current code doesn't. You also should not give variables the same name as libraries, e.g. href = href.get(), because this will usually stop the library working once the name refers to the variable instead. And as it currently stands, your code will only ever return the len(), as that return is unconditionally reached before return scrape(problem). The pattern is:
def Recursive(Factorable_problem):
    if Factorable_problem is Simplest_Case:
        return AnswerToSimplestCase
    else:
        return Rule_For_Generating_From_Simpler_Case(Recursive(Simpler_Case))
for example:
def Factorial(n):
    """ Recursively Generate Factorials """
    if n < 2:
        return 1
    else:
        return n * Factorial(n-1)
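Applied to the question, a rough sketch of what a recursive version might look like (an illustration of the pattern rather than a drop-in fix; the empty to-visit list is the simplest case, a visited set stops the same page being fetched twice, and note the recursion-depth caveat raised in the next answer):

import requests
import tldextract
from bs4 import BeautifulSoup

def scrape(to_visit, visited=None):
    visited = visited if visited is not None else set()
    if not to_visit:  # simplest case: nothing left to crawl
        return visited
    url = to_visit.pop()
    visited.add(url)
    main_domain = tldextract.extract(url)
    soup = BeautifulSoup(requests.get(url).text, 'html.parser')
    for a in soup.find_all('a'):
        link = a.get('href')
        if not link:
            continue
        same_domain = tldextract.extract(link).domain == main_domain.domain
        if same_domain and link not in visited and link not in to_visit:
            to_visit.append(link)
    return scrape(to_visit, visited)  # recurse on the remaining, smaller problem

print(scrape(["http://xyzdomain.com"]))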
Hello, I've made a non-recursive version of this that appears to get all the links on the same domain.
I've tested the code below using the problem list included in the code. Once I'd solved the problems with the recursive version, the next problem was hitting the recursion depth limit, so I rewrote it to run in an iterative fashion; the code and result are below:
from bs4 import BeautifulSoup
import requests
import tldextract

def print_domain_info(d):
    print "Main Domain:{0} \nSub Domain:{1} \nSuffix:{2}".format(d.domain,d.subdomain,d.suffix)

SEARCHED_URLS = []
problem = [ "http://Noelkd.neocities.org/", "http://youpi.neocities.org/"]

while problem:
    # Get a link from the stack of links
    link = problem.pop()
    # Check we haven't been to this address before
    if link in SEARCHED_URLS:
        continue
    # We don't want to come back here again after this point
    SEARCHED_URLS.append(link)
    # Try and get the website
    try:
        req = requests.get(link)
    except:
        # If its not working i don't care for it
        print "borked website found: {0}".format(link)
        continue
    # Now we get to this point worth printing something
    print "Trying to parse:{0}".format(link)
    print "Status Code:{0} Thats: {1}".format(req.status_code, "A-OK" if req.status_code == 200 else "SOMTHINGS UP" )
    # Get the domain info
    dInfo = tldextract.extract(link)
    print_domain_info(dInfo)
    # I like utf-8
    data = req.text.encode("utf-8")
    print "Lenght Of Data Retrived:{0}".format(len(data))  # More info
    soup = BeautifulSoup(data)  # This was here before so i left it.
    print "Found {0} link{1}".format(len(soup.find_all('a')),"s" if len(soup.find_all('a')) > 1 else "")
    FOUND_THIS_ITERATION = []  # Getting the same links over and over was boring
    found_links = [x for x in soup.find_all('a') if x.get('href') not in SEARCHED_URLS]  # Find me all the links i don't got
    for href in found_links:
        href = href.get('href')  # You wrote this seems to work well
        if not href:
            continue
        link_domain = tldextract.extract(href)
        if link_domain.domain == dInfo.domain:  # JUST FINDING STUFF ON SAME DOMAIN RIGHT?!
            if href not in FOUND_THIS_ITERATION:  # I'ma check you out next time
                print "Check out this link: {0}".format(href)
                print_domain_info(link_domain)
                FOUND_THIS_ITERATION.append(href)
                problem.append(href)
            else:  # I got you already
                print "DUPE LINK!"
        else:
            print "Not on same domain moving on"
    # Count down
    print "We have {0} more sites to search".format(len(problem))
    if problem:
        continue
    else:
        print "Its been fun"
        print "Lets see the URLS we've visited:"
        for url in SEARCHED_URLS:
            print url
Which prints, after a lot of other logging, loads of neocities websites!
What's happening is that the script pops a value off the list of websites yet to visit, then gets all the links on that page which are on the same domain. If those links point to pages we haven't visited, they are added to the list of links to be visited. After that, we pop the next page and do the same thing again until there are no pages left to visit.
I think this is what you're looking for; get back to us in the comments if this doesn't work the way you want, or if anyone can improve it, please leave a comment.
