How to scrape the content on different tabs from this website? - python

I'm trying to scrape the synonyms of words from Thesaurus.com (using bs4 and requests). An example page: https://www.thesaurus.com/browse/put?s=t
The problem I'm running into is that there are three different tabs for different meanings of the word, and each one has a different list of synonyms. I want to scrape them all, but I'm not sure how to access the synonyms listed in the non-active tabs. When I print the soup'd HTML and Ctrl+F through it, the synonyms from the non-primary tabs don't appear in the HTML at all, which suggests I can't simply scrape the page once. What's the easiest way to get these other synonyms?
EDIT:
As requested by a comment, here is my code to scrape a word currently. This code works perfectly, as I said; I just can't figure out how to scrape from the other tabs on the page.
import requests
from bs4 import BeautifulSoup

baseURL = 'https://www.thesaurus.com/browse/'  # defined elsewhere in my script
results = {}

def scrape_word(word):
    url = baseURL + word
    response = requests.get(url, timeout=5)
    if response.status_code == 404:
        results[word] = []
        return
    soup = BeautifulSoup(response.content, 'html.parser')
    # Look only at the main content div or else you'll get other undesired tags
    soup = soup.find('div', attrs={'class': 'css-1kc5m8x e1qo4u830'})
    # The below variable is the class used by Thesaurus.com for synonyms
    word_class = 'css-133coio etbu2a32'
    # Structure of site is spans hold links which have text of the word
    spanList = soup.find_all('span', attrs={'class': word_class})
    synList = [span.find('a').text for span in spanList]
    results[word] = synList
EDIT2: A comment by cddt has given me an alternative route to the data I want, but I'm still curious how I could have solved the problem without that workaround.
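For anyone curious, one way to do it without the comment's workaround is to drive a real browser so the JavaScript that renders the other tabs actually runs, click each tab, and scrape the now-active list. Below is a minimal Selenium sketch (explicit waits omitted); the tab selector ([role="tablist"] button) and the reuse of the synonym span class from the code above are assumptions that would need checking against the live page.

from selenium import webdriver
from selenium.webdriver.common.by import By
from bs4 import BeautifulSoup

def scrape_all_tabs(word):
    driver = webdriver.Chrome()
    try:
        driver.get('https://www.thesaurus.com/browse/' + word)
        all_tabs = {}
        # Assumption: each meaning tab is a button inside an element with role="tablist"
        tabs = driver.find_elements(By.CSS_SELECTOR, '[role="tablist"] button')
        for index, tab in enumerate(tabs):
            tab.click()  # activate the tab so its synonyms get rendered
            soup = BeautifulSoup(driver.page_source, 'html.parser')
            # Same synonym span class as in the code above (may change over time)
            spans = soup.find_all('span', attrs={'class': 'css-133coio etbu2a32'})
            all_tabs[index] = [span.find('a').text for span in spans if span.find('a')]
        return all_tabs
    finally:
        driver.quit()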

Related

Matching a specific piece of text in a title using Beautiful Soup

Basically, I want to find all links that contain certain key terms. In my case, the titles of the links I want come in this form: abc... (common text), dce... (common text), and so on. I want to take all of the links containing "(common text)" and put them in a list. I have the code working and I understand how to find all links. However, I converted the links to strings to search for "(common text)". I know this isn't good practice, and I'm not sure how to use Beautiful Soup to find this common element without converting to a string. The issue is that the titles I'm searching for are not all the same. Here's what I have so far:
from bs4 import BeautifulSoup
import requests
import webbrowser
url = 'website.com'
http = requests.get(url)
soup = BeautifulSoup(http.content, "lxml")
links = soup.find_all('a', limit=4000)
links_length = len(links)
string_links = []
targetlist = []
for a in range(links_length):
    string_links.append(str(links[a]))
    if '(common text)' in string_links[a]:
        targetlist.append(string_links[a])
NOTE: I am looking for the simplest method using Beautiful Soup to accomplish this. Any help will be appreciated.
Without the actual website and the output you expect, it's difficult to say exactly what you want, but here is a "cleaner" version of your loop using a list comprehension.
from bs4 import BeautifulSoup
import requests
import webbrowser
url = 'website.com'
http = requests.get(url)
soup = BeautifulSoup(http.content, "lxml")
links = soup.find_all('a', limit=4000)
targetlist = [str(link) for link in links if "(common text)" in str(link)]
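If you'd rather avoid the str() conversion entirely, you can filter on each tag's rendered text instead. A sketch under the same assumptions as your code (placeholder URL, link titles containing "(common text)"):

from bs4 import BeautifulSoup
import requests

url = 'website.com'
http = requests.get(url)
soup = BeautifulSoup(http.content, "lxml")

# Filter on the link text itself rather than the stringified tag.
# get_text() still works when the title contains nested tags.
targetlist = [
    link for link in soup.find_all('a', limit=4000)
    if '(common text)' in link.get_text()
]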

Scrape texts from multiple websites and save separately in text files

I am a beginner in Python and have been using it for my master's thesis to conduct textual analysis of the gaming industry. I have been trying to scrape reviews from several gaming critic sites.
I used a list of URLs in the code to scrape the reviews and was successful. Unfortunately, I could not write each review to a separate file: depending on the indentation, either every file receives only the review from the last URL in the list, or every file receives all of the reviews. My code follows. Could you kindly suggest what's wrong here?
from bs4 import BeautifulSoup
import requests

urls = ['http://www.playstationlifestyle.net/2018/05/08/ao-international-tennis-review/#/slide/1',
        'http://www.playstationlifestyle.net/2018/03/27/atelier-lydie-and-suelle-review/#/slide/1',
        'http://www.playstationlifestyle.net/2018/03/15/attack-on-titan-2-review-from-a-different-perspective-ps4/#/slide/1']

for url in urls:
    r = requests.get(url).text
    soup = BeautifulSoup(r, 'lxml')

for i in range(len(urls)):
    file = open('filename%i.txt' % i, 'w')
    for article_body in soup.find_all('p'):
        body = article_body.text
        file.write(body)
    file.close()
I think you only need one for loop. If I understand correctly, you want to iterate through the urls and store an individual file for each one.
Therefore, I would suggest removing the second for statement. You do, however, need to modify the for url in urls loop so that it also gives you a unique index for the current URL to use as i, and enumerate does exactly that.
Your single for statement would become:
for i, url in enumerate(urls):
I've not tested this myself, but I believe this should resolve your issue.
Since you say you are a beginner in Python, I'll post the corrected code first and then explain it.
for i, url in enumerate(urls):
    r = requests.get(url).text
    soup = BeautifulSoup(r, 'lxml')
    file = open('filename{}.txt'.format(i), 'w')
    for article_body in soup.find_all('p'):
        body = article_body.text
        file.write(body)
    file.close()
Why you receive only the review from the last URL in all the files: a variable holds one value at a time, so after this first for loop finishes, soup holds only the last result (the third URL). The results for the first and second URLs are overwritten:
for url in urls:
    r = requests.get(url).text
    soup = BeautifulSoup(r, 'lxml')
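As a side note, opening each file in a with block ties the file's lifetime to one URL and closes it automatically, which makes the one-loop structure harder to get wrong. A sketch of the same corrected loop in that style:

from bs4 import BeautifulSoup
import requests

urls = ['http://www.playstationlifestyle.net/2018/05/08/ao-international-tennis-review/#/slide/1',
        'http://www.playstationlifestyle.net/2018/03/27/atelier-lydie-and-suelle-review/#/slide/1',
        'http://www.playstationlifestyle.net/2018/03/15/attack-on-titan-2-review-from-a-different-perspective-ps4/#/slide/1']

for i, url in enumerate(urls):
    html = requests.get(url).text
    soup = BeautifulSoup(html, 'lxml')
    # One file per URL; the with block closes it even if an error occurs mid-loop.
    with open('filename{}.txt'.format(i), 'w') as outfile:
        for article_body in soup.find_all('p'):
            outfile.write(article_body.text)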

Extract text from embedded tweets with Python & BeautifulSoup

I need to extract the text of each embedded tweet on a webpage separately. The code below works OK, but I need to get rid of the framing lines such as "Skip Twitter post by...", "End of Twitter post by...", the date, and "Report", leaving only the tweets. I can't even see where these lines come from or which tag to use. I will really appreciate your help!
import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.bbc.co.uk/news/uk-44496876')
soup = BeautifulSoup(r.content, "html.parser")
article_soup = [s.get_text() for s in soup.find_all('div', {'class': 'social-embed'})]
tweets = '\n'.join(article_soup)
print(tweets)
import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.bbc.co.uk/news/uk-44496876')
soup = BeautifulSoup(r.content, "html.parser")
article_soup = [s.get_text() for s in soup.find_all('p', {'dir': 'ltr'})]
tweets = '\n'.join(article_soup)
print(tweets)
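If you want to keep your original scoping to the social-embed containers while still skipping the "Skip Twitter post" and "End of Twitter post" wrappers, the two snippets can be combined with a CSS selector. A sketch reusing the class from the question and the dir attribute from the answer above:

import requests
from bs4 import BeautifulSoup

r = requests.get('https://www.bbc.co.uk/news/uk-44496876')
soup = BeautifulSoup(r.content, "html.parser")

# Only the tweet-text paragraphs that sit inside the embedded-tweet containers.
tweet_paragraphs = soup.select('div.social-embed p[dir=ltr]')
tweets = '\n'.join(p.get_text() for p in tweet_paragraphs)
print(tweets)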
If you also want to get the author of each tweet, it's a bit trickier, since there is no dedicated tag for the author. So I used string slicing to strip the tweet text and date away from the blockquote's full text, leaving the author, like this:
import requests
from bs4 import BeautifulSoup
r = requests.get('https://www.bbc.co.uk/news/uk-44496876')
soup = BeautifulSoup(r.content, "html.parser")
articles_soup = [s for s in soup.find_all('blockquote', {'class': 'twitter-tweet'})]
tweets = []
for article_soup in articles_soup:
    tweet = article_soup.find('p').get_text()
    # The last <a href='...'></a> is the date, others are part of the tweet
    date = article_soup.find_all('a')[-1].get_text()
    tweet_author = article_soup.get_text()[len(tweet):-len(date)].strip()
    tweets.append((tweet_author, tweet))
print(tweets)
Note 1: If you only want part of the tweet_author string, you can take the first element of each tuple and tweak it to get what you want.
Note 2: The code in the question does not always return tweets; the issue is with the HTML page itself, since from time to time some elements are not returned. A quick workaround is to run the requests.get call once more, but I suggest you look into this further.
Once I fetched the page successfully with the original question's code, I found the right tags and got the tweets you expected, each tweet on its own line.

Beautiful Soup - Blank screen for a long time without any output

I am quite new to Python and am working on a scraping-based project where I am supposed to extract all the content from links containing a particular search term and place it in a CSV file. As a first step, I wrote this code to extract all the links from a website based on an entered search term. I only get a blank screen as output and I am unable to find my mistake.
import urllib
import mechanize
from bs4 import BeautifulSoup
import datetime

def searchAP(searchterm):
    newlinks = []
    browser = mechanize.Browser()
    browser.set_handle_robots(False)
    browser.addheaders = [('User-agent', 'Firefox')]
    text = ""
    start = 0
    while "There were no matches for your search" not in text:
        url = "http://www.marketing-interactive.com/" + "?s=" + searchterm
        text = urllib.urlopen(url).read()
        soup = BeautifulSoup(text, "lxml")
        results = soup.findAll('a')
        for r in results:
            if "rel=bookmark" in r['href']:
                newlinks.append("http://www.marketing-interactive.com" + str(r["href"]))
        start += 10
    return newlinks

print searchAP("digital marketing")
You made four mistakes:
You define start but never use it. (Nor can you, as far as I can see on http://www.marketing-interactive.com/?s=something; there is no URL-based pagination.) So you endlessly loop over the first set of results.
"There were no matches for your search" is not the no-results string returned by that site, so the loop would go on forever anyway.
You are appending the link, which already includes http://www.marketing-interactive.com, to http://www.marketing-interactive.com. So you end up with http://www.marketing-interactive.comhttp://www.marketing-interactive.com/astro-launches-digital-marketing-arm-blaze-digital/
Concerning the rel=bookmark selection: arif's solution is the proper way to go. But if you really want to do it this way, you'd need to do something like this:
for r in results:
    if r.attrs.get('rel') and r.attrs['rel'][0] == 'bookmark':
        newlinks.append(r["href"])
This first checks whether rel exists and then checks whether its first value is "bookmark", because r['href'] simply does not contain the rel attribute; that's not how BeautifulSoup structures things.
To scrape this specific site you can do two things:
You could do something with Selenium or something else that supports Javascript and press that "Load more" button. But this is quite a hassle.
You can use this loophole: http://www.marketing-interactive.com/wp-content/themes/MI/library/inc/loop_handler.php?pageNumber=1&postType=search&searchValue=digital+marketing
This is the url that feeds the list. It has pagination, so you can easily loop over all results.
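A rough sketch of looping over that endpoint's pageNumber parameter (stopping when a page returns no more bookmark links is an assumption and may need adjusting):

import requests
from bs4 import BeautifulSoup

def search_all_pages(search_value):
    endpoint = ('http://www.marketing-interactive.com/wp-content/themes/MI/'
                'library/inc/loop_handler.php')
    links = []
    page = 1
    while True:
        response = requests.get(endpoint, params={
            'pageNumber': page,
            'postType': 'search',
            'searchValue': search_value,
        })
        soup = BeautifulSoup(response.content, 'lxml')
        page_links = [a['href'] for a in soup.find_all('a', attrs={'rel': 'bookmark'})]
        if not page_links:
            # Assumption: an empty page means there are no further results.
            break
        links.extend(page_links)
        page += 1
    return links

print(search_all_pages('digital marketing'))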
The following script extracts all the links from the web page for a given search key. It does not explore beyond the first page, although it can easily be modified to get results from multiple pages by manipulating the page number in the URL (as described by Rutger de Knijf in the other answer).
from pprint import pprint
import requests
from BeautifulSoup import BeautifulSoup

def get_url_for_search_key(search_key):
    base_url = 'http://www.marketing-interactive.com/'
    response = requests.get(base_url + '?s=' + search_key)
    soup = BeautifulSoup(response.content)
    return [url['href'] for url in soup.findAll('a', {'rel': 'bookmark'})]
Usage:
pprint(get_url_for_search_key('digital marketing'))
Output:
[u'http://www.marketing-interactive.com/astro-launches-digital-marketing-arm-blaze-digital/',
u'http://www.marketing-interactive.com/singapore-polytechnic-on-the-hunt-for-digital-marketing-agency/',
u'http://www.marketing-interactive.com/how-to-get-your-bosses-on-board-your-digital-marketing-plan/',
u'http://www.marketing-interactive.com/digital-marketing-institute-launches-brand-refresh/',
u'http://www.marketing-interactive.com/entropia-highlights-the-7-original-sins-of-digital-marketing/',
u'http://www.marketing-interactive.com/features/futurist-right-mindset-digital-marketing/',
u'http://www.marketing-interactive.com/lenovo-brings-board-new-digital-marketing-head/',
u'http://www.marketing-interactive.com/video/discussing-digital-marketing-indonesia-video/',
u'http://www.marketing-interactive.com/ubs-melvin-kwek-joins-credit-suisse-as-apac-digital-marketing-lead/',
u'http://www.marketing-interactive.com/linkedins-top-10-digital-marketing-predictions-2017/']
Hope this is what you wanted as the first step for your project.

How to use BeautifulSoup to scrape links in a html

I need to download a few links from an HTML page. But I don't need all of them, only those in a certain section of the webpage.
For example, in http://www.nytimes.com/roomfordebate/2014/09/24/protecting-student-privacy-in-online-learning, I need the links in the debaters section. I plan to use BeautifulSoup, and I looked at the HTML of one of the links:
Data Collection Is Out of Control
Here's my code:
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data)
link_set = set()
for link in soup.find_all("a", class = "bl-bigger"):
    href = link.get('href')
    if href == None:
        continue
    elif '/roomfordebate/' in href:
        link_set.add(href)
for link in link_set:
    print link
This code is supposed to give me all the links with the bl-bigger class, but it actually returns nothing. Could anyone figure out what's wrong with my code or how to make it work?
Thanks
I don't see the bl-bigger class at all when I view the source in Chrome. Maybe that's why your code is not working?
Let's start by looking at the source. The whole Debaters section seems to be inside a div with class nytint-discussion-content. So using BeautifulSoup, let's get that whole div first.
debaters_div = soup.find('div', class_="nytint-discussion-content")
Again, learning from the source, it seems all the links are inside a list (li tags). Now all you have to do is find all the li tags and find the anchor tags within them. One more thing you can notice is that all the li tags have the class nytint-bylines-1.
list_items = debaters_div.find_all("li", class_="nytint-bylines-1")
list_items[0].find('a')
# Data Collection Is Out of Control
So, your whole code can be:
link_set = set()
response = requests.get(url)
html_data = response.text
soup = BeautifulSoup(html_data)
debaters_div = soup.find('div', class_="nytint-discussion-content")
list_items = debaters_div.find_all("li", class_="nytint-bylines-1")
for each_item in list_items:
    html_link = each_item.find('a').get('href')
    if html_link.startswith('/roomfordebate'):
        link_set.add(html_link)
Now link_set will contain all the links you want. For the link given in the question, it will fetch 5 links.
PS: link_set contains only the URI paths, not full URLs, so I would prepend http://www.nytimes.com before adding them to link_set. Just change the last line to:
link_set.add('http://www.nytimes.com' + html_link)
You need to pass the class as a dictionary instead of a keyword argument (class is a reserved word in Python, which is why your keyword version fails):
soup.find("tagName", { "class" : "cssClass" })
or use the .select method, which runs CSS selectors:
soup.select('a.bl-bigger')
Examples are in the docs; just search for the string '.select'. Also, instead of writing the entire script up front, you'll get working code more quickly by experimenting in the IPython interactive shell.
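For completeness, these are the equivalent ways to express that selection once the class = syntax error is out of the way:

links = soup.find_all('a', {'class': 'bl-bigger'})  # attribute dictionary
links = soup.find_all('a', class_='bl-bigger')      # trailing underscore avoids the keyword clash
links = soup.select('a.bl-bigger')                  # CSS selector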
