Loop over different webpages using Python

I am currently following a course in Big Data but do not understand much of it. For an assignment, I would like to find out which topics are discussed on the TripAdvisor forum about Amsterdam. I want to create a CSV file containing the topic, the author, and the number of replies per topic. Some questions:
How can I make a list of all the topics? I checked the page source for all the pages and the topic always appears after 'onclick="setPID(34603)' and ends with </a>. I tried re.findall(r'onclick="setPID(34603)">(.*?)</a>', post) but it's not working.
The replies are not given in the comment section, but in a separate row on the page. How can I make a loop and append all the replies to a new variable?
How do I loop over the first 20 pages? The URL in my code only covers the first page, which gives 20 topics.
Do I create the CSV file before or after the loop?
Here is my code:
from urllib import request
import re
import csv
topiclist=[]
metalist=[]
req = request.Request('https://www.tripadvisor.com/ShowForum-g188590-i60-Amsterdam_North_Holland_Province.html',
                      headers={'User-Agent': "Mozilla/5.0"})
tekst = request.urlopen(req).read()
tekst = tekst.decode(encoding="utf-8", errors="ignore").replace("\n", " ").replace("\t", " ")
topicsection = re.findall(r'<b><a(.*?)</div>', tekst)
topic = []
for post in topicsection:
    topic.append(re.findall(r'onclick="setPID(34603)">(.*?)</a>', post))
author = []
for post in topicsection:
    author.append(re.findall(r'(.*?)', post))
replies = re.findall(r'<td class="reply rowentry.*?">(.*?)</td>', tekst)

Don't use regular expressions to parse HTML. Use an HTML parser such as BeautifulSoup. For example:
from bs4 import BeautifulSoup
import requests
r = requests.get("https://www.tripadvisor.com/ShowForum-g188590-i60-Amsterdam_North_Holland_Province.html")
soup = BeautifulSoup(r.content, "html.parser")  # or another parser such as lxml
topics = soup.find_all("a", {'onclick': 'setPID(34603)'})
#do stuff
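The snippet above answers question 1. For the pagination and CSV questions, here is a sketch that builds on it. Note the assumptions: TripAdvisor forum URLs appear to paginate with an -o{offset}- segment (verify against the page's own "next" links), and the author/reply selectors are guesses to check against the real markup. The CSV is written once, after the loop has collected everything, which answers question 4.
import csv
import requests
from bs4 import BeautifulSoup

# assumed pagination scheme: offset grows by 20 per page (verify on the site)
base = ("https://www.tripadvisor.com/ShowForum-g188590-i60-o{offset}-"
        "Amsterdam_North_Holland_Province.html")
rows = []
for page in range(20):  # first 20 pages, 20 topics each
    r = requests.get(base.format(offset=page * 20),
                     headers={"User-Agent": "Mozilla/5.0"})
    soup = BeautifulSoup(r.content, "html.parser")
    for link in soup.find_all("a", {"onclick": "setPID(34603)"}):
        row = link.find_parent("tr")  # topic, author and replies share a row
        if row is None:
            continue
        author = row.find("a", class_="username")   # assumed class name
        replies = row.find("td", class_="reply")    # assumed class name
        rows.append([link.get_text(strip=True),
                     author.get_text(strip=True) if author else "",
                     replies.get_text(strip=True) if replies else ""])

# create the CSV after the loop, once everything is collected
with open("amsterdam_forum.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["topic", "author", "replies"])
    writer.writerows(rows)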

Related

Web Scraping & CSV making

I am doing a project that I am a little stuck on. I am trying to scrape a website to grab the full cast and their characters.
I successfully grab the cast and characters from the wiki page and get it to print out their personal Wikipedia pages.
My question is: is it possible to build those Wikipedia URLs with a loop?
Next, I would want to use that loop to scrape each actor's or actress's personal Wikipedia page for their age and where they are from.
Once the full code works and outputs what I have asked, I need help creating a CSV file and printing the results into it.
Thanks in advance.
from urllib.request import urlopen
from bs4 import BeautifulSoup
import pandas as pd
url = 'https://en.wikipedia.org/wiki/Law_%26_Order:_Special_Victims_Unit'
html = urlopen(url)
bs = BeautifulSoup(html, 'html.parser')
The above code imports the libraries I am going to use and has the url I am trying to scrape:
right_table = bs.find('table', class_='wikitable plainrowheaders')
table_rows = right_table.find_all('td')
for row in table_rows:
    try:
        if row.a['href'].startswith('/wiki'):
            print(row.get_text())
            print(row.a['href'])
            print('-------------------------')
    except TypeError:
        pass
This was before I added the print statements below. I think there is a way to collect the "/wiki/..." links it prints into a list and then loop over them:
right_table = bs.find('table', class_='wikitable plainrowheaders')
table_rows = right_table.find_all('td')
for row in table_rows:
    try:
        if row.a['href'].startswith('/wiki'):
            print(row.get_text())
            link = 'https://en.wikipedia.org' + row.a['href']  # href already starts with '/wiki/...'
            print(link)
            print('-------------------------')
    except TypeError:
        pass
The above code currently prints the cast and their allocated Wikipedia pages, but I am unsure if that's the right way to do it. I am also printing my results first, before putting it all in the CSV, to check that I am producing the right output.
def age_finder(url):
...
return age
In the code above, I am unsure what to put where the "..." is so that it returns the age.
for url in url_list:
    age_finder(url)
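As for the age_finder stub: one option, sketched below rather than offered as a robust solution, is to read the machine-readable birth date that most Wikipedia biography infoboxes expose in a <span class="bday"> element (a common Wikipedia convention, though not every page has it):
from datetime import date
from urllib.request import urlopen
from bs4 import BeautifulSoup

def age_finder(url):
    """Return the person's age from their Wikipedia infobox, or None."""
    bs = BeautifulSoup(urlopen(url), 'html.parser')
    bday = bs.find('span', class_='bday')  # ISO date, e.g. '1964-01-07'
    if bday is None:
        return None  # no machine-readable birth date on this page
    born = date.fromisoformat(bday.get_text())
    today = date.today()
    # subtract one if the birthday hasn't happened yet this year
    return today.year - born.year - ((today.month, today.day) < (born.month, born.day))
For the CSV step, collect (name, age, origin) rows in a list while looping and write them once at the end, e.g. with csv.writer or pandas.DataFrame.to_csv.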

How to collect data from a website when the wanted tag has no class?

I would like to know how to get data from a website.
I found a tutorial and finished with this:
import os
import csv
import requests
from bs4 import BeautifulSoup
requete = requests.get("https://www.palabrasaleatorias.com/mots-aleatoires.php")
page = requete.content
soup = BeautifulSoup(page, "html.parser")  # specify a parser to avoid the bs4 warning
The tutorial tells me that I should use something like this to get the string of a tag:
h1 = soup.find("h1", {"class": "ico-after ico-tutorials"})
print(h1.string)
But I have a problem: the tag whose text content I want has no class... what should I do?
I tried to pass {}, but that's not working,
and {"class": ""} too.
In fact, it returns None.
I want to get the text content of this part of the website :
<div style="font-size:3em; color:#6200C5;">
Orchard</div>
Where Orchard is the random word
Thanks for any kind of help.
Unfortunately, BeautifulSoup doesn't give you many ways to point at an element like this, and the page you are trying to scrape is terribly ill-suited for the task (no IDs, classes, or other useful HTML features to point at).
Hence, you should change the way you point at the HTML element and use an XPath instead, which you can't do with BeautifulSoup. To do that, parse the page with the html module from the lxml package. Below is a code snippet (based on the answers to this question) which extracts the random word in your example.
import requests
from lxml import html
requete = requests.get("https://www.palabrasaleatorias.com/mots-aleatoires.php")
tree = html.fromstring(requete.content)
rand_w = tree.xpath('/html/body/center/center/table[1]/tr/td/div/text()')
print(rand_w)
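Note that xpath() returns a list of raw text nodes, so the result usually needs a small cleanup before use:
word = rand_w[0].strip() if rand_w else None  # e.g. 'Orchard'
print(word)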

How do I scrape a full instagram page in python?

Long story short, I'm trying to create an Instagram scraper in Python that loads the entire page and grabs all the links to the images. I have it working; the only problem is that it only loads the first 12 photos that Instagram shows. Is there any way I can tell requests to load the entire page?
Working code:
import json
import requests
from bs4 import BeautifulSoup
import sys
r = requests.get('https://www.instagram.com/accountName/')
soup = BeautifulSoup(r.text, 'lxml')
script = soup.find('script', text=lambda t: t.startswith('window._sharedData'))
page_json = script.text.split(' = ', 1)[1].rstrip(';')
data = json.loads(page_json)
non_bmp_map = dict.fromkeys(range(0x10000, sys.maxunicode + 1), 0xfffd)
for post in data['entry_data']['ProfilePage'][0]['graphql']['user']['edge_owner_to_timeline_media']['edges']:
    image_src = post['node']['display_url']
    print(image_src)
As Scratch already mentioned, Instagram uses "infinite scrolling", which won't allow you to load the entire page. But you can check the total number of posts at the top of the page (within the first span with the _fd86t class). Then you can check whether the page already contains all of the posts. Otherwise, you'll have to use a GET request to get a new JSON response. The benefit of this is that the request contains the first field, which seems to let you modify how many posts you get. You can change it from its standard 12 to get all of the remaining posts (hopefully).
The necessary request looks similar to the following (where I've anonymised the actual entries, and with some help from the comments):
https://www.instagram.com/graphql/query/?query_hash=472f257a40c653c64c666ce877d59d2b&variables={"id":"XXX","first":12,"after":"XXX"}
parse_ig.py
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from bs4 import BeautifulSoup
from InstagramAPI import InstagramAPI
import time
c = webdriver.Chrome()
# load IG page here, whether a hashtag or a public user's page using c.get(url)
for i in range(10):
    c.send_keys(Keys.END)
    time.sleep(1)
soup = BeautifulSoup(c.page_source, 'html.parser')
ids = [a['href'].split('/') for a in soup.find_all('a') if 'tagged' in a['href']]
Once you have the ids, you can use Instagram's old API to get data for those. I'm not sure if it still works, but there was an API that I used -- which was limited by how much FB has slowly deprecated parts of the old API. Here's the link, in case you don't want to access Instagram API on your own :)
You can also improve this simple code: for example, instead of a "for" loop you could use a "while" loop (i.e. while the page is still loading, keep pressing the END key).
#zero's answer is incomplete (at least as of 1/15/19). c.send_keys is not a valid method. Instead, this is what I did:
c = webdriver.Chrome()
c.get(some_url)
element = c.find_element_by_tag_name('body') # or whatever tag you're looking to scrape from
for i in range(10):
    element.send_keys(Keys.END)
    time.sleep(1)
soup = BeautifulSoup(c.page_source, 'html.parser')
Here is a link to a good tutorial for scraping Instagram profile info and posts that also handles pagination and works in 2022: Scraping Instagram
In summary, you have to use the Instagram GraphQL API endpoint, which requires a user identifier and the cursor from the previous page's response: https://instagram.com/graphql/query/?query_id=17888483320059182&id={user_id}&first=24&after={end_cursor}
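Roughly, paginating that endpoint looks like the sketch below. The query_id and parameter names follow the URL quoted above, but the response layout changes over time, so treat the field names as assumptions to verify:
import requests

user_id = "XXX"  # placeholder: the numeric id of the profile you scrape
url = "https://instagram.com/graphql/query/"
params = {"query_id": "17888483320059182", "id": user_id, "first": 24}
end_cursor = None

while True:
    if end_cursor:
        params["after"] = end_cursor  # cursor from the previous response
    data = requests.get(url, params=params).json()
    media = data["data"]["user"]["edge_owner_to_timeline_media"]
    for edge in media["edges"]:
        print(edge["node"]["display_url"])
    if not media["page_info"]["has_next_page"]:
        break
    end_cursor = media["page_info"]["end_cursor"]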

Retrieving a subset of hrefs from find_all() in BeautifulSoup

My goal is to write a Python script that takes an artist's name as a string input and appends it to the base URL of the Genius search query. It then retrieves all the lyrics links from the returned web page (the required subset, in which every link also contains the artist's name). I am in the initial phase right now and have only been able to retrieve all links from the web page, including the ones I don't want in my subset. I tried to find a simple solution but failed repeatedly.
import requests
# The Requests library.
from bs4 import BeautifulSoup
from lxml import html
user_input = input("Enter Artist Name = ").replace(" ","+")
base_url = "https://genius.com/search?q="+user_input
header = {'User-Agent':''}
response = requests.get(base_url, headers=header)
soup = BeautifulSoup(response.content, "lxml")
for link in soup.find_all('a', href=True):
    print(link['href'])
This returns the complete list below, while I only need the ones that end with "lyrics" and contain the artist's name (here, for instance, Drake). Those are the links from which I should be able to retrieve the lyrics:
https://genius.com/
/signup
/login
https://www.facebook.com/geniusdotcom/
https://twitter.com/Genius
https://www.instagram.com/genius/
https://www.youtube.com/user/RapGeniusVideo
https://genius.com/new
https://genius.com/Drake-hotline-bling-lyrics
https://genius.com/Drake-one-dance-lyrics
https://genius.com/Drake-hold-on-were-going-home-lyrics
https://genius.com/Drake-know-yourself-lyrics
https://genius.com/Drake-back-to-back-lyrics
https://genius.com/Drake-all-me-lyrics
https://genius.com/Drake-0-to-100-the-catch-up-lyrics
https://genius.com/Drake-started-from-the-bottom-lyrics
https://genius.com/Drake-from-time-lyrics
https://genius.com/Drake-the-motto-lyrics
/search?page=2&q=drake
/search?page=3&q=drake
/search?page=4&q=drake
/search?page=5&q=drake
/search?page=6&q=drake
/search?page=7&q=drake
/search?page=8&q=drake
/search?page=9&q=drake
/search?page=672&q=drake
/search?page=673&q=drake
/search?page=2&q=drake
/embed_guide
/verified-artists
/contributor_guidelines
/about
/static/press
mailto:brands#genius.com
https://eventspace.genius.com/
/static/privacy_policy
/jobs
/developers
/static/terms
/static/copyright
/feedback/new
https://genius.com/Genius-how-genius-works-annotated
https://genius.com/Genius-how-genius-works-annotated
My next step would be to use Selenium to emulate scrolling, which in the case of genius.com yields the entire set of search results. Any suggestions or resources would be appreciated. I would also like a few comments about the way I wish to proceed with this solution. Can we make it more generic?
P.S. I may not have explained my problem very lucidly, but I have tried my best. Also, questions about any ambiguities are welcome. I am new to scraping, Python, and programming in general, so I just wanted to make sure that I am following the right path.
Use the re module to match only the links you want.
import requests
# The Requests library.
from bs4 import BeautifulSoup
from lxml import html
import re
user_input = input("Enter Artist Name = ").replace(" ", "+")
base_url = "https://genius.com/search?q=" + user_input
header = {'User-Agent': ''}
response = requests.get(base_url, headers=header)
soup = BeautifulSoup(response.content, "lxml")
pattern = re.compile(r"\S+-lyrics$")
for link in soup.find_all('a', href=True):
    if pattern.match(link['href']):
        print(link['href'])
Output:
https://genius.com/Drake-hotline-bling-lyrics
https://genius.com/Drake-one-dance-lyrics
https://genius.com/Drake-hold-on-were-going-home-lyrics
https://genius.com/Drake-know-yourself-lyrics
https://genius.com/Drake-back-to-back-lyrics
https://genius.com/Drake-all-me-lyrics
https://genius.com/Drake-0-to-100-the-catch-up-lyrics
https://genius.com/Drake-started-from-the-bottom-lyrics
https://genius.com/Drake-from-time-lyrics
https://genius.com/Drake-the-motto-lyrics
This just checks whether the link matches a pattern ending in -lyrics. You may use similar logic to filter on the user_input variable as well, as sketched below.
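For instance, a hypothetical refinement that also anchors the pattern to the entered artist (converting the '+'-joined input back into Genius's '-'-separated slug):
artist_slug = user_input.replace("+", "-")  # e.g. "Drake" or "Kendrick-Lamar"
pattern = re.compile(r"^https://genius\.com/" + re.escape(artist_slug)
                     + r"\S*-lyrics$", re.IGNORECASE)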
Hope this helps.

Scraping the web in python

I'm completely new to web scraping, but I really want to learn it in Python. I have a basic understanding of Python.
I'm having trouble understanding a piece of code that scrapes a webpage, because I can't find good documentation for the modules the code uses.
The code scrapes some movie data from this webpage.
I get stuck after the comment "selection in pattern follows the rules of CSS".
I would like to understand the logic behind that code, or find good documentation for those modules. Is there any prerequisite topic I need to learn?
The code is the following :
import requests
from pattern import web
from BeautifulSoup import BeautifulSoup
url = 'http://www.imdb.com/search/title?sort=num_votes,desc&start=1&title_type=feature&year=1950,2012'
r = requests.get(url)
print r.url
url = 'http://www.imdb.com/search/title'
params = dict(sort='num_votes,desc', start=1, title_type='feature', year='1950,2012')
r = requests.get(url, params=params)
print r.url # notice it constructs the full url for you
#selection in pattern follows the rules of CSS
dom = web.Element(r.text)
for movie in dom.by_tag('td.title'):
    title = movie.by_tag('a')[0].content
    genres = movie.by_tag('span.genre')[0].by_tag('a')
    genres = [g.content for g in genres]
    runtime = movie.by_tag('span.runtime')[0].content
    rating = movie.by_tag('span.value')[0].content
    print title, genres, runtime, rating
Here's the documentation for BeautifulSoup, which is an HTML and XML parser.
The comment
selection in pattern follows the rules of CSS
means that strings such as 'td.title' and 'span.runtime' are CSS selectors that help locate the data you are looking for; td.title, for example, selects <td> elements with the attribute class="title".
The code iterates through the HTML elements in the webpage body and extracts the title, genres, runtime, and rating via those CSS selectors.
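As a side note, the pattern library is no longer actively maintained; the same CSS-selector logic can be written with BeautifulSoup 4's select(). A sketch, assuming the same old IMDb markup the course code targets:
from bs4 import BeautifulSoup  # bs4, the maintained successor of BeautifulSoup 3

soup = BeautifulSoup(r.text, 'html.parser')
for movie in soup.select('td.title'):          # same CSS selectors as above
    title = movie.select_one('a').get_text()
    genres = [g.get_text() for g in movie.select('span.genre a')]
    runtime = movie.select_one('span.runtime').get_text()
    rating = movie.select_one('span.value').get_text()
    print(title, genres, runtime, rating)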
