I'd like to scrape, using Python 3.6, the H3 titles from within a DIV on this page:
https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1
Note that the page number changes, incrementing by 1.
I'm struggling to return or identify the title.
from requests import get
url = 'https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1'
response = get(url)
from bs4 import BeautifulSoup
html_soup = BeautifulSoup(response.text, 'lxml')
type(html_soup)
movie_containers = html_soup.find_all('div', class_ = 'card card--rentals')
print(type(movie_containers))
print(len(movie_containers))
I've tried looping through them as well:
for div in html_soup.select("div.card__content"):
    print(div.select_one("h3.card__title").text.strip())
Any help would be great.
Thanks,
I'm expecting the title of each film from each page, including a link to the film, e.g. https://player.bfi.org.uk/rentals/film/watch-akenfield-1975-online
The page loads its content via an XHR request to another URL, so your request is missing it. You can mimic that XHR POST request the page uses and alter the JSON payload it sends; if you change the size value you get more results.
import requests
data = {"size":1480,"from":0,"sort":"sort_title","aggregations":{"genre":{"terms":{"field":"genre.raw","size":10}},"captions":{"terms":{"field":"captions"}},"decade":{"terms":{"field":"decade.raw","order":{"_term":"asc"},"size":20}},"bbfc":{"terms":{"field":"bbfc_rating","size":10}},"english":{"terms":{"field":"english"}},"audio_desc":{"terms":{"field":"audio_desc"}},"colour":{"terms":{"field":"colour"}},"mono":{"terms":{"field":"mono"}},"fiction":{"terms":{"field":"fiction"}}},"min_score":0.5,"query":{"bool":{"must":{"match_all":{}},"must_not":[],"should":[],"filter":{"term":{"pillar.raw":"rentals"}}}}}
r = requests.post('https://search-es.player.bfi.org.uk/prod-films/_search', json = data).json()
for film in r['hits']['hits']:
    print(film['_source']['title'], 'https://player.bfi.org.uk' + film['_source']['url'])
The actual result count for rentals is in the JSON, at r['hits']['total'], so you can make an initial request starting with a size much higher than you expect, check whether another request is needed, and then gather any extras by altering from and size to mop up anything outstanding.
import requests
import pandas as pd
initial_count = 10000
results = []
def add_results(r):
    for film in r['hits']['hits']:
        results.append([film['_source']['title'], 'https://player.bfi.org.uk' + film['_source']['url']])
with requests.Session() as s:
    data = {"size": initial_count,"from":0,"sort":"sort_title","aggregations":{"genre":{"terms":{"field":"genre.raw","size":10}},"captions":{"terms":{"field":"captions"}},"decade":{"terms":{"field":"decade.raw","order":{"_term":"asc"},"size":20}},"bbfc":{"terms":{"field":"bbfc_rating","size":10}},"english":{"terms":{"field":"english"}},"audio_desc":{"terms":{"field":"audio_desc"}},"colour":{"terms":{"field":"colour"}},"mono":{"terms":{"field":"mono"}},"fiction":{"terms":{"field":"fiction"}}},"min_score":0.5,"query":{"bool":{"must":{"match_all":{}},"must_not":[],"should":[],"filter":{"term":{"pillar.raw":"rentals"}}}}}
    r = s.post('https://search-es.player.bfi.org.uk/prod-films/_search', json=data).json()
    total_results = int(r['hits']['total'])
    add_results(r)
    if total_results > initial_count:
        data['size'] = total_results - initial_count
        data['from'] = initial_count
        r = s.post('https://search-es.player.bfi.org.uk/prod-films/_search', json=data).json()
        add_results(r)
df = pd.DataFrame(results, columns = ['Title', 'Link'])
print(df.head())
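If you want to keep the full list rather than just preview it, a small usage note: pandas can write the DataFrame straight to CSV (the filename here is just an example):
df.to_csv('bfi_rentals.csv', index=False)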
The issue you are having is not actually with finding the div - I think you are doing that correctly. However, when you try to access the website with
from requests import get
url = 'https://player.bfi.org.uk/search/rentals?q=&sort=title&page=1'
response = get(url)
the response doesn't actually include all the content you see in the browser. You can check that this is the case with 'card' in response.text, which comes back False. This is most likely because, after the page loads, all the cards are added via JavaScript, so just loading the basic content with the requests library is not sufficient to get all the information you want to scrape.
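For instance, a quick way to verify this with the response object from the snippet above:
# The cards you see in the browser are injected by JavaScript after the page loads,
# so the raw HTML that requests receives should not contain them.
print('card' in response.text)  # False, per the explanation above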
I suggest you maybe try looking at how the website loads all the cards - Network tab in the browser dev tools might help.
I would like to get a table from ebi.ac.uk/interpro with the list of all the thousands of protein names, accession numbers, species, and lengths for the entry I put into the website. I tried to write a script in Python using requests, BeautifulSoup, and so on, but I keep getting the error
AttributeError: 'NoneType' object has no attribute 'find_all'.
The code
import requests
from bs4 import BeautifulSoup
# Set the URL of the website you want to scrape
url = "xxxx"
# Send a request to the website and get the response
response = requests.get(url)
# Parse the response using BeautifulSoup
soup = BeautifulSoup(response.text, "html.parser")
# Find the table on the page
table = soup.find("table", class_ = 'xxx')
# Extract the data from the table
# This will return a list of rows, where each row is a list of cells
table_data = []
for row in table.find_all('tr'):
    cells = row.find_all("td")
    row_data = []
    # for cell in cells:
    #     row_data.append(cell.text)
    # table_data.append(row_data)
# Print the extracted table data
#print(table_data)
For table = soup.find("table", class_='xxx'), I fill in the class according to the name I see when I inspect the page.
Thank you.
I would like to get a table listing all the thousands of proteins that the website returns for my request.
Sure it is. Take a look at this example:
import requests
url = "https://www.ebi.ac.uk/interpro/wwwapi/entry/hamap/"
querystring = {"search":"","page_size":"9999"}
payload = ""
response = requests.request("GET", url, data=payload, params=querystring)
print(response.text)
Please do not use Selenium unless absolutely necessary. In the following example we request all the entries from /hamap/; I have no idea what that means, but it is the API used to fetch the data. You can get the API for the dataset you want to scrape by doing the following:
Open Chrome dev tools -> Network -> click Fetch/XHR -> click on the specific source you want -> wait until the page loads -> click the red record icon -> look through the requests for the one that you want. It is important not to record requests after you have retrieved the initial response; this website sends a tracking request every second or so and it becomes cluttered really quickly. Once you have the source that you want, just loop over the array and get the fields that you want (a minimal sketch of that step is below). I hope this answer was useful to you.
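For the "loop over the array" step, something like this rough sketch might work. It reuses the /hamap/ request above; the 'results' and 'metadata' field names are assumptions about the response shape, so print one raw item first to confirm them:
import requests

url = "https://www.ebi.ac.uk/interpro/wwwapi/entry/hamap/"
response = requests.get(url, params={"search": "", "page_size": "9999"})
data = response.json()

# 'results' and the 'metadata' keys below are assumptions about the payload;
# inspect data['results'][0] to confirm the exact field names before relying on this.
for entry in data.get("results", []):
    meta = entry.get("metadata", {})
    print(meta.get("accession"), meta.get("name"))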
Hey, I checked it out some more. This site uses something similar to Elasticsearch's scroll; here is a full implementation of what you are looking for:
import requests
import json

results_array = []

def main():
    count = 0
    starturl = "https://www.ebi.ac.uk/interpro/wwwapi//protein/UniProt/entry/InterPro/IPR002300/?page_size=100&has_model=true" ## This is the URL you want to scrape on page 0
    startpage = requests.get(starturl) ## This is the page you want to scrape
    count += int(startpage.json()['count']) ## This is the total number of indexes
    next = startpage.json()['next'] ## This is the next page
    for result in startpage.json()['results']:
        results_array.append(result)
    while count:
        count -= 100
        nextpage = requests.get(next) ## this is the next page
        for result in nextpage.json()['results']:
            results_array.append(result)
        print(json.dumps(nextpage.json()))
        print(count)
        if nextpage.json()['next'] is None: ## append this page's results before checking, so the last page is not dropped
            break
        next = nextpage.json()['next']

if __name__ == '__main__':
    main()
    with open("output.json", "w") as f:
        f.write(json.dumps(results_array))
To use this for any other data set, replace the starturl string with the corresponding URL; make sure it is the URL that controls the pages. To find it, click on the data you want, then click through to the next page and use that URL. A sketch of pulling the requested columns out of results_array follows below.
I hope this answer is what you were looking for.
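To turn results_array (filled by the script above) into the table asked about, with name, accession, species, and length, a rough sketch like the following could run after main(); the metadata field names are assumptions about the InterPro protein payload, so inspect one record and adjust:
import csv

# The keys used here ('metadata', 'accession', 'name', 'source_organism', 'length')
# are assumptions about the API response; print results_array[0] to confirm them.
with open("proteins.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["accession", "name", "species", "length"])
    for item in results_array:
        meta = item.get("metadata", {})
        organism = meta.get("source_organism") or {}
        writer.writerow([
            meta.get("accession"),
            meta.get("name"),
            organism.get("fullName") or organism.get("scientificName"),
            meta.get("length"),
        ])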
I have this website https://www.futbin.com/22/player/7504 and I want to know if there is a way to get the XHR URL for the information using Python. For example, for the URL above I know the XHR I want is https://www.futbin.com/22/playerPrices?player=231443 (I got it from inspect element -> Network).
My objective is to get the price value from https://www.futbin.com/22/player/1 to https://www.futbin.com/22/player/10000 at once without using inspect element one by one.
import requests
URL = 'https://www.futbin.com/22/playerPrices?player=231443'
page = requests.get(URL)
x = page.json()
data = x['231443']['prices']
print(data['pc']['LCPrice'])
print(data['ps']['LCPrice'])
print(data['xbox']['LCPrice'])
You can find the player-resource id and build the URL yourself. I use BeautifulSoup; it's made for parsing websites, but you can take the requests content and throw it into another HTML parser if you don't want to install BeautifulSoup.
With it, read the first URL, get the id, and use your code to pull the prices. To test, change the 10000 to 2 or 3 and you'll see it works.
import re, requests
from bs4 import BeautifulSoup
for i in range(1,10000):
    url = 'https://www.futbin.com/22/player/{}'.format(str(i))
    html = requests.get(url).content
    soup = BeautifulSoup(html, "html.parser")
    player_resource = soup.find(id=re.compile('page-info')).get('data-player-resource')
    # print(player_resource)
    URL = 'https://www.futbin.com/22/playerPrices?player={}'.format(player_resource)
    page = requests.get(URL)
    x = page.json()
    # print(x)
    data = x[player_resource]['prices']
    print(data['pc']['LCPrice'])
    print(data['ps']['LCPrice'])
    print(data['xbox']['LCPrice'])
import requests
from bs4 import BeautifulSoup

# headers is a dict of request headers (e.g. a User-Agent) defined earlier in my script
url1 = "https://www.imdb.com/user/ur34087578/watchlist"
url = "https://www.imdb.com/search/title/?groups=top_1000&ref_=adv_prv"
results1 = requests.get(url1, headers=headers)
results = requests.get(url, headers=headers)
soup1 = BeautifulSoup(results1.text, "html.parser")
soup = BeautifulSoup(results.text, "html.parser")
movie_div1 = soup1.find_all('div', class_='lister-item-content')
movie_div = soup.find_all('div', class_='lister-item mode-advanced')
# using a unique tag for each movie in the respective link
print(movie_div1)  # empty list
print(movie_div)   # gives a perfect list
Why is movie_div1 giving an empty list? I am not able to identify any difference in the URL structures to indicate the code should be different. All leads appreciated.
Unfortunately the div you want is generated by JavaScript, so you can't get it by scraping the raw HTML response.
You can get the movies you want from the JSON request your browser makes, which means you won't need to scrape the code with BeautifulSoup, making your script much faster.
A second option is using Selenium; a small sketch of that is below.
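A minimal sketch of the Selenium option, assuming the selenium package and a matching browser driver are installed; the CSS selectors are guesses based on the class names in the question, so adjust them to the actual page:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()  # or webdriver.Firefox(), depending on your setup
try:
    driver.get("https://www.imdb.com/user/ur34087578/watchlist")
    # wait until the JavaScript-rendered list items exist
    WebDriverWait(driver, 10).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, "div.lister-item-content"))
    )
    # 'div.lister-item-content h3 a' is an assumed selector for the title links
    for item in driver.find_elements(By.CSS_SELECTOR, "div.lister-item-content h3 a"):
        print(item.text)
finally:
    driver.quit()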
Good luck.
As #SakuraFreak mentioned, you could parse the JSON received. However, this JSON response is embedded within the HTML itself and is later converted to HTML by the browser's JS (that is what you see as <div class="lister-item-content">...</div>).
For example, this is how you would extract the JSON content from the HTML to display movie/show names from the watchlist:
import requests
from bs4 import BeautifulSoup
import json
url = "https://www.imdb.com/user/ur34087578/watchlist"
response = requests.get(url)
soup = BeautifulSoup(response.text, "html.parser")
details = str(soup.find('span', class_='ab_widget'))
json_initial = "IMDbReactInitialState.push("
json_leftover = ");\n"
json_start = details.find(json_initial) + len(json_initial)
details = details[json_start:]
json_end = details.find(json_leftover)
json_data = json.loads(details[:json_end])
imdb_titles = json_data["titles"]
for item in imdb_titles.values():
    print(item["primary"]["title"])
I want to parse the price information in Bitmex using bs4.
(The site url is 'https://www.bitmex.com/app/trade/XBTUSD')
So I wrote the code like this:
from bs4 import BeautifulSoup
import requests
url = 'https://www.bitmex.com/app/trade/XBTUSD'
bitmex = requests.get(url)
if bitmex.status_code == 200:
    print("connected...")
else:
    print("Error...")
bitmex_html = bitmex.text
soup = BeautifulSoup(bitmex_html , 'lxml' )
price = soup.find_all("span", {"class": "price"})
print(price)
And, the result is like this
connected...
[]
Why did '[]' pop up? And to get the price text, like '6065.5', what should I do?
The text I want to parse is
<span class="price">6065.5</span>
and the selector is
content > div > div.tickerBar.overflown > div > span.instruments.tickerBarSection > span:nth-child(1) > span.price
I've just started studying Python, so the question may seem odd to a pro... sorry.
You were pretty close. Give the following a try and see if it's more what you wanted. Perhaps the format you are seeing or retrieving is not quite what you expect. Hope this is helpful.
from bs4 import BeautifulSoup
import requests
import sys
import json
url = 'https://www.bitmex.com/app/trade/XBTUSD'
bitmex = requests.get(url)
if bitmex.status_code == 200:
    print("connected...")
else:
    print("Error...")
    sys.exit(1)
bitmex_html = bitmex.text
soup = BeautifulSoup(bitmex_html , 'lxml' )
# extract the json text from the returned page
price = soup.find_all("script", {"id": "initialData"})
price = price.pop()
# parse json text
d = json.loads(price.text)
# pull out the order book and then each price listed in the order book
order_book = d['orderBook']
prices = [v['price'] for v in order_book]
print(prices)
Example output:
connected...
[6045, 6044.5, 6044, 6043.5, 6043, 6042.5, 6042, 6041.5, 6041, 6040.5, 6040, 6039.5, 6039, 6038.5, 6038, 6037.5, 6037, 6036.5, 6036, 6035.5, 6035, 6034.5, 6034, 6033.5, 6033, 6032.5, 6032, 6031.5, 6031, 6030.5, 6030, 6029.5, 6029, 6028.5, 6028, 6027.5, 6027, 6026.5, 6026, 6025.5, 6025, 6024.5, 6024, 6023.5, 6023, 6022.5, 6022, 6021.5, 6021, 6020.5]
Your problem is that the page doesn't contain those span elements in first place. If you check the response tab in your browser developer tools (press F12 in firefox) you can see that the page is composed of script tags with some code written in javascript that creates the elements dynamically when executed.
Since BeautifulSoup can't execute Javascript, you can't extract the elements directly with it. You have two alternatives:
Use something like selenium that allows you to drive a browser from python - that means javascript will be executed because you're using a real browser - however the performance suffers.
Read the JavaScript code, understand it, and write Python code to simulate it. This is usually harder, but luckily for you it seems very simple for the page you want:
import json
import requests
import lxml.html

r = requests.get('https://www.bitmex.com/app/trade/XBTUSD')
doc = lxml.html.fromstring(r.text)
data = json.loads(doc.xpath("//script[@id='initialData']/text()")[0])
As you can see, the data is in JSON format inside the page. After loading the data variable you can use it to access the information you want:
for row in data['orderBook']:
    print(row['symbol'], row['price'], row['side'])
Will print:
('XBTUSD', 6051.5, 'Sell')
('XBTUSD', 6051, 'Sell')
('XBTUSD', 6050.5, 'Sell')
('XBTUSD', 6050, 'Sell')
Python/Webscraping Beginner so please bear with me. I'm trying to grab all product names from this URL
Unfortunately, nothing gets returned when I run my code. The same code works fine for most other sites but I've tried dozens of variations and I can't make it work for this site.
Is this URL even scrapable using Bsoup? Any feedback is appreciated.
import bs4
import requests
url = 'http://www.rakuten.com/sr/searchresults.aspx?qu'
payload = {'q': 'Python',}
r = requests.get(url % payload)
soup = bs4.BeautifulSoup(r.text)
titles = [a.attrs.get('href') for a in soup.findAll('div.productscontainer a[href^=/prod]')]
for t in titles:
    print(t)
import bs4
import requests
url = 'http://www.rakuten.com/sr/searchresults.aspx?qu'
r = requests.get(url)
soup = bs4.BeautifulSoup(r.text)
titles = [td.text for td in soup.findAll('td', attrs={'class': 'searchlist'})]
for t in titles:
    print(t)
If this formatting is correct, is the JS for sure preventing me from pulling anything?
First of all, your string formatting likely is wrong. Look at this:
>>> url = 'http://www.rakuten.com/sr/searchresults.aspx?qu'
>>> payload = {'q': 'Python',}
>>> url % payload
'http://www.rakuten.com/sr/searchresults.aspx?qu'
I guess that's not what you want. You should look up how string formatting works in Python, and then come up with a proper way to construct your URL.
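As a rough sketch of one cleaner way to build the URL, you could let requests encode the query string for you; the parameter name 'qu' is only a guess taken from the '?qu' in the question's URL, so confirm the site's real search parameter in the browser's Network tab:
import requests
import bs4

# 'qu' as the search parameter name is an assumption based on the original URL;
# verify the real parameter before relying on it.
r = requests.get('http://www.rakuten.com/sr/searchresults.aspx', params={'qu': 'Python'})
print(r.url)  # shows the full URL that requests actually built
soup = bs4.BeautifulSoup(r.text, 'html.parser')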
Secondly, that "search engine" makes heavy use of JavaScript. Probably you will not be able to retrieve the information you want by just looking at the initially retrieved HTML content.