I am new to web scraping. I have been given a task to extract data from: Here
I am choosing dataset of "comments". Below is my code for scraping.
import requests
from bs4 import BeautifulSoup
url = 'https://www.kaggle.com/hacker-news/hacker-news'
headers = {'User-Agent': 'Mozilla/5.0'}
response = requests.get(url, headers=headers)
response.status_code
response.content
soup = BeautifulSoup(response.content, 'html.parser')
soup.find_all('tbody', class_='TableBody-kSbjpE jGqIxa')
When I try to execute the last command, it returns [].
So I am stuck here. I know we can get the data from a kernel, but just for practice purposes, where am I going wrong? Am I choosing the wrong class? I want to scrape the data and probably save it to a CSV file or a NoSQL database, preferably Cassandra.
You are getting [] because the data you want to scrape comes from an API that is called after the page loads, so the page you are requesting does not contain that class.
You can open your browser's developer console and check the Network tab, as shown in the screenshot; there you will find the request that returns the data you want, so you have to make your request to that URL to get the data.
You can retrieve the data from that URL; in the Preview tab you can see all of it.
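For illustration only, a minimal sketch of that approach; the URL below is a placeholder for whatever request you actually find in the Network tab, not a real endpoint:
import requests

# Placeholder: replace with the request URL you copied from the Network tab.
api_url = 'https://www.kaggle.com/some/xhr/endpoint'
response = requests.get(api_url, headers={'User-Agent': 'Mozilla/5.0'})

# If the endpoint returns JSON (as shown in the Preview tab), parse it directly.
data = response.json()
print(data)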
Also, if you have a good knowledge of Python, you can use Scrapy to scrape the data:
https://doc.scrapy.org/en/latest/intro/overview.html
Even though you were able to see the tbody element with class TableBody-kSbjpE jGqIxa in the element inspector, the response to your request does not contain this class. See for yourself with print(soup.prettify()). This is most likely because you're not requesting the correct URL.
This may not be something you're aware of, but as an FYI:
You don't actually need to scrape with BeautifulSoup; you can get a list of all the available datasets from the API. Once you have it installed and configured, you can download the dataset: kaggle datasets download -d . Here's more info if you wish to proceed with the API instead: https://github.com/Kaggle/kaggle-api
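If you prefer to stay in Python rather than use the command line, a minimal sketch using the official kaggle package might look roughly like this (it assumes the package is installed and your API token is configured in ~/.kaggle/kaggle.json):
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()
# Download the dataset referenced in the question and unzip it into ./data.
api.dataset_download_files('hacker-news/hacker-news', path='data', unzip=True)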
My issue is that I want to get the user's ID info from the chat.
The chat area I'm looking for looks like this...
<div id="chat_area" class="chat_area" style="will-change: scroll-position;">
<dl class="" user_id="asdf1234"><dt class="user_m"><em class="pc"></em> :</dt><dd id="1">blah blah</dd></dl>
asdf1234
...
What I want to do is:
Get the part starting with <a href='javascript:'' user_id='asdf1234' ...
so that I can parse it and do some other stuff.
But this webpage is the one I'm currently using, and it cannot be proxied (driven through a Selenium webdriver).
How can I extract that data from the chat?
It looks like you've got two separate problems here. I'd use both the requests and BeautifulSoup libraries to accomplish this.
Use your browser's developer tools (the Network tab): refresh the page and look for the request whose response contains the HTML you want. Then use the requests library to emulate this request exactly.
import requests
headers = {"name": "value"}
# Get case example.
response = requests.get("some_url", headers=headers)
# Post case example.
data = {"key": "value"}
response = requests.post("some_url", headers=headers, data=data)
Web scraping is always finicky; if this doesn't work, you're most likely going to need to use a requests Session. Or a one-time hacky solution is just to set your cookies from the browser.
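For example, a rough sketch of the Session approach; the cookie name and value are placeholders you would copy from your browser's developer tools:
import requests

session = requests.Session()
session.headers.update({"User-Agent": "Mozilla/5.0"})
# Placeholder cookie; copy the real name/value pair from your browser if needed.
session.cookies.set("some_cookie_name", "value-from-your-browser")
response = session.get("some_url")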
Once you have made the request you can use BeautifulSoup to scrape your user id very easily.
from bs4 import BeautifulSoup
# Create BS parser.
soup = BeautifulSoup(response.text, 'lxml')
# Find all elements with the attribute "user_id".
find_results = soup.findAll("a", {"user_id" : True})
# Iterate results. Could also just index if you want the single user_id.
for result in find_results:
    user_id = result["user_id"]
I'm learning BeautifulSoup and I want to make a list of all image urls from a webpage (https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection).
import requests
from bs4 import BeautifulSoup
url = 'https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection/'
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, 'html.parser')
for link in soup.find_all('a'):
    print(link.get('href'))
The code above doesn't yield any image urls.
And when I print(soup), I can't see any image urls either.
But when I right click on one of the images and manually copy the link, I find out the url starts with https://storage.googleapis.com/kagglesdsdata/datasets/165566/377107/.
So I try setting url = 'https://storage.googleapis.com/kagglesdsdata/datasets/165566/377107/' for the above code, but that doesn't yield any image urls either.
Thanks for any help!
Since it is not in the page source, it is likely loaded in by JavaScript. You could look at the JavaScript to see where it is generated, or alternatively, if you are just using this to learn BeautifulSoup, you can get the rendered page source with Selenium. Selenium uses ChromeDriver, so you will need to have a chromedriver in your repo (extract it from https://chromedriver.storage.googleapis.com/92.0.4515.107/chromedriver_win32.zip, where 92.0.4515.107 is the version you want, or the latest version, which you can see here).
from selenium import webdriver
from bs4 import BeautifulSoup
url = 'https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection/'
driver = webdriver.Chrome()
driver.get(url)
soup = BeautifulSoup(driver.page_source, 'html.parser')
# parse the soup as you normally would from here
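# For example (a sketch only; what you find depends on what the rendered page
# actually contains), collect the src attribute of any img tags:
image_urls = [img.get('src') for img in soup.find_all('img') if img.get('src')]
print(image_urls)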
If you go to Chrome Dev Tools you can see the requests to these images:
If you click on one of these requests, you can see they have some query-string parameters with values like X-Goog-Signature and X-Goog-Algorithm.
To get this full URL you would need to replicate the POST request to https://www.kaggle.com/requests/GetDataViewExternalRequest, which returns a JSON object like so:
{"result":{"dataView":{"type":"url","dataTable":null,"dataUrl":{"url":"https://storage.googleapis.com/kagglesdsdata/datasets/165566/377107/no/1%20no.jpeg?X-Goog-Algorithm=GOOG4-RSA-SHA256\u0026X-Goog-Credential=databundle-worker-v2%40kaggle-161607.iam.gserviceaccount.com%2F20210801%2Fauto%2Fstorage%2Fgoog4_request\u0026X-Goog-Date=20210801T194627Z\u0026X-Goog-Expires=345599\u0026X-Goog-SignedHeaders=host\u0026X-Goog-Signature=2f3672e41a5821b19eb88a8452237a36943ca0cb54874ec47e47c832480870f1ae29ba4cab3e3717ab1decdb74012135bdb1324b85fd8159084dd9587f5504dbf60f6890f12277e418ddbbf61c720083ce7cca6b8936fa45cb9132a396c12136106c6dcfca8574475156f199169b2eecee7fd51fd784d7ddec3f8e3b80b75a17216893ffa22248e98e9bb5cae7cd5b3598e7f3fbbc6e51c24c864c8746c9fe202d1f6a221baea2f300dedf4ba62eb510d9369607ab2f6e659e3b4e4a18e763943632b110c57e223ffb9f1c09db8dac32da6e273f6248c5146dce8d5633ba38787394852b4bcc240dfa62210f042902e84833cf8817a050fc64655b0ed5f43ac9","type":"image"},"dataRaw":null,"dataCase":"dataUrl"}},"wasSuccessful":true}
This is easily parsed with code like:
r = requests.post("https://www.kaggle.com/requests/GetDataViewExternalRequest", data=data)
url = r.json()["result"]["dataView"]["dataUrl"]["url"]
The hard bit is generating this data, which looks something like this:
data = {
    "verificationInfo": {
        "competitionId": None,
        "datasetId": 165566,
        "databundleVersionId": 391742,
        "datasetHashLink": None
    },
    "firestorePath": "hIPSqqCWJs6oriNI20r6/versions/kKBcaXwa0lr8cvBuOMna/directories/no/files/1 no.jpeg",
    "tableQuery": None
}
I would expect most of those values to be static for this page; it's likely only the firestorePath changes. From a very quick search, it looks like all those values are scrapeable from the page, either using regex or BeautifulSoup.
The request also has some headers, including __requestverificationtoken and x-xsrf-token, which look like they are there to validate your request. They may be scrapeable, they may not; in any case, they are equal in value. You would need to add these headers to your request as well. I recommend this site to help with creating requests easily; you just need to check the requests and delete any values which are not constants.
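For illustration, a sketch of what that last step might look like, reusing the data dict above; the token values are placeholders you would scrape from the page or copy from the browser's request headers:
import requests

headers = {
    "__requestverificationtoken": "token-scraped-from-the-page",  # placeholder
    "x-xsrf-token": "token-scraped-from-the-page",                # placeholder
}
r = requests.post(
    "https://www.kaggle.com/requests/GetDataViewExternalRequest",
    data=data,
    headers=headers,
)
print(r.json())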
In summary, it's not easy! Use Selenium if speed is not an issue; use requests and work all this out if it is.
After all that, the best option is using their API as Phorys said. It will be very easy to use.
The scraper looks good, so the problem is usually the server, which tries to protect itself from... scrapers! BeautifulSoup is a great tool for parsing static pages, so the problem is more likely in how you request the page:
Pass a user-agent to your request
user_agent = 'Mozilla/5.0 (X11; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/88.0'  # for example
response = requests.get(url, headers={"user-agent": user_agent})
If this doesn't work, investigate the response (cookies, encoding, ...):
response = requests.get(url)
for key, value in response.headers.items():
    print(key, ":", value)
If you want only the links, this may be better:
for link in soup.find_all('a', href=True):  # so you are sure you don't get an error when retrieving the value
    print(link['href'])
Personally, I would look into using Kaggle's official API to access the data. They don't seem particularly restrictive, and once you have it figured out, it should also be possible to upload solutions: https://www.kaggle.com/docs/api
Just to push you in the right direction:
"The Kaggle API and CLI tool provide easy ways to interact with Datasets on Kaggle. The commands available can make searching for and downloading Kaggle Datasets a seamless part of your data science workflow.
If you haven’t installed the Kaggle Python package needed to use the command line tool or generated an API token, check out the getting started steps first.
Some of the commands for interacting with Datasets via CLI include:"
kaggle datasets list -s [KEYWORD]: list datasets matching a search term
kaggle datasets download -d [DATASET]: download files associated with a dataset
I'm new to web scraping, programming, and StackOverflow, so I'll try to phrase things as clearly as I can.
I'm using the Python requests library to try to scrape some info from a local movie theatre chain. When I look at the Chrome developer tools response/preview tabs in the network section, I can see what appears to be very clean and useful JSON:
However, when I try to use requests to obtain this same info, I instead get the entire page content (pages upon pages of HTML). Upon further inspection of the request waterfall in the Chrome developer tools, I can see there are two events called GetNowPlayingByCity: one contains the JSON info while the other seems to be the HTML.
JSON Response
HTML Response
How can I separate the two and only obtain the JSON response using the Python requests library?
I have already tried modifying the headers within requests.post (the Chrome developer tools indicate this is a post method) to include "accept: application/json, text/plain, */*" but didn't see a difference in the response I was getting with requests.post. As it stands I can't parse any JSON from the response I get with requests.post and get the following error:
"json.decoder.JSONDecodeError: Expecting value: line 4 column 1 (char 3)"
I can always try to parse the full HTML, but it's so long and complex I would much rather work with friendly JSON info. Any help would be much appreciated!
This is probably because the JavaScript the page sends to your browser makes a request to an API to get the JSON info about the movies.
You could either try sending the request directly to their API (see Edit 2), parse the HTML with a library like Beautiful Soup, or use a dedicated scraping library in Python. I've had great experiences with Scrapy; it is much faster than requests.
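As a taste of what that looks like, here is a minimal Scrapy spider sketch; the URL is just a placeholder:
import scrapy

class MoviesSpider(scrapy.Spider):
    name = "movies"
    start_urls = ["http://paste.the.url/?here="]  # placeholder URL

    def parse(self, response):
        # Extract whatever fields you need; this just grabs the page title.
        yield {"title": response.css("title::text").get()}

# Run with: scrapy runspider this_file.py -o movies.json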
Edit:
If the page uses dynamically loaded content, which I think is the case, you'd have to use Selenium with the PhantomJS browser instead of requests. Here is an example:
from bs4 import BeautifulSoup
from selenium import webdriver
url = "your url"
browser = webdriver.PhantomJS()
browser.get(url)
html = browser.page_source
soup = BeautifulSoup(html, 'lxml')
# Then parse the html code here
Or you could load the dynamic content with scrapy
I recommend the latter if you want to get into scraping. It would take a bit more time to learn but it is a better solution.
Edit 2:
To make a request directly to their API, you can just reproduce the request you see. Using Google Chrome, you can see the request details if you click on it and go to 'Headers':
After that, you simply reproduce the request using the requests library:
import requests
import json
url = 'http://paste.the.url/?here='
response = requests.get(url)
content = response.content
# in my case content was byte string
# (it looks like b'data' instead of 'data' when you print it)
# if this is you case, convert it to string, like so
content_string = content.decode()
content_json = json.loads(content_string)
# do whatever you like with the data
You can modify the URL as you see fit; for example, if it is something like http://api.movies.com/?page=1&movietype=3, you could change movietype=3 to movietype=2 to see a different type of movie, etc.
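For example (using the same hypothetical URL and parameter names as above), you can let requests build the query string from a params dict so the values are easy to tweak:
import requests

params = {"page": 1, "movietype": 2}
response = requests.get("http://api.movies.com/", params=params)
print(response.json())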
I've been trying to create a simple web scraper program to scrape the book titles of a top-100 bestseller list on Amazon. I've used this code before on another site with no problems. But for some reason, it scrapes the first page fine but then posts the same results for the following iterations.
I'm not sure if it has something to do with how Amazon creates its URLs or not. When I manually enter "#2" (and beyond) at the end of the URL in the browser, it navigates fine.
(Once the scrape is working I plan on dumping the data in csv files. But for now, print to the terminal will do.)
import requests
from bs4 import BeautifulSoup
for i in range(5):
    url = "https://smile.amazon.com/Best-Sellers-Kindle-Store-Dystopian-Science-Fiction/zgbs/digital-text/6361470011/ref=zg_bs_nav_kstore_4_158591011#{}".format(i)
    r = requests.get(url)
    soup = BeautifulSoup(r.content, "lxml")
    for book in soup.find_all('div', class_='zg_itemWrapper'):
        title = book.find('div', class_='p13n-sc-truncate')
        name = book.find('a', class_='a-link-child')
        price = book.find('span', class_='p13n-sc-price')
        print(title)
        print(name)
        print(price)
print("END")
This is a common problem you have to face: some sites load their data asynchronously (with AJAX). Those are XMLHttpRequests, which you can see in the Network tab of your DOM inspector. Usually such websites load the data from a different endpoint with a POST method; to handle that you can use the urllib or requests library.
In this case the request goes through a GET method, and you can scrape the data from this URL with no need to extend your code: https://www.amazon.com/Best-Sellers-Kindle-Store-Dystopian-Science-Fiction/zgbs/digital-text/6361470011/ref=zg_bs_pg_3?_encoding=UTF8&pg=3&ajax=1, where you only change the pg parameter.
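A rough sketch of that approach, looping over the pg parameter of the AJAX endpoint quoted above (Amazon may still block plain requests, so extra headers or delays might be needed):
import requests
from bs4 import BeautifulSoup

base = ("https://www.amazon.com/Best-Sellers-Kindle-Store-Dystopian-Science-Fiction/"
        "zgbs/digital-text/6361470011/ref=zg_bs_pg_3?_encoding=UTF8&pg={page}&ajax=1")

for page in range(1, 6):
    r = requests.get(base.format(page=page))
    soup = BeautifulSoup(r.content, "lxml")
    # Parse the returned fragment the same way as in the original loop.
    for book in soup.find_all('div', class_='zg_itemWrapper'):
        print(book.find('span', class_='p13n-sc-price'))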
I'm attempting to scrape some data from www.ksl.com/auto/ using Python Requests and Beautiful Soup. I'm able to get the results from the first search page but not subsequent pages. When I request the second page using the same URL Chrome constructs when I click the "Next" button on the page, I get a set of results that no longer matches my search query. I've found other questions on Stack Overflow that discuss Ajax calls that load subsequent pages, and using Chrome's Developer tools to examine the request. But, none of that has helped me with this problem -- which I've had on other sites as well.
Here is an example query that returns only Acuras on the site. When you advance in the browser to the second page, the URL is simply this: https://www.ksl.com/auto/search/index?page=1. When I use Requests to hit those two URLs, the second search results are not Acuras. Is there, perhaps a cookie that my browser is passing back to the server to preserve my filters?
I would appreciate any advice someone can give about how to get subsequent pages of the results I searched for.
Here is the simple code I'm using:
from requests import get
from bs4 import BeautifulSoup
page1 = get('https://www.ksl.com/auto/search/index?keyword=&make%5B%5D=Acura&yearFrom=&yearTo=&mileageFrom=&mileageTo=&priceFrom=&priceTo=&zip=&miles=25&newUsed%5B%5D=All&page=0&sellerType=&postedTime=&titleType=&body=&transmission=&cylinders=&liters=&fuel=&drive=&numberDoors=&exteriorCondition=&interiorCondition=&cx_navSource=hp_search&search.x=63&search.y=8&search=Search+raquo%3B').text
page2 = get('https://www.ksl.com/auto/search/index?page=2').text
soup = BeautifulSoup(page1, 'html.parser')
listings = soup.findAll("div", { "class" : "srp-listing-body-right" })
listings[0] # An Acura - success!
soup2 = BeautifulSoup(page2, 'html.parser')
listings2 = soup2.findAll("div", { "class" : "srp-listing-body-right" })
listings2[0] # Not an Acura. :(
Try this: create a Session object and then make the calls through it. This will maintain your session with the server when you request the next page.
import requests
from bs4 import BeautifulSoup
s = requests.Session() # Add this line
page1 = s.get('https://www.ksl.com/auto/search/index?keyword=&make%5B%5D=Acura&yearFrom=&yearTo=&mileageFrom=&mileageTo=&priceFrom=&priceTo=&zip=&miles=25&newUsed%5B%5D=All&page=0&sellerType=&postedTime=&titleType=&body=&transmission=&cylinders=&liters=&fuel=&drive=&numberDoors=&exteriorCondition=&interiorCondition=&cx_navSource=hp_search&search.x=63&search.y=8&search=Search+raquo%3B').text
page2 = s.get('https://www.ksl.com/auto/search/index?page=1').text
Yes, the website uses cookies so that https://www.ksl.com/auto/search/index shows or extends your last search. More specifically, the search parameters are stored on the server for your particular session cookie, that is, the value of the PHPSESSID cookie.
However, instead of passing that cookie around, you can simply always do full queries (in the sense of the request parameters), each time using a different value for the page parameter.
https://www.ksl.com/auto/search/index?keyword=&make%5B%5D=Acura&yearFrom=&yearTo=&mileageFrom=&mileageTo=&priceFrom=&priceTo=&zip=&miles=25&newUsed%5B%5D=All&page=0&sellerType=&postedTime=&titleType=&body=&transmission=&cylinders=&liters=&fuel=&drive=&numberDoors=&exteriorCondition=&interiorCondition=&cx_navSource=hp_search&search.x=63&search.y=8&search=Search+raquo%3B
https://www.ksl.com/auto/search/index?keyword=&make%5B%5D=Acura&yearFrom=&yearTo=&mileageFrom=&mileageTo=&priceFrom=&priceTo=&zip=&miles=25&newUsed%5B%5D=All&page=1&sellerType=&postedTime=&titleType=&body=&transmission=&cylinders=&liters=&fuel=&drive=&numberDoors=&exteriorCondition=&interiorCondition=&cx_navSource=hp_search&search.x=63&search.y=8&search=Search+raquo%3B
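A sketch of that full-query approach, keeping the search parameters in a dict and varying only page; the empty parameters from the URLs above are omitted here, and the site may or may not require them:
import requests
from bs4 import BeautifulSoup

params = {
    "keyword": "",
    "make[]": "Acura",
    "miles": "25",
    "newUsed[]": "All",
    "cx_navSource": "hp_search",
}

for page in range(2):
    params["page"] = page
    r = requests.get("https://www.ksl.com/auto/search/index", params=params)
    soup = BeautifulSoup(r.text, "html.parser")
    listings = soup.find_all("div", {"class": "srp-listing-body-right"})
    print(page, len(listings))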