I am trying to grab an element from tradingview.com, specifically this link. I want the price of a symbol for whatever link I give my program. Looking through the elements of the URL, I noticed I can find the price of the stock here:
<div class="tv-symbol-price-quote__value js-symbol-last">
"3.065"
<span class>57851</span>
</div>
When running the code below, I get this output.
#This will not run on online IDE
import requests
from bs4 import BeautifulSoup
URL = "https://www.tradingview.com/symbols/NEARUSD/"
r = requests.get(URL)
soup = BeautifulSoup(r.content, 'html.parser')  # built-in parser; 'lxml' or 'html5lib' also work if installed
L = [soup.find_all(class_ = "tv-symbol-price-quote__value js-symbol-last")]
print(L)
Output:
[[<div class="tv-symbol-price-quote__value js-symbol-last"></div>]]
How can I grab the entire price from this website? I would like the 3.065 as well as the 57851.
You have the most common problem: the page uses JavaScript to add/update elements, but BeautifulSoup/lxml and requests/urllib can't run JavaScript. You may need Selenium to control a real web browser that can run JS. Alternatively, use DevTools in Firefox/Chrome (the Network tab) manually to see whether JavaScript reads the data from some URL, and try to use that URL with requests. JS usually fetches JSON, which is easily converted to a Python dictionary (no BeautifulSoup needed). You can also check whether the page has a (free) API for programmers.
Using DevTools, I found that the page uses JavaScript to send a POST request (with some JSON data) to get a fresh price.
import requests
payload = {
"columns": ["market_cap_calc", "market_cap_diluted_calc", "total_shares_outstanding", "total_shares_diluted", "total_value_traded"],
"range": [0, 1],
"symbols": {"tickers": ["BINANCE:NEARUSD"]}
}
url = 'https://scanner.tradingview.com/crypto/scan'
response = requests.post(url, json=payload)
print(response.text)
data = response.json()
print(data['data'][0]["d"][1]/1_000_000_000)  # diluted market cap divided by the 1,000,000,000 diluted shares, which roughly matches the price
Result:
{"totalCount":1,"data":[{"s":"BINANCE:NEARUSD","d":[2507704855.0467912,3087555230,812197570,1000000000,106737372.9550421]}]}
3.08755523
EDIT:
It seems the above code gives only the market cap. The page uses a websocket to get a fresh price every few seconds:
wss://data.tradingview.com/socket.io/websocket?from=symbols%2FNEARUSD%2F&date=2022_10_17-11_33
Handling that would need more complex code.
The other answer (with Selenium) gives you the correct value.
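For completeness, a rough sketch of what merely opening that websocket could look like with the third-party websocket-client package. The Origin header is an assumption, and you would still have to implement TradingView's own message protocol on top of this to actually receive quotes:
import websocket  # third-party package: pip install websocket-client
ws_url = "wss://data.tradingview.com/socket.io/websocket?from=symbols%2FNEARUSD%2F"
# Assumption: the server expects a browser-like Origin header; without the proper
# follow-up protocol messages you will only see the initial "~m~"-framed frames.
ws = websocket.create_connection(ws_url, header=["Origin: https://www.tradingview.com"])
for _ in range(3):
    print(ws.recv())  # print the first raw frames the server sends
ws.close()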
The webpage's contents are loaded dynamically by JavaScript, so you have to use an automation tool such as Selenium, or find the hidden API.
Here I use selenium with bs4 to grab the desired dynamic content.
import time
from selenium import webdriver
from bs4 import BeautifulSoup
from selenium.webdriver.chrome.service import Service
webdriver_service = Service("./chromedriver") #Your chromedriver path
driver = webdriver.Chrome(service=webdriver_service)
url= "https://www.tradingview.com/symbols/NEARUSD/"
driver.get(url)
driver.maximize_window()
time.sleep(5)
soup = BeautifulSoup(driver.page_source,"lxml")
price = soup.find('div',class_ = "tv-symbol-price-quote__value js-symbol-last").get_text(strip=True)
print(price)
Output:
3.07525163
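If the fixed time.sleep(5) ever proves flaky, an explicit wait is usually more robust. A minimal sketch, assuming the same class names as above and that chromedriver is reachable (on PATH or via Service as in the code above):
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Chrome()
driver.get("https://www.tradingview.com/symbols/NEARUSD/")
# Wait up to 15 seconds for the price element instead of sleeping blindly.
price_el = WebDriverWait(driver, 15).until(
    EC.presence_of_element_located(
        (By.CSS_SELECTOR, "div.tv-symbol-price-quote__value.js-symbol-last")
    )
)
print(price_el.text)
driver.quit()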
Related
I am working on scraping numbers from the Powerball website with the code below.
However, numbers keeps coming back empty. Why is this?
import requests
from bs4 import BeautifulSoup
url = 'https://www.powerball.com/games/home'
page = requests.get(url).text
bsPage = BeautifulSoup(page)
numbers = bsPage.find_all("div", class_="field_numbers")
numbers
I can confirm #Teprr is absolutely correct. You'll need to download Chrome and add chromedriver.exe to your system path for this to work, but the following code gets what you are looking for. You can use other browsers too; you just need their respective driver.
from bs4 import BeautifulSoup
from selenium import webdriver
import time
url = 'https://www.powerball.com/games/home'
options = webdriver.ChromeOptions()
options.add_argument('headless')
browser = webdriver.Chrome(options=options)
browser.get(url)
time.sleep(3) # wait three seconds for all the js to happen
html = browser.page_source
soup = BeautifulSoup(html, 'html.parser')
draws = soup.findAll("div", {"class":"number-card"})
print(draws)
for d in draws:
    info = d.find("div", {"class": "field_draw_date"}).getText()
    balls = d.find("div", {"class": "field_numbers"}).findAll("div", {"class": "numbers-ball"})
    numbers = [ball.getText() for ball in balls]
    print(info)
    print(numbers)
If you download that file and inspect it locally, you can see that there is no <div> with that class. That means that it is likely generated dynamically using javascript by your browser, so you would need to use something like selenium to get the full, generated HTML content.
Anyway, in this specific case, this piece of HTML seems to be the container for the data you are looking for:
<div data-url="/api/v1/numbers/powerball/recent?_format=json" class="recent-winning-numbers"
data-numbers-powerball="Power Play" data-numbers="All Star Bonus">
Now, if you check that custom data-url, you can find the information you want in JSON format.
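For example, a minimal sketch of calling that endpoint directly with requests. The full URL below is the relative data-url prefixed with the site's domain (an assumption that the endpoint is publicly reachable without extra headers); inspect the returned JSON for the exact field names:
import requests
url = "https://www.powerball.com/api/v1/numbers/powerball/recent?_format=json"
response = requests.get(url)
response.raise_for_status()
data = response.json()
print(data)  # inspect the structure, then pull out the draw date / winning-numbers fields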
I am looking to scrape the following web page, where I wish to grab all the text on the page, including all the clickable elements.
I've attempted to use requests:
import requests
response = requests.get("https://cronoschimp.club/market/details/2424?isHonorary=false")
response.text
This scrapes the metadata but none of the actual data.
Is there a way to click through and get the elements in the floating boxes?
As it's a JavaScript-enabled web page, you can't get anything as output using requests and bs4, because they can't render JavaScript; you need an automation tool such as Selenium. Here I use Selenium with bs4 and it's working fine. Please see the minimal working example as follows:
Code:
from bs4 import BeautifulSoup
import time
from selenium import webdriver
driver = webdriver.Chrome('chromedriver.exe')
driver.maximize_window()
time.sleep(8)
url = 'https://cronoschimp.club/market/details/2424?isHonorary=false'
driver.get(url)
time.sleep(20)
soup = BeautifulSoup(driver.page_source, 'lxml')
name = soup.find('div',class_="DetailsHeader_title__1NbGC").get_text(strip=True)
p= soup.find('span',class_="DetailsHeader_value__1wPm8")
price= p.get_text(strip=True) if p else "Not for sale"
print([name,price])
Output:
['Chimp #2424', 'Not for sale']
I have been trying to use web scraping on a website using the requests and Beautifulsoup python libraries.
The problem is that I'm getting the HTML of the web page, but the body tag's content is empty, whereas in the inspect panel on the website it isn't.
Can anyone explain why this is happening and what I can do to get the content of the body?
Here is my code:
from bs4 import BeautifulSoup
import requests
source = requests.get('https://webaccess-il.rexail.com/?s_jwe=eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..gKfb7AnqhUiIMIn0PGb35g.SUsLS70gBec9GBgraaV5BK8hKyqm-VvMSNjP3nIumtcrj9h19zOkYjaBHrW4SDL10DjeIcwQcz9ul1p8umMHKxPPC-QZpCyJbk7JQkUSqFM._d_sGsiSyPF_Xqs2hmLN5A#/store-products-shopping-non-customers').text
soup = BeautifulSoup(source, 'lxml')
print(soup)
Here is the inspect panel of the website:
And here is the output of my code:
Thank you :)
There are two reasons your code might not work. The first is that the website requires additional header or cookie information, which you can try to find using the browser's inspect tool (DevTools) and add via
requests.get(url, headers=headers, cookies=cookies)
where headers and cookies are dictionaries.
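For example, a quick sketch; the header and cookie values here are placeholders that you would copy from the request shown in the browser's Network tab, not values taken from this site:
import requests
# Placeholder values -- copy the real ones from the request shown in DevTools (Network tab).
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}
cookies = {"session_id": "value-copied-from-devtools"}
response = requests.get("https://example.com/", headers=headers, cookies=cookies)
print(response.status_code)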
The other reason, which I believe is the case here, is that the content is dynamically loaded via JavaScript after the site is built, and what you get is only the initially loaded website.
To also provide you with a solution, I attach an example using Selenium, which simulates a whole browser and therefore serves the full website. However, Selenium has a bit of a setup overhead that you can easily google.
from time import sleep
from selenium import webdriver
from bs4 import BeautifulSoup
url = 'https://webaccess-il.rexail.com/?s_jwe=eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..gKfb7AnqhUiIMIn0PGb35g.SUsLS70gBec9GBgraaV5BK8hKyqm-VvMSNjP3nIumtcrj9h19zOkYjaBHrW4SDL10DjeIcwQcz9ul1p8umMHKxPPC-QZpCyJbk7JQkUSqFM._d_sGsiSyPF_Xqs2hmLN5A#/store-products-shopping-non-customers'
driver = webdriver.Firefox()
driver.get(url)
sleep(10)
content = driver.page_source
soup = BeautifulSoup(content, "html.parser")
If you do not want the browser simulation to be visible, you can add
from selenium.webdriver.firefox.options import Options
options = Options()
options.headless = True
driver = webdriver.Firefox(options=options)
which will make it run in the background.
As an alternative to Firefox, you can use pretty much any browser with the appropriate driver.
A Linux-based setup example can be found here: Link
Even though I find Selenium easier for beginners, that site bothered me, so I figured out a pure requests way that I also want to share.
Process:
When you look at the network traffic after loading the website, you find a lot of outgoing GET requests. Assuming you are interested in the products being loaded, I found a call, right above the product images being loaded from Amazon S3, going to
https://client-il.rexail.com/client/public/public-catalog?s_jwe=eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..gKfb7AnqhUiIMIn0PGb35g.SUsLS70gBec9GBgraaV5BK8hKyqm-VvMSNjP3nIumtcrj9h19zOkYjaBHrW4SDL10DjeIcwQcz9ul1p8umMHKxPPC-QZpCyJbk7JQkUSqFM._d_sGsiSyPF_Xqs2hmLN5A
importantly
https://client-il.rexail.com/client/public/public-catalog?s_jwe=[...]
Upon clicking the URL, I found it to be indeed a JSON of the products. However, the s_jwe token is dynamic, and without it the JSON doesn't load.
Now, investigating the initially loaded URL and searching for s_jwe, you will find
<script>
window.customerStore = {store: angular.fromJson({"id":26,"name":"\u05de\u05e9\u05e7 \u05d4\u05e8 \u05e4\u05e8\u05d7\u05d9\u05dd","imagePath":"images\/stores\/26\/88aa6827bcf05f9484b0dafaedf22b0a.png","secondaryImagePath":"images\/stores\/4d5d1f54038b217244956071ca62312d.png","thirdImagePath":"images\/stores\/26\/2f9294180e7d656ba7280540379869ee.png","fourthImagePath":"images\/stores\/26\/bd2861565b18613497a6ce66903bf9eb.png","externalWebTrackingAccounts":"[{\"accountType\":\"googleAnalytics\",\"identifier\":\"UA-130110792-1\",\"primaryDomain\":\"ecomeshek.co.il\"},{\"accountType\":\"facebookPixel\",\"identifier\":\"3958210627568899\"}]","worksWithStoreCoupons":false,"performSellingUnitsEstimationLearning":false}), s_jwe: "eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..gKfb7AnqhUiIMIn0PGb35g.SUsLS70gBec9GBgraaV5BK8hKyqm-VvMSNjP3nIumtcrj9h19zOkYjaBHrW4SDL10DjeIcwQcz9ul1p8umMHKxPPC-QZpCyJbk7JQkUSqFM._d_sGsiSyPF_Xqs2hmLN5A"};
const externalWebTrackingAccounts = angular.fromJson(customerStore.store.externalWebTrackingAccounts);
</script>
containing
s_jwe: "eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..gKfb7AnqhUiIMIn0PGb35g.SUsLS70gBec9GBgraaV5BK8hKyqm-VvMSNjP3nIumtcrj9h19zOkYjaBHrW4SDL10DjeIcwQcz9ul1p8umMHKxPPC-QZpCyJbk7JQkUSqFM._d_sGsiSyPF_Xqs2hmLN5A"
So to summarize: even though the initial page does not contain the products, it does contain the token and the product URL.
Now you can extract the two and call the product catalog directly as such:
FINAL CODE:
import requests
import re
import json
s = requests.Session()
initial_url = 'https://webaccess-il.rexail.com/?s_jwe=eyJhbGciOiJkaXIiLCJlbmMiOiJBMTI4Q0JDLUhTMjU2In0..gKfb7AnqhUiIMIn0PGb35g.SUsLS70gBec9GBgraaV5BK8hKyqm-VvMSNjP3nIumtcrj9h19zOkYjaBHrW4SDL10DjeIcwQcz9ul1p8umMHKxPPC-QZpCyJbk7JQkUSqFM._d_sGsiSyPF_Xqs2hmLN5A#/store-products-shopping-non-customers'
initial_site = s.get(url= initial_url).content.decode('utf-8')
jwe = re.findall(r's_jwe:.*"(.*)"', initial_site)
product_url = "https://client-il.rexail.com/client/public/public-catalog?s_jwe="+ jwe[0]
products_site = s.get(url= product_url).content.decode('utf-8')
products = json.loads(products_site)["data"]
print(products[0])
There is a little bit of fine-tuning required with the decoding, but I am sure you can manage that. ;)
This of course is the leaner way of scraping that website, but as I hopefully showed, scraping is always a bit of playing Sherlock Holmes.
Any questions, glad to help.
Solution: The action for this specific site is action="user/ajax/login", so this is what has to be appended to the URL of the main site in order to send the payload (the action can be found by searching the page source with Ctrl+F for "action"). The url variable is what is going to be scraped. The with requests.Session() as s: block is what maintains the cookies from within the site, which is what allows consistent scraping. The res variable is the response that posts the payload to the login URL, allowing the user to scrape from a specific account page; after the POST, requests then fetches the specified url. With this in place, BeautifulSoup can grab and parse the HTML from within the account's pages. "html.parser" and "lxml" are both compatible in this case. If there is HTML inside an iframe, it's doubtful it can be grabbed and parsed using only requests, so I recommend using Selenium, preferably with Firefox.
import requests
from bs4 import BeautifulSoup
payload = {"username":"?????", "password":"?????"}
url = "https://9anime.to/user/watchlist"
loginurl = "https://9anime.to/user/ajax/login"
with requests.Session() as s:
    res = s.post(loginurl, data=payload)
    res = s.get(url)
soup = BeautifulSoup(res.text, "html.parser")
[Windows 10] To install Selenium: pip3 install selenium. For the drivers: (Chrome: https://sites.google.com/a/chromium.org/chromedriver/downloads) (Firefox: https://github.com/mozilla/geckodriver/releases). To place geckodriver into PATH for Firefox Selenium: open Control Panel, go to "Environment Variables", select "Path", click "New", and enter the file location of geckodriver. Then you're all set.
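If editing PATH feels cumbersome, newer Selenium versions (4.x) also let you point at the driver binary directly. A sketch, with a hypothetical local geckodriver path you would replace with your own:
from selenium import webdriver
from selenium.webdriver.firefox.service import Service
# Hypothetical local path -- replace with wherever you saved geckodriver.
service = Service(r"C:\tools\geckodriver.exe")
driver = webdriver.Firefox(service=service)
driver.get("https://www.google.com/")
print(driver.title)
driver.quit()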
Also, in order to grab the iframes when using Selenium, try import time and time.sleep(5) after 'getting' the URL with your driver. This will give the site more time to load those extra iframes.
Example:
import time
from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Firefox() # The WebDriver for this script
driver.get("https://www.google.com/")
time.sleep(5) # Extra time for the iframe(s) to load
soup = BeautifulSoup(driver.page_source, "lxml")
print(soup.prettify()) # To see full HTML content
print(soup.find_all("iframe")) # Finds all iframes
print(soup.find("iframe"))["src"] # If you need the 'src' from within an iframe.
You're trying to make a GET request to a URL which requires being logged in, and it is therefore producing a 403 (Forbidden) error. This means that the request is not authenticated to view the content.
If you think about it in terms of the URL you're constructing in your GET request, you would literally expose the username (x) and password (y) within the URL, like so:
https://9anime.to/user/watchlist?username=x&password=y
... which would of course be a security risk.
Without knowing what specific access you have to this particular site, in principle, you need to simulate authentication with a POST request first and then perform the GET request on that page afterwards. A successful response would return a 200 status code ('OK') and then you would be in a position to use BeautifulSoup to parse the content and target your desired part of that content from between the relevant HTML tags.
I suggest, to start, opening the address of the login page and connecting. Then you add an
input('Enter something')
to pause the script while you connect (you must hit the ENTER key in the terminal to continue the process once connected, and voilà).
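Reading that suggestion as a Selenium flow (an assumption on my part, since no library is named): open the login page, sign in by hand in the browser window, then press ENTER in the terminal so the script continues and scrapes the now-authenticated page. The login URL below is hypothetical:
from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Firefox()
driver.get("https://9anime.to/user/login")  # hypothetical login page URL
input("Log in manually in the browser window, then press ENTER here...")
driver.get("https://9anime.to/user/watchlist")
soup = BeautifulSoup(driver.page_source, "html.parser")
print(soup.title)
driver.quit()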
Solved: The action tag was user/ajax/login in this case. So by appending that not to https://9anime.to/user/watchlist but to the main URL of the website, https://9anime.to, you get https://9anime.to/user/ajax/login, and this gives you the login URL.
import requests
from bs4 import BeautifulSoup as bs
url = "https://9anime.to/user/watchlist"
loginurl = "https://9anime.to/user/ajax/login"
payload = {"username":"?????", "password":"?????"}
with requests.Session() as s:
    res = s.post(loginurl, data=payload)
    res = s.get(url)
I am learning web scraping using Python but I can't get the desired result. Below is my code and the output.
Code:
import bs4,requests
url = "https://twitter.com/24x7chess"
r = requests.get(url)
soup = bs4.BeautifulSoup(r.text,"html.parser")
soup.find_all("span",{"class":"account-group-inner"})
[]
Here is what I was trying to scrape
https://i.stack.imgur.com/tHo5S.png
I keep on getting an empty array. Please help.
Sites like Twitter load content dynamically, which sometimes depends on the browser you are using, etc. Due to dynamic loading, some elements of the webpage are lazily loaded: the DOM is inflated dynamically, depending on user actions. The tag you see in your browser's Inspect Element view is the fully inflated HTML, but the response you get with requests is the initial HTML, a bare DOM still waiting to load elements dynamically on user actions, which is why the tag you're looking for comes back empty in your case.
I would suggest you use Selenium WebDriver for scraping dynamic JavaScript web pages.
Try this. It will give you the items you are probably looking for. Selenium with BeautifulSoup is easy to handle, and I've written it that way. Here it is.
from bs4 import BeautifulSoup
from selenium import webdriver
driver = webdriver.Chrome()
driver.get("https://twitter.com/24x7chess")
soup = BeautifulSoup(driver.page_source,"lxml")
driver.quit()
for title in soup.select("#page-container"):
    name = title.select(".ProfileHeaderCard-nameLink")[0].text.strip()
    location = title.select(".ProfileHeaderCard-locationText")[0].text.strip()
    tweets = title.select(".ProfileNav-value")[0].text.strip()
    following = title.select(".ProfileNav-value")[1].text.strip()
    followers = title.select(".ProfileNav-value")[2].text.strip()
    likes = title.select(".ProfileNav-value")[3].text.strip()
    print(name, location, tweets, following, followers, likes)
Output:
akul chhillar New Delhi, India 214 44 17 5
You could have done the whole thing with requests rather than Selenium:
import requests
from bs4 import BeautifulSoup as bs
import re
r = requests.get('https://twitter.com/24x7chess')
soup = bs(r.content, 'lxml')
bio = re.sub(r'\n+',' ', soup.select_one('[name=description]')['content'])
stats_headers = ['Tweets', 'Following', 'Followers', 'Likes']
stats = [item['data-count'] for item in soup.select('[data-count]')]
data = dict(zip(stats_headers, stats))
print(bio, data)