I'm trying to scrape automobile information from a dynamic web page. However, when I open the page in a Selenium-driven Chrome browser, the inspected elements do not match the original page source. Instead of the HTML for the car details (the informative area near the product image), an "::after" pseudo-element appears in the HTML source.
You can see my scraping code below:
from bs4 import BeautifulSoup
from selenium import webdriver

driver_path = "C:\\Desktop\\chromedriver.exe"
driver = webdriver.Chrome(driver_path)
driver.get('https://www.arabam.com/ilan/galeriden-satilik-citroen-c-elysee-1-6-hdi-attraction/fiat-onkol-oto-dan-c-elysee-1-6-attraction-92-hp-beyaz/14046287')
soup = BeautifulSoup(driver.page_source, 'html.parser')
table = soup.table
table_rows = table.find_all('li')
print(table_rows)
When I used the code above to pull the relevant information from the web page, I could not see any of the HTML attributes needed for the further scraping loops.
What could be the reason for this problem, and how can I solve it?
Thanks,
Edit: screenshots comparing the two:
- HTML element content in the Selenium browser
- The normal Google Chrome HTML element content that I am trying to reach
There is no <table> in the HTML page you provided, so try a different selector. Note that find_elements_by_class_name cannot take a compound class name, so select by CSS instead:
driver.find_elements_by_css_selector(".w100.semi-bold.lh18")
This should give you an ordered list of the span elements.
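If the page builds that list with JavaScript after the initial load, it can also help to wait for it explicitly before reading anything; a minimal sketch, assuming the class names above are what the rendered page actually uses:
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome("C:\\Desktop\\chromedriver.exe")
driver.get('https://www.arabam.com/ilan/galeriden-satilik-citroen-c-elysee-1-6-hdi-attraction/fiat-onkol-oto-dan-c-elysee-1-6-attraction-92-hp-beyaz/14046287')

# wait until at least one of the spec spans has actually been rendered
WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, ".w100.semi-bold.lh18")))

for span in driver.find_elements_by_css_selector(".w100.semi-bold.lh18"):
    print(span.text)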
Related
I am trying to scrape data from the AGMARKNET website. The tables are split across 11 pages, but all of the pages use the same URL. I am very new to web scraping (and to Python in general), but AGMARKNET does not have a public API, so scraping the page seems to be my only option. I am currently using BeautifulSoup to parse the HTML and I am able to scrape the initial table, but it only contains the first 500 data points; I want the data from all 11 pages. I am stuck and frustrated. The link and my current code are below. Any direction would be helpful, thank you.
https://agmarknet.gov.in/SearchCmmMkt.aspx?Tx_Commodity=17&Tx_State=JK&Tx_District=0&Tx_Market=0&DateFrom=01-Oct-2004&DateTo=18-Oct-2022&Fr_Date=01-Oct-2004&To_Date=18-Oct-2022&Tx_Trend=2&Tx_CommodityHead=Apple&Tx_StateHead=Jammu+and+Kashmir&Tx_DistrictHead=--Select--&Tx_MarketHead=--Select--
import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'https://agmarknet.gov.in/SearchCmmMkt.aspx?Tx_Commodity=17&Tx_State=JK&Tx_District=0&Tx_Market=0&DateFrom=01-Oct-2004&DateTo=18-Oct-2022&Fr_Date=01-Oct-2004&To_Date=18-Oct-2022&Tx_Trend=2&Tx_CommodityHead=Apple&Tx_StateHead=Jammu+and+Kashmir&Tx_DistrictHead=--Select--&Tx_MarketHead=--Select--'
response = requests.get(url)
# Use BeautifulSoup to parse the HTML code
soup = BeautifulSoup(response.content, 'html.parser')
# find the results table (a ResultSet); this line was missing from the
# snippet as posted, and grabbing every <table> is a guess at the intent
stat_table = soup.find_all('table')
# changes stat_table from ResultSet to a Tag
stat_table = stat_table[0]
# Convert html table to list
rows = []
for tr in stat_table.find_all('tr')[1:]:
    cells = []
    tds = tr.find_all('td')
    if len(tds) == 0:
        ths = tr.find_all('th')
        for th in ths:
            cells.append(th.text.strip())
    else:
        for td in tds:
            cells.append(td.text.strip())
    rows.append(cells)
# convert table to df
table = pd.DataFrame(rows)
The website you linked to seems to be using JavaScript to navigate to the next page. The requests library only fetches the static HTML and BeautifulSoup only parses it; neither of them can run JavaScript.
Instead, you should try something like Selenium, which drives a real browser environment (HTML, CSS, JavaScript and all). In fact, Selenium can even open a full browser window so you can see it in action as it navigates!
Here is a quick sample code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.firefox.options import Options
# If you prefer Chrome to Firefox, there is a driver available
# for that as well
# Set the URL
url = 'https://agmarknet.gov.in/SearchCmmMkt.aspx?Tx_Commodity=17&Tx_State=JK&Tx_District=0&Tx_Market=0&DateFrom=01-Oct-2004&DateTo=18-Oct-2022&Fr_Date=01-Oct-2004&To_Date=18-Oct-2022&Tx_Trend=2&Tx_CommodityHead=Apple&Tx_StateHead=Jammu+and+Kashmir&Tx_DistrictHead=--Select--&Tx_MarketHead=--Select--'
# Start the browser
opts = Options()
driver = webdriver.Firefox(options=opts)
driver.get(url)
Now you can use functions like driver.find_element(...) and driver.find_elements(...) to extract the data you want from this page, the same way you did with BeautifulSoup.
For your given link, the page number navigators seem to be running a function of the form,
__doPostBack('ctl00$cphBody$GridViewBoth','Page$2')
...replacing Page$2 with Page$3, Page$4, etc. depending on which page you want. So you can use Selenium to run that JavaScript function when you're ready to navigate.
driver.execute_script("__doPostBack('ctl00$cphBody$GridViewBoth','Page$2')")
A more generic solution is to just select which button you want and then run that button's click() function. General example (not necessarily for the current website):
btn = driver.find_element('id', 'next-button')
btn.click()
A final note: after the button is clicked, you might want to time.sleep(...) for a little while to make sure the page is fully loaded before you start processing the next set of data.
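Putting the pieces together, a rough sketch of the full paging loop; the 11-page count comes from the question, and treating the first <table> on the page as the results grid is my assumption:
from selenium import webdriver
from bs4 import BeautifulSoup
import pandas as pd
import time

url = 'https://agmarknet.gov.in/SearchCmmMkt.aspx?...'  # the full search URL from above
driver = webdriver.Firefox()
driver.get(url)

rows = []
for page in range(1, 12):  # the question mentions 11 pages
    soup = BeautifulSoup(driver.page_source, 'html.parser')
    table = soup.find('table')  # assumption: the results grid is the first table
    for tr in table.find_all('tr')[1:]:
        rows.append([cell.get_text(strip=True) for cell in tr.find_all(['td', 'th'])])
    if page < 11:
        driver.execute_script(
            "__doPostBack('ctl00$cphBody$GridViewBoth','Page$%d')" % (page + 1))
        time.sleep(3)  # crude wait for the postback; tune or replace with an explicit wait

df = pd.DataFrame(rows)
driver.quit()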
I am looking to scrape the following web page, where I wish to scrape all the text on the page, including all the clickable elements.
I've attempted to use requests:
import requests
response = requests.get("https://cronoschimp.club/market/details/2424?isHonorary=false")
response.text
Which scrapes the meta-data but none of the actual data.
Is there a way to click through and get the elements in the floating boxes?
As it's a JavaScript-enabled web page, you can't get anything useful out of requests or bs4, because they can't render JavaScript. You need an automation tool such as Selenium. Here I use Selenium with bs4 and it works fine. Please see the minimal working example below:
Code:
from bs4 import BeautifulSoup
import time
from selenium import webdriver
driver = webdriver.Chrome('chromedriver.exe')
driver.maximize_window()

url = 'https://cronoschimp.club/market/details/2424?isHonorary=false'
driver.get(url)
time.sleep(20)  # give the page's JavaScript time to render

soup = BeautifulSoup(driver.page_source, 'lxml')
name = soup.find('div', class_="DetailsHeader_title__1NbGC").get_text(strip=True)
p = soup.find('span', class_="DetailsHeader_value__1wPm8")
price = p.get_text(strip=True) if p else "Not for sale"
print([name, price])
Output:
['Chimp #2424', 'Not for sale']
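If the fixed sleeps feel brittle or slow, an explicit wait for the element you need is a common alternative; a sketch reusing the driver, imports, and class name from the snippet above:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# block (up to 30 s) until the title div has actually been rendered
WebDriverWait(driver, 30).until(
    EC.presence_of_element_located((By.CLASS_NAME, "DetailsHeader_title__1NbGC")))
soup = BeautifulSoup(driver.page_source, 'lxml')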
This question follows this previous question. I want to scrape data from a betting site using Python. I first tried to follow this tutorial, but the problem is that the site tipico is not available from Switzerland. I thus chose another betting site: Winamax. In the tutorial, the tipico web page is first inspected in order to find where the betting rates are located in the HTML file. On the tipico page, they were stored in buttons of class "c_but_base c_but". By writing the following lines, the rates could therefore be saved and printed using the Beautiful Soup module:
from bs4 import BeautifulSoup
import urllib.request
import re
url = "https://www.tipico.de/de/live-wetten/"
try:
    page = urllib.request.urlopen(url)
except:
    print("An error occurred.")
soup = BeautifulSoup(page, 'html.parser')
regex = re.compile('c_but_base c_but')
content_lis = soup.find_all('button', attrs={'class': regex})
print(content_lis)
I thus tried to do the same with the webpage Winamax. I inspected the page and found that the betting rates were stored in buttons of class "ui-touchlink-needsclick price odd-price". See the code below:
from bs4 import BeautifulSoup
import urllib.request
import re
url = "https://www.winamax.fr/paris-sportifs/sports/1/7/4"
try:
    page = urllib.request.urlopen(url)
except Exception as e:
    print(f"An error occurred: {e}")
soup = BeautifulSoup(page, 'html.parser')
regex = re.compile('ui-touchlink-needsclick price odd-price')
content_lis = soup.find_all('button', attrs={'class': regex})
print(content_lis)
The problem is that it prints nothing: Python does not find any elements of that class (right?). To see what the BeautifulSoup function was actually doing, I tried to print the soup object. I added this line:
print(soup)
When printing it (I do not show the printout of soup because it is too long), I noticed that it is not the same text as what appears when I right-click and "inspect" the Winamax web page. So what is the BeautifulSoup function actually doing? How can I store the betting rates from the Winamax website using BeautifulSoup?
EDIT: I have never coded in HTML and I'm a beginner in Python, so some terminology might be wrong; that's why some parts are in italics.
That's because the website is using JavaScript to display these details, and BeautifulSoup does not interact with JS on its own.
First, check whether the element you want to scrape is present in the page source: if it is, you can scrape pretty much everything! In your case, the button/span tags were not in the page source, meaning they are hidden or pulled in through a script.
No <button> tag in the page source:
So I suggest using Selenium as the solution, and I tried a basic scrape of the website.
Here is the code I used :
from selenium import webdriver

option = webdriver.ChromeOptions()
option.add_argument('--headless')  # run Chrome without opening a window
option.binary_location = r'Your chrome.exe file path'
browser = webdriver.Chrome(executable_path=r'Your chromedriver.exe file path', options=option)
browser.get(r"https://www.winamax.fr/paris-sportifs/sports/1/7/4")

# dump the text of every <span> on the rendered page
span_tags = browser.find_elements_by_tag_name('span')
for span_tag in span_tags:
    print(span_tag.text)
browser.quit()
This is the output:
There is some junk data in this output, but it's up to you to figure out what you need and what you don't!
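To cut down the junk, you could target the odds buttons directly through the compound class the question already identified, rather than dumping every span; a short sketch under that assumption:
# select only the odds buttons via a CSS selector built from the class list
buttons = browser.find_elements_by_css_selector("button.ui-touchlink-needsclick.price.odd-price")
for btn in buttons:
    print(btn.text)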
I'm trying to scrape the "get CNSs" drop-down menu from the following page.
Just to walk you through: I start with a main link that links to all the sequences (the link above is the URL of one sequence). I go to that link and try to grab each item from the drop-down menu; each item takes you to a different page (this is the main issue I'm trying to solve). Once on the page the drop-down menu takes you to, I want to grab the link that directs you to "get all CNSs alignments" and scrape the information that link provides. I have to do this for 10,000 alignments.
I'm currently struggling with the drop-down menu; everything else I should be able to figure out.
I've tried implementing Selenium and BeautifulSoup, as you can tell from the code I've written so far. I'm open to suggestions and modifications.
This is Python 2.7.
Thank you
#importing libraries
import urllib
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.support.ui import Select
#parsing the html
url = ("http://pipeline.lbl.gov/cgi-bin/textBrowser2?act=mvista&run=u233-9GR6Sl35")
html = urllib.urlopen(url).read()
soup = BeautifulSoup(html,'html.parser')
#saving the links to a list so I can access those links and scrape them
sequenceurl = []
for link in soup.find_all('a', string="VISTA-Point"):
    sequenceurl.append(link.get('href'))

for item in sequenceurl:
    print item
print
#open the webpage and go to the web browser
driver = webdriver.Firefox()
driver.get(sequenceurl[0])
driver.maximize_window()
Select(driver.find_element_by_xpath('//*[@id="x-auto-131"]/tbody/tr/td[2]/select')).select_by_index(1)
Edit: The main link is the link inside the code that says url =. Here it is again for reference http://pipeline.lbl.gov/cgi-bin/textBrowser2?act=mvista&run=u233-9GR6Sl35
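For iterating over the drop-down options, this is a sketch of what I have in mind (the <select> has to be re-located on every pass, since navigating away makes the previous reference stale; the XPath is the one from the code above):
xpath = '//*[@id="x-auto-131"]/tbody/tr/td[2]/select'
option_count = len(Select(driver.find_element_by_xpath(xpath)).options)
for i in range(option_count):
    # re-locate the <select> each pass; the old reference goes stale after navigating
    dropdown = Select(driver.find_element_by_xpath(xpath))
    dropdown.select_by_index(i)
    # ... follow the "get all CNSs" link on the resulting page and scrape it ...
    driver.back()  # go back to the drop-down page before the next option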
I have tried to parse the data on this table:
https://www.rad.cvm.gov.br/enetconsulta/frmGerenciaPaginaFRE.aspx?CodigoTipoInstituicao=1&NumeroSequencialDocumento=62733
You will notice that this is a dynamically generated table (apparently JavaScript). Somehow, when I open the URL using Selenium or Beautiful Soup, it is not possible to recognize/parse the table, even though the table is there (if you right-click on the table and check the frame source / page source, you will see that they do not seem to be related).
Please let me know if you are able to parse the table in Python.
You can do it using Selenium or any other library. Once you look at the source, you'll find that the table is loaded inside an iframe, and the frame URL, which is set from JavaScript, is:
urlFrame = "https://www.rad.cvm.gov.br/enetconsulta/frmDemonstracaoFinanceiraITR.aspx?Informacao=2&Demonstracao=4&Periodo=0&Grupo=DFs+Consolidadas&Quadro=Demonstra%C3%A7%C3%A3o+do+Resultado&NomeTipoDocumento=DFP&Titulo=Demonstra%C3%A7%C3%A3o%20do%20Resultado&Empresa=VALE%20S.A.&DataReferencia=31/12/2016&Versao=1&CodTipoDocumento=4&NumeroSequencialDocumento=62733&NumeroSequencialRegistroCvm=1789&CodigoTipoInstituicao=1"
but it looks like this URL needs some cookies, which the browser sends automatically, so we'll first load the original URL, then simply go to the frame URL and extract the data from the table.
Solution using Selenium:
from bs4 import BeautifulSoup
from selenium import webdriver
url = "https://www.rad.cvm.gov.br/enetconsulta/frmGerenciaPaginaFRE.aspx?CodigoTipoInstituicao=1&NumeroSequencialDocumento=62733"
urlFrame = "https://www.rad.cvm.gov.br/enetconsulta/frmDemonstracaoFinanceiraITR.aspx?Informacao=2&Demonstracao=4&Periodo=0&Grupo=DFs+Consolidadas&Quadro=Demonstra%C3%A7%C3%A3o+do+Resultado&NomeTipoDocumento=DFP&Titulo=Demonstra%C3%A7%C3%A3o%20do%20Resultado&Empresa=VALE%20S.A.&DataReferencia=31/12/2016&Versao=1&CodTipoDocumento=4&NumeroSequencialDocumento=62733&NumeroSequencialRegistroCvm=1789&CodigoTipoInstituicao=1"
driver = webdriver.Chrome()
driver.maximize_window()
driver.get(url)
driver.get(urlFrame)
print(driver.page_source)
soup = BeautifulSoup(driver.page_source, "html.parser")
table_data = soup.findAll("table", {"id": "ctl00_cphPopUp_tbDados"})
# do something with table_data/ parse it further
print(table_data)
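If the end goal is a DataFrame, the extracted table can be handed straight to pandas; a short sketch, assuming table_data found exactly one match:
import pandas as pd

# read_html parses the table's raw HTML into a list of DataFrames
df = pd.read_html(str(table_data[0]))[0]
print(df.head())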