How do I find the CSS selector of a slider on a webpage? [closed] - python

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I don't really have any experience with CSS, but for a Python Selenium script I'm writing I need to figure out how to find the CSS selector of the slider at the bottom of this webpage: https://www.publish0x.com/blockchain-insights/millennials-and-crypto-xvwykyo
When I try to use the 'copy selector' option while selecting the element, I only get #tipslider, which doesn't seem to be it.

It seems to be it, actually...
The input[type=range] is a special HTML element which represents a range slider.
You can change its value by changing the value attribute!
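For example, here is a minimal Selenium sketch along those lines, assuming the slider really is the #tipslider element and using a JavaScript call to set the value (range inputs generally ignore send_keys); the value 80 is arbitrary:
from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://www.publish0x.com/blockchain-insights/millennials-and-crypto-xvwykyo')
slider = driver.find_element_by_css_selector('#tipslider')
# Set the value attribute directly and fire an 'input' event so the page notices the change.
driver.execute_script(
    "arguments[0].value = arguments[1];"
    "arguments[0].dispatchEvent(new Event('input', {bubbles: true}));",
    slider, 80)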

Fixed it. The #tipslider actually did work.

Related

What is the best way to get the information from this website with Scrapy? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 2 years ago.
I am trying to scrape this website with Scrapy, and so far I have had to visit each link and extract the information from each one. I would like to know if there is an API for the site that I can use (I don't know how to find it).
I would also like to know how I can obtain the latitude and longitude. Currently a map is shown, but I do not know how to obtain the numbers.
I appreciate any suggestions.
The website may be loading the data dynamically using JavaScript. Use your browser's dev tools, look at the Network tab, and check for any XHR calls that may be accessing an API. Then you can scrape from that directly.
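As a rough illustration of that approach: once you spot an XHR request in the Network tab, you can often replay it with requests. The URL and response keys below are placeholders, not the site's real API:
import requests

response = requests.get(
    'https://example.com/api/listings',        # placeholder: use the XHR URL you find in dev tools
    params={'page': 1},                        # copy any query parameters from the request
    headers={'User-Agent': 'Mozilla/5.0'},     # some endpoints reject requests without a UA header
)
data = response.json()                         # XHR endpoints typically return JSON
for item in data.get('results', []):           # key names depend on the actual response
    print(item.get('lat'), item.get('lng'))    # latitude/longitude, if the API exposes them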

How to find the favicon of a website with Python and BeautifulSoup [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 2 years ago.
I need to write a little program that can find the favicon of a website, like the one for YouTube or Google, but I didn't find any examples on Google. I have already written code that can find pictures on Wikipedia with BeautifulSoup, but not the little image next to the page title.
Thanks for helping.
You don't need bs4.
The icon is just a static file named "favicon.ico".
For example, the favicon of Stack Overflow is at "www.stackoverflow.com/favicon.ico",
and the favicon of Google is at "www.google.com/favicon.ico",
etc.
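A minimal sketch of this: request /favicon.ico first and, if the site doesn't serve one there, fall back to the <link rel="icon"> tag, which is where BeautifulSoup can help (the YouTube URL is just an example):
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

site = 'https://www.youtube.com/'
icon_url = urljoin(site, '/favicon.ico')         # the default location
if not requests.get(icon_url).ok:                # no icon at the default path
    soup = BeautifulSoup(requests.get(site).text, 'html.parser')
    link = soup.find('link', rel='icon')         # also matches multi-valued rel like "shortcut icon"
    if link and link.get('href'):
        icon_url = urljoin(site, link['href'])   # href may be relative

with open('favicon.ico', 'wb') as f:             # save the raw icon bytes
    f.write(requests.get(icon_url).content)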

Web scraping options [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 3 years ago.
Could you recommend some ways to scrape data from a web page?
I have been trying to use Python, but I am stuck with my code. I was thinking about using Octoparse. This is the webpage (http://www.mlsa.am/?page_id=368); it has drop-down lists where the selection in one list determines the options available in the others.
You could use the Scrapy framework, which is built specifically for scraping.
As a starter, begin with the official documentation, where you will find everything you need:
https://docs.scrapy.org/en/latest/intro/tutorial.html
Besides Scrapy, you can also use BeautifulSoup.
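For illustration, here is a bare-bones spider in the spirit of that tutorial; the CSS selector for the drop-down is a guess, and if the dependent options are loaded via JavaScript a plain spider won't see them:
import scrapy

class MlsaSpider(scrapy.Spider):
    name = 'mlsa'
    start_urls = ['http://www.mlsa.am/?page_id=368']

    def parse(self, response):
        # Assumed markup: each choice is an <option> inside a <select> drop-down.
        for option in response.css('select option'):
            yield {
                'value': option.css('::attr(value)').get(),
                'label': option.css('::text').get(),
            }
Save it as mlsa_spider.py and run scrapy runspider mlsa_spider.py -o options.json to see what it extracts.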

XPath locator is not working on this webpage [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
I would like to click the table of contents list using XPath, but XPath is completely not working on this URL:
https://www.hindawi.com/journals/ecam/contents/
Use a CSS selector in place of XPath, as follows:
CSS selector: a[href='/journals/ecam/2019/']
Code to click:
content = driver.find_element_by_css_selector("a[href='/journals/ecam/2019/']")
content.click()
I don't know why you are having problems with XPath...
This code snippet works fine for me:
from selenium import webdriver

driver = webdriver.Chrome(r'C:\path\to\chromedriver.exe')
driver.get('https://www.hindawi.com/journals/ecam/contents/')
# Expand the table of contents navigation, then collect every link under the content area.
driver.find_element_by_xpath('//*[@id="TableofContentsNav"]').click()
all_links = driver.find_elements_by_xpath('//*[@class="middle_content"]//*[@href]')
for i in all_links:
    print(i.get_attribute('href'))
Hope you find this helpful!

Python Automation [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
I'm trying to capture a test case result where the table content's search/filter output needs to be cross-checked each time the test runs. I have attached a table grid that I need to use to search/filter. I'm using a Python script for the automation.
Any suggestions?
You can use Selenium to test. The table's inner HTML can be accessed using
table_content = element.get_attribute('innerHTML')
and you can parse that HTML to cross-check your results.
Have a look at this question for reference.
Get HTML Source of WebElement in Selenium WebDriver using Python
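A sketch of that idea, combining Selenium with BeautifulSoup to turn the table's innerHTML into rows you can cross-check; the URL and the results-table id are assumptions for illustration:
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome()
driver.get('http://your-app-under-test/')              # placeholder URL
element = driver.find_element_by_id('results-table')   # hypothetical id of the table grid
table_html = element.get_attribute('innerHTML')

soup = BeautifulSoup(table_html, 'html.parser')
# One list of cell texts per table row.
rows = [[td.get_text(strip=True) for td in tr.find_all('td')]
        for tr in soup.find_all('tr')]
# Compare these rows against the expected search/filter output.
print(rows)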
