I would like to get the text value of a span with class "currency-coins value" to use in a comparison.
Basically, I want to check the market value of a specific player. The player is listed 20 times in a container, so the "currency-coins value" is shown 20 times on the page.
Now I need to get the "200" shown in the screenshot of the HTML code above as a value I can work with, and this for all 20 results on the page. The value might be different for each of the 20 results.
After I have got all 20 values, I want to check which one is the lowest.
I will then use the lowest value as the price to list my element on the market.
Is there a way to do this? Since I have been learning Python for a bit more than a week now, I can't figure it out myself.
The idea is to first iterate over the player containers (usually these are table rows) and, for each container, locate the price element within it. For instance:
for row in driver.find_elements_by_css_selector("table tbody > tr"):
    coin_value = float(row.find_element_by_css_selector(".currency-coins.value").text)
    print(coin_value)
Note that table tbody > tr is used as an example; your locator for table rows or player containers is likely different.
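Building on that loop, a minimal sketch for the follow-up step of finding the lowest of the collected values (table tbody > tr is still only a placeholder locator you will need to adapt):

coin_values = []
for row in driver.find_elements_by_css_selector("table tbody > tr"):  # placeholder row locator
    coin_values.append(float(row.find_element_by_css_selector(".currency-coins.value").text))

lowest_price = min(coin_values)  # the cheapest of the listings found
print(lowest_price)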
I am using the website https://www.techlistic.com/p/demo-selenium-practice.html
and I am referring to Demo Table 2 on that page.
I am trying to write an XPath for a web table where I need to get all of the details in the Structure column as a list, in the format below:
Burj Khalifa
Clock Tower Hotel
Taipei 101
Financial Center
I have written a relative XPath, //th[normalize-space()='Burj Khalifa'], but it only selects the first value. I need a relative XPath that selects all of the values listed above so that I can run a for loop over them.
The XPath expression below will select all of the data you mentioned above:
//*[@class="tsc_table_s13"]/tbody/tr/th[1]
Or select the table with class "tsc_table_s13" and find the required data:
//table[@class="tsc_table_s13"]/tbody/tr/th
If you want to get all the data in the table, then use
//table[@class="tsc_table_s13"]/tbody/tr to loop over every row and use relative paths like .//th/text() to get the required data, and so on.
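If you are driving this with Selenium, a minimal sketch of that row loop could look like the following. Chrome and the Selenium 3-style find_elements_by_xpath API are assumptions; it also assumes, as in the answer above, that each row's first <th> holds the Structure value:

from selenium import webdriver

driver = webdriver.Chrome()  # assumption: any configured WebDriver will do
driver.get("https://www.techlistic.com/p/demo-selenium-practice.html")

# each row's first <th> holds the Structure value, per the XPath above
for structure in driver.find_elements_by_xpath('//*[@class="tsc_table_s13"]/tbody/tr/th[1]'):
    print(structure.text)  # Burj Khalifa, Clock Tower Hotel, ...

driver.quit()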
There is a table that I want to get the XPath of; however, the number of rows and columns is inconsistent across results, so I can't just right-click and copy the full XPath.
My current code:
result_priority_number = driver.find_element(By.XPATH, "/html/body/div/div[2]/div[6]/div/div[2]/table/tbody/tr[18]/td[2]")
The table header names, though, are always consistent. How do I get the value of an element where the table header says something specific (e.g. "Priority Number")?
I can't just right-click and copy the full XPath.
Never use this method. XPath has very useful search features! It isn't just for nested paths!
//td[contains(text(),'header value')]
or, if the page has many tables and you only want one of them:
//table[@id='id_of_table']//td[contains(text(),'header value')]
or, if the table has no id or class:
//table[2]//td[contains(text(),'header value')]
where 2 is the index of the table on the page.
There are many other features for searching HTML nodes.
In your case, to get the Filing language:
//td[contains(text(),'Filing language')]/following-sibling::td
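Putting that last expression to work in Python might look like this. This is a sketch assuming the Selenium 4 By API from the question, that the label lives in a <td>, and that the value sits in the sibling <td>; swap in whatever label your table actually shows, e.g. 'Priority Number':

from selenium.webdriver.common.by import By

# find the cell containing the label, then read the text of its sibling value cell
value = driver.find_element(
    By.XPATH, "//td[contains(text(),'Priority Number')]/following-sibling::td"
).text
print(value)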
I'm using selenium for the first time to get some information about a fantasy soccer game I play with my friends (we have a competition). I'm facing issues iterating through a list of webelements. Apparently they become stale.
Here's some code and details:
I was able to get to the competition's page by myself. This page has cards for every team in the competition and they look like this
<span class="cartola-card-thin__nome__time">TEAM1</span>
When clicked, those cards lead to that team's page. This page contains a dropdown menu that looks like this
<span class="cartola-dropdown-bg__botao cartola-dropdown-bg-botao-rodada-id cartola-dropdown-bg__botao--aberto" ng-class="'cartola-dropdown-bg-botao-' + name"></span>
and this menu contains a div for each round of the competition. It looks like this
<div ng-if="!hasDescription" class="cartola-dropdown-bg__selecao" ng-bind="item.label">rodada 25</div>
When clicked, each div loads that specific team's formation, and its points during that round. The points are shown on the page like this:
<div class="cartola-time-adv__pontuacao pont-positiva" ng-class="{'pont-positiva': ctrl.timeService.dadosTime.pontos > 0,
'pont-negativa': ctrl.timeService.dadosTime.pontos < 0}" ng-bind="ctrl.timeService.dadosTime.pontos != null ? ctrl.timeService.dadosTime.pontos : ''">78.17</div>
My goal: I want to gather each team's points during each one of the rounds in a dict['round'] = points.
What I've tried already: I've tried to keep the teams in a list by doing
teams = browser.find_elements_by_class_name("cartola-card-thin__nome__time")
Then, for each team in teams I'd click on it.
When on that page I'd find each round like this
rounds = browser.find_elements_by_class_name("cartola-dropdown-bg__selecao")
Then, for each round in rounds I'd click on it and get that round's points.
The problem: those loops where I iterate through teams and rounds are not working, because apparently those WebElements become stale after the whole process inside the loop (clicking, etc.).
How can I approach this problem?
Angular drop-down elements are rebuilt at runtime. After the drop-down is collapsed, a previously found drop-down item is no longer an element of the DOM. It is added to the DOM again when the drop-down is expanded, but it is not the same element for WebDriver (even if it can be found with the same locator).
So, you're following this logic:
Expand the drop-down
Get the drop-down elements:
teams = browser.find_elements_by_class_name("cartola-card-thin__nome__time")
Do something for each team. Here, I suppose, the drop-down is collapsed, so the found WebElements are no longer in the DOM: stale element exception.
What you have to do instead is re-locate the element by its index on every iteration, for example by re-running the search and picking the element by index:
teams_count = len(teams)
for i in range(teams_count):
    # re-find the team card by index each time, so the element is never stale
    team = browser.find_elements_by_class_name("cartola-card-thin__nome__time")[i]
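A fuller sketch of the same idea applied to both loops, re-locating teams and rounds by index on every pass and storing the points per round. The class names come from the question; the clicks, time.sleep waits, and browser.back() navigation are assumptions you will likely need to adapt (explicit WebDriverWait calls would be better than sleeps):

import time

points_by_team = {}

team_count = len(browser.find_elements_by_class_name("cartola-card-thin__nome__time"))
for i in range(team_count):
    # re-locate the team card by index so it is never stale
    team = browser.find_elements_by_class_name("cartola-card-thin__nome__time")[i]
    team_name = team.text
    team.click()
    time.sleep(2)  # crude wait for the team page to load

    rounds_points = {}
    round_count = len(browser.find_elements_by_class_name("cartola-dropdown-bg__selecao"))
    for j in range(round_count):
        # assumption: the drop-down must be expanded again before each selection
        browser.find_element_by_class_name("cartola-dropdown-bg__botao").click()
        round_item = browser.find_elements_by_class_name("cartola-dropdown-bg__selecao")[j]
        round_label = round_item.text  # e.g. "rodada 25"
        round_item.click()
        time.sleep(2)  # crude wait for the round's formation to load
        points_text = browser.find_element_by_class_name("cartola-time-adv__pontuacao").text
        rounds_points[round_label] = float(points_text)

    points_by_team[team_name] = rounds_points
    browser.back()  # assumption: going back restores the competition page with the team cards
    time.sleep(2)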
I am working on a script that aims to take all of the entries written by users under a specific title on a website (in this case, the title is "python(programlama dili"). I would like to read the number which shows the current number of pages under this specific title.
The reason for reading this number is that the number of pages can increase while the script is running, due to the increasing number of entries by users. Thus, I should take the number that exists within the element via the script.
In this case, I need to read "122" as the value and assign it to an int variable. I use Selenium with the Firefox web driver to take all entries.
It would be better if you try to access it using XPath.
Try to get the value attribute of the element. You've mentioned you can find the element using XPath, so you can do the following:
user_count = element.get_attribute('value')
If that gets you the number (as a string) then you can just convert to an int as usual
value = int(user_count)
First pick the selector .last, and then you can extract the href reference from that element. Don't forget to split the reference:
my_val = driver.find_element_by_css_selector(".last [href]").get_attribute("href").split("p=")[1]
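If the last-page link's href ends in something like p=122 (an assumption based on the split above), my_val holds the page count as a string and can be converted the same way as before:

page_count = int(my_val)  # e.g. 122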
I have searched similar questions and given it some thought but I am new to python and can't seem to figure this out. I am trying to scrape data from the player table on this page:
http://www.rotoworld.com/teams/depth-charts/mlb.aspx
The HTML for each entry (player) is for example:
<td><b>3B</b></td><td>1. <a href='/player/mlb/6242/manny-machado'>Manny Machado</a></td>
So I can run
players = soup.select('td > a')
to get a list of all players. However, I would like to select only players of a specific position, i.e. all the 3B, SS, etc. The position is just another text string, and I can't seem to differentiate by it. Does anybody have any idea where I might be able to start with this?
Edit: of course, this would be simple if the same positions were always in the same rows, e.g. 1B always in rows 2-3, but as can be seen from the table this is not the case.
You can loop over the rows of data and check siblings:
for row in soup.findAll('tr'):
    cell = row.findNext('td')
    if cell.text == '3B':
        print(cell.next_sibling.find('a').text)
Which will output:
Manny Machado
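To collect the players rather than just print the first link, a small variation of the same sibling check could build a list per position. This is a sketch under the assumption that each position cell is followed by one or more cells that each contain a player link, as in the snippet above:

import requests
from bs4 import BeautifulSoup

html = requests.get("http://www.rotoworld.com/teams/depth-charts/mlb.aspx").text
soup = BeautifulSoup(html, "html.parser")

position = '3B'
players = []
for row in soup.findAll('tr'):
    cell = row.findNext('td')
    if cell and cell.text == position:
        # gather every player link in the remaining cells of this row
        for sibling in cell.find_next_siblings('td'):
            for link in sibling.findAll('a'):
                players.append(link.text)

print(players)  # e.g. ['Manny Machado', ...]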