How to change platform when scraping a website (Futbin) in Python?

I am currently looking at obtaining price data from Futbin, specifically from this page of player data. I have used bs4 successfully for this with the following code:
spans = soup.find_all("span", class_="ps4_color font-weight-bold")
This collects all PS4 prices from the players page, but I would like to obtain Xbox and PC prices as well. To do this on the site you have to select the platform manually from the icons in the top right, but from what I can tell this leads to the same URL, just with updated price data. How can I scrape this data in a similar way to the above? I'm sure there must be an easier way than using Selenium or similar packages.
Any help would be greatly appreciated!

To get the page for another platform, set the cookies= parameter in your request:
import requests
from bs4 import BeautifulSoup

url = 'https://www.futbin.com/20/players?page=1'
platforms = ['ps4', 'xone', 'pc']

for platform in platforms:
    print()
    print('Platform: {}'.format(platform))
    print('-' * 80)

    soup = BeautifulSoup(requests.get(url, cookies={'platform': platform}).content, 'html.parser')

    for s in soup.select('span.font-weight-bold'):
        print('{:<40} {}'.format(s.find_previous('a', class_="player_name_players_table").text, s.text))
Prints:
Platform: ps4
--------------------------------------------------------------------------------
Lionel Messi 2.5M
Virgil van Dijk 1.82M
Cristiano Ronaldo 3.2M
Diego Maradona 4.5M
Pelé 6.65M
Kevin De Bruyne 1.95M
Virgil van Dijk 1.75M
Lionel Messi 2.18M
Robert Lewandowski 805K
Cristiano Ronaldo 3.08M
Pelé 3.35M
Kylian Mbappé 2.62M
Kevin De Bruyne 1.21M
Sadio Mané 783K
Kylian Mbappé 2.66M
Neymar Jr 3.83M
Diego Maradona 2.19M
Sadio Mané 625K
Alisson 148K
N'Golo Kanté 1.51M
Robert Lewandowski 269K
Ronaldo 0
Zinedine Zidane 7.15M
Lionel Messi 4.6M
Lionel Messi 1.4M
Alisson 143K
Mohamed Salah 459K
Raphaël Varane 847K
Karim Benzema 310K
Luis Suárez 407K
Platform: xone
--------------------------------------------------------------------------------
Lionel Messi 2.15M
Virgil van Dijk 1.65M
Cristiano Ronaldo 2.53M
Diego Maradona 4.07M
Pelé 0
Kevin De Bruyne 1.73M
Virgil van Dijk 1.6M
Lionel Messi 1.9M
Robert Lewandowski 719K
Cristiano Ronaldo 2.51M
Pelé 3.15M
Kylian Mbappé 2.27M
Kevin De Bruyne 1.02M
Sadio Mané 695K
Kylian Mbappé 2.24M
Neymar Jr 3.27M
Diego Maradona 1.61M
Sadio Mané 585K
Alisson 153K
N'Golo Kanté 1.3M
Robert Lewandowski 247K
Ronaldo 0
Zinedine Zidane 6.78M
Lionel Messi 4.26M
Lionel Messi 1.24M
Alisson 130K
Mohamed Salah 470K
Raphaël Varane 725K
Karim Benzema 272K
Luis Suárez 351K
Platform: pc
--------------------------------------------------------------------------------
Lionel Messi 3.56M
Virgil van Dijk 2.5M
Cristiano Ronaldo 3.75M
Diego Maradona 4.3M
Pelé 0
Kevin De Bruyne 2.52M
Virgil van Dijk 2.4M
Lionel Messi 2.86M
Robert Lewandowski 1.16M
Cristiano Ronaldo 3.75M
Pelé 5.75M
Kylian Mbappé 3.35M
Kevin De Bruyne 1.4M
Sadio Mané 925K
Kylian Mbappé 3.3M
Neymar Jr 4.85M
Diego Maradona 1.98M
Sadio Mané 730K
Alisson 179K
N'Golo Kanté 1.9M
Robert Lewandowski 400K
Ronaldo 0
Zinedine Zidane 0
Lionel Messi 4.77M
Lionel Messi 2.3M
Alisson 160K
Mohamed Salah 520K
Raphaël Varane 940K
Karim Benzema 370K
Luis Suárez 679K
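If you then want to page through the player list, a requests.Session keeps the platform cookie set across requests. A minimal sketch (same URL pattern and selectors as above; the two-page range is just for illustration):

import requests
from bs4 import BeautifulSoup

# Persist the platform cookie on a Session so every paged request reuses it
session = requests.Session()
session.cookies.set('platform', 'xone')

for page in (1, 2):
    url = 'https://www.futbin.com/20/players?page={}'.format(page)
    soup = BeautifulSoup(session.get(url).content, 'html.parser')
    for s in soup.select('span.font-weight-bold'):
        name = s.find_previous('a', class_='player_name_players_table')
        if name:
            print('{:<40} {}'.format(name.text, s.text))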

Related

Scraping issue with id_tag

I'm trying to extract data from a website with BeautifulSoup. I'm actually stuck with this:
"Trad. de l'anglais par <a href="/searchinternet/advanced?all_authors_id=35534&SearchAction=1">Camille Fabien</a>"
I want to get the names of the translators, but the tag uses their id. My code is:
translater = soup.find_all("a", href="/searchinternet/advanced?all_authors_id=")
I tried with str.startswith but it doesn't work.
Can someone help me, please?
Provided your HTML is correct and static (i.e. not loaded by JavaScript after the initial page load), this is one way to select that link/those links:
from bs4 import BeautifulSoup as bs

html = '''<p>Trad. de l'anglais par <a href="/searchinternet/advanced?all_authors_id=35534&SearchAction=1">Camille Fabien</a></p>'''
soup = bs(html, 'html.parser')

a = soup.select('a[href^="/searchinternet/advanced?all_authors_id="]')
print(a[0])
print(a[0].get_text(strip=True))
print(a[0].get('href'))
Result in terminal:
<a href="/searchinternet/advanced?all_authors_id=35534&SearchAction=1">Camille Fabien</a>
Camille Fabien
/searchinternet/advanced?all_authors_id=35534&SearchAction=1
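For the record, the attempt in the question fails because href="/searchinternet/advanced?all_authors_id=" in find_all is an exact attribute match. If you prefer find_all over a CSS selector, it also accepts a compiled regex for the attribute value; a small sketch on the same snippet:

import re
from bs4 import BeautifulSoup as bs

html = '''<p>Trad. de l'anglais par <a href="/searchinternet/advanced?all_authors_id=35534&SearchAction=1">Camille Fabien</a></p>'''
soup = bs(html, 'html.parser')

# href= accepts a compiled regex: match any href starting with the prefix
links = soup.find_all('a', href=re.compile(r'^/searchinternet/advanced\?all_authors_id='))
print([a.get_text(strip=True) for a in links])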
EDIT: Who doesn't like a challenge?... Based on further comments made by the OP, here is a way of obtaining titles, authors, translators and illustrators from that page, bearing in mind there can be one or more translators/illustrators per book:
from bs4 import BeautifulSoup as bs
import requests
import pandas as pd

headers = {
    'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/104.0.0.0 Safari/537.36"
}

url = 'https://www.gallimard.fr/searchinternet/advanced/(editor_brand_id)/1/(fserie)/FOLIO-JUNIOR+LIVRE+HEROS%3A%3AFolio+Junior+-+Un+Livre+dont+Vous+%C3%AAtes+le+H%C3%A9ros+%40+DEFIS+FANTASTIQ%3A%3AS%C3%A9rie+D%C3%A9fis+Fantastiques/(limit)/3?date%5Bfrom%5D=1980-01-01&date%5Bto%5D=1995-01-01&SearchAction=OK'
big_list = []

r = requests.get(url, headers=headers)
soup = bs(r.text, 'html.parser')
items = soup.select('div[class="results bg_white"] > table div[class="item"]')
print()
for i in items:
    title = i.select_one('div[class="title"] h3')
    author = i.select_one('div[class="author"] a')
    history = i.select_one('p[class="collective_work_entries"]')
    translators = [[y.get_text() for y in x.find_previous_siblings('a')] for x in history.contents if "Illustrations" in x]
    illustrators = [[y.get_text() for y in x.find_next_siblings('a')] for x in history.contents if "Illustrations" in x]
    big_list.append((title.text.strip(), author.text.strip(), ', '.join([x for y in translators for x in y]), ', '.join([x for y in illustrators for x in y])))

df = pd.DataFrame(big_list, columns=['Title', 'Author', 'Translator(s)', 'Illustrator(s)'])
print(df)
Result in terminal:
    Title                              Author               Translator(s)                      Illustrator(s)
0   Le Sépulcre des Ombres             Jonathan Green       Noël Chassériau                    Alan Langford
1   La Légende de Zagor                Ian Livingstone      Pascale Houssin                    Martin McKenna
2   Les Mages de Solani                Keith Martin         Noël Chassériau                    Russ Nicholson
3   Le Siège de Sardath                Keith P. Phillips    Yannick Surcouf                    Pete Knifton
4   Retour à la Montagne de Feu        Ian Livingstone      Yannick Surcouf                    Martin McKenna
5   Les Mondes de l'Aleph              Peter Darvill-Evans  Yannick Surcouf                    Tony Hough
6   Les Mercenaires du Levant          Paul Mason           Mona de Pracontal                  Terry Oakes
7   L'Arpenteur de la Lune             Stephen Hand         Pierre de Laubier                  Martin McKenna, Terry Oakes
8   La Tour de la Destruction          Keith Martin         Mona de Pracontal                  Pete Knifton
9   La Légende des Guerriers Fantômes  Stephen Hand         Alexis Galmot                      Martin McKenna
10  Le Repaire des Morts-Vivants       Dave Morris          Nicolas Grenier                    David Gallagher
11  L'Ancienne Prophétie               Paul Mason           Mona de Pracontal                  Terry Oakes
12  La Vengeance des Démons            Jim Bambra           Mona de Pracontal                  Martin McKenna
13  Le Sceptre Noir                    Keith Martin         Camille Fabien                     David Gallagher
14  La Nuit des Mutants                Peter Darvill-Evans  Anne Collas                        Alan Langford
15  L'Élu des Six Clans                Luke Sharp           Noël Chassériau                    Martin Mac Kenna, Martin McKenna
16  Le Volcan de Zamarra               Luke Sharp           Olivier Meyer                      David Gallagher
17  Les Sombres Cohortes               Ian Livingstone      Noël Chassériau                    Nik William
18  Le Vampire du Château Noir         Keith Martin         Mona de Pracontal                  Martin McKenna
19  Le Voleur d'Âmes                   Keith Martin         Mona de Pracontal                  Russ Nicholson
20  Le Justicier de l'Univers          Martin Allen         Mona de Pracontal                  Tim Sell
21  Les Esclaves de l'Eternité         Paul Mason           Sylvie Bonnet                      Bob Harvey
22  La Créature venue du Chaos         Steve Jackson        Noël Chassériau                    Alan Langford
23  Les Rôdeurs de la Nuit             Graeme Davis         Nicolas Grenier                    John Sibbick
24  L'Empire des Hommes-Lézards        Marc Gascoigne       Jean Lacroix                       David Gallagher
25  Les Gouffres de la Cruauté         Luke Sharp           Sylvie Bonnet                      Russ Nicholson
26  Les Spectres de l'Angoisse         Robin Waterfield     Mona de Pracontal                  Ian Miller
27  Le Chasseur des Étoiles            Luke Sharp           Arnaud Dupin de Beyssat            Cary Mayes, Gary Mayes
28  Les Sceaux de la Destruction       Robin Waterfield     Sylvie Bonnet                      Russ Nicholson
29  La Crypte du Sorcier               Ian Livingstone      Noël Chassériau                    John Sibbick
30  La Forteresse du Cauchemar         Peter Darvill-Evans  Mona de Pracontal                  Dave Carson
31  La Grande Menace des Robots        Steve Jackson        Danielle Plociennik                Gary Mayes
32  L'Épée du Samouraï                 Mark Smith           Pascale Jusforgues                 Alan Langford
33  L'Épreuve des Champions            Ian Livingstone      Alain Vaulont, Pascale Jusforgues  Brian Williams
34  Défis Sanglants sur l'Océan        Andrew Chapman       Jean Walter                        Bob Harvey
35  Les Démons des Profondeurs         Steve Jackson        Noël Chassériau                    Bob Harvey
36  Rendez-vous avec la M.O.R.T.       Steve Jackson        Arnaud Dupin de Beyssat            Declan Considine
37  La Planète Rebelle                 Robin Waterfield     C. Degolf                          Gary Mayes
38  Les Trafiquants de Kelter          Andrew Chapman       Anne Blanchet                      Nik Spender
39  Le Combattant de l'Autoroute       Ian Livingstone      Alain Vaulont, Pascale Jusforgues  Kevin Bulmer
40  Le Mercenaire de l'Espace          Andrew Chapman       Jean Walthers                      Geoffroy Senior
41  Le Temple de la Terreur            Ian Livingstone      Denise May                         Bill Houston
42  Le Manoir de l'Enfer               Steve Jackson
43  Le Marais aux Scorpions            Steve Jackson        Camille Fabien                     Duncan Smith
44  Le Talisman de la Mort             Steve Jackson        Camille Fabien                     Bob Harvey
45  La Sorcière des Neiges             Ian Livingstone      Michel Zénon                       Edward Crosby, Gary Ward
46  La Citadelle du Chaos              Steve Jackson        Marie-Raymond Farré                Russ Nicholson
47  La Galaxie Tragique                Steve Jackson        Camille Fabien                     Peter Jones
48  La Forêt de la Malédiction         Ian Livingstone      Camille Fabien                     Malcolm Barter
49  La Cité des Voleurs                Ian Livingstone      Henri Robillot                     Iain McCaig
50  Le Labyrinthe de la Mort           Ian Livingstone      Patricia Marais                    Iain McCaig
51  L'Île du Roi Lézard                Ian Livingstone      Fabienne Vimereu                   Alan Langford
52  Le Sorcier de la Montagne de Feu   Steve Jackson        Camille Fabien                     Russ Nicholson
Bear in mind this method fails for Le Manoir de l'Enfer, because the word 'Illustrations' is not found in the text. It's down to the OP to find a solution for that one.
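One possible guard, left here only as a sketch (it reuses the history tag from the loop above and assumes the marker word sits in a text node): return empty lists when the marker is absent.

from bs4 import NavigableString

def names_around_marker(history, marker="Illustrations"):
    # The <a> tags before the text node containing the marker are the
    # translators; the ones after it are the illustrators. Entries without
    # the marker (e.g. Le Manoir de l'Enfer) yield two empty lists.
    for node in history.contents:
        if isinstance(node, NavigableString) and marker in node:
            return ([a.get_text() for a in node.find_previous_siblings('a')],
                    [a.get_text() for a in node.find_next_siblings('a')])
    return [], []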
BeautifulSoup documentation can be found at https://beautiful-soup-4.readthedocs.io/en/latest/index.html
Also, Pandas docs can be found here: https://pandas.pydata.org/pandas-docs/stable/index.html
Another minimal approach - parse a saved copy of the page and collect the link texts:
from bs4 import BeautifulSoup

# Parse a local copy of the page
with open("./test.html", "r") as f:
    soup = BeautifulSoup(f, 'html.parser')

# Collect the names from the author-search links; href= accepts a callable,
# which is how to make the str.startswith idea work
names = []
for elem in soup.find_all('a', href=lambda h: h and h.startswith('/searchinternet/advanced?all_authors_id=')):
    names.append(elem.text)

Check if elements from a Dataframe column are in another Pandas dataframe row and append to a new column

I have a DataFrame like this:
Casa           Name
Solo Deportes  Paleta De Padel Adidas Metalbone CTRL
Solo Deportes  Zapatillas Running Under Armour Charged Stamin...
Solo Deportes  Rompeviento Con Capucha Reebok Woven Azu
Solo Deportes  Remera Michael Jordan Chicago Bulls
and Df2:
Palabra       Marca
Acqualine     Acqualine
Addnice       Addnice
Adnnice       Addnice
Under Armour  Under Armour
Jordan        Nike
Adidas        Adidas
Reebok        Reebok
How can I check each row of df['Name'], see if the row contains a value of Df2['Palabra'], and in that case get the corresponding value of Df2['Marca'] and put it in the new column? The result should be something like this:
Casa           Name                                               Marca
Solo Deportes  Paleta De Padel Adidas Metalbone CTRL              Adidas
Solo Deportes  Zapatillas Running Under Armour Charged Stamin...  Under Armour
Solo Deportes  Rompeviento Con Capucha Reebok Woven Azu           Reebok
Solo Deportes  Remera Michael Jordan Chicago Bulls                Nike
Data:
df:
{'Casa': ['Solo Deportes', 'Solo Deportes', 'Solo Deportes', 'Solo Deportes'],
'Name': ['Paleta De Padel Adidas Metalbone CTRL',
'Zapatillas Running Under Armour Charged Stamin...',
'Rompeviento Con Capucha Reebok Woven Azu',
'Remera Michael Jordan Chicago Bulls']}
Df2:
{'Palabra': ['Acqualine', 'Addnice', 'Adnnice', 'Under Armour', 'Jordan', 'Adidas', 'Reebok'],
'Marca': ['Acqualine', 'Addnice', 'Addnice', 'Under Armour', 'Nike', 'Adidas', 'Reebok']}
A simple solution is to use iterrows in a generator expression and next to iterate over Df2 and check if a matching item appears:
df1['Marca'] = df1['Name'].apply(lambda name: next((r['Marca'] for _, r in Df2.iterrows() if r['Palabra'] in name), float('nan')))
Output:
Casa Name Marca
0 Solo Deportes Paleta De Padel Adidas Metalbone CTRL Adidas
1 Solo Deportes Zapatillas Running Under Armour Charged Stamin... Under Armour
2 Solo Deportes Rompeviento Con Capucha Reebok Woven Azu Reebok
3 Solo Deportes Remera Michael Jordan Chicago Bulls Nike
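A vectorized alternative (a sketch, not the only way): build one alternation regex from Df2['Palabra'], extract the first keyword each name contains, and map it to its Marca.

import re
import pandas as pd

df1 = pd.DataFrame({'Casa': ['Solo Deportes'] * 4,
                    'Name': ['Paleta De Padel Adidas Metalbone CTRL',
                             'Zapatillas Running Under Armour Charged Stamin...',
                             'Rompeviento Con Capucha Reebok Woven Azu',
                             'Remera Michael Jordan Chicago Bulls']})
Df2 = pd.DataFrame({'Palabra': ['Acqualine', 'Addnice', 'Adnnice', 'Under Armour', 'Jordan', 'Adidas', 'Reebok'],
                    'Marca': ['Acqualine', 'Addnice', 'Addnice', 'Under Armour', 'Nike', 'Adidas', 'Reebok']})

# Longest keywords first, so multi-word keywords win over shorter substrings
pattern = '({})'.format('|'.join(map(re.escape, sorted(Df2['Palabra'], key=len, reverse=True))))

# Extract the first keyword found in each name, then map keyword -> brand
df1['Marca'] = df1['Name'].str.extract(pattern, expand=False).map(dict(zip(Df2['Palabra'], Df2['Marca'])))
print(df1)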

Exact visual matches between common columns in dataframes not matching on merge

I am trying to merge two dataframes on a common column, "long_name". But the merge is not happening for some names, even ones that look like visually exact matches (i.e. "Lionel Andrés Messi Cuccittini" (df) to "Lionel Andrés Messi Cuccittini" (df1)) when I merge on "long_name":
df_merged = df.merge(df1, on="long_name", indicator=True, how='right')
Lionel Messi is left out, and according to the indicator column he's a "right_only" row from the merge. What's odd is that "Neymar da Silva Santos Júnior" IS merging. Why is there a discrepancy between the rows? Both have been sourced consistently, df from Kaggle and df1 from scraping, using the same script for all row name extractions.
I tried to isolate both the Lionel Messi entries from df and df1 using the following code:
name1 = df.loc[df.short_name == 'L. Messi', ["long_name"]]
name2 = df1.loc[df1.name == 'Lionel Messi', ["long_name"]]
name1.values == name2.values
But the result is array([[False]]). I'm not sure why they're not matching.
The first df looks like this (first 8 lines, df = df.loc[0:7,["short_name", "long_name"]]):
short_name long_name
0 L. Messi Lionel Andrés Messi Cuccittini
1 Cristiano Ronaldo Cristiano Ronaldo dos Santos Aveiro
2 Neymar Jr Neymar da Silva Santos Junior
3 J. Oblak Jan Oblak
4 E. Hazard Eden Hazard
5 K. De Bruyne Kevin De Bruyne
6 M. ter Stegen Marc-André ter Stegen
7 V. van Dijk Virgil van Dijk
The second df looks like this (first 8 lines, df1 = df1.loc[0:7,["name", "long_name"]]):
name long_name
0 Kylian Mbappé Kylian Sanmi Mbappé Lottin
1 Neymar Neymar da Silva Santos Júnior
2 Mohamed Salah محمد صلاح
3 Harry Kane Harry Edward Kane
4 Eden Hazard Eden Michael Hazard
5 Lionel Messi Lionel Andrés Messi Cuccitini
6 Raheem Sterling Raheem Shaquille Sterling
7 Antoine Griezmann Antoine Griezmann
Are you sure it is not just a case of a misspelled name?
df lists the long_name as Lionel Andrés Messi Cuccittini, whereas df1 lists it as Lionel Andrés Messi Cuccitini. Note that df has two t's in Cuccittini but df1 has one.
Manually correct the second dataframe and retry.
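For completeness, a sketch of that fix on a minimal reproduction of the two frames (only the Messi rows):

import pandas as pd

df = pd.DataFrame({'short_name': ['L. Messi'], 'long_name': ['Lionel Andrés Messi Cuccittini']})
df1 = pd.DataFrame({'name': ['Lionel Messi'], 'long_name': ['Lionel Andrés Messi Cuccitini']})

# Patch the misspelled long_name in df1, then redo the merge
df1.loc[df1['long_name'] == 'Lionel Andrés Messi Cuccitini',
        'long_name'] = 'Lionel Andrés Messi Cuccittini'

df_merged = df.merge(df1, on='long_name', indicator=True, how='right')
print(df_merged['_merge'].value_counts())  # now 'both' instead of 'right_only'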

Parse phone number and string into new columns in pandas dataframe

I've got a list of addresses in a single column, address. How would I go about parsing the phone number and restaurant category into new columns? My dataframe looks like this:
address
0 Arnie Morton's of Chicago 435 S. La Cienega Blvd. Los Angeles 310-246-1501 Steakhouses
1 Art's Deli 12224 Ventura Blvd. Studio City 818-762-1221 Delis
2 Bel-Air Hotel 701 Stone Canyon Rd. Bel Air 310-472-1211 French Bistro
where I want to get
address | phone_number | category
0 Arnie Morton's of Chicago 435 S. La Cienega Blvd. Los Angeles | 310-246-1501 | Steakhouses
1 Art's Deli 12224 Ventura Blvd. Studio City | 818-762-1221 | Delis
2 Bel-Air Hotel 701 Stone Canyon Rd. Bel Air | 310-472-1211 | French Bistro
Does anybody have any suggestions?
Try using a regex with named groups and str.extract.
Ex:
import pandas as pd

df = pd.DataFrame({'address': ["Arnie Morton's of Chicago 435 S. La Cienega Blvd. Los Angeles 310-246-1501 Steakhouses",
                               "Art's Deli 12224 Ventura Blvd. Studio City 818-762-1221 Delis",
                               "Bel-Air Hotel 701 Stone Canyon Rd. Bel Air 310-472-1211 French Bistro"]})

df[["address", "phone_number", "category"]] = df["address"].str.extract(r"(?P<address>.*?)(?P<phone_number>\b\d{3}-\d{3}-\d{4}\b)(?P<category>.*$)")
print(df)
Output:
address phone_number \
0 Arnie Morton's of Chicago 435 S. La Cienega Bl... 310-246-1501
1 Art's Deli 12224 Ventura Blvd. Studio City 818-762-1221
2 Bel-Air Hotel 701 Stone Canyon Rd. Bel Air 310-472-1211
category
0 Steakhouses
1 Delis
2 French Bistro
Note: this assumes the content of address is always address, then phone_number, then category.
Using str.extract and str.split:
We extract the pattern digits-dash-digits-dash-digits for phone_number.
We split on the pattern "3 digits followed by a space" and grab the part after it for category, using a positive lookbehind, which is (?<=...) in regex.
df['phone_number'] = df['address'].str.extract(r'(\d+-\d+-\d+)')
df['category'] = df['address'].str.split(r'(?<=\d{3})\s').str[-1]
Output
address phone_number category
0 Arnie Morton's of Chicago 435 S. La Cienega Blvd. Los Angeles 310-246-1501 Steakhouses 310-246-1501 Steakhouses
1 Art's Deli 12224 Ventura Blvd. Studio City 818-762-1221 Delis 818-762-1221 Delis
2 Bel-Air Hotel 701 Stone Canyon Rd. Bel Air 310-472-1211 French Bistro 310-472-1211 French Bistro
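If you also want the address column itself stripped down, as in the desired output, one sketch building on the same two regexes:

import pandas as pd

df = pd.DataFrame({'address': ["Arnie Morton's of Chicago 435 S. La Cienega Blvd. Los Angeles 310-246-1501 Steakhouses",
                               "Art's Deli 12224 Ventura Blvd. Studio City 818-762-1221 Delis",
                               "Bel-Air Hotel 701 Stone Canyon Rd. Bel Air 310-472-1211 French Bistro"]})

df['phone_number'] = df['address'].str.extract(r'(\d+-\d+-\d+)')
df['category'] = df['address'].str.split(r'(?<=\d{3})\s').str[-1]
# Keep only the part of the address before the phone number
df['address'] = df['address'].str.replace(r'\s*\d+-\d+-\d+.*$', '', regex=True)
print(df)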

No tables found error when making an AJAX request

I am trying to scrape the results table from the following url: https://utmbmontblanc.com/en/page/107/results.html
However, when I run my code it fails with 'No tables found'.
import pandas as pd
url = 'https://utmbmontblanc.com/en/page/107/results.html'
data = pd.read_html(url, header = 0)
data.head()
ValueError: No tables found
Having used developer tools, I know that there is definitely a table in the HTML. Why is it not being found? Any help is greatly appreciated. Thanks in advance!
Build the URL for the Ajax request; for 2017 - CCC it looks like this:
url = 'https://.......com/result.php?mode=edPass&ajax=true&annee=2017&course=ccc'
data = pd.read_html(url, header = 0)
print(data[0])
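Once you have the real endpoint from the browser's network tab (the host is elided above), the same call generalizes to other editions; a sketch with the host left as a placeholder:

import pandas as pd

# Hypothetical sketch - replace <host> with the endpoint you see in DevTools
base = 'https://<host>/result.php?mode=edPass&ajax=true&annee={}&course=ccc'

for year in (2016, 2017):
    tables = pd.read_html(base.format(year), header=0)
    print(year, tables[0].head())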
You can also use selenium if you are unable to find any other hacks.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from time import sleep
import pandas as pd

url = "https://utmbmontblanc.com/en/page/107/results.html"
driver = webdriver.Chrome("/home/bitto/chromedriver")  # change this to your chromedriver path
year = 2017
driver.get(url)
element = WebDriverWait(driver, 10).until(
    # change the index of div[@class='bloc'] to change year - [1] for 2018, [2] for 2017 etc.
    # change the index of div[@class='row'] - [1], [2] for TDS etc.
    # change @value of the option to match your preferred option's value - you can find
    # this with the inspect tool - the first two are Scratch and ScratchH
    EC.presence_of_element_located((By.XPATH, "//div[@class='bloc'][2]/div[@class='row'][4]/span[@class='selectbutton']/select[@name='cat'][1]/option[@value='Scratch']"))
)
element.click()  # select the option
# make the relevant changes you made above here as well
driver.find_element_by_xpath("//div[@class='bloc'][2]/div[@class='row'][4]/span[@class='selectbutton']/input").click()  # click Go
sleep(10)  # not preferred, but will do for now
table = pd.read_html(driver.page_source)
print(table)
Output
[ GeneralRanking Family name First name Club Cat. ... Time Difference/ 1st Nationality
0 1 3001 - HAWKS Hayden HOKA ONE ONE SEH ... 10:24:30 00:00:00 United States
1 2 3018 - ŚWIERC Marcin SALOMON SUUNTO TEAM POLAND SEH ... 10:42:49 00:18:19 Poland
2 3 3005 - POMMERET Ludovic TEAM HOKA V1H ... 10:50:47 00:26:17 France
3 4 3214 - EVANS Thomas COMPRESS SPORT SEH ... 10:57:44 00:33:14 United Kingdom
4 5 3002 - OWENS Tom SALOMON SEH ... 11:03:48 00:39:18 United Kingdom
5 6 3011 - JONSSON Thorbergur 66 NORTH SEH ... 11:14:22 00:49:52 Iceland
6 7 3026 - BOUVIER-GAZ Nicolas TEAM NEW BALANCE SEH ... 11:18:33 00:54:03 France
7 8 3081 - JONES Michael WWW.APEXRUNNING.CO SEH ... 11:31:50 01:07:20 United Kingdom
8 9 3020 - COLLET Aurélien HOKA ONE ONE SEH ... 11:33:10 01:08:40 France
9 10 3009 - MARAVILLA Jorge HOKA ONE ONE V1H ... 11:36:14 01:11:44 United States
10 11 3036 - PERRILLAT Christophe SEH ... 11:40:05 01:15:35 France
11 12 3070 - FRAGUELA BREIJO Alejandro STUDIO54 V1H ... 11:40:11 01:15:41 Spain
12 13 3092 - AIGROZ Mike TRUST SEH ... 11:41:53 01:17:23 Switzerland
13 14 3021 - O'LEARY Paddy THE NORTH FACE SEH ... 11:47:04 01:22:34 Ireland
14 15 3065 - PÉREZ TORREGLOSA Juan CLUB ULTRATRAIL ... SEH ... 11:47:51 01:23:21 Spain
15 16 3031 - SÁNCHEZ CEBRIÁN Miguel Ángel LURBEL-LI... V1H ... 11:49:15 01:24:45 Spain
16 17 3062 - ANDREWS Justin SEH ... 11:49:47 01:25:17 United States
17 18 3039 - PIANA Giulio TEAM MUD AND SNOW SEH ... 11:50:23 01:25:53 Italy
18 19 3047 - RONIMOISS Andris Inov8 / OSveikals.lv ... SEH ... 11:52:25 01:27:55 Latvia
19 20 3052 - DURAND Regis TEAM TRAIL ISOSTAR V1H ... 11:56:40 01:32:10 France
20 21 3027 - SANDES Ryan SALOMON SEH ... 12:04:39 01:40:09 South Africa
21 22 3014 - EL MORABITY Rachid ULTRA TRAIL ATLAS T... SEH ... 12:10:01 01:45:31 Morocco
22 23 3067 - JONES Harry RUNIVORE SEH ... 12:10:12 01:45:42 United Kingdom
23 24 3030 - CLAVERY Erik - SEH ... 12:12:56 01:48:26 France
24 25 3056 - JIMENEZ LLORENS Juan Maria GREEN POWER... SEH ... 12:13:18 01:48:48 Spain
25 26 3024 - GALLAGHER Clare THE NORTH FACE SEF ... 12:13:57 01:49:27 United States
26 27 3136 - ASSEL Garry LICENCE INDIVIDUELLE LUXEM... SEH ... 12:20:46 01:56:16 Luxembourg
27 28 3071 - RIGODANZA Francesco SPIRITO TRAIL TEAM SEH ... 12:22:49 01:58:19 Italy
28 29 3118 - POLASZEK Christophe CHARTRES VERTICAL V1H ... 12:24:49 02:00:19 France
29 30 3125 - CALERO RODRIGUEZ David Altmann Sports/... SEH ... 12:25:07 02:00:37 Spain
... ... ... ... ... ... ... ...
1712 1713 5734 - GOT Hang Fai V2H ... 26:25:01 16:00:31 Hong Kong, China
1713 1714 4154 - RAMOS Liliana NIKE RUNNING CLUB V3F ... 26:26:22 16:01:52 Argentina
1714 1715 5448 - BECKRICH Xavier PHOENIX57 V1H ... 26:26:45 16:02:15 France
1715 1716 5213 - BARBERIO ARNOULT Isabelle PHOENIX57 V1F ... 26:26:49 16:02:19 France
1716 1717 4704 - ZHANG Zheng XIAOMABENTENG SEH ... 26:28:37 16:04:07 China
1717 1718 5282 - GUISOLAN Frédéric SEH ... 26:28:46 16:04:16 Switzerland
1718 1719 5306 - MEDINA Rafael V1H ... 26:29:26 16:04:56 Mexico
1719 1720 5379 - PENTCHEFF Nicolas SEH ... 26:33:05 16:08:35 France
1720 1721 4665 - GONZALEZ SUANCES Israel BAR ES PUIG V1H ... 26:33:58 16:09:28 Spain
1721 1722 4389 - TONANNY Marie SEF ... 26:34:51 16:10:21 France
1722 1723 5616 - GLORIAN Thierry V2H ... 26:35:47 16:11:17 France
1723 1724 5684 - CHEUNG Ho FAITHWALKERS V1H ... 26:37:09 16:12:39 Hong Kong, China
1724 1725 5719 - GANDER Pascal JEFF B TRAIL SEH ... 26:39:04 16:14:34 France
1725 1726 4555 - JURGIELEWICZ Urszula SEF ... 26:39:44 16:15:14 Poland
1726 1727 4722 - HIDALGO José Miguel C.D. ATLETISMO SAN... V1H ... 26:40:27 16:15:57 Spain
1727 1728 4425 - JITTIWUTIKARN Gif V1F ... 26:41:02 16:16:32 Thailand
1728 1729 4556 - ZHU Jing SEF ... 26:41:12 16:16:42 China
1729 1730 4314 - HU Dongli V1H ... 26:41:27 16:16:57 China
1730 1731 4239 - DURET Estelle OXYGENE BELBEUF V1F ... 26:41:51 16:17:21 France
1731 1732 4525 - MAGLIERI Fabrice ATHLETIC CLUB PAYS DE... V1H ... 26:42:11 16:17:41 France
1732 1733 4433 - ANDERSEN Laura Jentsch RUN DEM CREW SEF ... 26:42:27 16:17:57 Denmark
1733 1734 4563 - CHEUNG Annie On Nai FAITHWALKERS V1F ... 26:45:35 16:21:05 Hong Kong, China
1734 1735 4355 - KHALED Naïm GENEVE AEROPORT SEH ... 26:47:50 16:23:20 Algeria
1735 1736 4749 - STELLA Sara COURMAYEUR TRAILERS V1F ... 26:48:07 16:23:37 Italy
1736 1737 4063 - LALIMAN Leslie SEF ... 26:48:09 16:23:39 France
1737 1738 5702 - BURKE Tony Alchester/CTR/Bicester Tri V2H ... 26:50:52 16:26:22 Ireland
1738 1739 5146 - OLIVEIRA Sandra BUDEGUITA RUNNERS V1F ... 26:52:23 16:27:53 Portugal
1739 1740 5545 - VELLANDI Emilio TEAM PEGGIORI SCARPA MICO V1H ... 26:55:32 16:31:02 Italy
1740 1741 5543 - GASPAROVIC Bernard STADE FRANCAIS V3H ... 26:56:31 16:32:01 France
1741 1742 4760 - MENDONCA Carine ASPTT COMPIEGNE V2F ... 27:19:15 16:54:45 Belgium
[1742 rows x 7 columns]]
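Note that on Selenium 4 the find_element_by_* helpers no longer exist; the equivalent is find_element with a By locator. A short sketch of the changed calls (Selenium 4.6+ resolves the chromedriver binary itself):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()  # Selenium Manager locates the driver binary
driver.get("https://utmbmontblanc.com/en/page/107/results.html")

# Same XPath as above, Selenium 4 style
go = driver.find_element(By.XPATH, "//div[@class='bloc'][2]/div[@class='row'][4]/span[@class='selectbutton']/input")
go.click()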
