First, obligatory advance apologies - almost newbie here, and this is my first question; please be kind...
I'm struggling to scrape JavaScript-generated pages; in particular, those of the Metropolitan Opera schedule. For any given month, I would like to create a calendar with just the name of the production and the date and time of each performance. I threw BeautifulSoup and selenium at it, and I can get tons of info about the composer's love life, etc. - but not these 3 elements. Any help would be greatly appreciated.
Link to a random month in their schedule
One thing that you should look for (in the future) on websites is calls to an API. I opened up Chrome Dev Tools (F12) and reloaded the page while in the Network tab.
I found two API calls, one for "productions" and one for "events". The "events" response has much more information. The code below makes a call to the "events" endpoint and then returns a subset of that data (specifically title, date and time, per your description).
I wasn't sure what you wanted to do with that data so I just printed it out. Let me know if the code needs to be updated/modified and I will do my best to help!
I wrote this code using Python 3.6.4
from datetime import datetime

import requests

BASE_URL = 'http://www.metopera.org/api/v1/calendar'
EVENT = """\
Title: {title}
Date: {date}
Time: {time}
---------------\
"""


def get_events(*, month, year):
    """Fetch the raw event list for a given month from the Met Opera calendar API."""
    params = {
        'month': month,
        'year': year
    }
    r = requests.get('{}/events'.format(BASE_URL), params=params)
    r.raise_for_status()
    return r.json()


def get_name_date_time(*, events):
    """Reduce each event to just its title, date and time."""
    result = []
    for event in events:
        d = datetime.strptime(event['eventDateTime'], '%Y-%m-%dT%H:%M:%S')
        result.append({
            'title': event['title'],
            'date': d.strftime('%A, %B %d, %Y'),
            'time': d.strftime('%I:%M %p')
        })
    return result


if __name__ == '__main__':
    events = get_events(month=11, year=2018)
    names_dates_times = get_name_date_time(events=events)
    for event in names_dates_times:
        print(EVENT.format(**event))
Console:
Title: Tosca
Date: Friday, November 02, 2018
Time: 08:00 PM
---------------
Title: Carmen
Date: Saturday, November 03, 2018
Time: 01:00 PM
---------------
Title: Marnie
Date: Saturday, November 03, 2018
Time: 08:00 PM
---------------
Title: Tosca
Date: Monday, November 05, 2018
Time: 08:00 PM
---------------
Title: Carmen
Date: Tuesday, November 06, 2018
Time: 07:30 PM
---------------
Title: Marnie
Date: Wednesday, November 07, 2018
Time: 07:30 PM
---------------
Title: Mefistofele
Date: Thursday, November 08, 2018
Time: 07:30 PM
---------------
Title: Tosca
Date: Friday, November 09, 2018
Time: 08:00 PM
---------------
Title: Marnie
Date: Saturday, November 10, 2018
Time: 01:00 PM
---------------
Title: Carmen
Date: Saturday, November 10, 2018
Time: 08:00 PM
---------------
Title: Mefistofele
Date: Monday, November 12, 2018
Time: 07:30 PM
---------------
Title: Tosca
Date: Tuesday, November 13, 2018
Time: 07:30 PM
---------------
Title: Les Pêcheurs de Perles (The Pearl Fishers)
Date: Wednesday, November 14, 2018
Time: 07:30 PM
---------------
Title: Carmen
Date: Thursday, November 15, 2018
Time: 07:30 PM
---------------
Title: Mefistofele
Date: Friday, November 16, 2018
Time: 07:30 PM
---------------
Title: Tosca
Date: Saturday, November 17, 2018
Time: 01:00 PM
---------------
Title: Les Pêcheurs de Perles (The Pearl Fishers)
Date: Saturday, November 17, 2018
Time: 08:00 PM
---------------
Title: Mefistofele
Date: Monday, November 19, 2018
Time: 07:30 PM
---------------
Title: Les Pêcheurs de Perles (The Pearl Fishers)
Date: Tuesday, November 20, 2018
Time: 08:00 PM
---------------
Title: Il Trittico
Date: Friday, November 23, 2018
Time: 07:30 PM
---------------
Title: Les Pêcheurs de Perles (The Pearl Fishers)
Date: Saturday, November 24, 2018
Time: 01:00 PM
---------------
Title: Mefistofele
Date: Saturday, November 24, 2018
Time: 08:00 PM
---------------
Title: Il Trittico
Date: Monday, November 26, 2018
Time: 07:30 PM
---------------
Title: Mefistofele
Date: Tuesday, November 27, 2018
Time: 07:30 PM
---------------
Title: Les Pêcheurs de Perles (The Pearl Fishers)
Date: Wednesday, November 28, 2018
Time: 07:30 PM
---------------
Title: La Bohème
Date: Thursday, November 29, 2018
Time: 07:30 PM
---------------
Title: Il Trittico
Date: Friday, November 30, 2018
Time: 07:30 PM
---------------
For reference, here is a link to the full JSON response from the events endpoint. There is a bunch more potentially interesting information you may want, but I just grabbed the subset of what you asked for in the description.
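If you want a file you can import into a spreadsheet or calendar tool rather than console output, here is a minimal sketch of dumping the same subset to CSV (the filename and the idea of writing a CSV are my own additions, not part of the original answer):

import csv

# Reuse the helpers defined above to build the rows, then write them out.
events = get_events(month=11, year=2018)
rows = get_name_date_time(events=events)

with open('met_opera_november_2018.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=['title', 'date', 'time'])
    writer.writeheader()
    writer.writerows(rows)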
I'm a newbie seeking help.
I've tried the following without success:
import requests
from bs4 import BeautifulSoup
import pandas as pd

url = "https://www.canada.ca/en/immigration-refugees-citizenship/corporate/mandate/policies-operational-instructions-agreements/ministerial-instructions/express-entry-rounds.html"
html_text = requests.get(url).text
soup = BeautifulSoup(html_text, 'html.parser')
data = []

# Verifying tables and their classes
print('Classes of each table:')
for table in soup.find_all('table'):
    print(table.get('class'))
Result:
['table']
None
Can anyone help me with how to get this data?
Thank you so much.
The data you see on the page is loaded from an external URL. To load it you can use the next example:
import requests
import pandas as pd
url = "https://www.canada.ca/content/dam/ircc/documents/json/ee_rounds_123_en.json"
data = requests.get(url).json()
df = pd.DataFrame(data["rounds"])
df = df.drop(columns=["drawNumberURL", "DrawText1", "mitext"])
print(df.head(10).to_markdown(index=False))  # to_markdown() requires the tabulate package
Prints:
| drawNumber | drawDate | drawDateFull | drawName | drawSize | drawCRS | drawText2 | drawDateTime | drawCutOff | drawDistributionAsOn | dd1 | dd2 | dd3 | dd4 | dd5 | dd6 | dd7 | dd8 | dd9 | dd10 | dd11 | dd12 | dd13 | dd14 | dd15 | dd16 | dd17 | dd18 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 231 | 2022-09-14 | September 14, 2022 | No Program Specified | 3,250 | 510 | Federal Skilled Worker, Canadian Experience Class, Federal Skilled Trades and Provincial Nominee Program | September 14, 2022 at 13:29:26 UTC | January 08, 2022 at 10:24:52 UTC | September 12, 2022 | 408 | 6,228 | 63,860 | 5,845 | 9,505 | 19,156 | 16,541 | 12,813 | 58,019 | 12,245 | 12,635 | 9,767 | 11,186 | 12,186 | 68,857 | 35,833 | 5,068 | 238,273 |
| 230 | 2022-08-31 | August 31, 2022 | No Program Specified | 2,750 | 516 | Federal Skilled Worker, Canadian Experience Class, Federal Skilled Trades and Provincial Nominee Program | August 31, 2022 at 13:55:23 UTC | April 16, 2022 at 18:24:41 UTC | August 29, 2022 | 466 | 7,224 | 63,270 | 5,554 | 9,242 | 19,033 | 16,476 | 12,965 | 58,141 | 12,287 | 12,758 | 9,796 | 11,105 | 12,195 | 68,974 | 36,001 | 5,120 | 239,196 |
| 229 | 2022-08-17 | August 17, 2022 | No Program Specified | 2,250 | 525 | Federal Skilled Worker, Canadian Experience Class, Federal Skilled Trades and Provincial Nominee Program | August 17, 2022 at 13:43:47 UTC | December 28, 2021 at 11:03:15 UTC | August 15, 2022 | 538 | 8,221 | 62,753 | 5,435 | 9,129 | 18,831 | 16,465 | 12,893 | 58,113 | 12,200 | 12,721 | 9,801 | 11,138 | 12,253 | 68,440 | 35,745 | 5,137 | 238,947 |
| 228 | 2022-08-03 | August 3, 2022 | No Program Specified | 2,000 | 533 | Federal Skilled Worker, Canadian Experience Class, Federal Skilled Trades and Provincial Nominee Program | August 03, 2022 at 15:16:24 UTC | January 06, 2022 at 14:29:50 UTC | August 2, 2022 | 640 | 8,975 | 62,330 | 5,343 | 9,044 | 18,747 | 16,413 | 12,783 | 57,987 | 12,101 | 12,705 | 9,747 | 11,117 | 12,317 | 68,325 | 35,522 | 5,145 | 238,924 |
| 227 | 2022-07-20 | July 20, 2022 | No Program Specified | 1,750 | 542 | Federal Skilled Worker, Canadian Experience Class, Federal Skilled Trades and Provincial Nominee Program | July 20, 2022 at 16:32:49 UTC | December 30, 2021 at 15:29:35 UTC | July 18, 2022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 226 | 2022-07-06 | July 6, 2022 | No Program Specified | 1,500 | 557 | Federal Skilled Worker, Canadian Experience Class, Federal Skilled Trades and Provincial Nominee Program | July 6, 2022 at 14:34:34 UTC | November 13, 2021 at 02:20:46 UTC | July 11, 2022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 225 | 2022-06-22 | June 22, 2022 | Provincial Nominee Program | 636 | 752 | Provincial Nominee Program | June 22, 2022 at 14:13:57 UTC | April 19, 2022 at 13:45:45 UTC | June 20, 2022 | 664 | 8,017 | 55,917 | 4,246 | 7,845 | 16,969 | 15,123 | 11,734 | 53,094 | 10,951 | 11,621 | 8,800 | 10,325 | 11,397 | 64,478 | 33,585 | 4,919 | 220,674 |
| 224 | 2022-06-08 | June 8, 2022 | Provincial Nominee Program | 932 | 796 | Provincial Nominee Program | June 08, 2022 at 14:03:28 UTC | October 18, 2021 at 17:13:17 UTC | June 6, 2022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 223 | 2022-05-25 | May 25, 2022 | Provincial Nominee Program | 590 | 741 | Provincial Nominee Program | May 25, 2022 at 13:21:23 UTC | February 02, 2022 at 12:29:53 UTC | May 23, 2022 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 222 | 2022-05-11 | May 11, 2022 | Provincial Nominee Program | 545 | 753 | Provincial Nominee Program | May 11, 2022 at 14:08:07 UTC | December 15, 2021 at 20:32:57 UTC | May 9, 2022 | 635 | 7,193 | 52,684 | 3,749 | 7,237 | 16,027 | 14,466 | 11,205 | 50,811 | 10,484 | 11,030 | 8,393 | 9,945 | 10,959 | 62,341 | 32,590 | 4,839 | 211,093 |
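If you only need a few of those columns, here is a hedged follow-up sketch that continues from the df built above (the column selection is my own choice; the column names come from the JSON shown in the table):

# Keep only the draw number, the date and the CRS cut-off, and parse the date column.
df_small = df[["drawNumber", "drawDate", "drawCRS"]].copy()
df_small["drawDate"] = pd.to_datetime(df_small["drawDate"])
print(df_small.head())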
I have this list that represents FedEx tracking history:
history = ['Tuesday, March 16, 2021', '3:03 PM Hollywood, FL\nDelivered\nLeft at front door. Signature Service not requested.', '5:52 AM MIAMI, FL\nOn FedEx vehicle for delivery', '5:40 AM MIAMI, FL\nAt local FedEx facility', 'Monday, March 15, 2021', '11:42 PM OCALA, FL\nDeparted FedEx location', '10:01 PM OCALA, FL\nArrived at FedEx location', '8:28 PM OCALA, FL\nIn transit', '12:42 AM OCALA, FL\nIn transit']
How do I transform this list into a 3-column dataframe (date, time/location, status)?
history = [
"Tuesday, March 16, 2021",
"3:03 PM Hollywood, FL\nDelivered\nLeft at front door. Signature Service not requested.",
"5:52 AM MIAMI, FL\nOn FedEx vehicle for delivery",
"5:40 AM MIAMI, FL\nAt local FedEx facility",
"Monday, March 15, 2021",
"11:42 PM OCALA, FL\nDeparted FedEx location",
"10:01 PM OCALA, FL\nArrived at FedEx location",
"8:28 PM OCALA, FL\nIn transit",
"12:42 AM OCALA, FL\nIn transit",
]
import re
import pandas as pd

# Lines that start with a day-of-week name are date headers; everything else is an event.
r = re.compile("^(?:Sunday|Monday|Tuesday|Wednesday|Thursday|Friday|Saturday)")

data, cur_group = [], ""
for line in history:
    if r.match(line):
        cur_group = line
    else:
        data.append([cur_group, *line.split("\n", maxsplit=1)])

df = pd.DataFrame(data)
print(df)
Prints:
0 1 2
0 Tuesday, March 16, 2021 3:03 PM Hollywood, FL Delivered\nLeft at front door. Signature Servi...
1 Tuesday, March 16, 2021 5:52 AM MIAMI, FL On FedEx vehicle for delivery
2 Tuesday, March 16, 2021 5:40 AM MIAMI, FL At local FedEx facility
3 Monday, March 15, 2021 11:42 PM OCALA, FL Departed FedEx location
4 Monday, March 15, 2021 10:01 PM OCALA, FL Arrived at FedEx location
5 Monday, March 15, 2021 8:28 PM OCALA, FL In transit
6 Monday, March 15, 2021 12:42 AM OCALA, FL In transit
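If you want named columns instead of the default 0/1/2 headers, a small hedged addition (the column names are my own choice, not part of the answer):

df = pd.DataFrame(data, columns=["date", "time_location", "status"])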
You can use dateutil.parser.parse to check if an element is a valid datetime.
This should be safer than just checking if an element contains a day string (Monday, Tuesday, etc.) in case an event also contains a day string somewhere (e.g., Delivery failed\nWill reattempt on Monday).
import dateutil.parser
import pandas as pd

history = ['Tuesday, March 16, 2021', '3:03 PM Hollywood, FL\nDelivered\nLeft at front door. Signature Service not requested.', '5:52 AM MIAMI, FL\nOn FedEx vehicle for delivery', '5:40 AM MIAMI, FL\nAt local FedEx facility', 'Monday, March 15, 2021', '11:42 PM OCALA, FL\nDeparted FedEx location', '10:01 PM OCALA, FL\nArrived at FedEx location', '8:28 PM OCALA, FL\nIn transit', '12:42 AM OCALA, FL\nIn transit']

data = []
day = None
for string in history:
    try:
        # A date header parses cleanly; remember it for the rows that follow.
        day = dateutil.parser.parse(string)
    except (ValueError, OverflowError):
        # Not a date header, so it is an event belonging to the last seen day.
        data.append([day, *string.split('\n', maxsplit=1)])
df = pd.DataFrame(data)
# 0 1 2
# 0 2021-03-16 3:03 PM Hollywood, FL Delivered\nLeft at front door. Signature Servi...
# 1 2021-03-16 5:52 AM MIAMI, FL On FedEx vehicle for delivery
# 2 2021-03-16 5:40 AM MIAMI, FL At local FedEx facility
# 3 2021-03-15 11:42 PM OCALA, FL Departed FedEx location
# 4 2021-03-15 10:01 PM OCALA, FL Arrived at FedEx location
# 5 2021-03-15 8:28 PM OCALA, FL In transit
# 6 2021-03-15 12:42 AM OCALA, FL In transit
OK, this is a bit hacky, but it might get the job done if the format is consistent; long term, a regex might be a better approach.
import pandas as pd

col1 = []
col2 = []
col3 = []
for h in history:
    if 'FL' in h:
        col1.append(date)
        new_list = h.split(',')
        item2 = new_list[0][4:]
        item3 = new_list[1][4:]
        col2.append(item2.replace('\n', '. '))
        col3.append(item3.replace('\n', '. '))
    else:
        date = h

pd.DataFrame({'col1': col1,
              'col2': col2,
              'col3': col3})
I want to extract only the time from text that comes in many different date and time formats, such as 'thursday, august 6, 2020 4:32:54 pm', '25 september 2020 04:05 pm' and '29 april 2020 07:42'. So I want to extract, for example, 4:32:54, 07:42 and 04:05. Can you help me with that?
I would try something like this:
times = [
    'thursday, august 6, 2020 4:32:54 pm',
    '25 september 2020 04:05 pm',
    '29 april 2020 07:42',
]

print("\n".join("".join(i for i in t.split() if ":" in i) for t in times))
Output:
4:32:54
04:05
07:42
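If the "keep the token containing a colon" heuristic ever picks up non-time tokens, here is a hedged regex sketch that looks specifically for hh:mm or hh:mm:ss patterns (the pattern is my own, not part of the answer above):

import re

times = [
    'thursday, august 6, 2020 4:32:54 pm',
    '25 september 2020 04:05 pm',
    '29 april 2020 07:42',
]

# \d{1,2}:\d{2}(?::\d{2})? matches 4:32:54, 04:05 and 07:42, with seconds optional.
pattern = re.compile(r'\b\d{1,2}:\d{2}(?::\d{2})?\b')
for t in times:
    match = pattern.search(t)
    if match:
        print(match.group())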
I'm working on scraping some products from StockX. There is a popup element called sales history, where I click the text link and then loop through all the sales history via the "Load More" button.
My problem is that for the most part this works fine as I loop through URLs, but occasionally it gets hung up for a really long time: the button is present but not clickable (and it hasn't reached the bottom either), so I believe it just stays in the loop. Any help with either breaking this loop or some workaround in Selenium would be awesome, thank you!
This is the function that I use to open the sales history information:
url = "https://stockx.com/adidas-ultra-boost-royal-blue"
driver = webdriver.Firefox()
driver.get(url)
content = driver.page_source
soup = BeautifulSoup(content, 'lxml')
def get_sales_history():
""" get sales history data from sales history table interaction """
sales_hist_data = []
try:
# click 'View All Sales' text link
View_all_sales_button = driver.find_element_by_xpath(".//div[#class='market-history-sales']/a[#class='all']")
View_all_sales_button.click()
# log in
login_button = driver.find_element_by_id("nav-signup")
login_button.click
# add username
username = driver.find_element_by_id("email-login")
username.clear()
username.send_keys("email#email.com")
# add password
password = driver.find_element_by_name("password-login")
password.clear()
password.send_keys("password")
except:
pass
while True:
try:
# If 'Load More' Appears Click Button
sales_hist_load_more_button = driver.find_element_by_xpath(
".//div[#class='latest-sales-container']/button[#class='button button-block button-white']")
sales_hist_load_more_button.click()
except:
#print("Reached bottom of page")
break
content = driver.page_source
soup = BeautifulSoup(content, 'lxml')
div = soup.find('div', class_='latest-sales-container')
for td in div.find_all('td'):
sales_hist_data.append(td.text)
return sales_hist_data
You can wait for the button to become clickable using an explicit wait.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import StaleElementReferenceException, TimeoutException

while True:
    try:
        # If 'Load More' appears, click the button
        WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, ".//div[@class='latest-sales-container']/button[@class='button button-block button-white']"))).click()
    except StaleElementReferenceException:
        pass
    except TimeoutException:
        break
Also, note that I have used two different exception handlers. If you ever get a stale element (which is possible, since you are clicking the same button after the page refreshes), it is ignored and the click is retried; but when the element is not found for 20 seconds, a TimeoutException is raised and the loop breaks.
To click on the element with text View All Sales within the Last Sale block, and then keep clicking the Load More element to scrape all the sales history, you need to induce WebDriverWait for element_to_be_clickable() and visibility_of_all_elements_located(), and you can use the following XPath-based locator strategies:
Code Block:
import time
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

# `driver` is assumed to be created as in the question, e.g. webdriver.Firefox()
driver.get('https://stockx.com/adidas-ultra-boost-royal-blue')
time.sleep(20)  # to interact with the location popup
WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//div[@class='last-sale-block']//a[text()='View All Sales']"))).click()
while True:
    try:
        WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, "//button[@class='button button-block button-white' and text()='Load More']"))).click()
        print("Clicked on Load More")
        time.sleep(3)
    except TimeoutException:
        print("No more Load More")
        break
print([my_elem.text for my_elem in WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.XPATH, "//div[@class='modal-body']//tbody//tr")))])
Console Output:
Clicked on Load More
Clicked on Load More
Clicked on Load More
No more Load More
['Sunday, August 2, 2020 2:16 AM EST 11 $236', 'Tuesday, June 2, 2020 7:34 AM EST 11 $262', 'Monday, April 27, 2020 11:03 AM EST 9 $143', 'Tuesday, January 7, 2020 8:54 AM EST 12.5 $137', 'Friday, December 27, 2019 12:30 PM EST 10 $307', 'Sunday, December 1, 2019 3:09 PM EST 8.5 $290', 'Tuesday, November 12, 2019 1:05 AM EST 12 $275', 'Tuesday, May 7, 2019 2:26 PM EST 8.5 $181', 'Saturday, April 27, 2019 1:04 PM EST 10 $228', 'Tuesday, March 5, 2019 12:25 AM EST 8.5 $230', 'Monday, November 5, 2018 1:35 AM EST 8 $320', 'Tuesday, August 28, 2018 7:29 PM EST 8.5 $240', 'Friday, August 24, 2018 10:26 PM EST 11 $580', 'Monday, July 16, 2018 10:02 PM EST 10.5 $255', 'Friday, July 6, 2018 2:44 PM EST 9 $260', 'Saturday, June 30, 2018 8:14 AM EST 9.5 $300', 'Tuesday, June 5, 2018 11:06 PM EST 10 $299', 'Saturday, May 12, 2018 10:48 AM EST 12 $371', 'Tuesday, March 20, 2018 1:09 AM EST 7.5 $279', 'Tuesday, March 20, 2018 11:17 PM EST 8 $250', 'Saturday, February 24, 2018 2:18 AM EST 7.5 $250', 'Monday, February 19, 2018 6:11 PM EST 7 $300', 'Sunday, February 18, 2018 2:05 PM EST 10 $400', 'Saturday, February 3, 2018 3:24 PM EST 7.5 $299', 'Thursday, January 25, 2018 11:13 PM EST 7 $190', 'Wednesday, December 27, 2017 11:09 PM EST 9 $355', 'Thursday, October 12, 2017 8:37 PM EST 8 $300', 'Friday, September 1, 2017 2:05 AM EST 12.5 $333', 'Friday, September 1, 2017 10:38 PM EST 12 $495', 'Saturday, August 5, 2017 10:53 AM EST 8 $355', 'Friday, August 4, 2017 3:28 AM EST 9.5 $325', 'Thursday, July 6, 2017 7:31 AM EST 10 $350', 'Tuesday, June 13, 2017 11:42 PM EST 9 $350', 'Monday, May 15, 2017 4:19 AM EST 11.5 $200', 'Sunday, May 14, 2017 3:42 PM EST 13 $370', 'Sunday, March 26, 2017 1:49 PM EST 11 $347', 'Sunday, August 21, 2016 7:33 PM EST 11 $250']
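If you then want that scraped list in a structured form, here is a hedged follow-up sketch that splits each "date time size price" string into columns (the regex and column names are my own, based on the layout of the output above):

import re
import pandas as pd

# Each entry looks like 'Sunday, August 2, 2020 2:16 AM EST 11 $236'.
row_re = re.compile(r"^(?P<date>\w+, \w+ \d{1,2}, \d{4}) "
                    r"(?P<time>\d{1,2}:\d{2} [AP]M EST) "
                    r"(?P<size>[\d.]+) \$(?P<price>\d+)$")

sales = ['Sunday, August 2, 2020 2:16 AM EST 11 $236',
         'Tuesday, June 2, 2020 7:34 AM EST 11 $262']  # ...or the full scraped list

rows = [m.groupdict() for m in (row_re.match(s) for s in sales) if m]
df = pd.DataFrame(rows)
print(df)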
The output of a simple piece of code gets broken across lines and is not shown in its entirety. What are the options to avoid the breaks?
22 December 23, 1989, Saturday, Late Edition - Final
23 December 22, 1989, Friday, Late Edition - Final
24 December 21, 1989, Thursday, Late Edition - Final
25 December 21, 1989, Thursday, Late Edition - Final
26 December 20, 1989, Wednesday, Late Edition - F...
27 December 20, 1989, Wednesday, Late Edition - F...
28 December 19, 1989, Tuesday, Late Edition - Final
29 December 18, 1989, Monday, Late Edition - Final
...
605 January 12, 2016 Tuesday
606 January 12, 2016 Tuesday 10:58 PM EST
607 January 12, 2016 Tuesday 8:28 PM EST
608 January 12, 2016 Tuesday 9:43 AM EST
Thanks!
PS: this is the code used to produce the file:
import json
import nltk
import re
import pandas

appended_data = []
for i in range(1989, 2017):
    df0 = pandas.DataFrame([json.loads(l) for l in open('NYT_%d.json' % i)])
    df1 = pandas.DataFrame([json.loads(l) for l in open('USAT_%d.json' % i)])
    df2 = pandas.DataFrame([json.loads(l) for l in open('WP_%d.json' % i)])
    appended_data.append(df0)
    appended_data.append(df1)
    appended_data.append(df2)
appended_data = pandas.concat(appended_data)
print(appended_data.date)
You need to change the display width for pandas. The option you are looking for is pd.set_option('display.width', 2000), but you may also find some other pandas options helpful, which I use regularly:
import pandas as pd

pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
pd.set_option('display.width', 2000)
A detailed description can be found in the pandas documentation on options and settings.
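If you only want the wider output for a single print rather than changing the options globally, here is a hedged sketch using pandas' option_context context manager:

import pandas as pd

# The options are restored automatically when the with-block exits.
with pd.option_context('display.max_rows', None,
                       'display.max_columns', None,
                       'display.width', 2000):
    print(appended_data.date)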