Web scraping - fields of different length - Python

The current code scrapes individual fields, but I would like to map the time and the titles together.
Since the webpage does not have the time and titles in the same class, how would this mapping occur?
Piggy-backing off this question - Link (my question uses an example where the lists of times and titles are not of equal length)
Website for reference:
https://ash.confex.com/ash/2021/webprogram/WALKS.html
Sample Expected Output:
5:00 PM-6:00 PM, ASH Poster Walk on Geriatric Hematology: Selecting the Right Treatment for the Patient, Not Just the Disease
5:00 PM-6:00 PM, ASH Poster Walk on Healthcare Quality Improvement
etc
import requests
from bs4 import BeautifulSoup
url = 'https://ash.confex.com/ash/2021/webprogram/WALKS.html'
res = requests.get(url)
soup = BeautifulSoup(res.content,'html.parser')
productlist = soup.select('div.itemtitle > a')
times = soup.select('.time')

This could be an alternative:
import requests
from bs4 import BeautifulSoup
url = 'https://ash.confex.com/ash/2021/webprogram/WALKS.html'
#this is to get the url part before the last "/"
base_url = url.rsplit("/", 1)[0]
res = requests.get(url)
soup = BeautifulSoup(res.content,'html.parser')
productlist = soup.select('div.itemtitle > a')
#times = soup.select('.time')
for a in productlist:
    title = a.text.strip()
    time = a.find_previous('h3').text.strip()
    date = a.find_previous('h4').text.strip()
    page = a['href'].strip()
    # sep="\n" prints each field on its own line;
    # end="\n\n" adds a blank line after each record;
    # the "/" re-inserts the separator removed by rsplit
    print(title, date, time, base_url + "/" + page, sep="\n", end="\n\n")
Output:
ASH Poster Walk on What's Hot in Sickle Cell Disease
Wednesday, December 15, 2021
10:00 AM-11:00 AM
https://ash.confex.com/ash/2021/webprogram/Session20816.html
ASH Poster Walk on Geriatric Hematology: Selecting the Right Treatment for the Patient, Not Just the Disease
Wednesday, December 15, 2021
5:00 PM-6:00 PM
https://ash.confex.com/ash/2021/webprogram/Session20695.html
ASH Poster Walk on Healthcare Quality Improvement
Wednesday, December 15, 2021
5:00 PM-6:00 PM
https://ash.confex.com/ash/2021/webprogram/Session21143.html
ASH Poster Walk on Natural Killer Cell-Based Immunotherapy
Wednesday, December 15, 2021
5:00 PM-6:00 PM
https://ash.confex.com/ash/2021/webprogram/Session20655.html
ASH Poster Walk on Pediatric Non-malignant Hematology Highlights
Wednesday, December 15, 2021
5:00 PM-6:00 PM
https://ash.confex.com/ash/2021/webprogram/Session20721.html
ASH Poster Walk on Clinical Trials In Progress
Thursday, December 16, 2021
10:00 AM-11:00 AM
https://ash.confex.com/ash/2021/webprogram/Session20589.html
ASH Poster Walk on Financial Toxicity in Hematologic Malignancies
Thursday, December 16, 2021
10:00 AM-11:00 AM
https://ash.confex.com/ash/2021/webprogram/Session20663.html
ASH Poster Walk on Diversity, Equity, and Inclusion in Hematologic Malignancies and Cell Therapy
Thursday, December 16, 2021
5:00 PM-6:00 PM
https://ash.confex.com/ash/2021/webprogram/Session20809.html
ASH Poster Walk on Emerging Research in Immunotherapies
Thursday, December 16, 2021
5:00 PM-6:00 PM
https://ash.confex.com/ash/2021/webprogram/Session20805.html
ASH Poster Walk on the Spectrum of Hemostasis and Thrombosis Research
Thursday, December 16, 2021
5:00 PM-6:00 PM
https://ash.confex.com/ash/2021/webprogram/Session20821.html
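If you want lines in exactly the sample format from the question (time, then title), a minimal variation of the same loop would be (reusing the productlist defined above):

for a in productlist:
    title = a.text.strip()
    time = a.find_previous('h3').text.strip()
    print(f"{time}, {title}")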

Try this:
import requests
from bs4 import BeautifulSoup

url = 'https://ash.confex.com/ash/2021/webprogram/WALKS.html'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

content = soup.find('div', {"class": "content"})
times = content.find_all("h3")
output = []
for h3 in times:
    # walk the siblings after each <h3> (a time) until the next <h3>
    for j in h3.next_siblings:
        if j.name:
            if j.name == "h3":
                break
            text = j.text.replace('\n', '')
            output.append(f"{h3.text}, {text}")
print(output)

Find all div with id (not class) inside <article> beautifulsoup

In my personal scraping project, I cannot locate any job cards on https://unjobs.org with requests / requests_html, or with selenium. Job titles are the only fields that I can print to the console. Company names and deadlines seem to be located in iframes, but there is no src, and somehow the hrefs are also not scrapeable. I am not sure whether that site is an SPA. Plus, DevTools shows no XHR of interest. Please advise which selector/script tag contains all the data.
You are dealing with the Cloudflare firewall, so you have to inject the cookies. I can't share an answer showing how to inject the cookies, because Cloudflare's bots are clever enough to find threads like this and improve the security accordingly.
Anyway, below is a solution using Selenium:
import pandas as pd
from selenium import webdriver
from selenium.common.exceptions import TimeoutException
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

mainurl = "https://unjobs.org/"

def main(driver):
    driver.get(mainurl)
    try:
        # wait until the job cards (divs with an id inside <article>) are present
        element = WebDriverWait(driver, 10).until(
            EC.presence_of_all_elements_located(
                (By.XPATH, "//article/div[@id]"))
        )
        data = (
            (
                x.find_element_by_class_name('jtitle').text,
                x.find_element_by_class_name('jtitle').get_attribute("href"),
                x.find_element_by_tag_name('br').text,
                x.find_element_by_css_selector('.upd.timeago').text,
                x.find_element_by_tag_name('span').text
            )
            for x in element
        )
        df = pd.DataFrame(data)
        print(df)
    except TimeoutException:
        exit('Unable to locate element')
    finally:
        driver.quit()

if __name__ == "__main__":
    driver = webdriver.Firefox()
    main(driver)
Note: you can use a headless browser as well.
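For example, a headless run could be set up like this (a sketch using the Selenium Firefox options API; the rest of the script stays the same):

from selenium.webdriver.firefox.options import Options

options = Options()
options.headless = True  # run Firefox without opening a window
driver = webdriver.Firefox(options=options)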
Output:
0 1 2 3 4
0 Republication : Une consultance internationale... https://unjobs.org/vacancies/1627733212329 about 9 hours ago Closing date: Friday, 13 August 2021
1 Project Management Support Associate (Informat... https://unjobs.org/vacancies/1627734534127 about 9 hours ago Closing date: Tuesday, 17 August 2021
2 Finance Assistant - Retainer, Nairobi, Kenya https://unjobs.org/vacancies/1627734537201 about 10 hours ago Closing date: Saturday, 14 August 2021
3 Procurement Officer, Sana'a, Yemen https://unjobs.org/vacancies/1627734545575 about 10 hours ago Closing date: Wednesday, 4 August 2021
4 ICT Specialist (Geospatial Information Systems... https://unjobs.org/vacancies/1627734547681 about 10 hours ago Closing date: Saturday, 14 August 2021
5 Programme Management - Senior Assistant (Grant... https://unjobs.org/vacancies/1627734550335 about 10 hours ago Closing date: Thursday, 5 August 2021
6 Especialista en Normas Internacionales de Cont... https://unjobs.org/vacancies/1627734552666 about 10 hours ago Closing date: Saturday, 14 August 2021
7 Administration Assistant, Juba, South Sudan https://unjobs.org/vacancies/1627734561330 about 10 hours ago Closing date: Wednesday, 11 August 2021
8 Project Management Support - Senior Assistant,... https://unjobs.org/vacancies/1627734570991 about 10 hours ago Closing date: Saturday, 14 August 2021
9 Administration Senior Assistant [Administrativ... https://unjobs.org/vacancies/1627734572868 about 10 hours ago Closing date: Wednesday, 11 August 2021
10 Project Management Support Officer, Juba, Sout... https://unjobs.org/vacancies/1627734574639 about 10 hours ago Closing date: Wednesday, 11 August 2021
11 Information Management Senior Associate, Bamak... https://unjobs.org/vacancies/1627734576597 about 10 hours ago Closing date: Saturday, 7 August 2021
12 Regional Health & Safety Specialists (French a... https://unjobs.org/vacancies/1627734578207 about 10 hours ago Closing date: Friday, 6 August 2021
13 Project Management Support - Associate, Bonn, ... https://unjobs.org/vacancies/1627734587268 about 10 hours ago Closing date: Tuesday, 10 August 2021
14 Associate Education Officer, Goré, Chad https://unjobs.org/vacancies/1627247597092 a day ago Closing date: Tuesday, 3 August 2021
15 Senior Program Officer, High Impact Africa 2 D... https://unjobs.org/vacancies/1627597499846 a day ago Closing date: Thursday, 12 August 2021
16 Specialist, Supply Chain, Geneva https://unjobs.org/vacancies/1627597509615 a day ago Closing date: Thursday, 12 August 2021
17 Project Manager, Procurement and Supply Manage... https://unjobs.org/vacancies/1627597494487 a day ago Closing date: Thursday, 12 August 2021
18 WCO Drug Programme: Analyst for AIRCOP Project... https://unjobs.org/vacancies/1627594132743 a day ago Closing date: Tuesday, 31 August 2021
19 Regional Desk Assistant, Geneva https://unjobs.org/vacancies/1627594929351 a day ago Closing date: Thursday, 26 August 2021
20 Programme Associate, Zambia https://unjobs.org/vacancies/1627586510917 a day ago Closing date: Wednesday, 11 August 2021
21 Associate Programme Management Officer, Entebb... https://unjobs.org/vacancies/1627512175261 a day ago Closing date: Saturday, 14 August 2021
22 Expert in Transport Facilitation and Connectiv... https://unjobs.org/vacancies/1627594978539 a day ago Closing date: Sunday, 15 August 2021
23 Content Developer for COP Trainings (two posit... https://unjobs.org/vacancies/1627594862178 a day ago
24 Consultant (e) en appui aux Secteurs, Haiti https://unjobs.org/vacancies/1627585454029 a day ago Closing date: Sunday, 8 August 2021
It looks like Cloudflare knows your request is not coming from an actual browser and is serving a captcha instead of the actual site, and/or the site needs JavaScript to run.
I would try something like Puppeteer and see if the response you get is valid.
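As a rough sketch of that idea in Python (using pyppeteer, a Python port of Puppeteer; whether Cloudflare lets a headless browser through is not guaranteed):

import asyncio
from pyppeteer import launch

async def fetch(url):
    browser = await launch()       # headless Chromium
    page = await browser.newPage()
    await page.goto(url)           # lets the site's JavaScript run
    html = await page.content()    # rendered HTML, ready for BeautifulSoup
    await browser.close()
    return html

html = asyncio.get_event_loop().run_until_complete(fetch("https://unjobs.org/"))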

extract just date from beautifulsoup result

I am trying to scrape a date from a website using BeautifulSoup:
how do I extract only the date-time from this? I only want: May 21, 2021 19:47
You can use this example to extract the date-time from the <ctag>s:
from bs4 import BeautifulSoup
html_doc = """
<ctag class="">May 21, 2021 19:47 Source: <span>BSE</span> </ctag>
"""
soup = BeautifulSoup(html_doc, "html.parser")
for ctag in soup.find_all("ctag"):
    dt = ctag.get_text(strip=True).rsplit(maxsplit=1)[0]
    print(dt)
Prints:
May 21, 2021 19:47
Or:
for ctag in soup.find_all("ctag"):
    dt = ctag.contents[0].rsplit(maxsplit=1)[0]
    print(dt)
Or:
for ctag in soup.find_all("ctag"):
    dt = ctag.find_next(text=True).rsplit(maxsplit=1)[0]
    print(dt)
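If you then need an actual datetime object rather than the string, the extracted value can be parsed with strptime (format inferred from the sample "May 21, 2021 19:47"):

from datetime import datetime

parsed = datetime.strptime(dt, "%B %d, %Y %H:%M")
print(parsed)  # 2021-05-21 19:47:00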
EDIT: To get dataframe of articles, you can do:
import requests
from bs4 import BeautifulSoup
import pandas as pd
url = "https://www.moneycontrol.com/company-notices/reliance-industries/notices/RI"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
data = []
for ctag in soup.select("li ctag"):
    data.append(
        {
            "title": ctag.find_next("a").get_text(strip=True),
            "date": ctag.find_next(text=True).rsplit(maxsplit=1)[0],
            "desc": ctag.find_next("p", class_="MT2").get_text(strip=True),
        }
    )
df = pd.DataFrame(data)
print(df)
Prints:
title date desc
0 Reliance Industries - Compliances-Reg. 39 (3) ... May 21, 2021 19:47 Pursuant to Regulation 39(3) of the Securities...
1 Reliance Industries - Announcement under Regul... May 19, 2021 21:20 We refer to Regulation 5 of the SEBI (Prohibit...
2 Reliance Industries - Announcement under Regul... May 17, 2021 17:18 In continuation of our letter dated May 15, 20...
3 Reliance Industries - Announcement under Regul... May 17, 2021 16:06 Please find attached a media release by Relian...
4 Reliance Industries - Announcement under Regul... May 15, 2021 15:15 The Company has, on May 15, 2021, published in...
5 Reliance Industries - Compliances-Reg. 39 (3) ... May 14, 2021 19:44 Pursuant to Regulation 39(3) of the Securities...
6 Reliance Industries - Notice For Payment Of Fi... May 13, 2021 22:57 We refer to our letter dated May 01, 2021. A...
7 Reliance Industries - Announcement under Regul... May 12, 2021 21:20 We wish to inform you that the Company partici...
8 Reliance Industries - Compliances-Reg. 39 (3) ... May 12, 2021 19:39 Pursuant to Regulation 39(3) of the Securities...
9 Reliance Industries - Compliances-Reg. 39 (3) ... May 11, 2021 19:49 Pursuant to Regulation 39(3) of the Securities...

How do I transform this list into a dataframe

I have this list that represents FedEx tracking history:
history = ['Tuesday, March 16, 2021', '3:03 PM Hollywood, FL\nDelivered\nLeft at front door. Signature Service not requested.', '5:52 AM MIAMI, FL\nOn FedEx vehicle for delivery', '5:40 AM MIAMI, FL\nAt local FedEx facility', 'Monday, March 15, 2021', '11:42 PM OCALA, FL\nDeparted FedEx location', '10:01 PM OCALA, FL\nArrived at FedEx location', '8:28 PM OCALA, FL\nIn transit', '12:42 AM OCALA, FL\nIn transit']
How do I transform this list into a 3-column dataframe?
history = [
"Tuesday, March 16, 2021",
"3:03 PM Hollywood, FL\nDelivered\nLeft at front door. Signature Service not requested.",
"5:52 AM MIAMI, FL\nOn FedEx vehicle for delivery",
"5:40 AM MIAMI, FL\nAt local FedEx facility",
"Monday, March 15, 2021",
"11:42 PM OCALA, FL\nDeparted FedEx location",
"10:01 PM OCALA, FL\nArrived at FedEx location",
"8:28 PM OCALA, FL\nIn transit",
"12:42 AM OCALA, FL\nIn transit",
]
import re
import pandas as pd

r = re.compile(r"^(?:Sunday|Monday|Tuesday|Wednesday|Thursday|Friday|Saturday)")
data, cur_group = [], ""
for line in history:
    if r.match(line):
        # a weekday line starts a new date group
        cur_group = line
    else:
        data.append([cur_group, *line.split("\n", maxsplit=1)])
df = pd.DataFrame(data)
print(df)
Prints:
0 1 2
0 Tuesday, March 16, 2021 3:03 PM Hollywood, FL Delivered\nLeft at front door. Signature Servi...
1 Tuesday, March 16, 2021 5:52 AM MIAMI, FL On FedEx vehicle for delivery
2 Tuesday, March 16, 2021 5:40 AM MIAMI, FL At local FedEx facility
3 Monday, March 15, 2021 11:42 PM OCALA, FL Departed FedEx location
4 Monday, March 15, 2021 10:01 PM OCALA, FL Arrived at FedEx location
5 Monday, March 15, 2021 8:28 PM OCALA, FL In transit
6 Monday, March 15, 2021 12:42 AM OCALA, FL In transit
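If you want named columns instead of 0/1/2, you can pass them when building the frame (the names here are just examples):

df = pd.DataFrame(data, columns=["date", "time_location", "status"])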
You can use dateutil.parser.parse to check if an element is a valid datetime.
This should be safer than just checking if an element contains a day string (Monday, Tuesday, etc.) in case an event also contains a day string somewhere (e.g., Delivery failed\nWill reattempt on Monday).
import dateutil.parser
import pandas as pd

history = ['Tuesday, March 16, 2021', '3:03 PM Hollywood, FL\nDelivered\nLeft at front door. Signature Service not requested.', '5:52 AM MIAMI, FL\nOn FedEx vehicle for delivery', '5:40 AM MIAMI, FL\nAt local FedEx facility', 'Monday, March 15, 2021', '11:42 PM OCALA, FL\nDeparted FedEx location', '10:01 PM OCALA, FL\nArrived at FedEx location', '8:28 PM OCALA, FL\nIn transit', '12:42 AM OCALA, FL\nIn transit']
data = []
for string in history:
    try:
        day = dateutil.parser.parse(string)
    except ValueError:
        # not a date header, so it's an event belonging to the most recent day
        # (assumes the first element of the list is a date)
        data.append([day, *string.split('\n', maxsplit=1)])
df = pd.DataFrame(data)
# 0 1 2
# 0 2021-03-16 3:03 PM Hollywood, FL Delivered\nLeft at front door. Signature Servi...
# 1 2021-03-16 5:52 AM MIAMI, FL On FedEx vehicle for delivery
# 2 2021-03-16 5:40 AM MIAMI, FL At local FedEx facility
# 3 2021-03-15 11:42 PM OCALA, FL Departed FedEx location
# 4 2021-03-15 10:01 PM OCALA, FL Arrived at FedEx location
# 5 2021-03-15 8:28 PM OCALA, FL In transit
# 6 2021-03-15 12:42 AM OCALA, FL In transit
OK, this is a bit hacky, but it might get the job done if the format is consistent; long term, regex might be a better approach:
import pandas as pd

col1 = []
col2 = []
col3 = []
for h in history:
    if 'FL' in h:
        col1.append(date)
        new_list = h.split(',')
        item2 = new_list[0][4:]
        item3 = new_list[1][4:]
        col2.append(item2.replace('\n', '. '))
        col3.append(item3.replace('\n', '. '))
    else:
        date = h
pd.DataFrame({'col1': col1,
              'col2': col2,
              'col3': col3})

Using Python and BeautifulSoup to scrape list from an URL

I am new to BeautifulSoup, so please excuse any beginner mistakes here. I am attempting to scrape a URL and want to store the list of movies under each date.
Below is the code I have so far:
import requests
from bs4 import BeautifulSoup
page = requests.get("https://www.imdb.com/calendar?region=IN&ref_=rlm")
soup = BeautifulSoup(page.content, 'html.parser')
date = soup.find_all("h4")
ul = soup.find_all("ul")
for h4,h1 in zip(date,ul):
    dd_=h4.get_text()
    mv=ul.find_all('a')
    for movie in mv:
        text=movie.get_text()
        print (dd_,text)
        movielist.append((dd_,text))
I am getting "AttributeError: ResultSet object has no attribute 'find_all'. You're probably treating a list of items like a single item. Did you call find_all() when you meant to call find()?"
Expected result in list or dataframe
29th May 2020 Romantic
29th May 2020 Sohreyan Da Pind Aa Gaya
5th June 2020 Lakshmi Bomb
and so on
Thanks in advance for help.
This script will get all movies and their corresponding dates into a dataframe:
import requests
import pandas as pd
from bs4 import BeautifulSoup
url = 'https://www.imdb.com/calendar?region=IN&ref_=rlm'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')
out, last = [], ''
for tag in soup.select('#main h4, #main li'):
    if tag.name == 'h4':
        last = tag.get_text(strip=True)
    else:
        out.append({'Date': last, 'Movie': tag.get_text(strip=True).rsplit('(', maxsplit=1)[0]})
df = pd.DataFrame(out)
print(df)
Prints:
Date Movie
0 29 May 2020 Romantic
1 29 May 2020 Sohreyan Da Pind Aa Gaya
2 05 June 2020 Laxmmi Bomb
3 05 June 2020 Roohi Afzana
4 05 June 2020 Nikamma
.. ... ...
95 26 March 2021 Untitled Luv Ranjan Film
96 02 April 2021 F9
97 02 April 2021 Bell Bottom
98 02 April 2021 NTR Trivikiram Untitled Movie
99 09 April 2021 Manje Bistre 3
[100 rows x 2 columns]
I think you should replace "ul" with "h1" on the 10th line, and add a definition of the variable "movielist" beforehand.
import requests
from bs4 import BeautifulSoup

page = requests.get("https://www.imdb.com/calendar?region=IN&ref_=rlm")
soup = BeautifulSoup(page.content, 'html.parser')
date = soup.find_all("h4")
ul = soup.find_all("ul")
# add code here
movielist = []
for h4, h1 in zip(date, ul):
    dd_ = h4.get_text()
    # replace ul with h1 here
    mv = h1.find_all('a')
    for movie in mv:
        text = movie.get_text()
        print(dd_, text)
        movielist.append((dd_, text))
print(movielist)
The list to receive the results wasn't defined, so I added it, and I changed the code to capture the text through 'h1' rather than 'h4'.
import requests
from bs4 import BeautifulSoup

page = requests.get("https://www.imdb.com/calendar?region=IN&ref_=rlm")
soup = BeautifulSoup(page.content, 'html.parser')
movielist = []
date = soup.find_all("h4")
ul = soup.find_all("ul")
for h4, h1 in zip(date, ul):
    dd_ = h4.get_text()
    mv = h1.find_all('a')
    for movie in mv:
        text = movie.get_text()
        print(dd_, text)
        movielist.append((dd_, text))
The reason the dates don't match in the output is that the retrieved 'date' values look like the following, so you need to fix the logic.
There are multiple titles with the same release date, so the release dates and the number of titles don't line up. I can't help you much more because I don't have the time. Have a good night.
29 May 2020
05 June 2020
07 June 2020
07 June 2020 Romantic
12 June 2020
12 June 2020 Sohreyan Da Pind Aa Gaya
18 June 2020
18 June 2020 Laxmmi Bomb
19 June 2020
19 June 2020 Roohi Afzana
25 June 2020
25 June 2020 Nikamma
26 June 2020
26 June 2020 Naandhi
02 July 2020
02 July 2020 Mandela
03 July 2020
03 July 2020 Medium Spicy
10 July 2020
10 July 2020 Out of the Blue

Selenium vs. the NY Metropolitan Opera

First, obligatory advance apologies - almost newbie here, and this is my first question; please be kind...
I'm struggling to scrape javascript generated pages; in particular those of the Metropolitan Opera schedule. For any given month, I would like to create a calendar with just the name of the production, and the date and time of performance. I threw beautifulsoup and selenium at it, and I can get tons of info about the composer's love life, etc. - but not these 3 elements. Any help would be greatly appreciated.
Link to a random month in their schedule
One thing that you should look for (in the future) on websites is calls to an API. I opened up Chrome DevTools (F12) and reloaded the page while on the Network tab.
I found two api calls, one for "productions" and one for "events". The "events" response has much more information. This code below makes a call to the "events" endpoint and then returns a subset of that data (specifically, title, date and time according to your description).
I wasn't sure what you wanted to do with that data so I just printed it out. Let me know if the code needs to be updated/modified and I will do my best to help!
I wrote this code using Python 3.6.4
from datetime import datetime
import requests
BASE_URL = 'http://www.metopera.org/api/v1/calendar'
EVENT = """\
Title: {title}
Date: {date}
Time: {time}
---------------\
"""
def get_events(*, month, year):
    params = {
        'month': month,
        'year': year
    }
    r = requests.get('{}/events'.format(BASE_URL), params=params)
    r.raise_for_status()
    return r.json()

def get_name_date_time(*, events):
    result = []
    for event in events:
        d = datetime.strptime(event['eventDateTime'], '%Y-%m-%dT%H:%M:%S')
        result.append({
            'title': event['title'],
            'date': d.strftime('%A, %B %d, %Y'),
            'time': d.strftime('%I:%M %p')
        })
    return result

if __name__ == '__main__':
    events = get_events(month=11, year=2018)
    names_dates_times = get_name_date_time(events=events)
    for event in names_dates_times:
        print(EVENT.format(**event))
Console:
Title: Tosca
Date: Friday, November 02, 2018
Time: 08:00 PM
---------------
Title: Carmen
Date: Saturday, November 03, 2018
Time: 01:00 PM
---------------
Title: Marnie
Date: Saturday, November 03, 2018
Time: 08:00 PM
---------------
Title: Tosca
Date: Monday, November 05, 2018
Time: 08:00 PM
---------------
Title: Carmen
Date: Tuesday, November 06, 2018
Time: 07:30 PM
---------------
Title: Marnie
Date: Wednesday, November 07, 2018
Time: 07:30 PM
---------------
Title: Mefistofele
Date: Thursday, November 08, 2018
Time: 07:30 PM
---------------
Title: Tosca
Date: Friday, November 09, 2018
Time: 08:00 PM
---------------
Title: Marnie
Date: Saturday, November 10, 2018
Time: 01:00 PM
---------------
Title: Carmen
Date: Saturday, November 10, 2018
Time: 08:00 PM
---------------
Title: Mefistofele
Date: Monday, November 12, 2018
Time: 07:30 PM
---------------
Title: Tosca
Date: Tuesday, November 13, 2018
Time: 07:30 PM
---------------
Title: Les Pêcheurs de Perles (The Pearl Fishers)
Date: Wednesday, November 14, 2018
Time: 07:30 PM
---------------
Title: Carmen
Date: Thursday, November 15, 2018
Time: 07:30 PM
---------------
Title: Mefistofele
Date: Friday, November 16, 2018
Time: 07:30 PM
---------------
Title: Tosca
Date: Saturday, November 17, 2018
Time: 01:00 PM
---------------
Title: Les Pêcheurs de Perles (The Pearl Fishers)
Date: Saturday, November 17, 2018
Time: 08:00 PM
---------------
Title: Mefistofele
Date: Monday, November 19, 2018
Time: 07:30 PM
---------------
Title: Les Pêcheurs de Perles (The Pearl Fishers)
Date: Tuesday, November 20, 2018
Time: 08:00 PM
---------------
Title: Il Trittico
Date: Friday, November 23, 2018
Time: 07:30 PM
---------------
Title: Les Pêcheurs de Perles (The Pearl Fishers)
Date: Saturday, November 24, 2018
Time: 01:00 PM
---------------
Title: Mefistofele
Date: Saturday, November 24, 2018
Time: 08:00 PM
---------------
Title: Il Trittico
Date: Monday, November 26, 2018
Time: 07:30 PM
---------------
Title: Mefistofele
Date: Tuesday, November 27, 2018
Time: 07:30 PM
---------------
Title: Les Pêcheurs de Perles (The Pearl Fishers)
Date: Wednesday, November 28, 2018
Time: 07:30 PM
---------------
Title: La Bohème
Date: Thursday, November 29, 2018
Time: 07:30 PM
---------------
Title: Il Trittico
Date: Friday, November 30, 2018
Time: 07:30 PM
---------------
For reference, here is a link to the full JSON response from the events endpoint. There is a bunch more potentially interesting information you may want but I just grabbed the subset of what you asked for in the description.
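If you want to see which other fields each event exposes, you could inspect the keys of a single event dict (this assumes the response is a list of event objects, which is what the code above relies on):

events = get_events(month=11, year=2018)
print(sorted(events[0].keys()))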
