Parsing recurrence with the Google Calendar API - Python

I'm trying to create a small web app that works with a user's Google Calendar data. Everything works fine except getting the start datetime for events with 'recurrence' in item.keys().
Question: how can I get the start time of an event from the following data?
{'kind': 'calendar#event',
'sequence': 0,
'htmlLink': 'link',
'creator': {my info},
'location': '...',
'summary': '...',
'etag': '"etag"',
'organizer': {org_info},
'status': 'confirmed',
'reminders': {'useDefault': True},
'created': '2016-09-18T07:02:56.000Z',
'id': event_id,
'iCalUID': 'iCalUID',
'start': {'timeZone': 'Europe/Moscow', 'dateTime': '2016-09-07T14:35:00+03:00'},
'updated': '2016-09-18T07:02:56.612Z',
'description': '...',
'recurrence': ['RRULE:FREQ=WEEKLY;WKST=SU;INTERVAL=2;BYDAY=WE'],
'end': {'timeZone': 'Europe/Moscow', 'dateTime': '2016-09-07T16:10:00+03:00'}}
My idea was to somehow parse the RRULE. I've found a way to get a list of datetimes from it, but how to get this event's start time is still a question for me.
I'm getting data with service.events().list(...).execute()
I know that in the new API (I found this only today) an item has a nice field called originalStartTime, but old events don't have it.

StackOverflow magic: I had been trying to find a solution for 2 days, and only after posting the question did I find the method that does exactly what I needed: events().instances()
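For anyone who lands here later, a minimal sketch of using events().instances() for this, assuming service is the authorized Calendar API client from the question and event_id is the recurring event's id (the 'primary' calendar id is an assumption):
# Expand the recurring event into individual occurrences; each occurrence
# carries its own concrete start time, so no RRULE parsing is needed.
instances = service.events().instances(
    calendarId='primary',  # assumption: the event lives in the primary calendar
    eventId=event_id,
).execute()

for occurrence in instances.get('items', []):
    # All-day occurrences have 'date' instead of 'dateTime'.
    start = occurrence['start'].get('dateTime', occurrence['start'].get('date'))
    print(occurrence.get('summary'), start)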

Related

Grabbing opening hours from the Google Places API

I am using this Python library to grab a response from the Google Places API.
Using the places function from the above library returns the object you see below. You can see that it just gives me a boolean for whether the restaurant is currently open or closed. How can I see the hours it is open for each day of the week? If this library isn't capable of that, can anyone show me an example that is?
Here is the line of code making the request for full context.
import googlemaps # https://googlemaps.github.io/google-maps-services-python/docs/index.html
gmaps = googlemaps.Client(key='apiKey')
response = gmaps.places(query=charlestonBars[idx], location=charleston)
[ { 'business_status': 'OPERATIONAL',
'formatted_address': '467 King St, Charleston, SC 29403, United States',
'geometry': { 'location': {'lat': 32.7890988, 'lng': -79.9386229},
'viewport': { 'northeast': { 'lat': 32.79045632989271,
'lng': -79.93725907010727},
'southwest': { 'lat': 32.78775667010727,
'lng': -79.93995872989272}}},
'icon': 'https://maps.gstatic.com/mapfiles/place_api/icons/v1/png_71/bar-71.png',
'icon_background_color': '#FF9E67',
'icon_mask_base_uri': 'https://maps.gstatic.com/mapfiles/place_api/icons/v2/bar_pinlet',
'name': "A.C.'s Bar & Grill",
------->'opening_hours': {'open_now': True},
'photos': [ { 'height': 1816,
'html_attributions': [ '<a '
'href="https://maps.google.com/maps/contrib/106222166168758671498">Lisa '
'Royce</a>'],
'photo_reference': 'ARywPAKpP_eyNL_y625xWYrQvSjAI91TzEx4XgT1rwCxjFyQjEAwZb2ha9EgE2RcKJalrZhjp0yTWa6QqvPNU9c7GeNBTDtzXVI0rHq2RXtTySGu8sjcB76keFugOmsl1ix4NnDVh0NO0vt_PO3nIZ-R-ytOzzIRhJgPAJd3SxKNQNfEyVp5',
'width': 4032}],
'place_id': 'ChIJk6nSrWt6_ogRE9KiNXVG5KA',
'plus_code': { 'compound_code': 'Q3Q6+JG Charleston, South Carolina',
'global_code': '8742Q3Q6+JG'},
'price_level': 1,
'rating': 4.3,
'reference': 'ChIJk6nSrWt6_ogRE9KiNXVG5KA',
'types': [ 'bar',
'restaurant',
'point_of_interest',
'food',
'establishment'
What you need to use is the Place Details service of the Places API.
Before we proceed with the solution, we must understand that there's a difference in the result of a Place Search and a Place Details request.
According to the documentation:
"Place Search requests and Place Details requests do not return the same fields. Place Search requests return a subset of the fields that are returned by Place Details requests. If the field you want is not returned by Place Search, you can use Place Search to get a place_id, then use that Place ID to make a Place Details request."
Now, what you used in your code, according to the Python library documentation, is places(*args, **kwargs), which is the Place Search.
The Google Maps API documentation you provided in your comment above, where you can get the expected per-day hours, is from Place Details, which is place(*args, **kwargs) in the Python library documentation.
As quoted above, to request the details of a place you need its place_id, which you can get by doing a Place Search request like you did in your question. So all you need to do is get the place_id of the location you want through Place Search, then use that place_id to make a Place Details request, whose result includes the opening_hours field.
Here's what it looks like in Python code:
# I used pprint to print the result on the console
from pprint import pprint
import googlemaps #import googlemaps python library
# Instantiate the client using your own API key
API_KEY = 'API_KEY'
map_client = googlemaps.Client(API_KEY)
# Store the location you want, in my case, I tried using 'Mall of Asia'
location_name = 'Mall of Asia'
# Instantiate Place Search request using `places(*args, **kwargs)` from the library
# Use the `stored location` as an argument for query
place_search = map_client.places(location_name)
# Get the search results
search_results = place_search.get('results')
# Store the place_id from the result to be used
place_id = (search_results[0]['place_id'])
# Instantiate Place Details request using the `place(*args, **kwargs)` from the library
# Use the stored place_id as an argument for the request
place_details = map_client.place(place_id)
# Get the Place Details result
details_results = place_details.get('result')
# Print the result specifying what you only need which is the `opening_hours` field
pprint(details_results['opening_hours'])
The result of this sample request would be this:
{'open_now': False,
'periods': [{'close': {'day': 0, 'time': '2200'},
'open': {'day': 0, 'time': '1000'}},
{'close': {'day': 1, 'time': '2200'},
'open': {'day': 1, 'time': '1000'}},
{'close': {'day': 2, 'time': '2200'},
'open': {'day': 2, 'time': '1000'}},
{'close': {'day': 3, 'time': '2200'},
'open': {'day': 3, 'time': '1000'}},
{'close': {'day': 4, 'time': '2200'},
'open': {'day': 4, 'time': '1000'}},
{'close': {'day': 5, 'time': '2200'},
'open': {'day': 5, 'time': '1000'}},
{'close': {'day': 6, 'time': '2200'},
'open': {'day': 6, 'time': '1000'}}],
'weekday_text': ['Monday: 10:00\u202fAM\u2009–\u200910:00\u202fPM',
'Tuesday: 10:00\u202fAM\u2009–\u200910:00\u202fPM',
'Wednesday: 10:00\u202fAM\u2009–\u200910:00\u202fPM',
'Thursday: 10:00\u202fAM\u2009–\u200910:00\u202fPM',
'Friday: 10:00\u202fAM\u2009–\u200910:00\u202fPM',
'Saturday: 10:00\u202fAM\u2009–\u200910:00\u202fPM',
'Sunday: 10:00\u202fAM\u2009–\u200910:00\u202fPM']}
That would be all and I hope this helps! Feel free to comment below if this does not meet your expected results.

YouTube Data API: extract transcripts for a list of dictionaries

I'm trying to get the transcripts of a number of videos in a playlist. When I run the code I get the list below as a result, which contains the id of each video as the key of a dictionary, and a list of dictionaries as the value. Does anyone know how I could extract and join only the "text" values from the list and store the result in a variable named "GetText"?
This is the code:
from googleapiclient.discovery import build
from youtube_transcript_api import YouTubeTranscriptApi
import os
api_key = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
nextPageToken = None
srt = []
vid_ids = []
vid_title = []
while True:
    # 1. Query the API
    rq = build("youtube", "v3", developerKey=api_key).playlistItems().list(
        part="contentDetails, snippet",
        playlistId="PLNIs-AWhQzckr8Dgmgb3akx_gFMnpxTN5",
        maxResults=50,
        pageToken=nextPageToken
    ).execute()
    # 2. Create lists of video ids and titles
    for item in rq["items"]:
        vid_ids.append(item["contentDetails"]["videoId"])
        vid_title.append(item["snippet"]["title"])
    nextPageToken = rq.get('nextPageToken')
    if not nextPageToken:
        break

# 3. Get transcripts
for i in vid_ids:
    try:
        srt += [YouTubeTranscriptApi.get_transcripts([i])]
    except:
        print(f"{i} doesn't have a transcript")
print(srt)

# 4. For each video id, extract the key "text" from the list of dictionaries
?????????????????????
This is part of the list of transcripts I get:
[
({
"KHO5NIcZAc4":[
{
"text":"welcome to this wise ell tutorial in",
"start":0.23,
"duration":4.15
},
{
"text":"this video we're going to teach you",
"start":3.06,
"duration":3.09
},
...
]
})
]
Frankly, I don't understand your problem.
This should be basic knowledge: use for-loops to work with lists and dictionaries.
That's all.
data = [({'KHO5NIcZAc4':
[{'text': 'welcome to this wise ell tutorial in', 'start': 0.23, 'duration': 4.15}, {'text': "this video we're going to teach you", 'start': 3.06, 'duration': 3.09}, {'text': 'about working with the visual basic', 'start': 4.38, 'duration': 3.66}, {'text': 'editor application with a name to', 'start': 6.15, 'duration': 4.409}, {'text': 'writing some Excel VBA code in this', 'start': 8.04, 'duration': 3.66}, {'text': "video we're not going to write any code", 'start': 10.559, 'duration': 2.881}, {'text': 'itself but we are going to do is show', 'start': 11.7, 'duration': 3.45}, {'text': 'you how you can set up and work with the', 'start': 13.44, 'duration': 3.839}, {'text': "visual basic editor so I'll start by", 'start': 15.15, 'duration': 3.99}, {'text': 'showing you how you can access the VBA', 'start': 17.279, 'duration': 3.75}, {'text': 'deter from whichever version of Excel', 'start': 19.14, 'duration': 4.11}, {'text': "you happen to be working in we'll talk", 'start': 21.029, 'duration': 3.931}, {'text': 'about how you can switch between the the', 'start': 23.25, 'duration': 4.17}, {'text': 'VBA editor and Excel itself with some', 'start': 24.96, 'duration': 4.649}, {'text': "nice quick keyboard shortcuts we'll also", 'start': 27.42, 'duration': 3.54}, {'text': 'give you a quick whirlwind tour of the', 'start': 29.609, 'duration': 3.001}, {'text': 'VB screen and explain what the main', 'start': 30.96, 'duration': 4.259}, {'text': 'window is in the VB editor application', 'start': 32.61, 'duration': 5.4}]
})]
for item in data:
    # print(item)
    for video_id, transcript in item.items():
        print('ID:', video_id)
        all_parts = []
        for part in transcript:
            # print(part['text'])
            all_parts.append(part['text'])
        full_text = " ".join(all_parts)
        print(full_text)
Result:
ID: KHO5NIcZAc4
welcome to this wise ell tutorial in this video we're going to teach you about working with the visual basic editor application with a name to writing some Excel VBA code in this video we're not going to write any code itself but we are going to do is show you how you can set up and work with the visual basic editor so I'll start by showing you how you can access the VBA deter from whichever version of Excel you happen to be working in we'll talk about how you can switch between the the VBA editor and Excel itself with some nice quick keyboard shortcuts we'll also give you a quick whirlwind tour of the VB screen and explain what the main window is in the VB editor application
BTW:
When you use a for-loop to work with a list or dictionary, you can use print(...), print(type(...)) and print(some_dictionary.keys()) to see what you have in your variables and what to use in a nested for-loop.
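To tie this back to the question's goal of a single GetText variable, here is a minimal sketch applying the same loop to the srt list built in the question, assuming each element has the ({video_id: [parts, ...]}, ...) shape shown in the question's output:
# Build one joined transcript string per video id from the srt list.
# get_transcripts() may return a (dict, errors) tuple, so take the dict
# part if an element turns out to be a tuple.
GetText = {}
for entry in srt:
    transcripts = entry[0] if isinstance(entry, tuple) else entry
    for video_id, parts in transcripts.items():
        GetText[video_id] = " ".join(part["text"] for part in parts)

print(GetText)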

Extract a link from a web page with Python

I'm writing code that sends the site's link in a chat (I know how to do this part). I make the request, but the request returns other things along with the link. How do I get only the link?
link = requests.get(f"https://sugoi-api.herokuapp.com/episode/{Episodio}/{AnimeN}")
resultado = link.json()
this is the result:
{'status': 200, 'info': {'name': 'Naruto classico', 'slug': 'naruto-classico', 'fc': 'N', 'epi': '12'}, 'cdn': [{'name': 'Superanimes', 'url': 'https://cdn.superanimes.tv/', 'links': ['https://cdn.superanimes.tv/010/animes/n/naruto-classico-dublado/12.mp4', 'https://cdn.superanimes.tv/010/animes/n/naruto-classico-legendado/12.mp4']}, {'name': 'Serverotaku', 'url': 'https://cdn.serverotaku01.co/', 'links': ['https://cdn.serverotaku01.co/010/animes/n/naruto-classico-dublado/12.mp4', 'https://cdn.serverotaku01.co/010/animes/n/naruto-classico-legendado/12.mp4']}, {'name': 'Servertv', 'url': 'https://servertv001.com/', 'links': ['https://servertv001.com/animes/n/naruto-classico-dublado/12.mp4', 'https://servertv001.com/animes/n/naruto-classico-legendado/12.mp4']}]}
If someone knows how to get only the result link, it would help me a lot.
One simple way to extract URLs from any data (in general) is shown below: first convert the JSON output you got back into a string, then use a regular expression.
import json
import re

# Dump the parsed response (resultado) back to a string, then grab every URL;
# the character class excludes quotes so they aren't captured along with the URL.
urls = re.findall(r"(?P<url>https?://[^\s'\"]+)", json.dumps(resultado))
print(urls)
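Since the response is already parsed into a dict, a minimal alternative sketch (assuming the 'cdn' structure shown in the question) is to read the links directly instead of using a regex:
# Collect every link from each CDN entry of the parsed response.
todos_links = []
for cdn in resultado.get('cdn', []):
    todos_links.extend(cdn.get('links', []))
print(todos_links)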

Using SoftLayer Object Filters for activeTransaction

I am trying to use the Python SoftLayer API to return a list of virtual servers that do not have an active transaction in "RECLAIM_WAIT" status (which is the state a virtual server is in after you delete it in SoftLayer). I am expecting to get back all virtual servers that have no activeTransaction at all, and also ones that have an activeTransaction in a status other than "RECLAIM_WAIT".
I call the vs manager with a filter that I think should work:
f={'virtualGuests': {'activeTransaction': {'transactionStatus': {'name': {'operation': '!= RECLAIM_WAIT'}}}}}
instance = vs.list_instances(hostname="node5-0",filter=f)
but it returns only instances that have an activeTransaction (including the ones that have a RECLAIM_WAIT status).
Here is an example of a returned instance from that call:
[{'status': {'keyName': 'DISCONNECTED', 'name': 'Disconnected'}, 'datacenter': {'statusId': 2, 'id': 265592, 'name': 'xxxx', 'longName': 'xxx'}, 'domain': 'xxxx', 'powerState': {'keyName': 'HALTED', 'name': 'Halted'}, 'maxCpu': 2, 'maxMemory': 8192, 'hostname': 'node5-0', 'primaryIpAddress': 'xxxx', 'activeTransaction': {'modifyDate': '2017-01-16T05:20:01-06:00', 'statusChangeDate': '2017-01-16T05:20:01-06:00', 'elapsedSeconds': 22261, 'createDate': '2017-01-16T05:19:05-06:00', 'hardwareId': '', 'guestId': 27490599, 'id': 46204349, 'transactionStatus': {'friendlyName': 'This is a buffer time in which the customer may cancel the server', 'name': 'RECLAIM_WAIT'}}, 'globalIdentifier': 'xx', 'primaryBackendIpAddress': 'xxx', 'id': xxx, 'fullyQualifiedDomainName': 'xxx'}]
What am I doing wrong with the filter?
There is nothing wrong with your request; unfortunately, it's not possible to filter transactions by their transactionStatus, because the transaction doesn't expose a "transactionStatusId" key. You can check the transaction datatype: "transactionStatusId" does not exist among its local properties.
SoftLayer_Provisioning_Version1_Transaction
So, the best way would be to filter directly in your code.
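A minimal sketch of that client-side filtering, assuming vs is the same VSManager used in the question and that the returned items include the activeTransaction data shown above (you may need to request it explicitly via the mask argument):
# Keep servers that either have no active transaction at all or whose
# active transaction is not in RECLAIM_WAIT status.
instances = vs.list_instances(hostname="node5-0")

not_reclaiming = [
    i for i in instances
    if i.get('activeTransaction', {})
         .get('transactionStatus', {})
         .get('name') != 'RECLAIM_WAIT'
]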

Iterating through and deleting certain elements in a list of dictionaries in Python

I have a JSON file that looks like this:
[{'Events': [{'EventName': 'Log',
'EventType': 'Native',
'LogLevel': 'error',
'Message': 'missing event: seqNum=1'},
{'EventName': 'Log',
'EventType': 'Native',
'LogLevel': 'error',
'Message': 'missing event: seqNum=2'}],
'Id': 116005},
{'Events': [{'EventName': 'Log',
'EventType': 'Native',
'LogLevel': 'error',
'Message': 'missing event: seqNum=101'},
{'EventName': 'Log',
'EventType': 'Native',
'LogLevel': 'error',
'Message': 'missing event: seqNum=102'},
{'BrowserInfo': {'name': 'IE ', 'version': '11'},
'EventName': 'Log',
'EventType': 'Native',
'LogLevel': 'info',
'SeqNum': 3,
'SiteID': 1454445626890,
'Time': 1454445626891,
'URL': 'http://test.com'},
{'BrowserInfo': {'name': 'IE ', 'version': '11'},
'EventName': 'eventIndicator',
'EventType': 'responseTime',
'SeqNum': 8,
'SiteID': 1454445626890,
'Time': 1454445626923,
'URL': 'http://test.com'}],
'Id': 116005}]
And I am trying to remove each of the events where "EventName" is "Log".
I would assume there is a way to pop them out, but I can't even iterate far enough into the list to do that. What is the cleanest way to do this?
I should end up with a list that looks like:
[{'Events': [{'BrowserInfo': {'name': 'IE ', 'version': '11'},
'EventName': 'eventIndicator',
'EventType': 'responseTime',
'SeqNum': 8,
'SiteID': 1454445626890,
'Time': 1454445626923,
'URL': 'http://test.com'}],
'Id': 116005}]
It's difficult to modify a list or other data structure as you're iterating over it. It's often easier to create a new data structure, excluding the unwanted values.
You appear to want to do two things:
Remove dictionaries from the "Events" lists that have an "EventName" of "Log".
Remove any top level dictionaries who's lists of events have become empty after the "Log" events were removed.
It's a bit tricky to do both at once, but not too bad:
filtered_json_list = []
for event_group in json_list:
    filtered_events = [event for event in event_group["Events"]
                       if event["EventName"] != "Log"]
    if filtered_events:  # skip empty event groups!
        filtered_json_list.append({"Id": event_group["Id"], "Events": filtered_events})
This was a lot easier than I expected because the top-level dictionaries (which I call event_groups, for lack of a better name) only had two keys, "Id" and "Events". If instead those dictionaries had many keys and values (or the keys and values they had were unpredictable), you'd probably need to replace the last line of my code with something more complicated (e.g. creating a dictionary with just the filtered events, then using some kind of loop to copy over all the non-"Events" keys and values), rather than creating the dictionary by hand with a literal; see the sketch below.
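A minimal sketch of that more general variant, copying every key except "Events" and then attaching the filtered events (same json_list input as above):
filtered_json_list = []
for event_group in json_list:
    filtered_events = [event for event in event_group["Events"]
                       if event["EventName"] != "Log"]
    if filtered_events:
        # Copy all keys except "Events", then attach the filtered events.
        new_group = {key: value for key, value in event_group.items()
                     if key != "Events"}
        new_group["Events"] = filtered_events
        filtered_json_list.append(new_group)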
This program might help.
import json

# Parse the JSON
with open('x.json') as fp:
    events = json.load(fp)

# Kill all "Log" events
for event_set in events:
    event_list = event_set['Events']
    event_list[:] = [event for event in event_list if event['EventName'] != 'Log']

# Kill all empty event sets
events[:] = [event_set for event_set in events if event_set['Events']]

print(json.dumps(events, indent=2))
You can use a Python list comprehension for this, applied to each group's "Events" list:
[event for event in event_group['Events'] if event['EventName'] != 'Log']
