I am editing a Python script to control home appliances (Sonos, Hue, etc.) based on the log of a Plex media server.
So far things have gone well, but now I am struggling with a question I could not find an answer to. These are my first steps, so please bear with me.
I have a log file which collects session information in this format:
...
2014-09-13 14:40:02 johnedoe is watching 30c3 Keynote
2014-09-13 14:48:06 thomas is watching Band of Brothers
2014-09-13 15:28:03 johnedoe is watching The Zero Theorem
...
If a new session is detected, the script checks whether the part "johnedoe is watching 30c3 Keynote" is already present. I would like to include a timestamp check which would not only examine whether the session is present, but also whether an already logged session is older than two hours.
In pseudocode: if (alert not in log) or (alert in log and older than two hours):
Can I access the corresponding timestamp directly, or do I have to regex the line? Any help is greatly appreciated. Many thanks in advance.
This is the tail of the code:
logLocation = '/storage/downloads/plexMon.log'
logging.basicConfig(format='%(asctime)s %(message)s', datefmt='%Y-%m-%d %H:%M:%S', filename=logLocation, level=logging.INFO)
log = open(logLocation).read()
server = urllib2.urlopen('http://127.0.0.1:32400/status/sessions')
data = server.read()
server.close()
tree = ET.fromstring(data)
for video in tree.iter('Video'):
    show = video.get('grandparentTitle')
    episode = video.get('title')
    if show == "None":
        title = episode
    else:
        title = '%s - %s' % (show, episode)
    user = video.find('User').get('title').split('#')[0]
    alert = '%s is watching %s' % (user, title)
    if alert not in log:
        logging.info(alert)
        if all(i not in alert for i in ignoreAlertList):
            sendAlert(alert)
        if user == "johnedoe":
            b = Bridge('192.168.1.109')
            b.set_light(1, 'bri', 50)
            my_zone = SoCo('192.168.1.105')
            my_zone.unjoin()
            my_zone.pause()
Without regexes, this may be easier to achieve if you use:
log_lines = open(logLocation).readlines() # read the log file as a list of lines
instead of:
log = open(logLocation).read()
and then use something like:
for line in log_lines:
    if alert in line:
        partsOfLine = line.split()
        dateParts = partsOfLine[0].split("-")
        timeParts = partsOfLine[1].split(":")
        # note: this needs `import datetime` at the top of the script
        alerttime = datetime.datetime(int(dateParts[0]), int(dateParts[1]), int(dateParts[2]),
                                      int(timeParts[0]), int(timeParts[1]), int(timeParts[2]))
        # subtract the logged time from "now" so the difference is positive for old entries
        timediff_in_min = (datetime.datetime.now() - alerttime).total_seconds() / 60
        if timediff_in_min >= 120:
            print "ignoring older than 2 hours"
        else:
            print "new alert in the last 2 hours"
I have the following method get_email() that, basically every 20 seconds, gets the latest email and runs a series of other methods on it.
def get_email():
    import win32com.client
    import os
    import time
    import datetime as dt

    date_time = time.strftime('%m-%d-%Y')
    outlook = win32com.client.Dispatch("Outlook.Application").GetNameSpace("MAPI")
    inbox = outlook.GetDefaultFolder(6)
    messages = inbox.Items
    message = messages.GetFirst() # any time after calling GetFirst(), you can call GetNext()....
    email_subject = message.subject
    email_sender = message.SenderEmailAddress
    attachments = message.Attachments
    body_content = message.body
    print ('From: ' + email_sender)
    print ('Subject: ' + email_subject)
    if attachments.Count > 0:
        print (str(attachments.Count) + ' attachments found.')
        for i in range(attachments.Count):
            email_attachment = attachments.Item(i+1)
            report_name = date_time + '_' + email_attachment.FileName
            print('Pushing attachment - ' + report_name + ' - to check_correct_email() function.')
            if check_correct_email(email_attachment, email_subject, report_name) == True:
                save_incoming_report(email_attachment, report_name, get_report_directory(email_subject))
            else:
                print('Not the attachment we are looking for.')
                # add error logging here
                break
    else: #***********add error logging here**************
        print('No attachment found.')
My main question is:
Is there a way I can iterate over every email using the GetNext() function, say once every hour, instead of just grabbing the latest one every 20 seconds (which is definitely not as efficient as searching through all emails)?
Given that there are two functions, GetFirst() and GetNext(), how would I properly have it save the latest email checked, and then go through all the ones that have yet to be checked?
Do you think it would be easier to set up a different folder in Outlook where I can push all of these reports to, and then iterate through them on a time basis? The only problem here is that an incoming report may be auto-generated, and the time interval between emails could be less than 20 seconds, or even 1 second.
Any help at all is appreciated!
You can use the Restrict method to restrict your messages variable to emails sent within the past hour, and iterate over each of those. Restrict takes the full list of items from your inbox and gives you a list of the ones that meet specific criteria, such as having been received in a specified time range. (The MSDN documentation for Restrict lists some other potential properties you could restrict by.)
If you run this every hour, you can Restrict your inbox to the messages you received in the past hour (which, presumably, are the ones that still need to be searched) and iterate over those.
Here's an example of restricting to emails received in the past hour (or minute):
import win32com.client
import os
import time
import datetime as dt
# this is set to the current time
date_time = dt.datetime.now()
# this is set to one hour ago
lastHourDateTime = dt.datetime.now() - dt.timedelta(hours = 1)
#This is set to one minute ago; you can change timedelta's argument to whatever you want it to be
lastMinuteDateTime = dt.datetime.now() - dt.timedelta(minutes = 1)
outlook = win32com.client.Dispatch("Outlook.Application").GetNameSpace("MAPI")
inbox = outlook.GetDefaultFolder(6)
# retrieve all emails in the inbox, then sort them from most recently received to oldest (False will give you the reverse). Not strictly necessary, but good to know if order matters for your search
messages = inbox.Items
messages.Sort("[ReceivedTime]", True)
# restrict to messages from the past hour based on ReceivedTime using the dates defined above.
# lastHourMessages will contain only emails with a ReceivedTime later than an hour ago
# The way the datetime is formatted DOES matter; You can't add seconds here.
lastHourMessages = messages.Restrict("[ReceivedTime] >= '" +lastHourDateTime.strftime('%m/%d/%Y %H:%M %p')+"'")
lastMinuteMessages = messages.Restrict("[ReceivedTime] >= '" +lastMinuteDateTime.strftime('%m/%d/%Y %H:%M %p')+"'")
print "Current time: "+date_time.strftime('%m/%d/%Y %H:%M %p')
print "Messages from the past hour:"
for message in lastHourMessages:
    print message.subject
    print message.ReceivedTime

print "Messages from the past minute:"
for message in lastMinuteMessages:
    print message.subject
    print message.ReceivedTime

# GetFirst/GetNext will also work, since the restricted message list is just a shortened version of your full inbox.
print "Using GetFirst/GetNext"
message = lastHourMessages.GetFirst()
while message:
    print message.subject
    print message.ReceivedTime
    message = lastHourMessages.GetNext()
You seem to have it running every 20 seconds, so presumably you could run it at a different interval. If you can't run it reliably at a regular interval (which would then be specified in the timedelta, e.g. hours=1), you could save the ReceivedTime of the most recent email checked, and use it to Restrict your search. (In that case, the saved ReceivedTime would replace lastHourDateTime, and the Restrict would retrieve every email sent after the last one checked.)
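For example, a rough sketch of that variant (the file name and helper functions are made up; the idea is just to persist the last run's timestamp between executions and Restrict on it):
import datetime as dt

LAST_CHECKED_FILE = 'last_checked.txt'  # hypothetical location for the saved timestamp

def load_last_checked():
    # fall back to one hour ago if nothing has been saved yet
    try:
        with open(LAST_CHECKED_FILE) as f:
            return dt.datetime.strptime(f.read().strip(), '%Y-%m-%d %H:%M:%S')
    except (IOError, ValueError):
        return dt.datetime.now() - dt.timedelta(hours=1)

def save_last_checked(when):
    with open(LAST_CHECKED_FILE, 'w') as f:
        f.write(when.strftime('%Y-%m-%d %H:%M:%S'))

last_checked = load_last_checked()
# same Restrict call as above, but anchored on the previous run instead of a fixed hour
newMessages = messages.Restrict("[ReceivedTime] >= '" + last_checked.strftime('%m/%d/%Y %H:%M %p') + "'")
for message in newMessages:
    print message.subject
save_last_checked(dt.datetime.now())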
I hope this helps!
I had a similar question and worked through the above solution. I'm including another general-use example in case other folks find it easier:
import win32com.client
import os
import datetime as dt
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
# setup range for outlook to search emails (so we don't go through the entire inbox)
lastWeekDateTime = dt.datetime.now() - dt.timedelta(days = 7)
lastWeekDateTime = lastWeekDateTime.strftime('%m/%d/%Y %H:%M %p') #<-- This format compatible with "Restrict"
# Select main Inbox
inbox = outlook.GetDefaultFolder(6)
messages = inbox.Items
# Only search emails in the last week:
messages = messages.Restrict("[ReceivedTime] >= '" + lastWeekDateTime +"'")
for message in messages:
    print(message.subject)
# Rest of code...
The solution below pulls the unread mails from the Outlook inbox that arrived within the past 30 minutes, and also prints the count of those mails.
import win32com.client
import os
import time
import datetime as dt
# this is set to the current time
date_time = dt.datetime.now()
#This is set to 30 minutes ago; you can change timedelta's argument to whatever you want it to be
last30MinuteDateTime = dt.datetime.now() - dt.timedelta(minutes = 30 )
outlook = win32com.client.Dispatch("Outlook.Application").GetNameSpace("MAPI")
inbox = outlook.GetDefaultFolder(6)
# retrieve all emails in the inbox, then sort them from most recently received to oldest (False will give you the reverse). Not strictly necessary, but good to know if order matters for your search
messages = inbox.Items.Restrict("[Unread]=true")
messages.Sort("[ReceivedTime]", True)
last30MinuteMessages = messages.Restrict("[ReceivedTime] >= '" +last30MinuteDateTime.strftime('%m/%d/%Y %H:%M %p')+"'")
print "Current time: "+date_time.strftime('%m/%d/%Y %H:%M %p')
print "Messages from the past 30 minute:"
c=0
for message in last30MinuteMessages:
    print message.subject
    print message.ReceivedTime
    c = c + 1
print "The count of messages unread from past 30 minutes ==", c
Using import datetime, this is what I came up with:
count = 0
msg = messages[len(messages) - count - 1]
while msg.SentOn.strftime("%d-%m-%y") == datetime.datetime.today().strftime("%d-%m-%y"):
    msg = messages[len(messages) - count - 1]
    count += 1
    # Rest of the code
I am currently having problems with timeouts and performance on Django redirection. The issue only became visible when I browsed to my locally hosted application with two devices, with only one worker enabled on my localhost and the timeout set to 30 seconds.
I have a views.py function that redirects to a page based on the given URL: I look up the pk in a table and return the corresponding URL. I also have a counter that keeps track of the number of forwards.
urls.py here:
url(r'^i/(?P<pk>[-\w]+)/$', frontendapp_views.item_view, name="item_view"),
The page redirects instantly to the "desired_url_forward"; however, the connection with the user stays open even though the user has in fact left my Django environment. This somehow leaves my worker waiting for 30 seconds after I have already been forwarded to the external page, so with one worker no other request can be processed.
I could increase the number of workers or shorten the timeout, but that doesn't feel right as it is not fixing the core issue.
This is the only thing I found on this topic, but I am not skilled enough to understand it: https://github.com/requests/requests/issues/520
This is what the views.py looks like:
def item_view(request,pk):
    pk_binairy = urlsafe_base64_decode(pk)
    pk_int = int.from_bytes(pk_binairy, byteorder='little')
    desired_url_forward_object = get_object_or_404(forwards, pk = pk_int)
    channel_cleaned_utm = re.sub(' +',' ',"".join([request.GET.get('utm_source', ''),' ',request.GET.get('utm_medium', ''),' ',request.GET.get('utm_campaign', ''),' ',request.GET.get('utm_term', ''),' ',request.GET.get('utm_content', '')]))
    channel_cleaned = request.META.get('HTTP_REFERER')
    if channel_cleaned is None:
        channel_cleaned = 'Direct Traffic'
    visitor_ip_request = get_client_ip(request)
    location_request = get_client_location(request, visitor_ip_request)
    clickstat = clickstats(
        urlid = pk_int,
        user = desired_url_forward_object.user,
        channel = channel_cleaned,
        visitor_ip = visitor_ip_request,
        city = location_request['city'],
        region = location_request['region'],
        country = location_request['country'],
        device_type = request.user_agent.device.family,
        browser = request.user_agent.browser.family,
        browser_version = request.user_agent.browser.version_string,
        operating_system = request.user_agent.os.family,
        operating_system_version = request.user_agent.os.version_string
    )
    clickstat.save()
    if desired_url_forward_object.counterA <= desired_url_forward_object.counterB:
        desired_url_forward = desired_url_forward_object.urlA
        desired_url_forward_object.counterA = F('counterA') + 1
    else:
        desired_url_forward = desired_url_forward_object.urlB
        desired_url_forward_object.counterB = F('counterB') + 1
    desired_url_forward_object.save()
    return redirect(desired_url_forward)
Any suggestions? Thanks for the help!
Thanks to some help from Stack Overflow yesterday, I've made progress in my code. But I have a question concerning my page. Here is my code:
#!/usr/bin/python
print 'Content-type: text/html\n'
print
#May 17
import cgi, cgitb
cgitb.enable()
form=cgi.FieldStorage()
#May19
dataset1=open('Book1.csv','r')
dataset2=open('Race Demographic by state.txt','r')
sources='''<hr><h4>Sources!</h4>Race Demographics
SAT Scores
SAT Scores txt file
Race Demographics txt file</body></html>'''
def datasplitter(x):
    data = x.read()
    return data.split("\n")
def datdatamang(x):
    data = datasplitter(x)
    index = 0
    WhileNumber = 0
    while index < len(data):
        WhileNumber = 0
        string = data[index]
        string.split(" ")
        x=''
        while WhileNumber < len(string):
            if string[WhileNumber] == ',':
                x=x+'</td><td>'
            else:
                x=x+string[WhileNumber]
            WhileNumber+= 1
        ' '.join(string)
        data[index]='<tr><td>'+x+'</td></tr>'
        index+=1
    result=' '.join(data)
    result='''<table border='1'>'''+result+'</table>'
    return result
#May 19
def getDescription():
    page = ''
    state = ''
    #May20
    if 'Drop' in form:
        if form['Drop'].value =="Description":
            page+= 'Ever since its conception, the SAT examinations have been widely controversial.Many individuals believe that their race is the most intellectually superior, and that their SAT scores reflect on their superiority. It is my job to find this correlation between scores and race and end the battle among the races!'
        if form['Drop'].value=="High SAT Scores":
            page+= datdatamang(dataset1)
        if form['Drop'].value=="Race Demographics":
            page+= datdatamang(dataset2)
        if form['Drop'].value=="source":
            page+= sources
        else:
            return page
#May 21
def getState():
    table=''
    if 'on' in form:
        state+= form['on'].value
        if state in dataset1:
            state.header=dataset1.index(state)
            for n in dataset1[state.header]:
                table+='''<tr>'''+n+'''</tr>'''
    return '''<td>'''+table+'''</td>'''
def getRacebyState():
    if 'on' in form:
        state+= form['on'].value
        if state in dataset2:
            state.header=dataset2.index(state)
            for n in dataset2[state.header]:
                table+='''<tr>'''+n+'''</tr>'''
    return '''<td>'''+table+'''</td>'''
#May 20
print '''<html><title>SAT Analysis</title>'''
print '''<body>'''
print getDescription()
print '''</body></html>'''
#May 17 - Writing the analysisV2.html file and starting the code
#May 19 - Tables, and the like
#May 20 - Writing if statements for drop-down menu, building the page.
#May 21 - Working on text fields and getState
Essentially, my page works so that you have a drop-down menu to choose from (choosing one of its values, e.g. "High SAT Scores" or "Race Demographics", causes my Python code to generate a page with tables or descriptions for that option), or a text field (which searches for a state in my CSV files and returns a table row with the data about that particular state). Using cgi.FieldStorage(), Python collects the values that are submitted through the HTML form. However, how do I write the code so that I only send a value from the text field through the HTML form?
I.e. if I do not want to use the drop-down menu, and instead only want to use the text field to find a particular state, without submitting form data through the drop-down menu, how do I do that?
I am not sure whether I am missing something here but, assuming 'on' is the name of your text field, you can check whether the text field is empty like so:
if 'on' in form and form['on']:
    doSomethingWithOn()
else:
    doSomethingWithDrop()
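For instance, a minimal sketch of that branching against your script (a sketch only; the empty-string default in getvalue and the way I call your getState()/getDescription() functions are assumptions on my part):
import cgi

form = cgi.FieldStorage()

# cgi.FieldStorage leaves out text fields that were submitted empty, so an empty
# 'on' field simply will not appear in form at all.
state_query = form.getvalue('on', '').strip()

if state_query:
    # the text field was filled in, so ignore the drop-down entirely
    print getState()
elif 'Drop' in form:
    print getDescription()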
Hello,
I am a novice at this (please answer accordingly). All of the code should run easily after fixing my (second) question. It actually runs great on my machine, but probably not on yours yet. I have tried to comment everywhere to make it easier for someone to read. This runs on an Ubuntu 12.10 machine, but you don't need to be running Linux to help with my issues!
SOLVED 1. Code Review: As you go through the code, I would appreciate any input on how to condense or do things in a better, more appropriate way. The rest of my questions are really just the things I already know should be worked on. But if there's something else you find in my coding style, et al, please be candid. Upvotes to all good comments.
SOLVED: 2. Relative Icon Path: At the following:
/home/mh00h/workspace/issindicator/src/International_Space_Station_clip_art.svg
This is an absolute path to the same folder as this script. I don't want that; I want this script to work on anybody's machine. I tried these:
$HOME/workspace...
(nothing in front) International_Space_Station_clip_art.svg
./International_Space_Station_clip_art.svg
but those didn't work. The image above is what I am trying to use (yes, I know I have an SVG instead of a png listed, imgur limitation). Here is the documentation. It talks about an "icon-theme-path"... maybe that would do it somehow? Or perhaps there is a standard directory where all programmers are expected to store icons?
3. Concentrate my datetime functions: Really, I was fortunate to get this to work at all. My way is roundabout, but as far as I can tell, it works. I'm pretty confident that there is a better way to fix that mess though! You'll find a bunch of datetime stuff at the bottom of the script (see the sketch after the script below).
SOLVED: 4. Appindicator3 Hook: I would love to have GTK refresh only when the menu has been called instead of running every second regardless. This was partially answered here, but I don't really understand how to implement "realize." (Hopefully this is the right place to be asking this?)
Thank you!
#!/usr/bin/env python
import json, urllib2, time, math, datetime, os, webbrowser
from dateutil import tz
#indicator
from gi.repository import Gtk, GObject
from gi.repository import AppIndicator3 as appindicator
class indicator():
    def __init__(self):
        #######Set this to "False" if IssIndicator should hide its icon during normal runtime (default = True)
        self.isiconhidden = True
        #
        #create indicator
        self.ind = appindicator.Indicator.new (
            "issindicator",
            "/home/mh00h/workspace/issindicator/src/International_Space_Station_clip_art.svg",
            #"indicator-messages",
            appindicator.IndicatorCategory.APPLICATION_STATUS)
        if self.isiconhidden == True:
            self.ind.set_status (appindicator.IndicatorStatus.PASSIVE)
        else:
            self.ind.set_status (appindicator.IndicatorStatus.ACTIVE)
        #this is used to keep track of the gtk refresh period
        self.refreshvalue = False
        #dropdown menu
        #current pass menu items
        self.menu = Gtk.Menu()
        self.curpass = Gtk.MenuItem("not refreshed")
        self.curpass.connect("activate", self.checkiss)
        self.menu.append(self.curpass)
        self.curpassdur = Gtk.MenuItem(" ")
        self.menu.append(self.curpassdur)
        self.curpassrise = Gtk.MenuItem(" ")
        self.menu.append(self.curpassrise)
        self.curpassset = Gtk.MenuItem(" ")
        self.menu.append(self.curpassset)
        self.sep1 = Gtk.SeparatorMenuItem()
        self.menu.append(self.sep1)
        #future pass items
        self.futpass = Gtk.MenuItem(" ")
        self.futpass.connect("activate", self.onurl)
        self.menu.append(self.futpass)
        self.sep2 = Gtk.SeparatorMenuItem()
        self.menu.append(self.sep2)
        #Options items
        self.aboutmenu = Gtk.MenuItem("About")
        self.aboutmenu.connect("activate", self.onabout)
        self.menu.append(self.aboutmenu)
        self.quit = Gtk.MenuItem("Quit")
        self.quit.connect("activate", self.quitnow)
        self.menu.append(self.quit)
        self.curpass.show()
        self.sep1.show()
        self.futpass.show()
        self.sep2.show()
        self.aboutmenu.show()
        self.quit.show()
        self.ind.set_menu(self.menu)
        #get iss data at first run
        self.updatecache()
        self.checkiss()
        Gtk.main()
    #functions
    def hideicon(self, w=None):
        self.ind.set_status (appindicator.IndicatorStatus.PASSIVE)
    def showicon(self, w=None):
        self.ind.set_status (appindicator.IndicatorStatus.ACTIVE)
    def quitnow(self, w=None):
        Gtk.main_quit()
    #open browser for more tracking info
    def onurl(self, w=None):
        webbrowser.open("http://www.n2yo.com/passes/")
    def onabout(self,widget):
        widget.set_sensitive(False)
        ad=Gtk.AboutDialog()
        ad.set_name("aboutdialog")
        ad.set_version("0.1")
        ad.set_copyright('Copyright (c) 2013 mh00h')
        ad.set_comments('Indicating ISS Zarya')
        ad.set_license(''+
            'This program is free software: you can redistribute it and/or modify it\n'+
            'under the terms of the GNU General Public License as published by the\n'+
            'Free Software Foundation, either version 3 of the License, or (at your option)\n'+
            'any later version.\n\n'+
            'This program is distributed in the hope that it will be useful, but\n'+
            'WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY\n'+
            'or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for\n'+
            'more details.\n\n'+
            'You should have received a copy of the GNU General Public License along with\n'+
            'this program. If not, see <http://www.gnu.org/licenses/>.')
        ad.set_website('https://launchpad.net/~mh00h/+archive/issindicator')
        ad.set_website_label('ISSIndicator Homepage')
        ad.set_authors(['mh00h <abcd#abcd.com'])
        ad.run()
        ad.destroy()
        widget.set_sensitive(True)
    #how often to run checkiss
    def setrefresh(self, r):
        #this clause is required to keep script from hanging
        if r != self.refreshvalue:
            self.refreshvalue = r
            try:
                self.reftime = GObject.source_remove(True)
            except:
                pass
            try:
                self.reftime = GObject.timeout_add(r, self.checkiss)
            except:
                pass
    #
    def updatecache(self, w=None):
        #this will show in the menu until the update process completes
        self.passingstatus = 'not updated yet'
        #get ISS data from api
        self.ip = urllib2.urlopen("http://api.exip.org/?call=ip").read()
        self.geoip = json.load(urllib2.urlopen("http://freegeoip.net/json/"+self.ip))
        self.data = json.load(urllib2.urlopen("http://api.open-notify.org/iss/?lat="+str(self.geoip["latitude"])+"&lon="+str(self.geoip["longitude"])+"&alt=280&n=47"))
        self.data = {"message": "success", "request": {"latitude": 45.0, "passes": 3, "altitude": 280, "longitude": -81.0, "datetime": 1361502063},
                     "response": [{"duration": 542, "risetime": time.time()+10}, {"duration": 642, "risetime": 1361560774}, {"duration": 593, "risetime": 1361566621}]}
    def checkiss(self, w=None):
        #so as to not overload api servers, this runs as a separate process
        #this updates the timers
        self.n = 0
        self.passingstatus = "ISS Zarya is below the horizon"
        #check if we've gone through cached iss passings and update api if needed
        try:
            #ignore errors in case internet is not accessible
            #have a buffer of 5 passes remaining before updating cache
            #at 2 passes left, stop the program to prevent the rest of the program from throwing codes
            if time.time() > self.data['response'][len(self.data['response'])-5]['risetime']:
                self.updatecache()
        except:
            if time.time() > self.data['response'][len(self.data['response'])-2]['risetime']:
                os.system("notify-send 'ISS Indicator tried multiple times to update its satellite cache but has run out of its cached track.' 'This may be due to a bad internet connection. The application will now quit.'")
                Gtk.main_quit()
        #get current time
        current_utc = datetime.datetime.utcnow()
        current_utc = current_utc.replace(tzinfo=tz.gettz('UTC'))
        #iterate over all iss passes
        for k in self.data['response']:
            duration = self.data['response'][self.n]['duration']
            risetime = self.data['response'][self.n]['risetime']
            settime = risetime + duration
            #if this iteration matches with the current time, do...
            if risetime <= time.time() <= settime:
                #make the countdown values for the current pass tick
                #rise time calculations and date conversions to string format
                currisetime_utc = datetime.datetime.utcfromtimestamp(self.data['response'][self.n]['risetime'])
                currisetime_utc = currisetime_utc.replace(tzinfo=tz.gettz('UTC'))
                currisetime_tz = currisetime_utc.astimezone(tz.tzlocal())
                currisetime_tzstr = str("%02d" % (currisetime_tz.hour))+':'+str("%02d" % (currisetime_tz.minute))+':'+str("%02d" % (currisetime_tz.second))
                #set time calculations and durations
                cursettime_utc = datetime.datetime.utcfromtimestamp(self.data['response'][self.n]['risetime']+self.data['response'][self.n]['duration'])
                cursettime_utc = cursettime_utc.replace(tzinfo=tz.gettz('UTC'))
                cursettime_tz = cursettime_utc.astimezone(tz.tzlocal())
                curremainingtimeleft = cursettime_utc - current_utc
                curduration = cursettime_utc - currisetime_utc
                z = curremainingtimeleft.seconds
                zhours = z/60/60
                zminutes = z/60-zhours*60
                zseconds = z-zhours*60*60-zminutes*60
                curremainingtimeleftstr = str(zhours)+':'+str("%02d" % (zminutes))+':'+str("%02d" % (zseconds))
                z = curduration.seconds
                zhours = z/60/60
                zminutes = z/60-zhours*60
                zseconds = z-zhours*60*60-zminutes*60
                curdurationstr = str(zhours)+':'+str("%02d" % (zminutes))+':'+str("%02d" % (zseconds))
                cursettime_tzstr = str("%02d" % (cursettime_tz.hour))+':'+str("%02d" % (cursettime_tz.minute))+':'+str("%02d" % (cursettime_tz.second))
                #since the ISS is presently overhead, show the icon and update GTK menuitems to show timers on the ISS pass
                self.showicon()
                self.passingstatus = "ISS Zarya is above the horizon!"
                self.curpass.get_child().set_text(self.passingstatus)
                self.curpassdur.get_child().set_text("Duration: "+curdurationstr+" ("+curremainingtimeleftstr+" remaining)")
                self.curpassdur.show()
                self.curpassrise.get_child().set_text("Rise time: "+currisetime_tzstr)
                self.curpassrise.show()
                self.curpassset.get_child().set_text("Set time: "+cursettime_tzstr)
                self.curpassset.show()
                break
            else:
                #if this iteration of ISS passes does not match with current time, then increase self.n
                self.n += 1
        #regardless of results show the next pass time
        #if the ISS is overhead, use the next dictionary key for data
        if self.n != len(self.data['response']):
            nextrisetime_utc = datetime.datetime.utcfromtimestamp(self.data['response'][self.n+1]['risetime'])
        else:
            #if the ISS is not overhead, use the first key in the dictionary
            nextrisetime_utc = datetime.datetime.utcfromtimestamp(self.data['response'][0]['risetime'])
        #calculate the next rise time and make timers
        nextrisetime_utc = nextrisetime_utc.replace(tzinfo=tz.gettz('UTC'))
        nextrisetime_tz = nextrisetime_utc.astimezone(tz.tzlocal())
        remainingtimeleft = nextrisetime_utc - current_utc
        z = remainingtimeleft.seconds
        zhours = z/60/60
        zminutes = z/60-zhours*60
        zseconds = z-zhours*60*60-zminutes*60
        remainingtimeleftstr = str(zhours)+':'+str("%02d" % (zminutes))+':'+str("%02d" % (zseconds))
        nextrisetime_tzstr = str("%02d" % (nextrisetime_tz.hour))+':'+str("%02d" % (nextrisetime_tz.minute))+':'+str("%02d" % (nextrisetime_tz.second))
        #update GTK menuitem
        self.futpass.get_child().set_text("Next Pass: "+nextrisetime_tzstr+" ("+remainingtimeleftstr+")")
        #if the ISS is not above the horizon, refresh GTK only once it's time for the icon to be visible
        if self.passingstatus != "ISS Zarya is above the horizon!":
            self.setrefresh(remainingtimeleft.seconds*1000+100)
            #self.setrefresh(1000)
            self.curpass.get_child().set_text(self.passingstatus)
            self.curpassdur.hide()
            self.curpassrise.hide()
            self.curpassset.hide()
            if self.isiconhidden == True:
                self.hideicon()
        else:
            #otherwise, refresh once a second to show the timers ticking in the menu
            #test if the menu is active instead of always running like in this example
            ####MISSING CODE HERE##### DONT KNOW HOW TO DO IT, SO JUST SETTING TO 1 SEC
            self.setrefresh(1000)
        #for when setrefresh calls this function
        return True
if __name__ == '__main__':
issindicator = indicator()
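Regarding question 3, I suspect the repeated hours/minutes/seconds arithmetic near the bottom of checkiss() could be collapsed into one small helper, something along these lines (an untested sketch; format_hms is just a name I made up):
def format_hms(delta):
    # format a datetime.timedelta as H:MM:SS
    total = int(delta.total_seconds())
    hours, remainder = divmod(total, 3600)
    minutes, seconds = divmod(remainder, 60)
    return '%d:%02d:%02d' % (hours, minutes, seconds)

# e.g. curdurationstr = format_hms(curduration)
#      remainingtimeleftstr = format_hms(remainingtimeleft)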
I am developing a web service using Python, and I want to filter out the videos which cannot be played outside of the YouTube page.
For example, on this link [https://www.youtube.com/v/SC3pupLn-_8?version=3&f=videos&app=youtube_gdata] you have to watch the video on the YouTube page. Is there any way to filter out which videos belong to that category, so that I choose only those videos which can be played without any restriction?
import gdata.youtube.service
#------------------------------------------------------------------------------
yt_service = gdata.youtube.service.YouTubeService()
yt_service.developer_key = 'YOUR API DEVELOPER KEY'
count=0
def PrintEntryDetails(entry):
    if entry.media.category[0].text == "Movies" :
        global count
        count = count + 1
        if entry.noembed != None:
            print 'Video embedding not enable: %s' % entry.noembed.text
        else :
            print "entry embedable"
        print 'Video title: %s' % entry.media.title.text
        print 'Video category: %s' % entry.media.category[0].text
        print 'Video published on: %s ' % entry.published.text
        print 'Video description: %s' % entry.media.description.text
        if entry.media.private != None :
            print entry.media.private.text
        else :
            print "Right not found"
        if entry.media.keywords :
            print 'Video tags: %s' % entry.media.keywords.text
        print 'Video watch page: %s' % entry.media.player.url
        print 'Video flash player URL: %s' % entry.GetSwfUrl()
        print 'Video duration: %s' % entry.media.duration.seconds
        # For video statistics
        if entry.statistics :
            print 'Video view count: %s' % entry.statistics.view_count
        # For video rating
        if entry.rating :
            print 'Video rating: %s' % entry.rating.average
        # show alternate formats
        for alternate_format in entry.media.content:
            if 'isDefault' not in alternate_format.extension_attributes:
                print 'Alternate format: %s | url: %s ' % (alternate_format.type,
                                                           alternate_format.url)
        # show thumbnails
        for thumbnail in entry.media.thumbnail:
            print 'Thumbnail url: %s' % thumbnail.url
        print "#########################################"
    else :
        pass

def PrintVideoFeed(feed):
    counter = 0
    for entry in feed.entry:
        PrintEntryDetails(entry)
        counter = counter+1
    #print counter

def SearchAndPrint():
    max = 20
    yt_service = gdata.youtube.service.YouTubeService()
    query = gdata.youtube.service.YouTubeVideoQuery()
    # OrderBy must be one of: published viewCount rating relevance
    query.orderby = "relevance"
    query.racy = 'include'
    query.author = "tseries"
    query.max_results = 50
    index = 1
    for i in (range(max)):
        query.start_index = index
        index = index + 50
        query.format = "5"
        feed = yt_service.YouTubeQuery(query)
        PrintVideoFeed(feed)
SearchAndPrint()
print "**********************************************************"
print "Total Movies"
print count
The general answer is to use the format=5 parameter when performing your search: https://developers.google.com/youtube/2.0/reference#formatsp
That will filter out videos from the search results that have embedding disabled completely.
That being said, there are videos that have embedding enabled but only are playable in certain regions or when embedded on certain domains.
To handle the regional restrictions, you should set the restriction= parameter to something appropriate for your use case, as described at https://developers.google.com/youtube/2.0/reference#restrictionsp
There is no way to exclude videos from search results that have domain-level embed restrictions, though.
This blog post has more general information about embedded playback restrictions: http://apiblog.youtube.com/2011/12/understanding-playback-restrictions.html
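If it helps, a minimal sketch of a search that applies both parameters with the gdata Python client used in the question (the search terms and IP address are placeholders, and I'm setting restriction by dictionary key since the Query objects are dict-like; adjust to however your client version exposes it):
import gdata.youtube.service

yt_service = gdata.youtube.service.YouTubeService()
query = gdata.youtube.service.YouTubeVideoQuery()
query.vq = 'some search terms'
query.format = '5'                      # only videos that allow embedded playback
query['restriction'] = '203.0.113.10'   # viewer IP, or a country code such as 'DE'

feed = yt_service.YouTubeQuery(query)
for entry in feed.entry:
    print entry.media.title.text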
If I understand your question, you're looking for the app:control/yt:state tag. For example, if a video is restricted to playing on the YouTube site, but you're trying to access it through an embedded URL or through a non-browser, you'll get back something like this:
<app:control>
  <yt:state name="restricted" reasonCode="limitedSyndication">Syndication of this video was restricted.</yt:state>
</app:control>
You can see this in your entry object as:
entry.control.FindExtensions('state')[0].attributes
Which will be:
{'name': 'restricted', 'reasonCode': 'limitedSyndication'}
Of course you need to make this more robust—control may be None, it may have no state tags, etc. But you get the idea.
I don't think you can directly search on the presence or absence or particular value of state, but you can use the fields parameter to post-filter the results before retrieving them. The docs actually give the example of returning only "entries that do not restrict playback in any way, which is indicated by the presence of the <yt:state> element":
entry[not(app:control/yt:state)]
I've left off the (title,media:group) part because you want the default tags, not a limited set.
For some reason, the fields parameter doesn't always get sent. This may be because, as the docs say, "The fields parameter is currently used for experimental features only." But anyway, you can just retrieve everything and filter on control yourself.
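For example, a post-filter along those lines might look roughly like this (a sketch; the helper name is made up, and the checks simply mirror the entry.control handling shown above):
def is_unrestricted(entry):
    # no app:control element at all means no playback restriction was reported
    if entry.control is None:
        return True
    # otherwise treat any yt:state extension as a restriction
    return len(entry.control.FindExtensions('state')) == 0

playable = [entry for entry in feed.entry if is_unrestricted(entry)]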