I am trying to modify statecounts.py from the pyral examples to pull the Defined/Accepted/In Progress/Completed story counts for each release, but I keep getting a query exception error. Is there anything that I have missed?
state = 'ScheduleState'
state_values = rally.getAllowedValues('HierarchicalRequirement', state)
output = []
for rel in sorted(release_names):
    for state_value in sorted(state_values):
        response = rally.get(artifact_type, fetch="FormattedID",
                             query='(Release.Name= %s ) AND %s = %s' % (rel, state, state_value),
                             projectScopeUp=False, projectScopeDown=False)
        output.append("%20s : %16s : %5d" % (rel, state, state_value, response.resultCount))
Thanks!!!
If you are ANDing multiple terms it's easiest to create a list of terms like so:
query=['Release.Name = %s' % (rel), '%s = %s' % (state, state_value)]
Also, whitespace matters. You were missing a space before the first =.
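Putting that together with the loop from the question, a minimal sketch (note the result line also needs one placeholder per value, and quoting the value inside each criterion helps when a release name contains spaces):

for rel in sorted(release_names):
    for state_value in sorted(state_values):
        criteria = ['Release.Name = "%s"' % rel, '%s = "%s"' % (state, state_value)]
        response = rally.get(artifact_type, fetch="FormattedID", query=criteria,
                             projectScopeUp=False, projectScopeDown=False)
        output.append("%20s : %16s : %16s : %5d" % (rel, state, state_value, response.resultCount))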
Currently writing a program using an API from MarketStack.com. This is for school, so I am still learning.
I am writing a stock prediction program using Python on PyCharm and I have written the connection between the program and the API without issues. So, I can certainly get the High, Name, Symbols, etc. What I am trying to do now is call a range of dates. The API says I can call up to 30 years of historical data, so I want to call all 30 years for a date that is entered by the user. Then the program will average the high on that date in order to give a trend prediction.
So, the problem I am having is calling more than one date. As I said, I want to call that date in each of the 30 years, and then I will do the math, etc.
Can someone help me call a range of dates? I tried installing Pandas and that wasn't being accepted by PyCharm for some reason. Any help is greatly appreciated.
import tkinter as tk
import requests

# callouts for window size
HEIGHT = 650
WIDTH = 600

# function for response
def format_response(selected_stock):
    try:
        name1 = selected_stock['data']['name']
        symbol1 = selected_stock['data']['symbol']
        high1 = selected_stock['data']['eod'][1]['high']
        final_str = 'Name: %s \nSymbol: %s \nEnd of Day ($ USD): %s' % (name1, symbol1, high1)
    except:
        final_str = 'There was a problem retrieving that information'
    return final_str

# function linking to API
def stock_data(entry):
    params = {'access_key': 'xxx'}
    response = requests.get('http://api.marketstack.com/v1/tickers/' + entry + '/' + 'eod', params=params)
    selected_stock = response.json()
    label2['text'] = format_response(selected_stock)

# function for response
def format_response2(stock_hist_data):
    try:
        name = stock_hist_data['data']['name']
        symbol = stock_hist_data['data']['symbol']
        high = stock_hist_data['data']['eod'][1]['high']
        name3 = stock_hist_data['data']['name']
        symbol3 = stock_hist_data['data']['symbol']
        high3 = stock_hist_data['data']['eod'][1]['high']
        final_str2 = 'Name: %s \nSymbol: %s \nEnd of Day ($ USD): %s' % (name, symbol, high)
        final_str3 = '\nName: %s \nSymbol: %s \nEnd of Day ($ USD): %s' % (name3, symbol3, high3)
    except:
        final_str2 = 'There was a problem retrieving that information'
        final_str3 = 'There was a problem retrieving that information'
    return final_str2 + final_str3

# function for response in lower window
def stock_hist_data(entry2):
    params2 = {'access_key': 'xxx'}
    response2 = requests.get('http://api.marketstack.com/v1/tickers/' + entry2 + '/' + 'eod', params=params2)
    hist_data = response2.json()
    label4['text'] = format_response2(hist_data)
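For the range-of-dates part of the question, one possible direction is sketched below. It assumes MarketStack's end-of-day endpoint accepts date_from, date_to and limit parameters (please verify against the API docs) and simply averages the highs client-side:

def average_high(symbol, date_from, date_to):
    # Assumption: the eod endpoint takes date_from/date_to/limit; check the MarketStack docs.
    params = {
        'access_key': 'xxx',      # same placeholder key as above
        'date_from': date_from,   # e.g. '1993-05-14'
        'date_to': date_to,       # e.g. '2023-05-14'
        'limit': 1000,
    }
    url = 'http://api.marketstack.com/v1/tickers/' + symbol + '/eod'
    data = requests.get(url, params=params).json()
    highs = [bar['high'] for bar in data['data']['eod']]
    return sum(highs) / len(highs) if highs else None

To hit the same calendar date across 30 years you could call this once per year, or fetch the whole range once and filter the returned dates yourself.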
I have used the pyral API
rally = Rally(server, apikey=api_key, workspace=workspace, project=project)
to connect to Rally.
Then,
testfolders = rally.get('TestFolder', fetch=True, query=query_criteria)
I need to extract all the test cases defined in Rally as:
TF*****
-TC1
-TC2
-TC3
I need to get all the test cases from the formattedID returned from
testfolders.
for tc in testfolders:
    print(tc.Name)
    print(tc.FormattedID)  # this represents the TF***
    tc_link = tc._ref  # URL for the TCs: https://rally1.rallydev.com/slm/webservice

query_criteria = 'TestSets = "%s"' % tp_name
testfolders = rally.get('TestFolder', fetch=True, query=query_criteria)
I have tried different combinations of queries to extract the test cases.
The tc._ref is actually
https://rally1.rallydev.com/slm/webservice/v2.0/TestFolder/XXXXX/TestCases
and when I tried to open it in a browser it returned the entire list of test cases.
I need to extract that test case list.
I tried accessing the data with urllib in Python, but it said unauthorized access. I think that if I am in the same session, authorization shouldn't be an issue, since the rally object is already instantiated.
Any help on this will be much appreciated!
Thanks in advance
rally = Rally(server, apikey=api_key, workspace=workspace, project=project)
tp_name = "specific test case "
query_criteria = 'FormattedID = "%s"' % tp_name
testfolders = rally.get('TestFolder', fetch=True, query=query_criteria)
for tc in testfolders:
    print(tc.Name)
    print(tc.FormattedID)
    query_criteria = 'TestSets = "%s"' % tc._ref
    response = rally.get('TestCase', fetch=True, query=query_criteria)
    print(response)
422 Could not read: could not read all instances of class com.f4tech.slm.domain.TestCase for class TestCase
It's not clear what you are trying to achieve, but please take a look at the code below. It will probably solve your issue:
query = 'FormattedID = %s'
test_folder_req = rally.get('TestFolder', fetch=True, projectScopeDown=True,
                            query=query % folder_id)  # folder_id == "TF111"
if test_folder_req.resultCount == 0:
    print('Rally returned nothing for folder: %s' % folder_id)
    sys.exit(1)
test_folder = test_folder_req.next()
test_cases = test_folder.TestCases
print('Start working with %s' % folder_id)
print('Test Folder %s contains %s TestCases' % (folder_id, len(test_cases)))
for test_case in test_cases:
    print(test_case.FormattedID)
    print(test_case.Name)
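If you do want to follow tc._ref directly (the urllib attempt mentioned in the question), the request has to carry the API key itself, since the pyral session is not reused automatically. A minimal sketch, assuming Rally accepts the key in a ZSESSIONID header and the usual fetch/pagesize query parameters (verify against the WSAPI docs):

import requests

resp = requests.get(tc._ref,
                    headers={'ZSESSIONID': api_key},  # assumption: API key sent as ZSESSIONID
                    params={'fetch': 'FormattedID,Name', 'pagesize': 200})
for item in resp.json()['QueryResult']['Results']:
    print(item['FormattedID'], item['Name'])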
I need to revise some Python code (given by a programmer) which I would like to use for a genealogy project of mine. I am very new to Python and am just starting to be able to read code, so I do not know how to fix the following.
I get the following error message when executing the code:
self['gebort'] += ", Taufe: %s" % place.get_title()
KeyError: 'gebort'
The issue is that for one of the persons in my database only the date of baptism (here: Taufe) is known, but not the date of birth. This is where the code fails.
This is the relevant snippet of the code basis:
birth_ref = person.get_birth_ref()
if birth_ref:
    birth = database.get_event_from_handle(birth_ref.ref)
    self['gjahr'] = birth.get_date_object().get_year()
    if self['gjahr'] >= 1990:
        self['mindj'] = True
    self['gebdat'] = dd.display(birth.get_date_object())
    self['plaingebdat'] = self['gebdat']
    place_handle = birth.get_place_handle()
    self['geborthandle'] = place_handle
    place = database.get_place_from_handle(place_handle)
    if place:
        self['gebort'] = place.get_title()
        self['plaingebort'] = self['gebort']
for eventref in person.get_event_ref_list():
    event = database.get_event_from_handle(eventref.ref)
    if event.get_type() in (gramps.gen.lib.EventType.CHRISTEN, gramps.gen.lib.EventType.BAPTISM):
        self['gebdat'] += ", Taufe: %s" % dd.display(event.get_date_object())
        place_handle = event.get_place_handle()
        place = database.get_place_from_handle(place_handle)
        if place:
            self['gebort'] += ", Taufe: %s" % place.get_title()
Now, I do not know how to add exception handling for the case where no birth date/place is found, so that the code would not output any birth values. Would somebody be able to point me in the right direction?
Instead of:
dict_name['gebort'] += ", Taufe: %s" % place.get_title()
you could write
dict_name['gebort'] = dict_name.get('gebort', '') + ", Taufe: %s" % place.get_title()
As already written, naming something self is not clever (or is the code above from a class derived from dict?). Using .get you can define what is returned in case there is no key of that name, in the example an empty string.
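Applied to the snippet from the question, and assuming self really is dict-like so that .get is available (that is an assumption about the surrounding class), the baptism branch could look like the sketch below; gebdat has the same problem as gebort, so both lines use the fallback:

if event.get_type() in (gramps.gen.lib.EventType.CHRISTEN, gramps.gen.lib.EventType.BAPTISM):
    # fall back to an empty string when no birth date/place was recorded
    self['gebdat'] = self.get('gebdat', '') + ", Taufe: %s" % dd.display(event.get_date_object())
    place_handle = event.get_place_handle()
    place = database.get_place_from_handle(place_handle)
    if place:
        self['gebort'] = self.get('gebort', '') + ", Taufe: %s" % place.get_title()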
I am developing a web service using Python and I want to filter out the videos which cannot be played outside of the YouTube page.
For example, on this link [https://www.youtube.com/v/SC3pupLn-_8?version=3&f=videos&app=youtube_gdata]
you have to watch the video on the YouTube page. Is there any way to filter which videos belong to this category, so that I choose only those videos which can be played without any restriction?
import gdata.youtube.service
#------------------------------------------------------------------------------
yt_service = gdata.youtube.service.YouTubeService()
yt_service.developer_key = 'YOUR API DEVELOPER KEY'
count = 0

def PrintEntryDetails(entry):
    if entry.media.category[0].text == "Movies":
        global count
        count = count + 1
        if entry.noembed != None:
            print 'Video embedding not enable: %s' % entry.noembed.text
        else:
            print "entry embedable"
        print 'Video title: %s' % entry.media.title.text
        print 'Video category: %s' % entry.media.category[0].text
        print 'Video published on: %s ' % entry.published.text
        print 'Video description: %s' % entry.media.description.text
        if entry.media.private != None:
            print entry.media.private.text
        else:
            print "Right not found"
        if entry.media.keywords:
            print 'Video tags: %s' % entry.media.keywords.text
        print 'Video watch page: %s' % entry.media.player.url
        print 'Video flash player URL: %s' % entry.GetSwfUrl()
        print 'Video duration: %s' % entry.media.duration.seconds
        # For video statistics
        if entry.statistics:
            print 'Video view count: %s' % entry.statistics.view_count
        # For video rating
        if entry.rating:
            print 'Video rating: %s' % entry.rating.average
        # show alternate formats
        for alternate_format in entry.media.content:
            if 'isDefault' not in alternate_format.extension_attributes:
                print 'Alternate format: %s | url: %s ' % (alternate_format.type,
                                                           alternate_format.url)
        # show thumbnails
        for thumbnail in entry.media.thumbnail:
            print 'Thumbnail url: %s' % thumbnail.url
        print "#########################################"
    else:
        pass

def PrintVideoFeed(feed):
    counter = 0
    for entry in feed.entry:
        PrintEntryDetails(entry)
        counter = counter + 1
    #print counter

def SearchAndPrint():
    max = 20
    yt_service = gdata.youtube.service.YouTubeService()
    query = gdata.youtube.service.YouTubeVideoQuery()
    # OrderBy must be one of: published viewCount rating relevance
    query.orderby = "relevance"
    query.racy = 'include'
    query.author = "tseries"
    query.max_results = 50
    index = 1
    for i in range(max):
        query.start_index = index
        index = index + 50
        query.format = "5"
        feed = yt_service.YouTubeQuery(query)
        PrintVideoFeed(feed)

SearchAndPrint()
print "**********************************************************"
print "Total Movies"
print count
The general answer is to use the format=5 parameter when performing your search: https://developers.google.com/youtube/2.0/reference#formatsp
That will filter out videos from the search results that have embedding disabled completely.
That being said, there are videos that have embedding enabled but only are playable in certain regions or when embedded on certain domains.
To handle the regional restrictions, you should set the restriction= parameter to something appropriate for your use case, as described at https://developers.google.com/youtube/2.0/reference#restrictionsp
There is no way to exclude videos from search results that have domain-level embed restrictions, though.
This blog post has more general information about embedded playback restrictions: http://apiblog.youtube.com/2011/12/understanding-playback-restrictions.html
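As a rough sketch of how those two knobs could be applied to the query from the question: gdata's Query type is, as far as I know, a dict subclass, so extra URL parameters such as restriction can be set by key, but treat that as an assumption and check the gdata docs for your version.

query = gdata.youtube.service.YouTubeVideoQuery()
query.format = '5'            # drop videos that have embedding disabled entirely
query['restriction'] = 'DE'   # example region code; assumption: extra params can be set by key
feed = yt_service.YouTubeQuery(query)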
If I understand your question, you're looking for the app:control/yt:state tag. For example, if a video is restricted to playing on the YouTube site, but you're trying to access it through an embedded URL or through a non-browser, you'll get back something like this:
<app:control>
<yt:state name="restricted" reasonCode="limitedSyndication">Syndication of this video was restricted.</yt:state>
</app:control>
You can see this in your entry object as:
entry.control.FindExtensions('state')[0].attributes
Which will be:
{'name': 'restricted', 'reasonCode': 'limitedSyndication'}
Of course you need to make this more robust—control may be None, it may have no state tags, etc. But you get the idea.
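For example, a defensive check along those lines might look like the sketch below (attribute names are taken from this answer and the question's code, so double-check them against your gdata version):

def is_unrestricted(entry):
    # embedding disabled outright
    if entry.noembed is not None:
        return False
    control = getattr(entry, 'control', None)
    if control is None:
        return True
    # any restricted state, e.g. limitedSyndication, means the video
    # may not play outside youtube.com
    for state in control.FindExtensions('state'):
        if state.attributes.get('name') == 'restricted':
            return False
    return True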
I don't think you can directly search on the presence or absence or particular value of state, but you can use the fields parameter to post-filter the results before retrieving them. The docs actually give the example of returning only "entries that do not restrict playback in any way, which is indicated by the presence of the <yt:state> element":
entry[not(app:control/yt:state)]
I've left off the (title,media:group) part because you want the default tags, not a limited set.
For some reason, the fields parameter doesn't always get sent. This may be because, as the docs say, "The fields parameter is currently used for experimental features only." But anyway, you can just retrieve everything and filter on control yourself.
This script reads from a source with lines consisting of artist names followed by a parenthesis with information about whether the artist cancelled and which country they come from.
A normal line may look like:
Odd Nordstoga (NO) (Cancelled), 20-08-2012, Blå
As I import the data I decode it from UTF-8, and this works fine. Uncommenting the second comment in the else block of the remove_extra() method shows that all variables are of type unicode.
However, when a value is returned and put into another variable and the value of this is tested, the majority of the variables seem to be of NoneType.
Why does this happen? And how can it be corrected? The error seems to happen between the method return and the assignment to the new variable.
# -*- coding: utf-8 -*-
import re

f1 = open("oya_artister_2011.csv")
artister = []
navnliste = []
PATTERN = re.compile(r"(.*)(\(.*\))")
TEST_PAT = re.compile(r"\(.*\)")

def remove_extra(tekst):
    if re.search(PATTERN, tekst) > 1:
        after = re.findall(PATTERN, tekst)[0][0]
        #print "tekst is: %s " % tekst
        #print "and of type: %s" % type(tekst)
        remove_extra(after)
    else:
        #print "will return: ", tekst
        #print "of type: %s" % type(tekst)
        return tekst

for line in f1:
    navn, _rest = line.split(",", 1)
    navn = navn.decode("utf-8")
    artister.append(navn)

for artist in artister:
    ny_artist = remove_extra(artist)
    #print "%s" % ny_artist
    print "of type: %s" % type(ny_artist)
Try
return remove_extra(after)
instead of just
remove_extra(after)
Without the return, the recursive branch falls off the end of the function and Python implicitly returns None, which is why ny_artist ends up as NoneType for every artist whose name contains a parenthesised part.
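For completeness, a sketch of the helper with the return propagated (the search result can simply be tested for truthiness):

def remove_extra(tekst):
    if re.search(PATTERN, tekst):
        after = re.findall(PATTERN, tekst)[0][0]
        return remove_extra(after)   # pass the recursive result back up
    return tekst                     # base case: nothing left to strip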