I am creating an integration tool to connect Rally with my web application. I decided to use Python, running on my web server, to retrieve content from Rally.
In one scenario, I need to get the last modified task from a story. I don't know the task's ID, name, or anything else, but I do know the story name. Using the story name, how can I get the last modified task(s)?
Here's an example of how to set up Kyle's query in pyral:
from pyral import Rally

server = "rally1.rallydev.com"
user = "user@company.com"
password = "topsecret"
workspace = "My Workspace"
project = "My Project"

rally = Rally(server, user, password, workspace=workspace, project=project)
rally.enableLogging("rally.history.showtasks")

fields = "FormattedID,State,Name,WorkProduct,LastUpdateDate"
criterion = 'WorkProduct.Name = "My Tasks User Story"'

response = rally.get('Task', fetch=fields, query=criterion,
                     order="LastUpdateDate Desc", pagesize=200, limit=400)

# Results come back newest-first, so the first item is the most recently
# updated task.
most_current_task = next(response)
print("%-8.8s %-52.52s %s" % (most_current_task.FormattedID,
                              most_current_task.Name,
                              most_current_task.State))
I'm not super familiar with how to use pyral, but you should be able to get what you'd like by querying the Task WSAPI endpoint, like so:
/slm/webservice/1.40/task.js?query=(WorkProduct.Name = "Story Name")&order=LastUpdateDate DESC
Now you just need to get pyral to generate that request. :-)
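If you want to sanity-check that raw request outside pyral, here's a minimal sketch using the requests library (the credentials and story name below are placeholders, not from the question):

import requests

url = "https://rally1.rallydev.com/slm/webservice/1.40/task.js"
params = {
    "query": '(WorkProduct.Name = "My Tasks User Story")',  # placeholder story name
    "order": "LastUpdateDate DESC",
    "fetch": "FormattedID,Name,State,LastUpdateDate",
    "pagesize": 1,
}
resp = requests.get(url, params=params, auth=("user@company.com", "topsecret"))  # placeholder credentials
tasks = resp.json()["QueryResult"]["Results"]  # WSAPI 1.x wraps results in a QueryResult envelope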
I have successfully compiled and run a Django REST app consuming the CocktailDB API. On the local server, when I request http://127.0.0.1:8000/api/ I get
{
"ingredients": "http://127.0.0.1:8000/api/ingredients/",
"drinks": "http://127.0.0.1:8000/api/drinks/",
"feeling-lucky": "http://127.0.0.1:8000/api/feeling-lucky/"
}
But when I go to one of the links mentioned in the json result above, for example:
http://127.0.0.1:8000/api/ingredients/
I get an empty [] with a status of 200 OK!
I need an endpoint to GET drinks and ingredients before I can drill down into specific details using Angular.
I implemented a helper folder in the app with the API function as below:
import json

class TheCoctailDBAPI:
    THECOCTAILDB_URL = 'https://www.thecocktaildb.com/api/json/v1/1/'

    async def __load_coctails_for_drink(self, drink, session):
        # TheCocktailDB exposes up to 15 ingredient slots per drink
        # (strIngredient1 .. strIngredient15).
        for i in range(1, 16):
            ingredientKey = 'strIngredient' + str(i)
            ingredientName = drink[ingredientKey]
            if not ingredientName:
                break
            if ingredientName not in self.ingredients:
                async with session.get(
                        f'{TheCoctailDBAPI.THECOCTAILDB_URL}search.php?i={ingredientName}') \
                        as response:
                    result = json.loads(await response.text())
                    self.ingredients[ingredientName] = result['ingredients'][0]
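For what it's worth, here is a minimal sketch (not from the question) of how a coroutine like this is typically driven, assuming aiohttp since the snippet awaits session.get(...):

import asyncio
import json
import aiohttp

async def main():
    api = TheCoctailDBAPI()
    api.ingredients = {}  # the snippet reads self.ingredients but never shows it initialized
    async with aiohttp.ClientSession() as session:
        # search.php?s=... looks up drinks by name; 'margarita' is just an example
        async with session.get(f'{TheCoctailDBAPI.THECOCTAILDB_URL}search.php?s=margarita') as resp:
            drinks = json.loads(await resp.text())['drinks']
        for drink in drinks:
            # the double-underscore method name is mangled, hence the awkward attribute access
            await api._TheCoctailDBAPI__load_coctails_for_drink(drink, session)

asyncio.run(main())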
What was your expected response?
Add the function that is called by this API, as well as the DB settings, to the question so that we can properly help you.
Are you sure that you are connecting and pulling data from a remote location? It looks to me like your local DB is empty, so the API has no data to return.
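A quick way to test that hypothesis from the Django shell (the app and model names below are hypothetical, since the question doesn't show them):

# python manage.py shell
from api.models import Ingredient, Drink  # hypothetical app/model names

print(Ingredient.objects.count())  # 0 here would explain the empty [] response
print(Drink.objects.count())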
I'm trying to create instances using the openstacksdk Python API. Everything is OK, but even when I use:
conn2 = conn.connect_as_project(proj.name)
server = conn2.create_server(......)
the server is created under the admin project, not the project named in proj.name. I even tried project.id, but it didn't work.
Finally instead of:
conn2 = conn.connect_as_project(project_id)
I used:
import openstack

conn2 = openstack.connection.Connection(
    region_name='RegionOne',
    auth=dict(
        auth_url='http://controller:5000/v3',
        username=u_name,
        password=password,
        project_id=project_id,
        user_domain_id='default'),
    compute_api_version='2',
    identity_interface='internal')
and it worked.
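For reference, a hedged sketch of a create_server call once conn2 is scoped to the right project (every name below is illustrative, not from the question):

server = conn2.create_server(
    name='test-instance',   # hypothetical server name
    image='cirros-0.5.2',   # hypothetical image name or ID
    flavor='m1.small',      # hypothetical flavor
    network='private-net',  # hypothetical network
    wait=True,              # block until the server is ACTIVE
)
print(server.id, server.status)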
I did this just fine... the only difference is that the project is a new one, and I had to grant the roles to the user I was using.
It was something like this:
project = conn.create_project(name=name, domain_id='default')

user_id = conn.current_user_id
user = conn.get_user(user_id)
roles = conn.list_roles()
for r in roles:
    conn.identity.assign_project_role_to_user(project.id, user.id, r.id)

# Make sure the roles are correctly assigned to the user before proceeding.
conn2 = conn.connect_as_project(project.name)
After that, anything created (servers, keypairs, networks, etc) is under the new project.
I use Python Social Auth - Django to log in my users.
My backend is Microsoft, so I can use Microsoft Graph, but I don't think that's relevant.
Python Social Auth deals with authentication but now I want to call the API and for that, I need a valid access token.
Following the use cases I can get to this:
social = request.user.social_auth.get(provider='azuread-oauth2')
response = self.get_json('https://graph.microsoft.com/v1.0/me',
headers={'Authorization': social.extra_data['token_type'] + ' '
+ social.extra_data['access_token']})
But the access token is only valid for 3600 seconds, so I need to refresh it. I guess I could do that manually, but there must be a better solution.
How can I get the access_token refreshed?
.get_access_token(strategy) refreshes the token automatically if it has expired. You can use it like this:
from social_django.utils import load_strategy
#...
social = request.user.social_auth.get(provider='google-oauth2')
access_token = social.get_access_token(load_strategy())
Using load_strategy() from social.apps.django_app.utils:
from social.apps.django_app.utils import load_strategy

social = request.user.social_auth.get(provider='azuread-oauth2')
strategy = load_strategy()
social.refresh_token(strategy)
Now the updated access_token can be retrieved from social.extra_data['access_token'].
The best approach is probably to check if it needs to be updated (customized for AzureAD Oauth2):
import time

from social_django.utils import load_strategy

def get_azuread_oauth2_token(user):
    social = user.social_auth.get(provider='azuread-oauth2')
    if social.extra_data['expires_on'] <= int(time.time()):
        strategy = load_strategy()
        social.refresh_token(strategy)
    return social.extra_data['access_token']
This is based on the method get_auth_token from AzureADOAuth2. I don't think this method is accessible outside the pipeline; please answer this question if there is any way to do it.
Updates
Update 1 - 20/01/2017
Following an Issue to request an extra data parameter with the time of the access token refresh, it is now possible to check if the access_token needs to be updated in every backend.
In future versions (>0.2.1 for the social-auth-core) there will be a new field in extra data:
'auth_time': int(time.time())
And so this works:
def get_token(user, provider):
social = user.social_auth.get(provider=provider)
if (social.extra_data['auth_time'] + social.extra_data['expires']) <= int(time.time()):
strategy = load_strategy()
social.refresh_token(strategy)
return social.extra_data['access_token']
Note: According to the OAuth 2 RFC, all responses SHOULD (it's a RECOMMENDED parameter) provide an expires_in, but for most backends (including azuread-oauth2) this value is saved as expires. Be careful to understand how your backend behaves!
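Given that discrepancy, a small defensive sketch that checks both keys rather than assuming one:

expires = social.extra_data.get('expires') or social.extra_data.get('expires_in')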
An issue on this exists, and I will update the answer with the relevant info when it's available.
Update 2 - 17/02/17
Additionally, there is a method in UserMixin called access_token_expired (code) that can be used to assert whether the token is valid or not (note: this method doesn't guard against race conditions, as pointed out in this answer by @SCasey).
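A minimal sketch of using it (keeping that race condition in mind):

from social_django.utils import load_strategy

social = user.social_auth.get(provider='azuread-oauth2')
if social.access_token_expired():
    social.refresh_token(load_strategy())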
Update 3 - 31/05/17
In Python Social Auth - Core v1.3.0 get_access_token(self, strategy) was introduced in storage.py.
So now:
from social_django.utils import load_strategy

social = request.user.social_auth.get(provider='azuread-oauth2')
response = self.get_json('https://graph.microsoft.com/v1.0/me',
                         headers={'Authorization': '%s %s' % (
                             social.extra_data['token_type'],
                             social.get_access_token(load_strategy()))})
Thanks @damio for pointing it out.
@NBajanca's update is almost correct for version 1.0.1.
extra_data['expires_in']
is now
extra_data['expires']
So the code is:
def get_token(user, provider):
social = user.social_auth.get(provider=provider)
if (social.extra_data['auth_time'] + social.extra_data['expires']) <= int(time.time()):
strategy = load_strategy()
social.refresh_token(strategy)
return social.extra_data['access_token']
I'd also recommend subtracting an arbitrary amount of time from that calculation so that we don't run into a race condition where we check the token 0.01s before expiry and then get an error because the request arrives after expiry. I like a 10-second buffer just to be safe, but it's probably overkill:
def get_token(user, provider):
social = user.social_auth.get(provider=provider)
if (social.extra_data['auth_time'] + social.extra_data['expires'] - 10) <= int(time.time()):
strategy = load_strategy()
social.refresh_token(strategy)
return social.extra_data['access_token']
EDIT
@NBajanca points out that expires_in is technically correct per the OAuth2 docs. It seems that for some backends this may work. The code above using expires is what works with provider="google-oauth2" as of v1.0.1.
Please help me understand what I must be doing wrong here.
I run a small video game on Google App Engine, and within the game we have an internal messaging service. Each message has a list of status keys that keep track of whether a player has read a message or not.
My problem is that when I attempt to delete a list of keys, only one of the entities is removed from the datastore, regardless of the number of keys in the list.
from google.appengine.ext import db

class Game_Message(db.Model):
    sender = db.StringProperty()
    recipients = db.ListProperty(str)
    status_keys = db.ListProperty(db.Key)
    payload = db.TextProperty()

    def add_status_keys(self):
        self.status_keys = []
        for user_id in self.recipients:
            gms = Game_Message_Status.create(self, user_id)
            self.status_keys.append(gms.key())

    def remove_status_keys(self):
        db.delete(self.status_keys)
What I have found is that calling db.delete multiple times does delete all the entities, but I don't understand why.
For example, this works correctly:
def remove_status_keys(self):
db.delete(self.status_keys)
db.delete(self.status_keys)
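A diagnostic sketch (an assumption, not a confirmed fix) that might narrow this down, since db.get returns None for keys that don't resolve to an entity:

def remove_status_keys(self):
    entities = [e for e in db.get(self.status_keys) if e is not None]
    print('resolved %d of %d keys' % (len(entities), len(self.status_keys)))
    db.delete(entities)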
I'm using the python-ldap library to connect to our LDAP server and run queries. The issue I'm running into is that despite setting a size limit on the search, I keep getting SIZELIMIT_EXCEEDED errors on any query that would return too many results. I know that the query itself is working because I will get a result if the query returns a small subset of users. Even if I set the size limit to something absurd, like 1, I'll still get a SIZELIMIT_EXCEEDED on those bigger queries. I've pasted a generic version of my query below. Any ideas as to what I'm doing wrong here?
result = self.ldap.search_ext_s(self.base, self.scope, '(personFirstMiddle=<value>*)', sizelimit=5)
When the LDAP client requests a size-limit, that is called a 'client-requested' size limit. A client-requested size limit cannot override the size-limit set by the server. The server may set a size-limit for the server as a whole, for a particular authorization identity, or for other reasons - whichever the case, the client may not override the server size limit. The search request may have to be issued in multiple parts using the simple paged results control or the virtual list view control.
Here's a Python 3 implementation that I came up with after heavily editing what I found here and in the official documentation. At the time of writing, it works with the pip3 package python-ldap, version 3.2.0.
import ldap
from ldap.controls import SimplePagedResultsControl

def get_list_of_ldap_users():
    hostname = "google.com"
    username = "username_here"
    password = "password_here"
    base = "dc=google,dc=com"

    print(f"Connecting to the LDAP server at '{hostname}'...")
    connect = ldap.initialize(f"ldap://{hostname}")
    connect.set_option(ldap.OPT_REFERRALS, 0)
    connect.simple_bind_s(username, password)

    search_flt = "(personFirstMiddle=<value>*)"  # get all users with a specific middle name
    page_size = 500  # how many users to search for in each page; this depends on the server's maximum setting (default is 1000)
    searchreq_attrlist = ["cn", "sn", "name", "userPrincipalName"]  # change these to the attributes you care about

    req_ctrl = SimplePagedResultsControl(criticality=True, size=page_size, cookie='')
    msgid = connect.search_ext(base=base, scope=ldap.SCOPE_SUBTREE, filterstr=search_flt,
                               attrlist=searchreq_attrlist, serverctrls=[req_ctrl])

    total_results = []
    pages = 0
    while True:  # loop over all of the pages using the same cookie, otherwise the search will fail
        pages += 1
        rtype, rdata, rmsgid, serverctrls = connect.result3(msgid)
        for user in rdata:
            total_results.append(user)

        pctrls = [c for c in serverctrls if c.controlType == SimplePagedResultsControl.controlType]
        if pctrls:
            if pctrls[0].cookie:  # copy the cookie from the response control to the request control
                req_ctrl.cookie = pctrls[0].cookie
                msgid = connect.search_ext(base=base, scope=ldap.SCOPE_SUBTREE, filterstr=search_flt,
                                           attrlist=searchreq_attrlist, serverctrls=[req_ctrl])
            else:
                break
        else:
            break
    return total_results
This will return a list of all users but you can edit it as required to return what you want without hitting the SIZELIMIT_EXCEEDED issue :)