I'm trying to locate data based on its Name/ID field, which is auto-generated within Google Cloud. I want to be able to update the given entity, but I'm finding it hard to work with the data formatting. I have a list of data with a button that says 'Update'; clicking it gives me the unique Name/ID of that entity, but I cannot seem to find a method of also pulling the information associated with that Name/ID within Google Cloud.
(Screenshots: the table with data; the data inside Google Cloud; the unique ID located, but struggling to pull the other data based on that ID.)
def updateSong():
    songID = request.form['Update']
    # songQuery = datastore_client.query(kind="Song")
    # songs = list(songQuery.fetch())
    query = datastore_client.query(kind='Song', ancestor=songID)
    songData = query.fetch()
    print(songData)

    id_token = request.cookies.get("token")
    error_message = None

    if id_token:
        try:
            user_data = google.oauth2.id_token.verify_firebase_token(
                id_token, firebase_request_adapter)
        except ValueError as exc:
            error_message = str(exc)

    return render_template('UpdateSong.html', user_data=user_data, error_message=error_message, songID=songID)
Is there not a method of querying by the song ID so that I can then use it like this:
song['Title'] = song_title
Try this:
query = datastore_client.query()
query.key_filter(datastore_client.key('Song', songID))
song = list(query.fetch())
Source: https://googleapis.dev/python/datastore/latest/_modules/google/cloud/datastore/query.html#Query.key_filter
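For completeness, a direct key lookup with client.get() avoids the query entirely. Below is a minimal sketch of the fetch-and-update flow, assuming the kind 'Song', a property named 'Title', and that songID holds the auto-generated numeric ID as a form string (the helper name update_song_title is made up for illustration):

from google.cloud import datastore

client = datastore.Client()

def update_song_title(song_id, new_title):
    # Datastore's auto-generated IDs are integers, but form values
    # arrive as strings, so cast before building the key.
    key = client.key('Song', int(song_id))
    song = client.get(key)  # direct lookup by key; no query needed
    if song is None:
        return None
    song['Title'] = new_title
    client.put(song)  # write the updated entity back
    return song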
I'm using the Python ibm-cloud-sdk in an attempt to iterate over all resources in a particular IBM Cloud account. My trouble has been that pagination doesn't appear to work for me: when I pass in the next_url, I still get the same list coming back from the call.
Here is my test code. I successfully print many of my COS instances, but I only seem to be able to print the first page. Maybe I've been looking at this too long and just missed something obvious; does anyone have any clue why I can't retrieve the next page?
try:
    ####### authenticate and set the service url
    auth = IAMAuthenticator(RESOURCE_CONTROLLER_APIKEY)
    service = ResourceControllerV2(authenticator=auth)
    service.set_service_url(RESOURCE_CONTROLLER_URL)

    ####### Retrieve the resource instance listing
    r = service.list_resource_instances().get_result()

    ####### get the row count and resources list
    rows_count = r['rows_count']
    resources = r['resources']

    while rows_count > 0:
        print('Number of rows_count {}'.format(rows_count))
        next_url = r['next_url']
        for i, resource in enumerate(resources):
            type = resource['id'].split(':')[4]
            if type == 'cloud-object-storage':
                instance_name = resource['name']
                instance_id = resource['guid']
                crn = resource['crn']
                print('Found instance id : name - {} : {}'.format(instance_id, instance_name))

        ############### this is SUPPOSED to get the next page
        r = service.list_resource_instances(start=next_url).get_result()
        rows_count = r['rows_count']
        resources = r['resources']
except Exception as e:
    Error = 'Error : {}'.format(e)
    print(Error)
    exit(1)
From looking at the API documentation for listing resource instances, the value of next_url includes the URL path and the start parameter with its token.
To retrieve the next page, you only need to pass in the parameter start with the token as its value. IMHO this is not ideal.
I typically do not use the SDK, but a simple Python request. Then I can use the endpoint (base) URI + next_url as the full URI.
If you stick with the SDK, use urllib.parse to extract the query parameter. Not tested, but something like:
from urllib.parse import urlparse, parse_qs

o = urlparse(next_url)
q = parse_qs(o.query)
r = service.list_resource_instances(start=q['start'][0]).get_result()
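Putting it together, here is an untested sketch of the corrected loop, assuming next_url is absent or null on the last page as the API docs describe:

from urllib.parse import urlparse, parse_qs

r = service.list_resource_instances().get_result()
while True:
    for resource in r['resources']:
        print(resource['name'])
    next_url = r.get('next_url')
    if not next_url:
        break  # no more pages
    # next_url is a path like '/v2/resource_instances?start=<token>';
    # the SDK expects only the token, so pull out the 'start' parameter.
    start_token = parse_qs(urlparse(next_url).query)['start'][0]
    r = service.list_resource_instances(start=start_token).get_result()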
Could you use the Search API for listing the resources in your account rather than the resource controller? The search index is set up for exactly that operation, whereas paginating results from the resource controller seems much more brute force.
https://cloud.ibm.com/apidocs/search#search
I'm trying to authorize a view programmatically in BigQuery, and I have the following issue: I tried the code proposed in the Google docs (https://cloud.google.com/bigquery/docs/dataset-access-controls), but when it comes to the part of getting the current access entries for the dataset, the result is always empty. I don't want to overwrite the current configuration. Any idea about this behavior?
def authorize_view(dataset_id, view_name):
    dataset_ref = client.dataset(dataset_id)
    view_ref = dataset_ref.table(view_name)

    source_dataset = bigquery.Dataset(client.dataset('mydataset'))
    access_entries = source_dataset.access_entries  # This returns []
    access_entries.append(
        bigquery.AccessEntry(None, 'view', view_ref.to_api_repr())
    )
    source_dataset.access_entries = access_entries
    source_dataset = client.update_dataset(
        source_dataset, ['access_entries'])  # API request
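For reference, one likely explanation: bigquery.Dataset(client.dataset('mydataset')) only builds a local object from a reference and never contacts the API, so its access_entries list starts out empty. Fetching the dataset first should return the current entries; a sketch, assuming the same client and dataset name as above:

# get_dataset() makes an API request, so access_entries reflects the
# dataset's current ACL instead of an empty local default.
source_dataset = client.get_dataset(client.dataset('mydataset'))
print(source_dataset.access_entries)  # now populated from the server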
I'm using the Google Cloud Datastore in a very simple way, and I'm trying to retrieve an entity by its id. I've read this (it's in Java but seems to follow the same logic).
The definition of my entity is here:
class Logs(ndb.Model):
    startDate = ndb.DateTimeProperty()
    endDate = ndb.DateTimeProperty()
    requestedDate = ndb.DateProperty()
    taskName = ndb.StringProperty()
    status = ndb.StringProperty()
Then when I insert a new one, I do:
logs = Logs(startDate=datetime.utcnow(),
            taskName=taskName,
            requestedDate=requestedDate,
            status=u'IN_PROGRESS')
key = logs.put()
id = key.id()  # I use this variable later
And when I want to retrieve it
logs = Logs.get_by_id(id)
But it never returns any entity...
What's wrong with this?
Thanks for helping.
According to the documentation, you should be able to call get() directly on the Key object to retrieve the entity from Datastore:
logs_entity = Logs(startDate=datetime.utcnow(),
                   taskName=taskName,
                   requestedDate=requestedDate,
                   status=u'IN_PROGRESS')

# Saves entity to Datastore and returns Key
entity_key = logs_entity.put()

# Retrieves entity from Datastore using the previous Key
result = entity_key.get()
Edit:
In case you need to pass the key around as a string so you can rebuild the Key object later, you might try using the urlsafe() method, which allows embedding it in a URL:
urlsafe_string = entity_key.urlsafe()
[...]
entity_key = ndb.Key(urlsafe=urlsafe_string)
logs_entity = entity_key.get()
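As a side note, a common reason Logs.get_by_id(id) returns None in this situation is that the numeric id was converted to a string somewhere along the way (for example, through a URL parameter); get_by_id() with a string names a different key. A minimal sketch, assuming that is what happened:

# Auto-generated Datastore ids are integers; cast before the lookup.
logs = Logs.get_by_id(int(id))  # same as ndb.Key(Logs, int(id)).get()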
I need your help ordering a list of items.
I am trying to make an app that can send messages to a user's friends (just like social feeds). After watching Bret Slatkin's talk about creating microblogging, here's my code:
class Message(ndb.Model):
    content = ndb.TextProperty()
    created = ndb.DateTimeProperty(auto_now=True)

class MessageIndex(ndb.Model):
    receivers = ndb.StringProperty(repeated=True)

class BlogPage(Handler):
    def get(self):
        if self.request.cookies.get("name"):
            user_loggedin = self.request.cookies.get("name")
        else:
            user_loggedin = None

        receive = MessageIndex.query(MessageIndex.receivers == user_loggedin)
        receive = receive.fetch()
        message_key = [int(r.key.parent().id()) for r in receive]
        messages = [Message.get_by_id(int(m)) for m in message_key]
        for message in messages:
            self.write(message)
First I do a query to get all messages that have my name in the receivers. MessageIndex is a child of Message, so I can get the key of every message that I receive. Finally, I iterate get_by_id over the list of message keys.
This works fine, but I want to filter each message by its created datetime, and that's the problem. The final output is a plain list, which can't be ordered using .order or .filter.
Maybe some of you can enlighten me.
You can use the message keys in an 'IN' clause in the Message query. Note that you will need to use the parent() key value, not the id(), in this case.
E.g.:
# dtStart, dtEnd are datetime values
message_keys = [r.key.parent() for r in receive]
query = Message.query(Message._key.IN(message_keys),
                      Message.created > dtStart,
                      Message.created < dtEnd)
query = query.order(Message.created)  # or -Message.created for desc
messages = query.fetch()
I am unsure if you wish to simply order by the Message created date, or whether you wish to filter using the date. Both options are catered for above.
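Alternatively, if you do not need the filtering done in the query itself, the already-fetched list can simply be sorted in memory; a sketch, where messages is the list built with get_by_id in the question:

# Drop any None results from get_by_id, then sort by creation time.
messages = [m for m in messages if m is not None]
messages.sort(key=lambda m: m.created)  # reverse=True for newest first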
I want to get the most-followed followers of a user on Twitter using python-twitter, and do that without getting the 'Rate limit exceeded' error message.
I can get the followers of a user and then get the number of followers of each one, but the problem arises when that user is big (thousands of followers).
I use the following function to get the follower ids of a particular user:
def GetFollowerIDs(self, userid=None, cursor=-1):
    url = 'http://twitter.com/followers/ids.json'
    parameters = {}
    parameters['cursor'] = cursor
    if userid:
        parameters['user_id'] = userid
    json = self._FetchUrl(url, parameters=parameters)
    data = simplejson.loads(json)
    self._CheckForTwitterError(data)
    return data
and my code is:
import twitter

api = twitter.Api(consumer_key='XXXX',
                  consumer_secret='XXXXX',
                  access_token_key='XXXXX',
                  access_token_secret='XXXXXX')

user = api.GetUser(screen_name="XXXXXX")
users = api.GetFollowerIDs(user)
# then I make a request per follower in users so that I can sort them
# according to the number of followers
The problem is that when the user has a lot of followers, I get the 'Rate limit exceeded' error message.
I think you need to get the results in chunks, as explained in this link.
This is the workaround currently shown on the GitHub page (with the cursor handling and exception name fixed). But if you want an unlimited stream, you should upgrade the subscription for your Twitter application.
def GetFollowerIDs(self, userid=None, cursor=-1, count=10):
    url = 'http://twitter.com/followers/ids.json'
    parameters = {'cursor': cursor}
    if userid:
        parameters['user_id'] = userid

    ids = []
    remaining = count
    while remaining > 0:
        remaining -= 1
        json = self._FetchUrl(url, parameters=parameters)
        try:
            data = simplejson.loads(json)
            self._CheckForTwitterError(data)
        except twitter.TwitterError:
            break
        ids.extend(data.get('ids', []))
        # Advance the cursor to the next chunk; a cursor of 0 means
        # there are no more pages to fetch.
        parameters['cursor'] = data.get('next_cursor', 0)
        if not parameters['cursor']:
            break
    return ids
def main():
    api = twitter.Api(consumer_key='XXXX',
                      consumer_secret='XXXXX',
                      access_token_key='XXXXX',
                      access_token_secret='XXXXXX')
    user = api.GetUser(screen_name="XXXXXX")
    count = 100  # you can find the optimum value by trial & error
    # GetFollowerIDs now pages internally via the cursor, so a single
    # call collects up to `count` chunks of follower ids.
    users = api.GetFollowerIDs(user.id, count=count)
Another possibility might be to run cron jobs at intervals, as explained here:
http://knightlab.northwestern.edu/2014/03/15/a-beginners-guide-to-collecting-twitter-data-and-a-bit-of-web-scraping/
Construct your scripts in a way that cycles through your API keys to stay within the rate limit.
Cronjobs — A time based job scheduler that lets you run scripts at designated times or intervals (e.g. always at 12:01 a.m. or every 15 minutes).