App Engine Python db.delete not working as expected - python

Please help me understand what I must be doing wrong here.
I run a small video game on Google App Engine, and within the game we have an internal messaging service. Each message has a list of status keys that keep track of whether a player has read a message or not.
My problem is that when I attempt to delete a list of keys, only one of the entities is removed from the datastore, regardless of the number of keys in the list.
class Game_Message(db.Model):
    sender = db.StringProperty()
    recipients = db.ListProperty(str)
    status_keys = db.ListProperty(db.Key)
    payload = db.TextProperty()

    def add_status_keys(self):
        self.status_keys = []
        for user_id in self.recipients:
            gms = Game_Message_Status.create(self, user_id)
            self.status_keys.append(gms.key())

    def remove_status_keys(self):
        db.delete(self.status_keys)
What I have found is that calling db.delete multiple times does delete all the entities, but I don't understand why.
For example, this works correctly:
def remove_status_keys(self):
    db.delete(self.status_keys)
    db.delete(self.status_keys)
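Not an answer, but a hedged diagnostic sketch (the helper name is hypothetical): logging what the keys resolve to before and after the delete call can help rule out stale or duplicate keys in status_keys, since db.get() on a list returns one entity (or None) per key.
import logging  # assumed at module level

def debug_remove_status_keys(self):
    """Hypothetical diagnostic variant of remove_status_keys."""
    logging.info("status_keys: %r", self.status_keys)
    # db.get() on a list returns one entity (or None) per key
    before = db.get(self.status_keys)
    logging.info("entities found before delete: %d",
                 sum(1 for e in before if e is not None))
    db.delete(self.status_keys)
    after = db.get(self.status_keys)
    logging.info("entities still present after delete: %d",
                 sum(1 for e in after if e is not None))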

How to access data in documents from realtime listener python

Apologies if some of this doesn't make sense; I'm still struggling to fully understand realtime listeners.
I'm trying to add a realtime listener to the chat part of my app, so I can add new messages to the screen as they come into the database. In the code below I load all current messages to the screen when the user opens the page, and then I (try to) add the realtime listener so any new messages can be added to the screen.
However, doc_snapshot is just a list of the document ids rather than the message_dict I have been using above. How do I access the data for each document in doc_snapshot, rather than just the id?
Or am I doing it completely wrong: should I skip the one-time load of the messages when the screen is opened and just use a realtime listener to load the messages and listen for new messages?
self.local_id is the id of the user who has logged in; doc_id is the id of the person they're messaging.
def move_to_chat(self, doc_id):
    group_id = self.local_id + ":" + doc_id
    doc_ref = self.my_firestore.db.collection(u'messages').document(group_id)
    doc = doc_ref.get()
    if doc.exists:  # Check if the document exists. If it does, load the messages to the screen
        get_messages = self.my_firestore.db.collection(u'messages').document(group_id).collection(group_id).order_by(u'Timestamp').limit(20)
        messages = get_messages.stream()
        for message in messages:
            message_dict = message.to_dict()
            try:
                if message_dict['IdFrom'] == self.local_id:
                    pass  # Add label to left of screen
                else:
                    pass  # Add label to right of screen
            except:
                pass
    else:  # If it doesn't, create it
        self.my_firestore.db.collection(u'messages').document(group_id).set({
            u'GroupId': group_id
        })
        add_to_doc = self.my_firestore.db.collection(u'messages').document(group_id).collection(group_id).document()
        add_to_doc.set({
            u'Timestamp': datetime.datetime.now()
        })
    # Watch for new messages
    self.query_watch = self.my_firestore.db.collection(u'messages').document(group_id).collection(group_id)
    # Watch the collection
    self.query_watch.on_snapshot(self.on_snapshot)

def on_snapshot(self, doc_snapshot, changes, read_time):
    for doc in doc_snapshot:
        # Here's where I'd like to access data from the documents,
        # to find the message that has been added.
        pass
The Google Cloud Firestore documentation on the classes for representing documents explains what the snapshot callback receives. You can refer to that documentation to confirm the returned values.
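For illustration, a minimal sketch of the callback (assuming the standard google-cloud-firestore client): each element of doc_snapshot is a DocumentSnapshot, so its data is available via to_dict() and its id via doc.id, and the changes argument tells you which documents were added since the last snapshot.
def on_snapshot(self, doc_snapshot, changes, read_time):
    # Each element of doc_snapshot is a DocumentSnapshot, not just an id.
    for doc in doc_snapshot:
        message_dict = doc.to_dict()   # the document's fields as a dict
        print(doc.id, message_dict)
    # `changes` lists what changed since the last snapshot;
    # change.type.name is 'ADDED', 'MODIFIED' or 'REMOVED'.
    for change in changes:
        if change.type.name == 'ADDED':
            new_message = change.document.to_dict()
            print("New message:", new_message)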

Pulling historical channel messages python

I am attempting to create a small dataset by pulling messages/responses from a Slack channel I am a part of. I would like to use Python to pull the data from the channel; however, I am having trouble figuring out my API key. I have created an app on Slack, but I am not sure how to find my API key. I see my client secret, signing secret, and verification token, but I can't find my API key.
Here is a basic example of what I believe I am trying to accomplish:
import slack

sc = slack.SlackClient("api key")
sc.api_call(
    "channels.history",
    channel="C0XXXXXX"
)
I am willing to just download the data manually if that is possible as well. Any help is greatly appreciated.
messages
Below is example code for how to pull messages from a channel in Python.
It uses the official Python Slack library and calls conversations_history with paging. It will therefore work with any type of channel and can fetch large amounts of messages if needed.
The result will be written to a file as a JSON array.
You can specify the channel and the maximum number of messages to be retrieved.
threads
Note that the conversations.history endpoint will not return thread messages. Those have to be retrieved additionally with one call to conversations.replies for every thread you want to retrieve messages for.
Threads can be identified in the messages for each channel by checking for the thread_ts property in the message. If it exists, there is a thread attached to it. See this page for more details on how threads work.
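For illustration, a minimal sketch of fetching replies for the threads found in a page of history (it assumes the same slack.WebClient used in the example code below; the helper name is hypothetical):
def fetch_thread_replies(client, channel_id, messages):
    """Hypothetical helper: fetch replies for every threaded parent message."""
    threads = {}
    for msg in messages:
        # A thread's parent message carries a thread_ts equal to its own ts.
        if msg.get("thread_ts") and msg.get("thread_ts") == msg.get("ts"):
            response = client.conversations_replies(
                channel=channel_id,
                ts=msg["thread_ts"],
            )
            assert response["ok"]
            threads[msg["thread_ts"]] = response["messages"]
    return threads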
IDs
This script will not replace IDs with names, though. If you need that, here are some pointers on how to implement it (a rough sketch for the user-ID case follows after this list):
You need to replace IDs for users, channels, bots and usergroups (if on a paid plan).
You can fetch the lists for users, channels and usergroups from the API with users_list, conversations_list and usergroups_list respectively; bots need to be fetched one by one with bots_info (if needed).
IDs occur in many places in messages:
the top-level user property
the top-level bot_id property
as a link in any property that allows text, e.g. <@U12345678> for users or <#C1234567> for channels; those can occur in the top-level text property, but also in attachments and blocks.
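As a rough, hedged sketch of the user-ID part of that replacement (it assumes the same slack.WebClient used in the example code below; the helper names are hypothetical and users_list paging is ignored):
import re

def build_user_map(client):
    """Hypothetical helper: map user IDs to display names via users_list."""
    response = client.users_list()
    assert response["ok"]
    return {m["id"]: m.get("real_name") or m["name"] for m in response["members"]}

def replace_user_mentions(text, user_map):
    """Replace <@U12345678> style mentions with @name in a text property."""
    return re.sub(
        r"<@([A-Z0-9]+)>",
        lambda m: "@" + user_map.get(m.group(1), m.group(1)),
        text,
    )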
Example code
import os
import slack
import json
from time import sleep

CHANNEL = "C12345678"
MESSAGES_PER_PAGE = 200
MAX_MESSAGES = 1000

# init web client
client = slack.WebClient(token=os.environ['SLACK_TOKEN'])

# get first page
page = 1
print("Retrieving page {}".format(page))
response = client.conversations_history(
    channel=CHANNEL,
    limit=MESSAGES_PER_PAGE,
)
assert response["ok"]
messages_all = response['messages']

# get additional pages if below max message count and if there are any
while len(messages_all) + MESSAGES_PER_PAGE <= MAX_MESSAGES and response['has_more']:
    page += 1
    print("Retrieving page {}".format(page))
    sleep(1)  # need to wait 1 sec before next call due to rate limits
    response = client.conversations_history(
        channel=CHANNEL,
        limit=MESSAGES_PER_PAGE,
        cursor=response['response_metadata']['next_cursor']
    )
    assert response["ok"]
    messages = response['messages']
    messages_all = messages_all + messages

print(
    "Fetched a total of {} messages from channel {}".format(
        len(messages_all),
        CHANNEL
    ))

# write the result to a file
with open('messages.json', 'w', encoding='utf-8') as f:
    json.dump(
        messages_all,
        f,
        sort_keys=True,
        indent=4,
        ensure_ascii=False
    )
This uses the Slack Web API; you will need to install the requests package. It should grab all the messages in a channel. You need a token, which can be grabbed from the app management page, and you can use the getChannels() function to look up channel ids. Once you have all the messages, you will need to see who wrote which message by doing id matching (mapping ids to usernames); you can use the getUsers() function for that. Follow https://api.slack.com/custom-integrations/legacy-tokens to generate a legacy token if you do not want to use a token from your app.
import requests

def getMessages(token, channelId):
    # this function gets all the messages from the given channel
    print("Getting Messages")
    slack_url = "https://slack.com/api/conversations.history?token=" + token + "&channel=" + channelId
    messages = requests.get(slack_url).json()
    return messages

def getChannels(token):
    '''
    returns an object containing a dictionary mapping the names of all the
    channels in a given workspace to their ids
    '''
    channelsURL = "https://slack.com/api/conversations.list?token=%s" % token
    channelList = requests.get(channelsURL).json()["channels"]  # an array of channels
    channels = {}
    # putting the channels and their ids into a dictionary
    for channel in channelList:
        channels[channel["name"]] = channel["id"]
    return {"channels": channels}

def getUsers(token):
    # this function gets a list of users in the workspace, including bots
    usersURL = "https://slack.com/api/users.list?token=%s&pretty=1" % token
    members = requests.get(usersURL).json()["members"]
    return members
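A brief, hedged usage sketch of those helpers (it assumes a token in the SLACK_TOKEN environment variable; the channel name is hypothetical):
import os

token = os.environ["SLACK_TOKEN"]

channels = getChannels(token)["channels"]            # {"general": "C0XXXXXX", ...}
channel_id = channels["general"]                     # hypothetical channel name
history = getMessages(token, channel_id)             # raw conversations.history response
users = {u["id"]: u["name"] for u in getUsers(token)}

for msg in history.get("messages", []):
    author = users.get(msg.get("user"), msg.get("bot_id", "unknown"))
    print(author, ":", msg.get("text", ""))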

MS teams bot - create new conversation

I'm using botbuilder-python to build an MS Teams bot.
Following samples I am able to respond to messages. What I'm struggling with is creating completely new message, without existing activity passed from Teams. I modified some code from the tests (https://github.com/Microsoft/botbuilder-python/blob/62b0512a4dd918fa0d3837207012b31213aaedcc/libraries/botframework-connector/tests/test_conversations.py) but I'm getting:
botbuilder.schema.error_response_py3.ErrorResponseException: (BadSyntax) Could not parse tenant id
What is the tenant id, where can I find it (I can fish it out of the request, but that's not ideal), and how do I pass it? Can anyone point me at any Python samples of creating a new conversation?
I figured it out, just in case anybody else is trying to do the same thing and gets stuck:
# imports for the classes used below
from botbuilder.schema import Activity, ActivityTypes, ChannelAccount, ConversationParameters
from botframework.connector import ConnectorClient
from botframework.connector.auth import JwtTokenValidation, MicrosoftAppCredentials

to = ChannelAccount(id=to_user_id)
bot_channel = ChannelAccount(id=bot_id)
activity_reply = Activity(type=ActivityTypes.message, channel_id='msteams',
                          from_property=bot_channel, recipient=to, text=message)
credentials = MicrosoftAppCredentials(app_id, app_password)
JwtTokenValidation.authenticate_request(activity_reply, "Authorization", credentials)

# That's where you pass the tenant id
reply_conversation_params = ConversationParameters(
    bot=bot_channel,
    members=[to],
    activity=activity_reply,
    channel_data={'tenant': {'id': tenant_id}}
)

connector = ConnectorClient(credentials, base_url='https://smba.trafficmanager.net/uk/')

# Create conversation
conversation = connector.conversations.create_conversation(reply_conversation_params)

# And send it
connector.conversations.send_to_conversation(conversation.id, activity_reply)
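For reference, a hedged sketch of where the tenant id can come from: for Teams, an incoming Activity typically carries it in its channel data, so it can be captured from any earlier message the bot has received. The attribute access below is an assumption about the incoming activity object, not part of the answer above:
def get_tenant_id(incoming_activity):
    """Hypothetical helper: pull the tenant id off an incoming Teams activity."""
    # Assumes channel_data was deserialized as a dict.
    channel_data = incoming_activity.channel_data or {}
    tenant_id = channel_data.get("tenant", {}).get("id")
    if not tenant_id and incoming_activity.conversation is not None:
        # Some schema versions also expose it on the conversation account.
        tenant_id = getattr(incoming_activity.conversation, "tenant_id", None)
    return tenant_id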

Google Contacts API: Temporary internal error, when uploading contact photos in parallel

I need to change the contact photo for a large number of contacts, using the python client for the Google Contacts API 3.0
gdata==2.0.18
The code I'm running is:
client = gdata.contacts.client.ContactsClient(source=MY_APP_NAME)
GDClientAuth(client, MY_AUTH)


def _get_valid_contact(contact_id):
    contact = client.GetContact(contact_id)
    if contact.GetPhotoLink() is None:
        # Generate a proper photo link for this contact
        link = gdata.contacts.data.ContactLink()
        link.etag = '*'
        link.href = generate_photo_url(contact)
        link.rel = 'http://schemas.google.com/contacts/2008/rel#photo'
        link.type = 'image/*'
        contact.link.append(link)
    return contact


def upload_photo(contact_id, image_path, image_type, image_size):
    contact = _get_valid_contact(contact_id)
    try:
        client.ChangePhoto(media=image_path,
                           contact_entry_or_url=contact,
                           content_type=image_type,
                           content_length=image_size)
    except gdata.client.RequestError as req:
        if req.status == 412:
            # handle etag mismatches, etc...
            pass
Given a list of valid Google contact ids, if I run the upload_photo method sequentially for each of them, everything goes smoothly, and all the contacts get their photo changed:
for contact_id in CONTACT_ID_LIST:
    upload_photo(contact_id, '/path/to/image', 'image/png', 1234)
However, if I try to upload the photos in parallel (using at least 4 threads), some of them randomly fail with a 500, "A temporary internal problem has occurred. Try again later", as the response to the client.ChangePhoto call. I can retry these photos later, though, and they finally get updated:
from multiprocessing.pool import ThreadPool

pool = ThreadPool(4)
for contact_id in CONTACT_ID_LIST:
    pool.apply_async(func=upload_photo,
                     args=(contact_id, '/path/to/image', 'image/png', 1234))
The more threads I use, the more frequently the error happens.
The only similar issue I could find is http://code.google.com/a/google.com/p/apps-api-issues/issues/detail?id=2507, and it was solved some time ago.
The issue I'm facing now might be different, as it happens randomly and only when running the updates in parallel, so chances are there is a race condition somewhere on the Google Contacts API end.
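Since retrying eventually succeeds, a hedged workaround sketch (not an answer, just an illustration; the retry parameters are arbitrary) would be a variant of upload_photo that retries the temporary 500s:
import time

def upload_photo_with_retry(contact_id, image_path, image_type, image_size,
                            max_attempts=5):
    """Hypothetical variant of upload_photo that retries on the temporary 500s."""
    contact = _get_valid_contact(contact_id)
    for attempt in range(1, max_attempts + 1):
        try:
            client.ChangePhoto(media=image_path,
                               contact_entry_or_url=contact,
                               content_type=image_type,
                               content_length=image_size)
            return
        except gdata.client.RequestError as req:
            if req.status == 500 and attempt < max_attempts:
                time.sleep(2 * attempt)  # simple linear backoff before retrying
            elif req.status == 412:
                # etag mismatch handling as in the original upload_photo
                return
            else:
                raise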

How to retrieve the latest modified tasks using REST API?

I am creating an integration tool to integrate Rally with my web application. I decided to use Python, running on my web server, to retrieve the contents from Rally.
In one scenario, I need to get the last modified task from a story. I don't know the ID, name, or anything else, but I do know the story name. Using the story name, how can I get the last modified task(s)?
Here's an example of how to set up Kyle's query in pyral:
from pyral import Rally

server = "rally1.rallydev.com"
user = "user@company.com"
password = "topsecret"
workspace = "My Workspace"
project = "My Project"

rally = Rally(server, user, password, workspace=workspace, project=project)
rally.enableLogging("rally.history.showtasks")

fields = "FormattedID,State,Name,WorkProduct,LastUpdateDate"
criterion = 'WorkProduct.Name = "My Tasks User Story"'

response = rally.get('Task', fetch=fields, query=criterion,
                     order="LastUpdateDate Desc", pagesize=200, limit=400)

most_current_task = response.next()
print("%-8.8s %-52.52s %s" % (most_current_task.FormattedID,
                              most_current_task.Name,
                              most_current_task.State))
I'm not super familiar with how to use pyral but you should be able to get what you'd like by querying against the task wsapi endpoint like so:
/slm/webservice/1.40/task.js?query=(WorkProduct.Name = "Story Name")&order=LastUpdateDate DESC
Now you just need to get pyral to generate that request. :-)
