I'm building a web app that runs analysis on Slack activity and I need test data. I don't know where to turn; it doesn't have to be millions of messages, but I need data for the following:
user status (online/away/logged off) changes
messages sent (date, user, contents)
reactions (date, type, reaction on what message)
huddles, voice/video chats with metadata
any help is much appreciated
If you're developing an app to be used on the Enterprise Grid plan, you can request a sandbox from Slack directly, though it won't contain any test data off the bat.
More information on that here: https://api.slack.com/enterprise/grid/testing.
That being said, you could populate it with information yourself, using a combination of several different Web API methods. Here are a few methods that may be of use to you:
chat.postMessage - posts a message in a channel
conversations.create - creates a new channel
reactions.add - adds a reaction onto an item
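For example, here is a minimal sketch of populating a workspace with those methods using the slack_sdk Python package (the token, channel name, message text, and emoji are placeholders, and the bot token needs the matching scopes):

from slack_sdk import WebClient

client = WebClient(token="xoxb-your-token")  # placeholder token

# create a test channel to hold the generated data
channel = client.conversations_create(name="test-data")["channel"]

# post a message into it
msg = client.chat_postMessage(channel=channel["id"], text="hello test data")

# add a reaction to the message we just posted
client.reactions_add(channel=channel["id"], timestamp=msg["ts"], name="thumbsup")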
I would like to save the conversation references to a blob file or SQL DB so that I can download this file, retrieve the conversation references, and then send proactive messages to the users. I know there is a sample that allows me to save the conversation references in a dictionary, but obviously this dictionary is deleted after a deployment of a new version of the bot, so I can't message the users anymore. I therefore thought to save this dictionary in a blob file in order to recover it and not lose the conversation references. But this approach doesn't work.
I do the following to save the dictionary.
# serialize the dictionary and upload it to Azure Blob Storage
a = pickle.dumps(conversation_reference_dict)
blob_client.upload_blob(a, blob_type="BlockBlob", overwrite=True)
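For completeness, the restore step would be something like this (assuming the same blob_client and that the blob was uploaded as above):

# download the blob and rebuild the dictionary (counterpart to the upload above)
data = blob_client.download_blob().readall()
conversation_reference_dict = pickle.loads(data)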
But I think this practice saves a dictionary like this: {id_client: address of the ConversationReference object}, and this is clearly not what I want, because after a future deployment this address will no longer mean anything.
Does anyone have any tips for doing this in python?
Thank you very much
UPDATE
After testing, the dictionary is saved correctly. However, the problem arises when I try to execute the following code snippet after an update of the bot's source code.
# Send a message to all conversation members.
# This uses the shared dictionary that the bot adds conversation references to.
async def _send_proactive_message():
    for conversation_reference in CONVERSATION_REFERENCES_STORED.values():
        await ADAPTER.continue_conversation(
            conversation_reference,
            lambda turn_context: turn_context.send_activity("proactive hello"),
            APP_ID,
        )
The adapter method fails to continue the conversation with the users, as if it could no longer find them. Is there a way to update the conversation references that are saved in the blob so that after a release the bot can continue the pending conversations?
UPDATE II
I would like to work in this scenario:
I have my bot quizzing users of a Teams channel; the bot is released on Azure, and I can't go through the internal app section of Teams due to lack of permissions.
The bot works with proactive messages and saves the necessary conversation references to a file inside a blob.
I want to introduce a new feature inside the bot, so I perform a new release. I would like the bot to be able to continue proactively messaging users, since I save the conversation references to a file that is not touched by this release.
The last point doesn't happen: the proactive messages are no longer sent. Is there any way I can continue to send these messages? I'm assuming that in the new bot release new ids/URLs are created and these do not match the old ids/URLs saved in the file, so calling the method that sends proactive messages via conversation reference fails or is otherwise not executed. Does anyone know which fields do not match? Can I possibly send a message to the post-release bot and "modify" the entire dictionary on the blob so that these URLs/ids match?
Here is an example of what I mean: suppose I know that after a release id1 is modified. After the release I contact the bot, for example through the test section inside Azure; this contact triggers a method that loads the file saved on Azure, scans through it all, and replaces the old id1 with the new id1, so the sending of proactive messages can continue safely. Is this a possible scenario? Can you help me?
UPDATE III
I seem to have solved my problem by adding this line of code:
AppCredentials.trust_service_url(conversation_reference.service_url)
before:
await ADAPTER.continue_conversation(
    conversation_reference,
    lambda turn_context: turn_context.send_activity("proactive hello"),
    APP_ID,
)
resulting in this final code:
# Send a message to all conversation members.
# This uses the shared dictionary that the bot adds conversation references to.
async def _send_proactive_message():
    for conversation_reference in CONVERSATION_REFERENCES_STORED.values():
        # trust the service URL before continuing the conversation (required for Teams)
        AppCredentials.trust_service_url(conversation_reference.service_url)
        await ADAPTER.continue_conversation(
            conversation_reference,
            lambda turn_context: turn_context.send_activity("proactive hello"),
            APP_ID,
        )
I'm not sure why this was removed from the docs, but you always need to trust the service URL in order to send proactive messages in Teams, as explained in this answer: Proactive message not working in MS Teams after bot is restarted
EDIT: This is apparently no longer true for all languages, so you may not have to do it in Python for much longer
I have a question about a Microsoft Teams Python bot. If the bot has been added to some personal chats and group chats and I restart the bot, sometimes the bot needs to be added to the chats again. So I want to make bot sessions.
Is it possible to make a bot session in Microsoft Teams? I want to store session information on the local disk, and then have the bot load that data when it starts.
My bot code is very similar to this sample.
Thank you for your help.
Updated:
Like I said, my bot code is very similar to this sample, but a bit different. Because of this I created an example for this question. First of all I create a bot in Azure and set it up.
After this, in my bot's config.py file I set up the port and Microsoft app ID and password (generated by clicking the "Manage" button).
import os

""" Bot Configuration """


class DefaultConfig:
    """ Bot Configuration """

    PORT = 3978
    APP_ID = os.environ.get("MicrosoftAppId", "sadsadsadasd")
    APP_PASSWORD = os.environ.get("MicrosoftAppPassword", "asdasdasdasdasd")
After this I execute the command ngrok http 3978 and put the generated endpoint in the Azure bot configuration. To register the bot as an application, I use App Studio in Teams. After I do that, I just need to run the bot from the command line with python run.py.
After I run the bot, I can add it to the channel and run the commands and functions that I created in the code.
This is just an example of how I set up the bot. The main bot is on a Linux server.
Here is why I want the bot to keep session information and load it after the server or bot is restarted: sometimes after I restart the bot or the server, it is no longer in the chat or team. In the future I want to make some kind of commands and execute them using a cron job or something like that.
If the bot disappears from a chat then I can't use bot commands in that chat. For example, I add two bots to a chat; after I restart one of them, I can't get any response from it, like in the picture below.
And with # I can't see the bot.
I have an idea. After I add the bot to a chat I get this in the console:
Adding new conversation to the list: {'additional_properties': {}, 'activity_id': '123215513', 'user': <botbuilder.schema._models_py3.ChannelAccount object at 0x0000027C0ED60>, 'bot': <botbuilder.schema._models_py3.ChannelAccount object at 0x0000027Cs2FD0>, 'conversation': <botbuilder.schema._models_py3.ConversationAccount object at 0x0000027C0400>, 'channel_id': 'msteams', 'locale': 'en-US', 'service_url': 'https://smba.trafficmanager.net/emea/'}
Formatted:
{
"additional_properties": {},
"activity_id": "123215513",
"user": <botbuilder.schema._models_py3.ChannelAccount object at 0x0000027C0ED60>,
"bot": <botbuilder.schema._models_py3.ChannelAccount object at 0x0000027Cs2FD0>,
"conversation": <botbuilder.schema._models_py3.ConversationAccount object at 0x0000027C0400>,
"channel_id": "msteams",
"locale": "en-US",
"service_url": "https://smba.trafficmanager.net/emea/"
}
So if I store this information and then load it when I start the bot, maybe it will work?
There is an option inside a bot to save a transcript of the conversation, but that's kind of unrelated. Basically, the important thing to know is that you don't need to store anything on your side: from the user's perspective, the entire conversation history is preserved in the Teams client, and from the perspective of your bot, storing the entire conversation history doesn't really gain you anything. User state is more relevant than conversation history.
This means storing an object in a persistence layer you choose (e.g. database, NoSQL store, Azure blobs, whatever), but it would be state you choose to store for the user (basically whatever properties make sense to store for your app, in a kind of "User" collection). This is definitely a possible and often necessary concept, and this link will be useful for you: https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-concept-state?view=azure-bot-service-4.0
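As a rough sketch of what that looks like in Python, something like the following should be close (assuming the botbuilder-core and botbuilder-azure packages; the connection string, container name, and property name are placeholders):

from botbuilder.azure import BlobStorage, BlobStorageSettings
from botbuilder.core import UserState

# persist user state in Azure Blob Storage instead of in-process memory
storage = BlobStorage(
    BlobStorageSettings(
        connection_string="<your-connection-string>",  # placeholder
        container_name="bot-state",
    )
)
user_state = UserState(storage)

# an accessor for one named property; read and write it inside your turn handler
profile_accessor = user_state.create_property("UserProfile")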
It's important to know though that this is a separate idea from sending a message on a defined schedule (e.g. Cron) to a user. For this to work, you need to read up on a concept called "Proactive Messaging". I have a sample on the Teams PnP gallery specifically dealing with it (code only in Node and Dotnet - no python I'm afraid, but hopefully it's useful for you). See here for that. Note that at the bottom of this link is a list of further reading on the topic as well.
Where the two above ideas come together is that you need to store certain state about the user to be able to send the proactive message later. In the sample I link to, I show how to get the settings you need to send the proactive message, but I've not included the concept of saving them to a data store - that's up to your own implementation inside your bot (e.g. SQL Azure, MongoDb, blob, whatever).
Also important to note (and I think part of the confusion and in fact part of why I wrote the sample) - your proactive code does not need to live in the same set of code as your bot! Your bot could be a web service running somewhere, and your proactive code an Azure Function/Lambda/similar.
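To illustrate that separation, a hypothetical standalone sender in Python might look like this (APP_ID, APP_PASSWORD, and the stored conversation reference are assumed to come from your own config and persistence; the adapter classes are from botbuilder-core):

from botbuilder.core import BotFrameworkAdapter, BotFrameworkAdapterSettings

# this script is not the bot itself; it only replays a stored reference
SETTINGS = BotFrameworkAdapterSettings(APP_ID, APP_PASSWORD)
ADAPTER = BotFrameworkAdapter(SETTINGS)

async def send_from_outside(conversation_reference):
    # push a message into an existing conversation, e.g. from a cron job
    await ADAPTER.continue_conversation(
        conversation_reference,
        lambda turn_context: turn_context.send_activity("scheduled message"),
        APP_ID,
    )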
Like Hilton said, it's a bad idea to try and save bot state to the local disk. Also, there's no need for that. Both Hilton and I have linked to documentation that should help you understand how bot state is meant to be saved.
I cannot reproduce the problem you're encountering where the bot gets removed from Teams chats. The problem sounds impossible anyway, based on how Teams works and how bots work. Teams should have no way of knowing whether your bot is started or stopped. It's possible that your server is set up to manually uninstall the bot from Teams conversations based on when the bot starts and stops, but that would still be very strange. I'm willing to continue troubleshooting this with you, but I thought I'd post an answer now in case you'd like to award your bounty to someone before it expires.
Every day, a sender "sender@sender.com" sends me a message with a number inside.
I need to save this number every day.
I want to write a Python script with the Gmail API to get the data from the last mail from this sender, and then parse it.
I followed the Gmail API "Quickstart Guide": here
I also checked the page about Users.messages: here
However, I don't understand how to tie all of this together to get the data.
Could someone explain the process to me?
If you were able to complete the Gmail API quickstart, then you already have a GCP project and credentials, and you have authorized some Gmail API scopes for your app.
The above is the first step (being able to authenticate and be allowed to make requests for the API scope you need).
Since you need to pass a message's id as a parameter to Users.messages.get, you first need to retrieve it, for example by listing messages.
So the next step is to make a request to Users.messages.list to list all messages from a user.
You could use the query (q) parameter to filter the messages by sender, like: q="from:someuser@example.com is:unread".
This will return a list of messages from someuser@example.com that are unread.
Try things out in the API explorer sidebar of the documentation until you have defined the request as you want, and then implement it in your app.
As aerials said:
service.users().messages().list(userId='me', q="<your search query>").execute()
The above code fulfills the exact same function as typing a search request into the Gmail website. You don't actually have to worry about labels or anything if you are operating at a small scale. Just follow the same syntax as the search bar in Gmail.
However, I am not sure about the usage quotas on the q parameter for list. It may be more expensive for a larger-scale operation to use the q parameter instead of the other API methods.
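Putting the two answers together, a minimal end-to-end sketch could look like this (it assumes service was built as in the quickstart and that the number appears in the message snippet; the sender address is a placeholder):

import re

# list the most recent message from the sender (results come newest first)
resp = service.users().messages().list(
    userId="me", q="from:sender@sender.com", maxResults=1
).execute()

messages = resp.get("messages", [])
if messages:
    msg = service.users().messages().get(
        userId="me", id=messages[0]["id"]
    ).execute()
    # pull the first number out of the snippet
    match = re.search(r"\d+", msg.get("snippet", ""))
    if match:
        print(match.group())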
A report is posted every 5 hours to a Slack channel, from which we need to sort/filter some information and put it into a file.
So, is there any way to read the channel continuously, or to run some command every 5 minutes or so before that time, and capture the report for future processing?
Yes, that is possible. Here is the basic outline of a solution:
1. Create a Slack app based on a script (e.g. in Python) that has access to that channel's history (e.g. has the channels:history permission scope).
2. Use cron to call your script at the needed time.
3. The script reads the channel's history (e.g. with channels.history for public channels), filters out what it needs, and then stores the report as a file.
Another approach would be to continuously read every new message from the channel, parse it for a trigger (e.g. a specific user that sends it, or the name of the report), and then filter and save the report when it appears. If you can identify a reliable trigger, this would in my experience be the more stable solution, since scheduled reports can be delayed.
For that approach, use Slack's Events API instead of cron and subscribe to receiving messages (e.g. the message event for public channels). Slack will then automatically send each new message to your script as soon as it is posted.
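As a rough sketch of that subscription (assuming the slackeventsapi helper package; the signing secret, trigger text, and file name are placeholders):

from slackeventsapi import SlackEventAdapter

slack_events = SlackEventAdapter("<signing-secret>", "/slack/events")

@slack_events.on("message")
def handle_message(event_data):
    message = event_data["event"]
    # fire only when the report appears, then filter and save it
    if "Daily Report" in message.get("text", ""):
        with open("report.txt", "w") as f:
            f.write(message["text"])

slack_events.start(port=3000)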
If you are new to creating Slack apps I would advise studying the excellent official documentation and tutorials on the Slack API site to get started.
A Python example to this approach could be found here: https://gist.github.com/demmer/617afb2575c445ba25afc432eb37583b
This script counts the number of messages per user.
Based on this code I created the following example for you:
# get the correct channel id
channel_id = None
for channel in channels['channels']:
    if channel['name'] == channel_name:
        channel_id = channel['id']

if channel_id is None:
    raise Exception("cannot find channel " + channel_name)

# get the history as follows:
history = sc.api_call("channels.history", channel=channel_id)

# get all the messages from the history:
messages = history['messages']

# or index into them, e.g. get the first (most recent) message:
first_message = messages[0]
I'm trying to figure out an effective way to test how my server handles webhooks from Stripe. I'm setting up a system to add multiple subscriptions to a customer's credit card, which is described on Stripe's website:
https://support.stripe.com/questions/can-customers-have-multiple-subscriptions
The issue I'm having is figuring out how to effectively test that my server is executing the scripts correctly (i.e., adding the correct subscriptions to the invoice, recording the events in my database, etc.). I'm not too concerned about automating the tests right now; I'm just struggling to run any good test on the script. Has anyone done this with Django before? What resources and tools did you use to run these tests?
Thanks!
I did not use any tools to run the tests. In fact, Stripe has a full API reference which displays the information you have sent to them, and it also displays any errors. Stripe is very easy to set up, cheap, and has full details in its documentation.
Here is what I did:
First I created a Stripe account. In that account, they will give you:
TEST_SECRET_KEY: used for sending payments and information to Stripe (for testing)
TEST_PUBS_KEY: identifies your website when communicating with Stripe (for testing)
LIVE_SECRET_KEY: used for sending payments and information to Stripe (for live)
LIVE_PUBS_KEY: identifies your website when communicating with Stripe (for live)
API_VERSION: "2012-11-07" // this is the version for testing only
When you log in you will see Documentation at the top. Click the documentation and they will give you a step-by-step tutorial on how to create a form, how to create a subscription, how to handle errors, and much more.
To check whether your script is executing and connecting to Stripe, click FULL API REFERENCE and then choose Python. On that page you will see the information you have sent and any errors you have encountered.
What I really like is that if Stripe detects an error, the system will point it out and give you a solution. The solution is on the left side and the information sent is on the right side.
Stripe is divided into two worlds: test mode and live mode. In test mode, you can create new customers, add new invoices, set up your subscriptions, and much more. Whatever you do in test mode works the same way once your Stripe account is live.
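For example, here is a minimal test-mode sketch using the current stripe Python package (note this is newer than the 2012 API version mentioned above; the key, email, and price id are placeholders):

import stripe

stripe.api_key = "sk_test_..."  # test secret key, placeholder

# create a customer and attach a subscription, all in test mode
customer = stripe.Customer.create(email="test@example.com")
subscription = stripe.Subscription.create(
    customer=customer.id,
    items=[{"price": "price_123"}],  # placeholder price id
)
print(subscription.status)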
I really love that Stripe provides logs for the webhooks; however, it is difficult to view the error responses from them, so I set up a script using the Requests library. First, I went to the Stripe dashboard and copied one of the requests they were sending.
Events & Webhooks --> click on one of the requests --> copy the entire request
import requests

data = """ PASTE COPIED JSON REQUEST HERE """

# insert the appropriate url/endpoint below
res = requests.post("http://localhost:8000/stripe_hook/", data=data).text

with open("hook_result.html", "w") as output:
    output.write(res)
Now I can open hook_result.html and see any Django errors that may have come up (given DEBUG=True in Django).
In django-stripe-payments I have a test suite that, while far from comprehensive, is meant to be a start at getting there. What I do is copy a real webhook's data, scrub it of sensitive data, and add it as data to the test.
Testing Stripe webhooks is a pain. I don't use Django, so my answer will be more general.
My PHP webhook handler parses the webhook data and dispatches handler functions accordingly. In my handler class, I set up class properties with legitimate data for all the ids that the test webhooks mangle. Then I have a condition in each of my handler functions that tests for livemode; if false, I replace the mangled ids with legitimate test ids.
I also have another class property called $fakeLivemode, which I set to true when I'm testing. This allows me to force the code to process as though in live mode.
So, for example, when testing the customer.subscription.updated event, the plan id and customer id get botched. So in that handler I would do this:
if ($event->livemode === true || $this->fakeLivemode)
{
    if ($this->fakeLivemode)
    {
        // override botched data returned by test webhook
        $event->data->object->plan->id = $this->testPlanId;
        $event->data->object->customer = $this->testCustomerId;
    }

    // process webhook
}
Does that help?