I'm not strong in programming, but I've been given a task: write a Lambda function that runs once a day, retrieves the last 24 hours of logs, filters out the log entries marked as errors, and sends them to email or Slack.
I created an SNS topic with email delivery, but I don't understand how to use the Lambda function to extract and filter the logs.
https://github.com/EvanErickson/aws-lambda-parse-cloudwatch-logs-send-email/blob/main/index.py
I found an example on the Internet, but I can't understand how it works or where to enter the log group name.
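For orientation, here is a minimal sketch of what such a handler can look like, using boto3's filter_log_events to pull the last 24 hours of events matching an "ERROR" pattern and publish them to an SNS topic. The log group name, topic ARN, and filter pattern below are placeholders to replace with your own:

import os
import time

import boto3

# Placeholders -- substitute your own log group and SNS topic ARN.
LOG_GROUP = os.environ.get("LOG_GROUP", "/aws/lambda/my-app")
SNS_TOPIC_ARN = os.environ.get("SNS_TOPIC_ARN", "arn:aws:sns:eu-west-1:123456789012:errors")

logs = boto3.client("logs")
sns = boto3.client("sns")

def handler(event, context):
    now = int(time.time() * 1000)            # CloudWatch timestamps are epoch millis
    one_day_ago = now - 24 * 60 * 60 * 1000

    # Fetch only the events whose message matches the filter pattern.
    events = []
    kwargs = {
        "logGroupName": LOG_GROUP,
        "startTime": one_day_ago,
        "endTime": now,
        "filterPattern": "ERROR",
    }
    while True:
        resp = logs.filter_log_events(**kwargs)
        events.extend(resp["events"])
        if "nextToken" not in resp:
            break
        kwargs["nextToken"] = resp["nextToken"]

    if events:
        body = "\n".join(e["message"] for e in events)
        # Crude truncation to stay under SNS's 256 KB message limit.
        sns.publish(TopicArn=SNS_TOPIC_ARN,
                    Subject="Daily error report",
                    Message=body[:250000])
    return {"errors_found": len(events)}

Schedule the function with an EventBridge (CloudWatch Events) rule that fires once a day, and the SNS topic you already created takes care of the email delivery.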
I have a question about a Microsoft Teams Python bot. If the bot has been added to some personal and group chats and I restart the bot, sometimes it needs to be added to the chats again. So I want to make bot sessions.
Is it possible to make a bot session in Microsoft Teams? I want to store session information on the local disk, and then have the bot load that data when it starts.
My bot code is very similar to this sample.
Thank you for your help.
Updated:
Like I said, my bot code is very similar to this sample, but a bit different, so I created an example for this question. First of all, I create a bot in Azure and set it up.
After this, in my bot's config.py file I set up the port and Microsoft app ID and password (generated by clicking the "Manage" button).
import os


""" Bot Configuration """


class DefaultConfig:
    """ Bot Configuration """

    PORT = 3978
    APP_ID = os.environ.get("MicrosoftAppId", "sadsadsadasd")
    APP_PASSWORD = os.environ.get("MicrosoftAppPassword", "asdasdasdasdasd")
After this I execute the command ngrok http 3978 and put the generated endpoint in the Azure bot configuration. To register the bot as an application, I use App Studio in Teams. After that, I just need to run the bot from CMD, so I run a command like python run.py.
After I run the bot, I can add it to a channel and run the commands and functions that I created in the code.
This is just an example of how I set up the bot. The main bot is on a Linux server.
Here is why I want the bot to keep session information and load it after the server or bot is restarted: sometimes after I restart the bot or the server, it is no longer in the chat or team. In the future I want to make some commands and execute them using a cron job or something like that.
If the bot disappears from a chat, then I can't use bot commands in that chat. For example, I add two bots to a chat. After I restart one of them, I can't get any response from it, like in the picture below.
And when I type @ I can't see the bot.
I have an idea. After I add the bot to the chat, I get this in the console:
Adding new conversation to the list: {'additional_properties': {}, 'activity_id': '123215513', 'user': <botbuilder.schema._models_py3.ChannelAccount object at 0x0000027C0ED60>, 'bot': <botbuilder.schema._models_py3.ChannelAccount object at 0x0000027Cs2FD0>, 'conversation': <botbuilder.schema._models_py3.ConversationAccount object at 0x0000027C0400>, 'channel_id': 'msteams', 'locale': 'en-US', 'service_url': 'https://smba.trafficmanager.net/emea/'}
Formatted:
{
    "additional_properties": {},
    "activity_id": "123215513",
    "user": <botbuilder.schema._models_py3.ChannelAccount object at 0x0000027C0ED60>,
    "bot": <botbuilder.schema._models_py3.ChannelAccount object at 0x0000027Cs2FD0>,
    "conversation": <botbuilder.schema._models_py3.ConversationAccount object at 0x0000027C0400>,
    "channel_id": "msteams",
    "locale": "en-US",
    "service_url": "https://smba.trafficmanager.net/emea/"
}
So if I store this information and then load it when I start the bot, maybe it will work?
There is an option inside a bot to save a transcript of the conversation, but that's kind of unrelated. Basically, the important thing to know is that you don't need to store anything on your side: from the user's perspective, the entire conversation history is preserved in the Teams client, and from your bot's perspective, storing the entire conversation history doesn't really gain you anything. User state is more relevant than conversation history.
User state means storing an object in a persistence layer you choose (e.g. a database, NoSQL store, Azure blobs, whatever), containing whatever properties make sense to store for your app, in a kind of "User" collection. This is definitely a possible and often necessary concept, and this link will be useful for you: https://learn.microsoft.com/en-us/azure/bot-service/bot-builder-concept-state?view=azure-bot-service-4.0
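To make that concrete, here is a minimal sketch of user state with the Python botbuilder SDK. MemoryStorage and the "UserProfile" property name are illustrative; in production you would swap in a durable store such as Cosmos DB or Azure Blob storage:

from botbuilder.core import ActivityHandler, MemoryStorage, TurnContext, UserState

class StatefulBot(ActivityHandler):
    def __init__(self, user_state: UserState):
        self.user_state = user_state
        self.profile_accessor = user_state.create_property("UserProfile")

    async def on_message_activity(self, turn_context: TurnContext):
        # Load (or initialize) this user's stored state.
        profile = await self.profile_accessor.get(turn_context, lambda: {})
        profile["last_message"] = turn_context.activity.text
        await turn_context.send_activity(f"Stored: {profile['last_message']}")

    async def on_turn(self, turn_context: TurnContext):
        await super().on_turn(turn_context)
        # Persist any state changes made during this turn.
        await self.user_state.save_changes(turn_context)

storage = MemoryStorage()  # testing only; not durable across restarts
bot = StatefulBot(UserState(storage))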
It's important to know, though, that this is a separate idea from sending a message to a user on a defined schedule (e.g. via cron). For this to work, you need to read up on a concept called "proactive messaging". I have a sample in the Teams PnP gallery specifically dealing with it (code only in Node and .NET, no Python I'm afraid, but hopefully it's useful for you). See here for that. Note that at the bottom of this link is a list of further reading on the topic as well.
Where the two above ideas come together is that you need to store certain state about the user to be able to send the proactive message later. In the sample I link to, I show how to get the settings you need to send the proactive message, but I've not included the concept of saving them to a data store - that's up to your own implementation inside your bot (e.g. SQL Azure, MongoDb, blob, whatever).
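As a rough Python sketch of how those pieces fit together (assuming a recent botbuilder version; the file-based persistence and the credentials are placeholders, and a real app would use a proper data store):

import json

from botbuilder.core import BotFrameworkAdapter, BotFrameworkAdapterSettings, TurnContext
from botbuilder.schema import ConversationReference

APP_ID = "your-app-id"            # placeholder credentials
APP_PASSWORD = "your-password"

adapter = BotFrameworkAdapter(BotFrameworkAdapterSettings(APP_ID, APP_PASSWORD))

# 1) While handling a normal message, capture the reference and persist it.
def save_reference(turn_context: TurnContext):
    ref = TurnContext.get_conversation_reference(turn_context.activity)
    with open("conversation_reference.json", "w") as f:
        json.dump(ref.serialize(), f)

# 2) Later -- e.g. from a cron-triggered script -- load it and send proactively.
async def send_proactive(message: str):
    with open("conversation_reference.json") as f:
        ref = ConversationReference.deserialize(json.load(f))

    async def callback(turn_context: TurnContext):
        await turn_context.send_activity(message)

    await adapter.continue_conversation(ref, callback, bot_id=APP_ID)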
Also important to note (and I think part of the confusion and in fact part of why I wrote the sample) - your proactive code does not need to live in the same set of code as your bot! Your bot could be a web service running somewhere, and your proactive code an Azure Function/Lambda/similar.
Like Hilton said, it's a bad idea to try and save bot state to the local disk. Also, there's no need for that. Both Hilton and I have linked to documentation that should help you understand how bot state is meant to be saved.
I cannot reproduce the problem you're encountering where the bot gets removed from Teams chats. The problem sounds impossible anyway, based on how Teams works and how bots work. Teams should have no way of knowing whether your bot is started or stopped. It's possible that your server is set up to manually uninstall the bot from Teams conversations based on when the bot starts and stops, but that would still be very strange. I'm willing to continue troubleshooting this with you, but I thought I'd post an answer now in case you'd like to award your bounty to someone before it expires.
Every day, a sender "sender@sender.com" sends me a message with a number inside.
I need to save this number every day.
I want to write a Python script using the Gmail API to get the data from the last mail from this sender, and then parse it.
I followed the Gmail API "Quickstart Guide": here
I also checked the page about Users.messages: here
However, I don't understand how to tie all of this together to get the data.
Could someone explain the process to me?
If you were able to complete the Gmail API quickstart, then you already have a GCP project and credentials, and you have authorized some Gmail API scopes for your app.
The above is the first step (being able to authenticate and be allowed to make requests for the API scope you need).
Since you need to pass a message's id as a parameter to Users.messages.get, you first need to retrieve it, for example by listing messages.
So the next step is to make a request to Users.messages.list to list all messages from a user.
You could use the query (q) parameter to filter the messages by sender, like q="from:someuser@example.com is:unread".
This will return a list of messages from someuser@example.com that are unread.
Try things out in the API Explorer sidebar of the documentation until you have defined the request the way you want, and then implement it in your app.
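Putting the two calls together, a minimal sketch might look like this (assuming creds obtained from the quickstart; the sender address and the regex that pulls the number out of the snippet are illustrative):

import re

from googleapiclient.discovery import build

def get_latest_number(creds):
    service = build("gmail", "v1", credentials=creds)

    # List the most recent message from the sender (results come back newest first).
    resp = service.users().messages().list(
        userId="me", q="from:sender@sender.com", maxResults=1
    ).execute()
    messages = resp.get("messages", [])
    if not messages:
        return None

    # Fetch the full message by its id, then parse the number out of the snippet.
    msg = service.users().messages().get(
        userId="me", id=messages[0]["id"]
    ).execute()
    match = re.search(r"\d+", msg.get("snippet", ""))
    return match.group() if match else None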
As aerials said.
service.users().messages().list(userId='me', q="<parameters>").execute()
The above code fulfills the exact same function as typing a search request on the Gmail website. You don't actually have to worry about labels or anything if you are operating at a small scale; just follow the same syntax as the Gmail search bar.
However, I am not sure about the usage quotas on the q parameter for list. At a larger scale it may be more expensive to use the q parameter than the other API methods.
This is more of a design question on what to use within Google Cloud's infrastructure to obtain the results from a Python script.
Take the following scenario: we have over 60 projects and one central project for Stackdriver logging and the like.
It is from this central project I want to run a Python script (using Cloud Scheduler which then triggers the Cloud Function) to obtain a list of disks that haven't had their snapshot taken in the past 24 hours, those that aren't assigned to a snapshot schedule, and the snapshot schedules that have names that do not match our naming convention. I have the script already prepared, and it works very well from my workstation (producing a list of dictionaries of the desired results per project).
However, my question is: where should I send the results to? And how could I then have an email sent out to the appropriate people to action it?
I've played about with sending the object attributes to Pub/Sub within the central project, but this requires me to manually pull the messages, and I can't see any way of scheduling the pull request. I also don't see an option for sending an email from Pub/Sub to an email address, so the only option seems to be to create an email Cloud Function that is triggered whenever one of the subscriptions receives a new message from the first Cloud Function containing the original script.
I suppose I could simply set this up on one of our Windows VM instances and convert the script to PowerShell, but I was rather hoping to keep it out of a VM if at all possible.
Has anyone done this before? And if so, what did you use to get the desired results?
I think you can use the SendGrid API to send emails from your Cloud Function. It's very easy to set up, it has a free plan which includes 12,000 emails per month, and it has an API for Python :D.
You can signup using the Google Cloud Marketplace selecting the free plan.
Then create an API key for your code here. If you only need to send mails, I suggest selecting the Restricted Access option and, for Mail Send, giving Full Access or whatever level you think will work for you.
Here's a code snippet for you:
import logging

from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail, Email
from python_http_client.exceptions import HTTPError


def send_mail(request):
    log = logging.getLogger(__name__)

    SENDGRID_API_KEY = 'SG.blahblahblah'
    sg = SendGridAPIClient(SENDGRID_API_KEY)

    """
    Maybe here goes the code you use to check what you need
    """

    APP_NAME = "Testing"

    html_content = """
    Here goes your mail body in HTML format
    """

    message = Mail(
        to_emails="dest@a.domain.com",
        from_email=Email('sender@your.domain.com', "Your name or your app name"),
        subject="Warning!!!!",
        html_content=html_content,
    )

    try:
        response = sg.send(message)
        log.info(f"email.status_code={response.status_code}")
        return 'Your mail was sent!'
    except HTTPError as e:
        log.error(e)
And don't forget to add the sendgrid lib to your requirements.txt file:
# Function dependencies, for example:
# package>=version
sendgrid
Hope this can help you.
A report is posted every 5 hours to a Slack channel, from which we need to sort/filter some information and put it into a file.
So, is there any way to read the channel continuously, or to run some command every 5 minutes or so around that time, and capture the report for future processing?
Yes, that is possible. Here is the basic outline of a solution:
Create a Slack app based on a script (e.g. in Python) that has access to that channel's history (e.g. has the channels:history permission scope)
Use cron to call your script at the needed time
The script reads the channel's history (e.g. with channels.history for public channels), filters out what it needs, and then stores the report as a file.
Another approach would be to continuously read every new message from the channel, parse for a trigger (e.g. a specific user that sends it, or the name of the report), and then filter and save the report when it appears. If you can identify a reliable trigger, this is in my experience the more stable solution, since scheduled reports can be delayed.
For that approach, use Slack's Events API instead of cron and subscribe to receiving messages (e.g. the message event for public channels). Slack will then automatically send each new message to your script as soon as it is posted.
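A minimal sketch of that event-driven approach, using the python-slack-events-api package (the signing secret, trigger string, and port are placeholders):

from slackeventsapi import SlackEventAdapter

slack_events = SlackEventAdapter("your-signing-secret", "/slack/events")

@slack_events.on("message")
def handle_message(event_data):
    message = event_data["event"]
    text = message.get("text") or ""
    # Look for the trigger, e.g. the report's name.
    if "Daily report" in text:
        with open("report.txt", "w") as f:
            f.write(text)

slack_events.start(port=3000)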
If you are new to creating Slack apps, I would advise studying the excellent official documentation and tutorials on the Slack API site to get started.
A Python example to this approach could be found here: https://gist.github.com/demmer/617afb2575c445ba25afc432eb37583b
This script counts the number of messages per user.
Based on this code I created the following example for you:
from slackclient import SlackClient

sc = SlackClient("xoxp-your-token")  # placeholder token
channel_name = "general"             # placeholder channel name
channel_id = None

# get the correct channel id
channels = sc.api_call("channels.list")
for channel in channels['channels']:
    if channel['name'] == channel_name:
        channel_id = channel['id']
        break

if channel_id is None:
    raise Exception("cannot find channel " + channel_name)

# get the history as follows:
history = sc.api_call("channels.history", channel=channel_id)

# get all the messages from the history:
messages = history['messages']

# or pick one by index, e.g. the first (most recent) message:
first_message = messages[0]
I'm looking into a possible feature for my little to-do application... I like the idea that I can send an email, containing a to-do task I need to complete, to a particular email address, and this will be read by my web application and put in the database. So, when I log into my application, the to-do task I emailed will be there as an entry in the app.
Is this possible? I have a slice with SliceHost (basically a dedicated server), so I have total control over what to install, etc. I'm using Python/Django/MySQL for this.
Any ideas on what steps to take to make this happen?
If I were to implement this, I'd use a scheduler and a job to be scheduled.
That job would connect to the mail server (be it POP3 or IMAP) and parse the messages not yet read by the job. Based on that, I would insert the records.
You'd get two types of records that way: a list of mail message ids that have already been processed (so you don't reprocess mails), and a list of tasks.
The disadvantage is that it takes some time before you see the task, since the job only executes every X minutes or seconds.
If that is not good enough, I'd go for a permanent IMAP connection, but then you'd have to implement more error handling; you don't just retry automatically every X minutes.
Googling for Django +scheduler will get you started.
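A minimal sketch of such a polling job, assuming an IMAP mailbox and a Django Task model with a title field (the host, credentials, and model are illustrative):

import email
import imaplib

from myapp.models import Task  # hypothetical Django model

def poll_mailbox():
    imap = imaplib.IMAP4_SSL("imap.example.com")
    imap.login("todo@example.com", "password")
    imap.select("INBOX")

    # Only fetch messages this job has not yet processed.
    _, data = imap.search(None, "UNSEEN")
    for num in data[0].split():
        _, msg_data = imap.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        # Fetching with RFC822 sets the \Seen flag, which here stands in
        # for the "already processed" list mentioned above.
        Task.objects.create(title=msg["Subject"])

    imap.logout()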
Also have a look at this StackOverflow thread; no need to reinvent the wheel :)
I needed the exact same thing. I use the Lamson project (which is written in Python) to transform email, forward email based on rules to my www.evernote.com and Thinking Rock (www.trgtd.com.au) accounts, update firewall web-filtering rules, update allow/deny lists for my spam filter, read and write databases, etc.
I like to think of it as email server automation and email application development.
www.lamsonproject.org
Troy
One way that I've solved this in the past was using qmail's .qmail files (docs).
Basically you set up qmail and point your email address (for ease of use, let's assume proc@whatever.com is your email address) to your home directory. In that directory you set up a .qmail-proc file to handle the mail.
This allows you to use a full-fledged SMTP server on your server, including spam filtering, forwarding, aliases, all that fun stuff. You can then pipe the data from an email into an application. In your case, I would suggest making a Management Command in Django to process the email (I'll call it proc_email). Thus your .qmail-proc may look like:
/var/spool/mail/proc
| /www/django/myproject/manage.py proc_email
This stores a copy of the email in /var/spool/mail/proc, then passes the email to the script in the second line. The email itself is passed to proc_email via sys.stdin. Simply read the email from there, and store it through your Django Models.
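A hedged sketch of what that proc_email management command might look like (the app and Todo model names are assumptions):

import email
import sys

from django.core.management.base import BaseCommand

from myapp.models import Todo  # hypothetical app and model

class Command(BaseCommand):
    help = "Read an email from stdin and store it as a to-do item"

    def handle(self, *args, **options):
        # qmail pipes the raw message to us on stdin.
        msg = email.message_from_file(sys.stdin)
        Todo.objects.create(title=msg["Subject"])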
If you need to process email for different addresses later, you can also set up aliases which point to your home directory, and use .qmail-<username> files for each alias. Allowing you to pass other flags (such as the username for each alias) to proc_email if needed.
I should note that this isn't the simplest solution, but it can scale, and it's pretty darn bulletproof.
I would not focus on Django for this.
I would create a mail server to catch these emails. Use http://docs.python.org/library/smtpd.html.
I would then use just the Django ORM to update the database based on the emails received.
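A minimal sketch of that idea (note that smtpd is deprecated and was removed in Python 3.12; aiosmtpd is the modern replacement). The port and the Todo model are assumptions:

import asyncore
import email
import smtpd

class TodoSMTPServer(smtpd.SMTPServer):
    def process_message(self, peer, mailfrom, rcpttos, data, **kwargs):
        # data is the raw message; parse it, then store it via the Django ORM.
        msg = email.message_from_bytes(data)
        # Todo.objects.create(title=msg["Subject"])  # hypothetical model
        print("received:", msg["Subject"])

# Listen on a local port; point your MTA (or your tests) at it.
server = TodoSMTPServer(("127.0.0.1", 2525), None)
asyncore.loop()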