Send push notifications to many users via Python

My project uses Firebase Cloud Messaging to send push notifications. I have the users' Firebase tokens stored in the database and use them to send a push to each user. The total sending time is about 100 seconds for 100 users. Is there a way to send the pushes asynchronously (i.e. send many push notifications at one time)?
# Code works synchronously
for user in users:
    message = messaging.Message(
        notification=messaging.Notification(
            title="Push title",
            body="Push body"
        ),
        token=user['fcmToken']
    )
    response = messaging.send(message)

Sure, you could use one of the Python concurrency libraries. Here's one option:
from concurrent.futures import ThreadPoolExecutor, wait, ALL_COMPLETED

def send_message(user):
    message = messaging.Message(
        notification=messaging.Notification(
            title="Push title",
            body="Push body"),
        token=user['fcmToken'])
    return messaging.send(message)

with ThreadPoolExecutor(max_workers=10) as executor:  # may want to try more workers
    future_list = []
    for u in users:
        future_list.append(executor.submit(send_message, u))
    wait(future_list, return_when=ALL_COMPLETED)
    # collect the results (message IDs); result() re-raises any send errors
    print([future.result() for future in future_list])
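
Because result() re-raises any exception thrown inside send_message, a single invalid token would abort that final list comprehension. A small variant (same names as above) that collects failures instead of raising:

results, errors = [], []
for future in future_list:
    try:
        # result() returns whatever messaging.send() returned (the message ID),
        # or re-raises the exception from the worker thread
        results.append(future.result())
    except Exception as exc:
        errors.append(exc)
print("sent: {}, failed: {}".format(len(results), len(errors)))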

If you want to send the same message to all tokens, you can use a single API call with a multicast message. The GitHub repo has this sample of sending a multicast message in Python:
def send_multicast():
    # [START send_multicast]
    # Create a list containing up to 500 registration tokens.
    # These registration tokens come from the client FCM SDKs.
    registration_tokens = [
        'YOUR_REGISTRATION_TOKEN_1',
        # ...
        'YOUR_REGISTRATION_TOKEN_N',
    ]
    message = messaging.MulticastMessage(
        data={'score': '850', 'time': '2:45'},
        tokens=registration_tokens,
    )
    response = messaging.send_multicast(message)
    # See the BatchResponse reference documentation
    # for the contents of response.
    print('{0} messages were sent successfully'.format(response.success_count))
    # [END send_multicast]
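
The returned BatchResponse also tells you which individual tokens failed. A minimal sketch (check the BatchResponse/SendResponse reference for your firebase-admin version):

# 'response' is the BatchResponse returned by messaging.send_multicast(message)
if response.failure_count > 0:
    failed_tokens = []
    for idx, send_response in enumerate(response.responses):
        if not send_response.success:
            # responses are in the same order as the input tokens
            failed_tokens.append(registration_tokens[idx])
    print('Tokens that caused failures: {0}'.format(failed_tokens))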

Related

How to process a response from the Twilio REST API

I am developing a program to help treat depression. I do not have a deep understanding of Twilio. I would like to collect the responses to this message:
Sent from your Twilio trial account - What are the positive results or outcomes you have achieved lately?
What are the strengths and resources you have available to you to get even more results and were likely the reason you got the results in the first question.
What are your current priorities? What do you and your team need to be focused on right now?
What are the benefits to all involved-you
your team and all other stakeholders who will be impacted by achieving your priority focus.
How can we (you and/or your team) move close? What action steps are needed?
What am I going to do today?
What am I doing tomorrow ?
What did I do yesterday?
and process them as items 1-9 (the responses will be numbered 1-9).
I've contacted Twilio support and I read these docs https://www.twilio.com/docs/sms/tutorials/how-to-receive-and-reply-python.
Here is what I tried:
# Download the helper library from https://www.twilio.com/docs/python/install
import os
from twilio.rest import Client
import logging
import csv
import psycopg2
from flask import Flask, request, redirect
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s - %(message)s')

# Set environment variables
DATABASE = os.environ["DATABASE"]
PASSWORD = os.environ["PASSWORD"]
PORT = os.environ["PORT"]
USER = os.environ["USER"]
HOST = os.environ["HOST"]

# initialization TODO: move into env vars
MY_PHONE_NUMBER = os.environ["MY_PHONE_NUMBER"]
TWILIO_PHONE_NUMBER = os.environ["TWILIO_PHONE_NUMBER"]
TWILIO_ACCOUNT_SID = os.environ["TWILIO_ACCOUNT_SID"]
TWILIO_AUTH_TOKEN = os.environ["TWILIO_AUTH_TOKEN"]

# Configure Twilio
# Set environment variables for your credentials
# Read more at http://twil.io/secure
client = Client(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN)
logging.debug(f"Connected to Twilio using MY_PHONE_NUMBER:{MY_PHONE_NUMBER}, TWILIO_PHONE_NUMBER:{TWILIO_PHONE_NUMBER}")

# Establish db connection
# use psycopg2 to connect to the db and create a table
conn = psycopg2.connect(
    database=DATABASE, user=USER, password=PASSWORD, host=HOST, port=PORT)
conn.autocommit = True
cursor = conn.cursor()

# Step 1: Set up frequency, i.e. times to send messages
# Step 2: Load questions
questionsFile = open('questions.csv')
questions = csv.reader(questionsFile)
logging.debug(f"questions: {questions}")
message = "\n".join([question for row in questions for question in row])
logging.debug(f"message: {message}")

# Step 3: Send questions
# message = client.messages.create(
#     body=message,
#     from_=TWILIO_PHONE_NUMBER,
#     to=MY_PHONE_NUMBER
# )

# Step 4: Collect responses
@app.route("/sms", methods=['GET', 'POST'])
def incoming_sms():
    """Send a dynamic reply to an incoming text message"""
    # Get the message the user sent our Twilio number
    body = request.values.get('Body', None)
    # Start our TwiML response
    resp = MessagingResponse()
    # Determine the right reply for this message
    if body == 'hello':
        resp.message("Hi!")
    elif body == 'bye':
        resp.message("Goodbye")
    return str(resp)

if __name__ == "__main__":
    app.run(debug=True)

# Step 5: Create a database table and save responses in the db
logging.debug('Step 5: creating table responses')
# TODO: create 10 columns for saving responses (each response contains 10 answers)
sql = 'CREATE TABLE IF NOT EXISTS public.responses'  # TODO: add column definitions
logging.debug('CREATE TABLE IF NOT EXISTS public.responses')
# cursor.execute(sql)
# conn.commit()

# Next steps:
# 1. Process positive and negative sentiment from responses
# 2. Calculate total positive sentiment
# 3. Calculate total negative sentiment
# 4. Plot positive sentiment vs. negative sentiment
The documentation doesn't provide a clear path for completing step 4.
Expected:
Processed responses to the questions.
Actual:
Sent from your Twilio trial account - What are the positive results or outcomes you have achieved lately?
What are the strengths and resources you have available to you to get even more results and were likely the reason you got the results in the first question.
What are your current priorities? What do you and your team need to be focused on right now?
What are the benefits to all involved-you
your team and all other stakeholders who will be impacted by achieving your priority focus.
How can we (you and/or your team) move close? What action steps are needed?
What am I going to do today?
What am I doing tomorrow ?
What did I do yesterday?
I added a Twilio flow to test the connection to Twilio. This could go in two directions: just Python, or using a Twilio flow. The flow does not work even after adding the correct number.
This comes from the demo video, which does not match the current UI: https://www.youtube.com/watch?v=VRxirse1UfQ.
There's a second code sample on the page you linked. It explains how you can parse the body of the incoming message to read the payload with request.values.get('Body', None):
@app.route("/sms", methods=['GET', 'POST'])
def incoming_sms():
    """Send a dynamic reply to an incoming text message"""
    # Get the message the user sent our Twilio number
    body = request.values.get('Body', None)
    # Start our TwiML response
    resp = MessagingResponse()
    # Determine the right reply for this message
    if body == 'hello':
        resp.message("Hi!")
    elif body == 'bye':
        resp.message("Goodbye")
    return str(resp)
However, it makes sense to ask the questions one after another, and this means you would need to handle many different endpoints or track state yourself; it can get complicated really fast.
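If you do want to stay in plain Python, one way to avoid many endpoints is to track per-sender state in a single /sms webhook. A rough, illustrative sketch: the QUESTIONS list and the in-memory progress dict are assumptions (in the real app you would load the questions from questions.csv and persist answers in your Postgres table):

from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

# illustrative only: load these from questions.csv in the real app
QUESTIONS = [
    "What are the positive results or outcomes you have achieved lately?",
    "What are your current priorities?",
    "What did I do yesterday?",
]
# illustrative only: phone number -> index of the next question to ask;
# in production this state belongs in the database, not in memory
progress = {}

@app.route("/sms", methods=["POST"])
def incoming_sms():
    sender = request.values.get("From", "")
    body = request.values.get("Body", "")
    idx = progress.get(sender, 0)
    # body is the reply to the previously asked question (idx - 1);
    # this is where you would insert it into the responses table
    resp = MessagingResponse()
    if idx < len(QUESTIONS):
        resp.message(QUESTIONS[idx])
        progress[sender] = idx + 1
    else:
        resp.message("Thanks, that was the last question.")
    return str(resp)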
But there's also a visual editor that allows you to define this within a few minutes in your browser: Build a Chatbot with Twilio Studio

SendGrid Random Duplicate Emails (Python Flask Heroku)

I'm currently working on an app that updates artists with the times they are booked for events. Randomly, the app will send duplicate emails a few times a day; over 90% of the time there are no duplicates.
Currently, there are 10+ emails that can be produced, but there is only one email send function. The duplicates occur across all of the emails, which makes me think there is an issue with the email send function or with how the web server is configured to make requests to SendGrid. Please help me find the cause of the duplicates!
Stack:
Python (v3.10.6)
Flask (v2.2.2)
Heroku (buildstack-22)
Sendgrid python library (v6.9.7)
Email Send Function:
import base64
import os

from flask import current_app, render_template
from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import (
    Mail, From, Attachment, FileContent, FileName, FileType, Disposition)

def send_email(to, subject, template, cc='None', attachment_location='None', attachment_name='None', private_email=False, **kwargs):
    ## Remove Guest DJ Emails Here
    guest_dj_list = get_list_of_guest_djs()
    if to in guest_dj_list:
        return None

    message = Mail(
        from_email=From('test@test.com', current_app.config['MAIL_ALIAS']),
        to_emails=[to],
        subject=subject,
        html_content=render_template(template, **kwargs))

    ## cc management
    if private_email == False:
        cc_list = ['test@test.com']
        if cc != 'None':
            cc_list.append(cc)
        message.cc = cc_list

    if attachment_location != 'None':
        with open(attachment_location, 'rb') as f:
            data = f.read()
        encoded_file = base64.b64encode(data).decode()
        attachedFile = Attachment(
            FileContent(encoded_file),
            FileName(attachment_name),
            FileType('application/xlsx'),
            Disposition('attachment')
        )
        message.attachment = attachedFile

    try:
        sg = SendGridAPIClient(os.environ.get('SENDGRID_API_KEY'))
        response = sg.send(message)
        # print(response.status_code)
        # print(response.body)
        # print(response.headers)
        print(f'Complete: Email Sent to {to}')
    except Exception as e:
        print(e)
Heroku Procfile
web: gunicorn test_app.app:app --preload --log-level=debug
According to your comments, the emails are triggered by user actions. From the code you have shared there is nothing that would cause an email to send twice, so my guess is that users are causing the occasional double sending by submitting forms more than once.
To discover the root of this, I would first attempt to disable your forms after the first submit. You can do this with a bit of JavaScript, something like:
document.querySelectorAll('form').forEach(form => {
    form.addEventListener('submit', (e) => {
        // Prevent if already submitting
        if (form.classList.contains('is-submitting')) {
            e.preventDefault();
        }
        // Add class to hook our visual indicator on
        form.classList.add('is-submitting');
    });
});
This comes from this article which has a good discussion of the issue too.
Once you have done that, you should likely also log the various actions that cause an email to send and try to chase back through your system to find what could be causing double submissions. I would pay attention to other things like callbacks or background jobs that may be the culprit here too.
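For that logging, a lightweight option is to write one audit line per attempted send, right at the top of send_email, so any duplicate shows up in the Heroku logs next to the request that triggered it. A minimal sketch (logger name and fields are just an example):

import logging
from datetime import datetime, timezone

email_audit = logging.getLogger("email_audit")

def log_email_attempt(to, subject, template):
    # one line per attempted send; duplicates appear as two entries
    # with the same recipient/template a few seconds apart
    email_audit.info(
        "send_email at=%s to=%s subject=%r template=%s",
        datetime.now(timezone.utc).isoformat(), to, subject, template,
    )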

Python grpc - reading in all messages before sending responses

I'm trying to understand whether a gRPC server using streams is able to wait for all client messages to be read in before sending its responses.
I have a trivial application where I send in several numbers I'd like to add and return.
I've set up a basic proto file to test this:
syntax = "proto3";

message CalculateRequest {
    int64 x = 1;
    int64 y = 2;
}

message CalculateReply {
    int64 result = 1;
}

service Svc {
    rpc CalculateStream (stream CalculateRequest) returns (stream CalculateReply);
}
On the server side I have implemented the following code, which returns the reply message as each request is received:
import logging
from concurrent import futures

import grpc

import contracts_pb2
import contracts_pb2_grpc

class CalculatorServicer(contracts_pb2_grpc.SvcServicer):
    def CalculateStream(self, request_iterator, context):
        for request in request_iterator:
            resultToOutput = request.x + request.y
            yield contracts_pb2.CalculateReply(result=resultToOutput)

def serve():
    server = grpc.server(futures.ThreadPoolExecutor(max_workers=10))
    contracts_pb2_grpc.add_SvcServicer_to_server(
        CalculatorServicer(), server)
    server.add_insecure_port('localhost:9000')
    server.start()
    server.wait_for_termination()

if __name__ == '__main__':
    print("We're up")
    logging.basicConfig()
    serve()
I'd like to tweak this to first read in all the numbers and then send these out at a later stage - something like the following:
class CalculatorServicer(contracts_pb2_grpc.SvcServicer):
    def CalculateStream(self, request_iterator, context):
        listToReturn = []
        for request in request_iterator:
            listToReturn.append(request.x + request.y)
        # ...
        # do some other stuff first before returning
        for item in listToReturn:
            yield contracts_pb2.CalculateReply(result=item)
Currently, my implementation to write out later doesn't work, as the code at the bottom is never reached. Is it by design that the connection seems to "close" before reaching that point?
The grpc.io website suggests that this should be possible with BiDirectional streaming:
for example, the server could wait to receive all the client messages before writing its responses, or it could alternately read a message then write a message, or some other combination of reads and writes.
Thanks in advance for any help :)
The issue here is the definition of "all client messages." At the transport level, the server has no way of knowing whether the client has finished independent of the client closing its connection.
You need to add to the protocol some indication that the client has finished sending requests: either add a bool field to the existing CalculateRequest, or add a top-level oneof with one of the options being something like a StopSendingRequests message.
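For instance, assuming you added a bool field such as done = 3; to CalculateRequest (the field name is hypothetical), the servicer could buffer results until it sees that flag and only then start streaming replies:

class CalculatorServicer(contracts_pb2_grpc.SvcServicer):
    def CalculateStream(self, request_iterator, context):
        buffered = []
        for request in request_iterator:
            if request.done:  # hypothetical terminator field added to the proto
                break
            buffered.append(request.x + request.y)
        # every "real" request has been read; now stream the replies back
        for result in buffered:
            yield contracts_pb2.CalculateReply(result=result)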

Pulling historical channel messages python

I am attempting to create a small dataset by pulling messages/responses from a Slack channel I am a part of. I would like to use Python to pull the data from the channel; however, I am having trouble figuring out my API key. I have created an app on Slack, but I am not sure how to find my API key. I see my client secret, signing secret, and verification token, but I can't find my API key.
Here is a basic example of what I believe I am trying to accomplish:
import slack

sc = slack.SlackClient("api key")
sc.api_call(
    "channels.history",
    channel="C0XXXXXX"
)
I am willing to just download the data manually if that is possible as well. Any help is greatly appreciated.
Below is example code for pulling messages from a channel in Python.
It uses the official Python Slack library and calls conversations_history with paging. It will therefore work with any type of channel and can fetch large amounts of messages if needed.
The result will be written to a file as a JSON array.
You can specify the channel and the maximum number of messages to retrieve.
Threads
Note that the conversations.history endpoint will not return thread messages. Those have to be retrieved additionally with one call to conversations.replies for every thread you want to retrieve messages for.
Threads can be identified in the messages for each channel by checking for the thread_ts property in the message. If it exists, there is a thread attached to that message. See this page for more details on how threads work.
IDs
This script will not replace IDs with names, though. If you need that, here are some pointers on how to implement it (a minimal sketch for the user-ID part follows the list):
You need to replace IDs for users, channels, bots, usergroups (if on a paid plan)
You can fetch the lists for users, channels and usergroups from the API with users_list, conversations_list and usergroups_list respectively, bots need to be fetched one by one with bots_info (if needed)
IDs occur in many places in messages:
user top level property
bot_id top level property
as links in any property that allows text, e.g. <@U12345678> for users or <#C1234567> for channels. Those can occur in the top-level text property, but also in attachments and blocks.
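Here is that sketch for the user-ID part: it builds an ID-to-name map from users_list and rewrites <@U…> mentions in a text property. Pagination is omitted for brevity, and channel, bot and usergroup IDs would follow the same pattern:

import os
import re
import slack

client = slack.WebClient(token=os.environ['SLACK_TOKEN'])

# build an ID -> display name map from users.list (single page for brevity)
user_names = {
    member['id']: member.get('real_name') or member['name']
    for member in client.users_list()['members']
}

def replace_user_ids(text):
    # turn "<@U12345678>" mentions into "@Jane Doe"
    return re.sub(
        r'<@(U[A-Z0-9]+)>',
        lambda m: '@' + user_names.get(m.group(1), m.group(1)),
        text,
    )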
Example code
import os
import slack
import json
from time import sleep

CHANNEL = "C12345678"
MESSAGES_PER_PAGE = 200
MAX_MESSAGES = 1000

# init web client
client = slack.WebClient(token=os.environ['SLACK_TOKEN'])

# get first page
page = 1
print("Retrieving page {}".format(page))
response = client.conversations_history(
    channel=CHANNEL,
    limit=MESSAGES_PER_PAGE,
)
assert response["ok"]
messages_all = response['messages']

# get additional pages if below max messages and if there are any
while len(messages_all) + MESSAGES_PER_PAGE <= MAX_MESSAGES and response['has_more']:
    page += 1
    print("Retrieving page {}".format(page))
    sleep(1)  # need to wait 1 sec before next call due to rate limits
    response = client.conversations_history(
        channel=CHANNEL,
        limit=MESSAGES_PER_PAGE,
        cursor=response['response_metadata']['next_cursor']
    )
    assert response["ok"]
    messages = response['messages']
    messages_all = messages_all + messages

print(
    "Fetched a total of {} messages from channel {}".format(
        len(messages_all),
        CHANNEL
    ))

# write the result to a file
with open('messages.json', 'w', encoding='utf-8') as f:
    json.dump(
        messages_all,
        f,
        sort_keys=True,
        indent=4,
        ensure_ascii=False
    )
This alternative uses the Slack Web API directly; you would need to install the requests package. It should grab all the messages in a channel. You need a token, which can be grabbed from the apps management page, and you can use the getChannels() function to look up channel IDs. Once you grab all the messages, you will need to see who wrote which message, which means doing ID matching (mapping IDs to usernames); you can use the getUsers() function for that. Follow https://api.slack.com/custom-integrations/legacy-tokens to generate a legacy token if you do not want to use a token from your app. A short usage example follows the functions below.
import requests

def getMessages(token, channelId):
    # this function gets all the messages from the given channel
    print("Getting Messages")
    slack_url = "https://slack.com/api/conversations.history?token=" + token + "&channel=" + channelId
    messages = requests.get(slack_url).json()
    return messages

def getChannels(token):
    '''
    returns an object containing a dictionary of all the
    channels in a given workspace, keyed by channel name
    '''
    channelsURL = "https://slack.com/api/conversations.list?token=%s" % token
    channelList = requests.get(channelsURL).json()["channels"]  # an array of channels
    channels = {}
    # putting the channels and their ids into a dictionary
    for channel in channelList:
        channels[channel["name"]] = channel["id"]
    return {"channels": channels}

def getUsers(token):
    # this function gets a list of users in the workspace, including bots
    channelsURL = "https://slack.com/api/users.list?token=%s&pretty=1" % token
    members = requests.get(channelsURL).json()["members"]
    return members
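
Here is that short usage example; the token value and the channel name "team-search" are placeholders you would replace with your own:

token = "xoxp-your-token-here"  # placeholder
channels = getChannels(token)["channels"]
channel_id = channels["team-search"]  # placeholder channel name
history = getMessages(token, channel_id)

# map user IDs to names so you can see who wrote each message
users = {member["id"]: member["name"] for member in getUsers(token)}
for msg in history.get("messages", []):
    author = users.get(msg.get("user"), msg.get("user"))
    print("{}: {}".format(author, msg.get("text", "")))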

SendGrid Sending Multiple Copies

I'm using SendGrid to send emails from my Python-based Heroku app. I'm okay with it taking 10 or so minutes to get to my inbox, but I'm receiving three copies of the message and I can't figure out why. Here is the relevant code:
import sendgrid
from sendgrid import SendGridError, SendGridClientError, SendGridServerError

sg = sendgrid.SendGridClient('xxx@heroku.com', 'xxx')

message = sendgrid.Mail()
message.add_to('John Doe <xxx@xxx.com>')
message.set_subject('Example')
message.set_html('Body')
message.set_text('Body')
message.set_from('Dark Knight <xxx@xxx.com>')
message.add_attachment('image.jpg', './image.jpg')
status, msg = sg.send(message)

@app.route('/test2')
def test2():
    sg.send(message)
    return "sent"
When I go to the relevant route I get 'sent' returned and the email is sent, but again, it sends three copies. I'm not sure why. Any help would be great.
Emails one and two:
status, msg = sg.send(message) would send two emails and then set status and msg to the response object.
Email three: after you load the route sg.send(message) sends the next email.
I suggest you use the SendGrid Mail Send API to send email; it's an efficient, fast way to send emails.
You are calling sg.send(message) three times in your code.
It is called twice here: status, msg = sg.send(message) - this will send one mail for status and log its response to that variable. It will then send again for msg and log its response to that variable as well.
Then when the user hits the /test2 the function is called again, making it three messages in total.
Here's how you might change it to log responses but just send the one message out:
import sendgrid
from sendgrid import SendGridError, SendGridClientError, SendGridServerError

sg = sendgrid.SendGridClient('xxx@heroku.com', 'xxx')

def sendMessage(options=None):
    message = sendgrid.Mail()
    message.add_to('John Doe <xxx@xxx.com>')
    message.set_subject('Example')
    message.set_html('Body')
    message.set_text('Body')
    message.set_from('Dark Knight <xxx@xxx.com>')
    message.add_attachment('image.jpg', './image.jpg')
    # send the message and return the response so it can be logged
    msg = sg.send(message)
    return msg

@app.route('/test2')
def test2():
    # send the message, pass any options like email address (not required)
    status = sendMessage()
    return status
I've added a new function above to send out the message and given it an optional options var, so you could use that to pass things to the message, like a different email address, or subject.
