I'm currently using the following views.py function, sendsmss, to let a user send a bulk SMS message to their list of subscribers, after the user has completed an HTML form with the SMS they want to send:
from twilio.rest import Client

def sendsmss(request):
    if request.method == "POST":
        subscribers = Subscriber.objects.all()
        sms = request.POST['sms']
        mytwilionum = "+13421234567"
        ACCOUNT_SID = TWILIO_ACCOUNT_SID  # from settings
        AUTH_TOKEN = TWILIO_AUTH_TOKEN
        client = Client(ACCOUNT_SID, AUTH_TOKEN)
        for subscriber in subscribers:
            subscriber_num = subscriber.phone_number
            client.messages.create(
                to=subscriber_num,
                from_=mytwilionum,
                body=sms
            )
    return redirect('homepage')
This function works, but I have only tested the bulk send with 3 subscribers. If there were 100s or 1000s of subscribers, how long would this take? If it takes long, would the user be waiting for the task to complete before the redirect to the homepage happens? Is there a better way to do this in Django?
The questions are somewhat subjective, but I will try to answer them in order:
If there were 100s or 1000s of subscribers how long would this take
This depends entirely on Twilio's performance. The API client uses the requests library under the hood and creates the messages one by one, one HTTP request per subscriber, so the time taken is roughly proportional to the number of subscribers.
if it takes long then would user be waiting for task to complete before redirect to homepage happens?
Based on your current implementation, yes. The return redirect('homepage') will be executed only after the message has been sent to all the subscribers. If an error occurs mid-loop, it will be raised and the user won't be redirected to the homepage at all.
Is there a better way to do this in Django?
Yes, there is. You can use an asynchronous job queue such as Celery and hook it up with Django. The view then only enqueues the task and returns a response to the user immediately, while a worker process sends the messages in the background. You can also choose to display the progress of the running Celery task to the user (if required).
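The shape of that would be roughly the following. This is a minimal sketch, not your exact code: the send loop is factored into a plain function so a Celery task can drive it (the commented-out wiring), and Subscriber, Client, and the Twilio credentials are assumed to exist as in the question.

```python
def send_bulk_sms(client, from_number, numbers, body):
    """Send `body` to each number in `numbers`; returns how many sends were attempted."""
    sent = 0
    for number in numbers:
        client.messages.create(to=number, from_=from_number, body=body)
        sent += 1
    return sent

# With Celery installed and configured, wrap the loop as a task so the view can
# enqueue it and redirect immediately (sketch; names are hypothetical):
#
# @shared_task
# def send_bulk_sms_task(body):
#     numbers = [s.phone_number for s in Subscriber.objects.all()]
#     client = Client(TWILIO_ACCOUNT_SID, TWILIO_AUTH_TOKEN)
#     send_bulk_sms(client, "+13421234567", numbers, body)
#
# and in the view: send_bulk_sms_task.delay(sms); return redirect('homepage')
```

The view returns in milliseconds regardless of how many subscribers there are, because the slow network work happens in the worker.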
Related
I am using Python + Flask to build an app that takes a user CSV file and enriches it with data from an API.
My objective:
User uploads CSV file [done]
A payment amount is set and presented on Stripe payment page [done]
Once user pays, then the CSV file is enriched with data from an API (a column is appended), and is emailed to the user. [enriching and emailing is done. I just don't know how to make it wait + match payment to correct csv]
My question:
How can I make sure that the CSV file is not enriched/emailed to the user until the Stripe payment is completed?
I have set up a webhook. The problem is, I don't know how to match up the CSV file the user uploaded with the actual payment_id from Stripe to make sure I send them the right file.
I'm sure I am just blanking on some concept here, so any directional help is appreciated.
If you want to wait for Stripe to complete a payment before executing a Python function, you'll need to implement a webhook that listens for the payment_intent.succeeded event in Stripe. When this event is triggered, it indicates that the payment has been completed successfully, and you can then execute your Python function.
Here's a basic outline of the steps you'll need to take:
Implement a webhook endpoint in your application that listens for the payment_intent.succeeded event in Stripe.
In the webhook endpoint, when the payment_intent.succeeded event is triggered, you can call your Python function.
Configure the webhook endpoint in your Stripe dashboard to send the payment_intent.succeeded event to your application.
Here's a simple example implementation in Flask:
from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    # Retrieve the event data from the request
    event_json = request.get_json()
    # Check if the event is a payment_intent.succeeded event
    if event_json["type"] == "payment_intent.succeeded":
        # Call your Python function here
        execute_python_function()
    return "success"

def execute_python_function():
    # Your Python function code goes here
    ...

if __name__ == "__main__":
    app.run(debug=True)
Note that this is just a basic example, and you'll need to modify it to meet the specific needs of your application. You'll also need to ensure that your webhook endpoint is secured and can only be accessed by Stripe.
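One concrete way to do that securing is to verify Stripe's signature header before trusting the payload. Stripe's official Python library provides stripe.Webhook.construct_event for this; the check itself is an HMAC-SHA256 of "timestamp.payload" with your endpoint's signing secret, so a stdlib-only sketch looks like this (the function name and tolerance value are my own choices):

```python
import hashlib
import hmac
import time

def verify_stripe_signature(payload: bytes, sig_header: str, secret: str,
                            tolerance: int = 300) -> bool:
    """Check a Stripe-Signature header ("t=...,v1=...") against the raw request body."""
    pairs = dict(item.split("=", 1) for item in sig_header.split(","))
    timestamp = int(pairs["t"])
    signed_payload = str(timestamp).encode() + b"." + payload
    expected = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, pairs["v1"]):
        return False
    # Reject stale timestamps to limit replay attacks.
    return abs(time.time() - timestamp) <= tolerance
```

In the webhook view you would call this with request.get_data(), request.headers['Stripe-Signature'], and your endpoint's signing secret before reading the event JSON.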
I am trying to develop a web application that supports a long task on the backend. I am using the flask-socketio package on my server along with Celery. My workflow is the following:
When a client opens the HTML page, I initiate a socket connection to the server, which creates a uid for the user and emits it back.
Once the user posts a request for the long task, I schedule it using Celery, and once it's finished I need to emit the result to the user who posted the request. (I store the relevant user id in the POST request.)
I have looked at @Miguel's answer for 39423646/flask-socketio-emit-to-specific-user, which creates a separate room for each user and then broadcasts the message to that room. But I wanted to ask if there is some other, simpler way to do this, since it seems an inefficient or forced way to do it.
I also came across the Node.js solution (how-to-send-a-message-to-a-particular-client-with-socket-io), which I felt was a more natural way to accomplish this. Is there a similar solution in python-socketio too?
Update: After some more searching I came across a solution in a GitHub gist. According to this, Flask-SocketIO already puts each client in a separate room given by request.sid.
I would still like to discuss other ways to do this. Specifically, if the site traffic is quite high, wouldn't it lead to too many rooms?
Update (2): my current (working) server code, which makes use of rooms. This is borrowed and modified from the Flask-SocketIO Celery example.
@celery.task(bind=True)
def long_task(self, userid, url):
    # LONG TASK
    time.sleep(10)
    # meta = some result; include the userid so the /event route can look up the room
    meta = {'userid': userid}
    post(url, json=meta)
    # It seems I can't emit directly from the celery function, so I mimic
    # a post request and emit from that endpoint instead
    return meta

@app.route('/longtask', methods=['POST'])
def longtask():
    userid = request.json['userid']
    task = long_task.delay(userid, url_for('event', _external=True))
    return jsonify({}), 202

@socketio.on('connect', namespace='/custom')
def events_connect():
    userid = str(uuid.uuid4())
    session['userid'] = userid
    current_app.clients[userid] = request.sid
    emit('userid', {'userid': userid})

@app.route('/event/', methods=['POST'])
def event():
    userid = request.json['userid']
    data = request.json
    roomid = app.clients.get(userid)
    socketio.emit('celerystatus', data, namespace='/custom', room=roomid)
    return 'ok'
You don't have to create any rooms to address an individual user. Just set the to argument to the sid of the user you want to address:
emit('my event', my_data, to=user_sid)
The sid value that is assigned to each user is given to you in the connect event handler as request.sid.
You can make a room for every user, and emit to a particular room to reach the user you want to share the message with.
io.to(user.room).emit('specific-user', { message });
Here a "separate user room" means you define a room id for each conversation, like a chat room in a one-to-one chat application.
The room id can be a concatenation of your own id and the id of the user you want to message, which gives a unique room id for each pair of users.
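For instance, in Python (names hypothetical), sorting the two ids before concatenating guarantees that both participants derive the same room id regardless of who initiates:

```python
def room_id(user_a: str, user_b: str) -> str:
    # Sort so that both participants compute the same id regardless of order.
    first, second = sorted((user_a, user_b))
    return f"{first}:{second}"
```

Both clients can then join the same room and every emit to that room reaches exactly that pair.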
I'm having some trouble understanding and implementing the Google Directory API's users watch function and push notification system (https://developers.google.com/admin-sdk/reports/v1/guides/push#creating-notification-channels) in my Python GAE app. What I'm trying to achieve is that any user (admin) who uses my app would be able to watch user changes within his own domain.
I've verified the domain I want to use for notifications and implemented the watch request as follows:
directoryauthdecorator = OAuth2Decorator(
    approval_prompt='force',
    client_id='my_client_id',
    client_secret='my_client_secret',
    callback_path='/oauth2callback',
    scope=['https://www.googleapis.com/auth/admin.directory.user'])

class PushNotifications(webapp.RequestHandler):
    @directoryauthdecorator.oauth_required
    def get(self):
        auth_http = directoryauthdecorator.http()
        service = build("admin", "directory_v1", http=auth_http)
        uu_id = str(uuid.uuid4())
        param = {}
        param['customer'] = 'my_customer'
        param['event'] = 'add'
        param['body'] = {'type': 'web_hook', 'id': uu_id,
                         'address': 'https://my-domain.com/pushNotifications'}
        watchUsers = service.users().watch(**param).execute()

application = webapp.WSGIApplication(
    [
        ('/pushNotifications', PushNotifications),
        (directoryauthdecorator.callback_path, directoryauthdecorator.callback_handler())],
    debug=True)
Now, the receiving part is what I don't understand. When I add a user on my domain and check the app's request logs I see some activity, but there's no usable data. How should I approach this part?
Any help would be appreciated. Thanks.
The problem
It seems like there's been some confusion in implementing the handler. Your handler actually sets up the notifications channel by sending a POST request to the Reports API endpoint. As the docs say:
To set up a notification channel for messages about changes to a particular resource, send a POST request to the watch method for the resource.
source
You should only need to send this request one time to set up the channel, and the "address" parameter should be the URL on your app that will receive the notifications.
Also, it's not clear what is happening with the following code:
param={}
param['customer']='my_customer'
param['event']='add'
Are you just breaking the code in order to post it here? Or is it actually written that way in the file? You should actually preserve, as much as possible, the code that your app is running so that we can reason about it.
The solution
It seems from the docs you linked - in the "Receiving Notifications" section, that you should have code inside the "address" specified to receive notifications that will inspect the POST request body and headers on the notification push request, and then do something with that data (like store it in BigQuery or send an email to the admin, etc.)
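Concretely, Google delivers each notification as a POST whose metadata lives in X-Goog-* headers rather than in the body, so the receiving handler mostly inspects headers. A minimal, framework-agnostic sketch of that extraction (the function name and dict keys are my own naming):

```python
def parse_push_notification(headers):
    """Extract the X-Goog-* metadata Google sends with each push notification."""
    return {
        "channel_id": headers.get("X-Goog-Channel-ID"),
        "message_number": headers.get("X-Goog-Message-Number"),
        "resource_id": headers.get("X-Goog-Resource-ID"),
        "resource_state": headers.get("X-Goog-Resource-State"),  # e.g. "sync", "add"
        "resource_uri": headers.get("X-Goog-Resource-URI"),
    }
```

Your PushNotifications handler would need a post() method that calls something like this on self.request.headers, then reads the JSON body for the changed resource and stores or forwards it.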
Managed to figure it out. In the App Engine logs I noticed that each time I made a change being 'watched' on my domain, I got a POST request from Google's API, but with a 302 code. This turned out to be because I had login: required configured in my app.yaml for the script handling the requests, so the POST request was being redirected to the login page instead of the processing script.
I'm trying to send emails in a function within my views.py file. I've set up the email in my settings file in the same manner as here.
Python Django Gmail SMTP setup
Email sending does work, but it takes several minutes, which my users have been complaining about. I am receiving a gethostbyaddr error in my /var/log/mail.log file, which I'll post below. I used to get nginx timeout errors but put proxy_read_timeout 150; into my /etc/nginx/sites-enabled/django file.
http://www.nginxtips.com/upstream-timed-out-110-connection-timed-out-while-reading-response-header-from-upstream/
This solved the timeout errors when interacting with the website, but the emails still take several minutes to arrive. I'm using a DigitalOcean Django droplet, and this slow speed has occurred on all my droplets.
Here's my view function
@login_required
def AnnouncementPostView(request, leaguepk):
    league = League.objects.get(pk=leaguepk)
    lblog = league.blog
    if request.method == 'POST':
        form = AnnouncementPostForm(request.POST)
        if form.is_valid():
            posttext = request.POST['text']
            newAnnouncement = Announcement(text=posttext, poster=request.user)
            newAnnouncement.save()
            lblog.announce.add(newAnnouncement)
            titleText = "%s Announcement" % (league.name)
            send_mail(titleText, posttext, settings.EMAIL_HOST_USER,
                      ['mytestemail@gmail.com'], fail_silently=False)
            return HttpResponseRedirect(reverse('league-view', args=[league.pk]))
    else:
        form = AnnouncementPostForm()
    return render(request, 'simposting/announcementpost.html', {'form': form, 'league': league})
This has worked: the announcement is posted to the desired page and is even emailed. It's just a time problem. People have come to expect nearly instant email, which makes the 2-3 minute delay unacceptable, especially when signing up also causes the same 2-3 minute wait.
One issue may be that, while trying to solve this with the DigitalOcean support team, I changed my droplet name and the hostname to the domain I set up.
My current hostname and droplet name is mydomain.com. I have it set up that way in my /etc/hostname file. My /etc/hosts file looks like this:
127.0.0.1 localhost.localdomain localhost mydomain.com
127.0.1.1 mydomain.com
My /var/log/mail.log file shows this whenever I try to send mail:
Oct 6 16:13:24 "oldDropletName" sm-mta[13660]: gethostbyaddr(10.xxx.xx.x) failed: 1
Oct 6 16:13:24 "oldDropletName" sm-mta[13662]: starting daemon (8.14.4): SMTP+queueing#00:10:00
I hope this is enough information to help, it's been troubling for several weeks and usually I can either solve my problems by looking up stuff here or working with the support team but it's got us stumped. Thank you for taking the time to help!
Sending an email is a network-bound task, and you don't know exactly how long it will take to finish, as in your case. Even if the latency is in your network, it's better to do such a task asynchronously so your main thread stays free.
I am using the following code in one of my projects.
utils.py
import threading

from django.core.mail import EmailMessage

class EmailThread(threading.Thread):
    def __init__(self, subject, html_content, recipient_list, sender):
        self.subject = subject
        self.recipient_list = recipient_list
        self.html_content = html_content
        self.sender = sender
        threading.Thread.__init__(self)

    def run(self):
        msg = EmailMessage(self.subject, self.html_content, self.sender, self.recipient_list)
        msg.content_subtype = 'html'
        msg.send()

def send_html_mail(subject, html_content, recipient_list, sender):
    EmailThread(subject, html_content, recipient_list, sender).start()
Just call send_html_mail from your view; it returns immediately while the thread does the sending.
I am not particularly familiar with sendmail (I use Postfix), but I suspect this is almost certainly related to sendmail and probably not Django. The second log entry contains "SMTP+queueing#00:10:00", and the linked page indicates that sendmail takes a flag on startup that determines how often to process the mail queue (here, every 10 minutes). You may want to look around your init scripts, or wherever your startup scripts live, and see how sendmail is configured. Also, if you are using Gmail you really can't control any delays on their end, so along with checking your mail server's configuration, you'll need to check the logs for when actions actually occur, such as the mail being queued and sent. Is the time on that log line from when the view was executed? If so, it is in the hands of sendmail.
I have an app in Python, using Flask and IronWorker. I'm looking to implement the following scenario:
User presses the button on the site
The task is queued for the worker
Worker processes the task
Worker finishes the task, notifies my app
My app redirects the user to the new endpoint
I'm currently stuck in the middle of point 5. I have the worker successfully finishing the job and sending a POST request to a specific endpoint in my app. Now, I'd like to somehow identify which user invoked the task and redirect that user to the new endpoint in my application. How can I achieve this? I can pass all kinds of data in the worker payload and then return it with the POST; the question is how do I invoke the redirect for the specific user visiting my page?
You can do it as follows:
When the user presses the button, the server starts the task and then sends a response to the client, possibly a "please wait..." type page. Along with the response, the server must include a task id that references the task, in a place accessible to JavaScript.
The client uses the task id to poll the server regarding task completion status through ajax. Let's say this is route /status/<taskid>. This route returns true or false as JSON. It can also return a completion percentage that you can use to render a progress bar widget.
When the server reports that the task is complete the client can issue the redirect to the completion page. If the client needs to be told what is the URL to redirect to, then the status route can include it in the JSON response.
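Ignoring the web framework plumbing, the server-side state behind that /status/<taskid> route can be sketched with an in-memory registry. A real app would use Redis or a database so it survives restarts and works across processes, and all names here are hypothetical:

```python
# taskid -> status record; this is what /status/<taskid> would serialize as JSON.
TASKS = {}

def start_task(taskid):
    """Called when the button press enqueues the worker job."""
    TASKS[taskid] = {"complete": False, "redirect_url": None}

def finish_task(taskid, redirect_url):
    """Called from the worker's POST callback when the job is done."""
    TASKS[taskid] = {"complete": True, "redirect_url": redirect_url}

def task_status(taskid):
    """What the polling route returns; unknown ids read as not-yet-complete."""
    return TASKS.get(taskid, {"complete": False, "redirect_url": None})
```

The client polls /status/<taskid> via Ajax; once complete is true, it reads redirect_url from the JSON and sets window.location to it.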
I hope this helps!