I have a problem with Django.
When users submit some data, it goes into views.py for processing and eventually ends up at the success page.
But the processing takes too long, and I don't want users to wait that long. What I want is to reach the success page right away after users submit the data, and have the server process the data after returning the success page.
Can you please tell me how to deal with this?
This is my code, but I don't know why it didn't work.
urls.py
from django.conf.urls import patterns, url
from hebeu.views import handleRequest

urlpatterns = patterns('',
    url(r'^$', handleRequest),
)
views.py
from django.http import HttpResponse, HttpResponseNotAllowed
from django.utils.encoding import smart_str, smart_unicode
import xml.etree.ElementTree as ET

def handleRequest(request):
    if request.method == 'POST':
        return HttpResponse(parserMsg(request))
    else:
        # a view must return an HttpResponse; returning None causes an error
        return HttpResponseNotAllowed(['POST'])

def parserMsg(request):
    rawStr = smart_str(request.body)
    msg = paraseMsgXml(ET.fromstring(rawStr))
    queryStr = msg.get('Content')
    openID = msg.get('FromUserName')
    arr = smart_unicode(queryStr).split(' ')
    # start a new thread
    cache_classroom(openID, arr[1], arr[2], arr[3], arr[4]).start()
    return "success"
My English is not good; I hope you can understand.
Take a look at Celery; it is a distributed task queue that will handle your situation perfectly. There is a little bit of setup to get everything working, but once that is out of the way, Celery is really easy to work with.
For integration with Django start here: http://docs.celeryproject.org/en/latest/django/index.html
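For a rough idea, here is a minimal sketch of moving the slow call out of parserMsg into a Celery task. The task name and module path below are assumptions, not part of the original project, and it assumes a Celery app is already configured for the Django project.

# hebeu/tasks.py (hypothetical module)
from celery import shared_task

@shared_task
def cache_classroom_task(open_id, args):
    # the slow work now runs in the Celery worker, not in the request;
    # cache_classroom is the existing helper from the question, imported from wherever it lives
    cache_classroom(open_id, args[1], args[2], args[3], args[4]).start()

# in parserMsg(), replace the blocking call with:
#   cache_classroom_task.delay(openID, arr)
# and return "success" immediately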
Write a management command for parseMsg, trigger it with subprocess.Popen, and return success to the user; the parseMsg work will then run in the background. If the application has many operations of this kind, you should use Celery.
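A rough sketch of that idea, assuming the command is named parse_msg (the command name, its arguments, and the location of cache_classroom are all assumptions):

# hebeu/management/commands/parse_msg.py (hypothetical path)
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    help = "Run the slow parserMsg work outside the request/response cycle"

    def add_arguments(self, parser):
        parser.add_argument("open_id")
        parser.add_argument("values", nargs="*")

    def handle(self, *args, **options):
        # cache_classroom is the existing helper from the question
        cache_classroom(options["open_id"], *options["values"]).start()

# in the view, kick it off and return right away:
#   import subprocess, sys
#   subprocess.Popen([sys.executable, "manage.py", "parse_msg", openID] + arr[1:5])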
This is quite easy: encapsulate the #start a new thread step with the code below.
from threading import Thread
from datetime import datetime

class ProcessThread(Thread):
    def __init__(self, name, openID, arr):
        Thread.__init__(self)
        self.name = name
        # pass in the values the thread needs; they are not in scope inside run() otherwise
        self.openID = openID
        self.arr = arr
        self.started = datetime.now()

    def run(self):
        cache_classroom(self.openID, self.arr[1], self.arr[2],
                        self.arr[3], self.arr[4]).start()
        # I added this so you might know how long the process lasted,
        # just in case any optimization of your code is needed
        finished = datetime.now()
        duration = (finished - self.started).seconds
        print "%s thread started at %s and finished at %s in "\
              "%s seconds" % (self.name, self.started, finished, duration)

# let us now start the thread
my_thread = ProcessThread("CacheClassroom", openID, arr)
my_thread.start()
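With the constructor above taking openID and arr, the call inside parserMsg would look roughly like this (a sketch, not the asker's actual code):

# inside parserMsg(), replacing the blocking call:
ProcessThread("CacheClassroom", openID, arr).start()
return "success"  # returned to the user right away while the thread keeps working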
Related
So I'm creating this application, and part of it is a web page where a trading algorithm tests itself using live data. All of that works, but the issue is that if I leave (exit) the web page, it stops. I was wondering how I can keep it running in the background indefinitely, as I want the algorithm to keep doing its thing.
This is the route which I would like to run in the background.
@app.route('/live-data-source')
def live_data_source():
    def get_live_data():
        live_options = lo.Options()
        while True:
            live_options.run()
            live_options.update_strategy()
            trades = live_options.get_all_option_trades()
            trades = trades[0]
            json_data = json.dumps({'data': trades})
            yield f"data:{json_data}\n\n"
            time.sleep(5)
    return Response(get_live_data(), mimetype='text/event-stream')
I've looked into multithreading, but I'm not too sure if that's the right tool for the job. I am still kind of new to Flask, hence the poor question. If you need more info, please do comment.
You can do it the following way; a 100% working example is below. Note that in production you should use Celery for such tasks, or write a separate daemon app (another process) yourself and feed it tasks from the HTTP server with the help of a message queue (e.g. RabbitMQ) or a shared database.
If you have any questions regarding the code below, feel free to ask; it was quite a good exercise for me:
from flask import Flask, current_app
import threading
from threading import Thread, Event
import time
from random import randint

app = Flask(__name__)

# use the dict to store events to stop other threads
# one event per thread!
app.config["ThreadWorkerActive"] = dict()

def do_work(e: Event):
    """function just for another thread to do some work"""
    while True:
        if e.is_set():
            break  # can be stopped from another thread
        print(f"{threading.current_thread().getName()} working now ...")
        time.sleep(2)
    print(f"{threading.current_thread().getName()} was stopped ...")

@app.route("/long_thread", methods=["GET"])
def long_thread_task():
    """Allows to start a new thread"""
    th_name = f"Th-{randint(100000, 999999)}"  # not really unique actually
    stop_event = Event()  # is used to stop another thread
    th = Thread(target=do_work, args=(stop_event, ), name=th_name, daemon=True)
    th.start()
    current_app.config["ThreadWorkerActive"][th_name] = stop_event
    return f"{th_name} was created!"

@app.route("/stop_thread/<th_id>", methods=["GET"])
def stop_thread_task(th_id):
    th_name = f"Th-{th_id}"
    if th_name in current_app.config["ThreadWorkerActive"].keys():
        e = current_app.config["ThreadWorkerActive"].get(th_name)
        if e:
            e.set()
            current_app.config["ThreadWorkerActive"].pop(th_name)
            return f"Th-{th_id} was asked to stop"
        else:
            return "Sorry, something went wrong..."
    else:
        return f"Th-{th_id} not found"

@app.route("/", methods=["GET"])
def index_route():
    text = ("/long_thread - create another thread. "
            "/stop_thread/th_id - stop thread with a certain id. "
            f"Available Threads: {'; '.join(current_app.config['ThreadWorkerActive'].keys())}")
    return text

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=9999)
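If it helps, here is a quick way to exercise those endpoints once the app above is running on port 9999; the thread id 123456 below is just an example value.

# a quick manual test of the endpoints above, using the requests library
import requests

print(requests.get("http://localhost:9999/long_thread").text)         # e.g. "Th-123456 was created!"
print(requests.get("http://localhost:9999/").text)                    # lists the active thread names
print(requests.get("http://localhost:9999/stop_thread/123456").text)  # use the id returned above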
I am using Python multithreading to achieve a task that takes around 2 to 3 minutes, and I have made one API endpoint for it in a Django project.
Here is my code:
from threading import Thread

def myendpoint(request):
    print("hello")
    lis = [*args]
    obj = Model.objects.get(name=" jax")
    T1 = MyThreadClass(lis, obj)
    T1.daemon = True  # must be set before start()
    T1.start()
    return HttpResponse("successful", status=200)

class MyThreadClass(Thread):
    def __init__(self, lis, obj):
        Thread.__init__(self)
        self.lis = lis
        self.obj = obj

    def run(self):
        for i in self.lis:
            res = Func1(i)
            self.obj.someattribute = res
            self.obj.save()

def Func1(i):
    '''Some big codes'''
    context = func2(*args)
    return context

def func2(*args):
    ''' some codes '''
    return res
With this multithreading I get a quick response from the Django server when calling the endpoint, because the big task is handed to another thread and the endpoint thread finishes at its return statement without keeping track of the spawned thread.
This works correctly if I hit the URL once, but if I hit the URL a second time as soon as the first execution starts, I can see the second request on the console yet I get no response from it.
And if I hit the same URL from two different clients at the same time, the two sets of data get mixed up and I see a few records from one client's request in the other client's data.
I am testing this on my local Django runserver.
So please help. I know about Celery, so please don't recommend it; just tell me why this is happening and whether it can be fixed. My task is not long enough to justify Celery, and I want to achieve this with multithreading.
The main intent of this question is to know how to refresh a cache from the DB (which is populated by some other team not under our control) in a Django REST service, which is then used to serve requests received on the REST endpoint.
Currently I am using the following approach, but my concern is this: since Python (CPython, with the GIL) is not truly multithreaded, can we have the following kind of code in a REST service, where one thread refreshes the cache every 30 minutes and the main thread serves requests on the REST endpoint? Here is sample code, for illustration only.
# mainproject.__init__.py
from threading import Thread, Event

globaldict = {}  # cache

class MyThread(Thread):
    def __init__(self, event):
        Thread.__init__(self)
        self.stopped = event

    def run(self):
        while not self.stopped.wait(1800):
            refershcachefromdb()  # function that takes around 5-6 mins to refresh the cache (global data structure) from the db

refershcachefromdb()  # called explicitly to populate the cache initially

stop_flag = Event()
thread = MyThread(stop_flag)
thread.start()  # start the thread that will refresh the cache every 30 mins
# views.py
from django.http import JsonResponse
from rest_framework.decorators import api_view

import mainproject

@api_view(['GET'])
def get_data(request):
    str_param = request.GET.get('paramid')
    if str_param:
        try:
            paramids = [int(x) for x in str_param.split(",")]
        except ValueError:
            return JsonResponse({'Error': 'This rest end point only accepts comma-separated integers'}, status=422)
        # using the global cache to get records
        output_dct_lst = [mainproject.globaldict[paramid] for paramid in paramids if paramid in mainproject.globaldict]
        if not output_dct_lst:
            return JsonResponse({'Error': 'Data not available'}, status=422)
        else:
            return JsonResponse(output_dct_lst, status=200, safe=False)
In a Flask app, I need to execute another task's checkJob function (which checks the job status and emails the user) after executing return render_template(page). The user will see the confirmation page, but there should still be a background job running to check the job status.
I tried to use Celery (https://blog.miguelgrinberg.com/post/using-celery-with-flask) for the background job and it does not work. Anything after return render_template(page) is not being executed.
Here's the code fragment:
@app.route("/myprocess", methods=['POST'])
def myprocess():
    # .... do work
    # r = checkJob()
    return render_template('confirm.html')
    r = checkJob()  # this line is never reached

@celery.task()
def checkJob():
    bb = 1
    while bb == 1:
        print("checkJob")
        time.sleep(10)
As suggested in the comments, you should use apply_async().
@app.route("/myprocess", methods=['POST'])
def myprocess():
    # .... do work
    r = checkJob.apply_async()
    return render_template('confirm.html')
Note that, as in the example, you do not want to invoke checkJob() directly; keep the call as checkJob.apply_async() so the task is handed to the Celery worker instead of running inside the request.
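As a side note, the checkJob task in the question loops forever. A sketch of a version that eventually finishes and emails the user, as the question describes, might look like this; job_is_finished and send_confirmation_email are hypothetical helpers, not real APIs:

@celery.task()
def checkJob():
    # poll until the job is done instead of looping forever
    while not job_is_finished():   # hypothetical helper that checks the job status
        print("checkJob")
        time.sleep(10)
    send_confirmation_email()      # hypothetical helper that emails the user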
I have a recommendation site. Everything was working dandy until, when the site was under a decent amount of traffic, the recommendations would take longer than 30 seconds (Heroku's limit), time out, and throw a 500 error. I realize this is a very long time for an HTTP request.
So I read up online and implemented RQ with Redis. I got that to work, but after some testing it still throws the Internal Server Error, even though the requests are going through a queue.
I'm really just lacking knowledge here and I have no idea what to do. I think I'm missing the whole idea of RQ and Redis, I guess? Here's some of my code if it helps, but I'm hoping more for guidance on where to go from here to fix this error.
worker.py
import os
import redis
from rq import Worker, Queue, Connection

listen = ['high', 'default', 'low']

redis_url = os.getenv('REDISTOGO_URL',
                      'redis://redistogo:sampleurl:portNo/')
if not redis_url:
    raise RuntimeError('Set up Redis to go first.')

conn = redis.from_url(redis_url)

if __name__ == '__main__':
    with Connection(conn):
        worker = Worker(map(Queue, listen))
        worker.work()
part of my views.py
q = Queue(connection=conn)

@app.route('/')
def home():
    form = ArtistsForm()
    return render_template('home.html', form=form)

@app.route('/results', methods=['POST'])
def results():
    form = ArtistsForm()
    error = None
    try:
        if request.method == 'POST' and form.validate():
            table = 'Artists'
            artists = []
            for value in form.data.items():
                if value[1] != '':  # 'is not' compares identity, not equality
                    artists.append(value[1])
            results = q.enqueue_call(func=getArtists, args=(table, *artists))
            while results.result is None:
                time.sleep(1)
            results = results.result.values.tolist()
            return render_template('results.html', results=results)
        else:
            error = "Please be sure to enter 5 artists with correct spelling" \
                    " and punctuation"
    except pylast.WSError:
        return render_template('error.html')
    return render_template('home.html', form=form, error=error)
Any guidance is appreciated
You can always try dividing the work between a web dyno that simply acknowledges the request and worker dynos that do the heavy lifting.
Kafka or something similar could be used to accomplish this.
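To make that concrete with the existing RQ setup, one option is to enqueue in the POST handler and let the browser poll a separate route for the result instead of sleeping inside the request. Below is a rough sketch under that assumption; the processing.html template and the /results/<job_id> route are made up for illustration.

from rq.job import Job

@app.route('/results', methods=['POST'])
def results():
    form = ArtistsForm()
    if form.validate():
        artists = [v for _, v in form.data.items() if v]
        job = q.enqueue_call(func=getArtists, args=('Artists', *artists))
        # return immediately with the job id instead of blocking until the worker finishes
        return render_template('processing.html', job_id=job.get_id())
    return render_template('home.html', form=form,
                           error="Please be sure to enter 5 artists with correct spelling and punctuation")

@app.route('/results/<job_id>')
def results_ready(job_id):
    job = Job.fetch(job_id, connection=conn)
    if not job.is_finished:
        return "Still working...", 202   # the page (or some JS) can retry in a few seconds
    return render_template('results.html', results=job.result.values.tolist())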