I am preparing a test or quiz in Django. The quiz needs to be completed in a certain time frame, say 30 minutes for 40 questions. I can always start a clock at the beginning of the test and then calculate the elapsed time when the quiz is completed. However, it's likely that during the attempt there will be issues such as internet connection drops, system crashes, power outages, etc.
I need a strategy to figure out when such an accident happened, stop the clock, let the user resume the test from where it stopped, and start the clock again.
What is the right strategy? Any help, including sample code/examples/ideas, is most welcome.
Your strategy should depend on the importance of the test and on whether the whole test can be retaken.
Is the test/quiz for fun, or for competence/knowledge checking?
Are you dealing with logged-in users?
Are tests generated randomly from a large pool of available questions?
These are the questions you need to answer yourself first.
Remember that:
a malicious user CAN simulate a connection outage / power failure,
the only clock you can trust is the one on the server side,
everything on the browser side can be manipulated (think Firebug / console JS injection).
My approach would be:
Inform users that TIME is an important factor and that connection issues may not be taken into account when the grade is given,
Serve only one question, wait for the answer, then serve another one,
The whole test time should be calculated as the SUM of the individual answer times (a minimal sketch of this bookkeeping follows this list):
save the "question sent" / "answer received" timestamps and calculate each answer time from them,
time between questions wouldn't count,
you'd get extra insight into which questions were harder / took longer to answer.
Add some kind of heartbeat to your question page (like an ajax request every X seconds); when the heartbeat stops you can (depending on the options you have):
invalidate the question and notify the user via a dialog that he has connection issues and has to refresh to get a new question instead, if you have a larger pool of questions to use,
pause the time on the server side (and, for example, dim the question page so the user cannot answer until his connection is restored); IMO only for games/fun quizzes/tests,
save information on the server side about each interruption, which would later make it easier to decide whether to allow a retake of the whole test, e.g. he was fine until the 20th question and then he was dropping on 3-4 easy questions in a row...
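A minimal Django sketch of the per-question bookkeeping described above; the model and field names (QuizSession, QuestionAttempt, sent_at, answered_at) are assumptions for illustration, not from the original post:

from django.db import models

class QuestionAttempt(models.Model):
    # Hypothetical model: one row per question served to a user.
    quiz_session = models.ForeignKey('QuizSession', on_delete=models.CASCADE)
    question_id = models.IntegerField()
    sent_at = models.DateTimeField(auto_now_add=True)          # "question sent" timestamp
    answered_at = models.DateTimeField(null=True, blank=True)  # "answer received" timestamp

    def answer_seconds(self):
        # Time spent on this one question; gaps between questions don't count.
        if self.answered_at is None:
            return None
        return (self.answered_at - self.sent_at).total_seconds()

def total_test_seconds(quiz_session):
    # Whole test time = SUM of the individual answer times.
    attempts = quiz_session.questionattempt_set.exclude(answered_at=None)
    return sum(a.answer_seconds() for a in attempts)

This also gives you the per-question statistics mentioned above (which questions took longest) for free.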
The simplest way would be to add a timestamp when the person starts the quiz and then compare that to when they submit. Of course, this doesn't take into account connection drops, crashes, etc., like you mentioned.
To account for these issues I'd probably use something like node.js. Each client "checks in" when they connect to the quiz, and then again at regular intervals (every 1s, 10s, 1m, etc.). If a client misses a check-in at one of these intervals, you can assume its connection has dropped. You could record when it reconnects and resume the timer from where it left off.
This is my initial thought on how to keep track of connection drops and crashes. The same could be done with a front-end ajax call to a Django view.
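A sketch of that check-in idea done with a plain Django view rather than node.js; the QuizSession model with last_seen and finished fields is an assumption:

from django.http import JsonResponse
from django.utils import timezone

def heartbeat(request):
    # Client-side JS calls this URL every few seconds while the quiz page is open.
    session = request.user.quizsession_set.filter(finished=False).first()
    if session is not None:
        session.last_seen = timezone.now()
        session.save(update_fields=['last_seen'])
    return JsonResponse({'ok': True})

def has_dropped(session, max_gap_seconds=30):
    # For a periodic job: if the last check-in is older than the threshold,
    # assume the client's connection has dropped.
    return (timezone.now() - session.last_seen).total_seconds() > max_gap_seconds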
Either you run the clock on the client side, in which case the user can always cheat somehow, or you run it on the server side, in which case you aren't taking these interruptions into account.
To reduce cheating somewhat and still allow for interruptions, you could do a 'keep alive'.
Here the client-side code announces to the server that it is still there every so often, say every 5 seconds. The server notes when it stops getting these messages and pauses/stops the clock. It still has the start and end times, however, so you know both how long the test really took in wall time and how long it took while the client was supposedly present.
With these two pieces of information you could very easily track down odd behaviour and blacklist people. Blacklisted people might not be aware that they are blacklisted, but their quiz scores don't show up for other users of your quiz system.
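One way to derive both figures from the same keep-alive log, purely as a sketch (the 5-second interval matches the example above):

def alive_seconds(heartbeats, interval=5, grace=2):
    # heartbeats: sorted list of datetimes received from one client.
    # Only gaps that look like normal heartbeat spacing are counted;
    # anything longer than interval + grace is treated as a disconnection.
    total = 0.0
    for prev, curr in zip(heartbeats, heartbeats[1:]):
        gap = (curr - prev).total_seconds()
        if gap <= interval + grace:
            total += gap
    return total

# Wall time, by contrast, is simply the span between the first and last
# timestamps: (heartbeats[-1] - heartbeats[0]).total_seconds()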
The problem with pausing the clock when the connection drops is that the user could simply disconnect their computer from the internet each time they received a new question, and reconnect once they had worked out the answer.
One thing you could do, is give the user a certain amount of time for each question.
The clock starts when the question successfully reaches the user's browser; if the user submits an answer before the time limit it is accepted, otherwise it is void.
That way, a lost connection would only affect the question the user is currently on. But it also means the user has no flexibility in how much time to allot to each question; you decide for them (a rough sketch of this check appears below).
I was thinking you could do something like removing the question from the screen unless the connection to the server was still alive, but the user could always just screen-shot the question before disconnecting.
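To make the per-question limit concrete, a rough Django sketch; it reuses the hypothetical QuestionAttempt model from the earlier sketch, and the limit value is arbitrary:

from django.http import HttpResponseForbidden, JsonResponse
from django.utils import timezone

QUESTION_TIME_LIMIT = 60  # seconds per question; an example value

def submit_answer(request, attempt_id):
    attempt = QuestionAttempt.objects.get(pk=attempt_id)
    elapsed = (timezone.now() - attempt.sent_at).total_seconds()
    if elapsed > QUESTION_TIME_LIMIT:
        # Past the limit: the answer is void, and only this question is affected.
        return HttpResponseForbidden("Time limit exceeded; answer void.")
    attempt.answered_at = timezone.now()
    attempt.save(update_fields=['answered_at'])
    return JsonResponse({'accepted': True})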
This might be a bit hard to explain, but I hope I can explain it in a sufficient and understandable way.
I want to create a system to detect whether a large number of users suddenly joins our server, but I'm not sure how to set this up.
Should I store every new user in Redis with a timestamp, and then, with a background task every n seconds/minutes, check whether the number of users in a timespan is larger than the number of new users I know we normally get?
So like:

if new_user_count > 10:
    sendwarning()
But how would I incorporate a check within a certain timespan? Maybe count the number of new users since the last scan?
Is this a sufficient approach, or is there some smarter method that I don't know of?
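One common way to implement the sliding-window check is a Redis sorted set scored by timestamp; a minimal sketch, where the key name, window size, and threshold are illustrative:

import time
import redis

r = redis.Redis()
WINDOW_SECONDS = 60  # size of the sliding window; tune to your traffic
THRESHOLD = 10       # the "new_user_count > 10" from above

def record_new_user(user_id):
    # Score each signup by its unix timestamp.
    r.zadd('new_users', {user_id: time.time()})

def check_surge():
    # Run this from your background task every n seconds/minutes.
    now = time.time()
    r.zremrangebyscore('new_users', 0, now - WINDOW_SECONDS)  # drop aged-out entries
    if r.zcard('new_users') > THRESHOLD:
        sendwarning()  # the hypothetical function from the question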
When a user connects to a server (depending on what kind of server you have), say a Linux server, every user connection to your server has an entry in a log file (/var/log/<file.log>). You can write a simple Python program to monitor this file and use a regex to count "successful login" entries. You can also use an open-source IDS (Intrusion Detection System) like Snort on your server to do the same for you.
https://www.snort.org/
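A minimal sketch of that log-monitoring idea; the log path and the regex are assumptions for an SSH server and will differ for other services:

import re
import time

LOG_PATH = '/var/log/auth.log'  # assumption: adjust to your server's log file
PATTERN = re.compile(r'Accepted (?:password|publickey) for')  # SSH success lines

def follow(path):
    # Generator yielding new lines appended to the file, like `tail -f`.
    with open(path) as f:
        f.seek(0, 2)  # jump to the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line

successful_logins = 0
for line in follow(LOG_PATH):
    if PATTERN.search(line):
        successful_logins += 1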
Suppose I have a model Event. I want to send a notification (email, push, whatever) to all invited users once the event has elapsed. Something along the lines of:
class Event(models.Model):
    start = models.DateTimeField(...)
    end = models.DateTimeField(...)
    invited = models.ManyToManyField(User)

    def onEventElapsed(self):
        for user in self.invited.all():
            my_notification_backend.sendMessage(target=user, message="Event has elapsed")
Now, of course, the crucial part is to invoke onEventElapsed whenever timezone.now() >= event.end.
Keep in mind, end could be months away from the current date.
I have thought about two basic ways of doing this:
Use a periodic cron job (say, every five minutes or so) which checks if any events have elapsed within the last five minutes and executes my method.
Use celery and schedule onEventElapsed using the eta parameter so it runs in the future (from within the model's save method).
Considering option 1, a potential solution could be django-celery-beat. However, it seems a bit odd to run a task at a fixed interval just for sending notifications. In addition, I came up with a (potential) issue that would (probably) result in a not-so-elegant solution:
Checking every five minutes for events that elapsed in the previous five minutes seems shaky; maybe some events are missed (or others get their notifications sent twice?). Potential workaround: add a boolean field to the model that is set to True once notifications have been sent.
Then again, option 2 also has its problems:
One has to manually take care of the situation when an event's start/end datetime is moved. When using celery, one would have to store the task ID (easy, ofc) and revoke the task once the dates change, then issue a new task. But I have read that celery has (design-specific) problems when dealing with tasks that run far in the future: Open Issue on github. I realize how this happens and why it is anything but trivial to solve.
Now, I have come across some libraries which could potentially solve my problem:
celery_longterm_scheduler (but does this mean I cannot use celery as I would have before, because of the different Scheduler class? This also ties into the possible usage of django-celery-beat... Using either of the two frameworks, is it still possible to queue jobs that are just a bit longer-running, but not months away?)
django-apscheduler, which uses apscheduler. However, I was unable to find any information on how it handles tasks that run in the far future.
Is there a fundamental flaw in the way I am approaching this? I'm glad for any input you might have.
Notice: I know this is likely to be somewhat opinion-based; however, maybe there is a very basic thing that I have missed, regardless of what some could consider ugly or elegant.
We're doing something like this in the company I work for, and the solution is quite simple.
Have a cron / celery beat job that runs every hour to check whether any notifications need to be sent.
Then send those notifications and mark them as done. This way, even if your notification time is years ahead, it will still be sent. Using ETA is NOT the way to go for very long wait times; your cache / amqp might lose the data.
You can reduce the interval depending on your needs, but do make sure the runs don't overlap.
If one hour is too large a time difference, then what you can do is run a scheduler every hour. The logic would be something like this (sketched in code after this list):
run a task (let's call it the scheduler task) hourly, via celery beat, that gets all notifications that need to be sent in the next hour,
schedule those notifications via apply_async(eta); this will do the actual sending.
Using that methodology gets you the best of both worlds (eta and beat).
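A sketch of that two-step approach, assuming a hypothetical Notification model with a due_at datetime and a sent boolean (names invented for illustration):

from datetime import timedelta
from celery import shared_task
from django.utils import timezone

@shared_task
def schedule_due_notifications():
    # Runs hourly via celery beat; picks up everything due within the next hour.
    now = timezone.now()
    due = Notification.objects.filter(sent=False, due_at__lt=now + timedelta(hours=1))
    for n in due:
        send_notification.apply_async(args=[n.pk], eta=n.due_at)

@shared_task
def send_notification(pk):
    n = Notification.objects.get(pk=pk)
    if n.sent:  # guard against double sends if the scheduler windows overlap
        return
    # ... actually deliver the message here ...
    n.sent = True
    n.save(update_fields=['sent'])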
Hi, I am quite new to web application development. I have been designing an application where a user uploads a file, some calculation is done, and an output table is shown. This process takes approximately 5-6 seconds.
I am saving my data in sessions like this:
request.session['data'] = resultDATA
And loading the data whenever I need from sessions like this:
resultDATA = request.session['data']
I don't need the data once the user is signed out. So is this approach correct for saving user data (not involving passwords)?
My biggest problem is: if n users upload their files at the exact same moment, does the last user have to wait n*6 seconds for his calculation to complete? If yes, is there a solution for this?
Right now I am using django built-in web server.
Do I have to use a different server to solve this problem?
There are quite a few questions in this question; however, I think they are related enough and concise enough to deserve an answer:
So is approach correct to save user data (not involving passwords)?
I don't see any problem with this approach, since it's volatile data and it's not sensitive.
My biggest problem is if n number of users upload their files at exact moment do the last user have to wait for n*6 seconds for his calculation to complete?
This shouldn't be an issue as you put it. Obviously, if your server is handling huge amounts of traffic it will slow down, and requests will take a bit longer than your usual 5-6 seconds. However, it won't be n*6; the server should be able to handle multiple requests at once.
Do I have to use a different server to solve this problem?
No, but kind of yes... What I mean is that in development the built-in server is great: it does everything you need it to do. However, when you decide to push the app into production, you'll need a proper server for it.
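For example, a common production setup is a WSGI server such as gunicorn running several worker processes (myproject here is a placeholder for your actual project name):

pip install gunicorn
gunicorn myproject.wsgi:application --workers 4

Each worker handles requests independently, so several of those 5-6 second calculations can run in parallel instead of queueing behind one another.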
As a side note, try to see if you can improve the data collection time, because right now everything is running on your own PC, which means it will probably be faster than when you push it to production. When you "upload" a file to localhost it takes a lot less time than when you upload it to an actual server over the internet, so that's a thing to keep in mind.
I apologize that I couldn't find a proper title; let me explain what I'm working on:
I have a Python IRC bot, and I want to keep track of how long users have been idle in the channel and allow them to earn things (I have it tied to Skype/Minecraft/my website) for every x hours they're idle in the channel.
I already have everything to keep track of each user and have them validated with the site and stuff, but I am not sure how I would keep track of the time they're idle.
I already capture join/leave/part messages. How can I start a timer when a user joins, keep it running alongside the timers of all the other users in the channel, do something for each hour a user has been idle (not all at the same time), and then restart that user's timer?
Two general ways:
Create a separate timer for each user when they join, do something when the timer fires, and destroy it when the user leaves.
Have one timer set to fire, say, every second (or every ten seconds) and iterate over all the users when it fires to see how long each has been idle.
A more precise answer would require deeper insight into your architecture, I’m afraid.
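A sketch of the second option (one sweeping timer); reward() and the join/part hooks are placeholders for whatever your bot framework actually provides:

import time
import threading

IDLE_REWARD_SECONDS = 3600  # one hour, per the question

join_times = {}  # nick -> timestamp of join (or of the last reward)

def on_join(nick):
    join_times[nick] = time.time()

def on_part(nick):
    join_times.pop(nick, None)

def check_idlers():
    # A single periodic timer that sweeps all tracked users.
    now = time.time()
    for nick, since in list(join_times.items()):
        if now - since >= IDLE_REWARD_SECONDS:
            reward(nick)            # hypothetical: grant the hourly reward
            join_times[nick] = now  # restart this user's timer
    threading.Timer(10, check_idlers).start()  # re-arm; fires every 10 seconds

check_idlers()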
I'm building an installation that will run for several days and needs to get notifications from a GMail inbox in real time. The Gmail API is great for many of the features I need, so I'd like to use it. However, it has no IDLE command like IMAP.
Right now I've created a Gmail API implementation that polls the mailbox every couple of seconds. This works great, but it times out after a while (I get "connection reset by peer"). So, is it reasonable to tear the session down and restart it every half an hour or so to keep it active (like with IDLE)? Is that a terrible, terrible hack that will have Google busting down my door in the middle of the night?
Would the proper solution be to log in with IMAP as well and use IDLE to notify my GMail API module to start up and pull in changes when they occur? Or should I just suck it up and create an IMAP only implementation?
I would definitely recommend against IMAP; note that even with the IMAP IDLE command it isn't real time: it's just polling every few (5?) seconds under the covers and then pushing out to the connection. (Experiment yourself and see the delay there.)
Querying history.list() frequently is quite cheap and should be fine. If this is for a sizeable number of users, you may want to do a little optimization, like intelligent backoff for inactive mailboxes (e.g. every time there are no updates, back off by an extra 5s, up to some maximum like a minute or two).
Google will definitely not bust down your door or likely even notice unless you're doing it every second with 1M users. :)
Real push notifications for the API are definitely something that's called for.
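Roughly what that backoff polling could look like with the Gmail API Python client (error handling and result pagination omitted; handle_changes is a placeholder):

import time

def poll_history(service, start_history_id, min_delay=5, max_delay=120):
    # `service` is a Gmail API client built with googleapiclient.discovery.build().
    # Backoff as described above: add 5s after every empty poll, reset to the
    # minimum as soon as something arrives.
    delay = min_delay
    history_id = start_history_id
    while True:
        resp = service.users().history().list(
            userId='me', startHistoryId=history_id).execute()
        changes = resp.get('history', [])
        if changes:
            handle_changes(changes)  # hypothetical handler for new mail
            history_id = resp['historyId']
            delay = min_delay
        else:
            delay = min(delay + 5, max_delay)
        time.sleep(delay)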
You are getting "connection reset by peer" because you are exceeding your Google quota. Every Gmail API request counts against a quota, defined here.