Is Comet (long polling) needed for Django activity stream? - python

Does anyone have an idea how to display django-activity-stream actions in templates?
Do I need to use Comet to fetch the values to display in my template? When I call action.send, it stores the verb and description in the actstream_action table, but how should I display those values in a template?
P.S. Please note that this is the first time I am using django-activity-stream.

I've checked django-activity-stream, and it seems to have no magic for rendering activity updates dynamically.
Check django-socketio for a WebSockets approach. Or plain polling would satisfy your requirement if the expected load is not that high.
I'm not quite sure what you mean by "fetch actions directly to template using various templatetags", because template rendering happens during the request-response procedure.
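If plain polling is enough, a minimal sketch could be a JSON endpoint that the template polls at an interval; the view below assumes django-activity-stream's Action model, and the URL wiring and 20-item limit are illustrative choices:

from django.http import JsonResponse
from actstream.models import Action

def latest_actions(request):
    # Return the most recent actions for the client-side poller.
    actions = Action.objects.order_by('-timestamp')[:20]
    payload = [{'verb': a.verb,
                'description': a.description,
                'timestamp': a.timestamp.isoformat()}
               for a in actions]
    return JsonResponse({'actions': payload})

The template then fetches this URL every few seconds and re-renders the activity list client-side.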


How can I send real-time data to a Django application and show it on a webpage?

I'm trying to add real-time features to my Django web app. Basically, I want to show real-time data on a webpage.
I have an external Python script which generates some JSON data (not big data, around 10 records per second). On the other side, I have a Django app, and I would like it to receive that data and show it on an HTML page in real time. I've already considered writing the data to a database and then retrieving it from Django, but that would mean too many queries, since Django would query the DB at least once per second for every user, and my external script would be writing a lot of data every second.
What I'm missing is a "central" system, a way to make these two pieces communicate. I know the question is probably not specific enough, but is there some way to do this? I know something about Django Channels, but I don't know whether it can do what I want; I've also considered pushing the data onto a RabbitMQ queue and then retrieving it from Django, but that is not the best use of RabbitMQ.
So is there a way to do this with Django Channels? Any kind of advice is appreciated.
I would suggest using Django Channels. You can also use Redis instead of RabbitMQ. In your case, Redis might be a better choice.
Here is an approach: http://www.maxburstein.com/blog/realtime-django-using-nodejs-and-socketio/
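For a sense of what the Channels route looks like, here is a minimal sketch assuming Channels 2+ with a Redis channel layer; the group name 'live_data' and the event shape are illustrative assumptions:

import json
from channels.generic.websocket import AsyncWebsocketConsumer

class DataConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        # Every browser joins the same group, so broadcasts reach all pages.
        await self.channel_layer.group_add('live_data', self.channel_name)
        await self.accept()

    async def disconnect(self, close_code):
        await self.channel_layer.group_discard('live_data', self.channel_name)

    async def push_data(self, event):
        # Invoked for group messages with {'type': 'push_data', ...}.
        await self.send(text_data=json.dumps(event['payload']))

Your external script, run where Django settings are configured (e.g. as a management command), can then broadcast each record:

from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

record = {'value': 42}  # whatever your script just generated
layer = get_channel_layer()
async_to_sync(layer.group_send)('live_data',
                                {'type': 'push_data', 'payload': record})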

Plone store form inputs in a lightweight way

I need to store anonymous form data (string, checkbox, FileUpload, ...) for a conference registration site, but ATContentTypes seems a little oversized to me.
Is there a lightweight alternative for saving the inputs?
SQL and PloneFormGen are not an option.
I need to list, view and edit the data inputs in the backend.
Plone 3.3.6
python 2.4
Thanks
You could use souper.
The description of the package matches your requirement exactly:
"ZODB Storage for lots of (light weight) data."
There's a Plone integration package, plone.souper.
There's also an implementation example; see collective.pfg.soup.
I guess this could fit your requirement.
I remember a talk at PloneConf 2013 where, as an example of souper's performance, someone imported Wikipedia articles: some slides.
By the way: I'm not sure about Plone 3.x / Python 2.4 support.
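For a sense of the API, a minimal souper sketch (the soup id and field names are my own illustration):

from souper.soup import get_soup, Record

def store_registration(context, form_data):
    # get_soup lazily creates the soup on first use.
    soup = get_soup('conference_registrations', context)
    record = Record()
    record.attrs.update(form_data)  # e.g. {'name': 'Jane', 'attending': True}
    return soup.add(record)  # the new record's id

Records can later be queried, listed and edited through the soup, which covers the list/view/edit requirement.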
Use uwosh.pfg.d2c (https://pypi.python.org/pypi/uwosh.pfg.d2c/)
It's an adapter for PloneFormGen (I know you said you don't want to use it, but keep reading). It transforms your data into real Archetypes content, and you can enable an option that makes it work for anonymous users.
It will also work on Plone 3.3.
Another approach is our collective.signupsheet (https://github.com/RedTurtle/collective.signupsheet), which is based on uwosh.pfg.d2c but focused on event registration. However, we never released it, so use it at your own risk.
One approach is to create a browser view that accepts and retrieves JSON data, and then do all of the form handling in custom HTML. The JSON could be stored in an annotation on the site root, or you could create a simple content type with a single field for holding the JSON and create one object per record. You'll need to write your own list and item view templates, which would be easier with the item-per-JSON-record approach, but that's not a large task.
If you don't want to store it in the ZODB, pick whatever file store you want, like Python's shelve module, and dump the data there instead.
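A sketch of the annotation variant, assuming submissions arrive as plain dicts (the annotation key and helper name are illustrative):

import json
from persistent.list import PersistentList
from zope.annotation.interfaces import IAnnotations

KEY = 'conference.registrations'

def save_submission(site, payload):
    annotations = IAnnotations(site)
    records = annotations.get(KEY)
    if records is None:
        records = annotations[KEY] = PersistentList()
    records.append(json.dumps(payload))  # each submission as a JSON string

Listing is then a matter of iterating over the PersistentList and json.loads-ing each entry in your view.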

Textbox warning for large queries with Django

I'm using Django to create a website for a project. The user fills out a form, then I run some queries with this data and display the results on another page. Currently it's a two-page site.
I want to warn the user if their query result data is very large. Say a user ends up getting 1000 rows in the results table; I want to warn them that queries of this size might take a long time to load. I imagine that between the form page and the results page I could show a popup textbox that displays the warning, appearing whenever the query result size is greater than 1000.
Does Django have a method for implementing this? How can I get this textbox to appear before the results page template is shown?
Yes, a queryset has a method for this. It is simply:
query.count()
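For example, here is a sketch of a view that shows a warning page first when the count is large; the threshold, template names and the build_results_queryset helper are illustrative:

from django.shortcuts import render

def results(request):
    qs = build_results_queryset(request.GET)  # hypothetical query builder
    count = qs.count()  # a single COUNT(*) query; no rows are fetched
    if count > 1000:
        return render(request, 'confirm_large_query.html',
                      {'count': count, 'query': request.GET.urlencode()})
    return render(request, 'results.html', {'results': qs})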
No, I don't think Django has a built-in function for this. You could easily do it yourself, though, using Django and JavaScript.
Loading a page with 1000 results really isn't that many. If the number of results is affecting performance, paginate them.
I think it might be a bit cleaner to just load the results page directly, with either:
paginated results, or
no results, having an AJAX request fetch the results after the page has loaded, so the page doesn't lag while loading them all.
What will your user think of an intermediary popup? I believe that to maximize their experience, you should load the page in the fastest, least intrusive way possible.

Psych Experiment in Python (w/Django) - how to port to interactive web app?

I'm writing a psychology experiment in Python, and I need to make it available as a web app. I've already got the Python basically working as a command-line program. On the recommendation of a CS buddy I'm using Django with a sqlite db. This is also working, my development server is up and the database tables are ready and waiting.
What I don't understand is how to glue these two pieces together. The Django tutorials I've found are all about building things like blogs, messaging systems or polls; systems based on sending form data. I can't do that, because I'm timing responses to presented stimuli in milliseconds - I need to build an interactive app that doesn't rely (during the exercise) on form POST data or URL changes.
In short: I have no idea how to go from my simple command line program to a "real time" interactive web application.
Maximum kudos for links to relevant tutorials! I will also really appreciate a high-level explanation of the concept I'm missing here.
(FYI, I asked a previous question (choice of database) about this project here)
You are going to need to use HTML/Javascript, and then you can collect and send the results to the server. The results can get gamed though, as the code for the exercise is going to be client side.
Edit: I recommend a Javascript library, jQuery: http://docs.jquery.com/Tutorials
Edit 2:
I'll be a bit more specific: you need at least two models in Django, Exercise and ExecutedExercise. Exercise will have fields for its name, number, etc., i.e. generic data for each exercise. ExecutedExercise will have two fields: a foreign key to Exercise, and a field storing how long it took to finish.
Now, in JavaScript, you're going to time the exercises and then post the timings to a Django view that handles the data storage. How to post them? You could use http://api.jquery.com/jQuery.post/ and create the data string data = { e1: timingE1, e2: timingE2 }, then post it to the view. In that view you can read the POST parameters, create an ExecutedExercise object for each (you'll have the time each exercise took) and save them.
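A sketch of those two models and the receiving view (field names such as duration_ms and the request parsing are my own assumptions):

# models.py
from django.db import models

class Exercise(models.Model):
    name = models.CharField(max_length=100)
    number = models.IntegerField(unique=True)

class ExecutedExercise(models.Model):
    exercise = models.ForeignKey(Exercise, on_delete=models.CASCADE)
    duration_ms = models.IntegerField()  # response time in milliseconds

# views.py
from django.http import HttpResponse
from django.views.decorators.http import require_POST

@require_POST
def record_timings(request):
    # POST keys like 'e1', 'e2' carry the timing for exercises 1, 2, ...
    for key, value in request.POST.items():
        if not (key.startswith('e') and key[1:].isdigit()):
            continue  # skip non-exercise fields such as the CSRF token
        exercise = Exercise.objects.get(number=int(key[1:]))
        ExecutedExercise.objects.create(exercise=exercise,
                                        duration_ms=int(value))
    return HttpResponse('ok')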

How to optimize for Django's paginator module

I have a question about how Django's paginator module works and how to optimize it. I have a list of around 300 items from information that I get from different APIs on the internet. I am using Django's paginator module to display the list for my visitors, 10 items at a time. The pagination does not work as well as I want it to. It seems that the paginator has to get all 300 items before pulling out the ten that need to be displayed each time the page is changed. For example, if there are 30 pages, then going to page 2 requires my website to query the APIs again, put all the information in a list, and then access the ten that the visitor's browser requests. I do not want to keep querying the APIs for the same information that I already have on each page turn.
Right now, my views module has a function that looks at the GET request and queries the APIs based on the query. It then puts all that information into a list and passes it to the template. So this function runs every time someone turns the page, querying the APIs again.
How should I fix this?
Thank you for your help.
The paginator will in this case need the full list in order to do its job.
My advice would be to update a cache of the feeds at a regular interval, and then use that cache as the input to the paginator module. Doing an intensive or lengthy task on each and every request is always a bad idea: if not for the page load times the user will experience, think of the vulnerability of your server to attack.
You may want to check out Django's low level cache API which would allow you to store the feed result in a globally accessible place under a key, which you can later use to retrieve the cache and paginate for each page request.
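A sketch of that caching step; the key name, five-minute timeout and fetch_items_from_apis helper are illustrative:

from django.core.cache import cache

def get_feed_items():
    items = cache.get('feed_items')
    if items is None:
        items = fetch_items_from_apis()  # your existing (expensive) API calls
        cache.set('feed_items', items, 60 * 5)  # keep for five minutes
    return items

The view then paginates the result of get_feed_items() instead of hitting the APIs on every page turn.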
ORMs do not load data until the rows are actually needed:
query_results = Foo.objects.filter(id=1)  # No SQL executed yet, just stored.
foo = query_results[0]  # Now it fires.
or
for foo in query_results:
    foo.bar()  # SQL fires on first iteration.
If you are using a custom data source that loads its results on initialization, then pagination will not work as expected, since all feeds will be fetched at once. You may want to implement __getitem__ or __iter__ to do the actual fetch; it will then coincide with the way Django expects the results to be loaded.
Pagination also needs to know how many results there are, to do things like has_next(). In SQL it is usually inexpensive to get a count(*) with an index, so you will also want a way to know how many results there would be (or maybe just an estimate, if an exact count is too expensive).
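A sketch of such a wrapper, assuming a hypothetical API client with total_count() and fetch(offset, limit) methods:

class LazyAPIResults:
    """Slice-aware wrapper so Paginator only fetches the page it needs."""

    def __init__(self, client):
        self.client = client  # hypothetical API client

    def __len__(self):
        # Paginator uses the length to compute num_pages and has_next().
        return self.client.total_count()

    def __getitem__(self, key):
        if isinstance(key, slice):
            # Paginator requests object_list[bottom:top] for the current page.
            return self.client.fetch(offset=key.start,
                                     limit=key.stop - key.start)
        return self.client.fetch(offset=key, limit=1)[0]

Paginator(LazyAPIResults(client), 10) would then issue one count call plus one fetch per page instead of loading everything up front.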
