Monitor MySQLdb in python for new entries and Flask - python

I'm looking for a way to constantly check my database (MySQL) for new entries. Once a new entry is committed I want to output it in a webpage using Flask.
Since the process takes time to finish, I would like to give users the impression that it took only a few seconds to retrieve the data.
For now I wait until the whole process finishes before giving the user the complete result, but I would prefer to update the result web-page every time a new entry is added to the DB. So, for example, the first entry is added to the DB and the user can immediately see it on the web-page; then a second entry is added and the user can now see both the first and the second entries on the web-page, and so on. I don't know whether this has to come from Flask or some other way.
Any idea?

You can set MySQL to log all commits to the General Query Log and monitor that file for changes (for example via Watchdog or PyNotify). Once the file changes, you can parse the new log entries and get the signal. This way you avoid polling for changes.
The better way would, of course, be to send the signal while storing the data to the database.
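The log-reading half of that approach can be sketched with a small helper that reads only the lines appended since the last check; a Watchdog `on_modified` handler would call it each time the file changes. This is an illustrative sketch: the log path and the simple `INSERT` substring filter are assumptions, not part of any library.

```python
def read_new_entries(log_path, offset):
    """Return the INSERT statements appended to the log since `offset`,
    plus the new offset, so the caller only ever parses fresh lines."""
    with open(log_path, "r") as f:
        f.seek(offset)            # skip everything already processed
        new_lines = f.readlines()
        new_offset = f.tell()
    # The General Query Log records raw SQL text, so a simple substring
    # check is enough to spot new rows being written.
    inserts = [line.strip() for line in new_lines if "INSERT" in line.upper()]
    return inserts, new_offset
```

The caller keeps the returned offset between invocations, so each file-change notification costs only one seek and a read of the new tail.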

Related

How to handle multi-user database interaction with PyQt5

I am developing a GUI app that will supposedly be used by multiple users. In my app, I use QAbstractTableModel to display an MS Access database (stored on a local server, accessed by several PCs) in a QTableView. I developed everything I needed for single-user interaction, but now I'm moving to the step where I need to think about multi-user interaction.
For example, if user A changes a specific line, the instance of the app on user B's PC needs to show the changed line. Another example: if user A is modifying a specific line and user B also wants to modify it, user B needs to be notified that it is "already being modified, please wait", and once user A's modification is done, user B needs to see that modification before he has any further interaction.
Today, because of the local nature of the MS Access database, I have to refresh the table view many times, based on user interaction, in order not to miss any database modification from other potential users. That is rather greedy in terms of performance and resources.
I was thinking about using Django to make the different app instances communicate with each other, but maybe I'm overthinking it and there are other solutions.
I don't know if that's clear; I'm available for more information!
Perhaps you could simply store a "lastUpdated" timestamp on the row. With each update, you update that timestamp.
Now, when you submit an update, you include that timestamp, and if the timestamps don't match, you let the user know, and handle the conflict on the frontend (Perhaps a simple "overwrite local copy, or force update and overwrite server copy" option).
That's a simple and robust solution, but if you don't want users wasting time writing updates for old rows, you could use WebSockets to communicate from a server to any clients with that row open for editing, and let them know that the row has been updated.
If you want to "lock" rows while a row is being edited, you could simply store an "inUse" boolean and have users check its value before continuing.
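The timestamp comparison can be done in a single conditional UPDATE, which keeps it race-free. Here is a minimal sketch using sqlite3 (standing in for MS Access), with an integer version column in place of a real timestamp to keep the example deterministic; the table and column names are made up for illustration:

```python
import sqlite3

def try_update(conn, row_id, new_value, seen_version):
    """Optimistic update: succeeds only if the row's version still
    matches what this client last read. Returns False on conflict."""
    cur = conn.execute(
        "UPDATE items SET value = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_value, row_id, seen_version),
    )
    conn.commit()
    return cur.rowcount == 1   # 0 rows touched -> someone else got there first
```

On a `False` return, the app re-reads the row and presents the "overwrite or keep server copy" choice described above.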
Usually, when using an MVC pattern (which is what QAbstractTableModel + QTableView implement), the responsibility for updating the view should lie with the model itself, i.e. it's the model that should notify the view that something changed.
QAbstractTableModel has a dataChanged signal that gets emitted on data changes.
I suggest connecting it to your view's refresh slot as done here.
This way you avoid the need for another moving part/infrastructure component (Django).
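Qt aside, the underlying idea is the observer pattern: the model owns the data and pushes change notifications to whatever displays it. A framework-free sketch of that contract (all names here are illustrative, not Qt API):

```python
class TableModel:
    """Minimal stand-in for Qt's model/view contract: views subscribe
    once, and every data change is pushed to them automatically."""

    def __init__(self, rows):
        self._rows = rows
        self._listeners = []

    def subscribe(self, callback):
        # the moral equivalent of connecting a slot to dataChanged
        self._listeners.append(callback)

    def set_value(self, row, col, value):
        self._rows[row][col] = value
        for notify in self._listeners:
            notify(row, col)   # "emit dataChanged" to every view
```

The view never polls: it registers once and is told exactly which cell to repaint.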

Is there any way mssql can notify my python application when any table or row has been updated?

I don't have much knowledge of databases, but I wanted to know whether there is any technique by which, when I update or insert a specific entry in a table, my Python application gets notified, so that I can then see what was updated and update that particular row in the data stored in the session or some temporary storage.
I need to handle data filter and sort calls again and again, so I don't want to fetch the whole data set from SQL every time; I decided to keep it local and process it from there. But I was worried that in the meantime the DB might update, and I could be passing the same old data to filter requests.
Any suggestions?
An RDBMS will only be updated through your own program's methods or functions.
You can simply print to the console or log inside those.
If you want to track what was updated, modified, or deleted,
you would have to build another program that is able to track the logs of the RDBMS.
Thanks.

Web Development - Flask and Database connection

I am building a small web-app based on Flask. I want to make a web-page which will show some data taken from a database. This database is filled with data from some internet resources. The data in this database should be updated every 30 sec, which means I am sending update requests every 30 sec.
The way of how I see it's working:
1). script_db.py gets data from web and puts it in the database and updates database every 30 sec.
2). script_main.py gets the updated data from the database and renders the HTML template.
Question:
I want script_db.py to be executed automatically in the background, so that script_main.py will always have access to the most updated data.
1). Is this the best way to work with a database that needs to be updated automatically?
2). How do I run script_db.py in the background and then start script_main.py whenever I want?
Thank You
Disclaimer: I am totally new to what I am trying to do. Any suggestion would help.
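One common pattern for the background half of this is a daemon thread inside the same process: the update logic from script_db.py becomes a function, and the Flask app starts it once at boot. A sketch under those assumptions (the `update_fn` callback and the 30-second interval come from the question; everything else is illustrative):

```python
import threading
import time

def start_background_updater(update_fn, interval=30.0):
    """Call update_fn every `interval` seconds in a daemon thread, so
    the web app keeps serving requests while the data refreshes."""
    def loop():
        while True:
            update_fn()          # e.g. fetch from the web, write to the DB
            time.sleep(interval)
    t = threading.Thread(target=loop, daemon=True)
    t.start()
    return t
```

Running the updater as a separate process under cron or a service manager works just as well and survives web-server restarts; the thread version is simply the smallest thing that works.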

Is there a way to check if a Django management command is running?

The views rely on Redis to be populated. Redis is populated from a management command ran every 10 minutes. This management command deletes all existing keys and re-adds them with new data. How could I determine if the management command is running from a django view?
Right now I'm having the management command write to an external file and have a view read that file on each request. If the database is refreshing via the management command I hold up the view until it finishes (polling style).
Django does not provide a pre-packaged way to check whether an administration command is running. This being said, you should never write code that explicitly blocks a view while waiting for some result. You can easily use up all threads and processes that the server that runs your application has made available to your application. Your users will have a poor experience on your site, even those that don't do anything that has to do with the problem you're trying to solve here.
What I'm getting from your description is that you want users to get reasonably fresh results. For something like this I would use a solution based on versioning the data. It would go like this:
Declare a Redis-backed cache in your settings.py file that will contain the data populated by the command and read by the view. Make sure the TIMEOUT of the cache is set to None.
A current version number is recorded with the key CURRENT_VERSION. This key is itself unversioned.
When the command refreshes the data in the cache, it stores it in keys with version set to CURRENT_VERSION + 1. You'll have something like:
current_version = cache.get(CURRENT_VERSION)
# Record the new data.
for ...:
    cache.set(some_key, some_value, version=current_version + 1)
Django's cache system does not readily allow getting a set of keys that correspond to a specific criterion. However, your view will want to obtain all keys that belong to a specific version. This information can be recorded as:
cache.set(ALL_RECORDS,
          [... list of keys set in the loop above ...],
          version=current_version + 1)
Where ALL_RECORDS is a key value that is guaranteed not to clash with CURRENT_VERSION or any of the keys set for the individual records.
Once the command is done, it atomically increases the value of CURRENT_VERSION:
cache.incr(CURRENT_VERSION)
The documentation on the Redis backend states that if you perform an increment on appropriate values (that's left vague but integers would seem appropriate) then Redis will perform the increment atomically.
The command should also clean up old versions from the cache. One way to ensure that old data does not linger is to set expiration times on the keys when you set their values. Your command refreshing the cache runs every 10 minutes, so you could set keys to expire after 15 minutes. But suppose a problem prevents the command from running for several cycles. What then? Your data will expire and be removed from the cache, and the view will run with an empty data set. If this is okay for your situation, then you could pass the timeout parameter every time you call cache.set, except for CURRENT_VERSION, which should never expire.
If you are not okay with your view running with an empty data set (which seems more probable to me), then you have to write code in your command to seek old versions and remove them explicitly.
Your view accesses the cache by:
Reading the value of CURRENT_VERSION:
current_version = cache.get(CURRENT_VERSION)
Reading the list of records in the version it got:
keys = cache.get(ALL_RECORDS, version=current_version)
Processing the records:
for key in keys:
    value = cache.get(key, version=current_version)
The view should detect the case where the cache has not been initialized and fail gracefully. When deploying the application, care should be taken that the command has run at least once before the site can be accessed.
If the view starts working while the command is updating the cache, it does not matter. From the point of view of the command, the cache is just accessing the previous version. From the point of view of the view, the command is busy creating the next version but this is invisible to the view. The view does not have to block.
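The whole protocol fits in a few lines. A dependency-free sketch that emulates the relevant slice of Django's cache API with a plain dict (the `version=` keyword is mimicked by keying on `(key, version)`; the key names follow the answer, the rest is illustrative):

```python
class VersionedCache:
    """Tiny stand-in for Django's cache API, just enough to show the
    refresh/read protocol described above."""

    def __init__(self):
        self._data = {}

    def get(self, key, default=None, version=0):
        return self._data.get((key, version), default)

    def set(self, key, value, version=0):
        self._data[(key, version)] = value

    def incr(self, key, version=0):
        self._data[(key, version)] = self._data.get((key, version), 0) + 1


CURRENT_VERSION, ALL_RECORDS = "current_version", "all_records"

def refresh(cache, records):
    """The management command: write everything under version N+1,
    then atomically flip CURRENT_VERSION so readers switch over."""
    v = cache.get(CURRENT_VERSION, 0)
    for key, value in records.items():
        cache.set(key, value, version=v + 1)
    cache.set(ALL_RECORDS, list(records), version=v + 1)
    cache.incr(CURRENT_VERSION)

def read_all(cache):
    """The view: pin the version it read first, so a refresh running
    concurrently (which only writes version N+1) cannot disturb it."""
    v = cache.get(CURRENT_VERSION, 0)
    keys = cache.get(ALL_RECORDS, [], version=v)
    return {k: cache.get(k, version=v) for k in keys}
```

With the real Django cache, `cache.incr` on the Redis backend provides the atomic flip; everything else is ordinary `get`/`set` calls with the `version` argument.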

Django uploading and importing a file

I have a Django app that is basically a web front end on a database.
Every now and then, users upload files containing perhaps 1000s of records. These records will need to be parsed out of the file, processed, and used to create new records or update existing records in the database. I'm just wondering what is the better approach for processing the uploaded file:
in the view (while the user waits - I guess this could be up to 5 minutes)?
save the uploaded file and have some background cron job call a custom admin command to process it? This seems the most sensible to me.
or perhaps another method I haven't thought of?
Celery seems to be pretty hot these days too, you should definitely look into this:
https://github.com/ask/django-celery
http://celeryproject.org/
Send an email when done, or have the front end poll for results every X seconds after submission. "Are we there yet?" "Are we there yet?"
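The poll-for-results flow can be sketched without Celery to show the moving parts: the record processing runs in a background thread, and the object exposes the progress an "are we there yet?" endpoint would serve. All names here are illustrative, not Celery API:

```python
import threading

class UploadJob:
    """Process records in a background thread; `progress()` is what a
    frontend polling endpoint would return as JSON."""

    def __init__(self, records, process_one):
        self.done = 0
        self.total = len(records)
        self._thread = threading.Thread(
            target=self._run, args=(records, process_one), daemon=True)
        self._thread.start()

    def _run(self, records, process_one):
        for record in records:
            process_one(record)  # parse / create / update one DB row
            self.done += 1

    def wait(self):
        self._thread.join()

    def progress(self):
        return {"done": self.done, "total": self.total,
                "finished": self.done == self.total}
```

Celery gives you the same shape (task state, retries, workers in separate processes) without tying the job's lifetime to one web-server process, which is why it is the usual recommendation.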
I'd also like to know a simple, safe way to start a thread that writes to the db.
