Best way to inform user of an SQL Table Update? - python

I have a desktop Python application whose data backend is a MySQL database, but whose previous backend was one or more network-accessed XML files. When it was XML-powered, I had a thread spawned at the launch of the application that simply checked the XML file for changes; whenever the modified date changed (because some user updated it), the app would refresh itself, so multiple users could see each other's changes as they went about their business.
Now that the program has matured and is venturing toward an online presence so it can be used anywhere, XML is out the window and I'm using MySQL with SQLAlchemy as the database access method. The plot thickens, however, because the information is no longer stored in one XML file; it is split across multiple tables in the SQL database. This complicates the idea of a single "last modified" value or structure. Thus the question: how do you inform the users that the data has changed and the app needs to refresh? Here are some of my thoughts:
Each table needs a last-modified column (this seems like the worst option ever)
A separate table that holds a single last-modified value?
Some sort of push notification through a server?
It should be mentioned that I could run a very small Python script on the same server hosting the SQL database; the app could connect to it and (through sockets?) pass information to and from all connected clients.
Some extra information:
The information passed back and forth would be pretty low-bandwidth. Mostly text with the potential of some images (rarely over 50k).
The number of clients at present is very small, in the tens, but the project could be picked up by some bigger companies, with client numbers possibly getting into the hundreds. Even then, bandwidth shouldn't be a problem for the foreseeable future.
Anyway, somewhat new territory for me, so what would you do? Thanks in advance!

As I understand it, this is not a client-server application, but rather an application with common remote storage.
One idea would be to switch to web services (this would solve most of your problems in the long run).
Another idea (if you don't want to switch to the web) is to periodically refresh the data in your interface using a timer.
Another way (more complicated) would be to have a server that receives all the updates, stores them in the database, and then pushes the changes to the other connected clients.
The first two ideas you mentioned will have maintenance, scalability, and design-ugliness issues.
The last two are a lot better in my opinion, but I still consider web services the best.
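If you go the timer route, a common pattern is a single bookkeeping table that every write touches, so clients only ever poll one cheap value instead of checking every table. A minimal sketch with SQLAlchemy; the app_meta table and its columns are hypothetical, not something from the question:

```python
import time
import threading
from sqlalchemy import create_engine, text

# Hypothetical bookkeeping table:
#   CREATE TABLE app_meta (id INT PRIMARY KEY, last_modified TIMESTAMP);
# Every write path (a trigger, or the shared data-access layer) bumps
# last_modified, so one SELECT tells a client whether anything changed.
engine = create_engine("mysql://user:password@host/dbname")

def watch_for_changes(refresh_callback, interval=5.0):
    """Poll the bookkeeping row and fire the callback when it changes."""
    last_seen = None
    while True:
        with engine.connect() as conn:
            stamp = conn.execute(
                text("SELECT last_modified FROM app_meta WHERE id = 1")
            ).scalar()
        if stamp != last_seen:
            last_seen = stamp
            refresh_callback()
        time.sleep(interval)

# Run the watcher in a background thread, much like the thread the
# XML version of the app spawned to watch the file's modified date.
threading.Thread(
    target=watch_for_changes,
    args=(lambda: print("data changed; refresh the UI"),),
    daemon=True,
).start()
```

This collapses the per-table last-modified columns the question worries about into a single row, at the cost of routing every write through code (or a trigger) that updates it.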

Related

Database permissions for Python Desktop application

I recently started developing a desktop Python application, and I would like to know how more experienced people would handle this issue.
I developed web applications with PHP + MySQL about 5-10 years ago. There, since the code is located on the server where the user has no access (except to the web pages), I could simply store user/group permissions in database tables, say users, users_groups, users_permissions, and so on, and then check on every page load whether the user had the right to access that page or update that record in the database.
With a desktop application, where the user has access to the executable (which, being written in Python, can relatively easily be decompiled to source code), the approach will likely be quite different.
Since MySQL has forked into MariaDB and is not so actively developed anymore, PostgreSQL looked like a promising place to start. I thought about creating different users at the PostgreSQL level and letting PostgreSQL handle the permissions (instead of my application handling them directly).
However, this only allows tuning of the permissions down to the table level. A user will be allowed to create/delete/update records in a table, however no further control is available. AFAIK you cannot tell "let this user only update his own records" or "this user can only delete the records from this group", or "users from group X can only update their own records while users from group Y can update everybody's records".
My understanding as how to handle this kind of issue would be to put some kind of middleware application between the user and the database, located on the server, such as:
Desktop application <-----> Server-side application permissions handler <-----> Database
Where the server-side permission handler could be as simple as adding a "WHERE user = ..." clause to each query, or much more advanced (first check the user's permissions stored in the database, then decide whether to execute the query or reject it). I think this is a common problem for all desktop applications, so I would expect such a server-side application to already exist. Am I missing something obvious, or does PostgreSQL perhaps allow more fine-grained tuning?
Thank you for all your help ;)
Your intuition is right. It is never a good idea to have a client access a database directly. Take a look at Django https://www.djangoproject.com and https://www.django-rest-framework.org
This would be the basis for your server side. You would handle business logic, authentication, and authorization there. The client should basically present the data in the UI and delegate all decision making to the server.
Here you can find a step-by-step tutorial on implementing a REST API with user authentication in Django: https://wsvincent.com/django-rest-framework-authentication-tutorial/
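As a concrete illustration of the "users can only update their own records" requirement from the question, Django REST Framework lets you express that as an object-level permission class. A minimal sketch; the Record model, its owner field, and the serializer are hypothetical:

```python
# permissions.py -- object-level permission: writes only by the owner.
# Assumes a hypothetical Record model with an `owner` ForeignKey to User.
from rest_framework import permissions, viewsets

class IsOwnerOrReadOnly(permissions.BasePermission):
    """Authenticated users may read anything; only owners may write."""

    def has_object_permission(self, request, view, obj):
        if request.method in permissions.SAFE_METHODS:
            return True
        return obj.owner == request.user

# views.py -- attach the permission to a standard viewset.
from .models import Record                 # hypothetical model
from .serializers import RecordSerializer  # hypothetical serializer

class RecordViewSet(viewsets.ModelViewSet):
    queryset = Record.objects.all()
    serializer_class = RecordSerializer
    permission_classes = [IsOwnerOrReadOnly]
```

The desktop client then talks HTTP to this API and never holds database credentials, which is exactly the middleware layer sketched in the question.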

Saving long term configuration from a web application

I am currently working on a small scale web application in Django, which will be integrated with a microcontroller.
All the data gathered by the application is saved in an SQL database.
Some of the data will be changed and accessed frequently (e.g. user data, permissions, etc.); other data (microcontroller configuration, for example) will be changed rarely but accessed frequently by the backend code that works with the microcontroller.
I don't want the microcontroller code to access the database each time to sample its configuration parameters. However, I also don't want to change it directly from the web application (this will break the interface through which I update all other entries in the site).
So far I have thought about caching; however, there might be a simpler solution I'm missing...
Use DB caching:
https://docs.djangoproject.com/en/1.9/topics/cache/#database-caching
or Memcached, covered in the same article.
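A sketch of the database-cache setup that article describes; the cache table name is arbitrary, and the loader function is hypothetical:

```python
# settings.py -- enable Django's database cache backend.
# Create the table afterwards with: python manage.py createcachetable
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.db.DatabaseCache",
        "LOCATION": "config_cache_table",  # arbitrary table name
    }
}
```

The microcontroller-facing code then reads configuration through the cache API and only hits the database on a miss:

```python
from django.core.cache import cache

def get_mcu_config():
    config = cache.get("mcu_config")
    if config is None:
        config = load_config_from_db()  # hypothetical: the normal ORM read
        cache.set("mcu_config", config, timeout=3600)  # cache for an hour
    return config
```

Because the web UI writes through the same models as always, it just needs to call cache.delete("mcu_config") after saving, so nothing about the existing update interface breaks.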

Django and Rails with one common DB

I have earlier worked on Java+Spring to create a web-app.
I have to build a new web-app now.
It will have one centralized db.
There will be two different types of web-app instances.
Web-App 1:
a) It would render no UI: no HTML, JS, etc.
b) All it needs to provide is a set of REST APIs which will
b.1) create some new entries in the DB
b.2) modify some entries in the DB
b.3) retrieve some DB records in JSON format.
Some frontend code (not part of this app) will periodically fetch
these details.
c) It will be used by at most 100,000 people, but at any given time
we can expect about 1,000 users logged in and doing what is described in b).
Web-App2 :
a) It will have some dashboards
b) 90% of DB operations would be read operations
c) 10% of DB operations would be write/modify
d) There will be thousands of users of this system, and at any given time
only 50-1,000 people will be accessing it.
I am thinking of following.
Have Web-App 1 created in python+Django and Web-App 2 created in RoR.
I am planning to use DynamoDB and Memcached.
Why two different frameworks?
1) So that I get to learn both of them
2) There have been concerns about scalability in RoR (and I know people claim this is not an issue); Web-App 1 may have scaling needs in the future.
My question is: do you see any problem with this combination?
For example, ActiveRecord expects specific naming conventions for your database tables. Are there any other concerns similar to this?
Anyone else who have used similar technology stack?
Both frameworks are full-stack frameworks and provide MVC, templating, unit testing, security, DB migrations, caching, and ORMs.
For my startup, we also needed to put out a full-fledged website along with an API. We are also using DynamoDB for storing most of the data, and are only using MySQL for session info.
I opted to use Ruby on Rails for the web app and Sinatra for the API. If your criteria is simply learning as many new things as possible, then it makes sense to opt for relatively different stacks (Django/Python and RoR). In our case, we went with Sinatra because it's essentially a very lightweight wrapper around Rack and perfect for an API which receives requests, calls one or more services or does some processing, and hands out a formatted response. While I don't see any problem with using Python/Django instead of Sinatra, in our case the benefit was spending less time working in a different language.
Also, scalability in rails is a bit of an iffy subject. In the end, it's about how you use it. We've had no issues scaling rails with unicorn and nginx. Our business logic is all in the API service and the rails server as well uses the API for most of the work. This means we don't use active record on rails and the website is just another consumer for our API which does all the heavy lifting whether the request comes from an app or the website. Using MySQL for the session store ensures we can route requests to any of the application servers without having to worry about always routing requests from the same client to the same server every time. This allows us to ramp up and down easily only considering the amount of traffic we're getting.
At the time we started working on this, there wasn't an ORM for DynamoDB which looked and felt just like ActiveRecord, so we ended up writing a few high-level classes of our own to handle storage and retrieval of models on DynamoDB. Considering DynamoDB is not tailored for scans or joins, this didn't take a lot of effort, since we were almost always doing lookups based on keys and ranges. This meant we didn't really need a replacement for ActiveRecord, since its real strength is being able to intuitively do joins, etc. by convention.
DynamoDB does have its limitations, though, and you might find yourself in situations where you need to scan a large number of records. In our case, we also use CloudSearch to index some important info and use it as a fallback for cases when we need to do text-based searches that would otherwise scan all our data.
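Since the answer leans on key-and-range lookups, here is roughly what that pattern looks like from Python with boto3; the table, key names, and region are made up for illustration:

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table "events": partition key "user_id", sort key "created_at".
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("events")

def recent_events(user_id, since_iso):
    """Fetch one user's events newer than a timestamp -- a key+range
    query, the access pattern DynamoDB is built for (no scans, no joins)."""
    response = table.query(
        KeyConditionExpression=(
            Key("user_id").eq(user_id) & Key("created_at").gt(since_iso)
        )
    )
    return response["Items"]

print(recent_events("user-42", "2014-01-01T00:00:00"))
```

As soon as a query can't be phrased against a key like this, you are in scan territory, which is the point where the answer reaches for CloudSearch instead.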

Standalone Desktop App with Centralized Database (file) w/o Database Server?

I have been developing a fairly simple desktop application to be used by a group of 100-150 people within my department mainly for reporting. Unfortunately, I have to build it within some pretty strict confines similar to the specs called out in this post. The application will just be a self contained executable with no need to install.
The problem I'm running into is figuring out how to handle the database need. There will probably only be about 1GB of data for the app, but it needs to be available to everyone.
I would embed the database with the application (SQLite), but the data needs to be refreshed every week from a centralized process, so I figure it would be easier to maintain one database, rather than pushing updates down to the apps. Plus users will need to write to the database as well and those updates need to be seen by everyone.
I'm not allowed to set up a server for the database, so that rules out any good options for a true database. I'm restricted to File Shares or SharePoint.
It seems like I'm down to MS Access or SQLite. I'd prefer to stick with SQLite because I'm a fan of Python and SQLAlchemy, but based on what I've read, SQLite is not a good solution for multiple users accessing it over the network (if that is even possible).
Is there another option I haven't discovered for this setup or am I stuck working with MS Access? Perhaps I'll need to break down and work with SharePoint lists and apps?
I've been researching this for quite a while now, and I've run out of ideas. Any help is appreciated.
FYI, as I'm sure you can tell, I'm not a professional developer. I have enough experience in web / python / vb development that I can get by - so I was asked to do this as a side project.
SQLite can operate across a network and be shared among different processes. It is not a good solution when the application is write-heavy (because it locks the database file for the duration of a write), but if the application is mostly reporting it may be a perfectly reasonable solution.
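A sketch of that setup with SQLAlchemy, using a generous busy timeout so concurrent writers wait for the file lock instead of immediately failing with "database is locked"; the share path is hypothetical:

```python
from sqlalchemy import create_engine, text

# SQLite database file sitting on a network share (hypothetical path).
# The timeout is handed to sqlite3.connect() and makes a writer wait up
# to 30 seconds for the lock rather than raising straight away.
engine = create_engine(
    "sqlite:////fileserver/share/reports.db",
    connect_args={"timeout": 30},
)

with engine.connect() as conn:
    rows = conn.execute(text("SELECT * FROM reports")).fetchall()
```

Whether the locking behaves reliably depends on the network file system honoring file locks, which is exactly why write-heavy use over a share is discouraged.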
As my options are limited, I decided to go with a built-in database for each app using SQLite. The DB will only need to be updated every week or two, so I figured a 30-second update pulling from flat files would be OK. Then the user will have all the data locally to browse as needed.
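That weekly refresh might look something like this with only the standard library; the share path, file, and table name are made up, and it assumes the CSV header row contains valid column names:

```python
import csv
import sqlite3

def refresh_local_db(csv_path="//fileserver/share/export.csv",
                     db_path="local_reports.db"):
    """Rebuild the local SQLite copy from the weekly flat-file export."""
    conn = sqlite3.connect(db_path)
    with conn, open(csv_path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader)  # first row: column names
        conn.execute("DROP TABLE IF EXISTS reports")
        conn.execute("CREATE TABLE reports (%s)" % ", ".join(header))
        placeholders = ", ".join("?" for _ in header)
        conn.executemany(
            "INSERT INTO reports VALUES (%s)" % placeholders, reader
        )
    conn.close()

refresh_local_db()
```

sqlite3's connection context manager commits the inserts on success and rolls them back if an error occurs part-way through the load.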

Sending data through the web to a remote program using python

I have a program that I wrote in Python that collects data. I want to be able to store the data on the internet somewhere and allow another user to access it from another computer anywhere in the world that has an internet connection. My original idea was to use an e-mail client, such as Gmail, to store the data by sending pickled strings to the address. This would allow anyone to access the address and simply read the newest e-mail to get the data. It worked perfectly, but the program requires a new e-mail to be sent every 5-30 seconds, so the method fell through because of Gmail's limit on e-mails, among other reasons (for example, I was unable to completely delete old e-mails).
Now I want to try a different idea, but I do not know very much about network programming with Python. I want to set up a webpage with essentially nothing on it. The "master" program, the program actually collecting the data, will send a pickled string to the webpage. Then any of the "remote" programs will be able to read the string. I will also need the master program to delete old strings as it updates the webpage. It would be preferable to store multiple strings, so there is no chance of the master updating while a remote is reading.
I do not know if this is a feasible task in Python, but any and all ideas are welcome. Also, if you have any ideas on how to do this a different way, I am all ears (well, eyes in this case).
I would suggest taking a look at setting up a simple site on Google App Engine. It's free and you can use Python to build the site. Then it would just be a matter of creating a simple RESTful service that you could send a POST to with your pickled data and store it in a database. Then just create a simple web front end onto the database.
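From the collecting side, pushing each sample to such an endpoint is only a few lines with the requests library; the URL is a placeholder, and JSON is used instead of pickle in anticipation of the security caveat below:

```python
import requests

def push_sample(sample):
    """POST one data sample to the (hypothetical) collection endpoint."""
    resp = requests.post(
        "https://your-app.appspot.com/api/samples",  # placeholder URL
        json=sample,  # serialized as JSON rather than pickle
        timeout=10,
    )
    resp.raise_for_status()  # surface HTTP errors instead of dropping data

push_sample({"sensor": "temp-1", "value": 21.7})
```

The remote programs read the same endpoint with requests.get and parse the JSON, so neither side ever unpickles untrusted bytes.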
Another option in addition to what Casey already provided:
Set up a remote MySQL database somewhere with user access levels that allow remote connections. Your Python program could then simply access the database and INSERT the data you're trying to store centrally (e.g. through the MySQLdb or pyodbc packages). Your users could then either read the data through a client that supports MySQL, or you could write a simple front end in Python or PHP that displays the data from the database.
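The insert itself is short. A sketch with MySQLdb and a parameterized query; host, credentials, and schema are placeholders:

```python
import MySQLdb

# Placeholder credentials and schema -- point these at your own server.
conn = MySQLdb.connect(host="db.example.com", user="collector",
                       passwd="secret", db="telemetry")

def store_reading(sensor, value):
    """INSERT one reading; %s placeholders keep the query injection-safe."""
    cur = conn.cursor()
    cur.execute(
        "INSERT INTO readings (sensor, value) VALUES (%s, %s)",
        (sensor, value),
    )
    conn.commit()
    cur.close()

store_reading("temp-1", 21.7)
```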
Adding this as an answer so that OP will be more likely to see it...
Make sure you consider security! If you just blindly accept pickled data, it can open you up to arbitrary code execution.
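A simple way to close that hole is to serialize with json instead of pickle: json.loads can only ever yield plain data structures, never executable objects.

```python
import json

data = {"sensor": "temp-1", "value": 21.7}

wire = json.dumps(data)      # plain text on the wire
received = json.loads(wire)  # worst case is a ValueError, never code execution

# By contrast, pickle.loads(untrusted_bytes) can run arbitrary code
# during unpickling, so never call it on data from the network.
```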
I suggest you use a good middleware such as ZeroC Ice, Pyro4, or Twisted.
Note that Pyro4 uses pickle to serialize data.
