I have a custom Django management command that runs continuously under Supervisor (kept alive).
Under certain circumstances (for example, a signal in the project, or a change in the database) I need the management process to react to those changes.
I tried looking for something similar but couldn't find a simple implementation.
UPD: In the management command I open a streaming connection to the Twitter API to track new tweets for the tags stored in the Django database. When a new tag is added to the database, I want to restart the stream connection.
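One low-tech way to do this is to have the command's main loop poll the database and compare the current tag set with the one the stream was opened with. A minimal sketch, where the `Tag` model and the `start_stream()` helper (something wrapping the Twitter streaming client, e.g. tweepy) are assumptions rather than existing code:

```python
# myapp/management/commands/stream_tweets.py
import time

from django.core.management.base import BaseCommand

from myapp.models import Tag               # hypothetical tag model
from myapp.streaming import start_stream   # hypothetical helper around the stream client


class Command(BaseCommand):
    help = "Stream tweets for the tags in the database, reconnecting on changes"

    def handle(self, *args, **options):
        tracked = set(Tag.objects.values_list("name", flat=True))
        stream = start_stream(tracked)  # runs on its own thread
        while True:
            time.sleep(10)  # poll the database for added/removed tags
            current = set(Tag.objects.values_list("name", flat=True))
            if current != tracked:
                stream.disconnect()             # drop the old connection
                tracked = current
                stream = start_stream(tracked)  # reconnect with the new tag set
```

Polling avoids any cross-process signalling: since Supervisor keeps the command alive anyway, the loop itself can be the place that notices database changes.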
I'm working on a platform that uses Django and Django REST Framework. The platform is used to monitor the state of some sensors which send data to the computer.
Now I have the data on the computer, and I want to create a Python script to read the data from the computer and store it in a MySQL database that Django will read and show to the user.
I have two questions:
Can I make Django read from the file, save the data in the database, and show it in the user interface?
If not, how could I make a script to run in the background when I start the project?
Can I make Django read from the file and save the data in the database?
Yes, you could create a management command for this, which could be run from crontab. Or create a sort of 'daemonized' version which keeps running but sleeps for X seconds between runs. This command reads the data and puts it into the database.
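A minimal sketch of such a command, assuming the data arrives as a CSV file and a hypothetical `Reading` model with `sensor_id` and `value` fields:

```python
# sensors/management/commands/import_sensor_data.py
import csv
import time

from django.core.management.base import BaseCommand

from sensors.models import Reading  # hypothetical model


class Command(BaseCommand):
    help = "Read sensor data from a file and store it in the database"

    def add_arguments(self, parser):
        parser.add_argument("--loop", action="store_true",
                            help="keep running, sleeping between passes")

    def handle(self, *args, **options):
        while True:
            with open("/var/data/sensors.csv", newline="") as f:
                for row in csv.DictReader(f):
                    Reading.objects.create(sensor_id=row["sensor_id"],
                                           value=row["value"])
            if not options["loop"]:
                break
            time.sleep(30)  # the 'daemonized' variant: sleep, then run again
```

Run it once from crontab with `python manage.py import_sensor_data`, or keep it alive with `--loop`.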
show it in the user interface?
Yes, but I would advise against doing this sequentially (as in, don't put your 'data reading' in your view!!). So your management command updates the database with the latest data, and your view only shows the latest data.
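The view then stays trivial because it never touches the file itself; a sketch reusing the hypothetical `Reading` model from above:

```python
# sensors/views.py
from django.shortcuts import render

from sensors.models import Reading  # hypothetical model


def latest_readings(request):
    # Only read what the management command already stored.
    readings = Reading.objects.order_by("-id")[:50]
    return render(request, "sensors/latest.html", {"readings": readings})
```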
You could also use django-crontab (https://github.com/kraiz/django-crontab), which makes this even simpler to set up.
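With django-crontab the schedule lives in your settings; here `sensors.cron.import_readings` is a hypothetical function doing the file-to-database import:

```python
# settings.py
INSTALLED_APPS = [
    # ...
    "django_crontab",
]

CRONJOBS = [
    ("*/5 * * * *", "sensors.cron.import_readings"),  # every five minutes
]
```

After adding the entry, register the jobs in the system crontab with `python manage.py crontab add`.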
I recently started developing a desktop Python application, and I would like to know how more experienced people would handle this issue.
I used to develop web applications (about 5-10 years ago) using PHP + MySQL. There, since the code/program is located on the server where the user has no access (except to the web pages), I could simply store the user/group permissions in the database, in tables like users, users_groups, users_permissions, and so on. I would then check on every page load whether the user had the right to access that page or update that record in the database.
With a desktop application, where the user has access to the executable (which, being written in Python, can be decompiled to source code relatively easily), the approach will likely be quite different.
Since MySQL has been forked into MariaDB and is no longer so actively developed, PostgreSQL looked like a promising place to start. I thought about creating different users at the PostgreSQL level and letting PostgreSQL handle the permissions (instead of my application handling them directly).
However, this only allows tuning the permissions down to the table level. A user will be allowed to create/delete/update records in a table, but no finer control is available. AFAIK you cannot say "let this user only update his own records", or "this user can only delete the records from this group", or "users from group X can only update their own records while users from group Y can update everybody's records".
My understanding of how to handle this kind of issue is to put some kind of middleware application, located on the server, between the user and the database, such as:
Desktop application <-----> Server-side application permissions handler <-----> Database
Here the server-side permission handler could be as simple as adding a "WHERE user = ..." clause to each query, or much more advanced (first check the user's permissions stored in the database, and based on that decide whether to execute the query or reject it). I think this is a common problem for all desktop applications, so I would expect such a server-side application to already exist. Am I missing something obvious, or does PostgreSQL perhaps allow more fine-grained tuning?
Thank you for all your help ;)
Your intuition is right: it is never a good idea to have a client access a database directly. Take a look at Django (https://www.djangoproject.com) and Django REST Framework (https://www.django-rest-framework.org).
This would be the basis for your server side. You would handle business logic, authentication, and authorization there. The client should basically present the data in the UI and delegate all decision making to the server.
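For example, the "let this user only update his own records" rule maps directly onto Django REST Framework's object-level permissions. A minimal sketch, assuming a model with an `owner` foreign key:

```python
# permissions.py
from rest_framework import permissions


class IsOwnerOrReadOnly(permissions.BasePermission):
    """Allow writes only by the owner of the object."""

    def has_object_permission(self, request, view, obj):
        if request.method in permissions.SAFE_METHODS:
            return True                   # anyone may read
        return obj.owner == request.user  # only the owner may update/delete
```

Attach it to a view with `permission_classes = [permissions.IsAuthenticated, IsOwnerOrReadOnly]`; group-based rules like "group Y can update everybody's records" follow the same pattern with a check on `request.user.groups`.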
Here you can find a step-by-step tutorial on implementing a REST API with user authentication in Django: https://wsvincent.com/django-rest-framework-authentication-tutorial/
I have built a python script that sends users telegram notifications about things happening in their account on another service.
For this a user needs to specify API keys for said service so that my script can pull the required information.
Currently, for a new user, I manually create a new folder on my VPS, a new venv, and a new settings file, and I run the application from a screen session named after the user. This is becoming tedious with 10+ users, especially with updates to the script.
I am currently building a Flask-based website where users can log in and set their API keys and other parameters on their own dashboard.
What I want to achieve:
- when a user registers, a new instance of the script has to be created, with a settings file next to it containing the user's information
- the user should have the option to start/stop said application from the dashboard
- when I release an update to the script, I want to deploy it to all users at once and restart their script if it was running
- basically, the Flask website should only act as a configuration dashboard/frontend for the script that runs on my server, so that people don't need their own VPS or have to leave their private system running 24/7
How do I go about this? Is it "just" file handling, creating new folders and files from a blueprint after a user registers? Are there better practices?
I tried to find answers via Google and the Stack Overflow search, but I did not find a specific recommendation for this use case.
If anyone could point me towards a resource on that or even better an example somewhere I'd really appreciate it!
Thanks in advance.
You should have only one script, with all the configurations saved in a database. When you need to dispatch a notification, just pass the right parameters to the script.
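A minimal sketch of that single-script approach, assuming a hypothetical SQLite table `users(telegram_chat_id, api_key, enabled)` that the Flask dashboard writes to:

```python
# worker.py
import sqlite3
import time


def load_configs(db_path="dashboard.db"):
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            "SELECT telegram_chat_id, api_key FROM users WHERE enabled = 1"
        ).fetchall()
    finally:
        con.close()


def check_account(api_key):
    """Poll the external service for one user's account (stub)."""
    return []  # hypothetical: return a list of events worth notifying about


def main():
    while True:
        for chat_id, api_key in load_configs():
            for event in check_account(api_key):
                print(f"notify {chat_id}: {event}")  # send via the Telegram bot API here
        time.sleep(60)


if __name__ == "__main__":
    main()
```

Start/stop from the dashboard then becomes flipping the `enabled` flag, and an update means restarting one process instead of one per user.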
I'll briefly explain what I'm trying to achieve: we have a lot of servers behind ipvsadm VIPs (LVS load balancing), and we regularly move servers in and out of VIPs manually. To reduce risk (junior ops make mistakes...), I'd like to abstract this behind a web interface.
I have a Python daemon which repeatedly runs "ipvsadm -l" to get a list of servers and statistics, then creates JSON from this output. What I'd now like to do is serve this JSON and have a web interface that can pass commands. For example, selecting a server in the web UI and pressing remove triggers an ipvsadm -d <server>... command. I'd also like the web UI to update every 10 seconds or so with the statistics from the list command.
My current Python daemon just outputs to a file. Should I somehow make this daemon also a web server that serves its file and accepts POST requests with command identifiers/arguments? Or should there be a second daemon for the web UI? My only front-end experience is with basic Bootstrap and jQuery, usually backed by Laravel, so I'm not sure if there's a better way of doing this with sockets and some fancy modern JS.
If there is a more appropriate place for this post, please move it if possible or let me know where to re-post.
You don't need a fancy JS application. To take the path of least resistance, I would create a separate small application. If you like Python, I recommend Flask for this job; if you prefer PHP, then how about Slim?
In your web application, if you want to keep it fast and easy, you can implement an AJAX mechanism that fetches results on an interval, refreshing the servers' data every 10 seconds. You would fetch it from the JSON served by the independent, already existing daemon.
Running the commands clicked in the web UI can be done by your web application.
The web application is something extra, and I find it nice that it is separated from the daemon which fetches data about the servers and saves it as JSON. You can turn the page off at any time, but the statistics will still be collected and available to console users in JSON format.
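A minimal sketch of that Flask layer, assuming the existing daemon writes its statistics to /var/run/ipvs_stats.json and that the ipvsadm arguments below match your setup:

```python
# webui.py
import json
import subprocess

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
STATS_FILE = "/var/run/ipvs_stats.json"  # written by the existing daemon


@app.route("/stats")
def stats():
    # The page polls this endpoint every ~10 seconds via AJAX.
    with open(STATS_FILE) as f:
        return jsonify(json.load(f))


@app.route("/remove", methods=["POST"])
def remove_server():
    vip = request.form["vip"]
    server = request.form["server"]
    # Pass an argv list, never a shell string built from user input.
    result = subprocess.run(["ipvsadm", "-d", "-t", vip, "-r", server],
                            capture_output=True, text=True)
    if result.returncode != 0:
        abort(500, result.stderr)
    return jsonify({"removed": server})
```

In production you would also want authentication and a whitelist of servers that may be passed to the command.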
I have an application that uses Python App Engine. There is a service that updates the status of users, and if an admin has a page open, I need it to update in real time. I know that App Engine has cron and task queues; what would be the correct way to handle this? Should I set an update flag in the models that triggers JavaScript?
The Channel API can be used to send real-time(ish) data to clients, without the clients needing to poll the server.
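A minimal server-side sketch using the (legacy, first-generation Python runtime) Channel API; `client_id` is any string your app uses to identify the admin's browser session:

```python
import json

from google.appengine.api import channel


def open_channel(client_id):
    # Hand the returned token to the page, which opens the channel with
    # the JavaScript client library (goog.appengine.Channel).
    return channel.create_channel(client_id)


def push_status_update(client_id, status):
    # Call this from the service that updates user statuses.
    channel.send_message(client_id, json.dumps(status))
```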