I am writing a client-server type application. The server side gathers constantly changing data from other hardware and then needs to pass it to multiple clients (say about 10) for display. The server data gathering program will be written in Python 3.4 and run on Debian. The clients will be built with VB Winforms on .net framework 4 running on Windows.
I had the idea to run a lightweight web server on the server side and have the clients fetch the data with system.net.webclient.downloadstring calls, so that all the multi-threading and async handling is done for me by the web server.
Questions:
Does this seem like a good approach?
Having my data-gathering program write a text file for the web server to serve seems unnecessary. Is there a way to keep the data in memory and have the server serve it directly, so there is no disk-file intermediary (see the sketch below for the kind of thing I'm picturing)? Setting up a ramdisk was one solution I thought of, but that seems like overkill.
How will the web server deal with the data being updated frequently, say once a second? Do web servers handle this gracefully, or is there a chance the file will be served while it is being written to?
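To illustrate the second question, here is a rough, untested sketch of serving straight from memory with only the Python standard library (handler and variable names are made up):

    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from socketserver import ThreadingMixIn

    latest = {}              # the most recent reading, held only in memory
    lock = threading.Lock()  # guards concurrent reads/updates of "latest"

    def update(reading):
        # Called by the data-gathering code whenever new data arrives.
        with lock:
            latest.clear()
            latest.update(reading)

    class SnapshotHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            with lock:
                body = json.dumps(latest).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    class ThreadedHTTPServer(ThreadingMixIn, HTTPServer):
        pass  # one thread per client request, so ~10 clients is no problem

    if __name__ == "__main__":
        ThreadedHTTPServer(("0.0.0.0", 8000), SnapshotHandler).serve_forever()

The clients would then just call downloadstring against http://server:8000/ once a second, and since nothing touches disk there is no half-written file to worry about (if my understanding is right).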
Thanks.
1) I am not very familiar with Python, but for the .NET application you will likely want to push change notifications to the clients rather than have them pull. A system.net.webclient.downloadstring call is a request (pull). I can't help with the Python side of this.
3) Because the clients are requesting (pulling) the data, you can get read/write errors if the data is being updated and read at the same time. Even if that does not happen, the data may be out of date as soon as you read it. That can be acceptable; it depends on how critical your data is.
This is why I would use push notifications rather than pulls. Done correctly, this keeps the data in sync and avoids some of the timing issues.
I am working on a project in which I have a couple of remote IoT devices that send messages via UDP. I want to build a server that can receive this constant flow of UDP messages and store them in a database. I would also like to provide a (REST) API so the information in this database can be accessed (every 15/30 minutes or so) from other applications.
Does anyone have any suggestions for how to do this (preferably in python)?
So far I am able to do the following (in Python):
I know how to make a UDP client and server, and send messages between them using "socket". This link provided a useful explanation.
I know how to create a Flask server, store random data in a database using SQLAlchemy, and make the database content available via an API that can be accessed via Postman. This link showed me how.
What I am not able to do:
Tying everything together is where the problem arises. Specifically, I don't know how to combine the methods above so that everything works at the same time (in the same loop, so to speak). Both Flask and the UDP server run their own loops and listen (for events?), so I don't see how those processes could run simultaneously.
One thing I was considering is to run the UDP server plus the database insertion in one terminal, and the Flask/API server in another. That would mean the database is opened and accessed by multiple programs at the same time. Is that possible? It would be like opening a single Excel sheet multiple times (which I don't think is permitted).
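To make this concrete, here is a rough, untested sketch of what I imagine for the single-program version: the UDP listener in a background thread, Flask in the main thread, each with its own sqlite3 connection to the same file (plain sqlite3 instead of SQLAlchemy to keep the sketch short; names and ports are placeholders):

    import socket
    import sqlite3
    import threading
    from flask import Flask, jsonify

    DB = "readings.db"  # placeholder path
    app = Flask(__name__)

    def udp_listener():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 5005))  # placeholder port
        conn = sqlite3.connect(DB)    # this thread's own connection
        conn.execute("CREATE TABLE IF NOT EXISTS messages (payload TEXT)")
        while True:
            data, _addr = sock.recvfrom(1024)
            conn.execute("INSERT INTO messages (payload) VALUES (?)",
                         (data.decode("utf-8", "replace"),))
            conn.commit()

    @app.route("/api/messages")
    def get_messages():
        conn = sqlite3.connect(DB)    # a fresh connection per request
        rows = conn.execute("SELECT payload FROM messages").fetchall()
        conn.close()
        return jsonify(messages=[r[0] for r in rows])

    if __name__ == "__main__":
        threading.Thread(target=udp_listener, daemon=True).start()
        app.run(port=5000)

From what I've read, SQLite itself does allow several connections, from threads or from separate processes, to open the same file (writes just take a short lock), so perhaps the two-terminal variant would also work.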
I also came across Flask-Sockets, a library that lets you combine Flask with websockets, but that doesn't seem to support UDP as far as I understand.
Many thanks!
I have a bottle server running on port 8080, using the "gevent" server. I use this server to support some simple server-sent events.
My question is probably related to not knowing exactly how my setup works. I hope someone can take the time to elaborate on this.
All routes and file serving from the server work great, but I have an issue when accessing one specific route, "/get_data". This route gathers data from the web as well as from some internal data sources, and the gathering takes about 30 minutes. While this process is running, I am not able to access any other routes on the server, e.g. "/" or "/login". Once the process is finished everything works again, and the database is updated with the gathered information.
I tried replacing the gathering algorithms with a simple time.sleep(60), and while the timer was active I was still able to access other routes just fine.
This leads to my two questions:
Why am I not able to access the server while this process is running? Is the port blocked (because it is busy fetching web information), or does it have something to do with threading?
What would be the best way to run a demanding, long-running process on my server? Preferably I would like to trigger it from my web app, but I have also thought about putting it in a separate Python file and running it locally on the server in a separate Python instance. This process runs at most once per day, maybe as seldom as once per week.
This happens because WSGI handles requests synchronously: while the server is busy inside the long-running /get_data handler, it cannot serve any other request.
You can use gunicorn to run your application with several workers, so that multiple requests are handled at the same time, or you can use one of the other methods described on the bottle website:
Primer to Asynchronous Applications
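For example, if your routes live in a module called myapp.py (the names below are just an illustration), you can expose the WSGI application object and start it with several gunicorn workers, so one slow request no longer blocks the others:

    # myapp.py -- sketch; your existing routes go here
    import bottle

    @bottle.route('/')
    def index():
        return "ok"

    app = bottle.default_app()  # the WSGI object gunicorn will import

    # Start it with several worker processes, for example:
    #   gunicorn --workers 4 --bind 0.0.0.0:8080 myapp:app
    # While one worker is busy inside /get_data, the other workers keep
    # serving "/" and "/login". Since you also use server-sent events,
    # the gevent worker class may be a better fit:
    #   gunicorn -k gevent --workers 4 --bind 0.0.0.0:8080 myapp:app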
I have a bunch of Google Alerts set up as RSS feeds that update in real time. What I want is to be able to store the new data the RSS feeds send out in a database.
After looking around I found that Google and Superfeedr both offer hubs that do most of the work for you; however, they both require a callback URL (obviously). I do have an Apache server running on the machine I'm working on, and it already has Python enabled so I can run Python scripts on it. However, at the moment it is only accessible from within my LAN.
My real question is: what do I do next? I know that in PHP you would just have a callback file that handles the requests, but I'm lost as to what to do in Python. Would I write a script and give the Google/Superfeedr services a URL to that script? What would be in the script? Which imports would I need?
Also, I just read that if you use XMPP you don't need a callback URL. How does that work?
For the local LAN problem, the most commonly used solution is a tunneling tool like Passageway, which will temporarily expose a local port of your machine to the outside web.
Now, as for the implementation, it's fairly easy to set things up. Python is similar to PHP in the sense that you'll have to write a script that listens for network connections and then handles the HTTP requests you get from Superfeedr or Google. (It looks like you're not familiar with Python, so why not stick with PHP?)
Finally, XMPP is a feature that only we (Superfeedr) offer. It solves the problem of exposing local ports because it works from behind the firewall.
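To give you an idea of what that script can contain, here is a bare-bones sketch using only the Python standard library: it answers the hub's verification GET with the hub.challenge value and appends whatever the hub POSTs to a file (port, path, and storage are placeholders):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    class HubCallback(BaseHTTPRequestHandler):
        def do_GET(self):
            # Subscription verification: echo hub.challenge back to the hub.
            params = parse_qs(urlparse(self.path).query)
            challenge = params.get("hub.challenge", [""])[0]
            self.send_response(200)
            self.end_headers()
            self.wfile.write(challenge.encode("utf-8"))

        def do_POST(self):
            # Content notification: the new/updated feed entries are in the body.
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            with open("notifications.log", "ab") as f:  # placeholder storage
                f.write(body + b"\n")
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8000), HubCallback).serve_forever()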
I want to be able to schedule delivery of a lightweight message from a server to a client. This is new territory to me so I'd appreciate some advice on the possible approaches available.
The client is running on a Raspberry Pi using node.js (because I'm using node libraries to control a piece of attached hardware). Eventually there will be multiple clients like it.
The server could be anything, though I'm most familiar with Python, django and node.
I want to be able to access the server from a browser and cause it to schedule a future message to the client, effectively a push notification with a tiny bit of data.
I'm looking at pub-sub and messaging systems to do this; I started writing a system that uses node on both ends and sockets, but the approach I want is more fire-and-forget occasional messages, not constant realtime data exchange. I'm also not a huge fan of the node-cron style scheduling, I'd like to be able to retrieve and alter scheduled events and it felt somewhat heavy-handed to layer this on top of a cron system.
My current solution uses python on the server (so I can write a django web interface) with celery and rabbitmq, using a named queue per client. The client subscribes to that specific queue using node-amqp, and off we go. This also allows me to create queues that multiple clients can be interested in, which is a neat bonus.
This answer makes me think I'm doing the right thing -- but as I'm new to this stuff, it feels like I might be missing something. Are there alternatives I should consider in the world of server-client messaging?
Since you are already using Python, you could take a look at Python Remote Objects (Pyro).
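A minimal sketch with the Pyro4 package (class, method, and object names are placeholders, and the actual scheduling logic is left out):

    # Server side
    import Pyro4

    @Pyro4.expose
    class MessageBoard(object):
        def __init__(self):
            self.pending = []

        def schedule(self, client_id, payload, when):
            # store a future message for a specific client
            self.pending.append((client_id, payload, when))

        def poll(self, client_id):
            # hand over (and forget) everything queued for this client
            due = [m for m in self.pending if m[0] == client_id]
            self.pending = [m for m in self.pending if m[0] != client_id]
            return due

    daemon = Pyro4.Daemon(host="0.0.0.0")
    uri = daemon.register(MessageBoard(), objectId="messageboard")
    print("Pyro object available at", uri)
    daemon.requestLoop()

    # Client side (Python):
    #   board = Pyro4.Proxy("PYRO:messageboard@server-host:PORT")
    #   board.schedule("pi-1", {"action": "blink"}, "2016-01-01T12:00:00")

Keep in mind that Pyro speaks a Python-specific protocol, so a node.js client would need a small Python helper (or a different transport) to talk to it.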
I'm in the planning phase of an Android app which synchronizes to a web app. The web side will be written in Python, probably with Django or Pyramid, while the Android app will be straightforward Java. My goal is to have the Android app work when there is no data connection, excluding the social/web aspects of the application.
This will be a run-of-the-mill app so I want to stick to something that can be installed easily through one click in the market and not require a separate download like CloudDB for Android.
I haven't found any databases that support this functionality, so I will write it myself. One caveat with writing the sync logic is that there will be some shared data that multiple users will be able to write to. This is a solo project, so I thought I'd throw this up here to see if I'm totally off-base.
The app will process local saves to the local sqlite database and then send messages to a service which will attempt to synchronize these changes to the remote database.
The sync service will alternate between checking for messages for the local app, i.e. changes to shared data by other users, and writing the local changes to the remote server.
All data will have a timestamp for tracking changes.
When writing from the app to the server, if the server has newer information, the user will be warned about the conflict and prompted to either overwrite what the server has or abandon the local changes. If the server has not been updated since the app last read the data, the update is processed.
When data comes from the server to the app, if the server has newer data it overwrites the local data; otherwise the incoming data is discarded, as it will be handled in the next round when the app updates the server.
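In rough Python, the server-side check I have in mind for the app-to-server direction is something like this (all names are placeholders):

    from datetime import datetime, timezone

    def apply_client_write(record, client_change):
        # record: the current server-side row (as a dict)
        # client_change: the fields the app wants to write, plus the
        # timestamp of the copy of the row the app last read
        if record["updated_at"] > client_change["last_read_at"]:
            # Someone else changed this row since the app read it: conflict.
            # The app then asks the user to overwrite or abandon.
            return {"status": "conflict", "server_value": record}
        record.update(client_change["fields"])
        record["updated_at"] = datetime.now(timezone.utc)
        return {"status": "ok"}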
Here's some questions:
1) Does this sound like overkill? Is there an easier way to handle this?
2) Where should this processing take place, on the client or the server? I'm thinking the advantage of doing it on the client is less load on the server, but doing it on the server would make it easier to implement other clients.
3) How should I handle the updates from the server: incremental polling or comet/websockets? One thing to keep in mind is that I would prefer to start with a minimal installation on Webfaction, as this is a startup.
Once these problems are tackled I do plan on contributing the solution to the geek community.
1) This looks like a pretty good way to manage your local and remote changes and to support offline work. I don't think it is overkill.
2) I think you should cache the user's changes locally, with a local timestamp, until synchronization has finished. The server should then manage all the processing: track the current version, and commit or roll back update attempts. Less processing on the client is better for you (easier to support and implement).
3) I'd choose polling if you want to support an offline mode, because while offline you can't keep a socket open, and you would have to reopen it every time the Internet connection is restored.
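For example, the polling endpoint on the server could be as simple as this Django-style sketch (model and field names are made up): the app sends the timestamp of its last successful sync and gets back everything newer.

    # views.py -- sketch only; "Item" and "updated_at" are placeholder names
    from django.http import JsonResponse
    from .models import Item

    def changes_since(request):
        since = request.GET.get("since")  # timestamp of the app's last sync
        rows = Item.objects.filter(updated_at__gt=since).values(
            "id", "payload", "updated_at")
        return JsonResponse({"changes": list(rows)})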
PS: Looks like this is VEEERYY OLD question... LOL