I would like to implement the observer design pattern. My concern is: if this is deployed, and a system update restarts the server, would the observers/subscribers be lost when the server comes back up?
Sorry for this newbie question.
Your question doesn't specify which tools, deployment methods, or software you are using, so the best I can say is that any non-persistent data will be lost on restart. That includes, for example, subscribers your server has saved in a variable.
In web development, you work around this problem (and that of lost connections) by using "temporary subscriptions" and by not building features that depend on a persistent connection.
However, what you could do is give each client a unique ID stored in a database, along with whatever data is needed to restore the subscription.
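For illustration, here is a minimal sketch of that idea in Python, assuming sqlite3 for storage and webhook-style subscribers identified by a callback URL (the schema and names are hypothetical):

```python
import sqlite3

# A database file survives server restarts, unlike an in-memory list.
conn = sqlite3.connect("subscribers.db")
conn.execute("""CREATE TABLE IF NOT EXISTS subscribers (
                    id TEXT PRIMARY KEY,
                    callback_url TEXT NOT NULL)""")

def subscribe(sub_id, callback_url):
    # register (or re-register) a subscriber durably
    conn.execute("INSERT OR REPLACE INTO subscribers VALUES (?, ?)",
                 (sub_id, callback_url))
    conn.commit()

def load_subscribers():
    # called once at startup to rebuild the in-memory observer list
    return dict(conn.execute("SELECT id, callback_url FROM subscribers"))
```

On restart, the server calls load_subscribers() to rebuild its observer list; only the in-memory copy is lost, never the registrations.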
Related
This is either a really good or a really stupid question, but I find it's worth asking:
I'm making a Django app that runs on a device as an interface. Is there any reason I couldn't just use python manage.py runserver and go no further? Or is there a better way to do this?
Installing the full web bundle for local-network devices seems excessive, hence my question. (Perhaps there is not a great deal of overhead in using the full web setup; I don't know.) This is currently on a Raspberry Pi, but only for prototyping. The end product will not necessarily be a Pi.
It depends on how many users you're expecting to connect at once. The Django development server handles only one request at a time; it isn't good at handling multiple sessions and is not designed to stay up for long periods. This is why the docs clearly state:
do not use this server in a production setting!
That said, running with an application server like Gunicorn may be all you need to support multiple users. It uses multiple workers, so if one user's request crashes a worker, the server can continue serving all the other users.
https://docs.djangoproject.com/en/1.11/howto/deployment/wsgi/gunicorn/
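For example, a standard Django project already ships the WSGI entry point Gunicorn needs. A minimal sketch, assuming a project named myproject (the name is hypothetical):

```python
# myproject/wsgi.py -- generated by `django-admin startproject`
import os
from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
application = get_wsgi_application()

# Start it with multiple workers, e.g.:
#   gunicorn myproject.wsgi --workers 3 --bind 0.0.0.0:8000
```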
Finally, if you're serving a lot of assets like images or videos, you should really put a full web server like Nginx in front to intercept asset URLs so they aren't served through Django itself. Django should not serve assets directly in production.
https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-14-04
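As a rough sketch of what that looks like on the Django side (the paths are hypothetical): collect all assets into one directory and let Nginx serve that directory directly:

```python
# settings.py (excerpt)
STATIC_URL = "/static/"
STATIC_ROOT = "/var/www/myapp/static"   # Nginx serves this directory for /static/

# At deploy time, run:
#   python manage.py collectstatic
# and point an Nginx `location /static/` block at STATIC_ROOT.
```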
I have a subsystem that collects sensor data and posts it to the Internet via TCP or UDP requests to a server, with token-based authorization. All the posted data is also saved to a local capped MongoDB collection.
I want this system to be tolerant of network failures and outages, and able to synchronize data when the network comes back.
What is the correct way to implement this without re-inventing the wheel?
I see several options:
MongoDB replication.
PROs:
Replication tools exist
CONs:
How to do that in real time? Having two code paths (posting one way when the system is online and another when it is offline) seems like a bad idea.
No idea how to manage access tokens (I don't want to give direct access to the server database)
The server-side schema would have to match the local one (though this can be a PRO, since manual import then becomes trivial)
Maintaining 'last ACKed' records and re-transmitting every once in a while (a sketch of this approach follows the CONs list below).
PROs:
Allows for different data scheme locally and on server side
Works
CONs:
Logic is complex (detecting failures, monitoring network connectivity, etc.)
This is exactly the 'reinventing the wheel' I wanted to avoid
Manual data backfeed is hardly possible (e.g. when the system is completely disconnected for a long time and data is restored from backups).
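For what the 'last ACKed' option might look like in practice, here is a hedged store-and-forward sketch in Python. It assumes pymongo, a local capped collection named readings, and a hypothetical HTTP endpoint and token; it is a starting point, not a complete solution:

```python
import time
import requests
from pymongo import MongoClient, ASCENDING

client = MongoClient()
readings = client.sensors.readings   # local capped collection of sensor data
state = client.sensors.sync_state    # stores the last ACKed timestamp

def pending_batch(limit=100):
    # everything newer than the last timestamp the server acknowledged
    doc = state.find_one({"_id": "last_acked"}) or {"ts": 0}
    return list(readings.find({"ts": {"$gt": doc["ts"]}})
                        .sort("ts", ASCENDING).limit(limit))

def sync_once(url, token):
    batch = pending_batch()
    if not batch:
        return
    try:
        resp = requests.post(url,
                             json=[{"ts": d["ts"], "value": d["value"]}
                                   for d in batch],
                             headers={"Authorization": "Bearer " + token},
                             timeout=10)
        resp.raise_for_status()
    except requests.RequestException:
        return  # network down or server error; retry on the next pass
    # only advance the ACK pointer after the server confirmed receipt
    state.update_one({"_id": "last_acked"},
                     {"$set": {"ts": batch[-1]["ts"]}}, upsert=True)

while True:
    sync_once("https://example.com/api/readings", "MY_TOKEN")
    time.sleep(30)
```

The key property is that the ACK pointer only advances after the server confirms receipt, so an outage or crash at any point just causes a harmless re-send.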
I want a simple and reliable solution (the project is in Python, but I'm also fine with JavaScript/CoffeeScript in a separate process). I would prefer a Python module tailored for this task (I failed to find one), or a piece of advice on how to organize the system the UNIX way.
I believe this is a solved problem with known best practices, which I have so far failed to find.
Thank you!
I've been building this little app that runs on our work computers. It just does some little maintenance stuff at night while we're out of the office. I added some functionality the other day so that it connects to a web site and sends its status info in a user-agent string. This was pretty cool, because I can monitor what it's doing from home. But then I got to wondering: can I remotely change this thing's configuration settings over the internet? Like switch certain options on or off?
How would I go about such a thing?
Ideally, I would like to have a very basic web page with a control panel of sorts, which can then submit the settings to the currently running software.
I have no idea where to even start with something like this. How do you get two things to talk to each other over the internet? What are the core pieces of knowledge I need to look into?
Unfortunately, this leans towards the open-ended side of questions, but I'm a newbie in this area and could use a pointer in the right direction.
My recommendation is to stop using a thick client and move this to a web-based client. This gives you a central point of connection, and you won't have users failing to update their client when you need them to.
The architecture you may want to consider is an application server, such as Django (since you are using Python). You deploy the application (thin client) to Django, and users connect to it via sessions that the server issues for them. This lets you administer all of the users from one location and flip switches as you like. It also lowers the maintenance cost of the software and the time spent troubleshooting desktop clients.
COMMENT
So, for instance, the program starts up, connects to a server and builds its settings from that? How does the configuration happen? Would I just have the program repeatedly check the server for changes, or is there a better way?
Well, the way it works is that there are two types of configuration files. One is user-defined: for instance, if the user prefers to show text in blue, the server doesn't care about that preference, since it is generally stored locally.
Now, in terms of server configuration files, there are a couple of things you can do:
Configure permissions via roles
This entails updating permissions for users based on their roles. When such an update occurs, you must terminate all active sessions that would be impacted so the new permissions reach those users, or you can wait for the sessions to time out on their own, after which the new permissions will be picked up.
Modify application server settings
This requires taking down the application, e.g. for maintenance work that needs to be handled, such as patches.
The program doesn't poll the server; the server pushes down to the clients. Strictly speaking, the clients connect to the server, and if there is new data the client receives it, generally over a socket. Your session stores the majority of the information, which is why it needs to be refreshed or destroyed periodically (logout, etc.).
In terms of the program getting its settings: yes. The users have "dummy" terminals that connect to the application; these accounts are generally read-only, so they cannot modify the contents of the server, nor should they be able to for security purposes. When a user connects, the application connects to the database and retrieves credentials (you can also use certificates; I recommend that approach). Based on the credentials the user has connected with, the application serves that base "profile" to the user, plus any user-specific information the server knows about, e.g. display name or last login date.
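For completeness, here is what the simpler "repeatedly check the server for changes" variant from the comment could look like. This is only a sketch: the /api/settings endpoint, the bearer token, and the use of ETags are all assumptions, not a fixed design:

```python
import time
import requests

SETTINGS_URL = "https://example.com/api/settings"   # hypothetical endpoint
_etag = None
settings = {}

def poll_settings(token):
    global _etag, settings
    headers = {"Authorization": "Bearer " + token}
    if _etag:
        headers["If-None-Match"] = _etag   # let the server reply 304 if unchanged
    resp = requests.get(SETTINGS_URL, headers=headers, timeout=10)
    if resp.status_code == 200:
        settings = resp.json()
        _etag = resp.headers.get("ETag")
    # on 304 Not Modified, keep the settings we already have

while True:
    poll_settings("MY_TOKEN")
    time.sleep(60)   # poll interval: a trade-off between freshness and load
```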
I'm new to Cassandra, so bear with me.
So, I am building a search engine using Cassandra as the db. I am interacting with it through Pycassa.
Now, I want to output Cassandra's response to a webpage once the user has submitted a query.
I am aware of tools such as Django, FastCGI, SCGI, etc. that allow Python-to-web interaction. However, how does one run a Python script on a web server without turning that server into a single point of failure (i.e., if this server dies, the system is not accessible by the user), thereby negating one purpose of Cassandra?
I've seen this problem before: sometimes people need much more CPU power and bandwidth to generate and serve server-generated HTML and images than they do to run the actual queries in Cassandra. For one customer, the front end of the website had many tens of times more servers than their Cassandra cluster.
You'll need to load balance between these front-end servers somehow. Investigate running HAProxy on a few dedicated machines: it's quick and easy to configure, and similarly easy to reconfigure when your setup changes (unlike DNS, which can take days to propagate changes). I think you can also configure Nginx to do the same. If you keep per-session information in your front-end servers, you'll need each client to go to the same front-end server for each request. This is called "session persistence" and can be achieved by hashing the client's IP to pick the front-end server; HAProxy will do this for you.
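To make the IP-hashing idea concrete, here is a toy Python illustration of the scheme (in practice HAProxy's source balancing does this for you; the server list is made up):

```python
import hashlib

FRONTENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical front ends

def pick_frontend(client_ip):
    # hashing the client IP means the same client always lands on the
    # same front-end server, preserving its session state
    h = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return FRONTENDS[h % len(FRONTENDS)]

print(pick_frontend("203.0.113.7"))   # stable for a given client IP
```

Note that plain modulo hashing remaps most clients when the number of front ends changes; consistent hashing avoids that, but the sketch shows the basic idea.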
However, this approach again creates a SPOF in your configuration (the HAProxy server), so you should run more than one, and potentially keep a hot standby. Finally, you will need to somehow balance load between your HAProxies; we typically use round-robin DNS for this, as the nodes running HAProxy seldom change.
The benefit of this system is that you can easily scale up (and down) the number of front end servers without changing your DNS. You can read (a little bit) more about the setup I'm referring to at: http://www.acunu.com/blogs/andy-ormsby/using-cassandra-acunu-power-britains-got-talent/
Theo Schlossnagle's Scalable Internet Architectures covers load balancing and a lot more. Highly recommended.
I'm in the planning phase of an Android app which synchronizes to a web app. The web side will be written in Python, probably with Django or Pyramid, while the Android app will be straightforward Java. My goal is to have the Android app work while there is no data connection, excluding the social/web aspects of the application.
This will be a run-of-the-mill app so I want to stick to something that can be installed easily through one click in the market and not require a separate download like CloudDB for Android.
I haven't found any databases that support this functionality, so I will write it myself. One caveat with writing the sync logic is that there will be some shared data that multiple users will be able to write to. This is a solo project, so I thought I'd throw this up here to see if I'm totally off base.
The app will process local saves to the local sqlite database and then send messages to a service which will attempt to synchronize these changes to the remote database.
The sync service will alternate between checking for messages for the local app, i.e. changes to shared data by other users, and writing the local changes to the remote server.
All data will have a timestamp for tracking changes
When writing from the app to the server: if the server has newer information, the user will be warned about the conflict and prompted to either overwrite what the server has or abandon the local changes. If the server has not been updated since the app last read the data, the update is processed.
When data comes from the server to the app: if the server has newer data, overwrite the local data; otherwise discard it, as it will be handled on the next pass when the app updates the server.
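A minimal server-side sketch of the timestamp check described in the two rules above (the field names and the Conflict exception are assumptions, not a fixed API):

```python
class Conflict(Exception):
    """Raised when the server copy changed since the client last read it."""

def apply_client_update(db_row, update):
    # update["base_ts"]: server timestamp the client last read
    # update["ts"]:      timestamp of the client's local change
    if db_row["updated_at"] > update["base_ts"]:
        # someone else wrote in between: let the client decide
        raise Conflict(db_row)
    db_row.update(update["fields"])
    db_row["updated_at"] = update["ts"]
    return db_row
```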
Here's some questions:
1) Does this sound like overkill? Is there an easier way to handle this?
2) Where should this processing take place? On the client or the server? I'm thinking the advantage of the client is less processing on the server, but if it's on the server, it's easier to implement other clients.
3) How should I handle the updates from the server? Incremental polling or comet/websocket? One thing to keep in mind is that I would prefer to go with a minimal installation on Webfaction to begin with as this is the startup.
Once these problems are tackled I do plan on contributing the solution to the geek community.
1) This looks like a pretty good way to manage your local and remote changes and to support offline work. I don't think it is overkill.
2) I think you should cache the user's changes locally, with a local timestamp, until synchronization has finished. Then the server should manage all processing: track the current version, and commit or roll back update attempts. Less processing on the client is better for you (easier to support and implement)!
3) I'd choose polling if I wanted to support offline mode, because offline you can't keep your socket open, and you would have to reopen it every time the Internet connection is restored.
PS: Looks like this is VEEERYY OLD question... LOL