I want to create a web form that stays up indefinitely on a single computer. Users can come to the computer, fill out the form, and submit it. After submitting, it will record the responses in an Excel file and send emails. The next user can then come and fill out a new form automatically. I was planning on using Flask for this since it is simple to set up, but since I am not doing this on a production server, I will just have it running locally in development mode on the single computer.
I have never seen anyone do something like this with Flask, so I was wondering if my idea is possible or if I should avoid it. I am also new to web development, so I was wondering what problems there could be with keeping a Flask application running 24/7 on a local development computer.
Thanks
There is nothing wrong with doing this in principle; however, it is likely not the best solution in terms of time-to-reward payoff.
First, to answer your question: this could easily be done, even by a beginner. Completing it in a few hours with minimal Python and HTML experience is definitely realistic. Your app could crash in the background for many reasons (running out of disk space, memory errors, etc.), but most likely you will be fine.
As for actually building it, it is all possible. There are libraries (openpyxl, for example) you can use to write the results to an Excel file, or you can simply append to a CSV, which is what I would recommend. Creating and sending an email is similarly straightforward. But again, doing it without Python, with an off-the-shelf tool, would be much easier.
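Here is a minimal sketch of the whole loop, assuming Flask is installed and a local SMTP relay is available; the file name, form fields, and email addresses are illustrative, not prescribed:

    import csv
    import smtplib
    from email.message import EmailMessage

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.route("/", methods=["GET", "POST"])
    def form():
        if request.method == "POST":
            name = request.form.get("name", "")
            answer = request.form.get("answer", "")

            # Append the response to a CSV; openpyxl could write .xlsx instead.
            with open("responses.csv", "a", newline="") as f:
                csv.writer(f).writerow([name, answer])

            # Notify someone by email via a hypothetical local SMTP server.
            msg = EmailMessage()
            msg["Subject"] = "New form submission"
            msg["From"] = "kiosk@example.com"
            msg["To"] = "admin@example.com"
            msg.set_content(f"{name} submitted: {answer}")
            with smtplib.SMTP("localhost") as smtp:
                smtp.send_message(msg)

            return redirect("/")  # blank form for the next user

        return (
            '<form method="post">'
            '<input name="name"><input name="answer">'
            '<button type="submit">Submit</button></form>'
        )

    if __name__ == "__main__":
        app.run(host="127.0.0.1", port=5000)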
If you are not set on Flask/Python, you could check out Google Forms; but if you are set on Python, or want to use this as a learning experience, it can definitely be done.
Your idea is possible and while there are many ways to do this kind of thing, what you are suggesting is not necessarily to be avoided.
All apps that run on a computer over a long period of time start a process and keep it going until closed. That is essentially what you are doing.
Having done this myself (and still currently doing it) at my business, I can say that it works great.
The only caveat: to ensure that it is always available, you need to have the process monitored by some tool that restarts it if it ever exits, for whatever reason.
On Linux, Supervisor is a great tool for doing that. On Windows you could register the app as a service. But you could also just create an easy way to restart it, so the user can do so themselves if it is down when they need it.
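For illustration, a minimal Supervisor program entry might look like this (the program name, paths, and log file are hypothetical):

    ; /etc/supervisor/conf.d/kioskform.conf -- hypothetical path
    [program:kioskform]
    command=/usr/bin/python3 /home/kiosk/app.py
    directory=/home/kiosk
    autostart=true
    autorestart=true
    stderr_logfile=/var/log/kioskform.err.log

With autorestart=true, Supervisor relaunches the process whenever it exits, which covers the crash scenarios mentioned above.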
Yes, this could be done. It's very similar to the applications that run on the servers in data centers.
To keep the application running forever, or to restart it after the system boots, you'll need a service manager similar to systemd on Unix. On Windows you could use NSSM (the Non-Sucking Service Manager) or Service Control (sc.exe) to monitor your application and restart it if it crashes. The service will also have to be enabled at startup.
Other than this, you could use Waitress to serve your Flask application. Waitress is a production-quality WSGI server with which you can easily configure the number of worker threads so that multiple users can be served at the same time.
In a production environment, it is always suggested to use a WSGI server like Gunicorn or Waitress rather than the development server.
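A minimal sketch of serving a Flask app with Waitress; the module name app and the thread count are assumptions:

    from waitress import serve

    from app import app  # hypothetical: the Flask app object from your module

    # Bind to the local interface only; 4 threads handle concurrent users.
    serve(app, host="127.0.0.1", port=8080, threads=4)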
I am dockerizing a Python webapp using the https://hub.docker.com/r/tiangolo/uwsgi-nginx image, which uses supervisor to control the uWSGI instance.
My app actually requires an additional supervisor-mediated process to run (LibreOffice headless, with which I generate documents through the appy module), and I'm wondering what is the proper pattern to implement it.
The way I see it, I could extend the above image with the extra supervisor config for my needs (along with all the necessary OS-level install steps), but this would be in contradiction with the general principle of running the least amount of distinct processes in a given container. However, since my Python app is designed to talk with LibreOffice only locally, I'm not sure how I could achieve it with a more containerized approach. Thanks for any help or suggestion.
The recommendation for one-process-per-container is sound - Docker only monitors the process it starts when the container runs, so if you have multiple processes they're not watched by Docker. It's also a better design - you have lightweight, focused containers with single responsibilities, and you can manage them independently.
user2105103 is right though, the image you're using already loses that benefit because it runs Python and Nginx, and you could extend it with LibreOffice headless and package your whole app without changing code.
If you move to a more "best practice" approach, you'd have a distributed app running across three containers in a Docker network:
nginx - the web proxy and the public entry point to the app. Nginx can do routing, caching, SSL termination, rate limiting, etc.;
app - your Python app, only visible inside the Docker network. It receives requests from nginx and uses libreoffice for document manipulation;
libreoffice - running in headless mode with the API exposed, but only available within the Docker network.
You'd need code changes for this, bringing in something like PyOO to use the LibreOffice API remotely from the app container.
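As a hedged sketch of what the remote call might look like with PyOO; the host name matches the hypothetical libreoffice container above, and LibreOffice would have to be started with a listening socket:

    import pyoo

    # Assumes LibreOffice was launched in the container with something like:
    #   soffice --headless --accept="socket,host=0.0.0.0,port=2002;urp;"
    desktop = pyoo.Desktop("libreoffice", 2002)  # hypothetical service name

    doc = desktop.create_spreadsheet()
    doc.sheets[0][0, 0].value = "generated remotely"
    doc.save("/tmp/output.ods")  # illustrative output path
    doc.close()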
You've already blown the "one process per container" rule -- just add another process. It's not a hard rule, or even one that everybody agrees with.
Extend away, or better yet author your own custom container. That way you own it, you understand it, and it's optimized for your purpose.
I have a Django app that is intended to be run on VirtualBox VMs on LANs. The basic user will be a savvy IT end-user, not a sysadmin.
Part of that app's job is to connect to external databases on the LAN, run some Python batches against those databases, and save the results in its local db. The user can then explore the results using Django pages.
Run time for the batches isn't all that long, but it runs to minutes, potentially tens of minutes, not seconds. Run frequency is low; you could go days without needing a refresh.
This is not Celery's normal use case of long tasks that eventually push their results back into the web UI via AJAX and/or polling. It is more similar to a dev's occasional use of django-admin commands, but this time intended for an end user.
The user should be able to initiate a run of one or several of those batches when they want in order to refresh the calculations of a given external database (the target db is a parameter to the batch).
Until the batches are done for a given db, the app really isn't useable. You can access its pages, but many functions won't be available.
It is very important, from a support point of view, that the batches remain easily runnable at all times. Dropping down to the VM's shell over SSH would probably require frequent handholding, which wouldn't be good; it is best if they can be launched from the Django web pages.
What I currently have:
Each batch is in its own script.
I can run each on the command line (via if __name__ == "__main__":).
The batches are also hooked up as celery tasks and work fine that way.
Given the way I have written them, it would be relatively easy to run them via subprocess calls from Python. I haven't really looked into it, but I suppose I could make them into django-admin commands as well (see the skeleton after this list).
The batches already have their own rudimentary status checks. For example, they can look at the calculated data and tell whether they have been run and display that in Django pages without needing to look at celery task status backends.
The batches themselves are relatively robust and I can make them more so. This is about their launch mechanism.
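To make the current setup concrete, here is a hedged skeleton of what one such batch script might look like; the module and function names are hypothetical:

    # refresh_batch.py -- hypothetical skeleton of one batch script
    from celery import shared_task

    def run(target_db):
        """Connect to target_db, run the calculations, save the results locally."""
        ...

    @shared_task
    def run_task(target_db):
        # The same entry point, wrapped as a celery task.
        run(target_db)

    if __name__ == "__main__":
        import sys
        run(sys.argv[1])  # command-line use: python refresh_batch.py <target_db>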
What's not so great:
In my Mac dev environment I find the celery/celerycam/rabbitmq stack to be somewhat unstable. It seems as if sometimes the rabbitmq daemon balloons in CPU/RAM use and then needs to be terminated. That mightily confuses the celery processes, and I find I have to kill -9 various tasks and relaunch them manually. Sometimes celery still works but celerycam doesn't, so there are no task updates. Some of these issues may be OSX-specific or may be due to the DEBUG flag being switched on for now, which celery warns about.
So then, until the whole celery stack has been reset, I need to run the batches on the command line, which is what I was trying to avoid.
This might be acceptable on a normal website, with an admin watching over it. But I can't have that happen on a remote VM to which only the user has access.
Given that these are somewhat fire-and-forget batches, I am wondering if celery isn't overkill at this point.
Some options I have thought about:
writing a cleanup shell/Python script to restart rabbitmq/celery/celerycam and generally make the stack more robust, i.e. whatever is required to make celery and company more stable. I've already used psutil to check that the rabbit/celery processes are running and to display their status in Django.
running the batches via subprocess instead and avoiding celery. What about django-admin commands here? Does that make a difference? They would still need to be launched from the web pages (see the sketch after this list).
an alternative task/process manager to celery, with less capability but also fewer moving parts?
not using subprocess but relying on Python's multiprocessing module? To be honest, I have no idea how that compares to launching via subprocess.
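To make the subprocess/django-admin option concrete, here is a hedged sketch of wrapping a batch as a django-admin command and firing it from a view via subprocess; the app layout and names are hypothetical:

    # myapp/management/commands/run_batch.py -- hypothetical location
    from django.core.management.base import BaseCommand

    from refresh_batch import run  # the batch entry point sketched earlier

    class Command(BaseCommand):
        help = "Refresh the calculations for one external database"

        def add_arguments(self, parser):
            parser.add_argument("target_db")

        def handle(self, *args, **options):
            run(options["target_db"])

A view could then launch it fire-and-forget, with no broker or worker daemon involved:

    import subprocess
    import sys

    from django.http import HttpResponse

    def launch_batch(request, target_db):
        # Detached child process; the batches' own status checks report progress.
        subprocess.Popen([sys.executable, "manage.py", "run_batch", target_db])
        return HttpResponse("batch started")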
Environment:
nginx, WSGI, Ubuntu on VirtualBox, Chef to build the VMs.
I'm not sure how your celery configuration makes it unstable, but it sounds like celery is still the best fit for your problem. I'm using redis as the queue system and, in my experience, it works better than rabbitmq. Maybe you can try it and see if it improves things.
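Switching brokers is usually a one-line change; a minimal sketch, assuming Celery and a redis instance on localhost:

    # celery_app.py -- hypothetical module; only the URLs differ from rabbitmq
    from celery import Celery

    app = Celery(
        "batches",
        broker="redis://localhost:6379/0",   # redis as the message queue
        backend="redis://localhost:6379/1",  # optional result backend
    )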
Otherwise, just use cron as the driver for periodic tasks. Let it run your script periodically and update the database; your UI component will poll the database with no conflict.
I need to write a very light database app (sqlite is fine) that will initially be run locally on a client's Windows PC but could, should it ever be necessary, be upgraded to work over the public interwebs without a complete rewrite.
My end user is not very technically inclined and I'd like to keep things as simple as possible. To that end I really want to avoid having to install a local webserver, however "easy" that may seem to you or me. Django specifically warns not to use its built-in webserver in production, so my two options seem to be...
a) Use Django's built-in server anyway while the app is running locally on Windows and, if it ever needs to be upgraded to work over the net, just stick it behind Apache on a Linux box somewhere in the cloud.
b) Use a framework that has a more robust built-in web server from the start.
My understanding is that the only two disadvantages of Django's built-in server are a lack of security testing (moot if running only locally) and its single-threaded nature (not likely to be a big deal either for a low/zero-concurrency single-user app running locally). Am I way off base?
If so, can I get some other "full stack" framework recommendations, please? I strongly prefer Python, but I'm open to PHP- and Ruby-based solutions too if there's no clear Python winner. I'm probably going to have to support this app for a decade or more, so I'd rather not use anything too new or esoteric unless it's from developers with some serious pedigree.
Thanks for your advice :)
Roger
I find Django's admin very easy to use for non-technical clients. In fact, that is the major consideration in my using Django as of late. Once set up properly, non-technical people can very easily update information, which can be reflected on the front end immediately.
The client feels empowered.
Use Django. It's very simple to get started, and the documentation is among the best. Follow the step-by-step app-building tutorial. Django supports all the major databases (PostgreSQL, MySQL, SQLite, Oracle). The built-in server is very simple to use for development, though for production you would put the app behind a proper web server. I would highly recommend Django.
I know how to reboot machines remotely, so that's the easy part. The complex part is setting up the following: I'd like to control machines on a network for after-hours use, such that when users log off and go home, or shut down their computers, Python (or some combination of Python and Windows tooling) could restart their machines (for cleanliness) and automatically log in, run a process for the night, then in the morning stop said process and restart the machine so the user can log in as normal.
I've looked around and haven't had much luck, though it looks like it could be done by changing the registry. That sounds like a rough approach, though, modifying the registry on a per-day basis. Is there an easier way?
You probably want to consider running whatever program you have in mind as a Windows service, unless you absolutely need a desktop. There are a couple of questions concerning that, e.g. here and here, as well as recipes on ActiveState. That approach involves no real need to start up or log in to the computer.
There's also always the option of scheduled tasks and the like. That can actually be done programmatically through Python, e.g., as in this blog post.
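For instance, a hedged sketch of registering a nightly task from Python by shelling out to the standard schtasks tool; the task name and program path are hypothetical:

    import subprocess

    # Create a Task Scheduler entry that runs every night at 02:00.
    subprocess.run([
        "schtasks", "/Create",
        "/TN", "NightlyRender",         # hypothetical task name
        "/TR", r"C:\render\start.bat",  # hypothetical program to run
        "/SC", "DAILY",
        "/ST", "02:00",
    ], check=True)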
As for powering on powered-off computers: while I've never done anything with it, I know Windows supports Wake-on-LAN functionality, and there seem to be some good resources, including, again, a recipe on ActiveState.
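The recipe boils down to sending a "magic packet" to the sleeping machine; a minimal sketch, with a placeholder MAC address:

    import socket

    def wake_on_lan(mac="00:11:22:33:44:55"):  # placeholder MAC address
        # Magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times.
        payload = bytes.fromhex("FF" * 6 + mac.replace(":", "") * 16)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(payload, ("255.255.255.255", 9))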
If you need a desktop to run your program, I don't think you have any choice but to mess with the registry to permit autologons, as I don't believe Windows' GINA is scriptable in any way, shape, or form.
I can't think of any way to do strictly what you want, off the top of my head, other than the registry, at least not without even more drastic measures. But the registry modification isn't a big deal; just change the autologon username/password and reboot the computer. To have the computer reboot when the user logs off, give them a "logoff" option that actually reboots rather than logging off; I've seen other places do that.
(edit) FYI: for registry edits, Windows has a REG command that will be useful if you decide to go that route. (/edit)
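The same edit can also be made from Python via the standard winreg module; a hedged sketch, where the account values are placeholders and writing to HKLM requires admin rights:

    import winreg

    # The Winlogon key controls automatic logon at boot.
    KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY, 0, winreg.KEY_SET_VALUE) as k:
        winreg.SetValueEx(k, "AutoAdminLogon", 0, winreg.REG_SZ, "1")
        winreg.SetValueEx(k, "DefaultUserName", 0, winreg.REG_SZ, "renderuser")  # placeholder
        winreg.SetValueEx(k, "DefaultPassword", 0, winreg.REG_SZ, "s3cret")      # placeholder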
Also, what kind of process are you trying to run? If it's not a GUI app that needs your interaction, you don't have to go through any great pains; just run the app remotely. At my work we use psexec to do that very simply, and I've also written C++ programs that run code remotely. It's not that difficult: the way I do it is to call the WinAPI functions that register a service on the remote PC and start it; the service then does whatever I want (itself, or as a staging point to launch other things), then unregisters itself. I have only used Python for simple webpage stuff, so I'm not sure what kind of support it has for accessing the DLLs required, but if it can do that, you can still use Python here.
Or, better yet, if you don't need to do this remotely but just want it done every night, you can use the Windows scheduler to run whatever application you want during the night. You can even do this programmatically, as there are a couple of Windows commands for that: one is the "at" command, and I don't recall right now what the other is, but a little Googling should find it for you.
Thanks for the responses. To be clearer about what I'm doing: I have a program that automatically starts on bootup, so getting logged in would be preferred. I'm coding a manager for a render farm at work, which will take all the machines our guys use during the day and turn them into render servers at night (or whenever they log off for a period of time, for example).
I'm not sure if I necessarily require a GUI app, but the computer would need to boot and log in to launch the server application that does the rendering, and I'm not certain that can be done without logging in. What I need to run is Autodesk's Backburner Server.exe.
Maybe that can be run without needing to be logged in specifically, but I'm unfamiliar with doing things of that nature.
I have created a chat server application using the Twisted framework. I am running it on my local machine and now I want to go global. The application is similar to omegle.com.
How can I deploy it on a third-party commercial server so that it runs continuously?
Do I need to get a dedicated server for it?
As per this SO answer, "You can deploy Twisted on any hosting provider who gives you a shell prompt and doesn't limit your long-running processes. Some examples that I've used include: Tummy ltd. and Slicehost."
The hosting server need not be dedicated, in other words, as long as those conditions are met (and of course as long as you have enough quota of RAM, disk, bandwidth, etc, for your purposes).
Take a look at Python-friendly hosts to get an idea of what is available and what it will cost you. Typically, you could get away with a shared hosting package as long as you have a shell. However, if your program begins serving tons of clients, you might need to move it to a dedicated host.
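To keep the process alive after you log out of the shell, the usual Twisted approach is to package the server as a .tac application file and run it with twistd, which daemonizes it. A minimal hedged sketch; the port and the echo protocol are placeholders, not your actual chat logic:

    # chatserver.tac -- run with: twistd -y chatserver.tac
    from twisted.application import internet, service
    from twisted.internet import protocol
    from twisted.protocols import basic

    class ChatProtocol(basic.LineReceiver):
        def lineReceived(self, line):
            self.sendLine(line)  # placeholder: echoes instead of real chat logic

    factory = protocol.ServerFactory()
    factory.protocol = ChatProtocol

    # twistd looks for a module-level variable named "application".
    application = service.Application("chatserver")
    internet.TCPServer(8000, factory).setServiceParent(application)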