I am new to Dash app development and created an app that runs on a Linux server. It is available to all users on our intranet only while I have started it and stay logged in; once I log off it is no longer accessible. How can I keep the app running continuously even when I am offline? Any responses would be appreciated.
Currently it is served with the command below:
python app.py
I can't deploy to Heroku due to security restrictions, and Docker is also unavailable. Any other option would be appreciated.
Regards,
Sudheer
From this description, I guess that you
SSH into the server
Run python app.py
At this point, the app is available
Log off from the SSH connection (e.g. exit command)
At this point, the app is not available any more
If so, this is because the command you run during the SSH session will be terminated when you log off from the session.
There are several ways to keep your app running after you log off. For example,
Run nohup python app.py &.
Run tmux and run python app.py inside the tmux window, then press Ctrl+b followed by d to detach from the tmux session.
Either way, the app will keep running after you log off from the SSH connection. If my guess about your situation is wrong, please elaborate on your situation in more detail. For example, tell us the series of commands you run and what happens with them.
In particular, the term "offline" is not clear in this context. If you are completely offline and not connected even to the intranet, then there is no chance that you can use the app running on the server.
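As a concrete illustration of the first option, the nohup invocation might look like this on the server (the log file name is just an illustration):
nohup python app.py > app.log 2>&1 &
The trailing & puts the process in the background, and redirecting stdout and stderr to a file lets you check on the app later even though the terminal session is gone.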
I need help starting my Python application on a DigitalOcean droplet. I set everything up and can now run my Python file, but if I close the Ubuntu console, my loop or any other code (sending requests, for example) stops. I want to run a Flask server that receives webhooks at all times while the machine is up (24/7). How can I keep the process running without an open console on my desktop? The question is not about Flask, only about keeping a program running indefinitely. Thanks.
You could use screen or nohup to keep your Python script running 24/7.
screen allows you to create a terminal session and detach from it, leaving the process started in it running. You can install it on Ubuntu with the commands below. See this tutorial or this one for more information.
sudo apt-get update
sudo apt-get install screen
nohup allows you to do the same. It basically runs a command ignoring hangup signals, not stopping when you log out. Unlike screen, nohup is normally already installed by default on Ubuntu. See its manual page for more information about it.
Finally, in case you are interested in knowing more about the differences between screen and nohup, they are explained in this post.
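For completeness, a typical screen workflow might look like the following sketch (the session name flaskapp is just an example):
screen -S flaskapp    # start a named session
python app.py         # start the app inside the session
# press Ctrl+a then d to detach; the app keeps running
screen -r flaskapp    # reattach later to check on it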
I have a Flask application that spins up my web application to a specific port (e.g., 8080). It launches by running python run.py. I also have a Python file that runs Selenium end-to-end tests for the web app.
This is part of a GitHub repo, and I have it hooked up so that whenever I push/commit to the GitHub repo, it initiates a Jenkins test. Jenkins is using my machine as the agent to run the commands. I have a series of commands in a Pipeline on Jenkins to execute.
Ideally, the order of commands should be:
Spin up the Flask web application
Run the Selenium tests
Terminate the Flask web application
Now, my issue is this: I can use Jenkins to clone the repo to the local workspace on my computer (the agent), and then run the Python command to spin up my web app. I can use Jenkins to run a task in parallel to do the Selenium tests on the port used by the web app.
BUT I cannot figure out how to end the Flask web app once the Selenium tests are done. If I don't end the run.py execution, it is stuck in an infinite loop, endlessly serving the application on the port. I can have Jenkins automatically abort after some specified amount of time, but that flags the entire process as a failure.
Does anyone have a good idea of how to end the Flask hosting process once the Selenium tests are done? Or, if I am doing this in a block-headed way (I admit I am speaking out of copious amounts of ignorance and inexperience), is there a better way I should be handling this?
I'm not aware of an official way to cause the development server to exit under Selenium control, but the following route works with Flask 1.1.2
from flask import request
...
if settings.DEBUG:
    @app.route('/quit')
    def quit():
        # Werkzeug's development server exposes a shutdown hook in the WSGI environ
        shutdown_hook = request.environ.get('werkzeug.server.shutdown')
        if shutdown_hook is not None:
            shutdown_hook()
            return "Bye"
        return "No shutdown hook"
I am writing a tool for internal use at work. A user enters a router or switch IP address, username and password into a web form. The app then uses pexpect to SSH into the device, downloads a configuration and tests that the configuration complies with various standards by running true/false tests (e.g. hostname is set). Leaving aside whether this is a good idea or not, my problem is that when I run the program under the Flask development server it works fine. When I set it up to run under WSGI it fails at the SSH portion with the error:
pexpect.exceptions.ExceptionPexpect: The command was not found or was not executable: ssh.
I tried uWSGI and Gunicorn and played with the number of workers etc. to no avail.
I suspect this is a setuid root thing. Google searches do not point to a solution. Can anyone lead me to a fix? If pexpect will not work, I may give up and require the user to upload a config file they saved themselves, but I am frustrated that this works on the Flask development server and not on a production server.
You probably just need to replace ssh with its full path, e.g. /usr/bin/ssh.
You can find the full path with which ssh.
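For example, the pexpect call might then look like the sketch below (the host, username and password are hypothetical placeholders):
import pexpect

# Use the absolute path so ssh is found even when the WSGI worker has a minimal PATH
child = pexpect.spawn('/usr/bin/ssh admin@192.0.2.10')
child.expect('assword:')   # matches "Password:" or "password:"
child.sendline('secret')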
I created a droplet that runs a flask application. My question is when I ssh into the droplet and restart the apache2 server, do I have to keep the console open all the time (that is I should not shut down my computer) for the application to be live?
What if I have a dynamic application that runs scripts in the background, do I have to keep the console open all the time for the dynamic parts to work?
P.S:
There's a similar question on SO about a Node.js app, but some parts of the answer provided there are irrelevant to my Flask app.
You can use the "screen" command to mantain the sesion open.
please see https://www.rackaid.com/blog/linux-screen-tutorial-and-how-to/
In my opinion it is not good practice to use remote computers for the development stage unless you have no other option. If you want to make your application available after logging out of the SSH console, screen works, but it is still a workaround.
I would suggest taking a look at this great tutorial on how to daemonize flask applications with Gunicorn+Nginx.
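As a rough sketch of that approach, the Gunicorn command might look like this (assuming your Flask instance is named app inside app.py; the bind address and worker count are just examples):
gunicorn --bind 0.0.0.0:8000 --workers 3 app:app
Tutorials like that one typically wrap this command in a systemd service so it starts on boot and keeps running without any SSH session.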
You needn't keep the console open; the app will keep running after you close the console on your computer. But you may want to set up a log to monitor it.
To debug a bug I'm seeing on Heroku but not on my local machine, I'm trying to do step-through debugging.
The typical import pdb; pdb.set_trace() approach doesn't work with Heroku since you don't have access to a console connected to your app, but apparently you can use rpdb, a "remote" version of pdb.
So I've installed rpdb, added import rpdb; rpdb.set_trace() at the appropriate spot. When I make a request that hits the rpdb line, the app hangs as expected and I see the following in my heroku log:
pdb is running on 3d0c9fdd-c18a-4cc2-8466-da6671a72cbc:4444
OK, so how do I connect to the pdb instance that is running? I've tried heroku run nc 3d0c9fdd-c18a-4cc2-8466-da6671a72cbc 4444 to try to connect to the named host from within Heroku's system, but that just immediately exits with status 1 and no error message.
So my specific question is: how do I now connect to this remote pdb?
The general related question is: is this even the right way for this sort of interactive debugging of an app running on Heroku? Is there a better way?
NOTE RE CELERY: I've now also tried a similar approach with Celery, to no avail. The default host for Celery's rdb (remote pdb wrapper) is localhost, which you can't reach when the app is running on Heroku. I've tried setting the CELERY_RDB_HOST environment variable to the domain of the website hosted on Heroku, but that gives a "Cannot assign requested address" error. So it's the same basic issue: how do I connect to the remote pdb instance that's running on Heroku?
In answer to your second question, I do it differently depending on the type of error (browser-side, backend, or view). For backend and view testing (unittests), will something like this work for you?
$ heroku run --app=your-app "python manage.py shell --settings=settings.production"
Then debug-away within ipython:
>>> %run -d script_to_run_unittests.py
Even if you aren't running a django app you could just run the debugger as a command line option to ipython so that any python errors will drop you to the debugger:
$ heroku run --app=your-app "ipython --pdb"
Front-end testing is a whole different ballgame where you should look into tools like Selenium. I think there's also a "salad" test suite module that makes front-end tests easier to write. Writing a test that breaks is the first step in debugging (or so I'm told ;).
If the bug looks simple, you can always do the old "print and run" with something like
import logging
logger = logging.getLogger(__file__)
logger.warning('here be bugs')
and review your log files with getsentry.com or an equivalent monitoring tool or just:
heroku logs --tail