I have an issue logging Python print output when running my app with uwsgi from the console.
So I run my application from the console with:
uwsgi --http :9090 --wsgi-file wsgi.py --master -p 4
My wsgi.py file contains:
from assets_generator import app as application

if __name__ == "__main__":
    application.run()
and my app looks like this (inside assets_generator.py):
from flask import Flask, render_template

app = Flask(__name__)
app.config.from_envvar('CONFIG')

from uwsgidecorators import thread
from worker import Worker

@thread
def _start_worker(item):
    worker = Worker(item=item)
    worker.run()

@app.route("/post-asset", methods=['GET', 'POST'])
def post_asset():
    from flask import request
    _start_worker(request.values)
    return "OK", 200
The worker's run method calls a convert method:
class Worker(object):
    def __init__(self, item):
        super(Worker, self).__init__()
        self.item = item

    def run(self):
        with app.app_context():
            # prepare stuff for conversion, fill urls etc....
            details = self.convert(
                name=self.item.get('name'),
                source_url=self.item.get('source_url'),
                conversion_format=self.item.get('format'),
                default_options=default_options
            )
and the convert method calls a URL:
def convert(self, name, source_url, conversion_format, default_options):
    try:
        print "before requests " + source_url  # THIS PRINT WORKS
        r = requests.get(source_url)
        print "after requests"  # THIS ONE DOESN'T
        # do other stuff, prints don't work
    except Exception as e:
        print " Error"
        raise e
    finally:
        print "finally"  # DOESN'T PRINT
        if zip_extract_path:
            shutil.rmtree(zip_extract_path)
    print "before returning None"  # DOESN'T PRINT
    return None
My problem is that I can see the first print in the uwsgi console logs, but the second one never happens, and no prints after this request call ever appear.
I have manually tested doing the
r = requests.get(source_url)
with the right URL from the machine where this uwsgi application is run, and the request actually succeeds and returns OK.
I am a bit confused about why my prints stop working. If anyone has insight on this, it would be greatly appreciated.
For some reason, after trying to print a non-existent variable and setting --py-autoreload 1 in the uwsgi config, my logs are now displayed. I don't understand why, though.
You should pass flush=True to the print function, for example:
print("Hello", flush=True)
or use:
import sys
sys.stdout.flush()
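Since the question's code uses Python 2-style print statements (which have no flush keyword), a minimal sketch of a helper that flushes explicitly (the helper name is mine, purely for illustration):

import sys

def log(msg):
    # Write and flush immediately so uwsgi's log shows the line right
    # away even when stdout is block-buffered; works on Python 2 and 3.
    sys.stdout.write(msg + "\n")
    sys.stdout.flush()

log("before requests")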
The reason is that you did not reload your code. Though you change and save your file, your script keeps running (it did not restart and reload the code). If you add autoreload, the script will check whether your file has changed and restart the server when it detects that you modified the code. py-autoreload=N means it will check your code each N second(s). Read more: https://serverfault.com/questions/411362/how-do-i-make-uwsgi-restart-when-a-python-script-is-modified/411363
I've spent the last hour and a half trying and failing to debug this test and I am utterly stumped. To simplify the process of testing the Flask server I am building, I have made a relatively simple script which starts the server, then runs pytest, kills the server, writes the outputs to files, and exits with Pytest's exit code. This code was working perfectly until today, and I haven't modified it since (aside from debugging this issue).
Here's the problem: when it gets to a certain point in the tests, it hangs. The weird thing is that this does not happen if I run my tests in any other way.
Debugging my server in VS Code, and running tests in the terminal: works
Running my server using the same code used in the test script and running pytest manually: works
Running pytest using the test script and running the server through the start server script (which uses the same code for running the server as the test script does) in a second terminal: works
Here's the other interesting thing: the tests always hang in the same place, part way through the setup fixture. It sends the clear command, and an echo request to the server (which prints the name of the current test). The database clears successfully, and the server echoes the correct information, but the echo route never exits - my tests never get a response. This echo route behaves perfectly for the 50 or so tests that happen before this point. If I comment out the test that is causing it to fail, it fails on the next test. If I comment out the call to the echo then it hangs on a later test on a completely different request to a different route. When it hangs, the server cannot be killed using a SIGTERM, but instead requires a SIGKILL.
Here is my echo route:
@debug.get('/echo')
def echo() -> IEcho:
    """
    Echo an input. This returns the given value, but also prints it to stdout
    on the server. Useful for debugging tests.

    ## Params:
    * `value` (`str`): value to echo
    """
    try:
        value = request.args['value']
    except KeyError:
        raise http_errors.BadRequest('echo route requires a `value` argument')
    to_print = f'{Fore.MAGENTA}[ECHO]\t\t{value}{Fore.RESET}'
    # Print it to both stdout and stderr to ensure it is seen across all logs
    # Otherwise it could be more difficult to figure out what's up with server
    # output
    print(to_print)
    print(to_print, file=sys.stderr)
    return {'value': value}
And here is my code that sends the requests:
def get(token: JWT | None, url: str, params: dict) -> dict:
    """
    Returns the response to a GET web request.
    This also parses the response to help with error checking.

    ### Args:
    * `token` (`JWT | None`): auth token, if any
    * `url` (`str`): URL to request to
    * `params` (`dict`): parameters to send

    ### Returns:
    * `dict`: response data
    """
    return handle_response(requests.get(
        url,
        params=params,
        headers=encode_headers(token),
        timeout=3
    ))
def echo(value: str) -> IEcho:
    """
    Echo an input. This returns the given value, but also prints it to stdout
    on the server. Useful for debugging tests.

    ## Params:
    * `value` (`str`): value to echo
    """
    return cast(IEcho, get(None, f"{URL}/echo", {"value": value}))

@pytest.fixture(autouse=True)
def before_each(request: pytest.FixtureRequest):
    """Clear the database between tests"""
    clear()
    echo(f"{request.module.__name__}.{request.function.__name__}")
    print("After echo")  # This never prints
Here is my code for running Pytest in my test script:
def pytest():
    pytest = subprocess.Popen(
        [sys.executable, '-u', '-m', 'pytest', '-v', '-s'],
    )
    # Wait for tests to finish
    print("🔨 Running tests...")
    try:
        ret = pytest.wait()
    except KeyboardInterrupt:
        print("❌ Testing cancelled")
        pytest.terminate()
        # write_outputs(pytest, None)
        # write_outputs(pytest, "pytest")
        raise
    # write_outputs(pytest, "pytest")
    if ret == 0:
        print("✅ It works!")
    else:
        print("❌ Tests failed")
    return bool(ret)
And here is my code for running my server in my test script:
def backend(debug=False, live_output=False):
    env = os.environ.copy()
    if debug:
        env.update({"ENSEMBLE_DEBUG": "TRUE"})
        debug_flag = ["--debug"]
    else:
        debug_flag = []

    if live_output is False:
        outputs = subprocess.PIPE
    else:
        outputs = None

    flask = subprocess.Popen(
        [sys.executable, '-u', '-m', 'flask'] + debug_flag + ['run'],
        env=env,
        stderr=outputs,
        stdout=outputs,
    )
    if outputs is not None and (flask.stderr is None or flask.stdout is None):
        print("❌ Can't read flask output", file=sys.stderr)
        flask.kill()
        sys.exit(1)

    # Request until we get a success, but crash if we failed to start in 10
    # seconds
    start_time = time.time()
    started = False
    while time.time() - start_time < 10:
        try:
            requests.get(
                f'http://localhost:{os.getenv("FLASK_RUN_PORT")}/debug/echo',
                params={'value': 'Test script startup...'},
            )
        except requests.ConnectionError:
            continue
        started = True
        break

    if not started:
        print("❌ Server failed to start in time")
        flask.kill()
        if outputs is not None:
            write_outputs(flask, None)
        sys.exit(1)
    else:
        if flask.poll() is not None:
            print("❌ Server crashed during startup")
            if outputs is not None:
                write_outputs(flask, None)
            sys.exit(1)

    print("✅ Server started")
    return flask
So in summary, does anyone have any idea what on earth is happening? It freezes on such a simple route, which makes me very concerned. I think I may have found some crazy bug in Flask or in the requests library or something.
Even if you don't know what's happening with this, it'd be really helpful to have any ideas as to how I can debug this further, as I have absolutely no idea what is going on.
It turns out that my server output was filling up all the buffer space in the pipe, meaning that it would wait for the buffer to empty. The issue is that my test script was waiting for the tests to exit, and the tests could not progress unless the server was active. As such, the code reached a three-way deadlock. I fixed it by redirecting my output through a file (where limited buffer size wasn't a problem).
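For reference, a minimal sketch of that fix (the file name and flask command here are placeholders, not the actual test script):

import subprocess
import sys

# Redirect the server's output to a real file instead of subprocess.PIPE,
# so a full pipe buffer can never block the server (and deadlock the tests).
with open("flask.log", "w") as log_file:
    flask = subprocess.Popen(
        [sys.executable, "-u", "-m", "flask", "run"],
        stdout=log_file,
        stderr=subprocess.STDOUT,  # interleave stderr into the same file
    )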
I have the following script which just boots up a web server serving a dynamically created website. In order to get dynamic data the script opens a file to read the data.
My concern is how I can catch the CTRL-C command that kills the Python script, so I can close the files before the script's thread is killed.
I tried the following couple of things, but neither works:
from flask import Flask, render_template
import time

# Initialize the Flask application
app = Flask(__name__)

fileNames = {}
fileDesc = {}
for idx in range(1, 4):
    fileNames["name{}".format(idx)] = "./name" + str(idx) + ".txt"
    fileDesc["name{}".format(idx)] = open(fileNames["name{}".format(idx)], 'r')

try:
    @app.route('/')
    def index():
        # code for reading data from files
        return render_template('index.html', var1=var1)

    @app.errorhandler(Exception)
    def all_exception_handler(error):
        print("Closing")
        for key, value in fileDesc.items():
            value.close()
        print("Files closed")

    if __name__ == '__main__':
        app.run(
            host="192.168.0.166",
            port=int("8080"),
            debug=True
        )
except KeyboardInterrupt:
    print("Closing")
    for key, value in fileDesc.items():
        value.close()
    print("Files closed")
Thanks in advance.
I am struggling with the same thing in my project. Something that did work for me was using signal to capture CTRL-C.
import sys
import signal

def handler(signal, frame):
    print('CTRL-C pressed!')
    sys.exit(0)

signal.signal(signal.SIGINT, handler)
signal.pause()
When this piece of code is put in the script that is running the Flask app, CTRL-C can be captured. As of now, you have to press CTRL-C twice before the handler is executed, though. I'll investigate further and edit the answer if I find something new.
Edit 1
Okay I've done some more research and came up with some other methods, as the above is quite hack 'n slash.
In production, clean-up code such as closing databases or files is done via the @app.teardown_appcontext decorator. See this part of the tutorial.
When using the simple server, you can shut it down via exposing the werkzeug shutdown function. See this post.
Edit 2
I've tested the Werkzeug shutdown function, and it also works together with the teardown_appcontext functions. So I suggest to write your teardown functions using the decorator and writing a simple function that just does the shutdown of the werkzeug server. That way production and development code are the same.
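For illustration, a sketch of that combination (note that the werkzeug.server.shutdown hook only exists on the development server, and it has been removed in newer Werkzeug releases):

from flask import Flask, request

app = Flask(__name__)

@app.teardown_appcontext
def teardown(error):
    # Runs whenever an application context is popped (typically after
    # each request); close per-context resources here.
    pass

@app.route('/shutdown', methods=['POST'])
def shutdown():
    # Development-server-only shutdown hook exposed by older Werkzeug
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running the Werkzeug development server')
    func()
    return 'Shutting down...'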
Use atexit to handle this, from: https://stackoverflow.com/a/30739397/5782985
import atexit

# defining function to run on shutdown
def close_running_threads():
    for thread in the_threads:
        thread.join()
    print "Threads complete, ready to finish"

# Register the function to be called on exit
atexit.register(close_running_threads)

# start your process
app.run()
I'm writing a python debugging library which opens a flask server in a new thread and serves information about the program it's running in. This works fine when the program being debugged isn't a web server itself. However if I try to run it concurrently with another flask server that's running in debug mode, things break. When I try to access the second server, the result alternates between the two servers.
Here's an example:
from flask.app import Flask
from threading import Thread

# app1 represents my debugging library
app1 = Flask('app1')

@app1.route('/')
def foo():
    return '1'

Thread(target=lambda: app1.run(port=5001)).start()

# Cannot change code after here as I'm not the one writing it
app2 = Flask('app2')

@app2.route('/')
def bar():
    return '2'

app2.run(debug=True, port=5002)
Now when I visit http://localhost:5002/ in my browser, the result may either be 1 or 2 instead of consistently being 2.
Using multiprocessing.Process instead of Thread has the same result.
How does this happen, and how can I avoid it? Is it unavoidable with flask/werkzeug/WSGI? I like flask for its simplicity and ideally would like to continue using it. If that's not possible, what's the simplest library/framework that I can use that won't interfere with any other web servers running at the same time? I'd also like to use threads instead of processes if possible.
The reloader of Werkzeug (which is used in debug mode by default) creates a new process using subprocess.call; simplified, it does something like:
new_environ = os.environ.copy()
new_environ['WERKZEUG_RUN_MAIN'] = 'true'
subprocess.call([sys.executable] + sys.argv, env=new_environ, close_fds=False)
This means that your script is re-executed. That's usually fine if all it contains is an app.run(), but in your case it would restart both app1 and app2. Both now use the same port, because if the OS supports it, the listening port is opened in the parent process, inherited by the child, and used there directly when the environment variable WERKZEUG_SERVER_FD is set.
So now you have two different apps somehow using the same socket.
You can see this better if you add some output, e.g.:
from flask.app import Flask
from threading import Thread
import os

app1 = Flask('app1')

@app1.route('/')
def foo():
    return '1'

def start_app1():
    print("starting app1")
    app1.run(port=5001)

app2 = Flask('app2')

@app2.route('/')
def bar():
    return '2'

def start_app2():
    print("starting app2")
    app2.run(port=5002, debug=True)

if __name__ == '__main__':
    print("PID:", os.getpid())
    print("Werkzeug subprocess:", os.environ.get("WERKZEUG_RUN_MAIN"))
    print("Inherited FD:", os.environ.get("WERKZEUG_SERVER_FD"))
    Thread(target=start_app1).start()
    start_app2()
This prints for example:
PID: 18860
Werkzeug subprocess: None
Inherited FD: None
starting app1
starting app2
* Running on http://127.0.0.1:5001/ (Press CTRL+C to quit)
* Running on http://127.0.0.1:5002/ (Press CTRL+C to quit)
* Restarting with inotify reloader
PID: 18864
Werkzeug subprocess: true
Inherited FD: 4
starting app1
starting app2
* Debugger is active!
If you change the startup code to
if __name__ == '__main__':
    if os.environ.get("WERKZEUG_RUN_MAIN") != 'true':
        Thread(target=start_app1).start()
    start_app2()
then it should work correctly: only app2 is reloaded by the reloader. However, it runs in a separate process, not in a different thread; that is implied by using debug mode.
A hack to avoid this would be to use:
if __name__ == '__main__':
    os.environ["WERKZEUG_RUN_MAIN"] = 'true'
    Thread(target=start_app1).start()
    start_app2()
Now the reloader thinks it's already running in the subprocess and doesn't start a new one, so everything runs in the same process. Reloading won't work, and I don't know what other side effects this may have.
How to write a script in Python that outputs if celery is running on a machine (Ubuntu)?
My use-case: I have a simple python file with some tasks. I'm not using Django or Flask. I use supervisor to run the task queue. For example,
tasks.py
from celery import Celery, task

app = Celery('tasks')

@app.task()
def add_together(a, b):
    return a + b
Supervisor:
[program:celery_worker]
directory = /var/app/
command=celery -A tasks worker --loglevel=info
This all works. I now want a page which checks whether the celery/supervisor process is running, i.e. something like the following, maybe using Flask, allowing me to host a page that gives a 200 status so I can load balance.
For example...
check_status.py
from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def status_check():
    # check supervisor is running
    if supervisor:
        return render_template('up.html')
    else:
        return render_template('down.html')

if __name__ == '__main__':
    app.run()
Update 09/2020: Jérôme updated this answer for Celery 4.3 here: https://stackoverflow.com/a/57628025/1159735
You can run the celery status command via code by importing the celery.bin.celery package:
import celery
import celery.bin.base
import celery.bin.celery
import celery.platforms

app = celery.Celery('tasks', broker='redis://')

status = celery.bin.celery.CeleryCommand.commands['status']()
status.app = status.get_app()

def celery_is_up():
    try:
        status.run()
        return True
    except celery.bin.base.Error as e:
        if e.status == celery.platforms.EX_UNAVAILABLE:
            return False
        raise e

if __name__ == '__main__':
    if celery_is_up():
        print('Celery up!')
    else:
        print('Celery not responding...')
How about using subprocess? Not sure if it is a good idea:
>>> import subprocess
>>> output = subprocess.check_output('ps aux'.split())
>>> 'supervisord' in output
True
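One caveat: on Python 3, check_output returns bytes, so the membership test needs a bytes literal (or a decode first). A sketch of the same check:

import subprocess

output = subprocess.check_output(['ps', 'aux'])
if b'supervisord' in output:
    print('supervisord is running')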
You can parse the process state from the supervisorctl status output:
import subprocess

def is_celery_worker_running():
    ctl_output = subprocess.check_output('supervisorctl status celery_worker'.split()).strip()
    if ctl_output == 'unix:///var/run/supervisor.sock no such file':
        # supervisord not running
        return False
    elif ctl_output == 'No such process celery_worker':
        return False
    else:
        state = ctl_output.split()[1]
        return state == 'RUNNING'
Inspired by @vgel's answer, using Celery 4.3.0.
import celery
import celery.bin.base
import celery.bin.control
import celery.platforms

# Importing Celery app from my own application
from my_app.celery import app as celery_app

def celery_running():
    """Test Celery server is available

    Inspired by https://stackoverflow.com/a/33545849
    """
    status = celery.bin.control.status(celery_app)
    try:
        status.run()
        return True
    except celery.bin.base.Error as exc:
        if exc.status == celery.platforms.EX_UNAVAILABLE:
            return False
        raise

if __name__ == '__main__':
    if celery_running():
        print('Celery up!')
    else:
        print('Celery not responding...')
Supervisor comes with a sparse web user interface; maybe you could use that. It can be enabled in the supervisor config; the key to look for is [inet_http_server].
You could even look at the source code of that piece to get ideas to implement your own.
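Once that section is enabled, supervisord also exposes an XML-RPC API on the same port, which avoids shelling out to supervisorctl. A sketch (the port and the "celery_worker" process name are assumptions matching the config above):

from xmlrpc.client import ServerProxy

def celery_worker_state(host='localhost', port=9001):
    # Query supervisord's XML-RPC interface (enabled via [inet_http_server])
    server = ServerProxy('http://{}:{}/RPC2'.format(host, port))
    info = server.supervisor.getProcessInfo('celery_worker')
    return info['statename']  # e.g. 'RUNNING', 'STOPPED', 'FATAL'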
This isn't applicable to celery, but for anyone who ended up here wanting to see if supervisord is running: check whether the pidfile defined for supervisord in your supervisord.conf configuration file exists. If so, it's running; if not, it isn't. The default pidfile is /tmp/supervisord.pid, which is what I used below.
import os
import sys

if os.path.isfile("/tmp/supervisord.pid"):
    print "supervisord is running."
    sys.exit()
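One caveat with this approach: a pidfile can go stale if supervisord crashed without cleaning up. A sketch that also checks whether the recorded PID is alive (the helper name is mine):

import os

def supervisord_running(pidfile="/tmp/supervisord.pid"):
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
    except (IOError, ValueError):
        return False
    try:
        os.kill(pid, 0)  # signal 0: existence check only, nothing is sent
    except OSError:
        return False
    return True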
In my experience, I'd have each task record a message tracking whether it completed, so that the queue itself is responsible for retrying tasks.
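For example, a minimal sketch of letting the queue own retries, reusing the tasks.py example from the question (retry parameters are illustrative):

from celery import Celery

app = Celery('tasks')

@app.task(bind=True, max_retries=3)
def add_together(self, a, b):
    try:
        return a + b
    except Exception as exc:
        # Re-queue the task instead of tracking completion manually
        raise self.retry(exc=exc, countdown=5)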
I'm using django-selenium to add Selenium testing functionality to existing unittests.
My Selenium tests rely on a web server running on my machine, which is started by running our django app like so: main.py -a
So the first thing I want to do in my Selenium test is to start this server, which I set up like so:
def start_server():
    path = os.path.join(os.getcwd(), 'main.py -a')
    server_running = is_server_running()
    if server_running is False:
        server = subprocess.Popen('cmd.exe', stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        stdout, stderr = server.communicate(input='%s\n' % path)
        print 'Server error:\n{0}\n'.format(stderr)
        server_running = is_server_running()
    return server_running
However, when I do this, the web server takes over execution of the django test process in the command line. I assume the way I should be doing this is to launch the command prompt in a separate process and then trigger the main.py -a command in that process.
Is this the right idea, and if so, how can I modify that function to spawn a new process and launch my command? I was trying to run 'cmd.exe' using Process(target=path), but I couldn't get it to work. Thanks :)
The way I have gone with this is a much simpler launch method:
startServer.py
def run():
    path = os.path.join(os.getcwd(), 'main.py')
    server_running = is_server_running()
    if server_running is False:
        subprocess.Popen(['python', path, '-a'])

if __name__ == '__main__':
    run()
which I can then start and stop in my tests' setup & teardown like so:
def setUp(self):
    self.server = Process(target=startServer.run)
    self.server.start()

def test(self):
    # run test process
    pass

def tearDown(self):
    utils.closeBrowser(self.ff)
There may well be a better way of doing things & something here may not be 'as it should be' but it works (with a socket forcibly closed error) :)
My only outstanding issue is tests starting before the database tables have been created :(