Invoking a Flask server from within a pytest test - python

import time

import pytest
import requests
from multiprocessing import Process
from flask_file import main

@pytest.fixture
def endpoint():
    return "http://127.0.0.1:8888/"

def test_send_request(endpoint: str):
    server = Process(target=main)
    server.start()
    time.sleep(30)  # give the server time to come up
    # check that the service is up and running
    service_up = requests.get(f"{endpoint}")
    server.terminate()
    server.join()
I wanted to spin up and tear down a server locally from within a test so I could test some requests against it. I know the server itself works, because I can run main() from flask_file directly with python flask_file and it spins up the server, which I can then ping just fine. With the method above, the test does sit through the full 30s sleep without failing, but during those 30 seconds I cannot open the endpoint in my browser and see the expected "hello world".

When you run the Flask built-in development server (e.g. via flask run or app.run()), it handles only one request at a time by default. So while your test is holding a connection to the app, you cannot also reach it from the browser.
In any case, you should rewrite your test and fixture to use the test_client instead of spawning a real server; see the official documentation:
https://flask.palletsprojects.com/en/1.1.x/testing/
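A minimal sketch of that approach (assuming flask_file exposes the Flask application object as app; adjust the import to match your module):

import pytest
from flask_file import app  # assumption: flask_file exposes the Flask `app` object

@pytest.fixture
def client():
    app.config["TESTING"] = True
    with app.test_client() as client:
        yield client

def test_send_request(client):
    # requests go straight to the app; no real server, subprocess, or sleep needed
    response = client.get("/")
    assert response.status_code == 200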

Related

How to serialize a Python script across servers?

I started using Prefect recently, and I noticed I can add decorators to some functions and then submit them to Prefect, which runs my script remotely on an agent server. I'm wondering how a decorator can somehow serialize a function for remote execution.
Example Prefect Python script:
import sys
import prefect
from prefect import flow, task, get_run_logger
from utilities import AN_IMPORTED_MESSAGE

@task
def log_task(name):
    logger = get_run_logger()
    logger.info("Hello %s!", name)
    logger.info("Prefect Version = %s 🚀", prefect.__version__)
    logger.debug(AN_IMPORTED_MESSAGE)

@flow()
def log_flow(name: str):
    log_task(name)

if __name__ == "__main__":
    name = sys.argv[1]
    log_flow(name)
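For context on the mechanism: a decorator receives the wrapped function object, and tools like Prefect serialize that object (cloudpickle is the usual choice) so the bytes can be shipped to an agent and executed there. A toy sketch of the idea, not Prefect's actual implementation:

import cloudpickle

def remote_task(fn):
    # serialize the wrapped function; a real system would send these bytes to an agent
    payload = cloudpickle.dumps(fn)
    def submit(*args, **kwargs):
        # deserialize and run, demonstrating the round trip locally
        restored = cloudpickle.loads(payload)
        return restored(*args, **kwargs)
    return submit

@remote_task
def greet(name):
    return f"Hello {name}!"

print(greet("world"))  # -> Hello world!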

Celery with memory backend hanging

I'm developing a testing suite for a Flask app that uses Celery for processing background tasks.
I am working on integration tests and have been trying to configure an embedded live worker as per the documentation (https://docs.celeryproject.org/en/latest/userguide/testing.html)
conftest.py
import os

import pytest
from mongoengine import connect  # assumption: the app uses MongoEngine for Mongo access
from app import create_app

@pytest.fixture(scope='session')
def celery_config():
    return {
        'broker_url': 'memory://localhost/',
        'result_backend': 'memory://localhost/',
    }

@pytest.fixture(scope='module')
def create_flask_app():
    # drop all records in testDatabase before starting a new test module
    db = connect(host=os.environ["MONGODB_SETTINGS_TEST"], alias="testConnect")
    for collection in db["testDatabase"].list_collection_names():
        db["testDatabase"].drop_collection(collection)
    db.close()
    # create a Flask application configured for testing
    flask_app = create_app()
    return flask_app

@pytest.fixture(scope='function')
def test_client(create_flask_app):
    """
    Establish a test client for use within each test module
    """
    with create_flask_app.test_client() as testing_client:
        with create_flask_app.app_context():
            yield testing_client

@pytest.fixture(scope='function')
def celery_app(create_flask_app):
    from celery.contrib.testing import tasks  # noqa: F401  (registers the built-in ping task)
    from app import celery
    return celery
I'm trying to run the tests using local memory as the backend, yet the tasks hang and the test suite never finishes executing.
When I run the tests with a Redis backend (and start Redis on my development machine), everything works fine. But I'd like not to depend on Redis when running the tests.
Am I doing something wrong with the setup? Does anyone have any idea why the tasks are hanging?
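One hedged observation, since the conftest above replaces the plugin's own celery_app fixture: Celery's pytest plugin only starts an embedded worker when a test pulls in the celery_worker fixture, and the in-memory result backend is spelled cache+memory:// rather than memory:// (memory:// is a broker transport, not a result backend). A sketch of the setup as the Celery testing docs describe it:

# sketch, using the fixtures shipped with celery.contrib.pytest
import pytest

@pytest.fixture(scope='session')
def celery_config():
    return {
        'broker_url': 'memory://',
        'result_backend': 'cache+memory://',  # in-memory result backend
    }

def test_add(celery_app, celery_worker):
    # celery_worker runs an embedded worker thread for the duration of the test
    @celery_app.task
    def add(x, y):
        return x + y

    celery_worker.reload()  # make the worker pick up the task defined above
    assert add.delay(2, 2).get(timeout=10) == 4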

Redis get function returns None

I am working on a Flask application that interacts with Redis. This application is deployed on Heroku, with a Redis add-on.
When I do some testing with the interaction, I am not able to get the key-value pair that I just set; instead, I always get None back. Here is an example:
import os

from flask import Flask
import redis

app = Flask(__name__)
redis_url = os.getenv('REDISTOGO_URL', 'redis://localhost:6379')
redis = redis.from_url(redis_url)  # note: this rebinds the name `redis` to the client

@app.route('/test')
def test():
    redis.set("test", "{test1: test}")
    print(redis.get("test"))  # prints None here
    return "what the freak"

if __name__ == "__main__":
    app.run(host='0.0.0.0')
As shown above, the test route prints None, which means the value is not set. I am confused: when I test the server in my local browser it works, and when I interact with Redis using a Heroku Python shell it works too.
Testing with a Python shell:
heroku run python
from server import redis
redis.set('test', 'i am here')  # returns True
redis.get('test')  # returns 'i am here'
I am confused now. How should I properly interact with Redis from Flask?
redis-py by default constructs a client backed by a ConnectionPool, and this is probably what the from_url helper function does. While Redis itself is single-threaded, commands issued through a connection pool have no guaranteed order of execution. For a single client, construct a redis.StrictRedis client directly, or pass the parameter connection_pool=None. This is preferable for a small number of simple commands, as there is less connection-management overhead. Alternatively, you can use a pipeline in the context of a connection pool to serialise a batch of operations.
https://redis-py.readthedocs.io/en/latest/#redis.ConnectionPool
https://redis-py.readthedocs.io/en/latest/#redis.Redis.pipeline
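A sketch of the pipeline approach mentioned above (standard redis-py API): the commands are buffered client-side and sent as a single batch, so the set and get execute back to back on the server.

import redis

r = redis.StrictRedis.from_url('redis://localhost:6379')

# queue both commands, then send them as one batch
pipe = r.pipeline()
pipe.set("test", "{test1: test}")
pipe.get("test")
set_ok, value = pipe.execute()  # -> [True, b'{test1: test}']
print(value)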
I did more experiments on this. It seems there is an issue related to delay; the modification below makes it work:
import time  # needed for the delay

@app.route('/test')
def test():
    redis.set("test", "{test1: test}")
    time.sleep(5)  # add a delay to let the set finish
    print(redis.get("test"))  # prints "{test1: test}" here
    return "now it works"
I read the Redis documentation, and Redis seems to be single-threaded, so I am not sure why it would execute the get call before the set has finished. Someone with more experience, please post an explanation.

Replacing Flask's internal web server with Apache

I have written a single-user application that currently works with Flask's internal web server. It does not seem to be very robust, and it crashes with all sorts of socket errors as soon as a page takes a long time to load and the user navigates elsewhere while waiting. So I thought I would replace it with Apache.
The problem is that my current code is a single program that first launches about ten threads to do stuff, for example setting up SSH tunnels to remote servers and ZMQ connections to communicate with a database located there. Finally, it enters the run() loop to start the internal server.
I followed all sorts of instructions and managed to get Apache to serve the initial page. However, everything goes wrong: I no longer have any worker threads available or any globally initialised classes, and the global variables holding the interfaces to those threads no longer exist.
Obviously I am not a web developer.
How badly "wrong" is my current code? Is there any way to make it work with Apache with a reasonable amount of work? Can Apache just replace the run() part and communicate with the running application? My current app, in a very simplified form (without the data-processing threads), is something like this:
from flask import Flask, render_template

comm = None
app = Flask(__name__)

class CommsHandler(object):
    def __init__(self):
        # init communication links to external servers and databases
        ...

    def request_data(self, request):
        # use the initialised links to request something
        return result

@app.route("/", methods=["GET"])
def mainpage():
    return render_template("main.html")

@app.route("/foo", methods=["GET"])
def foo():
    a = comm.request_data("xyzzy")
    return render_template("foo.html", data=a)

comm = CommsHandler()
app.run()
Or have I done this completely wrong? When I remove app.run() and just import the app object in the WSGI script, I do get a response from the main page, as it does not need a reference to the global variable comm.
/foo does not work, as "comm" is an uninitialised variable. And I can see why, of course. I just never thought this part would need to be exported to Apache or any other web server.
So the question is: can I launch this application somehow from an rc script at boot, set up its communication links and everything, and have Apache/WSGI just call functions of the running application instead of launching a new one?
Hannu
This is a simple app run on Flask's internal server:
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
To run it on an Apache server, check out the FastCGI docs:
from flup.server.fcgi import WSGIServer
from yourapplication import app

if __name__ == '__main__':
    WSGIServer(app).run()
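As for keeping the worker threads and the comm object alive: with mod_wsgi a common pattern (a sketch, not the only option; yourapplication is a placeholder module name, and it assumes the comm = CommsHandler() / app.run() lines are moved under an if __name__ == '__main__': guard) is to do the initialisation at module import time, so it runs once when the long-lived WSGI daemon process loads the app, and Apache then calls into that same process for every request instead of launching a new one:

# app.wsgi -- entry point loaded once per mod_wsgi daemon process
import yourapplication

# import-time initialisation: start threads, open SSH tunnels / ZMQ links
yourapplication.comm = yourapplication.CommsHandler()

# mod_wsgi looks for a module-level callable named `application`
application = yourapplication.app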

Is there any way of detecting an automatic reload in Flask's debug mode?

I have a Flask app where I'd like to execute some code the first time the app is run, but not on the automatic reloads triggered by debug mode. Is there any way of detecting when a reload is triggered so that I can do this?
To give an example, I might want to open a web browser every time I run the app from Sublime Text, but not when I subsequently edit the files, like so:
import webbrowser

if __name__ == '__main__':
    # `app` is the Flask application defined elsewhere in the module
    webbrowser.open('http://localhost:5000')
    app.run(host='localhost', port=5000, debug=True)
You can set an environment variable.
import os

if 'WERKZEUG_LOADED' in os.environ:
    print('Reloading...')
else:
    print('Starting...')
    os.environ['WERKZEUG_LOADED'] = 'TRUE'
I still don't know how to persist a reference that survives the reloading, though.
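Werkzeug's reloader also sets an environment variable of its own in the process that actually serves requests, so you can check for that instead of inventing one; note it distinguishes the watcher process from the serving process, rather than the first start from later reloads:

import os

if os.environ.get('WERKZEUG_RUN_MAIN') == 'true':
    print('Serving process (started fresh or after a reload)')
else:
    print('Watcher process')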
What about using Flask-Script to kick off a process before you start your server? Something like this (cribbed from their documentation and edited slightly):
# run_devserver.py
import webbrowser
from flask_script import Manager  # the old 'flask.ext.script' import path is long gone
from myapp import app

manager = Manager(app)

if __name__ == "__main__":
    webbrowser.open('http://localhost:5000')
    # host/port/debug are given on the command line, e.g.:
    #   python run_devserver.py runserver --host localhost --port 5000 -d
    manager.run()
I have a Flask app where it's not really practical to change the DEBUG flag or disable reloading, and the app is spun up in a more complex way than just flask run.
@osa's solution didn't work for me with Flask debug on, because it doesn't have enough finesse to pick out the Werkzeug watcher process from the worker process that gets reloaded.
I have this code in my main package's __init__.py (the package that defines the flask app). This code is run by another small module which has from <the_package_name> import app followed by app.run(debug=True, host='0.0.0.0', port=5000). Therefore this code is executed before the app starts.
import logging
import os

import ptvsd

logger = logging.getLogger(__name__)

my_pid = os.getpid()
if os.environ.get('PPID') == str(os.getppid()):
    # our parent is the process that recorded PPID: we are the reloaded worker
    logger.debug('Reloading...')
    logger.debug(f"Current process ID: {my_pid}")
    try:
        port = 5678
        ptvsd.enable_attach(address=('0.0.0.0', port))
        logger.debug(f'========================== PTVSD waiting on port {port} ==========================')
        # ptvsd.wait_for_attach()  # Not necessary for my app; YMMV
    except Exception as ex:
        logger.debug(f'PTVSD raised {ex}')
else:
    # first process: remember our PID so the reloaded child can recognise us
    logger.debug('Starting...')
    os.environ['PPID'] = str(my_pid)
    logger.debug(f"First process ID: {my_pid}")
NB: note the difference between os.getpid() and os.getppid() (the latter gets the parent process's ID).
I can attach at any point and it works great, even if the app has reloaded already before I attach. I can detach and re-attach. The debugger survives a reload.
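For newer setups, note that ptvsd has since been superseded by debugpy; the equivalent attach hook (same idea, current API) is:

import debugpy

# listen for a debugger client (e.g. VS Code) on port 5678
debugpy.listen(("0.0.0.0", 5678))
# debugpy.wait_for_client()  # optional: block until the debugger attaches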
