MongoEngine connecting to Atlas: No replica set members found yet - python

After a working production app had been connected to a local MongoDB, we decided to move to MongoDB Atlas. This caused production errors.
Our stack is docker -> alpine 3.6 -> python 2.7.13 -> Flask -> uwsgi (2.0.17) -> nginx, running on AWS.
flask-mongoengine-0.9.3 mongoengine-0.14.3 pymongo-3.5.1
When starting the app in staging/production, uwsgi logs "No replica set members found yet". We don't know why.
We've tried different connection settings, e.g. connect: False, which means lazy connecting: the connection is established not on initialization but on the first query.
That caused nginx to fail with a "resource temporarily unavailable" error on some of our apps, and we had to restart multiple times before the app finally started serving requests.
I think the issue is with pymongo and the fact that it's not fork-safe:
http://api.mongodb.com/python/current/faq.html?highlight=thread#id3
and uwsgi uses forks.
I suspect it might be related to the way my app is being initialized, possibly going against "Using PyMongo with Multiprocessing".
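If the fork-safety suspicion is right, the usual mitigations are uwsgi's lazy-apps = true, a @postfork hook that reconnects, or a wrapper that rebuilds the client whenever it notices it is running in a new process. A minimal stdlib-only sketch of that last idea (ForkSafeClient and client_factory are hypothetical names for illustration, not pymongo API; PyMongo clients must be created after the fork, one per worker process):

```python
import os

class ForkSafeClient(object):
    """Lazily build the real client, and rebuild it after a fork.

    client_factory stands in for a callable like pymongo.MongoClient;
    because PyMongo clients are not fork-safe, each uwsgi worker must
    create its own client instead of inheriting the parent's.
    """

    def __init__(self, client_factory):
        self._factory = client_factory
        self._pid = None
        self._client = None

    def get(self):
        pid = os.getpid()
        if self._client is None or self._pid != pid:
            # First use in this process, or we were forked: (re)connect.
            self._client = self._factory()
            self._pid = pid
        return self._client
```

Within a single process the same client is reused; only a fork (a changed pid) triggers a reconnect.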
Here is the app init code:
from app import FlaskApp
from flask import current_app
from flask_mongoengine import MongoEngine
import logging

app = None

def init_flask_app(app_instance, **kwargs):
    app_instance.init_instance(environment_config)
    application = app_instance.get_instance()
    app = application.app
    return application, app

application, app = init_flask_app(app_instance, module_name='my_module')
# app_instance.py
from app import FlaskApp
from flask import current_app

def init_instance(env):
    global app
    app = FlaskApp(env)
    return app

def get_instance():
    if globals().get('app') is None:
        app = current_app.flask_app_object
    else:
        app = globals().get('app')
    assert app is not None
    return app
class FlaskApp(object):
    def __init__(self, env):
        .....
        # Initialize the DB
        self.db = Database(self.app)
        ....
        # used in app_instance.py to get the flask app object in case it's None
        self.app.flask_app_object = self

    def run_server(self):
        self.app.run(host=self.app.config['HOST'], port=self.app.config['PORT'], debug=self.app.config['DEBUG'])

class Database(object):
    def __init__(self, app):
        self.db = MongoEngine(app)

    def drop_all(self, database_name):
        logging.warn("Dropping database %s" % database_name)
        self.db.connection.drop_database(database_name)

if __name__ == '__main__':
    application.run_server()
Help in debugging this will be appreciated!

Related

FastAPI server running on AWS App Runner fails after 24 hours

I have a FastAPI server configured with Gunicorn, deployed on AWS App Runner. When I try to access the endpoint, it works perfectly; however, after 24 hours, when I try to access the same endpoint, I get a 502 Bad Gateway error, and nothing is logged on CloudWatch after this point, until I redeploy the application, and then it starts working fine again.
I suspect this has to do with my Gunicorn configuration itself, which was somehow shutting down my API after some time, and not with AWS App Runner, but I have not found any solution. I have also shown my Gunicorn setup below. Any help will be appreciated.
from fastapi import FastAPI
import uvicorn
from fastapi.middleware.cors import CORSMiddleware
from gunicorn.app.base import BaseApplication
import os
import multiprocessing

api = FastAPI()

def number_of_workers():
    print((multiprocessing.cpu_count() * 2) + 1)
    return (multiprocessing.cpu_count() * 2) + 1

class StandaloneApplication(BaseApplication):
    def __init__(self, app, options=None):
        self.options = options or {}
        self.application = app
        super().__init__()

    def load_config(self):
        config = {
            key: value for key, value in self.options.items()
            if key in self.cfg.settings and value is not None
        }
        for key, value in config.items():
            self.cfg.set(key.lower(), value)

    def load(self):
        return self.application

@api.get("/test")
async def root():
    return 'Success'

if __name__ == "__main__":
    if os.environ.get('APP_ENV') == "development":
        uvicorn.run("api:api", host="0.0.0.0", port=2304, reload=True)
    else:
        options = {
            "bind": "0.0.0.0:2304",
            "workers": number_of_workers(),
            "accesslog": "-",
            "errorlog": "-",
            "worker_class": "uvicorn.workers.UvicornWorker",
            "timeout": "0"
        }
        StandaloneApplication(api, options).run()
I had the same problem. After a lot of trial and error, two changes seemed to resolve this for me.
1. Set uvicorn --timeout-keep-alive to 65 (for gunicorn this param is --keep-alive). I'm assuming the Application Load Balancer throws a 502 if uvicorn closes the TCP socket before the ALB does.
2. Change the App Runner health check to use HTTP rather than TCP ping to manage container recycling. Currently the AWS UI doesn't allow you to make this change; you will have to do it using the aws cli. Use any active URL path for the ping check - in your case /test:
aws apprunner update-service --service-arn <arn> --health-check-configuration Protocol=HTTP,Path=/test
#2 alone might be enough to resolve the issue.
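For change #1, when gunicorn is configured programmatically (as in the question's options dict) the setting name is keepalive, in seconds. A sketch of the tweak, assuming a 60-second load-balancer idle timeout (the exact value for your ALB/App Runner setup is an assumption to verify):

```python
# Sketch: extend the options dict from the question so idle connections are
# held open longer than the load balancer holds them, so the LB (not the
# worker) closes idle connections and intermittent 502s are avoided.
LB_IDLE_TIMEOUT = 60  # seconds; assumed value, check your ALB/App Runner config

options = {
    "bind": "0.0.0.0:2304",
    "workers": 2,
    "worker_class": "uvicorn.workers.UvicornWorker",
    "keepalive": LB_IDLE_TIMEOUT + 5,  # gunicorn's --keep-alive, in seconds
}

assert options["keepalive"] > LB_IDLE_TIMEOUT
```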

What is the correct connection string for a PostgreSQL remote db on AWS EBS

I'm trying to connect my locally hosted Flask app to an RDS DB set up remotely on Amazon, but I'm unable to get my app to access it. It says no password supplied.
Here is what I have done:
- Google
- StackOverflow
- Setting the Inbound Rules in EC2 Security Groups for the RDS DB to allow all incoming traffic from anywhere.
This is my application.py code:
from os import environ
from Flask_Auth.manage import application
import Flask_Auth.manage

if __name__ == '__main__':
    # application.debug = True
    application.run()
This is my config.py code:
import os

basedir = os.path.abspath(os.path.dirname(__file__))
postgres_local_base = 'postgresql:///postgres:123123123!@confudb.cusmbtiketg6.ap-south-1.rds.amazonaws.com/confudb'
database_name = 'confudb'
# 'postgresql://postgres:admin@localhost/'
# 'flask_jwt_auth'

class BaseConfig:
    """Base configuration."""
    SECRET_KEY = os.getenv('SECRET_KEY', 'y\n=\xc5\xfa\nB\xb8t\n\x83\xbef\x8a\xe3\xddE\x17\x06\xc9\x96\x8ec|')
    DEBUG = False
    CSRF_ENABLED = True
    BCRYPT_LOG_ROUNDS = 13
    SQLALCHEMY_TRACK_MODIFICATIONS = False

class DevelopmentConfig(BaseConfig):
    """Development configuration."""
    DEVELOPMENT = True
    DEBUG = True
    BCRYPT_LOG_ROUNDS = 4
    SQLALCHEMY_DATABASE_URI = postgres_local_base
I have prod and test configs as well.
I can connect through pgAdmin on my Windows laptop - I can add the server via "add server" in pgAdmin and access it fully - but somehow I'm unable to connect my application to the remote database.
This is what my AWS security policies are:
Kindly, if anyone can resolve this issue, that will be appreciated.
It looks like your URL has an extra / in it.
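For reference, a SQLAlchemy PostgreSQL URL takes the form postgresql://user:password@host:port/dbname - two slashes after the scheme, not three, and @ before the host. A small helper to build one safely (the host and credentials below are placeholders, not the real RDS endpoint):

```python
from urllib.parse import quote_plus

def pg_uri(user, password, host, dbname, port=5432):
    # Percent-encode the password so characters like '!' or '@'
    # don't break URL parsing.
    return "postgresql://%s:%s@%s:%d/%s" % (
        user, quote_plus(password), host, port, dbname)

# Placeholder values for illustration:
uri = pg_uri("postgres", "123123123!", "your-instance.ap-south-1.rds.amazonaws.com", "confudb")
```

The '!' in the password becomes %21, so it can no longer be mistaken for part of the URL structure.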

debug Flask server inside Jupyter Notebook

I want to debug a small Flask server inside a Jupyter notebook for a demo.
I created a virtualenv on the latest Ubuntu with Python 2 (on Mac with Python 3 this error occurs as well) and ran pip install flask jupyter.
However, when I create a cell with a hello-world script, it does not run inside the notebook.
from flask import Flask
app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run(debug=True, port=1234)
File "/home/***/test/local/lib/python2.7/site-packages/ipykernel/kernelapp.py", line 177, in _bind_socket
  s.bind("tcp://%s:%i" % (self.ip, port))
File "zmq/backend/cython/socket.pyx", line 495, in zmq.backend.cython.socket.Socket.bind (zmq/backend/cython/socket.c:5653)
File "zmq/backend/cython/checkrc.pxd", line 25, in zmq.backend.cython.checkrc._check_rc (zmq/backend/cython/socket.c:10014)
  raise ZMQError(errno)
ZMQError: Address already in use
NB - I change the port number after each failed attempt.
It runs fine as a standalone script.
Update: without debug=True it's OK.
I installed Jupyter and Flask, and your original code works for me.
The flask.Flask object is a WSGI application, not a server. Flask uses Werkzeug's development server as a WSGI server when you call python -m flask run in your shell: it creates a new WSGI server and then passes your app as a parameter to werkzeug.serving.run_simple. Maybe you can try doing that manually:
from werkzeug.wrappers import Request, Response
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == '__main__':
    from werkzeug.serving import run_simple
    run_simple('localhost', 9000, app)
Flask.run() calls run_simple() internally, so there should be no difference here.
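Along the same lines, any WSGI app (a Flask app included) can be served from a background thread so the notebook cell returns immediately. A stdlib-only sketch using wsgiref in place of Werkzeug's server, with port 0 so the OS picks a free port (which also sidesteps the "Address already in use" error):

```python
import threading
from wsgiref.simple_server import make_server

def serve_in_background(wsgi_app, host="localhost", port=0):
    # port=0 asks the OS for any free port; the actual port chosen is
    # reported in server.server_address after binding.
    server = make_server(host, port, wsgi_app)
    thread = threading.Thread(target=server.serve_forever, daemon=True)
    thread.start()
    return server, server.server_address[1]
```

Call server.shutdown() to stop it. Note that debug=True-style reloading is not available this way, which matches the observation above that the notebook only fails with debug=True.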
The trick is to run the Flask server in a separate thread. This code allows registering data providers. The key features are:
- It finds a free port for the server. If you run multiple instances of the server in different notebooks, they would otherwise compete for the same port.
- The register_data function returns the URL of the server so you can use it for whatever you need.
- The server is started on demand (when the first data provider is registered).
Note: I added the @cross_origin() decorator from the flask-cors package; otherwise you cannot call the API from within the notebook.
Note: there is no way to stop the server in this code.
Note: the code uses typing and Python 3.
Note: there is no good error handling at the moment.
import socket
import threading
import uuid
from typing import Any, Callable, cast, Optional

from flask import Flask, abort, jsonify
from flask_cors import cross_origin
from werkzeug.serving import run_simple

app = Flask('DataServer')

@app.route('/data/<id>')
@cross_origin()
def data(id: str) -> Any:
    func = _data.get(id)
    if not func:
        abort(400)
    return jsonify(func())

_data = {}
_port: int = 0

def register_data(f: Callable[[], Any], id: Optional[str] = None) -> str:
    """Sets a callback for data and returns a URL"""
    _start_server()
    id = id or str(uuid.uuid4())
    _data[id] = f
    return f'http://localhost:{_port}/data/{id}'

def _init_port() -> int:
    """Finds a random free port."""
    # see https://stackoverflow.com/a/5089963/2297345
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(('localhost', 0))
    port = sock.getsockname()[1]
    sock.close()
    return cast(int, port)

def _start_server() -> None:
    """Starts a flask server in the background."""
    global _port
    if _port:
        return
    _port = _init_port()
    thread = threading.Thread(target=lambda: run_simple('localhost', _port, app))
    thread.start()
Although this question was asked long ago, here is another suggestion: the following code is adapted from how PyCharm starts a Flask console.
import sys
from flask.cli import ScriptInfo

app = None
locals().update(ScriptInfo(create_app=None).load_app().make_shell_context())
print("Python %s on %s\nApp: %s [%s]\nInstance: %s" % (sys.version, sys.platform, app.import_name, app.env, app.instance_path))
Now you can access app and use everything described in the Flask docs on working with the shell.

Gunicorn Flask Caching

I have a Flask application running under gunicorn and nginx. But if I change a value in the db, the application fails to reflect the update in the browser under some conditions.
I have a Flask script with the following commands:
import os
from flask_script import Manager
from msldata import app, db, models

path = os.path.dirname(os.path.abspath(__file__))
manager = Manager(app)

@manager.command
def run_dev():
    app.debug = True
    if os.environ.get('PROFILE'):
        from werkzeug.contrib.profiler import ProfilerMiddleware
        app.config['PROFILE'] = True
        app.wsgi_app = ProfilerMiddleware(app.wsgi_app, restrictions=[30])
    if 'LISTEN_PORT' in app.config:
        port = app.config['LISTEN_PORT']
    else:
        port = 5000
    print app.config
    app.run('0.0.0.0', port=port)
    print app.config

@manager.command
def run_server():
    from gunicorn.app.base import Application
    from gunicorn.six import iteritems
    # workers = multiprocessing.cpu_count() * 2 + 1
    workers = 1
    options = {
        'bind': '0.0.0.0:5000',
    }

    class GunicornRunner(Application):
        def __init__(self, app, options=None):
            self.options = options or {}
            self.application = app
            super(GunicornRunner, self).__init__()

        def load_config(self):
            config = dict([(key, value) for key, value in iteritems(self.options) if key in self.cfg.settings and value is not None])
            for key, value in iteritems(config):
                self.cfg.set(key.lower(), value)

        def load(self):
            return self.application

    GunicornRunner(app, options).run()
Now, if I run the server with run_dev in debug mode, db modifications show up.
If run_server is used, the modifications are not seen unless the app is restarted.
However, if I run it like gunicorn -c a.py app:app, the db updates are visible.
a.py contents:
import multiprocessing

bind = "0.0.0.0:5000"
workers = multiprocessing.cpu_count() * 2 + 1
Any suggestions on what I am missing?
I also ran into this situation: running Flask in gunicorn with several workers, flask-cache won't work anymore.
Since you are already using app.config.from_object('default_config') (or a similar filename), just add this to your config:
CACHE_TYPE = "filesystem"
CACHE_THRESHOLD = 1000000  # some number your hard drive can manage
CACHE_DIR = "/full/path/to/dedicated/cache/directory/"
I bet you used "simplecache" before...
I was/am seeing the same thing, only when running gunicorn with Flask. One workaround is to set gunicorn max-requests to 1. However, that's not a real solution if you have any kind of load, due to the resource overhead of restarting the workers after each request. I got around this by having nginx serve the static content, then changing my Flask app to render the template, write it to a static file, and return a redirect to that file.
Flask-Caching's SimpleCache doesn't work with workers > 1 under gunicorn.
I had a similar issue using Flask 2.0.2 and Flask-Caching 1.10.1.
Everything works fine in development mode until you put it on gunicorn with more than one worker. One probable reason is that in development there is only one process/worker, so under those restricted circumstances SimpleCache happens to work.
My code was:
app.config['CACHE_TYPE'] = 'SimpleCache'  # a simple Python dictionary
cache = Cache(app)
The solution that works with Flask-Caching is to use FileSystemCache; my code now:
app.config['CACHE_TYPE'] = 'FileSystemCache'
app.config['CACHE_DIR'] = 'cache'  # path to your server cache folder
app.config['CACHE_THRESHOLD'] = 100000  # number of 'files' before auto-delete kicks in
cache = Cache(app)
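The underlying reason SimpleCache breaks with workers > 1 is that it is a per-process in-memory dict: each gunicorn worker holds its own copy, so a value cached (or invalidated) in one worker is invisible to the others. A stdlib sketch of that isolation, with a plain dict standing in for SimpleCache and a forked child process standing in for a gunicorn worker:

```python
import multiprocessing

_simple_cache = {}  # stands in for SimpleCache: one dict per process

def _worker(queue):
    # A forked worker mutates its own copy; the parent never sees this.
    _simple_cache["user:1"] = "cached in child"
    queue.put(dict(_simple_cache))

def demo():
    # Use the fork start method (Unix), mirroring how gunicorn forks workers.
    ctx = multiprocessing.get_context("fork")
    queue = ctx.Queue()
    child = ctx.Process(target=_worker, args=(queue,))
    child.start()
    child.join()
    return queue.get(), dict(_simple_cache)
```

The child sees its own write while the parent's dict stays empty, which is exactly why a shared backend (filesystem, redis, memcached) is needed across workers.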

python bottle integration tests

I have a REST API hosted using the Bottle web framework, and I would like to run integration tests for it. As part of a test I need to start a local instance of the bottle server, but run in the Bottle framework blocks the executing thread. How do I create integration tests with a local instance of the server?
I want to start the server during setUp and stop it after running all my tests.
Is this possible with the Bottle web framework?
I was able to do it using multithreading. If there is a better solution, I will consider it.
def setUp(self):
    from thread import start_new_thread
    start_new_thread(start_bottle, (), {})

def my_test():
    # Run the tests here which make HTTP calls to the resources hosted by the above bottle server instance
UPDATE
import time
from thread import start_new_thread

from bottle import run

class TestBottleServer(object):
    """
    Starts a local instance of the bottle container to run the tests against.
    """
    is_running = False

    def __init__(self, app=None, host="localhost", port=3534, debug=False, reloader=False, server="tornado"):
        self.app = app
        self.host = host
        self.port = port
        self.debug = debug
        self.reloader = reloader
        self.server = server

    def ensured_bottle_started(self):
        if TestBottleServer.is_running is False:
            start_new_thread(self.__start_bottle__, (), {})
            # Sleep is required for the forked thread to initialise the app
            TestBottleServer.is_running = True
            time.sleep(1)

    def __start_bottle__(self):
        run(
            app=self.app,
            host=self.host,
            port=self.port,
            debug=self.debug,
            reloader=self.reloader,
            server=self.server)

    @staticmethod
    def restart():
        TestBottleServer.is_running = False
        TEST_BOTTLE_SERVER.ensured_bottle_started()

TEST_BOTTLE_SERVER = TestBottleServer()
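The same start-in-setUp pattern can be written with only the standard library - here http.server stands in for the bottle app so the sketch stays self-contained (with bottle you would start __start_bottle__ in the thread instead; the handler and route below are hypothetical, Python 3):

```python
import threading
import unittest
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class _Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"pong")

    def log_message(self, *args):
        pass  # keep test output quiet

class ApiTest(unittest.TestCase):
    def setUp(self):
        # port 0 = let the OS pick a free port, avoiding clashes between runs
        self.server = HTTPServer(("localhost", 0), _Handler)
        self.port = self.server.server_address[1]
        threading.Thread(target=self.server.serve_forever, daemon=True).start()

    def tearDown(self):
        self.server.shutdown()

    def test_ping(self):
        body = urllib.request.urlopen("http://localhost:%d/ping" % self.port).read()
        self.assertEqual(body, b"pong")
```

Binding to port 0 and shutting down in tearDown also removes the need for the fixed port and sleep(1) in the class above.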
