I am using beaker cache to cache the output of a function.
When I invalidate the cache from the Flask uWSGI app, the invalidation is not reflected in the Celery app, and vice versa.
On further investigation I found that Beaker uses inspect.getsourcefile(func) as part of the unique key it stores in Redis.
Now the problem is:
In flask uwsgi app, the path to load function shows up as
./myproject/db_api.py
while in celery it shows up as:
/opt/myproject/db_api.py.
How do I make sure that inspect.getsourcefile(func) returns the same path in both cases?
Either making Celery set the path as ./myproject/db_api.py or making Flask load the path as /opt/myproject/db_api.py would be fine.
Celery is being run as a daemon with CELERYD_CHDIR='/opt' in /etc/default/celeryd. In celeryconfig.py I have CELERY_IMPORTS = ('myproject.controllers.celerytasks.cache_invalidate')
Flask is being run by uWSGI with an .ini file on Ubuntu with the following config:
[uwsgi]
module = myproject
callable = app
chdir = /opt
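To illustrate the mismatch, here is one possible normalization as a sketch. The stable_cache_key helper is hypothetical (not part of Beaker), and it assumes both processes share the same working directory, /opt in this setup:

```python
import inspect
import os.path

def stable_cache_key(func):
    # inspect.getsourcefile() returns whatever path the module was imported
    # under, so it can be relative ("./myproject/db_api.py") in one process
    # and absolute ("/opt/myproject/db_api.py") in another. os.path.abspath()
    # resolves both against the current working directory, so the keys agree
    # as long as both processes chdir to the same place.
    return os.path.abspath(inspect.getsourcefile(func))
```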
I am following the instructions here to deploy an app on Google App Engine. Everything works correctly.
Nevertheless, by default Google looks for the file where app = Flask(__name__) is defined in main.py. How can I redefine this? I would like to use app.py instead.
Rename main.py to app.py
Add entrypoint: gunicorn -b :$PORT app:app to your app.yaml file. This is where you tell Google to find the app object in a module called app (i.e. app.py)
Add gunicorn to your requirements.txt file
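Under those assumptions, the relevant app.yaml fragment might look like this (the runtime line is an assumption; use whatever Python version your project targets):

```yaml
runtime: python39
entrypoint: gunicorn -b :$PORT app:app
```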
Notes:
i. Because you're changing from main.py to app.py, you need to specify an entrypoint. The GAE documentation says:
If your app meets the following requirements, App Engine will start
your app with the gunicorn web server if you don't specify the
entrypoint field:
The root of your app directory contains a main.py file with a WSGI-compatible object called app.
Your app does not contain Pipfile or Pipfile.lock files.
ii. If you add an entrypoint, then you need to include gunicorn in your requirements.txt file
iii. I just tested the above configuration (the answer I gave) on a dev environment (Python 3.9 environment on Macbook using dev_appserver.py) and it works
As per the Flask documentation, the FLASK_ENV environment variable determines whether Flask runs in development or production mode.
Hence I have a .env file like so:
FLASK_ENV="development"
and my app.py looks like this:
import os
from dotenv import load_dotenv, find_dotenv
from flask import Flask

load_dotenv(find_dotenv())
app = Flask(__name__)
config = DevConfig() if os.environ.get('FLASK_ENV') == 'development' else ProdConfig()
app.config.from_object(config)
Now here's the problem: if I move .env into another folder (in my case config), flask stops seeing it. More specifically (and weirdly):
The env variable is set ok (I can print it from different parts of the app)
The config loads ok (dev config loads indeed)
But flask app itself says:
Loading .env environment variables…
* Serving Flask app "app.py"
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
How is it possible that the env variable is set, but Flask still thinks it's running in prod? Again, this only happens when I move .env away from the root folder.
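A minimal stdlib sketch of the suspected ordering (this is an assumption about how the flask CLI behaves: it reads FLASK_ENV before importing app.py, so a load_dotenv() that only runs inside app.py sets the variable too late for the startup banner):

```python
import os

os.environ.pop("FLASK_ENV", None)

# 1. The CLI decides the environment first; nothing has set FLASK_ENV yet.
env_at_cli_startup = os.environ.get("FLASK_ENV", "production")

# 2. Only later does app.py run load_dotenv(), which sets the variable --
#    so app code sees "development" even though the banner said production.
os.environ["FLASK_ENV"] = "development"
env_inside_app = os.environ.get("FLASK_ENV")

print(env_at_cli_startup)  # production
print(env_inside_app)      # development
```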
I am trying to deploy my flask app. Usually I would have an app.py and put all code in it.
app.py
templates/
|
|--index.html
for my really small projects. But then I got a slightly larger app and followed Flask's larger-application guide.
So I have this:
setup.py
app/
__init__.py
views.py
models.py
forms.py
templates/
|--index.html
I now have all my routes and views in views.py and create the app in __init__.py:
from flask import Flask
app = Flask(__name__)
import app.views # Name in setup.py
if __name__ == "__main__":
    app.run()
(This is just an example)
So now I follow the guide by installing it with pip install -e . and running with:
>set FLASK_APP=app (the name I set in setup.py) and then flask run, and it works. Except I do not know how to run it with one command. Since there is no single file to run, I cannot use gunicorn or anything like that. I am not sure how to go about executing this app. How would I run pip install . on the Heroku cloud server?
My problem is that I have to import the app from __init__.py and the views using import blog.[insert import] (models, views, etc.). Any help is appreciated. Thank you.
EDIT: I do not want to use blueprints though. That might be too much. My app is medium-sized: not small, but not large either.
You absolutely can use Gunicorn to run this project. Gunicorn is not limited to a single file; it imports Python modules just the same as flask run does. Gunicorn just needs to know the module to import and the WSGI object to call within that module.
When you use FLASK_APP, all that flask run does is look for module.app, module.application or instances of the Flask() class. It also supports a create_app() or make_app() app factory, but you are not using such a factory.
Gunicorn won't search: if you only give it a module, it'll expect the name application to be the WSGI callable. In your case, you are using app, so all you have to do is explicitly tell it what name to use:
gunicorn app:app
The part before the : is the module to import (app in your case), the part after the colon is the callable object (also named app in your module).
If you have set FLASK_APP as a Heroku config var and want to re-use that, you can reference that on the command line for gunicorn:
gunicorn $FLASK_APP:app
As for Heroku, it can handle requirements.txt or setup.py.
cf. https://devcenter.heroku.com/articles/python-pip#local-file-backed-distributions
If your Python application contains a setup.py file but excludes a requirements.txt file, python setup.py develop will be used to install your package and resolve your dependencies.
If you already have a requirements file, but would like to utilize this feature, you can add the following to your requirements file:
-e .
And about the run command, I think you can put a Procfile like:
web: FLASK_APP=app flask run
or
web: FLASK_APP=app python -m flask run
I have a Flask app I am trying to serve via Gunicorn.
I am using virtualenv and Python 3. If I activate my venv, cd to my app's base dir, and then run:
gunicorn mysite:app
I get:
Starting gunicorn
Listening at http://127.0.0.1:8000
DEBUG:mysite.settings:>>Config()
...
Failed to find application: 'mysite'
Worker exiting
Shutting down: master
Reason: App failed to load
Looking in /etc/nginx/sites-available I only have the file 'default'. In sites-enabled I have no file.
In my nginx.conf file I have:
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
App structure:
mysite #this is where I cd to and run gunicorn mysite:app
--manage.py
--/mysite
----settings.py
----__init__.py
In manage.py for mysite I have the following:
logger.debug("manage.py entry point")
app = create_app(app_name)
manager = Manager(app)
if __name__ == "__main__":
    manager.run()
In __init__.py file:
def create_app(object_name):
    app = Flask(__name__)
    # more setup here
    return app
In my settings.py in the app directory
class Config(object):
    logger.debug(">>Config()")  # this logs OK so gunicorn is at least starting in correct directory
From inside the virtualenv if I run
print(sys.path)
I find a path to python and site-packages for this virtualenv.
From what I have read, starting gunicorn is just a matter of installing it and running gunicorn mysite:app.
Running gunicorn from the parent directory of mysite I get the same Failed to find application: 'mysite', App failed to load error, but I don't get the DEBUG...Config() line logged (as we are clearly in the wrong directory to start in). Running gunicorn from mysite/mysite (clearly wrong) I get an Exception in worker process error: ImportError: No module named 'mysite'.
Any clues as to how I can get gunicorn running?
You're pointing gunicorn at mysite:app, which is equivalent to from mysite import app. However, there is no app object in the top (__init__.py) level import of mysite. Tell gunicorn to call the factory.
gunicorn "mysite:create_app()"
You can pass arguments to the call as well.
gunicorn "mysite:create_app('production')"
Internally, this is equivalent to:
from mysite import create_app
app = create_app('production')
Alternatively, you can use a separate file that does the setup. In your case, you already initialized an app in manage.py.
gunicorn manage:app
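For illustration, the factory pattern gunicorn calls into can be sketched with a plain WSGI callable standing in for Flask. The create_app name and 'production' argument mirror the answer above, but the body here is a minimal stand-in, not the real mysite code:

```python
def create_app(config_name):
    # Stand-in for mysite.create_app(): any WSGI callable will do, since a
    # Flask app is itself a WSGI callable.
    def application(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [("config: " + config_name).encode()]
    application.config_name = config_name
    return application

# This is the object that gunicorn "mysite:create_app('production')"
# ends up serving.
app = create_app("production")
```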
I have a Pyramid webapp with a Postgres database, and I'm using git for version control.
This is how my files are structured:
myapp/
|----dotcloud.yml
|----env/ # virtualenv
|----MyProject/
|
|----production.ini
|----requirements.txt
|----myapp.sql
|----myapp.psql
|----wsgi.py
|----myproject
|
|----scripts/
|----static/
|----templates/
|----__init__.py
|----views.py
|----models.py
This is my dotcloud.yml:
www:
  type: python
  config:
    python_version: v2.7
  approot: home/home/myapp
db:
  type: postgresql
This is my wsgi.py:
from pyramid.paster import get_app, setup_logging
ini_path = '.../myproject/production.ini'
setup_logging(ini_path)
application = get_app(ini_path, 'main')
This is my (simplified) __init__.py:
def main(global_config, **settings):
    engine = engine_from_config(settings, 'sqlalchemy.')
    DBSession.configure(bind=engine)
    config = Configurator(...)
    config.add_static_view('static', 'static', cache_max_age=3600)
    config.add_route('index', '/')
    # other routes...
    config.scan()
    return config.make_wsgi_app()
I've read the official documentation and the third-party documentation to get to this point but there must be something I'm not doing right. It's my first time deploying a webapp, and running my app locally still works.
In MyProject/ (where the dotcloud.yml file resides) I did dotcloud create mydomainname, dotcloud connect mydomainname and then dotcloud push. But I'm getting an internal server error. What am I doing wrong?
Also, the documentation says that if I'm using git, I must state that explicitly when I use dotcloud create or dotcloud connect, but what is the exact command?
EDIT: Before moving to DotCloud, I tried to use DigitalOcean, and had some problems when using pip to install the dependencies in requirements.txt. I had to do a git clone separately on the CLI so that I could enter my username and password, and I also had to install psycopg2 manually. Could this be one of the problems here too? If so, how can I fix it?
There are several things you should try changing. First, do not push your virtualenv directory (env). The dotCloud builder will create another virtualenv based on your requirements.txt. One way to avoid pushing your env directory would be to move dotcloud.yml into MyProject. You seem to think that is where it is ("In MyProject/ (where the dotcloud.yml file resides)"), but that is not what your file tree says.
Then, do the dotcloud connect or dotcloud create in MyProject, as you stated.
You should remove the approot line from your dotcloud.yml. Allow approot to go to its default, the current directory.
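Putting those suggestions together, a sketch of the adjusted dotcloud.yml, placed inside MyProject/ (same services as before, with the approot line dropped so it defaults to the directory containing the file):

```yaml
www:
  type: python
  config:
    python_version: v2.7
db:
  type: postgresql
```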