Deploy/Reload code from a Python Pyramid project in production - python

I use ...\pserve development.ini --reload in my dev environment to restart my API when the code changes.
The doc says:
Auto-template-reload behavior is not recommended for production sites
as it slows rendering slightly; it’s usually only desirable during
development.
But the doc makes no suggestion for a production environment. What's the recommended way to reload? Do I have to do it manually every time?

Yes, you will need to restart the service if you change anything in your config file.
If you know that you'll be changing things and don't want to restart the service every time that happens, move some of your configuration to a database and refactor your app to read from there. This won't be possible for everything, and you'll need to be careful that updates are applied correctly, but it can be done for some things.
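For instance, a minimal sketch of that idea in Python (the table and helper names are hypothetical): cache tunable settings read from a database table, so they can change without a service restart.
import time

_cache = {}
_TTL = 60  # seconds to cache a value before re-reading it from the database

def get_setting(conn, name, default=None):
    # Fetch a setting, hitting the database at most once per _TTL seconds.
    value, fetched_at = _cache.get(name, (default, 0))
    if time.time() - fetched_at > _TTL:
        row = conn.execute(
            "SELECT value FROM app_settings WHERE name = ?", (name,)
        ).fetchone()
        value = row[0] if row else default
        _cache[name] = (value, time.time())
    return value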

First of all, you are quoting the Automatically Reloading Templates section of the documentation. That section only discusses how to reload templates automatically, not your entire application.
The documentation explicitly states not to use --reload in production. That is an automatic function, not a manual one.
If you change your code and deploy it to a production environment, it is assumed that you will restart your application manually, which removes the need for --reload when invoking pserve production.ini.
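In practice that usually means putting pserve under a process manager and restarting it after each deploy. A minimal sketch, assuming a hypothetical systemd unit named myapi that wraps pserve production.ini:
# after deploying the new code, restart the service (the unit name is hypothetical)
sudo systemctl restart myapi
# or, without a process manager, stop the old pserve process and start it again:
# pserve production.ini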

Related

Elastic Beanstalk with Django: is there a way to run manage.py shell and have access to environment variables?

A similar question was asked here; however, the solution does not give the shell access to the same environment as the deployment. If I inspect os.environ from within the shell, none of the environment variables appear.
Is there a way to run manage.py shell with the deployment's environment?
PS: As a little side question, I know the mantra for EBS is to stop using eb ssh, but then how would you run one-off management scripts (ones that you don't want to run on every deploy)?
One case where you have to run something exactly once is database schema migrations. Information about applied migrations is usually stored in the database itself, so you can use the database to synchronize and to ensure that something is triggered only once.
Personally I have nothing against using eb ssh, but I do see problems with it: if you want CI/CD, that kind of manual operation is against the rules.
It looks like you are referring to the web/API part of Beanstalk. If you need to run something fairly frequently, maybe a worker environment is more suitable? The problem there is that if the API is deployed first, you would temporarily have the wrong schema.
In general you are running on EC2, so it's the instance's user data that stores the information that spins up your service, and that is where you can put your one-off steps. You still need to synchronize them yourself. See the Beanstalk docs for more information on how to do that.
Edit
Beanstalk is a kind of instrumentation on top of EC2, so there must be a way to work with it, since you have access to the user data of those EC2 instances. No worries, you don't need to dig that deep: there is a good way of instrumenting your server, called ebextensions. It can be used to put files on the server, trigger commands, set up cron jobs, whatever you want.
You can create an ebextension with container_commands (see the Python Configuration Namespaces section this time). Those commands are executed on each deployment. The problem is still that you need to synchronize, since more than one deployment can run at the same time; the good part is that you can set the environment up the way you want.
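As a minimal sketch of such an ebextension (the file name and the Django command are illustrative; container_commands and leader_only are the actual configuration keys):
# .ebextensions/01_migrate.config
container_commands:
  01_migrate:
    command: "python manage.py migrate --noinput"
    leader_only: true  # run on a single instance only, to avoid concurrent migrations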
I have no problem accessing the environment variables. How did you run into that problem? Try preparing a page that dumps the environment map (os.environ) so you can see what is actually set.

Is apache2 reload for .conf changes only, or can it also be used when application code changes?

When the code for my Python WSGI application changes, should I use apache2's reload or its graceful restart feature?
Currently we use reload, but we have noticed that sometimes the application does not load properly, and errors about missing modules are logged to the error files even though those modules have existed for a long time.
If you can, you should probably use graceful. But if your application is not exiting correctly, you may have to force it with a plain restart.
For mod_wsgi, you should try running in daemon mode. When the application runs in daemon mode, you can restart it just by touching the WSGI script file to update its timestamp. This reloads all of your code without restarting Apache.
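A minimal sketch of a daemon-mode setup in the Apache configuration (the process group name and paths are illustrative):
# run the application in its own daemon process group
WSGIDaemonProcess myapp processes=2 threads=15
WSGIProcessGroup myapp
WSGIScriptAlias / /var/www/myapp/myapp.wsgi
With that in place, touching /var/www/myapp/myapp.wsgi reloads the application code without restarting Apache itself.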
Here is more info: http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess
This is for django, but may be useful for your project: http://code.google.com/p/modwsgi/wiki/IntegrationWithDjango
'reload' and 'graceful' have the same effect as far as reloading your web application goes. If you are seeing import issues like you describe, it is likely a problem in your application code, such as import-order dependencies or import cycles. One sees this a lot with people using Django. I suggest you actually post an example of the error you are getting.

Refresh the urls.py cache in Django

I'm using Django on nginx with FastCGI, and I have a problem with urls.py. According to this question, Django caches the urls.py file, and, just like that question's author, I'm not able to modify my URL definitions.
My question is: is there any way to clear the URL cache in Django/nginx/FastCGI without a server restart (which doesn't help anyway)?
This is not just a urls.py thing; it's the normal workflow for running a WSGI or FastCGI app. The module is in memory, and it doesn't get reloaded from disk until you tell the server that it has changed.
As per Django's FastCGI docs:
If you change any Python code on your site, you'll need to tell FastCGI the code has changed. But there's no need to restart Apache in this case. Rather, just reupload mysite.fcgi, or edit the file, so that the timestamp on the file will change. When Apache sees the file has been updated, it will restart your Django application for you.
If you have access to a command shell on a Unix system, you can accomplish this easily by using the touch command:
touch mysite.fcgi
For development, in most cases you can use the Django development server, which watches for code changes and restarts when it sees something change.
You don't need to restart the whole server, just your FastCGI app. However, I don't know why you say this doesn't help; this is the way to do it. It can't not help.

How to set up a staging environment on Google App Engine

Having properly configured a Development server and a Production server, I would like to set up a Staging environment on Google App Engine, useful for testing newly developed versions live before deploying them to production.
I know two different approaches:
A. The first option is by modifying the app.yaml version parameter.
version: app-staging
What I don't like about this approach is that Production data is polluted with my staging tests, because (correct me if I'm wrong):
Staging version and Production version share the same Datastore
Staging version and Production version share the same logs
Regarding the first point, I don't know if it could be "fixed" using the new namespaces Python API.
B. The second option is by modifying the app.yaml application parameter
application: foonamestaging
With this approach, I would create a second application totally independent of the Production version.
The only drawback I see is that I'm forced to configure a second application (administrator setup).
With a backup/restore tool like Gaebar, this solution works well too.
What kind of approach are you using to set up a staging environment for your web application?
Also, do you have any automated script to change the yaml before deploying?
If a separate datastore is required, option B looks like the cleaner solution to me because:
You can keep versions feature for real versioning of production applications.
You can keep versions feature for traffic splitting.
You can keep namespaces feature for multi-tenancy.
You can easily copy entities from one app to another. It's not so easy between namespaces.
A few APIs still don't support namespaces.
For teams with multiple developers, you can grant permission to upload to production to a single person.
I chose the second option in my set-up because it was the quickest solution, and I haven't yet made a script to change the application parameter on deployment.
But the way I see it now, option A is the cleaner solution. With a couple of lines of code you can switch the datastore namespace based on the version, which you can get dynamically from the environment variable CURRENT_VERSION_ID, as documented here: http://code.google.com/appengine/docs/python/runtime.html#The_Environment
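A minimal sketch of that idea, assuming the legacy Python runtime where an appengine_config.py hook sets the default namespace per request (the version and namespace names are illustrative):
# appengine_config.py
import os

def namespace_manager_default_namespace_for_request():
    # CURRENT_VERSION_ID looks like "app-staging.123456"; keep only the version name.
    version = os.environ.get('CURRENT_VERSION_ID', '').split('.')[0]
    return 'staging' if version == 'app-staging' else ''  # '' is the default namespace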
We went with option B, and I think it is better in general, as it isolates the projects completely. So, for example, playing around with some of the configuration on the staging server won't compromise security or cause any other butterfly effect in your production environment.
As for the deployment script, you can have any application name you want in your app.yaml (some dummy/dev name); when you deploy, just use the -A parameter:
appcfg.py -A your-app-name update .
That simplifies your deploy script quite a bit; there is no need for string replacement or anything similar in your app.yaml.
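For example, a tiny deploy script along those lines (the production app ID fooname is hypothetical; foonamestaging is taken from the question):
#!/bin/sh
# usage: ./deploy.sh staging|prod
case "$1" in
  staging) appcfg.py -A foonamestaging update . ;;
  prod)    appcfg.py -A fooname update . ;;
  *)       echo "usage: $0 staging|prod" >&2; exit 1 ;;
esac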
We use option B.
In addition to Zygmantas's suggestions about the benefits of separating dev from prod at the application level, we also use our dev application to test performance.
Normally the dev instance runs without much available in the way of resources, which helps to show where the application "feels" slow. We can then independently tweak the performance settings to see what makes a difference (e.g. the front-end instance class).
Of course sometimes we need to bite the bullet and tweak & watch on live. But it's nice to have the other application to play with.
We still use namespaces and versions; it's just that dev is dirty and experimental.
Here is what the Google documentation says:
A general recommendation is to have one project per application per
environment. For example, if you have two applications, "app1" and
"app2", each with a development and production environment, you would
have four projects: app1-dev, app1-prod, app2-dev, app2-prod. This
isolates the environments from each other, so changes to the
development project do not accidentally impact production, and gives
you better access control, since you can (for example) grant all
developers access to development projects but restrict production
access to your CI/CD pipeline
With this in mind, add a dispatch.yaml file at the root directory, and in each directory or repository that represents a single service, add an app.yaml file along with the associated source code, as explained here: Structuring web services in App Engine
Edit: check out the equivalent page in the Python section if you're using Python.
No need to create a separate project. You can use dispatch.yaml to route your staging URL to another service (staging) in the same project.
Create a custom domain staging.yourdomain.com
Create a separate app-staging.yaml that specifies the staging service:
...
service: staging
...
Create a dispatch.yaml that contains something like:
dispatch:
  - url: "*staging.mydomain.com/"
    service: staging
  - url: "*mydomain.com/"
    service: default
gcloud app deploy app-staging.yaml dispatch.yaml
Use of the application parameter in app.yaml has been shut down. Instead, Google recommends:
gcloud app deploy --project [YOUR_PROJECT_ID]
Please see https://cloud.google.com/appengine/docs/standard/python/config/appref

Restarting a Django application running on Apache + mod_python

I'm running a Django app on Apache + mod_python. When I make changes to the code, sometimes they take effect immediately, and other times they don't until I restart Apache. However, I don't really want to do that, since it's a production server running other things too. Is there some other way to force a reload?
Just to make it clear, since I see some people get it wrong: I'm talking about a production environment. For development I'm using Django's development server, of course.
If possible, you should switch to mod_wsgi. This is now the recommended way to serve Django anyway, and is much more efficient in terms of memory and server resources.
In mod_wsgi, each site has a .wsgi file associated with it. To restart a site, just touch the relevant file, and only that site's code will be reloaded.
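For example (the path is hypothetical):
touch /srv/www/mysite/mysite.wsgi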
As others have suggested, use mod_wsgi instead. To get automatic reloading, either by touching the WSGI script file or through a monitor that looks for code changes, you must be using daemon mode on UNIX. A sleight of hand can be used to achieve the same in embedded mode on Windows. All the details can be found in:
http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode
You can make Apache recycle each child process after a single request by setting "MaxRequestsPerChild 1" in your httpd.conf file, so code changes are picked up immediately. But do it only on a test server, not in production.
or
If you don't want to kill existing connections and still want to restart Apache, you can restart it gracefully with "apache2ctl graceful"; all existing connections will be allowed to complete.
Use the test server included with Django (e.g. ./manage.py runserver 0.0.0.0:8080). It will do most of what you need during development. The only drawback is that it cannot handle simultaneous requests with multi-threading.
I've heard there is a trick of setting Apache's max instances to 1 so that every code change is reflected immediately, but since you said you're running other services, this may not fit your case.
