What is the difference between using Apple's sandbox servers or not with push notifications? For example:
apns = APNs(use_sandbox=True, cert_file='cert.pem', key_file='key.pem')
vs.
apns = APNs(use_sandbox=False, cert_file='cert.pem', key_file='key.pem')
Why would someone care if they are using Apple's sandbox servers or not? Is there an actual reason why it should be used or not?
You should use the sandbox while you are in development or staging, and the production server once you have deployed ad hoc or to the App Store.
The reason is to keep test environment distinct from production.
When you create an application you need to set up different certificates and provisioning profiles to sign the app: basically one for debug and one for distribution. If you want to add push functionality you must create 2 certificates to communicate with APNS. Push test certificates only work in the sandbox with apps signed with debug certificates, while push production certificates only work with apps signed using a distribution certificate.
It's quite common to keep the test environment distinct from production; sometimes I work with 3 environments: test, stage, production. One for pure testing, one for checking that everything works as expected before going to production.
Suppose that you already have an application on the App Store that uses push notifications, and now you'd like to publish a new version of the app that enhances or modifies something in the payload of the notification. Wouldn't it be nice if you could test that new payload in an environment different from the one the App Store version uses, so you can take your time to see if everything is working correctly and maybe check that the changes don't affect the old app? That is the purpose of the sandbox.
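As an illustration, here is a minimal sketch of how one code base can target the right APNS gateway per environment. It reuses the APNs constructor from your question; the APNS_ENV variable and the per-environment certificate file names are just assumptions.

import os
from apns import APNs  # same client class as in the question

# Assumption: APNS_ENV and the certificate file names are defined by you.
use_sandbox = os.environ.get('APNS_ENV', 'development') != 'production'

apns = APNs(use_sandbox=use_sandbox,
            cert_file='cert-dev.pem' if use_sandbox else 'cert-prod.pem',
            key_file='key-dev.pem' if use_sandbox else 'key-prod.pem')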
Did I answer your question?
I am trying to build a web app that has both a Python part and a Node.js part. The Python part is a RESTful API server, and the Node.js part will use Socket.IO and act as a push server. Both will need to access the same DB instance (Heroku Postgres in my case). The Python part will need to talk to the Node.js part in order to send push messages to be delivered to clients.
I have the Python and DB parts built and deployed, running under a "web" dyno. I am not sure how to build the Node part -- and especially how the Python part can talk to the Node.js part.
I am assuming that the Node.js part will need to be a new Heroku app, so that it too can run on a 'web' dyno, benefit from the HTTP routing stack, and let clients connect to it. In that case, will my Python dynos access it just like regular clients do?
What are the alternatives? How is this usually done?
After having played around a little, and also doing some reading, it seems like Heroku apps that need this have 2 main options:
1) Use some kind of back-end that both apps can talk to. Examples would be a DB, Redis, 0mq, etc.
2) Use what I suggested above. I actually went ahead and implemented it, and it works.
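To make option 2 concrete: the Python app talks to the Node.js app over plain HTTP, exactly like an external client would. A minimal sketch, assuming a hypothetical /push endpoint on the Node.js app, a shared secret for authentication, and the requests library:

import requests

# Hypothetical URL of the Node.js push app and a shared secret for auth.
PUSH_SERVER_URL = 'https://my-push-app.herokuapp.com/push'
PUSH_SECRET = 'change-me'

def notify_client(user_id, message):
    # The Node.js app relays this payload to the connected Socket.IO client.
    resp = requests.post(PUSH_SERVER_URL,
                         json={'user_id': user_id, 'message': message},
                         headers={'Authorization': 'Bearer ' + PUSH_SECRET},
                         timeout=5)
    resp.raise_for_status()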
Just thought I'd share what I've found.
What are the common steps taken to secure a Django app instance in production?
I am using SQLite, so encrypting the database would be nice too.
And since the database is encrypted, the app code should also be either encrypted or compiled only. Is it safe to simply delete all the *.py files and leave the *.pyc files?
Also, is it possible to disable the django shell (./manage.py shell) in the production server? Once the shell is accessible so is all the data.
The stack that I am using is: Nginx + Gunicorn + Django + SQLite all hosted on a rackspace dedicated server with a dedicated firewall.
Basically, the objective is that anyone with root access should not be able to access the database contents.
Securing django is an important question, but I think you are confused.
First of all, even if you could make your code less easy to examine, it would not make it more secure. Secondly, it is possible to recover everything but the comments from .pyc files.
Lastly, the django shell is a convenience for interacting with your application at the commandline. If anyone unauthorised were in a position to run it, it would not matter if you had disabled it - your security would already be completely compromised.
I strongly recommend that you do not administer your own production server with your current state of knowledge. Use a shared host, and follow your hosting service's security guidelines. Concentrate on the actual web security aspects of your application.
One more thing: you're not using the built-in server in production are you?
Update: You can't protect yourself from root, and even if you could, they could, say, just put the hard disk in another computer.
I have an app that services inbound mail and I have deployed a new development version to Google App Engine. The default is currently set to the previous version.
Is there a way to specify that inbound mail should be delivered to a particular version?
This is well documented using URLs but I can't find any reference to version support in the inbound mail service...
No, this isn't currently supported. You could write some code for your default version that routes mail to other versions via URLFetch, though.
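A minimal sketch of that routing idea on the old Python 2 runtime, assuming the default version maps /_ah/mail/.* to this handler and that the target version name and app hostname are yours to fill in:

import webapp2
from google.appengine.api import urlfetch

MAIL_TARGET_VERSION = 'dev'                 # hypothetical version id
APP_HOSTNAME = 'your-app-id.appspot.com'    # hypothetical hostname

class ForwardMailHandler(webapp2.RequestHandler):
    def post(self):
        # Re-post the raw inbound message to the same /_ah/mail/... path
        # on the target version, using the "<version>-dot-<app>" hostname.
        target = 'https://%s-dot-%s%s' % (
            MAIL_TARGET_VERSION, APP_HOSTNAME, self.request.path_qs)
        urlfetch.fetch(target,
                       payload=self.request.body,
                       method=urlfetch.POST,
                       headers={'Content-Type': self.request.headers.get(
                           'Content-Type', 'message/rfc822')})

app = webapp2.WSGIApplication([('/_ah/mail/.+', ForwardMailHandler)])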
There is an easier way to do this than writing code that routes between different versions using URLFetch.
If you have a large body of code that is email oriented and you need a development version, simply use one of your ten applications as the development application (version). This allows you to do things like have test-specific entities in the development application's Datastore, and you can test as much as you want running live on App Engine.
The only constraints are:
1) because the application has a different name, for email sending from the application you either need to send from your gmail account or have a configuration that switches the application name
2) sending test email to the application will have a slightly different email address (not a big issue I think)
3) keep an app.yaml with a different application name
4) you burn another one of your ten possible apps
Most RCS will allow you to have the same project checked out into different directories. Once you are ready for launch (all development code is committed and testing done), update the 'production' directory (except for app.yaml) and then deploy.
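If you would rather automate swapping the application name than keep a separate app.yaml, a tiny helper script is enough. A sketch only; the dev application id and the appcfg.py invocation are assumptions about your setup:

import re
import subprocess

DEV_APP_ID = 'myapp-dev'   # hypothetical development application id

with open('app.yaml') as f:
    config = f.read()

# Replace the application: line, then deploy to the development app.
config = re.sub(r'^application:.*$', 'application: %s' % DEV_APP_ID,
                config, flags=re.MULTILINE)

with open('app.yaml', 'w') as f:
    f.write(config)

subprocess.check_call(['appcfg.py', 'update', '.'])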
Having properly configured a Development server and a Production server, I would like to set up a Staging environment on Google App Engine useful to test new developed versions live before deploying them to production.
I know two different approaches:
A. The first option is by modifying the app.yaml version parameter.
version: app-staging
What I don't like about this approach is that Production data is polluted with my staging tests, because (correct me if I'm wrong):
Staging version and Production version share the same Datastore
Staging version and Production version share the same logs
Regarding the first point, I don't know if it could be "fixed" using the new namespaces python API.
B. The second option is by modifying the app.yaml application parameter
application: foonamestaging
with this approach, I would create a second application totally independent from the Production version.
The only drawback I see is that I'm forced to configure a second application (setting up administrators, etc.).
With a backup/restore tool like Gaebar this solution works well too.
What kind of approach are you using to set up a staging environment for your web application?
Also, do you have any automated script to change the yaml before deploying?
If a separate datastore is required, option B looks like the cleaner solution to me because:
You can keep versions feature for real versioning of production applications.
You can keep versions feature for traffic splitting.
You can keep namespaces feature for multi-tenancy.
You can easily copy entities from one app to another. It's not so easy between namespaces.
A few APIs still don't support namespaces.
For teams with multiple developers, you can grant upload to production permission for a single person.
I chose the second option in my set-up, because it was the quickest solution, and I didn't make any script to change the application-parameter on deployment yet.
But the way I see it now, option A is a cleaner solution. With a couple of lines of code you can switch the datastore namespace based on the version, which you can get dynamically from the environment variable CURRENT_VERSION_ID, as documented here: http://code.google.com/appengine/docs/python/runtime.html#The_Environment
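For example, something along these lines in appengine_config.py would do it; the 'app-staging' version name and the 'staging' namespace string are just assumptions matching the app.yaml snippet above:

# appengine_config.py
import os

def namespace_manager_default_namespace_for_request():
    # CURRENT_VERSION_ID looks like "app-staging.123456789"; use the version
    # name to pick a datastore namespace so staging data stays separate.
    version = os.environ.get('CURRENT_VERSION_ID', '').split('.')[0]
    return 'staging' if version == 'app-staging' else ''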
We went with option B, and I think it is better in general, as it isolates the projects completely. So, for example, playing around with some of the configuration on the staging server will not affect your production environment, won't compromise its security, or cause any other butterfly effect.
As for the deployment script, you can have any application name you want in your app.yaml. Use some dummy/dev name, and when you deploy, just use the -A parameter:
appcfg.py -A your-app-name update .
That will simplify your deploy script quite a bit; there is no need to string-replace or do anything similar in your app.yaml.
We use option B.
In addition to Zygmantas's suggestions about the benefits of separating dev from prod at the application level, we also use our dev application to test performance.
Normally the dev instance runs without much available in the way of resources, which helps to see where the application "feels" slow. We can then also independently tweak the performance settings to see what makes a difference (e.g. front-end instance class).
Of course sometimes we need to bite the bullet and tweak & watch on live. But it's nice to have the other application to play with.
We still use namespaces and versions; it's just that dev is dirty and experimental.
Here is what the Google documentation says:
A general recommendation is to have one project per application per environment. For example, if you have two applications, "app1" and "app2", each with a development and production environment, you would have four projects: app1-dev, app1-prod, app2-dev, app2-prod. This isolates the environments from each other, so changes to the development project do not accidentally impact production, and gives you better access control, since you can (for example) grant all developers access to development projects but restrict production access to your CI/CD pipeline.
With this in mind, add a dispatch.yaml file at the root directory, and in each directory or repository that represents a single service, add an app.yaml file along with the associated source code, as explained here: Structuring web services in App Engine.
Edit: check out the equivalent link in the Python section if you're using Python.
No need to create a separate project. You can use dispatch.yaml to route your staging URL to another service (staging) in the same project.
Create a custom domain staging.yourdomain.com
Create a separate app-staging.yaml that specifies the staging service:
...
service: staging
...
Create dispatch.yaml that contains something like
...
- url: "*staging.mydomain.com/"
  service: staging
- url: "*mydomain.com/"
  service: default
...
gcloud app deploy app-staging.yaml dispatch.yaml
Use of the application setting in app.yaml has been shut down. Instead, Google recommends:
gcloud app deploy --project [YOUR_PROJECT_ID]
Please see https://cloud.google.com/appengine/docs/standard/python/config/appref
I am developing a Django app on Windows with SQLite and the Django dev server. I have deployed it to my host server, which is running Linux, Apache, FastCGI, and MySQL.
Unfortunately, I get an error returned by the server in production while everything is OK on the dev machine. I've asked my provider for a pre-production setup in order to be able to debug and understand the problem.
Anyway, what are, in your opinion, the most likely errors that can happen when moving a Django app from dev to prod?
Best
Update: I think that a pre-prod environment is the best way to address this kind of problem, but I would like to build a checklist of what must be done before putting an app into production.
Thanks for the very valuable answers that I received until now :)
Update: FYI, I've implemented the pre-prod server and the email notification as suggested by shanyu, and I can see that the error comes from the smart_if templatetag that I am using in this new version. Any tricks with template tags?
Update: I think I've fixed the problem, which I believe was caused by the FileZilla FTP transfer. I was using the "replace if newer" option, which I guess was causing some unexpected results; using the "replace all" option fixed the issue. However, it was an opportunity for me to learn more about deployment. Thanks for your answers.
Problems I typically have include:
Misconfigured production settings, whether in my production localsettings.py, wsgi/cgi, or Apache site files in /etc/sites-available
Database differences. I use South for migrations and have run into some subtle issues when performing my migration on PostgreSQL when it worked smoothly in sqlite.
Static file hosting since I cheat and use the Django server in development
Permissions, both on the file system and within the database
Rare, but possible, network issues preventing me from getting my dependencies, whether on PyPi or some 3rd party site
Ways that I have mitigated these issues:
Use the same database in production and development (in your case, MySQL everywhere)
I've found it is useful to have a "test" environment which mimics production in every way possible (it can be on lower-end hardware, or even the same machine). This way, if there are any issues in this "production-like" environment, I can solve them without taking my production server offline.
Script everything for repeatable deployments. I use fabric, but zc.buildout or Paver would also work. These tools help reduce typos while deploying and reduce the time to deploy my app (see the sketch after this list).
Use version control (mercurial, git, subversion) and a schema migration tool (like South), so if something does go wrong when you deploy to production, you have the possibility of backing out the changes and allowing production to run on the old code with the old database schema.
I haven't set up an "egg proxy" yet, but I am considering it, to avoid issues when downloading dependencies.
I've found pip's ability to freeze dependencies useful, in case a new, incompatible change to a library has occurred since I downloaded it initially
Use a web testing framework like Windmill or Selenium to test my application in my "test" environment, so that I can get a lot of test coverage of my system very quickly.
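As a concrete (and deliberately simplified) example of the "script everything" point, here is a Fabric 1.x style sketch; the host, project path, and reload mechanism are all assumptions about the target server:

from fabric.api import cd, env, run

env.hosts = ['myapp.example.com']   # hypothetical host

def deploy():
    with cd('/srv/myapp'):                        # hypothetical project path
        run('git pull')                           # update the code
        run('pip install -r requirements.txt')    # install pinned dependencies
        run('python manage.py migrate')           # apply schema migrations
        run('touch django.wsgi')                  # reload the app (mechanism depends on your setup)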
Regarding your case, I can think of 2 simple things that may help you:
You can enable Django to send email messages when exceptions occur, giving details about them. Look here for details (a minimal settings sketch follows after these two points).
You'll be better off if you set up a test environment on the prod server (say, test.example.com) so that you can check if things will go smoothly or not before you deploy the app.
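For the first point, a minimal settings.py fragment is enough; with DEBUG off, unhandled exceptions are mailed to the addresses in ADMINS. The addresses and SMTP host below are hypothetical:

# settings.py (fragment)
DEBUG = False                              # error emails are only sent when DEBUG is False
ADMINS = (('Ops', 'ops@example.com'),)     # recipients of exception reports
SERVER_EMAIL = 'django@example.com'        # "From" address used for error mails
EMAIL_HOST = 'smtp.example.com'            # hypothetical SMTP relay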
I believe these were the podcasts I listened to recently (from Pycon 2009):
Look for "Django in the Real World" (PyCon 2009):
http://advocacy.python.org/podcasts/pycon.rss
Parts 1 to 3
Very good introduction to designing your apps for deployment, in particular for reuse and redeployment.
Regs.