I have been searching for this since yesterday and still nothing. From all that research, this is my understanding so far.
You can access your datastore remotely with remote_api_shell.py.
Make sure your path is set correctly in your environment variables.
And as I understand it, the remote datastore they were talking about is the one on appspot.com, not the local one. I don't want to deploy my app right now, so I just want to work locally, at least for now.
I created a model named Userdb in my app. As someone coming from a PHP/MySQL background, I thought GQL would have a console environment for testing queries. But after some googling I found out that you can manipulate the local datastore from the interactive console at
http://localhost:8081/_ah/admin/interactive
From the post Google App Engine GQL query on localhost I got the idea of performing a GqlQuery in the interactive console while on localhost, which goes something like this:
```python
from google.appengine.ext import db

q = db.GqlQuery("SELECT * FROM Userdb WHERE username = 'random_user'")
print q.get().username
```
But what I really wanted to do was call methods like get_by_id() and get_by_key_name() in the interactive console without having to test in my app. Like:
```python
print Userdb.get_by_id(12)
```
How can I get those running? Do I have to import my Python file into the interactive console? I tried that too, but it crashed App Engine. I'm just starting out with App Engine, so forgive me if this is a completely stupid question.
You should import the model class you wrote into your session in the interactive console. For example, if your application has a file named model.py containing your Userdb class, you could write the following in the interactive console:
```python
import model
print model.Userdb.get_by_id(12)
```
Related
I need to execute some housekeeping code, but only in the development or production environment. Unfortunately, all management commands execute it just like runserver does. Is there any clean way to determine the execution environment and run the code selectively?
I saw some solutions like checking for 'runserver' in sys.argv, but that does not work for production, and it does not look very clean.
Does Django provide anything to distinguish all the different scenarios the code is executing in?
Edit
The real problem is that we need to initialise our local cache with some frequently accessed data once the apps are loaded. In general, I want to fetch some specific information from the DB and cache it (currently in memory). The issue is that when the code tries to fetch from the DB, the table may not be created yet; in fact, there may not be any migration files at all. So when I run makemigrations/migrate, this code runs, tries to fetch from the DB, and throws an error saying the table does not exist. But if I can't run makemigrations/migrate, there will be no table; that is the kind of loop I'm trying to avoid. This part of the code runs for all management commands, but I would like to restrict its execution to when the app is actually serving requests (that is, when the cache is needed) and not to any management commands (including user-defined ones).
```python
from django.apps import AppConfig

from my_app.signals import app_created

class MyAppConfig(AppConfig):
    name = 'my_app'

    def ready(self):
        import my_app.signals
        # Code below should be executed only in actual app execution
        # and not in makemigrations/migrate etc. management commands
        app_created.send(sender=MyAppConfig, sent_by="MyApp")
```
Q) Send the app_created signal for actual app execution, but not for executions due to management commands like makemigrations, migrate, etc.
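One way to picture the constraint in the question: the DB fetch must not happen at import/ready() time, only on first real use. A minimal lazy-cache sketch of that deferral (all names here are hypothetical, not from the question or the answers):

```python
class LazyCache:
    """Defer the expensive DB fetch until the cached value is first used.

    Hypothetical helper: management commands that never read the cache
    never touch the DB, so makemigrations/migrate can run before the
    table exists.
    """
    def __init__(self, loader):
        self._loader = loader
        self._value = None
        self._loaded = False

    def get(self):
        if not self._loaded:
            self._value = self._loader()  # the DB hit happens only here
            self._loaded = True
        return self._value

# Stand-in for the real DB fetch, counting how often it runs:
calls = []
def fake_db_fetch():
    calls.append(1)
    return {'setting': 'value'}

cache = LazyCache(fake_db_fetch)
print(len(calls))   # 0 -- at import/ready() time there is no DB access yet
print(cache.get())  # {'setting': 'value'} -- first request triggers the fetch
print(cache.get())  # {'setting': 'value'} -- cached, no second fetch
print(len(calls))   # 1
```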
There are many different ways to do this. But generally, when I create a production (or staging, or development) server, I set an environment variable and dynamically decide which settings file to load based on that variable.
Imagine something like this in a Django settings file:
```python
import os

ENVIRONMENT = os.environ.get('ENVIRONMENT', 'development')
```
Then you can use:
```python
from django.conf import settings

if settings.ENVIRONMENT == 'production':
    # do something only on production
    pass
```
Since I did not get a convincing answer, and I managed to pull off a solution (although not a 100% clean one), I thought I would share the solution I ended up with.
```python
import sys

from django.conf import settings

if (settings.DEBUG and 'runserver' in sys.argv) or not settings.DEBUG:
    """your code to run only in development and production"""
```
The rationale is: if it is not in DEBUG mode, run the code no matter what; if it is in DEBUG mode, run it only if the process was started with runserver in the arguments.
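That rationale can be wrapped in a small helper so it's testable without a running server (the function name and the argv parameter are mine, added for illustration):

```python
import sys

def run_startup_code(debug, argv=None):
    """Mirror the check above (hypothetical helper): always run when DEBUG
    is off (production); when DEBUG is on, run only under runserver."""
    argv = sys.argv if argv is None else argv
    return (debug and 'runserver' in argv) or not debug

print(run_startup_code(False, ['manage.py', 'migrate']))   # True: production always runs
print(run_startup_code(True, ['manage.py', 'runserver']))  # True: dev server run
print(run_startup_code(True, ['manage.py', 'migrate']))    # False: management command in dev
```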
The App Engine Dev Server documentation says the following:
The development server simulates the production App Engine service. One way in which it does this is to prepend a string (dev~) to the APPLICATION_ID environment variable. Google recommends always getting the application ID using get_application_id.
In my application, I use different resources locally than I do on production. As such, I have the following for when I startup the App Engine instance:
```python
import logging

from google.appengine.api.app_identity import app_identity
# ...
# other imports
# ...

DEV_IDENTIFIER = 'dev~'

application_id = app_identity.get_application_id()
is_development = DEV_IDENTIFIER in application_id

logging.info("The application ID is '%s'", application_id)

if is_development:
    logging.warning("Using development configuration")
    # ...
    # set up application for development
    # ...
# ...
```
Nevertheless, when I start my local dev server from the command line with dev_appserver.py app.yaml, I get the following output in my console:
```
INFO: The application ID is 'development-application'
WARNING: Using development configuration
```
Evidently, the dev~ identifier that the documentation claims will be prepended to my application ID is absent. I have also tried using the App Engine Launcher UI to see if that changed anything, but it did not.
Note that 'development-application' is the name of my actual application, but I expected it to be 'dev~development-application'.
Google recommends always getting the application ID using get_application_id
But, that's if you cared about the application ID -- you don't: you care about the partition. Check out the source -- it's published at https://code.google.com/p/googleappengine/source/browse/trunk/python/google/appengine/api/app_identity/app_identity.py .
get_application_id uses os.getenv('APPLICATION_ID'), then passes that to the internal function _ParseFullAppId, which splits it on _PARTITION_SEPARATOR = '~' (thus removing again the dev~ prefix that dev_appserver.py prepended to the environment variable). That prefix is returned as the "partition" to get_application_id, which ignores it, returning only the application ID in the strict sense.
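The splitting described above can be sketched roughly like this (a simplified illustration with a hypothetical helper name; the SDK's internal _ParseFullAppId handles additional cases):

```python
_PARTITION_SEPARATOR = '~'

def split_partition(full_app_id):
    """Split a raw APPLICATION_ID value into (partition, app_id).

    Simplified sketch of the behaviour described above, not SDK code.
    """
    if _PARTITION_SEPARATOR in full_app_id:
        partition, app_id = full_app_id.split(_PARTITION_SEPARATOR, 1)
    else:
        partition, app_id = '', full_app_id
    return partition, app_id

# On the dev server, APPLICATION_ID carries the 'dev' partition:
print(split_partition('dev~development-application'))  # ('dev', 'development-application')
# In production, a partition such as 's' may be present, or none at all:
print(split_partition('s~my-app'))                     # ('s', 'my-app')
print(split_partition('my-app'))                       # ('', 'my-app')
```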
Unfortunately, there is no architected way to get just the partition (which is in fact all you care about).
I would recommend that, to distinguish whether you're running locally or "in production" (i.e, on Google's servers at appspot.com), in order to access different resources in each case, you take inspiration from the way Google's own example does it -- specifically, check out the app.py example at https://cloud.google.com/appengine/docs/python/cloud-sql/#Python_Using_a_local_MySQL_instance_during_development .
In that example, the point is to access a Cloud SQL instance if you're running in production, but a local MySQL instance instead if you're running locally. But that's secondary -- let's focus instead on, how does Google's own example tell which is the case? The relevant code is...:
```python
if (os.getenv('SERVER_SOFTWARE') and
        os.getenv('SERVER_SOFTWARE').startswith('Google App Engine/')):
    ...  # snipped: what to do if you're in production!
else:
    ...  # snipped: what to do if you're in the local server!
```
So, this is the test I'd recommend you use.
Well, as a Python guru, I'm actually slightly embarrassed that my colleagues are using this slightly inferior Python code (with two calls to os.getenv); me, I'd code it as follows:
```python
in_prod = os.getenv('SERVER_SOFTWARE', '').startswith('Google App Engine/')
if in_prod:
    ...  # whatever you want to do if we're in production
else:
    ...  # whatever you want to do if we're in the local server
```
but this is exactly the same semantics, just expressed in more elegant Python (exploiting the optional second argument to os.getenv to supply a default value).
I'll be trying to get this small Python improvement into that example and to also place it in the doc page you were using (there's no reason anybody just needing to find out if their app is being run in prod or locally should ever have looked at the docs about Cloud SQL use -- so, this is a documentation goof on our part, and, I apologize). But, while I'm working to get our docs improved, I hope this SO answer is enough to let you proceed confidently.
That documentation seems wrong; when I run the commands locally, it just spits out the name from app.yaml.
That being said, we use
```python
import os

os.getenv('SERVER_SOFTWARE', '').startswith('Dev')
```
to check if it is the dev appserver.
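Wrapped as a reusable function for clarity (the function name and the environ parameter are mine; on the dev appserver SERVER_SOFTWARE typically looks like 'Development/X.Y', hence the 'Dev' prefix check):

```python
import os

def is_dev_appserver(environ=None):
    """Same prefix check as above: True on the local dev appserver,
    False in production (hypothetical helper, added for illustration)."""
    environ = os.environ if environ is None else environ
    return environ.get('SERVER_SOFTWARE', '').startswith('Dev')

print(is_dev_appserver({'SERVER_SOFTWARE': 'Development/2.0'}))          # True
print(is_dev_appserver({'SERVER_SOFTWARE': 'Google App Engine/1.9.25'})) # False
print(is_dev_appserver({}))                                              # False
```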
I have a little system built for learning purposes with Flask-SQLAlchemy and deployed on Heroku (Python/PostgreSQL):
http://monkey-me.herokuapp.com/
https://github.com/coolcatDev/monkey-me
My app works fine locally and I have unit tested its functionality and use cases through 14 successfully passed tests.
When I deploy, everything seems to go perfectly. It's deployed, I define the environment variable APP_SETTINGS=config.BaseConfig, and I run the db_create.py script to initialize the db. I create some users:
username-userpassword:
Alex-passwordAlex
Jack-passwordJack
Sara-passwordSara
But one thing is missing... I go to the users page from the main navigation bar and get a 500 internal server error saying:
The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.
My app code on app.py has debug mode on:
```python
if __name__ == '__main__':
    app.run(debug=True)
```
But no further error is displayed.
Notice that if I access the system on Heroku while logged out and go to Users, I am redirected to the login page as expected, so that's how far I have gotten with the debugging...
If I set the environment variable LAUNCHY_DEBUG on Heroku to either true or false, everything goes crazy and new problems appear... like I can't delete a user, and the profile images won't load...
If I remove the LAUNCHY_DEBUG var from Heroku, the new problems (images won't load, can't remove a user...) persist alongside the original 500 error on the users page.
Thanks in advance for any suggestions on the debugging.
Use the following to get more feedback written to the logs:

```python
import sys
import logging

app.logger.addHandler(logging.StreamHandler(sys.stdout))
app.logger.setLevel(logging.ERROR)
```
That reveals the problem:
```
ProgrammingError: (ProgrammingError) column "user_bf.id" must appear in the GROUP BY clause or be used in an aggregate function
```
Modify the query so that user_bf.id appears in the GROUP BY:

```python
userList = (
    users.query
    .add_column(db.func.count(friendships.user_id).label("total"))
    .add_column(user_bf.id.label("best_friend"))
    .add_column(user_bf.userName.label("best_friend_name"))
    .outerjoin(friendships, users.id == friendships.user_id)
    .outerjoin(user_bf, users.best_friend)
    .group_by(users.id, user_bf.id)
    .order_by(test)
    .paginate(page, 6, False)
)
```
Regarding the images disappearing: any file written to a filesystem hosted on Heroku will be deleted at the end of the dyno's lifecycle.
I'm trying to sync my db from a view, something like this:
```python
from django import http
from django.core import management

def syncdb(request):
    management.call_command('syncdb')
    return http.HttpResponse('Database synced.')
```
The issue is that it blocks the dev server by asking for user input from the terminal. How can I pass it the --noinput option so it doesn't ask me anything?
I have other ways of marking users as superusers, so there's no need for the user input, but I really need to call syncdb (and flush) programmatically, without logging in to the server via ssh. Any help is appreciated.
```python
management.call_command('syncdb', interactive=False)
```
It works like this (at least with Django 1.1):
```python
from django.core.management.commands import syncdb

syncdb.Command().execute(noinput=True)
```
I have a Pylons app where I would like to move some of the logic to a separate batch process. I've been running it under the main app for testing, but it is going to be doing a lot of work in the database, and I'd like it to be a separate process that will be running in the background constantly. The main pylons app will submit jobs into the database, and the new process will do the work requested in each job.
How can I launch a controller as a standalone script?
I currently have:
```python
from warehouse2.controllers import importServer

importServer.runServer(60)
```
and in the controller file, but not part of the controller class:
```python
def runServer(sleep_secs):
    try:
        imp = ImportserverController()
        while True:
            imp.runImport()
            sleepFor(sleep_secs)
    except Exception, e:
        log.info("Unexpected error: %s" % sys.exc_info()[0])
        log.info(e)
```
But starting ImportServer.py from the command line results in:
```
2008-09-25 12:31:12.687000 Could not locate a bind configured on mapper Mapper|ImportJob|n_imports, SQL expression or this Session
```
If you want to load parts of a Pylons app, such as the models, from outside Pylons, load the Pylons app in the script first:
```python
from paste.deploy import appconfig
from pylons import config

from YOURPROJ.config.environment import load_environment

conf = appconfig('config:development.ini', relative_to='.')
load_environment(conf.global_conf, conf.local_conf)
```
That will load the Pylons app, which sets up most of the state so that you can proceed to use the SQLAlchemy models and Session to work with the database.
Note that if your code uses the Pylons globals such as request/response/etc., this won't work, since they require a request to be in progress in order to exist.
I'm redacting my response and upvoting the other answer by Ben Bangert, as it's the correct one. I answered and have since learned the correct way (mentioned below). If you really want to, check out the history of this answer to see the wrong (but working) solution I originally proposed.