I'm trying to sync my db from a view, something like this:
```python
from django import http
from django.core import management

def syncdb(request):
    management.call_command('syncdb')
    return http.HttpResponse('Database synced.')
```
The issue is that it blocks the dev server by asking for user input in the terminal. How can I pass it the '--noinput' option so it doesn't ask me anything?
I have other ways of marking users as superusers, so there's no need for the user input, but I really need to call syncdb (and flush) programmatically, without logging in to the server via ssh. Any help is appreciated.
```python
management.call_command('syncdb', interactive=False)
```
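Applied to the view from the question (and to flush, which it also mentions), a minimal sketch:
```python
from django import http
from django.core import management

def syncdb(request):
    # interactive=False is the keyword equivalent of --noinput,
    # so the command never prompts for input on the terminal
    management.call_command('syncdb', interactive=False)
    return http.HttpResponse('Database synced.')

def flush(request):
    # the same keyword works for flush
    management.call_command('flush', interactive=False)
    return http.HttpResponse('Database flushed.')
```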
Works like this (at least with Django 1.1):
```python
from django.core.management.commands import syncdb
syncdb.Command().execute(noinput=True)
```
I have a little Django app that uses PyMongo and MongoDB.
If I write (or update) something in the database, I have to restart the server for it to show up on the web page. I'm running with 'python manage.py runserver'.
I switched to the Django dummy cache, but that didn't help.
Every database action is within a 'with MongoClient' statement.
I figured it out. I was reading the data into the django_tables2 class variables, so it was never refreshed...
Bangs forehead on desk...
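For anyone hitting the same thing: class-level attributes are evaluated once, at import time. A sketch of the difference, with made-up collection and template names:
```python
import django_tables2 as tables
from django.shortcuts import render
from pymongo import MongoClient

class UserTable(tables.Table):
    name = tables.Column()
    # BAD: something like
    #   data = list(MongoClient().mydb.users.find())
    # here would run once at import time, so later writes never show up

def user_list(request):
    # GOOD: fetch fresh rows on every request
    with MongoClient() as client:
        rows = list(client.mydb.users.find())
    return render(request, 'user_list.html', {'table': UserTable(rows)})
```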
I need to execute some housekeeping code, but only in the development or production environment. Unfortunately, all management commands execute it just like runserver does. Is there any clean way to classify the execution environment and run the code selectively?
I saw some solutions like checking for 'runserver' in sys.argv, but that does not work for production, and it does not look very clean.
Does Django provide anything to classify these different scenarios the code is executing under?
Edit
The real problem is that we need to initialise our local cache with some frequently accessed data once the apps are loaded. In general, I want to query the DB for some specific information and cache it (currently in memory). The issue is that when it tries to query the DB, the table may not exist yet; in fact, there may be no migration files at all. So when I run makemigrations/migrate, this code runs, tries to read from the DB, and throws an error saying the table does not exist. But if I can't run makemigrations/migrate, there will be no table, which is the loop I'm trying to avoid. This part of the code runs for all management commands, but I would like to restrict its execution to when the app is actually serving requests (which is when the cache is needed) and not to any management commands (including user-defined ones).
```python
from django.apps import AppConfig

from my_app.signals import app_created

class MyAppConfig(AppConfig):
    name = 'my_app'

    def ready(self):
        import my_app.signals
        # Code below should be executed only in actual app execution
        # and not in makemigrations/migrate etc. management commands
        app_created.send(sender=MyAppConfig, sent_by="MyApp")
```
Q) How do I send the app_created signal only for actual app execution, and not for management commands like makemigrations, migrate, etc.?
There are many different ways to do this, but generally, when I create a production (or staging, or development) server, I set an environment variable and dynamically decide which settings file to load based on it.
Imagine something like this in a Django settings file:
```python
import os

ENVIRONMENT = os.environ.get('ENVIRONMENT', 'development')
```
Then you can use:
```python
from django.conf import settings

if settings.ENVIRONMENT == 'production':
    ...  # do something only on production
```
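For the "decide which settings file to load" part, one common pattern is to pick the settings module from the same variable, e.g. near the top of manage.py or wsgi.py (the module names below are hypothetical; adjust them to your layout):
```python
import os

# 'myproject.settings.development' and 'myproject.settings.production'
# are hypothetical module names; adjust them to your project layout
env = os.environ.get('ENVIRONMENT', 'development')
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings.%s' % env)
```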
Since I did not get a convincing answer and I managed to pull off a solution, although not 100% clean, I thought I would share the solution I ended up with.
```python
import sys

from django.conf import settings

if (settings.DEBUG and 'runserver' in sys.argv) or not settings.DEBUG:
    """your code to run only in development and production"""
```
The rationale is that you always run the code when not in DEBUG mode; when in DEBUG mode, you additionally check whether the process was started with runserver in its arguments.
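Applied to the AppConfig from the question, the check might look like this (a sketch; under a production WSGI server sys.argv won't contain 'runserver', which is why the not settings.DEBUG branch fires unconditionally):
```python
import sys

from django.apps import AppConfig
from django.conf import settings

class MyAppConfig(AppConfig):
    name = 'my_app'

    def ready(self):
        import my_app.signals
        # Warm the cache only when actually serving requests, not for
        # makemigrations/migrate or other management commands.
        if (settings.DEBUG and 'runserver' in sys.argv) or not settings.DEBUG:
            from my_app.signals import app_created
            app_created.send(sender=MyAppConfig, sent_by="MyApp")
```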
I am new to Flask but moderately proficient in Python. I have a Flask app that uses Flask-Security for user authentication. I would like to add some additional functionality to the user login process. Specifically, I need to save the user's auth_token (which I have set up to be a one-time-use token) to the db when they log in, and remove it when they log out. The issue is that Flask-Security does not (to my knowledge) expose the machinery of logging in directly to the developer. As far as I can tell from the code, it imports Flask-Login, which uses a login_user function.
I started out by trying to override this function by importing flask.ext.login (which I would not normally need to do) and redefining the function as follows:
```python
import flask.ext.login as a

def new_login_user():
    # ...copy of existing function goes here...
    # ...plus new stuff with current_user.get_auth_token()...
    pass

a.login_user = new_login_user
```
However, I got hit with all sorts of namespace issues, and it seems like a really ugly way to do it.
I was thinking there might be a way to do it with a decorator, but I am new to Flask and have not used decorators much regardless.
Any ideas on what the best way to approach this might be? For context: I want the auth_token in the db because I need to hand off website authentication to another process that also accesses the db. The other process is an API server using websockets, and I don't want to combine the processes.
I think using a signal decorator is the easiest option: in the following example, on_user_logged_in should be called whenever a user logs into your app. More info in the docs.
```python
from flask.ext.login import user_logged_in

@user_logged_in.connect_via(app)
def on_user_logged_in(sender, user):
    log_auth_token(user.get_auth_token())  # or whatever
```
Best to keep it clean: write your own new_login_user function and use it wherever you would otherwise use flask.ext.login.login_user. You could put new_login_user into its own file and import it from there.
You could even call it login_user, and never import flask.ext.login.login_user directly.
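A sketch of that approach; save_auth_token is a hypothetical helper that writes the one-time token to the shared database:
```python
# myapp/auth.py (hypothetical module name)
from flask.ext.login import login_user as _login_user

def login_user(user, remember=False):
    # Drop-in replacement for flask.ext.login.login_user that also
    # records the user's one-time auth token for the other process.
    result = _login_user(user, remember=remember)
    if result:
        save_auth_token(user, user.get_auth_token())  # hypothetical helper
    return result
```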
I have been searching for this since yesterday and still nothing. From all that research, this is my understanding so far:
You can access your datastore remotely with remote_api_shell.py
Make sure your path is set correctly in your environment variables.
As I understand it, the remote datastore being referred to is the datastore on appspot.com, not the local one. I don't want to deploy my app right now, so I want to work locally, at least for now.
I created a model named Userdb in my app. Coming from a PHP/MySQL background, I thought GQL would have a console environment for testing queries. After some googling I found out that you can manipulate the local datastore from the interactive console at:
http://localhost:8081/_ah/admin/interactive
From the post Google App Engine GQL query on localhost I got the idea of running a GqlQuery in the interactive console on localhost, which goes something like this:
```python
from google.appengine.ext import db

q = db.GqlQuery("SELECT * FROM Userdb WHERE username = 'random_user'")
print q.get().username
```
But what I really wanted to do was call methods like get_by_id() and get_by_key_name() in the interactive console without having to test them in my app. Like:
```python
print Userdb.get_by_id(12)
```
How can I get those running? Do I have to import my Python file into the interactive console? I tried that too, but it crashed App Engine. I'm just starting out with App Engine, so forgive me if this is a completely stupid question.
You should import the model class that you wrote into your session in the interactive console. For example, if you have a file named model.py in your application which contains your Userdb class, you could write the following in the interactive console:
```python
import model

print model.Userdb.get_by_id(12)
```
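The other class methods mentioned in the question work the same way, e.g. (the key name here is made up):
```python
import model

# get_by_key_name is another db.Model class method, as is get_by_id
print model.Userdb.get_by_key_name('random_user')
```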
I am having problems with logging in my Django project.
In my view.py in an "application" I am doing the following:
```python
import logging
logging.basicConfig(filename="django.log", level=logging.DEBUG)

def request(request):
    logging.debug("test debugging")
```
with a django.log file in the same directory as the view.py file.
Now, when making a request from the browser, I keep getting a 500 Internal Server Error (as shown in Firebug). I can get logging to work just fine when I run it through the interactive Python shell or execute it from a .py file like the following:
```python
import logging
logging.basicConfig(filename="django.log", level=logging.DEBUG)

def testLogging():
    logging.debug("test debugging")

if __name__ == "__main__":
    testLogging()
```
and then run python nameOfFile.py.
What am I doing wrong? I am running Django 1.1.1 and Python 2.6.5. Maybe I should upgrade Django?
Could it be related to permissions? I can imagine that in a web-server environment writing to a file may be restricted; note also that a relative filename like "django.log" is resolved against the server process's working directory, not the directory of view.py. In truth, I don't know, but please check the official Django logging documentation: it uses the standard logging module, but you may have to configure it differently.
Is that your entire view? It would have been helpful to post the actual traceback, but note that a view must return an HttpResponse, even if it's an empty one. So if that is your entire view, the 500 error is probably happening because you're not returning anything.
Add return HttpResponse() to the end of that view, and if that still doesn't work, please post the traceback itself.
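Putting both answers together, a minimal fixed version of the view might look like this (the absolute log path is my choice, to avoid depending on the server's working directory):
```python
import logging

from django.http import HttpResponse

# an absolute path avoids depending on the server process's
# working directory and makes permission problems easier to spot
logging.basicConfig(filename="/tmp/django.log", level=logging.DEBUG)

def request(request):
    logging.debug("test debugging")
    # a view must return an HttpResponse; returning None causes a 500
    return HttpResponse()
```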