I've got a Django project that I'm using IPython to interact with.
I'm trying to have modules automatically loaded when I start a shell:
python manage.py shell
I've copied .ipython/ipythonrc to the root directory of the project and added the following to the file:
import_some module_name model1 model2
However, when I start the shell, these names are not being loaded.
What am I doing wrong?
I don't know about ipythonrc, but if you only need the models, you could use django-extensions. After you install it, you've got a plethora of new management commands, including shell_plus, which will open an IPython session and autoload all your models:
python manage.py shell_plus
@BryanWheelock Your solution won't work because your shell is the result of the spawn, not a direct interaction with it. What you want to do is this - or at least this is what I do.
Within your workspace (the place where you type python manage.py shell), create an ipythonrc file. In it, put the following:
include ~/.ipython/ipythonrc
execute from django.contrib.auth.models import User
# .
# .
# .
execute import_some module_name model1 model2
For example, I also add the following lines to mine:
# Setup Logging
execute import sys
execute import logging
execute loglevel = logging.DEBUG
execute logging.basicConfig(format="%(levelname)-8s %(asctime)s %(name)s %(message)s", datefmt='%m/%d/%y %H:%M:%S', stream=sys.stdout )
execute log = logging.getLogger("")
execute log.setLevel(loglevel)
execute log.debug("Logging has been initialized from ipythonrc")
execute log.debug("Root Logger has been established - use \"log.LEVEL(MSG)\"")
execute log.setLevel(loglevel)
execute log.debug("log.setlevel(logging.DEBUG)")
execute print ""
This allows you to use logging in your modules and keep it DRY. Hope this helps.
The shell_plus command of django-extensions can import the models automatically, but it doesn't seem to load the IPython profile. I did some hacky work to make this happen:
Use start_ipython to launch the IPython shell instead of embed, and pass some arguments to it.
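As a rough sketch of that idea, a custom management command along these lines should work; the command name shell_profile and the profile name django are placeholders of mine, not part of the original post:

# yourapp/management/commands/shell_profile.py  (hypothetical path and name)
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Open IPython via start_ipython so profile startup files are honoured"

    def handle(self, *args, **options):
        from IPython import start_ipython
        # Unlike embed(), start_ipython() runs a full IPython application,
        # so it loads the requested profile (and its startup/ scripts).
        start_ipython(argv=["--profile=django"])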
I have also written a blog post; you can find the details here.
I'm new to logging in Python and have tried to build a basic logger that writes to a file. The problem I have is that the file is not created, yet no errors are thrown. Any ideas?
Using Spyder IDE in Anaconda (in case that is applicable)
Code:
import pandas as pd
import logging
format = "%(asctime)s %(message)s"
logging.basicConfig(format=format, level=logging.DEBUG, filename='H://logfile.log')
now = pd.datetime.now()
logging.info("Time Created")
I'm using Python 2.7.11 on OS X. I reproduced your steps, as also explained here:
import logging
format = "%(asctime)s %(message)s"
logging.basicConfig(format=format, level=logging.DEBUG, filename='logfile.log')
logging.info("Hello World")
A file named logfile.log is created in the directory from which I executed Python.
You can check the directory in which Python is running by typing:
pwd
My guess is that there is a problem with the permissions of the folder in which you want to create the log file, or with the way you are writing the absolute path to that folder.
I would advise you to first create a log file in the local directory (the output of the pwd command) and see if that works.
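To rule out a path problem, a quick check (my own sketch, not part of the original answer) is to print the working directory from Python and point basicConfig at an absolute path you know is writable:

import os
import logging

print(os.getcwd())  # where a relative filename would end up

# The path below is just an example; use any folder you can write to.
logfile = os.path.join(os.path.expanduser("~"), "logfile.log")
logging.basicConfig(format="%(asctime)s %(message)s",
                    level=logging.DEBUG,
                    filename=logfile)
logging.info("Hello World")

Also worth knowing: basicConfig() does nothing if the root logger already has handlers, which IDEs such as Spyder sometimes install, and that would also explain a log file that silently never appears.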
It appears the problem is with Spyder IDE v2.3.8 (maybe mine is outdated). The log file is created from the command prompt, but in Spyder it isn't.
I'm trying to automate rerunning the tests after a change while developing. After searching around a little, sniffer seemed fine. But if I run it, my tests fail with this error:
ERROR: Failure: ImportError (Settings cannot be imported, because environment variable DJANGO_SETTINGS_MODULE is undefined.)
If I run them manually, they pass. Do you have a clue why sniffer won't work?
Something like the following as your scent.py should work:
from subprocess import call
from sniffer.api import runnable

@runnable
def execute_tests(*args):
    fn = ['python', 'manage.py', 'test']
    fn += args[1:]
    return call(fn) == 0
Which you can then call as sniffer -x appName.
You can get sniffer to read your settings by creating a scent.py file in the same directory as manage.py.
Here's what mine looks like:
import os
os.environ["DJANGO_SETTINGS_MODULE"] = 'myapp.settings'
Which will get you as far as sniffer reading your settings, but then you'll run into other problems — basically, sniffer just runs your tests using nose, which isn't the same thing that the manage.py test does when django-nose is installed.
Does anybody know what else needs to be in scent.py for sniffer to work with Django?
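For what it's worth, a scent.py that combines both ideas (export the settings module, then delegate to manage.py test) might look like the sketch below; myapp.settings is a placeholder for your real settings module:

import os
from subprocess import call

from sniffer.api import runnable

# Make the settings visible before any test runner is invoked.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myapp.settings")


@runnable
def execute_tests(*args):
    # Delegating to manage.py uses the same runner as on the command line,
    # including django-nose if that is what the project has configured.
    cmd = ['python', 'manage.py', 'test']
    cmd += args[1:]
    return call(cmd) == 0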
Trying to guess where the problem may reside: it seems you need to explicitly set the location of your settings.py file.
If you're running your tests from a subprocess call, you can use the following command:
call(["django-admin.py", "test", "--settings=your_project.settings"])
Otherwise, you can set the environment variable with the following commands:
import os
os.environ['DJANGO_SETTINGS_MODULE'] = 'your_project.settings'
(replace your_project with the name of your Django project)
If you're running a command like "./manage.py test", you can add the lines above at the beginning of manage.py (there are other ways, but I'd need to see the code to provide a more precise solution).
I'm trying to run a custom Django command as a scheduled task on Heroku. I am able to execute the custom command locally via python manage.py send_daily_email. (Note: I do NOT have any problems with the custom management command itself.)
However, Heroku is giving me the following exception when trying to "Run" the task through Heroku Scheduler addon:
Traceback (most recent call last):
  File "bin/send_daily_visit_email.py", line 2, in <module>
    from django.conf import settings
ImportError: No module named django.conf
I placed a python script in /bin/send_daily_email.py, and it is the following:
#! /usr/bin/python
from django.conf import settings
settings.configure()
from django.core import management
management.call_command('send_daily_email') #delegates off to custom command
Within Heroku, however, I am able to run heroku run bin/python, launch the Python shell, and successfully import settings from django.conf.
I am pretty sure it has something to do with my PYTHONPATH or the visibility of Django's SETTINGS_MODULE, but I'm unsure how to resolve the issue. Could someone point me in the right direction? Is there an easier way to accomplish what I'm trying to do here?
Thank you so much for your tips and advice in advance! New to Heroku! :)
EDIT:
Per Nix's comment, I made some adjustments and discovered that by specifying my exact Python path, I did get past the Django setup.
I now receive:
File "/app/lib/python2.7/site-packages/django/core/management/__init__.py", line 155, in call_command
raise CommandError("Unknown command: %r" % name)
django.core.management.base.CommandError: Unknown command: 'send_daily_email'
Although, I can see 'send_daily_email' when I run heroku run bin/python app/manage.py.
I'll keep an update if I come across the answer.
You are probably using a different interpreter.
Check to make sure the shell's python is the same as the one you reference in your script, /usr/bin/python. It could be that there is a different one on your path, which would explain why it works when you run python manage.py but not in your shell script, which explicitly references /usr/bin/python.
Typing which python will tell you what interpreter is being found on your path.
In addition, this can also be resolved by adding your home directory to your Python path. A quick and unobtrusive way to accomplish that is to add it to the PYTHONPATH environment variable (which is generally /app on the Heroku Cedar stack).
Add it via the heroku config command:
$ heroku config:add PYTHONPATH=/app
That should do it! For more details: http://tomatohater.com/2012/01/17/custom-django-management-commands-on-heroku/
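Putting the pieces together, a script that avoids both pitfalls (the interpreter path and the empty settings.configure()) could look roughly like this; myproject.settings is a placeholder, and the django.setup() call only applies to Django 1.7 or later:

#!/usr/bin/env python
import os

# Point at the real settings module so INSTALLED_APPS (and therefore the
# custom command) can be discovered; a bare settings.configure() registers no apps.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

import django
if hasattr(django, "setup"):
    django.setup()  # needed on Django 1.7+

from django.core import management
management.call_command("send_daily_email")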
When I start the Django shell by typing python manage.py shell,
the IPython shell is started. Is it possible to make Django start IPython in qtconsole mode? (i.e. make it run ipython qtconsole)
Arek
edit:
So I'm trying what Andrew Wilkinson suggested in his answer: extending my Django app with a command based on the original Django shell command. As far as I understand, the code which starts IPython in the original version is this:
from django.core.management.base import NoArgsCommand

class Command(NoArgsCommand):
    requires_model_validation = False

    def handle_noargs(self, **options):
        from IPython.frontend.terminal.embed import TerminalInteractiveShell
        shell = TerminalInteractiveShell()
        shell.mainloop()
Any advice on how to change this code to start IPython in qtconsole mode?
second edit:
What I've found works so far: start 'ipython qtconsole' from the location where my project's settings.py is (or set sys.path if starting from a different location), and then execute this:
import settings
import django.core.management
django.core.management.setup_environ(settings)
and now I can import my models, list all instances, etc.
The docs here say:
If you'd rather not use manage.py, no problem. Just set the
DJANGO_SETTINGS_MODULE environment variable to mysite.settings and run
python from the same directory manage.py is in (or ensure that
directory is on the Python path, so that import mysite works).
So it should be enough to set that environment variable and then run ipython qtconsole. You could make a simple script to do this for you automatically.
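For instance, a small wrapper along these lines should do it (mysite.settings is just a placeholder for your settings module):

# run_qtconsole.py - minimal sketch
import os
import subprocess

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")

# The spawned console inherits the environment variable,
# so models import cleanly inside it.
subprocess.call(["ipython", "qtconsole"])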
I created a shell script with the following:
/path/to/ipython qtconsole --pylab inline -c "run /path/to/my/site/shell.py"
You only need the --pylab inline part if you want the cool inline matplotlib graphs.
And I created a python script shell.py in /path/to/my/site with:
import os
working_dir = os.path.dirname(__file__)
os.chdir(working_dir)
import settings
import django.core.management
django.core.management.setup_environ(settings)
Running my shell script gets me an ipython qtconsole with the benefits of the django shell.
You can check the code that runs the shell here. You'll see that there is nowhere to configure which shell is run.
What you could do is copy this file, rename it as shell_qt.py and place it in your own project's management/commands directory. Change it to run the QT console and then you can run manage.py shell_qt.
Since Django version 1.4, usage of django.core.management.setup_environ() is deprecated. A solution that works for both the IPython notebook and the QTconsole is this (just execute this from within your Django project directory):
In [1]: from django.conf import settings
In [2]: from mydjangoproject.settings import DATABASES as MYDATABASES
In [3]: settings.configure(DATABASES=MYDATABASES)
Update: If you work with Django 1.7, you additionally need to execute the following:
In [4]: import django; django.setup()
Using django.conf.settings.configure(), you specify the database settings of your project and then you can access all your models in the usual way.
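For example, continuing the session above (the app and model names here are just placeholders):

In [5]: from myapp.models import MyModel
In [6]: MyModel.objects.count()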
If you want to automate these imports, you can e.g. create an IPython profile by running:
ipython profile create mydjangoproject
Each profile contains a directory called startup. You can put arbitrary Python scripts in there and they will be executed just after IPython has started. In this example, you find it under
~/.ipython/profile_<mydjangoproject>/startup/
Just put a script in there which contains the code shown above, probably enclosed by a try..except clause to handle ImportErrors (a sketch of such a script follows after the commands below). You can then start IPython with the given profile like this:
ipython qtconsole --profile=mydjangoproject
or
ipython notebook --profile=mydjangoproject
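The startup script itself could look roughly like this, wrapping the code shown earlier; the project name is a placeholder, and the django.setup() call only applies to Django 1.7+:

# ~/.ipython/profile_mydjangoproject/startup/00-django.py
try:
    from django.conf import settings
    from mydjangoproject.settings import DATABASES as MYDATABASES
    settings.configure(DATABASES=MYDATABASES)

    import django
    if hasattr(django, "setup"):
        django.setup()  # Django 1.7+
except ImportError as exc:
    print("Skipping Django setup: %s" % exc)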
I also wanted to open the Django shell in qtconsole. Looking inside manage.py solved the problem for me:
Launch IPython qtconsole, cd to the project base directory and run:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
Don't forget to change 'myproject' to your project name.
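On Django 1.7 and later you also need to initialise the app registry before importing models; adding these two lines after the snippet above is enough:

import django
django.setup()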
You can create a command that extends the base shell command and imports the IPythonQtConsoleApp like so:
create file qtshell.py in yourapp/management/commands with:
from django.core.management.commands import shell

class Command(shell.Command):
    def _ipython(self):
        """Start IPython Qt console"""
        from IPython.qt.console.qtconsoleapp import IPythonQtConsoleApp
        app = IPythonQtConsoleApp.instance()
        app.initialize(argv=[])
        app.start()
then just use python manage.py qtshell
A somewhat undocumented feature of shell_plus is the ability to run it in "kernel only mode". This allows us to connect to it from another shell, such as one running qtconsole.
For example, in one shell do:
django-admin shell_plus --kernel
# or == ./manage.py shell_plus --kernel
This will print out something like:
# Shell Plus imports ...
...
To connect another client to this kernel, use:
--existing kernel-23600.json
Then, in another shell run:
ipython qtconsole --existing kernel-23600.json
This should now open a QtConsole. One other tip: instead of running another shell, you can also hit Ctrl+Z and run bg to tell the current process to run in the background.
You can install django extensions and then run
python manage.py shell_plus --ipython
I am working with Django and use the Django shell all the time. The annoying part is that while the Django server reloads on code changes, the shell does not, so every time I make a change to a method I am testing, I need to quit the shell and restart it, re-import all the modules I need, reinitialize all the variables I need, etc. While IPython history saves a lot of typing on this, it is still a pain. Is there a way to make the Django shell auto-reload, the same way the Django development server does?
I know about reload(), but I import a lot of models and generally use from app.models import * syntax, so reload() is not much help.
I'd suggest using the IPython autoreload extension.
./manage.py shell
In [1]: %load_ext autoreload
In [2]: %autoreload 2
And from now on, all imported modules will be refreshed before evaluation.
In [3]: from x import print_something
In [4]: print_something()
Out[4]: 'Something'
# Do changes in print_something method in x.py file.
In [5]: print_something()
Out[5]: 'Something else'
It also works if something was imported before the %load_ext autoreload command.
./manage.py shell
In [1]: from x import print_something
In [2]: print_something()
Out[2]: 'Something'
# Do changes in print_something method in x.py file.
In [3]: %load_ext autoreload
In [4]: %autoreload 2
In [5]: print_something()
Out[5]: 'Something else'
It is also possible to prevent some imports from refreshing with the %aimport command, and there are 3 autoreload strategies:
%autoreload
    Reload all modules (except those excluded by %aimport) automatically now.
%autoreload 0
    Disable automatic reloading.
%autoreload 1
    Reload all modules imported with %aimport every time before executing the Python code typed.
%autoreload 2
    Reload all modules (except those excluded by %aimport) every time before executing the Python code typed.
%aimport
    List modules which are to be automatically imported or not to be imported.
%aimport foo
    Import module 'foo' and mark it to be autoreloaded for %autoreload 1.
%aimport -foo
    Mark module 'foo' to not be autoreloaded.
This generally works well for my use, but there are some caveats:
Replacing code objects does not always succeed: changing a @property in a class to an ordinary method, or a method to a member variable, can cause problems (but in old objects only).
Functions that are removed (e.g. via monkey-patching) from a module before it is reloaded are not upgraded.
C extension modules cannot be reloaded, and so cannot be autoreloaded.
My solution is to write the code, save it to a file, and then use:
python manage.py shell < test.py
So I can make the change, save, and run that command again until I fix whatever I'm trying to fix.
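The file is just ordinary shell input; as a trivial sketch of what such a test.py might contain (the app and model names are invented):

# test.py - fed to `python manage.py shell < test.py`
from myapp.models import SomeModel

obj = SomeModel.objects.first()
print(obj)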
I recommend using the django-extensions project, as stated above by dongweiming. But instead of just the 'shell_plus' management command, use:
manage.py shell_plus --notebook
This will open an IPython notebook in your web browser. Write your code there in a cell, your imports etc., and run it.
When you change your modules, just click the notebook menu item 'Kernel->Restart'
There you go, your code is now using your modified modules.
Look at the manage.py shell_plus command provided by the django-extensions project. It will load all your model files on shell startup, and it autoreloads your modifications, so you don't need to exit; you can call the changed code directly.
It seems that the general consensus on this topic is that Python's reload() sucks and there is no good way to do this.
Use shell_plus with an ipython config. This will enable autoreload before shell_plus automatically imports anything.
pip install django-extensions
pip install ipython
ipython profile create
Edit your ipython profile (~/.ipython/profile_default/ipython_config.py):
c.InteractiveShellApp.exec_lines = ['%autoreload 2']
c.InteractiveShellApp.extensions = ['autoreload']
Open a shell - note that you do not need to include --ipython:
python manage.py shell_plus
Now anything defined in SHELL_PLUS_PRE_IMPORTS or SHELL_PLUS_POST_IMPORTS (docs) will autoreload!
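For reference, such pre-imports are declared in settings.py; a sketch (module and names are placeholders of mine):

# settings.py
SHELL_PLUS_PRE_IMPORTS = [
    ("myapp.utils", ("make_fixture", "clear_cache")),  # import selected names
    "myapp.constants",                                 # import the module itself
]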
Note that if your shell is sitting at a debugger (e.g. pdb.set_trace()) when you save a file, it can interfere with the reload.
My solution for this inconvenience follows. I am using IPython.
$ ./manage.py shell
> import myapp.models as mdls # 'mdls' or whatever you want, but short...
> mdls.SomeModel.objects.get(pk=100)
> # At this point save some changes in the model
> reload(mdls)
> mdls.SomeModel.objects.get(pk=100)
For Python 3.x, 'reload' must be imported using:
from importlib import reload
Hope it helps. Of course it is for debug purposes.
Cheers.
reload() doesn't work in the Django shell without some tricks. You can check this thread, and my answer specifically:
How do you reload a Django model module using the interactive interpreter via "manage.py shell"?
Using a combination of two answers to this, I came up with a simple one-line approach.
You can run the Django shell with -c, which will run the commands you pass; however, it quits immediately after the code is run.
The trick is to set up what you need, run code.interact(local=locals()), and thus restart the shell from within the code you pass. Like this:
python manage.py shell -c 'import uuid;test="mytestvar";import code;code.interact(local=locals())'
For me I just wanted the rich library's inspect method. Only a few lines:
python manage.py shell -c 'import code;from rich import pretty;pretty.install();from rich import inspect;code.interact(local=locals())'
Finally the cherry on top is an alias
alias djshell='python manage.py shell -c "import code;from rich import pretty;pretty.install();from rich import inspect;code.interact(local=locals())"'
Now if I start up my shell and, say, want to inspect the form class, I get this beautiful output:
Instead of running commands from the Django shell, you can set up a management command like so and rerun that each time.
Not exactly what you want, but I now tend to build myself management commands for testing and fiddling with things.
In the command you can set up a bunch of locals the way you want and afterwards drop into an interactive shell.
import code
from django.core.management.base import BaseCommand

class Command(BaseCommand):
    def handle(self, *args, **kwargs):
        foo = 'bar'
        code.interact(local=locals())
No reload, but an easy and less annoying way to interactively test django functionality.
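Assuming the file is saved as yourapp/management/commands/sandbox.py (a name I've made up for illustration), you'd run it with:

python manage.py sandbox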
import test   # test only has x defined
test.x        # prints 3; now add y = 4 in test.py
test.y        # error, test does not have attribute y
Solution: use reload from importlib as follows:
from importlib import reload
import test   # test only has x defined
test.x        # prints 3; now add y = 4 in test.py
test.y        # error
reload(test)
test.y        # prints 4