I am working with Django and use the Django shell all the time. The annoying part is that while the Django server reloads on code changes, the shell does not, so every time I make a change to a method I am testing, I need to quit the shell and restart it, re-import all the modules I need, reinitialize all the variables I need, etc. While IPython history saves a lot of typing, this is still a pain. Is there a way to make the Django shell auto-reload, the same way the Django development server does?
I know about reload(), but I import a lot of models and generally use from app.models import * syntax, so reload() is not much help.
I'd suggest using the IPython autoreload extension.
./manage.py shell
In [1]: %load_ext autoreload
In [2]: %autoreload 2
From now on, all imported modules will be refreshed before each evaluation.
In [3]: from x import print_something
In [4]: print_something()
Out[4]: 'Something'
# Make a change to print_something in x.py.
In [5]: print_something()
Out[5]: 'Something else'
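For reference, the x module used in this demo could be as simple as the following sketch (x.py and print_something are just the placeholder names from the example above):
# x.py
def print_something():
    return 'Something'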
This also works if something was imported before the %load_ext autoreload command.
./manage.py shell
In [1]: from x import print_something
In [2]: print_something()
Out[2]: 'Something'
# Make a change to print_something in x.py.
In [3]: %load_ext autoreload
In [4]: %autoreload 2
In [5]: print_something()
Out[5]: 'Something else'
It is also possible to prevent some imports from refreshing with the %aimport command. There are three autoreload strategies:
%autoreload
Reload all modules (except those excluded by %aimport) automatically now.
%autoreload 0
Disable automatic reloading.
%autoreload 1
Reload all modules imported with %aimport every time before executing the Python code typed.
%autoreload 2
Reload all modules (except those excluded by %aimport) every time before executing the Python code typed.
%aimport
List modules which are to be automatically imported or not to be imported.
%aimport foo
Import module 'foo' and mark it to be autoreloaded for %autoreload 1.
%aimport -foo
Mark module 'foo' to not be autoreloaded.
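For example, to reload only a single, explicitly marked module you could combine strategy 1 with %aimport (myapp.utils and helper are hypothetical names):
In [1]: %load_ext autoreload
In [2]: %autoreload 1
In [3]: %aimport myapp.utils
In [4]: from myapp.utils import helper
In [5]: helper()  # re-run after editing myapp/utils.py; the new code is picked up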
This generally works well for my use, but there are some caveats:
Replacing code objects does not always succeed: changing a @property in a class to an ordinary method, or a method to a member variable, can cause problems (but only in old objects).
Functions that are removed (e.g. via monkey-patching) from a module before it is reloaded are not upgraded.
C extension modules cannot be reloaded, and so cannot be autoreloaded.
My solution is to write the code, save it to a file, and then run:
python manage.py shell < test.py
That way I can make a change, save, and run that command again until I fix whatever I'm trying to fix.
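For illustration, test.py might look something like this (the app, model, and method names are hypothetical placeholders):
# test.py -- executed inside the Django shell environment
from myapp.models import SomeModel

obj = SomeModel.objects.first()
print(obj.some_method())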
I recommend using the django-extensions project, as stated above by dongweiming. But instead of just the 'shell_plus' management command, use:
manage.py shell_plus --notebook
This will open an IPython notebook in your web browser. Write your code there in a cell, along with your imports, and run it.
When you change your modules, just click the notebook menu item 'Kernel -> Restart'.
There you go: your code is now using your modified modules.
Look at the manage.py shell_plus command provided by the django-extensions project. It loads all your model files on shell startup, and it autoreloads your modifications, so you don't need to exit; you can call your changed code directly.
It seems that the general consensus on this topic is that Python's reload() sucks and there is no good way to do this.
Use shell_plus with an ipython config. This will enable autoreload before shell_plus automatically imports anything.
pip install django-extensions
pip install ipython
ipython profile create
Edit your ipython profile (~/.ipython/profile_default/ipython_config.py):
c.InteractiveShellApp.exec_lines = ['%autoreload 2']
c.InteractiveShellApp.extensions = ['autoreload']
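In context, the relevant part of that file would look roughly like this (a minimal sketch):
c = get_config()  # provided by IPython's configuration machinery

# Load the autoreload extension and turn on mode 2 in every session.
c.InteractiveShellApp.extensions = ['autoreload']
c.InteractiveShellApp.exec_lines = ['%autoreload 2']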
Open a shell - note that you do not need to include --ipython:
python manage.py shell_plus
Now anything defined in SHELL_PLUS_PRE_IMPORTS or SHELL_PLUS_POST_IMPORTS (docs) will autoreload!
Note that if your shell is stopped at a debugger (e.g. pdb.set_trace()) when you save a file, it can interfere with the reload.
My solution for this inconvenience follows; I am using IPython.
$ ./manage.py shell
> import myapp.models as mdls # 'mdls' or whatever you want, but short...
> mdls.SomeModel.objects.get(pk=100)
> # At this point save some changes in the model
> reload(mdls)
> mdls.SomeModel.objects.get(pk=100)
For Python 3.x, 'reload' must be imported using:
from importlib import reload
Hope it helps. Of course it is for debug purposes.
Cheers.
reload() doesn't work in the Django shell without some tricks. You can check this thread, and my answer specifically:
How do you reload a Django model module using the interactive interpreter via "manage.py shell"?
Using a combination of 2 answers for this I came up with a simple one line approach.
You can run the Django shell with -c, which will run the commands you pass; however, it quits immediately after the code is run.
The trick is to set up what you need, run code.interact(local=locals()), and thereby re-start an interactive shell from within the code you pass. Like this:
python manage.py shell -c 'import uuid;test="mytestvar";import code;code.interact(local=locals())'
In my case I just wanted the rich library's inspect method. It only takes a few lines:
python manage.py shell -c 'import code;from rich import pretty;pretty.install();from rich import inspect;code.interact(local=locals())'
Finally, the cherry on top is an alias:
alias djshell='python manage.py shell -c "import code;from rich import pretty;pretty.install();from rich import inspect;code.interact(local=locals())"'
Now if I start up my shell and, say, want to inspect a form class, I get beautifully formatted output.
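For example, inside that shell you can do something like this (forms.Form is just an arbitrary class to inspect):
>>> from django import forms
>>> inspect(forms.Form)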
Instead of running commands from the Django shell, you can set up a management command like so and rerun that each time.
Not exactly what you want, but I now tend to build myself management commands for testing and fiddling with things.
In the command you can set up a bunch of locals the way you want and afterwards drop into an interactive shell.
import code

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    def handle(self, *args, **kwargs):
        foo = 'bar'
        code.interact(local=locals())
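Assuming this file lives at yourapp/management/commands/fiddle.py (the command name fiddle is just an example), you would run it with python manage.py fiddle and land in an interactive prompt with foo already defined.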
No reload, but an easy and less annoying way to interactively test django functionality.
import test  # test only has x defined
test.x       # prints 3; now add y = 4 in test.py
test.y       # error, test does not have attribute y
The solution is to use reload from importlib, as follows:
from importlib import reload
import test  # test only has x defined
test.x       # prints 3; now add y = 4 in test.py
test.y       # error
reload(test)
test.y       # prints 4
I want to run cd and ls in the Python debugger. I tried using !ls, but I get:
*** NameError: name 'ls' is not defined
Simply use the "os" module and you will able to easily execute any os command from within pdb.
Start with:
(Pdb) import os
And then:
(Pdb) os.system("ls")
or even
(Pdb) os.system("sh")
The latter simply spawns a subshell; exiting from it returns you to the debugger.
Note: the "cd" command will have no effect when used as os.system("cd dir") since it will not change the cwd of the python process. Use os.chdir("/path/to/targetdir") for that.
PDB doesn't let you run shell commands, unfortunately. The reason for the error that you are seeing is that PDB lets you inspect a variable name or run a one-line snippet using !. Quoting from the docs:
[!]statement
Execute the (one-line) statement in the context of the current stack frame. The exclamation point can be omitted unless the first word of the statement resembles a debugger command. To set a global variable, you can prefix the assignment command with a global command on the same line, e.g.:
(Pdb) global list_options; list_options = ['-l']
(Pdb)
Thus !ls means "print the value of ls", which causes the NameError that you observed.
pdb works very similarly to the normal Python console, so packages can be imported and used just as you would in a regular interactive session.
For the directory listing, you should use the os module (inside pdb, confirming each line with the return/enter key):
import os
os.listdir("/path/to/your/folder")
Or if you want to do more advanced stuff like starting new processes or capturing output, you need to have a look at the subprocess module.
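For example, a rough sketch of capturing a command's output from inside the debugger (assuming Python 3.7+ for capture_output):
(Pdb) import subprocess
(Pdb) result = subprocess.run(["ls", "-la"], capture_output=True, text=True)
(Pdb) print(result.stdout)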
I have a Python script, test.py, that I'm working on. I'd like to make edits and re-run it. I'm using Terminal on OS X to run the script. I can't get the script to run a second time without quitting out of Terminal and starting it again.
# test.py
print "Howdy"
Terminal window:
$ python
>>> import test
Howdy
>>> import test
>>>
Question 1: How do I get the script to run again?
Question 2: Or is python designed to work like this:
# test.py
def printStuff():
print "Howdy"
Terminal:
$ python
>>> import test
>>> test.printStuff()
Howdy
>>> test.printStuff()
Howdy
>>>
1: you can use reload(moduleName) to do what you're trying to do (but see below).
2: There are a few different patterns, but I typically have a main() function inside all of my modules that have a clear "start point", or else I'll just have a bunch of library functions. So it's more or less what you're thinking in your example. You're not really supposed to "do stuff" on import, unless it's setting up the module. Think of a module as a library, not a script.
If you want to execute it as a script (in which case you shouldn't be using import), then there are a couple of options. You can use a shebang at the top of your Python script (Should I put #! (shebang) in Python scripts, and what form should it take?) and execute it directly from the command line, or you can use an if __name__ == '__main__': block in your script as an entry point (https://docs.python.org/2/library/__main__.html), and then call the script with python from the command line, e.g. python myscript.py.
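A minimal sketch of that entry-point pattern, reusing the test.py from the question:
# test.py
def printStuff():
    print "Howdy"

if __name__ == '__main__':
    printStuff()
Now python test.py runs it directly from the terminal, while import test from the interpreter only defines the function.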
Use Python's reload() method:
https://docs.python.org/2/library/functions.html#reload
You want to use the reload method
>>> import test
Howdy
>>> reload(test)
Howdy
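(Note that reload() is no longer a built-in in Python 3; import it first with from importlib import reload, as mentioned in another answer above.)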
How can I run a bunch of imports and path appends from the interpreter with one command/import? If I import another module that runs the commands for me, the imports are not available in the main namespace. It would be similar to running a bash script that modifies/adds commands and variables to the current session.
ex.
import os, ...
sys.path.append(...)
If I understand you correctly, you're just looking for the from … import … statement. For example:
lotsostuff.py:
import json
def foo(): pass
Now:
$ python3.3
>>> from lotsostuff import *
>>> json
<module 'json' from '/usr/local/lib/python3.3/json/__init__.py'>
>>> foo
<function lotsostuff.foo>
However, you might want to consider a different alternative. If you're just trying to control the startup of your interpreter session, you can do this:
$ PYTHONSTARTUP=lotsostuff.py python3.3
>>> json
<module 'json' from '/usr/local/lib/python3.3/json/__init__.py'>
>>> foo
<function __main__.foo>
Notice the difference in the last line. You're now running lotsostuff in the __main__ namespace, rather than running in a separate namespace and grabbing all of its members.
Similarly:
$ python3.3 -i lotsostuff.py
>>> json
<module 'json' from '/usr/local/lib/python3.3/json/__init__.py'>
You'd normally use PYTHONSTARTUP if you want to do this every time in your session, -i if you want to do it just this once.
If you want to do the same thing in the middle of a session instead of at startup… well, you can't do it directly, but you can come pretty close with exec (Python 3.x) (or execfile in Python 2.x).
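As a rough sketch, reusing the hypothetical lotsostuff.py from above, executing it into the current interactive namespace looks like this:
>>> exec(open('lotsostuff.py').read(), globals())
>>> json, foo   # both names are now defined in this session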
If you really want to do exactly what you described—importing a module, as a normal import, except merged into your namespace instead of in its own—you'll need to customize the import process. This isn't that hard with importlib; if you're not in Python 3.2 or later, you'll have a lot more work to do it with imp.
That's pretty much the difference between running . ./foo and just ./foo in a bash script, which I think is what you were looking for.
If you're using ipython, there are even cooler options. (And if you're not using ipython, you might want to check it out.)
In my local Google App Engine development environment, I would like to use an IPython shell, especially to be able to check out models with data that was created via dev_appserver.py, very much like how Django's manage.py shell command works.
(This means that the ipython shell should be started after sys.path was fixed and app.yaml was read and analyzed, and the local datastore is ready)
Any simple solution for this?
For starters, you can put your application root directory and the SDK root directory (google_appengine) in your Python path. You'll also need a few libraries like yaml, either installed or added to the library path from the SDK's lib directory. Then you can import modules and call some features.
>>> import sys
>>> sys.path.append('/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine')
Of course, as soon as a code path tries to make a service call, the library will raise an exception, saying it isn't bound to anything. To bind the service libraries to test stubs, use the testbed library:
>>> from google.appengine.ext import testbed
>>> tb = testbed.Testbed()
>>> tb.activate()
>>> tb.init_datastore_v3_stub()
>>> from google.appengine.ext import db
>>> import models
>>> m = models.Entry()
>>> m.title = 'Test'
>>> m.put()
To tell the datastore test stub to use your development server's datastore file, pass the path to the file to init_datastore_v3_stub() as the datastore_file argument. See the doc comment for the method in google.appengine.ext.testbed for more information.
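For example (the path is an assumption; point it at wherever your development server actually stores its data):
>>> tb.init_datastore_v3_stub(datastore_file='/path/to/your/app/datastore.db')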
For more information on testbed: https://developers.google.com/appengine/docs/python/tools/localunittesting
Basically you'll need to use this: https://developers.google.com/appengine/articles/remote_api
For IPython support you have two options:
(1) If you're working with Python 2.7 (and IPython 0.13) you will need to use this to embed an IPython shell:
from IPython.frontend.terminal.interactiveshell import TerminalInteractiveShell
shell = TerminalInteractiveShell(user_ns=namespace)
shell.mainloop()
(2) If you're working with Python 2.5 (and IPython 0.10.2) you will need to use this line to embed an IPython shell:
from IPython.Shell import IPShellEmbed
ipshell = IPShellEmbed(user_ns=namespace, banner=banner)
ipshell()
This is the one that I use: https://gist.github.com/4624108. So you just type:
>> python console.py your-app-id
Once you run dev_appserver.py, you will get:
starting module "default" running at: http://127.0.0.1:8080
Starting admin server at : http://localhost:8000
So basically what you want to do is access http://localhost:8000, where you will find an "Interactive Console" you can use to play around.
When I start the Django shell by typing python manage.py shell, the IPython shell is started. Is it possible to make Django start IPython in qtconsole mode (i.e. make it run ipython qtconsole)?
Arek
Edit:
So I'm trying what Andrew Wilkinson suggested in his answer: extending my Django app with a command based on the original Django shell command. As far as I understand, the code which starts IPython in the original version is this:
from django.core.management.base import NoArgsCommand


class Command(NoArgsCommand):
    requires_model_validation = False

    def handle_noargs(self, **options):
        from IPython.frontend.terminal.embed import TerminalInteractiveShell
        shell = TerminalInteractiveShell()
        shell.mainloop()
Any advice on how to change this code to start IPython in qtconsole mode?
Second edit:
What I found, and what works so far, is to start 'ipython qtconsole' from the location of my project's settings.py (or set sys.path when starting from a different location), and then execute this:
import settings
import django.core.management
django.core.management.setup_environ(settings)
Now I can import my models, list all instances, etc.
The docs here say:
If you'd rather not use manage.py, no problem. Just set the
DJANGO_SETTINGS_MODULE environment variable to mysite.settings and run
python from the same directory manage.py is in (or ensure that
directory is on the Python path, so that import mysite works).
So it should be enough to set that environment variable and then run ipython qtconsole. You could make a simple script to do this for you automatically.
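A minimal sketch of such a launcher script in Python (mysite.settings comes from the quoted docs; substitute your own settings module):
#!/usr/bin/env python
# launch_qtconsole.py -- point Django at the settings module, then start the Qt console
import os
import subprocess

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "mysite.settings")
subprocess.call(["ipython", "qtconsole"])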
I created a shell script with the following:
/path/to/ipython qtconsole --pylab inline -c "run /path/to/my/site/shell.py"
You only need the --pylab inline part if you want the cool inline matplotlib graphs.
And I created a python script shell.py in /path/to/my/site with:
import os
working_dir = os.path.dirname(__file__)
os.chdir(working_dir)
import settings
import django.core.management
django.core.management.setup_environ(settings)
Running my shell script gets me an ipython qtconsole with the benefits of the django shell.
You can check the code that runs the shell here. You'll see that there is nowhere to configure which shell is run.
What you could do is copy this file, rename it as shell_qt.py and place it in your own project's management/commands directory. Change it to run the QT console and then you can run manage.py shell_qt.
Since Django version 1.4, usage of django.core.management.setup_environ() is deprecated. A solution that works for both the IPython notebook and the QTconsole is this (just execute this from within your Django project directory):
In [1]: from django.conf import settings
In [2]: from mydjangoproject.settings import DATABASES as MYDATABASES
In [3]: settings.configure(DATABASES=MYDATABASES)
Update: If you work with Django 1.7, you additionally need to execute the following:
In [4]: import django; django.setup()
Using django.conf.settings.configure(), you specify the database settings of your project and then you can access all your models in the usual way.
If you want to automate these imports, you can e.g. create an IPython profile by running:
ipython profile create mydjangoproject
Each profile contains a directory called startup. You can put arbitrary Python scripts in there and they will be executed just after IPython has started. In this example, you find it under
~/.ipython/profile_<mydjangoproject>/startup/
Just put a script in there which contains the code shown above, probably enclosed by a try..except clause to handle ImportErrors. You can then start IPython with the given profile like this:
ipython qtconsole --profile=mydjangoproject
or
ipython notebook --profile=mydjangoproject
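For concreteness, a startup script along these lines (the file name 00-django.py is arbitrary) could contain:
# ~/.ipython/profile_mydjangoproject/startup/00-django.py
try:
    from django.conf import settings
    from mydjangoproject.settings import DATABASES as MYDATABASES
    settings.configure(DATABASES=MYDATABASES)

    import django
    django.setup()  # only needed on Django 1.7+
except ImportError:
    pass  # e.g. when IPython is started outside the project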
I also wanted to open the Django shell in qtconsole. Looking inside manage.py solved the problem for me:
Launch IPython qtconsole, cd to the project base directory and run:
import os
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")
Don't forget to change 'myproject' to your project name.
You can create a command that extends the base shell command and imports the IPythonQtConsoleApp like so:
Create a file qtshell.py in yourapp/management/commands with:
from django.core.management.commands import shell


class Command(shell.Command):
    def _ipython(self):
        """Start IPython Qt console"""
        from IPython.qt.console.qtconsoleapp import IPythonQtConsoleApp
        app = IPythonQtConsoleApp.instance()
        app.initialize(argv=[])
        app.start()
Then just use python manage.py qtshell.
A somewhat undocumented feature of shell_plus is the ability to run it in "kernel only mode". This allows us to connect to it from another shell, such as one running qtconsole.
For example, in one shell do:
django-admin shell_plus --kernel
# or == ./manage.py shell_plus --kernel
This will print out something like:
# Shell Plus imports ...
...
To connect another client to this kernel, use:
--existing kernel-23600.json
Then, in another shell run:
ipython qtconsole --existing kernel-23600.json
This should now open a QtConsole. One other tip: instead of running another shell, you can also hit Ctrl+Z and run bg to tell the current process to run in the background.
You can install django-extensions and then run:
python manage.py shell_plus --ipython