Shortcut for invoking ipdb?

ipdb is just great; wow. The thing is, when a script blows up, I still have to go into the code and add four lines that require not a ton of typing, but not nothing either. For example, let's say this is the bad line:
1 = 2
Naturally, I get this:
SyntaxError: can't assign to literal
If for whatever reason I wanted to debug that line and see what was going on just before that line (or somewhere else in the stack at that moment), I would typically change that line of code as follows:
try:
    1 = 2
except:
    import traceback; traceback.print_exc()
    import ipdb; ipdb.set_trace()
That gets the job done, but I'd love to be able to run a script in a "mode" where whenever ANYTHING blows up (assuming the exception is otherwise unhandled), I get the same result.
Does that exist?
*******EDIT*******
Thanks to @np8's response, I redid the driver script to look like this (where main is any arbitrary function):
if __name__ == "__main__":
    start = time.time()
    args = parser.parse_args()
    if args.verbosity > 1:
        from ipdb import launch_ipdb_on_exception
        with launch_ipdb_on_exception():
            print("Launching ipdb on exception!")
            main(start, args)
    else:
        print("NOT launching ipdb on exception!")
        main(start, args)
This allows me, from the command line, to determine whether exceptions should launch ipdb (i.e. when I'm developing) or not (i.e. when the script is running in production and therefore with a verbosity argument below 2, in this example).

What you are looking for is "post mortem debugging". The easiest way to get what you want is to run the script using
ipdb script.py
or
python -m pdb script.py
instead of
python script.py
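If you would rather keep the normal python script.py invocation, a rough equivalent is to catch the exception at the top level yourself and hand the traceback to the post-mortem debugger. A minimal sketch (main here just stands in for whatever your script does):

import pdb
import sys
import traceback

def main():
    1 / 0  # stand-in for the code that blows up

if __name__ == "__main__":
    try:
        main()
    except Exception:
        traceback.print_exc()
        pdb.post_mortem(sys.exc_info()[2])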

You could use the launch_ipdb_on_exception context manager:
# test.py
from ipdb import launch_ipdb_on_exception

with launch_ipdb_on_exception():
    print(x)
Running the above will launch ipdb:
python .\test.py
NameError("name 'x' is not defined",)
> c:\tmp\delete_me\test.py(4)<module>()
2
3 with launch_ipdb_on_exception():
----> 4 print(x)
ipdb>

pm is a function decorator that mirrors the functionality of the
ipdb context manager launch_ipdb_on_exception. Use it to decorate
a function, and that function will go to ipdb on error.
The name 'pm' comes from the ipdb.pm() function, which stands for
postmortem, and behaves similarly.
With this, you can decorate the top-level function which you want to debug with @pm, and any exception at that level, or below it in the call stack, will trigger ipdb.
import ipdb

class Decontext(object):
    """
    Makes a context manager also act as a decorator
    """
    def __init__(self, context_manager):
        self._cm = context_manager

    def __enter__(self):
        return self._cm.__enter__()

    def __exit__(self, *args, **kwds):
        return self._cm.__exit__(*args, **kwds)

    def __call__(self, func):
        def wrapper(*args, **kwds):
            with self:
                return func(*args, **kwds)
        return wrapper

pm = Decontext(ipdb.launch_ipdb_on_exception())
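For illustration, a hypothetical usage of the pm decorator defined above (risky_call stands in for your own code; any exception raised there or deeper drops into ipdb):

@pm
def main():
    risky_call()  # hypothetical function; any error here triggers ipdb

main()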

Related

How can I stop the python process from terminating after a failed unit test so that I can call pdb.pm [duplicate]

Is there a way to automatically start the debugger at the point at which a unittest fails?
Right now I am just using pdb.set_trace() manually, but this is very tedious as I need to add it each time and take it out at the end.
For Example:
import unittest

class tests(unittest.TestCase):
    def setUp(self):
        pass

    def test_trigger_pdb(self):
        # this is the way I do it now
        try:
            assert 1 == 0
        except AssertionError:
            import pdb
            pdb.set_trace()

    def test_no_trigger(self):
        # this is the way I would like to do it:
        a = 1
        b = 2
        assert a == b
        # magically, pdb would start here
        # so that I could inspect the values of a and b

if __name__ == '__main__':
    # In the documentation the unittest.TestCase has a debug() method
    # but I don't understand how to use it
    # A = tests()
    # A.debug(A)
    unittest.main()
I think what you are looking for is nose. It works like a test runner for unittest.
You can drop into the debugger on errors, with the following command:
nosetests --pdb
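Depending on your nose version, --pdb may fire only on errors (unexpected exceptions); to also drop into the debugger on plain assertion failures there is a separate flag:
nosetests --pdb --pdb-failures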
import unittest
import sys
import pdb
import functools
import traceback

def debug_on(*exceptions):
    if not exceptions:
        exceptions = (AssertionError,)
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except exceptions:
                info = sys.exc_info()
                traceback.print_exception(*info)
                pdb.post_mortem(info[2])
        return wrapper
    return decorator

class tests(unittest.TestCase):
    @debug_on()
    def test_trigger_pdb(self):
        assert 1 == 0
I corrected the code to call post_mortem on the exception instead of set_trace.
Third party test framework enhancements generally seem to include the feature (nose and nose2 were already mentioned in other answers). Some more:
pytest supports it.
pytest --pdb
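If you prefer ipdb's interface to plain pdb, pytest can also be pointed at a different debugger class (this assumes IPython is installed):
pytest --pdb --pdbcls=IPython.terminal.debugger:TerminalPdb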
Or if you use absl-py's absltest instead of unittest module:
name_of_test.py --pdb_post_mortem
A simple option is to just run the tests without result collection, letting the first exception propagate up the stack (for arbitrary post mortem handling), e.g.:

import sys, pdb, unittest, __main__

try:
    unittest.findTestCases(__main__).debug()
except:
    pdb.post_mortem(sys.exc_info()[2])
Another option: Override unittest.TextTestResult's addError and addFailure in a debug test runner for immediate post_mortem debugging (before tearDown()) - or for collecting and handling errors & tracebacks in an advanced way.
(Doesn't require extra frameworks or an extra decorator for test methods)
Basic example:
import unittest, pdb, traceback

class TC(unittest.TestCase):
    def testZeroDiv(self):
        1 / 0

def debugTestRunner(post_mortem=None):
    """unittest runner doing post mortem debugging on failing tests"""
    if post_mortem is None:
        post_mortem = pdb.post_mortem

    class DebugTestResult(unittest.TextTestResult):
        def addError(self, test, err):
            # called before tearDown()
            traceback.print_exception(*err)
            post_mortem(err[2])
            super(DebugTestResult, self).addError(test, err)

        def addFailure(self, test, err):
            traceback.print_exception(*err)
            post_mortem(err[2])
            super(DebugTestResult, self).addFailure(test, err)

    return unittest.TextTestRunner(resultclass=DebugTestResult)

if __name__ == '__main__':
    ##unittest.main()
    unittest.main(testRunner=debugTestRunner())
    ##unittest.main(testRunner=debugTestRunner(pywin.debugger.post_mortem))
    ##unittest.findTestCases(__main__).debug()
To apply @cmcginty's answer to nose's successor nose2 (recommended by nose; available on Debian-based systems via apt-get install nose2), you can drop into the debugger on failures and errors by calling
nose2
in your test directory.
For this, you need to have a suitable .unittest.cfg in your home directory or unittest.cfg in the project directory; it needs to contain the lines
[debugger]
always-on = True
errors-only = False
To address the comment in your code "In the documentation the unittest.TestCase has a debug() method but I don't understand how to use it", you can do something like this:
suite = unittest.defaultTestLoader.loadTestsFromModule(sys.modules[__name__])
suite.debug()
Individual test cases are created like:
testCase = tests('test_trigger_pdb')
(where tests is a subclass of TestCase as per your example). Then you can call testCase.debug() to debug one case.
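Because debug() runs the test without the result-collection machinery, the exception propagates to the caller, so you can combine it with post-mortem debugging. A minimal sketch, assuming the tests class from the question:

import pdb
import sys

try:
    tests('test_trigger_pdb').debug()  # runs the test; the exception escapes
except Exception:
    pdb.post_mortem(sys.exc_info()[2])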
Here's a built-in, no extra modules, solution:
import unittest
import sys
import pdb

####################################
def ppdb(e=None):
    """conditional debugging
    use with: `if ppdb(): pdb.set_trace()`
    """
    return ppdb.enabled

ppdb.enabled = False
###################################

class SomeTest(unittest.TestCase):
    def test_success(self):
        try:
            pass
        except Exception, e:
            if ppdb(): pdb.set_trace()
            raise

    def test_fail(self):
        try:
            res = 1/0
            # note: a `nosetests --pdb` run will stop after any exception,
            # even one without try/except, and ppdb() does not modify that.
        except Exception, e:
            if ppdb(): pdb.set_trace()
            raise

if __name__ == '__main__':
    # conditional debugging, but not in nosetests
    if "--pdb" in sys.argv:
        print "pdb requested"
        ppdb.enabled = not sys.argv[0].endswith("nosetests")
        sys.argv.remove("--pdb")
    unittest.main()
Call it with python myunittest.py --pdb and it will halt. Otherwise it won't.
Some solutions above modify the business logic:

try:                       # <-- new code
    original_code()        # <-- changed (indented)
except Exception as e:     # <-- new code
    pdb.post_mortem(...)   # <-- new code
To minimize changes to the original code, we can define a function decorator, and simply decorate the function that's throwing:
def pm(func):
    import functools, pdb
    @functools.wraps(func)
    def func2(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            # e.__traceback__ requires Python 3; on Python 2 use sys.exc_info()[2]
            pdb.post_mortem(e.__traceback__)
            raise
    return func2
Use:
@pm
def test_xxx(...):
    ...
Built a module with a decorator which post-mortems into every type of error except AssertionError. The decorator can be triggered by the logging root level:
#!/usr/bin/env python3
'''
Decorator for getting post mortem on errors of a unittest TestCase
'''
import sys
import pdb
import functools
import traceback
import logging
import unittest

logging.basicConfig(format='%(asctime)s %(message)s', level=logging.DEBUG)

def debug_on(log_level):
    '''
    Function decorator for post mortem debugging unittest functions.

    Args:
        log_level (int): logging level corresponding to the standard library logging module

    Use case:
        class tests(unittest.TestCase):
            @debug_on(logging.root.level)
            def test_trigger_pdb(self):
                assert 1 == 0
    '''
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except BaseException as err:
                info = sys.exc_info()
                traceback.print_exception(*info)
                if log_level < logging.INFO and type(err) != AssertionError:
                    pdb.post_mortem(info[2])
        return wrapper
    return decorator

class Debug_onTester(unittest.TestCase):
    @debug_on(logging.root.level)
    def test_trigger_pdb(self):
        assert 1 == 0

if __name__ == '__main__':
    unittest.main()

How to avoid multiple running instances of the same function

My Python script dumps core, and I think it's because the same function is called twice at the same time.
The function reads a Vte terminal in a GTK window:
def term(self, dPluzz, donnees=None):
    text = str(self.v.get_text(lambda *a: True).rstrip())
    [...]
    print "Here is the time " + str(time.time())

def terminal(self):
    self.v = vte.Terminal()
    self.v.set_emulation('xterm')
    self.v.connect("child-exited", lambda term: self.verif(self, dPluzz))
    self.v.connect("contents-changed", self.term)
Result:
Here is the time 1474816913.68
Here is the time 1474816913.68
Erreur de segmentation (core dumped)
How can I avoid the function being executed twice?
The double execution must be the consequence of the contents-changed event triggering twice.
You could just check in your term function whether it has already been executed before, and exit if so.
Add these two lines at the start of the term function:
if hasattr(self, 'term_has_executed'): return
self.term_has_executed = True
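For clarity, this is where the guard would sit in the term method from the question (term_has_executed is just an ad-hoc attribute name):

def term(self, dPluzz, donnees=None):
    if hasattr(self, 'term_has_executed'):
        return
    self.term_has_executed = True
    text = str(self.v.get_text(lambda *a: True).rstrip())
    [...]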
I created a Python decorator (multi-platform compatible) which provides a mechanism to avoid concurrent execution.
The usage is:

@lock('file.lock')
def run():
    # Function action
    pass
Personally, I usually use a relative path:

CURRENT_FILE_DIR = os.path.dirname(os.path.abspath(__file__))

@lock(os.path.join(CURRENT_FILE_DIR, os.path.basename(__file__) + ".lock"))
The decorator:

import os

def lock(lock_file):
    """
    Usage:
        @lock('file.lock')
        def run():
            # Function action
    """
    def decorator(target):
        def wrapper(*args, **kwargs):
            if os.path.exists(lock_file):
                raise Exception('Unable to get exclusive lock.')
            else:
                with open(lock_file, "w") as f:
                    f.write("LOCK")
                # Execute the target
                result = target(*args, **kwargs)
                remove_attempts = 10
                os.remove(lock_file)
                while os.path.exists(lock_file) and remove_attempts >= 1:
                    os.remove(lock_file)
                    remove_attempts -= 1
                return result
        return wrapper
    return decorator
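One caveat with the decorator above: the os.path.exists() check and the subsequent open() are two separate steps, so two processes can still race between them. If that matters for your use case, a variant using os.open with O_CREAT | O_EXCL makes the create-if-absent step atomic. A sketch only, keeping the same decorator shape:

import os

def lock(lock_file):
    def decorator(target):
        def wrapper(*args, **kwargs):
            try:
                # Atomic: fails with OSError if the file already exists
                fd = os.open(lock_file, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            except OSError:
                raise Exception('Unable to get exclusive lock.')
            os.close(fd)
            try:
                return target(*args, **kwargs)
            finally:
                os.remove(lock_file)
        return wrapper
    return decorator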
For multithreaded calls
There is a Unix solution for managing multithreaded calls: https://gist.github.com/mvliet/5715690
Don't forget to thank the author of this gist (it's not me).

Flask-Script command that passes all remaining args to nose?

I use a simple command to run my tests with nose:
@manager.command
def test():
    """Run unit tests."""
    import nose
    nose.main(argv=[''])
However, nose supports many useful arguments that I currently have no way to pass through.
Is there a way to run nose with a manager command (similar to the call above) and still be able to pass arguments to nose? For example:
python manage.py test --nose-arg1 --nose-arg2
Right now I'd get an error from Manager that --nose-arg1 --nose-arg2 are not recognised as valid arguments. I want to pass those args as nose.main(argv= <<< args coming after python manage.py test ... >>>)
Flask-Script has an undocumented capture_all_args attribute, which will pass remaining args to the Command.run method. This is demonstrated in the tests.
@manager.add_command
class NoseCommand(Command):
    name = 'test'
    capture_all_args = True

    def run(self, remaining):
        nose.main(argv=remaining)
python manage.py test --nose-arg1 --nose-arg2
# will call nose.main(argv=['--nose-arg1', '--nose-arg2'])
In the sources of flask_script you can see that the "too many arguments" error is prevented when the executed Command has the attribute capture_all_args set to True, which isn't documented anywhere.
You can set that attribute on the class just before you run the manager
if __name__ == "__main__":
    from flask.ext.script import Command
    Command.capture_all_args = True
    manager.run()
This way, additional arguments to the manager are always accepted.
The downside of this quick fix is that you cannot register options or arguments to the commands the normal way anymore.
If you still need that feature you could subclass the Manager and override the command decorator like this
class MyManager(Manager):
    def command(self, capture_all=False):
        def decorator(func):
            command = Command(func)
            command.capture_all_args = capture_all
            self.add_command(func.__name__, command)
            return func
        return decorator
Then you can use the command decorator like this
@manager.command(True)  # capture all arguments
def use_all(*args):
    print("args:", args[0])

@manager.command()  # normal way of registering arguments
def normal(name):
    print("name", name)
Note that for some reason flask_script requires use_all to accept varargs but will store the list of arguments in args[0], which is a bit strange. def use_all(args): does not work and fails with TypeError "got multiple values for argument 'args'".
Ran into an issue with davidism's solution where only some of the args were being received.
Looking through the docs a bit more, it is documented that nose.main automatically picks up arguments from sys.argv when argv is not supplied:
http://nose.readthedocs.io/en/latest/api/core.html
So we are now just using:
@manager.add_command
class NoseCommand(Command):
    name = 'nose'
    capture_all_args = True

    def run(self, remaining):
        nose.main()

Invoking a program in python

Is there a way for a program to invoke another program in python?
Let me explain my problem:
I am building an application (program 1), and I am also writing a debugger (program 2) to catch exceptions in program 1 with a typical try: ... except: block. Now I want to release program 2 so that for any application like prog 1, prog 2 can handle the exceptions (making my work easier). I just want prog 1 to use a simple piece of code like:
import prog2
My confusion stems from how to do something like this: how can I invoke prog 2 from prog 1 so that, in effect, all the code in prog 1 runs inside a try block whose except clause is handled by prog 2?
Any pointers on how I can do this, or a direction to start in, would be very much appreciated.
Note: I am using python 2.7 and IDLE as my developer tool.
Have you tried execfile() yet? Read up on how to execute another script from your script.
I think you need to think about classes instead of scripts.
What about this?
class MyClass:
    def __init__(self, t):
        self.property = t
        self.catchBugs()

    def catchBugs(self):
        message = self.property
        try:
            assert message == 'hello'
        except AssertionError:
            print "String doesn't match expected input"

a = MyClass('hell')  # prints 'String doesn't match expected input'
UPDATE
I guess you have something like this in your directory:
program1.py (main program)
program2.py (debugger)
__init__.py
Program1
from program2 import BugCatcher

class MainClass:
    def __init__(self, a):
        self.property = a

obj = MainClass('hell')
bugs = BugCatcher(obj)
Program2
class BugCatcher(object):
    def __init__(self, obj):
        self.obj = obj
        self.catchBugs()

    def catchBugs(self):
        obj = self.obj
        try:
            assert obj.property == 'hello'
        except AssertionError:
            print 'Error'
Here we are passing the whole object of your program1 to the BugCatcher object of program2. Then we access some property of that object to verify that it's what we expect.
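If you want prog 2 to catch exceptions that prog 1 never wraps in try/except at all, another standard mechanism is sys.excepthook, which Python calls for any unhandled exception. A minimal sketch of what program2.py could install on import (the handler body here is illustrative):

# program2.py
import sys
import traceback

def _bug_catcher(exc_type, exc_value, exc_tb):
    traceback.print_exception(exc_type, exc_value, exc_tb)
    print 'BugCatcher caught: %r' % (exc_value,)

sys.excepthook = _bug_catcher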

How can I log all imports that are executed in a block of code?

I'm writing a test suite, and the code I'm testing makes excessive use of delayed module imports. So it's possible that with 5 different inputs to the same method, this may end up importing 5 additional modules. What I'd like to be able to do is set up tests so that I can assert that running the method with one input causes one import, and doesn't cause the other 4.
I had a few ideas of how to start on this, but none so far have been successful. I already have a custom importer, and I can put the logging code in the importer. But this doesn't work, because the import statements only run once. I need the log statement to be executed regardless of whether the module has been previously imported. Just running del sys.modules['modname'] also doesn't work, because that runs in the test code, and I can't reload the module in the code being tested.
The next thing I tried was subclassing dict to do the monitoring, and replace sys.modules with this subclass. This subclass has a reimplemented __getitem__ method, but calling import module doesn't seem to trigger the __getitem__ call in the subclass. I also can't assign directly to sys.modules.__getitem__, because it is read-only.
Is what I'm trying to do even possible?
UPDATE
nneonneo's answer seems to only work if the implementation of logImports() is in the same module as where it is used. If I make a base test class containing this functionality, it has problems. The first is that it can't find plain __import__, erroring with:
# old_import = __import__
# UnboundLocalError: local variable '__import__' referenced before assignment
When I change that to __builtins__.__import__, I get another error:
myunittest.py:
import unittest

class TestCase(unittest.TestCase):
    def logImports(self):
        old_import = __builtins__.__import__
        def __import__(*args, **kwargs):
            print args, kwargs
            return old_import(*args, **kwargs)
        __builtins__.__import__ = __import__
test.py:
import myunittest
import unittest

class RealTest(myunittest.TestCase):
    def setUp(self):
        self.logImports()

    def testSomething(self):
        import unittest
        self.assertTrue(True)

unittest.main()

# old_import = __builtins__.__import__
# AttributeError: 'dict' object has no attribute '__import__'
Try
old_import = __import__

def __import__(*args, **kwargs):
    print args, kwargs
    return old_import(*args, **kwargs)

__builtins__.__import__ = __import__
This overrides __import__ completely, allowing you to monitor every invocation of import.
Building on the previous answer, in Python 3, I've had success with the following.
import builtins

old_import = __import__

def importWithLog(*args, **kwargs):
    print(args[0])  # This is the module name
    print(args, kwargs)
    return old_import(*args, **kwargs)

builtins.__import__ = importWithLog

# Rest of the code goes here.
import time
import myModule
...
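One caveat: the hook above stays installed for the rest of the process. In a test suite it may be safer to wrap it in a context manager so the original __import__ is always restored; a sketch under that assumption (log_imports is an illustrative name):

import builtins
from contextlib import contextmanager

@contextmanager
def log_imports():
    old_import = builtins.__import__
    def logging_import(*args, **kwargs):
        print('import:', args[0])
        return old_import(*args, **kwargs)
    builtins.__import__ = logging_import
    try:
        yield
    finally:
        builtins.__import__ = old_import

with log_imports():
    import json  # logged even if json is already in sys.modules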
