Multiple strings for an Exception - Python

Python 2.7
I have my own exception:
class NoSourceFileError(Exception):
    def __init__(self, logger, message):
        Exception.__init__(self, message)
        logger.logger.info(message)
And it is raised with:
...
else:
    raise NoSourceFileError('ERROR: can not find file %s for %s' % (add_file, add_name))
The problem here, as you can see, is that I pass two variables (add_file, add_name), but __init__ can accept only one (message).
How can I pass both of them?
I tried playing with *args but couldn't make it work.
Logger is my own class for logging.

The issue is not the string, which is fine. It is that you are passing logger into the exception.
Remove the logger parameter (and usage) and it will work:
>>> class NoSourceFileError(Exception):
...     def __init__(self, message):
...         Exception.__init__(self, message)
...
>>> raise NoSourceFileError('ERROR: can not find file %s for %s' % ('x', 'y'))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
__main__.NoSourceFileError: ERROR: can not find file x for y
>>>
I'm not sure what logger is meant to be, but if you want to use it, you need to pass it as an argument to the exception when raising it.
As for the string formatting: a %-formatted string, assuming it is done right, is a single value, so it can never count as more than one argument.
As @werkritter has said, if you do want to use logger without having to pass it in as a parameter, define it globally. I will assume that it is meant to log errors, in which case it would make sense to have it defined globally anyway.
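For completeness, here is a minimal sketch of keeping the logger parameter and passing both the logger and the formatted message when raising. It reuses the question's own class; the name log is just a hypothetical stand-in for whatever Logger instance is available at the raise site:
class NoSourceFileError(Exception):
    def __init__(self, logger, message):
        Exception.__init__(self, message)
        logger.logger.info(message)

# ...
else:
    # 'log' is a hypothetical name for the Logger instance in scope here;
    # the %-formatted string is still only one argument
    raise NoSourceFileError(log, 'ERROR: can not find file %s for %s' % (add_file, add_name))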

Related

Hide Traceback message for class exceptions in Python

Can anyone advise what would be an effective way to hide the traceback from a Python class exception? We know sys.tracebacklimit = 0 can be useful for hiding the trace, but we are not sure how this can be implemented in a class.
For example, we have a test.py file with example code:
class FooError(Exception):
    pass

class Foo():
    def __init__(self, *args):
        if len(args) != 2:
            raise FooError('Input must be two parameters')
        x, y = args
        self.x = x
        self.y = y
When we import this file and call Foo(), we get:
>>> from test import *
>>> Foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/test.py", line 8, in __init__
    raise FooError('Input must be two parameters')
test.FooError: Input must be two parameters
However, we expect only the error message to be displayed:
test.FooError: Input must be two parameters
What code changes should be made in the class to achieve this?
But why? Are you trying to make debugging your program harder?
Either way, this is really about how exceptions and tracebacks are printed by the default implementation. If you really want the console exception print implementation not to print a traceback for your errors, you can set sys.excepthook to a custom implementation that doesn't print the traceback.
This won't prevent any other try/except block from being able to access the traceback though, of course.
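As a rough sketch of that suggestion (not part of the original answer): an excepthook that prints only the exception line for FooError and falls back to the default hook otherwise could look roughly like this, placed in test.py after the class definitions. The name _foo_excepthook is just illustrative:
import sys
import traceback

def _foo_excepthook(exc_type, exc_value, exc_tb):
    if issubclass(exc_type, FooError):
        # print only the final "test.FooError: ..." line, no traceback
        sys.stderr.write(''.join(traceback.format_exception_only(exc_type, exc_value)))
    else:
        sys.__excepthook__(exc_type, exc_value, exc_tb)

sys.excepthook = _foo_excepthook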

Not allowing the developer to use the print method

I have developed a Python framework that is being used by others. In order to print any data to the output, the developer should use a Log class (Log.print(...)) and should not use the print() method directly. Is there any way to enforce this rule throughout the code? For example, by throwing an error when a developer uses the print method directly, like this:
Error: print method cannot be called directly. Please use Log.print().
Suppressing print (as discussed here) is not a good idea as the developer might get confused.
Actually, the two lines of code below are equivalent:
sys.stdout.write('hello'+'\n')
print('hello')
So you can redirect sys.stdout to an object whose write method raises an exception whenever print is called:
import sys

class BlockPrint():
    call_print_exception = Exception('Error: print method cannot be called directly. Please use Log.print().')

    def write(self, text):
        raise self.call_print_exception

bp = BlockPrint()
sys.stdout = bp
print('aaa')
Output:
Traceback (most recent call last):
  File "p.py", line 12, in <module>
    print('aaa')
  File "p.py", line 7, in write
    raise self.call_print_exception
Exception: Error: print method cannot be called directly. Please use Log.print().
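If the framework's Log.print() still has to produce output after sys.stdout is replaced, one possible refinement (a sketch only; the Log class here is a stand-in for the one in the question) is to capture the real stream before swapping it out and write to it directly:
import sys

class Log:
    _real_stdout = sys.stdout  # captured before sys.stdout is replaced

    @staticmethod
    def print(*args):
        # sanctioned output goes through the saved stream
        Log._real_stdout.write(' '.join(str(a) for a in args) + '\n')

class BlockPrint:
    def write(self, text):
        raise Exception('Error: print method cannot be called directly. Please use Log.print().')

    def flush(self):
        # some libraries call sys.stdout.flush(); keep that harmless
        pass

sys.stdout = BlockPrint()

Log.print('hello')  # allowed
print('hello')      # raises the exception above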

Avoiding namespace pollution in python by using Classes

A bit of background
I'm writing a Python module for my own use, and I'm using Python's logging module. There are handlers and formatters and even a pair of functions I create that (for the most part) won't be used anywhere else. However, I still want to be able to access and modify these variables elsewhere (for instance, from other closely coupled modules or scripts).
A simple namespace
What I'm currently doing is using a class definition to group all of my variables together, like this:
class _Logging:
    '''A little namespace for our logging facilities. Don't try to instantiate
    it: all it does is group together some logging objects and keep them out of
    the global namespace'''
    global logger

    def __init__(self):
        raise TypeError("that's not how this works...")

    def gz_log_rotator(source, dest):
        '''accept a source filename and a destination filename. copy source to
        dest and add gzip compression. for use with
        logging.handlers.RotatingFileHandler.rotator.'''
        with gzip.open(dest, 'wb', 1) as ofile, open(source, 'rb') as ifile:
            ofile.write(ifile.read())
        os.remove(source)

    def gz_log_namer(name):
        '''accept a filename, and return it with ".gz" appended. for use with
        logging.handlers.RotatingFileHandler.namer.'''
        return name + ".gz"

    fmtr = logging.Formatter(
        '[%(asctime)s:%(name)s:%(thread)05d:%(levelname)-8s] %(message)s')

    gz_rotfile_loghandler = logging.handlers.RotatingFileHandler(
        '%s.log' % __name__, mode='a', maxBytes=(1024**2 * 20), backupCount=3)
    gz_rotfile_loghandler.setLevel(5)
    gz_rotfile_loghandler.setFormatter(fmtr)
    gz_rotfile_loghandler.rotator = gz_log_rotator
    gz_rotfile_loghandler.namer = gz_log_namer

    simplefile_loghandler = logging.FileHandler(
        '%s.simple.log' % __name__, mode='w')
    simplefile_loghandler.setLevel(15)
    simplefile_loghandler.setFormatter(fmtr)

    stream_loghandler = logging.StreamHandler()
    stream_loghandler.setLevel(25)
    stream_loghandler.setFormatter(fmtr)

    logger = logging.getLogger(__name__)
    logger.setLevel(5)
    logger.addHandler(gz_rotfile_loghandler)
    logger.addHandler(simplefile_loghandler)
    logger.addHandler(stream_loghandler)
However, pylint complains (and I agree) that methods defined in a class should either be static methods or follow the naming convention for first parameters (e.g. gz_log_rotator(self, dest)), which is not how the function is used and would be much more confusing.
Fun Fact
During this process I've also discovered that instances of classmethod and staticmethod are not, in and of themselves, callable (???). While a method defined in a class namespace is callable both within and without, classmethods and staticmethods are only callable when accessed through their class (at which point they refer to the underlying function, not the classmethod/staticmethod object):
>>> class Thing:
...     global one_, two_, three_
...     def one(self):
...         print('one')
...     @classmethod
...     def two(cls):
...         print('two')
...     @staticmethod
...     def three():
...         print('three')
...     one_, two_, three_ = one, two, three
...
>>> Thing.one()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: one() missing 1 required positional argument: 'self'
>>> Thing.two()
two
>>> Thing.three()
three
>>> # all as expected
>>> one_()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: one() missing 1 required positional argument: 'self'
>>> # so far so good
>>> two_()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'classmethod' object is not callable
>>> # what?
>>> three_()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'staticmethod' object is not callable
>>> # ???
My Question
Is there a better way to hold these variables without polluting my namespace?
The code I have works correctly, but it makes me feel a little unclean. I could define a function that would only be called once and then immediately call it, but then I either lose references to everything I don't return, or I'm back to polluting the global namespace. I could just make everything _hidden, but I feel like they should be logically grouped. I could make _Logging a bona fide class, put all of my stuff in an __init__ function and tack all my little variables onto self, but that also feels inelegant. I could create another file for this, but so far I've gotten by with everything held in the same file. The only other option that seemed palatable is to make the two functions staticmethods and only refer to them through our class (i.e. _Logging.gz_log_namer), but it would seem that is also impossible:
>>> class Thing:
...     @staticmethod
...     def say_hello():
...         print('hello!')
...     Thing.say_hello()
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 5, in Thing
AttributeError: type object 'Thing' has no attribute 'say_hello'
>>>
As it stands, the best option I see is to use the selfless methods.
You can create a new class that inherits from staticmethod and add a __call__ method to it.
For example:
class callablestatic(staticmethod):
    def __init__(self, func):
        super().__init__(func)
        self.func = func

    def __call__(self, *args, **kwargs):
        # the __call__ method allows you to call the class instance
        return self.func(*args, **kwargs)
Then use it in your class:
class Thing:
    @callablestatic
    def hello(name):
        print(f"hello {name}")

    hello("John")  # works, even inside the class body
But it is better to create a new file and import it as a module.
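A minimal sketch of that module approach (the file name _logging_setup.py is just a placeholder): the helpers live at module level in their own namespace, and the main module imports only the names it actually needs, e.g. from _logging_setup import logger.
# _logging_setup.py (hypothetical file name)
import gzip
import logging
import os

def gz_log_rotator(source, dest):
    '''copy source to dest with gzip compression, then remove source.'''
    with gzip.open(dest, 'wb', 1) as ofile, open(source, 'rb') as ifile:
        ofile.write(ifile.read())
    os.remove(source)

def gz_log_namer(name):
    '''return the filename with ".gz" appended.'''
    return name + ".gz"

# the formatter, handlers and logger from the question would be set up
# here as ordinary module-level code, for example:
logger = logging.getLogger('yourpackage')  # hypothetical logger name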
Sorry for answering 2 years later, but this could help someone.
You could make your methods static and create another static method (e.g. init), calling it right after the class definition. Then use setattr to keep the references to your variables.
For setting multiple class variables, you can use
[setattr(Class, name, value) for name,value in locals().items()]
inside the method.
Full code:
class _Logging:
    '''A little namespace for our logging facilities. Don't try to instantiate
    it: all it does is group together some logging objects and keep them out of
    the global namespace'''

    def __init__(self):
        raise TypeError("that's not how this works...")

    @staticmethod
    def gz_log_rotator(source, dest):
        '''accept a source filename and a destination filename. copy source to
        dest and add gzip compression. for use with
        logging.handlers.RotatingFileHandler.rotator.'''
        with gzip.open(dest, 'wb', 1) as ofile, open(source, 'rb') as ifile:
            ofile.write(ifile.read())
        os.remove(source)

    @staticmethod
    def gz_log_namer(name):
        '''accept a filename, and return it with ".gz" appended. for use with
        logging.handlers.RotatingFileHandler.namer.'''
        return name + ".gz"

    @staticmethod
    def init():
        global logger
        fmtr = logging.Formatter(
            '[%(asctime)s:%(name)s:%(thread)05d:%(levelname)-8s] %(message)s')

        gz_rotfile_loghandler = logging.handlers.RotatingFileHandler(
            '%s.log' % __name__, mode='a', maxBytes=(1024**2 * 20), backupCount=3)
        gz_rotfile_loghandler.setLevel(5)
        gz_rotfile_loghandler.setFormatter(fmtr)
        gz_rotfile_loghandler.rotator = _Logging.gz_log_rotator
        gz_rotfile_loghandler.namer = _Logging.gz_log_namer

        simplefile_loghandler = logging.FileHandler(
            '%s.simple.log' % __name__, mode='w')
        simplefile_loghandler.setLevel(15)
        simplefile_loghandler.setFormatter(fmtr)

        stream_loghandler = logging.StreamHandler()
        stream_loghandler.setLevel(25)
        stream_loghandler.setFormatter(fmtr)

        logger = logging.getLogger(__name__)
        logger.setLevel(5)
        logger.addHandler(gz_rotfile_loghandler)
        logger.addHandler(simplefile_loghandler)
        logger.addHandler(stream_loghandler)

        [setattr(_Logging, name, value) for name, value in locals().items()]

_Logging.init()

How can I create an Exception in Python minus the last stack frame?

Not sure how possible this is, but here goes:
I'm trying to write an object with some slightly more subtle behavior - which may or may not be a good idea, I haven't determined that yet.
I have this method:
def __getattr__(self, attr):
    try:
        return self.props[attr].value
    except KeyError:
        pass  # to hide the KeyError exception
    msg = "'{}' object has no attribute '{}'"
    raise AttributeError(msg.format(self.__dict__['type'], attr))
Now, when I create an instance of this like so:
t = Thing()
t.foo
I get a stacktrace containing my function:
Traceback (most recent call last):
  File "attrfun.py", line 23, in <module>
    t.foo
  File "attrfun.py", line 15, in __getattr__
    raise AttributeError(msg.format(self._type, attr))
AttributeError: 'Thing' object has no attribute 'foo'
I don't want that - I want the stack trace to read:
Traceback (most recent call last):
  File "attrfun.py", line 23, in <module>
    t.foo
AttributeError: 'Thing' object has no attribute 'foo'
Is this possible with a minimal amount of effort, or is there kind of a lot required? I found this answer which indicates that something looks to be possible, though perhaps involved. If there's an easier way, I'd love to hear it! Otherwise I'll just put that idea on the shelf for now.
You cannot tamper with traceback objects (and that's a good thing). You can only control how you process one that you've already got.
The only exceptions are that you can:
- substitute an exception with another one, or re-raise it with raise e (i.e. make the traceback point to the re-raise statement's location)
- raise an exception with an explicit traceback object
- remove outer frame(s) from a traceback object by accessing its tb_next property (this reflects a traceback object's onion-like structure)
For your purpose, the way to go appears to be the 1st option: re-raise an exception from a handler one level above your function.
And, I'll say this again, this is harmful for yourself or whoever will be using your module as it deletes valuable diagnostic information. If you're dead set on making your module proprietary with whatever rationale, it's more productive for that goal to make it a C extension.
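A rough Python 3 sketch of that first option, using the t.foo example from the question: catch the error one level above __getattr__ and raise a fresh exception there, so the printed traceback no longer descends into __getattr__.
try:
    t.foo
except AttributeError as e:
    # a new exception starts its traceback at this raise statement;
    # "from None" also suppresses the "During handling of the above
    # exception..." chaining
    raise AttributeError(*e.args) from None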
The traceback object is created during stack unwinding, not directly when you raise the exception, so you cannot alter it right in your function. What you could do instead (though it's probably a bad idea) is to alter the top-level exception hook so that it hides your function from the traceback.
Suppose you have this code:
import sys

class MagicGetattr:
    def __getattr__(self, item):
        raise AttributeError(f"{item} not found")

orig_excepthook = sys.excepthook

def excepthook(type, value, traceback):
    iter_tb = traceback
    while iter_tb.tb_next is not None:
        if iter_tb.tb_next.tb_frame.f_code is MagicGetattr.__getattr__.__code__:
            iter_tb.tb_next = None
            break
        iter_tb = iter_tb.tb_next
    orig_excepthook(type, value, traceback)

sys.excepthook = excepthook

# The next line will raise an error
MagicGetattr().foobar
You will get the following output:
Traceback (most recent call last):
  File "test.py", line 49, in <module>
    MagicGetattr().foobar
AttributeError: foobar not found
Note that this ignores the __cause__ and __context__ members of the exception, which you would probably want to visit too if you were to implement this in real life.
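One hedged way to cover those as well (on Python 3.7+, where tb_next is writable): walk the exception chain from the hook and trim each linked traceback the same way, for example by calling a helper like the one below on value before delegating to orig_excepthook. The name _trim_chain is purely illustrative.
def _trim_chain(exc):
    # follow __cause__/__context__ and cut each traceback at __getattr__
    seen = set()
    while exc is not None and id(exc) not in seen:
        seen.add(id(exc))
        iter_tb = exc.__traceback__
        while iter_tb is not None and iter_tb.tb_next is not None:
            if iter_tb.tb_next.tb_frame.f_code is MagicGetattr.__getattr__.__code__:
                iter_tb.tb_next = None
                break
            iter_tb = iter_tb.tb_next
        exc = exc.__cause__ or exc.__context__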
You can get the current frame, and any frame above it, using the inspect module. For instance, here is what I use when I'd like to know where I am in my code:
from inspect import currentframe

def get_c_frame(level=0):
    """
    Return the frame `level` steps up from this one.
    (inspect.currentframe() takes no arguments, so walk f_back manually.)
    """
    frame = currentframe()
    for _ in range(level):
        frame = frame.f_back
    return frame

...

def locate_error(level=0):
    """
    Return a string containing the filename, function name and line
    number where this function was called.
    Output is: ('file name' - 'function name' - 'line number')
    """
    fi = get_c_frame(level=level + 2)
    return '({} - {} - {})'.format(__file__,
                                   fi.f_code.co_name,
                                   fi.f_lineno)

How to override built-in getattr in Python?

I know how to override an object's __getattr__() to handle calls to undefined object functions. However, I would like to achieve the same behavior for the builtin getattr() function. For instance, consider code like this:
call_some_undefined_function()
Normally, that simply produces an error:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'call_some_undefined_function' is not defined
I want to override getattr() so that I can intercept the call to "call_some_undefined_function()" and figure out what to do.
Is this possible?
I can only think of a way to do this by calling eval.
# Python 2 example
class Global(dict):
    def undefined(self, *args, **kargs):
        return u'ran undefined'

    def __getitem__(self, key):
        if dict.has_key(self, key):
            return dict.__getitem__(self, key)
        return self.undefined

src = """
def foo():
    return u'ran foo'
print foo()
print callme(1,2)
"""

code = compile(src, '<no file>', 'exec')
globals = Global()
eval(code, globals)
The above outputs
ran foo
ran undefined
You haven't said why you're trying to do this. I had a use case where I wanted to be capable of handling typos that I made during interactive Python sessions, so I put this into my python startup file:
import sys
import re
import sys
import re

def nameErrorHandler(type, value, traceback):
    if not isinstance(value, NameError):
        # Let the normal error handler handle this:
        nameErrorHandler.originalExceptHookFunction(type, value, traceback)
        return
    name = re.search(r"'(\S+)'", str(value)).group(1)
    # At this point we know that there was an attempt to use name
    # which ended up not being defined anywhere.
    # Handle this however you want...

nameErrorHandler.originalExceptHookFunction = sys.excepthook
sys.excepthook = nameErrorHandler
Hopefully this is helpful for anyone in the future who wants to have a special error handler for undefined names... whether this is helpful for the OP or not is unknown since they never actually told us what their intended use-case was.
