Suppose I have a method which can return a value, or can just be called quickly to see whether I even got the return value I expected.
from pprint import pprint
from my_module import get_data
def quicktest():
    # pseudocode here to illustrate what I want
    if isUsedForAssignment:
        return get_data()
    else:
        pprint(get_data())
The idea here is that I'm checking the returned data to ensure the structure is correct; however, if I don't care about that, I'd rather assign the value. This way I can just go into my Python interpreter and type:
import thismodule as thism
thism.quicktest()
…as opposed to some way of doing it where I'm continually importing pprint just to see my data structure correctly.
This is maybe a slightly pedantic example, but it prompted the question in me as to whether or not a method can tell if it's being used to assign a value or just to be called straight-up.
Technically you could inspect the parent frame's bytecode or source code. But this is not only incredibly fragile, hacky, and complicated, it's also a surefire sign that you're doing something wrong™. Just don't do that. Write the method to always simply return the value, and do the printing at the call site. Alternatively, if the printing is nontrivial, write a separate method that does the printing.
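For example, a minimal sketch of that advice, reusing the get_data and quicktest names from the question:

from pprint import pprint
from my_module import get_data

def quicktest():
    # always just return the value
    return get_data()

# at the call site, decide what you want:
data = quicktest()    # assign it...
pprint(quicktest())   # ...or eyeball the structure interactively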
Related
I'd like to make a copy of a class, while updating all of its methods to refer to a new set of __globals__.
I was thinking of something like the code below. However, unlike types.FunctionType, the constructor for types.UnboundMethodType does not accept __globals__; any suggestions for how to work around this?
import types

def copy_class(old_class, new_module):
    """Copies a class, updating __globals__ of all methods to point to new_module"""
    new_dict = {}
    for name, entry in old_class.__dict__.items():
        if isinstance(entry, types.UnboundMethodType):
            # this is the call that fails: UnboundMethodType takes no globals argument
            entry = types.UnboundMethodType(name, None, old_class.__class__, globals=new_module.__dict__)
        new_dict[name] = entry
    return type(old_class.__name__, old_class.__bases__, new_dict)
The __dict__ values are functions, not unbound methods. The unbound method objects only get created on attribute access. If you are seeing unbound method objects in the __dict__, something weird happened with your class object before this function got to it.
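Since the __dict__ entries are plain functions, one workaround is to rebuild each function with types.FunctionType, which does accept a globals argument. A rough sketch of that idea:

import types

def copy_class(old_class, new_module):
    """Copy a class, rebuilding each method's function so that
    its __globals__ points at new_module."""
    new_dict = {}
    for name, entry in old_class.__dict__.items():
        if isinstance(entry, types.FunctionType):
            entry = types.FunctionType(
                entry.__code__, new_module.__dict__, name,
                entry.__defaults__, entry.__closure__)
        new_dict[name] = entry
    return type(old_class.__name__, old_class.__bases__, new_dict)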
I don't know about you, but I generally don't like to use types for anything other than type checking (which I don't do very often ;-). I'd much rather inspect...
I have to preface this code by saying that I hope you have a really good reason for wanting to do this ;-) -- to me, it seems like just subclassing and overriding class properties should get the job done much more elegantly... However, if you really want to copy a class, why not just execute its source again in the new namespace?
I've put together the following simple modules:
# test.py
# Just some test data
FOO = 1

class Bar(object):
    def subclass_method(self):
        print('Hello World!')

class Foo(Bar):
    def method(self):
        return FOO
And then something to do the heavy lifting:
import sys
import inspect

def copy_class(cls, new_globals):
    source = inspect.getsource(cls)
    globs = {}
    globs.update(sys.modules[cls.__module__].__dict__)
    globs.update(new_globals)
    exec(source, globs)
    return globs[cls.__name__]
# Check that it works...
import test

NewFoo = copy_class(test.Foo, {'FOO': 2})
print(NewFoo().method())      # 2 -- the copy sees the new FOO
NewFoo().subclass_method()    # Hello World!
print(test.Foo().method())    # 1 -- the original is untouched
test.Foo().subclass_method()  # Hello World!
This has some possibly desirable properties and some undesirable ones... First, it only works on classes whose source is inspectable. That's pretty much anything user-defined, so it's probably not too restrictive... It also might be a bit slower than other solutions that don't involve re-parsing the source string -- but then, it doesn't seem like this should be executed too frequently, so that's probably OK.
Now the "advantages"...
If a global is requested by a function but not supplied, this will use the global from the old namespace. If this behavior isn't desirable (i.e. you'd rather have the NameError), you can easily modify the function to remove it.
The "copy" doesn't inherit from the original. For most purposes, that probably doesn't matter, but it's a bit weird to have the copy of something inherit from the original ...
Some people might see the exec in here and immediately think "Oh no! exec!?!?! The world is about to end!!!". Frankly, that's a good default response. However, I'd argue that if you're copying a class that you plan to use later in the code, using it is no safer than using exec (after all, the class's code has already been executed).
I have a Python script but I don't want to change it.
I want to use another script to modify the original one and run it with all the "print" and "time.sleep" statements commented out (not run).
I searched for this and found a method using the AST, but I really don't have an idea of how to use it.
Thank you very much!
You might be able to manipulate the AST to achieve that, but it would probably be easier to monkeypatch whatever objects it uses prior to running. In your specific example, to incapacitate print and time.sleep, you could do this:
import sys
import time

def insomniac(duration):
    pass  # don't sleep

_original_sleep = time.sleep
time.sleep = insomniac

def dont_write(stuff):
    pass  # don't write

_original_write = sys.stdout.write
sys.stdout.write = dont_write
To get the functionality back, you can set the relevant functions back to the stored originals. If you want to be truer to your original intention such that calls to these functions from the script in question are nullified but calls from other modules still work, you can inspect the stack to see what module the caller is in and selectively call the original or ignore the call.
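Here is a rough sketch of that selective approach, assuming the script you want to silence runs as a module named target_script (the name is purely hypothetical):

import inspect
import time

_original_sleep = time.sleep

def selective_sleep(duration):
    # look one frame up to find the calling module's name
    caller = inspect.currentframe().f_back
    module = caller.f_globals.get('__name__', '')
    if module == 'target_script':  # hypothetical module name
        return  # swallow the call
    _original_sleep(duration)

time.sleep = selective_sleep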
When writing Python code, I often find myself wanting to get behavior similar to Lisp's defvar. Basically, if some variable doesn't exist, I want to create it and assign a particular value to it. Otherwise, I don't want to do anything, and in particular, I don't want to override the variable's current value.
I looked around online and found this suggestion:
try:
    some_variable
except NameError:
    some_variable = some_expensive_computation()
I've been using it and it works fine. However, to me this has the look of code that's not paradigmatically correct. The code is four lines, instead of the 1 that would be required in Lisp, and it requires exception handling to deal with something that's not "exceptional."
The context is that I'm doing interactive development. I'm executing my Python code file frequently as I improve it, and I don't want to run some_expensive_computation() each time I do so. I could arrange to run some_expensive_computation() by hand every time I start a new Python interpreter, but I'd rather do something automated, particularly so that my code can be run non-interactively. How would a seasoned Python programmer achieve this?
I'm using WinXP with SP3, Python 2.7.5 via Anaconda 1.6.2 (32-bit), and running inside Spyder.
It's generally a bad idea to attach meaning to whether or not a variable exists. Instead, use a sentinel value to indicate that a variable is not set to a meaningful value. None is a common choice for this kind of sentinel, though it may not be appropriate if None is a possible result of your expensive computation.
So, rather than your current code, do something like this:
# early on in the program
some_variable = None

# later:
if some_variable is None:
    some_variable = some_expensive_computation()
# use some_variable here
Or, a version where None could be a significant value:
_sentinel = object()
some_variable = _sentinel  # this means it doesn't have a meaningful value

# later
if some_variable is _sentinel:
    some_variable = some_expensive_computation()
It is hard to tell which is of greater concern to you, specific language features or a persistent session. Since you say:
The context is that I'm doing interactive development. I'm executing my Python code file frequently as I improve it, and I don't want to run some_expensive_computation() each time I do so.
You may find that IPython provides a persistent, interactive environment that is pleasing to you.
Instead of writing Lisp in Python, just think about what you're trying to do. You want to be able to call an expensive function twice without having it run two times. You can write your function to do that:
cache = {}

def f(x):
    if x in cache:
        return cache[x]
    result = ...  # the expensive computation goes here
    cache[x] = result
    return result
Or make use of Python's decorators and just decorate the function with another function that takes care of the caching for you. Python 3.2+ comes with functools.lru_cache, which does just that:
import functools

@functools.lru_cache()
def f(x):
    return ...
There are quite a few memoization libraries on PyPI for Python 2.7.
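If you'd rather not pull in a dependency, a minimal memoizing decorator (hashable positional arguments only) takes just a few lines; this is a sketch, not any particular library's API:

import functools

def memoize(func):
    """Cache results keyed on the (hashable) positional arguments."""
    cache = {}
    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:
            cache[args] = func(*args)
        return cache[args]
    return wrapper

@memoize
def f(x):
    return x ** 2  # stand-in for the expensive computation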
For the use case you give, guarding with a try ... except seems like a good way to go about it: your code depends on leftover variables from a previous execution of your script.
But I agree that it's not a nice implementation of the concept "here's a default value, use it unless the variable is already set". Python does not directly support this for variables, but it does have a default-setter for dictionary keys:
myvalues = dict()
myvalues.setdefault("some_variable", 42)
print myvalues["some_variable"]  # prints 42
The first argument of setdefault is the key: here, a string containing the name of the variable to be defined.
If you had a complicated system of settings and defaults (like emacs does), you'd probably keep the system settings in their own dictionary, so this is all you need. In your case, you could also use setdefault directly on global variables (only), with the help of the built-in function globals() which returns a modifiable dictionary:
globals().setdefault("some_variable", 42)
But I would recommend using a dictionary for your persistent variables (you can use the try... except method to create it conditionally). It keeps things clean and it seems more... pythonic somehow.
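A sketch of that combination, using a hypothetical _state dictionary as the single persistent container:

# create the container only the first time the file is executed
try:
    _state
except NameError:
    _state = {}

# expensive values are computed at most once per interpreter session
if "some_variable" not in _state:
    _state["some_variable"] = some_expensive_computation()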
Let me try to summarize what I've learned here:
Using exception handling for flow control is fine in Python. I could do it once to set up a dict in which I can store whatever I want.
There are libraries and language features that are designed for some form of persistence; these can provide "high road" solutions for some applications. The shelve module is an obvious candidate here, but I would construe "some form of persistence" broadly enough to include @Blender's suggestion to use memoization.
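For completeness, a minimal sketch of the shelve route (the filename is arbitrary):

import shelve

db = shelve.open('session_cache')  # arbitrary filename
if 'some_variable' not in db:
    db['some_variable'] = some_expensive_computation()
some_variable = db['some_variable']
db.close()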
Question: Is there a way to make a function object in python using strings?
Info: I'm working on a project in which I store data in a sqlite3 backend. Nothing too crazy about that. A DAL class is very commonly done through code generation because the code is so incredibly mundane. But that gave me an idea. In Python, when an attribute is not found, if you define the function __getattr__ it will be called before an error is raised. So the way I figure it, through a parser and a logic tree I could dynamically generate the code I need on its first call, then save the function object as a local attribute. For example:
DAL.getAll()
# getAll() not found, call __getattr__
DAL.__getattr__(self, attrib)  # in this case attrib = 'getAll'
## parser logic magic takes place here and I end up with a string for a new function
## convert string to function
DAL.getAll = newFunc
return newFunc
I've tried the compile function, but exec and eval are far from satisfactory in terms of being able to accomplish this kind of feat. I need something that will allow a function of multiple lines. Is there another way to do this, besides those two, that doesn't involve writing it to disk? Again, I'm trying to make a function object dynamically.
P.S.: Yes, I know this has horrible security and stability problems. Yes, I know this is a horribly inefficient way of doing this. Do I care? No. This is a proof of concept. "Can Python do this? Can it dynamically create a function object?" is what I want to know, not some superior alternative. (Though feel free to tack on superior alternatives after you've answered the question at hand.)
The following puts the symbols that you define in your string in the dictionary d:
d = {}
exec("def f(x): return x", d)
Now d['f'] is a function object. If you want to use variables from your program in the code in your string, you can send this via d:
d = {'a': 7}
exec("def f(x): return x + a", d)
Now d['f'] is a function object that is dynamically bound to d['a']. When you change d['a'], you change the output of d['f'](x).
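A quick demonstration of that binding:

d = {'a': 7}
exec("def f(x): return x + a", d)
print(d['f'](3))  # 10
d['a'] = 8
print(d['f'](3))  # 11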
can't you do something like this?
>>> def func_builder(name):
...     def f():
...         # multiline code here, using name, and using the logic you have
...         return name
...     return f
...
>>> func_builder("ciao")()
'ciao'
basically, assemble a real function instead of assembling a string and then trying to compile that into a function.
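Applied to the DAL example from the question, the closure approach might look like this sketch, where _build_query_func is a hypothetical stand-in for the parser/logic-tree step:

class DAL(object):
    def _build_query_func(self, attrib):
        # hypothetical: turn the attribute name into a real function
        def query():
            return "running generated query for %s" % attrib
        return query

    def __getattr__(self, attrib):
        # only called when normal attribute lookup fails
        func = self._build_query_func(attrib)
        setattr(self, attrib, func)  # cache it so __getattr__ isn't hit again
        return func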
If it is simply proof of concept, then eval and exec are fine; you can also do this with pickle strings, YAML strings, and anything else you decide to write a constructor for.
Customizing pprint.PrettyPrinter
The documentation for the pprint module mentions that the method PrettyPrinter.format is intended to make it possible to customize formatting.
I gather that it's possible to override this method in a subclass, but this doesn't seem to provide a way to have the base class methods apply line wrapping and indentation.
Am I missing something here?
Is there a better way to do this (e.g. another module)?
Alternatives?
I've checked out the pretty module, which looks interesting, but doesn't seem to provide a way to customize formatting of classes from other modules without modifying those modules.
I think what I'm looking for is something that would allow me to provide a mapping from types (or maybe functions that identify types) to routines that process a node. The routines that process a node would take a node and return the string representation of it, along with a list of child nodes. And so on.
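To make that concrete, here is a rough sketch of the kind of interface I have in mind (all the names are just illustrative):

# map a type test to a handler; each handler returns (representation, children)
handlers = [
    (lambda n: isinstance(n, dict),
     lambda n: (type(n).__name__, list(n.values()))),
    (lambda n: isinstance(n, (list, tuple)),
     lambda n: (type(n).__name__, list(n))),
]

def render(node, depth=0):
    for test, handler in handlers:
        if test(node):
            label, children = handler(node)
            print('  ' * depth + label)
            for child in children:
                render(child, depth + 1)
            return
    print('  ' * depth + repr(node))  # fallback for leaf nodes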
Why I’m looking into pretty-printing
My end goal is to compactly print custom-formatted sections of a DocBook-formatted xml.etree.ElementTree.
(I was surprised to not find more Python support for DocBook. Maybe I missed something there.)
I built some basic functionality into a client called xmlearn that uses lxml. For example, to dump a Docbook file, you could:
xmlearn -i docbook_file.xml dump -f docbook -r book
It's pretty half-ass, but it got me the info I was looking for.
xmlearn has other features too, like the ability to build a graph image and do dumps showing the relationships between tags in an XML document. These are pretty much totally unrelated to this question.
You can also perform a dump to an arbitrary depth, or specify an XPath as a set of starting points. The XPath stuff sort of obsoleted the docbook-specific format, so that isn't really well-developed.
This still isn't really an answer for the question. I'm still hoping that there's a readily customizable pretty printer out there somewhere.
My solution was to replace pprint.PrettyPrinter with a simple wrapper that formats any floats it finds before calling the original printer.
from __future__ import division
import pprint

if not hasattr(pprint, 'old_printer'):
    pprint.old_printer = pprint.PrettyPrinter

class MyPrettyPrinter(pprint.old_printer):
    def _format(self, obj, *args, **kwargs):
        if isinstance(obj, float):
            obj = round(obj, 4)
        return pprint.old_printer._format(self, obj, *args, **kwargs)

pprint.PrettyPrinter = MyPrettyPrinter

def pp(obj):
    pprint.pprint(obj)

if __name__ == '__main__':
    x = [1, 2, 4, 6, 457, 3, 8, 3, 4]
    x = [_/17 for _ in x]
    pp(x)
This question may be a duplicate of:
Any way to properly pretty-print ordered dictionaries in Python?
Using pprint.PrettyPrinter
I looked through the source of pprint. It seems to suggest that, in order to enhance pprint(), you'd need to:
1. subclass PrettyPrinter
2. override _format()
3. test for issubclass()
4. and (if it's not your class) pass back to _format()
Alternative
I think a better approach would be just to have your own pprint(), which defers to pprint.pformat when it doesn't know what's up.
For example:
'''Extending pprint'''

from pprint import pformat

class CrazyClass: pass

def prettyformat(obj):
    if isinstance(obj, CrazyClass):
        return "^CrazyFoSho^"
    else:
        return pformat(obj)

def prettyp(obj):
    print(prettyformat(obj))

# test
prettyp([1]*100)
prettyp(CrazyClass())
The big upside here is that you don't depend on pprint internals. It’s explicit and concise.
The downside is that you’ll have to take care of indentation manually.
If you would like to modify the default pretty printer without subclassing, you can use the internal _dispatch table on the pprint.PrettyPrinter class. You can see examples of how dispatching is added for internal types like dictionaries and lists in the source.
Here is how I added a custom pretty printer for MatchPy's Operation type:
import pprint
import matchpy
def _pprint_operation(self, object, stream, indent, allowance, context, level):
    """
    Modified from pprint dict https://github.com/python/cpython/blob/3.7/Lib/pprint.py#L194
    """
    operands = object.operands
    if not operands:
        stream.write(repr(object))
        return
    cls = object.__class__
    stream.write(cls.__name__ + "(")
    self._format_items(
        operands, stream, indent + len(cls.__name__), allowance + 1, context, level
    )
    stream.write(")")

pprint.PrettyPrinter._dispatch[matchpy.Operation.__repr__] = _pprint_operation
Now if I use pprint.pprint on any object that has the same __repr__ as matchpy.Operation, it will use this method to pretty print it. This works on subclasses as well, as long as they don't override the __repr__, which makes some sense! If you have the same __repr__ you have the same pretty printing behavior.
Here is an example of the pretty printing some MatchPy operations now:
ReshapeVector(Vector(Scalar('1')),
              Vector(Index(Vector(Scalar('0')),
                           If(Scalar('True'),
                              Scalar("ReshapeVector(Vector(Scalar('2'), Scalar('2')), Iota(Scalar('10')))"),
                              Scalar("ReshapeVector(Vector(Scalar('2'), Scalar('2')), Ravel(Iota(Scalar('10'))))")))))
Consider using the pretty module:
http://pypi.python.org/pypi/pretty/0.1