Pass Undefined Argument To Python Function [UX Driven] - python

I would like to have a plotting interface (I do a lot of plotting) where a user can pass in an undefined variable.
Desired Interface
plot(ax, time, n1)  # currently raises a NameError
Current Interface
plot(ax,'time','n1')
I understand that this is likely a tall order, but I'm curious if the geniuses of Stack Overflow can find a way to do this. So far I've tried a decorator, but that's not working because the error isn't happening in the function, it's happening in the call to the function. Nevertheless, I'm still interested in a solution... even if it's cumbersome.
Current Code
def handleUndefined(function):
    try:
        return function
    except NameError as ne:
        print ne
    except Exception as e:
        print e

@handleUndefined
def plot(self, **args):
    axesList = filter(lambda arg: isinstance(arg, p.Axes), args.keys())
    parmList = filter(lambda arg: arg in self.parms, args.keys())
    print axesList
    print parmList

fig, ax = p.subplots()
plot(ax, time, n1)
I'm designing a plotting interface where people might make 20 plots a minute, so it's important to require as little syntax as possible here.

It's considered evil (or at least bad practice) to use exec, but that's the only way I could come up with to set up a variable dynamically from the value of a string that is unknown before runtime:
strg = 'time' # suppose this value is received from the user via standard input
exec(strg + " = '" + strg + "'")
print time # now we have a variable called 'time' that holds the value of the string "time"
Using this technique, you can define variables that will hold "their own name" dynamically.

So I had given up on finding a solution to this but, lo and behold, I found one. It wasn't obvious, but we can rely on Python's magic methods to link these variables into the module-level __all__ list, which is Python's first stop in finding a variable.
I found a recipe in which you can append something to __all__ by using an @public decorator:
http://code.activestate.com/recipes/576993-public-decorator-adds-an-item-to-all/
From there, the solution was something like this:
@public
class globalVariable(str):
    _name = None
    def __init__(self, stringInput):
        self._name = stringInput
        self.__name__ = self._name
    def __repr__(self):
        return self._name

# Hopefully there's a strong correlation
trees = globalVariable('trees')
forest = globalVariable('forest')

# Booya, lunchtime
plot(trees, forest)
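The recipe at that link isn't reproduced above; as a rough sketch (my own approximation, not the recipe's actual code), a public decorator can append the decorated object's name to the defining module's __all__ list:

import sys

def public(obj):
    # Sketch of an @public decorator: register the object's name in the
    # module-level __all__ list (creating the list if it doesn't exist).
    module = sys.modules[obj.__module__]
    if hasattr(module, '__all__'):
        if obj.__name__ not in module.__all__:
            module.__all__.append(obj.__name__)
    else:
        module.__all__ = [obj.__name__]
    return obj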

Related

Linking command line argument parsing with object initialization

I have a class which has around a dozen object variables. Along with each variable, I want to provide a default value, a help string (for argparse) and a comment string (to write to a data file). I want to be able to instantiate objects by:
providing explicit values to the __init__ method
providing values to use on the command line
taking the defaults
some combination of the above.
When I only had two object variables, I provided the defaults in the declaration of the __init__ function, replicated them and the help strings when I created the argument parser, and so on. But with many variables it gets very messy.
The trimmed-down example code below is my current solution; I am fairly happy with it, it does exactly what I want, and it works pretty well. But I have two questions:
Is it pythonic?
Surely this must be a solved problem already, and there is a "standard" way to do it?
I did look around here, and Googled a bit, but I didn't manage to find an existing solution.
# invoke with python demoArgs.py -a 15 -b 25 -c text
import argparse

class Foo:
    defaults = {'a': 10, 'b': 20, 'c': "outFile"}
    helpDefs = {'a': 'the first parameter',
                'b': 'the second parameter',
                'c': 'the third parameter'}

    @staticmethod
    def parse_args():
        parser = argparse.ArgumentParser()
        for key in Foo.defaults:
            parser.add_argument('-' + key, help=Foo.helpDefs[key],
                                default=Foo.defaults[key])
        return vars(parser.parse_args())

    def __init__(self, a=defaults['a'], b=defaults['b'], c=defaults['c']):
        self.a = a
        self.b = b
        self.c = c

    def report(self):
        for key in sorted(vars(self)):
            print key, "val = ", getattr(self, key), \
                ", help = ", self.helpDefs[key], \
                ", def = ", self.defaults[key]

def main():
    print "\n an object using all of the defaults"
    a = Foo()
    a.report()
    print "\n an object using the command line values"
    args = Foo.parse_args()
    b = Foo(**args)
    b.report()
    print "\n an object using values specified in the code"
    c = Foo(30, 40, "object")
    c.report()
    print "\n an object using a perverse combination"
    args = Foo.parse_args()
    d = Foo(50, c=args['c'])
    d.report()

if __name__ == '__main__':
    main()
As for whether it's "pythonic" -- I'd say no. argparse is already pretty powerful. Someone who wanted to use your library would have to carefully understand how it worked and how it wrapped something else (which would also require understanding the thing it wrapped). It's arguable whether the effort would be worth it. In the short run you're probably better off just using argparse, rather than trying to simplify something which already does the job (IMHO).
As for question #2 -- I seriously doubt anyone else has tried to do it. Most people will usually just stick with the stdlib. It's simpler and is available on all platforms where Python can run.
Of course neither of these answers should stop you from doing what you like.
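For what it's worth, a minimal sketch of the plain-argparse approach recommended above might look like this (my own illustration, reusing the question's option names; nothing here comes from the original posts):

import argparse

class Foo(object):
    def __init__(self, a=10, b=20, c="outFile"):
        self.a, self.b, self.c = a, b, c

def build_parser():
    # Defaults and help strings live in one place: the parser itself.
    parser = argparse.ArgumentParser()
    parser.add_argument('-a', type=int, default=10, help='the first parameter')
    parser.add_argument('-b', type=int, default=20, help='the second parameter')
    parser.add_argument('-c', default="outFile", help='the third parameter')
    return parser

if __name__ == '__main__':
    args = build_parser().parse_args()
    foo = Foo(**vars(args))  # command-line values (or defaults) -> object
    print vars(foo)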

Python string interpolation implementation

[EDIT 00]: I've edited the post several times, and now even the title; please read below.
I just learned about the string format method and its use with dictionaries, like the ones provided by vars(), locals() and globals(). Example:
name = 'Ismael'
print 'My name is {name}.'.format(**vars())
But I want to do:
name = 'Ismael'
print 'My name is {name}.' # Similar to ruby
So I came up with this:
def mprint(string='', dictionary=globals()):
    print string.format(**dictionary)
You can interact with the code here:
http://labs.codecademy.com/BA0B/3#:workspace
Finally, what I would love to do is to have the function in another file, named my_print.py, so I could do:
from my_print import mprint
name= 'Ismael'
mprint('Hello! My name is {name}.')
But as it is right now, there is a problem with the scopes: how could I get the main module's namespace as a dictionary from inside the imported mprint function (not the one from my_print.py)?
I hope I made myself understood; if not, try importing the function from another module (the traceback is in the link).
It's accessing the globals() dict from my_print.py, but of course the variable name is not defined in that scope. Any ideas of how to accomplish this?
The function works if it's defined in the same module, but notice how I must use globals(), because otherwise I would only get a dictionary with the values within mprint's scope.
I have tried using nonlocal and dot notation to access the main module variables, but I still can't figure it out.
[EDIT 01]: I think I've figured out a solution:
In my_print.py:
def mprint(string='', dictionary=None):
    if dictionary is None:
        import sys
        caller = sys._getframe(1)
        dictionary = caller.f_locals
    print string.format(**dictionary)
In test.py:
from my_print import mprint
name = 'Ismael'
country = 'Mexico'
languages = ['English', 'Spanish']
mprint("Hello! My name is {name}, I'm from {country}\n"
"and I can speak {languages[1]} and {languages[0]}.")
It prints:
Hello! My name is Ismael, I'm from Mexico
and I can speak Spanish and English.
What do you think guys? That was a difficult one for me!
I like it, much more readable for me.
[EDIT 02]: I've made a module with an interpolate function, an Interpolate class, and an attempt at an interpolate class method analogous to the function.
It has a small test suite and it's documented!
I'm stuck on the method implementation; I don't get it.
Here's the code: http://pastebin.com/N2WubRSB
What do you think guys?
[EDIT 03]: Ok I have settled with just the interpolate() function for now.
In string_interpolation.py:
import sys

def get_scope(scope):
    scope = scope.lower()
    caller = sys._getframe(2)
    options = ['l', 'local', 'g', 'global']
    if scope not in options[:2]:
        if scope in options[2:]:
            return caller.f_globals
        else:
            raise ValueError('invalid mode: {0}'.format(scope))
    return caller.f_locals

def interpolate(format_string=str(), sequence=None, scope='local', returns=False):
    if type(sequence) is str:
        scope = sequence
        sequence = get_scope(scope)
    else:
        if not sequence:
            sequence = get_scope(scope)
    format = 'format_string.format(**sequence)'
    if returns is False:
        print eval(format)
    elif returns is True:
        return eval(format)
Thanks again guys! Any opinions?
[EDIT 04]:
This is my last version, it has a test, docstrings and describes some limitations I've found:
http://pastebin.com/ssqbbs57
You can quickly test the code here:
http://labs.codecademy.com/BBMF#:workspace
And clone grom git repo here:
https://github.com/Ismael-VC/python_string_interpolation.git
Modules don't share namespaces in Python, so globals() for my_print is always going to be the globals() of the my_print.py file, i.e. the location where the function was actually defined.
def mprint(string='', dic=None):
    dictionary = dic if dic is not None else globals()
    print string.format(**dictionary)
You should pass the current module's globals() explicitly to make it work.
And don't use mutable objects as default values in Python functions; it can lead to unexpected results. Use None as the default value instead.
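To illustrate the pitfall being warned about here (this example is mine, not from the original answer): a mutable default is created once, when the function is defined, and is then shared between calls.

def remember(key, value, cache={}):  # the default dict is built exactly once
    cache[key] = value
    return cache

print remember('a', 1)  # {'a': 1}
print remember('b', 2)  # the same dict again -- it still holds 'a' as well as 'b'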
A simple example for understanding scopes in modules:
file : my_print.py
x = 10

def func():
    global x
    x += 1
    print x
file : main.py
from my_print import *

x = 50
func()   # prints 11 because for func() the global scope is still
         # the global scope of the my_print file
print x  # prints 50
Part of your problem - well, the reason it's not working - is highlighted in this question.
You can make your function work by passing in globals() as your second argument: mprint('Hello my name is {name}', globals()).
Although it may be convenient in Ruby, I would encourage you not to write Ruby in Python if you want to make the most out of the language.
Language Design Is Not Just Solving Puzzles: ;)
http://www.artima.com/forums/flat.jsp?forum=106&thread=147358
Edit: PEP-0498 solves this issue!
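(PEP 498 is the proposal for literal string interpolation, i.e. f-strings, available from Python 3.6 onward; a quick sketch of how it addresses the original example:)

# Python 3.6+ only
name = 'Ismael'
print(f'Hello! My name is {name}.')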
The Template class from the string module also does what I need (though it's more similar to the string format method). In the end it has the readability I seek and the recommended explicitness, it's in the standard library, and it can easily be customized and extended.
http://docs.python.org/2/library/string.html?highlight=template#string.Template
from string import Template
name = 'Renata'
place = 'hospital'
job = 'Dr.'
how = 'glad'
header = '\nTo Ms. {name}:'
letter = Template("""
Hello Ms. $name.
I'm glad to inform you that you've been
accepted in our $place, and $job Red
will ${how}ly receive you tomorrow morning.
""")
print header.format(**vars())
print letter.substitute(vars())
The funny thing is that now I'm getting more fond of using {} instead of $ and I still like the string_interpolation module I came up with, because it's less typing than either one in the long run. LOL!
Run the code here:
http://labs.codecademy.com/BE3n/3#:workspace

Way in Python to make vars visible in calling method scope?

I find myself doing something like this constantly to pull GET args into vars:
some_var = self.request.get('some_var', None)
other_var = self.request.get('other_var', None)
if None in [some_var, other_var]:
    logging.error("some arg was missing in " + self.request.path)
    exit()
What I would really want to do is:
pull_args('some_var', 'other_var')
And that would somehow pull these variables so they're available in the current scope, or log an error and exit if not (or return to the calling method if possible). Is this possible in Python?
First, a disclaimer: "pulling" variables into the local scope in any way other than var = something is really really really not recommended. It tends to make your code really confusing for someone who isn't intimately familiar with what you're doing (i.e. anyone who isn't you, or who is you 6 months in the future, etc.)
That being said, for educational purposes only, there is a way. Your pull_args function could be implemented like this:
import inspect
import logging

def pull_args(request, *args):
    pulled = {}
    try:
        for a in args:
            pulled[a] = request[a]
    except AttributeError:
        logging.error("some arg was missing in " + request.path)
        exit()
    else:
        caller = inspect.stack()[1][0]
        caller.f_locals.update(pulled)
At least, something to that effect worked when I came up with it, probably about a year ago. I wouldn't necessarily count on it continuing to work in future Python versions (yet another reason not to do it). I personally have never found a good reason to use this code snippet.
No, it's not, and it would also be pointless. Writing to outer namespaces completely defeats the purpose of namespaces, which is to have around only the things that you explicitly set. Use lists!
def pull_args(*names):
    return [self.request.get(name, None) for name in names]

print None in pull_args('some_var', 'other_var')
Probably this works too, to check whether all of the vars are set:
print all(name in self.request for name in ('some_var', 'other_var'))

Call python function as if it were inline

I want to have a function in a different module that, when called, has access to all variables its caller has access to, and that functions just as if its body had been pasted into the caller rather than having its own context, basically like a C macro instead of a normal function. I know I can pass locals() into the function and then it can access the local variables as a dict, but I want to be able to access them normally (e.g. x.y, not x["y"]), and I want all the names the caller has access to, not just the locals, as well as things that were imported into the caller's file but not into the module that contains the function.
Is this possible to pull off?
Edit 2: Here's the simplest possible example I can come up with of what I'm really trying to do:
def getObj(expression):
    ofs = expression.rfind(".")
    obj = eval(expression[:ofs])
    print "The part of the expression Left of the period is of type ", type(obj),
The problem is that 'expression' requires the imports and local variables of the caller in order to eval without error. In reality there's a lot more than just an eval, so I'm trying to avoid the solution of just passing locals() in and through to eval(), since that won't fix my general-case problem.
And another, even uglier way to do it -- please don't do this, even if it's possible --
import sys

def insp():
    l = sys._getframe(1).f_locals
    expression = l["expression"]
    ofs = expression.rfind(".")
    expofs = expression[:ofs]
    obj = eval(expofs, globals(), l)
    print "The part of the expression %r Left of the period (%r) is of type %r" % (expression, expofs, type(obj)),

def foo():
    derp = 5
    expression = "derp.durr"
    insp()

foo()
outputs
The part of the expression 'derp.durr' Left of the period ('derp') is of type <type 'int'>
I don't presume this is the answer that you wanted to hear, but trying to access local variables from a caller module's scope is not a good idea. If you normally program in PHP or C, you might be used to this sort of thing?
If you still want to do this, you might consider creating a class and passing an instance of that class in place of locals():
# other_module.py
def some_func(lcls):
    print(lcls.x)
Then,
>>> import other_module
>>>
>>>
>>> x = 'Hello World'
>>>
>>> class MyLocals(object):
... def __init__(self, lcls):
... self.lcls = lcls
... def __getattr__(self, name):
... return self.lcls[name]
...
>>> # Call your function with an instance of this instead.
>>> other_module.some_func(MyLocals(locals()))
'Hello World'
Give it a whirl.
Is this possible to pull off?
Yes (sort of, in a very roundabout way), but I would strongly advise against it in general (more on that later).
Consider:
myfile.py
def func_in_caller():
    print "in caller"
    import otherfile
    globals()["imported_func"] = otherfile.remote_func
    imported_func(123, globals())
otherfile.py
def remote_func(x1, extra):
    for k, v in extra.iteritems():
        globals()[k] = v
    print x1
    func_in_caller()
This yields (as expected):
123
in caller
What we're doing here is trickery: we just copy every item into another namespace in order to make this work. This can (and will) break very easily and/or lead to hard to find bugs.
There's almost certainly a better way of solving your problem / structuring your code (we need more information in general on what you're trying to achieve).
From The Zen of Python:
2) Explicit is better than implicit.
In other words, pass in the parameter and don't try to get really fancy just because you think it would be easier for you. Writing code is not just about you.
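As a concrete illustration of that advice (my own sketch, not code from any of the answers above): the eval-based helper from the question can take the caller's namespace as an explicit parameter instead of digging it out of the call stack.

def get_obj(expression, namespace):
    # Hypothetical helper: evaluate the part of `expression` left of the
    # last period in the namespace the caller chose to hand over.
    ofs = expression.rfind(".")
    return eval(expression[:ofs], namespace)

def caller():
    derp = 5
    # the caller explicitly passes everything it can see
    namespace = dict(globals(), **locals())
    print type(get_obj("derp.durr", namespace))  # <type 'int'>

caller()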

Basic Python: Exception raising and local variable scope / binding

I have a basic "best practices" Python question. I see that there are already StackOverflow answers tangentially related to this question but they're mired in complicated examples or involve multiple factors.
Given this code:
#!/usr/bin/python
def test_function():
    try:
        a = str(5)
        raise
        b = str(6)
    except:
        print b

test_function()
what is the best way to avoid the inevitable "UnboundLocalError: local variable 'b' referenced before assignment" that I'm going to get in the exception handler?
Does python have an elegant way to handle this? If not, what about an inelegant way? In a complicated function I'd prefer to avoid testing the existence of every local variable before I, for example, printed debug information about them.
Does python have an elegant way to handle this?
To avoid exceptions from printing unbound names, the most elegant way is not to print them; the second most elegant is to ensure the names do get bound, e.g. by binding them at the start of the function (the placeholder None is popular for this purpose).
If not, what about an inelegant way?
try: print 'b is', b
except NameError: print 'b is not bound'
In a complicated function I'd prefer to avoid testing the existence of every local variable before I, for example, printed debug information about them
Keeping your functions simple (i.e., not complicated) is highly recommended, too. As Hoare wrote 30 years ago (in his Turing Award acceptance lecture "The Emperor's Old Clothes", reprinted e.g. in this PDF):
There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.
Achieving and maintaining simplicity is indeed difficult: given that you have to implement a certain total functionality X, it's the most natural temptation in the world to do so via complicated accretion into a few complicated classes and functions of sundry bits and pieces, "clever" hacks, copy-and-paste-and-edit-a-bit episodes of "drive-by coding", etc, etc.
However, it's a worthwhile effort to strive instead to keep your functions "so simple that there are obviously no deficiencies". If a function's hard to completely unit-test, it's too complicated: break it up (i.e., refactor it) into its natural components, even though it will take work to unearth them. (That's actually one of the ways in which a strong focus on unit testing helps code quality: by spurring you relentlessly to keep all the code perfectly testable, it's at the same time spurring you to make it simple in its structure.)
You can initialize your variables outside of the try block
a = None
b = None
try:
    a = str(5)
    raise
    b = str(6)
except:
    print b
You could check to see if the variable is defined in local scope using the built-in method locals()
http://docs.python.org/library/functions.html#locals
#!/usr/bin/python
def test_function():
    try:
        a = str(5)
        raise
        b = str(6)
    except:
        if 'b' in locals(): print b

test_function()
def test_function():
    try:
        a = str(5)
        raise
        b = str(6)
    except:
        print b
b = str(6) is never run; the program exits the try block just after raise. If you want to print some variable in the except block, evaluate it before raising the exception and put it into the exception you throw.
class MyException(Exception):
    def __init__(self, var):
        self.var = var

def test_function():
    try:
        a = str(5)
        b = str(6)
        raise MyException(b)
    except MyException, e:
        print e.var
