Say I have some code that creates several variables:
# Some code
# Beginning of the block to memoize
a = foo()
b = bar()
...
c = stuff()
# End of the block to memoize
# ... some more code
I would like to memoize the entire block above without having to be explicit about every variable created/changed in the block or pickle them manually. How can I do this in Python?
Ideally I would like to be able to wrap it with something (if/else or with statement) and have a flag that forces a refresh if I want.
Conceptually speaking, it would be like:
# Some code
# Flag that I can set from outside to save or force a reset of the cache
refresh_cache = True
if refresh_cache == False:
    load_cache_of_block()
else:
    # Beginning of the block to memoize
    a = foo()
    b = bar()
    ...
    c = stuff()
    # End of the block to memoize
    save_cache_of_block()
# ... some more code
Is there any way to do this without having to explicitly pickle each variable defined or changed in the code? (i.e. at the end of the first run we save, and we later just reuse the values)
How about using locals() to get a list of the local variables, storing them in a dict, pickling it, and then using something like the following (more conceptual than literal):
for k, v in vars_from_pickle:
    run_string = '%s=%s' % (k, v)
    exec(run_string)
to restore your local stack. Maybe it's better to use a list instead of a dict to preserve stack ordering.
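Building on that idea, here is a minimal sketch of what the whole cached block could look like with a plain pickle file (the cache path, the use of globals() for a module-level block, and foo/bar/stuff are assumptions taken from the question, not a tested recipe):
import os
import pickle

CACHE_FILE = 'block_cache.pkl'   # hypothetical cache location
refresh_cache = False            # flip to True to force re-running the block

if not refresh_cache and os.path.exists(CACHE_FILE):
    # Restore the previously saved names into the current (module-level) namespace
    with open(CACHE_FILE, 'rb') as fh:
        globals().update(pickle.load(fh))
else:
    before = set(globals())          # names that existed before the block
    # --- beginning of the block to memoize ---
    a = foo()                        # foo/bar/stuff assumed to be defined elsewhere
    b = bar()
    c = stuff()
    # --- end of the block to memoize ---
    new_names = set(globals()) - before - {'before'}
    # Everything created by the block must be picklable for this to work
    with open(CACHE_FILE, 'wb') as fh:
        pickle.dump({k: globals()[k] for k in new_names}, fh)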
There are a lot of ways to go about this, but I think the way that's closest to what you're describing would be to use Python's module scope as your memoized scope and import or reload it as needed. Something like this:
# a.py
import b

print b.a, b.b
b.func(5)
b.b = 'world'
print b.a, b.b
if b.needs_refresh():
    reload(b)
    print b.a, b.b
With your "variable scope" being the module b:
# b.py
a = 0
b = 'hello'

def func(i):
    global a
    a += i

def needs_refresh():
    return a >= 5
Executing this results in what you'd expect:
0 hello
5 world
0 hello
Edit: to allow you to copy and save the entire scope, you could just use a class scope:
memo_stack = list()

class MemoScope(object):
    def __init__(self):
        self.a = 0
        self.b = 'hello'

memo = MemoScope()
memo.a = 2
memo.b = 3
memo_stack.append(memo)
memo_stack.append(MemoScope())

for i, o in enumerate(memo_stack):
    print "memo t%i.a = %s" % (i, o.a)
    print "memo t%i.b = %s" % (i, o.b)
    if o.a == 2:
        memo_stack[i] = MemoScope()
        print "refreshed"
# memo t0.a = 2
# memo t0.b = 3
# refreshed
# memo t1.a = 0
# memo t1.b = hello
Is it possible to get the original variable name of a variable passed to a function? E.g.
foobar = "foo"
def func(var):
    print var.origname
So that:
func(foobar)
Returns:
>>foobar
EDIT:
All I was trying to do was make a function like:
def log(soup):
    f = open(varname + '.html', 'w')
    print >>f, soup.prettify()
    f.close()
.. and have the function generate the filename from the name of the variable passed to it.
I suppose if it's not possible I'll just have to pass the variable and the variable's name as a string each time.
EDIT: To make it clear, I don't recommend using this AT ALL, it will break, it's a mess, it won't help you in any way, but it's doable for entertainment/education purposes.
You can hack around with the inspect module, I don't recommend that, but you can do it...
import inspect

def foo(a, f, b):
    frame = inspect.currentframe()
    frame = inspect.getouterframes(frame)[1]
    string = inspect.getframeinfo(frame[0]).code_context[0].strip()
    args = string[string.find('(') + 1:-1].split(',')

    names = []
    for i in args:
        if i.find('=') != -1:
            names.append(i.split('=')[1].strip())
        else:
            names.append(i)

    print names

def main():
    e = 1
    c = 2
    foo(e, 1000, b = c)

main()
Output:
['e', '1000', 'c']
To add to Michael Mrozek's answer, you can extract the exact parameters rather than the full calling line with:
import re
import traceback
def func(var):
    stack = traceback.extract_stack()
    filename, lineno, function_name, code = stack[-2]
    vars_name = re.compile(r'\((.*?)\).*$').search(code).groups()[0]
    print vars_name
    return
foobar = "foo"
func(foobar)
# PRINTS: foobar
Looks like Ivo beat me to inspect, but here's another implementation:
import inspect
def varName(var):
    lcls = inspect.stack()[2][0].f_locals
    for name in lcls:
        if id(var) == id(lcls[name]):
            return name
    return None

def foo(x=None):
    lcl = 'not me'
    return varName(x)

def bar():
    lcl = 'hi'
    return foo(lcl)
bar()
# 'lcl'
Of course, it can be fooled:
def baz():
    lcl = 'hi'
    x = 'hi'
    return foo(lcl)
baz()
# 'x'
Moral: don't do it.
Another way you can try if you know what the calling code will look like is to use traceback:
import traceback

def func(var):
    stack = traceback.extract_stack()
    filename, lineno, function_name, code = stack[-2]
code will contain the line of code that was used to call func (in your example, it would be the string func(foobar)). You can parse that to pull out the argument.
You can't. It's evaluated before being passed to the function. All you can do is pass it as a string.
@Ivo Wetzel's answer works when the function call is made on one line, like
e = 1 + 7
c = 3
foo(e, 100, b=c)
If the function call is not on one line, like:
e = 1 + 7
c = 3
foo(e,
    1000,
    b = c)
the code below works:
import inspect, ast

def foo(a, f, b):
    frame = inspect.currentframe()
    frame = inspect.getouterframes(frame)[1]
    string = inspect.findsource(frame[0])[0]
    nodes = ast.parse(''.join(string))

    i_expr = -1
    for (i, node) in enumerate(nodes.body):
        # Here goes the name of the function:
        if (hasattr(node, 'value') and isinstance(node.value, ast.Call)
                and hasattr(node.value.func, 'id') and node.value.func.id == 'foo'):
            i_expr = i
            break

    i_expr_next = min(i_expr + 1, len(nodes.body) - 1)
    lineno_start = nodes.body[i_expr].lineno
    lineno_end = nodes.body[i_expr_next].lineno if i_expr_next != i_expr else len(string)

    str_func_call = ''.join([i.strip() for i in string[lineno_start - 1: lineno_end]])
    params = str_func_call[str_func_call.find('(') + 1:-1].split(',')

    print(params)
You will get:
[u'e', u'1000', u'b = c']
But still, this might break.
You can use the python-varname package:
from varname import nameof
s = 'Hey!'
print(nameof(s))
Output:
s
Package below:
https://github.com/pwwang/python-varname
For posterity, here's some code I wrote for this task. In general, I think Python is missing a module that gives everyone nice and robust inspection of the caller environment, similar to what the rlang eval framework provides for R.
import re, inspect, ast

# Convoluted frame stack walk and source scrape to get what the calling statement to a function looked like.
# Specifically return the name of the variable passed as parameter found at position pos in the parameter list.
def _caller_param_name(pos):
    # The parameter name to return
    param = None
    # Get the frame object for this function call
    thisframe = inspect.currentframe()
    try:
        # Get the parent calling frames details
        frames = inspect.getouterframes(thisframe)
        # Function this function was just called from that we wish to find the calling parameter name for
        function = frames[1][3]
        # Get all the details of where the calling statement was
        frame, filename, line_number, function_name, source, source_index = frames[2]
        # Read in the source file in the parent calling frame up to where the call was made
        with open(filename) as source_file:
            head = [source_file.next() for x in xrange(line_number)]

        # Build all lines of the calling statement; this deals with when a function is called with parameters listed on each line
        lines = []
        # Compile a regex for matching the start of the function being called
        regex = re.compile(r'\.?\s*%s\s*\(' % (function))
        # Work backwards from the parent calling frame line number until we see the start of the calling statement (usually the same line!!!)
        for line in reversed(head):
            lines.append(line.strip())
            if re.search(regex, line):
                break
        # Put the lines we have grokked back into source-file order rather than reverse order
        lines.reverse()
        # Join all the lines that were part of the calling statement
        call = "".join(lines)
        # Grab the parameter list from the calling statement for the function we were called from
        match = re.search(r'\.?\s*%s\s*\((.*)\)' % (function), call)
        paramlist = match.group(1)
        # If the function was called with no parameters raise an exception
        if paramlist == "":
            raise LookupError("Function called with no parameters.")
        # Use the Python abstract syntax tree parser to create a parsed form of the function parameter list; 'Name' nodes are variable names
        parameter = ast.parse(paramlist).body[0].value
        # If there were multiple parameters get the positional requested
        if type(parameter).__name__ == 'Tuple':
            # If we asked for a parameter outside of what was passed complain
            if pos >= len(parameter.elts):
                raise LookupError("The function call did not have a parameter at position %s" % pos)
            parameter = parameter.elts[pos]
        # If there was only a single parameter and another was requested raise an exception
        elif pos != 0:
            raise LookupError("There was only a single calling parameter found. Parameter indices start at 0.")
        # If the parameter was the name of a variable we can use it otherwise pass back None
        if type(parameter).__name__ == 'Name':
            param = parameter.id
    finally:
        # Remove the frame reference to prevent cyclic references screwing the garbage collector
        del thisframe
    # Return the parameter name we found
    return param
If you want a Key Value Pair relationship, maybe using a Dictionary would be better?
...or if you're trying to create some auto-documentation from your code, perhaps something like Doxygen (http://www.doxygen.nl/) could do the job for you?
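As a small sketch of the dictionary idea (reusing the log example from the question; the soup variable names here are purely illustrative), passing the name alongside the value sidesteps the lookup entirely:
def log(name, soup):
    # The caller supplies the filename stem explicitly as the dictionary key
    f = open(name + '.html', 'w')
    print >>f, soup.prettify()
    f.close()

soups = {'front_page': front_page_soup}   # front_page_soup assumed to exist
for name, soup in soups.items():
    log(name, soup)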
I wondered how IceCream solves this problem. So I looked into the source code and came up with the following (slightly simplified) solution. It might not be 100% bullet-proof (e.g. I dropped get_text_with_indentation and I assume exactly one function argument), but it works well for different test cases. It does not need to parse source code itself, so it should be more robust and simpler than previous solutions.
#!/usr/bin/env python3
import inspect
from executing import Source

def func(var):
    callFrame = inspect.currentframe().f_back
    callNode = Source.executing(callFrame).node
    source = Source.for_frame(callFrame)
    expression = source.asttokens().get_text(callNode.args[0])
    print(expression, '=', var)

i = 1
f = 2.0
s = 'string'
dct = {'key': 'value'}
obj = type('', (), {'value': 42})

func(i)
func(f)
func(s)
func(dct['key'])
func(obj.value)
Output:
i = 1
f = 2.0
s = string
dct['key'] = value
obj.value = 42
Update: If you want to move the "magic" into a separate function, you simply have to go one frame further back with an additional f_back.
def get_name_of_argument():
    callFrame = inspect.currentframe().f_back.f_back
    callNode = Source.executing(callFrame).node
    source = Source.for_frame(callFrame)
    return source.asttokens().get_text(callNode.args[0])

def func(var):
    print(get_name_of_argument(), '=', var)
If you want to get the caller params as in @Matt Oates' answer without using the source file (i.e. from a Jupyter Notebook), this code (combined from @Aeon's answer) will do the trick (at least in some simple cases):
import ast
import inspect

def get_caller_params():
    # get the frame object for this function call
    thisframe = inspect.currentframe()
    # get the parent calling frames details
    frames = inspect.getouterframes(thisframe)
    # frame 0 is the frame of this function
    # frame 1 is the frame of the caller function (the one we want to inspect)
    # frame 2 is the frame of the code that calls the caller
    caller_function_name = frames[1][3]
    code_that_calls_caller = inspect.findsource(frames[2][0])[0]

    # parse code to get nodes of abstract syntax tree of the call
    nodes = ast.parse(''.join(code_that_calls_caller))

    # find the node that calls the function
    i_expr = -1
    for (i, node) in enumerate(nodes.body):
        if _node_is_our_function_call(node, caller_function_name):
            i_expr = i
            break

    # line with the call start
    idx_start = nodes.body[i_expr].lineno - 1
    # line with the end of the call
    if i_expr < len(nodes.body) - 1:
        # next expression marks the end of the call
        idx_end = nodes.body[i_expr + 1].lineno - 1
    else:
        # end of the source marks the end of the call
        idx_end = len(code_that_calls_caller)

    call_lines = code_that_calls_caller[idx_start:idx_end]
    str_func_call = ''.join([line.strip() for line in call_lines])
    str_call_params = str_func_call[str_func_call.find('(') + 1:-1]
    params = [p.strip() for p in str_call_params.split(',')]

    return params

def _node_is_our_function_call(node, our_function_name):
    node_is_call = hasattr(node, 'value') and isinstance(node.value, ast.Call)
    if not node_is_call:
        return False
    function_name_correct = hasattr(node.value.func, 'id') and node.value.func.id == our_function_name
    return function_name_correct
You can then run it like this:
def test(*par_values):
    par_names = get_caller_params()
    for name, val in zip(par_names, par_values):
        print(name, val)

a = 1
b = 2
string = 'text'

test(a, b,
     string
     )
to get the desired output:
a 1
b 2
string text
Since you can have multiple variables with the same content, instead of passing the variable (content), it might be safer (and simpler) to pass its name as a string and get the variable content from the locals dictionary in the caller's stack frame:
def displayvar(name):
    import sys
    return name + " = " + repr(sys._getframe(1).f_locals[name])
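A quick usage sketch (the caller's frame here is the function compute, so its local total is what gets looked up):
def compute():
    total = 41 + 1
    print(displayvar('total'))   # prints: total = 42

compute()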
If it just so happens that the variable is a callable (function), it will have a __name__ property.
E.g. a wrapper to log the execution time of a function:
from time import perf_counter

def time_it(func, *args, **kwargs):
    start = perf_counter()
    result = func(*args, **kwargs)
    duration = perf_counter() - start
    print(f'{func.__name__} ran in {duration * 1000}ms')
    return result
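A quick usage sketch (slow_sum is just a stand-in for whatever you want to time):
def slow_sum(n):
    return sum(range(n))

result = time_it(slow_sum, 10_000_000)
# prints something like: slow_sum ran in 215.3ms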
Not sure if this makes sense at all, but here's an example:
Let's say I have a script. In this script, I create a list
list = [1,2,3,4]
Maybe I just don't have the technical vocabulary to find what I'm looking for, but is there any way I could set some logging up so that any time I created a variable I could store information in a log file? Given the above example, maybe I'd want to see how many elements are in the list?
I understand that I could simply write a function and call that over and over again, but let's say I might want to know information about a ton of different data types, not just lists. It wouldn't be clean to call a function repeatedly.
this is hackery but what the heck
class _LoggeryType(type):
    def __setattr__(cls, attr, value):
        print("SET VAR: {0} = {1}".format(attr, value))
        globals().update({attr: value})

# Python 3
class Loggery(metaclass=_LoggeryType):
    pass

# Python 2
class Loggery:
    __metaclass__ = _LoggeryType
Loggery.x = 5
print("OK set X={0}".format(x))
Note: I wouldn't really recommend using this.
One method would be to use the powerful sys.settrace. I've written up a small (but somewhat incomplete) example:
tracer.py:
import inspect
import sys
import os
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger('tracing-logger')

FILES_TO_TRACE = [os.path.basename(__file__), 'tracee.py']
print(FILES_TO_TRACE)

def new_var(name, value, context):
    logger.debug(f"New {context} variable called {name} = {value}")
    # do some analysis here, for example
    if type(value) == list:
        logger.debug(f"\tNumber of elements: {len(value)}")

def changed_var(name, value, context):
    logger.debug(f"{context} variable called {name} was changed to: {value}")

def make_tracing_func():
    current_locals = {}
    current_globals = {}
    first_line_executed = False

    def tracing_func(frame, event, arg):
        nonlocal first_line_executed
        frame_info = inspect.getframeinfo(frame)
        filename = os.path.basename(frame_info.filename)
        line_num = frame_info.lineno
        if event == 'line':
            # check for difference with locals
            for var_name in frame.f_code.co_varnames:
                if var_name in frame.f_locals:
                    var_value = frame.f_locals[var_name]
                    if var_name not in current_locals:
                        current_locals[var_name] = var_value
                        new_var(var_name, var_value, 'local')
                    elif current_locals[var_name] != var_value:
                        current_locals[var_name] = var_value
                        changed_var(var_name, var_value, 'local')
            for var_name, var_value in frame.f_globals.items():
                if var_name not in current_globals:
                    current_globals[var_name] = var_value
                    if first_line_executed:
                        new_var(var_name, var_value, 'global')
                elif current_globals[var_name] != var_value:
                    current_globals[var_name] = var_value
                    changed_var(var_name, var_value, 'global')
            first_line_executed = True
            return tracing_func
        elif event == 'call':
            if os.path.basename(filename) in FILES_TO_TRACE:
                return make_tracing_func()
            return None

    return tracing_func

sys.settrace(make_tracing_func())
import tracee
tracee.py:
my_list = [1, 2, 3, 4]
a = 3
print("tracee: I have a list!", my_list)
c = a + sum(my_list)
print("tracee: A number:", c)
c = 12
print("tracee: I changed it:", c)
Output:
DEBUG:tracing-logger:New global variable called my_list = [1, 2, 3, 4]
DEBUG:tracing-logger: Number of elements: 4
DEBUG:tracing-logger:New global variable called a = 3
tracee: I have a list! [1, 2, 3, 4]
DEBUG:tracing-logger:New global variable called c = 13
tracee: A number: 13
DEBUG:tracing-logger:global variable called c was changed to: 12
tracee: I changed it: 12
There are some additional cases you may want to handle (duplicated changes to globals due to function calls, closure variables, etc.). You can also use linecache to find the contents of the lines, or use the line_num variable in the logging.
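As a small standalone illustration of the linecache idea (not wired into the tracer above):
import inspect
import linecache

def current_source_line():
    # Look one frame up (the caller) and fetch that line's source text
    caller = inspect.getframeinfo(inspect.currentframe().f_back)
    return linecache.getline(caller.filename, caller.lineno).rstrip()

print(current_source_line())   # prints this very call line when run from a file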
I am learning python and have one question about how to save a dictionary value via a python function.
import copy

def func():
    b = {'1':'d'}
    a = copy.deepcopy(b)

global a
a = {}
func()
print a
The printout is still {}. How do I make it be {'1':'d'}?
You need to declare that you are accessing the global variable a inside the function, like this:
def func():
    global a
    b = {'1': 'd'}
    a = copy.deepcopy(b)
But prefer not to do something like that. Instead, return the copy and store it at the calling place, like this:
import copy
a = {}
def func():
    b = {'1': 'd'}
    return copy.deepcopy(b)
a = func()
print a
I moved the global a into the function definition.
#! /usr/bin/python
import copy
def func():
    global a
    b = {'1':'d'}
    a = copy.deepcopy(b)
a = {}
func()
print a
You are defining 'a' in two different scopes: one in the "global" scope and one in the function scope. You will need to return copy.deepcopy(b) and assign the result to the outer 'a'.
import copy
def func():
    b = {'1':'d'}
    return copy.deepcopy(b)
global a
a = func()
print a
Why can two functions with the same id value have differing attributes like __doc__ or __name__?
Here's a toy example:
some_dict = {}
for i in range(2):
    def fun(self, *args):
        print i
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun
my_type = type("my_type", (object,), some_dict)
m = my_type()
print id(m.function_0)
print id(m.function_1)
print m.function_0.__doc__
print m.function_1.__doc__
print m.function_0.__name__
print m.function_1.__name__
print m.function_0()
print m.function_1()
Which prints:
57386560
57386560
I am function 0
I am function 1
function_0
function_1
1 # <--- Why is it bound to the most recent value of that variable?
1
I've tried mixing in a call to copy.deepcopy (not sure if the recursive copy is needed for functions or it is overkill) but this doesn't change anything.
You are comparing methods, and method objects are created anew each time you access one on an instance or class (via the descriptor protocol).
Once you tested their id() you discard the method again (there are no references to it), so Python is free to reuse the id when you create another method. You want to test the actual functions here, by using m.function_0.__func__ and m.function_1.__func__:
>>> id(m.function_0.__func__)
4321897240
>>> id(m.function_1.__func__)
4321906032
Method objects inherit the __doc__ and __name__ attributes from the function that they wrap. The actual underlying functions are really still different objects.
As for the two functions returning 1; both functions use i as a closure; the value for i is looked up when you call the method, not when you created the function. See Local variables in Python nested functions.
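A quick illustration of that late binding, outside the class machinery:
funcs = [lambda: i for i in range(2)]
print [f() for f in funcs]   # [1, 1] - both closures see the final value of i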
The easiest work-around is to add another scope with a factory function:
some_dict = {}
for i in range(2):
    def create_fun(i):
        def fun(self, *args):
            print i
        fun.__doc__ = "I am function {}".format(i)
        fun.__name__ = "function_{}".format(i)
        return fun
    some_dict["function_{}".format(i)] = create_fun(i)
Per your comment on ndpu's answer, here is one way you can create the functions without needing to have an optional argument:
for i in range(2):
    def funGenerator(i):
        def fun1(self, *args):
            print i
        return fun1
    fun = funGenerator(i)
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun
@Martijn Pieters is perfectly correct. To illustrate, try this modification:
some_dict = {}
for i in range(2):
    def fun(self, *args):
        print i
    fun.__doc__ = "I am function {}".format(i)
    fun.__name__ = "function_{}".format(i)
    some_dict["function_{}".format(i)] = fun
    print "id", id(fun)
my_type = type("my_type", (object,), some_dict)
m = my_type()
print id(m.function_0)
print id(m.function_1)
print m.function_0.__doc__
print m.function_1.__doc__
print m.function_0.__name__
print m.function_1.__name__
print m.function_0()
print m.function_1()
c = my_type()
print c
print id(c.function_0)
You see that fun gets a different id each time, and that it differs from the final one. It's the method creation logic that sends it pointing to the same location, as that's where the class's code is stored. Also, if you use my_type as a sort of class, instances created with it have the same memory address for that function.
This code gives:
id 4299601152
id 4299601272
4299376112
4299376112
I am function 0
I am function 1
function_0
function_1
1
None
1
None
<__main__.my_type object at 0x10047c350>
4299376112
You should save the current i to make this:
1 # <--- Why is it bound to the most recent value of that variable?
1
work, for example by setting it as the default value of a function argument:
for i in range(2):
    def fun(self, i=i, *args):
        print i
    # ...
or create a closure:
for i in range(2):
    def f(i):
        def fun(self, *args):
            print i
        return fun
    fun = f(i)
    # ...
I'm trying to collect info on crashes and I am having trouble figuring out how to get the globals that are being used in the crashed function.
import inspect
fun = 222
other = "junk"
def test():
    global fun
    harold = 888 + fun
    try:
        harold/0
    except:
        frames = inspect.trace()
        print "Local variables:"
        print frames[0][0].f_locals
        print "All global variables, not what I want!"
        print frames[0][0].f_globals
test()
test() only uses "fun" but f_globals gives all the available globals. Is there some way to get just the globals that are being used by this function?
Check this out
a = 10
def test():
    global a
    a = 12
    b = 12
print "co_argcount = ",test.__code__.co_argcount
print "co_cellvars = ",test.__code__.co_cellvars
print "co_code = ",test.__code__.co_code
print "co_consts = ",test.__code__.co_consts
print "co_filename = ",test.__code__.co_filename
print "co_firstlineno = ",test.__code__.co_firstlineno
print "co_flags = ",test.__code__.co_flags
print "co_freevars = ",test.__code__.co_freevars
print "co_lnotab = ",test.__code__.co_lnotab
print "co_name = ",test.__code__.co_name
print "co_names = ",test.__code__.co_names
print "co_nlocals = ",test.__code__.co_nlocals
print "co_stacksize = ",test.__code__.co_stacksize
print "co_varnames = ",test.__code__.co_varnames
I also needed this myself. This is my solution; the non-fast path covers most cases you are probably interested in.
def iterGlobalsUsedInFunc(f, fast=False, loadsOnly=True):
    if hasattr(f, "func_code"): code = f.func_code
    else: code = f
    if fast:
        # co_names is the list of all names which are used.
        # These are mostly the globals. These are also attrib names, so these are more...
        for name in code.co_names:
            yield name
    else:
        # Use the disassembly. Note that this will still not
        # find dynamic lookups to `globals()`
        # (which is anyway not possible to detect always).
        import dis
        ops = ["LOAD_GLOBAL"]
        if not loadsOnly:
            ops += ["STORE_GLOBAL", "DELETE_GLOBAL"]
        ops = map(dis.opmap.__getitem__, ops)
        i = 0
        while i < len(code.co_code):
            op = ord(code.co_code[i])
            i += 1
            if op >= dis.HAVE_ARGUMENT:
                oparg = ord(code.co_code[i]) + ord(code.co_code[i+1])*256
                i += 2
            else:
                oparg = None
            if op in ops:
                name = code.co_names[oparg]
                yield name

    # iterate through sub code objects
    import types
    for subcode in code.co_consts:
        if isinstance(subcode, types.CodeType):
            for g in iterGlobalsUsedInFunc(subcode, fast=fast, loadsOnly=loadsOnly):
                yield g
An updated version might be here.
My use case:
I have some module (songdb) which has some global database objects and I wanted to lazily load them once I called a function which uses the global database variable. I could have manually decorated such functions with a lazy loader or I could automatically detect which functions need it by my iterGlobalsUsedInFunc function.
This is basically the code (full code; was actually extended for classes now), where init automatically decorates such functions:
DBs = {
    "songDb": "songs.db",
    "songHashDb": "songHashs.db",
    "songSearchIndexDb": "songSearchIndex.db",
}
for db in DBs.keys(): globals()[db] = None

def usedDbsInFunc(f):
    dbs = []
    for name in utils.iterGlobalsUsedInFunc(f, loadsOnly=True):
        if name in DBs:
            dbs += [name]
    return dbs

def init():
    import types
    for fname in globals().keys():
        f = globals()[fname]
        if not isinstance(f, types.FunctionType): continue
        dbs = usedDbsInFunc(f)
        if not dbs: continue
        globals()[fname] = lazyInitDb(*dbs)(f)

def initDb(db):
    if not globals()[db]:
        globals()[db] = DB(DBs[db])

def lazyInitDb(*dbs):
    def decorator(f):
        def decorated(*args, **kwargs):
            for db in dbs:
                initDb(db)
            return f(*args, **kwargs)
        return decorated
    return decorator
Another solution would have been to use an object proxy which lazily loads the database. I have used that elsewhere in this project, thus I have also implemented such object proxy; if you are interested, see here: utils.py:ObjectProxy.
A dirty way would be to use inspect.getsourcelines() and search for lines containing global <varname>. There are no good methods for this, at least not in inspect module.
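A rough sketch of that idea (it only catches names the function explicitly declares with a global statement, so it misses globals that are merely read):
import inspect
import re

def declared_globals(func):
    # Scrape the function's source for explicit "global foo, bar" statements
    source_lines, _ = inspect.getsourcelines(func)
    names = []
    for line in source_lines:
        match = re.match(r'\s*global\s+(.+)', line)
        if match:
            names.extend(n.strip() for n in match.group(1).split(','))
    return names

fun = 222

def test():
    global fun
    harold = 888 + fun
    return harold

print(declared_globals(test))   # ['fun']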
As you already found out, the property f_globals gives you the global namespace in which the function was defined.
From what I can see, the only way to find out which global variables are actually used is to disassemble the function's byte code with dis; look for the byte codes STORE_NAME, STORE_GLOBAL, DELETE_GLOBAL, etc.
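On Python 3 you can do this without walking co_code by hand, via dis.get_instructions (a minimal sketch; on Python 2 you would decode the bytecode manually as in the answer above):
import dis

fun = 222

def test():
    global fun
    harold = 888 + fun
    return harold

def globals_used(func):
    # Names touched by the global-related opcodes
    global_ops = {'LOAD_GLOBAL', 'STORE_GLOBAL', 'DELETE_GLOBAL'}
    return {ins.argval for ins in dis.get_instructions(func) if ins.opname in global_ops}

print(globals_used(test))   # {'fun'}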
If using Colab / Jupyter
In one cell you run dis, redirecting the output to a variable:
%%capture dis_output
func_to_check=my_own_function
# dis : Disassembler for Python bytecode
from dis import dis
dis(func_to_check)
And then you can filter its content to detect the use of GLOBALS. Here is an example.
Version 1 (safer than v2)
# Then grep will find the use of GLOBALS
# Method 1 (kind of safer)
with open('dis_output.txt', 'w') as f:
    f.writelines(dis_output.stdout)
! cat dis_output.txt | grep -i global
Version 2 (not so safe)
# Method 2 (not so safe)
! echo "{dis_output.stdout}" | grep -i global
Results (example)
14 4 LOAD_GLOBAL 0 (AudioLibrary)
36 LOAD_GLOBAL 2 (userLanguageAudio)
50 LOAD_GLOBAL 2 (userLanguageAudio)
62 LOAD_GLOBAL 3 (LESSON_FILES_DIR)
27 76 LOAD_GLOBAL 4 (get_ipython)
30 96 LOAD_GLOBAL 6 (pread)
31 104 LOAD_GLOBAL 7 (print)
34 >> 124 LOAD_GLOBAL 7 (print)
[Update 2022-11] Some time later I put this together
#-------------------------
def check_function_globals(function_name):
    import dis
    # From python 3.4 dis can write to a file
    with open('dis_output.txt', 'w') as f:
        dis.dis(function_name, file=f)
    # Then grep will find the use of GLOBALS
    ! cat dis_output.txt | grep -i global
    return
#-------------------------