Reload function fails to erase removed variables - python

I am trying to access variables from a module that is modified while the main script runs, using reload. But reload fails to erase the variables that have been removed from the module. How can I force Python to erase them?
Here is my code
My module.py:
a = 1
b = 2
My main script.py:
import time
from importlib import reload
import module

while True:
    reload(module)
    print('------- module reloaded')
    try:
        print('a: ', module.a)
    except AttributeError:
        print('a: ', 'undefined')
    try:
        print('b: ', module.b)
    except AttributeError:
        print('b: ', 'undefined')
    try:
        print('c: ', module.c)
    except AttributeError:
        print('c: ', 'undefined')
    time.sleep(5)
As expected, if I run my script with python (3.5.1) I get the output:
------- module reloaded
a: 1
b: 2
c: undefined
But I get an unexpected behavior, when I change the module.py as following:
# a = 1
b = 3
c = 4
I have the following output:
------- module reloaded
a: 1
b: 3
c: 4
This means that reload correctly updated the value of b and added the new variable c, but it failed to erase the variable a that was removed from the module. It seems to only update the variables found in the new version of the module. How can I force the reload function to erase removed values?
Thank you for your help

The issue here is that reload is implemented with code that, roughly, execs the current version of the module code in the existing cached module's namespace. The reason for doing this is that reload is intended to have a global effect; reload(module) doesn't just reload it for you, it changes every other module's personally imported copy of module. If it created a new module namespace, other modules would still have the old module cached; while erasing the contents of the old module before execing might help, it would risk breaking already imported submodules of a package, trigger race conditions (where a thread might see module.a before and after the reload, but it would mysteriously disappear for a moment during the reload), etc.
As the docs note:
When a module is reloaded, its dictionary (containing the module’s global variables) is retained. Redefinitions of names will override the old definitions, so this is generally not a problem. If the new version of a module does not define a name that was defined by the old version, the old definition remains.
There are workarounds that bypass this safety mechanism if you absolutely must do it. The simplest is to simply remove the module from the module cache and reimport it, rather than reloading it:
import sys # At top of file
del sys.modules['module']
import module
That won't update any other importers of the module (they'll keep the stale cache), but if the module is only used in your module, that'll work.
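Applied to the question's polling loop, that might look something like this (a minimal sketch; getattr with a default stands in for the original try/except blocks):

import sys
import time

import module

while True:
    # Drop the cached module object, then import from scratch. The fresh
    # import executes module.py in a brand-new namespace, so names removed
    # from the file genuinely disappear.
    del sys.modules['module']
    import module
    print('------- module re-imported')
    print('a: ', getattr(module, 'a', 'undefined'))
    print('b: ', getattr(module, 'b', 'undefined'))
    print('c: ', getattr(module, 'c', 'undefined'))
    time.sleep(5)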
Another approach that might work (untested, and it's kind of insane) would be to explicitly delete all the public names from the module before reloading with something like:
# Intentionally done as a list comprehension so no modification of the module's
# globals dict occurs while we're iterating it.
# It might make sense to use dir() or to iterate module.__all__ (if available)
# instead of using vars(); depends on the design.
for name in [n for n in vars(module) if not n.startswith('_')]:
    try:
        delattr(module, name)
    except Exception:
        pass  # Undeletable attribute for whatever reason; ignore it and let reload deal with it

reload(module)  # Now the exec occurs in a clean namespace
That would avoid the stale cache issue, in exchange for insanity.
Really, the answer is "don't use a design that depends on production reloading"; if the module is just data, store it as a JSON file or the like and just reparse it (which is generally much cheaper than what the Python import machinery goes through to import a module).
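For example, if module.py only ever holds data, a minimal sketch of the reparse-a-file approach might look like this (the file name data.json and its keys are assumptions for illustration):

import json
import os
import time

path = 'data.json'   # assumed to hold e.g. {"b": 3, "c": 4}
last_mtime = None
data = {}

while True:
    mtime = os.path.getmtime(path)
    if mtime != last_mtime:       # only reparse when the file has changed
        with open(path) as f:
            data = json.load(f)   # removed keys vanish automatically,
        last_mtime = mtime        # since data is rebuilt from scratch
    print('a: ', data.get('a', 'undefined'))
    time.sleep(5)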

This is by design. From the docs:
When a module is reloaded, its dictionary (containing the module’s global variables) is retained. Redefinitions of names will override the old definitions, so this is generally not a problem. If the new version of a module does not define a name that was defined by the old version, the old definition remains.
Generally, reload is meant for convenience within interactive interpreter sessions; it is not really meant to be used in real scripts. You should probably re-think your design. (e.g. is there a reason that module can't be a regular text file?)

Related

Detect circular imports in Python [duplicate]

I'm working with a project that contains about 30 unique modules. It wasn't designed too well, so it's common that I create circular imports when adding some new functionality to the project.
Of course, when I add the circular import, I'm unaware of it. Sometimes it's pretty obvious I've made a circular import when I get an error like AttributeError: 'module' object has no attribute 'attribute' where I clearly defined 'attribute'. But other times, the code doesn't throw exceptions because of the way it's used.
So, to my question:
Is it possible to programmatically detect when and where a circular import is occurring?
The only solution I can think of so far is to have a module importTracking that contains a dict importingModules, a function importInProgress(file) that increments importingModules[file] and throws an error if it's greater than 1, and a function importComplete(file) that decrements importingModules[file]. All other modules would look like:
import importTracking
importTracking.importInProgress(__file__)
#module code goes here.
importTracking.importComplete(__file__)
But that looks really nasty, there's got to be a better way to do it, right?
To avoid having to alter every module, you could stick your import-tracking functionality in an import hook, or in a customized __import__ you could stick in the built-ins -- the latter, for once, might work better, because __import__ gets called even if the module getting imported is already in sys.modules, which is the case during circular imports.
For the implementation I'd simply use a set of the modules "in the process of being imported", something like (benjaoming edit: Inserting a working snippet derived from original):
beingimported = set()
originalimport = __import__

def newimport(modulename, *args, **kwargs):
    if modulename in beingimported:
        print "Importing in circles", modulename, args, kwargs
        print "  Import stack trace -> ", beingimported
        # sys.exit(1)  # Normally exiting is a bad idea.
    beingimported.add(modulename)
    result = originalimport(modulename, *args, **kwargs)
    if modulename in beingimported:
        beingimported.remove(modulename)
    return result

import __builtin__
__builtin__.__import__ = newimport
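For Python 3, where __builtin__ was renamed to builtins and print is a function, an equivalent sketch (untested, in the same spirit as the original) would be:

import builtins

being_imported = set()
original_import = builtins.__import__

def new_import(module_name, *args, **kwargs):
    if module_name in being_imported:
        print("Importing in circles:", module_name, args, kwargs)
        print("  Import stack trace ->", being_imported)
    being_imported.add(module_name)
    try:
        return original_import(module_name, *args, **kwargs)
    finally:
        being_imported.discard(module_name)

builtins.__import__ = new_import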
Not all circular imports are a problem, as you've found when an exception is not thrown.
When they are a problem, you'll get an exception the next time you try to run any of your tests. You can change the code when this happens.
I don't see anything that needs to change in this situation.
Example of when it's not a problem:
a.py:
import b
a = 42
def f():
    return b.b

b.py:
import a
b = 42
def f():
    return a.a
Circular imports in Python are not like PHP includes.
Imported modules are loaded once and cached (in sys.modules), where they stay for the duration of the process. Every subsequent import simply binds a name in the importing namespace to that already-loaded module. A module is unique, and a reference to a module name will always point to the same loaded module, regardless of where it was imported.
So if you have a circular module import, the loading of each file will happen once, and then each module will have names relating to the other module created into its namespace.
There could of course be problems when referring to specific names within both modules (when the circular imports occur BEFORE the class/function definitions that are referenced in the imports of the opposite modules), but you'll get an error if that happens.
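For example, this minimal (hypothetical) pair of modules breaks, because the circular import runs before the referenced name exists:

# a.py
from b import g   # starts importing b before f is defined

def f():
    return 'f'

# b.py
from a import f   # a is only partially initialized here, so this raises
                  # ImportError: cannot import name 'f'

def g():
    return 'g'

Importing a (from anywhere) triggers the failure; switching both files to plain import a / import b and deferring attribute access into the function bodies, as in the working example above, avoids it.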
import uses __builtin__.__import__(), so if you monkeypatch that then every import everywhere will pick up the changes. Note that a circular import is not necessarily a problem though.

Does reloading a module changes the names in the module previously imported/reloaded?

I have a module named myfile.py
# myfile.py
msg = 'Hello World'
Then I import it: import myfile
Then I make some modifications, such as:
# myfile.py (updated)
a = 'Some text'
b = 'Some other text'
Then I call the reload function:
from imp import reload
reload(myfile)
Now when I run dir(myfile), it shows the names from the current reload as well as those from all previous imports/reloads.
Does this mean "all" the names (even names omitted after updating) are available separately to the outside world when the module is imported/reloaded?
From the imp.reload documentation:
When a module is reloaded, its dictionary (containing the module’s global variables) is retained. Redefinitions of names will override the old definitions, so this is generally not a problem. If the new version of a module does not define a name that was defined by the old version, the old definition remains.
The module is not wiped before the reload. The module's new code is executed with all old variable bindings still present. Any names the new code defines replace the old bindings, but anything the new version doesn't define retains its old value. This also applies in Python 2.
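A quick interpreter session (sketched here for Python 3, reusing the question's file) makes the retained binding visible:

>>> import importlib, myfile
>>> myfile.msg                    # original version defines msg
'Hello World'
>>> # edit myfile.py so it defines a and b but no longer msg, then:
>>> importlib.reload(myfile)
<module 'myfile' from 'myfile.py'>
>>> myfile.a, myfile.b            # the new names are bound
('Some text', 'Some other text')
>>> myfile.msg                    # the old binding survives the reload
'Hello World'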
You can use Python's built-in function reload. From the docs:
When reload(module) is executed:
Python modules' code is recompiled and the module-level code re-executed, defining a new set of objects which are bound to names in the module's dictionary. The init function of extension modules is not called a second time.
As with all other objects in Python the old objects are only reclaimed after their reference counts drop to zero.
The names in the module namespace are updated to point to any new or changed objects.
Other references to the old objects (such as names external to the module) are not rebound to refer to the new objects and must be updated in each namespace where they occur if that is desired.
Here is an example:
First, we create a file that defines a variable msg and a method to print it.
# myfile.py
msg = 'Hello World'

def x():
    print msg

# run from the shell
shell1$ python
>>> import myfile
>>> myfile.x()
Hello World
Now we modify the file, removing variable msg and adding a and b variables (without exiting the current Python REPL).
# myfile.py - updated
a = 'Some text'
b = 'Some other text'

def x():
    print '%s: %s' % (a, b)

# run from the same python session
>>> reload(myfile)
<module 'myfile' from 'myfile.pyc'>
>>> myfile.x()
Some text: Some other text
This is useful if you have edited the module source file using an external editor and want to try out the new version without leaving the Python interpreter.
One final note: reload(myfile) only reloads the specified module; any modules that myfile itself imports must be reloaded individually. Also, any objects that refer to anything in the module (like an instance whose class is defined in the module) will continue to use the previously loaded definition.
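That last point can be sketched like this (Greeter is a hypothetical class assumed to be defined in myfile):

>>> import myfile                  # suppose myfile defines class Greeter
>>> g = myfile.Greeter()
>>> # edit myfile.py, then:
>>> reload(myfile)
<module 'myfile' from 'myfile.pyc'>
>>> isinstance(g, myfile.Greeter)  # g still uses the old class object
False
>>> g.__class__ is myfile.Greeter
False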

How can I figure out in my module if the main program uses a specific variable?

I know this does not sound Pythonic, but bear with me for a second.
I am writing a module that depends on some external closed-source module. That module needs to get instantiated to be used (using module.create()).
My module attempts to figure out if my user already loaded that module (easy to do), but then needs to figure out if the module was instantiated. I understand that checking out the type() of each variable can tell me this, but I am not sure how I can get the names of variables defined by the main program. The reason for this is that when one instantiates the model, they also set a bunch of parameters that I do not want to overwrite for any reason.
My attempts so far involved using sys._getframe().f_globals and iterating through the elements, but in my testing it doesn't work. If I instantiate the module as modInst and then call the function in my module, it fails to show the modInst variable. Is there another solution to this? Sample code provided below.
import sys
if moduleName not in sys.modules:
import moduleName
modInst = moduleName.create()
else:
globalVars = sys._getframe().f_globals
for key, value in globalVars:
if value == "Module Name Instance":
return key
return moduleName.create()
EDIT: Sample code included.
Looks like your code assumes that the .create() function was called, if at all, by the immediate/direct caller of your function (which you show only partially, making it pretty hard to be sure about what's going on) and the results placed in a global variable (of the module where the caller of your function resides). It all seems pretty fragile. Doesn't that third-party module have some global variables of its own that are affected by whether the module's create has been called or not? I imagine it would -- where else is it keeping the state-changes resulting from executing the create -- and I would explore that.
To address a specific issue you raise,
I am not sure how I can get the names of variables defined by the main program
that's easy -- the main program is found, as a module, in sys.modules['__main__'], so just use vars(sys.modules['__main__']) to get the global dictionary of the main program (the variable names are the keys in that dictionary, along of course with names of functions, classes, etc -- the module, like any other module, has exactly one top-level/global namespace, not one for variables, a separate one for functions, etc).
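For instance, a short runnable sketch that enumerates the main program's globals (filtering on isinstance against the third-party instance type would go where the print is):

import sys

# vars() on the '__main__' module returns the main program's global
# dict, mapping names to objects (variables, functions, classes, ...).
main_globals = vars(sys.modules['__main__'])
for name, value in main_globals.items():
    print("%s -> %s" % (name, type(value).__name__))

Filtering with isinstance(value, SomeInstanceType) would then locate any pre-existing instance without relying on sys._getframe().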
Suppose the external closed-source module is called extmod.
Create my_extmod.py:
import extmod

INSTANTIATED = False

def create(*args, **kw):
    global INSTANTIATED
    INSTANTIATED = True
    return extmod.create(*args, **kw)
Then require your users to import my_extmod instead of extmod directly.
To test if the create function has been called, just check the value of my_extmod.INSTANTIATED.
Edit: If you open up an IPython session and type import extmod, then type
extmod.[TAB], then you'll see all the top-level variables in the extmod namespace. This might help you find some parameter that changes when extmod.create is called.
Barring that, and barring the possibility of training users to import my_extmod, perhaps you could use something like the function below: find_instance searches through all modules in sys.modules.
import sys

def find_instance(cls):
    for modname in sys.modules:
        module = sys.modules[modname]
        if module is None:
            continue  # skip failed-relative-import markers (Python 2)
        for value in vars(module).values():
            if isinstance(value, cls):
                return value

x = find_instance(extmod.ExtmodClass) or extmod.create()

How to re-import an updated package while in the Python interpreter? [duplicate]

This question already has answers here:
How do I unload (reload) a Python module?
(22 answers)
I often test my module in the Python interpreter, and when I see an error, I quickly update the .py file. But how do I make the change take effect in the interpreter? So far I have been exiting and re-entering the interpreter, because re-importing the file is not working for me.
Update for Python 3: (quoted from the already-linked answer, since the last edit/comment here suggested a deprecated method)
In Python 3, reload was moved to the imp module. In 3.4, imp was deprecated in favor of importlib, and reload was added to the latter. When targeting 3 or later, either reference the appropriate module when calling reload or import it.
Takeaway:
Python3 >= 3.4: importlib.reload(packagename)
Python3 < 3.4: imp.reload(packagename)
Python2: continue below
Use the reload builtin function:
https://docs.python.org/2/library/functions.html#reload
When reload(module) is executed:
Python modules’ code is recompiled and the module-level code reexecuted, defining a new set of objects which are bound to names in the module’s dictionary. The init function of extension modules is not called a second time.
As with all other objects in Python the old objects are only reclaimed after their reference counts drop to zero.
The names in the module namespace are updated to point to any new or changed objects.
Other references to the old objects (such as names external to the module) are not rebound to refer to the new objects and must be updated in each namespace where they occur if that is desired.
Example:
# Make a simple function that prints "version 1"
shell1$ echo 'def x(): print "version 1"' > mymodule.py
# Run the module
shell2$ python
>>> import mymodule
>>> mymodule.x()
version 1
# Change mymodule to print "version 2" (without exiting the python REPL)
shell2$ echo 'def x(): print "version 2"' > mymodule.py
# Back in that same python session
>>> reload(mymodule)
<module 'mymodule' from 'mymodule.pyc'>
>>> mymodule.x()
version 2
All the answers above about reload() or imp.reload() are deprecated.
reload() is no longer a builtin function in python 3 and imp.reload() is marked deprecated (see help(imp)).
It's better to use importlib.reload() instead.
So far I have been exiting and re-entering the interpreter because re-importing the file is not working for me.
Yes, just saying import again gives you the existing copy of the module from sys.modules.
You can say reload(module) to update sys.modules and get a new copy of that single module, but if any other modules have a reference to the original module or any object from the original module, they will keep their old references and Very Confusing Things will happen.
So if you've got a module a, which depends on module b, and b changes, you have to ‘reload b’ followed by ‘reload a’. If you've got two modules which depend on each other, which is extremely common when those modules are part of the same package, you can't reload them both: if you reload p.a it'll get a reference to the old p.b, and vice versa. The only way to do it is to unload them both at once by deleting their items from sys.modules, before importing them again. This is icky and has some practical pitfalls to do with module entries being None as a failed-relative-import marker.
And if you've got a module which passes references to its objects to system modules — for example it registers a codec, or adds a warnings handler — you're stuck; you can't reload the system module without confusing the rest of the Python environment.
In summary: for all but the simplest case of one self-contained module being loaded by one standalone script, reload() is very tricky to get right; if, as you imply, you are using a ‘package’, you will probably be better off continuing to cycle the interpreter.
In Python 3, the behaviour changes.
>>> import my_stuff
... do something with my_stuff, then later:
>>> import imp
>>> imp.reload(my_stuff)
No matter how many times you import a module, you'll get the same copy of the module from sys.modules - which was loaded at first import mymodule
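A quick session demonstrates this:

>>> import sys
>>> import mymodule
>>> sys.modules['mymodule'] is mymodule
True
>>> import mymodule as again       # the second import hits the cache
>>> again is mymodule
True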
I am answering this late, as each of the previous answers has a bit of the answer, so I am attempting to sum it all up in a single one.
Using the built-in function:
For Python 2.x - use the built-in reload(mymodule) function.
For Python 3.x - use imp.reload(mymodule).
For Python 3.4+ - imp has been deprecated in favor of importlib, so use importlib.reload(mymodule).
A few caveats:
It is generally not very useful to reload built-in or dynamically loaded modules. Reloading sys, __main__, builtins and other key modules is not recommended.
In many cases extension modules are not designed to be initialized more than once, and may fail in arbitrary ways when reloaded.
If a module imports objects from another module using from ... import ..., calling reload() for the other module does not redefine the objects imported from it. One way around this is to re-execute the from statement; another is to use import and qualified names (module.name) instead (see the sketch after this list).
If a module instantiates instances of a class, reloading the module that defines the class does not affect the method definitions of the instances: they continue to use the old class definition. The same is true for derived classes.
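A minimal sketch of the from ... import ... caveat, using importlib.reload and a mymodule like the earlier examples:

>>> import importlib, mymodule
>>> from mymodule import x
>>> importlib.reload(mymodule)     # rebinds mymodule.x ...
<module 'mymodule' from 'mymodule.py'>
>>> x is mymodule.x                # ... but not the local name x
False
>>> from mymodule import x         # re-execute the from statement
>>> x is mymodule.x
True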
External packages:
reimport - currently supports Python 2.4 through 2.7.
xreload - works by executing the module in a scratch namespace, and then patching classes, methods and functions in place. This avoids the need to patch instances. New objects are copied into the target namespace.
livecoding - code reloading allows a running application to change its behaviour in response to changes in the Python scripts it uses. When the library detects that a Python script has been modified, it reloads that script and replaces the objects it had previously made available with newly reloaded versions. This lets a programmer avoid interrupting their workflow (and the corresponding loss of focus) by applying changes immediately instead of restarting the application.
Short answer:
try using reimport: a full featured reload for Python.
Longer answer:
It looks like this question was asked/answered prior to the release of reimport, which bills itself as a "full featured reload for Python":
This module intends to be a full featured replacement for Python's reload function. It is targeted towards making a reload that works for Python plugins and extensions used by longer running applications.
Reimport currently supports Python 2.4 through 2.6.
By its very nature, this is not a completely solvable problem. The goal of this module is to make the most common sorts of updates work well. It also allows individual modules and package to assist in the process. A more detailed description of what happens is on the overview page.
Note: Although the reimport explicitly supports Python 2.4 through 2.6, I've been trying it on 2.7 and it seems to work just fine.
Basically, reload as in allyourcode's answer. But it won't change the underlying code of already-instantiated objects or previously referenced functions. Extending from his answer:
# Make a simple function that prints "version 1"
shell1$ echo 'def x(): print "version 1"' > mymodule.py
# Run the module
shell2$ python
>>> import mymodule
>>> mymodule.x()
version 1
>>> x = mymodule.x
>>> x()
version 1
>>> x is mymodule.x
True
# Change mymodule to print "version 2" (without exiting the python REPL)
shell2$ echo 'def x(): print "version 2"' > mymodule.py
# Back in that same python session
>>> reload(mymodule)
<module 'mymodule' from 'mymodule.pyc'>
>>> mymodule.x()
version 2
>>> x()
version 1
>>> x is mymodule.x
False
Not sure if this does everything expected, but you can simply do:
>>> del mymodule
>>> import mymodule
import sys
del sys.modules['module_name']
See here for a good explanation of how your dependent modules won't be reloaded and the effects that can have:
http://pyunit.sourceforge.net/notes/reloading.html
The way pyunit solved it was to track dependent modules by overriding __import__, then to delete each of them from sys.modules and re-import. They probably could have just reloaded them, though.
dragonfly's answer worked for me (python 3.4.3).
import sys
del sys.modules['module_name']
Here is a lower-level solution:
exec(open("MyClass.py").read(), globals())

How to make a cross-module variable?

The __debug__ variable is handy in part because it affects every module. If I want to create another variable that works the same way, how would I do it?
The variable (let's be original and call it 'foo') doesn't have to be truly global, in the sense that if I change foo in one module, it is updated in others. I'd be fine if I could set foo before importing other modules and then they would see the same value for it.
If you need a global cross-module variable, maybe a simple module-level global variable will suffice.
a.py:
var = 1
b.py:
import a
print a.var
import c
print a.var
c.py:
import a
a.var = 2
Test:
$ python b.py
# -> 1 2
Real-world example: Django's global_settings.py (though in Django apps settings are used by importing the object django.conf.settings).
I don't endorse this solution in any way, shape or form. But if you add a variable to the __builtin__ module, it will be accessible as if a global from any other module that includes __builtin__ -- which is all of them, by default.
a.py contains
print foo
b.py contains
import __builtin__
__builtin__.foo = 1
import a
The result is that "1" is printed.
Edit: The __builtin__ module is available as the local symbol __builtins__ -- that's the reason for the discrepancy between two of these answers. Also note that __builtin__ has been renamed to builtins in python3.
I believe that there are plenty of circumstances in which it does make sense and it simplifies programming to have some globals that are known across several (tightly coupled) modules. In this spirit, I would like to elaborate a bit on the idea of having a module of globals which is imported by those modules which need to reference them.
When there is only one such module, I name it "g". In it, I assign default values for every variable I intend to treat as global. In each module that uses any of them, I do not use "from g import var", as this only results in a local variable which is initialized from g only at the time of the import. I make most references in the form g.var, and the "g." serves as a constant reminder that I am dealing with a variable that is potentially accessible to other modules.
If the value of such a global variable is to be used frequently in some function in a module, then that function can make a local copy: var = g.var. However, it is important to realize that assignments to var are local, and global g.var cannot be updated without referencing g.var explicitly in an assignment.
Note that you can also have multiple such globals modules shared by different subsets of your modules to keep things a little more tightly controlled. The reason I use short names for my globals modules is to avoid cluttering up the code too much with occurrences of them. With only a little experience, they become mnemonic enough with only 1 or 2 characters.
It is still possible to make an assignment to, say, g.x when x was not already defined in g, and a different module can then access g.x. However, even though the interpreter permits it, this approach is not so transparent, and I do avoid it. There is still the possibility of accidentally creating a new variable in g as a result of a typo in the variable name for an assignment. Sometimes an examination of dir(g) is useful to discover any surprise names that may have arisen by such accident.
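Sketched concretely (g.py and the consumer below follow this answer's own naming convention):

# g.py - shared globals with default values
var = 0
debug = False

# worker.py - one of the modules sharing those globals
import g

def step():
    if g.debug:            # always reference through "g." ...
        print 'var is', g.var
    g.var += 1             # ... so updates are visible to every importer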
Define a module (call it "globalbaz") and have the variables defined inside it. All the modules using this "pseudoglobal" should import the "globalbaz" module and refer to it using "globalbaz.var_name".
This works regardless of the place of the change, you can change the variable before or after the import. The imported module will use the latest value. (I tested this in a toy example)
For clarification, globalbaz.py looks just like this:
var_name = "my_useful_string"
You can pass the globals of one module to another:
In Module A:

import module_b

my_var = 2
module_b.do_something_with_my_globals(globals())
print my_var

In Module B:

def do_something_with_my_globals(glob):  # glob is simply a dict
    glob["my_var"] = 3
Global variables are usually a bad idea, but you can do this by assigning to __builtins__:
__builtins__.foo = 'something'
print foo
Also, modules themselves are variables that you can access from any module. So if you define a module called my_globals.py:
# my_globals.py
foo = 'something'
Then you can use that from anywhere as well:
import my_globals
print my_globals.foo
Using modules rather than modifying __builtins__ is generally a cleaner way to do globals of this sort.
You can already do this with module-level variables. Modules are the same no matter what module they're being imported from. So you can make the variable a module-level variable in whatever module it makes sense to put it in, and access it or assign to it from other modules. It would be better to call a function to set the variable's value, or to make it a property of some singleton object. That way if you end up needing to run some code when the variable's changed, you can do so without breaking your module's external interface.
It's not usually a great way to do things — using globals seldom is — but I think this is the cleanest way to do it.
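A minimal sketch of the function-based variant (config.py and the names are illustrative, not anything from the answer):

# config.py - module acting as a singleton holder
_foo = None

def get_foo():
    return _foo

def set_foo(value):
    # Single choke point: validation or change hooks can be added here
    # later without breaking callers.
    global _foo
    _foo = value

Callers then use import config; config.set_foo(42) and config.get_foo(), so the module's external interface stays stable even if the storage behind it changes.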
I wanted to point out a case where the variable won't be found.
Cyclical imports may break the module behavior.
For example:
first.py
import second
var = 1
second.py
import first
print(first.var)  # raises AttributeError: at this point first is only
                  # partially initialized, so var is not defined yet
main.py
import first
In this example it should be obvious, but in a large code-base this can be really confusing.
I wondered if it would be possible to avoid some of the disadvantages of using global variables (see e.g. http://wiki.c2.com/?GlobalVariablesAreBad) by using a class namespace rather than a global/module namespace to pass values of variables. The following code indicates that the two methods are essentially identical. There is a slight advantage in using class namespaces as explained below.
The following code fragments also show that attributes or variables may be dynamically created and deleted in both global/module namespaces and class namespaces.
wall.py
# Note: no global variables are defined here
class router:
    """ Empty class """
I call this module 'wall' since it is used to bounce variables off of. It will act as a space to temporarily define global variables and class-wide attributes of the empty class 'router'.
source.py
import wall
def sourcefn():
    msg = 'Hello world!'
    wall.msg = msg
    wall.router.msg = msg
This module imports wall and defines a single function sourcefn which defines a message and emits it by two different mechanisms, one via a module-level global and one via the router class. Note that the variables wall.msg and wall.router.msg are defined here for the first time in their respective namespaces.
dest.py
import wall
def destfn():
    if hasattr(wall, 'msg'):
        print 'global: ' + wall.msg
        del wall.msg
    else:
        print 'global: ' + 'no message'
    if hasattr(wall.router, 'msg'):
        print 'router: ' + wall.router.msg
        del wall.router.msg
    else:
        print 'router: ' + 'no message'
This module defines a function destfn which uses the two different mechanisms to receive the messages emitted by source. It allows for the possibility that the variable 'msg' may not exist. destfn also deletes the variables once they have been displayed.
main.py
import source, dest
source.sourcefn()
dest.destfn() # variables deleted after this call
dest.destfn()
This module calls the previously defined functions in sequence. After the first call to dest.destfn the variables wall.msg and wall.router.msg no longer exist.
The output from the program is:
global: Hello world!
router: Hello world!
global: no message
router: no message
The above code fragments show that the module/global and the class/class variable mechanisms are essentially identical.
If a lot of variables are to be shared, namespace pollution can be managed either by using several wall-type modules, e.g. wall1, wall2 etc. or by defining several router-type classes in a single file. The latter is slightly tidier, so perhaps represents a marginal advantage for use of the class-variable mechanism.
This sounds like modifying the __builtin__ namespace. To do it:
import __builtin__
__builtin__.foo = 'some-value'
Do not use the __builtins__ directly (notice the extra "s") - apparently this can be a dictionary or a module. Thanks to ΤΖΩΤΖΙΟΥ for pointing this out, more can be found here.
Now foo is available for use everywhere.
I don't recommend doing this generally, but the use of this is up to the programmer.
Assigning to it must be done as above, just setting foo = 'some-other-value' will only set it in the current namespace.
I use this for a couple built-in primitive functions that I felt were really missing. One example is a find function that has the same usage semantics as filter, map, reduce.
def builtin_find(f, x, d=None):
    for i in x:
        if f(i):
            return i
    return d

import __builtin__
__builtin__.find = builtin_find
Once this is run (for instance, by importing near your entry point) all your modules can use find() as though, obviously, it was built in.
find(lambda i: i < 0, [1, 3, 0, -5, -10]) # Yields -5, the first negative.
Note: You can do this, of course, with filter and another line to test for zero length, or with reduce in one sort of weird line, but I always felt it was weird.
I could achieve cross-module modifiable (or mutable) variables by using a dictionary:
# in myapp.__init__
Timeouts = {} # cross-modules global mutable variables for testing purpose
Timeouts['WAIT_APP_UP_IN_SECONDS'] = 60
# in myapp.mod1
from myapp import Timeouts
def wait_app_up(project_name, port):
    # wait for the app for up to Timeouts['WAIT_APP_UP_IN_SECONDS']
    # ...

# in myapp.test.test_mod1
from myapp import Timeouts

def test_wait_app_up_fail(self):
    timeout_bak = Timeouts['WAIT_APP_UP_IN_SECONDS']
    Timeouts['WAIT_APP_UP_IN_SECONDS'] = 3
    with self.assertRaises(hlp.TimeoutException) as cm:
        wait_app_up(PROJECT_NAME, PROJECT_PORT)
    self.assertEqual("Timeout while waiting for App to start", str(cm.exception))
    Timeouts['WAIT_APP_UP_IN_SECONDS'] = timeout_bak  # restore the original value under the same key
When launching test_wait_app_up_fail, the actual timeout duration is 3 seconds.
