Calling scripts and variables between them iteratively in Python

I am trying to automate a script that:
reads an input defined in another script (e.g. Input_01.py or Input_02.py, each containing just a variable definition such as J = 6 or J = 7); and
uses that variable within a function, i.e.:
def foo_function():
    A = J
    A += 3
    print(A)
Now, if I don't need to automate anything, the thing is pretty straightforward:
I just type from Input_01 import J and that's it. Then I can do the same for Input_02 and repeat the operation.
But my idea is to make a script (Multi_Run.py) that automates the process of calling the key script (called "code_foo.py") several times, once for each input file, each time reading a different input file (e.g. Input_01, Input_02, ..., etc.)
This is my current "Multi_Run.py" script (simplified for the case of just two different input files):
Inputs = []
for i in range(2):
    Inputs.append("Input_0%s" % (i + 1))

for Counter in Inputs:
    import code_foo
    code_foo.foo_function()
But now I cannot say, within "code_foo.py", something like from Counter import J, because that won't work for two reasons:
First, Counter is not a variable in code_foo (this seems to be solvable by adding a line like from __main__ import *).
Second, the import statement cannot read the string inside the Counter variable; it wants the name of the module directly (e.g. Input_01), which prevents automation. I tried to solve this by using importlib.import_module, but it doesn't seem to work properly (i.e. the "Input_##" module is "imported" somehow but not "run", which means the "J = #somenumber" line is not executed when importing the script, so I get an error because the J variable is not defined).
For better clarity, my current code_foo.py script is the following:
import importlib
from __main__ import *
importlib.import_module(Counter)

def foo_function():
    A = J
    A += 3
    print(A)
Any hints? Thanks a lot.

AFAICT (your question is still quite unclear), it looks like you have two issues here.
The first one is about Python's imports and namespaces. Python imports are not "includes" à la C or PHP, so importing a module doesn't make the names defined in the module directly available in the importer's namespace; you have to use the qualified name (module.name). Also, Python doesn't have a "process global" namespace: Python's "globals" are only global to the current module's namespace, so defining a global name in module "a" doesn't make it magically available to modules imported by module "a". This means that your foo_function in code_foo.py cannot directly access names defined in your main script.
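A tiny illustration of that point (module name invented for the example):

# config.py (hypothetical)
J = 6

# elsewhere:
import config
print(config.J)   # qualified access works: prints 6
# print(J)        # NameError -- importing config does not put J in this namespace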
Actually - and this is the second issue - your problem comes from foo_function depending on a global name that is defined elsewhere. This is a design mistake. As a general rule, you should not use globals when you don't need them, and you very rarely need globals. Just write your functions so they take "external" values as argument, and everything becomes much simpler AND much easier to read, test and maintain.
In your case, all you need to do is to 1/ rewrite foo_function() so it takes J as an argument and 2/ rewrite multirun so that it gets J from your "inputs" modules and passes it to foo_function:
# code_foo.py
def foo_function(j):
    a = j
    a += 3
    print(a)
and
# multirun.py
import importlib
import code_foo

def main():
    inputs = ["Input_0%s" % i for i in range(1, 3)]
    for modname in inputs:
        module = importlib.import_module(modname)
        j = module.J
        code_foo.foo_function(j)

if __name__ == "__main__":
    main()
As a side note: Python's naming conventions are to use all_lower names for modules and variables - ALL_CAPS names are for pseudo-constants.

Related

Python global variable in import * [duplicate]

I've run into a bit of a wall importing modules in a Python script. I'll do my best to describe the error, why I run into it, and why I'm trying this particular approach to solve my problem (which I will describe in a second):
Let's suppose I have a module in which I've defined some utility functions/classes, which refer to entities defined in the namespace into which this auxiliary module will be imported (let "a" be such an entity):
module1:
def f():
    print a
And then I have the main program, where "a" is defined, into which I want to import those utilities:
import module1
a = 3
module1.f()
Executing the program will trigger the following error:
Traceback (most recent call last):
  File "Z:\Python\main.py", line 10, in <module>
    module1.f()
  File "Z:\Python\module1.py", line 3, in f
    print a
NameError: global name 'a' is not defined
Similar questions have been asked in the past (two days ago, d'uh) and several solutions have been suggested, however I don't really think these fit my requirements. Here's my particular context:
I'm trying to make a Python program which connects to a MySQL database server and displays/modifies data with a GUI. For cleanliness' sake, I've defined the bunch of auxiliary/utility MySQL-related functions in a separate file. However, they all have a common variable, which I had originally defined inside the utilities module: the cursor object from the MySQLdb module.
I later realised that the cursor object (which is used to communicate with the db server) should be defined in the main module, so that both the main module and anything that is imported into it can access that object.
End result would be something like this:
utilities_module.py:
def utility_1(args):
    # code which references a variable named "cur"

def utility_n(args):
    # etcetera
And my main module:
program.py:
import MySQLdb, Tkinter
db=MySQLdb.connect(#blahblah) ; cur=db.cursor() #cur is defined!
from utilities_module import *
And then, as soon as I try to call any of the utilities functions, it triggers the aforementioned "global name not defined" error.
A particular suggestion was to have a "from program import cur" statement in the utilities file, such as this:
utilities_module.py:
from program import cur
#rest of function definitions
program.py:
import Tkinter, MySQLdb
db=MySQLdb.connect(#blahblah) ; cur=db.cursor() #cur is defined!
from utilities_module import *
But that's cyclic import or something like that and, bottom line, it crashes too. So my question is:
How in hell can I make the "cur" object, defined in the main module, visible to those auxiliary functions which are imported into it?
Thanks for your time and my deepest apologies if the solution has been posted elsewhere. I just can't find the answer myself and I've got no more tricks in my book.
Globals in Python are global to a module, not across all modules. (Many people are confused by this, because in, say, C, a global is the same across all implementation files unless you explicitly make it static.)
There are different ways to solve this, depending on your actual use case.
Before even going down this path, ask yourself whether this really needs to be global. Maybe you really want a class, with f as an instance method, rather than just a free function? Then you could do something like this:
import module1
thingy1 = module1.Thingy(a=3)
thingy1.f()
If you really do want a global, but it's just there to be used by module1, set it in that module.
import module1
module1.a = 3
module1.f()
On the other hand, if a is shared by a whole lot of modules, put it somewhere else, and have everyone import it:
import shared_stuff
import module1
shared_stuff.a = 3
module1.f()
… and, in module1.py:
import shared_stuff

def f():
    print shared_stuff.a
Don't use a from import unless the variable is intended to be a constant. from shared_stuff import a would create a new a variable initialized to whatever shared_stuff.a referred to at the time of the import, and this new a variable would not be affected by assignments to shared_stuff.a.
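A minimal sketch of that pitfall, using the module names from above:

# shared_stuff.py
a = 0

# main.py
import shared_stuff
from shared_stuff import a   # binds a local name to the *current* value of shared_stuff.a

shared_stuff.a = 3
print(a)               # still 0 -- the local binding is unaffected
print(shared_stuff.a)  # 3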
Or, in the rare case that you really do need it to be truly global everywhere, like a builtin, add it to the builtin module. The exact details differ between Python 2.x and 3.x. In 3.x, it works like this:
import builtins
import module1
builtins.a = 3
module1.f()
As a workaround, you could consider setting environment variables in the outer layer, like this.
main.py:
import os
os.environ['MYVAL'] = str(myintvariable)
mymodule.py:
import os

myval = None
if 'MYVAL' in os.environ:
    myval = os.environ['MYVAL']
As an extra precaution, handle the case when MYVAL is not defined inside the module.
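One caveat worth adding: os.environ values are always strings, so the receiving module has to convert the value back to the type it expects. A minimal sketch (the int conversion is an assumption about what MYVAL holds):

# mymodule.py
import os

myval = None
if 'MYVAL' in os.environ:
    myval = int(os.environ['MYVAL'])  # os.environ only stores strings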
This post is just an observation about Python behaviour I encountered. Maybe the advice you read above doesn't work for you if you made the same mistake I describe below.
Namely, I have a module which contains global/shared variables (as suggested above):
# sharedstuff.py
globaltimes_randomnode = []
globalist_randomnode = []
Then I had the main module which imports the shared stuff with:
import sharedstuff as shared
and some other modules, called by the main module, that actually populated these arrays. When exiting those other modules I could clearly see that the arrays were populated, but when reading them back in the main module they were empty. This was rather strange to me (well, I am new to Python). However, when I changed the way I import sharedstuff.py in the main module to:
from sharedstuff import *
it worked (the arrays were populated).
Just sayin'
A function uses the globals of the module it's defined in. Instead of setting a = 3, for example, you should be setting module1.a = 3. So, if you want cur available as a global in utilities_module, set utilities_module.cur.
A better solution: don't use globals. Pass the variables you need into the functions that need it, or create a class to bundle all the data together, and pass it when initializing the instance.
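For instance, the "bundle the data in a class" option might look like this (class and method names are invented for the example):

# utilities_module.py
class DbUtilities:
    def __init__(self, cur):
        self.cur = cur           # cursor supplied by the caller, no global needed

    def utility_1(self, query):
        self.cur.execute(query)  # every method reaches the cursor via self

# program.py would then do:
# utils = DbUtilities(cur)
# utils.utility_1("SELECT ...")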
The easiest solution to this particular problem would have been to add another function within the module that would have stored the cursor in a variable global to the module. Then all the other functions could use it as well.
module1:
cursor = None

def setCursor(cur):
    global cursor
    cursor = cur

def method(some, args):
    global cursor
    do_stuff(cursor, some, args)
main program:
import module1
cursor = get_a_cursor()
module1.setCursor(cursor)
module1.method()
Since globals are module-specific, you can add the following function to all imported modules, and then use it to:
add individual variables (as a dictionary) to those modules' globals, or
transfer your main module's globals to them.
addglobals = lambda x: globals().update(x)
Then all you need to do to pass on the current globals is:
import module
module.addglobals(globals())
Since I haven't seen it in the answers above, I thought I would add my simple workaround, which is just to add a global_dict argument to the function requiring the calling module's globals, and then pass the dict into the function when calling; e.g.:
# external_module
def imported_function(global_dict=None):
    print(global_dict["a"])

# calling_module
a = 12
from external_module import imported_function
imported_function(global_dict=globals())
>>> 12
The OOP way of doing this would be to make your module a class instead of a set of unbound methods. Then you could use __init__ or a setter method to set the variables from the caller for use in the module methods.
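A rough sketch of that OOP approach (all names hypothetical):

# external_module.py
class External:
    def __init__(self, a):
        self.a = a               # state is set by the caller, not pulled from globals

    def imported_function(self):
        print(self.a)

# calling_module.py would then do:
# from external_module import External
# External(a=12).imported_function()   # prints 12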
Update
To test the theory, I created a module and put it on pypi. It all worked perfectly.
pip install superglobals
Short answer
This works fine in Python 2 or 3:
import inspect

def superglobals():
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals
save as superglobals.py and employ in another module thusly:
from superglobals import *
superglobals()['var'] = value
Extended Answer
You can add some extra functions to make things more attractive.
def superglobals():
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals

def getglobal(key, default=None):
    """
    getglobal(key[, default]) -> value

    Return the value for key if key is in the global dictionary, else default.
    """
    _globals = dict(inspect.getmembers(
        inspect.stack()[len(inspect.stack()) - 1][0]))["f_globals"]
    return _globals.get(key, default)

def setglobal(key, value):
    _globals = superglobals()
    _globals[key] = value

def defaultglobal(key, value):
    """
    defaultglobal(key, value)

    Set the value of global variable `key` if it is not otherwise set.
    """
    _globals = superglobals()
    if key not in _globals:
        _globals[key] = value
Then use thusly:
from superglobals import *
setglobal('test', 123)
defaultglobal('test', 456)
assert(getglobal('test') == 123)
Justification
The "python purity league" answers that litter this question are perfectly correct, but in some environments (such as IDAPython) which is basically single threaded with a large globally instantiated API, it just doesn't matter as much.
It's still bad form and a bad practice to encourage, but sometimes it's just easier. Especially when the code you are writing isn't going to have a very long life.

How to mimic Python modules without import?

I’ve tried to develop a "module expander" tool for Python 3, but I have some issues.
The idea is the following: for a given Python script main.py, the tool generates a functionally equivalent Python script expanded_main.py by replacing each import statement with the actual code of the imported module; this assumes that the Python source code of the imported module is accessible. To do the job the right way, I’m using the builtin module ast of Python as well as astor, a third-party tool allowing to dump the AST back into Python source. The motivation of this import expander is to be able to compile a script into one single bytecode chunk, so the Python VM does not need to take care of importing modules (this could be useful for MicroPython, for instance).
The simplest case is the statement:
from my_module1 import *
To transform this, my tool looks for a file my_module1.py and replaces the import statement with the content of this file. Then expanded_main.py can access any name defined in my_module1, as if the module had been imported the normal way. I don’t care about subtle side effects that may reveal the trick. Also, to simplify, I treat from my_module1 import a, b, c like the previous import (with asterisk), without caring about possible side effects. So far so good.
Now here is my point. How could you handle this flavor of import:
import my_module2
My first idea was to mimic this by creating a class having the same name as the module and copying the content of the Python file indented:
class my_module2:
    # content of my_module2.py
    …
This actually works in many cases but, sadly, I discovered several glitches: one of them is that it fails for functions whose body refers to a global variable defined in the module. For example, consider the following two Python files:
# my_module2.py
g = "Hello"

def greetings():
    print(g + " World!")
and
# main.py
import my_module2
print(my_module2.g)
my_module2.greetings()
At execution, main.py prints "Hello" and "Hello World!". Now, my expander tool shall generate this:
# expanded_main.py
class my_module2:
    g = "Hello"

    def greetings():
        print(g + " World!")

print(my_module2.g)
my_module2.greetings()
At execution of expanded_main.py, the first print statement is OK ("Hello") but the greetings function raises an exception: NameError: name 'g' is not defined.
What actually happens is that:
in the module my_module2, g is a global variable;
in the class my_module2, g is a class variable, which should be referred to as my_module2.g.
Similar side effects happen when you define functions, classes, etc. in my_module2.py and want to refer to them from other functions or classes in the same my_module2.py.
Any idea how these problems could be solved?
Apart from classes, are there other Python constructs that allow mimicking a module?
Final note: I’m aware that the tool should take care (1) of nested imports (recursion) and (2) of possible multiple imports of the same module. I don't expect to discuss these topics here.
You can execute the source code of a module in the scope of a function, specifically an instance method. The attributes can then be made available by defining __getattr__ on the corresponding class and keeping a copy of the initial function's locals(). Here is some sample code:
class Importer:
    def __init__(self):
        g = "Hello"
        def greetings():
            print(g + " World!")
        self._attributes = locals()

    def __getattr__(self, item):
        return self._attributes[item]

module1 = Importer()
print(module1.g)
module1.greetings()
Nested imports are handled naturally by replacing them the same way with an instance of Importer. Duplicate imports shouldn't be a problem either.

importing a function which involves another function python

I came across a strange situation while writing two separate functions in the same ".py" file. I am using Python 2.7.12. Here is my code:
def do_the_job(n, list):
    for i in range(2, n):
        a = list[i - 1]
        b = list[i - 2]
        ap_me = list[i - a] + list[i - b]
        list.append(ap_me)
    return list
def func_two(n, k):
    counter = 0
    list = [1, 1]
    do_the_job(n, list)
    for i in range(n):
        if list[i] >= k:
            counter += 1
    return counter
Then, to import, I write from my_py import func_two in the console. When I call func_two(10, 3) it imports the do_the_job function too and runs successfully. However, and here comes the reason for this post, when I call do_the_job(7, [1, 1]) in the console, it raises NameError: name 'do_the_job' is not defined.
How come I can use the do_the_job function as part of func_two with this import, but cannot use it on its own with the same import status?
Functions are given a reference to a global namespace. That namespace is the module they are defined in.
Because both func_two and do_the_job are defined in the same module, they both 'live' in that namespace. Using the name do_the_job in the body of func_two will be resolved by looking at the globals for func_two and that'll find the other function.
When you import a module, the whole module is always loaded; you can find it in the sys.modules mapping. Once loaded (which only needs to be done once), the importing namespace is updated to add the names you imported; these are just references to objects in the imported module namespace.
So the from my_py import func_two import ensures the my_py module is loaded (stored in sys.modules['my_py']) and then the name func_two is added to your current namespace to reference sys.modules['my_py'].func_two.
You did not, however, import do_the_job into your current namespace, so you can't use that name. It is not part of your global namespace. Either import it explicitly, or reference it via sys.modules['my_py'].do_the_job.
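Concretely, either of these imports makes the name usable on its own:

from my_py import func_two, do_the_job   # import both names explicitly
do_the_job(7, [1, 1])

# or keep the module namespace:
import my_py
my_py.do_the_job(7, [1, 1])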
You may want to read up on how Python names work, I can heartily recommend you read Ned Batchelder's Facts and myths about Python names and values. It doesn't directly cover importing, but it does offer vital insights that'll help you understand this specific situation better.

Multiple Files handling with Codependency

I just finished the tutorial for making a rogue-like game and I'm on my way to implementing features.
The problem is, the whole game is a single file with 1k+ lines.
As you can see:
http://roguebasin.roguelikedevelopment.org/index.php?title=Complete_Roguelike_Tutorial,_using_python%2Blibtcod,_part_13_code
I want to divide it into different files/folders to handle the implementation better. Maybe a file for each aspect of the game, like map/player/npcs/items... but at least divide it into Classes/Functions/Main.
The problem is, when I put this in the Main.py:
from Classes import *
from Functions import *
I get
NameError: name 'libtcod' is not defined
Which is a Module used in Main.
Then I tried to import the Main.py in the Classes.py and Functions.py
And get the
NameError: global name 'MAP_WIDTH' is not defined
MAP_WIDTH is a global variable in Main.py
I also tried to import the whole Main.py in Classes.py and Functions.py
But I get:
NameError: name 'Fighter' is not defined
Fighter is a Class inside Classes.py
Can anyone help me sort this out so I can start implementing features?
EDIT: One simple example is:
Main.py
from Functions import *

def plus_one(ab):
    return ab + 1

a = 1
b = 2
c = add_things()
print plus_one(c)
Functions.py
from Main import *

def add_things():
    global a, b
    return a + b
It's a simple example, but in the project there is a lot of mutual dependency between the classes/functions and the main file.
There are many issues with your code and your planned program architecture. Please read my comment on your post. You need to shore up your knowledge of object oriented programming.
First, it is highly recommended to never use from Classes import *. You should use import Classes. Then, to access functions or constants from the module, you would use Classes.function_name or Classes.constant. For more info on how to import properly in Python, see: http://effbot.org/zone/import-confusion.htm
Second, global variables are not recommended in Python. But if you do need them, you need to remember that in Python a global variable means global to a module, not to your entire program. Global variables in Python are a strange beast. If you only need to read a global variable, nothing special is required. However, if you want to modify a global variable from within a function, then you must use the global keyword.
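A quick illustration of that read/write asymmetry:

counter = 0

def read_counter():
    return counter    # reading a module-level name needs nothing special

def bump_counter():
    global counter    # required because the function rebinds the name
    counter += 1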
Thirdly, what you are doing is called a circular dependency: module A imports module B and module B imports module A. You can define shared functions, classes, etc. in a third module C; then both A and B can import module C. You can also define your constants like MAP_WIDTH in module C and access them from A or B with C.MAP_WIDTH, provided you have an import C, as sketched below.
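A minimal sketch of that layout (file contents are illustrative):

# c.py -- shared constants; imports neither A nor B
MAP_WIDTH = 80

# a.py
import c
print(c.MAP_WIDTH)

# b.py
import c

def half_width():
    return c.MAP_WIDTH // 2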

How to make a cross-module variable?

The __debug__ variable is handy in part because it affects every module. If I want to create another variable that works the same way, how would I do it?
The variable (let's be original and call it 'foo') doesn't have to be truly global, in the sense that if I change foo in one module, it is updated in others. I'd be fine if I could set foo before importing other modules and then they would see the same value for it.
If you need a global cross-module variable maybe just simple global module-level variable will suffice.
a.py:
var = 1
b.py:
import a
print a.var
import c
print a.var
c.py:
import a
a.var = 2
Test:
$ python b.py
# -> 1 2
Real-world example: Django's global_settings.py (though in Django apps settings are used by importing the object django.conf.settings).
I don't endorse this solution in any way, shape or form. But if you add a variable to the __builtin__ module, it will be accessible as if a global from any other module that includes __builtin__ -- which is all of them, by default.
a.py contains
print foo
b.py contains
import __builtin__
__builtin__.foo = 1
import a
The result is that "1" is printed.
Edit: The __builtin__ module is available as the local symbol __builtins__ -- that's the reason for the discrepancy between two of these answers. Also note that __builtin__ has been renamed to builtins in python3.
I believe that there are plenty of circumstances in which it does make sense and it simplifies programming to have some globals that are known across several (tightly coupled) modules. In this spirit, I would like to elaborate a bit on the idea of having a module of globals which is imported by those modules which need to reference them.
When there is only one such module, I name it "g". In it, I assign default values for every variable I intend to treat as global. In each module that uses any of them, I do not use "from g import var", as this only results in a local variable which is initialized from g only at the time of the import. I make most references in the form g.var, and the "g." serves as a constant reminder that I am dealing with a variable that is potentially accessible to other modules.
If the value of such a global variable is to be used frequently in some function in a module, then that function can make a local copy: var = g.var. However, it is important to realize that assignments to var are local, and global g.var cannot be updated without referencing g.var explicitly in an assignment.
Note that you can also have multiple such globals modules shared by different subsets of your modules to keep things a little more tightly controlled. The reason I use short names for my globals modules is to avoid cluttering up the code too much with occurrences of them. With only a little experience, they become mnemonic enough with only 1 or 2 characters.
It is still possible to make an assignment to, say, g.x when x was not already defined in g, and a different module can then access g.x. However, even though the interpreter permits it, this approach is not so transparent, and I do avoid it. There is still the possibility of accidentally creating a new variable in g as a result of a typo in the variable name for an assignment. Sometimes an examination of dir(g) is useful to discover any surprise names that may have arisen by such accident.
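In code, the pattern described above might look like this (contents invented for illustration):

# g.py -- the shared-globals module, with defaults
var = 0

# worker.py
import g

def use_it():
    print(g.var)        # always reads the current shared value
    local = g.var       # a local copy; rebinding 'local' does not touch g.var
    g.var = local + 1   # updating the shared value requires assigning to g.var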
Define a module (call it "globalbaz") and have the variables defined inside it. All the modules using this "pseudoglobal" should import the "globalbaz" module and refer to it using "globalbaz.var_name".
This works regardless of the place of the change, you can change the variable before or after the import. The imported module will use the latest value. (I tested this in a toy example)
For clarification, globalbaz.py looks just like this:
var_name = "my_useful_string"
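Usage would then look like this (module names as above):

import globalbaz

print(globalbaz.var_name)            # -> my_useful_string
globalbaz.var_name = "a new value"   # seen by every module that imports globalbaz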
You can pass the globals of one module to another:
In Module A:
import module_b
my_var = 2
module_b.do_something_with_my_globals(globals())
print my_var
In Module B:
def do_something_with_my_globals(glob):  # glob is simply a dict.
    glob["my_var"] = 3
Global variables are usually a bad idea, but you can do this by assigning to __builtins__:
__builtins__.foo = 'something'
print foo
Also, modules themselves are variables that you can access from any module. So if you define a module called my_globals.py:
# my_globals.py
foo = 'something'
Then you can use that from anywhere as well:
import my_globals
print my_globals.foo
Using modules rather than modifying __builtins__ is generally a cleaner way to do globals of this sort.
You can already do this with module-level variables. Modules are the same no matter what module they're being imported from. So you can make the variable a module-level variable in whatever module it makes sense to put it in, and access it or assign to it from other modules. It would be better to call a function to set the variable's value, or to make it a property of some singleton object. That way if you end up needing to run some code when the variable's changed, you can do so without breaking your module's external interface.
It's not usually a great way to do things — using globals seldom is — but I think this is the cleanest way to do it.
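One way to sketch the "setter function" idea mentioned above (names invented for the example):

# settings.py
_value = None

def set_value(v):
    global _value
    _value = v          # a single place to add validation or change hooks later

def get_value():
    return _value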
I wanted to post an answer pointing out that there is a case where the variable won't be found.
Cyclical imports may break the module behavior.
For example:
first.py
import second
var = 1
second.py
import first
print(first.var)  # raises an error: 'first' is only partially initialized at this point, so 'var' has not been defined yet
main.py
import first
In this example it should be obvious, but in a large code-base this can be really confusing.
I wondered if it would be possible to avoid some of the disadvantages of using global variables (see e.g. http://wiki.c2.com/?GlobalVariablesAreBad) by using a class namespace rather than a global/module namespace to pass values of variables. The following code indicates that the two methods are essentially identical. There is a slight advantage in using class namespaces as explained below.
The following code fragments also show that attributes or variables may be dynamically created and deleted in both global/module namespaces and class namespaces.
wall.py
# Note: no global variables are defined here
class router:
    """ Empty class """
I call this module 'wall' since it is used to bounce variables off of. It will act as a space to temporarily define global variables and class-wide attributes of the empty class 'router'.
source.py
import wall

def sourcefn():
    msg = 'Hello world!'
    wall.msg = msg
    wall.router.msg = msg
This module imports wall and defines a single function sourcefn, which defines a message and emits it by two different mechanisms: one via a module-level variable on wall and one via the router class. Note that the variables wall.msg and wall.router.msg are defined here for the first time in their respective namespaces.
dest.py
import wall

def destfn():
    if hasattr(wall, 'msg'):
        print 'global: ' + wall.msg
        del wall.msg
    else:
        print 'global: ' + 'no message'
    if hasattr(wall.router, 'msg'):
        print 'router: ' + wall.router.msg
        del wall.router.msg
    else:
        print 'router: ' + 'no message'
This module defines a function destfn which uses the two different mechanisms to receive the messages emitted by source. It allows for the possibility that the variable 'msg' may not exist. destfn also deletes the variables once they have been displayed.
main.py
import source, dest
source.sourcefn()
dest.destfn() # variables deleted after this call
dest.destfn()
This module calls the previously defined functions in sequence. After the first call to dest.destfn the variables wall.msg and wall.router.msg no longer exist.
The output from the program is:
global: Hello world!
router: Hello world!
global: no message
router: no message
The above code fragments show that the module/global and the class/class variable mechanisms are essentially identical.
If a lot of variables are to be shared, namespace pollution can be managed either by using several wall-type modules, e.g. wall1, wall2 etc. or by defining several router-type classes in a single file. The latter is slightly tidier, so perhaps represents a marginal advantage for use of the class-variable mechanism.
This sounds like modifying the __builtin__ namespace. To do it:
import __builtin__
__builtin__.foo = 'some-value'
Do not use __builtins__ directly (notice the extra "s") - apparently this can be a dictionary or a module. Thanks to ΤΖΩΤΖΙΟΥ for pointing this out; more can be found here.
Now foo is available for use everywhere.
I don't recommend doing this generally, but the use of this is up to the programmer.
Assigning to it must be done as above, just setting foo = 'some-other-value' will only set it in the current namespace.
I use this for a couple built-in primitive functions that I felt were really missing. One example is a find function that has the same usage semantics as filter, map, reduce.
def builtin_find(f, x, d=None):
    for i in x:
        if f(i):
            return i
    return d

import __builtin__
__builtin__.find = builtin_find
Once this is run (for instance, by importing near your entry point) all your modules can use find() as though, obviously, it was built in.
find(lambda i: i < 0, [1, 3, 0, -5, -10]) # Yields -5, the first negative.
Note: You can do this, of course, with filter and another line to test for zero length, or with reduce in one sort of weird line, but I always felt it was weird.
I could achieve cross-module modifiable (or mutable) variables by using a dictionary:
# in myapp.__init__
Timeouts = {}  # cross-module global mutable variables for testing purposes
Timeouts['WAIT_APP_UP_IN_SECONDS'] = 60

# in myapp.mod1
from myapp import Timeouts

def wait_app_up(project_name, port):
    # wait for the app until Timeouts['WAIT_APP_UP_IN_SECONDS']
    # ...

# in myapp.test.test_mod1
from myapp import Timeouts

def test_wait_app_up_fail(self):
    timeout_bak = Timeouts['WAIT_APP_UP_IN_SECONDS']
    Timeouts['WAIT_APP_UP_IN_SECONDS'] = 3
    with self.assertRaises(hlp.TimeoutException) as cm:
        wait_app_up(PROJECT_NAME, PROJECT_PORT)
    self.assertEqual("Timeout while waiting for App to start", str(cm.exception))
    Timeouts['WAIT_APP_UP_IN_SECONDS'] = timeout_bak  # restore the original timeout
When launching test_wait_app_up_fail, the actual timeout duration is 3 seconds.
