Need help properly using variables, python

So I have a lot of functions in a file called definitions.py.
My main file, main.py, accesses those definitions and also has functions of its own that use them.
In main.py I have 'from definitions import *'.
Both files rely on a set of 15 initial variables, which I have placed in definitions.py. This is all well and good, and I have all my functions working fine. The problem arises when I want to use my application as a model, where I will want to change some of the variables to see how the output differs.
Essentially I want my initial variables to be in a sort of bowl that is accessed each time a function is called, so I can swap and change the values in this bowl and the next function that is called uses the updated variables.
The problem I'm having at the moment is, I think, that because the variables are written in definitions.py, that's that and I can't change them.
Even in the Python shell I can set n1 to something else and then execute a function that uses n1, but it will use the old n1, not the new one, presumably because the variable hasn't changed in definitions.py.
Is there some sort of way to have live-access variables that I don't know about? Thank you.

You should use a class. For example, if your definitions.py file has:
variable1 = 3
variable2 = 'stuff'

def spam(arg1):
    return 'spam' + arg1

def eggs(arg1, arg2):
    return 'eggs' + arg1 + arg2
change it to
class Definitions():
    def __init__(self):
        self.variable1 = 3
        self.variable2 = 'stuff'

    def spam(self, arg1):
        return 'spam' + arg1

    def eggs(self, arg1, arg2):
        return 'eggs' + arg1 + arg2
Now, from your main.py file, you can import in a slightly different way and sweep multiple parameter values:
import definitions

for parameter in xrange(0, 10):
    defs = definitions.Definitions()
    defs.variable1 = parameter
    # do some stuff and store the result

# compare the various results
Remember that now your functions are inside a class, so instead of calling spam('mail'), you should call defs.spam('mail'), and so on.
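As a small follow-up sketch (Python 3 syntax; the results dictionary and the way the swept value is fed back into spam are my own illustration, not part of the original answer), the placeholder comments above might be filled in like this:

import definitions

results = {}
for parameter in range(0, 10):
    defs = definitions.Definitions()
    defs.variable1 = parameter            # change the value in the "bowl" for this run
    # spam() itself only concatenates strings, so pass the swapped value in explicitly;
    # any method that reads self.variable1 would also see the updated value
    results[parameter] = defs.spam(str(defs.variable1))

# compare the various results
for parameter, output in results.items():
    print(parameter, output)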

Related

Mocking a global variable in pytest

How do you mock a global variable in pytest? Here is a pair of example files:
File being tested, call it main.py:
MY_GLOBAL = 1

def foo():
    return MY_GLOBAL * 2

def main():
    pass  # some relevant invocation of foo somewhere here

if __name__ == '__main__':
    main()
File that is testing, call it test_main.py:
from main import foo

class TestFoo(object):
    def test_that_it_multiplies_by_global(self):
        expected = 2  # we could write this, but anyway ...
        actual = foo()
        assert actual == expected
This is just a dummy example of course, but how would you go about mocking MY_GLOBAL and giving it another value?
Thanks in advance, I've been kind of breaking my head over this and I bet it's really obvious.
The global variable is an attribute of the module, which you can patch using patch.object:
import main
from unittest.mock import patch

class TestFoo(object):
    def test_that_it_multiplies_by_global(self):
        with patch.object(main, 'MY_GLOBAL', 3):
            assert main.foo(4) == 12  # not 4
However, you want to make sure you are testing the right thing. Is foo supposed to multiply its argument by 1 (and the fact that it uses a global variable with a value of 1 an implementation detail), or is it supposed to multiply its argument by whatever value MY_GLOBAL has at the time of the call? The answer to that question will affect how you write your test.
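Since the question is specifically about pytest, an equivalent test can also be written with pytest's built-in monkeypatch fixture (a sketch using the question's zero-argument foo, not part of the original answer):

import main

def test_that_it_multiplies_by_global(monkeypatch):
    # monkeypatch undoes the change automatically when the test finishes
    monkeypatch.setattr(main, 'MY_GLOBAL', 3)
    assert main.foo() == 6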
It's useful to distinguish between module-level constants and global variables. The former are pretty common, but the latter are an anti-pattern in Python. You seem to have a module-level constant (read-only access to the var in normal production code). Global variables (R/W access in production code) should generally be refactored if possible.
For module constants:
If you can do so, it's generally more maintainable to refactor the functions that depend on module constants. This allows direct testing with alternate values as well as a single source of truth for the "constants" and backward compatibility. A minimal refactor is as simple as adding an optional parameter in each function that depends on the "constant" and doing a simple search-and-replace in that function, e.g.:
def foo(value=MY_GLOBAL):
    return value * 2
All other code can continue to call foo() as normal, but if you want to write tests with alternate values of MY_GLOBAL, you can simply call foo(value=7484).
If what you want is an actual global (with the global keyword and read/write access in production code), other approaches are needed.
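One commonly used alternative, sketched here as an illustration rather than taken from the original answer (the module and function names app_state, get_limit and set_limit are hypothetical), is to hide the mutable value behind accessor functions in a dedicated module, which keeps reads and writes in one place and is easy to patch in tests:

# app_state.py (hypothetical module)
_limit = 1

def get_limit():
    return _limit

def set_limit(value):
    global _limit
    _limit = value

# elsewhere in production code:
#   import app_state
#   print(app_state.get_limit() * 2)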

Not able to access an object in Python while creating an object of another class defined in a different file

I have a directory structure like this:
home/
main.py
lib/
mylib/
Textfile_Class.py
Excelfile_Class.py
globals.py (has all the global variables declared here)
functions.py
I created an object of Textfile_Class in main.py using
txt = TEXT(file).
Now, I want to use this variable txt while creating an object of Excelfile_Class for some operations (e.g. if the value of a variable in the txt object is 5, then perform a certain action in Excelfile_Class).
In Excelfile_Class, I am also importing all the global variables. String variables are accessible there, but I don't know why this object txt is not accessible there. Wherever I am referring to txt in Excelfile_Class (self.line_count = txt.count), I am getting the error below: AttributeError: 'NoneType' object has no attribute 'count'
Please help me understand why this is happening even though I have defined all the variables in a separate file and am importing them in all the files.
E.g. main.py:
import os
import sys

path = os.path.abspath('./lib')
sys.path.insert(0, path)

from mylib.Textfile_Class import *
from mylib.Excelfile_Class import *
from mylib.globals import *
from mylib.functions import *

if __name__ == "__main__":
    txt = TEXT(file)
    xcel = EXCEL(file)
E.g. globals.py:
global txt, xcel
txt = None
xcel = None
E.g. Textfile_Class.py:
from globals import *

class TEXT:
    def __init__(self, filename):
        self.count = 0
        with open(filename) as fp:
            for line in fp:
                self.count = self.count + 1
E.g. Excelfile_Class.py:
from globals import *

class EXCEL:
    def __init__(self, filename):
        self.line_count = 0
        self.operation(filename)

    def operation(self, file):
        self.line_count = txt.count
        if self.line_count:
            self.some_operation()
        else:
            self.other_operation()
When you assign a value to a variable name inside a function, you're not working with the global version of the variable any more; instead you have a completely new local variable.
You have to use the global keyword inside the function to indicate that you're working with the global variable.
From Python Doc FAQ
What are the rules for local and global variables in Python?
In Python, variables that are only referenced inside a function are implicitly global. If a variable is assigned a new value anywhere within the function's body, it's assumed to be local, and you need to explicitly declare it as 'global' if you want to rebind the global one.
Read more...
Example:
x = None  # x is a module-level global variable

def local():
    x = 5  # assigns 5 to a new local x; the global x is not affected

def glob():
    global x  # indicate we'll be working with the global x
    x = 5     # this affects the global variable

def print_global():
    print(x)  # just print the global variable

local()
print_global()  # Prints None
glob()
print_global()  # Prints 5
So, every time you rebind txt inside a function, you have to tell Python that you're working with the global version of txt.
Something else could also be happening!
Python executes module code line by line, so if, in the other modules (not the main module), you have code trying to access txt before a value has been assigned to it in
if __name__ == "__main__":
    txt = TEXT(file)
then you'll get the same error.
A recommendation:
Try to avoid the use of global variables; you already know that it isn't good practice and it leads to unstable code.
If your problem is that you want txt and xcel to be available at any time, anywhere, you could use the Singleton pattern (warning: Singleton is itself often considered an anti-pattern).
I will post an example for you, but first I would encourage you to redesign your program; I assure you it will be a good exercise!
Singleton example (again, this is an anti-pattern, but I prefer it to bare global variables):
class Globals(object):
    __instance = None

    def __init__(self):
        class wrapped_class:
            def __init__(self):
                self.txt = None
                self.excel = None
        if Globals.__instance is None:
            Globals.__instance = wrapped_class()

    def __getattr__(self, attrname):
        return getattr(self.__instance, attrname)

    def __setattr__(self, attrname, value):
        setattr(self.__instance, attrname, value)

glob = Globals()          # Since __instance is None, this instantiates wrapped_class and saves the reference in __instance
glob.txt = "Txt example"  # Modify the txt attribute of __instance.
glob_1 = Globals()        # Since __instance is not None, it stays as it is.
print(glob.txt)           # Prints "Txt example"
print(glob_1.txt)         # Prints "Txt example"
It is quite hard to tell where your problem is without having the code or the stack trace, but I would strongly advise that you read some documentation about Python best practices, the use of global variables and naming conventions. This by itself might solve your issues.
Naming conventions might sound silly, but those things and other syntax-related choices do matter in Python.
Also, you seem to be missing an __init__.py file in your module, which may or may not be important depending on the Python version you are using.
Here are a few links to get you started:
https://www.python.org/dev/peps/pep-0008/#naming-conventions
https://docs.python.org/2/tutorial/classes.html#private-variables-and-class-local-references
http://www.python-course.eu/global_vs_local_variables.php
http://gettingstartedwithpython.blogspot.be/2012/05/variable-scope.html
http://c2.com/cgi/wiki?GlobalVariablesAreBad
I don't know why it happened like that; if anyone knows, please share.
The problem got resolved when I used the method given in "Using global variables between files in Python".
So, finally, I put all the global variables in a function in globals.py and invoked it once in main.py. Then I used from mylib import globals and referenced the global variables inside the classes as globals.txt, and it worked fine.
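A minimal sketch of that final setup, under the assumption that the "function in globals.py" is an initializer as in the linked question (the function name, the import of TEXT and the sample path are illustrative):

# mylib/globals.py
def initialize():
    global txt, xcel
    txt = None
    xcel = None

# main.py
from mylib import globals
globals.initialize()
from mylib.Textfile_Class import TEXT

globals.txt = TEXT("data.txt")   # placeholder path; later reads see this same object

# mylib/Excelfile_Class.py
from mylib import globals

class EXCEL:
    def operation(self, file):
        self.line_count = globals.txt.count   # always the current object, never a stale copy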

Python - using a method's return value as a constant

I am trying to define some constants at the top of my file. There are no classes in the file, just imports, constants, and methods. Possibly due to poor design, I want to use a method inside of this file to set a constant. For example:
MY_CONSTANT = function(foo, bar)

def function(foo, bar):
    return 6
In this example, I want MY_CONSTANT to get assigned the int 6. This is a simplified version of the problem, as my function actually makes many expensive calls and I only want that function to be called once. I plan to use the constant inside of a loop.
This does not work because I get the following error:
NameError: name 'function' is not defined
Is there a better design for this, or how can I use a method call to set my constant?
You are trying to call a function before it has been defined:
def function(foo, bar):
    return 6

MY_CONSTANT = function(foo=None, bar=None)

>>> MY_CONSTANT
6
Edit: I set foo=None and bar=None going into function because I'm not sure where you have those defined either.
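If the underlying concern is only that the expensive work should run once even though the value is used inside a loop, another option (not from the original answer; expensive_function is a hypothetical stand-in) is to memoize the function with functools.lru_cache instead of precomputing a module-level constant:

import functools

@functools.lru_cache(maxsize=None)
def expensive_function(foo, bar):
    # pretend this performs many expensive calls
    return 6

# the body runs only on the first call; later calls in the loop reuse the cached result
for _ in range(1000):
    value = expensive_function(None, None)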

Pass variables between functions vs global variables

I have some code with several functions defined, which I call from a main container script. Each new function uses variables obtained with the previous functions, so it looks kind of like this:
import some_package
import other_package
import first_function as ff
import second_function as sf
import third_function as tf
import make_plot as mp

# Get values for three variables from the first function
var_1, var_2, var_3 = ff()
# Pass some of those values to the second function and get some more
var_4, var_5 = sf(var_1, var_3)
# Same with the third function
var_6, var_7, var_8, var_9 = tf(var_2, var_4, var_5)
# Call the plotting function with (almost) all variables
mp(var_1, var_2, var_3, var_5, var_6, var_7, var_8, var_9)
Is this more pythonic than using global variables? The issue with this methodology is that if I add or remove a variable from a given function, I'm forced to modify four places: the function itself, the call to that function in the main code, the call to the make_plot function in the main code, and the make_plot function itself. Is there a better or more recommended way to do this?
What about putting them in a class?
class Foo(object):
    def ff(self):
        self.var_1, self.var_2, self.var_3 = ff()

    def sf(self):
        self.var_4, self.var_5 = sf(self.var_1, self.var_3)

    def tf(self):
        self.var_6, self.var_7, self.var_8, self.var_9 = tf(self.var_2, self.var_4, self.var_5)

    def plot(self):
        mp(self.var_1, self.var_2, self.var_3,
           self.var_5, self.var_6, self.var_7, self.var_8, self.var_9)

foo = Foo()
foo.ff()
foo.sf()
foo.tf()
foo.plot()
Maybe some of these methods should be module-level functions that take a Foo instance, maybe some of these attributes should be variables passed around separately, and maybe there are really 2, or even 4, different classes here rather than 1. But the idea is that you've replaced 9 things to pass around with 1 or 2.
I'd suggest that what you want is a data structure which is filled in by the various functions and then passed into make_plot at the end. This largely applies in whatever language you're using.
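A sketch of that idea (the PlotData container and its field names are illustrative, not from the original code):

from dataclasses import dataclass

@dataclass
class PlotData:
    # one field per quantity the pipeline produces and the plot consumes
    var_1: float = 0.0
    var_2: float = 0.0
    var_3: float = 0.0

def make_plot(data):
    # adding a new quantity means adding a field here and filling it in one function,
    # rather than threading another positional argument through every call
    print(data.var_1, data.var_2, data.var_3)

data = PlotData()
data.var_1, data.var_2, data.var_3 = 1.0, 2.0, 3.0   # filled by ff()/sf()/tf() in the real code
make_plot(data)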

How to make a cross-module variable?

The __debug__ variable is handy in part because it affects every module. If I want to create another variable that works the same way, how would I do it?
The variable (let's be original and call it 'foo') doesn't have to be truly global, in the sense that if I change foo in one module, it is updated in others. I'd be fine if I could set foo before importing other modules and then they would see the same value for it.
If you need a global cross-module variable, maybe a simple module-level global variable will suffice.
a.py:
var = 1
b.py:
import a
print a.var
import c
print a.var
c.py:
import a
a.var = 2
Test:
$ python b.py
# -> 1 2
Real-world example: Django's global_settings.py (though in Django apps settings are used by importing the object django.conf.settings).
I don't endorse this solution in any way, shape or form. But if you add a variable to the __builtin__ module, it will be accessible as if a global from any other module that includes __builtin__ -- which is all of them, by default.
a.py contains
print foo
b.py contains
import __builtin__
__builtin__.foo = 1
import a
The result is that "1" is printed.
Edit: The __builtin__ module is available as the local symbol __builtins__ -- that's the reason for the discrepancy between two of these answers. Also note that __builtin__ has been renamed to builtins in python3.
I believe that there are plenty of circumstances in which it does make sense and it simplifies programming to have some globals that are known across several (tightly coupled) modules. In this spirit, I would like to elaborate a bit on the idea of having a module of globals which is imported by those modules which need to reference them.
When there is only one such module, I name it "g". In it, I assign default values for every variable I intend to treat as global. In each module that uses any of them, I do not use "from g import var", as this only results in a local variable which is initialized from g only at the time of the import. I make most references in the form g.var, and the "g." serves as a constant reminder that I am dealing with a variable that is potentially accessible to other modules.
If the value of such a global variable is to be used frequently in some function in a module, then that function can make a local copy: var = g.var. However, it is important to realize that assignments to var are local, and global g.var cannot be updated without referencing g.var explicitly in an assignment.
Note that you can also have multiple such globals modules shared by different subsets of your modules to keep things a little more tightly controlled. The reason I use short names for my globals modules is to avoid cluttering up the code too much with occurrences of them. With only a little experience, they become mnemonic enough with only 1 or 2 characters.
It is still possible to make an assignment to, say, g.x when x was not already defined in g, and a different module can then access g.x. However, even though the interpreter permits it, this approach is not so transparent, and I do avoid it. There is still the possibility of accidentally creating a new variable in g as a result of a typo in the variable name for an assignment. Sometimes an examination of dir(g) is useful to discover any surprise names that may have arisen by such accident.
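A minimal sketch of that layout (the module name g and the variable names are just examples):

# g.py -- shared "globals" with their default values
counter = 0

# worker.py
import g

def bump():
    g.counter += 1           # the g. prefix makes the cross-module access explicit

def report():
    local_copy = g.counter   # fine for reading, but assigning to local_copy
    return local_copy        # afterwards would not update g.counter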
Define a module (call it "globalbaz") and have the variables defined inside it. All the modules using this "pseudo-global" should import the "globalbaz" module and refer to it using "globalbaz.var_name".
This works regardless of where the change happens: you can change the variable before or after the import, and the importing module will use the latest value. (I tested this in a toy example.)
For clarification, globalbaz.py looks just like this:
var_name = "my_useful_string"
You can pass the globals of one module to another:
In Module A:
import module_b
my_var=2
module_b.do_something_with_my_globals(globals())
print my_var
In Module B:
def do_something_with_my_globals(glob):  # glob is simply a dict
    glob["my_var"] = 3
Global variables are usually a bad idea, but you can do this by assigning to __builtins__:
__builtins__.foo = 'something'
print foo
Also, modules themselves are variables that you can access from any module. So if you define a module called my_globals.py:
# my_globals.py
foo = 'something'
Then you can use that from anywhere as well:
import my_globals
print my_globals.foo
Using modules rather than modifying __builtins__ is generally a cleaner way to do globals of this sort.
You can already do this with module-level variables. Modules are the same no matter what module they're being imported from. So you can make the variable a module-level variable in whatever module it makes sense to put it in, and access it or assign to it from other modules. It would be better to call a function to set the variable's value, or to make it a property of some singleton object. That way if you end up needing to run some code when the variable's changed, you can do so without breaking your module's external interface.
It's not usually a great way to do things — using globals seldom is — but I think this is the cleanest way to do it.
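A sketch of the setter-function idea mentioned above (module and function names are illustrative, not from the answer):

# settings.py (illustrative)
_value = 0

def set_value(new_value):
    global _value
    # any "run some code when the variable changes" logic (validation,
    # logging, cache invalidation) can live here without changing callers
    _value = new_value

def get_value():
    return _value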
I wanted to point out that there is a case where the variable won't be found.
Cyclical imports may break the module behavior.
For example:
first.py
import second
var = 1
second.py
import first
print(first.var)  # AttributeError: this line runs before first.py has reached 'var = 1'
main.py
import first
In this example it should be obvious, but in a large code base this can be really confusing.
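One common way to avoid the failure (a sketch added here; the original answer only describes the problem) is to defer the access until after both modules have finished loading, for example by moving the import or the lookup into a function:

# second.py
def show_var():
    import first              # resolved at call time, after first.py has run 'var = 1'
    print(first.var)

first.py can then call second.show_var() once var has been assigned.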
I wondered if it would be possible to avoid some of the disadvantages of using global variables (see e.g. http://wiki.c2.com/?GlobalVariablesAreBad) by using a class namespace rather than a global/module namespace to pass values of variables. The following code indicates that the two methods are essentially identical. There is a slight advantage in using class namespaces as explained below.
The following code fragments also show that attributes or variables may be dynamically created and deleted in both global/module namespaces and class namespaces.
wall.py
# Note no definition of global variables
class router:
    """ Empty class """
I call this module 'wall' since it is used to bounce variables off of. It will act as a space to temporarily define global variables and class-wide attributes of the empty class 'router'.
source.py
import wall
def sourcefn():
    msg = 'Hello world!'
    wall.msg = msg
    wall.router.msg = msg
This module imports wall and defines a single function sourcefn, which defines a message and emits it by two different mechanisms, one via the module's global namespace and one via the router class. Note that the variables wall.msg and wall.router.msg are defined here for the first time in their respective namespaces.
dest.py
import wall

def destfn():
    if hasattr(wall, 'msg'):
        print 'global: ' + wall.msg
        del wall.msg
    else:
        print 'global: ' + 'no message'
    if hasattr(wall.router, 'msg'):
        print 'router: ' + wall.router.msg
        del wall.router.msg
    else:
        print 'router: ' + 'no message'
This module defines a function destfn which uses the two different mechanisms to receive the messages emitted by source. It allows for the possibility that the variable 'msg' may not exist. destfn also deletes the variables once they have been displayed.
main.py
import source, dest
source.sourcefn()
dest.destfn() # variables deleted after this call
dest.destfn()
This module calls the previously defined functions in sequence. After the first call to dest.destfn the variables wall.msg and wall.router.msg no longer exist.
The output from the program is:
global: Hello world!
router: Hello world!
global: no message
router: no message
The above code fragments show that the module/global and the class/class variable mechanisms are essentially identical.
If a lot of variables are to be shared, namespace pollution can be managed either by using several wall-type modules, e.g. wall1, wall2 etc. or by defining several router-type classes in a single file. The latter is slightly tidier, so perhaps represents a marginal advantage for use of the class-variable mechanism.
This sounds like modifying the __builtin__ namespace. To do it:
import __builtin__
__builtin__.foo = 'some-value'
Do not use __builtins__ directly (notice the extra "s"); apparently this can be either a dictionary or a module. Thanks to ΤΖΩΤΖΙΟΥ for pointing this out.
Now foo is available for use everywhere.
I don't recommend doing this generally, but the use of this is up to the programmer.
Assigning to it must be done as above; just setting foo = 'some-other-value' will only set it in the current namespace.
I use this for a couple of built-in primitive functions that I felt were really missing. One example is a find function that has the same usage semantics as filter, map and reduce.
def builtin_find(f, x, d=None):
    for i in x:
        if f(i):
            return i
    return d

import __builtin__
__builtin__.find = builtin_find
Once this is run (for instance, by importing near your entry point) all your modules can use find() as though, obviously, it was built in.
find(lambda i: i < 0, [1, 3, 0, -5, -10]) # Yields -5, the first negative.
Note: You can do this, of course, with filter and another line to test for zero length, or with reduce in one sort of weird line, but I always felt it was weird.
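For what it's worth, the same behaviour can be had without touching __builtin__ by using next() with a generator expression and a default value (a sketch, not part of the original answer):

def find(f, x, d=None):
    return next((i for i in x if f(i)), d)

find(lambda i: i < 0, [1, 3, 0, -5, -10])   # -5, the first negative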
I could achieve cross-module modifiable (or mutable) variables by using a dictionary:
# in myapp.__init__
Timeouts = {}  # cross-module global mutable variables, for testing purposes
Timeouts['WAIT_APP_UP_IN_SECONDS'] = 60

# in myapp.mod1
from myapp import Timeouts

def wait_app_up(project_name, port):
    # wait for the app until Timeouts['WAIT_APP_UP_IN_SECONDS']
    # ...

# in myapp.test.test_mod1
from myapp import Timeouts

def test_wait_app_up_fail(self):
    timeout_bak = Timeouts['WAIT_APP_UP_IN_SECONDS']
    Timeouts['WAIT_APP_UP_IN_SECONDS'] = 3
    with self.assertRaises(hlp.TimeoutException) as cm:
        wait_app_up(PROJECT_NAME, PROJECT_PORT)
    self.assertEqual("Timeout while waiting for App to start", str(cm.exception))
    Timeouts['WAIT_APP_UP_IN_SECONDS'] = timeout_bak  # restore the original value
When launching test_wait_app_up_fail, the actual timeout duration is 3 seconds.
