I do not really understand how python handles global variables if the code is split into different files.
Assuming I have 3 files: class1.py, class2.py, main.py
In main.py I define a global variable
from class1 import Class1
from class2 import Class2
global sys
sys = constructor()
This object contains information about the system which I simulate and is used and manipulated by the classes defined in class1.py and in class2.py.
One could of course argue that this is bad style and one should avoid exploiting global variables like this, but this is not the point here.
If I now use sys in either class, it is unknown. To use a global variable one could of course define it somewhere and then import that file. But then the changes made by the classes would not affect each other, so I don't want to do this.
Another way would be to define a new class SuperClass where sys is a member. If Class1 and Class2 then inherit from SuperClass, I could probably do some stuff with the super keyword. I do not really want to do this...
Long story short... Is there a way to define a Python object such that it behaves similarly to a C-style global variable?
Maybe it helps if I give an example:
sys includes the system frequency
a function of Class1 changes the system frequency
a function of Class2 simulates stuff and uses the system frequency
based on this the system power is changed in sys
a function of Class1 performs a task with updated system power
No, there's no way to have "C-style global variables", and that's by design. Explicitly pass your sys object to Class1 and Class2 when instantiating them and you'll be done, with a clean, readable, testable, maintainable implementation:
# lib.py
class Class1(object):
    def __init__(self, sys, whatever):
        self.sys = sys
        # ...

class Class2(object):
    def __init__(self, sys, whateverelse):
        self.sys = sys
        # ...

# main.py
from lib import Class1, Class2

def main():
    sys = constructor()
    c1 = Class1(sys, 42)
    c2 = Class2(sys, "spam")
    # now you can work with c1 and c2

if __name__ == "__main__":
    main()
To answer your very first question:
I do not really understand how python handles global variables if the code is split into different files.
Python's "globals" are module-level names. At runtime, modules are objects (instances of the module type), and all names defined at the module's top level become attributes of the module instance - "defined" being either by assignment, by a def or class statement or by an import statement.
From within the module, top-level names are accessible by their unqualified names in all the module's code - you only need the global keyword if you want to rebind a top-level name from within a function.
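For instance, a minimal sketch of that rule (the module and names here are made up for illustration):
# mymodule.py
counter = 0              # a module-level ("global") name

def read_counter():
    return counter       # reading a top-level name needs no declaration

def bump_counter():
    global counter       # required only because we rebind the name here
    counter += 1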
From outside the module, you can access those names using the qualified path, ie if module1 defines name foo, you can access it from module2 by importing module1 (making module1 a top-level name in module2) and using the usual attribute resolution (dotted.name) syntax.
You can also import foo directly using from module1 import foo, in which case a new name foo will be created in module2 and bound to module1.foo (more exactly: bound to the object which is at this point bound to module1.foo). The point here is that you now have two distinct names in two distinct namespaces, so rebinding either will only affect the namespace in which it's rebound.
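To illustrate that last point (module names are hypothetical):
# module1.py
foo = 1

# module2.py
import module1
from module1 import foo

module1.foo = 2      # rebinds the name inside module1
print(foo)           # still 1 - module2's own name was not rebound
print(module1.foo)   # 2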
You certainly want to read Ned Batchelder's famous article on Python's names, it's usually a good starting point to understand all this.
What I'd like to do
I'd like to import a Python module without adding it to the local namespace.
In other words, I'd like to do this:
import foo
del foo
Is there a cleaner way to do this?
Why I want to do it
The short version is that importing foo has a side effect that I want, but I don't really want it in my namespace afterwards.
The long version is that I have a base class that uses __init_subclass__() to register its subclasses. So base.py looks like this:
class Base:
    _subclasses = {}

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._subclasses[cls.__name__] = cls

    @classmethod
    def get_subclass(cls, class_name):
        return cls._subclasses[class_name]
And its subclasses are defined in separate files, e.g. foo_a.py:
from base import Base
class FooA(Base):
    pass
and so on.
The net effect here is that if I do
from base import Base
print(f"Before import: {Base._subclasses}")
import foo_a
import foo_b
print(f"After import: {Base._subclasses}")
then I would see
Before import: {}
After import: {'FooA': <class 'foo_a.FooA'>, 'FooB': <class 'foo_b.FooB'>}
So I needed to import these modules for the side effect of adding a reference to Base._subclasses, but now that that's done, I don't need them in my namespace anymore because I'm just going to be using Base.get_subclass().
I know I could just leave them there, but this is going into an __init__.py so I'd like to tidy up that namespace.
del works perfectly fine, I'm just wondering if there's a cleaner or more idiomatic way to do this.
If you want to import a module without assigning the module object to a variable, you can use importlib.import_module and ignore the return value:
import importlib
importlib.import_module("foo")
Note that using importlib.import_module is preferable over using the __import__ builtin directly for simple usages. See the builtin's documentation for details.
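Applied to the question's layout (assuming base.py, foo_a.py and foo_b.py are sibling modules inside the same package), the __init__.py could look roughly like this:
# __init__.py - sketch, imports done purely for the registration side effect
import importlib

from .base import Base

for _name in (".foo_a", ".foo_b"):
    importlib.import_module(_name, package=__name__)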
I have the following (toy) package structure
root/
- package1/
- __init__.py
- class_a.py
- class_b.py
- run.py
In both class_a.py and class_b.py I have a class definition that I want to expose to run.py. If I want to import them this way, I will have to use
from package1.class_a import ClassA # works but doesn't look nice
I don't like that this shows the class_a.py module, and would rather use the import style
from package1 import ClassA # what I want
This is also closer to what I see from larger libraries. I found a way to do this by importing the classes in the __init__.py file like so
from .class_a import ClassA
from .class_b import ClassB
This works fine, if it weren't for one downside: as soon as I import ClassA as I would like (see above), I also immediately 'import' ClassB, since, as far as I know, the __init__.py will be run, importing ClassB. In my real scenario, this means I implicitly import a huge class that I use only situationally (and which itself imports tensorflow), so I really want to avoid this somehow. Is there a way to create the nice-looking imports without automatically importing everything in the package?
It is possible but requires a rather low-level customization: you will have to customize the class of your package (possible since Python 3.5). That way, you can declare a __getattr__ member that will be called when you ask for a missing attribute. At that moment, you know that you have to import the relevant module and extract the correct attribute.
The __init__.py file should contain (names can of course be changed):
import importlib
import sys
import types
class SpecialModule(types.ModuleType):
    """Customization of a module that is able to dynamically load submodules.
    It is expected to be a plain package (and to be declared in the __init__.py).
    The special attribute is a dictionary mapping attribute name -> relative module name.
    The first time a name is requested, the corresponding module is loaded, and
    the attribute is bound into the package.
    """
    special = {'ClassA': '.class_a', 'ClassB': '.class_b'}

    def __getattr__(self, name):
        if name in self.special:
            m = importlib.import_module(self.special[name], __name__)  # import submodule
            o = getattr(m, name)                     # find the required member
            setattr(sys.modules[__name__], name, o)  # bind it into the package
            return o
        else:
            raise AttributeError(f'module {__name__} has no attribute {name}')

sys.modules[__name__].__class__ = SpecialModule  # customize the class of the package
You can now use it that way:
import package1
...
obj = package1.ClassA(...) # dynamically loads class_a on first call
The downside is that clever IDEs that look at the declared members could choke on that and pretend that you are accessing a nonexistent member, because ClassA is not statically declared in package1/__init__.py. But all will be fine at run time.
As it is a low-level customization, it is up to you to know whether it is worth it...
Since 3.7 you could also declare a __getattr__(name) function directly at the module level.
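A minimal sketch of that 3.7+ variant (same hypothetical package1 layout as above):
# package1/__init__.py - module-level __getattr__ (PEP 562)
import importlib

_lazy = {'ClassA': '.class_a', 'ClassB': '.class_b'}

def __getattr__(name):
    if name in _lazy:
        module = importlib.import_module(_lazy[name], __name__)
        obj = getattr(module, name)
        globals()[name] = obj  # cache so __getattr__ isn't called again for this name
        return obj
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")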
TL;DR
Basically the question is about hiding from the user the fact that my modules have class implementations so that the user can use the module as if it has direct function definitions like my_module.func()
Details
Suppose I have a module my_module and a class MyThing that lives in it. For example:
# my_module.py
class MyThing(object):
    def say(self):
        print("Hello!")
In another module, I might do something like this:
# another_module.py
from my_module import MyThing
thing = MyThing()
thing.say()
But suppose that I don't want to do all that. What I really want is for my_module to create an instance of MyThing automatically on import such that I can just do something like the following:
# yet_another_module.py
import my_module
my_module.say()
In other words, whatever method I call on the module, I want it to be forwarded directly to a default instance of the class contained in it. So, to the user of the module, it might seem that there is no class in it, just direct function definitions in the module itself (where the functions are actually methods of a class contained therein). Does that make sense? Is there a short way of doing this?
I know I could do the following in my_module:
class MyThing(object):
    def say(self):
        print("Hello!")

default_thing = MyThing()

def say():
    default_thing.say()
But then suppose MyThing has many "public" methods that I want to use, then I'd have to explicitly define a "forwarding" function for every method, which I don't want to do.
As an extension to my question above, is there a way to achieve what I want above, but also be able to use code like from my_module import * and be able to use methods of MyThing directly in another module, like say()?
In module my_module do the following:
class MyThing(object):
    ...

_inst = MyThing()
say = _inst.say
move = _inst.move
This is exactly the pattern used by the random module.
Doing this automatically is somewhat contrived. First, one needs to find out which of the instance/class attributes are the methods to export... perhaps export only names which do not start with _, something like
import inspect

for name, member in inspect.getmembers(MyThing(), inspect.ismethod):
    if not name.startswith('_'):
        globals()[name] = member
However in this case I'd say that explicit is better than implicit.
You could just replace:
def say():
    return default_thing.say()
with:
say = default_thing.say
You still have to list everything that's forwarded, but the boilerplate is fairly concise.
If you want to replace that boilerplate with something more automatic, note that (details depending on Python version), MyThing.__dict__.keys() is something along the lines of ['__dict__', '__weakref__', '__module__', 'say', '__doc__']. So in principle you could iterate over that, skip the __ Python internals, and call setattr on the current module (which is available as sys.modules[__name__]). You might later regret not listing this stuff explicitly in the code, but you could certainly do it.
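A hedged sketch of that automatic forwarding, assuming the MyThing class and default_thing instance from the question:
import sys

_this_module = sys.modules[__name__]

for _name in MyThing.__dict__:
    if not _name.startswith('__'):          # skip the __ Python internals
        # getattr on the instance yields a bound method, so all forwarded
        # functions share the state of default_thing
        setattr(_this_module, _name, getattr(default_thing, _name))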
Alternatively you could get rid of the class entirely and use the module as the unit of encapsulation. Wherever there is data on the object, replace it with global variables. "But", you might say, "I've been warned against using global variables because supposedly they cause problems". The bad news is that you've already created a global variable, default_thing, so the ship has sailed on that one. The even worse news is that if there is any data on the object, then the whole concept of what you want to do (module-level functions that mutate a shared global state) carries with it most of the problems of globals.
Not sure why this wouldn't work:
# my_module.py
say = MyThing().say

# another module
from my_module import *
say()
# Hello!
How can I avoid a global variable when creating an object? Someone told me that when I create the objects the way I do below, I am creating them globally.
For instance if I have my class like this
import math
import random

class Windpower(object):
    def __init__(self, name):
        self.name = name

    def calc_area(self, dia):
        area = ((dia / 2) ** 2 * math.pi)
        return area

    def calc_wind_energy(self, area, v):
        energy = (random.uniform(0.10, 0.4) * 1.2 * area * v ** 3 * 0.5)
        return energy

    def get_velocity(self):
        with open('smhi.txt') as input:
            smhi_list = [int(line.strip()) for line in input if line.strip()]
        return smhi_list

windpower = Windpower("Stockholm")
solarpower = Solarpower(500, 4)
Main.py
def average(lat):
    energy_list = []
    table = []
    area = klass.solarpower.area
    sundigit = klass.solarpower.sundigit
This is more of an anti-pattern in Python than anything else. Having executable code at the module level should generally be avoided, because that code is executed whenever the module is imported by other modules.
There are however cases to place code on the module level, such as providing objects on the module as part of your API (think Singleton), or doing any module initialization.
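For example (a made-up sketch, not the question's code), a module that deliberately creates one shared object at import time as part of its API:
# settings.py - hypothetical module-level singleton
class Settings(object):
    def __init__(self):
        self.debug = False

settings = Settings()   # every importer shares this single instance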
If you need this code to be executed only when you run the module as the main program you should place them under
if __name__ == '__main__':
    windpower = Windpower("Stockholm")
    solarpower = Solarpower(500, 4)
Or put them in a function.
Bear in mind that the term "global" is incorrect here, as these objects are scoped within the module they're defined in, rather than the whole executable program.
When you instantiate your objects in the top level of the module (a python file), as you do in the example code you gave, they're accessible from the root of the module.
However, you really should instantiate them in the module where you use them. Here's an example:
wind.py
class WindPower(object):
    def __init__(self, name):
        self.name = name
main.py
import wind
city = wind.WindPower("Stockholm")
velocity = city.get_velocity()
For larger programs, in order to be more organized, I have been looking into dividing my code up into different .py files and having the main file that calls upon those files when needed. I have looked around and seen lots of remarks about creating a directory and a SystemPath for Python. Are those reasonable options for a program that could be distributed between a few computers? As a test, I tried to assemble an example:
This is the class, defined in grades.py, in the same directory as main:
class student:
    def __init__(self):
        self.name = ""
        self.score = 0
        self.grade = 0

    def update(self, name, score, grade):
        self.score = score
        self.name = name
        self.grade = grade
        print(self.score, self.name, self.grade)

s = student()
s.update(name, score, grade)
This is my main script currently:
from grades import student
import random
name = 'carl'
score = random.randrange(0,100)
grade = 11
s = student()
s.update(name, score, grade)
There are some questions I have generally about this method:
Is there a way to import all from different files or do I need to specify each individual class?
If I just had a function, is it possible to import it just as a function or can you only import via a class?
Why is it that when I call upon a class, in general, I have to make a variable for it, as in the example below?
# way that works
s = student()
s.update(name,score,grade)
# incorrect way
student.update(name,score,grade)
Thank you for your time and thought towards my question.
Yes.
You can import the instance of student from the other script into your main script like this:
from grades import s
# if your first script is called grades.py
import random
name = 'carl'
score = random.randrange(0,100)
grade = 11
# you can directly use it without initializing it again.
s.update(name, score, grade)
2.
If you have a function called test() in grades.py, you can import it in this way:
from grades import test
# then invoke it
test()
3.
This variable holds an instance of the class student. You need this instance to invoke the method on it.
Generally, to divide the source code of a program, Python uses modules, where each module corresponds to a *.py file. Then for your 3 questions:
You can import a module's whole content (functions, classes, global variables, ...) through from module_name import *
For a function, if it is defined inside a class (as an instance method, class method or static method), you cannot import just that function; you should import the class to use the method. If it is a function defined at module level, you can import it separately through from module_name import function_name
update is an instance method of the student class, so you should use it through an instance. If it were a class method or static method, you could use it through the class name as you wrote.
1: Is there a way to import all from different files or do I need to specify each individual class?
You can use the "wildcard import", but you probably shouldn't. See
Should wildcard import be avoided?
2: If I just had a function, is it possible to import it just as a function or can you only import via a class?
Functions can be totally independent of classes in Python.
3: Why is it that when I call upon a class, in general, I have to make a variable for it, as in the example below?
You should read up on object-oriented programming. In the basic cases, you have to instantiate instances of a class in order to use that class's functionality. In your example, the class student describes what it means to be a student, and the statement
s = student()
creates a student and names it "s".
I think this should be clear after reading a bit about object-oriented programming.
First, you can use from module import * to import everything like:
hello.py:
def hello():
    print('hello')

def bye():
    print('Bye')
main.py:
from hello import *
hello()
bye()
But it's not a good way: if you have two files in which two functions have the same name, they will collide. So using
from hello import hello, bye
hello()
bye()
is better. This is an example with functions, but the same applies to classes.
Third (before Second): student is a class, so you have to use an instance object to call a method that takes a self parameter. If you want to use student.function, the function must be a static method, like this:
class Person:
    def __init__(self):
        pass

    @staticmethod
    def Count():
        return 1

print(Person.Count())
Second, you can import a function from a file that also contains a class, as long as the function is independent of the class.
Is there a way to import all from different files or do I need to specify each individual class?
The answer is yes. As Python's import statement uses sys.path (a list of strings that specifies the search path for modules), you need to add the path of your modules to sys.path. For example, if you want to share modules between different computers, you can put your modules in a public folder and add the path of that folder to sys.path:
import sys
sys.path.append("path to public")  # replace with the actual path of the public folder
If I just had a function, is it possible to import it just as a function or can you only import via a class?
You just need to use from ... import function_name.
Why is it that when I call upon a class, in general, I have to make a variable for it, as in the example below?
For this question you just need to read the Python documentation on class objects:
Class objects support two kinds of operations: attribute references and instantiation.
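A tiny sketch of those two operations, reusing the student class from the question (the school attribute is made up for illustration):
class student:
    school = 'my school'        # class attribute

    def update(self, name, score, grade):
        self.name = name
        self.score = score
        self.grade = grade

print(student.school)           # attribute reference on the class object
s = student()                   # instantiation creates an instance
s.update('carl', 95, 11)        # the method is then called through the instance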