I have a question which seems to be rather fundamental but I can't seem to find any help on this anywhere.
file_a.py >>
from xyz import XYZ
class A:
.
.
.
file_b.py >>
import file_a
from file_a import A
class B(A):
    def __init__(self):
        A.__init__(self)
    def someMethod(self):
        XYZ.doSomething()
XYZ.doSomething() fails with NameError: name 'XYZ' is not defined
Even standard imports like import sys done in file_a do not seem to make sys usable in file_b. I assumed that would work. Is my understanding wrong? If yes, then is there a way to have common imports and global variables across files? (If it is of any help, I've been a C++ and Java programmer and am now starting to use Python.)
Is my understanding wrong?
Yes, because the line from file_a import A imports only the class A into the namespace of file_b. The namespace of file_a is left alone. If it were not like this, there would be little sense in having both syntaxes:
import modulename
from modulename import something
because if your understanding were right, then after the second form you would always be able to use modulename.someotherthing as well.
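For instance, with a standard-library module:
import math             # binds the module name "math" in this namespace
from math import pi     # binds only the name "pi"; it does not bind "math"

print(math.sqrt(2))     # needs the first form
print(pi)               # needs the second form
# With only "from math import pi", math.sqrt(2) would fail with
# NameError: name 'math' is not defined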
If yes, then is there a way to have common imports and global variables across files?
Yes, with the star * operator:
from modulename import *
but this brings the issue of namespace pollution: for example, from file_a import * will also import into file_b all the imports done in file_a. You will eventually lose control of your imports, and this will bite you at some point... trust me on this!
When from module import * is needed for some reason, a workaround for namespace pollution is to define the variable __all__ in module, which whitelists what the star operator should import.
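A minimal sketch of that workaround in terms of the question's file_a (the stub class body is illustrative):
# file_a.py
from xyz import XYZ

__all__ = ['A']          # star-importers get only A, not XYZ

class A:
    pass

# file_b.py
from file_a import *     # imports A but not XYZ, thanks to __all__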
HTH!
When you import a module, all the variables defined in that module are available in its namespace. Hence, if XYZ is available in the file_a module, then after import file_a you can access it as file_a.XYZ.
The general idea here is that your namespace shouldn't be cluttered with the contents of other namespaces.
Yes, your understanding is wrong. Each module is its own namespace, and only things you explicitly import within that file are available in that namespace.
Contrary to other answers, it is not particularly Pythonic to refer to file_a.XYZ, although this will work. Instead, you should import XYZ and sys at the top of file_b.
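A minimal sketch of that fix, assuming XYZ really comes from a module named xyz, as in the question's file_a:
# file_b.py
import sys               # standard-library imports you need here are done here
from xyz import XYZ      # import XYZ directly, not through file_a
from file_a import A

class B(A):
    def someMethod(self):
        XYZ.doSomething()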
import file_a
from file_a import A
I think the problem is that you do not actually import XYZ!
from file_a import *
can solve your problem, but it is not a good way.
you can write this in file_b:
from file_a import XYZ
And that's it.
In a main *.py, the statement
from myModule import a,b,c
imports the module 'myModule', and creates references in the current namespace to the given objects. Or in other words, you can now use a and b and c in your program. But you need to use, for example:
print(myModule.a)
Now, I need to use the statement:
from myModule import *
and I need to access, in my main *.py file, at the values of some variables declared in 'myModule', for example:
print(a)
I tried to use globals() to declare global variables, but then they are not imported into the current namespace.
Does anyone know how to solve this?
Thanks
The question isn't clear, but you can import modules under any moniker you want with:
import thing as t
And to access:
t.thingFunction(foo)
I've literally been trying to understand Python imports for about a year now, and I've all but given up programming in Python because it just seems too obfuscated. I come from a C background, and I assumed that import worked like #include, yet if I try to import something, I invariably get errors.
If I have two files like this:
foo.py:
a = 1
bar.py:
import foo
print(foo.a)
input()
WHY do I need to reference the module name? Why not just be able to write import foo, print a? What is the point of this confusion? Why not just run the code and have stuff defined for you as if you wrote it in one big file? Why can't it work like C's #include directive where it basically copies and pastes your code? I don't have import problems in C.
To do what you want, you can use (not recommended, read further for explanation):
from foo import *
This will import everything into your current namespace, and you will be able to call print(a).
However, the issue with this approach is the following. Consider the case when you have two modules, moduleA and moduleB, each having a function named GetSomeValue().
When you do:
from moduleA import *
from moduleB import *
you have a namespace resolution issue*: which function are you actually calling with GetSomeValue(), moduleA.GetSomeValue() or moduleB.GetSomeValue()?
In addition to this, you can use the Import As feature:
from moduleA import GetSomeValue as AGetSomeValue
from moduleB import GetSomeValue as BGetSomeValue
(Note that import moduleA.GetSomeValue as AGetSomeValue would not work here: the import X.Y as Z form requires Y to be a module, not a function, so use the from ... import ... as ... form for functions.)
This approach resolves the conflict manually.
I am sure you can appreciate from these examples the need for explicit referencing.
* Python has its namespace resolution mechanisms, this is just a simplification for the purpose of the explanation.
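A minimal sketch of that shadowing, using two hypothetical modules:
# moduleA.py
def GetSomeValue():
    return "A"

# moduleB.py
def GetSomeValue():
    return "B"

# main.py
from moduleA import *
from moduleB import *

print(GetSomeValue())       # prints "B": the second star import silently rebinds the name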
Imagine you have a function in your module which chooses some object from a list:
def choice(somelist):
    ...
Now imagine further that, either in that function or elsewhere in your module, you are using randint from the random library:
a = randint(1, x)
Therefore we
import random
Your suggestion, that import random should do what is now done by from random import *, means that we now have two different functions called choice, as random includes one too. Only one will be accessible, but you have introduced ambiguity as to what choice() actually refers to elsewhere in your code.
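A minimal sketch of that collision:
from random import *        # brings in random.choice, random.randint and many more names

def choice(somelist):
    # our own choice(); defined after the star import, it shadows random.choice
    return somelist[0]

a = randint(1, 10)           # works, but its origin is no longer obvious from this file
print(choice([1, 2, 3]))     # which choice() you get depends purely on definition order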
This is why it is bad practice to import everything; either import what you need:
from random import randint
...
a = randint(1, x)
or the whole module:
import random
...
a = random.randint(1, x)
This has two benefits:
You minimise the risks of overlapping names (now and in future additions to your imported modules); and
When someone else reads your code, they can easily see where external functions come from.
There are a few good reasons. The module provides a sort of namespace for the objects in it, which allows you to use simple names without fear of collisions -- coming from a C background you have surely seen libraries with long, ugly function names to avoid colliding with anybody else.
Modules themselves are also objects. When a module is imported in more than one place in a Python program, each place actually gets the same reference. That way, changing foo.a changes it for everybody, not just the local module. This is in contrast to C, where including a header is basically a copy-and-paste operation into the source file (obviously you can still share variables, but the mechanism is a bit different).
As mentioned, you can say from foo import * or better from foo import a, but understand that the underlying behavior is actually different, because you are taking a and binding it to your local module.
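A minimal sketch of that difference, using hypothetical files foo.py, bar.py and main.py:
# foo.py
a = 1

# bar.py
import foo

def bump():
    foo.a += 1             # mutates the one shared module object

# main.py
import foo
import bar
from foo import a          # binds main's own name "a" to the current value (1)

bar.bump()
print(foo.a)               # 2 -- every importer sees the same foo module object
print(a)                   # still 1 -- the local binding does not track foo.a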
If you use something often, you can always use the from syntax to import it directly, or you can rename the module to something shorter, for example
import itertools as it
When you do import foo, a new name foo, bound to the module object, is created in the current namespace.
So, to use anything inside foo, you have to address it via the module.
However, if you use from foo import something, you don't have to prepend the module name, since it loads something from the module and binds it to the name something. (Not a recommended practice.)
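For example (foo and something here are just placeholder names):
import foo                   # the name "foo" now refers to the module object
print(foo.something)         # everything inside foo is reached through that name

from foo import something    # binds the name "something" directly in this namespace
print(something)             # no module prefix needed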
import importlib.util

# works roughly like C's #include; always call it as include(<path>, __name__)
def include(file, module_name):
    spec = importlib.util.spec_from_file_location(module_name, file)
    # alternative: build a real module object and run it in its own namespace
    # mod = importlib.util.module_from_spec(spec)
    # spec.loader.exec_module(mod)
    code = spec.loader.get_code(module_name)
    exec(code, globals())  # runs the file's code in this module's globals, so define
                           # include() in the module that should receive the names
For example:
#### file a.py ####
a = 1
#### file b.py ####
b = 2
if __name__ == "__main__":
print("Hi, this is b.py")
#### file main.py ####
# assuming you have `include` in scope
include("a.py", __name__)
print(a)
include("b.py", __name__)
print(b)
the output will be:
1
Hi, this is b.py
2
I have the following situation: there is a module called enthought.chaco2, and I have many imports like from enthought.chaco.api import ..
So what's the quickest way to add chaco.api and make it dispatch to the correct one?
I tried a few things, for example:
import enthought.chaco2 as c2
import enthought
enthought.chaco = c2
but it doesn't work. I might have to create a real module and add it to the path; is that the only way?
What is the behavior you're looking for?
You could use from enthought.chaco import api as ChacoApi and then address any content from the module through ChacoApi, like ChacoApi.foo() or chaco_class = ChacoApi.MyClass().
You could use (and that's not recommended) from enthought.chaco.api import * and have all the content of the module added to your base namespace.
You could add an __all__ variable declaration to chaco's __init__.py file and have the previous example (with the *) import only what you entered in the __all__ list.
Or you could specifically import any content you might use, the way you do right now, which is perfectly fine in my opinion...
I am importing lots of functions from a module
Is it better to use
from my_module import function1, function2, function3, function4, function5, function6, function7
which is a little messy, but avoids flooding the current namespace with everything from that module or
from my_module import *
Which looks tidy but will fill the namespace with everything from that module.
I can't find anything in PEP 8 about what the limit is for how much you should import by name. Which is better, and why?
If you really need that many functions, you are already polluting your namespace.
I would suggest:
import my_module
Or, if my_module has a long name, use an alias:
import my_long_module as m
If it's between one or the other, use
from my_module import function1, function2, function3, function4, function5, function6, function7
See "Explicit is better than implicit." in import this.
If you just want a shorter name than my_module.function1, there is always import my_module as mod.
For the few functions you use many times (either typed often, so you want a short name, or called in a loop, where access speed matters), there is
func1 = my_module.function1
With a little bit of management you can control what import * imports. Say your my_module has function1..function8 but you only want to make functions 1 through 6 available. In my_module, define the __all__ attribute:
my_module.py:
__all__ = ['function1', 'function2', 'function3', ...]
def function1():
    ...
# etc...
Now if you use from my_module import *, you'll only import those functions and variables you defined in the __all__ attribute from my_module.py.
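For example, with the function names above:
from my_module import *

function1()   # fine: listed in __all__
function7()   # NameError: not listed in __all__, so the star import never brought it in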
Not sure if this is new, but now you can do:
from my_module import (
    function1,
    function2,
    function3,
    function4,
)
At least this doesn't go off the page and is easier to read IMO.
It is recommended not to use import * in Python.
Can anyone please share the reason for that, so that I can avoid doing it next time?
Because it puts a lot of stuff into your namespace (it might shadow some other object from a previous import, and you won't know about it).
Because you don't know exactly what is imported and can't easily find from which module a certain thing was imported (readability).
Because you can't use cool tools like pyflakes to statically detect errors in your code.
According to the Zen of Python:
Explicit is better than implicit.
... can't argue with that, surely?
You don't pass **locals() to functions, do you?
Since Python lacks an "include" statement, and the self parameter is explicit, and scoping rules are quite simple, it's usually very easy to point a finger at a variable and tell where that object comes from -- without reading other modules and without any kind of IDE (which are limited in the way of introspection anyway, by the fact the language is very dynamic).
The import * breaks all that.
Also, it has a concrete possibility of hiding bugs.
import os, sys, foo, sqlalchemy, mystuff
from bar import *
Now, if the bar module has any of the "os", "mystuff", etc. attributes, they will override the explicitly imported ones, and possibly point to very different things. Defining __all__ in bar is often wise -- it states what will implicitly be imported -- but it's still hard to trace where objects come from without reading and parsing the bar module and following its imports. A network of import * is the first thing I fix when I take ownership of a project.
Don't misunderstand me: if the import * were missing, I would cry to have it. But it has to be used carefully. A good use case is to provide a facade interface over another module.
Likewise, the use of conditional import statements, or imports inside function/class namespaces, requires a bit of discipline.
I think in medium-to-big projects, or small ones with several contributors, a minimum of hygiene is needed in terms of static analysis -- running at least pyflakes, or even better a properly configured pylint -- to catch several kinds of bugs before they happen.
Of course, since this is Python, feel free to break rules and to explore -- but be wary of projects that could grow tenfold: if the source code lacks discipline, it will be a problem.
That is because you are polluting the namespace. You will import all the functions and classes into your own namespace, which may clash with the functions you define yourself.
Furthermore, I think using a qualified name is clearer for maintenance: you see on the code line itself where a function comes from, so you can look up the docs much more easily.
In module foo:
def myFunc():
    print(1)
In your code:
from foo import *
def doThis():
    myFunc()  # Which myFunc is called?

def myFunc():
    print(2)
It is OK to do from ... import * in an interactive session.
Say you have the following code in a module called foo:
import xml.etree.ElementTree as etree
and then in your own module you have:
from lxml import etree
from foo import *
You now have a difficult-to-debug module that looks like it has lxml's etree in it, but really has ElementTree instead.
I understand the valid points people have made here. However, I do have one argument that, sometimes, a "star import" may not be a bad practice:
When I want to structure my code in such a way that all the constants go to a module called const.py:
If I do import const, then for every constant I have to refer to it as const.SOMETHING, which is probably not the most convenient way.
If I do from const import SOMETHING_A, SOMETHING_B ..., then obviously it's way too verbose and defeats the purpose of the structuring.
Thus I feel in this case, doing a from const import * may be a better choice.
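A minimal sketch of that pattern (the constant names are just illustrative):
# const.py
RETRY_LIMIT = 3
TIMEOUT_SECONDS = 30

# elsewhere in the project
from const import *

print(RETRY_LIMIT, TIMEOUT_SECONDS)   # no const. prefix and no long import list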
http://docs.python.org/tutorial/modules.html
Note that in general the practice of importing * from a module or package is frowned upon, since it often causes poorly readable code.
These are all good answers. I'm going to add that when teaching new people to code in Python, dealing with import * is very difficult. Even if you or they didn't write the code, it's still a stumbling block.
I teach children (about 8 years old) to program in Python to manipulate Minecraft. I like to give them a helpful coding environment to work with (Atom Editor) and teach REPL-driven development (via bpython). In Atom I find that the hints/completion work just as effectively as in bpython. Luckily, unlike some other static analysis tools, Atom is not fooled by import *.
However, let's take this example... In this wrapper they from local_module import * a bunch of modules, including this list of blocks. Let's ignore the risk of namespace collisions. By doing from mcpi.block import * they make this entire list of obscure block types something you have to go look at to know what is available. If they had instead used from mcpi import block, then you could type walls = block. and an autocomplete list would pop up.
It is a very BAD practice for two reasons:
Code Readability
Risk of overriding variables/functions, etc.
For point 1:
Let's see an example of this:
from module1 import *
from module2 import *
from module3 import *
a = b + c - d
Here, on seeing the code, no one will have any idea which modules b, c and d actually come from.
On the other hand, if you do it like this:
# v v will know that these are from module1
from module1 import b, c # way 1
import module2 # way 2
a = b + c - module2.d
# ^ will know it is from module2
It is much cleaner for you, and a new person joining your team will also have a better idea of what is going on.
For point 2: Let's say both module1 and module2 have a variable b. When I do:
from module1 import *
from module2 import *
print(b)  # will print the value from module2
Here the binding from module1 is lost. It will be hard to debug why the code is not working, even though b is declared in module1 and I wrote the code expecting it to use module1.b.
If you have the same variable names in different modules, and you do not want to import the entire module, you can even do:
from module1 import b as mod1b
from module2 import b as mod2b
As a test, I created a module test.py with 2 functions A and B, which respectively print "A 1" and "B 1". After importing test.py with:
import test
...I can run the two functions as test.A() and test.B(), and "test" shows up as a module in the namespace, so if I edit test.py I can reload it with:
import importlib
importlib.reload(test)
But if I do the following:
from test import *
there is no reference to "test" in the namespace, so there is no way to reload it after an edit (as far as I can tell), which is a problem in an interactive session. Whereas either of the following:
import test
import test as tt
will add "test" or "tt" (respectively) as module names in the namespace, which will allow re-loading.
If I do:
from test import *
the names "A" and "B" show up in the namespace as functions. If I edit test.py, and repeat the above command, the modified versions of the functions do not get reloaded.
And the following command elicits an error message.
importlib.reload(test) # Error - name 'test' is not defined
If someone knows how to reload a module loaded with "from module import *", please post. Otherwise, this would be another reason to avoid the form:
from module import *
As suggested in the docs, you should (almost) never use import * in production code.
While importing * from a module is bad, importing * from a package is probably even worse.
By default, from package import * imports whatever names are defined by the package's __init__.py, including any submodules of the package that were loaded by previous import statements.
If a package’s __init__.py code defines a list named __all__, it is taken to be the list of submodule names that should be imported when from package import * is encountered.
Now consider this example (assuming there's no __all__ defined in sound/effects/__init__.py):
# anywhere in the code before import *
import sound.effects.echo
import sound.effects.surround
# in your module
from sound.effects import *
The last statement will import the echo and surround modules into the current namespace (possibly overriding previous definitions) because they are defined in the sound.effects package when the import statement is executed.
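For contrast, a minimal sketch of the __all__ case (the whitelist below is an assumption about this package):
# sound/effects/__init__.py
__all__ = ["echo", "surround"]     # explicit list of what "from sound.effects import *" exports

# in your module
from sound.effects import *        # imports exactly the echo and surround submodules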