The Python future statement from __future__ import feature provides a nice way to ease the transition to new language features. Is it possible to implement a similar feature for Python libraries: from myproject.__future__ import feature?
It's straightforward to set module-wide constants on an import statement. What isn't obvious to me is how you could ensure these constants don't propagate to code executed in imported modules -- those modules should also require a future import to enable the new feature.
This came up recently in a discussion of possible indexing changes in NumPy. I don't expect it will actually be used in NumPy, but I can see it being useful for other projects.
As a concrete example, suppose that we do want to change how indexing works in some future version of NumPy. This would be a backwards incompatible change, so we decide to use a future statement to ease the transition. A script using this new feature looks something like this:
import numpy as np
from numpy.__future__ import orthogonal_indexing
x = np.random.randn(5, 5)
print(x[[0, 1], [0, 1]]) # should use the "orthogonal indexing" feature
# prints a 2x2 array of random numbers
# we also want to use a legacy project that uses indexing, but
# hasn't been updated to use the "orthogonal indexing" feature
from legacy_project import do_something
do_something(x) # should *not* use "orthogonal indexing"
If this isn't possible, what's the closest we can get for enabling local options? For example, is it possible to write something like:
from numpy import future
future.enable_orthogonal_indexing()
Using something like a context manager would be fine, but the problem is that we don't want to propagate options to nested scopes:
with numpy.future.enable_orthogonal_indexing():
    print(x[[0, 1], [0, 1]])  # should use the "orthogonal indexing" feature
    do_something(x)  # should *not* use "orthogonal indexing" inside do_something
The way Python itself does this is pretty simple:
In the importer, when you try to import from a .py file, the code first scans the module for future statements.
Note that the only things allowed before a future statement are strings, comments, blank lines, and other future statements, which means it doesn't need to fully parse the code to do this. That's important, because future statements can change the way the code is parsed (in fact, that's the whole point of having them…); strings, comments, and blank lines can be handled by the lexer step, and future statements can be parsed with a very simple special-purpose parser.
Then, if any future statements are found, Python sets a corresponding flag bit, then re-seeks to the top of the file and calls compile with those flags. For example, for from __future__ import unicode_literals, it does flags |= __future__.unicode_literals.compiler_flag, which changes flags from 0 to 0x20000.
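You can poke at those pieces yourself through the real __future__ module; a quick check in CPython:

import __future__

print(hex(__future__.unicode_literals.compiler_flag))  # 0x20000 in CPython
# compile() accepts these flag bits directly:
code = compile('x = "spam"', '<string>', 'exec',
               flags=__future__.unicode_literals.compiler_flag)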
In this "real compile" step, the future statements are treated as normal imports, and you will end up with a __future__._Feature value in the variable unicode_literals in the module's globals.
Now, you can't quite do the same thing, because you're not going to reimplement or wrap the compiler. But what you can do is use your future-like statements to signal an AST transform step. Something like this:
import ast

flags = []
for line in f:
    flag = parse_future(line)
    if flag is None:
        break
    if flag:  # skip the 0 returned for blank lines, comments, and strings
        flags.append(flag)
f.seek(0)
contents = f.read()
tree = ast.parse(contents, f.name)
for flag in flags:
    tree = transformers[flag](tree)
code = compile(tree, f.name, 'exec')  # compile() needs an explicit mode
Of course you have to write that parse_future function to return 0 for a blank line, comment, or string, a flag for a recognized future import (which you can look up dynamically if you want), or None for anything else. And you have to write the AST transformers for each flag. But they can be pretty simple—e.g., you can transform Subscript nodes into different Subscript nodes, or even into Call nodes that call different functions based on the form of the index.
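A minimal sketch of what parse_future might look like, assuming the features live in a simple registry (FEATURES, the regex, and the flag values here are all made up; a real implementation would need the tokenizer to handle multi-line docstrings rather than this line-by-line approximation):

import re

# hypothetical registry: feature name -> flag used by the AST transform step
FEATURES = {'orthogonal_indexing': 1}

_FUTURE_RE = re.compile(r"from\s+numpy\.__future__\s+import\s+(\w+)")

def parse_future(line):
    stripped = line.strip()
    # blank lines, comments, and (single-line) strings are allowed before futures
    if not stripped or stripped.startswith('#') or stripped[0] in '"\'':
        return 0
    m = _FUTURE_RE.match(stripped)
    if m and m.group(1) in FEATURES:
        return FEATURES[m.group(1)]
    return None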
To hook this into the import system, see PEP 302. Note that this gets simpler in Python 3.3, and simpler again in Python 3.4, so if you can require one of those versions, instead read the import system docs for your minimum version.
For a great example of import hooks and AST transformers being used in real life, see MacroPy. (Note that it's using the old 2.3-style import hook mechanism; again, your own code can be simpler if you can use 3.3+ or 3.4+. And of course your code isn't generating the transforms dynamically, which is the most complicated part of MacroPy…)
Python's __future__ is both a module and not quite a module. A future statement is not really an import: it is a directive to the Python bytecode compiler, deliberately dressed up as an import so that no new syntax was needed. There is also a real __future__.py in the library directory; it can be imported as usual with import __future__, and then you can, for example, inspect __future__.print_function to find out in which Python version the feature became optionally available and in which version it became the default.
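For example, in a CPython 3 interpreter:

>>> import __future__
>>> __future__.print_function
_Feature((2, 6, 0, 'alpha', 2), (3, 0, 0, 'alpha', 0), 65536)
>>> __future__.print_function.optional
(2, 6, 0, 'alpha', 2)
>>> __future__.print_function.mandatory
(3, 0, 0, 'alpha', 0)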
It is possible to make a __future__ module that knows what is being imported. Here is an example of myproject/__future__.py that can intercept feature imports on a per-module basis:
import sys
import inspect

class FutureMagic(object):
    inspect = inspect

    @property
    def more_magic(self):
        # look one frame up to find the module doing the import
        importing_frame = self.inspect.getouterframes(
            self.inspect.currentframe())[1][0]
        module = importing_frame.f_globals['__name__']
        print("more magic imported in %s" % module)

sys.modules[__name__] = FutureMagic()
At load time the module replaces itself in sys.modules with a FutureMagic() instance. Whenever more_magic is imported from myproject.__future__, the more_magic property method will be called, and it will print out the name of the module that imported the feature:
>>> from myproject.__future__ import more_magic
more magic imported in __main__
Now, you could keep a record of the modules that have imported this feature (see the sketch below). Doing import myproject.__future__; myproject.__future__.more_magic would trigger the same machinery. You could also verify that the more_magic import really sits at the beginning of the file: at that point the importing module's globals shouldn't contain anything except values returned from this fake module; if they do, you can assume the value is being accessed for inspection only.
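A sketch of that bookkeeping, extending the class above (the enabled set is my addition, not part of the original answer):

import sys
import inspect

class FutureMagic(object):
    inspect = inspect
    enabled = set()  # bookkeeping: names of modules that opted in

    @property
    def more_magic(self):
        importing_frame = self.inspect.getouterframes(
            self.inspect.currentframe())[1][0]
        module = importing_frame.f_globals['__name__']
        self.enabled.add(module)
        print("more magic imported in %s" % module)

sys.modules[__name__] = FutureMagic()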
However, the real question is how you could actually use this: finding out which module a function is being called from is quite expensive, which would limit the usefulness of the feature.
Thus a possibly more fruitful approach could be to use import hooks to do source translation on the abstract syntax trees of modules that do from mypackage.__future__ import more_magic, for example changing every object[index] into __newgetitem__(operand, index).
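For illustration, a minimal transformer along those lines (assuming Python 3.9+, where node.slice is a plain expression; __newgetitem__ is the hypothetical replacement hook named above):

import ast

class RewriteSubscripts(ast.NodeTransformer):
    # rewrite every `operand[index]` read into `__newgetitem__(operand, index)`
    def visit_Subscript(self, node):
        self.generic_visit(node)  # rewrite any nested subscripts first
        if isinstance(node.ctx, ast.Load):
            return ast.copy_location(
                ast.Call(func=ast.Name(id='__newgetitem__', ctx=ast.Load()),
                         args=[node.value, node.slice],
                         keywords=[]),
                node)
        return node

tree = ast.parse("y = x[0, 1]")
tree = ast.fix_missing_locations(RewriteSubscripts().visit(tree))
print(ast.unparse(tree))  # y = __newgetitem__(x, (0, 1))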
No, you can't. The real __future__ import is special in that its effects are local to the individual file where it occurs. But ordinary imports are global: once one module does import blah, blah is executed and is available globally; other modules that later do import blah just retrieve the already-imported module. This means that if from numpy.__future__ changes something in numpy, everything that does import numpy will see the change.
As an aside, I don't think this is what that mailing list message is suggesting. I read it as suggesting an effect that is global, equivalent to setting a flag like numpy.useNewIndexing = True. This would mean that you should only set that flag at the top level of your application if you know that all parts of your application will work with that.
No, there is no reasonable way to do this. Let's go through the requirements.
First, you need to figure out which modules have your custom future statement enabled. Standard imports aren't up to this, but you could require each module to e.g. call some enabling function and pass __name__ as a parameter. This is somewhat ugly:
from numpy.future import new_indexing
new_indexing(__name__)
This falls apart in the face of importlib.reload(), but meh.
Next, you need to figure out whether your caller is running in one of these modules. You'd start by pulling out the stack via inspect.stack() (which won't work under all Python implementations, misses C extension modules, etc.) and then goof around with inspect.getmodule() and the like.
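Just to make the objection concrete, a sketch of what that machinery would look like (all names here are made up, and the hard-coded stack depth is exactly the kind of fragility being criticized):

import inspect

_new_indexing_modules = set()

def new_indexing(module_name):
    # opt-in call placed at the top of a module: new_indexing(__name__)
    _new_indexing_modules.add(module_name)

def _caller_wants_new_indexing():
    # [2]: skip this helper's frame and the library function that called it
    caller = inspect.stack()[2].frame
    return caller.f_globals.get('__name__') in _new_indexing_modules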
Frankly, this is just a Bad Idea.
If the "feature" that you want to control can be boiled down to changing a name, then this is easy to do, like
from module.new_way import something
vs
from module.old_way import something
The feature you suggested can't be reduced to a name, of course, but I would argue that this is the only Pythonic way of having different behavior in different scopes (and I do think you mean scope, not module -- e.g., what if someone does an import inside a function definition?), since scoping of names is controlled and well supported by the interpreter itself.
From the perspective of an external user of the module, are both necessary?
From my understanding, correctly prefixing hidden functions with an underscore essentially does the same thing as explicitly defining __all__, but I keep seeing developers do both in their code. Why is that?
When importing from a module with from modulename import * names starting with underscores are indeed skipped.
However, a module rarely contains only public API objects. Usually you've made imports to support the code as well, and those names are global in the module as well. Without __all__, those names would be part of the import too.
In other words, unless you want to 'export' os in the following example you should use __all__:
import os
from .implementation import some_other_api_call
_module_path = os.path.dirname(os.path.abspath(__file__))
_template = open(os.path.join(_module_path, 'templates/foo_template.txt')).read()
VERSION = '1.0.0'
def make_bar(baz, ham, spam):
    return _template.format(baz, ham, spam)
__all__ = ['some_other_api_call', 'make_bar']
because without the __all__ list, Python cannot distinguish between some_other_api_call and os here and divine which one should not be imported when using from ... import *.
You could work around this by renaming all your imports, so import os as _os, but that'd just make your code less readable.
And an explicit export list is always nice. Explicit is better than implicit, as the Zen of Python tells you.
I also use __all__: it explicitly tells module users what you intend to export. Searching the module for names is tedious, even if you are careful to do, e.g., import os as _os, etc. A wise man once wrote "explicit is better than implicit" ;-)
Defining __all__ overrides the default behaviour, and there might actually be more than one reason to define it.
When your module is imported with from mod import *, you might want only a minimal set of things to be brought in. Even if you prefix everything correctly, there can be reasons not to export everything.
Another problem I once had was defining a gettext shortcut. The translation function was _, which would not get imported by default. Even though it is underscore-prefixed "in theory", I still wanted it to be exported.
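A sketch of that gettext case (make_report is just a stand-in for the module's real API):

import gettext

_ = gettext.gettext  # underscore-prefixed, so `import *` would normally skip it

def make_report():
    return _('Report')

__all__ = ['make_report', '_']  # export the shortcut anyway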
One other reason, as stated above, is importing a module that itself imports a lot of things. Since Python cannot tell the difference between symbols created by imports and those defined in the actual module, it will automatically re-export everything that can be re-exported. For that reason, it can be wise to explicitly limit the exports to the things you want to export.
With that in mind, you might even want some underscore-prefixed symbols exported. Usually you don't need to define __all__; it makes sense to do so whenever you need something unusual.
I have created a number of personal libraries to help with my daily coding. Best practice is to put imports at the beginning of Python programs. But say I import my library, or even just one function or class from it. All of the library's imports are executed as well (even if those modules are only used by classes or functions I never call). I assume this increases the overhead of the program?
One example. I have a library called pytools which looks something like this
import difflib

def foo(a, b):
    # uses difflib.SequenceMatcher
    return difflib.SequenceMatcher(None, a, b).ratio()

def bar():
    # benign function
    print("Hello!")
    return True

class foobar:
    def __init__(self):
        print("New foobar")
    def ret_true(self):
        return True
The function foo uses difflib. Now say I am writing a new program that needs to use bar and foobar. I could either write
import pytools
...
item = pytools.foobar()
vals = pytools.bar()
or I could do
from pytools import foobar, bar
...
item = foobar()
vals = bar()
Does either choice reduce overhead or preclude the import of foo and its dependency on difflib? What if the import of difflib were inside the foo function?
The problem I am running into is when converting simple programs that only use one or two classes or functions from my libraries into executables: the executable ends up being 50 MB or so.
I have read through py2exe's optimizing size page and can optimize using some of its suggestions.
http://www.py2exe.org/index.cgi/OptimizingSize
I guess I am really asking for best practice here. Is there some way to preclude the import of libraries whose dependencies are in unused functions or classes? I've watched import statements execute using a debugger and it appears that Python only "picks up" the line with def somefunction before moving on. Is the rest of the function not processed until the function/class is used? This would mean putting heavyweight imports at the beginning of a function or class could reduce overhead for the rest of the library.
The only way to effectively reduce your dependencies is to split your tool box into smaller modules, and to only import the modules you need.
Putting imports inside the functions that use them will prevent those modules from being loaded until the functions are called, but this is discouraged because it hides the dependencies. Moreover, your Python-to-executable converter will likely need to include these modules anyway, since Python's dynamic nature makes it impossible to statically determine which functions are actually called.
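For reference, the function-level variant looks like this (a sketch reusing the difflib dependency from the question; the body is illustrative):

def foo(a, b):
    # difflib is loaded on the first call, then cached in sys.modules
    import difflib
    return difflib.SequenceMatcher(None, a, b).ratio()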
I am developing a Python package for dealing with some scientific data. There are multiple frequently-used classes and functions from other modules and packages, including numpy, that I need in virtually every function defined in any module of the package.
What would be the Pythonic way to deal with them? I have considered multiple variants, but each has its own drawbacks.
Import the classes at module-level with from foreignmodule import Class1, Class2, function1, function2
Then the imported functions and classes are easily accessible from every function. On the other hand, they pollute the module namespace, making dir(package.module) and help(package.module) cluttered with imported functions.
Import the classes at function-level with from foreignmodule import Class1, Class2, function1, function2
The functions and classes are easily accessible and do not pollute the module, but imports from up to a dozen modules in every function look like a lot of duplicate code.
Import the modules at module-level with import foreignmodule
There is not much pollution, but that is offset by the need to prepend the module name to every function or class call.
Use some artificial workaround like using a function body for all these manipulations and returning only the objects to be exported... like this
def _export():
    from foreignmodule import Class1, Class2, function1, function2

    def myfunc(x):
        return function1(x, function2(x))

    return myfunc

myfunc = _export()
del _export
This manages to solve both problems, module namespace pollution and ease of use for functions... but it seems to be not Pythonic at all.
So what solution is the most Pythonic? Is there another good solution I overlooked?
Go ahead and do your usual from W import X, Y, Z and then use the __all__ special symbol to define what actual symbols you intend people to import from your module:
__all__ = ('MyClass1', 'MyClass2', 'myvar1', …)
This defines the symbols that will be imported into a user's module if they import * from your module.
In general, Python programmers should not be using dir() to figure out how to use your module, and if they are doing so it might indicate a problem somewhere else. They should be reading your documentation or typing help(yourmodule) to figure out how to use your library. Or they could browse the source code themselves, in which case (a) the difference between things you import and things you define is quite clear, and (b) they will see the __all__ declaration and know which toys they should be playing with.
If you try to support dir() in a situation like this for a task for which it was not designed, you will have to place annoying limitations on your own code, as I hope is clear from the other answers here. My advice: don't do it! Take a look at the Standard Library for guidance: it does from … import … whenever code clarity and conciseness require it, and provides (1) informative docstrings, (2) full documentation, and (3) readable code, so that no one ever has to run dir() on a module and try to tell the imports apart from the stuff actually defined in the module.
One technique I've seen used, including in the standard library, is to use import module as _module or from module import var as _var, i.e. assigning imported modules/variables to names starting with an underscore.
The effect is that other code, following the usual Python convention, treats those members as private. This applies even for code that doesn't look at __all__, such as IPython's autocomplete function.
An example from Python 3.3's random module:
from warnings import warn as _warn
from types import MethodType as _MethodType, BuiltinMethodType as _BuiltinMethodType
from math import log as _log, exp as _exp, pi as _pi, e as _e, ceil as _ceil
from math import sqrt as _sqrt, acos as _acos, cos as _cos, sin as _sin
from os import urandom as _urandom
from collections.abc import Set as _Set, Sequence as _Sequence
from hashlib import sha512 as _sha512
Another technique is to perform imports in function scope, so that they become local variables:
"""Some module"""
# imports conventionally go here
def some_function(arg):
"Do something with arg."
import re # Regular expressions solve everything
...
The main rationale for doing this is that it is effectively lazy, delaying the importing of a module's dependencies until they are actually used. Suppose one function in the module depends on a particular huge library. Importing the library at the top of the file would mean that importing the module would load the entire library. This way, importing the module can be quick, and only client code that actually calls that function incurs the cost of loading the library. Further, if the dependency library is not available, client code that doesn't need the dependent feature can still import the module and call the other functions. The disadvantage is that using function-level imports obscures what your code's dependencies are.
Example from Python 3.3's os.py:
def get_exec_path(env=None):
    """[...]"""
    # Use a local import instead of a global import to limit the number of
    # modules loaded at startup: the os module is always loaded at startup by
    # Python. It may also avoid a bootstrap issue.
    import warnings
Import the module as a whole: import foreignmodule. What you claim as a drawback is actually a benefit. Namely, prepending the module name makes your code easier to maintain and makes it more self-documenting.
Six months from now when you look at a line of code like foo = Bar(baz) you may ask yourself which module Bar came from, but with foo = cleverlib.Bar it is much less of a mystery.
Of course, the fewer imports you have, the less of a problem this is. For small programs with few dependencies it really doesn't matter all that much.
When you find yourself asking questions like this, ask yourself what makes the code easier to understand, rather than what makes the code easier to write. You write it once but you read it a lot.
For this situation I would go with an all_imports.py file which has all the
from foreignmodule import .....
from anothermodule import .....
and then in your working modules
import all_imports as fgn # or whatever you want to prepend
...
something = fgn.Class1()
Another thing to be aware of:
__all__ = ['func1', 'func2', 'this', 'that']
Now, any functions/classes/variables/etc. that are in your module but not in your module's __all__ will not show up in help(), and won't be imported by from mymodule import *. See Making python imports more structured? for more info.
I would compromise and just pick a short alias for the foreign module:
import foreignmodule as fm
It saves you completely from the pollution (probably the bigger issue) and at least reduces the prepending burden.
I know this is an old question. It may not be 'Pythonic', but the cleanest way I've discovered for exporting only certain module definitions is, much as you've found, to wrap the whole module in a function. But instead of returning the names you want to export, you can simply globalize them (global thus in essence becomes a kind of 'export' keyword):
def module():
    global MyPublicClass, ExportedModule

    import somemodule as ExportedModule
    import anothermodule as PrivateModule

    class MyPublicClass:
        def __init__(self):
            pass

    class MyPrivateClass:
        def __init__(self):
            pass

module()
del module
I know it's not much different than your original conclusion, but frankly to me this seems to be the cleanest option. The other advantage is, you can group any number of modules written this way into a single file, and their private terms won't overlap:
def module():
    global A
    i, j, k = 1, 2, 3
    class A:
        pass

module()
del module

def module():
    global B
    i, j, k = 7, 8, 9  # doesn't overwrite the previous declarations
    class B:
        pass

module()
del module
Though, keep in mind their public definitions will, of course, overlap.
I am new to python programming. How can I add new built-in functions and keywords to the Python interpreter using C or C++?
In short, it is technically possible to add things to Python's builtins†, but it is almost never necessary (and generally considered a very bad idea).
In longer, it's obviously possible to modify Python's source and add new builtins, keywords, etc… But the process for doing that is a bit out of the scope of the question as it stands.
If you'd like more detail on how to modify the Python source, how to write C functions which can be called from Python, or something else, please edit the question to make it more specific.
If you are new to Python programming and you feel like you should be modifying the core language in your day-to-day work, that's probably an indicator you should simply be learning more about it. Python is used, unmodified, for a huge number of different problem domains (for example, numpy is an extension which facilitates scientific computing and Blender uses it for 3D animation), so it's likely that the language can handle your problem domain too.
†: you can modify the __builtin__ module to “add new builtins”… But this is almost certainly a bad idea: any code which depends on it will be very difficult (and confusing) to use anywhere outside the context of its original application. Consider, for example, if you add a greater_than_zero “builtin”, then use it somewhere else:
$ cat foo.py
import __builtin__
__builtin__.greater_than_zero = lambda x: x > 0
def foo(x):
    if greater_than_zero(x):
        return "greater"
    return "smaller"
Anyone who tries to read that code will be confused because they won't know where greater_than_zero is defined, and anyone who tries to use that code from an application which hasn't snuck greater_than_zero into __builtin__ won't be able to use it.
A better method is to use Python's existing import statement: http://docs.python.org/tutorial/modules.html
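For example (utils.py is a made-up module name), the same helper without touching __builtin__:

$ cat utils.py
def greater_than_zero(x):
    return x > 0

$ cat foo.py
from utils import greater_than_zero

def foo(x):
    if greater_than_zero(x):
        return "greater"
    return "smaller"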
For Python 3, use import builtins (the module was renamed from Python 2's __builtin__).
# example 1
import builtins

def f():
    print('f is called')

builtins.g = f
g()  # output = f is called

####################################

# example 2
import builtins

k = print  # keep a reference to the real print

def f(s):
    k('new print called : ' + s)

builtins.print = f
print('abc')  # output = new print called : abc
While David Wolever's answer is perfect, it should be noted again that the asker is new to Python. Basically all he wants is a global function, which can be done in two different ways...
Define a function in your module and use it.
Define a function in a different module and import it using the "from module import *" statement.
I think the asker's solution is the 2nd option, and anyone new to Python who has this question should look into the same.
For an advanced user, I would agree with Wolever's suggestion that it is a bad idea to insert a new function into the builtin module. However, maybe the user is looking for a way to avoid importing an always-used module in every script in the project. And that is a valid use case. Of course the code will not make sense to people who aren't part of the project, but that shouldn't be a concern. Anyway, such users should look into the PYTHONSTARTUP environment variable. I would suggest looking it up in the index of the Python documentation and checking all the links that discuss this environment variable to see which page serves your purpose. However, this solution works for interactive mode only; it does not apply to scripts that are run non-interactively.
For an all around solution look in to this function that I have implemented: https://drive.google.com/file/d/19lpWd_h9ipiZgycjpZW01E34hbIWEbpa/view
Yet another way is extending or embedding Python, which is a relatively complex topic; it is best to read the Python documentation on it. For basic users, all I would say is that...
Extending means adding new builtin modules to the Python interpreter.
Embedding means inserting Python interpreter into your application.
And advanced users already know what they are doing!
You can use the builtins module.
Example 1:
import builtins

def write(x):
    print(x)

builtins.write = write
write("hello")
# output:
# hello
Example 2:
import builtins

def hello(*name):
    print(f"Hello, {' '.join(name)}!")

builtins.hello = hello
hello("Clark", "Kent")
# output:
# Hello, Clark Kent!
In Python, if you need a module from a different package you have to import it. Coming from a Java background, that makes sense.
import foo.bar
What doesn't make sense though, is why do I need to use the full name whenever I want to use bar? If I wanted to use the full name, why do I need to import? Doesn't using the full name immediately describe which module I'm addressing?
It just seems a little redundant to have from foo import bar when that's what import foo.bar should be doing. It's also a little unclear why I need to import at all when I'm going to use the full name.
The thing is, even though Python's import statement is designed to look similar to Java's, they do completely different things under the hood. As you know, in Java an import statement is really little more than a hint to the compiler. It basically sets up an alias for a fully qualified class name. For example, when you write
import java.util.Set;
it tells the compiler that throughout that file, when you write Set, you mean java.util.Set. And if you write s.add(o) where s is an object of type Set, the compiler (or rather, linker) goes out and finds the add method in Set.class and puts in a reference to it.
But in Python,
import util.set
(that is a made-up module, by the way) does something completely different. See, in Python, packages and modules are not just names, they're actual objects, and when you write util.set in your code, that instructs Python to access an object named util and look for an attribute on it named set. The job of Python's import statement is to create that object and attribute. The way it works is that the interpreter looks for a file named util/__init__.py, uses the code in it to define properties of an object, and binds that object to the name util. Similarly, the code in util/set.py is used to initialize an object which is bound to util.set. There's a function called __import__ which takes care of all of this, and in fact the statement import util.set is basically equivalent to
util = __import__('util.set')
The point is, when you import a Python module, what you get is an object corresponding to the top-level package, util. In order to get access to util.set you need to go through that, and that's why it seems like you need to use fully qualified names in Python.
There are ways to get around this, of course. Since all these things are objects, one simple approach is to just bind util.set to a simpler name, i.e. after the import statement, you can have
set = util.set
and from that point on you can just use set where you otherwise would have written util.set. (Of course this obscures the built-in set class, so I don't recommend actually using the name set.) Or, as mentioned in at least one other answer, you could write
from util import set
or
import util.set as set
This still imports the package util with the module set in it, but instead of creating a variable util in the current scope, it creates a variable set that refers to util.set. Behind the scenes, this works kind of like
_util = __import__('util', fromlist='set')
set = _util.set
del _util
in the former case, or
_util = __import__('util.set')
set = _util.set
del _util
in the latter (although both ways do essentially the same thing). This form is semantically more like what Java's import statement does: it defines an alias (set) to something that would ordinarily only be accessible by a fully qualified name (util.set).
You can shorten it, if you would like:
import foo.bar as whateveriwant
Using the full name prevents two packages with the same-named submodules from clobbering each other.
There is a module in the standard library called io:
In [84]: import io
In [85]: io
Out[85]: <module 'io' from '/usr/lib/python2.6/io.pyc'>
There is also a module in scipy called io:
In [95]: import scipy.io
In [96]: scipy.io
Out[96]: <module 'scipy.io' from '/usr/lib/python2.6/dist-packages/scipy/io/__init__.pyc'>
If you wanted to use both modules in the same script, then namespaces are a convenient way to distinguish the two.
In [97]: import this
The Zen of Python, by Tim Peters
...
Namespaces are one honking great idea -- let's do more of those!
In Python, importing doesn't just indicate you might use something. The import actually executes code at module level. You can think of the import as being the moment where the functions are 'interpreted' and created. Any code that is at the __init__.py level, or not inside a function or class definition, runs then.
The import also binds the module object, with its whole namespace, to a name in the namespace of the file / module / whatever where it is imported; this is an inexpensive operation. An IDE then has a list of the functions you might be starting to type, for command completion.
Part of the Python philosophy is explicit is better than implicit. Python could automatically import the first time you try to access something from a package, but that's not explicit.
I'm also guessing that package initialization would be much more difficult if the imports were automatic, as it wouldn't be done consistently in the code.
You're a bit confused about how Python imports work. (I was too when I first started.) In Python, you can't simply refer to something within a module by the full name, unlike in Java; you HAVE to import the module first, regardless of how you plan on referring to the imported item. Try typing math.sqrt(5) in the interpreter without importing math or math.sqrt first and see what happens.
Anyway... the reason import foo.bar has you required to use foo.bar instead of just bar is to prevent accidental namespace conflicts. For example, what if you do import foo.bar, and then import baz.bar?
You could, of course, choose to do import foo.bar as bar (i.e. aliasing), but if you're doing that you may as well just use from foo import bar. (EDIT: except when you want to import methods and variables. Then you have to use the from ... import ... syntax. This includes instances where you want to import a method or variable without aliasing, i.e. you can't simply do import foo.bar if bar is a method or variable.)
Unlike in Java, in Python import foo.bar declares that you are going to use the thing referred to by foo.bar.
This matches Python's philosophy that explicit is better than implicit. There are other programming languages that make inter-module dependencies even more explicit than Java, for example Ada.
Using the full name makes it possible to disambiguate definitions with the same name coming from different modules.
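For example, with two real standard-library modules that define the same name:

import os.path
import posixpath

# both modules define join(); the full name says which one you mean
print(os.path.join('a', 'b'))    # separator of the current platform
print(posixpath.join('a', 'b'))  # always 'a/b'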
You don't have to use the full name. Try one of these
from foo import bar
import foo.bar as bar
import foo.bar
bar = foo.bar
from foo import *
A few reasons why explicit imports are good:
They help signal to humans and tools what packages your module depends on.
They avoid the overhead of dynamically determining which packages have to be loaded (and possibly compiled) at run time.
They (along with sys.path) unambiguously distinguish symbols with conflicting names from different namespaces.
They give the programmer some control of what enters the namespace within which he is working.