I'm starting a django 1.10 project and would like to split the settings file. I was unsatisfied with any existing solutions.
I do not want to be able to override string/boolean/dict settings from one file in another. Each string/boolean/dict setting should be set in only one place. This makes it easy to keep track of where things are defined.
I do not want to have to manually extend tuple/list settings, e.g. INSTALLED_APPS += ('test_app',). This seems messy and requires me to keep track of whether a list or a tuple was used in the other file.
I do not want to have to import os and define BASE_DIR in multiple files. DRY.
My solution, having looked at many others, is to replace settings.py with a directory containing local_settings.py, common_settings.py and __init__.py.
In __init__.py, I import os and calculate BASE_DIR. I then do the following:
import builtins
builtins.BASE_DIR = BASE_DIR
builtins.os = os
from .common_settings import *
from . import local_settings
# At this point both modules have run and we no longer need to be messing
# with the builtins namespace.
del builtins.BASE_DIR
del builtins.os
del builtins
I then loop over dir(local_settings) and mess with globals() to achieve the first two requirements (I can post the whole thing if requested, but I'm interested in my use of builtins).
Is this use of builtins too evil? What could break it? Obviously, if either identifier clashed with an attribute of a later version of builtins, this code would break Python. If a function that uses either of these identifiers ended up in one of the settings files and was later called, that would also break.
I don't see either of those happening though. Is there a potential problem that I'm not seeing?
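For concreteness, here is a self-contained sketch of the kind of merging loop described above, using hypothetical stand-ins for the two settings modules (the real version iterates over the actual module objects and writes into globals()):

```python
import types

# Hypothetical stand-ins for common_settings and local_settings.
common = {'DEBUG': True, 'INSTALLED_APPS': ('django.contrib.admin',)}
local_settings = types.SimpleNamespace(INSTALLED_APPS=('myapp',), SECRET_KEY='x')

merged = dict(common)
for name in dir(local_settings):
    if name.startswith('_') or not name.isupper():
        continue
    value = getattr(local_settings, name)
    if name in merged:
        existing = merged[name]
        if isinstance(existing, (list, tuple)):
            # requirement 2: sequence settings are extended automatically
            merged[name] = tuple(existing) + tuple(value)
        else:
            # requirement 1: scalar settings must live in exactly one place
            raise ValueError(name + ' defined in both settings files')
    else:
        merged[name] = value

assert merged['INSTALLED_APPS'] == ('django.contrib.admin', 'myapp')
assert merged['SECRET_KEY'] == 'x'
```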
The main problem with modifying builtins in this way is that it adds non-local reasoning to your code for no good reason. The behavior of the common/local settings modules now implicitly depends on what happens in the module that imports them. That's bad.
Essentially, you need to get rid of your requirement #3.
Importing os in each module isn't "repeating yourself" because each module imports os into its own namespace. That's just how Python works.
You're right to want to only define BASE_DIR once, but the correct way to do this is to define the variable in one module (say basedir.py) and then explicitly import that variable (from basedir import BASE_DIR) into each module that uses it.
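A minimal sketch of that pattern, with basedir.py as a hypothetical module name:

```python
# basedir.py -- a single, hypothetical home for the shared constant
import os

BASE_DIR = os.path.dirname(os.path.abspath(__file__))

# common_settings.py and local_settings.py would then each start with:
#     from .basedir import BASE_DIR
# so the value is computed once, and every use of it is an explicit import.
```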
Related
I hope the question phrasing is meaningful. What I am wanting to do is change a flat variable's value in a file, and have the files which have imported that file see the updated value. It appears that I can do this. For example:
#settings.py
VARIABLE = 1
def change_variable():
    global VARIABLE
    VARIABLE = 2
and
#main.py
import settings
print(settings.VARIABLE)
settings.change_variable()
print(settings.VARIABLE)
which outputs:
1
2
As desired. Although I was a little surprised since I thought maybe the value of settings.VARIABLE would be fixed after settings was imported. I would like to know whether I can rely on this behaviour. My question is thus also, when in general will the values from an imported file be "updated" or "re-evaluated" from the perspective of the importing file? How does it work behind the scenes?
I could of course just make a class. But I don't like the idea of settings, or any config, being an object. I prefer it flat. But I want the option to change the settings after import based on user cli input.
Once the file settings.py has been imported, Python is done looking at the file. It now has a module object loaded in memory (and cached in sys.modules), and if the module is imported somewhere else, that same object is reused. The file itself is never read again after the first import unless you explicitly reload it.
Your function changed the value of VARIABLE in that module. You can depend on it being your new value unless you change it again.
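You can see both behaviours (the cache, and the shared module object) with a quick check; json here is just a convenient stdlib module to poke at:

```python
import sys

# A module is executed once; later imports reuse the cached object.
import json
import json as json_again

assert json is json_again            # same object, not a re-read of the file
assert sys.modules['json'] is json   # the cache that all importers share

# Rebinding an attribute on the module is visible to every importer,
# exactly like settings.change_variable() rebinding VARIABLE.
json.CUSTOM_FLAG = 2
assert sys.modules['json'].CUSTOM_FLAG == 2
```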
Let's say I have imported two modules like this:
from module0 import hello_func
from directory.module1 import hello_var
Where in module0:
def hello_func(): return "hello from module0"
And module1:
hello_var = "hello from module1"
How can I know from which file is each object being imported?
I tried checking the locals() function, but nothing in there gives a reference to the file...
Actually, you kind of answered your question yourself:
Let's say I have imported two modules
(insert "from xxx import *" here)
How can I know from which file is each object being imported?
One of the reasons NOT to use wildcard imports is precisely to make it clear where names are imported from (the other being to avoid an imported name shadowing a previously imported one, something that tends to break your code in the most unexpected, and sometimes quite hard to spot, ways).
Note that in your edited question:
from module0 import hello_func
from directory.module1 import hello_var
you already have a much better idea where a name comes from. Not the exact file paths yet, but at least the name of the package/module.
And that's one of the main reasons why one should NOT use wildcard imports.
Now if you want to know the exact file paths, there are two distinct cases.
Some objects keep track of where they were created (mostly modules, classes, functions, etc.; cf. the list of types supported by inspect.getfile()), and then, well, you already know the answer: use inspect.getfile() xD.
But most types won't (because there's no reason to). In that case, you have to know which module they were imported from and call inspect.getfile() on the module itself. Here, if you used wildcard imports, you will have to manually inspect all the modules you imported from to find out which one defined the name. Enjoy. Especially if one of those modules also used wildcard imports...
One question, please: where do they keep track of this, and what does that information look like?
Modules keep it in their __file__ attribute. Functions and classes keep their module's name in their __module__ attribute, and from that you can retrieve the module from the sys.modules dict (a cache of all modules already imported in the current process), which will give you the file.
I never had a need to look this up for tracebacks, frames, code objects, etc., so you'll have to check those yourself, I'm afraid ;-)
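A short illustration of those two attributes, using the stdlib json module as a convenient example:

```python
import inspect
import sys
import json

# A module records its source file directly:
print(json.__file__)

# A function records only its module's *name*; resolve it via sys.modules:
mod = sys.modules[json.dumps.__module__]
assert mod is json
assert inspect.getfile(json.dumps) == inspect.getfile(mod)
```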
You can define in each module a constant with its path, something like this should work:
import os
FILE_PATH = os.path.abspath(__file__)
When you import that module you can access its location like this:
import module
print(module.FILE_PATH)
Another solution using the inspect and os modules.
import module0
import os
import inspect
print(os.path.abspath(inspect.getfile(module0.hello_func)))
If you are looking for the absolute path of where the script is being run, this should work for sure:
import os
abs_path = os.path.dirname(os.path.abspath(__file__))
print(abs_path)
From the perspective of an external user of the module, are both necessary?
From my understanding, correctly prefixing private functions with an underscore essentially does the same thing as explicitly defining __all__, but I keep seeing developers do both in their code. Why is that?
When importing from a module with from modulename import *, names starting with underscores are indeed skipped.
However, a module rarely contains only public API objects. Usually you've made imports to support the code, and those names are global in the module too. Without __all__, they would all be part of the import as well.
In other words, unless you want to 'export' os in the following example you should use __all__:
import os
from .implementation import some_other_api_call
_module_path = os.path.dirname(os.path.abspath(__file__))
_template = open(os.path.join(_module_path, 'templates/foo_template.txt')).read()
VERSION = '1.0.0'
def make_bar(baz, ham, spam):
    return _template.format(baz, ham, spam)
__all__ = ['some_other_api_call', 'make_bar']
because without the __all__ list, Python cannot distinguish between some_other_api_call and os here and divine which one should not be imported when using from ... import *.
You could work around this by renaming all your imports, so import os as _os, but that'd just make your code less readable.
And an explicit export list is always nice. Explicit is better than implicit, as the Zen of Python tells you.
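A self-contained demonstration of the leak (the module is a throwaway written to a temp directory just for the example):

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway module that imports os but defines no __all__.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, 'leaky.py'), 'w') as f:
    f.write(textwrap.dedent('''\
        import os           # support import, not meant as API
        _hidden = 'secret'  # skipped by * thanks to the underscore
        PUBLIC = 42
    '''))
sys.path.insert(0, tmp)

ns = {}
exec('from leaky import *', ns)
assert 'PUBLIC' in ns       # the real API name arrives
assert '_hidden' not in ns  # underscore names are skipped
assert 'os' in ns           # but the support import leaks through!
```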
I also use __all__: it explicitly tells module users what you intend to export. Searching the module for names is tedious, even if you are careful to do, e.g., import os as _os, etc. A wise man once wrote "explicit is better than implicit" ;-)
Defining __all__ overrides the default behaviour, and there might actually be good reasons to define it.
When importing a module, you might want from mod import * to bring in only a minimal set of names. Even if you prefix everything correctly, there can still be reasons not to import everything.
Another problem I once had was with a gettext shortcut: the translation function was named _, which would not be imported under the default rules. Even though it is prefixed, I still wanted it exported.
One other reason, as stated above, is importing a module that itself imports a lot of things. Since Python cannot tell the difference between symbols created by imports and those defined in the module itself, it will automatically re-export everything it can. For that reason it can be wise to explicitly limit the exports to the names you actually mean to export.
With that in mind, you might even want some underscore-prefixed symbols to be exported. Usually you don't need to define __all__; when you need something unusual, it may make sense to do it.
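The gettext case above can be demonstrated with a tiny in-memory module (i18n is a made-up name for the example):

```python
import sys
import types

# A hypothetical i18n module whose whole point is the _ shortcut.
i18n = types.ModuleType('i18n')
i18n._ = lambda s: s        # identity "translation", just for the demo
i18n.__all__ = ['_']        # force export despite the underscore prefix
sys.modules['i18n'] = i18n

ns = {}
exec('from i18n import *', ns)
assert '_' in ns            # __all__ wins over the underscore rule
assert ns['_']('hi') == 'hi'
```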
A co-worker has a library that uses a hard-coded configuration defined in its own file. For instance:
constants.py:
API_URL="http://example.com/bogus"
Throughout the rest of the library, the configuration is accessed in the following manner.
from constants import API_URL
As you can imagine, this is not very flexible and causes problems during testing. If I want to change the configuration, I have to modify constants.py, which is in source code management.
Naturally, I'd much rather load the configuration from a JSON or YAML file. I can read the configuration into an object with no problems. Is there a way I can override the constants.py module without breaking the code, so that each global, e.g. API_URL is replaced by a value provided by my file?
I was thinking that after each from constants import ... I could add something like this:
from constants import * # existing configuration import
import json
new_config = json.load(open('config.json')) # load my config file into a dictionary
constants.__dict__.update(new_config) # override any constants with what I've loaded
The problem with this, of course, is that it's not very "DRY" and looks like it might be brittle.
Does anyone have a suggestion for doing this more cleanly? Thanks!
EDIT: looks like my approach doesn't work anyway. I guess from constants import * copies the values from the module into the current module's global scope?
DOUBLE EDIT: no, it does work; I'm just confused. But rather than doing this in X different files I'd like to have it work transparently if possible.
from module import <name> creates a reference in the importing module's global namespace to the imported object. If that object is immutable, you cannot change it in place, which means you would have to monkeypatch the name in every module that imported it.
Your only hope is to be the first to import constants and monkeypatch the names in that module. Subsequent imports will then use your monkeypatched values.
To patch the original module early, the following is enough:
import constants
for name, value in new_config.items():  # .iteritems() on Python 2
    setattr(constants, name, value)
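Here is the whole be-first-and-patch idea as a runnable sketch; an in-memory module stands in for constants.py so the example is self-contained:

```python
import sys
import types

# Stand-in for constants.py, registered as if it had been imported.
constants = types.ModuleType('constants')
constants.API_URL = 'http://example.com/bogus'
sys.modules['constants'] = constants

# Be the first importer: patch the module object *before* anyone
# grabs names out of it with `from constants import API_URL`.
new_config = {'API_URL': 'http://localhost:8000'}
for name, value in new_config.items():
    setattr(constants, name, value)

# A later import in some other module now sees the patched value.
ns = {}
exec('from constants import API_URL', ns)
assert ns['API_URL'] == 'http://localhost:8000'
```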
Let's assume I have a python package called bestpackage.
Convention dictates that bestpackage would also be a directory on sys.path containing an __init__.py, which tells the interpreter it can be imported from.
Is there any way I can set a variable for the package name so the directory could be named something different from the name I import it with? Is there any way to make the namespacing not care about the directory name and honor some other config instead?
My super trendy client-side devs are just so much in love with these sexy something.otherthing.js project names for one of our smaller side projects!
EDIT:
To clarify, the main purpose of my question was to let my client-side guys keep naming the directories in their "projects" folder (the one we all have added to our paths) using their existing convention (some.app.js), even though in some cases they are in fact Python packages that will be on the path and used in import statements internally. I realize this is in practice a pretty horrible thing, and so I ask more out of curiosity. A big part of the problem here is circumventing the fact that the . in the directory name (and thereby the assumed package name) implies attribute access. It doesn't really surprise me that this cannot be worked around; I was just curious whether it was possible deeper in the "magic" behind import.
There's some great responses here, but all rely on doing a classical import of some kind where the attribute accessor . will clash with the directory names.
A directory with a __init__.py file is called a package.
And no, the package name is always the same as the directory name. That's how Python discovers packages: it matches the name against directory names found on the search path, and if there is an __init__.py file in such a directory, it has found a match and imports the __init__.py file contained there.
You can always import something into your local module namespace under a shorter, easier-to-use name using the from module import something or the import module as alias syntax:
from something.otherthing.js import foo
from something.otherthing import js as bar
import something.otherthing.js as hamspam
There is one solution, which needs one initial import somewhere:
>>> import sys
>>> sys.modules['blinot_existing_blubb'] = sys
>>> import blinot_existing_blubb
>>> blinot_existing_blubb
<module 'sys' (built-in)>
Without a change to the import mechanism, you cannot import a module under another name. This is intended, I think, to make Python easier to understand.
However if you want to change the import mechanism I recommend this: Getting the Most Out of Python Imports
Well, first I would say that Python is not Java/Javascript/C/C++/Cobol/YourFavoriteLanguageThatIsntPython. Of course, in the real world, some of us have to answer to bosses who disagree. So if all you want is some indirection, use smoke and mirrors, as long as they don't pay too much attention to what's under the hood. Write your module the Python way, then provide an API on the side in the dot-heavy style that your coworkers want. Ex:
pythonic_module.py
def func_1():
    pass

def func_2():
    pass

def func_3():
    pass

def func_4():
    pass
indirection
/dotty_api_1/__init__.py
from pythonic_module import func_1 as foo, func_2 as bar
/dotty_api_2/__init__.py
from pythonic_module import func_3 as foo, func_4 as bar
Now they can dot to their hearts' content, but you can write things the Pythonic way under the hood.
Actually yes!
You could do a canonical import Whatever, or newmodulename = __import__("Whatever").
Python keeps track of your modules, and you can inspect that by doing:
import sys
print(sys.modules)
See this article for more details.
But maybe that's not your problem? Let's guess: you have a module in a different path, which your current project can't access because it's not on sys.path?
Well then, just add:
import sys
sys.path.append('path_to_the_other_package_or_module_directory')
prior to your import statement, or see this SO post for a more permanent solution.
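A self-contained illustration of the sys.path approach, creating a throwaway package in a temp directory (sidepkg is a made-up name):

```python
import importlib
import os
import sys
import tempfile

# Create a package in a directory that is not yet importable.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, 'sidepkg')
os.makedirs(pkg)
with open(os.path.join(pkg, '__init__.py'), 'w') as f:
    f.write("GREETING = 'hello'\n")

sys.path.append(tmp)          # now the interpreter can find it
importlib.invalidate_caches() # make sure the finders see the new directory

import sidepkg
assert sidepkg.GREETING == 'hello'
```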
I was looking for this to happen with setup.py at sdist and install time, rather than at runtime, and found the package_dir directive:
https://docs.python.org/3.5/distutils/setupscript.html#listing-whole-packages