The Situation
I'm currently working on a small but very expandable project, where I have the following structure:
/
|- main.py
|- services
|  |- __init__.py
|  |- service1.py
|  |- service2.py
|  |- ...
Every one of these services creates an object; all of them take exactly the same arguments and are used in the same way. They differ only internally, where they each do some (for this question unimportant) thing in a different way.
My code currently handles it roughly like this:
main.py
from services import *

someObject = {}  # content doesn't matter, it's always the same
serv_arr = []  # a list to hold all services
serv_arr.append(service1.service1(someObject))
serv_arr.append(service2.service2(someObject))
...

for service in serv_arr:
    # this function always has the same name and return type in each service
    service.do_something()
The Question
My specific question is:
Is there a way to automate the creation of serv_arr with a loop, such that if I add service100.py and service101.py to the services package, I don't have to go back into main.py and add them manually; instead, it automatically loads whatever it needs?
First off, you should really avoid using the from xxx import * pattern, as it clutters the global namespace.
You could add a list of available services to services/__init__.py
Something like this, perhaps:
# services/__init__.py
from .service1 import service1
from .service2 import service2
...
services = [service1, service2, ...]
__all__ = ['services']
If that's still too manual for you, you could iterate over the directory and use importlib to import the services by their paths.
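As a sketch of that importlib approach: the code below walks a package's modules with pkgutil and collects a create_instance() factory from each one. The services package layout and the create_instance convention come from the question; the function name load_services is my own invention.

```python
import importlib
import pkgutil

def load_services(package_name):
    """Import every module in a package and collect its create_instance(), if any."""
    package = importlib.import_module(package_name)
    factories = []
    for info in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f'{package_name}.{info.name}')
        factory = getattr(module, 'create_instance', None)
        if factory is not None:
            factories.append(factory)
    return factories
```

With this in place, adding service100.py to the package requires no change to main.py at all.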
However, I can't help but think this problem is indicative of a bad design. You might want to consider using something like the Factory Pattern to instantiate the various services, rather than having a large number of separate modules. As it is, if you wanted to make a small change to all of the services, you'll have a lot of tedious work ahead of you to do so.
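One way to realize the Factory Pattern the paragraph above alludes to is a class registry: each service class registers itself with a decorator, and a single factory function instantiates them all. The class and function names here are illustrative, not from the question.

```python
# registry of available service classes, keyed by class name
_registry = {}

def register(cls):
    """Class decorator that records each service class in the registry."""
    _registry[cls.__name__] = cls
    return cls

@register
class Service1:
    def __init__(self, obj):
        self.obj = obj
    def do_something(self):
        return 'service1'

@register
class Service2:
    def __init__(self, obj):
        self.obj = obj
    def do_something(self):
        return 'service2'

def make_services(obj):
    """Instantiate every registered service with the same argument."""
    return [cls(obj) for cls in _registry.values()]
```

A new service then only needs the @register decorator; nothing else has to be updated.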
Okay, building on this idea:
Austin Philp's answer
# services/__init__.py
from .service1 import service1
from .service2 import service2
...
services = [service1, service2, ...]
__all__ = ['services']
And the idea of specifically exposed methods and modules from the Factory Pattern mentioned in this answer, I came up with a very hacky solution that works without cluttering the global namespace (another thing criticized by @Austin Philp).
The Solution
I got the idea to implement a method in each module that does nothing but create an instance of that module's class, and each module is mentioned in services/__init__.py:
# services/__init__.py
from .service1 import service1
from .service2 import service2
__all__ = ["service1", "service2", ...]

# services/service1.py
class service1(object):
    def __init__(self, input):
        ...
    ...

def create_instance(input):
    return service1(input)  # create the object and return it
Then in main.py, I simply do this (it is extremely hacky, but it works):
# main.py
import services
import sys

# use the __all__ list to get the module names
for name in services.__all__:
    service = sys.modules[f'services.{name}'].create_instance(input)
    # do whatever with service
This way I can just happily do whatever is needed without cluttering the global namespace, while still iterating over or even individually calling the modules. The only thing I have to edit to add or remove a module is the __all__ variable inside services/__init__.py. It even removes the need for the serv_arr list, because services.__all__ already has all the names I am interested in and has the same length as the number of modules used.
Related
I am making a tiny framework for games with pygame, in which I wish to provide basic code to quickly start new projects. It will be a module whose users should just create a folder with subfolders for sprite classes, maps, levels, etc.
My question is, how should my framework module load these client modules? I was considering designing it so the developer could just pass the directory names to the main object, like:
game = Game()
game.scenarios = 'scenarios'
Then game would append 'scenarios' to sys.path and use __import__(). I've tested it and it works.
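The approach described above can be sketched as a small helper; the function name load_client_module is hypothetical, not from any framework.

```python
import sys

def load_client_module(path, name):
    """Add a directory to sys.path and import a module from it by name.

    This mirrors the question's idea: the developer hands the framework a
    directory, and the framework imports the client code it finds there.
    """
    if path not in sys.path:
        sys.path.append(path)
    return __import__(name)
```

Note that this mutates interpreter-wide state (sys.path), which is part of why the explicit-import alternative below is usually preferred.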
But then I researched a little more to see if there was already an autoloader in Python, so I could avoid rewriting one, and I found this question: Python modules autoloader?
Basically, it is not recommended to use an autoloader in Python, since "explicit is better than implicit" and "readability counts".
In that case, I think, I should compel the user of my module to manually import each of their modules and pass them to the game instance, like:
import framework.Game
import scenarios
#many other imports
game = Game()
game.scenarios = scenarios
#so many other game.whatever = whatever
But this doesn't look good to me, and it isn't very comfortable. See, I am used to working with PHP, and I love the way it works with its autoloader.
So, does the first example have some probability of crashing or causing trouble, or is it just not 'pythonic'?
Note: this is NOT a web application.
I wouldn't consider letting a library import things from my current path or module good style. Instead I would only expect a library to import from two places:
Absolute imports from the global module space, like things you have installed using pip. If a library does this, the dependency must also appear in its install_requires=[] list
Relative imports from inside itself. Nowadays these are explicitly imported from .:
from . import bla
from .bla import blubb
This means that passing an object or module local to my current scope must always happen explicitly:
from . import scenarios
import framework
scenarios.sprites # attribute exists
game = framework.Game(scenarios=scenarios)
This allows you to do things like mock the scenarios module:
import types
import framework
# a SimpleNamespace looks like a module, as they both have attributes
scenarios = types.SimpleNamespace(sprites='a', textures='b')
scenarios.sprites # attribute exists
game = framework.Game(scenarios=scenarios)
You can also implement a framework.utils.Scenario() class that implements a certain interface to provide sprites, maps, etc. The reason: sprites and maps are usually saved in separate files, and what you absolutely do not want to do is look at the scenario module's __file__ attribute and start guessing around in its files. Instead, implement a method that provides a unified interface to them.
class Scenario():
    def __init__(self):
        ...

    def sprites(self):
        # optionally load files from some default location.
        # If no default location exists, raise NotImplementedError
        ...
And your user-specific scenarios will derive from it and optionally override the loading methods:
import framework.utils

class Scenario(framework.utils.Scenario):
    def __init__(self):
        ...

    def sprites(self):
        # this method *must* load files from its location;
        # accessing __file__ is OK here
        ...
What you can also do is have framework ship its own framework.contrib.scenarios module that is used in case no scenarios= keyword argument was given (i.e. for a square default map and some colorful default textures):
from . import contrib

class Game():
    def __init__(self, ..., scenarios=None, ...):
        if scenarios is None:
            scenarios = contrib.scenarios
        self.scenarios = scenarios
I am in the midst of refactoring some single-file Python modules into multi-file packages, and I am encountering the same problem pattern repeatedly: I have objects that are part of the public interface of the package, but must also be used internally by submodules of the package.
mypackage/
__init__.py # <--- Contains object 'cssURL'
views.py # <--- Needs to use object 'cssURL'
In this case, it's important that clients of mypackage have access to mypackage.cssURL. However, my submodule, views.py, also needs it, but has no access to the contents of __init__.py. Sure, I can create another submodule like so:
mypackage/
__init__.py
views.py
style.py # <--- New home for 'cssURL'
However, if I did this every time, it seems like it would multiply the number of submodules exceedingly. Moreover, clients must now refer to mypackage.cssURL as mypackage.style.cssURL, or else I must create a synonym in __init__.py like this:
import style
cssURL = style.cssURL
I think I am doing something wrong. Is there a better way to handle these kinds of package members that are both part of the public interface and used internally?
You can refer to the current package as .:
# views.py
from . import cssURL
See here for more information.
I would structure it as follows:
/mypackage
    __init__.py
        from .style import cssURL
        ...
    style.py
        cssURL = '...'  # or whatever
        ...
    views.py
        from .style import cssURL
        ...
If other modules within the same package need them, I wouldn't define names in __init__.py; just create an alias there for external consumers to use.
As far as I know, the preferred way is to create a "synonym" in __init__.py with "from .style import cssURL"; cf. the source for the json module.
I am currently creating a package for python but I would like to give access to the user only a specific set of functions defined in this package. Let's say that the structure file is as follows:
my_package/
__init__.py
modules/
__init__.py
functions.py
In functions.py, there are several functions as below (those are silly examples):
def myfunction(x):
    return my_subfunction1(x) + my_subfunction2(x)

def my_subfunction1(x):
    return x

def my_subfunction2(x):
    return 2*x
I want the user to be able to import my_package and directly access myfunction, but NOT my_subfunction1 and my_subfunction2. For example, let's say that only myfunction is useful for the user, whereas the sub-functions are only intermediate computations.
import my_package

a = my_package.myfunction(1)  # should return 3
b = my_package.my_subfunction1(1)  # should raise an AttributeError; the function is not exposed
I can think of two ways of solving my problem by adding the following lines to the __init__.py file inside my_package/:
1. from modules.functions import myfunction
2. from modules.functions import *, renaming the subfunctions with a leading underscore to exclude them from the starred import, i.e. _my_subfunction1 and _my_subfunction2
Both of these tricks seem to work well so far.
My question is thus: Is this the correct "pythonic" way to do it? Which one is better? If neither is the right way, how should I rewrite it?
Thanks for your help.
I believe you should take a look at the __all__ variable.
In your case, just set, in your __init__.py:
__all__ = ['myfunction']
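To see the effect of __all__ concretely, the snippet below builds a throwaway module in memory (the function names are taken from the question) and shows that a star-import only picks up the names listed in __all__:

```python
import sys
import types

# build a throwaway module so the demo is self-contained;
# 'my_package_demo' is a made-up name, not a real package
mod = types.ModuleType('my_package_demo')
exec(
    "def myfunction(x):\n"
    "    return my_subfunction1(x) + my_subfunction2(x)\n"
    "def my_subfunction1(x):\n"
    "    return x\n"
    "def my_subfunction2(x):\n"
    "    return 2 * x\n"
    "__all__ = ['myfunction']\n",
    mod.__dict__,
)
sys.modules['my_package_demo'] = mod

ns = {}
exec("from my_package_demo import *", ns)
# only the name listed in __all__ is star-imported
assert 'myfunction' in ns
assert 'my_subfunction1' not in ns
```

Note that __all__ only filters star-imports; a determined user can still reach my_package.modules.functions.my_subfunction1 explicitly.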
Let's assume I have a python package called bestpackage.
Convention dictates that bestpackage would also be a directory on sys.path containing an __init__.py, so the interpreter assumes it can be imported from.
Is there any way I can set a variable for the package name so the directory could be named something different than the directive I import it with? Is there any way to make the namespacing not care about the directory name and honor some other config instead?
My super trendy client-side devs are just so much in love with these sexy something.otherthing.js project names for one of our smaller side projects!
EDIT:
To clarify, the main purpose of my question was to let my client-side guys continue naming the directories in their "projects" folder (the one we all have added to our paths) using their existing convention (some.app.js), even though in some cases they are in fact Python packages that will be on the path and used in import statements internally. I realize this is in practice a pretty horrible thing, so I ask more out of curiosity. Part of the big problem here is circumventing the fact that the . in the directory name (and thereby the assumed package name) implies attribute access. It doesn't really surprise me that this cannot be worked around; I was just curious whether it was possible deeper in the "magic" behind import.
There are some great responses here, but all rely on doing a classical import of some kind, where the attribute accessor . will clash with the directory names.
A directory with a __init__.py file is called a package.
And no, the package name is always the same as the directory name. That's how Python discovers packages: it matches the import name against directory names found on the search path, and if there is an __init__.py file in that directory, it has found a match and imports the __init__.py file contained there.
You can always import something into your local module namespace under a shorter, easier to use name using the from module import something or the import module as alias syntax:
from something.otherthing.js import foo
from something.otherthing import js as bar
import something.otherthing.js as hamspam
There is one solution which needs one initial import somewhere:
>>> import sys
>>> sys.modules['blinot_existing_blubb'] = sys
>>> import blinot_existing_blubb
>>> blinot_existing_blubb
<module 'sys' (built-in)>
Without a change to the import mechanism, you cannot import under another name. This is intended, I think, to make Python easier to understand.
However if you want to change the import mechanism I recommend this: Getting the Most Out of Python Imports
Well, first I would say that Python is not Java/Javascript/C/C++/Cobol/YourFavoriteLanguageThatIsntPython. Of course, in the real world, some of us have to answer to bosses who disagree. So if all you want is some indirection, use smoke and mirrors, as long as they don't pay too much attention to what's under the hood. Write your module the Python way, then provide an API on the side in the dot-heavy style that your coworkers want. Ex:
pythonic_module.py
def func_1():
    pass

def func_2():
    pass

def func_3():
    pass

def func_4():
    pass
indirection
/dotty_api_1/__init__.py
from pythonic_module import func_1 as foo, func_2 as bar
/dotty_api_2/__init__.py
from pythonic_module import func_3 as foo, func_4 as bar
Now they can dot to their hearts' content, but you can write things the Pythonic way under the hood.
Actually yes!
You could do a canonical import Whatever, or newmodulename = __import__("Whatever").
Python keeps track of your modules, and you can inspect that by doing:
import sys
print(sys.modules)
See this article for more details.
But maybe that's not your problem? Let's guess: you have a module in a different path which your current project can't access because it's not on sys.path? Well then, just add:
import sys
sys.path.append('path_to_the_other_package_or_module_directory')
prior to your import statement, or see this SO post for a more permanent solution.
I was looking for this to happen with setup.py at sdist and install time, rather than runtime, and found the directive package_dir:
https://docs.python.org/3.5/distutils/setupscript.html#listing-whole-packages
Not sure if there's a neat way of dealing with it; it just makes sense to me visually to lay out each object/class into its own module under a common package.
For instance:
/Settings/
/Settings/__init__.py
/Settings/AbstractSetting.py
/Settings/Float.py
/Settings/String.py
Each class inside of every module has the same name as the module and at the moment I keep doing this:
import Settings
mysetting = Settings.Float.Float()
...which is giving me these double "Float" names.
I could do, in the __init__.py of the package:
from .Float import Float
...so that I could then do:
import Settings
mysetting = Settings.Float()
But I'd like this package to be dynamically updating to whatever modules I put inside of it. So that the next day, when I've added "Knob.py" to this package, I could do:
import Settings
myknob = Settings.Knob()
Makes sense?
But again, I haven't worked with packages before and am still trying to wrap my head around them and make it as easy as possible. At this point, I find it easier having all classes inside one big master module, which is getting increasingly cumbersome.
Maybe packages aren't the way to go? What alternatives do I have?
Thanks a bunch.
EDIT: The main reason I want to do this is to let users write their own modules that will integrate with the rest of the application. A native "plugin" architecture, if you will.
Each module will contain a class that inherits from a superclass with default values. The app then has a browser of available modules that, when clicked, displays relevant information found under the module's attributes. Each contained class then has a similar interface which the application can use.
I did some further reading and apparently this is not the way to go. I'd love to hear your ideas though on what the benefits/disadvantages of this approach could be.
You should be aware that this is not the Python way. "One class per file" is a Java philosophy that does not apply in the Python world. We usually name modules in lowercase and stick related classes into the same file (in your example, all of the classes would go into settings.py or would be explicitly imported from there). But I guess the fact that you want users to provide plugins is a legitimate reason for your approach (immdbg does it the same way, I think).
So, if you really want to do this, you could put something like this into your Settings/__init__.py:
import os
import glob
import imp

for f in glob.glob(os.path.join(os.path.dirname(__file__), '*.py')):
    modname = os.path.basename(f)[:-3]
    if modname.startswith('__'): continue
    mod = imp.load_source(modname, f)
    globals()[modname] = getattr(mod, modname)

    # or if you just want to import everything (even worse):
    #for name in dir(mod):
    #    if name.startswith('__'): continue
    #    globals()[name] = getattr(mod, name)
Can you feel how the Python developers don't want you to do this? :)
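A note on the answer above: the imp module is deprecated (and removed in Python 3.12), so on modern Python the same directory scan would be written with importlib. The sketch below factors it into a function (the name load_classes is my own) rather than running at import time in __init__.py:

```python
import glob
import importlib.util
import os

def load_classes(package_dir):
    """For each Module.py in a directory, load the class of the same name.

    Mirrors the imp-based snippet above: one class per file, class name
    matching the module name, as in the question's Settings package.
    """
    classes = {}
    for f in glob.glob(os.path.join(package_dir, '*.py')):
        modname = os.path.basename(f)[:-3]
        if modname.startswith('__'):
            continue
        spec = importlib.util.spec_from_file_location(modname, f)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        classes[modname] = getattr(mod, modname)
    return classes
```

In __init__.py you would then do globals().update(load_classes(os.path.dirname(__file__))).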
There are many plugin systems; the situation is exemplified by the name of one such system, yapsy (yet another plugin system).
You could create an object that provides necessary interface:
class Settings(object):
    def __getattr__(self, attr):
        return load_plugin(attr)

settings = Settings()
In your code:
from settings import settings
knob = settings.Knob()
You can use whatever implementation you like for load_plugin() e.g., for the code from the question:
from importlib import import_module

def load_plugin(name):
    m = import_module('Settings.' + name)
    return getattr(m, name)