I'm building a little program, and I'm trying to make it easily expandable in the future (without the need for much refactoring). I'm experimenting with things I have never used before, like typing.
I have a small typings file (called typings.py) that looks like this:
from typing import (
    Union,
    Any,
    Optional
)
from typing import ForwardRef as ref

CustomLevel = ref("levels.level.CustomLevel")
OfficialLevel = ref("levels.level.OfficialLevel")

__all__ = (
    "Union",
    "Any",
    "Optional",
    "CustomLevel",
    "OfficialLevel",
)
I also have a level.py file under ./levels/. That file contains a base class as well as CustomLevel and OfficialLevel classes. At the top of level.py I import the CustomLevel typing. The issue is that I am getting an error saying:
class already defined (line 5)
because the import of the CustomLevel type clashes with the definition of the CustomLevel class, like so:
from ..typings import (
    CustomLevel
)

class CustomLevel():  # <--- errors here since 'CustomLevel' is already defined
    def __init__(self) -> None:
        pass

    def somefunc(self) -> CustomLevel:
        pass
I am actually using an existing library as a sort of "guide". What I mean is that I am reading the library's code, because I think it's pretty good code, and using some of the features I think will be helpful in my program (like typings). In their library they do exactly what I am doing: they import a type from their typing file, and then create a class of the same name. In theirs they don't get an error, though.
The library is this. It happens in a lot of the Python scripts, but a short one is comment.py. At the top of the file, Comment is imported from gd.typing. Not much further down, Comment is created as a class. Like I said, this library has no issues.
In their typing.py file the only extra bit is:
def __forward_ref_repr__(self: ref) -> str:
    return str(self.__forward_arg__)

setattr(ref, "__repr__", __forward_ref_repr__)
I have tried adding this but it doesn't make any difference. I am using VSCode with pylint, which could very well be the cause, but at the moment I have other errors which won't let me run the script to test.
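For what it's worth, the usual way to avoid this kind of clash is not to import a forward reference into level.py at all: a class can refer to its own name in annotations as a quoted string (or via from __future__ import annotations), which is resolved lazily. A minimal sketch:

```python
class CustomLevel:
    def __init__(self) -> None:
        pass

    def somefunc(self) -> "CustomLevel":
        # the quoted annotation is resolved lazily, so no CustomLevel
        # import is needed and nothing clashes with this definition
        return self
```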
This question already has answers here:
Short description of the scoping rules?
(9 answers)
Closed 1 year ago.
Context: I'm writing a translator from one Python API to another, both in Python 3.5+. I load the file to be translated with a class named FileLoader, described by FileLoader.py. This file loader allows me to transfer the file's content to the other classes doing the translation job.
All of the .py files describing each class are in the same folder.
I tried two different ways to import my FileLoader module inside the other modules containing the translation classes. One seems to work, but the other doesn't, and I don't understand why.
Here are two code examples illustrating both ways:
The working way
import FileLoader

class Parser:

    def __init__(self, fileLoader):
        if isinstance(fileLoader, FileLoader.FileLoader):
            self._fileLoader = fileLoader
        else:
            # raise a nice exception
            raise TypeError("expected a FileLoader instance")
The crashing way
class Parser:
    import FileLoader

    def __init__(self, fileLoader):
        if isinstance(fileLoader, FileLoader.FileLoader):  # NameError here
            self._fileLoader = fileLoader
        else:
            # raise a nice exception
            raise TypeError("expected a FileLoader instance")
I thought doing the import inside the class's scope (the only scope where FileLoader is used) would be enough, since it would know how to relate to the FileLoader module and its contents. I'm obviously wrong, since it's the first way that worked.
What am I missing about scopes in Python? Or is it about something different?
Two things: this won't work, and there is no benefit to doing it this way.
First, why not?
class Parser:
    # this assigns to the Parser namespace; to refer to it
    # within a method you need to use `self.FileLoader` or
    # `Parser.FileLoader`
    import FileLoader

    # `FileLoader` works fine here, under the Parser indentation
    # (in its namespace, but outside of any method)
    copy_of_FileLoader = FileLoader

    def __init__(self, fileLoader):
        # you need to refer to modules in the Parser namespace
        # through `self`, just like you would with any other
        # class or instance variable 👇
        if isinstance(fileLoader, self.FileLoader.FileLoader):
            self._fileLoader = fileLoader
        else:
            # raise a nice exception
            raise TypeError("expected a FileLoader instance")

    # works here again, since we are outside of a method,
    # in `Parser` scope/indent.
    copy2_of_FileLoader = FileLoader
Second, it's not Pythonic and it doesn't help.
The custom in the Python community is to put import FileLoader at the top of the program. Since it seems to be one of your own modules, it would go after standard library imports and after third-party module imports. You would not put it under a class declaration.
Unless... you had a good (probably bad, actually) reason to.
My own code, and this doesn't reflect all that well on me, sometimes has stuff like:
class MainManager(batchhelper.BatchManager):
    ....

    def _load(self, *args, **kwargs):
        👉 from pssystem.models import NotificationConfig
So, after stating this wasn't a good thing, why am I doing it?
Well, there are some specific circumstances to my code here. This is a batch, command-line script, usable within a Django context, and it uses some Django ORM models. In order for those to be usable, Django needs to be imported first and then set up. But that often happens too early in the context of these types of batch programs, and I get circular import errors, with Django complaining that it hasn't been initialized yet.
The solution? Defer execution until the method is called, when all the other modules have been imported and Django has been set up elsewhere.
NotificationConfig is now available, but only within that method, as it is a local variable in it. It works, but... it's really not great practice.
Remember: anything in the global scope gets executed at module load time, anything under a class body at module load time, anything within method/function bodies when the method/function is called.
# happens at module load time; you could get circular import errors
import X1

class DoImportsLater:

    # happens at module load time; you could get circular import errors
    import X2

    def _load(self, *args, **kwargs):
        # only happens when this method is called, if ever,
        # so you shouldn't be seeing circular imports
        import X3
import X1 is standard practice, Pythonic.
import X2, what you are doing, is not, and it doesn't help.
import X3, what I did, is a hack and is covering up circular import references. But it "fixes" the issue.
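The class-namespace behaviour described above is easy to check directly; here is a minimal sketch using the standard json module in place of FileLoader:

```python
import types

class Holder:
    # an import in a class body binds the module as a class attribute
    import json

# the module is reachable through the class (or through self in a method),
# but it is not bound as a module-level global
print(type(Holder.json) is types.ModuleType)  # True
print("json" in globals())  # False
```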
I am trying to use typing in Python 3.8 and I am a bit stuck.
Example: I am mainly developing in main.py. I also have a util.py that contains some helper functions and classes. But these classes also need to import classes from main.py for typing. Now, when I want to use functions from util.py in main.py I also need to import it, but then I'll get an error because of circular importing (and rightly so).
Is there a way around this?
Thanks in advance!
Circular imports are not an immediate error in Python; they're only an error if you use them in a particular way. I believe you're looking for forward references.
Example main.py:

import util

class SomeClass:
    pass

Example util.py:

import main

# We can't use main.SomeClass directly in the type signature because of the
# cycle, but we can forward-reference it, which the type system understands.
def make_some_class() -> "main.SomeClass":
    return main.SomeClass()
Sylvio's answer is correct, but I want to address the "just for typing" part of your title.
Importing classes just for typing in python?
If the only reason you are importing a class is to address some typing consideration, you can use the typing.TYPE_CHECKING constant to make that import conditional.
Quoting that documentation:
Runtime or type checking?
Sometimes there's code that must be seen by a type checker (or other static analysis tools) but should not be executed. For such situations the typing module defines a constant, TYPE_CHECKING, that is considered True during type checking (or other static analysis) but False at runtime. Example:
import typing

if typing.TYPE_CHECKING:
    import expensive_mod

def a_func(arg: 'expensive_mod.SomeClass') -> None:
    a_var = arg  # type: expensive_mod.SomeClass
    ...
(Note that the type annotation must be enclosed in quotes, making it a "forward reference", to hide the expensive_mod reference from the interpreter runtime. In the # type comment no quotes are needed.)
The bottom line is that you may very well not have to import it at all, except for when you are type checking.
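Applied to the question's setup, here is a minimal, self-contained sketch (decimal stands in for the module you only need for annotations):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import decimal  # seen by the type checker, never imported at runtime

def to_float(x: "decimal.Decimal") -> float:
    # the quoted annotation keeps the interpreter from resolving `decimal`
    return float(x)

print(to_float(3))  # 3.0
```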
I think the best way to prevent that is to move every class into its own file and import them where you need them.
Declaring multiple classes in a single file is not good for code maintenance and can make big applications harder to debug.
And I don't think you can avoid circular imports the way you describe it, if every class depends on each other.
I am making a tiny framework for games with pygame, in which I wish to implement basic code to quickly start new projects. It will be a module; whoever uses it should just create a folder with subfolders for sprite classes, maps, levels, etc.
My question is: how should my framework module load these client modules? I was considering designing it so the developer could just pass the names of the directories to the main object, like:
game = Game()
game.scenarios = 'scenarios'
Then game would append 'scenarios' to sys.path and use __import__(). I've tested this and it works.
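That approach could be sketched roughly like this (directory and module names here are hypothetical, and importlib.import_module stands in for the raw __import__ call):

```python
import sys
import importlib

def load_client_module(directory, module_name):
    # make the client's folder importable, then import the module by name
    if directory not in sys.path:
        sys.path.append(directory)
    return importlib.import_module(module_name)
```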
But then I researched a little more to see if there was already some autoloader in Python, so I could avoid rewriting it, and I found this question: Python modules autoloader?
Basically, an autoloader is not recommended in Python, since "explicit is better than implicit" and "readability counts".
In that case, I think, I should compel the user of my module to manually import each of his/her modules, and pass these to the game instance, like:
from framework import Game
import scenarios
# many other imports

game = Game()
game.scenarios = scenarios
# so many other game.whatever = whatever
But this doesn't look good to me, and isn't very comfortable. See, I am used to working with PHP, and I love the way it works with its autoloader.
So, does the first example have some probability of crashing or causing trouble, or is it just not 'pythonic'?
Note: this is NOT a web application.
I wouldn't consider letting a library import things from my current path or module good style. Instead I would only expect a library to import from two places:
Absolute imports from the global module space, like things you have installed using pip. If a library does this, the dependency must also be listed in its install_requires=[] list.
Relative imports from inside itself. Nowadays these are explicitly imported from .:
from . import bla
from .bla import blubb
This means that passing an object or module local to my current scope must always happen explicitly:
from . import scenarios
import framework
scenarios.sprites # attribute exists
game = framework.Game(scenarios=scenarios)
This allows you to do things like mock the scenarios module:
import types
import framework
# a SimpleNamespace looks like a module, as they both have attributes
scenarios = types.SimpleNamespace(sprites='a', textures='b')
scenarios.sprites # attribute exists
game = framework.Game(scenarios=scenarios)
Also, you can implement a framework.utils.Scenario() class that implements a certain interface to provide sprites, maps, etc. The reason: sprites and maps are usually saved in separate files, and what you absolutely do not want to do is look at the scenario module's __file__ attribute and start guessing around in its files. Instead, implement a method that provides a unified interface to them.
class Scenario():
    def __init__(self):
        ...

    def sprites(self):
        # optionally load files from some default location;
        # if no such default location exists, raise NotImplementedError
        ...
And your user-specific scenarios will derive from it and optionally override the loading methods:
import framework.utils

class Scenario(framework.utils.Scenario):
    def __init__(self):
        ...

    def sprites(self):
        # this method *must* load files from its location;
        # accessing __file__ is OK here
        ...
What you can also do is have framework ship its own framework.contrib.scenarios module that is used when no scenarios= keyword argument was given (e.g. for a square default map and some colorful default textures):

from . import contrib

class Game():
    def __init__(self, ..., scenarios=None, ...):
        if scenarios is None:
            scenarios = contrib.scenarios
        self.scenarios = scenarios
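A runnable miniature of this default-injection pattern, with a SimpleNamespace standing in for the bundled contrib module:

```python
import types

# stand-in for framework.contrib.scenarios
default_scenarios = types.SimpleNamespace(sprites="default sprites")

class Game:
    def __init__(self, scenarios=None):
        # fall back to the bundled default when nothing is injected
        if scenarios is None:
            scenarios = default_scenarios
        self.scenarios = scenarios

print(Game().scenarios.sprites)  # default sprites

custom = types.SimpleNamespace(sprites="custom sprites")
print(Game(custom).scenarios.sprites)  # custom sprites
```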
Let's assume that we have a system of modules that exists only in production. At testing time these modules do not exist, but I would still like to write tests for the code that uses them. Let's also assume that I know how to mock all the necessary objects from those modules. The question is: how do I conveniently add module stubs into the current hierarchy?
Here is a small example. The functionality I want to test is placed in a file called actual.py:
actual.py:
def coolfunc():
    from level1.level2.level3_1 import thing1
    from level1.level2.level3_2 import thing2
    do_something(thing1)
    do_something_else(thing2)
In my test suite I already have everything I need: I have thing1_mock and thing2_mock. Also I have a testing function. What I need is to add level1.level2... into the current module system. Like this:
tests.py
import sys
import actual

class SomeTestCase(TestCase):
    thing1_mock = mock1()
    thing2_mock = mock2()

    def setUp(self):
        sys.modules['level1'] = what should I do here?

    @patch('level1.level2.level3_1.thing1', thing1_mock)
    @patch('level1.level2.level3_2.thing2', thing2_mock)
    def test_some_case(self):
        actual.coolfunc()
I know that I can substitute sys.modules['level1'] with an object containing another object, and so on. But that seems like a lot of code to me. I assume there must be a much simpler and prettier solution; I just cannot find it.
So, since no one helped me with my problem, I decided to solve it by myself. Here is a micro-lib called surrogate which allows one to create stubs for non-existing modules.
The lib can be used with mock like this:
from surrogate import surrogate
from mock import patch

@surrogate('this.module.doesnt.exist')
@patch('this.module.doesnt.exist', whatever)
def test_something():
    from this.module.doesnt import exist
    do_something()
First, the @surrogate decorator creates stubs for the non-existing modules, then the @patch decorator can alter them. Just as with @patch, @surrogate decorators can be used "in plural", thus stubbing more than one module path. All stubs exist only for the lifetime of the decorated function.
If anyone gets any use out of this lib, that would be great :)
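If you'd rather avoid an extra dependency, the same effect can be had by planting stub module objects in sys.modules by hand; this is a minimal sketch using unittest.mock.patch.dict (module and attribute names follow the question's example):

```python
import sys
import types
from unittest import mock

# build stub modules for the whole dotted path
level3 = types.ModuleType("level1.level2.level3_1")
level3.thing1 = "stubbed thing1"
level2 = types.ModuleType("level1.level2")
level2.level3_1 = level3
level1 = types.ModuleType("level1")
level1.level2 = level2

# patch.dict restores sys.modules when the block exits
with mock.patch.dict(sys.modules, {
    "level1": level1,
    "level1.level2": level2,
    "level1.level2.level3_1": level3,
}):
    from level1.level2.level3_1 import thing1
    print(thing1)  # stubbed thing1
```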
I'm not sure if there's a neat way of dealing with it; it just makes sense to me visually to lay out each object/class into its own module under a common package.
For instance:
/Settings/
/Settings/__init__.py
/Settings/AbstractSetting.py
/Settings/Float.py
/Settings/String.py
Each class inside every module has the same name as the module, and at the moment I keep doing this:
import Settings
mysetting = Settings.Float.Float()
..which is giving me these double "Float" names.
I could do, in the __init__.py of the package:
from .Float import Float
..so that I could then do:
import Settings
mysetting = Settings.Float()
But I'd like this package to update dynamically to whatever modules I put inside it, so that the next day, when I've added "Knob.py" to this package, I could do:
import Settings
myknob = Settings.Knob()
Makes sense?
But again, I haven't worked with packages before and am still trying to wrap my head around them and make this as easy as possible. At this point, I've found it easier having all classes inside one big master module, which is getting increasingly cumbersome.
Maybe packages isn't the way to go? What alternatives do I have?
Thanks a bunch.
EDIT: The main reason I want to do this is to let users write their own modules that will integrate with the rest of the application. A native "plugin" architecture, if you will.
Each module will contain a class inheriting from a superclass with default values. The app then has a browser of available modules that, when clicked, displays relevant information found under the module's attributes. Each contained class then has a similar interface which the application can use.
I did some further reading and apparently this is not the way to go. I'd love to hear your ideas, though, on what the benefits/disadvantages of this approach could be.
You should be aware that this is not the Python way. "One class per file" is a Java philosophy that does not apply in the Python world. We usually name modules in lowercase and stick related classes into the same file (in your example, all of the classes would go into settings.py or would be explicitly imported from there). But I guess the fact that you want users to provide plugins is a legitimate reason for your approach (immdbg does it the same way, I think).
So, if you really want to do this, you could put something like this into your Settings/__init__.py:
import os
import glob
import imp

for f in glob.glob(os.path.join(os.path.dirname(__file__), '*.py')):
    modname = os.path.basename(f)[:-3]
    if modname.startswith('__'):
        continue
    mod = imp.load_source(modname, f)
    globals()[modname] = getattr(mod, modname)

    # or if you just want to import everything (even worse):
    # for name in dir(mod):
    #     if name.startswith('__'): continue
    #     globals()[name] = getattr(mod, name)
Can you feel how the Python developers don't want you to do this? :)
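As an aside, the imp module used above is deprecated (and removed in Python 3.12); the same trick with importlib would look roughly like this sketch, with the directory passed in explicitly instead of derived from __file__:

```python
import glob
import importlib.util
import os

def load_plugin_classes(package_dir):
    """Load each Name.py in package_dir and pull out its Name class."""
    loaded = {}
    for f in glob.glob(os.path.join(package_dir, "*.py")):
        modname = os.path.basename(f)[:-3]
        if modname.startswith("__"):
            continue
        spec = importlib.util.spec_from_file_location(modname, f)
        mod = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(mod)
        loaded[modname] = getattr(mod, modname)
    return loaded
```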
There are many plugin systems. This is exemplified by the name of one such system, yapsy (yet another plugin system).
You could create an object that provides necessary interface:
class Settings(object):
    def __getattr__(self, attr):
        return load_plugin(attr)

settings = Settings()
In your code:
from settings import settings
knob = settings.Knob()
You can use whatever implementation you like for load_plugin(), e.g., for the code from the question:
from importlib import import_module

def load_plugin(name):
    m = import_module('Settings.' + name)
    return getattr(m, name)
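A self-contained sketch of the same lazy-attribute pattern, with a plain dict of factories standing in for the import_module-based loader (the plugin names here are made up):

```python
class Settings:
    # hypothetical registry standing in for dynamically imported plugins
    _registry = {"Knob": lambda: "a knob", "Float": lambda: 3.14}

    def __getattr__(self, attr):
        # __getattr__ is called only when normal attribute lookup fails,
        # so already-bound attributes are never re-resolved
        try:
            return self._registry[attr]
        except KeyError:
            raise AttributeError(attr)

settings = Settings()
print(settings.Knob())  # a knob
```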