Recommendations about making a modular design - python

I want future additions to my current project to be super easy: ideally I can just drop a module into a package and have it imported and wired up automatically.
Right now, I have the following code. Is there a better way to accomplish what I want? I also have a question about pkgutil.walk_packages: it returns modules that are not in the "selected" package (parent.package.path). Any tips on how to fix that?
import pkgutil
import importlib

sub_modules = []
for importer, modname, ispkg in pkgutil.walk_packages('parent.package.path'):
    if 'parent.package.path.' not in str(modname):
        continue
    sub_modules.append(str(modname))

for m in sub_modules:
    i = importlib.import_module(m)
    i.handle_command()
My goal is to add things to parent.package.path and have them activated by running the handle_command() method in each module.
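On the walk_packages question itself, a sketch of the usual fix (the helper name iter_submodules is mine, assuming parent.package.path is importable): walk_packages() expects a list of filesystem paths such as the package's __path__, not a dotted-name string, which is why unrelated modules show up. Passing __path__ plus a prefix restricts the walk to that package and yields fully qualified names:

```python
import importlib
import pkgutil

def iter_submodules(package_name):
    """Yield the full dotted names of every module under package_name.

    walk_packages() takes a list of filesystem paths (the package's
    __path__), not a dotted module name; passing a plain string makes
    it walk unrelated locations, which is why modules outside the
    package appeared in the results.
    """
    package = importlib.import_module(package_name)
    for _, modname, _ in pkgutil.walk_packages(
        package.__path__, prefix=package_name + "."
    ):
        yield modname
```

For example, iter_submodules("logging") yields names like logging.config and logging.handlers, and the string-membership filter from the original code becomes unnecessary.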

Related

Is there a way of unit testing what imports are used?

Is there a way of unit testing what modules are imported in a Python file (a bit like ArchUnit in Java)? The context is in implementing a hexagonal architecture and wanting to ensure that the domain model does not import any code that resides in an adapter. I'd like unit tests to fail if there are forbidden imports.
For example, I might like to test that no modules within foo.bar.domain import anything from foo.bar.adapter. Imports of foo.bar.domain should be allowed from within foo.bar.adapter.
Is this possible in Python and what's the best way of achieving this?
You can use the -Ximporttime Python flag to trace imports. I'm not entirely sure what would be the logic for finding forbidden imports in your case, but here's a silly example script that might help:
import subprocess
import sys

process = subprocess.run(
    ('python3', '-Ximporttime', '-c', 'import shlex'),
    stdout=subprocess.DEVNULL,
    stderr=subprocess.PIPE,
    encoding='utf-8',
)

blacklisted_imports = {'enum', 're', 'zipfile'}
data = [
    x.rpartition('|')[2].strip() for x in process.stderr.split('\n')
]
for import_ in data:
    if import_ in blacklisted_imports:
        print('found bad import:', import_)
Output:
found bad import: enum
found bad import: re
I'm not aware of an existing testing tool for this specific case, but someone may know more. One thing that comes to mind is wrapping calls to the other module's functions in try/except to check whether they are reachable. Another, hackier approach would be to add a custom string constant to the global namespace of each module; if it exists at run time, you know the submodule imported the other module.
There is more on this in a related Stack Overflow post.
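A static alternative to tracing imports at run time is to parse the source with ast and flag any import under a forbidden prefix; here's a sketch, where the foo.bar.adapter prefix comes from the question and the helper names are mine:

```python
import ast

FORBIDDEN_PREFIX = "foo.bar.adapter"  # the adapter package from the question

def imported_names(source):
    """Collect every module name imported by a piece of Python source."""
    names = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names.update(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            names.add(node.module)
    return names

def forbidden_imports(source, prefix=FORBIDDEN_PREFIX):
    """Return the imports that fall inside the forbidden package."""
    return sorted(
        name for name in imported_names(source)
        if name == prefix or name.startswith(prefix + ".")
    )
```

A unit test could then read each file under foo.bar.domain and assert that forbidden_imports() returns an empty list, failing the build on any domain-to-adapter dependency.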

Mark unused imported items in Python

In Python, if you need to import a module for its side effects, you can't use something like autoflake to automatically remove unused imports. There's a fairly clean workaround that I described here (I didn't come up with it, but I don't remember where I saw it):
import something
assert something, "Something is imported for its side effects."
But what if you are importing something so that it can be re-exported? In other words:
# api.py
from internal_details import version
# main.py
import api
print(api.version)
Unfortunately Python is not well enough designed to have an export statement, so there's no way for autoflake to know that you imported version just to expose it to other modules.
What's the cleanest way to mark version as unused? Note that you can't do the same thing as with the module import:
assert version, "Version is part of the exported API."
That won't work (exercise for the reader!).
The best I can come up with is:
assert True or version, "Version is part of the exported API."
But it's a bit ugly. Is there a better solution?
To be clear, I don't want to use lint-disabling comments like # noqa unless there is one that is de facto supported by everyone. Autoflake was just an example; Pylint also does this analysis. There's probably more.
How about making your intention clearer by assigning internal_details.version to a local variable in api?
# api.py
import internal_details
version = internal_details.version
Then
# main.py
import api
print(api.version)
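A related convention worth knowing, which I believe mypy and some linters recognize as an explicit re-export (mypy enforces it under --no-implicit-reexport): repeat the name with a redundant alias. In the question's code that would be `from internal_details import version as version`; the sketch below uses a stdlib name instead so it actually runs:

```python
# The redundant alias "as python_version" repeats the name on purpose.
# Tools that understand this convention read it as "imported in order
# to be re-exported", not as an unused import to be removed.
from platform import python_version as python_version

print(python_version())
```

This keeps the intent in the import line itself, with no extra assert needed.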
The autoflake tool can ignore certain imports that you are not using directly.
Append # noqa to the relevant import line. For example:
from .endpoints import role, token, user, utils # noqa

How to allow excluding a module when importing my Python package?

I wrote a Python package pack that can perform a set of related tasks, taskA, taskB, ..., taskZ. Each of them has its own module file, e.g. taskN.py.
Now say in taskN.py I import a third party package evilpack. It Works On My Machine™, but a coworker of mine (a) can't install evilpack, and (b) does not even need that module taskN.
My goal is to structure my package so that we can choose at import time whether we want to load the module taskN or ignore it.
What's the most elegant way to solve this? I'm sensing it has something to do with the directories' __init__.py files.
A simple way to solve this problem:
Identify all of the modules that may have unfulfilled dependencies.
In the main module that does the importing, surround each such import with a try...except clause:
try:
    import packN
except ImportError as details:
    print("Could not import packN because of this error:", details)
    print("Functionality xxxx will not be available")
    packN = None
If your colleague's code doesn't call a function that relies on packN then all will be well.
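The same pattern can live in the package's __init__.py, which the question hints at. Here's a sketch with a hypothetical helper (optional_import is my name for it; in pack/__init__.py you would wrap `from . import taskN` in the same try/except):

```python
import importlib

def optional_import(name):
    """Import a module by name, returning None if it isn't available."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return None

# A module that exists everywhere, and one that presumably doesn't:
json_mod = optional_import("json")
missing = optional_import("no_such_module_xyz")
```

Callers can then guard with something like `if pack.taskN is not None:` before touching the optional functionality.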
I think I can only point you in the right direction via the setup script, because I don't have access to your package details.
Simply put, you would locate your package's setup.py script and remove taskN from the modules it lists for installation.

How to import custom module the same way as pip-installed modules?

I feel really dumb asking this question, but it's a quirk of python I've put up with for awhile now that I finally want to fix.
On CentOS 7, given that I have "roflmao.py" and "__init__.py" in the directory:
/usr/lib/python2.7/site-packages/roflmao
Why is it that when I'm using the python interpreter (and not in the directory containing roflmao.py), I must type:
from roflmao import roflmao
Instead of simply:
import roflmao
To gain access to "roflmao.py"'s functions and variables? I can import re, collections, requests, or any PIP-installed module just fine, but not my own custom one.
How can I set things up to accomplish this?
Put from roflmao import * into __init__.py.
If you do this, though, there isn't much point in keeping a separate roflmao.py, since from roflmao import roflmao becomes redundant. It's cleaner to just move the code from roflmao.py into __init__.py.
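To see the fix end to end, here's a self-contained sketch that rebuilds the question's layout in a temporary directory (note that on Python 3 the star import in __init__.py needs the leading dot; the greet function is just a stand-in):

```python
import os
import sys
import tempfile

# Recreate the layout: a package directory "roflmao" containing
# roflmao.py (the real code) and an __init__.py that re-exports it.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "roflmao")
os.mkdir(pkg)
with open(os.path.join(pkg, "roflmao.py"), "w") as f:
    f.write("def greet():\n    return 'hi'\n")
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from .roflmao import *\n")

sys.path.insert(0, root)
import roflmao  # a plain "import roflmao" now works

print(roflmao.greet())  # the submodule's function is a top-level name
```

The __init__.py re-export is what makes the package behave like the pip-installed modules in the question.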

Minimizing Python Modules

Suppose that I have a Python module A in my libraries directory. How can I export the same library as module B, but automatically keep only the functions and code paths I need? For instance, I want to build a minimal version of OpenCV for Python for a laser-tracking project, and I am sure that the code I actually need is much less than the whole OpenCV library. I don't want to do this by hand, so is there any kind of application that automates this procedure?
EDIT: I saw the comments and noticed you are struggling with a space issue. In that case the code below is not what you are looking for; I think you will have to write the minimized version of the target module by hand. Otherwise you would have to implement some kind of tool that automatically finds all dependencies of the functions you want to use.
You have to choose if it's worth the effort.
class Importer:
    def __init__(self, module_name, function_list):
        self.__module = __import__(module_name)
        self.__specific = function_list
        self.initialize()

    def initialize(self):
        for elem in self.__specific:
            setattr(self, elem, getattr(self.__module, elem))
Let's say you only want to export the function listdir from the os module. Then in your B module you can write:
os_minimized = Importer("os", ("listdir",))
And then in other module you write:
from B import os_minimized
print(os_minimized.listdir("."))
