I have this code in a Python file:
from dec import my_decorator
import asyncio

@my_decorator
async def simple_method(parent):  # , x, plc_name, var_name):
    print("Henlo from simple_method\nparent:{}".format(parent))
    return

@my_decorator
async def other_simple_meth(bar, value):
    print("Henlo from other_simple_meth:\t Val:{}".format(value))
    return

async def main():
    print("Start Module-Export")
    open('module_functions.py', 'a').close()
    # Write all decorated functions to module_functions.py
    print("Functions in module_functions.py exported")
    while True:
        await asyncio.sleep(2)
        print("z...z...Z...")
My goal is to write all decorated functions (including their import dependencies) into a second module file (here "module_functions.py"). My module_functions.py file should look like this:
from dec import my_decorator
import asyncio

@my_decorator
async def simple_method(parent):  # , x, plc_name, var_name):
    print("Henlo from simple_method\nparent:{}".format(parent))
    return

@my_decorator
async def other_simple_meth(bar, value):
    print("Henlo from other_simple_meth:\t Val:{}".format(value))
    return
I know how to get references and names of a function, but not how to "copy/paste" the function code (including decorators and all dependencies) into a separate file. Is this even possible?
EDIT: I know that pickle and dill exist, but they may not fulfill the goal. The problem is that someone else may not know the order of the dumped file, and loading it back may/will cause problems. It also does not seem possible to edit such loaded functions afterwards.
I found a (not ideal, but OK) solution for my problems.

I) Find and write functions, coroutines etc. into a file (works):

Like @MisterMiyagi suspected, the inspect module is a good way to go. For the common stuff, it is possible to get the code with inspect.getsource() and write it into a file:
import inspect

# List of wanted stuff
func_list = [simple_method, meth_with_input, meth_with_input_and_output, func_myself]

with open('module_functions.py', 'a') as module_file:
    for func in func_list:
        try:
            module_file.write(inspect.getsource(func))
            module_file.write("\n")
        except (TypeError, OSError):
            print("Error :( ")
II) But what about decorated stuff (seems to work)?

I) will not work for decorated stuff; it is just ignored, without an exception being thrown. What helps here is from functools import wraps. In many examples the @wraps decorator is added inside the decorator itself (see the sketch at the end of this section). This was not possible for me, but there is a good workaround:
@wraps(lambda: simple_method)  # <--- add wraps-decorator here
@my_decorator
async def simple_method(parent):  # , x, plc_name, var_name):
    print("Henlo from simple_method\nparent:{}".format(parent))
    return
wraps can be placed above the originally decorated method/class/function, and it seems to behave like I want. Now we can add simple_method to the func_list of I).
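For comparison, the usual placement mentioned above puts wraps inside the decorator itself. A minimal sketch of that pattern (my_decorator stands in for the question's decorator; the wrapper body is invented):

from functools import wraps

def my_decorator(func):
    @wraps(func)  # copies __name__, __doc__ and sets __wrapped__ on the wrapper
    async def wrapper(*args, **kwargs):
        return await func(*args, **kwargs)
    return wrapper

Because @wraps sets __wrapped__, inspect.getsource() should be able to unwrap the decorated function and find the original source.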
III) What about the imports?
Well, it seems to be quite tricky/impossible to actually read the dependencies of a function. My workaround is to drop all wanted imports into a class (sigh), as sketched below. This class can be thrown into the func_list of I) and is written into the file.
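A minimal sketch of that workaround (the class name and the chosen imports are placeholders): imports executed in a class body are part of the class's source, so the inspect.getsource() loop from I) reproduces them verbatim:

import inspect

class ModuleImports:
    # imports live in the class body so that getsource() picks them up
    import asyncio
    from os.path import basename

print(inspect.getsource(ModuleImports))  # contains both import lines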
EDIT:
There is a cleaner way, which may also work, after some modification, with I) and II). The magic module is ast.

I have overridden the following:
import ast

class ImportVisitor(ast.NodeVisitor):
    """Pick out the import nodes by overriding visit_<classname>;
    the node class names are listed in
    https://docs.python.org/3.6/library/ast.html#abstract-grammar"""

    def __init__(self, target):
        super().__init__()
        self.file_target = target

    def visit_Import(self, node):
        "Write plain import statements (like `import ast`) into file_target."
        stmt = 'import ' + ', '.join(alias.name for alias in node.names)
        self.file_target.write(stmt + "\n")

    def visit_ImportFrom(self, node):
        "Write from-import statements (like `from os.path import basename`) into file_target."
        stmt = 'from ' + node.module + ' import ' + ', '.join(alias.name for alias in node.names)
        self.file_target.write(stmt + "\n")
Now I can parse my own script and fill module_file with all the import and from ... import statements the visitor finds while walking the tree:
from os.path import basename

with open('module_functions.py', 'a') as module_file:
    with open(basename(__file__), "rb") as f:
        tree = ast.parse(f.read(), basename(__file__))
        visitor = ImportVisitor(module_file)
        visitor.visit(tree)
        module_file.write("\n\n")
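For the script at the top of this question, the visitor would write something like this into module_functions.py (that script contains only those two imports):

from dec import my_decorator
import asyncio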
Related
I am maintaining a python package in which I did some restructuring. Now, I want to support clients who still do from my_package.old_subpackage.foo import Foo instead of the new from my_package.new_subpackage.foo import Foo, without explicitly reintroducing many files that do the forwarding. (old_subpackage still exists, but no longer contains foo.py.)
I have learned that there are "loaders" and "finders", and my impression was that I should implement a loader for my purpose, but I only managed to implement a finder so far:
import importlib.util
import sys

RENAMED_PACKAGES = {
    'my_package.old_subpackage.foo': 'my_package.new_subpackage.foo',
}

# TODO: ideally, we would not just implement a "finder", but also a "loader"
# (using the importlib.util.module_for_loader decorator); this would enable us
# to get module contents that also pass identity checks
class RenamedFinder:
    @classmethod
    def find_spec(cls, fullname, path, target=None):
        renamed = RENAMED_PACKAGES.get(fullname)
        if renamed is not None:
            sys.stderr.write(
                f'WARNING: {fullname} was renamed to {renamed}; '
                f'please adapt import accordingly!\n')
            return importlib.util.find_spec(renamed)
        return None

sys.meta_path.append(RenamedFinder())
https://docs.python.org/3.5/library/importlib.html#importlib.util.module_for_loader and related functionality, however, seem to be deprecated. I know it's not a very Pythonic thing I am trying to achieve, but I would be glad to learn that it is achievable.
In your package's __init__.py, you can place whatever objects you want into sys.modules; the values you put there will be returned by import statements:
import sys

from . import new_package
from .new_package import module1, module2

sys.modules["my_lib.old_package"] = new_package
sys.modules["my_lib.old_package.module1"] = module1
sys.modules["my_lib.old_package.module2"] = module2
If someone now uses import my_lib.old_package or import my_lib.old_package.module1, they will obtain a reference to my_lib.new_package or my_lib.new_package.module1, respectively. Since the import machinery already finds the keys in the sys.modules dictionary, it never even begins looking for the old files.
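A quick sanity check of the aliasing (names as in the snippet above):

import sys
import my_lib.old_package.module1  # resolved entirely from sys.modules

# both keys point at the same module object, so identity checks pass
assert (sys.modules["my_lib.old_package.module1"]
        is sys.modules["my_lib.new_package.module1"])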
If you want to avoid importing all the submodules immediately, you can emulate a bit of lazy loading by placing a module with a __getattr__ in sys.modules:
from types import ModuleType
import importlib
import sys

class LazyModule(ModuleType):
    def __init__(self, name, mod_name):
        super().__init__(name)
        self.__mod_name = mod_name  # fully qualified name of the module to load lazily

    def __getattr__(self, attr):
        if "_lazy_module" not in self.__dict__:
            self._lazy_module = importlib.import_module(self.__mod_name)
        return getattr(self._lazy_module, attr)

sys.modules["my_lib.old_package"] = LazyModule("my_lib.old_package", "my_lib.new_package")
In the __init__ file of the old module, have it import from the newer modules:
Old (package.oldpkg):
foo = __import__("Path to new module")
New (package.newpkg):
class foo:
    bar = "thing"
so that package.oldpkg.foo.bar is the same as package.newpkg.foo.bar.
Hope this helps!
I think that this is what you are looking for:
import importlib
import importlib.util
import sys

RENAMED_PACKAGES = {
    'my_package.old_subpackage.foo': 'my_package.new_subpackage.foo',
}

class RenamedFinder:
    @classmethod
    def find_spec(cls, fullname, path, target=None):
        renamed = RENAMED_PACKAGES.get(fullname)
        if renamed is not None:
            sys.stderr.write(
                f'WARNING: {fullname} was renamed to {renamed}; '
                f'please adapt import accordingly!\n')
            spec = importlib.util.find_spec(renamed)
            spec.loader = cls
            return spec
        return None

    @staticmethod
    def create_module(spec):
        return importlib.import_module(spec.name)

    @staticmethod
    def exec_module(module):
        pass

sys.meta_path.append(RenamedFinder())
Still, IMO the approach that manipulates sys.modules is preferable: it is more readable, more explicit, and gives you much more control. It may become useful especially in later versions of your package, when my_package.new_subpackage.foo starts to diverge from my_package.old_subpackage.foo while you still need to provide the old one for backward compatibility; for that reason you might need to preserve the code of both anyway.
Consolidate all the old package names into my_package.
Old packages (old_package):
- image_processing (class): will be deleted and replaced by super_image_processing
- text_recognition (class): will be deleted and replaced by better_text_recognition
- foo (variable): will be moved to better_text_recognition
- still_there (class): will not move

New packages:
- super_image_processing
- better_text_recognition
Redirector (class inside my_package):

class old_package:
    image_processing = super_image_processing  # will be replaced
    text_recognition = better_text_recognition  # will be replaced
Your main new module (my_package):

# imports here
class super_image_processing:
    def its(gets, even, better):
        pass

class better_text_recognition:
    def now(better, than, ever):
        pass

class old_package:
    # links
    image_processing = super_image_processing
    text_recognition = better_text_recognition
    still_there = __import__("path to unchanged module")
This allows you to delete some files and keep the rest. If you also want to redirect variables, you would do:
class super_image_processing:
    def its(gets, even, better):
        pass

class better_text_recognition:
    def now(better, than, ever):
        pass

class old_package:
    # links
    image_processing = super_image_processing
    text_recognition = better_text_recognition
    foo = text_recognition.foo
    still_there = __import__("path to unchanged module")
Would this work?
util.py:
import inspect

class Singleton(type):
    _instances = {}
    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        return cls._instances[cls]

class MetaResult(Singleton):
    def __getattribute__(cls, name):
        return super().__getattribute__(name)

class Result(metaclass=MetaResult):
    @staticmethod
    def res_decorator(func):
        def funcwrap(*args, **kwargs):
            sig = inspect.signature(func)
            bound_sig = sig.bind(*args, **kwargs)
            bound_sig.apply_defaults()
            # additional code to extract function arguments
            return func(*args, **kwargs)
        return funcwrap
check_params.py:

from abc import ABCMeta as _ABCMeta
from util import Result as _Result

class paramparse(metaclass=_ABCMeta):
    @classmethod
    @_Result.res_decorator
    def parse_flash_params(cls, flash_config_path):
        # some code
        pass
Now, I cythonize the file check_params.py with the following setup:
cythonize.py
import os as _os
from pathlib import Path as _Path
from distutils.core import setup as _setup
from Cython.Distutils import build_ext as _build_ext

files_to_compile = []

def cython_build(source_path):
    for dirpath, _, fnames in _os.walk(source_path):
        for fname in [x for x in fnames if x.endswith('.py')]:
            fname = _Path(dirpath) / fname
            files_to_compile.append(fname)
    for e in files_to_compile:
        e.cython_directives = {'binding': True, 'language_level': 3}
    _setup(name="Proj1", cmdclass={'build_ext': _build_ext}, ext_modules=files_to_compile)
cythonized as:
python cythonize.py --path C:\directory_where_check_params_exist
This generates a .pyd file, against which the following unit tests were then run:
unit_test_check_params.py
from check_params import *  # getting error here, details below the code

# unit tests written here
check_params.pyx:112: in init check_params
???
E   TypeError: Class-level classmethod() can only be called on a method_descriptor or instance method.
When I debug this, the error appears to be caused by the classmethod descriptor over the decorator (def parse_flash_params) in check_params.py.
Please let me know if you need more information.
This still isn't a hugely helpful example, since the code you provide still doesn't actually work. However:
The case
@classmethod
@_Result.res_decorator
is definitely a Cython bug. In the function __Pyx_Method_ClassMethod, Cython has a lot of type checks to ensure that the type is a method (a function defined in a class), while actually it only needs to be callable, and this should really only be checked at call time. As a quick workaround you can edit the relevant internal Cython file (CythonFunction.c) to replace the lines
PyErr_SetString(PyExc_TypeError,
                "Class-level classmethod() can only be called on "
                "a method_descriptor or instance method.");
return NULL;
with
return PyClassMethod_New(method);
This seems to me closer to what Python does, which accepts any object and only checks for callability when the function is actually called.
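For comparison, a quick illustration of the plain CPython behavior (no Cython involved): classmethod happily wraps a non-callable, and the failure only surfaces at call time:

class C:
    m = classmethod("not callable")  # class creation succeeds

try:
    C.m()  # the callability check effectively happens only here
except TypeError as exc:
    print(exc)  # 'str' object is not callable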
In the longer term you should report it as a bug in Cython, with an example that actually works to demonstrate the problem; that way it can actually be fixed. I don't think you need Result and the staticmethod bit: res_decorator as an isolated function should demonstrate the problem.
The second possible order
@_Result.res_decorator
@classmethod
doesn't work in un-Cythonized Python either, since the direct result of a classmethod decorator isn't callable. It only becomes callable when it becomes a bound method, which happens later. Therefore this isn't a bug in Cython.
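A quick plain-Python check of that claim:

def g(cls):
    return cls

cm = classmethod(g)
# cm()           # TypeError: 'classmethod' object is not callable
class D:
    h = cm       # but as a class attribute it becomes a bound method on access
print(D.h() is D)  # True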
Final addendum:

A cleaner workaround is to force Cython to use the builtin classmethod instead of its own version that's causing the bug:
try:
    myclassmethod = __builtins__.classmethod
except AttributeError:
    myclassmethod = __builtins__['classmethod']

class paramparse(metaclass=_ABCMeta):
    @myclassmethod
    @_Result.res_decorator
    def parse_flash_params(cls, flash_config_path):
        pass
The try ... except block is because __builtins__ behaves slightly differently in Cython and in a Python module, which is fine because it's an implementation detail anyway.
I readily admit to going a bit overboard with unit testing.
While I have passing tests, I find my solution to be inelegant, and I'm curious if anyone has a cleaner solution.
The class being tested:
class Config():
    def __init__(self):
        config_parser = ConfigParser()
        try:
            self._read_config_file(config_parser)
        except FileNotFoundError:
            pass
        self.token = config_parser.get('Tokens', 'Token')

    @staticmethod
    def _read_config_file(config):
        if not config.read(os.path.abspath(os.path.join(BASE_DIR, ROOT_DIR, CONFIG_FILE))):
            raise FileNotFoundError(f'File {CONFIG_FILE} not found at path {BASE_DIR}{ROOT_DIR}')
The ugly test:
class TestConfiguration(unittest.TestCase):
    @mock.patch('config.os.path.abspath')
    def test_config_init_sets_token(self, mockFilePath: mock.MagicMock):
        with open('mock_file.ini', 'w') as file:  # here's where it gets ugly
            file.write('[Tokens]\nToken: token')
        mockFilePath.return_value = 'mock_file.ini'
        config = Config()
        self.assertEqual(config.token, 'token')
        os.remove('mock_file.ini')  # quite ugly
EDIT: What I mean is that I'm creating a real file instead of mocking one.

Does anyone know how to mock a file object while setting its data, so that it reads as ASCII text? The class is deeply buried.

Other than that, the way ConfigParser sets its data with .read() is throwing me off. Granted, the test "works", but it doesn't do so nicely.
For those asking about other testing behaviors, here's an example of another test in this class:
@mock.patch('config.os.path.abspath')
def test_warning_when_file_not_found(self, mockFilePath: mock.MagicMock):
    mockFilePath.return_value = 'mock_no_file.ini'
    with self.assertRaises(FileNotFoundError):
        config.Config._read_config_file(ConfigParser())
Thank you for your time.
I've found it!
I had to start off with a few imports: from io import TextIOWrapper, BytesIO
This allows a file object to be created: TextIOWrapper(BytesIO(b'<StringContentHere>'))
The next part involved digging into the configparser module to see that it calls open(), in order to mock.patch that behavior. And here we have it: an isolated unit test!
@mock.patch('configparser.open')
def test_bot_init_sets_token(self, mockFileOpen: mock.MagicMock):
    mockFileOpen.return_value = TextIOWrapper(BytesIO(b'[Tokens]\nToken: token'))
    config = Config()
    self.assertEqual(config.token, 'token')
I decided to try to preprocess a function's text before its compilation into byte-code and subsequent execution. This is merely for training; I can hardly imagine a situation where it would be a satisfactory solution. I faced one problem which I wanted to solve this way, but eventually a better way was found. So this is just for training and to learn something new, not for real usage.
Assume we have a function whose source code we want to modify quite a bit before compilation:
def f():
    1;a()
    print('Some statements 1')
    1;a()
    print('Some statements 2')
Let us, for example, mark some of its lines with 1;, so that they can sometimes be commented out and sometimes not. I just take this as an example; the modifications to the function could be anything.
To comment out these lines I made a decorator. The whole code is below:
from __future__ import print_function

def a():
    print('a()')

def comment_1(s):
    lines = s.split('\n')
    return '\n'.join(line.replace(';', '#;', 1) if line.strip().startswith('1;') else line
                     for line in lines)

def remove_1(f):
    import inspect
    source = inspect.getsource(f)
    new_source = comment_1(source)
    with open('temp.py', 'w') as file:
        file.write(new_source)
    from temp import f as f_new
    return f_new

def f():
    1;a()
    print('Some statements 1')
    1;a()
    print('Some statements 2')

f = remove_1(f)  # if the @remove_1 decorator were used above f(), inspect.getsource would include it in the source
f()
I used inspect.getsource to retrieve the code of f. Then I did some text processing (in this case, commenting out the lines starting with 1;). After that I saved the result to the module temp.py, which is then imported, and finally the function f is rebound in the main module.
The output when the decorator is applied is this:
Some statements 1
Some statements 2
and when NOT applied, it is this:
a()
Some statements 1
a()
Some statements 2
What I don't like is that I have to use the hard drive to load the compiled function. Can it be done without writing it to the temporary module temp.py and importing from it?
The second question is about placing the decorator above f as @remove_1. When I do this, inspect.getsource returns f's text including the decorator line. It could be manually deleted from f's text, but that would be quite dangerous, as there may be more than one decorator applied. So I resorted to the old-style decoration syntax f = remove_1(f), which does the job. But still, is it possible to allow the normal decoration technique with @remove_1?
One can avoid creating a temporary file by invoking the exec statement on the source. (You can also explicitly call compile prior to exec if you want additional control over compilation, but exec will do the compilation for you, so it's not necessary.) Correctly calling exec has the additional benefit that the function will work correctly if it accesses global variables from the namespace of its module.
The problem described in the second question can be resolved by temporarily blocking the decorator while it is running. That way the decorator remains, along all the other ones, but is a no-op.
Here is the updated source.
from __future__ import print_function
import sys

def a():
    print('a()')

def comment_1(s):
    lines = s.split('\n')
    return '\n'.join(line.replace(';', '#;', 1) if line.strip().startswith('1;') else line
                     for line in lines)

_blocked = False

def remove_1(f):
    global _blocked
    if _blocked:
        return f
    import inspect
    source = inspect.getsource(f)
    new_source = comment_1(source)
    env = sys.modules[f.__module__].__dict__
    _blocked = True
    try:
        exec new_source in env
    finally:
        _blocked = False
    return env[f.__name__]

@remove_1
def f():
    1;a()
    print('Some statements 1')
    1;a()
    print('Some statements 2')

f()
A variant of remove_1 that avoids mutating the module's namespace by exec-ing in a copy of it (the blocking logic is omitted here for brevity):

def remove_1(f):
    import inspect
    source = inspect.getsource(f)
    new_source = comment_1(source)
    env = sys.modules[f.__module__].__dict__.copy()
    exec new_source in env
    return env[f.__name__]
I'll leave here a modified version of the solution given in the answer by user4815162342. It uses the ast module to delete some parts of f, as was suggested in a comment to the question. To make it, I relied heavily on the information in this article.

This implementation deletes all occurrences of a as a standalone expression.
from __future__ import print_function
import sys
import ast
import inspect

def a():
    print('a() is called')

_blocked = False

def remove_1(f):
    global _blocked
    if _blocked:
        return f
    source = inspect.getsource(f)
    tree = ast.parse(source)  # get the AST of f

    class Transformer(ast.NodeTransformer):
        '''Deletes all statements that are bare calls to the function a.'''
        def visit_Expr(self, node):  # visit all expression statements
            try:
                if node.value.func.id == 'a':  # the expression is a call to a function named a
                    return None  # returning None removes the node
            except AttributeError:
                pass
            return node  # return the node unchanged

    transformer = Transformer()
    tree_new = transformer.visit(tree)
    f_new_compiled = compile(tree_new, '<string>', 'exec')
    env = sys.modules[f.__module__].__dict__
    _blocked = True
    try:
        exec(f_new_compiled, env)
    finally:
        _blocked = False
    return env[f.__name__]

@remove_1
def f():
    a();a()
    print('Some statements 1')
    a()
    print('Some statements 2')

f()
The output is:
Some statements 1
Some statements 2
Is there a way in grep (or vim) to print out a named function/class?
i.e. From:
class InternalTimer(Sim.Process):
    def __init__(self, fsm):
        Sim.Process.__init__(self, name="Timer")
        random.seed()
        self.fsm = fsm

    def Lifecycle(self, Request):
        while True:
            yield Sim.waitevent, self, Request
            yield Sim.hold, self, Request.signalparam[0]
            if self.interrupted():
                self.interruptReset()
            else:
                self.fsm.process(Request.signalparam[1])
Calling $my-func-grep '__init__(self,fsm)' filename.py would produce:
def __init__(self, fsm):
    Sim.Process.__init__(self, name="Timer")
    random.seed()
    self.fsm = fsm
You could create a vim extension which effectively performs the following:
import inspect
print(inspect.getsource(name_of_function))
This prints the function signature and the body of the function. If Vim has been compiled with Python support, you can write extensions in Python itself.
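As a minimal sketch (assuming Vim compiled with +python3 support, and that the file on disk is importable as a module; "filename" and InternalTimer come from the example above), something like this could be run from Vim's command line:

:py3 import importlib, inspect
:py3 print(inspect.getsource(importlib.import_module("filename").InternalTimer.__init__))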