Using the @task and submodule convention, my "parent" fabfile imports two submodules ("dev" and "stable", whose tasks are defined in their respective __init__.py files). How do I get a @task in the dev module to invoke a task defined in the parent fabfile? I can't seem to get the imports to work correctly.
I also tried using imp.load_source, but that produced a nasty circular import (fabfile.py imports dev, which tries to import ../fabfile.py).
Using this as an example: http://docs.fabfile.org/en/1.4.3/usage/tasks.html#going-deeper
How would a task defined in lb.py call something in the top-level __init__.py, or a task in migrations.py call something in the top-level __init__.py?
You can invoke a Fabric task by name:
from fabric.api import execute, task

@task
def innertask():
    # "mytask" is looked up by its task name; arg1/kwarg1 stand in for
    # whatever arguments that task takes
    execute("mytask", arg1, key1=kwarg1)
My folder structure (using PyCharm) is:
project
  - testclass.py
  - __test__.py
I am trying to call the test class method from test.py, but I get the exception below:
from .testclass import Scheduler
ImportError: attempted relative import with no known parent package
test.py:
import asyncio
import sys
from .testclass import Scheduler

async def main():
    print("main")
    scheduler = Scheduler()
    scheduler.run()

if __name__ == "__test__":
    main()
    try:
        sys.exit(asyncio.run(main()))
    except:
        print("application exception")
testclass.py:
class Scheduler:
    def __init__(self):
        print("init")

    async def run(self):
        print("run")
How do I resolve this import error? It complains about a relative import. How do I get this working in PyCharm?
Edit: the folder structure has now been changed as suggested, to the layout below.
project_folder
  testpack (-> Python package)
    - __init__.py
    - testclass.py
  - __init__.py
  - __test__.py
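With that layout, and running __test__.py from project_folder, an absolute import should work in place of the relative one; a minimal sketch:
# __test__.py
from testpack.testclass import Scheduler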
I've had similar issues, so I've created an experimental new import library, ultraimport, that allows you to do file-system-based imports to solve your issue.
In your test.py you could then write:
import ultraimport
Scheduler = ultraimport('__dir__/testclass.py', 'Scheduler')
This will always work, no matter what your folder structure is and no matter whether you run the code as a script or as a module; it does not care about sys.path or your current working directory. It's also not necessary to create __init__.py files.
One caveat when importing scripts like this is that they may contain further relative imports of their own. ultraimport has a built-in preprocessor to rewrite such subsequent relative imports to ultraimports so they continue to work, though this is currently somewhat limited, as the original Python imports are ambiguous and there's only so much you can do about it.
I currently have two interesting issues within a Python package I am working on.
Considerations
I am using:
Poetry
Typer
Python 3.8.12
Issue Summary
Using Typer, I need to pass the value of a CLI option into another module for use as a configuration parameter.
I would need this value to be passed before the other upstream modules run, and I would also need to resolve the circular import that doing this would cause.
Issue Diagram
Issue #1 details:
I have the following class and function within the main.py module:
@dataclass
class Common:
    profile_dir: str

@app.callback()
def profile_callback(ctx: typer.Context,
                     profile_dir: str = typer.Option(..., )):
    typer.echo(f"Hello {profile_dir}")
    test_dir = {profile_dir}
    """Common Entry Point"""
    ctx.obj = Common(profile_dir)
    return (test_dir)
The option works well when just echoing the CLI option, but I have issues returning the profile_dir value outside of the CLI. Typer only seems to be able to echo the variable, not store it. I am struggling to reference profile_dir outside of the CLI and outside of the function.
What I would like to be able to do is turn profile_dir into an object that I can reference outside of the function and that I can pass from main.py into config.py.
Issue #2
If I were to resolve issue #1, I would then encounter issue #2. As the CLI commands exist within the module_#_cli.py files and the main.py file, they import from config.py upstream. This would lead me to the following error:
cannot import name 'cli_profile_path' from partially initialized module 'example.main' (most likely due to a circular import)
I'm pretty dumbfounded about how to resolve this, as the package would still need to import config parameters from the config.py module when there is no CLI option. I have struggled to think of a way to pass config parameters from the CLI into the config.py classes, to then be used within the intermediary modules, without the counterintuitive flow shown in the diagram.
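To make the circular dependency concrete, the import pattern behind that error looks roughly like this (the file contents below are an illustrative reconstruction from the error message, and SomeConfig is a placeholder name):
# example/config.py
from example.main import cli_profile_path   # config wants the CLI value

# example/main.py
from example.config import SomeConfig       # main (and the *_cli modules) want the config
# importing either module now triggers the other before it has finished
# initializing, hence the "partially initialized module" error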
Error: While importing "wsgi-contract-test", an ImportError was raised:
Traceback (most recent call last):
File "/Users/karl/Development/tral/bin/tral-env/lib/python3.9/site-packages/flask/cli.py", line 236, in locate_app
__import__(module_name)
File "/Users/karl/Development/tral/test/contract/inject/wsgi-contract-test.py", line 8, in <module>
from . import (
ImportError: attempted relative import with no known parent package
ERROR (run-tral): trap on error (rc=2) near line 121
make: *** [component.mk:114: tral-local-run-api-contract-test] Error 2
wsgi-contract-test.py:
from ...libs.tral import app, setup_from_env
from ...libs.tral import routes
I have the regular source files under the libs/tral directory; however, the entry file for this test is located under test/contract/inject. I do NOT want to move this test file into libs, since it should be nowhere near production code, as it is a rather hazardous file security-wise.
In Node.js this would have worked fine, but there seems to be something about Python imports I'm not grasping.
Since tests don't belong inside the src tree, there are basically two approaches. If you are testing a library you will frequently install it (in a virtualenv) and then just import it exactly the same way you would anywhere else. Frequently you also do this with a fresh install for the test: this has the great advantage that your tests mirror real setups, and helps to catch bugs like forgetting to commit/include files (guilty!) and only noticing once you've deployed...
The other option is to modify sys.path so that your test is effectively in the same place as your production code:
# /test/contract/inject.py
from pathlib import Path
import sys
sys.path.insert(0, str(Path(__file__).parent.parent.parent))
Since sys.path uses strs you may prefer to use os.path, like every single other example on SO. Personally I'm addicted to pathlib. Here:
__file__ is the absolute path of the current file
Path(str) wraps it in a pathlib.Path object
.parent gets you up the tree, and
str() casts back to the format sys.path expects, or it won't work.
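Applied to the layout in the question, the resulting test file might look roughly like this. The number of parent hops is an assumption based on the test/contract/inject path; adjust it so that the inserted directory is the project root containing libs/:
# test/contract/inject/wsgi-contract-test.py (sketch)
import sys
from pathlib import Path

sys.path.insert(0, str(Path(__file__).resolve().parents[3]))  # project root

from libs.tral import app, setup_from_env   # absolute imports instead of relative
from libs.tral import routes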
Using a test runner
Normally we use a test runner for tests. But these tests need a running instance! No problem:
# tests/conftest.py
import sys
from pathlib import Path

import pytest

sys.path.insert(0, str(Path(__file__).parent.parent))
# now pytest can find the src
from app import get_instance  # dummy

@pytest.fixture
def instance():
    instance = get_instance(some_param)
    # maybe
    instance.do_some_setup()
    yield instance
    # maybe
    instance.do_some_cleanup()
If you don't need to do any cleanup, you can just return the instance rather than yielding it. Then in another file (for neatness) you write tests like this:
# tests/test_instance.py
def test_instance(instance):  # note how we requested the fixture
    assert instance.some_method()
And you run your tests with pytest:
pytest tests/
Pytest will run the code in conftest.py, discover all test functions whose names start with test in files whose names start with test, run the tests (supplying all the fixtures you have defined) and report.
Fixture Lifetime
Sometimes spinning up a fixture can be expensive. See the docs on fixture scope for telling pytest to keep the fixture around and supply it to multiple tests.
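For example, scope="session" makes pytest create the fixture once for the whole run and share it across tests (a small sketch reusing the names from the conftest.py above):
@pytest.fixture(scope="session")
def instance():
    instance = get_instance(some_param)
    instance.do_some_setup()
    yield instance
    instance.do_some_cleanup()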
Using a runner vs. just running a script
You don't have to use a runner. But they do have many advantages:
decent metrics e.g. code/branch coverage
everyone uses them
parallel testing against different envs (e.g. python versions)
tests tend to be a lot neater
I took for granted that Python was able to handle simple relative paths; that is not the case. Instead I just added the path to the packages I wanted to include to the PYTHONPATH variable and voilà, it found everything.
export PYTHONPATH="${PYTHONPATH}:$(pwd)/libs"
Then running it from the root project directory.
I had to change the code to the following though:
from tral import app, setup_from_env
from tral import routes
The following code works if the module "user.py" is in the same directory as the code, but fails if it is in a different directory. The error message I get is "ModuleNotFoundError: No module named 'user'".
import multiprocessing as mp
import imp

class test():
    def __init__(self, pool):
        pool.processes = 1
        usermodel = imp.load_source('user', 'D:\\pool\\test\\user.py').userfun
        # file D:\pool\test\user.py looks like this:
        # def userfun():
        #     return 1
        vec = []
        for i in range(10):
            vec.append([usermodel, i])
        pool.map(self.myfunc, vec)

    def myfunc(self, A):
        userfun = A[0]
        i = A[1]
        print(i, userfun())
        return

if __name__ == '__main__':
    pool = mp.Pool()
    test(pool)
If the function myfunc is called without the pooled process, the code is fine regardless of whether user.py is in the same directory as the main code or in \test. Why can't the pooled process find user.py in a separate directory? I have tried different methods, such as modifying my path and then importing user, and importlib, all with the same results.
I am using Windows 7 and Python 3.6.
multiprocessing tries to pretend it's just like threading, but the abstraction leaks like a sieve. One of the ways it leaks is that communicating with worker processes involves a lot of implicit pickling and data copying.
When you try to send usermodel to a worker, multiprocessing implicitly pickles it and tries to have the worker unpickle the pickle. Functions are pickled by recording the module name and function name, so the worker just thinks it's supposed to do from user import userfun to access userfun. It doesn't know that user needs to be loaded with imp.load_source from a specific filesystem location, so it can't reconstruct usermodel.
The way this problem manifests is OS-dependent, because if multiprocessing uses the fork start method, the workers inherit the user module from the master process. fork is the default on Unix, but unavailable on Windows.
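One way to work around this on Windows (a hedged sketch, not something from the original answer) is to make the user module importable by name inside the workers, for example by putting its directory on sys.path in a Pool initializer; then the implicit "from user import userfun" performed during unpickling can succeed, and the rest of the question's code can stay as it is:
import sys
import multiprocessing as mp

def _add_user_dir(directory):
    # runs once in every worker process before it handles any tasks,
    # so "import user" (triggered by unpickling userfun) can find user.py
    sys.path.insert(0, directory)

if __name__ == '__main__':
    pool = mp.Pool(initializer=_add_user_dir,
                   initargs=('D:\\pool\\test',))
    test(pool)  # the test class from the question, unchanged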
If I have a Python module implemented as a directory (i.e. a package) that has both a top-level function run and a submodule run, can I count on from example import run to always import the function? Based on my tests that is the case at least with Python 2.6 and Jython 2.5 on Linux, but can I count on this generally? I tried to search for information about import priorities but couldn't find anything.
Background:
I have a pretty large package that people generally run as a tool from the command line but also sometimes use programmatically. I would like to have simple entry points for both usages and consider to implement them like this:
example/__init__.py:
def run(*args):
    print args  # real application code belongs here
example/run.py:
import sys
from example import run
run(*sys.argv[1:])
The first entry point allows users to access the module from Python like this:
from example import run
run(args)
The latter entry point allows users to execute the module from the command line using both of the approaches below:
python -m example.run args
python path/to/example/run.py args
Both of these work great and cover everything I need. Before taking this into real use, I would like to know whether this is a sound approach that I can expect to work with all Python implementations on all operating systems.
I think this should always work; the function definition will shadow the module.
However, this also strikes me as a dirty hack. The clean way to do this would be
# __init__.py
# re-export run.run as run
from .run import run
i.e., a minimal __init__.py, with all the running logic in run.py:
# run.py
import sys

def run(*args):
    print args  # real application code belongs here

if __name__ == "__main__":
    run(*sys.argv[1:])