I have two scripts:
script1:

from test import script2

if __name__ == '__main__':
    foo()

script2:

def foo():
    print 'hello'
In the structure:
test/
    A/
        B/
            script2.py
    script1.py
I am trying to call the function foo() from script2.py in script1.py.
I received the error:
This inspection detects names that should resolve but don't. Due to
dynamic dispatch and duck typing, this is possible in a limited but
useful number of cases. Top-level and class-level items are supported
better than instance items.
I read a number of similar cases, but they didn't help me:
Call a function from another file in Python
How to import a function from parent folder in python?
Import Script from a Parent Directory
For Python packages to work, you need __init__.py files in the folders. These can be empty.
Then, since you are importing from a subfolder, you need from A.B import script2
At that point, you may use script2.foo()
It is also worth mentioning that Python 3 should be targeted for any new projects.
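To see the fix end to end, here is a self-contained sketch that builds the corrected layout in a temporary directory and runs it; the file contents are assumptions based on the question, updated to Python 3:

```python
import os
import subprocess
import sys
import tempfile

# Build the proposed layout:
# test/
#     A/
#         __init__.py
#         B/
#             __init__.py
#             script2.py
#     script1.py
tmp = tempfile.mkdtemp()
b_dir = os.path.join(tmp, "test", "A", "B")
os.makedirs(b_dir)
for d in (os.path.join(tmp, "test", "A"), b_dir):
    open(os.path.join(d, "__init__.py"), "w").close()  # empty package markers
with open(os.path.join(b_dir, "script2.py"), "w") as f:
    f.write("def foo():\n    print('hello')\n")
with open(os.path.join(tmp, "test", "script1.py"), "w") as f:
    f.write("from A.B import script2\n\n"
            "if __name__ == '__main__':\n"
            "    script2.foo()\n")

# Running script1.py puts the test/ folder on sys.path, so A.B resolves.
result = subprocess.run([sys.executable, "script1.py"],
                        cwd=os.path.join(tmp, "test"),
                        capture_output=True, text=True)
print(result.stdout.strip())
```

Note the call site changes from foo() to script2.foo(), matching the import.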
Related
I recently started working on an existing project.
This project is composed of several git repositories that may use each other in some hierarchy.
All repositories should be cloned to the same root directory.
Here's an example of the structure:
root_dir/
    repo_a/
        module1/
            - a.py
            - b.py
    repo_b/
        module1/
            - c.py
            - d.py
    repo_c/
        module1/
            - e.py
            - f.py
Note that I've written "module1" three times on purpose, as it really is the same name.
Now for an example of a file, let's say a.py:
from module1.b import foo
from module1.d import goo
from module1.f import zoo
def func():
    foo()
    goo()
    zoo()
When trying to run it from the root_dir I'm having trouble, I guess due to the ambiguities and not having relative paths.
Is there a way I can run this project properly without internally changing the code?
Given the constraint of having to stick with the given structure, here are two ways using importlib:
Concise option:
from importlib.machinery import SourceFileLoader

repo_a_module1 = SourceFileLoader("module1", "<YourSystemPath>/repo_a/module1.py").load_module()
repo_b_module1 = SourceFileLoader("module1", "<YourSystemPath>/repo_b/module1.py").load_module()
repo_c_module1 = SourceFileLoader("module1", "<YourSystemPath>/repo_c/module1.py").load_module()

repo_a_module1.foo()
repo_b_module1.goo()
repo_c_module1.zoo()
Another option, but with more code:
import importlib.util

repo_a = importlib.util.spec_from_file_location(
    "module1", "<YourSystemPath>/repo_a/module1.py")
repo_a_module1 = importlib.util.module_from_spec(repo_a)
repo_a.loader.exec_module(repo_a_module1)

repo_b = importlib.util.spec_from_file_location(
    "module1", "<YourSystemPath>/repo_b/module1.py")
repo_b_module1 = importlib.util.module_from_spec(repo_b)
repo_b.loader.exec_module(repo_b_module1)

repo_c = importlib.util.spec_from_file_location(
    "module1", "<YourSystemPath>/repo_c/module1.py")
repo_c_module1 = importlib.util.module_from_spec(repo_c)
repo_c.loader.exec_module(repo_c_module1)

repo_a_module1.foo()
repo_b_module1.goo()
repo_c_module1.zoo()
Note: <YourSystemPath> can be absolute or relative in both options.
Both examples above have a change in the calls of the imported functions (e.g. foo() -> repo_a_module1.foo() ) which is specific and may help when working with the code later on.
If you have to keep the calls you could add further functions in between:
def foo():
    repo_a_module1.foo()

foo()
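As a self-contained illustration of the spec-based option, the following sketch writes a stand-in module file to a temporary directory and loads it by path; the file name and contents are invented for the demo:

```python
import importlib.util
import os
import tempfile

# Create a stand-in for <YourSystemPath>/repo_a/module1.py.
tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "module1.py")
with open(path, "w") as f:
    f.write("def foo():\n    return 'foo from repo_a'\n")

# Load the file under an explicit module name, bypassing sys.path entirely.
spec = importlib.util.spec_from_file_location("module1", path)
repo_a_module1 = importlib.util.module_from_spec(spec)
spec.loader.exec_module(repo_a_module1)

print(repo_a_module1.foo())
```

Because each file is loaded from an explicit path, the three identically named module1 packages can no longer shadow each other.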
The project has the same structure as in the picture: I'm trying to import from "mod.py" in "index.py":
from .. import mod
However, it gives the error: "ImportError: attempted relative import with no known parent package". If you use this option:
from pack1 import mod
Then the error is: "ModuleNotFoundError: No module named 'pack1'".
PROJECT/
    pack1/
        __init__.py
        mod.py
    pack2/
        __init__.py
        index.py
What is the problem?
This is a recurring question on StackOverflow. Much of the confusion (in my opinion) comes from the fact that how Python interprets the files and folders it sees depends on where Python is run from. First, some terminology:
module: a file containing Python code.
package: a folder containing files with Python code and other folders.
When you start Python in a directory (folder), it doesn't "know" what the namespace of that directory should be. I.e., if you are working in Z:\path\to_my\project\ when you start Python:
it does NOT consider project to be a package.
any .py files you want to import from will be in their own namespace as modules.
any folders you want to import from will also be in their own namespace as packages.
What about __init__.py? Since version 3.3, Python has implicit namespace packages, which allows importing without needing to create an empty __init__.py file.
Consider #2: if you have two files: first.py and second.py:
path/
    to_my/
        project/
            >>Python is running here<<
            first.py
            second.py
with these contents:
# first.py
first_var = 'hello'
# second.py
from .first import first_var
second_var = first_var + ' world'
if you try to import like this:
>>> import second
Python basically does the following:
"ok, I see second.py"
"Reading that in as a module, chief!"
"Ok, it wants to import .first
"The . means get the package (folder) that contains first.py"
"Wait, I don't have a parent package for first.py!"
"Better raise an error."
The same rules apply for #3 as well. If we add a few packages to the project like this:
path/
    to_my/
        project/
            >>Python is running here<<
            first.py
            second.py
            pack1/
                mod.py
                other_mod.py
            pack2/
                index.py
with the following contents:
# pack1/mod.py
mod_var = 1234
# pack1/other_mod.py
from .mod import mod_var
other_var = mod_var * 10
# pack2/index.py
from ..pack1 import mod
and when you try to import like this:
>>> from pack2 import index
The import in pack2/index.py is going to fail for the same reason second.py's did. Python will work its way up the import chain of dots like this:
"Reading in index.py as a module."
"Looks like it wants to import mod from ..pack1."
"Ok, . is the pack2 parent package namespace of index.py, found that."
"So, .. is the parent package of pack2."
"But, I don't have a parent package for pack2!"
"Better raise an error."
How do we make it work? Two things.
First, move where Python is running up one level so that all of the .py files and subfolders are considered part of the same package namespace, which allows the files to reference each other using relative references.
path/
    to_my/
        >>Python is running here now<<
        project/
            first.py
            second.py
            pack1/
                mod.py
                other_mod.py
            pack2/
                index.py
So now Python sees project as a package namespace, and all of the files within can use relative references up to that level.
This changes how you import when you are in the Python interpreter:
>>> from project.pack2 import index
Second, you make explicit references instead of relative references. That can make the import statements really long, but if you have several top-level modules that need to pull from one another, this is how you can do it. This is useful when you are defining your functions in one file and writing your script in another.
# first.py
first_var = 'hello'
# second.py
from first import first_var # we dropped the dot
second_var = first_var + ' world'
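The absolute-import version can be verified the same way, with a sketch that writes both files to a temporary directory and starts Python there:

```python
import os
import subprocess
import sys
import tempfile

tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "first.py"), "w") as f:
    f.write("first_var = 'hello'\n")
with open(os.path.join(tmp, "second.py"), "w") as f:
    f.write("from first import first_var\n"
            "second_var = first_var + ' world'\n")

# With the dot dropped, 'first' is found as a top-level module on sys.path.
code = "import second; print(second.second_var)"
result = subprocess.run([sys.executable, "-c", code],
                        cwd=tmp, capture_output=True, text=True)
print(result.stdout.strip())
```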
I hope this helps clear up some of the confusion about relative imports.
I am attempting to unit test some Python 3 code that imports a module. Unfortunately, the way the module is written, simply importing it has unpleasant side effects, which are not important for the tests. I'm trying to use unittest.mock.patch to get around it, but I'm not having much luck.
Here is the structure of an illustrative sample:
.
└── work
├── __init__.py
├── test_work.py
├── work.py
└── work_caller.py
__init__.py is an empty file
work.py
import os

def work_on():
    path = os.getcwd()
    print(f"Working on {path}")
    return path

def unpleasant_side_effect():
    print("I am an unpleasant side effect of importing this module")

# Note that this is called simply by importing this file
unpleasant_side_effect()
work_caller.py
from work.work import work_on

class WorkCaller:
    def call_work(self):
        # Do important stuff that I want to test here
        # This call I don't care about in the test, but it needs to be called
        work_on()
test_work.py
from unittest import TestCase, mock
from work.work_caller import WorkCaller

class TestWorkMockingModule(TestCase):
    def test_workcaller(self):
        with mock.patch("work.work.unpleasant_side_effect") as mocked_function:
            sut = WorkCaller()
            sut.call_work()
In work_caller.py I only want to test the beginning code, not the call to work_on(). When I run the test, I get the following output:
paul-> python -m unittest
I am an unpleasant side effect of importing this module
Working on /Users/paul/src/patch-test
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
I was expecting that the line I am an unpleasant side effect of importing this module would not be printed because the function unpleasant_side_effect would be mocked. Where might I be going wrong?
unpleasant_side_effect is run for two reasons. First, imports are handled before the test case starts, so the function is not yet mocked when the import happens. Second, the mocking itself imports work.py and thus runs unpleasant_side_effect, even if work_caller.py was never imported.
The import problem can be solved by mocking the module work.py itself. This can either be done globally in the test module or in the testcase itself. Here I assigned it a MagicMock, which can be imported, called etc.
test_work.py
from unittest import TestCase, mock

class TestWorkMockingModule(TestCase):
    def test_workcaller(self):
        import sys
        sys.modules['work.work'] = mock.MagicMock()
        from work.work_caller import WorkCaller

        sut = WorkCaller()
        sut.call_work()
The downside is that work_on is also mocked, which I am not sure whether is a problem in your case.
It is not possible to avoid running the entire module when it is imported: function and class definitions are statements too, so the whole module has to finish executing before control returns to the caller, which is the first point at which you could alter the imported module.
In case your question was partly about best practice: you should always split your code into a library that other code can use and the lines that have side effects, and preferably eliminate the side effects by calling the side-effecting code from your def main():. But if you want to keep the side effects anyway, then you could do:
work_lib.py:
...no side-effects...
work.py
from work_lib import ...
...side-effects....
test_work.py
from work_lib import ...
Another solution is to put this line ahead of any code that you don't want to run on import:
if __name__ == "__main__":
If the code is at the highest/outermost level of a module, __name__ will be "__main__" when the file is run directly, and will be the module name when it is imported. So in your example, if you put that line ahead of your call to unpleasant_side_effect(), the function wouldn't get called when the module is imported.
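Applied to the work.py from the question, the guard would look like this (a sketch):

```python
import os

def work_on():
    path = os.getcwd()
    print(f"Working on {path}")
    return path

def unpleasant_side_effect():
    print("I am an unpleasant side effect of importing this module")

# Only runs when the file is executed directly, not when it is imported.
if __name__ == "__main__":
    unpleasant_side_effect()
```

With this guard in place, the test no longer needs to mock the module at all to avoid the side effect on import.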
I'm new to Python and I still can't get my head around why we need a __init__.py file to import modules. I have gone through other questions and answers, such as this.
What confuses me is that I can import my modules without __init__.py, so why do I need it at all?
My example,
index.py
modules/
    hello/
        hello.py
        HelloWorld.py
index.py,
import os
import sys

root = os.path.dirname(__file__)
sys.path.append(root + "/modules/hello")

# IMPORTS MODULES
from hello import hello
from HelloWorld import HelloWorld

def application(environ, start_response):
    results = []
    results.append(hello())
    helloWorld = HelloWorld()
    results.append(helloWorld.sayHello())
    output = "<br/>".join(results)
    response_body = output
    status = '200 OK'
    response_headers = [('Content-Type', 'text/html'),
                        ('Content-Length', str(len(response_body)))]
    start_response(status, response_headers)
    return [response_body]
modules/hello/hello.py,
def hello():
    return 'Hello World from hello.py!'
modules/hello/HelloWorld.py,
# define a class
class HelloWorld:
    def __init__(self):
        self.message = 'Hello World from HelloWorld.py!'

    def sayHello(self):
        return self.message
Result,
Hello World from hello.py!
Hello World from HelloWorld.py!
What it takes is just these two lines,
root = os.path.dirname(__file__)
sys.path.append(root + "/modules/hello")
Without any __init__.py. Can someone explain why it works this way?
If __init__.py is the proper way, what should I do/change in my code?
Based on this link: Since Python 3.3
Allowing implicit namespace packages means that the requirement to provide an __init__.py file can be dropped completely
__init__.py is for packages. A package contains a collection of related modules. If you just have a single module you want to use, you don't need to use __init__.py; just put the single .py file somewhere on the system path and you can import it.
The purpose of packages is not just to allow you to import the modules inside them. It's to group the modules together. The main benefit of this is that, if a module is inside a package, then that module can import other modules from the package using relative imports. If you have foo.py and bar.py in the same package, then foo can just do from . import bar. This makes intra-package imports more compact and easier to reorganize if you restructure the package or change its name.
Also, an obvious benefit is. . . if you make it a package, you don't have to do that sys.path stuff every time you want to import something from it.
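A tiny runnable sketch of that intra-package relative import; the package and module names are invented for the demo:

```python
import os
import subprocess
import sys
import tempfile

# pkg/foo.py imports its sibling module with a relative import.
tmp = tempfile.mkdtemp()
pkg = os.path.join(tmp, "pkg")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "bar.py"), "w") as f:
    f.write("value = 42\n")
with open(os.path.join(pkg, "foo.py"), "w") as f:
    f.write("from . import bar\n")

# Importing pkg.foo works because foo.py lives inside the pkg package.
code = "import pkg.foo; print(pkg.foo.bar.value)"
result = subprocess.run([sys.executable, "-c", code],
                        cwd=tmp, capture_output=True, text=True)
print(result.stdout.strip())
```

If pkg were later renamed or moved, from . import bar would keep working unchanged, which is exactly the reorganization benefit described above.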
I think that this might be due to the Python version you are using. I did some experimentation and found that with the following structure:
jedrzej#jedrzej-UX303LB ~/temp $ tree .
.
├── main.py
└── packages
├── file.py
└── file.pyc
1 directory, 5 files
content of main.py:
import packages.file as p
p.fun()
and content of file.py:
import sys

def fun():
    print(sys.path)
When I execute main.py with Python 2.7.12 I get an ImportError, while executing main.py with Python 3.5.2 simply works.
After adding __init__.py in packages directory, code works with both versions of Python.
Files named __init__.py are used to mark directories on disk as Python package directories. If you have the files
modules/spam/__init__.py
modules/spam/module.py
and modules is in your path, you can import the code in module.py as
import spam.module
or
from spam import module
If you remove the __init__.py file, Python will no longer look for submodules inside that directory, so attempts to import the module will fail.
The __init__.py file is usually empty, but can be used to export selected portions of the package under a more convenient name, hold convenience functions, etc. Given the example above, the contents of the init module can be accessed with
import spam
And finally here is what the official documentation has to say about this file:
The __init__.py files are required to make Python treat the
directories as containing packages; this is done to prevent
directories with a common name, such as string, from
unintentionally hiding valid modules that occur later on the
module search path. In the simplest case, __init__.py can just
be an empty file, but it can also execute initialization code
for the package or set the __all__ variable, described later.
I think this is a good 'answer' for what I didn't understand.
myMath/
    __init__.py
    adv/
        __init__.py
        sqrt.py
        fib.py
    add.py
    subtract.py
    multiply.py
    divide.py
myMath/__init__.py
from add import add
from divide import division
from multiply import multiply
from subtract import subtract
from adv.fib import fibonacci
from adv.sqrt import squareroot
index.py
import sys
sys.path.append('C:\Users\mdriscoll\Documents')
import mymath
print mymath.add(4,5)
print mymath.division(4, 2)
print mymath.multiply(10, 5)
print mymath.fibonacci(8)
print mymath.squareroot(48)
I want to create a Python module that works like NumPy: the methods are not only sub-modules at the leaves of the source tree. There is a root module containing many methods that I can call directly, and there are also sub-modules. The problem is that the root methods must be defined somewhere.
module/
    __init__.py
    core.py
    stuff1.py
    submodule/
        __init__.py
        stuff2.py
        stuff3.py
Now what I want is for everything inside core to be imported into the module namespace, as if it were a module.py file and the contents of core.py were inside this module.py. The problem is that module is a directory instead of a file, so how do I define these methods that should sit at the root of the module?
I tried putting from core import * inside __init__.py, but that didn't work. (EDIT: Actually it does.)
Should I have the core methods inside a "module.py" file, and also a module directory? I don't know if that works, but it looks pretty awkward.
What I think you want is to be able to do this:
# some_other_script.py
import module
# Do things using routines defined in module.core
What happens when you ask Python to import module is (in a very basic sense), module/__init__.py is run, and a module object is created and imported into your namespace. This object (again, very basically) encompasses the things that happened when __init__.py was run: name definitions and so on. These can be accessed through module.something.
Now, if your setup looks like this:
# module/__init__.py
from module.core import Clazz
c = Clazz()
print c # Note: demo only! Module-level side-effects are usually a bad idea!
When you import module, you'll see a print statement like this:
<module.core.Clazz object at 0x00BBAA90>
Great. But if you then try to access c, you'll get a NameError:
# some_other_script.py
import module # prints "<module.core.Clazz object at 0x00BBAA90>"
print c # NameError (c is not defined)
This is because you haven't imported c; you've imported module. If instead your entry-point script looks like this:
# some_other_script.py
import module # prints "<module.core.Clazz object at 0x00BBAA90>"
print module.c # Access c *within module*
Everything will run fine. This will also work fine with from core import * and/or from module import *, but I (and PEP8) advise against that just because it's not very clear what's going on in the script when you start mucking around with wild imports. For clarity:
# module/core.py
def core_func():
    return 1

# module/__init__.py
from core import *

def mod_func():
    return 2
The above is really pretty much fine, although you might as well make core "private" (rename to _core) to indicate that there's no reason to touch it from outside the package anymore.
# some_other_script.py
from module import *
print core_func() # Prints 1
print mod_func() # Prints 2
Check out information about the __all__ list. It allows you to define what names are exported.
Tag it as such and you can set up a function to determine what to pull in from your submodules:

@property
def all(self):
    # Whatever introspective code you may want for your modules
    __all__ += submodule.__all__
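As a minimal runnable sketch of what __all__ actually controls (the module name and its contents are invented for the demo), a star-import only pulls in the names listed in __all__:

```python
import os
import subprocess
import sys
import tempfile

# Write a throwaway module whose __all__ exports only one of two functions.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "mymod.py"), "w") as f:
    f.write("__all__ = ['public_func']\n"
            "def public_func():\n    return 'public'\n"
            "def helper():\n    return 'helper'\n")

# Star-import in a fresh interpreter: public_func is bound, helper is not.
code = ("from mymod import *\n"
        "print('public_func' in dir())\n"
        "print('helper' in dir())\n")
result = subprocess.run([sys.executable, "-c", code],
                        cwd=tmp, capture_output=True, text=True)
print(result.stdout.split())
```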
If you just want the whole damn shabang in module space, here's a way:
$ ipython
In [1]: from foomod import *
In [2]: printbar()
Out[2]: 'Imported from a foreign land'
In [3]: ^D
Do you really want to exit ([y]/n)?
$ ls foomod/
__init__.py __init__.pyc core.py core.pyc submodule
$ grep . foomod/*.py
foomod/__init__.py:from foomod.core import *
foomod/core.py:def printbar():
foomod/core.py: return "Imported from a foreign land"
... and if we make __init__.py empty:
$ echo > foomod/__init__.py
$ ipython
In [1]: from foomod import *
In [2]: printbar()
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-2-ba5b6693441e> in <module>()
----> 1 printbar()
NameError: name 'printbar' is not defined