I'm very new to Python and am learning how packages and modules work, but I've run into a snag. Initially I created a very simple package to be deployed as a Lambda function. I have a file in the root directory called lambda.py which contains a handler function, and I put most of my business logic in a separate file: I created a subdirectory - for this example let's say it's called testme - and, more specifically, put the logic in the __init__.py underneath that subdirectory. Initially this worked great. Inside my lambda.py file I could use this import statement:
from testme import TestThing # TestThing is the name of the class
However, now that the code is growing, I've been splitting things into multiple files. As a result, my code no longer runs; I get the following error:
TypeError: 'module' object is not callable
Here's a simplified version of what my code looks like now, to illustrate the problem. What am I missing? What can I do to make these modules "callable"?
/lambda.py:
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from testme import TestThing
def handler(event, context):
    abc = TestThing(event.get('value'))
    abc.show_value()

if __name__ == '__main__':
    handler({'value': 5}, None)
/testme/__init__.py:
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
__project__ = "testme"
__version__ = "0.1.0"
__description__ = "Test MCVE"
__url__ = "https://stackoverflow.com"
__author__ = "soapergem"
__all__ = ["TestThing"]
/testme/TestThing.py:
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
class TestThing:
    def __init__(self, value):
        self.value = value

    def show_value(self):
        print 'The value is %s' % self.value
Like I said, the reason I'm doing all this is because the real-world example has enough code that I want to split it into multiple files inside the subdirectory. And so I left an __init__.py file there just to serve as an index, essentially. But I am very unsure of what the best practices are for package structure, or how to get this working.
You either have to import your class in your __init__ file:
testme/__init__.py:
from .TestThing import TestThing
or import it using the full path:
lambda.py:
from testme.TestThing import TestThing
When you use an __init__.py file, you create a package, and this package (named after the root directory, e.g. testme) may include submodules. Those are accessible via the package.module syntax, but the contents of the submodules are only visible in the root package if you explicitly import them there.
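Putting the first option together with the original file, here is a minimal sketch of what /testme/__init__.py could look like (keeping the metadata from the question):

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
from .TestThing import TestThing  # re-export the class so "from testme import TestThing" yields the class

__project__ = "testme"
__version__ = "0.1.0"
__all__ = ["TestThing"]

With that import in place, from testme import TestThing in lambda.py resolves to the class instead of to the testme.TestThing submodule, which is what caused the "'module' object is not callable" error.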
Below is the folder structure of my software:

run.py
modules/
    module_01/
        aa.py
        libraries/
            qq.py
            zz.py

Below is the code of all the .py files:
run.py:
import modules.module_01.aa as a
a.test()
# test:
if __name__=="__main__":
pass
aa.py (module 1):
import libraries.qq as q
import libraries.zz as z
def test():
    q.qq_fun()
    z.zz_fun()
    print("ciao")
qq.py (library used by aa.py):
def qq_fun():
    pass
zz.py (library used by aa.py):
def zz_fun():
    pass
My question is really simple: when I run run.py, Python raises an import error. Why can't aa.py import qq.py and zz.py, and how can I fix this issue?
run.py
In run.py, the Python interpreter is trying to resolve module_01.aa inside a package named modules, which it can't find here. To import aa.py, you'll need to add this code to the top of your file, which adds the directory aa.py is in to the system path, and change your import statement to import aa as a.
import sys
sys.path.insert(0, "./modules/module_01/")
aa.py
The same problem occurs in aa.py. To fix it, you'll need to add this code to the top of aa.py, which adds the directory qq.py and zz.py are in to the system path, and remove the libraries. prefix from both of your import statements.
import sys
sys.path.insert(0, "./modules/module_01/libraries")
I am developing a small new project which needs to run on Windows and Linux. To explain my problem I will use 3 files.
parser/__init__.py
from .toto import Parser as TotoParser
parser/toto.py
class Variable(object):
    def __str__(self):
        return "totoVariable"


class Parser(object):
    @staticmethod
    def parse(data):
        return Variable()
main.py
#!/usr/bin/env python3
from parser import TotoParser
def main():
    print(TotoParser.parse(""))


if __name__ == '__main__':
    main()
In this project I create several modules (files) in different packages (directories). The thing is, I need to change the names of the imported modules; to do that I use aliasing in the __init__.py files.
My project runs perfectly on Linux, but when I tried it on Windows this problem occurred:
ImportError: cannot import name 'TotoParser'
Sorry for my English, I am learning it...
Please rename init.py to __init__.py; I believe that will work. In case it is already named __init__.py, ignore this answer...
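Not from the answer above, just a hedged debugging suggestion: if the file is already named correctly, it can help to check which module Python actually resolves for the name parser on Windows, since a package can be shadowed by another module with the same name:

import parser
print(parser)  # the repr shows whether this is your parser/__init__.py or some other module

If the repr does not point at your project's parser/__init__.py, then from parser import TotoParser will fail even though your package itself is fine.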
I'm working on creating my first cookiecutter. By and large, this has gone well, but I now want to add a jinja2 filter of my own.
In line with the comments in this issue, I've created a new Jinja2 extension much like the one here. Full code for this extension is here:
https://github.com/seclinch/sigchiproceedings-cookiecutter/commit/5a314fa7207fa8ab7b4024564cec8bb1e1629cad#diff-f4acf470acf9ef37395ef389c12f8613
However, the following simple example demonstrates the same error:
# -*- coding: utf-8 -*-
from jinja2.ext import Extension
def slug(value):
    return value


class PaperTitleExtension(Extension):
    def __init__(self, environment):
        super(PaperTitleExtension, self).__init__(environment)
        environment.filters['slug'] = slug
I've dropped this code into a new jinja2_extensions directory and added a simple __init__.py as follows:
# -*- coding: utf-8 -*-
from paper_title import PaperTitleExtension
__all__ = ['PaperTitleExtension']
Based on this piece of documentation I've also added the following to my cookiecutter.json file:
"_extensions": ["jinja2_extensions.PaperTitleExtension"]
However, running this generates the following error:
$ cookiecutter sigchiproceedings-cookiecutter
Unable to load extension: No module named 'jinja2_extensions'
I'm guessing that I'm missing some step here, can anyone help?
Had the same issue; try executing with the python3 -m option.
My extension in extensions/json_loads.py
import json
from jinja2.ext import Extension
def json_loads(value):
    return json.loads(value)


class JsonLoadsExtension(Extension):
    def __init__(self, environment):
        super(JsonLoadsExtension, self).__init__(environment)
        environment.filters['json_loads'] = json_loads
cookiecutter.json
{
    "service_name": "test",
    "service_slug": "{{ cookiecutter.service_name.replace('-', '_') }}",
    ...
    "_extensions": ["extensions.json_loads.JsonLoadsExtension"]
}
Then I executed with python3 -m cookiecutter . no_input=True timestamp="123" extra_dict="{\"features\": [\"redis\", \"grpc_client\"]}" -f and it works fine.
I ran into a similar error earlier.
Unable to load extension: No module named 'cookiecutter_repo_extensions'
The problem was that, in my case, there was a dependency on 'cookiecutter-repo-extension' which I had not installed in my virtual environment.
The directory containing your extension needs to be on your PYTHONPATH.
https://github.com/cookiecutter/cookiecutter/issues/1211#issuecomment-522226155
A PR to improve the docs would be appreciated 📖 ✍️ 🙏
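As a concrete example (a hedged sketch, assuming the jinja2_extensions/ directory sits directly under the directory you run the command from), you can prepend that directory to PYTHONPATH for a single invocation on a Unix-like shell:

PYTHONPATH=. cookiecutter sigchiproceedings-cookiecutter

This is also why the python3 -m cookiecutter invocation above works: python -m puts the current working directory at the front of sys.path, so an extensions package in that directory becomes importable.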
I use pytest in my .travis.yml to check my code.
I would like to check the README.rst, too.
I found readme_renderer via this Stack Overflow answer.
Now I ask myself how to integrate this into my current tests.
The docs of readme_renderer suggest this, but I have no clue how to integrate it into my setup:
python setup.py check -r -s
I think the simplest and most robust option is to write a pytest plugin that replicates what the distutils command you mentioned in your answer does.
That could be as simple as a conftest.py in your test dir. Or, if you want a standalone plugin that's distributable for all of us to benefit from, there's a nice cookiecutter template.
Of course, there's inherently nothing wrong with calling the check manually in your script section after the call to pytest.
I check it like this now:
# -*- coding: utf-8 -*-
from __future__ import absolute_import, division, unicode_literals, print_function
import os
import subx
import unittest
class Test(unittest.TestCase):

    def test_readme_rst_valid(self):
        base_dir = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))
        subx.call(cmd=['python', os.path.join(base_dir, 'setup.py'), 'check', '--metadata', '--restructuredtext', '--strict'])
Source: https://github.com/guettli/reprec/blob/master/reprec/tests/test_setup.py
So I implemented something, but it does require some modifications. You need to modify your setup.py as below:
from distutils.core import setup
setup_info = dict(
    name='so1',
    version='',
    packages=[''],
    url='',
    license='',
    author='tarun.lalwani',
    author_email='',
    description=''
)

if __name__ == "__main__":
    setup(**setup_info)
Then you need to create a symlink so we can import this package in the test
ln -s setup.py setup_mypackage.py
And then you can create a test like below
# -*- coding: utf-8 -*-
from __future__ import absolute_import, division, unicode_literals, print_function
import os
import unittest
from distutils.command.check import check
from distutils.dist import Distribution
import setup_mypackage
class Test(unittest.TestCase):

    def test_readme_rst_valid(self):
        dist = Distribution(setup_mypackage.setup_info)
        test = check(dist)
        test.ensure_finalized()
        test.metadata = True
        test.strict = True
        test.restructuredtext = True

        global issues
        issues = []

        def my_warn(msg):
            global issues
            issues += [msg]

        test.warn = my_warn
        test.check_metadata()
        test.check_restructuredtext()

        if len(issues) > 0:
            assert len(issues) == 0, "\n".join(issues)
Running the test, I then get:
...
AssertionError: missing required meta-data: version, url
missing meta-data: if 'author' supplied, 'author_email' must be supplied too
Ran 1 test in 0.067s
FAILED (failures=1)
This is one possible workaround that I can think of
Upvoted because checking readme consistency is a nice thing I never integrated into my own projects. Will do from now on!
I think your approach of calling the check command is fine, although it will check more than the readme's markup. check will validate the complete metadata of your package, including the readme if you have readme_renderer installed.
If you want to write a unit test that does only markup check and nothing else, I'd go with an explicit call of readme_renderer.rst.render:
import pathlib
from readme_renderer.rst import render
def test_markup_is_generated():
    readme = pathlib.Path('README.rst')
    assert render(readme.read_text()) is not None
The None check is the most basic test: if render returns None, it means that the readme contains errors preventing it from being translated to HTML. If you want more fine-grained tests, work with the HTML string returned. For example, I expect the word "extensions" in my readme to be emphasized:
import pathlib
import bs4
from readme_renderer.rst import render
def test_extensions_is_emphasized():
    readme = pathlib.Path('README.rst')
    html = render(readme.read_text())
    soup = bs4.BeautifulSoup(html)
    assert soup.find_all('em', string='extensions')
Edit: If you want to see the printed warnings, use the optional stream argument:
from io import StringIO
def test_markup_is_generated():
    warnings = StringIO()
    with open('README.rst') as f:
        html = render(f.read(), stream=warnings)
    warnings.seek(0)
    assert html is not None, warnings.read()
Sample output:
tests/test_readme.py::test_markup_is_generated FAILED
================ FAILURES ================
________ test_markup_is_generated ________
    def test_markup_is_generated():
        warnings = StringIO()
        with open('README.rst') as f:
            html = render(f.read(), stream=warnings)
        warnings.seek(0)
>       assert html is not None, warnings.read()
E AssertionError: <string>:54: (WARNING/2) Title overline too short.
E
E ----
E fffffff
E ----
E
E assert None is not None
tests/test_readme.py:10: AssertionError
======== 1 failed in 0.26 seconds ========
I've been trying to import some python classes which are defined in a child directory. The directory structure is as follows:
workspace/
    __init__.py
    main.py
    checker/
        __init__.py
        baseChecker.py
        gChecker.py
The baseChecker.py looks similar to:
import urllib
class BaseChecker(object):
    # SOME METHODS HERE
The gChecker.py file:
import baseChecker # should import baseChecker.py
class GChecker(BaseChecker):  # gives a TypeError: Error when calling the metaclass bases
    # SOME METHODS WHICH USE URLLIB
And finally the main.py file:
import ?????
gChecker = GChecker()
gChecker.someStuff() # which uses urllib
My intention is to be able to run the main.py file and instantiate the classes under the checker/ directory. But I would like to avoid importing urllib in each file (if it is possible).
Note that both the __init__.py are empty files.
I have already tried calling from checker.gChecker import GChecker in main.py, but an ImportError: No module named checker.gChecker shows up.
In the posted code, in gChecker.py, you need to do
from baseChecker import BaseChecker
instead of import baseChecker
Otherwise you get
NameError: name 'BaseChecker' is not defined
Also, with the mentioned folder structure, you don't need the checker module to be on the PYTHONPATH in order for it to be visible to main.py.
Then in main.py you can do:
from checker.gChecker import GChecker
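Putting it together, a minimal sketch of main.py under that layout (assuming it is run from the workspace/ directory so the checker package is importable):

from checker.gChecker import GChecker

gChecker = GChecker()
gChecker.someStuff()  # urllib only needs to be imported in the module that actually uses it

One caveat: on Python 3 the plain from baseChecker import BaseChecker inside gChecker.py would need to become a relative import (from .baseChecker import BaseChecker), because implicit relative imports were removed.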