How to write a custom assert in Python - python

I am planning to split my one big test file into smaller test files based on the part of the code each one tests, and I have custom assert functions for some of my tests. If I split them out into a separate file, how should I import them into the other test files?
TestSchemaCase:
class TestSchemaCase(unittest.TestCase):
    """
    This will test our schema against the JSONTransformer output,
    just to make sure the schema matches the model.
    """
    # pylint: disable=too-many-public-methods
    _base_dir = os.path.realpath(os.path.dirname(__file__))

    def assertJSONValidates(self, schema, data):
        """
        This function asserts the validation works as expected.
        Args:
            schema(dict): The schema to test against
            data(dict): The data to validate using the schema
        """
        # pylint: disable=invalid-name
        validator = jsonschema.Draft4Validator(schema)
        self.assertIsNone(validator.validate(data))

    def assertJSONValidateFails(self, schema, data):
        """
        This function asserts that a ValidationError is raised.
        Args:
            schema(dict): The schema to validate from
            data(dict): The data to validate using the schema
        """
        # pylint: disable=invalid-name
        validator = jsonschema.Draft4Validator(schema)
        with self.assertRaises(jsonschema.ValidationError):
            validator.validate(data)
My questions are:
1. When I try to import them, I get an ImportError saying "No module named ...". I am breaking TestValidation up into the smaller files shown below.
2. I know I can raise a ValidationError in assertJSONValidateFails, but what should I return when validation passes?
tests/schema
├── TestSchemaCase.py
├── TestValidation.py
├── __init__.py
└── models
    ├── Fields
    │   ├── TestImplemen.py
    │   ├── TestRes.py
    │   └── __init__.py
    ├── Values
    │   ├── TestInk.py
    │   ├── TestAlue.py
    │   └── __init__.py
    └── __init__.py
3. And is this how we should inherit them?
class TestRes(unittest.TestCase, TestSchemaCase):
Thanks for your time, and sorry for the big post.
I did see the related post, but that doesn't solve the problem.

I would suggest using a test framework that doesn't force you to put your tests in classes, like pytest.
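For example, here is a minimal sketch of that approach. It assumes the helpers live in a new, illustratively named file tests/schema/asserts.py; with plain functions there is nothing to inherit, and every test file just imports what it needs:
# tests/schema/asserts.py -- shared assert helpers as plain functions
import jsonschema
import pytest

def assert_json_validates(schema, data):
    # validate() raises a ValidationError on failure, so calling it
    # is the assertion; it returns None on success.
    jsonschema.Draft4Validator(schema).validate(data)

def assert_json_validate_fails(schema, data):
    # Expect validation to fail with a ValidationError.
    with pytest.raises(jsonschema.ValidationError):
        jsonschema.Draft4Validator(schema).validate(data)
Any test module can then do from tests.schema.asserts import assert_json_validates, with no TestCase subclassing or multiple inheritance involved. This also answers the second question in passing: returning nothing on success is how assert helpers conventionally behave.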

Related

cannot patch class because of __init__.py

I'm writing unit tests for a codebase. My issue is I need to patch a database abstraction class but it is masked by the __init__.py file. My file structure is:
.
├── __init__.py
├── tools
│   ├── __init__.py
│   ├── test
│   │   ├── __init__.py
│   │   └── test_tool1.py
│   ├── tool1.py
│   └── tool2.py
└── utils
    ├── __init__.py
    └── sql_client.py
Now the content of the files:
# tools/__init__.py
from .tool1 import * # `SQLClient` is imported here as part of tool1
from .tool2 import * # `SQLClient` is imported here again but now it's part of tool2
# tools/tool1.py
from utils import SQLClient

class A(object):
    ...
    def run(self, **kwargs):
        # the function I want to test
        sql = SQLClient("some id")
# tools/tool2.py
from utils import SQLClient
...
# utils/__init__.py
from .sql_client import *
# utils/sql_client.py
class SQLClient(object):
    # The SQL abstraction class that needs to be patched
    ...
In my test file, I'm creating a mock class to be used as the patch. I'm using absolute imports in my tests because later I want to move all the tests outside of the source folders.
# tools/test/test_tool1.py
from unittest.mock import MagicMock
from utils import SQLClient
from tools import A

class MockSQLClient(MagicMock):
    def __init__(self, *args, **kwargs):
        super().__init__(spec=SQLClient)
        self._mocks = {"select *": "rows"}

    def make_query(self, query):
        return self._mocks[query]

def test_run_func(monkeypatch):
    monkeypatch.setattr("tools.SQLClient", MockSQLClient)
    a = A()
    a.run()
    # rest of the test
Now, the issue is that tools/__init__.py imports everything from all the submodules, so the SQLClient imported by tool1 is masked by the SQLClient imported by tool2. As a result, my monkeypatch is patching tool2's SQLClient. I cannot patch the tool1 copy directly with monkeypatch.setattr("tools.tool1.SQLClient", ...) because of tools/__init__.py, which throws an error complaining there is no tool1 in tools.
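A minimal sketch of the name-lookup behaviour at play here (illustrative, not from the original post): rebinding the package attribute does not touch the reference that tool1 captured at import time, and that captured reference is what A.run() actually uses:
import tools
import tools.tool1

# This is effectively what monkeypatch.setattr("tools.SQLClient", ...) does:
tools.SQLClient = object()

# tool1's module globals still hold the original class, and run()
# resolves SQLClient there, so the patch above never reaches it.
assert tools.tool1.SQLClient is not tools.SQLClient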
EDIT
changed question title, more detail on where the test is.

Python Sphinx imported method organization within a class

I have a Class that is importing many methods from submodules. Currently Sphinx is setup to organize 'bysource' so they're at least sorting by the order in which the submodules are imported.
What I would like though, is some kind of header or searchable text for the title of the file they come from.
Current Directory Structure:
my_project
├── setup.py
└── my_package
   ├── __init__.py # Classes for subpackage_1 and subpackage_2 and imports occur here
   ├── subpackage_1
   │   ├── __init__.py
   │   ├── _submodule_a.py
   │   └── _submodule_b.py
   └── subpackage_2
       ├── __init__.py
       ├── _submodule_c.py
       └── _submodule_d.py
Sphinx module rst file:
Module contents
---------------

.. automodule:: my_package
   :members:
   :undoc-members:
   :show-inheritance:
   :member-order: bysource
In the my_package __init__.py there are the parent classes defined, where all the submodules/methods are imported into their related class.
class MyClass_1():
    ...
    from .subpackage_1._submodule_a import method_a
    from .subpackage_1._submodule_b import method_b, method_c

class MyClass_2():
    ...
    from .subpackage_2._submodule_c import method_d, method_e
    from .subpackage_2._submodule_d import method_f
In the resulting Sphinx documentation I see the methods under each class, but I would like to be able to see which submodule file each method was sourced from. It doesn't have to be a subsection; merely including a header/note etc. would be nice, without having to resort to manually listing all the modules/methods in the Sphinx file.
There are hundreds of functions in the real package, so it would be helpful for the user to be able to quickly discern which submodule file a method came from when viewing the documentation.
Current Sphinx Output:
Class MyClass_1...
* method_a
* method_b
* method_c
Class MyClass_2...
* method_d
* method_e
* method_f
Desired Sphinx Output:
Class MyClass_1...
submodule_a
* method_a
submodule_b
* method_b
* method_c
Class MyClass_2...
submodule_c
* method_d
* method_e
submodule_d
* method_f
Worst case, I could put the submodule filename in the docstring of each method within the file, but I would love it if someone has figured this out in a more automated fashion.
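One semi-automated possibility (a hedged sketch, untested against the real package; stamp_source_module is a made-up helper, not an existing API) is to prefix each imported method's docstring with its defining module at class-creation time, so the module name shows up in whatever Sphinx renders:
def stamp_source_module(cls):
    # Prefix each public method's docstring with the module that
    # defined it (e.g. "my_package.subpackage_1._submodule_a").
    for name, attr in vars(cls).items():
        if callable(attr) and not name.startswith('_'):
            module = getattr(attr, '__module__', '')
            attr.__doc__ = 'Defined in ``%s``.\n\n%s' % (module, attr.__doc__ or '')
    return cls

@stamp_source_module
class MyClass_1():
    from .subpackage_1._submodule_a import method_a
    from .subpackage_1._submodule_b import method_b, method_c
This keeps the information in one place (the import site) rather than in hundreds of individual docstrings.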

Airflow on Docker - Path issue

Working with airflow I try simple DAG work.
I wrote custom operators and other files that I want to import into the main file where the DAG logic is.
Here the folder's structure :
├── airflow.cfg
├── dags
│   ├── __init__.py
│   ├── dag.py
│   └── sql_statements.sql
├── docker-compose.yaml
├── environment.yml
└── plugins
    ├── __init__.py
    └── operators
        ├── __init__.py
        ├── facts_calculator.py
        ├── has_rows.py
        └── s3_to_redshift.py
I set up the volumes correctly in the compose file, since I can see the files when I log into the container's terminal.
I followed some tutorials online, from which I added some __init__.py files.
The two non-empty __init__.py files are:
/plugins/operators:
from operators.facts_calculator import FactsCalculatorOperator
from operators.has_rows import HasRowsOperator
from operators.s3_to_redshift import S3ToRedshiftOperator

__all__ = [
    'FactsCalculatorOperator',
    'HasRowsOperator',
    'S3ToRedshiftOperator'
]
/plugins:
from airflow.plugins_manager import AirflowPlugin
import operators

# Defining the plugin class
class CustomPlugin(AirflowPlugin):
    name = "custom_plugin"
    # A list of class(es) derived from BaseOperator
    operators = [
        operators.FactsCalculatorOperator,
        operators.HasRowsOperator,
        operators.S3ToRedshiftOperator
    ]
    # A list of class(es) derived from BaseHook
    hooks = []
    # A list of class(es) derived from BaseExecutor
    executors = []
    # A list of references to inject into the macros namespace
    macros = []
    # A list of objects created from a class derived
    # from flask_admin.BaseView
    admin_views = []
    # A list of Blueprint objects created from flask.Blueprint
    flask_blueprints = []
    # A list of menu links (flask_admin.base.MenuLink)
    menu_links = []
But I keep getting errors from my IDE (saying "No module named operators" or "Unresolved reference operators" inside the operators' __init__.py), and everything fails to launch on the webserver.
Any idea how to set this up? Where am I wrong?
Are you using puckel's image?
If you are, you need to uncomment the # - ./plugins:/usr/local/airflow/plugins line (there may be more than one) in the docker-compose files (either Local or Celery). The rest of your setup looks fine to me.
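After uncommenting, the volumes section of the affected service should look roughly like this (a sketch from memory of puckel's compose files, not verbatim):
volumes:
    - ./dags:/usr/local/airflow/dags
    - ./plugins:/usr/local/airflow/plugins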

Using flask-script together with template-filter

I have a flask application which uses jinja2 template filters. An example of template filter is as follows:
@app.template_filter('int_date')
def format_datetime(date):
    if date:
        return utc_time.localize(date).astimezone(london_time).strftime('%Y-%m-%d %H:%M')
    else:
        return date
This works fine if we have an app instantiated before the decorator is used; however, if we are using an app factory combined with the Flask-Script manager, we don't have an instantiated app. For example:
def create_my_app(config=None):
    app = Flask(__name__)
    if config:
        app.config.from_pyfile(config)
    return app

manager = Manager(create_my_app)
manager.add_option("-c", "--config", dest="config", required=False)

@manager.command
def mycommand(app):
    app.do_something()
Manager accepts either an instantiated app or an app factory, so at first glance it appears that we can do this:
app = create_my_app()

@app.template_filter('int_date')
....

manager = Manager(app)
The problem with this solution is that the manager then ignores the option, since the app has already been configured during instantiation. So how is someone supposed to use template filters together with the flask-script extension?
This is where blueprints come into play. I would define a blueprint called core and put all my custom template filters in, say, core/filters.py.
To register filters on the application when using blueprints, you need to use app_template_filter instead of template_filter. This way you can still use the decorator pattern to register filters and keep the application-factory approach.
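A minimal sketch of core/filters.py under this approach, reusing the filter from the question (assuming pytz provides the utc_time/london_time objects the question references):
# core/filters.py
from flask import Blueprint
import pytz

core = Blueprint('core', __name__)
utc_time = pytz.utc
london_time = pytz.timezone('Europe/London')

@core.app_template_filter('int_date')
def format_datetime(date):
    # Available in every template once the blueprint is registered
    # on the app inside create_app().
    if date:
        return utc_time.localize(date).astimezone(london_time).strftime('%Y-%m-%d %H:%M')
    return date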
A typical directory layout for an application using blueprint might look something like:
├── app
│   ├── blog
│   │   ├── __init__.py # blog blueprint instance
│   │   └── routes.py # core filters can be used here
│   ├── core
│   │   ├── __init__.py # core blueprint instance
│   │   ├── filters.py # define filters here
│   │   └── routes.py # any core views are defined here
│   └── __init__.py # create_app is defined here & blueprint registered
└── manage.py # application is configured and created here
For a minimal working example of this approach see: https://github.com/iiSeymour/app_factory
The solution can be found here, which states there are two ways one can define a Jinja template filter. Thus, instead of defining a decorator outside the factory, one can modify the jinja_env instead. This can be done in the app factory, for example:
def format_datetime(date):
    if date:
        return utc_time.localize(date).astimezone(london_time).strftime('%Y-%m-%d %H:%M')
    else:
        return date

def create_app(production=False):
    app = Flask(__name__)
    ....
    # Register Jinja2 filters
    app.jinja_env.filters['datetime'] = format_datetime
    return app

manager = Manager(create_app)
...
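Either way, templates can then apply the filter to a datetime value, e.g. {{ created_at|datetime }} for the jinja_env registration above, or {{ created_at|int_date }} for the decorator-registered variant (created_at being an illustrative variable name).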

Flask teardown request in context of blueprint

I would like to access an sqlite3 database from a Flask application (without using Flask-SQLAlchemy, since I require fts4 functionality). I am using Flask blueprints, and I am not sure where to put the following functions (shamelessly copied from a response to this stackoverflow question):
def request_has_connection():
    return hasattr(flask.g, 'dbconn')

def get_request_connection():
    if not request_has_connection():
        flask.g.dbconn = sqlite3.connect(DATABASE)
        # Do something to make this connection transactional.
        # I'm not familiar enough with SQLite to know what that is.
    return flask.g.dbconn

@app.teardown_request
def close_db_connection(ex):
    if request_has_connection():
        conn = get_request_connection()
        # Rollback
        # Alternatively, you could automatically commit if ex is None
        # and rollback otherwise, but I question the wisdom
        # of automatically committing.
        conn.close()
My file structure is:
app
├── __init__.py
├── main
│   ├── forms.py
│   ├── __init__.py
│   ├── views.py
├── models.py
├── static
└── templates
    ├── base.html
    ├── index.html
    └── login.html
I want the request_has_connection() and get_request_connection() functions accessible from all view functions, and maybe from models.py as well. Right now, I'm thinking they all belong in my blueprint's __init__.py, which currently contains:
from flask import Blueprint

main = Blueprint('main', __name__)

from . import views
and that my request teardown function would be registered as
@main.teardown_request
def close_db_connection(ex):
    <blah-blah-blah>
Is this right?
