I've dug myself into quite a hole here.
I'm working on a Python/Kivy app in PyDev.
The app runs off of many systems (about 10), so I shoved them into an engine to handle everything.
For ease of access, I grab the engine via (the worst) singletons:
main.py
# main.py
from kivy.app import App

from code import engine

class MyApp(App):
    def build(self):
        engine.GetInstance().Initialize()

if __name__ == '__main__':
    MyApp().run()
engine.py
# engine.py
from code import system1
from code import system2

gEngineInstance = None

def GetInstance():
    global gEngineInstance
    if (gEngineInstance == None):
        gEngineInstance = Engine()
    return gEngineInstance

class Engine():
    mSystem1 = None
    mSystem2 = None

    def Initialize(self):
        self.mSystem1 = system1.System1()
        self.mSystem2 = system2.System2()
        # Omitted
Unfortunately, this resulted in some nasty circular dependencies.
Main has to create the engine and know about it, which runs the engine's imports, which in turn run the system imports.
Problem: the system modules then import engine themselves, creating a circular reference.
system1.py
# system1.py
from code import engine

class System1():
    def SomeMethod(self):
        engine.GetInstance().mSystem2.DoThings()
You get the picture.
I bypassed this for now with this hideous code all over the place:
system1.py
# system1.py
class System1():
    def SomeMethod(self):
        from code import engine
        engine.GetInstance().mSystem2.DoThings()
This defers the import until that line actually executes, which works, but it looks wrong, and everything about it feels like I'm doing things wrong.
I'm tempted to just pass the Engine as a reference to every system's constructor, but that's a fair bit of refactoring, and I'd like to know if there's a cleaner way to fix this sort of singleton/circular-reference issue in the future.
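For reference, the constructor-injection version I'm considering would look roughly like this (just a sketch, not code I've written yet):

# engine.py (sketch): the engine builds each system and hands itself over
from code import system1
from code import system2

class Engine():
    def Initialize(self):
        self.mSystem1 = system1.System1(self)
        self.mSystem2 = system2.System2(self)

# system1.py (sketch): no import of engine at all
class System1():
    def __init__(self, engine):
        self.engine = engine

    def SomeMethod(self):
        self.engine.mSystem2.DoThings()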
How about having a "registration" mechanism, where each system module "registers" itself with the Engine class using some module-level code:
engine.py
class Engine():
    @classmethod
    def register(cls, type):
        ...
system1.py
from engine import Engine

class System1():
    ...

Engine.register(System1)
That way, the Engine doesn't directly have to know what gets plugged into it.
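A slightly fuller sketch of that idea (the attribute and method names here are illustrative, not taken from your code):

# engine.py -- knows nothing about the concrete systems
gEngineInstance = None

def GetInstance():
    global gEngineInstance
    if gEngineInstance is None:
        gEngineInstance = Engine()
    return gEngineInstance

class Engine():
    _system_types = []

    @classmethod
    def register(cls, system_type):
        # Called at import time by each system module.
        cls._system_types.append(system_type)

    def Initialize(self):
        # Instantiate whatever registered itself.
        self.systems = [system_type() for system_type in self._system_types]

# system1.py -- imports engine, but engine never imports system1, so no cycle
from engine import Engine

class System1():
    def SomeMethod(self):
        ...

Engine.register(System1)

main.py (or some loader) still has to import the system modules once so the registration lines run, but the import now only goes one way.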
I am trying to use the Hydra tool in my project and would like to use its decorator on class methods:
import hydra
from hydra.core.config_store import ConfigStore

from src.config import RecordingConfig

cs = ConfigStore.instance()
cs.store(name="recording_config", node=RecordingConfig)

class HydraClassTest:
    @hydra.main(config_path="../src/conf/", config_name="conf")
    def __init__(self, conf: RecordingConfig):
        print(conf)

def main():
    HydraClassTest()

if __name__ == "__main__":
    main()
But I get the error
TypeError: __init__() missing 1 required positional argument: 'conf'
Is this intended, and should I pass the configuration to the class from the outside? (For example, by using the decorator on the main function and passing the configuration as a parameter to the initializer; this works.)
Or am I using the decorator in the wrong way?
If it is intended, is there some design reason why one would not want to do it that way?
I checked whether I was using the decorator correctly by passing the configuration through the main function, and that worked:
import hydra
from hydra.core.config_store import ConfigStore

from src.config import RecordingConfig

cs = ConfigStore.instance()
cs.store(name="recording_config", node=RecordingConfig)

class HydraClassTest:
    def __init__(self, conf: RecordingConfig):
        print(conf)

@hydra.main(config_path="../src/conf/", config_name="conf")
def main(conf: RecordingConfig):
    HydraClassTest(conf)

if __name__ == "__main__":
    main()
This gives me the expected result.
@hydra.main() is not appropriate for this use case. It's designed to be used once per application, and it has many side effects (changing the working directory, configuring logging, etc.).
Use the Compose API instead.
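A rough sketch of what the Compose API version could look like (the config path and name mirror the question; depending on your Hydra version, initialize may also want a version_base argument):

from hydra import compose, initialize

from src.config import RecordingConfig

class HydraClassTest:
    def __init__(self, conf: RecordingConfig):
        print(conf)

def main():
    # Compose the config explicitly instead of decorating __init__.
    with initialize(config_path="../src/conf"):
        conf = compose(config_name="conf")
    HydraClassTest(conf)

if __name__ == "__main__":
    main()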
I take input from a form and pass it to this function from the kivy file (via the on_press property). The form data is fetched properly in the execute function, but it never gets logged to myapp.log.
Here's the code:
import logging

import selenium
from kivy.app import App
from kivy.uix.gridlayout import GridLayout

class UIf(GridLayout):
    def execute(self, *args):
        print("First probe")
        logging.basicConfig(filename="myapp.log", level=logging.DEBUG,
                            format='%(asctime)s:%(message)s')
        print("Second probe")
        for name in args:
            print("Third probe")
            logging.debug(name)

class MyApp(App):
    def build(self):
        return UIf()

if __name__ == '__main__':
    runapp = MyApp()
    runapp.run()
Be sure to make your basicConfig call before importing any Kivy code, because Kivy does its own logging configuration, which can conflict with your parameters (especially the output file).
Also, you might need to set the log level to DEBUG in the Kivy config, or reset the log level yourself after the Kivy import, because Kivy resets it to its own value.
I agree it's a bit intrusive, and we should probably consider this a bug, though I don't know the best way to both have good defaults and stay out of the way of people with their own opinions.
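A minimal sketch of that ordering, assuming the rest of the app stays as in the question:

import logging

# Configure logging before any Kivy import, so Kivy's own logging setup
# doesn't steal the output file.
logging.basicConfig(filename="myapp.log", level=logging.DEBUG,
                    format='%(asctime)s:%(message)s')

from kivy.app import App
from kivy.uix.gridlayout import GridLayout

# In case the Kivy import lowered the root logger level, raise it back.
logging.getLogger().setLevel(logging.DEBUG)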
So, consider I have a simple library that I am trying to write unit tests for. This library talks to a database and then uses that data to call a SOAP API. I have three modules, and a test file for each module.
dir structure:
./mypkg
    __init__.py
    main.py
    db.py
    api.py
./tests
    test_main.py
    test_db.py
    test_api.py
Code:
#db.py
import mysqlclient

class Db(object):
    def __init__(self):
        self._client = mysqlclient.Client()

    @property
    def data(self):
        return self._client.some_query()
#api.py
import soapclient

class Api(object):
    def __init__(self):
        self._client = soapclient.Client()

    @property
    def call(self):
        return self._client.some_external_call()
#main.py
from db import Db
from api import Api

class MyLib(object):
    def __init__(self):
        self.db = Db()
        self.api = Api()

    def caller(self):
        return self.api.call(self.db.data)
Unit-Tests:
#test_db.py
import mock
from mypkg.db import Db

@mock.patch('mypkg.db.mysqlclient')
def test_db(mysqlclient_mock):
    mysqlclient_mock.Client.return_value.some_query.return_value = {'data': 'data'}
    db = Db()
    assert db.data == {'data': 'data'}
#test_api.py
import mock
from mypkg.api import Api

@mock.patch('mypkg.api.soapclient')
def test_api(soap_mock):
    soap_mock.Client.return_value.some_external_call.return_value = 'foo'
    api = Api()
    assert api.call == 'foo'
In the above example, mypkg.main.MyLib calls mypkg.db.Db() (uses third-party mysqlclient) and then mypkg.api.Api() (uses third-party soapclient)
I am using mock.patch to patch the third-party libraries to mock my db and api calls in test_db and test_api separately.
Now my question is: is it recommended to patch these external calls again in test_main, or simply patch db.Db and api.Api? (This example is pretty simple, but in larger libraries the code becomes cumbersome when patching the external calls again, or even when using test helper functions that patch internal libraries.)
Option 1: patch external libraries in main again
#test_main.py
import mock
from mypkg.main import MyLib

@mock.patch('mypkg.db.mysqlclient')
@mock.patch('mypkg.api.soapclient')
def test_main(soap_mock, mysqlclient_mock):
    ml = MyLib()
    soap_mock.Client.return_value.some_external_call = 'foo'
    assert ml.caller() == 'foo'
Option 2: patch internal libraries
#test_main.py
import mock
from mypkg.main import MyLib

@mock.patch('mypkg.db.Db')
@mock.patch('mypkg.api.Api')
def test_main(api_mock, db_mock):
    ml = MyLib()
    api_mock.return_value = 'foo'
    assert ml.caller() == 'foo'
mock.patch creates a mock version of something where it's imported, not where it lives. This means the string passed to mock.patch has to be a path to an imported module in the module under test. Here's what the patch decorators should look like in test_main.py:
@mock.patch('mypkg.main.Db')
@mock.patch('mypkg.main.Api')
Also, the handles you have on your patched modules (api_mock and db_mock) refer to the classes, not instances of those classes. When you write api_mock.return_value = 'foo', you're telling api_mock to return 'foo' when it gets called, not when an instance of it has a method called on it. Here are the objects in main.py and how they relate to api_mock and db_mock in your test:
Api is a class : api_mock
Api() is an instance : api_mock.return_value
Api().call is an instance method : api_mock.return_value.call
Api().call() is a return value : api_mock.return_value.call.return_value
Db is a class : db_mock
Db() is an instance : db_mock.return_value
Db().data is an attribute : db_mock.return_value.data
test_main.py should therefore look like this:
import mock
from mypkg.main import MyLib

@mock.patch('mypkg.main.Db')
@mock.patch('mypkg.main.Api')
def test_main(api_mock, db_mock):
    ml = MyLib()
    api_mock.return_value.call.return_value = 'foo'
    db_mock.return_value.data = 'some data'  # we need this to test that the call to api_mock had the correct arguments
    assert ml.caller() == 'foo'
    api_mock.return_value.call.assert_called_once_with('some data')
The first patch in Option 1 would work great for unit-testing db.py, because it gives the db module a mock version of mysqlclient. Similarly, @mock.patch('mypkg.api.soapclient') belongs in test_api.py.
I can't think of a way Option 2 could help you unit-test anything.
Edited: I was incorrectly referring to classes as modules. db.py and api.py are modules
I come from a Java background and most of my thinking comes from there. I recently started learning Python. I have a case where I want to create just one connection to Redis and use it everywhere in the project. Here is how my structure and code look.
module: state.domain_objects.py
import pickle

import redis

class MyRedis():
    global redis_instance

    def __init__(self):
        redis_instance = redis.Redis(host='localhost', port=6379, db=0)
        print("Redis instance created", redis_instance)

    @staticmethod
    def get_instance():
        return redis_instance

    def save_to_redis(self, key, object_to_cache):
        pickleObj = pickle.dumps(object_to_cache)
        redis_instance.set(key, pickleObj)

    def get_from_redis(self, key):
        pickled_obj = redis_instance.get(key)
        return pickle.loads(pickled_obj)

class ABC():
    ....
Now I want to use this from other modules.
module: service.some_module.py
import datetime

from flask import Flask, request

from state.domain_objects import MyRedis

app = Flask(__name__)

@app.route('/chat/v1/', methods=['GET'])
def chat_service():
    userid = request.args.get('id')
    message_string = request.args.get('message')
    message = Message(message_string, datetime.datetime.now())
    r = MyRedis.get_instance()
    user = r.get(userid)

if __name__ == '__main__':
    global redis_instance
    MyRedis()
    app.run()
When I start the server, the MyRedis() __init__ method gets called and the instance gets created, which I have declared as global. Still, when the service tries to access it when the service is called, it says NameError: name 'redis_instance' is not defined. I am sure this is because I am trying to Java-fy the approach, but I'm not sure how exactly to achieve it. I read about globals, and my understanding is that a global acts like a single variable for the module, which is why I tried doing it this way. Please help me clear my confusion. Thanks!
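In other words, my mental model is something like the following module-level sketch (not my actual code, just what I think globals should let me do):

# state/domain_objects.py (sketch)
import pickle

import redis

# Created once, the first time the module is imported.
redis_instance = redis.Redis(host='localhost', port=6379, db=0)

def save_to_redis(key, object_to_cache):
    redis_instance.set(key, pickle.dumps(object_to_cache))

def get_from_redis(key):
    return pickle.loads(redis_instance.get(key))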
I'm using hooks in my Eve app to update a "summary" object every time a new item is added to my collection. To keep things clean, I've moved my callbacks to a separate dir/file that I import from run.py where I set up the hooks.
My problem is that I need to access the Eve() object (that I called "app") from inside my callback function (named on_inserted_expense). I couldn't find the "eve" way to do it, so I ended up using something like this decorator-like trick, which works:
from eve import Eve
from eventhooks import posthooks
from functools import wraps

app = Eve()

def passing_app(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        kwargs['app'] = app
        return f(*args, **kwargs)
    return wrapper

app.on_inserted_expenses += passing_app(posthooks.on_inserted_expense)
That way from eventhooks/posthooks.py I can do:
def on_inserted_expense(items, **kwargs):
    app = kwargs['app']
    for item in items:
        summaries = app.data.driver.db['summaries']
        summary = summaries.find_one({'title': 'default'})
        if not item_in_summary(item, summary):
            with app.test_request_context():
                update = update_summary(summary, item)
                patch_internal(summary, payload=update, concurrency_check=True)
My question, therefore, is: is there a way to retrieve the current "app" object from Eve in a cleaner way from anywhere within the application? If not, would that be something worth adding, maybe in the way of a singleton? Thanks!
I have been doing this:
from flask import current_app
And using current_app as the app.
Reference: http://flask.pocoo.org/docs/0.10/api/#flask.current_app
Are there pitfalls I should be aware of when doing this? It seems to work when adding hooks.
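If that holds, the hook from the question could presumably drop the passing_app wrapper and look roughly like this (a sketch; current_app only resolves while an application context is active, which hooks fired during a request should have, and the helper functions are the ones from the question):

# eventhooks/posthooks.py (sketch)
from flask import current_app

def on_inserted_expense(items):
    for item in items:
        summaries = current_app.data.driver.db['summaries']
        summary = summaries.find_one({'title': 'default'})
        if not item_in_summary(item, summary):
            with current_app.test_request_context():
                update = update_summary(summary, item)
                patch_internal(summary, payload=update, concurrency_check=True)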
You probably want to follow the Larger Flask Application pattern, so you can have your app object declared in your __init__.py and then import it anywhere you want. Remember, Eve is just a Flask application, so whatever you can do with Flask you can generally do with Eve.
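A rough sketch of that layout, assuming a package named myapp (the package and module names are illustrative):

# myapp/__init__.py
from eve import Eve

app = Eve()

# myapp/eventhooks/posthooks.py
from myapp import app

def on_inserted_expense(items):
    summaries = app.data.driver.db['summaries']
    ...

# run.py
from myapp import app
from myapp.eventhooks import posthooks

app.on_inserted_expenses += posthooks.on_inserted_expense

if __name__ == '__main__':
    app.run()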