I'm using cherrypy to build a web service. I came across the BackgroundTaskQueue plugin and I want to use it to handle specific time-consuming operations on a separate thread.
The documentation states the usage should be like the following:
import cherrypy
from complicated_logging import log

bgtask = BackgroundTaskQueue(cherrypy.engine)
bgtask.subscribe()

class Root(object):

    def index(self):
        bgtask.put(log, "index was called", ip=cherrypy.request.remote.ip)
        return "Hello, world!"
    index.exposed = True
But, IMHO, using the bgtask object like this isn't very elegant. I would like handlers from other Python modules to use this object too.
Is there a way to subscribe this plugin once and then share the bgtask object among handlers in other modules (for example, by saving it in cherrypy.request)?
How is this done? Does it require writing a CherryPy tool?
Place
queue = BackgroundTaskQueue(cherrypy.engine)
in a separate file named, for instance, tasks.py. This way you create the module tasks.
Now you can import tasks in other modules, and tasks.queue is a single shared instance.
For example, in a file called test.py:
import tasks
def test(): print('works!')
tasks.queue.put(test)
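For example, a handler in another CherryPy module can reuse the same queue. A minimal sketch (complicated_logging.log is the logging function from the question; the Reports handler is made up for illustration):

# handlers.py -- another module sharing tasks.queue
import cherrypy
import tasks
from complicated_logging import log

class Reports(object):

    @cherrypy.expose
    def index(self):
        # Enqueue the slow call; the request returns immediately.
        tasks.queue.put(log, "report requested", ip=cherrypy.request.remote.ip)
        return "Report queued"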
The following code works if the module user.py is in the same directory as the code, but fails if it is in a different directory. The error message I get is "ModuleNotFoundError: No module named 'user'".
import multiprocessing as mp
import imp

class test():
    def __init__(self, pool):
        pool.processes = 1
        usermodel = imp.load_source('user', 'D:\\pool\\test\\user.py').userfun
        # file D:\\pool\\test\\user.py looks like this:
        # def userfun():
        #     return 1
        vec = []
        for i in range(10):
            vec.append([usermodel, i])
        pool.map(self.myfunc, vec)

    def myfunc(self, A):
        userfun = A[0]
        i = A[1]
        print(i, userfun())
        return

if __name__ == '__main__':
    pool = mp.Pool()
    test(pool)
If myfunc is called without the pooled process, the code works regardless of whether user.py is in the same directory as the main code or in \test. Why can't the pooled process find user.py in a separate directory? I have tried different approaches, such as modifying my path and then importing user, and using importlib, all with the same result.
I am using Windows 7 and Python 3.6.
multiprocessing tries to pretend it's just like threading, but the abstraction leaks like a sieve. One of the ways it leaks is that communicating with worker processes involves a lot of implicit pickling and data copying.
When you try to send usermodel to a worker, multiprocessing implicitly pickles it and tries to have the worker unpickle the pickle. Functions are pickled by recording the module name and function name, so the worker just thinks it's supposed to do from user import userfun to access userfun. It doesn't know that user needs to be loaded with imp.load_source from a specific filesystem location, so it can't reconstruct usermodel.
How this problem manifests is OS-dependent, because if multiprocessing uses the fork start method, the workers inherit the user module from the master process. fork is the default start method on Unix, but it is unavailable on Windows.
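One possible workaround (a sketch, not from the question): load user.py again inside each worker with a Pool initializer, so the module is registered in sys.modules and the pickled reference to user.userfun can be resolved:

import multiprocessing as mp
import imp

USER_PATH = 'D:\\pool\\test\\user.py'

def preload_user(path):
    # Runs once in every worker: registers the module as 'user' in
    # sys.modules so unpickling can find user.userfun again.
    imp.load_source('user', path)

def myfunc(args):
    userfun, i = args
    print(i, userfun())

if __name__ == '__main__':
    usermodel = imp.load_source('user', USER_PATH).userfun
    pool = mp.Pool(initializer=preload_user, initargs=(USER_PATH,))
    pool.map(myfunc, [(usermodel, i) for i in range(10)])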
I use pytest quite a bit for my code. My sample code structure looks like this (the entire codebase is Python 2.7):
core/__init__.py
core/utils.py
#feature
core/feature/__init__.py
core/feature/service.py
#tests
core/feature/tests/__init__.py
core/feature/tests/test1.py
core/feature/tests/test2.py
core/feature/tests/test3.py
core/feature/tests/test4.py
core/feature/tests/test10.py
The service.py looks something like this:
from modules import stuff
from core.utils import Utility

class FeatureManager:
    # lots of other methods

    def execute(self, *args, **kwargs):
        self._execute_step1(*args, **kwargs)
        # some more code
        self._execute_step2(*args, **kwargs)
        utility = Utility()
        utility.doThings(args[0], kwargs['variable'])
All the tests in feature/tests/* end up exercising core.feature.service.FeatureManager.execute. However, utility.doThings() does not need to run during tests: it has to happen in the production application, but not while the tests are being run.
I can do something like this in my core/feature/tests/test1.py:
from mock import patch

class Test1:
    def test_1(self):
        with patch('core.feature.service.Utility') as MockedUtils:
            execute_test_case_1()
This would work. However, I have only just added Utility to the code base, and there are more than 300 test cases; I would rather not go into each one and add this with statement.
I could write a conftest.py that sets an OS-level environment variable, based on which core.feature.service.FeatureManager.execute could decide not to call utility.doThings, but I don't know whether that is a clean solution.
I would appreciate help with applying patches globally to the entire session: I would like to do what the with block above does, but for the whole session. Any articles on the matter would be great too.
TL;DR: How do I create session-wide patches when running pytest?
I added a file called core/feature/conftest.py that looks like this:
import logging
import mock
import pytest

log = logging.getLogger(__name__)

@pytest.fixture(scope="session", autouse=True)
def default_session_fixture(request):
    """
    :type request: _pytest.python.SubRequest
    :return:
    """
    log.info("Patching core.feature.service")
    patched = mock.patch('core.feature.service.Utility')
    patched.__enter__()

    def unpatch():
        patched.__exit__()
        log.info("Patching complete. Unpatching")

    request.addfinalizer(unpatch)
This is nothing complicated. It is like doing
with mock.patch('core.feature.service.Utility') as patched:
    do_things()
but in a session-wide manner.
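As an aside, the patcher object also has start() and stop() methods, which read a bit more naturally than calling the dunder methods yourself. The same fixture could be sketched as:

import mock
import pytest

@pytest.fixture(scope="session", autouse=True)
def default_session_fixture(request):
    patcher = mock.patch('core.feature.service.Utility')
    patcher.start()
    request.addfinalizer(patcher.stop)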
Building on the currently accepted answer for a similar use case (4.5 years later), using unittest.mock's patch with a yield also worked:
import logging
from typing import Iterator
from unittest.mock import patch

import pytest

log = logging.getLogger(__name__)

@pytest.fixture(scope="session", autouse=True)
def default_session_fixture() -> Iterator[None]:
    log.info("Patching core.feature.service")
    with patch("core.feature.service.Utility"):
        yield
    log.info("Patching complete. Unpatching")
Aside
For this answer I utilized autouse=True. In my actual use case, to integrate into my unit tests on a test-by-test basis, I utilized @pytest.mark.usefixtures("default_session_fixture").
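For example (a sketch; the test name is made up):

import pytest

@pytest.mark.usefixtures("default_session_fixture")
def test_execute_skips_utility():
    ...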
Versions
Python==3.8.6
pytest==6.2.2
I am wondering if there is a way to have a Python variable behave like a Python module.
The problem I currently have is that we have Python bindings for our API. The bindings are automatically generated through SWIG, and to use them someone only needs to:
import module_name as short_name
short_name.functions()
Right now we are studying having the API use Apache Thrift. To use it, someone needs to:
client, transport = thrift_connect()
client.functions()
...
transport.close()
The problem is that we have loads of scripts, and we were wondering whether there is a way to make the thrift client object behave like a module, so that we don't need to modify all the scripts. One idea we had was to do something like this:
client, transport = thrift_connect()
global short_name
short_name = client
__builtins__.short_name = client
This 'sort of' works. It creates a global variable short_name that acts like a module, but it also creates other problems: if other files import the same module, those imports have to be commented out, and having a global variable is not a bright idea for maintenance purposes.
So, is there a way to make the thrift client behave like a module? People could then keep using the 'old' syntax, while under the hood the module import triggers a connection and returns the client object as the module.
EDIT 1:
It is fine for every import to open a connection. Maybe we could use some kind of singleton, so that a given interpreter opens only one connection even if the module is imported from several files.
I thought about binding transport.close() to the termination of some object. That could be the module itself, if that is possible.
EDIT 2:
This seems to do what I want:
client, transport = thrift_connect()
attributes = dict(
    (name, getattr(client, name))
    for name in dir(client)
    if not name.startswith('_')
)
globals().update(attributes)
Importing a module shouldn't cause a network connection.
If you have mandatory setup/teardown steps then you could define a context manager:
from contextlib import contextmanager

@contextmanager
def thrift_client():
    client, transport = thrift_connect()
    try:
        yield client
    finally:
        transport.close()
Usage:
with thrift_client() as client:
    # use client here
In general, the auto-generated module with its C-like API should be private, e.g., named _thrift_client, and the proper Pythonic API that is used outside should be written by hand on top of it in another module.
To answer the question from the title: you can make an object behave like a module, e.g., see sh.SelfWrapper and quickdraw.Module.
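A sketch of that idea for the thrift case: install a module-like wrapper into sys.modules, so import short_name hands every script an object that forwards attribute access to a single shared client (thrift_connect is the helper from the question; the rest of the names are made up):

import sys
import types

class _ClientModule(types.ModuleType):
    """Module-shaped wrapper that forwards attribute access to the client."""

    def __init__(self, name):
        super(_ClientModule, self).__init__(name)
        # One connection per interpreter, opened when the wrapper is installed.
        self._client, self._transport = thrift_connect()

    def __getattr__(self, name):
        # Only called for attributes not found on the wrapper itself.
        return getattr(self._client, name)

sys.modules['short_name'] = _ClientModule('short_name')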
I am writing my first "serious" application with AppEngine and have run into some problems with the task queue.
I have read and reproduced the example code that is given in the appengine docs.
When I tried to add a Task to a custom queue, though, it didn't work for me the way it seems to work for others:
What I do is:
from google.appengine.api import taskqueue

class EnterQueueHandler(AppHandler):
    def get(self):
        # some code
        pass
    def post(self):
        key = self.request.get("value")
        task = Task(url='/queue', params={'key': key})
        task.add("testqueue")
        self.redirect("/enterqueue")
And then I have a handler set for "/queue" that does stuff.
The problem is that this throws the following error:
NameError: global name 'Task' is not defined
Why is that? It seems to me I am missing something basic, but I can't figure out what. It says in the docs that the Task-Class is provided by the taskqueue module.
By now I have figured out that it works if I replace the two task-related lines in the code above with the following:
taskqueue.add(queue_name="testqueue", url="/queue", params={"key":key})
Nonetheless, I would like to understand why the other method doesn't work. It would be very nice if someone could help me out here.
From the documentation
Task is provided by the google.appengine.api.taskqueue module.
Since you have already imported
from google.appengine.api import taskqueue
You can replace this line:
task = Task(url='/queue', params={'key':key})
with
task = taskqueue.Task(url='/queue', params={'key':key})
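Putting it back into the handler from the question (a sketch; AppHandler is the question's base class):

from google.appengine.api import taskqueue

class EnterQueueHandler(AppHandler):

    def post(self):
        key = self.request.get("value")
        task = taskqueue.Task(url='/queue', params={'key': key})
        task.add("testqueue")
        self.redirect("/enterqueue")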
I think the reason it does not work is that Task is not imported. Below is an example that I use all the time, successfully. It looks just like yours, but my import is different.
from google.appengine.api.taskqueue import Task

task = Task(
    url=url,
    method=method,
    payload=payload,
    params=params,
    countdown=0
)
task.add(queue_name=queue)
I would like to test a pyramid view like the following one:
def index(request):
    data = request.some_custom_property.do_something()
    return {'some': data}
some_custom_property is added to the request via such an event handler:
@subscriber(NewRequest)
def prepare_event(event):
    event.request.set_property(
        create_some_custom_property,
        'some_custom_property', reify=True
    )
My problem is: if I create a test request manually, the property is not set up correctly, because no events are triggered. Since the real event handler is more complicated and depends on configuration settings, I don't want to reproduce its code in my test code; I would like to use the Pyramid infrastructure as much as possible. I learned from an earlier question how to set up a real Pyramid app from an ini file:
from webtest import TestApp
from pyramid.paster import get_app
app = get_app('testing.ini#main')
test_app = TestApp(app)
The test_app works fine, but I can only get back the HTML output (which is the idea of TestApp). What I want is to execute index in the context of app or test_app, but to get the result of index back before it is sent to a renderer.
Any hint how to do that?
First of all, I believe it is a really bad idea to write doctests like this. They require a lot of initialization work that will end up in the documentation (remember, doctests) without actually documenting anything, and to me such tests look like a job for unit/integration tests. But if you really want to, here's a way to do it:
import myapp
from pyramid.paster import get_appsettings
from webtest import TestApp
app, conf = myapp.init(get_appsettings('settings.ini#appsection'))
rend = conf.testing_add_renderer('template.pt')
test_app = TestApp(app)
resp = test_app.get('/my/view/url')
rend.assert_(key='val')
where myapp.init is a function that does the same work as your application's initialization function called by pserve (like the main function here), except that myapp.init takes one argument, the settings dictionary (instead of main(global_config, **settings)), and returns app (i.e., conf.make_wsgi_app()) and conf (the pyramid.config.Configurator instance). rend is a pyramid.testing.DummyTemplateRenderer instance.
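A minimal sketch of what such a myapp.init could look like (the include target is made up; mirror whatever your real main does):

from pyramid.config import Configurator

def init(settings):
    # Same work as the main() that pserve calls, but takes only the
    # settings dict and also returns the Configurator so tests can
    # call testing_add_renderer() on it.
    conf = Configurator(settings=settings)
    conf.include('myapp.routes')  # assumed: your usual routes/views setup
    return conf.make_wsgi_app(), conf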
P.S. Sorry for my English, I hope you'll be able to understand my answer.
UPD. I forgot to mention that rend has a _received property, which holds the value passed to the renderer, though I would not recommend using it, since it is not part of the public interface.