In a Python async function I'm creating a ContextVar and a task, and attaching a callback to the task:
import contextvars

bbb = contextvars.ContextVar('aaa')
bbb.set(3)
task = self.loop.create_task(self.someFunc())
task.add_done_callback(self.commonCallback)
bbb.set(4)
In the callback I first start the debugger:
def commonCallback(self, result):
    pdb.set_trace()
    try:
        r = result.result()
        print(r)
    except:
        self.log.exception('commonCallback')
And in the debugger:
-> try:
(Pdb) bbb.get()
*** NameError: name 'bbb' is not defined
(Pdb) ctx = contextvars.copy_context()
(Pdb) print(list(ctx.items()))
[(<ContextVar name='aaa' at 0xa8245df0>, 3)]
(Pdb)
The ContextVar is there, but I can't access it. I'm missing something, but I can't find what.
The bbb local variable is defined in one place, so it won't be automatically accessible in another, such as the commonCallback function defined elsewhere in the code. The documentation states that "Context Variables should be created at the top module level", so you should try that first.
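For example, a minimal sketch, assuming the variable is moved to module level (common_callback is shown as a free function here for brevity; it is not the question's method):

import contextvars

# Created once at import time, so every function in this module sees the same name.
bbb = contextvars.ContextVar('aaa')

def common_callback(result):
    # No NameError here: bbb is a module-level name now. Which value you see
    # still depends on the context the callback runs in.
    print(bbb.get())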
You can get a value from the context without importing the top-level module.
contextvars.Context has an __iter__ method, so you can use a for loop to find the variable by name:
def get_ctx_var_value(ctx, var_name, default_value=None):
    for var in ctx:
        if var.name == var_name:
            return ctx[var]
    return default_value
ctx = contextvars.copy_context()
var_value = get_ctx_var_value(ctx, 'aaa')
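Note that the lookup key here is the string passed to the ContextVar constructor ('aaa' in the question), not the name of the Python variable (bbb) that holds it, which is why the pdb session shows name='aaa'.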
I want to create a function that reacts according to whether the argument function succeeds, and that reports the argument function's name.
Something like
def foo(argument_function):
    try:
        argument_function
        print(f"{somehow name of the argument_function} executed successfully.")
    except:
        print(f"{somehow name of the argument_function} failed.")
argument_function should be executed only under the try statement; otherwise it will raise an error and stop the script, which I am trying to avoid.
{somehow name of the argument_function} should produce the name of the argument_function.
For a successful attempt:
foo(print("Hello world!"))
should return
>>Hello world!
>>print("Hello world!") executed successfully.
For an unsuccessful attempt:
foo(prnt("Hello world!"))
should return
>>prnt("Hello world!") failed.
import typing

def foo(argument_function: typing.Callable):
    try:
        argument_function()
        print(f"{argument_function.__name__} executed successfully.")
    except:
        print(f"{argument_function.__name__} failed.")
print(foo.__name__) # returns foo
Or use:
def foo(argument_function: callable):
__name__ returns the function's name, and since the argument is now the function object itself, adding () runs it.
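A hypothetical usage sketch (greet is an illustrative name, not from the question): pass the function object itself, without calling it. Note that the question's foo(prnt("Hello world!")) would still fail before foo is ever entered, because the argument expression is evaluated first.

def greet():
    print("Hello world!")

foo(greet)
# Hello world!
# greet executed successfully.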
We have unit tests running via Pytest, which use a custom decorator to start up a context-managed mock echo server before each test, and provide its address to the test as an extra parameter. This works on Python 2.
However, if we try to run them on Python 3, then Pytest complains that it can't find a fixture matching the name of the extra parameter, and the tests fail.
Our tests look similar to this:
@with_mock_url('?status=404&content=test&content-type=csv')
def test_file_not_found(self, url):
    res_id = self._test_resource(url)['id']
    result = update_resource(None, res_id)
    assert not result, result
    self.assert_archival_error('Server reported status error: 404 Not Found', res_id)
With a decorator function like this:
from functools import wraps

def with_mock_url(url=''):
    """
    Start a MockEchoTestServer and call the decorated function with the
    server's address prepended to ``url``.
    """
    def decorator(func):
        @wraps(func)
        def decorated(*args, **kwargs):
            with MockEchoTestServer().serve() as serveraddr:
                return func(*(args + ('%s/%s' % (serveraddr, url),)), **kwargs)
        return decorated
    return decorator
On Python 2 this works; the mock server starts, the test gets a URL similar to "http://localhost:1234/?status=404&content=test&content-type=csv", and then the mock is shut down afterward.
On Python 3, however, we get an error, "fixture 'url' not found".
Is there perhaps a way to tell Python, "This parameter is supplied from elsewhere and doesn't need a fixture"? Or is there, perhaps, an easy way to turn this into a fixture?
You can accept url through the *args parameter:
@with_mock_url('?status=404&content=test&content-type=csv')
def test_file_not_found(self, *url):
    url[0]  # the test url
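This works because pytest only attempts fixture lookup for the named parameters in the test's signature; *args (and **kwargs) are ignored during fixture resolution, so the decorator's positional injection passes through untouched.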
Looks like Pytest is content to ignore it if I add a default value for the injected parameter, to make it non-mandatory:
@with_mock_url('?status=404&content=test&content-type=csv')
def test_file_not_found(self, url=None):
The decorator can then inject the value as intended.
Consider separating the address from the service behind the URL. Using marks and changing fixture behavior based on the presence of those marks is clear enough. A mock should not really involve any communication, but if you must start some service, make it a separate fixture:
with_mock_url = pytest.mark.mock_url('http://www.darknet.go')

@pytest.fixture
def url(request):
    marker = request.get_closest_marker('mock_url')
    if marker:
        earl = marker.args[0] if marker.args else marker.kwargs['fake']
        if earl:
            return earl
    try:
        # fall back to a parametrized value, if any
        earl = request.param
    except AttributeError:
        earl = None
    return earl
@pytest.fixture
def server(request):
    marker = request.get_closest_marker('mock_url')
    if marker:
        ...  # start fake_server
@with_mock_url
def test_resolve(url, server):
    server.request(url)
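If you go this route, remember to register the custom mark so pytest doesn't warn about an unknown marker, e.g. in pytest.ini (the description text here is illustrative):

[pytest]
markers =
    mock_url: provide a fake URL (and optionally a mock server) to the test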
I'm having trouble understanding, in pytest, when (or why) I need to add the @pytest.hookimpl(hookwrapper=True) decorator to all my pytest_runtest_* hooks.
Basically, in conftest, I have implemented all of those hooks to do something before yielding, and then something after, such as:
def pytest_runtest_call(item: Any) -> None:
    """ TODO: method docstring """
    item.session.test_info["on_case_number"] += 1
    pytest_log = getLogger("runtest")
    pytest_log.debug(f'= runtest started [{count_str(item.session.test_info)}]')
    outcome = yield
    setattr(item, "case_markers", item.own_markers)
    module_name, class_name, case_name = item.nodeid.split("::")
    trunk = item.session.test_info["items"][module_name]["classes"][class_name]
    try:
        # set a report attribute for each phase of a call, which can
        # be "setup", "call", "teardown"
        outcome.get_result()
    except KeyboardInterrupt:
        raise KeyboardInterrupt("Caught Keyboard Termination")
    except Exception as e:
        trunk["cases"][case_name]["passed"] = False
        _process_exception(e, item, pytest_log)
    pytest_log.debug(f'= runtest completed [{count_str(item.session.test_info)}]')
When I run a test case, this never gets called. However, if I prepend the decorator
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item: Any) -> None:
    ...
It does get called.
But I was under the (probably false) impression that a certain set of methods within pytest get called automagically if they are defined, including the pytest_runtest_* set (and in fact, pytest_collection_modifyitems, for instance, does get called eventually without the tag).
I'm confused as to why this is the case, or if I am just not getting/doing something correctly.
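The distinction, as a minimal sketch (hook names from the question): a body containing yield makes the function a generator, so without the hookwrapper declaration pytest calls your hook, receives a generator object back, and never executes the body.

import pytest

# Plain hook: an ordinary function; pytest runs its body directly.
def pytest_collection_modifyitems(items):
    ...

# Wrapper hook: `yield` makes this a generator function, so pytest must be
# told to drive it to the yield and back again.
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    ...                   # runs before the inner hook implementations
    outcome = yield
    outcome.get_result()  # runs after; re-raises the test's exception, if any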
I have a number of functions that need to get called from various imported files.
The functions are formatted along the lines of this:
a.foo
b.foo2
a.bar.foo4
a.c.d.foo5
and they are passed in to my script as a raw string.
I'm looking for a clean way to run these, with arguments, and get the return values.
Right now I have a messy system of splitting the strings and then feeding them to the right getattr call, but this feels clumsy and is very un-scalable. Is there a way I can just pass the object portion of getattr as a string? Or some other way of doing this?
import a, b, a.bar, a.c.d

if "." in raw_script:
    split_script = raw_script.split(".")
    if 'a' in raw_script:
        if 'a.bar' in raw_script:
            out = getattr(a.bar, split_script[-1])(args)
        if 'a.c.d' in raw_script:
            out = getattr(a.c.d, split_script[-1])(args)
        else:
            out = getattr(a, split_script[-1])(args)
    elif 'b' in raw_script:
        out = getattr(b, split_script[-1])(args)
It's hard to tell from your question, but it sounds like you have a command line tool you run as my-tool <function> [options]. You could use importlib like this, avoiding most of the getattr calls:
import importlib

def run_function(name, args):
    module, function = name.rsplit('.', 1)
    module = importlib.import_module(module)
    function = getattr(module, function)
    return function(*args)  # return the result, per the question

if __name__ == '__main__':
    # Elided: retrieve function name and args from command line
    run_function(name, args)
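A hypothetical call with one of the question's names: run_function('a.bar.foo4', ['x']) splits on the last dot, imports a.bar with importlib.import_module, fetches foo4 from it, and calls foo4('x').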
Try this:
def lookup(path):
    obj = globals()
    for element in path.split('.'):
        try:
            obj = obj[element]
        except (KeyError, TypeError):
            # modules and most objects aren't subscriptable;
            # fall back to attribute access
            obj = getattr(obj, element)
    return obj
Note that this will handle a path starting with ANY global name, not just your a and b imported modules. If there are any possible concerns with untrusted input being provided to the function, you should start with a dict containing the allowed starting points, not the entire globals dict.
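Following that advice, a minimal sketch with an explicit whitelist of starting points (module names taken from the question; everything else is illustrative):

import a, b, a.bar, a.c.d

# Only these names may start a path; anything else raises KeyError.
ALLOWED_ROOTS = {'a': a, 'b': b}

def safe_lookup(path):
    first, *rest = path.split('.')
    obj = ALLOWED_ROOTS[first]
    for element in rest:
        obj = getattr(obj, element)
    return obj

func = safe_lookup('a.c.d.foo5')  # resolve the dotted path
out = func(args)                  # then call it, as in the question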
I need help with the following situation. I want to implement a debug mode in my script by printing a small completion report in each function, with the executed command's name and the elapsed time, like:
def cmd_exec(cmd):
    if isDebug:
        command_start = datetime.datetime.now()
        print(command_start)
        print(cmd)
    ...
    ... executing commands
    ...
    if isDebug:
        print(datetime.datetime.now() - command_start)
    return
def main():
    ...
    if args.debug:
        isDebug = True
    ...
    cmd_exec(cmd1)
    ...
    cmd_exec(cmd2)
    ...
How can the isDebug variable be simply passed to the functions?
Should I use "global isDebug"?
Because
...
cmd_exec(cmd1, isDebug)
...
cmd_exec(cmd2, isDebug)
...
looks pretty bad. Please help me find a more elegant way.
isDebug is state that applies to the application of a function cmd_exec. Sounds like a use-case for a class to me.
class CommandExecutor(object):
    def __init__(self, debug):
        self.debug = debug

    def execute(self, cmd):
        if self.debug:
            command_start = datetime.datetime.now()
            print(command_start)
            print(cmd)
        ...
        ... executing commands
        ...
        if self.debug:
            print(datetime.datetime.now() - command_start)

def main(args):
    ce = CommandExecutor(args.debug)
    ce.execute(cmd1)
    ce.execute(cmd2)
Python has a built-in __debug__ variable that could be useful.
if __debug__:
    print('information...')
When you run your program as python test.py, __debug__ is True. If you run it as python -O test.py, it will be False.
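Note that -O does more than flip __debug__: it also strips assert statements, so don't rely on asserts for anything that must still run in optimized mode.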
Another option, which I use in my projects, is to set a global DEBUG var at the beginning of the file, after the imports:
DEBUG = True
You can then reference this DEBUG var in the scope of the function.
You can use a module to create variables that are shared. This is better than a global because it only affects code that is specifically looking for the variable, it doesn't pollute the global namespace. It also lets you define something without your main module needing to know about it.
This works because modules are shared objects in Python. Every import gets back a reference to the same object, and modifications to the contents of that module get shared immediately, just like a global would.
my_debug.py:
isDebug = False
main.py:
import my_debug

def cmd_exec(cmd):
    if my_debug.isDebug:
        ...  # debug-only reporting

def main():
    ...
    if args.debug:
        my_debug.isDebug = True
Specifically for this, I would use partials/currying, basically pre-filling a variable.
import sys
from functools import partial
import datetime

def _cmd_exec(cmd, isDebug=False):
    if isDebug:
        command_start = datetime.datetime.now()
        print(command_start)
        print(cmd)
    else:
        print('isDebug is false' + cmd)
    if isDebug:
        print(datetime.datetime.now() - command_start)
    return

# default, keeping it as is...
cmd_exec = _cmd_exec

# switch to debug
def debug_on():
    global cmd_exec
    # pre-apply the isDebug optional param
    cmd_exec = partial(_cmd_exec, isDebug=True)

def main():
    if "-d" in sys.argv:
        debug_on()
    cmd_exec("cmd1")
    cmd_exec("cmd2")

main()
In this case, I check for -d on the command line to turn on debug mode, and I pre-populate isDebug by creating a new function with isDebug=True baked in.
I think even other modules will see this modified cmd_exec, because I replaced the function at the module level.
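One caveat: that only holds for callers that look it up as module.cmd_exec at call time; any module that did from ... import cmd_exec before debug_on() ran keeps its own reference to the original _cmd_exec and won't see the swap.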
output:
jluc@explore$ py test_so64.py
isDebug is falsecmd1
isDebug is falsecmd2
jluc@explore$ py test_so64.py -d
2016-10-13 17:00:33.523016
cmd1
0:00:00.000682
2016-10-13 17:00:33.523715
cmd2
0:00:00.000009