Python 2: function alias or macro

I am wondering if there is a way to create macros or aliases for functions in Python 2.7.
Example: I am trying to use the logging module and create aliases/macros for functions logging.debug, logging.info, logging.error, etc. If I use those functions as they are in the place where I want the log, everything works fine. But if I try to create an 'alias' function wrapper like this:
def debugLog(message):
    logging.debug(message)
... then the line number reporting no longer works as intended: the reported line always points at the wrapper instead of the actual log call, which isn't of any real use.
I did find this solution:
import logging
from logging import info as infoLog
from logging import debug as debugLog
from logging import error as errorLog
....
... but it is not suitable for me since I also create my own logging severity:
logging.addLevelName(60, "NORMAL")
... and I'd like to create an alias/macro like normalLog(message) = logging.log(60, message) for it as well, if that's possible. I couldn't find anything comprehensive in the Python docs or online.

You can use functools.partial:
import functools
import logging
normalLog = functools.partial(logging.log, 60)
It works pretty well:
normalLog("Hey!!")
Level 60:root:Hey!!
partial binds arguments to a function and returns a partial object (a callable that holds the necessary information), so you can also use it for the addLevelName call:
activateLevel = functools.partial(logging.addLevelName, 60, "NORMAL")
activateLevel()
With this approach the log line is properly reported: calling a partial object does not add a Python stack frame, so the logging module still identifies your code as the caller.
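Putting it together, a minimal end-to-end sketch (the basicConfig call and its format string are assumptions, added so the line numbers are visible in the output):
import functools
import logging

logging.basicConfig(level=logging.DEBUG,
                    format='%(levelname)s %(filename)s:%(lineno)d: %(message)s')
logging.addLevelName(60, "NORMAL")

# Neither a direct reference nor a partial adds a wrapper frame,
# so logging still reports the caller's file and line number.
debugLog = logging.debug
normalLog = functools.partial(logging.log, 60)

debugLog("debug message")  # reported at this line
normalLog("hello")         # logged as NORMAL, reported at this line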

You can use a frame object to get the line number. There are several ways to get a frame object; in the example below I use sys._getframe(), where the argument 1 selects the previous stack frame. Note that sys._getframe() is a CPython implementation detail and is not guaranteed to exist in other Python implementations. Several other functions return frame objects, including those in the inspect module.
import sys

def debugLog(message):
    line = sys._getframe(1).f_lineno
    print line, ':', message

x = 42
print x
debugLog("A")
y = x + 1
print y
debugLog("B")
Running this file gives (the reported numbers are the source lines of the debugLog calls):
42
9 : A
43
12 : B
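As noted, the inspect module also returns frame objects and is more portable across implementations; a hypothetical variant of the same wrapper:
import inspect

def debugLog(message):
    # f_back is the frame of the caller; currentframe() may return None
    # on implementations without Python stack frame support.
    caller = inspect.currentframe().f_back
    print caller.f_lineno, ':', message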

Related

Dask delayed function call with non-passed parameters

I am seeking to better understand the following behavior when using dask.delayed to call a function that depends on parameters. The issue seems to arise when parameters are specified in a parameters file read by configparser. Here is a complete example:
parameter file:
#zpar.ini: parameter file for configparser
[my pars]
my_zpar = 2.
parser:
#zippy_parser
import configparser

def read(_rundir):
    global rundir
    rundir = _rundir
    cp = configparser.ConfigParser()
    cp.read(rundir + '/zpar.ini')
    #[my pars]
    global my_zpar
    my_zpar = cp['my pars'].getfloat('my_zpar')
and the main python file:
# dask test with configparser
import dask
from dask.distributed import Client

import zippy_parser as zpar

def my_func(x, y):
    # print stuff
    print("parameter from main is: {}".format(main_par))
    print("parameter from configparser is: {}".format(zpar.my_zpar))
    # do stuff
    return x + y

if __name__ == '__main__':
    client = Client(n_workers = 4)
    # read parameters from input file
    rundir = '/path/to/parameter/file'
    zpar.read(rundir)
    # test zpar
    print("zpar is {}".format(zpar.my_zpar))
    # define parameter and call my_func
    main_par = 5.
    z = dask.delayed(my_func)(1., 2.)
    z.compute()
    client.close()
The first print statement in my_func() executes just fine, but the second print statement raises an exception. The output is:
zpar is 2.0
parameter from main is: 5.0
distributed.worker - WARNING - Compute Failed
Function: my_func
args: (1.0, 2.0)
kwargs: {}
Exception: AttributeError("module 'zippy_parser' has no attribute 'my_zpar'",)
I am new to dask. I suppose this has something to do with the serialization, which I do not understand. Can someone enlighten me and/or point to relevant documentation? Thanks!
I will try to keep this brief.
When a function is serialised in order to be sent to workers, Python also sends the local variables and functions needed by the function (its "closure"). However, modules referenced by the function are stored by name only; Python does not try to serialise your whole runtime.
This means that zippy_parser is imported in the worker, not deserialised. Since the function read has never been called
in the worker, the global variable is never initialised.
So, you could call read in the workers, as part of your function or otherwise, but the pattern of setting module-global variables from within a function probably isn't great anyway. Dask's delayed mechanism prefers functional purity: the result you get should not depend on the current state of the runtime.
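For example, distributed's Client.run executes a function on every worker, so one way to initialise the module there (a one-line sketch, reusing client, zpar, and rundir from the question) is:
# set zpar.my_zpar in every worker process
client.run(zpar.read, rundir)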
(note that if you had created the client after calling read in the main script, the workers might have got the in-memory version, depending on how subprocesses are configured to be created on your system)
I encourage you to pass in all parameters to your dask delayed functions explicitly, rather than relying on the global namespace.
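As a sketch of that advice (a hypothetical rewrite of the question's main file, with the values passed as arguments instead of looked up as module globals):
import dask
from dask.distributed import Client

def my_func(x, y, main_par, my_zpar):
    # all inputs arrive as arguments, so serialisation is straightforward
    print("parameter from main is: {}".format(main_par))
    print("parameter from configparser is: {}".format(my_zpar))
    return x + y

if __name__ == '__main__':
    client = Client(n_workers=4)
    main_par = 5.
    my_zpar = 2.  # in the real code, read from zpar.ini in the main process
    z = dask.delayed(my_func)(1., 2., main_par, my_zpar)
    print(z.compute())
    client.close()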

How to use Sympy's NumPyPrinter?

I am trying to print my Sympy-expression as a string ready to be used with Numpy. I just cannot figure out how to do it.
I found that there is sp.printing.pycode: https://docs.sympy.org/latest/_modules/sympy/printing/pycode.html
The web page states that "This module contains python code printers for plain python as well as NumPy & SciPy enabled code.", but I just cannot figure out how to get it to output the expression in NumPy format.
sp.printing.pycode(expr)
'math.cos((1/2)*alpha)*math.cos((1/2)*beta)'
That web page also contains the class NumPyPrinter(PythonCodePrinter), but I do not know how to use it. def pycode(expr, **settings) just seems to use return PythonCodePrinter(settings).doprint(expr) as a default all the time.
The definition of pycode is almost trivial:
def pycode(expr, **settings):
    # docstring skipped
    return PythonCodePrinter(settings).doprint(expr)
It should be straightforward to run NumPyPrinter().doprint(expr) instead. The problem is that sympy.printing re-exports the pycode function, which shadows the submodule of the same name. However, we can still import the class directly and use it:
import sympy as sy
from sympy.printing.pycode import NumPyPrinter
x = sy.Symbol('x')
y = x * sy.cos(x * sy.pi)
code = NumPyPrinter().doprint(y)
print(code)
# x*numpy.cos(numpy.pi*x)
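Applied to the cosine expression from the question (assuming alpha and beta are plain Symbols), the same pattern emits numpy calls in place of math calls:
import sympy as sy
from sympy.printing.pycode import NumPyPrinter

alpha, beta = sy.symbols('alpha beta')
expr = sy.cos(alpha / 2) * sy.cos(beta / 2)
print(NumPyPrinter().doprint(expr))
# numpy.cos((1/2)*alpha)*numpy.cos((1/2)*beta)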

Unable to set equation in redhawk's fcalc component

In REDHAWK IDE (v2.12), I am trying to use fcalc component for some math calculations. I tried to follow an example in the doc by putting math.sin(a+b)+random.random() in the equation field, but I got the following error:
CF.PropertySetPackage.InvalidConfiguration: Failure: . Properties: equation
IDL:CF/PropertySet/InvalidConfiguration:1.0
I also tried other math functions, such as sqrt, but none of them worked. It is also very hard to add any modules in the import field.
Did I do anything wrong while using this fcalc component?
It appears that the property change listener is not being triggered for the initial property configuration when launched in the IDE sandbox. There are several workarounds:
1. Manually configure the import property after launching the component, which will trigger the property change listener. Adding time to the list of imports, for example, will then import math and random as well.
2. Use the Python sandbox instead of the IDE sandbox:
>>> from ossie.utils import sb
>>> fcalc = sb.launch('rh.fcalc')
2019-01-04 11:55:44 WARNING rh_fcalc:176 - NOT overriding global namespace with random from random
>>> fcalc.equation = 'sin(a+b)+random.random()'
The warning is expected and just indicates that you can't use random() in the equation without the full namespace random.random() because it would conflict with the random library.
3. Launch rh.fcalc in a waveform in a domain.

How to mock an imported module with pytest

Something similar has been asked before, but I'm struggling to get this to work.
How do I mock an import module from another file
I have one file:
b.py (named to be consistent with the linked docs)
import cv2  # module 'a' in the linked docs

def get_video_frame(path):
    vidcap = cv2.VideoCapture(path)  # `a.SomeClass` in the linked docs
    vidcap.isOpened()
    ...
test_b.py
import b
import pytest  # with pytest-mock installed

def test_get_frame(mocker):
    mock_vidcap = mocker.Mock()
    mock_vidcap.isOpened.side_effect = AssertionError
    mock_cv2 = mocker.patch('cv2.VideoCapture')
    mock_cv2.return_value = mock_vidcap
    b.get_video_frame('foo')  # Doesn't fail
    mock_vidcap.isOpened.assert_called()  # fails
I set the tests up like this because the "where to patch" docs specify that in this case the class we want to patch is being looked up on the a module, and so we have to patch a.SomeClass instead:
@patch('a.SomeClass')
I've tried a few other combinations of patching, but they all exhibit the same behavior, which suggests I'm not successfully patching the module. If the patch were applied, b.get_video_frame('foo') would fail because of the side_effect; having assert_called fail supports this.
Edit: in an effort to reduce the length of the question I left off the rest of get_video_frame. Unfortunately, the parts left off were the critical parts. The full function is:
def get_video_frame(path):
    vidcap = cv2.VideoCapture(path)  # `a.SomeClass` in the linked docs
    is_open = vidcap.isOpened()
    while True:
        is_open, frame = vidcap.read()
        if is_open:
            yield frame
        else:
            break
This line just creates a generator:
b.get_video_frame('foo')
The line is_open = vidcap.isOpened() is never reached, because inside the test the generator remains frozen at its start, so the side effect never raises.
You are otherwise using mocker and patch correctly.
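A sketch of a test that actually drives the generator (a hypothetical fix built on the code above):
import b
import pytest  # with pytest-mock installed

def test_get_frame(mocker):
    mock_vidcap = mocker.Mock()
    mock_vidcap.isOpened.side_effect = AssertionError
    mocker.patch('cv2.VideoCapture', return_value=mock_vidcap)
    # Advancing the generator with next() executes the function body,
    # so the patched isOpened() runs and raises the side effect.
    with pytest.raises(AssertionError):
        next(b.get_video_frame('foo'))
    mock_vidcap.isOpened.assert_called()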

Accessing global variables from inside a module

I wrote some python code to control a number of USB (electrical relays and temperature sensors) and RS232 (vacuum gauges) devices. From within this main script (e.g., myscript.py), I would like to import a module (e.g., exp_protocols.py) where I define different experimental protocols, i.e. a series of instructions to open or close relays, read temperature and pressure values, with some simple flow control thrown in (e.g. "wait until temperature exceeds 200 degrees C").
My initial attempt looked like this:
switch_A = Relay('A')
switch_B = Relay('B')
gauge_1 = Gauge('1')
global switch_A
global switch_B
global gauge_1
from exp_protocols import my_protocol
my_protocol()
with exp_protocols.py looking like this:
def my_protocol():
    print 'Pressure is %.3f mbar.' % gauge_1.value
    switch_A.close()
    switch_B.open()
This outputs a global variable error, because exp_protocols.my_protocol cannot access the objects defined in myscript.py.
It seems, from reading the answers to earlier questions here, that I could (should?) create all my Relay and Gauge variables in another module, e.g., myconfig.py, and then import myconfig in both myscript.py and exp_protocols.py. But if I do that, won't my Relay and Gauge objects be created twice (thus trying to open serial ports that are already active, etc.)?
What would be the best (most Pythonic) way to achieve this kind of inter-module communication?
Thanks in advance.
No matter how many times you import myconfig, python only imports the module once. After the first import, future import statements just grab another reference to the module.
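A sketch of that shared-module pattern (myconfig.py and the device import are hypothetical, following the question's naming):
# myconfig.py: module-level statements run only on the first import
from mydevices import Relay, Gauge  # hypothetical module holding your classes

switch_A = Relay('A')
switch_B = Relay('B')
gauge_1 = Gauge('1')
Both myscript.py and exp_protocols.py can then import myconfig and refer to myconfig.switch_A and friends; the constructors (and any serial-port opening) run exactly once.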
Globals should only be used for static bits of data. Your function would be more generic if it took the variables as parameters:
def my_protocol(switch_A, switch_B, gauge_1):
    print 'Pressure is %.3f mbar.' % gauge_1.value
    switch_A.close()
    switch_B.open()
Other modules could then use it with many combinations of data. Suppose you have blocks of switches in a list (and I'm just making this up because I have no idea how you configure your data...); you could process them all with the same function:
import exp_protocols

switch_blocks = [
    [Relay('1-A'), Relay('1-B'), Gauge('1-1')],
    [Relay('2-A'), Relay('2-B'), Gauge('2-1')],
]

for switch1, switch2, gauge in switch_blocks:
    exp_protocols.my_protocol(switch1, switch2, gauge)
