Using the Maya Python API 2.0, I'm trying to make a callback that changes the value of a plug. However, none of the methods I've tried are working.
I first tried the MPlug.setFloat() method, but the plug's value didn't change. I figured this was because I needed to clean the plug after changing its value. So I then tried getting the plug's data handle with MPlug.asMDataHandle() and calling the handle's datablock() method, intending to set the plug's value and clean it through the data handle and datablock. However, calling MDataHandle.datablock() raised "RuntimeError: (kFailure): Unexpected Internal Failure".
Now I'm trying the following, which uses the data handle to set the plug's value and clean it:
def setPlugFloatValue(node, plugName, val):
    fnSet = OpenMaya.MFnDependencyNode(node)
    plug = fnSet.findPlug(plugName, True)
    handle = plug.asMDataHandle()
    handle.setFloat(val)
    handle.setClean()
The above function is intended to find a certain plug in a node, then use its data handle to set its value and clean it. In my program, the callback uses this function to change the translateX, translateY and translateZ plugs of a node's child nodes. The callback runs when the translate value of the node it's applied to changes. In a scene I'm using to test this callback, I apply the callback to a polygon mesh object, with one child which is also a polygon mesh object. So, as I translate the parent object, I expect the translate values of its child to change. But when I select the child object after translating its parent, its translate values haven't changed.
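For reference, the way I register the callback looks roughly like this; this is a simplified sketch rather than my actual plugin code, and the attribute check and child-node update are placeholders:
import maya.api.OpenMaya as OpenMaya

def onTranslateChanged(msg, plug, otherPlug, clientData):
    # only react when an attribute value was actually set on the watched node
    if msg & OpenMaya.MNodeMessage.kAttributeSet:
        if plug.partialName(useLongNames=True) in ('translateX', 'translateY', 'translateZ'):
            # update the translate plugs of the child nodes here
            pass

# assumes the object to watch is currently selected
sel = OpenMaya.MGlobal.getActiveSelectionList()
node = sel.getDependNode(0)
callbackId = OpenMaya.MNodeMessage.addAttributeChangedCallback(node, onTranslateChanged)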
I tried your example and used setFloat() on the plug, which appears to work fine:
import maya.api.OpenMaya as OpenMaya

def setPlugFloatValue(node, plugName, val):
    fnSet = OpenMaya.MFnDependencyNode(node)
    plug = fnSet.findPlug(plugName, True)
    plug.setFloat(val)

def applyToSelectedObjects():
    sl_list = OpenMaya.MGlobal.getActiveSelectionList()
    iterator = OpenMaya.MItSelectionList(sl_list)
    while not iterator.isDone():
        obj = iterator.getDependNode()
        setPlugFloatValue(obj, "translateX", -2.0)
        iterator.next()

applyToSelectedObjects()
Perhaps your issue is something else? You can also try to use setMDistance() instead, but it didn't make any difference in my testing.
distance = OpenMaya.MDistance(val)
plug.setMDistance(distance)
I am trying to write a testing program for a Python program that takes data, does calculations on it, and then puts the output in a class instance object. This object contains several other objects, each with their own attributes. I'm trying to access all the attributes and sub-attributes dynamically with a one-size-fits-all solution, driven by a dictionary I wrote that I cycle through to get all those attributes for printing to a test output file.
Edit: this may not be clear from the above, but I have a list of the attributes I want, so actually fetching those attributes is not the problem (I'm aware Python has methods that accomplish this). What I need is to be able to get all of those attributes with the same function call, regardless of whether they are top-level object attributes or attributes of object attributes.
Python is having some trouble with this. First I tried doing something like this:
for string in attr_dictionary:
    ...
    outputFile.print(outputclass.string)
    ...
But Python did not like this and returned an AttributeError.
After checking SE, I learned that the suggested solution is getattr():
for string in attr_dictionary:
    ...
    outputFile.print(getattr(outputclass, string))
    ...
The only problem is that I want to dynamically access the attributes of objects that are themselves attributes of outputclass. So ideally it would be something like outputclass.objectAttribute.attribute, but this does not work in Python. When I use getattr(outputclass, objectAttribute.string), Python returns an AttributeError.
Any good solution here?
One thing I have thought of trying is creating methods to return those sub-attributes, something like:
class outputObject:
    ...
    def attributeIWant(self, ...):
        return self.subObject.attributeIWant
    ...
Even then, it seems like getattr() would return an error, because attributeIWant() is a function call rather than an actual attribute. I'm not certain Python is even capable of making this happen.
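For illustration, here is a small sketch (with made-up class names) showing that exposing the nested value as a property makes it readable through a single getattr() call, since a property is accessed like a plain attribute:
class SubObject:
    attributeIWant = 42

class OutputObject:
    def __init__(self):
        self.subObject = SubObject()

    @property
    def attributeIWant(self):
        # a property is read like an attribute, no explicit call needed
        return self.subObject.attributeIWant

obj = OutputObject()
print(getattr(obj, 'attributeIWant'))  # prints 42, not a bound method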
Thank you in advance for reading and/or responding; if anyone is familiar with a way to do this, it would save me a bunch of refactoring or additional code.
Edit: Additional Clarification
The class, for example, is outputData, and inside that class you could have an instance of the class furtherData, which has the attribute dataIWant:
class outputData:
    example: furtherData

    example = furtherData()
    example.dataIWant = someData
    ...
With Python's getattr I can't access both attributes directly on outputData and attributes of example in a single call; the attribute of example needs two calls to getattr.
Edit2: I have found a solution I think works for this, see below
I was able to figure this out - I just wrote a quick function that splits the attribute string (for example outputObj.subObj.propertyIWant) then proceeds down the resultant array, calling getattr on each subobject until it reaches the end of the array and returns the actual attribute.
Code:
def obtainAttribute(sample, attributeString: str):
    baseObj = sample
    attrArray = attributeString.split(".")
    for string in attrArray:
        if attrArray.index(string) == (len(attrArray) - 1):
            return getattr(baseObj, string)
        else:
            baseObj = getattr(baseObj, string)
    return "failed"
sample is the object and attributeString is, for example, "object.subObject.attributeYouWant".
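A quick check of the helper with throwaway classes (the names here are made up for the example):
class Inner:
    dataIWant = 3.14

class Outer:
    direct = "top level"
    example = Inner()

sample = Outer()
print(obtainAttribute(sample, "direct"))             # top level
print(obtainAttribute(sample, "example.dataIWant"))  # 3.14
The same walk can also be written more compactly as functools.reduce(getattr, attributeString.split("."), sample), which applies getattr once per path segment.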
I am writing a program in Python that communicates with a spectrometer from Avantes. There are some proprietary DLLs available whose code I don't have access to, but they have some decent documentation. I am having some trouble finding a good way to store the data received via callbacks.
The proprietary shared library
Basically, the DLL contains a function that I have to call to start measuring; it receives a callback function that will be called whenever the spectrometer has finished a measurement. The function is the following:
int AVS_MeasureCallback(AvsHandle a_hDevice, void (*__Done)(AvsHandle*, int*), short a_Nmsr)
The first argument is a handle object that identifies the spectrometer, the second is the actual callback function and the third is the amount of measurements to be made.
The callback function will then receive another type of handle identifying the spectrometer and information about the amount of data available after a measurement.
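For reference, the prototype above could be declared directly with ctypes roughly as follows; the DLL file name, the underlying handle type, and the calling convention are assumptions for illustration and are not taken from the Avantes documentation:
import ctypes
from ctypes import POINTER, c_int32, c_short

AvsHandle = c_int32  # assumed: the handle behaves like a 32-bit integer

# void (*__Done)(AvsHandle*, int*)
# use ctypes.CFUNCTYPE instead if the DLL uses the cdecl calling convention
MeasureCallbackType = ctypes.WINFUNCTYPE(None, POINTER(AvsHandle), POINTER(c_int32))

avs = ctypes.WinDLL("avaspec.dll")  # hypothetical DLL file name
avs.AVS_MeasureCallback.argtypes = [AvsHandle, MeasureCallbackType, c_short]
avs.AVS_MeasureCallback.restype = c_int32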
Python library
I am using a library that has Python wrappers for many instruments, including my spectrometer.
def measure_callback(self, num_measurements, callback=None):
    self.sdk.AVS_MeasureCallback(self._handle, callback, num_measurements)
They have also defined the following decorator:
MeasureCallback = FUNCTYPE(None, POINTER(c_int32), POINTER(c_int32))
The idea is that when the callback function is finally called, this will trigger the get_data() function that will retrieve data from the equipment.
The recommended example is:
@MeasureCallback
def callback_fcn(handle, info):
    print('The DLL handle is:', handle.contents.value)
    if info.contents.value == 0:  # equals 0 if everything is okay (see manual)
        print(' callback data:', ava.get_data())

ava.measure_callback(-1, callback_fcn)
My problem
I have to store the received data in a 2D numpy array that I have created somewhere else in my main code, but I can't figure out what is the best way to update this array with the new data available inside the callback function.
I wondered if I could pass this numpy array as an argument to the callback function, but even then I cannot find a good way to do it, since the callback function is expected to take only those two arguments.
Edit 1
I found a possible solution here but I am not sure it is the best way to do it. I'd rather not create a new class just to hold a single numpy array inside.
Edit 2
I actually changed my mind about my approach, because inside my callback I'd like to perform many operations on the received data and save the results in many different variables. So I went back to the class approach mentioned here, where I would basically have a class holding all the variables used in the callback function, and which would also inherit from or hold an object of the ava class.
However, as shown in this other question, the self parameter is a problem in this case.
If you don't want to create a new class, you can use a function closure:
# Initialize it however you want
numpy_array = ...

def callback_fcn(handle, info):
    # Do what you want with the value of the variable
    store_data(numpy_array, ...)

# After the callback is called, you can access the changes made to the object
print(get_data(numpy_array))
How this works is that when callback_fcn is defined, it keeps a reference to the enclosing numpy_array variable, so when the callback is called it can manipulate the array as if it had been passed as an argument. You get the effect of passing it in without the callback caller having to know about it.
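As a minimal standalone illustration of that behaviour (independent of the spectrometer code), the callback below mutates an array it closes over:
import numpy as np

buffer = np.zeros(4)           # created once in the enclosing scope

def callback_fcn(handle, info):
    # the closure sees the same array object, so in-place writes persist
    buffer[0] = 42.0

callback_fcn(None, None)
print(buffer)                  # [42.  0.  0.  0.]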
I finally managed to solve my problem with a solution involving a new class and also a closure function to deal with the self parameter, as described here. Besides that, another problem arose from garbage collection of the newly created method.
My final solution is:
class spectrometer():
    def measurement_callback(self, handle, info):
        if info.contents.value >= 0:
            timestamp, spectrum = self.ava.get_data()
            self.spectral_data[self.spectrum_index, :] = np.ctypeslib.as_array(spectrum[0:self.pixel_amount])
            self.timestamps[self.spectrum_index] = timestamp
            self.spectrum_index += 1

    def __init__(self, ava):
        self.ava = ava
        # wrap the bound method once and keep a reference on the instance so it
        # is not garbage collected while the DLL still holds the callback
        self.measurement_callback = MeasureCallback(self.measurement_callback)

    def register_callback(self, scans, pattern_amount, pixel_amount):
        self.spectrum_index = 0
        self.pixel_amount = pixel_amount  # stored so the callback can slice the returned spectrum
        self.timestamps = np.empty((pattern_amount), dtype=np.uint32)
        self.spectral_data = np.empty((pattern_amount, pixel_amount), dtype=np.float64)
        self.ava.measure_callback(scans, self.measurement_callback)
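Wiring it up then looks roughly like this; the scan count and array sizes are placeholders, and ava is the wrapped Avantes device from the library above:
spec = spectrometer(ava)
spec.register_callback(scans=-1, pattern_amount=100, pixel_amount=2048)
# ... once the measurements have been delivered, the results live in
# spec.spectral_data and spec.timestamps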
I am using Office 2007.
I found that if I want to show the legend overlapping the chart in Office 2007, the XML should be as follows:
<c:legend>
  <c:overlay val="1"/>
But no matter whether I use the python-pptx API (chart.legend.include_in_layout = True) or leave it at the default, the generated XML is always the following:
<c:legend>
  <c:overlay/>
Without val="1", Office 2007 won't show the legend properly.
What can I do to force python-pptx to write val="1"? Thanks.
Explanation
In short, the True value is not explicitly set (in contrast to False) because True corresponds to the default value of overlay's val attribute.
To explain it in more detail - you can follow the python-pptx hierarchy as follows: overlay is mapped to CT_Boolean (all overlay oxml elements are instantiated from CT_Boolean). The actual val parameter is then mapped via OptionalAttribute and is defined with the default value of True:
class CT_Boolean(BaseOxmlElement):
    """
    Common complex type used for elements having a True/False value.
    """
    val = OptionalAttribute('val', XsdBoolean, default=True)
Now, when you set the optional attribute to its default value, it is actually skipped/deleted, as you can see in the if value == self._default branch here:
class OptionalAttribute(BaseAttribute):
    """
    Defines an optional attribute on a custom element class. An optional
    attribute returns a default value when not present for reading. When
    assigned |None|, the attribute is removed.
    """
    @property
    def _setter(self):
        def set_attr_value(obj, value):
            if value == self._default:
                if self._clark_name in obj.attrib:
                    del obj.attrib[self._clark_name]
                return
            str_value = self._simple_type.to_xml(value)
            obj.set(self._clark_name, str_value)
        return set_attr_value
Fix - provide custom CT_Boolean class
Add these lines somewhere before you need to use the overlay. They overwrite the python-pptx overlay mapping with the custom CT_Boolean_NoDefault class:
from pptx.oxml import register_element_cls
from pptx.oxml.xmlchemy import BaseOxmlElement, OptionalAttribute
from pptx.oxml.simpletypes import XsdBoolean

class CT_Boolean_NoDefault(BaseOxmlElement):
    """
    Common complex type used for elements having a True/False value with no
    default value.
    """
    val = OptionalAttribute('val', XsdBoolean)

register_element_cls('c:overlay', CT_Boolean_NoDefault)
This worked for me and finally I got:
<c:legend>
  <c:overlay val="1"/>
</c:legend>
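For illustration, a minimal sketch of how this fits into a script; the file name and shape index are placeholders and assume the first shape on the first slide is a chart:
from pptx import Presentation

# run the register_element_cls() snippet above before touching any charts
prs = Presentation('chart_template.pptx')
chart = prs.slides[0].shapes[0].chart
chart.has_legend = True
chart.legend.include_in_layout = True  # now serialized as <c:overlay val="1"/>
prs.save('output.pptx')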
Fix - modify python-pptx permanently
This is not recommended, but you might want to modify python-pptx itself instead of adding the snippet above to every script you run.
First, add the following to pptx/oxml/chart/shared.py which defines a new bool class without a default value:
class CT_Boolean_NoDefault(BaseOxmlElement):
    """
    Common complex type used for elements having a True/False value.
    """
    val = OptionalAttribute('val', XsdBoolean)
Second, modify pptx/oxml/__init__.py to add the new bool class:
from .chart.shared import (
    CT_Boolean, CT_Double, CT_Layout, CT_LayoutMode, CT_ManualLayout,
    CT_NumFmt, CT_Tx, CT_UnsignedInt, CT_Boolean_NoDefault
)
Third, modify pptx/oxml/__init__.py to change the mapping of the overlay element to the new bool class:
register_element_cls('c:overlay', CT_Boolean_NoDefault)
Better solution
In case you have time, please submit a ticket here so this might become a permanent fix. If @scanny finds some time, he will read this. Perhaps there is a better solution for this, too, and I've completely missed something.
@pansen's analysis is spot-on. Here's an alternative way to get this working in your case that might be a little lighter weight:
def include_in_layout(legend):
    legend_element = legend._element
    overlay = legend_element.get_or_add_overlay()
    overlay.set('val', '1')
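Hypothetical usage, assuming chart is a python-pptx Chart object:
chart.has_legend = True
include_in_layout(chart.legend)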
This appears to be a localized non-conformance of that version of PowerPoint with the ISO/IEC 29500 spec. As pansen rightly points out, a missing val attribute is to be interpreted the same as val=1 (True). I'd be interested to discover how extensive this non-conformance goes, i.e. what other elements exhibit this same behavior. The CT_Boolean type is used quite frequently in PowerPoint, for things like bold, italic, varyColors, smooth, and on and on. So a "compensating" fix would need to be applied carefully to avoid reporting incorrect results for other elements.
I think I'll take pansen's cue and use a specialized element class for this element only. It will still report True for an element without the val attribute, which will be inconsistent with the observed behavior on this version of PowerPoint; but assuming other versions behave correctly (according to the spec), the inconsistency will be localized and at least assigning True to that property will make the legend show up the way you want.
I'm working on a PySide-based app in which I continuously receive values and want to display them in the GUI.
When I receive a value (via a CAN device, using the PCANBasic library), I convert it to an int and emit it via the .emit() method of PySide.QtCore.Signal:
Signal = PySide.QtCore.Signal(int)
# as soon as a new value is received and processed
Signal.emit(new_value)
Now I try to display my new_value on a PySide.QtGui.QSlider; that's what I do at the moment:
my_slider = PySide.QtGui.QSlider()
Signal.connect(change_slider_value)

# with a simple helper function
def change_slider_value(value):
    my_slider.setValue(value)
What I want to do is:
Signal.connect(lambda value = data : my_slider.setValue(value))
With data being what I emitted with Signal (I'd love to somehow mark it, but the formatting disappeared on me and it's my first post -.-)
When I test this I get the following Traceback:
self.calibrate.bar_val_signal.connect(lambda value = data: self.UI.calibrate.ctrl.Bar.setValue(value)) # self.change_bar_value)
NameError: global name 'data' is not defined
(You see the program is probably somewhat more complicated)
Translated to our pseudo code it would probably look like this:
Signal.connect(lambda value = data: my_slider.setValue(value))
NameError: global name 'data' is not defined
In my opinion the issue is that the lambda function can't get the value out of the signal.
Does anybody have an idea whether this is possible without the need for a helper function?
Thanks in advance
You don't need to use a lambda. Since your change_slider_value function only takes the argument that your signal would emit, you can just connect the signal to that.
Signal.connect(change_slider_value)
But as for why your lambda wasn't working: think of data as the parameter of a function. data will contain whatever the lambda is called with, so you could just do this, omitting value:
Signal.connect(lambda data: my_slider.setValue(data))
But I would suggest using the first solution, unless your parameters for change_slider_value change.
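To make the wiring concrete, here is a minimal self-contained sketch; the class and variable names are made up, and note that in PySide a Signal has to be declared as a class attribute of a QObject subclass:
from PySide import QtCore, QtGui

class ValueEmitter(QtCore.QObject):
    value_received = QtCore.Signal(int)

app = QtGui.QApplication([])
emitter = ValueEmitter()
my_slider = QtGui.QSlider()

# direct connection: the slot takes exactly the int the signal emits
emitter.value_received.connect(my_slider.setValue)

# equivalent lambda form: the parameter receives the emitted value
emitter.value_received.connect(lambda value: my_slider.setValue(value))

emitter.value_received.emit(42)
print(my_slider.value())  # 42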
I have been doing a lot of searching, and I don't think I've really found what I have been looking for. I will try my best to explain what I am trying to do, and hopefully there is a simple solution, and I'll be glad to have learned something new.
This is ultimately what I am trying to accomplish: using nosetests, decorate some test cases with the attribute selector plugin, then execute the test cases that match a criterion by using the -a switch at command-line invocation. The attribute values for the tests that are executed are then stored in an external location. The command-line call I'm using looks like this:
nosetests \testpath\ -a attribute='someValue'
I have also created a customized nosetests plugin, which stores the test cases' attributes and writes them to an external location. The idea is that I can select a batch of tests, and by storing the attributes of these tests, I can filter these results later for reporting purposes. I am accessing the method attributes in my plugin by overriding the "wantMethod" method with code similar to the following:
def set_attribs(self, method, attribute):
    if hasattr(method, attribute):
        if not self.method_attributes.has_key(method.__name__):
            self.method_attributes[method.__name__] = {}
        self.method_attributes[method.__name__][attribute] = getattr(method, attribute)

def wantMethod(self, method):
    self.set_attribs(method, "attribute1")
    self.set_attribs(method, "attribute2")
    pass
I have this working for pretty much all the tests, except for one case where the test is using the "yield" keyword. What is happening is that the generated methods are executed fine, but the method attributes are empty for each of the generated functions.
Below is an example of what I am trying to achieve. The test below retrieves a list of values and, for each of those values, yields the results from another function:
@attr(attribute1='someValue', attribute2='anotherValue')
def sample_test_generator(self):
    for (key, value) in _input_dictionary.items():
        f = partial(self._do_test, key, value)
        f.attribute1 = 'someValue'
        yield (lambda x: f(), key)

def _do_test(self, input1, input2):
    # Some code
From what I have read, and think I understand, when yield is called it creates a new callable which then gets executed. I have been trying to figure out how to retain the attribute values from my sample_test_generator method, but I have not been successful. I thought I could create a partial method and then add the attribute to it, but no luck. The tests execute without any errors; it just seems that, from my plugin's perspective, the method attributes aren't present, so they don't get recorded.
I realize this a pretty involved question, but I wanted to make sure that the context for what I am trying to achieve is clear. I have been trying to find information that could help me for this particular case, but I feel like I've reached a stumbling block now, so I would really like to ask the experts for some advice.
Thanks.
** Update **
After reading through the feedback and playing around some more, it looks like if I modified the lambda expression, it would achieve what I am looking for. In fact, I didn't even need to create the partial function:
def sample_test_generator(self):
    for (key, value) in _input_dictionary.items():
        yield (lambda: self._do_test)
The only downside to this approach is that the test name will not change. As I am playing around more, it looks like in nosetests, when a test generator is used, it would actually change the test name in the result based on the keywords it contains. Same thing was happening when I was using the lambda expression with a parameter.
For example:
Using a lambda expression with a parameter:
yield (lambda x: self._do_test, "value1")
In the nosetests plugin, when you access the test case name, it is displayed as "sample_test_generator(value1)".
Using a lambda expression without a parameter:
yield (lambda: self._do_test)
The test case name in this case would be "sample_test_generator". In my example above, if there are multiple values in the dictionary, the yield call occurs multiple times; however, the test name always remains "sample_test_generator". That is better than getting unique test names but not being able to store the attribute values at all. I will keep playing around, but thanks for the feedback so far!
EDIT
I forgot to come back and provide my final update on how I was able to get this to work in the end. There was a little confusion on my part at first, and after I looked through it some more, I figured out that it had to do with how the tests are recognized:
My original implementation assumed that every test that gets picked up for execution goes through the "wantMethod" call from the plugin's base class. This is not true when "yield" is used to generate the test, because at this point, the test method has already passed the "wantMethod" call.
However, once the test case is generated through the "yield" call, it does go through the "startTest" call from the plugin base class, and this is where I was finally able to store the attribute successfully.
So, in a nutshell, my test execution order looked like this:
nose -> wantMethod(method_name) -> yield -> startTest(yielded_test_name)
In my override of the startTest method, I have the following:
def startTest(self, test):
    # If a test is spawned by using the 'yield' keyword, its name is the parent test name
    # followed by a '(' character.
    # Example: if the parent test is "smoke_test", the generated test would be "smoke_test('input')"
    test_name = test.id().split('.')[-1]  # assumed way to get the test's short name from the test object
    parent_test_name = test_name.split('(')[0]
    if self.method_attributes.has_key(test_name):
        self._test_attrib = self.method_attributes[test_name]
    elif self.method_attributes.has_key(parent_test_name):
        self._test_attrib = self.method_attributes[parent_test_name]
    else:
        self._test_attrib = None
With this implementation, along with my override of wantMethod, each test spawned by the parent test case also inherits attributes from the parent method, which is what I needed.
Again, thanks to all who sent replies. This was quite a learning experience.
Would this fix your name issue?
def _actual_test(x, y):
    assert x == y

def test_yield():
    _actual_test.description = "test_yield_%s_%s" % (5, 5)
    yield _actual_test, 5, 5
    _actual_test.description = "test_yield_%s_%s" % (4, 8)  # fail
    yield _actual_test, 4, 8
    _actual_test.description = "test_yield_%s_%s" % (2, 2)
    yield _actual_test, 2, 2
The rename survives @attr too.
Does this work?
@attr(attribute1='someValue', attribute2='anotherValue')
def sample_test_generator(self):
    def get_f(f, key):
        return lambda x: f(), key

    for (key, value) in _input_dictionary.items():
        f = partial(self._do_test, key, value)
        f.attribute1 = 'someValue'
        yield get_f(f, key)

def _do_test(self, input1, input2):
    # Some code
The problem is that the local variables change after you create the lambda.
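A minimal illustration of that late-binding behaviour, independent of nose:
# every lambda closes over the same variable i, so all of them
# see its final value once the loop has finished
fns = [lambda: i for i in range(3)]
print([f() for f in fns])   # [2, 2, 2]

# a factory function (like get_f above) binds the current value instead
def make_fn(i):
    return lambda: i

fns = [make_fn(i) for i in range(3)]
print([f() for f in fns])   # [0, 1, 2]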