I have a selection that can reasonably contain almost any node type. In Python I need to filter out everything except the group nodes. The problem is that Maya reads group nodes as plain transform nodes, so it's proving difficult to separate them from all of the other transform nodes in the scene. Is there a way to do this? Possibly in the API?
Thanks!
As you alluded to, "group" nodes really are just transform nodes, with no real distinction.
The clearest distinction I can think of, however, is that a group's children must consist entirely of other transform nodes. Parent a shape node under a "group" and it will no longer be considered a "group".
First, your selection of transform nodes. I assume you already have something along these lines:
import pymel.core

selection = pymel.core.ls(selection=True, transforms=True)
Next, a function to check if a given transform is itself a "group".
Iterate over all the children of a given node, returning False if any of them aren't transform. Otherwise return True.
def is_group(node):
    children = node.getChildren()
    for child in children:
        if type(child) is not pymel.core.nodetypes.Transform:
            return False
    return True
Now you just need to filter the selection, in one of the following two ways, depending on which style you find most clear:
selection = filter(is_group, selection)
or
selection = [node for node in selection if is_group(node)]
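If you're on a Python 3 based Maya (2022 and later), note that filter returns a lazy iterator rather than a list, so wrap it if you need a list:

# In Python 3, filter() is lazy; materialize it when a list is needed
selection = list(filter(is_group, selection))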
mhlester's answer will return True for joints, since they also fit that definition.
It also doesn't account for empty groups.
import maya.cmds as mc

def isGroup(node):
    if mc.objectType(node, isType='joint'):
        return False
    kids = mc.listRelatives(node, c=1)
    if kids:
        for kid in kids:
            if not mc.objectType(kid, isType='transform'):
                return False
    return True

print(isGroup(mc.ls(sl=1)[0]))
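To reduce an entire selection to just the groups with this check, a small sketch (reusing mc and isGroup from above):

# keep only the selected nodes that pass the group test
groups = [node for node in mc.ls(sl=1) if isGroup(node)]
print(groups)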
I know this is old, but the method described here will not work properly when using maya.cmds commands only. Here is my solution:
import maya.cmds as cmds

def is_group(groupName):
    try:
        # listRelatives returns None for an empty group, so iterating it below
        # raises and falls through to the except branch
        children = cmds.listRelatives(groupName, children=True)
        for child in children:
            if not cmds.ls(child, transforms=True):
                return False
        return True
    except:
        return False

for item in cmds.ls():
    if is_group(item):
        print(item)
I am trying to create an if condition that checks the existence of a certain string and its multiple forms in different lists. The condition currently looks like this.
if ("example_string" in node and "example_string" in node_names) or\
("string_similar_to_example_string" in node and
"string_similar_to_example_string" in node_names):
return response
Here node is a string that matches the example string exactly, and node_names is a list of strings containing strings that match the example string. This logic works right now, but I wanted to know if there is a better way to write it that is more readable and clear.
As you said, your logic is working here.
So, a small function might help with the code readability.
# assuming node and node_names are global here, otherwise you can pass them as well.
def foo(bar):
    """foo function description"""
    if bar in node and bar in node_names:
        return True

if foo("example_string") or foo("string_similar_to_example_string"):
    return response
I like reducing functions like all or any. For instance, I think you could do it with any and really just reorganise the sentence.
E.g.,
does_match = False
for string_to_lookup in ["example_string", "string_similar"]:
    does_match = does_match or (string_to_lookup in node and string_to_lookup in node_names)
return response if does_match else None
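As a sketch of the any version the prose above alludes to, the whole loop collapses into a single expression:

# any() stops at the first matching string
does_match = any(
    s in node and s in node_names
    for s in ["example_string", "string_similar"]
)
return response if does_match else None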
Before I go, I want to point out something: I think your condition for matching the list of strings is wrong; at least, it is different from the match against node.
Suppose you have your "example_string", and node is "example_string_yay" and node_names is ["example_string_1", "another_string_bla", "example_what"]. If you do:
>>> res = "example_string" in node and "example_string" in node_names
>>> print(res)
False
but if you do:
>>> res = "example_string" in node and any("example_string" in n for n in node_names)
>>> print(res)
True
I have a special case. The XML file structure is something like this:
<Root>
    <parent1>
        <parent2>
            <element id="Something" />
        </parent2>
    </parent1>
    <parent1>
        <element id="Something" />
    </parent1>
</Root>
My use case is to remove the duplicated elements: I want to remove the elements with the same id. I tried the following code with no positive outcome (it's not finding the duplicate node):
import xml.etree.ElementTree as ET

path = 'old.xml'
tree = ET.parse(path)
root = tree.getroot()
prev = None

def elements_equal(e1, e2):
    if type(e1) != type(e2):
        return False
    if e1.tag != e2.tag: return False
    if e1.text != e2.text: return False
    if e1.tail != e2.tail: return False
    if e1.attrib != e2.attrib: return False
    if len(e1) != len(e2): return False
    return all([elements_equal(c1, c2) for c1, c2 in zip(e1, e2)])

for page in root:  # iterate over pages
    elems_to_remove = []
    for elem in page:
        for insideelem in page:
            if elements_equal(elem, insideelem) and elem != insideelem:
                print("found duplicate: %s" % insideelem.text)  # equal function works well
                elems_to_remove.append(insideelem)
                continue
    for elem_to_remove in elems_to_remove:
        page.remove(elem_to_remove)
# [...]
tree.write("out.xml")
Can someone help me and let me know how I can solve this? I am very new to Python, with almost zero experience.
First of all, what you're doing is a hard problem with the library you're using; see this question: How to remove a node inside an iterator in Python xml.etree.ElementTree
The solution to this would be to use lxml, which "implements the same API but with additional enhancements". Then you can do the following fix.
You seem to be traversing only the second level of nodes in your XML tree. You're getting root, then walking the children of its children. This would get you parent2 from the first page and the element from your second page. Furthermore, you wouldn't be comparing across pages here:
your comparison will only find second-level duplicates within the same page.
Select the right set of elements using a proper traversal function such as iter:
# Use a `set` to keep track of "visited" elements with good lookup time.
visited = set()
# The iter method does a recursive traversal
for el in root.iter('element'):
    # Since the id is what defines a duplicate for you
    if 'id' in el.attrib:
        current = el.get('id')
        # In visited already means it's a duplicate, remove it
        if current in visited:
            el.getparent().remove(el)
        # Otherwise mark this ID as "visited"
        else:
            visited.add(current)
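Wired together end to end, a minimal sketch of the lxml version (assuming the same old.xml input and out.xml output as in the question) might look like this:

from lxml import etree

tree = etree.parse('old.xml')
root = tree.getroot()

visited = set()
for el in root.iter('element'):
    if 'id' in el.attrib:
        current = el.get('id')
        if current in visited:
            # getparent() is an lxml extension; plain ElementTree elements don't have it
            el.getparent().remove(el)
        else:
            visited.add(current)

tree.write('out.xml')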
I have overcome the problem of avoiding the creation of duplicate nodes on my DB with the use of the merge_one function, which works like this:
t=graph.merge_one("User","ID","someID")
which creates the node with a unique ID. My problem is that I can't find a way to add multiple attributes/properties to my node along with the ID, which is added automatically (a date, for example).
I have managed to achieve this the old "duplicate" way, but it doesn't work now since merge_one can't accept more arguments. Any ideas?
Graph.merge_one only allows you to specify one key-value pair because it's meant to be used with a uniqueness constraint on a node label and property. Is there anything wrong with finding the node by its unique id with merge_one and then setting the properties?
t = graph.merge_one("User", "ID", "someID")
t['name'] = 'Nicole'
t['age'] = 23
t.push()
I know I am a bit late... but still useful I think
Using py2neo==2.0.7 and the docs (about Node.properties):
... and the latter is an instance of PropertySet which extends dict.
So the following worked for me:
m = graph.merge_one("Model", "mid", MID_SR)
m.properties.update({
    'vendor': "XX",
    'model': "XYZ",
    'software': "OS",
    'modelVersion': "",
    'hardware': "",
    'softwareVesion': "12.06"
})
graph.push(m)
This hacky function will iterate through the properties, values, and labels, gradually eliminating all nodes that don't match each criterion submitted. The final result will be a list of all (if any) nodes that match all the properties and labels supplied.
def find_multiProp(graph, *labels, **properties):
    results = None
    for l in labels:
        for k, v in properties.iteritems():
            if results == None:
                genNodes = lambda l, k, v: graph.find(l, property_key=k, property_value=v)
                results = [r for r in genNodes(l, k, v)]
                continue
            prevResults = results
            results = [n for n in genNodes(l, k, v) if n in prevResults]
    return results
The final result can be used to assess uniqueness and (if empty) create a new node, by combining the two functions together...
def merge_one_multiProp(graph, *labels, **properties):
    r = find_multiProp(graph, *labels, **properties)
    if not r:
        # remove tuple association
        node, = graph.create(Node(*labels, **properties))
    else:
        node = r[0]
    return node
example...
from py2neo import Node, Graph
graph = Graph()
properties = {'p1':'v1', 'p2':'v2'}
labels = ('label1', 'label2')
graph.create(Node(*labels, **properties))
for l in labels:
graph.create(Node(l, **properties))
graph.create(Node(*labels, p1='v1'))
node = merge_one_multiProp(graph, *labels, **properties)
I've found related methods:
find - doesn't work because this version of neo4j doesn't support labels.
match - doesn't work because I cannot specify a relation, because the node has no relations yet.
match_one - same as match.
node - doesn't work because I don't know the id of the node.
I need an equivalent of:
start n = node(*) where n.name? = "wvxvw" return n;
Cypher query. Seems like it should be basic, but it really isn't...
PS. I'm opposed to using Cypher for too many reasons to mention. So that's not an option either.
Well, you should create indexes so that your start nodes are reduced. This will be taken care of automatically with the use of labels, but in the meantime there is a workaround.
Create an index, say "label", which will have keys pointing to the different types of nodes you will have (in your case, say 'Person').
Now while searching you can write the following query :
START n = node:label(key_name='Person') WHERE n.name = 'wvxvw' RETURN n; //key_name is the key's name you will assign while creating the node.
user797257 seems to be out of the game, but I think this could still be useful:
If you want to get nodes, you need to create an index. An index in Neo4j is the same as in MySQL or any other database (if I understand correctly). Labels are basically auto-indexes, but an index offers additional speed. (I use both.)
Somewhere near the top, or in Neo4j itself, create an index:
index = graph_db.get_or_create_index(neo4j.Node, "index_name")
Then, create your node as usual, but do add it to the index:
new_node = batch.create(node({"key":"value"}))
batch.add_indexed_node(index, "key", "value", new_node)
Now, if you need to find your new_node, execute this:
new_node_ref = index.get("key", "value")
This returns a list. new_node_ref[0] has the top item, in case you want/expect a single node.
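As a small usage sketch, guarding against an empty result with the same index and key/value as above:

matches = index.get("key", "value")
if matches:
    new_node_ref = matches[0]  # the top item, when a single node is expected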
Use a selector to obtain the node from the graph.
The following code fetches the first node from the list of nodes matching the search:
from py2neo import NodeSelector  # available in py2neo v3

selector = NodeSelector(graph)
node = selector.select("Label", key='value')
nodelist = list(node)
m_node = node.first()
Using py2neo, this hacky function will iterate through the properties, values, and labels, gradually eliminating all nodes that don't match each criterion submitted. The final result will be a list of all (if any) nodes that match all the properties and labels supplied.
def find_multiProp(graph, *labels, **properties):
    results = None
    for l in labels:
        for k, v in properties.iteritems():
            if results == None:
                genNodes = lambda l, k, v: graph.find(l, property_key=k, property_value=v)
                results = [r for r in genNodes(l, k, v)]
                continue
            prevResults = results
            results = [n for n in genNodes(l, k, v) if n in prevResults]
    return results
see my other answer for creating a merge_one() that will accept multiple properties...
I have an issue trying to define keyword arguments.
I'm trying to define a function that returns all the objects matching *_control in the scene when nothing is specified, but I'd like to be able to choose whether it returns the 'left' or 'right' ones.
Below you can find my function. I don't understand where the error is.
from maya import cmds

def correct_value(selection=None, **keywords_arguments):
    if selection is None:
        selection = cmds.ls('*_control')
    if not isinstance(selection, list):
        selection = [selection]
    for each in keywords_arguments:
        keywords_list = []
        if each.startswith('right', 'left'):
            selection.append(each)
    return selection

correct_value()
Keyword arguments are collected into a dictionary. You can print them, or you could have verified the type with the type() function. That lets you experiment with the dictionary in isolation and work out how to solve the problem yourself.
Now, when you have a dictionary x = {1: 2}, iterating over it with for will give you just 1, i.e. it will only iterate over the keys(!), not the corresponding values. For that, use for key, value in dictionary.items() and then use the value if key in ('right', 'left').
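A minimal sketch of that .items() approach (the left/right flag handling below is my reading of the intended behaviour, not code from the original post):

from maya import cmds

def correct_value(selection=None, **keywords_arguments):
    if selection is None:
        selection = cmds.ls('*_control')
    if not isinstance(selection, list):
        selection = [selection]
    result = []
    # .items() yields (key, value) pairs, so both the flag name and its value are usable
    for key, value in keywords_arguments.items():
        if key in ('right', 'left') and value:
            result.extend(ctrl for ctrl in selection if ctrl.startswith(key))
    return result or selection

correct_value(left=True)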
The code you have would add 'right' or 'left' on to the end of the list.
I think you want something like this:
def find_controls(*selection, **kwargs):  # with *args you can pass one item, several items, or a list
    selection = selection or cmds.ls("*_control") or []  # supplied objects, or the ls command, or an empty list
    if not kwargs:
        return list(selection)  # no flags? return the whole list
    lefty = lambda ctrl: ctrl.lower().startswith("left")  # this will filter for items with left
    righty = lambda ctrl: ctrl.lower().startswith("right")  # this will filter for items with right
    filters = []
    if kwargs.get('left'):  # safe way to ask 'is this key here and does it have a true value?'
        filters.append(lefty)
    if kwargs.get('right'):
        filters.append(righty)
    result = []
    for each_filter in filters:
        result += filter(each_filter, selection)
    return result
find_controls (left=True, right=True)
# Result: [u'left_control', u'right_control'] #
find_controls (left=True, right =False) # or just left=True
# Result: [u'left_control'] #
find_controls()
# Result: [u'left_control', u'middle_control', u'right_control'] #
The trick here is to use the lambdas (which are basically just functions in a shorter format) and the built-in filter function (which applies a function to everything in a list and returns the items for which the function gives a non-zero, non-false answer). It's easy to see how you could extend it just by adding more keywords and corresponding lambdas.
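As one way to extend it, a small sketch where flag names map to filter lambdas, so adding a keyword is just adding one dictionary entry (the 'middle' flag and its naming convention are an assumed illustration, not part of the original scene):

from maya import cmds

def find_controls(*selection, **kwargs):
    selection = selection or cmds.ls("*_control") or []
    if not kwargs:
        return list(selection)
    # one lambda per supported flag; add an entry here to support a new keyword
    known_filters = {
        'left': lambda ctrl: ctrl.lower().startswith("left"),
        'right': lambda ctrl: ctrl.lower().startswith("right"),
        'middle': lambda ctrl: ctrl.lower().startswith("middle"),  # hypothetical extra flag
    }
    result = []
    for flag, flag_filter in known_filters.items():
        if kwargs.get(flag):
            result += filter(flag_filter, selection)
    return result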