I've found related methods:
find - doesn't work because this version of neo4j doesn't support labels.
match - doesn't work because I cannot specify a relation, because the node has no relations yet.
match_one - same as match.
node - doesn't work because I don't know the id of the node.
I need an equivalent of this Cypher query:
start n = node(*) where n.name? = "wvxvw" return n;
It seems like it should be basic, but it really isn't...
PS. I'm opposed to using Cypher for too many reasons to mention. So that's not an option either.
Well, you should create indexes so that your set of start nodes is reduced. This will be taken care of automatically once labels are available, but in the meantime there is a workaround.
Create an index, say "label", which will have keys pointing to the different types of nodes you will have (in your case, say 'Person')
Now while searching you can write the following query :
START n = node:label(key_name='Person') WHERE n.name = 'wvxvw' RETURN n; //key_name is the key's name you will assign while creating the node.
user797257 seems to be out of the game, but I think this could still be useful:
If you want to get nodes, you need to create an index. An index in Neo4j works much like an index in MySQL or any other database (if I understand correctly). Labels are basically auto-indexes, but an explicit index offers additional speed. (I use both.)
Somewhere at the top of your code, or in Neo4j itself, create an index:
index = graph_db.get_or_create_index(neo4j.Node, "index_name")
Then, create your node as usual, but do add it to the index:
new_node = batch.create(node({"key":"value"}))
batch.add_indexed_node(index, "key", "value", new_node)
Now, if you need to find your new_node, execute this:
new_node_ref = index.get("key", "value")
This returns a list. new_node_ref[0] has the top item, in case you want/expect a single node.
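Putting those fragments together, here is a minimal non-batch sketch of the whole flow, assuming the same legacy py2neo 1.x API used above (the server URL and index name are placeholders):

from py2neo import neo4j, node

# connect and get (or create) a legacy node index -- names here are placeholders
graph_db = neo4j.GraphDatabaseService("http://localhost:7474/db/data/")
index = graph_db.get_or_create_index(neo4j.Node, "index_name")

# create the node and index it under the property you will later search on
new_node, = graph_db.create(node(name="wvxvw"))
index.add("name", "wvxvw", new_node)

# later: fetch it back without Cypher
matches = index.get("name", "wvxvw")
if matches:
    found = matches[0]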
Use a NodeSelector to obtain the node from the graph.
The following code fetches the first node from the list of nodes matching the search:
from py2neo import NodeSelector  # py2neo v3

selector = NodeSelector(graph)   # graph is an existing py2neo Graph instance
node = selector.select("Label", key='value')
nodelist = list(node)            # all matching nodes
m_node = node.first()            # just the first match
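As a follow-up, the selection can also be narrowed by several properties at once or with a where() clause before materialising it. This is only a sketch assuming py2neo v3's NodeSelector; the extra property names and the regular expression are placeholders:

# narrow by several properties at once (placeholder keys/values)
selected = selector.select("Label", key='value', other_key='other_value')

# or filter with a Cypher-style predicate on the bound variable '_'
filtered = selector.select("Label").where("_.name =~ 'wv.*'")
first_match = filtered.first()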
Using py2neo, this hacky function will iterate through the labels and the property key/value pairs, gradually eliminating all nodes that don't match each criterion submitted. The final result is a list of all the nodes (if any) that match every property and label supplied.
def find_multiProp(graph, *labels, **properties):
    results = None
    for l in labels:
        for k, v in properties.iteritems():
            if results is None:
                genNodes = lambda l, k, v: graph.find(l, property_key=k, property_value=v)
                results = [r for r in genNodes(l, k, v)]
                continue
            prevResults = results
            results = [n for n in genNodes(l, k, v) if n in prevResults]
    return results
see my other answer for creating a merge_one() that will accept multiple properties...
Related
I have overcome the problem of avoiding the creation of duplicate nodes in my DB by using the merge_one function, which works like this:
t=graph.merge_one("User","ID","someID")
which creates the node with a unique ID. My problem is that I can't find a way to add multiple attributes/properties to my node along with the ID, which is added automatically (a date, for example).
I managed to achieve this the old "duplicate" way, but that doesn't work now, since merge_one can't accept more arguments. Any ideas?
Graph.merge_one only allows you to specify one key-value pair because it's meant to be used with a uniqueness constraint on a node label and property. Is there anything wrong with finding the node by its unique id with merge_one and then setting the properties?
t = graph.merge_one("User", "ID", "someID")
t['name'] = 'Nicole'
t['age'] = 23
t.push()
I know I am a bit late... but I think this is still useful.
Using py2neo==2.0.7 and the docs (about Node.properties):
... and the latter is an instance of PropertySet which extends dict.
So the following worked for me:
m = graph.merge_one("Model", "mid", MID_SR)
m.properties.update({
    'vendor': "XX",
    'model': "XYZ",
    'software': "OS",
    'modelVersion': "",
    'hardware': "",
    'softwareVersion': "12.06"
})
graph.push(m)
This hacky function will iterate through the labels and the property key/value pairs, gradually eliminating all nodes that don't match each criterion submitted. The final result is a list of all the nodes (if any) that match every property and label supplied.
def find_multiProp(graph, *labels, **properties):
    results = None
    for l in labels:
        for k, v in properties.iteritems():
            if results is None:
                genNodes = lambda l, k, v: graph.find(l, property_key=k, property_value=v)
                results = [r for r in genNodes(l, k, v)]
                continue
            prevResults = results
            results = [n for n in genNodes(l, k, v) if n in prevResults]
    return results
The final result can be used to assess uniqueness and (if empty) create a new node, by combining the two functions together...
def merge_one_multiProp(graph, *labels, **properties):
    r = find_multiProp(graph, *labels, **properties)
    if not r:
        # remove tuple association
        node, = graph.create(Node(*labels, **properties))
    else:
        node = r[0]
    return node
example...
from py2neo import Node, Graph
graph = Graph()
properties = {'p1':'v1', 'p2':'v2'}
labels = ('label1', 'label2')
graph.create(Node(*labels, **properties))
for l in labels:
    graph.create(Node(l, **properties))
graph.create(Node(*labels, p1='v1'))
node = merge_one_multiProp(graph, *labels, **properties)
I have a selection that can reasonably contain almost any node type. In Python I need to filter out everything except the group nodes. The problem is that group nodes are read by Maya as just transform nodes, so it's proving difficult to filter them out from all of the other transform nodes in the scene. Is there a way to do this? Possibly in the API?
Thanks!
As you alluded to, "group" nodes really are just transform nodes, with no real distinction.
The clearest distinction I can think of, however, is that a group's children must consist entirely of other transform nodes. Parent a shape node under a "group" and it will no longer be considered a "group".
First, your selection of transform nodes. I assume you already have something along these lines:
selection = pymel.core.ls(selection=True, transforms=True)
Next, a function to check whether a given transform is itself a "group".
It iterates over all the children of the given node, returning False if any of them is not a transform; otherwise it returns True.
def is_group(node):
    children = node.getChildren()
    for child in children:
        if type(child) is not pymel.core.nodetypes.Transform:
            return False
    return True
Now you just need to filter the selection, in one of the following two ways, depending on which style you find most clear:
selection = filter(is_group, selection)
or
selection = [node for node in selection if is_group(node)]
mhlester's answer will return True for joints, since they also fit that definition.
It also doesn't account for empty groups.
import maya.cmds as mc

def isGroup(node):
    if mc.objectType(node, isType='joint'):
        return False
    kids = mc.listRelatives(node, c=1)
    if kids:
        for kid in kids:
            if not mc.objectType(kid, isType='transform'):
                return False
    return True

print isGroup(mc.ls(sl=1)[0])  # check the first selected object
I know this is old, but the methods described here will not work properly when using only maya.cmds commands. Here is my solution:
import maya.cmds as cmds

def is_group(groupName):
    try:
        children = cmds.listRelatives(groupName, children=True)
        for child in children:
            if not cmds.ls(child, transforms=True):
                return False
        return True
    except:
        return False

for item in cmds.ls():
    if is_group(item):
        print(item)
    else:
        pass
Say I'm given a tuple of strings, representing relationships between objects, for example:
connections = ("dr101-mr99", "mr99-out00", "dr101-out00", "scout1-scout2","scout3-scout1", "scout1-scout4", "scout4-sscout", "sscout-super")
each dash "-" shows a relationship between the two items in the string. Then I'm given two items:
first = "scout2"
second = "scout3"
How might I go about finding whether first and second are interrelated, meaning there is a path connecting them, not just whether they appear together in the same string?
You can try concatenating the strings and using the in operator to check if it is an element of the tuple connections:
if first + "-" + second in connections:
    # ...
Edit:
You can also use the join() function:
if "-".join((first, second)) in connections:
# ...
If you plan on doing this any number of times, I'd consider frozensets...
connections_set = set(frozenset(c.split('-')) for c in connections)
Now you can do something like:
if frozenset((first, second)) in connections_set:
    ...
and you have an O(1) solution (plus the O(N) upfront investment). Note that I'm assuming the order of the pairs is irrelevant. If it's relevant, just use a tuple instead of frozenset and you're good to go.
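If the order does matter, the same O(1) lookup still works with plain tuples; here is a small sketch of that variant (not part of the original answer):

# ordered variant: "a-b" and "b-a" count as different connections
ordered_connections = set(tuple(c.split('-')) for c in connections)

if (first, second) in ordered_connections:
    print "direct ordered connection found"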
If you actually need to walk through a graph, an adjacency list implementation might be a little better.
from collections import defaultdict

adjacency_dict = defaultdict(list)
for c in connections:
    left, right = c.split('-')
    adjacency_dict[left].append(right)
    # if undirected: adjacency_dict[right].append(left)
class DFS(object):
    def __init__(self, graph):
        self.graph = graph

    def is_connected(self, node1, node2):
        self._seen = set()
        self._walk_connections(node1)
        output = node2 in self._seen
        del self._seen
        return output

    def _walk_connections(self, node):
        if node in self._seen:
            return
        self._seen.add(node)
        for subnode in self.graph[node]:
            self._walk_connections(subnode)

print DFS(adjacency_dict).is_connected(first, second)
Note that this implementation is definitely suboptimal (I don't stop when I find the node I'm looking for, for example), and I don't check for an optimal path from node1 to node2. For that, you'd want something like Dijkstra's algorithm.
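Since the edges here are unweighted, a plain breadth-first search already returns a shortest path, so full Dijkstra isn't strictly required. Here is a rough sketch building on the adjacency_dict above (only a sketch, and it assumes the undirected line was enabled when building the dict):

from collections import deque

def shortest_path(graph, start, goal):
    # BFS over an unweighted graph; returns the nodes on a shortest path, or None
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

print shortest_path(adjacency_dict, first, second)  # e.g. "scout2" -> "scout3"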
You could use a set of pairs (tuples):
connections = {("dr101", "mr99"), ("mr99", "out00"), ("dr101", "out00")} # ...
if ("scout2", "scout3") in connections:
print "scout2-scout3 in connections"
This only works if the 2 elements are already in the right order, though, because ("scout3", "scout2") != ("scout2", "scout3"), but maybe this is what you want.
If the order of the items in the connection is not significant, you can use a set of frozensets instead (see mgilson's answer). Then you can look up pairs of item regardless of which order they appear in, but the order of the original pairs in connections is lost.
I'm building a database with tag nodes and URL nodes, and the URL nodes are connected to tag nodes. If the same URL is inserted into the database, it should link to the existing tag node rather than create a duplicate URL node. I think indexing would solve this problem. How can I do indexing and traversal with the neo4jrestclient? A link to a tutorial would be fine. I'm currently using versae's neo4jrestclient.
Thanks
The neo4jrestclient supports both indexing and traversing the graph, but I think indexing alone could be enough for your use case. However, I'm not sure I understood your problem correctly. Anyway, something like this could work:
>>> from neo4jrestclient.client import GraphDatabase
>>> gdb = GraphDatabase("http://localhost:7474/db/data/")
>>> idx = gdb.nodes.indexes.create("urltags")
>>> url_node = gdb.nodes.create(url="http://foo.bar", type="URL")
>>> tag_node = gdb.nodes.create(tag="foobar", type="TAG")
We add a count property to the relationship to keep track of the number of times the URL "http://foo.bar" has been tagged with the tag foobar.
>>> url_node.relationships.create(tag_node["tag"], tag_node, count=1)
And after that, we index the URL node according to the value of the URL.
>>> idx["url"][url_node["url"]] = url_node
Then, when we need to create a new URL node tagged with a TAG node, we first query the index to check whether it is already indexed. Otherwise, we create the node and index it.
>>> new_url = "http://foo.bar2"
>>> nodes = idx["url"][new_url]
>>> if len(nodes):
...     rel = nodes[0].relationships.all(types=[tag_node["tag"]])[0]
...     rel["count"] += 1
... else:
...     new_url_node = gdb.nodes.create(url=new_url, type="URL")
...     new_url_node.relationships.create(tag_node["tag"], tag_node, count=1)
...     idx["url"][new_url_node["url"]] = new_url_node
An important concept is that the indexes are key/value/object triplets where the object is either a node or a relationship you want to index.
Steps to create and use the index:
Create an instance of the graph database rest client.
from neo4jrestclient.client import GraphDatabase
gdb = GraphDatabase("http://localhost:7474/db/data/")
Create a node or relationship index (Creating a node index here)
index = gdb.nodes.indexes.create('latin_genre')
Add nodes to the index
nelly = gdb.nodes.create(name='Nelly Furtado')
shakira = gdb.nodes.create(name='Shakira')
index['latin_genre'][nelly.get('name')] = nelly
index['latin_genre'][shakira.get('name')] = shakira
Fetch nodes based on the index and do further processing:
for artist in index['latin_genre']['Shakira']:
    print artist.get('name')
More details can be found in the notes in the webadmin:
Neo4j has two types of indexes, node and relationship indexes. With node indexes you index and find nodes, and with relationship indexes you do the same for relationships.
Each index has a provider, which is the underlying implementation handling that index. The default provider is lucene, but you can create your own index providers if you like.
Neo4j indexes take key/value/object triplets ("object" being a node or a relationship); the index stores the key/value pair and associates it with the object provided. After you have indexed a set of key/value/object triplets, you can query the index and get back objects that were indexed with key/value pairs matching your query.
For instance, if you have "User" nodes in your database and want to rapidly find them by username or email, you could create a node index named "Users" and, for each user, index username and email. With the default lucene configuration, you can then search the "Users" index with a query like: "username:bob OR email:bob@gmail.com".
You can use the data browser to query your indexes this way; the syntax for the above query is "node:index:Users:username:bob OR email:bob@gmail.com".
pydot has a huge number of bound methods for getting and setting every little thing in a dot graph, reading and writing, you-name-it, but I can't seem to find a simple membership test.
>>> d = pydot.Dot()
>>> n = pydot.Node('foobar')
>>> d.add_node(n)
>>> n in d.get_nodes()
False
is just one of many things that didn't work. It appears that nodes, once added to a graph, acquire a new identity
>>> d.get_nodes()[0]
<pydot.Node object at 0x171d6b0>
>>> n
<pydot.Node object at 0x1534650>
Can anyone suggest a way to create a node and test to see if it's in a graph before adding it so you could do something like this:
d = pydot.Dot()
n = pydot.Node('foobar')
if n not in d:
    d.add_node(n)
Looking through the source code (http://code.google.com/p/pydot/source/browse/trunk/pydot.py), it seems that node names are unique values used as the keys to locate the nodes within a graph's node dictionary (though, interestingly, rather than returning an error for an existing node, it simply adds the attributes of the new node to those of the existing one).
So unless you want to add an implementation of __contains__() to one of the classes in pydot.py that performs this check, you can just do the following in your code:
if n.get_name() not in d.obj_dict['nodes'].keys():
    d.add_node(n)
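If you would rather keep the "if n not in d:" syntax from the question, one option is a thin subclass that adds that check as __contains__. DotWithContains is a hypothetical name and this is only a sketch of the idea described above:

import pydot

class DotWithContains(pydot.Dot):
    # membership is decided by node name, mirroring pydot's internal node dictionary
    def __contains__(self, node):
        return node.get_name() in self.obj_dict['nodes']

d = DotWithContains()
n = pydot.Node('foobar')
if n not in d:
    d.add_node(n)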