Py2neo (V4) - CypherSyntaxError: Variable `$x` not defined - python

I am trying in the absolute simplest way I can think of to create a node in my neo4j database using py2neo. Here is an example:
from py2neo import Graph, Node
db = Graph()
node = Node('band', name='The Yeah Yeah Yeahs')
db.create(node)
With this (and every variation thereof), I get the following error:
neo4j.exceptions.CypherSyntaxError: Variable `$x` not defined (line 1, column 8 (offset: 7))
"UNWIND $x AS data CREATE (_:band) SET _ = data RETURN id(_)"
I've tried every permutation of this that I can think of, and I still can't see anything in my code that might be causing a syntax error. This appears to be some kind of internal mechanism for generating a Cypher query to create the node, but even with the complete stack trace I haven't been able to track down where this error is coming from or what might be causing it.
I am using a virtual environment that uses Python 3.7.2 and py2neo 4.1.3.
Any thoughts or insights would be hugely appreciated. Thanks very much in advance.

Which version of Neo4j are you using? The `$x` syntax replaced the older `{x}` parameter syntax, and the error message implies that `$x` isn't recognised. If you aren't on a recent version, please try upgrading your database and trying again.
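To make the syntax change concrete, here is a sketch using the query string from the error message; the `modernise` helper is purely illustrative and not part of py2neo or Neo4j:

```python
import re

# The query py2neo 4.x generates uses the newer $x parameter form:
new_style = "UNWIND $x AS data CREATE (_:band) SET _ = data RETURN id(_)"

# Servers older than Neo4j 3.0 only understood the {x} form, which is
# consistent with the "Variable `$x` not defined" error above:
old_style = "UNWIND {x} AS data CREATE (_:band) SET _ = data RETURN id(_)"

def modernise(query):
    """Rewrite {x}-style placeholders to $x (illustrative only; a real
    rewrite would need a Cypher parser to avoid touching map literals)."""
    return re.sub(r'\{(\w+)\}', r'$\1', query)
```

Since py2neo 4.x always emits the new form, the practical fix is upgrading the server rather than rewriting queries.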

Creating named graph with rdflib in Virtuoso

The following code works for me when using Apache Jena Fuseki 4.3.2 (docker image secoresearch/fuseki:4.3.2) with rdflib 6.1.1:
from rdflib import Graph
from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore
FUSEKI_QUERY = 'http://localhost:3030/ds/sparql'
FUSEKI_UPDATE = 'http://localhost:3030/ds/update'
store = SPARQLUpdateStore(query_endpoint=FUSEKI_QUERY,
                          update_endpoint=FUSEKI_UPDATE,
                          method='POST',
                          autocommit=False)
graph = Graph(store=store, identifier=GRAPH_NAME)
graph.parse('./dump.ttl') # file containing 1000 example triples
store.commit()
But when I change to OpenLink Virtuoso 07.20.3233 (docker image tenforce/virtuoso:latest), I get the following error:
urllib.error.HTTPError: HTTP Error 500: SPARQL Request Failed
With some trial and error, I got the following to work for Virtuoso:
from rdflib import Graph
from rdflib.plugins.stores.sparqlstore import SPARQLUpdateStore
VIRTUOSO_QUERY = 'http://localhost:8890/sparql'
VIRTUOSO_UPDATE = 'http://localhost:8890/sparql'
store = SPARQLUpdateStore(query_endpoint=VIRTUOSO_QUERY,
                          update_endpoint=VIRTUOSO_UPDATE,
                          method='POST',
                          autocommit=False)
intermediate_graph = Graph()
intermediate_graph.parse('./dump.ttl')
graph = Graph(store=store, identifier=GRAPH_NAME)
for triple in intermediate_graph:
    graph.add(triple)
    store.commit()  # Have to commit after every add here
If I don't commit after every add, but only once after the loop, I get the same error as above. At the moment, I don't see any helpful HTTP or server log entries that might point me to the problem.
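One way to at least reduce the per-add commit overhead is to commit in batches rather than after every triple. This is a generic sketch, not an rdflib API: `commit` stands in for `store.commit`, and `batch_size` is a made-up knob that would need tuning against whatever limit Virtuoso is actually hitting:

```python
def add_in_batches(graph, triples, commit, batch_size=100):
    """Add triples to graph, committing every batch_size additions
    (illustrative helper, not part of rdflib)."""
    count = 0
    for triple in triples:
        graph.add(triple)
        count += 1
        if count % batch_size == 0:
            commit()
    if count % batch_size != 0:
        commit()  # flush the final partial batch
    return count
```

If a batch size of 1 is the only thing that works, that points back at a server-side request limit rather than anything on the rdflib side.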
So my question is: does anyone have an idea why this error occurs and what the solution might be? I guess it has something to do with the way my Virtuoso instance is configured?
Update 03.06.2022:
I noticed that I was using an old version of Virtuoso with the docker image tenforce/virtuoso:latest (07.20.3233), so I switched to openlink/virtuoso-opensource-7:latest (07.20.3234). With that, my code for Virtuoso no longer works (same error as stated above).
Also, as TallTed correctly identified in his comment, I use /sparql for both query and update. I can do that because I gave the user SPARQL the SPARQL_UPDATE role in the Virtuoso Conductor interface. It is kind of a workaround for now, since I didn't get basic auth to work with rdflib. Could that have something to do with the problem, since I don't use /sparql-auth?

Converting Intersystems cache objectscript into a python function

I am accessing an InterSystems Caché 2017.1.xx instance through a Python process to get various attributes about the database in order to monitor it.
One of the items I want to monitor is license usage. I wrote an ObjectScript script in a Terminal window to access license usage by user:
s Rset=##class(%ResultSet).%New("%SYSTEM.License.UserListAll")
s r=Rset.Execute()
s ncol=Rset.GetColumnCount()
While (Rset.Next()) {f i=1:1:ncol w !,Rset.GetData(i)}
But I have been unable to determine how to convert this script into a Python equivalent. I am using the intersys.pythonbind3 module for connecting to and accessing the Caché instance. I have been able to write Python functions that access almost everything else in the instance, but this one piece of data I cannot figure out how to translate to Python (3.7).
The following should work (based on the documentation):
query = intersys.pythonbind.query(database)
query.prepare_class("%SYSTEM.License", "UserListAll")
query.execute()
# Fetch each row in the result set, and print the
# name and value of each column in a row:
while 1:
    cols = query.fetch([None])
    if len(cols) == 0:
        break
    print(str(cols[0]))
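The fetch loop above can also be wrapped as a Python 3 generator. This is a sketch against the pythonbind pattern shown; the only assumption is that `fetch([None])` returns an empty list once the result set is exhausted:

```python
def iter_rows(query):
    """Yield each row from a pythonbind-style query object until fetch()
    returns an empty list (illustrative wrapper)."""
    while True:
        cols = query.fetch([None])
        if len(cols) == 0:
            return
        yield cols
```

With that in place the printing loop becomes `for cols in iter_rows(query): print(cols[0])`.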
Also, note that InterSystems IRIS, the successor to Caché, now has Python as an embedded language. See more in the docs.
It turns out the noted query "UserListAll" is not defined correctly in the library (it is not marked as an SqlProc), so resolving this issue would require an ObjectScript wrapper with the query and the use of %ResultSet or similar in Python to get the results. So I am marking this as resolved.
Not sure which Python interface you're using for Caché/IRIS, but this open-source third-party one is worth investigating for the kind of things you're trying to do:
https://github.com/chrisemunt/mg_python

How to run the Calculate Field using ArcPy?

I have an issue executing the Calculate Field command in Python (ArcPy). I couldn't find any related resources or helpful descriptions regarding this. I hope somebody can help me with this.
inFeatures = r"H:\Python Projects\PycharmProjects\ArcPy\Test_Output\Trial_out.gdb\Trail_Data_A.shp"
arcpy.CalculateField_management(inFeatures, 'ObjArt_Ken', '!AttArt_Ken!'.split('_')[0])
arcpy.CalculateField_management(inFeatures, 'Wert', '!AttArt_Ken!'.split('_')[-1])
The error I am getting when I run the command is
arcgisscripting.ExecuteError: Error during execution. Parameters are invalid.
ERROR 000989: The CalculateField tool cannot use VB expressions for services.
Error while executing (CalculateField).
I am using ArcGIS Pro 2.8 and Python 2.7
I found a working solution for the above-mentioned question:
inTable = r'H:\Python Projects\PycharmProjects\ArcPy\Test_Output\Trial_out.gdb\Trail_Data_A.shp'
func_1 = "'!AttArt_Ken!'.split('_')[0]"
field_1 = "ObjArt_Ken"
func_2 = "'!AttArt_Ken!'.split('_')[1]"
field_2 = "Wert"
arcpy.AddField_management(inTable, field_1, "TEXT".encode('utf-8'))
arcpy.CalculateField_management(inTable, field_1, func_1, "PYTHON_9.3")
arcpy.AddField_management(inTable, field_2, "TEXT".encode('utf-8'))
arcpy.CalculateField_management(inTable, field_2, func_2, "PYTHON_9.3")
But a problem still exists: the output (after the split function) looks like
|ObjArt_Ken|Wert|
|:---|:---|
|u"31001|1001"|
I still couldn't figure out a way to remove the unicode prefix or the '"' character from the output.
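If the stored text really does contain a literal `u"` prefix and a stray quote (an assumption based on the table above; it looks like a Python 2 unicode repr was written into the field), they can be stripped in plain Python with a hypothetical cleanup helper like this:

```python
def strip_repr_artifacts(value):
    """Remove a leading u-quote and surrounding quote characters left over
    from a Python 2 unicode repr (hypothetical helper for the output shown)."""
    if value.startswith('u"') or value.startswith("u'"):
        value = value[1:]
    return value.strip('\'"')
```

The cleaner fix, though, is to avoid writing the repr in the first place, e.g. by not letting the expression produce a unicode literal.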
Since I can't comment yet, I will have to use an answer to make several comments.
You need to clarify, "I am using ArcGIS Pro 2.8 and Python 2.7." ArcGIS Pro has always come bundled with Python 3.x, and ArcGIS Desktop/ArcMap has always been bundled with Python 2.x. It is not possible to use ArcGIS Pro and Python 2.7. Given the error message, it appears you are running some version of ArcMap and Python 2.7.
The original error message, as you seemed to have figured out, was telling you exactly what wasn't working originally. Since you didn't pass expression_type to Calculate Field and you are using some version of ArcMap that defaults to VB, the function was interpreting your expression as VB code and erring because "CalculateField tool cannot use VB expressions for services."
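The difference is easy to see in plain Python: in the original call, the split ran immediately in the calling process, so CalculateField never received the full expression. A sketch of what each side produces:

```python
# Evaluated immediately by Python before the tool is ever called:
evaluated = '!AttArt_Ken!'.split('_')[0]       # -> '!AttArt'

# What CalculateField actually needs: the expression as a string,
# to be evaluated later, row by row, by the tool itself:
expression = "'!AttArt_Ken!'.split('_')[0]"
```

Passing `evaluated` hands the tool the meaningless fragment `!AttArt`, whereas `expression` (together with an explicit `expression_type`) lets the tool substitute each row's field value before evaluating.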
Looking at the newer code, I am not sure why you are encoding field_type. Although I tested it and it works, it is unnecessary and confusing to someone looking at the code.
Unless you provide some samples of what the AttArt_Ken field contains, people can't really comment on the output you are seeing.

Error Message when trying to delete features in Feature Service Python API 1.7 for ArcGIS

I am trying to delete features using the delete_features method on the FeatureLayer object, and I keep getting the following error: "This SqlTransaction has completed; it is no longer usable."
The code is below. The error message seems to point at the last line, where="OBJECTID >= 0", but I'm not 100% sure if this is the problem. Unfortunately I'm not very good at programming.
gis = arcgis.GIS("http://gfcgis.maps.arcgis.com", "UserName", "Password")
feature_layer_item = gis.content.search(FeatureLayer, item_type = 'Feature Service')[0]
flayers = feature_layer_item.layers
flayer = flayers[0]
flayer.delete_features(where="OBJECTID >= 0", rollback_on_failure=True)
Any help would be greatly appreciated.
Michael
This sounds like a zombie transaction. Check with your DBA if there's a query, most likely a Stored Procedure, which is being called when your code runs. This message usually shows up when the application code tries to do a commit on the DB after the SP has already committed.
That's the SQL Transaction which has already completed.
Come to find out, it was a simple syntax error causing the issue. I didn't put quotations around 'True' for the rollback_on_failure parameter.

Elastic Search [PUT] error

I'm having some trouble getting Elasticsearch integrated with an existing application, but it should be a fairly straightforward issue. I'm able to create and destroy indices, but for some reason I'm having trouble getting data into Elasticsearch and querying for it.
I'm using the pyes library and honestly finding the documentation to be less than helpful on this front. This is my current code:
def initialize_transcripts(database, mapping):
    database.indices.create_index("transcript-index")

def index_course(database, sjson_directory, course_name, mapping):
    database.put_mapping(course_name, {'properties': mapping}, "transcript-index")
    all_transcripts = grab_transcripts(sjson_directory)
    video_counter = 0
    for transcript_tuple in all_transcripts:
        data_map = {"searchable_text": transcript_tuple[0], "uuid": transcript_tuple[1]}
        database.index(data_map, "transcript-index", course_name, video_counter)
        video_counter += 1
    database.indices.refresh("transcript-index")

def search_course(database, query, course_name):
    search_query = TermQuery("searchable_text", query)
    return database.search(query=search_query)
I'm first creating the database and initializing the index, then trying to add data and search it with the other two methods. I'm currently getting the following error:
raise ElasticSearchException(response.body, response.status, response.body)
pyes.exceptions.ElasticSearchException: No handler found for uri [/transcript-index/test-course] and method [PUT]
I'm not quite sure how to approach it, and the only reference I could find to this error suggested creating your index beforehand, which I believe I am already doing. Has anyone run into this error before? Alternatively, do you know of any good places to look that I might not be aware of?
Any help is appreciated.
For some reason, passing the ID to index(), even though it is shown in the getting-started documentation (http://pyes.readthedocs.org/en/latest/manual/usage.html), doesn't work and in fact causes this error.
Once I removed the video_counter argument to index(), this worked perfectly.
