RDFLib parser does not recognize json-ld format - python

My code in Python 3.4:
from rdflib import Graph, plugin
import json, rdflib_jsonld
from rdflib.plugin import register, Serializer
register('json-ld', Serializer, 'rdflib_jsonld.serializer', 'JsonLDSerializer')
context = {
    "@context": {
        "foaf": "http://xmlns.com/foaf/0.1/",
        "vcard": "http://www.w3.org/2006/vcard/ns#",
        "job": "http://example.org/job",
        "name": {"@id": "foaf:name"},
        "country": {"@id": "vcard:country-name"},
        "profession": {"@id": "job:occupation"},
    }
}
x = [{"name": "bert", "country": "antartica", "profession": "bear"}]
g = Graph()
g.parse(data=json.dumps(x), format='json-ld', context=context)
g.close()
Error:
"No plugin registered for (%s, %s)" % (name, kind))
rdflib.plugin.PluginException: No plugin registered for (json-ld, <class'rdflib.parser.Parser'>)
According to the RDFLib documentation, the list of supported plugins does not include the json-ld format. However, I had it working before with the format set to json-ld, and there are plenty of examples using the json-ld format, e.g. https://github.com/RDFLib/rdflib-jsonld/issues/19
I included the import of rdflib_jsonld, although it worked before in another environment (Python 2.7) with only rdflib (I know, that doesn't make any sense).
The register call for json-ld on line 4 isn't helping either.
Does anyone have an idea?
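For reference, the exception complains about a missing Parser plugin, while the snippet above only registers the Serializer. A minimal sketch that registers both sides, assuming the rdflib-jsonld package is installed (on rdflib 6.0 and newer, JSON-LD support is built in and no registration is needed):
import json
from rdflib import Graph
from rdflib.plugin import register, Parser, Serializer
# Register both the parser and the serializer shipped with rdflib-jsonld.
register('json-ld', Parser, 'rdflib_jsonld.parser', 'JsonLDParser')
register('json-ld', Serializer, 'rdflib_jsonld.serializer', 'JsonLDSerializer')
g = Graph()
# Reusing x and context from the question above.
g.parse(data=json.dumps(x), format='json-ld', context=context)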

I got it working by adding:
from SPARQLWrapper import SPARQLWrapper
I was looking into the jsonLayer module from RDFLib at http://rdflib.readthedocs.org/en/latest/apidocs/rdflib.plugins.sparql.results.html#module-rdflib.plugins.sparql.results.jsonlayer and noticed the mention of SPARQLWrapper, which I had used in the previous environment where the example worked, and there it was.

This is a simple example you can use:
import rdflib
from rdflib import Graph, plugin
from rdflib.serializer import Serializer

g = rdflib.Graph()
g.parse("http://purl.obolibrary.org/obo/go.owl")
j = g.serialize(format='json-ld', indent=4)
# Write the JSON-LD serialization out; 'w' truncates instead of appending.
with open('ontology.json', 'w') as f:
    f.write(str(j))

I encountered this PluginException as well, in a Jupyter notebook, after running the following two cells:
! pip install rdflib-jsonld
from rdflib import Graph, plugin
from rdflib.serializer import Serializer
g = Graph()
g.parse(data="""
<some turtle triples>
""", format="turtle")
g.serialize(format="json-ld")
It turned out it started working by just restarting the notebook and rerunning the code above (so reimporting after the restart, once the freshly installed package was visible to the kernel).
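If restarting still doesn't help, you can check whether the json-ld plugin is visible to the current kernel; this is a small sketch using rdflib's plugin registry (plugin.get raises a PluginException when nothing is registered):
from rdflib import plugin
from rdflib.serializer import Serializer

print(plugin.get('json-ld', Serializer))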

Related

Changing cursor type from Python

I am making a text-based video game with Python, and I need a way to change the cursor type from inside Python. Here is my code:
from os import system, getenv
from json import loads, dumps
import uuid

apd = getenv("LOCALAPPDATA")
settings_path = rf"{apd}\Packages\Microsoft.WindowsTerminal_8wekyb3d8bbwe\LocalState\settings.json"
pf = loads(open(settings_path).read())
pf['profiles']['list'].append({
    "guid": str(uuid.uuid4()),
    "name": "Game",
    "commandline": "C:\\Windows\\System32\\cmd.exe",
    "cursorShape": "vintage",
    "font": {"face": "Consolas"},
})
open(settings_path, 'w').write(dumps(pf))
system('wt -F -d {{PATH TO FILE}} -p "Game" --title Game python game.py')
I'm not sure it's a great idea to take a dependency on Windows Terminal, but if you do go that route, the right way to do this is via the JSON fragment extension feature.
Don't risk modifying the user's Windows Terminal settings file directly from Python: users will be (rightly) upset if something goes wrong and you muck up their settings.
JSON fragment extensions give you the ability to create a separate settings fragment that only applies to your application/profile.
For example, create a fragment:
{
    "profiles": [
        {
            "name": "My Game",
            "commandline": "python.exe /path/to/game.py",
            "cursorShape": "vintage",
            "font": {"face": "Consolas"}
        }
    ]
}
Place this file in either:
C:\ProgramData\Microsoft\Windows Terminal\Fragments\{app-name}\{file-name}.json: For all users on the system
C:\Users\<user>\AppData\Local\Microsoft\Windows Terminal\Fragments\{app-name}\{file-name}.json: For only the current user
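If you want to install the fragment from Python at game setup time, a minimal sketch for the per-user location could look like this (the "MyGame" folder name and "profile.json" file name are hypothetical placeholders you'd choose for your app):
import json
import os

fragment = {
    "profiles": [
        {
            "name": "My Game",
            "commandline": "python.exe /path/to/game.py",
            "cursorShape": "vintage",
            "font": {"face": "Consolas"},
        }
    ]
}

# Per-user fragment directory; "MyGame" is a hypothetical app name.
target_dir = os.path.expandvars(
    r"%LOCALAPPDATA%\Microsoft\Windows Terminal\Fragments\MyGame")
os.makedirs(target_dir, exist_ok=True)
with open(os.path.join(target_dir, "profile.json"), "w") as f:
    json.dump(fragment, f, indent=2)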

How to propagate mlpipeline-metrics from custom Python function TFX component?

Note: this is a copy of a GitHub issue I reported.
It is re-posted in the hope of getting more attention; I will update with any solutions on either site.
Question
I want to export mlpipeline-metrics from my custom Python function TFX component so that it is displayed in the KubeFlow UI.
This is a minimal example of what I am trying to do:
import json

from tfx.dsl.component.experimental.annotations import OutputArtifact
from tfx.dsl.component.experimental.decorators import component
from tfx.types.standard_artifacts import Artifact

class Metric(Artifact):
    TYPE_NAME = 'Metric'

@component
def ShowMetric(MLPipeline_Metrics: OutputArtifact[Metric]):
    rmse_eval = 333.33
    metrics = {
        'metrics': [
            {
                'name': 'RMSE-validation',
                'numberValue': rmse_eval,
                'format': 'RAW'
            }
        ]
    }
    path = '/tmp/mlpipeline-metrics.json'
    with open(path, 'w') as _file:
        json.dump(metrics, _file)
    MLPipeline_Metrics.uri = path
In the KubeFlow UI, the "Run output" tab says "No metrics found for this run." However, the output artefact shows up in ML Metadata (see screenshot). Any help on how to accomplish this would be greatly appreciated. Thanks!

How to deserialize App Engine application logs from StackDriver Logging API?

As part of migrating to Python 3, I need to migrate from logservice to the StackDriver Logging API. I have google-cloud-logging installed, and I can successfully fetch GAE application logs with, e.g.:
>>> from google.cloud.logging_v2 import LoggingServiceV2Client
>>> entries = LoggingServiceV2Client().list_log_entries(
...     ('projects/projectname',),
...     filter_='resource.type="gae_app" AND protoPayload.@type="type.googleapis.com/google.appengine.logging.v1.RequestLog"')
>>> print(next(iter(entries)))
proto_payload {
  type_url: "type.googleapis.com/google.appengine.logging.v1.RequestLog"
  value: "\n\ts~brid-gy\022\0018\032R5d..."
}
This gets me a LogEntry with the text application logs in the proto_payload.value field. How do I deserialize that field? I've found lots of related mentions in the docs, but nothing pointing me to a google.appengine.logging.v1.RequestLog protobuf-generated class anywhere that I can use, if that's even the right idea. Has anyone done this?
Woo! Finally got this working. I had to generate and use the Python bindings for the google.appengine.logging.v1.RequestLog protocol buffer myself, by hand. Here's how.
First, I cloned these two repos at head:
https://github.com/googleapis/googleapis.git
https://github.com/protocolbuffers/protobuf.git
Then, I generated request_log_pb2.py from request_log.proto by running:
protoc -I googleapis/ -I protobuf/src/ --python_out . googleapis/google/appengine/logging/v1/request_log.proto
Finally, I pip installed googleapis-common-protos and protobuf. I was then able to deserialize proto_payload with:
from google.cloud.logging_v2 import LoggingServiceV2Client
client = LoggingServiceV2Client(...)
log = next(iter(client.list_log_entries(('projects/brid-gy',),
filter_='logName="projects/brid-gy/logs/appengine.googleapis.com%2Frequest_log"')))
import request_log_pb2
pb = request_log_pb2.RequestLog.FromString(log.proto_payload.value)
print(pb)
You can use the LogEntry.to_api_repr() function to get a JSON version of the LogEntry.
>>> from google.cloud.logging import Client
>>> entries = Client().list_entries(filter_="severity:DEBUG")
>>> entry = next(iter(entries))
>>> entry.to_api_repr()
{'logName': 'projects/PROJECT_NAME/logs/cloudfunctions.googleapis.com%2Fcloud-functions',
 'resource': {'type': 'cloud_function',
              'labels': {'region': 'us-central1',
                         'function_name': 'test',
                         'project_id': 'PROJECT_NAME'}},
 'labels': {'execution_id': '1zqolde6afmx'},
 'insertId': '000000-f629ab40-aeca-4802-a678-d513e605608e',
 'severity': 'DEBUG',
 'timestamp': '2019-10-24T21:55:14.135056Z',
 'trace': 'projects/PROJECT_NAME/traces/9c5201c3061d91c2b624abb950838b40',
 'textPayload': 'Function execution started'}
Do you really want to use the v2 API?
If not, use from google.cloud import logging and
set os.environ['GOOGLE_CLOUD_DISABLE_GRPC'] = 'true' (or a similar env setting).
That will effectively return JSON in payload instead of payload_pb.
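A minimal sketch of that approach, assuming the GOOGLE_CLOUD_DISABLE_GRPC variable is read before the client library is imported:
import os
# Ask the client library for the HTTP/JSON transport instead of gRPC
# (assumption: the variable must be set before the import below).
os.environ['GOOGLE_CLOUD_DISABLE_GRPC'] = 'true'

from google.cloud import logging

client = logging.Client()
for entry in client.list_entries(filter_='resource.type="gae_app"'):
    print(entry.payload)  # dict-like JSON payload rather than a raw protobuf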

How do you test your ncurses app in Python?

We've built a CLI app with Python. Some parts need ncurses, so we use npyscreen. We've successfully tested most parts of the app using pytest (with the help of mock and other things). But we're stuck on how to test the ncurses part of the code.
Take this part of our ncurses code that prompts the user to answer:
"""
Generate text user interface:
example :
fields = [
{"type": "TitleText", "name": "Name", "key": "name"},
{"type": "TitlePassword", "name": "Password", "key": "password"},
{"type": "TitleSelectOne", "name": "Role",
"key": "role", "values": ["admin", "user"]},
]
form = form_generator("Form Foo", fields)
print(form["role"].value[0])
print(form["name"].value)
"""
def form_generator(form_title, fields):
def myFunction(*args):
form = npyscreen.Form(name=form_title)
result = {}
for field in fields:
t = field["type"]
k = field["key"]
del field["type"]
del field["key"]
result[k] = form.add(getattr(npyscreen, t), **field)
form.edit()
return result
return npyscreen.wrapper_basic(myFunction)
We have tried many ways, but all failed:
- StringIO to capture the output: failed
- redirecting the output to a file: failed
- hecate: failed (I think it only works if we run the whole program)
- pyautogui: failed (I think it only works if we run the whole program)
These are the complete steps of what I have tried.
So the last thing I did was use patch: I patch those functions. But the downside is that the statements inside those functions remain untested, because the test just asserts the hard-coded return value.
I found the npyscreen docs for writing tests, but I don't completely understand them; there is just one example.
Thank you in advance.
I don't see it mentioned in the Python docs, but you can use the screen-dump feature of the curses library to capture information for analysis.
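Python's curses binding doesn't expose scr_dump directly, but window.instr() and window.putwin() can stand in for it in tests. A minimal sketch, assuming the code under test draws on a window you control:
import curses

def check_screen(stdscr):
    stdscr.addstr(0, 0, "Name:")
    stdscr.refresh()
    # Read the first 5 bytes of row 0 back from the virtual screen.
    assert stdscr.instr(0, 0, 5) == b"Name:"
    # Persist the whole window for later analysis (curses.getwin() reloads it).
    with open("screen.dump", "wb") as f:
        stdscr.putwin(f)

curses.wrapper(check_screen)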

How to use relative path for $ref in Json Schema

Say I have a JSON schema called child.json.
"$ref": "file:child.json" will work
"$ref": "file:./child.json" will work
Those are the only two relative paths that worked for me. I am using the Python validator: http://sacharya.com/validating-json-using-python-jsonschema/
The issue I have: with 3 schemas, grandpa.json, parent.json, and child.json, where grandpa refers to parent using "$ref": "file:parent.json" and parent refers to child using "$ref": "file:child.json", the above relative paths no longer work.
Building off the GitHub issue linked by @jruizaranguren, I ended up with the following, which works as expected:
import os
import json

import jsonschema

schema_dir = os.path.abspath('resources')

with open(os.path.join(schema_dir, 'schema.json')) as file_object:
    schema = json.load(file_object)

# Your data
data = {"sample": "woo!"}

# Note that the second parameter does nothing.
resolver = jsonschema.RefResolver('file://' + schema_dir + '/', None)

# This will find the correct validator and instantiate it using the resolver.
# Requires that your schema contains a line like this:
#   "$schema": "http://json-schema.org/draft-04/schema#"
jsonschema.validate(data, schema, resolver=resolver)
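For the grandpa/parent/child chain from the question, the same pattern should work as long as every $ref is a plain relative file name resolved against the resolver's base URI. A sketch under that assumption (all three files living in resources/, grandpa.json containing {"$ref": "parent.json"} and parent.json containing {"$ref": "child.json"}):
import os
import json

import jsonschema

schema_dir = os.path.abspath('resources')

with open(os.path.join(schema_dir, 'grandpa.json')) as file_object:
    grandpa = json.load(file_object)

# The trailing slash matters: it makes the directory the base for relative refs.
resolver = jsonschema.RefResolver('file://' + schema_dir + '/', grandpa)
jsonschema.validate({"sample": "woo!"}, grandpa, resolver=resolver)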
