lark-parser indented DSL and multiline documentation strings - python

I'm trying to implement a record definition DSL using lark. It is based on indentation, which makes things a bit more complex.
Lark is a great tool, but I'm facing some difficulties.
Here is a snippet of the DSL I'm implementing:
record Order :
    """Order record documentation
    should have arbitrary size"""
    field1 Int
    field2 Datetime:
        """Attributes should also have
        multiline documentation"""
    field3 String "inline documentation also works"
and here is the grammar used:
?start: (_NEWLINE | redorddef)*
simple_type: NAME
multiline_doc: MULTILINE_STRING _NEWLINE
inline_doc: INLINE_STRING
?element_doc: ":" _NEWLINE _INDENT multiline_doc _DEDENT | inline_doc
attribute_name: NAME
attribute_simple_type: attribute_name simple_type [element_doc] _NEWLINE
attributes: attribute_simple_type+
_recordbody: _NEWLINE _INDENT [multiline_doc] attributes _DEDENT
redorddef: "record" NAME ":" _recordbody
MULTILINE_STRING: /"""([^"\\]*(\\.[^"\\]*)*)"""/
INLINE_STRING: /"([^"\\]*(\\.[^"\\]*)*)"/
_WS_INLINE: (" "|/\t/)+
COMMENT: /#[^\n]*/
_NEWLINE: ( /\r?\n[\t ]*/ | COMMENT )+
%import common.CNAME -> NAME
%import common.INT
%ignore /[\t \f]+/ // WS
%ignore /\\[\t \f]*\r?\n/ // LINE_CONT
%ignore COMMENT
%declare _INDENT _DEDENT
It works fine for the record definition's multiline doc and for inline attribute docs, but it doesn't work for an attribute's multiline doc.
The code I use to execute is this:
import sys
import pprint
from pathlib import Path
from lark import Lark, UnexpectedInput
from lark.indenter import Indenter
scheman_data_works = '''
record Order :
    """Order record documentation
    should have arbitrary size"""
    field1 Int
# field2 Datetime:
#     """Attributes should also have
#     multiline documentation"""
    field3 String "inline documentation also works"
'''

scheman_data_wrong = '''
record Order :
    """Order record documentation
    should have arbitrary size"""
    field1 Int
    field2 Datetime:
        """Attributes should also have
        multiline documentation"""
    field3 String "inline documentation also works"
'''
grammar = r'''
?start: (_NEWLINE | redorddef)*
simple_type: NAME
multiline_doc: MULTILINE_STRING _NEWLINE
inline_doc: INLINE_STRING
?element_doc: ":" _NEWLINE _INDENT multiline_doc _DEDENT | inline_doc
attribute_name: NAME
attribute_simple_type: attribute_name simple_type [element_doc] _NEWLINE
attributes: attribute_simple_type+
_recordbody: _NEWLINE _INDENT [multiline_doc] attributes _DEDENT
redorddef: "record" NAME ":" _recordbody
MULTILINE_STRING: /"""([^"\\]*(\\.[^"\\]*)*)"""/
INLINE_STRING: /"([^"\\]*(\\.[^"\\]*)*)"/
_WS_INLINE: (" "|/\t/)+
COMMENT: /#[^\n]*/
_NEWLINE: ( /\r?\n[\t ]*/ | COMMENT )+
%import common.CNAME -> NAME
%import common.INT
%ignore /[\t \f]+/ // WS
%ignore /\\[\t \f]*\r?\n/ // LINE_CONT
%ignore COMMENT
%declare _INDENT _DEDENT
'''
class SchemanIndenter(Indenter):
    NL_type = '_NEWLINE'
    OPEN_PAREN_types = ['LPAR', 'LSQB', 'LBRACE']
    CLOSE_PAREN_types = ['RPAR', 'RSQB', 'RBRACE']
    INDENT_type = '_INDENT'
    DEDENT_type = '_DEDENT'
    tab_len = 4
scheman_parser = Lark(grammar, parser='lalr', postlex=SchemanIndenter())
print(scheman_parser.parse(scheman_data_works).pretty())
print("\n\n")
print(scheman_parser.parse(scheman_data_wrong).pretty())
and the result is:
redorddef
  Order
  multiline_doc   """Order record documentation
should have arbitrary size"""
  attributes
    attribute_simple_type
      attribute_name  field1
      simple_type     Int
    attribute_simple_type
      attribute_name  field3
      simple_type     String
      inline_doc      "inline documentation also works"
Traceback (most recent call last):
  File "schema_parser.py", line 83, in <module>
    print(scheman_parser.parse(scheman_data_wrong).pretty())
  File "/Users/branquif/Dropbox/swf_projects/schema-manager/.venv/lib/python3.7/site-packages/lark/lark.py", line 228, in parse
    return self.parser.parse(text)
  File "/Users/branquif/Dropbox/swf_projects/schema-manager/.venv/lib/python3.7/site-packages/lark/parser_frontends.py", line 38, in parse
    return self.parser.parse(token_stream, *[sps] if sps is not NotImplemented else [])
  File "/Users/branquif/Dropbox/swf_projects/schema-manager/.venv/lib/python3.7/site-packages/lark/parsers/lalr_parser.py", line 68, in parse
    for token in stream:
  File "/Users/branquif/Dropbox/swf_projects/schema-manager/.venv/lib/python3.7/site-packages/lark/indenter.py", line 31, in process
    for token in stream:
  File "/Users/branquif/Dropbox/swf_projects/schema-manager/.venv/lib/python3.7/site-packages/lark/lexer.py", line 319, in lex
    for x in l.lex(stream, self.root_lexer.newline_types, self.root_lexer.ignore_types):
  File "/Users/branquif/Dropbox/swf_projects/schema-manager/.venv/lib/python3.7/site-packages/lark/lexer.py", line 167, in lex
    raise UnexpectedCharacters(stream, line_ctr.char_pos, line_ctr.line, line_ctr.column, state=self.state)
lark.exceptions.UnexpectedCharacters: No terminal defined for 'f' at line 11 col 2
field3 String "inline documentation also
^
I understand indented grammars are more complex, and lark seems to make them easier, but I cannot find the mistake here.
PS: I also tried pyparsing, without success with this same scenario, and it would be hard for me to move to PLY, given the amount of code that would probably be needed.

The bug comes from misplaced _NEWLINE terminals. Generally, it's recommended to make sure rules are balanced, in terms of their role in the grammar. So here's how you should have defined element_doc:
?element_doc: ":" _NEWLINE _INDENT multiline_doc _DEDENT
| inline_doc _NEWLINE
Notice the added newline, which means that no matter which of the two options the parser takes, it ends in a similar state, syntax-wise (_DEDENT also matches a newline).
The second change, as a consequence of the first one, is:
attribute_simple_type: attribute_name simple_type (element_doc|_NEWLINE)
As element_doc already handles newlines, we shouldn't try to match it twice.
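Putting the two changes together, here is a quick sanity check (a sketch, assuming the grammar string, the two sample inputs, and the SchemanIndenter post-lexer from the question are in scope) that patches the two rules in place and re-parses the previously failing sample:
fixed_grammar = grammar.replace(
    '?element_doc: ":" _NEWLINE _INDENT multiline_doc _DEDENT | inline_doc',
    '?element_doc: ":" _NEWLINE _INDENT multiline_doc _DEDENT | inline_doc _NEWLINE'
).replace(
    'attribute_simple_type: attribute_name simple_type [element_doc] _NEWLINE',
    'attribute_simple_type: attribute_name simple_type (element_doc|_NEWLINE)'
)

fixed_parser = Lark(fixed_grammar, parser='lalr', postlex=SchemanIndenter())
print(fixed_parser.parse(scheman_data_works).pretty())
print(fixed_parser.parse(scheman_data_wrong).pretty())
With these two rules swapped in, both samples should parse, including the multiline_doc node under field2.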

You mentioned trying pyparsing, otherwise I would have left your question alone.
Whitespace-sensitive parsing is not great with pyparsing, but it does make an effort on this kind of case, using pyparsing.indentedBlock. There is some awkwardness in writing this, but it can be done.
import pyparsing as pp

COLON = pp.Suppress(':')
tpl_quoted_string = pp.QuotedString('"""', multiline=True) | pp.QuotedString("'''", multiline=True)
quoted_string = pp.ungroup(tpl_quoted_string | pp.quotedString().addParseAction(pp.removeQuotes))
RECORD = pp.Keyword("record")
ident = pp.pyparsing_common.identifier()

field_expr = (ident("name")
              + ident("type") + pp.Optional(COLON)
              + pp.Optional(quoted_string)("docstring"))

indent_stack = []
STACK_RESET = pp.Empty()
def reset_indent_stack(s, l, t):
    indent_stack[:] = [pp.col(l, s)]
STACK_RESET.addParseAction(reset_indent_stack)

record_expr = pp.Group(STACK_RESET
                       + RECORD - ident("name") + COLON + pp.Optional(quoted_string)("docstring")
                       + (pp.indentedBlock(field_expr, indent_stack))("fields"))

record_expr.ignore(pp.pythonStyleComment)
If your example is written to a variable 'sample', do:
print(record_expr.parseString(sample).dump())
And get:
[['record', 'Order', 'Order record documentation\n should have arbitrary size', [['field1', 'Int'], ['field2', 'Datetime', 'Attributes should also have\n multiline documentation'], ['field3', 'String', 'inline documentation also works']]]]
[0]:
  ['record', 'Order', 'Order record documentation\n should have arbitrary size', [['field1', 'Int'], ['field2', 'Datetime', 'Attributes should also have\n multiline documentation'], ['field3', 'String', 'inline documentation also works']]]
  - docstring: 'Order record documentation\n should have arbitrary size'
  - fields: [['field1', 'Int'], ['field2', 'Datetime', 'Attributes should also have\n multiline documentation'], ['field3', 'String', 'inline documentation also works']]
    [0]:
      ['field1', 'Int']
      - name: 'field1'
      - type: 'Int'
    [1]:
      ['field2', 'Datetime', 'Attributes should also have\n multiline documentation']
      - docstring: 'Attributes should also have\n multiline documentation'
      - name: 'field2'
      - type: 'Datetime'
    [2]:
      ['field3', 'String', 'inline documentation also works']
      - docstring: 'inline documentation also works'
      - name: 'field3'
      - type: 'String'
  - name: 'Order'

Related

How to create a dataclass with optional fields that outputs field in json only if the field is not None

I am unclear about how to use a @dataclass to convert a mongo doc into a python dataclass. My NoSQL documents may or may not contain some of the fields. I only want to output a field (using asdict) from the dataclass if that field was present in the mongo document.
Is there a way to create a field that will be output with dataclasses.asdict only if it exists in the mongo doc?
I have tried using post_init but have not figured out a solution.
import json
from dataclasses import dataclass, asdict, InitVar

# in this example I want to output the 'author' field ONLY if it is present in the mongo document
@dataclass
class StoryTitle:
    _id: str
    title: str
    author: InitVar[str] = None
    dateOfPub: int = None

    def __post_init__(self, author):
        print(f'__post_init__ got called....with {author}')
        if author is not None:
            self.newauthor = author
            print(f'self.author is now {self.newauthor}')

# foo and bar approximate documents in mongodb
foo = dict(_id='b23435xx3e4qq', title='goldielocks and the big bears', author='mary', dateOfPub=220415)
newFoo = StoryTitle(**foo)
json_foo = json.dumps(asdict(newFoo))
print(json_foo)

bar = dict(_id='b23435xx3e4qq', title='War and Peace', dateOfPub=220415)
newBar = StoryTitle(**bar)
json_bar = json.dumps(asdict(newBar))
print(json_bar)
My output json does not (of course) have the 'author' field. Anyone know how to accomplish this? I suppose I could just create my own asdict method ...
The dataclasses.asdict helper function doesn't offer a way to exclude fields with default or un-initialized values unfortunately -- however, the dataclass-wizard library does.
The dataclass-wizard is a (de)serialization library I've created, built on top of the dataclasses module. It adds no extra dependencies outside of the stdlib, only the typing-extensions module for compatibility with earlier Python versions.
To skip dataclass fields with default or un-initialized values in serialization, for example with asdict, the dataclass-wizard provides the skip_defaults option. However, there is also a minor issue I noted with your code above: if we set the default for the author field to None, we won't be able to distinguish between a null value and the case where the author field is not present at all when de-serializing the json data.
So in the example below, I've created a CustomNull object similar to the None singleton in Python. The name and implementation don't matter much; in our case we use it as a sentinel object to determine whether a value for author was passed in or not. If it is not present in the input data when from_dict is called, then we simply exclude it when serializing the data with to_dict or asdict, as shown below.
from __future__ import annotations  # can be removed in Python 3.10+

from dataclasses import dataclass
from dataclass_wizard import JSONWizard


# create our own custom `NoneType` class
class CustomNullType:
    # these methods are not really needed, but useful to have.
    def __repr__(self):
        return '<null>'
    def __bool__(self):
        return False


# this is analogous to the builtin `None = NoneType()`
CustomNull = CustomNullType()


# in this example I want to output the 'author' field ONLY if it is present in the mongo document
@dataclass
class StoryTitle(JSONWizard):

    class _(JSONWizard.Meta):
        # skip default values for dataclass fields when `to_dict` is called
        skip_defaults = True

    _id: str
    title: str
    # note: we could also define it like
    #   author: str | None = None
    # however, using that approach we won't know if the value is
    # populated as a `null` when de-serializing the json data.
    author: str | None = CustomNull
    # by default, the `dataclass-wizard` library uses regex to case transform
    # json fields to snake case, and caches the field name for next time.
    # dateOfPub: int = None
    date_of_pub: int = None


# foo and bar approximate documents in mongodb
foo = dict(_id='b23435xx3e4qq', title='goldielocks and the big bears', author='mary', dateOfPub=220415)
new_foo = StoryTitle.from_dict(foo)
json_foo = new_foo.to_json()
print(json_foo)

bar = dict(_id='b23435xx3e4qq', title='War and Peace', dateOfPub=220415)
new_bar = StoryTitle.from_dict(bar)
json_bar = new_bar.to_json()
print(json_bar)

# lastly, we try de-serializing with `author=null`. the `author` field should still
# be populated when serializing the instance, as it was present in input data.
bar = dict(_id='b23435xx3e4qq', title='War and Peace', dateOfPub=220415, author=None)
new_bar = StoryTitle.from_dict(bar)
json_bar = new_bar.to_json()
print(json_bar)
Output:
{"_id": "b23435xx3e4qq", "title": "goldielocks and the big bears", "author": "mary", "dateOfPub": 220415}
{"_id": "b23435xx3e4qq", "title": "War and Peace", "dateOfPub": 220415}
{"_id": "b23435xx3e4qq", "title": "War and Peace", "author": null, "dateOfPub": 220415}
Note: the dataclass-wizard can be installed with pip:
$ pip install dataclass-wizard

In ruamel.yaml, how do I emit a ScalarEvent with the literal string "null"?

I am using ruamel.yaml to emit a series of events to create a custom YAML file format mixing flow styles.
I've found myself unable to emit a ScalarEvent with the value "null", such that it appears in the YAML file as the string 'null', rather than as the YAML keyword null.
In code form, if I try
dumper = yaml.Dumper(out_file, width=200)
param = 'field'
param_value = 'null'
dumper.emit(yaml.MappingStartEvent(anchor=None, tag=None, implicit=True, flow_style=True))
dumper.emit(yaml.ScalarEvent(anchor=None, tag=None, implicit=(True, True), value=param))
dumper.emit(yaml.ScalarEvent(anchor=None, tag=None, implicit=(True, True), value=param_value))
dumper.emit(yaml.MappingEndEvent())
I get
field: null
whereas I'd like to see
field: 'null'
Your code is incomplete, but since you set flow_style=True on the mapping event, you did not get the output that you show, and you are never going to get the output that you want.
If you want to go this route, then look at the only place in the code where ruamel.yaml emits a ScalarEvent. It is in serializer.py:
self.emitter.emit(
    ScalarEvent(
        alias,
        node.tag,
        implicit,
        node.value,
        style=node.style,
        comment=node.comment,
    )
)
From that you can pick up that you need to add the style parameter. Further digging will show that this should be a single-character string, in your case the single quote ("'"), to force a single-quoted scalar.
import sys
import ruamel.yaml as yaml
dumper = yaml.Dumper(sys.stdout, width=200)
param = 'field'
param_value = 'null'
dumper.emit(yaml.StreamStartEvent())
dumper.emit(yaml.DocumentStartEvent())
# !!!! changed flow_style in the next line
dumper.emit(yaml.MappingStartEvent(anchor=None, tag=None, implicit=True, flow_style=False))
dumper.emit(yaml.ScalarEvent(anchor=None, tag=None, implicit=(True, True), value=param))
# added style= in next line
dumper.emit(yaml.ScalarEvent(anchor=None, tag=None, implicit=(True, True), style="'", value=param_value))
dumper.emit(yaml.MappingEndEvent())
dumper.emit(yaml.DocumentEndEvent())
dumper.emit(yaml.StreamEndEvent())
which gives what you want:
field: 'null'
However, I think you are making your life way more difficult than necessary. ruamel.yaml does preserve flow style on round-trip, so you can create a functional data structure and dump that instead of reverting to driving the dumper with events:
import sys
import ruamel.yaml
yaml_str = """\
a:
- field: 'null'
  x: y
- {field: 'null', x: y}
"""

yaml = ruamel.yaml.YAML()
data = yaml.load(yaml_str)
for i in range(2):
    data["a"].append(ruamel.yaml.comments.CommentedMap([("a", "b"), ("c", "d")]))
data["a"][-1].fa.set_flow_style()
yaml.dump(data, sys.stdout)
this gives:
a:
- field: 'null'
  x: y
- {field: 'null', x: y}
- a: b
  c: d
- {a: b, c: d}

How to update only 1 attribute of json type column in postgres table using python api code

I have a postgresql(v.9.5) table called products defined using sqlalchemy core as:
products = Table("products", metadata,
                 Column("id", Integer, primary_key=True),
                 Column("name", String, nullable=False, unique=True),
                 Column("description", String),
                 Column("list_price", Float),
                 Column("xdata", JSON))
Assume the data in the table was added as follows:
id | name | description | list_price | xdata
----+------------+---------------------------+------------+--------------------------------
24 | Product323 | description of product332 | 6000 | [{"category": 1, "uom": "kg"}]
The API edit code is as follows:
def edit_product(product_id):
    if 'id' in session:
        exist_data = {}
        mkeys = []
        s = select([products]).where(products.c.id == product_id)
        rs = g.conn.execute(s)
        if rs.rowcount == 1:
            data = request.get_json(force=True)
            for r in rs:
                exist_data = dict(r)
            try:
                print exist_data, 'exist_data'
                stmt = products.update().values(data).\
                    where(products.c.id == product_id)
                rs1 = g.conn.execute(stmt)
                return jsonify({'id': "Product details modified"}), 204
            except Exception, e:
                print e
                return jsonify(
                    {'message': "Couldn't modify details / Duplicate"}), 400
    return jsonify({'message': "UNAUTHORIZED"}), 401
Assuming that I would like to modify only the "category" value in the xdata column, without disturbing the "uom" attribute and its value, what is the best way to achieve it? I have tried using a for loop to get the attributes of the existing values and then checking them against the passed attribute values before updating. I am sure there is a better way than this; please suggest the changes required to simplify it.
Postgresql offers the function jsonb_set() for replacing a part of a jsonb with a new value. Your column is using the json type, but a simple cast will take care of that.
from sqlalchemy import func
from sqlalchemy.dialects.postgresql import JSON, JSONB, array
import json

def edit_product(product_id):
    ...
    # If xdata is present, replace with a jsonb_set() function expression
    if 'xdata' in data:
        # Hard coded path that expects a certain structure
        data['xdata'] = func.jsonb_set(
            products.c.xdata.cast(JSONB),
            array(['0', 'category']),
            # A bit ugly, yes, but the 3rd argument to jsonb_set() has type
            # jsonb, and so the passed literal must be convertible to that
            json.dumps(data['xdata'][0]['category'])).cast(JSON)
You could also devise a generic helper that creates nested calls to jsonb_set(), given some structure:
import json

from sqlalchemy import func
from sqlalchemy.dialects.postgresql import array

def to_jsonb_set(target, value, create_missing=True, path=()):
    expr = target
    if isinstance(value, dict):
        for k, v in value.items():
            expr = to_jsonb_set(expr, v, create_missing, (*path, k))
    elif isinstance(value, list):
        for i, v in enumerate(value):
            expr = to_jsonb_set(expr, v, create_missing, (*path, i))
    else:
        expr = func.jsonb_set(
            expr,
            array([str(p) for p in path]),
            json.dumps(value),
            create_missing)
    return expr
but that's probably overdoing it.
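If you did go down that route, usage would look something like the following sketch (hypothetical; it assumes the same products table, g.conn connection and JSON request payload data as in the question, plus the JSON/JSONB imports shown in the first snippet):
# Sketch: rewrite the whole xdata payload through nested jsonb_set() calls,
# assuming `products`, `g.conn` and the request payload `data` are in scope.
if 'xdata' in data:
    data['xdata'] = to_jsonb_set(
        products.c.xdata.cast(JSONB),
        data['xdata']).cast(JSON)

stmt = products.update().values(data).where(products.c.id == product_id)
g.conn.execute(stmt)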

How to get a parse in a bracketed format (without POS tags)?

I want to parse a sentence to a binary parse of this form (Format used in the SNLI corpus):
sentence:"A person on a horse jumps over a broken down airplane."
parse: ( ( ( A person ) ( on ( a horse ) ) ) ( ( jumps ( over ( a ( broken ( down airplane ) ) ) ) ) . ) )
I'm unable to find a parser which does this.
note: This question has been asked earlier (How to get a binary parse in Python). But the answers are not helpful. And I was unable to comment because I do not have the required reputation.
Here is some sample code which will erase the labels for each node in the tree.
package edu.stanford.nlp.examples;

import edu.stanford.nlp.ling.*;
import edu.stanford.nlp.pipeline.*;
import edu.stanford.nlp.trees.*;
import edu.stanford.nlp.util.*;

import java.util.*;

public class PrintTreeWithoutLabelsExample {

  public static void main(String[] args) {
    // set up pipeline properties
    Properties props = new Properties();
    props.setProperty("annotators", "tokenize,ssplit,pos,lemma,parse");
    // use faster shift reduce parser
    props.setProperty("parse.model", "edu/stanford/nlp/models/srparser/englishSR.ser.gz");
    props.setProperty("parse.maxlen", "100");
    props.setProperty("parse.binaryTrees", "true");
    // set up Stanford CoreNLP pipeline
    StanfordCoreNLP pipeline = new StanfordCoreNLP(props);
    // build annotation for text
    Annotation annotation = new Annotation("The red car drove on the highway.");
    // annotate the review
    pipeline.annotate(annotation);
    for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
      Tree sentenceConstituencyParse = sentence.get(TreeCoreAnnotations.TreeAnnotation.class);
      for (Tree subTree : sentenceConstituencyParse.subTrees()) {
        if (!subTree.isLeaf())
          subTree.setLabel(CoreLabel.wordFromString(""));
      }
      TreePrint treePrint = new TreePrint("oneline");
      treePrint.printTree(sentenceConstituencyParse);
    }
  }
}
I analyzed the accepted answer and, as I needed something in Python, made a simple function that produces the same results. For parsing the sentences I adapted the version found at the referenced link.
import re
import string
from stanfordcorenlp import StanfordCoreNLP
from nltk import Tree
from functools import reduce

regex = re.compile('[%s]' % re.escape(string.punctuation))

def parse_sentence(sentence):
    nlp = StanfordCoreNLP(r'./stanford-corenlp-full-2018-02-27')
    sentence = regex.sub('', sentence)
    result = nlp.parse(sentence)
    result = result.replace('\n', '')
    result = re.sub(' +', ' ', result)
    nlp.close()  # Do not forget to close! The backend server will consume a lot of memory.
    return result.encode("utf-8")

def binarize(parsed_sentence):
    sentence = parsed_sentence.replace("\n", "")
    for pattern in ["ROOT", "SINV", "NP", "S", "PP", "ADJP", "SBAR",
                    "DT", "JJ", "NNS", "VP", "VBP", "RB"]:
        sentence = sentence.replace("({}".format(pattern), "(")
    sentence = re.sub(' +', ' ', sentence)
    return sentence
Neither my version nor the accepted one delivers the same results as presented in the SNLI or MultiNLI corpus, as those corpora gather two single leaves of the tree into one node. An example from the MultiNLI corpus shows
"( ( The ( new rights ) ) ( are ( nice enough ) ) )",
whereas both answers here return
'( ( ( ( The) ( new) ( rights)) ( ( are) ( ( nice) ( enough)))))'.
I am not an expert in NLP, so I hope this does not make any difference. At least it does not for my applications.
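For completeness, a minimal usage sketch chaining the two functions above (the sentence is the one from the question; it assumes the CoreNLP distribution is unpacked at the path hard-coded in parse_sentence):
# parse_sentence() returns UTF-8 bytes, so decode before handing the
# bracketed string to binarize().
parsed = parse_sentence("A person on a horse jumps over a broken down airplane.")
print(binarize(parsed.decode("utf-8")))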

appengine: convert ndb model to go lang struct

I've got a python module and a go module on app engine. The go module is fairly simple and just provides a readonly search interface to the datastore which is populated by the python module.
How do I convert the following ndb model into a go struct:
class Course(ndb.Model):
    name = ndb.StringProperty()
    neat_name = ndb.StringProperty(required=True)
    country = ndb.KeyProperty(kind=Country, required=True)
    university = ndb.KeyProperty(kind=University, required=True)
    faculty = ndb.KeyProperty(kind=Faculty, required=True)
    department = ndb.KeyProperty(kind=Department, required=True)
    stage = ndb.KeyProperty(kind=Stage, required=True)
    legacy_id = ndb.StringProperty()
    course_title = ndb.StringProperty(required=True, indexed=False)
    course_description = ndb.TextProperty(required=True)
    course_link = ndb.StringProperty(required=True, indexed=False)
    # 0-5 or None or not has attribute.
    course_rating_ = ndb.FloatProperty()
    course_review_count_ = ndb.IntegerProperty()
To start with I'll have:
type Course struct {
    Name     string `datastore:"name"`
    NeatName string `datastore:"neat_name"`
    ...
}
For the ndb.KeyProperty properties, do I just use a string in my struct? And I'll have to parse that string - is that straightforward?
Also, can I just ignore the required=True & indexed=False options, obviously since I'm not doing any writes?
Per https://cloud.google.com/appengine/docs/go/datastore/entities#Go_Properties_and_value_types , String (a short string of up to 500 characters, indexed by default) maps to Go string. Text (a long string of up to 1MB, not indexed) also maps to Go string, but always with noindex. For a datastore Key there is *datastore.Key, see https://cloud.google.com/appengine/docs/go/datastore/reference#Key . Integer maps to int64 and Float to float64 (you could use shorter ints and floats, but the datastore uses 64 bits for each anyway, so you might as well:-).
Also can I just ignore the required=True & indexed=False options?
Yes for required, but I believe that, per https://cloud.google.com/appengine/docs/go/datastore/reference , you do have to use the noindex option for Text, because it's necessary to indicate strings that can be longer than 512 (unicode) characters.
I'm not sure which versions of Go and its datastore package enforce this constraint, but even if the present one doesn't, it's safer to respect it anyway -- or else your app might break with a simple Go version upgrade!-)
Here's the code - it's working in production & locally too:
type Course struct {
    Name              string         `datastore:"name"`
    NeatName          string         `datastore:"neat_name"`
    Country           *datastore.Key `datastore:"country"`
    University        *datastore.Key `datastore:"university"`
    Faculty           *datastore.Key `datastore:"faculty"`
    Department        *datastore.Key `datastore:"department"`
    Stage             *datastore.Key `datastore:"stage"`
    LegacyId          string         `datastore:"legacy_id"`
    CourseTitle       string         `datastore:"course_title,noindex"`
    CourseDescription string         `datastore:"course_description"`
    CourseLink        string         `datastore:"course_link,noindex"`
    CourseRating      float64        `datastore:"course_rating_"`
    CourseReviewCount int64          `datastore:"course_review_count_"`
}
and
func (ttt *EdSearchApi) Search(r *http.Request,
    req *SearchQuery, resp *SearchResults) error {
    c := appengine.NewContext(r)
    q := datastore.NewQuery("Course").Limit(1)
    var courses []Course
    _, err := q.GetAll(c, &courses)
    c.Infof("err %v", err)
    c.Infof("courses 0: %v", courses[0])
    c.Infof("!!!")
    return nil
}
