I'm not sure whether the following is a bug or simply not implemented.
The situation: I have the following JSON snippet in my MongoDB:
[{
    "firstname": "Test",
    "surname": "Test",
    "email_address": "example#example.com",
    "country": "Austria",
    "holiday_home": {
        "address": "",
        "address_2": "",
        "city": "",
        "country": "Germany",
        "postal_code": "",
        "state_province": ""
    }
}]
I managed to display the "first level values" (firstname, surname, email, country) in a "standard view" like this without any issues:
class RegistrantView(ModelView):
    column_list = ('firstname', 'surname', 'email_address', 'country')
    form = RegistrantForm
Unfortunately, I cannot manage to access the key/value pairs nested in "holiday_home".
I've tried numerous ways, like column_list = ([holiday_home]['country']), but unfortunately didn't succeed.
Hence I'd like to ask whether this is even possible using Flask-Admin with PyMongo.
Old question, but as of now there seems to be no way to do what is requested directly from a PyMongo ModelView: there is no special notation that lets you get a value from a nested document by default.
That being said, I just shared two solutions for this very specific issue here:
https://github.com/flask-admin/flask-admin/issues/1585
First, you can duplicate/override the _get_field_value function in your flask_admin_custom.contrib.pymongo.views so that it handles dotted notation like "holiday_home.country". A proposal:
from functools import reduce  # reduce lives in functools on Python 3

def _get_field_value(self, model, name):
    """
    Get the unformatted field value from the model,
    following dotted paths into nested documents.
    """
    try:
        # "holiday_home.country" -> model["holiday_home"]["country"]
        return reduce(dict.get, name.split('.'), model)
    except (AttributeError, TypeError):
        # fall back to a flat lookup when the path does not resolve
        return model.get(name)
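For illustration, the dotted lookup can be exercised on a plain dict shaped like the document from the question (standalone sketch, no Flask-Admin required):

```python
from functools import reduce

# sample document matching the question's structure
doc = {
    "firstname": "Test",
    "country": "Austria",
    "holiday_home": {"country": "Germany", "city": ""},
}

# walk the nested document one key at a time
print(reduce(dict.get, "holiday_home.country".split("."), doc))  # Germany
print(reduce(dict.get, "country".split("."), doc))               # Austria
```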
If you would rather not diverge from the official Flask-Admin code, you can still use a column formatter, as pointed out by Joost.
To do so, just add a custom name to your column list:
column_list = ("firstname", "surname", "email_address", "country")
Then catch this exact name in your column formatters:
column_formatters = {
    'country': _country_formatter
}
Don't forget to define your formatting function somewhere in your code, e.g.:
def _country_formatter(view, context, model, name):
    return model['holiday_home']['country']
That's it!
The second solution sounds more like a hack, but I think this is the way it is currently done by most people.
I have these models (for illustration only):
class Customer(models.Model):
    created = models.DateTimeField(auto_now_add=True)

class Person(Customer):
    first_name = models.CharField(max_length=40)
    last_name = models.CharField(max_length=40)
    # More data related to a person

class Company(Customer):
    company_name = models.CharField(max_length=100)
    # More data related to a company
As you can see, I can have two "types" of customers, but they are both "customers" (I think this is a textbook example of inheritance). Everything works fine at the database level, but now I need to create an endpoint that will "dynamically" show the customer data based on which "type" of customer it is. Something like this:
[
    {
        "id": 1,
        "first_name": "John",
        "last_name": "Doe"
    },
    {
        "id": 2,
        "company_name": "Big buck"
    },
    {
        "id": 3,
        "first_name": "Jane",
        "last_name": "Doe"
    }
]
When working within my project, I use things like:
customer = Customer.objects.get(pk=100)
if hasattr(customer, 'person'):
    pass  # Do some stuff
elif hasattr(customer, 'company'):
    pass  # Do some other stuff
# Do common stuff related to both customer 'types'
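For what it's worth, the per-type branching can be sketched with plain Python classes (no Django here; isinstance stands in for the hasattr checks) to show the output shape the endpoint should produce:

```python
class Customer:
    def __init__(self, id):
        self.id = id

class Person(Customer):
    def __init__(self, id, first_name, last_name):
        super().__init__(id)
        self.first_name = first_name
        self.last_name = last_name

class Company(Customer):
    def __init__(self, id, company_name):
        super().__init__(id)
        self.company_name = company_name

def serialize(customer):
    # isinstance() plays the role of the hasattr() checks above
    if isinstance(customer, Person):
        return {"id": customer.id,
                "first_name": customer.first_name,
                "last_name": customer.last_name}
    if isinstance(customer, Company):
        return {"id": customer.id, "company_name": customer.company_name}

customers = [Person(1, "John", "Doe"), Company(2, "Big buck"), Person(3, "Jane", "Doe")]
print([serialize(c) for c in customers])
```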
So far, I've managed to work around this by using different endpoints for "Person" and "Company" customers, but now I need to get both "types" of customers from a single endpoint, and I can't figure out how to write a serializer for this!
Looking around I found this example using polymorphic models; however, since the project is quite big now, I'd like to keep things as "vanilla" as possible. So my specific questions are:
Is there a way to create a "polymorphic" serializer using the above model definition?
If django-polymorphic (and django-rest-polymorphic) are the way to go, will using them break the existing functionality?
I am using the SQLAlchemy 1.4 ORM with Postgres 13 and Python 3.7.
EDITED FOR CLARITY AND REFINEMENT:
To stand up the project and test it out, these 3 files are working well:
base.py --> setting up SQLAlchemy Engine and database session
models.py --> a User class is defined with a number of fields
inserts.py --> Creating instances of the User class, adding and committing them to database
This all works well provided models.py has a hardcoded Class already defined (such as the User Class).
I have a schema.json file that defines database schema. The file is very large with many nested dictionaries.
The goal is to parse the json file and use the given schema to create Python Classes in models.py (or whichever file) automatically.
An example of schema.json:
{
    "groupings": {
        "imaging": {
            "owner": { "type": "uuid", "required": true, "index": true },
            "tags": { "type": "text", "index": true },
            "filename": { "type": "text" }
        },
        "user": {
            "email": { "type": "text", "required": true, "unique": true },
            "name": { "type": "text" },
            "role": {
                "type": "text",
                "required": true,
                "values": ["admin", "customer"],
                "index": true
            },
            "date_last_logged": { "type": "timestamptz" }
        }
    },
    "auths": {
        "boilerplate": {
            "owner": ["read", "update", "delete"],
            "org_account": [],
            "customer": ["create", "read", "update", "delete"]
        },
        "loggers": {
            "owner": [],
            "customer": []
        }
    }
}
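For reference, once the snippet is valid JSON it can be walked with the stdlib json module; a minimal sketch using an abbreviated version of the schema above:

```python
import json

schema = json.loads("""
{
    "groupings": {
        "imaging": {
            "owner": {"type": "uuid", "required": true, "index": true},
            "filename": {"type": "text"}
        },
        "user": {
            "email": {"type": "text", "required": true, "unique": true}
        }
    }
}
""")

# each grouping becomes a table; each entry becomes a column spec
for table, columns in schema["groupings"].items():
    for name, spec in columns.items():
        print(table, name, spec["type"])
```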
The models' Classes need to be created on the fly by parsing the json because the schema might change in the future and manually hardcoding 100+ classes each time doesn't scale.
I have spent time researching this but have not been able to find a completely successful solution. Currently this is how I am handling the parsing and dynamic table creation.
I have this function create_class(table_data), which gets passed an already-parsed dictionary containing all the table names, column names, and column constraints. The trouble is, I cannot figure out how to create the table with its constraints. Currently, running this code commits the table to the database, but in terms of columns it only takes what it inherited from Base (the automatically generated PK ID).
All of the column names and constraints written into constraint_dict are ignored.
The line #db_session.add(MyTableClass) is commented out because it errors with "sqlalchemy.orm.exc.UnmappedInstanceError: Class 'sqlalchemy.orm.decl_api.DeclarativeMeta' is not mapped; was a class (__main__.MyTableClass) supplied where an instance was required?"
I think this must have something to do with the order of operations: I am creating an instance of a class before the class itself has been committed. I realise this further confuses things, as I'm not actually calling MyTableClass.
def create_class(table_data):
    constraint_dict = {'__tablename__': 'myTableName'}
    for k, v in table_data.items():
        if 'table' in k:
            constraint_dict['__tablename__'] = v
        else:
            constraint_dict[k] = f'Column({v})'
    MyTableClass = type('MyTableClass', (Base,), constraint_dict)
    Base.metadata.create_all(bind=engine)
    # db_session.add(MyTableClass)
    db_session.commit()
    db_session.close()
    return
I'm not quite sure what to take a look at to complete this last step of getting all columns and their constraints committed to the database. Any help is greatly appreciated!
This does not answer your question directly, but rather poses a different strategy. If you expect the JSON data to change frequently, you could consider creating a simple model with just an id and a data column, essentially using Postgres as a JSON document store.
from sqlalchemy.dialects.postgresql import JSONB

class Schema(db.Model):
    id = db.Column(db.Integer(), nullable=False, primary_key=True)
    data = db.Column(JSONB)
sqlalchemy: postgres dialect JSONB type
The JSONB data type is preferable to the JSON data type in Postgres because the binary representation is more efficient to search through, though JSONB takes slightly longer to insert than JSON. You can read more about the distinction between the JSON and JSONB data types in the Postgres docs.
This SO post explains how you can use SQLAlchemy to perform JSON operations in Postgres: sqlalchemy filter by json field
I have this JSONField in my Django model:
{
    "site-acces": {
        "title": "digimon",
        "compare": {
            "with": "sagromon"
        }
    },
    "site-denied": {
        "title": "pokemon",
        "compare": {
            "with": "salameche"
        }
    }
}
I would like to write a Django query that searches my JSON for all objects whose title contains "pokemon".
I tried this:
pokemon.filter(widgets__contains={'title': 'pokemon'})
but it's not working... it returns an empty queryset.
So I also tried this:
pokemon.filter(widgets__title='pokemon')
but that's not working either. I think it fails because the "title" key is nested inside "site-denied"...
So I'm asking how I can search for a string inside "site-denied". But take care! It's not always "site-denied"; sometimes it could be "site-acces", or some other random string, so I can't build the search on the "site-denied" key.
If only two keys are possible, use an OR of Q expressions:
from django.db.models import Q

pokemon.filter(
    Q(**{'widgets__site-acces__title__contains': 'pokemon'}) |
    Q(**{'widgets__site-denied__title__contains': 'pokemon'})
)
If you don't know all the possible keys, consider storing the titles in a different structure.
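Alternatively, when the top-level keys are arbitrary, the matching keys can be found client-side by scanning the parsed JSON; a plain-Python sketch using the document from the question:

```python
widgets = {
    "site-acces": {"title": "digimon", "compare": {"with": "sagromon"}},
    "site-denied": {"title": "pokemon", "compare": {"with": "salameche"}},
}

# collect the top-level keys whose nested "title" contains the search string
matches = [key for key, obj in widgets.items() if "pokemon" in obj.get("title", "")]
print(matches)  # ['site-denied']
```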
I am starting to add Tastypie to a very small Django application I'm developing, and I was wondering if there is a way to send just the numeric id of a resource pointed to by a relationship, instead of the URI where the resource resides.
For instance, using one of the examples provided in the documentation:
The exposed "Entry" resource looks like:
{
    "body": "Welcome to my blog!",
    "id": "1",
    "pub_date": "2011-05-20T00:46:38",
    "resource_uri": "/api/v1/entry/1/",
    "slug": "first-post",
    "title": "First Post",
    "user": "/api/v1/user/1/"
}
It has a relationship towards "user" that shows as "user": "/api/v1/user/1/". Is there any way of just making it "user": 1 (integer, if possible) so it looks like the following?
{
    "body": "Welcome to my blog!",
    "id": "1",
    "pub_date": "2011-05-20T00:46:38",
    "resource_uri": "/api/v1/entry/1/",
    "slug": "first-post",
    "title": "First Post",
    "user": 1
}
I like the idea of keeping the resource_uri attribute as a whole, but when it comes to modeling SQL relationships, I'd rather have just the id (or a list of numeric ids, if the relationship is "ToMany"). Would it be a good idea to add a dehydrate_user method to the EntryResource class to do this? It seems to work, but maybe there's a more generic way of doing it (to avoid having to write a dehydrate method for every relationship).
Thank you in advance
You can try using the hydrate/dehydrate cycle:
def dehydrate(self, bundle):
    bundle.data['entry'] = bundle.obj.entry.id
    return bundle

def hydrate(self, bundle):
    bundle.data['entry'] = Entry.objects.get(id=bundle.data['entry'])
    return bundle
BUT I strongly recommend sticking with URI usage, since it is how you can address a resource directly. hydrate and dehydrate are meant for more complex or virtual resources.
I am trying to figure out whether I can expose model field choices to clients consuming a Tastypie API.
I have a django (1.4.1) application for which I am implementing a django-tastypie (0.9.11) API. I have a Model and ModelResource similar to the following:
class SomeModel(models.Model):
    QUEUED, IN_PROCESS, COMPLETE = range(3)
    STATUS_CHOICES = (
        (QUEUED, 'Queued'),
        (IN_PROCESS, 'In Process'),
        (COMPLETE, 'Complete'),
    )
    name = models.CharField(max_length=50)
    status = models.IntegerField(choices=STATUS_CHOICES, default=QUEUED)

class SomeModelResource(ModelResource):
    class Meta:
        queryset = SomeModel.objects.all()
        resource_name = 'some_model'
When I look at objects in the API, the name and status fields are displayed as follows:
{
    ...
    "objects": [
        {
            "name": "Some name 1",
            "status": 0
        },
        {
            "name": "Some name 2",
            "status": 2
        }
    ]
}
I know I can alter SomeModelResource with hydrate/dehydrate methods to display the string values for status as follows, which would have more value to clients:
{
    ...
    "objects": [
        {
            "name": "Some name 1",
            "status": "Queued"
        },
        {
            "name": "Some name 2",
            "status": "Complete"
        }
    ]
}
But how would the client know the available choices for the status field without knowing the inner workings of SomeModel?
Clients creating objects in the system may not provide a status as the default value of QUEUED is desirable. But clients that are editing objects need to know the available options for status to provide a valid option.
I would like for the choices to be listed in the schema description for SomeModelResource, so the client can introspect the available choices when creating/editing objects. But I am just not sure whether this is something available out of the box in tastypie, or if I should fork tastypie to introduce the capability.
Thanks for any feedback!
You can add the choices to the schema by overriding the build_schema method in your resource. If you want to add the choices of any field that has them (perhaps for use across many resources), you could write the method as follows:
def build_schema(self):
    base_schema = super(SomeModelResource, self).build_schema()
    for f in self._meta.object_class._meta.fields:
        if f.name in base_schema['fields'] and f.choices:
            base_schema['fields'][f.name].update({
                'choices': f.choices,
            })
    return base_schema
I haven't tested the above code, but I hope you get the idea. Note that object_class will only be set if you use Tastypie's ModelResource, as it is derived from the provided queryset.
A simpler solution is to hack the choices information into your help_text blurb.
In our example we were able to do:
source = models.CharField(
    help_text="the source of the document, one of: %s" % ', '.join(['%s (%s)' % (t[0], t[1]) for t in DOCUMENT_SOURCES]),
    choices=DOCUMENT_SOURCES,
)
Easy peasy, automatically stays up to date, and is pretty much side-effect free.
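Standalone, the interpolation yields a string like the following (the DOCUMENT_SOURCES values here are made up for illustration):

```python
# hypothetical (value, label) pairs standing in for the real DOCUMENT_SOURCES
DOCUMENT_SOURCES = (
    ("scan", "Scanned paper"),
    ("upload", "Web upload"),
)

# same expression as in the help_text above
help_text = "the source of the document, one of: %s" % ', '.join(
    ['%s (%s)' % (t[0], t[1]) for t in DOCUMENT_SOURCES]
)
print(help_text)  # the source of the document, one of: scan (Scanned paper), upload (Web upload)
```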