Unable to get individual mongo document - python

I am fairly new to Python and MongoDB and I'm already facing an issue.
I am trying to "translate" a Node.js back-end REST API into Flask, using MongoDB as the data source.
Using the Flask documentation, I was able to configure my app to connect to my local mongod, and I can read values from the users collection like this:
def getUser():
    usr = Users.objects(email="mail#mail.com")
    return {
        "user": usr,
    }
Calling the API through Postman returns the following JSON:
{
    "user": [
        {
            "__v": 0,
            "_id": {
                "$oid": "5da86dc651eac87d2a82e2e2"
            },
            "createdAt": {
                "$date": 1571319238918
            },
            "email": "mail#mail.com",
            "password": "$2b$10$hoH57R5GL1MrwqpuW4yEJ.wwLlyNgyfxQm2Mxb19wioYTPPsU9z7y",
            "profil": {
                "_id": {
                    "$oid": "5da86dc651eac87d2a82e2e3"
                },
                "checked": false,
                "clapList": [],
                "followerList": [],
                "followingList": [],
                "playpoint": 0
            },
            "updatedAt": {
                "$date": 1571319477959
            }
        }
    ]
}
As you can see, I have an array with one user in it. When I try to get only one object, like this:
def getUser():
    usr = Users.objects(email="mail#mail.com").first()
    return {
        "user": usr,
    }
I get a 500 status in Postman and the following error in my debug console:
mongoengine.errors.FieldDoesNotExist: The fields "{'__v'}" do not exist on the document "Users"
This is my Users model:
import mongoengine as me

class Users(me.Document):
    phone = me.StringField()
    email = me.StringField()
    password = me.StringField()
    accountType = me.StringField()
    createdAt = me.DateTimeField()
    updatedAt = me.DateTimeField()
    profil = me.EmbeddedDocumentField(Profil)
I have already tried adding __v as an IntField(), but I still get the same error.
What is that __v anyway, and should I rebuild the database from scratch?
Additional info:
The MongoDB database and collection were generated using the Node.js API
The users in the database were created using the Node.js API

So I added a meta property to the model and I'm now able to use the class:
meta = {
    'strict': False,
}
As it turns out, __v is the version key that Mongoose (the Node.js ODM that created these documents) adds to each document, and 'strict': False tells MongoEngine to ignore any fields present in the database that are not declared on the model.


Django Rest API from Database

I have 2 APIs from my existing project. One provides the latest blog posts and the other provides sorting details. The second API (sorting) returns blog post IDs and an ordering number indicating which should be in the 1st, 2nd, 3rd, ... nth position. If I filter the first API with a given ID, I can get that blog post's details.
How can I create a Django REST API from the database, or an API that merges those two APIs? Any tutorial or reference which might help me?
First API response:
{
    "count": 74,
    "next": "https://cms.example.com/api/v2/stories/?page=2",
    "previous": null,
    "results": [
        {
            "id": 111,
            "meta": {
                "type": "blog.CreateStory",
                "seo_title": "",
                "search_description": "",
                "first_published_at": "2022-10-09T07:29:17.029746Z"
            },
            "title": "A Test Blog Post"
        },
        {
            "id": 105,
            "meta": {
                "type": "blog.CreateStory",
                "seo_title": "",
                "search_description": "",
                "first_published_at": "2022-10-08T04:45:32.165072Z"
            },
            "title": "Blog Story 2"
        },
Second API response:
[
    {
        "featured_item": 1,
        "sort_order": 0,
        "featured_page": 105
    },
    {
        "featured_item": 1,
        "sort_order": 1,
        "featured_page": 90
    },
Here I want to create another API that provides the sorted details: for example, it would fetch https://cms.example.com/api/v2/stories/105 and return the Title, Image & Excerpt, and if there is no data from the sorting details it should fall back to the first API's response by default.
After searching, I found that you can build an API from a database view. In settings you set the database credentials, then create a class inside models.py, set db_table inside the class's Meta, and then create serializers.py and views.py as you would for any REST API.
class SortAPI(models.Model):
    featured_item_id = models.IntegerField()
    sort_order = models.IntegerField()
    title = models.TextField()
    first_published_at = models.DateTimeField()
    alternative_title = models.TextField()
    excerpt = models.TextField()
    sub_heading = models.TextField()
    news_slug = models.TextField()
    img_title = models.TextField()
    img_url = models.TextField()
    img_width = models.IntegerField()
    img_height = models.IntegerField()

    class Meta:
        db_table = 'view_featured'
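Independent of the database-view route, the merge-with-fallback logic itself can be sketched in plain Python against the two sample payloads above (function and variable names are my own, not part of either API):

```python
def merge_stories(stories, sort_entries):
    """Order posts by the sort API's sort_order; posts without a
    sort entry keep their original first-API order at the end."""
    order = {e["featured_page"]: e["sort_order"] for e in sort_entries}
    featured = sorted(
        (s for s in stories if s["id"] in order),
        key=lambda s: order[s["id"]],
    )
    rest = [s for s in stories if s["id"] not in order]
    return featured + rest

# With the sample payloads above: story 105 has sort_order 0,
# story 111 has no sort entry, so it follows in its original order.
posts = [{"id": 111, "title": "A Test Blog Post"},
         {"id": 105, "title": "Blog Story 2"}]
sort_data = [{"featured_item": 1, "sort_order": 0, "featured_page": 105}]
print([s["id"] for s in merge_stories(posts, sort_data)])  # [105, 111]
```

A DRF view could apply this function to the two querysets (or upstream responses) before serializing.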

Stripe MetaData Working Properly But Not Showing Up on Stripe Dashboard

I've implemented Stripe checkout on a Django app and it's all working correctly, except that the metadata is not showing up on the Stripe Dashboard, even though it appears in the event data on the same page. Have I formatted it incorrectly, or am I overlooking something obvious?
This is how I added the metadata:
checkout_session = stripe.checkout.Session.create(
    payment_method_types=['card'],
    line_items=line_itemz,
    metadata={
        "payment_type": "schedule_visit",
        "visit_id": visit_id
    },
    mode='payment',
    success_url='http://localhost:8000/success',
    cancel_url='http://localhost:8000/cancel',
)
The Metadata section on the Dashboard is empty, but in the events the metadata is there as it should be.
Again, I can access the metadata everywhere else, but I would like it to show up on the Dashboard so my team can access that information more easily.
The metadata field you set is on the Checkout Session alone, not on the Payment Intent (which is the Dashboard page you are looking at). To have metadata shown on the Payment Intent, I'd suggest also setting payment_intent_data.metadata [0] in the request when creating the Checkout Session.
For example,
session = stripe.checkout.Session.create(
    success_url="https://example.com/success",
    cancel_url="https://example.com/cancel",
    line_items=[
        {
            "price": "price_xxx",
            "quantity": 1,
        },
    ],
    mode="payment",
    metadata={
        "payment_type": "schedule_visit",
        "visit_id": "123"
    },
    payment_intent_data={
        "metadata": {
            "payment_type": "schedule_visit",
            "visit_id": "123"
        }
    }
)
[0] https://stripe.com/docs/api/checkout/sessions/create#create_checkout_session-payment_intent_data-metadata

How to parse a JSON schema file and create new Python classes dynamically with many column constraints?

I am using the SQLAlchemy 1.4 ORM with Postgres 13 and Python 3.7.
EDITED FOR CLARITY AND REFINEMENT:
To stand up the project and test it out, these 3 files are working well:
base.py --> setting up SQLAlchemy Engine and database session
models.py --> a User class is defined with a number of fields
inserts.py --> Creating instances of the User class, adding and committing them to database
This all works well provided models.py has a hardcoded Class already defined (such as the User Class).
I have a schema.json file that defines database schema. The file is very large with many nested dictionaries.
The goal is to parse the json file and use the given schema to create Python Classes in models.py (or whichever file) automatically.
An example of schema.json:
{
    "groupings": {
        "imaging": {
            "owner": { "type": "uuid", "required": true, "index": true },
            "tags": { "type": "text", "index": true },
            "filename": { "type": "text" }
        },
        "user": {
            "email": { "type": "text", "required": true, "unique": true },
            "name": { "type": "text" },
            "role": {
                "type": "text",
                "required": true,
                "values": [
                    "admin",
                    "customer"
                ],
                "index": true
            },
            "date_last_logged": { "type": "timestamptz" }
        }
    },
    "auths": {
        "boilerplate": {
            "owner": ["read", "update", "delete"],
            "org_account": [],
            "customer": ["create", "read", "update", "delete"]
        },
        "loggers": {
            "owner": [],
            "customer": []
        }
    }
}
The models' Classes need to be created on the fly by parsing the json because the schema might change in the future and manually hardcoding 100+ classes each time doesn't scale.
I have spent time researching this but have not found a completely successful solution. This is how I am currently handling the parsing and dynamic table creation.
I have this function create_class(table_data), which is passed an already-parsed dictionary containing all the table names, column names, and column constraints. The trouble is, I cannot figure out how to create the table with its constraints. Currently, running this code commits the table to the database, but in terms of columns it only has what it inherited from Base (the automatically generated PK ID).
All of the column names and constraints written into constraint_dict are ignored.
The line db_session.add(MyTableClass) is commented out because it errors with "sqlalchemy.orm.exc.UnmappedInstanceError: Class 'sqlalchemy.orm.decl_api.DeclarativeMeta' is not mapped; was a class (__main__.MyTableClass) supplied where an instance was required?"
I think this must have something to do with the order of operations - I am creating an instance of a class before the class itself has been committed. I realise this further confuses things, as I'm not actually calling MyTableClass.
def create_class(table_data):
    constraint_dict = {'__tablename__': 'myTableName'}
    for k, v in table_data.items():
        if 'table' in k:
            constraint_dict['__tablename__'] = v
        else:
            constraint_dict[k] = f'Column({v})'
    MyTableClass = type('MyTableClass', (Base,), constraint_dict)
    Base.metadata.create_all(bind=engine)
    # db_session.add(MyTableClass)
    db_session.commit()
    db_session.close()
    return
I'm not quite sure what to take a look at to complete this last step of getting all columns and their constraints committed to the database. Any help is greatly appreciated!
This does not answer your question directly, but rather poses a different strategy. If you expect the JSON data to change frequently, you could consider creating a simple model with an id and a data column, essentially using Postgres as a JSON document store.
from sqlalchemy.dialects.postgresql import JSONB

class Schema(db.Model):
    id = db.Column(db.Integer(), nullable=False, primary_key=True)
    data = db.Column(JSONB)
sqlalchemy: postgres dialect JSONB type
The JSONB data type is preferable to the JSON data type in Postgres because its binary representation is more efficient to search through, though JSONB takes slightly longer to insert than JSON. You can read more about the distinction between the JSON and JSONB data types in the Postgres docs.
This SO post explains how you can use SQLAlchemy to perform JSON operations in Postgres: sqlalchemy filter by json field
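That said, the original dynamic-class approach can be made to work. The immediate bug in create_class is that constraint_dict[k] = f'Column({v})' stores a string rather than a Column object, so the declarative metaclass ignores it (and db_session.add(MyTableClass) fails because the class itself, not an instance, was passed). A minimal sketch, assuming SQLAlchemy 1.4's declarative_base and a type map of my own invention for the schema.json "type" names:

```python
from sqlalchemy import Column, DateTime, Integer, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

# hypothetical mapping from schema.json type names to SQLAlchemy types
TYPE_MAP = {"uuid": Text, "text": Text, "timestamptz": DateTime}

def create_class(table_name, columns):
    """Build a mapped class from one 'groupings' entry, using real
    Column objects instead of 'Column(...)' strings."""
    attrs = {
        "__tablename__": table_name,
        "id": Column(Integer, primary_key=True),  # surrogate PK
    }
    for name, spec in columns.items():
        attrs[name] = Column(
            TYPE_MAP[spec["type"]],
            nullable=not spec.get("required", False),
            index=spec.get("index", False),
            unique=spec.get("unique", False),
        )
    return type(table_name.capitalize(), (Base,), attrs)

User = create_class("user", {
    "email": {"type": "text", "required": True, "unique": True},
    "name": {"type": "text"},
})
# Base.metadata.create_all(bind=engine) would now emit both columns,
# and db_session.add(User(email=...)) adds an *instance*, not the class.
```

The key difference from the original code is that each declared attribute is a live Column object, so the declarative machinery maps it, and instances (not the class) are what get added to the session.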

get user profile photo from gmail api object

I build this Google API client object:
service = build('gmail', 'v1', credentials=credentials)
I succeed in finding the Gmail address by doing this:
service.users().getProfile(userId='me').execute()['emailAddress']
but I didn't find a way to get the Gmail user's profile photo.
First, I tried to get it from getProfile, but it only has history and other attributes.
Then I tried some variants like getPhotos()/photos/getUrlPhoto, but the service object doesn't have those attributes.
I would like to know how I can get the user's profile photo from this object.
The Profile for a Gmail user contains the following information:
{
    "emailAddress": string,
    "messagesTotal": integer,
    "threadsTotal": integer,
    "historyId": string
}
Unfortunately, the profile photo is not one of them.
To retrieve a profile picture, you need to use the People API
Use the method people.get
Specify a resourceName (e.g. people/me) and set personFields to photos
This will return the URL(s) of the user's profile picture(s).
The response will look as follows:
{
    "resourceName": "AAA",
    "etag": "BBB",
    "photos": [
        {
            "metadata": {
                "primary": true,
                "source": {
                    "type": "DOMAIN_PROFILE",
                    "id": "AAA"
                }
            },
            "url": "https://lh3.googleusercontent.com/a-/CCC=s100"
        }
    ]
}
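Putting the steps together: assuming a service built with build('people', 'v1', credentials=credentials) and a response fetched via people().get(resourceName='me...wait, 'people/me', personFields='photos').execute(), extracting the URL from the response above is a small dictionary walk (the helper name is my own):

```python
def primary_photo_url(person):
    """Return the URL of the primary profile photo from a People API
    people.get response dict; fall back to the first photo, or None."""
    photos = person.get("photos", [])
    for photo in photos:
        if photo.get("metadata", {}).get("primary"):
            return photo["url"]
    return photos[0]["url"] if photos else None

# sample response shape from the answer above
person = {
    "resourceName": "AAA",
    "photos": [
        {"metadata": {"primary": True,
                      "source": {"type": "DOMAIN_PROFILE", "id": "AAA"}},
         "url": "https://lh3.googleusercontent.com/a-/CCC=s100"},
    ],
}
print(primary_photo_url(person))  # https://lh3.googleusercontent.com/a-/CCC=s100
```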

Django: formatting JSON serialization

I have the following Django view:
def company(request):
    company_list = Company.objects.all()
    output = serializers.serialize('json', company_list, fields=('name', 'phonenumber', 'email', 'companylogo'))
    return HttpResponse(output, content_type="application/json")
The result is as follows:
[{"pk": 1, "model": "test.company", "fields": {"companylogo": null, "phonenumber": "741.999.5554", "name": "Remax", "email": "home#remax.co.il"}}, {"pk": 4, "model": "test.company", "fields": {"companylogo": null, "phonenumber": "641-7778889", "name": "remixa", "email": "a#aa.com"}}, {"pk": 2, "model": "test.company", "fields": {"companylogo": null, "phonenumber": "658-2233444", "name": "remix", "email": "b#bbb.com"}}, {"pk": 7, "model": "test.company", "fields": {"companylogo": null, "phonenumber": "996-7778880", "name": "remix", "email": "a#aba.com"}}]
My questions:
1. Can I control the order of the fields?
2. Can I change the names of the fields?
3. I was expecting to see the result with indentation in the browser, i.e. instead of one long line, something like this:
[
    {
        "pk": 1,
        "model": "test.company",
        "fields": {
            "companylogo": null,
            "phonenumber": "741.999.5554",
            "name": "Remax",
            "email": "home#remax.co.il"
        }
    },
    {
        "pk": 4,
        "model": "test.company",
        "fields": {
            "companylogo": null,
            "phonenumber": "641-7778889",
            "name": "remixa",
            "email": "a#aa.com"
        }
    },
    ....
]
You can get pretty formatting this way:
return JsonResponse(your_json_dict, json_dumps_params={'indent': 2})
See the Django docs for JsonResponse, as the first comment says.
Python (unrelated to Django, and starting with 2.6) has a built-in json library that can accomplish the indentation you require. If you are looking for something quick and dirty for debugging purposes, you could do something like this:
from django.http import HttpResponse
from django.core import serializers
import json

def company(request, pretty=False):
    company_list = Company.objects.all()
    output = serializers.serialize('json', company_list, fields=('name', 'phonenumber', 'email', 'companylogo'))
    if pretty:
        output = json.dumps(json.loads(output), indent=4)
    return HttpResponse(output, content_type="application/json")
But this can become a performance issue if the Company table is large. I recommend taking Dan R's advice and using a browser plugin to parse and render the JSON, or some other client-side solution. I have a script that takes in a JSON file and does exactly what the code above does: reads in the JSON and prints it out with indent=4.
As for sorting your output, you can just use the order_by method on your query set:
def company(request):
    company_list = Company.objects.order_by("sorting_field")
    ...
And if you always want that model sorted that way, you can use the ordering meta-class option:
class Company(models.Model):
    class Meta:
        ordering = ["sorting_field"]
    ...
As a final note, if your intent is to expose your models via a web service, I highly recommend taking a look at tastypie. It may help you in the long run, as it provides many other convenient features toward that end.
With Django 1.7 I can get nicely indented JSON by using the indent parameter of the serializer. For instance, in a command that dumps data from my database:
self.stdout.write(serializers.serialize("json",
                                        records,
                                        indent=2))
The indent parameter has been in Django since version 1.5. The output I get looks like this:
[
    {
        "fields": {
            "type": "something",
            "word": "something else",
        },
        "model": "whatever",
        "pk": 887060
    },
    {
        "fields": {
            "type": "something more",
            "word": "and another thing",
        },
        "model": "whatever",
        "pk": 887061
    },
    ...
To order your records, you'd have to do what Kevin suggested and use order_by, or order the records you pass to the serializer however you want. For instance, I use itertools.chain to order different querysets that return instances of different models.
The serializer does not support ordering fields according to your preferences, or renaming them. You have to write your own code to do this or use an external tool.
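One way to "write your own code" for renaming and reordering is to post-process the serializer's output before returning it; a sketch (the rename map and field order are examples of my own, not a Django API):

```python
import json

# hypothetical rename map and desired field order
RENAME = {"phonenumber": "phone_number", "companylogo": "company_logo"}
FIELD_ORDER = ["name", "phone_number", "email", "company_logo"]

def reshape(serialized):
    """Rename and reorder the 'fields' dict of each serialized record.
    Relies on dicts preserving insertion order (Python 3.7+)."""
    records = json.loads(serialized)
    for rec in records:
        fields = {RENAME.get(k, k): v for k, v in rec["fields"].items()}
        rec["fields"] = {k: fields[k] for k in FIELD_ORDER if k in fields}
    return json.dumps(records, indent=4)

# example with one record shaped like the serializer output above
raw = json.dumps([{"pk": 1, "model": "test.company",
                   "fields": {"companylogo": None,
                              "phonenumber": "741.999.5554",
                              "name": "Remax",
                              "email": "home#remax.co.il"}}])
print(reshape(raw))
```

In the view, this would wrap the output string before passing it to HttpResponse.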
JSON doesn't have indentation; it's simply structured data. Browsers or other tools may format the JSON so that it looks nice, but by default it's not there. The formatting is just how it looks on screen, not part of the JSON itself. JSON is often processed by other code or services, which don't care about indentation as long as the data is structured correctly.
