I am trying to query a SQL Server database hosted on Azure through a Flask API and convert the results to JSON; my attempt is below. It works, but the results come through with escape characters, even though there don't appear to be any obvious special characters in the data. If I use the API to exec a stored procedure with a parameter, the JSON comes through in the format I want. Any suggestions on how to alter this so that I get standard JSON output?
from flask import Flask
from flask_restful import Api, Resource, reqparse
import pyodbc
import json

app = Flask(__name__)
api = Api(app)

parser = reqparse.RequestParser()
parser.add_argument('customer')

# serverconnectionstring is defined elsewhere
conn = pyodbc.connect(serverconnectionstring)

class Customer(Resource):
    def get(self):
        cursor = conn.cursor()
        query = "SELECT * FROM [dbo].[testforjson]"
        result = cursor.execute(query)
        items = [dict(zip([key[0] for key in cursor.description], row)) for row in result]
        jsonitems = json.dumps(items)
        return jsonitems

api.add_resource(Customer, '/customer')

if __name__ == '__main__':
    app.run()
example output:
"[{\"field1\": \"B2653\", \"field2\": \"ERLOP\"}, {\"field1\": \"C2653\", \"field2\": \"ERLOP\"}]
desired output:
[
{
"field1": "B2653",
"field2": "ERLOP"
},
{
"field1": "C2653",
"field2": "ERLOP"
}
]
Many thanks to @njzk2 for the help and detailed explanation. I'm posting it as an answer to close out this question:
Please try returning items directly from that get method:
Quick explanation: if you return an object, Flask will attempt to return a JSON representation and set the JSON content type. If you return a string, Flask doesn't know your intention and sends a string content type. It's then up to your client to figure out what the intention was. In most cases a string content type means the result is presented to you as a string. But you can also ignore the content type and parse that string as JSON.
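Following that suggestion, the change is just to return the Python list itself and let Flask-RESTful serialize it. A minimal sketch of the get method under that change (same table and connection as above):

class Customer(Resource):
    def get(self):
        cursor = conn.cursor()
        cursor.execute("SELECT * FROM [dbo].[testforjson]")
        columns = [col[0] for col in cursor.description]
        items = [dict(zip(columns, row)) for row in cursor.fetchall()]
        # Return the list directly; Flask-RESTful serializes it to JSON
        # and sets the application/json content type for you.
        return items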
Glad to hear it worked for you. Thanks again, @njzk2. This can be beneficial to other community members.
I'm struggling to find documentation and examples for the Python Client for BigQuery Data Transfer Service. A new query string is generated by my application from time to time, and I'd like to update the existing scheduled query accordingly. This is the most helpful thing I have found so far, but I am still unsure where to pass my query string. Is this the correct method?
from google.cloud import bigquery_datatransfer_v1

def sample_update_transfer_config():
    # Create a client
    client = bigquery_datatransfer_v1.DataTransferServiceClient()

    # Initialize request argument(s)
    transfer_config = bigquery_datatransfer_v1.TransferConfig()
    transfer_config.destination_dataset_id = "destination_dataset_id_value"

    request = bigquery_datatransfer_v1.UpdateTransferConfigRequest(
        transfer_config=transfer_config,
    )

    # Make the request
    response = client.update_transfer_config(request=request)

    # Handle the response
    print(response)
You may refer to the Update Scheduled Queries Python documentation from BigQuery for the official reference on using the Python client library to update scheduled queries.
However, I've updated the code for you so that it updates your query string: the new query string goes in params, and the attributes of the TransferConfig() to be updated are listed in the update_mask.
See updated code below:
from google.cloud import bigquery_datatransfer
from google.protobuf import field_mask_pb2

transfer_client = bigquery_datatransfer.DataTransferServiceClient()

transfer_config_name = "projects/{your-project-id}/locations/us/transferConfigs/{unique-ID-of-transferconfig}"
new_display_name = "Your Desired Updated Name if Necessary"  # remove if there is no need to update the scheduled query name

query_string_new = """
SELECT
    CURRENT_TIMESTAMP() as current_time
"""

new_params = {
    "query": query_string_new,
    "destination_table_name_template": "your_table_{run_date}",
    "write_disposition": "WRITE_TRUNCATE",
    "partitioning_field": "",
}

transfer_config = bigquery_datatransfer.TransferConfig(name=transfer_config_name)
transfer_config.display_name = new_display_name  # remove if there is no need to update the scheduled query name
transfer_config.params = new_params

transfer_config = transfer_client.update_transfer_config(
    {
        "transfer_config": transfer_config,
        "update_mask": field_mask_pb2.FieldMask(paths=["display_name", "params"]),  # remove "display_name" from the list if there is no need to update the scheduled query name
    }
)

print("Updates are executed successfully")
To get the value of your transfer_config_name, you can list all your scheduled queries by following this SO post.
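If you need to look up that value programmatically, here is a minimal sketch along those lines (the project ID and location below are placeholders you would replace):

from google.cloud import bigquery_datatransfer

transfer_client = bigquery_datatransfer.DataTransferServiceClient()

# Replace with your own project ID and location.
parent = "projects/your-project-id/locations/us"

# Each config's .name is the full resource name to use as transfer_config_name above.
for config in transfer_client.list_transfer_configs(parent=parent):
    print(config.name, config.display_name)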
My db model looks like this:
from pydantic import BaseModel

class Store(BaseModel):
    name: str
    store_code: str
There can be stores with the same name in the db but different store_code values. What I want is to fetch all the information for stores that share the same name. For example, if my db contains documents like this...
{ "name": "lg", "store_code": "123" },
{ "name": "lg", "store_code": "456" }
I'd like to get both of those documents. My Python FastAPI code is like this:
from fastapi import FastAPI, HTTPException
from database import *

app = FastAPI()

@app.get("/api/store{store_name}", response_model=Store)
async def get_store_by_name(store_name):
    response = await fetch_store_by_name(store_name)
    if response:
        return response
    raise HTTPException(status_code=404)
And this is my MongoDB query code:
from pymongo import MongoClient
from model import Store

client = MongoClient(host='localhost', port=27017)
database = client.store
collection = database.stores  # assuming the collection is defined along these lines

async def fetch_store_by_name(store_name: str):
    document = collection.find({"name": store_name})
    return document
I thought the result would eventually contain the two documents, but I always get an error like this:
pydantic.error_wrappers.ValidationError: 1 validation error for Store
response
  value is not a valid dict (type=type_error.dict)
Can anyone help me, please?
Update:
I just changed my query to this:
async def fetch_store_by_name(store_name: str):
    stores = []
    cursor = collection.find({"name": store_name})
    for document in cursor:
        stores.append(document)
    return stores
This should return the two documents I expected, but it still fails with this error:
ValueError: [TypeError("'ObjectId' object is not iterable"), TypeError('vars() argument must have __dict__ attribute')]
I think my FastAPI code has a problem, but I really have no idea what it is...
async def fetch_store_by_name(store_name: str):
    stores = []  # --- Fault in this line ---
    cursor = collection.find({"name": store_name})
    for document in cursor:
        stores.append(document)
    return stores
stores should be a string value, not a list, as MongoDB will try to match it against the default value type that you provided - in this case, str.
I am trying to pass comma separated query parameters to a Flask endpoint.
An example URI would be:
localhost:3031/someresource#?status=1001,1002,1003
Looking at the return of request.args or request.args.getlist('status') I see that I only get a string.
ipdb> pp request.args
ImmutableMultiDict([('status', '1001,1002,1003')])
ipdb> request.args.getlist('status')
['1001,1002,1003']
I know I can split the string by comma, but that feels hacky. Is there a more idiomatic way to handle this in Flask? Or are my query params in the wrong format?
Solution
Since Flask does not directly support comma-separated query params, I put this in my base controller to support comma-separated or duplicate query params on all endpoints.
request_data = {}
params = request.args.getlist('status') or request.form.getlist('status')

if len(params) == 1 and ',' in params[0]:
    request_data['status'] = comma_separated_params_to_list(params[0])
else:
    request_data['status'] = params

def comma_separated_params_to_list(param):
    result = []
    for val in param.split(','):
        if val:
            result.append(val)
    return result
The Flask variant, getlist, expects the key to be passed multiple times:
from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def status():
    first_status = request.args.get("status")
    statuses = request.args.getlist("status")
    return "First Status: '{}'\nAll Statuses: '{}'".format(first_status, statuses)
❯ curl "http://localhost:5000?status=5&status=7"
First Status: '5'
All Statuses: '['5', '7']'
There's no standard for this; how multiple GET args are parsed/passed depends on which language/framework you're using. Flask is built on Werkzeug, so it allows this style, but you'll have to look it up if you switch away from Flask.
As an aside, it's not uncommon in REST API design to use commas to pass multiple values for the same key - it makes things easier for the user. You're parsing GET args anyway, so parsing the resulting string is not much more hacky. You can choose to raise a 400 HTTP error if the comma-separated string isn't well formatted.
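For instance, a rough sketch of that validation approach (the numeric-ID rule here is just an assumed example, not something from the original question):

from flask import Flask, abort, request

app = Flask(__name__)

@app.route('/someresource')
def someresource():
    raw = request.args.get('status', '')
    statuses = [s for s in raw.split(',') if s]
    # Assumed validation rule: every status must be a numeric ID.
    if not statuses or not all(s.isdigit() for s in statuses):
        abort(400, description="status must be a comma-separated list of numeric IDs")
    return {'status': statuses}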
Some other languages (notably PHP) support 'array' syntax, so that is used sometimes:
/request?status[]=1000&status[]=1001&status[]=1002
This is what you might want here:
request.args.to_dict(flat=False)
flat is True by default, so by setting it to False, you allow it to return a dict with values inside a list when there's more than one.
According to the to_dict documentation:
to_dict(flat=True)
Return the contents as a regular dict. If flat is True, the returned dict will only have the first item present; if flat is False, all values will be returned as lists.
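A quick sketch of the difference, using the same status parameter as above:

from flask import Flask, request

app = Flask(__name__)

@app.route('/someresource')
def someresource():
    # For /someresource?status=1001&status=1002:
    flat = request.args.to_dict()            # {'status': '1001'}  (first value only)
    full = request.args.to_dict(flat=False)  # {'status': ['1001', '1002']}
    return {'flat': flat, 'full': full}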
I'm new to Python and I'm building a simple CRUD app using Flask. Here's how I'm doing it:
from flask import Flask, request
from pprint import pprint
import pymysql.cursors

app = Flask(__name__)

connection = pymysql.connect(
    host='localhost',
    db='mydb',
    user='root',
    password='password',
    cursorclass=pymysql.cursors.DictCursor
)

@app.route('/login', methods=['POST'])
def login():
    cursor = connection.cursor(pymysql.cursors.DictCursor)
    cursor.execute("SELECT id,hash,displayName,attempt,status FROM users WHERE id=%s",
                   (request.form['username'],))
    user = cursor.fetchone()
    pprint(user)
But this code outputs something like this:
{u'displayName': 'John Smith',
u'hash': 'somehash.asdf!###$',
u'id': 'developer',
u'attempt': 0,
u'status': 1
}
The thing is, I can't seem to get these attributes using the standard user.hash syntax. Am I doing something wrong? I need to either:
convert it to a JSON-like structure
get the properties of user when it's presented in this form
When using a pymysql.cursors.DictCursor cursor, the fetched data is returned as a dictionary, so you can use all the common dict traversal/access/modification methods on it.
This means that you can access your returned hash as: user["hash"].
When it comes to JSON, Python comes with a built-in json module that can readily convert your retrieved dict into a JSON string, so in your case, to get a JSON representation of the returned user dictionary use:
import json
json_string = json.dumps(user)
print(json_string) # or do whatever you need with it
I have a function in AWS Lambda written in Python.
I am trying to extract documents from a collection in MongoDB with pymongo.
I thought it would be quite simple, but I'm running into problems (maybe because of ObjectId types).
I am simply trying to do
from pymongo import MongoClient

def lambda_handler(event, context):
    client = MongoClient(MONGODB_URI)
    db = client[DB_NAME]
    return db.users.find({})
but I get the error
{errorMessage= is not JSON serializable, errorType=TypeError, stackTrace=[["\/var\/lang\/lib\/python3.6\/json\/__init__.py",238,"dumps","**kw).encode(obj)"],["\/var\/lang\/lib\/python3.6\/json\/encoder.py",199,"encode","chunks = self.iterencode(o, _one_shot=True)"],["\/var\/lang\/lib\/python3.6\/json\/encoder.py",257,"iterencode","return _iterencode(o, 0)"],["\/var\/runtime\/awslambda\/bootstrap.py",110,"decimal_serializer","raise TypeError(repr(o) + \" is not JSON serializable\")"]]}
It does work if I use return bson.json_util.dumps(db.users.find({})), but why should that be necessary?
As far as I understand, Lambda functions always return JSON, so I don't understand why I have to use bson.json_util.
Also, when I use that function, I don't get plain ObjectId strings; instead I get
[
{"_id": {"$oid": "59aed327f25c0f0ca8f94ae1"}, "name": ...},
...
]
although I wanted something like
[
{"_id": "59aed327f25c0f0ca8f94ae1", "name": ...},
...
]
Your issue is due to pymongo not returning plain JSON-serializable documents. An example of how to handle this can be found here -
How do I turn MongoDB query into a JSON?
It should be noted that API Gateway expects responses to be JSON unless configured otherwise.
https://aws.amazon.com/blogs/compute/binary-support-for-api-integrations-with-amazon-api-gateway/
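If you want plain string IDs rather than the {"$oid": ...} form, one option (a sketch, not taken from the linked post) is to convert each _id to a string before returning the documents:

from pymongo import MongoClient

def lambda_handler(event, context):
    client = MongoClient(MONGODB_URI)
    db = client[DB_NAME]
    users = list(db.users.find({}))
    # ObjectId is not JSON serializable, so replace it with its string form.
    for user in users:
        user["_id"] = str(user["_id"])
    return users  # a list of plain dicts that Lambda can serialize to JSON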