fastapi snowflake connection only pulling 1 record - python

I am trying to read data from a Snowflake database using FastAPI. I was able to create the connection, and it pulls data from Snowflake.
The issue I am facing right now is that I am only getting 1 record (instead of 10 records).
I suspect I am not using the correct keyword when returning the data. I appreciate any help.
Here is my code:
from fastapi import FastAPI
import snowflake.connector as sf
import configparser

username = 'username_value'
password = 'password_value'
account = 'account_value'
warehouse = 'test_wh'
database = 'test_db'

ctx = sf.connect(user=username, password=password, account=account, warehouse=warehouse, database=database)

app = FastAPI()

@app.get('/test API')
async def fetchdata():
    cursor = ctx.cursor()
    cursor.execute("USE WAREHOUSE test_WH")
    cursor.execute("USE DATABASE test_db")
    cursor.execute("USE SCHEMA test_schema")
    sql = cursor.execute("SELECT DISTINCT ID,NAME,AGE,CITY FROM TEST_TABLE WHERE AGE > 60")
    for data in sql:
        return data

You use return inside your for-loop. This returns the first row encountered and exits the function.
If you want to return all rows as a list, you can probably do (I'm not familiar with the snowflake connector):
return list(sql)
instead of the for-loop, or return sql.fetchall().
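For example, here is a minimal sketch of the endpoint returning every matching row, reusing the connection and query from the question (the dict conversion via cursor.description is just one way to get JSON-friendly output, not something specific to the Snowflake connector):

from fastapi import FastAPI
import snowflake.connector as sf

app = FastAPI()

# Placeholder credentials copied from the question.
ctx = sf.connect(user='username_value', password='password_value', account='account_value',
                 warehouse='test_wh', database='test_db')

@app.get('/test API')
async def fetchdata():
    cursor = ctx.cursor()
    cursor.execute("USE SCHEMA test_schema")
    cursor.execute("SELECT DISTINCT ID, NAME, AGE, CITY FROM TEST_TABLE WHERE AGE > 60")
    rows = cursor.fetchall()  # drains the cursor: a list of tuples, one per row
    # Turn each tuple into a dict keyed by column name so FastAPI serializes it as JSON.
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in rows]

Because fetchall() drains the cursor, all ten rows come back in a single response instead of just the first one.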

Want to create a Python API integrated with Swagger/Postman

Requirement: I want to create a Python API that inserts data into a BigQuery table. The API will be hosted in Swagger/Postman, from where users can provide input data so that it gets reflected in the BigQuery table.
Can anyone help me find a suitable solution with code?
import sqlite3 as sql
from google.cloud import bigquery
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file('path/to/file.json')
project_id = 'project_id'
client = bigquery.Client(credentials=credentials, project=project_id)

def add_data(group_name, user_name):
    try:
        # Connecting to database
        con = sql.connect('shot_database.db')
        # Getting cursor
        c = con.cursor()
        # Adding data
        job_config.use_legacy_sql = True
        query_job = client.query("""
            INSERT INTO `table_name` (group, user)
            VALUES (%s, %s)""", job_config=job_config)
        results = query_job.result()  # Wait for the job to complete.
        # Applying changes
        con.commit()
    except:
        print("An error has occured")
The code you provided is a mix of SQLite and BigQuery, but it looks like you're trying to use BigQuery to insert data into a table. To insert rows into a BigQuery table from Python, you can use the insert_rows_json() method of the Client class (the insert_data() method seen in older examples has been replaced in current versions of google-cloud-bigquery). Here's an example of how you can use this method to insert data into a table called "mytable" in a dataset called "mydataset":
import json

# Define the data you want to insert
data = [
    {
        "group": group_name,
        "user": user_name
    }
]

# Insert the data (streaming insert); errors is an empty list on success
table_id = "mydataset.mytable"
errors = client.insert_rows_json(table_id, data)

if errors == []:
    print("Data inserted successfully")
else:
    print("Errors occurred while inserting data:")
    print(json.dumps(errors, indent=2))
Then you can create an API using Flask or Django and call the add_data method you have defined to insert data into the BigQuery table.
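As a rough sketch of that last step (assuming a helper named add_data like the one above; the /add path and the JSON field names are arbitrary choices for illustration), a minimal Flask wrapper could look like this:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/add', methods=['POST'])
def add():
    # Expect a JSON body such as {"group_name": "...", "user_name": "..."}
    body = request.get_json(force=True)
    add_data(body['group_name'], body['user_name'])  # helper defined above
    return jsonify({"status": "ok"})

if __name__ == '__main__':
    app.run()

Postman (or a Swagger UI pointed at this endpoint) can then POST the user's input to /add so it ends up in the BigQuery table.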

Fetching data from a SQL database table, converting the result into JSON, and sending the result to an API using a POST request

I have a table in a SQL database with many columns, and I want to fetch the results of this table row by row and send them to an API using a POST request. I also have a column named customer email; the result should only be sent when a new email is added to the table, otherwise not.
Any help with this scenario would be valuable and highly appreciated.
Thanks in advance.
import pyodbc
import json
import requests
import decimal

cilioconnection = pyodbc.connect('Driver={SQL Server};Server=192.148.143.89;Database=NSADataCache;uid=xxx;pwd=xxx')
ciliocursor = cilioconnection.cursor()
#cursor = conn.cursor()

query = ciliocursor.execute("SELECT * FROM [DataCache].[dbo].[Details]")
result = query.fetchall()

#payload = [dict(zip([column[0] for column in ciliocursor.description], row)) for row in result]

for row in result:
    print(row)

#for i in range(0, len(payload)):
#    r = requests.post('https://reqres.in/api/users', json=payload[i])
#    print('Status Code:', r.status_code)
#    print(payload[i])
#    r.json()
I want the result to be in JSON so I can send it over to the API.
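Building on the commented-out lines above, here is a minimal sketch (the reqres.in URL is only a placeholder endpoint, and the Decimal conversion is an assumption about the column types, since pyodbc returns DECIMAL values that json cannot serialize directly):

import decimal
import pyodbc
import requests

conn = pyodbc.connect('Driver={SQL Server};Server=192.148.143.89;Database=NSADataCache;uid=xxx;pwd=xxx')
cursor = conn.cursor()
cursor.execute("SELECT * FROM [DataCache].[dbo].[Details]")

columns = [col[0] for col in cursor.description]

def to_jsonable(value):
    # Convert Decimal to float so the payload can be serialized as JSON.
    return float(value) if isinstance(value, decimal.Decimal) else value

for row in cursor.fetchall():
    payload = {col: to_jsonable(val) for col, val in zip(columns, row)}
    r = requests.post('https://reqres.in/api/users', json=payload)  # placeholder API endpoint
    print('Status Code:', r.status_code)

To only send rows with newly added customer emails, you would additionally need to keep track of the emails already sent (for example in a separate table) and filter the SELECT against that list; that part is not shown here.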

How to execute a SQLAlchemy TextClause statement with a sqlite3 connection cursor?

I have a Python Flask app which primarily uses SQLAlchemy to execute all of its MySQL queries, and I need to write tests for it using a local database and behave.
After a brief research, the database I've chosen for this task is a local SQLite3 DB, mainly because I've read that it's pretty much compatible with MySQL and SQLAlchemy, and also because it's easy to set up and tear down.
I've established a connection to it successfully and managed to create all the tables I need for the tests.
I've encountered a problem when trying to execute some queries, where the query statement is being built as a sqlalchemy TextClause object and my sqlite3 connection cursor raises the following exception when trying to execute the statement:
TypeError: argument 1 must be str, not TextClause
How can I convert this TextClause object dynamically to a string and execute it?
I don't want to make drastic changes to the code just for testing.
A code example:
employees table:

id | name
---|-----------
1  | Jeff Bezos
2  | Bill Gates
from sqlalchemy import text
import sqlite3

def select_employee_by_id(id: int):
    employees_table = 'employees'
    db = sqlite3.connect(":memory:")
    cursor = db.cursor()
    with db as session:
        statement = text("""
            SELECT *
            FROM {employees_table}
            WHERE
                id = :id
        """.format(employees_table=employees_table)
        ).bindparams(id=id)
        data = cursor.execute(statement)
        return data.fetchone()
Should return a row containing {'id': 1, 'name': 'Jeff Bezos'} for select_employee_by_id(1)
Thanks in advance!
If you want to test your TextClause query then you should execute it by using SQLAlchemy, not by using a DBAPI (SQLite) cursor:
from sqlalchemy import create_engine, text

def select_employee_by_id(id: int):
    employees_table = 'employees'
    engine = create_engine("sqlite://")
    with engine.begin() as conn:
        statement = text("""
            SELECT *
            FROM {employees_table}
            WHERE
                id = :id
        """.format(employees_table=employees_table)
        ).bindparams(id=id)
        data = conn.execute(statement)
        return data.one()
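As a rough, self-contained usage sketch (SQLAlchemy 1.4+ assumed, with one shared in-memory engine so the table created for the test is visible to the query):

from sqlalchemy import create_engine, text

# One shared in-memory engine so the table created here is visible below.
engine = create_engine("sqlite://")

with engine.begin() as conn:
    conn.execute(text("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)"))
    conn.execute(text("INSERT INTO employees (id, name) VALUES (1, 'Jeff Bezos'), (2, 'Bill Gates')"))

with engine.begin() as conn:
    statement = text("SELECT * FROM employees WHERE id = :id").bindparams(id=1)
    row = conn.execute(statement).one()
    print(dict(row._mapping))  # {'id': 1, 'name': 'Jeff Bezos'}

Note that calling create_engine("sqlite://") inside the function creates a fresh, empty in-memory database on every call, so for real tests you would typically create the engine once and pass it in (or read it from your app's config).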

Python - How to parse and save JSON to a MySQL database

As the title indicates, how does one use Python to elegantly access an API, then parse and save the JSON contents into a relational database (MySQL) for later access?
Here, I saved the data into a pandas object. But how do I create a MySQL database, save the JSON contents into it, and access the contents later?
# Libraries
import json, requests
import pandas as pd
from pandas.io.json import json_normalize

# Set URL
url = 'https://api-v2.themuse.com/jobs'

# Loop over the first 100 pages of results
for i in range(100):
    data = json.loads(requests.get(
        url=url,
        params={'page': i}
    ).text)['results']
    data_norm = pd.read_json(json.dumps(data))
You create your MySQL table on your server using something like MySQL Workbench CE, then in Python you do this. I wasn't sure whether you want to use data from the for-loop or data_norm, so for ease of use, here are some functions. insertDb() can be put inside your for-loop, since data is overwritten on every iteration.
import sys
import MySQLdb

def dbconnect():
    try:
        db = MySQLdb.connect(
            host='localhost',
            user='root',
            passwd='password',
            db='nameofdb'
        )
    except Exception as e:
        sys.exit("Can't connect to database")
    return db

def insertDb():
    try:
        db = dbconnect()
        cursor = db.cursor()
        cursor.execute("""
            INSERT INTO nameoftable(nameofcolumn)
            VALUES (%s) """, (data,))
        db.commit()  # make the insert permanent
        cursor.close()
    except Exception as e:
        print(e)
If this is merely storage for later processing, kind of like a cache, a VARCHAR field is enough. If, however, you need to query some of the structured data inside the JSON, a JSON field is what you need.
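As a rough sketch of how the pieces fit together (table and column names are the placeholders from above, and each page of API results is stored as one JSON string per row):

import json
import requests

url = 'https://api-v2.themuse.com/jobs'
db = dbconnect()  # the helper defined above
cursor = db.cursor()

for i in range(100):
    data = requests.get(url, params={'page': i}).json()['results']
    # Store the raw JSON for the page; parse it again with json.loads() when reading it back.
    cursor.execute("INSERT INTO nameoftable (nameofcolumn) VALUES (%s)", (json.dumps(data),))

db.commit()
cursor.close()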

SQLAlchemy not finding tables, possible connection issues

I'm trying to connect to one of our internal databases using the following code:
engine = create_engine('postgresql+psycopg2://{user}:{passw}@{host}:{port}/{db}'.format(
    user=config3.CANVAS_USERNAME,
    passw=config3.CANVAS_PWD,
    host=config3.CANVAS_BOUNCER,
    port=config3.CANVAS_PORT,
    db='cluster17dr'
))
metadata = MetaData()
metadata.reflect(bind=engine)
print(metadata.tables)
And my only result is a table called 'spatial_ref_sys', which I assume is some kind of metadata. I know that my login stuff is correct, because this works perfectly:
with ppg2.connect(
        database='cluster17dr',
        user=config3.CANVAS_USERNAME,
        password=config3.CANVAS_PWD,
        host=config3.CANVAS_BOUNCER,
        port=config3.CANVAS_PORT) as conn:
    cur = conn.cursor()
    sql = 'SELECT * FROM canvas.switchman_shards LIMIT 10'
    cur.execute(sql)
    res = cur.fetchall()
    print(res)
Any ideas as to what I'm missing in my connection using SQLAlchemy?
By default, if no schema name is specified, SQLAlchemy will only give you tables under the default schema. If you want to reflect tables in a schema other than the default schema (which defaults to public in PostgreSQL), you need to specify the schema keyword to .reflect():
metadata.reflect(..., schema="canvas")
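For example, a short sketch that reflects the canvas schema against the engine from the question and then looks up the table used in the raw-psycopg2 test (table name taken from the question):

from sqlalchemy import MetaData

metadata = MetaData()
metadata.reflect(bind=engine, schema="canvas")

# Tables reflected from a non-default schema are keyed as "<schema>.<table>".
print(metadata.tables.keys())
shards = metadata.tables["canvas.switchman_shards"]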
