Get dictionary from sqlite query when using the Python databases module

I am developing an asynchronous program and have decided to use the databases module to interact with the database. The problem is that my queries return tuples, whereas I want the response in dictionary form: {'column_name1': column_value1, ..., 'column_name_n': column_value_n}.
I have found some solutions, but they all use the sqlite3 module directly.

import asyncio
import databases

pathtodb = '/path/to/your/database.db'
tablename = 'table_name_in_your_database'

async def fetch_as_dict():
    database = databases.Database('sqlite:///{}'.format(pathtodb))
    await database.connect()
    # Each PRAGMA table_info row is (cid, name, type, notnull, dflt_value, pk)
    pragma = await database.fetch_all(query="PRAGMA table_info('{}')".format(tablename))
    data = await database.fetch_all(query="SELECT * FROM {}".format(tablename))
    await database.disconnect()
    # The second field of each PRAGMA row is the column name
    dictionary_keys = list(zip(*pragma))[1]
    # Transpose the result rows so each entry holds one column's values
    dictionary_values = list(zip(*data))
    return dict(zip(dictionary_keys, dictionary_values))

dictionary = asyncio.run(fetch_as_dict())
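Note that this yields one dictionary entry per column (column name mapped to a tuple of that column's values). If you want one dictionary per row instead, the Record objects returned by fetch_all() behave like mappings, so something along these lines may be enough; treat it as a sketch, since the exact interface depends on your databases/SQLAlchemy versions:

# inside the same async function as above
rows = await database.fetch_all(query="SELECT * FROM {}".format(tablename))
# On older versions dict(row) works directly; newer releases expose the
# mapping through row._mapping instead.
row_dicts = [dict(row._mapping) for row in rows]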

Related

How to execute a sqlalchemy TextClause statement with a sqlite3 connection cursor?

I have a Python Flask app which primarily uses SQLAlchemy to execute all of its MySQL queries, and I need to write tests for it using a local database and behave.
After some brief research, the database I've chosen for this task is a local sqlite3 db, mainly because I've read that it's largely compatible with MySQL and SQLAlchemy, and also because it's easy to set up and tear down.
I've established a connection to it successfully and managed to create all the tables I need for the tests.
I've encountered a problem when trying to execute some queries, where the query statement is being built as a sqlalchemy TextClause object and my sqlite3 connection cursor raises the following exception when trying to execute the statement:
TypeError: argument 1 must be str, not TextClause
How can I convert this TextClause object dynamically to a string and execute it?
I don't want to make drastic changes to the code just for testing.
A code example:
employees table:

id | name
---|-----------
1  | Jeff Bezos
2  | Bill Gates
from sqlalchemy import text
import sqlite3

def select_employee_by_id(id: int):
    employees_table = 'employees'
    db = sqlite3.connect(":memory:")
    cursor = db.cursor()
    with db as session:
        statement = text("""
            SELECT *
            FROM {employees_table}
            WHERE
                id = :id
        """.format(employees_table=employees_table)
        ).bindparams(id=id)
        # This line raises: TypeError: argument 1 must be str, not TextClause
        data = cursor.execute(statement)
        return data.fetchone()
This should return a row containing {'id': 1, 'name': 'Jeff Bezos'} for select_employee_by_id(1).
Thanks in advance!
If you want to test your TextClause query then you should execute it by using SQLAlchemy, not by using a DBAPI (SQLite) cursor:
from sqlalchemy import create_engine, text

def select_employee_by_id(id: int):
    employees_table = 'employees'
    engine = create_engine("sqlite://")
    with engine.begin() as conn:
        statement = text("""
            SELECT *
            FROM {employees_table}
            WHERE
                id = :id
        """.format(employees_table=employees_table)
        ).bindparams(id=id)
        data = conn.execute(statement)
        return data.one()
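If you do need to hand the statement to a raw DBAPI cursor, note that str() renders a TextClause to plain SQL, and sqlite3 happens to accept the same :name placeholder style when the parameters are passed as a dict. A minimal sketch, with the table setup assumed:

from sqlalchemy import text
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO employees (name) VALUES ('Jeff Bezos'), ('Bill Gates')")

statement = text("SELECT * FROM employees WHERE id = :id")
# str() yields the SQL with :id left as a named placeholder, which
# sqlite3 fills from the parameter dict.
row = db.execute(str(statement), {"id": 1}).fetchone()
print(row)  # (1, 'Jeff Bezos')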

fastapi snowflake connection only pulling 1 record

I am trying to read data from a Snowflake database using FastAPI. I was able to create the connection, which pulls data from Snowflake.
The issue I am facing right now is that I only get 1 record (instead of 10 records).
I suspect I am not using the correct keyword while returning the data. I'd appreciate any help.
Here is my code:
from fastapi import FastAPI
import snowflake.connector as sf
import configparser

username = 'username_value'
password = 'password_value'
account = 'account_value'
warehouse = 'test_wh'
database = 'test_db'

ctx = sf.connect(user=username, password=password, account=account,
                 warehouse=warehouse, database=database)
app = FastAPI()

@app.get('/test API')
async def fetchdata():
    cursor = ctx.cursor()
    cursor.execute("USE WAREHOUSE test_WH")
    cursor.execute("USE DATABASE test_db")
    cursor.execute("USE SCHEMA test_schema")
    sql = cursor.execute("SELECT DISTINCT ID,NAME,AGE,CITY FROM TEST_TABLE WHERE AGE > 60")
    for data in sql:
        return data
You use return inside your for loop, so the function returns as soon as it encounters the first row.
If you want to return all rows as a list, you can probably do (I'm not familiar with the Snowflake connector):
return list(sql)
instead of the for loop, or use sql.fetchall().
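A sketch of the corrected endpoint, reusing the names from the question; fetchall() drains the cursor, and pairing each row with the column names makes the response a list of JSON objects:

@app.get('/test API')
async def fetchdata():
    cursor = ctx.cursor()
    cursor.execute("USE WAREHOUSE test_WH")
    cursor.execute("USE DATABASE test_db")
    cursor.execute("USE SCHEMA test_schema")
    cursor.execute("SELECT DISTINCT ID,NAME,AGE,CITY FROM TEST_TABLE WHERE AGE > 60")
    rows = cursor.fetchall()  # every row, not just the first
    columns = [col[0] for col in cursor.description]
    return [dict(zip(columns, row)) for row in rows]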

How to run a BigQuery query in Python

This is the query that I have been running in BigQuery, and I want to run it from my Python script. What would I have to change or add for it to run in Python?
#standardSQL
SELECT
Serial,
MAX(createdAt) AS Latest_Use,
SUM(ConnectionTime/3600) as Total_Hours,
COUNT(DISTINCT DeviceID) AS Devices_Connected
FROM `dataworks-356fa.FirebaseArchive.testf`
WHERE Model = "BlueBox-pH"
GROUP BY Serial
ORDER BY Serial
LIMIT 1000;
From what I have been researching, it is said that I can't save this query as a permanent table using Python. Is that true? And if it is, is it possible to still export a temporary table?
You need to use the BigQuery Python client lib, then something like this should get you up and running:
from google.cloud import bigquery

# Note: run_async_query() is the legacy (pre-0.28) client API; see the
# current tutorial linked below for the modern client.query() interface.
client = bigquery.Client(project='PROJECT_ID')
query = "SELECT...."
dataset = client.dataset('dataset')
table = dataset.table(name='table')
job = client.run_async_query('my-job', query)
job.destination = table
job.write_disposition = 'WRITE_TRUNCATE'
job.begin()
https://googlecloudplatform.github.io/google-cloud-python/stable/bigquery-usage.html
See the current BigQuery Python client tutorial.
Here is another way using a JSON file for the service account:
>>> from google.cloud import bigquery
>>>
>>> CREDS = 'test_service_account.json'
>>> client = bigquery.Client.from_service_account_json(json_credentials_path=CREDS)
>>> job = client.query('select * from dataset1.mytable')
>>> for row in job.result():
... print(row)
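Each Row yielded by job.result() supports mapping-style access, so converting results to plain dicts is straightforward (a small usage sketch; the column names are hypothetical):

for row in job.result():
    print(dict(row))  # e.g. {'name': ..., 'age': ...}
    print(row[0])     # positional access also works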
This is a good usage guide:
https://googleapis.github.io/google-cloud-python/latest/bigquery/usage/index.html
To simply run and write a query:
# from google.cloud import bigquery
# client = bigquery.Client()
# dataset_id = 'your_dataset_id'

job_config = bigquery.QueryJobConfig()

# Set the destination table
table_ref = client.dataset(dataset_id).table("your_table_id")
job_config.destination = table_ref

sql = """
    SELECT corpus
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY corpus;
"""

# Start the query, passing in the extra configuration.
query_job = client.query(
    sql,
    # Location must match that of the dataset(s) referenced in the query
    # and of the destination table.
    location="US",
    job_config=job_config,
)  # API request - starts the query

query_job.result()  # Waits for the query to finish
print("Query results loaded to table {}".format(table_ref.path))
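The destination-table configuration above is what persists the results as a permanent table. To also export that table to a file, an extract job can follow the query; a sketch, where the bucket and path are placeholders:

# Export the destination table to Google Cloud Storage as CSV
extract_job = client.extract_table(
    table_ref,
    "gs://your-bucket/query_results.csv",  # placeholder destination
    location="US",  # must match the table's location
)
extract_job.result()  # waits for the export to finish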
I personally prefer querying using pandas:
# BQ authentication
import pandas as pd
import pydata_google_auth

SCOPES = [
    'https://www.googleapis.com/auth/cloud-platform',
    'https://www.googleapis.com/auth/drive',
]

credentials = pydata_google_auth.get_user_credentials(
    SCOPES,
    # Set auth_local_webserver to True to have a slightly more convenient
    # authorization flow. Note, this doesn't work if you're running from a
    # notebook on a remote server, such as over SSH or with Google Colab.
    auth_local_webserver=True,
)

query = "SELECT * FROM my_table"
data = pd.read_gbq(query, project_id=MY_PROJECT_ID, credentials=credentials, dialect='standard')
The pythonbq package is very simple to use and a great place to start. It uses python-gbq.
To get started you would need to generate a BQ json key for external app access. You can generate your key here.
Your code would look something like:
from pythonbq import pythonbq

myProject = pythonbq(
    bq_key_path='path/to/bq/key.json',
    project_id='myGoogleProjectID'
)
SQL_CODE="""
SELECT
Serial,
MAX(createdAt) AS Latest_Use,
SUM(ConnectionTime/3600) as Total_Hours,
COUNT(DISTINCT DeviceID) AS Devices_Connected
FROM `dataworks-356fa.FirebaseArchive.testf`
WHERE Model = "BlueBox-pH"
GROUP BY Serial
ORDER BY Serial
LIMIT 1000;
"""
output=myProject.query(sql=SQL_CODE)

Azure's CosmosDB - Use query explorer in python program

I have a bunch of JSON files stored in an Azure CosmosDB database. I also have a Python program that reads the JSON files. I want to run the following query, which works in Azure's Query Explorer, from Python:
SELECT VALUE Block
FROM c
JOIN Block IN c.radar50p01
So far, what I have in my Python program is the following:
# pydocumentdb is the legacy Azure DocumentDB/CosmosDB SDK assumed here;
# Constants is your own module holding the account URL, key and names.
from pydocumentdb import document_client

def getCosmosDBClient():
    # Initialize the Python DocumentDB client
    client = document_client.DocumentClient(Constants.URL, {'masterKey': Constants.KEY})
    return client

def getCosmosDBColl_link():
    client = getCosmosDBClient()
    db_id = Constants.RADAR_DATABASE_NAME
    db_query = "select * from r where r.id = '{0}'".format(db_id)
    db = list(client.QueryDatabases(db_query))[0]
    db_link = db['_self']
    coll_id = Constants.RADAR_COLL_NAME
    coll_query = "select * from r where r.id = '{0}'".format(coll_id)
    coll = list(client.QueryCollections(db_link, coll_query))
    if coll:
        coll = coll[0]
    else:
        raise ValueError("Collection not found in database.")
    coll_link = coll['_self']
    docs = client.ReadDocuments(coll_link)
    return docs
So is there a way to use the query above in Python so I get just what I need?
Thanks.
If your query has run successfully in the Query Explorer on the Azure portal, you can just use the client.QueryDocuments(collection_link, query) method to run it, as in the code below.
A query is performed using SQL
# Query them in SQL
query = {'query': 'SELECT * FROM server s'}

options = {}
options['enableCrossPartitionQuery'] = True
options['maxItemCount'] = 2

result_iterable = client.QueryDocuments(collection['_self'], query, options)
results = list(result_iterable)
print(results)
Hope it helps. If you have any concerns, please feel free to let me know.
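Applied to the query from the question, that would look roughly like this; a sketch reusing the helpers above, with coll_link obtained the same way as in getCosmosDBColl_link():

client = getCosmosDBClient()
query = {'query': 'SELECT VALUE Block FROM c JOIN Block IN c.radar50p01'}
options = {'enableCrossPartitionQuery': True}
# coll_link is the collection's '_self' link, as looked up above
blocks = list(client.QueryDocuments(coll_link, query, options))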

How to get sql query from peewee?

Simple peewee example:
A MySQL table "Pet" with an auto-increment "id" and a char field "name".
Doing
my_pet = Pet.select().where(Pet.name == 'Garfield')
With .sql() we get the SQL interpretation.
How to get the raw sql query from:
my_pet = Pet.get(name='Garfield')
?
When you write:
my_pet = Pet(name='Garfield')
Nothing at all happens in the database.
You have simply created an object. There is no magic, as peewee is an ActiveRecord ORM, and only saves when you call a method like Model.save() or Model.create().
If you want the SQL for a query like Model.create(), then you should look into using Model.insert() instead:
insert_stmt = Pet.insert(name='Garfield')
sql = insert_stmt.sql()
new_obj_id = insert_stmt.execute()
The downside there is that you aren't returned a model instance, just the primary key.
If you are connecting to a Postgres database, as of peewee 3.13 you can print SQL queries by first getting the cursor and then calling mogrify() on your query. mogrify() is provided by the psycopg2 library and hence may not be available when connecting to other databases.
Given your example:
my_pet = Pet.select().where(Pet.name == 'Garfield').limit(1)
cur = database.cursor()
print(cur.mogrify(*my_pet.sql()))
Where database is the Peewee Database object representing the connection to Postgres.
You can use python's "%" operator to build the string
def peewee_sql_to_str(sql):
    return sql[0] % tuple(sql[1])

insert_stmt = Pet.insert(name='Garfield')
sql = insert_stmt.sql()
print(peewee_sql_to_str(sql))
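For the Pet.get(name='Garfield') case specifically, one option is to build the equivalent select first, since .get() executes immediately; a sketch using only the .sql() method shown above:

query = Pet.select().where(Pet.name == 'Garfield')
sql, params = query.sql()  # (sql_string, params), placeholders unfilled
print(peewee_sql_to_str((sql, params)))
my_pet = query.get()  # same row Pet.get(name='Garfield') would return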
