Get value of Pulumi secret - Python

I have a Pulumi (python) script that needs to query a database to get a list of customers. The rest of the setup that it does is based on that list.
I've stored the username/password for that database in Pulumi secrets with pulumi config set --secret db_user $USER and pulumi config set --secret db_password $PASSWORD so that they are encrypted in the Pulumi stack file. The problem is that when I try to retrieve them, they are Output objects. I think that Pulumi does this so that it can track the value and the resource that created it together, but I just need the string values so I can connect to a database and run a query, as in this simplified example:
db_host = pulumi_config.require("db_host")
db_name = pulumi_config.require("db_name")
db_user = pulumi_config.require_secret("db_user")
db_password = pulumi_config.require_secret("db_password")
# psycopg2.connect fails with an error:
# TypeError: <pulumi.output.Output object at 0x10feb3df0> has type Output, but expected one of: bytes, unicode
connection = psycopg2.connect(
    host=db_host,
    database=db_name,
    user=db_user,
    password=db_password)
cursor = connection.cursor()
query = """
SELECT id
FROM customers
WHERE ready = true
ORDER BY id DESC
"""
cursor.execute(query)
customer_ids = []
for record in cursor:
customer_ids.append(record[0])
The code above fails when I try to connect with psycopg2 because it requires a string.
I know that when I use Pulumi libraries that take Pulumi Inputs/Outputs as parameters, the secrets are decrypted just fine. So how can I decrypt these secrets for use with non-Pulumi code?

I think that pulumi does this so that it can track the value and the resource that created it together
The actual reason is that Pulumi needs to resolve the value it retrieves from config, and that is an eventual (asynchronous) operation. Pulumi decrypts the value using the key first, and only once that's done can it resolve the value.
You're dealing with an Output and like any other Output, you need to resolve the value using an apply if you want to interpolate it into a string.
connection = Output.all(db_user, db_password) \
    .apply(lambda args: psycopg2.connect(
        host=db_host,
        database=db_name,
        user=args[0],
        password=args[1]))
# perform your SQL query here
Note: all of the logic you're talking about needs to happen inside the apply.

As a reference for anyone else who tries to do something like this, the complete solution looked like this:
# Takes a connection object, uses it to perform a query, and then returns a list of customer IDs
def do_query(connection):
    query = """
        SELECT id
        FROM customers
        WHERE ready = true
        ORDER BY id DESC
    """
    cursor = connection.cursor()
    cursor.execute(query)
    customer_ids = []
    for record in cursor:
        customer_ids.append(record[0])
    return customer_ids
# Gets a list of customer IDs, wrapped in an Output object.
def get_customer_ids():
    customer_ids = Output.all(db_user, db_password) \
        .apply(lambda args: do_query(
            psycopg2.connect(
                host=db_host,
                database=db_name,
                user=args[0],
                password=args[1])))
    return customer_ids
NOTE: The list of customer IDs will still be wrapped in an Output object, so when you want to use it, you will need to do something like this:
def create_connector_for_customers(customer_ids):
    for customer_id in customer_ids:
        connector_config = ConnectorConfigArgs(
            # Use customer_id to set up the connector
        )
        destination_schema = ConnectorDestinationSchemaArgs(
            # Use customer_id to set up the destination schema
        )

# The customer ID list is wrapped in an Output, so it can only be accessed within an `apply`
customer_list = get_customer_ids()
customer_list.apply(lambda customer_ids: create_connector_for_customers(customer_ids))
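If you want to inspect the resolved list, one option (not part of the original solution, just a sketch using the standard pulumi.export) is to expose it as a stack output; because it derives from secret config values, Pulumi keeps it marked as secret:
import pulumi

# Hypothetical debugging aid: expose the resolved IDs as a stack output.
# Since the list derives from secret config values, Pulumi marks it secret.
pulumi.export("customer_ids", get_customer_ids())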

Related

Want to create a Python API and integrate it with Swagger/Postman

Requirement: I want to create a Python API that inserts data into a BigQuery table. The API will be hosted in Swagger/Postman, where the user can provide input data so that it gets reflected in the BigQuery table.
Can anyone help me find a suitable solution with code?
import sqlite3 as sql
from google.cloud import bigquery
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file('path/to/file.json')
project_id = 'project_id'
client = bigquery.Client(credentials=credentials, project=project_id)

def add_data(group_name, user_name):
    try:
        # Connecting to database
        con = sql.connect('shot_database.db')
        # Getting cursor
        c = con.cursor()
        # Adding data
        job_config.use_legacy_sql = True
        query_job = client.query("""
            INSERT INTO `table_name` (group, user)
            VALUES (%s, %s)""", job_config=job_config)
        results = query_job.result()  # Wait for the job to complete.
        # Applying changes
        con.commit()
    except:
        print("An error has occurred")
The code you provided is a mix of SQLite and BigQuery, but it looks like you're trying to use BigQuery to insert data into a table. To insert data into a BigQuery table using Python, you can use the insert_rows_json() method of the Client class. Here's an example of how you can use this method to insert data into a table called "mytable" in a dataset called "mydataset":
import json

# Define the data you want to insert
data = [
    {
        "group": group_name,
        "user": user_name
    }
]

# Insert the data
table_id = "mydataset.mytable"
errors = client.insert_rows_json(table_id, data)

if errors == []:
    print("Data inserted successfully")
else:
    print("Errors occurred while inserting data:")
    print(json.dumps(errors, indent=2))
Then, you can create an API using Flask or Django and call the add_data method you defined to insert data into the BigQuery table.
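For illustration, here is a minimal Flask sketch that exposes the add_data function over HTTP so it can be called from Swagger or Postman; the /add route and the JSON field names are assumptions, not from the original question:
# A minimal sketch: the /add route and the payload field names are hypothetical,
# and add_data(group_name, user_name) is assumed to be the function defined above.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/add", methods=["POST"])
def add():
    payload = request.get_json()
    # Expecting a body like {"group_name": "...", "user_name": "..."}
    add_data(payload["group_name"], payload["user_name"])
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(port=8080)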

How to set up AzureSQL Database with AlwaysEncrypted and fill it with data?

At the moment I am working with the Azure cloud. I want to set up an AzureSQL database and use Always Encrypted to ensure that the data is 'always encrypted' ;-). Furthermore, I would like to set up Azure Functions which can connect to the database as well as write records into it.
I have already set up an AzureSQL database, but I do not know how to work with it. I made two attempts:
Set up the table directly in SSMS, fill it with data, create keys, and encrypt them with the wizard. This works totally fine, and I am able to see the plain data only if I set the 'Always Encrypted' checkbox while connecting to the database.
My second attempt was to include the Always Encrypted setup directly in the queries. I tried the following:
CREATE COLUMN MASTER KEY CMK_test_1
WITH (
    KEY_STORE_PROVIDER_NAME = 'AZURE_KEY_VAULT',
    KEY_PATH = '<PATH_TO_AZURE_KEY_VAULT>'
);

CREATE COLUMN ENCRYPTION KEY CEK_test_1
WITH VALUES
(
    COLUMN_MASTER_KEY = CMK_test_1,
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = <VALUE>
);

CREATE TABLE dbo.AlwaysEncryptedTest
(
    ID int IDENTITY(1,1) PRIMARY KEY
    , FirstName varchar(25) COLLATE Latin1_General_BIN2 ENCRYPTED WITH (
        ENCRYPTION_TYPE = RANDOMIZED,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256',
        COLUMN_ENCRYPTION_KEY = CEK_test_1) NOT NULL
    , LastName varchar(25) COLLATE Latin1_General_BIN2 ENCRYPTED WITH (
        ENCRYPTION_TYPE = RANDOMIZED,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256',
        COLUMN_ENCRYPTION_KEY = CEK_test_1) NOT NULL
    , City varchar(25) COLLATE Latin1_General_BIN2 ENCRYPTED WITH (
        ENCRYPTION_TYPE = RANDOMIZED,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256',
        COLUMN_ENCRYPTION_KEY = CEK_test_1) NOT NULL
    , StreetName varchar(25) COLLATE Latin1_General_BIN2 ENCRYPTED WITH (
        ENCRYPTION_TYPE = RANDOMIZED,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256',
        COLUMN_ENCRYPTION_KEY = CEK_test_1) NOT NULL
);
I know that I have to use an application to put records into the database, but I could not find a tutorial or anything else that helps me do so. I found some C# explanation on the Microsoft website, but this did not help me do the job. In the best case I would write the connection in Python.
Any help is appreciated.
Best
P
If you want to connect to an Azure SQL server that has Always Encrypted enabled with Azure Key Vault from a Python application, you can use the ODBC driver to implement it.
To do so, add ColumnEncryption=Enabled to the connection string to tell the ODBC application that Always Encrypted has been enabled. Besides, since the keys are stored in Azure Key Vault, you also need to add KeyStoreAuthentication, KeyStorePrincipalId, and KeyStoreSecret so that the ODBC application can connect to Azure Key Vault and get the encryption key.
For more details, please refer to here and here
For example
Create a service principal to connect to Azure Key Vault
az login
az ad sp create-for-rbac --skip-assignment --sdk-auth
az keyvault set-policy --name $vaultName --key-permissions get list sign unwrapKey verify wrapKey --resource-group $resourceGroupName --spn <clientId-of-your-service-principal>
Code
import pyodbc

server = '<>.database.windows.net'
database = ''
username = ''
password = ''
driver = '{ODBC Driver 17 for SQL Server}'
KeyStoreAuthentication = 'KeyVaultClientSecret'
KeyStorePrincipalId = '<clientId-of-your-service-principal>'
KeyStoreSecret = '<clientSecret-of-your-service-principal>'

conn_str = f'DRIVER={driver};SERVER={server};PORT=1433;DATABASE={database};UID={username};PWD={password};' \
           f'ColumnEncryption=Enabled;KeyStoreAuthentication={KeyStoreAuthentication};' \
           f'KeyStorePrincipalId={KeyStorePrincipalId};KeyStoreSecret={KeyStoreSecret}'

with pyodbc.connect(conn_str) as conn:
    with conn.cursor() as cursor:
        cursor.execute("SELECT * FROM [dbo].[Patients]")
        row = cursor.fetchone()
        while row:
            print(row)
            row = cursor.fetchone()
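Inserting records works the same way: as long as the values are bound as parameters, the driver encrypts them client-side before sending them to the server. A minimal sketch, assuming the dbo.AlwaysEncryptedTest table from the question:
# A minimal sketch against the dbo.AlwaysEncryptedTest table created above.
# The values must be bound as parameters; literals embedded in the SQL string
# cannot be encrypted by the driver and would fail against encrypted columns.
with pyodbc.connect(conn_str) as conn:
    with conn.cursor() as cursor:
        cursor.execute(
            "INSERT INTO dbo.AlwaysEncryptedTest (FirstName, LastName, City, StreetName) "
            "VALUES (?, ?, ?, ?)",
            'John', 'Doe', 'Springfield', 'Main Street')
        conn.commit()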

Telegram Bot respond to specific command in Python list

I am making a Telegram bot that can access a database to reply to users' queries. The bot needs to respond to requests for specific data in the database. I was able to solve the case where users request all data, but I am stuck on individual data. I am using telegram.ext from the telegram package in Python. Here is what I have done so far.
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters
import MySQLdb

currr = []  # global list var ~don't bash me for using global in python please, I'm a newbie

# request for all data in database
def request2(bot, update):
    db = MySQLdb.connect(host="local", user="root", passwd="pwd", db="mydb")
    cur = db.cursor()
    cur.execute("select ID from table")
    ID = cur.fetchall()
    cur.execute("SELECT ID, temp FROM table2 order by indexs desc")
    each_rows = cur.fetchall()
    for IDs in ID:
        for each_row in each_rows:
            if str(each_row[0])[0:4] == str(ID)[2:6]:
                update.message.reply_text('reply all related data here')
                break

# request for single data
def individualreq(bot, update):
    db = MySQLdb.connect(host="localhost", user="root", passwd="pwd", db="mydb")
    update.message.reply_text('reply individual data to users here')

def main():
    updater = Updater("TOKEN")
    dp = updater.dispatcher
    global currr
    # get all IDs from database
    db = MySQLdb.connect(host="localhost", user="root", passwd="pwd", db="mydb")
    cur = db.cursor()
    cur.execute("select ID from table")
    curr_ID = cur.fetchall()
    # example ID = 'F01', 'F02', 'F03'
    for curr_IDs in curr_ID:
        currr.append(curr_IDs[0])
    # request all data
    dp.add_handler(CommandHandler("all", request2))
    # request individual data
    dp.add_handler(CommandHandler(currr, individualreq))  # list of commands in currr[]

if __name__ == '__main__':
    main()
I am looking for a way to pass the current command, which is also the ID in the database that the user requested from the currr[] list, to the individualreq(bot, update) function, so that only the data for the requested ID is replied. Users will select from a list of IDs in Telegram, and the command handler should pass the selected ID to the function. I have not found a way to pass the ID to the function. Could someone help me solve this, please? Thanks
I found a solution to my question based on the answer provided by Oluwafemi Sule. CommandHandler can pass the arguments of the command to the function if you add pass_args=True to the CommandHandler.
dp.add_handler(CommandHandler(currr, individualreq, pass_args=True))
To print out the args in the function, the function needs to receive the args.
def individualreq(bot, update, args):
    # id stores the args value
    id = update.message.text
    print(id[1:])  # [1:] is to get rid of the / in id
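To actually reply with the data for the requested ID instead of just printing it, the command text can be used in the lookup query. A minimal sketch, assuming the table2 schema from the question:
# A hedged sketch, assuming table2 from the question; the command itself
# (e.g. /F01) doubles as the ID to look up, and args holds anything typed
# after the command.
def individualreq(bot, update, args):
    requested_id = update.message.text[1:]  # strip the leading /
    db = MySQLdb.connect(host="localhost", user="root", passwd="pwd", db="mydb")
    cur = db.cursor()
    cur.execute("SELECT ID, temp FROM table2 WHERE ID = %s", (requested_id,))
    for row in cur.fetchall():
        update.message.reply_text('ID {}: {}'.format(row[0], row[1]))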
You can outright make individualreq a closure.
CommandHandler takes a command or a list of commands to listen to, plus a number of other options.
There is a pass_user_data option that allows user data to be passed to the callback.
dp.add_handler(CommandHandler(currr, individualreq, pass_user_data=True))
The signature of the individualreq callback will be updated to take the user_data:
def individualreq(bot, update, user_data=None):
    # user_data is a dict
    print(user_data)

How to run a BigQuery query in Python

This is the query that I have been running in BigQuery that I want to run in my Python script. How would I change this / what do I have to add for it to run in Python?
#standardSQL
SELECT
Serial,
MAX(createdAt) AS Latest_Use,
SUM(ConnectionTime/3600) as Total_Hours,
COUNT(DISTINCT DeviceID) AS Devices_Connected
FROM `dataworks-356fa.FirebaseArchive.testf`
WHERE Model = "BlueBox-pH"
GROUP BY Serial
ORDER BY Serial
LIMIT 1000;
From what I have been researching, it seems I can't save this query as a permanent table using Python. Is that true? And if it is true, is it possible to still export a temporary table?
You need to use the BigQuery Python client lib, then something like this should get you up and running:
from google.cloud import bigquery
client = bigquery.Client(project='PROJECT_ID')
query = "SELECT...."
dataset = client.dataset('dataset')
table = dataset.table(name='table')
job = client.run_async_query('my-job', query)
job.destination = table
job.write_disposition= 'WRITE_TRUNCATE'
job.begin()
https://googlecloudplatform.github.io/google-cloud-python/stable/bigquery-usage.html
See the current BigQuery Python client tutorial.
Here is another way using a JSON file for the service account:
>>> from google.cloud import bigquery
>>>
>>> CREDS = 'test_service_account.json'
>>> client = bigquery.Client.from_service_account_json(json_credentials_path=CREDS)
>>> job = client.query('select * from dataset1.mytable')
>>> for row in job.result():
... print(row)
This is a good usage guide:
https://googleapis.github.io/google-cloud-python/latest/bigquery/usage/index.html
To simply run and write a query:
# from google.cloud import bigquery
# client = bigquery.Client()
# dataset_id = 'your_dataset_id'
job_config = bigquery.QueryJobConfig()
# Set the destination table
table_ref = client.dataset(dataset_id).table("your_table_id")
job_config.destination = table_ref
sql = """
SELECT corpus
FROM `bigquery-public-data.samples.shakespeare`
GROUP BY corpus;
"""
# Start the query, passing in the extra configuration.
query_job = client.query(
sql,
# Location must match that of the dataset(s) referenced in the query
# and of the destination table.
location="US",
job_config=job_config,
) # API request - starts the query
query_job.result() # Waits for the query to finish
print("Query results loaded to table {}".format(table_ref.path))
I personally prefer querying using pandas:
# BQ authentication
import pandas as pd
import pydata_google_auth

SCOPES = [
    'https://www.googleapis.com/auth/cloud-platform',
    'https://www.googleapis.com/auth/drive',
]

credentials = pydata_google_auth.get_user_credentials(
    SCOPES,
    # Set auth_local_webserver to True to have a slightly more convenient
    # authorization flow. Note, this doesn't work if you're running from a
    # notebook on a remote server, such as over SSH or with Google Colab.
    auth_local_webserver=True,
)

query = "SELECT * FROM my_table"
data = pd.read_gbq(query, project_id=MY_PROJECT_ID, credentials=credentials, dialect='standard')
The pythonbq package is very simple to use and a great place to start. It uses python-gbq.
To get started you would need to generate a BQ json key for external app access. You can generate your key here.
Your code would look something like:
from pythonbq import pythonbq

myProject = pythonbq(
    bq_key_path='path/to/bq/key.json',
    project_id='myGoogleProjectID'
)
SQL_CODE="""
SELECT
Serial,
MAX(createdAt) AS Latest_Use,
SUM(ConnectionTime/3600) as Total_Hours,
COUNT(DISTINCT DeviceID) AS Devices_Connected
FROM `dataworks-356fa.FirebaseArchive.testf`
WHERE Model = "BlueBox-pH"
GROUP BY Serial
ORDER BY Serial
LIMIT 1000;
"""
output=myProject.query(sql=SQL_CODE)

Twisted - Using a Deferred for another sql query

I've got a Twisted-based network application. Now I would like to implement a new database design, but I'm getting stuck with the Deferred object.
I write sessions into my database by using these two functions:
@defer.inlineCallbacks
def createSession(self, peerIP, peerPort, hostIP, hostPort):
    SessionUUID = uuid.uuid1().hex
    yield self.writeSession(SessionUUID, peerIP, hostIP)
    sid = self.db.runQuery("SELECT id FROM sessions WHERE sid = %s",
                           (SessionUUID,))
    yield sid

@defer.inlineCallbacks
def writeSession(self, SessionUUID, peerIP, hostIP):
    sensorname = self.getSensor() or hostIP
    r = yield self.db.runQuery("SELECT id FROM sensors WHERE ip = %s",
                               (sensorname,))
    if r:
        id = r[0][0]
    else:
        yield self.db.runQuery("INSERT INTO sensors (ip) VALUES (%s)",
                               (sensorname,))
        r = yield self.db.runQuery("""SELECT LAST_INSERT_ID()""")
        id = int(r[0][0])
    self.simpleQuery(
        """
        INSERT INTO sessions (sid, starttime, sensor, ip)
        VALUES (%s, FROM_UNIXTIME(%s), %s, %s)
        """,
        (SessionUUID, self.nowUnix(), id, peerIP))
In short:
createSession creates a UUID for the session and calls writeSession to write it into my db. After it is written, I try to select the ID of the last insert by using the UUID in the WHERE statement and return the result.
Now to my problem. To update the session information, I call this function:
def handleConnectionLost(self, sid, args):
    self.simpleQuery("UPDATE sessions SET endtime = now() WHERE sid = %s",
                     (sid,))
As you can see, I try to use the sid from createSession, which is a Deferred object and not an integer. If I got this right, adding a callback to handleConnectionLost would run the query at that time so I can use the value there. But this is not the only function where I need the sid, so it would add overhead to attach a callback every time I need it.
Is there a way that I can give my sid as an integer to my functions, so that I'm only running the query one time? What would that have to look like?
When I'm using a Deferred query with a now() statement, will it use now() when I added this query to my callbacks, or will it use now() when the query is fired?
You can immediately get the ID after inserting a new row for later use; a similar question was answered here: The equivalent of SQLServer function SCOPE_IDENTITY() in mySQL?
it will use now() when the query is fired
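For example, a minimal sketch based on the question's own code: resolve the sid once inside createSession with inlineCallbacks, so that callers get a Deferred whose result is the plain integer:
from twisted.internet import defer

@defer.inlineCallbacks
def createSession(self, peerIP, peerPort, hostIP, hostPort):
    SessionUUID = uuid.uuid1().hex
    yield self.writeSession(SessionUUID, peerIP, hostIP)
    rows = yield self.db.runQuery("SELECT id FROM sessions WHERE sid = %s",
                                  (SessionUUID,))
    # Resolve the Deferred here, once, and hand the plain integer onward.
    defer.returnValue(int(rows[0][0]))
Callers still receive a Deferred from createSession, but its result is the integer sid, so once a caller has yielded it, functions like handleConnectionLost can accept a plain int.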
