I am trying to fetch Azure cost usage data on a daily basis using Python, but I am facing an error when I run the code.
The error being generated:
query_ops = client.operations.query
AttributeError: 'Operations' object has no attribute 'query'
client = CostManagementClient(credentials, subscription_id)
# Query for yesterday's cost
query = "SELECT sum(cost) as cost FROM cost where time >= '{}' and time <= '{}'".format(yesterday_start, yesterday_end)
query_ops = client.operations.query
query_result = query_ops.execute(query)
# Extract the cost from the query result
cost = query_result.results[0]["cost"]["value"]
# Append the cost and subscription ID to the data list
data.append({'subscription_id': subscription_id, 'cost': cost})
Why not substitute the usage operations' query method for the 'query' method you are calling to retrieve the cost usage data?
Note: the CostManagementClient exposes a client.usage object that has a query method you can use to execute your query. The query method takes your query as a parameter and returns a QueryResponse object that you can use to extract the results.
Replace:
query_ops = client.operations.query
query_result = query_ops.execute(query)
With:
query_ops = client.usage
query_result = query_ops.query(query)
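Putting that change back into the snippet from the question, a minimal sketch would look like this (it assumes, as described above, that the usage operations expose a query method accepting the query string; credentials, subscription_id, yesterday_start, yesterday_end and data are taken to be defined as in the question):

from azure.mgmt.costmanagement import CostManagementClient

# `credentials`, `subscription_id`, `yesterday_start`, `yesterday_end`
# and `data` are assumed to be defined as in the question.
client = CostManagementClient(credentials, subscription_id)

# Query for yesterday's cost
query = "SELECT sum(cost) as cost FROM cost where time >= '{}' and time <= '{}'".format(
    yesterday_start, yesterday_end)

query_ops = client.usage               # usage operations instead of client.operations
query_result = query_ops.query(query)  # assumed to return a QueryResponse

# Extract the cost from the query result and collect it
cost = query_result.results[0]["cost"]["value"]
data.append({'subscription_id': subscription_id, 'cost': cost})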
I am trying to create a simple replication mechanism that fetches records from one database and writes them to another with Python, using cx_Oracle to query the Oracle source and psycopg2 to insert the data into PostgreSQL.
# Fetch from Oracle
cur.execute("select * from INC where sys_updated_on >= to_timestamp(:max, 'YYYY-MM-DD HH24:MI:SS')", {"max": str(maxupd[0])})
# Get all records in chunks
while True:
    # Fetch a subset of records acc. to cur.arraysize
    records = cur.fetchmany(numRows=cur.arraysize)
    # End loop if no more records are available
    if not records:
        break
    # Get row index for unique SYS_ID
    index = cols.index("SYS_ID")
    # Check for each record if already exists or is new
    for rec in records:
        # Fetch the SYS_ID from the current record
        my_sql = sql.SQL("select 'SYS_ID' from inc where 'SYS_ID'= '%%%s%%' " % (rec[index]))
        cur1.execute(my_sql)
        myid = cur1.fetchone()
        # If record does not exist in target, insert record
        if myid is None:
            rec = str(list(rec))[1:-1]
            cur1.execute("""INSERT INTO inc VALUES(%s)""" % (rec))
The insert fails because of the following error:
psycopg2.errors.SyntaxError: syntax error at or near "<"
LINE 1: ...370f17e2c083d6ff7bc2050ea4', 'SAR', None, 0, '4', <cx_Oracle...
This means that the CLOB fields are not read correctly when transforming the result from tuple to a string to be used in the insert statement.
Printing the item in the tuple directly, e.g. print(rec[28]), gives the content of the field, but the transformation only shows the cx_Oracle placeholder. I have tried various approaches, but all of them failed for my purpose.
Is there a way to get the CLOB content into the string?
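One way this is commonly handled is to read the LOB objects into plain strings before the tuple is converted; a minimal sketch, assuming the placeholders in the row are cx_Oracle.LOB objects (as the <cx_Oracle...> in the error suggests):

import cx_Oracle

# Read any LOB values in the fetched row into plain Python strings
# before building the INSERT value string (assumes `rec` is the tuple
# fetched in the loop above).
clean = [value.read() if isinstance(value, cx_Oracle.LOB) else value for value in rec]
rec = str(clean)[1:-1]

Another option along the same lines is an output type handler on the Oracle connection, so that CLOB columns are fetched as strings in the first place.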
I am working in Python/Django with a MySQL database. This SQL query works fine in MySQL Workbench:
SELECT * FROM frontend_appliance_sensor_reading
WHERE id = (SELECT max(id) FROM frontend_appliance_sensor_reading WHERE sensor_id = hex(x'28E86479972003D2') AND
appliance_id = 185)
I am returning the latest record for a sensor. The hex string is the sensor ID that I need to pass in as a variable in Python.
Here is my Python function that should return this object:
def get_last_sensor_reading(appliance_id, sensor_id):
    dw_conn = connection
    dw_cur = dw_conn.cursor()
    appliance_id_lookup = appliance_id
    dw_cur.execute('''
        SELECT * FROM frontend_appliance_sensor_reading as sr
        WHERE id = (SELECT max(id) FROM frontend_appliance_sensor_reading WHERE sensor_id = hex(x{sensor_id_lookup}) AND
        appliance_id = {appliance_id_lookup})
        '''.format(appliance_id_lookup=appliance_id_lookup, sensor_id_lookup=str(sensor_id)))
    values = dw_cur.fetchall()
    dw_cur.close()
    dw_conn.close()
    print values
However, it seems to concatenate the x with the variable, producing:
(1054, "Unknown column 'x9720032F0100DE86' in 'where clause'")
I have tried various string combinations to get it to execute correctly, with no luck. What am I missing? Does the x actually get interpreted as a str? Or should I be converting it to something else first?
Also, I cannot use Django's ORM for this, as the sensor ID is stored in a BinaryField as BLOB data, and you cannot filter by BLOB data in Django. This is the reason I am using a SQL command instead of just doing SensorReading.objects.filter(sensor_id=sensor).latest('id').
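A sketch of one way around this, assuming sensor_id arrives as a plain string of hex digits (as in the Workbench query) and that appliance_id can be passed as a normal query parameter:

def get_last_sensor_reading(appliance_id, sensor_id):
    dw_cur = connection.cursor()
    # The working Workbench query quotes the hex digits: hex(x'28E8...').
    # str.format() above drops those quotes, so MySQL sees x9720... and treats
    # it as a column name. The x'...' literal cannot be sent as a query
    # parameter, so the quotes have to be part of the SQL template, while
    # appliance_id is passed as a real parameter.
    dw_cur.execute(
        '''
        SELECT * FROM frontend_appliance_sensor_reading
        WHERE id = (SELECT max(id) FROM frontend_appliance_sensor_reading
                    WHERE sensor_id = hex(x'{sensor_hex}') AND appliance_id = %s)
        '''.format(sensor_hex=sensor_id),
        [appliance_id],
    )
    values = dw_cur.fetchall()
    dw_cur.close()
    return values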
I wrote a method to get the status of a CSV file in a SQL Server table. The table has a column named CSV_STATUS, and for a particular CSV, I'd like my method to give me the value of its CSV status. I wrote the following function:
def return_csv_status_db(db_instance, name_of_db_instance_tabledict, csvfile_path):
    table_dict = db_instance[name_of_db_instance_tabledict]
    csvfile_name = csvfile_path.name
    sql = db.select([table_dict['table'].c.CSV_STATUS]).where(table_dict['table'].c.CSV_FILENAME == csvfile_name)
    result = table_dict['engine'].execute(sql)
    print(result)
Whenever I print result, it returns: <sqlalchemy.engine.result.ResultProxy object at 0x0000005E642256C8>
How can I extract the value of the select statement?
Take a look at [1].
As I understand it, you need to do the following:
for row in result:
    # do what you need to do for each row.
[1] - https://docs.sqlalchemy.org/en/13/core/connections.html
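Since the query above selects a single CSV_STATUS value for one file name, a minimal sketch of pulling it out (assuming the SQLAlchemy 1.3-style ResultProxy shown in the question) could be:

result = table_dict['engine'].execute(sql)
row = result.fetchone()  # first matching row, or None if there is no match
csv_status = row[0] if row is not None else None
# result.scalar() does the same in one step: it returns the first column
# of the first row, or None when nothing matched.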
I am trying to query a table in BigQuery via a Python script. However, I have written the query as a standard SQL query, and for this I need to start my query with '#standardSQL'. When I do this, it comments out the rest of my query. I have also tried to write the query over multiple lines, but it does not allow me to do this either. Has anybody dealt with a problem like this and found a solution? Below is my first attempt, where the query becomes commented out.
client = bigquery.Client('dataworks-356fa')
query = ("#standardsql SELECT count(distinct serial) FROM `dataworks-356fa.FirebaseArchive.test2` Where (PeripheralType = 1 or PeripheralType = 2 or PeripheralType = 12) AND EXTRACT(WEEK FROM createdAt) = EXTRACT(WEEK FROM CURRENT_TIMESTAMP()) - 1 AND serial != 'null'")
dataset = client.dataset('FirebaseArchive')
table = dataset.table('test2')
tbl = dataset.table('Count_BB_Serial_weekly')
job = client.run_async_query(str(uuid.uuid4()), query)
job.destination = tbl
job.write_disposition= 'WRITE_TRUNCATE'
job.begin()
When I try to write the query like this, Python does not read anything on the second line as part of the query.
query = ("#standardsql
SELECT count(distinct serial) FROM `dataworks-356fa.FirebaseArchive.test2` Where (PeripheralType = 1 or PeripheralType = 2 or PeripheralType = 12) AND EXTRACT(WEEK FROM createdAt) = EXTRACT(WEEK FROM CURRENT_TIMESTAMP()) - 1 AND serial != 'null'")
The query I'm running selects values that have been produced within the last week. If there is a variation of this that would not require standard SQL, I would be willing to switch my other queries as well, but I have not been able to figure out how to do that. I would prefer for that to be a last resort, though. Thank you for the help!
If you want to flag that you'll be using standard SQL inside the query itself, you can build it like:
query = """#standardSQL
SELECT count(distinct serial) FROM `dataworks-356fa.FirebaseArchive.test2` Where (PeripheralType = 1 or PeripheralType = 2 or PeripheralType = 12) AND EXTRACT(WEEK FROM createdAt) = EXTRACT(WEEK FROM CURRENT_TIMESTAMP()) - 1 AND serial != 'null'
"""
Another option is to set the use_legacy_sql property of the created job to False, something like:
job = client.run_async_query(job_name, query)
job.use_legacy_sql = False # -->this also makes the API use Standard SQL
job.begin()
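For what it's worth, newer releases of the google-cloud-bigquery client express the same switch through a job config instead of run_async_query; a rough sketch (project, dataset and table names are taken from the question, everything else is assumed):

from google.cloud import bigquery

client = bigquery.Client('dataworks-356fa')
job_config = bigquery.QueryJobConfig(
    use_legacy_sql=False,  # same effect as job.use_legacy_sql = False above
    destination='dataworks-356fa.FirebaseArchive.Count_BB_Serial_weekly',
    write_disposition='WRITE_TRUNCATE',
)
job = client.query(query, job_config=job_config)
job.result()  # block until the query job finishes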
I've been learning Python recently and have learned how to connect to a database and retrieve data from it using MySQLdb. However, all the examples show how to get multiple rows of data; I want to know how to retrieve only one row.
This is my current method.
cur.execute("SELECT number, name FROM myTable WHERE id='" + id + "'")
results = cur.fetchall()
number = 0
name = ""
for result in results:
    number = result['number']
    name = result['name']
It seems redundant to do for result in results: since I know there is only going to be one result.
How can I just get one row of data without using the for loop?
.fetchone() to the rescue:
result = cur.fetchone()
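A short usage sketch, assuming the same dictionary-style cursor as in the question (and a parameter placeholder instead of string concatenation):

cur.execute("SELECT number, name FROM myTable WHERE id = %s", (id,))
result = cur.fetchone()  # one row, or None if there is no match
if result:
    number = result['number']
    name = result['name']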
Use .pop():
if results:
    result = results.pop()
    number = result['number']
    name = result['name']
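Note that the .pop() approach still goes through fetchall() and builds a list of every matching row first, while .fetchone() simply returns the next row (or None), so for a query that can only match one record fetchone() is the more direct choice.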