SQL query within Python

I am trying to run a query within Python and am running into errors.
I am an extreme novice in the world of SQL. One of our DBAs wrote this SQL command and it worked from their machine; however, converting it to Python is where I am running into issues.
import requests
from orionsdk import SwisClient

npm_server = 'SERVER'
username = 'USERNAME'
password = 'PASSWORD'
verify = False
if not verify:
    from requests.packages.urllib3.exceptions import InsecureRequestWarning
    requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

swis = SwisClient(npm_server, username, password)

print("Query Test:")
query = """
SELECT NodesData.Caption as NodeName, IP_Address,
Interfaces.InterfaceName,
Interfaces.Status, Interfaces.InterfaceLastChange
FROM Interfaces
INNER JOIN NodesData ON Interfaces.NodeID = NodesData.NodeID
INNER JOIN NodesCustomProperties (NOLOCK) ON NodesData.NodeID = NodesCustomProperties.NodeID
LEFT OUTER JOIN WebCommunityStrings ON WebCommunityStrings.CommunityString = NodesData.Community
WHERE Interfaces.Status = 2 AND Interfaces.Severity > 0 AND InterfaceName = 'Tunnel201'
ORDER BY NodesData.Caption, Interfaces.InterfaceIndex DESC
"""
results = swis.query(query)

for row in results['results']:
    print("{NodeID:<5}: {DisplayName}".format(**row))
Output:
C:\Users\jefhill\AppData\Local\Programs\Python\Python37-32\python.exe "C:/Users/jefhill/Desktop/Python Stuff/Projects/solarWinds/swExport.py"
Query Test:
Traceback (most recent call last):
File "C:/Users/jefhill/Desktop/Python Stuff/Projects/solarWinds/swExport.py", line 30, in <module>
results = swis.query(query)
File "C:\Users\jefhill\AppData\Local\Programs\Python\Python37-32\lib\site-packages\orionsdk\swisclient.py", line 26, in query
{'query': query, 'parameters': params}).json()
File "C:\Users\jefhill\AppData\Local\Programs\Python\Python37-32\lib\site-packages\orionsdk\swisclient.py", line 63, in _req
resp.raise_for_status()
File "C:\Users\jefhill\AppData\Local\Programs\Python\Python37-32\lib\site-packages\requests\models.py", line 940, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: mismatched input ')' expecting 'EQ' in Join clause for url: https://SERVER:PORT/SolarWinds/InformationService/v3/Json/Query
Process finished with exit code 1
Note: I removed some of the more private information (i.e., server name, password, etc.)
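For what it's worth, the error message points straight at the join clause: SWQL (the query language behind swis.query) is not T-SQL and does not accept table hints such as (NOLOCK), so the parser chokes on the opening parenthesis where it expects a join condition. A minimal sketch of the query with the hint stripped out, untested here and assuming the rest of the query is valid in your environment:

# hypothetical fix: the same query as above with the T-SQL-only (NOLOCK) hint removed
query = """
SELECT NodesData.Caption as NodeName, IP_Address,
Interfaces.InterfaceName,
Interfaces.Status, Interfaces.InterfaceLastChange
FROM Interfaces
INNER JOIN NodesData ON Interfaces.NodeID = NodesData.NodeID
INNER JOIN NodesCustomProperties ON NodesData.NodeID = NodesCustomProperties.NodeID
LEFT OUTER JOIN WebCommunityStrings ON WebCommunityStrings.CommunityString = NodesData.Community
WHERE Interfaces.Status = 2 AND Interfaces.Severity > 0 AND InterfaceName = 'Tunnel201'
ORDER BY NodesData.Caption, Interfaces.InterfaceIndex DESC
"""
results = swis.query(query)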

Related

How can I query the bittensor network using btcli?

btcli query
Enter wallet name (default): my-wallet-name
Enter hotkey name (default): my-hotkey
Enter uids to query (All): 18
Note that my-wallet-name and my-hotkey were actually correct names: my wallet with one of my hotkeys. And I decided to query UID 18.
But btcli returns an error with no specific message:
AttributeError: 'Dendrite' object has no attribute 'forward_text'
Exception ignored in: <function Dendrite.__del__ at 0x7f5655e3adc0>
Traceback (most recent call last):
File "/home/eduardo/repos/bittensor/venv/lib/python3.8/site-packages/bittensor/_dendrite/dendrite_impl.py", line 107, in __del__
bittensor.logging.success('Dendrite Deleted', sufix = '')
File "/home/eduardo/repos/bittensor/venv/lib/python3.8/site-packages/bittensor/_logging/__init__.py", line 341, in success
cls()
File "/home/eduardo/repos/bittensor/venv/lib/python3.8/site-packages/bittensor/_logging/__init__.py", line 73, in __new__
config = logging.config()
File "/home/eduardo/repos/bittensor/venv/lib/python3.8/site-packages/bittensor/_logging/__init__.py", line 127, in config
parser = argparse.ArgumentParser()
File "/usr/lib/python3.8/argparse.py", line 1672, in __init__
prog = _os.path.basename(_sys.argv[0])
TypeError: 'NoneType' object is not subscriptable
What does this mean?
How can I query a UID correctly?
I have tried to look for UIDs to query, but the tool does not give me any.
I was expecting a semantic error or a way to look for a UID I can query, not a TypeError.
It appears that command is broken and should be removed.
I opened an issue for you here: https://github.com/opentensor/bittensor/issues/1085
You can use the Python API, like:
import bittensor

UID: int = 18

# connect to the chain and load the wallet/hotkey named in the question
subtensor = bittensor.subtensor( network="nakamoto" )
forward_text = "testing out querying the network"
wallet = bittensor.wallet( name = "my-wallet-name", hotkey = "my-hotkey" )
dend = bittensor.dendrite( wallet = wallet )

# resolve the neuron registered at this UID and build an endpoint for it
neuron = subtensor.neuron_for_uid( UID )
endpoint = bittensor.endpoint.from_neuron( neuron )

# query the endpoint for 64 generated tokens
response_codes, times, query_responses = dend.generate(endpoint, forward_text, num_to_generate=64)

# unpack the first result
response_code_text = response_codes[0]
query_response = query_responses[0]

Is it possible to view a postgresql table using Pandas?

import psycopg2  # assumed import; not shown in the original snippet

def connectxmlDb(dbparams):
    conn_string = "host='{}' dbname='{}' user='{}' password='{}' port='{}'"\
        .format(dbparams['HOST'], dbparams['DB'], dbparams['USERNAME'], dbparams['PASSWORD'], dbparams['PORT'])
    try:
        conn = psycopg2.connect(conn_string)
    except Exception as err:
        print('Connection to Database Failed : ERR : {}'.format(err))
        return False
    print('Connection to Database Success')
    return conn
dbconn = connectxmlDb(params['DATABASE'])
The above is the code I use to connect to the PostgreSQL database.
sql_statement = """ select a.slno, a.clientid, a.filename, a.user1_id, b.username, a.user2_id, c.username as username2, a.uploaded_ts, a.status_id
from masterdb.xmlform_joblist a
left outer join masterdb.auth_user b
on a.user1_id = b.id
left outer join masterdb.auth_user c
on a.user2_id = c.id
"""
cursor = dbconn.cursor()  # cursor creation was implied but not shown in the original
cursor.execute(sql_statement)
result = cursor.fetchall()
This is my code to extract data from a Postgres table using Python.
I want to know if it is possible to view this data using Pandas.
This is the code I used:
df = pd.read_sql_query(sql_statement, dbconn)
print(df.head(10))
but it is showing an error:
C:\Users\Lukmana\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\io\sql.py:762: UserWarning: pandas only support SQLAlchemy connectable(engine/connection) or database string URI or sqlite3 DBAPI2 connection; other DBAPI2 objects are not tested, please consider using SQLAlchemy
warnings.warn(
Traceback (most recent call last):
File "C:\Users\Lukmana\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\io\sql.py", line 2023, in execute
cur.execute(*args, **kwargs)
psycopg2.errors.SyntaxError: syntax error at or near ";"
LINE 1: ...ELECT name FROM sqlite_master WHERE type='table' AND name=?;
^
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\Lukmana\Desktop\user_count\usercount.py", line 55, in <module>
df = pd.read_sql_table(result, dbconn)
File "C:\Users\Lukmana\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\io\sql.py", line 286, in read_sql_table
if not pandas_sql.has_table(table_name):
File "C:\Users\Lukmana\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\io\sql.py", line 2200, in has_table
return len(self.execute(query, [name]).fetchall()) > 0
File "C:\Users\Lukmana\AppData\Local\Programs\Python\Python310\lib\site-packages\pandas\io\sql.py", line 2035, in execute
raise ex from exc
pandas.io.sql.DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': syntax error at or near ";"
LINE 1: ...ELECT name FROM sqlite_master WHERE type='table' AND name=?;
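Two things stand out in that traceback: the call that actually ran was pd.read_sql_table(result, dbconn), which passes a fetched result set where a table name belongs, and the raw psycopg2 connection makes pandas fall back to its sqlite code path (hence the sqlite_master query being sent to PostgreSQL). A sketch of one way that should work, handing pandas the SQL text plus a SQLAlchemy engine; the URL format here is an assumption built from the connection parameters shown above:

import pandas as pd
from sqlalchemy import create_engine

# build a SQLAlchemy engine instead of a raw psycopg2 connection
engine = create_engine(
    "postgresql+psycopg2://{user}:{pwd}@{host}:{port}/{db}".format(
        user=params['DATABASE']['USERNAME'],
        pwd=params['DATABASE']['PASSWORD'],
        host=params['DATABASE']['HOST'],
        port=params['DATABASE']['PORT'],
        db=params['DATABASE']['DB'],
    )
)

# pass the SQL text (not a cursor result) together with the engine
df = pd.read_sql_query(sql_statement, engine)
print(df.head(10))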

I am trying to unload data from a Snowflake internal stage to a Unix file path using COPY INTO and GET commands, but am getting an error

I am running all the SQL scripts under the scripts path in a for loop, copying the data into the @priya_stage area in Snowflake, and then using the GET command to unload the data from the stage area to my Unix path in CSV format. But I am getting an error.
Note: this same code works on my Mac but not on the Unix server.
import logging
import os
import snowflake.connector
from snowflake.connector import DictCursor as dict  # note: this alias shadows the built-in dict
from os import walk

try:
    conn = snowflake.connector.connect(
        account = 'xxx' ,
        user = 'xxx' ,
        password = 'xxx' ,
        database = 'xxx' ,
        schema = 'xxx' ,
        warehouse = 'xxx' ,
        role = 'xxx' ,
    )
    conn.cursor().execute('USE WAREHOUSE xxx')
    conn.cursor().execute('USE DATABASE xxx')
    conn.cursor().execute('USE SCHEMA xxx')

    take = []
    scripts = '/xxx/apps/xxx/xxx/scripts/snow/scripts/'
    os.chdir('/xxx/apps/xxx/xxx/scripts/snow/scripts/')

    for root , dirs , files in walk(scripts):
        for file in files:
            inbound = file[0:-4]
            sql = open(file , 'r').read()
            # file_number = 0
            # file_number += 1
            file_prefix = 'bridg_' + inbound
            file_name = file_prefix

            # run the script and keep its query id so RESULT_SCAN can replay the result set
            result_query = conn.cursor(dict).execute(sql)
            query_id = result_query.sfqid

            sql_copy_into = f'''
            copy into @priya_stage/{file_name}
            from (SELECT * FROM TABLE(RESULT_SCAN('{query_id}')))
            DETAILED_OUTPUT = TRUE
            HEADER = TRUE
            SINGLE = FALSE
            OVERWRITE = TRUE
            max_file_size=4900000000'''
            rs_copy_into = conn.cursor(dict).execute(sql_copy_into)

            # download every file the COPY produced from the stage to the local path
            for row_copy in rs_copy_into:
                file_name_in_stage = row_copy["FILE_NAME"]
                sql_get_to_local = f"""
                GET @priya_stage/{file_name_in_stage} file:///xxx/apps/xxx/xxx/inbound/zip_files/{inbound}/"""
                rs_get_to_local = conn.cursor(dict).execute(sql_get_to_local)

except snowflake.connector.errors.ProgrammingError as e:
    print('Error {0} ({1}): {2} ({3})'.format(e.errno , e.sqlstate , e.msg , e.sfqid))
finally:
    conn.cursor().close()
    conn.close()
Error
Traceback (most recent call last):
File "Generic_local.py", line 52, in <module>
rs_get_to_local = conn.cursor(dict).execute(sql_get_to_local)
File "/usr/local/lib64/python3.6/site-packages/snowflake/connector/cursor.py", line 746, in execute
sf_file_transfer_agent.execute()
File "/usr/local/lib64/python3.6/site-packages/snowflake/connector/file_transfer_agent.py", line 379, in execute
self._transfer_accelerate_config()
File "/usr/local/lib64/python3.6/site-packages/snowflake/connector/file_transfer_agent.py", line 671, in _transfer_accelerate_config
self._use_accelerate_endpoint = client.transfer_accelerate_config()
File "/usr/local/lib64/python3.6/site-packages/snowflake/connector/s3_storage_client.py", line 572, in transfer_accelerate_config
url=url, verb="GET", retry_id=retry_id, query_parts=dict(query_parts)
File "/usr/local/lib64/python3.6/site-packages/snowflake/connector/s3_storage_client.py", line 353, in _send_request_with_authentication_and_retry
verb, generate_authenticated_url_and_args_v4, retry_id
File "/usr/local/lib64/python3.6/site-packages/snowflake/connector/storage_client.py", line 313, in _send_request_with_retry
f"{verb} with url {url} failed for exceeding maximum retries."
snowflake.connector.errors.RequestExceedMaxRetryError: GET with url b'https://xxx-xxxxx-xxx-x-customer-stage.xx.amazonaws.com/https://xxx-xxxxx-xxx-x-customer-stage.xx.amazonaws.com/?accelerate' failed for exceeding maximum retries.
This link redirects me to an error message:
https://xxx-xxxxx-xxx-x-customer-stage.xx.amazonaws.com/https://xxx-xxxxx-xxx-x-customer-stage.xx.amazonaws.com/?accelerate
Access Denied error:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>1X1Z8G0BTX8BAHXK</RequestId>
<HostId>QqdCqaSK7ogAEq3sNWaQVZVXUGaqZnPv78FiflvVzkF6nSYXTSKu3iSiYlUOU0ka+0IMzErwGC4=</HostId>
</Error>
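Note that the stage hostname appears twice in the failing URL, which suggests the connector built a malformed transfer-acceleration probe; that is a client-side problem rather than anything in your COPY INTO or GET statements. Since the same code works on the Mac, a plausible first step (a guess, not a confirmed fix) is to compare snowflake-connector-python versions on both machines and upgrade the Unix server, e.g. with pip install --upgrade snowflake-connector-python:

import snowflake.connector

# compare this between the Mac (working) and the Unix server (failing);
# an older connector on the server is a plausible source of the doubled URL
print(snowflake.connector.__version__)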

pyodbc ERROR - ('ODBC SQL type -151 is not yet supported. column-index=16 type=-151', 'HY106')

I'm working on automating some query extraction using Python and pyodbc, then converting the results to Parquet format and sending them to AWS S3.
My script solution has been working fine so far, but I have hit a problem. I have a schema, let us call it SCHEMA_A, and inside of it several tables: TABLE_1, TABLE_2, ..., TABLE_N.
All those tables inside that schema are accessible with the same credentials.
So I'm using a script like this one to automate the task.
import sys
import time

import pandas as pd
import pyodbc

def get_stream(cursor, batch_size=100000):
    while True:
        row = cursor.fetchmany(batch_size)
        if row is None or not row:
            break
        yield row

cnxn = pyodbc.connect(driver='pyodbc driver here',
                      host='host name',
                      database='schema name',
                      user='user name',
                      password='password')
print('Connection established ...')
cursor = cnxn.cursor()
print('Initializing cursor ...')

if len(sys.argv) > 1:
    table_name = sys.argv[1]
    cursor.execute('SELECT * FROM {}'.format(table_name))
else:
    exit()
print('Query fetched ...')

row_batch = get_stream(cursor)
print('Getting iterator ...')

cols = cursor.description
cols = [col[0] for col in cols]
print('Initializing batch data frame ...')
df = pd.DataFrame(columns=cols)

start_time = time.time()
for rows in row_batch:
    tmp = pd.DataFrame.from_records(rows, columns=cols)
    df = df.append(tmp, ignore_index=True)
    tmp = None
    print("--- Batch inserted in %s seconds ---" % (time.time() - start_time))
    start_time = time.time()
I run code similar to that inside Airflow tasks, and it works just fine for all other tables. But then I have two tables, let's call them TABLE_I and TABLE_II, that yield the following error when I execute cursor.fetchmany(batch_size):
ERROR - ('ODBC SQL type -151 is not yet supported. column-index=16 type=-151', 'HY106')
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1112, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/ubuntu/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1285, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/ubuntu/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1310, in _execute_task
result = task_copy.execute(context=context)
File "/home/ubuntu/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 117, in execute
return_value = self.execute_callable()
File "/home/ubuntu/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 128, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/home/ubuntu/prea-ninja-airflow/jobs/plugins/extract/fetch.py", line 58, in fetch_data
for rows in row_batch:
File "/home/ubuntu/prea-ninja-airflow/jobs/plugins/extract/fetch.py", line 27, in stream
row = cursor.fetchmany(batch_size)
Inspecting those tables with SQLElectron and querying the first few lines, I realized that both TABLE_I and TABLE_II have a column called 'Geolocalizacao'. When I use SQL Server syntax to find the DATA TYPE of that column with:
SELECT DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = 'TABLE_I' AND
COLUMN_NAME = 'Geolocalizacao';
It yields:
DATA_TYPE
geography
Searching here on Stack Overflow I found this solution: python pyodbc SQL Server Native Client 11.0 cannot return geometry column
By the user's description, it seems to work fine after adding:
def unpack_geometry(raw_bytes):
    # adapted from SSCLRT information at
    # https://learn.microsoft.com/en-us/openspecs/sql_server_protocols/ms-ssclrt/dc988cb6-4812-4ec6-91cd-cce329f6ecda
    tup = struct.unpack('<i2b3d', raw_bytes)
    # tup contains: (unknown, Version, Serialization_Properties, X, Y, SRID)
    return tup[3], tup[4], tup[5]
and then:
cnxn.add_output_converter(-151, unpack_geometry)
after creating the connection. But it's not working for the GEOGRAPHY data type; when I use this code (with import struct added to the script), it gives me the following error:
Traceback (most recent call last):
File "benchmark.py", line 79, in <module>
for rows in row_batch:
File "benchmark.py", line 39, in get_stream
row = cursor.fetchmany(batch_size)
File "benchmark.py", line 47, in unpack_geometry
tup = struct.unpack('<i2b3d', raw_bytes)
struct.error: unpack requires a buffer of 30 bytes
An example of the values in this column follows the given template:
{"srid":4326,"version":1,"points":[{}],"figures":[{"attribute":1,"pointOffset":0}],"shapes":[{"parentOffset":-1,"figureOffset":0,"type":1}],"segments":[]}
I honestly don't know how to adapt the code for this structure. Can someone help me? It's been working fine for all other tables, but those two tables with this column are giving me a lot of headache.
Hi, this is what I have done:
from binascii import hexlify

def _handle_geometry(geometry_value):
    return f"0x{hexlify(geometry_value).decode().upper()}"
and then on connection:
cnxn.add_output_converter(-151, _handle_geometry)
This will return the value the same way SSMS does.
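A minimal sketch of how this converter plugs into the flow from the question; the connection string is a placeholder, and the converter simply renders the raw geography bytes as an SSMS-style hex string instead of decoding coordinates:

import pyodbc
from binascii import hexlify

def _handle_geometry(geometry_value):
    # render the raw UDT bytes as SSMS-style hex, e.g. 0xE6100000...
    return f"0x{hexlify(geometry_value).decode().upper()}"

cnxn = pyodbc.connect('connection string here')  # placeholder
# -151 is the ODBC type code pyodbc reports for SQL Server UDTs
# such as geometry/geography
cnxn.add_output_converter(-151, _handle_geometry)

cursor = cnxn.cursor()
cursor.execute("SELECT Geolocalizacao FROM TABLE_I")
print(cursor.fetchone()[0])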

problem with script verifying RRSIGs using DNSPython

I'm writing a script to verify RRSIGs using dnspython, but something is wrong with my code. The following is a snippet and its accompanying error message:
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

domain = 'iana.org'
server = '8.8.8.8'
qname = dns.name.from_text(domain)

# get DNSKEYs
DNSKEY_query = dns.message.make_query(qname, dns.rdatatype.DNSKEY, want_dnssec=True)
(DNSKEY_response, _) = dns.query.udp_with_fallback(DNSKEY_query, server)
dnskey_set, dnskey_sig = DNSKEY_response.answer

# get RRset and RRsig to verify
query = dns.message.make_query(qname, dns.rdatatype.NS, want_dnssec=True)
(response, _) = dns.query.udp_with_fallback(query, server)
rrset, rrsig = response.answer

dns.dnssec.validate(rrset, rrsig, {dns.name.empty: dnskey_set}, None)
Error message:
Traceback (most recent call last):
File "dnssec_validator.py", line 107, in <module>
dns.dnssec.validate(rrset, rrsig, {dns.name.empty: dnskey_set}, None)
File "/home/user/PycharmProjects/RPKIDNSSEC/venv/lib/python3.6/site-packages/dns/dnssec.py", line 494, in _validate
raise ValidationFailure("no RRSIGs validated")
dns.dnssec.ValidationFailure: no RRSIGs validated
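One thing stands out, offered as a guess rather than a verified fix: dns.dnssec.validate looks up the signer's DNSKEY RRset in the keys dict by owner name, and the RRSIG over the iana.org NS RRset is signed by iana.org itself, so keying the dict with dns.name.empty means no key is ever found and every RRSIG fails. A one-line sketch of the corrected call, keyed by the zone name instead:

# key the DNSKEY set by the zone that owns it (iana.org), not the empty name
dns.dnssec.validate(rrset, rrsig, {qname: dnskey_set})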
