I'm connecting to an InfluxDB database using Python. Using the built-in dataframe tools I can access the data and do everything I'd like, except that I can't access the timestamp values. For example:
import sys
from influxdb import DataFrameClient

reload(sys)
sys.setdefaultencoding('utf-8')

user = 'reader'
password = 'oddstringtoconfusebadguys'
dbname = 'autoweights'
host = '55.777.244.112'
protocol = 'line'
port = 8086

client = DataFrameClient(host, port=port, username=user, password=password,
                         database=dbname, verify_ssl=False, ssl=True)

results = client.query("select * from measurementname")
df = results['measurementname']
for index, row in df.iterrows():
    print row
The results look like this:
host     C4:27:EB:D7:D9:70
value                  327
Name: 2017-11-14 22:11:23.534395882+00:00, dtype: object
I can easily access row['host'] and row['value']. The timestamp is obviously important, but try as I might I can't find an approach to get at it.
You can get the timestamp using the index, not the row parameter:
for index, row in df.iterrows():
    print(index)
    print(row)
You can also use the Pinform library, an ORM for InfluxDB, to easily get the timestamp, fields and tags.
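To make the index approach concrete, here is a minimal, self-contained sketch using a made-up DataFrame shaped like a DataFrameClient result (the host and value below are invented), showing both reading the timestamp from the index and promoting it to an ordinary column:

```python
import pandas as pd

# Minimal sketch: a DataFrame shaped like a DataFrameClient result,
# with the timestamps held in a DatetimeIndex rather than in a column.
df = pd.DataFrame(
    {"host": ["C4:27:EB:D7:D9:70"], "value": [327]},
    index=pd.to_datetime(["2017-11-14 22:11:23.534395882+00:00"]),
)

for index, row in df.iterrows():
    print(index, row["host"], row["value"])  # index carries the timestamp

# Or promote the index to an ordinary column:
df2 = df.reset_index().rename(columns={"index": "time"})
print(df2["time"].iloc[0])
```

After reset_index(), the timestamp is a normal column and can be accessed like row['host'] and row['value'].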
My goal is to display SQL query results in the Python console using the win32com module. I am able to use a COM object to access and successfully display the fields of a SQL query, but when I try to display rows I run into exceptions.
Background
I have used pyodbc, which works great, but there is a limitation based on which SQL providers are installed and whether TLS 1.2 is enforced. The software is sometimes installed against an external SQL server, so there is not always a provider on the software's server that can establish a connection. This is why I am now using the kernel of the software via COM objects to access the DB, as this circumvents the pitfalls of needing the provider installed, or the latest Windows update to allow TLS 1.2 connections, etc.
Win32com Code that works for me using Fields
import win32com.client

"""Connection"""
objprox = win32com.client.Dispatch("****.DbObjectProxy")  # Blanked out for security of software

"""Set Query"""
sql1 = "select * from ServiceConsumer_t"
dbq1 = objprox.DoDatabaseQuery(sql1)
dbq1.Open(sql1)

"""Specify & Print result"""
while not dbq1.EOF:
    col1 = dbq1.Fields("ID").Value
    dbq1.MoveNext()
    print(col1)

"""Close Session"""
dbq1.Close()
Output of the above is:
{9CAFD41E-D322-4234-BF80-CF6E11A724A0}
{CE4AAE72-0889-41E8-BDB2-ED96696DDB91}
{DC18008F-2C84-4EB4-BCCB-D94FF96E0564}
{1AAB143C-8393-4C1E-BE94-7AB44788D4E4}
This is correct, as I am specifying the ID column to output and using MoveNext() to iterate. It shows I am close to my goal; however, the code below for displaying the rows never works, and I am now lost as to why.
Win32com Code that does not work for me to display rows:
import win32com.client

"""Connection"""
objprox = win32com.client.Dispatch("*****.DbObjectProxy")  # Blanked out for security of software

"""Set Query"""
sql2 = "select * from ServiceConsumer_t"
dbq2 = objprox.DoDatabaseQuery(sql2)
dbq2.Open(sql2)

"""Specify & Print result"""
while not dbq2.EOF:
    dbq2.MoveFirst()
    res = dbq2.GetRows()
    dbq2.MoveNext()
    print(res)

"""Close Session"""
dbq2.Close()
From this, I simply get an exception that the object has no attribute GetRows. Looking online, there is very little about this. Can you suggest why the code does not work for displaying all row results? Ideally, I would like the column names displayed too.
Assuming your COM object aligns with the ADODB object model, GetRows does the following:
Retrieves multiple records of a Recordset object into an array.
In Python, this array or multidimensional object translates to a nested tuple without metadata such as column names:
rst.MoveFirst()
res = rst.GetRows()  # TUPLE OF TUPLES

# LOOP THROUGH ROWS
for i in range(len(res)):
    # LOOP THROUGH COLUMNS
    for j in range(len(res[i])):
        print(res[i][j])
ADODB example:
import win32com.client

# OPEN SQL SERVER DATABASE CONNECTION
conn = win32com.client.gencache.EnsureDispatch("ADODB.Connection")
conn.Open(
    "Driver={ODBC Driver 17 for SQL Server};"
    "Server=myServer;Database=myDatabase;"
    "Trusted_Connection=Yes;"
)

# OPEN RECORDSET
rst = win32com.client.gencache.EnsureDispatch("ADODB.Recordset")
rst.Open("SELECT * FROM myTable", conn)

rst.MoveFirst()
res = rst.GetRows()

# LOOP THROUGH ROWS
for i in range(len(res)):
    # LOOP THROUGH COLUMNS
    for j in range(len(res[i])):
        print(res[i][j])

# CLOSE AND RELEASE OBJECTS
rst.Close(); conn.Close()
rst = None; conn = None
del rst; del conn
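Since GetRows returns only values (and, per the ADODB documentation, column-major: one inner tuple per field), column names have to be gathered separately, e.g. from the Recordset's Fields collection, and the result transposed to get row-wise records. A stdlib-only sketch with made-up data standing in for a live Recordset:

```python
# Hypothetical stand-in for a GetRows result: ADODB returns it column-major,
# i.e. one inner tuple per field, each holding that field's value per record.
res = (
    ("id-1", "id-2"),        # ID column
    ("Alice", "Bob"),        # Name column
)
columns = ("ID", "Name")     # in real code, read from the Recordset's Fields

# Transpose to row-major and pair each value with its column name.
for row in zip(*res):
    print(dict(zip(columns, row)))
```

zip(*res) flips the column-major tuples into one tuple per record, which is usually what you want when printing rows with their headers.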
The simple answer to this is that I added each column manually to create the full row output, like below:
col1 = dbq1.Fields("ID").Value
col2 = dbq1.Fields("Name").Value
col3 = dbq1.Fields("Login").Value
col4 = dbq1.Fields("Email").Value
print(str(col1) + " " + str(col2) + " " + str(col3) + " " + str(col4))
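Chained `+` concatenation is easy to mistype; `str.join` handles the conversion and separators in one step. A small stdlib sketch with made-up values standing in for the Fields lookups:

```python
# Made-up values standing in for the Fields("...").Value lookups.
col1, col2, col3, col4 = "id-42", "Alice", "alice", "a@example.com"

# join() converts and separates in one step, with no '+' chains to mistype.
line = " ".join(str(c) for c in (col1, col2, col3, col4))
print(line)
```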
I am using Python to get a list of databases that are more than 30 days old. So far I have been able to get the list of the databases from here. This is my code:
import pyorient

def list_orient_databases(name):
    # Use a breakpoint in the code line below to debug your script.
    print(f'{name}')
    client = pyorient.OrientDB("10.121.3.55", 2525)
    session_id = client.connect("admin", "admin")
    db_names = client.db_list().__getattr__('databases')
    db_count = 0
    for db_name in db_names:
        print(db_name)
How can I adjust the code to get the list of databases 30 days old or more? Thanks for the help.
If you are able to somehow pull a create-date value for each DB, you could use something like this with a timedelta object:
from datetime import datetime, timedelta

d = datetime.today() - timedelta(days=30)

# 'X' would be whatever the database create-date parameter is named
for db_name in db_names:
    if db_name['X'] <= d:
        print(db_name)
You'll have to adjust this as needed, but it gives you the general idea.
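As a self-contained illustration of the cutoff comparison, here is a runnable sketch with made-up records; 'created' is a hypothetical creation-date attribute standing in for whatever pyorient actually exposes:

```python
from datetime import datetime, timedelta

# Made-up records; 'created' is a hypothetical creation-date attribute.
databases = [
    {"name": "old_db", "created": datetime.today() - timedelta(days=90)},
    {"name": "new_db", "created": datetime.today() - timedelta(days=5)},
]

cutoff = datetime.today() - timedelta(days=30)
old = [db["name"] for db in databases if db["created"] <= cutoff]
print(old)  # only databases created 30+ days ago
```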
I'm trying to connect MySQL with Python in order to automate some reports. For now, I'm just testing the connection. It seems to be working, but here comes the problem: the output from my Python code is different from the one I get in MySQL.
Here I attach the query used and the output that I can find in MySQL:
The testing query for the Python connection:
SELECT accountID
FROM Account
WHERE accountID in ('340','339','343');
The output from MySQL (using DBeaver). For this test, the chosen column contains integers:
accountID
1 339
2 340
3 343
Here I attach the actual output from my Python code:
today:
20200811
Will return true if the connection works:
True
Empty DataFrame
Columns: [accountID]
Index: []
In order to help you understand the problem, please find my Python code below:
import pandas as pd
import json
import pymysql
import paramiko
from datetime import date, time
tiempo_inicial = time()
today = date.today()
today= today.strftime("%Y%m%d")
print('today:')
print(today)
#from paramiko import SSHClient
from sshtunnel import SSHTunnelForwarder
**(part that contains all the connection information, due to data protection this part can't be shared)**
print('will return true if connection works:')
print(conn.open)
query = '''SELECT accountId
FROM Account
WHERE accountID in ('340','339','343');'''
data = pd.read_sql_query(query, conn)
print(data)
conn.close()
From my point of view, this output doesn't make sense, as the connection is working and the query was tested previously in MySQL with a positive result. I tried with other columns that contain names or dates and the result doesn't change.
Any idea why I'm getting this "Empty DataFrame" output?
Thanks
I have a dictionary with 3 keys which correspond to field names in a SQL Server table. The values of these keys come from an Excel file, and I store this dictionary in a dataframe which I now need to insert into a SQL table. This can all be seen in the code below:
import pandas as pd
import pymssql
df=[]
fp = "file path"
data = pd.read_excel(fp,sheetname ="CRM View" )
row_date = data.loc[3, ]
row_sita = "ABZPD"
row_event = data.iloc[12, :]
df = pd.DataFrame({'date': row_date,
'sita': row_sita,
'event': row_event
}, index=None)
df = df[4:]
df = df.fillna("")
print(df)
My question is: how do I insert this dataframe into a SQL table now?
Also, as a side note, this code is part of a loop which needs to go through several Excel files one by one, insert the data into the dictionary, then into SQL, then delete the data in the dictionary and start again with the next Excel file.
You could try something like this:
import MySQLdb

# connect (host, user, password, database)
conn = MySQLdb.connect("127.0.0.1", "username", "password", "database")
x = conn.cursor()

# write (parameterized to avoid quoting mistakes and SQL injection)
x.execute("INSERT INTO table_name (row_date, sita, event) VALUES (%s, %s, %s)",
          (row_date, sita, event))

# commit and close
conn.commit()
conn.close()
You might have to change it a little based on your SQL restrictions, but it should give you a good start anyway.
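The same parameterized-insert pattern can be tried end-to-end with the stdlib sqlite3 module (MySQLdb works the same way but uses %s placeholders instead of ?; the table and values below are made up):

```python
import sqlite3

# sqlite3 stand-in to demonstrate the parameterized-insert pattern.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE flights (row_date TEXT, sita TEXT, event TEXT)")

# The driver fills the placeholders safely; no string formatting needed.
cur.execute("INSERT INTO flights (row_date, sita, event) VALUES (?, ?, ?)",
            ("20200811", "ABZPD", "arrival"))
conn.commit()

rows = cur.execute("SELECT * FROM flights").fetchall()
print(rows)
```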
For the pandas dataframe, you can use the built-in method to_sql to store it in the DB. Here is one way to use it:
import urllib.parse
import sqlalchemy as sa

params = urllib.parse.quote_plus(
    "DRIVER={};SERVER={};DATABASE={};Trusted_Connection=True;".format(
        "{SQL Server}", "<db_server_url>", "<db_name>"))
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
engine = sa.create_engine(conn_str)
df.to_sql("<table_name>", engine, schema="<schema_name>", if_exists="append", index=False)
For this method you will need to install the sqlalchemy package:
pip install sqlalchemy
You will also need to setup the MSSql DSN on the machine.
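Before wiring up the MSSQL DSN, the same to_sql call can be exercised against an in-memory SQLite engine (table and data below are made up; schema= is omitted because SQLite has no schemas):

```python
import pandas as pd
import sqlalchemy as sa

# In-memory SQLite stand-in for the MSSQL engine; table and data are made up.
engine = sa.create_engine("sqlite://")
df = pd.DataFrame({"date": ["20200811"], "sita": ["ABZPD"], "event": ["arrival"]})

df.to_sql("events", engine, if_exists="append", index=False)
print(pd.read_sql("SELECT * FROM events", engine))
```

Once this round-trips, only the connection string needs to change for the real database.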
I am using cx_Oracle to fetch some data stored in Arabic characters from an Oracle database. Below is how I try to connect to the database. When I try to print the results, especially the columns stored in Arabic, I get something like "?????", which suggests the data was not decoded properly.
I tried printing a random Arabic string in Python and it came out fine, which indicates the problem is in the manner in which I am pulling data from the database.
def getWells(conn):
    cursor = conn.cursor()
    wells = []
    cursor.execute(sql)
    clmns = len(cursor.description)
    for row in cursor.fetchall():
        print row
        well = {}
        for i in range(0, clmns):
            if type(row[i]) is not datetime.datetime:
                well[cursor.description[i][0]] = row[i]
            else:
                well[cursor.description[i][0]] = row[i].isoformat()
        wells.append(well)
    cursor.close()
    connection.close()
    return wells

connection = cx_Oracle.connect(username, password, instanceName)
wells = getWells(connection)
In order to force a reset of the default encoding from the environment, you can call the setdefaultencoding method in the sys module.
As this is not recommended, it is not visible by default and a reload is required.
It is recommended that you attempt to fix the encoding set in the shell for the user on the host system rather than modifying in a script.
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
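A cx_Oracle-specific alternative worth trying is to make the Oracle client itself decode as UTF-8, either via the NLS_LANG environment variable (set before connecting) or via the encoding arguments that cx_Oracle's connect() accepts. A minimal sketch; the connect call is commented out because it needs a live database:

```python
import os

# Ask the Oracle client libraries for UTF-8 before any connection is made;
# without this (or an explicit encoding on connect), non-Latin text such as
# Arabic can come back replaced with '?'.
os.environ["NLS_LANG"] = ".AL32UTF8"

# Illustrative only -- requires a reachable database:
# import cx_Oracle
# connection = cx_Oracle.connect(username, password, instanceName,
#                                encoding="UTF-8", nencoding="UTF-8")
```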