I have the following code in a Tkinter (8.6) program. When I invoke the Run_Report function, the data is written as expected to the editor. I would, however, also like to print it to my printer. I have put in code which I think should do the job, but it returns an error:
lpr.stdin.write("Population Data"+"\n".encode())
TypeError: can only concatenate str (not "bytes") to str
The code for the functions is as follows:
def Run_Report():
    import subprocess
    lpr = subprocess.Popen("/usr/bin/lpr", stdin=subprocess.PIPE)
    sPrinciple = e_SelectInv.get()
    Sql = ("SELECT Principle as Investment, RptDate as Report_Date, printf('%,.2f',RptVal) as Report_Value, printf('%.2f',ExchRate) as Exchange_Rate, printf('%,d',RptVal*ExchRate) as Total_Value FROM MyInv WHERE Principle = ? " +
           "order by Id Asc")
    conn = sqlite3.connect(r'/home/bushbug/Databases/TrackMyInv')
    curs = conn.cursor()
    curs.execute(Sql, (sPrinciple,))
    col_names = [cn[0] for cn in curs.description]
    rows = curs.fetchall()
    x = PrettyTable(col_names)
    x.align[col_names[0]] = "l"
    x.align[col_names[1]] = "r"
    x.align[col_names[2]] = "r"
    x.align[col_names[3]] = "r"
    x.align[col_names[4]] = "r"
    x.padding_width = 1
    for row in rows:
        x.add_row(row)
    print(x)
    tabstring = x.get_string()
    output = open("export.txt", "w")
    output.write("Population Data" + "\n")
    output.write(tabstring)
    lpr.stdin.write("Population Data" + "\n".encode())
    output.close()
    conn.close()
Can someone please help me correct whatever is wrong in the above?
Thank you in anticipation.
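A minimal sketch of the likely fix (untested against the full program): a pipe opened with stdin=subprocess.PIPE expects bytes, so encode the whole string rather than concatenating a str with bytes:

# Encode the complete line (text plus newline) before writing to the pipe.
lpr.stdin.write(("Population Data" + "\n").encode())
lpr.stdin.write(tabstring.encode())
lpr.stdin.close()  # closing stdin signals end of the print job to lpr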
Below is my code for decrypting from a CSV file stored on Dropbox. I get the user to type in their ID, match it against a database containing hashed values, and then use the typed-in ID to search my stored CSV file for the matching row. I then pass all the row values to my decryption function.
Also, I'm aware my variable names and formatting are awful; I'm just using this code as a prototype right now.
My results are printed like this:
b'b\xebS\x1b\xc8v\xe2\xf8\xa2\\x84\x0e7M~\x1b'
b'\x01B#6i\x1b\xfc]\xc3\xca{\xd5{B\xbe!'
b'in*V7\xf3P\xa0\xb2\xc5\xd2\xb7\x1dz~\x95'
I store my key and IV so they are always the same, yet the decryption doesn't seem to work. My only thought is that perhaps my data is changed somehow when stored in a CSV or pandas table. Does anyone know what the issue would be, or whether the bytes can be altered when stored in or imported to a dataframe?
Or maybe I am extracting the data from my CSV to pandas incorrectly?
def login():
    import sqlite3
    import os.path

    def decoder():
        from Crypto.Cipher import AES
        import hashlib
        from secrets import token_bytes
        cursor.execute(
            '''
            Select enc_key FROM Login where ID = (?);
            ''',
            (L_ID_entry.get(), ))
        row = cursor.fetchone()
        if row is not None:
            keys = row[0]

        # design padding function for encryption
        def padded_text(data_in):
            while len(data_in) % 16 != 0:
                data_in = data_in + b"0"
            return data_in

        # calling stored key from main file and reverting back to bytes
        key_original = bytes.fromhex(keys)
        mode = AES.MODE_CBC
        # model
        cipher = AES.new(key_original, mode, IV3)
        # padding data
        p4 = padded_text(df1.tobytes())
        p5 = padded_text(df2.tobytes())
        p6 = padded_text(df3.tobytes())
        # decrypting data
        d_fname = cipher.decrypt(p4)
        d_sname = cipher.decrypt(p5)
        d_email = cipher.decrypt(p6)
        print(d_fname)
        print(d_sname)
        print(d_email)

    # connecting to db
    try:
        conn = sqlite3.connect('login_details.db')
        cursor = conn.cursor()
        print("Connected to SQLite")
    except sqlite3.Error as error:
        print("Failure, error: ", error)
    finally:
        # downloading txt from dropbox and converting to dataframe to operate on
        import New_user
        import ast
        _, res = client.files_download("/user_details/enc_logins.csv")
        with io.BytesIO(res.content) as csvfile:
            with open("enc_logins.csv", 'rb'):
                df = pd.read_csv(csvfile, names=['ID', 'Fname', 'Sname', 'Email'], encoding='unicode_escape')
                newdf = df[(df == L_ID_entry.get()).any(axis=1)]
                print(newdf)
                df1 = newdf['Fname'].to_numpy()
                df2 = newdf['Sname'].to_numpy()
                df3 = newdf['Email'].to_numpy()
                print(df1)
                print(df2)
                print(df3)
        csvfile.close()
        decoder()
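One likely culprit, for what it's worth: writing raw AES ciphertext into a CSV stores its textual repr (the b'...' strings above), and reading it back with pandas gives you strings, so .to_numpy().tobytes() no longer yields the original ciphertext. A minimal sketch of a lossless round trip, assuming PyCryptodome and pandas (the names here are illustrative, not the asker's):

from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes
import pandas as pd

key = get_random_bytes(16)
iv = get_random_bytes(16)

def padded_text(data_in):
    # same zero-padding scheme as the question
    while len(data_in) % 16 != 0:
        data_in = data_in + b"0"
    return data_in

ciphertext = AES.new(key, AES.MODE_CBC, iv).encrypt(padded_text(b"alice@example.com"))

# Store the ciphertext as hex text so the CSV round trip is lossless.
pd.DataFrame({"Email": [ciphertext.hex()]}).to_csv("enc_demo.csv", index=False)

restored = bytes.fromhex(pd.read_csv("enc_demo.csv")["Email"][0])
plain = AES.new(key, AES.MODE_CBC, iv).decrypt(restored)
print(plain)  # b'alice@example.com' plus the zero padding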
I'm trying to unload data from Snowflake to GCS; for that I'm using the Snowflake Python connector and a Python script. In the Python script below, the file name is 'LH_TBL_FIRST20200908'. If the script runs today the name stays as shown; if it runs tomorrow the file name should be 'LH_TBL_FIRST20200909', and if it runs the day after, 'LH_TBL_FIRST20200910'.
Also, please tell me if the code has any mistakes in it. The code is below:
import snowflake.connector
# Create the connection
ctx = snowflake.connector.connect(
user='*****',
password='*******',
account='********',
warehouse='*******',
database='********',
schema='********'
)
cs = ctx.cursor()
sql = "copy into #unload_gcs/LH_TBL_FIRST20200908.csv.gz
from ( select * from TEST_BASE.LH_TBL_FIRST )
file_format =
( type=csv compression='gzip'
FIELD_DELIMITER = ','
field_optionally_enclosed_by='"'
NULL_IF=()
EMPTY_FIELD_AS_NULL = FALSE
)
single = fals
e max_file_size=5300000000
header = false;"
cur.execute(sql)
cur.close()
conn.close()
You can use f-strings to fill in (part of) your filename. Python has the datetime module to handle dates and times.
from datetime import datetime
date = datetime.now().strftime('%Y%m%d')
myFileName = f'LH_TBL_FIRST{date}.csv.gz'
print(myFileName)
>>> LH_TBL_FIRST20200908.csv.gz
As for errors in your code:
you declare your cursor as ctx.cursor() but further along you use cur.execute(...) and cur.close(...); likewise you connect as ctx yet call conn.close(). These won't work. Run your code to find the errors and fix them.
Edit suggested by @Lysergic:
If your Python version is too old, you could use str.format().
myFileName = 'LH_TBL_FIRST{0}.csv.gz'.format(date)
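Putting the pieces together, a minimal sketch (reusing the asker's ctx connection object, stage, and table names):

from datetime import datetime

date = datetime.now().strftime('%Y%m%d')
sql = f"""copy into @unload_gcs/LH_TBL_FIRST{date}.csv.gz
from ( select * from TEST_BASE.LH_TBL_FIRST )
file_format = ( type=csv compression='gzip'
                FIELD_DELIMITER = ','
                field_optionally_enclosed_by='"'
                EMPTY_FIELD_AS_NULL = FALSE )
single = false
max_file_size = 5300000000
header = false;"""

cs = ctx.cursor()   # use the same name everywhere
cs.execute(sql)
cs.close()
ctx.close()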
from datetime import datetime

class FileNameWithDateTime(object):
    def __init__(self, fileNameAppender, fileExtension="txt"):
        self.fileNameAppender = fileNameAppender
        self.fileExtension = fileExtension

    def appendCurrentDateTimeInFileName(self, filePath):
        currentTime = self.fileNameAppender
        print(currentTime.strftime("%Y%m%d"))
        filePath += currentTime.strftime("%Y%m%d")
        filePath += "." + self.fileExtension
        try:
            with open(filePath, "a") as fwrite1:
                fwrite1.write(filePath)
        except OSError as oserr:
            print("Error while writing ", oserr)
I take the following approach:
from datetime import datetime

# defining what time/date related values your variable will contain
date_id = datetime.today().strftime('%Y%m%d')
Write the output file:
# creating the filename
with open(date_id + "_" + "LH_TBL.csv.gz", 'w') as gzip:
output: YYYYMMDD_filename, e.g. 20200908_LH_TBL.csv.gz
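One caveat worth noting: plain open() does not gzip anything despite the .csv.gz name; the gzip module would, as in this sketch:

import gzip
from datetime import datetime

date_id = datetime.today().strftime('%Y%m%d')
with gzip.open(date_id + "_LH_TBL.csv.gz", "wt") as gz:  # 'wt' = gzipped text
    gz.write("col1,col2\n")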
Looking for some help with a specific error when I write out from a pyodbc connection. How do I fix this error:
ODBC SQL type -360 is not yet supported. column-index=1 type=-360', 'HY106' error from PYODBC
Here is my code:
import pyodbc
import pandas as pd
import sqlparse
import textwrap

## Function created to read SQL query from a file
def create_query_string(sql_full_path):
    with open(sql_full_path, 'r') as f_in:
        lines = f_in.read()
    # remove any common leading whitespace from every line
    query_string = textwrap.dedent("""{}""".format(lines))
    ## remove comments from SQL code
    query_string = sqlparse.format(query_string, strip_comments=True)
    return query_string

query_string = create_query_string("Bad Code from R.sql")

## initializes the connection string
curs = conn.cursor()
df = pd.read_sql(query_string, conn)
df.to_csv("TestSql.csv", index=None)
We are using the following SQL code in query string:
SELECT loss_yr_qtr_cd,
CASE
WHEN loss_qtr_cd <= 2 THEN loss_yr_num
ELSE loss_yr_num + 1
END AS LOSS_YR_ENDING,
snap_yr_qtr_cd,
CASE
WHEN snap_qtr_cd <= 2 THEN snap_yr_num
ELSE snap_yr_num + 1
END AS CAL_YR_ENDING,
cur_ctstrph_loss_ind,
clm_symb_grp_cd,
adbfdb_pol_form_nm,
risk_st_nm,
wrt_co_nm,
wrt_co_part_cd,
src_of_bus_cd,
rt_zip_dlv_ofc_cd,
cur_rst_rt_terr_cd,
Sum(xtra_cntrc_py_amt) AS XTRA_CNTRC_PY_AMT
FROM (SELECT DT.loss_yr_qtr_cd,
DT.loss_qtr_cd,
DT.loss_yr_num,
SNAP.snap_yr_qtr_cd,
SNAP.snap_qtr_cd,
SNAP.snap_yr_num,
CLM.cur_ctstrph_loss_ind,
CLM.clm_symb_grp_cd,
POL_SLCT.adbfdb_pol_form_nm,
POL_SLCT.adbfdb_pol_form_cd,
CVR.bsic_cvr_ind,
POL_SLCT.priv_pass_ind,
POL_SLCT.risk_st_nm,
POL_SLCT.wrt_co_nm,
POL_SLCT.wrt_co_part_cd,
POL_SLCT.src_of_bus_cd,
TERR.rt_zip_dlv_ofc_cd,
TERR.cur_rst_rt_terr_cd,
LOSS.xtra_cntrc_py_amt
FROM ahshdm1d.vmaloss_day_dt_dim DT,
ahshdm1d.vmasnap_yr_mo_dim SNAP,
ahshdm1d.tmaaclm_dim CLM,
ahshdm1d.tmaapol_slct_dim POL_SLCT,
ahshdm1d.tmaacvr_dim CVR,
ahshdm1d.tmaart_terr_dim TERR,
ahshdm1d.tmaaloss_fct LOSS,
ahshdm1d.tmaaprod_bus_dim BUS
WHERE SNAP.snap_yr_qtr_cd BETWEEN '20083' AND '20182'
AND TRIM(POL_SLCT.adbfdb_lob_cd) = 'A'
AND CVR.bsic_cvr_ind = 'Y'
AND POL_SLCT.priv_pass_ind = 'Y'
AND POL_SLCT.adbfdb_pol_form_cd = 'V'
AND POL_SLCT.src_of_bus_cd NOT IN ( 'ITC', 'INV' )
AND LOSS.xtra_cntrc_py_amt > 0
AND LOSS.loss_day_dt_id = DT.loss_day_dt_dim_id
AND LOSS.cvr_dim_id = CVR.cvr_dim_id
AND LOSS.pol_slct_dim_id = POL_SLCT.pol_slct_dim_id
AND LOSS.rt_terr_dim_id = TERR.rt_terr_dim_id
AND LOSS.prod_bus_dim_id = BUS.prod_bus_dim_id
AND LOSS.clm_dim_id = CLM.clm_dim_id
AND LOSS.snap_yr_mo_dt_id = SNAP.snap_yr_mo_dt_id) AS TABLE1
GROUP BY loss_yr_qtr_cd,
loss_qtr_cd,
loss_yr_num,
snap_yr_qtr_cd,
snap_qtr_cd,
snap_yr_num,
cur_ctstrph_loss_ind,
clm_symb_grp_cd,
adbfdb_pol_form_nm,
risk_st_nm,
wrt_co_nm,
wrt_co_part_cd,
src_of_bus_cd,
rt_zip_dlv_ofc_cd,
cur_rst_rt_terr_cd
FOR FETCH only
Just looking for how to properly write the data out.
Thanks,
Justin
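For what it's worth, ODBC type -360 is the type code DB2 reports for DECFLOAT, which pyodbc does not map out of the box. Two hedged workarounds, sketched under that assumption: cast the offending column to a supported type inside the query, or register an output converter (the byte decoding shown is a guess; check what your driver actually returns):

# Option 1: cast the aggregate inside the SQL so the driver never sees DECFLOAT.
query_string = query_string.replace(
    "Sum(xtra_cntrc_py_amt)",
    "Sum(CAST(xtra_cntrc_py_amt AS DECIMAL(31,2)))")

# Option 2: register a converter for the raw type code with pyodbc.
conn.add_output_converter(-360, lambda raw: float(raw.decode("utf-8")))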
So the code runs until inserting a new row, at which time I get:
'attempting to insert [3-item result] into these columns [5 items]'. I have tried to discover where my code is causing a loss of results, but cannot. Any suggestions would be great.
Additional information: the feature class I am inserting into has five fields, and they are the same as the source fields. Execution reaches my length != check and my error prints. Please assist if anyone can.
# coding: utf8
import arcpy
import os, sys
from arcpy import env
arcpy.env.workspace = r"E:\Roseville\ScriptDevel.gdb"
arcpy.env.overwriteOutput = bool('TRUE')
# set as python bool, not string "TRUE"
fc_buffers = "Parcels" # my indv. parcel buffers
fc_Landuse = "Geology" # my land use layer
outputLayer = "IntersectResult" # output layer
outputFields = [f.name for f in arcpy.ListFields(outputLayer) if f.type not in ['OBJECTID', "Geometry"]] + ['SHAPE@']
landUseFields = [f.name for f in arcpy.ListFields(fc_Landuse) if f.type not in ['PTYPE']]
parcelBufferFields = [f.name for f in arcpy.ListFields(fc_buffers) if f.type not in ['APN']]
intersectionFeatureLayer = arcpy.MakeFeatureLayer_management(fc_Landuse, 'intersectionFeatureLayer').getOutput(0)
selectedBuffer = arcpy.MakeFeatureLayer_management(fc_buffers, 'selectedBuffer').getOutput(0)
def orderFields(luFields, pbFields):
    ordered = []
    for field in outputFields:
        # append the matching field
        if field in landUseFields:
            ordered.append(luFields[landUseFields.index(field)])
        if field in parcelBufferFields:
            ordered.append(pbFields[parcelBufferFields.index(field)])
    return ordered
print parcelBufferFields
with arcpy.da.SearchCursor(fc_buffers, ["OBJECTID", 'SHAPE@'] + parcelBufferFields) as sc, arcpy.da.InsertCursor(outputLayer, outputFields) as ic:
    for row in sc:
        oid = row[0]
        shape = row[1]
        print (oid)
        print "Got this far"
        selectedBuffer.setSelectionSet('NEW', [oid])
        arcpy.SelectLayerByLocation_management(intersectionFeatureLayer, "intersect", selectedBuffer)
        with arcpy.da.SearchCursor(intersectionFeatureLayer, ['SHAPE@'] + landUseFields) as intersectionCursor:
            for record in intersectionCursor:
                recordShape = record[0]
                print "list made"
                outputShape = shape.intersect(recordShape, 4)
                newRow = orderFields(row[2:], record[1:]) + [outputShape]
                if len(newRow) != len(outputFields):
                    print 'there is a problem. the number of columns in the record you are attempting to insert into', outputLayer, 'does not match the number of destination columns'
                    print '\tattempting to insert:', newRow
                    print '\tinto these columns:', outputFields
                    continue
                # insert into the outputFeatureClass
                ic.insertRow(newRow)
Your with statement where you define the cursors creates an insert cursor with 5 fields, but the row you are trying to feed it has only 3 fields. You need to make sure the row you insert has the same length as the insert cursor's field list. I suspect the problem is actually in the orderFields method, or in what you pass to it.
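A hedged sketch (reusing the question's variable names, not the asker's actual code) of one way to make the mismatch visible: build a name-to-value mapping from both source rows, then emit values strictly in outputFields order, failing loudly on any output field that has no source value:

def orderFieldsStrict(luFields, pbFields):
    # luFields should be the land-use cursor values (record[1:]) and
    # pbFields the buffer row values (row[2:]); note the question calls
    # orderFields(row[2:], record[1:]), i.e. the other way around.
    values = dict(zip(landUseFields, luFields))
    values.update(zip(parcelBufferFields, pbFields))
    ordered = []
    for field in outputFields:
        if field == 'SHAPE@':
            continue  # the geometry is appended by the caller
        if field not in values:
            raise KeyError('no source value for output field %r' % field)
        ordered.append(values[field])
    return ordered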
I have this function that is supposed to receive JSON and store the values in an RDS MySQL db.
def saveMetric(metrics):
    cnx = RDS_Connect()
    cursor = cnx.cursor()
    jsonMetrics = json.loads(metrics)
    #print type(jsonMetrics['Metrics'])

    # Every 2000 registries, the script will start overriding values
    persistance = 2000

    save_metrics_query = (
        "REPLACE INTO metrics "
        "SET metric_seq = (SELECT COALESCE(MAX(row_id), 0) %% %(persistance)d + 1 FROM metrics AS m), "
        "instance_id = \'%(instance_id)s\', "
        "service = \'%(service)s\' , "
        "metric_name = \'%(metric_name)s\', "
        "metric_value = %(metric_value)f"
    )
    for metric in jsonMetrics['Metrics']:
        formatData = {}
        formatData['persistance'] = persistance
        formatData['instance_id'] = arguments.dimensionValue
        formatData['service'] = jsonMetrics['Service']
        formatData['metric_name'] = metric
        formatData['metric_value'] = jsonMetrics['Metrics'][metric]
        print save_metrics_query % formatData
        try:
            cursor.execute(save_metrics_query, formatData, multi=True)
            logger('info', 'Metrics were saved successfully!')
            cnx.commit()
        except mysql.connector.Error as err:
            logger('error', "Something went wrong: %s" % err)
    cursor.close()
    cnx.close()
RDS_Connect() was already tested and it works just fine. The problem is that after running the function, the data is not saved to the DB. I think there is a problem with the commit, but I don't see any errors or warnings. If I run the query manually, the data gets stored.
Here is the query that runs after parsing the json:
REPLACE INTO metrics SET metric_seq = (SELECT COALESCE(MAX(row_id), 0) % 2000 + 1 FROM metrics AS m), instance_id = 'i-03932937bd67622c4', service = 'AWS/EC2' , metric_name = 'CPUUtilization', metric_value = 0.670000
If it helps, this is the json that the function receives:
{
    "Metrics": {
        "CPUUtilization": 1.33,
        "NetworkIn": 46428.0,
        "NetworkOut": 38772.0
    },
    "Id": "i-03932937bd67622c4",
    "Service": "AWS/EC2"
}
I'd appreciate some help.
Regards!
UPDATE:
I found that the problem was related to the formatting codes in the query template.
I rewrote the function like this:
def saveMetric(metrics):
    cnx = RDS_Connect()
    jsonMetrics = json.loads(metrics)
    print json.dumps(jsonMetrics, indent=4)

    persistance = 2000

    row_id_query_template = "SELECT COALESCE(MAX(row_id), 0) % {} + 1 FROM metrics AS m"
    row_id_query = row_id_query_template.format(persistance)

    save_metrics_query = (
        "REPLACE INTO metrics "
        "SET metric_seq = (" + row_id_query + "),"
        "instance_id = %(instance_id)s,"
        "service = %(service)s,"
        "metric_name = %(metric_name)s,"
        "metric_value = %(metric_value)s"
    )
    for metric in jsonMetrics['Metrics']:
        formatData = {}
        formatData['instance_id'] = arguments.dimensionValue
        formatData['service'] = jsonMetrics['Service']
        formatData['metric_name'] = metric
        formatData['metric_value'] = jsonMetrics['Metrics'][metric]
        if arguments.verbose == True:
            print "Data: ", formatData
            print "Query Template: ", save_metrics_query.format(**formatData)
        try:
            cursor = cnx.cursor()
            cursor.execute(save_metrics_query, formatData)
            logger('info', 'Metrics were saved successfully!')
            cnx.commit()
            cursor.close()
        except mysql.connector.Error as err:
            logger('error', "Something went wrong: %s" % err)
    cnx.close()
As you can see, I now format the SELECT outside the template. I believe the whole problem was due to this line:
"metric_value = %(metric_value)f"
I changed to:
"metric_value = %(metric_value)s"
and now it works. I think the formatting code was wrong, producing a syntax error (though I don't know why the exception was never thrown).
Thanks to everyone who took time to help me!
I haven't actually used MySQL, but the docs seem to indicate that calling cursor.execute with multi=True just returns an iterator. If that is true, it wouldn't actually insert anything: you'd need to call next() on the iterator (or just iterate over it) to actually execute each statement.
It also goes on to advise against using parameters with multi=True:
If multi is set to True, execute() is able to execute multiple statements specified in the operation string. It returns an iterator that enables processing the result of each statement. However, using parameters does not work well in this case, and it is usually a good idea to execute each statement on its own.
tl;dr: remove that parameter, as the default is False.
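If multi=True were kept, the statements would only run as the iterator is consumed; a sketch of what that would look like, based on the connector docs and using the names from the question:

# Each iteration executes one of the statements in the operation string.
for result in cursor.execute(save_metrics_query, formatData, multi=True):
    pass
cnx.commit()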
This was the solution. I changed:
"metric_value = %(metric_value)f"
To:
"metric_value = %(metric_value)s"
While troubleshooting I found a syntax error in the SQL; somehow the exception never surfaced.
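For context (an assumption based on how MySQL Connector/Python substitutes parameters, not something stated in the thread): the connector only recognizes %s / %(name)s placeholders and escapes each value to a quoted string before substituting, so a type-specific code like %(metric_value)f tries to %-format a quoted string as a float and fails. With multi=True the statement generator is lazy, which would also explain why no exception surfaced until the parameter style was fixed. A minimal sketch of the working pattern:

# Always use %s-style placeholders with MySQL Connector/Python,
# even for numeric values; the connector handles the conversion.
cursor.execute(
    "REPLACE INTO metrics SET metric_name = %(metric_name)s, "
    "metric_value = %(metric_value)s",
    {"metric_name": "CPUUtilization", "metric_value": 0.67},
)
cnx.commit()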