I have a problem with the following code:
import pandas as pd
import tensorflow
from matplotlib import pyplot
from sklearn.preprocessing import MinMaxScaler
from keras.models import model_from_json
import pymssql

# load json and create model
json_file = open('model_Messe_Dense.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
loaded_model.load_weights("model_Messe_Dense.h5")

# Import values
import pickle
y_scaler = pickle.load(open("y_scaler.p", "rb"))
x_scaler = pickle.load(open("x_scaler.p", "rb"))

# Connecting to server and update values
while True:
    try:
        conn = pymssql.connect(
            server='SHS_Messe',
            user='sa',
            password='sa',
            database='ChillWARE_Transfer'
        )
        stmt = """SELECT screw_speed,
                         ID,
                         Cylinder_Temperatur_Zone_1,
                         Cylinder_Temperatur_Zone_2,
                         Cylinder_Temperatur_Zone_3,
                         Cylinder_Temperatur_Zone_4,
                         Cylinder_Temperatur_Zone_5,
                         Cylinder_Temperatur_Zone_6,
                         mass_pressure,
                         Update_Done
                  FROM to_ChillWARE WHERE ID = (SELECT MAX(ID) FROM to_ChillWARE)"""
        # Execute query here
        df = pd.read_sql(stmt, conn)
    except pymssql.Error as e:
        print(e)
        break

    feature_col_names = ['screw_speed', 'Cylinder_Temperatur_Zone_1', 'Cylinder_Temperatur_Zone_2',
                         'Cylinder_Temperatur_Zone_3', 'Cylinder_Temperatur_Zone_4',
                         'Cylinder_Temperatur_Zone_5', 'Cylinder_Temperatur_Zone_6']
    predicted_class_names = ['mass_pressure']
    Update = ['Update_Done']

    x = df[feature_col_names].values
    Update = df[Update].values

    x_scaled = x_scaler.transform(x)
    x_test = x_scaled

    predicted = loaded_model.predict(x_test)
    predicted = y_scaler.inverse_transform(predicted)
    predicted = predicted.reshape(-1)
    predicted = predicted * 51

    value = str(predicted)
    value = value.replace('[', '')
    value = value.replace(']', '')

    Update = str(Update)
    Update = Update.replace('[', '')
    Update = Update.replace(']', '')

    if Update == "False":
        cursor = conn.cursor()
        query = "UPDATE to_ChillWARE SET [mass_pressure] ="
        query = query + value + ",[Update_Done] = 1"
        query = query + " where ID = (SELECT MAX(ID) FROM to_ChillWARE)"
        cursor.execute(query)
        conn.commit()
I want to check whether I have a connection to an MSSQL server and, if Update == False, update the values.
On my PC everything works just fine. I executed the code via Python and as an exe (PyInstaller). But when I transfer this to another PC, I get the error:
Traceback (most recent call last):
File "Test.py", line 29, in <module>
File "src\pymssql.pyx", line 636, in pymssql.connect
File "src\_mssql.pyx", line 1957, in _mssql.connect
File "src\_mssql.pyx", line 675, in _mssql.MSSQLConnection.__init__
ValueError: list.remove(x): x not in list
I think there is a problem with the pymssql function.
I found the same error here, but I don't understand the solution:
https://github.com/sqlmapproject/sqlmap/issues/3035
If anyone could help that would be amazing.
Thanks everybody
According to the comment in the link you provided, it looks like a connection error.
Have you checked that, from the machine where you are trying to run the code, you have access to the DB server with the server name and credentials provided?
Edit with solution from comments below:
You can reuse the connection by defining conn = pymssql.connect(...) outside the while loop and always using that variable, so you are not creating a new connection on each iteration.
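A minimal sketch of that change, assuming the same connection parameters and query as in the question (the SELECT is abbreviated here):

# Sketch only: create the connection once, before the loop, and reuse it.
conn = pymssql.connect(
    server='SHS_Messe',
    user='sa',
    password='sa',
    database='ChillWARE_Transfer'
)

while True:
    try:
        df = pd.read_sql(
            "SELECT * FROM to_ChillWARE WHERE ID = (SELECT MAX(ID) FROM to_ChillWARE)",
            conn
        )
    except pymssql.Error as e:
        print(e)
        break
    # ... prediction and UPDATE logic unchanged ...

conn.close()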
I am running all the SQL scripts under the scripts path in a for loop and copying the data into the #priya_stage area in Snowflake, and then, using the GET command, I am unloading data from the stage area to my Unix path in CSV format. But I am getting an error.
Note: this same code works on my Mac but not on the Unix server.
import logging
import os
import snowflake.connector
from snowflake.connector import DictCursor as dict
from os import walk

try:
    conn = snowflake.connector.connect(
        account = 'xxx' ,
        user = 'xxx' ,
        password = 'xxx' ,
        database = 'xxx' ,
        schema = 'xxx' ,
        warehouse = 'xxx' ,
        role = 'xxx' ,
    )
    conn.cursor().execute('USE WAREHOUSE xxx')
    conn.cursor().execute('USE DATABASE xxx')
    conn.cursor().execute('USE SCHEMA xxx')

    take = []
    scripts = '/xxx/apps/xxx/xxx/scripts/snow/scripts/'
    os.chdir('/xxx/apps/xxx/xxx/scripts/snow/scripts/')

    for root, dirs, files in walk(scripts):
        for file in files:
            inbound = file[0:-4]
            sql = open(file, 'r').read()
            # file_number = 0
            # file_number += 1
            file_prefix = 'bridg_' + inbound
            file_name = file_prefix

            result_query = conn.cursor(dict).execute(sql)
            query_id = result_query.sfqid

            sql_copy_into = f'''
            copy into #priya_stage/{file_name}
            from (SELECT * FROM TABLE(RESULT_SCAN('{query_id}')))
            DETAILED_OUTPUT = TRUE
            HEADER = TRUE
            SINGLE = FALSE
            OVERWRITE = TRUE
            max_file_size=4900000000'''
            rs_copy_into = conn.cursor(dict).execute(sql_copy_into)

            for row_copy in rs_copy_into:
                file_name_in_stage = row_copy["FILE_NAME"]
                sql_get_to_local = f"""
                GET #priya_stage/{file_name_in_stage} file:///xxx/apps/xxx/xxx/inbound/zip_files/{inbound}/"""
                rs_get_to_local = conn.cursor(dict).execute(sql_get_to_local)

except snowflake.connector.errors.ProgrammingError as e:
    print('Error {0} ({1}): {2} ({3})'.format(e.errno, e.sqlstate, e.msg, e.sfqid))
finally:
    conn.cursor().close()
    conn.close()
Error
Traceback (most recent call last):
  File "Generic_local.py", line 52, in <module>
    rs_get_to_local = conn.cursor(dict).execute(sql_get_to_local)
  File "/usr/local/lib64/python3.6/site-packages/snowflake/connector/cursor.py", line 746, in execute
    sf_file_transfer_agent.execute()
  File "/usr/local/lib64/python3.6/site-packages/snowflake/connector/file_transfer_agent.py", line 379, in execute
    self._transfer_accelerate_config()
  File "/usr/local/lib64/python3.6/site-packages/snowflake/connector/file_transfer_agent.py", line 671, in _transfer_accelerate_config
    self._use_accelerate_endpoint = client.transfer_accelerate_config()
  File "/usr/local/lib64/python3.6/site-packages/snowflake/connector/s3_storage_client.py", line 572, in transfer_accelerate_config
    url=url, verb="GET", retry_id=retry_id, query_parts=dict(query_parts)
  File "/usr/local/lib64/python3.6/site-packages/snowflake/connector/s3_storage_client.py", line 353, in _send_request_with_authentication_and_retry
    verb, generate_authenticated_url_and_args_v4, retry_id
  File "/usr/local/lib64/python3.6/site-packages/snowflake/connector/storage_client.py", line 313, in _send_request_with_retry
    f"{verb} with url {url} failed for exceeding maximum retries."
snowflake.connector.errors.RequestExceedMaxRetryError: GET with url b'https://xxx-xxxxx-xxx-x-customer-stage.xx.amazonaws.com/https://xxx-xxxxx-xxx-x-customer-stage.xx.amazonaws.com/?accelerate' failed for exceeding maximum retries.
This link redirects me to an error message:
https://xxx-xxxxx-xxx-x-customer-stage.xx.amazonaws.com/https://xxx-xxxxx-xxx-x-customer-stage.xx.amazonaws.com/?accelerate
Access Denied error:
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
<RequestId>1X1Z8G0BTX8BAHXK</RequestId>
<HostId>QqdCqaSK7ogAEq3sNWaQVZVXUGaqZnPv78FiflvVzkF6nSYXTSKu3iSiYlUOU0ka+0IMzErwGC4=</HostId>
</Error>
I'm working on automating some query extraction using Python and pyodbc, then converting to Parquet format and sending it to AWS S3.
My script solution has been working fine so far, but I have run into a problem. I have a schema, let us call it SCHEMA_A, and inside it several tables, TABLE_1, TABLE_2 .... TABLE_N.
All those tables inside that schema are accessible by using the same credentials.
So I'm using a script like this one to automate the task.
import sys
import time

import pandas as pd
import pyodbc


def get_stream(cursor, batch_size=100000):
    while True:
        row = cursor.fetchmany(batch_size)
        if row is None or not row:
            break
        yield row


cnxn = pyodbc.connect(driver='pyodbc driver here',
                      host='host name',
                      database='schema name',
                      user='user name',
                      password='password')
print('Connection established ...')

cursor = cnxn.cursor()
print('Initializing cursor ...')

if len(sys.argv) > 1:
    table_name = sys.argv[1]
    cursor.execute('SELECT * FROM {}'.format(table_name))
else:
    exit()

print('Query fetched ...')

row_batch = get_stream(cursor)
print('Getting iterator ...')

cols = cursor.description
cols = [col[0] for col in cols]

print('Initializing batch data frame ...')
df = pd.DataFrame(columns=cols)

start_time = time.time()
for rows in row_batch:
    tmp = pd.DataFrame.from_records(rows, columns=cols)
    df = df.append(tmp, ignore_index=True)
    tmp = None
    print("--- Batch inserted in %s seconds ---" % (time.time() - start_time))
    start_time = time.time()
I run code similar to that inside Airflow tasks, and it works just fine for all other tables. But then I have two tables, let's call them TABLE_I and TABLE_II, that yield the following error when I execute cursor.fetchmany(batch_size):
ERROR - ('ODBC SQL type -151 is not yet supported. column-index=16 type=-151', 'HY106')
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1112, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/ubuntu/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1285, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/ubuntu/.local/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1310, in _execute_task
result = task_copy.execute(context=context)
File "/home/ubuntu/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 117, in execute
return_value = self.execute_callable()
File "/home/ubuntu/.local/lib/python3.8/site-packages/airflow/operators/python.py", line 128, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/home/ubuntu/prea-ninja-airflow/jobs/plugins/extract/fetch.py", line 58, in fetch_data
for rows in row_batch:
File "/home/ubuntu/prea-ninja-airflow/jobs/plugins/extract/fetch.py", line 27, in stream
row = cursor.fetchmany(batch_size)
Inspecting those tables with SQLElectron and querying the first few lines, I realized that both TABLE_I and TABLE_II have a column called 'Geolocalizacao'. When I use SQL Server syntax to find the DATA_TYPE of that column with:
SELECT DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE
TABLE_NAME = 'TABLE_I' AND
COLUMN_NAME = 'Geolocalizacao';
It yields:
DATA_TYPE
geography
Searching here on Stack Overflow I found this solution: python pyodbc SQL Server Native Client 11.0 cannot return geometry column
According to the user's description, it seems to work fine by adding:
def unpack_geometry(raw_bytes):
    # adapted from SSCLRT information at
    # https://learn.microsoft.com/en-us/openspecs/sql_server_protocols/ms-ssclrt/dc988cb6-4812-4ec6-91cd-cce329f6ecda
    tup = struct.unpack('<i2b3d', raw_bytes)
    # tup contains: (unknown, Version, Serialization_Properties, X, Y, SRID)
    return tup[3], tup[4], tup[5]
and then:
cnxn.add_output_converter(-151, unpack_geometry)
after creating the connection. But it's not working for the GEOGRAPHY data type; when I use this code (adding import struct to the Python script), it gives me the following error:
Traceback (most recent call last):
File "benchmark.py", line 79, in <module>
for rows in row_batch:
File "benchmark.py", line 39, in get_stream
row = cursor.fetchmany(batch_size)
File "benchmark.py", line 47, in unpack_geometry
tup = struct.unpack('<i2b3d', raw_bytes)
struct.error: unpack requires a buffer of 30 bytes
An example of the values this column holds follows the given template:
{"srid":4326,"version":1,"points":[{}],"figures":[{"attribute":1,"pointOffset":0}],"shapes":[{"parentOffset":-1,"figureOffset":0,"type":1}],"segments":[]}
I honestly don't know how to adapt the code for this given structure. Can someone help me? It's been working fine for all other tables, but these two tables with this column are giving me a lot of headache.
Hi, this is what I have done:
from binascii import hexlify

def _handle_geometry(geometry_value):
    return f"0x{hexlify(geometry_value).decode().upper()}"
and then on connection:
cnxn.add_output_converter(-151, _handle_geometry)
This will return the value the same way SSMS does.
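A minimal usage sketch, assuming a pyodbc connection like the one in the question; the driver string is a placeholder, and the table/column names are taken from the question:

import pyodbc
from binascii import hexlify

def _handle_geometry(geometry_value):
    # Render the raw geography/geometry bytes as a hex literal, the way SSMS displays them.
    return f"0x{hexlify(geometry_value).decode().upper()}"

# Placeholder connection string; substitute your own driver, server, and credentials.
cnxn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};SERVER=host;DATABASE=db;UID=user;PWD=password')

# -151 is the ODBC type code reported in the error for geography/geometry columns.
cnxn.add_output_converter(-151, _handle_geometry)

cursor = cnxn.cursor()
cursor.execute("SELECT TOP 10 Geolocalizacao FROM TABLE_I")
for row in cursor.fetchall():
    print(row[0])  # e.g. 0xE6100000010C...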
Below is my Python code. I read the keys from a CSV file and delete them in the database. It runs fine for a while and then throws this timeout error. I don't see any GC issue, and the health of the node is fine.
Traceback (most recent call last):
File "/Users/XXX/Downloads/XXX/XXX", line 65, in <module>
parse_file(datafile)
File "/Users/XXX/Downloads/XXX/XXX", line 49, in parse_file
session = cluster.connect('XXX')
File "cassandra/cluster.py", line 1193, in cassandra.cluster.Cluster.connect (cassandra/cluster.c:17796)
File "cassandra/cluster.py", line 1240, in cassandra.cluster.Cluster._new_session (cassandra/cluster.c:18952)
File "cassandra/cluster.py", line 1980, in cassandra.cluster.Session.__init__ (cassandra/cluster.c:35191)
cassandra.cluster.NoHostAvailable: ("Unable to connect to any servers using keyspace 'qualys_ioc'", ['127.0.0.1'])
Python Code:
import argparse
import sys
import itertools
import codecs
import uuid
import os
import subprocess

try:
    import cassandra
    import cassandra.concurrent
except ImportError:
    sys.exit('Python Cassandra driver not installed. You might try \"pip install cassandra-driver\".')

from cassandra.cluster import Cluster, ResultSet, Session
from cassandra.policies import DCAwareRoundRobinPolicy
from cassandra.auth import PlainTextAuthProvider
from cassandra.cluster import ConsistencyLevel
from cassandra import ReadTimeout

datafile = "/Users/XXX/adf.csv"
if os.path.exists(datafile):
    os.remove(datafile)


def dumptableascsv():
    os.system(
        "sh /Users/XXX/Documents/dse-5.0.14/bin/cqlsh 127.0.0.1 9042 -u cassandra -p cassandra -e \" COPY XXX.agent_delta_fragment(agent_id,delta_id ,last_fragment_id ,processed) TO \'/Users/XXX/adf.csv\' WITH HEADER = true;\"\n"
        " ")
    #print datafile


def parse_file(datafile):
    global fields
    data = []
    with open(datafile, "rb") as f:
        header = f.readline().split(",")
        # Loop through remaining lines in file object f
        for line in f:
            fields = line.split(",")  # Split line into list
            #print fields[3]
            if fields[3]:
                print "connect"
                print fields[0], fields[1], fields[2], fields[3]
                auth_provider = PlainTextAuthProvider(username='cassandra', password='cassandra')
                cluster = Cluster(['127.0.0.1'],
                                  load_balancing_policy=DCAwareRoundRobinPolicy(local_dc='Cassandra'),
                                  port=9042, auth_provider=auth_provider, connect_timeout=10000,)
                session = cluster.connect('XXX')
                #session = cluster.connect('XXX')
                #session.execute("select * from XXX.agent_delta_fragment LIMIT 1")
                #rows = session.execute('select agent_id from XXX.agent_delta_fragment LIMIT 1')
                #for row in rows:
                #    print row.agent_id
                #batch = BatchStatement("DELETE FROM XXX.agent_delta_fragment_detail_test WHERE agent_id=%s and delta_id=%s and fragment_id=%s", (uuid.UUID(fields[0]), uuid.UUID(fields[1]), int(fields[3])))
                session.execute("DELETE FROM XXX.agent_delta_fragment_detail WHERE agent_id=%s and delta_id=%s and fragment_id=%s", (uuid.UUID(fields[0]), uuid.UUID(fields[1]), int(fields[2])), timeout=1000000)
                #session.execute(batch)
            else:
                print fields[3]
                print "connect-False"
                #print fields[3]


dumptableascsv()
parse_file(datafile)
I am running an example for learning the Model-View-Controller pattern in Python, but the code is giving an error. I tried to debug it, but I couldn't find the root cause. Removing the connection-closing code works, but what is the issue with the code? Can you advise me on what is wrong?
# Filename: mvc.py
import sqlite3
import types


class DefectModel:
    def getDefectList(self, component):
        query = '''select ID from defects where Component = '%s' ''' % component
        defectlist = self._dbselect(query)
        list = []
        for row in defectlist:
            list.append(row[0])
        return list

    def getSummary(self, id):
        query = '''select summary from defects where ID = '%d' ''' % id
        summary = self._dbselect(query)
        for row in summary:
            return row[0]

    def _dbselect(self, query):
        connection = sqlite3.connect('example.db')
        cursorObj = connection.cursor()
        results = cursorObj.execute(query)
        connection.commit()
        cursorObj.close()
        return results


class DefectView:
    def summary(self, summary, defectid):
        print("#### Defect Summary for defect# %d ####\n %s" % (defectid, summary))

    def defectList(self, list, category):
        print("#### Defect List for %s ####\n" % category)
        for defect in list:
            print(defect)


class Controller:
    def __init__(self):
        pass

    def getDefectSummary(self, defectid):
        model = DefectModel()
        view = DefectView()
        summary_data = model.getSummary(defectid)
        return view.summary(summary_data, defectid)

    def getDefectList(self, component):
        model = DefectModel()
        view = DefectView()
        defectlist_data = model.getDefectList(component)
        return view.defectList(defectlist_data, component)
This is the related run.py:
#run.py
import mvc
controller = mvc.Controller()
# Displaying Summary for defect id # 2
print(controller.getDefectSummary(2))
# Displaying defect list for 'ABC' Component
print(controller.getDefectList('ABC'))
If you need to create the database, it is available here:
# Filename: database.py
import sqlite3
import types
# Create a database in RAM
db = sqlite3.connect('example.db')
# Get a cursor object
cursor = db.cursor()
cursor.execute("drop table defects")
cursor.execute("CREATE TABLE defects(id INTEGER PRIMARY KEY, Component TEXT, Summary TEXT)")
cursor.execute("INSERT INTO defects VALUES (1,'XYZ','File doesn‘t get deleted')")
cursor.execute("INSERT INTO defects VALUES (2,'XYZ','Registry doesn‘t get created')")
cursor.execute("INSERT INTO defects VALUES (3,'ABC','Wrong title gets displayed')")
# Save (commit) the changes
db.commit()
# We can also close the connection if we are done with it.
# Just be sure any changes have been committed or they will be lost.
db.close()
My error is as below:
> Windows PowerShell
> Copyright (C) Microsoft Corporation. All rights reserved.
>
> PS E:\Projects\test> & python e:/Projects/test/mvc.py
> Traceback (most recent call last):
>   File "e:/Projects/test/mvc.py", line 56, in <module>
>     import mvc
>   File "e:\Projects\test\mvc.py", line 65, in <module>
>     cursor.execute("drop table defects")
> sqlite3.OperationalError: no such table: defects
>
> PS E:\Projects\test> & python e:/Projects/ramin/mvc.py
> Traceback (most recent call last):
>   File "e:/Projects/test/mvc.py", line 56, in <module>
>     import mvc
>   File "e:\Projects\test\mvc.py", line 80, in <module>
>     print(controller.getDefectSummary(2))
>   File "e:\Projects\test\mvc.py", line 44, in getDefectSummary
>     summary_data = model.getSummary(defectid)
>   File "e:\Projects\test\mvc.py", line 18, in getSummary
>     for row in summary:
> sqlite3.ProgrammingError: Cannot operate on a closed cursor.
>
> PS E:\Projects\test>
I suspect that the problem is this line: cursor.execute("drop table defects")
Maybe you dropped that table in a previous run, and since it's no longer there, sqlite3 raises an OperationalError exception.
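A sketch of how the setup script could be made re-runnable, assuming you do want to drop and recreate the table on each run, is to use IF EXISTS so the drop no longer fails when the table is missing:

import sqlite3

db = sqlite3.connect('example.db')
cursor = db.cursor()
# DROP TABLE IF EXISTS does nothing (instead of raising) when the table is absent.
cursor.execute("DROP TABLE IF EXISTS defects")
cursor.execute("CREATE TABLE defects(id INTEGER PRIMARY KEY, Component TEXT, Summary TEXT)")
db.commit()
db.close()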
In your code there is a comment that says that you are using an in-memory sqlite database, but you are not. This is how you create an in-memory db:
db = sqlite3.connect(':memory:')
If you use an in-memory db you don't need to drop anything, since you are creating the db on the fly when you run your script.
Note: last year I wanted to understand MVC better, so I wrote a series of articles about it. Here is the one where I use SQLite as a storage backend for my Model.
I have a problem with a Python 2.7 project.
I'm trying to set a variable to a value retrieved from an sqlite3 database, but I'm having trouble. Here is my code thus far, and the error I'm receiving. Yes, the connection opens just fine, and the table, columns, and indicated row are there as they should be.
import sqlite3
import Trailcrest

conn = sqlite3.connect('roster.paw')
c = conn.cursor()

def Lantern(AI):
    """Pulls all of the data for the selected user."""
    Trailcrest.FireAutoHelp = c.execute("""select fireautohelp
                                           from roster
                                           where index = ?;""", (AI,)).fetchall()
The error is:
> Traceback (most recent call last):
>   File "<pyshell#4>", line 1, in <module>
>     Lantern(1)
>   File "C:\Users\user\MousePaw Games\Word4Word\PYM\Glyph.py", line 20, in Lantern
>     Trailcrest.FireAutoHelp = c.execute("""select fireautohelp from roster where index = ?;""", (AI,)).fetchall()
> OperationalError: near "index": syntax error
As Thomas K mentions in a comment, index is a SQL keyword.
You can either rename that column, or enclose it in backticks:
Trailcrest.FireAutoHelp = c.execute("""select fireautohelp
                                       from roster
                                       where `index` = ?;""", (AI,)).fetchall()
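For what it's worth, SQLite also accepts standard double-quoted identifiers (and square brackets), so a variant like the following should work equally well, assuming the same connection and table as above:

# Same query, but quoting the keyword column with standard double quotes.
Trailcrest.FireAutoHelp = c.execute("""select fireautohelp
                                       from roster
                                       where "index" = ?;""", (AI,)).fetchall()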