Run a query saved in MS Access with required parameters through pyodbc? - python

I am using Pyodbc to connect my program with MS Access. In the Access database, I pre-created some queries that require parameters. How can I pass values to parameters of the queries when executing them in Python?

When an Access database contains saved parameter queries, they are exposed by the Access ODBC driver as stored procedures and can be invoked using the ODBC {call ...} syntax. For example, with a saved query named [ClientEmails] ...
PARAMETERS prmLastName Text ( 255 );
SELECT Clients.ID, Clients.LastName, Clients.FirstName, Clients.Email
FROM Clients
WHERE (((Clients.LastName)=[prmLastName]));
... the following Python code will run that query and return results for a specific Last Name:
cmd = "{call ClientEmails(?)}"
params = ("Thompson",)
crsr.execute(cmd, params) # pyodbc "cursor" object
for row in crsr.fetchall():
print(row)
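The same pattern extends to saved queries that take more than one parameter; the placeholders are positional. The query and parameter names below are hypothetical, a sketch only:
# Hypothetical saved query [ClientsByNameAndCity], declared with
# PARAMETERS prmLastName Text ( 255 ), prmCity Text ( 255 );
cmd = "{call ClientsByNameAndCity(?, ?)}"
params = ("Thompson", "Calgary")
crsr.execute(cmd, params)
for row in crsr.fetchall():
    print(row)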

Here's a generalized example. First, connect to the database; then issue commands. A command is just a string, so you can incorporate variables from elsewhere in your code through simple string concatenation. That is fine for identifiers such as column or table names; for data values, prefer ? parameter placeholders (see the sketch after the example).
import pyodbc

# Raw string so the backslashes in the DBQ path are not treated as escape sequences.
connStr = r"""
DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};
DBQ=C:\full\path\to\your\PYODBC.accdb;
"""
cnxn = pyodbc.connect(connStr)
cursor = cnxn.cursor()

desired_column = "Forename"
table_name = "Student"
command = "SELECT " + desired_column + " FROM " + table_name

cursor.execute(command)
row = cursor.fetchone()
if row:
    print(row)
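For data values, it is usually better to let pyodbc bind them with ? placeholders instead of concatenating them into the SQL string. A minimal sketch, assuming the Student table also has a Surname column (an assumption, not shown above):
command = "SELECT Forename FROM Student WHERE Surname = ?"
cursor.execute(command, ("Smith",))   # the driver handles quoting/escaping of the value
for row in cursor.fetchall():
    print(row)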

Related

How to execute query formed by python-sql query builder?

I am using the python-sql query builder to build queries. Below is the link:
https://pypi.org/project/python-sql/
How do I execute queries from this query builder? Below is an example:
from sql import Table

user = Table('user')
select = user.select()
tuple(select)
# -> ('SELECT * FROM "user" AS "a"', ())
How do I execute this in Python?
It seems that python-sql only builds a tuple containing the SQL string and its parameters; it does not execute anything itself. You can execute the generated SQL using pyodbc or another library. For example, for SQL Server:
import pyodbc
from sql import Table

conn = pyodbc.connect("Driver={SQL Server Native Client 11.0};"
                      "Server=YourServer;"
                      "Database=Your database;"
                      "Trusted_Connection=yes;")
cursor = conn.cursor()

user = Table('user')
select = user.select()
query, params = tuple(select)   # unpack the generated SQL string and its parameters
cursor.execute(query, params)
for row in cursor:
    print('row = %r' % (row,))
For other database systems, just change the driver name and connection details.
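One caveat worth noting: for queries that do carry parameters, the python-sql README shows format-style (%s) placeholders being generated, while pyodbc expects qmark (?) placeholders. The snippet below is a minimal sketch of one crude workaround, assuming the generated SQL contains no literal '%s' text and reusing the cursor from above:
from sql import Table

user = Table('user')
select = user.select()
select.where = user.name == 'foo'   # WHERE clause, as in the python-sql README

query, params = tuple(select)       # e.g. ('SELECT ... WHERE ("a"."name" = %s)', ('foo',))
cursor.execute(query.replace('%s', '?'), params)   # swap placeholders for pyodbc's qmark style
for row in cursor:
    print('row = %r' % (row,))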

Python to SQL Server Insert

I'm trying to follow the method for inserting a Pandas data frame into SQL Server that is mentioned here, as it appears to be the fastest way to import lots of rows.
However, I am struggling to figure out the connection parameters.
I am not using a DSN; I have a server name, a database name, and I am using a trusted connection (i.e. Windows login).
import sqlalchemy
import urllib
server = 'MYServer'
db = 'MyDB'
cxn_str = "DRIVER={SQL Server Native Client 11.0};SERVER=" + server +",1433;DATABASE="+db+";Trusted_Connection='Yes'"
#cxn_str = "Trusted_Connection='Yes',Driver='{ODBC Driver 13 for SQL Server}',Server="+server+",Database="+db
params = urllib.parse.quote_plus(cxn_str)
engine = sqlalchemy.create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
conn = engine.connect().connection
cursor = conn.cursor()
I'm just not sure what the correct way to specify my connection string is. Any suggestions?
I have been working with pandas and SQL Server for a while, and the fastest way I have found to insert a lot of data into a table is this:
You can create a temporary CSV using:
df.to_csv('new_file_name.csv', sep=',', encoding='utf-8')
Then use pyodbc and the BULK INSERT Transact-SQL statement:
import pyodbc
conn = pyodbc.connect(DRIVER='{SQL Server}', Server='server_name', Database='Database_name', trusted_connection='yes')
cur = conn.cursor()
cur.execute("""BULK INSERT table_name
FROM 'C:\\Users\\folders path\\new_file_name.csv'
WITH
(
CODEPAGE = 'ACP',
FIRSTROW = 2,
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)""")
conn.commit()
cur.close()
conn.close()
Then you can delete the file:
import os
os.remove('new_file_name.csv')
It took only a second to load a large amount of data at once into SQL Server. I hope this gives you an idea.
Note: don't forget to have a field for the index column that to_csv writes out. That was my mistake when I started using this.
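One way to sidestep that index issue is to drop the DataFrame index when writing the CSV, so the column count matches the target table (a sketch, assuming the index is not needed):
df.to_csv('new_file_name.csv', sep=',', encoding='utf-8', index=False)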
Connection string parameter values should not be enclosed in quotes, so you should use Trusted_Connection=yes instead of Trusted_Connection='Yes'.
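Applying that to the original connection string, a corrected version might look like the sketch below. The driver name ("ODBC Driver 17 for SQL Server") and the target table name are assumptions; adjust them to whatever is installed on your machine:
import urllib.parse
import sqlalchemy

server = 'MYServer'
db = 'MyDB'

# No quotes around Yes; the port is folded into the SERVER value.
cxn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=" + server + ",1433;"
    "DATABASE=" + db + ";"
    "Trusted_Connection=yes;"
)
params = urllib.parse.quote_plus(cxn_str)
engine = sqlalchemy.create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)

# Hypothetical usage: write the DataFrame through the engine.
# df.to_sql('MyTable', engine, if_exists='append', index=False)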

create a database using pyodbc

I am trying to create a database using pyodbc, but it seems like a paradox: pyodbc needs to connect to an existing database first, and the new database would then be created from within that connection. Please correct me if I am wrong.
In my case, I used the following code to create a new database:
conn = pyodbc.connect("driver={SQL Server};server= serverName; database=databaseName; trusted_connection=true")
cursor = conn.cursor()
sqlcommand = """
CREATE DATABASE ['+ #IndexDBName +'] ON PRIMARY
( NAME = N'''+ #IndexDBName+''', FILENAME = N''' + #mdfFileName + ''' , SIZE = 4000KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'''+ #IndexDBName+'_log'', FILENAME = N''' + #ldfFileName + ''' , SIZE = 1024KB , MAXSIZE = 100GB , FILEGROWTH = 10%)'
"""
cursor.execute(sqlcommand)
cursor.commit()
conn.commit()
The above code runs without errors; however, no database is created.
So how can I create a database using pyodbc?
Thanks a lot.
If you try to create a database with the default autocommit value for the connection, you should receive an error like the following. If you're not seeing this error message, try updating the SQL Server native client for a more descriptive message:
pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][SQL Server Native Client 11.0]
[SQL Server]CREATE DATABASE statement not allowed within multi-statement transaction.
(226) (SQLExecDirectW)')
Turn on autocommit for the connection to resolve:
conn = pyodbc.connect("driver={SQL Server};server=serverName; database=master; trusted_connection=true",
autocommit=True)
Note two things:
autocommit is not part of the connection string; it is a separate keyword argument passed to the connect function
the initial database context for the connection should be the master system database
As an aside, you may want to check that #IndexDBName, #mdfFileName, and #ldfFileName are being set appropriately in your T-SQL. With the code as posted, a database literally named '+ #IndexDBName +' would be created.
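If the intent is simply to supply the database name and file paths from Python rather than through T-SQL variables, a minimal sketch might build the statement with str.format. The names and paths below are placeholders, and since CREATE DATABASE cannot take ? parameters, the values must be trusted or validated by your own code:
db_name = "IndexDB"                       # placeholder
mdf_path = r"C:\SQLData\IndexDB.mdf"      # placeholder
ldf_path = r"C:\SQLData\IndexDB_log.ldf"  # placeholder

sqlcommand = (
    "CREATE DATABASE [{0}] ON PRIMARY "
    "(NAME = N'{0}', FILENAME = N'{1}', SIZE = 4000KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB) "
    "LOG ON "
    "(NAME = N'{0}_log', FILENAME = N'{2}', SIZE = 1024KB, MAXSIZE = 100GB, FILEGROWTH = 10%)"
).format(db_name, mdf_path, ldf_path)

cursor.execute(sqlcommand)   # requires the autocommit=True connection shown above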
The accepted answer did not work for me but I managed to create a database using the following code on Ubuntu:
conn_str = r"Driver={/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.9.so.1.1};" + f"""
Server={server_ip};
UID=sa;
PWD=passwd;
"""
conn = pyodbc.connect(conn_str, autocommit=True)
cursor = conn.cursor()
cursor.execute(f"CREATE DATABASE {db_name}")
This connects using the default master database. You can check whether the database was created with this query:
SELECT name FROM master.sys.databases
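The same check can be run from Python through the existing cursor (a sketch reusing the conn and cursor from the snippet above):
cursor.execute("SELECT name FROM master.sys.databases")
print([row.name for row in cursor.fetchall()])   # the new database name should appear here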

Can't upload images to MS SQL Server via pyodbc

I'm trying to upload an image to an MS SQL server from a Linux (Raspbian) environment using Python. So far I have been able to connect to MS SQL and I have created a table. I'm using pyodbc.
#!/usr/bin/env python
import pyodbc

dsn = 'nicedcn'
user = 'myid'
password = 'mypass'
database = 'myDB'

con_string = 'DSN=%s;UID=%s;PWD=%s;DATABASE=%s;' % (dsn, user, password, database)
cnxn = pyodbc.connect(con_string)
cursor = cnxn.cursor()

string = "CREATE TABLE Database1([image name] varchar(20), [image] varbinary(max))"
cursor.execute(string)
cnxn.commit()
This part compiled without any error. Does that mean I have successfully created the table, or is there still an issue?
I try to upload the image this way:
with open('new1.jpg', 'rb') as f:
    bindata = f.read()

cursor.execute("insert into Database1(image name, image) values (?,?)", 'new1', bindata)
cnxn.commit()
I get the error on this part: pyodbc.ProgrammingError: ('42000', '[42000] [FreeTDS] [SQL Server] Statement(s) could not be prepared. (8180) (SQLParamData)')
Can someone help me, please? Thank you.
Your parameters must be passed in as one sequence, not as two separate arguments. A tuple will do nicely here:
cursor.execute(
    "insert into Database1([image name], image) values (?,?)",
    ('new1', pyodbc.Binary(bindata)))
Note that you also need to quote the [image name] column correctly (square brackets, because the name contains a space) and wrap the data in a pyodbc.Binary() object; this produces the right datatype for your Python version (bytearray or bytes).
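To confirm the round trip, the image can be read back with a parameterized SELECT and written to disk; a sketch using the same table and connection (the output filename is arbitrary):
cursor.execute("select [image] from Database1 where [image name] = ?", ('new1',))
row = cursor.fetchone()
if row:
    with open('new1_copy.jpg', 'wb') as f:
        f.write(row[0])   # varbinary(max) comes back as bytes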

Pyodbc Accessing Multiple Databases on same server

I'm tasked with obtaining data from two MS SQL databases on the same server so I can run a single query that uses info from both databases simultaneously. I am trying to achieve this in Python 2.7 with pyodbc 3.0.7. My query looks like this:
Select forcast.WindGust_Forecast, forcast.Forecast_Date, anoSection.SectionName, refTable.WindGust
FROM [EO1D].[dbo].[Dashboard_Forecast] forcast
JOIN [EO1D].[dbo].[Dashboard_AnoSections] anoSection
ON forcast.Section_ID = anoSection.Record_ID
JOIN [EO1D].[dbo].[Dashboard_AnoCircuits] anoCircuits
ON anoSection.Circuit_Number = anoCircuits.Circuit_Number
JOIN [FTSAutoCaller].[dbo].[ReferenceTable] refTable
ON anoCircuits.StationCode = refTable.StationCode
Where refTable.Circuit IS NOT NULL and refTable.StationCode = 'sil'
A typical pyodbc connection looks like:
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=SQLSRV01;DATABASE=DATABASE;UID=USER;PWD=PASSWORD')
which only allows access to the one database name provided.
How would I go about setting up a connection that gives me access to both databases so this query can be run? The two database names in my case are EO1D and FTSAutoCaller.
You're overthinking it. If you set up the connection as you did above and then simply pass the SQL along to a cursor, it should work:
import pyodbc
conn_string = '<removed>'
conn = pyodbc.connect(conn_string)
cur = conn.cursor()
query = 'select top 10 * from table1 t1 inner join database2..table2 t2 on t1.id = t2.id'
cur.execute(query)
And you are done (tested in my own environment; obviously the connection string and query were different, but it did work).
The query takes care of itself: although I only referenced one of the databases in the connection, the query had no trouble reaching both databases. I'm not 100% sure, but I'm assuming it works because the tables are fully qualified with the [database].[dbo].[table] prefixes.
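For completeness, a sketch of running the original cross-database query over that single connection; the credentials are placeholders, and the station code is passed as a ? parameter rather than hard-coded:
import pyodbc

cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=SQLSRV01;DATABASE=EO1D;UID=USER;PWD=PASSWORD')
crsr = cnxn.cursor()

query = """
SELECT forcast.WindGust_Forecast, forcast.Forecast_Date, anoSection.SectionName, refTable.WindGust
FROM [EO1D].[dbo].[Dashboard_Forecast] forcast
JOIN [EO1D].[dbo].[Dashboard_AnoSections] anoSection
  ON forcast.Section_ID = anoSection.Record_ID
JOIN [EO1D].[dbo].[Dashboard_AnoCircuits] anoCircuits
  ON anoSection.Circuit_Number = anoCircuits.Circuit_Number
JOIN [FTSAutoCaller].[dbo].[ReferenceTable] refTable
  ON anoCircuits.StationCode = refTable.StationCode
WHERE refTable.Circuit IS NOT NULL AND refTable.StationCode = ?
"""
crsr.execute(query, ('sil',))
for row in crsr.fetchall():
    print(row)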
