Create a schema in SQL Server using pyodbc - python

I am using pyodbc to read the structure of a SQL Server database and create an analogous copy of that structure in a different database somewhere else.
Essentially:
for db in source_dbs:
    Execute('create database [%s]' % db)  # THIS WORKS.
    for schema in db:
        # Each of the following results in an error starting with:
        # [42000] [Microsoft][ODBC SQL Server Driver][SQL Server]
        Execute('create schema [%s].[%s]' % (db, schema))
        # Incorrect syntax near '.'
        Execute('use [%s]; create schema [%s]' % (db, schema))
        # 'CREATE SCHEMA' must be the first statement in a query batch.
In this example, you can assume that Execute creates a cursor using pyodbc and executes the argument SQL string.
I'm able to create the empty databases, but I can't figure out how to create the schemas within them.
Is there a solution, or is this a limitation of using pyodbc with MS SQL Server?
EDIT: FWIW, I also tried passing the database name to Execute so I could set the database name in the connection string. That doesn't work either; the database name seems to be ignored completely.

Python database connections usually default to having transactions enabled (autocommit == False) and SQL Server tends to dislike certain DDL commands being executed in a transaction.
I just tried the following and it worked for me:
import pyodbc

connStr = (
    r"Driver={SQL Server Native Client 10.0};"
    r"Server=(local)\SQLEXPRESS;"
    r"Trusted_Connection=yes;"
)
# autocommit=True keeps the DDL below out of an implicit transaction
cnxn = pyodbc.connect(connStr, autocommit=True)
crsr = cnxn.cursor()
crsr.execute("CREATE DATABASE pyodbctest")
crsr.execute("USE pyodbctest")
crsr.execute("CREATE SCHEMA myschema")
crsr.close()
cnxn.close()
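Applied to the loop in the question, the same autocommit fix might look like the following sketch (source_dbs here is a hypothetical mapping of database names to their schema names; substitute however you actually enumerate the source structure):

import pyodbc

# Hypothetical input: database names mapped to the schemas they contain.
source_dbs = {"SourceDb1": ["sales", "hr"], "SourceDb2": ["audit"]}

connStr = (
    r"Driver={SQL Server Native Client 10.0};"
    r"Server=(local)\SQLEXPRESS;"
    r"Trusted_Connection=yes;"
)
cnxn = pyodbc.connect(connStr, autocommit=True)
crsr = cnxn.cursor()

for db, schemas in source_dbs.items():
    crsr.execute("CREATE DATABASE [%s]" % db)
    # USE changes the connection's database context, so each CREATE SCHEMA
    # that follows is the first statement of its own batch.
    crsr.execute("USE [%s]" % db)
    for schema in schemas:
        crsr.execute("CREATE SCHEMA [%s]" % schema)

crsr.close()
cnxn.close()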

Related

How to restore MySQL database using Python

I have a sql file generated during a database backup process, and I want to load all database content from that sql file into a different MySQL database (secondary database).
I have created a Python function to load the whole database from that sql file, but when I execute the function, I get an error:
'str' object is not callable
Below is the Python script:
def load_database_dump_to_secondary_mysql(file_path='db_backup_file.sql'):
    query = f'source {file_path}'
    try:
        connection = mysql_hook.get_conn()  # connection to secondary db
        cursor = connection.cursor(query)
        print('LOAD TO MYSQL COMPLETE')
    except Exception as xerror:
        print("LOAD ERROR: ", xerror)
NB: mysql_hook is an Airflow connector that contains MySQL DB connection info such as host, user/password, and database name. Also, I don't have a connection to the primary database; I'm only receiving the sql dump file.
What am I missing?
source is a client builtin command: https://dev.mysql.com/doc/refman/8.0/en/mysql-commands.html
It's not an SQL query that MySQL's SQL parser understands.
So you can't execute source using cursor.execute(), because that goes directly to the dynamic SQL interface.
You must run it using the MySQL command-line client as a subprocess:
import subprocess
subprocess.run(['mysql', '-e', f'source {file_path}'])
You might need other options to the mysql client, such as user, password, host, etc.
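For example, a minimal sketch of the whole function rewritten around the client (the host, user, password, and database values below are placeholders; in Airflow you could read them from the connection that backs mysql_hook):

import subprocess

def load_database_dump_to_secondary_mysql(file_path='db_backup_file.sql'):
    # Placeholder connection details for the secondary database.
    result = subprocess.run(
        ['mysql',
         '--host=secondary-db.example.com',
         '--user=loader',
         '--password=secret',
         'secondary_db',               # database to load the dump into
         '-e', f'source {file_path}'],
        capture_output=True, text=True)
    if result.returncode != 0:
        print('LOAD ERROR:', result.stderr)
    else:
        print('LOAD TO MYSQL COMPLETE')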
Try this:
import mysql.connector as m

# database which you want to backup
db = 'geeksforgeeks'
connection = m.connect(host='localhost', user='root',
                       password='123', database=db)
cursor = connection.cursor()

# Getting all the table names
cursor.execute('SHOW TABLES;')
table_names = []
for record in cursor.fetchall():
    table_names.append(record[0])

backup_dbname = db + '_backup'
try:
    cursor.execute(f'CREATE DATABASE {backup_dbname}')
except:
    pass
cursor.execute(f'USE {backup_dbname}')

for table_name in table_names:
    cursor.execute(
        f'CREATE TABLE {table_name} SELECT * FROM {db}.{table_name}')

sqlalchemy - pass through exact pyodbc string returning SQL server permission error?

The following code is working only using pyodbc -
conn_string = 'DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password
conn = pyodbc.connect(conn_string)
sql_query = 'SELECT * FROM table'
df = pd.read_sql(sql_query,conn)
However, when I try to use the pass-through exact pyodbc string method of creating a SQLAlchemy engine, I get a permission error -
conn_string = 'DRIVER='+driver+';SERVER='+server+';DATABASE='+database+';UID='+username+';PWD='+ password
params = urllib.parse.quote_plus(conn_string)
engine = sqlalchemy.create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
engine.connect()
Error -
SAWarning: Could not fetch transaction isolation level, tried views: ('sys.dm_exec_sessions',
'sys.dm_pdw_nodes_exec_sessions'); final error was: ('42000', '[42000] [Microsoft]
[ODBC Driver 17 for SQL Server][SQL Server]User does not have permission to perform this action.
(6004) (SQLExecDirectW)')
(I tried with and without the port included)
I'm confused about where this permission error is coming from, because it works when using pyodbc alone. Is there some parameter I'm missing here?
Solution - https://github.com/catherinedevlin/ipython-sql/issues/105
I know this is an older issue, but since I just resolved a similar
issue, I thought I would share. This is an issue with SQL DW, not
the SQL Alchemy engine that ipython-sql uses.
I did notice that configuring SQL Magic to use autocommit doesn't seem
to work with MSSQL. Go ahead and add that to your connection string:
%sql mssql+pyodbc://user:pwd@MSSQL_DSN?autocommit=true
You should be able to connect using an Admin account. For general
users:
"Could not fetch transaction isolation level" is a SQL error. This
means the user does not have permissions to see what the current
isolation is. For Azure SQL Data Warehouse, those permissions are in
this view:
https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-pdw-exec-connections-transact-sql
Since VIEW SERVER STATE is not supported in SQL DW, you need to use
"GRANT VIEW DATABASE STATE TO user" instead:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/43a4f051-2a12-4a15-8362-3d7a67f5c88f/unable-to-give-permission-on-azure-sql-data-warehouse-catalog-views-and-dmvs?forum=AzureSQLDataWarehouse
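Translating that autocommit advice from ipython-sql syntax to the plain SQLAlchemy setup in the question, one option is to forward autocommit to pyodbc.connect() through connect_args. A sketch, with placeholder credentials:

import urllib.parse
import sqlalchemy

# Placeholder connection details; reuse the values from your conn_string.
conn_string = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myserver;DATABASE=mydb;UID=myuser;PWD=mypassword"
)
params = urllib.parse.quote_plus(conn_string)
engine = sqlalchemy.create_engine(
    "mssql+pyodbc:///?odbc_connect=%s" % params,
    connect_args={"autocommit": True},  # passed through to pyodbc.connect()
)
engine.connect()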

Connect to SQL Server and run query as "passthrough" from Python

I currently have code that executes queries on data stored on a SQL Server database, such as the following:
import pyodbc
conn = pyodbc.connect(
    r'DRIVER={SQL Server};'
    r'SERVER=SQL2SRVR;'
    r'DATABASE=DBO732;'
    r'Trusted_Connection=yes;'
)
sqlstr = '''
SELECT Company, Street_Address, City, State
FROM F556
WHERE [assume complicated criteria statement here]
'''
crsr = conn.cursor()
for row in crsr.execute(sqlstr):
    print(row.Company, row.Street_Address, row.City, row.State)
I can't find documentation online about whether pyodbc runs my queries on the SQL Server side (as passthrough queries) by default, or whether (if pyodbc can't do that) there is another way (maybe sqlalchemy or similar?) of doing that. Any insight?
Or is there a way to execute passthrough queries directly from Pandas?
If you are working with pandas and SQL Server, then you should already have created a SQLAlchemy Engine object (usually named engine). To execute a raw DML statement, you can use this construct:
with engine.begin() as conn:
    conn.execute("UPDATE table_name SET column_name ...")
print("table updated")

create a database using pyodbc

I am trying to create a database using pyodbc; however, it seems to be a paradox, as pyodbc needs to connect to a database first, and the new database would be created within the connected one. Please correct me if I am wrong.
In my case, I used the following code to create a new database:
conn = pyodbc.connect("driver={SQL Server};server=serverName; database=databaseName; trusted_connection=true")
cursor = conn.cursor()
sqlcommand = """
CREATE DATABASE ['+ @IndexDBName +'] ON PRIMARY
( NAME = N'''+ @IndexDBName+''', FILENAME = N''' + @mdfFileName + ''' , SIZE = 4000KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'''+ @IndexDBName+'_log'', FILENAME = N''' + @ldfFileName + ''' , SIZE = 1024KB , MAXSIZE = 100GB , FILEGROWTH = 10%)'
"""
cursor.execute(sqlcommand)
cursor.commit()
conn.commit()
The above code runs without errors; however, no database is created.
So how can I create a database using pyodbc?
Thanks a lot.
If you try to create a database with the default autocommit value for the connection, you should receive an error like the following. If you're not seeing this error message, try updating the SQL Server native client for a more descriptive message:
pyodbc.ProgrammingError: ('42000', '[42000] [Microsoft][SQL Server Native Client 11.0]
[SQL Server]CREATE DATABASE statement not allowed within multi-statement transaction.
(226) (SQLExecDirectW)')
Turn on autocommit for the connection to resolve:
conn = pyodbc.connect("driver={SQL Server};server=serverName; database=master; trusted_connection=true",
                      autocommit=True)
Note two things:
- autocommit is not part of the connection string; it is a separate keyword argument passed to the connect function
- the initial database context specified for the connection is the master system database
As an aside, you may want to check that @IndexDBName, @mdfFileName, and @ldfFileName are being appropriately set in your T-SQL. With the code you provided, a database named '+ @IndexDBName +' would be created.
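If the statement is built in Python instead of T-SQL, a hedged sketch might look like the following (db_name, mdf_path, and ldf_path are placeholder values standing in for the T-SQL variables; CREATE DATABASE cannot take query parameters, so only format in names you trust):

import pyodbc

# Placeholder values standing in for @IndexDBName, @mdfFileName, @ldfFileName.
db_name = "IndexDB"
mdf_path = r"C:\Data\IndexDB.mdf"
ldf_path = r"C:\Data\IndexDB_log.ldf"

sqlcommand = f"""
CREATE DATABASE [{db_name}] ON PRIMARY
( NAME = N'{db_name}', FILENAME = N'{mdf_path}', SIZE = 4000KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
LOG ON
( NAME = N'{db_name}_log', FILENAME = N'{ldf_path}', SIZE = 1024KB, MAXSIZE = 100GB, FILEGROWTH = 10% )
"""

conn = pyodbc.connect(
    "driver={SQL Server};server=serverName; database=master; trusted_connection=true",
    autocommit=True)
conn.cursor().execute(sqlcommand)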
The accepted answer did not work for me but I managed to create a database using the following code on Ubuntu:
conn_str = r"Driver={/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.9.so.1.1};" + f"""
Server={server_ip};
UID=sa;
PWD=passwd;
"""
conn = pyodbc.connect(conn_str, autocommit=True)
cursor = conn.cursor()
cursor.execute(f"CREATE DATABASE {db_name}")
This uses the default master database when connecting. You can check whether the database was created with this query:
SELECT name FROM master.sys.databases
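Or, from the same pyodbc connection, something like this quick check (reusing the cursor and db_name from the snippet above):

# Look the new database up by name in the server's catalog.
cursor.execute("SELECT name FROM master.sys.databases WHERE name = ?", db_name)
print("created" if cursor.fetchone() else "not found")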

importing data to sql server database from json

I am attempting to import data into a SQL Server 2012 database from a large JSON file using Python 2.7. Everything in the code below executes and returns no error; however, when I go into SQL Server Management Studio and query the table, it returns zero rows. Why is this?
import json, pyodbc

# import data
path = 'phys2211-001_clickstream_export'
records = [json.loads(line) for line in open(path)]

# connect to database, create db cursor
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;DATABASE=fall_2013_blended;Trusted_Connection=Yes')
cursor = cnxn.cursor()

# insert data into db
for record in records:
    cursor.execute("insert into clickstream_json(json_event) values (?)", json.dumps(record))
I didn't have autocommit set up, so it ran through everything and then committed nothing to the database. It was similar to this question:
Issue creating table in MS SQL Database with Python script
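Concretely, either commit once after the loop or open the connection with autocommit enabled; a sketch of both fixes against the code above:

# Option 1: commit explicitly after the inserts (reusing cnxn/cursor from above).
for record in records:
    cursor.execute("insert into clickstream_json(json_event) values (?)", json.dumps(record))
cnxn.commit()

# Option 2: open the connection with autocommit, so each insert commits itself.
cnxn = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;'
                      'DATABASE=fall_2013_blended;Trusted_Connection=Yes',
                      autocommit=True)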
