How to execute SQL Script with hdbcli in Python

I use hdbcli in Python to connect to a HANA DB.
SQL command execution works via a cursor on my connection:
conn = dbapi.connect(
    address=os.environ['HOST'],
    port=dbconnectport,
    user=dbusername,
    password=dbpasswd,
    databasename=dbconnectdbname
)
...
cursor=conn.cursor()
and execution looks like:
cursor.execute("Select USER_NAME from \"SYS\".\"USERS\" WHERE USER_NAME=\'%s\'" % varcrdbust)
For single queries it works fine. But how can I execute a SQL script with a lot of special characters?
Via bash shell I can do this for example in this way:
Create file on os level:
tee >> $PRIVFILE << EOF
WITH
/*
[NAME]
- HANA_Security_CopyPrivilegesAndRoles_CommandGenerator_2.00.000+
[DESCRIPTION]
- Generates SQL commands that can be used to grant roles and privileges assigned to one user to another user or role
SQL script text here.........
And then execute this via hdbsql with argument -I <pathname/filename>
Is there any alternative in Python? Maybe without the use of file creation at OS level?
Thanks David

All SAP HANA clients allow the execution of only a single command at a time.
The monitoring script that you chose as an example is in fact just one single command: a relatively large SELECT statement.
So, for every command you want to have executed, you will need to send a separate .execute().
If you want to process a larger "script" file with several commands, you will need to look out for a "command separator" character (like ; in HANA Studio or hdbsql) and build the individual commands from the strings between those separators.
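A minimal sketch of that splitting approach with hdbcli (connection values and the script file name are placeholders; note that a naive split on ";" will mis-split statements that contain semicolons inside string literals or /* ... */ comments, like the monitoring script above):
from hdbcli import dbapi

conn = dbapi.connect(address='hanahost', port=39015, user='MYUSER', password='secret')  # placeholder credentials
cursor = conn.cursor()

with open('script.sql') as f:  # hypothetical script file
    sql_script = f.read()

# Naive command separation: split on ";" and skip empty fragments.
for statement in sql_script.split(';'):
    statement = statement.strip()
    if statement:
        cursor.execute(statement)

cursor.close()
conn.close()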

Related

execute raw mongo commands using pymongo

I wanted to execute raw mongo commands like db.getUsers() through the pymongo SDK. For example, I have some js file which contains only db.getUsers() in it. My Python program needs to execute that command after establishing the connection. I tried db.command and db.runCommand but I'm not able to achieve that. After establishing the connection it should execute whatever mongo command is in the js file. Please assist.
db.getUsers() is a shell helper that wraps the usersInfo: 1 command (see https://docs.mongodb.com/manual/reference/method/db.getUsers/).
You can run the usersInfo command using db.command() in pymongo with something like
from pymongo import MongoClient
db = MongoClient()['admin']
command_result = db.command({'usersInfo': 1})
print(command_result)
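The returned document is a plain dict; assuming the standard usersInfo reply shape, the user list lives under the 'users' key:
for user in command_result.get('users', []):
    print(user['user'], user['roles'])  # each entry also carries 'db' and other fields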

Import Excel file into MSSQL via Python, using an SQL Agent job

Task specs:
Import Excel file(s) into MSSQL database(s) using Python, but in a parametrized manner, and using SQL Server Agent job(s).
With the added requirement to set parameter values and/or run the job steps from SQL (query or SP).
And without using Access Database Engine(s) and/or any code that makes use of such drivers (in any wrapping).
First. Let's get some preparatory stuff out of the way.
We will need to set some PowerShell settings.
Run Windows PowerShell as Administrator and do:
Set-ExecutionPolicy -ExecutionPolicy Unrestricted
Second. Some assumptions for reasons of clarity.
And those are:
1a. You have at least one instance of SQL2017 or later (Developer / Enterprise / Standard edition) installed and running on your machine.
1b. You have not bootstrapped the installation of this SQL instance so as to exclude Integration Services (SSIS).
1c. There exists SQL Server Agent running, bound to this SQL instance.
1d. You have some SSMS installed.
2a. There is at least one database attached to this instance (if not create one – please refrain from using in-memory filegroups for this exercise, I have not tested on those).
2b. There are no database level DML triggers that log all data changes in a designated table.
3. There is no active Server Audit Specification for this database logging everything we do.
4. Replication is not enabled (I mean the proper MSSQL replication feature not like scripts by 3rd party apps).
For 2b and 3 it's just because I have not tested this with those on, but number 4 definitely won't work with that on.
5. You are Windows-authenticated into the chosen SQL instance and your instance login and db mappings and privileges are sufficient for at least table creation and basic stuff.
Third.
We are going to need some kind of Python script to do this right?
Ok let's make one.
import pandas as pd
import sqlalchemy as sa
import urllib
import sys
import warnings
import os
import re
import time

#COMMAND LINE PARAMETERS
server = sys.argv[1]
database = sys.argv[2]
ExcelFileHolder = sys.argv[3]
SQLTableName = sys.argv[4]
#END OF COMMAND LINE PARAMETERS

excel_sheet_number_left_to_right = 0
warnings.filterwarnings('ignore')
driver = "SQL Server Native Client 11.0"
params = "DRIVER={%s};SERVER=%s;DATABASE=%s;Trusted_Connection=yes;QuotedID=Yes;" % (driver, server, database) #added the explicit "QuotedID=Yes;" to ensure no issues with column names
params = urllib.parse.quote_plus(params) #urllib.parse.quote_plus for Python 3
engine = sa.create_engine("mssql+pyodbc:///?odbc_connect=%s?charset=utf8" % params) #charset is cool to have here
conn = engine.connect()

def execute_sql_trans(sql_string, log_entry):
    with conn.begin() as trans:
        result = conn.execute(sql_string)
        if len(log_entry) >= 1:
            log.write(log_entry + "\n") #assumes an open file handle named "log"; never reached here since log_entry is always ""
    return result

excelfilesCursor = {}

def process_excel_file(excelfile, excel_sheet_name, tableName, withPyIndexOrSQLIndex, orderByCandidateFields):
    withPyIndexOrSQLIndex = 0
    excelfilesCursor.update({tableName: withPyIndexOrSQLIndex})
    df = pd.read_excel(open(excelfile, 'rb'), sheet_name=excel_sheet_name)
    now = time.time()
    mlsec = repr(now).split('.')[1][:3]
    log_string = "Reading file \"" + excelfile + "\" to memory: " + str(time.strftime("%Y-%m-%d %H:%M:%S.{} %Z".format(mlsec), time.localtime(now))) + "\n"
    print(log_string)
    df.to_sql(tableName, engine, if_exists='replace', index_label='index.py')
    now = time.time()
    mlsec = repr(now).split('.')[1][:3]
    log_string = "Writing file \"" + excelfile + "\", sheet " + str(excel_sheet_name) + " to SQL instance " + server + ", into [" + database + "].[dbo].[" + tableName + "]: " + str(time.strftime("%Y-%m-%d %H:%M:%S.{} %Z".format(mlsec), time.localtime(now))) + "\n"
    print(log_string)

def convert_datetimes_to_dates(tableNameParam):
    sql_string = "exec [convert_datetimes_to_dates] '" + tableNameParam + "';"
    execute_sql_trans(sql_string, "")

process_excel_file(ExcelFileHolder, excel_sheet_number_left_to_right, SQLTableName, 0, None)
sys.exit(0)
Ok, you may or may not notice that my script contains some extra defs; I sometimes use them for convenience, and you may as well ignore them.
Save the Python script somewhere nice, say C:\PythonWorkspace\ExcelPythonToSQL.py
Also, needless to say, you will need some Python modules in your venv; pip install the ones you don't already have.
Fourth.
Connect to your db, SSMS, etc. and create a new Agent job.
Let's call it "ExcelPythonToSQL".
New step, let's call it "PowerShell parametrized script".
Set the Type to PowerShell.
And place this code inside it:
$pyFile="C:\PythonWorkspace\ExcelPythonToSQL.py"
$SQLInstance="SomeMachineName\SomeNamedSQLInstance"
#or . or just the computername or localhost if your SQL instance is a default instance i.e. not a named one.
$dbName="SomeDatabase"
$ExcelFileFullPath="C:\Temp\ExampleExcelFile.xlsx"
$targetTableName="somenewtable"
C:\ProgramData\Miniconda3\envs\YOURVENVNAMEHERE\python $pyFile $SQLInstance $dbName $ExcelFileFullPath $targetTableName
Save the step and the job.
Now let's wrap it in something easier to handle. Because remember, this job and step is not like an SSIS step where you could potentially alter the parameter values in its configuration tab. You don't want to open the properties of the job and the step each time to specify a different Excel file or target table.
So.
Ah, also, do me a solid and do this little trick. Make a small alteration in the step code, anything, and then instead of clicking OK choose Script to New Query Window. That way we can capture the GUID of the job without having to query for it.
So now.
Create a SP like so:
use [YourDatabase];
GO
create proc [ExcelPythonToSQL_amend_job_step_params]( @pyFile nvarchar(max),
@SQLInstance nvarchar(max),
@dbName nvarchar(max),
@ExcelFileFullPath nvarchar(max),
@targetTableName nvarchar(max)='somenewtable'
)
as
begin
declare @sql nvarchar(max);
set @sql = '
exec msdb.dbo.sp_update_jobstep @job_id=N''7f6ff378-56cd-4a8d-ba40-e9057439a5bc'', @step_id=1,
@command=N''
$pyFile="'+@pyFile+'"
$SQLInstance="'+@SQLInstance+'"
$dbName="'+@dbName+'"
$ExcelFileFullPath="'+@ExcelFileFullPath+'"
$targetTableName="'+@targetTableName+'"
C:\ProgramData\Miniconda3\envs\YOURVENVGOESHERE\python $pyFile $SQLInstance $dbName $ExcelFileFullPath $targetTableName''
';
--print @sql;
exec sp_executesql @sql;
end
But inside it you must replace two things. One, the uniqueidentifier of the Agent job, the one you found by doing the trick I described earlier with Script to New Query Window. Two, the name of your Python venv, replacing the word YOURVENVGOESHERE in the code.
Cool.
Now, with a simple script we can play-test it.
Let's have in a new query window something like this:
use [YourDatabase];
GO
--to set parameters
exec [ExcelPythonToSQL_amend_job_step_params] @pyFile='C:\PythonWorkspace\ExcelPythonToSQL.py',
@SQLInstance='.',
@dbName='YourDatabase',
@ExcelFileFullPath='C:\Temp\ExampleExcelFile.xlsx',
@targetTableName='somenewtable';
--to execute the job
exec msdb.dbo.sp_start_job N'ExcelPythonToSQL', @step_name = N'PowerShell parametrized script';
--let's test that the table is there and drop it.
if object_id('YourDatabase..somenewtable') is not null
begin
select 'Table was here!' [test: table exists?];
drop table [somenewtable];
end
else select 'NADA!' [test: table exists?];
You can run the set-parameters part, then the execution; be careful to then wait a little bit, a few seconds, since calling sp_start_job like in this script is asynchronous. Then run the test script to make sure it went in and to clean up. (If you'd rather poll for completion from Python, see the sketch below.)
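A hedged Python sketch of that polling idea, using pyodbc to ask the Agent via msdb.dbo.sp_help_job whether the job is still running (connection string values are placeholders; current_execution_status 4 means idle):
import time
import pyodbc

# placeholder connection values; reuse whatever driver/server you used above
conn = pyodbc.connect("DRIVER={SQL Server Native Client 11.0};SERVER=.;DATABASE=msdb;Trusted_Connection=yes;", autocommit=True)
cursor = conn.cursor()

while True:
    row = cursor.execute("exec msdb.dbo.sp_help_job @job_name = N'ExcelPythonToSQL'").fetchone()
    if row.current_execution_status == 4:  # 4 = idle, 1 = executing
        break
    time.sleep(2)
conn.close()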
That's it.
Obviously lots of variations are possible.
Like in the job step, we could instead call a batch file, or call a PowerShell .ps1 file and have the parameters in there; lots and lots of other ways of doing it. I merely described one in this post.

python values to bash line on a remote server

So I have a Python script that connects to the client servers and then gets some data that I need.
Now, my bash script on the client side needs input like the one below, and it works this way:
client.exec_command('/apps./tempo.sh' 2016 10 01 02 03))
Now I'm trying to get the user input from my Python script and then transfer it to my remotely called bash script, and that's where I get my problem. Below is the method I tried, with no luck getting it working.
import sys
client.exec_command('/apps./tempo.sh', str(sys.argv))
I believe you are using Paramiko - which you should tag or include that info in your question.
The basic problem I think you're having is that you need to include those arguments inside the string, i.e.
client.exec_command('/apps./tempo.sh %s' % str(sys.argv))
otherwise they get applied to the other arguments of exec_command. I think your original example is not quite accurate in how it works.
Just out of interest, have you looked at "fabric" (http://www.fabfile.org)? It has lots of very handy functions like "run", which will run a command on a remote server (or lots of remote servers!) and return the response.
It also gives you lots of protection by wrapping around popen and paramiko for the SSH login etc., so it can be much more secure than trying to make web services or other things.
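For illustration, a minimal sketch with the modern Fabric 2.x API, reusing the question's script path (the host name is a placeholder):
from fabric import Connection

# run the remote script over SSH; Fabric handles the connection setup
result = Connection('user@clienthost').run('/apps./tempo.sh 2016 10 01 02 03')
print(result.stdout)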
You should always be wary of injection attacks - I'm unclear how you are injecting your variables, but if a user calls your script with something like python runscript "; rm -rf /" that would cause very bad problems for you. It would instead be better to have 'options' on the command, which are programmed in, limiting the user's input drastically, or at least to put a lot of protection around the input variables. Of course, if this is only for you (or trained people), then it's a little easier.
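As a minimal sketch of that protection idea (assuming client is the already-connected paramiko SSHClient from the question), shlex.quote from the standard library can escape each argument before it reaches the remote shell; this reduces, though does not eliminate, the injection risk:
import shlex
import sys

# client is assumed to be a connected paramiko.SSHClient
command = '/apps./tempo.sh'
safe_args = ' '.join(shlex.quote(arg) for arg in sys.argv[1:])
client.exec_command('{} {}'.format(command, safe_args))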
I recommend using paramiko for the ssh connection.
import paramiko
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server, username=user,password=password)
...
ssh_client.close()
And if you want to simulate a terminal, as if a user was typing:
import time

chan = ssh_client.invoke_shell()
chan.send('PS1="python-ssh:"\n')

def exec_command(cmd):
    """Gets ssh command(s), executes them, and returns the output"""
    prompt = 'python-ssh:'  # the command line prompt in the ssh terminal
    buff = ''
    chan.send(str(cmd) + '\n')
    while not chan.recv_ready():
        time.sleep(1)
    while not buff.endswith(prompt):
        buff += chan.recv(1024).decode()  # recv returns bytes in Python 3
    return buff[:-len(prompt)]  # drop the trailing prompt from the captured output
Example usage: exec_command('pwd')
And the result would even be returned to you via ssh
Assuming that you are using paramiko you need to send the command as a string. It seems that you want to pass the command line arguments passed to your Python script as arguments for the remote command, so try this:
import sys
command = '/apps./tempo.sh'
args = ' '.join(sys.argv[1:]) # all args except the script's name!
client.exec_command('{} {}'.format(command, args))
This will collect all the command line arguments passed to the Python script, except the first argument, which is the script's file name, and build a space separated string. This argument string is then concatenated with the bash script command and executed remotely.

Dump data from malformed SQLite in Python

I have a malformed database. When I try to get records from any of two tables, it throws an exception:
DatabaseError: database disk image is malformed
I know that through commandline I can do this:
sqlite3 ".dump" base.db | sqlite3 new.db
Can I do something like this from within Python?
As far as I know you cannot do that (alas, I might be mistaken), because the sqlite3 module for Python is very limited.
The only workaround I can think of involves calling the OS command shell (e.g. terminal, cmd, ...) (more info) via Python's check_call:
Combine it with the info from here to do something like this:
This was done on a Windows XP machine.
Unfortunately I can't test it on a Unix machine right now - hope it will help you:
from subprocess import check_call

def sqliterepair():
    check_call(["sqlite3", "C:/sqlite-tools/base.db", ".mode insert", ".output C:/sqlite-tools/dump_all.sql", ".dump", ".exit"])
    check_call(["sqlite3", "C:/sqlite-tools/new.db", ".read C:/sqlite-tools/dump_all.sql", ".exit"])
    return
The first argument calls the sqlite3.exe. Because it is in my system path variable, I don't need to specify the path or the suffix ".exe".
The other arguments are chained into the sqlite3 shell.
Note that the ".exit" argument is required so the sqlite shell will exit. Otherwise check_call() will never complete, because the outer cmd shell or terminal will be left suspended.
Of course the dump file should be removed afterwards...
EDIT: Much shorter solution (credit goes to OP (see comment))
os.system("sqlite3 C:/sqlite-tools/base.db .dump | sqlite3 C:/sqlite-tools/target.db")
Just tested this: it works. Apparently I was wrong in the comments.
If I understood properly, what you want is to duplicate an sqlite3 database in Python. Here is how I would do it:
import sqlite3

# oldDB = path to the corrupted db,
# newDB = path to the new db
def duplicateDB(oldDB, newDB):
    con = sqlite3.connect(oldDB)
    script = ''.join(con.iterdump())
    con.close()
    con = sqlite3.connect(newDB)
    con.executescript(script)
    con.close()
    print "duplicated %s into %s" % (oldDB, newDB)
In your example, call duplicateDB('base.db', 'new.db'). The iterdump function is equivalent to the .dump command.
Note that if you use Python 3, you will need to change the print statement.
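In Python 3 that last line becomes a function call:
print("duplicated %s into %s" % (oldDB, newDB))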

Set Input Variable at the Start of a Python Script

I'm working on a Python script in Red Hat that connects to a SQL database and runs queries on a specific row. What I am attempting to do is to set the row that I want to query as an input variable that gets passed into the script when I start it. For example, when I start the script I would like to just run the command:
./SQLQuery.py Row_To_Query = Row_Name
For a similar script that I have in VBScript I would run the wscript command like this:
wscript SQLQuery.vbs /name:Row_Name
Then the row name would get passed into the script and the relevant data used as the script progresses.
My end goal is to run the Python script as a task every 10 minutes or so and pull the relevant data from a specific row, but also have the ability to specify which row the SQL queries are run against without having to edit the script each time I want to run it.
You can use argparse to parse the commandline arguments given in a standard unix-like format:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--row-to-query', type=str, help='row to query')
namespace = parser.parse_args()
row_name = namespace.row_to_query
...
You'd then run the program like:
./SQLQuery.py --row-to-query=some_row_name
or
./SQLQuery.py --row-to-query some_row_name
both invocations should be equivalent.
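For completeness, a hedged sketch of feeding the parsed argument into a parameterized query; sqlite3 stands in for whichever DB-API driver the actual database needs, and the database, table, and column names are hypothetical:
import argparse
import sqlite3  # stand-in for the actual DB-API driver

parser = argparse.ArgumentParser()
parser.add_argument('--row-to-query', type=str, help='row to query')
namespace = parser.parse_args()

conn = sqlite3.connect('example.db')  # hypothetical database
cur = conn.cursor()
# Parameter binding lets the scheduled task vary the row without editing the script,
# and avoids SQL injection via the command line argument.
cur.execute('SELECT * FROM some_table WHERE row_name = ?', (namespace.row_to_query,))
print(cur.fetchall())
conn.close()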
