I am checking for changes on a database table and running a Python script when one occurs.
Here is the function:
CREATE OR REPLACE FUNCTION callMyFooFunc()
RETURNS trigger
AS $$
import subprocess
subprocess.call(['/usr/bin/python3', '/var/lib/postgresql/foo.py'])
$$ LANGUAGE plpython3u;
and here is the trigger:
CREATE TRIGGER executePython
AFTER UPDATE ON public."FooTable"
FOR EACH ROW EXECUTE PROCEDURE callMyFooFunc();
Everything is working: foo.py is executed.
How can I pass the PostgreSQL trigger "variables/arguments" (things like TG_OP, NEW, etc.) to my foo.py?
In foo.py I want to process the changed data.
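In plpython3u the trigger data is exposed through the TD dictionary (TD["event"], TD["new"], TD["old"]), so one option is to serialize it to JSON inside the function and pass it as a command-line argument, e.g. subprocess.call(['/usr/bin/python3', '/var/lib/postgresql/foo.py', json.dumps({'event': TD['event'], 'new': TD['new']})]). A minimal sketch of the receiving side in foo.py, with parse_trigger_payload as a hypothetical helper name:

```python
import json
import sys

def parse_trigger_payload(arg):
    """Decode the JSON payload passed on the command line by the trigger function."""
    payload = json.loads(arg)
    return payload["event"], payload["new"]

if __name__ == "__main__" and len(sys.argv) > 1:
    event, new_row = parse_trigger_payload(sys.argv[1])
    print("operation:", event)      # e.g. "UPDATE"
    print("changed row:", new_row)  # dict of column name -> new value
```

TD["new"] is a plain dict of column names to values, so json.dumps on the sending side works as long as the column types are JSON-serializable.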
I am trying to execute the script below, but it throws a NameError saying the variable "executing" is not defined. This uses an external script in SQL Server to run Python code. I declared a variable, marked it as global inside the function so I can access and change its value there, but when I use it inside the if condition I get the error "NameError: name 'executing' is not defined".
--exec TEST
ALTER PROCEDURE TEST
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    EXEC sp_execute_external_script @language = N'Python',
        @script = N'import logging
executing = 0
def pre():
    global executing
    print(executing)
    if (executing == 0):
        print(executing)
        executing = 1
pre()
'
END
GO
I use the hdbcli in python to connect to a hana db.
SQL command execution works via a cursor on my connection:
conn = dbapi.connect(
    address=os.environ['HOST'],
    port=dbconnectport,
    user=dbusername,
    password=dbpasswd,
    databasename=dbconnectdbname
)
...
cursor = conn.cursor()
and execution looks like:
cursor.execute("Select USER_NAME from \"SYS\".\"USERS\" WHERE USER_NAME=\'%s\'" % varcrdbust)
For single queries it works fine. But how can I execute a SQL script containing a lot of special characters?
Via a bash shell I can do this, for example, in this way:
Create a file at OS level:
tee >> $PRIVFILE << EOF
WITH
/*
[NAME]
- HANA_Security_CopyPrivilegesAndRoles_CommandGenerator_2.00.000+
[DESCRIPTION]
- Generates SQL commands that can be used to grant roles and privileges assigned to one user to another user or role
SQL script text here.........
And then execute this via hdbsql with argument -I <pathname/filename>
Is there any alternative in Python? Maybe without creating a file at OS level?
Thanks David
All SAP HANA clients allow the execution of only a single command at a time.
The monitoring script that you chose as an example is in fact just one single command: a relatively large SELECT statement.
So, for every command you want to have executed, you will need to send a separate .execute().
If you want to process a larger "script" file with several commands, you will need to look out for a "command separator" character (like ; in HANA Studio or hdbsql) and build the individual commands from the strings between those separators.
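That splitting idea can be sketched as follows, assuming the script contains no semicolons inside string literals or comments (split_sql_script is a hypothetical helper name; a dedicated SQL parser would be needed for robust handling):

```python
def split_sql_script(script):
    """Naively split a SQL script into individual statements on ';'.

    This breaks on semicolons that appear inside string literals or
    comments; it is only a sketch of the separator approach.
    """
    return [stmt.strip() for stmt in script.split(";") if stmt.strip()]

# Each resulting statement can then be sent with its own execute call:
# for stmt in split_sql_script(script_text):
#     cursor.execute(stmt)
```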
I have a Python script that in the middle of it will have a function where I want to query a DB table and run whatever Python scripts are listed in one of the columns. The Python scripts themselves reside in the same folder as the main Python script that is being executed. For specific reasons I need to keep these script names in a DB table though and call/read them from there, hence my issue.
python_script_table in DB looks like:
TABLE_ID    PYTHON_SCRIPT
1           script1.py
2           script2.py
3           null
Query would be something like:
select * from python_script_table where python_script is not null
At that point I want to execute whatever is returned under PYTHON_SCRIPT (in this case script1.py and script2.py).
I am unsure of the best way to approach this.
You should be able to execute the scripts with something like this:
with open('path/to/script.py') as file:
    script = file.read()
exec(script)
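Putting the two pieces together, a hedged sketch of query-then-execute, using an in-memory SQLite table as a stand-in for the real database and the column names from the question:

```python
import sqlite3

# In-memory SQLite stands in for whatever database actually holds the table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE python_script_table (table_id INTEGER, python_script TEXT)")
conn.executemany(
    "INSERT INTO python_script_table VALUES (?, ?)",
    [(1, "script1.py"), (2, "script2.py"), (3, None)],
)

# Fetch only the rows that actually name a script.
rows = conn.execute(
    "SELECT python_script FROM python_script_table WHERE python_script IS NOT NULL"
).fetchall()
script_names = [name for (name,) in rows]

for name in script_names:
    # Each returned name could then be read and executed, e.g.:
    # with open(name) as f:
    #     exec(f.read())
    print(name)
```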
I don't understand how to test my repositories.
I want to be sure that I really saved the object with all of its parameters into the database, and that when I execute my SQL statement I really receive what I am supposed to.
But I cannot put "CREATE TABLE test_table" in the setUp method of a unittest case because it would be created multiple times (tests of the same test case are run in parallel). So as soon as I create two methods in the same class which need to work on the same table, it won't work (name clash of tables).
Likewise, I cannot put "CREATE TABLE test_table" in setUpModule, because now the table is created once, but since tests are run in parallel, nothing prevents the same object from being inserted multiple times into my table, which breaks the uniqueness constraint on some field.
Likewise, I cannot "CREATE SCHEMA some_random_schema_name" in every method, because I need to globally "SET search_path TO ..." for a given database, so every method run in parallel would be affected.
The only way I see is to issue "CREATE DATABASE" for each test, with a unique name, and establish an individual connection to each database. This looks extremely wasteful. Is there a better way?
Also, I cannot use SQLite in memory because I need to test PostgreSQL.
The best solution for this is to use the testing.postgresql module. This fires up a db in user-space, then deletes it again at the end of the run. You can put the following in a unittest suite - either in setUp, setUpClass or setUpModule - depending on what persistence you want:
import testing.postgresql

def setUp(self):
    self.postgresql = testing.postgresql.Postgresql(port=7654)
    # Get the url to connect to with psycopg2 or equivalent
    print(self.postgresql.url())

def tearDown(self):
    self.postgresql.stop()
If you want the database to persist between/after tests, you can run it with the base_dir option to set a directory - which will prevent its removal after shutdown:
name = "testdb"
port = "5678"
path = "/tmp/my_test_db"
testing.postgresql.Postgresql(name=name, port=port, base_dir=path)
Outside of testing it can also be used as a context manager, where it will automatically clean up and shut down when the with block is exited:
with testing.postgresql.Postgresql(port=7654) as psql:
    # do something here
When I invoke git followed by tab, it auto-completes from a list. I want to write a test.py such that when I type test.py followed by tab, it auto-completes from a given list defined in test.py. Is that possible?
$ git [tab]
add branch column fetch help mv reflog revert stash
am bundle commit filter-branch imap-send name-rev relink rm status
annotate checkout config format-patch init notes remote send-email submodule
apply cherry credential fsck instaweb p4 repack shortlog subtree
archive cherry-pick describe gc log pull replace show tag
bisect clean diff get-tar-commit-id merge push request-pull show-branch whatchanged
blame clone difftool grep mergetool rebase reset stage
The method you are looking for is readline.set_completer. This method interacts with the readline library used by the bash shell. It's simple to implement. Examples: https://pymotw.com/2/readline/
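Note that readline completion applies to input() read inside a running Python program, not to the shell command line itself (the next answer covers the shell side). A minimal sketch, where COMMANDS is a hypothetical list of acceptable words:

```python
import readline

# Hypothetical command list; in a real program this would come from the app.
COMMANDS = ["start", "stop", "status", "restart"]

def completer(text, state):
    """Return the state-th command matching the typed prefix, or None."""
    matches = [c for c in COMMANDS if c.startswith(text)]
    return matches[state] if state < len(matches) else None

readline.set_completer(completer)
readline.parse_and_bind("tab: complete")
# input() calls in this process now tab-complete against COMMANDS.
```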
That's not a feature of the git binary itself; it's a bash completion 'hack' and as such has nothing to do with Python per se. But since you've tagged it as such, let's add a little twist. Let's say we create a script aware of its acceptable arguments - test.py:
#!/usr/bin/env python

import sys

# let's define some sample functions to be called on passed arguments
def f1():
    print("F1 called!")

def f2():
    print("F2 called!")

def f3():
    print("F3 called!")

def f_invalid():  # a simple invalid placeholder function
    print("Invalid command!")

def f_list():  # a function to list all valid arguments
    print(" ".join(sorted(arguments.keys())))

if __name__ == "__main__":  # make sure we're running this as a script
    arguments = {  # a simple argument map, use argparse or similar in a real world use
        "arg1": f1,
        "arg2": f2,
        "arg3": f3,
        "list_arguments": f_list
    }
    if len(sys.argv) > 1:
        for arg in sys.argv[1:]:  # loop through all arguments
            arguments.get(arg, f_invalid)()  # call the mapped or invalid function
    else:
        print("At least one argument required!")
NOTE: Make sure you add an executable flag to the script (chmod +x test.py) so its shebang is used for executing instead of providing it as an argument to the Python interpreter.
Apart from all the boilerplate, the important argument is list_arguments - it lists all available arguments to this script and we'll use this output in our bash completion script to instruct bash how to auto-complete. To do so, create another script, let's call it test-completion.bash:
#!/usr/bin/env bash

SCRIPT_NAME=test.py
SCRIPT_PATH=/path/to/your/script

_complete_script()
{
    local cursor options
    options=$(${SCRIPT_PATH}/${SCRIPT_NAME} list_arguments)
    cursor="${COMP_WORDS[COMP_CWORD]}"
    COMPREPLY=( $(compgen -W "${options}" -- ${cursor}) )
    return 0
}

complete -F _complete_script ${SCRIPT_NAME}
What it does is essentially register the _complete_script function with complete, so that it is called whenever completion on test.py is invoked. The _complete_script function itself first calls list_arguments on test.py to retrieve its acceptable arguments, and then uses compgen to build the structure complete needs in order to print the suggestions.
To test, all you need is to source this script as:
source test-completion.bash
And then your bash will behave as:
$ ./test.py [tab]
arg1 arg2 arg3 list_arguments
And what's more, it's completely controllable from your Python script - whatever gets printed as a list on list_arguments command is what will be shown as auto-completion help.
To make the change permanent, you can simply add the source line to your .bashrc, or if you want a more structured solution you can follow the guidelines for your OS. There are a couple of ways described on the git-flow-completion page, for example. Of course, this assumes you actually have bash-completion installed and enabled on your system, but your git autocompletion wouldn't work if you didn't.
Speaking of git autocompletion, you can see how it's implemented by checking git-completion.bash source - a word of warning, it's not for the fainthearted.