Set Input Variable at the Start of a Python Script

I'm working on a Python script on Red Hat that connects to a SQL database and runs queries against a specific row. What I am attempting to do is set the row that I want to query as an input variable that gets passed into the script when I start it. For example, when I start the script I would like to just run the command:
./SQLQuery.py Row_To_Query = Row_Name
For a similar script that I have in VBScript I would run the wscript command like this:
wscript SQLQuery.vbs /name:Row_Name
Then the row name would get passed into the script and the relevant data used as the script progresses.
My end goal is to run the Python script as a task every 10 minutes or so and pull the relevant data from a specific row, while also having the ability to specify which row the SQL queries are run against without having to edit the script each time I want to run it.

You can use argparse to parse command-line arguments given in a standard Unix-like format:
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--row-to-query', type=str, help='row to query')
namespace = parser.parse_args()
row_name = namespace.row_to_query
...
You'd then run the program like:
./SQLQuery.py --row-to-query=some_row_name
or
./SQLQuery.py --row-to-query some_row_name
Both invocations should be equivalent.
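If you'd rather keep the invocation closer to your original ./SQLQuery.py Row_Name style, argparse also supports positional arguments; a minimal sketch:
import argparse

parser = argparse.ArgumentParser()
# positional argument: supplied without any --flag prefix
parser.add_argument('row_to_query', help='row to query')
namespace = parser.parse_args()
row_name = namespace.row_to_query
You'd then run it as ./SQLQuery.py Row_Name.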

Related

Dynamically running Python scripts pulled from SQL query

I have a Python script that, partway through, calls a function where I want to query a DB table and run whatever Python scripts are listed in one of its columns. The Python scripts themselves reside in the same folder as the main Python script being executed. For specific reasons I need to keep these script names in a DB table, though, and call/read them from there, hence my issue.
python_script_table in DB looks like:
TABLE_ID    PYTHON_SCRIPT
1           script1.py
2           script2.py
3           null
Query would be something like:
select * from python_script_table where python_script is not null
At that point I want to execute whatever is returned under PYTHON_SCRIPT (in this case script1.py and script2.py).
I am unsure of the best way to approach this.
You should be able to execute the scripts with something like this:
with open('path/to/script.py') as file:
    script = file.read()
exec(script)
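Putting it together with the query from the question, a sketch assuming a DB-API connection (sqlite3 here as a stand-in; swap in your actual database driver):
import sqlite3

# any DB-API 2.0 connection works; sqlite3 is just a stand-in
conn = sqlite3.connect('mydb.sqlite')
cursor = conn.cursor()
cursor.execute(
    "select python_script from python_script_table where python_script is not null"
)

for (script_name,) in cursor.fetchall():
    # the scripts live alongside the main script, per the question
    with open(script_name) as f:
        exec(f.read())
Bear in mind that exec runs arbitrary code, so only do this if you trust whatever is stored in that table.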

Identifying the currently running script

Say I have a config.py that manages the command-line arguments:
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('common_argument')
args = parser.parse_args()
input_common = args.common_argument
This file is imported by many other scripts that I execute in my project. However, among them are scripts that expect additional arguments, e.g. special_file.py. How can I add these arguments?
Alternative 1
In config.py, I identify the script that is importing it in order to add the additional argument. If there were a variable like __importing_file__, I could do
if __importing_file__ == 'special_file':
    parser.add_argument('special_argument')
However, I couldn't find out how to identify the currently running script. Is it possible?
Alternative 2
In my special_file.py I can simply add another argument and parse again, i.e.
from config import *
parser.add_argument('special_argument')
args = parser.parse_args()
input_special = args.special_argument
However, Python does not recognize the special_argument.
Is there a solution to this problem?
What you are looking for is __file__. This is, however, NOT to be mistaken for sys.argv[0].
sys.argv[0] gives the module's entry point, i.e. where the application was started from. If this were a Django app, sys.argv[0] would give manage.py, while __file__ would return the path of the currently running script.
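Applied to Alternative 1, one way to approximate the __importing_file__ check inside config.py is to inspect sys.argv[0], since it names the script that was launched even from inside an imported module; a sketch:
import argparse
import os
import sys

parser = argparse.ArgumentParser()
parser.add_argument('common_argument')

# sys.argv[0] is the launched script, even when this code runs in an import
if os.path.basename(sys.argv[0]) == 'special_file.py':
    parser.add_argument('special_argument')

args = parser.parse_args()
input_common = args.common_argument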

How to grab files generated by a subprocess?

I want to run some command-line scripts from within my Python program. These scripts generate some output files. I want to grab those output files from the subprocess call as objects in my Python program, while canceling the generation of files on disk. The problem is I don't know how to do it, or whether that is even possible.
A simple example would look like this:
# foo.py
fout1 = open("temp1.txt", "w")
fout2 = open("temp2.txt", "w")
fout1.write("fout1")
fout2.write("fout2")
fout1.close()
fout2.close()

# test.py
import subprocess
process = subprocess.Popen(["python", "foo.py"], ????????)  # what arguments to use to grab temp1.txt and temp2.txt?
print(process.??????)  # how to access those files?
I am familiar with subprocess.Popen so that is what the example code uses, but I am open to the use of other modules too if they could do it.
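If modifying foo.py is an option, one common way to keep everything in memory is to have the child write to standard output instead of to files, and capture that output in the parent; a minimal sketch:
# foo.py -- write to stdout instead of creating files on disk
import sys

sys.stdout.write("fout1\n")
sys.stdout.write("fout2\n")

# test.py -- capture the child's output as an in-memory string
import subprocess

result = subprocess.run(
    ["python", "foo.py"],
    capture_output=True,  # collect stdout/stderr rather than inheriting them
    text=True,            # decode bytes to str
    check=True,           # raise if the child exits with a nonzero status
)
print(result.stdout)  # "fout1\nfout2\n"
This only distinguishes the two files if the child labels its output somehow (separate streams, delimiters, etc.), so it is a sketch of the idea rather than a drop-in replacement.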

Python values to bash line on a remote server

So I have a Python script that connects to client servers and gets some data that I need.
Right now the bash script on the client side needs input like the line below, and this works:
client.exec_command('/apps./tempo.sh 2016 10 01 02 03')
Now I'm trying to take user input from my Python script and pass it to the remotely called bash script, and that's where my problem lies. This is what I tried, with no luck:
import sys
client.exec_command('/apps./tempo.sh', str(sys.argv))
I believe you are using Paramiko, which you should tag or mention in your question.
The basic problem I think you're having is that you need to include those arguments inside the string, i.e.
client.exec_command('/apps./tempo.sh %s' % str(sys.argv))
otherwise they get applied to the other parameters of exec_command. I also suspect your original example is not quite accurate about how it works.
Just out of interest, have you looked at Fabric (http://www.fabfile.org)? It has lots of very handy functions like run, which will run a command on a remote server (or lots of remote servers!) and return the response.
It also gives you lots of protection by wrapping popen and paramiko for the SSH login etc., so it can be much more secure than trying to build web services or other things.
You should always be wary of injection attacks. I'm unclear how you are injecting your variables, but if a user calls your script with something like python runscript "; rm -rf /", that would cause very bad problems for you. It would instead be better to have 'options' on the command, which are programmed in, drastically limiting the user's input, or at least to have a lot of protection around the input variables. Of course, if this is only for you (or trained people), then it's a little easier.
I recommend using paramiko for the ssh connection.
import paramiko
ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(server, username=user, password=password)
...
ssh_client.close()
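Between connect and close you can run commands directly; Paramiko's exec_command returns the three standard streams, so the remote output comes back as file-like objects:
# run the remote script and read its stdout in the parent
stdin, stdout, stderr = ssh_client.exec_command('/apps./tempo.sh 2016 10 01 02 03')
print(stdout.read().decode())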
And if you want to simulate a terminal, as if a user were typing:
import time

chan = ssh_client.invoke_shell()
chan.send('PS1="python-ssh:"\n')

def exec_command(cmd):
    """Send an ssh command, wait for it to run, and return the output."""
    prompt = 'python-ssh:'  # the command-line prompt in the ssh terminal
    buff = ''
    chan.send(str(cmd) + '\n')
    while not chan.recv_ready():
        time.sleep(1)
    while not buff.endswith(prompt):
        buff += chan.recv(1024).decode()
    return buff[:-len(prompt)]  # drop the trailing prompt
Example usage: exec_command('pwd')
And the result will be returned to you via ssh.
Assuming that you are using paramiko, you need to send the command as a string. It seems that you want to pass the command-line arguments given to your Python script on to the remote command, so try this:
import sys
command = '/apps./tempo.sh'
args = ' '.join(sys.argv[1:]) # all args except the script's name!
client.exec_command('{} {}'.format(command, args))
This will collect all the command-line arguments passed to the Python script, except the first one, which is the script's file name, and build a space-separated string. That argument string is then concatenated with the bash script command and executed remotely.
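Given the injection concerns raised above, it may be worth quoting each argument before joining them; a sketch using shlex.quote from the standard library:
import shlex
import sys

command = '/apps./tempo.sh'
# quote each argument so shell metacharacters in user input are neutralized
args = ' '.join(shlex.quote(arg) for arg in sys.argv[1:])
client.exec_command('{} {}'.format(command, args))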

Recording the Python script execution method

I have composed an ArcPy script which is run via a Windows scheduler. The same script is loaded into a script tool so a user can run the process manually. I've used GetParameterAsText, with or's and not's, to hard-wire the standard variables if they are not specified.
ReportFolder = arcpy.GetParameterAsText(0)
if ReportFolder == '#' or not ReportFolder:
    ReportFolder = "C:\\Data\\GIS"
The process runs, and while doing so it writes to a text-file log, for example:
txtFile.write("= For ArcGIS 10.3.1: Date: " + str(timed)), txtFile.write('\n')
I'd like to record which method was used to execute the script: was it via the Windows scheduler, by the script tool via ArcGIS, or by a Python client like PyScripter?
Is anyone aware of some form of OS or environment information that can be queried from Python for this?
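One simple option, following the GetParameterAsText pattern above, is to have each caller identify itself through an extra parameter: the scheduled task's command line passes one value, the script tool another, and anything else falls back to a default. A sketch (the parameter index and values are assumptions):
# parameter 1 is hypothetical; use the next free index in your tool
caller = arcpy.GetParameterAsText(1)
if caller == '#' or not caller:
    caller = 'interactive'  # e.g. PyScripter or another Python client

txtFile.write("= Executed via: " + caller)
txtFile.write('\n')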
