Passing arguments to execfile in python 2.7 - python

I need to call one Python script from another script, and I'm trying to do it with the execfile function. I need to pass a dictionary as an argument to the called script. Is there any way to do that?
import subprocess
from subprocess import Popen

lst = []  # command-line arguments built from the sheet

# read the input data from the xls sheet
ret_lst = T_read("LDW_App05")
for each in ret_lst:
    lst.append(each.replace(' ', '-'))
    lst.append(' ')

result = Popen(['python', 'LDW_App05.py'] + lst, stdin=subprocess.PIPE, stdout=subprocess.PIPE).communicate()
print result
Here, in the above code, I'm reading the input data from the Excel sheet as a list, and I need to pass that list as an argument to the LDW_App05.py file.

Instead of passing complex data as CL arguments, I propose piping your data via STDIN/STDOUT - then you don't need to worry about escaping special, shell-significant characters or exceeding the maximum command-line length.
Typically, a CL-argument-based script might look something like this app.py:
import sys

if __name__ == "__main__":  # ensure the script is run directly
    if len(sys.argv) > 1:  # if at least one CL argument was provided
        print("ARG_DATA: {}".format(sys.argv[1]))  # print it out...
    else:
        print("usage: python {} ARG_DATA".format(__file__))
It clearly expects an argument to be passed and it will print it out if passed from another script, say caller.py:
import subprocess
out = subprocess.check_output(["python", "app.py", "foo bar"]) # pass foo bar to the app
print(out.rstrip()) # print out the response
# ARG_DATA: foo bar
But what if you want to pass something more complex, let's say a dict? Since a dict is a hierarchical structure we'll need a way to present it in a single line. There are a lot of formats that would fit the bill, but let's stick to the basic JSON, so you might have your caller.py set to something like this:
import json
import subprocess

data = {  # our complex data
    "user": {
        "first_name": "foo",
        "last_name": "bar",
    }
}

serialized = json.dumps(data)  # serialize it to JSON
out = subprocess.check_output(["python", "app.py", serialized])  # pass the serialized data
print(out.rstrip())  # print out the response
# ARG_DATA: {"user": {"first_name": "foo", "last_name": "bar"}}
Now if you modify your app.py to recognize the fact that it's receiving JSON as an argument you can deserialize it back to Python dict to access its structure:
import json
import sys

if __name__ == "__main__":  # ensure the script is run directly
    if len(sys.argv) > 1:
        data = json.loads(sys.argv[1])  # parse the JSON from the first argument
        print("First name: {}".format(data["user"]["first_name"]))
        print("Last name: {}".format(data["user"]["last_name"]))
    else:
        print("usage: python {} JSON".format(__file__))
Then if you run your caller.py again you'll get:
First name: foo
Last name: bar
But this is very tedious, and JSON is not very friendly to the CL (behind the scenes Python does a ton of escaping to make it work), not to mention there is a limit (OS- and shell-dependent) on how big a JSON string passed this way can be. It's much better to use the STDIN/STDOUT buffers to pass your complex data between processes. To do so, you'll have to modify your app.py to wait for input on its STDIN, and caller.py to send serialized data to it. So, app.py can be as simple as:
import json

if __name__ == "__main__":  # ensure the script is run directly
    try:
        arg = raw_input()  # get input from STDIN (Python 2.x)
    except NameError:
        arg = input()  # get input from STDIN (Python 3.x)
    data = json.loads(arg)  # parse the JSON received on STDIN
    print("First name: {}".format(data["user"]["first_name"]))  # print to STDOUT
    print("Last name: {}".format(data["user"]["last_name"]))  # print to STDOUT
and caller.py:
import json
import subprocess

data = {  # our complex data
    "user": {
        "first_name": "foo",
        "last_name": "bar",
    }
}

# start the process and pipe its STDIN and STDOUT to this process handle:
proc = subprocess.Popen(["python", "app.py"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)

serialized = json.dumps(data)  # serialize data to JSON
out, err = proc.communicate(serialized)  # send the serialized data to proc's STDIN
print(out.rstrip())  # print what was returned on STDOUT
and if you invoke caller.py you again get:
First name: foo
Last name: bar
But this time there is no limit on the size of the data you're passing to app.py, and you don't have to worry about a format being mangled by shell escaping, etc. You can also keep the 'channel' open and have both processes communicate with each other in a bi-directional fashion - check this answer for an example.
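To give a flavor of that bi-directional mode, here is a minimal sketch (an illustration, not the linked answer); echo_app.py is a hypothetical worker that reads one JSON request per line from STDIN and writes one JSON reply per line to STDOUT:
import json
import subprocess

# echo_app.py is a hypothetical line-based worker (not from the original post)
proc = subprocess.Popen(["python", "echo_app.py"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True)  # text-mode pipes on both 2.x and 3.x

for name in ("foo", "bar"):
    proc.stdin.write(json.dumps({"first_name": name}) + "\n")  # send one request line
    proc.stdin.flush()                                         # push it to the child now
    reply = proc.stdout.readline()                             # block until one reply line
    print(reply.rstrip())

proc.stdin.close()  # signal EOF so the child can exit
proc.wait()
Because both pipes stay open, the parent can keep sending requests and reading replies until it closes the child's STDIN.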

Related

How to insert an *entire* sentence from a shell script into a python script?

I am using a Python script that receives variables from a shell script to transmit data via an API call, the third variable being the commit message. When I try to use it, however, it only prints the first word of the string and stops at the next space. Example:
"Testing this out" becomes "Testing".
I want the entire string to come through. It is passed from a shell script as the third argument and arrives as "3=Testing this out". I currently have the code removing the "3=" part, but the problem described above still happens.
This is what I have so far:
In the shell script, $1, $2, and $3 are user-entered prompts collected by a program called appworx:
python3 /RPS_files/script_deployment/Official_Gitlab.py $1 $2 $3
The python code:
import requests
import string
import json
import sys
from pathlib import Path

url = "hidden for privacy purposes"
headers = {
    "PRIVATE-TOKEN": "hidden for privacy purposes",
    "Content-Type": "application/json"
}

author = sys.argv[1]
author = author.replace("1=", "")
filename = "/RPS_files/script_deployment_nz/" + sys.argv[2]
filename = filename.replace("2=", "")
commit = sys.argv[3]
commit = commit.replace("3=", "")

content = Path(filename).read_text()

sql_substring = ".sql"
loader_substring = ".ctl"
shell_substring = ".sh"

if sql_substring in filename:
    file_path = "New Testing Folder/NZ SQL Folder" + filename
elif loader_substring in filename:
    file_path = "New Testing Folder/NZ Loader Scripts Folder" + filename
elif shell_substring in filename:
    file_path = "New Testing Folder/NZ Shell Scripts" + filename

payload = {
    "id": "9081",
    "branch": "master",
    "author_name": author,
    "author_email": "N/A",
    "committer_name": author,
    "committer_email": "N/A",
    "commit_message": commit,
    "actions": [
        {
            "action": "create",
            "file_path": file_path,
            "content": content
        }
    ]
}

verify = '/RPS_files/script_deployment/cacert.pem'
response = requests.post(url, headers=headers, data=json.dumps(payload), verify=verify)
pretty_json = json.loads(response.text)
print(json.dumps(pretty_json, indent=2))
Hmm, I wrote a simple script to receive data a while ago. I used the popen function from the os module, like this:
from os import popen

stream = popen("command to run")  # run the command in a shell
data = stream.read()              # read its entire stdout
print(data)
For me it did the trick of capturing the complete shell output.
You can concatenate all the arguments from number 3 to the end, separated by spaces, like this:
commit = " ".join(sys.argv[3:])
Or you can simply pass the commit message in quotes:
python pgm.py arg1 arg2 'commit message'
The issue here is that space is used to separate arguments on the prompt - and that INCLUDES spaces in variables.
The reason is that variables get replaced before the command gets executed, so it's effectively running:
python3 /RPS_files/script_deployment/Official_Gitlab.py arg1 arg2 Testing this out
As you can see, $3 is just the word "Testing" - the rest is in $4 and $5.
If you quote your variables, the shell knows to treat each one as a single argument, which is what you want. A good habit to get into is to ALWAYS quote your variables unless you explicitly want them treated otherwise - which is almost never (and whenever I do, I leave a comment explaining why, because future me is very likely to quote them again otherwise).
In this case I would be running:
python3 /RPS_files/script_deployment/Official_Gitlab.py "$1" "$2" "$3"
Another reason to quote them all: if $1 is empty for any reason, $2 suddenly becomes the first argument and everything gets shifted forward a place. If it's in quotes, however, even if it's empty it still gets passed as "", so at least the emptiness is preserved.
As a bonus, you can use bash's prefix-removal feature to strip the leading "3=" (if it exists) at the same time:
python3 /RPS_files/script_deployment/Official_Gitlab.py "${1#1=}" "${2#2=}" "${3#3=}"
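If you want to see exactly what the shell hands over, a tiny probe script (hypothetical, not part of the original post) that just echoes sys.argv makes the difference visible:
# probe_args.py - prints every argument exactly as the shell delivered it
import sys

for i, arg in enumerate(sys.argv[1:], start=1):
    print("argv[{}] = {!r}".format(i, arg))
Running python3 probe_args.py $1 $2 $3 versus python3 probe_args.py "$1" "$2" "$3" shows the unquoted form splitting "3=Testing this out" into three separate arguments.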

How to pass a function, list, among others as args to argparse in Python?

I have the following script which works, and I'm trying to avoid using '__main__' inside the same module:
def download():
    urls = [
        'https://ipleak.net/json',
        'https://httpbin.org/get'
    ] * 4
    downloads = asyn.init_download(urls, "json")
    return downloads

def pprint_json(d):
    print(json.dumps(d, indent=4, sort_keys=True))

def multiprocess_list(n_pools, func, list):
    executor = concurrent.futures.ProcessPoolExecutor(n_pools)
    futures = [executor.submit(func, item) for item in list]
    concurrent.futures.wait(futures)

if __name__ == '__main__':
    multiprocess_list(4, pprint_json, download())
The download() function, as its name suggests, downloads the URLs asynchronously using asyncio and aiohttp.
I want to execute multiprocess_list from anywhere 'globally':
def multiprocess_list(n_pools, func, list):
    executor = concurrent.futures.ProcessPoolExecutor(n_pools)
    futures = [executor.submit(func, item) for item in list]
    concurrent.futures.wait(futures)

def main(args):
    parser = argparse.ArgumentParser(description="Multiprocessing a list.")
    parser.add_argument("-n", "--n_pools", type=int, required=True)
    parser.add_argument("-f", "--function", required=True)
    parser.add_argument("-l", "--list", required=True)
    args = parser.parse_args(args)
    multiprocess_list(args.n_pools, args.function, args.list)

if __name__ == '__main__':
    import sys
    main(sys.argv[1:])
Then I import the above module into any other Python file and try to run it like this (it doesn't work, though):
def download():
    urls = [
        'https://ipleak.net/json',
        'https://httpbin.org/get'
    ] * 4
    downloads = asyn.init_download(urls, "json")
    return downloads

def pprint_json(d):
    print(json.dumps(d, indent=4, sort_keys=True))

mp.main(["-n", 4, "-f", pprint_json, "-l", download()])
This gives me an error:
if not arg_string[0] in self.prefix_chars:
TypeError: 'int' object is not subscriptable
So I want to pass one argument that is a function to be run, and another that is a list (or a function which returns a list, like download()).
Can this be done in python?
If yes, can someone please explain how?
Is my approach correct or am I completely losing it?
NB: My interpreter uses Python 3.8 and I'm kind of new to Python, so please bear with me a little.
Argparse expects a list of strings and may choke on other types. Does it work if you quote the 4?
mp.main(["-n", "4", "-f", pprint_json, "-l", download()])
pprint_json and the result of download() should also be strings for this to work.
This approach of creating a substitute argv is not always crazy, but in your case, why call main() to parse the args for you if you already have them? Why not call multiprocess_list() directly with appropriate arguments?
The reason for using main() is because I might add more functions in the future apart from multiprocess_list()
Then you can call those directly when invoking the script from Python, instead of creating another argument to select one of them. You can still use main() to parse the args from the command line.
The next error is TypeError: 'function' object is not subscriptable
Ah, that wasn't a string either. In that case I don't expect it to work from the command line either. Did you get that much working yet?
You could try something like
mp.main(["-n", "4", "-f", "pprint_json", "-l", download()])
But then main() has to be able to interpret that function name as a function somehow. Maybe something like
multiprocess_list(args.n_pools, getattr(foo, args.function), args.list)
where foo is the module where you keep the selectable functions.
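Putting those suggestions together, here is a minimal sketch of what main() could look like; it assumes the selectable functions live in a hypothetical module funcs and that the list arrives as a JSON string (both are assumptions, not part of the original post):
import argparse
import concurrent.futures
import json

import funcs  # hypothetical module holding pprint_json and other selectable functions

def multiprocess_list(n_pools, func, items):
    executor = concurrent.futures.ProcessPoolExecutor(n_pools)
    futures = [executor.submit(func, item) for item in items]
    concurrent.futures.wait(futures)

def main(args):
    parser = argparse.ArgumentParser(description="Multiprocessing a list.")
    parser.add_argument("-n", "--n_pools", type=int, required=True)
    parser.add_argument("-f", "--function", required=True)  # a function *name*, i.e. a string
    parser.add_argument("-l", "--list", required=True)      # a JSON-encoded list, i.e. a string
    ns = parser.parse_args(args)
    func = getattr(funcs, ns.function)  # resolve the name to a callable
    items = json.loads(ns.list)         # turn the string back into a list
    multiprocess_list(ns.n_pools, func, items)
A call from another file would then look like mp.main(["-n", "4", "-f", "pprint_json", "-l", json.dumps(download())]), assuming download() returns something JSON-serializable.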

How to obtain a python command line argument if only it's a string

I'm making my own Python CLI and I want to pass only string arguments.
import sys
import urllib, json
# from .classmodule import MyClass
# from .funcmodule import my_function

def main():
    args = sys.argv[1:]
    # print('count of args :: {}'.format(len(args)))
    # for arg in args:
    #     print('passed argument :: {}'.format(arg))

    # always returns true even if i don't pass the argument as a "String"
    if isinstance(args[0], str):
        print('JSON Body:')
        url = args[0]
        response = urllib.urlopen(url)
        data = json.loads(response.read())
        print(data)

    # my_function('hello world')
    # my_object = MyClass('Thomas')
    # my_object.say_name()

if __name__ == '__main__':
    main()
I execute it as api "url" and get the correct output. However, when I try to execute api url without passing the URL as a quoted string, my output is a little odd.
How can i accept only String arguments?
What I've tried so far:
I found this solution here, but it didn't work for me (it couldn't recognize the join() function).
The problem isn't a Python issue. It's just that your URL contains a &, and in a Linux/Unix shell that asks for your command to be run in the background (and the data after the & is dropped). That explains the [1]+ Done output, with your truncated command line.
So you have to quote your argument to keep it from being interpreted (or escape the ampersand as \&). There's no way around this from a Un*x shell (it would work unquoted from a Windows shell, for instance).
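If the command line is ever built by another program rather than typed by hand, you can sidestep the shell's interpretation of & entirely by passing the URL as a list element, or by quoting it when you really need a shell string; a small sketch, assuming a hypothetical api executable on PATH (shlex.quote is Python 3; on Python 2 the equivalent is pipes.quote):
import shlex
import subprocess

url = "https://example.com/data?a=1&b=2"  # hypothetical URL containing &

# list form: no shell is involved, so the & is passed through literally
subprocess.check_call(["api", url])

# if you really need a single shell string, quote the argument first
line = "api " + shlex.quote(url)
subprocess.check_call(line, shell=True)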

How to capture output of a command run in python3 in python2?

I have a file which connects to a database and fetches the result. The file must be run using Python 3, but my project uses Python 2.7, so I run the file on the command line using the subprocess module. Here is how I call the file:
import subprocess
import ast

def execute_python3(param):
    param = param.replace("\\", "")
    param = "\"" + param + "\""
    cmd = "python3 " + "get_db_result.py" + " " + param
    result = subprocess.check_output(cmd, shell=True)
    return ast.literal_eval(result)

execute_python3(sql_query)
Here, in the command, I am passing the SQL query to the get_db_result.py file.
The get_db_result.py file looks something like this
import sys

def get_result():
    param = sys.argv[1]
    '''
    Logic to get result from db
    '''
    result = db_output
    print(result)

if __name__ == "__main__":
    get_result()
Now the issue is that when I fetch the output from the db, I have to print it for it to be captured by the subprocess module. This makes it difficult for the program to parse the output for further work. For example, when I receive output like this
"[(u'Delhi', 20199330), (u'Mumbai', 134869470), (u'Kolkata', 6678446)]"
This is a string representation of a list of tuples, which can be converted to an actual list of tuples with something like ast.literal_eval(result).
But sometimes I get output like this
"[(datetime.date(2019, 5, 27), 228.168093587), (datetime.date(2019, 5, 28), 228.834493641)]"
Here ast doesn't understand datetime. Even json.loads() doesn't work on this.
How can I capture the output of the file without having to use print, and simply return it back through subprocess as it is? Is that even possible?
You need to serialize and deserialize the data on both ends. The simplest solution would be to use Python's pickle module and hope that the types serialized on the Python 3 end are similar enough to those on the deserializing Python 2 end. You need to set the protocol on the sending end to a version understood by the receiving end.
Receiver, with a safer subprocess call (no shell process in between):
#!/usr/bin/env python
import pickle
import subprocess

def execute_python3(param):
    result = subprocess.check_output(['python3', 'get_db_result.py', param])
    return pickle.loads(result)

def main():
    execute_python3(sql_query)

if __name__ == '__main__':
    main()
Sender, explicitly choosing a pickle protocol still understood by Python 2:
#!/usr/bin/env python3
import sys
import pickle

def get_result():
    param = sys.argv[1]
    '''
    Logic to get result from db
    '''
    result = db_output
    pickle.dump(result, sys.stdout.buffer, protocol=2)

if __name__ == '__main__':
    get_result()
If this doesn't work because of differences in the (de)serialized objects between Python 2 and 3, you have to fall back to explicitly (de)serializing the data, for example as JSON, as suggested in a comment by Jay.
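For completeness, a minimal JSON-based sketch of the sending side (an illustration, not from the answer): a default hook converts dates to ISO strings, so the Python 2 caller can simply json.loads() the captured output and work with plain lists and strings:
#!/usr/bin/env python3
import sys
import json
import datetime

def to_jsonable(obj):
    # fallback for types json doesn't know how to encode; here we only expect dates
    if isinstance(obj, (datetime.date, datetime.datetime)):
        return obj.isoformat()
    raise TypeError("not JSON serializable: %r" % (obj,))

db_output = [(datetime.date(2019, 5, 27), 228.168093587)]  # stand-in for the real query result
json.dump(db_output, sys.stdout, default=to_jsonable)
Note that JSON turns tuples into lists and dates into strings, so the Python 2 side receives lists and, if it needs real dates again, has to parse the ISO strings back itself.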

How to use subprocess to execute programs with Python

Hello, I am using the subprocess.Popen() class and I can successfully execute commands on the terminal, but when I try to execute programs, for example a script written in Python, and I try to pass arguments, it fails.
This is the code:
argPath = "test1"
args = open(argPath, 'w')
if self.extract.getByAttr(self.block, 'name', 'args') != None:
args.write("<request>"+self.extract.getByAttr(self.block, 'name', 'args')[0].toxml()+"</request>")
else:
args.write('')
car = Popen(shlex.split('python3.1 /home/hidura/webapps/karinapp/Suite/ForeingCode/saveCSS.py', stdin=args, stdout=subprocess.PIPE, stderr=subprocess.PIPE))
args.close()
dataOut = car.stdout.read().decode()
log = car.stderr.read().decode()
if dataOut!='':
return dataOut.split('\n')
elif log != '':
return log.split('\n')[0]
else:
return None
And the code from the saveCSS.py
from xml.dom.minidom import parseString
import os
import sys

class savCSS:
    """This class has to save
    the changes on the css file.
    """
    def __init__(self, args):
        document = parseString(args)
        request = document.firstChild
        address = request.getElementsByTagName('element')[0]
        newdata = request.getElementsByTagName('element')[1]

        cssfl = open("/webapps/karinapp/Suite/"+address.getAttribute('value'), 'r')
        cssData = cssfl.read()
        cssfl.close()

        dataCSS = ''
        for child in newdata.childNodes:
            if child.nodeType == 3:
                dataCSS += child.nodeValue

        nwcssDict = {}
        for piece in dataCSS.split('}'):
            nwcssDict[piece.split('{')[0]] = piece.split('{')[1]

        cssDict = {}
        for piece in cssData.split('}'):
            cssDict[piece.split('{')[0]] = piece.split('{')[1]

        for key in nwcssDict:
            if key in cssDict == True:
                del cssDict[key]
            cssDict[key] = nwcssDict[key]

        result = ''
        for key in cssDict:
            result += key+"{"+cssDict[key]+"}"

        cssfl = open(cssfl.name, 'a')
        cssfl.write(result)
        cssfl.close()

if __name__ == "__main__":
    savCSS(sys.stdin)
BTW: There's no output...
Thanks in advance.
OK, I'm ignoring that your code doesn't run (neither the script you try to execute nor the main script actually works), and looking at what you are doing:
It does execute the script, or you would get an error, like "bin/sh: foo: not found".
Also you seem to be using an open file as stdin after you have written to it. That doesn't work.
>>> thefile = open('/tmp/foo.txt', 'w')
>>> thefile.write("Hej!")
4
>>> thefile.read()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IOError: not readable
You need to close the file, and reopen it as a read file. Although better in this case would be to use StringIO, I think.
To talk to the subprocess, you use communicate(), not read() on the pipes.
I'm not sure why you are using shell=True here; it doesn't seem necessary. I would remove it if I were you, as it only complicates things unless you actually need the shell to do something.
Specifically you should not split the command into a list when using shell=True. What your code is actually doing, is starting a Python prompt.
You should rather use communicate() instead of .stdout.read().
And the code you posted isn't even correct:
Popen(shlex.split('python3.1 /home/hidura/webapps/karinapp/Suite/ForeingCode/saveCSS.py', stdin=args, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
There's a missing parenthesis, and from the stdout/stderr parameters, it's clear that you get no output to the console, but rather into pipes (if that's what you meant by "There's no output...").
Your code will actually work on Windows, but on Linux you must remove the shell=True parameter. You should always omit that parameter if you provide the full command line yourself (as a sequence).
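Putting that advice together, a minimal sketch of a corrected caller could look like this (the script path is taken from the question; the XML string and everything else are placeholders/assumptions):
import subprocess

request = "<request>...</request>"  # the XML built from self.extract, as in the question

# plain argument list, no shell=True, and no temporary file for stdin
car = subprocess.Popen(
    ["python3.1", "/home/hidura/webapps/karinapp/Suite/ForeingCode/saveCSS.py"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# communicate() writes the request to the child's stdin, closes it,
# and reads both pipes to completion
dataOut, log = car.communicate(request.encode())
dataOut, log = dataOut.decode(), log.decode()
On the saveCSS.py side you would then read the request with sys.stdin.read() (a string) before handing it to parseString, rather than passing the sys.stdin file object itself.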
