I tried running the myview command and it ran successfully, but I am stuck after this step.
I have to choose from a list of views by passing in a number, for example <1>, <2> .. <10>. But when I execute the script it just shows me the options on the terminal window instead.
Which command should I be using? After this I have to run a bunch of other commands as well, and I basically have to execute them in a particular order, so each command should wait for the previous one to finish. Thanks in advance for the help.
This is what I have so far.
#!/usr/bin/python
import sys
from subprocess import call

# Echo the arguments this script was invoked with.
for arg in sys.argv:
    print arg

# Runs myview and waits for it to finish.
call(["myview"])
Check out the doc for subprocess. I think the API call you need is check_call.
From pydoc subprocess:
try:
    retcode = call("mycmd" + " myarg", shell=True)
    if retcode < 0:
        print >>sys.stderr, "Child was terminated by signal", -retcode
    else:
        print >>sys.stderr, "Child returned", retcode
except OSError, e:
    print >>sys.stderr, "Execution failed:", e
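If you want each command to wait for the previous one and stop on failure, check_call is a compact way to do it. Here's a minimal sketch using the current Python interpreter as a stand-in for the real commands:

```python
import subprocess
import sys

# Each command runs only after the previous one finishes; check_call
# raises CalledProcessError if a command exits with a non-zero status.
commands = [
    [sys.executable, "-c", "print('step 1')"],
    [sys.executable, "-c", "print('step 2')"],
]

for cmd in commands:
    subprocess.check_call(cmd)  # blocks until this command completes
print("all steps finished")
```

Because check_call raises on failure, a later command is never reached if an earlier one fails.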
I have a central python script that calls various other python scripts and looks like this:
os.system("python " + script1 + args1)
os.system("python " + script2 + args2)
os.system("python " + script3 + args3)
Now, I want to exit from my central script if any of the sub-scripts encounter an error.
What happens with the current code is that if, say, script1 encounters an error, the console will display that error and then the central script will move on to calling script2, and so on.
I want to display the encountered error and immediately exit my central code.
What is the best way to do this?
Overall this is a terrible way to execute a series of commands from within Python. However here's a minimal way to handle it:
#!python
import os
import sys

for script, args in some_tuple_of_commands:
    exit_code = os.system("python " + script + args)
    if exit_code != 0:
        print("Error %d running 'python %s %s'" % (
            exit_code, script, args), file=sys.stderr)
        sys.exit(exit_code)
But, honestly this is all horrible. It's almost always a bad idea to concatenate strings and pass them to your shell for execution from within any programming language.
Look at the subprocess module for much more sane handling of subprocesses in Python.
Also consider trying the sh or the pexpect third party modules depending on what you're trying to do with input or output.
You can try subprocess:

import subprocess
import sys

try:
    output = subprocess.check_output("python test.py", shell=True)
    print(output)
except subprocess.CalledProcessError as e:
    # check_output raises CalledProcessError when the command exits non-zero
    print(e)
    sys.exit(e.returncode)
print("hello world")
I don't know if it's ideal for you, but enclosing these commands in a function seems like a good idea to me.
I am using the fact that when a process exits with an error, os.system(process) returns a non-zero value (256 on Unix for an exit status of 1), and 0 otherwise.
def runscripts():
    if os.system("python " + script1 + args1): return -1  # script1 failed; stop here
    if os.system("python " + script2 + args2): return -2  # script2 failed
    if os.system("python " + script3 + args3): return -3  # script3 failed
    return 0

runscripts()

# or if you want to exit the main program
if runscripts(): sys.exit(1)
Invoking the operating system like that is a security breach waiting to happen. One should use the subprocess module, because it is more powerful and does not invoke a shell (unless you specifically tell it to). In general, avoid invoking shell whenever possible (see this post).
You can do it like this:
import subprocess
import sys

# create a list of commands
# each command to subprocess.run must be a list of arguments, e.g.
# ["python", "echo.py", "hello"]
cmds = [("python " + script + " " + args).split()
        for script, args in [(script1, args1),
                             (script2, args2),
                             (script3, args3)]]
def captured_run(arglist):
    """Run a subprocess and return the output and returncode."""
    proc = subprocess.run(  # PIPE captures the output
        arglist, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    return proc.stdout, proc.stderr, proc.returncode

for cmd in cmds:
    stdout, stderr, rc = captured_run(cmd)
    # do whatever with stdout, stderr (note that they are bytestrings)
    if rc != 0:
        sys.exit(rc)
If you don't care about the output, just remove the subprocess.PIPE stuff and return only the returncode from the function. You may also want to add a timeout to the execution, see the subprocess docs linked above for how to do that.
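As a sketch of the timeout option mentioned above (the inline sleep is just a stand-in for a slow command):

```python
import subprocess
import sys

try:
    # timeout is in seconds; the child is killed and TimeoutExpired is
    # raised if the process runs longer than that.
    proc = subprocess.run(
        [sys.executable, "-c", "import time; time.sleep(10)"],
        stdout=subprocess.PIPE, stderr=subprocess.PIPE, timeout=1)
except subprocess.TimeoutExpired:
    print("command timed out", file=sys.stderr)
```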
This code runs a shell command and prints its output in real time:
import subprocess

process = subprocess.Popen(
    'yt-dlp https://www.youtube.com/watch?v=spvPvXXu36A',
    shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

# Print each line of output as it is produced.
while True:
    output = process.stdout.readline().decode()
    if output == '' and process.poll() is not None:
        break
    if output:
        print(output.strip())

rc = process.poll()
if rc == 0:
    print("Command succeeded.")
else:
    print("Command failed.")
You can use the subprocess module to do all that kind of stuff
I've included a small example below
from subprocess import call
call(['youtube-dl', 'https://www.youtube.com/watch?v=PT2_F-1esPk'])
Python docs for subprocess
You are calling the executable --youtube-dl, which probably does not exist.
If --youtube-dl is a command you can type at the cmd prompt, you could try subprocess.check_output(['--youtube-dl', some_url], shell=True); then cmd.exe (at least on Windows) will get invoked.
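For illustration, here is a minimal check_output sketch with a placeholder command standing in for the real one:

```python
import subprocess
import sys

# check_output runs the command, waits for it to finish, and returns its
# stdout as bytes; it raises CalledProcessError on a non-zero exit status.
out = subprocess.check_output([sys.executable, "-c", "print('hello')"])
print(out.decode().strip())  # hello
```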
I'm currently writing a shell script which is interfacing with numerous python scripts. In one of these Python scripts I'm calling grass without starting it explicitly. When I run my shell script I have to hit enter at the point where I call grass (this is the code I got from the official working with grass page):
startcmd = grass7bin + ' -c ' + file_in2 + ' -e ' + location_path
print startcmd
p = subprocess.Popen(startcmd, shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if p.returncode != 0:
    print >>sys.stderr, 'ERROR: %s' % err
    print >>sys.stderr, 'ERROR: Cannot generate location (%s)' % startcmd
    sys.exit(-1)
else:
    print 'Created location %s' % location_path
gsetup.init(gisbase, gisdb, location, mapset)
My problem is that I want this process to run automatically, without me having to press enter every time in between!
I have already tried numerous options such as pexpect and uinput (which doesn't work that well because of problems with the module). I know that on Windows you have the msvcrt module, but I am working with Linux... any ideas how to solve this problem?
Use the pexpect library for expect functionality.
Here's an example of interaction with an application that requires the user to type in a password:
import pexpect

child = pexpect.spawn('your command')
child.expect('Enter password:')
child.sendline('your password')
child.expect(pexpect.EOF, timeout=None)
cmd_show_data = child.before
cmd_output = cmd_show_data.split('\r\n')
for data in cmd_output:
    print data
I finally found an easy and fast way to simulate a key press: just install xdotool and then use the following code to simulate, for example, the Enter key:
import subprocess
subprocess.call(["xdotool","key","Return"])
I am using Python 2.7 on a Windows 7 64-bit machine.
I am calling an external application within my Python code as:
os.startfile("D:\\dist\\NewProcess.exe")
This application (I used py2exe to convert a Python script into an exe) uses two strings, which need to be passed from the parent process.
So, how do I pass these two strings, and how do I get them in the NewProcess.py file (perhaps via sys.argv)?
You may try this:
import subprocess
import sys

try:
    retcode = subprocess.call("D:\\dist\\NewProcess.exe " + sys.argv[1] + " " + sys.argv[2], shell=True)
    if retcode < 0:
        print >>sys.stderr, "Child was terminated by signal", -retcode
    else:
        print >>sys.stderr, "Child returned", retcode
except OSError as e:
    print >>sys.stderr, "Execution failed:", e
sys.argv[0] is the script name, and sys.argv[1] ... sys.argv[n] are the script arguments. The example above is adapted from the subprocess module documentation: https://docs.python.org/2/library/subprocess.html
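A minimal sketch of both sides, using the current interpreter and an inline child as a stand-in for D:\dist\NewProcess.exe (the child simply echoes the two strings it receives via sys.argv):

```python
import subprocess
import sys

# Child side (what NewProcess.py would do): read the two strings from
# sys.argv[1] and sys.argv[2]. Here it is inlined with -c for brevity.
child_code = "import sys; print(sys.argv[1], sys.argv[2])"

# Parent side: pass the two strings as command-line arguments.
retcode = subprocess.call([sys.executable, "-c", child_code,
                           "first string", "second string"])
print("child returned", retcode)
```

Passing the arguments as a list avoids quoting problems that string concatenation with shell=True can cause when a string contains spaces.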
The problem I'm having is that Eclipse/PyCharm interprets the results of subprocess's Popen() differently from a standard terminal. All are using Python 2.6.1 on OS X.
Here's a simple example script:
import subprocess
import sys

args = ["/usr/bin/which", "git"]
print "Will execute %s" % " ".join(args)
try:
    p = subprocess.Popen(args, shell=False,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # communicate() returns a (stdout, stderr) tuple, so ..
    ret = p.communicate()
    if ret[0] == '' and ret[1] != '':
        msg = "cmd %s failed: %s" % (fullcmd, ret[1])
        if fail_on_error:
            raise NameError(msg)
except OSError, e:
    print >>sys.stderr, "Execution failed:", e
With a standard terminal, the line:
ret = p.communicate()
gives me:
(Pdb) print ret
('/usr/local/bin/git\n', '')
Eclipse and PyCharm give me an empty tuple:
ret = {tuple} ('','')
Changing the shell= value does not solve the problem either. On the terminal, setting shell=True, and passing the command in altogether (i.e., args=["/usr/bin/which git"]) gives me the same result: ret = ('/usr/local/bin/git\n', ''). And Eclipse/PyCharm both give me an empty tuple.
Any ideas on what I could be doing wrong?
Ok, found the problem, and it's an important thing to keep in mind when using an IDE in a Unix-type environment. IDEs operate under a different environment context than the terminal user (duh, right?!). I was not considering that the subprocess was using a different environment than the one I have in my terminal (my terminal's bash_profile puts more things in PATH).
This is easily verified by changing the script as follows:
import os
import subprocess
import sys

args = ["/usr/bin/which", "git"]
print "Current path is %s" % os.path.expandvars("$PATH")
try:
    p = subprocess.Popen(args, shell=False,
                         stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    # communicate() returns a (stdout, stderr) tuple, so ..
    out, err = p.communicate()
    if err:
        msg = "cmd %s failed: %s" % (fullcmd, err)
except OSError, e:
    print >>sys.stderr, "Execution failed:", e
Under the terminal, the path includes /usr/local/bin. Under the IDE it does not!
This is an important gotcha for me - always remember about environments!
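One way to make an IDE run behave like the terminal is to hand Popen an explicit environment. A minimal sketch (the extra PATH entry is illustrative):

```python
import os
import subprocess
import sys

# Copy the current environment and prepend the directory the IDE is missing.
env = os.environ.copy()
env["PATH"] = "/usr/local/bin" + os.pathsep + env.get("PATH", "")

# The child process now sees the extended PATH, regardless of how the
# parent (terminal or IDE) was launched.
p = subprocess.Popen(
    [sys.executable, "-c", "import os; print(os.environ['PATH'])"],
    stdout=subprocess.PIPE, env=env)
out, _ = p.communicate()
print(out.decode().split(os.pathsep)[0])  # /usr/local/bin
```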