How to avoid displaying errors from subprocess.call - python

When I run subprocess.call in Python and the shell command produces error messages, I would like those messages not to be shown to the user.
For instance:
for i in range(len(official_links)):
    if(subprocess.call('pacman ' + '-Qi ' + official_links[i].replace('https://www.archlinux.org/packages/?q=', ''), shell=True, stdout=subprocess.PIPE) == 0):
        print(official_links[i].replace('https://www.archlinux.org/packages/?q=', '') + ' installed')
    else:
        print(official_links[i].replace('https://www.archlinux.org/packages/?q=', '') + ' not installed')
The command pacman -Qi packagename checks whether packagename is already installed. When I run my script and the package is installed, I get no extra messages from the shell, only what I print. But if the package does not exist and an error occurs, both the error and my print get written to the screen.
Is there a way to avoid printing command errors too?
Thanks.

Redirect the stderr as well:
if(subprocess.call('pacman ' + '-Qi ' + official_links[i].replace('https://www.archlinux.org/packages/?q=', ''),shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) == 0):
That's where the error is displayed.
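If you only need the exit status, a minimal alternative sketch (assuming Python 3.3 or newer, where subprocess.DEVNULL is available; 'firefox' is just a placeholder package name) that discards both streams instead of piping them:

import subprocess

# Hypothetical package name for illustration.
pkg = 'firefox'

# Discard both stdout and stderr; only the return code is used.
ret = subprocess.call(['pacman', '-Qi', pkg],
                      stdout=subprocess.DEVNULL,
                      stderr=subprocess.DEVNULL)
if ret == 0:
    print(pkg + ' installed')
else:
    print(pkg + ' not installed')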

Related

Running vulture from a python script

I'm trying to find a way to run vulture (which finds unused code in Python projects) from inside a Python script.
vulture documentation can be found here:
https://pypi.org/project/vulture/
Does anyone know how to do it?
The only way I know to use vulture is by shell commands.
I tried to run the shell command from the script using the subprocess module, something like this:
process = subprocess.run(['vulture', '.'], check=True,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                         universal_newlines=True)
which I thought would have the same effect as running the shell command "vulture ."
but it doesn't work.
Can anyone help?
Thanks
Vulture dev here.
The Vulture package exposes an API called scavenge, which it uses internally for running the analysis after parsing command-line arguments (in vulture.main).
It takes in a list of Python files/directories. For each directory, Vulture analyzes all contained *.py files.
To analyze the current directory:
import vulture
v = vulture.Vulture()
v.scavenge(['.'])
If you just want to print the results to stdout, you can call:
v.report()
However, it's also possible to perform custom analysis or filtering over Vulture's results. The get_unused_code method returns a list of vulture.Item objects, which hold the name, type and location of unused code.
For the sake of this answer, I'll just print the names of all unused objects:
for item in v.get_unused_code():
    print(item.name)
For more info, see - https://github.com/jendrikseipp/vulture
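Putting the snippets above together into one self-contained sketch:

import vulture

# Analyze all *.py files under the current directory.
v = vulture.Vulture()
v.scavenge(['.'])

# Built-in report, printed to stdout just like the vulture CLI.
v.report()

# Custom handling: iterate over the unused-code items yourself.
for item in v.get_unused_code():
    print(item.name)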
I see you want to capture the output shown in the console.
The code below might help:
import tempfile
import subprocess

def run_command(args):
    # stderr goes to a temporary file so it can be read back afterwards;
    # stdout is captured directly by check_output.
    with tempfile.TemporaryFile() as t:
        try:
            out = subprocess.check_output(args, shell=True, stderr=t)
            t.seek(0)
            console_output = ('--- Provided Command: --- ' + '\n' + args + '\n'
                              + t.read().decode() + out.decode() + '\n')
            return_code = 0
        except subprocess.CalledProcessError as e:
            t.seek(0)
            console_output = ('--- Provided Command: --- ' + '\n' + args + '\n'
                              + t.read().decode() + e.output.decode() + '\n')
            return_code = e.returncode
    return return_code, console_output
Your expected output will be displayed in console_output
Link:
https://docs.python.org/3/library/subprocess.html
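For example, a hypothetical call to the run_command helper above, re-using the command from the question:

# Hypothetical usage of the run_command helper defined above.
code, output = run_command('vulture .')
print('return code:', code)
print(output)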

Python subprocess not returning

I want to call a Python script from Jenkins and have it build my app, FTP it to the target, and run it.
I am trying to build, and the subprocess command fails. I have tried this with both subprocess.call() and subprocess.Popen(), with the same result.
When I evaluate shellCommand and run it from the command line, the build succeeds.
Note that I have 3 shell commands: 1) remove the work directory, 2) create a fresh, empty work directory, then 3) build. The first two commands return from subprocess, but the third hangs (although it completes when run from the command line).
What am I doing wrong? Or, what alternatives do I have for executing that command?
# +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
def ExcecuteShellCommandAndGetReturnCode(arguments, shellCommand):
    try:
        process = subprocess.call(shellCommand, shell=True, stdout=subprocess.PIPE)
        #process.wait()
        return process #.returncode
    except KeyboardInterrupt, e: # Ctrl-C
        raise e
    except SystemExit, e: # sys.exit()
        raise e
    except Exception, e:
        print 'Exception while executing shell command : ' + shellCommand
        print str(e)
        traceback.print_exc()
        os._exit(1)

# +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
def BuildApplciation(arguments):
    # See http://gnuarmeclipse.github.io/advanced/headless-builds/
    jenkinsWorkspaceDirectory = arguments.eclipseworkspace + '/jenkins'

    shellCommand = 'rm -r ' + jenkinsWorkspaceDirectory
    ExcecuteShellCommandAndGetReturnCode(arguments, shellCommand)

    shellCommand = 'mkdir ' + jenkinsWorkspaceDirectory
    if not ExcecuteShellCommandAndGetReturnCode(arguments, shellCommand) == 0:
        print "Error making Jenkins work directory in Eclipse workspace : " + jenkinsWorkspaceDirectory
        return False

    application = 'org.eclipse.cdt.managedbuilder.core.headlessbuild'
    shellCommand = 'eclipse -nosplash -application ' + application + ' -import ' + arguments.buildRoot + '/../Project/ -build myAppApp/TargetRelease -cleanBuild myAppApp/TargetRelease -data ' + jenkinsWorkspaceDirectory + ' -D DO_APPTEST'
    if not ExcecuteShellCommandAndGetReturnCode(arguments, shellCommand) == 0:
        print "Error in build"
        return False
I Googled further and found this page, which, at section 1.2, says:
One way of gaining access to the output of the executed command would
be to use PIPE in the arguments stdout or stderr, but the child
process will block if it generates enough output to a pipe to fill up
the OS pipe buffer as the pipes are not being read from.
Sure enough, when I deleted stdout=subprocess.PIPE from the code above, it worked as expected.
As I only want the exit code from the subprocess, the above code is enough for me. Read the linked page if you want the output of the command.
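If the command's output is needed as well, a minimal sketch (my own helper, not the code above) that drains the pipes with communicate() so the child can never block on a full pipe buffer:

import subprocess

def execute_and_capture(shell_command):
    # communicate() keeps reading stdout/stderr until the process exits,
    # so the OS pipe buffers cannot fill up and stall the child.
    process = subprocess.Popen(shell_command, shell=True,
                               stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE,
                               universal_newlines=True)
    out, err = process.communicate()
    return process.returncode, out, err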

python subprocess popen with arguments

I'm writing a precommit hook for svn and I have to run the "svnlook log" command, capture and parse its output.
I'm stuck at this point:
svnlookCmd = ['/appl/atlad00/CollabNetSubversionEdge-5.0.1/csvn/bin/svnlook', 'log', repoPath, '-t ', transID]
sys.stderr.write('svnlookCom = ' + str(svnlookCmd) + '\n')
svnlook = Popen(svnlookCmd, stdout=PIPE)
commitMsg = svnlook.stdout.read()
sys.stderr.write ("\n commit message is: : \n" + commitMsg + "\n")
This will run svnlook, but svnlook itself will complain that "Too many arguments given", which is not true if you check the svnlook help.
So I thought I had to put "svnlook log" together like this:
['/appl/atlad00/CollabNetSubversionEdge-5.0.1/csvn/bin/svnlook log', repoPath, '-t ', transID]
But this will not run svn look at all giving me:
"OSError: [Errno 2] No such file or directory".
which makes sense, as
'/appl/atlad00/CollabNetSubversionEdge-5.0.1/csvn/bin/svnlook log' does not exist.
Any idea what I'm missing here? It's worth mentioning that it's been a very long time since I worked with Python, so I may be missing something very basic...
S.
Found the issue:
it's the trailing space in the -t option:
'-t '
it should be
'-t'
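Putting the fix into the question's snippet (a sketch; repoPath and transID are assumed to come from the hook's arguments):

import sys
from subprocess import Popen, PIPE

# Assumed: the pre-commit hook receives the repository path and
# transaction id as its first two arguments.
repoPath, transID = sys.argv[1], sys.argv[2]

# Note: '-t' with no trailing space.
svnlookCmd = ['/appl/atlad00/CollabNetSubversionEdge-5.0.1/csvn/bin/svnlook',
              'log', repoPath, '-t', transID]
svnlook = Popen(svnlookCmd, stdout=PIPE, universal_newlines=True)
commitMsg, _ = svnlook.communicate()
sys.stderr.write('\ncommit message is:\n' + commitMsg + '\n')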

How to run a shell script without having to press enter/confirm something in between

I'm currently writing a shell script which interfaces with numerous Python scripts. In one of these Python scripts I'm calling GRASS without starting it explicitly. When I run my shell script, I have to hit Enter at the point where I call GRASS (this is the code I got from the official "working with GRASS" page):
startcmd = grass7bin + ' -c ' + file_in2 + ' -e ' + location_path
print startcmd

p = subprocess.Popen(startcmd, shell=True,
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()

if p.returncode != 0:
    print >>sys.stderr, 'ERROR: %s' % err
    print >>sys.stderr, 'ERROR: Cannot generate location (%s)' % startcmd
    sys.exit(-1)
else:
    print 'Created location %s' % location_path
    gsetup.init(gisbase, gisdb, location, mapset)
My problem is that I want this process to run automatically, without me having to press Enter every time in between!
I have already tried numerous options such as pexpect and uinput (which doesn't work that well because of problems with the module). I know that on Windows you have the msvcrt module, but I am working with Linux... any ideas how to solve this problem?
Use the pexpect library for expect functionality.
Here's an example of interaction with an application that requires the user to type in a password:
import pexpect

child = pexpect.spawn('your command')
child.expect('Enter password:')
child.sendline('your password')
child.expect(pexpect.EOF, timeout=None)

cmd_show_data = child.before
cmd_output = cmd_show_data.split('\r\n')
for data in cmd_output:
    print data
I finally found an easy and fast way to simulate a key press:
just install xdotool and then use the following code to simulate, e.g., the Enter key:
import subprocess
subprocess.call(["xdotool","key","Return"])

Python raw_input doesn't work after using subprocess module

I'm using the subprocess module to invoke plink and run some commands on a remote server. This works as expected, but after a successful call to subprocess.check_call or subprocess.check_output the raw_input method seems to block forever and doesn't accept input at the command line.
I've reduced it to this simple example:
import subprocess
def execute(command):
    return subprocess.check_call('plink.exe -ssh ' + USER + '#' + HOST + ' -pw ' + PASSWD + ' ' + command)
input = raw_input('Enter some text: ')
print('You entered: ' + input)
execute('echo "Hello, World"')
# I see the following prompt, but it's not accepting input
input = raw_input('Enter some more text: ')
print('You entered: ' + input)
I see the same results with subprocess.check_call and subprocess.check_output. If I replace the final raw_input call with a direct read from stdin (sys.stdin.read(10)) the program does accept input.
This is Python 2.7 on Windows 7 x64. Any ideas what I'm doing wrong?
Edit: If I change execute to call something other than plink it seems to work okay.
def execute(command):
    return subprocess.check_call('cmd.exe /C ' + command)
This suggests that plink might be the problem. However, I can run multiple plink commands directly in a console window without issue.
I was able to resolve this by attaching stdin to devnull:
def execute(command):
    return subprocess.check_call('plink.exe -ssh ' + USER + '#' + HOST + ' -pw ' + PASSWD + ' ' + command, stdin=open(os.devnull))
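For completeness, the same fix as a self-contained sketch; the connection details are placeholders, and the with-block just makes sure the devnull handle is closed (on Python 3.3+, stdin=subprocess.DEVNULL would do the same job):

import os
import subprocess

# Placeholder connection details for illustration only.
USER, HOST, PASSWD = 'user', 'example.com', 'secret'

def execute(command):
    # Give the child its own stdin (the null device) instead of
    # sharing the console's stdin with plink.
    with open(os.devnull, 'rb') as devnull:
        return subprocess.check_call(
            'plink.exe -ssh ' + USER + '@' + HOST + ' -pw ' + PASSWD + ' ' + command,
            stdin=devnull)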
