Python subprocess Popen with arguments

I'm writing a precommit hook for svn and I have to run the "svnlook log" command, capture and parse its output.
I'm stuck at this point:
svnlookCmd = ['/appl/atlad00/CollabNetSubversionEdge-5.0.1/csvn/bin/svnlook', 'log', repoPath, '-t ', transID]
sys.stderr.write('svnlookCom = ' + str(svnlookCmd) + '\n')
svnlook = Popen(svnlookCmd, stdout=PIPE)
commitMsg = svnlook.stdout.read()
sys.stderr.write ("\n commit message is: : \n" + commitMsg + "\n")
This will run svnlook, but svnlook itself will complain "Too many arguments given", which is not true if you check the svnlook help.
So I thought I had to put "svnlook log" together like this:
['/appl/atlad00/CollabNetSubversionEdge-5.0.1/csvn/bin/svnlook log', repoPath, '-t ', transID]
But this will not run svnlook at all, giving me:
"OSError: [Errno 2] No such file or directory".
which makes sense, as '/appl/atlad00/CollabNetSubversionEdge-5.0.1/csvn/bin/svnlook log' does not exist.
Any idea what I'm missing here? It's worth mentioning that it has been a very long time since I last worked with Python, so I may be missing something very basic...
S.

Found the issue: it's the trailing space in the -t option:
'-t '
should be
'-t'
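So the working version of the snippet from the question is simply (same code, corrected flag; repoPath and transID as in the question):
from subprocess import Popen, PIPE

svnlookCmd = ['/appl/atlad00/CollabNetSubversionEdge-5.0.1/csvn/bin/svnlook',
              'log', repoPath, '-t', transID]   # '-t', not '-t '
svnlook = Popen(svnlookCmd, stdout=PIPE)
commitMsg = svnlook.stdout.read()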

Related

linux directory permission check and/or dir exist or not

I have a script which runs as root on Linux, collecting data from different users. Given an NFS path for each individual user, I want to 1) verify the directory does not exist, or 2) verify permission is denied.
verify_model_path_not_found = '/usr/bin/su ' + userID + ' -c \'/usr/bin/ls ' + u_model_path + ' | /usr/bin/grep \"Permission denied\|No such file or directory\"\''
perm_denied_str = 'Permission denied'
no_file_dir_str = 'No such file or directory'
print("verify_model_path_not_found:", verify_model_path_not_found)
# Run as root
try:
    verify_cmd_out = subprocess.check_output(verify_model_path_not_found, shell=True)
    verify_cmd_out = str(verify_cmd_out)
    print("verify_cmd_out:", verify_cmd_out, "\n")
except subprocess.CalledProcessError as errorcatch:
    print(datetime.datetime.now(), " Error while executing ", verify_model_path_not_found)
    print("error code", errorcatch.returncode, errorcatch.output, "\n")
    continue
# only add items that are not accessible (permission denied or no file found)
if (perm_denied_str in verify_cmd_out) or (no_file_dir_str in verify_cmd_out):
    # remaining actions ... send an email to the user ...
Example error output:
verify_model_path_not_found: /usr/bin/su xxxx -c '/usr/bin/ls /nfs/x/y/z | /usr/bin/grep "Permission denied\|No such file or directory"'
2021-08-10 17:00:31.827186 Error while executing /usr/bin/su xxxx -c '/usr/bin/ls /nfs/x/y/z | /usr/bin/grep "Permission denied\|No such file or directory"'
error code 1 b'' #I know this dir does not exist or perm denied - still getting error
Given /nfs/x/y/z, if the user does not have read access, I would like grep to yield "Permission denied", which should become the value of verify_cmd_out.
Given /nfs/x/y/z, if the directory does not exist, I would like grep to yield "No such file or directory", which should become the value of verify_cmd_out.
Once permission denied or no such file is confirmed for the user, certain actions need to take place.
The /usr/bin/su xxxx -c ... command is not working properly; any thoughts or ideas on how to resolve the issue?
You are examining standard output (file descriptor 1), but error messages (and progress and diagnostics in general) are posted on standard error (file descriptor 2).
Your code is quite clunky anyway. Probably try something along the lines of
import subprocess

def assert_error_stderr(cmd, expected_stderr):
    """
    Accept a shell command; verify that it fails with an error
    and emits the expected message on standard error.
    """
    try:
        subprocess.run(cmd, check=True, text=True, capture_output=True)
    except subprocess.CalledProcessError as exc:
        assert expected_stderr in exc.stderr
        return
    raise ValueError("%r did not raise an error" % cmd)

def assert_silent_failure(cmd):
    """
    Check that a command fails; verify that standard error is empty.
    """
    try:
        subprocess.run(cmd, check=True, text=True, capture_output=True)
    except subprocess.CalledProcessError as exc:
        assert exc.stderr == ''
        return
    raise ValueError("%r did not fail" % cmd)

assert_silent_failure(['su', userID, '-c', 'test -d ' + u_model_path])
...
but of course using Python when you fundamentally want to test the shell might not make much sense.
#!/bin/sh
! test -d "$1"
Basically, never use ls in scripts, and generally don't rely on a particular error message (it could be localized, or change between versions).
Also, in Python subprocess code, avoid shell=True whenever you can. See also Actual meaning of shell=True in subprocess.
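To illustrate, a minimal sketch of the same check without shell=True, ls, or grep (reusing userID and u_model_path from the question):
import subprocess

result = subprocess.run(
    ['su', userID, '-c', 'test -d ' + u_model_path],
    text=True,
    capture_output=True,
)
if result.returncode != 0:
    # test -d prints nothing; a non-zero exit code already means
    # "missing or not accessible", so no error-message matching is needed
    print('not accessible:', result.stderr)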

Running vulture from a python script

I'm trying to find a way to run vulture (which finds unused code in python projects) inside a python script.
vulture documentation can be found here:
https://pypi.org/project/vulture/
Does anyone know how to do it?
The only way I know to use vulture is by shell commands.
I tried to run the shell command from the script using the subprocess module, something like this:
process = subprocess.run(['vulture', '.'], check=True,
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                         universal_newlines=True)
which I thought would have the same effect as running the shell command "vulture .", but it doesn't work.
Can anyone help?
Thanks
Vulture dev here.
The Vulture package exposes an API, called scavenge, which it uses internally for running the analysis after parsing command line arguments (in vulture.main).
It takes in a list of Python files/directories. For each directory, Vulture analyzes all contained *.py files.
To analyze the current directory:
import vulture
v = vulture.Vulture()
v.scavenge(['.'])
If you just want to print the results to stdout, you can call:
v.report()
However, it's also possible to perform custom analysis/filters over Vulture's results. The method vulture.get_unused_code returns a list of vulture.Item objects - which hold the name, type and location of unused code.
For the sake of this answer, I'm just gonna print the name of all unused objects:
for item in v.get_unused_code():
    print(item.name)
For more info, see - https://github.com/jendrikseipp/vulture
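Putting the pieces above together, a minimal self-contained sketch that mirrors what the CLI does:
import vulture

v = vulture.Vulture()
v.scavenge(['.'])   # analyze every *.py file under the current directory
v.report()          # print the unused-code report to stdout, as the CLI would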
I see you want to capture the output shown at the console. The code below might help:
import tempfile
import subprocess

def run_command(args):
    with tempfile.TemporaryFile() as t:
        try:
            out = subprocess.check_output(args, shell=True, stderr=t)
            t.seek(0)
            # check_output() and t.read() return bytes, hence the decode()
            console_output = '--- Provided Command: ---\n' + args + '\n' + t.read().decode() + out.decode() + '\n'
            return_code = 0
        except subprocess.CalledProcessError as e:
            t.seek(0)
            console_output = '--- Provided Command: ---\n' + args + '\n' + t.read().decode() + e.output.decode() + '\n'
            return_code = e.returncode
    return return_code, console_output
Your expected output will be displayed in console_output
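For example (a sketch, assuming vulture is on the PATH):
return_code, console_output = run_command('vulture .')
print(console_output)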
Link:
https://docs.python.org/3/library/subprocess.html

How to avoid displaying errors caused after running subprocess.call

When I run subprocess.call in Python and the called command produces error messages in the shell, I would like not to display them to the user.
So for instance,
for i in range(len(official_links)):
    if subprocess.call('pacman ' + '-Qi ' + official_links[i].replace('https://www.archlinux.org/packages/?q=', ''), shell=True, stdout=subprocess.PIPE) == 0:
        print(official_links[i].replace('https://www.archlinux.org/packages/?q=', '') + ' installed')
    else:
        print(official_links[i].replace('https://www.archlinux.org/packages/?q=', '') + ' not installed')
The command pacman -Qi packagename checks whether packagename is already installed. When I run my script and the package is installed, I get no extra messages from the shell, only what I print. But if the package does not exist and an error occurs, both the error and my print are shown on the screen.
Is there a way to avoid printing the command's errors too?
Thanks.
Redirect the stderr as well:
if subprocess.call('pacman ' + '-Qi ' + official_links[i].replace('https://www.archlinux.org/packages/?q=', ''), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) == 0:
That's where the error is displayed.
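If you don't need the output at all, subprocess.DEVNULL (Python 3.3+) is a lighter-weight way to discard it, since no pipes are created. A sketch, where package stands in for the package name extracted from the URL:
import subprocess

status = subprocess.call(['pacman', '-Qi', package],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)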

Python subprocess not returning

I want to call a Python script from Jenkins and have it build my app, FTP it to the target, and run it.
I am trying to build, and the subprocess command fails. I have tried this with both subprocess.call() and subprocess.Popen(), with the same result.
When I evaluate shellCommand and run it from the command line, the build succeeds.
Note that I have 3 shell commands: 1) remove the work directory, 2) create a fresh, empty work directory, then 3) build. The first two commands return from subprocess, but the third hangs (although it completes when run from the command line).
What am I doing wrong? Or, what alternatives do I have for executing that command?
# +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
def ExcecuteShellCommandAndGetReturnCode(arguments, shellCommand):
    try:
        process = subprocess.call(shellCommand, shell=True, stdout=subprocess.PIPE)
        #process.wait()
        return process #.returncode
    except KeyboardInterrupt, e:  # Ctrl-C
        raise e
    except SystemExit, e:  # sys.exit()
        raise e
    except Exception, e:
        print 'Exception while executing shell command : ' + shellCommand
        print str(e)
        traceback.print_exc()
        os._exit(1)
# +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=
def BuildApplciation(arguments):
    # See http://gnuarmeclipse.github.io/advanced/headless-builds/
    jenkinsWorkspaceDirectory = arguments.eclipseworkspace + '/jenkins'
    shellCommand = 'rm -r ' + jenkinsWorkspaceDirectory
    ExcecuteShellCommandAndGetReturnCode(arguments, shellCommand)
    shellCommand = 'mkdir ' + jenkinsWorkspaceDirectory
    if not ExcecuteShellCommandAndGetReturnCode(arguments, shellCommand) == 0:
        print "Error making Jenkins work directory in Eclipse workspace : " + jenkinsWorkspaceDirectory
        return False
    application = 'org.eclipse.cdt.managedbuilder.core.headlessbuild'
    shellCommand = 'eclipse -nosplash -application ' + application + ' -import ' + arguments.buildRoot + '/../Project/ -build myAppApp/TargetRelease -cleanBuild myAppApp/TargetRelease -data ' + jenkinsWorkspaceDirectory + ' -D DO_APPTEST'
    if not ExcecuteShellCommandAndGetReturnCode(arguments, shellCommand) == 0:
        print "Error in build"
        return False
I Googled further and found this page, which, at 1.2 says
One way of gaining access to the output of the executed command would
be to use PIPE in the arguments stdout or stderr, but the child
process will block if it generates enough output to a pipe to fill up
the OS pipe buffer as the pipes are not being read from.
Sure enough, when I deleted the , stdout=subprocess.PIPE from the code above, it worked as expected.
As I only want the exit code from the subprocess, the above code is enough for me. Read the linked page if you want the output of the command.
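For completeness, a sketch of both options (reusing shellCommand from the code above); communicate() is the standard way to avoid the deadlock when you do want the output:
import subprocess

# Only the exit code is needed: attach no pipe at all.
returncode = subprocess.call(shellCommand, shell=True)

# The output is needed too: communicate() drains the pipe to completion,
# so the child can never block on a full OS pipe buffer.
process = subprocess.Popen(shellCommand, shell=True, stdout=subprocess.PIPE)
output, _ = process.communicate()
returncode = process.returncode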

Python subprocess call rsync

I am trying to run rsync for each folder in a folder.
__author__ = 'Alexander'
import os
import subprocess

root = '/data/shares'
arguments = ["--verbose", "--recursive", "--dry-run", "--human-readable", "--remove-source-files"]
remote_host = 'TL-AS203'

for folder in os.listdir(root):
    print 'Sync Team ' + folder.__str__()
    path = os.path.join(root, folder, 'in')
    if os.path.exists(path):
        folder_arguments = list(arguments)
        print (type(folder_arguments))
        folder_arguments.append("--log-file=" + path + "/rsync.log")
        folder_arguments.append(path)
        folder_arguments.append("transfer#" + remote_host + ":/data/shares/" + folder + "/out")
        print "running rsync with " + str(folder_arguments)
        returncode = subprocess.call(["rsync", str(folder_arguments)])
        if returncode == 0:
            print "pull successful"
        else:
            print "error during rsync pull"
    else:
        print "not a valid team folder, in not found"
If I run this I get the following output:
Sync Team IT-Systemberatung
<type 'list'>
running rsync with ['--verbose', '--recursive', '--dry-run', '--human-readable', '--remove-source-files', '--log-file=/data/shares/IT-Systemberatung/in/rsync.log', '/data/shares/IT-Systemberatung/in', 'transfer#TL-AS203:/data/shares/IT-Systemberatung/out']
rsync: change_dir "/data/shares/IT-Systemberatung/['--verbose', '--recursive', '--dry-run', '--human-readable', '--remove-source-files', '--log-file=/data/shares/IT-Systemberatung/in/rsync.log', '/data/shares/IT-Systemberatung/in', 'transfer#TL-AS203:/data/shares/IT-Systemberatung" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1040) [sender=3.0.4]
error during rsync pull
Sync Team IT-Applikationsbetrieb
not a valid team folder, in not found
transfer#INT-AS238:/data/shares/IT-Systemberatung
If I manually start rsync from bash with these arguments, everything works fine. I also tried it with shell=True, but got the same result.
You need to do:
returncode = subprocess.call(["rsync"] + folder_arguments)
Calling str() on a list returns the string representation of the Python list, which is not what you want to pass as an argument to rsync.
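To see why, compare:
>>> args = ['--verbose', '/data/shares/IT-Systemberatung/in']
>>> str(args)
"['--verbose', '/data/shares/IT-Systemberatung/in']"
>>> ['rsync'] + args
['rsync', '--verbose', '/data/shares/IT-Systemberatung/in']
str() turns the whole list into one bogus argument, which is exactly the garbled change_dir path rsync complained about in your output.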
You do an os.chdir(os.path.join(root, folder)), but never go back.
In order to properly resume operation on the next folder, you should either remember the previous os.getcwd() and return to it, or just do os.chdir('..') at the end of each loop run.
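A sketch of the first option (assuming your loop does change directories; root and folder as in the question):
import os

previous_dir = os.getcwd()              # remember the starting point
os.chdir(os.path.join(root, folder))
try:
    pass                                # ... rsync work inside the team folder ...
finally:
    os.chdir(previous_dir)              # always return for the next iteration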
