I am using the code below to run the git command `git tag -l --contains ad0beef66e5890cde6f0961ed03d8bc7e3defc63`. If I run this command standalone I see the required output, but through the program below it doesn't work. Does anyone have any input on what could be wrong?
from subprocess import check_call,Popen,PIPE
revtext = "ad0beef66e5890cde6f0961ed03d8bc7e3defc63"
proc = Popen(['git', 'tag', '-l', '--contains', revtext], stdout=PIPE, stderr=PIPE)
(out, error) = proc.communicate()
print "OUT"
print out
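Nothing in the snippet prints `error`, so any git failure (for example, the script not being launched from inside the repository; passing `cwd=` to `Popen` pins the working directory) is silently discarded. A minimal habit worth adopting, sketched here with a portable failing stand-in in place of the git call so it runs anywhere:

```python
import subprocess
import sys

# Stand-in for the git call: a command that fails and writes to stderr.
proc = subprocess.Popen(
    [sys.executable, '-c', 'import sys; sys.exit("fatal: not a git repository")'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, error = proc.communicate()
if proc.returncode != 0:
    # With the real git call, the actual failure message shows up here.
    print('command failed:', error.decode().strip())
```

Checking `proc.returncode` and printing `error` usually reveals immediately why the standalone command and the scripted one behave differently.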
For example I'm trying to run some bash command from python:
from subprocess import run
command = f'ffmpeg -y -i "{video_path}" "{frames_path}/%d.png"'
run(command, shell=True, check=True)
but if it fails, all I get is subprocess.CalledProcessError: Command 'ffmpeg ...' returned non-zero exit status 127. How can I get the full ffmpeg error message?
It's the check=True kwarg that causes it to raise CalledProcessError; remove check=True and it will stop raising. If you want to see the STDERR printed by ffmpeg, pass capture_output=True. The resulting CompletedProcess object will then have a .stderr member containing the STDERR of your command as a bytes object; use .decode() to turn it into a normal string:
from subprocess import run
command = f'ffmpeg -y -i "{video_path}" "{frames_path}/%d.png"'
proc = run(command, shell=True, capture_output=True)
out = proc.stdout.decode() # stores the output of stdout
err = proc.stderr.decode() # stores the output of stderr
print(err)
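Alternatively, if you want to keep the fail-fast behaviour of check=True and still see ffmpeg's message: when output is captured, the raised CalledProcessError carries the command's stderr in its .stderr attribute. A sketch, with a portable failing stand-in in place of the ffmpeg command so it runs anywhere:

```python
import subprocess
import sys

# Stand-in for the ffmpeg command: fails and writes a message to stderr.
command = [sys.executable, '-c', 'import sys; sys.exit("ffmpeg-style error message")']
err_text = ''
try:
    subprocess.run(command, capture_output=True, check=True)
except subprocess.CalledProcessError as e:
    # e.stderr is only populated because the output was captured.
    err_text = e.stderr.decode().strip()
print('command failed with:', err_text)
```

This keeps the exception-on-failure control flow while still surfacing the full error text.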
Suppose I have 3 text files ours.txt, base.txt and theirs.txt and want to do a three way merge on them. When I call git merge-file -p ours.txt base.txt theirs.txt in Git Bash, it will print the merged text.
Whereas, when I run
import subprocess
dir = "path/to/text files"
cmd = ["git", "merge-file", "-p ", "ours.txt", "base.txt", "theirs.txt"]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, cwd=dir)
I can access stdout and stderr through
(out, error) = p.communicate()
But can't seem to store the merged text that gets printed in Git Bash in a variable.
Does anybody have any ideas on how to retrieve it?
Thanks in advance.
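One likely culprit: the list element "-p " has a trailing space. With the list form of Popen there is no shell word-splitting, so each element is passed to git verbatim, and git receives the unknown argument "-p " instead of the -p option. The effect is easy to demonstrate with a stand-in command that echoes back its first argument:

```python
import subprocess
import sys

# Print repr() of the first argument to show it arrives with the space intact.
probe = [sys.executable, '-c', 'import sys; print(repr(sys.argv[1]))', '-p ']
out = subprocess.run(probe, stdout=subprocess.PIPE).stdout.decode()
print(out)
```

With the space removed, i.e. cmd = ["git", "merge-file", "-p", "ours.txt", "base.txt", "theirs.txt"], p.communicate() should return the merged text as the first element of the tuple.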
Below is the Python code I am running to call Sqoop, but it is not capturing the logs, except for the few lines below:
Warning: /usr/hdp/2.6.4.0-91/accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
import subprocess
job = "sqoop-import --direct --connect 'jdbc:sqlserver://host' --username myuser --password-file /user/ivr_sqoop --table data_app_det --delete-target-dir --verbose --split-by attribute_name_id --where \"db_process_time BETWEEN ('2018-07-15') and ('9999-12-31')\""
print job
with open('save.txt', 'w') as fp:
    proc = subprocess.Popen(job, stdout=fp, stderr=subprocess.PIPE, shell=True)
    stdout, stderr = proc.communicate()
    print "Here is the return code :: " + str(proc.returncode)
    print stdout
Please let me know if there is an issue with the way I am calling.
Note : The individual sqoop cmd is running fine and producing all the logs.
I have tried the below way as well; the result is the same:
import subprocess
job = "sqoop-import --direct --connect 'jdbc:sqlserver://host' --username myuser --password-file /user/ivr_sqoop --table data_app_det --delete-target-dir --verbose --split-by attribute_name_id --where \"db_process_time BETWEEN ('2018-07-15') and ('9999-12-31')\""
proc = subprocess.Popen(job, stdout=subprocess.PIPE,stderr=subprocess.PIPE, shell=True)
stdout, stderr = proc.communicate()
and also using '2> mylog.log' at the end of the cmd
import subprocess
job = "sqoop-import --direct --connect 'jdbc:sqlserver://host' --username myuser --password-file /user/ivr_sqoop --table data_app_det --delete-target-dir --verbose --split-by attribute_name_id --where \"db_process_time BETWEEN ('2018-07-15') and ('9999-12-31')\" 2> mylog.log"
proc = subprocess.Popen(job, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
stdout, stderr = proc.communicate()
I have found the similar question below, but there was no answer there either:
Subprocess Popen : Ignore Accumulo warning and continue execution of Sqoop
Since you have added shell=True, it is not capturing the Sqoop logs. Remove shell=True from your call and add universal_newlines=True; it will then display the console log.
The working piece of code:
import subprocess
import logging

logging.basicConfig(format='%(levelname)s:%(message)s', level=logging.DEBUG)

# Function to run a Hadoop/Sqoop command
def run_unix_cmd(args_list):
    """
    Run Linux commands and return (returncode, stdout, stderr).
    """
    print('Running system command: {0}'.format(' '.join(args_list)))
    proc = subprocess.Popen(args_list, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE, universal_newlines=True)
    s_output, s_err = proc.communicate()
    s_return = proc.returncode
    return s_return, s_output, s_err

# Create Sqoop job
def sqoop_job():
    """
    Create Sqoop job
    """
    cmd = ['sqoop', 'import', '--connect', 'jdbc:oracle:thin:@//host:port/schema',
           '--username', 'user', '--password', 'XX', '--query', '"your query"',
           '-m', '1', '--target-dir', 'tgt_dir']
    print(cmd)
    ret, out, err = run_unix_cmd(cmd)
    print(ret, out, err)
    if ret == 0:
        logging.info('Success.')
    else:
        logging.info('Error.')

if __name__ == '__main__':
    sqoop_job()
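The reason this pattern captures the Sqoop logs is that Sqoop, like most Hadoop tooling, writes its log output to stderr rather than stdout, so both streams must be piped. A quick way to confirm the behaviour, using a portable stand-in that writes to both streams in place of sqoop:

```python
import subprocess
import sys

# Stand-in for sqoop: one line of data on stdout, one log line on stderr.
proc = subprocess.Popen(
    [sys.executable, '-c',
     'import sys; print("data"); print("log line", file=sys.stderr)'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, universal_newlines=True)
out, err = proc.communicate()
print('stdout:', out)
print('stderr:', err)
```

If only stdout were piped (or redirected to a file, as in the first attempt above), the "log line" part would be lost or interleaved with the console instead of landing in a variable.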
I'm stuck at a point where I can't get my Python subprocess call for "git show" to run through the core.pager.
In my ~/.gitconfig I have specified a core pager;
[core]
pager = cat -vet
And when I run this through subprocess (Popen or check_output)
cmd = ['git', '-C', repo, 'show', r'{0}:{1}'.format(commit, filename)]
stdout = subprocess.check_output(cmd)
The output I get has not been run through the cat pager (the lines would end with '$' if it had).
When I run it myself from the cli, the output does go through the pager.
How should I do the call to subprocess to get the "git show" command to run through the core.pager?
AFAIK, git only post-processes output through the configured pager when output is directed to a terminal. Since you are using subprocess.check_output, the output from the git command is redirected to a pipe (so it can be handed to the Python caller); as such, core.pager is not invoked.
If you want post-processed output, you will have to do it by hand.
Assuming you want to use cat -vet as a post-processing filter, you could do:
cmd = ['git', '-C', repo, 'show', r'{0}:{1}'.format(commit, filename)]
p1 = subprocess.Popen(cmd, stdout=subprocess.PIPE)
filter_cmd = ['/bin/cat', '-vet']  # renamed: `filter` shadows the builtin
p2 = subprocess.Popen(filter_cmd, stdin=p1.stdout, stdout=subprocess.PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if the filter exits first
stdout, _ = p2.communicate()  # avoids the wait()/read() deadlock on large output
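On Python 3.5+, the same two-stage pipe can also be written with subprocess.run, feeding the first command's output to the filter via input= instead of wiring up a live pipe. Sketched here with a portable stand-in for the `git show` stage (substitute the real cmd); it assumes a POSIX `cat` is available, as in the answer above:

```python
import subprocess
import sys

# First stage: stand-in for the `git show` call, producing a tab-bearing line.
show = subprocess.run(
    [sys.executable, '-c', 'print("line with tab\\tend")'],
    stdout=subprocess.PIPE, check=True)

# Second stage: the cat -vet filter, fed the captured bytes via input=.
filtered = subprocess.run(['cat', '-vet'], input=show.stdout,
                          stdout=subprocess.PIPE, check=True)
filtered_text = filtered.stdout.decode()
print(filtered_text)  # tabs rendered as ^I, line ends marked with $
```

This buffers the whole first-stage output in memory, so the Popen-to-Popen pipe is still preferable for very large diffs.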
I have the following script:
import subprocess
arguments = ["d:\\simulator","2332.txt","2332.log", "-c"]
output=subprocess.Popen(arguments, stdout=subprocess.PIPE).communicate()[0]
print(output)
which gives me b'' as output.
I also tried this script:
import subprocess
arguments = ["d:\\simulator","2332.txt","atp2332.log", "-c"]
process = subprocess.Popen(arguments,stdout=subprocess.PIPE)
process.wait()
print(process.stdout.read())
print("ERROR:" + str(process.stderr))
which gives me the output: b'', ERROR:None
However, when I run this at the cmd prompt I get 5 lines of text:
d:\simulator atp2332.txt atp2332.log -c
I have added a message box to the simulator which pops up when it launches. This is presented in all three cases, so I know I successfully launch the simulator; however, the Python scripts are not capturing stdout.
What am I doing wrong?
Barry.
If possible (i.e. not an endless stream of data) you should use communicate(), as noted on the subprocess documentation page.
Try this:
import subprocess
arguments = ["d:\\simulator","2332.txt","atp2332.log", "-c"]
process = subprocess.Popen(arguments, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
sout, serr = process.communicate()
print(sout)
print(serr)
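On Python 3.5+, the same capture is a little shorter with subprocess.run; sketched here with a portable stand-in in place of the simulator's argument list so it runs anywhere:

```python
import subprocess
import sys

# Stand-in for ["d:\\simulator", "2332.txt", "atp2332.log", "-c"].
result = subprocess.run(
    [sys.executable, '-c', 'print("simulator output")'],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
sout = result.stdout.decode()
serr = result.stderr.decode()
print(sout)
print(serr)
```

run() waits for the process and collects both streams in one call, so there is no separate wait()/communicate() step to get wrong.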
The following code gives me text output on stdout.
Perhaps you could try it, and then substitute your command for help
import subprocess
arguments = ["help", "2332.txt", "atp2332.log", "-c"]
process = subprocess.Popen(arguments, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = process.communicate()  # wait() plus read() risks a deadlock if a pipe fills
print('Return code:', process.returncode)
print('stdout:', out.decode())
print('stderr:', err.decode())