Python 2 subprocess (dmidecode) to a variable? [duplicate]

This question already has answers here:
How to store the result of an executed shell command in a variable in python? [duplicate]
(4 answers)
Closed 7 years ago.
I'm running dmidecode in Linux to get a list of hardware information. What is the best way to read over the output and select certain bits of information, for example the Product Name: part of the dmidecode output?
At the moment I'm writing the subprocess output to a file and then reading over the file for a given string. This seems like an inefficient way of doing things.
I also know about the python-dmidecode module, but for the life of me I can't get it working; it just keeps saying there is no bios attribute.

If you know the specific keyword you are looking for, you can type: dmidecode -s keyword
In your case it would be:
dmidecode -s system-product-name
You can also filter by type. For example:
To return System information:
dmidecode -t1
To return BaseBoard information:
dmidecode -t2
To return Chassis Information:
dmidecode -t3

There are multiple ways to get the output of a command in your Python script using the subprocess module.
subprocess.Popen() - you can start the command-line process with the Popen class, specifying stdout as subprocess.PIPE, and then use the communicate() method to get the results. Example -
import subprocess
p = subprocess.Popen(['dmidecode'] , stdout=subprocess.PIPE)
result = p.communicate()[0]
subprocess.check_output() - this function returns the output of the command (whatever it wrote to stdout) as a byte string after executing the command. Example -
import subprocess
result = subprocess.check_output(['dmidecode'])
For your particular case, subprocess.check_output() is most probably better suited, as you do not need to provide any input to the process.
With subprocess.Popen() you can also provide input to the process, by piping its stdin.
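As an illustrative sketch tying this back to the original question: capture the full dmidecode output and scan it for the Product Name line. The helper get_dmi_value() and the sample text below are hypothetical, not part of dmidecode or its Python bindings; on a real system you would feed it the string returned by subprocess.check_output(['dmidecode']) (run as root).

```python
def get_dmi_value(text, key):
    # Scan dmidecode-style output for an indented "Key: value" line.
    for line in text.splitlines():
        line = line.strip()
        if line.startswith(key + ":"):
            return line.split(":", 1)[1].strip()
    return None

# A captured snippet standing in for subprocess.check_output(['dmidecode']):
sample = """System Information
\tManufacturer: ExampleCorp
\tProduct Name: ExampleBoard X1
"""
print(get_dmi_value(sample, "Product Name"))  # -> ExampleBoard X1
```

With `dmidecode -s system-product-name` from the first answer you could skip the parsing entirely, since that form prints only the value.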

Related

Issue using subprocess to run a PDAL bash command from Python [duplicate]

This question already has answers here:
File not found error when launching a subprocess containing piped commands
(6 answers)
Closed 2 years ago.
Issue:
I cannot run a pdal bash command from Python using subprocess.
Here is the code, based on Running Bash commands in Python:
import os, subprocess

input = '/path/to/file.ply'
output = '/path/to/statfile.json'
if not os.path.isfile(output):
    open(output, 'a').close()
bashcmd = ("pdal info --boundary "
           + input
           + " > "
           + output
           )
print("Bash command is:\n{}\n".format(bashcmd))
process = subprocess.Popen(bashcommand.split(),
                           stdout=subprocess.PIPE,
                           shell=True)
output, error = process.communicate()
print("Output:\n{}\n".format(output))
print("Error:\n{}\n".format(error))
Which gives me this output in the Python console:
Bash command is:
pdal info --boundary /path/to/file.ply > /path/to/statfile.json
Output:
Usage:
pdal <options>
pdal <command> <command options>
--command The PDAL command
--debug Sets the output level to 3 (option deprecated)
--verbose, -v Sets the output level (0-8)
--drivers List available drivers
--help, -h Display help text
--list-commands List available commands
--version Show program version
--options Show options for specified driver (or 'all')
--log Log filename (accepts stderr, stdout, stdlog, devnull as
special cases)
--logtiming Turn on timing for log messages
The following commands are available:
- delta
- diff
- fauxplugin
- ground
- hausdorff
- info
- merge
- pcl
- pipeline
- random
- smooth
- sort
- split
- tindex
- translate
See http://pdal.io/apps/ for more detail
Error:
None
It looks as if it had stopped reading the arguments of the command after the call to pdal alone, which prints this help message.
If I copy the output of the first print and paste it into a bash terminal, it works properly, giving me the output file with the desired metadata. But from Python no output file is created.
Question:
I wonder why (e.g. is there anything wrong with the redirection or the fact that the computation itself may take ~20sec normally?), and how to execute this command from Python?
This doesn't provide a clear enough answer to the present issue.
There are multiple errors here.
You are using an undefined variable bashcommand instead of the one you defined above, bashcmd.
You are mixing output to a Python file handle with shell redirection.
You are not capturing the stderr of the process. (I will vaguely assume you do not need the standard error anyway.)
You should not split() the command if you run it with shell=True.
More broadly, you should probably avoid shell=True and let Python take care of the redirection for you by connecting the output to the file you open; and in modern times, you really should not use subprocess.Popen() if you can use subprocess.run() or subprocess.check_call() or friends.
import subprocess

input = '/path/to/file.ply'
output = '/path/to/statfile.json'
with open(output, 'a') as handle:
    bashcmd = ['pdal', 'info', '--boundary', input]
    #print("Bash command is:\n{}\n".format(bashcmd))
    result = subprocess.run(bashcmd, stdout=handle, stderr=subprocess.PIPE)
    # No can do because output goes straight to the file now
    ##print("Output:\n{}\n".format(output))
    #print("Error:\n{}\n".format(result.stderr))
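A runnable sketch of the same pattern, with echo standing in for the pdal invocation (which may not be installed); the shell's > redirection is replaced by handing subprocess.run() an open file object:

```python
import os
import subprocess
import tempfile

# 'echo' stands in for `pdal info --boundary ...`; stdout goes to the file.
outpath = os.path.join(tempfile.mkdtemp(), "statfile_demo.json")
with open(outpath, "w") as handle:
    subprocess.run(["echo", "boundary-metadata"], stdout=handle, check=True)

with open(outpath) as handle:
    print(handle.read().strip())  # -> boundary-metadata
```

check=True makes the call raise if the command fails, which replaces the manual error inspection from the original snippet.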

subprocess call with args is only reading first arg [duplicate]

This question already has answers here:
subprocess.call() arguments ignored when using shell=True w/ list [duplicate]
(2 answers)
Python subprocess.call seems to ignore parameters
(2 answers)
Closed 4 years ago.
I am trying to automate nmap scans and I am using the subprocess module to do so. I am pretty much passing three variables to subprocess.call and having the command be run. Here is my code
import subprocess
TOOL = 'nmap'
joined = '-p1 5000'
target = 'localhost'
subprocess.call([TOOL, joined, target], shell=True)
This should lead to nmap -p1 5000 localhost being run on my system, which is a valid command; however, the call method seems to only be recognizing TOOL (nmap) and it just prints out the options for nmap. Does anyone know what I'm missing here?
I don't have nmap installed, but you need to set shell=False and split the parameters:
import subprocess
TOOL = 'ls'
joined = '-a -l'
target = '/tmp'
print(subprocess.call([TOOL, *joined.split(), target], shell=False))
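The same list-of-arguments pattern applied back to the original command (nmap itself may not be present, so the runnable line below substitutes the Python interpreter; the nmap line is shown only as a comment):

```python
import subprocess
import sys

# With nmap installed, the original command would become:
# subprocess.call(['nmap', '-p1', '5000', 'localhost'], shell=False)

# Same pattern, demonstrated with an interpreter that is always available:
rc = subprocess.call([sys.executable, '-c', 'print("scan done")'], shell=False)
print(rc)  # -> 0
```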

Running python script from perl, with argument to stdin and saving stdout output

My perl script is at path:
a/perl/perlScript.pl
my python script is at path:
a/python/pythonScript.py
pythonScript.py gets an argument from stdin and returns its result to stdout. From perlScript.pl, I want to run pythonScript.py with the argument hi on stdin, and save the result in some variable. That's what I tried:
my $ret = `../python/pythonScript.py < hi`;
but I got the following error:
The system cannot find the path specified.
Can you explain the path can't be found?
The qx operator (backticks) starts a shell (sh), in which prog < input syntax expects a file named input from which it will read lines and feed them to the program prog. But you want the python script to receive on its STDIN the string hi instead, not lines of a file named hi.
One way is to directly do that, my $ret = qx(echo "hi" | python_script).
But I'd suggest to consider using modules for this. Here is a simple example with IPC::Run3
use warnings;
use strict;
use feature 'say';
use IPC::Run3;
my @cmd = ('program', 'arg1', 'arg2');
my $in = "hi";
run3 \@cmd, \$in, \my $out;
say "script's stdout: $out";
The program is the path to your script if it is executable, or perhaps python script.py. It will be run by system, so the output is obtained once it completes, which is consistent with the attempt in the question. See the documentation for the module's operation.
This module is intended to be simple while it can "satisfy 99% of the need for using system, qx, and open3 [...]". For far more power and control see IPC::Run.
You're getting this error because you're using shell redirection instead of just passing an argument
../python/pythonScript.py < hi
tells your shell to read input from a file called hi in the current directory, rather than using it as an argument. What you mean to do is
my $ret = `../python/pythonScript.py hi`;
Which correctly executes your python script with the hi argument, and returns the result to the variable $ret.
Some of the other answers assume that hi must be passed as a command-line parameter to the Python script, but the asker says it comes from stdin.
Thus:
my $ret = `echo "hi" | ../python/pythonScript.py`;
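On the Python side, reading that piped string is a one-liner; process() below is a hypothetical stand-in for whatever pythonScript.py actually does with its input:

```python
import sys

def process(data):
    # Hypothetical stand-in for pythonScript.py's work on its stdin input.
    return "got: " + data.strip()

# In the real script the Perl-piped text arrives on stdin:
#     print(process(sys.stdin.read()))
print(process("hi"))  # -> got: hi
```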
To launch your external script you can do
system "python ../python/pythonScript.py hi";
and then in your python script
import sys

def yourFct(a):
    ...

if __name__ == "__main__":
    yourFct(sys.argv[1])
You can find more information on the Python part here

Broken-pipe Error Python subprocess [duplicate]

This question already has answers here:
'yes' reporting error with subprocess communicate()
(3 answers)
Closed 6 years ago.
I'm trying to launch several bash routines
from a GUI based software. The problem I'm facing is a piping issue.
Here the test bash-script (bashScriptTest.sh):
#!/bin/bash
#---------- Working
ls | sort | grep d > testFile.txt
cat testFile.txt
#---------- NOT working
echo $RANDOM > testFile2.txt
for i in `seq 1 15000`; do
    echo $RANDOM >> testFile2.txt
done
awk '{print $1}' testFile2.txt | sort -g | head -1
And here the python script that creates the error:
import subprocess
#
with open('log.txt','w') as outfile:
    CLEAN = subprocess.Popen("./bashScriptTest.sh", stdout=outfile, stderr=outfile)
    print CLEAN.pid
    OUTSEE = subprocess.Popen(['x-terminal-emulator', '-e', 'tail -f ' + outfile.name])
As you can see from running the Python script, the broken-pipe error is encountered not in the first three pipes (first line) but only after the heavy work done by awk.
I need to manage a huge quantity of routines and subroutines in bash, and using the shell=True flag doesn't change a thing.
I tried to write everything in the most Pythonic way, but unfortunately there is no chance I can rewrite all the piping steps inside Python.
Another thing to mention is that if you test the bash script inside a terminal
everything works fine.
Any help would be really appreciated. Thanks in advance!
EDIT 1:
The log file containing the error says:
bashScriptTest.sh
log.txt
stack.txt
testFile2.txt
test.py
3
sort: write failed: standard output: Broken pipe
sort: write error
Okay so this is a little bit obscure, but it just so happens that I ran across a similar issue while researching a question on the python-tutor mailing list some time ago.
The reason you're seeing different behavior when running your script via the subprocess module (in python) vs. bash directly, is that python overrides the disposition of SIGPIPEs to SIG_IGN (ignore) for all child processes (globally).
When the following pipeline is executed ...
awk '{print $1}' testFile2.txt | sort -g | head -1
... head will exit after it prints the first line of stdout from the sort command, due to the -1 flag. When the sort command attempts to write more lines to its stdout, a SIGPIPE is raised.
The default action of a SIGPIPE; when the pipeline is executed in a shell like bash, for example; is to terminate the sort command.
As stated earlier, python overrides the default action with SIG_IGN (ignore), so we end up with this bizarre, and somewhat inexplicable, behavior.
That's all well and good, but you might be wondering what to do now? It depends on the version of Python you're using ...
For Python 3.2 and greater, you're already set. subprocess.Popen in 3.2 added the restore_signals parameter, which defaults to True, and effectively solves the issue without further action.
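A minimal way to see this on Python 3.7+ (assuming a POSIX shell with seq, sort and head available; capture_output needs 3.7): head -1 closes the pipe early, and because restore_signals=True resets SIGPIPE to its default in the child, sort dies silently instead of complaining.

```python
import subprocess

# Reproduces the shape of the failing pipeline from the question.
cp = subprocess.run(
    "seq 1 100000 | sort -g | head -1",
    shell=True, capture_output=True, text=True,
)
print(cp.stdout.strip())  # -> 1
print(repr(cp.stderr))    # no "Broken pipe" complaint from sort
```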
For previous versions, you can supply a callable to the preexec_fn argument of subprocess.Popen, as in ...
import signal

def default_sigpipe():
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

# ...

with open('log.txt','w') as outfile:
    CLEAN = subprocess.Popen("./bashScriptTest.sh",
                             stdout=outfile, stderr=outfile,
                             preexec_fn=default_sigpipe)
I hope that helps!
EDIT: It should probably be noted that your program is actually functioning properly, AFAICT, as is. You're just seeing additional error messages that you wouldn't normally see when executing the script in a shell directly (for the reasons stated above).
See Also:
https://mail.python.org/pipermail/python-dev/2007-July/073831.html
https://bugs.python.org/issue1652

redirect output to a text file using windows shell '>' in python [duplicate]

This question already has answers here:
How to redirect output with subprocess in Python?
(6 answers)
Closed 7 years ago.
In my python script,
I am trying to run a windows program that prints output.
But I would like to redirect that output to a text file.
I tried
command = 'program' + arg1 + ' > temp.txt'
subprocess.call(command)
Where program is my program name and arg1 is argument it takes.
but it does not redirect the output to the text file
It just prints that on the screen.
Can anyone help me how to do this?
Thank you!
Pass a file object to the stdout parameter of subprocess.call():
with open('myoutfilename', 'w') as myoutfile:
    subprocess.call(cmd, stdout=myoutfile)
You can use shell=True in subprocess.call
However, a (much) better way to do this would be:
command = ['program', arg1]
with open('temp.txt','w') as fout:
    subprocess.call(command, stdout=fout)
This removes the shell from the whole thing making it more system independent, and it also makes your program safe from "shell injection" attacks (consider arg1='argument; rm -rf ~' or whatever the windows equivalent is).
The context manager (with statement) is a good idea as it guarantees that your file object is properly flushed and closed when you leave the "context".
Note that it is important that if you're not using shell=True to a subprocess.Popen (or similar) class, you should pass the arguments as a list, not a string. Your code will be more robust that way. If you want to use a string, python provides a convenience function shlex.split to split a string into arguments the same way your shell would. e.g.:
import subprocess
import shlex

with open('temp.txt','w') as fout:
    cmd = shlex.split('command argument1 argument2 "quoted argument3"')
    #cmd = ['command', 'argument1', 'argument2', 'quoted argument3']
    subprocess.call(cmd, stdout=fout)
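To make the shlex.split() behaviour concrete, here is what it produces for the string above:

```python
import shlex

cmd = shlex.split('command argument1 argument2 "quoted argument3"')
print(cmd)  # -> ['command', 'argument1', 'argument2', 'quoted argument3']
```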
