Equivalent of subprocess.call() in C [duplicate] - python

In C, how should I execute an external program and get its results as if it had been run in the console?
If there is an executable called dummy that displays a 4-digit number in the command prompt when executed, I want to know how to run that executable from C and capture the 4-digit number it generated.

popen() handles this quite nicely. For instance if you want to call something and read the results line by line:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char buffer[140];
    FILE *in;
    const char *somecommand = "./dummy";  /* the program whose output you want */

    if (!(in = popen(somecommand, "r"))) {
        exit(1);
    }
    while (fgets(buffer, sizeof(buffer), in) != NULL) {
        /* buffer now holds one line of the command's output; do with it what you will */
    }
    pclose(in);
    return 0;
}
This has worked for me before; hopefully it's helpful. Make sure to include stdio.h in order to use popen().

You can use popen() on UNIX.

This is not actually something ISO C can do on its own (by that I mean the standard itself doesn't provide this capability) - possibly the most portable solution is to simply run the program, redirecting its standard output to a file, like:
system ("myprog >myprog.out");
then use the standard ISO C fopen/fread/fclose to read that output into a variable.
This is not necessarily the best solution since that may depend on the underlying environment (and even the ability to redirect output is platform-specific) but I thought I'd add it for completeness.
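Not part of the original answer, but here is a minimal sketch of what that read-back step might look like, assuming myprog prints a single line to the redirected file:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char result[64];
    FILE *fp;

    /* Run the program with its output redirected (the redirection itself is platform-specific). */
    if (system("myprog >myprog.out") != 0) {
        return 1;
    }

    /* Read the captured output back with plain ISO C file I/O. */
    if (!(fp = fopen("myprog.out", "r"))) {
        return 1;
    }
    if (fgets(result, sizeof(result), fp) != NULL) {
        printf("myprog printed: %s", result);
    }
    fclose(fp);
    return 0;
}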

There is popen() on unix as mentioned before, which gives you a FILE* to read from.
Alternatively on unix, you can use a combination of pipe(), fork(), exec(), select(), and read(), and wait() to accomplish the task in a more generalized/flexible way.
The popen library call invokes fork and pipe under the hood to do its work. Using it, you're limited to simply reading whatever the process dumps to stdout (which you could use the underlying shell to redirect). Using the lower-level functions you can do pretty much whatever you want, including reading stderr and writing stdin.
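For illustration, a rough sketch of that lower-level route (a sketch only, assuming a POSIX system and the ./dummy program from the question; select() is omitted since only one descriptor is read, and error handling is minimal):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int fds[2];
    pid_t pid;
    char buffer[256];
    ssize_t n;

    if (pipe(fds) == -1) {
        perror("pipe");
        return 1;
    }

    pid = fork();
    if (pid == -1) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {                        /* child: wire stdout to the pipe, then exec */
        close(fds[0]);                     /* child never reads */
        dup2(fds[1], STDOUT_FILENO);
        close(fds[1]);
        execl("./dummy", "dummy", (char *)NULL);
        _exit(127);                        /* only reached if exec failed */
    }

    close(fds[1]);                         /* parent: read whatever the child prints */
    while ((n = read(fds[0], buffer, sizeof(buffer) - 1)) > 0) {
        buffer[n] = '\0';
        printf("child said: %s", buffer);
    }
    close(fds[0]);
    wait(NULL);                            /* reap the child */
    return 0;
}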
On Windows, see calls like CreatePipe() and CreateProcess(), with the I/O members of STARTUPINFO set to your pipes. You can get a file descriptor to do read()s on by passing the pipe handle to _open_osfhandle(). Depending on the app, you may need to read from multiple threads, or it may be okay to block.

Related

How to run a subprocess and store the results in a file?

I am trying to run a hive/spark-submit from Python using the subprocess module. I am trying to write the output to a file (a log file). Can you please help me with this?
import subprocess
file = ["hive" "-f" "test.sql"]
process = subprocess.Popen(file,shell=False,stderr=subprocess.PIPE,
stdout=subprocess.STDOUT,universal_newlines=True)
process.wait()
out,err=process.communicate()
I need to write the output to a new file, say test.log or test.txt.
You have an error in your command; the list needs commas between the strings (otherwise you are pasting the individual strings together into the single long string "hive-ftest.sql"!).
As pointed out in the subprocess documentation, you should generally avoid bare Popen when you can. If all you need is for a command to run to completion, subprocess.run or its legacy siblings check_call et al. should be preferred for simplicity and robustness.
import subprocess

filename = "test.log"  # the log file the question asks for

# Renamed the variable; this is not a "file" by any stretch
cmd = ["hive", "-f", "test.sql"]
with open(filename, "wb") as outputfile:
    process = subprocess.run(cmd, stdout=outputfile, check=True)
Specifying a binary output mode avoids having Python try to infer anything about the encoding of the bytes emitted; if you need to process text, you might want to add an encoding= keyword argument to the subprocess call.
Not specifying any destination for stderr means error messages will be displayed to the user, which is probably a useful simplification if the tool will be invoked interactively. If not, you will probably need to capture any diagnostic messages and display them in a log file or something.
check=True specifies that Python should check that the command succeeds, and raise an exception if not. This is usually good hygiene, but might need to be tweaked if the command you run could emit an error status in situations where your use case could nevertheless be completed, or if you need to avoid tracebacks in unattended use.
shell=False is the default, and so I omitted that.
I can see no reason to store the command in a variable, but perhaps you have one. Inlining the command will avoid having to come up with a useful name for the variable (^:
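If you do end up needing the diagnostics in the log as well (see the note about stderr above), one possible variant is to fold them into the same file; a sketch, assuming the test.log name from the question:
import subprocess

cmd = ["hive", "-f", "test.sql"]
with open("test.log", "wb") as outputfile:
    # stderr=subprocess.STDOUT folds hive's error messages into the same log
    subprocess.run(cmd, stdout=outputfile, stderr=subprocess.STDOUT, check=True)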

atomic write with Python's print function

I want to do an atomic write with Python's print function. I already found this answer:
How can I do an atomic write to stdout in python?
But that uses sys.stdout.write. I want to be more flexible and use print instead. Apparently this matters: when I implemented the same code with print, my output turned out not to be correct.
lock = Lock()

def synced_out(*inp):
    with lock:
        print(*inp, file=args.out, sep=args.field_seperator)
Apparently it matters that I use print and not sys.stdout.write.
Full code here, if you expect that is not possible and I might be doing something else wrong:
https://termbin.com/s9ox
In the case where corruption occurred, the file was sys.stdout, but I was redirecting it to a file anyway. However, I want to keep the --out flag so that people less familiar with "> file" redirection can still use it, and so the script still works in a pipe; I just want to maintain that flexibility.
Python 3.5.2
Linux Ubuntu 16.04
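Not an answer from the thread, but one detail of how print behaves may be relevant here: print with several positional arguments writes each argument and each separator to the file object one at a time, so it performs more underlying writes than a single sys.stdout.write call. Below is a sketch that pre-joins the record so print has only one string (plus its line ending) to emit; args and the field_seperator spelling are taken from the linked code, and a threading lock is assumed:
from threading import Lock  # for multiple processes a multiprocessing.Lock would be needed instead

lock = Lock()

def synced_out(*inp):
    # Join everything first so print emits a single pre-built string;
    # print(*inp, sep=...) would otherwise write each piece separately.
    line = args.field_seperator.join(str(x) for x in inp)
    with lock:
        print(line, file=args.out, flush=True)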

How can I redirect outputs from a Python process into a Rust process?

I am trying to spawn a Rust process from a Python program and redirect Python's standard output into its standard input. I have used the following function:
process = subprocess.Popen(["./target/debug/mypro"], stdin=subprocess.PIPE)
and tried to write to the subprocess using:
process.stdin.write(str.encode(json.dumps(dictionnaire[str(index)]))) #Write bytes of Json representation of previous track
I am not getting any errors but standard input in Rust doesn't seem to take any input and standard output isn't printing anything at all.
Here's the version of the Rust code I am currently running:
extern crate rustc_serialize;
use rustc_serialize::json::Json;
use std::fs::File;
use std::io;
use std::env;
use std::str;
fn main() {
    let mut buffer = String::new();
    let stdin = io::stdin();
    //stdin.lock();
    stdin.read_line(&mut buffer).unwrap();
    println!("{}", buffer);
    println!("ok");
}
process.stdin.write(str.encode(json.dumps(dictionnaire[str(index)]))) does not add a newline character, so on the Rust side I was never reaching the end of a line, which left the process blocked in read_line.
Adding it manually made everything work smoothly.
process.stdin.write(str.encode(json.dumps(dictionnaire[str(index)]) + "\n"))
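Putting it together, a minimal sketch of the Python side (dictionnaire and index are the question's own variables); the explicit flush() is worth adding because process.stdin is a buffered pipe and the bytes can otherwise sit in Python's buffer:
import json
import subprocess

process = subprocess.Popen(["./target/debug/mypro"], stdin=subprocess.PIPE)

# read_line on the Rust side blocks until it sees "\n", so terminate the record
payload = (json.dumps(dictionnaire[str(index)]) + "\n").encode()
process.stdin.write(payload)
process.stdin.flush()  # push the buffered bytes through the pipe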
This may be a problem on the Python side
subprocess.run(["cargo run -- " + str(r)], shell=True)
This assumes that you have a numeric file descriptor that remains open across fork and exec. Spawning processes may close file descriptors either because they're marked as CLOEXEC or due to explicit cleanup code before exec.
Before attempting to pass a numeric file descriptor as a string argument, you should make sure that it will remain valid in the new process.
A better approach is to use some process spawning API that allows you to explicitly map the file descriptors in the new process to open handles or an API that spawns a process with stdin/out tied to pipes.
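For example, on POSIX the subprocess module can keep a chosen descriptor open in the child via pass_fds, which is one way of doing that explicit mapping. A rough sketch, assuming a hypothetical ./child-program that accepts the descriptor number as an argument:
import os
import subprocess

read_fd, write_fd = os.pipe()

# pass_fds keeps read_fd open across fork/exec in the child (POSIX only);
# the child is told which descriptor number to read from via argv.
proc = subprocess.Popen(
    ["./child-program", str(read_fd)],  # hypothetical child program
    pass_fds=(read_fd,),
)
os.close(read_fd)             # the parent only needs the write end
os.write(write_fd, b"hello\n")
os.close(write_fd)
proc.wait()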

In Python, what is the difference between open(file).read() and subprocess(['cat', file]) and is there a preference for one over the other?

Let's say I want to read RAM usage from /proc/meminfo. There are two basic ways to do this that I can think of.
Use a shell command
output = subprocess.check_output('cat /proc/meminfo', shell=True)
# or output = subprocess.check_output(['cat', '/proc/meminfo'])
lines = output.splitlines()
Use open()
with open('/proc/meminfo') as meminfo:
    output = meminfo.read()
lines = output.splitlines()
My question is what is the difference between the two methods? Is there a significant performance difference? My assumption is that using open() is the preferred method, since using a shell command is a bit hackish and may be system dependent, but I can't find any information on this so I thought I'd ask.
So, let's look at what output = subprocess.check_output('cat /proc/meminfo', shell=True) does:
Creates a pipe with the pipe() syscall, and spawns a shell running sh -c 'cat /proc/meminfo' writing to the write end of that pipe (while the Python interpreter itself watches for output on the read end, either using the select() call or blocking IO operations). This means opening /bin/sh, opening all the libraries it depends on, etc.
The shell parses those arguments as code. This can be dangerous if, instead of opening /proc/meminfo, you're instead opening /tmp/$(rm -rf ~)/pwned.txt.
The shell forks a subprocess (optionally; shells may have an implicit exec), which then uses the execve system call to invoke /bin/cat with an argv of ['cat', '/proc/meminfo'] -- meaning that /bin/cat is again loaded as an executable, with its dynamic libraries, with all the performance overhead that implies.
/bin/cat then opens /proc/meminfo, reads from it, and writes to its stdout
The shell, if it did not use the implicit-exec optimization, waits for the /bin/cat executable to finish and exit using a wait()-family syscall.
The Python interpreter reads from the read end of the pipe until it reaches EOF (which will not happen until after cat has closed its output pipeline, potentially by exiting), and then uses a wait()-family call to retrieve information on how the shell it spawned exited, checking that exit status to determine whether an error occurred.
Now, let's look at what open('/proc/meminfo').read() does:
Opens the file using the open() syscall.
Reads the file using the read() syscall.
Drops the reference count on the file, allowing it to be closed (either immediately or on a future garbage collection pass) with the close() syscall.
One of these things is much, much, much more efficient and generally sensible than the other.
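If you want to see the gap for yourself, a quick (and admittedly unscientific) comparison with timeit, assuming a Linux machine where /proc/meminfo exists:
import subprocess
import timeit

def via_subprocess():
    return subprocess.check_output(['cat', '/proc/meminfo']).splitlines()

def via_open():
    with open('/proc/meminfo', 'rb') as meminfo:
        return meminfo.read().splitlines()

print("subprocess:", timeit.timeit(via_subprocess, number=100))
print("open():    ", timeit.timeit(via_open, number=100))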

Python inline linux commands

I am testing sorting algorithms, and I would like to combine, in my Python code, the Linux command "time" (because it takes some interesting arguments) with, for example, the call to quicksort.
from subprocess import Popen
import quicksort
import rand
time=Popen("time quicksort.main(rand.main())")
This is totally wrong, but it is the closest I managed to get. I haven't grasped the idea of the subprocess module: is it possible to combine method calls with Linux commands, or can you only run commands like "grep ..." from Python and send the output to a variable?
If you use Popen from subprocess you need to do a lot of things differently.
I believe what you are looking for is check_output, another function belonging to the subprocess module.
But in order to further your understanding, since you are sort-of close, here is what you need to change to get it to work:
The command string "time quicksort.main(rand.main())" is not going to mean anything to bash. That is Python. BUT if it were a valid shell command, it would need to be split on word boundaries (as bash WOULD normally do), so you would make it into a list:
['time', '...','...']
The only time you can pass Popen a command STRING (not a list) is when you set shell=True in the keywords to Popen.
But let's just leave shell at False, do some word-splitting for bash, and pass in a list. On to the next part.
Popen returns something you can communicate to/at/with. Not the result of the process' stdout. Use subprocess.PIPE for stdin and stdout keywords to Popen.
Once you have made a Popen object as described, you can call its communicate method.
The result is two things, stdout and stderr.
You're after the first one. One use case for Popen is when you need to keep errors and output separate. Obviously this isn't turning out to be the best option for inline use, but oh well. Let's deal with stdout.
stdout will probably need to be decoded:
stdout.decode()
or perhaps even have newlines stripped as well:
stdout.decode().rstrip()
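Putting those pieces together, a minimal sketch of the Popen route described above (using an ordinary external command here, since, as noted, the original string is Python rather than shell):
from subprocess import Popen, PIPE

# command split on word boundaries into a list; shell stays at its default of False
proc = Popen(['uname', '-a'], stdout=PIPE, stderr=PIPE)
stdout, stderr = proc.communicate()
print(stdout.decode().rstrip())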
So as you can see, Popen does not fit the use case you have in mind. There is no need to use subprocess and make system calls in order to time Python code. Look into timeit.
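For instance, a sketch along those lines, assuming quicksort.main() sorts whatever rand.main() returns (both modules are the question's own; if the sort happens in place, pass a fresh copy per run, e.g. lambda: quicksort.main(list(data))):
import timeit

import quicksort
import rand

data = rand.main()  # build the input once so only the sort itself is timed
elapsed = timeit.timeit(lambda: quicksort.main(data), number=100)
print("100 runs took", elapsed, "seconds")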
