I have a simple Python script that asks for your name, then spits it back out:
import sys

def main():
    print('Enter your name: ')
    for line in sys.stdin:
        print 'You entered: ' + line

main()
Pretty simple stuff! When running this in the OS X Terminal, it works great:
$ python nameTest.py
Enter your name:
Craig^D
You entered: Craig
But when I run this process via an NSTask, its stdout only appears if additional flush() calls are added to the Python script.
This is how I have my NSTask and piping configured:
NSTask *_currentTask = [[NSTask alloc] init];
_currentTask.launchPath = @"/usr/bin/python";
_currentTask.arguments = [NSArray arrayWithObject:@"nameTest.py"];

// One pipe for both stdout and stderr.
NSPipe *pipe = [[NSPipe alloc] init];
_currentTask.standardOutput = pipe;
_currentTask.standardError = pipe;

// Poll the pipe on a background queue and append whatever arrives.
dispatch_queue_t stdout_queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);
__block dispatch_block_t checkBlock;
checkBlock = ^{
    NSData *readData = [[pipe fileHandleForReading] availableData];
    NSString *consoleOutput = [[NSString alloc] initWithData:readData encoding:NSUTF8StringEncoding];
    dispatch_sync(dispatch_get_main_queue(), ^{
        [self.consoleView appendString:consoleOutput];
    });
    if ([_currentTask isRunning]) {
        [NSThread sleepForTimeInterval:0.1];
        checkBlock();
    } else {
        // The task has exited: drain anything left in the pipe.
        dispatch_sync(dispatch_get_main_queue(), ^{
            NSData *readData = [[pipe fileHandleForReading] readDataToEndOfFile];
            NSString *consoleOutput = [[NSString alloc] initWithData:readData encoding:NSUTF8StringEncoding];
            [self.consoleView appendString:consoleOutput];
        });
    }
};
dispatch_async(stdout_queue, checkBlock);
[_currentTask launch];
But when running the NSTask, this is how it appears (it is initially blank, but after entering my name and pressing CTRL+D, it finishes all at once):
Craig^DEnter your name:
You entered: Craig
So, my question is: How can I read the stdout from my NSTask without requiring the additional flush() statements in my Python script? Why does the Enter your name: prompt not appear immediately when run as an NSTask?
When Python sees that its standard output is a terminal, it arranges to automatically flush sys.stdout before the script reads from sys.stdin. When you run the script using NSTask, the script's standard output is a pipe, not a terminal, so sys.stdout is fully buffered: output accumulates in the buffer and is only written out when the buffer fills or the process exits.
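You can check the distinction from the script itself; a minimal sketch (works under Python 2 or 3):

import sys

# On a terminal, isatty() is True and stdout is line-buffered;
# on a pipe it is False and stdout is block-buffered.
sys.stderr.write('stdout is a tty: %s\n' % sys.stdout.isatty())

print('Enter your name: ')
sys.stdout.flush()  # an explicit flush pushes the prompt through in either case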
UPDATE
There is a Python-specific solution to this. You can pass the -u flag to the Python interpreter (e.g. _currentTask.arguments = @[ @"-u", @"nameTest.py" ];), which tells Python not to buffer standard input, standard output, or standard error at all. You can also set PYTHONUNBUFFERED=1 in the process's environment to achieve the same effect.
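If you are free to modify the script and move it to Python 3, print() can also flush per call (Python 3.3+); a sketch of the same script with that change:

import sys

print('Enter your name: ', flush=True)
for line in sys.stdin:
    print('You entered: ' + line, end='', flush=True)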
ORIGINAL
A more general solution that applies to any program uses what's called a “pseudo-terminal” (or, historically, a “pseudo-teletype”), which we shorten to just “pty”. (In fact, this is what the Terminal app itself does. It is a rare Mac that has a physical terminal or teletype connected to a serial port!)
Each pty is actually a pair of virtual devices: a slave device and a master device. The bytes you write to the master, you can read from the slave, and vice versa. So these devices are more like sockets (which are bidirectional) than like pipes (which are one-directional). In addition, a pty lets you set terminal I/O flags (or "termios") that control whether the slave echoes its input, whether it passes on its input a line at a time or a character at a time, and more.
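You can watch a master/slave pair in action from Python; a minimal sketch:

import os, termios

master, slave = os.openpty()    # one pty, two file descriptors
os.write(master, b'hello\n')    # bytes written to the master...
print(os.read(slave, 100))      # ...come out of the slave (and vice versa)

attrs = termios.tcgetattr(slave)    # the termios flags live on the slave side
attrs[3] &= ~termios.ECHO           # e.g. stop the slave from echoing its input
termios.tcsetattr(slave, termios.TCSANOW, attrs)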
Anyway, you can open a master/slave pair easily with the openpty function. Here's a little category that you can use to make an NSTask object use the slave side for the task's standard input and output.
NSTask+PTY.h
@interface NSTask (PTY)
- (NSFileHandle *)masterSideOfPTYOrError:(NSError **)error;
@end
NSTask+PTY.m
#import "NSTask+PTY.h"
#import <util.h>
@implementation NSTask (PTY)
- (NSFileHandle *)masterSideOfPTYOrError:(NSError *__autoreleasing *)error {
    int fdMaster, fdSlave;
    int rc = openpty(&fdMaster, &fdSlave, NULL, NULL, NULL);
    if (rc != 0) {
        if (error) {
            *error = [NSError errorWithDomain:NSPOSIXErrorDomain code:errno userInfo:nil];
        }
        return nil;
    }
    // Keep the descriptors from leaking into any other children we spawn.
    fcntl(fdMaster, F_SETFD, FD_CLOEXEC);
    fcntl(fdSlave, F_SETFD, FD_CLOEXEC);
    NSFileHandle *masterHandle = [[NSFileHandle alloc] initWithFileDescriptor:fdMaster closeOnDealloc:YES];
    NSFileHandle *slaveHandle = [[NSFileHandle alloc] initWithFileDescriptor:fdSlave closeOnDealloc:YES];
    // The task talks to the slave side; the caller keeps the master side.
    self.standardInput = slaveHandle;
    self.standardOutput = slaveHandle;
    return masterHandle;
}
@end
You can use it like this:
NSTask *_currentTask = [[NSTask alloc] init];
_currentTask.launchPath = @"/usr/bin/python";
_currentTask.arguments = @[ [[NSBundle mainBundle] pathForResource:@"nameTest" ofType:@"py"] ];
NSError *error;
NSFileHandle *masterHandle = [_currentTask masterSideOfPTYOrError:&error];
if (!masterHandle) {
    NSLog(@"error: could not set up PTY for task: %@", error);
    return;
}
Then you can read from the task and write to the task using masterHandle.
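For comparison, here is the same idea sketched in Python, with nameTest.py as above and all error handling omitted:

import os, subprocess

master, slave = os.openpty()
task = subprocess.Popen(['python', 'nameTest.py'],
                        stdin=slave, stdout=slave, stderr=slave)
os.close(slave)                 # the parent keeps only the master side
print(os.read(master, 1024))    # the prompt arrives without any flush() calls
os.write(master, b'Craig\n')    # write to the task
print(os.read(master, 1024))    # echoed input, then the reply (possibly in separate reads)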
Related
I can call a python script in C# and redirect the output/error using the following code:
using System.Diagnostics;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
// Configure new process
Process cmd = new Process();
cmd.StartInfo.FileName = "python.exe";
cmd.StartInfo.Arguments = "<PYTHON FILE>";
cmd.StartInfo.RedirectStandardInput = true;
cmd.StartInfo.RedirectStandardOutput = true;
cmd.StartInfo.RedirectStandardError = true;
cmd.StartInfo.CreateNoWindow = true;
cmd.StartInfo.UseShellExecute = false;
cmd.StartInfo.WorkingDirectory = "<PATH TO WORKING DIRECTORY>";
string stdout, stderr;
// Start process and wait for the process to exit
cmd.Start();
cmd.StandardInput.WriteLine();
cmd.StandardInput.Flush();
cmd.StandardInput.Close();
cmd.WaitForExit();
stderr = cmd.StandardError.ReadToEnd();
stdout = cmd.StandardOutput.ReadToEnd();
// Check for error
if (!string.IsNullOrEmpty(stderr))
{
Console.WriteLine(stderr);
}
else
{
Console.WriteLine(stdout);
}
}
}
}
I have been trying to debug the python file using the following:
using System.Diagnostics;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
// Configure new process
Process cmd = new Process();
cmd.StartInfo.FileName = "python.exe";
cmd.StartInfo.Arguments = "-mpdb <PYTHON FILE>";
cmd.StartInfo.RedirectStandardInput = true;
cmd.StartInfo.RedirectStandardOutput = true;
cmd.StartInfo.RedirectStandardError = true;
cmd.StartInfo.CreateNoWindow = true;
cmd.StartInfo.UseShellExecute = false;
cmd.StartInfo.WorkingDirectory = "<PATH TO WORKING DIRECTORY>";
string stdin = "", stdout, stderr;
cmd.Start();
while (true)
{
// This works the first iteration, but throws a 'Cannot write to a closed TextWriter' error on the following iteration
// But I need to close the StandardInput in order to get the output from the process (stdout/stderr)
cmd.StandardInput.WriteLine(stdin);
cmd.StandardInput.Flush();
cmd.StandardInput.Close();
cmd.WaitForExit();
stderr = cmd.StandardError.ReadToEnd();
stdout = cmd.StandardOutput.ReadToEnd();
// Check if the CMD was valid
if (!string.IsNullOrEmpty(stderr))
{
Console.WriteLine(stderr);
}
else
{
Console.WriteLine(stdout);
}
stdin = Console.ReadLine();
if (string.IsNullOrEmpty(stdin)) break;
}
cmd.WaitForExit();
}
}
}
I am sort of in a catch-22 problem where I want stdin to remain open so I can feed more commands to the python/pdb process (i.e. 'step', 'next', 'continue'...), but in order to get stdout or stderr I need to close and wait. Calling cmd.Start(); multiple times does not work because that just restarts the process using the same StartInfo.FileName and StartInfo.Arguments.
Am I going about this the wrong way or is there something trivial that I have missed in the documentation? Thanks in advance.
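For what it's worth, the usual way out of this catch-22 in any language is to consume stdout on a separate thread (or asynchronously) so that stdin can stay open between commands. A rough sketch of the pattern in Python; a real pdb driver would read raw chunks rather than lines, since the (Pdb) prompt has no trailing newline:

import subprocess, threading

proc = subprocess.Popen(['python', '-m', 'pdb', 'script.py'],  # script.py is a placeholder
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT)

def pump():
    for line in iter(proc.stdout.readline, b''):  # runs until the pipe closes
        print(line.decode(errors='replace'), end='')

threading.Thread(target=pump, daemon=True).start()

for cmd in (b'next', b'continue'):
    proc.stdin.write(cmd + b'\n')  # stdin stays open between commands
    proc.stdin.flush()
proc.stdin.close()
proc.wait()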
I made a Linux background process (in C++) that monitors a directory and attempts to launch a Python script if a certain file appears in that directory. My issue is that the child process responsible for launching the Python script exits immediately after the execvp function is called, and I can't understand why. All of the necessary files are under root's ownership. Here is my code if it helps; I have marked the line where the error occurs. I have also included the Python script being called.
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>
using namespace std;
char* arguments[3];
FILE* fd;
const char* logFilePath = "/home/BluetoothProject/Logs/fileMonitorLogs.txt";
char* rfcommPath = (char*)"/home/BluetoothProject/RFCOMMOut.py";
void logToFile(const char*);
void doWork();
void logToFile(const char* str) {
fd = fopen(logFilePath, "a");
fprintf(fd, "%s\n", str);
fclose(fd);
}
int main() {
arguments[0] = (char*)"python";
arguments[1] = rfcommPath;
arguments[2] = NULL;
pid_t pid = fork();
if(pid < 0) {
printf("Fork failed");
exit(1);
} else if(pid > 0) {
exit(EXIT_SUCCESS);
}
umask(0);
pid_t sid = setsid();
if(sid < 0) {
logToFile("setsid() didn't work.");
exit(1);
}
if ((chdir("/")) < 0) {
logToFile("chdir() didn't work.");
exit(EXIT_FAILURE);
}
close(STDIN_FILENO);
close(STDOUT_FILENO);
close(STDERR_FILENO);
doWork();
}
void doWork() {
pid_t pid = fork();
if(pid < 0) {
logToFile("doWork() fork didn't work.");
} else if(pid > 0) {
int status = 0;
waitpid(pid, &status, 0);
if(WEXITSTATUS(status) == 1) {
logToFile("Child process exited with an error.");
}
} else {
int error = execvp(arguments[0], arguments); //Here is where the error is
if(error == -1) {
logToFile("execvp() failed.");
}
exit(1);
}
}
Python script (AKA RFCOMMOut.py)
import RPi.GPIO as gpio
import serial
led_state = 0
led_pin = 11
gpio.setmode(gpio.BOARD)
gpio.setwarnings(False)
gpio.setup(led_pin, gpio.OUT)
try:
ser = serial.Serial(port = '/dev/rfcomm0',
baudrate = 9600,
parity = serial.PARITY_NONE,
stopbits = serial.STOPBITS_ONE,
bytesize = serial.EIGHTBITS)
except IOException as e:
logFile = open("/home/BluetoothProject/Logs/fileMonitorLogs.txt", "a")
logFile.write("(First error handler) There was an exception:\n")
logFile.write(str(e))
logFile.write("\n")
logFile.close()
#gpio.output
def process_input(input):
global led_state
if input == "I have been sent.\n":
if led_state == 1:
led_state = 0
gpio.output(led_pin, led_state)
else:
led_state = 1
gpio.output(led_pin, led_state)
while True:
try:
transmission = ser.readline()
process_input(transmission)
except IOError as e:
logFile = open("/home/BluetoothProject/Logs/fileMonitorLogs.txt", "a")
logFile.write("(second error handler) There was an exception:\n")
logFile.write(str(e))
logFile.write("\n")
logFile.close()
break
led_state = 0
gpio.output(led_pin, led_state)
gpio.cleanup()
print("End of program\n")
The question is a little unclear, so I'll try to take a few different educated guesses at what the problem is and address each one individually.
TL;DR: Remove close(STDOUT_FILENO) and close(STDERR_FILENO) to get more debugging information which will hopefully point you in the right direction.
execvp(3) is returning -1
According to the execvp(3) documentation, execvp(3) sets errno when it fails. In order to understand why it is failing, your program will need to output the value of errno somewhere; perhaps stdout, stderr, or your log file. A convenient way to do this is to use perror(3). For example:
#include <stdio.h>
...
void doWork() {
...
} else {
int error = execvp(arguments[0], arguments);
if(error == -1) {
perror("execvp() failed");
}
}
...
}
Without knowing what that errno value is, it will be difficult to identify why execvp(3) is failing.
execvp(3) is succeeding, but my Python program doesn't appear to run
execvp(3) succeeding means that the Python interpreter has successfully been invoked (assuming that there is no program in your PATH that is named "python", but is not actually a Python interpreter). If your program doesn't appear to be running, that means Python is having difficulty loading your program. To my knowledge, Python will always output relevant error messages in this situation to stderr; for example, if Python cannot find your program, it will output "No such file or directory" to stderr.
However, it looks like your C program is calling close(STDERR_FILENO) before calling doWork(). According to fork(2), child processes inherit copies of their parent's set of open file descriptors. This means that calling close(STDERR_FILENO) before forking will result in the child process not having an open stderr file descriptor. If Python is having any errors executing your program, you'll never know, since Python is trying to notify you through a file descriptor that doesn't exist. If execvp(3) is succeeding and the Python program appears to not run at all, then I recommend you remove close(STDERR_FILENO) from your C program and run everything again. Without seeing the error message output by Python, it will be difficult to identify why it is failing to run the Python program.
As an aside, I recommend against explicitly closing stdin, stdout, and stderr. According to stdin(3), the standard streams are closed by a call to exit(3) and by normal program termination.
execvp(3) is succeeding, my Python program is running, but my Python program exits before it does any useful work
In this case, I'm not sure what the problem might be, since I'm not very familiar with Raspberry Pi. But I think you'll have an easier time debugging if you don't close the standard streams before running the Python program.
Hope this helps.
I have a web interface built with Spring and I want to execute the command "python file.py" from it.
The main problem is that inside the file.py there is a pyomo model that is supposed to give some output. I can execute a python script if it's a simple print or something, but the pyomo model is completely ignored.
What could be the reason?
Here is the code I wrote in the controller to execute the call:
@PostMapping("/execute")
public void execute(@ModelAttribute("component") @Valid Component component, BindingResult result, Model model) {
Process process = null;
//System.out.println("starting!");
try {
process = Runtime.getRuntime().exec("python /home/chiara/Documents/GitHub/Pyomo/Solver/test/sample.py");
//System.out.println("here!");
} catch (Exception e) {
System.out.println("Exception Raised" + e.toString());
}
InputStream stdout = process.getInputStream();
BufferedReader reader = new BufferedReader(new InputStreamReader(stdout, StandardCharsets.UTF_8));
String line;
try {
while ((line = reader.readLine()) != null) {
System.out.println("stdout: " + line);
}
} catch (IOException e) {
System.out.println("Exception in reading output" + e.toString());
}
}
Update: what I was missing was that I hadn't checked which directory the code was running in. Be sure to check it, and if necessary move the input files (if you have any) into the directory where Python is executing; otherwise the script can't find them and process them.
You can use
cwd = os.getcwd()
to check the current working directory of a process.
Another possibility is to redirect stderr to the terminal or to a log file, because from the server terminal you won't see anything, even when there are errors.
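If you cannot easily capture stderr on the server, you can also log failures from inside the script itself; a small sketch, with a hypothetical log path:

import traceback

def run_model():
    pass  # placeholder for the pyomo model code

try:
    run_model()
except Exception:
    with open('/tmp/solver-error.log', 'a') as f:  # hypothetical log file
        traceback.print_exc(file=f)
    raise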
The code posted in the question is otherwise the correct way to invoke an external command from Java.
I want to write to std::cerr or std::cout from my MFC application. From a Python script I call this application and want to read its stdout or stderr.
Neither works. Using std::cout alone yields no output. After AllocConsole() I was at least able to print to a debug console, but unfortunately there is still no output on the Python side.
In my MFC application I initialize a console to write to with this code:
void BindStdHandlesToConsole()
{
// Redirect the CRT standard input, output, and error handles to the console
freopen("CONIN$", "r", stdin);
freopen("CONOUT$", "w", stdout);
freopen("CONOUT$", "w", stderr);
std::wcout.clear();
std::cout.clear();
std::wcerr.clear();
std::cerr.clear();
std::wcin.clear();
std::cin.clear();
}
// initialization
BOOL foo::InitInstance()
{
// allocate a console
if (!AllocConsole())
AfxMessageBox("Failed to create the console!", MB_ICONEXCLAMATION);
else
BindStdHandlesToConsole();
On the Python side, I try to print the output:
process = subprocess.Popen(args,stdout=subprocess.PIPE,shell=True)
output = process.stdout.read()
process.wait()
Is there a way to make my MFC program really write and the python script read the standard output?
Note that the freopen("CONOUT$", ...) calls rebind the CRT's standard streams to the newly allocated console, so anything written through std::cout goes to that console window rather than to the pipe the Python script created. The proper way of writing to the stdout pipe is the following:
HANDLE hStdOut = GetStdHandle(STD_OUTPUT_HANDLE);  // the handle inherited from the parent
const char chBuf[] = "output for the parent\r\n";
DWORD dwWritten = 0;
WriteFile(hStdOut, chBuf, sizeof(chBuf) - 1, &dwWritten, NULL);
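On the Python side, the pipe can then be read as data arrives; a sketch, with the executable name as a placeholder and no shell=True (which is unnecessary when invoking the binary directly):

import subprocess

proc = subprocess.Popen(['MyMfcApp.exe'],        # placeholder path to the MFC binary
                        stdout=subprocess.PIPE)
for line in proc.stdout:                         # read incrementally instead of all at once
    print(line.decode(errors='replace'), end='')
proc.wait()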
I need to create a script that calls an application (c++ binary) 4000 times. The application takes some arguments and for each call writes a zip file to disk. So when the script is executed 4000 zip files will be written to disk. The application supports multiple threads.
I first created a bash script that does the job, and it works fine. But now I need the script to be platform-independent, so I have tried to port it to Groovy, something like this:
for (int i = 1; i <= 4000; i++) {
def command = """myExecutable
a=$argA
b=$outDir"""
def proc = command.execute() // Call *execute* on the string
proc.waitFor() // Wait for the command to finish
// Obtain status and output
println "return code: ${ proc.exitValue()}"
println "stderr: ${proc.err.text}"
println "stdout: ${proc.in.text}" // *out* from the external program is *in* for groovy
println "iteration : " + i
}
But after 381 zip files have been written to disk, the script just hangs. Do I need to close the process after each call, or something similar?
Here:
http://groovy.codehaus.org/Process+Management
it says that it's known that java.lang.Process might hang or deadlock. Is it a no-go to do something like this in Groovy?
I will also give it a try in Python to see if it has the same problems.
It might be the output stream blocking:
(1..<4000).each { i ->
println "iteration : $i"
def command = """myExecutable
a=$argA
b=$outDir"""
def proc = command.execute()
// Consume the outputs from the process and pipe them to our output streams
proc.consumeProcessOutput( System.out, System.err )
// Wait for the command to finish
proc.waitFor()
// Obtain status
println "return code: ${proc.exitValue()}"
}
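The underlying failure mode is the same in any language: the child fills the pipe's buffer (typically a few tens of kilobytes), blocks in its write, and waitFor() then never returns. Since the question mentions trying Python, the equivalent safe pattern there is subprocess.communicate(), which drains both pipes while waiting; a sketch using placeholder arguments:

import subprocess

proc = subprocess.Popen(['myExecutable', 'a=argA', 'b=outDir'],  # placeholder arguments
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
out, err = proc.communicate()  # reads both pipes while waiting, so the child never blocks
print('return code:', proc.returncode)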
Yes, you should close the streams that belong to the process.
Or, as @tim_yates says, you should use consumeProcessOutput or, in a concurrent solution, waitForProcessOutput, which close them for you.
For parallel computation you could use something like this:
import groovyx.gpars.GParsPool
GParsPool.withPool(8){ // Start in pool with 8 threads.
(1..4000).toList().eachParallel {
def p = "myExecutable a=$argA b=$outDir".execute()
def sout = new StringBuffer();
def serr = new StringBuffer();
p.waitForProcessOutput(sout, serr)
synchronized (System.out) {
println "return code: ${ p.exitValue()}"
println "stderr: $serr"
println "stdout: $sout"
println "iteration $it"
}
}
}