Handling stdin and stdout of subprocess [duplicate] - python

This question already has answers here:
Dynamic communication between main and subprocess in Python
(2 answers)
Closed 3 years ago.
I want to write a Python script that creates a subprocess, reads from its stdout, and writes to its stdin. What is written to stdin should depend on what has been read from stdout.
I've tried pretty much everything I could find about subprocess.Popen, but nothing worked out.
Basically, I want to write a script that makes the following C code print "success":
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int var, inp;
    for (int i = 0; i < 100; i++) {
        var = rand();
        printf("Please type %d:\n", var); // random var. is printed
        scanf("%d", &inp);                // inp == var?
        if (inp != var) {
            printf("you failed miserably\n");
            return 0;
        }
    }
    printf("success\n");
    return 0;
}
I'm failing at reading from stdout while still keeping the subprocess alive. The task seems so simple, but I can't find a simple solution.
Python code that I would expect to work:
from subprocess import *

def getNum(s):  # "Please type 1234567:\t" -> "1234567"
    return "".join([t for t in s if t.isdigit()])

p = Popen("./io", stdin=PIPE, stdout=PIPE)  # io is the binary obtained from the above C code
for i in range(100):
    out = p.stdout.readline()  # script deadlocks here
    print(out)
    inp = getNum(out) + "\n"   # convert out into desired inp
    p.stdin.write(inp)
print(p.communicate()[0])  # kill p and get last output
This approach might be a little naive, but I also don't understand why it doesn't work.

The Python program gets stuck waiting to receive something on stdout. This can be due to stdout buffering. There are probably a few ways to change this behavior, but one quick way to test it is to change the C executable to force an fflush right after it prints the randomly generated number. The C code would look like this:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    int var, inp;
    for (int i = 0; i < 100; i++) {
        var = rand();
        printf("Please type %d:\n", var); // random var. is printed
        // ADDED EXPLICIT stdout flush
        fflush(stdout);
        scanf("%d", &inp);                // inp == var?
        if (inp != var) {
            printf("you failed miserably\n");
            return 0;
        }
    }
    printf("success\n");
    return 0;
}
With that change, the original Python script is able to drive the whole interaction.
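For reference, once the child flushes after each prompt, a driver loop along these lines works. This is a minimal sketch: it substitutes a small Python child for the ./io binary so it runs standalone; with the real program you would pass ["./io"] to Popen instead.

```python
import re
import subprocess
import sys

# Stand-in for the flushed C binary (an assumption, so the sketch is
# self-contained): prints a prompt, checks that the same number comes back.
child_src = r"""
import random, sys
for _ in range(5):
    var = random.randrange(10**6)
    print(f"Please type {var}:", flush=True)
    if int(input()) != var:
        print("you failed miserably", flush=True)
        sys.exit(0)
print("success", flush=True)
"""

p = subprocess.Popen([sys.executable, "-c", child_src],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     text=True)  # text=True: exchange str instead of bytes

for _ in range(5):
    prompt = p.stdout.readline()               # returns once the child flushes
    number = re.search(r"\d+", prompt).group() # pull the digits out of the prompt
    p.stdin.write(number + "\n")
    p.stdin.flush()                            # flush our side too, or the child blocks

last = p.communicate()[0]                      # close stdin, collect remaining output
print(last.strip())
```

Note that the parent must flush its own writes as well: buffering can deadlock the conversation from either side of the pipe.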

Python sys.stdin.buffer size detection

I'm trying to execute a Python script from a Qt application and to communicate with the script via standard input and output (as one would do via common Unix pipes). My calling code stub looks like this:
int main(int argc, char *argv[]) {
    QCoreApplication a(argc, argv);
    QProcess process;
    QTimer timer;
    QObject::connect(&process, &QProcess::readyReadStandardOutput, [&process]() {
        qDebug() << "<<o" << process.readAllStandardOutput();
    });
    QObject::connect(&process, &QProcess::readyReadStandardError, [&process]() {
        qDebug() << "<<e" << process.readAllStandardError();
    });
    QObject::connect(&process, &QProcess::started, [&process] {
        qDebug() << "Process name" << process.program() << process.processId();
    });
    QObject::connect(&timer, &QTimer::timeout, [&process]() {
        qDebug() << process.state();
        QByteArray ba("12345");
        qDebug() << ">>" << ba;
        process.write(ba);
        if (!process.waitForBytesWritten())
            qDebug() << process.errorString();
    });
    QObject::connect(&a, &QCoreApplication::aboutToQuit, [&]() {
        process.terminate();
        process.kill();
    });
    process.start("python3", {"main.py"});
    // process.start("cat", QStringList{});
    timer.start(2000);
    a.exec();
    process.terminate();
    process.kill();
    return 0;
}
And my Python script is shown below:
import sys, time

def process_data(data):
    size = len(data)
    if size % 2:
        print(f'Odd, {size}', data)
    else:
        print(f'Even, {size}', data)
    sys.stdout.flush()

if __name__ == '__main__':
    while True:
        data = sys.stdin.buffer.read(5)
        if len(data):
            process_data(data)
        else:
            print('.')
        time.sleep(0.02)
The thing is that I want my script to react to any incoming buffer, much as the cat command does. When I comment out the line calling my script and uncomment the one calling the cat command, each time I send a buffer I receive a reply, which is what I want. But when I'm calling a Python script, I have no means of detecting the incoming buffer size that I know of. Explicitly setting a value in a sys.stdin.buffer.read call allows me not to wait for an EOF, but I want to receive a buffer without knowing its size in advance. In Qt I would achieve such behavior by calling the readAll() method of a QIODevice. Is there a way of doing the same in Python?
I have tried calling sys.stdin.buffer.read() without any arguments, expecting it to behave like QIODevice::readAll() - producing a buffer with all the data read so far. But obviously it produces nothing until it receives an EOF. I hope there is a kind of method that yields the size of the buffer received, so that I could write:
size = stdin.buffer.bytesReceived()
data = stdin.buffer.read(size)
yet such a method seems to be missing.
Does anyone know of any solution to this problem?
The problem is solved by changing the sys.stdin.buffer.read line to:
data = sys.stdin.buffer.raw.read(20000)
This also works:
data = sys.stdin.buffer.read1(20000)
This answer was posted as edit 1 and edit 2 to the question Python sys.stdin.buffer size detection [solved] by the OP Kirill Didkovsky under CC BY-SA 4.0.
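A minimal sketch of the difference, using a hypothetical echo child in place of main.py: read1() performs at most one underlying raw read, so it returns as soon as some bytes are available (up to the limit, not exactly the limit) instead of blocking until EOF the way a plain read(n) short of n bytes would.

```python
import subprocess
import sys

# Hypothetical stand-in for main.py: echoes back whatever chunk arrives,
# without waiting for EOF, thanks to read1().
child_src = r"""
import sys
while True:
    data = sys.stdin.buffer.read1(20000)   # first available chunk, up to 20000 bytes
    if not data:                           # b'' means EOF
        break
    size = len(data)
    print("Odd," if size % 2 else "Even,", size, data, flush=True)
"""

p = subprocess.Popen([sys.executable, "-c", child_src],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write(b"12345")                       # 5 bytes: no newline, no EOF
p.stdin.flush()
reply = p.stdout.readline().decode().strip()  # the child answers immediately
print(reply)
p.stdin.close()                               # EOF lets the child exit
p.wait()
```

This mirrors what QProcess::write() followed by readyReadStandardOutput sees on the Qt side: a reply per chunk sent, with no size known in advance.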

How can I make a Linux background process (in C) that launches a Python script

I made a Linux background process (in C++) that monitors a directory and attempts to launch a Python script if a certain file appears in that directory. My issue is that the child process responsible for launching the Python script exits immediately after the execvp function is called and I can't understand why. All of the necessary files are under root's ownership. Here is my code if it helps. Thank you in advance for any pointers! I have marked the place in my code where the error occurs. I have also included the Python script to be called.
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/wait.h>
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>

using namespace std;

char* arguments[3];
FILE* fd;
const char* logFilePath = "/home/BluetoothProject/Logs/fileMonitorLogs.txt";
char* rfcommPath = (char*)"/home/BluetoothProject/RFCOMMOut.py";

void logToFile(const char*);
void doWork();

void logToFile(const char* str) {
    fd = fopen(logFilePath, "a");
    fprintf(fd, "%s\n", str);
    fclose(fd);
}

int main() {
    arguments[0] = (char*)"python";
    arguments[1] = rfcommPath;
    arguments[2] = NULL;
    pid_t pid = fork();
    if (pid < 0) {
        printf("Fork failed");
        exit(1);
    } else if (pid > 0) {
        exit(EXIT_SUCCESS);
    }
    umask(0);
    pid_t sid = setsid();
    if (sid < 0) {
        logToFile("setsid() didn't work.");
        exit(1);
    }
    if ((chdir("/")) < 0) {
        logToFile("chdir() didn't work.");
        exit(EXIT_FAILURE);
    }
    close(STDIN_FILENO);
    close(STDOUT_FILENO);
    close(STDERR_FILENO);
    doWork();
}

void doWork() {
    pid_t pid = fork();
    if (pid < 0) {
        logToFile("doWork() fork didn't work.");
    } else if (pid > 0) {
        int status = 0;
        waitpid(pid, &status, 0);
        if (WEXITSTATUS(status) == 1) {
            logToFile("Child process exited with an error.");
        }
    } else {
        int error = execvp(arguments[0], arguments); // Here is where the error occurs
        if (error == -1) {
            logToFile("execvp() failed.");
        }
        exit(1);
    }
}
Python script (AKA RFCOMMOut.py)
import RPi.GPIO as gpio
import serial

led_state = 0
led_pin = 11
gpio.setmode(gpio.BOARD)
gpio.setwarnings(False)
gpio.setup(led_pin, gpio.OUT)

try:
    ser = serial.Serial(port='/dev/rfcomm0',
                        baudrate=9600,
                        parity=serial.PARITY_NONE,
                        stopbits=serial.STOPBITS_ONE,
                        bytesize=serial.EIGHTBITS)
except IOError as e:
    logFile = open("/home/BluetoothProject/Logs/fileMonitorLogs.txt", "a")
    logFile.write("(First error handler) There was an exception:\n")
    logFile.write(str(e))
    logFile.write("\n")
    logFile.close()

#gpio.output
def process_input(input):
    global led_state
    if input == "I have been sent.\n":
        if led_state == 1:
            led_state = 0
            gpio.output(led_pin, led_state)
        else:
            led_state = 1
            gpio.output(led_pin, led_state)

while True:
    try:
        transmission = ser.readline()
        process_input(transmission)
    except IOError as e:
        logFile = open("/home/BluetoothProject/Logs/fileMonitorLogs.txt", "a")
        logFile.write("(second error handler) There was an exception:\n")
        logFile.write(str(e))
        logFile.write("\n")
        logFile.close()
        break

led_state = 0
gpio.output(led_pin, led_state)
gpio.cleanup()
print("End of program\n")
The question is a little unclear, so I'll try to take a few different educated guesses at what the problem is and address each one individually.
TL;DR: Remove close(STDOUT_FILENO) and close(STDERR_FILENO) to get more debugging information which will hopefully point you in the right direction.
execvp(3) is returning -1
According to the execvp(3) documentation, execvp(3) sets errno when it fails. In order to understand why it is failing, your program will need to output the value of errno somewhere; perhaps stdout, stderr, or your log file. A convenient way to do this is to use perror(3). For example:
#include <stdio.h>
...
void doWork() {
    ...
    } else {
        int error = execvp(arguments[0], arguments);
        if (error == -1) {
            perror("execvp() failed");
        }
    }
    ...
}
Without knowing what that errno value is, it will be difficult to identify why execvp(3) is failing.
execvp(3) is succeeding, but my Python program doesn't appear to run
execvp(3) succeeding means that the Python interpreter has successfully been invoked (assuming that there is no program in your PATH that is named "python", but is not actually a Python interpreter). If your program doesn't appear to be running, that means Python is having difficulty loading your program. To my knowledge, Python will always output relevant error messages in this situation to stderr; for example, if Python cannot find your program, it will output "No such file or directory" to stderr.
However, it looks like your C program is calling close(STDERR_FILENO) before calling doWork(). According to fork(2), child processes inherit copies of their parent's set of open file descriptors. This means that calling close(STDERR_FILENO) before forking will result in the child process not having an open stderr file descriptor. If Python is having any errors executing your program, you'll never know, since Python is trying to notify you through a file descriptor that doesn't exist. If execvp(3) is succeeding and the Python program appears to not run at all, then I recommend you remove close(STDERR_FILENO) from your C program and run everything again. Without seeing the error message output by Python, it will be difficult to identify why it is failing to run the Python program.
As an aside, I recommend against explicitly closing stdin, stdout, and stderr. According to stdin(3), the standard streams are closed by a call to exit(3) and by normal program termination.
execvp(3) is succeeding, my Python program is running, but my Python program exits before it does any useful work
In this case, I'm not sure what the problem might be, since I'm not very familiar with Raspberry Pi. But I think you'll have an easier time debugging if you don't close the standard streams before running the Python program.
Hope this helps.

How to properly capture output of process using pwntools

I'm currently confused about how to use the pwntools library for Python 3 to exploit programs - mainly how to send input to a vulnerable program.
This is my current python script.
from pwn import *

def executeVuln():
    vulnBin = process("./buf2", stdin=PIPE, stdout=PIPE)
    vulnBin.sendlineafter(': ', 'A'*90)
    output = vulnBin.recvline(timeout=5)
    print(output)

executeVuln()
The program I'm trying to exploit is below. This isn't about how to exploit the program, but about using the script to properly automate it.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>

#define BUFSIZE 176
#define FLAGSIZE 64

void flag(unsigned int arg1, unsigned int arg2) {
    char buf[FLAGSIZE];
    FILE *f = fopen("flag.txt", "r");
    if (f == NULL) {
        printf("Flag File is Missing. Problem is Misconfigured, please contact an Admin if you are running this on the shell server.\n");
        exit(0);
    }
    fgets(buf, FLAGSIZE, f);
    if (arg1 != 0xDEADBEEF)
        return;
    if (arg2 != 0xC0DED00D)
        return;
    printf(buf);
}

void vuln() {
    char buf[BUFSIZE];
    gets(buf);
    puts(buf);
}

int main(int argc, char **argv) {
    setvbuf(stdout, NULL, _IONBF, 0);
    gid_t gid = getegid();
    setresgid(gid, gid, gid);
    puts("Please enter your string: ");
    vuln();
    return 0;
}
The process is opened fine.
sendlineafter blocks until it sends the line, so if the delimiter doesn't match it waits indefinitely. However, it runs fine, so the input should be sent.
output should receive 90 A's from recvline, since puts(buf) outputs the inputted string.
However, all that is returned is b'', which seems to indicate that the vulnerable program isn't receiving the input and is returning an empty string.
Anyone know what's causing this?
With the above programs, I'm getting print(output) as b'\n' (not b''), and here is the explanation.
The puts() call outputs a newline character at the end, and it is not read by the sendlineafter() call, which in turn leaves the stray newline character to be read by the following recvline(), printing b'\n'.
Why is the newline character not read by sendlineafter()? Because sendlineafter() is just a combination of recvuntil() and sendline(), where recvuntil() only reads up to the delimiter, leaving any characters after it (pwntools docs).
So the solution is to have sendlineafter() consume the newline character as well, like below (or to call recvline() twice):
from pwn import *

def executeVuln():
    vulnBin = process("./buf2", stdin=PIPE, stdout=PIPE)
    vulnBin.sendlineafter(b': \n', b'A'*90)
    output = vulnBin.recvline(timeout=5)
    print(output)

executeVuln()
Output:
[+] Starting local process './buf2': pid 3493
[*] Process './buf2' stopped with exit code 0 (pid 3493)
b'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\n'
Note:
I added the strings as bytes in sendlineafter() to remove the below BytesWarning.
program.py:5: BytesWarning: Text is not bytes; assuming ASCII, no guarantees. See https://docs.pwntools.com/#bytes
vulnBin.sendlineafter(': \n','A'*90)

IPC between Python and C on Windows

I'm trying to establish communication between a Python program and a C program.
The Python part of this project manages the application logic and the GUI, while I have written a C program to interface with a sensor that has a manufacturer supplied C library.
Now I need an easy way of communicating between these two programs on a Windows machine. The C program will continuously stream data to the Python program. The Python software should be able to send commands to the C software for changing settings etc.
What I found so far is:
the ZeroMQ library, which seemed pretty promising, but I could not get it to run on Windows and it seems to no longer be maintained.
the Python subprocess module, which is able to pipe the stdin and stdout of the called process, but I can't seem to get it working like I want it to.
The C code I have written now is just a dummy program that streams an output string and prints any commands it is given. The problem is that the data from the C program does not continuously stream to the output of the Python program, but only shows up when I close the program. Also, I'm not able to send a command back to the C program.
Python code:
import subprocess as subp

pro = subp.Popen("C:/Users/schoenhofer/Documents/Programming/Py2C/simpleIO/bin/Debug/simpleIO.exe",
                 stdin=subp.PIPE, stdout=subp.PIPE, bufsize=-1, universal_newlines=True)
while not pro.poll():
    ln = pro.stdout.read()
    if ln == '':
        pro.kill()
        break
    else:
        print(ln)
C code:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <memory.h>
#include <windows.h>
#include <conio.h>
#include <time.h>

void delay(unsigned int mseconds)
{
    clock_t goal = mseconds + clock();
    while (goal > clock());
}

int main()
{
    TCHAR buf[20] = {0};
    DWORD dwLength, dwRead;
    DWORD got = 0;
    uint16_t i = 0;
    LPDWORD mode = 0;
    HANDLE h_in = GetStdHandle(STD_INPUT_HANDLE);
    if (h_in == INVALID_HANDLE_VALUE) {
        printf("Error getting input handle\n");
        exit(EXIT_FAILURE);
    }
    dwLength = sizeof(buf);
    SetConsoleMode(h_in, ENABLE_PROCESSED_INPUT|ENABLE_LINE_INPUT|ENABLE_EXTENDED_FLAGS);
    while (1) {
        if (kbhit()) {
            if (ReadConsole(h_in, buf, dwLength, &got, NULL) == 0) {
                DWORD err = GetLastError();
                printf("Error reading from console: %u", err);
                exit(EXIT_FAILURE);
            }
        }
        if (got > 0) {
            printf("Got: %s", buf);
            memset(buf, 0, 20);
            got = 0;
        }
        else {
            printf("Got nothing\n");
        }
        delay(300);
    }
    return 0;
}
Any help would be greatly appreciated.
Thanks in advance, Thomas
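No answer is recorded for this question, but two buffering problems stand out in the code above: pro.stdout.read() with no size argument blocks until the child's stdout reaches EOF (which is why output only appears when the program closes), and printf in the C program is fully buffered when its stdout is a pipe rather than a console, so it needs an explicit fflush(stdout) after each write. A minimal sketch of a line-based loop on the Python side, with a hypothetical Python child standing in for simpleIO.exe so it runs anywhere:

```python
import subprocess
import sys

# Hypothetical stand-in for simpleIO.exe: echoes each command line back and,
# crucially, flushes after every write -- the equivalent of the fflush(stdout)
# the C program would need after each printf.
child_src = r"""
import sys
for line in sys.stdin:
    print("Got:", line.strip(), flush=True)
"""

pro = subprocess.Popen([sys.executable, "-c", child_src],
                       stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                       universal_newlines=True, bufsize=1)

pro.stdin.write("change_setting 42\n")  # send one command...
pro.stdin.flush()
reply = pro.stdout.readline()           # ...and read one line; read() would block until EOF
print(reply.strip())
pro.stdin.close()
pro.wait()
```

The command name here is made up for illustration; the point is the shape of the loop: write a line, flush, read a line, rather than a single read() of the whole stream.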

Python : communication with c++ command line program not working when using <cstdio>

I have the following Python code, which is supposed to provide the initial input to a C++ program, then take its output and feed it back into it, until the program finishes execution:
comm.py
import subprocess

p = subprocess.Popen('test__1.exe', bufsize=1, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, universal_newlines=False)
p.stdin.flush()
p.stdout.flush()
x = b'1\n'
while True:
    p.stdin.write(x)
    p.stdin.flush()
    p.stdout.flush()
    x = p.stdout.readline()
    print(x)
    if p.poll() != None:
        break
I am currently testing it with two simple C++ programs:
test__1.cpp:
#include <iostream>
using namespace std;

int main()
{
    for (int i = 0; i < 3; ++i)
    {
        int n;
        cin >> n;
        cout << n+1 << endl;
    }
    return 0;
}
test__2.cpp
#include <cstdio>

int main()
{
    for (int i = 0; i < 3; ++i)
    {
        int n;
        scanf("%d", &n);
        printf("%d\n", n+1);
    }
    return 0;
}
When comm.py opens test__1.exe everything works fine, but when it opens test__2.exe it hangs on the first call to readline().
Note that this problem does not occur when I feed test__2.exe the whole input before execution (i.e. initially set x = '1\n2\n3\n')
What could be causing this issue?
(Also, comm.py should be able to handle any valid C++ program, so only using iostream would not be an acceptable solution.)
EDIT: I also need the solution to work on Windows.
It is caused by the fact that std::endl flushes the ostream, while printf does not flush stdout, as you can see by amending test__2.cpp as follows:
#include <cstdio>

int main()
{
    for (int i = 0; i < 3; ++i)
    {
        int n;
        scanf("%d", &n);
        printf("%d\n", n+1);
        fflush(stdout); //<-- add this
    }
    return 0;
}
You say that you want the module to work correctly with any C++ program, so you can't rely upon it flushing the standard output (or standard error) after every write.
That means you must cause the program's standard streams to be unbuffered, and do so externally to the program itself. You will need to do that in comm.py.
On Linux (or any other host providing the GNU Core Utils), you can do so by executing the program via stdbuf, e.g.
import subprocess

cmd = ['/usr/bin/stdbuf', '-i0', '-o0', '-e0', './test__2']
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE, universal_newlines=False)
p.stdin.flush()
p.stdout.flush()
x = b'1\n'
while True:
    p.stdin.write(x)
    x = p.stdout.readline()
    print(x)
    if p.poll() != None:
        break
which unbuffers all the standard streams. For Windows, you will need to research how to do the same thing. For the time being, I don't know.
