Run .py script in Qt - python

I want to run a .py script after I click a button. I have already tried the following code:
QProcess p;
QStringList params;
params << "createJSON.py";
p.start("python.exe", params);
p.waitForFinished(-1);
QString p_stdout = p.readAll();
My Python script creates a JSON file when it runs successfully, so I can tell whether the run succeeded.

I have been able to write a more detailed version of your code.
QProcess p;
QStringList params;
params << "createJSON.py";
QObject::connect(&p, &QProcess::started, []() {
    qInfo() << "Process started!";
});
QObject::connect(&p, &QProcess::errorOccurred, [&p]() {
    qWarning() << "Error occurred" << p.errorString();
});
p.start("python.exe", params);
p.waitForFinished(-1);
QString p_stdout = p.readAllStandardOutput();
QString p_stderr = p.readAllStandardError();
qDebug() << "OUT" << p_stdout;
qDebug() << "ERR" << p_stderr;
This does indeed surface an error. In my case, I get the following:
Process started!
OUT ""
ERR "python.exe: can't open file 'createJSON.py': [Errno 2] No such file or directory\n"
It may be different in your case. Either way, using the errorOccurred signal along with the errorString method will allow you to debug cases where the process actually cannot start. Reading stderr will allow you to debug cases where the process starts, but does not run as expected.
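For the "No such file or directory" case above, the usual fix is to hand QProcess the script's full path, or to set the working directory explicitly before starting. A minimal sketch (the directory below is a placeholder for wherever createJSON.py actually lives):
QProcess p;
// Placeholder path; point this at the directory containing the script.
p.setWorkingDirectory("C:/path/to/scripts");
p.start("python.exe", {"createJSON.py"});
p.waitForFinished(-1);
qDebug() << "OUT" << p.readAllStandardOutput();
qDebug() << "ERR" << p.readAllStandardError();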

Related

Python sys.stdin.buffer size detection

I'm trying to execute a Python script from a Qt application and to communicate with the script via standard input and output (as one would do via common Unix pipes). My calling code stub looks like this:
int main(int argc, char *argv[]) {
    QCoreApplication a(argc, argv);
    QProcess process;
    QTimer timer;
    QObject::connect(&process, &QProcess::readyReadStandardOutput, [&process]() {
        qDebug() << "<<o" << process.readAllStandardOutput();
    });
    QObject::connect(&process, &QProcess::readyReadStandardError, [&process]() {
        qDebug() << "<<e" << process.readAllStandardError();
    });
    QObject::connect(&process, &QProcess::started, [&process] {
        qDebug() << "Process name" << process.program() << process.processId();
    });
    QObject::connect(&timer, &QTimer::timeout, [&process]() {
        qDebug() << process.state();
        QByteArray ba("12345");
        qDebug() << ">>" << ba;
        process.write(ba);
        if (!process.waitForBytesWritten())
            qDebug() << process.errorString();
    });
    QObject::connect(&a, &QCoreApplication::aboutToQuit, [&]() {
        process.terminate();
        process.kill();
    });
    process.start("python3", {"main.py"});
    // process.start("cat", QStringList{});
    timer.start(2000);
    a.exec();
    process.terminate();
    process.kill();
    return 0;
}
And my Python script is shown below:
import sys, time

def process_data(data):
    size = len(data)
    if size % 2:
        print(f'Odd, {size}', data)
    else:
        print(f'Even, {size}', data)
    sys.stdout.flush()

if __name__ == '__main__':
    while True:
        data = sys.stdin.buffer.read(5)
        if len(data):
            process_data(data)
        else:
            print('.')
        time.sleep(0.02)
The thing is that I want my script to react to any incoming buffer, much like the cat command does. When I comment out the line calling my script and uncomment the one calling the cat command, each time I send a buffer I receive a reply, which is what I want. But when I'm calling a Python script, I have no means of detecting the incoming buffer size that I know of. Explicitly setting a value in the sys.stdin.buffer.read command allows me not to wait for an EOF, but I want to receive a buffer without knowing its size in advance. In Qt I would achieve such behavior by calling the readAll() method of a QIODevice. Is there a way of doing the same in Python?
I have tried calling sys.stdin.buffer.read() without any arguments, expecting it to behave like QIODevice::readAll() - producing a buffer with all the data read so far. But obviously it produces nothing until it receives an EOF. I hope there is a kind of method that yields the size of the buffer received, so that I could write:
size = stdin.buffer.bytesReceived()
data = stdin.buffer.read(size)
yet such a method seems to be missing.
Does anyone know of any solution to this problem?
The problem is solved by changing the sys.stdin.buffer.read line to:
data = sys.stdin.buffer.raw.read(20000)
This also works:
data = sys.stdin.buffer.read1(20000)
Both variants perform at most one read on the underlying pipe and return whatever bytes are currently available (up to 20000), instead of blocking until the requested count or EOF.
This answer was posted as edit 1 and edit 2 to the question Python sys.stdin.buffer size detection [solved] by the OP Kirill Didkovsky under CC BY-SA 4.0.
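As an aside (not from the original thread): if the exchange is one-shot rather than a continuous stream, the Qt side can sidestep the problem by closing the write channel, which delivers EOF to the script's stdin so that even a plain sys.stdin.buffer.read() returns. A minimal sketch:
#include <QCoreApplication>
#include <QProcess>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    QProcess process;
    process.start("python3", {"main.py"});
    process.waitForStarted();
    process.write("12345");
    // Closing the write channel sends EOF to the child's stdin, so a plain
    // sys.stdin.buffer.read() in the script returns all the bytes written.
    process.closeWriteChannel();
    process.waitForFinished(-1);
    qDebug() << process.readAllStandardOutput();
    return 0;
}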

Getting Python output in real-time from C++/boost::process

From a C++ program (running under Windows 10), I use boost::process to invoke Python in order to interpret a simple Python script. I want to redirect the Python script's output in real time to my C++ program's console.
My problem is that I get the whole Python script output at once when the program has completed; I'm not getting it in real time.
Here is my MCVE:
Python script (script.py):
import time
from datetime import datetime
print( "Started Python script at t=" + str(datetime.now().time()) )
time.sleep(1)
print( "Slept 1 sec, t=" + str(datetime.now().time()) )
time.sleep(1)
print( "Slept 1 sec, t=" + str(datetime.now().time()) )
print( "Stopped Python script at t=" + str(datetime.now().time()) )
C++ program (main.cpp):
#include <boost/process.hpp>
#include <ctime>
#include <iostream>
#include <chrono>
namespace bp = boost::process;
std::ostream &operator<<(std::ostream &stream, const std::chrono::system_clock::time_point& time_point)
{
    const auto time {std::chrono::system_clock::to_time_t (time_point)};
    const auto localtime {*std::localtime (&time)};
    const auto time_since_epoch {time_point.time_since_epoch()};
    const auto milliseconds_count {std::chrono::duration_cast<std::chrono::milliseconds> (time_since_epoch).count() % 1000};
    stream << "[" << std::put_time (&localtime, "%T") << "." << std::setw (3) << std::setfill ('0') << milliseconds_count << "] - ";
    return stream;
}

int main( int argc, char* argv[] )
{
    std::cout << std::chrono::system_clock::now() << "Creating child" << std::endl;
    bp::ipstream stream;
    bp::child c("python.exe", "script.py", bp::std_out > stream);
    std::cout << std::chrono::system_clock::now() << "Created child" << std::endl;
    std::cout << std::chrono::system_clock::now() << "Invoking getline" << std::endl;
    std::string line;
    while (getline(stream, line)) {
        std::cout << std::chrono::system_clock::now() << "From Python output: " << line << std::endl;
    }
    std::cout << std::chrono::system_clock::now() << "getline ended" << std::endl;
    c.wait();
    return 0;
}
This program outputs:
[12:50:34.684] - Creating child
[12:50:34.706] - Created child
[12:50:34.708] - Invoking getline
[12:50:36.743] - From Python output: Started Python script at t=12:50:34.742105
[12:50:36.745] - From Python output: Slept 1 sec, t=12:50:35.743111
[12:50:36.745] - From Python output: Slept 1 sec, t=12:50:36.743328
[12:50:36.746] - From Python output: Stopped Python script at t=12:50:36.743328
[12:50:36.747] - getline ended
As you can see, we get all four output lines after the Python process has ended (the first call to getline blocks for two seconds, while Python runs the two time.sleep(1) calls, and then all four lines arrive within 3 ms). I would expect to get something like:
[12:50:34.684] - Creating child
[12:50:34.706] - Created child
[12:50:34.708] - Invoking getline
[12:50:34.XXX] - From Python output: Started Python script at t=12:50:34.742105
[12:50:35.XXX] - From Python output: Slept 1 sec, t=12:50:35.743111
[12:50:36.XXX] - From Python output: Slept 1 sec, t=12:50:36.743328
[12:50:36.XXX] - From Python output: Stopped Python script at t=12:50:36.743328
[12:50:36.XXX] - getline ended
I suspect the problem comes more from boost::process than from Python, but all the examples I could find for boost::process read the output stream the same way. Is there anything I should change to have the getline loop run in real time while Python prints output?
Python streams are (like C or C++ streams) buffered, for performance reasons.
You may want to call sys.stdout.flush() in your Python code, or pass flush=True to print().
Your question could also be operating-system specific. For Linux, be aware of fsync(2) and termios(3) (and pipe(7) and fifo(7)...). For other operating systems, read their documentation.
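One way to apply that advice without editing the script is to pass Python's -u switch, which disables output buffering for the whole run. A sketch of the relevant change to the MCVE above (only the bp::child line differs):
// -u makes the Python interpreter flush stdout after each print(),
// so each line reaches the ipstream as it is produced.
bp::ipstream stream;
bp::child c("python.exe", "-u", "script.py", bp::std_out > stream);
std::string line;
while (getline(stream, line))
    std::cout << std::chrono::system_clock::now() << "From Python output: " << line << std::endl;
c.wait();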

How to execute a pyomo model script inside Spring?

I have a web interface built with Spring, and I want to execute the command "python file.py" from it.
The main problem is that inside file.py there is a Pyomo model that is supposed to produce some output. I can execute a Python script if it's a simple print or something similar, but the Pyomo model is completely ignored.
What could be the reason?
Here is the code I wrote in the controller to execute the call:
@PostMapping("/execute")
public void execute(@ModelAttribute("component") @Valid Component component, BindingResult result, Model model) {
    Process process = null;
    //System.out.println("starting!");
    try {
        process = Runtime.getRuntime().exec("python /home/chiara/Documents/GitHub/Pyomo/Solver/test/sample.py");
        //System.out.println("here!");
    } catch (Exception e) {
        System.out.println("Exception Raised" + e.toString());
    }
    InputStream stdout = process.getInputStream();
    BufferedReader reader = new BufferedReader(new InputStreamReader(stdout, StandardCharsets.UTF_8));
    String line;
    try {
        while ((line = reader.readLine()) != null) {
            System.out.println("stdout: " + line);
        }
    } catch (IOException e) {
        System.out.println("Exception in reading output" + e.toString());
    }
}
Update: I found that what I was missing was that I hadn't checked where the code runs. Be sure to do so, and if necessary move the input files (if you have any) into the directory where Python is executing; otherwise the script cannot find and process them.
You can use
cwd = os.getcwd()
to check the current working directory of a process.
Another possibility is to redirect stderr to the terminal or to a log file, because from the server terminal you won't see anything even if there are errors.
The code posted in the question is the correct way to invoke an external command from Java.

Writing into stdout with MFC and read stdout by python Popen

I want to write to std::cerr or std::cout from my MFC application. In a Python script I call this application and want to read from its stdout or stderr.
Neither is working. Just using std::cout yields no output. After AllocConsole() I was at least able to print to a debug console. Unfortunately, there is still no output on the Python side.
In my MFC application I initialize a console to write to with this code:
void BindStdHandlesToConsole()
{
    // Redirect the CRT standard input, output, and error handles to the console
    freopen("CONIN$", "r", stdin);
    freopen("CONOUT$", "w", stdout);
    freopen("CONOUT$", "w", stderr);
    std::wcout.clear();
    std::cout.clear();
    std::wcerr.clear();
    std::cerr.clear();
    std::wcin.clear();
    std::cin.clear();
}

// initialization
BOOL foo::InitInstance()
{
    // allocate a console
    if (!AllocConsole())
        AfxMessageBox("Failed to create the console!", MB_ICONEXCLAMATION);
    else
        BindStdHandlesToConsole();
On the Python side, I try to print the output:
process = subprocess.Popen(args,stdout=subprocess.PIPE,shell=True)
output = process.stdout.read()
process.wait()
Is there a way to make my MFC program actually write to standard output, and the Python script read it?
The proper way of writing to the stdout pipe is the following. Note that CONOUT$ always refers to the console that AllocConsole created, not to the pipe Popen attached, whereas the inherited standard-output handle does refer to that pipe:
HANDLE hStdOut = GetStdHandle(STD_OUTPUT_HANDLE);
const char chBuf[] = "output for the Python parent\n";
DWORD dwWritten = 0;
WriteFile(hStdOut, chBuf, sizeof(chBuf) - 1, &dwWritten, NULL);
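If you also want std::cout and printf (rather than raw WriteFile calls) to reach the pipe, one option is to rebind the CRT's stdout to the inherited handle instead of to CONOUT$. A sketch under that assumption (untested, not from the original answer), for a child started by Popen with stdout=PIPE:
#include <windows.h>
#include <io.h>
#include <fcntl.h>
#include <cstdio>
#include <iostream>

void BindStdoutToInheritedHandle()
{
    // The handle Popen installed as this process's standard output (the pipe).
    HANDLE hStdOut = GetStdHandle(STD_OUTPUT_HANDLE);
    // Wrap the OS handle in a CRT descriptor and splice it into fd 1.
    int fd = _open_osfhandle(reinterpret_cast<intptr_t>(hStdOut), _O_TEXT);
    if (fd != -1)
        _dup2(fd, _fileno(stdout));
    setvbuf(stdout, NULL, _IONBF, 0); // unbuffered, so the reader sees output immediately
    std::cout.clear();
}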

Why doesn't Python check_output() return when calling daemon?

I have a Python v3.4 application that uses check_output() to invoke a C++ application that calls fork(), with the original process exiting and the child process continuing on. It seems that check_output() is also waiting on the child process instead of returning once the main process exits and reports that the daemon was started successfully.
Do I need to change how I fork() in C++, or does the Python check_output() call somehow need to be told to wait only for the parent process to exit? Do I need to do a second fork() in C++ as described here?
Here is a stripped-down Python script that exhibits the issue:
#! /usr/local/bin/python3
import logging
import argparse
from subprocess import CalledProcessError, check_output, STDOUT

ARGS = ["user@hostname:23021:"]

if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="Try launching subprocess")
    parser.add_argument("exec", type=str, help="the exec to run")
    args = parser.parse_args()
    logging.basicConfig(level=logging.INFO,
                        format='%(asctime)s - %(message)s',
                        datefmt='%Y-%m-%d %H:%M:%S')
    cmd_list = [args.exec] + ARGS
    logging.info(str(cmd_list))
    try:
        output = check_output(cmd_list,
                              stderr=STDOUT,
                              universal_newlines=True)
        logging.info("Exec OK with output:")
        logging.info(output)
    except CalledProcessError as e:
        logging.info("Exec Not OK with output:")
        logging.info(str(e))
Here is the C++ code to daemonize the C++ application:
void
daemonize()
{
    // This routine backgrounds the process to run as a daemon.
    // Returns to caller only if we are the child, otherwise exits normally.
    if (getppid() == 1) {
        return; // Leave if we're already a daemon
    }
    // Create the backgrounded child process.
    const pid_t parent = getpid();
    const pid_t pid = fork();
    CSysParamAccess param;
    const string progName(param.getProgramKindName());
    ::close(STDIN_FILENO);
    ::close(STDOUT_FILENO);
    ::close(STDERR_FILENO);
    if (pid < 0) {
        cerr << "Error: " << progName << " failed to fork server. Aborting."
             << endl; // inform the client of the failure
        exit(appExit::failForkChild); // Error. No child created.
    } else if (pid > 0) {
        // We're in the parent. Optionally print the child's pid, then exit.
        if (param.getDebug()) {
            clog << "Successful fork. The Application server's (" << progName
                 << ") pid is: " << pid << "(self) from parent " << parent << endl;
        }
        ::close(STDIN_FILENO);
        ::close(STDOUT_FILENO);
        ::close(STDERR_FILENO);
        exit(appExit::normal);
    }
    ::close(STDIN_FILENO);
    ::close(STDOUT_FILENO);
    ::close(STDERR_FILENO);
    // Here only in the child (daemon).
    if (-1 == setsid()) { // Get a new process group
        cerr << "Error: Failed to become session leader while daemonising - errno: "
             << errno;
        exit(appExit::failForkChild); // Error. Child failed.
    }
    signal(SIGHUP, SIG_IGN); // Per example.
    // Fork again, allowing the parent process to terminate.
    const pid_t midParent = getpid();
    const pid_t grandChildPid = fork();
    if (grandChildPid < 0) {
        cerr << "Error: Failed to fork while daemonising - errno: " << errno;
        exit(appExit::failForkChild); // Error. GrandChild failed.
    } else if (grandChildPid > 0) {
        // We're in the parent. Optionally print the grandchild's pid, then exit.
        if (param.getDebug()) {
            clog << "Successful second fork. The Application server's (" << progName
                 << ") pid is: " << grandChildPid << "(self) from parent "
                 << midParent << endl;
        }
        ::close(STDIN_FILENO);
        ::close(STDOUT_FILENO);
        ::close(STDERR_FILENO);
        exit(appExit::normal);
    }
    // Here only in the grandchild (daemon).
    appGlobalSetSignalHandlers();
    // Set the current working directory to the root directory.
    if (chdir("/") == -1) {
        cerr << "Error: Failed to change working directory while daemonising - errno:"
             << errno;
        exit(appExit::failForkChild); // Error. GrandChild failed.
    }
    // Set the user file creation mask to zero.
    umask(0);
    //close(STDIN_FILENO); // Cannot close due to assertion in transfer.cpp
    // Theoretically, we would reopen stderr and stdout using the log file.
    ::close(STDIN_FILENO);
    ::close(STDOUT_FILENO);
    ::close(STDERR_FILENO);
    // We only return here if we're the grandchild process, the Application
    // server. The summoner exited in daemonize().
    clog << "Application " << param.getProgramKindName()
         << " (" << appGlobalProgramName() << ") successfully started." << endl;
}
It works when called with echo and fails with my C++ application:
> stuckfork.py echo
2016-02-05 10:17:34 - ['echo', 'user@hostname:23021:']
2016-02-05 10:17:34 - Exec OK with output:
2016-02-05 10:17:34 - user@hostname:23021:
> stuckfork.py flumep
2016-02-05 10:17:53 - ['flumep', 'user@hostname:23021:']
C-c Traceback (most recent call last):
File "/home/user/Bin/Bin/stuckfork.py", line 26, in <module>
universal_newlines=True)
File "/usr/local/lib/python3.4/subprocess.py", line 609, in check_output
output, unused_err = process.communicate(inputdata, timeout=timeout)
File "/usr/local/lib/python3.4/subprocess.py", line 947, in communicate
stdout = _eintr_retry_call(self.stdout.read)
File "/usr/local/lib/python3.4/subprocess.py", line 491, in _eintr_retry_call
return func(*args)
KeyboardInterrupt
>
I've narrowed the issue down to one of my C++ static constructors doing something that causes the launching process to go defunct, which is why Python is still waiting. Bisecting now to see which one.
A correct solution would be to find the file descriptor that pipes the output to Python from the forked C++ child and close it.
For now, you may try the close(1) system call in the C++ child process, or just after fork() before the child's work begins. That will signal Python to stop trying to read from the child.
I am not sure whether this will work, as the code you posted is not enough to tell.
The issue was open file descriptors; it was due to this static initialization code being run:
FILE *origStdErr = fdopen(dup(fileno(stderr)), "a");
Once that line was removed, the daemon's close(0), close(1), and close(2) calls had the proper effect and the Python code stopped waiting.
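For reference, a common daemonization pattern (a sketch, not from the original thread) is to reopen descriptors 0-2 on /dev/null rather than merely closing them. This releases the pipe inherited from check_output() (so the parent's read sees EOF) while keeping the low descriptors occupied so later open() calls cannot accidentally land on them:
#include <fcntl.h>
#include <unistd.h>

static void detach_stdio()
{
    int fd = open("/dev/null", O_RDWR);
    if (fd == -1)
        return;
    // Point stdin, stdout, and stderr at /dev/null; this closes whatever
    // was on those descriptors, including any pipe back to the caller.
    dup2(fd, STDIN_FILENO);
    dup2(fd, STDOUT_FILENO);
    dup2(fd, STDERR_FILENO);
    if (fd > STDERR_FILENO)
        close(fd);
}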
