I am trying to track the execution of Python scripts with C++ threads (if anyone knows a better approach, feel free to mention it).
This is the code I have so far.
#define PY_SSIZE_T_CLEAN
#include <Python.h>  // found via -I/usr/include/python3.8 at build time
#include <iostream>
#include <thread>

void launchScript(const char *filename)
{
    Py_Initialize();
    FILE *fd = fopen(filename, "r");
    PyRun_SimpleFile(fd, filename);
    PyErr_Print();
    Py_Finalize();
}

int main(int argc, char const *argv[])
{
    Py_Initialize();
    PyRun_SimpleString("import sys");
    PyRun_SimpleString("sys.path.append(\".\")");

    std::thread first(launchScript, "script.py");
    std::cout << "Thread 1 is running with thread ID: " << first.get_id() << std::endl;

    std::thread second(launchScript, "script2.py");
    std::cout << "Thread 2 is running with thread ID: " << second.get_id() << std::endl;

    first.join();
    second.join();

    Py_Finalize();
    return 0;
}
script.py just has a print statement that prints "Hello World"; script2.py has a print statement that prints "Goodbye World".
I build the application with the following command:
g++ -pthread -I/usr/include/python3.8/ main.cpp -L/usr/lib/python3.8/config-3.8-x86_64-linux-gnu -lpython3.8 -o output
When I run ./output, I receive the following on my terminal
Thread 1 is running with thread ID: 140594340370176
Thread 2 is running with thread ID: 140594331977472
GoodBye World
./build.sh: line 2: 7864 Segmentation fault (core dumped) ./output
I am wondering why I am getting a segmentation fault. I have tried to debug with PyErr_Print(), but that has not given me any clues.
Any feedback is appreciated.
After testing and debugging the program for about 20 minutes, I found the cause: in your example you create the second std::thread, second, before calling join() on the first, so the two launchScript calls run concurrently and race on Py_Initialize()/Py_Finalize().
Thus, to solve this, just make sure first.join() runs before the second thread is created, as shown below:
int main(int argc, char const *argv[])
{
    Py_Initialize();
    PyRun_SimpleString("import sys");
    PyRun_SimpleString("sys.path.append(\".\")");

    std::thread first(launchScript, "script.py");
    std::cout << "Thread 1 is running with thread ID: " << first.get_id() << std::endl;

    // call join on the first thread before creating the second std::thread
    first.join();

    std::thread second(launchScript, "script2.py");
    std::cout << "Thread 2 is running with thread ID: " << second.get_id() << std::endl;
    second.join();

    Py_Finalize();
    return 0;
}
Related
From a C++ program (running under Windows 10), I use boost::process to invoke Python in order to interpret a simple Python script. I want to redirect the Python script's output in real time to my C++ program's console.
My problem is that I get the whole Python script output at once when the process completes; I am not getting it in real time.
Here is my MCVE:
Python script (script.py):
import time
from datetime import datetime
print( "Started Python script at t=" + str(datetime.now().time()) )
time.sleep(1)
print( "Slept 1 sec, t=" + str(datetime.now().time()) )
time.sleep(1)
print( "Slept 1 sec, t=" + str(datetime.now().time()) )
print( "Stopped Python script at t=" + str(datetime.now().time()) )
C++ program (main.cpp):
#include <boost/process.hpp>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <chrono>

namespace bp = boost::process;

std::ostream &operator<<(std::ostream &stream, const std::chrono::system_clock::time_point &time_point)
{
    const auto time {std::chrono::system_clock::to_time_t(time_point)};
    const auto localtime {*std::localtime(&time)};
    const auto time_since_epoch {time_point.time_since_epoch()};
    const auto milliseconds_count {std::chrono::duration_cast<std::chrono::milliseconds>(time_since_epoch).count() % 1000};
    stream << "[" << std::put_time(&localtime, "%T") << "." << std::setw(3) << std::setfill('0') << milliseconds_count << "] - ";
    return stream;
}

int main(int argc, char* argv[])
{
    std::cout << std::chrono::system_clock::now() << "Creating child" << std::endl;

    bp::ipstream stream;
    bp::child c("python.exe", "script.py", bp::std_out > stream);

    std::cout << std::chrono::system_clock::now() << "Created child" << std::endl;
    std::cout << std::chrono::system_clock::now() << "Invoking getline" << std::endl;

    std::string line;
    while (getline(stream, line)) {
        std::cout << std::chrono::system_clock::now() << "From Python output: " << line << std::endl;
    }

    std::cout << std::chrono::system_clock::now() << "getline ended" << std::endl;
    c.wait();
    return 0;
}
This program outputs:
[12:50:34.684] - Creating child
[12:50:34.706] - Created child
[12:50:34.708] - Invoking getline
[12:50:36.743] - From Python output: Started Python script at t=12:50:34.742105
[12:50:36.745] - From Python output: Slept 1 sec, t=12:50:35.743111
[12:50:36.745] - From Python output: Slept 1 sec, t=12:50:36.743328
[12:50:36.746] - From Python output: Stopped Python script at t=12:50:36.743328
[12:50:36.747] - getline ended
As you can see, we get all four lines after the Python process has ended: the first call to getline blocks for about two seconds, while Python runs the two time.sleep(1) calls, and then all four lines arrive within 3 ms. I would expect to get something like:
[12:50:34.684] - Creating child
[12:50:34.706] - Created child
[12:50:34.708] - Invoking getline
[12:50:34.XXX] - From Python output: Started Python script at t=12:50:34.742105
[12:50:35.XXX] - From Python output: Slept 1 sec, t=12:50:35.743111
[12:50:36.XXX] - From Python output: Slept 1 sec, t=12:50:36.743328
[12:50:36.XXX] - From Python output: Stopped Python script at t=12:50:36.743328
[12:50:36.XXX] - getline ended
I suspect the problem comes more from boost::process than from Python, but all the examples I could find for boost::process read std::cout the same way. Is there anything I should change so the getline loop runs in real time while Python prints its output?
Python streams are buffered (like C or C++ streams), for performance reasons.
You may want to flush explicitly in your Python code, e.g. print(..., flush=True) or sys.stdout.flush(), or run the interpreter with python -u.
Your question could also be operating-system specific. For Linux, be aware of fsync(2) and termios(3) (and pipe(7) and fifo(7)...). For other operating systems, read their documentation.
I would like to serialize a dataframe and pipe it to an executable that, for testing purposes, at the moment just prints out whatever it receives from the command line.
import numpy as np
import pandas as pd
import subprocess
df = pd.DataFrame(np.random.normal(size = 1000).reshape(100,10))
exe = r"C:\Users\Snake91\source\repos\ConsoleApplication1\Debug\ConsoleApplication1.exe "
process = subprocess.Popen([exe],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           shell=True)
stdout, stderr = process.communicate(np.array(df).tobytes())
Below the code of the executable
#include <iostream>

int main(int argc, char* argv[])
{
    int size = sizeof(argv) / sizeof(argv[0]) + 1;
    if (size > 1)
    {
        for (int idx = 1; idx < size; idx++)
        {
            std::cout << argv[idx];
        }
    }
}
When running the python script, however, I get an empty byte string instead of the serialized dataframe. What am I getting wrong?
As correctly pointed out in the comments, communicate does not pass the data as command-line arguments; it writes it to the child's standard input, i.e. std::cin. The code of the executable has to be modified in the following way to make it work.
#include <string>
#include <iostream>

using namespace std;

int main(int argc, char* argv[])
{
    std::string line;
    while (getline(std::cin, line))
    {
        std::cout << line << std::endl;
    }
}
Thanks for the help.
I am trying to run a simple py file in dev c++.
/// **main file**
string script_name = "example.py";
char* script_name2 = new char[script_name.length() + 1];
strcpy(script_name2, script_name.c_str());

FILE* file_pointer;
Py_Initialize();
file_pointer = fopen(script_name2, "r");
if (file_pointer == NULL)
{
    PyErr_Print();
    cout << "Cannot read file -> " << script_name2 << " ... exiting" << "\n";
    exit(0);
}
cout << file_pointer;
PyRun_SimpleFile(file_pointer, script_name2);
example.py
print("My Name is TUTANKHAMEN")
The file pointer is not null and prints an address.
example.py is in the same folder as main.cpp.
The statements below run fine and print output on the console.
Py_Initialize();
PyRun_SimpleString("import example\n");
Py_Finalize();
The script runs fine in Visual Studio 2019, but not in Dev C++.
file_pointer = _Py_fopen(script_name2, "r");
This does the trick. A likely explanation: on Windows a FILE* is only valid within the C runtime that created it, and Dev C++'s MinGW runtime can differ from the one the Python DLL was built against, so a FILE* from the compiler's fopen is not safe to pass to PyRun_SimpleFile; _Py_fopen creates the FILE* on Python's side of that boundary.
I'm trying to establish communication between a Python program and a C program.
The Python part of this project manages the application logic and the GUI, while I have written a C program to interface with a sensor that has a manufacturer supplied C library.
Now I need an easy way of communicating between these two programs on a Windows machine. The C program will continuously stream data to the Python program, and the Python software should be able to send commands to the C software for changing settings, etc.
What I found so far is:
the ZeroMQ library, which seemed pretty promising, but I could not get it to run on Windows, and it seems to no longer be maintained;
the Python subprocess module, which can pipe the stdin and stdout of the called process, but I can't seem to get it working the way I want.
The C code I have written is just a dummy program that streams an output string and prints any commands it is given. The problem is that the data from the C program does not stream continuously to the output of the Python program; it only shows up when I close the program. Also, I am not able to send a command back to the C program.
Python code:

import subprocess as subp

pro = subp.Popen("C:/Users/schoenhofer/Documents/Programming/Py2C/simpleIO/bin/Debug/simpleIO.exe",
                 stdin=subp.PIPE, stdout=subp.PIPE, bufsize=-1, universal_newlines=True)

while not pro.poll():
    ln = pro.stdout.read()
    if ln == '':
        pro.kill()
        break
    else:
        print(ln)
C code:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <memory.h>
#include <windows.h>
#include <conio.h>
#include <time.h>

void delay(unsigned int mseconds)
{
    clock_t goal = mseconds + clock();
    while (goal > clock());
}

int main()
{
    TCHAR buf[20] = {0};
    DWORD dwLength, dwRead;
    DWORD got = 0;
    uint16_t i = 0;
    LPDWORD mode = 0;
    HANDLE h_in = GetStdHandle(STD_INPUT_HANDLE);
    if (h_in == INVALID_HANDLE_VALUE) {
        printf("Error getting input handle\n");
        exit(EXIT_FAILURE);
    }
    dwLength = sizeof(buf);
    SetConsoleMode(h_in, ENABLE_PROCESSED_INPUT | ENABLE_LINE_INPUT | ENABLE_EXTENDED_FLAGS);
    while (1) {
        if (kbhit()) {
            if (ReadConsole(h_in, buf, dwLength, &got, NULL) == 0) {
                DWORD err = GetLastError();
                printf("Error reading from console: %lu", err);
                exit(EXIT_FAILURE);
            }
        }
        if (got > 0) {
            printf("Got: %s", buf);
            memset(buf, 0, 20);
            got = 0;
        }
        else {
            printf("Got nothing\n");
        }
        delay(300);
    }
    return 0;
}
Any help would be greatly appreciated.
Thanks in advance, Thomas
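Two things commonly cause the behaviour described above: pro.stdout.read() blocks until the child closes its stdout, and C's stdout switches to block buffering when it is a pipe, so the C side should call fflush(stdout) after each printf. A hedged sketch of the Python side, reading line by line; it substitutes a small Python child for simpleIO.exe so it can run standalone:

```python
import subprocess as subp
import sys

# Self-contained stand-in for the C program: a child process that prints
# two lines, flushing each one so it crosses the pipe immediately.
child_code = "print('tick', flush=True); print('tock', flush=True)"

pro = subp.Popen([sys.executable, "-c", child_code],
                 stdout=subp.PIPE, universal_newlines=True, bufsize=1)

# .read() would block until EOF; iterating yields lines as they arrive.
lines = [ln.strip() for ln in pro.stdout]
pro.wait()
print(lines)
```

The same loop shape works against the real executable, provided the C program flushes after each write.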
I'm having trouble while running embedded Python: it turns out that I can't capture the SystemExit exception raised by sys.exit().
This is what I have so far:
$ cat call.c
#include <Python.h>

int main(int argc, char *argv[])
{
    Py_InitializeEx(0);
    PySys_SetArgv(argc - 1, argv + 1);
    if (PyRun_AnyFileEx(fopen(argv[1], "r"), argv[1], 1) != 0) {
        PyObject *exc = PyErr_Occurred();
        printf("terminated by %s\n",
               PyErr_GivenExceptionMatches(exc, PyExc_SystemExit) ?
               "exit()" : "exception");
    }
    Py_Finalize();
    return 0;
}
Also, my script is:
$ cat unittest-files/python-return-code.py
from sys import exit
exit(99)
Running it:
$ ./call unittest-files/python-return-code.py
$ echo $?
99
I must execute a file, not a command.
The PyRun_SimpleFileExFlags function (and all functions built on it, including PyRun_AnyFileEx) handles exceptions itself: it exits the process on SystemExit and prints the traceback otherwise. Use the PyRun_File* family of functions to handle exceptions in the surrounding code.