Communicate C++ program with Arduino through Python

I've been struggling with this problem for quite a while. I have to communicate between an Arduino and a C++ program. I know it is easy in Python, but it has to be in C++ for a school project. I have read a lot of things, but nothing has helped me further.
I have a short Python script that communicates with the Arduino very well and quickly:
communication.py:
import serial

ser = serial.Serial('/dev/ttyACM0', 9600)

def communicate(number):
    ser.write(number)
    return ser.readline()
The Arduino receives the number and returns the values of the light sensors as a string. Now I want this communication to happen from my C++ program. My C++ code (this is only a small part of the full program):
mainwindow.cpp:
void MainWindow::run(){
    while (ui->startstop->value() == 1){ // as long as the program must run
        blockchange = 0;
        // get data from arduino
        Py_Initialize();
        const char* modulename = "communication";
        PyObject *pName = PyUnicode_FromString(modulename);
        PyObject *pModule = PyImport_Import(pName);
        if (pModule != NULL){
            PyObject *pDict = PyModule_GetDict(pModule);
            PyObject *pFunc = PyDict_GetItem(pDict, PyUnicode_FromString("communicate"));
            if (pFunc != NULL){
                PyObject_CallObject(pFunc, PyLong_FromLong(blockchange));
            }else{
                std::cout << "couldn't find func\n";
            }
        }else{
            std::cout << "python module not found\n";
        }
    }
}
It only prints "python module not found", which means that PyImport_Import(pName) returns NULL. What is wrong?
I use Ubuntu 18.04, my default Python version is 3.5, and the program is written in Qt Creator. I have tried a lot of things, also without Python, but I haven't found anything that works. I only want the Arduino to read one int from 0 to 6, and the C++ program to read back a string of 6 numbers separated by ",".

Related

Creating a basic PyTupleObject using Python's C API

I'm having difficulty creating a PyTupleObject using the Python C API.
#include "Python.h"
#include <iostream>

int main() {
    int err;
    Py_ssize_t size = 2;
    PyObject *the_tuple = PyTuple_New(size); // this line crashes the program
    if (!the_tuple)
        std::cerr << "the tuple is null" << std::endl;

    err = PyTuple_SetItem(the_tuple, (Py_ssize_t) 0, PyLong_FromLong((long) 5.7));
    if (err < 0) {
        std::cerr << "first set item failed" << std::endl;
    }

    err = PyTuple_SetItem(the_tuple, (Py_ssize_t) 1, PyLong_FromLong((long) 5.7));
    if (err < 0) {
        std::cerr << "second set item failed" << std::endl;
    }
    return 0;
}
crashes with
Process finished with exit code -1073741819 (0xC0000005)
But so does everything else I've tried so far. Any ideas what I'm doing wrong? Note that I'm just trying to run this as a C++ program, as I'm just testing the code before adding a SWIG typemap.
The commenter @asynts is correct in that you need to initialize the interpreter via Py_Initialize if you want to interact with Python objects (you are, in fact, embedding Python). There is a subset of functions from the API that can safely be called without initializing the interpreter, but creating Python objects does not fall within this subset.
Py_BuildValue may "work" (as in, not cause a segfault with those specific arguments), but it will cause issues elsewhere in the code if you try to do anything with the result without having initialized the interpreter.
It seems that you're trying to extend Python rather than embed it, but you're embedding it to test the extension code. You may want to refer to the official documentation for extending Python with C/C++ to guide you through this process.
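Once the interpreter is initialized, the four C API calls above build the equivalent of the following Python tuple. A small plain-Python sketch, noting in passing that the C cast `(long) 5.7` truncates toward zero, exactly like `int()`:

```python
# Plain-Python equivalent of PyTuple_New(2) plus the two
# PyTuple_SetItem calls above; (long) 5.7 truncates toward
# zero, which is what int(5.7) does here.
the_tuple = (int(5.7), int(5.7))
print(the_tuple)  # (5, 5)
```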

Why does my C++ for loop only run the embedded Python function once?

I have a C++ program that calls a function inside a for loop.
The function does heavy processing: it embeds Python and performs image processing.
My question is: why does it only run on the first iteration of the loop?
Main function (I only show the part of the code relevant to this question):
int main(){
    for(int a = 0; a < 5; a++){
        for(int b = 0; b < 5; b++){
            // On every increment I want it to go to the PyRead() function,
            // do the image processing, and compare
            if(PyRead() == 1){
                // some application logic might run here
            }
            else {
            }
        }
    }
}
PyRead() function, the C++ function that enters the Python environment to perform the image processing:
bool PyRead(){
    string data2;
    Py_Initialize();
    PyRun_SimpleString("print 'hahahahahawwwwwwwwwwwww' ");
    char filename[] = "testcapture";
    PyRun_SimpleString("import sys");
    PyRun_SimpleString("sys.path.append(\".\")");
    PyObject * moduleObj = PyImport_ImportModule(filename);
    if (moduleObj)
    {
        PyRun_SimpleString("print 'hahahahaha' ");
        char functionName[] = "test";
        PyObject * functionObj = PyObject_GetAttrString(moduleObj, functionName);
        if (functionObj)
        {
            if (PyCallable_Check(functionObj))
            {
                PyObject * argsObject = PyTuple_New(0);
                if (argsObject)
                {
                    PyObject * resultObject = PyEval_CallObject(functionObj, argsObject);
                    if (resultObject)
                    {
                        if ((resultObject != Py_None) && (PyString_Check(resultObject)))
                        {
                            data2 = PyString_AsString(resultObject);
                        }
                        Py_DECREF(resultObject);
                    }
                    else if (PyErr_Occurred()) PyErr_Print();
                    Py_DECREF(argsObject);
                }
            }
            Py_DECREF(functionObj);
        }
        else PyErr_Clear();
        Py_DECREF(moduleObj);
    }
    Py_Finalize();
    std::cout << "The Python test function returned: " << data2 << std::endl;
    cout << "Data2 \n" << data2;
    if(compareID(data2) == 1)
        return true;
    else
        return false;
}
This is the second time I have asked this question on Stack Overflow; I hope this time the question is clearer!
It compiles successfully with no errors.
When I run the program, I see that at a=0, b=0 it goes into the PyRead() function and returns a value; after that it goes to a=0, b=1, and at that moment the whole program ends.
It is supposed to go into the PyRead() function again, but it does not, and the program simply ends.
I must strongly mention that the PyRead() function takes a long time to run (about 30 seconds).
I have no idea what is happening and am seeking help.
Thanks.
See the note in https://docs.python.org/2/c-api/init.html#c.Py_Finalize:
"Ideally, this frees all memory allocated by the Python interpreter. Dynamically loaded extension modules loaded by Python are not unloaded. Some extensions may not work properly if their initialization routine is called more than once."
It seems your module does not play well with this function.
A workaround can be to create the script on the fly and call it with a Python subprocess.
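The workaround pattern looks roughly like this, shown in Python with the `subprocess` module for brevity (from C++ you would spawn the interpreter with `popen()` or `QProcess` instead). The inline `-c` script is a stand-in for the real `testcapture.test()` image-processing call:

```python
import subprocess
import sys

# Run the heavy Python work in a fresh interpreter on every
# iteration, so that Py_Initialize/Py_Finalize are never
# called twice within one process.
for _ in range(3):
    result = subprocess.run(
        [sys.executable, "-c", "print('processed')"],  # stand-in for the real script
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # prints 'processed' each time
```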

Limit Zbar to QR code only in Python

I'm using Zbar with its Processor option in Python. I've been trying to figure out how to limit the symbology to QR code only, but have only found answers for C, as follows:
scanner = new ImageScanner();
scanner.setConfig(Symbol.QRCODE, Config.ENABLE, 1);
I understand that the original code is written for C, but is there any way to do it in Python? Python isn't my main language, and it's a bit difficult for me to understand what the arguments are in this case for processor.parse_config() (which I currently have set to 'enable'):
From https://github.com/npinchot/zbar/blob/master/processor.c
static PyObject*
processor_parse_config (zbarProcessor *self,
                        PyObject *args,
                        PyObject *kwds)
{
    const char *cfg = NULL;
    static char *kwlist[] = { "config", NULL };
    if(!PyArg_ParseTupleAndKeywords(args, kwds, "s", kwlist, &cfg))
        return(NULL);
    if(zbar_processor_parse_config(self->zproc, cfg)) {
        PyErr_Format(PyExc_ValueError, "invalid configuration setting: %s",
                     cfg);
        return(NULL);
    }
    Py_RETURN_NONE;
}
I don't even understand why 'enable' is a valid argument.
Took me some time to figure this out since there's no documentation and the config format is counter-intuitive, IMO, but here you go:
proc.parse_config('disable')
proc.parse_config('qrcode.enable')
The first line, disable, disables all scanners.
The second line enables the qrcode scanner.

Send 50KB array from C++ to Python (NumPy) on Windows

I have a C++ and a Python application on Windows (7+). I wish to send a ~50KB array of binary data (int[], float[], or double[]) from the C++ application to a NumPy array in the Python application in real time. I want <100ms latency, but can handle up to 500ms. I'm unsure of the correct way to do this.
I believe NumPy technically stores its arrays as just an array of binary data just like C++ (assuming a reasonable C++ compiler, like modern MSVC or GCC). Therefore technically it should be very easy, but I haven't been able to identify a good way to do this.
My current plan would be to use a memory mapped file, and then handle locking the memory-mapped file with more traditional IPC such as the Win32 message pump or a semaphore.
I'm however not sure whether NumPy can read straight from a memory-mapped file. It can memory-map a file on disk with numpy.memmap, but this doesn't seem to work for a pure memory-mapped file where I just have a name or a handle.
I don't know if this is the right approach. Maybe I can get it to work, but ideally I would also want to do it the right way and not be surprised by nasty consequences of me coding stuff I don't understand.
I would appreciate any help or pointers to material that might help me figure out the correct way to do this.
UPDATE:
My C++ code (proof-of-concept) would look like this:
// Host application.
// Creates 20 byte memory mapped file with name "Global\test_mmap_file" and
// containing 5 uint32s.
// Note: Requires SeCreateGlobalPrivilege to create a global memory mapped
// file.
#include <windows.h>
#include <iostream>
#include <cassert>

int main()
{
    HANDLE file_mapping_handle = NULL;
    unsigned int* buffer = 0;
    assert(sizeof(unsigned int) == 4); // Require compatibility with np.uint32
    const size_t buffer_sz = 5 * sizeof(unsigned int);

    file_mapping_handle = CreateFileMapping(
        INVALID_HANDLE_VALUE,
        NULL,
        PAGE_READWRITE,
        0,
        buffer_sz,
        L"Global\\test_mmap_file");
    if (!file_mapping_handle)
    {
        std::cout << "CreateFileMapping failed (Host).\n";
        std::cout << "Error code: 0x" << std::hex << GetLastError() << std::endl;
        std::cin.get();
        return 1;
    }

    buffer = (unsigned int*)MapViewOfFile(
        file_mapping_handle,
        FILE_MAP_ALL_ACCESS,
        0,
        0,
        buffer_sz);
    if (!buffer)
    {
        CloseHandle(file_mapping_handle);
        std::cout << "MapViewOfFile failed (Host).\n";
        std::cout << "Error code: 0x" << std::hex << GetLastError() << std::endl;
        std::cin.get();
        return 1;
    }

    buffer[0] = 2;
    buffer[1] = 3;
    buffer[2] = 5;
    buffer[3] = 7;
    buffer[4] = 11;

    std::cout << "Data sent, press enter to exit.\n";
    std::cin.get();

    UnmapViewOfFile(buffer);
    CloseHandle(file_mapping_handle);
    return 0;
}
I wanted some way to access this shared memory from Python and create a numpy array. I tried,
import numpy as np
L_mm = np.memmap('Global\\test_mmap_file')
but this fails because Global\test_mmap_file is not a filename. Following the hints given by abarnert, I constructed the following client program, which seems to work:
import numpy as np
import mmap

mm = mmap.mmap(0, 20, 'Global\\test_mmap_file')
L = np.frombuffer(mm, dtype=np.uint32)
print(L)
mm.close()
This requires admin privileges for both programs to run (or granting the user SeCreateGlobalPrivilege). However, I think this can easily be bypassed by not giving the shared memory a global name and instead duplicating the handle and passing it to the Python program. It also doesn't control access to the shared memory properly, but that should be easy to add with a semaphore or some other such construct.
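For reference, here is a minimal sketch of the same zero-copy idea using only the standard library. An anonymous mapping stands in for the named `Global\\test_mmap_file` section (which only exists while the C++ host runs), and `memoryview.cast` plays the role of `np.frombuffer`:

```python
import mmap
import struct

# Anonymous 20-byte mapping standing in for the named Windows section.
mm = mmap.mmap(-1, 20)
mm.write(struct.pack("=5I", 2, 3, 5, 7, 11))  # what the C++ host writes
mm.seek(0)

view = memoryview(mm).cast("I")  # zero-copy uint32 view, like np.frombuffer
print(list(view))  # [2, 3, 5, 7, 11]
view.release()     # views must be released before the mapping can close
mm.close()
```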

How can I handle IPC between C and Python?

I have an application with two processes, one in C and one in Python. The C process is where all the heavy lifting is done, while the Python process handles the user interface.
The C program writes to a large-ish buffer 4 times per second, and the Python process reads this data. Up to this point the communication with the Python process has been done via AMQP. I would much rather set up some form of memory sharing between the two processes to reduce overhead and increase performance.
What are my options here? Ideally I would simply have the Python process read the shared memory directly (preferably from memory and not from disk), taking care of race conditions with semaphores or something similar. This is however something I have little experience with, so I'd appreciate any help I can get.
I am using Linux, btw.
This question was asked a long time ago. I believe the questioner already has the answer, so I wrote this answer for people coming to it later.
/* C code */
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define GETEKYDIR ("/tmp")
#define PROJECTID (2333)
#define SHMSIZE (1024)

void err_exit(char *buf) {
    fprintf(stderr, "%s\n", buf);
    exit(1);
}

int
main(int argc, char **argv)
{
    key_t key = ftok(GETEKYDIR, PROJECTID);
    if ( key < 0 )
        err_exit("ftok error");

    int shmid;
    shmid = shmget(key, SHMSIZE, IPC_CREAT | IPC_EXCL | 0664);
    if ( shmid == -1 ) {
        if ( errno == EEXIST ) {
            printf("shared memory already exists\n");
            shmid = shmget(key, 0, 0);
            printf("reference shmid = %d\n", shmid);
        } else {
            perror("errno");
            err_exit("shmget error");
        }
    }

    char *addr;

    /* Do not specify the address to attach to,
     * and attach for read & write */
    if ( (addr = shmat(shmid, 0, 0)) == (void*)-1 ) {
        if (shmctl(shmid, IPC_RMID, NULL) == -1)
            err_exit("shmctl error");
        else {
            printf("Attach shared memory failed\n");
            printf("remove shared memory identifier successful\n");
        }
        err_exit("shmat error");
    }

    strcpy( addr, "Shared memory test\n" );
    printf("Enter to exit");
    getchar();

    if ( shmdt(addr) < 0 )
        err_exit("shmdt error");

    if (shmctl(shmid, IPC_RMID, NULL) == -1)
        err_exit("shmctl error");
    else {
        printf("Finally\n");
        printf("remove shared memory identifier successful\n");
    }

    return 0;
}
# Python
# Install the sysv_ipc module first if you don't have it
import sysv_ipc as ipc

def main():
    path = "/tmp"
    key = ipc.ftok(path, 2333)
    shm = ipc.SharedMemory(key, 0, 0)
    # I found that if we do not attach ourselves,
    # it attaches as read-only.
    shm.attach(0, 0)
    buf = shm.read(19)
    print(buf)
    shm.detach()

if __name__ == '__main__':
    main()
The C program needs to be executed first, and must not be stopped before the Python code has run: it creates the shared memory segment and writes something into it. The Python code then attaches to the same segment and reads data from it.
After everything is done, press the enter key to stop the C program and remove the shared memory ID.
We can see more about SharedMemory for python in here:
http://semanchuk.com/philip/sysv_ipc/#shared_memory
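On Python 3.8+ there is also a portable standard-library alternative to sysv_ipc, `multiprocessing.shared_memory`, which a C/C++ peer can open as a named POSIX shared-memory object. A minimal sketch (the stdlib picks a unique segment name; the consumer side would normally live in the other process):

```python
from multiprocessing import shared_memory

# Producer side: create a named segment and write into it.
shm = shared_memory.SharedMemory(create=True, size=1024)
try:
    msg = b"Shared memory test\n"
    shm.buf[:len(msg)] = msg

    # Consumer side (normally another process): open by name and read.
    peer = shared_memory.SharedMemory(name=shm.name)
    print(bytes(peer.buf[:len(msg)]))  # b'Shared memory test\n'
    peer.close()
finally:
    shm.close()
    shm.unlink()  # remove the segment once both sides are done
```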
Suggestion #1:
The simplest way would be to use TCP. You mentioned your data size is large; unless it is truly huge, you should be fine using TCP. Make sure you create separate threads in C and Python for transmitting/receiving data over TCP.
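A minimal sketch of suggestion #1, with both endpoints in Python for brevity (the C side would use plain BSD sockets). A thread plays the role of the second process, and upper-casing stands in for the heavy lifting:

```python
import socket
import threading

def serve(listener):
    # Accept one connection, read a buffer, send back a processed version.
    conn, _ = listener.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())  # stand-in for the real processing

listener = socket.create_server(("127.0.0.1", 0))  # port 0 = pick a free port
port = listener.getsockname()[1]
threading.Thread(target=serve, args=(listener,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"sensor data")
    print(client.recv(1024))  # b'SENSOR DATA'
```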
Suggestion #2:
Python supports wrappers over C. One popular wrapper is ctypes - http://docs.python.org/2/library/ctypes.html
Assuming you are familiar with IPC between two C programs through shared-memory, you can write a C-wrapper for your python program which reads data from the shared memory.
Also check the following discussion, which talks about IPC between Python and C++:
Simple IPC between C++ and Python (cross platform)
How about writing the heavy-lifting code as a library in C and then providing a Python module as a wrapper around it? That is actually a pretty common approach; in particular, it allows prototyping and profiling in Python and then moving the performance-critical parts to C.
If you really have a reason to need two processes, there is an XMLRPC package in Python that should facilitate such IPC tasks. In any case, use an existing framework instead of inventing your own IPC, unless you can really prove that performance requires it.
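The XMLRPC route can be sketched like this, with both ends in Python for brevity (the C side would use an XML-RPC client library such as xmlrpc-c). The function name `total` is made up for the example:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Server process: expose one function over XML-RPC on a free local port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda values: sum(values), "total")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client process: call it like a local function.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
print(proxy.total([1, 2, 3]))  # 6
server.shutdown()
```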
