Code design for main program + configuration module - python

I have a main program X which gets a feed from my webcam.
I want to configure X in real-time while it's executing.
I understand that one of the common ways of doing that is using IPC such as named pipes, Unix sockets, Internet sockets, etc. But I want to avoid requiring each caller to separately open a socket/named pipe and communicate each time.
In short, I want a helper program by the name Y, which I can use in the following manner:
Y set-fps=15
Y show-frame=true
Y get-fps (should return 15)
I would want to place this helper program Y in /usr/bin/ (or rather in one of the $PATH directories) so that it's executable from the command line.
What are my options for obtaining this functionality? My constraints are as follows:
(i) Program X could be either C++/Python.
(ii) Multiple clients could call Y simultaneously.
I guess such systems are common on Linux, where you have programs like nmcli interacting with services like NetworkManager?

Sorry to put this in an answer; I still can't comment.
One way I can see to do this is the following:
Since your program takes frames from the webcam, there is certainly a main loop or a repeating state machine.
You could allocate a block of shared memory (a kernel-managed segment that both processes can map), with each slot in the block holding one of the parameters.
Then, on each iteration of the loop, your program X reads the parameters from that shared block. On the other side, each client that wants to modify the parameters accesses the same shared block and changes the required values.
You then have to protect the shared block with a semaphore for any read or write operation. (A mutex is fine there, though you may want a reader/writer implementation.)
Of course, this is not optimal in performance compared to protecting each parameter individually, but this way you only have to handle one semaphore, and a simple mutex does the job. It's certainly also less elegant than communicating over pipes or sockets, but with those you would still have to protect the reads and writes of your parameters...
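If X is written in Python, the shared-block idea can be sketched with the standard library (Python 3.8+). Everything specific here is an assumption for illustration: the segment name "x_params", the two-integer layout (fps, show_frame), and an flock-based file lock standing in for the semaphore.

```python
import fcntl
import struct
from multiprocessing import shared_memory

SEG_NAME = "x_params"            # hypothetical name agreed on by X and Y
LOCK_PATH = "/tmp/x_params.lock" # file lock standing in for a semaphore
FMT = "ii"                       # layout: fps, show_frame

def write_params(fps, show_frame):
    # A client (Y) attaches to the existing segment and updates it
    shm = shared_memory.SharedMemory(name=SEG_NAME)
    with open(LOCK_PATH, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_EX)   # exclusive lock for writers
        shm.buf[:struct.calcsize(FMT)] = struct.pack(FMT, fps, int(show_frame))
        fcntl.flock(lock, fcntl.LOCK_UN)
    shm.close()

def read_params():
    # X polls this once per loop iteration
    shm = shared_memory.SharedMemory(name=SEG_NAME)
    with open(LOCK_PATH, "w") as lock:
        fcntl.flock(lock, fcntl.LOCK_SH)   # shared lock for readers
        fps, show = struct.unpack(FMT, bytes(shm.buf[:struct.calcsize(FMT)]))
        fcntl.flock(lock, fcntl.LOCK_UN)
    shm.close()
    return fps, bool(show)
```

Program X would create the segment once at startup with `SharedMemory(name=SEG_NAME, create=True, size=8)` and unlink it on shutdown.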

Related

In python, what is the optimal way to implement "debug mode" functions which only run conditionally (e.g. if app is in dev mode), but skips otherwise?

I am currently developing (Python 2.7) a device which does fairly complex imaging work with a stereo camera rig on a Raspberry Pi. The functionality isn't particularly relevant to this post, however the fact that we are limited to a Pi's processing power while doing costly operations is very relevant.
The device is governed by one central class, which I'll call BaseClass, where most of the functionality lives, however it is instantiated in one of two subclasses which extend it: DeviceClass, which is instantiated when the device itself is booting up, and DesktopClass which is instantiated on a computer, and contains functions and overrides as needed in order to emulate the exact same functionality as we get on the Pi - essentially, it is used to streamline the development process.
Oftentimes the code does image manipulation and remapping, and I want to save the image between steps to confirm that everything looks the way it should. I want this functionality to be toggleable via a parameter set on instantiation, without having to constantly change code, and only in DesktopClass. There are many ways to get this functionality, but most of them involve conditionals or leveraging polymorphism, which either waste clock cycles when run on the device (bad, because of limited processing power) or lead to repetitive code in separate classes with only a couple of lines changed, which I want to avoid just as much to keep the codebase clean.
So the question is, is there a way to create this functionality in such a way that I can have a single call which outputs the debug code (e.g. displaying an image) when the development mode is enabled, but simply skips without wasting any extra clock cycles when not in development mode, as might be achieved with macros or something else in a compiled language?
NOTE: This is also not the best example because compared to an image manipulation, a function call which returns immediately or a conditional is practically negligible. However there are other cases where I will want to call similar debug functions on a much more granular level, where it could add a nonnegligible overhead.
For example, in the following code, I have a function which does several successive image operations using functions that are the same on desktop and device, so it is defined in BaseClass and defining it in the subclasses would be wasteful. In desktop mode however, I want to save the images between each step if self.dev is True.
import cv2

class BaseClass:
    def __init__(self):
        self.dev = False

    def imaging(self, img_list, map_a, map_b):
        imgsout = []
        for img in img_list:
            # Do some image remapping
            step1 = cv2.remap(img, map_a[0], map_a[1], cv2.INTER_LINEAR)
            # DEBUG SAVE IMAGE
            step2 = cv2.remap(step1, map_b[0], map_b[1], cv2.INTER_LINEAR)
            # DEBUG SAVE IMAGE
            imgsout.append(step2)
        return imgsout

class DesktopClass(BaseClass):
    def __init__(self, dev=False):
        BaseClass.__init__(self)
        self.dev = dev

class DeviceClass(BaseClass):
    def __init__(self):
        BaseClass.__init__(self)

# Body code
# Bunch of images to work on, doesn't matter what this is
imgs = [img1, img2, img3]

# This code runs on the desktop development setting
desktop = DesktopClass(dev=True)
output1 = desktop.imaging(imgs, map_a, map_b)
# When stepping through this code, I should save the intermediary images at several steps

desktop.dev = False
output2 = desktop.imaging(imgs, map_a, map_b)
# This code should run without saving any images but give the same output

# This code runs on the raspberry pi
device = DeviceClass()
output3 = device.imaging(imgs, map_a, map_b)
# This code should run without saving any images but give the same output
Some potential solutions and their shortcomings:
Define a class function BaseClass.debug_saveim() which checks self.dev is True before executing any debug code.
Pros: Single line for each debug call in the base class - doesn't clutter the code significantly, functionality is obvious at first glance and doesn't hurt readability of code.
Cons: On the device, enters a new function just to fail a conditional and exit. Relatively wasteful when trying to do stuff in realtime.
Define a class function BaseClass.saveim() which saves the image. Every time it is used in the imaging() function (or elsewhere), wrap it in a conditional so that it only runs if in dev mode
Pros: Avoids entering a new function stack frame on device, only has to do the conditional which is more efficient
Cons: Clutters the code. Requires two lines for a single function which doesn't even execute on the intended, final product. Hurts readability and honestly just looks bad. Still wastes a tiny bit of time with a conditional.
Define DesktopClass.imaging() separately from BaseClass.imaging(), where the base one has no debug calls, and the desktop one overrides it.
Pros: Device never wastes any cycles because the debug calls don't even exist in that context
Cons: Redundant code, and all the bad things that come with it.
Define an identically named debug function separately such as BaseClass.debug_save_im() and DesktopClass.debug_save_im(). These pass without doing anything, and save the image, respectively. This function is called in BaseClass.imaging().
Pros: One very readable line per debug output, doesn't clutter code. Leverages polymorphism without using any redundant code. Elegant.
Cons: Still has to enter a new function in the device context, even if it just passes. Have to write two definitions per debug function.
Is there a standard, or commonly accepted practice to get this functionality as efficiently as possible? Is there a tool or library which does this as effectively as possible?
Thank you!
P.S. I know Python is very suboptimal for any sort of realtime operation, but we have our reasons for using it, and truth be told the conditionals and whatnot probably won't hurt our operation at all. Still, it feels dirty on principle, and I would very much like to know if there is a clean and elegant solution that also optimizes performance.
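One common way to approximate the macro-like behaviour, sketched here with the names from the question (the image math is a placeholder): bind the debug hook once at instantiation, so the device's per-call cost is a single no-op method call, and optionally guard it with `if __debug__:`, which CPython removes at compile time when run with `python -O`.

```python
class BaseClass:
    def __init__(self):
        self.dev = False
        self.debug_saveim = self._noop   # rebound in DesktopClass when dev=True

    def _noop(self, *args, **kwargs):
        pass

    def imaging(self, img):
        step1 = img * 2                  # placeholder for the cv2.remap steps
        if __debug__:                    # whole block stripped under `python -O`
            self.debug_saveim(step1)
        return step1

class DesktopClass(BaseClass):
    def __init__(self, dev=False):
        BaseClass.__init__(self)
        self.dev = dev
        if dev:
            self.debug_saveim = self._save

    def _save(self, img):
        self.saved = img                 # stand-in for writing the image to disk
```

This keeps a single readable line per debug point in BaseClass, needs no duplicated `imaging()`, and the device can be launched with `python -O` so even the no-op call disappears.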

How to read a sensor in c but then use that input in python

I have a flow sensor that I have to read with C because Python isn't fast enough, but the rest of my code is Python. What I want to do is have the C code running in the background and have the Python code request a value from it every now and then. I know that popen is probably the easiest way to do this, but I don't fully understand how to use it. I don't want completed code; I just want a way to send text/numbers back and forth between a Python and a C program. I am running Raspbian on a Raspberry Pi Zero W. Any help would be appreciated.
Probably not a full answer, but I expect it gives some hints and it is far too long for a comment. You should think twice about your requirements, because it will probably not be that easy depending on your proficiency in C and what OS you are using.
If I have understood correctly, you have a sensor that sends data (which is already odd unless the sensor is an intelligent one). You want to write a C program that will read that data and either buffer it or retain only the last value (you did not say...), and at the same time wait for requests from a Python script to hand back what it has received (and kept) from the sensor. That probably means a dual-thread program with quite a bit of synchronization.
You will also need to specify the communication channel between C and Python. You can certainly use the subprocess module, but do not forget to use unbuffered output in C. You could also imagine an independent program that uses a FIFO or a named pipe with a well-defined protocol for external requests, in order to completely separate the two problems.
So my opinion is that this is currently too broad for a single SO question...
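That said, the subprocess part can be sketched briefly. This assumes the C reader prints one reading per line with unbuffered output; a tiny inline Python script stands in for the compiled C binary here.

```python
import subprocess
import sys

# Stand-in for the C sensor reader: prints one reading per line and
# flushes each time (the C equivalent of unbuffered output).
FAKE_SENSOR = r'''
import sys
for value in (1.5, 2.5, 3.5):      # placeholder sensor readings
    print(value)
    sys.stdout.flush()
'''

proc = subprocess.Popen(
    [sys.executable, "-c", FAKE_SENSOR],   # the real case: ["./sensor_reader"]
    stdout=subprocess.PIPE, text=True,
)

# The Python side just reads lines as they arrive
readings = [float(line) for line in proc.stdout]
proc.wait()
```

If the C program must keep the latest value and answer on demand instead, the same line-based protocol works over its stdin/stdout, or over a FIFO as suggested above.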

Dynamic python user input to a seperate C program

I have a python GUI interface written in Tkinter. The main point of the program is to setup many different variables for a final calculation on a hyperspectral image (geography stuff). Now, some further specifications have developed where the user would like to be able to actively input some parameters for groups of pixels to be smoothed. This information would be input in the Python GUI and the C programs that handle the image modifications need this as input. Since the images can be giant, I want to try and avoid always re-running the C program (which involves memory allocation, reading a giant file, etc.) with a call such as
os.system("./my_C_Program param1 param2 param3 ...")
I'd prefer a system where, once I've called my_C_Program, it can wait after having loaded all the resources into memory. I was thinking something involving getchar() would be what I want, but I don't know how I can get the output from Python to go to my_C_Program. I've seen a few similar questions about this on SO, but I wasn't able to determine quite how those scenarios would help mine specifically.
If getchar() is the answer, can someone please explain how stdout works with multiple terminals open?
As well, I'm trying to keep this program easily multiplatform across linux/mac/windows.
To summarize, I want the following functionality:
User selects certain input from python GUI
That input becomes the input for a C program
That C program can handle more input without having to be run again from the start (avoiding file I/O, etc).
The first thing you should probably do is start using Python's subprocess module, rather than os.system. Once you've done that, then you can change it so the C program's stdin is something you can write to in Python, rather than inheriting the Python script's stdin.
After that, you could just have Python send data over that the C program can interpret. For example, you might want to use a bunch of JSON chunks, one per line, like Twitter's streaming API1; the Python script makes a request dictionary, serializes it with json.dump, and then writes a newline. The C program reads a line, parses the JSON, and handles the request.
1 Upon reading the documentation, it looks like their implementation is a little more complex. You could adopt how they do it or just do it like I described.
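The JSON-lines idea can be sketched like this, with a small inline Python child standing in for the long-running C program; the request/reply field names are made up for illustration.

```python
import json
import subprocess
import sys

# Stand-in for the C program: reads one JSON request per line from stdin
# and writes one JSON reply per line to stdout.
CHILD = r'''
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    print(json.dumps({"echo": req["param"]}), flush=True)
'''

proc = subprocess.Popen(
    [sys.executable, "-c", CHILD],          # the real case: ["./my_C_Program"]
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)

def request(param):
    # One request out, one reply back, both newline-delimited JSON
    proc.stdin.write(json.dumps({"param": param}) + "\n")
    proc.stdin.flush()
    return json.loads(proc.stdout.readline())

reply = request(42)
proc.stdin.close()   # closing stdin lets the child exit its read loop
proc.wait()
```

The C side would loop on fgets(), parse each line, and fflush(stdout) after each reply so the Python side never blocks on buffered output.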
icktoofay and JasonFruit have suggested decent approaches; I'm going to suggest something to decouple the two programs a little further.
If you write your C program as a server that listens for requests and replies with answers on a TCP socket, you can more easily change clients, make it support multiple simultaneous clients, perform near-seamless upgrades of clients or servers without necessarily needing to modify the other, or you could move the C program to more powerful hardware without needing to do more than slightly modify configurations.
Once you have opened a listening socket and accepted a connection, the rest of your program could continue as if you were just interacting over standard input and standard output. This works well enough, but you might prefer to encode your data in some standardized format such as JSON or ASN.1, which can save some manual string handling.
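A minimal sketch of the server idea, written in Python for brevity (the real version would be the C program doing the same with socket/bind/listen/accept); the handle_request logic is a placeholder for the actual smoothing work.

```python
import json
import socketserver

def handle_request(req):
    # Placeholder for the real per-request image work
    return {"result": req.get("pixels", 0) * 2}

class ParamHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One JSON request per line, one JSON reply per line
        for line in self.rfile:
            reply = handle_request(json.loads(line))
            self.wfile.write((json.dumps(reply) + "\n").encode())

def make_server(port=0):
    # Port 0 asks the OS for any free port; the heavy resources (the giant
    # image file, etc.) would be loaded once before serve_forever() starts
    return socketserver.TCPServer(("127.0.0.1", port), ParamHandler)
```

The GUI then just opens a TCP connection and writes requests, and either side can be restarted or moved to another machine without touching the other.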
Could you do something with pexpect? It lets you provide input to a command-line program, waiting for specified prompts from it before continuing. It also lets you read the intervening output, so you could respond to that as needed.
If you're on Windows (as I note from your comment that you are), you could try winpexpect, which is similar.

Lock down a program so it has no access to outside files, like a virus scanner does

I would like to launch an untrusted application programmatically, so I want to remove the program's ability to access files, network, etc. Essentially, I want to restrict it so its only interface to the rest of the computer is stdin and stdout.
Can I do that? Preferably in a cross-platform way but I sort of expect to have to do it differently for each OS. I'm using Python, but I'm willing to write this part in a lower level or more platform integrated language if necessary.
The reason I need to do this is to write a distributed computing infrastructure. It needs to download a program, execute it, piping data to stdin, and returning data that it receives on stdout to the central server. But since the program it downloads is untrusted, I want to restrict it to only using stdin and stdout.
The short answer is no.
The long answer is not really. Consider a C program, in which the program opens a log file by grabbing the next available file descriptor. Your program, in order to stop this, would need to somehow monitor this, and block it. Depending on the robustness of the untrusted program, this could cause a fatal crash, or inhibit harmless functionality. There are many other similar issues to this one that make what you are trying to do hard.
I would recommend looking into sandboxing solutions already available. In particular, a virtual machine can be very useful for testing out untrusted code. If you can't find anything that meets your needs, your best bet is to probably deal with this at the kernel level, or with something a bit closer to the hardware such as C.
Yes, you can do this. You can run an inferior process through ptrace (essentially you act as a debugger) and you hook on system calls and determine whether they should be allowed or not.
codepad.org does this for instance, see: about codepad. It uses the geordi supervisor to execute the untrusted code.
You can run untrusted apps in a chroot and block them from using the network with an iptables rule (for example, the owner --uid-owner match).
But really, a virtual machine is more reliable, and on modern hardware the performance impact is negligible.

python queue concurrency process management

The use case is as follows:
I have a script that runs a series of non-python executables to reduce (pulsar) data. Right now I use subprocess.Popen(..., shell=True) and then the communicate function of subprocess to capture the standard out and standard error from the non-python executables, and I log the captured output using the python logging module.
The problem is: just one core of the possible 8 gets used most of the time.
I want to spawn multiple processes, each doing part of the data set in parallel, and I want to keep track of progress. It is a script / program to analyze data from a low-frequency radio telescope (LOFAR). The easier it is to install / manage and test, the better.
I was about to build code to manage all this, but I'm sure it must already exist in some easy library form.
The subprocess module can start multiple processes for you just fine, and keep track of them. The problem, though, is reading the output from each process without blocking any other processes. Depending on the platform there are several ways of doing this: using the select module to see which process has data to be read, setting the output pipes non-blocking using the fcntl module, or using threads to read each process's data (which subprocess.Popen.communicate itself uses on Windows, because it doesn't have the other two options). In each case the devil is in the details, though.
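The select-based variant can be sketched like this on a POSIX system, with tiny inline Python one-liners standing in for the real reduction executables:

```python
import select
import subprocess
import sys

# Three stand-in children, each printing a single line; in the real script
# these would be the non-python reduction executables.
CHILD = 'import sys; print("done", sys.argv[1])'
procs = [
    subprocess.Popen([sys.executable, "-c", CHILD, str(i)],
                     stdout=subprocess.PIPE)
    for i in range(3)
]

captured = {p: [] for p in procs}
open_pipes = {p.stdout: p for p in procs}
while open_pipes:
    # Block until at least one child has output (or EOF) for us
    ready, _, _ = select.select(list(open_pipes), [], [])
    for pipe in ready:
        line = pipe.readline()   # safe here because children emit whole lines
        if line:
            captured[open_pipes[pipe]].append(line.decode().strip())
        else:                    # EOF: this child has closed its stdout
            del open_pipes[pipe]
            pipe.close()
for p in procs:
    p.wait()
```

Note that select on pipes works on Linux/macOS but not on Windows, which is exactly why communicate falls back to threads there.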
Something that handles all this for you is Twisted, which can spawn as many processes as you want, and can call your callbacks with the data they produce (as well as other situations.)
Maybe Celery will serve your needs.
If I understand correctly what you are doing, I might suggest a slightly different approach. Try establishing a single unit of work as a function and then layer on the parallel processing after that. For example:
Wrap the current functionality (calling subprocess and capturing output) into a single function. Have the function create a result object that can be returned; alternatively, the function could write out to files as you see fit.
Create an iterable (list, etc.) that contains an input for each chunk of data for step 1.
Create a multiprocessing Pool and then capitalize on its map() functionality to execute your function from step 1 for each of the items in step 2. See the python multiprocessing docs for details.
You could also use a worker/Queue model. The key, I think, is to encapsulate the current subprocess/output capture stuff into a function that does the work for a single chunk of data (whatever that is). Layering on the parallel processing piece is then quite straightforward using any of several techniques, only a couple of which were described here.
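The three steps above can be sketched with multiprocessing.Pool; here "echo" stands in for the real reduction executable and the chunk names are made up:

```python
import subprocess
from multiprocessing import Pool

# Step 1: one unit of work as a function - run an external command on one
# chunk of input and capture its output and exit status.
def process_chunk(chunk):
    result = subprocess.run(["echo", chunk],      # placeholder executable
                            capture_output=True, text=True)
    return chunk, result.stdout.strip(), result.returncode

if __name__ == "__main__":
    # Step 2: an iterable with one entry per chunk of data
    chunks = ["obs-%03d" % i for i in range(8)]
    # Step 3: a Pool distributes the chunks across cores; map() preserves order
    with Pool(processes=4) as pool:
        results = pool.map(process_chunk, chunks)
    for chunk, out, code in results:
        print(chunk, code)
```

Logging or progress tracking can then live in the parent, which sees each chunk's captured output as the results come back.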
