Receiving continuous output from python spawn child process not working - python

I am attempting to stream output from a weighing scale using a program written in Python. This program (scale.py) runs continuously and prints the raw value every half second.
import RPi.GPIO as GPIO
import time
import sys
from hx711 import HX711

def cleanAndExit():
    print "Cleaning..."
    GPIO.cleanup()
    print "Bye!"
    sys.exit()

hx = HX711(5, 6)
hx.set_reading_format("LSB", "MSB")
hx.reset()
hx.tare()

while True:
    try:
        val = hx.get_weight(5)
        print val
        hx.power_down()
        hx.power_up()
        time.sleep(0.5)
    except (KeyboardInterrupt, SystemExit):
        cleanAndExit()
I am trying to receive, in a separate Node.js program (index.js, located in the same folder), each raw data point printed by print val. Here is my Node program.
var spawn = require('child_process').spawn;
var py = spawn('python', ['scale.py']);

py.stdout.on('data', function(data){
    console.log("Data: " + data);
});

py.stderr.on('data', function(data){
    console.log("Error: " + data);
});
When I run sudo node index.js there is no output and the program waits indefinitely.
My thought process is that print val should write output to the stdout stream, and this should fire the data event in the Node program. But nothing is happening.
Thanks for your help!

By default, all programs that use libc's stdio (CPython included, as it is written in C) block-buffer console output when it is connected to a pipe rather than a terminal.
One solution is to flush the output buffer every time you print:
print val
sys.stdout.flush()
Another solution is to invoke python with the -u flag which forces it to be unbuffered:
var py = spawn('python', ['-u', 'scale.py']);
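As a quick sanity check that is independent of Node, you can read scale.py from another Python process and confirm that values now arrive every half second. This is just an illustrative sketch, not part of the original answer:

from subprocess import Popen, PIPE

# run the scale reader unbuffered and consume its output line by line
child = Popen(['python', '-u', 'scale.py'], stdout=PIPE)
for line in iter(child.stdout.readline, b''):
    print("Data: %s" % line.strip().decode())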

Related

Node.js child process to Python process

I need to send text from a Node.js process to a Python child process.
My dummy Node client looks like this:
var resolve = require('path').resolve;
var spawn = require('child_process').spawn;

data = "lorem ipsum"

var child = spawn('master.py', []);
var res = '';

child.stdout.on('data', function (_data) {
    try {
        var data = Buffer.from(_data, 'utf-8').toString();
        res += data;
    } catch (error) {
        console.error(error);
    }
});
child.stdout.on('exit', function (_) {
    console.log("EXIT:", res);
});
child.stdout.on('end', function (_) {
    console.log("END:", res);
});
child.on('error', function (error) {
    console.error(error);
});
child.stdout.pipe(process.stdout);

child.stdin.setEncoding('utf-8');
child.stdin.write(data + '\r\n');
while the Python process master.py is
#!/usr/bin/env python
import sys
import codecs

if sys.version_info[0] >= 3:
    ifp = codecs.getreader('utf8')(sys.stdin.buffer)
else:
    ifp = codecs.getreader('utf8')(sys.stdin)

if sys.version_info[0] >= 3:
    ofp = codecs.getwriter('utf8')(sys.stdout.buffer)
else:
    ofp = codecs.getwriter('utf8')(sys.stdout)

for line in ifp:
    tline = "<<<<<" + line + ">>>>>"
    ofp.write(tline)

# close files
ifp.close()
ofp.close()
I need a UTF-8 encoded input reader, so I am wrapping sys.stdin, but it seems that when Node.js writes to the child process's stdin using child.stdin.write(data + '\r\n');, nothing is read by the for line in ifp: loop.
You'll need to call child.stdin.end() in the Node program after the final call to child.stdin.write(). Until end() is called, the child.stdin writable stream will hold the written data in a buffer, so the Python program won't see it. See the Buffering discussion in https://nodejs.org/docs/latest-v8.x/api/stream.html#stream_buffering for details.
(If you write lots of data into stdin then the write buffer will eventually fill to a point where the accumulated data will be flushed out automatically to the Python program. The buffer will then begin again to collect data. An end() call is needed to make sure that the final portion of the written data is flushed out. It also has the effect of indicating to the child process that no more data will be sent on this stream.)
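The same buffering/EOF principle can be demonstrated from Python itself. In this sketch (my own illustration, assuming master.py is in the working directory), the child's for line in ifp: loop only completes once the writer closes the pipe:

from subprocess import Popen, PIPE

# write one line to the child, then close stdin so the child sees EOF
child = Popen(['python', 'master.py'], stdin=PIPE, stdout=PIPE)
child.stdin.write(b'lorem ipsum\r\n')
child.stdin.close()          # the Python analogue of child.stdin.end()
print(child.stdout.read().decode('utf-8'))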

python: Using ncurses when underlying library logs to stdout

I am trying to write a small python program that uses curses and a SWIGed C++ library. That library logs a lot of information to STDOUT, which interferes with the output from curses. I would like to somehow intercept that content and then display it nicely through ncurses. Is there some way to do this?
A minimal demonstrating example will hopefully show how this all works. I am not going to set up SWIG just for this, so I will opt for a quick and dirty demonstration that calls a .so file through ctypes to emulate the external C library usage. Just put the following in the working directory.
testlib.c
#include <stdio.h>

int vomit(void);

int vomit()
{
    printf("vomiting output onto stdout\n");
    fflush(stdout);
    return 1;
}
Build with gcc -shared -Wl,-soname,testlib -o _testlib.so -fPIC testlib.c
testlib.py
import ctypes
from os.path import dirname
from os.path import join

testlib = ctypes.CDLL(join(dirname(__file__), '_testlib.so'))
demo.py (a minimal demonstration)
import os
import sys
import testlib
from tempfile import mktemp

pipename = mktemp()
os.mkfifo(pipename)
pipe_fno = os.open(pipename, os.O_RDWR | os.O_NONBLOCK)

# point fd 1 at the fifo while the library call runs, then restore it
stdout_fno = os.dup(sys.stdout.fileno())
os.dup2(pipe_fno, 1)
result = testlib.testlib.vomit()
os.dup2(stdout_fno, 1)

buf = bytearray()
while True:
    try:
        buf += os.read(pipe_fno, 1)
    except BlockingIOError:  # the fifo has been drained
        break

print("the captured output is: %s" % buf.decode('utf8'))
print('the result of the program is: %d' % result)
os.unlink(pipename)
The caveat is that the output generated by the .so might be buffered somewhere within the C library's stdio (I have no idea how that part all works), and I cannot find a way to flush the output to ensure it is all written out unless the fflush call is inside the .so; so there can be complications with how this ultimately behaves.
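One possible mitigation (my own addition, assuming the library does its output through C stdio) is to ask libc to flush every open output stream before draining the pipe:

import ctypes

# CDLL(None) exposes the symbols of the already-loaded C runtime on Linux
libc = ctypes.CDLL(None)
libc.fflush(None)   # fflush(NULL) flushes all open output streams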
This can also be done with threading (the code is becoming quite atrocious, but it shows the idea):
import os
import sys
import testlib
from threading import Thread
from time import sleep
from tempfile import mktemp

def external():
    # the thread that will call the .so that produces output
    for i in range(7):
        testlib.testlib.vomit()
        sleep(1)

# setup
stdout_fno = os.dup(sys.stdout.fileno())
pipename = mktemp()
os.mkfifo(pipename)
pipe_fno = os.open(pipename, os.O_RDWR | os.O_NONBLOCK)
os.dup2(pipe_fno, 1)

def main():
    thread = Thread(target=external)
    thread.start()
    buf = bytearray()
    counter = 0
    while thread.is_alive():
        sleep(0.2)
        try:
            while True:
                buf += os.read(pipe_fno, 1)
        except BlockingIOError:
            if buf:
                # do some processing to show that the string is fully
                # captured
                output = 'external lib: [%s]\n' % buf.strip().decode('utf8')
                # low level write to original stdout
                os.write(stdout_fno, output.encode('utf8'))
                buf.clear()
            os.write(stdout_fno, b'tick: %d\n' % counter)
            counter += 1

main()

# cleanup
os.dup2(stdout_fno, 1)
os.close(pipe_fno)
os.unlink(pipename)
Example execution:
$ python demo2.py
external lib: [vomiting output onto stdout]
tick: 0
tick: 1
tick: 2
tick: 3
external lib: [vomiting output onto stdout]
tick: 4
Note that everything is captured.
Now, since you do have to make use of ncurses and also run that function in a thread, this is a bit tricky. Here be dragons.
We will need the ncurses API that will actually let us create a new screen to redirect the output, and again ctypes can be handy for this. Unfortunately, I am using absolute paths for the DLLs on my system; adjust as required.
lib.py
import ctypes

libc = ctypes.CDLL('/lib64/libc.so.6')
ncurses = ctypes.CDLL('/lib64/libncursesw.so.6')

class FILE(ctypes.Structure):
    pass

class SCREEN(ctypes.Structure):
    pass

FILE_p = ctypes.POINTER(FILE)
libc.fdopen.restype = FILE_p

SCREEN_p = ctypes.POINTER(SCREEN)
ncurses.newterm.restype = SCREEN_p
ncurses.set_term.restype = SCREEN_p

fdopen = libc.fdopen
newterm = ncurses.newterm
set_term = ncurses.set_term
delscreen = ncurses.delscreen
endwin = ncurses.endwin
Now that we have newterm and set_term, we can finally complete the script. Remove everything from the main function, and add the following:
# set up the curses window
import curses
from lib import newterm, fdopen, set_term, endwin, delscreen

stdin_fno = sys.stdin.fileno()
stdscr = curses.initscr()
# use the ctypes library to create a new screen and redirect output
# back to the original stdout; note the byte-string modes needed for
# the C-level fdopen
screen = newterm(None, fdopen(stdout_fno, b'w'), fdopen(stdin_fno, b'r'))
old_screen = set_term(screen)
stdscr.clear()
curses.noecho()

border = curses.newwin(8, 68, 4, 4)
border.border()
window = curses.newwin(6, 66, 5, 5)
window.scrollok(True)
window.clear()
border.refresh()
window.refresh()

def main():
    thread = Thread(target=external)
    thread.start()
    buf = bytearray()
    counter = 0
    while thread.is_alive():
        sleep(0.2)
        try:
            while True:
                buf += os.read(pipe_fno, 1)
        except BlockingIOError:
            if buf:
                output = 'external lib: [%s]\n' % buf.strip().decode('utf8')
                buf.clear()
                window.addstr(output)
                window.refresh()
            window.addstr('tick: %d\n' % counter)
            counter += 1
            window.refresh()

main()

# cleanup
os.dup2(stdout_fno, 1)
endwin()
delscreen(screen)
os.close(pipe_fno)
os.unlink(pipename)
This should show that the intended result with ncurses can be achieved; however, in my case it hung at the end and I am not sure what else might be going on. I suspected an accidental use of 32-bit Python against the 64-bit shared object, but on exit things somehow don't play nicely (misuse of ctypes is easy, as it turns out!). At least it shows the output inside an ncurses window as you might expect.
@metatoaster linked to a discussion of temporarily redirecting the standard output to /dev/null. That shows something about how to use dup2, but is not quite an answer by itself.
Python's interface to curses uses only initscr, which means that the curses library writes its output to the standard output. The SWIG'd library also writes its output to the standard output, which would interfere with the curses output. You could solve the problem by:
redirecting the curses output to /dev/tty, and
redirecting the SWIG'd output to a temporary file, and
reading the file, checking for updates to add to the screen.
Once initscr has been called, the curses library has its own copy of the output stream. If you can temporarily point the real standard output to a file first (before initializing curses), then open a new standard output to /dev/tty (for initscr), and then restore the (global!) output stream then that should work.
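A minimal sketch of that file-based variant, reusing the testlib module from above and assuming the library writes through the C-level stdout (fd 1); the log file name is made up, and the /dev/tty step for initscr is glossed over:

import os
import testlib

log_fd = os.open('lib_output.log', os.O_RDWR | os.O_CREAT | os.O_TRUNC)
saved_stdout = os.dup(1)
os.dup2(log_fd, 1)             # library output now lands in the file
result = testlib.testlib.vomit()
os.dup2(saved_stdout, 1)       # restore the stream that curses will use

with open('lib_output.log') as f:
    captured = f.read()        # feed this into the curses window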

How to call endless python script from NodeJS

I have a python script:
# test.py
import psutil

while True:
    result = psutil.cpu_percent(interval=1)
    print(result)
and then the Node.js code:
// test.js
var PythonShell = require('python-shell')

pyshell = new PythonShell('test.py')
pyshell.on('message', function(message) {
    console.log(message)
})
Nothing happens when I execute the Node script. Please help me figure out how to get data every second ("real-time") from the endless Python code in Node and log it to the console.
You need to flush STDOUT:
# test.py
import psutil
import sys

while True:
    result = psutil.cpu_percent(interval=1)
    print(result)
    sys.stdout.flush()
It looks like this is a common issue with the python-shell npm package - https://github.com/extrabacon/python-shell/issues/81

LLDB Python/C++ bindings: Async Step instructions

I am trying to step through a thread. This works when I use debugger.SetAsync(False), but I want to do this asynchronously. Here is a script to reproduce it: stepping works when setting debugger.SetAsync(False) instead of True. I added time.sleep calls so that there is time to execute my instructions. I expect frame.pc to show the next instruction.
import time
import sys

lldb_path = "/Applications/Xcode.app/Contents/SharedFrameworks/LLDB.framework/Resources/Python"
sys.path = sys.path + [lldb_path]

import lldb
import os

exe = "./a.out"
debugger = lldb.SBDebugger.Create()
debugger.SetAsync(True)  # change this to False, to make it work
target = debugger.CreateTargetWithFileAndArch(exe, lldb.LLDB_ARCH_DEFAULT)

if target:
    main_bp = target.BreakpointCreateByName("main", target.GetExecutable().GetFilename())
    print main_bp

    launch_info = lldb.SBLaunchInfo(None)
    launch_info.SetExecutableFile(lldb.SBFileSpec(exe), True)
    error = lldb.SBError()
    process = target.Launch(launch_info, error)
    time.sleep(1)

    # Make sure the launch went ok
    if process:
        # Print some simple process info
        state = process.GetState()
        print 'process state'
        print state

        thread = process.GetThreadAtIndex(0)
        frame = thread.GetFrameAtIndex(0)
        print 'stop loc'
        print hex(frame.pc)
        print 'thread stop reason'
        print thread.stop_reason

        print 'stepping'
        thread.StepInstruction(False)
        time.sleep(1)

        print 'process state'
        print process.GetState()
        print 'thread stop reason'
        print thread.stop_reason
        frame = thread.GetFrameAtIndex(0)
        print 'stop loc'
        print hex(frame.pc)  # invalid output?
Version: lldb-340.4.110 (Provided with Xcode)
Python: Python 2.7.10
OS: Mac Yosemite
The "async" version of the lldb API's uses an event based system. You can't wait for things to happen using sleep's - but rather using the WaitForEvent API's lldb provides. An example of how to do this is given at:
https://github.com/llvm/llvm-project/blob/main/lldb/examples/python/process_events.py
There's a bunch of stuff at the beginning of the example that shows how to load the lldb module and does argument parsing. The part you want to look at is the loop:
listener = debugger.GetListener()
# sign up for process state change events
stop_idx = 0
done = False
while not done:
    event = lldb.SBEvent()
    if listener.WaitForEvent(options.event_timeout, event):
and below.
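Adapted to the stepping scenario in the question, the loop might look roughly like the following sketch; the timeout, the step count, and driving the stepping from stop events are my own choices, not from the linked example:

listener = debugger.GetListener()
event = lldb.SBEvent()
steps = 0
while steps < 10:
    if not listener.WaitForEvent(6, event):      # timeout in seconds
        break                                    # nothing happened, give up
    state = lldb.SBProcess.GetStateFromEvent(event)
    if state == lldb.eStateStopped:
        thread = process.GetThreadAtIndex(0)
        frame = thread.GetFrameAtIndex(0)
        print 'stopped at', hex(frame.pc)
        thread.StepInstruction(False)            # produces the next stop event
        steps += 1
    elif state == lldb.eStateExited:
        break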

How to control background process in linux

I need to write a script on Linux which can start a background process using one command and stop the process using another.
The specific application is to capture userspace and kernel logs for Android.
The following command should start taking logs:
$ mylogscript start
The following command should stop the logging:
$ mylogscript stop
Also, the commands should not block the terminal. For example, once I send the start command, the script should run in the background and I should be able to do other work in the terminal.
Any pointers on how to implement this in Perl or Python would be helpful.
EDIT:
Solved: https://stackoverflow.com/a/14596380/443889
I found the solution to my problem. It essentially involves starting a subprocess in Python and sending a signal to kill the process group when done.
Here is the code for reference:
#!/usr/bin/python
import subprocess
import sys
import os
import signal

U_LOG_FILE_PATH = "u.log"
K_LOG_FILE_PATH = "k.log"
U_COMMAND = "adb logcat > " + U_LOG_FILE_PATH
K_COMMAND = "adb shell cat /proc/kmsg > " + K_LOG_FILE_PATH
LOG_PID_PATH = "log-pid"

def start_log():
    if os.path.isfile(LOG_PID_PATH):
        print "log process already started, found file: ", LOG_PID_PATH
        return
    file = open(LOG_PID_PATH, "w")

    print "starting log process: ", U_COMMAND
    proc = subprocess.Popen(U_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process1 id = ", proc.pid
    file.write(str(proc.pid) + "\n")

    print "starting log process: ", K_COMMAND
    proc = subprocess.Popen(K_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process2 id = ", proc.pid
    file.write(str(proc.pid) + "\n")

    file.close()

def stop_log():
    if not os.path.isfile(LOG_PID_PATH):
        print "log process not started, can not find file: ", LOG_PID_PATH
        return
    print "terminating log processes"
    file = open(LOG_PID_PATH, "r")
    log_pid1 = int(file.readline())
    log_pid2 = int(file.readline())
    file.close()
    print "log-pid1 = ", log_pid1
    print "log-pid2 = ", log_pid2

    os.killpg(log_pid1, signal.SIGTERM)
    print "logprocess1 killed"
    os.killpg(log_pid2, signal.SIGTERM)
    print "logprocess2 killed"

    subprocess.call("rm " + LOG_PID_PATH, shell=True)

def print_usage(str):
    print "usage: ", str, "[start|stop]"

# Main script
if len(sys.argv) != 2:
    print_usage(sys.argv[0])
    sys.exit(1)

if sys.argv[1] == "start":
    start_log()
elif sys.argv[1] == "stop":
    stop_log()
else:
    print_usage(sys.argv[0])
    sys.exit(1)

sys.exit(0)
There are a couple of different approaches you can take on this:
1. Signals - install a signal handler and use, typically, SIGHUP to signal the process to restart ("start") and SIGTERM to stop it ("stop").
2. Use a named pipe or other IPC mechanism. The background process has a separate thread that simply reads from the pipe, and when something comes in, acts on it. This method relies on having a separate executable file that opens the pipe, and sends messages ("start", "stop", "set loglevel 1" or whatever you fancy).
I'm sorry, I haven't implemented either of these in Python [and I haven't really written anything in Perl], but I doubt it's very hard; there's bound to be ready-made Python code for dealing with named pipes.
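For illustration, here is a rough sketch of the named-pipe control channel; the pipe path and command names are made up, and error handling is omitted:

import os

CTL_PATH = '/tmp/mylogscript.ctl'

if not os.path.exists(CTL_PATH):
    os.mkfifo(CTL_PATH)

# the background process blocks here until a controller writes a command,
# e.g.: echo stop > /tmp/mylogscript.ctl
with open(CTL_PATH) as ctl:
    for command in ctl:
        command = command.strip()
        if command == 'stop':
            break                 # fall through to cleanup and exit
        elif command.startswith('set loglevel'):
            pass                  # adjust logging here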
Edit: Another method that just struck me is that you simply daemonize the program at start, and then let the "stop" version find your daemonized process [e.g. by reading the "pidfile" that you stashed somewhere suitable], and then send a SIGTERM for it to terminate.
I don't know if this is the optimal way to do it in Perl, but for example:
system("sleep 60 &")
This starts a background process that will sleep for 60 seconds without blocking the terminal. The ampersand in shell means to do something in the background.
A simple mechanism for telling the process when to stop is to have it periodically check for the existence of a certain file. If the file exists, it exits.
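In Python, that check could look like this minimal sketch (the stop-file path is an assumption):

import os
import time

STOP_FILE = '/tmp/mylogscript.stop'

while not os.path.exists(STOP_FILE):
    time.sleep(1)                 # the real logging work would happen here

os.remove(STOP_FILE)              # remove the flag file and exit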
