I'm trying to implement simple bidirectional communication between Node and a spawned Python process.
Python:
import sys

for l in sys.stdin:
    print "got: " % l
Node:
var spawn = require('child_process').spawn;
var child = spawn('python', ['-u', 'ipc.py']);
child.stdout.on('data', function(data){console.log("stdout: " + data)});
var i = 0;
setInterval(function () {
    console.log(i);
    child.stdin.write("i = " + i++ + "\n");
}, 1000);
Using -u forces unbuffered I/O on the Python side, so I would expect to see the output (I've also tried sys.stdout.flush()), but I don't. I know I can use child.stdin.end(), but that prevents me from writing data later.
Your Python code crashes with TypeError: not all arguments converted during string formatting at this line:
print "got: " % l
You ought to write
print "got: %s" % l
You can see the errors that Python outputs by doing:
var child = spawn('python', ['-u', 'ipc.py'],
{ stdio: [ 'pipe', 'pipe', 2 ] });
in Node.js; that is, pipe stdin and stdout as usual, but let the child's standard error go straight to Node's own stderr.
Even with these fixes, and even accounting for -u, sys.stdin.__iter__ is buffered: Python 2's file iterator uses a hidden read-ahead buffer, so lines don't come out as they arrive. To work around it, use .readline instead:
for line in iter(sys.stdin.readline, ''):
    print "got: %s" % line
    sys.stdout.flush()
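For reference, here's a minimal sketch of the same loop in Python 3 (an assumption on my part; the question uses Python 2). Python 3's stdin iterator calls readline internally, so the read-ahead workaround shouldn't be needed:

import sys

for line in sys.stdin:
    # flush=True replaces the explicit sys.stdout.flush() from the Python 2 loop;
    # end='' avoids doubling the newline already present in `line`
    print("got: %s" % line, end='', flush=True)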
I've got an After Effects scripting question, though I'm not sure it can be resolved with AE knowledge alone; it may be more of a general development question.
I want to launch an external process from After Effects. Specifically, I want to launch a render of the opened AEP file with the aerender.exe provided with After Effects, while keeping AE usable.
var projectFile = app.project.file;
var aeRender = "C:\\Program Files\\Adobe\\Adobe After Effects CC 2018\\Support Files\\aerender.exe";
var myCommand = "-project" + " " + projectFile.fsName;
system.callSystem("cmd /c \""+aeRender+"\"" + " " + myCommand);
So I wrote this simple JSX code, and it works: it renders the project's render queue properly.
But After Effects freezes; it waits for the end of the process.
I want it to stay usable.
So I tried writing a .cmd file and launching it with system.callSystem, and I got the same problem.
I also tried going through an .exe file (compiled from a simple Python script with PyInstaller), with the same problem:
import sys
import subprocess
arg = sys.argv
pythonadress = arg[0]
aeRender = arg[1]
projectFileFSname = arg[2]
myCommand = "-project" + " " +projectFileFSname
callSystem = "cmd /c \""+aeRender +"\"" + " " + myCommand
subprocess.run(callSystem)
I even tried with "cmd /c start ", and that seems to be worse, as After Effects continues freezing even after the process has completed.
Is there a way to make AE believe the process is complete while it actually is not?
Any help would be very appreciated!
system.callSystem() blocks the script's execution, so instead you can dynamically create a .bat file and run it with .execute().
Here's a sample .jsx:
var path = {
"join": function ()
{
if (arguments.length === 0) return null;
var args = [];
for (var i = 0, iLen = arguments.length; i < iLen; i++)
{
args.push(arguments[i]);
}
return args.join(String($.os.toLowerCase().indexOf('win') > -1 ? '\\' : '/'));
}
};
if (app.project.file !== null && app.project.renderQueue.numItems > 0)
{
var
// aeRenderPath = path.join(new File($._ADBE_LIBS_CORE.getHostAppPathViaBridgeTalk()).parent.fsName, 'aerender.exe'), // works only in CC 2018 and earlier
aeRenderPath = path.join(new File(BridgeTalk.getAppPath(BridgeTalk.appName)).parent.fsName, 'aerender.exe'),
batFile = new File(path.join(new File($.fileName).parent.fsName, 'render.bat')),
batFileContent = [
'"' + aeRenderPath + '"',
"-project",
'"' + app.project.file.fsName + '"'
];
batFile.open('w', undefined, undefined);
batFile.encoding = 'UTF-8';
batFile.lineFeed = 'Unix';
batFile.write(batFileContent.join(' '));
batFile.close();
// system.callSystem('explorer ' + batFile.fsName);
batFile.execute();
$.sleep(1000); // Delay the script so that the .bat file can be executed before it's being deleted
batFile.remove();
}
You can, of course, develop it further and make it OS X compatible, add more features to it, etc., but this is the main idea.
Here's a list with all the aerender options (if you don't already know them): https://helpx.adobe.com/after-effects/using/automated-rendering-network-rendering.html
Btw, $._ADBE_LIBS_CORE.getHostAppPathViaBridgeTalk() will get you the AfterFX.exe file path, which makes it easier to derive the aerender.exe path.
EDIT: $._ADBE_LIBS_CORE was removed in CC 2019, so use BridgeTalk directly instead for CC 2019 and above.
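As an aside: the Python helper from the question blocks because subprocess.run waits for the command to finish. A non-waiting variant of that helper (my sketch, assuming Windows and Python 3.7+ for the DETACHED_PROCESS constant) would hand the render off and return immediately, letting the callSystem call in AE complete:

import subprocess
import sys

aeRender = sys.argv[1]
projectFileFSname = sys.argv[2]

# Popen returns immediately; DETACHED_PROCESS lets the render outlive this helper
subprocess.Popen([aeRender, "-project", projectFileFSname],
                 creationflags=subprocess.DETACHED_PROCESS)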
I must send text from a Node.js process to a spawned Python process.
My dummy Node client looks like this:
var resolve = require('path').resolve;
var spawn = require('child_process').spawn;
data = "lorem ipsum"
var child = spawn('master.py', []);
var res = '';
child.stdout.on('data', function (_data) {
    try {
        var data = Buffer.from(_data, 'utf-8').toString();
        res += data;
    } catch (error) {
        console.error(error);
    }
});
child.on('exit', function (code) {
    console.log("EXIT:", res);
});
child.stdout.on('end', function () {
    console.log("END:", res);
});
child.on('error', function (error) {
    console.error(error);
});
child.stdout.pipe(process.stdout);
child.stdin.setEncoding('utf-8');
child.stdin.write(data + '\r\n');
while the Python process master.py is
#!/usr/bin/env python
import sys
import codecs

if sys.version_info[0] >= 3:
    ifp = codecs.getreader('utf8')(sys.stdin.buffer)
else:
    ifp = codecs.getreader('utf8')(sys.stdin)

if sys.version_info[0] >= 3:
    ofp = codecs.getwriter('utf8')(sys.stdout.buffer)
else:
    ofp = codecs.getwriter('utf8')(sys.stdout)

for line in ifp:
    tline = "<<<<<" + line + ">>>>>"
    ofp.write(tline)

# close files
ifp.close()
ofp.close()
I must use a UTF-8 encoded input reader, hence the codecs wrapper around sys.stdin, but it seems that when Node.js writes to the child's stdin using child.stdin.write(data + '\r\n'), nothing is read by the for line in ifp: loop.
You'll need to call child.stdin.end() in the Node program after the final call to child.stdin.write(). Until end() is called, the child.stdin writable stream will hold the written data in a buffer, so the Python program won't see it. See the Buffering discussion in https://nodejs.org/docs/latest-v8.x/api/stream.html#stream_buffering for details.
(If you write lots of data into stdin then the write buffer will eventually fill to a point where the accumulated data will be flushed out automatically to the Python program. The buffer will then begin again to collect data. An end() call is needed to make sure that the final portion of the written data is flushed out. It also has the effect of indicating to the child process that no more data will be sent on this stream.)
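For comparison, here's the same handshake with Python as the parent (my sketch, not from the question; it assumes master.py is executable in the current directory):

import subprocess

proc = subprocess.Popen(['./master.py'],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)
proc.stdin.write(b'lorem ipsum\r\n')
proc.stdin.close()   # the analogue of child.stdin.end(): flush the buffer and signal EOF
print(proc.stdout.read().decode('utf-8'))
proc.wait()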
I am attempting to stream output from a weighing scale using a program written in Python. This program (scale.py) runs continuously and prints the raw value every half second.
import RPi.GPIO as GPIO
import time
import sys
from hx711 import HX711
def cleanAndExit():
    print "Cleaning..."
    GPIO.cleanup()
    print "Bye!"
    sys.exit()

hx = HX711(5, 6)
hx.set_reading_format("LSB", "MSB")
hx.reset()
hx.tare()

while True:
    try:
        val = hx.get_weight(5)
        print val
        hx.power_down()
        hx.power_up()
        time.sleep(0.5)
    except (KeyboardInterrupt, SystemExit):
        cleanAndExit()
In a separate Node.js program (index.js, located in the same folder), I am trying to receive each raw data point that print val emits. Here is my Node program:
var spawn = require('child_process').spawn;
var py = spawn('python', ['scale.py']);
py.stdout.on('data', function (data) {
    console.log("Data: " + data);
});
py.stderr.on('data', function (data) {
    console.log("Error: " + data);
});
When I run sudo node index.js there is no output, and the program waits in perpetuity.
My thinking is that print val should put output into the stdout stream, which should fire the data event in the Node program. But nothing is happening.
Thanks for your help!
By default, all C programs (CPython included, as it is written in C) that use libc will automatically buffer console output when it is connected to a pipe.
One solution is to flush the output buffer every time you need:
print val
sys.stdout.flush()
Another solution is to invoke python with the -u flag which forces it to be unbuffered:
var py = spawn('python', ['-u', 'scale.py']);
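If the script ever moves to Python 3, a third option (a sketch of mine; sys.stdout.reconfigure exists only in Python 3.7+) is to force line buffering from inside the program:

import sys

# Each print then reaches the pipe as soon as its line is complete
sys.stdout.reconfigure(line_buffering=True)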
I have the following Perl module for wrapping CORE::system in Perl scripts:
package system_wrapper;

sub check_system {
    my ($cmd) = @_;
    my $err = CORE::system($cmd);
    if ($err != 0) {
        print "Error occurred when executing: $cmd. Exiting.\n";
        exit(-1);
    }
}

*CORE::GLOBAL::system = \&check_system;

1;
__END__
I'm attempting to achieve the same thing in Python. I can't work out how to extend the decorator syntax described here to this os method.
I would like calls to the wrapped method to be exactly the same as the unwrapped.
i.e. status = os.system("mycmd" + " myarg")
You can just monkey patch os.system. Rename the real os.system to something else,
then create a function using it and assign it to os.system:
import os
import sys

def my_os_system(cmd):
    err = os._system(cmd)
    if err != 0:
        print "Error occurred when executing: %s. Exiting." % cmd
        sys.exit(-1)
    return err  # preserve the original return value, so call sites are unchanged

os._system = os.system
os.system = my_os_system
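A quick usage sketch (the command is my example, not from the question); with the return err added above, call sites look exactly like the unwrapped method:

status = os.system("echo" + " myarg")   # routed through my_os_system
print "status: %d" % status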
I need to create a script that calls an application (c++ binary) 4000 times. The application takes some arguments and for each call writes a zip file to disk. So when the script is executed 4000 zip files will be written to disk. The application supports multiple threads.
I first created a bash script that does the job, and it works fine. But now I need the script to be platform independent, so I have tried to port it to Groovy, something like this:
for (int i = 1; i <= 4000; i++) {
def command = """myExecutable
a=$argA
b=$outDir"""
def proc = command.execute() // Call *execute* on the string
proc.waitFor() // Wait for the command to finish
// Obtain status and output
println "return code: ${ proc.exitValue()}"
println "stderr: ${proc.err.text}"
println "stdout: ${proc.in.text}" // *out* from the external program is *in* for groovy
println "iteration : " + i
}
But after 381 zip files have been written to disk, the script just hangs. Do I need to close the process after each call or something similar?
Here:
http://groovy.codehaus.org/Process+Management
it says it is known that java.lang.Process might hang or deadlock. Is it a no-go to do something like this in Groovy?
I will also give it a try in Python to see if it gives the same problems.
It might be the output stream blocking:
(1..<4000).each { i ->
println "iteration : $i"
def command = """myExecutable
a=$argA
b=$outDir"""
def proc = command.execute()
// Consume the outputs from the process and pipe them to our output streams
proc.consumeProcessOutput( System.out, System.err )
// Wait for the command to finish
proc.waitFor()
// Obtain status
println "return code: ${proc.exitValue()}"
}
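Since the question mentions trying Python too: the same deadlock exists there when pipes are never drained, and communicate() plays the role of consumeProcessOutput (a sketch of mine, with placeholder arguments):

import subprocess

# communicate() drains stdout and stderr while waiting, so the child
# can never block on a full pipe the way a bare wait() can
p = subprocess.Popen(["myExecutable", "a=someValue", "b=someDir"],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print("return code:", p.returncode)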
Yes, you should close the streams belonging to the process.
Or, as @tim_yates says, you should use consumeProcessOutput, or, in a concurrent solution, waitForProcessOutput, which closes them for you.
For parallel computation you could use something like this:
import groovyx.gpars.GParsPool

GParsPool.withPool(8) { // Start a pool with 8 threads.
    (1..4000).toList().eachParallel {
        def p = "myExecutable a=$argA b=$outDir".execute()
        def sout = new StringBuffer()
        def serr = new StringBuffer()
        p.waitForProcessOutput(sout, serr)
        synchronized (System.out) {
            println "return code: ${p.exitValue()}"
            println "stderr: $serr"
            println "stdout: $sout"
            println "iteration $it"
        }
    }
}
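And a rough Python counterpart of the pooled version, since the question mentions giving Python a try (my sketch; subprocess.run with capture_output needs Python 3.7+, and the a=/b= values are placeholders for the question's variables):

import subprocess
from concurrent.futures import ThreadPoolExecutor

ARG_A = "someValue"   # stands in for the question's $argA
OUT_DIR = "someDir"   # stands in for the question's $outDir

def run(i):
    # capture_output drains stdout/stderr, so the child can't block on a full pipe
    result = subprocess.run(["myExecutable", "a=" + ARG_A, "b=" + OUT_DIR],
                            capture_output=True, text=True)
    return i, result.returncode

with ThreadPoolExecutor(max_workers=8) as pool:
    for i, code in pool.map(run, range(1, 4001)):
        print("iteration %d, return code: %d" % (i, code))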