I have a simple C program which works the following way:
Ask for input
Print it
Ask another input
Print again
Now I am using Python to call this program.
import subprocess
sobj = subprocess.Popen("./cprog", stdin = subprocess.PIPE, stdout = subprocess.PIPE)
sobj.stdin.write("2 3\n")
sobj.stdin.close()
sobj.stdout.read()
This works fine. Similarly, it works fine with communicate().
But when I try to do something like the following, it doesn't work:
sobj = subprocess.Popen("./cprog", stdin = subprocess.PIPE, stdout = subprocess.PIPE)
sobj.stdout.readline()
sobj.stdin.write("2 3\n")
sobj.stdin.close()
sobj.stdout.read()
Here are a few things:
1. I looked at pexpect, but as far as I can tell I would still have to supply the program's input in advance.
2. Can I reopen a closed subprocess pipe?
I am using the above script as a CGI script, and I don't know why, but subprocess.call won't work in it. Can anyone explain why?
EDIT:
I am working on a web-based project where users write code in C, C++, or Java and execute it in the browser. At first I thought of using PHP, but I couldn't find a way to call programs and run them interactively, so I looked at Python's subprocess module. Everything worked fine in the interpreter when I was using subprocess.call, but the same Python program saved as a .cgi and opened in the browser didn't work. Then I started looking at subprocess.Popen, but with this I need to give all of the input at the beginning and then run the code. What I want is to run an interactive session in the browser.
EDIT 2:
So what I want is: the user runs a program in the browser and enters input in the textbox provided whenever it is needed; that input is redirected to the subprocess's stdin, and the output based on it is sent back.
EDIT 3: cprog.c
#include <stdio.h>

int main() {
    int x;
    printf("Enter value of x: \n");
    scanf("%d", &x);
    printf("Value of x: %d\n", x);
    return 0;
}
I'm assuming your C application displays a prompt and expects the user to enter their input on the same line, and that in your readline() call above you're trying to get the prompt.
If this is the case, readline() will block forever because it's waiting for a newline character and never seeing it. If you convert this call to a simple read(X) (where X is a number of bytes to read in one go) then you'll probably have better luck, although you should cope with partial input (i.e. loop around collecting input until you've seen the whole prompt). The only other issue you might see is if the C application isn't flushing the output before prompting the user, but I'd expect you to see that problem in the interactive session as well if that were the case.
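For illustration, here is a minimal sketch of that read-in-a-loop idea (my own sketch, assuming the prompt text is "Enter value of x: " and that it actually reaches the pipe; see the buffering discussion below if it doesn't):
import os
from subprocess import Popen, PIPE

sobj = Popen("./cprog", stdin=PIPE, stdout=PIPE)

# Keep reading until the whole prompt has been seen; readline() would block here
# because, in this scenario, the prompt has no trailing newline.
prompt = b""
while not prompt.endswith(b"Enter value of x: "):
    chunk = os.read(sobj.stdout.fileno(), 1024)  # read whatever is available, up to 1024 bytes
    if not chunk:
        break  # the process closed its stdout
    prompt += chunk

sobj.stdin.write(b"2 3\n")
sobj.stdin.close()
print(sobj.stdout.read())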
When running in the context of a webserver like Apache, it's generally a bad idea to use things like subprocess, because they involve forking additional processes, and that's often quite a tricky thing to manage. The forked process duplicates much of the state of the parent, and sometimes this can cause issues. I'm not saying it won't work, just that you can create some subtle problems for yourself if you're not careful, and it wouldn't surprise me if that's why you're having trouble with subprocess.
To give any more helpful advice, though, you'd need to describe exactly the error you see when you call subprocess. For example, there's quite likely an exception being thrown which will probably be in your webserver logs - reproducing that here would be a good start.
When I run the C program directly from the terminal it works fine, but when I run it with the second snippet I provided above, nothing prints.
The reason you don't see any output is that C stdio uses block buffering when the program is run in non-interactive mode. See my answer that demonstrates several solutions: pty, stdbuf, pexpect. If you can change the C code, then you could also fflush the output explicitly or make stdout unbuffered.
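For example, a sketch of the stdbuf option (my own illustration, assuming GNU coreutils' stdbuf is installed and ./cprog uses dynamically linked C stdio):
from subprocess import Popen, PIPE

# stdbuf makes the child's stdout unbuffered even though it is writing to a pipe.
p = Popen(["stdbuf", "-o0", "./cprog"], stdin=PIPE, stdout=PIPE)
print(p.stdout.readline())  # the prompt now arrives immediately
p.stdin.write(b"2\n")
p.stdin.flush()
print(p.stdout.readline())  # "Value of x: 2"
p.stdin.close()
p.wait()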
If you can provide all input at once and the output is bounded then you could use .communicate():
from subprocess import Popen, PIPE
p = Popen(["./cprog"], stdin=PIPE, stdout=PIPE, stderr=PIPE,
universal_newlines=True)
out, err = p.communicate("2\n")
So what I want is: the user runs a program in the browser and enters input in the textbox provided whenever it is needed; that input is redirected to the subprocess's stdin, and the output based on it is sent back.
Based on the ws-cli example:
#!/usr/bin/python
"""WebSocket CLI interface.
Install: pip install twisted txws
Run: twistd -ny wscli.py
Visit http://localhost:8080/
"""
import sys
from twisted.application import strports # pip install twisted
from twisted.application import service
from twisted.internet import protocol
from twisted.python import log
from twisted.web.resource import Resource
from twisted.web.server import Site
from twisted.web.static import File
from txws import WebSocketFactory # pip install txws
class Protocol(protocol.Protocol):
    def connectionMade(self):
        from twisted.internet import reactor
        log.msg("launch a new process on each new connection")
        self.pp = ProcessProtocol()
        self.pp.factory = self
        reactor.spawnProcess(self.pp, command, command_args)

    def dataReceived(self, data):
        log.msg("redirect received data to process' stdin: %r" % data)
        self.pp.transport.write(data)

    def connectionLost(self, reason):
        self.pp.transport.loseConnection()

    def _send(self, data):
        self.transport.write(data)  # send back

class ProcessProtocol(protocol.ProcessProtocol):
    def connectionMade(self):
        log.msg("connectionMade")

    def outReceived(self, data):
        log.msg("send stdout back %r" % data)
        self._sendback(data)

    def errReceived(self, data):
        log.msg("send stderr back %r" % data)
        self._sendback(data)

    def processExited(self, reason):
        log.msg("processExited")
        self._sendback('program exited')

    def processEnded(self, reason):
        log.msg("processEnded")

    def _sendback(self, data):
        self.factory._send(data)

command = './cprog'
command_args = [command]

application = service.Application("ws-cli")

echofactory = protocol.Factory()
echofactory.protocol = Protocol
strports.service("tcp:8076:interface=127.0.0.1",
                 WebSocketFactory(echofactory)).setServiceParent(application)

resource = Resource()
resource.putChild('', File('index.html'))
strports.service("tcp:8080:interface=127.0.0.1",
                 Site(resource)).setServiceParent(application)
where index.html:
<!doctype html>
<title>Send input to subprocess using websocket and echo the response</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.0/jquery.min.js">
</script>
<script>
// send keys to websocket and echo the response
$(document).ready(function() {
    // create websocket
    if (! ("WebSocket" in window)) WebSocket = MozWebSocket; // firefox
    var socket = new WebSocket("ws://localhost:8076");

    // open the socket
    socket.onopen = function(event) {
        // show server response
        socket.onmessage = function(e) {
            $("#output").text(e.data);
        }

        // send input
        $("#entry").keyup(function (e) {
            socket.send($("#entry").attr("value")+"\n");
        });
    }
});
</script>
<pre id=output>Here you should see the output from the command</pre>
<input type=text id=entry value="123">
And cprog.c:
#include <stdio.h>

int main() {
    int x = -1;
    setbuf(stdout, NULL); // make stdout unbuffered

    while (1) {
        printf("Enter value of x: \n");
        if (scanf("%d", &x) != 1)
            return 1;
        printf("Value of x: %d\n", x);
    }
    return 0;
}
Related
I wrote a python program script1.py. Its general logic flow is as follows:
while True:
    if hasTask:
        print('task flow info print')
Then I wrote another Python script called monitor.py, which uses os.popen() to monitor the console output of script1.py and, after obtaining specific information, sends a message to a Redis channel:
redis_key = "command"
cmd = "python script1.py"
pool = redis.ConnectionPool(host="127.0.0.1")
r = redis.Redis(connection_pool=pool)
with os.popen(cmd,"r") as stream:
while True:
buf = stream.readline().strip()
if re.match("target",buf) is not None:
message = stream.readline().strip()
command_info["command"] = message
r.publish(redis_key, json.dumps(command_info))
In the beginning, the monitor correctly reads the script's output and sends messages to Redis. The problem is that after some time this combination stops working properly and no more messages are sent to Redis. Why does this happen?
Is the file object returned by popen too large, or how else should I deal with this? I need your help.
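One possible cause (an assumption on my part, in line with the block-buffering issue discussed earlier on this page) is that script1.py's stdout switches to block buffering once it writes to a pipe instead of a terminal, so output only arrives in large, delayed chunks. A sketch that forces unbuffered child output with python -u and subprocess instead of os.popen:
import json
import re
import subprocess

import redis

redis_key = "command"
command_info = {}
r = redis.Redis(connection_pool=redis.ConnectionPool(host="127.0.0.1"))

# -u forces script1.py's stdout to be unbuffered, so readline() sees each line
# as soon as it is printed instead of waiting for a full buffer block.
proc = subprocess.Popen(["python", "-u", "script1.py"],
                        stdout=subprocess.PIPE, universal_newlines=True)

while True:
    buf = proc.stdout.readline()
    if not buf:
        break  # script1.py exited
    if re.match("target", buf.strip()) is not None:
        message = proc.stdout.readline().strip()
        command_info["command"] = message
        r.publish(redis_key, json.dumps(command_info))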
Good day everyone,
On the recommendation of this community, I have installed shelljs (npm install shelljs --save) into my app folder.
Now I am working out how to implement it in my Node.js file.
Below is my Node.js file:
var express = require('express');
var router = express.Router();
var COMPORT = 5;
var command = 'option1';
Below this I would like to include my Python script:
import serial
ser = serial.Serial()
ser.baudrate = 38400 #Suggested rate in Southco documentation, both locks and program MUST be at same rate
# COMPORT is a variable that stores an integer such as 6
ser.port = "COM{}".format(COMPORT)
ser.timeout = 10
ser.open()
#call the serial_connection() function
ser.write(("%s\r\n"%command).encode('ascii'))
So my question is: how do I include this Python script, with the variables defined in Node.js, in the same file as the Node.js code?
This app runs on Node.js and will be packaged as a desktop executable via Electron.
I think the easiest way is to run the Python script as a child process.
const childProcess = require('child_process'),
      cmd = './script.py --opt=' + anyValueToPass;

childProcess.exec(cmd, function(err, stdout, stderr) {
    // command output is in stdout
});
Read more about child process at: Execute a command line binary with Node.js
With this approach, you need to watch out for command injection vulnerabilities.
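To get the Node.js variables into the Python script itself, one common pattern (a sketch under my own assumptions; the file name serial_command.py and the argument order are made up) is to pass them as command-line arguments, e.g. with child_process.execFile('python', ['serial_command.py', String(COMPORT), command], ...) on the Node side, which also avoids shell injection. The Python side then reads them from sys.argv:
# serial_command.py -- hypothetical helper script called from Node.js
import sys

import serial  # pip install pyserial

com_port = int(sys.argv[1])  # e.g. 5, taken from the Node.js COMPORT variable
command = sys.argv[2]        # e.g. "option1", taken from the Node.js command variable

ser = serial.Serial()
ser.baudrate = 38400  # suggested rate in the Southco documentation
ser.port = "COM{}".format(com_port)
ser.timeout = 10
ser.open()
ser.write(("%s\r\n" % command).encode('ascii'))
ser.close()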
I'm running Ubuntu 16.04 and I'm trying to write a python script that makes a GET request to a specified image file given the url. As an example, in the code below:
host is www.google.com
port is 80
u.path is /images/srpr/logo3w.png
proc = Popen(["netcat {} {}".format(host, port)], shell= True)
proc = Popen(["GET {} HTTP/1.1".format(u.path)], shell= True)
proc = Popen(["Host: {}".format(host)], shell= True)
proc = Popen(["Connection: close"], shell= True)
proc = Popen(["\n"], shell= True)
My problem is that I can execute these normally in the terminal, but when I run the script it seems to send the GET request to www.google.com before it takes the u.path specification into account. I know it is doing this for two reasons. First, just before the server response comes in, I get the following:
/bin/sh: 1: Host:: not found
/bin/sh: 1: Connection:: not found
Second, I know that the server response for the image data should be a bunch of binary data rendered as garbled symbols in the terminal, but instead I'm clearly getting the www.google.com HTML text in the response.
I was thinking I may need to make it wait to do the HTTP request until the netcat STDIN is open, but I don't know how. Or maybe it's just completing the request because it's sending a \n somehow? I really don't know.
EDIT: It seems like it actually isn't sending the request to www.google.com. I saved the server response as a .html file and it looks like a cloudfront website
EDIT2: After more research, it seems the problem is that netcat is interactive, so it 'deadlocks' or something like that. I tried proc.communicate(), but since I need to send multiple lines it doesn't allow that: communicate() only writes the initial input to STDIN and then sends EOF, or something along those lines. That led me to proc.stdin.write(), but this is apparently also known to cause deadlocks unless the Popen call uses subprocess.PIPE for STDIN, STDOUT, and STDERR. It also requires the input to be encoded as a bytes-like object, which I have done, but when I send \r\n\r\n at the end to try to close the connection, nothing happens and STDOUT just contains b'', which I understand to be an empty byte string.
For anyone that has a similar problem, here is the solution that I found:
import fcntl
import os
from subprocess import Popen, PIPE

# begin the interactive shell of netcat
proc = Popen(['netcat -q -1 {} {}'.format(host, port)], shell=True,
             stdout=PIPE, stdin=PIPE, stderr=PIPE)

# set file status flags on stdout to non-blocking reads
fcntl.fcntl(proc.stdout.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)

# each time we write a different line to the interactive shell
# we need to flush the buffer just to be safe
# credit to http://nigelarmstrong.me/2015/04/python-subprocess/
proc.stdin.write(str.encode('GET %s HTTP/1.1\n' % (path + filename)))
proc.stdin.flush()
proc.stdin.write(str.encode('Host: {}\n'.format(host)))
proc.stdin.flush()
proc.stdin.write(str.encode('Connection: close\n'))
proc.stdin.flush()
proc.stdin.write(str.encode('\r\n\r\n'))
proc.stdin.flush()

# give the server time to respond
proc.wait()

# store the server response (which is bytes-like)
# attempting to decode it results in an error since we're receiving a mix of text and image data
serv_response = proc.stdout.read()
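As an aside (my own suggestion, not part of the solution found above): since the whole request is known upfront, the single-shot communicate() approach mentioned earlier on this page should also work here, without the non-blocking tricks:
from subprocess import Popen, PIPE

# Build the whole request upfront (host, port, path and filename as in the code above).
request = ('GET {} HTTP/1.1\r\n'
           'Host: {}\r\n'
           'Connection: close\r\n'
           '\r\n').format(path + filename, host).encode('ascii')

# communicate() writes the request, closes stdin, and reads stdout until netcat exits
# (the server closes the connection because of "Connection: close").
proc = Popen(['netcat', host, str(port)], stdin=PIPE, stdout=PIPE, stderr=PIPE)
serv_response, _ = proc.communicate(request)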
I'm developing a Chrome extension working with native messaging host.
It works in most cases, but I have found a strange behavior when I send messages of certain sizes.
It seems that the message is dropped when its size is between 2560 and 2815 bytes (0xA00 and 0xAFF in hex). All subsequent messages also fail to arrive, which suggests that the stream is corrupted for some reason.
Here is a stripped down Python native messaging app, which can be used to test it:
import sys
import struct

def output(message):
    encoded_message = message.encode('utf-8')
    # Write message size.
    sys.stdout.write(struct.pack('I', len(encoded_message)))
    # Write the message itself.
    sys.stdout.write(encoded_message)
    sys.stdout.flush()

if __name__ == "__main__":
    output('{"type": "%s"}' % ('x'*2820))
    output('{"type": "%s"}' % ('x'*2560))
I'm getting the first message but not the second one.
I have taken a look at the code in the Chrome repository. The function that seems to be responsible for this functionality doesn't do anything special:
void NativeMessageProcessHost::ProcessIncomingData(
    const char* data, int data_size) {
  DCHECK_CURRENTLY_ON(content::BrowserThread::IO);

  incoming_data_.append(data, data_size);

  while (true) {
    if (incoming_data_.size() < kMessageHeaderSize)
      return;

    size_t message_size =
        *reinterpret_cast<const uint32*>(incoming_data_.data());

    if (message_size > kMaximumMessageSize) {
      LOG(ERROR) << "Native Messaging host tried sending a message that is "
                 << message_size << " bytes long.";
      Close(kHostInputOuputError);
      return;
    }

    if (incoming_data_.size() < message_size + kMessageHeaderSize)
      return;

    content::BrowserThread::PostTask(
        content::BrowserThread::UI, FROM_HERE,
        base::Bind(&Client::PostMessageFromNativeProcess, weak_client_ui_,
                   destination_port_,
                   incoming_data_.substr(kMessageHeaderSize, message_size)));

    incoming_data_.erase(0, kMessageHeaderSize + message_size);
  }
}
Does anybody have any idea what may be happening here?
Update
I have experienced this problem on 64 bit versions of Windows 7 and Windows 8.1.
I tried Chrome 64-bit on Stable, Beta and Dev channels - versions 37, 38 and 39.
I have also tried stable Chrome 32-bit
I use 32 bit version of Python 2.7.7 and PyInstaller 2.1 to create an executable for native messaging host.
Since you're using Windows, I suspect that stdout is in text mode and Windows is adding a carriage return (\x0D) before every newline character (\x0A) it writes. That would also explain the exact size range: for message lengths between 0xA00 and 0xAFF, the little-endian length header itself contains an \x0A byte, which gets expanded to \x0D\x0A and corrupts the header and every message after it.
According to Python 2.x - Write binary output to stdout?, a way to prevent modification of the output stream on Windows is to use the following snippet before writing anything to stdout:
if sys.platform == "win32":
import os, msvcrt
msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)
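For completeness, a sketch of how that fix could be combined with the output() helper from the question (assuming Python 2 on Windows):
import sys
import struct

if sys.platform == "win32":
    # Put stdout into binary mode so \x0A bytes in the length header or payload
    # are not expanded to \x0D\x0A by the text-mode stream.
    import os, msvcrt
    msvcrt.setmode(sys.stdout.fileno(), os.O_BINARY)

def output(message):
    encoded_message = message.encode('utf-8')
    sys.stdout.write(struct.pack('I', len(encoded_message)))  # 4-byte length header
    sys.stdout.write(encoded_message)
    sys.stdout.flush()

if __name__ == "__main__":
    output('{"type": "%s"}' % ('x' * 2560))  # a size that previously broke the stream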
I would like to allow a user to view the output of a long-running CGI script as it is generated, rather than after the script is complete. However, even when I explicitly flush STDOUT, the server seems to wait for the script to complete before sending the response to the client. This is on a Linux server running Apache 2.2.9.
Example python CGI:
#!/usr/bin/python
import time
import sys
print "Content-type: text/plain"
print
for i in range(1, 10):
    print i
    sys.stdout.flush()
    time.sleep(1)

print "Done."
Similar example in perl:
#!/usr/bin/perl
print "Content-type: text/plain\n\n";
for ($i = 1; $i <= 10 ; $i++) {
    print "$i\n";
    sleep(1);
}
print "Done.";
This link says as of Apache 1.3 CGI output should be unbuffered (but this might apply only to Apache 1.x): http://httpd.apache.org/docs/1.3/misc/FAQ-F.html#nph-scripts
Any ideas?
Randal Schwartz's article Watching long processes through CGI explains a different (and IMHO, better) way of watching a long running process.
Flushing STDOUT can help. For example, the following Perl program should work as intended:
#!/usr/bin/perl
use strict;
use warnings;
local $| = 1;
print "Content-type: text/plain\n\n";
for ( my $i = 1 ; $i <= 10 ; $i++ ) {
    print "$i\n";
    sleep(1);
}
print "Done.";
You must put your push script into a special directory which contains a special .htaccess with these environment settings:
Options +ExecCGI
AddHandler cgi-script .cgi .sh .pl .py
SetEnvIfNoCase Content-Type \
"^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"
SetEnv no-gzip dont-vary
According to CGI::Push,
Apache web server from version 1.3b2 on does not need server push scripts installed as NPH scripts: the -nph parameter to do_push() may be set to a false value to disable the extra headers needed by an NPH script.
You just have to find the do_push() equivalent in Python.
Edit: Take a look at CherryPy: Streaming the response body.
When you set the config entry "response.stream" to True (and use "yield"), CherryPy manages the conversation between the HTTP server and your code so that the response body is streamed to the client as it is generated.
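A minimal sketch of that CherryPy approach (my own illustration based on CherryPy's documented response.stream option; the class and handler names are made up):
import time

import cherrypy

class Root(object):
    @cherrypy.expose
    def index(self):
        def content():
            for i in range(1, 11):
                yield "%d\n" % i
                time.sleep(1)
            yield "Done."
        return content()
    # Stream the generator's output to the client instead of buffering the whole body.
    index._cp_config = {'response.stream': True}

if __name__ == '__main__':
    cherrypy.quickstart(Root())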