error while using os.popen() to read command-line output - python

I wrote a Python program, script1.py. Its general logic flow is as follows:
while(true)
    if(hasTask)
        print('task flow info print')
Then I wrote another Python script called monitor.py, which uses os.popen() to monitor the console output of script1.py; after it finds specific information, it sends a message to a Redis channel:
import os
import re
import json
import redis

redis_key = "command"
cmd = "python script1.py"
command_info = {}
pool = redis.ConnectionPool(host="127.0.0.1")
r = redis.Redis(connection_pool=pool)
with os.popen(cmd, "r") as stream:
    while True:
        buf = stream.readline().strip()
        if re.match("target", buf) is not None:
            message = stream.readline().strip()
            command_info["command"] = message
            r.publish(redis_key, json.dumps(command_info))
In the beginning, the monitor correctly reads the script output and sends messages to Redis. The problem is that after some time this combination stops working and no more messages are sent to Redis. Why does this happen?
Is the file object returned by popen too large? How can I deal with this? I need your help.
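For what it's worth, a likely culprit is buffering rather than the size of the file object: once script1.py's stdout is a pipe instead of a terminal, its output becomes block-buffered, so lines stop arriving promptly; readline() also returns '' forever once the child exits, leaving the loop spinning. Below is a minimal sketch of a more robust monitor using subprocess.Popen with an unbuffered child. The inline one-liner standing in for script1.py is an assumption, used only so the snippet is self-contained:

```python
import re
import subprocess
import sys

# Stand-in for "python script1.py" (assumption): an inline child that emits
# a "target" marker line followed by the payload line we want to forward.
child = "print('target found'); print('task flow info print')"

# "-u" forces the child's stdout to be unbuffered, so lines arrive as they
# are printed even though stdout is a pipe rather than a terminal.
proc = subprocess.Popen([sys.executable, "-u", "-c", child],
                        stdout=subprocess.PIPE,
                        universal_newlines=True)

messages = []
for buf in proc.stdout:            # iteration stops cleanly at EOF
    if re.match("target", buf.strip()):
        messages.append(next(proc.stdout).strip())
proc.wait()
```

For the real script, passing `["python", "-u", "script1.py"]` (or calling sys.stdout.flush() inside script1.py) should keep the lines flowing.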


Test of sending & receiving message for Azure Service Bus Queue

I would like to write an integration test that checks the connection of a Python script to an Azure Service Bus queue. The test should:
send a message to a queue,
confirm that the message landed in the queue.
The test looks like this:
import pytest
from azure.servicebus import ServiceBusClient, ServiceBusMessage, ServiceBusSender

CONNECTION_STRING = <some connection string>
QUEUE = <queue name>

def send_message_to_service_bus(sender: ServiceBusSender, msg: str) -> None:
    message = ServiceBusMessage(msg)
    sender.send_message(message)

class TestConnectionWithQueue:
    def test_message_is_sent_to_queue_and_received(self):
        msg = "test message sent to queue"
        expected_message = ServiceBusMessage(msg)
        servicebus_client = ServiceBusClient.from_connection_string(conn_str=CONNECTION_STRING, logging_enable=True)
        with servicebus_client:
            sender = servicebus_client.get_queue_sender(queue_name=QUEUE)
            with sender:
                send_message_to_service_bus(sender, expected_message)
            receiver = servicebus_client.get_queue_receiver(queue_name=QUEUE)
            with receiver:
                messages_in_queue = receiver.receive_messages(max_message_count=10, max_wait_time=20)
                assert any(expected_message == str(actual_message) for actual_message in messages_in_queue)
The test occasionally works, but more often than not it doesn't. There are no other messages being sent to the queue at the same time. As I debugged the code, when the test fails the variable messages_in_queue is just an empty list.
Why doesn't the code work every time, and what should be done to fix it?
Are you sure you don't have another process that receives your messages? Maybe you are sharing your queue connection strings with colleagues, build machines, etc.
To troubleshoot, keep an eye on the queue monitoring in the Azure Portal. Debug your test and check whether the incoming-message count increments by 1; then continue debugging and check whether it decrements by 1.
Also, are you sure this test is useful? It looks like you are testing your infra instead of testing your code.
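As a separate aside: even when a message does arrive, the assertion can never be true, because it compares a ServiceBusMessage object against str(actual_message), and the send helper also wraps an already-wrapped message a second time. A hedged sketch of comparing plain string bodies instead (the helper names below are made up for illustration; str() on a received ServiceBusMessage yields its body):

```python
def bodies(messages):
    # Normalise both sides of the comparison to plain strings; str(m) on a
    # ServiceBusMessage returns the message body.
    return [str(m) for m in messages]

def was_received(messages, expected_body):
    # True if any received message carries the expected string body.
    return expected_body in bodies(messages)

# In the test, pass the raw string to the send helper (it wraps it itself),
# then assert on string bodies:
#   send_message_to_service_bus(sender, msg)
#   assert was_received(messages_in_queue, msg)
```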

Using python script to send GET request to a server with netcat

I'm running Ubuntu 16.04 and I'm trying to write a Python script that makes a GET request for a specified image file, given its URL. As an example, in the code below:
host is www.google.com
port is 80
u.path is /images/srpr/logo3w.png
proc = Popen(["netcat {} {}".format(host, port)], shell=True)
proc = Popen(["GET {} HTTP/1.1".format(u.path)], shell=True)
proc = Popen(["Host: {}".format(host)], shell=True)
proc = Popen(["Connection: close"], shell=True)
proc = Popen(["\n"], shell=True)
My problem is that I can execute these normally in the terminal, but when I run the script it seems like it sends the GET request to www.google.com before it takes the u.path specification. I know it is doing this for two reasons. First, just before the server response comes in, I get the following:
/bin/sh: 1: Host:: not found
/bin/sh: 1: Connection:: not found
Second, I know that the server response for the image data should be a bunch of binary data rendered as weird Unicode symbols in the terminal, but I'm clearly getting the www.google.com HTML text in the server response.
I was thinking I may need to make it wait to send the HTTP request until netcat's STDIN is open, but I don't know how. Or maybe it's completing the request early because it's sending a \n somehow? I really don't know.
EDIT: It seems like it actually isn't sending the request to www.google.com. I saved the server response as a .html file and it looks like a cloudfront website
EDIT2: After more research, it seems the problem is that netcat is interactive, so it 'deadlocks' or something like that. I tried to use proc.communicate(), but since I need to send multiple lines it doesn't allow it: communicate only lets the initial input be written to STDIN and then sends EOF, or something along those lines. That led me to proc.stdin.write, but this is apparently also known to cause deadlock, related to making the Popen commands use subprocess.PIPE for STDIN, STDOUT, and STDERR. It also requires the input to be encoded as a bytes-like object, which I have done, but when I send \r\n\r\n at the end to try to close the connection, it doesn't do anything and STDOUT just contains b'', which I understand to be an empty string in the form of bytes.
For anyone who has a similar problem, here is the solution that I found:
# begin the interactive shell of netcat
proc = Popen(['netcat -q -1 {} {}'.format(host, port)], shell=True, stdout=PIPE, stdin=PIPE, stderr=PIPE)
# set file status flags on stdout to non-blocking reads
fcntl.fcntl(proc.stdout.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
# each time we write a different line to the interactive shell
# we need to flush the buffer just to be safe
# credit to http://nigelarmstrong.me/2015/04/python-subprocess/
proc.stdin.write(str.encode('GET %s HTTP/1.1\n' % (path + filename)))
proc.stdin.flush()
proc.stdin.write(str.encode('Host: {}\n'.format(host)))
proc.stdin.flush()
proc.stdin.write(str.encode('Connection: close\n'))
proc.stdin.flush()
proc.stdin.write(str.encode('\r\n\r\n'))
proc.stdin.flush()
# give the server time to respond
proc.wait()
# store the server response (which is bytes-like)
# attempting to decode it results in an error, since we're receiving data as a mix of text/image
serv_response = proc.stdout.read()
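As an alternative sketch, the netcat subprocess can be avoided entirely by opening the TCP connection from Python with the standard socket module. The request builder below reflects my assumption about the minimal headers needed (HTTP/1.1 requires CRLF line endings and a Host header; "Connection: close" lets us read until EOF):

```python
import socket

def build_get(host, path):
    # Assemble a minimal HTTP/1.1 GET request with CRLF line endings.
    return ("GET {} HTTP/1.1\r\n"
            "Host: {}\r\n"
            "Connection: close\r\n"
            "\r\n".format(path, host)).encode("ascii")

def fetch(host, path, port=80):
    # Open a plain TCP connection, send the request, read until the server
    # closes the socket (guaranteed by "Connection: close").
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall(build_get(host, path))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# e.g. response = fetch("www.google.com", "/images/srpr/logo3w.png")
```

The response bytes contain headers followed by the raw image body, so they should not be decoded as text wholesale, matching the observation in the original solution.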

Python paramiko module using multiple commands

I have a class that creates the connection. I can connect and execute one command before the channel is closed. On another system I can execute multiple commands and the channel does not close. Obviously it's a config issue with the systems I am trying to connect to.
class connect:
    newconnection = ''
    def __init__(self, username, password):
        ssh = paramiko.SSHClient()
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        try:
            ssh.connect('somehost', username=username, password=password, port=2222, timeout=5)
        except:
            print "Could not connect"
            sys.exit()
        self.newconnection = ssh
    def con(self):
        return self.newconnection
Then I use the 'ls' command just to print some output:
sshconnection = connect('someuser','somepassword').con()
stdin, stdout, stderr = sshconnection.exec_command("ls -lsa")
print stdout.readlines()
print stdout
stdin, stdout, stderr = sshconnection.exec_command("ls -lsa")
print stdout.readlines()
print stdout
sshconnection.close()
sys.exit()
After the first exec_command runs, it prints the expected output of the directory listing. When I print stdout after the first exec_command, it looks like the channel is closed:
<paramiko.ChannelFile from <paramiko.Channel 1 (closed) -> <paramiko.Transport at 0x2400f10L (cipher aes128-ctr, 128 bits) (active; 0 open channel(s))>>>
Like I said, on another system I am able to keep running commands and the connection doesn't close. Is there a way I can keep this open? Or a better way to see the reason why it closes?
edit: So it looks like you can only run one command per SSHClient.exec_command... so I decided to get_transport().open_session() and then run a command. The first one always works. The second one always fails and the script just hangs.
With plain paramiko, after exec_command executes, the channel is closed and the ssh returns an auth prompt.
It seems this is not possible with paramiko alone; try fabric or another tool.
** fabric did not work out either.
Please see the following reference, as it provides a way to do this in Paramiko:
How do you execute multiple commands in a single session in Paramiko? (Python)
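A common workaround, sketched below under the assumption that the remote shell understands `&&`, is to run the whole sequence through a single exec_command call by chaining the commands client-side, so only one channel is ever needed:

```python
def chain(*cmds):
    # Join commands for a single exec_command call. "&&" stops at the first
    # failing command; use "; ".join(cmds) instead to run every command
    # regardless of earlier failures.
    return " && ".join(cmds)

# With the connect class above, the whole sequence runs on one channel:
#   stdin, stdout, stderr = sshconnection.exec_command(chain("cd /tmp", "ls -lsa"))
```

This sidesteps the one-command-per-channel limit rather than lifting it; for genuinely interactive sessions, invoke_shell() on the client is the usual paramiko route.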
It's possible with netmiko (tested on Windows).
This example is written for connecting to Cisco devices, but the principle is adaptable to others as well.
import netmiko
from netmiko import ConnectHandler
import json

def connect_enable_silent(ip_address, ios_command):
    with open("credentials.txt") as line:
        line_1 = json.load(line)
        for k, v in line_1.items():
            router = (k, v)
            try:
                ssh = ConnectHandler(**router[1], device_type="cisco_ios", ip=ip_address)
                ssh.enable()
            except netmiko.ssh_exception.NetMikoAuthenticationException:
                # incorrect credentials
                continue
            except netmiko.ssh_exception.NetMikoTimeoutException:
                # oddly enough, if it can log in but is not able to authenticate to enable
                # mode, ssh.enable() does not give an authentication error
                # but a time-out error instead
                try:
                    ssh = ConnectHandler(username=router[1]['username'], password=router[1]['password'], device_type="cisco_ios", ip=ip_address)
                except netmiko.ssh_exception.NetMikoTimeoutException:
                    # connection timed out (ssh not enabled on device, try telnet)
                    continue
                except Exception:
                    continue
                else:
                    output = ssh.send_command(ios_command)
                    ssh.disconnect()
                    if "at '^' marker." in output:
                        # trying to run a command that requires enable mode while not
                        # authenticated to enable mode
                        continue
                    return output
            except Exception:
                continue
            else:
                output = ssh.send_command(ios_command)
                ssh.disconnect()
                return output

output = connect_enable_silent(ip_address, ios_command)
for line in output.split('\n'):
    print(line)
The credentials text file is meant to store different sets of credentials, in case you are planning to call this function to access multiple devices that do not all use the same credentials. It is in the format:
{"credentials_1":{"username":"username_1","password":"password_1","secret":"secret_1"},
"credentials_2":{"username":"username_2","password":"password_2","secret":"secret_2"},
"credentials_3": {"username": "username_3", "password": "password_3"}
}
The exceptions can be changed to do different things; in my case I just needed it not to return an error and to continue trying the next set, which is why most exceptions are silenced.

Read / Write simultaneously python subprocess.Popen

I have a simple C program which works the following way:
Ask for input
Print it
Ask another input
Print again
Now I am using Python to call this program.
import subprocess
sobj = subprocess.Popen("./cprog", stdin = subprocess.PIPE, stdout = subprocess.PIPE)
sobj.stdin.write("2 3\n")
sobj.stdin.close()
sobj.stdout.read()
This works fine; similarly, it works fine with communicate.
But when I try to do something like this, it won't work:
sobj = subprocess.Popen("./cprog", stdin = subprocess.PIPE, stdout = subprocess.PIPE)
sobj.stdout.readline()
sobj.stdin.write("2 3\n")
sobj.stdin.close()
sobj.stdout.read()
Here are a few things:
1. I saw pexpect, but I think we have to give what the program asks for in advance.
2. Can I reopen a closed subprocess pipe?
I am using the above script as CGI, and I don't know why, but subprocess.call won't work in it. Can anyone explain why?
EDIT:
I am doing a web-based project where users write code in C, C++ or Java and execute it in the browser. At first I thought of using PHP, but I couldn't find a way to call programs and run them interactively. Then I saw the Python subprocess module. Everything was working fine in the interpreter when I was using subprocess.call, but the same Python program saved as .cgi and opened in the browser didn't work. Then I started looking at subprocess.Popen, but with this I need to give all the inputs at the beginning and then run the code. What I want is to run an interactive session in the browser.
EDIT 2:
So what I want is: the user runs the program in the browser and enters input in the provided textbox whenever needed; that input is redirected to the subprocess's stdin, and the output is based on it.
EDIT 3: cprog.c
#include <stdio.h>

int main() {
    int x;
    printf("Enter value of x: \n");
    scanf("%d", &x);
    printf("Value of x: %d\n", x);
    return 0;
}
I'm assuming your C application displays a prompt and expects the user to enter their input on the same line, and that in your readline() call above you're trying to get the prompt.
If this is the case, readline() will block forever because it's waiting for a newline character and never seeing it. If you convert this call to a simple read(X) (where X is a number of bytes to read in one go) then you'll probably have better luck, although you should cope with partial input (i.e. loop around collecting input until you've seen the whole prompt). The only other issue you might see is if the C application isn't flushing the output before prompting the user, but I'd expect you to see that problem in the interactive session as well if that were the case.
When running under the context of a webserver like Apache then it's generally a bad idea to use things like subprocess as they involve forking additional processes and that's often quite a tricky thing to manage. This is because the fork process duplicates much of the state of the parent and sometimes this can cause issues. I'm not saying it won't work, I'm just saying you can make some subtle problems for yourself if you're not careful, and it wouldn't surprise me if that's why you're having trouble using subprocess.
To give any more helpful advice, though, you'd need to describe exactly the error you see when you call subprocess. For example, there's quite likely an exception being thrown which will probably be in your webserver logs - reproducing that here would be a good start.
When I run the C program directly in the terminal it works fine, but when I run the same program with the second snippet I provided above, nothing prints.
The reason you don't see any output is that C stdio uses block buffering when the program is run in non-interactive mode. See my answer that demonstrates several solutions: pty, stdbuf, pexpect. If you can change the C code, then you could also fflush the output explicitly or make it unbuffered.
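To make the round trip concrete, here is a small self-contained sketch of reading a prompt and answering it. A `python -u` one-liner stands in for cprog (an assumption made so the snippet runs on its own); `-u` gives the same effect as making the C program's stdout unbuffered:

```python
import subprocess
import sys

# Child standing in for cprog: prints a prompt line, reads a value, echoes it.
child_src = ("x = input('Enter value of x: \\n')\n"
             "print('Value of x:', x)\n")

p = subprocess.Popen([sys.executable, "-u", "-c", child_src],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     universal_newlines=True)
prompt = p.stdout.readline()   # works because the prompt ends in a newline
p.stdin.write("2\n")           # answer the prompt...
p.stdin.flush()                # ...and flush, or the child may never see it
reply = p.stdout.readline()
p.wait()
```

If the prompt did not end in a newline, readline() would block, and the read(X)-in-a-loop approach from the first answer would be needed instead.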
If you can provide all input at once and the output is bounded, then you could use .communicate():
from subprocess import Popen, PIPE

p = Popen(["./cprog"], stdin=PIPE, stdout=PIPE, stderr=PIPE,
          universal_newlines=True)
out, err = p.communicate("2\n")
So what I want is: the user runs the program in the browser and enters input in the provided textbox whenever needed; that input is redirected to the subprocess's stdin, and the output is based on it.
Based on ws-cli example:
#!/usr/bin/python
"""WebSocket CLI interface.

Install: pip install twisted txws
Run: twistd -ny wscli.py
Visit http://localhost:8080/
"""
import sys
from twisted.application import strports  # pip install twisted
from twisted.application import service
from twisted.internet import protocol
from twisted.python import log
from twisted.web.resource import Resource
from twisted.web.server import Site
from twisted.web.static import File
from txws import WebSocketFactory  # pip install txws

class Protocol(protocol.Protocol):
    def connectionMade(self):
        from twisted.internet import reactor
        log.msg("launch a new process on each new connection")
        self.pp = ProcessProtocol()
        self.pp.factory = self
        reactor.spawnProcess(self.pp, command, command_args)

    def dataReceived(self, data):
        log.msg("redirect received data to process' stdin: %r" % data)
        self.pp.transport.write(data)

    def connectionLost(self, reason):
        self.pp.transport.loseConnection()

    def _send(self, data):
        self.transport.write(data)  # send back

class ProcessProtocol(protocol.ProcessProtocol):
    def connectionMade(self):
        log.msg("connectionMade")

    def outReceived(self, data):
        log.msg("send stdout back %r" % data)
        self._sendback(data)

    def errReceived(self, data):
        log.msg("send stderr back %r" % data)
        self._sendback(data)

    def processExited(self, reason):
        log.msg("processExited")
        self._sendback('program exited')

    def processEnded(self, reason):
        log.msg("processEnded")

    def _sendback(self, data):
        self.factory._send(data)

command = './cprog'
command_args = [command]

application = service.Application("ws-cli")
echofactory = protocol.Factory()
echofactory.protocol = Protocol
strports.service("tcp:8076:interface=127.0.0.1",
                 WebSocketFactory(echofactory)).setServiceParent(application)

resource = Resource()
resource.putChild('', File('index.html'))
strports.service("tcp:8080:interface=127.0.0.1",
                 Site(resource)).setServiceParent(application)
where index.html is:
<!doctype html>
<title>Send input to subprocess using websocket and echo the response</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.7.0/jquery.min.js">
</script>
<script>
  // send keys to websocket and echo the response
  $(document).ready(function() {
      // create websocket
      if (! ("WebSocket" in window)) WebSocket = MozWebSocket; // firefox
      var socket = new WebSocket("ws://localhost:8076");
      // open the socket
      socket.onopen = function(event) {
          // show server response
          socket.onmessage = function(e) {
              $("#output").text(e.data);
          }
          // send input
          $("#entry").keyup(function (e) {
              socket.send($("#entry").attr("value")+"\n");
          });
      }
  });
</script>
<pre id=output>Here you should see the output from the command</pre>
<input type=text id=entry value="123">
And cprog.c:
#include <stdio.h>

int main() {
    int x = -1;
    setbuf(stdout, NULL);  // make stdout unbuffered
    while (1) {
        printf("Enter value of x: \n");
        if (scanf("%d", &x) != 1)
            return 1;
        printf("Value of x: %d\n", x);
    }
    return 0;
}

Importing a script into another script

I am trying to import this file
http://pastebin.com/bEss4J6Q
Into this file
def MainLoop(self):  # MainLoop is used to make the commands executable, ie !google !say etc;
    try:
        while True:
            # This method sends a ping to the server and if it pings it will send a pong back
            # in other clients they keep receiving till they have a complete line however mine does not as of right now
            # The PING command is used to test the presence of an active client or
            # server at the other end of the connection. Servers send a PING
            # message at regular intervals if no other activity detected coming
            # from a connection. If a connection fails to respond to a PING
            # message within a set amount of time, that connection is closed. A
            # PING message MAY be sent even if the connection is active.
            # PONG message is a reply to PING message. If parameter <server2> is
            # given, this message will be forwarded to given target. The <server>
            # parameter is the name of the entity who has responded to PING message
            # and generated this message.
            self.data = self.irc.recv(4096)
            print self.data
            if self.data.find('PING') != -1:
                self.irc.send(("PONG %s \r\n") % (self.data.split()[1]))  # Possible overflow problem
            if "!chat" in self.data:
                .....
so that I can successfully call the imported file (ipibot) whenever
'!chat' in self.data # is true.
But I'm not sure how to write it. This is what I have so far:
if "!chat" in self.data:
    user = ipibot.ipibot()
    user.respond()
I'd like to state that I have looked at the modules and importing sections of the Python documentation; I just can't seem to grasp it, I guess.
file -> class -> function is what I understand it to be.
A module is nothing but a Python source file. You keep that source file in the same directory as the other source files, and then you can import that module in them. When you import the module, the classes and functions defined in it become available for you to use.
For example, in your case you would just put
import ipibot
at the top of your source, provided that ipibot.py (your pastebin file) is present in the same directory or on the PYTHONPATH (the list of directories where Python looks up modules), and then start using ipibot.ipibot() from that module. That's it.
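A self-contained sketch of the mechanism; the module body written here is a made-up stand-in for the pastebin file, assuming it exposes an ipibot class with a respond method:

```python
import importlib
import pathlib
import sys

# Write a minimal stand-in for ipibot.py into the current directory
# (assumption: the real module defines an ipibot class with respond()).
pathlib.Path("ipibot.py").write_text(
    "class ipibot:\n"
    "    def respond(self, data):\n"
    "        return 'responding to: ' + data\n"
)

sys.path.insert(0, ".")                  # make the current directory importable
ipibot = importlib.import_module("ipibot")

user = ipibot.ipibot()                   # file -> class -> method, as described
result = user.respond("!chat hello")
```

In the real bot, the file already sits next to the script, so the `import ipibot` line alone is enough; the write_text step exists only to make this sketch runnable.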
