Executing a shell script using Python's paramiko library

I am trying to execute a shell script with command line arguments using python's paramiko library and the code is as shown below.
import paramiko

ip = input("Enter the ip address of the machine: ")
mac = input("Enter the mac address of the machine: ")
model = input("Enter the model of the box (moto/wb): ")
spec = input("Enter the spec of the box (A/B/C/CI/D/E): ")

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect('hostname', username='xxxx', password='yyyy')

stdin, stdout, stderr = ssh_client.exec_command('ls -l')
for line in iter(stdout.readline, ""):
    print(line, end="")

stdin, stdout, stderr = ssh_client.exec_command('./name.sh'+ ip+ model + spec+ mac)
for line in iter(stdout.readline, ""):
    print(line, end="")
print('finished.')
I am not getting the output of the second exec_command call; instead, it jumps straight to finished. How can I get the output of the command execution?

You are not getting output because there is none. Your command is not valid, so the shell fails. The only output is sent to stderr, which you do not print.
Details:
stdin, stdout, stderr = ssh_client.exec_command('./name.sh'+ ip+ model + spec+ mac)
Assuming the user put these values:
ip: 111.222.333.444
model: wb
spec: C
mac: 28:d2:33:e6:4e:73
Then your command is:
./name.sh111.222.333.444wbC28:d2:33:e6:4e:73
Everything is appended with no spaces in between. Simple fix:
stdin, stdout, stderr = ssh_client.exec_command('./name.sh "' + ip + '" "' + model + '" "' + spec + '" "' + mac + '"')
Your command will now be:
./name.sh "111.222.333.444" "wb" "C" "28:d2:33:e6:4e:73"
I put the " around the values to ensure it will work even if the user puts spaces in the values.
Or store your four values in a list and use the join() function (https://www.geeksforgeeks.org/join-function-python/) to build your command.
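For example, a minimal sketch of the list-and-join approach, using shlex.quote to quote each value safely (the values below are hypothetical stand-ins for the user's input):

```python
import shlex

# Hypothetical values standing in for the user's input
ip = "111.222.333.444"
model = "wb"
spec = "C"
mac = "28:d2:33:e6:4e:73"

# Quote each value (shlex.quote adds quotes only when a value contains
# spaces or shell metacharacters) and join them with spaces
command = "./name.sh " + " ".join(shlex.quote(v) for v in [ip, model, spec, mac])
```

This keeps a space between every argument and stays safe even if a value contains spaces.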

Related

Log output of background process to a file

I have a time-consuming SNMP walk task to perform, which I am running as a background process using Popen. How can I capture the output of this background task in a log file? In the code below, I am trying to do an snmpwalk on each IP in ip_list and log all the results to abc.txt. However, the generated file abc.txt is empty.
Here is my sample code below -
import subprocess
import sys

f = open('abc.txt', 'a+')
ip_list = ["192.163.1.104", "192.163.1.103", "192.163.1.101"]

for ip in ip_list:
    cmd = "snmpwalk.exe -t 1 -v2c -c public "
    cmd = cmd + ip
    print(cmd)
    p = subprocess.Popen(cmd, shell=True, stdout=f)
    p.wait()

f.close()
print("File output - " + open('abc.txt', 'r').read())
The sample output from the command can be something like this for each IP:
sysDescr.0 = STRING: Software: Whistler Version 5.1 Service Pack 2 (Build 2600)
sysObjectID.0 = OID: win32
sysUpTimeInstance = Timeticks: (15535) 0:02:35.35
sysContact.0 = STRING: unknown
sysName.0 = STRING: UDLDEV
sysLocation.0 = STRING: unknown
sysServices.0 = INTEGER: 72
sysORID.4 = OID: snmpMPDCompliance
I have already tried Popen, but it does not log output to a file when the command is a time-consuming background process; it works when I run a quick command like ls/dir. Any help is appreciated.
The main issue here is the expectation of what Popen does and how it works, I assume.
p.wait() will wait for the process to finish before continuing; that is why ls, for instance, works but more time-consuming tasks don't. And nothing flushes the output automatically until you call p.stdout.flush().
The way you've set it up is more meant to work for:
Execute command
Wait for exit
Catch output
And then work with it. For your use case, you'd be better off using an alternative library, or using stdout=subprocess.PIPE and catching the output yourself, which would mean something along these lines:
import subprocess

ip_list = ["192.163.1.104", "192.163.1.103", "192.163.1.101"]

with open('abc.txt', 'a+') as output:
    for ip in ip_list:
        print(cmd := f"snmpwalk.exe -t 1 -v2c -c public {ip}")
        # universal_newlines=True makes stdout yield str instead of bytes,
        # so the '' sentinel and output.write() below work as intended
        process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                                   universal_newlines=True)  # Be wary of shell=True
        while process.poll() is None:
            for c in iter(lambda: process.stdout.read(1), ''):
                output.write(c)

with open('abc.txt', 'r') as log:
    print("File output: " + log.read())
The key thing to take away here is process.poll(), which checks whether the process has finished; while it hasn't, we catch the output with process.stdout.read(1), one character at a time. If you know there are newlines coming, you can switch the inner loop to output.write(process.stdout.readline()) and you're all set.
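If the output is line-oriented, the same streaming idea can be sketched with readline(); a harmless local command stands in for snmpwalk.exe here so the pattern is runnable anywhere:

```python
import subprocess
import sys

# A local stand-in for the snmpwalk.exe command, so the sketch runs anywhere;
# swap in the real command for actual use.
cmd = [sys.executable, "-c", "print('line one'); print('line two')"]

with open('abc.txt', 'w') as output:
    # universal_newlines=True makes stdout yield str instead of bytes
    process = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                               universal_newlines=True)
    # readline() returns '' only at EOF, so each line streams into the
    # log as soon as the child prints it
    for line in iter(process.stdout.readline, ''):
        output.write(line)
    process.wait()
```

With this approach each line lands in abc.txt as it is produced, instead of waiting for the whole process to exit.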

Python trouble with piping ssh using Popen

I am writing a python script that uses plink to ssh into a linux box, execute a command, then write the output of that command to a string and return to my python script.
I would also like to not print the commands I am running to the terminal.
I have the following which executes the command and prints to terminal, but it does not return to my python script, nor can I figure out how to store the output of my command to a string.
while True:
    network_name = raw_input('\nEnter test network: ')
    network_name_check = raw_input('\nYou want to test on the %s network. Is this correct? (Y/N): ' % (network_name))
    if inputYNChecker(network_name_check):
        print "\nVerifying Network exists as Group_Name on Control VM..."
        sshCommand = "plink root@Control -pw PASSWORD"
        lsCommand = "ls -1 --color=never -d */ | grep " + network_name + "\n"
        sshProcess = Popen(sshCommand, shell=False, stdin=PIPE)
        sshProcess.stdin.write("cd /mnt/PCAPS/GroupSetup\n")
        #sshProcess.communicate("cd /mnt/PCAPS/GroupSetup\n")
        sshProcess.stdin.write(lsCommand)
        sshProcess.stdin.write("exit\n")
        sshProcess.stdin.close()
        break
print "Back to python script"
I guess I really don't understand how pipes work: when I have stdin=PIPE and stdout=PIPE, nothing is displayed in the terminal except "Using username "root"." and then it hangs.
How can I:
a) Not display the commands I'm sending to the ssh session
b) Store the output of the commands (which would be a folder name) to a string
c) return to my original python program
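One way to address all three points is to pass the remote command on plink's own command line and let communicate() capture the output. A sketch of the pattern; since plink may not be installed here, a local stand-in process takes its place (the plink arguments shown in the comment are the hypothetical equivalent):

```python
from subprocess import Popen, PIPE
import sys

# With plink on PATH the command would look something like (hypothetical host):
#   ["plink", "root@Control", "-pw", "PASSWORD",
#    "cd /mnt/PCAPS/GroupSetup && ls -1 --color=never -d */"]
# A local stand-in is used here so the pattern itself is runnable anywhere.
cmd = [sys.executable, "-c", "print('TestNetwork/')"]

process = Popen(cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
out, err = process.communicate()   # blocks until the process exits
folder = out.decode().strip()      # (b) the command's output as a string

# (a) nothing was echoed to the terminal, because stdout was captured
print("Back to python script, found: " + folder)  # (c) control returns here
```

Capturing stdout with PIPE is what keeps the session's output off the terminal; communicate() both waits for the process and hands back the output in one call.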

How to show user aliases in python

I am trying to make a script in Python that shows the aliases of a user I pick, just like typing alias in the terminal.
So far the code goes like this:
import os

tt = open("/etc/passwd", "r")
with tt as f2:
    with open("passwd", "w+") as f1:
        f1.write(f2.read())
        f1.seek(0, 0)
        command = f1.read()
print
print command
chose = raw_input("select user's name from this list > ")
rootlist = "1) Show user groups \n2) Show user id \n3) Show users alias\n4) Add new alias \n5) Change Password \n6) Back"
print
print rootlist
print
chose2 = int(raw_input("Choose a command > "))
if chose2 == 3:
    os.system("alias ")
However, os.system("alias ") doesn't work and I can't seem to find a proper way to do it.
alias is a shell builtin, as can be seen here:
$ type -a alias
alias is a shell builtin
This is the problem, which you can solve by adding a call to bash to your shell command
import os
os.system('bash -i -c "alias"')
or, the preferred way, using the subprocess module:
from subprocess import Popen, PIPE, STDOUT
cmd = 'bash -i -c "alias"'
event = Popen(cmd, shell=True, stdin=PIPE, stdout=PIPE, stderr=STDOUT)
output = event.communicate()[0]
print(output)

Python raw_input doesn't work after using subprocess module

I'm using the subprocess module to invoke plink and run some commands on a remote server. This works as expected, but after a successful call to subprocess.check_call or subprocess.check_output the raw_input method seems to block forever and doesn't accept input at the command line.
I've reduced it to this simple example:
import subprocess

def execute(command):
    return subprocess.check_call('plink.exe -ssh ' + USER + '@' + HOST + ' -pw ' + PASSWD + ' ' + command)

input = raw_input('Enter some text: ')
print('You entered: ' + input)

execute('echo "Hello, World"')

# I see the following prompt, but it's not accepting input
input = raw_input('Enter some more text: ')
print('You entered: ' + input)
I see the same results with subprocess.check_call and subprocess.check_output. If I replace the final raw_input call with a direct read from stdin (sys.stdin.read(10)) the program does accept input.
This is Python 2.7 on Windows 7 x64. Any ideas what I'm doing wrong?
Edit: If I change execute to call something other than plink it seems to work okay.
def execute(command):
    return subprocess.check_call('cmd.exe /C ' + command)
This suggests that plink might be the problem. However, I can run multiple plink commands directly in a console window without issue.
I was able to resolve this by attaching stdin to devnull:
import os

def execute(command):
    return subprocess.check_call('plink.exe -ssh ' + USER + '@' + HOST + ' -pw ' + PASSWD + ' ' + command, stdin=open(os.devnull))

Python subprocess - run multiple shell commands over SSH

I am trying to open an SSH pipe from one Linux box to another, run a few shell commands, and then close the SSH.
I don't have control over the packages on either box, so something like fabric or paramiko is out of the question.
I have had luck using the following code to run one bash command, in this case "uptime", but am not sure how to issue one command after another. I'm expecting something like:
sshProcess = subprocess.call('ssh ' + <remote client>, <subprocess stuff>)
lsProcess = subprocess.call('ls', <subprocess stuff>)
lsProcess.close()
uptimeProcess = subprocess.call('uptime', <subprocess stuff>)
uptimeProcess.close()
sshProcess.close()
What part of the subprocess module am I missing?
Thanks
pingtest = subprocess.call("ping -c 1 %s" % <remote client>, shell=True, stdout=open('/dev/null', 'w'), stderr=subprocess.STDOUT)
if pingtest == 0:
    print '%s: is alive' % <remote client>
    # Uptime + CPU Load averages
    print 'Attempting to get uptime...'
    sshProcess = subprocess.Popen('ssh ' + <remote client>, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    sshOutput, stderr = sshProcess.communicate()
    print sshOutput
    uptimeProcess = subprocess.Popen('uptime', shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    uptimeOutput, stderr = uptimeProcess.communicate()
    print 'Uptime : ' + uptimeOutput.split('up ')[1].split(',')[0]
else:
    print "%s: did not respond" % <remote client>
Basically, if you call subprocess it creates a local subprocess, not a remote one, so you need to interact with the ssh process itself, something along the lines of the code below. Be aware that if you construct the commands dynamically, they are susceptible to shell injection, and the END marker line must be a unique identifier. To avoid the END-line uniqueness problem, the easiest way is to use a separate ssh command per remote command.
from __future__ import print_function, unicode_literals
import subprocess

sshProcess = subprocess.Popen(['ssh',
                               '-tt',
                               <remote client>],
                              stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE,
                              universal_newlines=True,
                              bufsize=0)
sshProcess.stdin.write("ls .\n")
sshProcess.stdin.write("echo END\n")
sshProcess.stdin.write("uptime\n")
sshProcess.stdin.write("logout\n")
sshProcess.stdin.close()

# Print everything up to the END marker
for line in sshProcess.stdout:
    if line == "END\n":
        break
    print(line, end="")

# Catch the lines up to logout
for line in sshProcess.stdout:
    print(line, end="")
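The separate-invocation alternative mentioned above avoids the END-marker bookkeeping entirely: each command becomes one ssh call whose output check_output returns directly. A sketch with a hypothetical run_remote() helper; since ssh may not be available here, a local stand-in process echoes the command in place of the real ["ssh", host, command] call:

```python
import subprocess
import sys

def run_remote(host, command):
    # With ssh available this would be:
    #   subprocess.check_output(["ssh", host, command], universal_newlines=True)
    # A local stand-in echoes the command so the pattern runs anywhere.
    return subprocess.check_output(
        [sys.executable, "-c", "import sys; print('ran: ' + sys.argv[1])", command],
        universal_newlines=True)

# One ssh invocation per command, no marker lines needed
for command in ("ls .", "uptime"):
    print(run_remote("remote-host", command), end="")
```

The trade-off is one SSH connection (and authentication round-trip) per command, but each command's output comes back cleanly separated.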
