I need to launch a server on a remote machine and retrieve the port number that the server process is listening on. When invoked, the server listens on a random port and prints the port number on stderr.
I want to automate the process of logging on to the remote machine, launching the process, and retrieving the port number. I wrote a Python script called "invokejob.py" that lives on the remote machine and acts as a wrapper that invokes the job and then returns the port number. It looks like this:
import re, subprocess
executable = ... # Name of executable
regex = ... # Regex to extract the port number from the output
p = subprocess.Popen(executable,
                     bufsize=1,  # line buffered
                     stderr=subprocess.PIPE)
s = p.stderr.readline()  # first line of stderr carries the port number
port = re.match(regex, s).groups()[0]
print port
If I log in interactively, this script works:
$ ssh remotehost.example.com
Last login: Thu Aug 28 17:31:18 2008 from localhost
$ ./invokejob.py
63409
$ exit
logout
Connection to remotehost.example.com closed.
(Note: successful logout, it did not hang).
However, if I try to invoke it from the command line, it just hangs:
$ ssh remotehost.example.com invokejob.py
Does anybody know why it hangs in the second case, and what I can do to avoid this?
Note that I need to retrieve the output of the program, so I can't just use the ssh "-f" flag or redirect standard output.
s = p.stderr.readline()
I suspect it's the line above. When you invoke a command directly through ssh, you don't get a full pty (assuming Linux), and thus no stderr to read from.
When you log in interactively, stdin, stdout, and stderr are set up for you, and so your script works.
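If that is the cause, one quick way to test the hypothesis is to force pseudo-terminal allocation with ssh's standard -t flag:
$ ssh -t remotehost.example.com ./invokejob.py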
What if you do the following:
ssh <remote host> '<your command>; <your regexp using awk or something>'
For example:
ssh <remote host> '<your program>; ps aux | awk "/root/ {print \$2}"'
This will connect to <remote host>, execute <your program>, and then print the PID of every process owned by root or with root in its description.
I have used this method for running all kinds of commands on remote machines. The catch is to wrap the command(s) you wish to execute in single quotation marks (') and to separate each command with a semicolon (;).
Related
I'm trying to write a Python script that starts a subprocess to run an Azure CLI command when the file is executed.
When I run locally, I run:
az pipelines create --name pipeline-from-cli --repository https://github.com/<org>/<project> --yml-path <path to pipeline>.yaml --folder-path _poc-area
I get prompted for an input which looks like:
Which service connection do you want to use to communicate with GitHub?
[1] Create new GitHub service connection
[2] <my connection name>
[3] <org name>
Please enter a choice [Default choice(1)]:
I can type in 2 and press enter, and my pipeline is successfully created in Azure DevOps. I would like to run this command with the choice entered dynamically when prompted.
So far I have tried:
import subprocess
cmd = 'az pipelines create --name pipeline-from-cli --repository https://github.com/<org>/<project> --yml-path <path to pipeline>.yaml --folder-path _poc-area'
cmd = cmd.split()
subprocess.run(cmd, shell=True)
This will run in the exact same way as when I try to run it locally.
Trying to follow the answers from here, I have also tried:
p = subprocess.run(cmd, input="1", capture_output=True, text=True, shell=True)
print(p)
Which gives me an error saying raise NoTTYException(error_msg)\nknack.prompting.NoTTYException.
Is there a way to execute this Python script so that it runs the Azure CLI command and enters 2 when prompted, without any manual intervention?
You are trying to solve the wrong problem. az pipelines create takes a --service-connection parameter. You don't need to respond to the prompt; you can provide the service connection value on the command line and skip the prompt entirely.
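Applied to the command from the question, that would look something like this (the <service connection id> placeholder is hypothetical; everything else is the question's own command line):
az pipelines create --name pipeline-from-cli --repository https://github.com/<org>/<project> --yml-path <path to pipeline>.yaml --folder-path _poc-area --service-connection <service connection id>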
IMHO, Daniel is right: you're not supposed to deal with stdin in your program.
Nevertheless, if you really need to, you should use the pexpect package, which basically opens a process, waits for given output, and then sends input to the process's stdin.
Here's a basic example:
import pexpect
from pexpect.popen_spawn import PopenSpawn
cmd = 'az pipelines create --name pipeline-from-cli --repository https://github.com/<org>/<project> --yml-path <path to pipeline>.yaml --folder-path _poc-area'
child = PopenSpawn(cmd, timeout=1)  # pass the cmd variable itself; a longer timeout may be needed for slow commands
child.expect('.*Please enter a choice.*')
child.sendline('2')
# child.interact() # Give control of the child to the user.
Have a look at the pexpect documentation for more details. MS Windows support is available since v4.0.
Another, more archaic solution would be to use subprocess in the following way, basically emulating what expect would do:
import subprocess
from subprocess import PIPE, STDOUT
from time import sleep
# azure_command is the same az pipelines command as above
p = subprocess.Popen(azure_command, stdout=PIPE, stdin=PIPE, stderr=STDOUT)
sleep(.5)  # crude: give the process a moment to reach its prompt
stdout = p.communicate(input=b'2\n')[0]
print(stdout.decode())
Still, the best solution is to use the non-interactive mode that most CLI programs provide.
I am trying to port some simple scripts that I have in Tcl to Python.
Using Tcl/Expect, we can see every executed command on the standard output. For example,
spawn ssh admin@$IP
send "ls\r"
would yield by default an output like this:
ssh admin@10.10.10.10
ls
....
In Python, the only way I saw was to decode the child.before or child.after outputs.
Is there a way Python can output everything it runs to the console or a file?
This is what I am doing now:
#!/usr/bin/env python3
import pexpect
shellprompt = "] # "
child = pexpect.spawn('ssh admin@XYZ')
child.sendline('ls')
child.expect(shellprompt)
ls_out = child.before.decode()
print(ls_out)
This is run on a Linux machine, connecting via ssh to another Linux machine.
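For what it's worth, pexpect's spawn objects have a logfile_read attribute that mirrors everything read from the child to a stream, which is close to Tcl/Expect's default echoing. A minimal sketch, assuming the same host and prompt as the script above:
#!/usr/bin/env python3
import sys
import pexpect
shellprompt = "] # "
child = pexpect.spawn('ssh admin@XYZ')
# Mirror everything read from the child to our stdout; pexpect reads
# bytes, so in Python 3 the log target must be a binary stream.
child.logfile_read = sys.stdout.buffer
child.sendline('ls')
child.expect(shellprompt)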
I've got two functions, both for starting a server on a particular port. Something like:
from multiprocessing import Process
import os
import sys
def start_server1(port1):
    os.system("server1 --port %s" % port1)
def start_server2(port2):
    os.system("server2 --port %s" % port2)
port1 = sys.argv[1]
port2 = sys.argv[2]
p1 = Process(target=start_server1, args=(port1,))
p2 = Process(target=start_server2, args=(port2,))
p1.start()
p2.start()
This allows me to start both servers from the terminal with my script, like
$ python servers.py 8000 8001
But a lot of traces are expected from both servers, and I want to see them separately. So the question is: how can the script be run from one terminal shell such that process p1 is started in a new shell, as well as the second process p2?
Thank you for any advice and suggestions.
You can redirect the output of each process to a file.
os.system("server1 --port %s > server1.log" % port1)
Then run the tail command on each log in a separate shell to monitor it.
tail -f server1.log
However, if you are not interested in storing the output of those processes in a file and want a separate-shell solution, you can use xterm's -e option.
The -e option of xterm allows you to run a shell command in a separate xterm window.
os.system('xterm -e "server1 --port %s"' % port1)
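Putting that together with the script from the question, a sketch (the start_in_xterm helper is hypothetical; server1 and server2 are the question's commands):
import os
import sys
from multiprocessing import Process
def start_in_xterm(name, port):
    # Each server gets its own xterm window, so the traces stay separate.
    os.system('xterm -e "%s --port %s"' % (name, port))
p1 = Process(target=start_in_xterm, args=("server1", sys.argv[1]))
p2 = Process(target=start_in_xterm, args=("server2", sys.argv[2]))
p1.start()
p2.start()
p1.join()  # wait until both xterm windows are closed
p2.join()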
I'm using an automated SSH script to copy/run/log hardware tests to a few computers via SSH, and everything works fine except one thing. The test file is supposed to run indefinitely, collecting data every 30 minutes and writing it to a file until killed. For lack of a better example:
NOTE: Neither of these files is the actual code. I don't have it in front of me to copy.
file.py:
#!/usr/bin/env python
import os
from time import sleep
idleUsage = []
sleepTime = 1800  # 30 minutes
while True:
    # mpstat's 9th column is assumed to hold the %idle value
    holder = os.popen('mpstat | awk \'{printf("%s\\n", $9)}\'').readlines()
    idleUsage.append(100.0 - float(holder[1]))
    f = open("output.log", 'w')
    f.write("%s\n" % idleUsage)
    f.close()
    sleep(sleepTime)
automatic-ssh.sh:
#!/bin/bash
autossh uname1 password1 ip1 command <----gets stuck after ssh runs
autossh uname2 password2 ip2 command
autossh uname3 password2 ip3 command
Without fail, it gets stuck running the command. I've tried 'command &' as well as putting an ampersand at the end of the entire line. Does anyone out there have some advice?
Not sure of your current context, but I would recommend using subprocess:
from subprocess import Popen, PIPE
p1 = Popen(["sar"], stdout=PIPE)
p2 = Popen(["grep", "kb"], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
output = p2.communicate()[0]
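Applying the same pattern to the mpstat | awk pipeline from file.py would look roughly like this (a sketch; column 9 holding %idle is taken from the question):
from subprocess import Popen, PIPE
p1 = Popen(["mpstat"], stdout=PIPE)
p2 = Popen(["awk", '{printf("%s\\n", $9)}'], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
idle_column = p2.communicate()[0]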
So, your shell script connects to a remote machine via ssh, runs an endless Python command, and you want that ssh connection to go into the background?
#!/bin/sh
ssh uname1@ip1 command > out.1 &
ssh uname2@ip2 command > out.2 &
ssh uname3@ip3 command > out.3 &
wait
That'll kick off three ssh commands in the background, logging to individual files, and then the script will wait until they all exit (wait, if not given a PID as an argument, waits for all children to exit). If you kill the script, the child ssh processes should terminate as well. I'm not sure if that's what you're asking, but maybe it helps? :)
I'm trying to SCP a file between machines and I need it to fail when the user hasn't set up a public/private key pair for passwordless logins. Unfortunately, using subprocess.Popen I can't figure out how to capture the following output:
The authenticity of host '***' can't be established.
RSA key fingerprint is ***.
Are you sure you want to continue connecting (yes/no)?
It always shows up on the console, and I can't detect it from within my program.
Here's some example code:
proc = subprocess.Popen(['scp', 'user@server:/location/file.txt', '/someplace/file.txt'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
proc.wait()
print 'result: %s' % repr(proc.stderr.readline())
I've tried many other permutations. This one still prompts me at the console to enter yes/no, rather than letting Python see the prompt. At least when I type no, though, I get:
result: 'Host key verification failed.\r\n'
'The authenticity of host '***' can't be established' means the machine you're connecting from hasn't saved the other end's (the server's) identity to the known_hosts file, and ssh is asking whether you trust the machine. You can tell the ssh client to just add it automatically without prompting you.
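For example, with subprocess (a sketch; StrictHostKeyChecking=no tells ssh/scp to add unknown host keys automatically instead of prompting, at the cost of skipping the man-in-the-middle protection):
proc = subprocess.Popen(['scp', '-o', 'StrictHostKeyChecking=no',
                         'user@server:/location/file.txt',
                         '/someplace/file.txt'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)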
To make the copy fail instead of prompting, try this:
proc = subprocess.Popen(['scp', '-o', 'BatchMode=yes',
                         'user@server:/location/file.txt',
                         '/someplace/file.txt'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
proc.wait()
print 'result: %s' % repr(proc.stderr.readline())
With the above code I get:
me@myMachine:~$ python tmp.py
result: 'Host key verification failed.\r\n'
me@myMachine:~$
If I disable StrictHostKeyChecking I get:
me@myMachine:~$ python tmp.py
result: 'Permission denied (publickey,password,keyboard-interactive).\r\n'
me@myMachine:~$
So it looks like it is printing the first line from stderr with BatchMode turned on :)
I've run into something similar before, though in my case it was actually helpful. I believe ssh and friends don't actually read stdin and print on stdout or stderr; they do funky things to hook up directly with the terminal you're running in.
I believe the reasoning is that they're supposed to be able to talk to the user directly, even when run through wrapper shell scripts, because the user knows the password, not the calling script (and they probably deliberately don't want calling scripts to have the opportunity to intercept a password).
[Edit to add]: According to the man page on my system, scp does have a flag that might do what you want:
-B Selects batch mode (prevents asking for passwords or passphrases).
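Based on that man page entry, passing -B through subprocess might look like this (a sketch; in batch mode scp should fail with an error on stderr instead of prompting when no key is set up):
proc = subprocess.Popen(['scp', '-B',
                         'user@server:/location/file.txt',
                         '/someplace/file.txt'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
proc.wait()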