How to keep an SSH session open after logging in using subprocess.Popen? - python
I am new to Python.
I am trying to SSH to a server to perform some operations. However, before performing the operations, I need to load a profile, which takes 60-90 seconds. After loading the profile, is there a way to keep the SSH session open so that I can perform the operations later?
p = subprocess.Popen("ssh abc@xyz './profile'", stdout=subprocess.PIPE, shell=True)
result = p.communicate()[0]
print result
return result
This loads the profile and exits. Is there a way to keep the above ssh session open and run some commands?
Example:
p = subprocess.Popen("ssh abc@xyz './profile'", stdout=subprocess.PIPE, shell=True)
<More Python Code>
<More Python Code>
<More Python Code>
<Run some scripts/commands on xyz server non-interactively>
After loading the profile, I want to run some scripts/commands on the remote server, which I am able to do simply as below:
p = subprocess.Popen("ssh abc@xyz './profile; <./a.py; etc>'", stdout=subprocess.PIPE, shell=True)
However, once done, it exits, and the next time I want to execute a script on the server I need to load the profile again (which takes 60-90 seconds). I am trying to figure out a way to create some sort of tunnel (or any other mechanism) where the SSH connection remains open after loading the profile, so that users don't have to wait 60-90 seconds every time something is executed.
I don't have access to strip down the profile.
Try an ssh library like asyncssh or spur. Keeping the connection object should keep the session open.
You could send a dummy command like date to prevent the timeout as well.
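For example, a minimal sketch with asyncssh (third-party: pip install asyncssh; the host, user, and script names are the ones from the question). Note that each conn.run() starts a fresh shell on the server, so to keep ./profile loaded you want one long-lived process on the open connection:

import asyncio
import asyncssh  # third-party: pip install asyncssh

async def main():
    # one SSH connection, held open for the life of the block
    async with asyncssh.connect('xyz', username='abc') as conn:
        # a single long-lived shell, so ./profile stays loaded for later commands
        proc = await conn.create_process('bash')
        proc.stdin.write('./profile\n')  # pay the 60-90 second cost once
        proc.stdin.write('./a.py\n')     # later commands reuse the same shell
        proc.stdin.write('exit\n')
        print(await proc.stdout.read())

asyncio.run(main())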
You have to construct an ssh command like this: ['ssh', '-T', 'host_user_name@host_address'], then follow the code below.
Code:
from subprocess import Popen, PIPE
ssh_conn = ['ssh', '-T', 'host_user_name@host_address']
# if you have to add a port then ssh_conn should be as follows
# ssh_conn = ['ssh', '-T', 'host_user_name@host_address', '-p', 'port']
commands = """
cd Documents/
ls -l
cat test.txt
"""
with Popen(ssh_conn, stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) as p:
    output, error = p.communicate(commands)
    print(output)
    print(error)
    print(p.returncode)
    # alternatively, instead of communicate() above, write commands one at a time
    # (trailing newlines are required; close stdin when done):
    # p.stdin.write('command_1\n')
    # add as many commands as you want
    # p.stdin.write('command_n\n')
Please let me know if you need further explanations.
N.B.: you can add as many commands to the commands string as you want.
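Applied to the original question, the profile load and the follow-up scripts can go into one commands string, so the 60-90 second ./profile cost is paid once and stays loaded for every script in the same batch (./a.py is the placeholder script from the question):

commands = """
./profile
./a.py
"""
with Popen(ssh_conn, stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) as p:
    output, error = p.communicate(commands)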
What you want to do is write/read to the process's stdin/stdout.
from subprocess import Popen, PIPE
import shlex
shell_command = "ssh user#address"
proc = Popen(shlex.split(shell_command), stdin=PIPE, universal_newlines=True)
# Do python stuff here
proc.stdin.write("cd Desktop\n")
proc.stdin.write("mkdir Example\n")
# And so on
proc.stdin.write("exit\n")
You must include the trailing newline for each command. If you prefer, print() (as of Python 3.x, where it is a function) takes a keyword argument file, which allows you to forget about that newline (and also gain all the benefits of print()).
print("rm Example", file=proc.stdin)
Additionally, if you need to see the output of your command, you can pass stdout=PIPE and then read via proc.stdout.read() (same for stderr).
You may also want to put the exit command in a try/finally block, to ensure you exit the ssh session gracefully.
Note that (a) read is blocking, so if there's no output it will block forever, and (b) it only returns whatever was available to read from stdout at that moment, so you may need to read repeatedly, sleep for a short time, or poll for additional data. See the fcntl and select stdlib modules for switching a blocking read to non-blocking and for polling for events, respectively.
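For example, a Unix-only sketch of the select approach (proc is the Popen object from above; 4096 is an arbitrary chunk size):

import os
import select

# wait up to 1 second for data on the pipe, then read only what is buffered,
# so neither call can block indefinitely
ready, _, _ = select.select([proc.stdout], [], [], 1.0)
if ready:
    chunk = os.read(proc.stdout.fileno(), 4096)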
Hello Koshur!
I think that what you are trying to achieve looks like what I've tried in the past when trying to make my terminal accessible from a private website:
I would open a bash instance, keep it open and would listen for commands through a WebSocket connection.
What I did to achieve this was using the O_NONBLOCK flag on STDOUT.
Example
import fcntl
import os
import shlex
import subprocess
current_process = subprocess.Popen(shlex.split("/bin/sh"), stdin=subprocess.PIPE,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT)  # open a shell prompt
fcntl.fcntl(current_process.stdout.fileno(), fcntl.F_SETFL,
            os.O_NONBLOCK)  # non-blocking stdout and stderr reading
What I would have after this is a loop checking for new output in another thread:
from time import sleep
from threading import Thread
def check_output(process):
    """
    Checks the output of stdout and stderr to send it to the WebSocket client
    """
    while process.poll() is None:  # while the process hasn't exited
        try:
            output = process.stdout.read()  # read the stdout PIPE (which contains stdout and stderr)
        except Exception:
            output = None
        if output:
            print(output)
        sleep(.1)
    # from here, we are outside the loop: the process exited
    print("Process exited with return code: {code}".format(code=process.returncode))
Thread(target=check_output, args=(current_process,), daemon=True).start() # Start checking for new text in stdout and stderr
So you would need to implement your logic to SSH when starting the process:
current_process = subprocess.Popen(shlex.split("ssh abc@xyz './profile'"), stdin=subprocess.PIPE,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
And send commands like so:
def send_command(process, cmd):
    process.stdin.write(str(cmd + "\n").encode("utf-8"))  # write the input to STDIN
    process.stdin.flush()  # run the command
send_command(current_process, "echo Hello")
EDIT
I tried to check the minimum Python requirements for the given examples and found out that Thread(daemon=...) might not work on Python 2.7, which you asked about in the tags.
If you are sure to exit the Thread before exiting, you can ignore daemon and use Thread() which works on 2.7. (You could for example use atexit and terminate the process)
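For instance, a small sketch of the atexit idea (current_process is the Popen object from the example above):

import atexit

# terminate the child at interpreter exit so the reader thread's
# process.poll() loop sees a return code and finishes
atexit.register(current_process.terminate)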
References
fcntl(2) man page
https://man7.org/linux/man-pages/man2/fcntl.2.html
fcntl Python 3 Documentation
https://docs.python.org/3/library/fcntl.html
fcntl Python 2.7 Documentation
https://docs.python.org/2.7/library/fcntl.html
O_NONBLOCK Python 3 Documentation
https://docs.python.org/3/library/os.html#os.O_NONBLOCK
O_NONBLOCK Python 2.7 Documentation
https://docs.python.org/2.7/library/os.html#os.O_NONBLOCK
Related
Popen subprocess stops reading after a specific reply
I'm using pygdbmi and I think I've hit a bug there. In a nutshell: it seems that a non-blocking subprocess pipe is interacting in a funny way with GDB. My problem is NOT reading from a subprocess started with Popen; that's working. The problem is that the subprocess (GDB in this case) is printing something to its stdout and I can read it, but after a very specific GDB command something breaks and then I can't read anymore. This doesn't seem to be a matter of how many GDB commands are sent (and thus how much data is exchanged through the pipe).

I'm using pygdbmi to talk to an embedded board through JLink's JLinkGDBServerCL. This is working 99.999% of the time. Pygdbmi uses subprocess.Popen(stdin=subprocess.PIPE) to spawn a GDB process. I then use

response = gdbmi.write('-target-select remote localhost:2331', timeout_sec=5)

to connect to a running instance of JLinkGDBServerCL and interact with my board. I can download code, set breakpoints, interrupt, start again, the works. I can even send -data-list-register-values x to a board running a Cortex-M3 core.

When I try to run the same command on a board with an Arm Cortex-M33 core, I don't get any reply. I've enabled GDB logging with

-gdb-set trace-commands on
-gdb-set logging on

and if I check my gdb.txt file, I get the expected reply:

^done,register-values=[{number="0",value="0x0"},{number="1",value="0x0"},{number="2",value="0x0"},{number="3",value="0x80730"},{number="4",value="0x40000100"},{number="5",value="0x101"},{number="6",value="0x3ff01"},{number="7",value="0x0"},{number="8",value="0xffffffff"},{number="9",value="0x40001430"},{number="10",value="0xffffffff"},{number="11",value="0xffffffff"},{number="12",value="0xffffffff"},{number="13",value="0x20001e58"},{number="14",value="0x20001e69"},{number="15",value="0xeffffffe"},{number="25",value="0x41000003"},{number="91",value="0x20001e58"},{number="92",value="0x0"},{number="93",value="0x1"},{number="94",value="0x0"},{number="95",value="0x0"},{number="96",value="0x0"},{number="97",value="0x0"},{number="98",value="0x0"},{number="99",value="0x0"},{number="100",value="0x0"},{number="101",value="0x0"},{number="102",value="0x0"},{number="103",value="0x0"},{number="104",value="0x0"},{number="105",value="0x0"},{number="106",value="0x0"},{number="107",value="0x0"},{number="108",value="0x0"},{number="109",value="0x0"},{number="110",value="0x0"},{number="111",value="0x0"},{number="112",value="0x0"},{number="113",value="0x0"},{number="114",value="0x0"},{number="115",value="0x0"},{number="116",value="0x0"},{number="117",value="0x0"},{number="118",value="0x0"},{number="119",value="0x0"},{number="120",value="0x0"},{number="121",value="0x0"},{number="122",value="0x0"},{number="123",value="0x0"},{number="124",value="0x0"},{number="125",value="0x0"},{number="126",value="0x0"},{number="127",value="0x0"},{number="128",value="0x0"},{number="129",value="0x0"},{number="130",value="0x0"},{number="131",value="0x0"},{number="132",value="0x0"},{number="133",value="0x0"},{number="134",value="0x0"},{number="135",value="0x0"},{number="136",value="0x0"},{number="137",value="0x0"},{number="138",value="0x0"},{number="139",value="0x0"},{number="140",value="0x0"},{number="141",value="0x0"},{number="142",value="0x0"},{number="143",value="0x0"},{number="144",value="0x0"},{number="145",value="0x0"},{number="146",value="0x20001e58"},{number="147",value="0x0"},{number="148",value="0x0"},{number="149",value="0x0"},{number="150",value="0x0"},{number="151",value="0x0"},{number="152",value="0xfffffffc"},{number="153",value="0x0"},{number="154",value="0x0"},{number="155",value="0x0"},{number="156",value="0x0"},{number="157",value="0x1"},{number="158",value="0x0"},{number="159",value="0x0"},{number="160",value="0x0"},{number="161",value="0x0"}]

This means pygdbmi is sending the command to the child GDB process, but it's not able to read the reply. Pygdbmi uses this gist to make readline() non-blocking:

def make_non_blocking(file_obj: io.IOBase):
    """make file object non-blocking
    Windows doesn't have the fcntl module, but someone on stack overflow supplied
    this code as an answer, and it works
    http://stackoverflow.com/a/34504971/2893090"""
    if USING_WINDOWS:
        LPDWORD = POINTER(DWORD)
        PIPE_NOWAIT = wintypes.DWORD(0x00000001)
        SetNamedPipeHandleState = windll.kernel32.SetNamedPipeHandleState
        SetNamedPipeHandleState.argtypes = [HANDLE, LPDWORD, LPDWORD, LPDWORD]
        SetNamedPipeHandleState.restype = BOOL
        h = msvcrt.get_osfhandle(file_obj.fileno())
        res = windll.kernel32.SetNamedPipeHandleState(h, byref(PIPE_NOWAIT), None, None)
        if res == 0:
            raise ValueError(WinError())
    else:
        # Set the file status flag (F_SETFL) on the pipes to be non-blocking
        # so we can attempt to read from a pipe with no new data without locking
        # the program up
        fcntl.fcntl(file_obj, fcntl.F_SETFL, os.O_NONBLOCK)

And it reads with:

while True:
    responses_list = []
    try:
        self.stdout.flush()
        raw_output = self.stdout.readline().replace(b"\r", b"\n")
        responses_list = self._get_responses_list(raw_output, "stdout")
    except IOError as e:
        pass

Because readline() is non-blocking, it throws many exceptions, but it ends up reading GDB's output. That is, until -data-list-register-values x. The funny thing is that if I use -data-list-register-values (omitting the format specifier), I can read GDB's error message complaining about the missing argument:

response = gdbmi.write('-data-list-register-values', timeout_sec=10)
pprint(response)
[{'message': 'error',
  'payload': {'msg': '-data-list-register-values: Usage: '
                     '-data-list-register-values [--skip-unavailable] <format> '
                     '[<regnum1>...<regnumN>]'},
  'stream': 'stdout',
  'token': None,
  'type': 'result'}]

At the very bottom of the GDB log, I see

~"Exception condition detected on fd 0\n"
~"error detected on stdin\n"

I'm not sure if this is a red herring or not. Any suggestions on how to debug why readline() is not actually reading the output from GDB?
Turns out that the problem is with JLinkGDBServerCL. I was originally spawning the subprocess with:

command = ['C:\\Program Files (x86)\\SEGGER\\JLink_V694d\\JLinkGDBServerCL.exe',
           '-select', 'USB', '-if', 'SWD', '-device', 'RSL15', '-endian', 'little',
           '-speed', '1000', '-port', '2331', '-vd', '-ir', '-localhostonly', '1',
           '-noreset', '-singlerun', '-strict', '-timeout 0', '-nogui']
self.gdbServer = subprocess.Popen(command, shell=False, stdout=subprocess.PIPE,
                                  stdin=subprocess.PIPE, stderr=subprocess.PIPE)

Turns out that I have to call Popen with:

self.gdbServer = subprocess.Popen(command, shell=False, stdout=subprocess.DEVNULL,
                                  stdin=subprocess.PIPE, stderr=subprocess.PIPE)

I have no idea why, considering that I'm never reading from self.gdbServer.stdout. I've also tried to use make_non_blocking:

from pygdbmi.IoManager import make_non_blocking

self.gdbServer = subprocess.Popen(command, shell=False, stdout=subprocess.PIPE,
                                  stdin=subprocess.PIPE, stderr=subprocess.PIPE)
make_non_blocking(self.gdbServer.stdout)
Run cmd file using python
I have a cmd file "file.cmd" containing hundreds of lines of commands. Example:

pandoc --extract-media -f docx -t gfm "sample1.docx" -o "sample1.md"
pandoc --extract-media -f docx -t gfm "sample2.docx" -o "sample2.md"
pandoc --extract-media -f docx -t gfm "sample3.docx" -o "sample3.md"

I am trying to run these commands using a script so that I don't have to go to the file and click on it. This is my code, and it results in no output:

file1 = open('example.cmd', 'r')
Lines = file1.readlines()
# print(Lines)
for i in Lines:
    print(i)
    os.system(i)
You don't need to read the cmd file line by line. You can simply try the following:

import os
os.system('myfile.cmd')

or use the subprocess module:

import subprocess
proc = subprocess.Popen(['myfile.cmd'], shell=True, close_fds=True)
stdout, stderr = proc.communicate()

Example:

myfile.cmd:

@ECHO OFF
ECHO Greetings From Python!
PAUSE

script.py:

import os
os.system('myfile.cmd')

The cmd window will open with:

Greetings From Python!
Press any key to continue ...

You can debug the issue by checking the return exit code:

import os
return_code = os.system('myfile.cmd')
assert return_code == 0  # asserts that the return code is 0, indicating success!

Note: os.system works by calling system() in C, which can only take up to 65533 arguments after a command (so it is a 16-bit issue). Giving one more argument will result in the return code 32512 (which implies the exit code 127).
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function (os.system('command')).

Since it is a command file (cmd) and only the shell can run it, the shell argument must be set to True. Since you are setting the shell argument to True, the command needs to be in string form and not a list.

Use the Popen method to spawn a new process and communicate() to wait on that process (you can time it out as well). If you wish to communicate with the child process, provide the pipes (see my example, but you don't have to!).

The code below is for Python 3.3 and beyond:

import subprocess

try:
    proc = subprocess.Popen('myfile.cmd', shell=True,
                            stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    outs, errs = proc.communicate(timeout=15)  # timing out the execution, just if you want; you don't have to!
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()

For older Python versions:

import time
import subprocess

proc = subprocess.Popen('myfile.cmd', shell=True)
t = 10
while proc.poll() is None and t >= 0:
    print('Still waiting')
    time.sleep(1)
    t -= 1
proc.kill()

In both cases (Python versions), if you don't need the timeout feature and you don't need to interact with the child process, then just use:

proc = subprocess.Popen('myfile.cmd', shell=True)
proc.communicate()
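If you want to keep the question's line-by-line approach while getting real error reporting, subprocess.run (Python 3.5+) is a concise middle ground. A minimal sketch, assuming the same example.cmd from the question:

import subprocess

with open('example.cmd') as f:
    for line in f:
        cmd = line.strip()
        if cmd:
            # check=True raises CalledProcessError if any command fails
            subprocess.run(cmd, shell=True, check=True)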
Python parallel subprocess commands while suppressing output
I am doing a simple IP scan using ping in Python. I can run commands in parallel as demonstrated in this answer. However, I cannot suppress the output since it uses Popen, and I can't use check_output since the process returns with an exit status of 2 if a host is down at a certain IP address, which is the case for most addresses. Using a pipe is also out of the question since too many processes are running concurrently. Is there a way to run these child processes in Python concurrently while suppressing output? Here is my code for reference:

def ICMP_scan(root_ip):
    host_list = []
    cmds = [('ping', '-c', '1', (root_ip + str(block))) for block in range(0, 256)]
    try:
        res = [subprocess.Popen(cmd) for cmd in cmds]
        for p in res:
            p.wait()
    except Exception as e:
        print(e)
How about piping the process output to /dev/null? Building on this answer:

import os
devnull = open(os.devnull, 'w')
subproc = subprocess.Popen(cmd, stdout=devnull, stderr=devnull)
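On Python 3.3+ you can skip opening os.devnull yourself and use the stdlib's subprocess.DEVNULL. Applied to the question's scan (a sketch; cmds is the list built in ICMP_scan):

import subprocess

procs = [subprocess.Popen(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
         for cmd in cmds]
for p in procs:
    p.wait()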
Reading stdout from a subprocess in real time
Given this code snippet:

from subprocess import Popen, PIPE, CalledProcessError

def execute(cmd):
    with Popen(cmd, shell=True, stdout=PIPE, bufsize=1, universal_newlines=True) as p:
        for line in p.stdout:
            print(line, end='')
    if p.returncode != 0:
        raise CalledProcessError(p.returncode, p.args)

base_cmd = [
    "cmd", "/c", "d:\\virtual_envs\\py362_32\\Scripts\\activate",
    "&&"
]
cmd1 = " ".join(base_cmd + ['python -c "import sys; print(sys.version)"'])
cmd2 = " ".join(base_cmd + ["python -m http.server"])

If I run execute(cmd1) the output is printed without any problems. However, if I run execute(cmd2) instead, nothing is printed. Why is that, and how can I fix it so I can see http.server's output in real time?

Also, how is for line in p.stdout evaluated internally? Is it some sort of endless loop until it reaches stdout EOF or something?

This topic has already been addressed a few times here on SO, but I haven't found a Windows solution. The above snippet is code from this answer, and I'm running http.server from a virtualenv (Python 3.6.2 32-bit on Windows 7).
If you want to read continuously from a running subprocess, you have to make that process's output unbuffered. Your subprocess being a Python program, this can be done by passing -u to the interpreter:

python -u -m http.server
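With the snippet from the question, that just means building cmd2 with the flag (same assumed virtualenv layout as above):

cmd2 = " ".join(base_cmd + ["python -u -m http.server"])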
With this code, you can't see the real-time output because of buffering:

for line in p.stdout:
    print(line, end='')

But if you use p.stdout.readline() it should work:

while True:
    line = p.stdout.readline()
    if not line:
        break
    print(line, end='')

See the corresponding Python bug discussion for details.

UPD: here you can find almost the same problem with various solutions on stackoverflow.
I think the main problem is that http.server somehow logs its output to stderr. Here I have an example with asyncio, reading the data either from stdout or stderr.

My first attempt was to use asyncio, a nice API which has existed since Python 3.4. Later I found a simpler solution, so you can choose; both of them should work.

asyncio as solution

In the background asyncio is using IOCP - a Windows API for async stuff.

# inspired by https://pymotw.com/3/asyncio/subprocesses.html
import asyncio
import sys

if sys.platform == 'win32':
    loop = asyncio.ProactorEventLoop()
    asyncio.set_event_loop(loop)

async def run_webserver():
    buffer = bytearray()
    # start the webserver without buffering (-u), with stderr and stdout as pipes
    print('launching process')
    proc = await asyncio.create_subprocess_exec(
        sys.executable, '-u', '-mhttp.server',
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE
    )
    print('process started {}'.format(proc.pid))
    while 1:
        # wait for either stderr or stdout and loop over the results
        for line in asyncio.as_completed([proc.stderr.readline(), proc.stdout.readline()]):
            print('read {!r}'.format(await line))

event_loop = asyncio.get_event_loop()
try:
    event_loop.run_until_complete(run_webserver())
finally:
    event_loop.close()

redirecting stderr to stdout

Based on your example, this is a really simple solution. It just redirects stderr to stdout, and only stdout is read.

from subprocess import Popen, PIPE, STDOUT

def execute(cmd):
    with Popen(cmd, stdout=PIPE, stderr=STDOUT, bufsize=1) as p:
        while 1:
            print('waiting for a line')
            print(p.stdout.readline())

cmd2 = ["python", "-u", "-m", "http.server"]
execute(cmd2)
How is for line in p.stdout evaluated internally? Is it some sort of endless loop until it reaches stdout EOF or something?

p.stdout is a buffer (blocking). When you read from an empty buffer, you are blocked until something is written to that buffer. Once something is in it, you get the data and execute the inner part of the loop. Think of how tail -f works on Linux: it waits until something is written to the file, and when it does, it echoes the new data to the screen. What happens when there is no data? It waits. So when your program gets to this line, it waits for data and processes it.

Since your code works for cmd1 but not when running the http.server module, it has to be related to this somehow. The http.server module probably buffers the output. Try adding the -u parameter to Python to run the process as unbuffered:

-u : unbuffered binary stdout and stderr; also PYTHONUNBUFFERED=x
     see man page for details on internal buffering relating to '-u'

Also, you might want to try changing your loop to read one byte at a time before processing:

for line in iter(lambda: p.stdout.read(1), ''):

Update: The full loop code is

for line in iter(lambda: p.stdout.read(1), ''):
    sys.stdout.write(line)
    sys.stdout.flush()

Also, you pass your command as a string. Try passing it as a list, with each element in its own slot:

cmd = ['python', '-m', 'http.server', ..]
You could implement the no-buffer behavior at the OS level. On Linux, you could wrap your existing command line with stdbuf:

stdbuf -i0 -o0 -e0 YOURCOMMAND

Or on Windows, you could wrap your existing command line with winpty:

winpty.exe -Xallow-non-tty -Xplain YOURCOMMAND

I'm not aware of OS-neutral tools for this.
How to create a subprocess in Python, send multiple commands based on previous output
I am writing a program which initiates a connection to a remote machine, then dynamically sends multiple commands to it by monitoring the response. Instead of using pexpect, what else can I use? I am trying to use subprocess.Popen, but the communicate() method will kill the process.

Pexpect version: 2.4, http://www.bx.psu.edu/~nate/pexpect/pexpect.html

Referring to the API for subprocess in https://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate :

Popen.communicate(input=None)
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child.

Thanks
Refer to the subprocess documentation to understand the basics here. You could do something like this... Again, this is just a pointer; this approach may or may not be the best fit for your use case. Explore and test to find what works for you!

import logging
import shlex
import subprocess
import sys

logger = logging.getLogger(__name__)  # minimal logger so logger.error below works

class Command(object):
    """ Generic Command Interface ."""

    def execute(self, cmd):
        proc = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE)
        stdout_value = proc.communicate()[0]
        exit_value = proc.poll()
        if exit_value:
            logger.error('Command execution failed. Command : %s' % cmd)
        return exit_value, stdout_value

if __name__ == '__main__':
    cmd = Command()
    host = ''     # HOSTNAME GOES HERE
    cmd_str = ''  # YOUR COMMAND GOES HERE
    cmdline = 'ksh -c "ssh root@{0} \'{1}\'"'.format(host, cmd_str)
    exit_value, stdout_value = cmd.execute(cmdline)
    if exit_value == 0:
        # execute other command/s
        # you basically use the same logic as above
        pass
    else:
        # return or execute other command/s
        pass
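If you need to choose each next command based on the previous output (the dynamic part of your question), here is a minimal sketch of driving one long-lived ssh process through its pipes. The host is hypothetical, and the echoed sentinel is a simplistic stand-in for pexpect-style prompt matching:

import subprocess

proc = subprocess.Popen(['ssh', '-T', 'user@remotehost'],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                        universal_newlines=True, bufsize=1)

def run(cmd, sentinel='__DONE__'):
    # echo a sentinel after each command so we know when its output ends
    proc.stdin.write(cmd + '; echo ' + sentinel + '\n')
    proc.stdin.flush()
    lines = []
    while True:
        line = proc.stdout.readline()
        if not line or line.strip() == sentinel:
            break
        lines.append(line)
    return ''.join(lines)

out = run('uname -a')
if 'Linux' in out:  # decide the next command from the previous response
    print(run('uptime'))
proc.stdin.write('exit\n')
proc.stdin.flush()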