Read Linux terminal output using Python's subprocess module - python

I am running a Python script on a Linux terminal, and it needs some bash commands to run while the test is running. So I am using the subprocess module to run my test commands (a bash script). These bash commands might print something on the CLI, which I need to detect while my Python code runs in parallel.
For example:
# running my python TCP server
subprocess.call(['./run_some_shell_commands.sh'], shell=True)
while True:
    # I am doing some other python stuff
    if CLI_HAS_SOME_OUTPUT_DETECTED:
        # record the output to some variable
    # doing some more python stuff
If I knew for sure that run_some_shell_commands.sh returns some output, I could simply use A = subprocess.check_output(['./run_some_shell_commands.sh'], shell=True), which would save its output in the variable A.
Is there any way to grab the last n lines of the terminal, so that I can check whether that event has occurred and assign it to CLI_HAS_SOME_OUTPUT_DETECTED?
Any suggestions are highly appreciated.
Saira

import subprocess
import time as t

cmd = [' ']
P = subprocess.check_output(cmd, shell=True)
while True:
    print(P)
    t.sleep(0.1)

This is answered in Running shell command and capturing the output. Note that shell commands fall into two classes, external executables and shell built-ins; in some environments this distinction matters, see How do I listen for a response from shell command in android studio?
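If you need the script's output while it is still running (rather than all at once at the end), one option is to start it with subprocess.Popen and read its stdout line by line. Below is a minimal sketch along those lines; the script name is taken from the question, and the 'SOME EVENT' marker is just a placeholder for whatever text you are watching for:

import subprocess

# start the shell script and attach a pipe to its combined stdout/stderr
proc = subprocess.Popen(['./run_some_shell_commands.sh'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT,
                        universal_newlines=True)

captured = []
for line in proc.stdout:            # yields lines as the script prints them
    line = line.rstrip('\n')
    captured.append(line)
    if 'SOME EVENT' in line:        # placeholder for the output you are looking for
        cli_output = line           # record it and react to it here
proc.wait()

Reading the pipe this way blocks until the script prints something, so if your Python code has to keep doing other work at the same time, you can move this loop into a background thread and have it set a flag or push lines onto a queue.Queue that your main loop checks.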

Related

Can't trigger Python 3 code with Asterisk AGI even when using subprocess inside Python 2

I am running Python 2 code which is triggered by a dial plan. In order to process a saved recording I need to run a Python 3 script. Is there any way to do that? If I switch the code to Python 3, it stops working.
This is the extension:
same=>n,AGI(code.py)
In code.py, with the header
#!/usr/bin/env python2
I am able to run the function
def run_cmd(cmd):
    # This runs the general command
    sys.stderr.write(cmd)
    sys.stderr.flush()
    sys.stdout.write(cmd)
    sys.stdout.flush()
    result = sys.stdin.readline().strip()
    checkresult(result)
which is able to process various AGI commands,
but when I switch the header to #!/usr/bin/env python3
the code won't run.
Now I need to use Google Cloud Engine to process something that's written in Python 3.
Is there a way to make this work? I have done:
def run_sys_command(command):
    subprocess.call(command, shell=True)
    checkresult(result)

command = "sudo python3 /root/Downloads/check2.py"
run_sys_command(command)
Is there any way to run the Python 3 script this way, or any way to run a Python 3 script directly with AGI?
I have checked permissions and everything.
Sure, you can run threads inside AGI, but they should be stopped before the AGI script ends.
The simplest way to do what you want is to set up some kind of queue (RabbitMQ, or a simple task list in MySQL?) and process it outside the Asterisk process scope.
There is no problem with running Python 3 as an AGI script; I have plenty of such scripts in my projects. Just check your code.
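For the specific case in the question, if the Python 2 AGI script only needs to hand work off to Python 3 and read the result back, a plain subprocess call is usually enough; there is no need for shell=True or sudo. A minimal sketch, reusing the check2.py path from the question (the argument passed to it is purely illustrative):

import subprocess

def run_python3_helper(args):
    # run the Python 3 helper and capture whatever it prints on stdout
    cmd = ['python3', '/root/Downloads/check2.py'] + list(args)
    return subprocess.check_output(cmd).strip()

result = run_python3_helper(['/path/to/recording.wav'])  # illustrative argument

check_output raises CalledProcessError if the helper exits with a non-zero status, which is usually what you want inside an AGI script so the failure shows up in the Asterisk logs.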

Launching subprocesses on a resource-limited machine

Edit:
The original intent of this question was to find a way to launch an interactive ssh session via a Python script. I'd tried subprocess.call() before and had gotten a Killed response before anything was output to the terminal. I assumed this was an issue or limitation of the subprocess module rather than an issue somewhere else. This turned out not to be the case when I ran the script on a non-resource-limited machine and it worked fine.
This then turned the question into: How can I run an interactive ssh session with whatever resource limitations were preventing it from running?
Shoutout to Charles Duffy, who was a huge help in trying to diagnose all of this.
Below is the original question:
Background:
So I have a script that is currently written in bash. It parses the output of a few console functions and then opens up an ssh session based on those parsed outputs.
It currently works fine, but I'd like to expand its capabilities a bit by adding some flag arguments to it. I've worked with argparse before and thoroughly enjoyed it. I tried to do some flag work in bash, and let's just say it leaves much to be desired.
The Actual Question:
Is it possible to have Python do stuff in a console and then put the user in that console?
Something like using subprocess to run a series of commands in the currently viewed console? This is in contrast to how subprocess normally runs, where it runs commands and then shuts the intermediate console down.
A specific example, because I'm not sure if what I'm describing makes sense:
Here's a basic rundown of the functionality I want:
1. Run a python script
2. Have that script run some console command and parse the output
3. Run the following command:
ssh -t $correctnode "cd /local_scratch/pbs.$jobid; bash -l"
This command will ssh to the $correctnode, change directory, and then leave a bash window in that node open.
I already know how to do parts 1 and 2. It's part three that I can't figure out. Any help would be appreciated.
Edit: Unlike this question, I am not simply trying to run a command. I'm trying to display a shell that is created by a command. Specifically, I want to display a bash shell created through an ssh command.
Context For Readers
The OP is operating on a very resource-constrained (particularly, it appears, process-constrained) jumphost box, where starting an ssh process as a subprocess of python goes over a relevant limit (on number of processes, perhaps?)
Approach A: Replacing The Python Interpreter With Your Interactive Process
Using the exec*() family of system calls causes your original process to no longer be in memory (unlike the fork()+exec*() combination used to start a subprocess while leaving the parent process running), so it doesn't count against the account's limits.
import argparse
import os

try:
    from shlex import quote
except ImportError:
    from pipes import quote

parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()

remote_cmd_str = 'cd /local_scratch/pbs.%s && exec bash -i' % (quote(args.jobid))
local_cmd = [
    '/usr/bin/env', 'ssh', '-tt', args.node, remote_cmd_str
]
os.execv("/usr/bin/env", local_cmd)
Approach B: Generating Shell Commands From Python
If we use Python to generate a shell command and let the calling shell run it, the command is invoked only after the Python process has exited, so we stay under our externally enforced process limit.
First, a slightly more robust approach to generating eval-able output:
import argparse

try:
    from shlex import quote
except ImportError:
    from pipes import quote

parser = argparse.ArgumentParser()
parser.add_argument('node')
parser.add_argument('jobid')
args = parser.parse_args()

remoteCmd = ['cd', '/local_scratch/pbs.%s' % (args.jobid)]
remoteCmdStr = ' '.join(quote(x) for x in remoteCmd) + ' && bash -l'
cmd = ['ssh', '-t', args.node, remoteCmdStr]
print(' '.join(quote(x) for x in cmd))
To run this from a shell, if the above is named as genSshCmd:
#!/bin/sh
eval "$(genSshCmd "$@")"
Note that there are two separate layers of quoting here: One for the local shell running eval, and the second for the remote shell started by SSH. This is critical -- you don't want a jobid of $(rm -rf ~) to actually invoke rm.
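To see what that quoting buys you, here is a small demonstration of quote() (shlex.quote on Python 3, pipes.quote on Python 2) applied to a hostile jobid:

try:
    from shlex import quote   # Python 3
except ImportError:
    from pipes import quote   # Python 2

jobid = '$(rm -rf ~)'
print('cd /local_scratch/pbs.%s && bash -l' % quote(jobid))
# prints: cd /local_scratch/pbs.'$(rm -rf ~)' && bash -l
# the single quotes keep the remote shell from performing the command substitution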
This is in no way a real answer, just an illustration of my comment.
Let's say you have a Python script, test.py:
import argparse

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('myarg', nargs="*")
    args = parser.parse_args()
    print("echo Hello world! My arguments are: " + " ".join(args.myarg))
So, you create a bash wrapper around it, test.sh
set -e
$(python test.py $*)
and this is what you get:
$ bash test.sh
Hello world! My arguments are:
$ bash test.sh one two
Hello world! My arguments are: one two
What is going on here:
The Python script does not execute commands. Instead, it outputs the commands the bash script will run (echo in this example). In your case, the last command it outputs will be your ssh invocation.
Bash executes the output of the Python script (the $(...) part), passing on all of its arguments (the $* part).
You can use argparse inside the Python script; if anything is wrong with the arguments, the message goes to stderr and is not executed by bash, and the bash script stops because of the set -e flag.

Running multiple Python scripts

I would like to create a simple Python program that will concurrently execute 2 independent scripts. For now, the two scripts just print a sequence of numbers but my intention is to use this program to concurrently run a few Twitter streaming programs in the future.
I suspect I need to use subprocess.Popen but I cannot quite get my head around what arguments I should put in there. There was a similar question on StackOverflow, but the code provided there (pasted below) doesn't print anything. I would appreciate your help.
My files are:
thread1.py
thread2.py
import subprocess

subprocess.Popen(['screen', './thread1.py'])
subprocess.Popen(['screen', './thread2.py'])
Use supervisord
supervisord is a process control system designed exactly for the purpose of running multiple command-line scripts.
It features:
multiple controlled processes
autorestarting failed runs
log stdout and stderr output
starting scripts in order (using priority)
command line utility to view latest log output, stop, start, restart the processes
This solution works only on *nix based systems, it is not available on Windows.
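As a rough illustration, a supervisord configuration for the two scripts from the question could look like the sketch below; the paths and log file locations are assumptions you would adapt to your own layout:

[program:thread1]
command=python /home/user/thread1.py
autorestart=true
priority=1
stdout_logfile=/var/log/thread1.out.log
stderr_logfile=/var/log/thread1.err.log

[program:thread2]
command=python /home/user/thread2.py
autorestart=true
priority=2
stdout_logfile=/var/log/thread2.out.log
stderr_logfile=/var/log/thread2.err.log

With this in place, supervisorctl start all launches both scripts, and supervisorctl tail thread1 shows the latest output of the first one.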
As wanderlust mentioned, why do you want to do it this way and not via the Linux command line?
Otherwise, the solution you posted is doing what it is meant to, i.e., you are doing this at the command line:
screen ./thread1.py
screen ./thread2.py
This will open a screen session and run the program with its output inside that screen session, so you will not see the output on your terminal directly. To troubleshoot your output, just execute the scripts without the screen call:
import subprocess
subprocess.Popen(['./thread1.py'])
subprocess.Popen(['./thread2.py'])
Content of thread1.py:
#!/usr/bin/env python
def countToTen():
    for i in range(10):
        print i
countToTen()
Content of thread2.py:
#!/usr/bin/env python
def countToHundreds():
    for i in range(10):
        print i*100
countToHundreds()
Then don't forget to do this on the command line:
chmod u+x thread*.py
You can also just open several Command Prompt windows to run several Python programs at once - just run one in each of them:
In each Command Prompt window, go to the correct directory (such as C:/Python27) and then type 'python YourCodeNo1.py' in one Command Prompt window, 'python YourCodeNo2.py' in the next one, etc.
I'm currently running 3 scripts at the same time in this way, without slowing any of them down.

How to issue a command from the command line of a process running on Linux

Let's say I issue a command from the Linux command line. This will cause Linux to create a new process, and let's say that the process expects to receive commands from the user.
For example: I will run a Python script, test.py, which accepts commands from the user.
$python test.py
TEST>addController(192.168.56.101)
Controller added
TEST>
The question I have is: can I write a script which will go to that command prompt (TEST>) and issue a command? As far as I know, if I write a script to run multiple commands, it will wait for the first process to exit before running the next command.
Regards,
Vinay Pai B.H.
You should look into expect. It's a tool that is designed to automate user interaction with commands that need it. The man page explains how to use it.
It seems there is also pexpect, a Python library with similar functionality.
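A minimal pexpect sketch for the session shown above; the 'TEST>' prompt string is taken from the example, so adjust it to whatever test.py really prints:

import pexpect

child = pexpect.spawn('python test.py')
child.expect('TEST>')                             # wait for the prompt
child.sendline('addController(192.168.56.101)')   # type the command
child.expect('TEST>')                             # wait for the next prompt
print(child.before)                               # everything printed in between
child.close()

Because pexpect talks to the program through a pseudo-terminal, this also works for programs that refuse to read commands from a plain pipe.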
Assuming the Python script is reading its commands from stdin, you can pass them in with a pipe or a redirection:
$ python test.py <<< 'addController(192.168.56.101)'
$ echo $'addController(192.168.56.101)\nfoo()\nbar()\nbaz()' | python test.py
$ python test.py <<EOF
addController(192.168.56.101)
foo()
bar()
baz()
EOF
If you don't mind waiting for the calls to finish (one at a time) before returning control to your program, you can use the subprocess library. If you want to start something running and not wait for it to finish, you can use the multiprocessing library.
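The same idea driven from Python rather than the shell, assuming test.py reads its commands from stdin; communicate() sends the commands, waits for test.py to exit, and returns everything it printed:

import subprocess

commands = "addController(192.168.56.101)\nfoo()\nbar()\nbaz()\n"
proc = subprocess.Popen(['python', 'test.py'],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE,
                        universal_newlines=True)
out, _ = proc.communicate(commands)   # feed the commands, wait for exit
print(out)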

How to start child cmd terminals in separate windows from a Python script and execute scripts in them?

I have been trying, rather unsuccessfully, to open several terminals (though one would be enough to start with) from, say, an IPython terminal that executes my main Python script. I would like this main Python script to open as many cmd terminals as needed and execute a specific Python script in each of them. I need the terminal windows to remain open when the scripts finish.
I can manage to start one terminal using the command:
import os
os.startfile('cmd')
but I don't know how to pass arguments to it, like:
/K python myscript.py
Does anyone have any ideas on how this could be done?
Cheers
H.H.
Use the subprocess module. See the documentation for details:
http://docs.python.org/2/library/subprocess.html
import subprocess

subprocess.check_output(["python", r"c:\home\user\script.py"])
or
subprocess.call(["python", r"c:\home\user\script.py"])
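If the goal is specifically a new console window that stays open after the script finishes, one option on Windows is to run cmd /K in a fresh console (the script path here is just the example path from above):

import subprocess

# /K keeps the cmd window open after the script exits;
# CREATE_NEW_CONSOLE gives the child its own window instead of sharing ours
subprocess.Popen(['cmd', '/K', 'python', r'c:\home\user\script.py'],
                 creationflags=subprocess.CREATE_NEW_CONSOLE)

Unlike check_output or call, Popen returns immediately, so the main script can go on to open as many of these windows as it needs.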
