Python script for 'ps aux' command

I have tried using subprocess.check_output() to run the ps aux command from Python, but it does not seem to work with the long chain of grep filters.
Does anyone have a solution?
subprocess.check_output('ps aux | grep "bin/scrapy" | grep "option1" | grep "option2" | grep "option3" | grep "option4" | grep "option5"', shell=True)
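One way to sidestep the long grep chain entirely is to run ps aux by itself and do the filtering in Python. This is a minimal sketch, assuming the filter strings ("bin/scrapy", option1..option5 from the question) are literal substrings of the process command lines:
import subprocess

# Sketch only: the filter strings below are the placeholders from the question.
wanted = ["bin/scrapy", "option1", "option2", "option3", "option4", "option5"]

output = subprocess.check_output(["ps", "aux"], universal_newlines=True)
for line in output.splitlines():
    if all(term in line for term in wanted):
        print(line)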

You can use the following code snippet to execute commands on a remote host:
import paramiko

# create ssh client
ssh = paramiko.SSHClient()
# add host key
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
# connect to host
ssh.connect(hostname='somehost', username='someuser', password='somepass')
# Login using RSA key
# ssh.connect(hostname='somehost', username='someuser', key_filename='/home/someuser/.ssh/id_rsa')
# execute command
stdin, stdout, stderr = ssh.exec_command('ps aux')
# print output
for line in stdout:
    print(line)
# close connection
ssh.close()
# Output
# $ python3 test.py

Yes, I have found a solution. We can use the following code snippet to find the PID.
Instead of ps aux we can use the psutil Python library.
import logging
import psutil

logger = logging.getLogger(__name__)

args_list = ["option1", "option2", "option3"]
for process in psutil.process_iter():
    try:
        process_args_list = process.cmdline()
    except psutil.NoSuchProcess as err:
        logger.info(f"Found error in psutil, Error: {err}")
        continue
    if all(item in process_args_list for item in args_list):
        print(process.pid)

Related

Can't assign bash variable in python subprocess

I am trying to assign the fingerprint of a PGP key to a variable in a bash subprocess of a Python script.
Here's a snippet:
import subprocess
subprocess.run(
    '''
    export KEYFINGERPRINT="$(gpg --with-colons --fingerprint --list-secret-keys | sed -n 's/^fpr:::::::::\([[:alnum:]]\+\):/\1/p')"
    echo "KEY FINGERPRINT IS: ${KEYFINGERPRINT}"
    ''',
    shell=True, check=True,
    executable='/bin/bash')
The code runs but echo shows an empty variable:
KEY FINGERPRINT IS:
and if I try to use that variable for other commands I get the following error:
gpg: key "" not found: Not found
HOWEVER, if I run the same exact two lines of bash code in a bash script, everything works perfectly, and the variable is correctly assigned.
What is my python script missing?
Thank you all in advance.
The problem is the backslashes in your sed command. In a normal Python string, backslash sequences such as \1 are interpreted as escape sequences before the shell ever sees them. To fix this, simply add an r in front of your string to make it a raw string:
import subprocess
subprocess.run(
    r'''
    export KEYFINGERPRINT="$(gpg --with-colons --fingerprint --list-secret-keys | sed -n 's/^fpr:::::::::\([[:alnum:]]\+\):/\1/p')"
    echo "KEY FINGERPRINT IS: ${KEYFINGERPRINT}"
    ''',
    shell=True, check=True,
    executable='/bin/bash')
In order to run two commands in a single subprocess call you need to run them one after the other, separated by ;:
import subprocess
cmd = r'''export KEYFINGERPRINT="$(gpg --with-colons --fingerprint --list-secret-keys | sed -n 's/^fpr:::::::::\([[:alnum:]]\+\):/\1/p')"; echo "KEY FINGERPRINT IS: ${KEYFINGERPRINT}"'''
ret = subprocess.run(cmd, capture_output=True, shell=True)
print(ret.stdout.decode())
You can use Popen:
import subprocess

commands = r'''
export KEYFINGERPRINT="$(gpg --with-colons --fingerprint --list-secret-keys | sed -n 's/^fpr:::::::::\([[:alnum:]]\+\):/\1/p')"
echo "KEY FINGERPRINT IS: ${KEYFINGERPRINT}"
'''
process = subprocess.Popen('/bin/bash', stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
out, err = process.communicate(commands)
print(out)

Paramiko doesn't return grep correctly

I am building myself a tool to run/kill scripts on my VPS over SSH. So far I was able to do exactly that (start or kill processes) but I can't manage to get the ps -fA | grep python command to work.
Some of my scripts actually spawn new scripts with Popen so I really need a way to check the PID of python scripts through SSH (and the name of the file the PID belongs to).
ssh_obj = self.get_ssh_connection()
stdin, stdout, stderr = ssh_obj.exec_command('ps -fA | grep python')
try:
    stdin_read = "stdin: {0}".format(stdin.readline())
except Exception as e:
    stdin_read = "stdin: ERROR " + str(e)
try:
    stdout_read = "stdout: {0}".format(stdout.readline())
except Exception as e:
    stdout_read = "stdout: ERROR " + str(e)
try:
    stderr_read = "stderr: {0}".format(stderr.readline())
except Exception as e:
    stderr_read = "stderr: ERROR " + str(e)
print("\n".join([stdin_read, stdout_read, stderr_read]))
But it doesn't work; the result it shows me is:
stdin: ERROR File not open for reading
stdout: root 739 738 0 17:12 ? 00:00:00 bash -c ps -fA | grep python
stderr:
While the desired output would be something like:
PID: 123 - home/whatever/myfile.py
PID: 125 - home/whatever/myfile2.py
PID: 126 - home/whatever/myfile.py
That way I'll know which PIDs to kill for myfile script (123 and 126).
Bonus question: I'm not very experienced with Linux; does executing grep commands outside of the terminal create any PID that I have to kill manually?
You may need to escape the pipe character by passing the whole statement in single quotes to a shell on the other end:
ssh_obj.exec_command("sh -c 'ps -fA | grep python'")
Alternatively you could try running pgrep:
ssh_obj.exec_command('pgrep python')
pgrep searches the currently running processes for names matching python and writes just the matching process IDs to stdout.
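A minimal sketch of that idea, assuming ssh_obj is the already-connected paramiko client from the question; pgrep's -f flag matches against the full command line and -a prints it alongside the PID:
stdin, stdout, stderr = ssh_obj.exec_command('pgrep -af python')
for line in stdout:
    # each line looks like "<pid> <full command line>"
    pid, _, cmdline = line.strip().partition(' ')
    print("PID: {0} - {1}".format(pid, cmdline))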

python fabric run command return error as stdout

I use Fabric's run to execute a command on another host, and I want to capture the stderr of the executed command.
The Fabric code looks like this:
def remoteTask(logfile):
    with settings(warn_only=True):
        result = run("tail -n4 %s | grep \'[error]\' | awk \'{print $1,$2,$0}\'" % logfile)
        if result.failed:
            raise Exception("Tail failed")
        else:
            sys.stdout.write(result)
When tail fails, result.failed is False and the value of result is the stderr of tail, so Fabric won't raise an exception.
$ python exec.py
stdout: tail: cannot open `/var/log/test.log' for reading: No such file or directory
I just want Fabric to abort or warn so I know whether my script failed, but in this situation I can't catch it.
Because of the way piping and paramiko work, Fabric only returns the result code of the last command in a piped chain. In this case, the awk command is returning 0 (or success==True). You can work around this with other Fabric constructs if you’d like:
from fabric.contrib import files

def remoteTask(logfile):
    if not files.exists(logfile):
        raise Exception('%s does not exist!' % logfile)
    with settings(warn_only=True):
        result = run("tail -n4 %s | grep \'[error]\' | awk \'{print $1,$2,$0}\'" % logfile)
For more on why this works this way you can read more about pipelines here: http://www.gnu.org/software/bash/manual/html_node/Pipelines.html
You can also just run the command that you have with
set -o pipefail
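For example, a sketch under the assumption that Fabric's run executes the command through bash (its default shell), so set -o pipefail makes the pipeline's exit status reflect the failing tail rather than the final awk:
from fabric.api import run, settings

def remoteTask(logfile):
    with settings(warn_only=True):
        # pipefail: the pipeline fails if any stage fails, not just the last one
        result = run("set -o pipefail; tail -n4 %s | grep '[error]' | awk '{print $1,$2,$0}'" % logfile)
    if result.failed:
        raise Exception("Tail failed")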

Error is being raised when executing a sub-process using " | "

I am trying to automate the process of executing a command. When I type this command:
ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10
into a terminal I get the response:
%CPU PID USER COMMAND
5.7 25378 stackusr whttp
4.8 25656 stackusr tcpproxy
But when I execute this section of code I get an error regarding the format specifier:
import subprocess

if __name__ == '__main__':
    fullcmd = ['ps', '-eo', 'pcpu,pid,user,args | sort -k 1 -r | head -10']
    print fullcmd
    sshcmd = subprocess.Popen(fullcmd,
                              shell=False,
                              stdout=subprocess.PIPE,
                              stderr=subprocess.STDOUT)
    out = sshcmd.communicate()[0].split('\n')
    #print 'here'
    for lin in out:
        print lin
This is the error shown:
ERROR: Unknown user-defined format specifier "|".
********* simple selection ********* ********* selection by list *********
-A all processes -C by command name
-N negate selection -G by real group ID (supports names)
-a all w/ tty except session leaders -U by real user ID (supports names)
-d all except session leaders -g by session OR by effective group name
-e all processes -p by process ID
T all processes on this terminal -s processes in the sessions given
a all w/ tty, including other users -t by tty
g OBSOLETE -- DO NOT USE -u by effective user ID (supports names)
r only running processes U processes for specified users
x processes w/o controlling ttys t by tty
I have tried placing a \ before the | but this has no effect.
You would need to use shell=True to use the pipe character, if you are going to go down that route then using check_output would be the simplest approach to get the output:
from subprocess import check_output, STDOUT

out = check_output("ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10", shell=True, stderr=STDOUT)
You can also simulate a pipe with Popen and shell=False, something like:
from subprocess import Popen, PIPE, STDOUT

sshcmd = Popen(['ps', '-eo', 'pcpu,pid,user,args'],
               stdout=PIPE,
               stderr=STDOUT)
p2 = Popen(['sort', '-k', '1', '-r'], stdin=sshcmd.stdout, stdout=PIPE)
sshcmd.stdout.close()
p3 = Popen(['head', '-10'], stdin=p2.stdout, stdout=PIPE, stderr=STDOUT)
p2.stdout.close()
out, err = p3.communicate()
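The collected bytes can then be decoded and printed line by line, mirroring the loop in the question, e.g.:
for lin in out.decode().splitlines():
    print(lin)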

perform echo xyz | ssh ... with python

How can I perform
echo xyz | ssh [host]
(send xyz to host)
with Python?
I have tried pexpect
pexpect.spawn('echo xyz | ssh [host]')
but it's performing
echo 'xyz | ssh [host]'
Maybe another package would be better?
http://pexpect.sourceforge.net/pexpect.html#spawn gives an example of running a command with a pipe:
shell_cmd = 'ls -l | grep LOG > log_list.txt'
child = pexpect.spawn('/bin/bash', ['-c', shell_cmd])
child.expect(pexpect.EOF)
Previous incorrect attempt deleted to make sure no-one is confused by it.
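Adapting that pattern to the command from the question might look like this (a sketch; [host] is the placeholder hostname from the question):
import pexpect

# Run the pipeline through bash -c so the shell interprets the pipe
# instead of echo receiving it as a literal argument.
shell_cmd = 'echo xyz | ssh [host]'
child = pexpect.spawn('/bin/bash', ['-c', shell_cmd], encoding='utf-8')
child.expect(pexpect.EOF)
print(child.before)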
You don't need pexpect to simulate a simple shell pipeline. The simplest way to emulate the pipeline is the os.system function:
os.system("echo xyz | ssh [host]")
A more Pythonic approach is to use the subprocess module:
p = subprocess.Popen(["ssh", "host"],
stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.stdin.write("xyz\n")
output = p.communicate()[0]
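On Python 3.7+, the same thing can be written as a single subprocess.run call (a sketch; "host" stands in for the real hostname):
import subprocess

# input= feeds the string to ssh's stdin; text=True keeps stdin/stdout as str
result = subprocess.run(["ssh", "host"], input="xyz\n",
                        text=True, capture_output=True)
print(result.stdout)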
