Unix command in python with |&

I want to execute the command below in Python and capture its output in a variable.
I tried Popen and os.system, but I can't grep for the particular content (lines starting with 0 or 1), and Python also throws an error for the |& characters.
Can anyone suggest how to build this command? I'm using Python 2.4.
"symtest host port USDSYM |& egrep -e '^0:' -e '^1:'"

If you are using Popen from subprocess, you can pass your own pipe and then read from it.
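A minimal sketch of that idea: |& is csh syntax, so under /bin/sh it must be written as 2>&1 |. Here printf stands in for the site-specific symtest command; the same Popen pattern works on Python 2.4.

```python
from subprocess import Popen, PIPE

# |& is csh syntax; POSIX sh needs an explicit `2>&1 |` to pipe stderr too.
# printf stands in for the question's `symtest host port USDSYM`.
cmd = "printf '0:ok\\n2:skip\\n1:fine\\n' 2>&1 | grep -E '^0:|^1:'"
p = Popen(cmd, shell=True, stdout=PIPE)
output = p.stdout.read()   # b'0:ok\n1:fine\n'
p.wait()
print(output)
```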

Related

Python Subprocess: fails to read "|" in Shell Command

I'm trying to use a Go binary as well as some shell tools in a Python script. It's a command chain using |; in summary, the command looks like this:
address = "http://line.me"
commando = f"echo {address} | /root/go/bin/crawler | grep -E --line-buffered '^200'"
The code above is just a demonstration; the actual code reads addresses from a wordlist. My first try used os.system, which fails:
read = os.system(commando)
print(read)
It turns out os.system doesn't capture any of the standard streams. I had to use subprocess:
commando = subprocess.Popen(commando, shell=True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
commandos = commando.stdout.read() + commando.stderr.read()
print(commandos)
Passing shell=True triggers:
b'/bin/sh: 1: Syntax error: "|" unexpected\n'
Through more reading, it could be that sh can't handle | here, or that I need to use bash. Is there an alternative to this? I have also tried putting a shebang line in the commando variable:
#!/bin/bash
Still no luck.
Try this way:
subprocess.call(commando, shell=True)
The following code worked for me:
subprocess.call('ls | grep x | grep y', shell=True)
Fixed by explicitly mentioning bash:
['bash', '-c', commando]
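Putting the fix together, a runnable sketch of the bash -c approach; cat stands in for the crawler binary (which isn't available outside the question's machine), and the grep pattern is adjusted to match the echoed URL:

```python
import subprocess

# Hand the whole pipeline to bash explicitly so `|` is interpreted by bash.
# `cat` stands in for /root/go/bin/crawler from the question.
commando = "echo http://line.me | cat | grep -E --line-buffered '^http'"
p = subprocess.Popen(["bash", "-c", commando],
                     stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
print(out)   # b'http://line.me\n'
```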

Python script for Network Packet Capture in Nifi

I am new to the NiFi platform.
I am trying to use a Python script to capture network packets. It works in VS Code, and I want to run the same script in NiFi, but I am unable to do so.
This is the Python code I used:
import subprocess
from subprocess import PIPE
from datetime import datetime

n = 10
filename = str(datetime.now()).replace(" ", "")
b = subprocess.run(f'sudo tcpdump udp -e -i wlp6s0 -nn -vvv -c {n} -w {filename}.raw', shell=True)
c = '"X%02x"'
a = subprocess.run(f"sudo hexdump -v -e '1/1 {c}' {filename}.raw | sed -e 's/\s\+//g' -e 's/X/\\x/g'", shell=True, stdout=PIPE, stderr=PIPE)
output_file = open(f'{filename}.txt', 'w')
output_file.write(str(a.stdout))
# print("*************************File Created*************************")
output_file.close()
I am using the ExecuteScript processor to run the Python script, but it doesn't seem to work. For the sudo commands I have configured passwordless sudo, so that no input is needed while the script runs.
Thank you!
Since you're just calling shell commands, you might consider ExecuteStreamCommand instead. You can still run the top-level Python script to call the subprocesses, but since you're not working with flowfile attributes you might be better served by being able to call "real" Python. In ExecuteScript the engine is actually Jython, which doesn't let you import native (CPython) modules such as scikit-learn; you can only import pure-Python modules (scripts that don't themselves import native modules).

hide popup windows while executing os.system cmd [duplicate]

I am calling this piece of code, but it produces output in the console where I ran the Python script (because of the tee command):
os.system("echo 3 | sudo tee /proc/sys/vm/drop_caches")
This version does not produce console output but is there another way?
os.system('sudo bash -c "echo 3 > /proc/sys/vm/drop_caches"')
To answer the question based on its title in the most generic form:
To suppress all output from os.system(), append >/dev/null 2>&1 to the shell command, which silences both stdout and stderr; e.g.:
import os
os.system('echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null 2>&1')
Note that os.system() by design passes the command's output through to the calling process's stdout and stderr streams (the console/terminal) - your Python code never sees it.
Also, os.system() does not raise an exception if the shell command fails and instead returns an exit code; note that it takes additional work to extract the shell command's true exit code: you need to extract the high byte from the 16-bit value returned, by applying >> 8 (although you can rely on a return value other than 0 implying an error condition).
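The high-byte extraction can be sketched like this; os.WEXITSTATUS() is the portable spelling of the same shift (a harmless `exit 3` stands in for a failing command):

```python
import os

# On POSIX, os.system() returns a 16-bit wait status; the command's real
# exit code lives in the high byte.
status = os.system('exit 3')
print(status >> 8)              # 3
print(os.WEXITSTATUS(status))   # 3, the portable equivalent
```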
Given the above limitations of os.system(), it is generally worthwhile to use the functions in the subprocess module instead:
For instance, subprocess.check_output() could be used as follows:
import subprocess
subprocess.check_output('echo 3 | sudo tee /proc/sys/vm/drop_caches', shell=True)
The above will:
capture stdout output and return it (with the return value being ignored in the example above)
pass stderr output through; passing stderr=subprocess.STDOUT as an additional argument would also capture stderr.
raise an error, if the shell command fails.
Note: Python 3.5 introduced subprocess.run(), a more flexible successor to both os.system() and subprocess.check_output() - see https://docs.python.org/3.5/library/subprocess.html#using-the-subprocess-module
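A subprocess.run() sketch of the same capture, for comparison; `echo 3` stands in for the tee pipeline, which needs root:

```python
import subprocess

# check=True raises CalledProcessError on failure, like check_output() does;
# stdout and stderr are captured instead of reaching the terminal.
result = subprocess.run('echo 3', shell=True, check=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
print(result.stdout)   # b'3\n'
```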
Note:
The reason that the OP is employing tee in the first place - despite not being interested in stdout output - is that a naïve attempt to use > ... instead would be interpreted before sudo is invoked, and thus fail, because the required privileges to write to /proc/sys/... haven't been granted yet.
Whether you're using os.system() or a subprocess function, stdin is not affected by default, so if you're invoking your script from a terminal, you'll get an interactive password prompt when the sudo command is encountered (unless the credentials have been cached).
Write directly to the proc pseudo file instead via Python i/o lib.
This will require your script to run as root (via sudo), which means you should limit its scope to being an admin only tool. This also allows the script to run on boxes where sudo requires a password.
Example:
with open("/proc/sys/vm/drop_caches", "w") as drop_caches:
    drop_caches.write("3")
You can also use this simple method from the subprocess module:
command = 'echo 3 | sudo tee /proc/sys/vm/drop_caches'
subprocess.check_call(command, shell=True, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL)
All output is passed to DEVNULL. Any issue with the command is reported by an exception; no exception means no output. (Note that shell=True is needed here: shlex.split would pass the | to echo as a literal argument rather than creating a pipe.)
You forgot to add stderr:
subprocess.check_call(command, shell=True, stdin=subprocess.DEVNULL, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
I believe the easiest way to hide the console output, when it's not possible with os.system, is to use os.popen:
os.popen("echo 3 | sudo tee /proc/sys/vm/drop_caches")
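This works because os.popen captures the command's stdout in the returned file object, so nothing reaches the terminal unless you print it yourself. A root-free sketch:

```python
import os

# os.popen returns a file-like object wrapping the command's stdout;
# the output is only visible if you read and print it.
out = os.popen("echo 3").read()
print(repr(out))   # '3\n'
```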

Bash command can run in the shell, but not via Python subprocess.run

If I run this command in ubuntu shell:
debconf-set-selections <<< 'postfix postfix/mailname string server.exmaple.com'
It runs successfully, but if I run it via python:
>>> from subprocess import run
>>> run("debconf-set-selections <<< 'postfix postfix/mailname string server.exmaple.com'", shell=True)
/bin/sh: 1: Syntax error: redirection unexpected
CompletedProcess(args="debconf-set-selections <<< 'postfix postfix/mailname string server.exmaple.com'", returncode=2)
I don't understand why the shell that Python invokes chokes on this redirection. How does one make the command run successfully, so one can script the installation of an application (postfix in this case) via Python rather than a normal bash script?
I have tried various forms with double and single quotes (as recommended in other posts), with no success.
subprocess uses /bin/sh as the shell, and presumably your system's /bin/sh (often dash) does not support here-strings (<<<), hence the error.
From subprocess source:
if shell:
    # On Android the default shell is at '/system/bin/sh'.
    unix_shell = ('/system/bin/sh' if
                  hasattr(sys, 'getandroidapilevel') else '/bin/sh')
You can run the command as an argument to any shell that supports here-strings, e.g. bash:
run('bash -c "debconf-set-selections <<< \"postfix postfix/mailname string server.exmaple.com\""', shell=True)
Be careful with the quoting.
Or better, you can stay POSIX-compliant and use echo with a pipe to pass the data via stdin:
run("echo 'postfix postfix/mailname string server.exmaple.com' | debconf-set-selections", shell=True)
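Another option, sketched here, is to skip the shell entirely and feed the data through run()'s input argument. In this runnable sketch cat stands in for debconf-set-selections, which reads its selections from stdin the same way:

```python
from subprocess import run, PIPE

# Pass the selection line on stdin instead of using a here-string; no shell
# (and no quoting headaches) involved. `cat` stands in for
# debconf-set-selections here so the example runs anywhere.
line = b"postfix postfix/mailname string server.exmaple.com\n"
result = run(["cat"], input=line, stdout=PIPE)
print(result.returncode)       # 0
print(result.stdout == line)   # True
```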

Filter output of a process with `grep` while keeping the return value

I think this is not a Python question but in order to provide the context I'll tell, what exactly I'm doing.
I run a command on a remote machine using ssh -t <host> <command> like this:
if os.system('ssh -t some_machine [ -d /some/directory ]') != 0:
    do_something()
(note: [ -d /some/directory ] is only an example. Could be replaced by any command which returns 0 in case everything went fine)
Unfortunately ssh prints "Connection to some_machine closed." every time I run it.
Stupidly I tried to run ssh -t some_machine <command> | grep -v "Connection", but that returns the exit status of grep, of course.
So in short: in Python I'd like to run a process via ssh and evaluate its return value while filtering away some unwanted output.
Edit: this question suggests something like
<command> | grep -v "bla"; return ${PIPESTATUS[0]}
Indeed this might be an approach, but it seems to work with bash only. At least with zsh, PIPESTATUS seems to be undefined (zsh spells it lowercase: pipestatus).
Use subprocess, and connect the two commands in Python rather than with a shell pipeline.
from subprocess import Popen, PIPE, call
p1 = Popen(["ssh", "-t", "some_machine", "test", "-d", "/some/directory"],
           stdout=PIPE)
call(["grep", "-v", "Connection"], stdin=p1.stdout)  # filter the noise
p1.stdout.close()
if p1.wait() != 0:  # p1.returncode now holds the exit status of ssh
    do_something()
Taking this a step further, try to avoid running external programs when unnecessary. You can examine the output of ssh directly in Python without using grep; for example, using the re library to examine the data read from p1.stdout yourself. You can also use a library like Paramiko to connect to the remote host instead of shelling out to run ssh.
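For example, the filtering can be done in pure Python with no grep at all. In this sketch a local sh -c command stands in for the ssh invocation so the example is runnable:

```python
from subprocess import Popen, PIPE

# Read the output in Python, drop the "Connection ..." noise ourselves, and
# inspect returncode directly. The sh -c command simulates a remote command
# that prints a useful line, the ssh noise line, and exits with status 3.
p = Popen(["sh", "-c", "echo important; echo 'Connection closed.'; exit 3"],
          stdout=PIPE)
out, _ = p.communicate()
kept = [l for l in out.decode().splitlines() if "Connection" not in l]
print(kept)            # ['important']
print(p.returncode)    # 3
```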
