Unable to print tcpdump information using Python subprocess

I wanted to process tcpdump output in a Python script, and so far this is the implementation I was able to get to:
from subprocess import Popen, PIPE, CalledProcessError
import os
import signal
import time

if __name__ == "__main__":
    cmd = ["sudo", "tcpdump", "-c", "1000", "-i", "any", "port", "22", "-n"]
    with Popen(cmd, stdout=PIPE, bufsize=1, universal_newlines=True) as p:
        try:
            for line in p.stdout:
                print(line, flush=True)  # process line here
        except KeyboardInterrupt:
            print("Quitting")
This is what I understood from the second answer to this previously asked question.
While it does not wait for the subprocess to complete before printing the output of tcpdump, I still get the output in chunks of 20-30 lines at a time. Is there a way to read the output even if there is only a single line in the stdout of the subprocess?
PS: I am running this script on a Raspberry Pi 4 with Ubuntu Server 22.04.1.

tcpdump uses a larger buffer if you connect its standard output to a pipe. You can easily see this by running the following two commands. (I changed the count from 1000 to 40 and removed port 22 in order to quickly get output on my system.)
$ sudo tcpdump -c 40 -i any -n
$ sudo tcpdump -c 40 -i any -n | cat
The first command prints one line at a time. The second collects everything in a buffer and prints it all at once when tcpdump exits. The solution is to tell tcpdump to use line buffering with the -l argument.
$ sudo tcpdump -l -c 40 -i any -n | cat
Do the same in your Python program.
import subprocess

cmd = ["sudo", "tcpdump", "-l", "-c", "40", "-i", "any", "-n"]
with subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=0, text=True) as p:
    for line in p.stdout:
        print(line.strip())
When I run this, I get one line printed at a time.
Note that universal_newlines is a backward-compatible alias for text, so the latter should be preferred.

Related

Send Multiple Terminal Commands in Gnome Terminals With Subprocess

So I am currently trying to run two different gnome-terminal windows in Ubuntu that I can send individual commands to after they are initially open.
def ssh_command(cmd):
    ssh_terminal_1 = subprocess.Popen(['gnome-terminal', '--', 'bash', '-c', cmd], stderr=subprocess.STDOUT, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
    ssh_terminal_2 = subprocess.Popen(['gnome-terminal', '--', 'bash', '-c', cmd], stderr=subprocess.STDOUT, stdout=subprocess.PIPE, stdin=subprocess.PIPE)
    # Activate the conda environment for our multilateration server
    spyder_activate('conda activate flyhound')
    time.sleep(10)
    ssh_terminal_1.stdin.flush()
    ssh_terminal_2.stdin.flush()
    ssh_terminal_1.stdin.write(b'cd srsRAN22.04/build\n')
    ssh_terminal_1.stdin.flush()
    ssh_terminal_2.stdin.write(b'cd srsRAN22.04/build\n')
    ssh_terminal_2.stdin.flush()
    ssh_terminal_1.stdin.write(b'sudo ./srsepc/src/srsepc ../srsepc/epc.conf.example --hss.db_file=../srsepc/user_db_unknown.csv.example\n')
    ssh_terminal_1.stdin.flush()
    ssh_terminal_2.stdin.write(b'bladeRF-cli -l /home/administrator/Downloads/hostedxA5-latest.rbf\n')
    ssh_terminal_2.stdin.flush()
    ssh_terminal_2.stdin.write(b'bladeRF-cli -f /home/administrator/Downloads/bladeRF_fw_v2.4.0.img\n')
    ssh_terminal_2.stdin.flush()
    ssh_terminal_2.stdin.write(b'sudo ./srsenb/src/srsenb ../srsenb/enb.conf.example --enb_files.sib_config=../srsenb/sib.conf.example --enb.n_prb=50 --enb_files.rr_config=../srsenb/rr.conf.example\n')
However, when I start the original subprocess command, the terminals open up fine with the command given during the function call, but all the following commands don't work and I get a broken pipe error (errno 32). While I try to run these commands I also need to keep the previous terminal open, which looks like this:
def access_command(cmd):
    while True:
        process = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE)
        while True:
            output = process.stdout.readline()
            if output == b'' and process.poll() is not None:
                break
            if output:
                print(output.strip())
            if b"f0:9e:4a:5f:a4:5b" in output and b"handshake" in output:
                ssh_command("sshpass -p earth ssh -o StrictHostKeyChecking=no administrator@ipaddress; clear; screen")
I am really not sure how I can send multiple commands to the ssh terminals after they ssh into that IP address. I am very new to subprocess and sending commands to terminals via Python, so any help would be amazing!
As I explained in the comments, your pipe goes to gnome-terminal, not to ssh or to bash. gnome-terminal is not listening to stdin; it only listens to the user at the console. Here is what you do:
Make a FIFO (named pipe) -- os.mkfifo -- for each terminal, and give it a name that won't collide with any other file (for example, put your process ID in it).
Issue the command gnome-terminal -- bash -c 'ssh <options> < <fifo name>' for each terminal. Do not make this a Popen call; use os.system or something like that.
Do your Spyder magic (anaconda).
Open the FIFO as a file.
Write your commands to the open file; they will be executed by the bash process in the ssh connection. You will probably have to flush, unless there is a way to open it in line-buffered mode.
When you want to close the terminal, close the file.
What this accomplishes is that we move the pipe from gnome-terminal to ssh and hence across the connection to bash. We feed it on one end and it comes out and gets digested by the shell.
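Here is a minimal sketch of that approach. It assumes a passwordless ssh setup; the host, the FIFO path, and the second command are placeholders, not from the original post.
import os
import subprocess
import time

fifo = '/tmp/terminal_fifo_%d' % os.getpid()  # PID keeps the name unique
os.mkfifo(fifo)

# bash inside the terminal reads its stdin from the FIFO and feeds it to ssh.
# Deliberately no pipes are connected to gnome-terminal itself
# (os.system would work here just as well as Popen without pipes).
subprocess.Popen(['gnome-terminal', '--', 'bash', '-c',
                  'ssh user@host < %s' % fifo])

# Opening a FIFO for writing blocks until the reader (the terminal) opens it.
with open(fifo, 'w') as f:
    f.write('cd srsRAN22.04/build\n')
    f.flush()  # push the line through the FIFO immediately
    time.sleep(1)
    f.write('ls\n')
    f.flush()
# Closing the file sends EOF, which ends the remote shell.

os.unlink(fifo)  # remove the FIFO when done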

wget missing url with shlex and subprocess

I'm struggling to understand why this fails with a wget: missing URL error:
import shlex
import subprocess
copy_command = "wget -O - 'http://example.com/somepath/somefile.txt?someparam=test' | sshpass -p pass ssh user@localhost -p 2222 \"cat - > /upload/somefile.txt\""
cmd = shlex.split(copy_command, posix=False)
with subprocess.Popen(
    cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, shell=True
) as proc:
    output, error = proc.communicate()
What am I missing here? If I just give subprocess the copy_command string directly, then it works without issues.
Setting up a pipeline requires the parent process to spawn all the programs involved and then connect (pipe) the stdio of one to another.
The Python documentation for subprocess explains how to do this.
It works with a string argument and shell=True because then Python just hands the command line off to a subshell, and that shell handles all those details.
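As a minimal sketch, following the pipeline pattern shown in the subprocess documentation (the URL, password, and paths are taken from the question):
import subprocess

# First stage: wget writes the file to its stdout.
wget = subprocess.Popen(
    ['wget', '-O', '-', 'http://example.com/somepath/somefile.txt?someparam=test'],
    stdout=subprocess.PIPE)

# Second stage: ssh reads the first stage's stdout as its stdin.
ssh = subprocess.Popen(
    ['sshpass', '-p', 'pass', 'ssh', 'user@localhost', '-p', '2222',
     'cat - > /upload/somefile.txt'],
    stdin=wget.stdout)

wget.stdout.close()  # so wget receives SIGPIPE if ssh exits early
ssh.communicate()    # wait for the pipeline to finish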

Why is grep run via Python subprocess ignoring the --exclude-dir flag

When executing this command from the terminal:
\grep -inH -r --exclude-dir={node_modules,.meteor,.git} -e test -- /Users/nelsyeung/Sites/foo
It outputs the correct results, excluding the --exclude-dir directories.
The following Python script should theoretically perform the exact same operation:
#!/usr/bin/env python3
from subprocess import Popen, PIPE

cmd = 'grep -inH -r --exclude-dir={node_modules,.meteor,.git} -e test -- /Users/nelsyeung/Sites/foo'
with Popen(cmd.split(), stdout=PIPE, bufsize=1, universal_newlines=True) as p:
    for l in p.stdout:
        print(l)
But the --exclude-dir flag seems to be ignored; that is, it also greps in node_modules, .meteor and .git.
Question: Why is the above code outputting a different result from just executing the command?
Note that this is more of a theory question than me looking for an alternative fix, since the Python code is basically from a plugin I'm using, where I can only supply flags for the command. To my surprise, passing in --exclude-dir does nothing. If there's something wrong with the code above, do point it out, though.
System info:
OS: macOS 10.13.6
Python: 3.7.0
grep: (BSD grep) 2.5.1-FreeBSD (with --exclude-dir support)
--exclude-dir={dir1,dir2} is expanded to
--exclude-dir=dir1 --exclude-dir=dir2
by the shell, not by grep. Popen uses shell=False by default.
So instead use
from subprocess import Popen, PIPE

cmd = '''grep -inH -r --exclude-dir=node_modules --exclude-dir=.meteor
--exclude-dir=.git -e test -- /Users/nelsyeung/Sites/foo'''
with Popen(cmd.split(), stdout=PIPE, bufsize=1, universal_newlines=True) as p:
    for l in p.stdout:
        print(l)
(Note that while using shell=True might be another option, it is not preferred because of security issues.)
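If you can only influence the argument list, another option is a sketch that expands the braces in Python yourself, since with shell=False no shell is there to do it (directory names taken from the question):
from subprocess import Popen, PIPE

# Expand {a,b,c} manually into one --exclude-dir flag per directory.
dirs = ['node_modules', '.meteor', '.git']
cmd = (['grep', '-inH', '-r']
       + ['--exclude-dir=%s' % d for d in dirs]
       + ['-e', 'test', '--', '/Users/nelsyeung/Sites/foo'])
with Popen(cmd, stdout=PIPE, universal_newlines=True) as p:
    for line in p.stdout:
        print(line, end='')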

Python subprocess piping to stdin

I'm trying to use Python's Popen to achieve what the following does on the command line:
echo "hello" | docker exec -i $3 sh -c 'cat >/text.txt'
The goal is to pipe the "hello" text into the docker exec command and have it written to the docker container.
I've tried this but can't seem to get it to work.
import subprocess
from subprocess import Popen, PIPE, STDOUT
p = Popen(('docker', 'exec', '-i', 'nginx-ssl', 'sh', '-c', 'cat >/text.txt'), stdin=subprocess.PIPE)
p.stdin.write('Hello')
p.stdin.close()
You need to give stdin the newline as well:
p.stdin.write('Hello\n')
The same is true even with sys.stdout: you don't need to give print a newline because it adds one for you, but any writing to a file that you do manually needs to include it. You should use p.communicate('Hello') instead, though; it's made for that.
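A minimal sketch of the communicate() variant, keeping the container name from the question; text=True is an assumption here so that a str can be written instead of bytes:
import subprocess

p = subprocess.Popen(
    ['docker', 'exec', '-i', 'nginx-ssl', 'sh', '-c', 'cat > /text.txt'],
    stdin=subprocess.PIPE,
    text=True)  # assumption: lets us pass str rather than bytes
p.communicate('Hello\n')  # writes, flushes, closes stdin, and waits for exit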

Python subprocess output read error

I have a command which works great at the terminal:
sudo tshark -V -l -i "any" -f 'udp port 4729'
I am trying to read the output from my Python script:
import subprocess
command = ['tshark', '-V', '-l', '-i', '"any"', '-f', '"udp port 4729"'] # the shell command
process = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=None)
output, error = process.communicate()
print output
It does not work. Maybe there is some problem with how the command is written in the list.
I receive the error:
gooman#ubuntu:~/workspace/glade_tests/src$ sudo ./main.py
tshark: Lua: Error during loading:
[string "/usr/share/wireshark/init.lua"]:45: dofile has been disabled
Running as user "root" and group "root". This could be dangerous.
Capturing on "any"
tshark: The capture session could not be initiated (No such device exists).
Please check to make sure you have sufficient permissions, and that you have the proper interface or pipe specified.
0 packets captured
Try this:
import subprocess

command = "sudo tshark -V -l -i any -f 'udp port 4729'"
try:
    output = subprocess.check_output(command, shell=True)
except subprocess.CalledProcessError as e:
    print "An error has occurred:", e
    raise
print "The subprocess output:", output
You may also need to add a stderr=subprocess.STDOUT argument to capture the error messages as well.
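As a hedged aside: the "No such device exists" error in the question suggests the embedded quotes in the list were passed to tshark literally, so it looked for a device named "any" (quotes included). With a list and the default shell=False, no quoting is needed at all:
import subprocess

# No shell is involved, so each list element is passed to tshark as-is.
command = ['tshark', '-V', '-l', '-i', 'any', '-f', 'udp port 4729']
process = subprocess.Popen(command, stdout=subprocess.PIPE)
output, error = process.communicate()
print(output)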
