Parse filename from Dumpcap output - python

I am trying to parse the filename from the output of running dumpcap in a terminal in Linux, in order to automatically attach it to an email. These are the relevant functions from a larger script. proc1, stdout, and eventfile are initialized to "" and DUMPCAP is the command-line string dumpcap -a duration:300 -b duration:2147483647 -c 500 -i 1 -n -p -s 2 -w test -B 20
def startdump():
    global DUMPCAP
    global dumpdirectory
    global proc1
    global stdout
    global eventfile
    setDumpcapOptions()
    print("dumpcap.exe = " + DUMPCAP)
    os.chdir(dumpdirectory)
    #subprocess.CREATE_NEW_CONSOLE
    proc1 = subprocess.Popen(DUMPCAP, shell=True, stdout=subprocess.PIPE)
    for line in proc1:
        if 'File: ' in line:
            parsedfile = line.split(':')
            eventfile = parsedfile[1]
    if dc_mode == "Dumpcap Only":
        proc1.communicate()
        mail_man(config_file)
    return proc1

def startevent():
    global EVENT
    global proc1
    global eventfile
    setEventOptions()
    print(EVENT)
    # subprocess.CREATE_NEW_CONSOLE
    proc2 = subprocess.Popen(EVENT, shell=True, stdout=subprocess.PIPE)
    if dc_mode == "Dumpcap+Event" or dc_mode == "Trigger" or dc_mode == "Event Only":
        proc2 = proc1.communicate()
        mail_man(config_file)
    return proc2
The problem I keep having is that I can't figure out how to parse the filename from the output of dumpcap: it keeps parsing "" from the output no matter what I do. I apologize if this seems unresearched; I am a month into learning Python and Linux on my own, and the documentation online is terse and confusing.
Should I create a function to parse the eventfile from dumpcap's output, or do it right there in the script? I'm truly at a loss here. I'm not sure how dumpcap stores its output either.
The output of dumpcap in the terminal is:
dumpcap.exe = dumpcap -a duration:300 -b duration:2147483647 -c 500 -i 1 -n -p -s 2 -w test -B 20
-i 1 - f icmp and host 156.24.31.29 - c 2
/bin/sh: -i: command not found
Capturing on 'eth0'
File: test_00001_20150714141827
Packets captured: 500
Packets received/dropped on interface 'eth0': 500/0 (pcap:0/dumpcap:0/flushed:0/ps_ifdrop:0) (100.0%)
[Errno 2] No such file or directory: ''
The line File: ... contains the randomly generated name of the pcap file saved by dumpcap. I am trying to parse that line from the terminal output to get everything after File: into a variable, but the conventional .split() method doesn't seem to be working.
The other error it gives is that Popen cannot be indexed.

It looks like you basically need a regexp.
import re

rx = re.compile(r'^File: (\S+)$', re.MULTILINE)
m = rx.search(stdout_contents)
if m:
    file_name = m.group(1)
# else: file name not detected.
Update: a minimal complete example of reading pipe's stdout; hope this helps.
import subprocess
proc = subprocess.Popen("echo foo", shell=True, stdout=subprocess.PIPE)
result = proc.communicate()
print result[0] # must be `foo\n`
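Note that, as the update below found, dumpcap prints its status lines (including the File: line) to stderr rather than stdout, so a sketch putting the two pieces together might look like this (untested against dumpcap itself):

import re
import subprocess

# capture stderr, where dumpcap writes its status lines,
# then apply the regexp above to its contents
proc = subprocess.Popen(DUMPCAP, shell=True,
                        stderr=subprocess.PIPE, universal_newlines=True)
_, stderr_contents = proc.communicate()
m = re.search(r'^File: (\S+)$', stderr_contents, re.MULTILINE)
if m:
    eventfile = m.group(1)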

Dumpcap writes its output to stderr as opposed to stdout. So I've managed to redirect stderr to a txt file which I can then parse.
def startdump():
    global DUMPCAP, dumpdirectory, proc1
    global eventfile, dc_capfile
    DUMPCAP = ''
    print("========================[ MAIN DUMPCAP MONITORING ]===========================")
    setDumpcapOptions()
    os.chdir(dumpdirectory)
    if platform == "Linux":
        DUMPCAP = "dumpcap " + DUMPCAP
    elif platform == "Windows":
        DUMPCAP = "dumpcap.exe " + DUMPCAP
    # universal_newlines=True makes stderr yield str lines instead of bytes
    proc1 = subprocess.Popen(DUMPCAP, shell=True, stderr=subprocess.PIPE,
                             universal_newlines=True)
    #procPID = proc1.pid
    if dc_mode == "Dumpcap Only":
        time.sleep(5)
        with open("proc1stderr.txt", 'w+') as proc1stderr:
            for line in proc1.stderr:
                proc1stderr.write(line)  # write the text itself, not str(proc1.stderr)
                print("%s" % line)
                if "File:" in line:
                    print(line)
                    raweventfile = line.split('File: ')[1]
                    eventfile = raweventfile.strip('[]').rstrip('\n')
        mail_man()
    proc1.communicate()
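If the txt file is only there for parsing, a simpler sketch (same assumptions as the script above) is to let the OS write dumpcap's stderr straight into the file, then read it back afterwards:

import subprocess

# point dumpcap's stderr at a real file, then parse the file
with open("proc1stderr.txt", "w") as errfile:
    proc1 = subprocess.Popen(DUMPCAP, shell=True, stderr=errfile)
    proc1.wait()
with open("proc1stderr.txt") as errfile:
    for line in errfile:
        if "File:" in line:
            eventfile = line.split("File: ")[1].strip()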

Related

Log output of background process to a file

I have a time-consuming SNMP walk task to perform, which I am running as a background process using Popen. How can I capture the output of this background task in a log file? In the code below, I am trying to do an snmpwalk on each IP in ip_list and log all the results to abc.txt. However, the generated file abc.txt is empty.
Here is my sample code below -
import subprocess
import sys

f = open('abc.txt', 'a+')
ip_list = ["192.163.1.104", "192.163.1.103", "192.163.1.101"]

for ip in ip_list:
    cmd = "snmpwalk.exe -t 1 -v2c -c public "
    cmd = cmd + ip
    print(cmd)
    p = subprocess.Popen(cmd, shell=True, stdout=f)
    p.wait()

f.close()
print("File output - " + open('abc.txt', 'r').read())
The sample output from the command looks something like this for each IP:
sysDescr.0 = STRING: Software: Whistler Version 5.1 Service Pack 2 (Build 2600)
sysObjectID.0 = OID: win32
sysUpTimeInstance = Timeticks: (15535) 0:02:35.35
sysContact.0 = STRING: unknown
sysName.0 = STRING: UDLDEV
sysLocation.0 = STRING: unknown
sysServices.0 = INTEGER: 72
sysORID.4 = OID: snmpMPDCompliance
I have already tried Popen, but it does not log output to a file if it is a time-consuming background process. However, it works when I run a quick command like ls/dir. Any help is appreciated.
The main issue here is the expectation of what Popen does and how it works, I assume.
p.wait() will wait for the process to finish before continuing; that is why ls, for instance, works but more time-consuming tasks don't appear to. And nothing flushes the output automatically until you call p.stdout.flush().
The way you've set it up is more meant to work for:
Execute command
Wait for exit
Catch output
And then work with it. For your use case, you'd be better off using an alternative library, or using stdout=subprocess.PIPE and catching the output yourself. That would mean something along the lines of:
import subprocess

ip_list = ["192.163.1.104", "192.163.1.103", "192.163.1.101"]

with open('abc.txt', 'a+') as output:
    for ip in ip_list:
        print(cmd := f"snmpwalk.exe -t 1 -v2c -c public {ip}")
        # universal_newlines=True so read() returns str, matching the '' sentinel below
        process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                                   universal_newlines=True)  # Be wary of shell=True
        while process.poll() is None:
            for c in iter(lambda: process.stdout.read(1), ''):
                if c != '':
                    output.write(c)

with open('abc.txt', 'r') as log:
    print("File output: " + log.read())
The key things to take away here are process.poll(), which checks if the process has finished, and, if it hasn't, catching the output with process.stdout.read(1) one character at a time. If you know there are newlines coming, you can switch those three lines to output.write(process.stdout.readline()) and you're all set.
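For completeness, a sketch of that readline variant, under the same assumptions as the snippet above (one hard-coded IP for brevity):

import subprocess

with open('abc.txt', 'a+') as output:
    process = subprocess.Popen("snmpwalk.exe -t 1 -v2c -c public 192.163.1.104",
                               shell=True, stdout=subprocess.PIPE,
                               universal_newlines=True)
    while process.poll() is None:
        output.write(process.stdout.readline())  # blocks until a full line arrives
    output.write(process.stdout.read())          # drain whatever is left after exit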

Get all output from subprocess in python [duplicate]

This question already has answers here:
Running shell command and capturing the output
(21 answers)
Closed 4 years ago.
I'm using Python 3.7 on Windows. I'm trying to execute a simple scan command and get its output as a string.
When I execute the command in Python, I only get the first line:
import subprocess

def execute(command):
    proc = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                          universal_newlines=True)
    output = proc.stdout if proc.stdout else proc.stderr
    return output

path = "Somepath"
command = ['ecls.exe', '/files', path]
print(execute(command))
output:
WARNING! The scanner was run in the account of a limited user.
But when I run it in the CMD:
$ ecls.exe /files "SomePath"
WARNING! The scanner was run in the account of a limited user.
ECLS Command-line scanner ...
Command line: /files SomePath
Scan started at: 11/24/18 14:18:11
Scan completed at: 11/24/18 14:18:11 Scan time: 0 sec (0:00:00)
Total: files - 1, objects 1 Infected: files - 0, objects 0 Cleaned: files - 0, objects 0
I think the command spawns a child process, and it is the child that produces the scan output. I also tried to iterate over stdout but got the same output.
EDIT:
I tried other methods like check_output, Popen, etc. using PIPE, but I only get the first line of output. I also tried shell=True, but it didn't make any difference. As I already said, the command spawns a child process and I need to capture its output, which it seems subprocess can't do directly.
As I couldn't find a direct way to solve this problem, with the help of this reference, the output can be redirected to a text file and then read back:
import subprocess
import os
import tempfile

def execute_to_file(command):
    """
    This function executes the command
    and passes its output to a tempfile, then reads it back.
    It is useful for processes that spawn child processes.
    """
    temp_file = tempfile.NamedTemporaryFile(delete=False)
    temp_file.close()
    path = temp_file.name
    command = command + " > " + path
    proc = subprocess.run(command, shell=True, stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE, universal_newlines=True)
    if proc.stderr:
        # if the command failed, clean up and return nothing
        os.unlink(path)
        return
    with open(path, 'r') as f:
        data = f.read()
    os.unlink(path)
    return data

if __name__ == "__main__":
    path = "Somepath"
    command = 'ecls.exe /files ' + path
    print(execute_to_file(command))
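The shell redirection can also be avoided by handing subprocess an open file as stdout; child processes inherit that handle, which is presumably why the > redirection captured the grandchild's output where PIPE did not. A sketch, reusing the hypothetical ecls.exe command from the question:

import subprocess
import tempfile

def execute_to_file(command):
    # command is a list here, e.g. ['ecls.exe', '/files', 'Somepath']
    with tempfile.TemporaryFile(mode='w+') as out:
        proc = subprocess.run(command, stdout=out, stderr=subprocess.PIPE,
                              universal_newlines=True)
        if proc.stderr:
            return None
        out.seek(0)       # rewind and read back what the child (and its children) wrote
        return out.read()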

Wrapping bash scripts in python

I just found this great wget wrapper and I'd like to rewrite it as a Python script using the subprocess module. However, it is turning out to be quite tricky, giving me all sorts of errors.
download()
{
    local url=$1
    echo -n " "
    wget --progress=dot $url 2>&1 | grep --line-buffered "%" | \
        sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
    echo -ne "\b\b\b\b"
    echo " DONE"
}
Then it can be called like this:
file="patch-2.6.37.gz"
echo -n "Downloading $file:"
download "http://www.kernel.org/pub/linux/kernel/v2.6/$file"
Any ideas?
Source: http://fitnr.com/showing-file-download-progress-using-wget.html
I think you're not far off. Mainly I'm wondering: why bother running pipes into grep, sed, and awk when you can do all of that internally in Python?
#! /usr/bin/env python
import re
import subprocess

TARGET_FILE = "linux-2.6.0.tar.xz"
TARGET_LINK = "http://www.kernel.org/pub/linux/kernel/v2.6/%s" % TARGET_FILE

wgetExecutable = '/usr/bin/wget'
wgetParameters = ['--progress=dot', TARGET_LINK]

wgetPopen = subprocess.Popen([wgetExecutable] + wgetParameters,
                             stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

for line in iter(wgetPopen.stdout.readline, b''):
    match = re.search(r'\d+%', line)
    if match:
        print '\b\b\b\b' + match.group(0),

wgetPopen.stdout.close()
wgetPopen.wait()
If you are rewriting the script in Python, you could replace wget with urllib.urlretrieve() in this case:
#!/usr/bin/env python
import os
import posixpath
import sys
import urllib
import urlparse

def url2filename(url):
    """Return basename corresponding to url.
    >>> url2filename('http://example.com/path/to/file?opt=1')
    'file'
    """
    urlpath = urlparse.urlsplit(url).path  # pylint: disable=E1103
    basename = posixpath.basename(urllib.unquote(urlpath))
    if os.path.basename(basename) != basename:
        raise ValueError  # refuse 'dir%5Cbasename.ext' on Windows
    return basename

def reporthook(blocknum, blocksize, totalsize):
    """Report download progress on stderr."""
    readsofar = blocknum * blocksize
    if totalsize > 0:
        percent = readsofar * 1e2 / totalsize
        s = "\r%5.1f%% %*d / %d" % (
            percent, len(str(totalsize)), readsofar, totalsize)
        sys.stderr.write(s)
        if readsofar >= totalsize:  # near the end
            sys.stderr.write("\n")
    else:  # total size is unknown
        sys.stderr.write("read %d\n" % (readsofar,))

url = sys.argv[1]
filename = sys.argv[2] if len(sys.argv) > 2 else url2filename(url)
urllib.urlretrieve(url, filename, reporthook)
Example:
$ python download-file.py http://example.com/path/to/file
It downloads the url to a file. If the filename is not given, it uses the basename from the url.
You could also run wget if you need it:
#!/usr/bin/env python
import sys
from subprocess import Popen, PIPE, STDOUT

def urlretrieve(url, filename=None, width=4):
    destination = ["-O", filename] if filename is not None else []
    p = Popen(["wget"] + destination + ["--progress=dot", url],
              stdout=PIPE, stderr=STDOUT, bufsize=1)  # line-buffered (on our side)
    for line in iter(p.stdout.readline, b''):
        if b'%' in line:                                 # grep "%"
            line = line.replace(b'.', b'')               # sed -u -e "s,\.,,g"
            percents = line.split(None, 2)[1].decode()   # awk $2
            sys.stderr.write("\b"*width + percents.rjust(width))
    p.communicate()  # close stdout, wait for child's exit
    print("\b"*width + "DONE")

url = sys.argv[1]
filename = sys.argv[2] if len(sys.argv) > 2 else None
urlretrieve(url, filename)
I have not noticed any buffering issues with this code.
I've done something like this before, and I'd love to share my code with you :)
#!/usr/bin/python2.7
# encoding=utf-8
import sys
import os
import datetime

SHEBANG = "#!/bin/bash\n\n"

def get_cmd(editor='vim', initial_cmd=""):
    from subprocess import call
    from tempfile import NamedTemporaryFile

    # Create the initial temporary file.
    with NamedTemporaryFile(delete=False) as tf:
        tfName = tf.name
        tf.write(initial_cmd)

    # Fire up the editor.
    if call([editor, tfName], shell=False) != 0:
        return None  # Editor died or was killed.

    # Get the modified content.
    fd = open(tfName)
    res = fd.read()
    fd.close()
    os.remove(tfName)
    return res

def main():
    initial_cmd = "wget " + sys.argv[1]
    cmd = get_cmd(editor='vim', initial_cmd=initial_cmd)
    if len(sys.argv) > 1 and sys.argv[1] == 's':
        # keep the download information.
        t = datetime.datetime.now()
        filename = "swget_%02d%02d%02d%02d%02d" %\
            (t.month, t.day, t.hour, t.minute, t.second)
        with open(filename, 'w') as f:
            f.write(SHEBANG)
            f.write(cmd)
        os.chmod(filename, 0777)
    os.system(cmd)

main()

# run this script with the optional argument 's':
# copy the command to the editor, then save and quit, and it will
# begin to download. if you have used the argument 's',
# this script will create another executable script; you can use
# that script to resume your interrupted download (if the server supports it).
So, basically, you just need to modify initial_cmd's value; in your case, it's
wget --progress=dot $url 2>&1 | grep --line-buffered "%" | \
sed -u -e "s,\.,,g" | awk '{printf("\b\b\b\b%4s", $2)}'
This script will first create a temp file, then put the shell commands in it and give it execute permissions, and finally run the temp file with the commands in it.
vim download.py
#!/usr/bin/env python
import subprocess
import os

sh_cmd = r"""
download()
{
    local url=$1
    echo -n " "
    wget --progress=dot $url 2>&1 |
        grep --line-buffered "%" |
        sed -u -e "s,\.,,g" |
        awk '{printf("\b\b\b\b%4s", $2)}'
    echo -ne "\b\b\b\b"
    echo " DONE"
}

download "http://www.kernel.org/pub/linux/kernel/v2.6/$file"
"""

cmd = 'sh'
p = subprocess.Popen(cmd,
                     shell=True,
                     stdin=subprocess.PIPE,
                     env=os.environ)
p.communicate(input=sh_cmd)

# or:
# p = subprocess.Popen(cmd,
#                      shell=True,
#                      stdin=subprocess.PIPE,
#                      env={'file': 'xx'})
#
# p.communicate(input=sh_cmd)

# or:
# p = subprocess.Popen(cmd, shell=True,
#                      stdin=subprocess.PIPE,
#                      stdout=subprocess.PIPE,
#                      stderr=subprocess.PIPE,
#                      env=os.environ)
# stdout, stderr = p.communicate(input=sh_cmd)
Then you can call it like:
file="xxx" python download.py
In very simple words, considering you have a script.sh file, you can execute it and print its return value, if any:
import subprocess
process = subprocess.Popen('/path/to/script.sh', shell=True, stdout=subprocess.PIPE)
process.wait()
print process.returncode
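On Python 3, the same pattern is usually written with subprocess.run, which wraps the Popen-and-wait dance; a minimal sketch:

import subprocess

# run() blocks until the script exits and returns a CompletedProcess
result = subprocess.run(['/path/to/script.sh'], stdout=subprocess.PIPE)
print(result.returncode)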

Python subprocess - run multiple shell commands over SSH

I am trying to open an SSH pipe from one Linux box to another, run a few shell commands, and then close the SSH.
I don't have control over the packages on either box, so something like fabric or paramiko is out of the question.
I have had luck using the following code to run one bash command, in this case "uptime", but am not sure how to issue one command after another. I'm expecting something like:
sshProcess = subprocess.call('ssh ' + <remote client>, <subprocess stuff>)
lsProcess = subprocess.call('ls', <subprocess stuff>)
lsProcess.close()
uptimeProcess = subprocess.call('uptime', <subprocess stuff>)
uptimeProcess.close()
sshProcess.close()
What part of the subprocess module am I missing?
Thanks
pingtest = subprocess.call("ping -c 1 %s" % <remote client>, shell=True,
                           stdout=open('/dev/null', 'w'), stderr=subprocess.STDOUT)
if pingtest == 0:
    print '%s: is alive' % <remote client>
    # Uptime + CPU Load averages
    print 'Attempting to get uptime...'
    sshProcess = subprocess.Popen('ssh ' + <remote client>, shell=True,
                                  stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    sshOutput, stderr = sshProcess.communicate()
    print sshOutput
    uptime = subprocess.Popen('uptime', shell=True,
                              stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    uptimeOutput, stderr = uptime.communicate()
    print 'Uptime : ' + uptimeOutput.split('up ')[1].split(',')[0]
else:
    print "%s: did not respond" % <remote client>
Basically, if you call subprocess it creates a local subprocess, not a remote one, so you should interact with the ssh process itself. Something along these lines works, but be aware that if you dynamically construct the commands they are susceptible to shell injection, and the END line must then be a unique identifier.
To avoid the END-line uniqueness problem, the easiest way is to pass the commands to a single ssh invocation instead (see the sketch after the code below).
from __future__ import print_function, unicode_literals
import subprocess

sshProcess = subprocess.Popen(['ssh',
                               '-tt',
                               <remote client>],
                              stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE,
                              universal_newlines=True,
                              bufsize=0)
sshProcess.stdin.write("ls .\n")
sshProcess.stdin.write("echo END\n")
sshProcess.stdin.write("uptime\n")
sshProcess.stdin.write("logout\n")
sshProcess.stdin.close()

for line in sshProcess.stdout:
    if line == "END\n":
        break
    print(line, end="")

# to catch the lines up to logout
for line in sshProcess.stdout:
    print(line, end="")
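The single-ssh-invocation alternative mentioned above avoids the END marker entirely: pass the commands as one argument and let the remote shell run them in sequence. A sketch, with user@host standing in for the remote client:

import subprocess

# ssh runs the quoted commands remotely, then exits on its own
result = subprocess.run(['ssh', 'user@host', 'ls .; uptime'],
                        stdout=subprocess.PIPE, universal_newlines=True)
print(result.stdout)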

python command substitution for linux commands

I am trying to use command substitution to build a Linux command from a Python script, but am not able to get the following simple example to work:
LS="/bin/ls -l"
FILENAME="inventory.txt"
cmd = "_LS _FILENAME "
ps= subprocess.Popen(cmd,shell=True,stdout=subprocess.PIPE,stderr=subprocess.STDOUT)
output = ps.communicate()[0]
print output
Thanks!
JB
Use string substitution:
cmd = '{} {}'.format(LS, FILENAME)
or (in Python2.6):
cmd = '{0} {1}'.format(LS, FILENAME)
A complete example, using shlex.split() so the command can be run without shell=True:
import subprocess
import shlex

LS = "/bin/ls -l"
FILENAME = "inventory.txt"

cmd = '{} {}'.format(LS, FILENAME)
ps = subprocess.Popen(shlex.split(cmd),
                      stdout=subprocess.PIPE,
                      stderr=subprocess.STDOUT)
output, err = ps.communicate()
print(output)
Or, using the third-party sh module:
import sh
FILENAME = 'inventory.txt'
print(sh.ls('-l', FILENAME, _err_to_out=True))
