Snort command run through a Python script

I am trying to read Snort alerts in the console for one of my projects.
I wrote the following code to run Snort and listen on my interface.
import subprocess
command = 'snort -l /var/log/snort -c /etc/snort/snort.conf -A console -i s1-eth1'
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE)
process.wait()
print process.returncode
I run the above script from another Python script, as below:
#!/usr/bin/env python
import os
import sys
from subprocess import Popen, PIPE, STDOUT

script_path = os.path.join(os.getcwd(), 'comm.py')
p = Popen([sys.executable, '-u', script_path],
          stdout=PIPE, stderr=STDOUT, bufsize=1)
with p.stdout:
    for line in iter(p.stdout.readline, b''):
        print line,
p.wait()
The output of the script shows Snort starting up and listening, but when I run an experiment that triggers a rule in the Snort rules file, the alert is not printed in the terminal where I ran the Python script.
When I run the snort command directly in a normal terminal, the alert messages print fine.
Does anyone know a workaround for this?
Thanks in advance.
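One workaround sketch (my assumption, not from the original thread): Snort, like tcpdump in the related answer below, may block-buffer its console output once stdout is a pipe. If it buffers through stdio, forcing line buffering with stdbuf and reading the pipe line by line should make alerts appear immediately. The flags and the interface name s1-eth1 are taken from the question; Python 3 is assumed.
import subprocess

# stdbuf -oL asks the child process to line-buffer its stdout
cmd = ['stdbuf', '-oL', 'snort', '-l', '/var/log/snort',
       '-c', '/etc/snort/snort.conf', '-A', 'console', '-i', 's1-eth1']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
for line in iter(p.stdout.readline, b''):
    print(line.decode().rstrip())  # print each alert line as soon as it arrives
p.wait()
This only helps if snort buffers via stdio and does not override its own buffering; if it does, running it under a pseudo-terminal is the usual fallback.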

Related

Unable to print tcpdump information using Python subprocess

I wanted to process tcpdump output in a Python script, and so far I have arrived at this implementation:
from subprocess import Popen, PIPE, CalledProcessError
import os
import signal
import time

if __name__ == "__main__":
    cmd = ["sudo", "tcpdump", "-c", "1000", "-i", "any", "port", "22", "-n"]
    with Popen(cmd, stdout=PIPE, bufsize=1, universal_newlines=True) as p:
        try:
            for line in p.stdout:
                print(line, flush=True)  # process line here
        except KeyboardInterrupt:
            print("Quitting")
This is what I understood from the second answer of this previously asked question.
While it does not wait for the subprocess to complete before printing the tcpdump output, I still get the output in chunks of 20-30 lines at a time. Is there a way to read even a single line as soon as it appears in the stdout of the subprocess?
PS: I am running this script on a Raspberry Pi 4 with Ubuntu Server 22.04.1.
tcpdump uses a larger buffer if you connect its standard output to a pipe. You can easily see this by running the following two commands. (I changed the count from 1000 to 40 and removed port 22 in order to quickly get output on my system.)
$ sudo tcpdump -c 40 -i any -n
$ sudo tcpdump -c 40 -i any -n | cat
The first command prints one line at a time. The second collects everything in a buffer and prints everything when tcpdump exits. The solution is to tell tcpdump to use line buffering with the -l argument.
$ sudo tcpdump -l -c 40 -i any -n | cat
Do the same in your Python program.
import subprocess

cmd = ["sudo", "tcpdump", "-l", "-c", "40", "-i", "any", "-n"]
with subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=0, text=True) as p:
    for line in p.stdout:
        print(line.strip())
When I run this, I get one line printed at a time.
Note that universal_newlines is a backward-compatible alias for text, so the latter should be preferred.

Open shell environment and run a series of commands using Python

This seems to be a duplicate of Using Python to open a shell environment, run a command and exit environment. I want to run the ulimit command in a shell environment on Red Hat. Procedure: open a shell environment, run the ulimit commands in the shell, get the results, and exit the shell environment. Referencing the above solution, I tried:
from subprocess import Popen, PIPE

def read_limit():
    p = Popen('sh', stdin=PIPE)
    file_size = p.communicate('ulimit -n')
    open_files = p.communicate('ulimit -f')
    file_locks = p.communicate('ulimit -x')
    return file_size, open_files, file_locks
But I got an error: ValueError: I/O operation on closed file.
The documentation for communicate() says:
Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
After communicate() returns, the pipes are closed, so it cannot be called a second time; that second call is what raises the ValueError.
You can use
p.stdin.write("something")
p.stdin.flush()
result = p.stdout.readline()
for your three commands and then
p.stdin.close()
p.wait()
at the end to terminate it.
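A minimal runnable sketch combining those fragments (my assumptions: Python 3, hence text=True so strings can be written to the pipe; note that Popen also needs stdout=PIPE for p.stdout.readline() to work, which the question's code did not set):
from subprocess import Popen, PIPE

p = Popen(['sh'], stdin=PIPE, stdout=PIPE, text=True)

def ask(cmd):
    # send one command, flush so the shell sees it, read one line of output
    p.stdin.write(cmd + '\n')
    p.stdin.flush()
    return p.stdout.readline().strip()

open_files = ask('ulimit -n')   # max open file descriptors
file_size = ask('ulimit -f')    # max file size
file_locks = ask('ulimit -x')   # max file locks

p.stdin.close()
p.wait()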

Why does a subprocess continue to live when stdout and stderr are file handles?

When I run a very simple script with subprocess, the subprocess is terminated once the following script ends.
import subprocess
import os
import tempfile

if __name__ == '__main__':
    subproc = subprocess.Popen(['sudo', 'tcpdump', '-c', '10000'])
    print(subproc.pid)
ps -p <pid> shows no such process
But when I modify the script to use file handles for stdout and stderr, it continues to run until it finishes its job.
import subprocess
import os
import tempfile

if __name__ == '__main__':
    print(os.getpid())
    with tempfile.NamedTemporaryFile() as stderr, tempfile.NamedTemporaryFile() as stdout:
        proc = subprocess.Popen(['sudo', 'tcpdump', '-c', '10000'], stderr=stderr, stdout=stdout)
        print(proc.pid)
ps -p <pid> shows the subprocess is still running for some time.
I am not sure why that is, and whether I can rely on my subprocess always finishing in the second sample.
Tested with Python 2.7 and 3.7 on macOS and Debian.
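One way to avoid depending on this behaviour at all (a sketch, assuming the goal is simply that tcpdump always finishes) is to wait for the subprocess explicitly before the NamedTemporaryFile context exits, so the temporary files are not deleted while it is still writing to them:
import subprocess
import tempfile

with tempfile.NamedTemporaryFile() as stderr, tempfile.NamedTemporaryFile() as stdout:
    proc = subprocess.Popen(['sudo', 'tcpdump', '-c', '10000'],
                            stderr=stderr, stdout=stdout)
    proc.wait()  # block until tcpdump has captured all 10000 packets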

Launch Terminal via Python and run commands

I am writing an automation script, and it would be nice to be able to launch Terminal on my Mac via my Python script in order to launch the Appium servers, instead of doing it manually.
The closest I've come is by using the following code, but this only launches Terminal and I am unable to send commands to it:
from subprocess import Popen, PIPE, STDOUT
Popen(['open', '-a', 'Terminal', '-n'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
I need to be able to launch two Terminal instances and run the following:
'appium'
'appium -a 0.0.0.0 -p 4724'
You can execute shell commands in Python like this:
import os
os.system('appium &')
This will start the Appium server in the background.
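A shell-free alternative sketch using subprocess (the appium commands are taken from the question; this assumes appium is on your PATH):
import subprocess

# start both Appium servers in the background; Popen returns immediately
server1 = subprocess.Popen(['appium'])
server2 = subprocess.Popen(['appium', '-a', '0.0.0.0', '-p', '4724'])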
You have to use communicate() to send a command to your terminal.
from subprocess import Popen, PIPE, STDOUT
p1 = Popen(['open', '-a', 'Terminal', '-n'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
p2 = Popen(['open', '-a', 'Terminal', '-n'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
p1.communicate('appium')
p2.communicate('appium -a 0.0.0.0 -p 4724')
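One caveat (my note, assuming Python 3, where pipes are byte streams by default): communicate() then expects bytes, so either pass b'appium' or open the pipes in text mode:
p1 = Popen(['open', '-a', 'Terminal', '-n'],
           stdout=PIPE, stdin=PIPE, stderr=STDOUT, text=True)
p1.communicate('appium')  # str is fine in text mode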

subprocess popen Python

I am executing a shell script which starts a process in the background with &. The shell script is called from a Python script, which hangs.
Shell script:
test -f filename -d &
Python file:
cmd =["shellscript","restart"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE, stdin=subprocess.PIPE, **kwargs)
pid = proc.pid
out, err = proc.communicate()
returncode = proc.poll()
The Python file hangs and never returns from the Python process. Also, the Python process is an automated one.
The call to proc.communicate() will block until the pipes used for stderr and stdout are closed. If your shell script spawns a child process which inherits those pipes, then communicate() will return only after that process has also closed its writing ends of the pipes or exited.
To solve this you can either
redirect the output of the started subprocess to /dev/null or a logfile in your shell script, e.g.:
subprocess_to_start >/dev/null 2>&1 &
use subprocess.DEVNULL or an open file object for stderr and stdout in your Python script, and drop the communicate() call if you don't need the output of "shellscript" in Python (see the sketch below)
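A minimal sketch of the second option (assuming Python 3.3+, where subprocess.DEVNULL exists, and that the script's output is not needed):
import subprocess

proc = subprocess.Popen(["shellscript", "restart"],
                        stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL)
returncode = proc.wait()  # waits for the script itself, not its background child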
A comma is missing in your cmd list:
cmd =["shellscript", "restart"]
