How to exit cleanly with tcpdump running in subprocess in sudo mode - python

I am running tcpdump using the subprocess module in python to capture a trace of a website, using this piece of code:
import subprocess
from tbselenium.tbdriver import TorBrowserDriver

site = "check.torproject.org"
try:
    process = subprocess.Popen(['sudo', 'tcpdump', '-l', '-i', 'eth0', '-w', 'trace.pcap'], stdout=subprocess.PIPE)
    with TorBrowserDriver("/path/to/tor-browser_en-US/") as driver:
        driver.load_url("https://" + site, wait_on_page=20)
    process.send_signal(subprocess.signal.SIGTERM)
except OSError:
    print "OSError"
The code gives me an OSError, and when I try to open the pcap file in Wireshark I get the following error box:
The capture file appears to have been cut short in the middle of a packet.
I've read this solution to the same issue, and have tried sending both a SIGINT and a SIGTERM, but I get the same truncated-packet message in each case along with an OSError. I have also tried using process.terminate() but that doesn't work either. Is there any way I could make tcpdump exit cleanly while running in sudo mode? Thanks!

As OSError: [Errno 1] Operation not permitted suggests, killing the process is not permitted. Because you started tcpdump with sudo, it has to be killed with sudo as well. Try this:
import subprocess
import os
from tbselenium.tbdriver import TorBrowserDriver

site = "check.torproject.org"
try:
    process = subprocess.Popen(['sudo', 'tcpdump', '-l', '-i', 'eth0', '-w', 'trace.pcap'], stdout=subprocess.PIPE)
    with TorBrowserDriver("/path/to/tor-browser_en-US/") as driver:
        driver.load_url("https://" + site, wait_on_page=20)
    cmd = "sudo kill " + str(process.pid)
    os.system(cmd)
except OSError, e:
    print e
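One caveat worth adding: kill only delivers the signal. If the script goes on to open trace.pcap right away, tcpdump may not have flushed and closed the file yet, and Wireshark will still complain about a truncated capture. A minimal sketch of the same approach with a wait added (process is the Popen object from the snippet above):
import os

os.system("sudo kill " + str(process.pid))  # ask sudo to send SIGTERM to tcpdump
process.wait()  # let tcpdump flush its buffers and close trace.pcap before reading it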

Since tcpdump needs root privileges, you can simply run the whole script as root and check for it before you spawn tcpdump:
import subprocess, os

# Check we are running as root:
if os.geteuid() != 0:
    print('This script requires root privileges to capture packets. Try running this script as root.')
    raise SystemExit

# Start tcpdump
_process = subprocess.Popen(['tcpdump', '-nnvvv', '-s0', '-w', os.path.join('/tmp', 'output.pcap')])
This way you can run
_process.terminate()
or
_process.kill()
to send the proper signal to tcpdump.
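Putting that together, here is a minimal sketch, assuming the whole script is started with sudo (the interface name and output path are placeholders). Sending SIGINT, the same signal Ctrl-C would deliver, lets tcpdump write its summary and close the pcap cleanly:
import os
import signal
import subprocess

if os.geteuid() != 0:
    raise SystemExit('This script requires root privileges to capture packets.')

# no sudo needed here because the script itself already runs as root
proc = subprocess.Popen(['tcpdump', '-i', 'eth0', '-w', '/tmp/trace.pcap'])

try:
    pass  # drive the browser / generate the traffic you want to capture here
finally:
    proc.send_signal(signal.SIGINT)  # same as pressing Ctrl-C in a terminal
    proc.wait()                      # tcpdump flushes and closes the pcap before exiting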

Related

Communicating with root child process launched with osascript "with administrator privileges"

How can I pass messages over stdin/stdout between a parent process and a child process that it launches as root using AppleScript?
I'm writing an anti-forensics GUI application that needs to be able to do things that require root permissions on MacOS. For example, shutting down the computer.
For security reasons, I do not want the user to have to launch the entire GUI application as root. Rather, I want to just spawn a child process with root permission and a very minimal set of functions.
Also for security reasons, I do not want the user to send my application its user password. That authentication should be handled by the OS, so only the OS has visibility into the user's credentials. I read that the best way to do this with Python on MacOS is to leverage osascript.
Unfortunately, for some reason, communication between the parent and child process breaks when I launch the child process using osascript. Why?
Example with sudo
First, let's look at how it should work.
Here I'm just using sudo to launch the child process. Note I can't use sudo for my use-case because I'm using a GUI app. I'm merely showing it here to demonstrate how communication between the processes should work.
Parent (spawn_root.py)
The parent python script launches the child script root_child.py as root using sudo.
Then it sends it a command soft-shutdown\n and waits for the response
#!/usr/bin/env python3
import subprocess, sys

proc = subprocess.Popen(
    [ 'sudo', sys.executable, 'root_child.py' ],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)

print( "sending soft-shutdown command now" )
proc.stdin.write( "soft-shutdown\n" )
proc.stdin.flush()
print( proc.stdout.readline() )

proc.stdin.close()
Child (root_child.py)
The child process enters an infinite loop listening for commands (in our actual application, the child process will wait in the background for the command from the parent; it won't usually just get the soft-shutdown command immediately).
Once it does get a command, it does some sanity checks. If it matches soft-shutdown, then it executes shutdown -h now with subprocess.Popen().
#!/usr/bin/env python3
import os, sys, re, subprocess

if __name__ == "__main__":

    # loop and listen for commands from the parent process
    while True:
        command = sys.stdin.readline().strip()

        # check sanity of received command. Be very suspicious
        if not re.match( "^[A-Za-z_-]+$", command ):
            sys.stdout.write( "ERROR: Bad Command Ignored\n" )
            sys.stdout.flush()
            continue

        if command == "soft-shutdown":
            try:
                proc = subprocess.Popen(
                    [ 'sudo', 'shutdown', '-h', 'now' ],
                    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
                )
                sys.stdout.write( "SUCCESS: I am root!\n" )
                sys.stdout.flush()
            except Exception as e:
                sys.stdout.write( "ERROR: I am not root :'(\n" )
                sys.stdout.flush()
            sys.exit(0)
            continue

        else:
            sys.stdout.write( "WARNING: Unknown Command Ignored\n" )
            sys.stdout.flush()
            continue
Example execution
This works great. You can see in this example execution that the shutdown command runs without any exceptions thrown, and then the machine turns off.
user@host ~ % ./spawn_root.py
sending soft-shutdown command now
SUCCESS: I am root!
...
user@host ~ % Connection to REDACTED closed by remote host.
Connection to REDACTED closed.
user@buskill:~$
Example with osascript
Unfortunately, this does not work when you use osascript to get the user to authenticate in the GUI.
For example, if I change one line in the subprocess call in spawn_root.py from using sudo to using osascript as follows
Parent (spawn_root.py)
#!/usr/bin/env python3
import subprocess, sys

proc = subprocess.Popen(
    ['/usr/bin/osascript', '-e', 'do shell script "' +sys.executable+ ' root_child.py" with administrator privileges' ],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)

print( "sending soft-shutdown command now" )
proc.stdin.write( "soft-shutdown\n" )
proc.stdin.flush()
print( proc.stdout.readline() )

proc.stdin.close()
Child (root_child.py)
(no changes in this script, just use 'root_child.py' from above)
Example Execution
This time, after I type my user password into the prompt provided by MacOS, the parent gets stuck indefinitely when trying to communicate with the child.
user@host spawn_root_sudo_communication_test % diff simple/spawn_root.py simple_gui/spawn_root.py
sending soft-shutdown command now
Why is it that I cannot communicate with a child process that was launched with osascript?
I ended up solving this by abandoning osascript and instead calling the AuthorizationExecuteWithPrivileges() function via ctypes, which is actually what osascript does indirectly.
Parent (spawn_root.py)
#!/usr/bin/env python3
################################################################################
# File: spawn_root.py
# Version: 0.1
# Purpose: Launch a child process with root permissions on MacOS via
# AuthorizationExecuteWithPrivileges(). For more info, see:
# * https://stackoverflow.com/a/74001980
# * https://stackoverflow.com/q/73999365
# * https://github.com/BusKill/buskill-app/issues/14
# Authors: Michael Altfield <michael@michaelaltfield.net>
# Created: 2022-10-15
# Updated: 2022-10-15
################################################################################
################################################################################
# IMPORTS #
################################################################################
import sys, ctypes, struct
import ctypes.util
from ctypes import byref
# import some C libraries for interacting via ctypes with the MacOS API
libc = ctypes.cdll.LoadLibrary(ctypes.util.find_library("c"))
# https://developer.apple.com/documentation/security
sec = ctypes.cdll.LoadLibrary(ctypes.util.find_library("Security"))
################################################################################
# SETTINGS #
################################################################################
kAuthorizationFlagDefaults = 0
################################################################################
# FUNCTIONS #
################################################################################
# this basically just re-implements python's readline().strip() but in C
def read_from_child(io):

    # get the output from the child process character-by-character until we hit a new line
    buf = ctypes.create_string_buffer(1)
    result = ''
    for x in range(1,100):

        # read one byte from the child process' communication PIPE and store it to the buffer
        libc.fread(byref(buf),1,1,io)

        # decode the byte stored to the buffer as ascii
        char = buf.raw[:1].decode('ascii')

        # is the character a newline?
        if char == "\n":
            # the character is a newline; stop reading
            break
        else:
            # the character is not a newline; append it to the string and continue reading
            result += char

    return result
################################################################################
# MAIN BODY #
################################################################################
################################
# EXECUTE CHILD SCRIPT AS ROOT #
################################
auth = ctypes.c_void_p()
r_auth = byref(auth)
sec.AuthorizationCreate(None,None,kAuthorizationFlagDefaults,r_auth)
exe = [sys.executable,"root_child.py"]
args = (ctypes.c_char_p * len(exe))()
for i,arg in enumerate(exe[1:]):
    args[i] = arg.encode('utf8')
io = ctypes.c_void_p()
print( "running root_child.py")
err = sec.AuthorizationExecuteWithPrivileges(auth,exe[0].encode('utf8'),0,args,byref(io))
print( "err:|" +str(err)+ "|" )
print( "root_child.py executed!")
##################################
# SEND CHILD "MALICIOUS" COMMAND #
##################################
print( "sending malicious command now" )
# we have to explicitly set the encoding to ascii, else python will inject a bunch of null characters (\x00) between each character, and the command will be truncated on the receiving end
# * https://github.com/BusKill/buskill-app/issues/14#issuecomment-1279643513
command = "Robert'); DROP TABLE Students;\n".encode(encoding="ascii")
libc.fwrite(command,1,len(command),io)
libc.fflush(io)
print( "result:|" +str(read_from_child(io))+ "|" )
################################
# SEND CHILD "INVALID" COMMAND #
################################
print( "sending invalid command now" )
command = "make-me-a-sandwich\n".encode(encoding="ascii")
libc.fwrite(command,1,len(command),io)
libc.fflush(io)
print( "result:|" +str(read_from_child(io))+ "|" )
######################################
# SEND CHILD "soft-shutdown" COMMAND #
######################################
print( "sending soft-shutdown command now" )
command = "soft-shutdown\n".encode(encoding="ascii")
libc.fwrite(command,1,len(command),io)
libc.fflush(io)
print( "result:|" +str(read_from_child(io))+ "|" )
# clean exit; io is a C FILE* stream, so close it with fclose
libc.fclose(io)
sys.exit(0)
Child (root_child.py)
#!/usr/bin/env python3
import os, time, re, sys, subprocess

def soft_shutdown():
    try:
        proc = subprocess.Popen(
            [ 'sudo', 'shutdown', '-h', 'now' ],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
        )
    except Exception as e:
        print( "I am not root :'(" )

if __name__ == "__main__":

    # loop and listen for commands from the parent process
    while True:
        command = sys.stdin.buffer.readline().strip().decode('ascii')

        # check sanity of received command. Be very suspicious
        if not re.match( "^[A-Za-z_-]+$", command ):
            msg = "ERROR: Bad Command Ignored\n"
            sys.stdout.buffer.write( msg.encode(encoding='ascii') )
            sys.stdout.flush()
            continue

        if command == "soft-shutdown":
            try:
                soft_shutdown()
                msg = "SUCCESS: I am root!\n"
            except Exception as e:
                msg = "ERROR: I am not root :'(\n"
        else:
            msg = "WARNING: Unknown Command Ignored\n"

        sys.stdout.buffer.write( msg.encode(encoding='ascii') )
        sys.stdout.flush()
        continue
Example Execution
maltfield@host communicate % ./spawn_root.py
running root_child.py
err:|0|
root_child.py executed!
sending malicious command now
result:|ERROR: Bad Command Ignored|
sending invalid command now
result:|WARNING: Unknown Command Ignored|
sending soft-shutdown command now
result:|SUCCESS: I am root!|
Traceback (most recent call last):
File "root_child.py", line 26, in <module>
sys.stdout.flush()
BrokenPipeError: [Errno 32] Broken pipe
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>
BrokenPipeError: [Errno 32] Broken pipe
maltfield@host communicate %
*** FINAL System shutdown message from maltfield@host.local ***
System going down IMMEDIATELY
Connection to REDACTED closed by remote host.
Connection to REDACTED closed.
maltfield@buskill:~$
Additional Information
Security
Note that AuthorizationExecuteWithPrivileges() has been deprecated by Apple in favor of an alternative that requires you to pay them money. Unfortunately, there's some misinformation out there that AuthorizationExecuteWithPrivileges() is a huge security hole. While it's true that using AuthorizationExecuteWithPrivileges() incorrectly can cause security issues, it is not inherently insecure to use it.
Obviously, any time you run something as root, you need to be very careful!
AuthorizationExecuteWithPrivileges() is deprecated, but it can be used safely. But it can also be used unsafely!
It basically boils down to: do you actually know what you're running as root? If the script you're running as root is located in a Temp dir that has world-writeable permissions (as a lot of MacOS App installers have done historically), then any malicious process could gain root access.
To execute a process as root safely:
Make sure that the permissions on the process-to-be-launched are root:root 0400 (or writeable only by root)
Specify the absolute path to the process-to-be-launched, and don't allow any malicious modification of that path
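As a hedged illustration of those two checks, something along these lines could run before handing the script to AuthorizationExecuteWithPrivileges() (the absolute path here is a made-up example):
import os
import stat

child_path = "/usr/local/lib/myapp/root_child.py"  # hypothetical absolute path, owned by root

st = os.stat(child_path)
# refuse to run the child as root unless it is owned by root and
# not writable by group or others
if st.st_uid != 0 or (st.st_mode & (stat.S_IWGRP | stat.S_IWOTH)):
    raise SystemExit("Refusing to execute %s as root: unsafe ownership or permissions" % child_path)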
Further Reading
AuthorizationExecuteWithPrivileges() Reference Documentation
Get root dialog in Python on Mac OS X, Windows?
https://github.com/BusKill/buskill-app/issues/14
https://www.jamf.com/blog/detecting-insecure-application-updates-on-macos/

Subprocess Timeout in Python

I am trying to check the header of a website and the code works perfectly fine. For when the website does not respond within a reasonable amount of time, I added a timeout, and that works too.
Unfortunately the command is not taking parameters and I am stuck there. Any suggestions would be highly appreciated.
import subprocess
from threading import Timer

kill = lambda process: process.kill()
c1='curl -H'
cmd = [c1, 'google.com']
p = subprocess.Popen(
    cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
my_timer = Timer(10, kill, [p])

try:
    my_timer.start()
    stdout, stderr = p.communicate()
    print stdout
finally:
    print stderr
    my_timer.cancel()
Error while running :
OSError: [Errno 2] No such file or directory
However if I change c1 as shown below, it works fine.
c1='curl'
With
c1='curl'
use
cmd = [c1, '-H','google.com']
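The underlying rule is that each argument must be its own list element; 'curl -H' as one string makes Popen look for a program literally named curl -H, hence the [Errno 2]. If you would rather keep the command in a single string, shlex.split() can do the splitting for you. A small sketch (using -I, which fetches only the headers, since that seems to be the intent; the splitting works the same for any flags):
import shlex
import subprocess

cmd = shlex.split('curl -I google.com')   # -> ['curl', '-I', 'google.com']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
print(stdout)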

How to keep ssh session open after logging in using subprocess.popen?

I am new to Python.
I am trying to SSH to a server to perform some operations. However, before performing the operations, I need to load a profile, which takes 60-90 seconds. After loading the profile, is there a way to keep the SSH session open so that I can perform the operations later?
p = subprocess.Popen("ssh abc@xyz './profile'", stdout=subprocess.PIPE, shell=True)
result = p.communicate()[0]
print result
return result
This loads the profile and exits. Is there a way to keep the above ssh session open and run some commands?
Example:
p = subprocess.Popen("ssh abc#xyz'./profile'", stdout=subprocess.PIPE, shell=True)
<More Python Code>
<More Python Code>
<More Python Code>
<Run some scripts/commands on xyz server non-interactively>
After loading the profile, I want to run some scripts/commands on the remote server, which I am able to do by simply doing below:
p = subprocess.Popen("ssh abc#xyz './profile;**<./a.py;etc>**'", stdout=subprocess.PIPE, shell=True)
However, once done, it exits, and the next time I want to execute some script on the above server, I need to load the profile again (which takes 60-90 seconds). I am trying to figure out a way to create some sort of tunnel (or any other way) where the ssh connection remains open after loading the profile, so that users don't have to wait 60-90 seconds whenever anything is to be executed.
I don't have access to strip down the profile.
Try an ssh library like asyncssh or spur. Keeping the connection object should keep the session open.
You could send a dummy command like date to prevent the timeout as well.
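A rough sketch of the asyncssh route (asyncssh is one of the libraries named above; the host, user, and command names are placeholders): the connection and the remote shell stay open between writes, so the profile only has to be loaded once.
import asyncio
import asyncssh

async def main():
    async with asyncssh.connect('xyz', username='abc') as conn:
        # one interactive shell; its environment survives between commands
        proc = await conn.create_process()
        proc.stdin.write('./profile\n')   # pay the 60-90 second cost once
        proc.stdin.write('./a.py\n')      # later commands reuse the same session
        proc.stdin.write('exit\n')
        print(await proc.stdout.read())   # read everything until the shell exits

asyncio.run(main())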
You have to construct an ssh command like this: ['ssh', '-T', 'host_user_name@host_address'], then follow the code below.
Code:
from subprocess import Popen, PIPE

ssh_conn = ['ssh', '-T', 'host_user_name@host_address']
# if you have to add a port then ssh_conn should be as follows
# ssh_conn = ['ssh', '-T', 'host_user_name@host_address', '-p', 'port']

commands = """
cd Documents/
ls -l
cat test.txt
"""

with Popen(ssh_conn, stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) as p:
    output, error = p.communicate(commands)
    print(output)
    print(error)
    print(p.returncode)
    # or, instead of communicate(), you can write commands one at a time:
    # p.stdin.write('command_1\n')
    # ... add as many commands as you want ...
    # p.stdin.write('command_n\n')
Please let me know if you need further explanations.
N.B.: You can add as many commands to the commands string as you want.
What you want to do is write/read to the process's stdin/stdout.
from subprocess import Popen, PIPE
import shlex
shell_command = "ssh user#address"
proc = Popen(shlex.split(shell_command), stdin=PIPE, universal_newlines=True)
# Do python stuff here
proc.stdin.write("cd Desktop\n")
proc.stdin.write("mkdir Example\n")
# And so on
proc.stdin.write("exit\n")
You must include the trailing newline for each command. If you prefer, print() (as of Python 3.x, where it is a function) takes a keyword argument file, which allows you to forget about that newline (and also gain all the benefits of print()).
print("rm Example", file=proc.stdin)
Additionally, if you need to see the output of your command, you can pass stdout=PIPE and then read via proc.stdout.read() (same for stderr).
You may also want to put the exit command in a try/finally block, to ensure you exit the ssh session gracefully.
Note that a) read is blocking, so if there's no output, it'll block forever and b) it will only return what was available to read from the stdout at that time, so you may need to read repeatedly, sleep for a short time, or poll for additional data. See the fcntl and select stdlib modules for changing blocking -> nonblocking reads and polling for events, respectively.
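For the polling side, a small sketch with select (this assumes the proc from the example above was created with stdout=PIPE, and a Unix-like system, where select works on pipe file objects):
import select

# wait up to one second for the child to produce output, without blocking forever
readable, _, _ = select.select([proc.stdout], [], [], 1.0)
if readable:
    print(proc.stdout.readline(), end="")
else:
    print("no output yet")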
Hello Koshur!
I think that what you are trying to achieve looks like what I've tried in the past when trying to make my terminal accessible from a private website:
I would open a bash instance, keep it open and would listen for commands through a WebSocket connection.
What I did to achieve this was using the O_NONBLOCK flag on STDOUT.
Example
import fcntl
import os
import shlex
import subprocess
current_process = subprocess.Popen(shlex.split("/bin/sh"), stdin=subprocess.PIPE,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT)  # Open a shell prompt
fcntl.fcntl(current_process.stdout.fileno(), fcntl.F_SETFL,
            os.O_NONBLOCK)  # Non blocking stdout and stderr reading
What I would have after this is a loop checking for new output in another thread:
from time import sleep
from threading import Thread

def check_output(process):
    """
    Checks the output of stdout and stderr to send it to the WebSocket client
    """
    while process.poll() is None:  # while the process isn't exited
        try:
            output = process.stdout.read()  # Read the stdout PIPE (which contains stdout and stderr)
        except Exception:
            output = None
        if output:
            print(output)
        sleep(.1)

    # from here, we are outside the loop: the process exited
    print("Process exited with return code: {code}".format(code=process.returncode))

Thread(target=check_output, args=(current_process,), daemon=True).start()  # Start checking for new text in stdout and stderr
So you would need to implement your logic to SSH when starting the process:
current_process = subprocess.Popen(shlex.split("ssh abc@xyz './profile'"), stdin=subprocess.PIPE,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
And send commands like so:
def send_command(process, cmd):
    process.stdin.write(str(cmd + "\n").encode("utf-8"))  # Write the input to STDIN
    process.stdin.flush()  # Run the command

send_command(current_process, "echo Hello")
EDIT
I tried to check the minimum Python requirements for the given examples and found out that Thread(daemon=True) might not work on Python 2.7, which the question's tags ask for.
If you are sure to stop the Thread before exiting, you can drop daemon and use Thread(), which works on 2.7. (You could, for example, use atexit and terminate the process.)
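A minimal sketch of that atexit idea, reusing the current_process object from the example above:
import atexit

def cleanup():
    # terminate the ssh/shell child (if still running) when the script exits
    if current_process.poll() is None:
        current_process.terminate()
        current_process.wait()

atexit.register(cleanup)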
References
fcntl(2) man page
https://man7.org/linux/man-pages/man2/fcntl.2.html
fcntl Python 3 Documentation
https://docs.python.org/3/library/fcntl.html
fcntl Python 2.7 Documentation
https://docs.python.org/2.7/library/fcntl.html
O_NONBLOCK Python 3 Documentation
https://docs.python.org/3/library/os.html#os.O_NONBLOCK
O_NONBLOCK Python 2.7 Documentation
https://docs.python.org/2.7/library/os.html#os.O_NONBLOCK

Run program using cmd on remote Windows machine

I want to create a Python script that opens a cmd on a remote Windows machine using psexec, runs my_program.exe from this cmd, and, when some event occurs, sends Ctrl+C to my_program.exe, which handles this signal somehow.
Here's my code:
from os import chdir, path
from subprocess import Popen, PIPE

psexec_dir = r'C:\Users\amos1\Downloads\PSTools'
chdir(psexec_dir)
path.join(psexec_dir, 'psexec.exe')

command = ['psexec.exe', '\\amos', 'cmd']
p = Popen(command, stdin = PIPE, stdout = PIPE)
p.stdin.write(b'my_program.exe\r\n')

while True:
    if some_condition:
        ctrl_c = b'\x03'
        p.stdin.write(ctrl_c)
        break

for line in p.stdout.readlines():
    print(line)
p.kill()
The problems:
my_program.exe does not run
p.kill() raises WindowsError: [Error 5] Access is denied (even though I followed the answers from here and did both chdir and path.join in my code)
Notice that both my computer and the target computer are Windows machines

Not able to give inputs to subprocess(process which runs adb shell command) after 100 iterations

I want to run a stress test for the adb (Android Debug Bridge) shell. (adb shell, in this respect, is just a command-line tool provided by Android phones.)
I create a subprocess from Python and in this subprocess I execute the 'adb shell' command. There are some commands that have to be given to this subprocess, which I am providing via the stdin of the subprocess.
Everything seems to be fine, but when I am running a stress test, after around 100 iterations the command which I give to stdin does not reach the subprocess. If I run the commands in a separate terminal they work fine, so the problem is with this stdin.
Can anyone tell me what I am doing wrong? Below is the code sample:
class ADB():

    def __init__(self):
        self.proc = subprocess.Popen('adb shell', stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True, bufsize=0)

    def provideAMcommand(self, testParam):
        try:
            cmd1 = "am startservice -n com.test.myapp/.ADBSupport -e \"" + "command" + "\" \"" + "test" + "\""
            cmd2 = " -e \"" + "param" + "\"" + " " + testParam
            print cmd1+cmd2
            sys.stdout.flush()
            self.proc.stdin.write(cmd1 + cmd2 + "\n")
        except:
            raise Exception("Phone is not Connected to Desktop or ADB is not available \n")
If it works for the first few commands but blocks later, then you might have forgotten to read from self.proc.stdout, which (as the docs warn) may lead to the OS pipe buffer filling up and blocking the child process.
To discard the output, redirect it to os.devnull:
import os
from subprocess import Popen, PIPE, STDOUT
DEVNULL = open(os.devnull, 'wb')
# ...
self.proc = Popen(['adb', 'shell'], stdin=PIPE, stdout=DEVNULL, stderr=STDOUT)
# ...
self.proc.stdin.write(cmd1 + cmd2 + "\n")
self.proc.stdin.flush()
There is the pexpect module, which might be a better tool for dialog-based interaction (if you want to both read and write intermittently).
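A rough pexpect sketch for this kind of back-and-forth (the prompt pattern '\$ ' and the service arguments are assumptions and may need adjusting for your device):
import pexpect

child = pexpect.spawn('adb shell', encoding='utf-8', timeout=10)
child.expect(r'\$ ')   # wait for the shell prompt
child.sendline('am startservice -n com.test.myapp/.ADBSupport -e command test -e param foo')
child.expect(r'\$ ')   # wait until the command has finished
print(child.before)    # output produced before the next prompt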
In provideAMcommand you are writing to and flushing the stdout of your main process. That will not send anything to the stdin of the child process you have created with Popen. The following code creates a new bash child process, a bit like the code in your __init__:
import subprocess as sp
cproc = sp.Popen("bash", stdin=sp.PIPE, stdout=sp.PIPE, stderr=sp.PIPE, shell=True)
Now, the easiest way to communicate with that child process is the following:
#Send command 'ls' to bash.
out, err = cproc.communicate("ls")
This will send the text "ls" and EOF to bash (equal to running a bash script with only the text "ls" in it). Bash will execute the ls command and then quit. Anything that bash or ls write to stdout and stderr will end up in the variables out and err respectively. I have not used the adb shell, but I guess it behaves like bash in this regard.
If you just want your child process to print to the terminal, don't specify the stdout and stderr arguments to Popen.
You can check the exit code of the child, and raise an exception if it is non-zero (indicating an error):
if (cproc.returncode != 0):
    raise Exception("Child process returned non-zero exit code")
