How can I pass messages over stdin/stdout between a parent process and a child process that it launches as root via AppleScript?
I'm writing an anti-forensics GUI application that needs to perform actions requiring root permissions on macOS, such as shutting down the computer.
For security reasons, I do not want the user to have to launch the entire GUI application as root. Rather, I want to spawn a child process with root permissions and a very minimal set of functions.
Also for security reasons, I do not want the user to type their password into my application. That authentication should be handled by the OS, so that only the OS has visibility into the user's credentials. I read that the best way to do this with Python on macOS is to leverage osascript.
Unfortunately, for some reason, communication between the parent and child processes breaks when I launch the child process using osascript. Why?
Example with sudo
First, let's look at how it should work.
Here I'm just using sudo to launch the child process. Note that I can't use sudo for my actual use case because this is a GUI app; I'm showing it merely to demonstrate how communication between the processes should work.
Parent (spawn_root.py)
The parent Python script launches the child script root_child.py as root using sudo.
It then sends the child the command soft-shutdown\n and waits for the response.
#!/usr/bin/env python3
import subprocess, sys

proc = subprocess.Popen(
    ['sudo', sys.executable, 'root_child.py'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)

print("sending soft-shutdown command now")
proc.stdin.write("soft-shutdown\n")
proc.stdin.flush()
print(proc.stdout.readline())
proc.stdin.close()
Child (root_child.py)
The child process enters an infinite loop listening for commands (in our actual application, the child process will wait in the background for commands from the parent; it won't usually get the soft-shutdown command immediately).
Once it does get a command, it performs some sanity checks. If the command matches soft-shutdown, it executes shutdown -h now with subprocess.Popen().
#!/usr/bin/env python3
import os, sys, re, subprocess

if __name__ == "__main__":

    # loop and listen for commands from the parent process
    while True:

        command = sys.stdin.readline().strip()

        # check sanity of received command; be very suspicious
        if not re.match("^[A-Za-z_-]+$", command):
            sys.stdout.write("ERROR: Bad Command Ignored\n")
            sys.stdout.flush()
            continue

        if command == "soft-shutdown":
            try:
                proc = subprocess.Popen(
                    ['sudo', 'shutdown', '-h', 'now'],
                    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
                )
                sys.stdout.write("SUCCESS: I am root!\n")
                sys.stdout.flush()
            except Exception as e:
                sys.stdout.write("ERROR: I am not root :'(\n")
                sys.stdout.flush()
            sys.exit(0)
        else:
            sys.stdout.write("WARNING: Unknown Command Ignored\n")
            sys.stdout.flush()
Example execution
This works great. You can see in this example execution that the shutdown command runs without any exceptions thrown, and then the machine turns off.
user@host ~ % ./spawn_root.py
sending soft-shutdown command now
SUCCESS: I am root!
...
user@host ~ % Connection to REDACTED closed by remote host.
Connection to REDACTED closed.
user@buskill:~$
Example with osascript
Unfortunately, this does not work when you use osascript to get the user to authenticate in the GUI.
For example, here I change the subprocess call in spawn_root.py from using sudo to using osascript:
Parent (spawn_root.py)
#!/usr/bin/env python3
import subprocess, sys

proc = subprocess.Popen(
    ['/usr/bin/osascript', '-e', 'do shell script "' + sys.executable + ' root_child.py" with administrator privileges'],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
)

print("sending soft-shutdown command now")
proc.stdin.write("soft-shutdown\n")
proc.stdin.flush()
print(proc.stdout.readline())
proc.stdin.close()
Child (root_child.py)
(no changes in this script, just use 'root_child.py' from above)
Example Execution
This time, after I type my user password into the prompt provided by macOS, the parent gets stuck indefinitely when trying to communicate with the child.
user@host spawn_root_sudo_communication_test % ./spawn_root.py
sending soft-shutdown command now
Why is it that I cannot communicate with a child process that was launched with osascript?
I ended up solving this by abandoning osascript and instead calling the AuthorizationExecuteWithPrivileges() function with ctypes, which is what osascript itself uses under the hood. The hang appears to be inherent to osascript: do shell script does not connect the child's stdin to osascript's stdin, and it returns the child's output as the script's result only after the child exits, so the parent blocks on readline() while the child blocks waiting for input.
Parent (spawn_root.py)
#!/usr/bin/env python3
################################################################################
# File:    spawn_root.py
# Version: 0.1
# Purpose: Launch a child process with root permissions on macOS via
#          AuthorizationExecuteWithPrivileges(). For more info, see:
#           * https://stackoverflow.com/a/74001980
#           * https://stackoverflow.com/q/73999365
#           * https://github.com/BusKill/buskill-app/issues/14
# Authors: Michael Altfield <michael@michaelaltfield.net>
# Created: 2022-10-15
# Updated: 2022-10-15
################################################################################

################################################################################
#                                   IMPORTS                                    #
################################################################################

import sys, ctypes, struct
import ctypes.util
from ctypes import byref

# import some C libraries for interacting via ctypes with the macOS API
libc = ctypes.cdll.LoadLibrary(ctypes.util.find_library("c"))

# https://developer.apple.com/documentation/security
sec = ctypes.cdll.LoadLibrary(ctypes.util.find_library("Security"))

################################################################################
#                                   SETTINGS                                   #
################################################################################

kAuthorizationFlagDefaults = 0

################################################################################
#                                  FUNCTIONS                                   #
################################################################################

# this basically just re-implements python's readline().strip() but in C
def read_from_child(io):

    # get the output from the child process character-by-character until we hit a new line
    buf = ctypes.create_string_buffer(1)
    result = ''
    for x in range(1, 100):

        # read one byte from the child process' communication PIPE and store it to the buffer
        libc.fread(byref(buf), 1, 1, io)

        # decode the byte stored to the buffer as ascii
        char = buf.raw[:1].decode('ascii')

        # is the character a newline?
        if char == "\n":
            # the character is a newline; stop reading
            break
        else:
            # the character is not a newline; append it to the string and continue reading
            result += char

    return result

################################################################################
#                                  MAIN BODY                                   #
################################################################################

################################
# EXECUTE CHILD SCRIPT AS ROOT #
################################

auth = ctypes.c_void_p()
r_auth = byref(auth)
sec.AuthorizationCreate(None, None, kAuthorizationFlagDefaults, r_auth)

exe = [sys.executable, "root_child.py"]

# build a NULL-terminated array of C strings holding the child's arguments
args = (ctypes.c_char_p * len(exe))()
for i, arg in enumerate(exe[1:]):
    args[i] = arg.encode('utf8')

io = ctypes.c_void_p()

print("running root_child.py")
err = sec.AuthorizationExecuteWithPrivileges(auth, exe[0].encode('utf8'), 0, args, byref(io))
print("err:|" + str(err) + "|")
print("root_child.py executed!")

##################################
# SEND CHILD "MALICIOUS" COMMAND #
##################################

print("sending malicious command now")

# we have to explicitly set the encoding to ascii, else python will inject a
# bunch of null characters (\x00) between each character, and the command will
# be truncated on the receiving end
#  * https://github.com/BusKill/buskill-app/issues/14#issuecomment-1279643513
command = "Robert'); DROP TABLE Students;\n".encode(encoding="ascii")

libc.fwrite(command, 1, len(command), io)
libc.fflush(io)
print("result:|" + str(read_from_child(io)) + "|")

################################
# SEND CHILD "INVALID" COMMAND #
################################

print("sending invalid command now")

command = "make-me-a-sandwich\n".encode(encoding="ascii")
libc.fwrite(command, 1, len(command), io)
libc.fflush(io)
print("result:|" + str(read_from_child(io)) + "|")

######################################
# SEND CHILD "soft-shutdown" COMMAND #
######################################

print("sending soft-shutdown command now")

command = "soft-shutdown\n".encode(encoding="ascii")
libc.fwrite(command, 1, len(command), io)
libc.fflush(io)
print("result:|" + str(read_from_child(io)) + "|")

# clean exit (the communications pipe is a C FILE*, so close it with fclose(), not close())
libc.fclose(io)
sys.exit(0)
Child (root_child.py)
#!/usr/bin/env python3
import os, time, re, sys, subprocess

def soft_shutdown():
    try:
        proc = subprocess.Popen(
            ['sudo', 'shutdown', '-h', 'now'],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True
        )
    except Exception as e:
        print("I am not root :'(")

if __name__ == "__main__":

    # loop and listen for commands from the parent process
    while True:

        command = sys.stdin.buffer.readline().strip().decode('ascii')

        # check sanity of received command; be very suspicious
        if not re.match("^[A-Za-z_-]+$", command):
            msg = "ERROR: Bad Command Ignored\n"
            sys.stdout.buffer.write(msg.encode(encoding='ascii'))
            sys.stdout.flush()
            continue

        if command == "soft-shutdown":
            try:
                soft_shutdown()
                msg = "SUCCESS: I am root!\n"
            except Exception as e:
                msg = "ERROR: I am not root :'(\n"
        else:
            msg = "WARNING: Unknown Command Ignored\n"

        sys.stdout.buffer.write(msg.encode(encoding='ascii'))
        sys.stdout.flush()
Example Execution
maltfield@host communicate % ./spawn_root.py
running root_child.py
err:|0|
root_child.py executed!
sending malicious command now
result:|ERROR: Bad Command Ignored|
sending invalid command now
result:|WARNING: Unknown Command Ignored|
sending soft-shutdown command now
result:|SUCCESS: I am root!|
Traceback (most recent call last):
  File "root_child.py", line 26, in <module>
    sys.stdout.flush()
BrokenPipeError: [Errno 32] Broken pipe
Exception ignored in: <_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>
BrokenPipeError: [Errno 32] Broken pipe
maltfield@host communicate %

*** FINAL System shutdown message from maltfield@host.local ***

System going down IMMEDIATELY

Connection to REDACTED closed by remote host.
Connection to REDACTED closed.
maltfield@buskill:~$
Additional Information
Security
Note that AuthorizationExecuteWithPrivileges() has been deprecated by Apple in favor of an alternative that requires you to pay them money. Unfortunately, there's some misinformation out there claiming that AuthorizationExecuteWithPrivileges() is a huge security hole. While it's true that using AuthorizationExecuteWithPrivileges() incorrectly can cause security issues, it is not inherently insecure to use it.
Obviously, any time you run something as root, you need to be very careful!
AuthorizationExecuteWithPrivileges() is deprecated, but it can be used safely. It can also be used unsafely!
It basically boils down to: do you actually know what you're running as root? If the script you're running as root is located in a temp dir with world-writable permissions (as a lot of macOS app installers have done historically), then any malicious process could gain root access.
To execute a process as root safely (a minimal permission check is sketched below):
1. Make sure that the permissions on the process-to-be-launched are root:root 0400 (or writable only by root)
2. Specify the absolute path to the process-to-be-launched, and don't allow any malicious modification of that path
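Here is a minimal sketch of such a check before launching the child. The exact policy (owner root, not writable by group or others) is my interpretation of the rules above, and the root_child.py path is a hypothetical example:

import os, stat, sys

def assert_safe_to_run_as_root(path):
    # os.stat() follows symlinks, just like the exec will
    st = os.stat(path)
    if st.st_uid != 0:
        sys.exit("refusing to run: " + path + " is not owned by root")
    if st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
        sys.exit("refusing to run: " + path + " is writable by group/other")

assert_safe_to_run_as_root("/usr/local/myapp/root_child.py")  # hypothetical absolute path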
Further Reading
AuthorizationExecuteWithPrivileges() Reference Documentation
Get root dialog in Python on Mac OS X, Windows?
https://github.com/BusKill/buskill-app/issues/14
https://www.jamf.com/blog/detecting-insecure-application-updates-on-macos/
Related
I'm spawning multiple CMD windows from a given Python file using subprocess.Popen, each with an input() at the end. The problem is that if any exception is raised in the code, the window just closes and I can't see what happened.
I want it either to stay open no matter the error, so I can see it, or to report the error back to the main window, like "this script failed to run because of this...".
I'm running this on Windows:
import sys
import platform
from subprocess import Popen, PIPE

pipelines = [("Name1", "path1"),
             ("Name2", "path2")]

# define a command that starts a new terminal
if platform.system() == "Windows":
    new_window_command = "cmd.exe /c start".split()
else:  # XXX this can be made more portable
    new_window_command = "x-terminal-emulator -e".split()

processes = []
for i in range(len(pipelines)):
    # open new consoles, display messages
    echo = [sys.executable, "-c",
            "import sys; print(sys.argv[1]); from {} import {}; obj = {}(); obj.run(); input('Press Enter..')".format(pipelines[i][1], pipelines[i][0], pipelines[i][0])]
    processes.append(Popen(new_window_command + echo + [pipelines[i][0]]))

for proc in processes:
    proc.wait()
To see the error, try wrapping the desired code fragment in try / except:

try:
    ...
except Exception as e:
    print(e)
I am new to Python.
I am trying to SSH to a server to perform some operations. However, before performing the operations, I need to load a profile, which takes 60-90 seconds. After loading the profile, is there a way to keep the SSH session open so that I can perform the operations later?

p = subprocess.Popen("ssh abc@xyz './profile'", stdout=subprocess.PIPE, shell=True)
result = p.communicate()[0]
print result
return result

This loads the profile and exits. Is there a way to keep the above ssh session open and run some commands?
Example:
p = subprocess.Popen("ssh abc#xyz'./profile'", stdout=subprocess.PIPE, shell=True)
<More Python Code>
<More Python Code>
<More Python Code>
<Run some scripts/commands on xyz server non-interactively>
After loading the profile, I want to run some scripts/commands on the remote server, which I can do like this:

p = subprocess.Popen("ssh abc@xyz './profile; <./a.py; etc>'", stdout=subprocess.PIPE, shell=True)

However, once done, it exits, and the next time I want to execute a script on that server I need to load the profile again (which takes 60-90 seconds). I am trying to figure out a way to create some sort of tunnel (or any other mechanism) where the ssh connection remains open after loading the profile, so that users don't have to wait 60-90 seconds whenever anything is to be executed.
I don't have access to strip down the profile.
Try an ssh library like asyncssh or spur. Keeping the connection object around should keep the session open.
You could also send a dummy command like date periodically to prevent a timeout.
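For example, here is a minimal sketch with asyncssh (host xyz, user abc, and the script names come from the question; this assumes asyncssh is installed, and that running everything in one interactive shell is acceptable):

import asyncio
import asyncssh

async def main():
    # one connection, kept open for as many commands as you like
    async with asyncssh.connect('xyz', username='abc') as conn:
        # one interactive shell, so the loaded profile persists for later commands
        async with conn.create_process('sh') as proc:
            proc.stdin.write('./profile\n')  # pay the 60-90 second cost once
            proc.stdin.write('./a.py\n')     # runs in the same shell session
            proc.stdin.write('exit\n')
            print(await proc.stdout.read())

asyncio.run(main())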
You can construct an ssh command like ['ssh', '-T', 'host_user_name@host_address'] and then use the code below.
Code:
from subprocess import Popen, PIPE

ssh_conn = ['ssh', '-T', 'host_user_name@host_address']
# if you have to add a port, then ssh_conn should be as follows:
# ssh_conn = ['ssh', '-T', 'host_user_name@host_address', '-p', 'port']

commands = """
cd Documents/
ls -l
cat test.txt
"""

with Popen(ssh_conn, stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) as p:
    output, error = p.communicate(commands)
    print(output)
    print(error)
    print(p.returncode)

# alternatively, skip communicate() (which closes stdin and waits for exit)
# and write commands one at a time; each needs a trailing newline:
#     p.stdin.write('command_1\n')
#     # add as many commands as you want
#     p.stdin.write('command_n\n')
Please let me know if you need further explanations.
N.B.: You can add as many commands to the commands string as you want.
What you want to do is write/read to the process's stdin/stdout.
from subprocess import Popen, PIPE
import shlex

shell_command = "ssh user@address"
proc = Popen(shlex.split(shell_command), stdin=PIPE, universal_newlines=True)

# Do python stuff here

proc.stdin.write("cd Desktop\n")
proc.stdin.write("mkdir Example\n")
# And so on

proc.stdin.write("exit\n")
You must include the trailing newline for each command. If you prefer, print() (as of Python 3.x, where it is a function) takes a keyword argument file, which lets you forget about that newline (and also gain all the other benefits of print()).
print("rm Example", file=proc.stdin)
Additionally, if you need to see the output of your command, you can pass stdout=PIPE and then read via proc.stdout.read() (same for stderr).
You may also want to put the exit command in a try/finally block, to ensure you exit the ssh session gracefully.
Note that a) read is blocking, so if there's no output it'll block forever, and b) it only returns what was available to read from stdout at that time, so you may need to read repeatedly, sleep for a short time, or poll for additional data. See the fcntl and select stdlib modules for making reads non-blocking and for polling for events, respectively.
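As a minimal sketch of the select approach (assuming proc was opened with stdout=PIPE as described above; the 1-second timeout is arbitrary):

import select

# wait up to 1 second for output to become available before reading,
# so we never block forever on an empty pipe
ready, _, _ = select.select([proc.stdout], [], [], 1.0)
if ready:
    print(proc.stdout.readline())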
Hello Koshur!
I think that what you are trying to achieve looks like what I've tried in the past when trying to make my terminal accessible from a private website:
I would open a bash instance, keep it open and would listen for commands through a WebSocket connection.
What I did to achieve this was using the O_NONBLOCK flag on STDOUT.
Example
import fcntl
import os
import shlex
import subprocess

# Open a shell prompt
current_process = subprocess.Popen(shlex.split("/bin/sh"), stdin=subprocess.PIPE,
                                   stdout=subprocess.PIPE, stderr=subprocess.STDOUT)

# Non-blocking reading of stdout (which also carries stderr, redirected above)
fcntl.fcntl(current_process.stdout.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
What I would have after this is a loop checking for new output in another thread:
from time import sleep
from threading import Thread

def check_output(process):
    """
    Checks the output of stdout and stderr to send it to the WebSocket client
    """
    while process.poll() is None:  # while the process hasn't exited
        try:
            output = process.stdout.read()  # Read the stdout PIPE (which contains stdout and stderr)
        except Exception:
            output = None
        if output:
            print(output)
        sleep(.1)

    # from here, we are outside the loop: the process exited
    print("Process exited with return code: {code}".format(code=process.returncode))

# Start checking for new text in stdout and stderr
Thread(target=check_output, args=(current_process,), daemon=True).start()
So you would need to implement your logic to SSH when starting the process:
current_process = subprocess.Popen(shlex.split("ssh abc#xyz'./profile'"), stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
And send commands like so:
def send_command(process, cmd):
    process.stdin.write(str(cmd + "\n").encode("utf-8"))  # Write the input to STDIN
    process.stdin.flush()  # Run the command

send_command(current_process, "echo Hello")
EDIT
I checked the minimum Python requirements for the given examples and found that Thread's daemon flag might not work on Python 2.7, which you asked for in the tags.
If you are sure to exit the thread before exiting the program, you can ignore daemon and use Thread(), which works on 2.7. (You could, for example, use atexit and terminate the process, as sketched below.)
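A minimal sketch of that atexit idea, reusing current_process from the example above:

import atexit

# terminate the shell/ssh process on interpreter exit; process.poll() then
# returns its exit code, so the reader thread's loop ends and the thread exits
atexit.register(current_process.terminate)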
References
fcntl(2) man page
https://man7.org/linux/man-pages/man2/fcntl.2.html
fcntl Python 3 Documentation
https://docs.python.org/3/library/fcntl.html
fcntl Python 2.7 Documentation
https://docs.python.org/2.7/library/fcntl.html
O_NONBLOCK Python 3 Documentation
https://docs.python.org/3/library/os.html#os.O_NONBLOCK
O_NONBLOCK Python 2.7 Documentation
https://docs.python.org/2.7/library/os.html#os.O_NONBLOCK
Okay, I'm officially out of ideas after running each and every sample I could find on Google, up to the 19th page. I have a "provider" script. The goal of this Python script is to start up other services that run indefinitely even after this "provider" has stopped running. Basically: start the process, then forget about it, but continue the script without stopping it...
My problem: python-daemon... I have actions (web-service calls to start/stop/get status from the started services). I create the start commands on the fly and perform variable substitution on the config files as required.
Let's start from this point: I have a command to run (a bash script that executes a java process, a long-running service that will be stopped sometime later).
def start(command, working_directory):
    pidfile = os.path.join(working_directory, 'application.pid')
    # I expect the pid of the started application to be here.
    # The file is not created. Nothing is there.
    context = daemon.DaemonContext(working_directory=working_directory,
                                   pidfile=daemon.pidfile.PIDLockFile(pidfile))
    with context:
        psutil.Popen(command)

    # This part never runs. Even if I put a simple print statement at this
    # point, it never appears. Debugging in PyCharm shows that my script
    # returns with 0 on "with context".
    with open(pidfile, 'r') as pf:
        pid = pf.read()
    return pid
From here on, in the caller of this method, I prepare a JSON object to return to the client, which essentially contains an instance_id (don't mind it) and a pid (that'll be used to stop this process in another request).
What happens? After with context, my application exits with status 0: nothing is returned, no JSON response gets created, no pidfile gets created; only the executed psutil.Popen command runs. How can I achieve what I need? I need an independently running process and need to know its PID in order to stop it later. The executed process must keep running even if the current Python script stops for some reason. I can't get around the shell script, as that application is not mine; I have to use what I have.
Thanks for any tip!
Edit:
I tried simply using Popen from psutil/subprocess, with a somewhat more promising result:

def start(self, command):
    proc = psutil.Popen(command)  # psutil.Popen wraps subprocess.Popen
    return str(proc.pid)

Now, if I debug the application and wait some arbitrary time on the return statement, everything works great! The service is running, the pid is there, and I can stop it later. Then I simply ran the provider without debugging. It returns the pid, but the process is not running. It seems Popen has no time to start the service because the whole provider stops first.
Update:
Using os.fork:

@staticmethod
def __start_process(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.chdir(working_directory)
        proc = psutil.Popen(command)
        with open('application.pid', 'w') as pf:
            pf.write(str(proc.pid))

def start(self):
    ...
    self.__start_process(command, working_directory)
    with open(os.path.join(working_directory, 'application.pid'), 'r') as pf:
        pid = int(pf.read())
    proc = psutil.Process(pid)
    print("RUNNING" if proc.status() == psutil.STATUS_RUNNING else "...")
After running the above sample, RUNNING is written to the console. Then, after the main script exits (because I'm not fast enough):

ps auxf | grep

No instances are running...
Checking the pidfile: sure, it's there, it was created:

cat /application.pid

EMPTY, 0 bytes
From multiple partial tips I got, I finally managed to get it working:

def start(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.setsid()
        os.umask(0)  # I'm not sure about this, not on my notebook at the moment
        # This was strange, as I needed to pass the name of the shell script
        # twice: command[0] is argv[0], and command is the full argv. Upon
        # using ksh as the command I got a nice error...
        os.execv(command[0], command)
    else:
        with open(os.path.join(working_directory, 'application.pid'), 'w') as pf:
            pf.write(str(pid))
        return pid

That solved the issue. The started process is not a child process of the running Python script and won't stop when the script terminates.
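For illustration, a hypothetical call to the working start() above (the script path and working directory are made up):

pid = start(['/opt/myservice/run.sh'], '/opt/myservice')
print('service started with PID %d' % pid)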
Have you tried os.fork()?
In a nutshell, os.fork() spawns a new process; in the parent it returns the PID of the new child process, and in the child it returns 0.
You could do something like this:
#!/usr/bin/env python
import os
import subprocess
import sys
import time

command = 'ls'              # YOUR COMMAND
working_directory = '/etc'  # YOUR WORKING DIRECTORY

def child(command, directory):
    print "I'm the child process, will execute '%s' in '%s'" % (command, directory)

    # Change working directory
    os.chdir(directory)

    # Execute command
    cmd = subprocess.Popen(command
                           , shell=True
                           , stdout=subprocess.PIPE
                           , stderr=subprocess.PIPE
                           , stdin=subprocess.PIPE
                           )

    # Retrieve output and error(s), if any
    output = cmd.stdout.read() + cmd.stderr.read()
    print output

    # Exiting
    print 'Child process ending now'
    sys.exit(0)

def main():
    print "I'm the main process"

    pid = os.fork()
    if pid == 0:
        child(command, working_directory)
    else:
        print 'A subprocess was created with PID: %s' % pid
        # Do stuff here ...
        time.sleep(5)

    print 'Main process ending now.'
    sys.exit(0)

if __name__ == '__main__':
    main()
Further info:
Documentation: https://docs.python.org/2/library/os.html#os.fork
Examples: http://www.python-course.eu/forking.php
Another related question: Regarding The os.fork() Function In Python
I am writing a program which initiates a connection to a remote machine and then dynamically sends multiple commands to it while monitoring the responses. Instead of using pexpect, what else can I use? I am trying to use subprocess.Popen, but the communicate() method kills the process.
Pexpect version: 2.4, http://www.bx.psu.edu/~nate/pexpect/pexpect.html
Referring to the API for subprocess in:
https://docs.python.org/2/library/subprocess.html#subprocess.Popen.communicate
Popen.communicate(input=None)
Interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate. The optional input argument should be a string to be sent to the child process, or None, if no data should be sent to the child.
Thanks
Refer to the subprocess documentation to understand the basics.
You could do something like this...
Again, this is just a pointer; this approach may or may not be the best fit for your use case.
Explore and test to find what works for you!
import logging
import shlex
import subprocess
import sys

logger = logging.getLogger(__name__)  # logger used below for error reporting

class Command(object):
    """ Generic Command Interface. """

    def execute(self, cmd):
        proc = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE)
        stdout_value = proc.communicate()[0]
        exit_value = proc.poll()
        if exit_value:
            logger.error('Command execution failed. Command : %s' % cmd)
        return exit_value, stdout_value

if __name__ == '__main__':
    cmd = Command()
    host = ''     # HOSTNAME GOES HERE
    cmd_str = ''  # YOUR COMMAND GOES HERE
    cmdline = 'ksh -c "ssh root@{0} "{1}""'.format(host, cmd_str)
    exit_value, stdout_value = cmd.execute(cmdline)
    if exit_value == 0:
        # execute other command/s
        # you basically use the same logic as above
        pass
    else:
        # return or execute other command/s
        pass
I need to write a script on Linux which can start a background process with one command and stop the process with another.
The specific application is to capture userspace and kernel logs on Android.
The following command should start taking logs:
$ mylogscript start
The following command should stop the logging:
$ mylogscript stop
Also, the commands should not block the terminal. For example, once I send the start command, the script should run in the background and I should be able to do other work in the terminal.
Any pointers on how to implement this in Perl or Python would be helpful.
EDIT:
Solved: https://stackoverflow.com/a/14596380/443889
I got the solution to my problem. The solution essentially consists of starting a subprocess in Python and sending a signal to kill the process when done.
Here is the code for reference:
#!/usr/bin/python
import subprocess
import sys
import os
import signal

U_LOG_FILE_PATH = "u.log"
K_LOG_FILE_PATH = "k.log"
U_COMMAND = "adb logcat > " + U_LOG_FILE_PATH
K_COMMAND = "adb shell cat /proc/kmsg > " + K_LOG_FILE_PATH
LOG_PID_PATH = "log-pid"

def start_log():
    if(os.path.isfile(LOG_PID_PATH) == True):
        print "log process already started, found file: ", LOG_PID_PATH
        return

    file = open(LOG_PID_PATH, "w")

    print "starting log process: ", U_COMMAND
    proc = subprocess.Popen(U_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process1 id = ", proc.pid
    file.write(str(proc.pid) + "\n")

    print "starting log process: ", K_COMMAND
    proc = subprocess.Popen(K_COMMAND,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                            shell=True, preexec_fn=os.setsid)
    print "log process2 id = ", proc.pid
    file.write(str(proc.pid) + "\n")

    file.close()

def stop_log():
    if(os.path.isfile(LOG_PID_PATH) != True):
        print "log process not started, can not find file: ", LOG_PID_PATH
        return

    print "terminating log processes"
    file = open(LOG_PID_PATH, "r")
    log_pid1 = int(file.readline())
    log_pid2 = int(file.readline())
    file.close()

    print "log-pid1 = ", log_pid1
    print "log-pid2 = ", log_pid2

    os.killpg(log_pid1, signal.SIGTERM)
    print "logprocess1 killed"
    os.killpg(log_pid2, signal.SIGTERM)
    print "logprocess2 killed"

    subprocess.call("rm " + LOG_PID_PATH, shell=True)

def print_usage(str):
    print "usage: ", str, "[start|stop]"

# Main script
if(len(sys.argv) != 2):
    print_usage(sys.argv[0])
    sys.exit(1)

if(sys.argv[1] == "start"):
    start_log()
elif(sys.argv[1] == "stop"):
    stop_log()
else:
    print_usage(sys.argv[0])
    sys.exit(1)

sys.exit(0)
There are a couple of different approaches you can take on this:
1. Signals: you use a signal handler, typically with SIGHUP to signal the process to restart ("start") and SIGTERM to stop it ("stop").
2. A named pipe or other IPC mechanism: the background process has a separate thread that simply reads from the pipe, and when something comes in, acts on it. This method relies on having a separate executable file that opens the pipe and sends messages ("start", "stop", "set loglevel 1", or whatever you fancy); see the sketch below.
I'm sorry, I haven't implemented either of these in Python (and in Perl I haven't really written anything), but I doubt it's very hard; there's bound to be ready-made Python code to deal with named pipes.
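As a minimal sketch of the named-pipe idea (the FIFO path and the command names are made up for illustration):

import os

FIFO_PATH = "/tmp/mylogscript.fifo"  # hypothetical location

# create the FIFO once; both sides then open it like a regular file
if not os.path.exists(FIFO_PATH):
    os.mkfifo(FIFO_PATH)

# in the background process: block until a command arrives, then act on it
with open(FIFO_PATH) as fifo:
    for line in fifo:
        command = line.strip()
        if command == "stop":
            break  # stop logging and exit

A controlling script would then just write "stop" (plus a newline) into /tmp/mylogscript.fifo to shut the logger down.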
Edit: Another method that just struck me is that you simply daemonize the program at start, and then let the "stop" invocation find your daemonized process (e.g. by reading the pidfile that you stashed somewhere suitable) and send it a SIGTERM to terminate it.
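A sketch of that "stop" path (the pidfile location is an assumption; the daemon is presumed to have written its own PID there at startup):

import os
import signal

PID_FILE = "/tmp/mylogscript.pid"  # hypothetical pidfile written at start

# read the daemon's PID and ask it to terminate
with open(PID_FILE) as pf:
    pid = int(pf.read().strip())
os.kill(pid, signal.SIGTERM)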
I don't know if this is the optimum way to do it in Perl, but for example:

system("sleep 60 &")

This starts a background process that will sleep for 60 seconds without blocking the terminal. The ampersand in shell means to run something in the background.
A simple mechanism for telling the process when to stop is to have it periodically check for the existence of a certain file; if the file exists, it exits.
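In Python, that stop-file check could look like this minimal sketch (the flag-file path is made up; `touch` the file to stop the loop):

import os
import time

STOP_FILE = "/tmp/mylogscript.stop"  # hypothetical flag file

while not os.path.exists(STOP_FILE):
    # ... do the periodic logging work here ...
    time.sleep(1)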