I've got a long running, daemonized Python process that uses subprocess to spawn new child processes when certain events occur. The long running process is started by a user with super user privileges. I need the child processes it spawns to run as a different user (e.g., "nobody") while retaining the super user privileges for the parent process.
I'm currently using
su -m nobody -c <program to execute as a child>
but this seems heavyweight and doesn't die very cleanly.
Is there a way to accomplish this programmatically instead of using su? I'm looking at the os.set*uid methods, but the doc in the Python std lib is quite sparse in that area.
Since you mentioned a daemon, I can conclude that you are running on a Unix-like operating system. This matters, because how to do this depends on the kind of operating system. This answer applies only to Unix, including Linux and Mac OS X.
Define a function that will set the gid and uid of the running process.
Pass this function as the preexec_fn parameter to subprocess.Popen
subprocess.Popen will use the fork/exec model to make use of your preexec_fn. That is equivalent to calling os.fork(), then preexec_fn() (in the child process), then os.exec() (in the child process), in that order. Since os.setuid, os.setgid, and preexec_fn are all supported only on Unix, this solution is not portable to other kinds of operating systems.
The following code is a script (Python 2.4+) that demonstrates how to do this:
import os
import pwd
import subprocess
import sys

def main(my_args=None):
    if my_args is None: my_args = sys.argv[1:]
    user_name, cwd = my_args[:2]
    args = my_args[2:]
    pw_record = pwd.getpwnam(user_name)
    user_name = pw_record.pw_name
    user_home_dir = pw_record.pw_dir
    user_uid = pw_record.pw_uid
    user_gid = pw_record.pw_gid
    env = os.environ.copy()
    env['HOME'] = user_home_dir
    env['LOGNAME'] = user_name
    env['PWD'] = cwd
    env['USER'] = user_name
    report_ids('starting ' + str(args))
    process = subprocess.Popen(
        args, preexec_fn=demote(user_uid, user_gid), cwd=cwd, env=env
    )
    result = process.wait()
    report_ids('finished ' + str(args))
    print 'result', result

def demote(user_uid, user_gid):
    def result():
        report_ids('starting demotion')
        os.setgid(user_gid)
        os.setuid(user_uid)
        report_ids('finished demotion')
    return result

def report_ids(msg):
    print 'uid, gid = %d, %d; %s' % (os.getuid(), os.getgid(), msg)

if __name__ == '__main__':
    main()
You can invoke this script like this:
Start as root...
(hale)/tmp/demo$ sudo bash --norc
(root)/tmp/demo$ ls -l
total 8
drwxr-xr-x 2 hale wheel 68 May 17 16:26 inner
-rw-r--r-- 1 hale staff 1836 May 17 15:25 test-child.py
Become non-root in a child process...
(root)/tmp/demo$ python test-child.py hale inner /bin/bash --norc
uid, gid = 0, 0; starting ['/bin/bash', '--norc']
uid, gid = 0, 0; starting demotion
uid, gid = 501, 20; finished demotion
(hale)/tmp/demo/inner$ pwd
/tmp/demo/inner
(hale)/tmp/demo/inner$ whoami
hale
When the child process exits, we go back to root in parent ...
(hale)/tmp/demo/inner$ exit
exit
uid, gid = 0, 0; finished ['/bin/bash', '--norc']
result 0
(root)/tmp/demo$ pwd
/tmp/demo
(root)/tmp/demo$ whoami
root
Note that having the parent process wait around for the child process to exit is for demonstration purposes only. I did this so that the parent and child could share a terminal. A daemon would have no terminal and would seldom wait around for a child process to exit.
There is an os.setuid() method. You can use it to change the current user for this script.
One solution is, at the point where the child starts, to call os.setuid() and os.setgid() to change the user and group IDs, and after that to call one of the os.exec* methods to spawn the new child. The newly spawned child will run as the less powerful user, without the ability to become a more powerful one again.
Another is to do it when the daemon (the master process) starts; then all newly spawned processes will run under the same user.
For information look at the manpage for setuid.
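A minimal sketch of that approach, assuming a target user of "nobody" and using /bin/id as a stand-in for the real child program:

import os
import pwd

pw = pwd.getpwnam("nobody")
pid = os.fork()
if pid == 0:
    # child: drop group first (while still root), then user; this is irreversible
    os.setgid(pw.pw_gid)
    os.setuid(pw.pw_uid)
    # replace the child image with the target program
    os.execv("/bin/id", ["/bin/id"])
os.waitpid(pid, 0)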
The new versions of Python (3.9 onwards) support the user and group options out of the box:
process = subprocess.Popen(args, user=username)
The new versions also provide a subprocess.run function. It is a simple wrapper around subprocess.Popen. While subprocess.Popen runs the command in the background, subprocess.run runs the command and waits for its completion.
Thus we can also do:
subprocess.run(args, user=username)
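For example, a minimal sketch (Python 3.9+; "nobody" is a placeholder user):

import subprocess

result = subprocess.run(["id"], user="nobody", group="nobody",
                        capture_output=True, text=True)
print(result.returncode, result.stdout)

Note that, as with the preexec_fn approach, the calling process needs sufficient privileges (typically root) for the user and group switch to succeed.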
Actually, the example with preexec_fn did not work for me.
My solution, which works fine for running a shell command as another user and getting its output, is:
apipe = subprocess.Popen('sudo -u someuser /execution', shell=True, stdout=subprocess.PIPE)
Then, if you need to read from the process stdout:
cond = True
while cond:
    line = apipe.stdout.readline()  # stdout has readline(), not getline()
    if (....):  # your stopping condition, left as in the original
        cond = False
Hope it is useful beyond just my case.
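If you only need the complete output rather than reading it line by line, a simpler variant (a sketch under the same assumptions about someuser) is subprocess.check_output:

import subprocess

# Runs the command as someuser via sudo and returns its stdout as a string.
out = subprocess.check_output(["sudo", "-u", "someuser", "id"], text=True)
print(out)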
Related
I am running a python script as root, and from this script I want to run a Linux process as userA.
There are many answers on how to do that, but I also need to print the exit code received from the process, and that's the real issue here.
Is there a way of running a process as userA and also printing its return value?
os.system("su userA -c 'echo $USER'")
answer = subprocess.call(["su userA -c './my/path/run.sh'"])
print(answer)
When you're running a script and you want to print its return code, you must wait until its execution is done before executing the print command.
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
http://docs.python.org/library/subprocess.html
In your case:
import subprocess
process = subprocess.Popen('su userA -c ./my/path/run.sh', shell=True, stdout=subprocess.PIPE)
process.wait()
print(process.returncode)
Reference:
https://stackoverflow.com/a/325474/13798864
If you use os.fork (with an os.setuid inside the child) you can collect the status using os.waitpid.
pid = os.fork()
if pid == 0:
    os.setgid(...)
    os.setgroups(...)
    os.setuid(...)
    # do something and collect the exit status... for example
    #
    # using os.system:
    #
    # return_value = os.system(.....) // 256
    #
    # or using subprocess:
    #
    # p = subprocess.Popen(....)
    # out, err = p.communicate()
    # return_value = p.returncode
    #
    # Or you can simply exec (this will not return to python but the
    # exit status will still be visible in the parent). Note there are
    # several os.exec* calls, so choose the one which you want.
    #
    # os.exec...
    #
    os._exit(return_value)

pid, status = os.waitpid(pid, 0)
print(f"Child exit code was {status // 256}")
I recently posted an answer here to a related question; it was not so much focused on the return value, but it does include some more detail on the values that you might pass to the os.setuid etc. calls.
I want to run a python program using sudo (e.g. sudo python test.py) but within the python program when I use os.system(<some command>) to invoke other processes, I want to run them as a non-root user.
Is this doable?
Thanks!
Example:
import os
import pwd
username = "nobody"
pwent = pwd.getpwnam(username)
uid = pwent.pw_uid
gid = pwent.pw_gid
pid = os.fork()
if pid == 0:
    # child
    # relinquish any privileged groups before we call setuid
    # (not bothering to load the target user's supplementary groups here)
    os.setgid(gid)
    os.setgroups([])
    # now relinquish root privs
    os.setuid(uid)
    # os.setuid should probably raise an exception if it fails,
    # but I'm paranoid so...
    if os.getuid() != uid:
        print("setuid failed - bailing")
        os._exit(1)
    return_value = os.system("id") // 256  # or whatever
    os._exit(return_value)

# parent
os.waitpid(pid, 0)
print("parent continues (still root here)")
os.system("id")
Gives:
uid=65534(nobody) gid=65534(nogroup) groups=65534(nogroup)
parent continues (still root here)
uid=0(root) gid=0(root) groups=0(root)
Another way to do this is to prefix the command with sudo -u username so that it executes as that specific user. For example, one can use os.system('sudo -u user_a python test.py') to run test.py as user_a from inside a python script.
I'm creating a python script that will copy files and folders over the network. It's cross-platform, so I make an .exe file using cx_freeze.
I used the Popen method of the subprocess module.
If I run the .py file, it runs as expected, but when I create the .exe, the subprocess is not created in the system.
I've gone through all the documentation of the subprocess module, but I didn't find any solution.
Everything else (I am using Tkinter, which also works fine) works in the .exe except subprocess.
Any idea how I can call subprocess in the .exe file?
This file is calling another .py file
def start_scheduler_action(self, scheduler_id, scheduler_name, list_index):
    scheduler_detail = db.get_scheduler_detail_using_id(scheduler_id)
    for detail in scheduler_detail:
        source_path = detail[2]
    if not os.path.exists(source_path):
        showerror("Invalid Path", "Please select valid path", parent=self.new_frame)
        return
    self.forms.new_scheduler.start_scheduler_button.destroy()
    # Create stop scheduler button
    if getattr(self.forms.new_scheduler, "stop_scheduler_button", None) == None:
        self.forms.new_scheduler.stop_scheduler_button = tk.Button(self.new_frame, text='Stop scheduler', width=10, command=lambda: self.stop_scheduler_action(scheduler_id, scheduler_name, list_index))
        self.forms.new_scheduler.stop_scheduler_button.grid(row=11, column=1, sticky=E, pady=10, padx=1)
    scheduler_id = str(scheduler_id)
    # Get python paths
    if sys.platform == "win32":
        proc = subprocess.Popen(['where', "python"], env=None, stdout=subprocess.PIPE)
    else:
        proc = subprocess.Popen(['which', "python"], env=None, stdout=subprocess.PIPE)
    out, err = proc.communicate()
    if err or not out:
        showerror("", "Python not found", parent=self.new_frame)
    else:
        try:
            paths = out.split(os.pathsep)
            # Create python path
            python_path = (paths[len(paths) - 1]).split('\n')[0]
            cmd = os.path.realpath('scheduler.py')
            #cmd='scheduler.py'
            if sys.platform == "win32":
                python_path = python_path.splitlines()
            else:
                python_path = python_path
            # Run the scheduler file using scheduler id
            proc = subprocess.Popen([python_path, cmd, scheduler_id], env=None, stdout=subprocess.PIPE)
            message = "Started the scheduler : %s" % (scheduler_name)
            showinfo("", message, parent=self.new_frame)
            # Add process id to scheduler table
            process_id = proc.pid
            #showinfo("pid", process_id, parent=self.new_frame)

            def get_process_id(name):
                child = subprocess.Popen(['pgrep', '-f', name], stdout=subprocess.PIPE, shell=False)
                response = child.communicate()[0]
                return [int(pid) for pid in response.split()]

            print(get_process_id(scheduler_name))
            # Add the process id in database
            self.db.add_process_id(scheduler_id, process_id)
            # Add the is_running status in database
            self.db.add_status(scheduler_id)
        except Exception as e:
            showerror("", e)
And this file is called:
def scheduler_copy():
    date = strftime("%m-%d-%Y %H %M %S", localtime())
    logFile = scheduler_name + "_" + scheduler_id + "_" + date + ".log"
    #file_obj=open(logFile, 'w')
    # Call __init__ method of xcopy file
    xcopy = XCopy(connection_ip, username, password, client_name, server_name, domain_name)
    check = xcopy.connect()
    # Create a log file for scheduler
    file_obj = open(logFile, 'w')
    if check is False:
        file_obj.write("Problem in connection..Please check connection..!!")
        return
    scheduler_next_run = schedule.next_run()
    scheduler_next_run = "Next run at: " + str(scheduler_next_run)
    # If checkbox_value selected copy all the file to new directory
    if checkbox_value == 1:
        new_destination_path = xcopy.create_backup_directory(share_folder, destination_path, date)
    else:
        new_destination_path = destination_path
    # Call backup method for copying data from source to destination
    try:
        xcopy.backup(share_folder, source_path, new_destination_path, file_obj, exclude)
        file_obj.write("Scheduler completed successfully..\n")
    except Exception as e:
        # Write the error message of the scheduler to log file
        file_obj.write("Scheduler failed to copy all data..\nProblem in connection..Please check connection..!!\n")
        # #file_obj.write("Error while scheduling")
        # return
    # Write the details of scheduler to log file
    file_obj.write("Total skipped unmodified file:")
    file_obj.write(str(xcopy.skipped_unmodified_count))
    file_obj.write("\n")
    file_obj.write("Total skipped file:")
    file_obj.write(str(xcopy.skipped_file))
    file_obj.write("\n")
    file_obj.write("Total copied file:")
    file_obj.write(str(xcopy.copy_count))
    file_obj.write("\n")
    file_obj.write("Total skipped folder:")
    file_obj.write(str(xcopy.skipped_folder))
    file_obj.write("\n")
    # file_obj.write(scheduler_next_run)
    file_obj.close()
There is some awkwardness in your source code, but I won't spend time on that. For instance, if you want to find the source_path, it's better to use a for loop with break/else:
for detail in scheduler_detail:
    source_path = detail[2]
    break  # found
else:
    # not found: raise an exception
    ...
Some advice:
Try to separate the user interface code and the sub-processing, avoid mixing the two.
Use exceptions and exception handlers.
If you want portable code: avoid system calls (there is no pgrep on Windows).
Since your application is packaged in a virtualenv (I assume cx_freeze does this kind of thing), you have no access to the system-wide Python; you may not even have one on Windows. So you need to use the packaged Python (this is a best practice anyway).
If you want to call a Python script as a subprocess, that means you have two packaged applications: you need to create an exe for the main application and another for the scheduler.py script. But then it is not easy to communicate with it.
Another solution is to use multiprocessing to spawn a new Python process. Since you don't want to wait for the end of processing (which may be long), you need to create daemon processes. The way to do that is explained in the multiprocessing module.
Basically:
import time
from multiprocessing import Process

def f(name):
    print('hello', name)

if __name__ == '__main__':
    p = Process(target=f, args=('bob',))
    p.daemon = True
    p.start()
    # let it live and die, don't call: `p.join()`
    time.sleep(1)
Of course, we need to adapt that with your problem.
Here is how I would do that (I removed UI-related code for clarity):
import scheduler

class SchedulerError(Exception):
    pass

class YourClass(object):
    def start_scheduler_action(self, scheduler_id, scheduler_name, list_index):
        scheduler_detail = db.get_scheduler_detail_using_id(scheduler_id)
        for detail in scheduler_detail:
            source_path = detail[2]
            break
        else:
            raise SchedulerError("Invalid Path", "Missing source path")
        if not os.path.exists(source_path):
            raise SchedulerError("Invalid Path", "Please select valid path")
        p = Process(target=scheduler.scheduler_copy, args=(source_path,))
        p.daemon = True
        p.start()
        self.db.add_process_id(scheduler_id, p.pid)
To check if your process is still running, I recommend you to use psutil. It's really a great tool!
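For instance, a minimal sketch (the pid would come from your database in this design):

import psutil

pid = 12345  # placeholder: a pid previously stored by start_scheduler_action
if psutil.pid_exists(pid) and psutil.Process(pid).is_running():
    print("scheduler process is still running")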
You can define your scheduler.py script like this:
def scheduler_copy(source_path):
    ...
Multiprocessing vs Threading Python
Quoting this answer: https://stackoverflow.com/a/3044626/1513933
The threading module uses threads, the multiprocessing module uses processes. The difference is that threads run in the same memory space, while processes have separate memory. This makes it a bit harder to share objects between processes with multiprocessing. Since threads use the same memory, precautions have to be taken or two threads will write to the same memory at the same time. This is what the global interpreter lock is for.
Here, the advantage of multiprocessing over multithreading is that you can kill (or terminate) a process; you can't kill a thread. You may need psutil for that.
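A minimal sketch of terminating a process by pid with psutil (a throwaway sleep stands in for the real worker):

import subprocess

import psutil

child = subprocess.Popen(["sleep", "60"])  # stand-in for the worker process
p = psutil.Process(child.pid)
p.terminate()      # sends SIGTERM; use p.kill() for SIGKILL
p.wait(timeout=3)  # reap the process and avoid a zombie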
This is not exactly the solution you are looking for, but the following suggestions should be preferred, for two reasons:
They are the more Pythonic way.
subprocess is slightly expensive.
Suggestions you can consider:
Don't use subprocess for fetching the system path. Check os.getenv('PATH') instead and see whether Python is on it (see the sketch after this list). On Windows, one has to add the Python path manually, or else you can check directly under Program Files, I guess.
For checking process IDs you can try psutil. A wonderful answer is provided here at how do I get the process list in Python?
Calling another script from a python script does not look cool. Not bad, but I would not prefer it at all.
In the above code, the line if sys.platform == "win32": has the same value in the if and else conditions, so you don't need a conditional statement here.
To be fair, you wrote pretty fine working code. Keep coding!
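A minimal sketch of the first two suggestions, assuming psutil is installed ('scheduler.py' is a placeholder for whatever command line you are matching):

import shutil
import sys

import psutil

# Find a Python interpreter without spawning `where`/`which`:
# sys.executable is the running interpreter; shutil.which searches PATH.
python_path = sys.executable or shutil.which("python")
print(python_path)

# List matching process ids with psutil instead of pgrep (portable):
pids = [p.pid for p in psutil.process_iter(["cmdline"])
        if "scheduler.py" in " ".join(p.info["cmdline"] or [])]
print(pids)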
If you want to run a subprocess in an exe file, then you can use
import subprocess

program = 'example'
arguments = '/command'
subprocess.call([program, arguments])
Okay, I'm officially out of ideas after running each and every sample I could find on Google up to the 19th page. I have a "provider" script. The goal of this python script is to start up other services that run indefinitely even after this "provider" has stopped running. Basically: start the process, then forget about it, but let the script continue rather than block.
My problem: python-daemon... I have actions (web-service calls to start/stop/get status from the started services). I create the start commands on the fly and perform variable substitution on the config files as required.
Let's start from this point: I have a command to run (A bash script that executes a java process - a long running service that will be stopped sometime later).
def start(command, working_directory):
    pidfile = os.path.join(working_directory, 'application.pid')
    # I expect the pid of the started application to be here. The file is not created. Nothing is there.
    context = daemon.DaemonContext(working_directory=working_directory,
                                   pidfile=daemon.pidfile.PIDLockFile(pidfile))
    with context:
        psutil.Popen(command)
    # This part never runs. Even if I put a simple print statement at this point, it never appears.
    # Debugging in PyCharm shows that my script returns with 0 on the `with context` block.
    with open(pidfile, 'r') as pf:
        pid = pf.read()
    return pid
From here on, in my caller to this method, I prepare a json object to return to the client, which essentially contains an instance_id (don't mind it) and a pid (that'll be used to stop this process in another request).
What happens? After with context, my application exits with status 0: nothing is returned, no json response gets created, no pidfile gets created; only the executed psutil.Popen command runs. How can I achieve what I need? I need an independently running process and need to know its PID in order to stop it later on. The executed process must run even if the current python script stops for some reason. I can't get around the shell script, as that application is not mine; I have to use what I have.
Thanks for any tip!
Edit:
I tried simply using Popen from psutil/subprocess, with a somewhat more promising result.
def start(self, command):
    import psutil  # or use subprocess.Popen; both behave the same here

    proc = psutil.Popen(command)
    return str(proc.pid)
Now, if I debug the application and wait some undefined time on the return statement, everything works great! The service is running, the pid is there, and I can stop it later on. Then I simply ran the provider without debugging. It returns the pid, but the process is not running. It seems Popen has no time to start the service because the whole provider stops first.
Update:
Using os.fork:
@staticmethod
def __start_process(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.chdir(working_directory)
        proc = psutil.Popen(command)
        with open('application.pid', 'w') as pf:
            pf.write(proc.pid)

def start(self):
    ...
    __start_process(command, working_directory)
    with open(os.path.join(working_directory, 'application.pid'), 'r') as pf:
        pid = int(pf.read())
    proc = psutil.Process(pid)
    print("RUNNING" if proc.status() == psutil.STATUS_RUNNING else "...")
After running the above sample, RUNNING is written to the console. Then, after the main script exits (because I'm not fast enough):
ps auxf | grep
No instances are running...
Checking the pidfile: sure, it's there, it was created:
cat /application.pid
EMPTY 0bytes
From multiple partial tips I got, I finally managed to get it working...
def start(command, working_directory):
    pid = os.fork()
    if pid == 0:
        os.setsid()
        os.umask(0)  # I'm not sure about this, not on my notebook at the moment
        # This was strange, as I needed to use the name of the shell script twice:
        # command, argv[0], [args]. Upon using ksh as command I got a nice error...
        os.execv(command[0], command)
    else:
        with open(os.path.join(working_directory, 'application.pid'), 'w') as pf:
            pf.write(str(pid))
        return pid
That together solved the issue. The started process is not a child process of the running python script and won't stop when the script terminates.
Have you tried with os.fork()?
In a nutshell, os.fork() spawns a new process and returns the PID of that new process.
You could do something like this:
#!/usr/bin/env python
import os
import subprocess
import sys
import time

command = 'ls'              # YOUR COMMAND
working_directory = '/etc'  # YOUR WORKING DIRECTORY

def child(command, directory):
    print "I'm the child process, will execute '%s' in '%s'" % (command, directory)
    # Change working directory
    os.chdir(directory)
    # Execute command
    cmd = subprocess.Popen(command,
                           shell=True,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE,
                           stdin=subprocess.PIPE)
    # Retrieve output and error(s), if any
    output = cmd.stdout.read() + cmd.stderr.read()
    print output
    # Exiting
    print 'Child process ending now'
    sys.exit(0)

def main():
    print "I'm the main process"
    pid = os.fork()
    if pid == 0:
        child(command, working_directory)
    else:
        print 'A subprocess was created with PID: %s' % pid
        # Do stuff here ...
        time.sleep(5)
        print 'Main process ending now.'
        sys.exit(0)

if __name__ == '__main__':
    main()
Further info:
Documentation: https://docs.python.org/2/library/os.html#os.fork
Examples: http://www.python-course.eu/forking.php
Another related-question: Regarding The os.fork() Function In Python
I'm working on a program that requires me to keep track of the PIDs of specific Chrome/browser instances. This is the code I wrote for this:
from subprocess import Popen

def launch_procs():
    low1 = Popen(['google-chrome-stable', 'http://www.google.com'])
    med1 = Popen(['google-chrome-stable', 'http://www.netflix.com'])
    high1 = Popen(['google-chrome-stable', 'http://www.facebook.com'])
    return [low1.pid, med1.pid, high1.pid]
However, when I attempt to reference the PIDs later on in the program it seems that the PIDs have expired. Here is the error I get:
7894
strace: attach: ptrace(PTRACE_ATTACH, ...): No such process
7896
strace: attach: ptrace(PTRACE_ATTACH, ...): No such process
7901
strace: attach: ptrace(PTRACE_ATTACH, ...): No such process
Is the issue that Chrome doesn't assign permanent PIDs to its tabs/processes (i.e. it forks once a Chrome process launches and ditches the parent process)?
Note: This implementation is browser/implementation agnostic, I just need a way to obtain stable access to the PIDs of these launched processes. If anyone has suggestions on doing this they would be very much appreciated.
Thanks!
Chrome does not run as root under normal operating conditions. You can find several discussions of this here and here.
There are several arguments that will allow you to circumvent this. By passing --user-data-dir and --no-sandbox you will be able to run chrome as root.
import os
from subprocess import Popen

line_count = 10
outfile = 'foo.txt'
cmd = 'sudo timeout 10 strace -p {} -o temp.out | cat temp.out | tail -{} > {}'
tab_sites = ['www.google.com', 'www.yahoo.com', 'www.msn.com']

for site in tab_sites:
    chrome_proc = Popen(['google-chrome-stable', site, '--user-data-dir', '--no-sandbox'])
    print(chrome_proc.pid)
    os.system(cmd.format(chrome_proc.pid, line_count, outfile))
Alternatively you can use runuser with your command:
import os
import sys
from subprocess import Popen

line_count = 10
outfile = 'foo.txt'
cmd = 'sudo timeout 10 strace -p {} -o temp.out | cat temp.out | tail -{} > {}'
tab_sites = ['www.google.com', 'www.yahoo.com', 'www.msn.com']

for site in tab_sites:
    chrome_proc = Popen(['runuser', '-u', sys.argv[1], 'google-chrome-stable', site])
    print(chrome_proc.pid)
    os.system(cmd.format(chrome_proc.pid, line_count, outfile))
Just pass in the username you want to run this under: sudo python trace_chrome.py your_user_name
I understand you aren't able to show your exact code, which does make it tougher to assist.
To see the process IDs of your Chrome tabs, you can open Chrome's Task Manager by pressing Shift+Esc. I did some testing and, as you suspect, the PID is different from the one reported by Popen.
One way to get an accessible PID with Chrome is to use the option --temp-profile to create a new session for each site instead of using an existing one.
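A minimal sketch of that, assuming google-chrome-stable is on PATH:

from subprocess import Popen

# --temp-profile gives each launch its own fresh profile, so the pid
# returned by Popen stays attached to that instance rather than being
# handed off to an already-running browser process.
proc = Popen(['google-chrome-stable', '--temp-profile', 'http://www.google.com'])
print(proc.pid)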