How to keep track of a Chrome tab's PID in Python?

I'm working on a program that requires me to keep track of the PIDs of specific Chrome/browser instances. This is the code I wrote for this:
from subprocess import Popen

def launch_procs():
    low1 = Popen(['google-chrome-stable', 'http://www.google.com'])
    med1 = Popen(['google-chrome-stable', 'http://www.netflix.com'])
    high1 = Popen(['google-chrome-stable', 'http://www.facebook.com'])
    return [low1.pid, med1.pid, high1.pid]
However, when I attempt to reference the PIDs later on in the program it seems that the PIDs have expired. Here is the error I get:
7894
strace: attach: ptrace(PTRACE_ATTACH, ...): No such process
7896
strace: attach: ptrace(PTRACE_ATTACH, ...): No such process
7901
strace: attach: ptrace(PTRACE_ATTACH, ...): No such process
Is the issue that Chrome doesn't assign permanent PIDs to its tabs/processes (i.e. it forks once a Chrome process launches and ditches the parent process)?
Note: the approach is browser-agnostic; I just need a way to obtain stable access to the PIDs of these launched processes. If anyone has suggestions on doing this, they would be very much appreciated.
Thanks!

Chrome does not run as root under normal operating conditions; there are several existing discussions of this.
There are a couple of arguments that let you work around it: by passing --user-data-dir and --no-sandbox you will be able to run Chrome as root.
import os
from subprocess import Popen

line_count = 10
outfile = 'foo.txt'
cmd = 'sudo timeout 10 strace -p {} -o temp.out | cat temp.out | tail -{} > {}'
tab_sites = ['www.google.com', 'www.yahoo.com', 'www.msn.com']

for site in tab_sites:
    chrome_proc = Popen(['google-chrome-stable', site, '--user-data-dir', '--no-sandbox'])
    print(chrome_proc.pid)
    os.system(cmd.format(chrome_proc.pid, line_count, outfile))
Alternatively you can use runuser with your command:
import os
import sys
from subprocess import Popen

line_count = 10
outfile = 'foo.txt'
cmd = 'sudo timeout 10 strace -p {} -o temp.out | cat temp.out | tail -{} > {}'
tab_sites = ['www.google.com', 'www.yahoo.com', 'www.msn.com']

for site in tab_sites:
    chrome_proc = Popen(['runuser', '-u', sys.argv[1], 'google-chrome-stable', site])
    print(chrome_proc.pid)
    os.system(cmd.format(chrome_proc.pid, line_count, outfile))
Just pass in the username you want to run this under: sudo python trace_chrome.py your_user_name
I understand you aren't able to show your exact code, which does make it tougher to assist.

To see the process IDs of your Chrome tabs you can open Chrome's Task Manager by pressing Shift+Esc. I did some testing and, as you suspect, the PID is different from the one reported by Popen.
One way to get an accessible PID with Chrome is to use the option --temp-profile to create a new session for each site instead of using an existing one.
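As an untested sketch of that idea (exact flag behaviour can vary across Chrome versions), launching each site with --temp-profile and holding on to the Popen handles might look like this:
from subprocess import Popen

# Hypothetical example: each site gets its own temporary profile, so Chrome
# starts a separate browser process instead of handing the URL off to an
# existing instance, and the Popen PID stays meaningful.
sites = ['http://www.google.com', 'http://www.netflix.com', 'http://www.facebook.com']
procs = []
for site in sites:
    proc = Popen(['google-chrome-stable', '--temp-profile', site])
    procs.append(proc)
    print(site, '->', proc.pid)

# poll() returning None means the process is still alive, so its PID can
# still be attached to (e.g. with strace -p).
for proc in procs:
    print(proc.pid, 'alive' if proc.poll() is None else 'exited')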

Related

Simple use of Python’s Subprocess.call to open multiple Mac apps

I’d like to open a few apps using a very simple python script:
from subprocess import call
call("/Applications/Google Chrome.app/Contents/MacOS/Google Chrome")
call("/Applications/MongoDB Compass.app/Contents/MacOS/MongoDB Compass")
The problem is that opening them this way seems to open a terminal window along with the app itself - for chrome, it outputs this in the terminal for example:
Last login: Sun Oct 23 00:20:38 on ttys000
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome ; exit;
nick@Nicks-MBP ~ % /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome ; exit;
objc[3817]: Class WebSwapCGLLayer is implemented in both /System/Library/Frameworks/WebKit.framework/Versions/A/Frameworks/WebCore.framework/Versions/A/Frameworks/libANGLE-shared.dylib (0x7ffb45565ec8) and /Applications/Google Chrome.app/Contents/Frameworks/Google Chrome Framework.framework/Versions/106.0.5249.119/Libraries/libGLESv2.dylib (0x116ba9668). One of the two will be used. Which one is undefined.
So it hijacks the terminal and does not proceed to this next line:
call("/Applications/MongoDB Compass.app/Contents/MacOS/MongoDB Compass")
If I try to call these:
call(("/Applications/Google Chrome.app"))
call(("/Applications/MongoDB Compass.app"))
I get this error, with other posts stating that it may be a dir and not an app:
OSError: [Errno 13] Permission denied
How can this be fixed? Note that I do not want to do this despite it working:
os.system("open /Applications/" + app + ".app")
Because I need to be able to wait for the apps to finish opening before running another command, hence the use of subprocess.call. Thank you.
UPDATE:
I now have this:
print("start")
call(
    [("/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome")],
    shell=True,
    stdout=subprocess.DEVNULL,
    stderr=subprocess.STDOUT,
)
print("end")
But the print("end") line only executes when I exit out of Chrome. How can I get it to wait for Chrome to load and then print 'end' after? Also it requires shell=True for some reason, otherwise it complains with:
FileNotFoundError: [Errno 2] No such file or directory: '/Applications/Google\\ Chrome.app/Contents/MacOS/Google\\ Chrome'
Updated Answer
This also appears to work and doesn't involve the shell:
import subprocess as sp
print("start")
sp.run(["open", "-a", "Google Chrome"])
print("end")
Original Answer
This appears to do what you want, though I have no explanation as to why:
import subprocess as sp
print("start")
sp.Popen(
    ["/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"],
    stdin=sp.DEVNULL,
    stdout=sp.DEVNULL,
    stderr=sp.DEVNULL)
print("end")

How to kill python script and its children if another instance of it is already running

I need to write a piece of code that will check if another instance of a python script with the same name is running, kill it and all its children and do the job of the script. On my way to solution I have this code:
import os, sys
import time
import signal
import subprocess
from subprocess import PIPE, Popen

if sys.argv[1] == 'boss':
    print 'I am boss', \
        "pid:", os.getpid(), \
        "pgid:", os.getgid()
    # kill previous process with same name
    my_pid = os.getpid()
    pgrep = Popen(['pgrep', '-f', 'subp_test.py'], stdout=PIPE)
    prev_pids = pgrep.communicate()[0].split()
    print 'previous processes:', prev_pids
    for pid in prev_pids:
        if int(pid) != int(my_pid):
            print 'killing', pid
            os.kill(int(pid), signal.SIGKILL)
    # do the job
    subprocess.call('python subp_test.py 1', shell=True)
    subprocess.call('python subp_test.py 2', shell=True)
    subprocess.call('python some_other_script.py', shell=True)
else:
    p_num = sys.argv[1]
    for i in range(20):
        time.sleep(1)
        print 'child', p_num, \
            "pid:", os.getpid(), \
            "pgid:", os.getgid(), \
            ":", i
This will kill all processes that have the substring 'subp_test.py' in their command line, but it will not kill some_other_script.py or other programs without 'subp_test.py' in them.
The calls that subp_test.py will execute are not known in advance, but as I understand it, they are supposed to sit under it in the process tree.
So how do I access all the children of subp_test.py in order to kill them when a new instance of subp_test.py begins to run?
Also, are there better approaches to implement this logic?
I use Python 2.6.5 on Ubuntu 10.04.
Run your program under a new session/process group and write the group id to a file. When your program starts, run kill -SIGNAL -prevsessiongroup (the negative id targets the whole group). This will kill all of the processes and their children, etc., unless one of them explicitly changes its session or process group. The URLs below contain snippets of code you can use; a rough sketch follows them.
https://docs.python.org/2/library/os.html#os.setsid
http://web.archive.org/web/20131017130434/http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
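A rough sketch of that approach (untested; the file path /tmp/subp_test.pgid and the child command are just examples):
import os
import signal
import subprocess

PGID_FILE = '/tmp/subp_test.pgid'  # hypothetical location for the saved group id

# On startup, kill the previous instance's whole process group, if any.
try:
    with open(PGID_FILE) as f:
        old_pgid = int(f.read().strip())
    os.killpg(old_pgid, signal.SIGKILL)
except (IOError, ValueError, OSError):
    pass  # no previous instance, stale file, or group already gone

# Put this instance into its own session/process group (if it isn't a group
# leader already) and record the group id for the next run.
try:
    os.setsid()
except OSError:
    pass  # already a session/process group leader
with open(PGID_FILE, 'w') as f:
    f.write(str(os.getpgrp()))

# Children started from here inherit the new process group, so killing the
# group later takes them down as well.
subprocess.call('python subp_test.py 1', shell=True)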
You can use this command to kill every process whose command line matches your Python script:
kill -9 `ps -ef | grep your_python_script.py | grep -v grep | awk '{print $2}'`

Script to capture everything on screen

So I have this python3 script that does a lot of automated testing for me, it takes roughly 20 minutes to run, and some user interaction is required. It also uses paramiko to ssh to a remote host for a separate test.
Eventually, I would like to hand this script over to the rest of my team however, it has one feature missing: evidence collection!
I need to capture everything that appears on the terminal to a file. I have been experimenting with the Linux command 'script'; however, I cannot find an automated way to start script and then run my Python script under it.
I have a command in /usr/bin/
script log_name;python3.5 /home/centos/scripts/test.py
When I run my command, it just stalls. Any help would be greatly appreciated!
Thanks :)
Is a redirection of the output to a file what you need?
python3.5 /home/centos/scripts/test.py > output.log 2>&1
Or if you want to keep the output on the terminal AND save it into a file:
python3.5 /home/centos/scripts/test.py 2>&1 | tee output.log
I needed to do this, and ended up with a solution that combined pexpect and ttyrec.
ttyrec produces output files that can be played back with a few different player applications - I use TermTV and IPBT.
If memory serves, I had to use pexpect to launch ttyrec (as well as my test's other commands) because I was using Jenkins to schedule the execution of my test, and pexpect seemed to be the easiest way to get a working interactive shell in a Jenkins job.
In your situation you might be able to get away with using just ttyrec, and skip the pexpect step - try running ttyrec -e command as mentioned in the ttyrec docs.
Finally, on the topic of interactive shells, there's an alternative to pexpect named "empty" that I've had some success with too - see http://empty.sourceforge.net/. If you're running Ubuntu or Debian you can install empty with apt-get install empty-expect
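If you do end up using pexpect directly, a minimal sketch (assuming pexpect is installed; session.log is just an example file name) that runs the test script under a pseudo-terminal and records the whole session might be:
import pexpect

# Spawn the test under a pseudo-terminal so interactive prompts still work.
child = pexpect.spawn('python3.5 /home/centos/scripts/test.py', timeout=None)

# Everything that crosses the pty is also appended to this file
# (binary mode, since spawn() was not given an encoding).
logfile = open('session.log', 'wb')
child.logfile = logfile

# Hand the terminal over to the user; their interaction and the
# script's output keep flowing into session.log.
child.interact()
logfile.close()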
I actually managed to do it in Python 3; it took a lot of work, but here is the Python solution:
from subprocess import Popen, PIPE

# LOG_RUN_OUTPUT is the path of the log file; ssh is a connected
# paramiko.SSHClient instance (both are defined elsewhere in the script).

def record_log(output):
    try:
        with open(LOG_RUN_OUTPUT, 'a') as file:
            file.write(output)
    except IOError:
        with open(LOG_RUN_OUTPUT, 'w') as file:
            file.write(output)

def execute(cmd, store=True):
    proc = Popen(cmd.encode("utf8"), shell=True, stdout=PIPE, stderr=PIPE)
    output = "\n".join(out.decode() for out in proc.communicate())
    template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
    output = template % (cmd, output)
    print(output)
    if store:
        record_log(output)
    return output

# SSH function
def ssh_connect(start_message, host_id, user_name, key, stage_commands):
    print(start_message)
    try:
        ssh.connect(hostname=host_id, username=user_name, key_filename=key, timeout=120)
    except Exception:
        print("Failed to connect to " + host_id)
    for command in stage_commands:
        try:
            ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command(command)
        except Exception:
            input("Paused, because " + command + " failed to run.\n Please verify and press enter to continue.")
        else:
            template = '''Command:\n====================\n%s\nResult:\n====================\n%s'''
            output = (ssh_stderr.read() + ssh_stdout.read()).decode()
            output = template % (command, output)
            record_log(output)
            print(output)
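Hypothetical usage of those helpers (the host name, user, key path, and commands below are placeholders):
# Local command; output is printed and appended to LOG_RUN_OUTPUT.
execute('ls -la /tmp')

# Remote commands over the paramiko connection, captured the same way.
ssh_connect('Connecting to the test host...',
            'test-host.example.com',        # hypothetical host
            'centos',                       # hypothetical user
            '/home/centos/.ssh/id_rsa',     # hypothetical key path
            ['uname -a', 'df -h'])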

Get root dialog in Python on Mac OS X, Windows?

How would I go about getting a privilege elevation dialog to pop up in my Python app? I want the UAC dialog on Windows and the password authentication dialog on Mac.
Basically, I need root privileges for part of my application and I need to get those privileges through the GUI. I'm using wxPython. Any ideas?
On Windows you cannot get the UAC dialog without starting a new process, and you cannot even start that process with CreateProcess.
The UAC dialog can be brought about by running another application that has the appropriate manifest file - see Running compiled python (py2exe) as administrator in Vista for an example of how to do this with py2exe.
You can also programmatically use the runas verb with the Win32 API ShellExecute (http://msdn.microsoft.com/en-us/library/bb762153(v=vs.85).aspx) - you can call this by using ctypes (http://python.net/crew/theller/ctypes/), which is part of the standard library on Python 2.5+ IIRC.
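A rough sketch of that runas approach (untested and Windows-only; it simply re-launches the current script elevated) might look like this:
import sys
import ctypes

def relaunch_elevated():
    # "runas" asks the shell to start the program elevated, which triggers
    # the UAC prompt. ShellExecuteW returns a value greater than 32 on success.
    params = ' '.join('"%s"' % arg for arg in sys.argv)
    ret = ctypes.windll.shell32.ShellExecuteW(
        None,            # no parent window handle
        "runas",         # verb that requests elevation
        sys.executable,  # program to run: this Python interpreter
        params,          # command line: re-run the current script
        None,            # default working directory
        1)               # SW_SHOWNORMAL
    return ret > 32

if __name__ == "__main__":
    if not ctypes.windll.shell32.IsUserAnAdmin():
        relaunch_elevated()
        sys.exit(0)
    # ... code below here runs with administrator rights ...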
Sorry, I don't know about Mac. If you give more detail on what you want to accomplish on Windows, I might be able to provide more specific help.
I know the post is a little old, but I wrote the following as a solution to my problem (running a python script as root on both Linux and OS X).
I wrote the following bash-script to execute bash/python scripts with administrator privileges (works on Linux and OS X systems):
#!/bin/bash

if [ -z "$1" ]; then
    echo "Specify executable"
    exit 1
fi

EXE=$1

available(){
    which $1 >/dev/null 2>&1
}

platform=`uname`

if [ "$platform" == "Darwin" ]; then
    MESSAGE="Please run $1 as root with sudo or install osascript (should be installed by default)"
else
    MESSAGE="Please run $1 as root with sudo or install gksu / kdesudo!"
fi

if [ `whoami` != "root" ]; then
    if [ "$platform" == "Darwin" ]; then
        # Apple
        if available osascript
        then
            SUDO=`which osascript`
        fi
    else # assume Linux
        # choose either gksudo or kdesudo
        # if both are available check which desktop is running
        if available gksudo
        then
            SUDO=`which gksudo`
        fi
        if available kdesudo
        then
            SUDO=`which kdesudo`
        fi
        if ( available gksudo && available kdesudo )
        then
            if [ $XDG_CURRENT_DESKTOP = "KDE" ]; then
                SUDO=`which kdesudo`;
            else
                SUDO=`which gksudo`
            fi
        fi
        # prefer polkit if available
        if available pkexec
        then
            SUDO=`which pkexec`
        fi
    fi
    if [ -z $SUDO ]; then
        if available zenity; then
            zenity --info --text "$MESSAGE"
            exit 0
        elif available notify-send; then
            notify-send "$MESSAGE"
            exit 0
        elif available xmessage notify-send; then
            xmessage -buttons Ok:0 "$MESSAGE"
            exit 0
        else
            echo "$MESSAGE"
        fi
    fi
fi

if [ "$platform" == "Darwin" ]
then
    $SUDO -e "do shell script \"$*\" with administrator privileges"
else
    $SUDO $@
fi
Basically, the way I set up my system is that I keep subfolders inside the bin directories (e.g. /usr/local/bin/pyscripts in /usr/local/bin), and create symbolic links to the executables. This has three benefits for me:
(1) If I have different versions, I can easily switch which one is executed by changing the symbolic link and it keeps the bin directory cleaner (e.g. /usr/local/bin/gcc-versions/4.9/, /usr/local/bin/gcc-versions/4.8/, /usr/local/bin/gcc --> gcc-versions/4.8/gcc)
(2) I can store the scripts with their extension (helpful for syntax highlighting in IDEs), but the executables do not contain them because I like it that way (e.g. svn-tools --> pyscripts/svn-tools.py)
(3) The reason I will show below:
I name the script "run-as-root-wrapper" and place it in a very common path (e.g. /usr/local/bin) so python doesn't need anything special to locate it. Then I have the following run_command.py module:
import os
import sys
from distutils.spawn import find_executable

#===========================================================================#

def wrap_to_run_as_root(exe_install_path, true_command, expand_path = True):
    run_as_root_path = find_executable("run-as-root-wrapper")
    if(not run_as_root_path):
        return False
    else:
        if(os.path.exists(exe_install_path)):
            os.unlink(exe_install_path)
        if(expand_path):
            true_command = os.path.realpath(true_command)
            true_command = os.path.abspath(true_command)
            true_command = os.path.normpath(true_command)
        f = open(exe_install_path, 'w')
        f.write("#!/bin/bash\n\n")
        f.write(run_as_root_path + " " + true_command + " $@\n\n")
        f.close()
        os.chmod(exe_install_path, 0755)
        return True
In my actual python script, I have the following function:
def install_cmd(args):
    exe_install_path = os.path.join(args.prefix,
                                    os.path.join("bin", args.name))
    if(not run_command.wrap_to_run_as_root(exe_install_path, sys.argv[0])):
        os.symlink(os.path.realpath(sys.argv[0]), exe_install_path)
So if I have a script called TrackingBlocker.py (actual script I use to modify the /etc/hosts file to re-route known tracking domains to 127.0.0.1), when I call "sudo /usr/local/bin/pyscripts/TrackingBlocker.py --prefix /usr/local --name ModifyTrackingBlocker install" (arguments handled via argparse module), it installs "/usr/local/bin/ModifyTrackingBlocker", which is a bash script executing
/usr/local/bin/run-as-root-wrapper /usr/local/bin/pyscripts/TrackingBlocker.py [args]
e.g.
ModifyTrackingBlocker add tracker.ads.com
executes:
/usr/local/bin/run-as-root-wrapper /usr/local/bin/pyscripts/TrackingBlocker.py add tracker.ads.com
which then displays the authentication dialog needed to get the privileges to add:
127.0.0.1 tracker.ads.com
to my hosts file (which is only writable by a superuser).
If you want to simplify/modify it to run only certain commands as root, you could simply add this to your script (with the necessary imports noted above + import subprocess):
def run_as_root(command, args, expand_path = True):
    run_as_root_path = find_executable("run-as-root-wrapper")
    if(not run_as_root_path):
        return 1
    else:
        if(expand_path):
            command = os.path.realpath(command)
            command = os.path.abspath(command)
            command = os.path.normpath(command)
        cmd = []
        cmd.append(run_as_root_path)
        cmd.append(command)
        cmd.extend(args)
        return subprocess.call(' '.join(cmd), shell=True)
Using the above (in run_command module):
>>> ret = run_command.run_as_root("/usr/local/bin/pyscripts/TrackingBlocker.py", ["status", "display"])
>>> /etc/hosts is blocking approximately 16147 domains
I'm having the same problem on Mac OS X. I have a working solution, but it's not optimal. I will explain my solution here and continue looking for a better one.
At the beginning of the program I check if I'm root or not by executing
def _elevate():
    """Elevate user permissions if needed"""
    if platform.system() == 'Darwin':
        try:
            os.setuid(0)
        except OSError:
            _mac_elevate()
os.setuid(0) will fail if I'm not already root, and that will trigger _mac_elevate(), which relaunches my program from the start as administrator with the help of osascript. osascript can be used to execute AppleScript and other things. I use it like this:
def _mac_elevate():
    """Relaunch asking for root privileges."""
    print "Relaunching with root permissions"
    applescript = ('do shell script "./my_program" '
                   'with administrator privileges')
    exit_code = subprocess.call(['osascript', '-e', applescript])
    sys.exit(exit_code)
The problem with this is that if I use subprocess.call as above, I keep the current process running, so there are two instances of my app running and two dock icons. If I use subprocess.Popen instead and let the non-privileged process die instantly, I can't make use of the exit code, nor can I fetch the stdout/stderr streams and propagate them to the terminal that started the original process.
Using osascript with the with administrator privileges clause is actually just AppleScript wrapping a call to AuthorizationExecuteWithPrivileges().
But you can call AuthorizationExecuteWithPrivileges() directly from Python 3 with ctypes.
For example, the following parent script spawn_root.py (run as a non-root user) spawns a child process root_child.py (run with root privileges).
The user will be prompted to enter their password in the OS GUI pop-up. Note that this will not work on a headless session (eg over ssh). It must be run inside the GUI (eg Terminal.app).
After entering the user's password into the MacOS challenge dialog correctly, root_child.py executes a soft shutdown of the system, which requires root permission on MacOS.
Parent (spawn_root.py)
#!/usr/bin/env python3
import sys, ctypes
import ctypes.util
from ctypes import byref

sec = ctypes.cdll.LoadLibrary(ctypes.util.find_library("Security"))

kAuthorizationFlagDefaults = 0

auth = ctypes.c_void_p()
r_auth = byref(auth)
sec.AuthorizationCreate(None, None, kAuthorizationFlagDefaults, r_auth)

exe = [sys.executable, "root_child.py"]
args = (ctypes.c_char_p * len(exe))()
for i, arg in enumerate(exe[1:]):
    args[i] = arg.encode('utf8')

io = ctypes.c_void_p()
sec.AuthorizationExecuteWithPrivileges(auth, exe[0].encode('utf8'), 0, args, byref(io))
Child (root_child.py)
#!/usr/bin/env python3
import os

if __name__ == "__main__":
    f = open("root_child.out", "a")
    try:
        os.system("shutdown -h now")
        f.write("SUCCESS: I am root!\n")
    except Exception as e:
        f.write("ERROR: I am not root :'(" + str(e) + "\n")
    f.close()
Security Note
Obviously, any time you run something as root, you need to be very careful!
AuthorizationExecuteWithPrivileges() is deprecated, but it can be used safely. But it can also be used unsafely!
It basically boils down to: do you actually know what you're running as root? If the script you're running as root is located in a Temp dir that has world-writeable permissions (as a lot of MacOS App installers have done historically), then any malicious process could gain root access.
To execute a process as root safely (a minimal ownership/permissions check is sketched after this list):
Make sure that the permissions on the process-to-be-launched are root:root 0400 (or writeable only by root)
Specify the absolute path to the process-to-be-launched, and don't allow any malicious modification of that path
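A minimal sketch of that check (not a full security audit; root_child.py is just the example target from above):
import os
import stat
import sys

def is_safe_to_elevate(path):
    """Return True only if the target is owned by root and not writable
    by group or others, so an unprivileged process cannot swap it out."""
    st = os.stat(path)
    if st.st_uid != 0:
        return False                      # must be owned by root
    if st.st_mode & (stat.S_IWGRP | stat.S_IWOTH):
        return False                      # no group/other write access
    return True

target = os.path.abspath("root_child.py")  # always resolve to an absolute path
if not is_safe_to_elevate(target):
    sys.exit("Refusing to elevate %s: unsafe ownership or permissions" % target)
# ... only now call AuthorizationExecuteWithPrivileges() on `target` ...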
Sources
https://github.com/cloudmatrix/esky/blob/master/esky/sudo/sudo_osx.py
https://github.com/BusKill/buskill-app/issues/14
https://www.jamf.com/blog/detecting-insecure-application-updates-on-macos/

Run child processes as different user from a long running Python process

I've got a long running, daemonized Python process that uses subprocess to spawn new child processes when certain events occur. The long running process is started by a user with super user privileges. I need the child processes it spawns to run as a different user (e.g., "nobody") while retaining the super user privileges for the parent process.
I'm currently using
su -m nobody -c <program to execute as a child>
but this seems heavyweight and doesn't die very cleanly.
Is there a way to accomplish this programmatically instead of using su? I'm looking at the os.set*uid methods, but the doc in the Python std lib is quite sparse in that area.
Since you mentioned a daemon, I can conclude that you are running on a Unix-like operating system. This matters, because how to do this depends on the kind of operating system. This answer applies only to Unix, including Linux and Mac OS X.
Define a function that will set the gid and uid of the running process.
Pass this function as the preexec_fn parameter to subprocess.Popen
subprocess.Popen will use the fork/exec model to use your preexec_fn. That is equivalent to calling os.fork(), preexec_fn() (in the child process), and os.exec() (in the child process) in that order. Since os.setuid, os.setgid, and preexec_fn are all only supported on Unix, this solution is not portable to other kinds of operating systems.
The following code is a script (Python 2.4+) that demonstrates how to do this:
import os
import pwd
import subprocess
import sys

def main(my_args=None):
    if my_args is None: my_args = sys.argv[1:]
    user_name, cwd = my_args[:2]
    args = my_args[2:]

    pw_record = pwd.getpwnam(user_name)
    user_name = pw_record.pw_name
    user_home_dir = pw_record.pw_dir
    user_uid = pw_record.pw_uid
    user_gid = pw_record.pw_gid
    env = os.environ.copy()
    env['HOME'] = user_home_dir
    env['LOGNAME'] = user_name
    env['PWD'] = cwd
    env['USER'] = user_name

    report_ids('starting ' + str(args))
    process = subprocess.Popen(
        args, preexec_fn=demote(user_uid, user_gid), cwd=cwd, env=env
    )
    result = process.wait()
    report_ids('finished ' + str(args))
    print 'result', result

def demote(user_uid, user_gid):
    def result():
        report_ids('starting demotion')
        os.setgid(user_gid)
        os.setuid(user_uid)
        report_ids('finished demotion')
    return result

def report_ids(msg):
    print 'uid, gid = %d, %d; %s' % (os.getuid(), os.getgid(), msg)

if __name__ == '__main__':
    main()
You can invoke this script like this:
Start as root...
(hale)/tmp/demo$ sudo bash --norc
(root)/tmp/demo$ ls -l
total 8
drwxr-xr-x 2 hale wheel 68 May 17 16:26 inner
-rw-r--r-- 1 hale staff 1836 May 17 15:25 test-child.py
Become non-root in a child process...
(root)/tmp/demo$ python test-child.py hale inner /bin/bash --norc
uid, gid = 0, 0; starting ['/bin/bash', '--norc']
uid, gid = 0, 0; starting demotion
uid, gid = 501, 20; finished demotion
(hale)/tmp/demo/inner$ pwd
/tmp/demo/inner
(hale)/tmp/demo/inner$ whoami
hale
When the child process exits, we go back to root in parent ...
(hale)/tmp/demo/inner$ exit
exit
uid, gid = 0, 0; finished ['/bin/bash', '--norc']
result 0
(root)/tmp/demo$ pwd
/tmp/demo
(root)/tmp/demo$ whoami
root
Note that having the parent process wait around for the child process to exit is for demonstration purposes only. I did this so that the parent and child could share a terminal. A daemon would have no terminal and would seldom wait around for a child process to exit.
There is an os.setuid() method. You can use it to change the current user for this script.
One solution is, at the point where the child starts, to call os.setuid() and os.setgid() to change the user and group IDs, and after that to call one of the os.exec* methods to spawn the new child. The newly spawned child will run as the less powerful user, with no way to become a more powerful one again.
Another option is to do it when the daemon (the master process) starts, so that all newly spawned processes run under the same user.
For information look at the manpage for setuid.
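A minimal sketch of that pattern (untested; the user name 'nobody' and the command are placeholders), using fork, setgid/setuid, and exec directly:
import os
import pwd

def spawn_as_user(user_name, argv):
    """Fork, drop privileges in the child, then exec the command.
    Returns the child's pid to the (still privileged) parent."""
    pw = pwd.getpwnam(user_name)
    pid = os.fork()
    if pid == 0:
        # Child: drop the group first, then the user (the order matters,
        # since after setuid we no longer have the right to change the gid).
        os.setgid(pw.pw_gid)
        os.setuid(pw.pw_uid)
        os.execvp(argv[0], argv)  # never returns on success
    return pid

# Example: run `id` as the unprivileged user "nobody".
child_pid = spawn_as_user('nobody', ['id'])
os.waitpid(child_pid, 0)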
New versions of Python (3.9 onwards) support the user and group options out of the box:
process = subprocess.Popen(args, user=username)
Newer versions also provide a subprocess.run function, a simple wrapper around subprocess.Popen. While subprocess.Popen starts the command and returns immediately, subprocess.run runs the command and waits for its completion.
Thus we can also do:
subprocess.run(args, user=username)
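A slightly fuller, hedged example (the user and group names are placeholders; both parameters require Python 3.9+ on POSIX):
import subprocess

# Run the command as an unprivileged user and group; check=True raises
# CalledProcessError if the command exits with a non-zero status.
result = subprocess.run(
    ['id'],
    user='nobody',        # placeholder user name
    group='nogroup',      # placeholder group name
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)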
Actually, the example with preexec_fn did not work for me.
My solution that works fine for running a shell command as another user and getting its output is:
apipe = subprocess.Popen('sudo -u someuser /execution', shell=True, stdout=subprocess.PIPE)
Then, if you need to read from the process stdout:
cond = True
while cond:
    line = apipe.stdout.readline()
    if (....):
        cond = False
Hope it is useful not only in my case.
