PYTHONPATH variable missing when using os.execlpe to restart script as root - python

My end goal is to have a script that can be initially launched by a non-privileged user without using sudo, but will prompt for sudo password and self-elevate to root. I've been doing this with a bash wrapper script but would like something tidier that doesn't need an additional file.
Some googling turned up this question on StackOverflow, where the accepted answer suggests using os.execlpe to re-launch the script while retaining the same environment. I tried it, but the second run immediately failed to import a non-built-in module.
Investigating revealed that the PYTHONPATH variable is not carried over, while almost every other environment variable is (PERL5LIB is also missing, and a couple of others, but I'm not using them so they're not troubling me).
I have a brief little test script that demonstrates the issue:
#!/usr/bin/env python
import os
import sys
print(len(os.environ['PYTHONPATH']))
euid = os.geteuid()
if euid != 0:
    print("Script not started as root. Running with sudo.")
    args = ['sudo', sys.executable] + sys.argv + [os.environ]
    os.execlpe('sudo', *args)
print("Success")
Expected output would be:
6548
Script not started as root. Running with sudo.
[sudo] password for esker:
6548
Success
But instead I'm getting a KeyError:
6548
Script not started as root. Running with sudo.
[sudo] password for esker:
Traceback (most recent call last):
File "/usr/home/esker/execlpe_test.py", line 5, in <module>
print(len(os.environ['PYTHONPATH']))
File "/vol/apps/python/2.7.6/lib/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'PYTHONPATH'
What would be the cause of this missing variable, and how can I keep it from disappearing? Alternatively, is there a better way of doing this that avoids the problem altogether?

I found this very weird too, and couldn't find any direct way to pass the environment into the replaced process, though I didn't dig into it very deeply either.
What I found to work as a workaround is this:
pypath = os.environ.get('PYTHONPATH', "")
args = ['sudo', f"PYTHONPATH={pypath}", sys.executable] + sys.argv
os.execvpe('sudo', args, os.environ)
I.e. explicitly pass PYTHONPATH= to the new process. Note that I prefer os.execvpe(), but this works the same with the other exec*() variants, given the correct call. See this answer for a good overview of the exec* family.
However, PATH and the rest of the environment still end up being the replaced process's own environment, as an initial print(os.environ) shows; but PYTHONPATH does get passed on this way.
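For completeness, here is how that workaround slots into the test script from the question (a minimal sketch; behaviour will still depend on the local sudo configuration):
#!/usr/bin/env python
import os
import sys

if os.geteuid() != 0:
    print("Script not started as root. Running with sudo.")
    # Forward PYTHONPATH explicitly, since it does not survive the relaunch otherwise.
    pypath = os.environ.get('PYTHONPATH', "")
    args = ['sudo', 'PYTHONPATH=' + pypath, sys.executable] + sys.argv
    os.execvpe('sudo', args, os.environ)

print("Success")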

You're passing the environment as arguments to your script instead of arguments to execlpe. Try this instead:
args = ['sudo', sys.executable] + sys.argv
os.execvpe('sudo', args, os.environ)
If you just want to inherit the environment, you can even use:
os.execvp('sudo', args)

Related

Run subprocess to call another python script without waiting

I have read way too many threads now and am really lost.
Just trying to do something basic before I make it complicated.
So I have a script test.py.
I want to call the script from within runme.py without waiting, so it will process the other chunk of code; but then, when it gets to the end, it should wait for test.py to finish before continuing on.
I can't seem to figure out the correct syntax for p = subprocess.Popen (I have tried so many variations).
And do I need the path to test.py if it's in the same directory?
Here is what I have but can't get to work:
import subprocess
p = subprocess.Popen(['python test.py'])
#do some code
p.wait()
I can't seem to figure out the correct syntax for p = subprocess.Popen (I have tried so many variations).
You want to pass it a list of arguments. The first argument is the program to run, python (although actually, you probably want sys.executable here); the second is the script that you want python to run. So:
p = subprocess.Popen(['python', 'test.py'])
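For instance, to make sure test.py runs under the same interpreter as the calling script (a minimal sketch, assuming test.py is in the current working directory):
import subprocess
import sys

# sys.executable is the interpreter running this script.
p = subprocess.Popen([sys.executable, 'test.py'])
# ... do some other code ...
p.wait()   # block here until test.py finishes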
And do I need the path to test.py if it's in the same directory?
This will work the same way as if you ran python test.py at the shell: it will just pass test.py as-is to python, and python will treat that as a path relative to the current working directory (CWD).
So, if test.py is in the CWD, this will just work.
If test.py is somewhere else, then you need to provide either an absolute path, or one relative to the CWD.
One common case is that test.py is not necessarily in the CWD, but is instead in the same directory as the script/module that wants to launch it:
scriptpath = os.path.join(os.path.dirname(__file__), 'test.py')
… or in the same directory as the main script used to start your program:1
scriptpath = os.path.join(os.path.dirname(sys.argv[0]), 'test.py')
Either way, you just pass that as the argument:
p = subprocess.Popen(['python', scriptpath])
1. On some platforms, this may actually be a relative path. If you might have done an os.chdir since startup, it will now be wrong. If you need to handle that, you want to stash os.path.abspath(os.path.dirname(sys.argv[0])) in the main script at startup, then pass it down to other functions for them to use instead of calling dirname(argv[0]) themselves.
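For illustration, a minimal sketch of that stash-at-startup pattern (the SCRIPT_DIR name and launch_test helper are made up for the example):
import os
import subprocess
import sys

# Stashed once at startup, before any later os.chdir() can change it.
SCRIPT_DIR = os.path.abspath(os.path.dirname(sys.argv[0]))

def launch_test(script_dir):
    # The stashed directory is passed down instead of recomputing dirname(argv[0]).
    return subprocess.Popen([sys.executable, os.path.join(script_dir, 'test.py')])

p = launch_test(SCRIPT_DIR)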

Python 3: subprocess, changing directory

I have a main project in C:/myproject/harry.py
harry.py is launching threads.
harry.py is saving and loading text documents every few seconds using an already established path self.relativePath = os.path.dirname(sys.argv[0])
Inside each thread, a subprocess is being called to activate a command line .exe file found in C:/myproject/betty/here.exe
I have tried all sorts of things to achieve this such as:
my_env = os.environ
my_env["PATH"] = "/usr/sbin:/sbin:" + my_env["PATH"]
doit = subprocess.Popen('cd betty/', 'here.exe -command', env=my_env)
doit.wait()
or
my_env = os.environ
my_env["PATH"] = "/usr/sbin:/sbin:" + my_env["PATH"]
doit = subprocess.Popen('here.exe -command', cwd='C:/myproject/betty/')
doit.wait()
Response:
FileNotFoundError: [WinError 2] The system cannot find the file specified
Is it possible to run the subprocess inside the subfolder with the custom path ... in a way that will not interfere with the already established path self.relativePath?
Thanks,
You were fairly close:
> doit = subprocess.Popen('here.exe -command', cwd='C:/myproject/betty/')
This would actually work if your command were called here.exe -command but of course, no such file exists. You want ['here.exe', '-command'] (or somewhat more unsafely and less efficiently add shell=True; but really, don't).
It seems you forgot to pass in env=my_env in this attempt, too; though does here.exe really require you to modify the PATH? And if it does, repeatedly creating a new copy for each new subprocess seems slightly wasteful.
You'll also want to switch to subprocess.run() or one of the legacy wrapper functions; you should really only use the low-level Popen() directly when the higher-level functions can't do what you need.
On the other hand, does here.exe really need to run in a particular directory, and does that directory exist on your PATH? Windows is slightly weird and Windows programmers are often unaware of basic command-line usability design principles; but if here.exe is at all correctly written, perhaps you are actually looking for:
s = subprocess.run(['c:/myproject/betty/here.exe', '-command'], env=my_env)
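If the PATH tweak really is needed, one way to avoid rebuilding the environment for every subprocess is to make a single copy up front and reuse it (a sketch; the paths are the question's example values):
import os
import subprocess

# Build the modified environment once, instead of per call.
my_env = dict(os.environ)
my_env["PATH"] = "/usr/sbin:/sbin:" + my_env["PATH"]

def run_here(*args):
    # check=True raises CalledProcessError if here.exe exits with a non-zero status.
    return subprocess.run(['c:/myproject/betty/here.exe'] + list(args),
                          env=my_env, check=True)

run_here('-command')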
I have discovered the answer with some help from various stackoverflow posts on the topic as well as stumbling through possible solutions. It was not easy!
self.relativePath = os.path.dirname(sys.argv[0])
self.relativePath1 = self.relativePath + '\\your_subdirectoryHERE\\'
Be sure to include double backslashes (i.e. escaped backslashes in the string literal), so the separators match what os.path.dirname(sys.argv[0]) returns.
self.process = subprocess.Popen(self.relativePath1 + 'flare.exe -command', cwd=self.relativePath1)
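A slightly tidier variant of the same idea, using os.path.join instead of hand-escaped backslashes and an argument list instead of one long string (a sketch; flare.exe and -command are taken from the answer above):
import os
import subprocess
import sys

relative_path = os.path.dirname(sys.argv[0])
tool_dir = os.path.join(relative_path, 'your_subdirectoryHERE')

# The argument list avoids quoting issues; cwd makes the tool run in its own folder.
process = subprocess.Popen([os.path.join(tool_dir, 'flare.exe'), '-command'],
                           cwd=tool_dir)
process.wait()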

Execute batch file in different directory

I have a file structure like the following (Windows):
D:\
    dir_1\
        batch_1.bat
        dir_1a\
            batch_2.bat
    dir_2\
        main.py
For the sake of this question, batch_1.bat simply calls batch_2.bat, and looks like:
cd dir_1a
start batch_2.bat %*
Opening batch_1.bat from a command prompt indeed opens batch_2.bat as it's supposed to, and from there on, everything is golden.
Now I want my Python file, D:\dir_2\main.py, to spawn a new process which starts batch_1.bat, which in turn should start batch_2.bat. So I figured the following Python code should work:
import subprocess
subprocess.Popen(['cd "D:/dir_1"', "start batch_1.bat"], shell=True)
This results in "The system cannot find the path specified" being printed to my Python console. (No error is raised, of course.) This is due to the first command. I get the same result even if I cut it down to:
subprocess.Popen(['cd "D:/"'], shell=True)
I also tried starting the batch file directly, like so:
subprocess.Popen("start D:/dir_1/batch_1.bat", shell=True)
For reasons that I don't entirely get, this seems to just open a Windows command prompt, in dir_2.
If I forego the start part of this command, then my Python process is going to end up waiting for batch_1 to finish, which I don't want. But it does get a little further:
subprocess.Popen("D:/dir_1/batch_1.bat", shell=True)
This results in batch_1.bat successfully executing... in dir_2, the directory of the Python script, rather than the directory of batch_1.bat, which results in it not being able to find dir_1a\ and hence, batch_2.bat is not executed at all.
I am left highly confused. What am I doing wrong, and what should I be doing instead?
Your question is answered here: Python specify popen working directory via argument
In a nutshell, just pass an optional cwd argument to Popen:
subprocess.Popen(["batch_1.bat"], shell=True, cwd=r'd:\<your path>\dir1')

Python function subprocess.check_output returns CalledProcessError: command returns non-zero exit status

As a follow-up to my previous question, which got solved quickly, I'm running the Python code below in WinPython:
import os, subprocess
os.chdir("C:/Users/Mohammad/Google Drive/PhD/Spyder workspace/production-consumption/logtool-examples/")
logtoolDir="C:/Users/Mohammad/Google Drive/PhD/Spyder workspace/production-consumption/logtool-examples/ "
#processEnv = {'JAVA_HOME': 'C:/Program Files/Java/jdk1.8.0_66/'}
args = r'"org.powertac.logtool.example.ProductionConsumption D:/PowerTAC/Logs/2015/log/powertac-sim-1.state testrunoutput.data"'
subprocess.check_output(['mvn', 'exec:exec', '-Dexec.args=' + args],
                        shell=True, cwd=logtoolDir)
And get the following error:
CalledProcessError: Command '['mvn', 'exec:exec', '-Dexec.args="org.powertac.logtool.example.ProductionConsumption D:/PowerTAC/Logs/2015/log/powertac-sim-1.state testrunoutput.data"']' returned non-zero exit status 1
The Apache Maven executable does not seem to run. My guess is that the arguments are being passed on to the program incorrectly. I couldn't find any typos in the args or the logtoolDir arguments, but maybe I'm missing something there? Any ideas?
UPDATE:
The mvn exec:exec was not running because check_output was somehow unable to see Windows' environment variables. I added the Path variable to processEnv, and now passing 'mvn', '--version' in the check_output args confirms that Maven runs. The code still doesn't run, but I imagine it's probably an issue with how I've defined the directories.
Cheers.
Problem solved. Basically: a) subprocess.check_output could not read Windows' environment variables (e.g. PATH, JAVA_HOME), so I had to redefine the ones I was using in processEnv and pass that along in the function's arguments. Also, b) the args variable was defined incorrectly: I needed to remove one set of quotation marks and also make it a raw string using the r prefix.
The corrected code:
logtoolDir = 'C:/Users/Mohammad/Google Drive/PhD/Spyder workspace/production-consumption/logtool-examples/'
processEnv = {'JAVA_HOME': 'C:/Program Files/Java/jdk1.8.0_66/jre/',
              'Path': 'C:/Program Files/apache-maven-3.3.3/bin/'}
args = r"org.powertac.logtool.example.ProductionConsumption D:/PowerTAC/Logs/2015/log/powertac-sim-1.state testrunoutput2.data"
print(subprocess.check_output(['mvn', 'exec:exec', '-Dexec.args=' + args],
                              shell=True, env=processEnv, cwd=logtoolDir))
Unfortunately I can't find a way to get around using the shell=True argument, which probably won't be a problem since this will only be used for data analysis.
Cheers.
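One further note: if the goal is to add Maven and the JDK to what the subprocess sees, rather than to replace the entire environment, an alternative is to start from a copy of os.environ and override only the needed entries (a sketch; the paths are the ones from the corrected code above):
import os
import subprocess

# Start from the parent's environment and add/override only what Maven needs,
# instead of handing the subprocess a two-entry environment.
processEnv = dict(os.environ)
processEnv['JAVA_HOME'] = 'C:/Program Files/Java/jdk1.8.0_66/jre/'
processEnv['PATH'] = 'C:/Program Files/apache-maven-3.3.3/bin;' + processEnv.get('PATH', '')

# Then pass env=processEnv to the same check_output call as above.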

Using python subprocess to fake running a cmd from a terminal

We have a vendor-supplied python tool (that's byte-compiled, so we don't have the source). Because of this, we're also locked into using the vendor-supplied python 2.4. The way to run the util is:
source login.sh
oupload [options]
The login.sh just sets a few env variables, and then 2 aliases:
odownload () {
    ${PYTHON_CMD} ${OCLIPATH}/ocli/commands/word_download_command.pyc "$@"
}
oupload () {
    ${PYTHON_CMD} ${OCLIPATH}/ocli/commands/word_upload_command.pyc "$@"
}
Now, when I run it their way, it works fine. It will prompt for a username and password, then do its thing.
I'm trying to create a wrapper around the tool to do some extra steps after it's run and provide some sane defaults for the utility. The problem I'm running into is I cannot, for the life of me, figure out how to use subprocess to successfully do this. It seems to realize that the original command isn't running directly from the terminal and bails.
I created a '/usr/local/bin/oupload' and copied from the original login.sh. Only difference is instead of doing an alias at the end, I actually run the command.
Then, in my python script, I try to run my new shell script:
if os.path.exists(options.zipfile):
    try:
        cmd = string.join(cmdargs, ' ')
        p1 = Popen(cmd, shell=True, stdin=PIPE)
But I get:
Enter Opsware Username: Traceback (most recent call last):
File "./command.py", line 31, in main
File "./controller.py", line 51, in handle
File "./controllers/word_upload_controller.py", line 81, in _handle
File "./controller.py", line 66, in _determineNew
File "./lib/util.py", line 83, in determineNew
File "./lib/util.py", line 112, in getAuth
Empty Username not legal
Unknown Error Encountered
SUMMARY:
Name: Empty Username not legal
Description: None
So it seemed like an extra carriage return was getting sent ( I tried rstripping all the options, didn't help ).
If I don't set stdin=PIPE, I get:
Enter Opsware Username: Traceback (most recent call last):
File "./command.py", line 31, in main
File "./controller.py", line 51, in handle
File "./controllers/word_upload_controller.py", line 81, in _handle
File "./controller.py", line 66, in _determineNew
File "./lib/util.py", line 83, in determineNew
File "./lib/util.py", line 109, in getAuth
IOError: [Errno 5] Input/output error
Unknown Error Encountered
I've tried other variations of using p1.communicate and p1.stdin.write() along with shell=False and shell=True, but I've had no luck figuring out how to properly send along the username and password. As a last resort, I tried looking at the byte code for the utility they provided - it didn't help - once I called the util's main routine with the proper arguments, it ended up core dumping with thread errors.
Final thoughts - the utility doesn't seem to want to 'wait' for any input. When run from the shell, it pauses at the 'Username' prompt. When run through python's popen, it just blazes through and ends, assuming no password was given. I tried to look up ways of maybe preloading the stdin buffer - thinking maybe the process would read from that if it was available, but couldn't figure out if that was possible.
I'm trying to stay away from using pexpect, mainly because we have to use the vendor's provided python 2.4 because of the precompiled libraries they provide and I'm trying to keep distribution of the script to as minimal a footprint as possible - if I have to, I have to, but I'd rather not use it ( and I honestly have no idea if it works in this situation either ).
Any thoughts on what else I could try would be most appreciated.
UPDATE
So I solved this by diving further into the bytecode and figuring out what I was missing from the compiled command.
However, this presented two problems -
1. The vendor code, when called, was doing an exit when it completed.
2. The vendor code was writing to stdout, which I needed to store and operate on (it contains the ID of the uploaded pkg). I couldn't just redirect stdout, because the vendor code was still asking for the username/password.
1 was solved easily enough by wrapping their code in a try/except clause.
2 was solved by doing something similar to: https://stackoverflow.com/a/616672/677373
Instead of a log file, I used cStringIO. I also had to implement a fake 'flush' method, since it seems the vendor code was calling that and complaining that the new object I had provided for stdout didn't supply it - the code ends up looking like:
import sys
from cStringIO import StringIO

class Logger(object):
    def __init__(self):
        self.terminal = sys.stdout
        self.log = StringIO()

    def write(self, message):
        self.terminal.write(message)
        self.log.write(message)

    def flush(self):
        self.terminal.flush()
        self.log.flush()
if os.path.exists(options.zipfile):
    try:
        os.environ['OCLI_CODESET'] = 'ISO-8859-1'
        backup = sys.stdout
        sys.stdout = output = Logger()
        # UploadCommand was the command found in the bytecode
        upload = UploadCommand()
        try:
            upload.main(cmdargs)
        except Exception, rc:
            pass
        sys.stdout = backup
        # now do some fancy stuff with output from output.log
I should note that the only reason I simply do a 'pass' in the except: clause is that the except clause is always called. The 'rc' is actually the return code from the command, so I will probably add handling for non-zero cases.
I tried to look up ways of maybe preloading the stdin buffer
Do you perhaps want to create a named fifo, fill it with username/password info, then reopen it in read mode and pass it to popen (as in popen(..., stdin=myfilledbuffer))?
You could also just create an ordinary temporary file, write the data to it, and reopen it in read mode, again passing the reopened handle as stdin. (This is something I'd personally avoid doing, since writing usernames/passwords to temporary files is generally a bad idea. OTOH it's easier to test than FIFOs.)
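A minimal sketch of that temporary-file approach (modern Python shown for brevity; 'oupload' and the credential strings are placeholders):
import subprocess
import tempfile

with tempfile.TemporaryFile() as answers:
    answers.write(b'myuser\nmypassword\n')   # one line per expected prompt
    answers.seek(0)                          # rewind so the child reads from the start
    rc = subprocess.call(['oupload', '--some-option'], stdin=answers)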
As for the underlying cause: I suspect that the offending software is reading from stdin via a non-blocking method. Not sure why that works when connected to a terminal.
AAAANYWAY: no need to use pipes directly via Popen at all, right? I kinda laugh at the hackishness of this, but I'll bet it'll work for you:
# you don't actually seem to need popen here IMO -- call() does better for this application.
statuscode = call('echo "%s\n%s\n" | oupload %s' % (username, password, options), shell=True)
tested with status = call('echo "foo\nbar\nbar\nbaz" |wc -l', shell = True) (output is '4', naturally.)
The original question was solved by just avoiding the issue: instead of going through the terminal, I imported the python code that the shell script was calling and used that directly.
I believe J.F. Sebastian's answer would probably work better for what was originally asked, however, so I'd suggest people looking for an answer to a similar question look down the path of using the pty module.
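For anyone exploring that route, here is a rough sketch of driving a prompt through a pseudo-terminal with the standard pty module (Python 3 shown; 'oupload' and the credential strings are placeholders, and the details will depend on how the tool reads its prompts):
import os
import pty
import subprocess

# Give the child a pty as stdin/stdout so it believes it is talking to a terminal.
master_fd, slave_fd = pty.openpty()
proc = subprocess.Popen(['oupload', '--some-option'],
                        stdin=slave_fd, stdout=slave_fd, stderr=slave_fd,
                        close_fds=True)
os.close(slave_fd)                      # the child keeps its own copy of the slave end

os.write(master_fd, b'myuser\n')        # answer the Username: prompt
os.write(master_fd, b'mypassword\n')    # answer the Password: prompt

output = b''
while True:
    try:
        chunk = os.read(master_fd, 1024)
    except OSError:                     # raised when the child closes its end of the pty
        break
    if not chunk:
        break
    output += chunk
proc.wait()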
