I am working on converting an older system of batch files to Python. I have encountered a batch file that sets the environment variables for the batch file that called it. Those values are then used again in future calls to batch files. My environment variables seem to go down the flow of calls, but I am unable to bring them back up. I have read two different threads on here that talk about something similar, but it's not exactly the same. Is it possible to retrieve these environment variables from the subprocess?
Python subprocess/Popen with a modified environment
Set Environmental Variables in Python with Popen
What I am doing now:
p = Popen(process_, cwd=wd, stdout=PIPE, shell=True,
          universal_newlines=True, stderr=STDOUT, env=env)
The old flow of bat files:
foo.bat calls foo-setparam.bat
foo-setparam.bat sets some variables
foo.bat uses these variables
foo.bat calls bar.bat
bar.bat uses variables set by both foo.bat and foo-setparam.bat
Update:
I have tried adding "call " in front of my Popen parameter. I got the same behavior.
It's generally not possible for a child process to set parent environment variables, unless you want to go down a mildly evil path of creating a remote thread in the parent process and using that thread to set variables on your behalf.
It's much easier to have the python script write a temp file that contains a sequence of set commands like:
set VAR1=newvalue1
set VAR2=newvalue2
Then modify the calling batch file to call your python script like:
python MyScript.py
call MyScript_Env.cmd
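A minimal sketch of what MyScript.py would do to produce that file (the file name MyScript_Env.cmd comes from the answer above; the variable names and values here are just examples):

```python
# The Python script computes the new values and writes them out as 'set'
# commands for the calling batch file to execute with 'call'.
new_vars = {"VAR1": "newvalue1", "VAR2": "newvalue2"}

with open("MyScript_Env.cmd", "w") as f:
    for name, value in new_vars.items():
        f.write("set %s=%s\n" % (name, value))
```

Because the batch file runs the generated .cmd in its own shell via call, the variables land in the parent's environment without any cross-process trickery.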
Related
The lldb extension script's entry point is:
def __lldb_init_module(debugger, internal_dict):
However, when I access debugger.target, it is "No Value".
Lots of tutorials show how to use SBDebugger.CreateTarget to create a new target and process, but here I'm debugging from the IDE's debug workflow, so I don't think I should create the target myself. I just want to get (or wait for) the current debugging process (launched by the IDE) and its target, then invoke some commands such as pro hand -p true -s false SIGPWR.
So the problem is: how can I access the current target in this non-standalone-launch mode?
In command-line lldb there are two stages of reading in init files.
The ~/.lldbinit is read in the first phase, before any targets get made. That way commands in it can condition the construction of the targets. So modules sourced there won't see any targets.
Then if there is a .lldbinit in the CWD, that will get sourced after any targets specified on the command line. So that will see the target.
The equivalent in Xcode (if you are using a fairly recent Xcode) is the lldbinit you can specify in the run scheme's options tab.
The other way to do this is to set an auto-continue breakpoint on main, and source your .lldbinit at that point. Note, if you want to use process handle there has to be a running process, so you need to do that in a breakpoint somewhere...
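As an untested sketch, that breakpoint approach could look like this in an lldbinit-style file (the path of the sourced file is an example; the options are standard lldb breakpoint options):

```
breakpoint set --name main --auto-continue true
breakpoint command add -o "command source ~/extra_setup.lldbinit"
```

By the time the breakpoint fires, the target and process both exist, so commands like process handle work in the sourced file.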
I have a script that adds variables to the environment. That script is called through
subprocess.call('. myscript.sh', shell=True)
Is there a way I can get the modified environment and use it on my next subprocess call? This question shows you can get the output of one call and chain it to another call: Python subprocess: chaining commands with subprocess.run. Is there something similar for passing the environment?
You'll have to output the variables' content somehow. You're spawning a new process which will not propagate the environment variables back, so your python app will not see those values.
You could either make the script echo those to some file, or to the standard output if possible.
(Technically, it would be possible to stop the process and extract the values if you really wanted to hack that, but it's a bad idea.)
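In the spirit of echoing to standard output, here is a sketch that sources the script in a throwaway shell, dumps the resulting environment, and parses it back into a dict usable as the env argument of the next call (assumes /bin/bash and GNU env for the -0 flag; the function name is mine):

```python
import subprocess

def source_env(script_path):
    """Source a shell script and return the resulting environment as a dict."""
    # 'env -0' separates entries with NUL bytes, so values containing
    # newlines are parsed correctly.
    out = subprocess.check_output(
        ". %s && env -0" % script_path,
        shell=True,
        executable="/bin/bash",
    )
    return dict(
        entry.split("=", 1)
        for entry in out.decode().split("\0")
        if "=" in entry
    )

# new_env = source_env("./myscript.sh")
# subprocess.call("next-command", shell=True, env=new_env)
```

Plain env with newline splitting also works for simple cases, but breaks on multi-line values.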
In Python I can access an environment variable as:
os.environ['FOO']
I would like to know if the variable was set previously via export or if it was set only for the current python script like so:
FOO=BAR python some-script.py
Basically I want to use FOO only if it was set like in the line above and not permanently defined via export.
Arguments to the python script itself unfortunately are no option here. This is a plugin and the parent application does not allow passing custom arguments it does not understand itself.
I was hoping I could somehow access the exact and full command (FOO=BAR python some-script.py) that started Python, but it appears there is nothing like that. I guess if there were such a feature it would be somewhere in the os or sys packages.
The environment is simply an array of C strings; there is no metainformation there which helps you find out whether or not the invoking shell had the variable marked for export.
On Linux, you could examine /proc/(pid)/environ of the parent PID (if you have suitable permissions) to see what's in the parent's permanent environment, but this is decidedly nonportable and brittle.
Spending time on this seems misdirected anyway; let the user pass the environment variable in any way they see fit.
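For completeness, a Linux-only sketch of the /proc approach mentioned above (the function name is mine; it needs permission to read the parent's /proc entry and will raise PermissionError otherwise):

```python
import os

def parent_environ():
    """Return the parent process's environment as a dict (Linux only)."""
    with open("/proc/%d/environ" % os.getppid(), "rb") as f:
        raw = f.read()
    # Entries are NUL-separated NAME=VALUE byte strings.
    return dict(
        entry.split(b"=", 1)
        for entry in raw.split(b"\0")
        if b"=" in entry
    )

# A variable present in os.environ but absent from parent_environ() was
# most likely set only for this invocation (e.g. FOO=BAR python script.py),
# though this remains nonportable and brittle, as the answer warns.
```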
I have a Python application which I want to repurpose as a multi-terminal handler. I want each object to have its own terminal, separated from the rest, each running its own instance, exactly like when I run two or more separate terminals in Linux (/bin/sh or /bin/bash).
sample: (just logic not code)
first_terminal = terminalInstance()
second_terminal = terminalInstance()
first_result = first_terminal.doSomething("command")
second_result = second_terminal.doSomething("command")
I actually need each terminal to grab its own stdin and stdout in a virtual environment and control them; this is why they must be separate. Is this possible in Python? I've seen a lot of code handling a single terminal, but how do you do it with multiple terminals?
PS: I don't want to include while loops (if possible), since I want to scale from dealing with 2 terminals to as many as my system can handle. Is it possible to control them by reference, giving each terminal a reference and then calling on that object and issuing a command?
The pexpect module (https://pypi.python.org/pypi/pexpect/), among others, allows you to launch programs via a pseudo-tty, which "allows your script to spawn a child application and control it as if a human were typing commands."
You can easily spawn multiple commands, each running in a separate pseudo-tty and represented by a separate object, and you can interact with each object separately. There is a lot of flexibility as to when/how you interact. You can send input to them, and read their output, either blocking or non-blocking, and incorporating timeouts and alternative outputs.
Here's a trivial session example (run bash, have it execute an "ls" command, gather the first line of output).
import pexpect
x = pexpect.spawn("/bin/bash")
x.sendline("ls")
x.expect("\n")  # End of echoed command
x.expect("\n")  # End of first line of output
print(x.before.decode())  # Print first line of output
Note that you'll receive all the output from the terminal, typically including an echoed copy of every character you send to it. If running something like a shell, you might also need to set the shell prompt (or determine the shell prompt in use) and use that in parsing the output (i.e. in finding the end of each command's output).
I have a series of scripts I am automating the calling of in python with subprocess.Popen. Basically, I call script A, then script B, then script C and so forth.
Script A sets a bunch of local shell variables, with commands such as set SOME_VARIABLE=SCRIPT_A, set PATH=%SCRIPT_A:/=\;%PATH%.
Script B and C then need to have the effects of this. In unix, you would call script A with "source script_a.sh". The effect lasts in the current command window. However, subprocess.Popen effectively launches a new window (kind of).
Apparently subprocess.Popen is not the right tool for this. How would I do it?
edit: I have tried parsing the file (which is all 'set' statements) and passing the result as a dictionary to 'env' in subprocess.Popen, but it doesn't seem to have worked.
You can use the env argument to Popen with a Python dictionary containing the variables; then you don't need to run the command that just sets variables.
If 'script A' is generated by another process, you will either need to change that other process so the output file is in a format you can source (import) into your Python script, or write a parser in Python that can dig the vars out of 'script A' and set them within your Python script.
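Combining the two suggestions, here is a sketch that parses the 'set' lines of script A into a dictionary and hands it to Popen via env (the file names are examples; the %VAR% expansion is deliberately naive and does not handle cmd's substring syntax like %SCRIPT_A:/=\%):

```python
import os
import re
import subprocess

def parse_set_script(path):
    """Build an environment dict from a batch file of 'set NAME=VALUE' lines."""
    env = os.environ.copy()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.lower().startswith("set ") and "=" in line:
                name, _, value = line[4:].partition("=")
                # Expand simple %VAR% references against what we have so far.
                value = re.sub(
                    r"%(\w+)%",
                    lambda m: env.get(m.group(1), ""),
                    value,
                )
                env[name.strip()] = value
    return env

# env = parse_set_script("script_a.bat")
# subprocess.Popen("script_b.bat", shell=True, env=env)
```

Starting from os.environ.copy() matters: env replaces the whole environment, so an incomplete dictionary (missing PATH, SYSTEMROOT, and so on) is a common reason the "passed a dictionary to env" attempt in the question appears not to work.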
If all you want is to call a series of batch files using Python, you could create a helper batch file which would call all these batch files like this:
call scriptA
call scriptB
call scriptC
and run this helper batch file using subprocess.Popen. Because all three run in the helper's single cmd session, variables set by scriptA are visible to scriptB and scriptC.