Getting a value from a Python script into a PowerShell script

I have a Python script which creates a ticket.
I need to invoke the Python script from within a PowerShell script and get the ticket number (12 digits long).
Approach #1:
I tried to use exit(ticket_number) to get this done. It worked well as long as the number was not very large.
Ex.
exit(12345) from Python translates to $LASTEXITCODE=12345 # good
exit(123456789123) from Python translates to $LASTEXITCODE=-1 # not sure what is going wrong here
dummy.py
--------
print("hello")
exit(123456789123)
sample.ps1
----------
python dummy.py
Write-Host($LASTEXITCODE)
Approach #2:
Use of an environment variable
dummy.py
--------
import os
os.environ["TICKETNUMBER"] = "123456789123"
exit(0)
sample.ps1
----------
Get-ChildItem -Path Env:TEMP         # good - able to get the value
Get-ChildItem -Path Env:TICKETNUMBER # error - ItemNotFoundException
So, I would like to know what is going wrong in each of the approaches.
Are there any better approaches to get this done? Please suggest.

You should not use exit codes to output a value; that simply isn't what they're meant for. You can read more about exit codes here: https://shapeshed.com/unix-exit-codes/#what-is-an-exit-code-in-the-unix-or-linux-shell
Exit codes are small integers: POSIX systems keep only the low 8 bits (0-255), and Windows treats them as 32-bit signed integers. 123456789123 fits in neither, and when the value passed to exit() is out of range CPython falls back to an error status, which is most likely why $LASTEXITCODE shows -1 instead of your ticket number.
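The size limit itself is easy to demonstrate. A quick python sketch (assuming a POSIX system, where only the low 8 bits of an exit status survive):
import subprocess

# 300 does not fit in 8 bits; POSIX truncates it to 300 % 256 = 44
r = subprocess.run(["sh", "-c", "exit 300"])
print(r.returncode)  # prints 44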
Environment variables only work for passing values along when you're passing them to children. If you spawn a new process, that process inherits the environment variables in scope of your current session. However, you can't change the environment variables of the parent (your session) from the child (the python runtime). Thus, in PowerShell, your "TICKETNUMBER" environment variable is out of scope.
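A small sketch (assuming python is on PATH, as in your scripts) showing the direction of that inheritance from python itself: values set by a parent are visible to its children, but a child's changes never propagate back up.
import os
import subprocess

os.environ["TICKETNUMBER"] = "123456789123"  # set in the parent...
child = subprocess.run(
    ["python", "-c", "import os; print(os.environ['TICKETNUMBER'])"],
    capture_output=True, text=True,
)
print(child.stdout.strip())  # ...the child sees it: 123456789123
# The reverse never works: a child assigning os.environ["TICKETNUMBER"]
# only changes its own copy, which vanishes when the child exits.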
First of all, let me say that there are many different ways to solve this. The solution that requires the least work on your part is to write to stdout, which lets you output values for consumption by other processes. You can do this with print in python. You already did this, but likely ran into issues due to your use of exit codes.
In PowerShell, you can accept this input via the pipeline. There are a lot of ways to go about this, but in your example the $input variable will work.
dummy.py
--------
print("123123123")
sample.ps1
----------
Get-ChildItem -Path $input
You can then run py dummy.py | ./sample.ps1, which will return the directory listing of "./123123123".
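If all you need is the ticket number in a variable, note that PowerShell also captures a program's standard output with a plain assignment, e.g. $ticket = python dummy.py (if the script prints several lines, $ticket becomes an array of lines).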

Related

Output of interactive python to shell variable

There is an interactive python script, something like:
def myfunc():
    print("enter value between 1 to 10")
    i = int(input())
    if i < 1 or i > 10:
        print("again")
        myfunc()
    else:
        print(i)
I want to store the final output, which is print(i), in a shell variable. Something like
python myFile.py | read a
The above command gets stuck every time I run it. Is it possible to do that?
Even though ( read b | python myFile.py ) | read a defeats the purpose of an interactive python function, that doesn't work either. It works if myfunc() is non-interactive (not expecting user input). The function in reality takes some input, manipulates it, and then outputs the result in the required format. I know it would be much easier to use either python or shell alone, but since I already wrote the python function, I was wondering if it is possible to link the two. If yes, is it also possible to capture only the final value in the shell variable rather than all the print() output?
The same issue happens (the terminal gets stuck) when I do
python myFile.py > someFilename
However, the file someFilename was created even though the terminal was unresponsive. It seems the shell starts both processes at the same time, which makes sense. I am guessing that if python myFile.py somehow executed independently before the pipe was opened it could work, but I may be wrong.
The script appears stuck because the prompt from print() goes to stdout, which is redirected into the pipe (or file), so you never see it while input() silently waits for you. Writing the prompts to the terminal device directly avoids this. If you are working on Linux or another Unix variant, please try:
import os

def myfunc():
    tty = os.open("/dev/tty", os.O_WRONLY)           # the controlling terminal
    os.write(tty, b"enter value between 1 to 10\n")  # prompt bypasses stdout
    i = int(input())
    if i < 1 or i > 10:
        os.write(tty, b"again\n")
        myfunc()
    else:
        print(i)                                     # only the result goes to stdout

myfunc()
BTW if your shell is bash, it is better to say:
read a < <(python myFile.py)
Otherwise read a is invoked in a subshell and the variable a cannot be referred to in the following code.
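As for capturing only the final value: with the /dev/tty version above, stdout already contains nothing but the final print(i). If you keep your original script instead, you can filter on the shell side, e.g. a=$(python myFile.py | tail -n 1), though the prompts will then be swallowed by the pipe and you will be typing blind.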

Executing a profile load shell script from a python program [duplicate]

This question already has answers here:
how to "source" file into python script
(8 answers)
Closed 3 years ago.
I am struggling to execute a shell script from a Python program. The actual issue is that the script is a load-profile script and is run manually as:
. /path/to/file
The program can't be run as an sh script because the calling programs load some configuration, so it must be run as . /path/to/file
Please guide me on how I can integrate the same into my Python script. I am using the subprocess.Popen command to run the script, and as said, the only way it works is to run it as . /path/to/file, so I am not getting the right result.
Without knowledge of the precise reason the script needs to be sourced, this is slightly speculative.
The fundamental problem is this: How do I get a source command to take effect outside the shell script?
Let's say your sourced file does something like
export fnord="value"
This cannot (usefully) be run in a subshell (as a normally executed script would) because the environment variable and its value will be lost when the script terminates. The solution is to source (aka .) this snippet from an already running shell; then the value stays in that shell's environment until that shell terminates.
But Python is not a shell, and there is no general way for Python to execute arbitrary shell script code, short of reimplementing the shell in Python. You can reimplement a small subset of the shell's functionality with something like
import os

with open('/path/to/file') as shell_source:
    lines = shell_source.readlines()

for line in lines:
    stripped = line.strip()
    if stripped.startswith('export '):
        var, value = stripped[7:].split('=', 1)
        # peel off one level of simple quoting
        if value.startswith('"'):
            value = value.strip('"')
        elif value.startswith("'"):
            value = value.strip("'")
        os.environ[var] = value
with some very strict restrictions (let's not say naïve assumptions) on the allowable shell script syntax in the file. But what if the file contained something else than a series of variable assignments, or the assignment used something other than trivial quoted strings in the values? (Even the export might or might not be there. Its significance is to make the variable visible to subprocesses of the current shell; maybe that is not wanted or required? Also export variable=value is not portable; proper Bourne shell script syntax would use variable=value; export variable or one of the many variations.)
If you know what exactly your Python script needs from the shell script, maybe do something like
import os
import subprocess

r = subprocess.run('. /path/to/file; printf "%s\n" "$somevariable"',
                   shell=True, capture_output=True, text=True)
# stdout ends with a newline, so the value is the last non-empty field
os.environ['somevariable'] = r.stdout.split('\n')[-2]
to source the entire script in a subshell, then print to standard output the part you actually need, and capture that from your Python script (and assign it to an environment variable if that's what you eventually need to accomplish).
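If you need everything the script exports rather than one variable, the same trick generalizes: source the file in a child shell, dump the resulting environment, and absorb it into os.environ. A sketch (it assumes bash plus GNU env with the -0 option, and the same hypothetical /path/to/file):
import os
import subprocess

# Source the file, then print the child shell's environment as
# NUL-separated KEY=VALUE pairs (NUL is safe against values with newlines).
r = subprocess.run(
    ['bash', '-c', '. /path/to/file && env -0'],
    capture_output=True, text=True, check=True,
)
for pair in r.stdout.split('\0'):
    if '=' in pair:
        key, _, value = pair.partition('=')
        os.environ[key] = value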

Pass variable from Python to Bash

I am writing a bash script with a small python script embedded in it. I want to pass a variable from python to bash. After some searching I only found methods based on os.environ.
I just cannot make it work. Here is my simple test.
#!/bin/bash
export myvar='first'
python - <<EOF
import os
os.environ["myvar"] = "second"
EOF
echo $myvar
I expected it to output second; however, it still outputs first. What is wrong with my script? Also, is there any way to pass a variable without export?
Summary
Thanks for all the answers. Here is my summary.
A python script embedded inside bash runs as a child process, which by definition cannot affect the parent bash environment.
The solution is to have python print assignment strings and then eval them in bash.
An example is
#!/bin/bash
a=0
b=0
assignment_string=$(python -<<EOF
var1=1
var2=2
print('a={};b={}'.format(var1,var2))
EOF
)
eval $assignment_string
echo $a
echo $b
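One caution to add to the summary: eval executes whatever the python side prints, so only use it with output you control. If the values may contain spaces or shell metacharacters, quoting them on the python side keeps the eval safe; a sketch using the standard shlex module:
import shlex

var1 = "1"
var2 = "two words; $HOME"  # would break an unquoted assignment
# Emit shell-safe assignments for bash to eval
print('a={};b={}'.format(shlex.quote(str(var1)), shlex.quote(str(var2))))
(Quoting the expansion on the bash side, eval "$assignment_string", is also a good habit.)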
Unless Python is used to do some kind of operation on the original data, there's no need to import anything. The answer could be as lame as:
myvar=$(python - <<< "print('second')") ; echo "$myvar"
Suppose for some reason Python is needed to spit out a bunch of bash variables and assignments, or (cautiously) compose code on-the-fly. An eval method:
myvar=first
eval "$(python - <<< "print('myvar=second')" )"
echo "$myvar"
Complementing Cyrus's useful comment on the question: you just can't do it. Here is why.
Setting an environment variable sets it only for the current process and any child processes it launches. os.environ sets it only for the python process that runs your embedded script. When that process finishes, it goes away, and so does the environment variable.
You can achieve something like that with a shell script instead, and source it so it takes effect in the current shell.
There are a few "dirty" ways of getting something like this done. Here is an example:
#!/bin/bash
myvar=$(python - <<EOF
print "second"
EOF
)
echo "$myvar"
The output of the python process is stored in a bash variable. It gets a bit messy if you want to return more complex stuff, though.
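If you do need more complex values, one common pattern (an illustration, assuming the jq tool is available on the bash side and a hypothetical myscript.py) is to have python emit JSON:
import json

# One JSON object on stdout; the caller parses out the fields it wants
print(json.dumps({"ticket": "123456789123", "status": "open"}))
which bash can then pick apart with something like ticket=$(python myscript.py | jq -r .ticket).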
You can make python print a value and capture it in bash:
pfile.py
print(100)
bfile.sh
var=$(python pfile.py)
echo "$var"
output: 100
Well, this may not be what you want, but one option could be to run the other bash commands from python using subprocess:
import subprocess
x =400
subprocess.call(["echo", str(x)])
But this is more of a temporary workaround. The other solutions are more along the lines of what you are looking for.
Hope I was able to help!

robot framework with pabot : is it possible to pass two different values to a variable in two tests

Example: I have file1.robot and file2.robot, and each has ${var} as a variable. Can I pass 2 different values to this same ${var} on the command line? Something like pabot -v var:one:two file1.robot file2.robot, where -v var:one:two would follow the order of the robot files; not by name, but by how they were introduced on the command line?
This solution is not 100% what you've asked for, but maybe you can make it work.
Pabot's readme mentions a shared set of variables, with each running process acquiring its own set. The documentation was a bit unclear to me, but if you try the following example, you'll see for yourself. It's basically a pool of variable sets: each process can take a set from the pool and, when it's done with it, return the set to the pool.
Create your value set valueset.dat
[Set1]
USERNAME=user1
PASSWORD=password1
[Set2]
USERNAME=user2
PASSWORD=password2
Create suite1.robot and suite2.robot. I've created 2 suites that are exactly the same; I just wanted to try running 2 suites in parallel.
*** Settings ***
Library    pabot.PabotLib

*** Test Cases ***
Foobar
    ${valuesetname}=    Acquire Value Set
    Log    ${valuesetname}
    ${username}=    Get Value From Set    username
    Log    ${username}
    # Release Value Set
Then run the command pabot --pabotlib --resourcefile valueset.dat tests. If you check the html report, you'll see that one suite used Set1 and the other used Set2.
Hope this helps.
Cheers!
Another way is to use multiple argument files: one containing the first value for ${var}, the other containing the second.
This will execute the same test suite for both argument files.
pabot --argumentfile1 varone.args --argumentfile2 vartwo.args file.robot
=>
file.robot executed with varone.args
file.robot executed with vartwo.args
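For reference, each argument file just pins the variable with a normal robot option, one option per line. An illustrative pair of files (contents assumed, not from the original answer):
varone.args
-----------
--variable var:one
vartwo.args
-----------
--variable var:two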

Broken-pipe Error Python subprocess [duplicate]

This question already has answers here:
'yes' reporting error with subprocess communicate()
(3 answers)
Closed 6 years ago.
I'm trying to launch several bash routines from GUI-based software. The problem I'm facing is a piping issue.
Here the test bash-script (bashScriptTest.sh):
#!/bin/bash
#---------- Working
ls | sort | grep d > testFile.txt
cat testFile.txt
#---------- NOT working
echo $RANDOM > testFile2.txt
for i in `seq 1 15000`; do
echo $RANDOM >> testFile2.txt
done
awk '{print $1}' testFile2.txt | sort -g | head -1
And here is the python script that triggers the error:
import subprocess
#
with open('log.txt','w') as outfile:
CLEAN=subprocess.Popen("./bashScriptTest.sh", stdout=outfile, stderr=outfile)
print(CLEAN.pid)
OUTSEE=subprocess.Popen(['x-terminal-emulator', '-e','tail -f '+outfile.name])
As you can see from running the python script, the broken-pipe error is encountered not in the first three pipes (first line) but after the heavy work done by awk.
I need to manage a huge quantity of routines and subroutines in bash, and using the shell=True flag doesn't change a thing.
I tried to write everything in the most pythonic way, but unfortunately there is no chance I can rewrite all the piping steps inside python.
Another thing to mention: if you run the bash script inside a terminal, everything works fine.
Any help would be really appreciated. Thanks in advance!
EDIT 1:
The log file containing the error says:
bashScriptTest.sh
log.txt
stack.txt
testFile2.txt
test.py
3
sort: write failed: standard output: Broken pipe
sort: write error
Okay, so this is a little bit obscure, but it just so happens that I ran across a similar issue while researching a question on the python-tutor mailing list some time ago.
The reason you're seeing different behavior when running your script via the subprocess module (in python) vs. bash directly is that python overrides the disposition of SIGPIPE to SIG_IGN (ignore), and all child processes inherit that setting.
When the following pipeline is executed ...
awk '{print $1}' testFile2.txt | sort -g | head -1
... head will exit after it prints the first line of stdout from the sort command, due to the -1 flag. When the sort command attempts to write more lines to its stdout, a SIGPIPE is raised.
The default action of SIGPIPE, when the pipeline is executed in a shell like bash, is to terminate the sort command silently.
As stated earlier, python overrides the default action with SIG_IGN (ignore). With the signal ignored, sort's writes fail with EPIPE instead, and it prints the errors you see in the log; hence this bizarre and somewhat inexplicable behavior.
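You can confirm the overridden disposition from inside python itself; on a Unix build this prints True:
import signal

# CPython installs SIG_IGN for SIGPIPE at interpreter startup,
# and children inherit the ignored disposition across fork/exec
# unless subprocess restores it.
print(signal.getsignal(signal.SIGPIPE) == signal.SIG_IGN)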
That's all well and good, but you might be wondering what to do now. It depends on the version of python you're using.
For Python 3.2 and greater, you're already set. subprocess.Popen in 3.2 added the restore_signals parameter, which defaults to True and effectively solves the issue without further action.
For earlier versions, you can supply a callable via the preexec_fn argument of subprocess.Popen, as in:
import signal
import subprocess

def default_sigpipe():
    # restore the default SIGPIPE disposition in the child process
    signal.signal(signal.SIGPIPE, signal.SIG_DFL)

# ...

with open('log.txt', 'w') as outfile:
    CLEAN = subprocess.Popen("./bashScriptTest.sh",
                             stdout=outfile, stderr=outfile,
                             preexec_fn=default_sigpipe)
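(One caveat from the subprocess documentation: preexec_fn is called in the child between fork() and exec(), so keep the callable minimal; it is not safe in the presence of threads.)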
I hope that helps!
EDIT: It should probably be noted that your program is actually functioning properly as-is, AFAICT. You're just seeing additional error messages that you wouldn't normally see when executing the script in a shell directly (for the reasons stated above).
See Also:
https://mail.python.org/pipermail/python-dev/2007-July/073831.html
https://bugs.python.org/issue1652
