I have a shell script that runs a Python script. I want four variables (e.g. var1, var2, var3, var4) from the shell script to be used in the Python script. Any suggestions how to do this?
For example, I want to replace "lastTest44", "firstTest44" and "A-S00000582" with variables from the shell script.
driver.find_element_by_id("findKey_input").clear()
driver.find_element_by_id("findKey_input").send_keys("lastTest44")
driver.find_element_by_id("ST_View_lastTest44, firstTest44").click()
driver.find_element_by_link_text("A-S00000582").click()
Just use command line arguments:
Shell Script
a=1
b=2
python test1.py "$a" "$b"
Python Script
import sys
var1 = sys.argv[1]
var2 = sys.argv[2]
print(var1, var2)
What you're looking to use are called command line arguments. These are parameters that are specified at the time of calling the particular piece of code you're looking to run.
In Python, these are accessible through the sys module under a variable called argv. This is an array of all the arguments passed in from the caller, where each value within the array is a string.
For example, say the code I'm writing takes in parameters to draw a square. This could require four parameters: an x coordinate, a y coordinate, a width, and a height. The Python code for this might look like this:
import sys
x = sys.argv[1]
y = sys.argv[2]
width = sys.argv[3]
height = sys.argv[4]
# Some more code follows.
A few things to note:
Each argument is a string. This means that, in this case, I could not perform any sort of arithmetic until converting the values into the types I want.
The first element of sys.argv is the name of the script being run. You'll want to make sure that you start reading from the second position, sys.argv[1], instead of the typical zero-th index like you normally would.
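Building on the note about types, a sketch of the square example with the conversions added (sys.argv is simulated inline so the snippet is self-contained):

```python
import sys

# simulate: python square.py 1 2 30 40
sys.argv = ["square.py", "1", "2", "30", "40"]

# sys.argv[0] is the script name, so the real arguments start at index 1
x = int(sys.argv[1])
y = int(sys.argv[2])
width = int(sys.argv[3])
height = int(sys.argv[4])

# arithmetic works once the strings are converted
area = width * height
print(area)  # 1200
```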
There is some more detailed information here, which could lead you to better ways of handling command line arguments. To get started though, this would work well enough.
I think this will do what you want:
2014-06-05 09:37:57 [tmp]$ export VAR1="a"
2014-06-05 09:38:01 [tmp]$ export VAR2="b"
2014-06-05 09:38:05 [tmp]$ export VAR3="c"
2014-06-05 09:38:08 [tmp]$ export VAR4="d"
2014-06-05 09:38:12 [tmp]$ python
Python 2.7.3 (default, Feb 27 2014, 19:58:35)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from os import environ
>>> environ['VAR1']
'a'
>>> environ['VAR2']
'b'
>>> environ['VAR3']
'c'
>>> environ['VAR4']
'd'
>>> environ['VAR5']
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/UserDict.py", line 23, in __getitem__
raise KeyError(key)
KeyError: 'VAR5'
Remember to catch KeyError and respond accordingly or use the get method (from the dict class) and specify a default to be used when the key is not present:
>>> environ.get('VAR5', 'not present')
'not present'
more:
https://docs.python.org/2/library/os.html
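The same lookups work outside the interactive session; a minimal sketch of a script using the get method with defaults (VAR1/VAR5 as above):

```python
import os

# read variables exported by the shell; fall back to a default when one is unset
var1 = os.environ.get("VAR1", "default")
var5 = os.environ.get("VAR5", "not present")
print(var5)
```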
I want to add what worked in my case:
I have my variables in a file which is sourced by the shell script, and I need to pass those variables to Python from that same file.
I also have pandas and Spark in my case.
My expected result is to concatenate the path passed to to_csv, which is achieved.
**Shell**:
. path/to/variable/source file # sourcing the variables
python path/to/file/extract.py "$OUTBOUND_PATH"
**Python**:
import sys
import pandas as pd
# If a Spark session is involved, create/import the SparkSession as well
outbound_path = sys.argv[1]  # argument passed in from the shell script
file_name = "/report.csv"
# geo is the Spark DataFrame built earlier in the pipeline
geo.toPandas().to_csv(outbound_path + file_name, mode="w+")
Related
I have seen plenty of examples of running a python script from inside a bash script, either passing in variables as arguments or using export to give the child shell access. I am trying to do the opposite here, though.
I am running a python script and have a separate file, let's call it myGlobalVariables.bash
myGlobalVariables.bash:
foo_1="var1"
foo_2="var2"
foo_3="var3"
My python script needs to use these variables.
For a very simple example:
myPythonScript.py:
print "foo_1: {}".format(foo_1)
Is there a way I can import them directly? Also, I do not want to alter the bash script if possible since it is a common file referenced many times elsewhere.
If your .bash file is formatted as you indicated, you might be able to just import it directly as a Python module via the imp module.
import imp
bash_module = imp.load_source("bash_module", "/path/to/myGlobalVariables.bash")
print bash_module.foo_1
You can also use os.environ:
Bash:
#!/bin/bash
# export is needed so that child processes (like the Python script) can see it
export testtest=one
Python:
#!/usr/bin/python
import os
os.environ['testtest'] # 'one'
I am very new to python, so I would welcome suggestions for more idiomatic ways to do this, but the following code uses bash itself to tell us which values get set. It first calls bash with an empty environment (env -i bash) to establish a baseline of which variables are set, then calls it again, this time telling bash to source your "variables" file before reporting what is set.
After removing some false positives and an apparently blank line, I loop through the "additional" output, looking for variables that were not in the baseline. Newly seen variables get split (carefully, on the first "=") and put into the bash dict.
I've left here (but commented out) my previous idea of using exec to set the variables natively in Python, but I ran into quoting/escaping issues, so I switched gears to using a dict.
If the exact call (path, etc) to your "variables" file is different than mine, then you'll need to change all of the instances of that value -- in the subprocess.check_output() call, in the list.remove() calls.
Here's the sample variable file I was using, just to demonstrate some of the things that could happen:
foo_1="var1"
foo_2="var2"
foo_3="var3"
if [[ -z $foo_3 ]]; then
foo_4="test"
else
foo_4="testing"
fi
foo_5="O'Neil"
foo_6='I love" quotes'
foo_7="embedded
newline"
... and here's the python script:
#!/usr/bin/env python
import subprocess
output = subprocess.check_output(['env', '-i', 'bash', '-c', 'set'])
baseline = output.split("\n")
output = subprocess.check_output(['env', '-i', 'bash', '-c', '. myGlobalVariables.bash; set'])
additional = output.split("\n")
# these get set when ". myGlobal..." runs and so are false positives
additional.remove("BASH_EXECUTION_STRING='. myGlobalVariables.bash; set'")
additional.remove('PIPESTATUS=([0]="0")')
additional.remove('_=myGlobalVariables.bash')
# I get an empty item at the end (blank line from subprocess?)
additional.remove('')
bash = {}
for assign in additional:
if not assign in baseline:
name, value = assign.split("=", 1)
bash[name]=value
#exec(name + '="' + value + '"')
print "New values:"
for key in bash:
print "Key: ", key, " = ", bash[key]
Another way to do it:
Inspired by Marat's answer, I came up with this two-stage hack. Start with a python program, let's call it "stage 1", which uses subprocess to call bash to source the variable file, as my above answer does, but it then tells bash to export all of the variables, and then exec the rest of your python program, which is in "stage 2".
Stage 1 python program:
#!/usr/bin/env python
import subprocess
status = subprocess.call(
['bash', '-c',
'. myGlobalVariables.bash; export $(compgen -v); exec ./stage2.py'
]);
Stage 2 python program:
#!/usr/bin/env python
# anything you want! for example,
import os
for key in os.environ:
print key, " = ", os.environ[key]
As stated in @theorifice's answer, the trick here may be that such a formatted file can be interpreted both as bash and as Python code. But that answer is outdated: the imp module is deprecated in favour of importlib.
As your file has extension other than ".py", you can use the following approach:
from importlib.util import spec_from_loader, module_from_spec
from importlib.machinery import SourceFileLoader
spec = spec_from_loader("foobar", SourceFileLoader("foobar", "myGlobalVariables.bash"))
foobar = module_from_spec(spec)
spec.loader.exec_module(foobar)
I do not completely understand how this code works (the name "foobar" is passed both to the spec and to the loader as the module name), however, it worked for me. Found it here.
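A self-contained sketch of the same approach (a temporary file stands in for myGlobalVariables.bash; the module name "bash_vars" is arbitrary):

```python
import os
import tempfile
from importlib.machinery import SourceFileLoader
from importlib.util import module_from_spec, spec_from_loader

# write bash-style assignments that also happen to be valid Python
with tempfile.NamedTemporaryFile("w", suffix=".bash", delete=False) as f:
    f.write('foo_1="var1"\nfoo_2="var2"\nfoo_3="var3"\n')
    path = f.name

# load the .bash file as if it were a Python source module
spec = spec_from_loader("bash_vars", SourceFileLoader("bash_vars", path))
bash_vars = module_from_spec(spec)
spec.loader.exec_module(bash_vars)

print(bash_vars.foo_1)  # var1
os.unlink(path)
```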
Python novice here.
I am trying to interact with the variables/objects in a python file that requires arguments for its data. Let's say I can only get this data from arguments, rather than make a script that includes arguments that would be passed (which means I must use execfile or subprocess.call).
Let's say this was my python file, foo.py:
bar = 123
print("ran foo.py, bar = %d" % bar)
Of course, there would be more to the file to parse arguments, but that is irrelevant.
Now, in a python shell (either python or ipython in a terminal):
>>> import subprocess
>>> subprocess.call("python foo.py 'args'", shell=True)
ran foo.py, bar = 123
0
>>> bar
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'bar' is not defined
As the example output above shows, bar is not defined after stdout from foo.py.
Is there any way to interact with variables or data objects in a python file called by execfile or subprocess.call? Normally I would eval in a REPL or run in an IDE so I can test out values and objects after execution, but I cannot figure out how to do this with a python file that builds data from parsed arguments.
Does anyone know how I can do this? Thanks for your consideration, at any rate.
Is foo.py under your control? If yes, simply change it so that it can be imported as a module and interacted with.
If no, you may need to capture the output as a string and rebuild your variable bar from it.
How you capture the output of a subprocess is answered here for example:
Running shell command from Python and capturing the output
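A sketch of that second option; the question's foo.py is inlined with -c so the example is self-contained:

```python
import subprocess
import sys

# stand-in for "python foo.py 'args'": the child prints its value of bar
out = subprocess.check_output(
    [sys.executable, "-c", 'bar = 123; print("ran foo.py, bar = %d" % bar)']
).decode()

# rebuild bar in the parent by parsing the captured stdout
bar = int(out.rsplit("=", 1)[1])
print(bar)  # 123
```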
I want to redirect a file as standard input to my Python script but I get some errors as soon as it tries to collect the input. A simple MWE would be:
A script like this:
T = int(input())
for i in range(T):
stack = input()
And a command like this in the Window's cmd:
script.py > someOut.out < someIn.in
And my input file is going to have contents like:
[Int]
[String]
[String]
...
It gets the number of tests right but as soon as it spots a string, it always throws some exception. For example, for a file like:
1
kdjddhs
I get NameError: name 'kdjddhs' is not defined. At the same time, file:
1
+-=
throws:
File "<string>", line 1
+-=
^
SyntaxError: unexpected EOF while parsing
Why is that so? When I start the script through the interpreter, everything works fine. How can I handle input in such a way so that I can redirect standard input through command line as opposed to handling the actual text file through the script itself?
You are using the wrong interpreter in the shebang. I have tested the following code with both Python 2 and Python 3 (note that the shebang specifies the version):
#!/usr/bin/env python3
import sys
print(sys.version)
T = int(input())
for i in range(T):
stack = input()
print(stack)
Now, on Python 2 nothing works. Both using the interpreter explicitly (python2 ./test.py < data.in) and invoking the file directly result in an error:
Data:
1
stack-123
Output:
2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2]
Traceback (most recent call last):
File "./test.py", line 9, in <module>
stack = input()
File "<string>", line 1, in <module>
NameError: name 'stack' is not defined
Using Python 3, both python3 ./test.py < data.in and ./test.py < ./data.in work as expected. Changing the shebang to #!/usr/bin/env python, which does not pin the version, means the system's default Python is used, which in your case is Python 2.x, and that results in the error.
The reason Python 2 fails is that its input() evaluates the line as a Python expression, so stack-123 is parsed as the expression stack - 123 and fails on the undefined name "stack". Using the correct shebang will solve your problem.
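For reference, Python 2's input() is equivalent to eval(raw_input()), and the failure can be reproduced in Python 3 by calling eval on the question's line directly:

```python
# Python 2's input() did eval(raw_input()); reproduce its effect on "stack-123"
try:
    eval("stack-123")  # parsed as the expression stack - 123
    msg = "no error"
except NameError as e:
    msg = str(e)
print(msg)  # name 'stack' is not defined
```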
Update: As @GurgenHovhannisyan says (+1 from me), and I completely forgot, in Python 2 you have to use raw_input() to get the same behavior. If you want this to work in both versions, define and use the following function:
def myInput():
    try:
        return raw_input()  # Python 2
    except NameError:
        # raw_input does not exist in Python 3; input() reads a raw string there
        return input()
Hope it helps
First of all, it doesn't matter whether you run it through the interpreter or not; the Python version is what matters here.
If you are on Python 3, the above-mentioned input() will work. If you are on Python 2, it will not, because the input() function behaves differently depending on the Python version.
Python version 2:
input() - reads input and evaluates it as a Python expression (equivalent to eval(raw_input()))
raw_input() - reads input as a raw string
Python version 3:
input() - works like raw_input() does in version 2
raw_input() - does not exist
So simply change your input to raw_input if you are on Python 2.
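A quick way to see that Python 3's input() returns the raw string without evaluating it (stdin is replaced with an in-memory buffer so the snippet runs non-interactively):

```python
import io
import sys

# simulate redirected input, as in: script.py < someIn.in
sys.stdin = io.StringIO("kdjddhs\n")
s = input()
print(repr(s))  # 'kdjddhs' - a plain string, not an evaluated expression
```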
This should work the same for Python 2.x and Python 3.x:
from __future__ import print_function
import sys
stdin_reader = None
# python version check determines our choice of input function
try:
if (sys.version_info > (3, 0)):
stdin_reader = input
else:
stdin_reader = raw_input
# Yes yes, I know - don't catch all exceptions
# but here we want to quit if anything fails regardless of the error.
except:
print("Failed to determine Python version")
def echo_stdin():
""" Reads stdin and prints it back in upper case """
r = stdin_reader()
print(r.upper())
echo_stdin()
Output:
$ python3 echo_stdin.py < data_stdin
SOME INPUT LALALA
$ python echo_stdin.py < data_stdin
SOME INPUT LALALA
We can define an alias in ipython with the %alias magic function, like this:
>>> d
NameError: name 'd' is not defined
>>> %alias d date
>>> d
Fri May 15 00:12:20 AEST 2015
This escapes to the shell command date when you type d into ipython.
But I want to define an alias to execute some python code, in the current interpreter scope, rather than a shell command. Is that possible? How can we make this kind of alias?
I work in the interactive interpreter a lot, and this could save me a lot of commands I find myself repeating often, and also prevent some common typos.
The normal way to do this would be to simply write a python function, with a def. But if you want to alias a statement, rather than a function call, then it's actually a bit tricky.
You can achieve this by writing a custom magic function. Here is an example, which effectively aliases the import statement to get, within the REPL.
from IPython.core.magic import register_line_magic

@register_line_magic
def get(line):
    code = f"import {line}"
    print("-->", code)
    exec(code, globals())

del get  # in interactive mode, remove from scope so the function doesn't shadow the magic
edit: below is the previous code, for older versions of IPython
from IPython.core.magic_arguments import argument, magic_arguments

@magic_arguments()
@argument('module')
def magic_import(self, arg):
    code = 'import {}'.format(arg)
    print('--> {}'.format(code))
    self.shell.run_code(code)

ip = get_ipython()
ip.define_magic('get', magic_import)
Now it is possible to execute get statements which are aliased to import statements.
Demo:
In [1]: get json
--> import json
In [2]: json.loads
Out[2]: <function json.loads>
In [3]: get potato
--> import potato
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<string> in <module>()
ImportError: No module named potato
In [4]:
Of course, this is extensible to arbitrary Python code, and optional arguments are supported as well.
I don't know since when IPython has provided macros, but now you can simply do this:
ipy = get_ipython()
ipy.define_macro('d', 'date')
You can put this code into any file located in ~/.ipython/profile_default/startup/, and then this macro will be automatically available when you start IPython.
However, a macro doesn't accept arguments, so please keep this in mind before you choose to define one.
I'm setting os.environ['PYTHONHOME']="/home/user/OpenPrint/py2.6" in my Python script
But at the end of the script I need to clear this variable so that I can call another python script from a different location. Can someone tell me how to do that? I tried os.environ.clear() but that clears all the other variables too.
Use
os.environ.pop("PYTHONHOME")
See (minimal) documentation at http://docs.python.org/2/library/os.html
try
del os.environ["PYTHONHOME"]
This deletes the variable "PYTHONHOME" from the os.environ dict.
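A minimal check of that behavior (using a hypothetical variable name so the snippet doesn't touch PYTHONHOME):

```python
import os

os.environ["DEMO_VAR"] = "x"   # set a throwaway variable
del os.environ["DEMO_VAR"]     # remove it again
print("DEMO_VAR" in os.environ)  # False
```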
To unset the environment variable only for the child scripts invoked afterwards, the following will work too.
os.unsetenv('PYTHONHOME')
UPDATE:
If you have to delete the variable for the rest of the flow, os.environ.pop('PYTHONHOME') or del os.environ['PYTHONHOME'] is better. However, if you want to unset the environment variable only for the scripts you invoke at the end, os.unsetenv('PYTHONHOME') works better, as it keeps the variable in the current process's environment. It will also depend on how you invoke the script.
Python documentation says
Unset (delete) the environment variable named key. Such changes to the
environment affect subprocesses started with os.system(), popen() or
fork() and execv().
See the example below.
Sample script (/tmp/env.py)
import os
print os.environ["VIFI"]
Now let's look at following.
vifi-a01:~ vifi$ python
Python 2.7.16 (default, Oct 16 2019, 00:34:56)
[GCC 4.2.1 Compatible Apple LLVM 10.0.1 (clang-1001.0.37.14)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> 'VIFI' in os.environ # EnvVar not present initially.
False
>>> os.environ['VIFI'] = 'V' # set the env var
>>> os.system('python /tmp/env.py') # child process/script gets it
V
0
>>> os.unsetenv('VIFI') # unset env only for child script
>>> os.system('python /tmp/env.py')
Traceback (most recent call last):
File "/tmp/env.py", line 2, in <module>
print os.environ["VIFI"]
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/UserDict.py", line 40, in __getitem__
raise KeyError(key)
KeyError: 'VIFI'
256
>>> 'VIFI' in os.environ # rest of the flow still has it
True
>>> os.environ['VIFI'] = 'V' # set it again for child process/script
>>> os.system('python /tmp/env.py')
V
0
>>>
>>> os.environ["VIFI"] = "V"
>>> ^D
vifi-a01:~ vifi$ echo $VIFI
vifi-a01:~ vifi$ printenv | grep "VIFI"
vifi-a01:~ vifi$
Btw, setting the environment via os.environ is only local to the process (and its child processes) in which it is set. It has no effect on the parent shell's environment, as you can see at the end.
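A short sketch of that last point: a change made through os.environ is visible to child processes but never reaches the parent shell (DEMO_VAR is a hypothetical name):

```python
import os
import subprocess
import sys

os.environ["DEMO_VAR"] = "hello"  # set only in this process

# the child process inherits the modified environment
child = subprocess.check_output(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_VAR'])"]
).decode().strip()
print(child)  # hello
```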
For unit tests you can use patch.dict to clear all environment vars
class FooTests(TestCase):
    @mock.patch.dict(os.environ, {}, clear=True)
    def test_something(self):
        self.assertNotIn("PYTHONHOME", os.environ)
(Ref: https://adamj.eu/tech/2020/10/13/how-to-mock-environment-variables-with-pythons-unittest/)