I am trying to simplify a workflow that requires several individual scripts to be run. So far, I have been able to write a script that runs the other scripts, but I have one issue that I can't seem to resolve. Each of the sub-scripts requires a file path, and one component of the path needs to be changed depending on who runs the scripts. Currently, I have to open each sub-script and manually change this argument.
Is it possible to set this argument as a variable in the parent script and pass it to the sub-scripts? That way it would only need to be set once and would no longer have to be updated in each sub-script.
So far I have:
import os

def driver(path: str):
    path_base = path
    path_use = os.path.join(path_base, 'docs', 'analysis', 'forecast')
    file_cash = os.path.join(path_use, 'cash.py')
    file_cap = os.path.join(path_use, 'cap.py')
    exec(open(file_cash).read())
    exec(open(file_cap).read())

if __name__ == '__main__':
    driver(path=r'c:\users\[username]')
I would like to set path=r'c:\users\[username]' and then pass that to cash.py and cap.py.
Instead of trying to replicate the behaviour of the import statement, you should directly import these subscripts and pass the values you need them to use as function / method arguments. To import a script from a specific file path, you can use importlib.util.spec_from_file_location(), like this:
main.py
import importlib.util
import os

def load_module(module_name: str, file_path: str):
    # import a module from an explicit file path
    spec = importlib.util.spec_from_file_location(module_name, file_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

def driver(path: str):
    path_use = os.path.join(path, 'docs', 'analysis', 'forecast')
    cash = load_module('cash', os.path.join(path_use, 'cash.py'))
    cap = load_module('cap', os.path.join(path_use, 'cap.py'))
    cash.cash("some_arg")
    cap.cap("some_other_arg")

if __name__ == '__main__':
    driver(path=r'c:\users\[username]')
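For this to work, each subscript has to expose its logic as a function instead of running everything at import time. A minimal sketch of what cash.py might then look like (the function name and parameter here are assumptions, not from the original post):

# docs/analysis/forecast/cash.py (hypothetical)
def cash(base_path):
    # use the path handed down by the parent script instead of a hard-coded one
    print("running cash analysis under", base_path)

The driver can then pass the user-specific path straight through, e.g. cash.cash(path).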
I have a situation like this: I am creating a Python package that needs to use Redis, so I want to allow the user of the package to define the Redis URL.
Here's how I attempted to do it:
bin/main.py
from logging import basicConfig, DEBUG

from my_package.main import run
from my_package.config import config

basicConfig(filename='logs.log', level=DEBUG)

# the user defines the redis url
config['redis_url'] = 'redis://localhost:6379/0'

run()
my_package/config.py
config = {
    "redis_url": None
}
my_package/main.py
from .config import config

def run():
    print(config["redis_url"])  # prints None instead of what I want
Unfortunately, it doesn't work: in main.py the value of config["redis_url"] is None instead of the URL defined in the bin/main.py file. Why is that? How can I make it work?
I could pass the config to the run() function, but then if I run some other function I would need to pass the config to that function as well. Ideally I'd like to set it once.
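One pattern that avoids this whole class of problem is to funnel all reads and writes through accessor functions in the package, so every caller is guaranteed to touch the same module-level dict. A minimal sketch, with set_redis_url/get_redis_url as invented names:

# my_package/config.py (sketch)
_config = {"redis_url": None}

def set_redis_url(url):
    # mutate the shared dict in place; rebinding the name would not propagate
    _config["redis_url"] = url

def get_redis_url():
    return _config["redis_url"]

The caller then does set_redis_url('redis://localhost:6379/0') before run(), and my_package/main.py reads the value back with get_redis_url().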
for entry in os.scandir(document_dir):
    if os.path.isdir(entry):
        pass  # some code goes here
    else:
        # else the file needs to be in a folder
        file_path = entry.path.replace(os.sep, '/')
I am having trouble mocking os.scandir and the path attribute of the entries inside the else branch. I have not been able to mock the property on the mock object I created in my unit tests.
with patch("os.scandir") as mock_scandir:
    # mock_scandir.return_value = ["docs.json", ]
    # mock_scandir.side_effect = ["docs.json", ]
    # mock_scandir.return_value.path = PropertyMock(return_value="docs.json")
These are all the options I've tried. Any help is greatly appreciated.
It depends on what you really need to mock. The problem is that os.scandir returns entries of type os.DirEntry. One possibility is to use your own mock DirEntry and implement only the members that you need (in your example, only path). For your example, you also have to mock os.path.isdir. Here is a self-contained example of how you can do this:
import os
from unittest.mock import patch

def get_paths(document_dir):
    # example function containing your code
    paths = []
    for entry in os.scandir(document_dir):
        if os.path.isdir(entry):
            pass
        else:
            # else the file needs to be in a folder
            file_path = entry.path.replace(os.sep, '/')
            paths.append(file_path)
    return paths

class DirEntry:
    # minimal stand-in for os.DirEntry: path is a plain attribute,
    # which is all the code under test uses
    def __init__(self, path):
        self.path = path

@patch("os.scandir")
@patch("os.path.isdir")
def test_sut(mock_isdir, mock_scandir):
    # patch decorators apply bottom-up, so isdir maps to the first argument
    mock_isdir.return_value = False
    mock_scandir.return_value = [DirEntry("docs.json")]
    assert get_paths("anydir") == ["docs.json"]
Depending on your actual code, you may have to do more.
If you want to patch more file system functions, you may consider using pyfakefs instead, which patches the whole file system. That would be overkill for a single test, but it can be handy for a test suite that relies on file system functions.
Disclaimer: I'm a contributor to pyfakefs.
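For comparison, a hedged sketch of the same test using pyfakefs's pytest fixture (fs is the fixture pyfakefs provides; path handling shown as on a POSIX system):

def test_get_paths_with_pyfakefs(fs):
    # fs replaces the real file system, so files can be created freely
    fs.create_file("/anydir/docs.json")
    assert get_paths("/anydir") == ["/anydir/docs.json"]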
I have 2 files, prgm.py and test.py.
prgm.py

def move(self):
    H = newtest.myfunction()
    i = H.index(z)
    user = newuser.my_function()
    print(user[i])

How will I get user[i] in the other file, test.py?
Use an import statement in the other file, like this: from prgm import move.
Note: for this to work, both of the files need to be in the same folder, or the path to the file you are importing needs to be on your PYTHONPATH.
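If the files are not in the same folder, a small sketch of the sys.path route (the directory name is purely illustrative):

import sys
sys.path.append("/some/other/dir")  # the directory that contains prgm.py
from prgm import move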
Instead of printing the result, you can simply return it. In the second file, you just import the function from this source file and call it.
Given the situation, move is actually a method defined on a class, so you need to import the whole class and instantiate it in the second file:
prgm.py

class Example:
    def move(self):
        H = newtest.myfunction()
        i = H.index(z)
        user = newuser.my_function()
        return user[i]

test.py

from prgm import Example

example = Example()
user = example.move()
# do things with user
I have a Python file (sql_script.py) with some methods to add or modify data in a SQL database, say:
import_data_into_specifications_table
import_data_into_linkage_table
truncate_linkage_table
....(do_other_stuff_on_db)
connect_db
Sometimes I have to call only one of the methods, other times several of them.
Until now, what I did was modify the main method according to what I needed to do:
if __name__ == '__main__':
    conn = connect_db()
    import_data_into_specifications_table(conn=conn)
    import_data_into_linkage_table(conn=conn)
    conn.close()
But I find this a bad practice, as I always have to remember to revert the main block before committing the code.
A possible option could be to write an external Python file, say launch_sql_script.py, in which I write all the possible combinations of methods I have to run, say:
def import_spec_and_linkage():
    conn = connect_db()
    import_data_into_specifications_table(conn=conn)
    import_data_into_linkage_table(conn=conn)
    conn.close()

...

if __name__ == '__main__':
    import_spec_and_linkage()
It can be useful to version this file, but I will still need to modify the main code according to what I need to run.
Do you think this is a good practice? Do you have any other suggestions?
The simplest way is to use the program-arguments mechanism: state the intended action when you launch the script.
Take a peek at sys.argv.
Here is a sketch:
from sys import argv

def meow():
    print("Meow!")

def bark():
    print("Bark!")

def moo():
    print("Moo!")

actions = {
    "meow": meow,
    "bark": bark,
    "moo": moo,
}

actions[argv[1]]()
If you're going to parse more sophisticated program arguments, check out the argparse library.
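For instance, a minimal argparse sketch of the same dispatch (reusing the actions dict above; the description string is invented):

import argparse

parser = argparse.ArgumentParser(description="Run a selected action")
parser.add_argument("action", choices=sorted(actions))
args = parser.parse_args()
actions[args.action]()  # e.g. `python script.py meow`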
Option 1: Separate them into individual scripts and run each from command line
# import_data_into_specifications_table.py
if __name__ == '__main__':
    conn = connect_db()  # import from a shared file
    import_data_into_specifications_table(conn=conn)

# in bash
$ python import_data_into_specifications_table.py
Option 2: Write one file that parses command line arguments
# my_sql_script.py
if __name__ == '__main__':
    conn = connect_db()
    if args.spec_table:  # use ArgumentParser to get these
        import_data_into_specifications_table(conn=conn)
    if args.linkage_table:
        import_data_into_linkage_table(conn=conn)
    ...

# in bash
$ python my_sql_script.py --spec_table --linkage_table
I would favour option 2 if the order of the operations doesn't matter or is always constant. If there are many permutations, I would go with option 1.
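As a hedged fill-in for where args comes from in Option 2 (flag names copied from the example; everything else is standard argparse):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--spec_table", action="store_true")
parser.add_argument("--linkage_table", action="store_true")
args = parser.parse_args()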
So, I am passing an environment variable from bash to Python:
#!/usr/bin/env python2
import os
#connect("weblogic", "weblogic", url=xxx.xxx.xxx.xxx:xxxx)
os.environ['bash_variable']
Via wlst.sh I can print the exported bash_variable, but how do I use the stored variable? Basically, I am trying to remove the original connect statement and pass in a variable that holds that information. Thanks.
Question though: why wouldn't you call the script with the variable as an argument and use sys.argv?
For example, something like this:
import os
import sys
import traceback
from java.io import *
from java.lang import *

wlDomain = sys.argv[1]
wlDomPath = sys.argv[2]
wlNMHost = sys.argv[3]
wlNMPort = sys.argv[4]
wlDPath = "%s/%s" % (wlDomPath, wlDomain)
wlNMprop = "/apps/bea/wls/scripts/.shadow/NM.prop"

try:
    print "Connection to Node Manager"
    print ""
    loadProperties(wlNMprop)
    nmConnect(username=NMuser, password=NMpass, host=wlNMHost, port=wlNMPort, domainName=wlDomain, domainDir=wlDPath, mType='ssl', verbose='true')
except:
    print "Fatal Error : No Connection to Node Manager"
    exit()

print "Connected to Node Manager"
The NM.prop file is a mode-600 properties file holding the username/password for the Node Manager.
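For context, WLST's loadProperties reads key=value pairs from the file and exposes them as script variables, which is where NMuser and NMpass in the snippet above come from. An illustrative NM.prop (values invented):

NMuser=nmadmin
NMpass=s3cret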
EDIT:
So from what I understand, you want to do something like this:
URLS = ['t3s://Host1:Port1', 't3s://Host2:Port2', 't3s://Host3:Port3']

for urls in URLS:
    connect('somebody', 'password', urls)
    # {bunch of commands}
    disconnect()
And the values of the URLS list would be defined by the environment.
The way I see it, you have 3 choices:
Have 1 script per environment, more or less identical save for the URLS list.
Have 1 script with conditional branching on sys.argv[1] (the environment passed as a parameter), and build the list there; a sketch of this option follows at the end.
Have 1 script that uses a parameter file per environment, each parameter file containing the list in question.
Something like this:
propENV = sys.argv[1]
propPath = "/path1/path2"
propFile = "%s/%s" % (propPath, propENV)
loadProperties(propFile)
I would probably use the properties file option myself as it is more flexible from an operational standpoint...at least IMHO.
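And for completeness, the promised hedged sketch of the second choice, branching on the environment parameter (host names, ports, and environment labels are invented):

propENV = sys.argv[1]

if propENV == "DEV":
    URLS = ['t3s://devhost:7002']
elif propENV == "PROD":
    URLS = ['t3s://prodhost1:7002', 't3s://prodhost2:7002']
else:
    print "Unknown environment : %s" % propENV
    exit()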