Python: import the same file name programmatically from different folders - python

I have the following structure:
sites
- SiteA
  - Settings.py
- SiteB
  - Settings.py
- SiteC
  - Settings.py
All the Settings.py files have a variable
DOMAIN_KEY = ''
Is there a way to programmatically import this structure, loop through all the settings files, and print this variable from each?
EDIT (Updated code):
with cd(os.path.join(APPLICATION_DIR, 'config', 'sites')):
    output = run("ls -l | grep \"^d\" | awk '{print $8}'")  # run() executes the command and returns the list of folders
for file in output.split():
    __module = __import__('%s' % file)
    print __module
    print __module.TIME_ZONE
    print __module.SITE_ID
The import is not working properly; I am not sure how I can import the settings files without knowing the folder names.

Supposing you do not have duplicate file names, you can access these modules and their variables like this:
import SiteA.Settings as A, SiteB.Settings as B, SiteC.Settings as C
print(A.DOMAIN_KEY, B.DOMAIN_KEY, C.DOMAIN_KEY)
EDIT: Another option would be the __import__() function, which lets you load a module by its name:
for char in "ABC":
    mod = __import__("Site%c.Settings" % char)
    print(mod.Settings.DOMAIN_KEY)
EDIT 2: regarding the edit in the OP.
Why don't you just append the "Settings" module name to the folder name you got from the script?
for file in output.split():
    __module = __import__('%s.Settings' % file)
    print __module.Settings.DOMAIN_KEY
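If you want to discover the folders from Python itself instead of shelling out, a minimal sketch along these lines should work, assuming the sites directory is on sys.path and each site folder is a package (contains an __init__.py); the 'sites' path is taken from the question and may need adjusting:
import os

sites_dir = 'sites'  # adjust to the real location, e.g. os.path.join(APPLICATION_DIR, 'config', 'sites')
for folder in sorted(os.listdir(sites_dir)):
    if not os.path.isfile(os.path.join(sites_dir, folder, 'Settings.py')):
        continue  # skip anything that is not a site folder
    # fromlist makes __import__ return the Settings submodule instead of the top-level package
    settings = __import__('%s.Settings' % folder, fromlist=['Settings'])
    print('%s: %s' % (folder, settings.DOMAIN_KEY))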

Related

Call a function from different file where the file name and function name are read from a list

I have multiple functions stored in different files. Both the file names and the function names are stored in lists. Is there any way to call the required function without conditional statements?
For example, file1 has functions function11 and function12:
def function11():
    pass
def function12():
    pass
file2 has functions function21 and function22:
def function21():
    pass
def function22():
    pass
and I have the lists
file_name = ["file1", "file2", "file1"]
function_name = ["function12", "function22", "function12"]
I will get the list index from a different function; based on that, I need to call the function and get its output.
If the other function will give you a list index directly, then you don't need to deal with the function names as strings. Instead, directly store (without calling) the functions in the list:
import file1, file2
functions = [file1.function12, file2.function22, file1.function12]
And then call them once you have the index:
functions[index]()
There are ways to do what is called "reflection" in Python and get from the string to a matching-named function. But they solve a problem that is more advanced than what you describe, and they are more difficult (especially if you also have to work with the module names).
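(For illustration, the basic form of that reflection on a module you have already imported is just a getattr lookup by string, using the file1/function12 names from the question:)
import file1

# Look the function object up by its string name, then call it
func = getattr(file1, "function12")
func()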
If you have a "whitelist" of functions and modules that are allowed to be called from the config file, but still need to find them by string, you can explicitly create the mapping with a dict:
allowed_functions = {
    'file1': {
        'function11': file1.function11,
        'function12': file1.function12
    },
    'file2': {
        'function21': file2.function21,
        'function22': file2.function22
    }
}
And then invoke the function:
try:
    func = allowed_functions[module_name][function_name]
except KeyError:
    raise ValueError("this function/module name is not allowed")
else:
    func()
The most advanced approach is if you need to load code from a "plugin" module created by the author. You can use the standard library importlib package to use the string name to find a file to import as a module, and import it dynamically. It looks something like:
import os
from importlib.util import spec_from_file_location, module_from_spec

# Look for the file at the specified path, figure out the module name
# from the base file name, import it and make a module object.
def load_module(path):
    folder, filename = os.path.split(path)
    basename, extension = os.path.splitext(filename)
    spec = spec_from_file_location(basename, path)
    module = module_from_spec(spec)
    spec.loader.exec_module(module)
    assert module.__name__ == basename
    return module
This is still unsafe, in the sense that it can look anywhere on the file system for the module. Better if you specify the folder yourself, and only allow a filename to be used in the config file; but then you still have to protect against hacking the path by using things like ".." and "/" in the "filename".
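A minimal sketch of that kind of guard, assuming a fixed plugins_dir that you choose yourself and a filename coming from the config file (both names here are illustrative):
import os

def safe_plugin_path(plugins_dir, filename):
    # Only accept a bare file name: no directory separators, no "." or ".."
    if os.path.basename(filename) != filename or filename in ('', '.', '..'):
        raise ValueError("invalid plugin file name")
    path = os.path.abspath(os.path.join(plugins_dir, filename))
    # Belt and braces: the resolved path must still live inside plugins_dir
    if os.path.commonpath([path, os.path.abspath(plugins_dir)]) != os.path.abspath(plugins_dir):
        raise ValueError("plugin path escapes the plugins directory")
    return path

# e.g. module = load_module(safe_plugin_path("plugins", name_from_config))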
(I have a project that does something like this. It chooses the paths from a whitelist that is also under the user's control, so I have to warn my users not to trust the path-whitelist file from each other. I also search the directories for modules, and then make a whitelist of plugins that may be used, based only on plugins that are in the directory - so no funny games with "..". And I'm still worried I forgot something.)
Once you have the module object, you can get a function from it by name like this:
dynamic_module = load_module(some_path)
try:
    func = getattr(dynamic_module, function_name)
except AttributeError:
    raise ValueError("function not in module")
At any rate, there is no reason to eval anything, or generate and import code based on user input. That is most unsafe of all.
Another alternative. This is not much safer than an eval(), however.
Someone with access to the lists you read from the config file could inject malicious code into the lists you import,
e.g.:
'from subprocess import call; call(["rm", "-rf", "./*"], shell=True)'
Code:
import re

# You must first create a directory named "test_module"
# (you can do this with code if needed).
# Python recognizes a "module" as a module by the existence of an __init__.py
# It will load that __init__.py at the "import" command, and you can access the methods it imports
m = ["os", "sys", "subprocess"]  # Modules to import from
f = ["getcwd", "exit", "call; call('do', '---terrible-things')"]  # Methods to import

# Create an __init__.py
with open("./test_module/__init__.py", "w") as FH:
    for count in range(0, len(m), 1):
        # Writes "from module import method" to __init__.py
        line = "from {} import {}\n".format(m[count], f[count])
        # !!!! SANITIZE THE LINE !!!!
        if not re.match("^from [a-zA-Z0-9._]+ import [a-zA-Z0-9._]+$", line):
            print("The line '{}' is suspicious. Will not be entered into __init__.py!!".format(line))
            continue
        FH.write(line)

import test_module
print(test_module.getcwd())
OUTPUT:
The line 'from subprocess import call; call('do', '---terrible-things')' is suspicious. Will not be entered into __init__.py!!
/home/rightmire/eclipse-workspace/junkcode
I'm not 100% sure I understand the need; maybe add more detail to the question.
Is something like this what you're looking for?
m = ["os"]
f = ["getcwd"]
command = ''.join([m[0], ".", f[0], "()"])
# Put in some minimum sanity checking and sanitization!!!
if ";" in command or <other dangerous string> in command:
print("The line '{}' is suspicious. Will not run".format(command))
sys.exit(1)
print("This will error if the method isnt imported...")
print(eval(''.join([m[0], ".", f[0], "()"])) )
OUTPUT:
This will error if the method isnt imported...
/home/rightmire/eclipse-workspace/junkcode
As pointed out by @KarlKnechtel, having commands come in from an external file is a gargantuan security risk!

Python3.6 |- Import a module from a list with __import__ -|

I'm creating a really basic program that simulates a terminal with Python 3.6. Its name is Prosser (the origin is that "Prosser" sounds like "Processer", and Prosser is a command processor).
The problem I'm having is with command importing. All the commands are stored in a folder called "lib" in the root folder of Prosser, and inside it there can be folders and files; a folder inside the "lib" dir isn't called a folder anymore, it is called a package (but that doesn't matter for now).
The interface of the program is just an input prompt:
Prosser:/home/pedro/documents/projects/prosser/-!
and the user can type a command at the prompt, like in a normal terminal:
Prosser:/home/pedro/documents/projects/prosser/-! console.write Hello World
Let's say that inside the "lib" folder there is one folder called "console", and inside it a file called "write.py" with this code:
class exec:
    def main(args):
        print(args)
As you can see, the first two lines form an important structure for command execution: the class "exec" is the main class for command execution, and the def "main" is the first function that the terminal will read and execute, passing in the arguments the user supplied. After that, the command itself is responsible for catching any errors and doing whatever it was created to do.
At this moment everything is OK, but now comes the part where I really need your help: the command import.
As I wrote, the user can type any command; in the example above I typed "console.write Hello World", and there is one folder called "console" and one file "write.py". The point is that packages are separated by a dot, that is:
-! write Hello World
Above I only typed "write", which means the file lives directly inside the "lib" folder; it has no package to store and separate it, so it is a Freedom command (a command that has no packages or nodes).
-! console.write Hello World
Now I typed "console.write", which means the file has a package or node to store and separate it, so it is a Tied command (a command that has packages or nodes).
In short, a file is separated from its package(s) with dots: the more dots you put, the more folders are navigated to find the file and proceed with the execution.
Code
Finally, the code. With the import statement I tried this:
import os
import form

curDir = os.path.dirname(os.path.abspath(__file__))  # Returns the directory currently open

while __name__ == "__main__":
    c = input('Prosser:%s-! ' % (os.getcwd())).split()  # Suppose the user typed "task.kill"
    path = c[0].split('.')  # Returns a list like: ["task", "kill"]
    try:
        args = c[1:]  # Try to get the command arguments
        format = form.formater(args)  # Just text formatting, like creating a new line with "$n"
    except:
        args = None  # If there are no arguments, args is just set to None
        pass
    if os.path.exists(curDir + "/lib/" + "/".join(path) + ".py"):  # If curDir+/lib/task/kill.py exists
        module = __import__("lib." + '.'.join(path))  # Returns "lib.task.kill"
        module.exec.main(args)  # Execute the "exec" class and the def "main"
    else:
        pathlast = path[-1]  # Get the last item, in this case "kill"
        path.remove(path[-1])  # Remove the last item, "kill"
        print('Error: Unknown command: ' + '.'.join(path) + '. >>' + pathlast + '<<')  # e.g. "Error: Unknown command: task. >>kill<<"
    # Reset the input interface
The problem is that when the line "module = __import__("lib." + '.'.join(path))" is executed the console prints the error:
Traceback (most recent call last):
  File "/home/pedro/documents/projects/prosser/main.py", line 18, in <module>
    module.exec.main(path) # Execute the "exec" class and the def "main"
AttributeError: module 'lib' has no attribute 'exec'
I also tried to use:
module = __import__(curDir + "lib." + '.'.join(path))
But it gives the same error. I think that's enough for now. I'd appreciate it if someone could help me or suggest a fix for the code. :)
I think you have an error here.
You use one path here:
if os.path.exists(curDir + "/lib/" + "/".join(path) + ".py")
and a different one here (this one doesn't include curDir):
module = __import__("lib." + '.'.join(path)) # Returns "lib.task.kill"
You should use os.path.join to build filesystem paths like this:
os.path.join(curDir, 'lib', *path) + '.py'
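Note, though, that __import__ expects a dotted module name rather than a filesystem path, and when given "lib.task.kill" it returns the top-level lib package, which is exactly why module.exec then fails. A sketch of one way to get the leaf module instead (assuming lib and its subfolders are importable packages) is importlib.import_module:
import importlib

# import_module returns lib.task.kill itself rather than the top-level "lib" package
module = importlib.import_module("lib." + ".".join(path))
module.exec.main(args)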

Paste.deploy - Is it possible to pass 2 config files?

I'm in the context of a pyramid app, which has a wsgi.py file that looks like this:
import os.path
import traceback
from paste.deploy import loadapp
from pyramid.paster import setup_logging

DEFAULT_CONF_FILE = "/etc/myconf.conf"
config = DEFAULT_CONF_FILE
try:
    import mod_wsgi
    process_group = mod_wsgi.process_group
    config = os.path.join('/etc', process_group + '.conf')
except Exception as e:
    print "There was an exception when trying to determine the configuration file from mod_wsgi: %s" % str(e)
    traceback.print_exc()

if not os.path.isfile(config):
    config = DEFAULT_CONF_FILE

setup_logging(config)
application = loadapp('config:' + config)
What I want to do is to be able to use 2 configuration files.
My first guess is just to write a new file where I put the content of the 2 config files in it, but it seems... ugly.
Reading the paste.deploy docs, I found nothing that seems close to what I want to do, except maybe the factories. The thing is, I'm not sure what they're for, and I want to do something like:
app_factory('myconf1.conf', 'myconf2.conf')
and not:
app_factory('myconf1.conf', some_option='value', some_other_option='other value',...)
Am I missing something, or is there just no way to use 2 conf files with paste.deploy, so I'll just have to 'concatenate' the 2 files?
Thanks.
EDIT:
I've read this question which could look like what I want to do, but not quite (I shouldn't modify my conf file). I don't want to override a section in the base file. I really just want a concatenation of those two files without having to do it beforehand.
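For what it's worth, the "concatenate" fallback you mention can at least be automated at startup rather than done by hand. A rough sketch (file names are illustrative) that merges the two files with ConfigParser into a temporary .ini before handing it to loadapp:
import tempfile
from ConfigParser import ConfigParser  # configparser on Python 3
from paste.deploy import loadapp

parser = ConfigParser()
# Later files win on conflicting options, so list the overriding file last
parser.read(['/etc/myconf.conf', '/etc/myextra.conf'])

merged = tempfile.NamedTemporaryFile(mode='w', suffix='.ini', delete=False)
parser.write(merged)
merged.close()

application = loadapp('config:' + merged.name)
One caveat: any %(here)s references in the originals will then resolve relative to the temporary file's directory.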

WLST execute stored variable "connect()" statement

So, I am passing an environment variable from bash to Python:
#!/usr/bin/env python2
import os
#connect("weblogic", "weblogic", url=xxx.xxx.xxx.xxx:xxxx)
os.environ['bash_variable']
Via wlst.sh I can print the exported bash_variable, but how do I execute the stored variable? Basically, I am trying to remove the original connect statement and pass in a variable that holds that information. Thanks
A question, though: why wouldn't you call the script with the variable as an argument and use sys.argv[]?
For example, something like this:
import os
import sys
import traceback
from java.io import *
from java.lang import *

wlDomain = sys.argv[1]
wlDomPath = sys.argv[2]
wlNMHost = sys.argv[3]
wlNMPort = sys.argv[4]
wlDPath = "%s/%s" % (wlDomPath, wlDomain)
wlNMprop = "/apps/bea/wls/scripts/.shadow/NM.prop"

try:
    print "Connection to Node Manager"
    print ""
    loadProperties(wlNMprop)
    nmConnect(username=NMuser,password=NMpass,host=wlNMHost,port=wlNMPort,domainName=wlDomain,domainDir=wlDPath,mType='ssl',verbose='true')
except:
    print "Fatal Error : No Connection to Node Manager"
    exit()

print "Connected to Node Manager"
The NM.prop file is a mode-600 file containing the username/password for the Node Manager.
EDIT:
So from what I understand, you want to do something like this:
URLS = ['t3s://Host1:Port1','t3s://Host2:Port2','t3s://Host3:Port3']
for url in URLS:
    connect('somebody','password',url)
    # {bunch of commands}
    disconnect()
And the values of the URLS list would be defined by the environment.
The way I see it, you have 3 choices:
Have 1 script per environment, more or less identical except for the URLS list.
Have 1 script with conditional branching on sys.argv[1] (the environment passed as a parameter) and create the list there.
Have 1 script that uses a parameter file per environment, chosen according to the environment, with each parameter file containing the list in question.
For the third option, something like this:
propENV = sys.argv[1]
propPath = "/path1/path2"
propFile = "%s/%s" %(propPath,propENV)
loadProperties(propFile)
I would probably use the properties file option myself as it is more flexible from an operational standpoint...at least IMHO.
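A minimal sketch of the second option, assuming the environment name is passed as the first argument (the environment names and URLs below are made up):
import sys

env = sys.argv[1]
if env == "DEV":
    URLS = ['t3s://devhost1:7002']
elif env == "PROD":
    URLS = ['t3s://prodhost1:7002', 't3s://prodhost2:7002']
else:
    print "Unknown environment: %s" % env
    exit()

for url in URLS:
    connect('somebody', 'password', url)
    # {bunch of commands}
    disconnect()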

How do I access ${buildout:directory} from Python code?

I have a Pyramid web application managed with zc.buildout. In it, I need to read a file on disk, which is located in a sub-directory of buildout directory.
The problem is with determining the path to the file - I do not want to hard-code the absolute path and just providing a relative path does not work when serving the app in production (supposedly because the working directory is different).
So the promising "hooks" I am thinking about are:
the "root" buildout directory, which I can address in buildout.cfg as ${buildout:directory} - however, I can't figure out how can I "export" it so it can be accessed by the Python code
the location of the Paster's .ini file which starts the app
Like @MartijnPieters suggests in a comment on your own answer, I'd use collective.recipe.template to generate an entry in the .ini. I wondered myself how I could then access that data in my project, so I worked it out :-)
Let's work our way backwards to what you need. First in your view code where you want the buildout directory:
def your_view(request):
    buildout_dir = request.registry.settings['buildout_dir']
    ....
request.registry.settings (see the documentation) is a "dictionary-like deployment settings object". See deployment settings: that's the **settings that gets passed into your main method, as in def main(global_config, **settings).
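To make that concrete, here is a minimal sketch of such a main (this is the standard Pyramid app-factory shape, nothing specific to buildout):
from pyramid.config import Configurator

def main(global_config, **settings):
    # settings contains everything from [app:main], including buildout_dir
    buildout_dir = settings['buildout_dir']
    config = Configurator(settings=settings)
    return config.make_wsgi_app()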
Those settings are what's in the [app:main] part of your deployment.ini or production.ini file. So add the buildout directory there:
[app:main]
use = egg:your_app
buildout_dir = /home/you/wherever/it/is
pyramid.reload_templates = true
pyramid.debug_authorization = false
...
But, and this is the last step, you don't want that hardcoded path in there. So generate the .ini from a template. The template development.ini.in uses a ${partname:variable} expansion language; in your case you need ${buildout:directory}:
[app:main]
use = egg:your_app
buildout_dir = ${buildout:directory}
#               ^^^^^^^^^^^^^^^^^^^^^
pyramid.reload_templates = true
pyramid.debug_authorization = false
...
Add a buildout part in buildout.cfg to generate development.ini from development.ini.in:
[buildout]
...
parts =
    ...
    inifile
...

[inifile]
recipe = collective.recipe.template
input = ${buildout:directory}/development.ini.in
output = ${buildout:directory}/development.ini
Note that you can do all sorts of cool stuff with collective.recipe.template: use ${serverconfig:portnumber} to generate a matching port number in your production.ini and in your your_site_name.nginx.conf, for instance. Have fun!
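For example (the [serverconfig] section name and port are made up), a plain buildout section can hold the value once:
[serverconfig]
portnumber = 8080
and both production.ini.in and your_site_name.nginx.conf.in can then refer to ${serverconfig:portnumber}, so the two generated files always agree on the port.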
If the path to the file relative to the buildout root or location of paster.ini is always the same, which it seems it is from your question, you could set it in paster.ini:
[app:main]
...
config_file = %(here)s/path/to/file.txt
Then access it from the registry as in Reinout's answer:
def your_view(request):
    config_file = request.registry.settings['config_file']
Here's a rather clumsy solution I've devised:
In buildout.cfg I used the extra-paths option of zc.recipe.egg to add the buildout directory to sys.path:
....
[webserver]
recipe = zc.recipe.egg:scripts
eggs = ${buildout:eggs}
extra-paths = ${buildout:directory}
then I put a file called app_config.py into the buildout directory:
# This remembers the root of the installation (similar to ${buildout:directory})
# so we can import it and use it where we need access to the filesystem.
# Note: we could use os.getcwd() for that, but it feels kinda wonky.
# This is not directly related to Celery; we may want to move it somewhere else.
import os.path

INSTALLATION_ROOT = os.path.dirname(__file__)
Now we can import it in our Python code:
from app_config import INSTALLATION_ROOT
filename = os.path.join(INSTALLATION_ROOT, "somefile.ext")
do_stuff_with_file(filename)
If anyone knows a nicer solution you're welcome :)
