Paste.deploy - Is it possible to pass 2 config files? - python

I'm in the context of a pyramid app, which has a wsgi.py file that looks like this:
import os.path
import traceback

from paste.deploy import loadapp
from pyramid.paster import setup_logging

DEFAULT_CONF_FILE = "/etc/myconf.conf"

config = DEFAULT_CONF_FILE
try:
    import mod_wsgi
    process_group = mod_wsgi.process_group
    config = os.path.join('/etc', process_group + '.conf')
except Exception as e:
    print "There was an exception when trying to determine the configuration file from mod_wsgi: %s" % str(e)
    traceback.print_exc()

if not os.path.isfile(config):
    config = DEFAULT_CONF_FILE

setup_logging(config)
application = loadapp('config:' + config)
What I want to do is to be able to use 2 configuration files.
My first guess is just to write a new file where I put the content of the 2 config files in it, but it seems... ugly.
Reading the doc of paste.deploy, I found nothing that seems to be close to what I'd want to do, except maybe for the factories. Thing is, I'm not sure what they're for and I want to do something like:
app_factory('myconf1.conf', 'myconf2.conf')
and not:
app_factory('myconf1.conf', some_option='value', some_other_option='other value',...)
Am I missing something, or is there just no way to use 2 conf files with paste.deploy, so that I'll just have to 'concatenate' the 2 files?
Thanks.
EDIT:
I've read this question which could look like what I want to do, but not quite (I shouldn't modify my conf file). I don't want to override a section in the base file. I really just want a concatenation of those two files without having to do it beforehand.
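For reference, here is roughly what the "concatenate at runtime" fallback I have in mind would look like: a rough sketch only, assuming both files are plain ConfigParser-compatible .ini files; the file names and the load_merged_app helper are made up for illustration, and comments plus %(here)s-style defaults are not handled.
import tempfile
from ConfigParser import ConfigParser  # configparser on Python 3

from paste.deploy import loadapp

def load_merged_app(*conf_files):
    # Read all files into one parser (later files override earlier ones,
    # section by section), then dump the result to a temporary .conf file.
    parser = ConfigParser()
    parser.read(conf_files)
    merged = tempfile.NamedTemporaryFile(suffix='.conf', delete=False)
    parser.write(merged)
    merged.close()
    return loadapp('config:' + merged.name)

application = load_merged_app('/etc/myconf.conf', '/etc/myother.conf')
It works, but it still amounts to concatenating the files, just without keeping the merged copy around by hand.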

Related

Call a function from different file where the file name and function name are read from a list

I have multiple functions stored in different files. Both the file names and the function names are stored in lists. Is there any option to call the required function without conditional statements?
For example, file1 has the functions function11 and function12:
def function11():
    pass

def function12():
    pass
file2 has functions function21 and function22:
def function21():
    pass

def function22():
    pass
and I have the lists
file_name = ["file1", "file2", "file1"]
function_name = ["function12", "function22", "function12"]
I will get the list index from a different function; based on that, I need to call the corresponding function and get the output.
If the other function will give you a list index directly, then you don't need to deal with the function names as strings. Instead, directly store (without calling) the functions in the list:
import file1, file2
functions = [file1.function12, file2.function22, file1.function12]
And then call them once you have the index:
functions[index]()
There are ways to do what is called "reflection" in Python and get from the string to a matching-named function. But they solve a problem that is more advanced than what you describe, and they are more difficult (especially if you also have to work with the module names).
If you have a "whitelist" of functions and modules that are allowed to be called from the config file, but still need to find them by string, you can explicitly create the mapping with a dict:
allowed_functions = {
    'file1': {
        'function11': file1.function11,
        'function12': file1.function12,
    },
    'file2': {
        'function21': file2.function21,
        'function22': file2.function22,
    },
}
And then invoke the function:
try:
    func = allowed_functions[module_name][function_name]
except KeyError:
    raise ValueError("this function/module name is not allowed")
else:
    func()
The most advanced approach is if you need to load code from a "plugin" module created by the author. You can use the standard library importlib package to use the string name to find a file to import as a module, and import it dynamically. It looks something like:
import os.path
from importlib.util import spec_from_file_location, module_from_spec

# Look for the file at the specified path, figure out the module name
# from the base file name, import it and make a module object.
def load_module(path):
    folder, filename = os.path.split(path)
    basename, extension = os.path.splitext(filename)
    spec = spec_from_file_location(basename, path)
    module = module_from_spec(spec)
    spec.loader.exec_module(module)
    assert module.__name__ == basename
    return module
This is still unsafe, in the sense that it can look anywhere on the file system for the module. Better if you specify the folder yourself, and only allow a filename to be used in the config file; but then you still have to protect against hacking the path by using things like ".." and "/" in the "filename".
(I have a project that does something like this. It chooses the paths from a whitelist that is also under the user's control, so I have to warn my users not to trust the path-whitelist file from each other. I also search the directories for modules, and then make a whitelist of plugins that may be used, based only on plugins that are in the directory - so no funny games with "..". And I'm still worried I forgot something.)
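A minimal sketch of such a guard, assuming a fixed plugins_dir and that the config file only supplies a bare file name (all names here are made up for illustration):
import os.path

def resolve_plugin(plugins_dir, requested_name):
    # Reject anything that is not a plain file name (no "/", "\" or "..").
    if os.path.basename(requested_name) != requested_name or requested_name in ('', '.', '..'):
        raise ValueError("invalid plugin name: %r" % requested_name)
    path = os.path.abspath(os.path.join(plugins_dir, requested_name + '.py'))
    # Belt and braces: check the resolved path is still inside plugins_dir.
    if not path.startswith(os.path.abspath(plugins_dir) + os.sep):
        raise ValueError("plugin path escapes the plugins directory")
    return path
The returned path can then be handed to load_module() above.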
Once you have a module name, you can get a function from it by name like:
dynamic_module = load_module(some_path)
try:
    func = getattr(dynamic_module, function_name)
except AttributeError:
    raise ValueError("function not in module")
At any rate, there is no reason to eval anything, or generate and import code based on user input. That is most unsafe of all.
Another alternative. This is not much safer than eval(), however.
Someone with access to the lists you read from the config file could inject malicious code into the lists you import.
E.g.
'from subprocess import call; call(["rm", "-rf", "./*"], shell=True)'
Code:
import re

# You must first create a directory named "test_module"
# (you can do this with code if needed).
# Python recognizes a "module" as a module by the existence of an __init__.py.
# It will load that __init__.py at the "import" command, and you can access the methods it imports.

m = ["os", "sys", "subprocess"]  # Modules to import from
f = ["getcwd", "exit", "call; call('do', '---terrible-things')"]  # Methods to import

# Create an __init__.py
with open("./test_module/__init__.py", "w") as FH:
    for count in range(0, len(m), 1):
        # Writes "from module import method" to __init__.py
        line = "from {} import {}\n".format(m[count], f[count])
        # !!!! SANITIZE THE LINE !!!!!
        if not re.match("^from [a-zA-Z0-9._]+ import [a-zA-Z0-9._]+$", line):
            print("The line '{}' is suspicious. Will not be entered into __init__.py!!".format(line))
            continue
        FH.write(line)

import test_module
print(test_module.getcwd())
OUTPUT:
The line 'from subprocess import call; call('do', '---terrible-things')' is suspicious. Will not be entered into __init__.py!!
/home/rightmire/eclipse-workspace/junkcode
I'm not 100% sure I'm understanding the need. Maybe more detail in the question.
Is something like this what you're looking for?
import os   # the module referenced by the built command must already be imported
import sys

m = ["os"]
f = ["getcwd"]

command = ''.join([m[0], ".", f[0], "()"])

# Put in some minimum sanity checking and sanitization!!!
if ";" in command:  # ... or any other dangerous string you want to screen for
    print("The line '{}' is suspicious. Will not run".format(command))
    sys.exit(1)

print("This will error if the method isnt imported...")
print(eval(command))
OUTPUT:
This will error if the method isnt imported...
/home/rightmire/eclipse-workspace/junkcode
As pointed out by @KarlKnechtel, having commands come in from an external file is a gargantuan security risk!

WLST execute stored variable "connect()" statement

So, I am passing an environment variable from bash to Python:
#!/usr/bin/env python2
import os
#connect("weblogic", "weblogic", url=xxx.xxx.xxx.xxx:xxxx)
os.environ['bash_variable']
Via wlst.sh I can print the exported bash_variable, but how do I execute the stored variable? Basically, I am trying to remove the original connect statement and pass in a variable that holds that information. Thanks.
Question though: why wouldn't you call the script with the variable as an argument and use sys.argv[]?
For example, something like this:
import os
import sys
import traceback
from java.io import *
from java.lang import *

wlDomain = sys.argv[1]
wlDomPath = sys.argv[2]
wlNMHost = sys.argv[3]
wlNMPort = sys.argv[4]
wlDPath = "%s/%s" % (wlDomPath, wlDomain)
wlNMprop = "/apps/bea/wls/scripts/.shadow/NM.prop"

try:
    print "Connection to Node Manager"
    print ""
    loadProperties(wlNMprop)
    nmConnect(username=NMuser, password=NMpass, host=wlNMHost, port=wlNMPort, domainName=wlDomain, domainDir=wlDPath, mType='ssl', verbose='true')
except:
    print "Fatal Error : No Connection to Node Manager"
    exit()

print "Connected to Node Manager"
The NM.prop file is a mode-600 properties file with the username/password for the Node Manager.
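For reference, a rough sketch of what such a properties file could contain; loadProperties() turns each key into a WLST variable, which is where NMuser and NMpass above come from (the values below are placeholders):
# NM.prop (chmod 600)
NMuser=nodemanager_admin
NMpass=changeit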
EDIT :
So from what I understand you want to do something like this :
URLS = ['t3s://Host1:Port1', 't3s://Host2:Port2', 't3s://Host3:Port3']

for urls in URLS:
    connect('somebody', 'password', urls)
    # {bunch of commands}
    disconnect()
And the values of the list URLS would be defined by the environment.
The way I see it, you have 3 choices:
Have 1 script per environment, more or less identical save for the URLS list
Have 1 script with conditional branching on sys.argv[1] (the environment passed as a parameter) to create the list there (see the sketch at the end of this answer)
Have 1 script which uses a parameter file for each environment, each parameter file containing the list in question
For option 3, something like this:
propENV = sys.argv[1]
propPath = "/path1/path2"
propFile = "%s/%s" %(propPath,propENV)
loadProperties(propFile)
I would probably use the properties file option myself as it is more flexible from an operational standpoint...at least IMHO.
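For completeness, a rough sketch of option 2 under the same assumptions (the environment names, hosts and ports are placeholders):
import sys

env = sys.argv[1]

# Build the URLS list from the environment name passed on the command line.
if env == "DEV":
    URLS = ['t3s://devhost1:7002']
elif env == "QA":
    URLS = ['t3s://qahost1:7002', 't3s://qahost2:7002']
elif env == "PROD":
    URLS = ['t3s://prodhost1:7002', 't3s://prodhost2:7002', 't3s://prodhost3:7002']
else:
    print "Unknown environment : %s" % env
    sys.exit(1)

for urls in URLS:
    connect('somebody', 'password', urls)
    # {bunch of commands}
    disconnect()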

cannot access webserver resources using virtualenv and webapp2

I wanted to create a simple app using webapp2. Because I have Google App Engine installed, and I want to use it outside of GAE, I followed the instructions on this page: http://webapp-improved.appspot.com/tutorials/quickstart.nogae.html
This all went well, my main.py is running, it is handling requests correctly. However, I can't access resources directly.
http://localhost:8080/myimage.jpg or http://localhost:8080/mydata.json
always returns a 404 resource not found page.
It doesn't matter if I put the resources on the WebServer/Documents/ or in the folder where the virtualenv is active.
Please help! :-)
(I am on a Mac 10.6 with Python 2.7)
(Adapted from this question)
Looks like webapp2 doesn't have a static file handler; you'll have to roll your own. Here's a simple one:
import mimetypes
import os

import webapp2

class StaticFileHandler(webapp2.RequestHandler):
    def get(self, path):
        # edit the next line to change the static files directory
        abs_path = os.path.join(os.path.dirname(__file__), path)
        try:
            f = open(abs_path, 'r')
            self.response.headers.add_header('Content-Type', mimetypes.guess_type(abs_path)[0])
            self.response.out.write(f.read())
            f.close()
        except IOError:  # file doesn't exist
            self.response.set_status(404)
And in your app object, add a route for StaticFileHandler:
app = webapp2.WSGIApplication([
    ('/', MainHandler),  # or whatever it's called
    (r'/static/(.+)', StaticFileHandler),  # add this
    # other routes
])
Now http://localhost:8080/static/mydata.json (say) will load mydata.json.
Keep in mind that this code is a potential security risk: it allows any visitor to your website to read everything in your static directory. For this reason, you should keep all your static files in a directory that doesn't contain anything you'd like to restrict access to (e.g. the source code).
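It is also worth guarding against path traversal (e.g. a request for /static/../main.py). A minimal sketch of a check that could be called at the top of get(); static_dir stands for whatever directory you chose above:
import os

def is_safe_path(static_dir, requested_path):
    # Resolve the requested path and make sure it stays inside static_dir.
    full_path = os.path.abspath(os.path.join(static_dir, requested_path))
    return full_path.startswith(os.path.abspath(static_dir) + os.sep)
If it returns False, respond with a 403 or 404 instead of opening the file.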

How do I access ${buildout:directory} from Python code?

I have a Pyramid web application managed with zc.buildout. In it, I need to read a file on disk, which is located in a sub-directory of buildout directory.
The problem is with determining the path to the file - I do not want to hard-code the absolute path and just providing a relative path does not work when serving the app in production (supposedly because the working directory is different).
So the promising "hooks" I am thinking about are:
the "root" buildout directory, which I can address in buildout.cfg as ${buildout:directory} - however, I can't figure out how can I "export" it so it can be accessed by the Python code
the location of the Paster's .ini file which starts the app
Like @MartijnPieters suggests in a comment on your own answer, I'd use collective.recipe.template to generate an entry in the .ini. I wondered myself how I could then access that data in my project, so I worked it out :-)
Let's work our way backwards to what you need. First in your view code where you want the buildout directory:
def your_view(request):
    buildout_dir = request.registry.settings['buildout_dir']
    # ...
request.registry.settings (see the documentation) is a "dictionary-like deployment settings object". See deployment settings; that's the **settings that gets passed into your main method, as in def main(global_config, **settings).
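To make that concrete, a rough sketch of how the setting travels from the .ini into the registry (the app and route names are placeholders):
from pyramid.config import Configurator

def main(global_config, **settings):
    # 'buildout_dir' from [app:main] arrives in **settings and ends up in
    # config.registry.settings automatically.
    config = Configurator(settings=settings)
    config.add_route('home', '/')
    config.scan()
    return config.make_wsgi_app()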
Those settings are what's in the [app:main] part of your deployment.ini or production.ini file. So add the buildout directory there:
[app:main]
use = egg:your_app
buildout_dir = /home/you/wherever/it/is
pyramid.reload_templates = true
pyramid.debug_authorization = false
...
But, and this is the last step, you don't want to have that hardcoded path in there. So generate the .ini with a template. The template development.ini.in uses a ${partname:variable} expansion language; in your case you need ${buildout:directory}:
[app:main]
use = egg:your_app
buildout_dir = ${buildout:directory}
# ^^^^^^^^^^^^^^^
pyramid.reload_templates = true
pyramid.debug_authorization = false
...
Add a buildout part in buildout.cfg to generate development.ini from development.ini.in:
[buildout]
...
parts =
    ...
    inifile
...

[inifile]
recipe = collective.recipe.template
input = ${buildout:directory}/development.ini.in
output = ${buildout:directory}/development.ini
Note that you can do all sorts of cool stuff with collective.recipe.template. ${serverconfig:portnumber} to generate a matching port number in your production.ini and in your your_site_name.nginx.conf, for instance. Have fun!
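For instance, a rough sketch of that port-number idea (the [serverconfig] part, the port value and the nginx template names are made up; both .in templates would then reference ${serverconfig:portnumber}):
[serverconfig]
portnumber = 6543

[nginxconf]
recipe = collective.recipe.template
input = ${buildout:directory}/your_site_name.nginx.conf.in
output = ${buildout:directory}/your_site_name.nginx.conf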
If the path to the file relative to the buildout root or location of paster.ini is always the same, which it seems it is from your question, you could set it in paster.ini:
[app:main]
...
config_file = %(here)s/path/to/file.txt
Then access it from the registry as in Reinout's answer:
def your_view(request):
    config_file = request.registry.settings['config_file']
Here's a rather clumsy solution I've devised:
In buildout.cfg I used extra-paths option of zc.recipe.egg to add the buildout directory to sys.path:
....
[webserver]
recipe = zc.recipe.egg:scripts
eggs = ${buildout:eggs}
extra-paths = ${buildout:directory}
then I put a file called app_config.py into the buildout directory:
# This remembers the root of the installation (similar to ${buildout:directory})
# so we can import it and use it where we need access to the filesystem.
# Note: we could use os.getcwd() for that, but it feels kinda wonky.
# This is not directly related to Celery; we may want to move it somewhere
import os.path

INSTALLATION_ROOT = os.path.dirname(__file__)
Now we can import it in our Python code:
import os.path

from app_config import INSTALLATION_ROOT

filename = os.path.join(INSTALLATION_ROOT, "somefile.ext")
do_stuff_with_file(filename)
If anyone knows a nicer solution you're welcome :)

python import the same file name programmatically from different folders

I have the following structure:
sites
- SiteA
  - Settings.py
- SiteB
  - Settings.py
- SiteC
  - Settings.py
All the Settings.py files have a variable
DOMAIN_KEY = ''
Is there a way to programmatically import this structure, loop through all the Settings files, and print this variable?
EDIT (Updated code):
with cd(os.path.join(APPLICATION_DIR, 'config', 'sites')):
    output = run("ls -l | grep \"^d\" | awk '{print $8}'")  # run will execute and return the list of folders
    for file in output.split():
        __module = __import__('%s' % file)
        print __module
        print __module.TIME_ZONE
        print __module.SITE_ID
The import is not working properly; I am not sure how I can import the Settings files without knowing the folder names in advance.
Supposing you do not have duplicate module names, you can access these modules and their variables like this:
import SiteA.Settings as A, SiteB.Settings as B, SiteC.Settings as C
print(A.DOMAIN_KEY, B.DOMAIN_KEY, C.DOMAIN_KEY)
EDIT: Another option would be the __import__() function, which lets you load a module by its name:
for char in "ABC":
    mod = __import__("Site%c.Settings" % char)
    print(mod.Settings.DOMAIN_KEY)
EDIT2: according to the edit in the OP.
Why don't you just append the "Settings" module name after the folder you got from the script?
for file in output.split():
    __module = __import__('%s.Settings' % file)
    print __module.Settings.DOMAIN_KEY
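If the extra .Settings hop after __import__ feels awkward (it returns the top-level package rather than the submodule), a rough sketch using importlib.import_module, under the same folder-listing assumption, avoids it:
import importlib

for folder in output.split():
    settings = importlib.import_module('%s.Settings' % folder)  # returns the Settings submodule itself
    print settings.DOMAIN_KEY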
