Dynamically change variables in WLST script - python

I'm creating a bunch of .py WLST scripts (15-20), each of which will check a different setting in a WebLogic environment: for example, password requirements, security settings, user properties, etc.
However, I want to run these scripts in a number of WebLogic environments, all having different host URLs and credentials. Is there an easy way to dynamically change the connection details for each script as it is run in different environments?
script:
connect(x,y,z)
script in env 1:
connect('weblogic','welcome1','example-host1:7001')
script in env 2:
connect('weblogic','welcome2','example-host1:7001')
This is my first time asking a question on Stack Overflow after using it as a source for the first couple of years of my career, so apologies if this issue is described poorly.

The simple answer: keep the environment-related properties in a properties file, and read those properties using Python (Jython):
from java.io import FileInputStream
from java.util import Properties

# Read the environment-specific properties file
propInputStream = FileInputStream("preprodenv.properties")
configProps = Properties()
configProps.load(propInputStream)
adminHost = configProps.get("admin.host")
adminPort = configProps.get("admin.port")
adminUserName = configProps.get("admin.userName")
adminPassword = configProps.get("admin.password")
# t3 or t3s depends upon your config
adminURL = "t3://" + adminHost + ":" + adminPort
connect(adminUserName, adminPassword, adminURL)
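For reference, a preprodenv.properties along these lines would satisfy the lookups above (the values are placeholders taken from the question):
admin.host=example-host1
admin.port=7001
admin.userName=weblogic
admin.password=welcome1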
Option #2
Keep the environment-related information in a properties file and read it using
loadProperties('c:/temp/myLoad.properties')
or pass it as an argument to your WLST script: -loadProperties='C:\temp\myLoad.properties'
Either approach works.
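A minimal sketch of the loadProperties route (hedged: as far as I know, loadProperties exposes each key as a variable in the script's namespace, so the keys must be valid Python identifiers; the names below are assumptions):
# myLoad.properties is assumed to contain lines such as:
#   adminUserName=weblogic
#   adminPassword=welcome1
#   adminURL=t3://example-host1:7001
loadProperties('c:/temp/myLoad.properties')
connect(adminUserName, adminPassword, adminURL)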

I am assuming that the hostnames will be different in the different environments. The way we do this is by creating an "env shell script" which contains a mapping, using simple case statements, from each environment to its connection details. We then create a wrapper script that iterates over the environments in the "env shell script". Does this help, or do you need more details?

Related

Running powershell scripts with python under one session

I'm trying to create a Python program that deobfuscates PowerShell malware that uses IEX. My program hooks the IEX function so that, instead of running the desired string, it prints the string.
Now my problem is that I have some .ps1 scripts (for example 1.ps1, 2.ps1, etc.) and I want to run all of them in the same session, so that all the local variables created by the 1.ps1 script can be used by the 2.ps1 script...
I have tried many ways. First I tried subprocess, but it creates a new session every time I enter a command (the path of a .ps1 file). Then I found this project on GitHub:
https://gist.github.com/MarkBaggett/a7c10195b2626c78009bf73bcdb6db20
which is really awesome and did work, but it still seems that when I run the command ./1.ps1 it does not store the local variables in the session (maybe it opens a new one when running a script).
I also tried "Get-Content 1.ps1 | iex", but then it crashes, since I have functions there, for example:
function Invoke-Expression()
{
    param(
        [Parameter( `
            Mandatory=$True, `
            Valuefrompipeline = $True)]
        [String]$Command
    )
    Write-Host $Command
}
taken from the PSDecode project:
https://github.com/R3MRUM/PSDecode/blob/master/PSDecode.psm1#L28
Anyway, any ideas on how I can do this? I have these scripts on my desktop but no idea how to run them in the same session so they use the same local variables...
There are two workarounds I tried, but they really suck:
1. Concatenate all the scripts into one script and run that; but the next time I use this program I might have 100 scripts or more, and I really don't want to do this.
2. Save the local variables from each script and load them into the next one; I want to keep this as a worst-case option, and I haven't gotten it working anyway.
Thank you so much for helping me, and sorry for my grammar; English is not my mother tongue, as you can see :)
Maybe you're looking for dot sourcing:
Runs a script in the current scope so that any functions, aliases, and variables that the script creates are added to the current scope.
. c:\scripts\sample.ps1
If so dot-source your ps1 files, and call the functions inside them.
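For example, from Python you could launch one PowerShell session that dot-sources each script in turn, so later scripts see the variables and functions defined by earlier ones (a minimal sketch; it assumes powershell is on the PATH and reuses the script names from the question):
import subprocess

# Dot-source each .ps1 in a single PowerShell session so that variables
# and functions defined by 1.ps1 are visible to 2.ps1, and so on.
scripts = ["1.ps1", "2.ps1"]  # assumed names from the question
command = "; ".join(". ./{0}".format(s) for s in scripts)
subprocess.run(["powershell", "-NoProfile", "-Command", command])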
Hope that helps.

Automating Configuration commands that require input from the user using the Fabric module

I am currently developing a Python script that connects to a remote Brocade switch using the Fabric module and issues some configuration commands. The problem I am facing is with commands that require input from the user (i.e. yes/no).
I read several posts that advised using Fabric's native settings method as well as wexpect, but none have been successful.
I checked the following links, but none were able to help with my code:
how to handle interactive shell in fabric python
How to answer to prompts automatically with python fabric?
Python fabric respond to prompts in output
Below is an example of the command output that requires to be automated:
DS300B_Autobook:admin> cfgsave
You are about to save the Defined zoning configuration. This
action will only save the changes on Defined configuration.
If the update includes changes to one or more traffic isolation
zones, you must issue the 'cfgenable' command for the changes
to take effect.
Do you want to save the Defined zoning configuration only? (yes, y, no, n): [no]
The code that I have written for this is shown below (I tried to make the prompt key exactly the same as the output the command provides):
with settings(prompts={"DS300B_Autobook:admin> cfgsave\n"
                       "You are about to save the Defined zoning configuration. This\n"
                       "action will only save the changes on Defined configuration.\n"
                       "If the update includes changes to one or more traffic isolation\n"
                       "zones, you must issue the 'cfgenable' command for the changes\n"
                       "to take effect.\n"
                       "Do you want to save the Defined zoning configuration only? (yes, y, no, n): [no] ": "yes"}):
    c.run('cfgsave')
If there is a way to have it display the output of the command on the screen and prompt me to provide the input, that would also be a reasonable solution.
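Since c.run suggests Fabric 2, one possible approach (a hedged sketch, not tested against a Brocade switch) is an invoke Responder, which matches the prompt with a regular expression instead of the full banner text; the host string below is a placeholder:
from fabric import Connection
from invoke import Responder

# Answer "yes" whenever the cfgsave confirmation prompt appears.
# Matching only the tail of the prompt avoids depending on the banner text.
conn = Connection("admin@DS300B_Autobook")  # placeholder host
confirm = Responder(
    pattern=r"Do you want to save the Defined zoning configuration only\?",
    response="yes\n",
)
conn.run("cfgsave", watchers=[confirm], pty=True)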

when using Watchman's watch-make I want to access the name of the changed files

I am writing a watchman command with watchman-make and I'm at a loss when trying to access exactly what was changed in the directory. I want to run my upload.py script, and inside the script I would like to access the filenames of newly created files in /var/spool/cups-pdf/ANONYMOUS.
So far I have:
$ watchman-make -p '/var/spool/cups-pdf/ANONYMOUS' --run 'python /home/pi/upload.py'
I'd like to add another argument to python upload.py so I can have the exact filepath of the newly created file, so that I can send the new file over to my database in upload.py.
I've been looking at the watchman docs, and the closest thing I can think to use is a trigger object. Please help!
Solution with watchman-wait:
Assuming project layout like this:
/posts/_SUBDIR_WITH_POST_NAME_/index.md
/Scripts/convert.sh
And the shell script like this:
#!/bin/bash
# File: convert.sh
SrcDirPath=$(cd "$(dirname "$0")/../"; pwd)
cd "$SrcDirPath"
echo "Converting: $SrcDirPath/$1"
Then we can launch watchman-wait like this:
watchman-wait . --max-events 0 -p 'posts/**/*.md' | while read line; do ./Scripts/convert.sh "$line"; done
When we change the file /posts/_SUBDIR_WITH_POST_NAME_/index.md, the output will be like this:
...
Converting: /Users/.../Angular/dartweb_quickstart/posts/swift-on-android-building-toolchain/index.md
Converting: /Users/.../Angular/dartweb_quickstart/posts/swift-on-android-building-toolchain/index.md
...
watchman-make is intended to be used together with tools that will perform a follow-up query of their own to discover what they want to do as a next step. For example, running the make tool will cause make to stat the various deps to bring things up to date.
That means that your upload.py script needs to know how to do this for itself if you want to use it with watchman.
You have a couple of options, depending on how sophisticated you want things to be:
Use pywatchman to issue an ad-hoc query
If you want to be able to run upload.py whenever you want and have it figure out the right thing (just like make would do) then you can have it ask watchman directly. You can have upload.py use pywatchman (the python watchman client) to do this. pywatchman will get installed if the watchman configure script thinks you have a working python installation. You can also pip install pywatchman. Once you have it available and in your PYTHONPATH:
import os
import pywatchman

# Connect to the watchman service and make sure the cwd is watched
client = pywatchman.client()
client.query('watch-project', os.getcwd())
# Ask for the files that changed since the last query that used this cursor
result = client.query('query', os.getcwd(), {
    "since": "n:pi_upload",
    "fields": ["name"]})
print(result["files"])
This snippet uses the since generator with a named cursor to discover the list of files that changed since the last query was issued using that same named cursor. Watchman will remember the associated clock value for you, so you don't need to complicate your script with state tracking. We're using the name pi_upload for the cursor; the name needs to be unique among the watchman clients that might use named cursors, so naming it after your tool is a good idea to avoid potential conflict.
This is probably the most direct way to extract the information you need without requiring that you make more invasive changes to your upload script.
Use pywatchman to initiate a long running subscription
This approach will transform your upload.py script so that it knows how to directly subscribe to watchman, so instead of using watchman-make you'd just directly run upload.py and it would keep running and performing the uploads. This is a bit more invasive and is a bit too much code to try and paste in here. If you're interested in this approach then I'd suggest that you take the code behind watchman-wait as a starting point. You can find it here:
https://github.com/facebook/watchman/blob/master/python/bin/watchman-wait
The key piece of this that you might want to modify is this line:
https://github.com/facebook/watchman/blob/master/python/bin/watchman-wait#L169
which is where it receives the list of files.
Why not triggers?
You could use triggers for this, but we're steering folks away from triggers because they are hard to manage. A trigger will run in the background and have its output go to the watchman log file. It can be difficult to tell if it is running, or to stop it running.
Speaking of unix, what about watchman-wait?
We also have a command that emits the list of changed files as they change. Its interface is closer to the unix model and allows you to feed the list of files to your script on stdin. You could potentially stream the output from watchman-wait in your upload.py. This would make it have some similarities with the subscription approach, but without directly using the pywatchman client.
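For instance, upload.py could consume watchman-wait's output line by line (a rough sketch; the watched path comes from the question and the processing is a placeholder):
import subprocess

# Stream filenames emitted by watchman-wait and handle each one as it arrives.
proc = subprocess.Popen(
    ["watchman-wait", "/var/spool/cups-pdf/ANONYMOUS", "--max-events", "0"],
    stdout=subprocess.PIPE, universal_newlines=True)
for line in proc.stdout:
    filename = line.strip()
    print("changed:", filename)  # replace with the actual upload logic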

How do I embed an Ipython Notebook in an iframe (new)

I have successfully achieved this using the method documented at Run IPython Notebook in Iframe from another Domain . However, this required editing the user config file. I was really hoping to be able to set this up via the command-line instead (for reasons).
http://ipython.org/ipython-doc/1/config/overview.html indicates that configuration via the command line is possible. However, all the examples are for simple true/false value assignment. To set the server up to allow embedding, it is necessary to set a value inside a dictionary. I can't work out how to pass a dictionary in through the command-line.
Another acceptable option would be a configuration overrides file.
Some people will wonder -- why all this trouble!?!
First of all, this isn't for production. I'm trying to support non-developers by writing a web-based application which integrates IPython notebooks within it using iframes. Despite being on the same machine, the different port number used is enough to mean that I can't do simple iframe embedding without relaxing the X-Frame-Options restriction.
Being able to do this via the command line lets me set the behaviour in the launch script rather than having to bundle a special configuration file inside my app, and also write an installer.
I really hope I've made the question clear enough! Thanks for any and all suggestions and help!
Looking over the IPython source for the config loaders, it seems like it will execute whatever Python code you put on the right-hand side. I've not tested it, but based on the link you provided, you can probably pass something like
--NotebookApp.webapp_settings="{'headers': {'X-Frame-Options': 'ALLOW-FROM https://example.com/'}}"

I'm at the beginning of the Flask tutorial for Python, and I don't understand this paragraph

Usually, it is a good idea to load a configuration from a configurable file. This is what from_envvar() can do, replacing the from_object() line above:
app.config.from_envvar('FLASKR_SETTINGS', silent=True)
That way someone can set an environment variable called FLASKR_SETTINGS to specify a config file to be loaded which will then override the default values. The silent switch just tells Flask to not complain if no such environment key is set.
I am not too familiar with environment variables, and I would like an explanation of the above paragraph in simple terms. My best guess is that when the program reads FLASKR_SETTINGS, that means that on my own computer I have set up a mapping from that name to the file, using something called an environment variable? I've messed with my environment path before and, to be honest, I still don't understand it, so I came here looking for a clear answer.
Environment variables are name/value pairs defined for a particular process running on a computer (Windows or UNIX/Linux etc.). They are not files. You can create your own environment variables and give them any name/value. For example, FLASKR_SETTINGS is the name of an environment variable whose value could be set to the path of a config file. On a UNIX terminal, for example, you can do:
export FLASKR_SETTINGS=/somepath/config.txt
By doing the above, you have just created an environment variable named FLASKR_SETTINGS whose value is set to /somepath/config.txt. The reason you use environment variables is that you can tie them to a certain process and use them on demand when your process starts. You don't have to worry about saving them in a file. In fact, you can create a launch script for your process/application that sets a variety of environment variables before you start using the application.
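For instance, any process started from that shell can then read the variable (a minimal sketch):
import os

# Returns the exported value, or None if the variable is not set.
print(os.environ.get("FLASKR_SETTINGS"))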
In the case of Flask, app.config.from_envvar('FLASKR_SETTINGS', silent=True) loads the config file whose path is stored in the FLASKR_SETTINGS environment variable. So it basically translates to:
- Find the config file (/somepath/config.txt etc.)
- Let's say the contents of the config file are:
SECRET_KEY="whatever"
DEBUG = True
- Then, using the two above, it will be translated to:
app.config['SECRET_KEY'] = "whatever"
app.config['DEBUG'] = True
So this way, you can just update the config file as needed and you will not need to change your code.
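Putting it together, a minimal sketch (the default values here are placeholders):
from flask import Flask

app = Flask(__name__)
# Defaults used when no override file is provided.
app.config.update(SECRET_KEY="dev", DEBUG=False)
# Override from the file named by FLASKR_SETTINGS, if that variable is set.
app.config.from_envvar("FLASKR_SETTINGS", silent=True)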
Environment variables are a simple, ad-hoc way of passing information to programs. On unixy machines, from a command shell, it's as simple as
export FLASKR_SETTINGS=/path/to/settings.conf
/path/to/program
This is especially useful when installing programs to start up at reboot; the configuration can be easily included in the same setup script that launches the system program.
