I am trying to create a simple Python script that deploys my EAR file to the AdminServer of WebLogic. I have searched the internet and the documentation provided by Oracle, but I cannot find a way to determine whether the application has been previously deployed. I would like my script to check whether it has been, and if so, issue a redeploy command; if not, issue a deploy command.
I have tried to modify example scripts I've found, and although they run, they do not behave as intended. One of the things I tried was to check (using the cd command) whether my EAR was in the deployments folder of WebLogic: if it was, issue the redeploy; if not, a WLSTException would be thrown and I would issue the deploy. However, an exception is thrown every time I issue the cd command in my script:
try:
    print 'Checking for the existence of the ' + applicationName + ' application.....'
    cd('C:\\Oracle\\Middleware\\user_projects\\domains\\base_domain\\config\\deployments\\MyTestEAR.ear\\')
    print 'Redeploying....'
    # Commands to redeploy....
except WLSTException:
    # Commands to deploy
    pass
I'm running this script on Windows using the execfile("C:\MyTestDeployer.py") command after setting my environment variables for the WLST scripting tool. I've also tried a different path in my cd command, but to no avail. Any ideas?
It works for me:
print 'stopping and undeploying ...'
try:
    stopApplication('WebApplication')
    undeploy('WebApplication')
    print 'Redeploying...'
except Exception:
    print 'Deploy...'
deploy('WebApplication', '/home/saeed/project/test/WebApplication/dist/WebApplication.war')
startApplication('WebApplication')
I've done something like that in the past, but with a different approach...
I've used the weblogic.Deployer interface with the -listapps option to list the apps/libraries deployed to the domain, which I would then compare to the display-name element of the application.xml generated in the archive.
The problem I found with plain file names, in my case, was that the archives carried the date on which they were generated, which made the comparison always false.
Using the display-name, I've standardized the app name that gets deployed, and later compared it against the new archive to be redeployed.
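A rough sketch of that flow from a wrapper script (a sketch only: the admin URL and credentials are placeholders, weblogic.Deployer is assumed to be on the class path, and the output parsing is simplified):
import subprocess

# List the apps currently deployed to the domain via weblogic.Deployer
out = subprocess.check_output([
    "java", "weblogic.Deployer",
    "-adminurl", "t3://localhost:7001",                # placeholder
    "-username", "weblogic", "-password", "welcome1",  # placeholders
    "-listapps"])
deployedNames = out.decode().split()

displayName = "MyTestEAR"  # standardized display-name from application.xml
action = "-redeploy" if displayName in deployedNames else "-deploy"
print("would run weblogic.Deployer", action)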
Use the command listApplications() in Online mode to list all applications that are currently deployed in the WebLogic domain.
In the event of an error, the command returns a WLSTException.
Example:
wls:/mydomain/serverConfig> listApplications()
SamplesSearchWebApp
asyncServletEar
jspSimpleTagEar
ejb30
webservicesJwsSimpleEar
ejb20BeanMgedEar
xmlBeanEar
extServletAnnotationsEar
examplesWebApp
apache_xbean.jar
mainWebApp
jdbcRowSetsEar
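Building on that, one way to answer the original deploy-or-redeploy question from WLST is to read the deployed applications from the domain configuration MBeans instead of parsing the printed listing. A minimal sketch, assuming an online connection (credentials, URL, and paths are placeholders):
appName = 'MyTestEAR'
earPath = 'C:/Deployments/MyTestEAR.ear'

connect('weblogic', 'welcome1', 't3://localhost:7001')  # placeholder credentials
domainConfig()
deployed = [app.getName() for app in cmo.getAppDeployments()]
if appName in deployed:
    print 'Redeploying ' + appName + '...'
    redeploy(appName)
else:
    print 'Deploying ' + appName + '...'
    deploy(appName, earPath)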
Related
So, I have a Python script that I've 'compiled' into a .exe using PyInstaller. The script uses click for the command-line interface. It returns either '0' or '1' depending on the passed parameter. It works on both my development machine and the machine running the website when called via the command line.
C:\wamp64\www\nested_folders\SChecker.exe --serial="051326584"
But when called in PHP on the website via shell_exec(), where $_serial is passed from a form POST:
$cmd = "C:\\wamp64\\www\\nested_folders\\SChecker.exe --serial=$_serial";
$cmd_result = shell_exec($cmd);
It always returns '0'. Always. I've checked the $cmd value as it is passed to shell_exec(), and the same value works fine via the command line.
Any ideas or help are much appreciated. Thanks
So I've now figured out what was going on.
I first created an equivalent process as a .bat file, to check whether the problem came from the Python script. The problem persisted: an error was occurring, but it wasn't being flagged back at all. I identified it by adding 2>&1 to the end of the cmd line that called the bat, which revealed an 'Access is denied' error.
This was due to the website not being run as my user, so it didn't have the permissions to run the file.
I am writing a watchman command with watchman-make and I'm at a loss when trying to access exactly what was changed in the directory. I want to run my upload.py script and, inside the script, access the filenames of newly created files in /var/spool/cups-pdf/ANONYMOUS.
so far I have
$ watchman-make -p '/var/spool/cups-pdf/ANONYMOUS' --run 'python /home/pi/upload.py'
I'd like to pass another argument to python upload.py with the exact filepath of the newly created file, so that upload.py can send the new file over to my database.
I've been looking at the watchman docs, and the closest thing I can think to use is a trigger object. Please help!
Solution with watchman-wait:
Assuming project layout like this:
/posts/_SUBDIR_WITH_POST_NAME_/index.md
/Scripts/convert.sh
And the shell script like this:
#!/bin/bash
# File: convert.sh
SrcDirPath=$(cd "$(dirname "$0")/../"; pwd)
cd "$SrcDirPath"
echo "Converting: $SrcDirPath/$1"
Then we can launch watchman-wait like this:
watchman-wait . --max-events 0 -p 'posts/**/*.md' | while read line; do ./Scripts/convert.sh "$line"; done
When we change the file /posts/_SUBDIR_WITH_POST_NAME_/index.md, the output looks like this:
...
Converting: /Users/.../Angular/dartweb_quickstart/posts/swift-on-android-building-toolchain/index.md
Converting: /Users/.../Angular/dartweb_quickstart/posts/swift-on-android-building-toolchain/index.md
...
watchman-make is intended to be used together with tools that will perform a follow-up query of their own to discover what they want to do as a next step. For example, running the make tool will cause make to stat the various deps to bring things up to date.
That means that your upload.py script needs to know how to do this for itself if you want to use it with watchman.
You have a couple of options, depending on how sophisticated you want things to be:
Use pywatchman to issue an ad-hoc query
If you want to be able to run upload.py whenever you want and have it figure out the right thing (just like make would do) then you can have it ask watchman directly. You can have upload.py use pywatchman (the python watchman client) to do this. pywatchman will get installed if the watchman configure script thinks you have a working python installation. You can also pip install pywatchman. Once you have it available and in your PYTHONPATH:
import os
import pywatchman

client = pywatchman.client()
client.query('watch-project', os.getcwd())
result = client.query('query', os.getcwd(), {
    "since": "n:pi_upload",
    "fields": ["name"]})
print(result["files"])
This snippet uses the since generator with a named cursor to discover the list of files that changed since the last query was issued using that same named cursor. Watchman will remember the associated clock value for you, so you don't need to complicate your script with state tracking. We're using the name pi_upload for the cursor; the name needs to be unique among the watchman clients that might use named cursors, so naming it after your tool is a good idea to avoid potential conflict.
This is probably the most direct way to extract the information you need without requiring that you make more invasive changes to your upload script.
Use pywatchman to initiate a long running subscription
This approach will transform your upload.py script so that it knows how to directly subscribe to watchman, so instead of using watchman-make you'd just directly run upload.py and it would keep running and performing the uploads. This is a bit more invasive and is a bit too much code to try and paste in here. If you're interested in this approach then I'd suggest that you take the code behind watchman-wait as a starting point. You can find it here:
https://github.com/facebook/watchman/blob/master/python/bin/watchman-wait
The key piece of this that you might want to modify is this line:
https://github.com/facebook/watchman/blob/master/python/bin/watchman-wait#L169
which is where it receives the list of files.
Why not triggers?
You could use triggers for this, but we're steering folks away from triggers because they are hard to manage. A trigger runs in the background and sends its output to the watchman log file, so it can be difficult to tell whether it is running, or to stop it running.
Speaking of unix, what about watchman-wait?
We also have a watchman-wait command that emits the list of changed files as they change. Its interface is closer to the unix model and allows your script to read the list of files on stdin. You could potentially stream the output from watchman-wait into your upload.py; this would have some similarities with the subscription approach, but without directly using the pywatchman client.
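For instance, a minimal sketch of an upload.py that consumes that stream from stdin (upload_file() is a hypothetical stand-in for your actual upload logic):
import sys

def upload_file(path):
    # placeholder: push the file to your database here
    print("uploading", path)

# watchman-wait prints one changed path per line
for line in sys.stdin:
    changed = line.strip()
    if changed:
        upload_file(changed)
You would then wire it up along the lines of: watchman-wait /var/spool/cups-pdf/ANONYMOUS --max-events 0 | python /home/pi/upload.py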
I am running a T32 CMM script as below via the command line (wrapped in a Python script), but I would like to know the status of T32: has the script run successfully, or was there an error? How can I get that feedback from T32?
cd C:\T32\bin\windows64
Config.t32:
RCL=NETASSIST
PORT=20000
PACKLEN=1024
; Environment variables
OS=
ID=T32
TMP=C:\Users\jhigh\AppData\Local\Temp
SYS=C:\T32
PBI=
USB
; Printer settings
PRINTER=WINDOWS
Usage:
t32marm.exe -s c:\Temp\vi_chip_cmd_line.cmm \\Filerlocation\data\files
The TRACE32 "API for Remote Control and JTAG Access" allows you to communicate with a running TRACE32 application.
To enable the API for your TRACE32 application, just add the following two lines to your TRACE32 start-configuration file ("config.t32"). Empty lines before and after the two lines are mandatory.
RCL=NETASSIST
PORT=20000
The usage of the API is described in the PDF api_remote.pdf, which is in the PDF folder of your TRACE32 installation or you can download it from http://www.lauterbach.com/manual.html
You can find examples on how to use the remote API with Python at http://www.lauterbach.com/scripts.html (Just search the page for "Python")
To check if your PRACTICE script ("vi_chip_cmd_line.cmm") is still running, use the API function T32_GetPracticeState().
I also suggest creating an artificial variable at the beginning of your script with Var.NEWGLOBAL int \state. During your scripted test, set the variable "\state" to increasing values with Var.Set \state=42. Via the TRACE32 command EVAL Var.VALUE(\state) and the API call T32_EvalGet() you can read the current value of the variable "\state", and by doing so you can check whether your script reached its final state.
Another approach would be to write a log-file from your PRACTICE script ("vi_chip_cmd_line.cmm") by using the TRACE32 command APPEND and read the log file from your Python script.
Please check your T32 installation for a demo on how to use the T32 API (demo/api/python). Keep in mind that it will not work without a valid license. Also note that if you use Python inside 32-bit cygwin on a 64-bit host, you need to load the 32-bit DLL.
Configuration:
RCL=NETASSIST
PORT=20000
PACKLEN=1024
Python script:
import platform
import ctypes

# Pick the API DLL that matches the Python interpreter's bitness;
# adjust the path to your TRACE32 installation.
if platform.architecture()[0] == "64bit":
    t32api = ctypes.CDLL("./t32api64.dll")
else:
    t32api = ctypes.CDLL("./t32api.dll")

# These values must match the RCL settings in config.t32
t32api.T32_Config(b"NODE=", b"localhost")
t32api.T32_Config(b"PORT=", b"20000")
t32api.T32_Config(b"PACKLEN=", b"1024")

t32api.T32_Init()        # open the connection
t32api.T32_Attach(1)     # attach to the TRACE32 device
t32api.T32_Ping()        # verify the link is alive
t32api.T32_Cmd(b"AREA")  # send an arbitrary TRACE32 command
t32api.T32_Exit()        # close the connection
Then you can use the commands / techniques that Holger has suggested:
T32_GetPracticeState()
to get the current run-state of PRACTICE. And / or set a variable inside your script
Var.Assign \state=1
Var.Assign \state=2
....
and then poll it using T32_ReadVariableValue()
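A minimal polling sketch along those lines, reusing the connection from the snippet above (the function signatures follow api_remote.pdf; treat the exact names and status values as assumptions to verify against your manual):
import time
import ctypes

# Assumes t32api has been loaded and connected as shown above.
state = ctypes.c_int(-1)
while True:
    if t32api.T32_GetPracticeState(ctypes.byref(state)) != 0:
        print("API error while querying the PRACTICE run-state")
        break
    if state.value == 0:  # assumption: 0 means no PRACTICE script running
        print("Script finished")
        break
    time.sleep(1)

# Read back the progress variable set via Var.Assign \state=...
value = ctypes.c_uint32(0)
hvalue = ctypes.c_uint32(0)
if t32api.T32_ReadVariableValue(b"\\state", ctypes.byref(value), ctypes.byref(hvalue)) == 0:
    print("Reached state:", value.value)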
I created the simple python script using pexpect, created one spwan process using
CurrentCommand = "ssh " + serverRootUserName + "#" + serverHostName
child = pexpect.spawn(CurrentCommand)
Now I am running some command like ls -a or "find /opt/license/ -name '*.xml'"
using code
child.run(mycommand)
It works fine when run from PyCharm, but when run from a terminal it does not work: it cannot find any files, and I think it is searching my local system instead of the remote one.
Can anyone suggest something? Thanks
As a suggestion, have a look at the paramiko library (or fabric, which uses it, but has a specific purpose), as this is a python interface to ssh. It might make your code a bit better and more resilient against bugs or attacks.
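As a quick taste, a minimal paramiko sketch of the same task (host and credentials are placeholders; real code should use key-based auth and proper host-key checking):
import paramiko

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())    # lax; fine for a sketch
ssh.connect("myserver", username="root", password="secret")  # placeholders
stdin, stdout, stderr = ssh.exec_command("find /opt/license/ -name '*.xml'")
print(stdout.read().decode())
ssh.close()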
However, I think the issue comes from your use of run.
This function runs the given command; waits for it to finish; then returns all output as a string. STDERR is included in output. If the full path to the command is not given then the path is searched.
What you should look at is expect. That is, after you spawn the process, use expect to wait for it to reach an appropriate point (such as connected, or terminal ready after the motd has been printed, because you might have to enter a username and password).
Then you want to use sendline to send a line to the program. See the example:
http://pexpect.readthedocs.io/en/latest/overview.html
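For instance, a minimal sketch of the spawn/expect/sendline flow (the host, prompt patterns, and password are placeholders; real code should also handle host-key prompts and timeouts):
import pexpect

child = pexpect.spawn("ssh root@myserver")           # placeholder host
child.expect("password:")                            # wait for the password prompt
child.sendline("secret")                             # placeholder password
child.expect(r"\$ ")                                 # wait for the shell prompt
child.sendline("find /opt/license/ -name '*.xml'")   # run the remote command
child.expect(r"\$ ")                                 # wait for it to finish
print(child.before.decode())                         # output captured between prompts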
Hope that helps, and seriously, have a look at paramiko ;)
I'm looking at some code that uses request.META['SERVER_NAME'] and checks whether its first 9 characters match 'localhost'. On OS X the value of SERVER_NAME is '1.0.0.127.in-addr.arpa', but request.get_host() gives me localhost:10002 (which is how I access it in my browser). Other developers running the same code on Linux and Windows get localhost as the value from META.
I've seen two other people asking related questions (in the comment section of the answer at Accessing request.META.SERVER_NAME in template, and at https://plus.google.com/+SamVilain/posts/8TortHZ7J5V), but I haven't found a way to make it behave the way I want. So my question: is there a simple way to make Django populate META['SERVER_NAME'] with localhost? This is a rather large system and the check is made in lots of places, so changing the variable at each call site is something I'd really like to avoid.
While trying to fix another issue, where the terminal was showing a random hostname in the prompt, I stumbled upon scutil. It turned out that running the following command also fixed my issue with Django:
sudo scutil --set HostName localhost