I am running a T32 CMM script as below via the command line (wrapped in a Python script). However, I would like to know the status of T32, i.e. whether the script ran successfully or there was an error. How can I get that feedback from T32?
cd C:\T32\bin\windows64
Config.t32:
RCL=NETASSIST
PORT=20000
PACKLEN=1024
; Environment variables
OS=
ID=T32
TMP=C:\Users\jhigh\AppData\Local\Temp
SYS=C:\T32
PBI=
USB
; Printer settings
PRINTER=WINDOWS
Usage:
t32marm.exe -s c:\Temp\vi_chip_cmd_line.cmm \\Filerlocation\data\files
The TRACE32 "API for Remote Control and JTAG Access" allows you to communicate with a running TRACE32 application.
To enable the API for your TRACE32 application, just add the following two lines to your TRACE32 start-configuration file ("config.t32"). Empty lines before and after the two lines are mandatory.
RCL=NETASSIST
PORT=20000
The usage of the API is described in the PDF api_remote.pdf, which is in the PDF folder of your TRACE32 installation or you can download it from http://www.lauterbach.com/manual.html
You can find examples on how to use the remote API with Python at http://www.lauterbach.com/scripts.html (Just search the page for "Python")
To check if your PRACTICE script ("vi_chip_cmd_line.cmm") is still running, use the API function T32_GetPracticeState();
I also suggest creating an artificial variable at the beginning of your script with Var.NEWGLOBAL int \state. During your scripted test, set the variable "\state" to increasing values with Var.Set \state=42. Via the TRACE32 command EVAL Var.VALUE(\state) and the API call T32_EvalGet() you can read the current value of "\state", and by doing so you can check whether your script reached its final state.
Another approach would be to write a log file from your PRACTICE script ("vi_chip_cmd_line.cmm") using the TRACE32 command APPEND, and then read that log file from your Python script.
Please check your T32 installation for a demo on how to use the T32 API (demo/api/python). Keep in mind that it will not work without a valid license. Also note that if you use Python inside 32-bit Cygwin on a 64-bit host, you need to load the 32-bit DLL.
Configuration:
RCL=NETASSIST
PORT=20000
PACKLEN=1024
Python script:
import platform  # can be used to pick the matching DLL at runtime
import ctypes

# Adjust the path / name to the DLL (t32api.dll for 32-bit Python, t32api64.dll for 64-bit)
t32api = ctypes.CDLL("./t32api64.dll")

# These settings must match RCL, PORT and PACKLEN in config.t32
t32api.T32_Config(b"NODE=", b"localhost")
t32api.T32_Config(b"PORT=", b"20000")
t32api.T32_Config(b"PACKLEN=", b"1024")

# Open the connection and attach to the TRACE32 debugger
t32api.T32_Init()
t32api.T32_Attach(1)
t32api.T32_Ping()

# Execute an arbitrary TRACE32 command, then close the connection
t32api.T32_Cmd(b"AREA")
t32api.T32_Exit()
Then you can use the commands / techniques that Holger has suggested:
T32_GetPracticeState()
to get the current run-state of PRACTICE. And / or set a variable inside your script
Var.Assign \state=1
Var.Assign \state=2
....
and then poll it using T32_ReadVariableValue()
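Putting those pieces together, a minimal polling sketch might look like this (untested; starting the script via T32_Cmd("DO ...") instead of the -s command-line option, the DLL name, and the \state variable are assumptions based on the suggestions above):
import ctypes
import time

t32api = ctypes.CDLL("./t32api64.dll")           # adjust for your installation
t32api.T32_Config(b"NODE=", b"localhost")
t32api.T32_Config(b"PORT=", b"20000")
t32api.T32_Config(b"PACKLEN=", b"1024")
t32api.T32_Init()
t32api.T32_Attach(1)

# Start the PRACTICE script, then poll until no script is running any more
t32api.T32_Cmd(b"DO c:\\Temp\\vi_chip_cmd_line.cmm")
state = ctypes.c_int(-1)
while True:
    t32api.T32_GetPracticeState(ctypes.byref(state))
    if state.value == 0:                         # 0 = no PRACTICE script running
        break
    time.sleep(1)

# Read the artificial \state variable to see how far the script got
value = ctypes.c_uint32(0)
t32api.T32_Cmd(b"EVAL Var.VALUE(\\state)")
t32api.T32_EvalGet(ctypes.byref(value))
print("Script finished, \\state =", value.value)

t32api.T32_Exit()
If you keep launching the script with t32marm.exe -s, you can drop the DO line and only do the polling.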
In Maximo 7.6.1.1, is there a way to execute Python/Jython scripts on demand?
For example, in other software such as ArcGIS Desktop, there is a window in the application called the Python Window.
In the ArcGIS python window, I can write any sort of script I want.
For example, I can write a script that loops through records in a table and updates values based on certain criteria. And I can execute it on demand.
Is there a way to do the equivalent in Maximo? Maybe in Eclipse?
You can execute a script -- even without a launch point -- from any Java class (within Maximo) using this piece of code:
ScriptDriverFactory.getInstance().getScriptDriver(ScriptName).runScript(ScriptName, Context);
...where Context is a HashMap of all variables that might be needed in the script.
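For illustration, a minimal Jython sketch of that call (the script name, the package of ScriptDriverFactory, and the context keys are assumptions; use whatever variables your script expects):
from java.util import HashMap
from com.ibm.tivoli.maximo.script import ScriptDriverFactory   # package may differ by version

scriptName = "MYONDEMANDSCRIPT"                 # hypothetical script name
context = HashMap()
context.put("someParam", "someValue")           # variables the script expects to find

ScriptDriverFactory.getInstance().getScriptDriver(scriptName).runScript(scriptName, context)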
It is not supported, but you can create and grant yourself an EXECUTE sig option in the autoscript application. This will enable an Execute action, allowing you to execute a script on demand. However, because no launch point was used to provide context, implicit variables and other context that you may be used to will not be available.
"On Demand Autoscript" is what I call a script that I develop with the intention of being executed from that Execute action. I have written On Demand scripts for doing things like resynchronizing nested workflows or preparing our data for an upgrade. On Demand scripts, though created the same way, are different from what the 7.6 documentation calls "Library scripts" in that, even though Library scripts aren't (necessarily) called from their own Launch Points, the script that calls them does usually provide some context / implicit variables.
An On Demand Autoscript usually looks something like the following; you can look up the classes involved in the Maximo API JavaDocs.
from psdi.server import MXServer
server = MXServer.getMXServer()
security = server.lookup("SECURITY")
userInfo = security.getSystemUserInfo()
mboSet = server.getMboSet("SOMEOBJECT", userInfo)
try:
    mboSet.setWhere("somecol = 'somevalue'")
    mbo = mboSet.moveFirst()
    while mbo:
        print "do something with mbo %s: %s" % (
            mbo.getUniqueIdentifer(), mbo.getString("DESCRIPTION"))
        mbo = mboSet.moveNext()
    if "applicable":  # placeholder condition -- save only if you actually changed something
        mboSet.save()
finally:
    if not mboSet.isClosed():
        mboSet.close()
From the above, it should be plain that you can easily "write a script that loops through records in a table and updates values based on certain criteria. And I can execute it on demand."
To build on Preacher's answer:
Instructions for running an automation script on-demand (adding the EXECUTE sig option):
Application Designer --> AUTOSCRIPT:
Create an EXECUTE sig option (Add/Modify Signature Options)
Option: EXECUTE
Description: Execute Script
Advanced Signature Options: None
Ensure that your security group has that EXECUTE sig option in the Automation Scripts application:
(It might be enabled by default)
Screenshot
Log out of Maximo and back in again (to update your cached permissions with the change that was just made).
Create an automation script without a launch point:
Automation Scripts application --> Create --> Script
Open the automation script.
The Execute Script action will appear in the left pane. Use it to run automation scripts on demand.
Screenshot
I am writing a watchman command with watchman-make and I'm at a loss when trying to access exactly what was changed in the directory. I want to run my upload.py script, and inside the script I would like to access the filenames of newly created files in /var/spool/cups-pdf/ANONYMOUS.
so far I have
$ watchman-make -p '/var/spool/cups-pdf/ANONYMOUS' --run 'python /home/pi/upload.py'
I'd like to add another argument to python upload.py so that I have the exact filepath of the newly created file and can send it over to my database in upload.py.
I've been looking at the watchman docs, and the closest thing I can think to use is a trigger object. Please help!
Solution with watchman-wait:
Assuming project layout like this:
/posts/_SUBDIR_WITH_POST_NAME_/index.md
/Scripts/convert.sh
And the shell script like this:
#!/bin/bash
# File: convert.sh
SrcDirPath=$(cd "$(dirname "$0")/../"; pwd)
cd "$SrcDirPath"
echo "Converting: $SrcDirPath/$1"
Then we can launch watchman-wait like this:
watchman-wait . --max-events 0 -p 'posts/**/*.md' | while read line; do ./Scripts/convert.sh $line; done
When we change the file /posts/_SUBDIR_WITH_POST_NAME_/index.md, the output will look like this:
...
Converting: /Users/.../Angular/dartweb_quickstart/posts/swift-on-android-building-toolchain/index.md
Converting: /Users/.../Angular/dartweb_quickstart/posts/swift-on-android-building-toolchain/index.md
...
watchman-make is intended to be used together with tools that will perform a follow-up query of their own to discover what they want to do as a next step. For example, running the make tool will cause make to stat the various deps to bring things up to date.
That means that your upload.py script needs to know how to do this for itself if you want to use it with watchman.
You have a couple of options, depending on how sophisticated you want things to be:
Use pywatchman to issue an ad-hoc query
If you want to be able to run upload.py whenever you want and have it figure out the right thing (just like make would do) then you can have it ask watchman directly. You can have upload.py use pywatchman (the python watchman client) to do this. pywatchman will get installed if the watchman configure script thinks you have a working python installation. You can also pip install pywatchman. Once you have it available and in your PYTHONPATH:
import os
import pywatchman

client = pywatchman.client()
# Make sure the current directory is watched, then query for files changed
# since the last query that used this named cursor
client.query('watch-project', os.getcwd())
result = client.query('query', os.getcwd(), {
    "since": "n:pi_upload",
    "fields": ["name"]})
print(result["files"])
This snippet uses the since generator with a named cursor to discover the list of files that changed since the last query was issued using that same named cursor. Watchman will remember the associated clock value for you, so you don't need to complicate your script with state tracking. We're using the name pi_upload for the cursor; the name needs to be unique among the watchman clients that might use named cursors, so naming it after your tool is a good idea to avoid potential conflict.
This is probably the most direct way to extract the information you need without requiring that you make more invasive changes to your upload script.
Use pywatchman to initiate a long running subscription
This approach will transform your upload.py script so that it knows how to directly subscribe to watchman, so instead of using watchman-make you'd just directly run upload.py and it would keep running and performing the uploads. This is a bit more invasive and is a bit too much code to try and paste in here. If you're interested in this approach then I'd suggest that you take the code behind watchman-wait as a starting point. You can find it here:
https://github.com/facebook/watchman/blob/master/python/bin/watchman-wait
The key piece of this that you might want to modify is this line:
https://github.com/facebook/watchman/blob/master/python/bin/watchman-wait#L169
which is where it receives the list of files.
Why not triggers?
You could use triggers for this, but we're steering folks away from triggers because they are hard to manage. A trigger will run in the background and have its output go to the watchman log file. It can be difficult to tell if it is running, or to stop it running.
The trigger interface is closer to the unix model and allows you to feed a list of files to your command on stdin.
Speaking of unix, what about watchman-wait?
We also have a command that emits the list of changed files as they change. You could potentially stream the output from watchman-wait in your upload.py. This would make it have some similarities with the subscription approach but do so without directly using the pywatchman client.
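For example, upload.py could read the changed file names from stdin, one per line, as emitted by watchman-wait (a sketch; the watched path and the upload step are assumptions):
# Run as: watchman-wait /var/spool/cups-pdf/ANONYMOUS --max-events 0 | python upload.py
import os
import sys

WATCHED_ROOT = "/var/spool/cups-pdf/ANONYMOUS"

for line in sys.stdin:
    relpath = line.strip()
    if not relpath:
        continue
    fullpath = os.path.join(WATCHED_ROOT, relpath)
    # hypothetical upload step -- replace with your database logic
    print("would upload: %s" % fullpath)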
I created a simple Python script using pexpect and created one spawn process using
CurrentCommand = "ssh " + serverRootUserName + "#" + serverHostName
child = pexpect.spawn(CurrentCommand)
Now I am running some commands like ls -a or "find /opt/license/ -name '*.xml'"
using code
child.run(mycommand)
It works fine when run from PyCharm, but when run from a terminal it is not able to find any files; I think it is looking at my local system instead.
Can anyone suggest something? Thanks
As a suggestion, have a look at the paramiko library (or fabric, which uses it, but has a specific purpose), as this is a python interface to ssh. It might make your code a bit better and more resilient against bugs or attacks.
However, I think the issue comes from your use of run.
This function runs the given command; waits for it to finish; then returns all output as a string. STDERR is included in output. If the full path to the command is not given then the path is searched.
What you should look at is expect. That is, after you spawn the process with spawn, you should use expect to wait for it to reach an appropriate point (such as being connected, or the terminal being ready after the MOTD has been printed), because you might have to enter a username and password, etc.
Then you want to run sendline to send a line to the program. See the example:
http://pexpect.readthedocs.io/en/latest/overview.html
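A rough sketch of that expect/sendline pattern (the prompt patterns, the password handling, and the placeholder credentials are assumptions; adjust them to your server):
import pexpect

serverRootUserName = "root"                     # placeholders -- use your real values
serverHostName = "example-host"
serverRootPassword = "secret"

child = pexpect.spawn("ssh " + serverRootUserName + "@" + serverHostName)

# Wait for the password prompt and authenticate (skip if you use ssh keys)
child.expect("password:")
child.sendline(serverRootPassword)

# Wait for a shell prompt, run the remote command, then read its output
child.expect("[#$] ")
child.sendline("find /opt/license/ -name '*.xml'")
child.expect("[#$] ")
print(child.before)                             # output printed before the next prompt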
Hope that helps, and seriously, have a look at paramiko ;)
I'm creating a bunch of .py WLST scripts (15-20) which will each check a different setting in a Weblogic environment. For example, password requirements, security settings, user properties etc.
However, I want to run these scripts in a number of WebLogic environments, all having different host URLs and credentials. Is there an easy way to dynamically change the connection details for each script as they are run in different environments:
script:
connect(x,y,z)
script in env 1:
connect('weblogic','welcome1','example-host1:7001')
script in env 2:
connect('weblogic','welcome2','example-host1:7001')
This is my first occasion asking a question on stackoverflow after using it as a source for the first couple of years of my career, so apologies if this issue is described poorly.
The simple answer would be: keep the environment-related properties in a property file, and read those properties using Python (Jython):
from java.io import FileInputStream
from java.util import Properties

propInputStream = FileInputStream("preprodenv.properties")
configProps = Properties()
configProps.load(propInputStream)

adminHost = configProps.get("admin.host")
adminPort = configProps.get("admin.port")
adminUserName = configProps.get("admin.userName")
adminPassword = configProps.get("admin.password")

# t3 or t3s depends upon your config
adminURL = "t3://" + adminHost + ":" + adminPort
connect(adminUserName, adminPassword, adminURL)
Option #2
Keep the environment-related information in a properties file and read it using
loadProperties('c:/temp/myLoad.properties')
or pass it as an argument to your WLST script: -loadProperties='C:\temp\myLoad.properties'
Either approach works.
I am assuming that the hostnames will be different in different environments. The way we do this is by creating an "env shell script" which contains a mapping using simple case statements. We then create a wrapper script that iterates over the various environments in the "env shell script". Does this help, or do you need more details?
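In the same spirit, a plain-Python wrapper could iterate over per-environment property files and launch each check script through WLST (a sketch; the wlst.sh path, property file names, and script names are assumptions):
import subprocess

WLST = "/u01/oracle/oracle_common/common/bin/wlst.sh"      # assumed WLST launcher path
ENV_PROPERTY_FILES = ["dev.properties", "test.properties", "prod.properties"]
CHECK_SCRIPTS = ["check_password_policy.py", "check_security_settings.py"]   # hypothetical

for props in ENV_PROPERTY_FILES:
    for script in CHECK_SCRIPTS:
        # -loadProperties makes each key/value pair available inside the WLST script
        subprocess.call([WLST, "-loadProperties", props, script])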
I have downloaded and installed the Perforce API for Python.
I'm able to run the examples on this page:
http://www.perforce.com/perforce/doc.current/manuals/p4script/03_python.html#1127434
But unfortunately the documentation seems incomplete. For example, the P4 class has a method called run_sync, but it's not documented anywhere (in fact, it doesn't even show up if you run dir(p4) in the Python interactive interpreter, despite the fact that you can use the method just fine in the interactive interpreter.)
So I'm struggling with figuring out how to use the API for anything beyond the trivial examples on the page I linked to above.
I would like to write a script which simply downloads the latest revision of a subdirectory to the filesystem of the computer running it and does nothing else. I don't want the server to change in any way, and I don't want any indication that the files came from Perforce (unlike getting the files via the Perforce client, which marks them read-only in your filesystem until you check them out). That's silly; I just need to pull down a snapshot of what the subdirectory looked like at the moment the script was run.
The Python API follows the same basic structure as the command line client (both are very thin wrappers over the same underlying API), so you'll want to look at the command line client documentation; for example, look at "p4 sync" to understand how "run_sync" in P4Python works:
http://www.perforce.com/perforce/r14.2/manuals/cmdref/p4_sync.html
For the task you're describing I would do the following (I'll describe it in terms of Perforce commands since my Python is a little rusty; once you know what commands you're running it should be pretty simple to translate into Python, since the P4Python doc has examples of things like creating and modifying a client spec, which is the hardest part):
1) Create a client that maps the desired depot directory to the desired local filesystem location, e.g. if you want the directory "//depot/foo/..." downloaded to "/usr/team/foo" you'd make a client that looks like:
Client: mytempclient123847
Root: /usr/team/foo
View:
//depot/foo/... //mytempclient123847/...
You should set the "allwrite" option on the client since you said don't want the synced files to be read-only:
Options: allwrite noclobber nocompress unlocked nomodtime rmdir
2) Sync, using the "-p" option to minimize server impact (the server will not record that you "have" the files).
3) Delete the client.
(I'm omitting some details like making sure that you're authenticated correctly -- that's a whole other potential challenge depending on your server's security and whether it's using external authentication, but it sounds like that's not the part you're having trouble with.)
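Translated into P4Python, the whole sequence might look roughly like this (a sketch, not tested; the connection settings, client name, and depot path are placeholders):
from P4 import P4

p4 = P4()
p4.port = "perforce:1666"                       # placeholder connection settings
p4.user = "myuser"
p4.client = "mytempclient123847"
p4.connect()

try:
    # 1) Create the temporary client spec described above
    client = p4.fetch_client("mytempclient123847")
    client["Root"] = "/usr/team/foo"
    client["Options"] = "allwrite noclobber nocompress unlocked nomodtime rmdir"
    client["View"] = ["//depot/foo/... //mytempclient123847/..."]
    p4.save_client(client)

    # 2) Sync with -p so the server does not record that this client "has" the files
    p4.run_sync("-p", "//depot/foo/...")

    # 3) Delete the temporary client
    p4.delete_client("mytempclient123847")
finally:
    p4.disconnect()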