In Maximo 7.6.1.1, is there a way to execute Python/Jython scripts on demand?
For example, in other software such as ArcGIS Desktop, there is a window in the application called the Python Window:
In the ArcGIS python window, I can write any sort of script I want.
For example, I can write a script that loops through records in a table and updates values based on certain criteria. And I can execute it on demand.
Is there a way to do the equivalent in Maximo? Maybe in Eclipse?
You can execute a script -- even without a launch point -- from any Java class (within Maximo) using this piece of code:
ScriptDriverFactory.getInstance().getScriptDriver(ScriptName).runScript(ScriptName, Context);
...where Context is a hashmap for all variables that might be needed in the script.
It is not supported, but you can create and grant yourself an EXECUTE sig option in the autoscript application. This will enable an Execute action, allowing you to execute a script on demand. However, because no launch point was used to provide context, implicit variables and other context that you may be used to will not be available.
"On Demand Autoscript" is what I call a script that I develop with the intention of being executed from that Execute action. I have written On Demand scripts for doing things like resynchronizing nested workflows or preparing our data for an upgrade. On Demand scripts, though created the same way, are different from what the 7.6 documentation calls "Library scripts" in that, even though Library scripts aren't (necessarily) called from their own Launch Points, the script that calls them does usually provide some context / implicit variables.
An On Demand Autoscript usually looks something like this; the classes it uses are documented in the Maximo API JavaDocs.
from psdi.server import MXServer

server = MXServer.getMXServer()
security = server.lookup("SECURITY")
userInfo = security.getSystemUserInfo()
mboSet = server.getMboSet("SOMEOBJECT", userInfo)
try:
    mboSet.setWhere("somecol = 'somevalue'")
    mbo = mboSet.moveFirst()
    while mbo:
        print "do something with mbo %s: %s" % (
            mbo.getUniqueIdentifer(), mbo.getString("DESCRIPTION"))
        mbo = mboSet.moveNext()
    if "applicable":  # placeholder condition; replace with your own check
        mboSet.save()
finally:
    if not mboSet.isClosed():
        mboSet.close()
From the above, it should be plain that you can easily "write a script that loops through records in a table and updates values based on certain criteria. And I can execute it on demand."
To build on @Preacher's answer:
Instructions for running an automation script on-demand (adding the EXECUTE sig option):
Application Designer --> AUTOSCRIPT:
Create an EXECUTE sig option (Add/Modify Signature Options)
Option: EXECUTE
Description: Execute Script
Advanced Signature Options: None
Ensure that your security group has been granted that EXECUTE sig option in the Automation Scripts application (it might be granted by default).
Log out of Maximo and back in again (to update your cached permissions with the change that was just made).
Create an automation script without a launch point: Automation Scripts application --> Create --> Script
Open the automation script.
The Execute Script action will appear in the left pane. Use it to run automation scripts on demand.
Related
GCP has a published create_instance() code snippet available here, which I've seen on SO in a couple places e.g. here. However, as you can see in the first link, it's from 2015 ("Copyright 2015 Google Inc"), and Google has since published another code sample for launching a GCE instance dated 2022. It's available on github here, and this newer create_instance function is what's featured in GCP's python API documentation here.
However, I can't figure out how to pass a startup script via metadata to run on VM startup using the modern python function. I tried adding
instance_client.metadata.items = {'key': 'startup-script',
'value': job_script}
to the create.py function (again, available here along with supporting utility functions it calls) but it threw an error that the instance_client doesn't have that attribute.
GCP's documentation page for starting a GCE VM with a startup script is here, where unlike most other similar pages, it contains code snippets only for console, gcloud and (REST)API; not SDK code snippets for e.g. Python and Ruby that might show how to modify the python create_instance function above.
Is the best practice for launching a GCE VM with a startup script from a python process really to send a post request or just wrap the gcloud command
gcloud compute instances create VM_NAME \
--image-project=debian-cloud \
--image-family=debian-10 \
--metadata-from-file=startup-script=FILE_PATH
...in a subprocess.run()? To be honest I wouldn't mind doing things that way since the code is so compact (the gcloud command at least, not the POST request way), but since GCP provides a create_instance python function I had assumed using/modifying-as-necessary that would be the best practice from within python...
Thanks!
So, the simplest (!) way with the Python library to create the equivalent of --metadata-from-file=startup-script=${FILE_PATH} is probably:
from google.cloud import compute_v1

instance = compute_v1.Instance()
metadata = compute_v1.Metadata()
metadata.items = [
    {
        "key": "startup-script",
        "value": '#!/usr/bin/env bash\necho "Hello Freddie"'
    }
]
instance.metadata = metadata
And another way is:
metadata = compute_v1.Metadata()
items = compute_v1.types.Items()
items.key = "startup-script"
items.value = """
#!/usr/bin/env bash
echo "Hello Freddie"
"""
metadata.items = [items]
NOTE In these examples, I'm embedding the startup script inline for convenience, but you could, of course, read it from FILE_PATH with Python's open for a result more comparable to --metadata-from-file.
It is generally better to use a library or SDK, if one exists, rather than use subprocess to invoke the binary. As mentioned in the comments, the primary reason is that language-specific calls give you typing (more so in typed languages), controlled execution (e.g. try) and error handling. When you invoke a subprocess, it's string-based streams all the way down.
I agree that the Python library for Compute Engine using classes feels cumbersome but, when you're writing a script, the focus could be on the long-term benefits of more explicit definitions vs. the short-term pain of the verbosity. If you just want to insert a VM, by all means use gcloud compute instances create (I do this all the time in Bash) but, if you want to use a more elegant language like Python, then I encourage you to use Python entirely.
CURIOSITY gcloud is written in Python. If you use Python subprocess to invoke gcloud commands, you're using Python to invoke a shell that runs Python to make a REST call ;-)
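If you do decide to shell out, it's at least worth passing the command as an argument list so there is no shell quoting to get wrong. A minimal sketch of that approach (VM_NAME and FILE_PATH are placeholders from the question; the which() check is only there so the snippet is safe to run on a machine without gcloud):

```python
import shutil
import subprocess

# Build the gcloud invocation as an argument list; no shell involved,
# so spaces or quotes in FILE_PATH can't be misinterpreted.
cmd = [
    "gcloud", "compute", "instances", "create", "VM_NAME",
    "--image-project=debian-cloud",
    "--image-family=debian-10",
    "--metadata-from-file=startup-script=FILE_PATH",
]

# Only invoke gcloud if it is actually on PATH; check=True raises on
# a non-zero exit code instead of failing silently.
if shutil.which("gcloud"):
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)
```
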
I need a way to run Python (.py) scripts from Oracle PL/SQL. Is there any solution?
For example: I have a Python script that sends Gmail and creates an Excel spreadsheet from an Oracle database. But I have to call it from Oracle, and I also have to pass it parameters from Oracle.
DBMS_SCHEDULER might be of use.
First create a shell script that is a wrapper for your Python.
Then create a program that points at it:
begin
  dbms_scheduler.create_program
  (
    program_name   => 'PYEXCEL',
    program_type   => 'EXECUTABLE',
    program_action => '/the_path/the_py_script_wrapper.ks',
    enabled        => TRUE,
    comments       => 'Call Python stuff'
  );
end;
/
Note: programs and jobs can be configured with arguments in case your script needs them.
Then create a job from the program (RUN_JOB expects a job, not a program) and run it:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    JOB_NAME     => 'PYEXCEL_JOB',
    PROGRAM_NAME => 'PYEXCEL',
    ENABLED      => FALSE,
    AUTO_DROP    => FALSE);

  DBMS_SCHEDULER.RUN_JOB(
    JOB_NAME            => 'PYEXCEL_JOB',
    USE_CURRENT_SESSION => FALSE);
END;
/
This is the 'purest' PL/SQL-only way, I think.
TenG's method is the easiest path to what you are looking for, but another method is to use OS_COMMAND:
http://plsqlexecoscomm.sourceforge.net/plsqldoc/os_command.html
This (very short) answer to a similar question offers one more possible solution using the Jython interpreter; maybe you will find other answers in that thread helpful too. It says it is possible to call a Java method from your PL/SQL that loads and runs your .py script using a Jython interpreter, though it lacks an example.
This article provides an example of how to run Python code from Java using ProcessBuilder and any Python interpreter installed on the system.
Here is another example of ProcessBuilder being used to run Python from Java, which in turn can be called from PL/SQL.
Hope this helps.
Generally the database is isolated from the OS for security reasons. There are a couple of workarounds (*):
One is to write an external procedure which calls OS-level C code.
The other is to write a Java Stored Procedure which mimics an OS host command and runs a shell script. Find out more
I think the second option is better for your purposes. In either case you will need to persuade your DBA / security team to allow the granting of the required privileges.
Alternatively Oracle has an inbuilt package UTL_MAIL to send email from PL/SQL and there are third-party PL/SQL libraries which allow us to generate Excel spreadsheets from inside the database. These may be more suitable to your situation (depending on how much you need to re-use your python code).
The other alternative is to drive the whole thing from Python, and just connect to the database to get the data you need.
(*) For completeness, there is a third way to execute OS shell scripts from the database. We can attach pre-processor scripts to external tables which get run whenever we select from the external table. Find out more. But I don't think external tables are relevant in this scenario. And of course external tables also need the granting of OS privileges to the database, so it doesn't avoid that conversation with your DBA / security team.
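The "drive it all from Python" alternative might be sketched like this; cx_Oracle, the connection details, and the fetch_rows helper are all assumptions for illustration (any DB-API driver would look much the same):

```python
# Sketch: treat Oracle purely as a data source and do the email /
# spreadsheet work in Python. cx_Oracle is an assumed driver choice.
try:
    import cx_Oracle
except ImportError:
    cx_Oracle = None  # driver not installed; fetch_rows will refuse to run

def fetch_rows(dsn, user, password, query):
    """Run `query` against Oracle and return all rows."""
    if cx_Oracle is None:
        raise RuntimeError("cx_Oracle is not installed")
    with cx_Oracle.connect(user=user, password=password, dsn=dsn) as conn:
        cursor = conn.cursor()
        cursor.execute(query)
        return cursor.fetchall()
```
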
I am running a T32 CMM script as below via the command line (in a Python wrapper), but I would like to know the status of T32 - whether the script ran successfully or there was an error. How can I get that feedback from T32?
cd C:\T32\bin\windows64
Config.t32:
RCL=NETASSIST
PORT=20000
PACKLEN=1024
; Environment variables
OS=
ID=T32
TMP=C:\Users\jhigh\AppData\Local\Temp
SYS=C:\T32
PBI=
USB
; Printer settings
PRINTER=WINDOWS
Usage:
t32marm.exe -s c:\Temp\vi_chip_cmd_line.cmm \\Filerlocation\data\files
The TRACE32 "API for Remote Control and JTAG Access" allows you to communicate with a running TRACE32 application.
To enable the API for your TRACE32 application, just add the following two lines to your TRACE32 start-configuration file ("config.t32"). Empty lines before and after the two lines are mandatory.
RCL=NETASSIST
PORT=20000
The usage of the API is described in the PDF api_remote.pdf, which is in the PDF folder of your TRACE32 installation or you can download it from http://www.lauterbach.com/manual.html
You can find examples on how to use the remote API with Python at http://www.lauterbach.com/scripts.html (Just search the page for "Python")
To check if your PRACTICE script ("vi_chip_cmd_line.cmm") is still running, use the API function T32_GetPracticeState();
I also suggest creating an artificial variable at the beginning of your script with Var.NEWGLOBAL int \state. During your scripted test, set the variable \state to increasing values with Var.Set \state=42. Via the TRACE32 command EVAL Var.VALUE(\state) and the API call T32_EvalGet() you can get the current value of \state, and by doing so you can check whether your script reached its final state.
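That EVAL / T32_EvalGet round trip might look like this in Python; read_state is a hypothetical helper, and t32api is assumed to be a ctypes handle to the TRACE32 API DLL:

```python
import ctypes

def read_state(t32api):
    """Read the \\state variable via EVAL plus T32_EvalGet.
    `t32api` is the handle returned by ctypes.CDLL for the TRACE32 API DLL."""
    # Ask TRACE32 to evaluate the variable...
    t32api.T32_Cmd(b"EVAL Var.VALUE(\\state)")
    # ...then fetch the evaluation result into a 32-bit unsigned int
    value = ctypes.c_uint32()
    t32api.T32_EvalGet(ctypes.pointer(value))
    return value.value
```
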
Another approach would be to write a log-file from your PRACTICE script ("vi_chip_cmd_line.cmm") by using the TRACE32 command APPEND and read the log file from your Python script.
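The log-file approach needs nothing beyond the standard library on the Python side. A minimal sketch (read_progress is a hypothetical helper; the path is whatever your APPEND command writes to):

```python
import os

def read_progress(path):
    """Return the last line of the PRACTICE-written log, or None if the
    file doesn't exist yet (i.e. the script hasn't started logging)."""
    if not os.path.exists(path):
        return None
    with open(path) as f:
        lines = f.read().splitlines()
    return lines[-1] if lines else None
```
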
Please check your T32 installation for a demo on how to use the T32 API (demo/api/python). Keep in mind that it will not work without a valid license. It's also important that if you use Python inside 32-bit cygwin on a 64-bit host you need to load the 32-bit DLL.
Configuration:
RCL=NETASSIST
PORT=20000
PACKLEN=1024
Python script:
import platform
import ctypes
# Adjust the path / name to the DLL
t32api = ctypes.CDLL("./t32api64.dll")
t32api.T32_Config(b"NODE=",b"localhost")
t32api.T32_Config(b"PORT=",b"20000")
t32api.T32_Config(b"PACKLEN=",b"1024")
t32api.T32_Init()
t32api.T32_Attach(1)
t32api.T32_Ping()
t32api.T32_Cmd(b"AREA")
t32api.T32_Exit()
Then you can use the commands / techniques that Holger has suggested:
T32_GetPracticeState()
to get the current run-state of PRACTICE. And / or set a variable inside your script
Var.Assign \state=1
Var.Assign \state=2
....
and then poll it using T32_ReadVariableValue()
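A rough polling sketch, assuming t32api is a ctypes handle to the TRACE32 API DLL and wait_for_state is a hypothetical helper:

```python
import ctypes
import time

def wait_for_state(t32api, target, timeout=60.0, poll=0.5):
    """Poll the \\state variable until it reaches `target` or we time out."""
    value = ctypes.c_uint32()
    hvalue = ctypes.c_uint32()   # high word for 64-bit values; unused here
    deadline = time.time() + timeout
    while time.time() < deadline:
        # T32_ReadVariableValue fills `value` with the variable's content
        t32api.T32_ReadVariableValue(b"\\state", ctypes.pointer(value),
                                     ctypes.pointer(hvalue))
        if value.value == target:
            return True
        time.sleep(poll)
    return False
```
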
I would like to be able to click an "Alert" button on my rails application that then calls python to run a specific alert script.
My specific question is how do I get the click event to call the python script?
Check this SO post: Call python script from ruby
Ruby provides a variety of methods - exec, system, and backquotes - that take the path to the .py file along with arguments. Note that exec replaces the current process and never returns, so in a Rails controller you'll want system or backquotes. Something like:
system 'python /path/to/script/script.py', params1, params2
Or backquotes, which capture the script's stdout:
result = `python /path/to/script/script.py foo bar`
Now, the question is where to put this? I am assuming you have a controller that handles clicks, and it is this controller where you put this code. Check this link out How to integrate a standalone Python script into a Rails application?
Now, the details depend a lot on your Python script. By 'alert script', do you mean a JavaScript alert() dialog, a flash message, or something else? It's always a good idea to refactor your Python code to return a string result and then use that in Ruby on Rails. Even better, as far as possible, write the alert-creating script in Ruby itself! That might not be possible when a third-party library is involved, but for a simple stand-alone script it's worth the effort.
I am trying to figure out an elegant way to share a variable between a windows form app and a python script running in the background. The variable would be used solely to update a progress bar in the windows form based on the the long running process in the python script. More specifically, a windows timer will fire every n seconds, check the variable, then update the progress bar value. Sound stupid enough yet? I'll try to explain the need for this below.
I have a windows app that lets a user define a number of parameters to fire off a long running process (python script). Without getting into unnecessary detail, this long running process will insert many (100k+) records into a sqlite database over a significant period of time. In order to make the python script as performant as possible, I don't call commit on the sqlite database until the very end of the script. Trying to query the sqlite database from the windows app (via System.Data.Sqlite) before the commit occurs always yields 0 records, regardless of how far along the process is.
The windows app will know how many total records will be inserted by the python process, so determining progress will be straight-forward enough, assuming I can get access to a record count in the python script.
I know I could do this with a text file, but is there any better way?
The easiest solution is probably to just have the python script print to stdout: say, each time an item is processed, print a line with a number representing how many items have been processed (or a percentage). Then have the forms application read the output line by line, updating the progress bar based on that information.
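A minimal sketch of that idea on the Python side (process_items is a hypothetical stand-in for the real insert loop):

```python
import sys

def process_items(items):
    """Report percent-complete on stdout, one number per line, so the
    WinForms host can read the stream line by line."""
    total = len(items)
    for i, item in enumerate(items, start=1):
        # ... insert `item` into the sqlite database here ...
        print(i * 100 // total)
        sys.stdout.flush()  # make sure the line reaches the reader promptly
```
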
If you use IronPython instead, you can create the Form-with-progress-bar in Python and manipulate it directly.
Alternately, your Winforms app can host the script and share variables.