I would like to use Sensu Core to monitor python scripts and I am confused how to do it.
From the Sensu documentation, this requires Sensu checks. The provided example is a Ruby script that checks whether chef-client is running:
#!/usr/bin/env ruby

# get the current list of processes
processes = `ps aux`

# determine if the chef-client process is running
running = processes.lines.detect do |process|
  process.include?('chef-client')
end

# return appropriate check output and exit status code
if running
  puts 'OK - Chef client process is running'
  exit 0
else
  puts 'WARNING - Chef client process is NOT running'
  exit 1
end
How do I implement such a check for a specific script rather than for the application? That is, how would I go about monitoring a specific python script (e.g. test.py) and not python in general?
I have been successfully running some python scripts as Sensu checks for my AWS Linux clients. Here is a good example of my check definition:
{
  "checks": {
    "check-rds-limit": {
      "interval": 86400,
      "command": "/etc/sensu/plugins/rds-limit-check.py",
      "contacts": [
        "cloud-ops"
      ],
      "occurrences": 1,
      "subscribers": [
        "cloud-ops-subscription"
      ],
      "handlers": [
        "email",
        "slack"
      ]
    }
  }
}
Your python plugin can then start by defining the shebang path:
#!/usr/bin/env python
import sys
...
...
# <code condition for warning>
sys.exit(1)
# <code condition for critical>
sys.exit(2)
# <code condition where everything is fine>
sys.exit(0)
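For the original question, a minimal sketch of such a plugin, checking whether test.py appears in the process list, could look like the following. The script name, the `ps aux` approach, and the OK/CRITICAL mapping are assumptions mirroring the Ruby example above, not part of any official plugin:

```python
#!/usr/bin/env python
import subprocess
import sys

def check_process(ps_output, target):
    """Sensu convention: return (exit code, message); 0 = OK, 2 = CRITICAL."""
    if any(target in line for line in ps_output.splitlines()):
        return 0, "OK - %s is running" % target
    return 2, "CRITICAL - %s is NOT running" % target

def main(target="test.py"):
    # List all processes, just like the Ruby example's `ps aux`
    ps_output = subprocess.check_output(["ps", "aux"]).decode()
    code, message = check_process(ps_output, target)
    print(message)
    return code

# In the real plugin, the last line would be: sys.exit(main())
```

Sensu only cares about the printed message and the exit status, so any logic that ends in `sys.exit(0/1/2)` works the same way.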
More generally, the Ruby script above searches for the string chef-client among the running processes. You could replace that with any other string, such as test.py, which would detect whether any running program has test.py in its name. (You might need to match the substring test.py if you run the program as python test.py; I don't know Ruby.)
I would suggest using the Sensu process check plugin for a more general solution with more customization options. Have a look at the other Sensu plugins too.
Why not monitor the expected result or operation of the script rather than the process itself? Typically, we set up checks to monitor an endpoint, in the case of a web application, or an observed behavior, such as messages in a database, to determine if the application is running.
There will be times when the process is technically running but not doing anything, due to an error condition or resource issue. Monitoring the expected result is a much better option than watching a process.
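As a sketch of that advice, a check could probe an HTTP health endpoint instead of the process table. The URL, timeout, and status-code mapping below are assumptions, not part of the original answer:

```python
#!/usr/bin/env python
import sys

try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2

def check_endpoint(url, timeout=5):
    """Return a Sensu-style (exit code, message) for an HTTP health check."""
    try:
        status = urlopen(url, timeout=timeout).getcode()
    except Exception as exc:
        # Connection refused, DNS failure, HTTP error, timeout, ...
        return 2, "CRITICAL - %s check failed: %s" % (url, exc)
    if status == 200:
        return 0, "OK - %s returned 200" % url
    return 1, "WARNING - %s returned %d" % (url, status)

# A real plugin would end with something like:
# code, message = check_endpoint("http://localhost:8080/health")  # hypothetical URL
# print(message); sys.exit(code)
```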
Related
I'm writing a python script for an app lock. For this, I'm executing the Get-Process -Name "notepad++" PowerShell command using python subprocess to get the process id.
Now, using psutil, I'm able to kill the process. But my objective is to keep the window minimized in a while loop, using either PowerShell or Python, so that the program becomes unusable until the user enters the password.
With PowerShell, you could do it using the UIAutomationClient methods, without having to rely on native calls.
Here is a small example demonstrating how to check the window state and minimize the window if it is not already minimized.
Add-Type -AssemblyName UIAutomationClient
$MyProcess = Get-Process -Name "notepad++"
$ae = [System.Windows.Automation.AutomationElement]::FromHandle($MyProcess.MainWindowHandle)
$wp = $ae.GetCurrentPattern([System.Windows.Automation.WindowPatternIdentifiers]::Pattern)
# Your loop to make sure the window stays minimized would be here
# While...
$IsMinimized = $wp.Current.WindowVisualState -eq 'Minimized'
if (! $IsMinimized) { $wp.SetWindowVisualState('Minimized') }
# End While
Reference
How to switch minimized application to normal state
We have two scripts (one for Katalon and one for Python) that we want to launch from Jenkins.
First we want to launch Katalon and, at a certain point in the script, tell Jenkins to launch the python script. Then, when the python script has finished, Jenkins should tell Katalon that it can continue.
Current jenkins pipeline code:
pipeline {
agent any
stages {
stage('Unit Test') {
steps {
echo 'Hello Example'
bat """./katalon -noSplash -runMode=console projectPath="/Users/mypc/project/proyect1/example.prj" -retry=0 -testSuitePath="Test Suites/IOS/TestSuiteAccount" -executionProfile="default" -deviceId="example" -browserType="iOS" """
sleep 5
}
}
stage('Unit Test2') {
steps {
echo 'Start second test'
bat """python C:\\Users\\myPC\\Documents\\project\\project-katalon-code\\try_python.py"""
sleep 5
}
}
}
}
In pseudocode it would be the following:
Katalon script:
my_job()
call_jenkins_to_start_python()
if jenkins.python_flag == True
my_job_continue()
Pipeline Jenkins script:
Katalon.start()
if katalon_sent_signal_to_start_python == True
start_python_job()
if python_finished_job_signal == True
send_katalon_signal_to_continue()
Would reading and writing an external file be a good solution? I didn't find anything similar.
Thank you!
AFAIK, Jenkins starts a separate process for the bat() step and waits for it to finish. Communication between Jenkins and the bat process while it runs is not possible: you trigger the script, then read the returned value and, if needed, stdout.
Additionally, I do not know whether what you want is possible, because I do not know Katalon at all. What you want requires Katalon to wait for a result from the python script and, when that result reaches Katalon, to resume its execution.
I recommend first trying the process without Jenkins: create a Windows script that does exactly what you want. If you are able to do that, you can then call that new script from Jenkins, providing input or reading output as needed, or even, as you suggested, using files for that.
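If you do try the file-based hand-off, the python side could be as simple as polling for a flag file and writing one back when done. The file names and timeouts here are made-up examples, not anything Katalon or Jenkins defines:

```python
import os
import time

def wait_for_flag(path, timeout=60, poll_interval=0.5):
    """Block until the flag file appears; return False if it never does."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(poll_interval)
    return False

def signal_done(path):
    """Create the flag file that tells the other side it may continue."""
    with open(path, "w") as f:
        f.write("done")

# Hypothetical wiring: the Katalon side writes "start_python.flag",
# this script waits for it, does its work, then calls
# signal_done("python_done.flag"), and Katalon resumes when it sees that file.
```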
I want to embed C++ in a python application. I don't want to use the Boost library.
If a C++ function fails an assertion, I want to catch it and print the error in my python application, or get detailed information such as the line number in the python script that caused the error. And the main thing is: I want to continue with the python execution flow.
How can I do it? I can't find any functions to get detailed assertion information in the Python API or in C++.
C++ Code
#include <cassert>

// compiled into cppwrapper.dll; extern "C" keeps the name
// unmangled so ctypes can find it (with MSVC, also export it)
extern "C" void sum(int iA, int iB)
{
    assert(iA + iB > 10);
}
Python Code
from ctypes import *
mydll = WinDLL("C:\\Users\\cppwrapper.dll")
try:
mydll.sum(10,3)
except:
print "exception occurred"
# control should go to user whether exceptions occurs, after exception occurs if he provide yes then continue with below or else abort execution, I need help in this part as well
import re
for test_string in ['555-1212', 'ILL-EGAL']:
if re.match(r'^\d{3}-\d{4}$', test_string):
print test_string, 'is a valid US local phone number'
else:
print test_string, 'rejected'
Thanks in advance.
This can't really be done in exactly the way you say, (as was also pointed out in the comments).
Once the assertion happens and SIGABRT is sent to the process, it's in the operating system's hands what will happen, and generally the process will be killed.
The simplest way to recover from a process being killed is to have it launched by an external process, like a secondary python script or a shell script. In bash scripting, for instance, it's easy to launch another process, check whether it terminates normally or is aborted, log that, and continue.
For instance, here's some bash code that executes the command line $command, logs the standard error channel to a log file, checks the return code (which will be 134 for SIGABRT: 128 plus the signal number, 6) and does something in the various cases:
$command 2> error.log
error_code="$?"
if check_errs $error_code; then
# Do something...
return 0
else
# Do something else...
return 1
fi
Where check_errs is some other subroutine that you would write.
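The same supervision can be written in python instead of bash. `subprocess` reports death-by-signal as a negative return code, so an assertion abort shows up as -6 (SIGABRT); a sketch:

```python
import signal
import subprocess

def run_and_check(command):
    """Run a command; report normal exit vs. death by signal."""
    proc = subprocess.Popen(command, stderr=subprocess.PIPE)
    proc.communicate()  # wait for the child, collecting stderr
    if proc.returncode == -signal.SIGABRT:
        return "aborted"  # e.g. a failed assert() sent SIGABRT
    if proc.returncode < 0:
        return "killed by signal %d" % -proc.returncode
    return "exited with %d" % proc.returncode

# e.g. run_and_check(["./my_cpp_program"])  # hypothetical binary name
```

From there the python caller can log the failure, ask the user whether to continue, and carry on with the rest of its flow.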
I have a problem with the way signals are propagated within a process group. Here is my situation and an explanation of the problem:
I have an application, that is launched by a shell script (with a su). This shell script is itself launched by a python application using subprocess.Popen
I call os.setpgrp as a preexec_function and have verified using ps that the bash script, the su command and the final application all have the same pgid.
Now when I send signal USR1 to the bash script (the leader of the process group), sometimes the application sees this signal, and sometimes not. I can't figure out why I get this random behavior (the signal is seen by the app about 50% of the time).
Here is the example code I am testing against:
Python launcher :
#!/usr/bin/env python
import os
import subprocess

p = subprocess.Popen(["path/to/bash/script"], stdout=…, stderr=…, preexec_fn=os.setpgrp)
# loop to write stdout and stderr of the subprocesses to a file
# note that I use fcntl.fcntl(p.stdXXX.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
p.wait()
Bash script :
#!/bin/bash
set -e
set -u
cd /usr/local/share/gios/exchange-manager
CONF=/etc/exchange-manager.conf
[ -f $CONF ] && . $CONF
su exchange-manager -p -c "ruby /path/to/ruby/app"
Ruby application :
#!/usr/bin/env ruby
Signal.trap("USR1") do
puts "Received SIGUSR1"
exit
end
while true do
sleep 1
end
So when I send the signal to the bash wrapper (from a terminal or from the python application), sometimes the ruby application will see it and sometimes not. I don't think it's a logging issue, as I have tried replacing the puts with a method that writes directly to a different file.
Do you guys have any idea what could be the root cause of my problem and how to fix it ?
Your signal handler is doing too much. If you exit from within the signal handler, you cannot be sure that your buffers are properly flushed; in other words, you may not be exiting your program gracefully. Also be careful of new signals being received while the program is already inside a signal handler.
Try modifying your Ruby source to exit the program from the main loop as soon as an "exit" flag is set, and don't exit from the signal handler itself.
Your Ruby application becomes:
#!/usr/bin/env ruby
$done = false
Signal.trap("USR1") do
$done = true
end
until $done do
sleep 1
end
puts "** graceful exit"
Which should be much safer.
For real programs, you may consider using a Mutex to protect your flag variable.
I have two scripts, a python script and a perl script.
How can I make the perl script run the python script and then resume running itself?
Something like this should work:
system("python", "/my/script.py") == 0 or die "Python script returned error $?";
If you need to capture the output of the Python script:
open(my $py, "-|", "python2 /my/script.py") or die "Cannot run Python script: $!";
while (<$py>) {
# do something with the input
}
close($py);
This works similarly if you want to provide input to the subprocess: open the pipe with "|-" instead of "-|" to write to its standard input.
The best way is to execute the python script at the system level using IPC::Open3. This keeps things safer and more readable in your code than using system().
You can easily execute system commands, read and write to them with IPC::Open3 like so:
use strict;
use IPC::Open3 ();
use IO::Handle ();  # not required but good for portability

my $python_binary    = '/usr/bin/python';  # adjust for your environment
my $python_file_path = '/my/script.py';

my $write_handle = IO::Handle->new();
my $read_handle  = IO::Handle->new();

my $pid = IPC::Open3::open3($write_handle, $read_handle, '>&STDERR',
    $python_binary . ' ' . $python_file_path);
if (!$pid) { function_that_records_errors("Error"); }

# write to the python process first, then close its stdin so it sees EOF
print $write_handle "Something to write to python process\n";
close($write_handle);

# read multi-line data from the process (slurp mode)
local $/;
my $read_data = readline($read_handle);

waitpid($pid, 0);  # wait for the child process to exit before continuing
This creates a forked process to run the python code, which means that if the python code fails, you can recover and continue with your program.
It may be simpler to run both scripts from a shell script and use pipes (assuming you're in a Unix environment) if you need to pass the results of one program to the other.