I'm writing a Python script for an app lock. For this, I'm executing the Get-Process -Name "notepad++" PowerShell command via Python's subprocess to get the process id.
Now, using psutil, I'm able to kill the process. But my objective is to minimize the window in a while loop, using either PowerShell or Python, so that the program becomes unusable until the user enters the password.
With PowerShell, you can do this using the UIAutomationClient methods, without having to rely on native calls.
Here is a small example that demonstrates how to check the window state and minimize the window if it is not already minimized.
Add-Type -AssemblyName UIAutomationClient
$MyProcess = Get-Process -Name "notepad++"
$ae = [System.Windows.Automation.AutomationElement]::FromHandle($MyProcess.MainWindowHandle)
$wp = $ae.GetCurrentPattern([System.Windows.Automation.WindowPatternIdentifiers]::Pattern)
# Loop to make sure the window stays minimized
while ($true) {
    $IsMinimized = $wp.Current.WindowVisualState -eq 'Minimized'
    if (! $IsMinimized) { $wp.SetWindowVisualState('Minimized') }
    Start-Sleep -Milliseconds 250
}
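Since you are driving everything from Python anyway, here is a minimal sketch of running that PowerShell snippet via subprocess. The polling interval, the -NoProfile flag, and the user_has_entered_password helper are all assumptions made up for the example, not requirements:

import subprocess
import time

# The core of the PowerShell snippet above, run once per iteration.
ps_snippet = r'''
Add-Type -AssemblyName UIAutomationClient
$MyProcess = Get-Process -Name "notepad++"
$ae = [System.Windows.Automation.AutomationElement]::FromHandle($MyProcess.MainWindowHandle)
$wp = $ae.GetCurrentPattern([System.Windows.Automation.WindowPatternIdentifiers]::Pattern)
if ($wp.Current.WindowVisualState -ne 'Minimized') { $wp.SetWindowVisualState('Minimized') }
'''

def user_has_entered_password():
    # Hypothetical placeholder for your password-prompt logic.
    return False

# Keep forcing the window down until the user authenticates.
while not user_has_entered_password():
    subprocess.run(["powershell", "-NoProfile", "-Command", ps_snippet])
    time.sleep(0.25)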
Reference
How to switch minimized application to normal state
We have two scripts (one for Katalon and one for Python) that we want to launch from Jenkins.
First we want to launch Katalon and, at a certain point in the script, tell Jenkins to launch the Python script. Then, once the Python script has finished, Jenkins should tell Katalon that it can continue.
Current Jenkins pipeline code:
"pipeline {
agent any
stages {
stage('Unit Test') {
steps {
echo 'Hello Example'
bat """./katalon -noSplash -runMode=console projectPath="/Users/mypc/project/proyect1/example.prj" -retry=0 -
testSuitePath="Test Suites/IOS/TestSuiteAccount" -executionProfile="default" -
deviceId="example" -browserType="iOS" """
sleep 5
}
}
stage('Unit Test2') {
steps {
echo 'Start second test'
bat """python C:\\Users\\myPC\\Documents\\project\\project-katalon-code\\try_python.py"""
sleep 5
}
}
}
}"
In pseudocode it would be the following:
Katalon script:

    my_job()
    call_jenkins_to_start_python()
    if jenkins.python_flag == True
        my_job_continue()

Pipeline Jenkins script:

    Katalon.start()
    if katalon_sent_signal_to_start_python == True
        start_python_job()
    if python_finished_job_signal == True
        send_katalon_signal_to_continue()
Would reading/writing an external file be a good solution? I didn't find anything similar.
Thank you!
AFAIK, Jenkins starts a separate process for the bat() step and waits for it to finish. Also, communication between Jenkins and the bat process is not possible: you trigger the script, then you read the return value and, if needed, stdout.
Additionally, I do not know whether what you want is possible, because I do not know Katalon at all. What you want requires Katalon to wait for a result from the Python script and then, when this result reaches Katalon, resume its execution.
I recommend that you first try the process without Jenkins: create a Windows script that does exactly what you want. If you are able to do that, you can then call that new script from Jenkins, giving input or reading outputs as needed, even, as you suggested, using files for that.
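As a rough sketch of the file-based handshake you suggested (all file names and paths here are made up for the example), the Python side could look like this:

import time
from pathlib import Path

# Hypothetical flag files shared by the two jobs.
START_FLAG = Path(r"C:\temp\start_python.flag")  # created by the Katalon side
DONE_FLAG = Path(r"C:\temp\python_done.flag")    # created by this script

# Block until the Katalon job signals us to start.
while not START_FLAG.exists():
    time.sleep(1)

# ... do the actual Python work here ...

# Signal back that the Python job has finished so Katalon can continue.
DONE_FLAG.write_text("done")

The Katalon side would mirror this: create start_python.flag when it reaches the hand-off point, then poll for python_done.flag before continuing.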
I have two Python scripts that use two different cameras for a project I am working on, and I am trying to run them both from another script or from within each other; either way is fine.
import os
os.system('python 1.py')
os.system('python 2.py')
My problem, however, is that they don't run at the same time; I have to quit the first one for the next to open. I also tried doing it in bash with the & shell operator:
python 1.py &
python 2.py &
And this does in fact make them both run; however, the issue is that they both run endlessly in the background, and I need to be able to close them easily. Any suggestions on how to avoid the issues with these implementations?
You could do it with multiprocessing:
import os
import time
import psutil
from multiprocessing import Process

def run_program(cmd):
    # Function that the processes will run
    os.system(cmd)

# Initiating Processes with desired arguments
program1 = Process(target=run_program, args=('python 1.py',))
program2 = Process(target=run_program, args=('python 2.py',))

# Start our processes simultaneously
program1.start()
program2.start()

def kill(proc_pid):
    # Kill a process and all of its children
    process = psutil.Process(proc_pid)
    for proc in process.children(recursive=True):
        proc.kill()
    process.kill()

# Wait 5 seconds and kill first program
time.sleep(5)
kill(program1.pid)
program1.join()

# Wait another 1 second and kill second program
time.sleep(1)
kill(program2.pid)
program2.join()

# Print current status of our programs
print('1.py alive status: {}'.format(program1.is_alive()))
print('2.py alive status: {}'.format(program2.is_alive()))
One possible method is to use systemd to control your processes (i.e. treat them as daemons).
This is how I control my Python servers, since they need to run in the background, completely detached from the current tty, so that I can close my connection to the machine and the processes continue. You can then also stop the server later using systemctl, as explained below.
Instructions:
Create a .service file and save it in /etc/systemd/system, with contents along the lines of:
[Unit]
Description=daemon one
[Service]
ExecStart=/path/to/1.py
and repeat with a second unit file pointing to 2.py.
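Note that systemd runs ExecStart directly, so 1.py must be executable (chmod +x) and start with a shebang line such as #!/usr/bin/env python3; alternatively, point ExecStart at the interpreter explicitly, e.g. ExecStart=/usr/bin/python3 /path/to/1.py.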
Then you can use systemctl to control your daemons.
First reload all config files with:
systemctl daemon-reload
then start either of your daemons (where my_daemon.service is one of your unit files):
systemctl start my_daemon
it should now be running and you should find it in:
systemctl list-units
You can also check its status with:
systemctl status my_daemon
and stop/restart them with:
systemctl stop|restart my_daemon
Use subprocess.Popen. This creates a child process and gives you its pid:
from subprocess import Popen
pid = Popen(["python", "1.py"]).pid
Then check out the Popen methods (poll(), wait(), terminate(), communicate()) for communicating with the child process and checking whether it is still running.
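For example, a minimal sketch that starts both scripts and stops them on demand (the script names come from the question; the 5-second run time is arbitrary):

import subprocess
import time

# Start both camera scripts as independent child processes.
p1 = subprocess.Popen(["python", "1.py"])
p2 = subprocess.Popen(["python", "2.py"])

time.sleep(5)  # let them run for a while

# Terminate whichever processes are still running.
for p in (p1, p2):
    if p.poll() is None:  # None means the process is still alive
        p.terminate()
        p.wait()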
I am developing some Python (version 3.6.1) code to install an application in Windows 7. The code used is this:
import subprocess

winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /l C:\PowerBuild\TUmsi.log"'
output = subprocess.check_call(winCMD, shell=True)
The application is installed successfully. The problem is that it always requires a reboot after it is finished (a popup with the message "You must restart your system for the configuration changes to take effect. Click Yes to restart now or No if you plan to restart later.").
I tried to insert the parameter /forcerestart (source here) in the installation command, but it still stops to request the reboot:
def installApp():
    winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /forcerestart /l C:\PowerBuild\TUmsi.log"'
    output = subprocess.check_call(winCMD, shell=True)
Another attempt was to run a follow-up command like the one below, although, since the previous command never finishes (as per my understanding), I realized it would never be called:
rebootSystem = 'shutdown -t 0 /r /f'
subprocess.Popen(rebootSystem, stdout=subprocess.PIPE, shell=True)
Has anyone had such an issue and managed to solve it?
As an ugly workaround, if you're not time-critical but you want to emphasise the "automatic" aspect, why not
run the installCMD in a thread
wait sufficiently long to be sure that the command has completed
perform the shutdown
like this:
import threading, time
import subprocess

def installApp():
    winCMD = r'"C:\PowerBuild\setup.exe" /v"/qr /l C:\PowerBuild\TUmsi.log"'
    output = subprocess.check_call(winCMD, shell=True)

t = threading.Thread(target=installApp)
t.start()

time.sleep(1800)  # half an hour should be enough

rebootSystem = 'shutdown -t 0 /r /f'
subprocess.Popen(rebootSystem, stdout=subprocess.PIPE, shell=True)
Another (safer) way would be to find out which file is created last in the installation, and monitor for its existence in a loop like this:
import os  # in addition to the imports above

while not os.path.isfile("somefile"):
    time.sleep(60)
time.sleep(60)  # another minute for safety
# perform the reboot
To be clean, you'd have to use subprocess.Popen for the installation process, export it as a global and call terminate() on it in the main process, but since you're calling a shutdown anyway, that's not necessary.
(To be really clean, we wouldn't have to do this hack in the first place.)
I would like to use Sensu Core to monitor Python scripts, and I am confused about how to do it.
From the Sensu documentation, this requires Sensu checks. In the provided example, a Ruby script checks that chef-client is running:
#!/usr/bin/env ruby
# get the current list of processes
processes = `ps aux`

# determine if the chef-client process is running
running = processes.lines.detect do |process|
  process.include?('chef-client')
end

# return appropriate check output and exit status code
if running
  puts 'OK - Chef client process is running'
  exit 0
else
  puts 'WARNING - Chef client process is NOT running'
  exit 1
end
How do I implement such a check for a specific script and not the application? That is, how would I go about monitoring a specific Python script (e.g. test.py) and not python in general?
So, I have been successfully running some Python scripts in Sensu for my AWS Linux clients; this is a good example of my check definition:
{
  "checks": {
    "check-rds-limit": {
      "interval": 86400,
      "command": "/etc/sensu/plugins/rds-limit-check.py",
      "contacts": [
        "cloud-ops"
      ],
      "occurrences": 1,
      "subscribers": [
        "cloud-ops-subscription"
      ],
      "handlers": [
        "email",
        "slack"
      ]
    }
  }
}
And your Python plugin can start by defining the shebang path:
#!/usr/bin/env python
import sys
...
# <code condition for warning>
sys.exit(1)
# <code condition for critical>
sys.exit(2)
# <code condition where everything is fine>
sys.exit(0)
More generally, the Ruby script above is searching for the string chef-client in the running processes. You could replace that with any other string, like test.py, which would detect whether any running program has test.py in its name. (You might need to match the substring test.py if you run the program as python test.py; I don't know Ruby.)
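A minimal Python equivalent of that check, as a sketch (the script name test.py comes from the question; using psutil here is my choice, not something the Sensu docs require):

#!/usr/bin/env python
import sys
import psutil

# Look for any running process whose command line mentions test.py
running = any(
    'test.py' in ' '.join(proc.info['cmdline'] or [])
    for proc in psutil.process_iter(['cmdline'])
)

if running:
    print('OK - test.py is running')
    sys.exit(0)
else:
    print('WARNING - test.py is NOT running')
    sys.exit(1)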
I would suggest using the Sensu process check plugin for a more general solution that includes more customization. Look at the other Sensu plugins too.
Why not monitor the expected result or operation of the script rather than the process itself? Typically, we set up checks to monitor an endpoint, in the case of a web application, or an observed behavior, such as messages arriving in a database, to determine if the application is running.
There will be times when the process is technically running but not doing anything, due to an error condition or resource issue. Monitoring the expected result is a much better option than watching a process.
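For example, a sketch of a result-oriented check against a hypothetical health endpoint (the URL is an assumption; exit codes follow the Sensu OK/warning/critical convention shown above):

#!/usr/bin/env python3
import sys
import urllib.request

URL = "http://localhost:8080/health"  # hypothetical endpoint exposed by the app

try:
    with urllib.request.urlopen(URL, timeout=5) as resp:
        if resp.getcode() == 200:
            print("OK - endpoint is healthy")
            sys.exit(0)
    print("WARNING - unexpected response from endpoint")
    sys.exit(1)
except Exception as exc:
    print("CRITICAL - endpoint unreachable: {}".format(exc))
    sys.exit(2)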
I have a problem with the way signals are propagated within a process group. Here is my situation and an explanation of the problem:
I have an application that is launched by a shell script (with a su). This shell script is itself launched by a Python application using subprocess.Popen.
I call os.setpgrp as the preexec_fn and have verified using ps that the bash script, the su command and the final application all have the same pgid.
Now when I send signal USR1 to the bash script (the leader of the process group), sometimes the application sees this signal, and sometimes not. I can't figure out why I get this random behavior (the signal is seen by the app about 50% of the time).
Here is the example code I am testing against:
Python launcher:
#!/usr/bin/env python
import os
import subprocess

p = subprocess.Popen(["path/to/bash/script"], stdout=…, stderr=…, preexec_fn=os.setpgrp)
# loop to write stdout and stderr of the subprocess to a file
# note that I use fcntl.fcntl(p.stdXXX.fileno(), fcntl.F_SETFL, os.O_NONBLOCK)
p.wait()
Bash script:
#!/bin/bash
set -e
set -u
cd /usr/local/share/gios/exchange-manager
CONF=/etc/exchange-manager.conf
[ -f $CONF ] && . $CONF
su exchange-manager -p -c "ruby /path/to/ruby/app"
Ruby application :
#!/usr/bin/env ruby
Signal.trap("USR1") do
puts "Received SIGUSR1"
exit
end
while true do
sleep 1
end
So when I send the signal to the bash wrapper (from a terminal or from the Python application), sometimes the Ruby application will see the signal and sometimes not. I don't think it's a logging issue, as I have tried replacing the puts with a method that writes directly to a different file.
Do you guys have any idea what the root cause of my problem could be and how to fix it?
Your signal handler is doing too much. If you exit from within the signal handler, you cannot be sure that your buffers are properly flushed; in other words, you may not be exiting your program gracefully. Also be careful of new signals being received while the program is already inside a signal handler.
Try modifying your Ruby source to exit the program from the main loop as soon as an "exit" flag is set, and don't exit from the signal handler itself.
Your Ruby application becomes:
#!/usr/bin/env ruby
$done = false

Signal.trap("USR1") do
  $done = true
end

until $done do
  sleep 1
end
puts "** graceful exit"
This should be much safer.
For real programs, you may consider using a Mutex to protect your flag variable.
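The same flag-based pattern in Python, for comparison (a minimal sketch; Unix-only, since it uses SIGUSR1):

#!/usr/bin/env python
import signal
import time

done = False

def handle_usr1(signum, frame):
    # Only set a flag here; the main loop performs the actual exit.
    global done
    done = True

signal.signal(signal.SIGUSR1, handle_usr1)

while not done:
    time.sleep(1)
print("** graceful exit")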