pywinauto wait for window to appear and send key-press - python

I'm trying to write a small function that will wait until a certain window appears and then send a key-press (Alt+I). I've been trying to do this with pywinauto, but with no success.
From what I've read in the documentation I can use
pywinauto.application.WindowSpecification.Exists()
but I just can't understand how to specify what I'm looking for. I can use either the window title or the process name, but can't find a good explanation.
Also, is there a better or easier module to use besides pywinauto? I don't need to do complicated automation, just wait for a window and send some keys.
EDIT
OK, I found a solution: a simple function that loops forever.
def auto_accept(*args):
    while True:
        try:
            app = pywinauto.Application()
            app.window_(title='Untitled - Notepad').SetFocus()
            app.window_(title='Untitled - Notepad').TypeKeys("{1}{2}{3}")
        except (pywinauto.findwindows.WindowNotFoundError, pywinauto.timings.TimeoutError):
            pass
But now I always get a warning like "2015-07-13 12:18:02,887 INFO: Typed text to the Notepad: {1}{2}{3}", and I can't filter them out using the warnings module. Is there another way to filter/disable them? It's an issue because when I create an exe using py2exe, after the program is closed it tells me there are errors, but the only "errors" are the warnings I get from SendKeys.

You can simply use the wait/wait_not methods on a WindowSpecification object:
from pywinauto.application import Application
app = Application(backend="win32").start('executable')
app.WindowSpecification.wait('enabled').type_keys('%i')  # % - Alt, ^ - Ctrl
The WindowSpecification can be set with more details:
app.window(title='Title', class_name='#32770')
All possible parameters for the window() method are the same as for the find_elements function (this low-level function is not recommended for direct usage).
For long operations you can set a timeout for a single wait: wait('enabled', timeout=20), or set the timeout for every wait globally: Timings.window_find_timeout = 10.
EDIT: call this code after import pywinauto to disable logging:
import logging
logger = logging.getLogger('pywinauto')
logger.setLevel(logging.WARNING)  # or higher
Logger levels:

Level     Numeric value
CRITICAL  50
ERROR     40
WARNING   30
INFO      20
DEBUG     10
NOTSET    0
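These levels are ordinary logging-module constants, so the filtering can be verified directly. A minimal sketch (assuming only that pywinauto logs through the standard logging module, as described above):

```python
import logging

# Silence the pywinauto logger below WARNING, as suggested above
logger = logging.getLogger('pywinauto')
logger.setLevel(logging.WARNING)

# INFO (20) is below WARNING (30), so INFO records are filtered out
print(logger.isEnabledFor(logging.INFO))   # False
print(logger.isEnabledFor(logging.ERROR))  # True

# The names and numeric values match the table above
print(logging.WARNING)                     # 30
```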

Related

How to make a python script stopable from another script?

TL;DR: If you have a program that should run for an undetermined amount of time, how do you code something to stop it when the user decides it is time? (Without KeyboardInterrupt or killing the task.)
--
I've recently posted this question: How to make my code stopable? (Not killing/interrupting)
The answers did address my question, but from a termination/interruption point of view, and that's not really what I wanted. (Although my question didn't make that clear.)
So, I'm rephrasing it.
I created a generic script for example purposes. So I have this class that gathers data from a generic API and writes the data into a CSV. The code is started by typing python main.py in a terminal window.
import time, csv
import GenericAPI

class GenericDataCollector:
    def __init__(self):
        self.generic_api = GenericAPI()
        self.loop_control = True

    def collect_data(self):
        while self.loop_control: # Can this var be changed from outside of the class? (Maybe one solution)
            data = self.generic_api.fetch_data() # Returns a JSON with some data
            self.write_on_csv(data)
            time.sleep(1)

    def write_on_csv(self, data):
        with open('file.csv', 'wt') as f:
            writer = csv.writer(f)
            writer.writerow(data)

def run():
    obj = GenericDataCollector()
    obj.collect_data()

if __name__ == "__main__":
    run()
The script is supposed to run forever OR until I command it to stop. I know I can just use KeyboardInterrupt (Ctrl+C) or abruptly kill the task. That isn't what I'm looking for. I want a "soft" way to tell the script it's time to stop, not only because interruption can be unpredictable but also because it's a harsh way to stop.
If that script was running in a Docker container (for example), you wouldn't be able to Ctrl+C unless you happened to be in the terminal/bash inside the container.
Or another situation: if that script was made for a customer, I don't think it's OK to tell the customer to just use Ctrl+C/kill the task to stop it. Definitely counterintuitive, especially if it's a non-tech person.
I'm looking for a way to code another script (assuming that's a possible solution) that would change the attribute obj.loop_control to False, finishing the loop once the current iteration completes. Something that could be run by typing python stop_script.py in a (different) terminal.
It doesn't necessarily need to be this way. Other solutions are also acceptable, as long as they don't involve KeyboardInterrupt or killing tasks. If I could use a method inside the class, that would be great, as long as I can call it from another terminal/script.
Is there a way to do this?
If you have a program that should run for an undetermined amount of time, how do you code something to stop it when the user decides it is time?
In general, there are two main ways of doing this (as far as I can see). The first one would be to make your script check some condition that can be modified from outside (like the existence or the content of some file/socket), or, as #Green Cloak Guy stated, to use pipes, which are one form of interprocess communication.
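The first approach (an externally modifiable condition) can be sketched with a plain flag file; the path name and helper are illustrative:

```python
import os

# Hypothetical flag file; an external stop script simply creates it
STOP_FLAG = '/tmp/myscript.stop'

def should_stop():
    """Loop condition: run until someone creates the flag file."""
    return os.path.exists(STOP_FLAG)

# Make sure a stale flag from a previous run doesn't stop us immediately
if os.path.exists(STOP_FLAG):
    os.remove(STOP_FLAG)

iterations = 0
while not should_stop():
    iterations += 1
    if iterations == 3:
        # Stand-in for the external stop script: touch the flag file
        open(STOP_FLAG, 'w').close()

print(iterations)  # 3: the loop exits at the first check after the flag appears
```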
The second one would be to use the built-in mechanism for interprocess communication called signals, which exists in every OS where Python runs. When the user presses Ctrl+C, the terminal sends a specific signal to the process in the foreground. But you can send the same (or another) signal programmatically (i.e. from another script).
Reading the answers to your other question, I would say that what is missing to address this one is a way to send the appropriate signal to your already-running process. Essentially this can be done by using the os.kill() function. Note that although the function is called 'kill', it can send any signal (not only SIGKILL).
In order for this to work you need to have the process id of the running process. A commonly used way of knowing this is to have your script save its process id, when it launches, into a file stored in a common location. To get the current process id you can use the os.getpid() function.
So summarizing I'd say that the steps to achieve what you want would be:
Modify your current script to store its process id (obtained with os.getpid()) in a file in a common location, for example /tmp/myscript.pid. Note that if you want your script to be portable, you will need to address this in a way that works on non-Unix-like OSs such as Windows.
Choose one signal (typically SIGINT, SIGUSR1, or SIGTERM; note that SIGKILL and SIGSTOP cannot be caught) and modify your script to register a custom handler using signal.signal() that handles the graceful termination of your script.
Create another script (note that it could be the same script with some command-line parameter) that reads the process id from the known file (e.g. /tmp/myscript.pid) and sends the chosen signal to that process using os.kill().
Note that an advantage of using signals to achieve this instead of an external way (files, pipes, etc.) is that the user can still press Ctrl+C (if you chose SIGINT) and that will produce the same behavior as the 'stop script' would.
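Put together, the three steps look roughly like this (a POSIX-only sketch that signals itself for demonstration; PID_FILE and handle_stop are illustrative names):

```python
import os
import signal

# Hypothetical pid-file path from the steps above; a portable script
# would pick a per-OS location instead of /tmp
PID_FILE = '/tmp/myscript.pid'

running = True

def handle_stop(signum, frame):
    """Graceful handler: flip the loop flag instead of dying abruptly."""
    global running
    running = False

# Step 1: store our pid so a separate stop script can find us
with open(PID_FILE, 'w') as f:
    f.write(str(os.getpid()))

# Step 2: register the handler for the chosen signal
signal.signal(signal.SIGTERM, handle_stop)

# Step 3 (normally done from the separate stop script): read the pid
# back and send the signal; here we signal ourselves to demonstrate
with open(PID_FILE) as f:
    pid = int(f.read())
os.kill(pid, signal.SIGTERM)

print(running)  # the loop flag is now False, so collect_data() would exit
```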
What you're really looking for is any way to send a signal from one program to another, independent, program. One way to do this would be to use an inter-process pipe. Python has a module for this (which does, admittedly, seem to require a POSIX-compliant shell, but most major operating systems should provide that).
What you'll have to do is agree on a filepath beforehand between your running-program (let's say main.py) and your stopping-program (let's say stop.sh). Then you might make the main program run until someone inputs something to that pipe:
import pipes
...
t = pipes.Template()
# create a pipe in the first place
t.open("/tmp/pipefile", "w")
# create a lasting pipe to read from that
pipefile = t.open("/tmp/pipefile", "r")
...
And now, inside your program, change your loop condition to "as long as there's no input from this file". Unless someone writes something to it, .read() will return an empty string:
while not pipefile.read():
    # do stuff
To stop it, you run another script (or command) that writes to that file. This is easiest to do with a shell script:
#!/usr/bin/env sh
echo STOP >> /tmp/pipefile
which, if you're containerizing this, you could put in /usr/bin and name it stop, give it at least 0111 permissions, and tell your user "to stop the program, just do docker exec containername stop".
(using >> instead of > is important because we just want to append to the pipe, not to overwrite it).
Proof of concept on my python console:
>>> import pipes
>>> t = pipes.Template()
>>> t.open("/tmp/file1", "w")
<_io.TextIOWrapper name='/tmp/file1' mode='w' encoding='UTF-8'>
>>> pipefile = t.open("/tmp/file1", "r")
>>> i = 0
>>> while not pipefile.read():
...     i += 1
...
At this point I go to a different terminal tab and do
$ echo "Stop" >> /tmp/file1
then I go back to my python tab, and the while loop is no longer executing, so I can check what happened to i while I was gone.
>>> print(i)
1704312

How To Get All Python Logs Only On Error?

I am using Python's built-in logging module. I'd like to log only when an error occurs, but when it does, to log everything up until that point for debugging purposes.
It would be nice if I could reset this as well, so a long running process doesn't contain gigabytes of logs.
As an example. I have a process that processes one million widgets. Processing a widget can be complicated and involve several steps. If processing fails, knowing all of the logs for that widget up to that point would be helpful.
import logging
from random import randrange

logger = logging.getLogger()

for widget in widgetGenerator():
    logger.reset()
    widget.process(logger)

class Widget():
    def process(self, logger):
        logger.info('doing stuff')
        logger.info('do more stuff')
        if randrange(0, 10) == 5:
            logger.error('something bad happened')
1 out of 10 times the following would be printed:
doing stuff
do more stuff
something bad happened
But the normal logs would not be printed otherwise.
Can this be done with the logger as is or do I need to roll my own implementation?
Use a MemoryHandler to buffer records using a threshold of e.g. ERROR, and make the MemoryHandler's target attribute point to a handler which writes to e.g. console or file. Then output should only occur if the threshold (e.g. ERROR) is hit during program execution.
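A minimal sketch of that setup, using an in-memory stream as the target so the effect is visible (the logger name and capacity are arbitrary):

```python
import io
import logging
import logging.handlers

# Target handler: where records go once the buffer is flushed
stream = io.StringIO()
target = logging.StreamHandler(stream)
target.setFormatter(logging.Formatter('%(levelname)s %(message)s'))

# MemoryHandler buffers everything and only flushes to its target
# when a record at flushLevel (ERROR) or above arrives
memory = logging.handlers.MemoryHandler(
    capacity=10000, flushLevel=logging.ERROR, target=target)

logger = logging.getLogger('widgets')
logger.setLevel(logging.DEBUG)
logger.addHandler(memory)

logger.info('doing stuff')
logger.info('do more stuff')
# Nothing written yet: the INFO records just sit in the buffer
assert stream.getvalue() == ''

logger.error('something bad happened')
# The ERROR record triggers a flush of everything buffered so far
print(stream.getvalue())

# Between widgets, drop any leftover buffered records so a long
# run doesn't accumulate gigabytes of logs
memory.buffer.clear()
```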

Robot Framework Extending Selenium To Handle Loading Element

I am using Robot Framework to do test automation and would like to know if there is a way for me to extend the Selenium Webdriver to handle a loading element universally. For reference when I say loading element I am referring to a custom Angular element.
I understand that I can do a Wait Until Element Not Visible, but what I really want is to create a constant listener on the Selenium Webdriver that halts whenever this loading element is visible. I would like to not constantly use a Robot keyword to check if the element is visible, and be forced to insert arbitrary-ish sleeps.
If my question is still not clear, below is some pseudocode / pseudo-flow I was thinking of. I'm assuming Selenium is single-threaded, otherwise this idea would be fruitless.
Selenium Webdriver:
    AddListener(new LoadingListener)

LoadingListener:
    While (LoadingElement is Visible)
        Sleep 1s
Please respond with any feedback. If this isn't possible, let me know; if you have any ideas, also let me know.
I haven't ever done this, but you might be able to use the listener interface to call a function before or after every keyword. For example, the listener might look something like this (untested):
from robot.libraries.BuiltIn import BuiltIn

xpath = "//*[@id='loading_element']"

class CustomListener(object):
    ROBOT_LISTENER_API_VERSION = 2

    def start_keyword(self, name, attrs):
        selib = BuiltIn().get_library_instance("SeleniumLibrary")
        selib.wait_until_element_is_not_visible(xpath)
This may throw an exception if the element doesn't exist on the current page, and you also need to add code to handle the case where a keyword runs before the browser is open, but hopefully it gives the general idea.
For more information about the listener interface see the section titled Listener Interface in the Robot Framework User Guide.
Another direction you could take is to use the SeleniumLibrary keyword Execute Async Javascript.
The document states:
Executes asynchronous JavaScript code.
Similar to Execute Javascript except that scripts executed with this
keyword must explicitly signal they are finished by invoking the
provided callback. This callback is always injected into the executed
function as the last argument.
Scripts must complete within the script timeout or this keyword will fail.
With the following code example from the docs:
${result} =    Execute Async JavaScript
...    var callback = arguments[arguments.length - 1];
...    function answer(){callback("text");};
...    window.setTimeout(answer, 2000);
Should Be Equal    ${result}    text
Below is the solution I ended up using :)
Worked well, thank you for all comments and help!
from robot.libraries.BuiltIn import BuiltIn
from datetime import datetime

class LoaderListener(object):
    ROBOT_LISTENER_API_VERSION = 2
    ROBOT_LIBRARY_SCOPE = "GLOBAL"

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.selib = BuiltIn().get_library_instance("SeleniumLibrary")

    def _start_suite(self, name, attrs):
        BuiltIn().log_to_console("Loader Listener Has Been Attached.")

    def start_keyword(self, name, attrs):
        try:
            # If this doesn't fail we have the loader
            self.selib.find_element("id=loader")
            # Take a time stamp for when we begin waiting
            BuiltIn().log_to_console("The Page Is Loading.")
            start_time = datetime.now()
            # Wait until the element is not visible.
            # Sometimes this will fail, because by the time we get here the loader is gone.
            # In that case, just continue as normal.
            try:
                self.selib.wait_until_element_is_not_visible("id=loader")
            except:
                pass
            # Take a time stamp for when we have stopped waiting
            end_time = datetime.now()
            # Calculate the time elapsed
            elapsed_time = end_time - start_time
            # Create the output message and log to console
            output_message = "The Page Has Finished Loading in {0}".format(elapsed_time)
            BuiltIn().log_to_console(output_message)
        except:
            # We don't have the element...
            pass

Add Custom Function Output to Python Logging Handler

Using logging to handle logging across multiple modules within a simulation framework with its own 'time'.
Basically, I'm getting things like:
WARNING:Node[n0].App:RoutingTest:No Packet Count List set up yet; fudging it with an broadcast first
INFO:Node[n0].Layercake.ALOHA:Transmit to Any
INFO:Node[n0].Layercake.ALOHA:The timeout is 16.0910738255
WARNING:Node[n1].App:RoutingTest:No Packet Count List set up yet; fudging it with an broadcast first
INFO:Node[n1].Layercake.ALOHA:Transmit to Any
And while these happen more or less instantaneously in 'real' time it's tough to tell what that means in machine time.
Within the framework, there's a globally accessible Sim.now() that returns the current run time.
While I could go through all my logging uses and add this as an additional tail field, I'd rather add it as part of the base logging handler. However, a scan through the relevant documentation and searches here and on Google haven't turned up anything directly relevant. There was one person asking almost the same question, but they didn't get an appropriate response.
In essence, I want to update the base handler to prefix all log calls with a call to this function, effectively:
logline = "[{T}]:{msg}".format(T=Sim.now(), msg=logmsg)
Any pointers?
You could write a custom Formatter:
import logging
from sim import Sim

class SimNowPrefixFormatter(logging.Formatter):
    def format(self, record):
        log_message = super(SimNowPrefixFormatter, self).format(record)
        return "[{}]:{}".format(Sim.now(), log_message)

# Your base logging handler
handler = logging.StreamHandler()
handler.setFormatter(SimNowPrefixFormatter("%(levelname)s:%(message)s"))
root_logger = logging.getLogger()
root_logger.addHandler(handler)
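A quick check with a stub Sim (the real framework's Sim.now() isn't shown, so the class below is a stand-in) confirms that every record gets the prefix:

```python
import io
import logging

# Stub standing in for the framework's Sim.now(); the real one
# returns the simulation's current run time
class Sim:
    @staticmethod
    def now():
        return 16.09

class SimNowPrefixFormatter(logging.Formatter):
    def format(self, record):
        log_message = super(SimNowPrefixFormatter, self).format(record)
        return "[{}]:{}".format(Sim.now(), log_message)

# Write to an in-memory stream so we can inspect the output
stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(SimNowPrefixFormatter("%(levelname)s:%(message)s"))

logger = logging.getLogger("Node[n0].Layercake.ALOHA")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("Transmit to Any")
print(stream.getvalue())  # [16.09]:INFO:Transmit to Any
```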

Thread-Switching in PyDbg

I've tried posting this in the reverse-engineering stack-exchange, but I thought I'd cross-post it here for more visibility.
I'm having trouble switching from debugging one thread to another in pydbg. I don't have much experience with multithreading, so I'm hoping that I'm just missing something obvious.
Basically, I want to suspend all threads, then start single stepping in one thread. In my case, there are two threads.
First, I suspend all threads. Then, I set a breakpoint on the location where EIP will be when thread 2 is resumed. (This location is confirmed by using IDA). Then, I enable single-stepping as I would in any other context, and resume Thread 2.
However, pydbg doesn't seem to catch the breakpoint exception! Thread 2 seems to resume, and even though it MUST hit that address, there is no indication that pydbg is catching the breakpoint exception. I added a print "HIT BREAKPOINT" statement inside pydbg's internal breakpoint handler, and it never seems to be called after resuming Thread 2.
I'm not too sure about where to go next, so any suggestions are appreciated!
dbg.suspend_all_threads()
print dbg.enumerate_threads()[0]
oldcontext = dbg.get_thread_context(thread_id=dbg.enumerate_threads()[0])
if dbg.disasm(oldcontext.Eip) == "ret":
    print disasm_at(dbg, oldcontext.Eip)
    print "Thread EIP at a ret"
    addrstr = int("0x" + (dbg.read(oldcontext.Esp + 4, 4))[::-1].encode("hex"), 16)
    print hex(addrstr)
    dbg.bp_set(0x7C90D21A, handler=Thread_Start_bp_Handler)
    print dbg.read(0x7C90D21A, 1).encode("hex")
    dbg.bp_set(oldcontext.Eip + dbg.instruction.length, handler=Thread_Start_bp_Handler)
    dbg.set_thread_context(oldcontext, thread_id=dbg.enumerate_threads()[0])
    dbg.context = oldcontext
    dbg.resume_thread(dbg.enumerate_threads()[0])
    dbg.single_step(enable=True)
return DBG_CONTINUE
Sorry about the "magic numbers", but they are correct as far as I can tell.
One of your problems is that you try to single-step through Thread 2 but you only ever refer to Thread 1 in your code:
dbg.enumerate_threads()[0] # <--- Returns a handle to the first thread.
In addition, the code that you posted does not reflect the complete structure of your script, which makes it hard to judge whether you have other errors or not. You also try to set a breakpoint within the sub-branch that disassembles your instructions, which does not make a lot of sense to me logically. Let me try to explain what I know and lay it out in an organized manner. That way you might look back at your code, re-think it and correct it.
Let's start with basic framework of debugging an application with pydbg:
Create debugger instance
Attach to the process
Set breakpoints
Run it
Breakpoint gets hit - handle it.
This is how it could look like:
from pydbg import *
from pydbg.defines import *
# This is maximum number of instructions we will log
MAX_INSTRUCTIONS = 20
# Address of the breakpoint (an integer, not a string)
func_address = 0x7C90D21A
# Create debugger instance
dbg = pydbg()
# PID to attach to
pid = int(raw_input("Enter PID: "))
# Attach to the process with debugger instance created earlier.
# Attaching the debugger will pause the process.
dbg.attach(pid)
# Let's set the breakpoint and handler as thread_step_setter,
# which we will define a little later...
dbg.bp_set(func_address, handler=thread_step_setter)
# Let's set our "personalized" handler for Single Step Exception
# It will get triggered if execution of a thread goes into single step mode.
dbg.set_callback(EXCEPTION_SINGLE_STEP, single_step_handler)
# Setup is done. Let's run it...
dbg.run()
Now, having the basic structure, let's define our personalized handlers for breakpoints and single stepping. The code snippet below defines our "custom" handlers. What will happen is that when the breakpoint hits, we will iterate through the threads and set them to single-step mode. That will in turn trigger the single step exception, which we will handle by disassembling up to MAX_INSTRUCTIONS instructions:
# Counter for instructions logged so far
total_instructions = 0

def thread_step_setter(dbg):
    dbg.suspend_all_threads()
    for thread_id in dbg.enumerate_threads():
        print "Single step for thread: 0x%08x" % thread_id
        h_thread = dbg.open_thread(thread_id)
        dbg.single_step(True, h_thread)
        dbg.close_handle(h_thread)
    # Resume execution, which will pass control to the step handler
    dbg.resume_all_threads()
    return DBG_CONTINUE

def single_step_handler(dbg):
    global total_instructions
    if total_instructions == MAX_INSTRUCTIONS:
        dbg.single_step(False)
        return DBG_CONTINUE
    else:
        # Disassemble the instruction
        current_instruction = dbg.disasm(dbg.context.Eip)
        print "#%d\t0x%08x : %s" % (total_instructions, dbg.context.Eip, current_instruction)
        total_instructions += 1
        dbg.single_step(True)
        return DBG_CONTINUE
Disclosure: I do not guarantee that the code above will work if copied and pasted. I typed it out and haven't tested it. However, with a basic understanding in place, any small syntactical errors should be easy to fix. I apologize in advance if there are any. I don't currently have the means or time to test it.
I really hope it helps you out.

Categories

Resources