Using Qt Event Handlers in squishtest - python

I'm using the squishtest library to control a Qt application from my Python code and am attempting to use event handlers as follows:
import squishtest
def handle_mouse_event(event):
    print 'Clicked!'
squishtest.startApplication('application')
squishtest.installEventHandler('QMouseEvent', handle_mouse_event)
Unfortunately this doesn't work: nothing happens when clicking elements inside the app. However, the equivalent code works when I run it inside the Squish IDE, in the Squish runtime:
import squish
def handle_mouse_event(event):
    print 'Clicked!'
squish.startApplication('application')
squish.installEventHandler('QMouseEvent', handle_mouse_event)
What is the difference, and how do I get event handlers working with squishtest?
Python 2.7.14, Squish 6.3.1, Ubuntu 16.04

(Reposting as an answer, as suggested by the original poster.)
This works for me on Linux and Windows with Squish 6.3.x+, using the Python installation bundled in the Squish package. Admittedly, I added a snooze(5) at the end of the script to have some time to "mouse around" over the application's window; then even the mouse-movement-based events trigger execution of the event handler function.
Another side effect of using snooze() is that the event loops keep being spun, which is not the case with time.sleep() (which you mentioned you had used before).
If it still does not work for you, I recommend contacting froglogic's Squish technical support.
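For reference, here is a minimal sketch of the full script as I ran it, with the snooze(5) added at the end ('application' is the AUT name taken from your question, so treat it as a placeholder):

import squishtest

def handle_mouse_event(event):
    print 'Clicked!'

squishtest.startApplication('application')
squishtest.installEventHandler('QMouseEvent', handle_mouse_event)

# Keep the script (and the event loop) alive for a few seconds so there is
# time to interact with the application and let the handler fire.
squishtest.snooze(5)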

Related

Win32API Mouse vs Real Mouse Click

I have recently started using win32api to simulate mouse events and was wondering whether it is at all detectable.
For example, does the API follow the exact same process/run the exact same commands as a real mouse, or are there slight differences that can be detected? Furthermore, is this the same case with win32com SendKeys (via Shell Script/Python)?
I ask because in the past a few applications have detected the Java Robot library, but they all seem to work fine when using the Python win32api. Thanks.
The SendInput function inserts input events into the same queue as a hardware device, but the events are marked with an LLMHF_INJECTED flag that can be detected by hooks. To avoid this flag you would probably have to write a custom driver.
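As an illustration, here is a rough, Windows-only sketch of a WH_MOUSE_LL hook that checks that flag, using ctypes rather than pywin32; treat it as an outline rather than production code:

import ctypes
from ctypes import wintypes

user32 = ctypes.WinDLL('user32', use_last_error=True)
kernel32 = ctypes.WinDLL('kernel32', use_last_error=True)

WH_MOUSE_LL = 14
WM_LBUTTONDOWN = 0x0201
LLMHF_INJECTED = 0x00000001

class MSLLHOOKSTRUCT(ctypes.Structure):
    _fields_ = [('pt', wintypes.POINT),
                ('mouseData', wintypes.DWORD),
                ('flags', wintypes.DWORD),
                ('time', wintypes.DWORD),
                ('dwExtraInfo', ctypes.c_size_t)]  # ULONG_PTR

LowLevelMouseProc = ctypes.WINFUNCTYPE(
    ctypes.c_ssize_t, ctypes.c_int, wintypes.WPARAM, wintypes.LPARAM)

user32.SetWindowsHookExW.restype = ctypes.c_void_p
user32.SetWindowsHookExW.argtypes = (ctypes.c_int, LowLevelMouseProc,
                                     wintypes.HINSTANCE, wintypes.DWORD)
user32.CallNextHookEx.restype = ctypes.c_ssize_t
user32.CallNextHookEx.argtypes = (ctypes.c_void_p, ctypes.c_int,
                                  wintypes.WPARAM, wintypes.LPARAM)
kernel32.GetModuleHandleW.restype = wintypes.HMODULE
kernel32.GetModuleHandleW.argtypes = (wintypes.LPCWSTR,)

@LowLevelMouseProc
def mouse_proc(nCode, wParam, lParam):
    # Inspect left-button-down events and report whether they were injected
    if nCode >= 0 and wParam == WM_LBUTTONDOWN:
        info = ctypes.cast(lParam, ctypes.POINTER(MSLLHOOKSTRUCT)).contents
        source = 'injected' if info.flags & LLMHF_INJECTED else 'hardware'
        print('%s click at (%d, %d)' % (source, info.pt.x, info.pt.y))
    return user32.CallNextHookEx(None, nCode, wParam, lParam)

hook = user32.SetWindowsHookExW(
    WH_MOUSE_LL, mouse_proc, kernel32.GetModuleHandleW(None), 0)

# A message loop is needed on this thread for the hook callback to fire.
msg = wintypes.MSG()
while user32.GetMessageW(ctypes.byref(msg), None, 0, 0) != 0:
    user32.TranslateMessage(ctypes.byref(msg))
    user32.DispatchMessageW(ctypes.byref(msg))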

better way to automate mouse&keyboard using pyautogui

I wrote a script using pyautogui that should start a program (an IDE) and then start using it.
This is the script so far:
#! python3
# mouseNow.py - Displays the mouse cursor's current position.
import pyautogui, sys, subprocess
from time import sleep
x,y = 1100,550
subprocess.call([r'C:\...exe', arg1, arg2])
pyautogui.click(x,y)
sleep(5) # 2 sec should suffice but this is for safety
pyautogui.typewrite(my_string)
pyautogui.press('enter')
This works well, but I want it to be portable. The x,y values were determined by where the program prompt appears on screen after I start the program, which I don't think is portable. Is there a way to point the mouse at the prompt without hard-coding the coordinates? Something like move_mouse_to_window_of_this_process_after_starting_it()
Also, I use sleep() so that I write the data to the window only after it appears, but I guess that's not a good approach (some PCs will run this much slower), so is there a way to know when the prompt has appeared and only then do the pyautogui.typewrite(my_string)?
EDIT: I found a simple solution for the move_mouse_to_window_of_this_process_after_starting_it():
>>> pyautogui.hotkey('alt', 'tab')
If you need a portable and reliable solution, you have to find a library that supports accessibility technologies so you can access GUI elements by text. The basic technologies are:
Win32 API, MS UI Automation (Windows)
AT-SPI (Linux)
Apple Accessibility API (MacOS)
There are several open-source GUI automation libraries supporting some of these technologies (usually 1 or 2). Python solutions:
pywinauto on Windows (both Win32 API & MS UIA, see Getting Started Guide)
pyatspi2 on Linux
pyatom on MacOS
There is also a thread on StackOverflow regarding hard sleeps vs flexible waiting.
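For instance, a rough pywinauto sketch of that idea; the executable path, window-title pattern and backend choice below are made-up placeholders, so adjust them to your IDE:

from pywinauto.application import Application

app = Application(backend='uia').start(r'C:\path\to\your_ide.exe')  # placeholder path
# Wait for the prompt window to appear instead of a fixed sleep()
dlg = app.window(title_re='.*Prompt.*')     # placeholder title pattern
dlg.wait('visible', timeout=30)
dlg.type_keys(my_string, with_spaces=True)  # my_string as in your script
dlg.type_keys('{ENTER}')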
Enjoy! :)
The way you are interacting with the .exe excludes alternatives to coordinates or blind firing (Tab, Tab, Enter, etc.).
If the application has an API, you could interact with it programmatically.
If it doesn't, you can only try to match the location for each screen resolution, and even that only works if the GUI is used in fullscreen/windowed fullscreen.

Python Detect Screensaver Start/Stop Event on Windows

How can a Python 2.7 script detect the start/stop event of a Windows Screensaver?
You can actually do that with Pywin32. Pywin32 provides Python bindings for the Win32 API and for COM.
Regarding your question, it allows you to "listen" to Windows events, such as the start/stop of the Windows screen saver.
You should be able to get the current state of the screen saver by doing something like this:
import win32gui, win32con
def CheckScreenSaverState():
    return win32gui.SystemParametersInfo(win32con.SPI_GETSCREENSAVERRUNNING)
In this example we use win32gui, which gives access to the GUI part of the Win32 API. By calling win32gui.SystemParametersInfo (a function that belongs to the Windows GUI messaging system; that is out of scope for this explanation, read more here) with the SPI_GETSCREENSAVERRUNNING constant (an internal constant of the Windows OS), we can query the state of the screen saver. The function above should return a boolean value for the state of the screen saver: True if it is running, False if not.
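To turn that state check into actual start/stop notifications, one simple (if crude) approach is to poll the state and react to transitions. A rough sketch along those lines (the one-second polling interval is an arbitrary choice):

import time
import win32gui, win32con

def CheckScreenSaverState():
    return bool(win32gui.SystemParametersInfo(win32con.SPI_GETSCREENSAVERRUNNING))

was_running = CheckScreenSaverState()
while True:
    is_running = CheckScreenSaverState()
    if is_running and not was_running:
        print 'Screensaver started'
    elif was_running and not is_running:
        print 'Screensaver stopped'
    was_running = is_running
    time.sleep(1)  # poll once per second; adjust as needed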
I didn't have time to test it, but tell me how it went I might be able to help you further.
Good luck,
Tom.

How to simulate a real keyboard's keypress in Python/PyQt?

I need to write a virtual keyboard for typing text, and I'm planning to use Python and Qt (PyQt) for this task. The problem is that I don't know how to simulate a key press not as an internal Qt event, but as a simulation of a real keyboard, so that I could use this virtual keyboard like a real one and interact with other applications on my computer. I can't find anything about this in the Qt documentation. So is there any way to do it through PyQt/Qt, or do I need some other Python library, and which one exactly?
I understand that this is a PyQt question, but at the request of the OP I will give a C++ example in case it helps in finding the Python solution.
On the C++ side, simulating the keyboard is done by posting key-press events to the application's event loop. These may be considered 'internal Qt events', but they arrive through exactly the same interface as a physical key press would. This is accomplished as follows:
QKeyEvent *event = new QKeyEvent(QEvent::KeyPress, Qt::Key_Enter, Qt::NoModifier);
QCoreApplication::postEvent(receiver, event);
Looking through the PyQt QCoreApplication API, the postEvent function also exists, so it should be possible to do something analogous (unfortunately I can't offer an example as I'm unfamiliar with writing Python scripts).
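A rough Python translation of the above, assuming PyQt5 (receiver is whatever widget should get the event, not defined here):

from PyQt5.QtCore import QCoreApplication, QEvent, Qt
from PyQt5.QtGui import QKeyEvent

# Post a synthetic Enter key press to a widget inside this application
event = QKeyEvent(QEvent.KeyPress, Qt.Key_Enter, Qt.NoModifier)
QCoreApplication.postEvent(receiver, event)

Note that, as with the C++ version, this only reaches widgets of your own Qt application; it does not type into other programs.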
I had the same problem.
Pyautogui is very good for this and it is stupid simple.
import pyautogui
pyautogui.typewrite("the stuff")
Or if you want to simulate pressing and releasing a single key, use keyDown()/keyUp() with a key name:
import pyautogui
pyautogui.keyDown('enter')  # press and hold the key
pyautogui.keyUp('enter')    # release it
Here's the documentation: https://pyautogui.readthedocs.org/en/latest/
Hope this helps.

How can I add a simple non-interactive gui to my python application?

I have written a little python utility that monitors my typing speed, using pyxhook to hook keyboard events, and a thread timer to update my words per minute number.
Right now it just prints to the terminal every 2 seconds.
How can I make this appear in a little always-on-top gui box?
I tried playing around with tkinter, but the mainloop() function doesn't like my key listener and timer. It seems I can only run the GUI or my event handlers, but not both.
Unfortunately I don't think I can use tkinter's own key handler, since I want to capture key events from other windows.
Any suggestions?
I don't know how to go about doing this in tk, but I've been using PySide lately and I know you could use that.
One way to do it in PySide would be with two classes running in separate threads that communicate using the Qt signal & slot mechanism available in PySide. One class would subclass QThread and get methods that run your existing code and pass the data via signals to the UI class. The second class would be the one holding your GUI elements: it would create an instance of the first class, connect the signals & slots, then start it and begin drawing the display.
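A minimal sketch of that layout, assuming PySide; the worker below just counts up every two seconds as a stand-in for your real key-hook/WPM logic:

import sys, time
from PySide import QtCore, QtGui

class WpmWorker(QtCore.QThread):
    wpm_changed = QtCore.Signal(float)   # signal carrying the new value

    def run(self):
        wpm = 0.0
        while True:                      # replace with your key-hook/timer logic
            wpm += 1.0
            self.wpm_changed.emit(wpm)
            time.sleep(2)

class WpmWindow(QtGui.QLabel):
    def __init__(self):
        super(WpmWindow, self).__init__("0 wpm")
        self.setWindowFlags(QtCore.Qt.WindowStaysOnTopHint)   # always-on-top box
        self.worker = WpmWorker()
        self.worker.wpm_changed.connect(self.on_wpm)          # connect signal to slot
        self.worker.start()

    def on_wpm(self, wpm):
        self.setText("%.0f wpm" % wpm)

app = QtGui.QApplication(sys.argv)
win = WpmWindow()
win.show()
sys.exit(app.exec_())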
Resources if you go the PySide route:
http://www.matteomattei.com/pyside-signals-and-slots-with-qthread-example/
search 'pyside dock widget' on this site
search for github's pyside examples
https://pyside.github.io/docs/pyside/PySide/QtCore/QThread.html?highlight=qthread
