The script below works perfectly when I want to click, for example, the Start button on Windows, but when I try to click a button in a certain GUI program, it has no effect.
Is it possible that this program has disabled virtual mouse clicks?
If so, can I circumvent this somehow?
import win32api, win32con
Po=win32api.GetCursorPos()
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN,Po[0],Po[1],0,0)
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP,Po[0],Po[1],0,0)
mouse_event (and SendInput, which is the preferred API for synthesizing input) has a couple of tricky bits, so it's a good idea to read the MSDN page for mouse_event fully and carefully before using it. Pay attention to the small print: in particular, the x and y values are not pixels, so you can't just pass in the values you get from GetCursorPos.
It just happens that (0,0) is the top-left corner in both coordinate systems (see the MSDN excerpt below), so points near the top-left will land in roughly the right area of the screen; but the further you get from the origin, the more the pixel values diverge from the units this API actually uses. A click can therefore appear to work for some positions while landing somewhere else entirely for others, which sounds similar to what you are seeing.
From MSDN:
dx [in]
Type: DWORD
The mouse's absolute position along the x-axis or its amount of motion since the last mouse event was generated, depending on the setting of MOUSEEVENTF_ABSOLUTE. Absolute data is specified as the mouse's actual x-coordinate; relative data is specified as the number of mickeys moved. A mickey is the amount that a mouse has to move for it to report that it has moved.
So first of all, you need the MOUSEEVENTF_ABSOLUTE flag. But that's not all:
Remarks
...
If MOUSEEVENTF_ABSOLUTE value is specified, dx and dy contain normalized absolute coordinates between 0 and 65,535. The event procedure maps these coordinates onto the display surface. Coordinate (0,0) maps onto the upper-left corner of the display surface, (65535,65535) maps onto the lower-right corner.
...so you'll need to scale your target coordinates appropriately before passing them to this API.
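For example, here is a minimal sketch of the scaling (assuming a single monitor; GetSystemMetrics gives the primary screen size, and MOUSEEVENTF_MOVE is combined with MOUSEEVENTF_ABSOLUTE to position the cursor before clicking):

import win32api, win32con

def click_at(x, y):
    # Scale pixel coordinates to the 0..65535 range MOUSEEVENTF_ABSOLUTE expects
    screen_w = win32api.GetSystemMetrics(win32con.SM_CXSCREEN)
    screen_h = win32api.GetSystemMetrics(win32con.SM_CYSCREEN)
    nx = int(x * 65535 / (screen_w - 1))
    ny = int(y * 65535 / (screen_h - 1))
    # Move to the target in absolute coordinates, then click there
    win32api.mouse_event(win32con.MOUSEEVENTF_ABSOLUTE | win32con.MOUSEEVENTF_MOVE, nx, ny, 0, 0)
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0)
    win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, 0, 0, 0, 0)

click_at(200, 300)  # pixel coordinates of the target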
Mouse events generated by programs carry an "injected" flag that an application can check and filter on if it wants to; MMO clients, for example, often filter injected input to defeat bots.
Those zeros in the mouse_event call let you set some additional properties; you might want to research whether they can overcome the injected flag, although I can't immediately find a way.
The code below worked for me:
import pyautogui
import time

pyautogui.mouseDown(fromx, fromy)  # press at the start position
time.sleep(2)
pyautogui.mouseUp(tox, toy)        # release at the end position, e.g. pyautogui.mouseUp(1000, 400)
I am making a fun Python program to automate a couple of clicks for me, using the PyAutoGUI library. I am struggling to find the coordinates of the elements that I want to click. Is there any way to find the coordinates of my elements?
You can use pyautogui.mouseInfo().
MouseInfo is an application to display the XY position and RGB color information of the pixel currently under the mouse. Works on Python 2 and 3. This is useful for GUI automation planning.
The full documentation is at https://mouseinfo.readthedocs.io/en/latest/
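For instance (mouseInfo() opens an interactive window; position() is the purely programmatic alternative):

import pyautogui

pyautogui.mouseInfo()        # opens the MouseInfo window; hover over your target and read off X, Y and RGB
print(pyautogui.position())  # or query the current cursor position directly, e.g. Point(x=512, y=384)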
Point Position (for Windows) is a simple tool that lets you pick the coordinates for any point on your screen (using X,Y axis). Simply point one of the four corner arrows at the spot on your screen that you want to define and click the button to display the X/Y coordinates.
I need to find a way to code the following:
1. There's a geometry object that contains an array of points that are then drawn on a canvas widget (got this covered).
2. When you left-click on the canvas, it checks whether you clicked within a certain margin of an existing point, and if so, that point in the array is selected (got this covered in terms of searching for the point and selecting it).
3. Once selected, the point should follow the mouse until the mouse button is released.
Using the Motion event on its own doesn't seem to work, as the function is called over and over while the button is pressed. So I'd need to trigger the search function when the button is pressed, then the move function while the button is held.
I'd be grateful for pointers.
Thanks to Dan Getz I did the following:
-bind the point-selection function to the button-press event, selecting the point and storing its index in self.selectedPoint
-bind the move function to the button-held motion event, using self.selectedPoint to index the selected point in the array and passing the event's x,y coordinates to the array as the new coordinates for that point
-bind the clearSelected function to the button-release event, setting self.selectedPoint to -1 and thus clearing the selection
The problem I'm still having is that I update the screen while the mouse is held to move the point, which produces quite a bit of flickering. I'm wondering if there's anything I can do to prevent that.
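One common fix for the flicker is to move the existing canvas item with coords() in the motion handler, rather than clearing and redrawing the whole canvas on every event. A self-contained sketch of the whole press/drag/release pattern (class and variable names are illustrative):

import tkinter as tk

class PointEditor:
    def __init__(self, canvas, points, radius=4, margin=6):
        self.canvas = canvas
        self.points = points        # list of (x, y) tuples
        self.radius = radius
        self.margin = margin
        self.selectedPoint = -1
        # draw one oval item per point and remember its canvas id
        self.ids = [canvas.create_oval(x - radius, y - radius,
                                       x + radius, y + radius, fill='red')
                    for x, y in points]
        canvas.bind('<ButtonPress-1>', self.select_point)
        canvas.bind('<B1-Motion>', self.move_point)
        canvas.bind('<ButtonRelease-1>', self.clear_selected)

    def select_point(self, event):
        # pick the first point within the margin of the click
        for i, (x, y) in enumerate(self.points):
            if abs(event.x - x) <= self.margin and abs(event.y - y) <= self.margin:
                self.selectedPoint = i
                return

    def move_point(self, event):
        if self.selectedPoint == -1:
            return
        i = self.selectedPoint
        self.points[i] = (event.x, event.y)
        # move just this item instead of redrawing everything: no flicker
        r = self.radius
        self.canvas.coords(self.ids[i], event.x - r, event.y - r,
                           event.x + r, event.y + r)

    def clear_selected(self, event):
        self.selectedPoint = -1

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300, bg='white')
canvas.pack()
PointEditor(canvas, [(50, 50), (120, 200), (300, 80)])
root.mainloop()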
I'm trying to program an experiment to find out how humans cognitively segment movement streams. For example, if the movement stream is a person climbing a flight of stairs, each step might be a single segment.
The study is basically a replication of this one, but with another set of stimuli: http://dl.acm.org/citation.cfm?doid=2010325.2010326
Each trial should be structured like the following:
Present a video of a motion stream. Display a bar beneath the video with a marker that moves in sync with the current time of the video (very similar to the GUI of a video player).
Present that video again, but now let the participant add stationary markers to the bar beneath the video by pressing a key. Each marker should be placed at the point in the bar that corresponds to the time the button was pressed (e.g. if the video is 100 seconds long and the button was pressed 10 seconds in, the marker should be placed at the 10% mark of the bar).
My instructor suggested programming the whole thing using PsychoPy. PsychoPy currently only supports Python 2.7.
I've looked into the program and it looks promising. One can display a video easily and the rating scale class is similar to the bar we want to implement. However, several features are missing, namely:
One can only set a single marker, whereas subjects should be able to set several.
As mentioned in point (1), we want a marker that moves in sync with the video.
When a key press occurs, a marker should be placed at the point in the bar that corresponds to the current time point in the video.
Hence my questions: Do you have any tips for implementing the features described above using the PsychoPy module?
I don't know how far this strays into recommendation-question territory, but if you know of a module for writing experiment GUIs whose widgets have the features we want for this experiment, I would be curious to hear about it.
PsychoPy is a good choice for this. The rating scale however (as you note) is probably not the right tool for creating the markers. You can make simple polygon shapes though, which could serve as your multiple markers as well as the continuous time indicator.
e.g. you could make a polygon stimulus with three vertices (to make a triangle indicator) and set its location to be something like this (assuming you are using normalised coordinates):
$[((t/movie_duration) * 2 - 1) , -0.9]
t is a Builder variable that represents the time elapsed in the current trial in seconds. The centre of the screen is at coordinates [0, 0], so the code above makes the pointer move smoothly from the left-hand edge of the screen to the right, close to the bottom edge, reaching the right-hand edge when the movie ends. Set the polygon's position field to update every frame so that the animation is continuous.
movie_duration is a placeholder variable for the duration of the movie in seconds. You could specify this in your conditions file, or you can query the movie component to get its duration I think, something like:
$[((t/movie_stim_name.duration()) * 2 - 1) , -0.9]
You could leave markers on the screen in response to keypresses in a similar way, but this would require a little Python code in a code component.
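For example, the keypress markers might be handled with something like the following in the "Each Frame" tab of a code component (a sketch only; movie_duration and the 'space' key are placeholders, and I'm assuming the win, t and event names that Builder exposes in a code component's namespace). Initialise markers = [] in the "Begin Routine" tab first:

# Each Frame tab: drop a stationary marker at the current time point on each keypress
if event.getKeys(keyList=['space']):
    x = (t / movie_duration) * 2 - 1  # same left-to-right mapping as the moving pointer
    markers.append(visual.Polygon(win, edges=3, size=(0.03, 0.05),
                                  pos=(x, -0.9), fillColor='red'))
for marker in markers:
    marker.draw()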
Is there a simple way to get the coordinates of an area selected with the mouse on screen?
Imagine I have a small GUI: I click a "select" button, draw a selection area on my screen, and it returns the top-left / bottom-right coordinates of the selected area.
Also, which kind of GUI toolkit should I use to be multi-platform compatible?
wxPython / Tkinter, any other?
Thanks for pointing me in the right direction.
The first question depends on the second, but usually you have something like a mouse-pressed and a mouse-released event, both of which provide coordinates (see here for Tkinter). A selected rectangle is defined by the coordinates of those two events.
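In Tkinter, for example, that looks roughly like this (a sketch that selects within a window; for selecting on the whole screen, the usual trick is a borderless, full-screen, semi-transparent Toplevel):

import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300, bg='white')
canvas.pack()
start = [0, 0]

def on_press(event):
    start[0], start[1] = event.x, event.y  # remember where the drag began

def on_release(event):
    # normalise so we always report top-left and bottom-right
    top_left = (min(start[0], event.x), min(start[1], event.y))
    bottom_right = (max(start[0], event.x), max(start[1], event.y))
    print('top-left:', top_left, 'bottom-right:', bottom_right)

canvas.bind('<ButtonPress-1>', on_press)
canvas.bind('<ButtonRelease-1>', on_release)
root.mainloop()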
The second question is rather subjective and also depends on what you want to do exactly. But another option would be PyQt.
Suppose I have drawn simple text (say, just the letter 'x') with some font parameters (size-20 font, etc.) at an (x,y) location in a QLabel that holds a QPixmap. Which methods do I need to override in order to detect a mouse click that occurs "precisely" over one of these drawn x's?
My first instinct is to look up the stored (x,y) locations of drawn points, and if the mouse's current position is inside a tiny box around one of them, execute some functionality (such as allowing the user to adjust that point), and otherwise fall back to normal mouse-event handling. However, the points I am interacting with are tracked features on human faces, like eyes. When someone is turned at an odd angle to the camera, two distinct tracked points can sit practically on top of each other, so no matter how carefully the user aims at the desired key point, any tolerance in the mouse-event logic means there will be cases where the logic believes two different points are being clicked.
I want to minimize this sort of issue without demanding unreasonable precision from the click criteria. Is there some fundamentally different way to interpret selection of text? That is, when text is drawn to a pixmap, does the pixmap gain any attribute recording that text was drawn at (x,y), so that a click there can be resolved to that text?
Any advice or examples of this sort of thing in PyQT would be greatly appreciated.
Although you alluded to your plans for user interaction in your previous question, it is now clear that the Graphics View Framework may be more appropriate for what you are trying to do.
This is distinctly different from drawing in a widget. With this framework, you create a scene composed of graphics items (QGraphicsItem subclasses) and then assign the scene to a view. Through the view, you can interact with the items in the scene: they can generate click events and can even be dragged around. The documentation is extensive and looks complicated, but conceptually I think it is easier to understand.
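A minimal sketch of that idea (written here with PyQt5; the item flags make each drawn 'x' individually clickable and draggable):

from PyQt5.QtWidgets import (QApplication, QGraphicsItem, QGraphicsScene,
                             QGraphicsSimpleTextItem, QGraphicsView)

app = QApplication([])
scene = QGraphicsScene()

for x, y in [(40, 60), (120, 90), (200, 150)]:
    item = QGraphicsSimpleTextItem('x')
    item.setPos(x, y)
    # each item handles its own hit-testing, selection and dragging
    item.setFlag(QGraphicsItem.ItemIsSelectable)
    item.setFlag(QGraphicsItem.ItemIsMovable)
    scene.addItem(item)

view = QGraphicsView(scene)
view.show()
app.exec_()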