I need to find a way to code the following:
1. There's a geometry object that contains an array of points that are drawn on a canvas widget (got this covered).
2. When you left-click on the canvas, it checks whether you clicked within a certain margin of an existing point; if so, that point in the array is selected (got this covered in terms of searching for the point and selecting it).
3. Once selected, the point should follow the mouse until the mouse button is released.
Using the Motion event on its own doesn't seem to work, since the handler is called over and over while the button is pressed. So I'd need to trigger the search function when the button is pressed, then the move function while the button is held.
I'd be grateful for pointers.
Thanks to Dan Getz, I did the following:
- bind the point-selection function to the button press, selecting the point and storing its index in self.selectedPoint
- bind the move function to mouse motion, using self.selectedPoint to index the selected point in the array and passing the event's x,y coordinates to the array as the point's new coordinates
- bind the clearSelected function to the button release, setting self.selectedPoint to -1 and thus clearing the selection
The problem I'm still having is that while moving the point I redraw the screen the whole time the mouse is held down, which produces quite a bit of flickering. I'm wondering if there's anything I can do to prevent that.
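For illustration, here is a minimal sketch of that binding scheme (names such as selectPoint, movePoint and the 5-pixel margin are assumptions). One way to avoid the flicker is to move only the affected canvas item with canvas.coords instead of redrawing the whole canvas on every Motion event:

import tkinter as tk

class PointEditor:
    def __init__(self, canvas, points):
        self.canvas = canvas
        self.points = points                 # list of (x, y) tuples
        self.selectedPoint = -1
        self.items = [canvas.create_oval(x - 3, y - 3, x + 3, y + 3, fill='black')
                      for x, y in points]
        canvas.bind('<ButtonPress-1>', self.selectPoint)
        canvas.bind('<B1-Motion>', self.movePoint)
        canvas.bind('<ButtonRelease-1>', self.clearSelected)

    def selectPoint(self, event):
        # search for a point within a small margin of the click
        for i, (x, y) in enumerate(self.points):
            if abs(event.x - x) <= 5 and abs(event.y - y) <= 5:
                self.selectedPoint = i
                return

    def movePoint(self, event):
        if self.selectedPoint != -1:
            self.points[self.selectedPoint] = (event.x, event.y)
            # move just this one item instead of redrawing everything
            self.canvas.coords(self.items[self.selectedPoint],
                               event.x - 3, event.y - 3, event.x + 3, event.y + 3)

    def clearSelected(self, event):
        self.selectedPoint = -1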
Fairly simple one, this. For example:
a = canvas.create_oval(0, 0, 50, 50, outline='red', width=3, fill='')
b = canvas.create_oval(0, 0, 50, 50, outline='red', width=3, fill='red')
b will respond to click events anywhere in the circle, whereas a will only respond to clicks on the outline.
Is there a better way to solve this than simply using an almost-transparent colour for the fill?
The answer depends somewhat on how you define "better". It is true that clicks don't register if the objects don't have a fill color. One option is to put the click event on the canvas itself, then use the canvas find_closest or find_overlapping methods to find the object nearest the cursor.
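For example, a rough sketch of that approach (the handler name and the 3-pixel search box are assumptions):

def on_click(event):
    # find_closest always returns the nearest item, whether or not it has a fill
    item = canvas.find_closest(event.x, event.y)
    # or restrict the search to a small box around the cursor:
    # items = canvas.find_overlapping(event.x - 3, event.y - 3, event.x + 3, event.y + 3)
    print('clicked item:', item)

canvas.bind('<Button-1>', on_click)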
You could use a polygon instead of an oval:
a = canvas.create_polygon(100, 100, 50, 150, 100, 200, 150, 150, outline='red', fill='', smooth=1)
Edit:
A polygon is sensitive to mouse clicks even if it has no fill color (or outline for that matter).
See the canvas docs: http://www.tcl.tk/man/tcl/TkCmd/canvas.htm
It's a bit late, but here is a solution to the problem. In your case you may notice that if you click exactly on the outline of the unfilled object, the click event does trigger.
(I don't know why, but it behaves this way.)
Now, if you remove both the outline and the fill, i.e.
a = canvas.create_oval(0, 0, 50, 50, outline='', fill='')
the invisible circle (no fill, no outline) will trigger the click event just like the filled circle 'b'.
So what you can do is create an invisible circle (no fill, no outline) and bind it to the click event.
Then create another circle right on top of it (same coordinates) with your desired outline parameters.
This gives the illusion of a single outlined circle with no fill while still triggering the click event.
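A minimal sketch of that trick (the handler name is an assumption):

def on_click(event):
    print('circle clicked at', event.x, event.y)

# invisible circle that actually catches the clicks
hit_area = canvas.create_oval(0, 0, 50, 50, outline='', fill='')
# visible outline drawn on top of it
outline = canvas.create_oval(0, 0, 50, 50, outline='red', width=3, fill='')
canvas.tag_bind(hit_area, '<Button-1>', on_click)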
I would like to know if there is a simple way to get the coordinates of an area selected with the mouse on screen.
Imagine I have a small GUI: I click a "select" button, then drag out a selection area on my screen, and it returns the top-left / bottom-right coordinates of the selected area.
Also, which GUI toolkit should I use to be multi-platform compatible?
wxPython / Tkinter, or any other?
Thanks for pointing me in the right direction.
The first question depends on the second, but usually you have something like a mouse-pressed and a mouse-released event, both of which provide coordinates (see the Tkinter event documentation). The selected rectangle is defined by the coordinates of those two events.
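For example, a minimal Tkinter sketch along those lines (the widget and handler names are assumptions):

import tkinter as tk

start = {}

def on_press(event):
    start['x'], start['y'] = event.x, event.y

def on_release(event):
    left, top = min(start['x'], event.x), min(start['y'], event.y)
    right, bottom = max(start['x'], event.x), max(start['y'], event.y)
    print('top-left:', (left, top), 'bottom-right:', (right, bottom))

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300)
canvas.pack()
canvas.bind('<ButtonPress-1>', on_press)
canvas.bind('<ButtonRelease-1>', on_release)
root.mainloop()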
The second question is rather subjective and also depends on what you want to do exactly. But another option would be PyQt.
The script below works perfectly when I want to click, for example, the Start button on Windows, but when I try to click a button in a certain GUI program it does not have any effect.
Is it possible that this program has disabled virtual mouse clicks?
If so, can I circumvent this somehow?
import win32api, win32con

Po = win32api.GetCursorPos()
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTDOWN, Po[0], Po[1], 0, 0)
win32api.mouse_event(win32con.MOUSEEVENTF_LEFTUP, Po[0], Po[1], 0, 0)
mouse_event (and SendInput, which is the preferred API to use for input) has a couple of tricky bits, so it's a good idea to read the MSDN page for mouse_event fully and carefully before using it, and to pay attention to the small print. In particular, the x and y values are not pixels, so you can't just pass in the values you get from GetCursorPos.
It just happens that 0,0 is the bottom-left corner, so points near the bottom left will be roughly in the same area of the screen, but the further away from that you get, the more the pixel values diverge from the actual units this API uses. So it can appear to work for positions near the Start button (assuming it's in the bottom left of the screen), but for other values it may appear to be clicking somewhere else, which sounds similar to what you are seeing.
From MSDN:
dx [in]
Type: DWORD
The mouse's absolute position along the x-axis or its amount of motion since the last mouse event was generated, depending on the setting of MOUSEEVENTF_ABSOLUTE. Absolute data is specified as the mouse's actual x-coordinate; relative data is specified as the number of mickeys moved. A mickey is the amount that a mouse has to move for it to report that it has moved.
So first of all, you need the MOUSEEVENTF_ABSOLUTE flag. But that's not all:
Remarks
...
If MOUSEEVENTF_ABSOLUTE value is specified, dx and dy contain normalized absolute coordinates between 0 and 65,535. The event procedure maps these coordinates onto the display surface. Coordinate (0,0) maps onto the upper-left corner of the display surface, (65535,65535) maps onto the lower-right corner.
...so you'll need to scale your target coordinates appropriately before passing them to this API.
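A rough sketch of that scaling (this is only a sketch; it assumes the primary monitor, and the helper name click_at is made up):

import win32api, win32con

def click_at(x, y):
    # normalize pixel coordinates into the 0..65535 range used with MOUSEEVENTF_ABSOLUTE
    screen_w = win32api.GetSystemMetrics(win32con.SM_CXSCREEN)
    screen_h = win32api.GetSystemMetrics(win32con.SM_CYSCREEN)
    nx = int(x * 65535 / (screen_w - 1))
    ny = int(y * 65535 / (screen_h - 1))
    flags = win32con.MOUSEEVENTF_ABSOLUTE | win32con.MOUSEEVENTF_MOVE
    win32api.mouse_event(flags | win32con.MOUSEEVENTF_LEFTDOWN, nx, ny, 0, 0)
    win32api.mouse_event(flags | win32con.MOUSEEVENTF_LEFTUP, nx, ny, 0, 0)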
Mouse events generated by programs have an "injected" property that an app can filter on if it wants to; for example, MMO clients often filter those out to block bots.
Those zeros in the mouse_event call let you set some extra properties; you might want to research whether those will let you overcome the injected flag, although I can't find a way immediately.
The code below worked for me (fromx, fromy and tox, toy are the pixel coordinates of the start and end points):

import time
import pyautogui

pyautogui.mouseDown(fromx, fromy)   # press the button at the start point
time.sleep(2)
pyautogui.mouseUp(tox, toy)         # release at the end point, e.g. pyautogui.mouseUp(1000, 400)
Suppose I have drawn simple text (say just the letter 'x') with some font parameters (like size 20 font, etc.) onto an (x,y) location in a QLabel that holds a QPixmap. What are the relevant methods that I will need to override in order to detect a mouse event when a click occurs "precisely" over one of these drawn x's?
My first instinct is to lookup the stored (x,y) locations of drawn points and if the mouse current position is inside a tiny box around that (x,y) position, then I will execute some functionality, like allowing the user to adjust that point, and if not then execute normal mouse event functionality. However, the points I am interacting with are tracked features on human faces, like eyes, etc. Often, when someone is turning at an odd angle with the camera, etc., two distinct tracked points can be pretty much right on top of each other. Thus, no matter how well the user focuses on the desired key point, if there is some tolerance in the logic criteria in mouse event handling, there will be cases where the logic believes two different points are being clicked.
I want to minimize this sort of issue without making the click criteria unreasonably precise. Is there some fundamentally different way to interpret selection of text? As in, when text is drawn to a pixmap, does the pixmap gain any sort of attribute that is aware text has been drawn at (x,y), so that a click there can be interpreted as targeting that text?
Any advice or examples of this sort of thing in PyQT would be greatly appreciated.
Although you alluded to your plans for user interaction in your previous question, it is now clear that the Graphics View Framework may be more appropriate for what you are trying to do.
This is distinctly different from drawing in a widget. With this framework, you create a scene composed of graphic items (QGraphicsItem subclasses) and then assign the scene to a view. Through the view, you can interact with the items in the scene. They can generate click events and even be dragged around. The documentation is extensive and looks complicated, but conceptually I think it is easier to understand.
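As a rough sketch of the idea (this assumes PyQt5; the positions and font are made up), each tracked point can become a selectable, movable text item that receives its own mouse events:

from PyQt5.QtWidgets import QApplication, QGraphicsScene, QGraphicsView, QGraphicsItem
from PyQt5.QtGui import QFont

app = QApplication([])
scene = QGraphicsScene()

# one 'x' per tracked feature point; each item handles its own mouse events
for x, y in [(50, 80), (120, 80), (85, 140)]:
    item = scene.addSimpleText('x', QFont('Arial', 20))
    item.setPos(x, y)
    item.setFlag(QGraphicsItem.ItemIsSelectable, True)
    item.setFlag(QGraphicsItem.ItemIsMovable, True)   # the user can drag the point to adjust it

view = QGraphicsView(scene)
view.show()
app.exec_()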
Is it possible to get the x y coordinates of the insertion cursor in a Tkinter Text widget? I'm trying to make a popup menu that pops up next to the insertion cursor.
The bbox method can be used to get the bounding box of an index; that gives you the position relative to the widget. You can then use the winfo_rootx and winfo_rooty methods to get the x/y of the widget on the screen.
Typically popups appear next to the mouse rather than the insertion point (though typically, right-clicking first sets the insertion point and then displays a menu).
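A small sketch of that combination (the text and menu names are assumptions):

def show_popup(event=None):
    bbox = text.bbox('insert')   # (x, y, width, height) of the insertion cursor, or None if it's not visible
    if bbox:
        x = text.winfo_rootx() + bbox[0]
        y = text.winfo_rooty() + bbox[1] + bbox[3]   # just below the cursor
        menu.tk_popup(x, y)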