Can anyone help or point me in the right direction for creating a drag-and-draw rectangular box to use as a selection tool in PyGtk? At the moment I am using an event box with a drawable window: the user clicks once in the upper-left corner and once in the lower-right corner of the portion of the image they want, and a rectangle is then drawn over that selection. A drag-and-draw rectangle would let the user adjust the selection more easily and get better accuracy.
I have looked in quite a few places for information or a tutorial on this, but I haven't found much. I am relatively new to Gtk+, so perhaps this is so simple that no one has to ask.
Actually, this doesn't seem lamebrained at all. It is quite specific, and a little challenging.
I'll give you the steps to get you started, but since you're just beginning (and you didn't post any specific code), it will be better for you to write the code yourself based on the documentation and my hints.
By the way, look up the official PyGTK documentation - that should be your definitive source for all the objects and functions of PyGTK. It is very well written and exhaustive, and I rarely have to look for more than five minutes to find what I need.
What I suggest you do is use three signals, connected to your drawing area.
button-press-event
button-release-event
motion-notify-event
Create three callbacks (tutorial here), one for each event, and connect your drawing area to your events and callbacks (again, see the tutorial; you may need to go through a few pages of it).
You will also need to create two boolean variables at the global level (above the main class, at the same level where you import modules). The first controls whether the selection tool is chosen (call it "Select_On"), and the second whether a selection is currently in progress (call it "Select_Active").
On the button you use to start the select tool, set "Select_On" to "True". This should probably be a toggle button, so make sure "Select_On" gets set back to "False" when the button is toggled off.
On button-press-event, create the object used for the selection (what you're using now should work well), and set "Select_Active" to "True".
On motion-notify-event, change the size of your object based on cursor position. Refer to that documentation for that particular kind of object to learn how to change its size, and refer here for how to get the cursor position.
Be prepared to write an algorithm to determine how to change the size of the selection object based on the cursor position. If you need help with that, feel free to ask for it in a separate question.
On button-release-event, set "Select_Active" to "False", and call all your code for actually confirming the selection.
As an aside, the benefit of using the "motion-notify-event" is that, as soon as the cursor leaves the widget you're selecting in, the selection box stops changing size. The cursor must re-enter the widget to continue resizing the selection box.
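To put the steps above together, here is a minimal sketch, assuming PyGTK 2.x. The "Select_On" and "Select_Active" names follow the answer; everything else (the cairo drawing in the expose handler, the window setup) is just one possible way to render the rubber-band box, not a definitive implementation.

```python
# Minimal drag-to-draw selection sketch for PyGTK 2.x (assumptions noted above).
import gtk

Select_On = True        # would normally be controlled by your select-tool toggle button
Select_Active = False
start = (0, 0)          # corner where the drag began
current = (0, 0)        # corner currently under the cursor

def on_button_press(widget, event):
    global Select_Active, start, current
    if Select_On:
        Select_Active = True
        start = current = (event.x, event.y)

def on_motion(widget, event):
    global current
    if Select_Active:
        current = (event.x, event.y)
        widget.queue_draw()          # redraw so the rectangle follows the cursor

def on_button_release(widget, event):
    global Select_Active
    if Select_Active:
        Select_Active = False
        # the selection is the rectangle between `start` and `current`;
        # call your code for confirming the selection here

def on_expose(widget, event):
    # draw the rubber-band rectangle with cairo
    cr = widget.window.cairo_create()
    (x0, y0), (x1, y1) = start, current
    cr.rectangle(min(x0, x1), min(y0, y1), abs(x1 - x0), abs(y1 - y0))
    cr.set_source_rgba(0, 0, 1, 0.3)
    cr.fill()

area = gtk.DrawingArea()
area.add_events(gtk.gdk.BUTTON_PRESS_MASK | gtk.gdk.BUTTON_RELEASE_MASK |
                gtk.gdk.POINTER_MOTION_MASK)
area.connect("button-press-event", on_button_press)
area.connect("button-release-event", on_button_release)
area.connect("motion-notify-event", on_motion)
area.connect("expose-event", on_expose)

win = gtk.Window()
win.set_default_size(400, 300)
win.add(area)
win.connect("destroy", gtk.main_quit)
win.show_all()
gtk.main()
```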
I hope all that works for you, and wishing you the very best on your project!
I am developing a wxpython project in which I draw a diagram onto a panel, and I need to be able to zoom in/out on this diagram (a directed acyclic graph in my case). I will trigger the zoom with the mouse scroll wheel when the cursor is over the panel, but that is not part of my question. I would like advice from someone experienced about the method I am using for zooming. So far my thinking is:
There are lines, rectangles and texts inside rectangles within this diagram, so maybe I could increase/decrease their length/size on the chosen mouse event. But it is hard to keep everything consistent: the rectangles are connected by lines whose angles should not change, and the texts inside the rectangles should stay centred in them.
The other method I thought of is to look for a built-in zoom facility; I have heard of something like Scale. However, I have some questions about this method. Will it work on vector drawings (like mine) rather than images? And will it scale only the panel I chose, not the whole screen? After I hear your advice I will look deeper into this, but right now I am a bit clueless.
Sorry if my question is too theoretical. But I felt I needed help in the area. Thanks in advance.
Note: the zoom does not necessarily have to be applied by scrolling.
Note 2: My research also led me to FloatCanvas. Is this suitable for my needs?
Yes, from your description FloatCanvas would certainly meet your needs.
Another possibility to consider would be the wx.GraphicsContext and related classes. It is vector-based (instead of raster) and supports the use of a transformation matrix which would make zooming, rotating, etc. very easy. However, the actual drawing and management of the shapes and such would probably require more work for you than using FloatCanvas.
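For illustration, here is a rough sketch of the wx.GraphicsContext approach: keep the diagram in fixed "model" coordinates, track a zoom factor, and apply gc.Scale() once before drawing so lines, rectangles and text all scale together. The class name, the zoom step of 1.1 and the hard-coded shapes are assumptions for the example, not anything from the original question.

```python
# Illustrative zooming of a vector diagram with wx.GraphicsContext (assumed names).
import wx

class DiagramPanel(wx.Panel):
    def __init__(self, parent):
        wx.Panel.__init__(self, parent)
        self.zoom = 1.0
        self.Bind(wx.EVT_PAINT, self.on_paint)
        self.Bind(wx.EVT_MOUSEWHEEL, self.on_wheel)

    def on_wheel(self, event):
        # zoom in or out by a fixed factor per wheel notch
        self.zoom *= 1.1 if event.GetWheelRotation() > 0 else 1 / 1.1
        self.Refresh()

    def on_paint(self, event):
        dc = wx.PaintDC(self)
        gc = wx.GraphicsContext.Create(dc)
        gc.Scale(self.zoom, self.zoom)            # everything drawn below scales together
        gc.SetPen(wx.Pen(wx.Colour(0, 0, 0), 1))
        gc.SetBrush(wx.Brush(wx.Colour(230, 230, 230)))
        gc.SetFont(wx.Font(10, wx.FONTFAMILY_DEFAULT, wx.FONTSTYLE_NORMAL,
                           wx.FONTWEIGHT_NORMAL), wx.Colour(0, 0, 0))
        # two nodes of the graph and the edge between them, in model coordinates
        gc.DrawRectangle(20, 20, 120, 50)
        gc.DrawText("node A", 50, 35)
        gc.StrokeLine(80, 70, 80, 140)
        gc.DrawRectangle(20, 140, 120, 50)
        gc.DrawText("node B", 50, 155)

app = wx.App(False)
frame = wx.Frame(None, title="zoom sketch", size=(400, 320))
DiagramPanel(frame)
frame.Show()
app.MainLoop()
```

Because only the transformation changes, the proportions of the diagram (line angles, text staying inside its rectangle) are preserved automatically.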
I wanted to know if anyone knew where to start in terms of recreating this sort of functionality?
http://www.learningnuke.com/wp-content/uploads/nukewipepreview.png
In the picture you can drag the centre line to reveal Image A or Image B or parts of each, interactively.
I want to be able to wipe/reveal across two images; maybe it's possible with some sort of interactive crop.
I want to add this feature to a window in Maya, so maybe with Qt, but that's not essential.
Just some pointers would be great.
I can tell you that this is possible via Qt/PyQt in Maya. You can create a dialog that displays QPixmaps with some form of mouse interaction to control their display. I would forget about trying to extend the actual Render View, as this would be a pain in the ass.
Just focus on a Qt solution. Unfortunately beyond this, I'm not sure what more I can offer unless you have a specific question about its implementation.
I would probably stack the QPixmaps on top of each other inside custom QLabel widgets. The QLabel would have a custom mouse press/move event that would resize, say, its right edge to simulate the wipe effect and reveal the pixmap stacked underneath.
Also, this resembles the functionality of a QSplitter, so that might also work, with an image on each side of the layout and a custom style for the splitter bar.
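As a starting point, here is an illustrative PyQt4 sketch of the stacked-pixmap idea (rather than the QSplitter variant): paint image B everywhere, then paint image A clipped to the left of the drag position. The widget name and the image paths are placeholders, and this is one possible approach under those assumptions, not Maya's or Qt's prescribed way.

```python
# Hypothetical wipe/reveal widget: drag horizontally to reveal image A over image B.
from PyQt4 import QtGui, QtCore

class WipeWidget(QtGui.QWidget):
    def __init__(self, pixmap_a, pixmap_b, parent=None):
        super(WipeWidget, self).__init__(parent)
        self.pix_a = pixmap_a
        self.pix_b = pixmap_b
        self.wipe_x = self.pix_a.width() // 2
        self.setFixedSize(self.pix_a.size())

    def mousePressEvent(self, event):
        self.wipe_x = event.pos().x()
        self.update()

    def mouseMoveEvent(self, event):
        # dragging moves the wipe line and repaints
        self.wipe_x = event.pos().x()
        self.update()

    def paintEvent(self, event):
        painter = QtGui.QPainter(self)
        painter.drawPixmap(0, 0, self.pix_b)                   # image B underneath
        painter.setClipRect(0, 0, self.wipe_x, self.height())  # only left of the wipe
        painter.drawPixmap(0, 0, self.pix_a)                   # image A revealed on the left
        painter.setClipping(False)
        painter.setPen(QtGui.QPen(QtCore.Qt.white, 2))
        painter.drawLine(self.wipe_x, 0, self.wipe_x, self.height())  # the draggable line

if __name__ == "__main__":
    import sys
    app = QtGui.QApplication(sys.argv)
    # "imageA.png" / "imageB.png" are placeholder paths
    w = WipeWidget(QtGui.QPixmap("imageA.png"), QtGui.QPixmap("imageB.png"))
    w.show()
    sys.exit(app.exec_())
```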
I am trying to convert my Python code to Java. I need a GUI where, as with Python's widgetname.place(x, y), I can place objects anywhere I want in the window by specifying x and y coordinates. I have tried GridLayout, GridBagLayout, BoxLayout and FlowLayout, but none of them lets me specify x and y coordinates to place my objects (text fields, labels, buttons) wherever I want.
Anyone have any ideas?
This can be done by setting your LayoutManager to null, but it's highly discouraged, precisely because it defeats the purpose of layouts, which is to have good-looking frames regardless of the look and feel, screen resolution, etc.
You'd be better off learning how to use layout managers, because that's the proper way to design a GUI.
There's a Swing tutorial that gives a concise example on positioning widgets absolutely.
If you do start using LayoutManagers, I recommend TableLayout because it is far easier and more powerful than GridBagLayout.
http://java.sun.com/products/jfc/tsc/articles/tablelayout/
Hopefully your need for absolute positioning is relatively small, because it isn't flexible when the user resizes the window and your components need to change size. If you are trying to build a component that draws graphics at specific x, y coordinates, you can subclass JComponent and override paint().
This is about wxPython.
I would like to have 2 Panels lying one over the other:
PanelBG should be some sort of a "background", with its own GridBagSizer with subPanels, StaticTexts and so on;
PanelFG should be the "foreground" panel, also with its own GridBagSizer with some StaticTexts, Buttons... but a transparent background, in such a way that PanelBG is visible wherever PanelFG doesn't lay widgets.
I need both Panels to stretch to all sides of the frame, even when the window is resized, while never changing their relative proportions, which is why I'm not sure absolute positioning is an option.
In case you are wondering, the reason I don't want to use a single Panel is that merging the two GridBagSizers would require me to place many, many more cells in the sizer: the rows and columns of foreground and background don't always coincide, so I would have to split them into many cells, with the grid dimensions growing up to hundreds**2.
Since the content I want to put in the foreground needs to be updated and refreshed quite often, this would mean redrawing all the cells every time, which takes 10 - 20 seconds (tested). Updating only the foreground would take just a few hundredths of a second instead.
Thank you!
This would be at least partially a change of direction, but it might be worth examining what other rendering options you have.
In particular, I'm thinking of wxWebKit (http://wxwebkit.kosoftworks.com/), which would let you do layering, etc. using the WebKit browser rendering engine. I'm not sure whether it's at a stage that would provide everything you need since I haven't actually used it, but even if it doesn't work then it may be an approach worth trying - using HTML/CSS for part of your display, while wrapping the whole in a wxPython app.
As I understand it, this is a calendar with rectangles for the days containing the events for the days.
The simple thing would be to use a wxGrid, with seven columns and four or five rows, to represent the months. You would then place the events into the grid cell for the correct date. The wxGrid widget would look after the details of refreshing everything properly.
Using wxGrid you might lose a little control over the exact appearance (though wxGrid is very flexible and feature-rich once you learn its many methods), but you would save yourself from writing large amounts of code that would take significant effort to debug.
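A bare-bones sketch of that idea might look like the following; the day labels, grid size and the sample event are assumptions for illustration only.

```python
# Skeleton month view using wx.grid.Grid: 7 columns for days, 5 rows for weeks.
import wx
import wx.grid

app = wx.App(False)
frame = wx.Frame(None, title="Calendar sketch", size=(700, 400))
grid = wx.grid.Grid(frame)
grid.CreateGrid(5, 7)                      # 5 weeks x 7 days
grid.SetRowLabelSize(0)                    # hide the row numbers
for col, day in enumerate(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]):
    grid.SetColLabelValue(col, day)
# drop an event into the cell for the right date; wxGrid handles the refresh
grid.SetCellValue(1, 2, "14:00 dentist")
frame.Show()
app.MainLoop()
```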
Suppose I have drawn simple text (say, just the letter 'x') with some font parameters (like a size 20 font) at an (x, y) location in a QLabel that holds a QPixmap. What are the relevant methods I need to override in order to detect a mouse event when a click occurs "precisely" over one of these drawn x's?
My first instinct is to look up the stored (x, y) locations of the drawn points and, if the current mouse position is inside a tiny box around one of them, execute some functionality, such as allowing the user to adjust that point, and otherwise execute the normal mouse-event behaviour. However, the points I am interacting with are tracked features on human faces, like eyes. Often, when someone is turned at an odd angle to the camera, two distinct tracked points can end up practically on top of each other. So no matter how carefully the user aims at the desired key point, if there is any tolerance in the hit-testing logic, there will be cases where the logic believes two different points are being clicked.
I want to minimize this sort of issue without demanding unreasonable precision of the click. Is there some fundamentally different way to interpret selection of text? In other words, when text is drawn onto a pixmap, does the pixmap gain any attribute recording that text was drawn at (x, y), so that a click there can be taken as the user selecting that text?
Any advice or examples of this sort of thing in PyQt would be greatly appreciated.
Although you alluded to your plans for user interaction in your previous question, it is now clear that the Graphics View Framework may be more appropriate for what you are trying to do.
This is distinctly different from drawing in a widget. With this framework, you create a scene composed of graphics items (QGraphicsItem subclasses) and then assign the scene to a view. Through the view, you can interact with the items in the scene: they can generate click events and even be dragged around. The documentation is extensive and looks complicated, but conceptually I think it is easier to understand.
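For example, a minimal PyQt4 sketch of the Graphics View approach could look like this: each tracked point becomes its own movable, selectable item in a scene, so clicks and drags resolve to a specific item rather than requiring manual hit-testing against pixel coordinates. The image path, font and point positions below are placeholders.

```python
# Hypothetical landmark editor skeleton using QGraphicsScene/QGraphicsView.
import sys
from PyQt4 import QtGui

app = QtGui.QApplication(sys.argv)
scene = QtGui.QGraphicsScene()

# the background frame the points are drawn over (placeholder path)
scene.addPixmap(QtGui.QPixmap("frame.png"))

# illustrative landmark positions
for x, y in [(120, 80), (160, 82), (140, 130)]:
    item = scene.addSimpleText("x", QtGui.QFont("Arial", 20))
    item.setBrush(QtGui.QBrush(QtGui.QColor("red")))
    item.setPos(x, y)
    # let the framework handle selecting and dragging this single point
    item.setFlag(QtGui.QGraphicsItem.ItemIsMovable, True)
    item.setFlag(QtGui.QGraphicsItem.ItemIsSelectable, True)

view = QtGui.QGraphicsView(scene)
view.show()
sys.exit(app.exec_())
```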