I use a wx.PaintDC() to draw shapes on a panel. After drawing the shapes, when I left-click and drag the mouse, a rubber band (transparent rectangle) is drawn over the shapes. While dragging, an EVT_PAINT is sent for every mouse motion and everything (all shapes and the rectangle) is redrawn.
How do I draw just the rubber band over the existing shapes without redrawing them? Ideally I could save the existing shapes to some DC object and draw only the rubber band over it, so that the application draws faster.
You probably want to have a look at wx.Overlay. Look here for an example.
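A minimal sketch of the usual wx.Overlay rubber-band pattern might look like this (the panel class, pen colour, and shape drawing are placeholders):

```python
import wx

class RubberBandPanel(wx.Panel):
    """Draws the rubber band with wx.Overlay instead of repainting all shapes."""
    def __init__(self, parent):
        super().__init__(parent)
        self.overlay = wx.Overlay()
        self.start = None
        self.Bind(wx.EVT_LEFT_DOWN, self.on_left_down)
        self.Bind(wx.EVT_MOTION, self.on_motion)
        self.Bind(wx.EVT_LEFT_UP, self.on_left_up)

    def on_left_down(self, event):
        self.start = event.GetPosition()
        self.CaptureMouse()

    def on_motion(self, event):
        if not (event.Dragging() and event.LeftIsDown() and self.start):
            return
        dc = wx.ClientDC(self)
        odc = wx.DCOverlay(self.overlay, dc)
        odc.Clear()                                  # erase the previous band
        dc.SetPen(wx.Pen("black"))
        dc.SetBrush(wx.TRANSPARENT_BRUSH)
        dc.DrawRectangle(wx.Rect(self.start, event.GetPosition()))
        del odc                                      # flush the overlay

    def on_left_up(self, event):
        if self.HasCapture():
            self.ReleaseMouse()
        self.start = None
        dc = wx.ClientDC(self)
        odc = wx.DCOverlay(self.overlay, dc)
        odc.Clear()
        del odc
        self.overlay.Reset()
```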
Related
It appears that in Python graphics.py, new objects are drawn behind existing objects. So if I draw a blue circle and THEN place a rectangular box over half the circle, the circle ends up on top of the rectangular box. The box, even though it was drawn last, appears behind the circle. Is there a way to control this behavior so the rectangle would appear on top of the circle and not behind it? I want to avoid having to undraw the circle, draw the rectangle, then redraw the circle just so the rectangle ends up on top of the circle instead of behind it.
Certainly, this becomes more cumbersome as you have more and more overlapping objects.
Say I have an opaque window for the user to click and drag on, creating a selection rectangle. Is it possible to make just that rectangular area transparent, leaving the rest of the window opaque? Kind of like this image, where the black is the opaque part and the white is the transparent part, all part of the same widget.
Edit: Transparent as in being able to see behind the window.
I'm trying to create a panel on wxPython with a user-specified bitmap on the background, where a number of shapes can be dragged.
The expected behaviour is:
User selects an image file on an open file dialog before the panel is initialised;
The image becomes the background of the panel and is scaled to fit the panel while keeping an aspect ratio that depends on a previous user input;
A few circles appear over the image and can be dragged by the user.
I've been able to implement this with no functional problems, but I've been having some trouble with background flicker. So far, the solution I've found that results in the least flickering is:
Create a BufferedDC from the loaded image when the panel is created;
Create a PaintDC inside the EVT_PAINT handler;
StretchBlit the BufferedDC into the PaintDC;
Draw the circles on the PaintDC;
Refresh the panel on any event that changes the circles' position or visibility.
Since the circles are draggable, one of these events is mouse motion, so the panel is refreshed every time the mouse moves over the panel, causing flickering.
How can I implement this behaviour in a way that eliminates background flicker?
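For reference, here is a minimal sketch of the approach described above, using a wx.MemoryDC over the loaded bitmap as the pre-rendered background (the image path, circle positions, and scaling are placeholders; aspect-ratio handling is omitted):

```python
import wx

class ImagePanel(wx.Panel):
    def __init__(self, parent, image_path):
        super().__init__(parent)
        bmp = wx.Bitmap(image_path)                  # user-selected background
        self._bg = wx.MemoryDC(bmp)                  # pre-rendered background
        self._bg_size = bmp.GetSize()
        self.circles = [(60, 60), (150, 100)]        # placeholder circle centres
        self.Bind(wx.EVT_PAINT, self.on_paint)

    def on_paint(self, event):
        dc = wx.PaintDC(self)
        w, h = self.GetClientSize()
        bw, bh = self._bg_size
        dc.StretchBlit(0, 0, w, h, self._bg, 0, 0, bw, bh)  # scale background in
        dc.SetBrush(wx.Brush("red"))
        for x, y in self.circles:
            dc.DrawCircle(x, y, 12)                  # the draggable circles
```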
I have a QGraphicsItem with drawn shapes (image below). How do I detect if the mouse pointer is over the circle, the text or the green rect? All shapes were drawn using painter methods (e.g. painter.drawText()).
Would it be possible to do this by placing a QGraphicsItem inside its parent (also a QGraphicsItem) and using the hover mouse events?
The solution you suggested is the easiest approach: rather than drawing all of the circles from a single GraphicsItem, make each circle its own GraphicsItem and make them children of the original GraphicsItem. Then you can handle mouse hover events individually for each circle.
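A rough sketch of that idea, assuming PyQt5 (colours, geometry, and the usage snippet are placeholders):

```python
from PyQt5.QtCore import QRectF, Qt
from PyQt5.QtGui import QBrush
from PyQt5.QtWidgets import QGraphicsEllipseItem

class HoverCircle(QGraphicsEllipseItem):
    """One circle per item, so each one receives its own hover events."""
    def __init__(self, rect, parent=None):
        super().__init__(rect, parent)        # parent is the original item
        self.setAcceptHoverEvents(True)       # required to receive hover events

    def hoverEnterEvent(self, event):
        self.setBrush(QBrush(Qt.green))       # highlight the circle under the cursor
        super().hoverEnterEvent(event)

    def hoverLeaveEvent(self, event):
        self.setBrush(QBrush(Qt.NoBrush))
        super().hoverLeaveEvent(event)

# Hypothetical usage: 'original_item' is the existing QGraphicsItem that still
# draws its text and rectangle; the circles become its children.
# circle = HoverCircle(QRectF(0, 0, 20, 20), original_item)
```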
I am working on a game that has destructible terrain (like in the game Worms, or Scorched Earth) and uses pixel perfect collision detection via masks.
The level is a single surface and how it works now is that I create a copy every frame, draw all sprites that need drawing on it, then blit the visible area to the display surface.
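Roughly, that per-frame copy looks like this (surface sizes and the sprite list are placeholders):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
level_surface = pygame.Surface((2000, 1000))   # the whole level
camera_rect = pygame.Rect(0, 0, 800, 600)      # visible area of the level
sprites = []                                   # hypothetical sprite list

# One frame of the copy-everything approach:
frame = level_surface.copy()                   # full copy of the level
for sprite in sprites:
    frame.blit(sprite.image, sprite.rect)      # draw in level coordinates
screen.blit(frame, (0, 0), area=camera_rect)   # blit only the visible area
pygame.display.flip()
```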
Is there any way to avoid copying the whole level surface every frame and still be able to use the pixel perfect collision tools found in pygame?
I tried blitting the level surface first, then blitting every sprite on the screen (with their blit coordinates adjusted by the camera, except for the player character, whose coordinates are static), but in that case the collision detection system falls apart and I can't seem to fix it.
UPDATE
I have managed to make it work the following way:
When drawing the sprites, I convert their game world coordinates (which are basically coordinates relative to the origin of the level bitmap) to screen coordinates (coordinates relative to the camera, which is the currently visible area of the level).
During the collision detection phase I use coordinates and bounding boxes positioned relative to the level surface, just as above. The catch is that the camera's position is bound to the player's position, which is not (and should not have been) a static value (I am really not sure how I managed to not realize that for so long).
While this fixes my problem, the answer below is a much more comprehensive look at how to improve performance in a situation like this.
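As an illustration of the coordinate conversion described above (the camera rectangle and positions are made up):

```python
import pygame

def world_to_screen(pos, camera):
    """Convert level (world) coordinates to screen coordinates."""
    return pos[0] - camera.x, pos[1] - camera.y

# The camera follows the player instead of being static.
camera = pygame.Rect(0, 0, 800, 600)
player_world_pos = (1200, 340)
camera.center = player_world_pos

# Sprites are drawn at screen coordinates, while collision detection keeps
# using the level-relative rectangles and masks.
print(world_to_screen(player_world_pos, camera))   # player's on-screen position
```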
I am also open to suggestions to use other libraries that would make the ordeal easier, or faster. I have thought about pyglet and rabbyt, but it looks like the same problem exists there.
This is an issue that used to come up a lot in the days before graphics accelerators, when computers were slow. You basically want to minimize the work required to refresh the screen. You are on the right track, but I recommend the following:
Keep a copy of the background available offscreen, as you are doing now.
Allocate a working bitmap that is the same size as the screen.
For each sprite, compute the bounding rectangle (bounding box) for its new and old positions.
If the new and old bounding boxes overlap, combine them into one larger box. If they do not overlap, treat them separately.
Group all the bounding boxes into sets that overlap. They might all end up in one set (when the sprites are close to each other), or each bounding box might be in a set by itself (when the sprites are far apart).
Copy the background to regions of the working bitmap corresponding to each bounding box set.
Copy the sprites for each set to the working bitmap in their new positions (in the correct z-order, of course!).
Finally, copy the finished offscreen bitmap to the display surface, set bounding box by set bounding box.
This approach minimizes the amount of copying that you have to do, both of background and sprite. If the sprites are small relative to the display area, the savings should be significant. The worst case is where the sprites are all arranged on a diagonal line, just barely overlapping each other. In this case, you might want to switch to a more generalized bounding shape than a box. Take a look at QuickDraw Regions for an example (links: Wikipedia, Discussion, Patent, Source).
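A rough pygame-flavoured sketch of the steps above, assuming each sprite remembers its previous rectangle in an old_rect attribute (all names are placeholders):

```python
import pygame

def merge_overlapping(boxes):
    """Merge overlapping rects until no two remaining groups overlap."""
    groups = [b.copy() for b in boxes]
    changed = True
    while changed:
        changed = False
        merged = []
        for box in groups:
            for g in merged:
                if g.colliderect(box):
                    g.union_ip(box)              # combine into one larger box
                    changed = True
                    break
            else:
                merged.append(box)
        groups = merged
    return groups

def redraw(screen, background, work, sprites):
    """Redraw only the regions touched by sprites since the last frame."""
    boxes = [s.rect.union(s.old_rect) for s in sprites]    # old + new position
    dirty = merge_overlapping(boxes)
    for region in dirty:
        work.blit(background, region, area=region)          # restore background
    for s in sprites:                                       # new positions, z-order
        work.blit(s.image, s.rect)
    for region in dirty:
        screen.blit(work, region, area=region)              # copy dirty sets only
    pygame.display.update(dirty)
```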
Now, you may be thinking that the work to group the bounding boxes into sets is an O(n^2) operation, and you would be right. But it grows only with the square of the number of sprites: 16 sprites implies 256 comparisons. That's probably less work than a single sprite blit.
I focused on minimizing the pixel copying work. I must admit I am not familiar with the particulars of your collision detection library, but I get the idea. Hopefully that is compatible with the algorithm I have proposed.
Good luck. If you finish the game and post it online, put a link to it in your question or a comment.