wxPython zooming technique

I am developing a wxPython project where I draw a diagram (a directed acyclic graph in my case) onto a panel, and I need to be able to zoom in and out of this diagram. I will trigger the zoom with the mouse scroll wheel while the cursor is over the panel, but that is not part of my question. I would like advice from someone experienced about the method I am using for zooming. So far I have thought of the following:
There are lines, rectangles and texts inside rectangles within this diagram. So maybe I could increase/decrease their lengths and sizes on the chosen mouse event. But it is hard to keep everything balanced: the rectangles are connected by lines whose angles should not change, and the text inside the rectangles should stay centred.
The other method I thought of is to look for a built-in zoom mechanism; I have heard of something like Scale. However, I have some questions about this. Will it work on vector drawings (like mine) rather than images? And will it scale only the panel I choose, not the whole screen? Once I hear your advice I will look deeper into this, but right now I am a bit clueless.
Sorry if my question is too theoretical, but I felt I needed help in this area. Thanks in advance.
Note: the zoom does not necessarily have to be applied by scrolling.
Note 2: My research also led me to FloatCanvas. Is it suitable for my needs?

Yes, from your description FloatCanvas would certainly meet your needs.
Another possibility to consider would be the wx.GraphicsContext and related classes. It is vector-based (instead of raster) and supports the use of a transformation matrix which would make zooming, rotating, etc. very easy. However, the actual drawing and management of the shapes and such would probably require more work for you than using FloatCanvas.
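By way of illustration, here is a minimal sketch of the wx.GraphicsContext route, assuming the diagram is redrawn in an EVT_PAINT handler and the zoom factor is changed on mouse wheel events (DiagramPanel, the zoom attribute and the drawing calls are just placeholders, not code from your project):

    import wx

    class DiagramPanel(wx.Panel):
        """Hypothetical panel that redraws a diagram at the current zoom factor."""

        def __init__(self, parent):
            super().__init__(parent)
            self.zoom = 1.0
            self.Bind(wx.EVT_PAINT, self.on_paint)
            self.Bind(wx.EVT_MOUSEWHEEL, self.on_wheel)

        def on_wheel(self, event):
            # Zoom in/out by 10% per wheel notch, then trigger a repaint.
            self.zoom *= 1.1 if event.GetWheelRotation() > 0 else 1 / 1.1
            self.Refresh()

        def on_paint(self, event):
            dc = wx.PaintDC(self)
            gc = wx.GraphicsContext.Create(dc)
            # One transformation scales everything drawn afterwards, so lines,
            # rectangles and text all stay in proportion to each other.
            gc.Scale(self.zoom, self.zoom)
            gc.SetPen(wx.Pen(wx.BLACK, 1))
            gc.SetFont(self.GetFont(), wx.BLACK)
            gc.DrawRectangle(20, 20, 120, 60)   # stand-in for a graph node
            gc.DrawText("node A", 45, 40)       # label stays inside the node
            gc.StrokeLine(140, 50, 220, 50)     # stand-in for an edge

    if __name__ == "__main__":
        app = wx.App()
        frame = wx.Frame(None, title="Zoom demo")
        DiagramPanel(frame)
        frame.Show()
        app.MainLoop()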

Related

Best way to isolate a Rubik's Cube from background

I have been working on a Python program using OpenCV that will help the user solve the Rubik's Cube. The most important and complicated part is identifying the cube so the value of each of its sides can be read.
I have had a decent amount of luck so far but am wanting to change the processing pipeline a little. I think it would make sense to isolate the cube from its background before trying to detect the (rounded) square stickers and read their colors.
Attached is an example of the sort of frame we would be dealing with. I'm not sure what the best method for isolating the cube from the background would be. I have tried background selection, which seemed somewhat promising (though the mask was very grainy & blotchy). But I am also wondering if it would make more sense to use something like object detection.
I have considered just making a "dumb" crop which makes the user align the cube within a reticle. However, I would prefer a more elegant solution and don't mind spending the additional time that entails.
Edit: maybe the mild bokeh could be used to identify the background, or is it too minuscule to detect consistently?
Thanks for any help!
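For illustration, here is a minimal sketch of cleaning up a background-subtraction mask with a morphological open/close pass (if that is what is meant by "background selection"); the MOG2 subtractor, camera index and kernel size are assumptions, not taken from the original program:

    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)  # hypothetical camera source
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
    kernel = np.ones((5, 5), np.uint8)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Opening removes speckle noise, closing fills small holes,
        # which helps with a "grainy & blotchy" mask.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        isolated = cv2.bitwise_and(frame, frame, mask=mask)
        cv2.imshow("isolated cube", isolated)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()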

Pygame image collision

I have a pygame program where there's a face in the center. What I want the program to do is have a bunch of objects on the screen, all irregular. Some would be circles, others would be cut-out pictures of objects like surf boards, chairs, bananas, etc. The user would be able to drag the objects around, and they'd collide with each other and the face in the center, and so be unable to pass through them. Could anyone show me how I would do this? Thanks!
-EDIT- And by "not be able to pass through", I mean they'd slide along the edge of the object while trying to follow the mouse.
What you are looking for is functionality usually provided by a so-called physics engine. For very basic shapes, it is simple enough to code the basic functionality yourself. (The simplest case for 2D shapes is the collision detection between circles).
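For reference, a minimal sketch of that circle-circle case: two circles overlap when the distance between their centres is less than the sum of their radii.

    import math

    def circles_collide(center_a, radius_a, center_b, radius_b):
        # Compare the centre distance against the sum of the radii.
        dx = center_b[0] - center_a[0]
        dy = center_b[1] - center_a[1]
        return math.hypot(dx, dy) < radius_a + radius_b

    # e.g. a dragged 20px-radius object at (105, 100) overlaps
    # a 50px-radius face centred at (100, 100):
    print(circles_collide((100, 100), 50, (105, 100), 20))  # True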
Collision detection gets pretty hard pretty quickly, especially if you want to do it at a reasonably fast rate (such as you would need for the sort of project you are describing) and also especially if you are dealing with arbitrary, non-regular shapes (which your description seems to indicate). So, unless you are interested in learning how to code an optimized collision detection system, I suggest you google for python physics engines. I have never used any, so I can't personally recommend one.
Good luck!

2D Game Engine - Implementing a Camera

So I've been making a game using Python, specifically the PyGame module. Everything has been going fairly well (except Python's speed, am I right :P), and I've got a nice list of accomplishments from this, but I just ran into a... speedbump. Maybe a mountain. I'm not too sure yet. The problem is:
How do I go about implementing a Camera with my current engine?
That probably means nothing to you, though, so let me explain what my current engine is doing: I have a spritesheet that I use for all images. The map is made up of a double array of Tile objects, which fills up the display (800 x 640). The map also contains references to all Entities and Particles. Now I want to create a camera, so that the map object can be larger than the display. To do this I've figured that I'll need some kind of camera that follows the player (with the player at the center of the screen). I've seen this implemented in games before, and have even read a few similar posts, but I also need to know: will I have to restructure all my game code to work this in? My first attempt was to make every object move on the screen when the player moves, but I feel there is a better way to do this, as that approach screws up collision detection and such.
So, if anyone knows any good references to problems like this, or a way to fix it, I'm all ears... er.. eyes.
Thanks
You may find this link to be of interest.
In essence, what you need to do is to distinguish between the "actual" coordinates, and the "display" coordinates of each object.
What you would do is do the bulk of the work using the actual coordinates of each entity in your game. If it helps, imagine that you have a gigantic screen that can show everything at once, and calculate everything as normal. It might help if you also designed the camera to be an entity, so that you can update the position of your camera just like any other object.
Once everything is updated, you go to the camera object, and determine what tiles, objects, particles, etc. are visible within the window, and convert their actual, world coordinates to the pixel coordinates you need to display them correctly.
If this is done correctly, you can also do things like scale and otherwise modify the image your camera is displaying without affecting gameplay.
In essence, you want to have a very clear distinction between gameplay and physics logic/code, and your rendering/display code, so your game can do whatever it wants, and you can render it however you want, with minimal crossover between the two.
So the good news is, you probably don't need to change anything about how your game itself works. The bad news is, you'll probably have to go in and rewrite your rendering/drawing code so that everything is drawn relative to the camera, not to the world.
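To make that concrete, here is a minimal sketch of the world-to-screen conversion described above, with the camera treated as its own object that follows the player (Camera, follow and world_to_screen are illustrative names, not part of any particular engine):

    import pygame

    DISPLAY_W, DISPLAY_H = 800, 640

    class Camera:
        def __init__(self):
            # World coordinates of the top-left corner of the visible window.
            self.x, self.y = 0, 0

        def follow(self, target_rect, world_w, world_h):
            # Centre the view on the player, clamped to the edges of the world.
            self.x = max(0, min(target_rect.centerx - DISPLAY_W // 2, world_w - DISPLAY_W))
            self.y = max(0, min(target_rect.centery - DISPLAY_H // 2, world_h - DISPLAY_H))

        def world_to_screen(self, rect):
            # Gameplay and collisions keep using world-coordinate rects;
            # only the drawing code applies this offset.
            return rect.move(-self.x, -self.y)

    # In the render loop:
    #   camera.follow(player.rect, map_pixel_w, map_pixel_h)
    #   screen.blit(player.image, camera.world_to_screen(player.rect))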
Since I can't have a look into your code, I can't assess how useful this answer will be for you.
My approach for side scrollers, movable maps, etc. is to blit all tiles onto a pygame.Surface spanning the dimensions of the whole level/map (or at least a big chunk of it). This way I only have to blit one already-prepared surface per frame.
For collision detection I keep the x/y values (not the entire rect) of the tiles involved in a separate list. Updating is then mainly shifting numbers around and not surfaces anymore.
Feel free to ask for more details, if you deem it useful :)
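A minimal sketch of that pre-rendered approach, assuming square tiles, a level list of tile-id rows and a tiles dict mapping ids to images (all of these names are placeholders):

    import pygame

    TILE = 32

    def build_level_surface(level, tiles):
        # Blit every tile once, up front, onto one big surface.
        width = len(level[0]) * TILE
        height = len(level) * TILE
        surface = pygame.Surface((width, height))
        for row_idx, row in enumerate(level):
            for col_idx, tile_id in enumerate(row):
                surface.blit(tiles[tile_id], (col_idx * TILE, row_idx * TILE))
        return surface

    # Per frame, blit only the visible part of the prepared surface:
    #   screen.blit(level_surface, (0, 0), area=pygame.Rect(cam_x, cam_y, 800, 640))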

Problems with layering sprites in pygame?

I have been building a simple game using the pygame library for python. Here is a link to the repository.
https://github.com/stmfunk/alienExplorer
The issue I am having is with predictably overlaying sprites on top of each other. The clouds in this code seem to be placed randomly above and below the alien. Although this behaviour is actually desirable in this example, I'd like to know why it behaves randomly and how I can make it behave the way I want in the future. I plan on adding objects later which I want to remain in the background.
Thanks for the help!
Also, I'm not sure if it is best practice to insert code directly or to link to a repository, so I'd appreciate it if somebody gave me advice on that.
I'm not sure about the technicalities, but what I have noticed from my own experience is that sprites in the same group are drawn in no guaranteed order. I assume that in the code above all the clouds are in one group, the alien is in another, and a third group contains all of them (and if not, I would suggest that as an elegant way of structuring it, especially to solve your problem). The solution to your problem would be, instead of drawing the one group with everything in it, to draw the individual groups in order of layers: draw the ones at the bottom first and then work your way up.
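For example, a minimal sketch of drawing separate groups back-to-front (the group names are illustrative and not taken from the linked repository):

    import pygame

    cloud_group = pygame.sprite.Group()   # background objects
    alien_group = pygame.sprite.Group()   # the player sprite

    def draw_scene(screen):
        # Whatever is drawn last ends up on top, so draw back-to-front.
        cloud_group.draw(screen)
        alien_group.draw(screen)
        pygame.display.flip()

Pygame also provides pygame.sprite.LayeredUpdates, which keeps everything in a single group but draws sprites in order of their layer attribute, achieving the same effect.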

What is the most performant way to implement zoom to a cairo-drawn canvas?

I have a wx.ScrolledWindow that is drawn on using cairo. I have implemented zoom functionality which currently redraws the whole content.
But as there will be up to 200 curves to draw, I should consider a more performant solution.
I have thought of these:
Buffering images for the zoom factors -1/+1 (Memory consuming)
Using librsvg and buffer an SVG image (I have read something about this. Does librsvg work under Windows too?)
Storing the cairo.Context after drawing groups of curves, and on zoom restoring it (just an idea.. is that possible?)
Are there other possibilities, and: what is the best solution?
Thanks a lot
Not really a concrete answer to your question, but I was faced with the same problem and just switched to matplotlib, where zoom and pan functions are already implemented. I am not sure, though, whether it is especially performant; I have the feeling my program ran more smoothly before.
I also tried out floatcanvas and floatcanvas2 but was not really happy with both of them.
If you're double-buffering anyway, why not do a quick bitmap scale as a "preview" while waiting for the newly redrawn vector image? I confess I don't know how to do this. But if you can make it work, it should work! :)
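If anyone wants to try it, here is a rough sketch of that preview idea, assuming the last full cairo render is kept in a wx.Bitmap buffer and a hypothetical redraw_full() method does the expensive vector redraw (self._buffer, self._zoom and redraw_full are all made-up names):

    import wx

    class CurvePanel(wx.ScrolledWindow):
        # Assumed elsewhere: __init__ sets self._buffer (wx.Bitmap of the last
        # full cairo render) and self._zoom; redraw_full() re-renders with cairo.

        def on_zoom(self, new_factor):
            # Cheap placeholder first: scale the existing raster buffer...
            scale = new_factor / self._zoom
            img = self._buffer.ConvertToImage()
            img = img.Scale(max(1, int(img.GetWidth() * scale)),
                            max(1, int(img.GetHeight() * scale)))
            self._buffer = wx.Bitmap(img)
            self._zoom = new_factor
            self.Refresh()  # the paint handler blits self._buffer as a preview
            # ...then schedule the accurate vector redraw shortly afterwards.
            wx.CallLater(150, self.redraw_full)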
