In a "multitouch" environement, any application showed on a surface can be rotated/scaled to the direction of an user. Actual solution is to drawing the application on a FBO, and draw a rotated/scaled rectangle with the texture on it. I don't think it's good for performance, and all graphics cards don't provide FBO.
The idea is to clip the rendering viewport in the direction of the user.
Since glViewport cannot be used for that, is there another way to achieve it?
(glViewport takes (x, y, width, height); I would like (x, y, width, height, rotation about the center?).)
PS: rotating the modelview or projection matrix will not help; I would like to "rotate the clipping region" generated by glViewport (it affects only part of the whole scene).
There's no way to have a rotated viewport in OpenGL; you have to handle it manually. I see the following possible solutions:
Keep on using textures, perhaps using glCopyTexSubImage instead of FBOs, as this is a basic OpenGL feature. If your target platforms are hardware accelerated, performance should be OK, depending on the number of viewports you need on your desk; this is a very common use case nowadays.
Without textures, you could set your glViewport to the screen-aligned bounding rectangle (rA) of your rotated viewport (rB), also setting a proper scissor-testing area. Then draw a masking area, possibly only into the depth or stencil buffer, filling the (rA - rB) region, which prevents further drawing on those pixels. Then draw your application normally, using a glRotate to adjust your projection matrix so the rendering is properly oriented according to rB. (A sketch of this approach follows the list of options.)
If you already have the code set up to render your scene, try adding a glRotate() call to the modelview matrix setup to "rotate the camera" before rendering the scene.
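For illustration, here is a minimal sketch of the second option (scissor plus stencil masking), assuming a Python/PyOpenGL setup with the legacy fixed-function pipeline and a context that has a stencil buffer; the function name and the window-pixel coordinate convention are my own. It marks the rB area directly with stencil value 1 instead of masking out rA - rB, which has the same effect.

```python
from OpenGL.GL import *
from OpenGL.GLU import gluOrtho2D

def mask_rotated_viewport(bounds, corners):
    """Restrict further drawing to the rotated rectangle rB.

    bounds  -- (x, y, w, h): screen-aligned bounding box rA, in window pixels
    corners -- the four corners of rB, in window pixels
    """
    x, y, w, h = bounds
    glViewport(x, y, w, h)
    glScissor(x, y, w, h)
    glEnable(GL_SCISSOR_TEST)

    # Write 1 into the stencil buffer over rB only, touching neither color nor depth.
    glClear(GL_STENCIL_BUFFER_BIT)
    glEnable(GL_STENCIL_TEST)
    glStencilFunc(GL_ALWAYS, 1, 0xFF)
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE)
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)
    glDepthMask(GL_FALSE)

    glMatrixMode(GL_PROJECTION)
    glPushMatrix()
    glLoadIdentity()
    gluOrtho2D(x, x + w, y, y + h)        # draw the mask quad in window coordinates
    glMatrixMode(GL_MODELVIEW)
    glPushMatrix()
    glLoadIdentity()
    glBegin(GL_QUADS)
    for cx, cy in corners:
        glVertex2f(cx, cy)
    glEnd()
    glPopMatrix()
    glMatrixMode(GL_PROJECTION)
    glPopMatrix()
    glMatrixMode(GL_MODELVIEW)

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)
    glDepthMask(GL_TRUE)
    # From here on, only pixels whose stencil value is 1 (inside rB) are drawn.
    glStencilFunc(GL_EQUAL, 1, 0xFF)
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP)
```

The application then renders as usual, typically with a glRotatef applied to its own projection so the content lines up with rB; disabling GL_STENCIL_TEST and GL_SCISSOR_TEST afterwards restores normal rendering.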
Related
I am doing 2D graphics using graphics.py. I wonder if there is a way I can access the display buffer after all the 2D geometry is drawn and before it shows up on the display. I need to do a post-process on the drawn image. For example, tasks I am planning include anti-aliasing for a smoother transition on edges and applying a geometry warp map for correct projection on a curved screen. Can anyone tell me how to access the display buffer? If I have to go down to the tkinter Canvas, will it work and how?
With the Python ReportLab library's canvas, the paradigm seems to be to apply transforms before drawing your primitives.
My desire is to scale what I've already drawn to fit the page size.
The problem is that I cannot know the extents of the drawn objects until I have drawn it entirely. This is because the input for drawing objects is a stream, which does not expose the maximum extents beforehand.
Right now, I have to:
Draw my objects
Check their extents
In a new canvas, set the scale to fit the extents to the page size
Draw objects again
In some cases, the object drawing can take a few seconds. And doing this twice really feels burdensome.
Is there any way to do this faster?
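For reference, a minimal sketch of the two-pass workflow described above, using ReportLab's canvas; draw_objects and the extents it returns are illustrative stand-ins for the real stream-driven drawing code.

```python
from reportlab.lib.pagesizes import letter
from reportlab.pdfgen import canvas

PAGE_W, PAGE_H = letter

def draw_objects(c):
    """Placeholder for the stream-driven drawing; returns the extents it used."""
    c.line(0, 0, 3000, 1500)                 # example primitive
    return 3000, 1500                        # max x, max y actually drawn

# Pass 1: draw once just to learn the extents (the output is thrown away).
probe = canvas.Canvas("probe.pdf", pagesize=letter)
max_x, max_y = draw_objects(probe)

# Pass 2: redraw with a scale that fits the extents onto the page.
scale = min(PAGE_W / max_x, PAGE_H / max_y)
out = canvas.Canvas("out.pdf", pagesize=letter)
out.scale(scale, scale)                      # transform applied before drawing
draw_objects(out)
out.save()
```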
I made a magnify feature yesterday as a test for a game I'm going to work on. It looks ok but I'd prefer to only zoom in on the area seen through the glass.
Does anybody know how to zoom in on a dynamic radius with python/pygame? Will I have to tileset my image or brute-force it with some kind of intensive layer blit? I'm not sure whether or not this is something to ask here.
Here's a video demo to show what I'm talking about - https://youtu.be/_pVyb0bns3k
I don't think there are any shortcuts here. I would recommend drawing the unzoomed screen as normal, blitting in the zoomed area as a circle, then blitting the magnifying glass on top. The first and last bits you obviously know how to do already, so let's look at blitting in the zoomed area.
To do this, I would
take the portion of the image to be zoomed in as a square, and copy it to a new Surface. (Remember that this will be smaller than the resulting area.)
Use pygame.transform.smoothscale to expand it to be large enough to cover the entire magnifying glass area.
Use masking to turn it from a square into a circle. See "Pygame - is there any way to only blit or update in a mask" for details.
Blit the result.
Don't worry too much about performance. You're only doing this once per frame.
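A rough sketch of those four steps, assuming world is the full unzoomed scene surface and the glass is centered at center with an integer radius; all names and the zoom factor are illustrative, not part of pygame.

```python
import pygame

def draw_magnifier(screen, world, center, radius, zoom=2.0):
    """Blit a zoomed, circular region of `world` onto `screen` around `center`."""
    cx, cy = center

    # 1. Grab the (smaller) square region that will appear inside the glass.
    src_size = int(2 * radius / zoom)
    src_rect = pygame.Rect(0, 0, src_size, src_size)
    src_rect.center = (cx, cy)
    src_rect.clamp_ip(world.get_rect())          # keep it inside the world surface
    square = world.subsurface(src_rect).copy()

    # 2. Scale it up so it covers the whole glass area.
    zoomed = pygame.transform.smoothscale(square, (2 * radius, 2 * radius)).convert_alpha()

    # 3. Mask it from a square into a circle using per-pixel alpha.
    circle = pygame.Surface((2 * radius, 2 * radius), pygame.SRCALPHA)
    pygame.draw.circle(circle, (255, 255, 255, 255), (radius, radius), radius)
    circle.blit(zoomed, (0, 0), special_flags=pygame.BLEND_RGBA_MIN)

    # 4. Blit the circular zoomed patch; the magnifying-glass sprite goes on top.
    screen.blit(circle, (cx - radius, cy - radius))
```

Calling this once per frame, after drawing the unzoomed scene and before drawing the glass sprite, matches the order described above.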
I'm drawing a map of a real world floor with dimensions roughly 100,000mm x 200,000mm.
My initial code contained a function that converted any millimeter-based position to a screen position using the window size of my pygame map, but after digging through some of the pygame functions, I realized that the pygame transformation functions are quite powerful.
Instead, I'd like to create a surface that is a 1:1 scale of the real world and then scale it right before I blit it to the screen.
Is this the right way to be doing this? I get an error that says Width or Height too large. Is this a limit of pygame?
I don't fully understand your question, but here is an attempt to answer it.
No, you should not fully draw to the screen and then scale it; that is the wrong approach. You should tile very large surfaces and only draw the relevant tiles. If you need a very large view, use a pre-scaled (scaled-down) image. The error you are seeing most likely occurs because the amount of memory required for an extremely large surface is prohibitive (a 100,000 x 200,000 pixel surface at 4 bytes per pixel would need roughly 80 GB), and scaling it would be slow.
Convert the coordinates to the tiled version using some sort of global matrix that scales everything to the size you expect. You should also filter out sprites that are not visible by testing their inclusion inside the bounding box of your view port. Keep track of your view port's position; you will then be able to calculate where in the view port each sprite should be located based on its "world" coordinates.
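As a sketch of that idea (the scale factor, class, and method names are illustrative assumptions, not part of pygame):

```python
import pygame

MM_PER_PIXEL = 100        # e.g. 1 screen pixel == 100 mm of floor (illustrative)

class Viewport:
    """Maps world positions (in mm) to screen pixels and culls invisible sprites."""

    def __init__(self, x_mm, y_mm, width_px, height_px):
        # The slice of the world (in mm) that is currently visible on screen.
        self.rect_mm = pygame.Rect(x_mm, y_mm,
                                   width_px * MM_PER_PIXEL,
                                   height_px * MM_PER_PIXEL)

    def world_to_screen(self, x_mm, y_mm):
        return ((x_mm - self.rect_mm.x) // MM_PER_PIXEL,
                (y_mm - self.rect_mm.y) // MM_PER_PIXEL)

    def visible(self, sprite_rect_mm):
        # Skip sprites whose world-space box does not touch the view port.
        return self.rect_mm.colliderect(sprite_rect_mm)
```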
If your map is not dynamic, I would suggest drawing the map outside the game and loading it in-game.
If you plan on converting the game environment into a map, it might be difficult for a large environment. 100,000 mm x 200,000 mm is a very large area when converted into pixels, so I would suggest scaling it down before loading.
As for scaling in-game, you can use pygame.transform.rotozoom or pygame.transform.smoothscale.
Also, as the first answer mentions, scaling can take significant memory and time for very large images, and scaling a very large image down to a very small one can make the result incomprehensible.
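For example, a pre-scaling pass at load time might look like this (the file name and target sizes are placeholders):

```python
import pygame

pygame.init()
screen = pygame.display.set_mode((1280, 720))

# Load the pre-rendered map once and scale it down once, at load time.
raw_map = pygame.image.load("floor_map.png").convert()
scaled_map = pygame.transform.smoothscale(raw_map, (2000, 1000))
# rotozoom scales (and optionally rotates) in one call: angle=0, scale factor 0.02.
small_map = pygame.transform.rotozoom(raw_map, 0, 0.02)
```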
I am working on a game that has destructible terrain (like in the game Worms, or Scorched Earth) and uses pixel perfect collision detection via masks.
The level is a single surface; the way it works now is that I create a copy of it every frame, draw all sprites that need drawing onto it, then blit the visible area to the display surface.
Is there any way to avoid copying the whole level surface every frame and still be able to use the pixel perfect collision tools found in pygame?
I tried blitting the level surface first, then blitting every sprite on the screen (with their blit coordinates adjusted by the camera, except for the player character whose coordinates are static), but in that case the collision detection system falls apart and I can't seem to be able to fix it.
UPDATE
I have managed to make it work the following way:
When drawing the sprites, I convert their game world coordinates (which are basically coordinates relative to the origin of the level bitmap) to screen coordinates (coordinates relative to the camera, which is the currently visible area of the level).
During the collision detection phase I use the coordinates and bounding boxes that are positioned relative to the level surface, just as before. The thing is that the camera's position is bound to the player's position, which is not and should not have been a static value (I am really not sure how I managed not to realize that for so long).
While this fixes my problem, the answer below is a much more comprehensive look on how to improve performance in a situation like this.
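A minimal sketch of that arrangement, with illustrative names (level, sprites, and player.rect are assumed to be in world/level coordinates):

```python
import pygame

def draw_frame(screen, level, sprites, player):
    """Draw the visible slice of the level plus on-screen sprites."""
    # Camera follows the player and is clamped to the level bounds.
    camera = pygame.Rect(0, 0, screen.get_width(), screen.get_height())
    camera.center = player.rect.center
    camera.clamp_ip(level.get_rect())

    # Blit only the visible part of the level; no full-level copy each frame.
    screen.blit(level, (0, 0), area=camera)

    for sprite in sprites:
        if camera.colliderect(sprite.rect):      # cull off-screen sprites
            screen.blit(sprite.image,
                        (sprite.rect.x - camera.x, sprite.rect.y - camera.y))

    # Collision detection keeps using the world-space rects and masks directly
    # (e.g. pygame.sprite.collide_mask), unaffected by the camera offset.
```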
I am also open to suggestions to use other libraries that would make the ordeal easier, or faster. I have thought about pyglet and rabbyt, but it looks like the same problem exists there.
This is an issue that used to come up a lot in the days before graphics accelerators, when computers were slow. You basically want to minimize the work required to refresh the screen. You are on the right track, but I recommend the following:
Keep a copy of the background available offscreen, as you are doing now.
Allocate a working bitmap that is the same size as the screen.
For each sprite, compute the bounding rectangle (bounding box) for its new and old positions.
If the new and old bounding boxes overlap, combine them into one larger box. If they do not overlap, treat them separately.
Group all the bounding boxes into sets that overlap. They might all end up in one set (when the sprites are close to each other), or each bounding box might be in a set by itself (when the sprites are far apart).
Copy the background to regions of the working bitmap corresponding to each bounding box set.
Copy the sprites for each set to the working bitmap in their new positions (in the correct z-order, of course!).
Finally, copy the finished offscreen bitmap to the display surface, set bounding box by set bounding box.
This approach minimizes the amount of copying that you have to do, both of background and sprite. If the sprites are small relative to the display area, the savings should be significant. The worst case is where the sprites are all arranged on a diagonal line, just barely overlapping each other. In this case, you might want to switch to a more generalized bounding shape than a box. Take a look at QuickDraw Regions for an example: Wikipedia Discussion Patent Source.
Now, you may be thinking that the work to group the bounding boxes into sets is a O(n^2) operation, and you would be right. But it grows only with the square of the number of sprites. 16 sprites implies 256 comparisons. That's probably less work than a single sprite blit.
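As a rough illustration of the grouping step (pygame.Rect is used for the boxes; the function itself is a sketch, not from any library):

```python
import pygame

def group_dirty_rects(rects):
    """Merge overlapping dirty rectangles into one bounding box per set.

    Simple repeated O(n^2) pass, as discussed above; cheap for a handful
    of sprites.
    """
    groups = [pygame.Rect(r) for r in rects]
    changed = True
    while changed:
        changed = False
        merged = []
        while groups:
            current = groups.pop()
            i = 0
            while i < len(groups):
                if current.colliderect(groups[i]):
                    current.union_ip(groups.pop(i))   # absorb overlapping box
                    changed = True
                else:
                    i += 1
            merged.append(current)
        groups = merged
    return groups
```

Each returned rectangle is then a region to restore from the background, redraw sprites into, and pass to pygame.display.update().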
I focused on minimizing the pixel copying work. I must admit I am not familiar with the particulars of your collision detection library, but I get the idea. Hopefully that is compatible with the algorithm I have proposed.
Good luck. If you finish the game and post it online, put a link to it in your question or a comment.