I made a magnify feature yesterday as a test for a game I'm going to work on. It looks ok but I'd prefer to only zoom in on the area seen through the glass.
Does anybody know how to zoom in on a dynamic radius with Python/Pygame? Will I have to tileset my image or brute-force it with some kind of intensive layer blit? I'm not sure whether or not this is something to ask here.
Here's a video demo to show what I'm talking about - https://youtu.be/_pVyb0bns3k
I don't think there are any shortcuts here. I would recommend drawing the unzoomed screen as normal, blitting in the zoomed area as a circle, then blitting the magnifying glass on top. You obviously already know how to do the first and last parts, so let's look at blitting in the zoomed area.
To do this, I would:
Take the portion of the image to be zoomed as a square and copy it to a new Surface. (Remember that this square will be smaller than the resulting zoomed area.)
Use pygame.transform.smoothscale to expand it so it is large enough to cover the entire magnifying-glass area.
Use masking to turn it from a square into a circle. See "Pygame - is there any way to only blit or update in a mask" for details.
Blit the result.
Don't worry too much about performance. You're only doing this once per frame.
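Here is a minimal sketch of those steps, assuming the unzoomed scene lives on a surface called world and the glass is a circle of the given radius centred on center. The function name, the zoom factor, and masking via BLEND_RGBA_MULT are my own choices for illustration, not anything from your code:

import pygame

def draw_magnified(screen, world, center, radius, zoom=2.0):
    # Blit a zoomed, circular view of `world` centred on `center`.
    d = int(2 * radius)                          # diameter of the glass area

    # 1. Copy the square of the scene that will appear under the glass.
    #    It is smaller than the glass by the zoom factor.
    src_rect = pygame.Rect(0, 0, int(d / zoom), int(d / zoom))
    src_rect.center = center
    src_rect.clamp_ip(world.get_rect())          # keep it inside the scene
    patch = world.subsurface(src_rect).copy()

    # 2. Expand it so it covers the whole glass area.
    patch = pygame.transform.smoothscale(patch, (d, d))

    # 3. Mask it into a circle: multiply by a surface that is opaque white
    #    inside the circle and fully transparent outside it.
    masked = pygame.Surface((d, d), pygame.SRCALPHA)
    masked.blit(patch, (0, 0))
    mask = pygame.Surface((d, d), pygame.SRCALPHA)
    pygame.draw.circle(mask, (255, 255, 255, 255), (d // 2, d // 2), d // 2)
    masked.blit(mask, (0, 0), special_flags=pygame.BLEND_RGBA_MULT)

    # 4. Blit the circular zoomed patch; draw the glass sprite over it next.
    screen.blit(masked, (center[0] - d // 2, center[1] - d // 2))

If smoothscale complains about the bit depth, call convert() on the scene surface once when you create it; smoothscale only works on 24- and 32-bit surfaces.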
I am creating an infinite-scrolling 2D space battle game in pygame. I have just implemented a parallax star background, but my framerate is now 30 fps. Do you know how to get more performance? I've tried blitting instead of draw.circle, and the HWSURFACE flag, but I can't increase the fps.
I would suggest that you do not create the star background on the fly the way it sounds like you are doing. Instead create a surface that contains it at the start and then scroll through that. Wrap it around on itself so that as you go past the edge it continues from the beginning. You will want to make sure that the two edges blend and appear seamless.
Then you can just blit the relevant part of that background onto the screen each frame, and you will find it has far less impact on your performance than generating it on the fly every time.
In fact, since you are doing it ahead of time, you can do something fancier than just circles: stars with different sizes and brightnesses (colors), maybe even a glare effect.
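A rough sketch of the pre-rendered, wrapping background. The star count, sizes and function names are made up for illustration, and the background surface is assumed to be at least as large as the screen:

import random
import pygame

def make_star_background(size, n_stars=300):
    # Render the starfield once, up front.
    bg = pygame.Surface(size)
    w, h = size
    for _ in range(n_stars):
        pos = (random.randrange(w), random.randrange(h))
        brightness = random.randint(80, 255)
        star_radius = random.choice((1, 1, 1, 2))     # mostly small stars
        pygame.draw.circle(bg, (brightness,) * 3, pos, star_radius)
    return bg

def draw_background(screen, bg, scroll_x, scroll_y):
    # Wrap the scroll offset and blit up to four copies so the background
    # tiles seamlessly as it scrolls past its own edges.
    w, h = bg.get_size()
    ox, oy = scroll_x % w, scroll_y % h
    for dx in (0, w):
        for dy in (0, h):
            screen.blit(bg, (dx - ox, dy - oy))

For perfectly seamless edges you would also draw any star that straddles an edge again at its wrapped position, but either way each frame is now a handful of blits instead of hundreds of draw.circle calls.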
Edited after comment from OP:
I thought from what you described that you were scrolling it, not rotating it. If you are doing rotations, I assume you are using pygame.transform.rotate(). One thing I have done in the past when I need rotated images is, again, to rotate them beforehand and save them. That can work well if you are just rotating (even keeping 360 pre-rotated images is not that bad memory-wise), but if you are scrolling and rotating then that would not work.
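A tiny sketch of what I mean by rotating beforehand; the one-degree step is an arbitrary choice:

import pygame

def build_rotation_cache(image, step=1):
    # Rotate once at load time; look up by angle at draw time.
    return {angle: pygame.transform.rotate(image, angle)
            for angle in range(0, 360, step)}

# per frame, instead of rotating live:
# rotated = cache[int(round(angle)) % 360]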
I have designed my whole pygame game to work at a 1920x1080 resolution.
However, I have to adapt it for smaller resolutions.
There are a lot of hardcoded values in the code.
Is there a simple way to change the resolution, like resizing the final image at the end of each loop, just before drawing it?
You can use pygame.transform.scale, or pygame.transform.smoothscale for better quality (though it is less efficient).
To do that, just change the reference surface where you draw from the screen to a generic off-screen surface. Then resize that surface and blit it onto the screen.
I can show you some code if you don't understand how it works. Just ask.
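For example, a minimal sketch of that idea: all drawing keeps its hardcoded 1920x1080 coordinates but targets an off-screen canvas, which is scaled onto the real window once per frame. The window size, colours and names are arbitrary choices:

import pygame

pygame.init()
BASE_SIZE = (1920, 1080)                        # resolution the game was designed for
window = pygame.display.set_mode((1280, 720))   # the actual, smaller window
canvas = pygame.Surface(BASE_SIZE)              # draw here instead of on `window`
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    canvas.fill((30, 30, 30))
    # ... all existing drawing code, unchanged, with its hardcoded coordinates ...
    pygame.draw.circle(canvas, (200, 50, 50), (960, 540), 100)

    # one resize per frame, straight into the display surface
    pygame.transform.smoothscale(canvas, window.get_size(), window)
    pygame.display.flip()
    clock.tick(60)

pygame.quit()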
I usually create a base resolution and then, whenever the screen is resized, I scale all the assets and surfaces by the width and height ratios.
This works well if your assets are high resolution and you scale them down; small images would pixelate when scaled up.
You can also create multiple asset files for each resolution, and whenever your resolution goes above one of the available asset resolutions you switch to that image. You can think of it in the context of a CSS media query to better understand the idea.
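A sketch of the ratio approach, assuming you keep the original full-size images around and rescale from those every time the window changes; the names and base resolution are illustrative:

import pygame

BASE_W, BASE_H = 1920, 1080        # design resolution

def rescale_assets(originals, window_size):
    # Always scale from the untouched originals, never from an already
    # scaled copy, or quality degrades after repeated resizes.
    win_w, win_h = window_size
    rx, ry = win_w / BASE_W, win_h / BASE_H
    scaled = {}
    for name, img in originals.items():
        size = (max(1, int(img.get_width() * rx)),
                max(1, int(img.get_height() * ry)))
        scaled[name] = pygame.transform.smoothscale(img, size)
    return scaled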
I'm drawing a map of a real world floor with dimensions roughly 100,000mm x 200,000mm.
My initial code contained a function that converted any millimeter based position to screen positioning using the window size of my pygame map, but after digging through some of the pygame functions, I realized that the pygame transformation functions are quite powerful.
Instead, I'd like to create a surface that is a 1:1 scale of the real world and then scale it right before I blit it to the screen.
Is this the right way to be doing this? I get an error that says "Width or Height too large". Is this a limit of pygame?
I don't fully understand your question, but here is an attempt to answer it.
No, you should not draw everything at full size and then scale it down; that is the wrong approach. You should tile very large surfaces and only draw the relevant tiles. If you need a very large (zoomed-out) view, use a pre-scaled, smaller image. The "Width or Height too large" error is most likely because the amount of memory required for such an enormous surface is prohibitive, and scaling it would be slow in any case.
Convert the coordinates to the tiled version using some sort of global transform that scales everything to the size you expect. You should also filter out sprites that are not visible by testing whether they fall inside the bounding box of your view port. Keep track of your view port's position; you will then be able to calculate where in the view port each sprite should be located based on its "world" coordinates.
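A sketch of the tiling and culling idea. The tile size, the (col, row) dictionary layout and all the names here are assumptions for illustration, not anything from your code:

import pygame

TILE = 1024   # tile edge in pixels

def draw_world(screen, viewport, tiles, sprites):
    # `viewport` is a pygame.Rect in world pixel coordinates;
    # `tiles` maps (col, row) -> Surface, pre-built from the big map.
    first_col, last_col = viewport.left // TILE, viewport.right // TILE
    first_row, last_row = viewport.top // TILE, viewport.bottom // TILE

    # Only blit tiles that intersect the viewport.
    for col in range(first_col, last_col + 1):
        for row in range(first_row, last_row + 1):
            tile = tiles.get((col, row))
            if tile is not None:
                screen.blit(tile, (col * TILE - viewport.left,
                                   row * TILE - viewport.top))

    # Cull sprites that are entirely outside the viewport.
    for sprite in sprites:
        if viewport.colliderect(sprite.rect):
            screen.blit(sprite.image, (sprite.rect.x - viewport.left,
                                       sprite.rect.y - viewport.top))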
If your map is not dynamic, I would suggest drawing the map outside the game and loading it in-game.
If you plan on converting the game environment into a map, it might be difficult for a large environment. 100,000mm x 200,000mm is a very large area when converted into pixels, so I would suggest scaling it down before loading.
As for scaling in-game, you can use pygame.transform.rotozoom or pygame.transform.smoothscale.
Also, as the first answer mentions, scaling can take significant memory and time for very large images, and scaling a very large image down to a very small one can make it incomprehensible.
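For example, the scaling could be done once when the map is loaded rather than every frame. The filename, the scale factor and the explicit target size below are made up for illustration:

import pygame

map_image = pygame.image.load("floor_map_small.png").convert()   # pre-scaled outside the game
zoom = 0.5
zoomed = pygame.transform.rotozoom(map_image, 0, zoom)           # angle 0, just scaling
# or, for an explicit target size:
# zoomed = pygame.transform.smoothscale(map_image, (800, 400))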
I have got some surfaces in Pygame with a transparent background. They're all the same size. But there's a different sized circle drawn on each of them, so the circle doesn't exactly fit the image.
Here are some example images (I took a screenshot in Photoshop so you can clearly see the transparency and the size of the images):
Now I want to remove the transparent border around the image so the circle exactly fits the image. I don't want the surface to be circle-shaped (I don't think that's possible), but I want the surface to have no blank columns on the left and right and no blank rows at the top and bottom. The wanted results:
The circle on the surfaces changes size every frame so I have to recalculate the new surfaces every frame.
I already Googled it, but I haven't found anything for Pygame surfaces yet. I also tried writing my own function, but it's ugly and, much worse, the framerate drops from 50 fps (if I don't call the function) to 30 fps (if I do). I tested it a bit and found that smaller circles take longer to process than bigger circles. How can I do this, but faster? If you want, I can show the function I made.
The surface object has a method called get_bounding_rect which is where we will start. The function returns the smallest rect possible which contains all of the non-transparent pixels on the surface.
pixel_rect = image.get_bounding_rect()
With the size of this rect, we can create a new surface. Pass the SRCALPHA flag so the new surface keeps per-pixel transparency and the transparent pixels around the circle stay transparent instead of turning black:
trimmed_surface = pygame.Surface(pixel_rect.size, pygame.SRCALPHA)
Now blit the portion of image contained within pixel_rect onto trimmed_surface:
trimmed_surface.blit(image, (0,0), pixel_rect)
At this point, trimmed_surface should be a surface the same size as pixel_rect, with the unwanted transparent rows and columns "trimmed" off of the original surface.
Documentation for Surface.get_bounding_rect: http://www.pygame.org/docs/ref/surface.html#Surface.get_bounding_rect
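Putting the three snippets together as one small helper (trim_transparent is just an illustrative name):

import pygame

def trim_transparent(image):
    # Return a copy of `image` cropped to its non-transparent pixels.
    pixel_rect = image.get_bounding_rect()
    trimmed = pygame.Surface(pixel_rect.size, pygame.SRCALPHA)
    trimmed.blit(image, (0, 0), pixel_rect)
    return trimmed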
I am working on a game that has destructible terrain (like in the game Worms, or Scorched Earth) and uses pixel perfect collision detection via masks.
The level is a single surface and how it works now is that I create a copy every frame, draw all sprites that need drawing on it, then blit the visible area to the display surface.
Is there any way to avoid copying the whole level surface every frame and still be able to use the pixel perfect collision tools found in pygame?
I tried blitting the level surface first, then blitting every sprite on the screen (with their blit coordinates adjusted by the camera, except for the player character whose coordinates are static), but in that case the collision detection system falls apart and I can't seem to be able to fix it.
UPDATE
I have managed to make it work the following way:
When drawing the sprites, I convert their game world coordinates (which are basically coordinates relative to the origin of the level bitmap) to screen coordinates (coordinates relative to the camera, which is the currently visible area of the level).
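In code, that conversion is just an offset by the camera's top-left corner (a tiny sketch with made-up names):

# world -> screen: subtract the camera rect's top-left corner
screen_x = sprite.rect.x - camera.left
screen_y = sprite.rect.y - camera.top
screen.blit(sprite.image, (screen_x, screen_y))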
During the collision detection phase I use coordinates and bounding boxes positioned relative to the level surface, just as above. The catch was that the camera's position is bound to the player's position, which is not, and should never have been, a static value (I am really not sure how I failed to realize that for so long).
While this fixes my problem, the answer below is a much more comprehensive look on how to improve performance in a situation like this.
I am also open to suggestions to use other libraries that would make the ordeal easier, or faster. I have thought about pyglet and rabbyt, but it looks like the same problem exists there.
This is an issue that used to come up a lot in the days before graphics accelerators, when computers were slow. You basically want to minimize the work required to refresh the screen. You are on the right track, but I recommend the following:
Keep a copy of the background available offscreen, as you are doing now.
Allocate a working bitmap that is the same size as the screen.
For each sprite, compute the bounding rectangle (bounding box) for its new and old positions.
If the new and old bounding boxes overlap, combine them into one larger box. If they do not overlap, treat them separately.
Group all the bounding boxes into sets that overlap. They might all end up in one set (when the sprites are close to each other), or each bounding box might be in a set by itself (when the sprites are far apart).
Copy the background to the regions of the working bitmap corresponding to each bounding box set.
Copy the sprites for each set to the working bitmap in their new positions (in the correct z-order, of course!).
Finally, copy the finished offscreen bitmap to the display surface, set bounding box by set bounding box.
This approach minimizes the amount of copying that you have to do, both of background and sprites. If the sprites are small relative to the display area, the savings should be significant. The worst case is when the sprites are all arranged along a diagonal line, just barely overlapping each other; in that case you might want to switch to a more generalized bounding shape than a box. Take a look at QuickDraw Regions for an example (Wikipedia, Discussion, Patent, Source).
Now, you may be thinking that the work to group the bounding boxes into sets is a O(n^2) operation, and you would be right. But it grows only with the square of the number of sprites. 16 sprites implies 256 comparisons. That's probably less work than a single sprite blit.
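Here is a simplified sketch of the same idea in pygame, drawing straight to the display surface rather than through a separate working bitmap and ignoring the camera offset for clarity. Every sprite is assumed to carry an image, a rect (new position) and an old_rect (where it was drawn last frame), and the greedy merge below stands in for the full grouping step:

import pygame

def redraw_dirty(screen, background, sprites):
    # Union each sprite's old and new boxes when they overlap,
    # otherwise keep them as two separate dirty rects.
    dirty = []
    for s in sprites:
        if s.rect.colliderect(s.old_rect):
            dirty.append(s.rect.union(s.old_rect))
        else:
            dirty.append(s.rect.copy())
            dirty.append(s.old_rect.copy())

    # Greedily merge dirty rects that overlap each other
    # (the O(n^2) grouping step discussed above).
    merged = []
    for box in dirty:
        for other in merged:
            if box.colliderect(other):
                other.union_ip(box)
                break
        else:
            merged.append(box)

    # Restore the background only under the dirty regions,
    # then redraw the sprites (in list order) at their new positions.
    for box in merged:
        screen.blit(background, box, box)
    for s in sprites:
        screen.blit(s.image, s.rect)
        s.old_rect = s.rect.copy()

    # Push only the dirty regions to the display.
    pygame.display.update(merged)

pygame.sprite.RenderUpdates does essentially this bookkeeping for you (its draw() returns the changed rects and clear() restores the background), if you would rather not hand-roll it.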
I focused on minimizing the pixel-copying work. I must admit I am not familiar with the particulars of your collision detection library, but I get the idea. Hopefully that is compatible with the algorithm I have proposed.
Good luck. If you finish the game and post it online, put a link to it in your question or a comment.