I am creating an infinite-scrolling 2D space battle game in pygame. I have just implemented a parallax star background, but my framerate is now 30fps. Do you know how to get more performance? I've tried blitting instead of draw.circle, and HWSURFACE, but I can't increase the fps.
I would suggest that you do not create the star background on the fly the way it sounds like you are doing. Instead create a surface that contains it at the start and then scroll through that. Wrap it around on itself so that as you go past the edge it continues from the beginning. You will want to make sure that the two edges blend and appear seamless.
Then you can just blit the relevant part of the background onto the screen; compared with generating it on the fly every frame, the impact on performance will be negligible.
In fact, since you are generating it ahead of time, you can do something fancier than plain circles: stars with different sizes and brightnesses (colors), maybe even a glare effect.
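A minimal sketch of that approach, assuming a single horizontally scrolling star layer (star count, sizes, and speed are illustrative; a parallax effect would use several such layers blitted at different scroll speeds):

```python
import random
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
clock = pygame.time.Clock()

# Pre-render the star field once, onto its own surface.
w, h = screen.get_size()
background = pygame.Surface((w, h)).convert()
background.fill((0, 0, 0))
for _ in range(300):
    x, y = random.randrange(w), random.randrange(h)
    brightness = random.randint(80, 255)
    radius = random.choice((1, 1, 1, 2))          # mostly small stars
    pygame.draw.circle(background, (brightness,) * 3, (x, y), radius)
    # Draw a wrapped copy so stars clipped at the right edge
    # reappear on the left, keeping the seam invisible.
    pygame.draw.circle(background, (brightness,) * 3, (x - w, y), radius)

scroll = 0.0
running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    scroll = (scroll + 2) % w                      # wrap around
    offset = int(scroll)
    # Two blits cover the whole screen across the seam.
    screen.blit(background, (-offset, 0))
    screen.blit(background, (w - offset, 0))
    pygame.display.flip()
    clock.tick(60)

pygame.quit()
```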
Edited after comment from OP:
I thought from what you described that you were scrolling it, not rotating. If you are doing rotations, I assume you are using pygame.transform.rotate(). One thing I have done in the past when I needed rotated images is, again, to rotate them beforehand and save them. That works well if you are only rotating (even saving 360 pre-rotated images is not that bad memory-wise), but if you are scrolling and rotating, then that alone would not work.
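For completeness, a sketch of the pre-rotation idea (the one-degree step and the ship sprite names are assumptions):

```python
import pygame

def build_rotation_cache(image, step=1):
    """Pre-rotate an image every `step` degrees and store the results.

    Trades memory for speed: the per-frame cost becomes a dict lookup
    instead of a pygame.transform.rotate() call.
    """
    return {angle: pygame.transform.rotate(image, angle)
            for angle in range(0, 360, step)}

# Usage (hypothetical ship sprite and heading):
# frames = build_rotation_cache(ship_image)
# rotated = frames[int(heading) % 360]
```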
I am new to OpenCV. I am working on a project that involves tracking and detecting a spinning roulette ball. Here is the video I want to use: https://www.youtube.com/watch?v=IzZNaVQ3FnA&list=LL_a67IPXKsmu48W4swCQpMQ&index=7&t=0s
I want to get the time the ball takes for one revolution, but the ball is quite fast and hard to detect, and I am not sure how to overcome this.
What would be the best algorithm for doing this?
By subtracting successive images, you will isolate the ball as a (slightly curved) line segment. Both its length and its angular position are cues for the speed.
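A hedged sketch of that frame-differencing step with OpenCV (the file name, threshold value, and display loop are all assumptions; the findContours call uses the OpenCV 4 signature):

```python
import cv2

# Hypothetical local copy of the roulette video.
cap = cv2.VideoCapture("roulette.mp4")

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Subtracting successive frames leaves mostly the moving ball:
    # a short, slightly curved streak.
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # The largest remaining contour is a good candidate for the streak;
    # its length and angular position around the wheel give the speed.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        streak = max(contours, key=cv2.contourArea)
        x, y, w, h = cv2.boundingRect(streak)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("diff", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break
    prev_gray = gray

cap.release()
cv2.destroyAllWindows()
```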
Anyway, these parameters are a little tricky to extract from a side view, as the ellipse has to be "unprojected" to a top view to recover the original circle. You would need to know the relative position of the wheel and the viewer, which you most probably don't.
An approximate solution is obtained by stretching the ellipse in the direction of the small axis.
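A rough sketch of that stretch, assuming an ellipse has already been fitted to the wheel rim (e.g. with cv2.fitEllipse) so its center, axes, and angle are known; the axis-orientation assumption is flagged in the comments:

```python
import cv2

def unproject_top_view(frame, center, axes, angle_deg):
    """Approximate top view: rotate the frame so the fitted ellipse is
    axis-aligned, then stretch the short axis to match the long one.

    center, axes, and angle_deg are assumed to come from an ellipse fit
    of the wheel rim (e.g. cv2.fitEllipse).
    """
    h, w = frame.shape[:2]
    rot = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    aligned = cv2.warpAffine(frame, rot, (w, h))

    # Assumption: after alignment the minor axis is vertical; if your
    # fit reports the axes the other way around, stretch fx instead.
    major, minor = max(axes), min(axes)
    return cv2.resize(aligned, None, fx=1.0, fy=major / minor)
```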
I made a magnify feature yesterday as a test for a game I'm going to work on. It looks ok but I'd prefer to only zoom in on the area seen through the glass.
Does anybody know how to zoom in on a dynamic radius with Python/pygame? Will I have to tileset my image, or brute-force it with some kind of intensive layer blit? I'm not sure whether or not this is the right place to ask.
Here's a video demo to show what I'm talking about - https://youtu.be/_pVyb0bns3k
I don't think there are any shortcuts here. I would recommend drawing the unzoomed screen as normal, blitting in the zoomed area as a circle, then blitting the magnifying glass on top (a sketch putting it all together follows the list below). The first and last parts you obviously know how to do already, so let's look at blitting in the zoomed area.
To do this, I would:
1. Take the portion of the image to be zoomed, as a square, and copy it to a new Surface. (Remember that this will be smaller than the resulting area.)
2. Use pygame.transform.smoothscale to expand it until it is large enough to cover the entire magnifying glass area.
3. Use masking to turn it from a square into a circle; see "Pygame - is there any way to only blit or update in a mask" for details.
4. Blit the result.
Don't worry too much about performance. You're only doing this once per frame.
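Putting the four steps together, roughly (the function name, parameters, and default zoom factor are illustrative, not a fixed API):

```python
import pygame

def draw_magnified(screen, scene, center, radius, zoom=2.0):
    """Blit a circular, zoomed-in view of `scene` centred on `center`.

    `scene` is the fully drawn, unzoomed frame (a pygame.Surface).
    """
    # 1. Grab the square under the glass (smaller than the output area).
    src_size = int(2 * radius / zoom)
    src_rect = pygame.Rect(0, 0, src_size, src_size)
    src_rect.center = center
    src_rect.clamp_ip(scene.get_rect())           # stay inside the scene
    patch = pygame.Surface((src_size, src_size), pygame.SRCALPHA)
    patch.blit(scene, (0, 0), area=src_rect)

    # 2. Scale it up to cover the whole glass.
    patch = pygame.transform.smoothscale(patch, (2 * radius, 2 * radius))

    # 3. Mask the square down to a circle via a per-pixel-alpha surface:
    #    outside the circle alpha is 0, so BLEND_RGBA_MIN zeroes it out.
    zoomed = pygame.Surface((2 * radius, 2 * radius), pygame.SRCALPHA)
    pygame.draw.circle(zoomed, (255, 255, 255, 255), (radius, radius), radius)
    zoomed.blit(patch, (0, 0), special_flags=pygame.BLEND_RGBA_MIN)

    # 4. Blit the result; the magnifying glass image itself goes on top.
    screen.blit(zoomed, zoomed.get_rect(center=center))
```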
I'm drawing a map of a real world floor with dimensions roughly 100,000mm x 200,000mm.
My initial code contained a function that converted any millimeter-based position to a screen position using the window size of my pygame map, but after digging through some of the pygame functions, I realized that the pygame transformation functions are quite powerful.
Instead, I'd like to create a surface that is a 1:1 scale of the real world and then scale it right before I blit it to the screen.
Is this the right way to be doing this? I get an error that says Width or Height too large. Is this a limit of pygame?
I don't fully understand your question, but here is my attempt to answer it.
No, you should not draw everything full-size and then scale it; that is the wrong approach. You should tile very large surfaces and only draw the relevant tiles, and if you need a very large view, use a pre-scaled-down image. The amount of memory required to hold an extremely large surface is prohibitive, and scaling it will be slow.
Convert the coordinates to the tiled version using some sort of global matrix that scales everything to the size you expect. You should also filter out sprites that are not visible by testing their inclusion inside the bounding box of your view port. Keep track of the view port's position; you can then calculate where in the view port each sprite should be located based on its "world" coordinates.
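A rough sketch of the tiling and view-port filtering described above (the tile size, the `tiles` mapping, and the sprite attributes are assumptions):

```python
import pygame

TILE = 256  # tile size in pixels; illustrative

def visible_tiles(viewport, tiles):
    """Yield (surface, screen_pos) for tiles intersecting the view port.

    `viewport` is a pygame.Rect in world coordinates; `tiles` maps
    (col, row) -> Surface (a hypothetical layout).
    """
    for row in range(viewport.top // TILE, viewport.bottom // TILE + 1):
        for col in range(viewport.left // TILE, viewport.right // TILE + 1):
            if (col, row) in tiles:
                # World position minus the view port origin = screen position.
                yield tiles[(col, row)], (col * TILE - viewport.left,
                                          row * TILE - viewport.top)

def draw_world(screen, viewport, tiles, sprites):
    for surface, pos in visible_tiles(viewport, tiles):
        screen.blit(surface, pos)
    for sprite in sprites:
        # Cull sprites outside the view, then convert world -> screen.
        if viewport.colliderect(sprite.rect):
            screen.blit(sprite.image, (sprite.rect.x - viewport.left,
                                       sprite.rect.y - viewport.top))
```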
If your map is not dynamic, I would suggest drawing the map outside the game and loading it in-game.
If you plan on converting the game environment into a map, it might be difficult for a large environment: 100,000mm x 200,000mm is a very large area when converted into pixels. I would suggest scaling it down before loading.
As for scaling in-game, you can use pygame.transform.rotozoom or pygame.transform.smoothscale.
Also, as the first answer mentions, scaling can take significant memory and time for very large images, and scaling a very large image down to a very small one can make it incomprehensible.
So I've been making a game using Python, specifically the PyGame module. Everything has been going fairly well (except Python's speed, am I right :P), and I've got a nice list of accomplishments from this, but I just ran into a... speedbump. Maybe a mountain. I'm not too sure yet. The problem is:
How do I go about implementing a Camera with my current engine?
That probably means nothing to you, though, so let me explain what my current engine is doing: I have a spritesheet that I use for all images. The map is made up of a double array of Tile objects, which fills up the display (800 x 640). The map also contains references to all Entities and Particles. Now I want to create a camera, so that the map object can be larger than the display. To do this, I've figured I'll need some kind of camera that follows the player (with the player at the center of the screen). I've seen this implemented in games before, and have even read a few similar posts, but I also need to know: will I have to restructure all my game code to work this in? My first attempt was to move every object on the screen when the player moves, but I feel there is a better way, as this screws up collision detection and such.
So, if anyone knows any good references to problems like this, or a way to fix it, I'm all ears... er.. eyes.
Thanks
You may find this link to be of interest.
In essence, what you need to do is to distinguish between the "actual" coordinates, and the "display" coordinates of each object.
Do the bulk of the work using the actual coordinates of each entity in your game. If it helps, imagine that you have a gigantic screen that can show everything at once, and calculate everything as normal. It can also help to design the camera as an entity itself, so you can update the camera's position just like any other object's.
Once everything is updated, you go to the camera object, and determine what tiles, objects, particles, etc. are visible within the window, and convert their actual, world coordinates to the pixel coordinates you need to display them correctly.
If this is done correctly, you can also do things like scale and otherwise modify the image your camera is displaying without affecting gameplay.
In short, you want a very clear separation between your gameplay/physics logic and your rendering/display code, so the game can do whatever it wants, and you can render it however you want, with minimal crossover between the two.
So the good news is, you probably don't need to change anything about how your game itself works. The bad news is, you'll probably have to go in and rewrite your rendering/drawing code so that everything is drawn relative to the camera, not to the world.
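To make that concrete, here is a minimal camera sketch under those assumptions (class, method, and attribute names are mine, not a fixed API):

```python
import pygame

class Camera:
    """Keeps a target centred and converts world -> screen coordinates.

    Gameplay code keeps using world coordinates; only the drawing
    code goes through apply().
    """

    def __init__(self, screen_size, world_size):
        self.rect = pygame.Rect((0, 0), screen_size)
        self.world = pygame.Rect((0, 0), world_size)

    def follow(self, target_rect):
        self.rect.center = target_rect.center
        self.rect.clamp_ip(self.world)        # don't scroll past the map edge

    def apply(self, world_rect):
        """Translate a world-space rect into screen space for drawing."""
        return world_rect.move(-self.rect.left, -self.rect.top)

# In the render loop (hypothetical names):
# camera.follow(player.rect)
# for entity in entities:
#     if camera.rect.colliderect(entity.rect):   # skip off-screen entities
#         screen.blit(entity.image, camera.apply(entity.rect))
```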
Since I can't have a look into your code, I can't assess how useful this answer will be for you.
My approach for side scrollers, movable maps, etc. is to blit all tiles onto a pygame.Surface spanning the dimensions of the whole level/map, or at least a big chunk of it. That way I have to blit only one, already prepared, surface per frame.
For collision detection I keep the x/y values (not the entire rects) of the tiles involved in a separate list. Updating is then mainly shifting numbers around, not surfaces.
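A small sketch of that setup; `tile_map` and `solid_tiles` are hypothetical stand-ins for however the level data is actually stored:

```python
import pygame

TILE = 32                                  # tile size in pixels (illustrative)
VIEW_W, VIEW_H = 800, 640                  # display size from the question

def build_level(tile_map, cols, rows):
    """Blit every tile once, up front, onto one big level surface."""
    level = pygame.Surface((cols * TILE, rows * TILE))
    for (col, row), image in tile_map.items():
        level.blit(image, (col * TILE, row * TILE))
    return level

def draw(screen, level, cam_x, cam_y):
    # One blit per frame: the visible window into the pre-built level.
    screen.blit(level, (0, 0), area=pygame.Rect(cam_x, cam_y, VIEW_W, VIEW_H))

def hits_solid(player_rect, solid_tiles):
    # Collision works on plain (col, row) numbers, not surfaces.
    return any(player_rect.colliderect(
                   pygame.Rect(col * TILE, row * TILE, TILE, TILE))
               for col, row in solid_tiles)
```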
Feel free to ask for more details, if you deem it useful :)
I am working on a game that has destructible terrain (like in the game Worms, or Scorched Earth) and uses pixel perfect collision detection via masks.
The level is a single surface and how it works now is that I create a copy every frame, draw all sprites that need drawing on it, then blit the visible area to the display surface.
Is there any way to avoid copying the whole level surface every frame and still be able to use the pixel perfect collision tools found in pygame?
I tried blitting the level surface first, then blitting every sprite on the screen (with their blit coordinates adjusted by the camera, except for the player character, whose coordinates are static), but in that case the collision detection system falls apart and I can't seem to fix it.
UPDATE
I have managed to make it work the following way:
When drawing the sprites, I convert their game world coordinates (which are basically coordinates relative to the origin of the level bitmap) to screen coordinates (coordinates relative to the camera, which is the currently visible area of the level).
During the collision detection phase I use the coordinates and bounding boxes positioned relative to the level surface, just as above. The catch was that the camera's position is bound to the player's position, which is not, and never should have been, a static value (I am really not sure how I managed not to realize that for so long).
While this fixes my problem, the answer below is a much more comprehensive look at how to improve performance in a situation like this.
I am also open to suggestions to use other libraries that would make the ordeal easier, or faster. I have thought about pyglet and rabbyt, but it looks like the same problem exists there.
This is an issue that used to come up a lot in the days before graphics accelerators, when computers were slow. You basically want to minimize the work required to refresh the screen. You are on the right track, but I recommend the following:
1. Keep a copy of the background available offscreen, as you are doing now.
2. Allocate a working bitmap that is the same size as the screen.
3. For each sprite, compute the bounding rectangle (bounding box) for its new and old positions.
4. If the new and old bounding boxes overlap, combine them into one larger box. If they do not overlap, treat them separately.
5. Group all the bounding boxes into sets that overlap (a code sketch of this step follows the list). They might all end up in one set (when the sprites are close to each other), or each bounding box might be in a set by itself (when the sprites are far apart).
6. Copy the background to the regions of the working bitmap corresponding to each bounding box set.
7. Copy the sprites for each set to the working bitmap in their new positions (in the correct z-order, of course!).
8. Finally, copy the finished offscreen bitmap to the display surface, set bounding box by set bounding box.
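Here is a sketch of the grouping step (5) using pygame.Rect; the surrounding per-frame loop in the comments uses hypothetical names:

```python
import pygame

def group_dirty_rects(rects):
    """Merge overlapping bounding boxes into per-set bounding boxes.

    A naive O(n^2) pass, as discussed below: with a handful of
    sprites, each Rect comparison is far cheaper than a blit.
    """
    groups = []
    for rect in rects:
        rect = pygame.Rect(rect)
        merged = True
        while merged:
            merged = False
            for i, group in enumerate(groups):
                if rect.colliderect(group):
                    rect = rect.union(group)   # grow the set's box
                    del groups[i]
                    merged = True
                    break
        groups.append(rect)
    return groups

# Per frame (hypothetical names):
# dirty = group_dirty_rects(old_rects + new_rects)
# for region in dirty:
#     work.blit(background, region, area=region)   # restore background
#     # ...then blit every sprite whose rect intersects `region`...
# pygame.display.update(dirty)                     # copy only dirty areas
```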
This approach minimizes the amount of copying you have to do, of both background and sprites. If the sprites are small relative to the display area, the savings should be significant. The worst case is when the sprites are all arranged on a diagonal line, each just barely overlapping the next. In that case, you might want to switch to a more generalized bounding shape than a box; take a look at QuickDraw Regions for an example (see the Wikipedia article, the discussion, and the patent source).
Now, you may be thinking that grouping the bounding boxes into sets is an O(n^2) operation, and you would be right. But it grows only with the square of the number of sprites: 16 sprites imply 256 comparisons, which is probably less work than a single sprite blit.
I focused on minimizing the pixel-copying work. I must admit I am not familiar with the particulars of your collision detection library, but I get the idea; hopefully it is compatible with the algorithm I have proposed.
Good luck. If you finish the game and post it online, put a link to it in your question or a comment.