I'm making a heads-up display, similar to what you'd find in a fighter jet (http://i1.ytimg.com/vi/RYjBjT79hLo/hqdefault.jpg).
Some of the elements just stay in one place on the screen, like the airspeed and altitude. That's easy.
The part I'm having trouble with is the pitch ladder (the bars that show 5, 10, 15, etc. degrees above/below the horizon) and the horizon line.
The way I'm trying to implement it is to have a PNG file that already contains all the bars from -90 to +90. This file is much larger in resolution than my display window, so only the bars that correspond to the current pitch, +/-10 degrees, are displayed on the screen. As the user pitches up, the image moves down, and vice versa. As the drift angle increases (the angle between the heading and the velocity vector, i.e. sideslip), the HUD moves left or right.
The problem is rotation. I'd like to be able to specify the location of the center point of the image and the rotation angle, not just the top-left corner, and have pygame render it appropriately. I'm not quite clear on exactly how rects work, though, which is part of the problem.
My math background tells me the easiest way to do this would be to get an array of each pixel value on the screen, then multiply by a transformation matrix based on the pitch and roll, but I don't think pygame actually lets you access that low-level information.
[EDIT] On second thought, a pixel array may not be easiest. A transformation matrix edits the values of each element in the array, but a pixel array contains RGB information in each element. A transformation matrix would just edit the color of each pixel, not actually transform them. hmmmm.... [/EDIT]
It seems like pygame really really wants everything to be boxed up in rectangles that are parallel to the screen borders, which is kind of a problem for a rolling aircraft. I've thought about using OpenGL instead, but that seems overkill. Can anyone help me do this in pygame? Thanks!
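For what it's worth, pygame can do this kind of centered rotation without touching raw pixel data: pygame.transform.rotate returns a new (larger) surface, and re-centering that surface's rect on the desired point keeps the rotation visually anchored there. A minimal sketch, assuming a display surface called screen and a pitch-ladder image called ladder (both names are placeholders):

import pygame

def blit_rotated(screen, image, center, angle):
    # rotate() returns a new, larger surface; re-center its rect so the
    # rotation stays anchored at `center` instead of the top-left corner
    rotated = pygame.transform.rotate(image, angle)
    rect = rotated.get_rect(center=center)
    screen.blit(rotated, rect)

# e.g. roll the ladder around a point shifted vertically by the current pitch
# blit_rotated(screen, ladder, (cx, cy + pitch_offset), -roll)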
I have coordinates for 2 corners https://prnt.sc/w2jryh (x and y coordinates for the d and b points of the square). I need to create a screenshot within the area of this square, but when I try to do that it fails: I either get too much in the screenshot or too little. What may be the magic formula for that :) This is what I tried:
pyautogui.screenshot("testScr.png",region=(blackRookCornerX,whiteRookCornerY,whiteRookCornerX,blackRookCornerY))
Basically I am taking the coordinates and trying to get the right screenshot. The coordinates themselves are correct.
From their docs
There is also an optional region keyword argument, if you do not want a screenshot of the entire screen. You can pass a four-integer tuple of the left, top, width, and height of the region to capture:
The first two numbers should be the x, y coordinates of the top-left corner of where you want to take the shot; the third number is the width (how far to the right the region extends, in pixels) and the fourth is the height (how far down it extends, in pixels).
Try this:
pyautogui.screenshot("testScr.png", region=(blackRookCornerX, whiteRookCornerY, 100, 100))
Start with a broad number like 100 and then slowly whittle away until you have the perfect screenshot.
You could make a hotkey for each corner to collect the coordinates: simply put your mouse in each corner and press its hotkey. Once you have done that for both corners and have the two sets of coordinates, use them for your screenshot.
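If the two points are opposite corners of the square, you can also compute the width and height directly from them instead of guessing. A small sketch using the variable names from the question (min/abs make it work regardless of which corner is which):

import pyautogui

# the two detected corners (opposite corners of the square region)
x1, y1 = blackRookCornerX, blackRookCornerY
x2, y2 = whiteRookCornerX, whiteRookCornerY

left = min(x1, x2)
top = min(y1, y2)
width = abs(x2 - x1)
height = abs(y2 - y1)

pyautogui.screenshot("testScr.png", region=(left, top, width, height))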
Basically, I'm working on a robot arm that will play checkers.
There is a camera attached above the board supplying pictures (or even video material, but I guess that is just a series of images, and since checkers is not a fast-paced game I can just take a picture every few seconds and go from there).
I need to find a way to translate the visual board into, e.g., a 2D array to feed into the AI that computes the robot's moves.
I have line detection working which draws lines along the edges of the squares (and also returns Canny edges as a prior step). Furthermore, I detect green and red (the squares of my board are green and red) and return each as a mask.
I also have sphere detection in place to find the positions of the pieces, and black and white color detection that returns a mask of the detected black or white areas.
My question is how I can combine the things I have and end up with some kind of array from which I can deduce which squares my pieces are in.
How would I connect the 2D array (or any 8x8 array) to the image of the board with the lines and/or the masks of the red/green tiles? I guess I have to do some type of calibration?
And secondly, is there a way to overlay the masks so that I know which pieces are in which squares?
Well, first of all, remember that chess always starts with the same pieces in the same positions, e.g. a black knight starts at 8-B, which could be [1][7] in your 2D array. If I were you, I would start with a 2D array holding the starting positions of all the chess pieces.
As to knowing which pieces are where: you do not need to recognize the pieces themselves. What I would do if I were you is detect the empty spots on the chessboard which is actually quite easy in comparison to really recognizing the different chess pieces.
Once your detection system detects that one of the previously empty spots is no longer empty, you know that a piece was moved there. Since you can also detect the newly opened spot (the spot the piece came from), you know exactly which piece was moved. If you keep this list up to date during the whole game, you always know which pieces have moved and where every piece is.
Edit:
As noted in the comments, my answer was based on chess instead of checkers. The idea is still the same, however; instead of chess pieces you can put men and kings in the 2D array.
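A rough sketch of that bookkeeping for checkers, assuming an is_empty(row, col) helper built on top of the detection masks (the helper, the piece labels, and the starting layout are illustrative; captures and kings are not handled here):

# 8x8 board: None for an empty square, otherwise a piece label.
board = [[None] * 8 for _ in range(8)]
for row in range(8):
    for col in range(8):
        if (row + col) % 2 == 1:          # pieces sit on the dark squares
            if row < 3:
                board[row][col] = "black"
            elif row > 4:
                board[row][col] = "white"

def update_board(board, is_empty):
    # Find the square that just became empty and the one that just became
    # occupied, and move the piece accordingly.
    vacated = occupied = None
    for row in range(8):
        for col in range(8):
            empty_now = is_empty(row, col)
            if board[row][col] is not None and empty_now:
                vacated = (row, col)
            elif board[row][col] is None and not empty_now:
                occupied = (row, col)
    if vacated and occupied:
        board[occupied[0]][occupied[1]] = board[vacated[0]][vacated[1]]
        board[vacated[0]][vacated[1]] = None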
Based on either the edge detector or the red/green square detector, calculate the center coordinates of each square on the game board. For example, average the x-coordinate of the left and right edge of a square to get the x-coordinate of the square's center. Similarly, average the y-coordinate of the top and bottom edge to get the y-coordinate of the center.
It might also be possible to find the top, left, bottom and right edge of the board and then interpolate to find the centers of all the squares. The sides of each square are probably more than a hundred pixels in length, so the calculations don't need to be that accurate.
To determine where the pieces are, iterate over the list of center coordinates and look at the color of the pixel at each one. If it is red or green, the square is empty. If it is black or white, the square has the corresponding piece in it. Use this information to fill an array for the AI.
If the images are noisy, it might be necessary to average several pixels near the center or to average the center pixel over several frames.
It would work best if the camera is above the center of the board. If it is off to the side, the edges wouldn't be parallel/orthogonal in the picture, which might complicate the math for finding the centers.
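A sketch of that sampling step, assuming the frame is an OpenCV-style BGR NumPy array and centers is the 8x8 grid of computed center coordinates (the names and thresholds are illustrative and would need tuning for the actual camera):

import numpy as np

def board_from_image(image, centers, radius=5):
    # 0 = empty (red/green square visible), 1 = white piece, 2 = black piece
    board = np.zeros((8, 8), dtype=int)
    for row in range(8):
        for col in range(8):
            cx, cy = centers[row][col]
            # average a small patch around the center to smooth out noise
            patch = image[cy - radius:cy + radius, cx - radius:cx + radius]
            b, g, r = patch.reshape(-1, 3).mean(axis=0)
            brightness = (b + g + r) / 3
            if brightness > 200:       # bright in every channel -> white piece
                board[row][col] = 1
            elif brightness < 60:      # dark in every channel -> black piece
                board[row][col] = 2
            # otherwise the red/green square is showing, so leave it as 0
    return board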
Okay, so a friend and I are attempting to create a 3D engine of sorts, and we've recently made our way past all the preliminary bugs. At last, we managed to get a block of dirt rendered (I'd post an image but I don't yet have enough reputation). Now, however, we are running into some more problems, and this time we have no idea what's wrong.
See, when you move to either side of the block, holes begin randomly developing in the faces. Example, example.
I suppose I ought to explain how our rendering works - we're using Pygame (without any real 3D library; that's kinda the point), and we couldn't figure out how we should transform images when moving about in 3D space. So we decided to split all the pixels into separate Tile classes and render them as polygons with color - much easier, since we just have to calculate the vertices instead of mapping an image. The way we make sure holes like these don't happen is by sorting the tiles according to distance from the camera and rendering in that order.
At some point we'll implement optimizations like backface culling, but until then we have this big problem: our distance function doesn't seem to be working properly!
Here's what we have to calculate distance:
def pointdist(self, point):
    # squared distance from the camera to `point`; the square root is
    # deliberately left out, since only the relative ordering matters
    return (self.x - point[0]) * (self.x - point[0]) + (self.y - point[1]) * (self.y - point[1]) + (self.z - point[2]) * (self.z - point[2])
We have a list master_buffer that holds all the tiles, and puts the ones to be rendered in draw_buffer - right now, that's all of them, because we don't have any filtering or optimizations implemented yet.
for tile in self.master_buffer:
    self.draw_buffer.append(tile)
self.draw_buffer.sort(key=(lambda tile: c.pointdist(tile.center)))
After that, we just go through draw_buffer, calculate, and render. That's all the filtering we're doing, so we can't fathom why it'd form such holes.
If you need additional examples or parts of the code, feel free to ask. Advice on our method in general is also welcome.
RESPONSES:
In response to calculating the sum twice: I'm doing this for performance - I've read that multiplication is significantly faster than exponents in Python when the exponent is small, so that is what I am doing.
In response to my question not being clear enough, I am trying to figure out why there are holes in my cube. That is made quite clear, in my opinion.
In response to not having square roots, that's because I don't need the distance, I need to compare distances. If a > b, then sqrt(a) > sqrt(b) always, so leaving out sqrt will improve performance.
c, my camera, is not inside the cube when this is happening. The drawing code performs the calculations, and immediately renders - nothing can happen in between.
EDITS:
I think it may be helpful for everyone to know that the only angle at which there are no problems is when viewing the top and front faces - any other faces viewed, and things will start falling apart.
Also, if it helps, the axes are: +x to the right, +y going backwards, and +z downward.
Seeing as no one has been able to find a problem, we will attempt to take the approach of calculating distance for each face rather than individual tiles. It will mean we are locked into using cubes, but that's the best route for now.
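For reference, a minimal sketch of that per-face ordering (essentially the painter's algorithm), assuming each tile knows which face it belongs to via a face_id attribute - that attribute and face_center are illustrative, not the actual class layout:

def face_center(tiles):
    # average of the tiles' centers; tile.center is the (x, y, z) tuple used above
    n = len(tiles)
    return (sum(t.center[0] for t in tiles) / n,
            sum(t.center[1] for t in tiles) / n,
            sum(t.center[2] for t in tiles) / n)

# group the tiles by the cube face they belong to
faces = {}
for tile in self.master_buffer:
    faces.setdefault(tile.face_id, []).append(tile)

# queue the farthest face first and the nearest face last (back to front),
# so nearer faces always paint over farther ones
for face_id in sorted(faces, key=lambda f: c.pointdist(face_center(faces[f])), reverse=True):
    for tile in faces[face_id]:
        self.draw_buffer.append(tile)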
I'm drawing a map of a real world floor with dimensions roughly 100,000mm x 200,000mm.
My initial code contained a function that converted any millimeter-based position to a screen position using the window size of my pygame map, but after digging through some of the pygame functions, I realized that the pygame transformation functions are quite powerful.
Instead, I'd like to create a surface that is a 1:1 scale of the real world and then scale it right before I blit it to the screen.
Is this the right way to be doing this? I get an error that says Width or Height too large. Is this a limit of pygame?
I don't fully understand your question, but here is an attempt to answer it.
No, you should not draw everything at full size and then scale it; this is the wrong approach. You should tile very large surfaces and only draw the relevant tiles. If you need a very wide view, use a pre-scaled, smaller image. The full-size approach probably fails because the amount of memory required for an extremely large surface is prohibitive, and scaling it would be slow.
Convert the coordinates to the tiled version using some sort of global transform that scales everything to the size you expect. You should also filter out sprites that are not visible by testing whether they fall inside the bounding box of your view port. Keep track of your view port position; you will then be able to calculate where in the view port each sprite should be located based on its "world" coordinates.
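Something along these lines, where viewport is a pygame.Rect tracking the visible area in world (millimeter) coordinates and scale converts millimeters to pixels - all of the names here are placeholders:

import pygame

# visible area in world coordinates, positioned by the camera
viewport = pygame.Rect(cam_x, cam_y, 20000, 10000)
scale = screen.get_width() / viewport.width          # mm -> pixels

def world_to_screen(x_mm, y_mm):
    # position relative to the view port, then scaled to pixels
    return ((x_mm - viewport.x) * scale, (y_mm - viewport.y) * scale)

for sprite in sprites:
    # skip anything outside the view port, place the rest by its world position
    if viewport.colliderect(sprite.world_rect):
        screen.blit(sprite.image, world_to_screen(sprite.world_rect.x, sprite.world_rect.y))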
If your map is not dynamic, I would suggest drawing the map outside the game and loading it in the game.
If you plan on converting the game environment into a map, it might be difficult for a large environment. 100,000 mm x 200,000 mm is a very large area when converted into pixels, so I would suggest scaling it down before loading.
As for scaling in-game, you can use pygame.transform.rotozoom or pygame.transform.smoothscale.
Also, as the first answer mentions, scaling can take significant memory and time for very large images, and scaling a very large image down to a very small one can make it incomprehensible.
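For example, scaling a pre-rendered (but reasonably sized) map surface down to the window in one call - the surface names here are placeholders:

import pygame

# map_surface: the pre-drawn map; window: the display surface
scaled = pygame.transform.smoothscale(map_surface, window.get_size())
window.blit(scaled, (0, 0))

# rotozoom takes an angle and a scale factor instead of a target size
zoomed = pygame.transform.rotozoom(map_surface, 0, 0.05)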
I am working on a game that has destructible terrain (like in the game Worms, or Scorched Earth) and uses pixel perfect collision detection via masks.
The level is a single surface and how it works now is that I create a copy every frame, draw all sprites that need drawing on it, then blit the visible area to the display surface.
Is there any way to avoid copying the whole level surface every frame and still be able to use the pixel perfect collision tools found in pygame?
I tried blitting the level surface first, then blitting every sprite on the screen (with their blit coordinates adjusted by the camera, except for the player character whose coordinates are static), but in that case the collision detection system falls apart and I can't seem to be able to fix it.
UPDATE
I have managed to make it work the following way:
When drawing the sprites, I convert their game world coordinates (which are basically coordinates relative to the origin of the level bitmap) to screen coordinates (coordinates relative to the camera, which is the currently visible area of the level).
During the collision detection phase I use coordinates and bounding boxes that are positioned relative to the level surface, just as above. The catch is that the camera's position is bound to the player's position, which is not (and should never have been treated as) a static value - I am really not sure how I managed to miss that for so long.
While this fixes my problem, the answer below is a much more comprehensive look on how to improve performance in a situation like this.
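In code, the distinction comes down to something like this, with camera_x/camera_y being the top-left of the visible area in level coordinates (the names are illustrative):

# drawing: shift level coordinates by the camera offset before blitting
screen.blit(sprite.image, (sprite.rect.x - camera_x, sprite.rect.y - camera_y))

# collision: keep working in untranslated level coordinates and masks
offset = (other.rect.x - sprite.rect.x, other.rect.y - sprite.rect.y)
hit = sprite.mask.overlap(other.mask, offset)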
I am also open to suggestions to use other libraries that would make the ordeal easier, or faster. I have thought about pyglet and rabbyt, but it looks like the same problem exists there.
This is an issue that used to come up a lot in the days before graphics accelerators, when computers were slow. You basically want to minimize the work required to refresh the screen. You are on the right track, but I recommend the following:
1. Keep a copy of the background available offscreen, as you are doing now.
2. Allocate a working bitmap that is the same size as the screen.
3. For each sprite, compute the bounding rectangle (bounding box) for its new and old positions.
4. If the new and old bounding boxes overlap, combine them into one larger box. If they do not overlap, treat them separately.
5. Group all the bounding boxes into sets that overlap. They might all end up in one set (when the sprites are close to each other), or each bounding box might be in a set by itself (when the sprites are far apart).
6. Copy the background to the regions of the working bitmap corresponding to each bounding box set.
7. Copy the sprites for each set to the working bitmap in their new positions (in the correct z-order, of course!).
8. Finally, copy the finished offscreen bitmap to the display surface, set bounding box by set bounding box.
This approach minimizes the amount of copying that you have to do, both of background and sprite. If the sprites are small relative to the display area, the savings should be significant. The worst case is where the sprites are all arranged on a diagonal line, just barely overlapping each other. In this case, you might want to switch to a more generalized bounding shape than a box. Take a look at QuickDraw Regions for an example: Wikipedia Discussion Patent Source.
Now, you may be thinking that the work to group the bounding boxes into sets is a O(n^2) operation, and you would be right. But it grows only with the square of the number of sprites. 16 sprites implies 256 comparisons. That's probably less work than a single sprite blit.
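A sketch of the grouping and copying steps with pygame.Rect (the old_rect attribute and a single merge pass are simplifications of the scheme above):

import pygame

# one dirty box per sprite: union of old and new position if they overlap,
# otherwise two separate boxes
dirty = []
for sprite in sprites:
    old, new = sprite.old_rect, sprite.rect
    if old.colliderect(new):
        dirty.append(old.union(new))
    else:
        dirty.extend([old.copy(), new.copy()])

# merge overlapping boxes into larger ones (O(n^2); repeat until stable
# if full correctness is needed)
merged = []
for rect in dirty:
    for i, existing in enumerate(merged):
        if rect.colliderect(existing):
            merged[i] = existing.union(rect)
            break
    else:
        merged.append(rect)

# restore the background inside the merged regions, redraw the sprites,
# then push only those regions to the display
for region in merged:
    working.blit(background, region, area=region)
for sprite in sprites:
    working.blit(sprite.image, sprite.rect)
for region in merged:
    screen.blit(working, region, area=region)
pygame.display.update(merged)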
I focused on minimizing the pixel-copying work. I must admit I am not familiar with the particulars of your collision detection library, but I get the idea, and hopefully it is compatible with the algorithm I have proposed.
Good luck. If you finish the game and post it online, put a link to it in your question or a comment.