in " Drawing point of view for an object in PyQt6 " we made a point of view for an object.
and in " Collision detection between circle and rectangle " we found methods how to detect collision between circles and rectangles
now I want to detect collision between POV and other objects like this.
I wrote my code in python with PySide6.
How should I do that?
How should I detect collision between POV and other objects?
If you have the option of working directly with a bitmap, you can paint the obstacles, then paint the POV and check whether every POV pixel is free.
Otherwise, you can decompose the POV as the union of a disk and a sector, and detect the collisions with the disk or the sector. Detecting collisions between two circles is not a big deal (compare the distance between the centers to the sum of the radii), and you linked to the collision between a disk and a rectangle. Detecting collisions with a sector is more challenging.
You can decompose this operation as a collision with a disk and with an angle, itself defined as the intersection of two half-planes. Intersection with a half-plane is not difficult: for a disk, compare the signed (algebraic) distance from the center to the border line with the disk radius; for a rectangle, check that at least one of the corners lies in the half-plane.
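A rough sketch of those primitive tests, assuming a half-plane written as a*x + b*y + c >= 0 (the function names and conventions are mine, not from the linked posts):

    import math

    def disks_collide(c1, r1, c2, r2):
        # Two disks collide when the distance between centers is at most
        # the sum of the radii (compared squared to avoid the sqrt).
        dx, dy = c2[0] - c1[0], c2[1] - c1[1]
        return dx * dx + dy * dy <= (r1 + r2) ** 2

    def disk_meets_half_plane(center, radius, a, b, c):
        # Signed distance from the disk center to the line a*x + b*y + c = 0;
        # the disk reaches into the half-plane when it is >= -radius.
        return (a * center[0] + b * center[1] + c) / math.hypot(a, b) >= -radius

    def rect_meets_half_plane(rect, a, b, c):
        # Axis-aligned rect (x, y, w, h): at least one corner in the half-plane.
        x, y, w, h = rect
        corners = ((x, y), (x + w, y), (x, y + h), (x + w, y + h))
        return any(a * px + b * py + c >= 0 for px, py in corners)

Combining the three tests (the disk test plus both half-plane tests) then gives a good first approximation of the sector collision.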
I have an image containing a bunch of circular features. I'd like to know the exact location of the centre of a particular circle and its radius (preferably with sub-pixel accuracy).
Rather than using a circle detector, which will try to find all of the circles, is there a method in OpenCV for fitting a circle to the image? Like this:
Update:
I have tried using the Hough circle detection method, and it seems to get confused about whether the circle should be on the inside or outside edge of the black line. The circle jumps around between the inside and outside edges, or sometimes tries to do both.
All I can think of now is: if you know the approximate centre and radius, search for all the circles and use least-squares fitting with the circle equation to find the one you are looking for.
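For reference, here is a minimal sketch of that least-squares idea (the algebraic Kåsa fit), assuming edge-point coordinates xs and ys have already been extracted near the approximate circle:

    import numpy as np

    def fit_circle(xs, ys):
        # Solve x^2 + y^2 + D*x + E*y + F = 0 for D, E, F in the
        # least-squares sense, then recover the center and radius.
        xs, ys = np.asarray(xs, float), np.asarray(ys, float)
        A = np.column_stack([xs, ys, np.ones_like(xs)])
        b = -(xs**2 + ys**2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        cx, cy = -D / 2, -E / 2
        return cx, cy, np.sqrt(cx**2 + cy**2 - F)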
Basically, I'm working on a robot arm that will play checkers.
There is a camera attached above the board supplying pictures (or even video, but I guess that is just a series of images, and since checkers is not a fast-paced game I can just take a picture every few seconds and go from there).
I need to find a way to translate the visual board into, e.g., a 2D array to feed into the AI that computes the robot's moves.
I have line detection working, which draws lines along the edges of the squares (and also returns Canny edges as a prior step). Moreover, I detect green and red (the squares of my board are green and red) and return each as a mask.
I also have sphere detection in place to find the positions of the pieces, and black and white color detection that returns a mask each of the black or white areas.
My question is how I can now combine these results to get some kind of array from which I can deduce which squares my pieces are in.
How would I map the 2D array (or any 8x8 array) onto the image of the board with the lines and/or the masks of the red/green tiles? I guess I have to do some kind of calibration?
And secondly, is there a way to overlay the masks so that I then know which pieces are in which squares?
Well, first of all, remember that chess always starts with the same pieces in the same positions, e.g. a black knight starts at B8, which can be [1][7] in your 2D array. If I were you, I would start with a 2D array holding the starting positions of all the chess pieces.
As for knowing which pieces are where: you do not need to recognize the pieces themselves. What I would do is detect the empty spots on the chessboard, which is actually quite easy compared to really recognizing the different chess pieces.
Once your detection system notices that a previously empty spot is no longer empty, you know a piece was moved there. Since you can also detect the new open spot (the spot the piece came from), you know exactly which piece was moved. If you keep track of this throughout the whole game, you always know which pieces have moved and where every piece is.
Edit:
As noted in the comments, my answer was based on chess instead of checkers. The idea is still the same, however: instead of chess pieces you now put men and kings in the 2D array.
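A minimal sketch of that bookkeeping, assuming an 8x8 occupancy grid comes out of the empty-spot detector (the helper names are mine, and checkers captures, which empty two squares at once, would need extra handling):

    def infer_move(prev_occupied, curr_occupied):
        # prev/curr_occupied: 8x8 booleans, True = a piece sits on the square.
        src = dst = None
        for r in range(8):
            for c in range(8):
                if prev_occupied[r][c] and not curr_occupied[r][c]:
                    src = (r, c)   # a piece left this square
                elif curr_occupied[r][c] and not prev_occupied[r][c]:
                    dst = (r, c)   # a piece arrived here
        return src, dst

    def apply_move(board, src, dst):
        # board: 8x8 array of piece identifiers (or None); keeping it in
        # sync every turn tells you which piece is where at all times.
        board[dst[0]][dst[1]] = board[src[0]][src[1]]
        board[src[0]][src[1]] = None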
Based on either the edge detector or the red/green square detector, calculate the center coordinates of each square on the game board. For example, average the x-coordinate of the left and right edge of a square to get the x-coordinate of the square's center. Similarly, average the y-coordinate of the top and bottom edge to get the y-coordinate of the center.
It might also be possible to find the top, left, bottom and right edge of the board and then interpolate to find the centers of all the squares. The sides of each square are probably more than a hundred pixels in length, so the calculations don't need to be that accurate.
To determine where the pieces are, iterate over the list of center coordinates and look at the color of the pixel at each one. If it is red or green, the square is empty. If it is black or white, the square has a corresponding piece on it. Use this to fill the array for the AI.
If the images are noisy, it might be necessary to average several pixels near the center or to average the center pixel over several frames.
It would work best if the camera is above the center of the board. If it is off to the side, the edges wouldn't be parallel/orthogonal in the picture, which might complicate the math for finding the centers.
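Putting this answer together, a minimal sketch (the edge coordinates, patch size, and brightness thresholds are placeholder assumptions to tune against the real camera feed):

    import numpy as np

    def square_centers(left, top, right, bottom):
        # Interpolate the 8x8 grid of square centers from the outer board
        # edges (pixel coordinates), assuming a roughly top-down camera.
        xs = np.linspace(left, right, 9)
        ys = np.linspace(top, bottom, 9)
        cx = (xs[:-1] + xs[1:]) / 2   # midpoints between adjacent grid lines
        cy = (ys[:-1] + ys[1:]) / 2
        return [[(int(x), int(y)) for x in cx] for y in cy]

    def classify_square(image, center, size=5):
        # Average a small patch around the center to smooth out noise,
        # then decide empty / black piece / white piece by brightness.
        x, y = center
        patch = image[y - size:y + size, x - size:x + size]
        brightness = patch.mean()
        if brightness < 60:
            return "black"
        if brightness > 200:
            return "white"
        return "empty"   # the red/green tile itself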
I have a panoramic one shot lens from here: http://www.0-360.com/ and I wrote a script using the python image library to "unwrap" the image into a panorama. I want to automate this process though, as currently I have to specify the center of the image. Also, getting the radius of the circle would be good too. The input image looks like this:
And the "unwrapped" image looks like this:
So far I have been trying Hough circle detection. The issue I have is selecting the correct parameter values. Also, dark objects near the center circle sometimes seem to throw it off.
Other Ideas I had:
Hough line detection on the unwrapped image. Basically, choose a candidate center pixel, then unwrap and see whether the lines on the top and bottom are straight or "curvy". If they are not straight, keep trying different centers.
Moments/blob detection. Maybe I can find the center blob and take its center. The problem is that I sometimes get a bright ring in the middle of the dark disk, as seen in the image above. There is also the issue of dark objects near the center.
Paint the top bevel of the mirror a distinct color like green to make circle detection easier? If I use green and only use the green channel, would the detection be easier?
What's the best method to find the center of this image, and possibly the radii of the outer and inner rings?
Since your image has multiple circles with a common centre, you can exploit that:
Detect circles with Hough circle detection and keep only those that share a common centre.
Then check the radius ratio of the concentric circles; your image keeps that ratio constant.
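A hedged sketch of that with OpenCV (the filename, parameters, and tolerances are all placeholders to tune):

    import cv2
    import numpy as np

    img = cv2.imread("mirror_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
    blurred = cv2.medianBlur(img, 5)

    # dp, minDist, param1/param2 and the radius range all need tuning per setup.
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=10,
                               param1=100, param2=60, minRadius=20, maxRadius=0)
    if circles is not None:
        circles = np.round(circles[0]).astype(int)       # rows of (x, y, r)
        # Keep only the circles that share (roughly) a common centre.
        x0, y0 = np.median(circles[:, 0]), np.median(circles[:, 1])
        concentric = [c for c in circles if np.hypot(c[0] - x0, c[1] - y0) < 10]
        # With the mirror's fixed inner/outer radius ratio, pick the pair of
        # radii whose ratio matches that known constant.
        radii = sorted(c[2] for c in concentric)
        print("common centre:", (x0, y0), "radii:", radii)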
I wouldn't make it too fancy. The black center is at the center of the image, right? Cut a square ROI close to the image center and look for the 'black' region there. Store all the 'black' pixel locations and find their center. You may consider using the CMYK color space for detecting the black region.
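A minimal sketch of that idea (using a plain grayscale threshold rather than CMYK; the filename, ROI size, and threshold are guesses):

    import cv2
    import numpy as np

    img = cv2.imread("mirror_image.png")   # hypothetical file
    h, w = img.shape[:2]

    # Square ROI around the image center, where the black disk should be.
    half = min(h, w) // 4
    roi = img[h // 2 - half:h // 2 + half, w // 2 - half:w // 2 + half]

    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    ys, xs = np.nonzero(gray < 40)   # 'black' pixels
    if xs.size:
        # Centroid of the black pixels, mapped back to full-image coordinates.
        cx = xs.mean() + w // 2 - half
        cy = ys.mean() + h // 2 - half
        print(f"estimated center: ({cx:.1f}, {cy:.1f})")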
How do I tell Python to detect whether two objects/images touch each other? For example, when an image of pacman touches an image of a ghost?
http://www.pygame.org/docs/ref/rect.html#pygame.Rect.colliderect
colliderect()
test if two rectangles overlap
colliderect(Rect) -> bool
Returns true if any portion of either rectangle overlap (except the top+bottom or left+right edges).
If the only collision detection between sprites is between pac-man and other objects, then just call colliderect on every combination of pacman's collision rectangle and every other collision rectangle.
If every single combination of collisions can be meaningful, then produce a big list of all of them and colliderect each rectangle with each rectangle further along in the list.
For every collision that occurs, you can choose to do something; you could even call both objects, passing in the other object involved, thereby allowing the logic to be contained within one or both of the objects.
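A minimal sketch of both cases (the rect lists are assumed to be kept up to date with the sprite positions):

    import pygame

    def pacman_collisions(pacman_rect, other_rects):
        # Case 1: only pac-man vs. everything else.
        return [r for r in other_rects if pacman_rect.colliderect(r)]

    def all_collisions(rects):
        # Case 2: every pair matters; test each rectangle only against the
        # ones further along in the list so each pair is checked once.
        hits = []
        for i, a in enumerate(rects):
            for b in rects[i + 1:]:
                if a.colliderect(b):
                    hits.append((a, b))
        return hits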
I'm assuming you're using Sprites for your pacman and ghost? If so you want one of the sprite collision functions: http://www.pygame.org/docs/ref/sprite.html#pygame.sprite.spritecollide
Otherwise, use the Rect collision method Patashu links to.
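A minimal sketch, assuming pacman is a pygame.sprite.Sprite and ghosts is a pygame.sprite.Group, both with up-to-date rect attributes:

    import pygame

    # Rect-based test; dokill=False leaves the hit ghosts in the group.
    for ghost in pygame.sprite.spritecollide(pacman, ghosts, dokill=False):
        print("pac-man touched", ghost)

    # Pixel-perfect variant: give each sprite a mask
    # (sprite.mask = pygame.mask.from_surface(sprite.image))
    # and pass collide_mask as the collision test.
    hits = pygame.sprite.spritecollide(pacman, ghosts, False,
                                       collided=pygame.sprite.collide_mask)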
I am working on a game that has destructible terrain (like in the game Worms, or Scorched Earth) and uses pixel perfect collision detection via masks.
The level is a single surface and how it works now is that I create a copy every frame, draw all sprites that need drawing on it, then blit the visible area to the display surface.
Is there any way to avoid copying the whole level surface every frame and still be able to use the pixel perfect collision tools found in pygame?
I tried blitting the level surface first and then blitting every sprite on the screen (with their blit coordinates adjusted by the camera, except for the player character, whose coordinates are static), but in that case the collision detection system falls apart and I can't seem to fix it.
UPDATE
I have managed to make it work the following way:
When drawing the sprites, I convert their game world coordinates (which are basically coordinates relative to the origin of the level bitmap) to screen coordinates (coordinates relative to the camera, which is the currently visible area of the level).
During the collision detection phase I use coordinates and bounding boxes positioned relative to the level surface, just as before. The catch was that the camera's position is bound to the player's position, which is not, and never should have been, a static value (I am really not sure how I managed not to realize that for so long).
While this fixes my problem, the answer below is a much more comprehensive look on how to improve performance in a situation like this.
I am also open to suggestions to use other libraries that would make the ordeal easier, or faster. I have thought about pyglet and rabbyt, but it looks like the same problem exists there.
This is an issue that used to come up a lot in the days before graphics accelerators, when computers were slow. You basically want to minimize the work required to refresh the screen. You are on the right track, but I recommend the following:
1. Keep a copy of the background available offscreen, as you are doing now.
2. Allocate a working bitmap that is the same size as the screen.
3. For each sprite, compute the bounding rectangle (bounding box) for its new and old positions.
4. If the new and old bounding boxes overlap, combine them into one larger box. If they do not overlap, treat them separately.
5. Group all the bounding boxes into sets that overlap. They might all end up in one set (when the sprites are close to each other), or each bounding box might be in a set by itself (when the sprites are far apart).
6. Copy the background to the regions of the working bitmap corresponding to each bounding box set.
7. Copy the sprites for each set to the working bitmap in their new positions (in the correct z-order, of course!).
8. Finally, copy the finished offscreen bitmap to the display surface, set bounding box by set bounding box.
This approach minimizes the amount of copying that you have to do, both of background and sprite. If the sprites are small relative to the display area, the savings should be significant. The worst case is where the sprites are all arranged on a diagonal line, just barely overlapping each other. In this case, you might want to switch to a more generalized bounding shape than a box. Take a look at QuickDraw Regions for an example (see: Wikipedia, discussion, patent, source).
Now, you may be thinking that the work to group the bounding boxes into sets is a O(n^2) operation, and you would be right. But it grows only with the square of the number of sprites. 16 sprites implies 256 comparisons. That's probably less work than a single sprite blit.
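A rough sketch of that grouping step with pygame.Rect (repeated naive merging; fine for a few dozen sprites):

    import pygame

    def group_dirty_rects(rects):
        # Merge overlapping dirty rectangles until the remaining boxes are
        # pairwise disjoint; each survivor is one set's bounding box.
        boxes = list(rects)
        merged = True
        while merged:
            merged = False
            out = []
            for box in boxes:
                for i, other in enumerate(out):
                    if box.colliderect(other):
                        out[i] = other.union(box)   # grow the set's box
                        merged = True
                        break
                else:
                    out.append(box)
            boxes = out
        return boxes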
I focused on minimizing the pixel copying work. I must admit I am not familiar with the particulars of your collision detection library, but I get the idea. Hopefully that is compatible with the algorithm I have proposed.
Good luck. If you finish the game and post it online, put a link to it in your question or a comment.