How to make something rotate based on where another object is - python

I am making a scrolling-background type of game, kind of like Mario. I have a character that walks on the ground and can go left or right, and I want to introduce a flying eyeball suspended in the sky that follows the character's movements. I have an eyeball PNG (the Eye of Cthulhu from Terraria). All I want it to do is rotate based on where the character is, so it seems to stare at them (the next step is to have it shoot lasers at the character). How would I do this? Thanks in advance.

You could use the PyGame functions pygame.transform.rotate() or pygame.transform.rotozoom() to pre-create rotated versions of your eyeball.
It sounds like the eyeball will pass over the top of the player when the player changes walking direction (since it's always following). So simply comparing the difference in horizontal position between the player and the eyeball should be enough to determine which of the pre-rotated eyeball images to choose.
If the eyeball's X coordinate is a long way from the player's, only a slight angle is needed. As the player gets closer to being directly under the eyeball, the angle should change until the eye is looking straight down. This corresponds to the difference between the eyeball X and the player X being small (tending towards zero).
Maybe even a simple mapping table between X-difference and sprite-image would be enough.
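If you would rather compute the angle every frame instead of pre-rotating, here is a minimal sketch using math.atan2 and pygame.transform.rotate(); the names (eyeball_img, the rects) are placeholders, and it assumes the source image looks to the right:

    import math
    import pygame

    def eyeball_facing(eyeball_img, eyeball_rect, player_rect):
        # Vector from the eyeball to the player, in screen coordinates.
        dx = player_rect.centerx - eyeball_rect.centerx
        dy = player_rect.centery - eyeball_rect.centery
        # pygame rotates counter-clockwise in degrees, and the screen's
        # y-axis points down, hence the minus sign on dy.
        angle = math.degrees(math.atan2(-dy, dx))
        rotated = pygame.transform.rotate(eyeball_img, angle)
        # Re-centre on the old centre so the sprite doesn't wobble as the
        # rotated image's bounding box changes size.
        new_rect = rotated.get_rect(center=eyeball_rect.center)
        return rotated, new_rect

The same angle could also be bucketed into, say, 10-degree steps to index a table of pre-rotated images, which is essentially the mapping-table idea above.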

Related

How can I fit circles into a shape using python?

So for a project, I have to make a website that fills a shape with circles that won't intersect at any point. The user uploads a shape and also chooses the radius of the circles, and the code places as many circles (with the chosen radius) as it can into the shape.
For example, if the user uploads a 16cm x 16cm square and chooses 4cm as the radius, the system places as many circles with a radius of 4cm as possible into the square, and the circles won't intersect at any point.
I tried many things using Python and failed every time. The shape can be anything, completely arbitrary, and no matter what it is, the site has to work out where to place the circles with the selected radius, place them, and show the final result. I don't know if there is a way to do this without Python, but I am open to any suggestion or solution.
You could try the package circle-packing. It looks like you can get the behavior you want by setting the arguments rho_max and rho_min of the class ShapeFill to the radius provided by the user. I've not used it, so I cannot attest to its correctness or usability. Please let us know if it works for you.
Note: the license is GPLv2, so keep the implications in mind. And don't forget to attribute.
I believe filling the shape with the true maximum number of circles would be far from easy; if you just want to fill it and don't care about the optimal solution, then it's fairly simple.
Start at the top-left corner and try to place a circle. If it collides with another circle or with the shape's boundary, shift it to the right by an arbitrarily small amount and try again. Once you reach the right-hand side, move down and back to the left and repeat the process.
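A rough sketch of that greedy scan, assuming the shape is available as a function inside(x, y) that says whether a point lies inside it (that helper, and the names below, are made up for illustration):

    import math

    def fits(x, y, r, inside, placed):
        # Approximate containment test: the centre plus a ring of sample
        # points on the circle's boundary must all lie inside the shape.
        if not inside(x, y):
            return False
        for k in range(16):
            a = 2 * math.pi * k / 16
            if not inside(x + r * math.cos(a), y + r * math.sin(a)):
                return False
        # No overlap: centres of equal circles must be at least 2r apart.
        return all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * r) ** 2
                   for cx, cy in placed)

    def greedy_pack(width, height, r, inside, step=2.0):
        # Scan left-to-right, top-to-bottom, keeping every circle that fits.
        placed = []
        y = r
        while y <= height - r:
            x = r
            while x <= width - r:
                if fits(x, y, r, inside, placed):
                    placed.append((x, y))
                x += step
            y += step
        return placed

A smaller step gives a denser (but slower) packing; it is still nowhere near the true optimum, which is the point made above.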

Creating a vector in order to measure distance from a sprite to another vector

I'm currently just starting to create a game with Pyglet featuring a race car that will drive around a track, to which I intend to later add TensorFlow.
Is there any way I can draw a vector from a certain position on the car (i.e. the back, front, or right-hand side)? I would then ideally be able to measure the distance in that direction to the side of the track, which I will mark with a vector along the edge; I'm not sure if this is possible or not.
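One common way to build such a "feeler" is to cast a ray from a point on the car in a chosen direction and intersect it with the segments that make up the track edge. A pure-Python sketch of the maths (the function and parameter names are illustrative, not part of any pyglet API):

    import math

    def cross(ax, ay, bx, by):
        return ax * by - ay * bx

    def ray_to_segment(px, py, heading, ax, ay, bx, by):
        # Distance from (px, py) along `heading` (radians) to the wall
        # segment (ax, ay)-(bx, by), or None if the ray never hits it.
        dx, dy = math.cos(heading), math.sin(heading)   # unit ray direction
        sx, sy = bx - ax, by - ay                        # segment direction
        denom = cross(dx, dy, sx, sy)
        if abs(denom) < 1e-9:                            # parallel: no hit
            return None
        t = cross(ax - px, ay - py, sx, sy) / denom      # along the ray
        u = cross(ax - px, ay - py, dx, dy) / denom      # along the segment
        if t >= 0 and 0 <= u <= 1:
            return t    # direction is unit length, so t is the distance
        return None

Taking the minimum over all track-edge segments gives the distance reading for that feeler.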

Best OpenCV algorithm for detecting fast moving ball?

I am new to OpenCV. I am working on a project that involves tracking and detecting a spinning roulette ball. Here is the video I want to use: https://www.youtube.com/watch?v=IzZNaVQ3FnA&list=LL_a67IPXKsmu48W4swCQpMQ&index=7&t=0s
I want to get the time the ball takes for one revolution. But the ball is quite fast and hard to detect. I am not sure how to overcome this.
What would be the best algorithm for doing this?
By subtracting successive images, you will isolate the ball as a (slightly curved) line segment. Both its length and its angular position are cues for the speed.
Anyway, these parameters are a little tricky to extract for a side view, as the ellipse has to be "unprojected" to a top view, to see the original circle. You need to know the relative position of the wheel and the viewer, which you most probably don't know.
An approximate solution is obtained by stretching the ellipse in the direction of the small axis.
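A minimal sketch of that frame-differencing step with OpenCV (assuming OpenCV 4; the video path is a placeholder):

    import cv2

    cap = cv2.VideoCapture("roulette.mp4")   # placeholder path
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Subtracting successive frames leaves mostly the moving ball.
        diff = cv2.absdiff(gray, prev_gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            # The largest blob is the best candidate for the ball streak.
            c = max(contours, key=cv2.contourArea)
            (x, y), radius = cv2.minEnclosingCircle(c)
            # Tracking (x, y)'s angle around the wheel centre over frames
            # gives the angular speed, and from that the revolution time.
        prev_gray = gray

The unprojection to a top view mentioned above would then be applied to the (x, y) positions before computing the angle.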

Why does my animation jiggle slightly in pygame?

So I am in the process of coding a simple pong game but right now the ball sometimes has a small weird jiggle. It doesn't mess up gameplay but the jiggle is certainly visible. I monitored the speed of the ball and it seems to have a constant integer speed. So why does the ball jitter slightly sometimes even though the speed of the ball remains the same?
A code sample would be ideal but some of the problems you may be facing include:
Pixel jitter, because the ball needs to move at a speed that does not sync up with the frame rate of the game (so in one frame the ball moves 3 pixels but in another it moves 2).
Integer rounding. The position of the ball may be rounding to unexpected locations, so go over your code and check the calculations of the ball's movement.
It may be helpful to look at this: https://stackoverflow.com/questions/14538991/smoother-motion-using-pygame.
However, this is just speculation, because I don't have your source code.
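If it does turn out to be rounding, the usual fix is to keep the ball's position and velocity as floats and only round when drawing; a rough sketch (the variable names are just examples):

    # Keep position and velocity as floats; only round at draw time.
    ball_x, ball_y = 400.0, 300.0
    vel_x, vel_y = 3.5, 2.25

    def update(dt):
        global ball_x, ball_y
        # Scale by dt so the speed is independent of the frame rate.
        ball_x += vel_x * dt * 60
        ball_y += vel_y * dt * 60

    def draw(screen, ball_image):
        # Round once, here, instead of letting a Rect truncate every frame.
        screen.blit(ball_image, (round(ball_x), round(ball_y)))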
Hope this helps :)

2D Game Engine - Implementing a Camera

So I've been making a game using Python, specifically the PyGame module. Everything has been going fairly well (except Python's speed, am I right :P), and I've got a nice list of accomplishments from this, but I just ran into a... speedbump. Maybe a mountain. I'm not too sure yet. The problem is:
How do I go about implementing a Camera with my current engine?
That probably means nothing to you, though, so let me explain what my current engine is doing: I have a spritesheet that I use for all images. The map is made up of a 2D array of Tile objects, which fills the display (800 x 640). The map also contains references to all Entities and Particles. Now I want to create a camera, so that the map object can be larger than the display. To do this I've figured that I'll need some kind of camera that follows the player (with the player at the center of the screen). I've seen this implemented before in games, and even read a few other similar posts, but I also need to know: will I have to restructure all my game code to work this in? My first attempt was to make every object move on the screen when the player moves, but I feel there is a better way to do this, as it messes up collision detection and such.
So, if anyone knows any good references to problems like this, or a way to fix it, I'm all ears... er.. eyes.
Thanks
You may find this link to be of interest.
In essence, what you need to do is to distinguish between the "actual" coordinates, and the "display" coordinates of each object.
What you would do is do the bulk of the work using the actual coordinates of each entity in your game. If it helps, imagine that you have a gigantic screen that can show everything at once, and calculate everything as normal. It might help if you also designed the camera to be an entity, so that you can update the position of your camera just like any other object.
Once everything is updated, you go to the camera object, and determine what tiles, objects, particles, etc. are visible within the window, and convert their actual, world coordinates to the pixel coordinates you need to display them correctly.
If this is done correctly, you can also do things like scale and otherwise modify the image your camera is displaying without affecting gameplay.
In essence, you want to have a very clear distinction between gameplay and physics logic/code, and your rendering/display code, so your game can do whatever it wants, and you can render it however you want, with minimal crossover between the two.
So the good news is, you probably don't need to change anything about how your game itself works. The bad news is, you'll probably have to go in and rewrite your rendering/drawing code so that everything is drawn relative to the camera, not to the world.
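A bare-bones sketch of that world-to-screen conversion, assuming pygame rects (the class and attribute names here are just illustrative):

    class Camera:
        def __init__(self, width, height):
            self.x, self.y = 0, 0
            self.width, self.height = width, height

        def follow(self, target_rect):
            # Centre the view on the player, in world coordinates.
            self.x = target_rect.centerx - self.width // 2
            self.y = target_rect.centery - self.height // 2

        def to_screen(self, world_rect):
            # Convert a world-space rect to the position used for blitting.
            return world_rect.move(-self.x, -self.y)

    # Physics and collisions keep using world coordinates; only the blit
    # call goes through the camera:
    # screen.blit(entity.image, camera.to_screen(entity.rect))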
Since I can't have a look into your code, I can't assess how useful this answer will be for you.
My approach for side-scrollers, moveable maps, etc. is to blit all tiles onto a pygame.Surface spanning the dimensions of the whole level/map, or at least a big chunk of it. This way I have to blit only one surface per frame, and it is already prepared.
For collision detection I keep the x/y values (not the entire rect) of the tiles involved in a separate list. Updating is then mainly shifting numbers around and not surfaces anymore.
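Roughly, that looks like this (assuming a list of tile objects and a camera rect; the names are illustrative):

    import pygame

    def build_level_surface(tiles, tile_size, level_w, level_h):
        # Blit every tile once, up front, onto one big surface.
        level = pygame.Surface((level_w * tile_size, level_h * tile_size))
        for tile in tiles:
            level.blit(tile.image, (tile.x * tile_size, tile.y * tile_size))
        return level

    # Per frame, blit only the part of the big surface the camera can see;
    # the third argument to blit is the source area within `level`.
    # screen.blit(level, (0, 0), camera_rect)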
Feel free to ask for more details, if you deem it useful :)
