So I've been making a game using Python, specifically the PyGame module. Everything has been going fairly well (except Python's speed, am I right :P), and I've got a nice list of accomplishments from this, but I just ran into a... speedbump. Maybe a mountain. I'm not too sure yet. The problem is:
How do I go about implementing a Camera with my current engine?
That probably means nothing to you, though, so let me explain what my current engine is doing: I have a spritesheet that I use for all images. The map is made up of a double array of Tile objects, which fills up the display (800 x 640). The map also contains references to all Entities and Particles. Now I want to create a camera, so that the map object can be larger than the display. To do this I've figured I'll need some kind of camera that follows the player (with the player at the center of the screen). I've seen this implemented in games before, and even read a few other similar posts, but I also need to know: will I have to restructure all my game code to work this in? My first attempt was to make every object move on the screen when the player moves, but I feel there's a better way to do this, since it screws up collision detection and such.
So, if anyone knows any good references to problems like this, or a way to fix it, I'm all ears... er.. eyes.
Thanks
You may find this link to be of interest.
In essence, what you need to do is to distinguish between the "actual" coordinates, and the "display" coordinates of each object.
Do the bulk of the work using the actual coordinates of each entity in your game. If it helps, imagine that you have a gigantic screen that can show everything at once, and calculate everything as normal. It might also help to design the camera as an entity, so that you can update its position just like any other object's.
Once everything is updated, you go to the camera object, and determine what tiles, objects, particles, etc. are visible within the window, and convert their actual, world coordinates to the pixel coordinates you need to display them correctly.
If this is done correctly, you can also do things like scale and otherwise modify the image your camera is displaying without affecting gameplay.
In essence, you want to have a very clear distinction between gameplay and physics logic/code, and your rendering/display code, so your game can do whatever it wants, and you can render it however you want, with minimal crossover between the two.
So the good news is, you probably don't need to change anything about how your game itself works. The bad news is, you'll probably have to go in and rewrite your rendering/drawing code so that everything is drawn relative to the camera, not to the world.
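To make the world-vs-screen split concrete, here is a minimal sketch of a camera that clamps to the world edges and converts world coordinates to screen coordinates only at draw time. All class and attribute names here are my own invention, not from the asker's engine:

```python
# Minimal camera sketch: gameplay uses world coordinates, and only
# rendering converts them to screen coordinates. Names are illustrative.

SCREEN_W, SCREEN_H = 800, 640

class Camera:
    def __init__(self, world_w, world_h):
        self.world_w, self.world_h = world_w, world_h
        self.x, self.y = 0, 0  # top-left corner of the view, in world coords

    def follow(self, target_x, target_y):
        # Center the view on the target, clamped so we never show
        # anything outside the world.
        self.x = max(0, min(target_x - SCREEN_W // 2, self.world_w - SCREEN_W))
        self.y = max(0, min(target_y - SCREEN_H // 2, self.world_h - SCREEN_H))

    def world_to_screen(self, wx, wy):
        # Rendering-only conversion; collision code keeps world coords.
        return wx - self.x, wy - self.y

cam = Camera(3200, 2560)
cam.follow(1600, 1280)   # player in the middle of the world
# cam.world_to_screen(1600, 1280) now puts the player at the screen center
```

Every draw call then blits at `cam.world_to_screen(obj.x, obj.y)` instead of `(obj.x, obj.y)`, which is the only change the rendering code needs.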
Since I can't have a look into your code, I can't assess how useful this answer will be for you.
My approach for side scrollers, movable maps, etc. is to blit all tiles onto a pygame.Surface spanning the dimensions of the whole level/map (or at least a big chunk of it). That way I only have to blit one surface per frame, and it's already prepared.
For collision detection I keep the x/y values (not the entire rects) of the tiles involved in a separate list. Updating is then mostly shifting numbers around rather than surfaces.
Feel free to ask for more details, if you deem it useful :)
Related
I've worked with several methods in OpenCV to identify moving objects and track them based on changing pixels or color, but never anything about the area these objects move through, so I'm asking here in case anyone has a clue about this topic.
Conceptually the idea is pretty simple: let's say we have a bunch of moving objects in a video.
As these objects pass by we would like to identify the boundaries of these objects or "trails":
By the end of the video, or after a time set, the idea would be to know what these boundaries are so we can compute them (area for instance):
My hunch would be to use Lucas-Kanade optical flow to track corner points as the objects pass by and keep the outermost ones, but so far nothing has worked, so I'm not sure this is the proper approach.
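One simpler alternative to tracking individual corner points is to accumulate a motion mask across frames: any pixel that ever changed is part of a trail, and the mask's pixel count gives the area. A minimal numpy sketch of the idea, using tiny synthetic frames in place of a real video:

```python
import numpy as np

def accumulate_trail(frames, threshold=30):
    """Return a boolean mask of every pixel that ever changed.

    frames: iterable of same-shaped uint8 grayscale frames.
    """
    it = iter(frames)
    prev = next(it).astype(np.int16)
    trail = np.zeros(prev.shape, dtype=bool)
    for frame in it:
        cur = frame.astype(np.int16)
        trail |= np.abs(cur - prev) > threshold   # mark changed pixels
        prev = cur
    return trail

# Synthetic example: a bright 2x2 blob sweeping left to right.
frames = []
for x in range(0, 8, 2):
    f = np.zeros((4, 10), dtype=np.uint8)
    f[1:3, x:x + 2] = 255
    frames.append(f)

trail = accumulate_trail(frames)
area = int(trail.sum())    # pixel area swept by the blob
```

With real footage you would feed in grayscale frames from the video and likely clean the mask up with a morphological close before measuring; `cv2.findContours` on the final mask would then give you the boundary itself.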
Would anyone have a clue about the approach to take? Thanks!
I am developing a wxPython project where I draw a diagram onto a panel, and I need to be able to zoom in/out on this diagram (a directed acyclic graph in my case). I'll trigger this with the mouse scroll wheel when the cursor is over the panel, but that isn't part of my question. I'd like advice from someone experienced about the method I'm using for zooming. So far my thinking is:
The diagram contains lines, rectangles, and text inside the rectangles. So maybe I could increase/decrease their length/size on the chosen mouse event. But it's hard to keep everything balanced: the rectangles are connected by lines whose angles should not change, and the text inside the rectangles should stay centered in them.
The other method I thought of is to search for a built-in zoom method; I've heard of something like Scale. However, I have some questions about it. Will it work on vector drawings (like mine) rather than images? And will it scale only the panel I choose and not the whole screen? After I hear your advice I'll look deeper into this, but right now I'm a bit clueless.
Sorry if my question is too theoretical. But I felt I needed help in the area. Thanks in advance.
Note: zooming won't necessarily be triggered by scrolling.
Note 2: my research also led me to FloatCanvas. Is it suitable for my needs?
Yes, from your description FloatCanvas would certainly meet your needs.
Another possibility to consider would be the wx.GraphicsContext and related classes. It is vector-based (instead of raster) and supports the use of a transformation matrix which would make zooming, rotating, etc. very easy. However, the actual drawing and management of the shapes and such would probably require more work for you than using FloatCanvas.
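Whichever route you take, the underlying math is the same. Here's the zoom-about-a-point calculation as plain Python (no wx dependency), which is exactly what a transformation matrix would do for you; the function name is my own:

```python
# Plain-Python sketch of zooming a vector drawing about a fixed point
# (e.g. the mouse cursor position).

def zoom_point(px, py, cx, cy, scale):
    """Scale point (px, py) about center (cx, cy)."""
    return cx + (px - cx) * scale, cy + (py - cy) * scale

# A rectangle as its two corner points, zoomed 2x about (100, 100):
rect = [(100, 100), (140, 120)]
zoomed = [zoom_point(x, y, 100, 100, 2.0) for x, y in rect]
```

Because this is a uniform scale, line angles are preserved and anything centered stays centered, which addresses the "keep it balanced" worry from the question; a transformation matrix on a `wx.GraphicsContext` applies this to every drawing call for free.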
I have a pygame program where there's a face in the center. What I want the program to do is have a bunch of objects on the screen, all irregular. Some would be circles, others would be cut-out pictures of objects like surf boards, chairs, bananas, etc. The user would be able to drag the objects around, and they'd collide with each other and the face in the center, and so be unable to pass through them. Could anyone show me how I would do this? Thanks!
-EDIT- And by not be able to pass through, I mean they'd move along the edge of the object, trying to follow the mouse.
What you are looking for is functionality usually provided by a so-called physics engine. For very basic shapes, it is simple enough to code the basic functionality yourself. (The simplest case for 2D shapes is the collision detection between circles).
Collision detection gets pretty hard pretty quickly, especially if you want to run it at a reasonable frame rate (as you would need for the sort of project you're describing), and especially if you're dealing with arbitrary, irregular shapes (which your description seems to indicate). So, unless you're interested in learning how to code an optimized collision detection system, I suggest you google for Python physics engines. I have never used any, so I can't personally recommend one.
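To show what the simple circle case looks like, here is a self-contained sketch of circle-circle detection plus the minimal push-out that produces the "slide along the edge" behaviour from the question (function names are my own):

```python
import math

def circles_collide(x1, y1, r1, x2, y2, r2):
    """True if the two circles overlap or touch."""
    return math.hypot(x2 - x1, y2 - y1) <= r1 + r2

def push_apart(x1, y1, r1, x2, y2, r2):
    """Move circle 2 the minimum distance so it no longer overlaps circle 1.

    This is the simplest form of 'can't pass through': the dragged
    circle gets pushed out along the line between the two centers.
    """
    dx, dy = x2 - x1, y2 - y1
    dist = math.hypot(dx, dy) or 1e-9    # avoid division by zero
    overlap = r1 + r2 - dist
    if overlap <= 0:
        return x2, y2                     # not touching, nothing to do
    return x2 + dx / dist * overlap, y2 + dy / dist * overlap
```

Calling `push_apart` every frame while the mouse drags a circle makes it follow the cursor but slide around anything it hits. For the irregular cut-out shapes, a physics engine would do the same thing with polygon or pixel-mask shapes instead of circles.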
Good luck!
I am working on an OpenGL project where I need to be able to click on stuff in 3D space. As far as I can tell gluUnproject() will do that job. But I have heard unexpected things might happen, and the accuracy will be thrown off. It could just be that these people used it wrong, or something else. Is there anything unusual I should know about gluUnproject()?
I once asked a question that contains what you seem to be searching for; click here to see my question.
But basically, what you can use gluUnproject() for is converting 2D screen coordinates (probably mouse coordinates) into 3D world-space coordinates.
From those you can calculate two points: the first on the near plane and the second on the far plane. That gives you a line you can then use for collision detection.
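The math gluUnproject() performs can be sketched with numpy: invert the combined modelview-projection matrix and map window coordinates back through it. The matrix and viewport values below are illustrative only (an identity matrix, not a real camera):

```python
import numpy as np

def unproject(win_x, win_y, win_z, mvp, viewport):
    """Map window coordinates to world space, like gluUnproject.

    win_z is 0.0 at the near plane, 1.0 at the far plane.
    """
    vx, vy, vw, vh = viewport
    # Window coordinates -> normalized device coordinates in [-1, 1].
    ndc = np.array([
        (win_x - vx) / vw * 2.0 - 1.0,
        (win_y - vy) / vh * 2.0 - 1.0,
        win_z * 2.0 - 1.0,
        1.0,
    ])
    world = np.linalg.inv(mvp) @ ndc
    return world[:3] / world[3]           # perspective divide

# The picking ray: unproject the cursor twice, near and far.
mvp = np.eye(4)                           # identity, just for the sketch
viewport = (0, 0, 800, 600)
near = unproject(400, 300, 0.0, mvp, viewport)
far = unproject(400, 300, 1.0, mvp, viewport)
direction = far - near                    # ray to intersect with the scene
```

One accuracy note that may explain the "unexpected things" people mention: the result depends entirely on the matrices and viewport you pass in, so passing a stale modelview or projection matrix (queried at the wrong point in the frame) silently gives wrong positions.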
This approach comes from a post (click here to see the post) that describes pretty much what you seem to be seeking.
I have been building a simple game using the pygame library for python. Here is a link to the repository.
https://github.com/stmfunk/alienExplorer
The issue I'm having is with predictably overlaying sprites on top of each other. The clouds in this code seem to be placed randomly above and below the alien. Although this behaviour is actually desirable in this example, I'd like to know why it behaves randomly and how I can make it behave the way I want in the future. I plan on adding objects that I want to remain in the background.
Thanks for the help!
Also, I'm not sure if it's best practice to insert code directly or to link a repository, so I'd appreciate it if somebody gave me advice on that too.
I'm not sure about the technicalities, but what I've noticed from my own experience is that sprites in the same plain group are drawn in no guaranteed order. I assume that in the code above all the clouds are in one group, the alien is in another, and a third group contains them all (if not, I'd suggest structuring it that way; it's an elegant setup, especially for solving your problem). The solution would be to stop drawing the group with everything in it and instead draw the individual groups in layer order: draw the bottom layer first, then work your way up.
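A small sketch of that one-group-per-layer idea; the sprite class, colors, and positions are made up for illustration:

```python
import pygame

# One group per layer, drawn back to front, so overlap order is
# deterministic instead of depending on group internals.

class Block(pygame.sprite.Sprite):
    def __init__(self, color, pos, *groups):
        super().__init__(*groups)
        self.image = pygame.Surface((50, 50))
        self.image.fill(color)
        self.rect = self.image.get_rect(topleft=pos)

clouds = pygame.sprite.Group()         # background layer
aliens = pygame.sprite.Group()         # foreground layer
all_sprites = pygame.sprite.Group()    # still handy for update()

cloud = Block((255, 255, 255), (10, 10), clouds, all_sprites)
alien = Block((0, 255, 0), (30, 30), aliens, all_sprites)

screen = pygame.Surface((200, 200))
clouds.draw(screen)    # background first...
aliens.draw(screen)    # ...foreground on top, so overlap shows the alien
```

If you end up with many layers, `pygame.sprite.LayeredUpdates` does the same thing in a single group using a per-sprite layer number.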