I want to make a game in Panda3D with touch support, because I want it to be playable on my Windows tablet without attaching a keyboard.
What I want to do is find a way to draw 2D shapes that don't change when the camera is rotated. I also want to add a dynamic analog pad, so I must be able to animate it when the d-pad is used with mouse/touch.
Any help will be appreciated.
Make those objects children of base.render2d, base.aspect2d, or base.pixel2d. For proper GUI elements take a look at DirectGUI; for "I just want to throw these images up on the screen", at CardMaker.
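For instance, a minimal sketch of parenting a CardMaker quad to aspect2d so it stays fixed on screen no matter what the 3D camera does (the texture name, size, and position below are placeholders, not anything from your project):

from direct.showbase.ShowBase import ShowBase
from panda3d.core import CardMaker

class App(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)
        # Build a flat quad and parent it to aspect2d, so it lives in the
        # 2D overlay and ignores the 3D camera entirely.
        cm = CardMaker("analog_pad")
        cm.setFrame(-0.2, 0.2, -0.2, 0.2)   # size in aspect2d units
        self.pad = self.aspect2d.attachNewNode(cm.generate())
        self.pad.setPos(-1.0, 0, -0.7)      # roughly bottom-left of the screen
        # self.pad.setTexture(self.loader.loadTexture("pad.png"))  # placeholder texture

App().run()

Moving self.pad around from a task is then enough to animate the analog pad in response to mouse/touch input.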
I am making a fun Python program to automate a couple of clicks for me. I want to use the PyAutoGUI library to make this program. I am struggling to find the coordinates of the elements that I want to click. Are there any ways I can find the coordinates of my elements?
You can use pyautogui.mouseInfo().
MouseInfo is an application to display the XY position and RGB color information of the pixel currently under the mouse. Works on Python 2 and 3. This is useful for GUI automation planning.
The full documentation is at https://mouseinfo.readthedocs.io/en/latest/
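A minimal sketch of how this fits into a script (the target coordinate below is just an example you would read off MouseInfo beforehand):

import pyautogui

# Run this once, interactively, to find the coordinates you need:
# pyautogui.mouseInfo()

# Then hard-code (or load from config) the coordinates you noted down.
TARGET = (640, 480)   # example position taken from MouseInfo

print("Current cursor position:", pyautogui.position())
pyautogui.click(*TARGET)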
Point Position (for Windows) is a simple tool that lets you pick the coordinates for any point on your screen (using X,Y axis). Simply point one of the four corner arrows at the spot on your screen that you want to define and click the button to display the X/Y coordinates.
Let me make my question clearer. I am making a program that simulates a certain set of keypresses in a video game when a certain 'graphic' fully 'charges' up. This graphic is basically a vertical bar that fills up all the way to the top. How can I use Python to interpret this graphic and return some info when the bar is visually fully charged? The position of the graphic on the screen is always consistent, and the state of the bar when it is fully charged is always the same.
Probably the easiest way to achieve that is to use the ImageGrab module from Pillow.
Then check a few pixels in the snapshot to determine whether the bar is full.
Pillow docs - https://pillow.readthedocs.io/en/3.0.x/reference/ImageGrab.html
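Something along these lines, assuming you have already found a pixel near the top of the bar and its colour when full (both values below are made up):

from PIL import ImageGrab

BAR_PIXEL = (850, 120)       # assumed: a pixel just inside the top of the bar
FULL_COLOR = (255, 200, 0)   # assumed: the bar's colour when fully charged

def bar_is_full(tolerance=10):
    # Grab a 1x1 region around the pixel we care about; much cheaper
    # than capturing the whole screen every frame.
    x, y = BAR_PIXEL
    pixel = ImageGrab.grab(bbox=(x, y, x + 1, y + 1)).getpixel((0, 0))
    return all(abs(a - b) <= tolerance for a, b in zip(pixel[:3], FULL_COLOR))

if bar_is_full():
    print("Charged - trigger the keypresses here")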
I would like to create a home screen (menu) for my project using pygame(?).
I have a piTFT 2.8" capacitive display from Adafruit.
I have designed the whole display menu. Take a look at this demo screen for example:
Now I would like to build this on my Pi. Is there any easy way to position each element as can be seen in the attached image?
The display is 320 x 240, and I think that if I try to position the elements blindly it will take a lot of time, which I don't really have to spare.
Have you got any other suggestions about the use of pygame? Would you suggest something different?
This answer may be a bit general, but what I do to find each position is open a paint program, add markers where the positions should be, then select rectangles and read off their positions. It would be hard to find the positions just from that image alone. Do you have a Mac, Windows, or Linux?
Just paste your menu into the paint program and draw rectangles around certain icons; after that, go to the top left of the rect and it will tell you the position, so you will get the width, height, and position.
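Once you have the coordinates, placing everything in pygame is just a table of positions; a rough sketch (file names and coordinates are placeholders):

import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))   # piTFT resolution

# Coordinates measured off the mockup in a paint program (placeholders).
LAYOUT = {
    "logo.png": (20, 10),
    "play.png": (110, 90),
    "exit.png": (110, 160),
}
images = {name: pygame.image.load(name).convert_alpha() for name in LAYOUT}

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))
    for name, pos in LAYOUT.items():
        screen.blit(images[name], pos)
    pygame.display.flip()
pygame.quit()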
So I've been making a game using Python, specifically the PyGame module. Everything has been going fairly well (except Python's speed, am I right :P), and I've got a nice list of accomplishments from this, but I just ran into a... speedbump. Maybe a mountain. I'm not too sure yet. The problem is:
How do I go about implementing a Camera with my current engine?
That probably means nothing to you, though, so let me explain what my current engine is doing: I have a spritesheet that I use for all images. The map is made up of a double array of Tile objects, which fills up the display (800 x 640). The map also contains references to all Entities and Particles. So now I want to create a camera, so that the map object can be larger than the display. To do this I've figured I'll need some kind of camera that follows the player (with the player at the center of the screen). I've seen this implemented before in games, and have even read a few similar posts, but I also need to know: will I have to restructure all my game code to work this in? My first attempt was to make all objects move on the screen when the player moves, but I feel that there is a better way to do this, as that screws up collision detection and such.
So, if anyone knows any good references to problems like this, or a way to fix it, I'm all ears... er.. eyes.
Thanks
You may find this link to be of interest.
In essence, what you need to do is to distinguish between the "actual" coordinates, and the "display" coordinates of each object.
What you would do is do the bulk of the work using the actual coordinates of each entity in your game. If it helps, imagine that you have a gigantic screen that can show everything at once, and calculate everything as normal. It might help if you also designed the camera to be an entity, so that you can update the position of your camera just like any other object.
Once everything is updated, you go to the camera object, and determine what tiles, objects, particles, etc. are visible within the window, and convert their actual, world coordinates to the pixel coordinates you need to display them correctly.
If this is done correctly, you can also do things like scale and otherwise modify the image your camera is displaying without affecting gameplay.
In essence, you want to have a very clear distinction between gameplay and physics logic/code, and your rendering/display code, so your game can do whatever it wants, and you can render it however you want, with minimal crossover between the two.
So the good news is, you probably don't need to change anything about how your game itself works. The bad news is, you'll probably have to go in and rewrite your rendering/drawing code so that everything is drawn relative to the camera, not to the world.
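A bare-bones version of that world-vs-screen split in pygame might look like this (the names are illustrative, not taken from your code):

import pygame

class Camera:
    """Tracks a world-space offset and converts world rects to screen rects."""
    def __init__(self, screen_w, screen_h):
        self.screen_w, self.screen_h = screen_w, screen_h
        self.offset = pygame.Vector2(0, 0)

    def follow(self, target_rect):
        # Keep the target (e.g. the player) in the middle of the window.
        self.offset.x = target_rect.centerx - self.screen_w // 2
        self.offset.y = target_rect.centery - self.screen_h // 2

    def apply(self, world_rect):
        # Only the drawing code uses this; physics keeps world coordinates.
        return world_rect.move(-self.offset.x, -self.offset.y)

# In the draw loop (player, tiles and screen assumed to exist elsewhere):
#     camera.follow(player.rect)
#     for tile in tiles:
#         screen.blit(tile.image, camera.apply(tile.rect))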
Since I can't have a look into your code, I can't assess how useful this answer will be for you.
My approach for side scrollers, movable maps, etc. is to blit all tiles onto a pygame.Surface spanning the dimensions of the whole level/map, or at least a big chunk of it. This way I only have to blit one surface per frame, and it is already prepared.
For collision detection I keep the x/y values (not the entire rects) of the tiles involved in a separate list. Updating is then mainly shifting numbers around, not surfaces anymore.
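For example (tile data and sizes are placeholders), the map is rendered once up front, and each frame only the visible window of it is blitted:

import pygame

TILE = 32
LEVEL_W, LEVEL_H = 100 * TILE, 30 * TILE

# Rendered once, up front: every tile blitted onto one big surface.
level_surface = pygame.Surface((LEVEL_W, LEVEL_H))
# for (col, row), tile_image in tile_map.items():   # tile_map is assumed
#     level_surface.blit(tile_image, (col * TILE, row * TILE))

def draw_level(screen, camera_x, camera_y):
    # Blit only the part of the pre-rendered map the camera can see.
    view = pygame.Rect(camera_x, camera_y,
                       screen.get_width(), screen.get_height())
    screen.blit(level_surface, (0, 0), area=view)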
Feel free to ask for more details, if you deem it useful :)
I am designing piping software; right now it works in 2D. I implemented a very simple frame with wx.PaintDC(); it basically goes like this:
def OnDrawing(self, evt):
    dc = wx.PaintDC(self.leftWindow)
    self.leftWindow.PrepareDC(dc)
    dc.Clear()
    # self.images holds (path, (x, y)) entries; the first entry is skipped.
    for image in self.images[1:]:
        x = image[1][0]
        y = image[1][1]
        img = wx.Image(image[0], wx.BITMAP_TYPE_ANY)
        bmp = wx.BitmapFromImage(img)
        dc.DrawBitmap(bmp, x, y, True)
The result is this [1]. The buttons on the right are used to add sections (pipes, valves, etc.) to the right frame. When you click on a button, the program calculates the position and draws it, so the frame is non-interactive: you can't click on the segments of pipe or the valves, can't resize them, etc.
This is very easy and simple, but as a new programmer it cost me some time (and I am fairly proud of it). Now I want to improve it: what I want to do is create a 3D-like interactive frame, where the user can create the pipe diagram "by mouse", click on elements to change their properties, etc.
What I am aiming for is something like these [2] [3], with an isometric background like this [4].
I guess that's not going to be easy (but neither, for me, was what I did in the beginning), but I am determined to keep trying and studying to make it. What I want from you guys is directions.
Right now I don't know where to start; I am wondering "is this possible in wx?", "should I use OpenGL or something?". I need you to point me in the right direction.
Is this possible to implement with only wx? Or do I need PyOpenGL (which I don't know anything about), or something like that?
Thanks!!!
You might want to investigate Python-Ogre. Ogre is an open-source 3D engine, and Python-Ogre lets you manipulate the scene from Python. That way you could focus on the user interface instead of learning how to draw textured triangles with PyOpenGL.
http://python-ogre.org/