How to accurately track mouse movements in python, tkinter? - python

So I am trying to make a simple drawpad using Python and tkinter, and I have pretty much got it working, except there are massive gaps between the individual dots.
Here's an example.
I was wondering if there is a more accurate mouse-tracking method? Here's my code for making the dots.
def draw(event):
    global DotSize
    arr = []
    arr.append([event.x, event.y])
    for i in arr:
        yes = i[0]
        no = i[1]
        Canvas.create_oval(yes, no, yes, no, fill="black", width=DotSize)
# There's a load more code in here but I don't think it's relevant.
Canvas.bind("<B1-Motion>", draw)
Thanks!

It looks like you are redrawing every point every time: first you draw one oval, then two, then three, and so on. You only need to draw the last item in the array.
As for the accuracy -- what you are doing is the most accurate way to track mouse movement in tkinter; there's nothing more efficient than binding to <B1-Motion>. The gaps appear because motion events arrive at discrete intervals: when the mouse moves quickly, consecutive events can be many pixels apart.
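The usual fix for the gaps is to remember the previous event position and draw a line segment connecting it to the current position, so fast drags produce a continuous stroke. A minimal sketch of that idea (the drawing logic is kept separate from the GUI setup so it works with any canvas-like object; call main() to try it interactively):

```python
import tkinter as tk

last = None  # previous mouse position; reset when the button is released

def draw(event, canvas):
    """Connect the previous point to the current one so fast drags leave no gaps."""
    global last
    if last is not None:
        canvas.create_line(last[0], last[1], event.x, event.y,
                           width=3, fill="black", capstyle=tk.ROUND)
    last = (event.x, event.y)

def reset(event):
    """Forget the previous point so a new stroke doesn't connect to the old one."""
    global last
    last = None

def main():
    root = tk.Tk()
    canvas = tk.Canvas(root, width=400, height=400, bg="white")
    canvas.pack()
    canvas.bind("<B1-Motion>", lambda e: draw(e, canvas))
    canvas.bind("<ButtonRelease-1>", reset)
    root.mainloop()
```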

Related

Updating matplotlib annotation position without redrawing all data

I've essentially recreated the MATLAB data cursor in my custom pyplot figure tool. It allows dragging and left/right arrow key presses to update an annotation box's position along a line, etc. My problem comes when there are a large number of lines: the data cursor still works, but dragging it around is extremely slow. I'm fairly certain the problem stems from my canvas.draw() calls, since that redraws the entire figure every time the data cursor moves.
I'm working towards using blit() and restore_region(), but I'm having great difficulty implementing them. Can anyone provide even a shell of how to make this work? I can't include actual code due to its classification, but I can give a small pseudo-code example.
def create_anno(self):
    # Do annotation creation stuff here
    self.background = self.figure.canvas.copy_from_bbox(self.axis.bbox)

def key_press(self):
    self.figure.canvas.restore_region(self.background)
    # Do update position/text stuff here
    self.figure.canvas.blit(self.axis.bbox)
    self.figure.canvas.flush_events()
The key_press function is called normally, but the annotation box doesn't move anymore. Most examples use draw_artist() between their restore_region() and blit() calls, but annotation doesn't have that. I just want to update a little annotation box's position across a figure without having to redraw the entire axis. Any ideas?
Thanks!
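For what it's worth, Annotation is a regular matplotlib Artist, so Axes.draw_artist() does accept it. A minimal headless sketch of the restore/draw/blit cycle (figure contents and coordinates are illustrative):

```python
import matplotlib
matplotlib.use("Agg")  # any interactive backend works too; Agg runs headless
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])

# Annotation is a matplotlib Artist, so Axes.draw_artist() accepts it.
ann = ax.annotate("cursor", xy=(0.5, 0.5), xytext=(0.7, 0.4),
                  arrowprops=dict(arrowstyle="->"))
ann.set_animated(True)   # keep the annotation out of the cached background
fig.canvas.draw()        # one full draw, then cache the clean axes
background = fig.canvas.copy_from_bbox(ax.bbox)

def move_cursor(x, y):
    fig.canvas.restore_region(background)  # paste the cached background back
    ann.xy = (x, y)                        # move the arrow target
    ax.draw_artist(ann)                    # redraw just the annotation
    fig.canvas.blit(ax.bbox)               # push only the changed region

move_cursor(0.2, 0.2)
```

The key detail is set_animated(True): it excludes the annotation from the full draw, so the cached background is clean and draw_artist() paints only the moved box.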

Create a menu home screen for piTFT using pygame

I would like to create a home screen (menu) for my project using pygame(?).
I have a piTFT 2.8" capacitive display from Adafruit.
I have designed the whole display menu. Take a look at this demo screen for example:
Now I would like to build this on my Pi. Is there any easy way to position each element as seen in the attached image?
The display is 320 x 240, and I think if I try to position the elements blindly it will take a lot of time, which in this case I don't really have to waste.
Have you got any other suggestions about the use of pygame? Would you suggest something different?
This answer may be a bit general, but what I do to find each position is open the image in a paint program and add markers where the positions are; then I can select rectangles and read off their positions. It would be hard to find the positions just from the image alone. Do you have a Mac, Windows, or Linux machine?
Just paste your menu into the paint program and draw rectangles around the icons; the top-left corner of each rectangle gives you the position, and the rectangle itself gives you the width and height.
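Once you have read the coordinates off the mock-up, you can keep them in a dict of pygame.Rects and draw each element at its rect. A sketch with made-up element names and placeholder positions (substitute whatever you measure from your own design):

```python
import os
os.environ.setdefault("SDL_VIDEODRIVER", "dummy")  # lets this run headless
import pygame

pygame.init()
screen = pygame.display.set_mode((320, 240))  # piTFT 2.8" resolution

# Hypothetical positions read off the mock-up in a paint program:
LAYOUT = {
    "clock":    pygame.Rect(10, 10, 100, 40),
    "weather":  pygame.Rect(200, 10, 110, 40),
    "button_a": pygame.Rect(20, 180, 80, 40),
    "button_b": pygame.Rect(220, 180, 80, 40),
}

def draw_menu(surface):
    """Draw every menu element at its measured rect."""
    surface.fill((0, 0, 0))
    for name, rect in LAYOUT.items():
        # In the real menu you would blit an icon here; outlines stand in.
        pygame.draw.rect(surface, (0, 120, 255), rect, 2)

draw_menu(screen)
pygame.display.flip()
```

Keeping the layout in one dict also makes touch handling easy: pygame.Rect.collidepoint() tells you which element a tap landed on.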

2D Game Engine - Implementing a Camera

So I've been making a game using Python, specifically the PyGame module. Everything has been going fairly well (except Python's speed, am I right :P), and I've got a nice list of accomplishments, but I just ran into a... speedbump. Maybe a mountain. I'm not too sure yet. The problem is:
How do I go about implementing a Camera with my current engine?
That probably means nothing to you, though, so let me explain what my current engine is doing: I have a spritesheet that I use for all images. The map is made up of a double array of Tile objects, which fills the display (800 x 640). The map also holds references to all Entities and Particles. Now I want to create a camera, so that the map object can be larger than the display. I've worked out that I'll need some kind of camera that follows the player (with the player at the center of the screen). I've seen this implemented in games before, and even read a few similar posts, but I also need to know: will I have to restructure all my game code to work this in? My first attempt was to move every object on the screen when the player moves, but I feel there is a better way, as this screws up collision detection and such.
So, if anyone knows any good references to problems like this, or a way to fix it, I'm all ears... er.. eyes.
Thanks
You may find this link to be of interest.
In essence, what you need to do is to distinguish between the "actual" coordinates, and the "display" coordinates of each object.
What you would do is do the bulk of the work using the actual coordinates of each entity in your game. If it helps, imagine that you have a gigantic screen that can show everything at once, and calculate everything as normal. It might help if you also designed the camera to be an entity, so that you can update the position of your camera just like any other object.
Once everything is updated, you go to the camera object, and determine what tiles, objects, particles, etc. are visible within the window, and convert their actual, world coordinates to the pixel coordinates you need to display them correctly.
If this is done correctly, you can also do things like scale and otherwise modify the image your camera is displaying without affecting gameplay.
In essence, you want to have a very clear distinction between gameplay and physics logic/code, and your rendering/display code, so your game can do whatever it wants, and you can render it however you want, with minimal crossover between the two.
So the good news is, you probably don't need to change anything about how your game itself works. The bad news is, you'll probably have to go in and rewrite your rendering/drawing code so that everything is drawn relative to the camera, not to the world.
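The world-vs-screen split described above boils down to two small functions: one that centers the camera on the player (clamped so it never shows past the map edge) and one that converts world coordinates to screen coordinates at draw time. A sketch with hypothetical sizes, using plain tuples so the idea stays engine-agnostic:

```python
# Hypothetical display size matching the question's 800 x 640 window.
SCREEN_W, SCREEN_H = 800, 640

def camera_topleft(player_pos, world_w, world_h):
    """Top-left world coordinate of the view, centered on the player
    and clamped so the camera never shows past the map edges."""
    cam_x = player_pos[0] - SCREEN_W // 2
    cam_y = player_pos[1] - SCREEN_H // 2
    cam_x = max(0, min(cam_x, world_w - SCREEN_W))
    cam_y = max(0, min(cam_y, world_h - SCREEN_H))
    return cam_x, cam_y

def world_to_screen(world_pos, camera):
    """Convert a world coordinate to a pixel coordinate for drawing."""
    return world_pos[0] - camera[0], world_pos[1] - camera[1]
```

Gameplay and collision code keep using world coordinates untouched; only the draw loop calls world_to_screen() (and skips anything that falls outside the window).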
Since I can't look at your code, I can't assess how useful this answer will be for you.
My approach for side scrollers, movable maps, etc. is to blit all tiles onto a pygame.Surface spanning the dimensions of the whole level/map, or at least a big chunk of it. That way I only have to blit one surface per frame, and it is already prepared.
For collision detection I keep the x/y values (not the entire rects) of the tiles involved in a separate list. Updating then mostly means shifting numbers around, not surfaces.
Feel free to ask for more details, if you deem it useful :)

How to read pixel colours using Python turtle.py

I'm preparing exercises for school classes involving Python's turtle library.
The students are already drawing terrific pictures, but I want them to be able to detect existing pictures and colours in order to modify the behaviour of their program.
For example I would like to provide them with code which draws a maze using turtle, and then they can write the code to navigate the turtle around the maze (don't worry, I'll start simpler).
Is there a way to detect the colour of the pixels already drawn by the turtle?
Thanks!
Turtle uses a Tkinter canvas, which you can get via turtle.getcanvas(), and according to this you cannot read the colour of a pixel directly; the workaround is to convert the canvas to a picture (bitmap) and read the bitmap.
You could keep an array of your own to act as the bitmap of the canvas and update it yourself as you draw new elements, although that seems impractical unless the maze is simple and 'squary'.
I would use an array to keep all the x and y coordinates used for the maze, as stated above, then define a box of some size around the turtle for detection purposes.
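The "keep your own array" idea can be as small as a set of occupied grid cells that the maze-drawing code fills in as it draws; the navigation code then queries that set instead of reading pixels. A sketch assuming a hypothetical 10-pixel grid cell:

```python
CELL = 10  # hypothetical grid resolution in pixels; match your maze's wall width

walls = set()  # cells the maze-drawing code has painted

def mark_wall(x, y):
    """Record that the drawing code painted a wall at canvas position (x, y)."""
    walls.add((int(x) // CELL, int(y) // CELL))

def is_wall(x, y):
    """True if the cell containing (x, y) was painted as a wall."""
    return (int(x) // CELL, int(y) // CELL) in walls
```

The maze-drawing exercise would call mark_wall() alongside every turtle.forward() that draws a wall; the students' navigation code only ever needs is_wall().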

Get the value of an individual pixel

I'm making Tetris and I'm trying to make a line-clear function. I have a way to do this that is a bit complicated, and I'm looking to simplify it.
My idea is to check some pixel RGB values and see if they're not black (the color of the background). Is there a quick way to get the RGB value of a pixel on the screen?
pygame.PixelArray should do the trick. You can use it on the screen surface.
But personally, I wouldn't recommend that you use the pixels as a reference.
Handling everything logically is a lot better.
As Icfseth noted, "Handling everything logically is a lot better."
Tetris is an array of blocks that occupy a logical space with a granularity larger than a pixel. Your game model should operate in "block coordinates" and detect whether a space is open or filled based on that. The screen is just a representation or "view" of the block space.
Try first just moving a 16 px square around a 320 x 320 px screen in steps of 16 px. You could keep track of every pixel, but it makes much more sense to have the block position range over x = 0..19, y = 0..19, and let the display code worry about how to show a block at position (2, 3).
