How to tell the current shape of my cursor? - python

When automating clicks and keystrokes, I often have to wait until a button finishes loading and becomes clickable. The only indicator of this is, to my knowledge, the shape of my cursor. In any programming language (preferably Python), is there any way to tell if it has changed from a pointer to a hand?

You could take a screenshot, then use the PIL imaging library together with the current cursor position to check what the cursor looks like at that point. It might not work very well against a white or black background, though.
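That screenshot route is fragile, not least because many screen-capture APIs do not include the cursor image at all. If you are on Windows, a more direct check (not the PIL approach above, but worth knowing) is to ask the OS which cursor is currently set and compare it against the standard hand cursor. A minimal sketch, assuming pywin32 is installed:

    import time
    import win32con
    import win32gui

    # Handle of the standard "hand" (link) cursor to compare against.
    HAND_CURSOR = win32gui.LoadCursor(0, win32con.IDC_HAND)

    def cursor_is_hand():
        # GetCursorInfo returns (flags, cursor handle, (x, y)).
        _, hcursor, _ = win32gui.GetCursorInfo()
        return hcursor == HAND_CURSOR

    def wait_for_hand_cursor(timeout=10.0, poll=0.1):
        """Poll until the cursor becomes a hand, e.g. after hovering a button."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            if cursor_is_hand():
                return True
            time.sleep(poll)
        return False

If the application uses a custom cursor theme, the handle comparison may not match, in which case the screenshot-comparison idea above is the fallback.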

Related

Column selection with mouse drag using Python/Tkinter Text function

I am developing a specialized text editor for a company. I am developing in Python 3.7.4 and I'm using Tkinter. One of the features I would like to develop is to select a range of columns/rows by dragging the mouse pointer from a starting point. If you're familiar with Notepad++, it is equivalent to their Alt + mouse drag feature. I've included a short video demonstrating this feature in Notepad++. https://www.youtube.com/watch?v=BEDURCPTrHo&feature=youtu.be
I have not found a similar keystroke combination in Tkinter that allows this within a Text box. I am familiar with most elements of the Text method, and know how to determine row/column index, selection range, insert/delete, bbox attributes, etc. And I also know how to bind mouse events (click, move, release). And finally I am aware of the notion of monospace fonts and calculating row/column for the mouse event x/y coordinates. I'm just wondering if anybody has written any code to do this, or knows of a Tkinter Text method feature that I may be overlooking.
Thanks in advance for your help.
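For what it's worth, here is a minimal sketch of the approach the question already outlines: bind the mouse events, convert the event's x/y to a "row.column" index with index("@x,y"), and tag the same column span on every row between the anchor and the current position. The widget subclass, tag name and Alt bindings below are illustrative, not a built-in Text feature:

    import tkinter as tk

    class ColumnSelectText(tk.Text):
        """Text widget with a rough Alt+drag rectangular (column) selection."""
        def __init__(self, master=None, **kwargs):
            super().__init__(master, **kwargs)
            self.anchor_index = None
            self.tag_configure("colsel", background="#b5d5ff")
            self.bind("<Alt-Button-1>", self.start_column_select)
            self.bind("<Alt-B1-Motion>", self.extend_column_select)

        def start_column_select(self, event):
            self.anchor_index = self.index(f"@{event.x},{event.y}")
            return "break"  # keep the default click behaviour from firing too

        def extend_column_select(self, event):
            if self.anchor_index is None:
                return
            current = self.index(f"@{event.x},{event.y}")
            r1, c1 = (int(n) for n in self.anchor_index.split("."))
            r2, c2 = (int(n) for n in current.split("."))
            r1, r2 = sorted((r1, r2))
            c1, c2 = sorted((c1, c2))
            self.tag_remove("colsel", "1.0", "end")
            for row in range(r1, r2 + 1):
                self.tag_add("colsel", f"{row}.{c1}", f"{row}.{c2}")
            return "break"

    if __name__ == "__main__":
        root = tk.Tk()
        text = ColumnSelectText(root, font=("Courier", 12), width=60, height=20)
        text.insert("1.0", "\n".join(f"column selection demo line {i:02d}" for i in range(1, 21)))
        text.pack(fill="both", expand=True)
        root.mainloop()

Copying or cutting the rectangular block then means collecting each row's tagged slice separately, rather than treating the selection as one contiguous range.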

2D Game Engine - Implementing a Camera

So I've been making a game using Python, specifically the PyGame module. Everything has been going fairly well (except Python's speed, am I right :P), and I've got a nice list of accomplishments from this, but I just ran into a... speedbump. Maybe a mountain. I'm not too sure yet. The problem is:
How do I go about implementing a Camera with my current engine?
That probably means nothing to you, though, so let me explain what my current engine is doing: I have a spritesheet that I use for all images. The map is made up of a double array of Tile objects, which fills up the display (800 x 640). The map also contains references to all Entities and Particles. So now I want to create a camera, so that the map object can be larger than the display. To do this I've figured that I'll need some kind of camera that follows the player (with the player at the center of the screen). I've seen this implemented before in games, and have even read a few other similar posts, but I also need to know: will I have to restructure all of my game code to work this in? My first attempt was to make all objects move on the screen when the player moves, but I feel that there is a better way to do this, as this screws up collision detection and such.
So, if anyone knows any good references to problems like this, or a way to fix it, I'm all ears... er.. eyes.
Thanks
You may find this link to be of interest.
In essence, what you need to do is to distinguish between the "actual" coordinates, and the "display" coordinates of each object.
What you would do is do the bulk of the work using the actual coordinates of each entity in your game. If it helps, imagine that you have a gigantic screen that can show everything at once, and calculate everything as normal. It might help if you also designed the camera to be an entity, so that you can update the position of your camera just like any other object.
Once everything is updated, you go to the camera object, and determine what tiles, objects, particles, etc. are visible within the window, and convert their actual, world coordinates to the pixel coordinates you need to display them correctly.
If this is done correctly, you can also do things like scale and otherwise modify the image your camera is displaying without affecting gameplay.
In essence, you want to have a very clear distinction between gameplay and physics logic/code, and your rendering/display code, so your game can do whatever it wants, and you can render it however you want, with minimal crossover between the two.
So the good news is, you probably don't need to change anything about how your game itself works. The bad news is, you'll probably have to go in and rewrite your rendering/drawing code so that everything is drawn relative to the camera, not to the world.
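As a concrete illustration of that split, here is a minimal sketch of a camera for the 800 x 640 display from the question. The class and method names are made up; the point is that update() and apply() are the only places that know about the camera, while gameplay and collision code keep using world coordinates:

    import pygame

    SCREEN_W, SCREEN_H = 800, 640   # display size from the question

    class Camera:
        """Tracks a world-space offset; rendering subtracts it, gameplay ignores it."""
        def __init__(self, world_w, world_h):
            self.x = 0
            self.y = 0
            self.world_w = world_w   # full map size in pixels
            self.world_h = world_h

        def update(self, target_rect):
            # Centre on the target (the player), clamped to the map edges.
            self.x = max(0, min(target_rect.centerx - SCREEN_W // 2, self.world_w - SCREEN_W))
            self.y = max(0, min(target_rect.centery - SCREEN_H // 2, self.world_h - SCREEN_H))

        def apply(self, rect):
            # World-space rect -> screen-space rect, used only at draw time.
            return rect.move(-self.x, -self.y)

    # In the render loop (sketch):
    #   camera.update(player.rect)
    #   for thing in visible_tiles_entities_and_particles:
    #       screen.blit(thing.image, camera.apply(thing.rect))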
Since I can't have a look into your code, I can't assess how useful this answer will be for you.
My approach for side scrollers, moveable maps, etc. is to blit all tiles onto a pygame.Surface spanning the dimensions of the whole level/map, or at least a big chunk of it. That way I only have to blit one already-prepared surface per frame.
For collision detection I keep the x/y values (not the entire rect) of the tiles involved in a separate list. Updating is then mainly shifting numbers around and not surfaces anymore.
Feel free to ask for more details, if you deem it useful :)
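A short sketch of that pre-rendered-surface idea, in case it helps; the tile size, dictionary layout and function names are just placeholders:

    import pygame

    TILE = 32  # tile size in pixels (placeholder)

    def build_level_surface(tile_images, level_cols, level_rows):
        """Blit every tile once, at load time, onto one big level-sized Surface."""
        level = pygame.Surface((level_cols * TILE, level_rows * TILE))
        for (col, row), image in tile_images.items():
            level.blit(image, (col * TILE, row * TILE))
        return level

    def draw_visible_part(screen, level, cam_x, cam_y):
        """Per frame: copy only the camera's 800 x 640 window out of the level."""
        screen.blit(level, (0, 0), area=pygame.Rect(cam_x, cam_y, 800, 640))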

Draw with OpenGL offscreen

Is there a way to use OpenGL to draw offscreen? What I want to do is this: I want to be able to use functions like glVertex, and get the result in a 2D pixel array.
I am using Python. I tried using PyGame, but it's not working very well. The problem with PyGame is that it uses a window even though I don't need one. In addition, I had to draw the scene and flip the screen twice in order to access the screen pixels using glReadPixels.
Another problem is that I can't have more than one window at once.
Is there any proper way to accomplish what I am trying to do?
What you are asking for seems to be two things in one... you want an off-screen buffer (FBO) and you want to get the contents of the framebuffer in client memory.
Can you indicate which version of GL you are targeting?
If you are targeting OpenGL 3.0+, then you can use FBOs (Framebuffer Objects) and PBOs (Pixel Buffer Objects) to do this efficiently. However, since you are using glVertex, I do not think you need to bother with efficiency. I would focus on learning to use Framebuffer Objects for the time being.
If you are not using GL3, you might have access to the old EXT FBO extension; if you do not even have that, you may need a PBuffer.
Note that PBuffers and Pixel Buffer Objects are two different things even though they sound the same. Before GL3/FBOs, WGL, GLX, etc. had special platform-specific functionality called Pixel Buffers for drawing off-screen.
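For concreteness, here is a rough sketch of the FBO-plus-glReadPixels route using PyOpenGL. It still creates a (hidden) pygame window, because some window-system context is needed; the hidden flag requires pygame 2, and the sizes and the triangle are arbitrary:

    import numpy as np
    import pygame
    from OpenGL.GL import *

    WIDTH, HEIGHT = 256, 256

    # A GL context still needs a window; keep it tiny and hidden (pygame 2+).
    pygame.init()
    pygame.display.set_mode((1, 1), pygame.OPENGL | pygame.DOUBLEBUF | pygame.HIDDEN)

    # Texture that will act as the FBO's colour attachment.
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, WIDTH, HEIGHT, 0, GL_RGBA, GL_UNSIGNED_BYTE, None)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)

    fbo = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0)
    assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE

    # Draw into the FBO with old-style glVertex calls, as in the question.
    glViewport(0, 0, WIDTH, HEIGHT)
    glClearColor(0.0, 0.0, 0.0, 1.0)
    glClear(GL_COLOR_BUFFER_BIT)
    glBegin(GL_TRIANGLES)
    glColor3f(1.0, 0.0, 0.0)
    glVertex2f(-0.5, -0.5)
    glVertex2f(0.5, -0.5)
    glVertex2f(0.0, 0.5)
    glEnd()

    # Read the FBO contents straight back into a (HEIGHT, WIDTH, 4) pixel array.
    data = glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE)
    pixels = np.frombuffer(bytes(data), dtype=np.uint8).reshape(HEIGHT, WIDTH, 4)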

Cad plotters and Python

I'm wondering what the best way is to send a fully scaled (1:1) DXF drawing to a CAD plotter using Python. Has anyone here ever done this?
For those who want to know why:
I've written a program for my employer that automates the drawing of detailed schematics, apparently so our engineering dept can spend more time doing nothing. The issue now is that they would like to completely eliminate acad since it's only used to plot the finished drawing.
Mind you these drawings are used for non-trivial things like checking the dimensions of critical components used in commercial jetliners.
In case anyone else runs into this problem (pretty unlikely) I thought I'd post briefly what I did in the end:
1.) Wrote a short script to capture the DXF as a BMP (basically just a screen grab that appends the scale to the drawing).
2.) Wrote a print dialog with PyQt4 that's a clone of AutoCAD's plot window, except that it has to pull the scaling info from the BMP.
My Python skills are awful so there are likely better solutions, but this worked.
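For anyone curious how the 1:1 part can work, here's a hedged sketch of printing a raster capture at true scale with PyQt4. This is not the original code; the file name and the mm-per-pixel figure are made up, and the BMP's scale is assumed to be known as drawing millimetres per image pixel:

    from PyQt4 import QtGui

    def print_at_scale(image_path, mm_per_px):
        """Print image_path so one image pixel covers mm_per_px real millimetres."""
        image = QtGui.QImage(image_path)

        printer = QtGui.QPrinter(QtGui.QPrinter.HighResolution)
        dialog = QtGui.QPrintDialog(printer)
        if dialog.exec_() != QtGui.QDialog.Accepted:
            return

        # Printer dots per millimetre, then printer dots per image pixel.
        dots_per_mm = printer.resolution() / 25.4
        dots_per_px = dots_per_mm * mm_per_px

        painter = QtGui.QPainter(printer)
        painter.scale(dots_per_px, dots_per_px)  # 1 image pixel -> mm_per_px real mm
        painter.drawImage(0, 0, image)
        painter.end()

    if __name__ == "__main__":
        app = QtGui.QApplication([])
        print_at_scale("schematic_capture.bmp", mm_per_px=0.25)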

Creating a Selection tool using PyGtk

Can anyone help or point me in the right direction for figuring out how to create a drag and draw rectangular box to be used as a selection tool in PyGtk? I am presently using an event box with a drawable window and the user can click once in the upper left and once in the lower right corner of the portion of image they would like to choose which will then draw a rectangle over the selection, but a drag and draw rectangle will allow the user to better adjust and get better accuracy.
I have looked quite a few places for information or a tutorial on this but I haven't found much. I am relatively new to Gtk+ so perhaps this is so simple that no one has to ask.
Actually, this doesn't seem all that lamebrained at all. It is actually quite specific, and a little challenging.
I'll give you the steps to start you off, but as you're beginning (and you didn't post any specific code), it would be better for you to create the code yourself based on documentation and my hints.
By the way, look up the official PyGTK documentation - that should be your definitive source for all the objects and functions of PyGTK. It is very well written and exhaustive, and I rarely have to look more than five minutes to find what I need.
What I suggest you do is use three signals, connected to your drawing area.
button-press-event
button-release-event
motion-notify-event
Create three callbacks (tutorial here), one for each event. Connect your drawing area to your events and callbacks (again, see tutorial. You may need to go through a few pages on it.).
You are also going to need to create two boolean variables at the global level (above the main class, at the same level where you import modules). The first controls whether the selection tool is chosen (call it "Select_On"), and the second whether it is currently active (call it "Select_Active").
On the button you use to start the select tool, set "Select_On" to "True". This should probably be a toggle button, so make sure you set it up so "Select_On" gets set to off if you toggle the button off.
On button-press-event, create the object for selecting. What you're going with now actually should work well. Also, set "Select_Active" to "True".
On motion-notify-event, change the size of your object based on cursor position. Refer to that documentation for that particular kind of object to learn how to change its size, and refer here for how to get the cursor position.
Be prepared to write an algorithm to determine how to change the size of the selection object based on the cursor position. If you need help with that, feel free to ask for it in a separate question.
On button-release-event, set "Select_Active" to "False", and call all your code for actually confirming the selection.
As an aside, the benefit to using the "motion-notify-event" is that, as soon as the cursor leaves the widget you're selecting in, the selection box stops changing size. The cursor must re-enter the widget to continue changing the selection box size.
I hope all that works for you, and wishing you the very best on your project!
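If a concrete starting point helps, here is a minimal sketch of those three callbacks on a gtk.DrawingArea. The question uses an EventBox over an image, but the signal wiring is the same; the widget, colours and flag handling here are illustrative, and the toggle-button "Select_On" logic is left out:

    import pygtk
    pygtk.require("2.0")
    import gtk

    class SelectArea(gtk.DrawingArea):
        def __init__(self):
            gtk.DrawingArea.__init__(self)
            self.anchor = None          # corner where the drag started
            self.corner = None          # corner under the cursor right now
            self.select_active = False  # the "Select_Active" flag from the steps
            self.add_events(gtk.gdk.BUTTON_PRESS_MASK |
                            gtk.gdk.BUTTON_RELEASE_MASK |
                            gtk.gdk.POINTER_MOTION_MASK)
            self.connect("button-press-event", self.on_press)
            self.connect("motion-notify-event", self.on_motion)
            self.connect("button-release-event", self.on_release)
            self.connect("expose-event", self.on_expose)

        def on_press(self, widget, event):
            self.select_active = True
            self.anchor = self.corner = (event.x, event.y)

        def on_motion(self, widget, event):
            if self.select_active:
                self.corner = (event.x, event.y)
                self.queue_draw()   # trigger expose-event to redraw the rectangle

        def on_release(self, widget, event):
            self.select_active = False
            # ...confirm the selection rectangle here...

        def on_expose(self, widget, event):
            if self.anchor and self.corner:
                cr = widget.window.cairo_create()
                x = min(self.anchor[0], self.corner[0])
                y = min(self.anchor[1], self.corner[1])
                w = abs(self.corner[0] - self.anchor[0])
                h = abs(self.corner[1] - self.anchor[1])
                cr.set_source_rgba(0.2, 0.4, 1.0, 0.8)
                cr.rectangle(x, y, w, h)
                cr.stroke()

    if __name__ == "__main__":
        win = gtk.Window()
        win.connect("destroy", gtk.main_quit)
        area = SelectArea()
        area.set_size_request(400, 300)
        win.add(area)
        win.show_all()
        gtk.main()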
