Interactive Tool/library for displaying rectangles - python

I need to find a library or tool that can display rectangles based on parameters from Python (width/height and x/y position of each rectangle). It is essential that the position of the displayed rectangles can be changed by dragging them on screen, and that their new positions can be used as input for functions, e.g. rotation. It would also be great if I could connect the rectangles with lines, as that is the next step.
Have you heard of such a library or tool?
Currently I am displaying them as PNGs.
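One possibility (purely a suggestion, not something from the question) is the standard-library tkinter Canvas: rectangles drawn on a Canvas can be dragged with the mouse, their updated coordinates can be read back into Python, and create_line can later connect them. A minimal sketch:

import tkinter as tk

# Sketch: draggable rectangles on a tkinter Canvas; positions come from plain
# Python parameters and can be read back after dragging.
rects = [(20, 20, 100, 60), (150, 80, 80, 80)]  # (x, y, width, height)

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300, bg="white")
canvas.pack()

for x, y, w, h in rects:
    canvas.create_rectangle(x, y, x + w, y + h, fill="lightblue", tags="draggable")

drag = {"item": None, "x": 0, "y": 0}

def on_press(event):
    drag["item"] = canvas.find_closest(event.x, event.y)[0]
    drag["x"], drag["y"] = event.x, event.y

def on_drag(event):
    dx, dy = event.x - drag["x"], event.y - drag["y"]
    canvas.move(drag["item"], dx, dy)
    drag["x"], drag["y"] = event.x, event.y

def on_release(event):
    # canvas.coords() returns the updated corners, usable as input for e.g. rotation.
    print(canvas.coords(drag["item"]))

canvas.tag_bind("draggable", "<ButtonPress-1>", on_press)
canvas.tag_bind("draggable", "<B1-Motion>", on_drag)
canvas.tag_bind("draggable", "<ButtonRelease-1>", on_release)
root.mainloop()

Connecting two rectangles would then be a matter of calling canvas.create_line() between their current coordinates and updating that line in on_drag.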

Related

Using Python for VTK DCM file annotation

I want to make a CT labeling tool for target detection, but I don't know how to implement the rectangle-drawing function.
The loaded image has three views. After labeling, each view should display its corresponding rectangle, and when the size of the rectangle changes in one view, the rectangles in the other two views should update accordingly,
similar to Mimics.
I have tried vtkBorderWidget and vtkBoxWidget, but without success.
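For reference, a minimal single-view sketch of vtkBorderWidget (this is only an assumption about the setup, not the asker's three-view code; the synchronization across views would still have to be added on top):

import vtk

# Sketch: a selectable 2D rectangle overlay in one render window using vtkBorderWidget.
renderer = vtk.vtkRenderer()
render_window = vtk.vtkRenderWindow()
render_window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(render_window)

representation = vtk.vtkBorderRepresentation()
representation.GetPositionCoordinate().SetValue(0.3, 0.3)   # lower-left corner, normalized viewport
representation.GetPosition2Coordinate().SetValue(0.2, 0.2)  # width/height, normalized viewport
representation.GetBorderProperty().SetColor(1.0, 0.0, 0.0)

widget = vtk.vtkBorderWidget()
widget.SetInteractor(interactor)
widget.SetRepresentation(representation)
widget.SelectableOn()

interactor.Initialize()
widget.On()
render_window.Render()
interactor.Start()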

Getting the coordinates of elements on my laptop screen

I am making a fun Python program to automate a couple of clicks for me. I want to use the pyAutoGUI library for this. I am struggling to find the coordinates of the elements that I want to click. Is there a way to find the coordinates of these elements?
You can use pyautogui.mouseInfo().
MouseInfo is an application to display the XY position and RGB color information of the pixel currently under the mouse. Works on Python 2 and 3. This is useful for GUI automation planning.
The full documentation is at https://mouseinfo.readthedocs.io/en/latest/
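For completeness, a minimal sketch of how this might be used from a script (pyautogui.position() and pyautogui.click() are standard pyAutoGUI calls; the workflow itself is just an illustration):

import pyautogui

# Open the MouseInfo window to inspect coordinates and pixel colors interactively.
pyautogui.mouseInfo()

# Or read the cursor position directly from a script and click it later:
x, y = pyautogui.position()
print(f"Cursor is at ({x}, {y})")
pyautogui.click(x, y)  # click whatever is under the recorded coordinates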
Point Position (for Windows) is a simple tool that lets you pick the coordinates for any point on your screen (using X,Y axis). Simply point one of the four corner arrows at the spot on your screen that you want to define and click the button to display the X/Y coordinates.

How to get and set a pixel's color value using graphics.py in Python 3

I'm implementing the boundary fill algorithm for polygons in Python. How do I get and set the color of a pixel?
I'm using the graphics.py module.
Zelle graphics provides methods to manipulate the pixels of images, as documented in the code:
The library also provides a very simple class for pixel-based image
manipulation, Pixmap. A pixmap can be loaded from a file and displayed
using an Image object. Both getPixel and setPixel methods are provided
for manipulating the image.
But not for higher-level objects like polygons.
This answer to Get color of coordinate of figure drawn with Python Zelle graphics shows how to get the fill color of an object like a polygon located at a given (x, y) coordinate using the tkinter underpinnings of Zelle graphics. I doubt this technique can be used to set the color of a pixel of a polygon, however.
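A minimal sketch of the pixel-level methods, assuming a graphics.py version in which Image exposes getPixel and setPixel (the window size and coordinates below are arbitrary):

from graphics import GraphWin, Image, Point, color_rgb

# Sketch: read and write individual pixels of a blank Image drawn in a window.
win = GraphWin("Pixels", 200, 200)
img = Image(Point(100, 100), 200, 200)  # blank image centered in the window
img.draw(win)

r, g, b = img.getPixel(50, 50)               # read the color of pixel (50, 50)
img.setPixel(50, 50, color_rgb(255, 0, 0))   # paint that pixel red

win.getMouse()
win.close()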

Create histogram based on XY coordinates

I would like to be able to have a user drag points across an XY-plane, resulting in a histogram (in Python 3.3).
Consider the following picture, in which the red shows the motion the mouse made (start of arrow is the CLICK, end of the arrow is when the user LETS GO):
Is there any package in which this could be accomplished, or that you think would be of great help? The goal is to be able to create a discrete histogram with this shape.
I guess what I need is to be able to record a dragged path?
In R you can use locator() to register left mouse clicks on the current device. You could take these locations and build the histogram from them. I'm quite sure this will only work with discrete clicks, not a smooth dragging motion. See ?locator for more details about this function.
http://pygooglechart.slowchop.com/ is a Python wrapper for the Google Chart API.
Just take a look at the documentation, and especially check the examples on GitHub:
https://github.com/gak/pygooglechart/tree/master/examples
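As an additional sketch (matplotlib is my own suggestion, not mentioned in the thread), the dragged path can be recorded with mouse-event callbacks and its x-coordinates binned into a discrete histogram:

import matplotlib.pyplot as plt

# Sketch: record a mouse drag on the left axes, then bin the x-coordinates of the
# dragged path (weighted by their y-values) into a histogram on the right axes.
fig, (ax_draw, ax_hist) = plt.subplots(1, 2)
ax_draw.set_xlim(0, 10)
ax_draw.set_ylim(0, 10)
path = []                      # (x, y) samples collected while the button is held down
state = {"dragging": False}

def on_press(event):
    if event.inaxes is ax_draw:
        state["dragging"] = True
        path.clear()

def on_motion(event):
    if state["dragging"] and event.inaxes is ax_draw:
        path.append((event.xdata, event.ydata))

def on_release(event):
    state["dragging"] = False
    if path:
        xs, ys = zip(*path)
        ax_hist.clear()
        ax_hist.hist(xs, bins=10, weights=ys)  # weight each x-sample by its height
        fig.canvas.draw_idle()

fig.canvas.mpl_connect("button_press_event", on_press)
fig.canvas.mpl_connect("motion_notify_event", on_motion)
fig.canvas.mpl_connect("button_release_event", on_release)
plt.show()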

How to avoid copying the level Surface every frame in a Worms-like game?

I am working on a game that has destructible terrain (like in the game Worms, or Scorched Earth) and uses pixel perfect collision detection via masks.
The level is a single surface and how it works now is that I create a copy every frame, draw all sprites that need drawing on it, then blit the visible area to the display surface.
Is there any way to avoid copying the whole level surface every frame and still be able to use the pixel perfect collision tools found in pygame?
I tried blitting the level surface first, then blitting every sprite on the screen (with their blit coordinates adjusted by the camera, except for the player character whose coordinates are static), but in that case the collision detection system falls apart and I can't seem to fix it.
UPDATE
I have managed to make it work the following way:
When drawing the sprites, I convert their game world coordinates (which are basically coordinates relative to the origin of the level bitmap) to screen coordinates (coordinates relative to the camera, which is the currently visible area of the level).
During the collision detection phase I use coordinates and bounding boxes positioned relative to the level surface, just as before. The catch was that the camera's position is bound to the player's position, which is not and should never have been a static value (I am really not sure how I managed to not realize that for so long).
While this fixes my problem, the answer below is a much more comprehensive look on how to improve performance in a situation like this.
I am also open to suggestions to use other libraries that would make the ordeal easier, or faster. I have thought about pyglet and rabbyt, but it looks like the same problem exists there.
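For illustration, here is a rough sketch of the coordinate split described in the update (all names such as camera and last-frame attributes are made up, not from the actual project): sprites are drawn in camera/screen space, while pixel-perfect collision checks stay in level/world space.

import pygame

camera = pygame.Rect(0, 0, 800, 600)   # visible window into the level, follows the player

def world_to_screen(world_pos, camera):
    """Convert level-relative (world) coordinates to camera-relative (screen) coordinates."""
    return world_pos[0] - camera.x, world_pos[1] - camera.y

def draw_sprite(screen, sprite, camera):
    # Drawing uses screen coordinates...
    screen.blit(sprite.image, world_to_screen(sprite.rect.topleft, camera))

def hits_level(sprite, level_mask):
    # ...while collision stays in world coordinates: the sprite's mask is tested
    # against the level mask at the sprite's level-relative offset.
    sprite_mask = pygame.mask.from_surface(sprite.image)
    return level_mask.overlap(sprite_mask, sprite.rect.topleft) is not None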
This is an issue that used to come up a lot in the days before graphics accelerators, when computers were slow. You basically want to minimize the work required to refresh the screen. You are on the right track, but I recommend the following:
Keep a copy of the background available offscreen, as you are doing now.
Allocate a working bitmap that is the same size as the screen.
For each sprite, compute the bounding rectangle (bounding box) for its new and old positions.
If the new and old bounding boxes overlap, combine them into one larger box. If they do not overlap, treat them separately.
Group all the bounding boxes into sets that overlap. They might all end up in one set (when the sprites are close to each other), or each bounding box might be in a set by itself (when the sprites are far apart).
Copy the background to regions of the working bitmap corresponding to each bounding box set.
Copy the sprites for each set to the working bitmap in their new positions (in the correct z-order, of course!).
Finally, copy the finished offscreen bitmap to the display surface, set bounding box by set bounding box.
This approach minimizes the amount of copying that you have to do, both of background and sprite. If the sprites are small relative to the display area, the savings should be significant. The worst case is where the sprites are all arranged on a diagonal line, just barely overlapping each other. In this case, you might want to switch to a more generalized bounding shape than a box. Take a look at QuickDraw Regions for an example (the Wikipedia article, discussion, patent, and source are all available online).
Now, you may be thinking that the work to group the bounding boxes into sets is an O(n^2) operation, and you would be right. But it grows only with the square of the number of sprites: 16 sprites imply 256 comparisons. That's probably less work than a single sprite blit.
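A rough pygame sketch of the bounding-box bookkeeping described above (attribute names such as last_rect are illustrative, not from the asker's code): old and new boxes are merged per sprite, then overlapping boxes are clustered into the regions that actually need redrawing.

import pygame

def dirty_regions(sprites):
    boxes = []
    for sprite in sprites:
        old, new = sprite.last_rect, sprite.rect
        if new.colliderect(old):
            boxes.append(new.union(old))        # overlapping old/new -> one larger box
        else:
            boxes.extend([old.copy(), new.copy()])

    # O(n^2) clustering: keep unioning boxes that overlap until nothing merges.
    merged = True
    while merged:
        merged = False
        remaining, clustered = list(boxes), []
        while remaining:
            box = remaining.pop()
            hit = box.collidelist(remaining)
            if hit != -1:
                box = box.union(remaining.pop(hit))
                merged = True
            clustered.append(box)
        boxes = clustered
    return boxes

# Each returned region is then restored from the background copy, redrawn with its
# sprites, and pushed to the screen, e.g. with pygame.display.update(dirty_regions(sprites)).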
I focused on minimizing the pixel copying work. I must admit I am not familiar with the particulars of your collision detection library, but I get the idea. Hopefully that is compatible with the algorithm I have proposed.
Good luck. If you finish the game and post it online, put a link to it in your question or a comment.
