Zelle graphics: Access and Manipulate Display Buffer? - python

I am doing 2D graphics using graphics.py. I wonder if there is a way I can access the display buffer after all the 2D geometries are drawn and before they show up on the display. I need to do a post-process on the drawn image. For example, the tasks I am planning include anti-aliasing for smoother edges and applying a geometry warp map for correct projection on a curved screen. Can anyone tell me how to access the display buffer? If I have to go down to the tkinter Canvas, will it work, and how?
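
Since a GraphWin subclasses the tkinter Canvas, one possible approach is to dump the canvas to PostScript and post-process the captured frame with Pillow. This is only a sketch under clear assumptions: it requires Pillow plus Ghostscript (to decode the EPS data), and the window size, geometry, and the SMOOTH filter are placeholders rather than a real anti-aliasing or warping implementation.

    from graphics import GraphWin, Polygon, Point
    from PIL import Image, ImageFilter
    import io

    win = GraphWin("demo", 400, 300)
    Polygon(Point(50, 50), Point(350, 50), Point(200, 250)).draw(win)

    # GraphWin inherits from tkinter.Canvas, so Canvas.postscript() is available;
    # with no file argument it returns the PostScript data as a string.
    eps_data = win.postscript(colormode="color")

    # Pillow needs Ghostscript installed to decode EPS data.
    img = Image.open(io.BytesIO(eps_data.encode("utf-8")))

    # Apply your post-processing here (warp map, filtering, resampling, ...).
    # ImageFilter.SMOOTH is just a placeholder step.
    img = img.convert("RGB").filter(ImageFilter.SMOOTH)
    img.save("processed.png")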

Related

ReportLab Scale Canvas after Drawing (Fit to Page)

With the Python ReportLab library's canvas, the paradigm seems to be to apply transforms before drawing your primitives.
My desire is to scale what I've already drawn to fit the page size.
The problem is that I cannot know the extents of the drawn objects until I have drawn them all. This is because the input for drawing objects is a stream, which does not expose the maximum extents beforehand.
Right now, I have to:
Draw my objects
Check their extents
In a new canvas, set the scale to fit the extents to the page size
Draw objects again
In some cases, the object drawing can take a few seconds, and doing this twice really feels burdensome.
Is there any way to do this faster?
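
One possibility, sketched below under explicit assumptions, is to record the primitives into a PDF form with the Canvas beginForm/endForm/doForm methods so they are emitted only once, track the extents yourself while drawing, and then place the form scaled to the page. draw_stream() here is a hypothetical stand-in for your own stream-driven drawing code, and whether this is actually faster for your workload is something you would need to measure.

    from reportlab.pdfgen import canvas
    from reportlab.lib.pagesizes import letter

    def draw_stream(c):
        """Hypothetical stand-in for the real stream-driven drawing; it draws a
        couple of primitives and returns the extents it touched."""
        c.rect(100, 100, 800, 500)
        c.line(100, 100, 900, 600)
        return 100, 100, 900, 600

    PAGE_W, PAGE_H = letter
    c = canvas.Canvas("scaled.pdf", pagesize=letter)

    # Record the primitives into a named form. Note that a form has a bounding
    # box, so you may need to pass explicit lowerx/lowery/upperx/uppery values
    # to beginForm if your content extends beyond the default bounds.
    c.beginForm("content")
    xmin, ymin, xmax, ymax = draw_stream(c)
    c.endForm()

    # Place the recorded form once, scaled to fit the page.
    scale = min(PAGE_W / (xmax - xmin), PAGE_H / (ymax - ymin))
    c.saveState()
    c.scale(scale, scale)
    c.translate(-xmin, -ymin)
    c.doForm("content")     # the primitives themselves are not re-executed here
    c.restoreState()
    c.showPage()
    c.save()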

How to get and set pixel's color value using graphics.py in python3

I'm implementing the boundary fill algorithm for polygons in Python. How can I get and set the color of a pixel?
I'm using the graphics.py file.
Zelle graphics provides methods to manipulate the pixels of images, as documented in the code:
The library also provides a very simple class for pixel-based image
manipulation, Pixmap. A pixmap can be loaded from a file and displayed
using an Image object. Both getPixel and setPixel methods are provided
for manipulating the image.
But not of higher-level objects like polygons.
This answer to Get color of coordinate of figure drawn with Python Zelle graphics shows how to get the fill color of an object like a polygon located at a given (x, y) coordinate using the tkinter underpinnings of Zelle graphics. I doubt this technique can be used to set the color of a pixel of a polygon, however.
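
For completeness, here is a minimal sketch of per-pixel access with the Image class in current graphics.py (older releases document the pixel-based class as Pixmap); the window and image sizes are arbitrary placeholders.

    from graphics import GraphWin, Image, Point, color_rgb

    win = GraphWin("pixels", 200, 200)
    img = Image(Point(100, 100), 200, 200)   # a blank 200x200 image

    # Write a pure red pixel, then read it back as an [r, g, b] triple.
    img.setPixel(10, 10, color_rgb(255, 0, 0))
    r, g, b = img.getPixel(10, 10)
    print(r, g, b)                           # expected: 255 0 0

    img.draw(win)
    win.getMouse()                           # click to close
    win.close()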

Pygame Large Surfaces

I'm drawing a map of a real world floor with dimensions roughly 100,000mm x 200,000mm.
My initial code contained a function that converted any millimeter based position to screen positioning using the window size of my pygame map, but after digging through some of the pygame functions, I realized that the pygame transformation functions are quite powerful.
Instead, I'd like to create a surface that is a 1:1 scale of the real world and then scale it right before I blit it to the screen.
Is this the right way to be doing this? I get an error that says Width or Height too large. Is this a limit of pygame?
I don't fully understand your question, but I'll attempt to answer it with the following.
No, you should not fully draw to the screen and then scale it; this is the wrong approach. You should tile very large surfaces and only draw the relevant tiles. If you need a very large view, use a pre-scaled (scaled-down) image. The error is probably because the amount of memory required to draw an extremely large surface is prohibitive, and scaling it would be slow.
Convert the coordinates to the tiled version using some sort of global matrix that scales everything to the size you expect. You should also filter out sprites that are not visible by testing their inclusion in the bounding box of your view port. Keep track of your view port position; you will then be able to calculate where in the view port each sprite should be located based on its "world" coordinates.
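
A hedged sketch of that view-port approach in pygame is below; the millimetre-to-pixel factor, the sprite structure, and the function names are assumptions made for illustration, not part of the original question.

    import pygame

    # Assumed scale: how many screen pixels one millimetre of floor maps to.
    PIXELS_PER_MM = 0.01                     # 100,000 mm -> 1,000 px
    SCREEN_W, SCREEN_H = 800, 600

    def world_to_screen(pos_mm, viewport_mm):
        """Map a world position (mm) to pixels relative to the view port origin."""
        return (int((pos_mm[0] - viewport_mm[0]) * PIXELS_PER_MM),
                int((pos_mm[1] - viewport_mm[1]) * PIXELS_PER_MM))

    def draw_visible(screen, sprites, viewport_mm):
        """Blit only the sprites whose world-space rect touches the view port."""
        view_rect = pygame.Rect(viewport_mm[0], viewport_mm[1],
                                SCREEN_W / PIXELS_PER_MM, SCREEN_H / PIXELS_PER_MM)
        for image, rect_mm in sprites:       # (Surface, world-space pygame.Rect)
            if rect_mm.colliderect(view_rect):
                screen.blit(image, world_to_screen(rect_mm.topleft, viewport_mm))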
If your map is not dynamic, I would suggest drawing the map outside the game and loading it in-game.
If you plan on converting the game environment into a map, it might be difficult for a large environment. 100,000mm x 200,000mm is a very large area when converted into pixels. I would suggest scaling it down before loading.
As for scaling in-game, you can use pygame.transform.rotozoom or pygame.transform.smoothscale.
Also, as the first answer mentions, scaling can take significant memory and time for very large images, and scaling a very large image down to a very small one can make it incomprehensible.
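
For the in-game scaling part, a minimal sketch using pygame.transform.smoothscale might look like this; the map file name and the target size are placeholders.

    import pygame

    pygame.init()
    screen = pygame.display.set_mode((800, 600))

    # Load a pre-drawn map (placeholder file name) and scale it down once.
    full_map = pygame.image.load("floor_map.png").convert()
    scaled_map = pygame.transform.smoothscale(full_map, (800, 600))

    screen.blit(scaled_map, (0, 0))
    pygame.display.flip()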

Looking for parallel function to actionscript's BitmapData.draw() but in OpenGL

I have a Flash application that I have been working on for 11 months, and I would like to translate it to a different language/platform, preferably Python and OpenGL.
One of the main features of my program is to draw Flash vector graphics (or display objects) and then redraw them to a bitmap texture. Is there any way to do this in OpenGL? Basically, to draw some polygons on the screen and then draw these polygons onto a texture. If the texture is displayed directly below the polygons, and the polygons are in motion, then there is a dragging/drawing/painting effect.
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/BitmapData.html#draw() --> here is the Flash function I use.
Hopefully someone who is knowledgeable in OpenGL & ActionScript will be able to answer this question or provide me with some details. Thank you!
OpenGL doesn't provide any features for drawing your typical 2D vector graphics. It's a very generic API, but mostly suited for 3D solutions. Implementing the rendering capabilities of Flash in OpenGL is possible, but a lot of work to do yourself.
If you want only a subset (drawing sprites, triangles, convex polygons, lines; alpha blending), then yes, OpenGL may be a good and quick solution.
Otherwise, there's a standard called OpenVG which might be what you want. There are several implementations, some of which may already run on hardware. I haven't tried it so far, though - you'll have to check that one yourself.
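
For the narrower "draw polygons, then capture them into a texture" part of the question, a rough PyOpenGL sketch using glCopyTexImage2D is shown below. It assumes legacy fixed-function GL and a pygame-created window, and it is an illustration under those assumptions, not a full BitmapData.draw() replacement.

    import pygame
    from OpenGL.GL import *

    W, H = 512, 512
    pygame.init()
    pygame.display.set_mode((W, H), pygame.OPENGL | pygame.DOUBLEBUF)

    glViewport(0, 0, W, H)
    glMatrixMode(GL_PROJECTION)
    glLoadIdentity()
    glOrtho(0, W, 0, H, -1, 1)
    glMatrixMode(GL_MODELVIEW)
    glLoadIdentity()

    # Texture object that will receive a copy of the framebuffer.
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR)

    glClear(GL_COLOR_BUFFER_BIT)

    # 1. Draw some geometry into the back buffer.
    glColor3f(1.0, 0.4, 0.1)
    glBegin(GL_TRIANGLES)
    glVertex2f(100, 100); glVertex2f(400, 120); glVertex2f(250, 400)
    glEnd()

    # 2. Capture the framebuffer into the texture (the BitmapData.draw() analogue).
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, W, H, 0)

    # 3. Re-draw the captured image as a textured quad (here: full screen).
    glEnable(GL_TEXTURE_2D)
    glColor3f(1, 1, 1)
    glBegin(GL_QUADS)
    glTexCoord2f(0, 0); glVertex2f(0, 0)
    glTexCoord2f(1, 0); glVertex2f(W, 0)
    glTexCoord2f(1, 1); glVertex2f(W, H)
    glTexCoord2f(0, 1); glVertex2f(0, H)
    glEnd()
    glDisable(GL_TEXTURE_2D)

    pygame.display.flip()
    pygame.time.wait(2000)   # keep the window up briefly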

Rotating a glViewport?

In a "multitouch" environement, any application showed on a surface can be rotated/scaled to the direction of an user. Actual solution is to drawing the application on a FBO, and draw a rotated/scaled rectangle with the texture on it. I don't think it's good for performance, and all graphics cards don't provide FBO.
The idea is to clip the rendering viewport in the direction of user.
Since glViewport cannot be used for that, is another way exist to achieve that ?
(glViewport use (x, y, width, height), and i would like (x, y, width, height, rotation from center?))
PS: rotating the modelview or projection matrix will not help, i would like to "rotate the clipping plan" generated by glViewport. (only part of the all scene).
There's no way to have a rotated viewport in OpenGL; you have to handle it manually. I see the following possible solutions:
Keep on using textures, perhaps using glCopyTexSubImage instead of FBOs, as this is a basic OpenGL feature. If your target platforms are hardware accelerated, performance should be OK, depending on the number of viewports you need on your desk, as this is a very common use case nowadays.
Without textures, you could set your glViewport to the screen-aligned bounding rectangle (rA) of your rotated viewport (rB), also setting a proper scissor-test area. Then draw a masking area, possibly only into the depth or stencil buffer, filling the (rA - rB) region; that will prevent further drawing on those pixels. Then draw your application normally, using a glRotate to adjust your projection matrix so that the rendering is properly oriented according to rB.
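
A rough PyOpenGL sketch of that stencil-mask idea is below. It assumes a full-window viewport, a 2D orthographic projection in pixel coordinates (e.g. glOrtho(0, W, 0, H, -1, 1)), and a context created with a stencil buffer; render_app() stands in for the application's normal drawing, assumed to draw its content around the origin.

    from OpenGL.GL import *

    def draw_rotated_app(cx, cy, w, h, angle_deg, render_app):
        # 1. Mark the rotated rectangle rB in the stencil buffer only
        #    (colour writes disabled so nothing appears on screen yet).
        glEnable(GL_STENCIL_TEST)
        glClear(GL_STENCIL_BUFFER_BIT)
        glStencilFunc(GL_ALWAYS, 1, 0xFF)
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE)
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE)

        glMatrixMode(GL_MODELVIEW)
        glPushMatrix()
        glLoadIdentity()
        glTranslatef(cx, cy, 0)
        glRotatef(angle_deg, 0, 0, 1)
        glRectf(-w / 2.0, -h / 2.0, w / 2.0, h / 2.0)   # the rotated "viewport" rB
        glPopMatrix()
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE)

        # 2. Draw the application only where the stencil was set,
        #    rotated so it faces the user.
        glStencilFunc(GL_EQUAL, 1, 0xFF)
        glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP)
        glPushMatrix()
        glTranslatef(cx, cy, 0)
        glRotatef(angle_deg, 0, 0, 1)
        render_app()          # assumed to draw its content around the origin
        glPopMatrix()
        glDisable(GL_STENCIL_TEST)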
If you already have the code set up to render your scene, try adding a glRotate() call to the modelview matrix setup, to "rotate the camera" before rendering the scene.
