I want to convert an array of integers into some sort of 'picture' using PyQt (I've decided to do my app in Qt). I have an array like this:
Array = [
[0,0,1,0,0],
[0,1,0,1,0],
[1,0,0,0,1],
[0,1,0,1,0],
[0,0,1,0,0]]
Now I want to turn it into a picture by replacing each integer with, for example, a 10x10 pixel square. I have an RGB definition for each value in the array. What's more, this is some kind of game of life, so it must refresh on each step and shouldn't be slow. Maybe something similar to OpenCV?
Thanks in advance!
Cheers,
Mateusz
You could easily do the above with QGraphicsScene and QGraphicsView. In order to get good performance, you'll want to call setViewport(QGLWidget()) on your QGraphicsView instance. Create a subclass of QGraphicsItem to represent an element in your array. You'll then even be able to animate the changes if you want.
If you do want animations or are demonstrating some progression, such as in Conway's Game of Life, you might also want to take a look at QTimeLine.
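A minimal sketch of that approach (my own code, not the answerer's), using PyQt5 and a plain QTimer instead of QTimeLine to drive the refresh; the QGLWidget viewport is omitted for brevity:
import sys
from PyQt5.QtCore import Qt, QTimer
from PyQt5.QtGui import QBrush, QColor, QPen
from PyQt5.QtWidgets import QApplication, QGraphicsScene, QGraphicsView

CELL = 10                                                  # square size in pixels
COLORS = {0: QColor(0, 0, 0), 1: QColor(255, 255, 255)}    # value -> RGB

array = [[0, 0, 1, 0, 0],
         [0, 1, 0, 1, 0],
         [1, 0, 0, 0, 1],
         [0, 1, 0, 1, 0],
         [0, 0, 1, 0, 0]]

app = QApplication(sys.argv)
scene = QGraphicsScene()

# one 10x10 rectangle item per array cell
cells = [[scene.addRect(x * CELL, y * CELL, CELL, CELL,
                        QPen(Qt.NoPen), QBrush(COLORS[value]))
          for x, value in enumerate(row)]
         for y, row in enumerate(array)]

def step():
    # a real game of life would update `array` here, then recolour the items
    for y, row in enumerate(array):
        for x, value in enumerate(row):
            cells[y][x].setBrush(QBrush(COLORS[value]))

timer = QTimer()
timer.timeout.connect(step)
timer.start(100)                                           # refresh every 100 ms

view = QGraphicsView(scene)
view.show()
sys.exit(app.exec_())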
You can look up the equivalent Python-based documentation on either the PyQt* or PySide websites. PyQt and PySide use nearly identical APIs, so for most things you can use them interchangeably.
*Note: The PyQt website is inaccessible at the time of this writing
You should probably use Qt's graphics libraries for performance. Another, maybe simpler, way would be to use PIL (Python Imaging Library) or some Python bindings to the ImageMagick or MagickWand libraries (I haven't found a good and current one), use NumPy arrays for the calculations and manipulation, and draw on a surface or canvas using PyGame, Qt, or some other GUI toolkit.
In PIL there is PIL.Image.fromarray(np_array, 'RGBA'), which accepts suitable NumPy arrays; the datatype usually needs to be dtype=uint8 and the shape is (height, width, n_channels).
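As an illustration (my own sketch, not part of the original answer), the 0/1 array from the question can be mapped to RGB and scaled up to 10x10 squares like this:
import numpy as np
from PIL import Image

array = np.array([[0, 0, 1, 0, 0],
                  [0, 1, 0, 1, 0],
                  [1, 0, 0, 0, 1],
                  [0, 1, 0, 1, 0],
                  [0, 0, 1, 0, 0]], dtype=np.uint8)

palette = np.array([[0, 0, 0],            # value 0 -> black
                    [255, 255, 255]],     # value 1 -> white
                   dtype=np.uint8)

rgb = palette[array]                      # shape (5, 5, 3), dtype uint8
img = Image.fromarray(rgb, 'RGB')
img = img.resize((50, 50), Image.NEAREST) # 10x10 blocks, no smoothing
img.save('board.png')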
For a very simple graphics format that uses ascii byte values, see NetPBM.
I have written some code in Python which allows 3D objects to be defined in 3D object space and mapped onto a 2D screen. Currently the finished 2D polygons are drawn on the screen using the PyGame library, which works effectively, but I would like to go the full way and write code myself to complete the drawing operations PyGame does for me. This means I would like to manually control the drawing of each pixel on the screen, with the use of GPU support to accelerate the entire rendering process. From some reading it seems OpenGL is suitable for this sort of thing, but I'm not sure what the complete purpose of OpenGL is and whether I could achieve what I am trying to do in a better way. Do I really need to use OpenGL? Or is there another way for me to directly access my GPU to draw at the pixel by pixel level?
It sounds like OpenGL's programmable shaders are what you're looking for (in particular fragment shaders). They run massively parallel on a pixel-by-pixel basis, in the sense that basically you write a function that takes a single pixel location and computes its color. Note that this means that the individual pixels can't exchange information, though there are certain ways around that.
(Technically when I said "pixel" I meant "fragment", which is sort of a generalized version of a pixel.)
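To make that per-pixel function concrete, here is a tiny GLSL fragment shader compiled from Python with PyOpenGL; this is a simplified sketch, and it assumes an OpenGL context already exists (e.g. from a pygame or GLUT window):
from OpenGL.GL import GL_FRAGMENT_SHADER
from OpenGL.GL.shaders import compileShader

# the shader body is the per-fragment "function": gl_FragCoord is the
# pixel position, and the code computes that pixel's colour from it
FRAGMENT_SRC = """
#version 120
void main() {
    gl_FragColor = vec4(gl_FragCoord.x / 800.0,
                        gl_FragCoord.y / 600.0,
                        0.5, 1.0);
}
"""

# requires a current OpenGL context; raises an error if compilation fails
shader = compileShader(FRAGMENT_SRC, GL_FRAGMENT_SHADER)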
Is there a way to use OpenGL to draw offscreen? What I want to do is this: I want to be able to use functions like glVertex, and get the result in a 2D pixel array.
I am using Python. I tried using PyGame, but it's not working very well. The problem with PyGame is that it uses a window even though I don't need one. In addition, I had to draw the scene and flip the screen twice in order to access the screen pixels using glReadPixels.
Another problem is that I can't have more than one window at once.
Is there any proper way to accomplish what I am trying to do?
What you are asking for seems to be two things in one... you want an off-screen buffer (FBO) and you want to get the contents of the framebuffer in client memory.
Can you indicate which version of GL you are targeting?
If you are targeting OpenGL 3.0+, then you can use FBOs (Framebuffer Objects) and PBOs (Pixel Buffer Objects) to do this efficiently. However, since you are using glVertex, I do not think you need to bother with efficiency. I would focus on learning to use Framebuffer Objects for the time being.
If you are not using GL3 you might have access to the old EXT FBO extension, but if you do not have even that, you might need a PBuffer.
Note that PBuffers and Pixel Buffer Objects are two different things even though they sound the same. Before GL3/FBOs, WGL, GLX, etc. had special platform-specific functionality called Pixel Buffers for drawing off-screen.
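As a rough sketch of the FBO route (assuming PyOpenGL and an existing OpenGL 3.0+ context, e.g. from a hidden window):
from OpenGL.GL import *

WIDTH, HEIGHT = 256, 256

# colour attachment: an empty texture the FBO will render into
tex = glGenTextures(1)
glBindTexture(GL_TEXTURE_2D, tex)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, None)

fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0)
assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE

# ... normal drawing calls here go into the texture, not the window ...

# read the result back into client memory as raw RGBA bytes
pixels = glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE)

glBindFramebuffer(GL_FRAMEBUFFER, 0)   # back to the default framebuffer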
When I use pylab and python under Linux to draw and show an image, like in the following example:
img = pylab.imread(filename)
pylab.imshow(img)
pylab.show()
pylab.draw()
When I do so, a new window pops up with the image.
My question: How can I influence the position and the size?
The whole point of pylab's Image stuff is that you get a np.array of pixel data.
So, you can just do this:
img = pylab.imread(filename)
img = img * myTransformationMatrix
pylab.imshow(img)
If that immediately tells you what you need to know, great. If you don't understand what matrix multiplication has to do with rotating, translating, and scaling images, pylab is probably not the image library you want to use. Just use PIL.
If you're trying to manipulate the windows, rather than the images, pylab is really not meant for that.
You probably want to use TkInter, the windowing library that comes built-in with Python. It can be ugly, clunky, and slow, and some advanced uses are either impossible or require you to write Tcl code instead of Python… but for simple stuff, it's not going to be a step down from pylab. In fact, it's what pylab uses under the covers.
If you start to hit the limits of TkInter, it's time to look at an external windowing library. You can go with a full GUI framework like Gtk+, Qt, or wx. The Python bindings to the three aren't that different; the important difference lies in their slightly different models of how GUIs work, so read about them and pick the model you like best. Alternatively, you can use something like pygame, which does very bare-bones windowing (the kind of thing games would need, rather than, say, word processors).
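If all you need is to move or resize the window, a small sketch (assuming the TkAgg backend, i.e. the Tk window pylab creates under the covers) is to reach the figure manager's window directly:
import matplotlib
matplotlib.use('TkAgg')      # must be set before pylab is imported
import pylab

img = pylab.imread('example.png')              # placeholder file name
pylab.imshow(img)

manager = pylab.get_current_fig_manager()
manager.window.wm_geometry('800x600+100+50')   # WxH+X+Y in screen pixels
pylab.show()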
I would like to generate 2D images of 3D books with custom covers on demand.
Ideally, I'd like to import a 3D model of a book (created by an artist), change the cover texture to the custom one, and export a bitmap image (jpeg, png, etc...). I'm fairly ignorant about 3D graphics, so I'm not sure if that's possible or feasible, but it describes what I want to do. Another method would be fine if it accomplishes something similar. Like maybe I could start with a rendered 2D image and distort the custom cover somehow then put it in the right place over the original image?
It would be best if I could do this using Python, but if that's not possible, I'm open to other solutions.
Any suggestions on how to accomplish this?
Sure it's possible.
Blender would probably be overkill, but you can script Blender with Python, so that's one solution.
The second approach you describe, distorting a flat cover image and pasting it over a pre-rendered book, is (I'm pretty sure) what most of those e-book cover generators do, which is why they always look a little off.
PIL is an excellent tool for manipulating images and pixel data, so if you wanted to distort the cover yourself, it would be a great tool to look at; and if it goes too slow, it's trivial to convert the image to a NumPy array to get some speedup.
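A tiny sketch of that PIL/NumPy round trip (placeholder file names, not the answerer's code):
import numpy as np
from PIL import Image

cover = Image.open('custom_cover.png').convert('RGB')
pixels = np.asarray(cover, dtype=np.uint8)     # shape (height, width, 3)

# do the heavy per-pixel work in NumPy, e.g. a simple darkening pass
darkened = (pixels * 0.8).astype(np.uint8)

Image.fromarray(darkened, 'RGB').save('darkened_cover.png')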
So I'm going through the beginning stages of producing a game in Python, and I'm looking for a library that is able to manipulate pixels and blit them relatively fast.
My first thought was pygame, as it deals in pure 2D surfaces, but it only allows pixel access through Surface.get_at(), Surface.set_at() and Surface.get_buffer(), all of which lock the surface each time they're called, making them slow to use. I can also use the PixelArray and surfarray classes, but they are locked for the duration of their lifetimes, and the only way to blit them to a surface is either to copy the pixels to a new surface, or to use surfarray.blit_array, which requires creating a subsurface of the screen and blitting to that if the array is smaller than the screen (if it's bigger I can just use a slice of the array, which is no problem).
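For reference, the surfarray route described above looks roughly like this (a sketch, not the asker's code):
import numpy as np
import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))

buf = np.zeros((320, 240, 3), dtype=np.uint8)  # (width, height, 3), smaller than the screen
buf[..., 1] = 255                              # fill with green

target = screen.subsurface(pygame.Rect(0, 0, 320, 240))
pygame.surfarray.blit_array(target, buf)       # copies the array into the subsurface
pygame.display.flip()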
I don't have much experience with PyOpenGL or Pyglet, but I'm wondering whether there is a faster library for pixel manipulation, or a faster method for doing it in Pygame. I did some work with SDL and OpenGL in C, and I do like the idea of adding vertex/fragment shaders to my program.
My program will chiefly be dealing in loading images and writing/reading to/from surfaces.
Have you tried the Python Imaging Library? You'd still have to communicate the data back to pygame via frombuffer or somesuch to do the blitting, but the PIL can handle the pixel access.
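A minimal sketch of that PIL-to-pygame handoff (assuming a display surface called screen already exists; the file name is a placeholder):
import pygame
from PIL import Image

img = Image.open('tile.png').convert('RGB')
surface = pygame.image.frombuffer(img.tobytes(), img.size, 'RGB')
# screen.blit(surface, (0, 0)); pygame.display.flip()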
I checked out pyglet, and saw that it works well for static per pixel collision, when the image is not manipulated too much; however, I'm not sure how well it works with a dynamic image.
In short, I'm looking for a library that's able to quickly display a buffer of pixels. This buffer will be constantly changing, so fast access and blitting is essential. This could be done in C with relative ease using SDL; however, I'm looking for a similar method in Python. I'm not even too worried about hardware acceleration at this point, although it would certainly be nice.
Check out the Python bindings of the Simple and Fast Multimedia Library (SFML). From its documentation:
It implements the same 2D drawing and OpenGL-related functions (see their base class sf::RenderTarget for more details); the difference is that the result is stored in an off-screen texture rather than being shown in a window.
Rendering to a texture can be useful in a variety of situations:
precomputing a complex static texture (like a level's background from multiple tiles)
applying post-effects to the whole scene with shaders
creating a sprite from a 3D object rendered with OpenGL
etc.
Check also the contains and intersects methods of the sf::Rect<T> class template.