Moderngl: Render VAO with multiple shaders - python

I'm doing some stuff with 2D OpenGL rendering.
Is there a way to render a vertex array object but have the data be passed through multiple shaders? For example, a shader that applies a normal map to the texture, and then a shader that blurs the image. It would be very difficult and unclean to combine the two shaders into one, let alone potentially combining more than two. This is my current code for creating the vertex array object:
# TEX_COORDS = [0, 1, 1, 1,
#               0, 0, 1, 0]
# TEX_INDICES = [0, 1, 2,
#                1, 2, 3]
# self.vertices looks something like this: [-1, -1, 1, -1, -1, 1, 1, 1], but with different coordinates
self.vbo = self.ctx.buffer(struct.pack("8f", *self.vertices))
self.uv_map = self.ctx.buffer(struct.pack("8f", *TEX_COORDS))
self.ibo = self.ctx.buffer(struct.pack("6I", *TEX_INDICES))
self.vao_content = [(self.vbo, "2f", "vertexPos"), (self.uv_map, "2f", "vertexTexCoord")]
self.vao = self.ctx.vertex_array(self.program, self.vao_content, self.ibo) # self.program is the shader program object
And I'm doing texture.use() (texture being a moderngl texture object) and then self.vao.render() to render it onto the screen.

A single rendering call will only ever use a single set of vertex, fragment, and other shaders. You cannot chain together shaders for a particular stage via the API; you must manufacture a single such shader that does those multiple things.
How you go about that process is up to you. You can have one shader that has all possible operations, with a bunch of uniform variables that define which operations will be applied. Or you can dynamically build shaders to fit particular needs.
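As an illustration of the dynamic-building approach, here is a minimal sketch that assembles one fragment shader from optional effect snippets and compiles it with moderngl. The effect snippets, the uniform/attribute names, and the existing ctx, vertex shader source, vao_content and ibo are assumptions for the example, not the poster's actual code:
# Sketch only: compose one fragment shader from the effect snippets a sprite needs.
# Each snippet reads/writes the local variable `color`; a real normal-map or blur
# pass would additionally need its own uniforms/samplers declared in the template.
INVERT = "color.rgb = 1.0 - color.rgb;"
GRAYSCALE = "color.rgb = vec3(dot(color.rgb, vec3(0.299, 0.587, 0.114)));"

def build_fragment_shader(effect_snippets):
    body = "\n        ".join(effect_snippets)
    return """
    #version 330
    uniform sampler2D imageTexture;
    in vec2 uv;
    out vec4 fragColor;
    void main() {
        vec4 color = texture(imageTexture, uv);
        %s
        fragColor = color;
    }
    """ % body

# `ctx`, `VERTEX_SHADER` (assumed to pass the texture coordinate through as `uv`),
# `vao_content` and `ibo` are the objects from the question; only the fragment
# shader is assembled dynamically per object.
program = ctx.program(
    vertex_shader=VERTEX_SHADER,
    fragment_shader=build_fragment_shader([INVERT, GRAYSCALE]),
)
vao = ctx.vertex_array(program, vao_content, ibo)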

Related

How do I incorporate Vertex Buffer Array into my graphics program?

I am writing a 3d graphics toolkit for Python and PyQt5, using PyOpenGL. I am writing my own shaders to go with it, if that helps. What I am trying to do is go from using glBegin to using a Vertex Buffer Array. I have found the following on using VBOs:
http://www.songho.ca/opengl/gl_vbo.html - I could only scrape together a bit of information from this because it is in C/C++.
How to get VBOs to work with Python and PyOpenGL - This was in Python2 and was fairly limiting as a result.
I cannot, however, piece together how to take the vertices of each of my shape objects and compile them into a scene VBO. I also have no idea how the data in such an array is laid out. My initGL and paintGL functions are below, as are my vertex and fragment shaders' GLSL code.
def initGL(self):
    self.vertProg = open(self.vertPath, 'r')
    self.fragProg = open(self.fragPath, 'r')
    self.vertCode = self.vertProg.read()
    self.fragCode = self.fragProg.read()
    self.vertShader = shaders.compileShader(self.vertCode, GL_VERTEX_SHADER)
    self.fragShader = shaders.compileShader(self.fragCode, GL_FRAGMENT_SHADER)
    self.shader = shaders.compileProgram(self.vertShader, self.fragShader)
# paintGL uses shape objects, such as cube() or mesh(). Shape objects require the following:
# a list named 'vertices' - a list of points, from which edges and faces are drawn.
# a list named 'wires' - a list of tuples which refer to vertices, dictating where to draw wires.
# a list named 'facets' - a list of tuples which refer to vertices, dictating where to draw facets.
# a bool named 'render' - dictates whether or not to draw the shape.
# a bool named 'drawWires' - dictates whether wires should be drawn.
# a bool named 'drawFaces' - dictates whether facets should be drawn.
def paintGL(self):
    shaders.glUseProgram(self.shader)
    glLoadIdentity()
    gluPerspective(45, self.sizeX / self.sizeY, 0.1, 110.0)  # set perspective?
    glTranslatef(0, 0, self.zoomLevel)  # I used -10 instead of -2 in the PyGame version.
    glRotatef(self.rotateDegreeV + self.vOffset, 1, 0, 0)  # I used 2 instead of 1 in the PyGame version.
    glRotatef(self.rotateDegreeH, 0, 0, 1)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    for s in self.shapes:
        if s.drawWires:
            glBegin(GL_LINES)
            for w in s.wires:
                for v in w:
                    glVertex3fv(s.vertices[v])
            glEnd()
        if s.drawFaces:
            glBegin(GL_QUADS)
            for f in s.facets:
                for v in f:
                    glVertex3fv(s.vertices[v])
            glEnd()
Vertex shader:
#version 120
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment shader:
#version 120
void main() {
    gl_FragColor = vec4(0, 1, 0, 1);
}
In the final form of this project, I want to have information in my buffer for vertex positions, color, and maybe even glow. (That will come when I eventually move this to ray marching.) I also need a way to specify whether or not I should draw the wires and faces.
How do I set up and configure one or more VBOs to transfer all this information to the GPU and OpenGL?
Python 3.7.6, Windows 10
After researching a while longer, I decided to try using less specific search terms. I eventually stumbled upon this site: https://www.metamost.com/opengl-with-python/
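For completeness, here is a minimal sketch of how the shapes' data could be packed into one VBO/IBO pair and drawn against the #version 120 shaders above. The helper names, the facet-only (quad) drawing, and the use of the classic glVertexPointer path (which feeds gl_Vertex) are my own choices, not taken from the linked tutorial:
import numpy as np
from OpenGL.GL import *

def buildSceneBuffers(shapes):
    """Pack every shape's vertices and facet indices into one VBO / IBO pair.
    Sketch only: assumes each shape exposes .vertices and .facets as described above."""
    vertices = []
    indices = []
    offset = 0
    for s in shapes:
        vertices.extend(s.vertices)                 # list of (x, y, z) tuples
        for f in s.facets:
            indices.extend(v + offset for v in f)   # re-base indices into the combined list
        offset += len(s.vertices)
    vertexData = np.array(vertices, dtype=np.float32)
    indexData = np.array(indices, dtype=np.uint32)

    vbo = glGenBuffers(1)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, vertexData.nbytes, vertexData, GL_STATIC_DRAW)

    ibo = glGenBuffers(1)
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo)
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, indexData.nbytes, indexData, GL_STATIC_DRAW)
    return vbo, ibo, len(indexData)

def drawSceneBuffers(vbo, ibo, indexCount):
    """Draw the combined buffers; glVertexPointer feeds gl_Vertex in the #version 120 shader."""
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo)
    glEnableClientState(GL_VERTEX_ARRAY)
    glVertexPointer(3, GL_FLOAT, 0, None)
    glDrawElements(GL_QUADS, indexCount, GL_UNSIGNED_INT, None)
    glDisableClientState(GL_VERTEX_ARRAY)
The buffers would be built once in initGL and drawn in paintGL after the matrix setup, replacing the glBegin/glEnd loops.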

How to add text (QLabel) outside (on top) grid layout?

I'm using Pyqt5 to build a very simple GUI.
In this window I want to place some text with info on top of a grid layout. The grid is made of 2 columns, and I want the text to span the full width (like the HTML colspan attribute). I can't find a way to place the text in its entirety.
Text is: "Return a list of same-distance points couples from a file of 2D points"
I tried setting the QLabel containing the text as the 1x1 element of the grid and giving it a width of 2 columns, and I tried placing it manually with the move function; neither of these solutions shows the text properly.
class MatchDistance(QWidget):
    def initUI(self):
        super().initUI()
        self.setWindowTitle('Match distance')
        info_label = QLabel("Return a list of same-distance points couples from a file of 2D points", self)
        info_label.move(10, 10)
        # QPushButton and QLineEdit setup [...]
        self.grid.addWidget(self.input_file_1, 1, 1)
        self.grid.addWidget(self.output_file, 2, 1)
        self.grid.addWidget(self.btn_input_1, 1, 2)
        self.grid.addWidget(self.btn_output, 2, 2)
        self.grid.addWidget(self.btn_run, 3, 2)
        self.grid.addWidget(self.btn_mainwindow, 4, 2)
        self.setWindowTitle("script#1: Match distance")
        self.show()
Your description is confusing, so I will not refer to your implementation; to answer, you must take the following into account:
The indices that set the row and column start at 0.
If you use a layout, you can no longer use move(), since the position is handled by the layout.
Considering the above, the solution is:
self.grid.addWidget(info_label, 0, 0, 1, 2)
self.grid.addWidget(self.input_file_1, 1, 0)
self.grid.addWidget(self.output_file, 2, 0)
self.grid.addWidget(self.btn_input_1, 1, 1)
self.grid.addWidget(self.btn_output, 2, 1)
self.grid.addWidget(self.btn_run, 3, 1)
self.grid.addWidget(self.btn_mainwindow, 4, 1)
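For context, here is a minimal self-contained sketch of the same idea; the widget contents are invented for the example, not taken from the question's class:
import sys
from PyQt5.QtWidgets import QApplication, QWidget, QGridLayout, QLabel, QLineEdit, QPushButton

app = QApplication(sys.argv)
window = QWidget()
grid = QGridLayout(window)

# Row 0: the label spans 1 row and 2 columns (rowSpan=1, columnSpan=2).
info_label = QLabel("Return a list of same-distance points couples from a file of 2D points")
grid.addWidget(info_label, 0, 0, 1, 2)

# Rows 1-2: a line edit in column 0 and its button in column 1.
grid.addWidget(QLineEdit(), 1, 0)
grid.addWidget(QPushButton("Browse input"), 1, 1)
grid.addWidget(QLineEdit(), 2, 0)
grid.addWidget(QPushButton("Browse output"), 2, 1)

window.setWindowTitle("script#1: Match distance")
window.show()
sys.exit(app.exec_())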

visualize constructions composed of hexahedrons

I wrote a Python script using matplotlib to visualize and rotate polycubes, together with planes or axes that reveal symmetry in the polycube. Up to a certain point I succeeded, as you can see from the illustrations below. There you see one of seventy-seven possible 3D polycubes of six cubes. My script uses matplotlib and it seems to do a nice job: matplotlib allows me to rotate the polycube and view it at any wanted angle. There is a big problem with most polycubes, though, which is illustrated in the second image below. Shortly after I start rotating the polycube, matplotlib shows planes that are partly behind other planes and are therefore partly invisible; they should not be drawn at all, or only partly drawn.
I have been searching a lot on forums and with Google, but to no avail. There were hits suggesting that I should use mayavi instead of matplotlib, so I studied mayavi extensively. I spent literally weeks trying to figure out how to get mayavi going. The hits on docs.enthought seemed promising at first, but although mayavi is clearly suited to the purpose and superb at visualizing objects, I cannot find understandable documentation. I could use a real programmer's guide to vtk or tvtk. There is a lot of documentation, but mostly for designers, not programmers. If one is not available (?), I would also be happy with an example of a script with hexahedrons or irregular grids that works in Canopy's (1.6.2) implementation of Python.
I've modified an example from the vtk documentation (http://www.vtk.org/gitweb?p=VTK.git;a=blob_plain;f=Examples/DataManipulation/Python/BuildUGrid.py).
This example can be turned into a function that creates the geometry from a point coordinate array and an element array (a sketch of such a function follows the listing below).
I haven't used any guides for vtk; I usually refer to the python examples found here: http://www.vtk.org/Wiki/VTK/Examples/Python
import vtk
# Initialize the vtkPoints variable and set the number of points
points = vtk.vtkPoints()
points.SetNumberOfPoints(16)
# Add points to the variable, with the point id first, then the x, y, z coordinates.
# The point ids here simply run from 0 to 15.
points.InsertPoint(0, 0, 0, 0)
points.InsertPoint(1, 1, 0, 0)
points.InsertPoint(2, 1, 1, 0)
points.InsertPoint(3, 0, 1, 0)
points.InsertPoint(4, 0, 0, 1)
points.InsertPoint(5, 1, 0, 1)
points.InsertPoint(6, 1, 1, 1)
points.InsertPoint(7, 0, 1, 1)
points.InsertPoint(8, 0, 0, 1.1)
points.InsertPoint(9, 1, 0, 1.1)
points.InsertPoint(10, 1, 1, 1.1)
points.InsertPoint(11, 0, 1, 1.1)
points.InsertPoint(12, 0, 0, 2)
points.InsertPoint(13, 1, 0, 2)
points.InsertPoint(14, 1, 1, 2)
points.InsertPoint(15, 0, 1, 2.5)
# Define the hexahedron, then set the point Ids of the hexahedron cell/element.
# From the documentation: points (0,1,2,3) is the base of the hexahedron which, using the right hand rule, forms a
# quadrilateral whose normal points in the direction of the opposite face (4,5,6,7)
aHexahedron1 = vtk.vtkHexahedron()
aHexahedron1.GetPointIds().SetId(0, 0) # Cell point 0 corresponds to point 0 which was defined above
aHexahedron1.GetPointIds().SetId(1, 1)
aHexahedron1.GetPointIds().SetId(2, 2)
aHexahedron1.GetPointIds().SetId(3, 3)
aHexahedron1.GetPointIds().SetId(4, 4)
aHexahedron1.GetPointIds().SetId(5, 5)
aHexahedron1.GetPointIds().SetId(6, 6)
aHexahedron1.GetPointIds().SetId(7, 7)
# Define a second hexahedron
aHexahedron2 = vtk.vtkHexahedron()
aHexahedron2.GetPointIds().SetId(0, 8) # Cell point 0 corresponds to point 8 which was defined above
aHexahedron2.GetPointIds().SetId(1, 9)
aHexahedron2.GetPointIds().SetId(2, 10)
aHexahedron2.GetPointIds().SetId(3, 11)
aHexahedron2.GetPointIds().SetId(4, 12)
aHexahedron2.GetPointIds().SetId(5, 13)
aHexahedron2.GetPointIds().SetId(6, 14)
aHexahedron2.GetPointIds().SetId(7, 15)
# Define an unstructured grid.
aHexahedronGrid = vtk.vtkUnstructuredGrid()
# Add the hexahedron to the unstructured grid
# Note: this operation defines the point ids, and not the actual point coordinates
aHexahedronGrid.InsertNextCell(aHexahedron1.GetCellType(), aHexahedron1.GetPointIds())
aHexahedronGrid.InsertNextCell(aHexahedron2.GetCellType(), aHexahedron2.GetPointIds())
# Set the points, which include the coordinates. The point ids defined in the lines above correspond to the point
# ids that were defined earlier (i.e. points.InsertPoint(0, 0, 0, 0))
aHexahedronGrid.SetPoints(points)
# Now we have defined two hexahedrons and added them to an unstructured grid.
# We could create more hexahedrons and add them to the same unstructured grid.
# To view the unstructured grid, we need to define a mapper and set the unstructured grid as the input
aHexahedronMapper = vtk.vtkDataSetMapper()
aHexahedronMapper.SetInputData(aHexahedronGrid)
# Define an actor, and set the mapper as the input
aHexahedronActor = vtk.vtkActor()
aHexahedronActor.SetMapper(aHexahedronMapper)
# Create the usual rendering stuff.
ren = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren)
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renWin)
iren.SetInteractorStyle(vtk.vtkInteractorStyleTrackballCamera()) # Change the rotation type from the default to 'trackball'
ren.SetBackground(.1, .2, .4)
# Add the actor to the renderer to actually view the geometry
ren.AddActor(aHexahedronActor)
# Render the scene and start interaction.
iren.Initialize()
renWin.Render()
iren.Start()
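As mentioned above, the listing can be folded into a reusable function. A minimal sketch, where the function name and the array layout (one row of 8 point ids per hexahedron) are my own choices rather than anything from the VTK example:
import vtk

def hexahedral_grid(point_coords, hexa_point_ids):
    """Build a vtkUnstructuredGrid from a list of (x, y, z) coordinates and a list
    of 8-id connectivity rows, one row per hexahedron."""
    points = vtk.vtkPoints()
    for pid, (x, y, z) in enumerate(point_coords):
        points.InsertPoint(pid, x, y, z)
    grid = vtk.vtkUnstructuredGrid()
    for ids in hexa_point_ids:
        hexa = vtk.vtkHexahedron()
        for corner, pid in enumerate(ids):
            hexa.GetPointIds().SetId(corner, pid)
        grid.InsertNextCell(hexa.GetCellType(), hexa.GetPointIds())
    grid.SetPoints(points)
    return grid

# Example: the same two hexahedrons as in the listing above.
coords = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
          (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),
          (0, 0, 1.1), (1, 0, 1.1), (1, 1, 1.1), (0, 1, 1.1),
          (0, 0, 2), (1, 0, 2), (1, 1, 2), (0, 1, 2.5)]
cells = [range(0, 8), range(8, 16)]
grid = hexahedral_grid(coords, cells)
The returned grid can then be fed to the same vtkDataSetMapper / vtkActor / renderer setup shown above.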

python, vtk glyphs with independent position, orientation, color, and height

Using Python and VTK I am trying to render 10k cylinders to visualize gradient directions. I have reviewed multiple examples on the internet but none of them shows how you can, at the same time, change the position, orientation, color and height of each rendered cylinder independently.
I know it is possible to create an actor per cylinder but with 10k cylinders that would be very slow.
The following example code is my work in progress on this problem. I first define some data for 2 cylinders and next I try to visualize them with VTK.
What works is the position, orientation and color of the cylinders.
What does not work is the height of each cylinder.
A possible solution might be to use different cylinder sources to get the heights right, but I don't know how to apply those.
Perhaps a more experienced VTK programmer can enlighten me?
from vtk import *
# input data, every row is for a different item
positions = [[0, 0, 0],
             [1.5, 0, 0]]
orientations = [[1.0, 0.0, 0.0],
                [0.0, 1.0, 1.0]]
colors = [[255, 0, 0],
          [0, 255, 255]]
heights = [1,
           2]
# rendering of those two defined cylinders
points = vtkPoints()
points.InsertNextPoint(*positions[0])
points.InsertNextPoint(*positions[1])
polydata = vtkPolyData()
polydata.SetPoints(points)
color_def = vtkUnsignedCharArray()
color_def.SetNumberOfComponents(3)
color_def.SetNumberOfTuples(polydata.GetNumberOfPoints())
color_def.InsertTuple3(0, *colors[0])
color_def.InsertTuple3(1, *colors[1])
polydata.GetPointData().SetScalars(color_def)
pointNormalsArray = vtkDoubleArray()
pointNormalsArray.SetNumberOfComponents(3)
pointNormalsArray.SetNumberOfTuples(polydata.GetNumberOfPoints())
pointNormalsArray.SetTuple(0, orientations[0])
pointNormalsArray.SetTuple(1, orientations[1])
polydata.GetPointData().SetNormals(pointNormalsArray)
cyl_source = vtkCylinderSource()
cyl_source.SetResolution(10)
cyl_source.SetHeight(0.8)
cyl_source.SetRadius(0.1)
cyl_source.Update()
glyph = vtkGlyph3D()
glyph.SetInputData(polydata)
glyph.SetSourceConnection(cyl_source.GetOutputPort())
glyph.SetColorModeToColorByScalar()
glyph.SetVectorModeToUseNormal()
glyph.ScalingOff()
mapper = vtkPolyDataMapper()
mapper.SetInputConnection(glyph.GetOutputPort())
actor = vtkActor()
actor.SetMapper(mapper)
ren = vtkRenderer()
ren.AddActor(actor)
renwin = vtkRenderWindow()
renwin.AddRenderer(ren)
iren = vtkRenderWindowInteractor()
iren.SetRenderWindow(renwin)
renwin.Render()
iren.Initialize()
iren.Start()
It seems that http://www.vtk.org/Wiki/VTK/Examples/Python/Visualization/ClampGlyphSizes has something similar to what you need (I haven't tried):
# Tell glyph which attribute arrays to use for what
glyph.SetInputArrayToProcess(0,0,0,0,'Elevation') # scalars
glyph.SetInputArrayToProcess(1,0,0,0,'RTDataGradient') # vectors
# glyph.SetInputArrayToProcess(2,0,0,0,'nothing') # normals
glyph.SetInputArrayToProcess(3,0,0,0,'RTData') # colors
From the documentation (http://www.vtk.org/doc/nightly/html/classvtkGlyph3D.html#details):
You can set what arrays to use for the scalars, vectors, normals, and color scalars by using the SetInputArrayToProcess methods in vtkAlgorithm. The first array is scalars, the next vectors, the next normals and finally color scalars.
vtkGlyph3DMapper lets you set a three-component ScaleArray (on the mapper, rather than on the vtkGlyph3D filter), which you can then add to the point data:
glyph.SetScaleArray("CylinderScales")
scaleArray = vtkDoubleArray()
scaleArray.SetName("CylinderScales")
scaleArray.SetNumberOfComponents(3)
scaleArray.SetNumberOfTuples(polydata.GetNumberOfPoints())
scaleArray.SetTuple(0, (1, 1, 1))
scaleArray.SetTuple(1, (1, 1, 2))
polydata.GetPointData().AddArray(scaleArray)
The second cylinder should be twice as tall as the first, unless I got my axes wrong.
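Putting that together with the question's polydata, here is a minimal sketch that uses vtkGlyph3DMapper in place of the vtkGlyph3D + vtkPolyDataMapper pair. The array names, the scale-by-vector-components mode, and the direct-scalar coloring are my assumptions about what fits this setup, not tested against the poster's data:
# Sketch: replace the vtkGlyph3D/vtkPolyDataMapper pair with a single vtkGlyph3DMapper.
scaleArray = vtkDoubleArray()
scaleArray.SetName("CylinderScales")
scaleArray.SetNumberOfComponents(3)
scaleArray.SetNumberOfTuples(polydata.GetNumberOfPoints())
scaleArray.SetTuple(0, (1, 1, 1))   # first cylinder keeps the source height
scaleArray.SetTuple(1, (1, 2, 1))   # second cylinder doubled along the cylinder axis (y for vtkCylinderSource)
polydata.GetPointData().AddArray(scaleArray)

mapper = vtkGlyph3DMapper()
mapper.SetInputData(polydata)
mapper.SetSourceConnection(cyl_source.GetOutputPort())
mapper.ScalingOn()
mapper.SetScaleArray("CylinderScales")
mapper.SetScaleModeToScaleByVectorComponents()
mapper.OrientOn()
mapper.SetOrientationArray("Normals")        # assumes pointNormalsArray.SetName("Normals") was called above
mapper.SetOrientationModeToDirection()
mapper.SetColorModeToDirectScalars()         # use the RGB scalars as colors directly

actor = vtkActor()
actor.SetMapper(mapper)
All cylinders still share one actor and one source, so this should stay fast for 10k glyphs.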

"painting" one array onto another using python / numpy

I'm writing a library to process gaze tracking in Python, and I'm rather new to the whole numpy / scipy world. Essentially, I'm looking to take an array of (x,y) values in time and "paint" some shape onto a canvas at those coordinates. For example, the shape might be a blurred circle.
The operation I have in mind is more or less identical to using the paintbrush tool in Photoshop.
I've got an iterative algorithm that trims my "paintbrush" to be within the bounds of my image and adds each point to an accumulator image, but it's slow(!), and it seems like there's probably a fundamentally easier way to do this.
Any pointers as to where to start looking?
In your question you describe a Gaussian filter, for which scipy has support via the scipy.ndimage package.
For example:
from numpy import array
from numpy.random import rand
from matplotlib.pyplot import figure, imshow
from scipy.ndimage import gaussian_filter
# random "image"
I = rand(100, 100)
figure(1)
imshow(I)
# gaussian filter
J = gaussian_filter(I, sigma=10)
figure(2)
imshow(J)
Of course, you can apply this to the whole image, or just to a patch, using slicing:
J = array(I)  # copy image
J[30:70, 30:70] = gaussian_filter(I[30:70, 30:70], sigma=1)  # apply filter to a subregion
figure(2)
imshow(J)
For basic image manipulation, the Python Imaging Library (PIL) is probably what you want.
NOTE:
for "painting" with a "brush", I think you could just create a boolean mask array with your brush. For instance:
# 7x7 boolean mask with the "brush" (example: a _crude_ circle)
mask = array([[0, 0, 1, 1, 1, 0, 0],
              [0, 1, 1, 1, 1, 1, 0],
              [1, 1, 1, 1, 1, 1, 1],
              [1, 1, 1, 1, 1, 1, 1],
              [1, 1, 1, 1, 1, 1, 1],
              [0, 1, 1, 1, 1, 1, 0],
              [0, 0, 1, 1, 1, 0, 0]], dtype=bool)
# random image
I = rand(100, 100)
# apply the filter only under the mask:
# compute the Gaussian filter on the 7x7 subregion (not the whole image), then write back only the masked pixels
sub = gaussian_filter(I[40:47, 40:47], sigma=1)
I[40:47, 40:47][mask] = sub[mask]
You should really look into Andrew Straw's motmot and libcamiface. He uses it for fly behaviour experiments but it's a flexible library for doing just the kind of image acquisition and processing you're doing I think. There's a video of his presentation at SciPy2009.
As for the paintbrush scenario you mention, I'd make a copy of the image with the .copy() method, keep the paintbrush image in an array, and simply add it with
arr[first_br_row:last_br_row, first_br_col:last_br_col] += brush[first_row:last_row, first_col:last_col]
where you set first_br_row, last_br_row, first_br_col, last_br_col to address the subimage where you want to add the brush, and first_row, last_row, first_col, last_col to clip the brush (normally set them to 0 and the number of rows/cols - 1, but adjust them when you're close enough to the image boundary that you only want to paint part of the brush).
Hope all that helps.
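A minimal sketch of that clipping logic as a function (the names and the Gaussian brush are mine, not from the answer):
import numpy as np

def paint(canvas, brush, row, col):
    """Add `brush` onto `canvas` centred at (row, col), clipping at the canvas borders."""
    bh, bw = brush.shape
    top, left = row - bh // 2, col - bw // 2
    # canvas-side window, clipped to the image
    r0, r1 = max(top, 0), min(top + bh, canvas.shape[0])
    c0, c1 = max(left, 0), min(left + bw, canvas.shape[1])
    # matching brush-side window
    br0, bc0 = r0 - top, c0 - left
    canvas[r0:r1, c0:c1] += brush[br0:br0 + (r1 - r0), bc0:bc0 + (c1 - c0)]

canvas = np.zeros((100, 100))
y, x = np.mgrid[-3:4, -3:4]
brush = np.exp(-(x**2 + y**2) / 4.0)   # a small blurred-circle brush
paint(canvas, brush, 2, 98)            # partially off-canvas: only the overlap is painted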
Doing a little math in Fourier space may help: a translation (convolution with a Dirac delta) is just a multiplication by a phase term in Fourier space... this moves your paintbrush to the exact place (a solution similar to those of catchmeifyoutry and dwf, but it allows translations finer than a pixel, like 2.5, alas with some ringing). A sum of such strokes is then the sum of these operations.
In code:
import numpy
import pylab
from numpy import mgrid

def FTfilter(image, FTfilter):
    from scipy.fftpack import fftn, fftshift, ifftn, ifftshift
    FTimage = fftshift(fftn(image)) * FTfilter
    return numpy.real(ifftn(ifftshift(FTimage)))

def translate(image, vec):
    """
    Translate image by vec (in pixels)
    """
    u = ((vec[0] + image.shape[0] / 2) % image.shape[0]) - image.shape[0] / 2
    v = ((vec[1] + image.shape[1] / 2) % image.shape[1]) - image.shape[1] / 2
    f_x, f_y = mgrid[-1:1:1j * image.shape[0], -1:1:1j * image.shape[1]]
    trans = numpy.exp(-1j * numpy.pi * (u * f_x + v * f_y))
    return FTfilter(image, trans)

def occlude(image, mask):
    # combine in occlusive mode
    return numpy.max(numpy.dstack((image, mask)), axis=2)

if __name__ == '__main__':
    Image = numpy.random.rand(100, 100)
    X, Y = mgrid[-1:1:1j * Image.shape[0], -1:1:1j * Image.shape[1]]
    brush = X**2 + Y**2 < .05  # relative size of the brush
    # show the brush
    pylab.imshow(brush)
    # move it to some other position / use a threshold to avoid ringing
    brushed = translate(brush, [20, -10.51]) > .6
    pylab.imshow(brushed)
    pylab.imshow(occlude(Image, brushed))
    more_strokes = [[40, -15.1], [-40, -15.1], [-25, 15.1], [20, 10], [0, -10], [25, -10.51]]
    for stroke in more_strokes:
        brushed = brushed + translate(brush, stroke) > .6
    pylab.imshow(occlude(Image, brushed))
OpenCV uses numpy arrays and has basic drawing functions: circles, ellipses, polylines...
To draw a line you can call
cv2.line(array, previous_point, new_point, colour, thickness=x)
each time you get a mouse event.
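For instance, a minimal sketch of stamping soft circles onto a numpy canvas with OpenCV (assuming the opencv-python package, imported as cv2; the coordinates stand in for gaze points):
import numpy as np
import cv2

canvas = np.zeros((200, 200, 3), dtype=np.uint8)
for x, y in [(50, 50), (120, 80), (160, 150)]:                  # e.g. gaze coordinates
    cv2.circle(canvas, (x, y), 10, (0, 255, 0), thickness=-1)   # filled circle "stamp"
canvas = cv2.GaussianBlur(canvas, (15, 15), 0)                  # soften the strokes
cv2.imwrite("strokes.png", canvas)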
Have you looked into Tkinter?
The Python Imaging Library (PIL) may be of some help too.
