We recently encountered strange behavior in VTK (v7) when rendering a transparent object for the second time using depth peeling.
The first render looks correct, with transparency rendered as it should be. After closing the render window and creating another one (same set-up), VTK shows an empty render, and the application/Python crashes after that window is closed.
Problem Event Name: APPCRASH
Application Name: Test.exe
Application Version: 0.0.0.0
Application Timestamp: 57be97a5
Fault Module Name: nvoglv64.DLL
Fault Module Version: 9.18.13.2762
Fault Module Timestamp: 526ed933
Exception Code: c0000005
Exception Offset: 000000000062e180
I included a small example below; both Python (3.5) and C++ seem to behave similarly.
C++:
#include "vtkCylinderSource.h"
#include "vtkPolyDataMapper.h"
#include "vtkActor.h"
#include "vtkRenderer.h"
#include "vtkRenderWindow.h"
#include "vtkRenderWindowInteractor.h"
#include "vtkProperty.h"
#include "vtkCamera.h"
#include "vtkSmartPointer.h"
int main()
{
// This creates a polygonal cylinder model with eight circumferential facets
// (i.e., in practice an octagonal prism).
vtkSmartPointer<vtkCylinderSource> cylinder =
vtkSmartPointer<vtkCylinderSource>::New();
cylinder->SetResolution(8);
// The mapper is responsible for pushing the geometry into the graphics library.
// It may also do color mapping, if scalars or other attributes are defined.
vtkSmartPointer<vtkPolyDataMapper> cylinderMapper =
vtkSmartPointer<vtkPolyDataMapper>::New();
cylinderMapper->SetInputConnection(cylinder->GetOutputPort());
// The actor is a grouping mechanism: besides the geometry (mapper), it
// also has a property, transformation matrix, and/or texture map.
// Here we set its color and rotate it around the X and Y axes.
vtkSmartPointer<vtkActor> cylinderActor =
vtkSmartPointer<vtkActor>::New();
cylinderActor->SetMapper(cylinderMapper);
cylinderActor->GetProperty()->SetColor(1.0000, 0.3882, 0.2784);
cylinderActor->RotateX(30.0);
cylinderActor->RotateY(-45.0);
cylinderActor->GetProperty()->SetOpacity(0.5);
// The renderer generates the image
// which is then displayed on the render window.
// It can be thought of as a scene to which the actor is added
vtkSmartPointer<vtkRenderer> renderer =
vtkSmartPointer<vtkRenderer>::New();
renderer->AddActor(cylinderActor);
renderer->SetBackground(0.1, 0.2, 0.4);
// Zoom in a little by accessing the camera and invoking its "Zoom" method.
renderer->ResetCamera();
renderer->GetActiveCamera()->Zoom(1.5);
// The render window is the actual GUI window
// that appears on the computer screen
vtkSmartPointer<vtkRenderWindow> renderWindow =
vtkSmartPointer<vtkRenderWindow>::New();
renderWindow->SetSize(200, 200);
renderWindow->AddRenderer(renderer);
// 1. Use a render window with alpha bits (as initial value is 0 (false)):
renderWindow->SetAlphaBitPlanes(true);
// 2. Force to not pick a framebuffer with a multisample buffer
// (as initial value is 8):
renderWindow->SetMultiSamples(0);
// 3. Choose to use depth peeling (if supported) (initial value is 0 (false)):
renderer->SetUseDepthPeeling(true);
// 4. Set depth peeling parameters
// - Set the maximum number of rendering passes (initial value is 4):
renderer->SetMaximumNumberOfPeels(100);
// - Set the occlusion ratio (initial value is 0.0, exact image):
renderer->SetOcclusionRatio(0.1);
// The render window interactor captures mouse events
// and will perform appropriate camera or actor manipulation
// depending on the nature of the events.
vtkSmartPointer<vtkRenderWindowInteractor> renderWindowInteractor =
vtkSmartPointer<vtkRenderWindowInteractor>::New();
renderWindowInteractor->SetRenderWindow(renderWindow);
// This starts the event loop and as a side effect causes an initial render.
renderWindowInteractor->Start();
// This creates a polygonal cylinder model with eight circumferential facets
// (i.e., in practice an octagonal prism).
vtkSmartPointer<vtkCylinderSource> cylinder2 =
vtkSmartPointer<vtkCylinderSource>::New();
cylinder2->SetResolution(8);
// The mapper is responsible for pushing the geometry into the graphics library.
// It may also do color mapping, if scalars or other attributes are defined.
vtkSmartPointer<vtkPolyDataMapper> cylinderMapper2 =
vtkSmartPointer<vtkPolyDataMapper>::New();
cylinderMapper2->SetInputConnection(cylinder2->GetOutputPort());
// The actor is a grouping mechanism: besides the geometry (mapper), it
// also has a property, transformation matrix, and/or texture map.
// Here we set its color and rotate it around the X and Y axes.
vtkSmartPointer<vtkActor> cylinderActor2 =
vtkSmartPointer<vtkActor>::New();
cylinderActor2->SetMapper(cylinderMapper2);
cylinderActor2->GetProperty()->SetColor(1.0000, 0.3882, 0.2784);
cylinderActor2->RotateX(30.0);
cylinderActor2->RotateY(-45.0);
cylinderActor2->GetProperty()->SetOpacity(0.5);
// The renderer generates the image
// which is then displayed on the render window.
// It can be thought of as a scene to which the actor is added
vtkSmartPointer<vtkRenderer> renderer2 =
vtkSmartPointer<vtkRenderer>::New();
renderer2->AddActor(cylinderActor2);
renderer2->SetBackground(0.1, 0.2, 0.4);
// Zoom in a little by accessing the camera and invoking its "Zoom" method.
renderer2->ResetCamera();
renderer2->GetActiveCamera()->Zoom(1.5);
// The render window is the actual GUI window
// that appears on the computer screen
vtkSmartPointer<vtkRenderWindow> renderWindow2 =
vtkSmartPointer<vtkRenderWindow>::New();
renderWindow2->SetSize(200, 200);
renderWindow2->AddRenderer(renderer2);
// 1. Use a render window with alpha bits (as initial value is 0 (false)):
renderWindow2->SetAlphaBitPlanes(true);
// 2. Force to not pick a framebuffer with a multisample buffer
// (as initial value is 8):
renderWindow2->SetMultiSamples(0);
// 3. Choose to use depth peeling (if supported) (initial value is 0 (false)):
renderer2->SetUseDepthPeeling(true);
// 4. Set depth peeling parameters
// - Set the maximum number of rendering passes (initial value is 4):
renderer2->SetMaximumNumberOfPeels(100);
// - Set the occlusion ratio (initial value is 0.0, exact image):
renderer2->SetOcclusionRatio(0.1);
// The render window interactor captures mouse events
// and will perform appropriate camera or actor manipulation
// depending on the nature of the events.
vtkSmartPointer<vtkRenderWindowInteractor> renderWindowInteractor2 =
vtkSmartPointer<vtkRenderWindowInteractor>::New();
renderWindowInteractor2->SetRenderWindow(renderWindow2);
// This starts the event loop and as a side effect causes an initial render.
renderWindowInteractor2->Start();
return 0;
}
and Python:
import vtk
###
# First Render
###
ren = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren)
ren.SetBackground([1., 1., 1.])
ren.SetUseDepthPeeling(1)
ren.SetOcclusionRatio(0.1)
ren.SetMaximumNumberOfPeels(100)
renWin.SetMultiSamples(0)
renWin.SetAlphaBitPlanes(1)
# create a renderwindowinteractor
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renWin)
# create source
source = vtk.vtkCylinderSource()
source.SetCenter(0, 0, 0)
source.SetRadius(5.0)
source.SetHeight(7.0)
source.SetResolution(100)
source.Update()
# mapper
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputData(source.GetOutput())
# actor
actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.GetProperty().SetOpacity(0.5)
# assign actor to the renderer
ren.AddActor(actor)
ren.ResetCamera()
# enable user interface interactor
iren.Initialize()
renWin.Render()
# print(ren)
iren.Start()
# close_window(iren)
# del renWin, ren
###
# Second Render
###
ren = vtk.vtkRenderer()
renWin = vtk.vtkRenderWindow()
renWin.AddRenderer(ren)
ren.SetBackground([1., 1., 1.])
ren.SetUseDepthPeeling(1)
ren.SetOcclusionRatio(0.1)
ren.SetMaximumNumberOfPeels(100)
renWin.SetMultiSamples(0)
renWin.SetAlphaBitPlanes(1)
# create a renderwindowinteractor
iren = vtk.vtkRenderWindowInteractor()
iren.SetRenderWindow(renWin)
# create source
source = vtk.vtkCylinderSource()
source.SetCenter(0, 0, 0)
source.SetRadius(5.0)
source.SetHeight(7.0)
source.SetResolution(100)
source.Update()
# mapper
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputData(source.GetOutput())
# actor
actor = vtk.vtkActor()
actor.SetMapper(mapper)
actor.GetProperty().SetOpacity(0.5)
# assign actor to the renderer
ren.AddActor(actor)
ren.ResetCamera()
# enable user interface interactor
iren.Initialize()
renWin.Render()
iren.Start()
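The close_window helper referenced in the commented-out cleanup after the first render is not defined in this snippet; for reference, a minimal sketch of the usual explicit-teardown pattern (the helper name is just the one the comment refers to):
def close_window(iren):
    # Explicitly release the window's OpenGL resources, then stop the
    # interactor's event loop.
    render_window = iren.GetRenderWindow()
    render_window.Finalize()
    iren.TerminateApp()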
Related
I am currently building Panel widgets that manipulate 3D shapes using the PyVista / VTK extension, but I cannot find a way to modify the camera viewpoint of another subplot.
Following the VTKInteractive example, I found that Panel wraps pl.ren_win as vtkpan, and its camera position can be accessed and edited through vtkpan.camera in the usual param.Parameterized manner. But when using two or more subplots, vtkpan.camera sticks to the first subplot.
import numpy as np
import panel as pn
import pyvista as pv

pl = pv.Plotter(shape=(1, 2))
pyramid = pv.Pyramid()
pl.subplot(0, 0) # A
actor_A = pl.add_mesh(pyramid, color='r')
pl.camera_position = np.random.random((3,3))
pl.set_scale(0.8, 0.8, 0.8)
pl.subplot(0, 1) # B
actor_B = pl.add_mesh(pyramid, color='b')
pl.camera_position = np.random.random((3,3))
pl.set_scale(0.8, 0.8, 0.8)
vtkpan = pn.panel(pl.ren_win, sizing_mode='fixed', width=400)
vtkpan  # display the pane (in a notebook cell)
vtkpan.camera = {'position': np.array(vtkpan.camera['position']) * [-1],
                 'viewUp': np.array(vtkpan.camera['viewUp']) * [-1]}
actor_B.GetProperty().SetColor(np.random.random((3)))
vtkpan.synchronize() # change actor_B properties
By modifying the vtkpan.camera params, only the left camera viewpoint is changed. Even when modifying the VTK camera directly with list(pl.ren_win.GetRenderers())[0].GetActiveCamera().SetViewUp([...]), nothing happens, suggesting that synchronize() only syncs the renderers' actors. I know there is a workaround: splitting it into two vtkpan objects, as sketched below. However, if there is a way to do this with a single pane, please tell me.
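For reference, a minimal sketch of that two-pane workaround (one Plotter, and hence one render window, per pane; the names are illustrative):
import panel as pn
import pyvista as pv

# One Plotter per pane, so each VTK pane has its own render window and
# its own independent `camera` parameter.
pl_a = pv.Plotter()
pl_a.add_mesh(pv.Pyramid(), color='r')
pl_b = pv.Plotter()
pl_b.add_mesh(pv.Pyramid(), color='b')

vtkpan_a = pn.panel(pl_a.ren_win, sizing_mode='fixed', width=200)
vtkpan_b = pn.panel(pl_b.ren_win, sizing_mode='fixed', width=200)
pn.Row(vtkpan_a, vtkpan_b)  # display the two panes side by side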
I am writing a 3D graphics toolkit for Python and PyQt5, using PyOpenGL. I am writing my own shaders to go with it, if that helps. What I am trying to do is move from using glBegin to using vertex buffer objects (VBOs). I have found the following on using VBOs:
http://www.songho.ca/opengl/gl_vbo.html - I could only scrape together a bit of information from this because it is in C/C++.
How to get VBOs to work with Python and PyOpenGL - This was in Python2 and was fairly limiting as a result.
I cannot, however, piece together what I need to take the vertices of each of my shape objects and compile them into a scene VBO. I also have no idea how the data in such an array is laid out. My initGL and paintGL functions are below, as are my vertex and fragment shaders' GLSL code.
def initGL(self):
    # Read the shader sources (closing the files when done) and compile them.
    with open(self.vertPath, 'r') as vertProg:
        self.vertCode = vertProg.read()
    with open(self.fragPath, 'r') as fragProg:
        self.fragCode = fragProg.read()
    self.vertShader = shaders.compileShader(self.vertCode, GL_VERTEX_SHADER)
    self.fragShader = shaders.compileShader(self.fragCode, GL_FRAGMENT_SHADER)
    self.shader = shaders.compileProgram(self.vertShader, self.fragShader)
#paintGL uses shape objects, such as cube() or mesh(). Shape objects require the following:
#a list named 'vertices' - a list of points, from which edges and faces are drawn.
#a list named 'wires' - a list of tuples that refer to vertices, dictating where to draw wires.
#a list named 'facets' - a list of tuples that refer to vertices, dictating where to draw facets.
#a bool named 'render' - dictates whether or not to draw the shape.
#a bool named 'drawWires' - dictates whether wires should be drawn.
#a bool named 'drawFaces' - dictates whether facets should be drawn.
def paintGL(self):
    shaders.glUseProgram(self.shader)
    glLoadIdentity()
    gluPerspective(45, self.sizeX / self.sizeY, 0.1, 110.0) #set perspective?
    glTranslatef(0, 0, self.zoomLevel) #I used -10 instead of -2 in the PyGame version.
    glRotatef(self.rotateDegreeV + self.vOffset, 1, 0, 0) #I used 2 instead of 1 in the PyGame version.
    glRotatef(self.rotateDegreeH, 0, 0, 1)
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    for s in self.shapes:
        if s.drawWires:
            glBegin(GL_LINES)
            for w in s.wires:
                for v in w:
                    glVertex3fv(s.vertices[v])
            glEnd()
        if s.drawFaces:
            glBegin(GL_QUADS)
            for f in s.facets:
                for v in f:
                    glVertex3fv(s.vertices[v])
            glEnd()
Vertex shader:
#version 120
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment shader:
#version 120
void main() {
    gl_FragColor = vec4(0, 1, 0, 1);
}
In the final form of this project, I want to have information in my buffer for vertex positions, color, and maybe even glow (that will come when I eventually move this to ray marching). I also need a way to specify whether or not the wires and faces should be drawn.
How do I set up and configure one or more VBOs to transfer all this information to the GPU and OpenGL?
Python 3.7.6, Windows 10
After researching a while longer, I decided to try less specific search terms. I eventually stumbled upon this site: https://www.metamost.com/opengl-with-python/
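In short, the pattern is: pack the vertex attributes into one interleaved numpy array, upload it once with glBufferData, and describe the layout with glVertexAttribPointer. A minimal PyOpenGL sketch (the triangle data and attribute locations are illustrative, not taken from the linked tutorial):
import ctypes
import numpy as np
from OpenGL.GL import *

# Interleave position (3 floats) and colour (3 floats) per vertex.
vertex_data = np.array([
    # x,    y,   z,   r,   g,   b
    -0.5, -0.5, 0.0, 1.0, 0.0, 0.0,
     0.5, -0.5, 0.0, 0.0, 1.0, 0.0,
     0.0,  0.5, 0.0, 0.0, 0.0, 1.0,
], dtype=np.float32)

vbo = glGenBuffers(1)
glBindBuffer(GL_ARRAY_BUFFER, vbo)
glBufferData(GL_ARRAY_BUFFER, vertex_data.nbytes, vertex_data, GL_STATIC_DRAW)

stride = 6 * 4  # 6 floats per vertex, 4 bytes per float
glEnableVertexAttribArray(0)  # matches "layout(location = 0)" in the vertex shader
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, ctypes.c_void_p(0))
glEnableVertexAttribArray(1)  # matches "layout(location = 1)"
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, stride, ctypes.c_void_p(3 * 4))

# At draw time, with the shader program bound:
# glDrawArrays(GL_TRIANGLES, 0, 3)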
I am using folium to generate maps, and one of the features I am including is the MarkerCluster overlay, as I am frequently plotting thousands of points on a map. The clustering groups GPS points together and overlays a number on the map icon representing how many points are in that cluster. By default, clusters with few points get a green icon, and clusters with more points shift towards the red end of the spectrum. Ideally, I would like to reverse this, so that a location with many consolidated points gets a green icon, while a location with only a few gets a red one. I'm thinking this needs to be edited somewhere in the branca module, but I'm not sure, and I'm generally unfamiliar with how branca works. Any help is much appreciated.
Here's an example of how marker clusters are typically created:
import folium
from folium.plugins import MarkerCluster
#Create the map image and establish the center point
mapImage = folium.Map(location=[40.165505, -99.788130],
                      zoom_start=12,
                      tiles='OpenStreetMap')
#Create the marker cluster group, which organizes all the gps points put into it
marker_cluster_group = MarkerCluster(name='Cluster Icons')
#This is just a reference to a default google mapping icon, purely optional
pointIcon_url = "http://maps.google.com/mapfiles/kml/shapes/shaded_dot.png"
#Create the icon object
icon = folium.features.CustomIcon(pointIcon_url, icon_size=(15, 15))
#Create the marker/gps point and add it to the cluster group
folium.Marker([40.058377, -99.939192], icon=icon).add_to(marker_cluster_group)
#Adding the cluster group to the map image
marker_cluster_group.add_to(mapImage)
You can provide the MarkerCluster class with an argument icon_create_function which will style the cluster icons:
https://github.com/python-visualization/folium/blob/8595240517135d1637ca4cf7cc624045f1d911b3/folium/plugins/marker_cluster.py#L31
Here you can see an example of what that function should look like:
https://github.com/Leaflet/Leaflet.markercluster#customising-the-clustered-markers
So it's a JavaScript function that you provide to folium as a string.
With the help of @Conengmo's response, I was able to get the info I needed and modify it to create the code below.
import folium
from folium.plugins import MarkerCluster
#Create the map image and establish the center point
mapImage = folium.Map(location=[40.165505, -99.788130],
                      zoom_start=12,
                      tiles='OpenStreetMap')
#Create a variable to store your JavaScript function (written as a string), which adjusts the default CSS functionality
#The below are the attributes that I needed for my project, but they can be whatever is needed for you
icon_create_function = """
function(cluster) {
    var childCount = cluster.getChildCount();
    var c = ' marker-cluster-';
    if (childCount < 50) {
        c += 'large';
    } else if (childCount < 300) {
        c += 'medium';
    } else {
        c += 'small';
    }
    return new L.DivIcon({ html: '<div><span>' + childCount + '</span></div>',
                           className: 'marker-cluster' + c,
                           iconSize: new L.Point(40, 40) });
}
"""
#Create the marker cluster group, which organizes all the gps points put into it
marker_cluster_group = MarkerCluster(name='Cluster Icons', icon_create_function=icon_create_function)
#This is just a reference to a default google mapping icon, purely optional
pointIcon_url = "http://maps.google.com/mapfiles/kml/shapes/shaded_dot.png"
#Create the icon object
icon = folium.features.CustomIcon(pointIcon_url, icon_size=(15, 15))
#Create the marker/gps point and add it to the cluster group
folium.Marker([40.058377, -99.939192], icon=icon).add_to(marker_cluster_group)
#Adding the cluster group to the map image
marker_cluster_group.add_to(mapImage)
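To actually view the result, the map still has to be written to a file (the filename here is just an example):
#Save the finished map to an HTML file and open it in a browser
mapImage.save('cluster_map.html')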
import vtk
import pickle
from numpy import *

# Ilog (a 728x728x25 logical array) is assumed to have been loaded earlier,
# e.g. via pickle.
data_matrix = Ilog
dataImporter = vtk.vtkImageImport()
# The previously created array is converted to a byte string and imported.
data_string = data_matrix.tostring()
dataImporter.CopyImportVoidPointer(data_string, len(data_string))
# The type of the newly imported data is set to unsigned char (uint8).
dataImporter.SetDataScalarTypeToUnsignedChar()
# The data has a single scalar (intensity) component per voxel, and the importer
# must be told this is the case.
dataImporter.SetNumberOfScalarComponents(1)
# The following two functions describe how the data is stored and the dimensions of the
# array it is stored in. Here the volume is 728x728x25, so the extents run from 0 to 727
# along x and y and from 0 to 24 along z. I have to admit, however, that I honestly don't
# know the difference between SetDataExtent() and SetWholeExtent(), although VTK complains
# if both are not used.
dataImporter.SetDataExtent(0, 727, 0, 727, 0, 24)
dataImporter.SetWholeExtent(0, 727, 0, 727, 0, 24)
# The following class is used to store transparency values for later retrieval. Here the
# value 0 should be completely opaque and the value 1 completely transparent.
alphaChannelFunc = vtk.vtkPiecewiseFunction()
alphaChannelFunc.AddPoint(0, 1)
alphaChannelFunc.AddPoint(0.5, 0.9)
alphaChannelFunc.AddPoint(0.7, 0.9)
alphaChannelFunc.AddPoint(1, 0)
colorFunc = vtk.vtkColorTransferFunction()
colorFunc.AddRGBPoint(0, 1,1,1)
colorFunc.AddRGBPoint(0.2, 0.9, 0.9, 0.9)
colorFunc.AddRGBPoint(0.7, 0.2, 0.2, 0.2)
colorFunc.AddRGBPoint(1, 0, 0, 0)
# The previous two classes stored properties. Because we want to apply these properties
# to the volume we want to render, we have to store them in a class that stores volume
# properties.
volumeProperty = vtk.vtkVolumeProperty()
volumeProperty.SetColor(colorFunc)
volumeProperty.SetScalarOpacity(alphaChannelFunc)
# This class describes how the volume is rendered (through ray tracing).
compositeFunction = vtk.vtkVolumeRayCastCompositeFunction()
# We can finally create our volume. We also have to specify the data for it, as well as how the data will be rendered.
volumeMapper = vtk.vtkVolumeRayCastMapper()
volumeMapper.SetVolumeRayCastFunction(compositeFunction)
volumeMapper.SetInputConnection(dataImporter.GetOutputPort())
# The class vtkVolume is used to pair the previously declared volume as well as the properties to be used when rendering that volume.
volume = vtk.vtkVolume()
volume.SetMapper(volumeMapper)
volume.SetProperty(volumeProperty)
# With almost everything else ready, it's time to initialize the renderer and window, as well as to create a method for exiting the application.
renderer = vtk.vtkRenderer()
renderWin = vtk.vtkRenderWindow()
renderWin.AddRenderer(renderer)
renderInteractor = vtk.vtkRenderWindowInteractor()
renderInteractor.SetRenderWindow(renderWin)
# We add the volume to the renderer ...
renderer.AddVolume(volume)
# ... set the background color ...
renderer.SetBackground(0.5,0.5,0.7)
# ... and set window size.
renderWin.SetSize(800,800)
# A simple function to be called when the user decides to quit the application.
def exitCheck(obj, event):
    if obj.GetEventPending() != 0:
        obj.SetAbortRender(1)
# Tell the application to use the function as an exit check.
renderWin.AddObserver("AbortCheckEvent", exitCheck)
renderInteractor.Initialize()
# Because nothing will be rendered without any input, we order the first render manually before control is handed over to the main-loop.
renderWin.Render()
renderInteractor.Start()
Ilog is a logical matrix of size 728x728x25 whose cross-section is shown in the first attached image (the red color signifies the value 1 and the blue color signifies the value 0). But when the above code is run, the output is always a solid box (see the second attached image).
The matrix contains only zeros and ones. By the logic above, the values of one have been given full transparency and the values of zero full opacity.
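Note that for uint8 data the transfer functions are indexed by scalar value, so with binary data only the breakpoints at 0 and 1 are ever sampled directly; a minimal sketch of the intended mapping:
import vtk

# With only the scalar values 0 and 1 present, intermediate breakpoints
# (0.5, 0.7, ...) merely shape the interpolation between these two points.
alphaChannelFunc = vtk.vtkPiecewiseFunction()
alphaChannelFunc.AddPoint(0, 1.0)  # value 0 -> fully opaque
alphaChannelFunc.AddPoint(1, 0.0)  # value 1 -> fully transparent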
I am trying to make a small application using PyQt5 and PyOpenGL. Everything works fine, but rendering takes way too long, even with only one sphere. I tried different routes to optimise the speed of the app, and right now I am using a simple QWindow with an OpenGLSurface.
I managed to figure out that it is the context.swapBuffers call that takes a long time to complete, varying between approx. 0.01s (which is fine) and 0.05s (which is way too long) when displaying one sphere with some shading and 240 vertices.
Now my questions are the following: Is this normal? If so, is there a way to speed this process up, or is this related to how PyQt works, since it is a Python wrapper around the library? Basically: is there any way for me to continue developing this program without needing to learn C++? It's quite a simple application that just needs to visualise some atomic structures and be able to manipulate them.
Is there another GUI toolkit I could use to have less overhead when working with OpenGL from PyOpenGL?
This is the definition that does the rendering:
def renderNow(self):
    if not self.isExposed():
        return
    self.m_update_pending = False
    needsInitialize = False
    if self.m_context is None:
        self.m_context = QOpenGLContext(self)
        self.m_context.setFormat(self.requestedFormat())
        self.m_context.create()
        needsInitialize = True
    self.m_context.makeCurrent(self)
    if needsInitialize:
        self.m_gl = self.m_context.versionFunctions()
        self.m_gl.initializeOpenGLFunctions()
        self.initialize()
    self.render()
    self.m_context.swapBuffers(self)
    if self.m_animating:
        self.renderLater()
I am using OpenGL directly, without the Qt OpenGL definitions; the format for the surface is given by:
fmt = QSurfaceFormat()
fmt.setVersion(4, 2)
fmt.setProfile(QSurfaceFormat.CoreProfile)
fmt.setSamples(4)
fmt.setSwapInterval(1)
QSurfaceFormat.setDefaultFormat(fmt)
Edit1:
Some more clarification on how my code works:
def render(self):
    t1 = time.time()
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    wtvMatrix = self.camera.get_wtv_mat()
    transformMatrix = matrices.get_projection_matrix(60, self.width() / self.height(), 0.1, 30, matrix=wtvMatrix)
    transformMatrixLocation = glGetUniformLocation(self.shader, "transformMatrix")
    glUniformMatrix4fv(transformMatrixLocation, 1, GL_FALSE, transformMatrix)
    eye_pos_loc = glGetUniformLocation(self.shader, "eye_world_pos0")
    glUniform3f(eye_pos_loc, self.camera.position[0], self.camera.position[1], self.camera.position[2])
    glDrawElementsInstanced(GL_TRIANGLES, self.num_vertices, GL_UNSIGNED_INT, None, self.num_objects)
    print("drawing took:{}".format(time.time() - t1))
    self.frame += 1
    t1 = time.time()
    self.m_context.swapBuffers(self)
    print('swapping buffers took:{}'.format(time.time() - t1))
This is the only drawElementsInstanced that I call. Shaders are set up as follows (sorry for the mess):
VERTEX_SHADER = compileShader("""#version 410
layout(location = 0) in vec3 vertex_position;
layout(location = 1) in vec3 vertex_colour;
layout(location = 2) in vec3 vertex_normal;
layout(location = 3) in mat4 model_mat;
layout(location = 7) in float mat_specular_intensity;
layout(location = 8) in float mat_specular_power;
uniform mat4 transformMatrix;
uniform vec3 eye_world_pos0;
out vec3 normal0;
out vec3 colour;
out vec3 world_pos;
out float specular_intensity;
out float specular_power;
out vec3 eye_world_pos;
void main () {
    colour = vertex_colour;
    normal0 = (model_mat*vec4(vertex_normal,0.0)).xyz;
    world_pos = (model_mat*vec4(vertex_position,1.0)).xyz;
    eye_world_pos = eye_world_pos0;
    specular_intensity = mat_specular_intensity;
    specular_power = mat_specular_power;
    gl_Position = transformMatrix*model_mat*vec4(vertex_position,1.0);
}""", GL_VERTEX_SHADER)
FRAGMENT_SHADER = compileShader("""#version 410
in vec3 colour;
in vec3 normal0;
in vec3 world_pos;
in float specular_intensity;
in float specular_power;
in vec3 eye_world_pos;
out vec4 frag_colour;
struct directional_light {
    vec3 colour;
    float amb_intensity;
    float diff_intensity;
    vec3 direction;
};
uniform directional_light gdirectional_light;
void main () {
    vec4 ambient_colour = vec4(gdirectional_light.colour * gdirectional_light.amb_intensity,1.0f);
    vec3 light_direction = -gdirectional_light.direction;
    vec3 normal = normalize(normal0);
    float diffuse_factor = dot(normal,light_direction);
    vec4 diffuse_colour = vec4(0,0,0,0);
    vec4 specular_colour = vec4(0,0,0,0);
    if (diffuse_factor>0){
        diffuse_colour = vec4(gdirectional_light.colour,1.0f) * gdirectional_light.diff_intensity*diffuse_factor;
        vec3 vertex_to_eye = normalize(eye_world_pos-world_pos);
        vec3 light_reflect = normalize(reflect(gdirectional_light.direction,normal));
        float specular_factor = dot(vertex_to_eye, light_reflect);
        if(specular_factor>0) {
            specular_factor = pow(specular_factor,specular_power);
            specular_colour = vec4(gdirectional_light.colour*specular_intensity*specular_factor,1.0f);
        }
    }
    frag_colour = vec4(colour,1.0)*(ambient_colour+diffuse_colour+specular_colour);
}""", GL_FRAGMENT_SHADER)
Now the code that I use when I want to rotate the scene is the following (the camera updates etc. are done in the usual way, AFAIK):
def mouseMoveEvent(self, event):
    dx = event.x() - self.lastPos.x()
    dy = event.y() - self.lastPos.y()
    self.lastPos = event.pos()
    if event.buttons() & QtCore.Qt.RightButton:
        self.camera.mouse_update(dx, dy)
    elif event.buttons() & QtCore.Qt.LeftButton:
        pass
    self.renderNow()
Some final info: all vertex data needed by the shaders is supplied through a VAO that I initialized and bound earlier in the initialize definition. The scene does not contain many objects; I'm just testing with an icosahedron with 2 subdivisions to render a sphere. I also removed the duplicate vertices, but that did not change anything, since that really should not be the bottleneck, I think.
To answer some questions: I did try various different versions of OpenGL just for giggles, no changes; tried without vsync, no changes; tried different sample sizes, no changes.
Edit2:
Might be a clue: swapBuffers takes around 0.015s most of the time, but when I start moving around a lot, it stutters and jumps up to 0.05s for some renders. Why is this happening? From what I understand, every render has to process all the data anyway?
By the way OpenGL works, the rendering commands you submit are sent to the GPU and executed asynchronously (frankly even the process of sending them to the GPU is asynchronous). When you request to display the back buffer by a call to swapBuffers the display driver must wait till the content of the back buffer finishes rendering (i.e. all previously issued commands finish executing), and only then it can swap the buffers.†
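One consequence worth knowing: if you want to measure how long the GPU actually spends rendering, rather than how long swapBuffers blocks, you can force completion before stopping the timer. A minimal sketch (assuming the render() method from the question):
import time
from OpenGL.GL import glFinish

t1 = time.time()
# ... issue all draw calls (glClear, glDrawElementsInstanced, ...) ...
glFinish()  # block until the GPU has executed every submitted command
print("GPU work took: {:.4f}s".format(time.time() - t1))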
If you experience low frame rate then you shall optimize your rendering code, that is the stuff you submit to the GPU. Switching to C++ will not help you here (though it would be a great idea independently).
EDIT: You say that when you do nothing then your swapBuffers executes in 0.015 seconds, which is suspiciously ~1/60th of a second. It implies that your rendering code is efficient enough to render at 60 FPS and you have no reason to optimize it yet. What probably happens is that your call to renderNow() from mouseMoveEvent causes re-rendering the scene more than 60 times per second, which is redundant. Instead you should call renderLater() in mouseMoveEvent, and restructure your code accordingly.
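A minimal sketch of that restructuring, following the usual Qt OpenGLWindow example pattern (these are methods on the same QWindow subclass as renderNow() above, reusing its m_update_pending flag):
from PyQt5.QtCore import QCoreApplication, QEvent

def renderLater(self):
    # Coalesce repaint requests: post at most one UpdateRequest event,
    # so the scene is re-rendered at most once per event-loop pass.
    if not self.m_update_pending:
        self.m_update_pending = True
        QCoreApplication.postEvent(self, QEvent(QEvent.UpdateRequest))

def event(self, event):
    if event.type() == QEvent.UpdateRequest:
        self.renderNow()
        return True
    return super().event(event)

def mouseMoveEvent(self, event):
    # ... update the camera as before ...
    self.renderLater()  # instead of self.renderNow()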
NOTE: you call swapBuffers twice, once in render() and once in renderNow() immediately after.
DISCLAIMER: I'm not familiar with PyOpenGL.
† swapBuffers may also execute asynchronously, but even then, if you render frames faster than the display driver can swap buffers, you will eventually block on the swapBuffers call.