This question already has answers here:
What is the proper way to modify OpenGL vertex buffer?
(3 answers)
Closed 2 years ago.
I'd like to create polygons with draggable vertices in PyOpenGL. Having read around a bit, VBOs seemed like a sensible way to achieve this.
Having never used VBOs before, I'm having trouble figuring out how to dynamically update them - ideally I'd like to just modify elements of a numpy array of vertices, then propagate only the elements that changed up to the GPU. I had assumed that the OpenGL.arrays.vbo.VBO wrapper did this automagically with its copy_data() method, but it seems not.
Here's a silly example:
from OpenGL import GL as gl
from OpenGL import GLUT as glut
from OpenGL.arrays import vbo
import numpy as np

class VBOJiggle(object):

    def __init__(self, nvert=100, jiggliness=0.01):
        self.nvert = nvert
        self.jiggliness = jiggliness
        verts = 2*np.random.rand(nvert, 2) - 1
        self.verts = np.require(verts, np.float32, 'F')
        self.vbo = vbo.VBO(self.verts)

    def draw(self):
        gl.glClearColor(0, 0, 0, 0)
        gl.glClear(gl.GL_COLOR_BUFFER_BIT)
        gl.glEnableClientState(gl.GL_VERTEX_ARRAY)
        self.vbo.bind()
        gl.glVertexPointer(2, gl.GL_FLOAT, 0, self.vbo)
        gl.glColor(0, 1, 0, 1)
        gl.glDrawArrays(gl.GL_LINE_LOOP, 0, self.vbo.data.shape[0])
        gl.glDisableClientState(gl.GL_VERTEX_ARRAY)
        self.vbo.unbind()
        self.jiggle()
        glut.glutSwapBuffers()

    def jiggle(self):
        # jiggle half of the vertices around randomly
        delta = (np.random.rand(self.nvert//2, 2) - 0.5)*self.jiggliness
        self.verts[:self.nvert:2] += delta

        # the data attribute of the vbo is the same as the numpy array
        # of vertices
        assert self.verts is self.vbo.data

        # # Approach 1:
        # # it seems like this ought to work, but it doesn't - all the
        # # vertices remain static even though the vbo's data gets updated
        # self.vbo.copy_data()

        # Approach 2:
        # this works, but it seems unnecessary to copy the whole array
        # up to the GPU, particularly if the array is large and I have
        # modified only a small subset of vertices
        self.vbo.set_array(self.verts)

if __name__ == '__main__':
    glut.glutInit()
    glut.glutInitDisplayMode(glut.GLUT_DOUBLE | glut.GLUT_RGB)
    glut.glutInitWindowSize(250, 250)
    glut.glutInitWindowPosition(100, 100)
    glut.glutCreateWindow(None)
    demo = VBOJiggle()
    glut.glutDisplayFunc(demo.draw)
    glut.glutIdleFunc(demo.draw)
    glut.glutMainLoop()
To answer this question completely, I first have to explain how OpenGL buffer updates work.
The OpenGL instruction glBufferData creates and initializes a buffer object's data store. Any existing data store of the buffer object is completely destroyed and a new data store (possibly with a different size) is created. If a data pointer is passed to the function, the data store is initialized from that data. The size of the buffer and the size of the provided data are assumed to be equal.
glBufferSubData updates all or part of the data of an existing data store. The data store is assumed to have been created beforehand, by glBufferData; no data store is destroyed or created.
Of course, technically glBufferData can always be used instead of glBufferSubData, but glBufferSubData will perform much better, because the expensive buffer creation (allocation) is avoided.
Using
self.vbo.set_array(self.verts)
is a bad idea, because as seen in the implementation (PyOpenGL/OpenGL/arrays/vbo.py), this method creates a completely new buffer, with a possibly new size and will force the recreation of the buffer object's data store (because of self.copied = False).
If the buffer was created before, then self.vbo.copy_data() will update the data by glBufferSubData (see if self.copied: in copy_data). To make this work, the buffer has to be the currently bound buffer (self.vbo.bind()). Furthermore, copy information has to be set (VBO.copy_segments); the copy information is recorded by the item setter (VBO.__setitem__).
This means, in "Approach 1" you would have to do something like the following:
self.vbo[:] = self.verts
self.vbo.bind()
self.vbo.copy_data()
Since OpenGL.arrays.vbo is nothing more than a wrapper for the OpenGL buffer instructions, I would prefer to use glBufferSubData directly, which will perform best in cases like this:
# Approach 3:
# Direct buffer update by `glBufferSubData`
self.vbo.bind()
self.vbo.implementation.glBufferSubData(self.vbo.target, 0, self.vbo.data)
With this approach, even subsets of the data store can be updated. Note that the 2nd parameter of glBufferSubData is a byte offset into the buffer object's data store. There is also an overloaded implementation which accepts a buffer size and a direct data pointer.
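As an aside on the offset arithmetic (not from the original answer, with illustrative sizes): for a float32 (N, 2) vertex array, the byte offset of a modified row range can be computed from the array's strides, and only that slice handed to glBufferSubData. The GL call itself is commented out here because it needs a live context:

```python
import numpy as np

verts = np.zeros((100, 2), dtype=np.float32)

# Suppose only rows i0:i1 were modified.
i0, i1 = 10, 20
stride = verts.strides[0]                    # bytes per vertex row (8 for two float32s)
offset = i0 * stride                         # byte offset into the buffer's data store
chunk = np.ascontiguousarray(verts[i0:i1])   # only the modified slice gets uploaded

# Inside a valid GL context, the partial update would then be:
# self.vbo.bind()
# self.vbo.implementation.glBufferSubData(self.vbo.target, offset, chunk)

print(offset, chunk.nbytes)
```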
Full disclosure - I know next to nothing about C/C++ and literally nothing about pointers and buffers and all that fun stuff. Apologies if this is a stupid question, but I've been working on this for hours and haven't got very far at all.
So I have an external C library that I'm referencing from a Ruby script. There's a bunch of different functions which I need to access, and I've got some simple ones working, but I'm struggling with a more complicated function that I need to work.
The good part - I have working example code in Python.
The challenge I'm facing is around the whole buffer side of things using FFI - I've not found a lot of relevant examples on how to make this work, and I'm struggling a bit.
Here's the function I want to replicate in Ruby using FFI:
def getActiveID32():
    """
    Will return a tuple consisting number of bits
    read and actual data. Minimum 8 byte of data will
    be returned.
    """
    rawData = ""
    buffer_size = ctypes.c_short(32)
    pcproxlib.GetActiveID32.restype = ctypes.c_short
    # create a buffer of given size to pass it
    # to get the raw data.
    raw_data_tmp = (ctypes.c_ubyte * buffer_size.value)()
    # as per documentation 250 millisecond sleep is required
    # to get the raw data.
    time.sleep(250/1000.0)
    nbBits = pcproxlib.GetActiveID32(raw_data_tmp, buffer_size)
    bytes_to_read = int((nbBits + 7) / 8)
    # will read at least 8 bytes
    if bytes_to_read < 8:
        bytes_to_read = 8
    for i in range(0, bytes_to_read):
        temp_buf = "%02X " % raw_data_tmp[i]
        rawData = temp_buf + rawData
    return (nbBits, rawData)
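The `(nbBits + 7) / 8` line is just ceiling division from bits to bytes, with the wrapper's 8-byte floor applied afterwards. A quick standalone sketch of that arithmetic (the function name is mine, not from the library):

```python
def bytes_needed(nb_bits, minimum=8):
    # Round the bit count up to whole bytes, then enforce the
    # 8-byte floor used by the getActiveID32 wrapper above.
    return max((nb_bits + 7) // 8, minimum)

print(bytes_needed(26))   # a 26-bit read still pads up to the 8-byte floor
print(bytes_needed(80))   # an 80-bit read needs 10 bytes
```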
Here's the example code that I have working for the simple functions (e.g. BeepNow), but not for this more complicated function, getActiveID32.
Included is the code I've been playing with, but that is clearly too simplified, and doesn't work.
require 'ffi'

module PCProxLib
  extend FFI::Library
  ffi_lib File.expand_path('lib/32/libhidapi-hidraw.so.0')
  ffi_lib File.expand_path('lib/32/libpcProxAPI.so')

  attach_function :usbConnect, [], :short
  attach_function :getPartNumberString, [], :string
  attach_function :getActiveID32, [:string, :int], :int
  attach_function :getActiveID, [], :int
  attach_function :BeepNow, [:int, :bool], :bool
end

puts PCProxLib.usbConnect()
puts PCProxLib.BeepNow(3, false)

sleep 0.25
buffer = ""
puts PCProxLib.getActiveID32(buffer, 64)
puts buffer
Thanks heaps for any help :)
EDIT:
Based on #matt's comment, I've revised the code to the following:
require 'ffi'

module PCProxLib
  extend FFI::Library
  ffi_lib File.expand_path('lib/32/libhidapi-hidraw.so.0')
  ffi_lib File.expand_path('lib/32/libpcProxAPI.so')

  attach_function :usbConnect, [], :short
  attach_function :USBDisconnect, [], :short
  attach_function :getActiveID32, [:pointer, :int], :int
end

PCProxLib.usbConnect()
puts PCProxLib.getPartNumberString() # verifying connection to reader

def read_card
  sleep 0.5
  buffer = FFI::MemoryPointer.new(:uint8, 32)
  bits_written = PCProxLib.getActiveID32(buffer, 32)
  puts bits_written
  puts buffer.read_bytes(32)
end

20.times { read_card }

PCProxLib.USBDisconnect()
When I scan a card (while the code is looping), the bits_written value jumps from 0 to 32, which seems good. However the buffer.read_bytes(32) value is always null.
I've tried some other MemoryPointer methods (like read, read_string, etc.) but get the same results.
If I try .inspecting the buffer.read_bytes(32), I get:
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"
...which is interesting? This is the same, whether I scan a card or not.
It looks like the function getActiveID32 has a signature that looks something like
int getActiveID32(uint8_t *buffer, int size)
and you would use it by first creating a buffer for it to write into, and then passing a pointer to that buffer and its size as arguments when you call it, something like:
// Define BUFFER_SIZE first.
uint8_t *buffer = malloc(BUFFER_SIZE);
int bits_written = getActiveID32(buffer, BUFFER_SIZE);
// There are now bits_written bits in buffer for you to use.
// You will need to call free on buffer when you are finished.
With Ruby’s FFI you can do this with a Pointer. First declare the function to accept a pointer (a string is like a pointer, but in this case we need to use a pointer directly):
attach_function :getActiveID32, [:pointer, :int], :int
Now when you want to call it, you need to first create a MemoryPointer of the desired size, and pass it as the parameter to the function. FFI will then convert this into a pointer to the allocated memory when calling the C function.
# Assuming you want a 32 byte buffer.
buffer = FFI::MemoryPointer.new(:uint8, 32)
bits_written = PCProxLib.getActiveID32(buffer, 32)
Now buffer will contain the data written by the function. You can access this data using the methods on MemoryPointer, for example to copy that data into a Ruby (binary) string you can use buffer.read_bytes(32).
MemoryPointer will also handle freeing the allocated memory when it is garbage collected.
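For comparison with the working Python version: the ctypes counterpart of FFI::MemoryPointer.new(:uint8, 32) is a ctypes array, which is likewise zero-initialized and readable as bytes afterwards. A minimal sketch with the actual pcproxlib call stubbed out, since the library isn't available here:

```python
import ctypes

BUFFER_SIZE = 32

# Allocate a zero-initialized byte buffer, just like the ctypes example does.
raw = (ctypes.c_ubyte * BUFFER_SIZE)()

# A real call would be:
#   nbBits = pcproxlib.GetActiveID32(raw, ctypes.c_short(BUFFER_SIZE))
# Here we only show that the buffer contents are readable afterwards.
data = bytes(raw)
print(len(data))  # 32
```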
So in a nutshell I am needing to export the vertex normals from a character into a text file, or whatever, and then reimport them onto the same character in a different scene.
I have the import export part working in a method that I think is ok, but actually going through the loop and setting the normal on each vertex is taking over twenty minutes and usually overloads the ram on my machine and crashes maya.
I guess I am looking for a way to make my code more efficient or just run faster, any advice would be appreciated. Thanks.
def ImoNorms(self):
    ll = mc.ls('head.vtxFace[*][*]')
    input = open('My desktop.txt', 'r')
    spltOne = ll[:len(ll)/2]
    spltTwo = ll[len(ll)/2:]
    i = 0
    for each in spltOne:
        CurrentLine = input.readline()
        kk = re.split(r'\[|\]|\,|\/n|\ ', CurrentLine)
        aa = float(kk[1])
        aa = round(aa, 3)
        bb = float(kk[3])
        bb = round(bb, 3)
        cc = float(kk[5])
        cc = round(cc, 3)
        mc.select(each)
        mc.polyNormalPerVertex(xyz=(aa, bb, cc))
        i = i + 1
        if i % 1000 == 0:
            print i

init()
Sorry for the formatting issues, still new to this site.
+1 to using OpenMaya if you want better performance.
Check out MFnMesh.getNormals and MFnMesh.setNormals. I admit I haven't used these methods myself, but if it's anything like MFnMesh.setPoints it should be a significant boost in speed as it's setting the normals all at once. Seems like you don't have to deal with its history either.
Here's an example on its usage that will re-direct all of a sphere's vert normals to point down. (Go to Display->Polygons->Vertex Normals to visualize the normals)
import maya.OpenMaya as OpenMaya
import maya.cmds as cmds

# Create a sphere to change vert normals with
mesh_obj, _ = cmds.polySphere()

# Wrap sphere as MDagPath object
sel = OpenMaya.MSelectionList()
sel.add(mesh_obj)
dag_path = OpenMaya.MDagPath()
sel.getDagPath(0, dag_path)

# Pass sphere to mesh function set
mesh_fn = OpenMaya.MFnMesh(dag_path)

# Create empty vector array
vec_array = OpenMaya.MFloatVectorArray()

# Get sphere normals and stuff data in our vector array
mesh_fn.getNormals(vec_array, OpenMaya.MSpace.kObject)

for i in range(vec_array.length()):
    # Point all normals downwards
    vec_array[i].assign(OpenMaya.MFloatVector(0, -1, 0))

# Apply normals back to sphere
mesh_fn.setNormals(vec_array, OpenMaya.MSpace.kObject)
You may also want to consider a different way on how you read your file instead of reading each line one by one. Maybe using json.dumps to store the data to a file and json.loads to retrieve the data. This could potentially speed things up.
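That json suggestion might look like the following sketch (the key layout and values are made up for illustration); json.dumps/json.loads round-trip the whole table at once, instead of parsing line by line with re.split:

```python
import json

# Normals keyed by vertex-face component name (this layout is hypothetical).
normals = {
    'head.vtxFace[0][0]': [0.0, 1.0, 0.0],
    'head.vtxFace[0][1]': [0.707, 0.707, 0.0],
}

# Export: serialize the whole table in one call and write it to your file.
blob = json.dumps(normals)

# Import (in the other scene): one call recovers the full mapping.
loaded = json.loads(blob)
print(loaded['head.vtxFace[0][0]'])  # [0.0, 1.0, 0.0]
```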
What you have will be very slow because you are creating a lot of history on your mesh: if you exit out of the loop after a couple of iterations you'll see that your mesh is accumulating a polyNormalPerVertex node for every iteration.
Unfortunately there's no way to turn off construction history for this command (which seems like an oversight: most commands have a ch flag for exactly this purpose). So the first thing to try is to add an mc.delete(ch=True) after every polyNormalPerVertex call. That will be much faster and it might be enough for what you're doing.
Otherwise you'll need to use the OpenMaya api, which is a bit harder than cmds but will let you do bulk normal operations faster. I would get the cmds version working first and see if it's good enough, it should be much more performant without the history overhead.
UPDATE
Unless you want to learn the API to do this, the right thing is probably:
1. Save out the mesh with the right normals as an MB or MA file.
2. In the other file, load that mesh and use Transfer Attributes to copy the normals across.
3. Delete history and remove the mesh you loaded in (2).
That gets you off the hook for both making your own file format and for perf issues, plus it gives you pre-made options for cases where the topology does not line up for some reason.
I am a mathematician. Recently, I became the editor of the puzzles and problems column for a well-known magazine. Occasionally, I need to create a figure to accompany a problem or solution. These figures mostly relate to 2D (occasionally, 3D) euclidean geometry (lines, polygons, circles, plus the occasional ellipse or other conic section). The goal is obtaining figures of very high quality (press-ready), with Computer Modern ("TeX") textual labels. My hope is finding (or perhaps helping write!) a relatively high-level Python library that "knows" euclidean geometry in the sense that natural operations (e.g., drawing a perpendicular line to a given one passing through a given point, bisecting a given angle, or reflecting a figure A on a line L to obtain a new figure A') are already defined in the library. Of course, the ability to create figures after their elements are defined is a crucial goal (e.g., as Encapsulated Postscript).
I know multiple sub-optimal solutions to this problem (some partial), but I don't know of any that is both simple and flexible. Let me explain:
Asymptote (similar to/based on Metapost) allows creating extremely high-quality figures of great complexity, but knows almost nothing about geometric constructions (it is a rather low-level language) and thus any nontrivial construction requires quite a long script.
TikZ with package tkz-euclide is high-level, flexible and also generates quality figures, but its syntax is so heavy that I just cry for Python's simplicity in comparison. (Some programs actually export to TikZ---see below.)
Dynamic Geometry programs, of which I'm most familiar with Geogebra, often have figure-exporting features (EPS, TikZ, etc.), but are meant to be used interactively. Sometimes, what one needs is a figure based on hard specs (e.g., exact side lengths)---defining objects in a script is ultimately more flexible (if correspondingly less convenient).
Two programs, Eukleides and GCLC, are closest to what I'm looking for: They generate figures (EPS format; GCLC also exports to TikZ). Eukleides has the prettiest, simplest syntax of all the options (see the examples), but it happens to be written in C (with source available, though I'm not sure about the license), rather limited/non-customizable, and no longer maintained. GCLC is still maintained but it is closed-source, its syntax is significantly worse than Eukleides's, and has certain other unnatural quirks. Besides, it is not available for Mac OS (my laptop is a Mac).
Python has:
Matplotlib, which produces extremely high-quality figures (particularly of functions or numerical data), but does not seem to know about geometric constructions, and
Sympy has a geometry module which does know about geometric objects and constructions, all accessible in delightful Python syntax, but seems to have no figure-exporting (or even displaying?) capabilities.
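To make the wished-for primitives concrete, here is a minimal numpy sketch of two of the constructions mentioned above (foot of a perpendicular and reflection across a line); a real library would wrap these behind named operations and handle rendering and EPS export:

```python
import numpy as np

def perpendicular_foot(p, a, b):
    """Foot of the perpendicular from point p onto the line through a and b."""
    ab = b - a
    t = np.dot(p - a, ab) / np.dot(ab, ab)
    return a + t * ab

def reflect(p, a, b):
    """Reflection of point p across the line through a and b."""
    return 2 * perpendicular_foot(p, a, b) - p

a, b = np.array([0.0, 0.0]), np.array([1.0, 0.0])  # the x-axis
p = np.array([0.3, 0.5])
print(perpendicular_foot(p, a, b))  # the foot is (0.3, 0.0)
print(reflect(p, a, b))             # the mirror image is (0.3, -0.5)
```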
Finally, a question: Is there a library, something like "Figures for Sympy/geometry", that uses Python syntax to describe geometric objects and constructions, allowing to generate high-quality figures (primarily for printing, say EPS)?
If a library with such functionality does not exist, I would consider helping to write one (perhaps an extension to Sympy?). I will appreciate pointers.
There is a way to generate vector images with matplotlib, writing the figure out through the io library as a vector image (SVG), following this approach.
I tried running the code from that page (which generates a vector histogram) as a Python file myself, and it worked.
The code:
import numpy as np
import matplotlib.pyplot as plt
import xml.etree.ElementTree as ET
from io import BytesIO
import json

plt.rcParams['svg.fonttype'] = 'none'

# Apparently, this `register_namespace` method is necessary to avoid garbling
# the XML namespace with ns0.
ET.register_namespace("", "http://www.w3.org/2000/svg")

# Fixing random state for reproducibility
np.random.seed(19680801)

# --- Create histogram, legend and title ---
plt.figure()
r = np.random.randn(100)
r1 = r + 1
labels = ['Rabbits', 'Frogs']
H = plt.hist([r, r1], label=labels)
containers = H[-1]
leg = plt.legend(frameon=False)
plt.title("From a web browser, click on the legend\n"
          "marker to toggle the corresponding histogram.")

# --- Add ids to the svg objects we'll modify
hist_patches = {}
for ic, c in enumerate(containers):
    hist_patches['hist_%d' % ic] = []
    for il, element in enumerate(c):
        element.set_gid('hist_%d_patch_%d' % (ic, il))
        hist_patches['hist_%d' % ic].append('hist_%d_patch_%d' % (ic, il))

# Set ids for the legend patches
for i, t in enumerate(leg.get_patches()):
    t.set_gid('leg_patch_%d' % i)

# Set ids for the text patches
for i, t in enumerate(leg.get_texts()):
    t.set_gid('leg_text_%d' % i)

# Save SVG in a fake file object.
f = BytesIO()
plt.savefig(f, format="svg")

# Create XML tree from the SVG file.
tree, xmlid = ET.XMLID(f.getvalue())

# --- Add interactivity ---

# Add attributes to the patch objects.
for i, t in enumerate(leg.get_patches()):
    el = xmlid['leg_patch_%d' % i]
    el.set('cursor', 'pointer')
    el.set('onclick', "toggle_hist(this)")

# Add attributes to the text objects.
for i, t in enumerate(leg.get_texts()):
    el = xmlid['leg_text_%d' % i]
    el.set('cursor', 'pointer')
    el.set('onclick', "toggle_hist(this)")

# Create script defining the function `toggle_hist`.
# We create a global variable `container` that stores the patches id
# belonging to each histogram. Then a function "toggle_element" sets the
# visibility attribute of all patches of each histogram and the opacity
# of the marker itself.
script = """
<script type="text/ecmascript">
<![CDATA[
var container = %s

function toggle(oid, attribute, values) {
    /* Toggle the style attribute of an object between two values.

    Parameters
    ----------
    oid : str
      Object identifier.
    attribute : str
      Name of style attribute.
    values : [on state, off state]
      The two values that are switched between.
    */
    var obj = document.getElementById(oid);
    var a = obj.style[attribute];

    a = (a == values[0] || a == "") ? values[1] : values[0];
    obj.style[attribute] = a;
}

function toggle_hist(obj) {
    var num = obj.id.slice(-1);

    toggle('leg_patch_' + num, 'opacity', [1, 0.3]);
    toggle('leg_text_' + num, 'opacity', [1, 0.5]);

    var names = container['hist_'+num]

    for (var i=0; i < names.length; i++) {
        toggle(names[i], 'opacity', [1, 0])
    };
}
]]>
</script>
""" % json.dumps(hist_patches)

# Add a transition effect
css = tree.getchildren()[0][0]
css.text = css.text + "g {-webkit-transition:opacity 0.4s ease-out;" + \
    "-moz-transition:opacity 0.4s ease-out;}"

# Insert the script and save to file.
tree.insert(0, ET.XML(script))
ET.ElementTree(tree).write("svg_histogram.svg")
Beforehand, you need to pip install the libraries imported at the top. The script successfully saved an SVG file with a plot: you can open the file and zoom in on the histogram without ever seeing pixels, since the image is described by mathematical functions.
It (naturally, these days) uses Python 3.
You then could import the SVG image within your TeX document for the publication rendering.
I hope it may help.
Greetings,
Javier.
I am transmitting images over sockets from a camera that runs WinCE :(
The images in the camera are just float arrays created using realloc for the given x * y size
On the other end, I am receiving these images in python.
I have this code working doing
img_dtype = np.float32
img_rcv = np.empty((img_y, img_x),
                   dtype=img_dtype)
p = sck.recv_into(img_rcv,
                  int(size_bytes),
                  socket.MSG_WAITALL)
if size_bytes != p:
    print "Mismatch between expected and received data amount"
return img_rcv
I am a little bit confused about the way numpy creates its arrays and I am wondering if this img_rcv will be compatible with the way recv_into works.
My questions are:
How safe is this?
Will the memory allocation of the numpy array be known to recv_into?
Are numpy's array creation routines equivalent to a malloc?
Is it just working because I am lucky?
The answers, matching your questions in order:
1. Safe.
2. Yes, via the buffer interface.
3. Yes, in the sense that you get a block of memory you can work with.
4. No.
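The buffer-interface point can be checked without the camera: recv_into writes directly into the numpy array's memory. A self-contained sketch over a local socket pair (Python 3 syntax here, unlike the question's snippet):

```python
import socket
import numpy as np

img_y, img_x = 4, 5
sent = np.random.rand(img_y, img_x).astype(np.float32)

# A connected local socket pair stands in for the camera connection.
a, b = socket.socketpair()
a.sendall(sent.tobytes())

# recv_into fills the array's buffer directly; no intermediate copy.
img_rcv = np.empty((img_y, img_x), dtype=np.float32)
received = b.recv_into(img_rcv, sent.nbytes, socket.MSG_WAITALL)

print(received == sent.nbytes)        # True
print(np.array_equal(img_rcv, sent))  # True
```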
Let
import pyopencl as cl
import pyopencl.array as cl_array
import numpy
a = numpy.random.rand(50000).astype(numpy.float32)
mf = cl.mem_flags
What is the difference between
a_gpu = cl.Buffer(self.ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
and
a_gpu = cl_array.to_device(self.ctx, self.queue, a)
?
And what is the difference between
result = numpy.empty_like(a)
cl.enqueue_copy(self.queue, result, result_gpu)
and
result = result_gpu.get()
?
Buffers are CL's version of malloc, while pyopencl.array.Array is a workalike of numpy arrays on the compute device.
So for the second version of the first part of your question, you may write a_gpu + 2 to get a new array that has 2 added to each number in your array, whereas in the case of the Buffer, PyOpenCL only sees a bag of bytes and cannot perform any such operation.
The second part of your question is the same in reverse: If you've got a PyOpenCL array, .get() copies the data back and converts it into a (host-based) numpy array. Since numpy arrays are one of the more convenient ways to get contiguous memory in Python, the second variant with enqueue_copy also ends up in a numpy array--but note that you could've copied this data into an array of any size (as long as it's big enough) and any type--the copy is performed as a bag of bytes, whereas .get() makes sure you get the same size and type on the host.
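The "bag of bytes" distinction can be illustrated host-side with plain numpy (no OpenCL required): the same bytes can be copied into a destination of a different dtype, which is essentially what a raw enqueue_copy permits, while .get() fixes the size and type for you:

```python
import numpy as np

src = np.arange(4, dtype=np.float32)  # 16 bytes of float data

# Byte-level copy into a differently typed destination,
# as a raw enqueue_copy into any big-enough buffer allows:
dst = np.empty(16, dtype=np.uint8)
dst[:] = np.frombuffer(src.tobytes(), dtype=np.uint8)

# Reinterpreting those bytes recovers the original values:
print(np.array_equal(dst.view(np.float32), src))  # True
```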
Bonus fact: There is of course a Buffer underlying each PyOpenCL array. You can get it from the .data attribute.
To answer the first question, Buffer(hostbuf=...) can be called with anything that implements the buffer interface (reference). pyopencl.array.to_device(...) must be called with an ndarray (reference). ndarray implements the buffer interface and works in either place. However, only hostbuf=... would be expected to work with, for example, a bytearray (which also implements the buffer interface). I have not confirmed this, but it appears to be what the docs suggest.
On the second question, I am not sure what type result_gpu is supposed to be when you call get() on it (did you mean Buffer.get_host_array()?) In any case, enqueue_copy() works between combination of Buffer, Image and host, can have offsets and regions, and can be asynchronous (with is_blocking=False), and I think these capabilities are only available that way (whereas get() would be blocking and return the whole buffer). (reference)