pyopengl change brightness of textures - python

I have a program where I render two textures that are bound to some polys and add the results using this:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD)
This works well and is fast, but I would like a way to change the brightness of each texture before adding them, like a gain value. This value needs to change at runtime, so I can't just bake the brightness into my textures.
Also, the nature of my program means I won't know how many textures I will be blending until runtime, so I need a solution that works with n textures.
Does anyone know how I would do this?

If you're able to use fragment shaders, write the brightness adjustment into the fragment shader and control it with a uniform that your application sets at runtime. This approach is both fast and flexible.
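A minimal sketch of that approach with PyOpenGL, assuming two textures bound to units 0 and 1; the uniform names (tex0, tex1, gain0, gain1) are illustrative, not any fixed API:

    from OpenGL.GL import *
    from OpenGL.GL.shaders import compileProgram, compileShader

    VERTEX_SRC = """
    #version 120
    void main() {
        gl_TexCoord[0] = gl_MultiTexCoord0;
        gl_Position = ftransform();
    }
    """

    FRAGMENT_SRC = """
    #version 120
    uniform sampler2D tex0, tex1;
    uniform float gain0, gain1;   // runtime-adjustable brightness per texture
    void main() {
        vec4 a = gain0 * texture2D(tex0, gl_TexCoord[0].st);
        vec4 b = gain1 * texture2D(tex1, gl_TexCoord[0].st);
        gl_FragColor = a + b;     // same effect as GL_ADD, but with gains
    }
    """

    program = compileProgram(
        compileShader(VERTEX_SRC, GL_VERTEX_SHADER),
        compileShader(FRAGMENT_SRC, GL_FRAGMENT_SHADER),
    )

    glUseProgram(program)
    glUniform1i(glGetUniformLocation(program, "tex0"), 0)   # texture unit 0
    glUniform1i(glGetUniformLocation(program, "tex1"), 1)   # texture unit 1
    glUniform1f(glGetUniformLocation(program, "gain0"), 1.0)
    glUniform1f(glGetUniformLocation(program, "gain1"), 0.5)

For n textures, the same pattern generalizes with uniform arrays (e.g. sampler2D tex[8] and float gain[8]) sized to a compile-time maximum, or by regenerating the shader source at runtime for the actual texture count.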

Related

Why does my render get stripy shadows when rendering in Panda3D?

I'm using Panda3D to render .obj files for a project related to 3D printing. For the project I need to parse GCODE (the file format 3D printers use to print a model) and generate a .obj from it. I have successfully generated the .obj. However, when I render the .obj with a slightly modified Panda3D shadows sample (https://docs.panda3d.org/1.10/python/more-resources/samples/shadows), I get some weird shadow stripes.
I guess part of the problem might be related to the .obj having multiple layers.
Any idea how to prevent these stripes? The stripes move and are less obvious when I change the position of the camera, but I need to fix the camera at the position of the first image.
This is called "shadow acne". In this case it happens because your model has double-sided surfaces, so parts of the inside are casting a shadow on the outside (due to imprecision of the shadow buffer). The easiest way to resolve this would be to ensure that your models aren't double-sided, but it also helps to ensure your light frustum is as small as possible and to increase the resolution and depth bits of the shadow buffer.
Alternatively, you can apply a depth offset in Panda3D to help alleviate this issue.
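A minimal sketch of two of these mitigations in Panda3D; the model path and light setup are illustrative, not from the original post:

    from direct.showbase.ShowBase import ShowBase
    from panda3d.core import DirectionalLight

    base = ShowBase()
    model = base.loader.loadModel("model.obj")   # hypothetical path
    model.reparentTo(base.render)

    # Nudge the model's depth values toward the camera so the shadow-map
    # comparison stops self-shadowing the surface.
    model.setDepthOffset(1)

    # A higher-resolution shadow buffer reduces the depth imprecision
    # that causes the acne in the first place.
    dlight = DirectionalLight("dlight")
    dlight.setShadowCaster(True, 2048, 2048)
    dlnp = base.render.attachNewNode(dlight)
    base.render.setLight(dlnp)

    base.run()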

Do I need to use OpenGL to draw at the pixel-by-pixel level in Python? Is there a way I can do such a thing without using a code library?

I have written some code in Python which allows 3D objects to be defined in 3D object space and mapped onto a 2D screen. Currently the finished 2D polygons are drawn on the screen using the PyGame library, which works effectively, but I would like to go the full way and write code myself to complete the drawing operations PyGame does for me. This means I would like to manually control the drawing of each pixel on the screen, with the use of GPU support to accelerate the entire rendering process. From some reading it seems OpenGL is suitable for this sort of thing, but I'm not sure what the complete purpose of OpenGL is and whether I could achieve what I am trying to do in a better way. Do I really need to use OpenGL? Or is there another way for me to directly access my GPU to draw at the pixel by pixel level?
It sounds like OpenGL's programmable shaders are what you're looking for (in particular fragment shaders). They run massively parallel on a per-pixel basis: essentially, you write a function that takes a single pixel location and computes its color. Note that this means the individual pixels can't exchange information, though there are certain ways around that.
(Technically, when I said "pixel" I meant "fragment", which is sort of a generalized version of a pixel.)
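To make the "function per pixel" idea concrete, here is a tiny GLSL fragment shader held in a Python string; it colors each pixel from its own window coordinate. The resolution uniform, and the full-screen quad it would be drawn on, are assumed host-side boilerplate:

    FRAGMENT_SRC = """
    #version 120
    uniform vec2 resolution;   // window size in pixels, set by the host app
    void main() {
        // gl_FragCoord is this fragment's window-space position.
        vec2 uv = gl_FragCoord.xy / resolution;
        gl_FragColor = vec4(uv, 0.5, 1.0);   // position-based color gradient
    }
    """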

Pygame, change resolution of my whole game

I have designed my whole pygame game to work at a 1920x1080 resolution.
However, I have to adapt it for smaller resolutions.
There are a lot of hardcoded values in the code.
Is there a simple way to change the resolution, like resizing the final image at the end of each loop, just before drawing it?
You can use pygame.transform.scale, or pygame.transform.smoothscale for better quality (but less efficiency).
To do that, change the reference surface you draw to (the screen) to a generic surface. Afterwards, resize that surface and blit it to the screen.
I can show you some code if you don't understand how it works. Just ask.
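For example, a minimal sketch of that approach, with illustrative window and base sizes:

    import pygame

    pygame.init()
    BASE_SIZE = (1920, 1080)               # resolution the game was designed for
    screen = pygame.display.set_mode((1280, 720))
    canvas = pygame.Surface(BASE_SIZE)     # all game drawing targets this surface

    clock = pygame.time.Clock()
    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False

        canvas.fill((30, 30, 30))
        pygame.draw.circle(canvas, (200, 50, 50), (960, 540), 100)

        # Scale the finished 1920x1080 frame to the real window size.
        scaled = pygame.transform.smoothscale(canvas, screen.get_size())
        screen.blit(scaled, (0, 0))
        pygame.display.flip()
        clock.tick(60)

    pygame.quit()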
I usually create a base resolution and then, whenever the screen is resized, scale all the assets and surfaces by the ratio between the two.
This works well if your assets are high-resolution and you scale them down, but smaller images would pixelate when scaled up.
You can also create multiple asset files, one per resolution, and swap the image whenever the window size crosses one of the available asset resolutions; you can think of it like a CSS media query.

Draw with OpenGL offscreen

Is there a way to use OpenGL to draw offscreen? What I want to do is this: I want to be able to use functions like glVertex, and get the result in a 2D pixel array.
I am using Python. I tried using PyGame, but it's not working very well. The problem with PyGame is that it uses a window even though I don't need one. In addition, I had to draw the scene and flip the screen twice in order to access the screen pixels using glReadPixels.
Another problem is that I can't have more than one window at once.
Is there any proper way to accomplish what I am trying to do?
What you are asking for seems to be two things in one... you want an off-screen buffer (FBO) and you want to get the contents of the framebuffer in client memory.
Can you indicate which version of GL you are targeting?
If you are targeting OpenGL 3.0+, then you can use FBOs (Framebuffer Objects) and PBOs (Pixel Buffer Objects) to do this efficiently. However, since you are using glVertex, I do not think you need to bother with efficiency. I would focus on learning to use Framebuffer Objects for the time being.
If you are not using GL3, you might have access to the old EXT FBO extension, but if you do not even have that, you might need a PBuffer.
Note that PBuffers and Pixel Buffer Objects are two different things even though they sound the same. Before GL3/FBOs, the window-system APIs (WGL, GLX, etc.) had special platform-specific functionality called Pixel Buffers for drawing off-screen.
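For reference, a minimal sketch of the GL3-style FBO route in PyOpenGL, assuming a GL context already exists (e.g. from a hidden window); sizes and formats are illustrative:

    from OpenGL.GL import *

    WIDTH, HEIGHT = 640, 480

    # Color attachment: a texture to render into.
    color_tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, color_tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, WIDTH, HEIGHT, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, None)

    # Depth attachment: a renderbuffer.
    depth_rb = glGenRenderbuffers(1)
    glBindRenderbuffer(GL_RENDERBUFFER, depth_rb)
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, WIDTH, HEIGHT)

    # Assemble the FBO and check completeness.
    fbo = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color_tex, 0)
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, depth_rb)
    assert glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE

    # ... issue glVertex-style drawing here while the FBO is bound ...

    # Read the rendered pixels back into client memory as a byte string.
    pixels = glReadPixels(0, 0, WIDTH, HEIGHT, GL_RGBA, GL_UNSIGNED_BYTE)
    glBindFramebuffer(GL_FRAMEBUFFER, 0)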

Python pixel manipulation library

So I'm going through the beginning stages of producing a game in Python, and I'm looking for a library that is able to manipulate pixels and blit them relatively fast.
My first thought was pygame, as it deals in pure 2D surfaces, but it only allows pixel access through pygame.get_at(), pygame.set_at() and pygame.get_buffer(), all of which lock the surface each time they're called, making them slow to use. I can also use the PixelArray and surfarray classes, but they are locked for the duration of their lifetimes, and the only way to blit them to a surface is to either copy the pixels to a new surface or use surfarray.blit_array, which requires creating a subsurface of the screen and blitting to that if the array is smaller than the screen (if it's bigger, I can just use a slice of the array, which is no problem).
I don't have much experience with PyOpenGL or Pyglet, but I'm wondering if there is a faster library for doing pixel manipulation, or a faster method within Pygame. I did some work with SDL and OpenGL in C, and I do like the idea of adding vertex/fragment shaders to my program.
My program will chiefly be dealing in loading images and writing/reading to/from surfaces.
Have you tried the Python Imaging Library? You'd still have to communicate the data back to pygame via frombuffer or somesuch to do the blitting, but the PIL can handle the pixel access.
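For instance, a minimal sketch of that workflow with Pillow (the maintained PIL fork); the gradient is just illustrative:

    import pygame
    from PIL import Image

    img = Image.new("RGB", (256, 256))
    px = img.load()                     # C-level pixel access object
    for y in range(256):
        for x in range(256):
            px[x, y] = (x, y, 128)      # write a simple gradient

    # Hand the raw bytes to pygame without per-pixel copying.
    surface = pygame.image.frombuffer(img.tobytes(), img.size, "RGB")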
I checked out pyglet and saw that it works well for static per-pixel collision, when the image is not manipulated too much; however, I'm not sure how well it works with a dynamic image.
In short, I'm looking for a library that's able to quickly display a buffer of pixels. This buffer will be constantly changing, so fast access and blitting is essential. This could be done in C with relative ease using SDL; however, I'm looking for a similar method in Python. I'm not even too worried about hardware acceleration at this point, although it would certainly be nice.
Check the Python bindings of the Simple and Fast Multimedia Library (SFML). From its documentation for sf::RenderTexture:
It implements the same 2D drawing and OpenGL-related functions (see their base class sf::RenderTarget for more details); the difference is that the result is stored in an off-screen texture rather than being shown in a window.
Rendering to a texture can be useful in a variety of situations:
precomputing a complex static texture (like a level's background from multiple tiles)
applying post-effects to the whole scene with shaders
creating a sprite from a 3D object rendered with OpenGL
etc.
Also check the contains and intersects methods of the sf::Rect<T> class template.
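A sketch of the render-to-texture idea with the python-sfml bindings; the names below mirror the C++ API in snake_case and may differ between binding versions, so treat this as an outline rather than exact API:

    import sfml as sf

    # Draw into an off-screen 640x480 texture instead of a window.
    render_texture = sf.RenderTexture(640, 480)

    render_texture.clear(sf.Color.BLACK)
    render_texture.draw(sf.CircleShape(50))
    render_texture.display()              # finalize the off-screen frame

    # Use the result like any other texture, e.g. wrap it in a sprite.
    sprite = sf.Sprite(render_texture.texture)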
