I'm using Panda3D to render .obj files for a project related to 3D printing. For the project I need to parse GCODE (the file format 3D printers use to print a model) and generate a .obj from it. I have successfully generated the .obj. However, when I render the .obj with a slightly modified Panda3D shadow sample (https://docs.panda3d.org/1.10/python/more-resources/samples/shadows), I get some weird shadow stripes:
I guess part of the problem might be related to the obj having multiple layers:
Any idea on how to prevent these stripes? They move and become less obvious when I change the position of the camera, but I need to keep the camera fixed at the position shown in the first image.
This is called "shadow acne". In this case it happens because your model has double-sided surfaces, so parts of the inside are casting a shadow on the outside (due to imprecision of the shadow buffer). The easiest way to resolve this would be to ensure that your models aren't double-sided, but it also helps to ensure your light frustum is as small as possible and to increase the resolution and depth bits of the shadow buffer.
Alternatively, you can apply a depth offset in Panda3D to help alleviate this issue.
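For reference, a minimal sketch of those mitigations, assuming a ShowBase setup similar to the sample's (the model path, light placement and frustum numbers are placeholders):

    from direct.showbase.ShowBase import ShowBase
    from panda3d.core import DirectionalLight

    base = ShowBase()
    model = base.loader.loadModel("print_layers.obj")   # placeholder: your generated .obj
    model.reparentTo(base.render)

    dlight = DirectionalLight("dlight")
    dlight.setShadowCaster(True, 2048, 2048)            # higher-resolution shadow buffer
    lens = dlight.getLens()
    lens.setFilmSize(40, 40)                            # keep the light frustum tight around the part
    lens.setNearFar(10, 100)

    dlnp = base.render.attachNewNode(dlight)
    dlnp.setHpr(-30, -45, 0)
    base.render.setLight(dlnp)
    base.render.setShaderAuto()                         # enable the automatic shadow shaders

    # Push the model's depth values slightly, which hides most of the
    # self-shadowing ("acne") caused by thin, double-sided geometry.
    model.setDepthOffset(1)

    base.run()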
Since I am not certain Python is even a reasonable choice for my goals, let alone which method within Python to select, a bit of background: I am looking to clean up very damaged polygonal game models for 3D printing, so I must handle modeling errors and also eliminate hidden surfaces, producing well-formed, solid objects. To that end, I wish to employ a very direct (i.e. brute force, for robustness) voxel conversion approach, using my GPU to check visibility by rendering the model's polygons from each voxel's perspective (camera centered at the voxel, then multiple angles). If the initial background color is still present in the rendered image after the model is drawn in a different color, the voxel must be visible from outside the model and is therefore empty space.
So, what is the best way to approach this task in Python? Is Python even capable of it with a reasonable degree of performance? There is only one model, so the polygon count is low, but many voxels need to be determined, as they must be created at a fine resolution. I tried Googling, but what I found seemed geared towards displaying to the screen. Thanks very much.
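For what it's worth, the emptiness test I have in mind is just a per-view check like the following (a rough numpy sketch; the background color and the way the views are produced are placeholders):

    import numpy as np

    BACKGROUND = np.array([0, 0, 0], dtype=np.uint8)   # clear color; the model is drawn in any other color

    def voxel_is_empty(rendered_views):
        """rendered_views: iterable of HxWx3 uint8 arrays, one per camera angle,
        rendered with the camera centered at the voxel. If any view still shows
        the background color, the voxel can 'see' out of the model."""
        for view in rendered_views:
            if np.any(np.all(view == BACKGROUND, axis=-1)):
                return True
        return False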
I have written some code in Python which allows 3D objects to be defined in 3D object space and mapped onto a 2D screen. Currently the finished 2D polygons are drawn on the screen using the PyGame library, which works effectively, but I would like to go the whole way and write the code myself to perform the drawing operations PyGame currently does for me. This means I would like to manually control the drawing of each pixel on the screen, with GPU support to accelerate the entire rendering process. From some reading it seems OpenGL is suitable for this sort of thing, but I'm not sure what the complete purpose of OpenGL is and whether I could achieve what I am trying to do in a better way. Do I really need to use OpenGL? Or is there another way to directly access my GPU and draw at the pixel-by-pixel level?
It sounds like OpenGL's programmable shaders are what you're looking for (in particular fragment shaders). They run massively parallel on a pixel-by-pixel basis, in the sense that basically you write a function that takes a single pixel location and computes its color. Note that this means that the individual pixels can't exchange information, though there are certain ways around that.
(Technically when I said "pixel" I meant "fragment", which is sort of a generalized version of a pixel.)
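For illustration, here is a minimal offscreen fragment-shader example. It uses the third-party moderngl package purely as one convenient way to drive GLSL from Python (that package choice is my own assumption, not something the question requires):

    import numpy as np
    import moderngl

    ctx = moderngl.create_standalone_context()
    prog = ctx.program(
        vertex_shader="""
            #version 330
            in vec2 in_pos;
            void main() { gl_Position = vec4(in_pos, 0.0, 1.0); }
        """,
        fragment_shader="""
            #version 330
            out vec4 color;
            void main() {
                // runs once per fragment: compute this pixel's color from its window coordinate
                color = vec4(gl_FragCoord.x / 800.0, gl_FragCoord.y / 600.0, 0.5, 1.0);
            }
        """,
    )

    # one triangle large enough to cover the whole framebuffer
    vbo = ctx.buffer(np.array([-1, -1, 3, -1, -1, 3], dtype='f4').tobytes())
    vao = ctx.simple_vertex_array(prog, vbo, 'in_pos')

    fbo = ctx.simple_framebuffer((800, 600))
    fbo.use()
    fbo.clear(0.0, 0.0, 0.0, 1.0)
    vao.render(moderngl.TRIANGLES)

    pixels = fbo.read(components=3)   # raw RGB bytes, ready to hand to PIL/pygame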
I have a set of images and I want to find out which images are of the same object. Here are the different scenarios on how the object may be different among the images:
Images of the object may be rotated 90, 180, or 270 degrees
The object may be in a different spot in the image, but always in full view
The object may be flipped within the image, either horizontally or vertically
I started by using the image histogram and MSE, but I am getting incorrect results because some of the objects have the same color distribution.
I am going for speed here as my initial data set is 1000 images and will grow as the algorithm gets more mature. So my initial thought was numpy/scipy but I am rather lost here. I have no experience in this area. I have read through the other posts on SO but they seem rather narrow in their scope. Does anyone have any experience or thoughts on how I could approach this?
Edit:
I do not have access to load any modules that are not part of the Python installation that is loaded on the machine that is running the script. Anaconda was installed so there are quite a few modules at my disposal, but no OpenCV.
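To make that concrete, this is roughly the direction I was considering with plain numpy: crop away the translation, then compare a canonical representative of the eight rotations/flips. (A rough sketch; it assumes exact pixel equality and a known background value, both of which are my own simplifications.)

    import numpy as np

    def crop_to_object(img, background=255):
        """Crop to the bounding box of non-background pixels to remove translation."""
        mask = np.any(img != background, axis=-1) if img.ndim == 3 else (img != background)
        rows = np.where(mask.any(axis=1))[0]
        cols = np.where(mask.any(axis=0))[0]
        return img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

    def canonical_form(img):
        """Return one representative of the 8 rotations/flips of the cropped object."""
        obj = crop_to_object(img)
        variants = []
        for k in range(4):
            r = np.rot90(obj, k)
            variants.append(r)
            variants.append(np.fliplr(r))
        # the lexicographically smallest byte string serves as the canonical orientation
        return min(v.tobytes() for v in variants)

    # usage: two images show the same object iff their canonical forms match
    # same_object = canonical_form(img_a) == canonical_form(img_b)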
Edit:
Attached example files and answer file.
Example Files
So I've been making a game using Python, specifically the PyGame module. Everything has been going fairly well (except Python's speed, am I right :P), and I've got a nice list of accomplishments from this, but I just ran into a... speed bump. Maybe a mountain. I'm not too sure yet. The problem is:
How do I go about implementing a Camera with my current engine?
That probably means nothing to you, though, so let me explain what my current engine is doing: I have a spritesheet that I use for all images. The map is made up of a double array of Tile objects, which fills up the display (800 x 640). The map also contains references to all Entities and Particles. So now I want to create a camera, so that the map object can be larger than the display. To do this I've devised that I'll need some kind of camera that follows the player (with the player at the center of the screen). I've seen this implemented before in games, and have even read a few other similar posts, but I also need to know: will I have to restructure all of my game code to work this in? My first attempt was to make every object move on the screen when the player moves, but I feel that there is a better way to do this, as it screws up collision detection and such.
So, if anyone knows any good references to problems like this, or a way to fix it, I'm all ears... er.. eyes.
Thanks
You may find this link to be of interest.
In essence, what you need to do is to distinguish between the "actual" coordinates, and the "display" coordinates of each object.
What you would do is do the bulk of the work using the actual coordinates of each entity in your game. If it helps, imagine that you have a gigantic screen that can show everything at once, and calculate everything as normal. It might help if you also designed the camera to be an entity, so that you can update the position of your camera just like any other object.
Once everything is updated, you go to the camera object, and determine what tiles, objects, particles, etc. are visible within the window, and convert their actual, world coordinates to the pixel coordinates you need to display them correctly.
If this is done correctly, you can also do things like scale and otherwise modify the image your camera is displaying without affecting gameplay.
In essence, you want to have a very clear distinction between gameplay and physics logic/code, and your rendering/display code, so your game can do whatever it wants, and you can render it however you want, with minimal crossover between the two.
So the good news is, you probably don't need to change anything about how your game itself works. The bad news is, you'll probably have to go in and rewrite your rendering/drawing code so that everything is drawn relative to the camera, not to the world.
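A minimal sketch of that actual-vs-display split (names are made up; the camera simply keeps the followed entity centred):

    import pygame

    SCREEN_W, SCREEN_H = 800, 640

    class Camera:
        """Holds a world-space offset; gameplay keeps using world coordinates."""
        def __init__(self):
            self.x, self.y = 0, 0

        def follow(self, target_rect):
            # keep the followed entity in the middle of the display
            self.x = target_rect.centerx - SCREEN_W // 2
            self.y = target_rect.centery - SCREEN_H // 2

        def to_screen(self, world_rect):
            # only drawing is shifted; physics/collision stay in world space
            return world_rect.move(-self.x, -self.y)

    # in the draw loop (player/entities/screen are placeholders for your objects):
    # camera.follow(player.rect)
    # for entity in entities:
    #     screen.blit(entity.image, camera.to_screen(entity.rect))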
Since I can't have a look into your code, I can't assess how useful this answer will be for you.
My approach for side scrollers, moveable maps, etc. is to blit all tiles onto a pygame.Surface spanning the dimensions of the whole level/map etc., or at least a big chunk of it. This way I have to blit only one surface per frame, which is already prepared.
For collision detection I keep the x/y values (not the entire rect) of the tiles involved in a separate list. Updating is then mainly shifting numbers around and not surfaces anymore.
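Roughly like this (a sketch; the tile size, level dimensions and checkerboard fill are placeholders for your real spritesheet tiles):

    import pygame

    TILE = 32
    LEVEL_W, LEVEL_H = 100, 100                      # level size in tiles

    pygame.init()
    screen = pygame.display.set_mode((800, 640))

    # build the big level surface once, up front
    level_surface = pygame.Surface((LEVEL_W * TILE, LEVEL_H * TILE))
    for ty in range(LEVEL_H):
        for tx in range(LEVEL_W):
            colour = (40, 40, 40) if (tx + ty) % 2 else (60, 60, 60)
            level_surface.fill(colour, (tx * TILE, ty * TILE, TILE, TILE))

    # per frame: blit only the chunk the camera can currently see
    cam_x, cam_y = 500, 300                          # would normally follow the player
    screen.blit(level_surface, (0, 0), pygame.Rect(cam_x, cam_y, 800, 640))
    pygame.display.flip()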
Feel free to ask for more details, if you deem it useful :)
So I'm going through the beginning stages of producing a game in Python, and I'm looking for a library that is able to manipulate pixels and blit them relatively fast.
My first thought was pygame, as it deals in pure 2D surfaces, but it only allows pixel access through Surface.get_at(), Surface.set_at() and Surface.get_buffer(), all of which lock the surface each time they're called, making them slow to use. I can also use the PixelArray class and the surfarray module, but they keep the surface locked for the duration of their lifetimes, and the only way to blit them to a surface is to either copy the pixels to a new surface, or use surfarray.blit_array, which requires creating a subsurface of the screen and blitting it to that if the array is smaller than the screen (if it's bigger I can just use a slice of the array, which is no problem).
I don't have much experience with PyOpenGL or Pyglet, but I'm wondering whether there is a faster library for pixel manipulation, or a faster method within Pygame for doing it. I did some work with SDL and OpenGL in C, and I do like the idea of adding vertex/fragment shaders to my program.
My program will chiefly be dealing in loading images and writing/reading to/from surfaces.
Have you tried the Python Imaging Library? You'd still have to communicate the data back to pygame via frombuffer or somesuch to do the blitting, but the PIL can handle the pixel access.
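Something along these lines, assuming a reasonably recent Pillow and pygame (the image path and the diagonal scribble are placeholders):

    import pygame
    from PIL import Image

    im = Image.open("example.png").convert("RGB")    # placeholder path

    # do the per-pixel work on the PIL side
    px = im.load()
    for x in range(im.width):
        px[x, x % im.height] = (255, 0, 0)           # scribble a diagonal, just to touch pixels

    # hand the raw bytes back to pygame for blitting
    surface = pygame.image.frombuffer(im.tobytes(), im.size, im.mode)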
I checked out pyglet, and saw that it works well for static per pixel collision, when the image is not manipulated too much; however, I'm not sure how well it works with a dynamic image.
In short, I'm looking for a library that's able to quickly display a buffer of pixels. This buffer will be constantly changing, so fast access and blitting is essential. This could be done in C with relative ease using SDL; however, I'm looking for a similar method in Python. I'm not even too worried about hardware acceleration at this point, although it would certainly be nice.
Check the Python bindings of the Simple and Fast Multimedia Library (SFML). From its documentation:
It implements the same 2D drawing and OpenGL-related functions (see their base class sf::RenderTarget for more details); the difference is that the result is stored in an off-screen texture rather than being shown in a window.
Rendering to a texture can be useful in a variety of situations:
precomputing a complex static texture (like a level's background from multiple tiles)
applying post-effects to the whole scene with shaders
creating a sprite from a 3D object rendered with OpenGL
etc.
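A rough sketch of that render-to-texture workflow through the Python bindings, assuming they mirror the C++ API quoted above (exact class and attribute names can vary between binding versions):

    import sfml as sf

    render_texture = sf.RenderTexture(800, 600)      # off-screen target

    render_texture.clear(sf.Color(0, 0, 0))          # draw into it exactly like into a window
    render_texture.draw(some_sprite)                 # 'some_sprite' is a placeholder drawable
    render_texture.display()                         # finalize the off-screen texture

    # reuse the result as an ordinary texture on a sprite
    result_sprite = sf.Sprite(render_texture.texture)
    window.draw(result_sprite)                       # 'window' is your sf.RenderWindow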
Also check the contains and intersects methods of the sf::Rect<T> class template.