I find myself using ScatterLayout a lot (Scatter inside of Scatter and so on...). One feature that escapes me is applying some kind of alpha mask to a Scatter canvas. The idea would be to perform simple shape knockouts that correspond to a background image or other simple depth tricks.
I feel like this is doable with some basic OpenGL or maybe the kivy.graphics.stencil_instructions. I am not particularly comfortable with heavy OpenGL commands (I don't know how to debug them), but wrapping some simple commands into a Widget Class is just fine.
So here is what I am getting from other sources, but I want to take this beyond primitive solids (like Rectangle):
Updated with a texture-based direction (code is untested!):
with self.canvas:
    # Hopefully I can build my "transparency mask" manually with a Texture
    texture = Texture.create(size=(64, 64))
    size = 64 * 64 * 3
    buf = [int(x * 255 / size) for x in range(size)]
    buf = bytes(buf)  # Python 3; the original b''.join(map(chr, buf)) is Python 2 only
    texture.blit_buffer(buf, colorfmt='rgb', bufferfmt='ubyte')
    StencilPush()
    Rectangle(texture=texture, pos=self.pos, size=(64, 64))
    # use mask
    StencilUse()
    """
    Will we get an image-based mask on all drawing commands placed here?
    """
    StencilPop()
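For reference, the canonical stencil pattern in Kivy also unbinds the mask before popping; here is a sketch of the usual instruction order (note that the stencil test is binary per pixel, so alpha gradations in a mask texture would not give soft edges):
with self.canvas:
    StencilPush()
    Rectangle(pos=self.pos, size=(64, 64))  # drawing here defines the mask shape
    StencilUse()
    # ... masked drawing commands go here ...
    StencilUnUse()
    Rectangle(pos=self.pos, size=(64, 64))  # re-draw the mask shape to clear the stencil
    StencilPop()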
MultiTexture Canvas is exactly the answer. Try it with your own custom PNGs and you will see: make them as complex as you like and it works beautifully. I am going to include the important pieces of code that make it work, and I consider the question answered, although I may yet reopen it if I run into rendering-order problems (since I am not using a static PNG in my application).
The very simple shader definition you need to include as a docstring (it will be autoloaded; see the link above in case I have stripped this down too far):
fs_multitexture = '''
$HEADER$

// New uniform that will receive the texture at index 1
uniform sampler2D texture1;

void main(void) {
    // multiply the current color with both textures (0 and 1);
    // currently, both use exactly the same texture coordinates
    gl_FragColor = frag_color *
        texture2D(texture0, tex_coord0) *
        texture2D(texture1, tex_coord0);
}
'''
And a bare minimum Widget class that demonstrates the texture filtering process that creates this simple magic:
from kivy.clock import Clock
from kivy.core.window import Window
from kivy.graphics import BindTexture, Color, Rectangle, RenderContext
from kivy.uix.widget import Widget


class MultitextureWidget(Widget):

    def __init__(self, **kwargs):
        self.canvas = RenderContext()
        # assigning new source code to shader.fs compiles it on the fly
        self.canvas.shader.fs = fs_multitexture
        with self.canvas:
            Color(1, 1, 1)
            BindTexture(source='mtexture2.png', index=1)  # mask texture on unit 1
            Rectangle(size=(150, 150), source='mtexture1.png', pos=(500, 200))
        # tell the shader that the sampler texture1 reads from texture unit 1
        self.canvas['texture1'] = 1
        super(MultitextureWidget, self).__init__(**kwargs)
        Clock.schedule_interval(self.update_glsl, 0)

    def update_glsl(self, *largs):
        # keep the custom RenderContext's matrices in sync with the window
        self.canvas['projection_mat'] = Window.render_context['projection_mat']
        self.canvas['modelview_mat'] = Window.render_context['modelview_mat']
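For completeness, a minimal runner for the widget above (my addition; it assumes mtexture1.png and mtexture2.png sit next to the script, with the shader docstring defined as shown earlier):
from kivy.app import App


class MultitextureApp(App):
    def build(self):
        return MultitextureWidget()


if __name__ == '__main__':
    MultitextureApp().run()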
I'm using PyTorch's transforms.Compose, and my dataset contains 1200x1600 (Height x Width) images.
I want to crop the images starting from the top-left corner (0, 0) so that I end up with 800x800 images.
I looked in the PyTorch documentation but didn't find anything to solve my problem, so I copied the source code of center_crop into my project and modified it as follows:
def center_crop(img: Tensor, output_size: List[int]):
    # .... other PyTorch stuff
    # ....
    # Original PyTorch code (which I commented out):
    # crop_top = int((image_height - crop_height + 1) * 0.5)
    # crop_left = int((image_width - crop_width + 1) * 0.5)
    # ----
    # My modification:
    crop_top = crop_left = 0
    return crop(img, crop_top, crop_left, crop_height, crop_width)
But this seems like overkill; if possible, I'd like to avoid copying and modifying their code.
Isn't there something that already implements the desired behaviour by default?
I used a Lambda transform in order to define a custom crop:
from torchvision.transforms.functional import crop


def crop800(image):
    return crop(image, 0, 0, 800, 800)


data_transforms = {
    'images': transforms.Compose([
        transforms.ToTensor(),
        transforms.Lambda(crop800),
        transforms.Resize((400, 400)),
    ])
}
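A quick usage sketch (my addition; it assumes a recent torchvision where Resize accepts tensors, and a hypothetical input file sample.jpg):
from PIL import Image

pil_img = Image.open('sample.jpg')               # hypothetical 1200x1600 input
tensor_img = data_transforms['images'](pil_img)  # tensor of shape (C, 400, 400)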
I'm a 17-year-old programmer trying to write an isometric game in Python with pygame. After finishing a tile engine, working with not-so-good-looking, GIMP-drawn PNGs, I wondered whether it would be possible to render some tiles from a texture. I hope I have provided everything needed to understand my issue, and please excuse my imperfect English.
Simply put, what I want to do is generate a 128 by 128 pixel image of an isometric tile, using the following picture as the texture for all three sides of the block:
(Links here because I'm not yet allowed to embed pictures, as this is my first post.)
To explain better what I'm trying to do, I have drawn this little picture:
I have already searched the Internet for about two hours and didn't come to a solution, except for the top part of the tile. Here is what I already have in code:
This is the image manipulation module; transformToRightPart() is the method where I need help:
import pygame


class Image(object):
    '''
    Use this module to create tiles from a texture, to be used later in the tile engine.
    It is important to run pygame.init() before creating objects of this class!
    Contains unfinished elements!
    '''

    def __init__(self, path):
        self.loadFromPath(path)

    def getIMG(self):
        assert self.originalIMG is not None, "No picture to return"
        if self.IMG is not None:
            return self.IMG
        else:
            return self.originalIMG

    def loadFromPath(self, path):
        '''
        Don't do convert() or convert_alpha() here,
        as objects of this class are created during the loading process,
        with no pygame display created yet.
        '''
        self.originalIMG = pygame.image.load(path)
        self.IMG = None

    def transformToTopPart(self):
        '''
        Transforms the loaded image to the top part of an isometric tile,
        with dimensions 2:1 -- in pixels: 128 px wide by 64 px high.
        '''
        self.IMG = pygame.transform.rotate(self.originalIMG, 45)
        self.IMG = pygame.transform.scale(self.IMG, (128, 64))

    def transformToRightPart(self):
        '''
        TODO!! Don't ask how (X.X)
        Transforms the loaded image to the right part of an isometric tile.
        '''
        assert False, "This method isn't finished, try something different ;)"

    def transformToLeftPart(self):
        '''
        Transforms the loaded image to the left part of an isometric tile.
        Thanks to the nice geometric fact that the shape of the left part
        is just the flipped right-part shape, and that we lose no quality
        by flipping, we use this little trick to shorten the code.
        '''
        self.originalIMG = pygame.transform.flip(self.originalIMG, True, False)
        self.transformToRightPart()
        self.IMG = pygame.transform.flip(self.IMG, True, False)
        self.originalIMG = pygame.transform.flip(self.originalIMG, True, False)
And this is the module that creates a window with the tile to render:
import pygame, sys
from ImageManipulation import Image
from pygame.locals import *

if __name__ == '__main__':
    pygame.init()
    FPS = 20
    fpsClock = pygame.time.Clock()
    picture = Image("Stone_Floor_texture.png")
    picture.transformToTopPart()
    DISPLAY = pygame.display.set_mode((400, 400), 0, 32)
    while True:
        for event in pygame.event.get():
            if event.type == QUIT:
                pygame.quit()
                sys.exit()
        DISPLAY.blit(picture.getIMG(), (0, 0))
        pygame.display.update()
        fpsClock.tick(FPS)
The output of the code looks like this:
What I'm trying to achieve is that it looks something like this:
Big thanks to Spektre for all the effort he made trying to help me, but all in all, after two days of over-thinking the problem and bug-fixing, I came up with a solution myself. It might not be as fast or efficient as targeting the pixels directly in an array, like Spektre did in his C++ example, but its only dependency is pygame, and it is easy to understand.
What did I do? I wrote two functions. The first returns a surface containing only a single column of another surface, given an index referring to the x position of that column.
The second calculates a coefficient for how far down each column should be moved, given that the last column should be shifted down by a certain number of pixels, and returns a surface with the shifted picture.
Here is the magic code:
import pygame
from pygame.locals import *
from pygame import Surface


def getColumn(surface, index):
    assert index <= surface.get_width(), "index can't be bigger than surface width"
    height = surface.get_height()
    # create a surface 1 px wide and picture-height high to store the output
    subsurf = Surface((1, height))
    # blit a one-pixel-wide subsurface, whose x position is index, onto subsurf
    subsurf.blit(surface.subsurface(pygame.Rect((index, 0), (1, height))), (0, 0))
    return subsurf


def shiftRightDown(surface, pixels):
    size = surface.get_size()
    newSize = (size[0], size[1] + pixels)
    coeff = pixels / size[0]  # how far down each successive column moves
    returnSurface = Surface(newSize)
    for i in range(size[0]):  # iterate over the columns; here happens the magic
        returnSurface.blit(getColumn(surface, i), (i, int(i * coeff)))
    return returnSurface
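As a usage sketch (my own addition, untested against the original assets): squash the texture to half the tile width, then shear it down to get the right face:
# hypothetical usage: build the right face of a 128-px-wide tile
texture = pygame.image.load("Stone_Floor_texture.png")
half = pygame.transform.scale(texture, (64, 64))  # right face spans half the tile width
right_face = shiftRightDown(half, 32)             # columns shear down by up to 32 px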
After all, big respect for Spektre's coding skills, even though, as a total beginner, I'm unable to understand much of the C++ example.
Well, I did this by simply copying the texture pixels into the sprite using plane projections (basis vectors of each side) plus some rescaling, as the texture does not correspond to your sprite resolution. I did it in C++, so here is my commented code (you can extract the equations from it):
// [constants]
const int sxs=128;            // target sprite resolution [pixels]
const int sys=128;
const int height=32;          // height/thickness of your tile [pixels]
const DWORD cback=0x00FFFFFF; // background color (or invisible for the sprite)

// [variables]
DWORD **ptxr,**pspr;          // direct pixel access pointers (any 32bit variable type)
Graphics::TBitmap *txr,*spr;  // VCL bitmaps
int txs,tys,x,y,x0,y0,xx,yy,th;

// [init]
// create VCL bitmaps (can ignore this)
txr=new Graphics::TBitmap;    // input texture
spr=new Graphics::TBitmap;    // output sprite
// load texture
txr->LoadFromFile("texture.bmp");
txs=txr->Width;
tys=txr->Height;
// prepare sprite resolution
spr->SetSize(sxs,sys);
// allow direct pixel access
txr->HandleType=bmDIB; txr->PixelFormat=pf32bit; ptxr=new DWORD*[tys]; for (y=0;y<tys;y++) ptxr[y]=(DWORD*)txr->ScanLine[y];
spr->HandleType=bmDIB; spr->PixelFormat=pf32bit; pspr=new DWORD*[sys]; for (y=0;y<sys;y++) pspr[y]=(DWORD*)spr->ScanLine[y];

// [render sprite]
th=height*(txs-1)/(sxs-1); // height of tile in texture [pixels]
// clear
for (y=0;y<sys;y++)
 for (x=0;x<sxs;x++)
  pspr[y][x]=cback;
// top side
x0=0; y0=(sys*3/4)-height;
for (y=0;y<tys;y++)
 for (x=0;x<txs;x++)
    {
    // isometric projection of top side
    xx=x0+(x+y)*(sxs-1)/((txs-1)*2);
    yy=y0+(x-y)*(sxs-1)/((txs-1)*4);
    // copy pixel from texture to sprite
    if ((xx>=0)&&(xx<sxs)&&(yy>=0)&&(yy<sys))
     pspr[yy][xx]=ptxr[y][x];
    }
// left side
x0=0; y0=(sys*3/4)-height;
for (y=0;(y<tys)&&(y<th);y++)
 for (x=0;x<txs;x++)
    {
    // isometric projection of left side
    xx=x0+(x      )*(sxs-1)/((txs-1)*2);
    yy=y0+(x+(4*y))*(sxs-1)/((txs-1)*4);
    // copy pixel from texture to sprite
    if ((xx>=0)&&(xx<sxs)&&(yy>=0)&&(yy<sys))
     pspr[yy][xx]=ptxr[y][x];
    }
// right side
x0=sxs/2; y0=sys-height-1;
for (y=0;(y<txs)&&(y<th);y++) // x,y are swapped to avoid connection seams
 for (x=0;x<tys;x++)
    {
    // isometric projection of right side
    xx=x0+(+x      )*(sxs-1)/((txs-1)*2);
    yy=y0+(-x+(4*y))*(sxs-1)/((txs-1)*4);
    // copy pixel from texture to sprite
    if ((xx>=0)&&(xx<sxs)&&(yy>=0)&&(yy<sys))
     pspr[yy][xx]=ptxr[x][y];
    }

// here do your stuff with your sprite spr; I render the source and resulting
// images into a bitmap to show them on screen (you can ignore this)
bmp->SetSize(txs+5+sxs,max(tys,sys));
bmp->Canvas->Brush->Color=clBtnFace;
bmp->Canvas->FillRect(TRect(0,0,bmp->Width,bmp->Height));
bmp->Canvas->Draw(0,0,txr);
bmp->Canvas->Draw(txs+5,0,spr);

// [exit]
// release memory
delete[] ptxr;
delete[] pspr;
if (txr) delete txr; txr=NULL;
if (spr) delete spr; spr=NULL;
The texture must be square, otherwise the right-side rendering will have access violation troubles, not to mention visible seams ...
Here is an output sprite example of this code:
Now how it works:
Ignore the VCL init/load/exit stuff handling the images; the important part is just the rendering.
Each side consists of setting a start point (red square) and converting the texture x,y coordinates into an offset from that start point along the plane-projection basis vectors (the black arrows).
The offset is also multiplied by the resolution ratio between texture and sprite, to handle their different sizes.
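To make those equations easier to experiment with from Python, here is a rough pygame translation of the top-side loop (my own sketch, untested; texture and sprite are pygame Surfaces and the constants mirror the C++ ones):
import pygame

def render_top_side(texture, sprite, sxs=128, sys=128, height=32):
    # rough Python port of the C++ top-side loop above (assumes a square texture)
    txs, tys = texture.get_size()
    x0, y0 = 0, (sys * 3 // 4) - height  # start point of the top face
    for y in range(tys):
        for x in range(txs):
            # isometric projection: walk along the two top-face basis vectors,
            # rescaled by the texture-to-sprite resolution ratio
            xx = x0 + (x + y) * (sxs - 1) // ((txs - 1) * 2)
            yy = y0 + (x - y) * (sxs - 1) // ((txs - 1) * 4)
            if 0 <= xx < sxs and 0 <= yy < sys:
                sprite.set_at((xx, yy), texture.get_at((x, y)))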
Look here to understand the direct pixel access I used:
Display an array of color in C
PS
You can add lighting to enhance the 3D look ... This is how it looks when the top side is at 100%, the left side at 75%, and the right side at 50% intensity:
(simulating light coming from above the left side)
I have an almost-working piece of code (I hope). In the update method of this class, random black points should be drawn at locations bounded by the width and height of the window; the problem is that the points are not drawn. A GTK window containing the background image, loaded with cairo's ImageSurface.create_from_png(BG_IMG), is displayed, and I've also verified that the update function is called (every 17 ms via a gobject.timeout_add callback). I've searched here and elsewhere, but I can't quite see what's wrong with this code...
class Screen(gtk.DrawingArea):
    __gsignals__ = {"expose-event": "override"}

    def do_expose_event(self, event):
        self.cr = self.window.cairo_create()
        self.cr.rectangle(event.area.x, event.area.y, event.area.width, event.area.height)
        self.cr.clip()
        self.draw(*self.window.get_size())

    def draw(self, width, height):
        x = y = 0
        self.bg = c.ImageSurface.create_from_png(BG_IMG)
        self.cr.set_source_surface(self.bg, x, y)
        self.cr.paint()

    def update(self):
        x = randint(0, DOCK_W)
        y = randint(0, DOCK_H)
        self.cr.rectangle(x, y, 1, 1)
        self.cr.set_source_rgba(0, 0, 0, 1)
        self.cr.fill()
        self.cr.paint()
Anybody have some insight into why this code is failing? Big thanks in advance!
Solved
I was unaware that a new cairo context could be created for each draw operation. That turned out to be the main problem.
Generally speaking, you should not draw directly to the window outside of an expose event, and you should not keep the cairo context for later use: create one for each event run.
When you want to draw your points, just call widget.queue_draw() and a new expose event will be delivered to you ASAP. But note that in the expose event you will have to paint all the points, not just the new one.
There is a useful optimization for your type of code: from the timer, do not call queue_draw, as it is fairly inefficient. Instead, just draw the new point. However, that does not excuse you from drawing all the points in do_expose_event, as an expose event can happen at any time and you do not want to lose the already-painted points.
To do the one-point draw you have to create a new cairo context, but you do not need to save it:
def update(self):
    cr = self.window.cairo_create()
    x = randint(0, DOCK_W)
    y = randint(0, DOCK_H)
    self.points.append((x, y))  # so the expose event can re-draw them all ;-)
    cr.rectangle(x, y, 1, 1)
    cr.set_source_rgba(0, 0, 0, 1)
    cr.fill()
Another common optimization, particularly if you have a lot of points, is to keep the painted image in a bitmap, so that when the expose event happens you simply blit the bitmap instead of iterating over the whole list of points.
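A minimal sketch of that backing-bitmap idea (my own illustration, not from the original answer), using an offscreen cairo ImageSurface as the store; DOCK_W, DOCK_H, and randint are as in the question:
class Screen(gtk.DrawingArea):
    __gsignals__ = {"expose-event": "override"}

    def __init__(self):
        super(Screen, self).__init__()
        # offscreen backing store that accumulates all painted points
        self.backing = cairo.ImageSurface(cairo.FORMAT_ARGB32, DOCK_W, DOCK_H)

    def do_expose_event(self, event):
        cr = self.window.cairo_create()
        cr.set_source_surface(self.backing, 0, 0)
        cr.paint()  # a single blit instead of replaying the point list

    def update(self):
        cr = cairo.Context(self.backing)  # draw into the backing store, not the window
        cr.rectangle(randint(0, DOCK_W), randint(0, DOCK_H), 1, 1)
        cr.set_source_rgba(0, 0, 0, 1)
        cr.fill()
        self.queue_draw()  # schedule a repaint (or queue_draw_area for just the point)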
I have used the Python Imaging Library to load a .ttf font. Here is the code:
self.rect = Image.new("RGBA", (600,100), (255,255,255))
self.draw = ImageDraw.Draw(self.rect)
self.font = ImageFont.truetype("font.ttf", 96)
self.draw.text((5,0), "activatedgeek", (0,0,0), font=self.font)
self.texture = self.loadFont(self.rect)
Here is the loadFont() function of the respective class:
def loadFont(self, im):
    try:
        ix, iy, image = im.size[0], im.size[1], im.tostring("raw", "RGBA", 0, -1)
    except SystemError:
        ix, iy, image = im.size[0], im.size[1], im.tostring("raw", "RGBX", 0, -1)
    retid = gl.glGenTextures(1)
    gl.glPixelStorei(gl.GL_UNPACK_ALIGNMENT, 1)
    gl.glBindTexture(gl.GL_TEXTURE_2D, retid)
    gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MAG_FILTER, gl.GL_LINEAR_MIPMAP_LINEAR)
    gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_MIN_FILTER, gl.GL_LINEAR_MIPMAP_LINEAR)
    gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_S, gl.GL_CLAMP)
    gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_T, gl.GL_CLAMP)
    gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_GENERATE_MIPMAP, gl.GL_TRUE)
    gl.glTexImage2D(gl.GL_TEXTURE_2D, 0, 3, ix, iy, 0, gl.GL_RGBA, gl.GL_UNSIGNED_BYTE, image)
    return retid
Here is a snapshot, taken using glReadPixels(), which unfortunately matches what is rendered in the window created using PyQt.
It shows an unwanted border, some artefact. Please help me rectify this.
Have you considered using a more reasonable wrap state, such as GL_CLAMP_TO_EDGE? I have a strong feeling that this is related to border color beyond the edges of your texture image.
There are a number of approaches you could take to solve an issue like this, ranging from pre-multiplied alpha to an extra texel border around the entire image, but the simplest thing to try would be GL_CLAMP_TO_EDGE.
GL_CLAMP is something of a joke as far as wrap modes go: it does not clamp the range of texture coordinates to texel centers, and calamity ensues when the nearest texel becomes the border color. Needless to say, this behavior is usually undesirable.
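Concretely, in the loadFont() above, that would mean changing just the two wrap-mode lines (a sketch; GL_CLAMP_TO_EDGE has been core since OpenGL 1.2):
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_S, gl.GL_CLAMP_TO_EDGE)
gl.glTexParameteri(gl.GL_TEXTURE_2D, gl.GL_TEXTURE_WRAP_T, gl.GL_CLAMP_TO_EDGE)
# unrelated aside: GL_TEXTURE_MAG_FILTER only accepts GL_NEAREST or GL_LINEAR;
# GL_LINEAR_MIPMAP_LINEAR is only valid for the MIN filter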
I am working on a sensor-based Python application built on a PyQt4 GUI. The sensor generates 16-bit measurements: 256 16-bit "pixels" per "line". A square "image" is acquired by obtaining 256 lines, resulting in a (256, 256) NumPy array of 16-bit numbers. I simply want to display this as a grayscale image. The sensor loop runs in a QThread and emits a QImage signal. The signal connects to a slot that renders the data in the main GUI by packing it into a 32-bit RGB image. Of course, to pack the 16-bit grayscale pixels into a 32-bit RGB image, I am forced to scale the 16-bit pixels down to 8 bits, and a substantial amount of dynamic range is lost. An MWE is provided that shows my current strategy (this is obviously not my larger threaded sensor-based application; it simply extracts the salient portions). Please note that I am a Python beginner and I'm doing my best to keep up...
#!/usr/bin/python
# -*- coding: utf-8 -*-

"""
Grayscale to RGB32 QPixmap tests
"""

import sys
import numpy as np
from PyQt4 import QtGui, QtCore


class PixmapTest(QtGui.QWidget):

    def __init__(self):
        super(PixmapTest, self).__init__()
        self.initUI()

    def initUI(self):
        imglayout = QtGui.QHBoxLayout(self)

        img_16bit = np.random.randint(0, 65535, size=(256, 256)).astype(np.uint32)
        img_16bit_to_8bit = (img_16bit / 65535.0 * 255).astype(np.uint32)
        packed_img_array = (255 << 24 | (img_16bit_to_8bit) << 16 | (img_16bit_to_8bit) << 8 | (img_16bit_to_8bit)).flatten()
        img = QtGui.QImage(packed_img_array, 256, 256, QtGui.QImage.Format_RGB32)
        pixmap = QtGui.QPixmap(img.scaledToWidth(img.width() * 2))

        imglabel = QtGui.QLabel(self)
        imglabel.setPixmap(pixmap)
        imglayout.addWidget(imglabel)

        self.setLayout(imglayout)
        self.move(300, 200)
        self.setWindowTitle('QPixmap Test')
        self.show()


def main():
    app = QtGui.QApplication(sys.argv)
    form = PixmapTest()
    sys.exit(app.exec_())


if __name__ == '__main__':
    main()
Specifically, my questions are:
Is there a better way? The solution has to remain "lightweight" (i.e., PyQt4 QImage/QPixmap). I can't use Matplotlib or anything heavyweight, as it is too slow. The closer to native Python/NumPy the better. I realize this is ultimately a limitation of the QImage class, but I was hoping there was a clever solution I'm just not seeing that lets me keep my current signal/slot "wiring".
Through experimentation, I've found that I have to declare all arrays that ultimately end up in the QImage as np.uint32 (though np.int32 seems to work as well). It doesn't work if I only declare the penultimate array as uint32/int32. I don't understand why.
I've played around with altering luminosity via Y' = 0.2126 R + 0.7152 G + 0.0722 B and other similar conversions. Probably "polishing a turd" here, but I thought I'd include this because other answers on SX seem to indicate it is important. Notwithstanding the loss of dynamic range, simply assigning the same value to R, G, and B, as in my MWE, seems to work.
As requested in a comment below, here is a histogram of some sample data from the sensor to illustrate the dynamic range:
Here I use some function data for the demo:
y, x = np.mgrid[-10:10:256j, -10:10:256j]
data = ((np.sin(y**2 + x**2) + 2) * 1000).astype(np.uint16)

# two alternatives -- choose one:
img_8bit = (data / 256.0).astype(np.uint8)  # use the high 8 bits
img_8bit = ((data - data.min()) / (data.ptp() / 255.0)).astype(np.uint8)  # map the data range to 0-255

img = QtGui.QImage(img_8bit.repeat(4), 256, 256, QtGui.QImage.Format_RGB32)
When using the high 8 bits, it looks like this:
When mapping the min & max values to (0, 255), it looks like this:
To convert the 8-bit image to 32 bits, you can just call img_8bit.repeat(4); this repeats every byte 4 times, so the memory can be viewed as a uint32 buffer. Since you create the QImage with Format_RGB32 rather than Format_ARGB32, the most significant byte is not used.
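As a quick sanity check of that trick (my own sketch; a little-endian machine is assumed): repeating each 8-bit value four times produces 32-bit pixels whose B, G, R, and top bytes all carry the same gray value, and Format_RGB32 ignores the top byte:
import numpy as np

gray = np.array([0x12, 0x34], dtype=np.uint8)
packed = gray.repeat(4).view(np.uint32)  # every byte repeated 4x, reinterpreted as 32-bit pixels
assert tuple(packed) == (0x12121212, 0x34343434)  # top byte is ignored by Format_RGB32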