I'm working on PDF generation code for a Django-based website, using ReportLab / Platypus to generate the PDF.
I've subclassed PageTemplate so I can have consistent page trim, and I've added code to generate multi-column layouts to suit requirements. I currently have showBoundary=1 turned on for debugging.
However, I'm only seeing the first frame boundary appearing when I render a two column layout. What might be going wrong?
class ReportPageTemplate(PageTemplate):
    def __init__(self, id='basic', columns=1, pagesize=A4,
                 leftMargin=(2 * cm), bottomMargin=(2.1 * cm), colmargin=(0.5 * cm)):
        (right, top) = pagesize
        right -= leftMargin
        top -= bottomMargin
        height = top - bottomMargin
        width = right - leftMargin
        # Subtract out blank space between columns
        colwidth = (width - ((columns - 1) * colmargin)) / columns
        frames = []
        for col in range(columns):
            left = leftMargin + (col * (colwidth + colmargin))
            frames.append(Frame(left, bottomMargin, colwidth, height, showBoundary=1))
        PageTemplate.__init__(self, id=id, frames=frames, pagesize=pagesize)
    def beforeDrawPage(self, canvas, doc):
        print(self.id)
        (width, height) = canvas._pagesize
        canvas.setLineWidth(0.2 * cm)
        canvas.line(0.5 * cm, height - (2 * cm), width - (0.5 * cm), height - (2 * cm))
        canvas.line(0.5 * cm, 2 * cm, width - (0.5 * cm), 2 * cm)
Ah, well I feel foolish.
The second frame is only rendered if the first one fills up. For testing purposes I needed to include a FrameBreak object to force it to draw both columns.
The code is actually already working.
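For anyone hitting the same thing, here's a minimal sketch of forcing the second column during testing; the document name, template id and story contents are just illustrative:

from reportlab.lib.pagesizes import A4
from reportlab.lib.styles import getSampleStyleSheet
from reportlab.platypus import BaseDocTemplate, FrameBreak, Paragraph

styles = getSampleStyleSheet()
doc = BaseDocTemplate("two_column_test.pdf", pagesize=A4)
doc.addPageTemplates([ReportPageTemplate(id='twocol', columns=2)])

story = [
    Paragraph("Column one content", styles["Normal"]),
    FrameBreak(),  # jump to the next frame so the second boundary gets drawn
    Paragraph("Column two content", styles["Normal"]),
]
doc.build(story)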
I am trying to fill the image area outside of a custom curved shape in Pycairo, but I'm struggling to achieve this. I have managed to get the result I need by stroking the shape with a large line thickness and drawing multiple shapes of increasing size on top of each other, but this solution is inefficient (efficiency matters because I need to draw 1200 shapes quickly, which currently takes about a minute). I think there might be a way to use a mask or clip or something similar, but I can't find anything online that helps. If there is a way to draw the stroke only outside the path, rather than on both sides, that could also be a solution.
Does anyone out there know of a better way to achieve this?
Here's the code I use to draw a curved shape. The calculate_curve_handles function just returns two curve handles between the two sides of the shape, based on the curve_point_1 and curve_point_2 offsets. The polygon function returns the vertex locations for an N-sided polygon.
vertices = polygon(num_sides, shape_radius + (scale * (line_thickness - 20)), rotation,
                   [x + offset[0], y + offset[1]])
for i in range(len(vertices)):
    start_point = [vertices[i][0], vertices[i][1]]
    cr.move_to(start_point[0], start_point[1])
    if i == len(vertices) - 1:
        end_point = [vertices[0][0], vertices[0][1]]
    else:
        end_point = [vertices[i + 1][0], vertices[i + 1][1]]
    point_1, point_2 = calculate_curve_handles(start_point, end_point,
                                               curve_point_1_offset, curve_point_2_offset)
    cr.curve_to(point_1[0], point_1[1], point_2[0], point_2[1], end_point[0], end_point[1])
cr.set_line_cap(cairo.LINE_CAP_ROUND)
cr.fill()
This is the desired result, achieved with many stroked objects layered on top of each other:
This is what I get when I try to use cr.fill() on the curved path:
OK, I just figured out that if I move the move_to() call outside of the for loop over the vertices, it draws the shape properly.
Then, by setting the fill rule with cr.set_fill_rule(cairo.FILL_RULE_EVEN_ODD) and drawing a large rectangle behind the shape, I can get the desired effect in even less time.
cr.move_to(vertices[0][0], vertices[0][1])
for i in range(0, len(vertices)):
    start_point = [vertices[i][0], vertices[i][1]]
    if i == len(vertices) - 1:
        end_point = [vertices[0][0], vertices[0][1]]
    else:
        end_point = [vertices[i + 1][0], vertices[i + 1][1]]
    point_1, point_2 = calculate_curve_handles(start_point, end_point,
                                               curve_point_1_offset, curve_point_2_offset)
    cr.curve_to(point_1[0], point_1[1], point_2[0], point_2[1], end_point[0], end_point[1])
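For reference, here's a minimal self-contained sketch of that even-odd approach with a toy closed curve; the surface size and curve coordinates are made up for illustration:

import cairo

surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 400, 400)
cr = cairo.Context(surface)

cr.set_fill_rule(cairo.FILL_RULE_EVEN_ODD)
cr.rectangle(0, 0, 400, 400)           # outer subpath covering the whole canvas

cr.move_to(300, 200)                   # inner subpath: a simple closed curve
cr.curve_to(300, 260, 100, 260, 100, 200)
cr.curve_to(100, 140, 300, 140, 300, 200)
cr.close_path()

cr.set_source_rgb(0, 0, 0)
cr.fill()                              # fills between the rectangle and the curve only
surface.write_to_png("outside_filled.png")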
I found a solution that works for now. Basically, for every side of the shape, I find a point that extends along the vector from the centre of the object through the vertex to well outside the drawing area. Then I fill each line segment as a separate shape:
import numpy as np

def calculate_bounds(start_point, end_point, centre_point):
    direction = np.subtract(start_point, centre_point)
    normalised_dir = direction / np.sqrt(np.sum(direction ** 2))
    bound_1 = start_point + normalised_dir * 5000

    direction = np.subtract(end_point, centre_point)
    normalised_dir = direction / np.sqrt(np.sum(direction ** 2))
    bound_2 = end_point + normalised_dir * 5000

    return bound_1, bound_2
Then the code for drawing the polygon is:
for i in range(0, len(vertices)):
    start_point = [vertices[i][0], vertices[i][1]]
    cr.move_to(start_point[0], start_point[1])
    if i == len(vertices) - 1:
        end_point = [vertices[0][0], vertices[0][1]]
    else:
        end_point = [vertices[i + 1][0], vertices[i + 1][1]]
    point_1, point_2 = calculate_curve_handles(start_point, end_point,
                                               curve_point_1_offset, curve_point_2_offset)
    cr.curve_to(point_1[0], point_1[1], point_2[0], point_2[1], end_point[0], end_point[1])
    bound_1, bound_2 = calculate_bounds(start_point, end_point, [x + offset[0], y + offset[1]])
    cr.line_to(bound_2[0], bound_2[1])
    cr.line_to(bound_1[0], bound_1[1])
    cr.fill_preserve()
    cr.stroke()
I am making a Langton's ant cellular automaton program, and I want the user to be able to pan and zoom. Right now, I have all my rectangles (grid squares) stored in a dictionary, and to pan/zoom I iterate through all of them and apply the needed transformation.
def zoom(self, factor, center_x, center_y):
    for x in range(WIDTH):
        for y in range(HEIGHT):
            rect = self.rects[x][y]
            self.rects[x][y].x = (rect.x - center_x) * factor + center_x
            self.rects[x][y].y = (rect.y - center_y) * factor + center_y
            self.rects[x][y].width = rect.width * factor
            self.rects[x][y].height = rect.height * factor
However, with the number of rectangles (32,000), it takes a second or two to pan or zoom. Is there any better way of doing it than this? Thanks!
Here is the full code
Yes. Use OpenGL transformation matrices to apply the transformations. These are applied on the GPU, giving a significant performance gain.
pyglet.graphics.Group lets you group together such transformations in order to apply them automatically to Pyglet primitives when drawing them.
Example
We create a CameraGroup that pans and zooms objects into view.
from pyglet.graphics import Group
import pyglet.gl as gl
import pyglet.shapes


class CameraGroup(Group):
    def __init__(self, window, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.win = window

    def set_state(self):
        gl.glPushMatrix()
        # Pan so the camera centre ends up in the middle of the window
        x = -(self.win.center_x - self.win.width // 2)
        y = -(self.win.center_y - self.win.height // 2)
        gl.glTranslatef(x, y, 0.0)
        gl.glScalef(self.win.factor, self.win.factor, 1.0)

    def unset_state(self):
        gl.glPopMatrix()
The example assumes you have the properties center_x, center_y and factor on your window.
Apply the group by attaching it to Pyglet objects.
cam_group = CameraGroup(main_win)
rect = pyglet.shapes.Rectangle(250, 300, 400, 200, color=(255, 22, 20), batch=batch, group=cam_group)
When rect gets rendered, the group transformations are applied automatically.
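Here's a rough sketch of the window-side setup the example assumes; the subclass, the initial values and the scroll handler are illustrative additions, not part of the original answer:

import pyglet

batch = pyglet.graphics.Batch()

class MainWindow(pyglet.window.Window):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Camera state read by CameraGroup.set_state()
        self.center_x = self.width // 2
        self.center_y = self.height // 2
        self.factor = 1.0

    def on_draw(self):
        self.clear()
        batch.draw()   # group transforms are pushed/popped around this draw

    def on_mouse_scroll(self, x, y, scroll_x, scroll_y):
        # Zooming is now just a change of one attribute, no per-rect loop
        self.factor *= 1.1 if scroll_y > 0 else 0.9

main_win = MainWindow(800, 600)
# ...create cam_group and the shapes as above, then:
# pyglet.app.run()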
You can also construct more complex groups if needed.
There is a camera example for pyglet in the examples folder.
https://github.com/pyglet/pyglet/blob/pyglet-1.5-maintenance/examples/camera.py
As shown in the Open3D documentation, you can use the get_view_control.rotate() function to rotate the object inside the viewer, but it does not specify the units (degrees, radians, etc.). If I use a value of around 2100 it looks like a full turn, but after putting that in a loop, it turns out this is not the exact value for turning 360 degrees. I also don't see it mentioned anywhere in the Open3D documentation.
I want to capture depth images at different angles over a full 360 degrees (x, y, z). This is a piece of my code:
class Viewer:
    def __init__(self, on, of, fd):  # objectName, objectFile and folderDirectory
        self.index = 0
        self.objectName = on
        self.objectFile = of
        self.folderDirectory = fd
        self.vis = o3d.visualization.Visualizer()
        self.view = o3d.visualization.ViewControl()
        self.pcd = o3d.io.read_triangle_mesh(self.folderDirectory + self.objectFile)

    def depthFullCapture(self, times):
        self.numberOfTimes = times

        def captureDepth(vis):
            print('Capturing')
            self.depth = vis.capture_depth_float_buffer(False)
            plt.imsave((self.folderDirectory + 'images/' + self.objectName + '_{:05d}.png'.format(self.index)),
                       np.asarray(self.depth), dpi=1)
            np.savetxt((self.folderDirectory + 'text/' + self.objectName + '_{:05d}.txt'.format(self.index)),
                       self.depth, fmt='%.2f', delimiter=',')
            vis.register_animation_callback(rotate)

        def rotate(vis):
            print('Rotating')
            ctr = vis.get_view_control()
            if (self.index % 25 == 0):
                self.vis.reset_view_point(True)
                ctr.rotate(0, ((2100 / 25) * (self.index / 25)))
            else:
                ctr.rotate(84, 0)
            ctr.set_zoom(0.75)
            self.index += 1
            if not (self.index == 625):
                vis.register_animation_callback(captureDepth)
            else:
                vis.register_animation_callback(None)
                vis.destroy_window()

        self.vis.create_window(width=200, height=200)
        self.vis.add_geometry(self.pcd)
        self.vis.register_animation_callback(captureDepth)
        self.vis.run()
So can anyone explain the correct value/unit for turning a certain number of degrees? Or is there another/better way to do this? Thanks in advance! If anything is not clear, please ask :)
The actual answer can be found in the C++ documentation:
const double open3d::visualization::ViewControl::ROTATION_RADIAN_PER_PIXEL = 0.003
The rotation units are pixels:
x and y are the distances the mouse cursor has moved. xo and yo are the original point coordinates the mouse cursor started to move from. Coordinates are measured in screen coordinates relative to the top-left corner of the window client area.
You were very close.
0.003 [radian/pixel] * (180/pi) [degrees/radian] = 0.1719 [degrees/pixel]
OR
5.8178 [pixels/degree]
Taking
360 [degrees/rotation] * 5.8178 [pixels/degree] = 2094.3951 [pixels/rotation]
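Putting that together, here is a small sketch of the conversion (the helper function name is made up; ctr is the view control obtained from vis.get_view_control() as in the code above):

import math

# ROTATION_RADIAN_PER_PIXEL from the Open3D source
RADIAN_PER_PIXEL = 0.003

def degrees_to_rotate_pixels(degrees):
    """Convert a desired rotation in degrees to the pixel value expected by ViewControl.rotate()."""
    return math.radians(degrees) / RADIAN_PER_PIXEL

# e.g. step the view by 360/25 = 14.4 degrees per callback
step = degrees_to_rotate_pixels(360.0 / 25)   # ~83.8 pixels, close to the 84 used above
ctr.rotate(step, 0)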
As far as I know from the example in the Open3D docs (see also this link), get_view_control.rotate() takes 4 arguments: x, y, xo and yo, all of them float values in degrees.
This answer surely comes too late and could be expanded; maybe you can tell us what you learnt!
I created a bitmap font, basically a 256x256 PNG image where each character occupies an 8x8 tile. I want to use it with Pillow as an ImageFont, but there's no info on this in the Pillow docs. It says I can load bitmap fonts like this:
font = ImageFont.load("arial.pil")
but "PIL uses its own font file format to store bitmap fonts." so I guess png file won't work. How can I tell PIL to use said bitmap and where each character is on it?
Not a complete answer, but too much for a comment, and it may be useful or spur someone else to work out the other 60% :-)
I may delete it if anyone else comes up with something better...
You can go to the Pillow repository on Github and download a ZIP file of the code.
If you go in there and nose around you will find two things that appear to work hand-in-hand, namely a .PIL file and a .PBM file.
In Tests/fonts there is a file called 10x20.pbm which is actually a PNG file if you look inside it. So, if you change its name to 10x20.png you can view it and it looks like this:
By the way, if you want to split that into 10x20 size chunks with one letter in each, you can use ImageMagick in Terminal like this:
convert 10x20.pbm -crop 10x20 char_%d.png
and you will get a bunch of files called char_0.png, char_1.png etc. The first 4 look like this:
If you look in src/PIL/FontFile.py there is this code that seems to know how to access/generate the metrics for a font:
#
# The Python Imaging Library
# $Id$
#
# base class for raster font file parsers
#
# history:
# 1997-06-05 fl   created
# 1997-08-19 fl   restrict image width
#
# Copyright (c) 1997-1998 by Secret Labs AB
# Copyright (c) 1997-1998 by Fredrik Lundh
#
# See the README file for information on usage and redistribution.
#

from __future__ import print_function

import os

from . import Image, _binary

WIDTH = 800


def puti16(fp, values):
    # write network order (big-endian) 16-bit sequence
    for v in values:
        if v < 0:
            v += 65536
        fp.write(_binary.o16be(v))


##
# Base class for raster font file handlers.

class FontFile(object):

    bitmap = None

    def __init__(self):
        self.info = {}
        self.glyph = [None] * 256

    def __getitem__(self, ix):
        return self.glyph[ix]

    def compile(self):
        "Create metrics and bitmap"

        if self.bitmap:
            return

        # create bitmap large enough to hold all data
        h = w = maxwidth = 0
        lines = 1
        for glyph in self:
            if glyph:
                d, dst, src, im = glyph
                h = max(h, src[3] - src[1])
                w = w + (src[2] - src[0])
                if w > WIDTH:
                    lines += 1
                    w = (src[2] - src[0])
                maxwidth = max(maxwidth, w)

        xsize = maxwidth
        ysize = lines * h

        if xsize == 0 and ysize == 0:
            return ""

        self.ysize = h

        # paste glyphs into bitmap
        self.bitmap = Image.new("1", (xsize, ysize))
        self.metrics = [None] * 256
        x = y = 0
        for i in range(256):
            glyph = self[i]
            if glyph:
                d, dst, src, im = glyph
                xx = src[2] - src[0]
                # yy = src[3] - src[1]
                x0, y0 = x, y
                x = x + xx
                if x > WIDTH:
                    x, y = 0, y + h
                    x0, y0 = x, y
                    x = xx
                s = src[0] + x0, src[1] + y0, src[2] + x0, src[3] + y0
                self.bitmap.paste(im.crop(src), s)
                self.metrics[i] = d, dst, s

    def save(self, filename):
        "Save font"

        self.compile()

        # font data
        self.bitmap.save(os.path.splitext(filename)[0] + ".pbm", "PNG")

        # font metrics
        with open(os.path.splitext(filename)[0] + ".pil", "wb") as fp:
            fp.write(b"PILfont\n")
            fp.write((";;;;;;%d;\n" % self.ysize).encode('ascii'))  # HACK!!!
            fp.write(b"DATA\n")
            for id in range(256):
                m = self.metrics[id]
                if not m:
                    puti16(fp, [0] * 10)
                else:
                    puti16(fp, m[0] + m[1] + m[2])
So hopefully someone has time/knowledge of how to put those two together to enable you to generate the metrics file for your PNG. I think you just need something that does the last 10 lines of that code for your PNG.
There appear to be 23 bytes of header, which you can simply replicate, and then there are 256 "entries", i.e. one for each of the 256 glyphs. Each entry has 10 numbers in it, and each number is 16-bit big-endian.
Let's look at the header:
dd if=10x20.pil bs=23 count=1| xxd -c23 | more
00000000: 5049 4c66 6f6e 740a 3b3b 3b3b 3b3b 3230 3b0a 4441 5441 0a PILfont.;;;;;;20;.DATA.
Then you can see the entries using the command below to skip the header and group nicely:
dd if=10x20.pil bs=23 iseek=1| xxd -g2 -c20
which gives:
Column 1 appears to be the width of the glyph.
Column 7 is the x-offset of the left edge of the glyph in the image, and column 9 is the x-offset of the right edge of the glyph in the image. So you will see that column 7 on each line is the same as column 9 on the previous line, i.e. that the glyphs abut each other going across the image.
If you look at this extract from further down the file, you can see it starts a new row of glyphs in the output image in the middle of the extract (marked in red). That tells us that the bitmap should be no more than 800 pixels wide, that column 8 is the y-offset of the top of the glyph in the bitmap file, and that column 10 is the y-offset of the bottom of the glyph in the bitmap. You should see that when a new row of glyphs starts in the bitmap file, x goes to zero and column 8 takes the previous value from column 10.
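Building on that, here is a rough, untested sketch of what a metrics writer for the asker's 256x256 PNG of 8x8 tiles might look like; the tile order (32 glyphs per row, in code-point order), the destination box relative to the baseline, and the file names are all assumptions:

import struct

TILE = 8           # assumed glyph size
PER_ROW = 32       # assumed 32 tiles per row in the 256x256 image

def puti16(fp, values):
    # 16-bit big-endian, as in FontFile.puti16
    for v in values:
        fp.write(struct.pack(">H", v & 0xFFFF))

with open("myfont.pil", "wb") as fp:
    fp.write(b"PILfont\n")
    fp.write(b";;;;;;8;\nDATA\n")   # ysize = 8, mirroring the HACK line above
    for i in range(256):
        sx, sy = (i % PER_ROW) * TILE, (i // PER_ROW) * TILE
        # (advance_x, advance_y, dst_x0, dst_y0, dst_x1, dst_y1,
        #  src_x0, src_y0, src_x1, src_y1) -- the dst box here is a guess
        puti16(fp, [TILE, 0, 0, -TILE, TILE, 0, sx, sy, sx + TILE, sy + TILE])

# Then, with myfont.png (the 256x256 bitmap, possibly converted to mode "1" or "L")
# sitting next to myfont.pil:
# from PIL import ImageFont
# font = ImageFont.load("myfont.pil")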
pdb.gimp_paintbrush_default seems to be very slow (several seconds for 500 dots using a standard brush; lines are worse, obviously). Is this just the way it is? Is there a way to speed things up when drawing straight lines using the user-selected brush?
Python-Fu console code:
from random import randint

img = gimp.image_list()[0]
drw = pdb.gimp_image_active_drawable(img)
width = pdb.gimp_image_width(img)
height = pdb.gimp_image_height(img)
point_number = 500
while (point_number > 0):
    x = randint(0, width)
    y = randint(0, height)
    pdb.gimp_paintbrush_default(drw, 2, [x, y])
    point_number -= 1
I've been working on something very similar and ran into this problem also. Here's one technique that I found that made my function about 5 times faster:
Create a temporary image
Copy the layer you are working with to the temporary image
Do the drawing on the temporary layer
Copy the temporary layer on top of the original layer
I believe this speeds stuff up because GIMP doesn't have to draw the edits to the screen, but I'm not 100% sure. Here's my function:
def splotches(img, layer, size, variability, quantity):
    gimp.context_push()
    img.undo_group_start()

    width = layer.width
    height = layer.height

    temp_img = pdb.gimp_image_new(width, height, img.base_type)
    temp_img.disable_undo()
    temp_layer = pdb.gimp_layer_new_from_drawable(layer, temp_img)
    temp_img.insert_layer(temp_layer)

    brush = pdb.gimp_brush_new("Splotch")
    pdb.gimp_brush_set_hardness(brush, 1.0)
    pdb.gimp_brush_set_shape(brush, BRUSH_GENERATED_CIRCLE)
    pdb.gimp_brush_set_spacing(brush, 1000)
    pdb.gimp_context_set_brush(brush)

    for i in range(quantity):
        random_size = size + random.randrange(variability)
        x = random.randrange(width)
        y = random.randrange(height)
        pdb.gimp_context_set_brush_size(random_size)
        pdb.gimp_paintbrush(temp_layer, 0.0, 2, [x, y, x, y], PAINT_CONSTANT, 0.0)
        gimp.progress_update(float(i) / float(quantity))

    temp_layer.flush()
    temp_layer.merge_shadow(True)

    # Delete the original layer and copy the new layer in its place
    new_layer = pdb.gimp_layer_new_from_drawable(temp_layer, img)
    name = layer.name
    img.remove_layer(layer)
    pdb.gimp_item_set_name(new_layer, name)
    img.insert_layer(new_layer)

    gimp.delete(temp_img)
    img.undo_group_end()
    gimp.context_pop()
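For reference, here is a rough, untested sketch of applying the same off-screen-image idea directly to the 500-dot loop from the question (no brush setup, just the drawing and the copy back):

from random import randint

img = gimp.image_list()[0]
layer = pdb.gimp_image_get_active_layer(img)
width, height = img.width, img.height

# Draw on an off-screen image so GIMP doesn't repaint the display for every dot
temp_img = pdb.gimp_image_new(width, height, img.base_type)
temp_img.disable_undo()
temp_layer = pdb.gimp_layer_new_from_drawable(layer, temp_img)
temp_img.insert_layer(temp_layer)

for _ in range(500):
    x = randint(0, width)
    y = randint(0, height)
    pdb.gimp_paintbrush_default(temp_layer, 2, [x, y])

temp_layer.flush()

# Swap the painted copy back in place of the original layer
new_layer = pdb.gimp_layer_new_from_drawable(temp_layer, img)
name = layer.name
img.remove_layer(layer)
pdb.gimp_item_set_name(new_layer, name)
img.insert_layer(new_layer)
gimp.delete(temp_img)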