I have the following script which generates a number of circles in a box on top of a bigger circle. The output is to a PDF, which I would like to be tightly bounded to the ink extents.
#!/usr/bin/env python
import math
import cairocffi as cairo
import random
def DrawFilledCircle(x, y, radius, rgba):
    ctx.set_source_rgba(*rgba)
    ctx.arc(x, y, radius, 0, 2*math.pi)
    ctx.fill()

def DrawCircle(x, y, radius, rgba=(0, 0, 0, 1)):
    ctx.set_source_rgba(*rgba)
    ctx.arc(x, y, radius, 0, 2*math.pi)
    ctx.stroke()

surface = cairo.RecordingSurface(cairo.CONTENT_COLOR_ALPHA, None)
ctx = cairo.Context(surface)
DrawCircle(200, 200, 150, (0, 0, 0, 1))
for i in range(1000):
    DrawFilledCircle(200 + (150 - 300*random.random()), 200 + (150 - 300*random.random()), 4, (0, 0, 0, 0.5))
#This part will change throughout the question
extents = surface.ink_extents()
pdfout = cairo.PDFSurface("circle.pdf", extents[2], extents[3])
pdfctx = cairo.Context(pdfout)
pdfctx.set_source_surface(surface, 0, 0)
pdfctx.paint()  # paint the recorded drawing onto the PDF surface
Running the script produces the following.
As you can see, the width and height are correct, but the origin needs to be shifted. Accordingly, I tried this:
pdfout = cairo.PDFSurface("circle.pdf", extents[2], extents[3])
pdfctx = cairo.Context(pdfout)
pdfctx.set_source_surface(surface, extents[0], extents[1])
This just shifted things further to the bottom right.
That suggested using negative coordinates to shift things the other way:
pdfout = cairo.PDFSurface("circle.pdf", extents[2], extents[3])
pdfctx = cairo.Context(pdfout)
pdfctx.set_source_surface(surface, -extents[0], -extents[1])
But that didn't work either:
As a back-up, I could make a shell call to pdfcrop, but that seems like a ridiculous workaround.
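For reference, that fallback is just a subprocess call; a minimal sketch, assuming pdfcrop from TeX Live is on the PATH:

import subprocess
# crop circle.pdf to its ink extents and write the result to a new file
subprocess.check_call(['pdfcrop', 'circle.pdf', 'circle-cropped.pdf'])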
What can I do to achieve what I'm trying to do? Where am I going wrong? How can a lasting peace be achieved in Cairo?
I am trying to create a volume in Gmsh (using the Python API) by cutting some small cylinders from a bigger one.
When I do that, I expect one surface for each cut region; instead, I get the result in the figure. I have highlighted in red the surfaces that give me the problem (some cut regions behave as expected). As you can see, instead of one surface I get two, and sometimes they aren't even equal.
gmsh creates more surfaces than expected:
So, my questions are:
Why does gmsh behave like that?
How can I fix this, as I need predictable behavior?
Below is the code I used to generate the geometry.
To work, the code requires some parameters, such as core_height, core_inner_radius and core_outer_radius, plus the number of small cylinders and their radius.
gmsh.initialize(sys.argv)
#gmsh.initialize()
gmsh.clear()
gmsh.model.add("circle_extrusion")
inner_cyl_tag = 1
outer_cyl_tag = 2
inner_cyl = gmsh.model.occ.addCylinder(0,0,0, 0, 0, core_height, core_inner_radius, tag = inner_cyl_tag)
outer_cyl = gmsh.model.occ.addCylinder(0,0,0, 0, 0, core_height, core_outer_radius, tag = outer_cyl_tag)
core_tag = 3
cut1 = gmsh.model.occ.cut([(3,outer_cyl)],[(3,inner_cyl)], tag = core_tag)
#create a set of filled cylinders
#set position
angle_vector = np.linspace(0,2*np.pi,number_of_hp+1)
pos_x = hp_radial_position*np.cos(angle_vector)
pos_y = hp_radial_position*np.sin(angle_vector)
pos_z = 0.0
#cut one cylinder at the time and assign the new core tag
for ii in range(0, len(angle_vector)):
    old_core_tag = core_tag
    heat_pipe = gmsh.model.occ.addCylinder(pos_x[ii], pos_y[ii], pos_z, 0, 0, core_height, hp_outer_radius, tag=-1)
    core_tag = heat_pipe + 1
    core = gmsh.model.occ.cut([(3, old_core_tag)], [(3, heat_pipe)], tag=core_tag)
gmsh.model.occ.synchronize()
#get volume entities and assign physical groups
volumes = gmsh.model.getEntities(dim=3)
solid_marker = 1
gmsh.model.addPhysicalGroup(volumes[0][0], [volumes[0][1]],solid_marker)
gmsh.model.setPhysicalName(volumes[0][0],solid_marker, "solid_volume")
#get surfaces entities and apply physical groups
surfaces = gmsh.model.getEntities(dim=2)
surface_markers= np.arange(1,len(surfaces)+1,1)
for ii in range(0, len(surfaces)):
    gmsh.model.addPhysicalGroup(2, [surfaces[ii][1]], tag=surface_markers[ii])
#We finally generate and save the mesh:
gmsh.model.mesh.generate(3)
gmsh.model.mesh.refine()
gmsh.model.mesh.refine()
gmsh.option.setNumber("Mesh.MshFileVersion", 2.2) #save in ASCII 2 format
gmsh.write(mesh_name+".msh")
# Launch the GUI to see the results:
#if '-nopopup' not in sys.argv:
# gmsh.fltk.run()
gmsh.finalize()
I do not think that you have additional surfaces in the sense of gmsh.model.occ surfaces. To me this looks like your volume mesh is sticking out of your surface mesh, i.e. volume and surface mesh do not fit together.
Here is what I did to check your case:
First I added the following lines at the beginning of your code to get a minimal working example:
import gmsh
import sys
import numpy as np
inner_cyl_tag = 1
outer_cyl_tag = 2
core_height = 1
core_inner_radius = 0.1
core_outer_radius = 0.2
number_of_hp = 5
hp_radial_position = 0.1
hp_outer_radius = 0.05
What I get with this code is the following:
To visualize it like this, go to "Tools" --> "Options" --> "Mesh" and check "2D element faces", "3D element edges" and "3D element faces".
You can see that there are some purple triangles sticking out of the green/yellowish surface triangles of the inner surfaces.
You could try to visualize your case the same way and toggle "3D element faces" on and off a few times.
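The same display settings can also be switched on from the Python API instead of the GUI; a small sketch, assuming these option names correspond to the GUI checkboxes above:

gmsh.option.setNumber("Mesh.SurfaceFaces", 1)  # "2D element faces"
gmsh.option.setNumber("Mesh.VolumeEdges", 1)   # "3D element edges"
gmsh.option.setNumber("Mesh.VolumeFaces", 1)   # "3D element faces"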
So here is the solution for this behaviour; I did not know that gmsh behaves like this myself. It seems that when you create your mesh and then refine it, the refinement is applied to the 2D surface mesh and the 3D volume mesh separately, which means those two meshes are no longer connected after the refinement. What I did next was to try what happens if you create the 2D mesh only, refine it, and then create the 3D mesh, i.e.:
replace:
gmsh.model.mesh.generate(3)
gmsh.model.mesh.refine()
gmsh.model.mesh.refine()
by:
gmsh.model.mesh.generate(2)
gmsh.model.mesh.refine()
gmsh.model.mesh.refine()
gmsh.model.mesh.generate(3)
The result then looks like this:
I hope that this was actually your problem. But in the future it would be good if you could provide a minimal working example of code that we can copy-paste to get the same visualization you showed us in your image.
I am trying to animate a plot using visvis.
This is the example code they have:
import visvis as vv
# read image
ims = [vv.imread('astronaut.png')]
# make list of images: decrease red channel in subsequent images
for i in range(9):
    im = ims[i].copy()
    im[:,:,0] = im[:,:,0]*0.9
    ims.append(im)
# create figure, axes, and data container object
a = vv.gca()
m = vv.MotionDataContainer(a)
# create textures, loading them into opengl memory, and insert into container.
for im in ims:
    t = vv.imshow(im)
    t.parent = m
and I added:
app = vv.use()
app.Run()
This worked. But I needed to animate a plot, not an image, so I tried doing this:
import visvis as vv
from visvis.functions import getframe
# create figure, axes, and data container object
a = vv.gca()
m = vv.MotionDataContainer(a, interval=100)
for i in range(3):
    vv.plot([0, 2+i*10], [0, 2+i*10])
    f = getframe(a)
    t = vv.imshow(f)
    t.parent = m
a.SetLimits(rangeX=[-2, 25], rangeY=[-2, 25])
app = vv.use()
app.Run()
The axes are initialized very big, which is why I am using SetLimits. And the output is not animated: I get only the last frame, i.e. a line from (0, 0) to (22, 22).
Does anyone know a way of doing this with visvis?
It turns out adding the frame as a child of MotionDataContainer was not the way to go. The function vv.plot returns an instance of the class Line, and one should add the line as a child. If anyone is having the same problem, I could write a more detailed answer.
EDIT: Adding a more detailed answer as requested:
To animate a plot made of lines, one must simply add the lines as children of MotionDataContainer. Taking my example in the question above, one would write:
import visvis as vv
# create figure, axes, and data container object
a = vv.gca()
m = vv.MotionDataContainer(a, interval=100)
for i in range(3):
    line = vv.plot([0, 2+i*10], [0, 2+i*10])
    line.parent = m
app = vv.use()
app.Run()
In my special case, I even needed to animate multiple lines being drawn at the same time.
To do this, I ended up defining a new class that, like MotionDataContainer, inherits from MotionMixin, and changed the class attribute delta, which specifies how many objects should be made visible at the same time. For that, one also has to rewrite the function _SetMotionIndex.
(See visvis official source code: https://github.com/almarklein/visvis/blob/master/wobjects/motion.py)
Disclaimer: Concerning the animation of multiple objects, I have no idea if this is the intended use or if this is the easiest solution, but this is what worked for me.
I'm using Python to generate images that use dashed lines for stippling. The period of the dashing is constant; what changes is the dash/space ratio. This produces something like this:
However in that image the dashing has a uniform origin and this creates unsightly vertical gutters. So I tried to randomize the origin to remove the gutters. This sort of works but there is an obvious pattern:
Wondering where this comes from I made a very simple test case with stacked dashed straight lines:
dash ratio: 50%
dash period: 20px
origin shift from -10px to +10px using random.uniform(-10.,+10.) (*) (after an initial random.seed())
And with added randomness:
So there is still a pattern. What I don't understand is that to get a visible gutter you need 6 or 7 consecutive values falling in the same range (say, half the total range), which should be a 1/64 probability, yet it seems to happen a lot more often in the 200 lines generated.
Am I misunderstanding something? Is it just our human brain seeing patterns where there are none? Could there be a better way to generate something more "visually random" (Python 2.7, and preferably without installing anything)?
(*) partial pixels are valid in that context
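As a quick sanity check of that 1/64 estimate (this snippet is mine, not part of the script below): counting how many positions in 200 lines start a run of 6 shifts that all fall in the same half of the range shows that such runs are expected several times per image, simply because there are roughly 200 overlapping windows in which one can start.

import random

def count_runs(n_lines=200, run_length=6, trials=1000):
    total = 0
    for _ in range(trials):
        halves = [random.uniform(-10.0, 10.0) >= 0.0 for _ in range(n_lines)]
        total += sum(1 for i in range(n_lines - run_length + 1)
                     if len(set(halves[i:i + run_length])) == 1)
    return total / float(trials)

print(count_runs())  # on average roughly 6 same-half runs per 200 lines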
Annex: the code I use (this is a Gimp script):
#!/usr/bin/env python
# -*- coding: iso-8859-15 -*-
# Python script for Gimp (requires Gimp 2.10)
# Run on a 400x400 image to see something without having to wait too much
# Menu entry is in "Test" submenu of image menubar
import random,traceback
from gimpfu import *
def constant(minShift, maxShift):
    return 0

def triangle(minShift, maxShift):
    return random.triangular(minShift, maxShift)

def uniform(minShift, maxShift):
    return random.uniform(minShift, maxShift)

def gauss(minShift, maxShift):
    return random.gauss((minShift+maxShift)/2, (maxShift-minShift)/2)

variants = [('Constant', constant), ('Triangle', triangle), ('Uniform', uniform), ('Gauss', gauss)]

def generate(image, name, generator):
    random.seed()
    layer = gimp.Layer(image, name, image.width, image.height, RGB_IMAGE, 100, LAYER_MODE_NORMAL)
    image.add_layer(layer, 0)
    layer.fill(FILL_WHITE)
    path = pdb.gimp_vectors_new(image, name)
    # Generate path; horizontal lines are 2px apart,
    # the start on the left has a random offset, the end is on the right edge
    for i in range(1, image.height, 2):
        shift = generator(-10., 10.)
        points = [shift, i]*3 + [image.width, i]*3
        pdb.gimp_vectors_stroke_new_from_points(path, 0, len(points), points, False)
    pdb.gimp_image_add_vectors(image, path, 0)
    # Stroke the path
    pdb.gimp_context_set_foreground(gimpcolor.RGB(0, 0, 0, 255))
    pdb.gimp_context_set_stroke_method(STROKE_LINE)
    pdb.gimp_context_set_line_cap_style(0)
    pdb.gimp_context_set_line_join_style(0)
    pdb.gimp_context_set_line_miter_limit(0.)
    pdb.gimp_context_set_line_width(2)
    pdb.gimp_context_set_line_dash_pattern(2, [5, 5])
    pdb.gimp_drawable_edit_stroke_item(layer, path)

def randomTest(image):
    image.undo_group_start()
    gimp.context_push()
    try:
        for name, generator in variants:
            generate(image, name, generator)
    except Exception as e:
        print e.args[0]
        pdb.gimp_message(e.args[0])
        traceback.print_exc()
    gimp.context_pop()
    image.undo_group_end()
    return
### Registration
desc="Python random test"
register(
    "randomize-test", desc, '', '', '', '', desc, "*",
    [(PF_IMAGE, "image", "Input image", None)], [],
    randomTest, menu="<Image>/Test",
)
main()
Think of it like this: a gutter is perceptible until it is obstructed (or almost so). This only happens when two successive lines are almost completely out of phase (with the black segments in the first line lying nearly above the white segments in the next). Such extreme situations only happen about once every 10 rows, hence the visible gutters, which seem to extend around 10 rows before being obstructed.
Looked at another way -- if you print out the image, there really are longish white channels through which you can easily draw a line with a pen. Why should your mind not perceive them?
To get better visual randomness, find a way to make successive lines dependent rather than independent in such a way that the almost-out-of-phase behavior appears more often.
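A minimal, library-free sketch of that idea (my own illustration, assuming the 20 px period from the question): force each line's shift roughly half a period away from the previous one, plus a little jitter, so successive lines are almost out of phase far more often than independent draws would make them.

import random

def dependent_shifts(n_lines, period=20.0, jitter=3.0):
    shifts = [random.uniform(0.0, period)]
    for _ in range(n_lines - 1):
        # jump about half a period away from the previous line, with some jitter
        shifts.append((shifts[-1] + period/2.0 + random.uniform(-jitter, jitter)) % period)
    return shifts

print(dependent_shifts(10))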
There's at least one obvious reason why we see a pattern in the "random" picture: the 400x400 pixels are just the same 20x400 pixels repeated 20 times.
So every apparent movement is repeated 20 times in parallel, which really helps the brain analyze the picture.
Actually, the same 10px wide pattern is repeated 40 times, alternating between black and white:
You could randomize the dash period separately for each line (e.g. between 12 and 28):
Here's the corresponding code:
import numpy as np
import random
from matplotlib import pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = [13, 13]
N = 400
def random_pixels(width, height):
    return np.random.rand(height, width) < 0.5

def display(table):
    plt.imshow(table, cmap='Greys', interpolation='none')
    plt.show()

display(random_pixels(N, N))

def stripes(width, height, stripe_width):
    table = np.zeros((height, width))
    cycles = width // (stripe_width * 2) + 1
    pattern = np.concatenate([np.zeros(stripe_width), np.ones(stripe_width)])
    for i in range(height):
        table[i] = np.tile(pattern, cycles)[:width]
    return table

display(stripes(N, N, 10))

def shifted_stripes(width, height, stripe_width):
    table = np.zeros((height, width))
    period = stripe_width * 2
    cycles = width // period + 1
    pattern = np.concatenate([np.zeros(stripe_width), np.ones(stripe_width)])
    for i in range(height):
        table[i] = np.roll(np.tile(pattern, cycles), random.randrange(0, period))[:width]
    return table

display(shifted_stripes(N, N, 10))

def flexible_stripes(width, height, average_width, delta):
    table = np.zeros((height, width))
    for i in range(height):
        stripe_width = random.randint(average_width - delta, average_width + delta)
        period = stripe_width * 2
        cycles = width // period + 1
        pattern = np.concatenate([np.zeros(stripe_width), np.ones(stripe_width)])
        table[i] = np.roll(np.tile(pattern, cycles), random.randrange(0, period))[:width]
    return table

display(flexible_stripes(N, N, 10, 4))
Posting my final solution as an answer, but please upvote others.
John Coleman has a point when he says:
To get better visual randomness, find a way to make successive lines dependent rather than independent in such a way that the almost-out-of-phase behavior appears more often.
So, finally, the best way to avoid gutters is to forego randomness and use a fixed scheme of shifts, and one that works well is a 4-phase cycle of 0%, 25%, 75%, 50% of the period:
OK, there is still a slight diamond pattern, but it is much less visible than the patterns introduced by the random schemes I tried.
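For concreteness, here is a minimal sketch of that fixed 4-phase scheme (my illustration, assuming the 20 px period used earlier): each line's shift cycles through 0%, 25%, 75% and 50% of the period.

period = 20.0
phase_cycle = [0.0, 0.25, 0.75, 0.50]

def shift_for_line(line_index):
    return phase_cycle[line_index % len(phase_cycle)] * period

print([shift_for_line(i) for i in range(8)])  # [0.0, 5.0, 15.0, 10.0, 0.0, 5.0, 15.0, 10.0]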
This is slightly counter-intuitive, but as you add random elements together, the randomness decreases. If I follow correctly, the range of each element is 10px - 30px. So the total size of 10 elements is 100px to 300px, but the distribution is not even across that range: the extremes are very unlikely, and on average it will be pretty close to 200px, so that fundamental 20px pattern will emerge. Your random distribution needs to avoid this.
EDIT: I see I slightly misunderstood, and all dashes are 20px with a random offset. So I think any one vertical set of dashes would appear random, but that same random set is repeated across the page, giving the pattern.
I'd like to render an ASCII art world map given this GeoJSON file.
My basic approach is to load the GeoJSON into Shapely, transform the points using pyproj to Mercator, and then do a hit test on the geometries for each character of my ASCII art grid.
It looks (edit: mostly) OK when centered on the prime meridian:
But centered on New York City (lon_0=-74), it suddenly goes haywire:
I'm fairly sure I'm doing something wrong with the projections here. (And it would probably be more efficient to transform the ASCII map coordinates to lat/lon than transform the whole geometry, but I am not sure how.)
import functools
import json
import shutil
import sys
import pyproj
import shapely.geometry
import shapely.ops
# Load the map
with open('world-countries.json') as f:
    countries = []
    for feature in json.load(f)['features']:
        # buffer(0) is a trick for fixing polygons with overlapping coordinates
        country = shapely.geometry.shape(feature['geometry']).buffer(0)
        countries.append(country)
    mapgeom = shapely.geometry.MultiPolygon(countries)
# Apply a projection
tform = functools.partial(
    pyproj.transform,
    pyproj.Proj(proj='longlat'),           # input: WGS84
    pyproj.Proj(proj='webmerc', lon_0=0),  # output: Web Mercator
)
mapgeom = shapely.ops.transform(tform, mapgeom)
# Convert to ASCII art
minx, miny, maxx, maxy = mapgeom.bounds
srcw = maxx - minx
srch = maxy - miny
dstw, dsth = shutil.get_terminal_size((80, 20))
for y in range(dsth):
    for x in range(dstw):
        pt = shapely.geometry.Point(
            (srcw*x/dstw) + minx,
            (srch*(dsth-y-1)/dsth) + miny  # flip vertically
        )
        if any(country.contains(pt) for country in mapgeom):
            sys.stdout.write('*')
        else:
            sys.stdout.write(' ')
    sys.stdout.write('\n')
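Regarding the efficiency aside above (transforming the grid instead of the whole geometry): pyproj projections are invertible, so each ASCII cell centre could in principle be mapped back to lon/lat and tested against the unprojected geometry. A rough sketch of the inverse call only, not part of the script above:

import pyproj

merc = pyproj.Proj(proj='webmerc', lon_0=0)
# inverse=True maps projected metres back to lon/lat degrees
lon, lat = merc(1000000.0, 2000000.0, inverse=True)
print(lon, lat)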
I made an edit at the bottom after discovering a new problem (why there is no Canada, and the unreliability of Shapely and pyproj).
Even though it's not exactly solving the problem, I believe this approach has more potential than using pyproj and Shapely, and if you do more ASCII art in the future, it will give you more possibilities and flexibility. First I will list the pros and cons.
PS: Initially I wanted to find the problem in your code, but I had trouble running it because pyproj was returning an error.
PROS
1) I was able to extract all points (Canada really is missing) and rotate the image
2) The processing is very fast, so you can create animated ASCII art.
3) Printing is done all at once, without the need to loop
CONS (known issues, solvable)
1) This approach definitely does not translate the geo-coordinates correctly - it is too flat, it should look more spherical
2) I didn't take the time to find a solution for filling the interiors, so only the borders have '*'. This approach therefore still needs an algorithm to fill the countries. I don't think that should be a problem, since the JSON file keeps the countries separated
3) You need two extra libraries besides numpy - OpenCV (you can use PIL instead) and colorama, because my example is animated and I needed to 'clean' the terminal by moving the cursor to (0,0) instead of using os.system('cls')
4) I only made it run in Python 3. It almost works in Python 2 as well, but I get an error with sys.stdout.buffer
Change the terminal font size to the smallest setting so that the printed characters fit in the terminal. The smaller the font, the better the resolution.
The animation should look like the map is 'rotating'.
I used a little bit of your code to extract the data. The steps are explained in the comments.
import json
import sys
import time

import numpy as np
import colorama
import cv2

# understand terminal_size as how many letters fit on the X axis and how many on the Y axis (sorry, not a good name)
if len(sys.argv) > 1:
    terminal_size = (int(sys.argv[1]), int(sys.argv[2]))
else:
    terminal_size = (230, 175)

with open('world-countries.json') as f:
    countries = []
    minimal = 0  # this can be dangerous - expecting negative values
    maximal = 0  # expecting values bigger than 0
    for feature in json.load(f)['features']:  # getting the data - I pretend here that geo-coordinates are actually indexes of my numpy array
        indexes = np.int16(np.array(feature['geometry']['coordinates'][0])*2)
        if indexes.min() < minimal:
            minimal = indexes.min()
        if indexes.max() > maximal:
            maximal = indexes.max()
        countries.append(indexes)

countries = np.array(countries) + np.abs(minimal)  # transform geo-coordinates to image coordinates
correction = np.abs(minimal)  # because geo-coordinates have negative values, I need to shift them so they start at 0 on the x axis

colorama.init()

def move_cursor(x, y):
    print("\x1b[{};{}H".format(y+1, x+1))

move = 0  # 'rotate' the globe
for i in range(1000):
    image = np.zeros(shape=[maximal+correction+1, maximal+correction+1])  # creating a clean image
    move -= 1  # you need to rotate with negative values,
    # because negative ones are understood by numpy; positive ones will end up with an error
    for country in countries:  # VERY STRANGE: parsing the json, some countries have a different structure
        if len(country.shape) == 2:
            image[country[:,1], country[:,0]+move] = 255  # indexes that once were geo-coordinates now serve to position the countries in the image
        if len(country.shape) == 3:
            image[country[0][:,1], country[0][:,0]+move] = 255
    cut = np.where(image == 255)  # bounding box
    if move == -1:  # creating the bounding box here - removing empty edges from the sides, top and bottom - we need space. This needs to be done only once
        max_x, min_x = cut[0].max(), cut[0].min()
        max_y, min_y = cut[1].max(), cut[1].min()
    new_image = image[min_x:max_x, min_y:max_y]  # the bounding box
    new_image = new_image[::-1]  # reverse, because the map is upside down
    new_image = cv2.resize(new_image, terminal_size)  # resize so it fits inside the terminal
    ascii = np.chararray(shape=new_image.shape).astype('|S4')  # create a container for the ascii image
    ascii[:,:] = ''  # chararray contains some random letters - dunno why... cleaning it
    ascii[:,-1] = '\n'  # because I print everything all at once, I put a newline at the end of each image row
    new_image[:,-1] = 0  # at the end of the image there can be country borders, which would overwrite the '\n' created one step above
    ascii[np.where(new_image > 0)] = '*'  # transforming the image array into the chararray mask: anything with a pixel value higher than 0 becomes a star
    move_cursor(0, 0)  # 'cleaning' the terminal for the next animation frame
    sys.stdout.buffer.write(ascii)  # print into the terminal
    time.sleep(0.025)  # FPS
Maybe it would be good to explain the main algorithm in the code. I like to use numpy wherever I can. The whole idea is that I pretend that coordinates in the image, or whatever it may be (in your case geo-coordinates), are matrix indexes. I then have 2 matrices - the real image and a chararray as a mask. I take the indexes of the interesting pixels in the real image, and for the same indexes in the chararray mask I assign any letter I want. Thanks to this, the whole algorithm doesn't need a single loop.
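To make that masking idea concrete, here is a tiny self-contained illustration (mine, not part of the script above): draw into a numeric image, then stamp '*' into a chararray at the same indexes, with no per-pixel loop.

import sys
import numpy as np

image = np.zeros((5, 10))
image[2, 1:9] = 255                  # pretend these pixels are a country border

mask = np.chararray(shape=image.shape)
mask[:, :] = ' '
mask[:, -1] = '\n'                   # newline column so the whole block prints in one write
mask[np.where(image > 0)] = '*'

sys.stdout.buffer.write(mask)        # one write, no loop over pixels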
About future possibilities
Imagine you also have information about the terrain (altitude). Let's say you somehow create a grayscale image of the world map where the gray shades express altitude. Such a grayscale image would have shape (x, y). You prepare a 3D matrix with shape [x, y, 256] (called matrix3d below, since 3Dmatrix is not a valid Python name). To each of the 256 layers in the 3D matrix, you assign one letter from ' ....;;;;### and so on' that expresses the shade.
When you have this prepared, you can take your grayscale image, where every pixel effectively has 3 coordinates: x, y and shade value. So you have 3 arrays of indexes out of your grayscale map image -> x, y, shade. Your new chararray will simply be an extraction of the 3D matrix with the layer letters, because:
# Preparation phase
x, y = grayscale.shape
matrix3d = np.chararray(shape=[x, y, 256])
table = ' ......;;;;;;;###### ...'
for i in range(256):
    matrix3d[:,:,i] = table[i]
# row and column index of every pixel, in row-major (flattened) order
# (note: plain np.arange(x*y) would run past the first two axes)
x_indexes = np.repeat(np.arange(x), y)
y_indexes = np.tile(np.arange(y), x)
chararray_image = np.chararray(shape=[x, y])
# Ready to print
...
shades = grayscale.reshape(x*y)
chararray_image[:,:] = matrix3d[(x_indexes, y_indexes, shades)].reshape(x, y)
Because there is no loop in this process and you can print the chararray all at once, you can actually print a movie into the terminal at a high FPS.
For example, if you have footage of a rotating Earth, you can make something like this (250x70 letters), render time 0.03658s:
You can of course take it to the extreme and render at super-resolution in your terminal, but the resulting frame time is not that good: 0.23157s, approximately 4-5 FPS. It is interesting to note that the FPS of the approach itself is enormous, but the terminal simply cannot handle the printing, so the low FPS is due to the limitations of the terminal and not of the calculation: calculating this high resolution took 0.00693s, i.e. 144 FPS.
BIG EDIT - contradicting some of the statements above
I accidentally opened the raw JSON file and found out that CANADA and RUSSIA are there, with full, correct coordinates. I made the mistake of relying on the fact that we both didn't have Canada in the result, so I assumed my code was OK. Inside the JSON, the data has a different, NOT UNIFIED structure. Russia and Canada are 'MultiPolygon', so you need to iterate over the parts.
What does that mean? Don't rely on Shapely and pyproj. Obviously they can't extract some countries, and if they can't do that reliably, you can't expect them to do anything more complicated.
After modifying the code, everything is all right.
CODE: This is how to load the file correctly
...
with open('world-countries.json') as f:
    countries = []
    minimal = 0
    maximal = 0
    for feature in json.load(f)['features']:  # getting the data - I pretend here that geo-coordinates are actually indexes of my numpy array
        for k in range(len(feature['geometry']['coordinates'])):
            indexes = np.int64(np.array(feature['geometry']['coordinates'][k]))
            if indexes.min() < minimal:
                minimal = indexes.min()
            if indexes.max() > maximal:
                maximal = indexes.max()
            countries.append(indexes)
...
I need to find window position and size, but I cannot figure out how. For example if I try:
id.get_geometry() # "id" is Xlib.display.Window
I get something like this:
data = {'height': 2540,
        'width': 1440,
        'depth': 24,
        'y': 0, 'x': 0,
        'border_width': 0,
        'root': <Xlib.display.Window 0x0000026a>,
        'sequence_number': 63}
I need to find the window position and size, so my problem is: "y", "x" and "border_width" are always 0; even worse, "height" and "width" are returned without the window frame.
In this case on my X screen (its dimensions are 4400x2560) I expected x=1280, y=0, width=1440, height=2560.
In other words I'm looking for python equivalent for:
#!/bin/bash
id=$1
wmiface framePosition $id
wmiface frameSize $id
If you think Xlib is not what I want, feel free to offer a non-Xlib solution in Python, as long as it can take a window id as an argument (like the bash script above). The obvious workaround of using the output of the bash script in Python code does not feel right.
You are probably using a reparenting window manager, and because of this the id window has zero x and y. Check the coordinates of the parent window (which is the window manager frame).
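A minimal sketch of that check with python-xlib, assuming win is the Xlib.display.Window you already have:

# the parent is the window manager's frame when a reparenting WM is running
parent = win.query_tree().parent
print(parent.get_geometry())  # frame geometry, unlike the zeros reported for the client window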
Liss posted the following solution as a comment:
from ewmh import EWMH
ewmh = EWMH()

def frame(client):
    frame = client
    while frame.query_tree().parent != ewmh.root:
        frame = frame.query_tree().parent
    return frame

for client in ewmh.getClientList():
    print frame(client).get_geometry()
I'm copying it here because answers should contain the actual answer, and to prevent link rot.
Here's what I came up with that seems to work well:
from collections import namedtuple
import Xlib.display
disp = Xlib.display.Display()
root = disp.screen().root
MyGeom = namedtuple('MyGeom', 'x y height width')
def get_absolute_geometry(win):
    """
    Returns the (x, y, height, width) of a window relative to the top-left
    of the screen.
    """
    geom = win.get_geometry()
    (x, y) = (geom.x, geom.y)
    while True:
        parent = win.query_tree().parent
        pgeom = parent.get_geometry()
        x += pgeom.x
        y += pgeom.y
        if parent.id == root.id:
            break
        win = parent
    return MyGeom(x, y, geom.height, geom.width)
Full example here.
Following the same idea as #mgalgs, but more directly, I ask the root window to translate the (0,0) coordinate of the target window:
# assuming targetWindow is the window you want to know the position of
geometry = targetWindow.get_geometry()
position = geometry.root.translate_coords(targetWindow.id, 0, 0)
# coordinates are in position.x and position.y
# if you are not interested in the geometry, you can do directly
import Xlib.display
position = Xlib.display.Display().screen().root.translate_coords(targetWindow.id, 0, 0)
This gives the position of the client region of the targeted window (i.e. without the borders, title bar and shadow decoration created by the window manager). If you want to include them, replace targetWindow with targetWindow.query_tree().parent (or its second parent), as in the sketch below.
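For example, to include the decorations, the same call can be pointed at the frame window instead (sketch only, reusing the names above):

frame = targetWindow.query_tree().parent             # the WM frame (or .query_tree().parent once more)
position = geometry.root.translate_coords(frame.id, 0, 0)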
Tested with Kubuntu 20.04 (i.e. KDE Plasma and KWin decoration).