How can I get and set the window (any windows program) position and size with python?
Assuming you're on Windows, try using pywin32's win32gui module with its EnumWindows and GetWindowRect functions.
If you're using Mac OS X, you could try using appscript.
For Linux, you can try one of the many interfaces to X11.
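For example, here is a rough (untested) X11 sketch using python-xlib; note that get_geometry() reports coordinates relative to each window's parent, so in practice you often query the window manager instead (e.g. with wmctrl or xdotool):
from Xlib import display

d = display.Display()
root = d.screen().root
# Walk the top-level X windows and print their geometry.
for win in root.query_tree().children:
    geom = win.get_geometry()   # x/y are relative to the parent window
    name = win.get_wm_name()    # may be None for unnamed windows
    print(name, geom.x, geom.y, geom.width, geom.height)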
Edit: Example for Windows (not tested):
import win32gui

def callback(hwnd, extra):
    rect = win32gui.GetWindowRect(hwnd)
    x = rect[0]
    y = rect[1]
    w = rect[2] - x
    h = rect[3] - y
    print("Window %s:" % win32gui.GetWindowText(hwnd))
    print("\tLocation: (%d, %d)" % (x, y))
    print("\t Size: (%d, %d)" % (w, h))

def main():
    win32gui.EnumWindows(callback, None)

if __name__ == '__main__':
    main()
You can get the window coordinates using the GetWindowRect function. For this, you need a handle to the window, which you can get using FindWindow, assuming you know something about the window (such as its title).
To call Win32 API functions from Python, use pywin32.
As Greg Hewgill mentioned, if you know the name of the window, you can simply use win32gui's FindWindow and GetWindowRect. This is perhaps a little cleaner and more efficient than the previous methods.
from win32gui import FindWindow, GetWindowRect
# FindWindow takes the Window Class name (can be None if unknown), and the window's display text.
window_handle = FindWindow(None, "Diablo II")
window_rect = GetWindowRect(window_handle)
print(window_rect)
#(0, 0, 800, 600)
For future reference: PyWin32GUI has now moved to Github
This returns the window rect from a window title.
Code
import ctypes
import ctypes.wintypes

def GetWindowRectFromName(name: str) -> tuple:
    hwnd = ctypes.windll.user32.FindWindowW(0, name)
    rect = ctypes.wintypes.RECT()
    ctypes.windll.user32.GetWindowRect(hwnd, ctypes.pointer(rect))
    # print(hwnd)
    # print(rect)
    return (rect.left, rect.top, rect.right, rect.bottom)

if __name__ == "__main__":
    print(GetWindowRectFromName('CALC'))
Environment
Python 3.8.2 | packaged by conda-forge | (default, Apr 24 2020, 07:34:03) [MSC v.1916 64 bit (AMD64)] on win32
Windows 10 Pro 1909
For Linux you can use the tool I made here. The tool was meant for a slightly different use but you can use the API directly for your needs.
Install tool
sudo apt-get install xdotool xprop xwininfo
git clone https://github.com/Pithikos/winlaunch.git && cd winlaunch
In terminal
>>> from winlaunch import *
>>> wid, pid = launch('firefox')
>>> win_pos(wid)
[3210, 726]
wid and pid stand for window id and process id respectively.
This code works on Windows. It returns the position and size of the active (foreground) window.
from win32gui import GetWindowRect, GetForegroundWindow

print(GetWindowRect(GetForegroundWindow()))
Something not mentioned in any of the other answers is that on newer Windows versions (Vista and up), "the Window Rect now includes the area occupied by the drop shadow", and that is exactly what win32gui.GetWindowRect and ctypes.windll.user32.GetWindowRect report.
If you want to get the positions and sizes without the dropshadow, you can:
Manually remove them. In my case there were 10 pixels on the left, bottom and right that had to be pruned (a short sketch of this follows the list).
Use the dwmapi to extract the DWMWA_EXTENDED_FRAME_BOUNDS, as mentioned in the article.
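A minimal sketch of the first option, assuming the same 10-pixel margins reported above (the exact value may vary with the Windows version and DPI scaling):
import win32gui

SHADOW = 10  # assumed drop-shadow width on the left, right and bottom edges

def get_rect_without_shadow(hwnd):
    left, top, right, bottom = win32gui.GetWindowRect(hwnd)
    # The top edge carries no shadow; trim the other three sides.
    return (left + SHADOW, top, right - SHADOW, bottom - SHADOW)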
On using the dwmapi.DwmGetWindowAttribute (see here):
This function takes four arguments: the hwnd, the identifier of the attribute we are interested in, a pointer to the data structure in which to write the attribute, and the size of that data structure. The identifiers can be found in this enum; in our case, the attribute DWMWA_EXTENDED_FRAME_BOUNDS is at position 9.
import ctypes
from ctypes.wintypes import HWND, DWORD, RECT
dwmapi = ctypes.WinDLL("dwmapi")
hwnd = 133116 # refer to the other answers on how to find the hwnd of your window
rect = RECT()
DWMWA_EXTENDED_FRAME_BOUNDS = 9
dwmapi.DwmGetWindowAttribute(HWND(hwnd), DWORD(DWMWA_EXTENDED_FRAME_BOUNDS),
                             ctypes.byref(rect), ctypes.sizeof(rect))
print(rect.left, rect.top, rect.right, rect.bottom)
Lastly: "Note that unlike the Window Rect, the DWM Extended Frame Bounds are not adjusted for DPI".
Related
I've been trying hard, but I just can't figure out how to resize a terminal window to the size I want. Can anyone help me solve this? Ideally I'd like platform-specific code for each operating system, but a solution for just one of them would also help.
#!/usr/bin/python
# -*- coding: utf-8 -*-

### Requirements for default python
from __future__ import absolute_import
from __future__ import print_function
from __future__ import generators

### Available for all python sources
from sys import platform
from os import system

class MainModule(object):
    def __init__(self, terminal_name, terminal_x, terminal_y):
        self.terminal_name = terminal_name
        self.terminal_x = terminal_x
        self.terminal_y = terminal_y
        if platform == "linux" or platform == "linux2":
            # Code to resize a terminal for linux distros only
            pass
        elif platform == "win32" or platform == "win64":
            # Code to resize a terminal for windows only
            pass
        elif platform == "darwin":
            # Code to resize a terminal for mac only
            pass
As you seem to have discovered, the implementation is platform-specific. You'll have to write code to do this for each platform.
On Windows, there are Windows APIs that can be used to do this. You can call them directly using the ctypes module; one example of this approach can be seen in the PyGetWindow package. Other tools, such as AutoHotkey (via the ahk Python package) and PyWinAuto, can also do this on Windows.
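For instance, here is a short sketch using PyGetWindow (pip install pygetwindow); the window title is just a placeholder:
# example using the PyGetWindow package on Windows
import pygetwindow as gw

win = gw.getWindowsWithTitle('Untitled - Notepad')[0]  # placeholder title
win.moveTo(200, 300)    # move the top-left corner to (200, 300)
win.resizeTo(500, 800)  # resize to 500x800 pixels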
# example using the AHK package on Windows
from ahk import AHK
ahk = AHK()
win = ahk.find_window(title=b'Untitled - Notepad')
win.move(x=200, y=300, width=500, height=800)
On macOS, you can write an AppleScript to resize the window and run it with osascript from a subprocess.
# Using applescript on MacOS
import subprocess
APPLICATION_NAME = "Safari"
X = 300
Y = 30
WIDTH = 1200
HEIGHT = 900
# AppleScript "bounds" is {left, top, right, bottom}, so convert width/height.
APPLESCRIPT = f"""\
tell application "{APPLICATION_NAME}"
    set bounds of front window to {{{X}, {Y}, {X + WIDTH}, {Y + HEIGHT}}}
end tell
"""
subprocess.run(['osascript', '-e', APPLESCRIPT], capture_output=True)
For Linux, as Jeff mentions in the comments, the implementation will depend on the window manager in use, of which there are many. But for popular platforms like Ubuntu, you can rely on existing tools such as wmctrl or similar.
# ref: https://askubuntu.com/a/94866
import subprocess
WINDOW_TITLE = "Terminal" # or substring of the window you want to resize
x = 0
y = 0
width = 100
height = 100
subprocess.run(["wmctrl", "-r", WINDOW_TITLE, "-e", f"0,{x},{y},{width},{height}"])
Though, if you are writing a game or similar, you can get around this a different way. For example, pygame lets you set your window size when creating the display. In text-based terminal applications, curses (or blessings, a popular wrapper for curses) can be used to detect the terminal size so you can redraw your application dynamically, which may require some changes to your current code.
import curses
stdscr = curses.initscr()  # curses.LINES / curses.COLS are only set after initscr()
height = curses.LINES
width = curses.COLS
redraw(width, height)  # you implement this to change how your app writes to the terminal
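For the pygame case mentioned above, a minimal sketch (the 800x600 size is arbitrary):
import pygame

pygame.init()
# In pygame you simply choose the window size yourself when creating the display.
screen = pygame.display.set_mode((800, 600))  # width, height in pixels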
Is there a built-in function or straight-forward way to get the resolution of a maximized window in Python (e.g. on Windows full screen without the task bar)?
I have tried several things from other posts, which present some major drawbacks:
ctypes
import ctypes
user32 = ctypes.windll.user32
screensize = user32.GetSystemMetrics(0), user32.GetSystemMetrics(1)
Simple, but I get the resolution of the full screen.
tkinter
import tkinter as tk
root = tk.Tk() # Create an instance of the class.
root.state('zoomed') # Maximized the window.
root.update_idletasks() # Update the display.
screensize = [root.winfo_width(), root.winfo_height()]
root.mainloop()
Works, but it isn't really straight-forward and above all, I don't know how to exit the loop with root.destroy() or root.quit() successfully. Closing the window manually is of course not an option.
matplotlib
import matplotlib.pyplot as plt
plt.figure(1)
plt.switch_backend('QT5Agg')
figManager = plt.get_current_fig_manager()
figManager.window.showMaximized()
print(plt.gcf().get_size_inches())
[6.4 4.8] is then printed, but if I click on the created window and execute print(plt.gcf().get_size_inches()) again, I get [19.2 10.69] printed, which I find highly inconsistent! (As you can imagine, having to interact to get that final value is definitely not an option.)
According to [MS.Docs]: GetSystemMetrics function (emphasis is mine):
SM_CXFULLSCREEN
16
The width of the client area for a full-screen window on the primary display monitor, in pixels. To get the coordinates of the portion of the screen that is not obscured by the system taskbar or by application desktop toolbars, call the SystemParametersInfo function with the SPI_GETWORKAREA value.
Same thing for SM_CYFULLSCREEN.
Example:
>>> import ctypes as ct
>>>
>>>
>>> SM_CXSCREEN = 0
>>> SM_CYSCREEN = 1
>>> SM_CXFULLSCREEN = 16
>>> SM_CYFULLSCREEN = 17
>>>
>>> user32 = ct.windll.user32
>>> GetSystemMetrics = user32.GetSystemMetrics
>>>
>>> # #TODO: Never forget about the 2 lines below !!!
>>> GetSystemMetrics.argtypes = [ct.c_int]
>>> GetSystemMetrics.restype = ct.c_int
>>>
>>> GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN) # Entire (primary) screen
(1920, 1080)
>>> GetSystemMetrics(SM_CXFULLSCREEN), GetSystemMetrics(SM_CYFULLSCREEN) # Full screen window
(1920, 1017)
Regarding the #TODO in the code: check [SO]: C function called from Python via ctypes returns incorrect value (#CristiFati's answer).
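The SystemParametersInfo / SPI_GETWORKAREA route mentioned in the quoted documentation can be sketched like this (a minimal, untested example; it should return the primary monitor's work area, i.e. the screen minus the taskbar):
import ctypes as ct
from ctypes import wintypes

SPI_GETWORKAREA = 0x0030

rect = wintypes.RECT()
ct.windll.user32.SystemParametersInfoW(SPI_GETWORKAREA, 0, ct.byref(rect), 0)
print(rect.right - rect.left, rect.bottom - rect.top)  # work area width, height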
If you don't want the window to persist, simply remove the mainloop call from the tkinter code.
import tkinter as tk
root = tk.Tk() # Create an instance of the class.
root.state('zoomed') # Maximized the window.
root.update_idletasks() # Update the display.
screensize = [root.winfo_width(), root.winfo_height()]
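If the blank window should not linger on screen, a small (untested) variation is to destroy it right after reading the size:
import tkinter as tk

root = tk.Tk()
root.state('zoomed')        # Maximize the window.
root.update_idletasks()     # Update the display.
screensize = [root.winfo_width(), root.winfo_height()]
root.destroy()              # Close the temporary window instead of entering mainloop().
print(screensize)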
I also found the following, which might be helpful and closer to what you are looking for; I am using Linux, so I am unable to test it.
from win32api import GetSystemMetrics
print("Width =", GetSystemMetrics(0))
print("Height =", GetSystemMetrics(1))
TL;DR
I am fiddling with a Raspberry Pi 2 and a 2.8" TFT touch screen attached to the Pi's GPIO. The Pi is also connected to a HDMI monitor.
My issue is that my Python3 pygame script is not able to use the TFT screen, but always displays on my HDMI screen instead.
Some background
I've installed the latest vanilla Raspbian ready-to-use distro and followed the TFT screen installation steps, everything works well: the TFT can display the console and X without issue. The touchscreen is calibrated and moves the cursor correctly. I can also see a new framebuffer device as /dev/fb1.
I've tried the following to test this new device:
sudo fbi -T 2 -d /dev/fb1 -noverbose -a my_picture.jpg
=> This successfully displays the pic on the TFT screen
while true; do sudo cat /dev/urandom > /dev/fb1; sleep .01; done
=> This successfully displays statics on the TFT screen
However, when I run this Python3/pygame script, the result appears in the HDMI screen consistently and not on the TFT screen:
#!/usr/bin/python3

import os, pygame, time

def setSDLVariables():
    print("Setting SDL variables...")
    os.environ["SDL_FBDEV"] = "/dev/fb1"
    os.environ["SDL_VIDEODRIVER"] = driver
    print("...done")

def printSDLVariables():
    print("Checking current env variables...")
    print("SDL_VIDEODRIVER = {0}".format(os.getenv("SDL_VIDEODRIVER")))
    print("SDL_FBDEV = {0}".format(os.getenv("SDL_FBDEV")))

def runHW5():
    print("Running HW5...")
    try:
        pygame.init()
    except pygame.error:
        print("Driver '{0}' failed!".format(driver))
    size = (pygame.display.Info().current_w, pygame.display.Info().current_h)
    print("Detected screen size: {0}".format(size))
    lcd = pygame.display.set_mode(size)
    lcd.fill((10, 50, 100))
    pygame.display.update()
    time.sleep(sleepTime)
    print("...done")

driver = 'fbcon'
sleepTime = 0.1

printSDLVariables()
setSDLVariables()
printSDLVariables()
runHW5()
The script above runs as follows:
pi@raspberrypi:~/Documents/Python_HW_GUI $ ./hw5-ThorPy-fb1.py
Checking current env variables...
SDL_VIDEODRIVER = None
SDL_FBDEV = None
Setting SDL variables...
...done
Checking current env variables...
SDL_VIDEODRIVER = fbcon
SDL_FBDEV = /dev/fb1
Running HW5...
Detected screen size: (1920, 1080)
...done
I have tried different drivers (fbcon, directfb, svgalib...) without success.
Any help or idea would be greatly appreciated; I've been through a lot of docs, manuals and samples and have just run out of leads :/ Furthermore, it appears that a lot of people have succeeded in getting Python3/pygame to output to their TFT screen via /dev/fb1.
I have been fiddling around with this for far too many hours now, but at least I have found what I'd call a decent workaround, if not a solution.
TL;DR
I've kept using pygame for building my graphics/GUI, and switched to evdev for handling the TFT touch events. The reason for using evdev rather than pygame's built-in input management (or pymouse, or any other high level stuff) is explained in the next section.
In a nutshell, this program builds some graphics in memory (in RAM, not in video memory) using pygame, and pushes the resulting bytes straight into the TFT screen's framebuffer. This bypasses any driver, so it is virtually compatible with any screen accessible through a framebuffer; however, it also bypasses any optimizations a proper driver would provide.
Here is a code sample that makes the magic happen:
#!/usr/bin/python3
##
# Prerequisites:
# A Touchscreen properly installed on your system:
# - a device to output to it, e.g. /dev/fb1
# - a device to get input from it, e.g. /dev/input/touchscreen
##
import pygame, time, evdev, select, math
# Very important: the exact pixel size of the TFT screen must be known so we can build graphics at this exact format
surfaceSize = (320, 240)
# Note that we don't instantiate any display!
pygame.init()
# The pygame surface we are going to draw onto.
# /!\ It must be the exact same size of the target display /!\
lcd = pygame.Surface(surfaceSize)
# This is the important bit
def refresh():
    # We open the TFT screen's framebuffer as a binary file. Note that we will write bytes into it, hence the "wb" mode
    f = open("/dev/fb1", "wb")
    # According to the TFT screen specs, it supports only 16-bit pixel depth.
    # Pygame surfaces use 24-bit pixel depth by default, but the surface itself provides a very handy method to convert it.
    # Once converted, we write the full byte buffer of the pygame surface into the TFT screen framebuffer like we would into a plain file:
    f.write(lcd.convert(16, 0).get_buffer())
    # We can then close our access to the framebuffer
    f.close()
    time.sleep(0.1)
# Now we've got a function that can get the bytes from a pygame surface to the TFT framebuffer,
# we can use the usual pygame primitives to draw on our surface before calling the refresh function.
# Here we just blink the screen background in a few colors with the "Hello World!" text
pygame.font.init()
defaultFont = pygame.font.SysFont(None,30)
lcd.fill((255,0,0))
lcd.blit(defaultFont.render("Hello World!", False, (0, 0, 0)),(0, 0))
refresh()
lcd.fill((0, 255, 0))
lcd.blit(defaultFont.render("Hello World!", False, (0, 0, 0)),(0, 0))
refresh()
lcd.fill((0,0,255))
lcd.blit(defaultFont.render("Hello World!", False, (0, 0, 0)),(0, 0))
refresh()
lcd.fill((128, 128, 128))
lcd.blit(defaultFont.render("Hello World!", False, (0, 0, 0)),(0, 0))
refresh()
##
# Everything that follows is for handling the touchscreen touch events via evdev
##
# Used to map touch event from the screen hardware to the pygame surface pixels.
# (Those values have been found empirically, but I'm working on a simple interactive calibration tool.)
tftOrig = (3750, 180)
tftEnd = (150, 3750)
tftDelta = (tftEnd [0] - tftOrig [0], tftEnd [1] - tftOrig [1])
tftAbsDelta = (abs(tftEnd [0] - tftOrig [0]), abs(tftEnd [1] - tftOrig [1]))
# We use evdev to read events from our touchscreen
# (The device must exist and be properly installed for this to work)
touch = evdev.InputDevice('/dev/input/touchscreen')
# We make sure the events from the touchscreen will be handled only by this program
# (so the mouse pointer won't move on X when we touch the TFT screen)
touch.grab()
# Prints some info on how evdev sees our input device
print(touch)
# Even more info for curious people
#print(touch.capabilities())
# Here we convert the evdev "hardware" touch coordinates into pygame surface pixel coordinates
def getPixelsFromCoordinates(coords):
    # TODO check divide by 0!
    if tftDelta[0] < 0:
        x = float(tftAbsDelta[0] - coords[0] + tftEnd[0]) / float(tftAbsDelta[0]) * float(surfaceSize[0])
    else:
        x = float(coords[0] - tftOrig[0]) / float(tftAbsDelta[0]) * float(surfaceSize[0])
    if tftDelta[1] < 0:
        y = float(tftAbsDelta[1] - coords[1] + tftEnd[1]) / float(tftAbsDelta[1]) * float(surfaceSize[1])
    else:
        y = float(coords[1] - tftOrig[1]) / float(tftAbsDelta[1]) * float(surfaceSize[1])
    return (int(x), int(y))
# Was useful to see what pieces I would need from the evdev events
def printEvent(event):
    print(evdev.categorize(event))
    print("Value: {0}".format(event.value))
    print("Type: {0}".format(event.type))
    print("Code: {0}".format(event.code))
# This loop allows us to write red dots on the screen where we touch it
while True:
    # TODO get the right ecodes instead of int
    r, w, x = select.select([touch], [], [])
    for event in touch.read():
        if event.type == evdev.ecodes.EV_ABS:
            if event.code == 1:
                X = event.value
            elif event.code == 0:
                Y = event.value
        elif event.type == evdev.ecodes.EV_KEY:
            if event.code == 330 and event.value == 1:
                printEvent(event)
                p = getPixelsFromCoordinates((X, Y))
                print("TFT: {0}:{1} | Pixels: {2}:{3}".format(X, Y, p[0], p[1]))
                pygame.draw.circle(lcd, (255, 0, 0), p, 2, 2)
                refresh()
exit()
More details
A quick recap on what I wanted to achieve: my goal is to display content onto a TFT display with the following constraints:
Be able to display another content on the HDMI display without interference (e.g. X on HDMI, the output of a graphical app on the TFT);
be able to use the touch capability of the TFT display for the benefit of the graphical app;
make sure the point above would not interfere with the mouse pointer on the HDMI display;
leverage Python and Pygame to keep it very easy to build whatever graphics/GUI I'd fancy;
keep a less-than-decent-but-sufficient-for-me framerate, e.g. 10 FPS.
Why not use pygame/SDL 1.2.x as instructed in many forums and the Adafruit TFT manual?
First, it doesn't work, at all. I have tried a gazillion versions of libsdl and its dependencies and they all failed consistently. I've tried forcing some libsdl version downgrades, and the same with the pygame version, just to try to get back to what the software looked like when my TFT screen was released (~2014). Then I also tried switching to C and handling the SDL2 primitives directly.
Furthermore, SDL 1.2 is getting old and I believe it is bad practice to build new code on top of old code. That said, I am still using pygame-1.9.4...
So why not SDL2? Well, they have stopped (or are about to stop) supporting framebuffers. I have not tried their alternative to framebuffers, EGL, as it got more complex the deeper I dug, and it did not look too engaging (so old it felt like necro-browsing). Any fresh help or advice on that would be greatly appreciated, BTW.
What about the touchscreen inputs?
All the high-level solutions that work in a conventional context assume a display. I've tried pygame events, pymouse and a couple of others that would not work in my case, as I got rid of the notion of a display on purpose. That's why I had to go back to a generic, low-level solution, and the internet introduced me to evdev; see the commented code above for more details.
Any comment on the above would be greatly appreciated; these are my first steps with Raspbian, Python and TFT screens, and I reckon I have most probably missed some pretty obvious stuff along the way.
I'm trying to click somewhere on the desktop using Python with the win32 API. I'm running 32-bit Python on a 64-bit computer. I believe the lParam variable isn't holding the value I'm expecting, and I'm still a bit confused about that variable itself. Let's say I import it from wintypes: can anyone tell me how to use it? Why does my function below not work?
I have the following function, which doesn't seem to work:
import win32gui
import win32con

def clickDesktop(x=0, y=0):
    # Get handle to desktop window
    desktop = win32gui.GetDesktopWindow()
    # Create variable lParam that contains the x-coordinate in the low-order word while
    # the high-order word contains the y coordinate.
    lParam = y << 16 | x
    # Click at x, y in the desktop window
    win32gui.PostMessage(desktop, win32con.WM_LBUTTONDOWN, win32con.MK_LBUTTON, lParam)
    win32gui.PostMessage(desktop, win32con.WM_LBUTTONUP, 0, lParam)
The following code works with Python33 on Windows 7.
I used ctypes.
The LPARAM parameter for WM_LBUTTONDBLCLK combines x and y in a single 32-bit value.
When I run that code, it opens the "My Computer" icon located at the upper-left corner of my Desktop (my taskbar is also on the left, hence the high value of 110 for x).
from ctypes import windll
WM_LBUTTONDBLCLK = 0x0203
MK_LBUTTON = 0x0001
if __name__ == '__main__':
    hProgman = windll.User32.FindWindowW("Progman", 0)
    if hProgman != 0:
        hFolder = windll.User32.FindWindowExW(hProgman, 0, "SHELLDLL_DefView", 0)
        if hFolder != 0:
            hListView = windll.User32.FindWindowExW(hFolder, 0, "SysListView32", 0)
            if hListView != 0:
                windll.User32.PostMessageW(hListView, WM_LBUTTONDBLCLK, MK_LBUTTON,
                                           110 + (65536 * 32))
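As an aside, the packed lParam can be built with a small helper mirroring the Win32 MAKELPARAM macro instead of hard-coding the value:
def make_lparam(x, y):
    # Low-order word = x, high-order word = y, as the WM_LBUTTON* messages expect.
    return ((y & 0xFFFF) << 16) | (x & 0xFFFF)

# The hard-coded value above is equivalent to:
assert make_lparam(110, 32) == 110 + (65536 * 32)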
EDIT
The WM_LBUTTON* messages are normally posted by Windows to the window under the pointer. The desktop window has child windows, and it is those child windows that are "under the pointer". If you want to use the PostMessage API, you need to know which window you will post the message to.
If you don't want to bother with the window hierarchy, then just use SendInput. Windows will then do the work for you and finally post the mouse message to the correct window handle (a rough ctypes sketch follows).
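A rough, untested sketch of the SendInput route via ctypes (structure layout per the Win32 docs; only the mouse member of the INPUT union is declared here, which is also its largest member). It moves the cursor to absolute screen coordinates and sends a left click there:
import ctypes
from ctypes import wintypes

INPUT_MOUSE = 0
MOUSEEVENTF_MOVE = 0x0001
MOUSEEVENTF_LEFTDOWN = 0x0002
MOUSEEVENTF_LEFTUP = 0x0004
MOUSEEVENTF_ABSOLUTE = 0x8000
SM_CXSCREEN, SM_CYSCREEN = 0, 1

class MOUSEINPUT(ctypes.Structure):
    _fields_ = [("dx", wintypes.LONG), ("dy", wintypes.LONG),
                ("mouseData", wintypes.DWORD), ("dwFlags", wintypes.DWORD),
                ("time", wintypes.DWORD), ("dwExtraInfo", ctypes.c_size_t)]

class INPUT(ctypes.Structure):
    class _U(ctypes.Union):
        _fields_ = [("mi", MOUSEINPUT)]
    _anonymous_ = ("u",)
    _fields_ = [("type", wintypes.DWORD), ("u", _U)]

def click_at(x, y):
    user32 = ctypes.windll.user32
    # SendInput expects absolute coordinates normalised to the 0..65535 range.
    nx = x * 65535 // user32.GetSystemMetrics(SM_CXSCREEN)
    ny = y * 65535 // user32.GetSystemMetrics(SM_CYSCREEN)

    def mouse(flags, dx=0, dy=0):
        inp = INPUT()
        inp.type = INPUT_MOUSE
        inp.mi = MOUSEINPUT(dx, dy, 0, flags, 0, 0)
        return inp

    events = (INPUT * 3)(
        mouse(MOUSEEVENTF_MOVE | MOUSEEVENTF_ABSOLUTE, nx, ny),
        mouse(MOUSEEVENTF_LEFTDOWN),
        mouse(MOUSEEVENTF_LEFTUP),
    )
    user32.SendInput(len(events), events, ctypes.sizeof(INPUT))

# Note: SendInput works in screen coordinates, not a window's client coordinates.
click_at(110, 32)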
It may be easier to install pywinauto and use ClickInput in combination with find_windows and Rectangle
Links to implementation:
https://code.google.com/p/pywinauto/source/browse/pywinauto/controls/HwndWrapper.py?name=0.4.2#1465
https://code.google.com/p/pywinauto/source/browse/pywinauto/findwindows.py?name=0.4.2#81
https://code.google.com/p/pywinauto/source/browse/pywinauto/handleprops.py?name=0.4.2#135
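A rough sketch of that approach, based on the 0.4.x-era API the links above point to (newer pywinauto releases expose the same idea as click_input()/rectangle(); the window title pattern is just a placeholder):
from pywinauto import findwindows
from pywinauto.controls.HwndWrapper import HwndWrapper

handle = findwindows.find_windows(title_re=".*Notepad.*")[0]  # placeholder title pattern
wrapper = HwndWrapper(handle)
print(wrapper.Rectangle())            # window rectangle in screen coordinates
wrapper.ClickInput(coords=(10, 10))   # real mouse click at client coordinates (10, 10)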
I'm trying to take a screenshot of an applet running inside a browser. The applet is using JOGL (OpenGL for Java) to display 3D models. (1) The screenshots always come out either black or white. The current solution uses the usual GDI calls. Screenshots of applets not running OpenGL are fine.
A few examples of JOGL apps can be found here: https://jogl-demos.dev.java.net/
(2) Another thing I'm trying to achieve is to get the scrollable area inside the screenshot as well.
I found this code on the internet, which works fine except for the 2 issues mentioned above.
import win32gui as wg
import win32ui as wu
import win32con
def copyBitMap(hWnd, fname):
    wg.SetForegroundWindow(hWnd)
    cWnd = wu.CreateWindowFromHandle(hWnd)
    rect = cWnd.GetClientRect()
    (x, y) = (rect[2] - rect[0], rect[3] - rect[1])
    hsrccDc = wg.GetDC(hWnd)
    hdestcDc = wg.CreateCompatibleDC(hsrccDc)
    hdestcBm = wg.CreateCompatibleBitmap(hsrccDc, x, y)
    wg.SelectObject(hdestcDc, hdestcBm.handle)
    wg.BitBlt(hdestcDc, 0, 0, x, y, hsrccDc, rect[0], rect[1], win32con.SRCCOPY)
    destcDc = wu.CreateDCFromHandle(hdestcDc)
    bmp = wu.CreateBitmapFromHandle(hdestcBm.handle)
    bmp.SaveBitmapFile(destcDc, fname)
Unless you are trying to automate it, I would just use a Firefox extension for this. There are a number of them returned from a search for "screenshot" that can take a screenshot of the entire browser page including the scrollable area:
FireShot
Screengrab
Snapper (for older Firefox versions)
However, I apologize, I don't know enough about Python to debug your specific issue if you are indeed trying to do it programmatically.
Here is one way to do it by disabling DWM (Desktop Window Manager) composition before taking the screenshot, but this causes the whole screen to blink whenever it is enabled or disabled.
from ctypes import WinDLL
from time import sleep
import win32gui as wg
import win32ui as wu
import win32con
def copyBitMap(hWnd, fname):
    dwm = WinDLL("dwmapi.dll")
    dwm.DwmEnableComposition(0)
    wg.SetForegroundWindow(hWnd)
    # Give the window some time to redraw itself
    sleep(2)
    cWnd = wu.CreateWindowFromHandle(hWnd)
    rect = cWnd.GetClientRect()
    (x, y) = (rect[2] - rect[0], rect[3] - rect[1])
    hsrccDc = wg.GetDC(hWnd)
    hdestcDc = wg.CreateCompatibleDC(hsrccDc)
    hdestcBm = wg.CreateCompatibleBitmap(hsrccDc, x, y)
    wg.SelectObject(hdestcDc, hdestcBm.handle)
    wg.BitBlt(hdestcDc, 0, 0, x, y, hsrccDc, rect[0], rect[1], win32con.SRCCOPY)
    destcDc = wu.CreateDCFromHandle(hdestcDc)
    bmp = wu.CreateBitmapFromHandle(hdestcBm.handle)
    bmp.SaveBitmapFile(destcDc, fname)
    dwm.DwmEnableComposition(1)
Grabbing an OpenGL window can be quite hard in some cases, since OpenGL is rendered by the GPU directly into its frame buffer. The same applies to DirectX windows and video overlay windows.
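A pragmatic workaround (an untested sketch, assuming Pillow is available) is to bring the window to the foreground and grab the composited screen region instead of BitBlt'ing the window DC, since that captures whatever is actually on screen, including GPU-rendered content:
import time
import win32gui
from PIL import ImageGrab

def grab_window(hWnd, fname):
    win32gui.SetForegroundWindow(hWnd)
    time.sleep(0.5)                       # give the window time to repaint
    bbox = win32gui.GetWindowRect(hWnd)   # (left, top, right, bottom) in screen coords
    ImageGrab.grab(bbox).save(fname)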
Why not use the Screenshot class of JOGL?
com.jogamp.opengl.util.awt.Screenshot in JOGL 2.0 beta