Screen Capture Under Win7 of JOGL Applet - python

I'm trying to take a screen shot of an applet running inside a
browser. The applet is using JOGL (OpenGL for Java) to display 3D
models.
(1) The screen shots always come out either black or white. The current solution uses the usual GDI calls; screen shots of applets not running OpenGL come out fine.
A few examples of JOGL apps can be found here: https://jogl-demos.dev.java.net/
(2) Another thing I'm trying to achieve is to include the scrollable area in the screen shot as well.
I found this code on the internet which works fine except for the 2
issues mentioned above.
import win32gui as wg
import win32ui as wu
import win32con

def copyBitMap(hWnd, fname):
    wg.SetForegroundWindow(hWnd)
    cWnd = wu.CreateWindowFromHandle(hWnd)
    rect = cWnd.GetClientRect()
    (x, y) = (rect[2] - rect[0], rect[3] - rect[1])
    hsrccDc = wg.GetDC(hWnd)
    hdestcDc = wg.CreateCompatibleDC(hsrccDc)
    hdestcBm = wg.CreateCompatibleBitmap(hsrccDc, x, y)
    wg.SelectObject(hdestcDc, hdestcBm.handle)
    wg.BitBlt(hdestcDc, 0, 0, x, y, hsrccDc, rect[0], rect[1], win32con.SRCCOPY)
    destcDc = wu.CreateDCFromHandle(hdestcDc)
    bmp = wu.CreateBitmapFromHandle(hdestcBm.handle)
    bmp.SaveBitmapFile(destcDc, fname)

Unless you are trying to automate it, I would just use a Firefox extension for this. There are a number of them returned from a search for "screenshot" that can take a screenshot of the entire browser page including the scrollable area:
FireShot
Screengrab
Snapper (for older Firefox versions)
However, I apologize, I don't know enough about Python to debug your specific issue if you are indeed trying to do it programmatically.

Here is one way to do it: disable DWM (Desktop Window Manager) composition before taking the screen shot. The drawback is that the whole screen blinks whenever composition is enabled or disabled.
from ctypes import WinDLL
from time import sleep
import win32gui as wg
import win32ui as wu
import win32con

def copyBitMap(hWnd, fname):
    dwm = WinDLL("dwmapi.dll")
    dwm.DwmEnableComposition(0)
    wg.SetForegroundWindow(hWnd)
    # Give the window some time to redraw itself
    sleep(2)
    cWnd = wu.CreateWindowFromHandle(hWnd)
    rect = cWnd.GetClientRect()
    (x, y) = (rect[2] - rect[0], rect[3] - rect[1])
    hsrccDc = wg.GetDC(hWnd)
    hdestcDc = wg.CreateCompatibleDC(hsrccDc)
    hdestcBm = wg.CreateCompatibleBitmap(hsrccDc, x, y)
    wg.SelectObject(hdestcDc, hdestcBm.handle)
    wg.BitBlt(hdestcDc, 0, 0, x, y, hsrccDc, rect[0], rect[1], win32con.SRCCOPY)
    destcDc = wu.CreateDCFromHandle(hdestcDc)
    bmp = wu.CreateBitmapFromHandle(hdestcBm.handle)
    bmp.SaveBitmapFile(destcDc, fname)
    dwm.DwmEnableComposition(1)
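A minimal usage sketch, assuming the browser window hosting the applet can be located by its title (the title string below is only a placeholder):
import win32gui as wg

# Placeholder title; substitute the actual browser window title.
hwnd = wg.FindWindow(None, "JOGL demo - Mozilla Firefox")
if hwnd:
    copyBitMap(hwnd, "applet.bmp")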

Grabbing an OpenGL window can be quite hard in some cases, since the OpenGL content is rendered by the GPU directly into its own frame buffer. The same applies to DirectX windows and video overlay windows.
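If the capture still has to be automated, one alternative worth trying is PrintWindow, which asks the window to paint itself into a memory DC instead of copying pixels from the screen. This is only a sketch, not a verified fix for the JOGL case: the PW_RENDERFULLCONTENT flag (value 2) is only honoured on Windows 8.1 and later, so it may not help on Win7, and some OpenGL windows may still come out blank.
import ctypes
import win32gui as wg
import win32ui as wu

def printWindowToFile(hWnd, fname):
    # Window size, including the frame
    left, top, right, bottom = wg.GetWindowRect(hWnd)
    w, h = right - left, bottom - top
    hWndDC = wg.GetWindowDC(hWnd)
    srcDc = wu.CreateDCFromHandle(hWndDC)
    memDc = srcDc.CreateCompatibleDC()
    bmp = wu.CreateBitmap()
    bmp.CreateCompatibleBitmap(srcDc, w, h)
    memDc.SelectObject(bmp)
    # 2 == PW_RENDERFULLCONTENT (Windows 8.1+); use 0 on older systems
    ctypes.windll.user32.PrintWindow(hWnd, memDc.GetSafeHdc(), 2)
    bmp.SaveBitmapFile(memDc, fname)
    # Clean up GDI objects
    wg.DeleteObject(bmp.GetHandle())
    memDc.DeleteDC()
    srcDc.DeleteDC()
    wg.ReleaseDC(hWnd, hWndDC)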

Why not use the Screenshot class of JOGL?
com.jogamp.opengl.util.awt.Screenshot in JOGL 2.0 beta

Raspberry Pi: Python3 pygame script displays on the HDMI monitor instead of the TFT framebuffer (/dev/fb1)

TL;DR
I am fiddling with a Raspberry Pi 2 and a 2.8" TFT touch screen attached to the Pi's GPIO. The Pi is also connected to an HDMI monitor.
My issue is that my Python3 pygame script is not able to use the TFT screen, but always displays on my HDMI screen instead.
Some background
I've installed the latest vanilla Raspbian ready-to-use distro and followed the TFT screen installation steps, everything works well: the TFT can display the console and X without issue. The touchscreen is calibrated and moves the cursor correctly. I can also see a new framebuffer device as /dev/fb1.
I've tried the following to test this new device:
sudo fbi -T 2 -d /dev/fb1 -noverbose -a my_picture.jpg
=> This successfully displays the pic on the TFT screen
while true; do sudo cat /dev/urandom > /dev/fb1; sleep .01; done
=> This successfully displays static (noise) on the TFT screen
However, when I run this Python3/pygame script, the result appears in the HDMI screen consistently and not on the TFT screen:
#!/usr/bin/python3
import os, pygame, time

def setSDLVariables():
    print("Setting SDL variables...")
    os.environ["SDL_FBDEV"] = "/dev/fb1"
    os.environ["SDL_VIDEODRIVER"] = driver
    print("...done")

def printSDLVariables():
    print("Checking current env variables...")
    print("SDL_VIDEODRIVER = {0}".format(os.getenv("SDL_VIDEODRIVER")))
    print("SDL_FBDEV = {0}".format(os.getenv("SDL_FBDEV")))

def runHW5():
    print("Running HW5...")
    try:
        pygame.init()
    except pygame.error:
        print("Driver '{0}' failed!".format(driver))
    size = (pygame.display.Info().current_w, pygame.display.Info().current_h)
    print("Detected screen size: {0}".format(size))
    lcd = pygame.display.set_mode(size)
    lcd.fill((10, 50, 100))
    pygame.display.update()
    time.sleep(sleepTime)
    print("...done")

driver = 'fbcon'
sleepTime = 0.1

printSDLVariables()
setSDLVariables()
printSDLVariables()
runHW5()
The script above runs as follows:
pi#raspberrypi:~/Documents/Python_HW_GUI $ ./hw5-ThorPy-fb1.py
Checking current env variables...
SDL_VIDEODRIVER = None
SDL_FBDEV = None
Setting SDL variables...
...done
Checking current env variables...
SDL_VIDEODRIVER = fbcon
SDL_FBDEV = /dev/fb1
Running HW5...
Detected screen size: (1920, 1080)
...done
I have tried different drivers (fbcon, directfb, svgalib...) without success.
Any help or ideas would be greatly appreciated; I've been through a lot of docs, manuals and samples and have simply run out of leads. Furthermore, it appears that a lot of people have succeeded in getting Python3/pygame to output to their TFT screen via /dev/fb1.
I have been fiddling around with that for far too many hours now, but at least I have found what I'd call a decent workaround, if not a solution.
TL;DR
I've kept using pygame for building my graphics/GUI, and switched to evdev for handling the TFT touch events. The reason for using evdev rather than pygame's built-in input management (or pymouse, or any other high level stuff) is explained in the next section.
In a nutshell, this program builds some graphics in memory (RAM, not graphics memory) using pygame, and pushes the built graphics as bytes directly into the TFT screen's framebuffer. This bypasses any driver, so it is virtually compatible with any screen accessible through a framebuffer; however, it also bypasses any optimizations that a good driver would provide.
Here is a code sample that makes the magic happen:
#!/usr/bin/python3
##
# Prerequisites:
# A Touchscreen properly installed on your system:
# - a device to output to it, e.g. /dev/fb1
# - a device to get input from it, e.g. /dev/input/touchscreen
##
import pygame, time, evdev, select, math

# Very important: the exact pixel size of the TFT screen must be known so we can build graphics at this exact format
surfaceSize = (320, 240)

# Note that we don't instantiate any display!
pygame.init()

# The pygame surface we are going to draw onto.
# /!\ It must be the exact same size as the target display /!\
lcd = pygame.Surface(surfaceSize)

# This is the important bit
def refresh():
    # We open the TFT screen's framebuffer as a binary file. Note that we will write bytes into it, hence the "wb" mode
    f = open("/dev/fb1", "wb")
    # According to the TFT screen specs, it supports only 16-bit pixel depth.
    # Pygame surfaces use 24-bit pixel depth by default, but the surface itself provides a very handy method to convert it.
    # Once converted, we write the full byte buffer of the pygame surface into the TFT screen framebuffer like we would in a plain file:
    f.write(lcd.convert(16, 0).get_buffer())
    # We can then close our access to the framebuffer
    f.close()
    time.sleep(0.1)

# Now that we've got a function that can get the bytes from a pygame surface to the TFT framebuffer,
# we can use the usual pygame primitives to draw on our surface before calling the refresh function.
# Here we just blink the screen background in a few colors with the "Hello World!" text
pygame.font.init()
defaultFont = pygame.font.SysFont(None, 30)

lcd.fill((255, 0, 0))
lcd.blit(defaultFont.render("Hello World!", False, (0, 0, 0)), (0, 0))
refresh()

lcd.fill((0, 255, 0))
lcd.blit(defaultFont.render("Hello World!", False, (0, 0, 0)), (0, 0))
refresh()

lcd.fill((0, 0, 255))
lcd.blit(defaultFont.render("Hello World!", False, (0, 0, 0)), (0, 0))
refresh()

lcd.fill((128, 128, 128))
lcd.blit(defaultFont.render("Hello World!", False, (0, 0, 0)), (0, 0))
refresh()

##
# Everything that follows is for handling the touchscreen touch events via evdev
##

# Used to map touch events from the screen hardware to the pygame surface pixels.
# (Those values have been found empirically, but I'm working on a simple interactive calibration tool)
tftOrig = (3750, 180)
tftEnd = (150, 3750)
tftDelta = (tftEnd[0] - tftOrig[0], tftEnd[1] - tftOrig[1])
tftAbsDelta = (abs(tftEnd[0] - tftOrig[0]), abs(tftEnd[1] - tftOrig[1]))

# We use evdev to read events from our touchscreen
# (The device must exist and be properly installed for this to work)
touch = evdev.InputDevice('/dev/input/touchscreen')

# We make sure the events from the touchscreen will be handled only by this program
# (so the mouse pointer won't move on X when we touch the TFT screen)
touch.grab()

# Prints some info on how evdev sees our input device
print(touch)
# Even more info for curious people
#print(touch.capabilities())

# Here we convert the evdev "hardware" touch coordinates into pygame surface pixel coordinates
def getPixelsFromCoordinates(coords):
    # TODO check divide by 0!
    if tftDelta[0] < 0:
        x = float(tftAbsDelta[0] - coords[0] + tftEnd[0]) / float(tftAbsDelta[0]) * float(surfaceSize[0])
    else:
        x = float(coords[0] - tftOrig[0]) / float(tftAbsDelta[0]) * float(surfaceSize[0])
    if tftDelta[1] < 0:
        y = float(tftAbsDelta[1] - coords[1] + tftEnd[1]) / float(tftAbsDelta[1]) * float(surfaceSize[1])
    else:
        y = float(coords[1] - tftOrig[1]) / float(tftAbsDelta[1]) * float(surfaceSize[1])
    return (int(x), int(y))

# Was useful to see which pieces I would need from the evdev events
def printEvent(event):
    print(evdev.categorize(event))
    print("Value: {0}".format(event.value))
    print("Type: {0}".format(event.type))
    print("Code: {0}".format(event.code))

# This loop allows us to draw red dots on the screen where we touch it
while True:
    # TODO get the right ecodes instead of int
    r, w, x = select.select([touch], [], [])
    for event in touch.read():
        if event.type == evdev.ecodes.EV_ABS:
            if event.code == 1:
                X = event.value
            elif event.code == 0:
                Y = event.value
        elif event.type == evdev.ecodes.EV_KEY:
            if event.code == 330 and event.value == 1:
                printEvent(event)
                p = getPixelsFromCoordinates((X, Y))
                print("TFT: {0}:{1} | Pixels: {2}:{3}".format(X, Y, p[0], p[1]))
                pygame.draw.circle(lcd, (255, 0, 0), p, 2, 2)
                refresh()

exit()
More details
A quick recap on what I wanted to achieve: my goal is to display content onto a TFT display with the following constraints:
Be able to display other content on the HDMI display without interference (e.g. X on HDMI, the output of a graphical app on the TFT);
be able to use the touch capability of the TFT display for the benefit of the graphical app;
make sure the point above would not interfere with the mouse pointer on the HDMI display;
leverage Python and Pygame to keep it very easy to build whatever graphics/GUI I'd fancy;
keep a less-than-decent-but-sufficient-for-me framerate, e.g. 10 FPS.
Why not use pygame/SDL 1.2.x as instructed in many forums and the Adafruit TFT manual?
First, it doesn't work, at all. I have tried a gazillion versions of libsdl and its dependencies and they all failed consistently. I've tried forcing downgrades of some libsdl versions, and the same for the pygame version, just to try to get back to what the software was when my TFT screen was released (~2014). I also tried switching to C and handling the SDL2 primitives directly.
Furthermore, SDL 1.2 is getting old and I believe it is bad practice to build new code on top of old code. That said, I am still using pygame-1.9.4...
So why not SDL2? Well, they have stopped (or are about to stop) supporting framebuffers. I have not tried their alternative to framebuffers, EGL, as it got more complex the further I dug, and it did not look too engaging (so old it felt like necro-browsing). Any fresh help or advice on that would be greatly appreciated, BTW.
What about the touchscreen inputs?
All the high-level solutions that work in a conventional context assume a display. I've tried pygame events, pymouse and a couple of others that would not work in my case, as I got rid of the notion of a display on purpose. That's why I had to go back to a generic, low-level solution, and the internet introduced me to evdev; see the commented code above for more details.
Any comment on the above would be greatly appreciated; these are my first steps with Raspbian, Python and TFT screens, and I reckon I have most probably missed some pretty obvious stuff along the way.

Capturing a window in Python

I'm currently looking to find a module, or code, that would allow me to capture another process's window.
I've tried working with ImageGrab, however that just captures an area of the screen rather than binding to a specific process window. Since I'm working with a small monitor, I can't guarantee that something won't overlap the captured area of the screen, so sadly the ImageGrab solution isn't good enough.
You can achieve this using win32gui.
from PIL import ImageGrab
import win32gui
hwnd = win32gui.FindWindow(None, r'Window_Title')
win32gui.SetForegroundWindow(hwnd)
dimensions = win32gui.GetWindowRect(hwnd)
image = ImageGrab.grab(dimensions)
image.show()
You could also move the window to a preferred position if a small screen is the problem.
win32gui.MoveWindow(hwnd, 0, 0, 500, 700, True)
see win32gui.MoveWindow
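Putting the two together, a small sketch (the window title and output path are placeholders):
from PIL import ImageGrab
import win32gui

# Placeholder title; use the real window title here.
hwnd = win32gui.FindWindow(None, 'Window_Title')
win32gui.MoveWindow(hwnd, 0, 0, 500, 700, True)   # park the window at the top-left corner
win32gui.SetForegroundWindow(hwnd)
ImageGrab.grab(win32gui.GetWindowRect(hwnd)).save('capture.png')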

Python window detection [duplicate]

How can I get and set the window (any windows program) position and size with python?
Assuming you're on Windows, try using pywin32's win32gui module with its EnumWindows and GetWindowRect functions.
If you're using Mac OS X, you could try using appscript.
For Linux, you can try one of the many interfaces to X11.
Edit: Example for Windows (not tested):
import win32gui

def callback(hwnd, extra):
    rect = win32gui.GetWindowRect(hwnd)
    x = rect[0]
    y = rect[1]
    w = rect[2] - x
    h = rect[3] - y
    print("Window %s:" % win32gui.GetWindowText(hwnd))
    print("\tLocation: (%d, %d)" % (x, y))
    print("\t    Size: (%d, %d)" % (w, h))

def main():
    win32gui.EnumWindows(callback, None)

if __name__ == '__main__':
    main()
You can get the window coordinates using the GetWindowRect function. For this, you need a handle to the window, which you can get using FindWindow, assuming you know something about the window (such as its title).
To call Win32 API functions from Python, use pywin32.
As Greg Hewgill mentioned, if you know the name of the window, you can simply use win32gui's FindWindow and GetWindowRect. This is perhaps a little cleaner and more efficient than the previous methods.
from win32gui import FindWindow, GetWindowRect
# FindWindow takes the Window Class name (can be None if unknown), and the window's display text.
window_handle = FindWindow(None, "Diablo II")
window_rect = GetWindowRect(window_handle)
print(window_rect)
#(0, 0, 800, 600)
For future reference: PyWin32GUI has now moved to Github
This returns the window rect from the window title.
Code
import ctypes
import ctypes.wintypes

def GetWindowRectFromName(name: str) -> tuple:
    hwnd = ctypes.windll.user32.FindWindowW(0, name)
    rect = ctypes.wintypes.RECT()
    ctypes.windll.user32.GetWindowRect(hwnd, ctypes.pointer(rect))
    # print(hwnd)
    # print(rect)
    return (rect.left, rect.top, rect.right, rect.bottom)

if __name__ == "__main__":
    print(GetWindowRectFromName('CALC'))
    pass
Environment
Python 3.8.2 | packaged by conda-forge | (default, Apr 24 2020, 07:34:03) [MSC v.1916 64 bit (AMD64)] on win32
Windows 10 Pro 1909
For Linux you can use the tool I made here. The tool was meant for a slightly different use but you can use the API directly for your needs.
Install tool
sudo apt-get install xdotool xprop xwininfo
git clone https://github.com/Pithikos/winlaunch.git && cd winlaunch
In terminal
>>> from winlaunch import *
>>> wid, pid = launch('firefox')
>>> win_pos(wid)
[3210, 726]
wid and pid stand for window id and process id respectively.
This code will work on Windows. It returns the position and size of the active window.
from win32gui import GetWindowRect, GetForegroundWindow

print(GetWindowRect(GetForegroundWindow()))
Something not mentioned in any of the other responses is that in newer Windows (Vista and up), "the Window Rect now includes the area occupied by the drop shadow.", which is what win32gui.GetWindowRect and ctypes.windll.user32.GetWindowRect are interfacing with.
If you want to get the positions and sizes without the dropshadow, you can:
Manually remove them. In my case there were 10 pixels on the left, bottom and right which had to be pruned.
Use the dwmapi to extract the DWMWA_EXTENDED_FRAME_BOUNDS as mentioned in the article
On using dwmapi.DwmGetWindowAttribute (see here):
This function takes four arguments: the hwnd, the identifier of the attribute we are interested in, a pointer to the data structure in which to write the attribute, and the size of that data structure. The identifier we get by checking this enum; in our case, the attribute DWMWA_EXTENDED_FRAME_BOUNDS is at position 9.
import ctypes
from ctypes.wintypes import HWND, DWORD, RECT

dwmapi = ctypes.WinDLL("dwmapi")

hwnd = 133116  # refer to the other answers on how to find the hwnd of your window
rect = RECT()
DWMWA_EXTENDED_FRAME_BOUNDS = 9
dwmapi.DwmGetWindowAttribute(HWND(hwnd), DWORD(DWMWA_EXTENDED_FRAME_BOUNDS),
                             ctypes.byref(rect), ctypes.sizeof(rect))
print(rect.left, rect.top, rect.right, rect.bottom)
Lastly: "Note that unlike the Window Rect, the DWM Extended Frame Bounds are not adjusted for DPI".

pygame dual monitors and fullscreen

I am using pygame to program a simple behavioral test. I'm running it on my MacBook Pro and have almost all the functionality working. However, during testing I'll have a second, external monitor that the subject sees, plus the laptop's own monitor. I'd like to have the game show up fullscreen on the external monitor and not on the laptop's monitor, so that I can monitor performance. Currently, the start of the file looks something like:
#! /usr/bin/env python2.6
import curses
import pygame
import sys

stdscr = curses.initscr()
pygame.init()
screen = pygame.display.set_mode((1900, 1100), pygame.RESIZABLE)
I was thinking of starting the game in a resizable window, but OS X has problems resizing the window.
Pygame doesn't support two displays in a single pygame process (yet). See the question here and the developer answer immediately after, where he says
Once SDL 1.3 is finished then pygame will get support for using multiple windows in the same process.
So, your options are:
Use multiple processes: two pygame instances, each maximized on its own screen, communicating back and forth (you could use any of the very cool python multiprocessing module, local TCP, pipes, writing/reading files, etc.); a minimal sketch follows this list.
Set the same resolution on both of your displays, and create a large (wide) window that spans them, with your information on one half and the user display on the other. Then manually place the window so that the user side is on their screen and yours is on the laptop screen. It's hacky, but it might be a better use of your time than engineering a better solution ("If it's stupid and it works, it ain't stupid" ;).
Use pyglet, which is similar to pygame and supports full screen windows: pyglet.window.Window(fullscreen=True, screen=screens[1])
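A minimal sketch of the first option, assuming the external monitor sits to the right of a 1440-pixel-wide laptop screen and runs at 1920x1080 (both numbers are placeholders), with a multiprocessing Pipe carrying simple messages from the experimenter's process to the subject's window:
import os
from multiprocessing import Process, Pipe

def subject_display(conn, x_offset):
    # Place this window on the external monitor before pygame initialises its video.
    os.environ['SDL_VIDEO_WINDOW_POS'] = "%d,0" % x_offset
    import pygame
    pygame.init()
    screen = pygame.display.set_mode((1920, 1080), pygame.NOFRAME)
    while True:
        msg = conn.recv()              # block until the experimenter sends something
        if msg == 'quit':
            break
        # ... draw whatever stimulus `msg` describes ...
        screen.fill((0, 0, 0))
        pygame.display.flip()
    pygame.quit()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=subject_display, args=(child_conn, 1440))
    p.start()
    parent_conn.send('show_fixation')  # placeholder message protocol
    parent_conn.send('quit')
    p.join()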
Good luck.
I do not know if you can do this in OS X, but it is worth mentioning for the Windows users out there: if you just want your program to run full screen on the second screen and you are on Windows, just set the other screen as the main one.
The setting can be found under Rearrange Your Displays in Settings.
So far, anything that I can run on my main display can run this way; there is no need to change your code.
I did something silly, but it works.
I get the number of monitors with get_monitors(), then I use SDL to change the pygame window's position by adding to it the width of the smallest screen, to make sure the window ends up on the second monitor.
import os
import pygame
from screeninfo import get_monitors

numberOfmonitors = 0
smallScreenWidth = 9999

for monitor in get_monitors():
    # getting the smallest screen width
    smallScreenWidth = min(smallScreenWidth, monitor.width)
    numberOfmonitors += 1

if numberOfmonitors > 1:
    x = smallScreenWidth
    y = 0
    # this will position the pygame window in the second monitor
    os.environ['SDL_VIDEO_WINDOW_POS'] = "%d,%d" % (x, y)

# you can check with a small window
#screen = pygame.display.set_mode((100,100))
# or go full screen in the second monitor
screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)

# If you want to do other tasks on the laptop (first monitor) while the pygame
# window is displayed on the second monitor, you shouldn't use fullscreen.
# Instead, get the second monitor's width and height from monitor.width and
# monitor.height, and set the display mode like:
screen = pygame.display.set_mode((width, height))
import pyglet

display = pyglet.canvas.get_display()
screens = display.get_screens()
print(screens)  # all available monitors
# pass screen=screens[monitor index] to pick the target monitor
win = pyglet.window.Window(screen=screens[1])

Digital Image cropping in Python

Got this question from a professor, a physicist.
I am a beginner in Python programming. I am not a computer professional I am a physicist. I was trying to write a code in python for my own research which involves a little image processing.
All I need to do is to display an image and then select a region of interest using my mouse and finally crop out the selected region. I can do this in Matlab using the ginput() function.
I tried using PIL. But I find that after I issue the command Image.show(), the image is displayed but then the program halts there unless I exit from the image window. Is there any way to implement what I was planning? Do I need to download any other module? Please advise.
While I agree with David that you should probably just use GIMP or some other image manipulation program, here is a script (as I took it to be an exercise for the reader) using pygame that does what you want. You will need to install pygame as well as PIL; usage would be:
scriptname.py <input_path> <output_path>
Actual script:
import pygame, sys
from PIL import Image
pygame.init()

def displayImage(screen, px, topleft):
    screen.blit(px, px.get_rect())
    if topleft:
        pygame.draw.rect(screen, (128, 128, 128),
                         pygame.Rect(topleft[0], topleft[1],
                                     pygame.mouse.get_pos()[0] - topleft[0],
                                     pygame.mouse.get_pos()[1] - topleft[1]))
    pygame.display.flip()

def setup(path):
    px = pygame.image.load(path)
    screen = pygame.display.set_mode(px.get_rect()[2:])
    screen.blit(px, px.get_rect())
    pygame.display.flip()
    return screen, px

def mainLoop(screen, px):
    topleft = None
    bottomright = None
    runProgram = True
    while runProgram:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                runProgram = False
            elif event.type == pygame.MOUSEBUTTONUP:
                if not topleft:
                    topleft = event.pos
                else:
                    bottomright = event.pos
                    runProgram = False
        displayImage(screen, px, topleft)
    return (topleft + bottomright)

if __name__ == "__main__":
    screen, px = setup(sys.argv[1])
    left, upper, right, lower = mainLoop(screen, px)
    im = Image.open(sys.argv[1])
    im = im.crop((left, upper, right, lower))
    im.save(sys.argv[2])
Hope this helps :)
For what it's worth (coming from another physicist), I would just do this in an image processing program like the GIMP. The main benefit of doing this task in Python (or any language) would be to save time by automating the process, but unless you - well, the professor - can somehow develop an algorithm to automatically figure out what part of the image to crop, there doesn't seem to be much time to be saved by automation.
If I remember correctly, GIMP is actually scriptable, possibly with Python, so it might be possible to write a time-saving GIMP script to do what your professor friend wants.
Image.show() just calls whatever simple picture viewer it can find on the current platform, one that may or may not have a crop-and-save facility.
If you are on a Windows box and you just need to make it work on your machine, set the ‘Open with...’ association to make it so running an image loads it into an editor of your choice. On OS X and *nix you'd want to hack the _showxv() method at the bottom of Image.py to change the command used to open the image.
If you do actually need to provide a portable solution, you'll need to use a UI framework to power your cropping application. The choices boil down to Tkinter (ImageTk.py gives you a wrapper for displaying PIL images in Tk), PyQT4 (ImageQt in PIL 1.1.6 gives you a wrapper for displaying images in QT4) or wxPython (a higher-level application authoring toolkit using wxWidgets). It'll be quite a bit of work to get the hang of a full UI kit, but you'll be able to completely customise how your application's interface will work.
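For the Tkinter route, here is a minimal sketch of an interactive cropper: press the mouse at one corner of the region, release at the opposite corner, and the crop is saved. The function name and command-line usage are my own, not from the original post.
import sys
import tkinter as tk
from PIL import Image, ImageTk

def crop_with_tk(in_path, out_path):
    im = Image.open(in_path)
    w, h = im.size
    root = tk.Tk()
    canvas = tk.Canvas(root, width=w, height=h)
    canvas.pack()
    photo = ImageTk.PhotoImage(im)  # keep a reference, or Tk garbage-collects the image
    canvas.create_image(0, 0, image=photo, anchor='nw')
    box = {}

    def on_press(event):
        # first corner of the selection
        box['start'] = (event.x, event.y)

    def on_release(event):
        # opposite corner: crop, save, and close the window
        x0, y0 = box['start']
        x1, y1 = event.x, event.y
        region = (min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))
        im.crop(region).save(out_path)
        root.destroy()

    canvas.bind('<ButtonPress-1>', on_press)
    canvas.bind('<ButtonRelease-1>', on_release)
    root.mainloop()

if __name__ == '__main__':
    crop_with_tk(sys.argv[1], sys.argv[2])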
Is there a script in Python, like a library, to auto-crop images?
Automatically crop image
What you are looking for is the matplotlib module; it emulates Matlab. See the ginput() function. That allows you to find the bounding box, then you can use crop from PIL.
http://matplotlib.sourceforge.net/api/figure_api.html
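A small sketch of that approach (the two clicks define opposite corners of the box; the file names are placeholders):
import matplotlib.pyplot as plt
from PIL import Image

im = Image.open("input.jpg")      # placeholder path
plt.imshow(im)
# Wait for two mouse clicks: opposite corners of the region of interest.
(x0, y0), (x1, y1) = plt.ginput(2)
plt.close()
box = (int(min(x0, x1)), int(min(y0, y1)), int(max(x0, x1)), int(max(y0, y1)))
im.crop(box).save("cropped.jpg")  # placeholder path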
