How can I tell when Windows is changing a monitor's power state?
It seems that, when Windows wants to start the screen saver or turn the monitor off, it sends a WM_SYSCOMMAND to the topmost window with a wParam of SC_SCREENSAVE (to start the screen saver) or a wParam of SC_MONITORPOWER and an lParam of 1 (low power) or 2 (power off). This message is then passed to DefWindowProc, which actually performs the action. So, if your window happens to be the topmost one, you can intercept these events and ignore them (or do anything else you want before passing them to DefWindowProc).
On Windows Vista, there seems to be a more intuitive, and more reliable, way to know the monitor power state. You call RegisterPowerSettingNotification to tell the system to send your window a WM_POWERBROADCAST message with a wParam of PBT_POWERSETTINGCHANGE and an lParam pointing to a POWERBROADCAST_SETTING structure.
I cannot test either of them since I currently do not have any computer with Windows nearby. I hope, however, they point you in the right direction.
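If it helps, here is an untested sketch of both approaches in Python with pywin32 and ctypes (the SC_*/PBT_* constants and the GUID_MONITOR_POWER_ON value are copied from WinUser.h; treat it as a starting point rather than working code):

import ctypes
import win32api, win32con, win32gui

SC_SCREENSAVE = 0xF140
SC_MONITORPOWER = 0xF170
PBT_POWERSETTINGCHANGE = 0x8013

class GUID(ctypes.Structure):
    _fields_ = [("Data1", ctypes.c_uint32), ("Data2", ctypes.c_uint16),
                ("Data3", ctypes.c_uint16), ("Data4", ctypes.c_ubyte * 8)]

# GUID_MONITOR_POWER_ON, {02731015-4510-4526-99E6-E5A17EBD1AEA} in WinUser.h
GUID_MONITOR_POWER_ON = GUID(0x02731015, 0x4510, 0x4526,
                             (ctypes.c_ubyte * 8)(0x99, 0xE6, 0xE5, 0xA1,
                                                  0x7E, 0xBD, 0x1A, 0xEA))

def wndproc(hwnd, msg, wparam, lparam):
    # Pre-Vista approach: swallow the request before DefWindowProc acts on it.
    # The low four bits of wParam are used internally, hence the mask.
    if msg == win32con.WM_SYSCOMMAND and \
            (wparam & 0xFFF0) in (SC_SCREENSAVE, SC_MONITORPOWER):
        return 0
    # Vista approach: the notification arrives via WM_POWERBROADCAST.
    if msg == win32con.WM_POWERBROADCAST and wparam == PBT_POWERSETTINGCHANGE:
        print("monitor power setting changed")  # lParam -> POWERBROADCAST_SETTING
        return True
    return win32gui.DefWindowProc(hwnd, msg, wparam, lparam)

wc = win32gui.WNDCLASS()
wc.lpszClassName = "PowerWatch"
wc.lpfnWndProc = wndproc
wc.hInstance = win32api.GetModuleHandle(None)
win32gui.RegisterClass(wc)
hwnd = win32gui.CreateWindow(wc.lpszClassName, "PowerWatch", 0,
                             0, 0, 0, 0, 0, 0, wc.hInstance, None)

# 0 == DEVICE_NOTIFY_WINDOW_HANDLE
ctypes.windll.user32.RegisterPowerSettingNotification(
    hwnd, ctypes.byref(GUID_MONITOR_POWER_ON), 0)
win32gui.PumpMessages()

Remember that the WM_SYSCOMMAND branch only fires while this window is the topmost one, whereas the power-setting notification does not depend on Z-order.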
References:
The Old New Thing : Fumbling around in the dark and stumbling across the wrong solution
Recursive hook ... - borland.public.delphi.nativeapi.win32 | Google Groups
Registering for Power Events (Windows)
How to detect when a program's taskbar icon starts flashing?
I want to make a script that detects when a taskbar icon flashes, and activates a program. I would like to use AutoIt or the Windows API.
Use the RegisterShellHookWindow API and listen for HSHELL_FLASH messages.
http://msdn.microsoft.com/en-us/library/windows/desktop/ms644989(v=vs.85).aspx
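For illustration, an untested pywin32/ctypes sketch of that approach; the hidden window and message pump are the usual boilerplate, and HSHELL_FLASH's value is HSHELL_REDRAW | HSHELL_HIGHBIT from WinUser.h:

import ctypes
import win32api, win32con, win32gui

SHELLHOOK = win32api.RegisterWindowMessage("SHELLHOOK")
HSHELL_FLASH = 0x8006  # HSHELL_REDRAW | HSHELL_HIGHBIT

def wndproc(hwnd, msg, wparam, lparam):
    if msg == SHELLHOOK and wparam == HSHELL_FLASH:
        # lParam is the handle of the flashing window.
        print("flashing:", win32gui.GetWindowText(lparam))
    return win32gui.DefWindowProc(hwnd, msg, wparam, lparam)

wc = win32gui.WNDCLASS()
wc.lpszClassName = "FlashWatch"
wc.lpfnWndProc = wndproc
wc.hInstance = win32api.GetModuleHandle(None)
win32gui.RegisterClass(wc)
hwnd = win32gui.CreateWindow(wc.lpszClassName, "FlashWatch", 0,
                             0, 0, 0, 0, 0, 0, wc.hInstance, None)

ctypes.windll.user32.RegisterShellHookWindow(hwnd)
win32gui.PumpMessages()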
To answer your question directly, there is no easy (documented and reliable) way to detect the flashing of a window. It occurs as a result of FlashWindow/FlashWindowEx. A very intrusive and heavy-handed option is to globally hook both APIs. You could do this by injecting a DLL into every user-mode application and installing a local hook/detour that notifies some central executable you own.
However, there is a greater underlying problem with what you are proposing, which makes it extremely undesirable. Imagine an application which constantly flashes when it does not have focus. Your app would set it to the foreground. What would happen if there were two such applications?
Using a WH_SHELL hook as Raymond suggests is not too difficult and is done by calling SetWindowsHookEx like so:
SetWindowsHookEx(WH_SHELL, hook_proc, NULL, dwThreadId);
This installs a shell hook with hook_proc as the HOOKPROC, where dwThreadId is the ID of the thread to associate the hook with. Since you mention that you already know which program you want to target, I'll assume you already have an HWND for that window. You need to obtain dwThreadId, which can be done like so:
DWORD dwThreadId = GetWindowThreadProcessId(hwnd, NULL);
The return value is the ID of the thread that created the HWND (the second parameter, if non-NULL, would receive the process ID instead). For the next step, I assume the hook procedure is in the current executable as opposed to a DLL. The hook procedure might be something like this:
LRESULT CALLBACK hook_proc(int nCode, WPARAM wParam, LPARAM lParam) {
    // For WH_SHELL hooks, nCode is the shell event. HSHELL_REDRAW with a
    // TRUE lParam means the window (whose handle is in wParam) is flashing.
    if (nCode == HSHELL_REDRAW && lParam) {
        SetForegroundWindow(hwnd); // assumes hwnd is a global holding the target window
    }
    return CallNextHookEx(NULL, nCode, wParam, lParam);
}
The code above has not been tested and might contain mistakes but should give you a general idea of what to do.
One important thing to note with window hooks is that SetWindowsHookEx must be called from a program with the same bitness as the target, i.e. if your target is 64-bit, the caller of SetWindowsHookEx must also be 64-bit. Also, after you are done, you should clean up by removing your hook with UnhookWindowsHookEx.
A python evdev device has a .grab() function that prevents other processes from getting input events on the device. Is there any way to limit this to specific events from a device?
For my example, if I .grab() a pen input device that has pressure sensitivity and tilt and 2 click buttons on the side, how would I 'grab' ONLY the 2 click buttons but let the rest of the input (the tip, pressure sensitivity and tilt) be caught by the rest of the system as normal?
One of my pen buttons is normally a right click mouse event. I want to make it do something else but it still pops up the right click menu so I'm trying to figure out how to stop that.
I tried doing the grab and ungrab when the event occurs. Like event > grab > do my stuff > ungrab. But that is obviously too late and the OS still pops up the menu.
I tried doing the full grab, then in the event loop if it is a button press do my stuff, otherwise create a UInput event injection by just passing the event back to the system. This was a bit of a tangled mess. Permissions are required. When I finally got past that, the movement was offset and the pressure/tilt wasn't working... I think it is something to do with the DigiMend driver that actually makes that stuff work and/or xinput settings I have to pass to calibrate the tablet. But I'm not interested in writing all the pressure/tilt functionality from scratch or anything like that, so I need the DigiMend stuff to work as normal. So I gave up on this idea for now.
The only other thought I had was to figure out why the OS defaults to the behavior it does and see if I can just manually disable the actions (i.e. why does it think that button is a right mouse click, and can I make it think that button is nothing instead?).
So I guess this is a 3-part question:
1. Can I achieve the grab functionality on select events instead of the device as a whole?
2. If the pass-through idea was better, is there a way to achieve it without having to do any permission modifications, and to pass the exact event through (i.e. without the offset and such that I experienced)?
3. If evdev does not have this ability, or it would be easier to do another way, like disabling the defaults for the pen in the OS somehow, I am open to suggestions. I am using Kubuntu 20.04 if that helps.
Any help would be appreciated, let me know if more info is needed, thanks in advance!
I ended up going with #3 and using xinput. Figured I'd put up this answer for now in case others come across this and want to do something similar.
The workaround was actually kind of simple. I just use xinput to remap the 2 buttons. So evdev doesn't have to grab at all. Just disable those buttons and everything goes normally except those, which I listen for with evdev.
xinput set-button-map {} 1 0 0 4 5 6 7
My device has 7 buttons, normally mapped 1-7, which are the mouse equivalents of left click, middle click, right click, etc.
Passing the device ID in for the {}, I just run that command with subprocess first. And voila, no more right-click menu, and I can use evdev to map the events to whatever I want.
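A hedged sketch of the whole workaround (the device ID, event path, and button codes below are assumptions; check xinput list and evdev.list_devices() on your own system):

import subprocess
import evdev
from evdev import ecodes

XINPUT_ID = "12"                   # from "xinput list" (assumed)
EVDEV_PATH = "/dev/input/event5"   # from evdev.list_devices() (assumed)

# Disable buttons 2 and 3 (middle/right click) at the X level, as above.
subprocess.run(["xinput", "set-button-map", XINPUT_ID,
                "1", "0", "0", "4", "5", "6", "7"], check=True)

# No grab(): everything except the disabled buttons flows through as normal,
# so the DigiMend pressure/tilt handling keeps working.
dev = evdev.InputDevice(EVDEV_PATH)
for event in dev.read_loop():
    if event.type == ecodes.EV_KEY and event.value == 1:  # button press
        if event.code == ecodes.BTN_STYLUS:     # lower pen button (assumed)
            print("custom action 1")
        elif event.code == ecodes.BTN_STYLUS2:  # upper pen button (assumed)
            print("custom action 2")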
We have a game at work where if you can send the security guy an email from someone's unlocked computer you get a prize. This Halloween I am setting a trap.
I have a simple program called systems-engage that starts a key listener and opens my inbox programmatically. When someone starts using the keyboard I want my program to launch a full-screen visual assault of horror film images with extremely loud screaming.
I can handle everything else mentioned, I just need a dead simple way to open a full screen window that can only be closed by an escape sequence I define in code.
I'm going for the lowest-hanging fruit here (Objective-C, C++, Java, Python, Ruby, JavaScript, hell, whatever gets the job done quick and dirty).
I read a primer on opening a full-screen window in Objective-C, but it can be closed really easily. The point of this prank is to shame my co-worker for invading my computer for at least 10 or 20 seconds, and I can't do that if he can just hit ⌘-Q.
Happy Halloween!
To get something like this with a Cocoa app, you can place the following code in your app delegate's - (void)applicationDidFinishLaunching: (or similar):
// Set the key equivalent of the "Quit" menu item to something other than ⌘-Q.
// In this case, ^-⌥-⌘-Q.
// !!! Verify this and make sure you remember it or else you're screwed. !!!
NSMenu *mainMenu = [NSApplication sharedApplication].mainMenu;
NSMenu *appMenu = [[mainMenu itemAtIndex:0] submenu];
NSMenuItem *quitItem = [appMenu itemWithTitle:@"Quit <Your App Name Here>"];
quitItem.keyEquivalentModifierMask = NSEventModifierFlagControl | NSEventModifierFlagOption | NSEventModifierFlagCommand;
quitItem.keyEquivalent = @"q";
// Enable "kiosk mode" -- when fullscreen, hide the dock and menu bar, and prevent the user from switching away from the app or force-quitting.
[NSApplication sharedApplication].presentationOptions = NSApplicationPresentationHideDock
| NSApplicationPresentationHideMenuBar
| NSApplicationPresentationDisableProcessSwitching
| NSApplicationPresentationDisableForceQuit
| NSApplicationPresentationDisableSessionTermination;
// Remove the window's close button, making it no longer close with ⌘-W.
self.window.styleMask = self.window.styleMask & ~NSWindowStyleMaskClosable;
// Make the window take up the whole screen and make it full-screen.
[self.window setFrame:[[NSScreen mainScreen] frame] display:YES];
[self.window toggleFullScreen:self];
This will make a "kiosk"-type app which can only be closed via the custom quit shortcut you set (or, you know, forcibly shutting down the computer). The presentation options prevent the user from accessing the menu bar, the Dock, app switching (via ⌘-Tab) or spaces, bringing up the force-quit window, or bringing up the shutdown/restart/sleep window. Basically, make sure you set up a keyboard shortcut that you remember to terminate the app; otherwise, you're going to be locked out of your machine short of forcibly powering it off. It's a total PITA.
Of course, some of these customizations can be done in Interface Builder too (setting the key equivalent of the "Quit" menu item is easier there, and you can turn off the window's close control as well), but I wanted to include this as code so it is more transparent (rather than uploading an Xcode project).
Happy Halloween! 😈
I'm helping to implement an experiment using PsychoPy on a Windows 8 tablet. It doesn't seem to be possible to get direct access to touch events through PsychoPy, or through the pyglet or PyGame interfaces.
Most other sources I've found have referred to using mouse move events in place of touch events. This works fine for recording position, but for recording time it doesn't work for us. We would like to collect the timing of the start of the touch, whereas the mouse event comes at the end of the touch.
Does anyone know of a way to do this, either in PsychoPy or by importing another library into the experiment?
Update: Logging ioHub mouse events, it looks like press and release mouse events are both sent at the end of the touch. This makes sense as this is the point at which the OS is sure that the touch is not a swipe. (Also, it will decide whether the touch is a left- or right-click depending on the duration of the touch).
I've managed this using a hook into the WndProc; it's not pretty, but it works. The solution, for posterity:
https://github.com/alisdt/pywmtouchhook
A brief summary:
I used a combination of ctypes and pywin32 (unfortunately neither alone could do the job) to register the target HWND to receive touch messages, and replace its WndProc, passing through all non-touch messages to the original WndProc.
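The core idea, as a very rough and untested ctypes-only sketch (the real repo also uses pywin32; hwnd is assumed to be the handle of the PsychoPy/pyglet window, and on_touch_start is a hypothetical callback where you would take the timestamp):

import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32

WM_TOUCH = 0x0240
GWLP_WNDPROC = -4

# 64-bit Python assumed; on 32-bit use SetWindowLongW and 32-bit LRESULT.
WNDPROC = ctypes.WINFUNCTYPE(ctypes.c_longlong, wintypes.HWND, ctypes.c_uint,
                             wintypes.WPARAM, wintypes.LPARAM)
user32.SetWindowLongPtrW.restype = ctypes.c_void_p  # returns the old WndProc
user32.CallWindowProcW.restype = ctypes.c_longlong

# Ask for raw WM_TOUCH messages instead of synthesized mouse events.
user32.RegisterTouchWindow(hwnd, 0)

def py_wndproc(h, msg, wparam, lparam):
    if msg == WM_TOUCH:
        on_touch_start()  # hypothetical: timestamp here, at touch-down
        user32.CloseTouchInputHandle(lparam)
        return 0
    # Pass every non-touch message to the window's original WndProc.
    return user32.CallWindowProcW(ctypes.c_void_p(old_proc), h, msg,
                                  wparam, lparam)

new_proc = WNDPROC(py_wndproc)  # keep a reference so it isn't garbage-collected
old_proc = user32.SetWindowLongPtrW(hwnd, GWLP_WNDPROC, new_proc)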
Hi, I am new to this whole coding thing and I was suggested to use Python. The version I have now is 2.7. I need help with making a transparent window (with the opacity turned down so that you can actually see through it), and I also want to know how to make a fairly thick outline of a rectangle in the colour red.
Help me please :S Thanks!
Unfortunately, there is no such easy thing as a "transparent window" - although modern window managers do have various opacity controls, those just affect the window as a whole and do not integrate at all with the program running "inside" the window. For some of them there may even be a way to call functions to explicitly set the opacity level of a given window, but I don't think it will be possible for all of them.
That said, it is possible to get hold of the "root" window and draw directly on the screen, bypassing the window manager. There are APIs for that at least on Windows and Linux (you have to mention which operating system you need this working on), but it will amount to a non-trivial piece of research work, since this is not what is expected of a "well-behaved app" - which is what the GUI toolkits are written and documented for. You will need to write Xlib code on Linux, and call Win32 APIs directly on Windows; both are possible from Python, and about as under-documented as they are possible.
And once you get to draw the rectangle, since you are bypassing the window manager, you will have to take care of every low-level detail of your app yourself: mouse event handling, screen redrawing (and notifying the system when your drawing affects other windows), and so on.
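If whole-window opacity (the first option above) turns out to be enough, though, Tkinter can get close in a few lines. A hedged sketch: the "-alpha" attribute works on Windows and macOS and needs a compositing window manager on Linux, and on Python 2.7 the module is spelled Tkinter:

import tkinter as tk  # on Python 2.7: import Tkinter as tk

root = tk.Tk()
root.attributes("-alpha", 0.5)  # 0.0 invisible .. 1.0 opaque; whole window, widgets included

canvas = tk.Canvas(root, width=400, height=300, highlightthickness=0)
canvas.pack(fill="both", expand=True)
canvas.create_rectangle(20, 20, 380, 280, outline="red", width=10)  # fairly thick red outline

root.mainloop()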