Take a screenshot that includes the cursor - python

I am trying to make a program with pyautogui that does different things depending on how the cursor looks (for example, the cursor looks different when you are resizing a window or something like that). However, when taking a screenshot with pyautogui.screenshot(), the cursor itself isn't included in the image. Is there any way to take a screenshot with Python that will include the cursor? I've attached a photo of what I mean.

There is no way (that I know of, anyway) you can include the cursor in the screenshot using pyautogui.
But there are two hacky workarounds:
You could press the hotkeys for taking a screenshot, i.e., just make the system take the screenshot, and grab the image that way. On Windows it is Win + PrtScn, as far as I remember.
You could get the position of the mouse at the moment you take the screenshot, get an image of a cursor from the net, and overlay that cursor onto the screenshot taken by pyautogui using PIL or any other library you like (a sketch of this follows below).
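For the second workaround, here is a minimal sketch (the file name cursor.png is a placeholder for whatever cursor image you downloaded):

import pyautogui
from PIL import Image

# Record the cursor position and grab the screen at (roughly) the same moment
x, y = pyautogui.position()
screenshot = pyautogui.screenshot()

# 'cursor.png' is a placeholder for a cursor image you saved beforehand,
# ideally with a transparent background so it can act as its own paste mask
cursor = Image.open('cursor.png').convert('RGBA')
screenshot.paste(cursor, (x, y), cursor)
screenshot.save('screenshot_with_cursor.png')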

Related

Colab - Box for input() in python code too long so I need to scroll back to read the question

I plan to use Colab notebooks to teach my pupils (12-13 year olds) Python. We will start by using input() to ask simple questions and print() to display the answers.
One problem I have come across is that when I use input() in a code cell with a prompt string, a really long input box is produced (in the output area) to get the input value. It is so long that the window scrolls to the end of this empty input box, and you have to scroll back to read the question, even when the window is at full screen width. This is really strange, as the input box is completely empty and does not need to be bigger than the window or frame.
Are there settings I can change to prevent this or can I do something with the css to reduce the size of this input box?
This may seem a trivial problem, but little things like this can be distracting and add to the frustration of learning the language so I would like to prevent it if I can.
I have attached a picture that shows the problem.
I looked at the problem you are trying to solve. Colab fits inside my window perfectly when I size the window to full screen. I think this might be a problem with your browser. Are you using the latest version of Firefox or Chrome?
I just encountered the same problem. For me, it was because I was plotting a graph using the matplotlib.pyplot library before prompting for the input. After I removed that, it worked just fine.
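A minimal repro of the situation described above (my own sketch, assuming a Colab code cell):

import matplotlib.pyplot as plt

plt.plot([1, 2, 3])
plt.show()  # plotting before the prompt is what triggered the oversized input box
name = input("What is your name? ")
print("Hello,", name)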

Is there a way to detect the position of the upper-left (or right) pixel of a program in python?

I'm working on a project in Python that's meant to click specific locations in another program's window. Currently I take a screenshot, detect the button in that screenshot with OpenCV, and then click there. Scanning the whole screenshot every frame for this button feels like quite a bit of work. The button always shows up in the same spot relative to the program window; the whole process is just meant to account for the window not being in exactly the same location. Is there a way to detect the position of a program window without taking a screenshot and sifting through it? Something like program.getPosition, giving an X, Y position of, say, its upper-left pixel. From there I could add the offsets I need to reach the button. Does such a thing exist?
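One possible approach (a sketch of my own, assuming a Windows system with the pywin32 package installed; the window title below is just an example):

import win32gui

# 'Untitled - Notepad' is only an example; substitute the target program's window title
hwnd = win32gui.FindWindow(None, 'Untitled - Notepad')
if hwnd:
    left, top, right, bottom = win32gui.GetWindowRect(hwnd)
    print('Upper-left corner of the window:', left, top)
else:
    print('Window not found')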

Python image recognition with pyautogui

When I try to recognize an image with pyautogui it just says: None
import pyautogui
s = pyautogui.locateOnScreen('Dark.png')
print s
When I ran this code the picture was on my screen but it still failed.
pyautogui.locateOnScreen() has a parameter that specifies the 'confidence' you have in the image you pass in.
This way, pyautogui will tolerate slight pixel deviations. (Note that the confidence keyword requires OpenCV to be installed.)
For example:
import pyautogui
s = pyautogui.locateOnScreen('Dark.png', confidence=0.9)
print(s)
For more information, see https://buildmedia.readthedocs.org/media/pdf/pyautogui/latest/pyautogui.pdf.
By default, matching is pixel-perfect: the image can't be found unless it is a 100% match.
For example, I cropped an area with an Opera extension, then ran my script with Firefox, and pyautogui did not recognize it.
Don't let your image get resized or compressed by screen-capture software or extensions.
Use the same window/screen (size and resolution) as when you saved the reference screenshot.
On my system, I get this if the picture is on a second monitor. If I move it to the main screen, the image is located successfully.
It looks like multiple-monitor functionality is not yet implemented:
From http://pyautogui.readthedocs.org/en/latest/roadmap.html
Future features planned (specific versions not planned yet):
Find a list of all windows and their captions.
Click coordinates relative to a window, instead of the entire screen.
Make it easier to work on systems with multiple monitors.
...

pyautogui.locateCenterOnScreen() returns None instead of coordinates

import pyautogui
print(pyautogui.locateCenterOnScreen(r"C:\Users\Venkatesh_J\PycharmProjects\mouse_event\mouse_event.png"))
Instead of returning coordinates, it returns None.
My problem was solved when I took the reference screenshot with pyautogui's built-in function rather than with Win + PrtScn, because a screenshot taken with Win + PrtScn may have a different pixel density and other image properties than one taken by pyautogui.
Maybe this will work for you; for me it did.
For example, for wifi.png I first took a full screenshot with pyautogui, cropped the icon from that full image, and then used it in my code as shown below:
import pyautogui
print(pyautogui.locateCenterOnScreen('wifi.png'))
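Here is a sketch of that workflow (the crop box coordinates are placeholders; adjust them to where the icon sits on your screen):

import pyautogui

# Take the reference screenshot with pyautogui itself so the pixel data matches,
# then crop out the region of interest and save it for later matching
full = pyautogui.screenshot()
wifi_icon = full.crop((1800, 1040, 1830, 1070))  # (left, upper, right, lower) - placeholder values
wifi_icon.save('wifi.png')

print(pyautogui.locateCenterOnScreen('wifi.png'))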
Seems like it couldn't find anything matching your image on the screen.
locateCenterOnScreen(image, grayscale=False) - Returns (x, y) coordinates of the center of the first found instance of the image on the screen. Returns None if not found on the screen.
The initial problem is quite simple - the library does not find the image passed represented on the screen and therefore returns None rather than the co-ordinates as it says it will in the docs.
However, there is a possible misunderstanding here, in particular from a user who posted a bounty on the question and posed a similar question. A comment was made:
"The pictures are on my desktop"
When you use this function, you pass in a filename as a string. The library then loads the image file and looks for that picture on the screen (not for the filename). pyautogui.locateCenterOnScreen() looks for the actual image if it is visible on the screen; it does not look for files on the desktop, or for file icons with the same name as the image passed to it.
Example
Say you have a file with the name flower.jpg containing the following image, saved on your desktop.
With no other windows open, run:
coords = pyautogui.locateCenterOnScreen('C:\\Richard\\Users\\flower.jpg')
print(coords)
The result is None
This is because that image is not displayed on my screen even though an icon is on the desktop, with the name flower.jpg. This is true even if that icon is a small scale version of the flower.
However, if I leave the image itself visible on the screen (as it was while I prepared this post) and run the same code, I do get coordinates.
Because the actual image is displayed on the screen, the library finds it, returning coordinates such as (524, 621).
In summary, if the library doesn't find the image displayed to the user on the screen, it will return None. Note that the image has to be visible to the user at the point at which the code is running. It won't find the icon on your desktop, or similar, or an image in a window that is hidden behind another. Is that what you're trying to do?
Are you sure that the image is the same size as the icon on your screen?
If not, pyautogui.locateCenterOnScreen() will return None, and code that tries to unpack or iterate over the result will raise TypeError: 'NoneType' object is not iterable.
Also make sure that the full icon is visible and looks the same as the image "C:\Users\Venkatesh_J\PycharmProjects\mouse_event\mouse_event.png".
Hope the problem is solved!
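A small defensive check along these lines (my own sketch, not part of the answer above) surfaces the "not found" case instead of hitting that unpacking error:

import pyautogui

result = pyautogui.locateCenterOnScreen('mouse_event.png')
if result is None:
    print('Image not found on the screen')
else:
    x, y = result  # this unpacking is what raises TypeError when result is None
    pyautogui.click(x, y)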
Building off of what Don Kirby said, no matching image was found on the screen. You could open the image in, for example, Windows Photo Gallery (or Tk), and then pyautogui would find it.
Good explanation, but is there any library that works better than pyautogui? I mean, it wants exactly the same picture on the screen; sometimes we only need a similar one. – GLHF May 11 '16 at 15:45
Try using this code line:
pyautogui.locateCenterOnScreen("yourscreenshot.PNG", confidence=0.9)
I believe confidence ranges from 0.1 to 0.9.
Unless you have several pictures looking almost alike, this might solve the exception.
If that doesn't work, try making a second screenshot with more/less of the original image and use this code:
try:
    pyautogui.locateCenterOnScreen("yourscreenshot.PNG", confidence=0.9)
except TypeError:
    pyautogui.locateCenterOnScreen("yourscreenshot2.PNG", confidence=0.9)
This will give it a second try with a slightly different picture, and hopefully not return a TypeError.
If you can't use pyautogui.locateCenterOnScreen() because of an image problem, try using the Snipping Tool (if you are on Windows) to take the reference screenshot. It works.
Also make sure that you have installed the "Pillow" module.
Try this:
pip install opencv-contrib-python
It confused me a lot that running the same code:
coords = pyautogui.locateCenterOnScreen('C:\\test.jpg')
in two different virtual environments (X and Y, almost identical) returned None in one and Point(x=1543, y=461) in the other.
I read Aleks's answer and guessed that the confidence parameter is applied implicitly when opencv-contrib-python is present in the current environment (which Y had but X didn't).
I didn't dig in further; I just installed opencv-contrib-python in virtual environment X and that solved my problem.
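A quick way to check whether this applies in a given environment (a small sketch of my own; the confidence keyword only works when OpenCV is importable):

try:
    import cv2
    print('OpenCV available:', cv2.__version__)
except ImportError:
    print('OpenCV not installed; confidence-based matching is unavailable here')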

Python PIL-ImageGrab inaccurate when capturing game pixels

I am trying to capture the pixels of a game to script a bot. I have a simple function:
from PIL import ImageGrab
import win32gui

def printPixel():
    while True:
        flags, hcursor, (x, y) = win32gui.GetCursorInfo()
        print(x, y, ':', ImageGrab.grab().getpixel((x, y)))
This prints the current x,y coords and the RGB value of that pixel. This works as expected on my desktop hovering over various icons and such, but the same function does not work in-game. Any thoughts?
Edit: When I save the image to a file and perform this same operation on the saved image, it works perfectly in-game. However, it is way slower; I'd like to operate on the image in memory, not from a file.
Video games often talk to the graphics system directly for performance reasons, so some of the typical Windows APIs might not work on them. Try taking a screenshot by pressing the Print Screen button. If that captures your game, then you can take a screenshot in Python and inspect the captured image, taking the cursor position into account.
To take a screenshot on Windows you can check out this answer to the question Fastest way to take a screenshot with python on windows; it uses the win32gui library, which you are already using. A sketch of that approach is below.
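A minimal sketch of that BitBlt-style capture (my own illustration, assuming the pywin32 package providing win32gui, win32ui, win32con, and win32api is installed):

import win32api
import win32con
import win32gui
import win32ui
from PIL import Image

def grab_screen():
    # Size of the primary monitor
    width = win32api.GetSystemMetrics(win32con.SM_CXSCREEN)
    height = win32api.GetSystemMetrics(win32con.SM_CYSCREEN)

    # Device contexts for the desktop window and an in-memory copy
    hdesktop = win32gui.GetDesktopWindow()
    desktop_dc = win32gui.GetWindowDC(hdesktop)
    img_dc = win32ui.CreateDCFromHandle(desktop_dc)
    mem_dc = img_dc.CreateCompatibleDC()

    # Bitmap that will receive the screen pixels
    bmp = win32ui.CreateBitmap()
    bmp.CreateCompatibleBitmap(img_dc, width, height)
    mem_dc.SelectObject(bmp)

    # Copy the screen contents into the bitmap
    mem_dc.BitBlt((0, 0), (width, height), img_dc, (0, 0), win32con.SRCCOPY)

    # Convert to a PIL image so getpixel() can be used as before
    info = bmp.GetInfo()
    data = bmp.GetBitmapBits(True)
    img = Image.frombuffer('RGB', (info['bmWidth'], info['bmHeight']),
                           data, 'raw', 'BGRX', 0, 1)

    # Release GDI resources
    mem_dc.DeleteDC()
    img_dc.DeleteDC()
    win32gui.DeleteObject(bmp.GetHandle())
    win32gui.ReleaseDC(hdesktop, desktop_dc)
    return img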
