When I try to recognize an image with pyautogui it just says: None
import pyautogui
s = pyautogui.locateOnScreen('Dark.png')
print s
When I ran this code the picture was on my screen but it still failed.
pyautogui.locateOnScreen has a confidence parameter that specifies how exact a match you require for the image you pass in.
With a lower confidence, pyautogui will tolerate slight pixel deviations. Note that the confidence argument requires OpenCV to be installed (for example via pip install opencv-python).
For example:
import pyautogui
s = pyautogui.locateOnScreen('Dark.png', confidence=0.9)
print(s)
For more information, see https://buildmedia.readthedocs.org/media/pdf/pyautogui/latest/pyautogui.pdf.
The match is pixel-perfect: it can't find the image unless it is a 100% match.
For example, I cropped an area with an Opera extension, then ran my script against Firefox, and pyautogui did not recognize it.
Don't let your image get resized or compressed by screen capture software or extensions.
Use the same window/screen (size, resolution) as where you saved your screenshot.
On my system, I get None if the picture is on a second monitor. If I move it to the main screen, the image is located successfully.
It looks like multiple-monitor functionality is not yet implemented:
From http://pyautogui.readthedocs.org/en/latest/roadmap.html
Future features planned (specific versions not planned yet):
Find a list of all windows and their captions.
Click coordinates relative to a window, instead of the entire screen.
Make it easier to work on systems with multiple monitors.
...
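Until that lands, one possible workaround (a sketch only, assuming the mss package is installed and that the template was captured at the second monitor's native scale) is to grab the second monitor yourself and search the capture with pyautogui.locate():

# Sketch: grab a second monitor with mss and search the capture with
# pyautogui.locate(). Requires `pip install mss`; 'Dark.png' is the template
# from the question above.
import mss
import pyautogui
from PIL import Image

with mss.mss() as sct:
    monitor = sct.monitors[2]  # monitors[0] is the whole virtual screen; 1, 2, ... are individual monitors
    shot = sct.grab(monitor)
    haystack = Image.frombytes('RGB', shot.size, shot.bgra, 'raw', 'BGRX')

box = pyautogui.locate('Dark.png', haystack)  # a Box, or None if not found (newer versions raise ImageNotFoundException)
print(box)  # coordinates are relative to the grabbed monitor, not the virtual screen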
Related
I need to take a screenshot of my entire screen for some automated tests I need to perform.
I was able to do this using driver.get_screenshot_as_file, but the problem is that it only captures the web page itself. I need the whole picture of the browser, since the data I need to check is in the devtools.
Thanks!
You can use the pyautogui package to take a screenshot of the desktop at the OS level. This captures the entire desktop rather than just the web page.
import pyautogui
pyautogui.screenshot().save('screenshot.png')
Another alternative to pyautogui would be PIL's ImageGrab. The advantage is that you are able to specify a bounding box:
from PIL import ImageGrab
image = ImageGrab.grab(bbox=None) # bbox=None gives you the whole screen
image.save("your_browser.png")
# for later cv2 use:
import numpy
import cv2
image_cv2 = cv2.cvtColor(numpy.array(image), cv2.COLOR_RGB2BGR)  # PIL gives RGB, OpenCV expects BGR
This also makes it possible to adapt to your browser's window size and only capture its specific window. You can get your browser's bounding box as shown in this answer: https://stackoverflow.com/a/3260811/20161430.
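For the devtools case above, a rough sketch of building that bounding box from Selenium's own window geometry instead (the browser choice and URL are placeholders; the browser window must be in the foreground, since ImageGrab captures the screen rather than the window contents):

# Sketch: capture only the browser window by building a bbox from Selenium's
# reported window position and size. Browser and URL are placeholders.
from selenium import webdriver
from PIL import ImageGrab

driver = webdriver.Firefox()          # or webdriver.Chrome()
driver.get('https://example.com')

pos = driver.get_window_position()    # {'x': ..., 'y': ...}
size = driver.get_window_size()       # {'width': ..., 'height': ...}
bbox = (pos['x'], pos['y'], pos['x'] + size['width'], pos['y'] + size['height'])

ImageGrab.grab(bbox=bbox).save('browser_window.png')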
From a speed perspective, it doesn't seem to make much of a difference whether you are using pyautogui or PIL.
I open up the Calculator in Windows. I use the Snipping Tool to copy an image of the number 7 button, paste the image into Paint, and save it as a PNG file in a directory on my desktop.
I then open the calculator and use this code to locate where the image is on the screen. However, the code returns a blank space when it should return the position of the image on the screen. The first time I ran it, it gave me coordinates, but the second time it just showed me a blank space, and I have been trying to figure out why. I kept doing it over and over, re-copied and re-saved the image, reran the code, and it's still the same result: blank. I was wondering what could be the reason.
>>> import pyautogui
>>> pyautogui.locateOnScreen('C:\\Users\\js\\Desktop\\jsPython\\seven2.png')
Maybe you should check your path string. For example, this code runs fine:
import pyautogui
print(pyautogui.locateOnScreen(r"C:\Python27\source\pyautogui\images\startIcon.png"))
I think you've made a typo in your path string.
An even better solution is to use an absolute path. For example:
import pyautogui, os
print(pyautogui.locateOnScreen(os.path.abspath(r"images\startIcon.png")))
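To rule out a path problem before suspecting the matching itself, a small sanity-check sketch (reusing the path from the question above):

# Sketch: confirm the template file actually exists before calling locateOnScreen,
# so a bad path is not mistaken for "image not found on screen".
import os
import pyautogui

path = r'C:\Users\js\Desktop\jsPython\seven2.png'  # raw string avoids backslash escapes
if not os.path.isfile(path):
    raise FileNotFoundError(path)

print(pyautogui.locateOnScreen(path))

If the file check passes but locateOnScreen still returns None, the problem is the on-screen match rather than the path.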
import pyautogui
print(pyautogui.locateCenterOnScreen(r"C:\Users\Venkatesh_J\PycharmProjects\mouse_event\mouse_event.png"))
Instead of returning coordinates, it returns None.
My problem was solved when I took the screenshot with pyautogui's built-in function rather than with WIN+PrintScreen, because a screenshot taken with WIN+PrintScreen may have a different pixel density and other image properties than one taken by pyautogui's own function.
Maybe this will work for you; it worked for me.
For example, for wifi.png I first took a full screenshot with pyautogui, cropped the icon out of that full image, and then used it in the code shown below:
import pyautogui
print(pyautogui.locateCenterOnScreen('wifi.png'))
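A sketch of that workflow, capturing the reference image with pyautogui itself so the template has exactly the same pixel data as later screenshots (the region values here are hypothetical and need to be adjusted to where your wifi icon actually sits):

# Sketch: build the template with pyautogui.screenshot() instead of WIN+PrintScreen,
# so the saved pixels match what locateCenterOnScreen() will later compare against.
# The region (left, top, width, height) is a hypothetical example.
import pyautogui

pyautogui.screenshot('wifi.png', region=(1820, 1050, 40, 30))

# Later, in the actual script:
print(pyautogui.locateCenterOnScreen('wifi.png'))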
Seems like it couldn't find anything matching your image on the screen.
locateCenterOnScreen(image, grayscale=False) - Returns (x, y) coordinates of the center of the first found instance of the image on the screen. Returns None if not found on the screen.
The initial problem is quite simple - the library does not find the image passed represented on the screen and therefore returns None rather than the co-ordinates as it says it will in the docs.
However, there is a possible misunderstanding here, in particular from a user who posted a bounty on the question and posed a similar question here. A comment was made:
"The pictures are on my desktop"
When you use this function, you pass in a filename as a string. The library then loads that image file and looks for the picture itself on the screen (not the filename). pyautogui.locateCenterOnScreen() will find the actual image only if it is visible on the screen. It does not look for files on the desktop, or for file icons with the same name as the image passed to it.
Example
Say you have a file named flower.jpg, containing a picture of a flower, saved on your desktop.
With no other windows open, run:
coords = pyautogui.locateCenterOnScreen('C:\\Richard\\Users\\flower.jpg')
print(coords)
The result is None
This is because that image is not displayed on my screen, even though an icon named flower.jpg is on the desktop. This is true even if that icon is a small-scale preview of the flower.
However, if I leave the image itself visible (as it was while I prepared this post) and do the same thing, I get coordinates. Because the actual image is on the screen, the library finds it, at coordinates 524, 621 in this case.
In summary if the library doesn't find the image displayed to the user on the screen, it will return None. Note the image has to be visible to the user at the point at which the code is running. It won't find the icon on your desktop, or similar, or the image in a window that is "hidden" behind another. Is that what you're trying to do?
Are you sure that the image is of the same size as of the icon?
If not, pyautogui.locateCenterOnScreen() will return None, and trying to unpack that result raises TypeError: 'NoneType' object is not iterable.
Also make sure that the full icon is visible and looks the same as the image "C:\Users\Venkatesh_J\PycharmProjects\mouse_event\mouse_event.png".
Hope the problem is solved!
Building off of what Don Kirby said, no matching image was found on the screen. You could open the image in, for example, Windows Photo Gallery (or Tk), and then pyautogui would find it.
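A small sketch of that idea: display the file first, give the viewer a moment to appear, and only then call locateCenterOnScreen(). The viewer has to show the image at its original size, otherwise the match can still fail:

# Sketch: the template is only found if it is actually rendered on screen,
# so open it in a viewer first. Image.show() hands the file to the default
# viewer; if that viewer scales the image, the match can still fail.
import time
import pyautogui
from PIL import Image

path = 'flower.jpg'        # the example file from the answer above
Image.open(path).show()    # opens the image in the default viewer
time.sleep(2)              # give the viewer window time to appear

print(pyautogui.locateCenterOnScreen(path))

If the viewer does rescale the image, a lower confidence value, as suggested in the next answer, can sometimes still find it.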
Good explanation. Is there any library that works better than pyautogui? I mean, it wants exactly the same picture on the screen, and sometimes we need a similar one. – GLHF May 11 '16 at 15:45
Try using this code line:
pyautogui.locateCenterOnScreen("yourscreenshot.PNG", confidence=0.9)
I believe confidence ranges from 0.1 to 0.9.
Unless you have several pictures looking almost alike, this might solve the exception.
If that doesn't work try making a second screenshot with more/less of the original image and write this code:
try:
    pyautogui.locateCenterOnScreen("yourscreenshot.PNG", confidence=0.9)
except TypeError:
    pyautogui.locateCenterOnScreen("yourscreenshot2.PNG", confidence=0.9)
This will give it a second try with a slightly different picture, and hopefully not return a TypeError.
If you can't use pyautogui.locateCenterOnScreen() because of an image problem, try using the Snipping Tool (if you are on Windows) to take the screenshots. It works.
Also make sure that you have installed the "Pillow" module.
Try this :
pip install opencv-contrib-python
It confused me a lot that running the same code:
coords = pyautogui.locateCenterOnScreen('C:\\test.jpg')
in two almost identical virtual environments (X and Y) returned None in one and Point(x=1543, y=461) in the other.
I read Aleks's answer and guessed that the confidence parameter is used implicitly when opencv-contrib-python is present in the current environment (which Y had but X hadn't).
I didn't dig in further; I just installed opencv-contrib-python in virtual environment X, and that solved my problem.
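A quick way to check which environment actually has OpenCV available (a sketch; it just attempts the import):

# Sketch: check whether OpenCV is importable in the current virtual environment,
# since pyautogui's confidence-based matching depends on it.
try:
    import cv2
    print('OpenCV available:', cv2.__version__)
except ImportError:
    print('OpenCV not installed in this environment')

If cv2 imports in one environment but not the other, that would explain the different results.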
I am trying to capture the pixels of a game to script a bot. I have a simple function:
import win32gui
from PIL import ImageGrab

def printPixel():
    while True:
        flags, hcursor, (x, y) = win32gui.GetCursorInfo()
        print x, y, ':', ImageGrab.grab().getpixel((x, y))
This prints the current x,y coords and the RGB value of that pixel. This works as expected on my desktop hovering over various icons and such, but the same function does not work in-game. Any thoughts?
edit: When I save the image to a file and perform this same operation on the saved image, it works perfectly in-game. However, it is way slower. I'd like to operate on the image in memory, and not from a file.
Video games often deal with the graphics system directly for performance reasons, so some of the typical Windows APIs might not work on them. Try taking a screenshot by pressing the Print Screen button. If that captures your screen, then you can take a screenshot in Python and check the image you have captured, taking the cursor position into account.
To take a screenshot on Windows you can check out this answer to the question Fastest way to take a screenshot with python on windows; it uses the win32gui library, as you are already doing.
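A sketch of the GDI/BitBlt capture that the linked answer describes, using pywin32 (untested against your particular game; some fullscreen games render past GDI and will still come out black):

# Sketch of a GDI/BitBlt screen capture with pywin32, based on the approach in
# the linked answer. Some fullscreen games bypass GDI, so this may still
# return a black frame for them.
import win32con
import win32gui
import win32ui
from PIL import Image

def grab_desktop():
    hdesktop = win32gui.GetDesktopWindow()
    left, top, right, bottom = win32gui.GetWindowRect(hdesktop)
    width, height = right - left, bottom - top

    desktop_dc = win32gui.GetWindowDC(hdesktop)
    img_dc = win32ui.CreateDCFromHandle(desktop_dc)
    mem_dc = img_dc.CreateCompatibleDC()

    bmp = win32ui.CreateBitmap()
    bmp.CreateCompatibleBitmap(img_dc, width, height)
    mem_dc.SelectObject(bmp)
    mem_dc.BitBlt((0, 0), (width, height), img_dc, (0, 0), win32con.SRCCOPY)

    info = bmp.GetInfo()
    data = bmp.GetBitmapBits(True)
    image = Image.frombuffer('RGB', (info['bmWidth'], info['bmHeight']),
                             data, 'raw', 'BGRX', 0, 1)

    # release GDI resources
    mem_dc.DeleteDC()
    img_dc.DeleteDC()
    win32gui.ReleaseDC(hdesktop, desktop_dc)
    win32gui.DeleteObject(bmp.GetHandle())
    return image

print(grab_desktop().getpixel((100, 100)))

If this still returns black frames in-game, the game is likely rendering through DirectX/OpenGL, and a capture at that layer would be needed.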
I have a Python script that displays images fullscreen on a BeagleBoard with the GUI disabled. The script is started when the board boots. For this I use PyGame, which works perfectly fine, except that for some reason the image quality is degraded. Because the images are stored in high quality, I assume that PyGame resamples the image. I was unable to find out where this can be changed, so I decided to replace PyGame; it also seems a bit heavyweight for "just" displaying an image.
I have the code below to display the image. According to the documentation, the default image viewer will show the image (which is supposed to be XV). But as soon as I run the code below, where image is a file path, I get "sh: xv: not found".
from PIL import Image
im = Image.open(image)
im.show()
So I tried to install the XV package but can't find how to install it for Angstrom.
My question could either be "How do I display images fullscreen with Python?" (to which the answer was supposed to be the code above), or "How can I get XV installed on Angstrom?" (what is the package name for opkg install?).
I did search, but haven't found something that works...
Image.show() in PIL is more intended for debugging than actual, production use. It is hardcoded to call xv <temp-image-file-pil-creates>. You can hack around this (make a symbolic link called xv that will call some other image viewer), but it's still a rather bad way to go about it.
I don't know enough about the BeagleBoard to tell you the best/canonical way to display an image fullscreen, but if you got halfway there with PyGame, perhaps you can post your code and the community can help you fix the quality problem.
If the image is getting downscaled to fit the screen, you might look into using transform.smoothscale to scale the image manually (to avoid losing quality).
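For reference, a minimal sketch of displaying a single image fullscreen with pygame and transform.smoothscale (the file path is a placeholder, and aspect-ratio handling is left out for brevity):

# Minimal sketch: show one image fullscreen with pygame, using
# transform.smoothscale for a higher-quality resize. 'picture.jpg' is a
# placeholder path; aspect-ratio preservation is omitted for brevity.
import pygame

pygame.init()
screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)

image = pygame.image.load('picture.jpg').convert()
image = pygame.transform.smoothscale(image, screen.get_size())

screen.blit(image, (0, 0))
pygame.display.flip()

# keep the image up until a key is pressed or the window is closed
running = True
while running:
    for event in pygame.event.get():
        if event.type in (pygame.QUIT, pygame.KEYDOWN):
            running = False
pygame.quit()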