I'm hoping this is a simple issue and I'm just missing something. I have the same script saved in two different locations: on our shared server and locally on my desktop. When I run the script from the server I get what appears to be an easygui error.
Traceback (most recent call last):
  File "Z:\Python\module1.py", line 35, in <module>
    reply = buttonbox(msg=msg,image=IMG)
TypeError: buttonbox() got an unexpected keyword argument 'msg'
This I can get around: for some reason the version saved on the server requires message, while the version saved on my desktop requires msg. That is OK, since it at least works. What breaks this for me is the image feature. It works in the version on my desktop, but I have no idea how to get it to work in the version on our server. Full code shown below:
import PIL
from PIL import Image
import os
from easygui import *
import sys

print sys.version, sys.version_info

WORKDIR = "c:\\temp"
DESKTOP = 'c:' + os.environ['HOMEPATH'] + "\\Desktop"
os.chdir(DESKTOP)
IMAGES = os.listdir(DESKTOP + "\\New Items Images")

for IMAGE in IMAGES:
    path = DESKTOP + "\\New Items Images\\" + IMAGE
    # Open the image with PIL and scale its longest side down to 600 pixels
    img = Image.open(path)
    width, height = img.size
    if width >= height:
        basewidth = 600
        wpercent = basewidth / float(img.size[0])
        hsize = int(float(img.size[1]) * wpercent)
        img = img.resize((basewidth, hsize), PIL.Image.ANTIALIAS)
    else:
        baseheight = 600
        hpercent = baseheight / float(img.size[1])
        wsize = int(float(img.size[0]) * hpercent)
        # the original resized with basewidth/hsize here, which are unbound
        # for portrait images; use the portrait dimensions instead
        img = img.resize((wsize, baseheight), PIL.Image.ANTIALIAS)
    img.save(DESKTOP + "\\" + IMAGE)
    IMG = DESKTOP + "\\" + IMAGE
    SKU = "sku"
    msg = "Is %s acceptable?\n%s\n%sx%s" % (IMAGE, SKU, width, height)
    reply = buttonbox(msg=msg, image=IMG)
    # the original tested `if ynbox == 1`, which compares the easygui function
    # object itself and is always False; test the user's reply instead
    if reply:
        print "This would now get pushed to CA"
Generally speaking, I know this is probably ugly code. That aside, the end goal here is to open an image, display it to the user, and then delete all the created images from the desktop. Any advice or help would be greatly appreciated.
Perhaps it's the way it's imported, and it's picking up buttonbox from another library and not recognising the parameters.
Does this work?
import easygui
easygui.buttonbox(msg=msg, image=IMG)
Or does a plain positional call like this work?
import easygui
easygui.buttonbox('Click on your favorite flavor.', 'Favorite Flavor', ('Chocolate', 'Vanilla', 'Strawberry'))
If neither of those works, you might want to check the version of easygui and the related documentation.
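For example, a quick way to see which easygui each machine is actually importing (the __version__ attribute may be missing on very old releases, hence the getattr):
import easygui
# Show where the module was loaded from and, if available, its version
print(easygui.__file__)
print(getattr(easygui, '__version__', 'unknown'))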
Or perhaps it's a conflict between the class ButtonBox and the instantiation function buttonbox, which are named the same except that one doesn't use kwargs, and for some reason there is an issue there.
https://github.com/robertlugg/easygui/blob/master/easygui/boxes/button_box.py#L110
Try instantiating a class version like this:
bb = ButtonBox("message", "title", ('Chocolate', 'Vanilla'), None, None, None, None)
reply = bb.run()
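If the two machines really do have different easygui versions installed, a small compatibility shim is another option. This is only a sketch, assuming the only difference is the msg/message rename (show_button_box is a made-up helper name):
import inspect
import easygui

def show_button_box(text, image=None):
    # Pick whichever keyword the installed buttonbox actually accepts
    # (inspect.getargspec works on Python 2; use inspect.signature on Python 3)
    arg_names = inspect.getargspec(easygui.buttonbox).args
    keyword = 'msg' if 'msg' in arg_names else 'message'
    return easygui.buttonbox(image=image, **{keyword: text})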
I'm currently trying to write a ROS publisher/subscriber setup that passes image binary opened by PIL. I'd like to avoid OpenCV due to operating restrictions, and I was wondering if there is a way to do so. This is my current code:
#!/usr/bin/env python
import rospy
from PIL import Image
from sensor_msgs.msg import Image as sensorImage
from rospy.numpy_msg import numpy_msg
import numpy

def talker():
    pub = rospy.Publisher('image_stream', numpy_msg(sensorImage), queue_size=10)
    rospy.init_node('image_publisher', anonymous=False)
    rate = rospy.Rate(0.5)
    while not rospy.is_shutdown():
        im = numpy.array(Image.open('test.jpg'))
        pub.publish(im)
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass
which, on the pub.publish(im) attempt, throws:
TypeError: Invalid number of arguments, args should be ['header', 'height', 'width', 'encoding', 'is_bigendian', 'step', 'data'] args are (array([[[***array data here***]]], dtype=uint8),)
How would I transform the image into the right form, or is there a conversion method/different message type that supports just sending raw binary over the ROS connection?
Thanks
Indeed Mark Setchell's answer works perfectly (ignoring the alpha channel in this example):
#!/usr/bin/env python
import rospy
import urllib2  # for downloading an example image
from PIL import Image
from sensor_msgs.msg import Image as SensorImage
import numpy as np

if __name__ == '__main__':
    pub = rospy.Publisher('/image', SensorImage, queue_size=10)
    rospy.init_node('image_publisher')
    im = Image.open(urllib2.urlopen('https://cdn.sstatic.net/Sites/stackoverflow/Img/apple-touch-icon.png'))
    im = im.convert('RGB')
    msg = SensorImage()
    msg.header.stamp = rospy.Time.now()
    msg.height = im.height
    msg.width = im.width
    msg.encoding = "rgb8"
    msg.is_bigendian = False
    msg.step = 3 * im.width
    msg.data = np.array(im).tobytes()
    pub.publish(msg)
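For completeness, here is a minimal subscriber sketch for the other end of the setup; the topic name /image and the rgb8 layout match the publisher above, while the node name and output path are assumptions:
#!/usr/bin/env python
import rospy
import numpy as np
from PIL import Image
from sensor_msgs.msg import Image as SensorImage

def callback(msg):
    # Rebuild a PIL image from the raw bytes (assumes rgb8, step == 3 * width)
    arr = np.frombuffer(msg.data, dtype=np.uint8).reshape(msg.height, msg.width, 3)
    Image.fromarray(arr).save('/tmp/received.png')

if __name__ == '__main__':
    rospy.init_node('image_subscriber')
    rospy.Subscriber('/image', SensorImage, callback)
    rospy.spin()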
I don't know anything about ROS, but I use PIL a lot, so if someone else knows better, please ping me and I will delete this "best guess" answer.
So, it seems you need to make something like this from a PIL Image, filling in these fields: 'header', 'height', 'width', 'encoding', 'is_bigendian', 'step' and 'data'.
So, assuming you do this:
im = Image.open('test.jpg')
you should be able to fill them in as follows:
header: something you'll need to work out
height: im.height from the PIL Image
width: im.width from the PIL Image
encoding: probably const std::string RGB8 = "rgb8"
is_bigendian: probably irrelevant because data is 8-bit
step: probably im.width * 3, as it's 3 bytes per pixel RGB
data: np.array(im).tobytes()
Before anyone marks this answer down, nobody said answers have to be complete - they can just be "hopefully helpful"!
Note that if your input image is PNG format, you should check im.mode and if it is "P" (i.e. palette mode) immediately run:
im = im.convert('RGB')
to make sure it is 3-channel RGB.
Note that if your input image is PNG format and contains an alpha channel, you should change the encoding to "rgba8" and set step = im.width * 4.
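A sketch of that RGBA variant, reusing the msg and np names from the publisher above (the filename is an assumption):
im = Image.open('icon.png')            # assumed PNG with an alpha channel
if im.mode == 'RGBA':
    msg.encoding = "rgba8"
    msg.step = 4 * im.width            # 4 bytes per pixel instead of 3
    msg.data = np.array(im).tobytes()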
This is what I'm trying:
import ctypes
import os
drive = "F:\\"
folder = "Keith's Stuff"
image = "midi turmes.png"
image_path = os.path.join(drive, folder, image)
SPI_SETDESKWALLPAPER = 20
ctypes.windll.user32.SystemParametersInfoA(SPI_SETDESKWALLPAPER, 0, image_path, 3)
Basically, this code is obviously supposed to set the desktop background to midi turmes.png. It does change the desktop, but for some odd reason it's always a green background (my personalized setting in Windows is a green background behind the image). How do I fix this and make the desktop look like this? http://i.imgur.com/VqMZF6H.png
The following works for me. I'm using Windows 10 64-bit and Python 3.
import os
import ctypes
from ctypes import wintypes
drive = "c:\\"
folder = "test"
image = "midi turmes.png"
image_path = os.path.join(drive, folder, image)
SPI_SETDESKWALLPAPER = 0x0014
SPIF_UPDATEINIFILE = 0x0001
SPIF_SENDWININICHANGE = 0x0002
user32 = ctypes.WinDLL('user32')
SystemParametersInfo = user32.SystemParametersInfoW
SystemParametersInfo.argtypes = ctypes.c_uint,ctypes.c_uint,ctypes.c_void_p,ctypes.c_uint
SystemParametersInfo.restype = wintypes.BOOL
print(SystemParametersInfo(SPI_SETDESKWALLPAPER, 0, image_path, SPIF_UPDATEINIFILE | SPIF_SENDWININICHANGE))
The important part is to make sure to use a Unicode string for image_path when using SystemParametersInfoW, and a byte string when using SystemParametersInfoA. Remember that in Python 3, strings are Unicode by default.
It is also good practice to set argtypes and restype. You can even "lie" and set the third argtypes parameter to c_wchar_p for SystemParametersInfoW; ctypes will then validate that you are passing a Unicode string and not a byte string.
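That "lie" would look something like this; a sketch of the stricter signature, not required for the call above to work:
# Declare the third parameter as a wide-character string so ctypes
# rejects byte strings at call time instead of passing them through
SystemParametersInfo.argtypes = (ctypes.c_uint, ctypes.c_uint,
                                 ctypes.c_wchar_p, ctypes.c_uint)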
I've been trying to run the following code at startup on a Raspberry Pi:
#!/usr/bin/python3
import numpy
import math
import cv2
#this is python 3 specific
import urllib.request
from enum import Enum
from VisionProcessor import VisionProcessor
from GripPipeline import GripPipeline
from networktables import NetworkTables
import time
import logging
from networktables.util import ntproperty

#proper networktables setup
logging.basicConfig(level=logging.DEBUG)
NetworkTables.initialize(server='10.17.11.76')

#create the field to talk to on the network table
class NTClient(object):
    angle_difference = ntproperty('/Raspberry Pi/angle difference', 0)
    distance_from_target = ntproperty('/Raspberry Pi/distance from target', 0)

n = NTClient()
frame = cv2.VideoCapture('https://frc:frc@10.17.11.11/mjpg/video.mjpg')
if(frame == None):
    print("error: camera not found. check connection")
#pipeline = GripPipeline()
pipeline = VisionProcessor()
print("pipeline created")

def get_image():
    ret, img_array = frame.read()
    # cv2.imwrite("frame.jpg", img_array)
    return img_array

def find_distance(width, height, y):
    #distances are in inches
    KNOWN_WIDTH = 6.25
    KNOWN_DISTANCE = 12.0
    KNOWN_PIXELS = 135.5
    KNOWN_HEIGHT = 424.0
    focal_length = (KNOWN_PIXELS * KNOWN_DISTANCE)/KNOWN_WIDTH
    #hypotenuse = (KNOWN_WIDTH * focal_length)/width
    distance = (KNOWN_WIDTH * focal_length)/width
    #0.2125 degrees per pixel vertical
    # theta = (0.2125) * (240 - y)
    # distance = KNOWN_HEIGHT * (math.tan((math.pi / 2) - math.radians(theta)))
    return distance

x = True
while x:
    print("while loop entered")
    img = get_image()
    print("image gotten")
    center_point = [160, 120]
    file = open('output.txt', 'a')
    try:
        current_point, size, y = pipeline.process(img)
        #negative means turn left, positive means turn right
        pixel_difference = center_point[0] - current_point[0]
        #4.7761 pixels per degree
        angle_difference = (float)(pixel_difference) / 4.7761
        n.angle_difference = angle_difference
        target_width = size[0]
        target_height = size[1]
        distance = find_distance(target_width, target_height, y)
        n.distance_from_target = distance
        print("angle")
        file.write("angle: ")
        print(n.angle_difference)
        file.write(str(angle_difference))
        print(" distance: ")
        file.write("distance")
        print(distance)
        file.write(str(distance))
        file.write("\n")
    except UnboundLocalError:
        print(":(")
    except (TypeError, cv2.error) as e:
        print(":(")
    # x = False
I've been doing this by editing the /etc/rc.local file, and the script has been running "successfully". The Pi shows ~25% CPU usage upon startup, and it remains consistent while the script is running, so I can see when it is active (I'm not running any other processes on this Pi). Using ps -aux shows the active python3 process. However, it's not outputting anything, either to the output.txt file or to the networktables.
My end goal is to get it to output successfully to the networktable. If I run it normally (e.g. not at startup, via python3 pipeline-test.py in the terminal), it correctly outputs to both output.txt and the networktable. I added output.txt as a way to ensure that I'm getting correct output, and it's working just fine except when it's run at startup.
Does anyone have an idea of what could be wrong? If any more info is needed, I can do my best to provide it.
EDIT: for some reason, when I copied my code over from Github, it lost all the indentation. The code in use is here.
To start with, the /etc/rc.local script executes as root, and thus from the root directory. You will need to use the full file path to your Python program. This may or may not solve the issue.
python /dir/dir/python_program
You can record the output of this program in an error file. First create the file:
sudo nano /home/pi/error.log
In that file, just type anything and exit (Ctrl+X), saving the changes. Then edit rc.local so that the output is redirected to that file (the 2>&1 captures error messages as well):
python /dir/dir/python_program > /home/pi/error.log 2>&1 &
Now perform a reboot:
sudo reboot
The Pi will boot and run the program. After a few minutes, pkill python and view the /home/pi/error.log file. This should give you a better idea of what's going on with your program's fail state.
I notice your program opens a file. Rather than the relative path output.txt, you will need the full path to the file, since the program is executed from the root directory at startup. This needs to change everywhere your program opens a file.
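For example, a sketch assuming the script lives in /home/pi (use your script's real directory):
file = open('/home/pi/output.txt', 'a')  # absolute path instead of 'output.txt'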
If you then get a permissions error in the log file, run the following:
sudo chmod 777 -R /filepath_to_your_script
I'm having trouble mocking out an imported module in a unit test. I'm trying to mock PIL's Image in my module tracker.models using the mock library. I understand you are supposed to mock things where they are used, so I've written @mock.patch('tracker.models.Image') as the decorator for the unit test. I am trying to check whether the downloaded image gets opened as a PIL Image. The mock patch seems to be overwriting the entire Image module. Here is the error I get when I run the test:
File "/home/ubuntu/workspace/tracker/models.py", line 40, in set_photo
width, height = image.size
ValueError: need more than 0 values to unpack
Here's my unit test:
test_models.py
@responses.activate
@mock.patch('tracker.models.Image')
def test_set_photo(self, mock_pil_image):
    # Initialize data
    hammer = Product.objects.get(name="Hammer")
    fake_url = 'http://www.example.com/prod.jpeg'
    fake_destination = 'Hammer.jpeg'
    # Mock successful image download using sample image. (This works fine)
    with open('tracker/tests/test_data/small_pic.jpeg', 'r') as pic:
        sample_pic_content = pic.read()
    responses.add(responses.GET, fake_url, body=sample_pic_content,
                  status=200, content_type='image/jpeg')
    # Run the actual method
    hammer.set_photo(fake_url, fake_destination)
    # Check that it was opened as a PIL Image
    self.assertTrue(mock_pil_image.open.called,
                    "Failed to open the downloaded file as a PIL image.")
Here is the piece of code it is testing.
tracker/models.py
class Product(models.Model):
    def set_photo(self, url, filename):
        image_request_result = requests.get(url)
        image_request_result.content
        image = Image.open(StringIO(image_request_result.content))
        # Shrink photo if needed
        width, height = image.size  # Unit test fails here
        max_size = [MAX_IMAGE_SIZE, MAX_IMAGE_SIZE]
        if width > MAX_IMAGE_SIZE or height > MAX_IMAGE_SIZE:
            image.thumbnail(max_size)
        image_io = StringIO()
        image.save(image_io, format='JPEG')
        self.photo.save(filename, ContentFile(image_io.getvalue()))
You need to configure the return value of Image.open to include a size attribute:
opened_image = mock_pil_image.open.return_value
opened_image.size = (42, 83)
Now when your function-under-test calls Image.open the returned MagicMock instance will have a size attribute that is a tuple.
You could do the same thing for any other methods or attributes that need to return something.
The opened_image reference is then also useful for testing other aspects of your function-under-test; you can now assert that image.thumbnail and image.save were called:
opened_image = mock_pil_image.open.return_value
opened_image.size = (42, 83)
# Run the actual method
hammer.set_photo(fake_url, fake_destination)
# Check that it was opened as a PIL Image
self.assertTrue(mock_pil_image.open.called,
"Failed to open the downloaded file as a PIL image.")
self.assertTrue(opened_image.thumbnail.called)
self.assertTrue(opened_image.save.called)
This lets you test very accurately if your thumbnail size logic works correctly, for example, without having to test if PIL is doing what it does; PIL is not being tested here, after all.
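For instance, a sketch of such an assertion, assuming MAX_IMAGE_SIZE is 500 in tracker/models.py (adjust to your actual constant):
opened_image = mock_pil_image.open.return_value
opened_image.size = (600, 400)  # larger than the assumed maximum
hammer.set_photo(fake_url, fake_destination)
opened_image.thumbnail.assert_called_with([500, 500])  # max_size from set_photo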
I was writing a similar test, but my function was using Image.open as a context manager (with Image.open(<filepath>) as img:). Thanks to Martijn Pieters' answer and this one, I was able to get my test to work with:
mock_pil_image.open.return_value.__enter__.return_value.size = (42, 83)
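The same return-value chain also gives you a handle on the context-managed image for later assertions; a sketch, assuming your function-under-test enters the with block:
opened_image = mock_pil_image.open.return_value.__enter__.return_value
opened_image.size = (42, 83)
my_function_under_test()  # hypothetical; whatever calls `with Image.open(...)`
self.assertTrue(opened_image.thumbnail.called)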
ImageGrab from PIL would have been ideal. I'm looking for similar functionality, specifically the ability to define the screenshot's bounding box. I've been looking for a library to do so on Mac OS X but haven't had any luck. I also wasn't able to find any sample code to do it (maybe pyobjc?).
While not exactly what you want, in a pinch you might just use:
os.system("screencapture screen.png")
Then open that image with the Image module. I'm sure a better solution exists though.
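As a sketch of the bounding-box part: newer versions of screencapture accept a -R flag for capturing a rectangle, and failing that you can crop with PIL (check man screencapture on your system before relying on the flag):
import os
from PIL import Image

# -x suppresses the capture sound; -R takes x,y,width,height
os.system("screencapture -x -R0,0,100,100 region.png")
region = Image.open("region.png")

# Or capture the full screen and crop the box with PIL instead
os.system("screencapture -x screen.png")
box = Image.open("screen.png").crop((0, 0, 100, 100))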
Here's how to capture and save a screenshot with PyObjC, based on my answer here
You can capture the entire screen, or specify a region to capture. If you don't need to do that, I'd recommend just calling the screencapture command (more features, more robust, and quicker - the initial PyObjC import alone can take around a second)
import Quartz
import LaunchServices
from Cocoa import NSURL
import Quartz.CoreGraphics as CG

def screenshot(path, region=None):
    """region should be a CGRect, something like:

    >>> import Quartz.CoreGraphics as CG
    >>> region = CG.CGRectMake(0, 0, 100, 100)
    >>> sp = ScreenPixel()
    >>> sp.capture(region=region)

    The default region is CG.CGRectInfinite (captures the full screen)
    """
    if region is None:
        region = CG.CGRectInfinite

    # Create screenshot as CGImage
    image = CG.CGWindowListCreateImage(
        region,
        CG.kCGWindowListOptionOnScreenOnly,
        CG.kCGNullWindowID,
        CG.kCGWindowImageDefault)

    dpi = 72  # FIXME: Should query this from somewhere, e.g. for retina displays

    url = NSURL.fileURLWithPath_(path)
    dest = Quartz.CGImageDestinationCreateWithURL(
        url,
        LaunchServices.kUTTypePNG,  # file type
        1,  # 1 image in file
        None
    )

    properties = {
        Quartz.kCGImagePropertyDPIWidth: dpi,
        Quartz.kCGImagePropertyDPIHeight: dpi,
    }

    # Add the image to the destination, characterizing the image with
    # the properties dictionary.
    Quartz.CGImageDestinationAddImage(dest, image, properties)

    # When all the images (only 1 in this example) are added to the destination,
    # finalize the CGImageDestination object.
    Quartz.CGImageDestinationFinalize(dest)

if __name__ == '__main__':
    # Capture full screen
    screenshot("/tmp/testscreenshot_full.png")

    # Capture region (100x100 box from top-left)
    region = CG.CGRectMake(0, 0, 100, 100)
    screenshot("/tmp/testscreenshot_partial.png", region=region)
While I do understand that this thread is close to five years old now, I'm answering it in the hope that it helps people in the future.
Here's what worked for me, based on an answer in this thread (credit goes to ponty): Take a screenshot via a python script. [Linux]
https://github.com/ponty/pyscreenshot
Install:
easy_install pyscreenshot
Example:
import pyscreenshot

# fullscreen
screenshot = pyscreenshot.grab()
screenshot.show()

# part of the screen
screenshot = pyscreenshot.grab(bbox=(10, 10, 500, 500))
screenshot.show()

# save to file
pyscreenshot.grab_to_file('screenshot.png')
Pillow has since added ImageGrab support for macOS!
However it's not in v2.9 (as of right now the latest) so I just added this file to my local module.
The code is as below:
#
# The Python Imaging Library
# $Id$
#
# screen grabber (macOS and Windows only)
#
# History:
# 2001-04-26 fl created
# 2001-09-17 fl use builtin driver, if present
# 2002-11-19 fl added grabclipboard support
#
# Copyright (c) 2001-2002 by Secret Labs AB
# Copyright (c) 2001-2002 by Fredrik Lundh
#
# See the README file for information on usage and redistribution.
#
from . import Image
import sys
if sys.platform not in ["win32", "darwin"]:
    raise ImportError("ImageGrab is macOS and Windows only")

if sys.platform == "win32":
    grabber = Image.core.grabscreen
elif sys.platform == "darwin":
    import os
    import tempfile
    import subprocess

def grab(bbox=None):
    if sys.platform == "darwin":
        fh, filepath = tempfile.mkstemp('.png')
        os.close(fh)
        subprocess.call(['screencapture', '-x', filepath])
        im = Image.open(filepath)
        im.load()
        os.unlink(filepath)
    else:
        size, data = grabber()
        im = Image.frombytes(
            "RGB", size, data,
            # RGB, 32-bit line padding, origin lower left corner
            "raw", "BGR", (size[0]*3 + 3) & -4, -1
        )
    if bbox:
        im = im.crop(bbox)
    return im

def grabclipboard():
    if sys.platform == "darwin":
        fh, filepath = tempfile.mkstemp('.jpg')
        os.close(fh)
        commands = [
            "set theFile to (open for access POSIX file \"" + filepath + "\" with write permission)",
            "try",
            "write (the clipboard as JPEG picture) to theFile",
            "end try",
            "close access theFile"
        ]
        script = ["osascript"]
        for command in commands:
            script += ["-e", command]
        subprocess.call(script)

        im = None
        if os.stat(filepath).st_size != 0:
            im = Image.open(filepath)
            im.load()
        os.unlink(filepath)
        return im
    else:
        debug = 0  # temporary interface
        data = Image.core.grabclipboard(debug)
        if isinstance(data, bytes):
            from . import BmpImagePlugin
            import io
            return BmpImagePlugin.DibImageFile(io.BytesIO(data))
        return data
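A minimal usage sketch, assuming you saved the file above next to your script as ImageGrab.py and changed its relative import (from . import Image) to from PIL import Image:
import ImageGrab

# Grab the full screen, then crop a 100x100 box from the top-left
im = ImageGrab.grab(bbox=(0, 0, 100, 100))
im.save('/tmp/grab.png')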
from subprocess import call
import time
from time import gmtime, strftime

# Take a screenshot every 10 seconds and store it in the folder where the
# code file is present on disk. To stop the script press Ctrl+C.

def take_screen_shot():
    # Name each screenshot with a timestamp
    call(["screencapture", "Screenshot" + strftime("%Y-%m-%d %H:%M:%S", gmtime()) + ".jpg"])

def build_screen_shot_base():
    while True:
        take_screen_shot()
        time.sleep(10)

build_screen_shot_base()
I found that using webkit2png was the most convenient solution for me on OS X.
brew install webkit2png
webkit2png http://stackoverflow.com