I have been attempting to drive an iRobot Create (a Roomba without the vacuum attached) using Python 2.7.1, and I have working code. When I type each line in by hand it works perfectly; however, when I put all the code in together it stalls and does not operate.
import Create
import VideoCapture
from PIL import Image, ImageChops
import os
robot = Create.Create(3)
camera = VideoCapture.Device(0, 1)
(rgb_red, rgb_green, rgb_blue) = (0, 0, 0)
red = Image.open("Red.jpeg")
(redr, redg, redb) = red.getpixel((0, 0))
blue = Image.open("Blue.jpeg")
(bluer, blueg, blueb) = blue.getpixel((0, 0))
green = Image.open("Green.jpeg")
(greenr, greeng, greenb) = green.getpixel((0, 0))
yellow = Image.open("Yellow.jpeg")
(yellowr, yellowg, yellowb) = yellow.getpixel((0, 0))
camera.getImage(0, 0, 'tl')
camera.saveSnapshot('CurrentPicture.jpeg', 0, 0, 'tl')
pic = Image.open("CurrentPicture.jpeg")
(rgb_red, rgb_green, rgb_blue) = pic.getpixel((0, 0))
os.remove("C:\Python27\CurrentPicture.jpeg")
while 0 == 0:
    if ((rgb_red - redr) < (rgb_green - greeng)) and ((rgb_red - redr) < (rgb_blue - blueb)):
        robot.stop()
    elif ((rgb_blue - blueb) < (rgb_green - greeng)) and ((rgb_blue - blueb) < (rgb_red - redr)):
        robot.turn(45, 40)
    elif ((rgb_green - greeng) < (rgb_red - redr)) and ((rgb_green - greeng) < (rgb_blue - blueb)):
        robot.move(50, 50)
    camera.saveSnapshot('CurrentPicture.jpeg', 0, 0, 'tl')
    pic = Image.open("CurrentPicture.jpeg")
    (rgb_red, rgb_green, rgb_blue) = pic.getpixel((0, 0))
    os.remove("C:\Python27\CurrentPicture.jpeg")
Are there known issues with IDLE when running multiple lines at once? I am not entirely sure what I should be asking; it is just that nothing happens when I run that entire block together, while entering it line by line works.
Any help is greatly appreciated.
Instead of pasting the code into IDLE, save it into a file and run it like this:
python yourfile.py
while 0 == 0: works, but you probably want while True: instead.
red = Image.open("Red.jpeg")
(redr, redg, redb) = red.getpixel((0, 0)) is a very complex way of saying RED = (255, 0, 0)
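A minimal sketch of the whole script with those suggestions applied, to be saved in a file and run with python yourfile.py. The Create and VideoCapture calls are copied from the question and assumed to behave as described there; the closest-color matching below is only illustrative, not the original channel-difference logic:
import os
import Create
import VideoCapture
from PIL import Image

robot = Create.Create(3)            # serial port number from the question
camera = VideoCapture.Device(0, 1)

# Hard-coded reference colors instead of sampling Red.jpeg, Green.jpeg, Blue.jpeg
RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)

def color_distance(a, b):
    # Sum of absolute per-channel differences between two RGB tuples.
    return sum(abs(x - y) for x, y in zip(a, b))

while True:
    camera.saveSnapshot('CurrentPicture.jpeg', 0, 0, 'tl')
    pic = Image.open('CurrentPicture.jpeg')
    pixel = pic.getpixel((0, 0))
    os.remove('CurrentPicture.jpeg')

    # Act on whichever reference color the sampled pixel is closest to.
    closest = min((RED, GREEN, BLUE), key=lambda ref: color_distance(pixel, ref))
    if closest == RED:
        robot.stop()
    elif closest == BLUE:
        robot.turn(45, 40)
    else:
        robot.move(50, 50)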
I have the following code:
def make_override_editable():
    for area in bpy.context.screen.areas:
        if area.type == "OUTLINER":
            ctx = bpy.context.copy()
            ctx["area"] = area
            with bpy.context.temp_override(area=area):
                print(bpy.context.area.type)
                bpy.ops.outliner.liboverride_operation(
                    type="OVERRIDE_LIBRARY_CREATE_HIERARCHY",
                    selection_set="SELECTED_AND_CONTENT",
                )
The following line, when run through the script:
print(bpy.context.area.type)
outputs OUTLINER
but I still get the error that I have the incorrect context:
RuntimeError: Operator bpy.ops.outliner.liboverride_operation.poll() failed, context is incorrect
whereas the following 3 lines normally work in the Blender text editor:
bpy.context.area.type = 'OUTLINER'
bpy.ops.outliner.liboverride_operation(type="OVERRIDE_LIBRARY_CREATE_HIERARCHY",selection_set="SELECTED_AND_CONTENT")
bpy.context.area.type = 'TEXT_EDITOR'
I'm using Python in a much more complex script with Qt.
Any suggestions?
I'm trying to make a linked skeleton (Armature) editable so I can change its pose.
I've also searched for a low-level function that I can use, but did not have any success.
Or perhaps there is another method to link an animation to a linked Armature?
I've tried this:
def make_override_editable():
    for area in bpy.context.screen.areas:
        if area.type == "OUTLINER":
            ctx = bpy.context.copy()
            ctx["area"] = area
            with bpy.context.temp_override(area=area):
                print(bpy.context.area.type)
                bpy.ops.outliner.liboverride_operation(
                    type="OVERRIDE_LIBRARY_CREATE_HIERARCHY",
                    selection_set="SELECTED_AND_CONTENT",
                )
And I was expecting it to behave like this:
bpy.ops.outliner.liboverride_operation(type="OVERRIDE_LIBRARY_CREATE_HIERARCHY",selection_set="SELECTED_AND_CONTENT")
This works even if the Armature is linked; there is no need to use operators:
# Reset the Armature to the default POSE.
for n in bpy.context.object.pose.bones:
    n.location = (0, 0, 0)
    n.rotation_quaternion = (1, 0, 0, 0)
    n.rotation_axis_angle = (0, 0, 1, 0)
    n.rotation_euler = (0, 0, 0)
    n.scale = (1, 1, 1)
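If the library override hierarchy itself still needs to be created through the outliner operator, its poll check is often satisfied by also overriding the window and a 'WINDOW' region rather than the area alone. A hedged sketch, assuming Blender 3.2+ where Context.temp_override is available:
import bpy

def make_override_editable():
    # Look for an Outliner area in any open window and override
    # window, area, and region together so the operator's poll passes.
    for window in bpy.context.window_manager.windows:
        for area in window.screen.areas:
            if area.type != "OUTLINER":
                continue
            region = next(r for r in area.regions if r.type == "WINDOW")
            with bpy.context.temp_override(window=window, area=area, region=region):
                bpy.ops.outliner.liboverride_operation(
                    type="OVERRIDE_LIBRARY_CREATE_HIERARCHY",
                    selection_set="SELECTED_AND_CONTENT",
                )
            return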
I'm trying to get the LED to advance like a second hand.
np = neopixel.NeoPixel(machine.Pin(2), led)
while True:
    t = utime.localtime()
    h = int(utime.localtime()[3]) + time_zone
    m = utime.localtime()[4]
    s = utime.localtime()[5]
    np[h] = (64, 64, 64)
    np[m] = (64, 64, 64)
    np[s] = (64, 64, 64)  # turn on
    time.sleep(1)
    sc = s - 1
    np[sc] = (0, 0, 0)  # turn off
    np.write()
But I don't think my code is a good approach.
I'm not entirely clear how you want your display to look, but maybe this helps. With the code below, the "seconds" led will never overwrite the "hours" or "minutes" led.
Critically, we also reset everything to "off" other than the leds we're lighting for the current hour, minute, and second.
import machine
import neopixel
# on my micropython devices, 'time' and 'utime' refer to the same module
import time

np = neopixel.NeoPixel(machine.Pin(2), 60)

while True:
    t = time.localtime()
    h, m, s = (int(x) for x in t[3:6])

    # set everything else to 0
    for i in range(60):
        np[i] = (0, 0, 0)

    np[s] = (0, 0, 255)
    np[m] = (0, 255, 0)
    np[h] = (255, 0, 0)

    np.write()
    time.sleep(0.5)
I don't have a neopixel myself, so I wrote a simulator that I used for testing this out with regular Python.
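The simulator isn't shown above, but as a rough sketch of the kind of stand-in I mean, something like this hypothetical FakeNeoPixel class is enough to run the same loop on desktop Python and print which pixels are lit:
class FakeNeoPixel(object):
    """A tiny desktop stand-in for neopixel.NeoPixel, for testing only."""

    def __init__(self, pin, n):
        self.pixels = [(0, 0, 0)] * n

    def __setitem__(self, index, color):
        self.pixels[index] = color

    def write(self):
        # Instead of driving hardware, print the indices that are currently lit.
        lit = [(i, c) for i, c in enumerate(self.pixels) if c != (0, 0, 0)]
        print(lit)

# Usage: swap it in for the real strip and keep the rest of the loop unchanged.
np = FakeNeoPixel(None, 60)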
I've been struggling to come up with a script that allows me to take screenshots of my desktop more than once per second. I'm using Win10.
PIL:
from PIL import ImageGrab
import time

while True:
    im = ImageGrab.grab()
    fname = "dropfolder/%s.png" % int(time.time())
    im.save(fname, 'PNG')
Result: 1.01 seconds per image.
PyScreeze (https://github.com/asweigart/pyscreeze):
import pyscreeze
import time

while True:
    fname = "dropfolder/%s.png" % int(time.time())
    x = pyscreeze.screenshot(fname)
Result: 1.00 seconds per image.
Win32:
import win32gui
import win32ui
import win32con
import time

w = 1920  # res
h = 1080  # res

while True:
    wDC = win32gui.GetWindowDC(0)
    dcObj = win32ui.CreateDCFromHandle(wDC)
    cDC = dcObj.CreateCompatibleDC()
    dataBitMap = win32ui.CreateBitmap()
    dataBitMap.CreateCompatibleBitmap(dcObj, w, h)
    cDC.SelectObject(dataBitMap)
    cDC.BitBlt((0, 0), (w, h), dcObj, (0, 0), win32con.SRCCOPY)
    fname = "dropfolder/%s.png" % int(time.time())
    dataBitMap.SaveBitmapFile(cDC, fname)
    dcObj.DeleteDC()
    cDC.DeleteDC()
    win32gui.ReleaseDC(0, wDC)
    win32gui.DeleteObject(dataBitMap.GetHandle())
Result: 1.01 seconds per image.
Then I stumbled onto a thread (Fastest way to take a screenshot with python on windows) where it was suggested that gtk would yield phenomenal results.
However, using gtk:
import gtk
import time

img_width = gtk.gdk.screen_width()
img_height = gtk.gdk.screen_height()

while True:
    screengrab = gtk.gdk.Pixbuf(
        gtk.gdk.COLORSPACE_RGB,
        False,
        8,
        img_width,
        img_height
    )
    fname = "dropfolder/%s.png" % int(time.time())
    screengrab.get_from_drawable(
        gtk.gdk.get_default_root_window(),
        gtk.gdk.colormap_get_system(),
        0, 0, 0, 0,
        img_width,
        img_height
    ).save(fname, 'png')
Result: 2.34 seconds per image.
It seems to me like I'm doing something wrong, because people have been getting great results with gtk.
Any advice on how to speed up the process?
Thanks!
Your first solution should be giving you more than one picture per second. The problem though is that you will be overwriting any pictures that occur within the same second, i.e. they will all have the same filename. To get around this you could create filenames that include 10ths of a second as follows:
from PIL import ImageGrab
from datetime import datetime

while True:
    im = ImageGrab.grab()
    dt = datetime.now()
    fname = "pic_{}.{}.png".format(dt.strftime("%H%M_%S"), dt.microsecond // 100000)
    im.save(fname, 'png')
On my machine, this gave the following output:
pic_1143_24.5.png
pic_1143_24.9.png
pic_1143_25.3.png
pic_1143_25.7.png
pic_1143_26.0.png
pic_1143_26.4.png
pic_1143_26.8.png
pic_1143_27.2.png
In case anyone cares in 2022: you can try my newly created project DXcam. I think for raw speed it's the fastest out there (in Python, and without going too deep into the rabbit hole). It was originally created for a deep-learning pipeline for FPS games, where the higher the capture FPS, the better. Plus, I am trying to design it to be user-friendly.
For a screenshot, just do:
import dxcam
camera = dxcam.create()
frame = camera.grab() # full screen
frame = camera.grab(region=(left, top, right, bottom)) # region
For screen capturing:
camera.start(target_fps=60)  # threaded
for i in range(1000):
    image = camera.get_latest_frame()  # will block until a new frame is available
camera.stop()
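As a usage note, a grabbed frame can be written straight to disk with Pillow. This is only a sketch assuming grab() returns an RGB numpy array as the README describes, and None when no new frame is available:
import dxcam
from PIL import Image

camera = dxcam.create()
frame = camera.grab()              # numpy array of shape (height, width, 3)
if frame is not None:              # grab() may return None if nothing new was rendered
    Image.fromarray(frame).save("screenshot.png")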
I copied part of the benchmarks section from the README:
              DXcam      python-mss   D3DShot
Average FPS   238.79     75.87        118.36
Std Dev       1.25       0.5447       0.3224
The benchmarks were conducted over 5 trials on my 240 Hz monitor with a constant 240 Hz rendering rate synced with the monitor (using the Blur Busters UFO test).
You can read more about the details here: https://github.com/ra1nty/DXcam
This solution uses d3dshot.
import os
import time

import d3dshot
import win32api
import win32con
import win32gui

def d3dgrab(rect=(0, 0, 0, 0), spath=r".\\pictures\\cache\\", sname="", title=""):
    """Take a screenshot of the given rect, or of the window matching title."""
    sname = sname if sname else time.strftime("%Y%m%d%H%M%S000.jpg", time.localtime())
    # Bump the 3-digit counter in the filename until it no longer collides.
    while os.path.isfile("%s%s" % (spath, sname)):
        sname = "%s%03d%s" % (sname[:-7], int(sname[-7:-4]) + 1, sname[-4:])
    xlen = win32api.GetSystemMetrics(win32con.SM_CXSCREEN)
    ylen = win32api.GetSystemMetrics(win32con.SM_CYSCREEN)
    assert 0 <= rect[0] <= xlen and 0 <= rect[2] <= xlen, ValueError("Illegal value of X coordinate in rect: %s" % rect)
    assert 0 <= rect[1] <= ylen and 0 <= rect[3] <= ylen, ValueError("Illegal value of Y coordinate in rect: %s" % rect)
    if title:
        hdl = win32gui.FindWindow(None, title)
        if hdl != win32gui.GetForegroundWindow():
            win32gui.SetForegroundWindow(hdl)
        rect = win32gui.GetWindowRect(hdl)
    elif not sum(rect):
        rect = (0, 0, xlen, ylen)
    d = d3dshot.create(capture_output="numpy")
    return d.screenshot_to_disk(directory=spath, file_name=sname, region=rect)
I think this part in particular is helpful:
sname = sname if sname else time.strftime("%Y%m%d%H%M%S000.jpg", time.localtime())
while os.path.isfile("%s%s" % (spath, sname)):
    sname = "%s%03d%s" % (sname[:-7], int(sname[-7:-4]) + 1, sname[-4:])
And it's the fastest way to take a screenshot that I have found.
I use a program which generates a picture. I save it with:
img.save("/usr/lib/python2.6/site-packages/openstackdashboard/static/dashboard/img/validate.jpeg")
return strs # strs is picture's data
Everything goes right when I run it alone. But an IOError occurred when I call it with:
from .auth_code import Create_Validate_Code
auth_code_str = Create_Validate_Code()
And Horizon says "[Errno 13] Permission denied: '/usr/lib/python2.6/site-packages/openstack-dashboard/static/dashboard/img/validate.jpeg'". Could someone help me? Thanks a lot.
This is all the code used to create the picture:
#!/usr/bin/env python
import random
import Image, ImageDraw, ImageFont, ImageFilter

_letter_cases = "1234567890"
_upper_cases = _letter_cases.upper()
_numbers = ''.join(map(str, range(3, 10)))
init_chars = ''.join((_letter_cases, _upper_cases, _numbers))
fontType = "/usr/share/fonts/lohit-tamil/Lohit-Tamil.ttf"

def create_lines(draw, n_line, width, height):
    line_num = random.randint(n_line[0], n_line[1])
    for i in range(line_num):
        begin = (random.randint(0, width), random.randint(0, height))
        end = (random.randint(0, width), random.randint(0, height))
        draw.line([begin, end], fill=(0, 0, 0))

def create_points(draw, point_chance, width, height):
    chance = min(100, max(0, int(point_chance)))
    for w in xrange(width):
        for h in xrange(height):
            tmp = random.randint(0, 100)
            if tmp > 100 - chance:
                draw.point((w, h), fill=(0, 0, 0))

def create_strs(draw, chars, length, font_type, font_size, width, height, fg_color):
    c_chars = random.sample(chars, length)
    strs = ' %s ' % ' '.join(c_chars)
    font = ImageFont.truetype(font_type, font_size)
    font_width, font_height = font.getsize(strs)
    draw.text(((width - font_width) / 3, (height - font_height) / 3), strs, font=font, fill=fg_color)
    return ''.join(c_chars)

def Create_Validate_Code(size=(120, 30),
                         chars=init_chars,
                         img_type="GIF",
                         mode="RGB",
                         bg_color=(255, 255, 255),
                         fg_color=(0, 0, 255),
                         font_size=18,
                         font_type=fontType,
                         length=4,
                         draw_lines=True,
                         n_line=(1, 2),
                         draw_points=True,
                         point_chance=2):
    width, height = size
    img = Image.new(mode, size, bg_color)
    draw = ImageDraw.Draw(img)
    if draw_lines:
        create_lines(draw, n_line, width, height)
    if draw_points:
        create_points(draw, point_chance, width, height)
    strs = create_strs(draw, chars, length, font_type, font_size, width, height, fg_color)
    params = [1 - float(random.randint(1, 2)) / 100,
              0,
              0,
              0,
              1 - float(random.randint(1, 10)) / 100,
              float(random.randint(1, 2)) / 500,
              0.001,
              float(random.randint(1, 2)) / 500
              ]
    img = img.transform(size, Image.PERSPECTIVE, params)
    img = img.filter(ImageFilter.EDGE_ENHANCE_MORE)
    img.save("/usr/lib/python2.6/site-packages/openstack-dashboard/static/dashboard/img/validate.jpeg")
    return strs
The code to create and save the file is inside the function Create_Validate_Code. In your initial version, you never call that function anywhere. Therefore, it never tries to create and save the file, so it never fails.
When you add this:
from .auth_code import Create_Validate_Code
auth_code_str = Create_Validate_Code()
… now you're calling the function. So now it fails. It has nothing whatsoever to do with the third-party module you're using; you could do the same thing with just this:
Create_Validate_Code()
Meanwhile, the reason that creating the file fails is that you don't have write access to directories in the middle of your system's site-packages. This is by design. This is why operating systems have permissions in the first place—to stop some buggy or malicious code run as a normal user from screwing up programs and data needed by the entire system.
Create the file somewhere you do have access to, like some place in your home directory, or the temporary directory, or whatever's appropriate to whatever you're trying to do, and the problem will go away.
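A minimal sketch of that change, assuming the system temporary directory (or any other user-writable path) is an acceptable place for the image; the exact target will depend on where the dashboard is configured to serve static files from:
import os
import tempfile
import Image  # old-style PIL import, matching the question's code

# A path the current user can actually write to, instead of site-packages.
save_path = os.path.join(tempfile.gettempdir(), "validate.jpeg")

img = Image.new("RGB", (120, 30), (255, 255, 255))
img.save(save_path)
print(save_path)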
Have you tried running the final app as Administrator/root? That usually fixes any "Permission denied" errors while programming.
You shouldn't save data deep within your Python installation. It's really bad practice, which is why the OS is preventing you from doing it. Save the picture somewhere in your home folder.
I've gotten OpenCV working with Python and I can even detect a face through my webcam. What I really want to do though, is see movement and find the point in the middle of the blob of movement. The camshift sample is close to what I want, but I don't want to have to select which portion of the video to track. Bonus points for being able to predict the next frame.
Here's the code I have currently:
#!/usr/bin/env python
import cv

def is_rect_nonzero(r):
    (_, _, w, h) = r
    return (w > 0) and (h > 0)

class CamShiftDemo:

    def __init__(self):
        self.capture = cv.CaptureFromCAM(0)
        cv.NamedWindow("CamShiftDemo", 1)
        self.storage = cv.CreateMemStorage(0)
        self.cascade = cv.Load("/usr/local/share/opencv/haarcascades/haarcascade_mcs_upperbody.xml")
        self.last_rect = ((0, 0), (0, 0))

    def run(self):
        hist = cv.CreateHist([180], cv.CV_HIST_ARRAY, [(0, 180)], 1)
        backproject_mode = False
        i = 0
        while True:
            i = (i + 1) % 12
            frame = cv.QueryFrame(self.capture)
            if i == 0:
                found = cv.HaarDetectObjects(frame, self.cascade, self.storage, 1.2, 2, 0, (20, 20))
                for p in found:
                    # print p
                    self.last_rect = (p[0][0], p[0][1]), (p[0][2], p[0][3])
                    print self.last_rect
            cv.Rectangle(frame, self.last_rect[0], self.last_rect[1], cv.CV_RGB(255, 0, 0), 3, cv.CV_AA, 0)
            cv.ShowImage("CamShiftDemo", frame)
            c = cv.WaitKey(7) % 0x100
            if c == 27:
                break

if __name__ == "__main__":
    demo = CamShiftDemo()
    demo.run()
I found a solution at How do I track motion using OpenCV in Python?
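That answer isn't reproduced here, but as a rough illustration of the general idea (not necessarily the linked answer's exact method), here is a minimal motion-centroid sketch using the newer cv2 API rather than the legacy cv module above:
import cv2

# Grab frames from the default webcam, difference consecutive frames,
# and report the centroid of the changed pixels as the "middle of movement".
cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    prev_gray = gray

    # Moments of the binary mask give the centroid of all moving pixels.
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] > 0:
        cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
        cv2.circle(frame, (cx, cy), 8, (0, 0, 255), -1)

    cv2.imshow("motion", frame)
    if cv2.waitKey(7) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()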