Raspberry Pi - Python & Flask web control with Adafruit DotStar LEDs - python

Apologies if this isn't the right place to ask, but I did some searching and couldn't find much to point me in the right direction; I wasn't quite sure what to search for. I'm a novice with Python and programming in general, but I can usually do enough googling and borrowing of other code snippets to get my projects running. However, I'm at a bit of a roadblock here.
I need to control an Adafruit DotStar light strip from a Flask web app. I've got the Flask app working and have done a simple proof of concept turning an LED on and off, and I can start my light strip script, but the light strip code needs to loop continuously while still letting me change "modes". I have several different images that display on the light strip, and I'd like to be able to select which one(s) play; for now I'd mainly just like to be able to start and stop a "shuffle all" mode. If I run the module in a while loop it just loops forever and I can't change the argument to a different "mode". I built a simple script based on Adafruit's DotStar library (specifically the image persistence-of-vision example; I'm using PNG images as the map for the different light strip "shows").
It all currently works, except that it only runs each mode once, obviously. When I had it all in a while loop, it just looped the first selected mode forever and I was unable to turn it off or switch modes. I also thought multiprocessing might be the answer and looked into getting that working, but I couldn't figure out how to stop a process once it had started.
Here is the light strip script:
(The 'off' mode is just a black image. I'm sure there's a cleaner way to do this, but I'm not sure how to do that either.)
import Image
from dotstar import Adafruit_DotStar
import random

def lightstrip(mode):
    loopLength = 120  # loop length in pixels

    fade = "/home/pi/lightshow/images/fade.png"
    sparkle = "/home/pi/lightshow/images/sparkle.png"
    steeplechase = "/home/pi/lightshow/images/steeplechase.png"
    bump = "/home/pi/lightshow/images/bump.png"
    spaz = "/home/pi/lightshow/images/spaz.png"
    sine = "/home/pi/lightshow/images/sine.png"
    bounce = "/home/pi/lightshow/images/bounce.png"
    off = "/home/pi/lightshow/images/null.png"

    numpixels = 30
    datapin = 23
    clockpin = 24
    strip = Adafruit_DotStar(numpixels, 100000)

    rOffset = 3
    gOffset = 2
    bOffset = 1

    strip.begin()

    if mode == 1:
        options = [fade, sparkle, steeplechase, bump, spaz, sine, bounce]
        print "Shuffling All..."
    if mode == 2:
        options = [bump, spaz, sine, bounce]
        print "Shuffling Dance..."
    if mode == 3:
        options = [fade, sparkle, steeplechase]
        print "Shuffling Chill..."
    if mode == 0:
        choice = off
        print "Lightstrip off..."
    if mode != 0:
        choice = random.choice(options)

    print "Loading..."
    img = Image.open(choice).convert("RGB")
    pixels = img.load()
    width = img.size[0]
    height = img.size[1]
    print "%dx%d pixels" % img.size

    # Calculate gamma correction table, makes mid-range colors look 'right':
    gamma = bytearray(256)
    for i in range(256):
        gamma[i] = int(pow(float(i) / 255.0, 2.7) * 255.0 + 0.5)

    # Allocate list of bytearrays, one for each column of image.
    # Each pixel REQUIRES 4 bytes (0xFF, B, G, R).
    print "Allocating..."
    column = [0 for x in range(width)]
    for x in range(width):
        column[x] = bytearray(height * 4)

    # Convert entire RGB image into column-wise BGR bytearray list.
    # The image-paint.py example proceeds in R/G/B order because it's counting
    # on the library to do any necessary conversion. Because we're preparing
    # data directly for the strip, it's necessary to work in its native order.
    print "Converting..."
    for x in range(width):           # For each column of image...
        for y in range(height):      # For each pixel in column...
            value = pixels[x, y]     # Read pixel in image
            y4 = y * 4               # Position in raw buffer
            column[x][y4] = 0xFF                       # Pixel start marker
            column[x][y4 + rOffset] = gamma[value[0]]  # Gamma-corrected R
            column[x][y4 + gOffset] = gamma[value[1]]  # Gamma-corrected G
            column[x][y4 + bOffset] = gamma[value[2]]  # Gamma-corrected B

    print "Displaying..."
    count = loopLength
    while (count > 0):
        for x in range(width):       # For each column of image...
            strip.show(column[x])    # Write raw data to strip
        count = count - 1
And the main.py script for running the web app:
from flask import *
from lightshow import *
from multiprocessing import Process
import RPi.GPIO as GPIO
import Image
from dotstar import Adafruit_DotStar
import random
import time

app = Flask(__name__)

@app.route("/")
def hello():
    return render_template('index.html')

@app.route("/lightstrip/1", methods=['POST'])
def shuffleall():
    lightstrip(1)
    return ('', 204)

@app.route("/lightstrip/2", methods=['POST'])
def shuffledance():
    lightstrip(2)
    return ('', 204)

@app.route("/lightstrip/3", methods=['POST'])
def shufflechill():
    lightstrip(3)
    return ('', 204)

@app.route("/lightstrip/0", methods=['POST'])
def off():
    lightstrip(0)
    return ('', 204)

if __name__ == "__main__":
    app.run(host='0.0.0.0', debug=True)
Again, I'm at a bit of a loss here; this may be a simple fix, or I may be approaching it totally wrong, but any and all help would be appreciated. I'm a complete beginner at approaching a problem like this. Thank you.

Here's an example showing how to start and stop processes using multiprocessing and psutil. In this example the task_runner kills any running processes before starting a new one.
from flask import Flask
import multiprocessing
import psutil

app = Flask(__name__)

def blink(var):
    while True:
        # do stuff
        print(var)

def task_runner(var):
    processes = psutil.Process().children()
    for p in processes:
        p.kill()
    process = multiprocessing.Process(target=blink, args=(var,))
    process.start()

@app.route("/red")
def red():
    task_runner('red')
    return 'red started'

@app.route("/blue")
def blue():
    task_runner('blue')
    return 'blue started'

if __name__ == "__main__":
    app.run()
For your question, the task_runner would look something like:
def task_runner(mode):
    processes = psutil.Process().children()
    for p in processes:
        p.kill()
    process = multiprocessing.Process(target=lightstrip, args=(mode,))
    process.start()
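If you would rather not depend on psutil, a minimal sketch of the same idea (assuming only one lightstrip worker ever runs at a time) is to keep a module-level handle on the running process and terminate it before starting the next mode:

current_process = None  # handle to the worker currently driving the strip

def task_runner(mode):
    global current_process
    # Stop the previous mode, if any, before starting the new one.
    if current_process is not None and current_process.is_alive():
        current_process.terminate()
        current_process.join()
    current_process = multiprocessing.Process(target=lightstrip, args=(mode,))
    current_process.start()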

Related

I need some help to solve this problem with my opencv program please?

I am using OpenCV for a project that displays an image; after you close the displayed image, that same image should open again along with a new additional image. The code is below, but it only displays one image, not two.
import cv2
import time
import random
import os

k = 0
rep = 0
window_name = "Monkey Virus"
files = os.listdir("Z:\Y10 Python\images wow\img")
delay = random.randint(0,10)
monkeyChoice = random.randint(1,len(files))
image = "Z:\\Y10 Python\\images wow\\img\\" + str(monkeyChoice) + ".jpg"
monkeyHist = 1

def draw_img():
    global monkeyHist
    if rep == 0:
        time.sleep(delay)
    monkeyHist += 1
    img = cv2.imread(image, cv2.COLOR_BGR2RGB)
    cv2.imshow(window_name, img)
    cv2.setWindowProperty(window_name, cv2.WND_PROP_TOPMOST, 1)
    cv2.waitKey(0)
    draw_img()
    rep = rep + 1

if cv2.getWindowProperty('Monkey Virus', cv2.WND_PROP_VISIBLE) < 1:
    while k < monkeyHist:
        draw_img()
You call cv2.imshow(...) followed by cv2.waitKey(0) on the main thread. Therefore, after the first call, the thread will be blocked and no further code will be executed until the user presses a key. If you want to show a second image, you need to call cv2.imshow() again with a different window_name argument before the call to cv2.waitKey(0).
(Also
while k < monkeyHist:
draw_img()
is probably an endless loop, as your draw_img() function only ever increases monkeyHist, but never increases k; therefore, k is forever smaller than monkeyHist.)
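As a minimal sketch of that suggestion (the file names here are placeholders), both windows are created before the single blocking waitKey call:

import cv2

img1 = cv2.imread("first.jpg")
img2 = cv2.imread("second.jpg")

cv2.imshow("First image", img1)    # each imshow gets its own window name
cv2.imshow("Second image", img2)
cv2.waitKey(0)                     # blocks once, with both windows visible
cv2.destroyAllWindows()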

PyAV: how to display multiple video streams to the screen at the same time

I'm just learning to work with video frames and am new to the Python language. I need to display multiple video streams on the screen at the same time using PyAV.
The code below works fine for one camera. Please help me to display multiple cameras on the screen. What should I add or fix in this code?
import av
import cv2

dicOption = {'buffer_size':'1024000','rtsp_transport':'tcp','stimeout':'20000000','max_delay':'200000'}
video = av.open("rtsp://viewer:vieweradmin@192.16.5.69:80/1", 'r', format=None, options=dicOption, metadata_errors='nostrict')

try:
    for packet in video.demux():
        for frame in packet.decode():
            if packet.stream.type == 'video':
                print(packet)
                print(frame)
                img = frame.to_ndarray(format='bgr24')
                #time.sleep(1)
                cv2.imshow("Video", img)
                if cv2.waitKey(1) & 0xFF == ord('q'):
                    break
except KeyboardInterrupt:
    pass

cv2.destroyAllWindows()
Playing multiple streams with PyAV is possible but not trivial. The main challenge is decoding multiple streams simultaneously, which in a single-threaded program can take longer than the frame rate of the videos allows. Unfortunately, threads won't help here (because of the Global Interpreter Lock, Python lets only one thread execute at any given time), so the solution is to build a multi-process architecture.
I created the code below for a side project; it implements a simple multi-stream video player using PyAV and OpenCV. It creates a separate background process to decode each stream, using queues to send the frames to the main process. Because the queues have limited size, there is no risk of the decoders outpacing the main process: if a frame is not retrieved by the time the next one is ready, its process will block until the main process catches up.
All streams are assumed to run at the same frame rate.
import av
import cv2
import numpy as np
import logging

from argparse import ArgumentParser
from math import ceil
from multiprocessing import Process, Queue
from time import time


def parseArguments():
    r'''Parse command-line arguments.
    '''
    parser = ArgumentParser(description='Video player that can reproduce multiple files simultaneously')
    parser.add_argument('paths', nargs='+', help='Paths to the video files to be played')
    parser.add_argument('--resolution', type=int, nargs=2, default=[1920, 1080], help='Resolution of the combined video')
    parser.add_argument('--fps', type=int, default=15, help='Frame rate used when playing video contents')

    return parser.parse_args()


def decode(path, width, height, queue):
    r'''Decode a video and return its frames through a process queue.

        Frames are resized to `(width, height)` before returning.
    '''
    container = av.open(path)
    for frame in container.decode(video=0):
        # TODO: Keep image ratio when resizing.
        image = frame.to_rgb(width=width, height=height).to_ndarray()
        queue.put(image)

    queue.put(None)


class GridViewer(object):
    r'''Interface for displaying video frames in a grid pattern.
    '''
    def __init__(self, args):
        r'''Create a new grid viewer.
        '''
        size = float(len(args.paths))
        self.cols = ceil(size ** 0.5)
        self.rows = ceil(size / self.cols)

        (width, height) = args.resolution
        self.shape = (height, width, 3)

        self.cell_width = width // self.cols
        self.cell_height = height // self.rows

        cv2.namedWindow('Video', cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO | cv2.WINDOW_GUI_EXPANDED)
        cv2.resizeWindow('Video', width, height)

    def update(self, queues):
        r'''Query the frame queues and update the viewer.

            Return whether all decoders are still active.
        '''
        grid = np.zeros(self.shape, dtype=np.uint8)

        for (k, queue) in enumerate(queues):
            image = queue.get()
            if image is None:
                return False

            (i, j) = (k // self.cols, k % self.cols)
            (m, n) = image.shape[:2]

            a = i * self.cell_height
            b = a + m
            c = j * self.cell_width
            d = c + n

            grid[a:b, c:d] = image

        grid = cv2.cvtColor(grid, cv2.COLOR_RGB2BGR)

        cv2.imshow('Video', grid)
        cv2.waitKey(1)

        return True


def play(args):
    r'''Play multiple video files in a grid interface.
    '''
    grid = GridViewer(args)

    queues = []
    processes = []
    for path in args.paths:
        queues.append(Queue(1))
        processes.append(Process(target=decode, args=(path, grid.cell_width, grid.cell_height, queues[-1]), daemon=True))
        processes[-1].start()

    period = 1.0 / args.fps
    t_start = time()
    t_frame = 0

    while grid.update(queues):
        # Spin-lock the thread as necessary to maintain the frame rate.
        while t_frame > time() - t_start:
            pass

        t_frame += period

    # Terminate any lingering processes, just in case.
    for process in processes:
        process.terminate()


def main():
    logging.disable(logging.WARNING)
    play(parseArguments())


if __name__ == '__main__':
    main()
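For example, assuming the script above is saved as player.py, it could be started with python player.py clip1.mp4 clip2.mp4 --fps 25, and both clips (or RTSP URLs in place of file paths) would be tiled into the single 'Video' window.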

Can't seem to get uasyncio working in a micropython script for a PyBoard

I am designing a new time/score keeper for an air hockey table using a PyBoard as a base. My plan is to use a TM1637 (4x7-segment display) for the time display, a rotary encoder with a button to set the time, IR sensors and a couple of 7-segment displays for scoring, IR reflector sensors for the goal lines, and a relay to control the fan.
I'm getting hung up trying to separate the clock into its own task while focusing on reading the sensors. I figured I could use uasyncio to split everything up nicely, but I can't figure out where to put the calls to spin off a task for the clock and, eventually, the sensors.
On execution right now, it appears the rotary encoder is assigned the default value, no timer is started, the encoder doesn't set the time, and the program returns control to the REPL rather quickly.
Prior to trying to async everything, I had the rotary encoder and timer working well. Now, not so much.
from rotary_irq_pyb import RotaryIRQ
from machine import Pin
import pyb        # used below for pyb.Pin (missing from the original listing)
import tm1637
import utime
import uasyncio

async def countdown(cntr):
    # just init min/sec to any int > 0
    min = sec = 99
    enableColon = True
    while True:
        # update the 4x7seg with the time remaining
        min = abs(int((cntr - utime.time()) / 60))
        sec = (cntr - utime.time()) % 60
        #print(str(), str(sec), sep=':' )
        enableColon = not enableColon      # alternately blink the colon
        tm.numbers(min, sec, colon = enableColon)
        if(min + sec == 0):                # once both reach zero, break
            break
        await uasyncio.sleep(500)

X1 = pyb.Pin.board.X1
X2 = pyb.Pin.board.X2
Y1 = pyb.Pin.board.Y1
Y2 = pyb.Pin.board.Y2

button = pyb.Pin(pyb.Pin.board.X3, pyb.Pin.IN)

r = RotaryIRQ(pin_num_clk=X1,
              pin_num_dt=X2,
              min_val=3,
              max_val=10,
              reverse=False,
              range_mode=RotaryIRQ.RANGE_BOUNDED)

tm = tm1637.TM1637(clk = Y1, dio = Y2)

val_old = val_new = 0
while True:
    val_new = r.value()

    if(val_old != val_new):
        val_old = val_new
        print(str(val_new))

    if(button.value()):       # save value as minutes
        loop = uasyncio.get_event_loop()
        endTime = utime.time() + (60 * val_new)
        loop.create_task(countdown(endTime))
        r.close()             # Turn off Rotary Encoder
        break

#loop = uasyncio.get_event_loop()
#loop.create_task(countdown(et))
#loop.run_until_complete(countdown(et))
I'm sure it's something simple, but this is the first non-CLI python script I've done, so I'm sure there are a bunch of silly mistakes. Any assistance would be appreciated.
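In case it helps to see the general shape, here is a minimal sketch (not the code above; the task names are placeholders) of the usual uasyncio pattern: each long-running activity is its own coroutine, every task is created up front, and the event loop is started once and keeps running:

import uasyncio

async def countdown(end_time):
    while True:
        # ... update the 4x7seg with the time remaining ...
        await uasyncio.sleep_ms(500)    # yield so other tasks can run

async def poll_sensors():
    while True:
        # ... read the encoder, button and goal-line sensors here ...
        await uasyncio.sleep_ms(20)

loop = uasyncio.get_event_loop()
loop.create_task(countdown(0))
loop.create_task(poll_sensors())
loop.run_forever()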

python, cv2.imshow(), raspberryPi and a black screen

I'm currently trying to write code with a GUI that will allow toggling image processing on and off. Ideally the code will allow turning the window view on/off, doing real-time image processing (pretty basic), and controlling an external board.
The problem I'm having revolves around the cv2.imshow() function. A few months back I made a push to increase processing rates by switching from picamera to cv2, where I can perform more complex computations like background subtraction without having to call Python all the time. Using the bcm2835-v4l2 package, I was able to pull images directly from the Pi camera using cv2.
Fast forward six months: while trying to update the code, I find that cv2.imshow() no longer displays correctly. I thought it might be a problem with bcm2835-v4l2, but tests using matplotlib show that the connection is fine; it appears to have everything to do with cv2.imshow(), or so I guess.
I am actually creating a separate thread (using the threading module) for image capture, and I'm wondering if this could be the culprit. I don't think so, though, as typing in the commands
import cv2
camera = cv2.VideoCapture(0)
grabbed,frame = camera.read()
cv2.imshow("frame", frame)
produces the same black screen
Below is the code I am using (on the RPi 3); some images showed the error and what was expected.
For reference, here are the details of my system:
Raspberry pi3
raspi stretch
python 3.5.1
opencv 3.4.1
Code
import cv2
from threading import Thread
import time
import numpy as np
from tkinter import Button, Label, mainloop, Tk, RIGHT

class GPIOControllersystem:
    def __init__(self, OutPinOne=22, OutPinTwo=27, Objsize=30, src=0):
        self.Objectsize = Objsize

        # Build GUI controller
        self.TK = Tk()   # Place TK GUI class into self

        # Variables
        self.STSP = 0
        self.ShutdownVar = 0
        self.Abut = []
        self.Bbut = []
        self.Cbut = []
        self.Dbut = []

        # setup pi camera for acquisition
        self.resolution = (640,480)
        self.framerate = 60

        # Video capture parameters
        (w,h) = self.resolution
        self.bytesPerFrame = w * h
        self.Camera = cv2.VideoCapture(src)
        self.fgbg = cv2.createBackgroundSubtractorMOG2()

    def Testpins(self):
        while True:
            grabbed, frame = self.Camera.read()
            frame = self.fgbg.apply(frame)
            if self.ShutdownVar == 1:
                break
            if self.STSP == 1:
                pic1, pic2 = map(np.copy, (frame, frame))
                pic1[pic1 > 126] = 255
                pic2[pic2 < 250] = 0
                frame = pic1
            elif self.STSP == 1:
                time.sleep(1)
            cv2.imshow("Window", frame)
        cv2.destroyAllWindows()

    def MProcessing(self):
        Thread(target=self.Testpins, args=()).start()
        return self

    def BuildGUI(self):
        self.Abut = Button(self.TK, text="Start/Stop System", command=self.CallbackSTSP)
        self.Bbut = Button(self.TK, text="Change Pump Speed", command=self.CallbackShutdown)
        self.Cbut = Button(self.TK, text="Shutdown System", command=self.callbackPumpSpeed)
        self.Dbut = Button(self.TK, text="Start System", command=self.MProcessing)

        self.Abut.pack(padx=5, pady=10, side=RIGHT)
        self.Bbut.pack(padx=5, pady=10, side=RIGHT)
        self.Cbut.pack(padx=5, pady=10, side=RIGHT)
        self.Dbut.pack(padx=5, pady=10, side=RIGHT)

        Label(self.TK, text="Controller").pack(padx=5, pady=10, side=RIGHT)
        mainloop()

    def CallbackSTSP(self):
        if self.STSP == 1:
            self.STSP = 0
            print("stop")
        elif self.STSP == 0:
            self.STSP = 1
            print("start")

    def CallbackShutdown(self):
        self.ShutdownVar = 1

    def callbackPumpSpeed(self):
        pass

if __name__ == "__main__":
    GPIOControllersystem().BuildGUI()
Using matplotlib.pyplot.imshow(), I can see that the connection between the Raspberry Pi camera and OpenCV is working through the bcm2835-v4l2 connection.
However, when using cv2.imshow() the window results in a black box; nothing is displayed.
Update: while testing I found that when I perform the following task
import cv2
import matplotlib
camera = cv2.VideoCapture(0)
grab,frame = camera.read()
matplotlib.pyplot.imshow(frame)
grab,frame = camera.read()
matplotlib.pyplot.imshow(frame)
the issue in this update was solved; it turned out to be a buffering issue, not related to the main problem, and appears to have no correlation to cv2.imshow().
On a Raspberry Pi you should work with
from picamera import PiCamera
Check out pyimagesearch for that.
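A minimal sketch of that picamera route (along the lines of the common pyimagesearch examples): frames are captured straight into a NumPy array that OpenCV can display.

import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

camera = PiCamera()
camera.resolution = (640, 480)
rawCapture = PiRGBArray(camera, size=camera.resolution)

for frame in camera.capture_continuous(rawCapture, format="bgr", use_video_port=True):
    image = frame.array              # NumPy array in BGR order
    cv2.imshow("Frame", image)
    rawCapture.truncate(0)           # clear the buffer for the next frame
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()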

Concurrent functions running in separate process using pygame and multiprocessing

Suppose we want to drive an autonomous car by predicting image labels from a previously collected set of images and labels (a machine learning application). For this task, the car is connected via Bluetooth serial (rfcomm) to the host computer (a PC running *NIX), and the images are streamed directly from an Android phone using IP Webcam. Meanwhile, the PC runs a program that links these two functions, displaying the captured images in a drawing environment created by pygame and sending the instructions back to the car over serial.
At the moment, I've tried to implement those processes using the multiprocessing module; they seemed to work, but when I execute the client, the drawing code (under if __name__ == '__main__') only runs after the getKeyPress() function ends.
The question is: is it possible to parallelize or synchronize the drawing function enclosed within if __name__ == '__main__' with the process declared in getKeyPress(), so that the program runs as two independent processes?
Here's the implemented code so far:
import urllib
import time
import os
import sys
import serial
import signal
import multiprocessing
import numpy as np
import scipy
import scipy.io as sio
import matplotlib.image as mpimg
from pygame.locals import *

PORT = '/dev/rfcomm0'
SPEED = 115200
ser = serial.Serial(PORT)
status = False
move = None
targets = []
inputs = []
tic = False

def getKeyPress():
    import pygame
    pygame.init()
    global targets
    global status
    while not status:
        pygame.event.pump()
        keys = pygame.key.get_pressed()
        targets, status = processOutputs(targets, keys)
    targets = np.array(targets)
    targets = flattenMatrix(targets)
    sio.savemat('targets.mat', {'targets':targets})

def rgb2gray(rgb):
    r, g, b = np.rollaxis(rgb[...,:3], axis = -1)
    return 0.299 * r + 0.587 * g + 0.114 * b

def processImages(inputX, inputs):
    inputX = flattenMatrix(inputX)
    if len(inputs) == 0:
        inputs = inputX
    elif inputs.shape[1] >= 1:
        inputs = np.hstack((inputs, inputX))
    return inputs

def flattenMatrix(mat):
    mat = mat.flatten(1)
    mat = mat.reshape((len(mat), 1))
    return mat

def send_command(val):
    connection = serial.Serial( PORT,
                                SPEED,
                                timeout=0,
                                stopbits=serial.STOPBITS_TWO
                                )
    connection.write(val)
    connection.close()

def processOutputs(targets, keys):
    global move
    global status
    global tic
    status = False
    keypress = ['K_p', 'K_UP', 'K_LEFT', 'K_DOWN', 'K_RIGHT']
    labels = [1, 2, 3, 4, 5]
    commands = ['p', 'w', 'r', 'j', 's']
    text = ['S', 'Up', 'Left', 'Down', 'Right']
    if keys[K_q]:
        status = True
        return targets, status
    else:
        for i, j, k, g in zip(keypress, labels, commands, text):
            cmd = compile('cond = keys['+i+']', '<string>', 'exec')
            exec cmd
            if cond:
                move = g
                targets.append(j)
                send_command(k)
                break
        send_command('p')
    return targets, status

targetProcess = multiprocessing.Process(target=getKeyPress)
targetProcess.daemon = True
targetProcess.start()

if __name__ == '__main__':
    import pygame
    pygame.init()
    w = 288
    h = 352
    size = (w, h)
    screen = pygame.display.set_mode(size)
    c = pygame.time.Clock() # create a clock object for timing
    pygame.display.set_caption('Driver')
    ubuntu = pygame.font.match_font('Ubuntu')
    font = pygame.font.Font(ubuntu, 13)
    inputs = []
    try:
        while not status:
            urllib.urlretrieve("http://192.168.0.10:8080/shot.jpg", "input.jpg")
            try:
                inputX = mpimg.imread('input.jpg')
            except IOError:
                status = True
            inputX = rgb2gray(inputX)/255
            out = inputX.copy()
            out = scipy.misc.imresize(out, (352, 288), interp='bicubic', mode=None)
            scipy.misc.imsave('input.png', out)
            inputs = processImages(inputX, inputs)
            print inputs.shape[1]
            img = pygame.image.load('input.png')
            screen.blit(img, (0,0))
            pygame.display.flip()
            c.tick(1)
            if move != None:
                text = font.render(move, False, (255, 128, 255), (0, 0, 0))
                textRect = text.get_rect()
                textRect.centerx = 20 #screen.get_rect().centerx
                textRect.centery = 20 #screen.get_rect().centery
                screen.blit(text, textRect)
                pygame.display.update()
            if status:
                targetProcess.join()
                sio.savemat('inputs.mat', {'inputs':inputs})
    except KeyboardInterrupt:
        targetProcess.join()
        sio.savemat('inputs.mat', {'inputs':inputs})
    targetProcess.join()
    sio.savemat('inputs.mat', {'inputs':inputs})
Thanks in advance.
I would personally suggest writing this without using the multiprocessing module: it uses fork() which has unspecified effects with most complex libraries, like in this case pygame.
You should try to write this as two completely separate programs. It forces you to think about what data needs to go from one to the other, which is both a bad and a good thing (as it may clarify things). You can use some inter-process communication facility, like the stdin/stdout pipe; e.g. in one program (the "main" one) you start the other as a sub-process like this:
popen = subprocess.Popen([sys.executable, '-u', 'my_subproc.py'],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
(The -u is for unbuffered.)
Then read/write the data via popen.stdin/popen.stdout in the parent process, and via sys.stdin/sys.stdout in the subprocess. The simplest example would be if the two processes only need a synchronization signal, e.g. the parent process waits in a loop for the subprocess to say "next please". To do this the subprocess does print 'next please', and the parent process does popen.stdout.readline(). (The print goes to the subprocess's sys.stdout, which the parent reads from popen.stdout.)
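A minimal sketch of that synchronization, assuming the child script is saved as my_subproc.py (both names are just examples):

import sys
import subprocess

popen = subprocess.Popen([sys.executable, '-u', 'my_subproc.py'],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)

for step in range(10):
    line = popen.stdout.readline()        # block until the child is ready
    if line.strip() == 'next please':
        popen.stdin.write('go\n')         # hand the child its next command
        popen.stdin.flush()

And in my_subproc.py:

import sys

while True:
    print 'next please'                   # arrives on the parent's popen.stdout
    command = sys.stdin.readline()        # wait for the parent's reply
    # ... do one unit of work ...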
Unrelated small note:
keypress = ['K_p', ...]
...
cmd = compile('cond = keys['+i+']', '<string>', 'exec')
exec cmd
if cond:
This looks like very heavy code to just do:
keypress = [K_p, ...] # not strings, directly the values
...
if keys[i]:
My suggestion is to use separate threads.
#At the beginning
import threading

#Instead of def getKeyPress()
class getKeyPress(threading.Thread):
    def run(self):
        import pygame
        pygame.init()
        global targets
        global status
        while not status:
            pygame.event.pump()
            keys = pygame.key.get_pressed()
            targets, status = processOutputs(targets, keys)
        targets = np.array(targets)
        targets = flattenMatrix(targets)
        sio.savemat('targets.mat', {'targets':targets})

#Instead of
#targetProcess = multiprocessing.Process(target=getKeyPress)
#targetProcess.daemon = True
#targetProcess.start()
gkp = getKeyPress()
gkp.start()
An alternative would be creating two different scripts and using sockets to handle the inter-process communication.
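For completeness, a minimal sketch of that socket approach (the port number is arbitrary): one script listens for commands, the other connects and sends them.

import socket

# Listener side (e.g. the drawing script):
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('localhost', 50007))
server.listen(1)
conn, addr = server.accept()
data = conn.recv(1024)        # e.g. the latest key press sent by the other script
conn.close()
server.close()

# Sender side (e.g. the key-press script):
# client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# client.connect(('localhost', 50007))
# client.send(b'Up')
# client.close()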
