Python livestreamer stream to image

The current code I have:
from livestreamer import Livestreamer

session = Livestreamer()
stream = session.streams('http://www.twitch.tv/esl_csgo')
stream = stream['source']
fd = stream.open()
with open("/tmp/stream.dat", 'wb') as f:
    while True:
        data = fd.read(1024)
        f.write(data)
I would like to get a frame out of this stream and cut it before storing it as a PNG image.
This piece of code works, and the resulting file plays in VLC without a problem, but I would like to obtain a frame without continuously writing the stream to disk, to reduce I/O on the hard disk.
I've tried to use cv2, but I couldn't find my way around the API after installing it via https://github.com/BVLC/caffe/wiki/Ubuntu-16.04-or-15.10-OpenCV-3.1-Installation-Guide
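For reference, one way to avoid writing the whole stream to disk might be to pipe the stream bytes straight into ffmpeg and ask it for a single frame on stdout. This is only a minimal sketch, assuming ffmpeg is on the PATH and the stream is currently live; the 300 kB buffer size is a guess borrowed from the answer further down:

import subprocess as sp
from livestreamer import Livestreamer

session = Livestreamer()
streams = session.streams('http://www.twitch.tv/esl_csgo')
fd = streams['source'].open()

# Ask ffmpeg to decode one frame from stdin and emit it as PNG on stdout.
cmd = ['ffmpeg',
       '-i', '-',            # read the container from stdin
       '-frames:v', '1',     # stop after the first decoded video frame
       '-f', 'image2pipe',   # write the image to a pipe instead of a file
       '-vcodec', 'png', '-']
proc = sp.Popen(cmd, stdin=sp.PIPE, stdout=sp.PIPE)

# Feed a few hundred kB so ffmpeg sees a complete keyframe, then close stdin.
for _ in range(300):
    proc.stdin.write(fd.read(1024))
proc.stdin.close()
png_bytes = proc.stdout.read()
fd.close()

with open('frame.png', 'wb') as f:
    f.write(png_bytes)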

Related

From JPG to b64encode to cv2.imread()

For a program I am writing, I am transferring an image from one computer - using base64.b64encode(f.read(image)) - and trying to read it in the receiving script without saving it to the hard drive (in an effort to minimize processing time). I'm having a hard time figuring out how to read the image into OpenCV without saving it locally.
Here is what my code for sending the image looks like:
import base64

# Read the image file in binary mode before encoding.
f = open('image.jpg', 'rb')
sendthis = f.read()
f.close()
databeingsent = base64.b64encode(sendthis)
client.publish('/image', databeingsent, 0)
# this is an MQTT publish, details for SO shouldn't be relevant
Meanwhile, here is the code receiving it. (This is in an on_message function, since I'm using MQTT for the transfer.)
def on_message(client, userdata, msg):  # msg.payload is incoming data
    img = base64.b64decode(msg.payload)
    source = cv2.imread(img)
    cv2.imshow("image", source)
After the message decodes, I have the error:
"TypeError: Your input type is not a numpy array".
I've done some searching and I can't seem to find a relevant solution. Some exist for converting text files to numpy arrays via base64, but none really cover taking an image and immediately reading the decoded data into OpenCV without the intermediate step of saving it to the hard drive (i.e., the inverse of the process used to read the file in the "send" script).
I'm still pretty new to Python and OpenCV, so if there's a better encoding method to send the image, whatever solves the problem. How the image is sent is irrelevant, as long as I can read it on the receiving end without saving it as a .jpg to disk.
Thanks!
You can get a numpy array from your decoded data using:
import numpy as np
...
img = base64.b64decode(msg.payload)
npimg = np.frombuffer(img, dtype=np.uint8)  # np.fromstring is deprecated
Then you need imdecode to read the image from a buffer in memory. imread is meant to load an image from a file.
So:
import numpy as np
...
def on_message(client, userdata, msg):  # msg.payload is incoming data
    img = base64.b64decode(msg.payload)
    npimg = np.frombuffer(img, dtype=np.uint8)
    source = cv2.imdecode(npimg, 1)
From the OpenCV documentation we can see that:
imread: Loads an image from a file.
imdecode: Reads an image from a buffer in memory.
That seems a better way to do what you want.
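For completeness, here is a minimal round trip you could test locally. This is a sketch, not the poster's code: it assumes an image file image.jpg exists, and uses cv2.imencode on the sending side so that no disk write is ever needed:

import base64
import cv2
import numpy as np

# Sender side: encode an in-memory frame to JPEG bytes, then to base64.
frame = cv2.imread('image.jpg')  # stand-in for any numpy image array
ok, buf = cv2.imencode('.jpg', frame)
payload = base64.b64encode(buf.tobytes())

# Receiver side: decode base64 back into an image array, all in memory.
raw = base64.b64decode(payload)
npimg = np.frombuffer(raw, dtype=np.uint8)
restored = cv2.imdecode(npimg, 1)
assert restored is not None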

Piped FFMPEG won't write frames correctly

I am using Python's Image module (PIL) to load JPEGs and modify them. After I have a modified image, I want to load that image into a video, using more modified images as frames in my video.
I have 3 programs written to do this:
ImEdit (My image editing module that I wrote)
VideoWriter (writes to an mp4 file using FFMPEG) and
VideoMaker (The file I'm using to do everything)
My VideoWriter looks like this...
import subprocess as sp
import os
import Image

FFMPEG_BIN = "ffmpeg"

class VideoWriter():
    def __init__(self, xsize=480, ysize=360, FPS=29,
                 outDir=None, outFile=None):
        if outDir is None:
            print("No specified output directory. Using default.")
            outDir = "./VideoOut"
        if outFile is None:
            print("No specified output file. Setting temporary.")
            outFile = "temp.mp4"
        if outDir and outFile:
            if os.path.exists(outDir + outFile):
                print("File path", outDir + outFile, "already exists:",
                      "change output filename or",
                      "overwriting will occur.")
        self.outDir = outDir
        self.outFile = outFile
        self.xsize, self.ysize, self.FPS = xsize, ysize, FPS
        self.buildWriter()

    def setOutFile(self, fileName):
        self.outFile = fileName

    def setOutDir(self, dirName):
        self.outDir = dirName

    def buildWriter(self):
        commandWriter = [FFMPEG_BIN,
                         '-y',
                         '-f', 'rawvideo',
                         '-vcodec', 'mjpeg',
                         '-s', '480x360',
                         '-i', '-',
                         '-an',  # no audio
                         '-r', str(29),
                         './{}//{}'.format(self.outDir, self.outFile)]
        self.pW = sp.Popen(commandWriter,
                           stdin=sp.PIPE)

    def writeFrame(self, ImEditObj):
        stringData = ImEditObj.getIm().tostring()
        im = Image.fromstring("RGB", (309, 424), stringData)
        im.save(self.pW.stdin, "JPEG")
        self.pW.stdin.flush()

    def finish(self):
        self.pW.communicate()
        self.pW.stdin.close()
ImEditObj.getIm() returns an instance of a PIL Image object.
This code works to the extent that I can load one frame into the video, but no matter how many more calls to writeFrame I make, the video only ever ends up being one frame long. I have other code that works for making a video out of single frames, and that code is nearly identical to this code. I don't know what difference there is, though, that makes this code not work as intended where the other code does.
My question is...
How can I modify my VideoWriter class so that I can pass in an instance of a PIL Image object and write that frame to an output file? I would also like to be able to write more than one frame to the video.
I've spent five hours or more trying to debug this and haven't found anything helpful on the internet, so if I missed any Stack Overflow questions that would point me in the right direction, those would be appreciated...
EDIT:
After a bit more debugging, the issue may have been that I was trying to write to a file that already existed; however, this doesn't make much sense given the -y flag in my commandWriter. The -y flag should overwrite any file that already exists. Any thoughts on that?
I suggest that you follow the OpenCV tutorial in writing videos. This is a very common way of writing video files from Python, so you should find many answers on the internet, if you can't get certain things to work.
Note that the VideoWriter will discard (and won't write) any frames that are not in the exact same pixel size that you give it on initialization.
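As a rough illustration of the tutorial's approach, here is a minimal sketch (not the poster's code; assuming OpenCV 3+, with a placeholder codec, file name, and frame contents):

import cv2
import numpy as np

# Open a writer: output path, codec, FPS, and the exact frame size (width, height).
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
writer = cv2.VideoWriter('out.mp4', fourcc, 29, (480, 360))

for i in range(100):
    # Each frame must be a uint8 BGR array of shape (360, 480, 3);
    # frames of any other size are silently dropped.
    frame = np.zeros((360, 480, 3), dtype=np.uint8)
    frame[:, :(i * 4) % 480] = (0, 255, 0)  # trivial animation
    writer.write(frame)

writer.release()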

Failed to GET matplotlib generated png in django

I want to serve matplotlib-generated images with Django.
If the image is a static png file, the following code works great:
from django.http import HttpResponse

def static_image_view(request):
    response = HttpResponse(mimetype='image/png')
    with open('test.png', 'rb') as f:
        response.write(f.read())
    return response
However, if the image is dynamically generated:
import numpy as np
import matplotlib
matplotlib.use('Agg')
from matplotlib import pyplot as plt

def dynamic_image_view(request):
    response = HttpResponse(mimetype='image/png')
    fig = plt.figure()
    plt.plot(np.random.rand(100))
    plt.savefig(response, format='png')
    plt.close(fig)
    return response
When accessing the url in Chrome (v36.0), the image will show up for a few seconds, then disappear and turn to the alt text. It seems that the browser doesn't know the image has already finished loading and waits until timeout. Checking with Chrome > Tools > Developer tools > Network supports this hypothesis: although the image appears after only about 1 sec, the status of the corresponding http request becomes "failed" after about 5 sec.
Note again that this strange phenomenon occurs only with the dynamically generated image, so it shouldn't be Chrome's problem (though it doesn't happen with IE or Firefox, presumably due to different rules for handling timed-out requests).
To make it more tricky (i.e., hard to reproduce), it seems to be network speed dependent. It happens if I access the url from an IP in China, but not if via a proxy in the US (which seems to be faster visiting the host on which django is running)...
Following @HSquirrel's suggestion, I tested writing the PNG to a temporary disk file. Strangely, saving the file with matplotlib didn't work:
plt.savefig('MPL.png', format='png')
with open('MPL.png', 'rb') as f:
    response.write(f.read())
while saving the file with PIL worked:
import io
from PIL import Image
f = io.BytesIO()
plt.savefig(f, format='png')
f.seek(0)
im = Image.open(f)
im.save('PIL.png', 'PNG')
An attempt to get rid of the temp file failed:
im.save(response, 'PNG')
However, if I generate the image data stream with PIL rather than matplotlib, temporary disk file would be unnecessary. The following code works:
from PIL import Image, ImageDraw
im = Image.new('RGBA', (256,256), (0,255,0,255))
draw = ImageDraw.Draw(im)
draw.line((100,100, 150,200), fill=128, width=3)
im.save(response, 'PNG')
Finally, plt.savefig(response, format='jpeg') has no problem at all.
Have you tried saving the image to disk and then returning that? (you can periodically clear your disk of such generated images based on their time of creation)
If that gives the same problem, it might be a problem with the way the PNG is generated. Then you could use some kind of image library (like PIL) to make sure all your PNGs are (re)generated in a way that works with all browsers.
EDIT:
I've checked the png you've linked and I've played around with it a bit, opening and saving it with different programs and with PIL. I get different binary data every time. It seems each program decides which chunks to keep and which to remove. They all encode the png image data differently as well (as far as I can see, I am by no means a specialist in this, I just looked at the binary data based on the specs).
There are a few different paths you can take:
1. The quick and dirty one:
import io
from PIL import Image

f = io.BytesIO()
plt.savefig(f, format='png')
f.seek(0)
im = Image.open(f)
tempfilename = generatetempfilename()
im.save(tempfilename, 'PNG')
with open(tempfilename, 'rb') as f:
    response.write(f.read())
2. Adapt how matplotlib makes PNG files (possibly by just using PIL for that as well). See http://matplotlib.org/users/customizing.html#customizing-matplotlib
3. If it's an option for you, use JPEG.
4. Figure out what's wrong with the PNG generated by matplotlib and fix it at the binary level (I don't recommend this). You can use xxd (Linux command: xxd test.png) to see what the files look like in binary, and then work through them using the PNG spec: overview, chunk spec.
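A variant of option 1 that skips the temp file entirely might also be worth trying. This is only a sketch, assuming it is the PIL re-encoding (rather than the disk round trip) that fixes the bytes; the figure is re-saved into a fresh BytesIO and the finished buffer is written to the response in one go:

import io
from PIL import Image

f = io.BytesIO()
plt.savefig(f, format='png')
f.seek(0)

# Re-encode through PIL entirely in memory, then hand the
# completed bytes to the response in a single write.
im = Image.open(f)
out = io.BytesIO()
im.save(out, 'PNG')
response.write(out.getvalue())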

Downloading the first frame of a twitch.tv stream

Using this API I've managed to download stream data, but I can't figure out how to parse it. I've looked at the RTMP format, but it doesn't seem to match.
from livestreamer import Livestreamer
livestreamer = Livestreamer()
# set to a stream that is actually online
plugin = livestreamer.resolve_url("http://twitch.tv/froggen")
streams = plugin.get_streams()
stream = streams['mobile_High']
fd = stream.open()
data = fd.read()
I've uploaded an example of the data here.
Ideally I wouldn't have to parse it as video, I only need the first keyframe as an image. Any help would be greatly appreciated!
Update: OK, I got OpenCV working; it grabs the first frame of a random video file I had. However, it produced a nonsense image when I used the same code on a file with stream data.
Alright, I figured it out. I made sure to write the data as binary, and OpenCV is able to decode the first video frame. The resulting image had its R and B channels switched, but that was easily corrected. Downloading about 300 kB seems to be enough to be sure that the full image is there.
import time, Image
import cv2
from livestreamer import Livestreamer

# change to a stream that is actually online
livestreamer = Livestreamer()
plugin = livestreamer.resolve_url("http://twitch.tv/flosd")
streams = plugin.get_streams()
stream = streams['mobile_High']

# download enough data to make sure the first frame is there
fd = stream.open()
data = ''
while len(data) < 3e5:
    data += fd.read()
    time.sleep(0.1)
fd.close()

fname = 'stream.bin'
open(fname, 'wb').write(data)
capture = cv2.VideoCapture(fname)
imgdata = capture.read()[1]
imgdata = imgdata[..., ::-1]  # BGR -> RGB
img = Image.fromarray(imgdata)
img.save('frame.png')
# img.show()
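If the buffered data happens to cut off before a complete keyframe, capture.read() returns (False, None); a slightly more defensive version of the last few lines (a sketch) would be:

capture = cv2.VideoCapture(fname)
ok, imgdata = capture.read()
capture.release()
if not ok:
    raise RuntimeError("no decodable frame yet; try downloading more data")
img = Image.fromarray(imgdata[..., ::-1])  # BGR -> RGB
img.save('frame.png')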

Audio Recording in Python

I want to record short audio clips from a USB microphone in Python. I have tried pyaudio, which seemed to fail communicating with ALSA, and alsaaudio, whose code example produces unreadable files.
So my question: What is the easiest way to record clips from a USB mic in Python?
This script records to test.wav while printing the current amplitude:
import alsaaudio, wave, numpy

inp = alsaaudio.PCM(alsaaudio.PCM_CAPTURE)
inp.setchannels(1)
inp.setrate(44100)
inp.setformat(alsaaudio.PCM_FORMAT_S16_LE)
inp.setperiodsize(1024)

w = wave.open('test.wav', 'w')
w.setnchannels(1)
w.setsampwidth(2)
w.setframerate(44100)

while True:
    l, data = inp.read()
    a = numpy.fromstring(data, dtype='int16')
    print numpy.abs(a).mean()
    w.writeframes(data)
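Since the question asks for short clips rather than an endless loop, here is a hedged variation that stops after roughly five seconds and finalizes the WAV header (the duration and the frame arithmetic are illustrative, not part of the original answer):

# Record about 5 seconds at 44100 Hz, then close the file so the
# WAV header gets written with the correct length.
frames_needed = 5 * 44100
frames_read = 0
while frames_read < frames_needed:
    l, data = inp.read()
    if l > 0:  # l is the number of frames actually captured this period
        w.writeframes(data)
        frames_read += l
w.close()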
