Video streaming after image processing from server to web client - Python

I am still a beginner and I am trying to develop a home surveillance application with facial recognition. The main idea is to stream the image from the server to the web browser along with additional data: the name of the person detected (or "unknown"), based on detection with OpenCV, plus other information from a database when the person is recognized, such as sex, age, etc.
I did something that works, but I don't know how efficient it is, so I would like some opinions.
I am using Flask as the web server and Socket.IO for transferring real-time data.
The actual app runs as follows: a function, imageProcessing(), starts in a thread at launch. It takes the frames from the camera, does all the image processing (face detection and recognition), and stores the results in global variables: one for the current frame and one for an array of the persons recognized (or unknown). Next the web server app starts, and once the connection between the web client and the web server is established, the frame and the array of persons, along with additional information, are sent to the web client over WebSockets. The delay is less than a second, and everything runs inside the same app. So my questions are:
Is it efficient to have a thread that does image processing running inside the server app?
Is it efficient and secure to share a global variable this way, where imageProcessing() stores the image from the camera in it while, at the same time, socket.emit() reads it to send the frame to the web client?
Here is part of the code of the main app. The imageProcessing() function itself is not shown, since it just stores its results in global variables.
```
import threading
import time

import eventlet
eventlet.monkey_patch()  # patch the standard library before anything else uses it

from flask import Flask, render_template, request
from flask_socketio import SocketIO

import camera  # this is a file containing the imageProcessing function and frame global variable

cameraThread = threading.Thread(target=camera.imageProcessing)
cameraThread.start()

app = Flask(__name__)
socketio = SocketIO(app, logger=True, async_mode='eventlet')

@socketio.on('connect', namespace='/web')
def connect_web():
    print('[INFO] Web client connected: {}'.format(request.sid))

@socketio.on('message', namespace='/web')
def handle(msg):
    # push the latest frame and detections to all web clients ~10 times a second
    while True:
        message = {'image': camera.encode_image(camera.frame), 'person': camera.person}
        time.sleep(0.1)
        socketio.emit('stream_response', message, namespace='/web', broadcast=True)

@socketio.on('disconnect', namespace='/web')
def disconnect_web():
    print('[INFO] Web client disconnected: {}'.format(request.sid))

@app.route('/', methods=['POST', 'GET'])
def index():
    return render_template('index.html', async_mode=socketio.async_mode)

if __name__ == '__main__':
    socketio.run(app, host="127.0.0.1", port=5000)
```
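Regarding question 2: in CPython the single assignments are protected by the GIL, but the frame and the person array can still be updated between the two reads in the emit loop, so the emitted pair may be inconsistent. Below is a minimal sketch of guarding the pair with a threading.Lock; the grab_and_detect() helper is a hypothetical stand-in for the capture/detection step, not the original code:

```
import threading

lock = threading.Lock()
frame = None
person = []

def imageProcessing():
    global frame, person
    while True:
        new_frame, new_person = grab_and_detect()  # hypothetical capture + detection step
        with lock:
            # swap both values atomically from the readers' point of view
            frame, person = new_frame, new_person

def snapshot():
    # readers (e.g. the emit loop) get a consistent frame/person pair
    with lock:
        return frame, list(person)
```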
Initially I tried to make a separate client where the image was taken from the cameras and processed, then the result was sent to the web server via WebSockets, and from there again to the web client via WebSockets. But when doing so I had a delay of about 15 seconds. I suppose this approach of keeping the web server independent from the image processing app should be better in theory for several reasons, but I don't know why I get such a delay.
Should I still keep the separation image processing client -> web server -> web client? If so, any suggestion for an alternative technology, given the delay I get in this scenario with Socket.IO?
How would you design such an app from an architecture point of view? E.g. would you use WebSockets for streaming video or some other technology? Would you place the image processing in a separate client/app and transfer the result to the web server, or keep all processing on the same server?
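One commonly used alternative for the video itself is an MJPEG stream over plain HTTP, keeping Socket.IO only for the lightweight person/metadata events. A rough sketch in Flask, assuming camera.encode_image() returns raw JPEG bytes (in the question's code it may return base64, so that is an assumption):

```
from flask import Flask, Response

import camera  # the hypothetical module from the question

app = Flask(__name__)

def mjpeg_generator():
    # multipart/x-mixed-replace: the browser replaces the <img> content
    # with each new part, which yields a live video effect
    while True:
        jpg = camera.encode_image(camera.frame)  # assumed to be JPEG bytes
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + jpg + b'\r\n')

@app.route('/video_feed')
def video_feed():
    return Response(mjpeg_generator(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
```

On the page, the stream is then just <img src="/video_feed">, with no WebSocket involved for the frames.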

Related

Streaming video file from Flask server to React JS client

I am creating a "YouTube clone" using Python Flask on the backend and React JS on the client side, and I currently need to stream the video from a file on the server.
This is what I currently have:
```
@app.route('/api/video/<string:video_id>')
def ShowVideo(video_id):
    video = Video.query.filter_by(id=video_id).first()  # "Video" is the video database model (using SQLAlchemy)
    if video is None:
        return {"return": "401"}  # video doesn't exist in database
    # return ??
```
Now I have the video id and path in the database, and I need to stream the video over HTTP to a React JS website.
I've seen people use OpenCV to stream and put an image in the JS client, but from what I've seen it does not work with a video file that is already on the computer (I may be mistaken, I'm not sure).
Another thing I've seen is using JS to send the video directly to the browser (like in this video), but I don't think that will work in my circumstances.
Does anyone have any idea?
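For a file that already sits on disk, one option is to let Flask serve it with HTTP Range support, which is what the browser's <video> element uses to play and seek. A hedged sketch, assuming the model exposes the file path as video.path (that attribute name is an assumption):

```
from flask import Flask, send_file

app = Flask(__name__)

@app.route('/api/video/<string:video_id>')
def show_video(video_id):
    video = Video.query.filter_by(id=video_id).first()
    if video is None:
        return {"return": "404"}, 404  # video doesn't exist in database
    # conditional=True makes Flask honour Range request headers,
    # so the client can seek without downloading the whole file
    return send_file(video.path, mimetype='video/mp4', conditional=True)
```

On the React side, a plain <video src="/api/video/<id>" controls /> is then enough; no OpenCV is involved.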

Django Celery and Channels example

I have a Django app and need to generate files that can take up to a minute, so I pass that off to a background worker.
Currently the process works as follows: I post to the server, which replies with a URL that I can poll. I then poll that URL every 2 seconds, and the server sends back either "busy" or the URL of where the file is located in my S3 bucket.
I want to replace this polling with Django Channels, but I'm unsure of the best way to achieve this, as I can't find any examples online. Is Channels even intended for something like this?
My current thoughts are the following:
Start the file generation as soon as the client opens a connection on a specific route (previously this would have been a POST).
The background task gets started as soon as the client connects and gets the channel name as a parameter.
Once it is done, it sends the file path back to the consumer, which in turn sends it to the browser, where I'll use JS to create a download button.
Below is an example:
```
import json
from asgiref.sync import async_to_sync
from celery import shared_task
from channels.generic.websocket import WebsocketConsumer
from channels.layers import get_channel_layer

@shared_task
def my_bg_task(channel_name):
    # some long running calc here
    channel_layer = get_channel_layer()
    async_to_sync(channel_layer.send)(channel_name, {'type': 'generation_done', 'f_path': 'path/to/s3/bucket'})

class ReloadConsumer(WebsocketConsumer):
    def connect(self):
        my_bg_task.delay(self.channel_name)
        self.accept()

    def generation_done(self, event):
        # note: json.dumps({event}) would build a set and fail; dump the event dict itself
        self.send(text_data=json.dumps(event))
```
Is this the best way to achieve this?
Obviously, from a security point of view, it should not be accessible to anybody other than the user that opened the connection.
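On that security point, when the ASGI app is wrapped in Channels' AuthMiddlewareStack, the consumer can read the authenticated user from its scope and reject anyone else; a small sketch (the routing setup is assumed, not shown in the question):

```
from channels.generic.websocket import WebsocketConsumer

class ReloadConsumer(WebsocketConsumer):
    def connect(self):
        user = self.scope.get('user')  # populated by AuthMiddlewareStack
        if user is None or not user.is_authenticated:
            self.close()  # reject anonymous connections
            return
        self.accept()
        my_bg_task.delay(self.channel_name)
```

Note also that channel_layer.send() targets this consumer's unique channel_name rather than a broadcast group, so other connected clients would not receive the message in any case.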

Can I persist an http connection (or other data) across Flask requests?

I'm working on a Flask app which retrieves the user's XML from the myanimelist.net API (sample), processes it, and returns some data. The data returned can differ depending on the Flask page being viewed by the user, but the initial process (retrieving the XML, creating a User object, etc.) done before each request is always the same.
Currently, retrieving the XML from myanimelist.net is the bottleneck for my app's performance and adds on a good 500-1000ms to each request. Since all of the app's requests are to the myanimelist server, I'd like to know if there's a way to persist the http connection so that once the first request is made, subsequent requests will not take as long to load. I don't want to cache the entire XML because the data is subject to frequent change.
Here's the general overview of my app:
```
from flask import Flask
from functools import wraps
import requests

app = Flask(__name__)

def get_xml(f):
    @wraps(f)
    def wrap():
        # Get the XML before each app function
        r = requests.get('page_from_MAL')  # Current bottleneck
        user = User(data_from_r)  # User object
        response = f(user)
        return response
    return wrap

@app.route('/one')
@get_xml
def page_one(user_object):
    return 'some data from user_object'

@app.route('/two')
@get_xml
def page_two(user_object):
    return 'some other data from user_object'

if __name__ == '__main__':
    app.run()
```
So is there a way to persist the connection like I mentioned? Please let me know if I'm approaching this from the right direction.
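For the narrow question of keeping the HTTP connection alive, requests provides Session, which pools connections and reuses them (with keep-alive) across calls. A minimal sketch against the placeholder URL from the question; note that sharing one Session across Flask's worker threads is common practice but not an officially documented thread-safety guarantee:

```
import requests
from functools import wraps

session = requests.Session()  # reuses the TCP/TLS connection between calls

def get_xml(f):
    @wraps(f)
    def wrap():
        # later calls skip the connection and TLS handshake while the
        # pooled connection is still open
        r = session.get('page_from_MAL')  # placeholder URL from the question
        user = User(data_from_r)  # as in the question; defined elsewhere
        return f(user)
    return wrap
```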
I think you aren't approaching this from the right direction, because you place your app too much as a proxy of myanimelist.net.
What happens when you have 2000 users? Your app ends up making tons of requests to myanimelist.net, and a malicious user could definitely DoS your app (or use it to DoS myanimelist.net).
This is a much cleaner way IMHO:
Server side:
Create a websocket server (ex: https://github.com/aaugustin/websockets/blob/master/example/server.py)
When a user connects to the websocket server, add the client to a list; remove it from the list on disconnect.
For every connected user, frequently check myanimelist.net to get the associated XML (maybe lower the frequency as the number of online users grows).
For every XML document, diff it against your server's local version and, assuming there is a diff, send it to the client over the websocket channel (see the sketch after this answer).
Client side:
On receiving a diff: update the local XML with the differences.
Disconnect from the websocket after n seconds of inactivity, and when disconnected add a button on the interface to reconnect.
I doubt you can do anything much better assuming myanimelist.net doesn't provide a "push" API.
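A rough sketch of that push design using the websockets package (in websockets 10+ the handler receives only the connection; older versions also pass a path argument):

```
import asyncio
import websockets

connected = set()

async def handler(ws):
    connected.add(ws)
    try:
        await ws.wait_closed()  # keep the client registered until it leaves
    finally:
        connected.discard(ws)

async def poll_loop():
    while True:
        for ws in list(connected):
            # fetch this user's XML from myanimelist.net, diff it against the
            # cached copy, and push only the diff (details omitted)
            await ws.send('<diff>...</diff>')
        await asyncio.sleep(30)  # lower the frequency as the user count grows

async def main():
    async with websockets.serve(handler, 'localhost', 8765):
        await poll_loop()

asyncio.run(main())
```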

Recursive static content display with Flask

G'day,
I've written a piece of software in Python 3.x with PySide which dynamically displays images (as a slideshow) from a network directory. The software works as intended on local machines (i.e. the slideshow is displayed on the machine running the software).
I would like to run the software on a centralised server, which then serves each image of the slideshow temporarily on a local network web page. I've managed to serve static content in my code using the Flask library. Below is a snippet of my code:
```
def ServeImage(self, DisplayImage):
    app = Flask(__name__)

    @app.route("/")
    def main():
        return '''<img src='{}' />'''.format(url_for('static', filename=DisplayImage))

    app.run(host='10.2.x.x', port=5000)
```
'DisplayImage' is the output image path from my current code. At present, the above correctly displays the image in the browser at the designated local address. However, I'm unsure how to return to the slideshow component of the software (where DisplayImage is generated). It appears that once ServeImage() has been called, the rest of the code never runs; I believe it's due to the way this example Flask function was put together. What I'm hoping for is detailed in the pseudo-code below:
```
def ServeImage(self, DisplayImage):
    app = Flask(__name__)
    '''<img src='{}' />'''.format(url_for('static', filename=DisplayImage))
    app.run(host='10.2.x.x', port=5000)
    # Timer
    self.ImageNumber += 1
    self.SlideShow()
```
I don't know whether this is possible. As mentioned above, it appears that once app.run() is called, that particular address is served forever, so any code coming after app.run() will never run. What I'd like is: once the image has been served (i.e. once app.run() has been called), the ImageNumber variable is incremented and the main SlideShow() function is called, which in turn generates the next DisplayImage to be served on the web page.
Any clarification/suggestions here would be great!
Thanks!
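Since app.run() blocks, one way around this is to start Flask once in a daemon thread and have the single route always serve the current image, with the slideshow loop only updating a shared variable. A sketch under those assumptions (the names from the question are kept, and the host address is the placeholder from the question):

```
import threading
from flask import Flask, url_for

app = Flask(__name__)
current = {'filename': None}  # updated by the slideshow loop

@app.route('/')
def main():
    # always renders whatever image the slideshow selected last
    return "<img src='{}' />".format(url_for('static', filename=current['filename']))

def start_server():
    # daemon thread: the server dies with the main program; started only once
    threading.Thread(
        target=lambda: app.run(host='10.2.x.x', port=5000),
        daemon=True,
    ).start()
```

The slideshow code then just sets current['filename'] = DisplayImage on each tick instead of calling app.run() again.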

Requests timeout between App Engine and EC2

My webapp has two parts:
a GAE server which handles web requests and sends them to an EC2 REST server
an EC2 REST server which does all the calculations given information from GAE and sends back results
It works fine when the calculations are simple; otherwise, I get a timeout error on the GAE side.
I realize there are some approaches to this timeout issue, but after some research I found (please correct me if I am wrong):
taskqueue would not fit my needs, since some of the calculations could take more than half an hour.
A 'GAE backend instance' works if I reserve another instance all the time. But since I have already reserved an EC2 instance, I would like to find a "cheap" solution (not paying for a GAE backend instance and EC2 at the same time).
'GAE Asynchronous Requests' is also not an option, since it still waits for the response from EC2, even though users can send other requests while they are waiting.
Below is a simple case of my code. It:
asks users to upload a csv
parses the csv and sends its information to EC2
generates an output page given the response from EC2
OutputPage.py:
```
from przm import przm_batchmodel

class OutputPage(webapp.RequestHandler):
    def post(self):
        form = cgi.FieldStorage()
        thefile = form['upfile']
        # this is where the uploaded file is processed and sent to EC2 for computing
        html = przm_batchmodel.loop_html(thefile)
        przm_batchoutput_backend.przmBatchOutputPageBackend(thefile)
        self.response.out.write(html)

app = webapp.WSGIApplication([('/.*', OutputPage)], debug=True)
```
przm_batchmodel.py (this is the code which sends the info to EC2):
```
def loop_html(thefile):
    # parses the uploaded csv and sends its info to the REST server;
    # the returned value is an html page
    data = csv.reader(thefile.file.read().splitlines())
    response = urlfetch.fetch(url=REST_server, payload=data, method=urlfetch.POST,
                              headers=http_headers, deadline=60)
    return response
```
At this moment, my questions are:
Is there a way on the GAE side that allows me to just send the request to EC2 without waiting for its response? If this is possible, then on the EC2 side I can send users emails to notify them when the results are ready.
If question 1 is not possible, is there a way to create a monitor on EC2 which will invoke the calculation once the information is received from the GAE side?
I appreciate any suggestions.
Here are some points:
For question 1: You do not need to wait on the GAE side for EC2 to complete its work. You are already using URLFetch to send the data across to EC2. As long as it can send that data over to the EC2 side within 60 seconds, and its size is not more than 10 MB, you are fine.
You will need to make sure that you have a receipt handler on the EC2 side that is capable of collecting this data and sending back an ack. An ack will be sufficient for the GAE side to track the activity. You can then always write some code on the EC2 side to send the response back to the GAE side once the conversion is done or, as you mentioned, send off an email if needed.
I suggest that you create your own little tracker on the GAE side. For example, when the file is uploaded, create a Task and send the ack back to the client immediately. Then use a cron job or task queue on the App Engine side to simply send the work off to EC2, without waiting for EC2 to complete its job. Then let EC2 report back to GAE that its work is done for a particular task id, and send off an email (if required) to notify the users that the work is done. In fact, EC2 can even report back with a batch of task ids that it completed, instead of sending a notification for each one. A rough sketch of this follows.
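A sketch of that tracker idea on the (legacy Python 2) App Engine runtime using its push task queue. The /tasks/send_to_ec2 route, the task id, and the save_upload()/load_upload() helpers are illustrative assumptions, not part of the original code:

```
from google.appengine.api import taskqueue, urlfetch

class OutputPage(webapp.RequestHandler):
    def post(self):
        form = cgi.FieldStorage()
        thefile = form['upfile']
        task_id = save_upload(thefile)  # hypothetical: persist the upload, get an id
        # enqueue the EC2 hand-off and ack the client immediately
        taskqueue.add(url='/tasks/send_to_ec2', params={'task_id': task_id})
        self.response.out.write('Upload received; task %s queued.' % task_id)

class SendToEC2(webapp.RequestHandler):
    def post(self):
        task_id = self.request.get('task_id')
        payload = load_upload(task_id)  # hypothetical: read the stored csv back
        # fire the request at EC2; EC2 acks now and reports completion later
        urlfetch.fetch(url=REST_server, payload=payload,
                       method=urlfetch.POST, deadline=60)
```

EC2 can then call back to a GAE handler (or send the email itself) when the computation for that task id is finished.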
