I have a Django application and I want to create a UI from which users will be able to play videos. Videos are dynamically generated and saved on my web server.
I am running a Django application on an Apache web server through mod_wsgi. The videos take a long time to buffer/load; please suggest how I can improve this. I am using an Ubuntu server with 16 GB RAM, a quad-core processor, and a 1 TB SSD.
I would suggest using a cloud provider such as Amazon Web Services.
Have a look at django-storages; this will help you get your videos from Django up to AWS. You'll store your videos on Amazon's S3 and then, if you have an international audience, use CloudFront to stream them. You can use RTMP to do this (that's "proper" streaming, as opposed to incremental download while playing).
On the browser side you probably just want to use the <video> tag (see MDN). You'll be saving the source attribute of the video (a reference to the location of the video on CloudFront) on your Django model and then adding it into your templates.
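A minimal sketch of what that wiring could look like on the Django side, assuming django-storages and boto3 are installed; the bucket name, region, credentials, and CloudFront domain below are placeholders:

```python
# settings.py -- minimal django-storages configuration for the S3 backend.
# Bucket name, region, credentials, and CloudFront domain are placeholders.
INSTALLED_APPS = [
    # ...
    "storages",
]

DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_STORAGE_BUCKET_NAME = "my-video-bucket"
AWS_S3_REGION_NAME = "us-east-1"
AWS_ACCESS_KEY_ID = "..."          # from your AWS IAM user
AWS_SECRET_ACCESS_KEY = "..."
# Optionally serve through CloudFront instead of the raw S3 URL:
AWS_S3_CUSTOM_DOMAIN = "dxxxxxxxxxxxx.cloudfront.net"


# models.py -- the FileField's .url then resolves to S3/CloudFront.
from django.db import models

class Video(models.Model):
    title = models.CharField(max_length=200)
    file = models.FileField(upload_to="videos/")
```

In the template you would then hand `video.file.url` to the `<video>` tag's src attribute.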
We have deployed a Django server (nginx/gunicorn/Django), but to scale it there are multiple instances of the same Django application running.
Here is the diagram (architecture):
Each blue rectangle is a Virtual Machine.
HAProxy sends all requests to example.com/admin to Server 3. Other requests are divided between Server 1 and Server 2 (load balancing).
Old Problem:
Each machine has a media folder, and when the admin uploads something the uploaded media is only on Server 3 (normal users can't upload anything).
We solved this by sending all requests for example.com/media/* to Server 3, where nginx serves all static files and media.
Problem right now
We are also using sorl-thumbnail.
When a request comes in for example.com/, sorl-thumbnail tries to access the media file, but it doesn't exist on that machine because it's on Server 3.
So now all requests to that machine (Server 1 or 2) get a 404 for that media file.
One solution that comes to mind is to make a shared partition between all 3 machines and use it as media.
Another solution is to sync all media folders after each upload, but this has a problem: we have almost 2000 requests per second, and if the sync isn't fast enough sorl-thumbnail creates a database record for an empty file and a 404 happens.
Thanks in advance, and sorry for the long question.
You should use an object store to save and serve your user uploaded files. django-storages makes the implementation really simple.
If you don't want to use cloud-based AWS S3 or an equivalent, you can host your own on-prem S3-compatible object store with MinIO.
On your current setup I don't see any easy way to fix this when the number of VMs is dynamic depending on load.
If you have deployment automation, then maybe try out rsync so that each VM takes care of syncing files with the other VMs.
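For the MinIO option, a minimal sketch of the django-storages settings, assuming the s3boto3 backend; the endpoint URL, bucket name, and credentials are placeholders:

```python
# settings.py -- django-storages aimed at a self-hosted, S3-compatible MinIO server.
# Endpoint, bucket, and credentials are placeholders for illustration.
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"
AWS_S3_ENDPOINT_URL = "https://minio.internal.example.com:9000"  # your MinIO host
AWS_STORAGE_BUCKET_NAME = "media"
AWS_ACCESS_KEY_ID = "minio-access-key"
AWS_SECRET_ACCESS_KEY = "minio-secret-key"
```

With this in place, uploads from every Django instance land in the shared bucket rather than on any single VM's disk, so sorl-thumbnail can find the files from any machine.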
Question: What was the problem?
We got 404s on the other machines because normal requests (requests rendering a template) would get a 404 Not Found for the thumbnail media.
The real problem was with the sorl-thumbnail template tags.
Here is what we ended up doing:
In models that needed a thumbnail, we added functions to create that specific thumbnail.
Then a post-save signal on the admin machine calls all those functions, making sure the thumbnails are created after save and the sorl-thumbnail table is filled.
Now in the templates, instead of calling the sorl-thumbnail template tags, we call a function on the model.
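A rough sketch of that approach; the model, field names, and thumbnail size are illustrative, not the actual code:

```python
# models.py -- illustrative version of the "create thumbnails on save" approach.
from django.db import models
from django.db.models.signals import post_save
from django.dispatch import receiver
from sorl.thumbnail import get_thumbnail


class Photo(models.Model):
    image = models.ImageField(upload_to="photos/")

    def small_thumbnail(self):
        # Returns (and creates on first call) the cached 110x110 thumbnail.
        return get_thumbnail(self.image, "110x110", crop="center")


@receiver(post_save, sender=Photo)
def build_thumbnails(sender, instance, **kwargs):
    # Runs on the admin machine right after save, so the sorl-thumbnail
    # key-value store is already populated before any front-end request arrives.
    instance.small_thumbnail()
```

Templates then use something like `{{ photo.small_thumbnail.url }}` instead of the `{% thumbnail %}` template tag.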
The app I am currently hosting on Heroku allows users to submit photos. Initially, I was thinking about storing those photos on the filesystem, as storing them in the database is apparently bad practice.
However, it seems there is no permanent filesystem on Heroku, only an ephemeral one. Is this true and, if so, what are my options with regards to storing photos and other files?
It is true. Heroku allows you to create cloud apps, but those cloud apps are not "permanent" - they are instances (or "slugs") that can be replicated multiple times on Amazon's EC2 (that's why scaling is so easy with Heroku). If you were to push a new version of your app, the slug would be recompiled, and any files you had saved to the filesystem in the previous instance would be lost.
Your best bet (whether on Heroku or otherwise) is to save user-submitted photos to a CDN. Since you are on Heroku, and Heroku uses AWS, I'd recommend Amazon S3, optionally enabling CloudFront.
This is beneficial not only because it gets around Heroku's ephemeral "limitation", but also because a CDN is much faster, and will provide a better service for your webapp and experience for your users.
Depending on the technology you're using, your best bet is likely to stream the uploads to S3 (Amazon's storage service). You can interact with S3 with a client library to make it simple to post and retrieve the files. Boto is an example client library for Python - they exist for all popular languages.
Another thing to keep in mind is that Heroku filesystems are not shared either. This means you'll have to push the file to S3 from the same application that handles the upload (instead of, say, a worker process). If you can, load the upload into memory, never write it to disk, and post it directly to S3. This will increase the speed of your uploads.
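A minimal sketch of that in-memory approach in Python/Django with boto3 (the modern successor to the Boto library mentioned above); the bucket name and form field name are placeholders:

```python
# views.py -- stream an uploaded file straight to S3 without touching local disk.
# Bucket name and form field name are placeholders.
import boto3
from django.http import JsonResponse

s3 = boto3.client("s3")  # credentials come from environment variables or an IAM role

def upload_photo(request):
    uploaded = request.FILES["photo"]      # an in-memory / temporary file-like object
    key = f"photos/{uploaded.name}"
    s3.upload_fileobj(uploaded, "my-photo-bucket", key)  # streams the object to S3
    return JsonResponse({"key": key})
```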
Because Heroku is hosted on AWS, the streams to S3 happen at a very high speed. Keep that in mind when you're developing locally.
I have developed face recognition software. It detects and identifies human faces in front of the connected web camera. Now I need to deploy it on a website, so that anyone with a computer can access this service through the website and perform face detection and identification using the camera on their premises.
Is it possible to integrate a Python application with a website?
Is the Django framework suitable for my work?
Can anybody recommend any tutorials in this direction?
This has absolutely nothing to do with Django. Django runs on your server, whereas you need to capture the image at the client. Therefore it concerns your front-end, not your back-end.
Traditionally this was not possible at all: a web browser could not access the client's peripherals, end of story. Flash, ActiveX, etc. have been workarounds for this.
HTML5 now allows it. Read more on MDN about MediaDevices.getUserMedia().
Unfortunately this is still fresh at the time of writing and is only supported by some browser versions: read more on caniuse.com.
You could use a JS library for feature detection, such as Modernizr.
I am not a Python programmer, or a YouTube API specialist. This is also my first StackOverflow post. I am OK with Linux, programming generally, and bash scripting.
I am currently testing the youtube-upload program. I have developed a bash script, which determines what videos are not already uploaded by comparing md5 checksums, and creates a logged bash script which then uploads new videos with descriptions, tags, and titles using the youtube-upload Python app. The upload process is working well.
However, the videos uploaded do not take on the default YouTube channel settings, including the default playlist allocation and most advanced settings (some of which are available when using a browser to upload). This means I have to browse the uploaded videos one by one to change the playlist each video belongs to, as well as most settings in the "Advanced Settings" tab, which is time-consuming for hundreds of short videos.
I could not find any indication on Google that the V3 YouTube API exposes playlist allocation or many advanced settings.
I have the following questions:
Does anyone know whether playlist allocation and advanced settings are currently exposed by the YouTube API? If not, will they ever be?
If the API does expose the control, would anyone know how I can use bash (or Python - which seems relatively easy to learn) to change the playlist allocation and advanced settings for uploaded videos, or new videos being uploaded by youtube-upload?
If the API does not expose the control, is there another method I am not aware of where I can create code that controls the browser (FireFox) to change these settings for all uploaded videos?
It seems to me that if the YouTube API does not expose these settings, then no process performing YouTube uploads can be fully automated. My goal is to fully automate the entire process, including playlist and advanced settings selection.
The API to link a resource to a playlist is PlaylistItems at https://developers.google.com/youtube/v3/docs/playlistItems.
You can follow the list example to see/get the playlist IDs (if you forgot them). Then use the insert method to add your resource to a playlist; a minimal Python sketch follows the property list below.
Extract from insert Reference:
You must specify a value for these properties:
snippet.playlistId
snippet.resourceId
You can set values for these properties:
snippet.playlistId
snippet.position
snippet.resourceId
contentDetails.note
contentDetails.startAt
contentDetails.endAt
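Here is that sketch, using google-api-python-client and assuming you already have an authorized `youtube` service object; the playlist and video IDs are placeholders:

```python
# Add an already-uploaded video to a playlist via PlaylistItems.insert.
# PLAYLIST_ID and VIDEO_ID are placeholders; `youtube` is an authorized
# googleapiclient.discovery.build("youtube", "v3", credentials=...) object.
PLAYLIST_ID = "PLxxxxxxxxxxxxxxxx"
VIDEO_ID = "dQw4w9WgXcQ"

request = youtube.playlistItems().insert(
    part="snippet",
    body={
        "snippet": {
            "playlistId": PLAYLIST_ID,
            "resourceId": {
                "kind": "youtube#video",
                "videoId": VIDEO_ID,
            },
        }
    },
)
response = request.execute()
print(response["id"])  # ID of the newly created playlist item
```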
You can try this. I am not a code master either, but it works.
Give ytb-up a shot; it is based on Selenium:
https://github.com/wanghaisheng/ytb-up
Features you may need:
1. Proxy support: auto-detects whether a proxy is needed.
2. Cookie support: for multiple channels under the same Google account.
3. Scheduled publishing: you can explicitly specify a date and time for each video, or set a publish policy and a daily publish count. For example, with a daily count of 4 and 5 videos, the first 4 will be published 1 day after the upload date and the remaining 1 two days after.
4. Fixes Google account verification.
OK, so a quick summary of my setup and what I want to accomplish:
I have a Rails 2.3.5 server that runs my website. I have a Flash application on my site where users can upload images directly to S3.
When an upload is completed, Rails is notified.
At the point where the image has finished uploading to S3 and Rails has been notified, I want Rails to send a POST to something located on EC2 to create two thumbnails (110x110 and 600x600).
When the thumbnails are created and transferred to S3, I want whatever process is on EC2 to send a POST back to Rails to notify it that thumbnail creation is finished and the thumbnails are on S3.
What is the best way to accomplish this? I have looked at tools such as knife very briefly, but I'm not familiar at all with using EC2 or similar services.
Thanks
For those like me who looked this up: AWS now offers Lambda.
AWS Lambda is a compute service that makes it easy for you to build applications that respond quickly to new information. AWS Lambda runs your code in response to events such as image uploads, in-app activity, website clicks, or outputs from connected devices. You can use AWS Lambda to extend other AWS services with custom logic, or create your own back-end that operates at AWS scale, performance, and security. With AWS Lambda, you can easily create discrete, event-driven applications that execute only when needed and scale automatically from a few requests per day to thousands per second.
Here is a great walkthrough that answers this question perfectly: Handling Amazon S3 Events. The idea is to have a Node.js package (the Lambda) get notified about S3 bucket events (object-created in our case), fetch the uploaded object, resize it, and finally save it to some other bucket for thumbnails. Since you will have a Node.js app, you can basically make any kind of request to any service you want after the thumbnail is saved.
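The walkthrough uses Node.js, but the same idea can be sketched in Python; this assumes Pillow and boto3 are available in the Lambda runtime, and the destination bucket name is a placeholder:

```python
# lambda_function.py -- illustrative S3-triggered thumbnail generator.
# Assumes Pillow is packaged with the function; bucket name is a placeholder.
import io
import boto3
from PIL import Image

s3 = boto3.client("s3")
THUMB_BUCKET = "my-thumbnails-bucket"  # placeholder destination bucket

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Download the original image into memory.
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Resize to a 110x110 thumbnail.
        image = Image.open(io.BytesIO(original)).convert("RGB")
        image.thumbnail((110, 110))
        buffer = io.BytesIO()
        image.save(buffer, format="JPEG")
        buffer.seek(0)

        # Save the thumbnail to the destination bucket under the same key.
        s3.put_object(Bucket=THUMB_BUCKET, Key=key, Body=buffer)
```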
The process that I would use is the following:
Once the image is uploaded to S3, Rails gets notified and adds a message to an Amazon SQS queue (see http://aws.amazon.com/sqs/)
A background process running on EC2 checks the queue and processes any messages, generating the thumbnails
Once a thumbnail is generated, a notification is sent using Amazon SNS (see http://aws.amazon.com/sns/) and your Rails app responds to this notification
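A rough sketch of the EC2 worker side of this flow, written in Python with boto3 purely for illustration (the asker's web app is Rails, so only the SQS/SNS mechanics are shown); the queue URL and topic ARN are placeholders:

```python
# worker.py -- illustrative EC2 background worker: consume SQS jobs, publish SNS.
# Queue URL and topic ARN are placeholders.
import json
import boto3

sqs = boto3.client("sqs")
sns = boto3.client("sns")

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/thumbnail-jobs"
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:thumbnails-ready"

def generate_thumbnails(bucket, key):
    # Placeholder for the actual resize work (e.g. Pillow + S3 put_object).
    pass

while True:
    # Long-poll the queue for jobs enqueued by the web app.
    response = sqs.receive_message(QueueUrl=QUEUE_URL, WaitTimeSeconds=20)
    for message in response.get("Messages", []):
        job = json.loads(message["Body"])           # e.g. {"bucket": ..., "key": ...}
        generate_thumbnails(job["bucket"], job["key"])

        # Notify subscribers (the web app) that the thumbnails are ready.
        sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(job))

        # Remove the processed message from the queue.
        sqs.delete_message(
            QueueUrl=QUEUE_URL,
            ReceiptHandle=message["ReceiptHandle"],
        )
```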