I am making a music player application with Pygame. Initially I played songs that I had downloaded on my local system, but now I want to share this app as an executable file with my friends. The issue is that they may not have the songs already downloaded on their system, nor will the location be the same, so I thought of uploading the song files to the cloud (Mega, in my case). I am a noob and this is the first time I'm exploring anything with cloud storage and streaming. Now I want to access the songs stored in my cloud account and play them in my application. Basically I need full access to every file stored in the cloud, since the app does quite a few things like shuffle, showing the next song's title, loop, etc.
My question now is: how do I stream the audio from my cloud account in my Python desktop app?
I chose Mega since it offers 20 GB of storage for a free account, but I am not sure whether this is even a suitable platform for what I'm trying to achieve or whether it is only for cloud storage. Any insights pointing me in the right direction would be really helpful.
Update: I tried using the Mega API, but I'm facing errors while logging in.
Are there any alternatives, or does anyone know how to get rid of this error: mega.errors.RequestError: EACCESS, Access violation, raised during login?
The code I used for logging in:
from mega import Mega

mega = Mega()
m = mega.login(email, password)  # login with my account credentials
m = mega.login()                 # the no-argument form does an anonymous login
I have had success in obtaining files from Mega using the mega.py package. Install it with:
pip install mega.py
And then obtain files to play with:
from mega import Mega
import tempfile

# Log in to Mega
mega = Mega()
email = "XXXXXX@XXXXX.XXX"
password = "XXXXXXXXXXXX"
m = mega.login(email, password)

# Get the file descriptor of a previously uploaded file
filename = 'file_example_MP3_700KB.mp3'
filedesc = m.find(filename)

with tempfile.TemporaryDirectory() as tmpdir:
    # Download the file to a temporary directory
    downloaded_file_name = m.download(filedesc, tmpdir)
    # Replace the following with pygame code playing downloaded_file_name
    print(downloaded_file_name)
Where email, password, and filename are replaced as appropriate.
Note that if my email/password are not correct I get a RequestError as in the question.
mega.errors.RequestError: ENOENT, Object (typically, node or user) not found
Once the file is downloaded, it can be played in pygame as described at https://www.geeksforgeeks.org/python-playing-audio-file-in-pygame/
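For example, the print(downloaded_file_name) line in the snippet above could be replaced with a minimal pygame sketch along those lines (standard pygame.mixer API; note that playback must finish before the temporary directory is cleaned up):

import pygame

pygame.mixer.init()
pygame.mixer.music.load(str(downloaded_file_name))  # path returned by m.download
pygame.mixer.music.play()
while pygame.mixer.music.get_busy():  # block so the temp dir outlives playback
    pygame.time.Clock().tick(10)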
One possibility for streaming audio data is to first download the data, write the binary audio data into a virtual file (in memory), and then play audio from that file. The following example demonstrates how to play a WAV file from a remote location:
from io import BytesIO

import requests
import simpleaudio

# Download the whole file into an in-memory buffer
response = requests.get("https://www2.cs.uic.edu/~i101/SoundFiles/CantinaBand3.wav")
audio_stream = BytesIO(response.content)

# The sample rate, channel count, etc. are read from the WAV header,
# so there is no need to set them manually
wave_obj = simpleaudio.WaveObject.from_wave_file(audio_stream)
play_obj = wave_obj.play()  # start playing audio from the BytesIO buffer
play_obj.wait_done()
This example first downloads the entire file and then plays it. To properly stream the audio, you might need to introduce asynchronous processing, e.g., based on threads or async programming, where one thread fills the byte buffer while another plays from it.
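A rough sketch of that idea, using pygame for playback instead of simpleaudio (how well this works depends on the audio format and on the player tolerating a still-growing file): a background thread appends downloaded chunks to a temporary file while the main thread starts playing once a small buffer has accumulated.

import os
import tempfile
import threading
import time

import pygame
import requests

URL = "https://www2.cs.uic.edu/~i101/SoundFiles/CantinaBand3.wav"

def fill_buffer(url, path, done):
    # producer: stream the remote file to disk in small chunks
    with requests.get(url, stream=True) as response:
        with open(path, "wb") as f:
            for chunk in response.iter_content(chunk_size=64 * 1024):
                f.write(chunk)
                f.flush()
    done.set()

tmp = tempfile.NamedTemporaryFile(suffix=".wav", delete=False)
tmp.close()
done = threading.Event()
threading.Thread(target=fill_buffer, args=(URL, tmp.name, done), daemon=True).start()

# consumer: wait until ~128 KB are buffered (or the download finished), then play
while os.path.getsize(tmp.name) < 128 * 1024 and not done.is_set():
    time.sleep(0.1)
pygame.mixer.init()
pygame.mixer.music.load(tmp.name)
pygame.mixer.music.play()
while pygame.mixer.music.get_busy():
    time.sleep(0.1)
os.remove(tmp.name)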
Related
I'm trying to upload a .mp4 video to be processed by a Python server on Heroku. In my server.py, I have the following code:
import os

from flask import Flask, request
from flask_cors import CORS

# Initialize the Flask application
MYDIR = os.path.dirname(__file__)
app = Flask(__name__)
app.config['UPLOAD_PATH'] = 'uploads'
cors = CORS(app, resources={r"/api/*": {"origins": "*"}})

# route http posts to this method
@app.route('/api/uploadFile', methods=['POST'])
def uploadFile():
    r = request  # this is the incoming request
    uploaded_file = r.files['uploadedfile']
    # save the incoming video file locally
    if uploaded_file.filename != '':
        content = r.files['uploadedfile'].read()
        uploaded_file.save(os.path.join(MYDIR, uploaded_file.filename))
However, when I run heroku run bash and do ls, I don't see the file at all. I've looked at previous posts on this and am doing what many have suggested, i.e., using an absolute path os.path.join(MYDIR, uploaded_file.filename). But I still don't see the file. I know that Heroku has an ephemeral filesystem and doesn't store files permanently, but all I want to do is process the video file, and after it's done I'll delete it myself. By the way, the video is approx. 25 MB in size. I tried a low-quality version, which is around 4 MB, and the problem still persists, so my guess is that the size of the video is not the problem.
Also, when I read the content via content = r.files['uploadedfile'].read(), I do see that content is approximately of length 25M, suggesting that I'm indeed receiving the video correctly.
Ideally, I would love to convert the byte stream into something like a list of frames and then process it with OpenCV. But I thought I could, as a first step, just save it locally, then read and process it. I don't care if it is deleted five minutes after my processing is done.
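For what it's worth, a sketch of that first step (bytes_to_frames is a hypothetical helper name; OpenCV's VideoCapture wants a file path, hence the temporary file):

import os
import tempfile

import cv2

def bytes_to_frames(content):
    # write the uploaded bytes to a temp file, since cv2.VideoCapture wants a path
    with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp:
        tmp.write(content)
        path = tmp.name
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    os.remove(path)  # clean up once the frames are in memory
    return frames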
Any help would be greatly appreciated. Banged my head around for a day looking for a solution. Thanks!!!
Just want to add that the commands above give me no error. So it seems like everything worked, but then when I do heroku run bash, I see no file anywhere. It would've been great if the system provided an error or a warning :-(.
Also, I tried running things locally on a localhost server, and it does store the file. So it works locally on my machine; I just can't seem to save it on Heroku.
I am planning to write my own live-stream server in Python using the HLS protocol for a small project.
My idea is to use Amazon S3 for storage and have the python server just output the m3u8 file.
This is all straightforward. Now to the problem: I want to stream live video from a camera over an unreliable network, and if there is congestion the player could end up finishing playback of the last file referenced in the m3u8. Can I somehow mark the stream as a live stream so that the player keeps reloading the m3u8 for some time, looking for the next segment? Or how should live streaming over HLS be handled?
Maybe live streaming over HLS is not supported?
This is explicitly allowed in the HLS spec as a "live Playlist". There are a few things you need to be aware of, but notably, from section 6.2.1:
The server MUST NOT change the Media Playlist file, except to:
o Append lines to it (Section 6.2.1).
And if we take a look at Section 4.3.3.4:
The EXT-X-ENDLIST tag indicates that no more Media Segments will be added to the Media Playlist file. It MAY occur anywhere in the Media Playlist file.
In other words, if a playlist does not include the #EXT-X-ENDLIST tag, it's expected that the player will keep loading the playlist from whatever source it originally loaded it from with some frequency, looking for the server to append segments to the playlist.
Most players will do this taking into account current network conditions, so they have a chance to get new segments before playback catches up. If the server needs to interrupt the segments, or otherwise introduce a gap, it has the responsibility to insert an #EXT-X-DISCONTINUITY tag.
Apple provides a more concrete example of a Live Playlist on their developer website.
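To make the mechanics concrete, here is a sketch of how a server might render such a live playlist (segment names and durations are made up; the key point is the absence of #EXT-X-ENDLIST):

def live_playlist(media_sequence, segments, target_duration=10):
    # A live media playlist: no #EXT-X-ENDLIST, so players keep re-fetching it
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{media_sequence}",
    ]
    for duration, uri in segments:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(uri)
    return "\n".join(lines) + "\n"

# e.g. the three most recent segments uploaded to S3 (hypothetical names)
print(live_playlist(17, [(9.8, "seg17.ts"), (10.0, "seg18.ts"), (9.9, "seg19.ts")]))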
I am using Google Colab and I would like to use my custom libraries/scripts that I have stored on my local machine. My current approach is the following:
# (Question 1)
from google.colab import drive
drive.mount("/content/gdrive")
# Annoying chain of granting access to Google Colab
# and entering the OAuth token.
And then I use:
# (Question 2)
!cp /content/gdrive/My\ Drive/awesome-project/*.py .
Question 1:
Is there a way to avoid mounting the drive entirely? Whenever the execution context changes (e.g. when I select "Hardware Acceleration = GPU", or when I wait an hour), I have to re-generate and re-enter the OAuth token.
Question 2:
Is there a way to sync files between my local machine and my Google Colab scripts more elegantly?
A partial (not very satisfying) answer regarding Question 1: I saw that one could install and use Dropbox. Then you can hardcode the API key into the application and mounting is done, regardless of whether or not it is a new execution context. I wonder whether a similar approach exists based on Google Drive as well.
Question 1.
Great question, and yes there is! I have been using a workaround that is particularly useful if you are a researcher and want others to be able to re-run your code, or if you just want to 'colab'orate when working with larger datasets. The method below has worked well when working as a team, where there are challenges with each person keeping their own copy of the datasets.
I have used this regularly on 30+ GB of image files downloaded and unzipped to the Colab runtime.
The file id is in the link provided when you share a file from Google Drive.
You can also select multiple files, share them all, and then generate, for example, a .txt or .json file which you can parse to extract the file ids.
from google_drive_downloader import GoogleDriveDownloader as gdd

# some file id / a list of file ids parsed from file urls
google_fid_id = '1-4PbytN2awBviPS4Brrb4puhzFb555g2'
destination = 'dir/dir/fid'
# if it is a zip file, add the kwarg unzip=True
gdd.download_file_from_google_drive(file_id=google_fid_id,
                                    dest_path=destination, unzip=True)
A url parsing function to get file ids from a list of urls might look like this:
def parse_urls():
    # files_urls.txt holds a single comma-separated line of shared-file urls;
    # the file id is the second-to-last path component of each url
    with open('/dir/dir/files_urls.txt', 'r') as fb:
        txt = fb.readlines()
    return [url.split('/')[-2] for url in txt[0].split(',')]
One health warning: you can only repeat this a small number of times in a 24-hour window for the same files.
Here's the gdd git repo:
https://github.com/ndrplz/google-drive-downloader
Here is a working example (my own) of how it works inside a bigger script:
https://github.com/fdsig/image_utils
Question 2.
You can connect to a local runtime, but this also means using local resources (GPU/CPU, etc.).
Really hope this helps :-).
F~
If your code isn't secret, you can use git to sync your local code to GitHub. Then git clone it into Colab with no need for any authentication.
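For example, in a Colab cell (the repository name is hypothetical):

# clone the public repo and make it importable; no OAuth dance needed
!git clone https://github.com/your-username/awesome-project.git
import sys
sys.path.append('/content/awesome-project')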
Just for fun, I'm trying to write a Kodi addon for personal use that is capable of playing some torrent files through the Quasar addon. I have my torrent files on a web site and I'm trying to play the videos, but I don't know what value to assign to the url variable in this simple example:
import sys

import xbmcgui
import xbmcplugin

addon_handle = int(sys.argv[1])
xbmcplugin.setContent(addon_handle, 'movies')

url = 'http://mysite/video.torrent'
li = xbmcgui.ListItem('First test!', iconImage='DefaultVideo.png')
xbmcplugin.addDirectoryItem(handle=addon_handle, url=url, listitem=li)
xbmcplugin.endOfDirectory(addon_handle)
The example above doesn't work, as I suspected, although it does work if url points directly to a video file. Is there any way to tell Kodi that it is a torrent file, or to specify in some way the plugin needed to play the torrent?
Any help?
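I haven't verified this, but torrent-player addons are normally invoked through a plugin:// URL rather than the raw .torrent link; for Quasar the commonly reported pattern (inherited from its predecessor Pulsar) is plugin://plugin.video.quasar/play?uri=<url-encoded link>, i.e. something like:

from urllib.parse import quote_plus  # Kodi 19+ addons run on Python 3

# hand the torrent link to the Quasar addon instead of using it directly
torrent_url = 'http://mysite/video.torrent'
url = 'plugin://plugin.video.quasar/play?uri=' + quote_plus(torrent_url)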
I am pushing video to workers for a Cloud Dataflow pipeline. I have been advised to use Beam directly to manage my objects, but I can't work out the best practice for downloading objects. I can see the Apache Beam GCP IO class (apache_beam.io.gcp.gcsio), so one could use it like so:
def read_file(element, local_path):
    with beam.io.gcp.gcsio.GcsIO().open(element, 'r') as f:
        ...  # how do I get the object's bytes to local_path from here?
Where element is the gcs path read from a previous beam step.
Checking out the available methods, the downloader attribute looks relevant:
f.downloader
Download with 57507840/57507840 bytes transferred from url https://www.googleapis.com/storage/v1/b/api-project-773889352370-testing/o/Clips%2F00011.MTS?generation=1493431837327161&alt=media
This message makes it seem like the file has been downloaded, and it has the right file size (57 MB). But where does it go? I would like to pass a variable (local_path) so that a subsequent process can handle the object. The class doesn't seem to accept a destination path, and the file is not in the current working directory, /tmp/, or the Downloads folder. I'm testing locally on OSX before I deploy.
Am I using this tool correctly? I know that streaming the video bytes may be preferable for large videos; we'll get to that once I understand the basic functions. I'll open a separate question about streaming into memory (a named pipe?) to be read by OpenCV.
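For reference, here is a sketch (untested) of what read_file above might do, assuming GcsIO().open returns a normal file-like object: the destination is simply whatever local path the caller chooses, since the API itself takes none.

import apache_beam as beam

def read_file(element, local_path):
    # copy the bytes of the GCS object at `element` to a caller-chosen local path
    with beam.io.gcp.gcsio.GcsIO().open(element, 'r') as src:
        with open(local_path, 'wb') as dst:
            dst.write(src.read())  # fine for small files; chunk the copy for large videos
    return local_path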