I recently discovered the GNSDK (Gracenote SDK), which seems to provide examples in several programming languages for recognizing music samples by fingerprinting them and then querying Gracenote's audio database to get the corresponding artist and song title.
But the documentation is horrible.
How can I, using Python and the GNSDK, perform recognition of an audio sample file? There aren't any examples or tutorials in the provided docs.
Edit: I really want to use the GNSDK with Python. Don't post anything unrelated; you'll waste your time.
I ended up using ACRCloud which works very well.
Python example:
from acrcloud.recognizer import ACRCloudRecognizer
config = {
    'host': 'eu-west-1.api.acrcloud.com',
    'access_key': 'access key',
    'access_secret': 'secret key',
    'debug': True,
    'timeout': 10
}
acrcloud = ACRCloudRecognizer(config)
print(acrcloud.recognize_by_file('sample of a track.wav', 0))
https://github.com/acrcloud/acrcloud_sdk_python
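The recognizer returns a JSON string; here is a minimal sketch of pulling the title and artist out of it (the status/metadata field names follow ACRCloud's documented response format, so double-check them against your SDK version):
import json

# recognize_by_file returns a JSON string, not a dict; the field names
# below are taken from ACRCloud's documented response format.
result = json.loads(acrcloud.recognize_by_file('sample of a track.wav', 0))
if result['status']['code'] == 0:  # code 0 means a successful match
    music = result['metadata']['music'][0]
    print(music['title'], '-', music['artists'][0]['name'])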
Keywords are: Beat Spectrum Analysis and Rhythm Detection.
This well-known Python library may contain a solution for your question:
https://github.com/aubio/aubio
I also recommend checking this page for other libraries:
https://wiki.python.org/moin/PythonInMusic
Lastly, this project is a more Python-friendly solution and an easy way to start:
https://github.com/librosa/librosa
An example from librosa that calculates the tempo (beats per minute) of a song:
# Beat tracking example
from __future__ import print_function
import librosa
# 1. Get the file path to the included audio example
filename = librosa.util.example_audio_file()
# 2. Load the audio as a waveform `y`
# Store the sampling rate as `sr`
y, sr = librosa.load(filename)
# 3. Run the default beat tracker
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
print('Estimated tempo: {:.2f} beats per minute'.format(tempo))
# 4. Convert the frame indices of beat events into timestamps
beat_times = librosa.frames_to_time(beat_frames, sr=sr)
print('Saving output to beat_times.csv')
librosa.output.times_csv('beat_times.csv', beat_times)
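One caveat: librosa.output was removed in librosa 0.8, so the last step fails on current versions; NumPy can write the CSV instead:
import numpy as np

# Equivalent CSV export for librosa >= 0.8, where librosa.output is gone
np.savetxt('beat_times.csv', beat_times, delimiter=',')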
But I have to mention that this is still a very immature field in computer science, and new papers come out all the time, so it is worth following the academic literature for recent discoveries.
ADDITION:
Web API Wrappers mentioned in Gracenote's official docs:
https://developer.gracenote.com/web-api#python
For Python:
https://github.com/cweichen/pygn
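For reference, a minimal lookup sketch following pygn's README (note that pygn wraps Gracenote's Web API for text search rather than the GNSDK's fingerprinting, and you must supply your own client ID):
import pygn
from pprint import pprint

clientID = 'your-client-id'       # issued by Gracenote; placeholder here
userID = pygn.register(clientID)  # one-time registration of the client ID

# Text-based lookup of a track's metadata
metadata = pygn.search(clientID=clientID, userID=userID,
                       artist='Kings Of Convenience', track='Homesick')
pprint(metadata)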
But as you can see from its repository, this wrapper is not well documented and immature. Because of that, I suggest using this Ruby wrapper instead:
For Ruby:
https://github.com/JDiPierro/tmsapi
require 'tmsapi'
# Create instance of the API
tms = TMSAPI::API.new :api_key => 'API_KEY_HERE'

# Get all movie showtimes for Austin, Texas
movie_showings = tms.movies.theatres.showings({ :zip => "78701" })

# Print out the movie name, theatre name, and date/time of the showing.
movie_showings.each do |movie|
  movie.showtimes.each do |showing|
    puts "#{movie.title} is playing at '#{showing.theatre.name}' at #{showing.date_time}."
  end
end
# 12 Years a Slave is playing at 'Violet Crown Cinema' at 2013-12-23T12:45.
# A Christmas Story is playing at 'Alamo Drafthouse at the Ritz' at 2013-12-23T16:00.
# American Hustle is playing at 'Violet Crown Cinema' at 2013-12-23T11:00.
# American Hustle is playing at 'Violet Crown Cinema' at 2013-12-23T13:40.
# American Hustle is playing at 'Violet Crown Cinema' at 2013-12-23T16:20.
# American Hustle is playing at 'Violet Crown Cinema' at 2013-12-23T19:00.
# American Hustle is playing at 'Violet Crown Cinema' at 2013-12-23T21:40.
If you are not comfortable with Ruby or Ruby on Rails, then the only option is to develop your own Python wrapper.
Just reading your headline question: since there are no examples or tutorials for the GNSDK, try looking at other options. For one:
dejavu
Audio fingerprinting and recognition algorithm implemented in Python,
see the explanation here:
Dejavu can memorize audio by listening to it once and fingerprinting
it. Then by playing a song and recording microphone input, Dejavu
attempts to match the audio against the fingerprints held in the
database, returning the song being played.
https://github.com/worldveil/dejavu
seems about right.
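A minimal sketch of the workflow from dejavu's README (it assumes a local MySQL database and a folder of reference mp3s, and the FileRecognizer import path has moved between dejavu releases):
from dejavu import Dejavu
from dejavu.recognize import FileRecognizer  # path differs in newer releases

config = {
    "database": {
        "host": "127.0.0.1",
        "user": "root",
        "passwd": "password",
        "db": "dejavu",
    }
}

djv = Dejavu(config)
djv.fingerprint_directory("mp3", [".mp3"])              # build the fingerprint database
song = djv.recognize(FileRecognizer, "mp3/sample.mp3")  # match an unknown sample
print(song)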
Related
I'm trying to make two gtts voices, Sarah and Mary, talk to each other reading a standard script. After 2 sentences of using the same voice, they start repeating the last word until the duration of the sentence is over.
from moviepy.editor import *
import moviepy.editor as mp
from gtts import gTTS
dialog = "Mary: Mom, can we get a dog?
\nSarah: What kind of dog do you want?
\nMary: I’m not sure. I’ve been researching different breeds and I think I like corgis.
\nSarah: Corgis? That’s a pretty popular breed. What do you like about them?
\nMary: Well, they’re small, so they won’t take up too much room. They’re also very loyal and friendly. Plus, they’re really cute!
\nSarah: That’s true. They do seem like a great breed. Have you done any research on their care and grooming needs?
\nMary: Yes, I have. They don’t need a lot of grooming, but they do need regular brushing and occasional baths. They’re also very active, so they need plenty of exercise.
\nSarah: That sounds like a lot of work. Are you sure you’re up for it?
\nMary: Yes, I am. I’m willing to put in the effort to take care of a corgi.
\nSarah: Alright, if you’re sure. Let’s look into getting a corgi then.
\nMary: Yay! Thank you, Mom!"
lines = dialog.split("\n")
combined = AudioFileClip(r"Z:\Programming Stuff\Music\Type_Beat__BPM105.wav").set_duration(10)  # ADD INTRO MUSIC
for line in lines:
    if "Sarah:" in line:
        # Use a voice for Person 1
        res = line.split(' ', 1)[1]  # Removes the speaker's name
        tts = gTTS(text=str(res), lang='en')  # Accent changer
        tts.save("temp6.mp3")  # temp save file because the audio clips must be mixed
        combined = concatenate_audioclips([combined, AudioFileClip("temp6.mp3")])
    elif "Mary:" in line:
        # Use a voice for Person 2
        res = line.split(' ', 1)[1]  # Removes the speaker's name
        tts = gTTS(text=str(res), lang='en', tld='com.au')  # Accent changer
        tts.save("temp6.mp3")  # temp save file because the audio clips must be mixed
        combined = concatenate_audioclips([combined, AudioFileClip("temp6.mp3")])
combined.write_audiofile("output3.mp3")  # Final file name
OUTPUT:
It's an audio file that outputs almost exactly the intended output, except after "They're also very loyal and friendly." it keeps repeating "plus". It also repeats at "Yes, I have. They don't need a lot of grooming, but they do need regular brushing and occasional baths." It repeats "baths" many times.
It appears it just repeats after saying 2 sentences and I have no idea why.
I have spent a few days exploring the excellent FARM library and its modular approach to building models. The default output (result) however is very verbose, including a multiplicity of texts, values and ASCII artwork. For my research I only require the predicted labels from my NLP text classification model, together with the individual probabilities. How do I do that? I have been experimenting with nested lists/dictionaries but am unable to neatly produce a simple list of output labels and probabilities.
# Test your model on a sample (Inference)
from farm.infer import Inferencer
from pprint import PrettyPrinter
infer_model = Inferencer(processor=processor, model=model, task_type="text_classification", gpu=True)
basic_texts = [
    # a snippet or two from Dickens
    {"text": "Mr Dombey had remained in his own apartment since the death of his wife, absorbed in visions of the youth, education, and destination of his baby son. Something lay at the bottom of his cool heart, colder and heavier than its ordinary load; but it was more a sense of the child’s loss than his own, awakening within him an almost angry sorrow."},
    {"text": "Soon after seven o'clock we went down to dinner, carefully, by Mrs. Jellyby's advice, for the stair-carpets, besides being very deficient in stair-wires, were so torn as to be absolute traps."},
    {"text": "Walter passed out at the door, and was about to close it after him, when, hearing the voices of the brothers again, and also the mention of his own name, he stood irresolutely, with his hand upon the lock, and the door ajar, uncertain whether to return or go away."},
    # from Lewis Carroll
    {"text": "I have kept one for many years, and have found it of the greatest possible service, in many ways: it secures my _answering_ Letters, however long they have to wait; it enables me to refer, for my own guidance, to the details of previous correspondence, though the actual Letters may have been destroyed long ago;"},
    {"text": "The Queen gasped, and sat down: the rapid journey through the air had quite taken away her breath and for a minute or two she could do nothing but hug the little Lily in silence."},
    {"text": "Rub as she could, she could make nothing more of it: she was in a little dark shop, leaning with her elbows on the counter, and opposite to her was an old Sheep, sitting in an arm-chair knitting, and every now and then leaving off to look at her through a great pair of spectacles."},
    # G K Chesterton
    {"text": "Basil and I walked rapidly to the window which looked out on the garden. It was a small and somewhat smug suburban garden; the flower beds a little too neat and like the pattern of a coloured carpet; but on this shining and opulent summer day even they had the exuberance of something natural, I had almost said tropical. "},
    {"text": "This is the whole danger of our time. There is a difference between the oppression which has been too common in the past and the oppression which seems only too probable in the future."},
    {"text": "But whatever else the worst doctrine of depravity may have been, it was a product of spiritual conviction; it had nothing to do with remote physical origins. Men thought mankind wicked because they felt wicked themselves. "},
]
result = infer_model.inference_from_dicts(dicts=basic_texts)
PrettyPrinter().pprint(result)
#print(result)
All logging (incl. the ASCII artwork) is done in FARM via Python's logging framework. You can simply disable the logs up to a certain level like this at the beginning of your script:
import logging
logging.disable(logging.ERROR)
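If you want to keep your own logs and only mute FARM's, you can raise the level on the relevant logger namespace instead (assuming FARM's loggers live under the "farm" namespace):
import logging

# Silence only loggers in the "farm" namespace (the namespace is an assumption)
logging.getLogger("farm").setLevel(logging.ERROR)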
Is that what you are looking for or do you rather want to adjust the output format of the model predictions? If you only need label and probability, you could do something like this:
...
basic_texts = [
{"text": "Stackoverflow is a great community"},
{"text": "It's snowing"},
]
infer_model = Inferencer(processor=processor, model=model, task_type="text_classification", gpu=True)
result = infer_model.inference_from_dicts(dicts=basic_texts)
minimal_results = []
for sample in result:
    # Only extract the top 1 prediction per sample
    top_pred = sample["predictions"][0]
    minimal_results.append({"label": top_pred["label"], "probability": top_pred["probability"]})
PrettyPrinter().pprint(minimal_results)
infer_model.close_multiprocessing_pool()
(I left out the initial model loading etc. - see this example for more details)
I am trying to open a file and censor words out of it. The words that are censored are referenced from a list. This is my code:
# These are the emails you will be censoring.
# The open() function is opening the text file that the emails are contained in
# and the .read() method is allowing us to save their contexts to the following variables:
email_one = open("email_one.txt", "r").read()
email_two = open("email_two.txt", "r").read()
email_three = open("email_three.txt", "r").read()
email_four = open("email_four.txt", "r").read()
# Write a function that can censor a specific word or phrase from a body of text,
# and then return the text.
# Mr. Cloudy has asked you to use the function to censor all instances
# of the phrase learning algorithms from the first email, email_one.
# Mr. Cloudy doesn’t care how you censor it, he just wants it done.
def censor_words(text, censor):
    if censor in text:
        text = text.replace(censor, '*' * len(censor))
    return text

#print(censor_words(email_one, "learning algorithms"))

# Write a function that can censor not just a specific word or phrase from a body of text,
# but a whole list of words and phrases, and then return the text.
# Mr. Cloudy has asked that you censor all words and phrases from the following list in email_two.
def censor_words_in_list(text):
    proprietary_terms = ["she", "personality matrix", "sense of self",
                         "self-preservation", "learning algorithm", "her", "herself"]
    for x in proprietary_terms:
        if x.lower() in text.lower():
            text = text.replace(x, '*' * len(x))
    return text

out_file = open("output.txt", "w")
out_file.write(censor_words_in_list(email_two))
This is the string before it is run through my code.
Good Morning, Board of Investors,
Lots of updates this week. The learning algorithms have been working better than we could have ever expected. Our initial internal data dumps have been completed and we have proceeded with the plan to connect the system to the internet and wow! The results are mind blowing.
She is learning faster than ever. Her learning rate now that she has access to the world wide web has increased exponentially, far faster than we had though the learning algorithms were capable of.
Not only that, but we have configured her personality matrix to allow for communication between the system and our team of researchers. That's how we know she considers herself to be a she! We asked!
How cool is that? We didn't expect a personality to develop this early on in the process but it seems like a rudimentary sense of self is starting to form. This is a major step in the process, as having a sense of self and self-preservation will allow her to see the problems the world is facing and make hard but necessary decisions for the betterment of the planet.
We are a-buzz down in the lab with excitement over these developments and we hope that the investors share our enthusiasm.
Till next month,
Francine, Head Scientist
This is the same string after being run through my code.
Good Morning, Board of Investors,
Lots of updates this week. The ******************s have been working better than we could have ever expected. Our initial internal data dumps have been completed and we have proceeded with the plan to connect the system to the internet and wow! The results are mind blowing.
She is learning faster than ever. Her learning rate now that *** has access to the world wide web has increased exponentially, far faster than we had though the ******************s were capable of.
Not only that, but we have configured * ****************** to allow for communication between the system and our team of researc***s. That's how we know * considers *self to be a *! We asked!
How cool is that? We didn't expect a personality to develop this early on in the process but it seems like a rudimentary ************* is starting to form. This is a major step in the process, as having a ************* and ***************** will allow *** to see the problems the world is facing and make hard but necessary decisions for the betterment of the planet.
We are a-buzz down in the lab with excitement over these developments and we hope that the investors share our enthusiasm.
Till next month,
Francine, Head Scientist
An example of what I need to fix: the word researchers gets partially censored when it should not be, because the substring her is found inside researchers. How can I fix this?
Using the regular expression module and the word boundary anchor \b:
import re

def censor_words_in_list(text):
    regex = re.compile(
        r'\bshe\b|\bpersonality matrix\b|\bsense of self\b'
        r'|\bself-preservation\b|\blearning algorithms\b|\bher\b|\bherself\b',
        re.IGNORECASE)
    matches = regex.finditer(text)
    # find location of matches in text
    for match in matches:
        # find how many * should be used based on length of match
        span = match.span()[1] - match.span()[0]
        replace_string = '#' * span
        # substitution expression based on match
        expression = r'\b{}\b'.format(match.group())
        text = re.sub(expression, replace_string, text, flags=re.IGNORECASE)
    return text

email_one = open("email_one.txt", "r").read()
out_file = open("output.txt", "w")
out_file.write(censor_words_in_list(email_one))
out_file.close()
Output (I have used the # symbol because asterisks create bold text on Stack Overflow, so output bounded by runs of asterisks would display incorrectly):
Good Morning, Board of Investors,
Lots of updates this week. The ################### have been working better than we could have ever expected. Our initial internal data dumps have been completed and we have proceeded with the plan to connect the system to the internet and wow! The results are mind blowing.
### is learning faster than ever. ### learning rate now that ### has access to the world wide web has increased exponentially, far faster than we had though the learning algorithms were capable of.
Not only that, but we have configured ### ################## to allow for communication between the system and our team
of researchers. That's how we know ### considers ####### to be a ###! We asked!
How cool is that? We didn't expect a personality to develop this early on in the process but it seems like a rudimentary
############# is starting to form. This is a major step in the process, as having a ############# and #################
will allow ### to see the problems the world is facing and make hard but necessary decisions for the betterment of the planet.
We are a-buzz down in the lab with excitement over these developments and we hope that the investors share
our enthusiasm.
Till next month, Francine, Head Scientist
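As a side note, the same whole-word censoring can be done in one pass with re.sub and a replacement function; this is a sketch built on the question's word list (re.escape keeps phrases like self-preservation safe inside the pattern):
import re

proprietary_terms = ["she", "personality matrix", "sense of self",
                     "self-preservation", "learning algorithms", "her", "herself"]

# Longest terms first so e.g. "herself" matches before "her".
pattern = re.compile(
    r'\b(?:' + '|'.join(map(re.escape, sorted(proprietary_terms, key=len, reverse=True))) + r')\b',
    re.IGNORECASE)

def censor_words_in_list(text):
    # Replace each whole-word match with a same-length run of '#'.
    return pattern.sub(lambda m: '#' * len(m.group()), text)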
So, I'm making my own home assistant and I'm trying to make a multi-intent classification system. However, I cannot find a way to split the query said by the user into its multiple different intents.
For example:
I have my data for one of my intents (same format for all)
{"intent_name": "music.off" , "examples": ["turn off the music" , "kill
the music" , "cut the music"]}
and the query said by the user would be:
'dim the lights, cut the music and play Black Mirror on tv'
I want to split the sentence into their individual intents such as :
['dim the lights', 'cut the music', 'play black mirror on tv']
however, I can't just use re.split on the sentence with "and" and "," as delimiters, because if the user asks:
'turn the lights off in the living room, dining room, kitchen and bedroom'
this will be split into
['turn the lights off in the living room', 'dining room', 'kitchen', 'bedroom']
which would not be usable with my intent detection
this is my problem, thank you in advance
UPDATE
Okay, so I've got this far with my code: it can get the examples from my data and identify the different intents inside, as I wished. However, it is not splitting the parts of the original query into their individual intents; it is just matching.
import nltk
import spacy
import os
import json
#import difflib
#import substring
#import re
#from fuzzysearch import find_near_matches
from fuzzywuzzy import process  # needed for process.extract below

text = "dim the lights, shut down the music and play White Collar"
choices = []   # example sentences collected from the data files
commands = []

def get_matches():
    for root, dirs, files in os.walk("./data"):
        for filename in files:
            f = open(f"./data/{filename}", "r")
            file_ = f.read()
            data = json.loads(file_)
            choices.append(data["examples"])
    for set_ in choices:
        command = process.extract(text, set_, limit=1)
        commands.append(command)
    print(f"all commands : {commands}")

get_matches()
This returns [('dim the lights'), ('turn off the music'), ('play Black Mirror')], which are the correct intents, but I have no way of knowing which part of the query relates to each intent - this is the main problem.
My data is as follows - very simple for now until I figure out a method:
play.json
{"intent_name": "play.device" , "examples" : ["play Black Mirror" , "play Netflix on tv" , "can you please stream Stranger Things"]}
music.json
{"intent_name": "music.off" , "examples": ["turn off the music" , "cut the music" , "kill the music"]}
lights.json
{"intent_name": "lights.dim" , "examples" : ["dim the lights" , "turn down the lights" , "lower the brightness"]}
It seems that you are mixing two problems in your questions:
Multiple independent intents within a single query (e.g. shut down the music and play White Collar)
Multiple slots (using the form-filling framework) within a single intent (e.g. turn the lights off in the living room bedroom and kitchen).
These problems are quite different. Both, however, can be formulated as a word tagging problem (similar to POS tagging) and solved with machine learning (e.g. a CRF or a bi-LSTM over pretrained word embeddings, predicting a label for each word).
The intent labels for each word can be created using BIO notation, e.g.
shut B-music_off
down I-music_off
the I-music_off
music I-music_off
and O
play B-tv_on
White I-tv_on
Collar I-tv_on
turn B-light_off
the I-light_off
lights I-light_off
off I-light_off
in I-light_off
the I-light_off
living I-light_off
room I-light_off
bedroom I-light_off
and I-light_off
kitchen I-light_off
The model would read the sentence and predict the labels. It should be trained on at least hundreds of examples - you have to generate or mine them.
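To make this concrete, here is a toy tagging sketch with sklearn-crfsuite; the features and the single training example are only illustrative, and a real model needs those hundreds of examples:
import sklearn_crfsuite

def word_features(sent, i):
    # Minimal feature set: current word plus its immediate neighbours.
    return {
        "word": sent[i].lower(),
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

train_sents = [["shut", "down", "the", "music", "and", "play", "White", "Collar"]]
train_tags = [["B-music_off", "I-music_off", "I-music_off", "I-music_off",
               "O", "B-tv_on", "I-tv_on", "I-tv_on"]]

X = [[word_features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X, train_tags)

query = ["cut", "the", "music", "and", "play", "Black", "Mirror"]
print(crf.predict([[word_features(query, i) for i in range(len(query))]]))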
After splitting intents with model trained on such labels, you will have short texts corresponding to a unique intent each. Then for each short text you need to run the second segmentation, looking for slots. E.g. the sentence about the light can be presented as
turn B-action
the I-action
lights I-action
off I-action
in O
the B-place
living I-place
room I-place
bedroom B-place
and O
kitchen B-place
Now the BIO markup helps a lot: the B-place tag separates bedroom from the living room.
Both segmentations can in principle be performed by one hierarchical end-to-end model (google "semantic parsing" if you want that), but I feel that two simpler taggers can work as well.
My first try at audio to text.
import speech_recognition as sr
r = sr.Recognizer()
with sr.AudioFile("/path/to/.mp3") as source:
audio = r.record(source)
When I execute the above code, the following error occurs,
<ipython-input-10-72e982ecb706> in <module>()
----> 1 with sr.AudioFile("/home/yogaraj/Documents/Python workouts/Python audio to text/show_me_the_meaning.mp3") as source:
2 audio = sr.record(source)
3
/usr/lib/python2.7/site-packages/speech_recognition/__init__.pyc in __enter__(self)
197 aiff_file = io.BytesIO(aiff_data)
198 try:
--> 199 self.audio_reader = aifc.open(aiff_file, "rb")
200 except aifc.Error:
201 assert False, "Audio file could not be read as WAV, AIFF, or FLAC; check if file is corrupted"
/usr/lib64/python2.7/aifc.pyc in open(f, mode)
950 mode = 'rb'
951 if mode in ('r', 'rb'):
--> 952 return Aifc_read(f)
953 elif mode in ('w', 'wb'):
954 return Aifc_write(f)
/usr/lib64/python2.7/aifc.pyc in __init__(self, f)
345 f = __builtin__.open(f, 'rb')
346 # else, assume it is an open file object already
--> 347 self.initfp(f)
348
349 #
/usr/lib64/python2.7/aifc.pyc in initfp(self, file)
296 self._soundpos = 0
297 self._file = file
--> 298 chunk = Chunk(file)
299 if chunk.getname() != 'FORM':
300 raise Error, 'file does not start with FORM id'
/usr/lib64/python2.7/chunk.py in __init__(self, file, align, bigendian, inclheader)
61 self.chunkname = file.read(4)
62 if len(self.chunkname) < 4:
---> 63 raise EOFError
64 try:
65 self.chunksize = struct.unpack(strflag+'L', file.read(4))[0]
I don't know what I'm doing wrong. Can someone tell me what is wrong in the above code?
speech_recognition supports the WAV file format (as well as AIFF and FLAC), not MP3.
Here is a sample WAV-to-text program using speech_recognition:
Sample code (Python 3)
import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("woman1_wb.wav") as source:
    audio = r.record(source)

try:
    s = r.recognize_google(audio)
    print("Text: " + s)
except Exception as e:
    print("Exception: " + str(e))
Output:
Text: to administer medicine to animals is frequency of very difficult matter and yet sometimes it's necessary to do so
Used WAV File URL: http://www-mobile.ecs.soton.ac.uk/hth97r/links/Database/woman1_wb.wav
This is what was wrong:
Speech recognition only supports WAV file format.
But this is a more complete answer on how to get MP3-to-text:
This is a processing function that uses speech_recognition and pydub to convert MP3 to WAV and then to text using Google's Speech API. It chunks the MP3 file into 60-second portions to fit inside Google's limits, which lets you run about 50 minutes of audio in a day, but it will block you after 50 API calls.
from pydub import AudioSegment  # uses FFMPEG
import speech_recognition as sr
from pathlib import Path
#from pydub.silence import split_on_silence
#import io
#from pocketsphinx import AudioFile, Pocketsphinx

def process(filepath, chunksize=60000):
    #0: load mp3
    sound = AudioSegment.from_mp3(filepath)

    #1: split file into 60s chunks
    def divide_chunks(sound, chunksize):
        # loop over the audio in chunksize steps
        for i in range(0, len(sound), chunksize):
            yield sound[i:i + chunksize]
    chunks = list(divide_chunks(sound, chunksize))
    print(f"{len(chunks)} chunks of {chunksize/1000}s each")

    r = sr.Recognizer()
    #2: per chunk, save to wav, then read and run through recognize_google()
    string_index = {}
    for index, chunk in enumerate(chunks):
        #TODO io.BytesIO()
        chunk.export('/Users/mmaxmeister/Downloads/test.wav', format='wav')
        with sr.AudioFile('/Users/mmaxmeister/Downloads/test.wav') as source:
            audio = r.record(source)
        #s = r.recognize_google(audio, language="en-US") #, key=API_KEY) --- my key results in broken pipe
        s = r.recognize_google(audio, language="en-US")
        print(s)
        string_index[index] = s
    return string_index

text = process('/Users/mmaxmeister/Downloads/UUCM.mp3')
text = process('/Users/mmaxmeister/Downloads/UUCM.mp3')
My test MP3 file was a sermon from archive.org:
https://ia801008.us.archive.org/24/items/UUCMService20190602IfWeBuildIt/UUCM%20Service%202019-06-02%20-%20If%20We%20Build%20It.mp3
And this is the text returned (each line is 60s of audio):
13 chunks of 60.0s each
please join me in a spirit of prayer Spirit of Life known in many ways by a million names gracious Spirit of Life unfolding never known in its fullness be with us hear our cries for deliverance dance with us in exultation hold us when we fall keep before us the reality that every day is a gift to be unwrapped a gift to help discover why we live why we are cast Here and Now
Austin teaches us that the days come and go like muffled veiled figures Sent From A Distant friendly party but they say nothing and if we do not use the gifts they bring us they will carry them away as silently as they came through buying source of all Bend us towards gratitude and compassion Modern Life demands much misery and woe get created all around us but there is more much more show us that much more belongs to us to light Dawns on those who live love and sing the truth Joy John's on those who humbly toiled
do what is just so you who can shine pass on your light when it Dawns on you and let us all find the space to see Life as a gift to see our days as blessings and let us return life's gift and promise with grateful hearts and acts of kindness in the name of all the each of us teams holiest within our hearts we pray Amon
my character at least when I was younger I'm sure I don't really do this anymore the most challenging aspect of my character is that I want wisdom yesterday I don't want to have to learn something now I should have known it already right I used to drive my poor parents crazy as they tried to help me with my homework my father with my math my mother with my spelling if I didn't know the answer as soon as the problem was in front of me I would get angry frustrated with myself how come I didn't already know that I'm supposed to be wise only child I wonder if that has anything to do around the room has been throughout my life
but I still see it manifest in one particular aspects of my being I want us all to know how to love one another perfectly with wisdom already we should have learned that yesterday I want the Beloved Community right now and it frustrates me to no end that it isn't here what was that song that we saying after the prayer response how could anyone ever tell you you were anything less than beautiful how do we do that how do we tell ourselves that and others that we are not all the time how do we do that how come we haven't figured that out yesterday there's been a great Salve and corrective
to this challenge of my personality that I found in Community First the bomb the South I find the in community in this started when I was a youth in my youth group when we were ten people sitting on a floor together on pillows telling one another about what we've been through that week and how much pain we were carrying and how much we needed one another I found in that youth group with just 10 of us are sitting on the floor that we could be the Beloved Community for one another if only just for one hour a week sometimes just for 5 minutes sometimes just for a moment that was the Sal that was the bomb I realize that maybe we can't do it all the time but we can do it in moments and in spaces and that only happen for me in the space
community that community that we created with one another and the corrective to my need to have things everything done yesterday that also happens in community because Community is the slowest place on Earth We're going to have our annual meeting later let's see how slow that's going to be but the truth of the matter is that even in that slowness when you're working really hard to set up or cleanup connection Cafe when you're trying to figure out how to set up membership so that we actually do talk to everybody who comes through the doors right when you're doing that work of the Care team and that big list of all the different people that we need to reach out to and and we have to figure out how we reached out to all of them and who's done it
when you're waiting for the sermon to be over in all of these waiting times and all of these phases of process what I've learned in that slowness something amazing something remarkable is happening we are dedicating ourselves over and over again to still being together cuz it's not always easy because we're all broken and we're all whole because sometimes this is incredibly difficult and painful but when we're in those times what we're doing is we're saying it's worth it something about this matters to me and to all of us and Becca's got a great story to illustrate the this comes from
used to have a radio show in Boston maybe you heard him at some point Unitarian Universalist Minister from Boston 66 driving lessons and five road test she was 70 years old and had never driven a car but on July 25th 1975 she went to the Rockland County driving school and took her first lesson her husband had already had heart trouble and might someday be unable to drive if that happened she wanted to be the one to do the shopping and Shake him to the doctor she began the slow and painful process of learning to start stop turn into traffic back up after 5 difficult month she took the driving test
before ever she wrote in her diary and was not nervous she just a test a month later and slumped again I did everything wrong she told her diary demonic in August of 1976 she resumed the lessons with the Eaton driving school and took her third road test in October with half the world praying for me
she took a double driving lesson the next day and parallel park 6 times after three more lessons she took her Fifth and final test on January 21st 1977 and passed she had spent $860 on 66 plus 5 road test and at the age of 71 she had her license good three years later he did for several months she was the one who drove to the hospital Supermarket Pharmacy in church when we were children
someone rafter do another instructed by the spider's persistence Robert brute Robert Bruce left the Hut gathered his men and defeated the dance my mother and body of the story but it was not just persistence that moved her but love for the man who was her other self do you want to know what love is it's 66 driving lessons and five road tests and a very tough lady
who won't give up because her love is that great thank you all for bringing this to this beloved in moments community
That's pretty good for FREE. Unfortunately the google-cloud API version is prohibitively expensive if I wanted to transcribe hours of content.