I have a python script (excerpt shown below) that reads a sensor value. Unfortunately, it runs only for 5 - 60 minutes at a time and then suddenly stops. Is there a way I can efficiently make this run forever? Is there any reason why a python script like this couldn't run forever on a Raspberry Pi, or does python automatically limit the duration of a script?
while True:
    current_reading = readadc(current_sensor, SPICLK, SPIMOSI, SPIMISO, SPICS)
    current_sensed = (1000.0 * (0.0252 * (current_reading - 492.0))) - correction_factor
    values.append(current_sensed)
    if len(values) > 40:
        values.pop(0)
    if reading_number > 500:
        reading_number = 0
    reading_number = reading_number + 1
    if reading_number == 500:
        actual_current = round((sum(values) / len(values)), 1)
        # open up a cosm feed
        pac = eeml.datastream.Cosm(API_URL, API_KEY)
        # send data
        pac.update([eeml.Data(0, actual_current)])
        # send data to cosm
        pac.put()
It appears as though your loop lacks a delay, so it runs as fast as the CPU allows, taking a reading and appending to your "values" list on every iteration. (The pop(0) caps the list at about 40 entries, so the list itself shouldn't exhaust memory, but the busy loop needlessly hammers the CPU and the ADC.) I recommend adding a delay so you aren't sampling every instant.
Adding a delay:
import time

while True:
    current_reading = readadc(current_sensor, SPICLK, SPIMOSI, SPIMISO, SPICS)
    current_sensed = (1000.0 * (0.0252 * (current_reading - 492.0))) - correction_factor
    values.append(current_sensed)
    if len(values) > 40:
        values.pop(0)
    if reading_number > 500:
        reading_number = 0
    reading_number = reading_number + 1
    if reading_number == 500:
        actual_current = round((sum(values) / len(values)), 1)
        # open up a cosm feed
        pac = eeml.datastream.Cosm(API_URL, API_KEY)
        # send data
        pac.update([eeml.Data(0, actual_current)])
        # send data to cosm
        pac.put()
    time.sleep(1)
This should, in theory, run forever, and Python does not limit script execution automagically. I'd guess you're hitting a problem with readadc or the pac feed hanging and locking the script up, or an exception during execution (though you should see a traceback if you run the script from the command line). Does the script hang, or does it stop and exit?
If you can output some data using print() and see it on the Pi, you can add some simple debugging lines to see where it is hanging; you may or may not be able to fix it easily with a timeout argument. An alternative would be to run the loop body in its own thread, with the main thread acting as a watchdog that starts a fresh iteration if the worker takes too long to do its thing, as sketched below.
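As a rough illustration of that watchdog pattern (a minimal sketch; read_and_publish is a hypothetical stand-in for one pass of your read/average/upload loop, and note that Python can't forcibly kill a thread, so a truly stuck worker will linger as a daemon):
import threading
import time

def read_and_publish():
    # Hypothetical stand-in for one iteration of your sensor loop.
    pass

while True:
    worker = threading.Thread(target=read_and_publish)
    worker.daemon = True  # a stuck worker won't prevent the process exiting
    worker.start()
    worker.join(timeout=30)  # watchdog: allow one iteration 30 seconds
    if worker.is_alive():
        # The iteration hung (e.g. a blocked network call); log it and start
        # a fresh iteration rather than waiting forever.
        print("iteration timed out, retrying")
    time.sleep(1)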
My Problem:
The web app I'm building relies on real-time transcription of a user's voice along with timestamps for when each word begins and ends.
Google's Speech-to-Text API has a limit of 4 minutes for streaming requests, but I want users to be able to run their mics for as long as 30 minutes if they so choose.
Thankfully, Google provides its own code examples for how to make successive requests to their Speech-to-Text API in a way that mimics endless streaming speech recognition.
I've adapted their Python infinite streaming example for my purposes (see below for my code). The timestamps provided by Google are pretty accurate but the issue is that when I exceed the streaming limit (4 minutes) and a new request is made, the timestamped transcript returned by Google's API from the new request is off by as much as 5 seconds or more.
Below is an example of the output when I adjust the streaming limit to 10 seconds (so a new request to Google's Speech-to-Text API begins every 10 seconds).
The timestamp you see printed next to each transcribed response (the 'corrected_time' in the code) is the timestamp for the end of the transcribed line, not the beginning. These timestamps are accurate for the first request but are off by ~4 seconds in the second request and ~9 seconds in the third request.
In a nutshell, I want to make sure that when the streaming limit is exceeded and a new request is made, the timestamps returned by Google for that new request are adjusted accurately.
My Code:
To help you understand what's going on, I would recommend running it on your machine (only takes a couple of minutes to get working if you have a Google Cloud service account).
I've included more detail on my current diagnosis below the code.
#!/usr/bin/env python
"""Google Cloud Speech API sample application using the streaming API.

NOTE: This module requires the dependency `pyaudio`.
To install using pip:

    pip install pyaudio

Example usage:
    python THIS_FILENAME.py
"""

# [START speech_transcribe_infinite_streaming]

import os
import re
import sys
import time

from google.cloud import speech
import pyaudio
from six.moves import queue

# Audio recording parameters
STREAMING_LIMIT = 20000  # 20 seconds (originally 4 mins but shortened for testing purposes)
SAMPLE_RATE = 16000
CHUNK_SIZE = int(SAMPLE_RATE / 10)  # 100ms

# Environment variable set for Google credentials. Put the json service
# account key in the root directory.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'YOUR_SERVICE_ACCOUNT_KEY.json'


def get_current_time():
    """Return current time in milliseconds."""
    return int(round(time.time() * 1000))


class ResumableMicrophoneStream:
    """Opens a recording stream as a generator yielding the audio chunks."""

    def __init__(self, rate, chunk_size):
        self._rate = rate
        self.chunk_size = chunk_size
        self._num_channels = 1
        self._buff = queue.Queue()
        self.closed = True
        self.start_time = get_current_time()
        self.restart_counter = 0
        self.audio_input = []
        self.last_audio_input = []
        self.result_end_time = 0
        self.is_final_end_time = 0
        self.final_request_end_time = 0
        self.bridging_offset = 0
        self.last_transcript_was_final = False
        self.new_stream = True
        self._audio_interface = pyaudio.PyAudio()
        self._audio_stream = self._audio_interface.open(
            format=pyaudio.paInt16,
            channels=self._num_channels,
            rate=self._rate,
            input=True,
            frames_per_buffer=self.chunk_size,
            # Run the audio stream asynchronously to fill the buffer object.
            # This is necessary so that the input device's buffer doesn't
            # overflow while the calling thread makes network requests, etc.
            stream_callback=self._fill_buffer,
        )

    def __enter__(self):
        self.closed = False
        return self

    def __exit__(self, type, value, traceback):
        self._audio_stream.stop_stream()
        self._audio_stream.close()
        self.closed = True
        # Signal the generator to terminate so that the client's
        # streaming_recognize method will not block the process termination.
        self._buff.put(None)
        self._audio_interface.terminate()

    def _fill_buffer(self, in_data, *args, **kwargs):
        """Continuously collect data from the audio stream into the buffer."""
        self._buff.put(in_data)
        return None, pyaudio.paContinue
    def generator(self):
        """Stream audio from microphone to API and to local buffer."""
        while not self.closed:
            data = []

            # THE BELOW 'IF' STATEMENT IS WHERE THE ERROR IS LIKELY OCCURRING.
            # This statement runs when the streaming limit is hit and a new
            # request is made.
            if self.new_stream and self.last_audio_input:
                chunk_time = STREAMING_LIMIT / len(self.last_audio_input)
                if chunk_time != 0:
                    if self.bridging_offset < 0:
                        self.bridging_offset = 0
                    if self.bridging_offset > self.final_request_end_time:
                        self.bridging_offset = self.final_request_end_time
                    chunks_from_ms = round(
                        (self.final_request_end_time - self.bridging_offset)
                        / chunk_time
                    )
                    self.bridging_offset = round(
                        (len(self.last_audio_input) - chunks_from_ms) * chunk_time
                    )
                    for i in range(chunks_from_ms, len(self.last_audio_input)):
                        data.append(self.last_audio_input[i])
                self.new_stream = False

            # Use a blocking get() to ensure there's at least one chunk of
            # data, and stop iteration if the chunk is None, indicating the
            # end of the audio stream.
            chunk = self._buff.get()
            self.audio_input.append(chunk)
            if chunk is None:
                return
            data.append(chunk)

            # Now consume whatever other data's still buffered.
            while True:
                try:
                    chunk = self._buff.get(block=False)
                    if chunk is None:
                        return
                    data.append(chunk)
                    self.audio_input.append(chunk)
                except queue.Empty:
                    break

            yield b"".join(data)
def listen_print_loop(responses, stream):
    """Iterates through server responses and prints them.

    The responses passed is a generator that will block until a response
    is provided by the server.

    Each response may contain multiple results, and each result may contain
    multiple alternatives; here we print only the transcription for the top
    alternative of the top result.

    In this case, responses are provided for interim results as well. If the
    response is an interim one, print a carriage return at the end of it, to
    allow the next result to overwrite it, until the response is a final one.
    For the final one, print a newline to preserve the finalized transcription.
    """
    for response in responses:
        if get_current_time() - stream.start_time > STREAMING_LIMIT:
            stream.start_time = get_current_time()
            break

        if not response.results:
            continue

        result = response.results[0]
        if not result.alternatives:
            continue

        transcript = result.alternatives[0].transcript

        result_seconds = 0
        result_micros = 0
        if result.result_end_time.seconds:
            result_seconds = result.result_end_time.seconds
        if result.result_end_time.microseconds:
            result_micros = result.result_end_time.microseconds

        stream.result_end_time = int((result_seconds * 1000) + (result_micros / 1000))

        corrected_time = (
            stream.result_end_time
            - stream.bridging_offset
            + (STREAMING_LIMIT * stream.restart_counter)
        )

        # Display interim results, but with a carriage return at the end of
        # the line, so subsequent lines will overwrite them.
        if result.is_final:
            sys.stdout.write("FINAL RESULT # ")
            sys.stdout.write(str(corrected_time / 1000) + ": " + transcript + "\n")
            stream.is_final_end_time = stream.result_end_time
            stream.last_transcript_was_final = True

            # Exit recognition if any of the transcribed phrases could be
            # one of our keywords.
            if re.search(r"\b(exit|quit)\b", transcript, re.I):
                sys.stdout.write("Exiting...\n")
                stream.closed = True
                break
        else:
            sys.stdout.write("INTERIM RESULT # ")
            sys.stdout.write(str(corrected_time / 1000) + ": " + transcript + "\r")
            stream.last_transcript_was_final = False
def main():
    """Start bidirectional streaming from microphone input to speech API."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=SAMPLE_RATE,
        language_code="en-US",
        max_alternatives=1,
    )
    streaming_config = speech.StreamingRecognitionConfig(
        config=config, interim_results=True
    )

    mic_manager = ResumableMicrophoneStream(SAMPLE_RATE, CHUNK_SIZE)
    print(mic_manager.chunk_size)
    sys.stdout.write('\nListening, say "Quit" or "Exit" to stop.\n\n')
    sys.stdout.write("End (ms)       Transcript Results/Status\n")
    sys.stdout.write("=====================================================\n")

    with mic_manager as stream:
        while not stream.closed:
            sys.stdout.write(
                "\n" + str(STREAMING_LIMIT * stream.restart_counter) + ": NEW REQUEST\n"
            )

            stream.audio_input = []
            audio_generator = stream.generator()

            requests = (
                speech.StreamingRecognizeRequest(audio_content=content)
                for content in audio_generator
            )

            responses = client.streaming_recognize(streaming_config, requests)

            # Now, put the transcription responses to use.
            listen_print_loop(responses, stream)

            if stream.result_end_time > 0:
                stream.final_request_end_time = stream.is_final_end_time
            stream.result_end_time = 0
            stream.last_audio_input = []
            stream.last_audio_input = stream.audio_input
            stream.audio_input = []
            stream.restart_counter = stream.restart_counter + 1

            if not stream.last_transcript_was_final:
                sys.stdout.write("\n")
            stream.new_stream = True


if __name__ == "__main__":
    main()

# [END speech_transcribe_infinite_streaming]
My Current Diagnosis
The 'corrected_time' is not being set correctly when new requests are made, because the 'bridging_offset' is not being set correctly. So what we need to look at is the 'generator()' method in the 'ResumableMicrophoneStream' class.
In the 'generator()' method, there is an 'if' statement which runs when the streaming limit is hit and a new request is made:
if self.new_stream and self.last_audio_input:
Its purpose appears to be to take any lingering audio data that wasn't finished being transcribed before the streaming limit was hit and add it to the buffer before any new audio chunks so that it's transcribed in the new request.
It is also the responsibility of this 'if' statement to set the 'bridging offset' but I'm not entirely sure what this offset represents. All I know is that however it is being set, it is not being set accurately.
Time offset values show the beginning and the end of each spoken word
that is recognized in the supplied audio. A time offset value
represents the amount of time that has elapsed from the beginning of
the audio, in increments of 100ms.
This tells us that the time offsets you receive are always measured from the beginning of the audio supplied in the current request, not from the start of your overall session. That would be my guess as to why each new request's timestamps are off in your application.
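If the built-in bridging arithmetic keeps drifting, one way to compensate (a sketch of alternative bookkeeping, not Google's official fix; SessionClock and its method names are made up) is to accumulate the audio actually covered by earlier requests and rebase each returned offset onto that session clock:
CHUNK_MS = 100  # each chunk is 100 ms of audio (CHUNK_SIZE at SAMPLE_RATE)

class SessionClock:
    def __init__(self):
        self.sent_before_this_request_ms = 0

    def corrected_time(self, result_end_time_ms):
        # Google's result_end_time restarts at 0 for every new streaming
        # request, so rebase it onto the audio consumed by earlier requests.
        return self.sent_before_this_request_ms + result_end_time_ms

    def request_finished(self, chunks_sent, replayed_chunks):
        # Advance the session clock only by the *new* audio this request
        # covered; bridging chunks replayed at the start of the request were
        # already counted toward the previous one.
        self.sent_before_this_request_ms += (chunks_sent - replayed_chunks) * CHUNK_MS
The key difference from the sample's arithmetic is that the offset is derived from chunks actually sent, rather than from assuming each request covered exactly STREAMING_LIMIT milliseconds.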
for i in range(1, 10):
    QgsMessageLog.logMessage(str(i), tag="Validating", level=QgsMessageLog.INFO)
    time.sleep(1)
QGIS waits for the plugin to finish and only then shows the result (123456789), rather than logging as it works, so I can't see the step-by-step logging in QGIS (1... 2... 3... 4... ... 9).
I would like to see each message printed at the moment it is executed.
How can I do that?
Never use time.sleep() in the main thread of the GUI; that function blocks the execution of the GUI event loop, which is why you get that behavior. In your particular case you can use QTimeLine:
def onframeChanged(val):
    msg = str(val)
    QgsMessageLog.logMessage(msg, tag="Validating", level=QgsMessageLog.INFO)

n = 9
self.timeline = QtCore.QTimeLine(n * 1000)
self.timeline.setFrameRange(0, n)
self.timeline.frameChanged.connect(onframeChanged)
self.timeline.finished.connect(self.timeline.deleteLater)
self.timeline.start()
If you want the task to run periodically you can use QTimer:
def onTimeout():
    msg = QtCore.QDateTime.currentDateTime().toString()
    QgsMessageLog.logMessage(msg, tag="Validating", level=QgsMessageLog.INFO)

self.timer = QtCore.QTimer(interval=1000)
self.timer.timeout.connect(onTimeout)
self.timer.start()
I'm writing a small IRC bot which uses data grabbed from a JSON API. It's a list of events, which contains at the front any running event, followed by any future events.
I'd like to alert the channel twice before each event: an hour prior, and five minutes prior. To do this I'm attempting to create threading.Timer events which trigger at the appropriate time. I chose this method because the IRC bot requires running in an infinite loop, so the alerts need to run in their own thread. I've several times had the first alert work, but only if it triggers within ~5 minutes of the bot starting. If I start it say, 8 hours before the next event, it won't trigger the method at all. Here's the alert code: I've cut out the portions of the code which parse the time from the API, since that is already working and is just clutter for purposes of this question.
USE = pytz.timezone('US/Eastern')
ZULU = pytz.timezone('Zulu')
NET = 0

r = requests.get('http://api.pathofexile.com/leagues?type=event')
events = r.json()
NextEvent = events[0]
now = datetime.datetime.now(USE)
nextEventTime = events[0]['startAt']
/\/\
timeConverted = datetime.datetime(eventTimeY, eventTimeM, eventTimeD, eventTimeH, eventTimeMin, 0, 0).replace(tzinfo=ZULU)
until = timeConverted - now
NET = until.total_seconds()
# Race Alerts
def hourAlert():
    r = requests.get('http://api.pathofexile.com/leagues?type=event')
    events = r.json()
    NextEvent = events[0]
    now = datetime.datetime.now(USE)
    nextEventTime = events[0]['startAt']
    \/\/
    timeConverted = datetime.datetime(eventTimeY, eventTimeM, eventTimeD, eventTimeH, eventTimeMin, 0, 0).replace(tzinfo=ZULU)
    until = timeConverted - now
    NET = until.total_seconds()
    print("HOURALERT")
    if until.total_seconds() > 0:
        irc.send(bytes('PRIVMSG ' + channel + ' ' + "EVENT ALERT - 1 HOUR - " + NextEvent['id'] + ' - Occurs at ' + NextEvent['startAt'] + ' - ' + NextEvent['url'] + '\r\n', 'UTF-8'))  # gives event info
        print("Starting Timer to event - " + str(NET - 300))
        sAlert = threading.Timer(NET - 300, startAlert)
        sAlert.start()
    else:
        NextEvent = events[1]
        now = datetime.datetime.now(USE)
        nextEventTime = events[1]['startAt']
        \/\/
        timeConverted = datetime.datetime(eventTimeY, eventTimeM, eventTimeD, eventTimeH, eventTimeMin, 0, 0).replace(tzinfo=ZULU)
        until = timeConverted - now
        NET = until.total_seconds()
        irc.send(bytes('PRIVMSG ' + channel + ' ' + "EVENT ALERT - 1 HOUR - " + NextEvent['id'] + ' - Occurs at ' + NextEvent['startAt'] + ' - ' + NextEvent['url'] + '\r\n', 'UTF-8'))  # gives event info
        print("Starting Timer to next event - " + str(NET - 300))
        sAlert = threading.Timer(NET - 300, startAlert)
        sAlert.start()

def startAlert():
    r = requests.get('http://api.pathofexile.com/leagues?type=event')
    events = r.json()
    NextEvent = events[0]
    now = datetime.datetime.now(USE)
    nextEventTime = events[1]['startAt']
    \/\/
    timeConverted = datetime.datetime(eventTimeY, eventTimeM, eventTimeD, eventTimeH, eventTimeMin, 0, 0).replace(tzinfo=ZULU)
    until = timeConverted - now
    NET = until.total_seconds()
    print("STARTALERT")
    irc.send(bytes('PRIVMSG ' + channel + ' ' + "EVENT ALERT - STARTING IN 5 MINUTES - " + NextEvent['id'] + ' - ' + NextEvent['url'] + '\r\n', 'UTF-8'))  # gives event info
    NextEvent = events[1]
    if until.total_seconds() > 3600:
        print("Starting Timer to 1hr - " + str(NET - 3600))
        hAlert = threading.Timer(NET - 3600, hourAlert)
        hAlert.start()
    else:
        print("Starting Timer to event - " + str(NET - 300))
        sAlert = threading.Timer(NET - 300, startAlert)
        sAlert.start()

if until.total_seconds() > 3600:
    print("Starting Timer to 1hr - " + str(NET - 3600))
    hAlert = threading.Timer(NET - 3600, hourAlert)
    hAlert.start()
elif until.total_seconds() > 300:
    print("Starting Timer to event - " + str(NET - 300))
    hAlert = threading.Timer(NET - 300, startAlert)
    hAlert.start()
else:
    NextEvent = events[1]
    now = datetime.datetime.now(USE)
    nextEventTime = events[1]['startAt']
    \/\/
    timeConverted = datetime.datetime(eventTimeY, eventTimeM, eventTimeD, eventTimeH, eventTimeMin, 0, 0).replace(tzinfo=ZULU)
    until = timeConverted - now
    NET = until.total_seconds()
    print("Starting Timer to next event 1h - " + str(NET - 3600))
    hAlert = threading.Timer(NET - 3600, hourAlert)
    hAlert.start()
# End Alerts
I suspected for a time that my issue was using the timedelta.seconds value instead of total_seconds(), but I won't know whether that fixes it for a few hours yet. The main reason I'm asking here is that this code works in a testbed: if I tell it the next event is in 30 seconds, triggering 15 and 5 seconds before the event, it works just fine. However, when I bring the code back to multi-hour timespans, the events often won't trigger. They don't error out; they just plain don't work: not even the alert prints happen.
Thanks for any help you can give!
edit: I should mention, this is the first python program that I've written, but I'm a fairly experienced coder overall. My problem might be language related in that I'm doing something in a way python doesn't want me to. I'm unsure, for example, if nesting events like this would cause them to never trigger, because each is waiting for the full resolution of the child's function before triggering its own.
edit2: After more work, I can get the first alert to work, but despite creating the next Timer according to print statements, it never triggers. Is something with the nesting of triggers wrong?
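For reference on the nesting question: chained threading.Timer objects don't wait on each other; each callback runs on its own thread once its interval elapses, and scheduling the next timer from inside a callback returns immediately. Here is a minimal sketch (intervals scaled down to seconds) that chains timers the same way, which may help isolate whether the pattern or the interval math is at fault:
import threading
import time

START = time.time()

def tick(n):
    print("tick %d at +%.1fs" % (n, time.time() - START))
    if n < 3:
        # Chaining: schedule the next timer from inside the callback.
        # start() returns immediately; nothing blocks waiting for the
        # child timer to fire.
        t = threading.Timer(2.0, tick, args=(n + 1,))
        t.start()

threading.Timer(2.0, tick, args=(1,)).start()
# Timer threads are non-daemon by default, so the process stays alive
# until the last one fires.
time.sleep(8)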
I'm creating a LAN speed test which creates a data file in a specified location of a specified size and records the speed at which it is created/read. For the most part this is working correctly; there is just one problem: the read speed is ridiculously fast because all it's doing is timing how long it takes the file to open, rather than how long it takes the file's contents to actually be read.
So far I have this:
import time
import pythoncom
from win32com.client import Dispatch
import os

# create file - write speed
myPath = input('Where do you want to write the file?')
size_MB = int(input('What sized file do you want to test with? (MB)'))
size_B = size_MB * 1024 * 1024
fName = '\pydatafile'

# start timer
start = time.clock()
f = open(myPath + fName, 'w')
f.write("\x00" * size_B)
f.close()

# how much time it took
elapsed = (time.clock() - start)
print("It took", elapsed, "seconds to write the", size_MB, "MB file")
time.sleep(1)
writeMBps = size_MB / elapsed
print("That's", writeMBps, "MBps.")
time.sleep(1)
writeMbps = writeMBps * 8
print("Or", writeMbps, "Mbps.")
time.sleep(2)

# open file - read speed
startRead = time.clock()
f = open(myPath + fName, 'r')

# how much time it took
elapsedRead = (time.clock() - startRead)
print("It took", elapsedRead, "seconds to read the", size_MB, "MB file")
time.sleep(1)
readMBps = size_MB / elapsedRead
print("That's", readMBps, "MBps.")
time.sleep(1)
readMbps = readMBps * 8
print("Or", readMbps, "Mbps.")
time.sleep(2)
f.close()

# delete the data file
os.remove(myPath + fName)

# record results on Excel
xl = Dispatch('Excel.Application')
xl.visible = 0
wb = xl.Workbooks.Add(r'C:\File\Location')
ws = wb.Worksheets(1)

# Write speed result
#
# loop until empty cell is found in column
col = 1
row = 1
empty = False
while not empty:
    val = ws.Cells(row, col).value
    print("Looking for next available cell to write to...")
    if val == None:
        print("Writing result to cell")
        ws.Cells(row, col).value = writeMbps
        empty = True
    row += 1

# Read speed result
#
# loop until empty cell is found in column
col = 2
row = 1
empty = False
while not empty:
    val = ws.Cells(row, col).value
    print("Looking for next available cell to write to...")
    if val == None:
        print("Writing result to cell")
        ws.Cells(row, col).value = readMbps
        empty = True
    row += 1

xl.Run('Save')
xl.Quit()
pythoncom.CoUninitialize()
How can I make this so the read speed is correct?
Thanks a lot
Try to actually read the file:
f = open(myPath + fName, 'r')
f.read()
Or (if the file is too large to fit in memory):
f = open(myPath + fName, 'r')
while f.read(1024 * 1024):
    pass
But the operating system could still make the read fast by caching the file content: you've just written it there! And even if you manage to disable caching, your measurement (in addition to network speed) could include the time it takes the file server to write the data to its disk.
If you want network speed only, you need to use two separate machines on the LAN. E.g. run an echo server on one machine (by enabling Simple TCP/IP services or by writing and running your own), then run a Python echo client on another machine that sends some data to the echo server, makes sure it receives the same data back, and measures the turnaround time.
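A minimal sketch of such an echo client (assuming an echo service is already listening on the standard echo port 7; the host address and payload size below are placeholders):
import socket
import time

HOST = '192.168.1.10'  # placeholder: the machine running the echo server
PORT = 7               # standard TCP echo port
PAYLOAD = b'\x00' * 64 * 1024  # 64 KiB per round trip

with socket.create_connection((HOST, PORT)) as s:
    start = time.perf_counter()
    s.sendall(PAYLOAD)
    received = b''
    while len(received) < len(PAYLOAD):
        chunk = s.recv(65536)
        if not chunk:
            raise ConnectionError("echo server closed early")
        received += chunk
    elapsed = time.perf_counter() - start

assert received == PAYLOAD  # make sure we got the same data back
# The data crossed the wire twice (there and back), so count it twice.
mbps = (2 * len(PAYLOAD) * 8) / elapsed / 1e6
print("Round trip took %.3f s, ~%.1f Mbps" % (elapsed, mbps))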
I'm not sure if anyone else has this problem, but I'm getting an exception "Too big query offset" when using a cursor for chaining tasks on appengine development server (not sure if it happens on live).
The error occurs when requesting a cursor after 4000+ records have been processed in a single query.
I wasn't aware that offsets had anything to do with cursors, and perhaps it's just a quirk in the SDK for App Engine.
To fix it, either shorten the time allowed before the task is deferred (so fewer records get processed at a time), or, when checking the elapsed time, also check that the number of records processed is still within range, e.g. if time.time() > end_time or count == 2000: reset the count and defer the task. 2000 is an arbitrary number; I'm not sure what the limit should be.
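In code, that combined guard looks roughly like this (a sketch; process is a placeholder for the per-record work):
for tran in transacts:
    count += 1
    process(tran)  # placeholder for the per-record work
    if time.time() > end_time or count >= 2000:
        # Defer the remainder before too many records accumulate behind
        # the cursor in a single pass.
        deferred.defer(test_cursor_task, transacts.cursor())
        return False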
EDIT:
After making the above-mentioned changes, the task never finishes executing. The with_cursor(cursor) code is being called, but it seems to start at the beginning each time. Am I missing something obvious?
The code that causes the exception is as follows:
The table "Transact" has 4800 rows. The error occurs when transacts.cursor() is called when time.time() > end_time is true. 4510 records have been processed at the time when the cursor is requested, which seems to cause the error (on development server, haven't tested elsewhere).
def some_task(trans):
    tts = db.get(trans)
    for t in tts:
        #logging.info('in some_task')
        pass

def test_cursor(request):
    ret = test_cursor_task()
    return HttpResponse('')

def test_cursor_task(cursor=None):
    startDate = datetime.datetime(2010, 7, 30)
    endDate = datetime.datetime(2010, 8, 30)
    end_time = time.time() + 20.0

    transacts = Transact.all().filter('transactionDate >', startDate).filter('transactionDate <=', endDate)
    count = 0
    if cursor:
        transacts.with_cursor(cursor)

    trans = []
    logging.info('queue_trans')
    for tran in transacts:
        count += 1
        #trans.append(str(tran))
        trans.append(str(tran.key()))
        if len(trans) == 20:
            deferred.defer(some_task, trans, _countdown=500)
            trans = []
        if time.time() > end_time:
            logging.info(count)
            if len(trans) > 0:
                deferred.defer(some_task, trans, _countdown=500)
                trans = []
            logging.info('time limit exceeded setting next call to queue')
            cursor = transacts.cursor()
            deferred.defer(test_cursor_task, cursor)
            logging.info('returning false')
            return False
    return True
Hope this helps someone.
Thanks
Bert
Try this again without using the iter functionality:
# ...
CHUNK = 500
objs = transacts.fetch(CHUNK)
for tran in objs:
    do_your_stuff(tran)  # placeholder for the per-entity work
if len(objs) == CHUNK:
    deferred.defer(my_task_again, cursor=str(transacts.cursor()))
This works for me.
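For completeness, the resumption side of that pattern might look like the following under the legacy db API (a sketch reusing the names above; do_your_stuff stays a placeholder):
def my_task_again(cursor=None):
    transacts = Transact.all().filter('transactionDate >', startDate).filter('transactionDate <=', endDate)
    if cursor:
        transacts.with_cursor(cursor)  # resume where the previous batch ended
    objs = transacts.fetch(CHUNK)
    for tran in objs:
        do_your_stuff(tran)  # placeholder for the per-entity work
    if len(objs) == CHUNK:
        # A full batch means more rows may remain; defer the next slice.
        deferred.defer(my_task_again, cursor=str(transacts.cursor()))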