Hello, I have a person detector script and I want to send an email notification whenever a person is detected. To prevent mail spamming I need a cooldown timer for the sendMail function: it might be triggered at any time, but it should only respond if it is not on cooldown.
I tried using an async task but couldn't get it to work: once a person is detected it enters a loop where it sends an email every 5 minutes, even if no person is detected after the first sighting.
Example:
Person detection script is running.
Person detected on camera -> send an email (start the 5 minute cooldown).
Person sighted again after 2 minutes (no email sent, because there are still 3 minutes of cooldown left).
Person sighted after 6 minutes -> send another email (because the 5 minute cooldown is over).
Summary of my code (necessary parts only; detection and sending mail work, but the cooldown timer doesn't):
async def sendAlert(server):
    server.sendmail(sender_email, receiver_email, message)
    print('sent!')
    await asyncio.sleep(300)

if __name__ == "__main__":
    while True:
        for i in range(len(boxes)):
            if classes[i] == 1 and scores[i] > threshold:
                with smtplib.SMTP_SSL("smtp.gmail.com", port, context=context) as server:
                    sendAlert(server)
                box = boxes[i]
                cv2.rectangle(img, (box[1], box[0]), (box[3], box[2]), (255, 0, 0), 2)
If a person is detected, the script should send an alert by email. Afterwards, if a person is detected again within 5 minutes, the sendAlert function shouldn't respond until those 5 minutes have passed.
I agree with @Prune that you need to create a small (minimal) use case and present your code so that it is relevant not only to you but also to others. Additionally, your question should include a verifiable example. Without these attributes, your question becomes hard for people to grasp, solve, or verify a solution against.
However, as I understand it, you have some action (sending an email if a person is detected) that you would like to perform only after a certain cool-off period. In other words, you need a mechanism for keeping track of time, so the datetime library is a good fit.
So, your pseudo code should look something like this:
Pseudo Code
import datetime

start = capture_timestamp()
cutoff = '00:05:00'
dt_cutoff = read_in_cutoff_as_timedelta(cutoff)

if person_detected:
    now = capture_timestamp()
    dt = now - start
    if dt >= dt_cutoff:
        # Send notification and restart the cooldown
        send_email_notification()
        start = capture_timestamp()
    else:
        # Do not send notification
        print('now: {} | dt: {}'.format(now, dt))
You could use datetime.datetime.utcnow() for the timestamp and datetime.timedelta() for defining dt_cutoff. For reading in a time string as a time you could do this:
tm = datetime.datetime.strptime(cutoff, '%H:%M:%S').time()
dt_cutoff = datetime.timedelta(hours=tm.hour, minutes=tm.minute, seconds=tm.second)
I hope this gives you some idea about how to model this.
Additional Resources
https://www.guru99.com/date-time-and-datetime-classes-in-python.html
https://docs.python.org/3/library/datetime.html
https://thispointer.com/python-how-to-convert-a-timestamp-string-to-a-datetime-object-using-datetime-strptime/
Complete Solution
Now, finally, if you are in a hurry for a ready-made solution, you may use the following class as shown. All you need to do is instantiate the class with your cool-off period (timer_cutoff) and then call the method is_timeout(). If it returns True, you send your notification. There is also an obj.istimeout attribute which stores this decision (True/False).
import time

# Set the cutoff time to 2 seconds to test the output,
# then wait 3 seconds: expect istimeout = True.
# Instantiate the TimeTracker class object.
ttk = TimeTracker(timer_cutoff='00:00:02')  # 'HH:MM:SS'

# Wait for 3 seconds
time.sleep(3)

print('start timestamp: {}'.format(ttk.timestamp_start_str))
print('cutoff timestamp: {}'.format(ttk.timestamp_cutoff_str))
print('timer_cutoff: {}'.format(ttk.timer_cutoff_str))

# Now check if the cutoff time has been reached
ttk.is_timeout()
print('Send Notification: {}'.format(ttk.istimeout))
print('now_timestamp: {}'.format(ttk.timestamp_now_str))
class TimeTracker
Here is the TimeTracker class:
import datetime

class TimeTracker(object):

    def __init__(self,
                 timer_cutoff = '00:05:00',
                 cutoff_strformat = '%H:%M:%S'):
        self.timer_cutoff_str = timer_cutoff
        self.cutoff_strformat = cutoff_strformat
        self.timestamp_start, self.timestamp_start_str = self.get_timestamp()
        self.dt_cutoff = None  # timedelta for cutoff
        self.timestamp_cutoff = None
        self.timestamp_cutoff_str = None
        self.update_timestamp_cutoff()
        self.timestamp_now = None
        self.timestamp_now_str = None
        self.dt_elapsed = None
        self.istimeout = False

    def get_timestamp(self):
        ts = datetime.datetime.utcnow()
        tss = str(ts)
        return (ts, tss)

    def readin_cutoff_as_timedelta(self):
        td = datetime.datetime.strptime(self.timer_cutoff_str,
                                        self.cutoff_strformat)
        tdm = td.time()
        self.dt_cutoff = datetime.timedelta(hours = tdm.hour,
                                            minutes = tdm.minute,
                                            seconds = tdm.second)

    def update_timestamp_cutoff(self):
        self.readin_cutoff_as_timedelta()
        self.timestamp_cutoff = self.timestamp_start + self.dt_cutoff
        self.timestamp_cutoff_str = str(self.timestamp_cutoff)

    def time_elapsed(self):
        self.dt_elapsed = self.timestamp_now - self.timestamp_start

    def is_timeout(self):
        self.timestamp_now, self.timestamp_now_str = self.get_timestamp()
        self.time_elapsed()
        if (self.dt_elapsed < self.dt_cutoff):
            self.istimeout = False
        else:
            self.istimeout = True
        return self.istimeout
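For your specific use case, here is a minimal sketch (assumptions: the detection variables boxes/classes/scores/threshold and the SMTP settings from your question already exist) of how TimeTracker could be wired into the detection loop, re-instantiating it after each email to restart the 5 minute cooldown:
import smtplib

# Sketch only: wiring TimeTracker into the detection loop from the question.
# Assumes boxes, classes, scores, threshold and the SMTP variables
# (sender_email, receiver_email, message, port, context) already exist.
ttk = TimeTracker(timer_cutoff='00:05:00')  # 5 minute cooldown
first_alert_sent = False

while True:
    for i in range(len(boxes)):
        if classes[i] == 1 and scores[i] > threshold:
            # Send on the very first detection, afterwards only once the
            # cooldown has expired.
            if not first_alert_sent or ttk.is_timeout():
                with smtplib.SMTP_SSL("smtp.gmail.com", port, context=context) as server:
                    server.sendmail(sender_email, receiver_email, message)
                first_alert_sent = True
                # Restart the 5 minute cooldown with a fresh tracker.
                ttk = TimeTracker(timer_cutoff='00:05:00')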
Related
Is it possible to use multiprocessing in Django on a request?
# So if I send a request to http://127.0.0.1:8000/wallet_verify
def wallet_verify(request):
    walelts = botactive.objects.all()
    # Here I check whether the user wants to be included in the process;
    # if they set it to True then I'll include them, else ignore.
    for active in walelts:
        check_active = active.active
        if check_active == True:
            user_is_active = active.user
            # For the ones that want to be included I then go and get their key data.
            # I need both the API key and the secret, so I loop through to get the
            # data for the active users.
            database = Bybitapidatas.objects.filter(user=user_is_active)
            for apikey in database:
                apikey = apikey.apikey
            for apisecret in database:
                apisecret = apisecret.apisecret
            # Since I am making a request to an exchange endpoint I can only include
            # one API key and secret at a time, i.e. one person at a time. This is
            # why I want to run it in parallel.
            for a, b in zip(list(Bybitapidatas.objects.filter(user=user_is_active).values("apikey")), list(Bybitapidatas.objects.filter(user=user_is_active).values("apisecret"))):
                session = spot.HTTP(endpoint='https://api-testnet.bybit.com/', api_key=a['apikey'], api_secret=b['apisecret'])
                # Here I check whether they have balance to open trades, if they have chosen to be included.
                GET_USDT_BALANCE = session.get_wallet_balance()['result']['balances']
                for i in GET_USDT_BALANCE:
                    if 'USDT' in i.values():
                        GET_USDT_BALANCE = session.get_wallet_balance()['result']['balances']
                        idx_USDT = GET_USDT_BALANCE.index(i)
                        GET_USDTBALANCE = session.get_wallet_balance()['result']['balances'][idx_USDT]['free']
                        print(round(float(GET_USDTBALANCE), 2))
                # If they don't have enough balance I skip the user.
                if round(float(GET_USDTBALANCE), 2) < 11:
                    pass
                else:
                    session.place_active_order(
                        symbol="BTCUSDT",
                        side="Buy",
                        type="MARKET",
                        qty=10,
                        timeInForce="GTC"
                    )
How can I run this process in parallel while looping through the database to also get the data for each individual user?
I am still new to coding, so I hope I explained it in a way that makes sense.
I have tried multiprocessing and pools, but then I get an error that the app has not started yet, and I have to run it outside of wallet_verify. Is there a way to do it inside wallet_verify, when I send the POST request?
Any help appreciated.
Filtering the database to get the users who have set it to True:
listi = [1, 3] (these are the user IDs returned)
processess = botactive.objects.filter(active=True).values_list('user')
listi = [row[0] for row in processess]
Get the users from listi and perform the action:
def wallet_verify(listi):
    # print(listi)
    database = Bybitapidatas.objects.filter(user=listi)
    print("---------------------------------------------------- START")
    for apikey in database:
        apikey = apikey.apikey
        print(apikey)
    for apisecret in database:
        apisecret = apisecret.apisecret
        print(apisecret)
    start_time = time.time()
    session = spot.HTTP(endpoint='https://api-testnet.bybit.com/', api_key=apikey, api_secret=apisecret)
    GET_USDT_BALANCE = session.get_wallet_balance()['result']['balances']
    for i in GET_USDT_BALANCE:
        if 'USDT' in i.values():
            GET_USDT_BALANCE = session.get_wallet_balance()['result']['balances']
            idx_USDT = GET_USDT_BALANCE.index(i)
            GET_USDTBALANCE = session.get_wallet_balance()['result']['balances'][idx_USDT]['free']
            print(round(float(GET_USDTBALANCE), 2))
    if round(float(GET_USDTBALANCE), 2) < 11:
        pass
    else:
        session.place_active_order(
            symbol="BTCUSDT",
            side="Buy",
            type="MARKET",
            qty=10,
            timeInForce="GTC"
        )
    print("My program took", time.time() - start_time, "to run")
    print("---------------------------------------------------- END")
    return HttpResponse("Wallets verified")
verifyt is what I use for the multiprocessing, since I don't want wallet_verify to run without being requested. The initializer also runs django.setup for each worker process.
def verifyt(request):
    with ProcessPoolExecutor(max_workers=4, initializer=django.setup) as executor:
        results = executor.map(wallet_verify, listi)
    return HttpResponse("done")
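Putting the two snippets together, here is a rough sketch of how the pieces could fit (this assumes wallet_verify is the plain function shown above rather than a view, and that botactive and HttpResponse are imported in the same module):
from concurrent.futures import ProcessPoolExecutor
import django

def verifyt(request):
    # Build the list of active user IDs at request time.
    processess = botactive.objects.filter(active=True).values_list('user')
    listi = [row[0] for row in processess]
    # One worker per user (capped at 4); each worker runs django.setup first.
    with ProcessPoolExecutor(max_workers=4, initializer=django.setup) as executor:
        # list() collects the results and surfaces any exceptions raised in the workers.
        results = list(executor.map(wallet_verify, listi))
    return HttpResponse("done")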
I am trying to develop a listener for the Windows event log. It should wait until anything new is added, and when that happens it should catch the new event and pass it as an object so I can create a handler. I have found some things online, but nothing has worked so far. I am using win32evtlog and win32event.
The code I have so far is this:
import win32evtlog
import win32event
server = 'localhost' # name of the target computer to get event logs
logtype = 'Application' # 'Application' # 'Security'
hand = win32evtlog.OpenEventLog(server,logtype)
flags = win32evtlog.EVENTLOG_BACKWARDS_READ|win32evtlog.EVENTLOG_SEQUENTIAL_READ
total = win32evtlog.GetNumberOfEventLogRecords(hand)
print(total)
h_evt = win32event.CreateEvent(None, 1, 0, "evt0")
for x in range(10):
    notify = win32evtlog.NotifyChangeEventLog(hand, h_evt)
    wait_result = win32event.WaitForSingleObject(h_evt, win32event.INFINITE)
    print("notify", notify)
The output after I run it and force one event to happen is this:
865
notify None
After this it gets stuck and does not catch any other events.
Thank you in advance for any help
I noticed two things. First, you create a manual-reset event but never reset it. Second, you only need to call win32evtlog.NotifyChangeEventLog once.
import win32evtlog
import win32event
import win32api
server = 'localhost' # name of the target computer to get event logs
logtype = 'MyAppSource'
hand = win32evtlog.OpenEventLog(server, logtype)
flags = win32evtlog.EVENTLOG_FORWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ
total = win32evtlog.GetNumberOfEventLogRecords(hand)
print(total)
total += 1

# Auto-reset event (bManualReset=False): WaitForSingleObject resets it
# automatically after each satisfied wait, so one registration is enough.
h_evt = win32event.CreateEvent(None, False, False, "evt0")
notify = win32evtlog.NotifyChangeEventLog(hand, h_evt)
for x in range(10):
    wait_result = win32event.WaitForSingleObject(h_evt, win32event.INFINITE)
    readlog = win32evtlog.ReadEventLog(hand, flags, total)
    for event in readlog:
        print(f"{event.TimeGenerated.Format()} : [{event.RecordNumber}] : {event.SourceName}")
    total += len(readlog)
win32evtlog.CloseEventLog(hand)
win32api.CloseHandle(h_evt)
You can change MyAppSource to your source name (one of the first-level log names shown in Event Viewer on your computer). If I want to monitor "Dell", for example: logtype = 'Dell'
The method above will only work for first-level sources. If you want to go a level deeper, use win32evtlog.EvtSubscribe(). Here I use it with a callback, borrowing the XML code from the answer linked in the comments.
import win32evtlog
import xml.etree.ElementTree as ET

channel = 'Microsoft-Windows-Windows Defender/Operational'

def on_event(action, context, event_handle):
    if action == win32evtlog.EvtSubscribeActionDeliver:
        xml = ET.fromstring(win32evtlog.EvtRender(event_handle, win32evtlog.EvtRenderEventXml))
        # XML namespace: the root element has an xmlns definition, so we have to use the namespace
        ns = '{http://schemas.microsoft.com/win/2004/08/events/event}'
        event_id = xml.find(f'.//{ns}EventID').text
        level = xml.find(f'.//{ns}Level').text
        channel = xml.find(f'.//{ns}Channel').text
        execution = xml.find(f'.//{ns}Execution')
        process_id = execution.get('ProcessID')
        thread_id = execution.get('ThreadID')
        time_created = xml.find(f'.//{ns}TimeCreated').get('SystemTime')
        print(f'Time: {time_created}, Level: {level} Event Id: {event_id}, Channel: {channel}, Process Id: {process_id}, Thread Id: {thread_id}')
        print(xml.find(f'.//{ns}Data').text)
        print()

handle = win32evtlog.EvtSubscribe(
    channel,
    win32evtlog.EvtSubscribeToFutureEvents,
    None,
    Callback=on_event)
# Wait for user to hit enter...
input()
win32evtlog.CloseEventLog(handle)
The command timedif takes two message IDs and calculates the difference in time that they were sent, accurate to 2 decimal places.
This is what I have made:
@bot.command(name='timedif', help='', aliases=['snowflake', 'timediff'])
async def timedif(ctx, id1, id2):
    try:
        id1 = int(id1)
        id2 = int(id2)
    except ValueError:
        await ctx.reply("Check your message IDs! They are incorrect!")
        return
    msg1 = await ctx.fetch_message(id1)
    msg2 = await ctx.fetch_message(id2)
    time1 = msg1.created_at
    time2 = msg2.created_at
    ts_diff = time2 - time1
    secs = abs(ts_diff.total_seconds())
    days, secs = divmod(secs, secs_per_day := 60 * 60 * 24)
    hrs, secs = divmod(secs, secs_per_hr := 60 * 60)
    mins, secs = divmod(secs, secs_per_min := 60)
    secs = round(secs, 2)
    answer = '{} secs'.format(secs)
    if secs > 60:
        answer = '{} mins and {} secs'.format(int(mins), secs)
    if mins > 60:
        answer = '{} hrs, {} mins and {} secs'.format(int(hrs), int(mins), secs)
    if hrs > 24:
        answer = '{} days, {} hrs, {} mins and {} secs'.format(int(days), int(hrs), int(mins), secs)
    embed = discord.Embed(title="**Time Difference**", description=f"""IDs: {id1}, {id2}
Time difference between the 2 IDs:
{answer}""")
    await ctx.reply(embed=embed)
I tested the bot but encountered a problem:
This code can only fetch messages from the same channel as ctx. If I take one of the IDs from another channel (e.g. a DM channel), the bot cannot access it. Is there an algorithm/function that allows me to calculate the time difference of any two IDs? I think it's possible, as a lot of bots have been doing it recently.
To clarify: for messages from ctx.channel, it works fine and is able to calculate the time diff. The only problem is that it can't fetch messages from other channels.
Calculating the time difference between any two Discord IDs doesn't require any API requests, since the creation time of every snowflake object is encoded within those 19-21 digits. When read in binary, bits 22 and up are the timestamp.
For recent discord.py versions, there's already a helper method for you!
time1 = discord.utils.snowflake_time(int(id1))
time2 = discord.utils.snowflake_time(int(id2))
ts_diff = time2 - time1
secs = abs(ts_diff.total_seconds())
If this doesn't exist for your version yet, it's simple to implement snowflake_time():
import datetime

def snowflake_time(id):
    return datetime.datetime.utcfromtimestamp(((id >> 22) + 1420070400000) / 1000)
This method works for any Discord ID (message ID, channel ID, guild ID, category ID, audit log ID, etc)
Discord Snowflake Structure: https://discord.com/developers/docs/reference#snowflakes.
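For completeness, here is a hedged sketch of how the original command could use this helper so it no longer depends on fetching messages from ctx.channel. The names are taken from the question; only the timestamp lookup changes, and the reply is shortened for brevity (keep your existing day/hour/minute formatting and embed if you prefer):
@bot.command(name='timedif', help='', aliases=['snowflake', 'timediff'])
async def timedif(ctx, id1, id2):
    try:
        id1 = int(id1)
        id2 = int(id2)
    except ValueError:
        await ctx.reply("Check your message IDs! They are incorrect!")
        return
    # No fetch_message() calls: the timestamps are decoded from the IDs themselves,
    # so the messages can live in any channel or DM, or even be deleted.
    time1 = discord.utils.snowflake_time(id1)
    time2 = discord.utils.snowflake_time(id2)
    secs = abs((time2 - time1).total_seconds())
    # Format secs into days/hrs/mins exactly as in the question's code if desired.
    await ctx.reply(f"Time difference between the 2 IDs: {round(secs, 2)} secs")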
My Problem:
The web app I'm building relies on real-time transcription of a user's voice along with timestamps for when each word begins and ends.
Google's Speech-to-Text API has a limit of 4 minutes for streaming requests but I want users to be able to run their mic's for as long as 30 minutes if they so choose.
Thankfully, Google provides its own code examples for how to make successive requests to their Speech-to-Text API in a way that mimics endless streaming speech recognition.
I've adapted their Python infinite streaming example for my purposes (see below for my code). The timestamps provided by Google are pretty accurate but the issue is that when I exceed the streaming limit (4 minutes) and a new request is made, the timestamped transcript returned by Google's API from the new request is off by as much as 5 seconds or more.
Below is an example of the output when I adjust the streaming limit to 10 seconds (so a new request to Google's Speech-to-Text API begins every 10 seconds).
The timestamp you see printed next to each transcribed response (the 'corrected_time' in the code) is the timestamp for the end of the transcribed line, not the beginning. These timestamps are accurate for the first request but are off by ~4 seconds in the second request and ~9 seconds in the third request.
In a Nutshell, I want to make sure that when the streaming limit is exceeded and a new request is made, the timestamps returned by Google for that new request are adjusted accurately.
My Code:
To help you understand what's going on, I would recommend running it on your machine (only takes a couple of minutes to get working if you have a Google Cloud service account).
I've included more detail on my current diagnosis below the code.
#!/usr/bin/env python

"""Google Cloud Speech API sample application using the streaming API.

NOTE: This module requires the dependency `pyaudio`.
To install using pip:

    pip install pyaudio

Example usage:
    python THIS_FILENAME.py
"""

# [START speech_transcribe_infinite_streaming]

import os
import re
import sys
import time

from google.cloud import speech
import pyaudio
from six.moves import queue

# Audio recording parameters
STREAMING_LIMIT = 20000  # 20 seconds (originally 4 mins but shortened for testing purposes)
SAMPLE_RATE = 16000
CHUNK_SIZE = int(SAMPLE_RATE / 10)  # 100ms

# Environment Variable set for Google Credentials. Put the json service account
# key in the root directory
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'YOUR_SERVICE_ACCOUNT_KEY.json'


def get_current_time():
    """Return Current Time in MS."""
    return int(round(time.time() * 1000))


class ResumableMicrophoneStream:
    """Opens a recording stream as a generator yielding the audio chunks."""

    def __init__(self, rate, chunk_size):
        self._rate = rate
        self.chunk_size = chunk_size
        self._num_channels = 1
        self._buff = queue.Queue()
        self.closed = True
        self.start_time = get_current_time()
        self.restart_counter = 0
        self.audio_input = []
        self.last_audio_input = []
        self.result_end_time = 0
        self.is_final_end_time = 0
        self.final_request_end_time = 0
        self.bridging_offset = 0
        self.last_transcript_was_final = False
        self.new_stream = True
        self._audio_interface = pyaudio.PyAudio()
        self._audio_stream = self._audio_interface.open(
            format=pyaudio.paInt16,
            channels=self._num_channels,
            rate=self._rate,
            input=True,
            frames_per_buffer=self.chunk_size,
            # Run the audio stream asynchronously to fill the buffer object.
            # This is necessary so that the input device's buffer doesn't
            # overflow while the calling thread makes network requests, etc.
            stream_callback=self._fill_buffer,
        )

    def __enter__(self):
        self.closed = False
        return self

    def __exit__(self, type, value, traceback):
        self._audio_stream.stop_stream()
        self._audio_stream.close()
        self.closed = True
        # Signal the generator to terminate so that the client's
        # streaming_recognize method will not block the process termination.
        self._buff.put(None)
        self._audio_interface.terminate()

    def _fill_buffer(self, in_data, *args, **kwargs):
        """Continuously collect data from the audio stream, into the buffer."""
        self._buff.put(in_data)
        return None, pyaudio.paContinue

    def generator(self):
        """Stream Audio from microphone to API and to local buffer"""
        while not self.closed:
            data = []

            """
            THE BELOW 'IF' STATEMENT IS WHERE THE ERROR IS LIKELY OCCURRING

            This statement runs when the streaming limit is hit and a new request is made.
            """

            if self.new_stream and self.last_audio_input:

                chunk_time = STREAMING_LIMIT / len(self.last_audio_input)

                if chunk_time != 0:

                    if self.bridging_offset < 0:
                        self.bridging_offset = 0

                    if self.bridging_offset > self.final_request_end_time:
                        self.bridging_offset = self.final_request_end_time

                    chunks_from_ms = round(
                        (self.final_request_end_time - self.bridging_offset)
                        / chunk_time
                    )

                    self.bridging_offset = round(
                        (len(self.last_audio_input) - chunks_from_ms) * chunk_time
                    )

                    for i in range(chunks_from_ms, len(self.last_audio_input)):
                        data.append(self.last_audio_input[i])

                self.new_stream = False

            # Use a blocking get() to ensure there's at least one chunk of
            # data, and stop iteration if the chunk is None, indicating the
            # end of the audio stream.
            chunk = self._buff.get()
            self.audio_input.append(chunk)

            if chunk is None:
                return
            data.append(chunk)

            # Now consume whatever other data's still buffered.
            while True:
                try:
                    chunk = self._buff.get(block=False)

                    if chunk is None:
                        return
                    data.append(chunk)
                    self.audio_input.append(chunk)

                except queue.Empty:
                    break

            yield b"".join(data)


def listen_print_loop(responses, stream):
    """Iterates through server responses and prints them.

    The responses passed is a generator that will block until a response
    is provided by the server.

    Each response may contain multiple results, and each result may contain
    multiple alternatives; here we print only the transcription for the top
    alternative of the top result.

    In this case, responses are provided for interim results as well. If the
    response is an interim one, print a line feed at the end of it, to allow
    the next result to overwrite it, until the response is a final one. For the
    final one, print a newline to preserve the finalized transcription.
    """
    for response in responses:

        if get_current_time() - stream.start_time > STREAMING_LIMIT:
            stream.start_time = get_current_time()
            break

        if not response.results:
            continue

        result = response.results[0]

        if not result.alternatives:
            continue

        transcript = result.alternatives[0].transcript

        result_seconds = 0
        result_micros = 0

        if result.result_end_time.seconds:
            result_seconds = result.result_end_time.seconds

        if result.result_end_time.microseconds:
            result_micros = result.result_end_time.microseconds

        stream.result_end_time = int((result_seconds * 1000) + (result_micros / 1000))

        corrected_time = (
            stream.result_end_time
            - stream.bridging_offset
            + (STREAMING_LIMIT * stream.restart_counter)
        )

        # Display interim results, but with a carriage return at the end of the
        # line, so subsequent lines will overwrite them.
        if result.is_final:
            sys.stdout.write("FINAL RESULT # ")
            sys.stdout.write(str(corrected_time / 1000) + ": " + transcript + "\n")
            stream.is_final_end_time = stream.result_end_time
            stream.last_transcript_was_final = True

            # Exit recognition if any of the transcribed phrases could be
            # one of our keywords.
            if re.search(r"\b(exit|quit)\b", transcript, re.I):
                sys.stdout.write("Exiting...\n")
                stream.closed = True
                break
        else:
            sys.stdout.write("INTERIM RESULT # ")
            sys.stdout.write(str(corrected_time / 1000) + ": " + transcript + "\r")
            stream.last_transcript_was_final = False


def main():
    """start bidirectional streaming from microphone input to speech API"""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=SAMPLE_RATE,
        language_code="en-US",
        max_alternatives=1,
    )

    streaming_config = speech.StreamingRecognitionConfig(
        config=config, interim_results=True
    )

    mic_manager = ResumableMicrophoneStream(SAMPLE_RATE, CHUNK_SIZE)
    print(mic_manager.chunk_size)
    sys.stdout.write('\nListening, say "Quit" or "Exit" to stop.\n\n')
    sys.stdout.write("End (ms) Transcript Results/Status\n")
    sys.stdout.write("=====================================================\n")

    with mic_manager as stream:

        while not stream.closed:
            sys.stdout.write(
                "\n" + str(STREAMING_LIMIT * stream.restart_counter) + ": NEW REQUEST\n"
            )

            stream.audio_input = []
            audio_generator = stream.generator()

            requests = (
                speech.StreamingRecognizeRequest(audio_content=content)
                for content in audio_generator
            )

            responses = client.streaming_recognize(streaming_config, requests)

            # Now, put the transcription responses to use.
            listen_print_loop(responses, stream)

            if stream.result_end_time > 0:
                stream.final_request_end_time = stream.is_final_end_time
            stream.result_end_time = 0
            stream.last_audio_input = []
            stream.last_audio_input = stream.audio_input
            stream.audio_input = []
            stream.restart_counter = stream.restart_counter + 1

            if not stream.last_transcript_was_final:
                sys.stdout.write("\n")
            stream.new_stream = True


if __name__ == "__main__":
    main()

# [END speech_transcribe_infinite_streaming]
My Current Diagnosis
The 'corrected_time' is not being set correctly when new requests are made. This is due to the 'bridging_offset' not being set correctly. So what we need to look at is the 'generator()' method in the 'ResumableMicrophoneStream' class.
In the 'generator()' method, there is an 'if' statement which runs when the streaming limit is hit and a new request is made:
if self.new_stream and self.last_audio_input:
Its purpose appears to be to take any lingering audio data that wasn't finished being transcribed before the streaming limit was hit and add it to the buffer before any new audio chunks so that it's transcribed in the new request.
It is also the responsibility of this 'if' statement to set the 'bridging offset' but I'm not entirely sure what this offset represents. All I know is that however it is being set, it is not being set accurately.
Time offset values show the beginning and the end of each spoken word
that is recognized in the supplied audio. A time offset value
represents the amount of time that has elapsed from the beginning of
the audio, in increments of 100ms.
This tells us that the offsets you receive are always measured from the beginning of the audio supplied in that request, i.e. they restart from zero with each new streaming request. That would be my guess as to why it's causing your application problems.
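To make that point concrete, here is a toy calculation (all numbers are made up, none come from the API) showing how a per-request offset has to be converted into a global timestamp, and where drift can creep in if the amount of audio actually sent before a request differs from STREAMING_LIMIT:
# Toy numbers only, to illustrate the arithmetic.
STREAMING_LIMIT = 10000              # ms, a new request every 10 s
CHUNK_MS = 100                       # each chunk is 100 ms, as with CHUNK_SIZE above

chunks_sent_in_first_request = 96    # suppose only 9.6 s of audio actually went out
audio_actually_sent_ms = chunks_sent_in_first_request * CHUNK_MS    # 9600 ms

result_end_time = 2500               # offset reported inside the *second* request, in ms

# The sample's correction assumes exactly STREAMING_LIMIT of audio preceded this request:
approx_global_ms = result_end_time + STREAMING_LIMIT * 1            # 12500 ms

# Per the quoted docs, offsets restart with each request's audio, so the position on a
# global timeline is relative to the audio that was actually sent beforehand:
exact_global_ms = result_end_time + audio_actually_sent_ms          # 12100 ms

print(approx_global_ms - exact_global_ms)                           # 400 ms of drift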
I have the following function which creates a Celery group and tries to run all of the subtasks in the group at a specific time:
def run_sms_task(smstask):
    if smstask:
        phones = []
        for user in smstask.userlist.users.all():
            phones.append(user.profile.phone)
        tasks = []
        for phone in phones:
            tasks.append(send_sms_async.s(phone, smstask.text))
        job = group(tasks)
        result = job.apply_async(eta=smstask.starts_at)
        result.save()
        return result.id
    return None
All the subtasks are fired as soon as I call this function, not at the defined starts_at. What is wrong? Thanks!
P.S. For testing purposes I wrote a function which works fine for me when launching the tasks separately:
def run_sms_task_test1(smstask):
    if smstask:
        phones = []
        for user in smstask.userlist.users.all():
            phones.append(user.profile.phone)
        tasks = []
        for phone in phones:
            send_sms_async.apply_async([phone, smstask.text], eta=smstask.starts_at)
    return None
Maybe this is a timezone issue. Try the following solution:
import datetime
import pytz

london_tz = pytz.timezone('Europe/London')
london_dt = london_tz.localize(datetime.datetime(year, month, day, hour, minute))
starts_at = london_dt.astimezone(pytz.UTC)
result = job.apply_async(eta=starts_at)
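If smstask.starts_at comes out of a Django model as a naive datetime, a variant of the same idea uses Django's own timezone utilities (a sketch, assuming a standard Django setup):
from django.utils import timezone

starts_at = smstask.starts_at
if timezone.is_naive(starts_at):
    # Interpret the naive datetime in the project's current timezone,
    # then hand the aware datetime to Celery.
    starts_at = timezone.make_aware(starts_at, timezone.get_current_timezone())

result = job.apply_async(eta=starts_at)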
Hope this helps you.