MongoDB: finding latest entry not updating after new insert - Python

I am trying to learn async programming by making a Discord bot, and for the bot I am trying to create a bump reminder function.
My goal for the bump reminder function is to log the first bump, wait two hours, send the reminder message, then wait for a user to bump, validate and log the second bump, and so on; for every successful bump, the bump time is logged into a MongoDB database.
I also need to design it so that if the bot restarts itself at any time, it will send the bump reminder based on the most recent bump time logged in the MongoDB database.
This is my bump reminder class:
https://gist.github.com/bensonchow123/02f321e182d2ce251ce8d08ceb88456e
A bump times out after 2 hours, and there is a bump check loop that runs every 10 seconds:
@tasks.loop(seconds=10)
async def _bump_reminder(self):
    if not self.waiting_bump:
        bump_now = await self._check_bump()
        if bump_now:
            await self._send_reminder_message()
And for my question: the line if now > (last_bump_time + timedelta(hours=2)): doesn't seem to work; it returns True anyway, so the bot keeps sending the bump reminder message again and again.
The function that retrieves the time of the most recent successful bump insert is this:
async def _last_bump_time(self):
    last_bump = bump_db.find_one(sort=[('$natural', ASCENDING)])
    last_bump = last_bump["date"]
    return datetime.strptime(last_bump, "%S:%M:%H:%d:%m:%Y:%z")
I think the cause is that in this _last_bump_time function, the line last_bump = bump_db.find_one(sort=[('$natural', ASCENDING)]), which is supposed to find the most recent database insert, doesn't seem to update after a successful bump, so the bot keeps sending the reminder message because now is still greater than the last bump time.
This is the function that returns the current time:
async def _now(self):
    return datetime.now(timezone.utc)
The function that logs a successful bump into my database is this:
async def _log_successful_bump(self, bumper_id):
    now = await self._now()
    bumper = {"id": bumper_id, "date": now.strftime("%S:%M:%H:%d:%m:%Y:%z")}
    bump_db.insert_one(bumper)
Example of a successful bump insert:
_id: 629385d8a441ee722cc76336
id: 736933464292589568
date: "24:40:14:29:05:2022:+0000"

Related

python websockets message queue getting too long, resulting in stale messages

I have the following design problem: I open a websocket connection and send multiple messages every second to get open risk.
What's happening is that the message queue builds up with a lot of "stale" messages (from frequently sending messages), and when I send a trade order out and it fills, the risk is delayed because the while loop has to iterate through all the old messages before it updates with the correct risk.
I.e., the deque in websockets["messages"] gets really long, and it takes the while loop some time to catch up to the "recent" messages.
Sample code:
import websockets

url = 'wss://www.deribit.com/ws/api/v2'
risk1 = None
risk2 = None

async with websockets.connect(url) as ws:
    # send many messages every second, which builds up a lot of messages in the queue
    await send_message("private/get_positions", "btc-perpetual")
    await send_message("private/get_positions", "eth-perpetual")
    ....
    await send_message("private/get_positions", "sol-perpetual")

    # messages from above build up in a deque, which gets iterated one-at-a-time in a while loop
    response = await ws.recv()
    if response["id"] == 100:
        risk1 = response["result"]
    elif response["id"] == 200:
        risk2 = response["result"]
    else:
        pass

    # message queue gets long, and messages go stale (response from ws.recv()), resulting in out-of-date risk
    risk_usd = calculate_risk(risk1, risk2)
    if risk_usd > 0:
        await post_order()
Some ideas I've had, but I'm not sure if they're good practice:
ignore a message if it is older than x seconds
unpack websockets["messages"] and choose the last item
Note: there are multiple variables (risk1, risk2) getting updated with each iteration, and ALL of them need to be up-to-date.
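One way to implement the "choose the last item" idea without reaching into the websockets library's internals is to drain everything already buffered and keep only the newest message per request id, so every risk variable is updated from the freshest snapshot available. A sketch under the assumption that each decoded message is a dict with "id" and "result" keys; a plain asyncio.Queue stands in for the connection here so the example is runnable (with a real connection you would replace queue.get() with json.loads(await ws.recv())):

```python
import asyncio

async def drain_latest(queue):
    """Pull every buffered message and keep only the newest per id."""
    latest = {}
    # The first receive blocks until at least one message is available.
    msg = await queue.get()
    latest[msg["id"]] = msg["result"]
    # Then empty whatever else is already queued without blocking.
    while True:
        try:
            msg = queue.get_nowait()
        except asyncio.QueueEmpty:
            break
        latest[msg["id"]] = msg["result"]  # newer messages overwrite older ones
    return latest

async def demo():
    q = asyncio.Queue()
    # Simulate a backlog of stale risk snapshots followed by fresh ones.
    for msg in [{"id": 100, "result": "stale-btc"},
                {"id": 200, "result": "stale-eth"},
                {"id": 100, "result": "fresh-btc"},
                {"id": 200, "result": "fresh-eth"}]:
        q.put_nowait(msg)
    return await drain_latest(q)

print(asyncio.run(demo()))  # {100: 'fresh-btc', 200: 'fresh-eth'}
```

This addresses the "ALL of them need to be up-to-date" constraint: one drain pass yields the latest value for every id seen, and ids with no new message simply keep their previous value.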

Pubsub from a Function published in the background?

The whole reason I use Pub/Sub is so that my Cloud Function doesn't have to wait and things happen automatically. To publish a topic from a Function, the code the Google docs show is:
# Publishes a message to a Cloud Pub/Sub topic.
def publish(topic_name, message):
    # Instantiates a Pub/Sub client
    publisher = pubsub_v1.PublisherClient()

    if not topic_name or not message:
        return ('Missing "topic" and/or "message" parameter.', 400)

    # References an existing topic
    topic_path = publisher.topic_path(PROJECT_ID, topic_name)

    message_json = json.dumps({
        'data': {'message': message},
    })
    message_bytes = message_json.encode('utf-8')

    # Publishes a message
    try:
        publish_future = publisher.publish(topic_path, data=message_bytes)
        publish_future.result()  # Verify the publish succeeded
        return 'Message published.'
    except Exception as e:
        print(e)
        return (e, 500)
Which means the Function is waiting for a response, but I want my Function to spend 0 seconds on this. How can I publish and forget, not wait for a response (without more dependencies)?
As you can see from the comments in the code, it is waiting to make sure that the publish succeeded. It's not waiting for any sort of response from any of the subscribers on that topic. It's extremely important that the code waits until the publish succeeds, otherwise the message might not actually be sent at all, and you risk losing that data entirely. This is because Cloud Functions terminates the code and locks down CPU and I/O after the function returns.
If you really want to risk it, you could try removing the call to result(), but I don't think it's a good idea if you want a reliable system.
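If you do accept that risk, a middle ground is to attach a completion callback to the publish future instead of blocking on result(), so failures at least get logged. add_done_callback is standard concurrent.futures behavior, which the Pub/Sub client's futures also support; the sketch below uses a stdlib executor as a stand-in for the publisher so it is runnable anywhere (the names and the fake message id are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

published = []

def on_publish_done(future):
    # In a Cloud Function you would log here instead of appending;
    # future.exception() is non-None if the publish failed.
    if future.exception() is None:
        published.append(future.result())

with ThreadPoolExecutor() as pool:
    # Stand-in for: publish_future = publisher.publish(topic_path, data=message_bytes)
    fut = pool.submit(lambda: "message-id-123")
    fut.add_done_callback(on_publish_done)
# Leaving the with-block waits for the worker, so the callback has run by now.

print(published)  # ['message-id-123']
```

The caveat above still applies: if the function returns before the callback fires, Cloud Functions may freeze the CPU and the message can be lost, so blocking on result() remains the safe default.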
You can schedule your functions to run at certain times of the day or at every 'interval'. In this example, this would go into your index.js file and be deployed with your functions.
The code would run 'every minute' in the background. Any error would simply be returned to your logs in the Google Cloud console.
If you are using Firestore and need to manage documents, you can make the function run on specific events, like on document create or update, etc.
https://firebase.google.com/docs/functions/firestore-events
EDIT: Not exactly sure if this example matches your use case, but I hope it helps:
exports.scheduledFx = functions.pubsub.schedule('every minute').onRun(async (context) => {
    // Cron time string    Description
    // 30 * * * *          Execute a command at 30 minutes past the hour, every hour.
    // 0 13 * * 1          Execute a command at 1:00 p.m. UTC every Monday.
    // */5 * * * *         Execute a command every five minutes.
    // 0 */2 * * *         Execute a command every second hour, on the hour.
    try {
        // your code here
    } catch (error) {
        return error
    }
})

How to get the next telegram messages from specific users

I'm implementing a Telegram bot that serves users. Initially, it got every new message sequentially, even in the middle of an ongoing session with another user. Because of that, any time 2 or more users tried to use the bot, everything got jumbled up. To solve this I implemented a queue system that puts users on hold until the ongoing conversation is finished, but this queue system is turning out to be a big hassle. I think my problems would be solved by a method to get the next messages from a specific chat_id or user. This is the code I'm using to get any new messages:
def get_next_message_result(self, update_id: int, chat_id: str):
    """
    Get the next message of a given chat.
    In case the next message is from another user, put that user in the
    queue and wait again for the expected one.
    """
    update_id += 1
    link_requisicao = f'{self.url_base}getUpdates?timeout={message_timeout}&offset={update_id}'
    result = json.loads(requests.get(link_requisicao).content)["result"]
    if len(result) == 0:
        return result, update_id  # timeout
    message_chat_id = result[0]["message"]["chat"]["id"]
    if "text" not in result[0]["message"]:
        self.responder(speeches.no_text_speech, message_chat_id)
        return [], update_id  # message without text
    while message_chat_id != chat_id:
        self.responder(speeches.wait_speech, message_chat_id)
        if message_chat_id not in self.current_user_queue:
            self.current_user_queue.append(message_chat_id)
            print("Queuing user with the following chat_id:", message_chat_id)
        update_id += 1
        link_requisicao = f'{self.url_base}getUpdates?timeout={message_timeout}&offset={update_id}'
        result = json.loads(requests.get(link_requisicao).content)["result"]
        if len(result) == 0:
            return result, update_id  # timeout
        message_chat_id = result[0]["message"]["chat"]["id"]
        if "text" not in result[0]["message"]:
            self.responder(speeches.no_text_speech, message_chat_id)
            return [], update_id  # message without text
    return result, update_id
On another note: I use the queue so that the moment the current conversation ends, the next user in line is called. Should I just drop the queue feature and tell concurrent users to wait a few minutes, while ignoring any messages not from the current chat_id?
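A common alternative to blocking other users behind one conversation is to dispatch every incoming update into a per-chat queue, so each conversation reads only its own messages and nobody waits. The sketch below shows just the routing idea with a dict of deques (the ChatDispatcher class and its method names are illustrative; the getUpdates loop and the Telegram replies are left out):

```python
from collections import defaultdict, deque

class ChatDispatcher:
    """Route incoming updates into one queue per chat_id."""

    def __init__(self):
        self.queues = defaultdict(deque)

    def dispatch(self, update):
        # A single getUpdates loop calls this for every update,
        # so no conversation ever consumes another chat's messages.
        chat_id = update["message"]["chat"]["id"]
        self.queues[chat_id].append(update["message"])

    def next_message(self, chat_id):
        # Each conversation pulls only from its own queue.
        queue = self.queues[chat_id]
        return queue.popleft() if queue else None

dispatcher = ChatDispatcher()
dispatcher.dispatch({"message": {"chat": {"id": 111}, "text": "hi from A"}})
dispatcher.dispatch({"message": {"chat": {"id": 222}, "text": "hi from B"}})
dispatcher.dispatch({"message": {"chat": {"id": 111}, "text": "more from A"}})

print(dispatcher.next_message(111)["text"])  # hi from A
print(dispatcher.next_message(222)["text"])  # hi from B
```

With this shape there is nothing to "hold" users in: interleaved messages from different chats simply land in different queues, which removes the need for both the wait_speech replies and the global user queue.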

How to make reactions (when editing embed) in discord.py faster?

The code works just fine now, but after using it for a while, I found some bugs in it. My problem is that after a user reacts to the embed using reactions (picture shown here), the gap between clicking the reaction and the embed actually being edited is too long (in the range of 1-3 seconds), which isn't ideal when many users react at once or one user reacts to several answers.
Sometimes when a user reacts too fast, it shows this. As you can see, I reacted too fast on two answers and "un-reacted" to them. It showed some change in the embed, but in the end it showed nothing. The user has to "un-react" and react again for any change to show in the embed.
My theory is that my code is just bad and/or the API can't keep up. Do you have any suggestions on how to make it faster without any trade-offs?
The code is shown below, with comments explaining why I'm doing things and what for.
@commands.Cog.listener()
async def on_raw_reaction_add(self, payload: discord.RawReactionActionEvent):
    # open the json file where I store the embed IDs
    with open("cogs/message_id.json") as f:
        data = json.load(f)
    # fetch the channel and message from the payload ids
    channel = self.bot.get_channel(payload.channel_id)
    message = await channel.fetch_message(payload.message_id)
    # don't react to anything other than what is in the json file
    if payload.message_id in data["message_ids"]:
        embed = message.embeds[0]
        reaction = discord.utils.get(message.reactions, emoji=payload.emoji.name)
        # prepared dictionary for the reactions, as it helps with performance
        emoticon_dict = {
            "1️⃣": 0,
            "2️⃣": 1,
            "3️⃣": 2,
            "4️⃣": 3,
            "5️⃣": 4,
            "6️⃣": 5,
            "7️⃣": 6,
            "8️⃣": 7,
            "9️⃣": 8,
            "🔟": 9
        }
        dictionary = {}
        i = emoticon_dict[str(payload.emoji)]
        dictionary[i] = reaction.count - 1  # the bot's vote counts, subtract it
        # set for when duplicates are in there
        members = set()
        async for user in reaction.users():
            if user.id == self.bot.user.id:  # don't add the bot user to it
                continue
            else:
                members.add(user)
        # show the votes in members = set() and join them
        vypis_hlasu = f"{', '.join(user.display_name for user in members)}"
        # edit the message based on the dictionary index
        edit = embed.set_field_at(i, name=embed.fields[i].name,
                                  value="{} | {}".format(dictionary[i], vypis_hlasu),
                                  inline=False)
        await reaction.message.edit(embed=edit)
In case you're wondering about the on_raw_reaction_remove decorator, the code is the same.
Thanks for any help.
So the answer was kind of simple. I was able to insert those values into a MySQL database, so I don't use a JSON file to store message ids. I created a simple cache that caches the ids from the database using a Pythonic set() for faster searching through extensive values, as shown below:
class Poll(commands.Cog):
    def __init__(self, bot):
        self.bot = bot
        self.caching = set()
        self.cache.start()

    @commands.command()
    async def your_command(self, ctx):
        # here you should insert ctx.message.id into the database and self.caching
        pass

    @tasks.loop(minutes=30)
    async def cache(self):
        try:
            self.cursor = self.connect(user="xxx", password="xxx", host="xxx", database="xxx")
            # so you don't overclog your database
            query2 = "DELETE FROM `Poll` WHERE `DateOfPoll` < CURRENT_DATE - 7;"
            self.cursor.execute(operation=query2)
            query = "SELECT `PollID` FROM `Poll`"
            tuples = self.query(query=query)
            # set comprehension for having clean numbers in the cache; the tuples come back as [("321654",), ...]
            self.caching = {int(clean_variable) for variable in tuples for clean_variable in variable}
            return self.caching
        except mysql.connector.Error as e:
            print(e)
            self.database.rollback()
        finally:
            self.close(commit=True)

    @cache.before_loop
    async def before_cache(self):
        try:
            self.cursor = self.connect(user="xxx", password="xxx", host="xxx", database="xxx")
            query = "your query"
            self.cursor.execute(query)
        except mysql.connector.Error as e:
            print(e)
            self.database.rollback()
            return
        finally:
            self.close(commit=True)
This lets me insert those values into the database and cache them every 30 minutes. When creating new embeds, I simply add the new values to self.caching and insert them into the database. If I only inserted into the database, the reactions wouldn't work, because the ID of the embed isn't in the cache yet and you would have to wait up to 30 minutes for it to sort itself out. With self.caching.add(id_embed) you get instant reactions, as the time complexity of a Python set lookup is (on average) O(1), as shown here.
There are some bugs here and there, but overall this optimization is on average faster than opening a JSON file and reading through a list. Another approach to this problem is using a dictionary rather than a set for caching, but I wasn't able to explore this idea further.
You should also take the API rate limits into account. You can't go faster than those, but I was able to reproduce the same code with better results.
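The cache-plus-immediate-add pattern the answer describes is easy to see in isolation: membership tests on a set hash the key instead of scanning a list. A tiny runnable sketch of that pattern (function names and ids are illustrative; the database insert is omitted):

```python
caching = set()

def register_poll(message_id):
    # On poll creation: insert into the database *and* the in-memory cache,
    # so reactions work immediately instead of after the next 30-minute refresh.
    caching.add(message_id)

def is_tracked_poll(message_id):
    # O(1) average-case membership test, versus scanning a list loaded from JSON.
    return message_id in caching

register_poll(736933464292589568)
print(is_tracked_poll(736933464292589568))  # True
print(is_tracked_poll(123))                 # False
```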

Telegram bot api how to schedule a notification?

I've made a bot that gets today's football matches and, if the user wants, sends a reminder 10 minutes before a selected match.
while current_time != new_hour:
    now = datetime.now()
    current_time = now.strftime("%H:%M")

# return notification
text_caps = "Your match starts in 10 minutes"
context.bot.send_message(chat_id=update.effective_chat.id, text=text_caps)
Obviously, while the loop runs I cannot use any other command. I am new to programming; how could I implement this so that I still get the notification but can use other commands while it runs?
Thank you!
Try using aiogram; you can make scheduled tasks with aiocron (store the users who want notifications in a database or in a global dict).
You can schedule a job.
Let's say you have a CommandHandler("watch_match", watch_match) that listens for a /watch_match command, and 10 minutes later a message is supposed to arrive:
def watch_match(update: Update, context: CallbackContext):
    chat_id = update.effective_chat.id
    ten_minutes = 60 * 10  # 10 minutes in seconds
    context.job_queue.run_once(callback=send_match_info, when=ten_minutes, context=chat_id)
    # Whatever you pass here as context is available in the job.context variable of the callback

def send_match_info(context: CallbackContext):
    chat_id = context.job.context
    context.bot.send_message(chat_id=chat_id, text="Yay")
A more detailed example in the official repository
And in the official documentation you can see the run_once function
