I just started using the python-telegram-bot library and made my own bot from their examples and documentation, but I still can't get it to do something that should be rather simple: have different cache_times for different inline queries. This is the code involved:
def inline_opt(update, context):
    results = [
        InlineQueryResultArticle(
            id=uuid4(),
            title="QUERY1",
            input_message_content=InputTextMessageContent(
                "blah blah")),
        InlineQueryResultArticle(
            id=uuid4(),
            title="QUERY2",
            input_message_content=InputTextMessageContent(
                "Blah blah "))
    ]
    update.inline_query.answer(results, cache_time=0)
It works fine, except that I want the first query to have a cache_time of 0 seconds and the other one a cache_time of x seconds. Sorry if it's a dumb question, but I couldn't find an answer in the docs or in the Telegram group.
cache_time is a parameter of inline_query.answer(), which means you need to filter the queries you receive and build a tailored answer with its own cache_time for each one.
import time
from uuid import uuid4

from telegram import InlineQueryResultArticle, InputTextMessageContent

def inlinequery(update, context):
    query = update.inline_query.query
    if query == "time":
        results = [
            InlineQueryResultArticle(
                id=uuid4(),
                title="time",
                input_message_content=InputTextMessageContent(
                    "time({!s}): {!s}".format(query, time.asctime(time.localtime()))))
        ]
        seconds = 1
        update.inline_query.answer(results, cache_time=seconds)
    elif query == "hora":
        results = [
            InlineQueryResultArticle(
                id=uuid4(),
                title="hora",
                input_message_content=InputTextMessageContent(
                    "Time({!s}): {!s}".format(query, time.asctime(time.localtime()))))
        ]
        seconds = 60
        update.inline_query.answer(results, cache_time=seconds)
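For context, a handler like this is still registered the usual way; here is a minimal sketch assuming a v12/v13-style Updater setup (the token string is a placeholder):

from telegram.ext import Updater, InlineQueryHandler

updater = Updater("YOUR_BOT_TOKEN", use_context=True)  # placeholder token
updater.dispatcher.add_handler(InlineQueryHandler(inlinequery))
updater.start_polling()
updater.idle()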
I followed the documentation (Link) and passed the params (keywords and regions), but the returned values were not correct. For example:
from linkedin_api import Linkedin

api = Linkedin(user_account, user_password)
res = api.search_people(
    keywords='elons',
    regions=['105117694']
)
print(len(res))
This only returns 14 results, but when I manually performed the same search with the same params on the website, I got 30 results.
Can somebody help me with the problem?
According to the documentation, it returns "minimal data only".
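If you want to see exactly what comes back, a quick inspection sketch like the one below (assuming res is the list of dicts returned by the search_people call above) makes the "minimal data" point visible:

# res is the list returned by api.search_people(...) above
for i, person in enumerate(res):
    # each hit is a plain dict with only a few fields, hence "minimal data only"
    print(i, sorted(person.keys()))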
I am using the Microsoft Graph API to pull my emails in Python and return them as a JSON object. There is a limitation that it only returns 12 emails. The code is:
def get_calendar_events(token):
    graph_client = OAuth2Session(token=token)
    # Configure query parameters to
    # modify the results
    query_params = {
        #'$select': 'subject,organizer,start,end,location',
        #'$orderby': 'createdDateTime DESC'
        '$select': 'sender, subject',
        '$skip': 0,
        '$count': 'true'
    }
    # Send GET to /me/messages
    events = graph_client.get('{0}/me/messages'.format(graph_url), params=query_params)
    events = events.json()
    # Return the JSON result
    return events
The response I get is twelve emails with subject and sender, plus the total count of my emails. Now I want to iterate over the emails, changing $skip in query_params to fetch the next 12 each time. Is there a way to do this with loops or recursion?
I'm thinking something along the lines of this:
def get_calendar_events(token):
    graph_client = OAuth2Session(token=token)
    json_list = []
    ct = 0
    while True:
        # Configure query parameters to modify the results
        query_params = {
            #'$select': 'subject,organizer,start,end,location',
            #'$orderby': 'createdDateTime DESC'
            '$select': 'sender, subject',
            '$skip': ct,
            '$count': 'true'
        }
        # Send GET to /me/messages
        events = graph_client.get('{0}/me/messages'.format(graph_url), params=query_params)
        events = events.json()
        # Stop once a page comes back empty (no 'value' list means no more messages)
        if not events.get('value'):
            break
        json_list.append(events)
        ct += 12
    # Return the list of JSON pages
    return json_list
It may require some tweaking, but essentially you keep adding 12 to the offset until a page comes back empty (or errors out). Each JSON page is appended to a list, which is then returned. If you know how many emails you have, you could also batch it that way.
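As an alternative to manually incrementing $skip, Graph responses include an @odata.nextLink URL whenever more results are available. A sketch along these lines (reusing the graph_url and query parameters from above) simply follows that link until it disappears:

def get_all_messages(token):
    graph_client = OAuth2Session(token=token)
    query_params = {'$select': 'sender, subject', '$count': 'true'}
    url = '{0}/me/messages'.format(graph_url)
    pages = []
    while url:
        resp = graph_client.get(url, params=query_params).json()
        pages.append(resp)
        # '@odata.nextLink' is present only while more pages remain;
        # the link already encodes the paging parameters.
        url = resp.get('@odata.nextLink')
        query_params = None
    return pages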
I have a simple Kafka reader class. I really don't remember where I got this code. Could have found it, or my previous self may have created it from various examples. Either way, it allows me to quickly read a kafka topic.
import io

import avro.io
import avro.schema
from kafka import KafkaConsumer

class KafkaStreamReader():
    def __init__(self, schema_name, topic, server_list):
        # get_schema() is a helper defined elsewhere that returns the Avro schema string
        self.schema = get_schema(schema_name)
        self.topic = topic
        self.server_list = server_list
        self.consumer = KafkaConsumer(topic, bootstrap_servers=server_list,
                                      auto_offset_reset='latest',
                                      security_protocol="PLAINTEXT")

    def decode(self, msg, schema):
        # Deserialize an Avro-encoded message body against the given schema
        parsed_schema = avro.schema.parse(schema)
        bytes_reader = io.BytesIO(msg)
        decoder = avro.io.BinaryDecoder(bytes_reader)
        reader = avro.io.DatumReader(parsed_schema)
        record = reader.read(decoder)
        return record

    def fetch_msg(self):
        event = next(self.consumer).value
        record = self.decode(event, self.schema)
        return record
To use it, I instantiate an object and loop forever reading data, like this:
consumer = KafkaStreamReader(schema, topic, server_list)

while True:
    message = consumer.fetch_msg()
    print(message)
I'm sure there are better solutions, but this works for me.
What I want to get out of this is the metadata on the Kafka record. A coworker in another group used Java or Node and was able to see the following information on the record:
{
  topic: 'clickstream-v2.origin.test',
  value:
   {
     schema: 'payload_data/jsonschema/1-0-3',
     data: [ [Object] ]
   },
  offset: 16,
  partition: 0,
  highWaterOffset: 17,
  key: null,
  timestamp: 2018-07-25T17:01:36.959Z
}
I want to access the timestamp field using the Python KafkaConsumer.
I have a solution. If I change the fetch_msg method as follows, I can access the timestamp:
def fetch_msg(self):
    event = next(self.consumer)
    timestamp = event.timestamp
    record = self.decode(event.value, self.schema)
    return record, timestamp
Not the most elegant solution, as I personally don't like methods that return multiple values, but it illustrates how to access the event data I was after. I can work on something more elegant later.
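For what it's worth, a slightly tidier variant could bundle the decoded record with the metadata fields kafka-python exposes on each ConsumerRecord (topic, partition, offset, timestamp). The DecodedMessage wrapper below is just an illustrative name; the method is a drop-in replacement for fetch_msg inside the class:

from collections import namedtuple

# Hypothetical wrapper type; the field names mirror kafka-python's ConsumerRecord metadata
DecodedMessage = namedtuple('DecodedMessage', ['record', 'topic', 'partition', 'offset', 'timestamp'])

def fetch_msg(self):
    event = next(self.consumer)
    record = self.decode(event.value, self.schema)
    return DecodedMessage(record, event.topic, event.partition, event.offset, event.timestamp)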
I'm trying to automate email reporting using Python. My problem is that I can't pull the subject from the data that my email client outputs.
Abbreviated dataset:
[(messageObject){
   id = "0bd503eb00000000000000000000000d0f67"
   name = "11.26.17 AM [TXT-CAT]{Shoppers:2}"
   status = "active"
   messageFolderId = "0bd503ef0000000000000000000000007296"
   content[] =
      (messageContentObject){
         type = "html"
         subject = "Early Cyber Monday – 60% Off Sitewide "
      }
 }
]
I can pull the other fields like this:
messageId = []
messageName = []
subject = []

for info in messages:
    messageId.append(str(info['id']))
    messageName.append(str(info['name']))
    subject.append(str(info[content['subject']]))

data = pd.DataFrame({
    'id': messageId,
    'name': messageName,
    'subject': subject
})
data.head()
I've been trying to iterate through content[] using a for loop, but I can't get it to work. Let me know if you have any suggestions.
#FamousJameous gave the correct answer:
That format is called SOAP. My guess for the syntax would be info['content']['subject'] or maybe info['content'][0]['subject']
info['content'][0]['subject'] worked with my data.
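So the loop from the question would end up looking roughly like this (a sketch assuming every message has at least one content entry, with messages and pd as in the question):

messageId = []
messageName = []
subject = []

for info in messages:
    messageId.append(str(info['id']))
    messageName.append(str(info['name']))
    # content[] is a list of messageContentObject entries; take the subject of the first one
    subject.append(str(info['content'][0]['subject']))

data = pd.DataFrame({'id': messageId, 'name': messageName, 'subject': subject})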
I have a very simple "guestbook" script on GAE/Python. It often happens, however, that entries I put() into the datastore do not show up right away; I almost always need to refresh.
def post(self):
    t = NewsBase(
        date=datetime.now(),
        text=self.request.get('text'),
        title=self.request.get('title'),
        link=self.request.get('link'),
        upvotes=[],
        downvotes=[],
    )
    t.put()
    q = db.GqlQuery('SELECT * FROM NewsBase ORDER BY date DESC')
    template_values = {
        'q': q,
        'user': user,
        'search': search
    }
    template = jinja_environment.get_template('finaggnews.html')
    self.response.out.write(template.render(template_values))
I'm sure there is a solution to this?
Best,
Oliver
This is due to the eventual consistency model of the HRD (High Replication Datastore).
You should really read some of the intro docs, in particular Structuring Data for Strong Consistency (https://developers.google.com/appengine/docs/python/datastore/structuring_for_strong_consistency), and search SO; this question has been asked many times before.
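For reference, the usual workaround is to put all guestbook entries under a common parent and query with an ancestor filter, which is strongly consistent. A rough sketch against the question's NewsBase model (the 'NewsBook'/'default' parent key is an illustrative choice):

from google.appengine.ext import db

# Illustrative shared parent key; ancestor queries on it are strongly consistent
NEWS_PARENT = db.Key.from_path('NewsBook', 'default')

def post(self):
    t = NewsBase(
        parent=NEWS_PARENT,
        date=datetime.now(),
        text=self.request.get('text'),
        title=self.request.get('title'),
    )
    t.put()
    # Ancestor query: sees the entity written above without waiting for replication
    q = db.GqlQuery('SELECT * FROM NewsBase WHERE ANCESTOR IS :1 ORDER BY date DESC',
                    NEWS_PARENT)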