I'm writing a connector to our CRM system. The CRM has its own configuration I want to be aware of. The CRM is the only source of truth for this configuration and provides it via an API. My connector lives in a Python package as a Python class. The CRM configuration is fetched on init, but since it can be changed on the CRM side, I want it to be refreshed periodically. Is there a good way to create some kind of task on object instance creation to perform configuration updates?
import logging
import time
from random import randint

class Crm:
    def __init__(self, crm_config, mongo_connection_string):
        self.update_crm_configuration()

    def update_crm_configuration(self):
        self.crm_configuration = self.get_crm_configuration_from_crm_api()
        # remember when we last refreshed; the request wrapper uses this timestamp
        self.last_update_crm_configuration_time = time.time()

    def get_crm_configuration_from_crm_api(self):
        # check_crm_configuration=False so the fetch can't trigger a recursive refresh
        r = self._send_crm_request_wrap(send_request_func=self._send_get_crm_configuration,
                                        check_crm_configuration=False)
        return self._parse_crm_configuration_response(r.text)
Right now I update the configuration only once, at init, but I need to update it periodically.
It turns out the best way is not to use a separate thread or task with periodic updates, but to save the time the configuration was last updated and, if that age exceeds some timeout, refresh the configuration before actually performing the request.
Or, if your API has the luxury of a good error response for "configuration was changed", it is even better to perform the configuration update in the response handler, before retrying the request.
I'm using a request wrapper for these purposes:
def _send_crm_request_wrap(self, send_request_func, func_params=(),
                           check_crm_configuration=True,
                           retries_limit=None):
    # refresh the configuration lazily when it is older than the timeout
    if check_crm_configuration \
            and time.time() - self.last_update_crm_configuration_time > CRM_CONFIGURATION_TIMEOUT:
        self.update_crm_configuration()
    while self.is_crm_locked():
        time.sleep(1)  # seconds (time.sleep takes seconds, not milliseconds)
    if not self.is_authorized():
        self.auth()
    r = send_request_func(*func_params)
    if retries_limit is None:
        retries_limit = self.max_retries
    retry = 1
    while r.status_code == 205 and retry <= retries_limit:
        waiting_time = randint(1, 2)  # seconds, to match the log message
        logging.info(f'Retry {retry} for {send_request_func.__name__}. Waiting for {waiting_time} sec')
        time.sleep(waiting_time)
        r = send_request_func(*func_params)
        retry += 1
    if r.status_code != 200:
        message = f'AMO CRM {send_request_func.__name__} with args={func_params} failed. ' \
                  f'Error: {r.status_code} {r.text}'
        logging.error(message)
        raise ConnectionError(message)
    return r
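For completeness: if you do want a background task created on instance creation, as the question asks, here is a minimal sketch using a standard-library daemon thread. It reuses CRM_CONFIGURATION_TIMEOUT as the refresh period; the method name and error handling are illustrative assumptions, not part of the original connector.

import logging
import threading
import time

class Crm:
    def __init__(self, crm_config, mongo_connection_string):
        self.update_crm_configuration()
        # daemon=True: the updater thread will not keep the process alive on exit
        self._updater = threading.Thread(target=self._refresh_loop, daemon=True)
        self._updater.start()

    def _refresh_loop(self):
        while True:
            time.sleep(CRM_CONFIGURATION_TIMEOUT)
            try:
                self.update_crm_configuration()
            except ConnectionError:
                # keep the loop alive; the next tick will retry
                logging.exception('Periodic CRM configuration refresh failed')

The lazy check in the wrapper avoids this extra thread (and any locking around self.crm_configuration), which is why the timestamp approach above is preferred here.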
I need to write historic data into InfluxDB (I'm using Python, which is not a must in this case, so I may be willing to accept non-Python solutions). I set up the write API like this:
write_api = client.write_api(write_options=ASYNCHRONOUS)
The data comes from a DataFrame with a timestamp as key, so I write it to the database like this:
result = write_api.write(bucket=bucket, data_frame_measurement_name=field_key, record=a_data_frame)
This call does not throw an exception, even if the InfluxDB server is down. In the debugger, result has a protected attribute _success that is a boolean, but I cannot access it from my code.
How do I check if the write was a success?
If you use background batching, you can add custom success, error and retry callbacks.
from influxdb_client import InfluxDBClient

def success_cb(details, data):
    url, token, org = details
    print(url, token, org)
    data = data.decode('utf-8').split('\n')
    print('Total Rows Inserted:', len(data))

def error_cb(details, data, exception):
    print(exception)

def retry_cb(details, data, exception):
    print('Retrying because of an exception:', exception)

with InfluxDBClient(url=url, token=token, org=org) as client:
    with client.write_api(success_callback=success_cb,
                          error_callback=error_cb,
                          retry_callback=retry_cb) as write_api:
        write_api.write(...)
If you are eager to test all the callbacks and don't want to wait until all retries have finished, you can override the interval and number of retries:
from influxdb_client import InfluxDBClient, WriteOptions

with InfluxDBClient(url=url, token=token, org=org) as client:
    with client.write_api(success_callback=success_cb,
                          error_callback=error_cb,
                          retry_callback=retry_cb,
                          write_options=WriteOptions(retry_interval=60,  # milliseconds
                                                     max_retries=2),
                          ) as write_api:
        ...
If you want to write data into the database immediately, use the SYNCHRONOUS version of write_api - https://github.com/influxdata/influxdb-client-python/blob/58343322678dd20c642fdf9d0a9b68bc2c09add9/examples/example.py#L12
The asynchronous write should be "triggered" by calling .get() - https://github.com/influxdata/influxdb-client-python#asynchronous-client
Regards
write_api.write() returns a multiprocessing.pool.AsyncResult (an alias of multiprocessing.pool.ApplyResult; the two names refer to the same class).
With this return object you can check on the asynchronous request in a couple of ways. See here: https://docs.python.org/2/library/multiprocessing.html#multiprocessing.pool.AsyncResult
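For example, here is a minimal sketch reusing the names from the question (client, bucket, field_key, a_data_frame): .get() blocks until the asynchronous write finishes and re-raises any exception from the worker, so a failed write surfaces as an exception.

from influxdb_client.client.write_api import ASYNCHRONOUS

write_api = client.write_api(write_options=ASYNCHRONOUS)
result = write_api.write(bucket=bucket,
                         data_frame_measurement_name=field_key,
                         record=a_data_frame)
result.get()  # blocks; re-raises e.g. a connection error if the write failed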
If you can use a blocking request, then write_api = client.write_api(write_options=SYNCHRONOUS) can be used.
from datetime import datetime

from influxdb_client import WritePrecision, InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

with InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org", debug=False) as client:
    p = Point("my_measurement") \
        .tag("location", "Prague") \
        .field("temperature", 25.3) \
        .time(datetime.utcnow(), WritePrecision.MS)
    try:
        client.write_api(write_options=SYNCHRONOUS).write(bucket="my-bucket", record=p)
        reboot = False
    except Exception as e:
        reboot = True
    print(f"Reboot? {reboot}")
I'm trying to create an aiosmtpd server to process received emails.
It works great without authentication, yet I simply cannot figure out how to set up the authentication.
I have gone through the documentation and searched for examples on this.
A sample of how I'm currently using it:
from aiosmtpd.controller import Controller

class CustomHandler:
    async def handle_DATA(self, server, session, envelope):
        peer = session.peer
        mail_from = envelope.mail_from
        rcpt_tos = envelope.rcpt_tos
        data = envelope.content  # type: bytes
        # Process message data...
        print('peer:' + str(peer))
        print('mail_from:' + str(mail_from))
        print('rcpt_tos:' + str(rcpt_tos))
        print('data:' + str(data))
        return '250 OK'

if __name__ == '__main__':
    handler = CustomHandler()
    controller = Controller(handler, hostname='192.168.8.125', port=10025)
    # Run the event loop in a separate thread.
    controller.start()
    # Wait for the user to press Return.
    input('SMTP server running. Press Return to stop server and exit.')
    controller.stop()
This is the basic method from the documentation.
Could someone please provide me with an example of how to do simple authentication?
Alright, since you're using version 1.3.0, you can follow the documentation for Authentication.
A quick way to start is to create an "authenticator function" (it can be a method in your handler class or standalone) that follows the Authenticator Callback guidelines.
A simple example:
from aiosmtpd.smtp import AuthResult, LoginPassword

auth_db = {
    b"user1": b"password1",
    b"user2": b"password2",
    b"user3": b"password3",
}

# Name can actually be anything
def authenticator_func(server, session, envelope, mechanism, auth_data):
    # For this simple example, we'll ignore the other parameters
    assert isinstance(auth_data, LoginPassword)
    username = auth_data.login
    password = auth_data.password
    # If we're using a set containing tuples of (username, password),
    # we can simply use `auth_data in auth_set`.
    # Or you can get fancy and use a full-fledged database to perform
    # a query :-)
    if auth_db.get(username) == password:
        return AuthResult(success=True)
    else:
        return AuthResult(success=False, handled=False)
Then, when you're creating the controller, create it like so:
controller = Controller(
    handler,
    hostname='192.168.8.125',
    port=10025,
    authenticator=authenticator_func,  # i.e., the name of your authenticator function
    auth_required=True,  # Depending on your needs
)
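One caveat: by default aiosmtpd refuses AUTH on unencrypted connections, so for plain-text testing you would also pass auth_require_tls=False to the Controller (don't do this in production). A quick smoke test with the standard library's smtplib, assuming that flag is set, might look like this:

import smtplib

client = smtplib.SMTP('192.168.8.125', 10025)
client.login('user1', 'password1')  # raises SMTPAuthenticationError on bad credentials
client.sendmail('from@example.com', ['to@example.com'], 'Subject: test\r\n\r\nhello')
client.quit()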
(background)
I have an ERP application which is managed from a WebLogic console. Recently we noticed that the same activities we perform from the console can be performed using the vendor-provided REST API calls, so we wanted to use this approach programmatically and build some automation.
This is the console page from which we can control one of the instances (console screenshot omitted).
The same button acts as both Stop and Start for the instance.
The start and stop actions use different API calls, which makes sense.
The complete API doc is at : https://docs.oracle.com/cd/E61420_01/doc.92/e80710/smcrestapis.htm#BABFHBJI
(Now)
I wrote a program in Python using the requests library to call these APIs, and it works fine.
The API response can take anywhere between 20 and 30 seconds when I use the stopInstance API, and normally 60 to 90 seconds when I use the startInstance API; but if there is an issue when starting the instance, it takes more than 300 seconds and goes into an indefinite wait.
My problem is: while starting an instance I want to wait at most 100 seconds for the response. If it takes more than 100 seconds, the program should display a message like "Instance was not able to start in 100 seconds".
This is my program. I am taking input from a text file and all the values present there have been verified.
import requests
import json
import importlib.machinery
import importlib.util
import numpy
import time
import sys

loader = importlib.machinery.SourceFileLoader('SM', 'sm_details.txt')
spec = importlib.util.spec_from_loader(loader.name, loader)
mod = importlib.util.module_from_spec(spec)
loader.exec_module(mod)

username = str(mod.username)
password = str(mod.password)
hostname = str(mod.servermanagerHostname)
portnum = str(mod.servermanagerPort)
instanceDetails = numpy.array(mod.instanceName)

authenticationAPI = "http://" + hostname + ":" + portnum + "/manage/mgmtrestservice/authenticate"
startInstanceAPI = "http://" + hostname + ":" + portnum + "/manage/mgmtrestservice/startinstance"

headers = {
    'Content-Type': 'application/json',
    'Cache-Control': 'no-cache',
}

data = {}
data['username'] = username
data['password'] = password
instanceNameDict = {'instanceName': ''}

# Authentication request and storing token
response = requests.post(authenticationAPI, data=json.dumps(data), headers=headers)
token = response.headers['TOKEN']
head2 = {}
head2['TOKEN'] = token

def start(instance):
    print('\nTrying to start instance : ' + instance['instanceName'])
    # this is where the program is stuck and it does not move to the time.sleep step
    startInstanceResponse = requests.post(startInstanceAPI, data=json.dumps(instance), headers=head2)
    time.sleep(100)
    if startInstanceResponse.status_code == 200:
        print('Instance ' + instance['instanceName'] + ' started.')
    else:
        print('Could not start instance in 100 seconds')
        sys.exit(1)
I would suggest using the timeout parameter in requests:
requests.post(startInstanceAPI, data=json.dumps(instance), headers=head2, timeout=100.0)
You can tell Requests to stop waiting for a response after a given number of seconds with the timeout parameter. Nearly all production code should use this parameter in nearly all requests. Failure to do so can cause your program to hang indefinitely.
Source
Here's the requests timeout documentation; there you will also find more details on exception handling.
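As a sketch of how that could fit into the start() function above (reusing startInstanceAPI, head2, json and sys from the question's script): when the timeout elapses, requests raises requests.exceptions.Timeout, which you can catch to print the exact message you want.

def start(instance):
    print('\nTrying to start instance : ' + instance['instanceName'])
    try:
        startInstanceResponse = requests.post(startInstanceAPI, data=json.dumps(instance),
                                              headers=head2, timeout=100.0)
    except requests.exceptions.Timeout:
        print('Instance was not able to start in 100 seconds')
        sys.exit(1)
    if startInstanceResponse.status_code == 200:
        print('Instance ' + instance['instanceName'] + ' started.')

Note that timeout bounds the connect and per-read waits rather than strictly the total response time, but for a call that hangs without sending anything it behaves as you need.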
Using Google Suite for Education.
I have an app that wants to:
Create a new calendar.
Add an ACL to that calendar, so the student role would be "reader".
Everything is run through a service account.
The calendar is created just fine, but inserting the ACL throws a 404 error (redacted for privacy):
<HttpError 404 when requesting https://www.googleapis.com/calendar/v3/calendars/MY_DOMAIN_long_string%40group.calendar.google.com/acl?alt=json returned "Not Found">
The function that tries to insert the ACL:
def _create_calendar_acl(calendar_id, user, role='reader'):
    credentials = service_account.Credentials.from_service_account_file(
        CalendarAPI.module_path)
    scoped_credentials = credentials.with_scopes(
        ['https://www.googleapis.com/auth/calendar'])
    delegated_credentials = scoped_credentials.with_subject(
        'an_admin_email')
    calendar_api = googleapiclient.discovery.build('calendar',
                                                   'v3',
                                                   credentials=delegated_credentials)
    body = {'role': role,
            'scope': {'type': 'user',
                      'value': user}}
    answer = calendar_api.acl().insert(calendarId=calendar_id,
                                       body=body,
                                       ).execute()
    return answer
The funny thing is, if I retry the operation a couple of times, it finally succeeds. Hence, that's what my code does:
def create_student_schedule_calendar(email):
    MAX_RETRIES = 5
    # Get student information
    # Create calendar
    answer = Calendar.create_calendar('a.calendar.owner@mydomain',
                                      'Student Name - schedule',
                                      timezone='Europe/Madrid')
    calendar_id = answer['id']
    counter = 0
    while counter < MAX_RETRIES:
        try:
            print('Try ' + str(counter + 1))
            _create_calendar_acl(calendar_id=calendar_id, user=email)  # This is where the 404 is thrown
            break
        except HttpError:  # this is where the 404 is caught
            counter += 1
            print('Wait ' + str(counter ** 2))
            time.sleep(counter ** 2)
            continue
    if counter == MAX_RETRIES:
        raise Exception(f'Exceeded retries to create ACL for {calendar_id}')
Anyway, it takes four tries (between 14 and 30 seconds) to succeed - and sometimes it expires.
Would it be possible that the recently created calendar is not immediately available for the API using it?
Propagation is often an issue with cloud-based services. Large-scale online services are distributed across a network of machines which themselves have some level of latency: there is a discrete, non-zero amount of time that information takes to propagate along a network and update everywhere.
The fact that all operations work after the first call that doesn't result in a 404 is demonstrative of this process.
Mitigation:
If you're creating and editing in the same function call, I suggest implementing some kind of wait/sleep for a moment to mitigate getting 404s. This can be done in Python using the time library:
import time
# calendar creation code here
time.sleep(2)
# calendar edit code here
I am using PyAPNs to send notifications to iOS devices. I am often sending groups of notifications at once. If any of the tokens is bad for any reason, the process will stop. As a result I am using the enhanced setup and the following method:
apns.gateway_server.register_response_listener
I use this to track which token was the problem and then I pick up from there, sending to the rest. The issue is that, when sending, the only way to trap these errors is to use a sleep timer between token sends. For example:
for x in self.retryAPNList:
    apns.gateway_server.send_notification(x, payload, identifier=token)
    time.sleep(0.5)
If I don't use a sleep timer, no errors are caught and thus my entire APN list is not sent to, as the process stops when there is a bad token. However, this sleep timer is somewhat arbitrary. Sometimes the 0.5 seconds is enough, while other times I have had to set it to 1. In no case has it worked without some sleep delay being added. Doing this slows down web calls and it feels less than bulletproof to enter random sleep times.
Any suggestions for how this can work without a delay between APN calls or is there a best practice for the delay needed?
Adding more code due to the request made below. Here are 3 methods inside of a class that I use to control this:
class PushAdmin(webapp2.RequestHandler):
    retryAPNList = []
    channelID = ""
    channelName = ""
    userName = ""
    apns = APNs(use_sandbox=True, cert_file="mycert.pem", key_file="mykey.pem", enhanced=True)

    def devChannelPush(self, channel, name, sendAlerts):
        ucs = UsedChannelStore()
        pus = PushUpdateStore()
        channelName = ""
        refreshApnList = pus.getAPN(channel)
        if sendAlerts:
            alertApnList, channelName = ucs.getAPN(channel)
            if not alertApnList: alertApnList = []
            if not refreshApnList: refreshApnList = []
            pushApnList = list(set(alertApnList + refreshApnList))
        elif refreshApnList:
            pushApnList = refreshApnList
        else:
            pushApnList = []
        self.retryAPNList = pushApnList
        self.channelID = channel
        self.channelName = channelName
        self.userName = name
        self.retryAPNPush()

    def retryAPNPush(self):
        token = -1
        payload = Payload(alert="A message from " + self.userName + " posted to " + self.channelName,
                          sound="default", badge=1, custom={"channel": self.channelID})
        if len(self.retryAPNList) > 0:
            token += 1
            for x in self.retryAPNList:
                self.apns.gateway_server.send_notification(x, payload, identifier=token)
                time.sleep(0.5)
Below is the calling class (abbreviated to remove unrelated items):
class ChannelStore(ndb.Model):
    def writeMessage(self, ID, name, message, imageKey, fileKey):
        notify = PushAdmin()
        notify.devChannelPush(ID, name, True)
Below is the slight change I made to the placement of the sleep timer that seems to have resolved the issue. I am, however, still concerned about whether the time given will be the right amount in all circumstances.
def retryAPNPush(self):
    identifier = 1
    token = -1
    payload = Payload(alert="A message from " + self.userName + " posted to " + self.channelName,
                      sound="default", badge=1, custom={"channel": self.channelID})
    if len(self.retryAPNList) > 0:
        token += 1
        for x in self.retryAPNList:
            self.apns.gateway_server.send_notification(x, payload, identifier=token)
            time.sleep(0.5)
Resolution:
As noted in the comments at bottom, the resolution to this problem was to move the following statement to the module level outside the class. By doing this there is no need for any sleep statements.
apns = APNs(use_sandbox=True,cert_file="mycert.pem", key_file="mykey.pem", enhanced=True)
In fact, PyAPNs will automatically resend dropped notifications for you; please see PyAPNs.
So you don't have to retry yourself; you can just record which notifications have bad tokens.
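Here is a minimal sketch of recording bad tokens with the response listener mentioned in the question. The structure of error_response follows the PyAPNs README's enhanced-mode example; treat the exact dict keys as an assumption to verify against your PyAPNs version.

bad_identifiers = []

def response_listener(error_response):
    # error_response comes from the APNs error packet; per the PyAPNs README it
    # includes the failing notification's identifier and a status code
    # (8 = invalid token); verify the exact keys for your PyAPNs version
    bad_identifiers.append(error_response.get('identifier'))
    logging.warning('APNs error-response: %s', error_response)

apns.gateway_server.register_response_listener(response_listener)

# inside retryAPNPush: give each notification its own identifier so a bad
# token can be mapped back to an entry of retryAPNList afterwards
for identifier, token_hex in enumerate(self.retryAPNList):
    apns.gateway_server.send_notification(token_hex, payload, identifier=identifier)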
The behavior of your code might result from the APNs object being kept in local scope (within if len(self.retryAPNList)>0:).
I suggest you pull the APNs object out to class or module level, so that it can complete its error-handling procedure and reuse the TCP connection.
Please kindly let me know if it helps, thanks :)