I'm trying to wrap the Confluent Kafka REST Proxy API in one class that will handle producing and consuming.
Following this link: https://docs.confluent.io/platform/current/kafka-rest/api.html, I tried to implement it as follows:
def send(self, topic, data):
    try:
        r = requests.post(self._url('/topics/' + topic), json=data, headers=headers_v2)
        if not r.ok:
            raise Exception("Error: ", r.reason)
    except Exception as e:
        print(" ")
        print('Event streams send request failed')
        print(Exception, e)
        print(" ")
        return e
but I ended up working with two versions of the API (v2/v3) because I couldn't find some endpoints in one version and vice versa...
For example, I didn't find how to create a topic in v2, so I implemented that with v3.
My issue now is with the send method: I'm getting an Internal Server Error and I can't figure out why.
Maybe it's because the topic was created with v3 and I'm trying to produce messages with v2.
I changed the data payload for send to look like:
data = {"records": [{"value": data}]} and send passed;
poll passed when using:
r = requests.get(self._url('/consumers/' + self.consumer_group + '/instances/' + self.consumer + '/records'), headers={'Accept': 'application/vnd.kafka.json.v2+json'})
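Putting the working pieces together, here is a minimal sketch of the two v2 calls; the base URL, consumer group and instance names are placeholders, and the consumer instance is assumed to already exist and be subscribed to the topic:

import requests

BASE_URL = 'http://localhost:8082'  # placeholder REST Proxy address
headers_v2 = {'Content-Type': 'application/vnd.kafka.json.v2+json'}

def send(topic, data):
    # v2 produce: the payload must be wrapped as {"records": [{"value": ...}]}
    payload = {'records': [{'value': data}]}
    r = requests.post(BASE_URL + '/topics/' + topic, json=payload, headers=headers_v2)
    r.raise_for_status()
    return r.json()

def poll(consumer_group, consumer):
    # v2 consume: fetch records from an existing consumer instance
    r = requests.get(
        BASE_URL + '/consumers/' + consumer_group + '/instances/' + consumer + '/records',
        headers={'Accept': 'application/vnd.kafka.json.v2+json'},
    )
    r.raise_for_status()
    return r.json()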
To avoid errors when I have to send a group of messages that is larger than the maximum size, I wrote a class for sending messages in batches.
First of all, it would be wonderful if somebody could show me an example that explains how to avoid this problem.
While trying to solve the problem myself, I found it extremely hard to understand the size of a message (ServiceBusMessage).
The method sb_msg.message.get_message_encoded_size() is the nearest thing to what I need.
Do you know how to calculate the message size?
def send_as_json(self, msg, id_field=None, size_in_bytes=262144):
    if isinstance(msg, list):
        payload = self.topic_sender.create_message_batch(size_in_bytes)
        for m in msg:
            try:
                # add a message to the batch
                sb_msg = ServiceBusMessage(json.dumps(m), message_id=m.get(id_field, uuid.uuid4()), content_type='application/json')
                total_size = payload.size_in_bytes + sb_msg.message.get_message_encoded_size()
                if total_size > size_in_bytes:
                    _log.info(f'sending partial batch of {payload.size_in_bytes} bytes')
                    self.send_service_bus_message(payload)
                    payload = self.topic_sender.create_message_batch(size_in_bytes)
                payload.add_message(sb_msg)
            except ValueError as e:
                # ServiceBusMessageBatch object reaches max_size.
                # New ServiceBusMessageBatch object can be created here to send more data.
                raise Exception('', e)
        self.send_service_bus_message(payload)
    else:
        sb_msg = ServiceBusMessage(json.dumps(msg), message_id=msg.get(id_field, uuid.uuid4()), content_type='application/json')
        self.send_service_bus_message(sb_msg)
Azure Service Bus - Python:
The code below will give you an insight into how to send a batch of messages:
def send_batch_message(sender):
    # create a batch of messages
    batch_message = sender.create_message_batch()
    for _ in range(10):
        try:
            # add a message to the batch
            batch_message.add_message(ServiceBusMessage("Message inside a ServiceBusMessageBatch"))
        except ValueError:
            # ServiceBusMessageBatch object reaches max_size.
            # New ServiceBusMessageBatch object can be created here to send more data.
            break
    # send the batch of messages to the queue
    sender.send_messages(batch_message)
    print("Sent a batch of 10 messages")
For more detailed information you can visit the Microsoft docs below; the link has clear information on sending messages as a single message, a list, or a batch: Link
How to find the size of a message:
With the help of the function below we can find the size of the message in bytes:

import sys
sys.getsizeof(variable_name)  # this gives us the bytes occupied by the variable
To check the max_size_in_bytes of a batch, we can simply print the batch object:

batch_message = sender.create_message_batch()  # Step 1
batch_message.add_message(ServiceBusMessage("Message we want to add"))  # Step 2: add messages to the batch in a loop
print(batch_message)  # Step 3: printing the batch gives the output below

Output:
ServiceBusMessageBatch(max_size_in_bytes=1048576, message_count=10)
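The batch object also exposes size_in_bytes and max_size_in_bytes properties directly, which track the encoded payload rather than the Python object that sys.getsizeof measures. A minimal sketch, with the connection string and queue name as placeholders:

import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = '<service-bus-connection-string>'  # placeholder
QUEUE_NAME = '<queue-name>'                   # placeholder

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(QUEUE_NAME) as sender:
        batch = sender.create_message_batch()
        batch.add_message(ServiceBusMessage(json.dumps({'id': 1}), content_type='application/json'))
        # bytes the batch currently occupies vs. the broker-imposed ceiling
        print(batch.size_in_bytes, batch.max_size_in_bytes)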
I am trying to run the following code with multithreading, however I keep getting "Segmentation fault (core dumped)". Please advise what I am doing wrong.
def insert_api(r):
    url = url_responses + '/' + str(r[0])
    response = requests.get(url, headers={'api-key': APIToken, 'Content-Type': 'application/json'})
    if response.status_code == 200:
        dd = json.loads(response.content)
        InsertTable('API_Response', str(r[0]), str(r[1]), json.dumps(dd['result']))
    else:
        logMsg('', 'HTTP request for ' + url_responses + ' failed. HTTP response code is: ' + str(response.status_code), 'failure')
        subject = 'API Request failed ********* ' + datetime.now().strftime("%Y%m%d-%H%M")
        Body = 'This email is to notify that the API request for the URL: ' + str(url) + ' failed at ' + datetime.now().strftime("%Y%m%d-%H%M")
        email_notifier(subject, Body)

with concurrent.futures.ThreadPoolExecutor() as executor:
    executor.map(insert_api, response_list)
InsertTable is a function that inserts records into the table (API_Response) passed as a parameter along with the other values. email_notifier is a function that sends emails in case of exceptions. Since I have 95k+ records in the API, I'm trying to implement the multithreading logic.
Thanks
Samy!!
I was able to resolve it by adding lock logic before my InsertTable call.
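For anyone hitting the same thing, here is a minimal sketch of what that lock logic can look like (insert_api, InsertTable, url_responses, APIToken and response_list are the names from the question; the threading.Lock is the addition). Serialising the insert means only one thread at a time drives the database client, which is often not thread-safe:

import json
import threading
import concurrent.futures
import requests

db_lock = threading.Lock()  # shared by all worker threads

def insert_api(r):
    url = url_responses + '/' + str(r[0])
    response = requests.get(url, headers={'api-key': APIToken, 'Content-Type': 'application/json'})
    if response.status_code == 200:
        dd = json.loads(response.content)
        with db_lock:
            # only one thread at a time touches the database connection
            InsertTable('API_Response', str(r[0]), str(r[1]), json.dumps(dd['result']))

with concurrent.futures.ThreadPoolExecutor() as executor:
    executor.map(insert_api, response_list)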
I am trying to use the streaming API from IG Index; their documentation is here. The API requires the Lightstreamer client to be included in the app, so I have used this version and added it to my project.
I have created a function which (I believe) connects to the server.
def connect_light_stream_client():
    if cst == None or xt == None:
        create_session()
    global client
    client = lsc.LightstreamerClient(lightstreamer_username=stream_ident,
                                     lightstreamer_password=stream_password,
                                     lightstreamer_url=light_stream_server)
    try:
        client.connect()
    except Exception as e:
        print("Unable to connect to Lightstreamer Server")
        return
Then I call a second function which should fetch a stream of stock data, printing the results after each tick.
def listner(item_info):
    print(item_info)

def light_stream_chart_tick():
    sub = lsc.LightstreamerSubscription(mode="DISTINCT",
                                        items={"CHART:CS.D.XRPUSD.TODAY.IP:TICK"},
                                        fields={"BID"})
    sub.addlistener(listner)
    sub_key = client.subscribe(sub)
    print(sub_key)
The print at the end produces an output of 1. I get nothing from the listener. Any suggestions what I am doing wrong?
There are a few things wrong:
You must wait for the subscription request to respond with any updates. In your code, execution ends before any ticks are received. I put the code from light_stream_chart_tick() into the connect method, with a request for input as a wait.
The items and fields parameters need to be lists, not sets.
The Ripple epic is offline (at least when I tried), so I have substituted Bitcoin.
def connect_light_stream_client():
    if cst == None or xt == None:
        create_session()
    global client
    client = lsc.LightstreamerClient(lightstreamer_username=stream_ident,
                                     lightstreamer_password=stream_password,
                                     lightstreamer_url=light_stream_server)
    try:
        client.connect()
    except Exception as e:
        print("Unable to connect to Lightstreamer Server")
        return
    sub = lsc.LightstreamerSubscription(
        mode="DISTINCT",
        items=["CHART:CS.D.BITCOIN.TODAY.IP:TICK"],
        fields=["BID"]
    )
    sub.addlistener(listner)
    sub_key = client.subscribe(sub)
    print(sub_key)
    input("{0:-^80}\n".format("Hit CR to unsubscribe and disconnect"))
    client.disconnect()

def listner(item_info):
    print(item_info)
There's a Python project here that makes it a bit easier to interact with the IG APIs, and there's a streaming sample included. The project is up to date and actively maintained.
Full disclosure: I'm the maintainer of the project.
I'm sending Apple push notifications via AWS SNS from Lambda with Boto3 and Python.
from __future__ import print_function
import boto3

def lambda_handler(event, context):
    client = boto3.client('sns')
    for record in event['Records']:
        if record['eventName'] == 'INSERT':
            rec = record['dynamodb']['NewImage']
            competitors = rec['competitors']['L']
            for competitor in competitors:
                if competitor['M']['confirmed']['BOOL'] == False:
                    endpoints = competitor['M']['endpoints']['L']
                    for endpoint in endpoints:
                        print(endpoint['S'])
                        response = client.publish(
                            #TopicArn='string',
                            TargetArn=endpoint['S'],
                            Message='test message'
                            #Subject='string',
                            #MessageStructure='string',
                        )
Everything works fine! But when an endpoint is invalid for some reason (at the moment this happens every time I run a development build on my device, since I get a different endpoint then; it will be either not found or deactivated), the Lambda function fails and gets called all over again. In this particular case, if for example the second endpoint fails, it will send the push over and over again to endpoint 1, to infinity.
Is it possible to ignore invalid endpoints and just keep going with the function?
Thank you
Edit:
Thanks to your help, I was able to solve it with:
try:
    response = client.publish(
        #TopicArn='string',
        TargetArn=endpoint['S'],
        Message='test message'
        #Subject='string',
        #MessageStructure='string',
    )
except Exception as e:
    print(e)
    continue
AWS Lambda retries the function on failure until the event expires from the stream.
In your case, since the exception on the second endpoint is not handled, the retry mechanism keeps re-executing the publish to the first endpoint.
If you handle the exception and ensure the function ends successfully even when there is a failure, the retries will not happen.
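A slightly narrower variant of the fix shown in the edit above is to catch only the SNS errors you expect for dead endpoints; the exception classes below are the ones boto3 models for SNS, but treat the exact set as an assumption and widen it if your endpoints fail differently:

import boto3

client = boto3.client('sns')

def publish_safely(target_arn, message):
    # Publish to one endpoint; skip endpoints that are gone or disabled
    # instead of letting the whole Lambda invocation fail and retry.
    try:
        return client.publish(TargetArn=target_arn, Message=message)
    except (client.exceptions.EndpointDisabledException,
            client.exceptions.InvalidParameterException) as e:
        print('Skipping endpoint %s: %s' % (target_arn, e))
        return None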
I'm trying to make a universal script in Python that can be used by anybody to import/export all sorts of information from/to the Work Etc CRM platform. It has all the documentation here: http://admin.worketc.com/xml.
However, I am now a bit stuck. Authentication works and I can call different API methods, but only the ones without parameters. I am new to Python and that's why I can't figure out how to pass the parameters to a specific method in the API. Specifically, I need to export all time sheets, so I'm trying to call this method: http://admin.worketc.com/xml?op=GetDraftTimesheets. For obvious reasons I cannot disclose the login information, so it might be a bit hard for you to test.
The code itself:
import xml.etree.ElementTree as ET
import urllib2
import sys

email = 'email#domain.co.uk'
password = 'pass'

#service = 'GetEmployee?EntityID=1658'
#service = 'GetEntryID?EntryID=23354'
#service = ['GetAllCurrenciesWebSafe']
#service = ['GetEntryID', 'EntryID=23354']
service = ['GetDraftTimesheets', '2005-08-15T15:52:01+00:00', '2014-08-15T15:52:01+00:00']

class workEtcUniversal():

    sessionkey = None

    def __init__(self, url):
        if not "http://" in url and not "https://" in url:
            url = "http://%s" % url
            self.base_url = url
        else:
            self.base_url = url

    def authenticate(self, user, password):
        try:
            loginurl = self.base_url + email + '&pass=' + password
            req = urllib2.Request(loginurl)
            response = urllib2.urlopen(req)
            the_page = response.read()
            root = ET.fromstring(the_page)
            sessionkey = root[1].text
            print 'Authentication successful!'
            try:
                f = self.service(sessionkey, service)
            except RuntimeError:
                print 'Did not perform function!'
        except RuntimeError:
            print 'Error logging in or calling the service method!'

    def service(self, sessionkey, service):
        try:
            if len(service) < 2:
                retrieveurl = 'https://domain.worketc.com/xml/' + service[0] + '?VeetroSession=' + sessionkey
            else:
                retrieveurl = 'https://domain.worketc.com/xml/' + service[0,1,2] + '?VeetroSession=' + sessionkey
        except TypeError as err:
            print 'Type Error, which means arguments are wrong (or wrong implementation)'
            print 'Quitting..'
            sys.exit()
        try:
            responsefile = urllib2.urlopen(retrieveurl)
        except urllib2.HTTPError as err:
            if err.code == 500:
                print 'Internal Server Error: Permission Denied or Object (Service) Does Not Exist'
                print 'Quitting..'
                sys.exit()
            elif err.code == 404:
                print 'Wrong URL!'
                print 'Quitting..'
                sys.exit()
            else:
                raise
        try:
            f = open("ExportFolder/worketcdata.xml", 'wb')
            for line in responsefile:
                f.write(line)
            f.close()
            print 'File has been saved into: ExportFolder'
        except (RuntimeError, UnboundLocalError):
            print 'Could not write into the file'

client = workEtcUniversal('https://domain.worketc.com/xml/AuthenticateWebSafe?email=')
client.authenticate(email, password)
Writing code that consumes an API requires resolving a few questions:
what methods the API makes available (get their list with names)
what a request to such a method looks like (find out the URL, the HTTP method to use, requirements on the body if used, and what headers are expected)
how to build up all the parts to make the request
What methods are available
http://admin.worketc.com/xml lists many of them
What does a request look like
GetDraftTimesheets is described here: http://admin.worketc.com/xml?op=GetDraftTimesheets
and it expects you to create the following HTTP request:
POST /xml HTTP/1.1
Host: admin.worketc.com
Content-Type: text/xml; charset=utf-8
Content-Length: length
SOAPAction: "http://schema.veetro.com/GetDraftTimesheets"
<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<GetDraftTimesheets xmlns="http://schema.veetro.com">
<arg>
<FromUtc>dateTime</FromUtc>
<ToUtc>dateTime</ToUtc>
</arg>
</GetDraftTimesheets>
</soap:Body>
</soap:Envelope>
Building up the request
The biggest task is to build a properly shaped XML document as shown above, with the elements FromUtc and ToUtc filled with proper values. I guess the values shall be in ISO datetime format; this you shall find out yourself.
You shall be able to build such an XML document with some Python library; I would use lxml.
Note that the XML document is using namespaces; you have to handle them properly.
Making the POST request with all the headers shall be easy. The library you use to make HTTP requests shall fill in the Content-Length value properly, but this is mostly done automatically.
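A minimal sketch of that approach, building the envelope as a plain string for brevity rather than with lxml; the endpoint URL, the example dates and the question of whether a session key is also required are assumptions you will need to confirm against your own account:

import requests

SOAP_URL = 'http://admin.worketc.com/xml'  # or your own instance
SOAP_ACTION = 'http://schema.veetro.com/GetDraftTimesheets'

envelope = '''<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:xsd="http://www.w3.org/2001/XMLSchema"
               xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetDraftTimesheets xmlns="http://schema.veetro.com">
      <arg>
        <FromUtc>2005-08-15T15:52:01Z</FromUtc>
        <ToUtc>2014-08-15T15:52:01Z</ToUtc>
      </arg>
    </GetDraftTimesheets>
  </soap:Body>
</soap:Envelope>'''

headers = {
    'Content-Type': 'text/xml; charset=utf-8',
    'SOAPAction': '"%s"' % SOAP_ACTION,
}

# requests computes Content-Length for us
resp = requests.post(SOAP_URL, data=envelope.encode('utf-8'), headers=headers)
print(resp.status_code)
print(resp.text[:500])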
Veetro provides many alternative methods
E.g. for "http://admin.worketc.com/xml?op=FindArticlesWebSafe" there is a set of different bindings for the same service:
SOAP 1.1
SOAP 1.2
HTTP GET
HTTP POST
Depending on your preferences, pick the one which fits your needs.
The simplest is usually HTTP GET.
For HTTP requests, I would recommend using requests; it is really easy to use, and if you go through its tutorial you will understand what I mean.
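As an illustration of the HTTP GET binding, a sketch along the lines below might work for GetDraftTimesheets; the exact parameter names and whether the VeetroSession key belongs in the query string are assumptions to verify against the ?op= page for the method you call:

import requests

BASE_URL = 'https://domain.worketc.com/xml'  # your own instance

params = {
    'FromUtc': '2005-08-15T15:52:01Z',   # assumed parameter names
    'ToUtc': '2014-08-15T15:52:01Z',
    'VeetroSession': '<session-key-from-AuthenticateWebSafe>',
}

resp = requests.get(BASE_URL + '/GetDraftTimesheets', params=params)
resp.raise_for_status()

with open('ExportFolder/worketcdata.xml', 'wb') as f:
    f.write(resp.content)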