Connecting to multiple sessions over QuickFix in Python

So I have to connect to two sessions (same host, different ports), so I used two different initiators:
import quickfix

# Application is the quickfix.Application subclass defined elsewhere;
# config_file is the path to the .cfg shown below
application = Application()
settings = quickfix.SessionSettings(config_file)  # stream settings
storefactory = quickfix.FileStoreFactory(settings)
logfactory = quickfix.FileLogFactory(settings)
initiator = quickfix.SocketInitiator(
    application, storefactory, settings, logfactory)
initiator.start()
application.run()
initiator.stop()
and used two different config (.cfg) files for both sessions
# This is a client (initiator)
[DEFAULT]
DefaultApplVerID=FIX.4.4
# Settings which apply to all the sessions
ConnectionType=initiator
# FIX messages carry a sequence number, which shouldn't be used for uniqueness since the specification doesn't guarantee anything about it. With ResetOnLogon=Y, the sequence is reset every time a logon message is sent.
FileLogPath=./Logs/
# Path where logs will be written
StartTime=00:00:00
# Time when session starts and ends
EndTime=00:00:00
UseDataDictionary=Y
# Time in seconds before your session will expire; keep sending heartbeats (HeartBtInt below) if you don't want it to expire
ReconnectInterval=60
# Time in seconds before reconnecting
LogoutTimeout=5
LogonTimeout=30
ResetOnLogout=N
ResetOnDisconnect=N
SendRedundantResendRequests=Y
# RefreshOnLogon=Y
SocketNodelay=N
# PersistMessages=Y
ValidateUserDefinedFields=N
ValidateFieldsOutOfOrder=N
# CheckLatency=Y
# session stream
[SESSION]
ResetOnLogon=Y
BeginString=FIX.4.4
SenderCompID=TESTING1
TargetCompID=TESTACC1
HeartBtInt=30
SocketConnectPort=4000
SocketConnectHost=127.0.0.1
DataDictionary=./spec/FIX44.xml
FileStorePath=./Sessions/
# session market
[SESSION]
ResetOnLogon=Y
BeginString=FIX.4.4
SenderCompID=TESTING2
TargetCompID=TESTACC2
HeartBtInt=30
SocketConnectPort=5000
SocketConnectHost=127.0.0.1
DataDictionary=./spec/FIX44.xml
FileStorePath=./Sessions/
But this doesn't work.
I saw a post that recommended using ThreadedSocketInitiator, but I don't think it is available in the Python quickfix lib.
Thanks in advance
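For reference, a minimal sketch of the two-initiator setup described above might look like the following; the file names stream.cfg and market.cfg and the Application class are placeholders for the question's own config files and quickfix.Application subclass, not names from the original post. (Note that SessionSettings also accepts several [SESSION] blocks in one file, as in the config above, in which case a single initiator would normally try to establish both sessions.)
import quickfix

# Application stands in for the question's quickfix.Application subclass;
# stream.cfg and market.cfg are placeholder config file names.
app_stream, app_market = Application(), Application()

settings_stream = quickfix.SessionSettings('stream.cfg')
settings_market = quickfix.SessionSettings('market.cfg')

init_stream = quickfix.SocketInitiator(
    app_stream, quickfix.FileStoreFactory(settings_stream),
    settings_stream, quickfix.FileLogFactory(settings_stream))
init_market = quickfix.SocketInitiator(
    app_market, quickfix.FileStoreFactory(settings_market),
    settings_market, quickfix.FileLogFactory(settings_market))

# start() returns immediately, so both initiators can run side by side
init_stream.start()
init_market.start()

app_stream.run()  # whatever loop keeps the process alive

init_stream.stop()
init_market.stop()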

Related

Kafka producer automatic fail-over/fail-back

I would like to know if it's possible to configure 2 different Kafka clusters in a Kafka producer.
Currently I'm trying to have my producers & consumers fail back automatically to a passive cluster without reconfiguring bootstrap.servers and restarting their application.
I'm using Apache Kafka 2.8 and the confluent_kafka==1.8.2 package with Python 3.7.
Below is the producer code:
from time import sleep
from csv import reader
from confluent_kafka import Producer

p = Producer({'bootstrap.servers': 'clusterA:32531, clusterB:30804'})

def delivery_report(err, msg):
    """ Called once for each message produced to indicate delivery result.
        Triggered by poll() or flush(). """
    if err is not None:
        print('Message delivery failed: {}'.format(err))
    else:
        print(f'Message delivered to {msg.offset()}')

with open('test_data.csv', 'r') as read_obj:
    csv_reader = reader(read_obj)
    header = next(csv_reader)
    # Check the file is not empty
    if header is not None:
        # Iterate over each row after the header in the csv
        for row in csv_reader:
            sleep(0.02)
            p.produce(topic='demo', key=row[5], value=str(row), callback=delivery_report)
            p.flush()
When I killed clusterB I got the following error message.
%4|1643837239.074|CLUSTERID|rdkafka#producer-1| [thrd:main]: Broker clusterA:32531/bootstrap reports different ClusterId "MLWCRsVXSxOf2YGPRIivjA" than previously known "6ZtcQCRPQ5msgeD3r7I11w": a client must not be simultaneously connected to multiple clusters
%3|1643837240.995|FAIL|rdkafka#producer-1| [thrd:clusterB:30804/bootstrap]: 172.27.176.222:30804/bootstrap: Connect to ipv4#clusterB:30804 failed: Unknown error (after 2044ms in state CONNECT)
At the moment, you will have to update the bootstrap information to the secondary cluster manually, and this will require a restart of the client to fail over.
Programmatically, in order to connect to a separate cluster you will have to stop the current producer instance and start a new instance with the new bootstrap server config. However, this can get quite complicated.
Other options are:
Configure Kafka behind a load balancer or a VIP (not recommended, because by nature the client needs a direct connection to each broker).
Configure a shared store (memcached or Redis) that holds the bootstrap server config. Your client fetches the bootstrap servers from it during startup; during a failure you change the value in the store and restart your clients. This makes the operation quite easy (a minimal sketch follows below).
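A rough illustration of the shared-store option; the Redis client, the config-store host name and the kafka.bootstrap.servers key below are assumptions for the sketch, not part of the original setup:
import redis
from confluent_kafka import Producer

def build_producer():
    # Fetch the currently active bootstrap servers from a shared Redis key;
    # operators point this key at the passive cluster during a failover.
    store = redis.Redis(host='config-store', port=6379)
    bootstrap = store.get('kafka.bootstrap.servers').decode()
    return Producer({'bootstrap.servers': bootstrap})

p = build_producer()
p.produce(topic='demo', key='some-key', value='some-value')

# flush() returns the number of messages still waiting to be delivered;
# if delivery keeps failing, rebuild the producer from the shared store
# so it picks up whatever bootstrap servers the key now points to.
if p.flush(10) > 0:
    p = build_producer()
    p.produce(topic='demo', key='some-key', value='some-value')
    p.flush(10)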

Why am I reaching my connection limit and timing out queries when I use Heroku-Redis for multithreading?

I'm hosting a Django web app on Heroku, and it uses Redis to run a task in the background (this gets kicked off by the user when they submit a form). The task often has to make multiple queries to an API using pexpect, sequentially. When I had it all set up locally, I had no problems with timeouts, but now I regularly run into two problems: TIMEOUT in the pexpect function and "max client connections reached".
I'm using the free tier of Heroku-Redis, so my connection limit is 20. This should be more than enough for what I'm doing, but I notice on the Redis dashboard that when I run the program, the connections stay open for a while afterwards (which is what I suspect is causing the "max connections" error -- the old connections didn't close). I used the heroku-redis CLI to set idle connections to close after 15 seconds, but they don't seem to be closing. I should also have only 4 concurrent connections at a time (a setting in Django settings.py), but I regularly see 15 or so spawned for a single run.
How do I actually limit the number of redis connections generated for my program and make sure they actually close when they finish their job? And how do I prevent TIMEOUTs (which I suspect happen because no connection is available to finish the query)? I don't understand why so many Redis connections are necessary for a single background task.
This is the relevant content in my Django settings.py file. I use celery to run my background task, so I don't directly interface with Redis.
CACHES = {
    "default": {
        "BACKEND": "redis_cache.RedisCache",
        "LOCATION": "redis:// censored",
        "OPTIONS": {
            'CONNECTION_POOL_CLASS': 'redis.BlockingConnectionPool',
            'CONNECTION_POOL_CLASS_KWARGS': {
                'max_connections': 4,
                'timeout': 15,
                'retry_on_timeout': True
            },
        },
    }
}
# the line below lets it run locally
# BROKER_URL and CELERY_RESULT_BACKEND = 'amqp://guest:guest@rabbit:5672/%2F'
# I'm not sure what the difference between BROKER_* and REDIS_* is,
# so I've included them both for good measure
BROKER_BACKEND = "redis"
BROKER_HOST = "amazonaws.com censored"
BROKER_PORT = 20789
BROKER_PASSWORD = "censored"
BROKER_VHOST = "0"
BROKER_POOL_LIMIT = 4 # limit # of celery workers
REDIS_URL = "redis://censored"
REDIS_HOST = "amazonaws.com censored"
REDIS_PORT = 20789
REDIS_PASSWORD = "censored"
REDIS_DB = 0
REDIS_CONNECT_RETRY = True
CELERY_RESULT_BACKEND = REDIS_URL
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_REDIS_MAX_CONNECTIONS = 4
CELERYD_TASK_SOFT_TIME_LIMIT = 30
DATABASE_URL = "postgres:// censored"
# I added this for Heroku
django_heroku.settings(locals())
This is where I initialize Celery. I use a @shared_task(bind=True) decorator for my background task function.
from celery import Celery
from django.conf import settings

app = Celery('aircraftToAirQuality', backend=settings.CELERY_RESULT_BACKEND, broker=settings.REDIS_URL)
app.conf.broker_pool_limit = 0
app.conf.broker_transport_options = {"visibility_timeout": 60}
My UI regularly checks to see the progress of the task to update a progress bar. This is just a simple HTTP Post call.
And this is the pexpect code, in case that's relevant. Since I'm sending sensitive content (passwords and stuff), I've removed much of the text.
def use_pexpect(self, query):
    cmd = 'censored'
    pex = pexpect.spawn(cmd)
    pex.timeout = 30
    index = pex.expect(censored)
    time.sleep(0.1)
    pex.sendline(censored)
    pex.sendline(query)
    pex.expect(censored)
    result = pex.before
    pex.sendline('quit;')
    pex.close()
    return result
Right now, I can only run my task successfully infrequently. The longer the task runs (making 10 queries instead of just 3), the more likely the program is to fail before it finishes. My background task should be able to make queries sequentially, and my UI thread should be able to check its progress, without any connection limit or TIMEOUT problems.

Pysnmp can't resolve OIDs from SNMP trap

I'm trying to resolve the OIDs that are received on an SNMP Trap from an HP switch stack but they only resolve down to a certain level and stop. It's like the HP MIBs are not being loaded. It's unclear from all the documentation I can find on pysnmp if this is the appropriate way to add custom MIBs and resolve OIDs from a trap.
MIBs can be downloaded here.
from pysnmp.entity import engine, config
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.smi import view, builder, rfc1902
from pysnmp.entity.rfc3413 import ntfrcv, mibvar

# Create SNMP engine with autogenerated engineID and pre-bound
# to socket transport dispatcher
snmpEngine = engine.SnmpEngine()

build = snmpEngine.getMibBuilder()
build.addMibSources(builder.DirMibSource("C:/Users/t/Documents/mibs"))
viewer = view.MibViewController(build)

# Transport setup
# UDP over IPv4, first listening interface/port
config.addTransport(
    snmpEngine,
    udp.domainName + (1,),
    udp.UdpTransport().openServerMode(('0.0.0.0', 162))
)

# SNMPv1/2c setup
# SecurityName <-> CommunityName mapping
config.addV1System(snmpEngine, '????', 'public')

# Callback function for receiving notifications
# noinspection PyUnusedLocal,PyUnusedLocal,PyUnusedLocal
def cbFun(snmpEngine, stateReference, contextEngineId, contextName, varBinds, cbCtx):
    print('Notification from ContextEngineId "%s", ContextName "%s"' % (contextEngineId.prettyPrint(),
                                                                         contextName.prettyPrint()))
    for name, val in varBinds:
        print(name)
        symbol = rfc1902.ObjectIdentity(name).resolveWithMib(viewer).getMibSymbol()
        print(symbol[1])

# Register SNMP Application at the SNMP engine
ntfrcv.NotificationReceiver(snmpEngine, cbFun)

snmpEngine.transportDispatcher.jobStarted(1)  # this job would never finish

# Run I/O dispatcher which would receive queries and send confirmations
try:
    snmpEngine.transportDispatcher.runDispatcher()
except:
    snmpEngine.transportDispatcher.closeDispatcher()
    raise
Output upon receiving a trap:
Notification from ContextEngineId "0x80004fb8056ed891e8", ContextName ""
1.3.6.1.2.1.1.3.0
sysUpTime
1.3.6.1.6.3.1.1.4.1.0
snmpTrapOID
1.3.6.1.6.3.18.1.3.0
snmpTrapAddress
1.3.6.1.6.3.18.1.4.0
snmpTrapCommunity
1.3.6.1.6.3.1.1.4.3.0
snmpTrapEnterprise
1.3.6.1.4.1.11.2.14.11.5.1.7.1.29.1.9
enterprises
1.3.6.1.4.1.11.2.14.11.5.1.7.1.29.1.0.1
enterprises
1.3.6.1.4.1.11.2.14.11.5.1.7.1.29.1.0.2
enterprises
1.3.6.1.4.1.11.2.14.11.5.1.7.1.29.1.0.3
enterprises
1.3.6.1.4.1.11.2.14.11.5.1.7.1.29.1.0.4
enterprises
1.3.6.1.4.1.11.2.14.11.5.1.7.1.29.1.0.5
enterprises
As you can see, many distinct OIDs just resolve to "enterprises". I am using pysnmp 4.4.4.
Yes, it seems that only the core MIBs are loaded.
If you want to follow this quite low-level path, then you need to pre-compile all your ASN.1 MIBs (those you pulled from the HPE site) with the mibdump tool into pysnmp format. Then put those *.py files into some directory and point pysnmp to it through the build.addMibSources(builder.DirMibSource()) call.
Also, make sure to pre-load all those MIBs at once on startup by invoking build.loadModules() (without arguments).
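Roughly, that flow might look like the sketch below; the ./compiled-mibs directory is an assumed location for the *.py modules produced by mibdump, and the exact mibdump invocation can vary between pysmi versions:
# Compile the ASN.1 MIBs first, e.g. (options may differ by pysmi version):
#   mibdump.py --destination-directory=./compiled-mibs <HP MIB files>
from pysnmp.entity import engine
from pysnmp.smi import builder, view

snmpEngine = engine.SnmpEngine()
build = snmpEngine.getMibBuilder()

# Point pysnmp at the directory holding the compiled *.py MIB modules
build.addMibSources(builder.DirMibSource('./compiled-mibs'))

# Pre-load every module found in the configured sources, so enterprise OIDs
# in incoming traps resolve to symbolic names instead of just "enterprises"
build.loadModules()

viewer = view.MibViewController(build)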

SMTP - Fast and reliable connection probing without auth?

Briefing
I am currently building a python SMTP Mail sender program.
I added a feature so that the user cannot log in if there is no active internet connection. I tried many solutions/variations to make the real-time connection checking as swift as possible, but there were many problems, such as:
The thread where the connection handler was running suddenly lagged when I pulled out the ethernet cable (to test how it would handle the sudden disconnect)
The whole program crashed
It took several seconds for the program to detect the change
My current solution
I set up a data handling class which would contain all the necessary info (so the modules could share info effectively):
import smtplib
from socket import gaierror, timeout

class DataHandler:
    is_logged_in = None
    is_connected = None
    server_conn = None
    user_address = ''
    user_passwd = ''

    @staticmethod
    def try_connect():
        try:
            DataHandler.server_conn = smtplib.SMTP('smtp.gmail.com', 587, timeout=1)  # The place where the connection is checked
            DataHandler.is_connected = True
        except (smtplib.SMTPException, gaierror, timeout):
            DataHandler.is_connected = False  # Connection status changed upon a connection error
I put the connection handler on a second thread, because the server connection process slowed down the GUI when it was all on one thread.
from root_gui import Root
import threading
from time import sleep
from data_handler import DataHandler

def handle_conn():
    DataHandler.try_connect()
    smtp_client.refresh()  # Refreshes the gui according to the current status

def conn_manager():  # Working pretty well
    while 'smtp_client' in globals():
        sleep(0.6)
        try:
            handle_conn()  # Calls the connection
        except NameError:  # If the user quits the tkinter gui
            break

smtp_client = Root()
handle_conn()

MyConnManager = threading.Thread(target=conn_manager)
MyConnManager.start()

smtp_client.mainloop()
del smtp_client  # The connection manager will detect this and stop running
My question is:
Is this good practice or a terrible waste of resources? Is there a better way to do this? Because no matter what I tried, this was the only solution that worked.
From what I know, the try_connect() function creates a completely new SMTP object each time it is run (which is once every 0.6 seconds!).
Resources/observations
The project on git: https://github.com/cernyd/smtp_client
Observation: the timeout parameter when creating the SMTP object improved response times drastically; why is that so?

Counting the number of requests per second in Tornado

I am new to Python and the Tornado web server.
I am trying to figure out the number of requests and the number of requests/second in my server-side code. I am using Tornadio2 to implement WebSockets.
Kindly take a look at the following code and let me know what modifications can be made to it.
I am using RequestHandler.prepare() to bottleneck all the requests, and using a list (since it is mutable) to store the count.
Assume all modules are imported.
count = [0]

class IndexHandler(tornado.web.RequestHandler):
    """Regular HTTP handler to serve the chatroom page"""
    def prepare(self):
        count[0] = count[0] + 1

    def get(self):
        self.render('index1.html')

class SocketIOHandler(tornado.web.RequestHandler):
    def get(self):
        self.render('../socket.io.js')

partQue = Queue.Queue()

class ChatConnection(tornadio2.conn.SocketConnection):
    participants = set()

    def on_open(self, info):
        self.send("Welcome from the server.")
        self.participants.add(self)

    def on_message(self, message):
        partQue.put(message)
        time.sleep(10)
        self.qmes = partQue.get()
        for p in self.participants:
            p.send(self.qmes + " " + str(count[0]))
        partQue.task_done()

    def on_close(self):
        self.participants.remove(self)
        partQue.join()

# Create tornadio server
ChatRouter = tornadio2.router.TornadioRouter(ChatConnection)

# Create socket application
sock_app = tornado.web.Application(
    ChatRouter.urls,
    flash_policy_port=843,
    flash_policy_file=op.join(ROOT, 'flashpolicy.xml'),
    socket_io_port=8002)

# Create HTTP application
http_app = tornado.web.Application(
    [(r"/", IndexHandler), (r"/socket.io.js", SocketIOHandler)])

if __name__ == "__main__":
    import logging
    logging.getLogger().setLevel(logging.DEBUG)

    # Create http server on port 8001
    http_server = tornado.httpserver.HTTPServer(http_app)
    http_server.listen(8001)

    # Create tornadio server on port 8002, but don't start it yet
    tornadio2.server.SocketServer(sock_app, auto_start=False)

    # Start both servers
    tornado.ioloop.IOLoop.instance().start()
Also, I am confused about WebSocket messages. Does each WebSocket event go to the server in the form of an HTTP request, or a Socket.IO request?
I use Siege, an excellent tool for testing requests if you're running on Linux. Example:
siege http://localhost:8000/?q=yourquery -c10 -t10s
-c10 = 10 concurrent users
-t10s = 10 seconds
Tornadio2 has a built-in statistics module, which includes incoming connections/s and other counters.
Check following example: https://github.com/MrJoes/tornadio2/tree/master/examples/stats
When testing applications, always approach performance testing with a healthy appreciation for the uncertainty principle.
If you want to test a server, hook up two PCs to a hub where you can monitor traffic from one going to the other. Then bang the hell out of the server. There are a variety of tools for doing this; just look for web load testing tools.
Normal HTTP requests in Tornado create a new RequestHandler instance, which persists until the connection is terminated.
WebSockets use persistent connections. One WebSocketHandler instance is created, and each message sent by the browser to the server calls the on_message method.
From what I understand, Socket.IO/Tornad.IO will use WebSockets if supported by the browser, falling back to long polling.
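To make the distinction concrete, here is a minimal sketch in plain Tornado (not Tornadio2) that counts HTTP requests and WebSocket messages per second; the handler names, URLs and port are illustrative only:
import tornado.ioloop
import tornado.web
import tornado.websocket

counters = {'http': 0, 'ws': 0}

class IndexHandler(tornado.web.RequestHandler):
    def prepare(self):
        counters['http'] += 1       # a new RequestHandler instance per HTTP request

    def get(self):
        self.write('ok')

class EchoWebSocket(tornado.websocket.WebSocketHandler):
    def on_message(self, message):  # same handler instance for the whole connection
        counters['ws'] += 1
        self.write_message(message)

def report():
    # Runs once per second: print and reset the per-second counts
    print('http req/s: %d, ws msg/s: %d' % (counters['http'], counters['ws']))
    counters['http'] = counters['ws'] = 0

if __name__ == '__main__':
    app = tornado.web.Application([(r'/', IndexHandler), (r'/ws', EchoWebSocket)])
    app.listen(8001)
    tornado.ioloop.PeriodicCallback(report, 1000).start()
    tornado.ioloop.IOLoop.instance().start()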
