Is twisted handle_quit() the way to disconnect? - python

So I'm implementing a log server with Twisted (python-loggingserver) and I added simple authentication to the server. If the authentication fails, I want to close the connection to the client. The class in the log server code already has a function called handle_quit(). Is that the right way to close the connection? Here's a code snippet:
if password != log_password:
    self._logger.warning("Authentication failed. Connection closed.")
    self.handle_quit()

If the handle_quit message you're referring to is this one, then that should work fine. The only thing the method does is self.transport.loseConnection(), which closes the connection. You could also just do self.transport.loseConnection() yourself, which will accomplish the same thing (since it is, of course, the same thing). I would select between these two options by thinking about whether failed authentication should just close the connection or if it should always be treated the same way a quit command is treated. In the current code this makes no difference, but you might imagine the quit command having extra processing at some future point (cleaning up some resources or something).
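For illustration, a minimal sketch of the second option, assuming a LineReceiver-based protocol (the class and attribute names here are placeholders, not taken from python-loggingserver):

from twisted.protocols import basic

class LogProtocol(basic.LineReceiver):  # hypothetical protocol class
    def lineReceived(self, line):
        password = line.decode().strip()
        if password != self.factory.log_password:  # log_password is assumed
            self.factory.logger.warning("Authentication failed. Connection closed.")
            # Equivalent to what handle_quit() does:
            self.transport.loseConnection()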

Related

How do I keep an FTP connection alive?

I used ftputil to download a batch of files from an FTP server. It raised the error ftputil.error.FTPIOError: [Errno 60] Operation timed out.
As described in the ftputil documentation,
keep_alive() attempts to keep the connection to the remote server active in order to prevent timeouts from happening. This method is primarily intended to keep the underlying FTP connection of an FTPHost object alive while a file is uploaded or downloaded. This will require either an extra thread while the upload or download is in progress or calling keep_alive from a callback function.
I called keep_alive from a callback function with,
ftp_host.download(source, target, callback=ftp_host.keep_alive)
but it raised ERROR __main__ keep_alive() takes 1 positional argument but 2 were given.
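The error suggests download() invokes the callback with the transferred data chunk as its argument; a wrapper that discards the chunk would then match keep_alive()'s signature:

ftp_host.download(source, target, callback=lambda chunk: ftp_host.keep_alive())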
How do I keep an FTP connection alive?
This isn't directly an answer to your question, but it may help you find an answer to your particular problem yourself. Also, a ticket on the ftputil website is a better place to get help with debugging a problem. That said, I think it's fine to ask on StackOverflow first since you don't know in advance whether the problem is a simple one or not. :-)
Since FTP is a stateful protocol, client and server can't send arbitrary commands at any given time. The allowed commands and possible replies are determined by the state the connection is in. See also the state diagrams in RFC 959.
To work around this limitation, ftputil creates a new FTP connection behind the scenes for each remote file object [1]. With this approach, you can still send commands like chdir or start another download while a download is in progress. However, this means that from the perspective of the server, all these FTP connections that come from a single FTPHost object are independent connections, so each of them can time out at a different moment, depending on the usage pattern of the respective connection.
For example, there was ftputil ticket 141, where presumably the main connection initiated by the FTPHost object timed out while a connection used for downloading was still usable.
In your case, it might be helpful to find out which of the underlying connections is timing out (the initial connection or a connection for a remote file). You can use ftputil.session.session_factory to create factories that have FTP debugging enabled (see the documentation).
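A minimal sketch of that (host and credentials are placeholders; debug_level is passed through to ftplib and makes the FTP dialogue print to stdout):

import ftputil
import ftputil.session

# Factory for sessions with FTP protocol debugging turned on.
debug_session = ftputil.session.session_factory(debug_level=2)

with ftputil.FTPHost("ftp.example.com", "user", "password",
                     session_factory=debug_session) as ftp_host:
    ftp_host.download("source.txt", "target.txt")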
Unfortunately, a timeout of 60 seconds is quite short, so there are relatively many chances for timeouts.
Especially given the possibility of timeouts in FTP connections, my advice is to write software for FTP transfers in such a way that you can restart the operation (ideally with a new FTPHost object, for robustness) where it was interrupted by the timeout. So far I haven't been able to come up with a way to universally work around timeouts. In simple cases you may actually be better off using ftplib directly, although ftputil has robustness and latency improvements that ftplib doesn't have. Using ftplib doesn't save you from timeouts, but at least you don't have any "hidden" connections that may make debugging more difficult.
[1] That said, if you close a remote file in ftputil, the underlying FTP connection can be reused, as long as it hasn't timed out. The library checks for a timeout before it reuses the connection.
The picture regarding timeouts is even more complicated by ftputil caching a lot of information from the server to reduce latency. For example, if you call FTPHost.getcwd(), the current directory is retrieved from a cached attribute, not by sending a PWD command to the server and thereby resetting the timeout. Stat information from directory listings is also usually cached.
After a couple of hours looking for solutions, I got it running without '421 Timeout' errors by calling keep_alive from a separate thread. However, your I/O timeout error was probably caused by connection problems.
import ftputil
from threading import Thread
from time import sleep

fhandle = ftputil.FTPHost('host', 'user', 'pwd')
quit_thread = False

def _thread_keep_alive():
    # Ping the server every 25 seconds until the main thread signals us to stop.
    while not quit_thread:
        print("KEEPALIVE!")
        fhandle.keep_alive()
        sleep(25)

thread = Thread(target=_thread_keep_alive)
thread.start()

# some downloading...

quit_thread = True  # signal the keep-alive thread to exit
thread.join()       # let it finish before closing the connection
fhandle.close()

Python MySQL Connector: which is better, connection.close(), connection.disconnect(), or connection.shutdown()?

I have a question and I hope that someone could help me.
To give you some context, imagine a loop like this:
while True:
    conn = mysql.connector.connect(**args)  # args without specifying pool_name
    cursor = conn.cursor()
    cursor.execute(something)
    conn.commit()
    cursor.close()
    # at this point, which is better:
    conn.close()
    # or
    conn.disconnect()
    # or
    conn.shutdown()
In my case I'm using conn.close(), but after a long time of execution I always get this error:
mysql.connector.errors.OperationalError: 2013 (HY000): Lost connection to MySQL server during query
Apparently I'm exceeding the timeout of the MySQL connection, which is 8 hours by default. But looking at the loop, it's creating and closing a new connection on each iteration. I'm pretty sure that the cursor execution takes no more than an hour.
So the question is: doesn't the close() method close the connection? Should I use disconnect() or shutdown() instead? What are the differences between them?
I hope I've explained myself well, best regards!
There might be a problem inside your code.
Normally, close() will work every time, even in a loop.
But still, try each of the three commands and see what suits your code.
The docs say it clearly:
close() is a synonym for disconnect().
For a connection obtained from a connection pool, close() does not
actually close it but returns it to the pool and makes it available
for subsequent connection requests
disconnect() tries to send a QUIT command and close the socket. It raises no exceptions. MySQLConnection.close() is a synonymous method name and more commonly used.
To shut down the connection without sending a QUIT command first, use
shutdown().
For shutdown():
Unlike disconnect(), shutdown() closes the client connection without
attempting to send a QUIT command to the server first. Thus, it will
not block if the connection is disrupted for some reason such as
network failure.
But I cannot figure out why you get "Lost connection to MySQL server during query". You may check this discussion: Lost connection to MySQL server during query

How to close a SolrClient connection?

I am using SolrClient for Python with Solr 6.6.2. It works as expected, but I cannot find anything in the documentation about closing the connection after opening it.
def getdocbyid(docidlist):
    for id in docidlist:
        solr = SolrClient('http://localhost:8983/solr', auth=("solradmin", "Admin098"))
        doc = solr.get('Collection_Test', doc_id=id)
        print(doc)
I do not know if the client closes it automatically or not. If it doesn't, wouldn't it be a problem if several connections are left open? I just want to know if there is any way to close the connection. Here is the link to the documentation:
https://solrclient.readthedocs.io/en/latest/
The connections are not kept around indefinitely. The standard timeout for any persistent http connection in Jetty is five seconds as far as I remember, so you do not have to worry about the number of connections being kept alive exploding.
The Jetty server will also just drop the connection if required, as it's not required to keep it around as a guarantee for the client. SolrClient uses a requests session internally, so it should reuse the underlying connection (HTTP keep-alive) for subsequent queries. If you run into issues with this, you can keep a set of clients available as a pool in your application instead, and request an available client rather than creating a new one each time.
I'm however pretty sure you won't run into any issues with the default settings.
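As a sketch of that reuse idea (same URL and credentials as in the question): creating the client once, outside the loop, lets the underlying requests session keep the HTTP connection alive between queries:

from SolrClient import SolrClient

# One client for the whole batch instead of one per document.
solr = SolrClient('http://localhost:8983/solr', auth=("solradmin", "Admin098"))

def getdocbyid(docidlist):
    for doc_id in docidlist:
        doc = solr.get('Collection_Test', doc_id=doc_id)
        print(doc)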

How does a Python socket detect that the server has closed while the client keeps sending data to it?

I use a Python socket to send data to a server, with code like this:
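A minimal sketch of such a sending loop (host, port, payload, and the SEND_INTERVAL value are assumed for illustration):

import socket
import time

SEND_INTERVAL = 5  # seconds; assumed value

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("127.0.0.1", 9000))  # endpoint assumed for illustration

while True:
    try:
        s.sendall(b"some data")
        time.sleep(SEND_INTERVAL)
    except socket.error as exc:
        # Reached only once the OS finally notices the broken connection.
        print("send failed:", exc)
        break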
When I close the server, the client still sends the data twice, and only then does it reach the "except" branch. If I set SEND_INTERVAL too long, it will be a disaster. So, how can I get the error immediately when the server is closed or goes down?
Nothing happens immediately over the network. That's one thing.
Secondly, the underlying OS will detect broken connections (and Python gets that info in the form of an exception), but this usually takes time. That's why you can still send messages even though the connection is already dead. And since the OS operates on the network layer (as opposed to the application layer), there's another issue: the connection may be dead but the OS may never detect this. For example, this will happen when the server is dead but sits behind a live proxy.
Thirdly, the most reliable way to know that a server is alive is when it sends something back to the client. So you should always .recv() (with a timeout) after a .sendall() call, and the server should always .sendall() after .recv() (the request-response pattern, even when the response is a simple "I received the message"). If you can't modify the server side and (in the worst case) the server doesn't send anything back to the client, then there's no reliable way to know this.
That's why you need some form of framing/correctness protocol. Simple .sendall() won't do.
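A minimal sketch of the client side of that request-response pattern (endpoint and timeout are placeholders; the server is assumed to answer every message):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5.0)               # don't wait forever for the reply
s.connect(("127.0.0.1", 9000))  # endpoint assumed for illustration

def send_with_ack(sock, payload):
    sock.sendall(payload)
    reply = sock.recv(1024)  # raises socket.timeout after 5 s of silence
    if not reply:
        # recv() returning b"" means the peer closed the connection cleanly.
        raise ConnectionError("server closed the connection")
    return reply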

Is it standard practice to keep a FIX connection connected all day long, or relogin periodically?

I wrote a program in Python using the quickfix package which connects to a vendor via FIX. We log in in the morning but don't actually send messages through the connection until the end of the day. The issue is, we don't want to keep the program open for the entirety of the day; we would rather log in again in the afternoon when we need to send the messages.
The vendor is requesting we stay logged in for the full duration between our start and stop times specified in our configurations. This is only possible by leaving my program on for the entirety of the day, because if I close it then the messages the vendor sends aren't registered as received by me. I don't send a logout message though.
Is it common practice to write a program to connect via FIX and leave it running for the entire session time? Or is it acceptable to close the program, given I don't send a logout message, and reconnect at a later time in the day?
Any design or best practice advice would be helpful here.
Is it common practice to write a program to connect via FIX and leave it running for the entire session time? Or is it acceptable to close the program, given I don't send a logout message, and reconnect at a later time in the day?
I don't know what others have done, but I used QuickFIX with Python for years and never had any problem running my system all day, OR shutting it down periodically for whatever reason and reconnecting. In the end I wound up leaving the system connected for weeks at a time, since that allowed me to record data.
I would say that the answer to both of your questions is YES. It is common to leave it running. Also, it is acceptable to just close the program.
There can always be edge cases and idiosyncratic features of your implementation and your counterparty, so you should seek to understand more why they have asked you not to disconnect. That sounds very strange to me. Is their FIX engine not capable of something very simple and standard?
Yes, it is common to keep FIX sessions running for a long time. That should not be an issue.
You can't just shut down your program on your end: session-level FIX Heartbeat (35=0) messages, sent periodically (usually every 30 seconds), are meant to keep the underlying TCP connection open and to check that both ends are still up and running properly.
From the details you gave, if your vendor (which is likely the acceptor side) requests it, it might be because they need to send you messages with no delay, as they occur.
If you (the initiator side) are not logged in at that time, they won't be able to deliver those messages, since acceptors wait for connections rather than initiating them.
The vendor might monitor sessions as well, though for the acceptor side, which waits for connections, that would be odd; more likely they will monitor unexpected session drops.
All in all, it depends very much on your vendor; you have to follow what they say...
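For reference, the start/stop times and heartbeat interval mentioned above live in the QuickFIX session configuration, and the engine handles logon and heartbeats once the initiator is started. A minimal sketch with the quickfix Python package (the cfg path and the Application subclass are placeholders):

import quickfix as fix

class App(fix.Application):  # hypothetical minimal application
    def onCreate(self, session_id): pass
    def onLogon(self, session_id): print("logged on:", session_id)
    def onLogout(self, session_id): print("logged out:", session_id)
    def toAdmin(self, message, session_id): pass
    def fromAdmin(self, message, session_id): pass
    def toApp(self, message, session_id): pass
    def fromApp(self, message, session_id): pass

# initiator.cfg is assumed to define the session, including the
# StartTime/EndTime schedule and HeartBtInt=30.
settings = fix.SessionSettings("initiator.cfg")
initiator = fix.SocketInitiator(App(), fix.FileStoreFactory(settings),
                                settings, fix.FileLogFactory(settings))
initiator.start()  # logs on and keeps heartbeats flowing for the session
# ... run for the trading day ...
initiator.stop()   # sends a Logout and disconnects cleanly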
