Mitigating TCP connection resets in AWS Fargate - Python

I am using Amazon ECS on AWS Fargate. My instances can access the internet, but the connection drops after 350 seconds. On average, out of 100 calls, my service gets a ConnectionResetError: [Errno 104] Connection reset by peer error approximately 5 times. I found a couple of suggestions for fixing this issue in my server-side code, see here and here
Cause
If a connection that's using a NAT gateway is idle for 350 seconds or more, the connection times out.
When a connection times out, a NAT gateway returns an RST packet to any resources behind the NAT gateway that attempt to continue the connection (it does not send a FIN packet).
Solution
To prevent the connection from being dropped, you can initiate more traffic over the connection. Alternatively, you can enable TCP keepalive on the instance with a value less than 350 seconds.
Existing Code:
url = "url to call http"
params = {
"year": year,
"month": month
}
response = self.session.get(url, params=params)
To work around it, I am currently using a band-aid retry solution with tenacity:
from requests.exceptions import HTTPError
from tenacity import (retry, retry_if_not_exception_type,
                      stop_after_attempt, wait_fixed)

@retry(
    retry=retry_if_not_exception_type(
        HTTPError
    ),  # could be more specific: requests.exceptions.ConnectionError
    reraise=True,
    wait=wait_fixed(2),
    stop=stop_after_attempt(5),
)
def call_to_api():
    url = "url to call HTTP"
    params = {
        "year": year,
        "month": month
    }
    response = self.session.get(url, params=params)
So my basic question is: how can I use Python requests correctly to implement either of the solutions below?
Close the connection before 350 seconds of inactivity
Enable keepalive for TCP connections

Concerning the "Close the connection before 350 seconds of inactivity" problem, there is a read timeout parameter you can pass to the session.get() call.
According to the docs, "it's the number of seconds that the client will wait between bytes sent from the server".
Which, to me, looks like an inactivity timeout.
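For example (a minimal sketch; the 5-second connect timeout and 300-second read timeout are assumed values, the latter chosen to stay under the 350-second NAT idle limit):

# timeout=(connect timeout, read timeout): requests raises a
# ReadTimeout when the server is silent for longer than the read
# timeout, so the client gives up before the NAT gateway sends a RST.
response = self.session.get(url, params=params, timeout=(5, 300))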

Posting the solution for future users who face this issue while working with AWS Fargate + NAT.
We need to set the TCP keepalive settings to the values dictated by our server-side configuration; this PR helped me a lot in fixing my issue: https://github.com/customerio/customerio-python/pull/70/files
import socket

from urllib3.connection import HTTPConnection

HTTPConnection.default_socket_options = (
    HTTPConnection.default_socket_options + [
        (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
        (socket.SOL_TCP, socket.TCP_KEEPIDLE, 300),
        (socket.SOL_TCP, socket.TCP_KEEPINTVL, 60),
    ]
)
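Because requests uses urllib3 under the hood, a session created after this patch should open its sockets with keepalive enabled. A minimal usage sketch (the URL is a placeholder):

import requests

session = requests.Session()
# Sockets opened by this session now send keepalive probes after
# 300 seconds of idleness, keeping the NAT mapping alive well
# before its 350-second limit.
response = session.get("https://example.com/api")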

Related

urllib2.urlopen timeout works only when connected to Internet

I'm working on Python 2.7 code that reads a value from an HTML page using the urllib2 library. I want the urllib2.urlopen function to time out after 5 seconds when there is no internet and jump to the remaining code.
It works as expected when the computer is connected to a working internet connection, and for testing, if I set timeout=0.1 it times out immediately without opening the URL, as expected. But when there is no internet, the timeout does not work whether I set it to 0.1, 5, or any other value. It simply does not time out.
This is my Code:
import urllib2

url = "https://alfahd.witorbit.net/fingerprint.php?n"
try:
    response = urllib2.urlopen(url, timeout=5).read()
    print response
except Exception as e:
    print e
Result when connected to the Internet with timeout value 5:
180
Result when connected to the Internet with timeout value 0.1:
<urlopen error timed out>
So the timeout seems to be working.
Result when NOT connected to the Internet, with any timeout value (it timed out after about 40 seconds every time I opened the URL, regardless of the value I set for timeout=):
<urlopen error [Errno -3] Temporary failure in name resolution>
How can I make urllib2.urlopen time out when there is no internet connectivity? Am I missing something? Please guide me to solve this issue. Thanks!
Because name resolution happens before the request is made, it's not subject to the timeout. You can prevent this error in name resolution by providing the IP for the host in your /etc/hosts file. For example, if the host is subdomain.example.com and the IP is 10.10.10.10, you would add the following line to the /etc/hosts file:
10.10.10.10 subdomain.example.com
Alternatively, you may be able to simply use the IP address directly; however, some webservers require that you use the hostname, in which case you'll need to modify the hosts file to use the name offline.
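If you go the IP-address route over plain HTTP, you can keep the original hostname in a Host header so no DNS lookup happens at all. A minimal sketch (the IP and hostname are the placeholder values from above; this does not work for HTTPS, where certificate checks need the real hostname):

import urllib2

# Connect by IP so name resolution is skipped, but tell the
# server which virtual host we want via the Host header.
req = urllib2.Request("http://10.10.10.10/",
                      headers={"Host": "subdomain.example.com"})
response = urllib2.urlopen(req, timeout=5).read()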

Autobahn|Python Twisted server that checks API key and disconnects clients

I want to add a simple API key check to an Autobahn Python WebSocket server. The server should check the key in the HTTP header of a client and disconnect clients that don't have the correct key.
I have figured out a solution to this, but I'm not sure it is the best solution (see below). If anyone has suggestions, I'd appreciate it.
From the API Docs for the onConnect method:
Throw autobahn.websocket.types.ConnectionDeny when you don’t want to accept the WebSocket connection request.
You can see this done on line 117 of one of the examples here.
I have tested this and it does not close the connection cleanly. However, since you are terminating a connection with an unauthenticated client, you should not want to go through a closing handshake anyway.
The onClose callback takes a wasClean argument which allows you to differentiate between clean and unclean connection closures.
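For example (a minimal sketch of that callback; the log line is just illustrative):

def onClose(self, wasClean, code, reason):
    # wasClean is False when the connection was aborted without
    # a proper closing handshake
    print("WebSocket closed (wasClean={}, code={}, reason={})".format(
        wasClean, code, reason))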
My solution is to check the HTTP header after the client has connected to the server and to close the connection if the client does not have a valid API key.
MY_API_KEY = u'12345'

class MyServerProtocol(WebSocketServerProtocol):

    def onConnect(self, request):
        print("Client connecting: {}".format(request.peer))

    def onOpen(self):
        # Check API key
        if 'my-api-key' not in self.http_headers or \
                self.http_headers['my-api-key'] != MY_API_KEY:
            # Disconnect the client
            print('Missing/Invalid Key')
            self.sendClose(4000, u'Missing/Invalid Key')
            return
        # Register client
        self.factory.register(self)
I found that if I close the connection inside onConnect, I get an error saying that I cannot close a connection that has not yet connected. The above solution closes cleanly on the client side, but behaves oddly on the server side. The log output is:
dropping connection: None
Connection to/from tcp4:127.0.0.1:51967 was aborted locally
_connectionLost: [Failure instance: Traceback (failure with no frames): <class 'twisted.internet.error.ConnectionAborted'>: Connection was aborted locally, using.
]
WebSocket connection closed: None
Is the reason the close message is None on the server side that the server closed the connection and the client did not send back a reason? Is there a better way to do this?
Update:
I have accepted Henry Heath's answer because it seems to be the officially supported solution, even though it does not close the connection cleanly. Using autobahn.websocket.types.ConnectionDeny, the solution becomes:
from autobahn.websocket.types import ConnectionDeny

MY_API_KEY = u'12345'

class MyServerProtocol(WebSocketServerProtocol):

    def onConnect(self, request):
        print("Client connecting: {}".format(request.peer))
        # Check API key
        if 'my-api-key' not in request.headers or \
                request.headers['my-api-key'] != MY_API_KEY:
            # Reject the client
            print('Missing/Invalid Key')
            raise ConnectionDeny(4000, u'Missing/Invalid Key')

    def onOpen(self):
        # Register client
        self.factory.register(self)
Note that within onConnect, HTTP headers are accessible with request.headers, whereas within onOpen, they are accessible with self.http_headers.

"The connection was reset" on web browsers when trying to connect to a localhost socket server

I am trying to make a server in Python using sockets that I can connect to from any web browser. I am using "localhost" as the host and 8888 as the port.
When I attempt to connect to it, the content I want to be shown appears for a split second, and then it goes away with the browser saying "The connection was reset".
I've made the server do something very simple to test whether it still happens, and it does.
Is there a way to stop this?
import time
import socket

HOST = "localhost"
PORT = 8888

def function(sck):
    sck.send(bytes("test", "UTF-8"))
    sck.close()

ssck = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
ssck.bind((HOST, PORT))
ssck.listen(1)
while True:
    sck, addr = ssck.accept()
    function(sck)
Probably the same problem as Perl: Connection reset with simple HTTP server, Ultra simple HTTP socket server, written in PHP, behaving unexpectedly, HTTP Server Not Sending Complete File To WGET, and Firefox. Connection reset by peer?. That is, you don't read the HTTP request from the browser but simply send your response and close the connection.
tl;dr
your function should be:
def function(sck):
    sck.recv(4096)  # read the browser's request before answering
    sck.send(bytes("HTTP/1.1 200 OK\r\n\r\n"
                   "<html><head><title>test page</title></head>"
                   "<body><h1>test page!</h1></body></html>", "UTF-8"))
    sck.close()
With a server as simple as that, you're only creating a raw TCP socket.
The HTTP protocol expects the client to ask for a page with a request like:
GET /somepath/somepage.html HTTP/1.1
Host: somehost.com
OtherHeader: look at the HTTP spec
The response should then be (note the blank line separating headers from body):
HTTP/1.1 200 OK
some: headers

<html><head></head><body></body></html>

Tor via Python - connection OK, but not showing up as on Tor

I am using the stem example of connecting to the Tor network. This should connect a client to Tor, and it seems to be doing so, but when I check the IP address it is incorrect and not a Tor IP. Any ideas why this happens and, more importantly, how can I fix this issue? :)
import StringIO
import socket
import urllib

import socks  # SocksiPy module
import stem.process
from stem.util import term

SOCKS_PORT = 7000

# Set socks proxy and wrap the urllib module
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, '127.0.0.1', SOCKS_PORT)
socket.socket = socks.socksocket

# Perform DNS resolution through the socket
def getaddrinfo(*args):
    return [(socket.AF_INET, socket.SOCK_STREAM, 6, '', (args[0], args[1]))]

socket.getaddrinfo = getaddrinfo

def query(url):
    """
    Uses urllib to fetch a site using SocksiPy for Tor over the SOCKS_PORT.
    """
    try:
        return urllib.urlopen(url).read()
    except:
        return "Unable to reach %s" % url

# Start an instance of Tor configured to only exit through Russia. This prints
# Tor's bootstrap information as it starts. Note that this likely will not
# work if you have another Tor instance running.
def print_bootstrap_lines(line):
    if "Bootstrapped " in line:
        print term.format(line, term.Color.BLUE)

print term.format("Starting Tor:\n", term.Attr.BOLD)

tor_process = stem.process.launch_tor_with_config(
    config = {
        'SocksPort': str(SOCKS_PORT),
        'ExitNodes': '{ru}',
    },
    init_msg_handler = print_bootstrap_lines,
)
I get the output :
richard@Tornado:~/Documents/Masters Project$ python russiaExample.py
Starting Tor:
May 26 21:56:49.000 [notice] Bootstrapped 80%: Connecting to the Tor network.
May 26 21:56:50.000 [notice] Bootstrapped 85%: Finishing handshake with first hop.
May 26 21:56:50.000 [notice] Bootstrapped 90%: Establishing a Tor circuit.
May 26 21:56:50.000 [notice] Bootstrapped 100%: Done.
However, when I visit https://check.torproject.org/ to check whether I am using Tor, it says I am not, and my normal IP is shown.
What is causing this issue? The output shown above seems to suggest it has established a circuit to Tor just fine, but it is apparently not being used.
Am I on the right lines here?
Thanks guys
You have to set up your browser to use Tor as a proxy; the script above only routes sockets created inside the Python process through Tor, so the browser's own traffic still goes out directly.
If you are using Firefox:
Go to Edit, Preferences, Advanced and choose the "Configure how Firefox connects to the Internet" settings.
In SOCKS Host enter 127.0.0.1 and under Port enter 7000.
Go to whatismyip.com and you will see a new IP.
Or check check.torproject.org to see that you are using Tor successfully.
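You can also verify from within the script itself, since its query() helper already goes through the SOCKS-wrapped urllib (a minimal sketch reusing the code above):

# Fetch the Tor check page through the proxied urllib; the browser
# is unaffected because only this Python process is proxied.
print term.format("\nChecking our endpoint:\n", term.Attr.BOLD)
print term.format(query("https://check.torproject.org/"), term.Color.BLUE)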

Issue with a simple Python server: socket.accept() accepts input /favicon.ico even after I (supposedly) close the socket

I'm new to Stack Overflow and socket programming. Thanks in advance for your help!
A little background: I am trying to implement a simple Python server. I am trying to connect through TCP and just send back some parsed text from the request (I am trying to send back the text variable "message").
However, it seems that even after I close the connection, the server-side socket accepts some random input called "/favicon.ico", and I am not sure where this is coming from. The loop accepts "/favicon.ico" a few times before returning to the state where it is waiting for a connection.
Does anyone know what is going on?
Here is my code:
#import socket module
from socket import *

serverSocket = socket(AF_INET, SOCK_STREAM)

#Prepare a server socket
serverPort = 10016
serverName = '192.168.56.101'
serverSocket.bind((serverName, serverPort))
serverSocket.listen(0)

while True:
    #Establish the connection
    print 'Ready to serve...'
    connectionSocket, addr = serverSocket.accept()
    message = connectionSocket.recv(4096)
    filename = message.split()[1]
    #f = open(filename[1:])
    print filename
    connectionSocket.send(message)
    connectionSocket.close()
    print '\nYou got to this line\n'
-------------------------------------------------------------
Here is my client-side request: http://192.xxx.56.101:10016/sophie.JPG (Stack Overflow made me x out the IP)
And my client-side output, which seems to be returned alright:
GET /sophie.JPG HTTP/1.1
Host: 192.168.56.101:10016
User-Agent: Mozilla/5.0 (Windows NT 6.3; WOW64; rv:24.0) Gecko/20100101 Firefox/24.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
-------------------------------------------------------------
And here is my server-side output (print statements):
name#name-VirtualBox:~/Documents/python_scripts$ python server2.py
Ready to serve...
/sophie.JPG
You got to this line
Ready to serve...
/favicon.ico
You got to this line
Ready to serve...
/favicon.ico
You got to this line
Ready to serve...
/favicon.ico
You got to this line
Ready to serve...
----------------------------------------------------------------------
I would have expected the output to simply be the first four lines:
Ready to serve...
/sophie.JPG
You got to this line
Ready to serve...
-------
I expected only these four lines back since the server should return to its waiting state after the connection is closed. However, it is still accepting some input called /favicon.ico and running through the loop a few more times before it goes back to the waiting state.
Does anyone have an idea as to what is going on here?
Thanks!
----------------------------------------
Update:
Ok, so I added the line you suggested, and I see that the browser is sending these extra requests and that they are (according to you) getting queued.
In addition to that, I also changed the line:
serverSocket.listen(0)
to
serverSocket.listen(1)
and then my code runs as I would want it to. (Actually, I have tried it again now, and it does not run as expected; the /favicon.ico request is still being sent.)
I guess I have a couple followup questions about what is happening:
Why is the browser making more requests for /favicon.ico when I have not asked it to (with the original code serverSocket.listen(0))?
Now that I have allowed the server to listen to more than one socket connection, why do the bogus connection requests (/favicon.ico) disappear?
And thanks, I will read up on syn cookies as well.
Thankfully, your server is working as expected!
Try adding this to your code, after calling serverSocket.accept(): print addr.
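For example (a minimal sketch of the suggested change, in the question's Python 2 style):

while True:
    print 'Ready to serve...'
    connectionSocket, addr = serverSocket.accept()
    # addr is the client's (host, port) pair; the port differs
    # for every accepted connection
    print addr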
What's happening is this:
You start your loop, and you call accept(). You're waiting for a new connection to arrive on port 10016. You receive that connection, serve the response, and then close that connection.
Then you loop again - thus ready to accept another socket connection. This time, it's for /favicon.ico.
The addr variable tells you that each new socket connection (for sophie.JPG, and for favicon.ico) is happening on a different port - that is what accept() does.
Because your code can only handle one connection at a time, the browser's request for favicon.ico goes into a queue. That is, the browser has requested to connect to your server to fetch the favicon, but your server has not yet accepted that connection.
Now, theoretically, you should not accept any backlogged connections. But there is a catch! It turns out that if TCP SYN cookies are enabled in your kernel, this parameter is ignored. (How would you know this? Well, it helps that I've done a bunch of networking in C; Python abstracts away many of these details.)
Hope that helps!
Are you using Chrome, perhaps? See: server socket receives 2 HTTP requests when I send from Chrome and receives one when I send from Firefox.
I had a similar issue with my Node server. It is due to a bug in Chrome. In summary, Chrome sends a request for a favicon on every request. Since, most likely, you aren't sending a favicon back, it requests one after every legitimate request.
Firefox, and most other browsers, also send out a request for a favicon when they first connect, but cache the result, i.e. if there isn't a favicon returned the first time, they don't keep trying - which is why you're only seeing a single request from Firefox. It seems Chrome is unfortunately a little too persistent with its favicon requestiness.
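One way to quiet the extra requests is to answer them explicitly, since a 404 tells the browser there is no favicon. A minimal sketch in the question's Python 2 style (the respond() helper is hypothetical):

def respond(connectionSocket, filename):
    if filename == '/favicon.ico':
        # No favicon here; well-behaved browsers cache this answer
        connectionSocket.send('HTTP/1.1 404 Not Found\r\n\r\n')
    else:
        # Echo the requested path back as a minimal valid response
        connectionSocket.send('HTTP/1.1 200 OK\r\n\r\n' + filename)
    connectionSocket.close()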
