Malformed DNS response packet (python + scapy)

I'm working on creating a proxy server using Python and scapy. TCP packets seem to be working fine, but I'm running into some issues with UDP, specifically DNS requests. Essentially, when a DNS request comes in I capture it in my script, perform the DNS lookup, and try to return the result to the client that made the query. The script successfully performs the lookup and returns the DNS response; however, Wireshark flags the returned packet as a "Malformed Packet". Could someone tell me what I need to do in order to correctly return the DNS response?
#!/usr/bin/env python
from tornado.websocket import WebSocketHandler
from tornado.httpserver import HTTPServer
from tornado.web import Application
from tornado.ioloop import IOLoop
from collections import defaultdict
from scapy.all import *
import threading

outbound_udp = defaultdict(int)
connection = None

class PacketSniffer(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)

    def run(self):
        global connection
        while True:
            pkt = sniff(iface="eth0", count=1)
            if pkt[0].haslayer(DNS):
                print "Returning back has UDP"
                print pkt.summary()
                ipPacket = pkt[0][IP]
                dnsPacket = pkt[0][DNS]
                if outbound_udp[(ipPacket.src, dnsPacket.id)] > 0:
                    outbound_udp[(ipPacket.src, dnsPacket.id)] -= 1
                    print "Found in outbound_udp"
                    # Modify the destination address back to the address of the TUN on the host.
                    ipPacket.dst = "10.0.0.1"
                    try:
                        del ipPacket[TCP].chksum
                        del ipPacket[IP].chksum
                        del ipPacket[UDP].chksum
                    except IndexError:
                        print ""
                    ipPacket.show2()  # Force recompute the checksum
                    if connection:
                        connection.write_message(str(ipPacket).encode('base64'))

sniffingThread = PacketSniffer()
sniffingThread.daemon = True
sniffingThread.start()

Some bugs have recently been fixed in Scapy around DNS (and other sophisticated protocols, but DNS is the one most frequently seen):
https://bitbucket.org/secdev/scapy/issue/913/
https://bitbucket.org/secdev/scapy/issue/5104/
https://bitbucket.org/secdev/scapy/issue/5105/
Trying with the latest Scapy development version from the Mercurial repository (hg clone http://bb.secdev.org/scapy) should fix this.
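If upgrading alone does not fix it, it can also help to build the reply from scratch instead of mutating the sniffed packet, so that lengths and checksums are recomputed cleanly. A minimal sketch, assuming the sniffed query is in pkt as in the question's code (the 10.0.0.1 answer record is a placeholder):
from scapy.all import IP, UDP, DNS, DNSRR, send

query = pkt[0]
reply = (IP(src=query[IP].dst, dst=query[IP].src) /
         UDP(sport=53, dport=query[UDP].sport) /
         DNS(id=query[DNS].id,  # echo the transaction id of the query
             qr=1, aa=1,        # mark the packet as an (authoritative) answer
             qd=query[DNS].qd,  # repeat the question section
             an=DNSRR(rrname=query[DNS].qd.qname, ttl=60, rdata="10.0.0.1")))
send(reply)  # scapy fills in lengths and checksums on send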

Related

Monkey patching sockets library to use a specific network interface

I have been trying to make requests to a website using the requests library, but using different network interfaces. Below is a list of answers I have tried that did not work.
This answer describes how to achieve what I want, but it uses pycurl. I could use pycurl, but I have learned about this monkey patching thing and want to give it a try.
This other answer seemed to work at first, since it does not raise any error. However, I monitored my network traffic using Wireshark, and the packets were sent from my default interface. I tried to print messages inside the set_src_addr function defined by the author of the answer, but the messages did not show up. Therefore, I think it is patching a function that is never called. I also get an HTTP 200 response, which should not happen since I bound my socket to 127.0.0.1.
import socket

real_create_conn = socket.create_connection

def set_src_addr(*args):
    address, timeout = args[0], args[1]
    source_address = ('127.0.0.1', 0)
    return real_create_conn(address, timeout, source_address)

socket.create_connection = set_src_addr

import requests
r = requests.get('http://www.google.com')
r
<Response [200]>
I have also tried this answer. I can get two kinds of errors using this method:
import socket

true_socket = socket.socket

def bound_socket(*a, **k):
    sock = true_socket(*a, **k)
    sock.bind(('127.0.0.1', 0))
    return sock

socket.socket = bound_socket

import requests
This does not allow me to create a socket and raises this error. I have also tried a variation of this answer, which looks like this:
import requests
import socket

true_socket = socket.socket

def bound_socket(*a, **k):
    sock = true_socket(*a, **k)
    sock.bind(('192.168.0.10', 0))
    print(sock)
    return sock

socket.socket = bound_socket

r = requests.get('https://www.google.com')
This also does not work and raises this error.
I have the following problem: I want each process to send its requests through a specific network interface. I thought that since threads share global memory (including libraries), I should change my code to work with processes. Now I want to apply a monkey patching solution somewhere, in such a way that each process can use a different interface for its communication. Am I missing something? Is this the best way to approach the problem?
Edit:
I would also like to know whether it is possible for different processes to have different versions of the same library. If libraries are shared, how can I have different versions of a library in Python (one for each process)?
This appears to work for Python 3:
In [1]: import urllib3

In [2]: real_create_conn = urllib3.util.connection.create_connection

In [3]: def set_src_addr(address, timeout, *args, **kw):
   ...:     source_address = ('127.0.0.1', 0)
   ...:     return real_create_conn(address, timeout=timeout, source_address=source_address)
   ...:
   ...: urllib3.util.connection.create_connection = set_src_addr
   ...:
   ...: import requests
   ...: r = requests.get('http://httpbin.org')
It fails with the following exception:
ConnectionError: HTTPConnectionPool(host='httpbin.org', port=80): Max retries exceeded with url: / (Caused by NewConnectionError("<urllib3.connection.HTTPConnection object at 0x10c4b89b0>: Failed to establish a new connection: [Errno 49] Can't assign requested address",))
I will document the solution I have found and list some of the problems I had in the process.
salparadise had it right. It is very similar to the first answer I found. I am assuming that the requests module imports urllib3, and the latter has its own reference to the socket library. Therefore, it is very likely that the requests module never directly calls the socket library, but has that functionality provided by the urllib3 module.
I had not noticed it at first, but the third snippet in my question was actually working. The reason I got a ConnectionError is that I was trying to use a macvlan virtual interface on top of a wireless physical interface (which, if I understood correctly, drops packets whose MAC addresses do not match). The following solution therefore does work:
import requests
from socket import socket as backup
import socket

def socket_custom_src_ip(src_ip):
    original_socket = backup

    def bound_socket(*args, **kwargs):
        sock = original_socket(*args, **kwargs)
        sock.bind((src_ip, 0))
        print(sock)
        return sock

    return bound_socket
In my problem, I need to change the IP address of a socket several times. One problem I ran into was that if no backup of the original socket function is kept, patching it several times causes RecursionError: maximum recursion depth exceeded, because from the second patch onward socket.socket is no longer the original function. The solution above therefore keeps a copy of the original socket function as a backup for further bindings of different IPs. A rough usage sketch follows.
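This is roughly how the factory above is used (the IP address is a placeholder for one of your interfaces):
socket.socket = socket_custom_src_ip('192.168.0.10')  # bind new sockets to this source IP
r = requests.get('https://www.google.com')
socket.socket = backup  # restore the original implementation so later patches start clean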
Lastly, the following is a proof of concept of how to have multiple processes using different library instances. With this idea, I can import and monkey-patch the socket module inside each process, giving each one its own patched version.
import importlib
import multiprocessing

class MyProcess(multiprocessing.Process):
    def __init__(self, module):
        super().__init__()
        self.module = module

    def run(self):
        i = importlib.import_module(f'{self.module}')
        print(f'{i}')

p1 = MyProcess('os')
p2 = MyProcess('sys')
p1.start()
<module 'os' from '/usr/lib/python3.7/os.py'>
p2.start()
<module 'sys' (built-in)>
This also works using the import statement and the global keyword to provide transparent access inside all functions, as follows:
import multiprocessing

def fun(self):
    import os
    global os
    os.var = f'{repr(self)}'
    fun2()

def fun2():
    print(os.system(f'echo "{os.var}"'))

class MyProcess(multiprocessing.Process):
    def __init__(self):
        super().__init__()

    def run(self):
        if 'os' in dir():
            print('os already imported')
        fun(self)

p1 = MyProcess()
p2 = MyProcess()
p2.start()
<MyProcess(MyProcess-2, started)>
p1.start()
<MyProcess(MyProcess-1, started)>
I faced a similar issue where I wanted some localhost traffic to originate from an address other than 127.0.0.1 (I was testing an https connection over localhost).
This is how I did it using the Python core libraries ssl and http.client (cf. the docs), as it seemed cleaner than the solutions I found online using the requests library.
import http.client as http
import json  # needed for json.dumps below
import ssl

dst = 'sever.infsec.local'  # dns record was added to OS
src = ('127.0.0.2', 0)      # port 0 -> select any available port

context = ssl.SSLContext()
context.load_default_certs()  # loads OS certificate context

request = http.HTTPSConnection(dst, 443, context=context,
                               source_address=src)
request.connect()
request.request("GET", '/', json.dumps(request_data))  # request_data: payload defined elsewhere
response = request.getresponse()

Understanding Autobahn and Twisted integration

I am trying to understand the examples given here: https://github.com/tavendo/AutobahnPython/tree/master/examples/twisted/wamp/basic/pubsub/basic
I built this script, which is supposed to handle multiple pub/sub websocket connections and also open a tcp port (8123) for incoming control messages. When a message arrives on port 8123, the application should broadcast it to all connected subscribers. How do I make NotificationProtocol or NotificationFactory talk to the websocket and make the websocket server broadcast a message?
Another thing that I do not understand is the URL. The client javascript connects to the url http://:8080/ws. Where does the "ws" come from?
Also, can someone explain the purpose of RouterFactory, RouterSessionFactory, and this bit:
from autobahn.wamp import types
session_factory.add(WsNotificationComponent(types.ComponentConfig(realm="realm1")))
My code is below:
import sys, time
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, Factory
from twisted.internet.defer import inlineCallbacks
from autobahn.twisted.wamp import ApplicationSession
from autobahn.twisted.util import sleep

class NotificationProtocol(Protocol):
    def __init__(self, factory):
        self.factory = factory

    def dataReceived(self, data):
        print "received new data"

class NotificationFactory(Factory):
    protocol = NotificationProtocol

class WsNotificationComponent(ApplicationSession):
    @inlineCallbacks
    def onJoin(self, details):
        counter = 0
        while True:
            self.publish("com.myapp.topic1", "test %d" % counter)
            counter += 1
            yield sleep(1)

## we use an Autobahn utility to install the "best" available Twisted reactor
from autobahn.twisted.choosereactor import install_reactor
reactor = install_reactor()

## create a WAMP router factory
from autobahn.wamp.router import RouterFactory
router_factory = RouterFactory()

## create a WAMP router session factory
from autobahn.twisted.wamp import RouterSessionFactory
session_factory = RouterSessionFactory(router_factory)

from autobahn.wamp import types
session_factory.add(WsNotificationComponent(types.ComponentConfig(realm="realm1")))

from autobahn.twisted.websocket import WampWebSocketServerFactory
transport_factory = WampWebSocketServerFactory(session_factory)
transport_factory.setProtocolOptions(failByDrop=False)

from twisted.internet.endpoints import serverFromString

## start the server from an endpoint
server = serverFromString(reactor, "tcp:8080")
server.listen(transport_factory)

notificationFactory = NotificationFactory()
reactor.listenTCP(8123, notificationFactory)

reactor.run()
"How do i make NotificationProtocol or NotificationFactory talk to the websocket and make the websocket server broadcast a message":
Check out one of my other answers on SO: Persistent connection in twisted. Jump down to the example code and model your websocket logic like the "IO" logic and you'll have a good fit (You might also want to see the follow-on answer about the newer endpoint calls from one of the twisted core-team too)
"Where does the "ws" come from ?"
Websockets are implemented by retasking http connections, which by their nature have to have a specific path on the request. That "ws" path typically would map to a special http handler that autobahn is building for you to process websockets (or at least that's what your javascript is expecting...). Assuming thing are setup right you can actually point your web-browswer at that url and it should print back an error about the websocket handshake (Expected WebSocket Headers in my case, but I'm using cyclones websockets not autobahn).
P.S. one of the cool side-effects from "websockets must have a specific path" is that you can actually mix websockets and normal http content on the same handler/listen/port, this gets really handy when your trying to run them all on the same SSL port because your trying to avoid the requirement of a proxy front-ending your code.
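For example, with Twisted's resource tree you could hang the websocket handler off the "/ws" path of a normal web site. This is an untested sketch, assuming autobahn's WebSocketResource wrapper and reusing transport_factory from the question's code:
from twisted.internet import reactor
from twisted.web.server import Site
from twisted.web.static import Data
from autobahn.twisted.resource import WebSocketResource

root = Data("<html>normal http content</html>", "text/html")  # plain http at "/"
root.putChild("ws", WebSocketResource(transport_factory))     # websocket at "/ws"
reactor.listenTCP(8080, Site(root))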

How to send data to a client?

from BaseHTTPServer import HTTPServer
from CGIHTTPServer import CGIHTTPRequestHandler
import socket, ssl
import time
import Config

class StartHTTPServer:
    '''Web server to serve out map image and provide cgi'''
    def __init__(self):
        # Start HTTP Server. Port and address defined in Config.py
        srvraddr = ("", Config.HTTP_PORT)  # my hostname, portnumber
        srvrobj = HTTPServer(srvraddr, CGIHTTPRequestHandler)
        srvrobj.socket = ssl.wrap_socket(srvrobj.socket, server_side=True,
                                         certfile="c:\users\shuen\desktop\servertryout 23022012\serverCert.crt",
                                         keyfile="c:\users\shuen\desktop\servertryout 23022012\privateKey.key",
                                         ssl_version=ssl.PROTOCOL_TLSv1,
                                         do_handshake_on_connect=True)
        print srvrobj.socket.cipher()
        print srvrobj.socket.getsockname()
        print "HTTP server running at IP Address %s port %s." % (Config.HTTP_ADDRESS, Config.HTTP_PORT)
        srvrobj.serve_forever()  # runs as a perpetual daemon and blocks forever,
                                 # so the three lines below are never reached
        srvrobj.socket.accept()
        message = 'hello from server.<EOF>'
        srvrobj.socket.send(message)
I have tried to send data with a normal socket, which works. However, the code I'm working on uses SocketServer and can't be changed. I cannot find any example that sends data across to the client. How can I do that?
If you really cannot change your server software, you must use a wrapper, e.g. openssl s_server.
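For instance, an invocation along these lines (standard s_server flags; the certificate and key files are your own) gives you a TLS server that forwards whatever you type on stdin to the connected client:
openssl s_server -accept 4433 -cert serverCert.crt -key privateKey.key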

How to achieve tcpflow functionality (follow tcp stream) purely within python

I am writing a tool in python (platform is linux). One of the tasks is to capture a live tcp stream and apply a function to each line. Currently I'm using:
import subprocess
proc = subprocess.Popen(['sudo', 'tcpflow', '-C', '-i', interface, '-p', 'src', 'host', ip],
                        stdout=subprocess.PIPE)
for line in iter(proc.stdout.readline, ''):
    do_something(line)
This works quite well (with the appropriate entry in /etc/sudoers), but I would like to avoid calling an external program.
So far I have looked into the following possibilities:
flowgrep: a python tool which looks just like what I need, BUT it uses pynids internally, which is 7 years old and seems pretty much abandoned. There is no pynids package for my gentoo system, and it ships with a patched version of libnids which I couldn't compile without further tweaking.
scapy: this is a packet manipulation program/library for python; I'm not sure whether tcp stream reassembly is supported.
pypcap or pylibpcap, as wrappers for libpcap. Again, libpcap is for packet capturing, whereas I need stream reassembly, which is not possible according to this question.
Before I dive deeper into any of these libraries, I would like to know whether someone has a working code snippet (this seems like a rather common problem). I would also be grateful for advice about the right way to go.
Thanks
Jon Oberheide has led efforts to maintain pynids, which is fairly up to date, at:
http://jon.oberheide.org/pynids/
So this might permit you to further explore flowgrep. pynids itself handles stream reconstruction rather elegantly. See http://monkey.org/~jose/presentations/pysniff04.d/ for some good examples; a rough sketch of the stream API is below.
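As a taste of the API, reassembling tcp streams with pynids looks roughly like this (a sketch modeled on the pysniff04 examples; the device name and the state handling shown are assumptions, and it must run as root):
import nids

def handle_tcp(tcp):
    if tcp.nids_state == nids.NIDS_JUST_EST:
        # new connection: ask libnids to collect both directions
        tcp.client.collect = 1
        tcp.server.collect = 1
    elif tcp.nids_state == nids.NIDS_DATA:
        tcp.discard(0)  # keep everything buffered until the stream ends
    elif tcp.nids_state in (nids.NIDS_CLOSE, nids.NIDS_TIMED_OUT, nids.NIDS_RESET):
        # tcp.server.data[:tcp.server.count] is the reassembled client->server stream
        print "stream finished:", tcp.addr

nids.param("device", "eth0")  # assumed capture interface
nids.init()
nids.register_tcp(handle_tcp)
nids.run()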
Just as a follow-up: I abandoned the idea of monitoring the stream at the tcp layer. Instead I wrote a proxy in python and let the connection I want to monitor (an http session) go through this proxy. The result is more stable and does not need root privileges to run. This solution depends on pymiproxy.
This goes into a standalone program, e.g. helper_proxy.py
from multiprocessing.connection import Listener
import StringIO
from httplib import HTTPResponse
import threading
import time
from miproxy.proxy import RequestInterceptorPlugin, ResponseInterceptorPlugin, AsyncMitmProxy

class FakeSocket(StringIO.StringIO):
    def makefile(self, *args, **kw):
        return self

class Interceptor(RequestInterceptorPlugin, ResponseInterceptorPlugin):
    conn = None

    def do_request(self, data):
        # do whatever you need to send data here, I'm only interested in responses
        return data

    def do_response(self, data):
        if Interceptor.conn:  # if the listener is connected, send the response to it
            response = HTTPResponse(FakeSocket(data))
            response.begin()
            Interceptor.conn.send(response.read())
        return data

def main():
    proxy = AsyncMitmProxy()
    proxy.register_interceptor(Interceptor)
    ProxyThread = threading.Thread(target=proxy.serve_forever)
    ProxyThread.daemon = True
    ProxyThread.start()
    print "Proxy started."
    address = ('localhost', 6000)  # family is deduced to be 'AF_INET'
    listener = Listener(address, authkey='some_secret_password')
    while True:
        Interceptor.conn = listener.accept()
        print "Accepted Connection from", listener.last_accepted
        try:
            Interceptor.conn.recv()
        except:
            time.sleep(1)
        finally:
            Interceptor.conn.close()

if __name__ == '__main__':
    main()
Start it with python helper_proxy.py. This will create a proxy listening for http connections on port 8080, and listening for another python program on port 6000. Once the other python program has connected on that port, the helper proxy sends all http replies to it. This way the helper proxy can keep running, keeping up the http connection, while the listener can be restarted for debugging.
Here is how the listener works, e.g. listener.py:
from multiprocessing.connection import Client

def main():
    address = ('localhost', 6000)
    conn = Client(address, authkey='some_secret_password')
    while True:
        print conn.recv()

if __name__ == '__main__':
    main()
This will just print all the replies. Now point your browser to the proxy running on port 8080 and establish the http connection you want to monitor.

How do I use TLS with asyncore?

An asyncore-based XMPP client opens a normal TCP connection to an XMPP server. The server indicates it requires an encrypted connection. The client is now expected to start a TLS handshake so that subsequent requests can be encrypted.
tlslite integrates with asyncore, but the sample code is for a server (?) and I don't understand what it's doing.
I'm on Python 2.5. How can I get the TLS magic working?
Here's what ended up working for me:
from tlslite.api import *

def handshakeTls(self):
    """
    Encrypt the socket using the tlslite module
    """
    self.logger.info("activating TLS encryption")
    self.socket = TLSConnection(self.socket)
    self.socket.handshakeClientCert()
Definitely check out twisted and wokkel. I've been building tons of xmpp bots and components with it and it's a dream.
I've followed what I believe are all the steps tlslite documents to make an asyncore client work. I can't actually get it to work, since the only asyncore client I have at hand to tweak for the purpose is the example in the Python docs, which is an HTTP 1.0 client, and I believe that because of this I'm trying to set up an HTTPS connection in a very half-baked way. And I have no asyncore XMPP client, nor any XMPP server requesting TLS, to get anywhere close to your situation. Nevertheless, I decided to share the fruits of my work anyway, because (even though some step may be missing) it does seem to be a bit better than what you previously had; I think I'm showing all the needed steps in the __init__. BTW, I copied the pem files from the tlslite/test directory.
import asyncore, socket
from tlslite.api import *

s = open("./clientX509Cert.pem").read()
x509 = X509()
x509.parse(s)
certChain = X509CertChain([x509])
s = open("./clientX509Key.pem").read()
privateKey = parsePEMKey(s, private=True)

class http_client(TLSAsyncDispatcherMixIn, asyncore.dispatcher):
    ac_in_buffer_size = 16384

    def __init__(self, host, path):
        asyncore.dispatcher.__init__(self)
        self.create_socket(socket.AF_INET, socket.SOCK_STREAM)
        self.connect((host, 80))
        TLSAsyncDispatcherMixIn.__init__(self, self.socket)
        self.tlsConnection.ignoreAbruptClose = True
        handshaker = self.tlsConnection.handshakeClientCert(
            certChain=certChain,
            privateKey=privateKey,
            async=True)
        self.setHandshakeOp(handshaker)
        self.buffer = 'GET %s HTTP/1.0\r\n\r\n' % path

    def handle_connect(self):
        pass

    def handle_close(self):
        self.close()

    def handle_read(self):
        print self.recv(8192)

    def writable(self):
        return (len(self.buffer) > 0)

    def handle_write(self):
        sent = self.send(self.buffer)
        self.buffer = self.buffer[sent:]

c = http_client('www.readyhosting.com', '/')
asyncore.loop()
This is a mix of the asyncore example http client from the Python docs, plus what I've gleaned from the tlslite docs and been able to reverse engineer from their sources. Hope this (even though incomplete/not working) can at least advance you in your quest...
Personally, in your shoes, I'd consider switching from asyncore to twisted: asyncore is old and rusty, while Twisted already integrates a lot of juicy, useful bits (the URL I gave points to a part of the docs that already integrates TLS and XMPP for you...). A bare-bones sketch of a Twisted TLS client is below.
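To give a flavour of how compact it is, here is a bare-bones Twisted TLS client (a sketch against the Python 2 era Twisted API; the host and request line are placeholders, not part of the answer above):
from twisted.internet import reactor, ssl, protocol

class Fetch(protocol.Protocol):
    def connectionMade(self):
        # the transport is already TLS-wrapped when this fires
        self.transport.write('GET / HTTP/1.0\r\n\r\n')

    def dataReceived(self, data):
        print data

    def connectionLost(self, reason):
        reactor.stop()

factory = protocol.ClientFactory()
factory.protocol = Fetch
reactor.connectSSL('www.example.com', 443, factory, ssl.ClientContextFactory())
reactor.run()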
