Python-2.7 Scapy retransmit packet to destination - python

Doing ARP poisoning, I am in the middle of the connection between 1) the router and 2) the victim computer. How can I retransmit the packets to their destination? (preferably with Scapy)
I have this :
send(ARP(op=ARP.is_at, psrc=router_ip, hwdst=victim_mac, pdst=victim_ip))
send(ARP(op=ARP.is_at, psrc=victim_ip, hwdst=router_mac, pdst=router_ip))

Reviewing Scapy's API documentation suggests these alternatives:
The send function accepts 2 additional arguments that could prove useful:
loop: send the packets endlessly if not 0.
inter: time in seconds to wait between 2 packets.
Therefore, executing the following statement would send the packets in an endless loop:
send([ARP(op=ARP.is_at, psrc=router_ip, hwdst=victim_mac, pdst=victim_ip),
      ARP(op=ARP.is_at, psrc=victim_ip, hwdst=router_mac, pdst=router_ip)],
     inter=1, loop=1)
The sr function accepts 3 arguments that could prove useful:
retry: if positive, how many times to resend unanswered packets; if negative, how many consecutive unanswered probes before giving up. Only the negative value is really useful.
timeout: how much time to wait after the last packet has been sent. By
default, sr will wait forever and the user will have to interrupt (Ctrl-C) it when he expects no more answers.
inter: time in seconds to wait between each packet sent.
Since no answers are expected to be received for the sent ARP packets, specifying these arguments with the desired values enables sending the packets in a finite loop, in contrast to the previous alternative, which forces an endless one.
On the downside, this is probably a bit less efficient, since resources are allocated towards packet receipt and handling, but that overhead is negligible.
Therefore, executing the following statement would send the packets in a finite loop of 1000 iterations:
sr([ARP(op=ARP.is_at, psrc=router_ip, hwdst=victim_mac, pdst=victim_ip),
    ARP(op=ARP.is_at, psrc=victim_ip, hwdst=router_mac, pdst=router_ip)],
   retry=999, inter=1, timeout=1)

Related

Measuring packets per second for connected clients' IP addresses

How can we determine the packet rate of the clients connected to a multi-client server using Winsock? The idea I came up with is keeping a frequency map of the IP addresses of all clients and storing a packet count over some arbitrary window of k seconds. After k seconds we traverse the map, see which IP addresses have sent more than 100*k packets, and block those IP addresses. Every k seconds we empty the map and start again.
PSEUDO CODE: (k = 10)
std::map<std::string, int> counts;          // packets counted per client IP in the current window

void calculate() {                          // intended to run on its own thread
    while (true) {
        Sleep(10000);                       // wait k = 10 seconds
        for (auto &ip : counts) {
            if (ip.second > 10000)          // block IPs that exceed the per-window threshold
                blacklist(ip.first);
        }
        counts.clear();
    }
}

int s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
// bind(), listen(), accept() ...
std::thread(calculate).detach();            // run the counter loop in the background
while (true) {
    if (recv(s, buff, len, 0) > 0) counts[client.ip]++;
}
Per comments:
If someone is sending too fast, I'd like to block him permanently rather than receiving his messages less frequently. Something like this is what I'm trying to achieve
If this were UDP, I'd be 100% on board with what you are trying to do and would share the code I have. But this is TCP, and your assumptions are flawed.
Let's say the sender invokes this:
send(sock, buffer, 1000, 0);
And then on the other side, you invoke this:
recv(sock, buffer, 1000, 0)
Did you know that recv may do any of the following:
It may return any value less than or equal to 1000. It could return 1 and expect you to invoke it another 999 times to consume the entire message. One of the biggest confusions with TCP sockets is assuming that each send call mirrors a recv call in a 1:1 fashion. Lots of buggy apps have shipped that way.
More probably, you'll get 1 or 2 recv calls because of IP fragmentation and/or TCP segmentation. How fast you invoke recv also plays a role, but none of this is guaranteed or expected to be consistent. What you observe with local testing on your own LAN will not resemble actual internet behavior.
How many recv calls you get has nothing to do with how many IP packets or TCP segments were actually sent, because "the packets" get coalesced anyway by the TCP stack on the receiving side.
Similarly, how many bytes you pass to send doesn't determine the packet count. TCP, together with any number of routers and gateways in between, may split this 1000-byte stream into additional fragments and segments.
I'm going to offer two suggestions:
Detect flood attacks by counting application protocol messages and/or the size of those messages, not individual recv calls. That is, as you recv data, accumulate the stream into logical protocol messages (based on a fixed byte count or a delimiter-based message structure) and pass each message up to a higher layer of your application for processing. Do the incremental count there.
Instead of trying to thwart flood attacks at the message level, it's probably simpler to just throttle clients to a fixed rate of data. That is, each time you recv data, count how many bytes it returns and use that with a timer to measure an incoming bytes/second rate. If the remote side exceeds your limit, insert sleep statements between recv calls. This will implicitly make TCP slow the other side down so it can't send too fast.
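A minimal Python sketch of that second suggestion, assuming one connection is handled per call; the question uses Winsock/C++, but the logic carries over, and the MAX_BPS value is purely illustrative:

import time

MAX_BPS = 64 * 1024                    # illustrative limit: 64 KiB/s per client

def throttled_recv_loop(conn):
    """Receive from one client, sleeping whenever it exceeds MAX_BPS."""
    start = time.time()
    received = 0
    while True:
        data = conn.recv(4096)
        if not data:                   # peer closed the connection
            break
        received += len(data)
        elapsed = time.time() - start
        # If the observed average rate is above the limit, sleep long enough
        # for it to fall back to MAX_BPS; TCP flow control then pushes back
        # on the sender automatically.
        if elapsed > 0 and received / float(elapsed) > MAX_BPS:
            time.sleep(received / float(MAX_BPS) - elapsed)
        # ... hand `data` to the application here ...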

Reliable UDP using TCP retransmit [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
I am the client receiver of UDP multicast data sent by a sender server (stock exchange data). I am continuously receiving a UDP multicast packet flow, sequentially numbered from 1 to approximately 35,000,000, sent uniformly over a period of 6 hours. I need to ensure all packets up to some N are received before that set of N packets is periodically processed, roughly every ~256 packets. In other words, I need reliable UDP.
Reliable UDP is mimicked using TCP retransmit. If any udp packet(s) is lost/not received, it is requested by using tcp protocol by specifying the desired missing packet range (starting number, ending number).
The sender keeps a record of all the packets (stock exchange data) it has sent via UDP multicast so far, so it will resend over TCP only those packet numbers that the receiver specifically requests. This is how UDP reliability is achieved by the receiver. The UDP drop ratio is very small (less than 0.001%), except when starting the UDP multicast in the middle of the day, in which case all previously sent UDP packets from 1 to some N need to be resent over TCP while live UDP multicast packets from N+1 onward are still being received. I can't ask the sender (the stock exchange) to change its protocol; it is fixed.
What is the efficient algorithm to implement this in terms of CPU?
The issue is speed (big-O). I can write a naive algorithm using several nested loops and methods, but it is not necessarily the best.
I am thinking of maintaining a number N which confirms I have received UDP packets 1 through N. Any packet number M that is not the next expected packet number N+1 will be buffered, for say 256 packets, and then TCP will be used to request the missing numbers. Normal UDP reception then resumes from the last confirmed received number once the TCP request has been filled.
Example:
Suppose UDP packets received by receiver are in the following sequence {1,2,3,6,7,8,9,10 ...}
After packet No. 3, the next packet is No. 6. Packets 4 through 5 are missing.
So the missing packets {4,5} are requested using TCP request({4 through 5}), and {6,7,8,9,10} are buffered. There is enough space on the 10GBaseT LAN card for buffering 35,000,000 packets.
So: receive UDP {1,2,3}, refill by TCP request {4,5}, continue receive UDP {6,7,8,9,10, ...}
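A rough Python sketch of that bookkeeping, assuming each datagram carries its sequence number; deliver and request_retransmit are placeholders for the processing step and the TCP request described above:

expected = 1          # next sequence number still needed
buffered = {}         # out-of-order packets keyed by sequence number

def on_udp_packet(seq, payload):
    global expected
    if seq < expected:
        return                             # duplicate, already delivered
    buffered[seq] = payload
    # Deliver any contiguous run starting at `expected`.
    while expected in buffered:
        deliver(buffered.pop(expected))    # placeholder for processing
        expected += 1
    # If a gap has opened and enough packets have piled up behind it,
    # ask the sender to refill the hole over TCP.
    if len(buffered) >= 256:
        request_retransmit(expected, min(buffered) - 1)   # placeholder TCP call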
I assume since you are using multicast that there are going to be multiple receivers of this data? (Because if not, you'd probably be using unicast instead)
Therefore, if the receivers are going to have the option of requesting TCP retransmission of packets they didn't get, that means that the transmitting program will need to keep a copy of recently-sent UDP packets in memory, so that when it receives a retransmit-request, it will have the requested data available to retransmit. Assuming you're stamping each packet with a unique ID, it can store this data in a std::map or std::unordered_map or similar for quick lookup.
The real question becomes: how much of this old-packet data should the transmitter retain? Ideally it would retain all of it, because you never know how much a given receiver might have missed and might want to request; but that would require infinite memory, so it's not a realistic option. Probably the best you can do is decide how much RAM you're willing to tie up for this purpose, keep a count of the total number of bytes you have in your table, and when it reaches the limit, start dropping the oldest packets from the table to keep its size under the limit.
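A small Python sketch of that bounded retransmit buffer (an OrderedDict stands in for the std::map mentioned above; MAX_BYTES is an illustrative cap):

from collections import OrderedDict

MAX_BYTES = 512 * 1024 * 1024        # illustrative RAM budget for old packets

sent_packets = OrderedDict()         # sequence number -> payload, oldest first
buffered_bytes = 0

def remember(seq, payload):
    """Store a sent packet so it can be retransmitted on request."""
    global buffered_bytes
    sent_packets[seq] = payload
    buffered_bytes += len(payload)
    # Evict the oldest packets once the cap is exceeded.
    while buffered_bytes > MAX_BYTES:
        _, old = sent_packets.popitem(last=False)
        buffered_bytes -= len(old)

def lookup(seq):
    """Return the payload for a retransmit request, or None if already evicted."""
    return sent_packets.get(seq)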
I wrote an open-source library that uses essentially the technique you describe (multicast UDP + TCP-retransmit-to-recover-from-packet-loss) to synchronize databases across multiple hosts as quickly as possible; some things I learned while implementing it include:
If/when you can, pack your data-messages together into larger packets, up to the MTU of the network you are transmitting over (e.g. 1388 bytes for IPv4/Ethernet). Very small packet-sizes (like 48-bytes/packet) are inefficient, since the fixed-sized packet-headers make up a greater percentage of the total data sent/received.
Only try to send when your sending-socket indicates it is ready-for-write. (i.e. don't assume that you will never fill up the socket's outgoing-data-buffer; if your traffic is "bursty", you probably will at some point)
Minimize UDP packet loss by making your UDP sockets' send and receive buffers as large as you can get away with (a short sketch follows at the end of this answer).
Further minimize UDP packet loss by doing all the UDP receiving in a dedicated, high-priority thread (which can then route the received UDP data back to a normal-priority thread for further processing -- the main thing is to avoid allowing the receiving UDP-socket's incoming-data-buffer to overflow if possible)
For the TCP retransmission part, keep in mind that TCP streams can potentially slow down to nearly zero bytes-per-second in the worst case scenario, which makes it important to ensure that poor TCP performance to client A doesn't block the TCP communications to/from clients B, C, D, etc. This can be accomplished either via non-blocking I/O and select() (or poll() or similar), or asynchronous networking, or via multiple threads; avoid blocking I/O unless you are implementing a thread-per-socket model (and probably avoid that model as well, since a thread that is indefinitely-blocked-inside-recv() is difficult to shut down cleanly)
Think about under what circumstances (if any) it is acceptable for a client to never receive a particular packet at all; are there situations where that is okay? Or must the entire system grind to a halt until every receiver has received every packet in the group, regardless of how long that might take?
If you want to get really fancy, you can look into Forward Error Correction algorithms that encode data across packets, such that the receiver can still decode all of the data even if it never receives (up to a certain percentage of) the packets. This makes the need for a re-transmit request less likely, at the cost of making all of the packets slightly larger.
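Circling back to the buffer-sizing point above, here is a minimal Python sketch; the 8 MB value is arbitrary, and the size the kernel actually grants is capped by OS limits (e.g. net.core.rmem_max on Linux):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask for a large receive buffer before joining the multicast group.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 8 * 1024 * 1024)
# Check what the OS actually granted (Linux reports double the requested value).
print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))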

How does the python socket.recv() method know that the end of the message has been reached?

Let's say I'm using 1024 as buffer size for my client socket:
recv(1024)
Let's assume the message the server wants to send to me consists of 2024 bytes.
Only 1024 bytes can be received by my socket. What's happening to the other 1000 bytes?
Will the recv-method wait for a certain amount of time (say 2 seconds) for more data to come and stop working after this time span? (I.e., if the rest of the data arrives after 3 seconds, the data will not be received by the socket any more?)
or
Will the recv-method stop working immediately after having received 1024 bytes of data? (I.e. will the other 1000 bytes be discarded?)
In case 1.) is correct: is there a way for me to determine the amount of time the recv method should wait before returning, or is it determined by the system? (I.e., could I tell the socket to wait for 5 seconds before it stops waiting for more data?)
UPDATE:
Assume, I have the following code:
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((sys.argv[1], port))
s.send('Hello, world')
data = s.recv(1024)
print("received: {}".format(data))
s.close()
Assume that the server sends data of size > 1024 bytes. Can I be sure that the variable "data" will contain all the data (including those beyond the 1024th byte)?
If I can't be sure about that, how would I have to change the code so that I can always be sure that the variable "data" will contain all the data sent (in one or many steps) from the server?
It depends on the protocol. Some protocols like UDP send messages and exactly 1 message is returned per recv. Assuming you are talking about TCP specifically, there are several factors involved. TCP is stream oriented and because of things like the amount of currently outstanding send/recv data, lost/reordered packets on the wire, delayed acknowledgement of data, and the Nagle algorithm (which delays some small sends by a few hundred milliseconds), its behavior can change subtly as a conversation between client and server progresses.
All the receiver knows is that it is getting a stream of bytes. It could get anything from 1 to the fully requested buffer size on any recv. There is no one-to-one correlation between the send call on one side and the recv call on the other.
If you need to figure out message boundaries, it's up to the higher-level protocol to define them. Take HTTP for example: it starts with a \r\n-delimited header which includes a count of the remaining bytes the client should expect to receive. The client knows how to read the header because of the \r\n delimiters, and then knows exactly how many bytes are coming next. Part of the charm of RESTful protocols is that they are HTTP based and somebody else already figured this stuff out!
Some protocols use NUL to delimit messages. Others may have a fixed-length binary header that includes a count of the variable data to come. I like ZeroMQ, which builds a robust messaging system on top of TCP.
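As one minimal sketch of the fixed-length-header approach in Python, each message is prefixed with a 4-byte length and the receiver loops on recv until it has the whole thing (the helper names are just illustrative):

import struct

def send_msg(sock, payload):
    """Prefix each message with a 4-byte big-endian length."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_exactly(sock, n):
    """Keep calling recv until exactly n bytes have been collected."""
    chunks = []
    while n > 0:
        chunk = sock.recv(min(n, 4096))
        if not chunk:                     # peer closed mid-message
            raise EOFError("socket closed before the message was complete")
        chunks.append(chunk)
        n -= len(chunk)
    return b"".join(chunks)

def recv_msg(sock):
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)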
More details on what happens with receive...
When you do recv(1024), there are 6 possibilities
There is no receive data. recv will wait until there is receive data. You can change that by setting a timeout.
There is partial receive data. You'll get that part right away. The rest is either buffered or hasn't been sent yet and you just do another recv to get more (and the same rules apply).
There is more than 1024 bytes available. You'll get 1024 of that data and the rest is buffered in the kernel waiting for another receive.
The other side has shut down the socket. You'll get 0 bytes of data. 0 means you will never get more data on that socket. But if you keep asking for data, you'll keep getting 0 bytes.
The other side has reset the socket. You'll get an exception.
Some other strange thing has gone on and you'll get an exception for that.
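Putting those cases together, a minimal sketch of a receive loop for the s socket from the UPDATE above might look like this (the 5-second timeout is an arbitrary choice):

import socket

s.settimeout(5.0)                  # don't wait forever when no data arrives
received = []
while True:
    try:
        chunk = s.recv(1024)
    except socket.timeout:         # nothing arrived within 5 seconds
        break
    except socket.error:           # connection reset or some other failure
        raise
    if not chunk:                  # 0 bytes: the peer shut down the socket
        break
    received.append(chunk)         # partial data: possibly fewer than 1024 bytes
data = b"".join(received)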

Limiting TCP sending rate

TCP flows, by their nature, will grow until they fill the maximum capacity of the links used from src to dst (if all those links are empty).
Is there an easy way to limit that? I want to be able to send TCP flows with a maximum rate of X Mbps.
I thought about just sending X bytes per second using the socket.send() function and then sleeping for the rest of the time. However, if the link gets congested and the rate gets reduced, then once the link becomes uncongested again the connection will try to recover what it could not send previously, and the rate will increase.
At the TCP level, the only control you have is how many bytes you pass off to send(), and how often you call it. Once send() has handed over some bytes to the networking stack, it's entirely up to the networking stack how fast (or slow) it wants to send them.
Given the above, you can roughly limit your transmission rate by monitoring how many bytes you have sent and how much time has elapsed since you started sending, and holding off subsequent calls to send() (and/or reducing the number of data bytes you pass to send()) to keep the average rate from going higher than your target rate.
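A minimal Python sketch of that bookkeeping; TARGET_BPS and the chunk size are illustrative values:

import time

TARGET_BPS = 125000                # illustrative cap: ~1 Mbit/s in bytes per second

def rate_limited_send(sock, data, chunk_size=4096):
    """Hand data to the socket in chunks, pausing so the average
    rate stays at or below TARGET_BPS."""
    start = time.time()
    sent = 0
    view = memoryview(data)
    while sent < len(data):
        chunk = view[sent:sent + chunk_size]
        sock.sendall(chunk)
        sent += len(chunk)
        # If we are ahead of schedule, sleep until the average rate
        # falls back to the target.
        ahead = sent / float(TARGET_BPS) - (time.time() - start)
        if ahead > 0:
            time.sleep(ahead)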
If you want any finer control than that, you'll need to use UDP instead of TCP. With UDP you have direct control of exactly when each packet gets sent. (Whereas with TCP it's the networking stack that decides when to send each packet, what will be in the packet, when to resend a dropped packet, etc)

Scapy waiting for multiple packets

In order to perform an HTTP GET, I need to send a packet (the GET / HTTP/1.0\n\n) and then wait for 3 packets:
The ACK of my GET
The GET answer: HTTP/1.0 200 OK
and the FIN ACK of the transmission
I found 2 ways:
=> use sr() with multi option
=> use sniff just after sending my GET request
For the sr() function, the problem is stopping the sniffing: the only option is to set a timeout, but my script will test many different sites with very different response times, so it is hard to choose a static timeout value that I am sure no site will ever exceed.
For sniff, that problem does not arise because I can set the "count" argument to capture only the 3 packets. But it's hard to write a filter precise enough to be sure the 3 recorded packets are the 3 I want (and not ARP, DNS or anything else).
But the main problem is that sometimes the first answer packet arrives before sniff is launched (between send(GET_PACKET) and answers=sniff(...)). In that case, I lose some information and all my post-processing is corrupted.
The perfect way would be to use the sr() function with a "count=3" option so it captures only 3 packets, but that option doesn't exist for sr().
Anyone have an idea?
Thanks a lot
Sorry for my language, I'm French
Use sniff and set the filter to TCP port 80,
and for the delay problem you can use a thread: first start your sniffer in the thread, then send the packets:
import threading
from scapy.all import sniff, wrpcap

def sniffer():
    packets = sniff(filter="tcp port 80", count=5)
    wrpcap("test.cap", packets)   # save the captured packets in a .cap file

t = threading.Thread(target=sniffer)
t.start()
But you can use a better way, explained HERE: send your packets manually.
This is more of a hint than an answer, but the problem might be that you want to inspect transport-layer packets for an application-layer request. You could take your HTTP GET down to the transport layer by sending SYN, waiting for the answer, and then sending ACK and the GET. Here is a link describing what you might want.
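As a rough Scapy sketch of that idea (the destination, port and initial sequence number are placeholders; note that the kernel tends to reset connections it did not open itself, so this usually needs an iptables rule or similar to suppress those RSTs):

import random
from scapy.all import IP, TCP, sr1, send

ip = IP(dst="example.com")                 # placeholder target
sport = random.randint(1025, 65535)

# Handshake: SYN, wait for SYN/ACK, then ACK.
syn = TCP(sport=sport, dport=80, flags="S", seq=1000)
synack = sr1(ip / syn, timeout=5)
ack = TCP(sport=sport, dport=80, flags="A",
          seq=synack.ack, ack=synack.seq + 1)
send(ip / ack)

# Send the GET and wait for the first response packet
# (which may be a bare ACK arriving before the data).
get = TCP(sport=sport, dport=80, flags="PA",
          seq=synack.ack, ack=synack.seq + 1)
reply = sr1(ip / get / "GET / HTTP/1.0\r\n\r\n", timeout=5)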
