I want to test a data center routing algorithm using Mininet. The traffic needs to conform to certain parameters:
It should consist of "files" of various sizes (note that these don't actually have to be files; traffic generated in, e.g., iperf is OK, as long as the size is controllable);
The file sizes should be drawn from a particular distribution;
The source/destination host pairs between which data is sent should be selected randomly for a given file;
The interval between when a file is sent and its successor is sent should be random; and
If a huge file gets sent between two hosts that takes a long time to transfer, it should still be possible to send data between other hosts in the network.
Points 1-4 are taken care of. I've been struggling with #5 for days and I can't get it working properly. My initial thought was to spawn subprocesses/threads to send iperf commands to the hosts:
count = 0
while count < 10:
    if (count % 2) == 0:
        host_pair = net.get("h1", "h2")
    else:
        host_pair = net.get("h3", "h4")
    p = multiprocessing.Process(target=test_custom_iperf, args=(net, host_pair, nbytes))
    p.daemon = True
    p.start()
    time.sleep(random.uniform(0, 1))
    count += 1
The function test_custom_iperf is modeled on the Mininet Python API's iperf(), extended to include the -n transfer-size parameter:
def test_custom_iperf(net, host_pair, nbytes):
    client, server = host_pair
    print client, server
    output('*** Iperf: testing TCP bandwidth between',
           client, 'and', server, '\n')
    server.sendCmd('iperf -s')
    if not waitListening(client, server.IP(), 5001):
        raise Exception('Could not connect to iperf on port 5001')
    cliout = client.cmd('iperf -c ' + server.IP() + ' -n %d' % nbytes)
    print cliout
    server.sendInt()
    servout = server.waitOutput()
    debug('Server output: %s\n' % servout)
    result = [net._parseIperf(servout), net._parseIperf(cliout)]
    output('*** Results: %s\n' % result)
Making this non-blocking has been extremely difficult. The server has to be stopped with server.sendInt(), and to do that I have to wait for the client's command to finish, which blocks the loop.
I'd appreciate any advice on what I can try to make this work!
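One direction I was considering (only an untested sketch; the helper name below is made up) was to launch both iperf ends with host.popen() so that neither call blocks the scheduling loop; waitListening() is the same helper used above.
# Untested sketch: run both iperf ends in the background via host.popen().
def start_iperf_nonblocking(net, host_pair, nbytes):
    client, server = host_pair
    # server goes into the background; keep the handle so it can be killed later
    srv_proc = server.popen('iperf -s -p 5001')
    if not waitListening(client, server.IP(), 5001):
        srv_proc.kill()
        raise Exception('Could not connect to iperf on port 5001')
    # client also runs in the background and exits on its own after -n bytes
    cli_proc = client.popen('iperf -c %s -p 5001 -n %d' % (server.IP(), nbytes))
    return srv_proc, cli_proc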
I took a hint from here and used Mininet's host.popen() method to send the data around. Hopefully this helps someone else:
def send_one_file(file_dir, host_pair, files):
    src, dst = host_pair  # a tuple of Mininet node objects
    # choose a random file from files
    rand_fname = random.sample(files, 1)[0]
    rand_path = os.path.join(file_dir, rand_fname)
    port = random.randint(1024, 65535)
    # Start listening at the destination host
    dst_cmd = 'nc -l %d > /home/mininet/sent/%s.out' % (port, rand_fname)
    print os.path.getsize(rand_path)
    dst.popen(dst_cmd, shell=True)
    # Send file from the source host
    src_cmd = 'nc %s %s < %s' % (dst.IP(), port, rand_path)
    src.popen(src_cmd, shell=True)
Then the parent function calls send_one_file() at random intervals:
def test_netcat_subprocess_async(net, duration):
    file_dir = "/home/mininet/sf_mininet_vm/data/MVI_0406_split"
    files = os.listdir(file_dir)
    start_time = time.time()
    end_time = start_time + duration
    # Transfer for the desired duration
    while time.time() < end_time:
        # Choose a pair of hosts
        host_pair = random.sample(net.hosts, 2)
        send_one_file(file_dir, host_pair, files)
        interval = random.uniform(0.01, 0.1)
        print "Initialized transfer; waiting %f seconds..." % interval
        time.sleep(interval)
This works without any of the problems I experienced with multiprocessing or threading (breaking the network after the session is over, blocking when it shouldn't, etc.).
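If I revisit this, one refinement would be to keep the Popen handles that dst.popen()/src.popen() return, so anything still running when the duration expires can be reaped instead of being left behind. A minimal sketch (the tracked helper below is hypothetical, not what I actually ran):
# Sketch: same transfer logic as send_one_file(), but the nc processes are
# tracked so they can be terminated and waited on once the test is over.
procs = []

def send_one_file_tracked(file_dir, host_pair, files):
    src, dst = host_pair
    rand_fname = random.sample(files, 1)[0]
    rand_path = os.path.join(file_dir, rand_fname)
    port = random.randint(1024, 65535)
    procs.append(dst.popen('nc -l %d > /home/mininet/sent/%s.out'
                           % (port, rand_fname), shell=True))
    procs.append(src.popen('nc %s %d < %s' % (dst.IP(), port, rand_path),
                           shell=True))

def cleanup_transfers():
    # terminate anything still running, then collect exit statuses
    for p in procs:
        if p.poll() is None:
            p.terminate()
    for p in procs:
        p.wait()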
I have a question related to Tor middle nodes. What are the possible ways to get all the nodes of a circuit? The method I follow is below:
def renew_tor_circuit():
    # Renew the Tor circuit for every request; default SOCKS port is 9050, control port is 9051
    with Controller.from_port(port=9051) as controller_handler:
        controller_handler.authenticate('welcome')
        controller_handler.signal(Signal.NEWNYM)
        time.sleep(10)
        for circuit in controller_handler.get_circuits():
            # print(controller_handler.get_circuit(circuit.id))
            entry_node_fingerprint = circuit.path[0][0]
            #middle_node_fingerprint = circuit.path[1][0]
            exit_node_fingerprint = circuit.path[2][0]
            entry_node_descriptor = controller_handler.get_network_status(entry_node_fingerprint)
            #middle_node_descriptor = controller_handler.get_network_status(middle_node_fingerprint)
            exit_node_descriptor = controller_handler.get_network_status(exit_node_fingerprint)
            print("entryip|%s" % (entry_node_descriptor.address))
            #print("middleip|%s" % (middle_node_descriptor.address))
            print("exitip|%s" % (exit_node_descriptor.address))
            print('---------------------')
PROBLEM: The problem with this code is that it stops working most of the time; it just starts printing --------- and, when it does run, the IP does not change.
What I want: I want to check the latency between the client and the middle node, along with the RTT, where the RTT is measured by sending a relay connect cell to a dummy destination, e.g., localhost.
For latency, my idea is:
t1 = time.time()
middle_node_descriptor = controller_handler.get_network_status(middle_node_fingerprint)
t2 = time.time()
t3 = t2 - t1
NOTE: I didn't try this because the code above is not working.
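For the latency part, note that timing get_network_status() only measures a local lookup in tor's cached consensus over the control port, not anything about the path to the middle relay. Below is a rough sketch (my own assumptions, using the stem library with the same control port and password as above) that pulls the middle hop from each built circuit and pings its address directly; this gives a plain ICMP RTT to the relay, not the relay-cell round trip described above, and many relays drop ICMP.
# Sketch: list middle relays of BUILT circuits and estimate a rough RTT to
# each with the system ping. Assumes stem, ControlPort 9051, password 'welcome'.
import re
import subprocess
from stem import CircStatus
from stem.control import Controller

def middle_node_rtts():
    with Controller.from_port(port=9051) as controller:
        controller.authenticate('welcome')
        for circuit in controller.get_circuits():
            if circuit.status != CircStatus.BUILT or len(circuit.path) < 3:
                continue  # skip circuits that are not fully built
            middle_fp, middle_nick = circuit.path[1]
            middle = controller.get_network_status(middle_fp)
            try:
                out = subprocess.check_output(['ping', '-c', '3', middle.address])
                match = re.search(r'= [\d.]+/([\d.]+)/', out.decode())
                avg_ms = match.group(1) if match else 'n/a'
            except subprocess.CalledProcessError:
                avg_ms = 'no reply'
            print('%s (%s): avg RTT %s ms' % (middle_nick, middle.address, avg_ms))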
Only "destination unreachable" replies are received while using given snippet(handler is pyicmp.handler.Handler from https://github.com/volftomas/pyicmp):
def IPScanner(handler, ip, file, lock):
    for ttl in (32,):
        # putting 32 in a tuple was necessary here,
        # otherwise do_ping() acts as a noop
        ctr = 0
        for j in range(0, 4):
            t = PrintThread(str("Dest:" + ip + " TTL" + str(ttl)), lock)
            t.start()
            ping_result = handler.do_ping(ip, ttl)
            if ping_result['packet_loss'] == 0:
                ping_result['packet_loss'] = j
                break
            else:
                ping_result['packet_loss'] = 0
        d = DumpFileThread(file, ping_result, lock)
        d.start()
Some hosts send destination unreachable and others block echo requests. I cannot share the dump file because the one I have contains IP mappings of a corporate institution. I can ping the IPs passed to this function with Windows's ping, so why do I not receive echo replies through the pyicmp library? ICMP echo is not blocked on my host.
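One cross-check that can narrow this down (not pyicmp, just the operating system's ping driven from the script): if the OS ping succeeds for the same address while handler.do_ping() only sees unreachables, the problem is in how the library builds or matches the probe rather than in the path itself. A small sketch, using Windows's ping -n (use -c on Linux):
# Sketch: compare the OS ping with the library's probe for the same address.
import subprocess

def os_ping_ok(ip, count=4):
    try:
        subprocess.check_output(['ping', '-n', str(count), ip])
        return True   # at least one echo reply came back
    except subprocess.CalledProcessError:
        return False  # ping reported failure / total loss

# Example: print(os_ping_ok('10.0.0.1'), handler.do_ping('10.0.0.1', 32))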
I have a Python program that needs to be executed like this:
$ sudo python my_awesome_program.py
Now I want to run thousands of instances of this program using Celery, of course with different parameters. The problem is that when Celery tries to execute my program it fails, and the reason is that it doesn't have sudo access.
How can I give my Celery workers the power of sudo to run this program?
I already tried giving sudo access to my user, changing the Celery service owner, etc.
Maybe it's a stupid question, but I am lost.
P.S
task.py
from celery import Celery
import os
import socket
import struct
import select
import time
import logging
# Global variables
broker = "redis://%s:%s" % ("127.0.0.1", '6379')
app = Celery('process_ips', broker=broker)
logging.basicConfig(filename="/var/log/celery_ping.log", level=logging.INFO)
# From /usr/include/linux/icmp.h; your mileage may vary.
ICMP_ECHO_REQUEST = 8 # Seems to be the same on Solaris.
def checksum(source_string):
    """
    I'm not too confident that this is right but testing seems
    to suggest that it gives the same answers as in_cksum in ping.c
    """
    sum = 0
    countTo = (len(source_string) / 2) * 2
    count = 0
    while count < countTo:
        thisVal = ord(source_string[count + 1]) * \
            256 + ord(source_string[count])
        sum = sum + thisVal
        sum = sum & 0xffffffff  # Necessary?
        count = count + 2
    if countTo < len(source_string):
        sum = sum + ord(source_string[len(source_string) - 1])
        sum = sum & 0xffffffff  # Necessary?
    sum = (sum >> 16) + (sum & 0xffff)
    sum = sum + (sum >> 16)
    answer = ~sum
    answer = answer & 0xffff
    # Swap bytes. Bugger me if I know why.
    answer = answer >> 8 | (answer << 8 & 0xff00)
    return answer
def receive_one_ping(my_socket, ID, timeout):
    """
    receive the ping from the socket.
    """
    timeLeft = timeout
    while True:
        startedSelect = time.time()
        whatReady = select.select([my_socket], [], [], timeLeft)
        howLongInSelect = (time.time() - startedSelect)
        if whatReady[0] == []:  # Timeout
            return
        timeReceived = time.time()
        recPacket, addr = my_socket.recvfrom(1024)
        icmpHeader = recPacket[20:28]
        type, code, checksum, packetID, sequence = struct.unpack(
            "bbHHh", icmpHeader
        )
        if packetID == ID:
            bytesInDouble = struct.calcsize("d")
            timeSent = struct.unpack("d", recPacket[28:28 + bytesInDouble])[0]
            return timeReceived - timeSent
        timeLeft = timeLeft - howLongInSelect
        if timeLeft <= 0:
            return
def send_one_ping(my_socket, dest_addr, ID):
    """
    Send one ping to the given >dest_addr<.
    """
    dest_addr = socket.gethostbyname(dest_addr)
    # Header is type (8), code (8), checksum (16), id (16), sequence (16)
    my_checksum = 0
    # Make a dummy header with a 0 checksum.
    header = struct.pack("bbHHh", ICMP_ECHO_REQUEST, 0, my_checksum, ID, 1)
    bytesInDouble = struct.calcsize("d")
    data = (192 - bytesInDouble) * "Q"
    data = struct.pack("d", time.time()) + data
    # Calculate the checksum on the data and the dummy header.
    my_checksum = checksum(header + data)
    # Now that we have the right checksum, we put that in. It's just easier
    # to make up a new header than to stuff it into the dummy.
    header = struct.pack(
        "bbHHh", ICMP_ECHO_REQUEST, 0, socket.htons(my_checksum), ID, 1
    )
    packet = header + data
    my_socket.sendto(packet, (dest_addr, 1))  # Don't know about the 1
def do_one(dest_addr, timeout):
    """
    Returns either the delay (in seconds) or none on timeout.
    """
    logging.info('Called do_one Line 105')
    icmp = socket.getprotobyname("icmp")
    try:
        my_socket = socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp)
    except socket.error as xxx_todo_changeme:
        (errno, msg) = xxx_todo_changeme.args
        if errno == 1:
            # Operation not permitted
            msg = msg + (
                " - Note that ICMP messages can only be sent from processes"
                " running as root."
            )
            raise socket.error(msg)
        raise  # raise the original error
    my_ID = os.getpid() & 0xFFFF
    send_one_ping(my_socket, dest_addr, my_ID)
    delay = receive_one_ping(my_socket, my_ID, timeout)
    my_socket.close()
    return delay
def verbose_ping(dest_addr, timeout=1, count=2):
    """
    Send >count< ping to >dest_addr< with the given >timeout< and display
    the result.
    """
    logging.info('Messing with : %s' % dest_addr)
    try:
        for i in xrange(count):
            logging.info('line 136')
            try:
                delay = do_one(dest_addr, timeout)
                logging.info('line 139' + str(delay))
            except socket.gaierror as e:
                break
            logging.info('line 142' + str(delay))
            if delay is None:
                pass
            else:
                delay = delay * 1000
                logging.info('This HO is UP : %s' % dest_addr)
                return dest_addr
    except:
        logging.info('Error in : %s' % dest_addr)

@app.task()
def process_ips(items):
    logging.info('This is Items:----: %s' % items)
    up_one = verbose_ping(items)
    if up_one is not None:
        logging.info('This one is UP: %s' % up_one)
my_awesome_program.py
from task import process_ips

if __name__ == '__main__':
    for i in range(0, 256):
        for j in range(1, 256):
            ip = "192.168.%s.%s" % (str(i), str(j))
            jobs = process_ips.delay(ip)
/etc/defaults/celeryd
# Names of nodes to start
# most will only start one node:
CELERYD_NODES="worker1"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS (see `celery multi --help` for examples).
#CELERYD_NODES="worker1 worker2 worker3"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/home/jarvis/Development/venv/bin/celery"
#CELERY_BIN="/virtualenvs/def/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="task"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/jarvis/Development/pythonScrap/nmap_sub"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8 --loglevel=DEBUG"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="jarvis"
CELERYD_GROUP="jarvis"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
You need to set your CELERYD_USER param in settings.
Have a look here: http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html
If you're using supervisor, then in your supervisor conf of celery, you'd need to do this:
user=<user-you-want>
Also, under the Example Configuration section, the docs explicitly say not to run your workers as privileged users.
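Separately, if the only reason the program needs sudo is the raw ICMP socket (as in task.py above), it may be possible to avoid root altogether. This is a suggestion of mine rather than anything from the Celery docs: Linux supports unprivileged ICMP datagram sockets, provided the worker's group falls inside the net.ipv4.ping_group_range sysctl (e.g. sysctl -w net.ipv4.ping_group_range="0 65535"). A sketch:
# Sketch (assumption, not from the Celery docs): try an unprivileged SOCK_DGRAM
# "ping" socket first, so the worker can keep running as CELERYD_USER="jarvis";
# fall back to the SOCK_RAW socket used in do_one(), which still needs root.
import socket

def make_icmp_socket():
    icmp = socket.getprotobyname("icmp")
    try:
        return socket.socket(socket.AF_INET, socket.SOCK_DGRAM, icmp)
    except socket.error:
        return socket.socket(socket.AF_INET, socket.SOCK_RAW, icmp)
Two caveats if this route is taken: replies typically arrive without the 20-byte IP header (so the recPacket[20:28] slice becomes recPacket[0:8]), and the kernel may rewrite the echo identifier, so the packetID == ID check may need relaxing.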
I have a problem.
I've written this piece of code in Python (it runs inside a Tornado web server):
if command == 'RESTARTNWK':
    op_group = "A3"
    op_code = "E0"
    netid = hextransform(int(nid), 16)
    packet_meta = "*%s;%s;%s;#"
    pkt_len = hextransform(0, 2)
    packet = packet_meta % (op_group, op_code, pkt_len)
    packet = packet.upper()
    op_group_hex = 0xA3
    op_code_hex = 0xE0
    cmdjson = packet2json(op_group_hex, op_code_hex, packet)
    mynet_type = "ztc"
    print("\t\t " + packet + "\n")
    # TODO: - write command into db
    ts = datetime.datetime.now().isoformat()
    mynet_type = "ztc"
    self.lock_tables("write", ['confcommands'])
    self.db.execute("INSERT INTO confcommands (network_id, ntype, timestamp, command) "
                    "VALUES (%s,%s,%s,%s)", nid, mynet_type, ts, cmdjson)
    self.unlock_tables()
    # TODO: - open the /tmp/iztc file in append mode
    cmdfile = open('/tmp/iztc', 'a')
    # - acquire a lock "only for the DB case, it's easier"
    # - write the packet
    cmdfile.write(netid + "\t" + mynet_type + "\t" + ts + "\t" + cmdjson + "\n")
    # - release the lock "only for the DB case, it's easier"
    # - close the file
    cmdfile.close()

if command == 'RESTARTNWK':
    opcodegroupr = "A4"
    opcoder = "E0"
    # Code for retrieving the MAC address of the node
    como_url = "".join(['http://', options.como_address, ':', options.como_port,
                        '/', ztc_config, '?netid=', netid,
                        '&opcode_group=', opcodegroupr,
                        '&opcode=', opcoder, '&start=-5m&end=-1s'])
    http_client = AsyncHTTPClient()
    response = yield tornado.gen.Task(http_client.fetch, como_url)
    ret = {}
    if response.error:
        ret['error'] = 'Error while retrieving unregistered sensors'
    else:
        for line in response.body.split("\n"):
            if line != "":
                value = int(line.split(" ")[6])
                ret['response'] = value
    self.write(tornado.escape.json_encode(ret))
    self.finish()
In this code I receive the restart-network command from the user. After some setup, I write the corresponding command into a DB table named confcommands. The server will read this command and send the restart signal to the specified network.
After that, if all goes well, the network sends back a response. I read this response with an HTTP request to my server (como) and wait for the asynchronous reply.
Once the response has been written by the network, I have to find it in the packet: the response value is the sixth element. The packet also contains the opgroup and opcode, the network the response came from, and other information.
Then I write the response back to the user.
I don't know if this code is right... can it work? The structure seems right to me.
Thank you all for any suggestions!
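One thing worth double-checking (based on the Tornado releases that used gen.Task): a handler method containing yield tornado.gen.Task(...) only runs as a coroutine if it is decorated accordingly, and the explicit self.finish() above matches the @asynchronous style. A minimal sketch of that pattern; RestartHandler and post are placeholder names, not taken from the code above:
# Placeholder handler showing the decorator pattern old-style gen.Task relies on.
import tornado.web
import tornado.gen
from tornado.httpclient import AsyncHTTPClient

class RestartHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous   # keep the connection open across the yield
    @tornado.gen.engine         # turn the generator into a coroutine
    def post(self):
        http_client = AsyncHTTPClient()
        response = yield tornado.gen.Task(http_client.fetch, 'http://example.com/')
        self.write('status: %d' % response.code)
        self.finish()           # required when using @asynchronous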
There is a socket method for getting the IP of a given network interface:
import socket
import fcntl
import struct

def get_ip_address(ifname):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return socket.inet_ntoa(fcntl.ioctl(
        s.fileno(),
        0x8915,  # SIOCGIFADDR
        struct.pack('256s', ifname[:15])
    )[20:24])
Which returns the following:
>>> get_ip_address('lo')
'127.0.0.1'
>>> get_ip_address('eth0')
'38.113.228.130'
Is there a similar method to return the network transfer of that interface? I know I can read /proc/net/dev but I'd love a socket method.
The best way to poll ethernet interface statistics is through SNMP...
It looks like you're using Linux... if so, after installing snmpd, load it up with these options: in your /etc/defaults/snmpd, make sure the line with SNMPDOPTS looks like this:
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -I -smux,usmConf,iquery,dlmod,diskio,lmSensors,hr_network,snmpEngine,system_mib,at,interface,ifTable,ipAddressTable,ifXTable,ip,cpu,tcpTable,udpTable,ipSystemStatsTable,ip,snmp_mib,tcp,icmp,udp,proc,memory,snmpNotifyTable,inetNetToMediaTable,ipSystemStatsTable,disk -Lsd -p /var/run/snmpd.pid'
You might also need to change the ro community to public (see Note 1) and set your listening interfaces in /etc/snmp/snmpd.conf (if you aren't polling over the loopback)...
Assuming you have a functional snmpd, at this point you can poll ifHCInOctets and ifHCOutOctets (see Note 2) for the interface(s) in question using this...
poll_bytes.py:
from SNMP import v2Manager
import time

def poll_eth0(manager=None):
    # NOTE: 2nd arg to get_index should be a valid ifName value
    in_bytes = manager.get_index('ifHCInOctets', 'eth0')
    out_bytes = manager.get_index('ifHCOutOctets', 'eth0')
    return (time.time(), int(in_bytes), int(out_bytes))

# Prep an SNMP manager object...
mgr = v2Manager('localhost')
mgr.index('ifName')
stats = list()

# Insert condition below, instead of True...
while True:
    stats.append(poll_eth0(mgr))
    print poll_eth0(mgr)
    time.sleep(5)
SNMP.py:
from subprocess import Popen, PIPE
import re

class v2Manager(object):

    def __init__(self, addr='127.0.0.1', community='public'):
        self.addr = addr
        self.community = community
        self._index = dict()

    def bulkwalk(self, oid='ifName'):
        cmd = 'snmpbulkwalk -v 2c -Osq -c %s %s %s' % (self.community,
                                                       self.addr, oid)
        po = Popen(cmd, shell=True, stdout=PIPE, executable='/bin/bash')
        output = po.communicate()[0]
        result = dict()
        for line in re.split(r'\r*\n', output):
            if line.strip() == "":
                continue
            idx, value = re.split(r'\s+', line, 1)
            idx = idx.replace(oid + ".", '')
            result[idx] = value
        return result

    def bulkwalk_index(self, oid='ifOutOctets'):
        result = dict()
        if not (self._index == dict()):
            vals = self.bulkwalk(oid=oid)
            for key, val in vals.items():
                idx = self._index.get(key, None)
                if not (idx is None):
                    result[idx] = val
                else:
                    raise ValueError, "Could not find '%s' in the index (%s)" % (key, self._index)
        else:
            raise ValueError, "Call the index() method before calling bulkwalk_index()"
        return result

    def get_index(self, oid='ifOutOctets', index=''):
        # This method is horribly inefficient... improvement left as exercise for the reader...
        if index:
            return self.bulkwalk_index(oid=oid).get(index, "<unknown>")
        else:
            raise ValueError, "Please include an index to get"

    def index(self, oid='ifName'):
        self._index = self.bulkwalk(oid=oid)
END NOTES:
Note 1: SNMP v2c uses clear-text authentication. If you are worried about security or someone sniffing your traffic, change your community and restrict queries to your Linux machine by source IP address. The perfect-world fix would be to modify SNMP.py above to use SNMPv3 (which encrypts sensitive data); most people just use a non-public community and restrict SNMP queries by source IP.
Note 2: ifHCInOctets and ifHCOutOctets are cumulative 64-bit counters of the bytes transferred through the interface, sampled at the moment you poll them. If you are looking for a data transfer rate, of course there will be some additional math involved.
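For that math, a minimal sketch built on the (timestamp, in_bytes, out_bytes) tuples that poll_eth0() returns; counter wrap is ignored here, which is usually acceptable with the 64-bit ifHC* counters:
# Average byte rates over the interval between two successive samples.
def byte_rates(sample_old, sample_new):
    t0, in0, out0 = sample_old
    t1, in1, out1 = sample_new
    elapsed = t1 - t0
    in_rate = (in1 - in0) / elapsed     # bytes per second, inbound
    out_rate = (out1 - out0) / elapsed  # bytes per second, outbound
    return in_rate, out_rate

# Example, once stats holds at least two samples:
# in_bps, out_bps = byte_rates(stats[-2], stats[-1])
# print "in: %.1f B/s, out: %.1f B/s" % (in_bps, out_bps)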