I've decided to try to request an IP address using scapy. I am able to send out a DISCOVER and receive an OFFER in the variable ansD. Unfortunately, I'm having trouble accessing the field that contains the offered IP address, which should be ansD[BOOTP].yiaddr; it tells me that the field does not exist. I have looked around and seen similar issues, but I can't seem to understand why I can access normal packet fields yet fail to do so with BOOTP fields.
receivedIP = 0
conf.checkIPaddr = False
fam,hw = get_if_raw_hwaddr(conf.iface)
dhcp_discover = Ether(dst="ff:ff:ff:ff:ff:ff")/IP(src="0.0.0.0",dst="255.255.255.255")/UDP(sport=68,dport=67)/BOOTP(chaddr=hw)/DHCP(options=[("message-type","discover"),"end"])
ansD,unans = srp(dhcp_discover, multi=True)
if True:
    dhcp_request = Ether(dst="ff:ff:ff:ff:ff:ff")/IP(src="0.0.0.0",dst="255.255.255.255")/UDP(sport=68,dport=67)/BOOTP(chaddr=hw,yiaddr=ansD[BOOTP].yiaddr)/DHCP(options=[("message-type","request"),"end"])
    ansR, unans = srp(dhcp_request, multi=True)
The error I get is: AttributeError: 'list' object has no attribute 'yiaddr'
I figured it out not two seconds after posting, but hopefully this helps others in the future.
I used srp() instead of srp1(); the former returns multiple packets, so I would need to index the specific packet I wanted to look at: ansD[0][BOOTP].yiaddr. I have since changed my code to use srp1() instead, since this exchange expects only one specific "Offer" reply from the DHCP server.
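To make the difference concrete, here is a minimal sketch of both access patterns (it reuses the dhcp_discover packet built above; the explicit timeout is my own addition):

from scapy.all import srp, srp1, BOOTP

# With srp(), the answered result is a list of (sent, received) pairs,
# so the received offer has to be pulled out by index first.
answered, unanswered = srp(dhcp_discover, multi=True, timeout=5)
offer = answered[0][1]  # received packet of the first answered pair (assumes an answer arrived)
print(offer[BOOTP].yiaddr)

# With srp1(), the single received packet comes back directly.
offer = srp1(dhcp_discover, timeout=5)
print(offer[BOOTP].yiaddr)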
Fixed code below
import sys
from scapy.all import *
receivedIP = 0
conf.checkIPaddr = False
fam,hw = get_if_raw_hwaddr(conf.iface)
dhcp_discover=Ether(dst="ff:ff:ff:ff:ff:ff")/IP(src="0.0.0.0",dst="255.255.255.255")/UDP(sport=68,dport=67)/BOOTP(chaddr=hw)/DHCP(options=[("message-type","discover"),"end"])
ansD = srp1(dhcp_discover, multi=True)
if True:
    # Request using the IP the server offered us in ansD[BOOTP].yiaddr
    dhcp_request = Ether(dst="ff:ff:ff:ff:ff:ff")/IP(src="0.0.0.0",dst="255.255.255.255")/UDP(sport=68,dport=67)/BOOTP(chaddr=hw,yiaddr=ansD[BOOTP].yiaddr)/DHCP(options=[("message-type","request"),"end"])
    ansR, unans = srp(dhcp_request, multi=True)
    ansR.summary()
I want to implement an IoT application. I will give a toy version of what I want to do here.
Say I have two clients, 'client1' and 'client2', on REMOTE COMPUTERS, and a server 'server' that coordinates the computations. The hard part for me is that the computations can't all be done in the same place.
We have : clients_list = ['client1', 'client2']
I want to simulate an algorithm that looks like this:
The server starts with an initial value server_value, then:

for round in range(R):
    client_values_dict = {}
    for client_id in clients_list:
        the server broadcasts server_value to the client 'client_id'   # via http
        client_value = action(server_value)                            # executed on the client's computer
        the client sends its value back to the server                  # via http
        in the meantime, the server waits for the response
        the server fills the dictionary (keys from clients_list, values obtained with 'action'):
            client_values_dict[client_id] = client_value
    server_value = aggregate(client_values_dict)                       # executed on the server's computer
On the client side (in client.py), I have a function:
import random
import time

def action(server_value):
    time.sleep(10*random.random())
    return server_value + random.random() - 0.5
On the server side (in server.py), I have a function:
def aggregate(client_values_dict):
    return sum(client_values_dict.values())/len(client_values_dict.values())
I want to implement that: I want to write a loop at the server level that performs this. I think what I need is an API to handle the client-server interactions and the parallel computing.
I thought of using Flask for this, but I'm afraid that the loop at the server level will be blocked by the app.run(debug=True) loop, and that my code won't run until I stop the app with CTRL+C.
I want the computations to be made in parallel by the two clients.
I am not familiar with web development; my problem might seem trivial and help is probably to be found everywhere on the internet, but I don't know where to look. Any help is cheerfully welcomed.
Here is an example of a script that simulates what I want, but locally (without the network):
# -*- coding: utf-8 -*-
import time
import random

server_value = 0
R = 10
clients_list = ['client1', 'client2']

def action(server_value):
    time.sleep(3*random.random())
    return server_value + random.random() - 0.5

def aggregate(client_values_dict):
    return sum(client_values_dict.values())/len(client_values_dict.values())

for round in range(R):
    client_values_dict = {}
    for client_id in clients_list:
        client_value = action(server_value)  # executed on the client's computer
        client_values_dict[client_id] = client_value
    server_value = aggregate(client_values_dict)
    print(server_value)
Have you tried network zero? It's an amazing networking library that I use all the time.
Install:
pip install networkzero
PyPI link: https://pypi.org/project/networkzero/
Docs: https://networkzero.readthedocs.io/en/latest/
Code sample (from their doc page):
Machine/process A:
import networkzero as nw0
address = nw0.advertise("hello")
while True:
    name = nw0.wait_for_message_from(address)
    nw0.send_reply_to(address, "Hello " + name)
Machine/process B:
import networkzero as nw0
hello = nw0.discover("hello")
reply = nw0.send_message_to(hello, "World!")
print(reply)
reply = nw0.send_message_to(hello, "Tim")
print(reply)
This library also supports more than just 2 connections on the local WiFi, read the docs for more info.
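As a rough illustration (not the library's prescribed pattern, just one way to map the round loop from the question onto these primitives, with values exchanged as strings and the client names taken from clients_list), the server and clients could look like this:

# server.py -- a sketch of the round loop on top of networkzero
import networkzero as nw0

clients_list = ['client1', 'client2']
R = 10

def aggregate(client_values_dict):
    return sum(client_values_dict.values())/len(client_values_dict.values())

# discover the address each client advertised
addresses = {c: nw0.discover(c) for c in clients_list}

server_value = 0.0
for round_ in range(R):
    client_values_dict = {}
    for client_id in clients_list:
        # send_message_to blocks until the client replies
        reply = nw0.send_message_to(addresses[client_id], str(server_value))
        client_values_dict[client_id] = float(reply)
    server_value = aggregate(client_values_dict)
    print(server_value)

# client.py -- runs on each remote computer, with its own NAME
import random
import time
import networkzero as nw0

NAME = 'client1'   # 'client2' on the other machine

def action(server_value):
    time.sleep(3*random.random())
    return server_value + random.random() - 0.5

address = nw0.advertise(NAME)
while True:
    message = nw0.wait_for_message_from(address)
    nw0.send_reply_to(address, str(action(float(message))))

Written this way the server polls the clients one after another; to make the two action() calls actually overlap, the inner loop could issue the send_message_to calls from a thread pool (e.g. concurrent.futures.ThreadPoolExecutor) and collect the replies as they come back.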
NOTE: I've used this answer before. You can see it here: How to set up a server for a local wifi multiplayer game for python
I am trying to get proxies into two variables from a module called proxyscrape.
How it looks:
import proxyscrape
collector = proxyscrape.create_collector('default', 'socks5')
proxy = collector.get_proxy()
print(proxy)
Output:
Proxy(host='112.13.170.142', port='80', code=None, country=None, anonymous=None, type='socks5', source='proxy-daily-socks5')
What I need is to parse it, so I can use it in requests:
import proxyscrape
collector = proxyscrape.create_collector('default', 'socks5')
proxy = collector.get_proxy()
ip = ip
port = port
Thank you for your help.
You should be able to do something like this.
ip = proxy.host
port = proxy.port
If those attributes exist on the proxy object, then you should be able to set a variable to them.
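For example, a minimal sketch of plugging those into requests (the socks5:// scheme needs the SOCKS extra, pip install requests[socks], and httpbin.org/ip is just a convenient endpoint that echoes your IP):

import proxyscrape
import requests

collector = proxyscrape.create_collector('default', 'socks5')
proxy = collector.get_proxy()

ip = proxy.host
port = proxy.port

proxy_url = 'socks5://{}:{}'.format(ip, port)
proxies = {'http': proxy_url, 'https': proxy_url}

# the response should show the proxy's IP rather than your own
response = requests.get('https://httpbin.org/ip', proxies=proxies)
print(response.text)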
I need to identify a control packet from the Python Ryu controller.
In other words, how can I do the following?

if (I receive an OFPT_PACKET_OUT msg from the ryu-controller):
    do something (for example, mirror all control traffic to an output port)

And how can I match this rule?
I saw in the OpenFlow v1.3 specification that there is an ofproto.OFPP_CONTROLLER reserved port: how can I use it as an ingress port?
From OFv1.3 spec.:
"OFPP_CONTROLLER: Represents the control channel with the OpenFlow controller. Can be used as an ingress port or as an output port.
When used as an output port, encapsulate the packet in a packet-in message and send it using the OpenFlow protocol.
When used as an ingress port, identify a packet originating from the controller."
Thanks for the help.
Regarding the first part of your question, let's see a basic Layer 2 Switch that simply floods the incoming packets to all output ports:
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER
from ryu.controller.handler import set_ev_cls
class L2Switch(app_manager.RyuApp):
    def __init__(self, *args, **kwargs):
        super(L2Switch, self).__init__(*args, **kwargs)

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp = dp.ofproto
        ofp_parser = dp.ofproto_parser

        actions = [ofp_parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        out = ofp_parser.OFPPacketOut(
            datapath=dp, buffer_id=msg.buffer_id, in_port=msg.in_port,
            actions=actions)
        dp.send_msg(out)
The last two statements are
out = ofp_parser.OFPPacketOut(
    datapath=dp, buffer_id=msg.buffer_id, in_port=msg.in_port,
    actions=actions)
dp.send_msg(out)
These statements generate a packet_out message. However, I don't think there is a corresponding event raised for a packet_out message (the way a packet_in message generates the EventOFPPacketIn event, which can be detected in code and have a method attached to it). I haven't used the Ryu API much, but I think the reason is simple: a packet_out message is sent by the code itself, so you can simply add a few more lines of code after the lines generating this message. Those lines can execute whatever you want to happen when a packet_out message is generated. For example, in the code above, you could add the lines mirroring control traffic to a specific port right after the dp.send_msg(out) line. Correct/edit my answer if I'm wrong.
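For instance, a hedged sketch of that idea (MIRROR_PORT is a made-up port number, and depending on the switch you may need to resend the raw bytes via data=msg.data rather than reuse the buffer id), appended right after dp.send_msg(out) inside packet_in_handler:

        # immediately after dp.send_msg(out): copy the same packet to a mirror port
        MIRROR_PORT = 4   # hypothetical mirror port number

        mirror_actions = [ofp_parser.OFPActionOutput(MIRROR_PORT)]
        mirror_out = ofp_parser.OFPPacketOut(
            datapath=dp, buffer_id=msg.buffer_id, in_port=msg.in_port,
            actions=mirror_actions)
        dp.send_msg(mirror_out)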
I have some Python code that needs to make use of various ShadowSocks proxy servers that I have set up, in order to use the IPs of those servers.
Say for example I would like to use:
1.1.1.1:5678
2.2.2.2:5678
3.3.3.3:5678
i.e., all these servers have the same remote port and the local ports are all 1080.
My preference is to have the 3 proxies rotate randomly, so that each time I send a urlopen() request (in urllib2), my code randomly connects to one of the proxies, sends the request via that proxy, and disconnects when the request is complete.
The IP could be hard coded or could be stored in some config files.
The problem is that, at the moment, all the samples I have found online seem to require the connection to be pre-established, so the Python code simply uses whatever is on localhost:1080 instead of actively making connections.
I am just wondering if anyone could lend me a helping hand to accomplish this in the code.
Thanks!
If you have a look at the source of urllib2, you can see that when a default opener is installed, it really just takes an object with an open method. So you just need to create an object whose open method uses a randomly chosen opener. Something like the following (untested) should work:
import urllib2
import random
class RandomOpener(object):
    def __init__(self, ip_list):
        self.ip_list = ip_list

    def open(self, *args, **kwargs):
        proxy = random.choice(self.ip_list)
        handler = urllib2.ProxyHandler({'http': 'http://' + proxy})
        opener = urllib2.build_opener(handler)
        return opener.open(*args, **kwargs)

my_opener = RandomOpener(['1.1.1.1:5678',
                          '2.2.2.2:5678',
                          '3.3.3.3:5678'])
urllib2.install_opener(my_opener)
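After that, a plain urlopen() call should go through one of the proxies chosen at random on each request (untested, same caveat as above; example.com is just a placeholder URL):

response = urllib2.urlopen('http://example.com')
print response.read()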
On my Linux machine, one of three network interfaces may actually be connected to the internet. I need to get the IP address of the currently connected interface, keeping in mind that my other two interfaces may be assigned IP addresses, just not be connected.
I can just ping a website through each of my interfaces to determine which one has connectivity, but I'd like to do this faster than waiting for a ping to time out, and I'd rather not have to rely on an external website being up.
Update:
All my interfaces may have ip addresses and gateways. This is for an embedded device. So we allow the user to choose between say eth0 and eth1. But if there's no connection on the interface that the user tells us to use, we fall back to say eth2 which (in theory) will always work.
So what I need to do is first check if the user's selection is connected and if so return that IP. Otherwise I need to get the ip of eth2. I can get the IPs of the interfaces just fine, it's just determining which one is actually connected.
If the default gateway for the system is reliable, then grab that from the output of route -n: the line that contains " UG " (note the spaces) will also contain the IP of the gateway and the interface name of the active interface.
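A small sketch of that idea (assuming route -n is available and prints its usual columns: Destination, Gateway, Genmask, Flags, Metric, Ref, Use, Iface):

import subprocess

def default_route_info():
    """Return (gateway_ip, interface) for the default route, or None."""
    output = subprocess.check_output(['route', '-n'])
    if isinstance(output, bytes):  # Python 3 returns bytes
        output = output.decode('ascii', 'replace')
    for line in output.splitlines():
        fields = line.split()
        # the Flags column contains 'UG' for the up + gateway (default) route
        if len(fields) >= 8 and 'UG' in fields[3]:
            return fields[1], fields[7]
    return None

print(default_route_info())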
The solution is here: http://code.activestate.com/recipes/439093-get-names-of-all-up-network-interfaces-linux-only/
import fcntl
import array
import struct
import socket
import platform

"""
global constants. If you don't like 'em here,
move 'em inside the function definition.
"""
SIOCGIFCONF = 0x8912
MAXBYTES = 8096

def localifs():
    """
    Used to get a list of the up interfaces and associated IP addresses
    on this machine (linux only).

    Returns:
        List of interface tuples. Each tuple consists of
        (interface name, interface IP)
    """
    global SIOCGIFCONF
    global MAXBYTES

    arch = platform.architecture()[0]

    # var2 is the stride through the buffer (size of each ifreq struct);
    # var1 is how many bytes to slice off the front of each struct when
    # pulling out the NUL-terminated interface name
    var1 = -1
    var2 = -1
    if arch == '32bit':
        var1 = 32
        var2 = 32
    elif arch == '64bit':
        var1 = 16
        var2 = 40
    else:
        raise OSError("Unknown architecture: %s" % arch)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    names = array.array('B', '\0' * MAXBYTES)
    outbytes = struct.unpack('iL', fcntl.ioctl(
        sock.fileno(),
        SIOCGIFCONF,
        struct.pack('iL', MAXBYTES, names.buffer_info()[0])
    ))[0]

    namestr = names.tostring()
    return [(namestr[i:i+var1].split('\0', 1)[0], socket.inet_ntoa(namestr[i+20:i+24]))
            for i in xrange(0, outbytes, var2)]

print localifs()
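Building on that, a small usage sketch of the fallback logic described in the question (the interface names are just examples, and note that localifs() only reports interfaces that are up; it does not prove they actually have connectivity):

def pick_ip(preferred, fallback='eth2'):
    """Return the IP of the preferred interface if it is up, else the fallback's."""
    ips = dict(localifs())
    if preferred in ips:
        return ips[preferred]
    return ips.get(fallback)

print pick_ip('eth0')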