I have coded a wireless probe sniffer using Python + Scapy, and I want to run this script on OpenWrt routers.
Every time it captures a probe request from a nearby device, the information (MAC, power and probe) is sent to a web service.
My problem is high CPU consumption. The script runs quite well on my laptop (it takes between 50-70% of the CPU), but when I run it on an OpenWrt router (400 MHz CPU, 16 MB RAM) it takes 99%.
Losing packets under high load is a well-known Scapy issue (I have run the script on my laptop and on the router at the same time, and the router did not catch all the available packets in the air).
I have already made some optimizations to the code, but I think there's more room for improvement.
This is the script:
#!/usr/bin/python
import sys
import time
import thread
import requests
from datetime import datetime
from scapy.all import *

PROBE_REQUEST_TYPE = 0
PROBE_REQUEST_SUBTYPE = 4

buf = {'arrival': 0, 'source': 0, 'dest': 0, 'pwr': 0, 'probe': 0}
uuid = '1A2B3'

def PacketHandler(pkt):
    global buf
    if pkt.haslayer(Dot11):
        if pkt.type == PROBE_REQUEST_TYPE and pkt.subtype == PROBE_REQUEST_SUBTYPE:
            arrival = int(time.mktime(time.localtime()))
            try:
                extra = pkt.notdecoded
            except AttributeError:
                extra = None
            if extra is not None:
                signal_strength = -(256 - ord(extra[-4:-3]))
            else:
                signal_strength = -100
            source = pkt.addr2
            dest = pkt.addr3
            pwr = signal_strength
            probe = pkt.getlayer(Dot11).info
            if buf['source'] != source and buf['probe'] != probe:
                print 'launch %r %r %r' % (source, dest, probe)
                buf = {'arrival': arrival, 'source': source, 'dest': dest, 'pwr': pwr, 'probe': probe}
                try:
                    thread.start_new_thread(exporter, (arrival, source, dest, pwr, probe))
                except:
                    print 'Error launching the thread %r' % source

def exporter(arrival, source, dest, pwr, probe):
    global uuid
    urlg = ('http://webservice.com/?arrival=' + str(arrival) + '&source=' + str(source) +
            '&dest=' + str(dest) + '&pwr=' + str(pwr) + '&probe=' + str(probe) + '&uuid=' + uuid)
    try:
        r = requests.get(urlg)
        print r.status_code
        print r.content
    except:
        print 'ERROR in Thread:::::: %r' % source

def main():
    print "[%s] Starting scan" % datetime.now()
    sniff(iface=sys.argv[1], prn=PacketHandler, store=0)

if __name__ == "__main__":
    main()
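One further optimization worth trying: instead of spawning a new thread per captured probe with thread.start_new_thread (expensive on a 400 MHz CPU), push each report onto a queue drained by a single long-lived worker thread. This is a minimal sketch of the pattern, not the script itself: the `exported` list stands in for the `requests.get()` call so the sketch is self-contained, and the `maxsize` bounds memory if the web service falls behind.

```python
import threading

try:
    import queue                # Python 3
except ImportError:
    import Queue as queue       # Python 2, which the script above targets

export_queue = queue.Queue(maxsize=1000)
exported = []                   # stand-in for the HTTP upload

def export_worker():
    # Single long-lived worker: drains the queue and uploads each report.
    # Replaces one thread.start_new_thread(exporter, ...) per packet.
    while True:
        item = export_queue.get()
        if item is None:        # sentinel: shut the worker down
            export_queue.task_done()
            break
        # The real script would call requests.get(urlg) here.
        exported.append(item)
        export_queue.task_done()

worker = threading.Thread(target=export_worker)
worker.daemon = True
worker.start()

# PacketHandler would call export_queue.put(...) instead of starting a thread:
export_queue.put((1234567890, 'aa:bb:cc:dd:ee:ff', 'ff:ff:ff:ff:ff:ff', -60, 'MyNet'))
export_queue.put(None)
export_queue.join()
print(exported)
```

With this shape, PacketHandler only ever pays the cost of a queue put, and the HTTP latency is absorbed by one worker instead of a burst of short-lived threads.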
[UPDATE]
After a lot of reading and deep searching (it seems not many people have found a full solution to this or a similar issue), I found that you can filter directly from the sniff function, so I've added a filter to catch only probe requests:
def main():
    print "[%s] Starting scan" % datetime.now()
    sniff(iface=sys.argv[1], prn=PacketHandler, filter='link[26] = 0x40', store=0)
On my laptop it runs really smoothly, using between 1% and 3% of the CPU and catching most of the available packets in the air.
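For reference, the 0x40 in that filter is the 802.11 frame-control byte of a probe request: version 0, type 0 (management), subtype 4, packed as subtype<<4 | type<<2 | version. (The byte offset 26 itself depends on the capture's link-layer header length, so it can differ per driver.) A quick sanity check of the byte value:

```python
PROBE_REQUEST_TYPE = 0     # management frame
PROBE_REQUEST_SUBTYPE = 4  # probe request

# First byte of the 802.11 frame control field:
# bits 0-1 version, bits 2-3 type, bits 4-7 subtype
frame_control = (PROBE_REQUEST_SUBTYPE << 4) | (PROBE_REQUEST_TYPE << 2) | 0
print(hex(frame_control))  # -> 0x40
```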
But when I run this on the router, the script throws an error and crashes:
Traceback (most recent call last):
File "snrV2.py", line 66, in <module>
main()
File "snrV2.py", line 63, in main
sniff(iface=sys.argv[1],prn=PacketHandler, filter='link[26] = 0x40',store=0)
File "/usr/lib/python2.7/site-packages/scapy/sendrecv.py", line 550, in sniff
s = L2socket(type=ETH_P_ALL, *arg, **karg)
File "/usr/lib/python2.7/site-packages/scapy/arch/linux.py", line 460, in __init__
attach_filter(self.ins, filter)
File "/usr/lib/python2.7/site-packages/scapy/arch/linux.py", line 132, in attach_filter
s.setsockopt(SOL_SOCKET, SO_ATTACH_FILTER, bpfh)
File "/usr/lib/python2.7/socket.py", line 224, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 99] Protocol not available
I have tried using BPF filter syntax (the same one used in tcpdump, http://biot.com/capstats/bpf.html), which supposedly you can also use in Scapy, but I get a filter syntax error.
Sniff function:
def main():
    print "[%s] Starting scan" % datetime.now()
    sniff(iface=sys.argv[1], prn=PacketHandler, filter='type mgt subtype probe-req', store=0)
error:
Traceback (most recent call last):
File "snrV2.py", line 66, in <module>
main()
File "snrV2.py", line 63, in main
sniff(iface=sys.argv[1],prn=PacketHandler, filter='type mgt subtype probe-req', store=0)
File "/usr/lib/python2.7/site-packages/scapy/sendrecv.py", line 550, in sniff
s = L2socket(type=ETH_P_ALL, *arg, **karg)
File "/usr/lib/python2.7/site-packages/scapy/arch/linux.py", line 460, in __init__
attach_filter(self.ins, filter)
File "/usr/lib/python2.7/site-packages/scapy/arch/linux.py", line 120, in attach_filter
raise Scapy_Exception("Filter parse error")
NameError: global name 'Scapy_Exception' is not defined
On the router I have installed the latest versions of Scapy and tcpdump.
Now I really don't know what to do.
I encountered a similar error (socket.error: [Errno 99] Protocol not available) when I tried to use sniff() with a filter on my NETGEAR WNDR4300.
After a lot of searching on Google, I found that the reason is that the Linux kernel of my router does not enable CONFIG_PACKET. It is mentioned in the Scapy installation guide, as follows:
Make sure your kernel has Packet sockets selected (CONFIG_PACKET)
If your kernel is < 2.6, make sure that Socket filtering is selected (CONFIG_FILTER)
If you set CONFIG_PACKET=y when you compile the kernel, it will enable BPF support for the underlying socket.
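To check whether a running kernel was built with the option, you can parse the kernel config when it's available. A small sketch; note that /proc/config.gz only exists when CONFIG_IKCONFIG_PROC is enabled, which many router builds omit, so you may need to look at the build-time .config instead:

```python
import gzip

def kernel_has_option(config_text, option):
    """Return True if the kernel config text enables an option (=y or =m)."""
    for line in config_text.splitlines():
        line = line.strip()
        if line.startswith(option + '='):
            return line.split('=', 1)[1] in ('y', 'm')
    return False

# On a live system with CONFIG_IKCONFIG_PROC enabled:
# with gzip.open('/proc/config.gz', 'rt') as f:
#     print(kernel_has_option(f.read(), 'CONFIG_PACKET'))

sample = "CONFIG_PACKET=y\n# CONFIG_FILTER is not set\n"
print(kernel_has_option(sample, 'CONFIG_PACKET'))  # -> True
print(kernel_has_option(sample, 'CONFIG_FILTER'))  # -> False
```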
I'm on Windows 7. When I start a Bottle web server with:
run('0.0.0.0', port=80)
And then run the same Python script once again, it doesn't fail with a "Port already in use" error (which would be the normal behaviour), but instead successfully starts the Python script again!
Question: How to stop this behaviour, in a simple way?
This is related to Multiple processes listening on the same port?, but how can you prevent this in a Python context?
This is a Windows specific behavior that requires the use of the SO_EXCLUSIVEADDRUSE option before binding a network socket.
From the Using SO_REUSEADDR and SO_EXCLUSIVEADDRUSE article in the Windows Socket 2 documentation:
Before the SO_EXCLUSIVEADDRUSE socket option was introduced, there was
very little a network application developer could do to prevent a
malicious program from binding to the port on which the network
application had its own sockets bound. In order to address this
security issue, Windows Sockets introduced the SO_EXCLUSIVEADDRUSE
socket option, which became available on Windows NT 4.0 with Service
Pack 4 (SP4) and later.
...
The SO_EXCLUSIVEADDRUSE option is set by calling the setsockopt
function with the optname parameter set to SO_EXCLUSIVEADDRUSE and the
optval parameter set to a boolean value of TRUE before the socket is
bound.
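Outside of Bottle, setting the option on a bare socket looks like this. This is a minimal sketch: the hasattr() guard makes it a no-op on non-Windows platforms (where SO_EXCLUSIVEADDRUSE doesn't exist), and port 0 asks the OS for any free port:

```python
import socket

def make_exclusive_listener(host, port):
    # Bind a TCP socket, requesting exclusive use of the address on Windows.
    # SO_EXCLUSIVEADDRUSE only exists there, hence the hasattr() guard;
    # on other platforms this is just a plain bind.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    if hasattr(socket, 'SO_EXCLUSIVEADDRUSE'):
        s.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
    s.bind((host, port))
    s.listen(1)
    return s

listener = make_exclusive_listener('127.0.0.1', 0)  # port 0: OS picks a free port
print(listener.getsockname())
listener.close()
```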
In order to do this using the Bottle module, you have to create a custom backend facilitating access to the socket before it's bound. This gives an opportunity to set the required socket option as documented.
This is briefly described in the Bottle Deployment documentation:
If there is no adapter for your favorite server or if you need more
control over the server setup, you may want to start the server
manually.
Here's a modified version of the Bottle Hello World example that demonstrates this:
import socket
from wsgiref.simple_server import WSGIServer
from bottle import route, run, template

@route('/hello/<name>')
def index(name):
    return template('<b>Hello {{name}}</b>!', name=name)

class CustomSocketServer(WSGIServer):
    def server_bind(self):
        # This tests if the socket option exists (i.e. only on Windows), then
        # sets it.
        if hasattr(socket, 'SO_EXCLUSIVEADDRUSE'):
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_EXCLUSIVEADDRUSE, 1)
        # Everything below this point is a concatenation of the server_bind
        # implementations pulled from each class in the class hierarchy:
        # wsgiref.WSGIServer -> http.HTTPServer -> socketserver.TCPServer
        elif self.allow_reuse_address:
            self.socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.socket.bind(self.server_address)
        self.server_address = self.socket.getsockname()
        host, port = self.server_address[:2]
        self.server_name = socket.getfqdn(host)
        self.server_port = port
        self.setup_environ()

print "Serving..."
run(host='localhost', port=8080, server_class=CustomSocketServer)
Note that the copied code is required to maintain the behavior expected by the superclasses.
All of the superclass implementations of server_bind() start by calling their parent class's server_bind(). This means that calling any of them results in the immediate binding of the socket, removing the opportunity to set the required socket option first.
I tested this on Windows 10 using Python 2.7.
First instance:
PS C:\Users\chuckx\bottle-test> C:\Python27\python.exe test.py
Serving...
Second instance:
PS C:\Users\chuckx\bottle-test> C:\Python27\python.exe test.py
Traceback (most recent call last):
File "test.py", line 32, in <module>
server_class=CustomSocketServer)
File "C:\Python27\lib\wsgiref\simple_server.py", line 151, in make_server
server = server_class((host, port), handler_class)
File "C:\Python27\lib\SocketServer.py", line 417, in __init__
self.server_bind()
File "test.py", line 19, in server_bind
self.socket.bind(self.server_address)
File "C:\Python27\lib\socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
socket.error: [Errno 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
An alternative solution is to use @LuisMuñoz's comment: check if the port is already open before opening it again:
# Bottle web server code here
# ...

import socket
sock = socket.socket()
sock.settimeout(0.2)  # this prevents a 2 second lag when starting the server
if sock.connect_ex(('127.0.0.1', 80)) == 0:
    print "Sorry, port already in use."
    exit()

run(host='0.0.0.0', port=80)
I am trying to make calls using the PJSIP module in Python. I set up the SIP transport like this:
trans_cfg = pj.TransportConfig()
# port for VoIP communication
trans_cfg.port = 5060
# local system address
trans_cfg.bound_addr = inputs.client_addr
transport = lib.create_transport(pj.TransportType.UDP,trans_cfg)
When I finish the call I clear the transport setup with transport = None.
I am able to make calls by running my program. But every time I restart my PC, I get an error when I run my Python program:
File "pjsuatrail_all.py", line 225, in <module>
main()
File "pjsuatrail_all.py", line 169, in main
transport = transport_setup()
File "pjsuatrail_all.py", line 54, in transport_setup
transport = lib.create_transport(pj.TransportType.UDP,trans_cfg)
File "/usr/local/lib/python2.7/dist-packages/pjsua.py", line 2304, in create_transport
self._err_check("create_transport()", self, err)
File "/usr/local/lib/python2.7/dist-packages/pjsua.py", line 2723, in _err_check
raise Error(op_name, obj, err_code, err_msg)
pjsua.Error: Object: Lib, operation=create_transport(), error=Address already in use
Exception AttributeError: "'NoneType' object has no attribute 'destroy'" in <bound method Lib.__del__ of <pjsua.Lib instance at 0x7f8a4bbb6170>> ignored
For now I work around it like this:
$sudo lsof -t -i:5060
>> 1137
$sudo kill 1137
Then when I run my code it works fine.
From the error I can tell that somewhere I am not closing my transport configuration properly. Can anyone help in this regard?
Reference code used
From the information you give, it can be understood that it's not a problem with the PJSIP wrapper. The transport configuration looks fine.
Looking into the create_transport error, the program is not able to create the transport because port 5060 is already occupied by some other program.
To work around that you kill the process, and then you are able to run the program without any error. And since you say it only happens on restart, some program must be occupying the port at system startup.
You can try something like this:
sudo netstat -nlp | grep 5060
In your case it will give something like:
1137/ProgramName
Go to 'ProgramName' in your startup configuration and change it so that it won't pick up the port.
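A defensive alternative is to probe from Python whether the port is free before handing it to PJSIP, rather than killing the other process. This is a best-effort sketch (udp_port_free is a hypothetical helper, and note there is an inherent race between the check and create_transport()):

```python
import socket

def udp_port_free(addr, port):
    # Best-effort check that a UDP port is free before handing it to PJSIP.
    # There is an inherent race between this check and create_transport().
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.bind((addr, port))
    except socket.error:
        return False
    finally:
        s.close()
    return True

print(udp_port_free('127.0.0.1', 5060))
```

If it returns False you can report which program holds the port (via netstat as above) instead of failing inside pjsua with "Address already in use".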
I have a small python project to simulate CAN-enabled electronics that I have built. The project has a shell interpreter to interact with the simulated electronics devices. I'm at a point I'd like to trap exceptions thrown from within python-can running instances and print friendly error messages in the main shell UI instead of getting a full stack trace garbling the display.
For instance, if I run my script while the can0 interface is not configured (i.e. before executing ip link set can0 up type can bitrate 250000) I get a stack trace such as this:
Failed to send: Timestamp: 0.000000 ID: 0541 000 DLC: 8 01 1a c4 13 00 00 e2 04
> Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python3.6/site-packages/can/notifier.py", line 39, in rx_thread
msg = self.bus.recv(self.timeout)
File "/usr/lib/python3.6/site-packages/can/interfaces/socketcan_native.py", line 341, in recv
packet = capturePacket(self.socket)
File "/usr/lib/python3.6/site-packages/can/interfaces/socketcan_native.py", line 274, in capturePacket
cf, addr = sock.recvfrom(can_frame_size)
OSError: [Errno 100] Network is down
Note that the '>' sign above is my mini-shell prompt. That looks ugly, right?
I first tried to derive from SocketcanNative_Bus and override the recv() method, but my new class was rejected: there's a check in /usr/lib/python3.6/site-packages/can/interfaces/interface.py that validates an interface class name against a list of well-known modules. So this might be not-quite-the-way-to-go.
Before I try anything else stupid, does python-can provide some means to intercept OS errors that occur in Notifier threads during transmission of data packets or when the CAN interface goes down, for instance?
Note that since I'm using a USB CAN adapter, it's pointless to check the network state only when the script starts because the CAN adapter could as well be unplugged while the script runs. So the only relevant way is to catch exceptions thrown by python-can modules and show a friendly message in the main thread.
I still don't know if this is what the python-can developers intended, but here's my latest working attempt. I'm overriding the recv() method of a SocketcanNative_Bus instance, aka bus:
can.rc['interface'] = 'socketcan_native'
can.rc['channel'] = args[1]

from can.interfaces.socketcan_native import SocketcanNative_Bus

def recv(self, timeout=None):
    try:
        return SocketcanNative_Bus.recv(self, timeout)
    except OSError as e:
        print(str(e))
        return None

try:
    import types
    bus = can.interface.Bus()
    bus.recv = types.MethodType(recv, bus)
except OSError as e:
    print('Error {0}: {1}'.format(e.errno, e.strerror), file=sys.stderr)
I dynamically overrode the bus instance's recv() method using Python's types.MethodType(). This is known as monkey patching, as explained elsewhere on Stack Overflow.
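The same types.MethodType trick can be exercised without CAN hardware. Here's a minimal stand-alone sketch; FakeBus is a stand-in class that fails like a downed interface, not part of python-can:

```python
import types

class FakeBus(object):
    """Stand-in for SocketcanNative_Bus: always fails like a downed interface."""
    def recv(self, timeout=None):
        raise OSError(100, 'Network is down')

def safe_recv(self, timeout=None):
    # Delegate to the original recv() and turn OSError into a quiet None.
    try:
        return FakeBus.recv(self, timeout)
    except OSError as e:
        print('CAN receive failed: %s' % e)
        return None

bus = FakeBus()
bus.recv = types.MethodType(safe_recv, bus)  # patch this instance only
print(bus.recv())  # -> None instead of an uncaught OSError
```

Because the patch is applied per instance, other FakeBus objects keep the original raising behaviour, which mirrors patching one can.interface.Bus() without touching the class.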
I'm trying to figure out how to deploy a Bokeh slider chart over an IIS server.
I recently finished up a Flask application, so I figured I'd try the route where you embed through flask:
https://github.com/bokeh/bokeh/tree/master/examples/howto/server_embed
It's nice and easy when I launch the script locally, but I can't seem to set it up properly over IIS. I believe the complexity stems from the fact that the wfastcgi.py module I'm using to deploy over IIS can't easily multi-thread without some sort of hack-like workaround.
So my second attempt was to wrap the Flask app in Tornado as in OPTION B of the answer below (without much success, but I still think this is my best lead here):
Run Flask as threaded on IIS 7
My third attempt was to run the Bokeh server standalone on a specific port. I figured I'd be able to run the server via standalone_embed.py using wfastcgi.py on, say, port 8888 while using port 5000 for the server callbacks. However, the Server function:
from bokeh.server.server import Server
still launches it locally on the host machine
server = Server({'/': bokeh_app}, io_loop=io_loop, port=5000)
server.start()
So this actually works if I go to http://localhost:5000/ on the host,
but fails if I go to http://%my_host_ip%:5000/ from a remote machine.
I even tried manually setting the host but get an "invalid host" error:
server = Server({'/': bokeh_app}, io_loop=io_loop, host='%my_host_ip_address_here%:5000')
server.start()
ERR:
Error occurred while reading WSGI handler:
Traceback (most recent call last):
  File "C:\Python34\lib\site-packages\bokeh\server\server.py", line 45, in _create_hosts_whitelist
    int(parts[1])
ValueError: invalid literal for int() with base 10: ''

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\WebsitesFlask\bokehTest\wfastcgi.py", line 711, in main
    env, handler = read_wsgi_handler(response.physical_path)
  File "C:\WebsitesFlask\bokehTest\wfastcgi.py", line 568, in read_wsgi_handler
    return env, get_wsgi_handler(handler_name)
  File "C:\WebsitesFlask\bokehTest\wfastcgi.py", line 537, in get_wsgi_handler
    handler = __import__(module_name, fromlist=[name_list[0][0]])
  File ".\app.py", line 41, in <module>
    server = Server({'/': bokeh_app}, io_loop=io_loop, host='%my_host_ip_address_here%:5000')
  File "C:\Python34\lib\site-packages\bokeh\server\server.py", line 123, in __init__
    tornado_kwargs['hosts'] = _create_hosts_whitelist(kwargs.get('host'), self._port)
  File "C:\Python34\lib\site-packages\bokeh\server\server.py", line 47, in _create_hosts_whitelist
    raise ValueError("Invalid port in host value: %s" % host)
ValueError: Invalid port in host value: :
StdOut:
StdErr:
First off, the --host parameter should no longer be needed in the next 0.12.5 release. It's probably been the most confusing stumbling block for people trying to deploy a Bokeh server app in a "production" environment. You can follow the discussion on this issue on GitHub for more details.
Looking at the actual implementation in Bokeh that generates the error you are seeing, it is just this:
parts = host.split(':')
if len(parts) == 1:
    if parts[0] == "":
        raise ValueError("Empty host value")
    hosts.append(host + ":80")
elif len(parts) == 2:
    try:
        int(parts[1])
    except ValueError:
        raise ValueError("Invalid port in host value: %s" % host)
The exception you are reporting states that int(parts[1]) is failing:
Traceback (most recent call last):
File "C:\Python34\lib\site-packages\bokeh\server\server.py", line 45,
in _create_hosts_whitelist int(parts[1])
ValueError: invalid literal for int() with base 10:
So, there is something amiss with the string you are passing for host that's causing the part after the colon to fail conversion to an int. But without seeing the actual string, it's impossible to say much more. Maybe there is some encoding issue that needs to be handled differently or better. If you can provide a concrete string example that reproduces the problem, I can take a closer look.
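To experiment with which host strings pass, the check can be reproduced outside Bokeh. This is a sketch of the same logic shown above, wrapped in a function for testing, not Bokeh's actual implementation:

```python
def check_host(host):
    # Same validation logic as the Bokeh snippet above, as a standalone
    # function (a sketch for experimentation, not Bokeh's real code).
    parts = host.split(':')
    if len(parts) == 1:
        if parts[0] == "":
            raise ValueError("Empty host value")
        return host + ":80"          # no port given: default to 80
    elif len(parts) == 2:
        try:
            int(parts[1])
        except ValueError:
            raise ValueError("Invalid port in host value: %s" % host)
        return host
    raise ValueError("Invalid host value: %s" % host)

print(check_host('192.168.1.10:5000'))  # -> 192.168.1.10:5000
print(check_host('example.com'))        # -> example.com:80
# check_host('192.168.1.10:') raises: Invalid port in host value
```

Note that a host string with a trailing colon and nothing after it, as in the traceback, splits into two parts with an empty second part, and int('') is exactly what raises.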
This question is related to How do we handle Python xmlrpclib Connection Refused?
When I try to use the following code, with my RPC server down, _get_rpc() returns False and I'm good to go. However, if the server is running, it fails with UNKNOWN_METHOD. Is it trying to execute .connect() on the remote server? How can I get around this, when I needed to use .connect() to detect whether the returned proxy worked (see the related question)?
import xmlrpclib
import socket

def _get_rpc():
    try:
        a = xmlrpclib.ServerProxy('http://dd:LNXFhcZnYshy5mKyOFfy@127.0.0.1:9001')
        a.connect()  # Try to connect to the server
        return a.supervisor
    except socket.error:
        return False

if not _get_rpc():
    print "Failed to connect"
Here is the issue:
ahiscox@lenovo:~/code/dd$ python xmlrpctest2.py
Failed to connect
ahiscox@lenovo:~/code/dd$ supervisord -c ~/.supervisor # start up RPC server
ahiscox@lenovo:~/code/dd$ python xmlrpctest2.py
Traceback (most recent call last):
File "xmlrpctest2.py", line 13, in <module>
if not _get_rpc():
File "xmlrpctest2.py", line 7, in _get_rpc
a.connect() # Try to connect to the server
File "/usr/lib/python2.6/xmlrpclib.py", line 1199, in __call__
return self.__send(self.__name, args)
File "/usr/lib/python2.6/xmlrpclib.py", line 1489, in __request
verbose=self.__verbose
File "/usr/lib/python2.6/xmlrpclib.py", line 1253, in request
return self._parse_response(h.getfile(), sock)
File "/usr/lib/python2.6/xmlrpclib.py", line 1392, in _parse_response
return u.close()
File "/usr/lib/python2.6/xmlrpclib.py", line 838, in close
raise Fault(**self._stack[0])
xmlrpclib.Fault: <Fault 1: 'UNKNOWN_METHOD'>
Well, I was just looking into it; my old method sucks because xmlrpclib.ServerProxy only tries to connect to the XML-RPC server when you call a method, not before!
Try this instead:
import sys
import socket
import xmlrpclib

def _get_rpc():
    a = xmlrpclib.ServerProxy('http://dd:LNXFhcZnYshy5mKyOFfy@127.0.0.1:9001')
    try:
        a._()  # Call a fictive method.
    except xmlrpclib.Fault:
        # Connected to the server; the method doesn't exist, which is expected.
        pass
    except socket.error:
        # Not connected; a socket error means the service is unreachable.
        return False, None
    # Just in case the method is registered in the XML-RPC server.
    return True, a

connected, server_proxy = _get_rpc()
if not connected:
    print "Failed to connect"
    sys.exit(1)
To summarize, we have 3 cases:
XmlRPC server is up and a method called _() is defined in it:
(EDIT: I chose the name _ because it's unlikely that a method with this name exists, but this case can still happen)
In this case no exception will be caught and the code will reach the return True.
XmlRPC server is up and no method called _() is defined in it:
This time xmlrpclib.Fault will be raised and we will also reach the return True.
XmlRPC server is down:
Now the socket.error exception will be raised when we call a._(), so we should return False.
I don't know if there is an easier way to do this, and I would love to see it. Until then, I hope this fixes things this time :)
N.B.: when you do if a:, Python will again search for a method __nonzero__() to test the boolean value of a, and this will fail too.
N.B. 2: Some XML-RPC services offer an RPC path specialized for authentication; on this path the service offers methods like login(). These kinds of methods can replace the _() method in our case, so just calling login() will be enough to know if the service is up or down (socket.error), and at the same time the login() method authenticates the user if the service is up.
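The three cases above can be exercised without a live supervisord by stubbing the proxy. FakeProxy below is a hypothetical stand-in for xmlrpclib.ServerProxy that raises the same exceptions each scenario would produce:

```python
import socket

try:
    import xmlrpclib                   # Python 2 name
except ImportError:
    import xmlrpc.client as xmlrpclib  # Python 3 name

class FakeProxy(object):
    # Hypothetical stand-in for xmlrpclib.ServerProxy: raises the same
    # exceptions a real proxy would in each scenario.
    def __init__(self, up):
        self.up = up
    def _(self):
        if self.up:
            raise xmlrpclib.Fault(1, 'UNKNOWN_METHOD')  # server answered
        raise socket.error(111, 'Connection refused')   # nothing listening

def is_reachable(proxy):
    try:
        proxy._()
    except xmlrpclib.Fault:
        return True   # the server responded, the method just doesn't exist
    except socket.error:
        return False  # unreachable
    return True       # the method actually exists on the server

print(is_reachable(FakeProxy(up=True)), is_reachable(FakeProxy(up=False)))
```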