traceroute multiple hosts in Python (reading .txt line by line) - python

I know this script was discussed here before, but I still can't run it properly. The problem is reading the text file line by line. The older script used
while host:
    print host
but with that method the program crashed, so I decided to change it to
for host in Open_host:
    host = host.strip()
but with this version I only get results for the last line of the .txt file. Can someone help me get it working?
The script is below:
# import subprocess
import subprocess
# Prepare host and results file
Open_host = open('c:/OSN/host.txt','r')
Write_results = open('c:/OSN/TracerouteResults.txt','a')
host = Open_host.readline()
# loop: execute trace route for each host
for host in Open_host:
    host = host.strip()
    # execute Traceroute process and pipe the result to a string
    Traceroute = subprocess.Popen(["tracert", '-w', '100', host],
                                  stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        hop = Traceroute.stdout.readline()
        if not hop: break
        print '-->', hop
        Write_results.write(hop)
    Traceroute.wait()
    # Reading a new host
    host = Open_host.readline()
# close files
Open_host.close()
Write_results.close()

I assume you have only two or three hosts in your host.txt file. The culprits are the calls to Open_host.readline() that you make before the loop and at the end of each iteration: they cause your script to skip the first host in the list and then every other host. Just removing those calls should solve your problem.
Here's the code, updated a bit to be more pythonic:
import subprocess

with open("hostlist.txt", "r") as hostlist, open("results.txt", "a") as output:
    for host in hostlist:
        host = host.strip()
        print "Tracing", host
        trace = subprocess.Popen(["tracert", "-w", "100", host],
                                 stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        while True:
            hop = trace.stdout.readline()
            if not hop: break
            print '-->', hop.strip()
            output.write(hop)
        # When you pipe stdout, the doc recommends that you use .communicate()
        # instead of wait()
        # see: http://docs.python.org/2/library/subprocess.html#subprocess.Popen.wait
        trace.communicate()

How do I ping a website or IP address with Python?
See this pure Python ping by Matthew Dixon Cowles and Jens Diemer. Also, remember that Python requires root to spawn ICMP (i.e. ping) sockets on Linux.
import ping, socket
try:
    ping.verbose_ping('www.google.com', count=3)
    delay = ping.Ping('www.wikipedia.org', timeout=2000).do()
except socket.error, e:
    print "Ping Error:", e
The source code itself is easy to read, see the implementations of verbose_ping and of Ping.do for inspiration.
Depending on what you want to achieve, the easiest approach is probably to call the system ping command.
Using the subprocess module is the best way of doing this, although you have to remember that the ping command is different on different operating systems!
import subprocess
host = "www.google.com"
ping = subprocess.Popen(
    ["ping", "-c", "4", host],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE
)
out, error = ping.communicate()
print out
You don't need to worry about shell-escape characters. For example..
host = "google.com; `echo test`"
..will not execute the echo command.
Now, to actually get the ping results, you could parse the out variable. Example output:
round-trip min/avg/max/stddev = 248.139/249.474/250.530/0.896 ms
Example regex:
import re
matcher = re.compile(r"round-trip min/avg/max/stddev = (\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)/(\d+\.\d+)")
print matcher.search(out).groups()
# ('248.139', '249.474', '250.530', '0.896')
Again, remember the output will vary depending on the operating system (and even the version of ping). This isn't ideal, but it will work fine in many situations (where you know the machines the script will be running on).
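If portability matters, one rough way to cope with those differences (just a sketch, not from the original answer; it assumes the stock Windows and Unix ping commands, which take the packet count as -n and -c respectively) is to pick the flag based on platform.system():
import platform
import subprocess

host = "www.google.com"
# Windows ping takes the count as -n, most Unix-like pings take it as -c
count_flag = "-n" if platform.system().lower() == "windows" else "-c"
ping = subprocess.Popen(["ping", count_flag, "4", host],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, error = ping.communicate()
print out
The parsing regex would still need to account for the different output formats.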
You may find Noah Gift's presentation Creating Agile Commandline Tools With Python useful. In it he combines subprocess, Queue and threading to develop a solution that can ping hosts concurrently, speeding up the process. Below is a basic version, before he adds command-line parsing and some other features. The code for this version and others can be found here
#!/usr/bin/env python2.5
from threading import Thread
import subprocess
from Queue import Queue

num_threads = 4
queue = Queue()
ips = ["10.0.1.1", "10.0.1.3", "10.0.1.11", "10.0.1.51"]

# wraps system ping command
def pinger(i, q):
    """Pings subnet"""
    while True:
        ip = q.get()
        print "Thread %s: Pinging %s" % (i, ip)
        ret = subprocess.call("ping -c 1 %s" % ip,
                              shell=True,
                              stdout=open('/dev/null', 'w'),
                              stderr=subprocess.STDOUT)
        if ret == 0:
            print "%s: is alive" % ip
        else:
            print "%s: did not respond" % ip
        q.task_done()

# Spawn thread pool
for i in range(num_threads):
    worker = Thread(target=pinger, args=(i, queue))
    worker.setDaemon(True)
    worker.start()

# Place work in queue
for ip in ips:
    queue.put(ip)

# Wait until worker threads are done to exit
queue.join()
He is also the author of Python for Unix and Linux System Administration.
It's hard to say what your question is, but there are some alternatives.
If you mean to literally execute a request using the ICMP ping protocol, you can get an ICMP library and execute the ping request directly. Google "Python ICMP" to find things like this icmplib. You might want to look at scapy, also.
This will be much faster than using os.system("ping " + ip ).
If you mean to generically "ping" a box to see if it's up, you can use the echo protocol on port 7.
For echo, you use the socket library to open the IP address and port 7. You write something on that port, send a carriage return ("\r\n") and then read the reply.
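A minimal sketch of that echo-port check (the address and timeout below are placeholders; note that the echo service is disabled on most modern machines, so a refused connection means "no echo service", not necessarily "host down"):
import socket

host = "192.168.1.10"  # placeholder address
try:
    s = socket.create_connection((host, 7), timeout=5)  # port 7 = echo
    s.sendall("hello\r\n")
    reply = s.recv(1024)  # the echo service sends the same bytes back
    s.close()
    print "echo reply:", repr(reply)
except socket.error, e:
    print "no echo service:", e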
If you mean to "ping" a web site to see if the site is running, you have to use the http protocol on port 80.
For properly checking a web server, you use urllib2 to open a specific URL (/index.html is always popular) and read the response.
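As an illustrative sketch of that check (the URL and the 5-second timeout are arbitrary choices, not part of the original answer):
import urllib2

try:
    response = urllib2.urlopen("http://www.example.com/index.html", timeout=5)
    print "site is up, HTTP status:", response.getcode()
except urllib2.URLError, e:
    print "site looks down:", e
Note that an HTTP error such as a 404 raises urllib2.HTTPError, a subclass of URLError, so it ends up in the same except branch.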
There are still more potential meanings of "ping", including "traceroute" and "finger".
I did something similar this way, as an inspiration:
import urllib
import threading
import time

def pinger_urllib(host):
    """
    helper function timing the retrieval of index.html
    TODO: should there be a 1MB bogus file?
    """
    t1 = time.time()
    urllib.urlopen('http://' + host + '/index.html').read()
    return (time.time() - t1) * 1000.0

def task(m):
    """
    the actual task
    """
    delay = float(pinger_urllib(m))
    print '%-30s %5.0f [ms]' % (m, delay)

# parallelization
tasks = []
URLs = ['google.com', 'wikipedia.org']
for m in URLs:
    t = threading.Thread(target=task, args=(m,))
    t.start()
    tasks.append(t)

# synchronization point
for t in tasks:
    t.join()
Here's a short snippet using subprocess. The check_call method either returns 0 for success, or raises an exception. This way, I don't have to parse the output of ping. I'm using shlex to split the command line arguments.
import subprocess
import shlex
command_line = "ping -c 1 www.google.comsldjkflksj"
args = shlex.split(command_line)
try:
    subprocess.check_call(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    print "Website is there."
except subprocess.CalledProcessError:
    print "Couldn't get a ping."
The simplest answer is:
import os
os.system("ping google.com")
I developed a library that I think could help you. It is called icmplib (unrelated to any other code of the same name that can be found on the Internet) and is a pure implementation of the ICMP protocol in Python.
It is completely object-oriented and has simple functions such as the classic ping, multiping and traceroute, as well as low-level classes and sockets for those who want to develop applications based on the ICMP protocol.
Here are some other highlights:
Can be run without root privileges.
You can customize many parameters such as the payload of ICMP packets and the traffic class (QoS).
Cross-platform: tested on Linux, macOS and Windows.
Fast and requires few CPU/RAM resources, unlike calls made with subprocess.
Lightweight and does not rely on any additional dependencies.
To install it (Python 3.6+ required):
pip3 install icmplib
Here is a simple example of the ping function:
from icmplib import ping

host = ping('1.1.1.1', count=4, interval=1, timeout=2, privileged=True)

if host.is_alive:
    print(f'{host.address} is alive! avg_rtt={host.avg_rtt} ms')
else:
    print(f'{host.address} is dead')
Set the "privileged" parameter to False if you want to use the library without root privileges.
You can find the complete documentation on the project page:
https://github.com/ValentinBELYN/icmplib
Hope you will find this library useful.
Read a file name from the command line; the file contains one URL per line, like this:
http://www.poolsaboveground.com/apache/hadoop/core/
http://mirrors.sonic.net/apache/hadoop/core/
use command:
python url.py urls.txt
get the result:
Round Trip Time: 253 ms - mirrors.sonic.net
Round Trip Time: 245 ms - www.globalish.com
Round Trip Time: 327 ms - www.poolsaboveground.com
Source code (url.py):
import re
import sys
import urlparse
from subprocess import Popen, PIPE
from threading import Thread

class Pinger(object):
    def __init__(self, hosts):
        for host in hosts:
            hostname = urlparse.urlparse(host).hostname
            if hostname:
                pa = PingAgent(hostname)
                pa.start()
            else:
                continue

class PingAgent(Thread):
    def __init__(self, host):
        Thread.__init__(self)
        self.host = host

    def run(self):
        p = Popen('ping -n 1 ' + self.host, stdout=PIPE)
        m = re.search('Average = (.*)ms', p.stdout.read())
        if m: print 'Round Trip Time: %s ms -' % m.group(1), self.host
        else: print 'Error: Invalid Response -', self.host

if __name__ == '__main__':
    with open(sys.argv[1]) as f:
        content = f.readlines()
    Pinger(content)
import subprocess as s

ip = raw_input("Enter the IP/Domain name:")
if s.call(["ping", ip]) == 0:
    print "your IP is alive"
else:
    print "Check your IP"
If you want something actually in Python, that you can play with, have a look at Scapy:
from scapy.all import *
request = IP(dst="www.google.com")/ICMP()
answer = sr1(request)
That's, in my opinion, much better (and fully cross-platform) than some funky subprocess calls. Also, you can have as much information about the answer (sequence ID, ...) as you want, since you have the packet itself.
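For instance, here is a small sketch of pulling a few fields out of the reply (the timeout value is just an example, and it assumes sr1() actually got an answer back):
from scapy.all import IP, ICMP, sr1

request = IP(dst="www.google.com")/ICMP()
answer = sr1(request, timeout=2)
if answer is not None:
    print(answer.summary())                 # one-line description of the reply
    print("reply from %s" % answer.src)     # address the reply came from
    print("ICMP type %d, seq %d" % (answer[ICMP].type, answer[ICMP].seq))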
Using the system ping command to ping a list of hosts:
import re
from subprocess import Popen, PIPE
from threading import Thread

class Pinger(object):
    def __init__(self, hosts):
        for host in hosts:
            pa = PingAgent(host)
            pa.start()

class PingAgent(Thread):
    def __init__(self, host):
        Thread.__init__(self)
        self.host = host

    def run(self):
        p = Popen('ping -n 1 ' + self.host, stdout=PIPE)
        m = re.search('Average = (.*)ms', p.stdout.read())
        if m: print 'Round Trip Time: %s ms -' % m.group(1), self.host
        else: print 'Error: Invalid Response -', self.host

if __name__ == '__main__':
    hosts = [
        'www.pylot.org',
        'www.goldb.org',
        'www.google.com',
        'www.yahoo.com',
        'www.techcrunch.com',
        'www.this_one_wont_work.com'
    ]
    Pinger(hosts)
You can find an updated version of the mentioned script that works on both Windows and Linux here
Using the subprocess ping command, decode the response because it is returned as bytes:
import subprocess
ping_response = subprocess.Popen(["ping", "-a", "google.com"], stdout=subprocess.PIPE).stdout.read()
result = ping_response.decode('utf-8')
print(result)
You might try using socket to get the IP of the site and scapy to execute an ICMP ping to that IP.
import gevent
from gevent import monkey

# monkey.patch_all() should be executed before importing any other
# modules that use the standard library
monkey.patch_all()

import socket
from scapy.all import IP, ICMP, sr1

def ping_site(fqdn):
    ip = socket.gethostbyaddr(fqdn)[-1][0]
    print(fqdn, ip, '\n')
    icmp = IP(dst=ip)/ICMP()
    resp = sr1(icmp, timeout=10)
    if resp:
        return (fqdn, True)   # got an ICMP reply, the host is reachable
    else:
        return (fqdn, False)  # no reply within the timeout

sites = ['www.google.com', 'www.baidu.com', 'www.bing.com']
jobs = [gevent.spawn(ping_site, fqdn) for fqdn in sites]
gevent.joinall(jobs)
print([job.value for job in jobs])
On Python 3 you can use ping3.
from ping3 import ping, verbose_ping

ip_host = '8.8.8.8'
if not ping(ip_host):
    raise ValueError('{} is not available.'.format(ip_host))
If you only want to check whether a machine on an IP is active or not, you can just use python sockets.
import socket

s = socket.socket()
try:
    s.connect(("192.168.1.123", 1234)) # You can use any port number here
except Exception as e:
    print(e.errno, e)
Now, according to the error message displayed (or the error number), you can determine whether the machine is active or not.
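As a rough sketch of that interpretation (the errno values below are the usual POSIX ones, and the exact behaviour depends on platform and firewall configuration):
import errno
import socket

s = socket.socket()
s.settimeout(3)
try:
    s.connect(("192.168.1.123", 1234))
    print("port open: machine is definitely up")
except socket.timeout:
    print("no answer at all: machine is down or a firewall drops the packets")
except socket.error as e:
    if e.errno == errno.ECONNREFUSED:
        # the machine sent back a TCP reset, so it is up but the port is closed
        print("machine is up (connection refused)")
    elif e.errno == errno.EHOSTUNREACH:
        print("no route to host: probably down")
    else:
        print(e.errno, e)
finally:
    s.close()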
Use this; it's tested on Python 2.7 and works fine. It returns the ping time in milliseconds on success and returns False on failure.
import platform, subprocess, re

def Ping(hostname, timeout):
    if platform.system() == "Windows":
        command = "ping " + hostname + " -n 1 -w " + str(timeout*1000)
    else:
        command = "ping -i " + str(timeout) + " -c 1 " + hostname
    process = subprocess.Popen(command.split(), stdout=subprocess.PIPE)
    matches = re.match('.*time=([0-9]+)ms.*', process.stdout.read(), re.DOTALL)
    if matches:
        return matches.group(1)
    else:
        return False

Losing stdout data in python

I'm trying to make a Python script which will run a bash script on a remote machine via ssh and then parse its output. The bash script outputs a lot of data (around 5 megabytes of text / 50k lines) on stdout, and here is the problem: I'm getting all the data only in ~10% of cases. In the other 90% of cases I'm getting about 97% of what I expect, and it looks like it is always trimmed at the end. This is what my script looks like:
import subprocess
import re
import sys
import paramiko

def run_ssh_command(ip, port, username, password, command):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(ip, port, username, password)
    stdin, stdout, stderr = ssh.exec_command(command)
    output = ''
    while not stdout.channel.exit_status_ready():
        solo_line = ''
        # Print stdout data when available
        if stdout.channel.recv_ready():
            # Retrieve the first 1024 bytes
            solo_line = stdout.channel.recv(2048)
            output += solo_line
    ssh.close()
    return output

result = run_ssh_command(server_ip, server_port, login, password, 'cat /var/log/somefile')
print "result size: ", len(result)
I'm pretty sure the problem is some internal buffer overflowing, but which one, and how do I fix it?
Thank you very much for any tip!
When stdout.channel.exit_status_ready() starts returning True, there might still be a lot of data on the remote side, waiting to be sent. But you only receive one more chunk of 2048 bytes and quit.
Instead of checking the exit status, you could keep calling recv(2048) until it returns an empty string, which means that no more data is coming:
output = ''
next_chunk = True
while next_chunk:
    next_chunk = stdout.channel.recv(2048)
    output += next_chunk
But really you probably just want:
output = stdout.read()
May I suggest a less crude way to execute a command over ssh, using the Fabric library.
It may look like this (omitting ssh authentication details):
from fabric import Connection

with Connection('user@localhost') as con:
    res = con.run('~/test.sh', hide=True)
    lines = res.stdout.split('\n')
    print('{} lines read.'.format(len(lines)))
given the test script ~/test.sh
#!/bin/bash
for i in {1..1234}
do
    echo "Line $i"
done
all of the output is correctly consumed

Python script to probe smtp servers

I have written this script to probe specific user names on SMTP servers at a single IP address, for a pentest. I am now trying to port this script to run the same tests against a range of IP addresses instead of a single one. Can anyone shed some light on how that can be achieved?
#!/usr/bin/python
import socket
import sys

users = []
for line in sys.stdin:
    line = line.strip()
    if line != '':
        users.append(line)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((sys.argv[1], 25))
fp = s.makefile('rwb')
fp.readline()
fp.write('HELO test.example.com\r\n')
fp.flush()
fp.readline()
for user in users:
    fp.write('VRFY %s\r\n' % user)
    fp.flush()
    print '%s: %s' % (user, fp.readline().strip())
fp.write('QUIT\r\n')
fp.flush()
s.close()
If you're using Python 3.3+, this is mostly simple:
import ipaddress # new in Python 3.3

start_ip, end_ip = however_you_get_these_as_strings()

ip_networks = ipaddress.summarize_address_range(
    ipaddress.IPv4Address(start_ip),
    ipaddress.IPv4Address(end_ip))
# the networks between those two IPs

for network in ip_networks:
    for ip in network:
        # ip is an ipaddress.IPv4Address object,
        # which converts nicely to str
        probe(str(ip))
I would implement this by turning your code as it stands into a function to probe a single host, taking the host name/ip as an argument. Then, loop over your list of hosts (either from the command line, a file, interactive querying of a user, or wherever) and make a call to your single host probe for each host in the loop.
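A minimal sketch of that structure (probe_host below just stands in for your existing single-host VRFY code, and the host list is read from a file named on the command line; both are illustrative choices, not part of the original answer):
import sys

def probe_host(host, users):
    # your existing socket / SMTP VRFY logic for a single host goes here
    print 'probing %s for %d users' % (host, len(users))

users = [line.strip() for line in sys.stdin if line.strip()]

with open(sys.argv[1]) as host_file:  # one host/IP per line
    for host in host_file:
        host = host.strip()
        if host:
            probe_host(host, users)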
OK, so here is what I have done to get this going.
The solution is not elegant at all, but it does the trick. I could not spend more time trying to find a purely Python solution, so after reading the answer from bmhkim above (thanks for the tips) I decided to write a bash script that iterates over a range of IP addresses and, for each one, calls my Python script to do its magic.
#!/bin/bash
for ip in $(seq 1 254); do
    python smtp-probe.py 192.168.1.$ip <users.txt
done
I had some problems with the output, since it was giving me the servers' responses to my probing attempts but not the actual IP addresses that sent those responses, so I adapted the original script to this:
#!/usr/bin/python
import socket
import sys

users = []
for line in sys.stdin:
    line = line.strip()
    if line != '':
        users.append(line)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((sys.argv[1], 25))
print sys.argv[1] # Notice the printing of the script arguments/ip addresses for my output
fp = s.makefile('rwb')
fp.readline()
fp.write('HELO test.example.com\r\n')
fp.flush()
fp.readline()
for user in users:
    fp.write('VRFY %s\r\n' % user)
    fp.flush()
    print '%s: %s' % (user, fp.readline().strip())
fp.write('QUIT\r\n')
fp.flush()
s.close()
Like I said above, that is a hacky way out, I know, but I am not a programmer, so it is the way out I was able to find (if you have a purely Python way to do it, I would very much like to see it). I will definitely revisit this issue once I have a bit more time, and I will keep studying Python until I get this right.
Thanks all for the support with my question!

Port Scanner Script Not Functioning

I'm totally confused as to why my script isn't working.
This script basically scans for servers with port 19 open (CHARGEN).
You enter a list of ips in the format:
1.1.1.1
2.2.2.2
3.3.3.3
4.4.4.4
5.5.5.5
and the script scans every ip in the list to check if port 19 is open, and if it is, it writes the ip to a file.
Here is my code:
#!/usr/bin/env python
#CHARGEN Scanner
#Written by Expedient
import sys
import Queue
import socket
import threading

queue = Queue.Queue()

def check_ip(host, output_file, timeout):
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        result = sock.connect_ex((host, 19))
        if result == 0:
            print "Found: %s" % host
            file = open(output_file, "a")
            file.write(host+"\n")
            file.close()
    except:
        pass

def add_to_queue(queue, host, output_file, timeout):
    queue.put(check_ip(host, output_file, timeout))

if len(sys.argv) < 4:
    print "Usage: %s <ip list> <output file> <timeout>" % sys.argv[0]
    sys.exit()

try:
    open(sys.argv[1])
except:
    print "Unable to open ip list."
    sys.exit()

print "Starting Expedient's CHARGEN Scanner..."
with open(sys.argv[1]) as ip_list:
    for ip in ip_list:
        thread = threading.Thread(target=add_to_queue, args=(queue, ip, sys.argv[2], float(sys.argv[3])))
        thread.start()
Whenever I run the script on a list of CHARGEN-enabled servers that I got from an nmap scan (I double-checked, every server has port 19 open), the script does not write any of the IPs to the output file, which it should, because every IP in the list has port 19 open.
I honestly have no idea why this isn't working, and it would be wonderful if someone could help me out/tell me what I'm doing wrong. Thank you.
Your example as posted is catching all exceptions in your check_ip function without telling you (except: pass). Any number of issues could be causing exceptions in this function, and if an exception is raised on every call you will get no results from your script while also getting no feedback in the log/console about the nature of the failure.
For the purposes of debugging, you should modify your exception handling to explicitly handle any exceptions that you want to pass over, and allow other exceptions to propagate unhandled so that you can determine what your error conditions are.
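As a sketch of what that could look like for check_ip (this is only an illustration of the advice, not code from the answer; it also strips the host string, since lines read from a file keep their trailing newline, which is a likely source of the swallowed exceptions):
def check_ip(host, output_file, timeout):
    host = host.strip()  # lines read from a file keep their trailing '\n'
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(timeout)
    try:
        result = sock.connect_ex((host, 19))
    except socket.gaierror as e:
        # bad hostname/IP: report it instead of silently swallowing it
        print "Could not resolve %s: %s" % (host, e)
        return
    finally:
        sock.close()
    if result == 0:
        print "Found: %s" % host
        with open(output_file, "a") as out:
            out.write(host + "\n")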

Reading from pipe in Python is impossible

Hello, I have the following code in Python 2.6:
import subprocess

command = "tcpflow -c -i any port 5559"
port_sniffer = subprocess.Popen(command, stdout=subprocess.PIPE, bufsize=1, shell=True)
while True:
    line = port_sniffer.stdout.readline()
    #do some stuff with line
The purpose of this code is to sniff the traffic between two processes (A and B) that communicate on port 5559.
Now let me describe the different scenarios I am having:
1) Code above is not running:
A and B are communicating and I can see it clearly using logs, and the Linux command netstat -napl | grep 5559 shows that the processes are communicating on the desired port.
2) Code above is not running and I am sniffing by running tcpflow -c -i any port 5559 directly from the shell:
I can see the communication clearly on the console :-).
3) Code above is running: Processes can't communicate. netstat -napl | grep 5559 prints nothing and the logs give out errors!!!
4) Code above is running in debug mode: I can't seem to be able to step past the line line = port_sniffer.stdout.readline()
I tried using an iterator instead of a while loop (not that it should matter, but I am pointing it out anyway). I also tried different values for bufsize (none, 1, and 8).
Please help!!
So after a quick read through the docs I found these two sentences:
On Unix, if args is a string, the string is interpreted as the name or path of the program to execute
and
The shell argument (which defaults to False) specifies whether to use the shell as the program to execute. If shell is True, it is recommended to pass args as a string rather than as a sequence.
Based on this, I would recommend recreating your command as a list:
command = ["tcpflow -c", "-i any port 5559"] #I don't know linux, so double check this line!!
The general idea is this (also from the docs):
If args is a sequence, the first item specifies the command string, and any additional items will be treated as additional arguments to the shell itself. That is to say, Popen does the equivalent of:
Popen(['/bin/sh', '-c', args[0], args[1], ...])
Additionally, it seems that to read from your process, you should use communicate(). So
while True:
    line = port_sniffer.stdout.readline()
would become
while True:
    line = port_sniffer.communicate()[0]
But keep in mind this note from the docs:
Note The data read is buffered in memory, so do not use this method if the data size is large or unlimited.
If I had to guess, I think the problem you're having is that you aren't running your program as root. TCPFlow needs to be run as a privileged user if you want to be able to sniff other people's traffic (otherwise that'd be a serious security vulnerability). I wrote the following programs and they worked just fine for your scenario.
server.py
#!/usr/bin/python
import socket

s = socket.socket()
host = socket.gethostname()
port = 12345
s.bind((host, port))
s.listen(5)

while True:
    c, addr = s.accept()
    print 'Connection from', addr
    c.send('Test string 1234')
    x = c.recv(1024)
    while x != 'q':
        print "Received " + x
        c.send('Blah')
        x = c.recv(1024)
    print "Closing connection"
    c.close()
client.py
#!/usr/bin/python
import socket, sys
from time import sleep
from datetime import datetime

s = socket.socket()
host = socket.gethostname()
port = 12345
s.connect((host, port))

c = sys.stdin.read(1) # Type a char to send to initiate the sending loop
while True:
    s.send(str(datetime.now()))
    sleep(3)
    msg = s.recv(1024)
flow.py
#!/usr/bin/python
import subprocess

command = 'tcpflow -c -i any port 12345'
sniffer = subprocess.Popen(command, stdout=subprocess.PIPE, shell=True)
while True:
    print sniffer.stdout.readline()
