Check if command line argument is already used - python

I'm trying to do a reverse lookup of an IP address (the command line argument) and then write the result to a text file.
How can I check whether the IP address (the argument) is already recorded in the file? If it is, I need to exit the script.
My script:
import sys, os, re, shlex, urllib, subprocess

cmd = 'dig -x %s #192.1.1.1' % sys.argv[1]
proc = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE)
out, err = proc.communicate()

# Convert to list of str lines
out = out.decode().split('\n')

# Only write the line containing "PTR"
with open("/tmp/test.txt", "w") as f:
    for line in out:
        if "PTR" in line:
            f.write(line)

If the file is not too large you could do:
with open('file.txt', 'r') as f:
    content = f.read()
    if ip in content:
        sys.exit(0)
Now if the file is big and you want to avoid possible memory problems you could use mmap like so:
import mmap

with open("file.txt", "r+b") as f:
    # memory-map the file, size 0 means whole file
    mm = mmap.mmap(f.fileno(), 0)
    # the mapping is bytes, so search for the encoded IP
    if mm.find(ip.encode()) != -1:
        sys.exit(0)
mmap.find(string[, start[, end]]) is documented in the Python mmap module documentation.

Something like:
otherIps = [line.strip() for line in open("<path to ipFile>", 'r')]
theIp = "192.168.1.1"
if theIp in otherIps:
    sys.exit(0)
otherIps contains a list of the IP addresses in ipFile; you then check whether theIp is already in otherIps and, if so, exit the script.
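Putting the pieces together, a minimal sketch of the whole script might look like this. The resolver address and the /tmp/test.txt path come from the question; the early-exit check runs before the dig call, and the file is opened in append mode here so results from earlier runs are kept (the question used "w" mode, which would overwrite them). Note that dig's usual syntax for naming a resolver is @server rather than the # in the question.

import os
import shlex
import subprocess
import sys

ip = sys.argv[1]
outfile = "/tmp/test.txt"

# Exit early if this IP has already been written to the file
if os.path.exists(outfile):
    with open(outfile) as f:
        if ip in f.read():
            sys.exit(0)

# Reverse lookup via dig ("@" is dig's syntax for choosing a resolver)
cmd = 'dig -x %s @192.1.1.1' % ip
proc = subprocess.Popen(shlex.split(cmd), stdout=subprocess.PIPE)
out, _ = proc.communicate()

# Append only the lines containing "PTR", so earlier results survive between runs
with open(outfile, "a") as f:
    for line in out.decode().split('\n'):
        if "PTR" in line:
            f.write(line + '\n')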


find and print - regular expression

This program logs in to my switch and copies the output to a file; the second part finds a keyword and prints the entire line.
When I run the code, the first part works fine, but the second part does not print the line containing the keyword I am looking for.
However, when I run the second part of the code separately, I am able to print the line containing the keyword.
What is wrong here? Please help me out.
import paramiko
import sys
import re

host = "15.112.34.36"
port = 22
username = "admin"
password = "ssmssm99"

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)

commands = ["switchshow"]
for command in commands:
    print(command)
    sys.stdout = open('zones_list.txt', 'w')
    stdin, stdout, stderr = ssh.exec_command(command)
    lines = stdout.readlines()
    lines = "".join(lines)
    print(lines)
ssh.close()

#SECOND PART
#open the zone_list file and search for a keyword
#search for wwn and print the entire line --> doesnt print why?
wwn = "10:00:00:90:fa:73:df:c9"
with open('zones_list.txt', 'r') as f:
    lines = f.readlines()
    for line in lines:
        if re.search(r'10:00:00:90:fa:73:df:c9', line):
            print(line)
            break
You're redirecting stdout to a file with this line:
sys.stdout = open('zones_list.txt', 'w')
All print calls after that point write to the file instead of the console.
Secondly, you open the same file twice, once for writing and once for reading, but in the second case f.readlines() returns an empty list, because the output written through sys.stdout is still sitting in the buffer and has not been flushed to disk yet.
Here is an example that shows why it is problematic to open the file twice:
import sys

# 1. Opening the file without closing it => it will be closed at the end of the program
# 2. stdout now writes into the file
sys.stdout = open('text3', 'w')

# Will be written into the file when the program finishes
print('line1')
print('line2')

# We open the file a second time
with open('text3', 'r') as f:
    # lines will always be an empty list, no matter if there was something in the file before
    lines = f.readlines()
    # Writing to stderr so we see the output in the console (stdout still points at the file)
    sys.stderr.write('length: {}'.format(len(lines)))
    for line in lines:
        # We should never get here
        sys.stderr.write(line)

# Write some more stuff to the file
for i in range(1, 6):
    print('i + {}'.format(i))
print('line3')
The first part of the script redirected stdout to the file, so print(line) in the second part is also writing to the file instead of displaying the matching line. Also, you never closed the file in the first part, so buffered output won't have been flushed to the file by the time you read it back.
Don't reassign sys.stdout in the first part; open the file into an ordinary variable and write to it (or pass it to print() with the file= argument, as shown below).
Another problem is that you're reopening the file in 'w' mode for each command in commands, which truncates it every time. You should open the file once before the loop, not each time through the loop.
wwn isn't a regular expression, so there's no need to use re.search(); just use if wwn in line:. And you don't need f.readlines(); just loop over the file object itself.
import paramiko
import sys

host = "15.112.34.36"
port = 22
username = "admin"
password = "ssmssm99"

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)

commands = ["switchshow"]
with open('zones_list.txt', 'w') as f:
    for command in commands:
        print(command)
        stdin, stdout, stderr = ssh.exec_command(command)
        lines = stdout.readlines()
        lines = "".join(lines)
        print(lines, file=f)
ssh.close()

#SECOND PART
#open the zone_list file and search for a keyword
#search for wwn and print the entire line
wwn = "10:00:00:90:fa:73:df:c9"
with open('zones_list.txt', 'r') as f:
    for line in f:
        if wwn in line:
            print(line)
            break
I got the code straight. And a couple of questions to bug you and Maddi.
This code asks the user to enter the "wwn" to search for on the host and prints the line which contains the "wwn" number.
Question 1: I would run this code multiple times, whenever I want to search for a "wwn"... and I would like a clean "zones_list.txt" file each time I start. So I opened the file in 'w' mode -- this clears it each time, right? Any other suggestion?
Question 2: Is there any other way to store the output, search it for a string and print the result? I guess storing the data in a file and searching through it is the best? (A sketch of a file-less approach is shown after the code below.)
Question 3: I would like to add a GUI where the user is asked to enter the "wwn" and the output is printed. Any suggestion?
Thanks again :)
import paramiko
import sys
import re

#host details to fetch data - nodefind
host = "15.112.34.36"
port = 22
username = "admin"
password = "ssmssm99"

#shell into the client host
ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)

#asking user to enter the wwn to search in the host
wwn = input('Enter the wwn to be searched >')

#for example command is: nodefind 20:34:00:02:ac:07:e9:d5
commands = ["nodefind " + wwn, "switchshow"]

f = open('zones_list.txt', 'w')
for command in commands:
    print(command)
    stdin, stdout, stderr = ssh.exec_command(command)
    lines = stdout.readlines()
    lines = "".join(lines)
    print(lines, file=open('zones_list.txt', 'a'))
ssh.close()

#print a particular line in console to the user
f = open('zones_list.txt', 'r')
for line in f:
    if wwn in line:
        print(line)
        break
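On Question 2: you don't have to go through a file at all. Here is a minimal sketch that keeps the command output in one in-memory string and searches it directly; the connection details, the wwn prompt and the nodefind/switchshow commands are copied from the code above, so treat them as placeholders.

import paramiko

host = "15.112.34.36"
port = 22
username = "admin"
password = "ssmssm99"

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(host, port, username, password)

wwn = input('Enter the wwn to be searched >')
commands = ["nodefind " + wwn, "switchshow"]

# Collect all output in one in-memory string instead of a file
output = ""
for command in commands:
    stdin, stdout, stderr = ssh.exec_command(command)
    output += "".join(stdout.readlines())
ssh.close()

# Search the collected output and print the matching line
for line in output.splitlines():
    if wwn in line:
        print(line)
        break

A file only makes sense if you need the output to persist between runs; for a one-shot search, a plain string is enough.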

How to open and print the contents of a file line by line using subprocess?

I am trying to write a python script which SSHes into a specific address and dumps a text file. I am currently having some issues. Right now, I am doing this:
temp = "cat file.txt"
need = subprocess.Popen("ssh {host} {cmd}".format(host='155.0.1.1', cmd=temp),shell=True,stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()
print(need)
This is the naive approach where I am basically opening the file, saving its output to a variable and printing it. However, this really messes up the format when I print "need". Is there any way to simply use subprocess and read the file line by line? I have to be SSHed into the address in order to dump the file, otherwise the file will not be found, which is why I am not simply doing:
f = open(temp, "r")
file_contents = f.read()
print (file_contents)
f.close()
Any help would be appreciated :)
You don't need to use the subprocess module to print the entire file line by line. You can use pure Python.

f = open("file.txt", "r")  # open the file itself, not the "cat file.txt" command string
file_contents = f.read()
f.close()

# turn the file contents into a list of lines
file_lines = file_contents.split("\n")

# print all the items in the list
for file_line in file_lines:
    print(file_line)
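If the file really has to be read over ssh, the formatting problem in the question is simply that communicate() returns a (stdout, stderr) tuple of bytes objects, so printing need shows their raw repr with \n escapes. A minimal sketch (the host and file name are taken from the question) that decodes the output and prints it line by line:

import subprocess

host = "155.0.1.1"
cmd = "cat file.txt"

proc = subprocess.Popen(
    "ssh {host} {cmd}".format(host=host, cmd=cmd),
    shell=True,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = proc.communicate()

# out is a bytes object; decode it and iterate over its lines
for line in out.decode().splitlines():
    print(line)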

Use process substitution as input file to Python twice

Consider the following Python script:
#test.py
import sys

inputfile = sys.argv[1]

with open(inputfile, 'r') as f:
    for line in f.readlines():
        print line

with open(inputfile, 'r') as f:
    for line in f.readlines():
        print line
Now I want to run test.py on a substituted process, e.g.,
python test.py <( cat file | head -10)
It seems the second f.readlines() returns empty. Why is that, and is there a way to do it without having to specify two input files?
Why is that?
Process substitution works by creating a named pipe, so all of the data is consumed by the first open/read loop; nothing is left for the second one.
Is there a way to do it without having to specify two input files?
Yes: buffer the data before using it.
Here is some sample code:
import sys
import StringIO

inputfile = sys.argv[1]
buffer = StringIO.StringIO()

# buffering
with open(inputfile, 'r') as f:
    buffer.write(f.read())

# use it
buffer.seek(0)
for line in buffer:
    print line

# use it again
buffer.seek(0)
for line in buffer:
    print line
readlines() reads all available lines from the input at once, so the second call returns nothing: there is nothing left to read. You can assign the result of readlines() to a local variable and use it as many times as you want:
import sys

inputfile = sys.argv[1]

with open(inputfile, 'r') as f:
    lines = f.readlines()

for line in lines:
    print line

# use it again
for line in lines:
    print line

Reading a file in python via read()

Consider this snippet
from sys import argv
script, input_file = argv
def print_all(f):
    print f.read()
current_file = open(input_file)
print_all(current_file)
Regarding line 4: why do I have to use "print" along with "f.read()"? When I use just f.read(), it doesn't print anything. Why?
f.read() reads the file from disk into memory and returns its contents as a string; it doesn't display anything by itself. print is what writes that string to the console. You will find more info on input and output in the documentation.
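A tiny sketch of the difference (example.txt is a hypothetical file name):

f = open("example.txt")   # hypothetical file
contents = f.read()       # contents now holds the text, but nothing is displayed yet
print(contents)           # print is what actually writes it to the console
f.close()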

How to test a directory of files for gzip and uncompress gzipped files in Python using zcat?

I'm in my 2nd week of Python and I'm stuck on a directory of zipped/unzipped logfiles, which I need to parse and process.
Currently I'm doing this:
import os
import sys
import operator
import zipfile
import zlib
import glob
import gzip
import subprocess

if sys.version.startswith("3."):
    import io
    io_method = io.BytesIO
else:
    import cStringIO
    io_method = cStringIO.StringIO

for f in glob.glob('logs/*'):
    file = open(f, 'rb')
    new_file_name = f + "_unzipped"
    last_pos = file.tell()
    # test for gzip
    if (file.read(2) == b'\x1f\x8b'):
        file.seek(last_pos)
        # unzip to new file
        out = open(new_file_name, "wb")
        process = subprocess.Popen(["zcat", f], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        while True:
            if process.poll() != None:
                break
        output = io_method(process.communicate()[0])
        exitCode = process.returncode
        if (exitCode == 0):
            print "done"
            out.write(output)
            out.close()
        else:
            raise ProcessException(command, exitCode, output)
I've "stitched" this together using these SO answers (here) and blog posts (here).
However, it does not seem to work: my test file is 2.5 GB and the script has been chewing on it for 10+ minutes, and I'm not really sure that what I'm doing is correct anyway.
Question:
If I don't want to use the gzip module and need to decompress chunk by chunk (the actual files are >10 GB), how do I uncompress and save to a file using zcat and subprocess in Python?
Thanks!
This should read the first line of every file in the logs subdirectory, unzipping as required:
#!/usr/bin/env python
import glob
import gzip
import subprocess

for f in glob.glob('logs/*'):
    if f.endswith('.gz'):
        # Open a compressed file. Here is the easy way:
        # file = gzip.open(f, 'rb')
        # Or, here is the hard way:
        proc = subprocess.Popen(['zcat', f], stdout=subprocess.PIPE)
        file = proc.stdout
    else:
        # Otherwise, it must be a regular file
        file = open(f, 'rb')
    # Process file, for example:
    print f, file.readline()
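If the goal is to write the uncompressed data to a new file rather than process it line by line, a minimal sketch of the chunk-by-chunk approach might look like this. The logs/ directory and the _unzipped suffix come from the question; the 1 MiB chunk size is arbitrary. Reading the pipe in fixed-size chunks avoids communicate(), which would buffer the entire decompressed file in memory.

import glob
import subprocess

CHUNK_SIZE = 1024 * 1024  # 1 MiB, arbitrary

for f in glob.glob('logs/*.gz'):
    new_file_name = f + "_unzipped"
    proc = subprocess.Popen(['zcat', f], stdout=subprocess.PIPE)
    with open(new_file_name, 'wb') as out:
        # copy the decompressed stream chunk by chunk
        while True:
            chunk = proc.stdout.read(CHUNK_SIZE)
            if not chunk:
                break
            out.write(chunk)
    proc.wait()

shutil.copyfileobj(proc.stdout, out, CHUNK_SIZE) would do the same copy loop in a single call.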
