My question is very similar to others' here but I haven't found the exact answer I'm looking for so I hope a veteran Python user will be able to further me along.
I'm learning scripting methods for my job but they won't send me to a training center to learn it, so my Chief Technical Officer said that I should learn how to create log files from summarized Wireshark collection reports. I've had great luck in Bash but he wants me to become fluent in Python - without any help or background in scripting/programming this is a difficult task. I am attempting to essentially grep from the Wireshark report to a new file, giving a count and list of occurrences of DNS traffic. The only thing is that, in order to be effective, it needs to work on new data sets at every use, otherwise this is a meaningless exercise.
f1 = open('/home/user/file', 'r')
for line in f1:
    if "DNS" in line:
        print line
Two questions:
1) How would I put a count on each DNS occurrence?
2) How would I pipe/print to a new txt file?
This might be a bit more advanced; however, for file processing I really like to do some generator pipelining!
# this is a generator expression (an iterable) which only yields a
# line containing "DNS" when it is requested during iteration
dns_lines = (line for line in open('/home/user/file', 'r') if "DNS" in line)

# the with-statement uses Python's magic methods (the context-manager protocol)
# to take care of opening and closing the file
with open("output", 'w') as f:
    # enumerate works on generators:
    # it numbers each item that is iterated over
    # and yields a tuple (count, line)
    for count_line in enumerate(dns_lines):
        f.write("%d - %s" % count_line)
More on Generators and file processing here by David Beazley
I assumed that you want to learn a bit more about how powerful Python is, hence my long comments. :)
// Edit:
A bit more of an explanation regarding what is going to happen here:
The first line will just generate a generator object.
The file-reading will start running in the second for-loop.
As soon as this iteration is started the file will be read until a line containing "DNS" is found.
A tuple will be created (count, line) and handed over to this very iteration.
The tuple is written into a file using the format string!
The next iteration will take place and request the next line, which will start the file-reading again.
I hope this helps!
Generators prevent loading a whole list into memory and they allow for a lot of neat tricks and pipelined processing. However, there is a lot more to it than can be covered here in this post!
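For instance, a minimal sketch of such a pipeline, reusing the placeholder path and the "DNS" filter from above (the step names are just illustrative):
# a small generator pipeline: each step lazily consumes the previous one,
# so the whole file is never held in memory at once
lines = (line for line in open('/home/user/file', 'r'))         # source
dns_lines = (line for line in lines if "DNS" in line)           # filter
numbered = ("%d - %s" % pair for pair in enumerate(dns_lines))  # transform

with open("output", 'w') as out:
    out.writelines(numbered)  # iteration (and file reading) only happens here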
You can simply initialize a new variable to count your items
counter = 0
for line in open('/home/user/file', 'r'):
    if 'DNS' in line:
        counter += 1
        print counter
As for saving your data, you can either do it in Python or just print out the data and redirect it to a file:
counter = 0
data = []
for line in open('/home/user/file', 'r'):
    if 'DNS' in line:
        counter += 1
        data.append(line.rstrip('\n'))  # strip the newline so join() doesn't double-space

to_s = "\n".join(data)
f = open('out.txt', 'w')
f.write(to_s)
f.close()
input t1
P95P,71655,LINC-JP,pathogenic
P95P,71655,LINC-JP,pathogenic
P71P,71655,LINC-JP,pathogenic
P71P,71655,LINC-JP,pathogenic
output op
P95P,71655,LINC-JP,pathogenic
P71P,71655,LINC-JP,pathogenic
myCode
def dup():
    fi = open("op", "a")
    l = []; final = ""
    q = []; dic = {}
    for i in open("t1"):
        k = i.split(",")
        q.append(k[1])
        q.append(k[0])
        if q in l:
            pass
        else:
            final = final + i.strip() + "\n"
            fi.write(str(i.strip()))
            fi.write("\n")
            l.append(q)
        q = []
        #print i.strip()
    fi.close()
    return final.strip()

d = dup()
In the above input, lines 1,2 and lines 3,4 are duplicates, so these duplicates are removed in the output. The entries in my input file number around 10^7.
Why has my code been running for the past 24 hours on a 76 MB input file, without yet completing even one pass of the entire input file? It works fine for small files.
Can anyone please point out the reason for this long run time? How can I optimize my program? Thanks.
You're using an O(n²) algorithm, which scales poorly for larger files:
for i in open("t1"):  # linear pass of the file takes O(n) time
    ...
    if q in l:        # linear scan of list l takes O(n) time
        ...
    ...
You should consider using a set (i.e. make l a set) or itertools.groupby if duplicates will always be next to each other. These approaches will be O(n).
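If your duplicates are guaranteed to be adjacent, as they are in the sample input, a rough sketch of the itertools.groupby route might look like this, keying on the same pair of comma-separated fields you build into q (file names reused from your code):
import itertools

def key(line):
    k = line.split(",")
    return k[1], k[0]   # the same pair you put in q, as a tuple

with open("t1") as src, open("op", "w") as dst:
    # groupby lumps together consecutive lines with the same key,
    # so we keep only the first line of each run of duplicates
    for _, group in itertools.groupby(src, key=key):
        dst.write(next(group))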
If you have access to a Unix system, uniq is a nice utility that is made for your problem.
uniq input.txt output.txt
see https://www.cs.duke.edu/csl/docs/unix_course/intro-89.html
I know this is a Python question, but sometimes Python is not the tool for the task.
And you can always embed a system call in your python script.
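For example, a minimal sketch of such an embedded call (keeping in mind that uniq only collapses adjacent duplicate lines, which matches the sample input; the file names are the ones from the question):
import subprocess

# run "uniq t1 op": read t1, write the de-duplicated lines to op
subprocess.call(["uniq", "t1", "op"])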
It's not clear why you're building a huge string (final) that holds the same thing the file does, or what dic is for. In terms of performance, you can look up x in y much faster if y is a set than if y is a list. Also, a minor point; shorter variable names don't improve performance, so use good ones instead. I would suggest:
def deduplicate(infile, outfile):
seen = set()
#final = []
with open(outfile, "a") as out, open(infile) as in_:
for line in in_:
check = tuple(line.split(",")[:2])
if check not in seen:
#final.append(line.strip())
out.write(line) # why 'strip' the '\n' then 'write' a new one?
seen.add(check)
#return "\n".join(final)
If you do really need final, make it a list until the last moment (see commented-out lines) - gradual string concatenation means the creation of lots of unnecessary objects.
There are a couple things that you are doing very inefficiently. The largest is that you made l a list, so the line if q in l has to search through everything in the list already in order to check if q matches it. If you make l a set, the membership check can be done using a hash calculation and array lookup, which take the same (small) amount of time no matter how much you add to the set (though it will cause l not to be read in the order that it was written).
Other little speedups that you can do include:
Using a tuple (k[1], k[0]) instead of a list for q.
You are writing to your output file fi on every loop iteration. Your OS will try to batch and background the writes, but it may be faster to just do one big write at the end (see the sketch below). I am not sure on this point, but try it.
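Putting those suggestions together with the set idea, a sketch (keeping your original file names and append mode) might look roughly like this:
def dup():
    seen = set()
    kept = []
    for i in open("t1"):
        k = i.split(",")
        q = (k[1], k[0])           # a tuple, so it can be stored in a set
        if q not in seen:
            seen.add(q)
            kept.append(i)
    with open("op", "a") as fi:
        fi.writelines(kept)        # one big write instead of one per line
    return "".join(kept).strip()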
I'm working on a Python script to go through two files - one containing a list of UUIDs, the other containing a large amount of log entries - each line containing one of the UUIDs from the other file. The purpose of the program is to create a list of the UUIDS from file1, then for each time that UUID is found in the log file, increment the associated value for each time a match is found.
So long story short, count how many times each UUID appears in the log file.
At the moment, I have a list which is populated with UUID as the key, and 'hits' as the value. Then another loop which iterates over each line of the log file, and checking if the UUID in the log matches a UUID in the UUID list. If it matches, it increments the value.
for i, logLine in enumerate(logHandle):  #start matching UUID entries in log file to UUID from rulebase
    if logFunc.progress(lineCount, logSize):  #check progress
        print logFunc.progress(lineCount, logSize)  #print progress in 10% intervals
    for uid in uidHits:
        if logLine.count(uid) == 1:  #for each UUID, check the current line of the log for a match in the UUID list
            uidHits[uid] += 1  #if matched, increment the relevant value in the uidHits list
            break  #as we've already found the match, don't process the rest
    lineCount += 1
It works as it should - but I'm sure there is a more efficient way of processing the file. I've been through a few guides and found that using 'count' is faster than using a compiled regex. I thought reading files in chunks rather than line by line would improve performance by reducing the amount of disk I/O time, but the performance difference on a ~200MB test file was negligible. If anyone has any other methods I would be very grateful :)
Think functionally!
Write a function which will take a line of the log file and return the uuid. Call it uuid, say.
Apply this function to every line of the log file. If you are using Python 3 you can use the built-in function map; in Python 2, use itertools.imap so the mapping stays lazy.
Pass this iterator to a collections.Counter.
collections.Counter(map(uuid, open("log.txt")))
This will be pretty much optimally efficient.
A couple comments:
This completely ignores the list of UUIDs and just counts the ones that appear in the log file. You will need to modify the program somewhat if you don't want this (see the sketch after these comments).
Your code is slow because you are using the wrong data structures. A dict is what you want here.
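For instance, a sketch that keeps only the UUIDs listed in your first file; the uuid() line-parsing function is the one described above and is assumed to exist, since how to extract the UUID depends on your log format:
import collections

# the UUIDs you actually care about, one per line in file1
wanted = set(line.strip() for line in open("file1"))

# count only log lines whose UUID is in the wanted set
counts = collections.Counter(u for u in map(uuid, open("log.txt")) if u in wanted)

# UUIDs that never appear simply won't be in counts;
# counts.get(some_uuid, 0) returns 0 for those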
This is not a 5-line answer to your question, but there was an excellent tutorial given at PyCon'08 called Generator Tricks for System Programmers. There is also a followup tutorial called A Curious Course on Coroutines and Concurrency.
The Generator tutorial specifically uses big log file processing as its example.
Like folks above have said, with a 10GB file you'll probably hit the limits of your disk pretty quickly. For code-only improvements, the generator advice is great. In python 2.x it'll look something like
uuid_generator = (line.split(SPLIT_CHAR)[UUID_FIELD] for line in file)
It sounds like this doesn't actually have to be a python problem. If you're not doing anything more complex than counting UUIDs, Unix might be able to solve your problems faster than python can.
cut -d${SPLIT_CHAR} -f${UUID_FIELD} log_file.txt | sort | uniq -c
Have you tried mincemeat.py? It is a Python implementation of the MapReduce distributed computing framework. I'm not sure if you'll see a performance gain, since I've never processed 10GB of data with it before, but you might explore this framework.
Try measuring where most time is spent, using a profiler http://docs.python.org/library/profile.html
Where best to optimise will depend on the nature of your data: if the list of UUIDs isn't very long, you may find, for example, that a large proportion of the time is spent on the "if logFunc.progress(lineCount, logSize)" check. If the list is very long, it could help to save the result of uidHits.keys() to a variable outside the loop and iterate over that instead of the dictionary itself, but Rosh Oxymoron's suggestion of finding the id first and then checking for it in uidHits would probably help even more.
In any case, you can eliminate the lineCount variable, and use i instead. And find(uid) != -1 might be better than count(uid) == 1 if the lines are very long.
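As a minimal illustration of the profiling suggestion above (process_log is just a stand-in name for whatever function wraps your loop):
import cProfile

# prints a per-function breakdown, sorted by cumulative time, so you can
# see whether the progress check, the counting loop, or the I/O dominates
cProfile.run('process_log()', sort='cumulative')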
I have two files, say source and target. I compare each element in source to check if it also exists in target. If it does not exist in target, I print it ( the end goal is to have 0 difference). Here is the code I have written.
def finddefaulters(source, target):
    f = open(source, 'r')
    g = open(target, 'r')
    reference = f.readlines()
    done = g.readlines()
    for i in reference:
        if i not in done:
            print i,
I need help with
How would this code be rated on a scale of 1-10
How can I make it better and optimal if the file sizes are huge.
Another question - when I read all the lines as list elements, they are interpreted as 'element\n', so for a correct comparison I have to add a newline at the end of each file. Is there a way to strip the newlines so I do not have to add a newline at the end of the files? I tried rstrip, but it did not work.
Thanks in advance.
Regarding efficiency: The method you show has an asymptotic runtime complexity of O(m*n) where m and n are the number of elements in reference and done, i.e. if you double the size of both lists, the algorithm will run 4 times longer (times a fixed constant that is uninteresting to theoretical computer scientists). If m and n are very large, you will probably want to choose a faster algorithm, e.g. sort the two lists first using .sort() (runtime complexity: O(n * log(n))) and then go through the lists just once (runtime complexity: O(n)). That algorithm has a worst-case runtime complexity of O(n * log(n)), which is already a big improvement. However, you trade readability and simplicity of the code for efficiency, so I would only advise you to do this if absolutely necessary.
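A sketch of one such faster approach, using a set for the membership test instead of the sort-and-merge described above (a set gives O(1) average lookups, so the whole pass is roughly O(m + n)); stripping the newlines also sidesteps the 'element\n' issue you mention:
def finddefaulters(source, target):
    with open(target) as g:
        done = set(line.strip() for line in g)   # one-time O(n) build
    with open(source) as f:
        for line in f:
            if line.strip() not in done:         # O(1) average lookup
                print line.strip(),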
Regarding coding style: You do not .close() the file handles, which you should. Instead of opening and closing the file handles yourself, you could use Python's with statement. Also, if you like the functional style, you could replace the for loop with a list comprehension:
for i in reference:
    if i not in done:
        print i,
then becomes:
items = [i.strip() for i in reference if i not in done]
print ' '.join(items)
However, this way you will not see any progress while the list is being composed.
As joaquin already mentions, you can loop over f directly instead of f.readlines() as file handles support the iterator protocol.
Some ideas:
1) Use with to open files safely:
with open(source) as f:
    .............
The with statement is used to wrap the execution of a block with methods defined by a context manager. This allows common try...except...finally usage patterns to be encapsulated for convenient reuse.
2) You can iterate over the lines of a file instead of using readlines:
for line in f:
    ..........
3) Although for this short snippet it could be enough, try to use more informative names for your variables. One-letter names are not recommended.
4) If you want to take advantage of the Python standard library, try the functions in the difflib module. For example:
make_file(fromlines, tolines[, fromdesc][, todesc][, context][, numlines])
Compares fromlines and tolines (lists of strings) and returns a string which is a complete HTML file containing a table showing line by line differences with inter-line and intra-line changes highlighted.
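A minimal sketch of how that might be used for your two files (the file names and the diff.html output path are just placeholders):
import difflib

source, target = 'source.txt', 'target.txt'   # placeholder paths

with open(source) as f, open(target) as g:
    fromlines, tolines = f.readlines(), g.readlines()

html = difflib.HtmlDiff().make_file(fromlines, tolines, fromdesc=source, todesc=target)
with open('diff.html', 'w') as out:
    out.write(html)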
Hey guys, beginner here. I have written a program that outputs files to .txt's and am using another to read them and use them. I have used a list to store these values (len(..) gives me 100 for all files). However, whenever I run this:
for w in range(1,20): # i want files file01-file20 excluding file00
    for x in range(100):
        c=c+1 #counter to keep list position on f=0
        exec "f=open('file%02d.txt','r').readlines()"%w #stores data from file00,file01,file02...
        f00=open('file00.txt','r').readlines() #same as ^ but from file00
        for y in range(100):
            xvp=float(f[c].rstrip('\n')) #the error is on this line; the file are stored in vertical order
            pvp=float(f00[y].rstrip('\n')) #maybe even this one
            #and i do stuff with those values...
I get, on line 12:
xvp=float(f[c].rstrip('\n'))
IndexError: list index out of range
note: there are 100 numbers stored on separate lines in the .txt's
please, if there is any way to help you help me, let me know
thanks
You seem to be incrementing c two thousand times (20 times 100 -- actually only 1900 times, since range(1,20) will not reach the value 20, as you seem to desire in a comment) -- so of course you're going out of range if you use it to index a list of 100! The whole code is rather a mess and I suggest refactoring it radically, to avoid exec and do things the Python way. Assuming Python 2.6 or better (in 2.5, you need a from __future__ import with_statement at the start of your module):
f00 = open('file00.txt').readlines()
for w in range(1, 21):
    for x in range(100):
        with open('file%02d.txt' % w) as f:
            for line in f:
                xvp = float(line)
                for line00 in f00:
                    rvp = float(line00)
                    do_stuff(xvp, rvp)
I don't know if this is the logic you want -- coupling every line of file00.txt with each line from the 20 other files -- but at least this makes it clear which lines are coupled up with which;-). If what you want is to only couple the first line of file00.txt with the first line from each of the others, then second line with second lines, etc, then add import itertools at the start of your module and change the contents of the with into:
for line00, line in itertools.izip(f00, f):
    rvp = float(line00)
    xvp = float(line)
    do_stuff(xvp, rvp)
and so forth.
Note that I'm reading all of file00.txt in memory once and for all (into the f00 list of lines) because apparently you need to loop on those contents more than once, but that's not needed for the other files.
An obvious optimization is to convert file00.txt's lines to floats only once, replacing the f00 = statement with
with open('file00.txt') as f:
    rvps = [float(line) for line in f]
then use rvps directly instead of repeating the conversion every time on the strings in f00 -- for example, in the second version (the one using itertools.izip):
for rvp, line in itertools.izip(rvps, f):
    xvp = float(line)
    do_stuff(xvp, rvp)
Edit: I see I've done a number of tiny enhancements while hardly realizing I was doing so, maybe I'd better explain them;-). No need to pass 'r' when opening a file for reading (can't hurt, but it's quite idiomatic to omit it). No need to strip trailing (or for that matter leading) whitespace from a string before calling float on it -- float happily skips all such leading and trailing whitespace itself. I did fix what apparently was another bug (you'd never deal with file20.txt) by fixing the applicable range to range(1, 21).
The with open(...) as f: statements do the opening, bind name f to the open file object, and, as soon as the block of statements they control is finished, guarantee that the file is properly closed -- it should almost invariably be used in preference to a stand-alone open, because ensuring all files are closed ASAP is really very good practice (the with statement has many other excellent use cases, but this is the single most frequent one, and the only one that happens to be necessary for this functionality).
Looping directly on an open file object f (provided the file is opened in text mode, as is the default and applies throughout here), for line in f:, provides one after the other the lines of f (without ever needing to keep them all in memory at once) and is an extremely popular and good Pythonic idiom.
The construct rvps = [float(line) for line in f], which I use in my recommended optimization, is known as a "list comprehension" and it's a nicely speedy and compact alternative to a loop that builds a new list.
itertools.izip, given a number of iterables, provides a single iterable whose items are tuples made by the items of the other iterables "walked in lockstep". The built-in zip is similar, but (in Python 2) it builds a list in memory, which itertools.izip avoids, so it's good practice to learn to use the itertools version to avoid wasting memory (not really important for small files like the ones you have, but good habits are best learned and "just applied" rather than having to reflect on them every single time -- just as one doesn't start every morning pondering whether one should brush one's teeth, but just goes and does so as a matter of good habit;-).
I'm sure there's more, but this is what comes to mind off-hand - feel free to ask if I can be of further assistance!
there are 100 numbers stored on separate lines in the .txt's
but in
for w in range(1,20): # i want files file01-file20 excluding file00
    for x in range(100):
        c=c+1 #counter to keep list position on f=0
you are incrementing c 20*100 = 2000 times.
Maybe you need c = 0 in "w" cycle or just use x instead of c?
Based on how you describe your files, you are indexing into them incorrectly by using c, which is incremented on each iteration of the second loop and so reaches values of up to 2000. Using x seems to be the logical choice.
#restructured for efficiency
file = open('file00.txt','r')
f00 = file.readlines() #no need to reopen the file for every iteration
file.close() #always close the file when done with it

for w in range(1,20):
    file = open('file%02d.txt'%w,'r')
    f = file.readlines() #only open once per iteration
    file.close()
    for x in range(100):
        xvp = float(f[x].rstrip('\n'))
        for y in range(100):
            pvp = float(f00[y].rstrip('\n'))
            #do stuff
So lets say I'm using Python's ftplib to retrieve a list of log files from an FTP server. How would I parse that list of files to get just the file names (the last column) inside a list? See the link above for example output.
Using retrlines() probably isn't the best idea there, since it just prints to the console and so you'd have to do tricky things to even get at that output. A likely better bet would be to use the nlst() method, which returns exactly what you want: a list of the file names.
The best answer
You may want to use ftp.nlst() instead of ftp.retrlines(). It will give you exactly what you want.
If you can't, read the following :
Generators for sysadmin processes
In his now famous tutorial, Generator Tricks For Systems Programmers: An Introduction, David M. Beazley gives a lot of recipes to answer this kind of data problem with quick and reusable code.
E.g.:
# empty list that will receive all the log entries
log = []

# we pass a callback function to bypass the print_line that would be called by retrlines
# we do that only because we cannot use something better than retrlines
ftp.retrlines('LIST', callback=log.append)

# we use rsplit because it is more efficient in our case if we have a big file
files = (line.rsplit(None, 1)[1] for line in log)

# get your file list
files_list = list(files)
Why don't we generate the list immediately?
Well, because doing it this way offers you much more flexibility: you can apply any intermediate generator to filter files before turning it into files_list. It's just like a pipe: add a line and you add a processing step, without overhead (since these are generators). And if you get rid of retrlines, it still works, and it's even better because you never store the whole list, even once.
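For example, a hypothetical intermediate step that keeps only names ending in .log can be dropped into the middle of the pipeline without touching anything else:
files = (line.rsplit(None, 1)[1] for line in log)
log_files = (name for name in files if name.endswith('.log'))  # extra filtering step
files_list = list(log_files)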
EDIT: well, I read the comment on the other answer and it says that this won't work if there is any space in the name.
Cool, this illustrates why this method is handy: if you want to change something in the process, you just change one line. Swap:
files = (line.rsplit(None, 1)[1] for line in log)
and
# split the line, take every item from field 8 onward, then join them back together
files = (' '.join(line.split()[8:]) for line in log)
OK, this may not be obvious here, but for huge batch-processing scripts, it's nice :-)
And a slightly less-optimal method, by the way, if you're stuck using retrlines() for some reason, is to pass a function as the second argument to retrlines(); it'll be called for each item in the list. So something like this (assuming you have an FTP object named 'ftp') would work as well:
filenames = []
ftp.retrlines('LIST', lambda line: filenames.append(line.split()[-1]))
The list 'filenames' will then be a list of the file names.
Is there any reason why ftplib.FTP.nlst() won't work for you? I just checked and it returns only names of the files in a given directory.
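A minimal sketch of the nlst() route (host and credentials are placeholders):
from ftplib import FTP

ftp = FTP('ftp.example.com')    # placeholder host
ftp.login('user', 'password')   # placeholder credentials
filenames = ftp.nlst()          # just the names in the current directory
ftp.quit()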
Since every filename in the output starts at the same column, all you have to do is get the position of the dot on the first line:
drwxrwsr-x 5 ftp-usr pdmaint 1536 Mar 20 09:48 .
Then slice the filename out of the other lines using the position of that dot as the starting index.
Since the dot is the last character on the line, you can use the length of the line minus 1 as the index. So the final code is something like this:
lines = []
ftp.retrlines('LIST', lines.append)  # retrlines returns a status string, so collect the listing lines via a callback
filename_index = len(lines[0]) - 1
files = []
for line in lines:
    files.append(line[filename_index:])
If the FTP server supports the MLSD command, then please see section “single directory case” from that answer.
Use an instance (say ftpd) of the FTPDirectory class, call its .getdata method with a connected ftplib.FTP instance in the correct folder, and then you can:
directory_filenames= [ftpfile.name for ftpfile in ftpd.files]
I believe it should work for you.
file_name_list = [' '.join(each_file_detail.split()).split()[-1] for each_file_detail in file_list_from_log]
NOTES -
Here I am making an assumption that you want the data in the program (as a list), not on the console.
each_file_detail is each line that is being produced by the program.
' '.join(each_file_detail.split())
replaces multiple consecutive spaces with a single space.
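For instance, with a single made-up LIST line it picks out the last field (note that, as with the other split()-based answers, a filename containing spaces would be truncated):
file_list_from_log = ['-rw-r--r--   1 ftp-usr  pdmaint      1536 Mar 20 09:48 report.log']
file_name_list = [' '.join(each_file_detail.split()).split()[-1]
                  for each_file_detail in file_list_from_log]
# file_name_list == ['report.log']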