Check if files in dir are the same - python

I have a folder of 5000+ images in jpeg/png etc. How can I check if any of the images are the same? The images were collected through web scraping and have been sequentially renamed, so I cannot compare file names.
I am currently checking whether the hashes are the same, but this is a very slow process. I am currently using:
def sameIm(file_name1, file_name2):
    hash = imagehash.average_hash(Image.open(path + file_name1))
    otherhash = imagehash.average_hash(Image.open(path + file_name2))
    return (hash == otherhash)
Then I use nested loops. Comparing one image to the 5000+ others takes about 5 minutes, so comparing each to each would take days to compute.
Is there a faster way to do this in Python? I was thinking of parallel processing, but would that still take a long time?
Or is there another way to compare the files that is faster?
Thanks

There is indeed a much faster way of doing this:
import collections
import glob
import os

import imagehash
from PIL import Image

def dupDetector(dirpath, ext):
    hashes = collections.defaultdict(list)
    for fpath in glob.glob(os.path.join(dirpath, "*.{}".format(ext))):
        h = imagehash.average_hash(Image.open(fpath))
        hashes[h].append(fpath)
    for h, fpaths in hashes.items():
        if len(fpaths) == 1:
            print(fpaths[0], "is one of a kind")
            continue
        print("The following files are duplicates of each other (with the hash {}): \n\t{}".format(h, '\n\t'.join(fpaths)))
Using the dictionary with the file hash as a key gives you O(1) lookups, which means you don't need to do the pairwise comparisons. You therefore go from a quadratic runtime to a linear runtime (yay!)

Why not compute hash only once?
hashes = [imagehash.average_hash(Image.open(path + fn)) for fn in file_names]

def compare_hashes(hash1, hash2):
    return hash1 == hash2
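Once the hashes are computed up front, finding duplicates is a matter of grouping file names by hash rather than comparing every pair. A minimal sketch of that grouping step (it assumes the same file_names and path variables as the snippet above, plus imagehash and PIL):

import collections

import imagehash
from PIL import Image

# Hedged sketch: group file names by their precomputed hash; any group with
# more than one entry contains candidate duplicates.
groups = collections.defaultdict(list)
for fn in file_names:
    groups[imagehash.average_hash(Image.open(path + fn))].append(fn)

duplicates = {h: names for h, names in groups.items() if len(names) > 1}
print(duplicates)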

One solution is to keep using the hash, but to store it in a list of tuples (or a dict, I don't know which is more efficient here) where the first element is the name of the image and the second is the hash. Building it should take approximately the same 5 minutes.
If you have 5000 images:
You compare the first element of the list to the 4999 others.
Then the second to the 4998 others (as you already checked the first one).
Then the third...
This "just" makes you do n²/2 comparisons (where n is the number of images); a sketch of this pairwise pass follows below.
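A minimal sketch of that pairwise pass over precomputed (name, hash) tuples (file_names and path are assumed from the question; the other names here are illustrative):

from itertools import combinations

import imagehash
from PIL import Image

# Hedged sketch of the n²/2 pairwise comparison over precomputed hashes.
name_hash_pairs = [(fn, imagehash.average_hash(Image.open(path + fn)))
                   for fn in file_names]

for (name1, h1), (name2, h2) in combinations(name_hash_pairs, 2):
    if h1 == h2:
        print(name1, "and", name2, "look like duplicates")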

Just use a map (dict) structure to calculate the hash for each image, then store the hash as a key and the name of the image as a value.
As a result, you get a collection of unique image names.
def get_hash(filename):
    return imagehash.average_hash(Image.open(path + filename))

def get_unique_images(filenames):
    hashes = {}
    for filename in filenames:
        image_hash = get_hash(filename)
        hashes[image_hash] = filename
    return hashes.values()

Related

Memory issues with a list of lists [closed]

I am having some memory issues and I am wondering if there is any way I can free up some memory in the code below. I have tried using a generator expression rather than a list comprehension, but that does not produce unique combinations, since the memory is freed up.
The list of lists (combinations) causes me to run out of memory and the program does not finish.
The end result would be 729 lists in this list, with each list containing 6 WindowsPath elements that point to images. I have tried storing the lists as strings in a text file, but I cannot get that to work; I also tried using a pandas DataFrame, but I cannot get that to work either.
I need to figure out a different solution. The output right now is exactly what I need; memory is the only issue.
from pathlib import Path
from random import choice
from itertools import product
from PIL import Image
import sys

def combine(arr):
    return list(product(*arr))

def generate(x):
    #set new value for name
    name = int(x)
    #Turn name into string for file name
    img_name = str(name)
    #Pick 1 random from each directory, add to list.
    a_paths = [choice(k) for k in layers]
    #if the length of the list of unique combinations is equal to the number of total combinations, this function stops
    if len(combinations) == len(combine(layers)):
        print("Done")
        sys.exit()
    else:
        #If combination exists, generate new list
        if any(j == a_paths for j in combinations) == True:
            print("Redo")
            generate(name)
        #Else, initialize new image, paste layers + save image, add combination to list, and generate new list
        else:
            #initialize image
            img = Image.new("RGBA", (648, 648))
            png_info = img.info
            #For each path in the list, paste on top of previous, sets image to be saved
            for path in a_paths:
                layer = Image.open(str(path), "r")
                img.paste(layer, (0, 0), layer)
            print(str(name) + ' - Unique')
            img.save(img_name + '.png', **png_info)
            combinations.append(a_paths)
            name = name - 1
            generate(name)

'''
Main method
'''
global layers
layers = [list(Path(directory).glob("*.png")) for directory in ("dir1/", "dir2/", "dir3/", "dir4/", "dir5/", "dir6/")]
#name will dictate the name of the file output(.png image) it is equal to the number of combinations of the image layers
global name
name = len(combine(layers))
#combinations is the list of lists that will store all unique combinations of images
global combinations
combinations = []
#calling recursive function
generate(name)
Let's start with an MRE version of your code (i.e. something that I can run without needing a bunch of PNGs -- all we're concerned with here is how to go through the images without hitting recursion limits):
from random import choice
from itertools import product

def combine(arr):
    return list(product(*arr))

def generate(x):
    # set new value for name
    name = int(x)
    # Turn name into string for file name
    img_name = str(name)
    # Pick 1 random from each directory, add to list.
    a_paths = [choice(k) for k in layers]
    # if the length of the list of unique combinations is equal to the number of total combinations, this function stops
    if len(combinations) == len(combine(layers)):
        print("Done")
        return
    else:
        # If combination exists, generate new list
        if any(j == a_paths for j in combinations) == True:
            print("Redo")
            generate(name)
        # Else, initialize new image, paste layers + save image, add combination to list, and generate new list
        else:
            # initialize image
            img = []
            # For each path in the list, paste on top of previous, sets image to be saved
            for path in a_paths:
                img.append(path)
            print(str(name) + ' - Unique')
            print(img_name + '.png', img)
            combinations.append(a_paths)
            name = name - 1
            generate(name)

'''
Main method
'''
global layers
layers = [
    [f"{d}{f}.png" for f in ("foo", "bar", "baz", "ola", "qux")]
    for d in ("dir1/", "dir2/", "dir3/", "dir4/", "dir5/", "dir6/")
]
# name will dictate the name of the file output(.png image) it is equal to the number of combinations of the image layers
global name
name = len(combine(layers))
# combinations is the list of lists that will store all unique combinations of images
global combinations
combinations = []
# calling recursive function
generate(name)
When I run this I get some output that starts with:
15625 - Unique
15625.png ['dir1/qux.png', 'dir2/bar.png', 'dir3/bar.png', 'dir4/foo.png', 'dir5/baz.png', 'dir6/foo.png']
15624 - Unique
15624.png ['dir1/baz.png', 'dir2/qux.png', 'dir3/foo.png', 'dir4/foo.png', 'dir5/foo.png', 'dir6/foo.png']
15623 - Unique
15623.png ['dir1/ola.png', 'dir2/qux.png', 'dir3/bar.png', 'dir4/ola.png', 'dir5/ola.png', 'dir6/bar.png']
...
and ends with a RecursionError. I assume this is what you mean when you say you "ran out of memory" -- in reality it doesn't seem like I'm anywhere close to running out of memory (maybe this would behave differently if I had actual images?), but Python's stack depth is finite and this function seems to be recursing into itself arbitrarily deep for no particularly good reason.
Since you're trying to eventually generate all the possible combinations, you already have a perfectly good solution, which you're even already using -- itertools.product. All you have to do is iterate through the combinations that it gives you. You don't need recursion and you don't need global variables.
from itertools import product
from typing import List

def generate(layers: List[List[str]]) -> None:
    for name, a_paths in enumerate(product(*layers), 1):
        # initialize image
        img = []
        # For each path in the list, paste on top of previous,
        # sets image to be saved
        for path in a_paths:
            img.append(path)
        print(f"{name} - Unique")
        print(f"{name}.png", img)
    print("Done")

'''
Main method
'''
layers = [
    [f"{d}{f}.png" for f in ("foo", "bar", "baz", "ola", "qux")]
    for d in ("dir1/", "dir2/", "dir3/", "dir4/", "dir5/", "dir6/")
]
# calling iterative function
generate(layers)
Now we get all of the combinations -- the naming starts at 1 and goes all the way to 15625:
1 - Unique
1.png ['dir1/foo.png', 'dir2/foo.png', 'dir3/foo.png', 'dir4/foo.png', 'dir5/foo.png', 'dir6/foo.png']
2 - Unique
2.png ['dir1/foo.png', 'dir2/foo.png', 'dir3/foo.png', 'dir4/foo.png', 'dir5/foo.png', 'dir6/bar.png']
3 - Unique
3.png ['dir1/foo.png', 'dir2/foo.png', 'dir3/foo.png', 'dir4/foo.png', 'dir5/foo.png', 'dir6/baz.png']
...
15623 - Unique
15623.png ['dir1/qux.png', 'dir2/qux.png', 'dir3/qux.png', 'dir4/qux.png', 'dir5/qux.png', 'dir6/baz.png']
15624 - Unique
15624.png ['dir1/qux.png', 'dir2/qux.png', 'dir3/qux.png', 'dir4/qux.png', 'dir5/qux.png', 'dir6/ola.png']
15625 - Unique
15625.png ['dir1/qux.png', 'dir2/qux.png', 'dir3/qux.png', 'dir4/qux.png', 'dir5/qux.png', 'dir6/qux.png']
Done
Replacing the actual image-generating code back into my mocked-out version is left as an exercise for the reader.
If you wanted to randomize the order of the combinations, it'd be pretty reasonable to do:
from random import shuffle

...

combinations = list(product(*layers))
shuffle(combinations)
for name, a_paths in enumerate(combinations, 1):
    ...
This uses more memory (since now you're building a list of the product instead of iterating through a generator), but the number of images you're working with isn't actually that large, so this is fine as long as you aren't adding a level of recursion for each image.

Parse list of strings for speed

Background
I have a function called get_player_path that takes in a list of strings player_file_list and an int value total_players. For the sake of example I have reduced the list of strings and also set the int value to a very small number.
Each string in player_file_list has the form year-date/player_id/some_random_file.file_extension or
year-date/player_id/IDATs/some_random_number/some_random_file.file_extension
Issue
What I am essentially trying to achieve here is to go through this list and store every unique year-date/player_id path in a set until its length reaches the value of total_players.
My current approach does not seem the most efficient to me, and I am wondering if I can speed up my function get_player_path in any way?
Code
def get_player_path(player_file_list, total_players):
    player_files_to_process = set()
    for player_file in player_file_list:
        player_file = player_file.split("/")
        file_path = f"{player_file[0]}/{player_file[1]}/"
        player_files_to_process.add(file_path)
        if len(player_files_to_process) == total_players:
            break
    return sorted(player_files_to_process)

player_file_list = [
    "2020-10-27/31001804320549/31001804320549.json",
    "2020-10-27/31001804320549/IDATs/204825150047/foo_bar_Red.idat",
    "2020-10-28/31001804320548/31001804320549.json",
    "2020-10-28/31001804320548/IDATs/204825150123/foo_bar_Red.idat",
    "2020-10-29/31001804320547/31001804320549.json",
    "2020-10-29/31001804320547/IDATs/204825150227/foo_bar_Red.idat",
    "2020-10-30/31001804320546/31001804320549.json",
    "2020-10-30/31001804320546/IDATs/123455150047/foo_bar_Red.idat",
    "2020-10-31/31001804320545/31001804320549.json",
    "2020-10-31/31001804320545/IDATs/597625150047/foo_bar_Red.idat",
]

print(get_player_path(player_file_list, 2))
Output
['2020-10-27/31001804320549/', '2020-10-28/31001804320548/']
Let's analyze your function first:
your loop should take linear time (O(n)) in the length of the input list, assuming the path lengths are bounded by a relatively "small" number;
the sorting takes O(n log(n)) comparisons.
Thus the sorting has the dominant cost when the list becomes big. You can micro-optimize your loop as much as you want, but as long as you keep that sorting at the end, your effort won't make much of a difference with big lists.
Your approach is fine if you're just writing a Python script. If you really needed performance with huge lists, you would probably be using some other language. Nonetheless, if you really care about performance (or just want to learn new stuff), you could try one of the following approaches:
replace the generic sorting algorithm with something specific for strings; see here for example
use a trie, removing the need for sorting; this could be theoretically better but will probably be worse in practice (a rough sketch follows below).
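As a rough illustration of the trie idea (a sketch only, with hypothetical helper names, not part of this answer): prefixes are inserted character by character, and walking the children in character order yields them already sorted, so no comparison sort over whole strings is needed.

def trie_insert(root, s):
    # Insert a string into a nested-dict trie; the None key marks the end of a stored prefix.
    node = root
    for ch in s:
        node = node.setdefault(ch, {})
    node[None] = True

def trie_walk(node, prefix=""):
    # Walk the children in character order, yielding stored prefixes in sorted order.
    if None in node:
        yield prefix
    for ch in sorted(k for k in node if k is not None):
        yield from trie_walk(node[ch], prefix + ch)

root = {}
for p in ["2020-10-28/31001804320548/", "2020-10-27/31001804320549/"]:
    trie_insert(root, p)
print(list(trie_walk(root)))
# ['2020-10-27/31001804320549/', '2020-10-28/31001804320548/']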
Just for completeness, as a micro-optimization, assuming the date has a fixed length of 10 characters:
def get_player_path(player_file_list, total_players):
    player_files_to_process = set()
    for player_file in player_file_list:
        end = player_file.find('/', 12)  # <--- len(date) + len('/') + 1
        file_path = player_file[:end]    # <---
        player_files_to_process.add(file_path)
        if len(player_files_to_process) == total_players:
            break
    return sorted(player_files_to_process)
If the IDs have fixed length too, as in your example list, then you don't need any split or find, just:
LENGTH = DATE_LENGTH + ID_LENGTH + 1  # 1 is for the slash between date and id

...

for player_file in player_file_list:
    file_path = player_file[:LENGTH]
    ...
EDIT: fixed the LENGTH initialization, I had forgotten to add 1
I'll leave this solution here, which can be further improved; hope it helps.
player_file_list = (
    "2020-10-27/31001804320549/31001804320549.json",
    "2020-10-27/31001804320549/IDATs/204825150047/foo_bar_Red.idat",
    "2020-10-28/31001804320548/31001804320549.json",
    "2020-10-28/31001804320548/IDATs/204825150123/foo_bar_Red.idat",
    "2020-10-29/31001804320547/31001804320549.json",
    "2020-10-29/31001804320547/IDATs/204825150227/foo_bar_Red.idat",
    "2020-10-30/31001804320546/31001804320549.json",
    "2020-10-30/31001804320546/IDATs/123455150047/foo_bar_Red.idat",
    "2020-10-31/31001804320545/31001804320549.json",
    "2020-10-31/31001804320545/IDATs/597625150047/foo_bar_Red.idat",
)

def get_player_path(l, n):
    pfl = set()
    for i in l:
        i = "/".join(i.split("/")[0:2])
        if i not in pfl:
            pfl.add(i)
            if len(pfl) == n:
                return pfl
    if n > len(pfl):
        print("not enough matches")
        return

print(get_player_path(player_file_list, 2))
# {'2020-10-27/31001804320549', '2020-10-28/31001804320548'}
Use a dict so that you don't have to sort, since your list is already sorted. If you still need to sort, you can always use sorted in the return statement. Add import re and replace your function as follows:
import re

def get_player_path(player_file_list, total_players):
    dct = {re.search(r'^\w+-\w+-\w+/\w+', pf).group(): 1 for pf in player_file_list}
    return [k for i, k in enumerate(dct.keys()) if i < total_players]

Find duplicate images in fastest way

I have 2 image folders containing 10k and 35k images. Each image is approximately (2k, 2k) in size.
I want to remove the images which are exact duplicates.
The variation between different images is just a change in some pixels.
I have tried dHashing, pHashing and aHashing, but as they are lossy image hashing techniques, they give the same hash for non-duplicate images too.
I also tried writing code in Python that simply subtracts images; the combinations for which the resulting array is zero everywhere are flagged as duplicates of each other.
But the time for a single combination is 0.29 seconds, and for 350 million combinations in total that is really huge.
Is there a way to do it faster without also flagging non-duplicate images?
I am open to doing it in any language (C, C++) and any approach (distributed computing, multithreading) that solves my problem accurately.
Apologies if I have included some irrelevant approaches, as I am not from a computer science background.
Below is the code I used for the Python approach:
start = timeit.default_timer()

dict = {}
for i in path1:
    img1 = io.imread(i)
    base1 = os.path.basename(i)
    for j in path2:
        img2 = io.imread(j)
        base2 = os.path.basename(j)
        if np.array_equal(img1, img2):
            err = img1.astype('float') - img2.astype('float')
            is_all_zero = np.all((err == 0))
            if is_all_zero:
                dict[base1] = base2
        else:
            continue

stop = timeit.default_timer()
print('Time: ', stop - start)
Use lossy hashing as a prefiltering step, before a complete comparison. You can also generate thumbnail images (say 12 x 8 pixels), and compare for similarity.
The idea is to perform quick rejection of very different images.
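As a rough sketch of that idea (not from the original answer; the use of imagehash, Pillow and NumPy plus the folder and variable names are assumptions for illustration), images are first bucketed by a lossy hash and the expensive exact comparison is only run within each bucket:

import collections
import os

import imagehash
import numpy as np
from PIL import Image

def find_exact_duplicates(folder):
    # Bucket images by a cheap lossy hash so that very different images
    # never reach the expensive exact comparison.
    buckets = collections.defaultdict(list)
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        buckets[imagehash.average_hash(Image.open(path))].append(path)

    duplicates = []
    for paths in buckets.values():
        # Only images sharing a lossy hash are compared pixel by pixel.
        for i in range(len(paths)):
            for j in range(i + 1, len(paths)):
                a = np.asarray(Image.open(paths[i]))
                b = np.asarray(Image.open(paths[j]))
                if a.shape == b.shape and np.array_equal(a, b):
                    duplicates.append((paths[i], paths[j]))
    return duplicates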
You could also look for an answer on how to delete duplicate files in general (not only images). Then you can use, for example, fdupes, or find some alternative software: https://alternativeto.net/software/fdupes/
This code checks if there are any duplicates in a folder (it's a bit slow though):
import image_similarity_measures
from image_similarity_measures.quality_metrics import rmse, psnr
from sewar.full_ref import rmse, psnr
import cv2
import os
import time

def check(path_orginal, path_new):  # give r strings
    original = cv2.imread(path_orginal)
    new = cv2.imread(path_new)
    return rmse(original, new)

def folder_check(folder_path):
    i = 0
    file_list = os.listdir(folder_path)
    print(file_list)
    duplicate_dict = {}
    for file in file_list:
        # print(file)
        file_path = os.path.join(folder_path, file)
        for file_compare in file_list:
            print(i)
            i += 1
            file_compare_path = os.path.join(folder_path, file_compare)
            if file_compare != file:
                similarity_score = check(file_path, file_compare_path)
                # print(str(similarity_score))
                if similarity_score == 0.0:
                    print(file, file_compare)
                    duplicate_dict[file] = file_compare
        file_list.remove(str(file))
    return duplicate_dict

start_time = time.time()
print(folder_check(r"C:\Users\Admin\Linear-Regression-1\image-similarity-measures\input1"))
end_time = time.time()
stamp = end_time - start_time
print(stamp)

python similar string removal from multiple files

I have crawled txt files from different websites; now I need to glue them into one file. Many of the lines from the various websites are similar to each other. I want to remove repetitions.
Here is what I have tried:
import difflib

sourcename = 'xiaoshanwujzw'
destname = 'bindresult'

sourcefile = open('%s.txt' % sourcename)
sourcelines = sourcefile.readlines()
sourcefile.close()

for sourceline in sourcelines:
    destfile = open('%s.txt' % destname, 'a+')
    destlines = destfile.readlines()
    similar = False
    for destline in destlines:
        ratio = difflib.SequenceMatcher(None, destline, sourceline).ratio()
        if ratio > 0.8:
            print destline
            print sourceline
            similar = True
    if not similar:
        destfile.write(sourceline)
    destfile.close()
I will run it for every source and write line by line to the same file. The result is that, even if I run it for the same file multiple times, the lines are always appended to the destination file.
EDIT:
I have tried the code from the answer. It's still very slow.
Even if I minimize the IO, I still need O(n^2) comparisons, especially when there are 1000+ lines. I have on average 10,000 lines per file.
Any other ways to remove the duplicates?
Here is a short version that does minimal IO and cleans up after itself.
import difflib

sourcename = 'xiaoshanwujzw'
destname = 'bindresult'

# open for appending and reading so that on subsequent runs of this script,
# we won't duplicate the lines that are already in the destination file
with open('%s.txt' % destname, 'a+') as destfile:
    destfile.seek(0)
    known_lines = set(destfile.readlines())
    with open('%s.txt' % sourcename) as sourcefile:
        for line in sourcefile:
            similar = False
            for known in known_lines:
                ratio = difflib.SequenceMatcher(None, line, known).ratio()
                if ratio > 0.8:
                    print ratio
                    print line
                    print known
                    similar = True
                    break
            if not similar:
                destfile.write(line)
                known_lines.add(line)
Instead of reading the known lines each time from the file, we save them to a set, which we use for comparisons. The set is essentially a mirror of the contents of 'destfile'.
A note on complexity
By its very nature, this problem has O(n²) complexity. Because you're looking for similarity with known strings, rather than identical strings, you have to look at every previously seen string. If you were looking to remove exact duplicates, rather than fuzzy matches, you could use a simple lookup in a set, with complexity O(1), making your entire solution O(n) overall.
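For the exact-duplicate case just mentioned, a minimal sketch (illustrative names, not part of the answer) looks like this:

def dedupe_exact(lines):
    # Set membership is O(1), so the whole pass is O(n).
    seen = set()
    unique = []
    for line in lines:
        if line not in seen:
            seen.add(line)
            unique.append(line)
    return unique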
There might be a way to reduce the fundamental complexity by using lossy compression on the strings so that two similar strings compress to the same result. This is however both out of scope for a stack overflow answer, and beyond my expertise. It is an active research area so you might have some luck digging through the literature.
You could also reduce the time taken by ratio() by using the less accurate alternatives quick_ratio() and real_quick_ratio().
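For example, a hedged sketch of using the cheaper ratios as a pre-filter (the threshold and helper name are illustrative): real_quick_ratio() and quick_ratio() are upper bounds on ratio(), so when either falls below the threshold the expensive call can be skipped.

import difflib

def is_similar(a, b, threshold=0.8):
    # Cheap upper bounds first; only call the expensive ratio() when needed.
    m = difflib.SequenceMatcher(None, a, b)
    if m.real_quick_ratio() < threshold:
        return False
    if m.quick_ratio() < threshold:
        return False
    return m.ratio() > threshold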
Your code works fine for me. It prints destline and sourceline to stdout when lines are similar (in the example I used, exactly the same), but it only wrote unique lines to the file once. You might need to set your ratio threshold lower for your specific "similarity" needs.
Basically what you need to do is check every line in the source file to see if it has a potential match against every line of the destination file.
##xiaoshanwujzw.txt
##-----------------
##radically different thing
##this is data
##and more data

##bindresult.txt
##--------------
##a website line
##this is data
##and more data

from difflib import SequenceMatcher

sourcefile = open('xiaoshanwujzw.txt', 'r')
sourcelines = sourcefile.readlines()
sourcefile.close()

destfile = open('bindresult.txt', 'a+')
destlines = destfile.readlines()

has_matches = {k: False for k in sourcelines}

for d_line in destlines:
    for s_line in sourcelines:
        if SequenceMatcher(None, d_line, s_line).ratio() > 0.8:
            has_matches[s_line] = True
            break

for k in has_matches:
    if has_matches[k] == False:
        destfile.write(k)

destfile.close()
This will add the line "radically different thing" to the destination file.

How to Compare 2 very large matrices using Python

I have an interesting problem.
I have a very large (larger than 300MB, more than 10,000,000 lines/rows in the file) CSV file with time series data points inside. Every month I get a new CSV file that is almost the same as the previous file, except that a few new lines have been added and/or removed and perhaps a couple of lines have been modified.
I want to use Python to compare the 2 files and identify which lines have been added, removed and modified.
The issue is that the file is very large, so I need a solution that can handle the large file size and execute efficiently within a reasonable time; the faster the better.
Example of what a file and its new file might look like:
Old file
A,2008-01-01,23
A,2008-02-01,45
B,2008-01-01,56
B,2008-02-01,60
C,2008-01-01,3
C,2008-02-01,7
C,2008-03-01,9
etc...
New file
A,2008-01-01,23
A,2008-02-01,45
A,2008-03-01,67 (added)
B,2008-01-01,56
B,2008-03-01,33 (removed and added)
C,2008-01-01,3
C,2008-02-01,7
C,2008-03-01,22 (modified)
etc...
Basically the 2 files can be seen as matrices that need to be compared, and I have begun thinking of using PyTable. Any ideas on how to solve this problem would be greatly appreciated.
Like this.
Step 1. Sort.
Step 2. Read each file, doing line-by-line comparison. Write differences to another file.
You can easily write this yourself. Or you can use difflib. http://docs.python.org/library/difflib.html
Note that the general solution is quite slow as it searches for matching lines near a difference. Writing your own solution can run faster because you know things about how the files are supposed to match. You can optimize that "resynch-after-a-diff" algorithm.
And 10,000,000 lines hardly matters. It's not that big. Two 300 MB files easily fit into memory.
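A minimal sketch of that sort-then-compare approach using difflib (the file names are placeholders, and this only classifies added and removed lines; a modified row shows up as a removal plus an addition):

import difflib

with open('old.csv') as f:
    old_lines = sorted(f)
with open('new.csv') as f:
    new_lines = sorted(f)

# n=0 drops the context lines so only actual differences are reported.
for line in difflib.unified_diff(old_lines, new_lines, n=0):
    if line.startswith('+') and not line.startswith('+++'):
        print('Added:  ', line[1:].rstrip())
    elif line.startswith('-') and not line.startswith('---'):
        print('Removed:', line[1:].rstrip())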
This is a little bit of a naive implementation but will deal with unsorted data:
import csv

file1_dict = {}
file2_dict = {}

with open('file1.csv') as handle:
    for row in csv.reader(handle):
        file1_dict[tuple(row[:2])] = tuple(row[2:])

with open('file2.csv') as handle:
    for row in csv.reader(handle):
        file2_dict[tuple(row[:2])] = tuple(row[2:])

with open('outfile.csv', 'w') as handle:
    writer = csv.writer(handle)
    for key, val in file1_dict.iteritems():
        if key in file2_dict:
            # deal with keys that are in both
            if file2_dict[key] == val:
                writer.writerow(key + val + ('Same',))
            else:
                writer.writerow(key + file2_dict[key] + ('Modified',))
            file2_dict.pop(key)
        else:
            writer.writerow(key + val + ('Removed',))
    # deal with added keys!
    for key, val in file2_dict.iteritems():
        writer.writerow(key + val + ('Added',))
You probably won't be able to "drop in" this solution, but it should get you ~95% of the way there. @S.Lott is right, two 300 MB files will easily fit in memory ... if your files get into the 1-2 GB range then this may have to be modified with the assumption of sorted data.
Something like this is close ... although you may have to change the comparisons around for the Added and Modified cases to make sense:
# assuming both files are sorted by columns 1 and 2
import csv
import datetime
from itertools import imap

def str2date(s):
    return datetime.date(*map(int, s.split('-')))

def convert_tups(row):
    key = (row[0], str2date(row[1]))
    val = tuple(row[2:])
    return key, val

with open('file1.csv') as handle1:
    with open('file2.csv') as handle2:
        with open('outfile.csv', 'w') as outhandle:
            writer = csv.writer(outhandle)
            gen1 = imap(convert_tups, csv.reader(handle1))
            gen2 = imap(convert_tups, csv.reader(handle2))
            gen2key, gen2val = gen2.next()
            for gen1key, gen1val in gen1:
                if gen1key == gen2key and gen1val == gen2val:
                    writer.writerow(gen1key + gen1val + ('Same',))
                    gen2key, gen2val = gen2.next()
                elif gen1key == gen2key and gen1val != gen2val:
                    writer.writerow(gen2key + gen2val + ('Modified',))
                    gen2key, gen2val = gen2.next()
                elif gen1key > gen2key:
                    while gen1key > gen2key:
                        writer.writerow(gen2key + gen2val + ('Added',))
                        gen2key, gen2val = gen2.next()
                else:
                    writer.writerow(gen1key + gen1val + ('Removed',))
