I want python to read to the EOF so I can get an appropriate hash, whether it is sha1 or md5. Please help. Here is what I have so far:
import hashlib
inputFile = raw_input("Enter the name of the file:")
openedFile = open(inputFile)
readFile = openedFile.read()
md5Hash = hashlib.md5(readFile)
md5Hashed = md5Hash.hexdigest()
sha1Hash = hashlib.sha1(readFile)
sha1Hashed = sha1Hash.hexdigest()
print "File Name: %s" % inputFile
print "MD5: %r" % md5Hashed
print "SHA1: %r" % sha1Hashed
TL;DR use buffers to not use tons of memory.
We get to the crux of your problem, I believe, when we consider the memory implications of working with very large files. We don't want this bad boy to churn through 2 gigs of RAM for a 2-gigabyte file, so, as pasztorpisti points out, we've got to deal with those bigger files in chunks!
import sys
import hashlib
# BUF_SIZE is totally arbitrary, change for your app!
BUF_SIZE = 65536  # let's read stuff in 64kb chunks!
md5 = hashlib.md5()
sha1 = hashlib.sha1()
with open(sys.argv[1], 'rb') as f:
    while True:
        data = f.read(BUF_SIZE)
        if not data:
            break
        md5.update(data)
        sha1.update(data)
print("MD5: {0}".format(md5.hexdigest()))
print("SHA1: {0}".format(sha1.hexdigest()))
What we've done is update our hashes of this bad boy in 64kb chunks as we go along, using hashlib's handy dandy update method. This way we use a lot less memory than the 2gb it would take to hash the guy all at once!
You can test this with:
$ mkfile 2g bigfile
$ python hashes.py bigfile
MD5: a981130cf2b7e09f4686dc273cf7187e
SHA1: 91d50642dd930e9542c39d36f0516d45f4e1af0d
$ md5 bigfile
MD5 (bigfile) = a981130cf2b7e09f4686dc273cf7187e
$ shasum bigfile
91d50642dd930e9542c39d36f0516d45f4e1af0d bigfile
Also, all of this is outlined in the linked question: Get MD5 hash of big files in Python
Addendum!
In general when writing Python it helps to get into the habit of following PEP 8. For example, in Python, variables are typically underscore_separated, not camelCased. But that's just style, and no one really cares about those things except people who have to read bad style... which might be you, reading this code years from now.
For the correct and efficient computation of the hash value of a file (in Python 3):
Open the file in binary mode (i.e. add 'b' to the filemode) to avoid character encoding and line-ending conversion issues.
Don't read the complete file into memory, since that is a waste of memory. Instead, sequentially read it block by block and update the hash for each block.
Eliminate double buffering, i.e. don't use buffered IO, because we already use an optimal block size.
Use readinto() to avoid buffer churning.
Example:
import hashlib
def sha256sum(filename):
    h = hashlib.sha256()
    b = bytearray(128*1024)
    mv = memoryview(b)
    with open(filename, 'rb', buffering=0) as f:
        while n := f.readinto(mv):
            h.update(mv[:n])
    return h.hexdigest()
Note that the while loop uses an assignment expression which isn't available in Python versions older than 3.8.
With older Python 3 versions you can use an equivalent variation:
import hashlib
def sha256sum(filename):
    h = hashlib.sha256()
    b = bytearray(128*1024)
    mv = memoryview(b)
    with open(filename, 'rb', buffering=0) as f:
        for n in iter(lambda: f.readinto(mv), 0):
            h.update(mv[:n])
    return h.hexdigest()
I would propose simply:
import hashlib

def get_digest(file_path):
    h = hashlib.sha256()
    with open(file_path, 'rb') as file:
        while True:
            # Reading is buffered, so we can read smaller chunks.
            chunk = file.read(h.block_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()
All other answers here seem to complicate things too much. Python is already buffering when reading (in an ideal manner, or you can configure that buffering if you have more information about the underlying storage), so it is better to read in chunks the hash function finds ideal, which makes it faster, or at least less CPU-intensive, to compute the hash. So instead of disabling buffering and trying to emulate it yourself, use Python's buffering and control what you should be controlling: what the consumer of your data finds ideal, the hash block size.
Here is a Python 3, POSIX solution (not Windows!) that uses mmap to map the file into memory.
import hashlib
import mmap
def sha256sum(filename):
    h = hashlib.sha256()
    with open(filename, 'rb') as f:
        with mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ) as mm:
            h.update(mm)
    return h.hexdigest()
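One caveat worth noting: mmap cannot map a zero-length file (it raises ValueError), so empty files need a guard. A minimal sketch of a hypothetical wrapper for that case (the fallback is just the hash of zero bytes):

import hashlib
import mmap
import os

def sha256sum_safe(filename):
    # hypothetical wrapper: mmap raises ValueError on empty files,
    # so fall back to hashing zero bytes for a zero-byte file
    if os.path.getsize(filename) == 0:
        return hashlib.sha256().hexdigest()
    with open(filename, 'rb') as f:
        with mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ) as mm:
            return hashlib.sha256(mm).hexdigest()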
I have programmed a module which is able to hash big files with different algorithms.
pip3 install py_essentials
Use the module like this:
from py_essentials import hashing as hs
hash = hs.fileChecksum("path/to/the/file.txt", "sha256")
You do not need to define a function with 5-20 lines of code to do this! Save your time by using the pathlib and hashlib libraries. py_essentials is another solution, but third-party dependencies are *****.
from pathlib import Path
import hashlib
filepath = '/path/to/file'
filebytes = Path(filepath).read_bytes()
filehash_sha1 = hashlib.sha1(filebytes).hexdigest()
filehash_md5 = hashlib.md5(filebytes).hexdigest()
print(f'MD5: {filehash_md5}')
print(f'SHA1: {filehash_sha1}')
I used a few variables here to show the steps; you know how to avoid them.
What do you think of the function below?
from pathlib import Path
import hashlib
def compute_filehash(filepath: str, hashtype: str) -> str:
    """Computes the requested hash for the given file.

    Args:
        filepath: The path to the file to compute the hash for.
        hashtype: The hash type to compute.

            Available hash types:
                md5, sha1, sha224, sha256, sha384, sha512, sha3_224,
                sha3_256, sha3_384, sha3_512, shake_128, shake_256

    Returns:
        A string that represents the hash.

    Raises:
        ValueError: If the hash type is not supported.
    """
    if hashtype not in ['md5', 'sha1', 'sha224', 'sha256', 'sha384',
                        'sha512', 'sha3_224', 'sha3_256', 'sha3_384',
                        'sha3_512', 'shake_128', 'shake_256']:
        raise ValueError(f'Hash type {hashtype} is not supported.')
    hashobj = getattr(hashlib, hashtype)(Path(filepath).read_bytes())
    # shake_128/shake_256 produce variable-length output, so hexdigest()
    # needs a length argument for them (32 bytes chosen here):
    if hashtype in ('shake_128', 'shake_256'):
        return hashobj.hexdigest(32)
    return hashobj.hexdigest()
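A quick usage sketch (the path is a placeholder, not from the original post):

file_hash = compute_filehash('/path/to/file', 'sha256')
print(f'SHA256: {file_hash}')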
FWIW, I prefer this version, which has the same memory and performance characteristics as maxschlepzig's answer but is more readable IMO:
import hashlib
def sha256sum(filename, bufsize=128 * 1024):
    h = hashlib.sha256()
    buffer = bytearray(bufsize)
    # using a memoryview so that we can slice the buffer without copying it
    buffer_view = memoryview(buffer)
    with open(filename, 'rb', buffering=0) as f:
        while True:
            n = f.readinto(buffer_view)
            if not n:
                break
            h.update(buffer_view[:n])
    return h.hexdigest()
Starting with Python 3.11, you can use hashlib.file_digest(), which takes responsibility for reading the file:
import hashlib
with open(inputFile, "rb") as f:
    digest = hashlib.file_digest(f, "sha256")
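The object file_digest() returns is an ordinary hash object, so, continuing the snippet above, the hex string comes out the usual way (a minimal sketch):

print(digest.hexdigest())  # the usual hex string, for display or comparison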
import hashlib
user = input("Enter ")
h = hashlib.md5(user.encode())
h2 = h.hexdigest()
with open("encrypted.txt","w") as e:
print(h2,file=e)
with open("encrypted.txt","r") as e:
p = e.readline().strip()
print(p)
def get_chunks(self):
    fd = open(self.path, 'rb')
    while True:
        chunk = fd.read(1)
        if not chunk:
            break
        yield chunk
    fd.close()

def calculate(self):
    md5 = hashlib.md5()
    sha256 = hashlib.sha256()
    sha512 = hashlib.sha512()
    for chunk in self.get_chunks():
        md5.update(chunk)
        sha256.update(chunk)
        sha512.update(chunk)
    self.md5 = md5.hexdigest()
    self.sha256 = sha256.hexdigest()
    self.sha512 = sha512.hexdigest()
I am using multiple hash algorithms to try to identify files on my computer, and I have encountered a problem while making this tool in Python. The problem is:
"How many bytes should I read at one time if I intend to use them to update multiple hash algorithms, and still keep this piece of code running correctly on as many platforms as possible?"
Any suggestions, please?
I would go for one hash algorithm to hash each complete file. And in order to avoid collision problems, compare the colliding files completely bytewise; a sketch of that comparison follows. This should put you in a safe spot.
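A minimal sketch of that bytewise comparison (the function name and buffer size are mine, not from the question):

def files_identical(path_a, path_b, bufsize=1 << 16):
    # compare the two files chunk by chunk; stops at the first differing block
    with open(path_a, 'rb') as fa, open(path_b, 'rb') as fb:
        while True:
            block_a = fa.read(bufsize)
            block_b = fb.read(bufsize)
            if block_a != block_b:
                return False
            if not block_a:  # both files reached EOF together
                return True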
I am trying to encrypt a file in place using the cryptography module, so I don't have to buffer the ciphertext of the file, which can be memory intensive, and then replace the original file with the encrypted one. So my solution is to encrypt a chunk of plaintext and then replace it with its ciphertext, 16 bytes at a time (AES-CTR mode). The problem seems to be that the loop is an infinite loop.
So how to fix this?
What other methods do you suggest?
What are the side effects of using such a method as the one below?
pointer = 0
with open(path, "r+b") as file:
    print("...ENCRYPTING")
    while file:
        file_data = file.read(16)
        pointer += 16
        ciphertext = aes_enc.update(file_data)
        file.seek(pointer - 16)
        file.write(ciphertext)
    print("...Complete...")
so how to fix this?
As Cyril Jouve already mentions, check for if not file_data.
what other methods do you suggest?
What are the side effects of using such a method?
Reading in blocks of 16 bytes is relatively slow. I guess you have enough memory to read larger blocks like 4096 or 8192 bytes.
Unless you have very large files and limited disk space, I think there is no benefit in reading and writing to the same file. In case of an error, if the OS has already written data to disk, you will have lost the original data and will have an incomplete encrypted file of which you don't know which part is encrypted.
It's easier and safer to create a new encrypted file and then delete and rename if there were no errors.
Encrypt to a new file, catch exceptions, check the existence and size of the encrypted file, and delete the source and rename the encrypted file only if all is OK.
import os
path = r'D:\test.dat'
input_path = path
encrypt_path = path + '_encrypt'
try:
    with open(input_path, "rb") as input_file:
        with open(encrypt_path, "wb") as encrypt_file:
            print("...ENCRYPTING")
            while True:
                file_data = input_file.read(4096)
                if not file_data:
                    break
                ciphertext = aes_enc.update(file_data)
                encrypt_file.write(ciphertext)
            print("...Complete...")
    if os.path.exists(encrypt_path):
        if os.path.getsize(input_path) == os.path.getsize(encrypt_path):
            print(f'Deleting {input_path}')
            os.remove(input_path)
            print(f'Renaming {encrypt_path} to {input_path}')
            os.rename(encrypt_path, input_path)
except Exception as e:
    print(f'EXCEPTION: {str(e)}')
there is no "truthiness" for a file object, so you can't use it as the condition for your loop.
The file is at EOF when read() returns an empty bytes object (https://docs.python.org/3/library/io.html#io.BufferedIOBase.read)
with open(path, "r+b") as file:
print("...ENCRYPTING")
while True:
file_data = file.read(16)
if not file_data:
break
ciphertext = aes_enc.update(file_data)
file.seek(-len(file_data), os.SEEK_CUR)
file.write(ciphertext)
print("...Complete...")
This script is an XOR encryption function. Encrypting a small file works fine, but when I tried to encrypt a big file (about 5 GB) I got this error:
"OverflowError: size does not fit in an int"
and opening the file was very slow.
Can anyone help me optimize my script? Thank you.
from Crypto.Cipher import XOR
import base64
import os
def encrypt():
    enpath = "D:\\Software"
    key = 'vinson'
    for files in os.listdir(enpath):
        os.chdir(enpath)
        with open(files, 'rb') as r:
            print("open success", files)
            data = r.read()
            print("loading success", files)
        cipher = XOR.new(key)
        encoding = base64.b64encode(cipher.encrypt(data))
        with open(files, 'wb+') as n:
            n.write(encoding)
To expand upon my comment: you don't want to read the file into memory all at once, but process it in smaller blocks.
With any production-grade cipher (which XOR is definitely not) you would need to also deal with padding the output file if the source data is not a multiple of the cipher's block size. This script does not deal with that, hence the assertion about the block size.
Also, we're no longer irreversibly (well, aside from the fact that the XOR cipher is actually directly reversible) overwriting files with their encrypted versions. (Should you want to do that, it'd be better to just add code to remove the original, then rename the encrypted file into its place; a sketch of that appears after the code below. That way you won't end up with a half-written, half-encrypted file.)
Also, I removed the useless Base64 encoding.
But – don't use this code for anything serious. Please don't. Friends don't let friends roll their own crypto.
from Crypto.Cipher import XOR
import os
def encrypt_file(cipher, source_file, dest_file):
    # this toy script is unable to deal with padding issues,
    # so we must have a cipher that doesn't require it:
    assert cipher.block_size == 1
    while True:
        src_data = source_file.read(1048576)  # 1 megabyte at a time
        if not src_data:  # ran out of data?
            break
        encrypted_data = cipher.encrypt(src_data)
        dest_file.write(encrypted_data)

def insecurely_encrypt_directory(enpath, key):
    for filename in os.listdir(enpath):
        file_path = os.path.join(enpath, filename)
        dest_path = file_path + ".encrypted"
        with open(file_path, "rb") as source_file, open(dest_path, "wb") as dest_file:
            cipher = XOR.new(key)
            encrypt_file(cipher, source_file, dest_file)
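And if you did want to overwrite the originals, a hedged sketch of the remove-then-rename step mentioned above (a hypothetical helper; only call it once the encrypted copy is known to be complete):

import os

def replace_with_encrypted(file_path):
    # swap the original for the finished ".encrypted" copy
    dest_path = file_path + ".encrypted"
    if os.path.exists(dest_path):
        os.remove(file_path)
        os.rename(dest_path, file_path)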
I have a backup hard drive that I know has duplicate files scattered around and I decided it would be a fun project to write a little python script to find them and remove them. I wrote the following code just to traverse the drive and calculate the md5 sum of each file and compare it to what I am going to call my "first encounter" list. If the md5 sum does not yet exist, then add it to the list. If the sum does already exist, delete the current file.
import sys
import os
import hashlib
def checkFile(fileHashMap, file):
    fReader = open(file)
    fileData = fReader.read()
    fReader.close()
    fileHash = hashlib.md5(fileData).hexdigest()
    del fileData
    if fileHash in fileHashMap:
        ### Duplicate file.
        fileHashMap[fileHash].append(file)
        return True
    else:
        fileHashMap[fileHash] = [file]
        return False

def main(argv):
    fileHashMap = {}
    fileCount = 0
    for curDir, subDirs, files in os.walk(argv[1]):
        print(curDir)
        for file in files:
            fileCount += 1
            print("------------: " + str(fileCount))
            print(curDir + file)
            checkFile(fileHashMap, curDir + file)

if __name__ == "__main__":
    main(sys.argv)
The script processes about 10 GB worth of files and then throws a MemoryError on the line fileData = fReader.read(). I thought that since I am closing fReader and marking fileData for deletion after I have calculated the md5 sum, I wouldn't run into this. How can I calculate the md5 sums without running into this memory error?
Edit: I was requested to remove the dictionary and look at the memory usage to see if there may be a leak in hashlib. Here was the code I ran.
import sys
import os
import hashlib
def checkFile(file):
    fReader = open(file)
    fileData = fReader.read()
    fReader.close()
    fileHash = hashlib.md5(fileData).hexdigest()
    del fileData

def main(argv):
    for curDir, subDirs, files in os.walk(argv[1]):
        print(curDir)
        for file in files:
            print("------: " + str(curDir + file))
            checkFile(curDir + file)

if __name__ == "__main__":
    main(sys.argv)
and I still get the memory crash.
Your problem is in reading the entire files: they're too big, your system can't load them all in memory, and so it throws the error.
As you can see in the Official Python Documentation, the MemoryError is:
Raised when an operation runs out of memory but the situation may
still be rescued (by deleting some objects). The associated value is a
string indicating what kind of (internal) operation ran out of memory.
Note that because of the underlying memory management architecture
(C’s malloc() function), the interpreter may not always be able to
completely recover from this situation; it nevertheless raises an
exception so that a stack traceback can be printed, in case a run-away
program was the cause.
For your purpose, you can use hashlib.md5()
In that case, you'll have to read chunks of 4096 bytes sequentially and feed them to the md5 function:
import hashlib

def md5(fname):
    hash = hashlib.md5()
    # open in binary mode; the sentinel must be b"" for bytes reads
    with open(fname, 'rb') as f:
        for chunk in iter(lambda: f.read(4096), b""):
            hash.update(chunk)
    return hash.hexdigest()
Not a solution to your memory problem, but an optimization that might avoid it:
small files: calculate md5 sum, remove duplicates
big files: remember size and path
at the end, only calculate md5sums of files of same size when there is more than one file
Python's collections.defaultdict might be useful for this; a sketch follows.
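A sketch of that size-first strategy (function and variable names are mine; md5() is the chunked function from the earlier answer):

import os
from collections import defaultdict

def find_duplicate_candidates(root):
    # group files by size first: only same-size files can be duplicates
    by_size = defaultdict(list)
    for cur_dir, _sub_dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(cur_dir, name)
            by_size[os.path.getsize(path)].append(path)
    # hash only the groups that contain more than one file
    by_hash = defaultdict(list)
    for paths in by_size.values():
        if len(paths) > 1:
            for path in paths:
                by_hash[md5(path)].append(path)
    return {h: p for h, p in by_hash.items() if len(p) > 1}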
How about calling the openssl command from Python? It works on both Windows and Linux:
$ openssl md5 "file"
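A hedged sketch of driving that from Python (assumes openssl is on the PATH and prints a line of the form MD5(file)= <hash>):

import subprocess

def openssl_md5(path):
    # run "openssl md5 <path>" and keep the last token of its output line
    result = subprocess.run(['openssl', 'md5', path],
                            capture_output=True, text=True, check=True)
    return result.stdout.strip().rsplit(' ', 1)[-1]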
I use the following simple Python script to compress a large text file (say, 10GB) on an EC2 m3.large instance. However, I always got a MemoryError:
import gzip
with open('test_large.csv', 'rb') as f_in:
    with gzip.open('test_out.csv.gz', 'wb') as f_out:
        f_out.writelines(f_in)
        # or the following:
        # for line in f_in:
        #     f_out.write(line)
The traceback I got is:
Traceback (most recent call last):
File "test.py", line 8, in <module>
f_out.writelines(f_in)
MemoryError
I have read some discussion about this issue, but I'm still not quite clear on how to handle it. Can someone give me a more understandable answer about how to deal with this problem?
The problem here has nothing to do with gzip, and everything to do with reading line by line from a 10GB file with no newlines in it:
As an additional note, the file I used to test the Python gzip functionality is generated by fallocate -l 10G bigfile_file.
That gives you a 10GB sparse file made entirely of 0 bytes. Meaning there are no newline bytes. Meaning the first line is 10GB long. Meaning it will take 10GB to read the first line. (Or possibly even 20 or 40GB, if you're using pre-3.3 Python and trying to read it as Unicode.)
If you want to copy binary data, don't copy line by line. Whether it's a normal file, a GzipFile that's decompressing for you on the fly, a socket.makefile(), or anything else, you will have the same problem.
The solution is to copy chunk by chunk. Or just use copyfileobj, which does that for you automatically.
import gzip
import shutil
with open('test_large.csv', 'rb') as f_in:
    with gzip.open('test_out.csv.gz', 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)
By default, copyfileobj uses a chunk size optimized to be often very good and never very bad. In this case, you might actually want a smaller size, or a larger one; it's hard to predict which a priori.* So, test it by using timeit with different bufsize arguments (say, powers of 4 from 1KB to 8MB) to copyfileobj. But the default 16KB will probably be good enough unless you're doing a lot of this.
* If the buffer size is too big, you may end up alternating long chunks of I/O and long chunks of processing. If it's too small, you may end up needing multiple reads to fill a single gzip block.
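Picking up that testing suggestion, a sketch of timing copyfileobj with different buffer sizes (file names are placeholders, and the sizes are roughly powers of 4 from 1KB up):

import gzip
import shutil
import timeit

def compress(bufsize):
    with open('test_large.csv', 'rb') as f_in:
        with gzip.open('test_out.csv.gz', 'wb') as f_out:
            shutil.copyfileobj(f_in, f_out, bufsize)

for bufsize in (1024, 4096, 16384, 65536, 262144, 1048576, 4194304):
    seconds = timeit.timeit(lambda b=bufsize: compress(b), number=1)
    print("bufsize {0:>8}: {1:.2f}s".format(bufsize, seconds))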
That's odd. I would expect this error if you tried to compress a large binary file that didn't contain many newlines, since such a file could contain a "line" that was too big for your RAM, but it shouldn't happen on a line-structured .csv file.
But anyway, it's not very efficient to compress files line by line. Even though the OS buffers disk I/O it's generally much faster to read and write larger blocks of data, eg 64 kB.
I have 2GB of RAM on this machine, and I just successfully used the program below to compress a 2.8GB tar archive.
#! /usr/bin/env python
import gzip
import sys
blocksize = 1 << 16 #64kB
def gzipfile(iname, oname, level):
    with open(iname, 'rb') as f_in:
        f_out = gzip.open(oname, 'wb', level)
        while True:
            block = f_in.read(blocksize)
            if block == '':
                break
            f_out.write(block)
        f_out.close()
    return

def main():
    if len(sys.argv) < 3:
        print "gzip compress in_file to out_file"
        print "Usage:\n%s in_file out_file [compression_level]" % sys.argv[0]
        exit(1)
    iname = sys.argv[1]
    oname = sys.argv[2]
    level = int(sys.argv[3]) if len(sys.argv) > 3 else 6
    gzipfile(iname, oname, level)

if __name__ == '__main__':
    main()
I'm running Python 2.6.6 and gzip.open() doesn't support with.
As Andrew Bay notes in the comments, if block == '': won't work correctly in Python 3, since block contains bytes, not a string, and an empty bytes object doesn't compare as equal to an empty text string. We could check the block length, or compare to b'' (which will also work in Python 2.6+), but the simple way is if not block:.
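So the loop body from the script above would become (a minimal sketch of just that change):

while True:
    block = f_in.read(blocksize)
    if not block:  # works for both '' (Python 2) and b'' (Python 3)
        break
    f_out.write(block)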
It is weird to get a memory error even when reading a file line by line. I suppose it is because you have very little available memory and very large lines. You should then use binary reads:
import gzip
# adapt size value: small values will take more time, high values could cause memory errors
size = 8096

with open('test_large.csv', 'rb') as f_in:
    with gzip.open('test_out.csv.gz', 'wb') as f_out:
        while True:
            data = f_in.read(size)
            if not data:
                break
            f_out.write(data)