I have a PDF file of around 20-25 pages. The aim of this tool is to split the PDF into pages (using PyPDF2), save every page in a directory (using PyPDF2), convert the PDF pages into images (using ImageMagick), and then run OCR on them with tesseract (using PIL and PyOCR) to extract data. The tool will eventually get a tkinter GUI so users can perform the same operation many times by clicking a button. Throughout my heavy testing, I have noticed that if the whole process is repeated around 6-7 times, the tool/Python script crashes, showing "Not Responding" on Windows. I have done some debugging, but unfortunately no error is thrown, and memory and CPU usage look fine. I was able to narrow the problem down by observing that, before even reaching the tesseract part, PyPDF2 and ImageMagick fail when they are run together. I was able to replicate the problem with the following simplified Python code:
from wand.image import Image as Img
from PIL import Image as PIL
import pyocr
import pyocr.builders
import io, sys, os
from PyPDF2 import PdfFileWriter, PdfFileReader

def splitPDF(pdfPath):
    #Read the PDF file that needs to be parsed.
    pdfNumPages = 0
    with open(pdfPath, "rb") as pdfFile:
        inputpdf = PdfFileReader(pdfFile)
        #Iterate on every page of the PDF.
        for i in range(inputpdf.numPages):
            #Create the PDF Writer Object
            output = PdfFileWriter()
            output.addPage(inputpdf.getPage(i))
            with open("tempPdf%s.pdf" % i, "wb") as outputStream:
                output.write(outputStream)
        #Get the number of pages that have been split.
        pdfNumPages = inputpdf.numPages
    return pdfNumPages

pdfPath = "Test.pdf"
for i in range(1, 20):
    print("Run %s\n--------" % i)
    #Split the PDF into pages & get the PDF's number of pages.
    pdfNumPages = splitPDF(pdfPath)
    print(pdfNumPages)
    for i in range(pdfNumPages):
        #Convert the split PDF page to an image to run tesseract on it.
        with Img(filename="tempPdf%s.pdf" % i, resolution=300) as pdfImg:
            print("Processing Page %s" % i)
I have used the with statement to handle opening and closing files correctly, so there should be no memory leaks there. I have tried running the splitting part and the image-conversion part separately, and each works fine when run alone. However, when they are combined, the script fails after around 5-6 iterations. I have used try/except blocks, but no error is captured. I am also using the latest version of all the libraries. Any help or guidance is appreciated.
Thank you.
For future reference, the problem was due to the 32-bit version of ImageMagick as mentioned in one of the comments (thanks to emcconville). Uninstalling Python and ImageMagick 32-bit versions and installing both 64-bit versions fixed the problem. Hope this helps.
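If you need to confirm which Python build you are running before reinstalling, a minimal sketch (this checks the Python interpreter's architecture only, not ImageMagick's; it is not part of the original fix):
import platform
import struct

# Both lines should report 64-bit after installing the 64-bit versions.
print(platform.architecture())      # e.g. ('64bit', 'WindowsPE')
print(struct.calcsize("P") * 8)     # pointer size in bits: 64 on a 64-bit build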
Context:
I have PDF files I'm working with.
I'm using OCR to extract the text from these documents, and to do that I have to convert my PDF files to images.
I currently use the convert_from_path function of the pdf2image module, but it is very time-inefficient (9 minutes for a 9-page PDF).
Problem:
I am looking for a way to accelerate this process or another way to convert my PDF files to images.
Additional info:
I am aware that there is a thread_count parameter in the function but after several tries it doesn't seem to make any difference.
This is the whole function I am using:
import os
from pdf2image import convert_from_path

def pdftoimg(fic, output_folder):
    # Store all the pages of the PDF in a variable
    pages = convert_from_path(fic, dpi=500, output_folder=output_folder, thread_count=9,
                              poppler_path=r'C:\Users\Vincent\Documents\PDF\poppler-21.02.0\Library\bin')
    image_counter = 0
    # Iterate through all the pages stored above
    for page in pages:
        filename = "page_" + str(image_counter) + ".jpg"
        page.save(output_folder + filename, 'JPEG')
        image_counter = image_counter + 1
    for i in os.listdir(output_folder):
        if i.endswith('.ppm'):
            os.remove(output_folder + i)
Link to the convert_from_path reference.
I found an answer to this problem using another module called fitz, which is a Python binding for MuPDF.
First of all, install PyMuPDF:
The documentation can be found here, but for Windows users it's rather simple:
pip install PyMuPDF
Then import the fitz module:
import fitz
print(fitz.__doc__)
>>>PyMuPDF 1.18.13: Python bindings for the MuPDF 1.18.0 library.
>>>Version date: 2021-05-05 06:32:22.
>>>Built for Python 3.7 on win32 (64-bit).
Open your file and save every page as an image:
The get_pixmap() method accepts different parameters that allow you to control the image (variation, resolution, color...), so I suggest that you read the documentation here.
def convert_pdf_to_image(fic):
    #open your file
    doc = fitz.open(fic)
    #iterate through the pages of the document and create a RGB image of the page
    for page in doc:
        pix = page.get_pixmap()
        pix.save("page-%i.png" % page.number)
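For higher-resolution output than the 72 dpi default, get_pixmap() also accepts a transformation matrix; a minimal sketch (the 300 dpi target and function name are illustrative choices, not from the original answer):
import fitz

def convert_pdf_to_image_hires(fic, dpi=300):
    # Scale the default 72 dpi rendering up to the requested dpi.
    zoom = dpi / 72
    mat = fitz.Matrix(zoom, zoom)
    doc = fitz.open(fic)
    for page in doc:
        pix = page.get_pixmap(matrix=mat)
        pix.save("page-hires-%i.png" % page.number)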
Hope this helps anyone else.
I'm playing around in Python trying to download some images from imgur. I've been using urllib and urllib.urlretrieve, but you need to specify the extension when saving the file. This isn't a problem for most posts since the link has, for example, .jpg in it, but I'm not sure what to do when the extension isn't there. My question is whether there is any way to determine the image format of the file before downloading it. The question is mostly imgur-specific, but I wouldn't mind a solution for most image-hosting sites.
Thanks in advance
You can use imghdr.what(filename[, h]) in Python 2.7 and Python 3 to determine the image type.
Read here for more info, if you're using Python 2.7.
Read here for more info, if you're using Python 3.
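A minimal sketch of using imghdr, either on a file already saved to disk or on the first bytes of a download (the filename here is illustrative):
import imghdr

# On a saved file: returns e.g. 'gif', 'png', 'jpeg', or None if unrecognized.
print(imghdr.what('downloaded_image'))

# Or test raw header bytes directly; the file argument is ignored when h is given.
with open('downloaded_image', 'rb') as f:
    print(imghdr.what(None, h=f.read(32)))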
Assuming the picture has no file extension, there's no way to determine which type it is before you download it. All image formats set their initial bytes to a particular value. To inspect these 'magic' initial bytes, check out https://github.com/ahupp/python-magic - it matches the initial bytes against known image formats.
The code below downloads a picture from imgur and determines which file type it is.
import magic
import requests
import shutil

r = requests.get('http://i.imgur.com/yed5Zfk.gif', stream=True)  # Download picture
if r.status_code == 200:
    with open('~/Desktop/picture', 'wb') as f:
        r.raw.decode_content = True
        shutil.copyfileobj(r.raw, f)
    print(magic.from_file('~/Desktop/picture'))  # Determine type
    # Prints: 'GIF image data, version 89a, 360 x 270'
I want to serve matplotlib generated images with django.
If the image is a static png file, the following code works great:
from django.http import HttpResponse

def static_image_view(request):
    response = HttpResponse(mimetype='image/png')
    with open('test.png', 'rb') as f:
        response.write(f.read())
    return response
However, if the image is dynamically generated:
import numpy as np
import matplotlib
matplotlib.use('Agg')
from matplotlib import pyplot as plt

def dynamic_image_view(request):
    response = HttpResponse(mimetype='image/png')
    fig = plt.figure()
    plt.plot(np.random.rand(100))
    plt.savefig(response, format='png')
    plt.close(fig)
    return response
When accessing the url in Chrome (v36.0), the image will show up for a few seconds, then disappear and turn to the alt text. It seems that the browser doesn't know the image has already finished loading and waits until timeout. Checking with Chrome > Tools > Developer tools > Network supports this hypothesis: although the image appears after only about 1 sec, the status of the corresponding http request becomes "failed" after about 5 sec.
Note again, this strange phenomenon occurs only with the dynamically generated image, so it shouldn't be Chrome's problem (and it doesn't happen with IE or Firefox, presumably due to different rules for dealing with timed-out requests).
To make it more tricky (i.e., hard to reproduce), it seems to be network-speed dependent. It happens if I access the URL from an IP in China, but not via a proxy in the US (which seems to reach the host running Django faster)...
Following @HSquirrel's suggestion, I tested writing the PNG to a temporary disk file. Strangely, saving the file with matplotlib didn't work:
plt.savefig('MPL.png', format='png')
with open('MPL.png', 'rb') as f:
    response.write(f.read())
while saving the file with PIL worked:
import io
from PIL import Image
f = io.BytesIO()
plt.savefig(f, format='png')
f.seek(0)
im = Image.open(f)
im.save('PIL.png', 'PNG')
My attempt to get rid of the temp file failed:
im.save(response, 'PNG')
However, if I generate the image data stream with PIL rather than matplotlib, a temporary disk file is unnecessary. The following code works:
from PIL import Image, ImageDraw
im = Image.new('RGBA', (256,256), (0,255,0,255))
draw = ImageDraw.Draw(im)
draw.line((100,100, 150,200), fill=128, width=3)
im.save(response, 'PNG')
Finally, savefig(response, format='jpeg') has no problem at all.
Have you tried saving the image to disk and then returning that? (You can periodically clear your disk of such generated images based on their time of creation; see the sketch below.)
If that gives the same problem, it might be a problem with the way the PNG is generated. Then you could use an image library (like PIL) to make sure all your PNGs are (re)generated in a way that works with all browsers.
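A minimal sketch of the periodic clean-up idea mentioned above (the directory and the one-hour cut-off are illustrative choices, not part of the original suggestion):
import glob
import os
import time

IMAGE_DIR = '/tmp/generated_plots'   # wherever the generated PNGs are saved
cutoff = time.time() - 3600          # delete anything older than one hour

for path in glob.glob(os.path.join(IMAGE_DIR, '*.png')):
    if os.path.getmtime(path) < cutoff:
        os.remove(path)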
EDIT:
I've checked the PNG you've linked and played around with it a bit, opening and saving it with different programs and with PIL. I get different binary data every time. It seems each program decides which chunks to keep and which to remove. They all encode the PNG image data differently as well (as far as I can see; I am by no means a specialist in this, I just looked at the binary data based on the specs).
There are a few different paths you can take:
1. The quick and dirty one:
import io
from PIL import Image
f = io.BytesIO()
plt.savefig(f, format='png')
f.seek(0)
im = Image.open(f)
tempfilename = generatetempfilename()
im.save(tempfilename, 'PNG')
with open(tempfilename, 'rb') as f:
    response.write(f.read())
2. Adapt how matplotlib makes PNG files (possibly by just using PIL for it as well). See http://matplotlib.org/users/customizing.html#customizing-matplotlib
3. If it's an option for you, use JPEG.
4. Figure out what's wrong with the PNG generated by matplotlib and fix it at the binary level (I don't recommend this). You can use xxd (Linux command: xxd test.png) to see what the files look like in binary and then work through them using the PNG spec (overview, chunk spec).
I want to convert some multi-page .tif or .pdf files to individual .png images. From the command line (using ImageMagick) I just do:
convert multi_page.pdf file_out.png
And I get all the pages as individual images (file_out-0.png, file_out-1.png, ...)
I would like to handle this file conversion within Python; unfortunately, PIL cannot read .pdf files, so I want to use PythonMagick. I tried:
import PythonMagick
im = PythonMagick.Image('multi_page.pdf')
im.write("file_out%d.png")
or just
im.write("file_out.png")
But I only get 1 page converted to png.
Of course I could load each page individually and convert them one by one. But there must be a way to do them all at once?
ImageMagick is not memory efficient, so if you try to read a large PDF, say 100 pages or so, the memory requirement will be huge and it might crash or seriously slow down your system. So, after all, reading all the pages at once with PythonMagick is a bad idea; it's not safe.
For PDFs, I therefore ended up doing it page by page, but for that I need to get the number of pages first using pyPdf, which is reasonably fast:
import pyPdf
import PythonMagick

pdf_im = pyPdf.PdfFileReader(file('multi_page.pdf', "rb"))
npage = pdf_im.getNumPages()
for p in range(npage):
    im = PythonMagick.Image('multi_page.pdf[' + str(p) + ']')
    im.write('file_out-' + str(p) + '.png')
A more complete example based on the answer by Ivo Flipse and http://p-s.co.nz/wordpress/pdf-to-png-using-pythonmagick/
This uses a higher resolution and PyPDF2 instead of the older pyPdf.
import sys
import PyPDF2
import PythonMagick

pdffilename = sys.argv[1]

pdf_im = PyPDF2.PdfFileReader(open(pdffilename, "rb"))
npage = pdf_im.getNumPages()
print('Converting %d pages.' % npage)
for p in range(npage):
    im = PythonMagick.Image()
    im.density('300')
    im.read(pdffilename + '[' + str(p) + ']')
    im.write('file_out-' + str(p) + '.png')
I had the same problem, and as a workaround I used ImageMagick and did
import subprocess
params = ['convert', 'src.pdf', 'out.png']
subprocess.check_call(params)
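If the output comes out at ImageMagick's 72 dpi default, the same -density idea used in the PythonMagick answer above also applies on the command line; a sketch (filenames are illustrative):
import subprocess

# Rasterize at 300 dpi instead of the 72 dpi default; -density must precede the input file.
params = ['convert', '-density', '300', 'src.pdf', 'out.png']
subprocess.check_call(params)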
So the state I'm in released a bunch of data in PDF form, but to make matters worse, most (all?) of the PDFs appear to be letters typed in Office, printed/faxed, and then scanned (our government at its best, eh?). At first I thought I was crazy, but then I started seeing numerous PDFs that are 'tilted', like someone didn't get them on the scanner properly. So I figured the next best thing to getting the actual text out of them would be to turn each page into an image.
Obviously this needs to be automated, and I'd prefer to stick with Python if possible. If Ruby or Perl have some implementation that's just too awesome to pass up, I can go that route. I've tried pyPDF for text extraction, but that obviously didn't do me much good. I've tried swftools, but the images I'm getting from that are just shy of completely unusable; it seems like the fonts get ruined in the conversion. I also don't really care about the output image format, as long as the images are relatively lightweight and readable.
If the PDFs are truly scanned images, then you shouldn't convert the PDF to an image; you should extract the images from the PDF. Most likely, all of the data in the PDF is essentially one giant image, wrapped in PDF verbosity to make it readable in Acrobat.
You should try the simple expedient of finding the image in the PDF and copying the bytes out: Extracting JPGs from PDFs. The code there is dead simple, and there are probably dozens of reasons it won't work on your PDF files. But if it does, you'll have a quick and painless way to get the image data out of them.
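For illustration, a rough sketch of the byte-scanning idea behind the linked code (filenames are illustrative; it only works when the scans are stored as plain DCTDecode/JPEG streams):
# Scan the raw PDF bytes for JPEG start (\xff\xd8) / end (\xff\xd9) markers
# and dump everything in between to numbered .jpg files.
with open('scanned.pdf', 'rb') as f:
    data = f.read()

count = 0
pos = 0
while True:
    start = data.find(b'\xff\xd8', pos)
    if start < 0:
        break
    end = data.find(b'\xff\xd9', start)
    if end < 0:
        break
    with open('extracted_%d.jpg' % count, 'wb') as out:
        out.write(data[start:end + 2])
    count += 1
    pos = end + 2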
You could call e.g. pdftoppm from the command-line (or using Python's subprocess module) and then convert the resulting PPM files to the desired format using e.g. ImageMagick (again, using subprocess or some bindings if they exist).
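A minimal sketch of that pipeline, assuming pdftoppm (from poppler-utils) and ImageMagick's convert are on PATH (filenames are illustrative):
import glob
import os
import subprocess

# Render every page of multi_page.pdf to page-1.ppm, page-2.ppm, ... at 300 dpi.
subprocess.check_call(['pdftoppm', '-r', '300', 'multi_page.pdf', 'page'])

# Convert each PPM to PNG with ImageMagick, then drop the intermediate PPM.
for ppm in sorted(glob.glob('page-*.ppm')):
    subprocess.check_call(['convert', ppm, os.path.splitext(ppm)[0] + '.png'])
    os.remove(ppm)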
Ghostscript is ideal for converting PDF files to images. It is reliable and has many configurable options. It's also available under the GPL license or a commercial license. You can call it from the command line or use its native API (a command-line sketch follows the links below). For more information:
Ghostscript Main Website
Ghostscript docs on Command line usage
Another stackoverflow thread that provides some examples of invoking Ghostscript's command line interface from Python
Ghostscript API Documentation
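A minimal command-line sketch from Python, as mentioned above (filenames are illustrative; on Windows the executable is typically gswin64c rather than gs):
import subprocess

# Rasterize every page of input.pdf to 300 dpi PNGs named page-001.png, page-002.png, ...
subprocess.check_call([
    'gs', '-dBATCH', '-dNOPAUSE', '-sDEVICE=png16m', '-r300',
    '-sOutputFile=page-%03d.png', 'input.pdf',
])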
Here's an alternative approach to turning a .pdf file into images: use an image printer. I've successfully used the function below to "print" PDFs to JPEG images with ImagePrinter Pro. However, there are MANY image printers out there; pick the one you like. Some of the code may need to be altered slightly based on the image printer you pick and the standard file-saving format that printer uses.
import os
import time
import win32api

def pdf_to_jpg(pdfPath, pages):
    # print pdf using jpg printer
    # 'pages' is the number of pages in the pdf
    filepath = pdfPath.rsplit('/', 1)[0]
    filename = pdfPath.rsplit('/', 1)[1]

    #print pdf to jpg using jpg printer
    tempprinter = "ImagePrinter Pro"
    printer = '"%s"' % tempprinter
    win32api.ShellExecute(0, "printto", filename, printer, ".", 0)

    # Add time delay to ensure pdf finishes printing to file first
    fileFound = False
    if pages > 1:
        jpgName = filename.split('.')[0] + '_' + str(pages - 1) + '.jpg'
    else:
        jpgName = filename.split('.')[0] + '.jpg'
    jpgPath = filepath + '/' + jpgName
    waitTime = 30
    for i in range(waitTime):
        if os.path.isfile(jpgPath):
            fileFound = True
            break
        else:
            time.sleep(1)

    # print Error if the file was never found
    if not fileFound:
        print("ERROR: " + jpgName + " wasn't found after " + str(waitTime) + " seconds")

    return jpgPath
The resulting jpgPath variable tells you the path of the last JPEG page of the printed PDF. If you need another page, you can easily add some logic to modify the path to get prior pages.
in pdf_to_jpg(pdfPath)
      6     # 'pages' is the number of pages in the pdf
      7     filepath = pdfPath.rsplit('/', 1)[0]
----> 8     filename = pdfPath.rsplit('/', 1)[1]
      9
     10     #print pdf to jpg using jpg printer
IndexError: list index out of range
(This IndexError is raised when pdfPath contains no '/' separator, so rsplit('/', 1) returns a single-element list.)
With Wand there are now excellent ImageMagick bindings for Python that make this a very easy task.
Here is the code necessary for converting a single PDF file into a sequence of PNG images:
from wand.image import Image
input_path = "name_of_file.pdf"
output_name = "name_of_outfile_{index}.png"
source = Image(filename=input_path, resolution=300)
images = source.sequence
for i in range(len(images)):
    Image(images[i]).save(filename=output_name.format(index=i))