I have created a series of PDF documents (maps) using data driven pages in ESRI ArcMap 10. There is a page 1 and a page 2 for each map, generated from separate *.mxd files. So I have one set of PDF documents containing page 1 for each map and one set containing page 2 for each map. For example: map1_001.pdf, map1_002.pdf, map1_003.pdf...map2_001.pdf, map2_002.pdf, map2_003.pdf...and so on.
I would like to append these maps, pages 1 and 2, together so that both page 1 and 2 are together in one PDF per map. For example: mapboth_001.pdf, mapboth_002.pdf, mapboth_003.pdf... (they don't have to go into a new pdf file (mapboth), it's fine to append them to map1)
For each map1_*.pdf
Walk through the directory and append map2_*.pdf where the numbers (where the * is) in the file name match
There must be a way to do it using python. Maybe with a combination of arcpy, os.walk or os.listdir, and pyPdf and a for loop?
for pdf in os.walk(datadirectory):
??
Any ideas? Thanks kindly for your help.
A PDF file is structured differently than a plain text file. Simply concatenating two PDF files wouldn't work, as the files' internal structure and contents would collide and the result could be corrupt. You could certainly write your own merger, but that would take a fair amount of time and intimate knowledge of how a PDF is internally structured.
That said, I would recommend that you look into pyPdf. It supports the merging feature that you're looking for.
This should properly find and collate all the files to be merged; it still needs the actual .pdf-merging code.
Edit: I have added pdf-writing code based on the pyPdf example code. It is not tested, but should (as nearly as I can tell) work properly.
Edit2: realized I had the map-numbering crossways; rejigged it to merge the right sets of maps.
import collections
import glob
import re

# probably need to install this module -
# pip install pyPdf
from pyPdf import PdfFileWriter, PdfFileReader

def group_matched_files(filespec, reg, keyFn, dataFn):
    res = collections.defaultdict(list)
    reg = re.compile(reg)
    for fname in glob.glob(filespec):
        data = reg.match(fname)
        if data is not None:
            res[keyFn(data)].append(dataFn(data))
    return res

def merge_pdfs(fnames, newname):
    # a generator may be passed in; materialise it so it can be both joined and iterated
    fnames = list(fnames)
    print("Merging {} to {}".format(",".join(fnames), newname))
    # create new output pdf
    newpdf = PdfFileWriter()
    # keep the input files open until the output is written,
    # because pyPdf resolves page contents lazily
    openfiles = []
    # for each file to merge
    for fname in fnames:
        inf = open(fname, "rb")
        openfiles.append(inf)
        oldpdf = PdfFileReader(inf)
        # for each page in the file
        for pg in range(oldpdf.getNumPages()):
            # copy it to the output file
            newpdf.addPage(oldpdf.getPage(pg))
    # write finished output
    with open(newname, "wb") as outf:
        newpdf.write(outf)
    for inf in openfiles:
        inf.close()

def main():
    matches = group_matched_files(
        "map*.pdf",
        r"map(\d+)_(\d+)\.pdf$",
        lambda d: "{}".format(d.group(2)),
        lambda d: "map{}_".format(d.group(1))
    )
    for map, pages in matches.iteritems():
        merge_pdfs((page + map + '.pdf' for page in sorted(pages)), "merged{}.pdf".format(map))

if __name__ == "__main__":
    main()
I don't have any test PDFs to try combining, but I tested the flow with the cat command on text files.
You can try this out (I'm assuming a Unix-based system): merge.py
import os, re

directory = "/home/user/directory_with_maps/"
files = os.listdir(directory)
files = [x for x in files if re.search("map1_", x)]
while len(files) > 0:
    current = files[0]
    search = re.search(r"_(\d+)\.pdf", current)
    if search:
        name = search.group(1)
        # Ghostscript concatenates the page-1 and page-2 files into FULLMAP_<number>.pdf
        cmd = ("gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=%sFULLMAP_%s.pdf %s%s %smap2_%s.pdf"
               % (directory, name, directory, current, directory, name))
        os.system(cmd)
    files.remove(current)
Basically it grabs the list of map1 files, pulls the number out of each file name, and assumes the matching map2 file with the same number exists. (You could also use a counter padded with 0's to get the same effect; see the sketch below.)
Test the gs command first though; I just grabbed it from http://hints.macworld.com/article.php?story=2003083122212228.
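If you would rather drive it with a counter and zero-padding as mentioned above, a rough sketch might look like this (total_maps is a placeholder you would set yourself; the gs options are the same ones used in the script):

import os

total_maps = 3  # placeholder: the number of map sheets you generated
for i in range(1, total_maps + 1):
    num = "%03d" % i  # 001, 002, 003, ...
    cmd = ("gs -q -dNOPAUSE -dBATCH -sDEVICE=pdfwrite "
           "-sOutputFile=FULLMAP_%s.pdf map1_%s.pdf map2_%s.pdf" % (num, num, num))
    os.system(cmd)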
There are examples of how to do this on the pdfrw project page at Google Code:
http://code.google.com/p/pdfrw/wiki/ExampleTools
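For instance, concatenating one of your page-1 files with its page-2 counterpart comes down to something like this sketch (along the lines of those examples, not code copied from that page):

from pdfrw import PdfReader, PdfWriter

writer = PdfWriter()
for fname in ("map1_001.pdf", "map2_001.pdf"):
    writer.addpages(PdfReader(fname).pages)  # append every page of each input
writer.write("mapboth_001.pdf")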
I'm on Python 3, using PyPDF2. In order to add page numbers to a newly generated PDF (which I create with reportlab), I merge the two PDF files page by page in the following way:
from PyPDF2 import PdfFileWriter, PdfFileReader

def merge_pdf_files(first_pdf_fp, second_pdf_fp, target_fp):
    """
    Merges two PDF files into a target final PDF file.

    Args:
        first_pdf_fp: the first PDF file path.
        second_pdf_fp: the second PDF file path.
        target_fp: the target PDF file path.
    """
    pdf1 = PdfFileReader(first_pdf_fp)
    pdf2 = PdfFileReader(second_pdf_fp)
    assert (pdf1.getNumPages() == pdf2.getNumPages())
    final_pdf_writer = PdfFileWriter()
    for i in range(pdf1.getNumPages()):
        number_page = pdf1.getPage(i)
        content_page = pdf2.getPage(i)
        content_page.mergePage(number_page)
        final_pdf_writer.addPage(content_page)
    with open(target_fp, "wb") as final_os:
        final_pdf_writer.write(final_os)
But this is very slow. Is there a faster and cleaner way to do the merge in one go using PyPDF2?
I do not have enough 'reputation' to comment, so since I was going to post an answer anyway, I made it a long one.
Normally when people want to 'merge' documents they mean 'combining' them, or as you point out, concatenating or appending one pdf at the end of the other (or somewhere in between). But based on the code you present, it seems you meant overlaying one pdf over another, right? In other words, you want page 1 from both pdf1 and pdf2 to be combined into a single page as part of a new pdf.
If so, you could use this (modified from an example used to illustrate watermarking). It still overlays one page at a time, but pdfrw is known to be very fast compared to PyPDF2 and is supposed to work well with reportlab. I haven't compared the speeds, so I'm not sure whether this will actually be faster than what you already have.
from pdfrw import PdfReader, PdfWriter, PageMerge

p1 = PdfReader("file1")
p2 = PdfReader("file2")

for page in range(len(p1.pages)):
    # overlay the corresponding page of p2 onto the page of p1
    merger = PageMerge(p1.pages[page])
    merger.add(p2.pages[page]).render()

writer = PdfWriter()
writer.write("output.pdf", p1)
Try this.
You can use PyPDF2's PdfFileMerger class.
Using file concatenation, you can concatenate files with the append method:
from PyPDF2 import PdfFileMerger

pdfs = ['file1.pdf', 'file2.pdf', 'file3.pdf', 'file4.pdf']

merger = PdfFileMerger()

for pdf in pdfs:
    merger.append(pdf)

merger.write("result.pdf")
merger.close()
Maybe the answer in Is there a way to speed up PDF page merging... will also help you; there, multiprocessing is used to keep the processor at 100%.
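Along those lines, here is a rough, untested sketch of how the work could be split across processes with PyPDF2: each worker overlays a slice of pages into its own partial PDF, and the partial files are then appended in order. This is not the code from that answer; the file names, chunk size, and part-file naming are placeholders.

from multiprocessing import Pool
import os
from PyPDF2 import PdfFileReader, PdfFileWriter, PdfFileMerger

def merge_chunk(args):
    # overlay pages [start, stop) of the page-number PDF onto the content PDF
    number_fp, content_fp, start, stop, part_fp = args
    numbers = PdfFileReader(number_fp)
    content = PdfFileReader(content_fp)
    writer = PdfFileWriter()
    for i in range(start, stop):
        page = content.getPage(i)
        page.mergePage(numbers.getPage(i))
        writer.addPage(page)
    with open(part_fp, "wb") as f:
        writer.write(f)
    return part_fp

def parallel_merge(number_fp, content_fp, target_fp, chunk=50):
    n = PdfFileReader(number_fp).getNumPages()
    jobs = [(number_fp, content_fp, s, min(s + chunk, n), "part_{:05d}.pdf".format(s))
            for s in range(0, n, chunk)]
    with Pool() as pool:
        parts = pool.map(merge_chunk, jobs)  # result order matches job order
    merger = PdfFileMerger()
    for part in parts:
        merger.append(part)
    merger.write(target_fp)
    merger.close()
    for part in parts:
        os.remove(part)  # tidy up the temporary chunk files

if __name__ == "__main__":
    parallel_merge("numbers.pdf", "content.pdf", "merged.pdf")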
I downloaded a pdf where every other page is blank and I'd like to remove the blank pages. I could do this manually in a pdf tool (Adobe Acrobat, Preview.app, PDFPen, etc.) but since it's several hundred pages I'd like to do something more automated. Is there a way to do this in python?
One way is to use PyPDF4, so in your terminal first do
pip install pypdf4
Then create a .py script file similar to this:
# pdf_strip_every_other_page.py
from PyPDF4 import PdfFileReader, PdfFileWriter

number_of_pages = 500

output_writer = PdfFileWriter()

with open("/path/to/original.pdf", "rb") as inputfile:
    pdfOne = PdfFileReader(inputfile)
    for i in list(range(0, number_of_pages)):
        if i % 2 == 0:
            page = pdfOne.getPage(i)
            output_writer.addPage(page)
    with open("/path/to/output.pdf", "wb") as outfile:
        output_writer.write(outfile)
Note: you'll need to change the paths to what's appropriate for your scenario.
Obviously this script is rather crude and could be improved, but I wanted to share it for anyone else needing a quick way to deal with this scenario.
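For instance, a slightly less crude variant (untested, same PyPDF4 API) could read the page count from the file itself instead of hard-coding it, and take the paths as command-line arguments:

# pdf_strip_every_other_page.py - variant that derives the page count itself
import sys
from PyPDF4 import PdfFileReader, PdfFileWriter

def strip_every_other_page(in_path, out_path):
    output_writer = PdfFileWriter()
    with open(in_path, "rb") as inputfile:
        reader = PdfFileReader(inputfile)
        for i in range(reader.getNumPages()):
            if i % 2 == 0:  # keep pages 1, 3, 5, ... (0-based even indices)
                output_writer.addPage(reader.getPage(i))
        with open(out_path, "wb") as outfile:
            output_writer.write(outfile)

if __name__ == "__main__":
    strip_every_other_page(sys.argv[1], sys.argv[2])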
I have a pdf file over 100 pages. There are boxes and columns of text. When I extract the text using PyPDF2 and the tika parser, I get a string of data which is out of order. It is ordered by columns in many cases and skips around the document in other cases. Is it possible to read the pdf file starting from the top, moving left to right until the bottom? I want to read the text in the columns and boxes, but I want each line of text returned as it would be read, left to right.
I've tried:
PyPDF2 - the only tool is extractText(). Fast, but it does not preserve gaps between elements. Results are jumbled.
Pdfminer - the PDFPageInterpreter() method with LAParams. This works well but is slow. At least 2 seconds per page, and I've got 200 pages.
pdfrw - this only tells me the number of pages.
tabula_py - only gives me the first page. Maybe I'm not looping it correctly.
tika - what I'm currently working with. Fast and more readable, but the content is still jumbled.
from tkinter import filedialog
import os
from tika import parser
import re

# select the file you want
file_path = filedialog.askopenfilename(initialdir=os.getcwd(), filetypes=[("PDF files", "*.pdf")])
print(file_path)  # print that path

file_data = parser.from_file(file_path)  # Parse data from file
text = file_data['content']  # Get files text content

by_page = text.split('... Information')  # split up the document into pages by a string
                                         # that always appears on the top of each page

for i in range(1, len(by_page)):  # loop page by page
    info = by_page[i]  # get one page worth of data from the pdf
    reformated = info.replace("\n", "&")  # I replace the new lines with "&" to make it more readable
    print("Page: ", i)  # print page number
    print(reformated, "\n\n")  # print the text string from the pdf
This provides output of a sort, but it is not ordered in the way I would like. I want the pdf to be read left to right. Also, if I could get a pure python solution, that would be a bonus. I don't want my end users to be forced to install java (I think the tika and tabula-py methods are dependent on java).
I did this for .docx with the code below, where txt is the text of the .docx. Hope this helps (link).
import re

# insert a blank line after each sentence-ending ".", "?" or "!", optionally followed by a quote
pttrn = re.compile(r'(\.|\?|\!)(\'|\")?\s')
new = re.sub(pttrn, r'\1\2\n\n', txt)
print(new)
I am trying to generate pdf files based on the county they fall in. If there is more than one pdf file per county, then I need to append the files into a single file based on the county key. I can't seem to get the maps to append based on the key. The final maps generated seem random and often have way too many files appended. I am pretty sure I am not grouping them correctly. I have read that multiple values in a key can result in them showing up multiple times. Can someone please clue me in on how to access each value per key separately, one time only? Obviously I am not understanding something crucial.
My code:
import csv, os
import shutil
from PyPDF2 import PdfFileMerger, PdfFileReader, PdfFileWriter

merged_file = PdfFileMerger()

counties = {'County4': ['C:\\maps\\map2.pdf', 'C:\\maps\\map3.pdf', 'C:\\maps\\map4.pdf'], 'County1': ['C:\\maps\\map1.pdf', 'C:\\maps\\map2.pdf'], 'County3': ['C:\\maps\\map3.pdf'], 'County2': ['C:\\maps\\map1.pdf', 'C:\\maps\\map3.pdf']}

for k, v in counties.items():
    newPdfFile = ('C:\\maps\\JoinedMaps\\' + k + '.pdf')
    if len(v) > 1:
        for filename in v:
            merged_file.append(PdfFileReader(filename, 'rb'))
        merged_file.write(newPdfFile)
    else:
        for filename in v:
            shutil.copyfile(filename, newPdfFile)
I get four maps outputted (which is correct) but the number of "pages" (appended files) in some of these files is wildly off. As far as I can tell there is no rhyme or reason as to how these pages are appended. County4 pdf has 3 pages (correct), County1 pdf has 8 pages instead of 2, County3 pdf has 1 page (correct) and County2 has 15 pages instead of 2.
EDIT:
It turns out PyPDF2 does not like iterating through and creating files using the concept of group-by. I imagine it has something to do with how it stores memory. The result is an increasingly greater number of pages as you iterate through the key values. I spent days thinking it was my coding. Good to know it wasn't, I guess, but I am surprised this piece of information is not better documented "out there on the internet".
My solution was to use arcpy, which doesn't help most users reading this, sorry to say.
For those looking at my solution, my csv file looked like this:
County1 C:\maps\map1.pdf
County1 C:\maps\map2.pdf
County2 C:\maps\map1.pdf
County2 C:\maps\map3.pdf
County3 C:\maps\map3.pdf
County4 C:\maps\map2.pdf
County4 C:\maps\map3.pdf
County4 C:\maps\map4.pdf
and my resulting pdf files looked like this:
County-County1 (2 pages - Map1 and Map2)
County-County2 (2 pages - Map1 and Map3)
County-County3 (1 page - Map3)
County-County4 (3 pages - Map2, Map3, and Map4)
My data started out as a csv file, and the code below references this instead of the dictionary (which was generated from the csv file) used in the example above, but you should be able to glean what I did from the code below. I basically scrapped the dictionary idea and went with reading the csv file line by line and then appending using arcpy. PyPDF2 does NOT merge correctly when trying to output multiple files based on a key. Three days of my life I can't get back.
import csv
import arcpy
from arcpy import env
import shutil, os, glob

# clear out files from destination directory
files = glob.glob(r'C:\maps\JoinedMaps\*')
for f in files:
    os.remove(f)

# open csv file
f = open(r"C:\maps\Maps.csv", "r+")
ff = csv.reader(f)

# set variable to establish previous row of csv file (for comparison)
pre_line = ff.next()

# Iterate through csv file
for cur_line in ff:
    # new file name and location based on value in column (county name)
    newPdfFile = (r'C:\maps\JoinedMaps\County-' + cur_line[0] + '.pdf')
    # establish pdf files to be appended
    joinFile = pre_line[1]
    appendFile = cur_line[1]
    # If columns in both rows match
    if pre_line[0] == cur_line[0]:  # <-- compare first column
        # If destination file already exists, append file referenced in current row
        if os.path.exists(newPdfFile):
            tempPdfDoc = arcpy.mapping.PDFDocumentOpen(newPdfFile)
            tempPdfDoc.appendPages(appendFile)
        # Otherwise create destination and append files referenced in both the previous and current row
        else:
            tempPdfDoc = arcpy.mapping.PDFDocumentCreate(newPdfFile)
            tempPdfDoc.appendPages(joinFile)
            tempPdfDoc.appendPages(appendFile)
        # save and delete temp file
        tempPdfDoc.saveAndClose()
        del tempPdfDoc
    else:
        # if no match, do not merge, just copy
        shutil.copyfile(appendFile, newPdfFile)
    # reset variable
    pre_line = cur_line
I have three sound files, for example a.wav, b.wav and c.wav. I want to write them into a single file, for example all.xmv (the extension could be different), and when I need one of them I want to extract it and play it (for example, play a.wav after extracting it from all.xmv).
How can I do this in Python? I have heard there is a procedure named BlockWrite in Delphi that does what I want. Is there something like BlockWrite in Python, or how else can I write these files into one file and play them back?
Would standard tar/zip files work for you?
http://docs.python.org/library/zipfile.html
http://docs.python.org/library/tarfile.html
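For example, a minimal zipfile sketch using the file names from your question (playing the extracted audio is left to whatever audio library you already use):

import io
import wave
import zipfile

# pack the three wav files into a single archive
with zipfile.ZipFile("all.xmv", "w") as archive:
    for name in ("a.wav", "b.wav", "c.wav"):
        archive.write(name)

# later: pull one of them back out and open it as a wav stream in memory
with zipfile.ZipFile("all.xmv") as archive:
    data = archive.read("a.wav")
wav = wave.open(io.BytesIO(data), "rb")
print(wav.getnframes(), "frames at", wav.getframerate(), "Hz")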
If the archive idea (which is, btw, the best answer to your question) doesn't suit you, you can fuse the data from several files into one file, e.g. by writing consecutive blocks of binary data (thus creating an uncompressed archive!).
Let paths be a list of files that should be concatenated:
import io
import os

offsets = []  # the offsets that should be kept for later file navigation
last_offset = 0

# out_path is the fused output file
fout = io.FileIO(out_path, 'w')
for path in paths:
    f = io.FileIO(path)  # stream IO
    fout.write(f.read())
    f.close()
    last_offset += os.path.getsize(path)
    offsets.append(last_offset)
fout.close()

# Pseudo: write the offsets to separate file e.g. by pickling
# ...

# reading the data, given that the offsets[] list is available
file_ID = 10  # e.g. you need to read the file at index 10 (works for index >= 1; index 0 starts at offset 0)
f = io.FileIO(out_path)
f.seek(offsets[file_ID - 1])  # seek to required position
read_size = offsets[file_ID] - offsets[file_ID - 1]  # get the file size
data = f.read(read_size)  # here we are!
f.close()
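For completeness, the "write the offsets to separate file" step from the pseudo-comment could be done with pickle along these lines (offsets_path is a placeholder name):

import pickle

offsets_path = out_path + ".offsets"

# after building the fused file:
with open(offsets_path, "wb") as f:
    pickle.dump(offsets, f)

# later, before reading blocks back:
with open(offsets_path, "rb") as f:
    offsets = pickle.load(f)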