I have been working on a program to automate some of my repetitive tasks, one of which is making adjustments to PDF files, specifically their form fields. The PDF file takes one input and then calculates other associated outputs. I would like a Python script that fills in that one field without having to open the file. I used some stock code I found online, and it seems to work; however, the calculated fields do not update to the new value, and Adobe Acrobat crashes whenever I open the field editor. The code I am using is below.
from PyPDF2 import PdfReader, PdfWriter

reader = PdfReader("scratch.pdf")
writer = PdfWriter()
page = reader.pages[0]

# Read the current text-field values, e.g. {"key": "value", "key2": "value2"}
fields = reader.get_form_text_fields()
print(fields)

writer.add_page(page)
writer.update_page_form_field_values(writer.pages[0], {'Fill': '11'})

# write the result to Scratch_write.pdf
with open("Scratch_write.pdf", "wb") as output_stream:
    writer.write(output_stream)
Above, I want to edit a PDF file called "scratch.pdf", and the field I would like to edit is 'Fill'.
When I run the script and open the new PDF, I have to click on the field box to see any changes, and the associated calculated fields do not update to the proper values. It is important to note that the output of
print(fields)
is
{'Fill': '10', 'Answer': '8'}
These are the correct values. To see what's going on, I modified the code shown previously and run the script below after running the first one, to check whether the fields are actually being updated.
from PyPDF2 import PdfReader

reader = PdfReader("Scratch_write.pdf")

# Read back the text-field values from the file written above
fields = reader.get_form_text_fields()
print(fields)
When I run the code above I get:
{}
leading me to believe that it's not actually writing to the fields.
Any and all help appreciated.
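As a starting point for anyone hitting the same symptoms, here is a minimal sketch of one commonly suggested approach, assuming a reasonably recent PyPDF2 (method and attribute names vary between versions): copy the whole document rather than a single page, so the /AcroForm dictionary with the field definitions comes along, and set /NeedAppearances so the viewer regenerates the field appearances. Note that the 'Answer' field is computed by JavaScript embedded in the PDF; PyPDF2 does not execute that script, so the dependent value only recalculates in a viewer that runs the calculation.
from PyPDF2 import PdfReader, PdfWriter
from PyPDF2.generic import BooleanObject, NameObject

reader = PdfReader("scratch.pdf")
writer = PdfWriter()

# Clone the whole document so the /AcroForm (field definitions, calculation
# order, JavaScript) is carried over, not just the first page.
writer.clone_document_from_reader(reader)
writer.update_page_form_field_values(writer.pages[0], {"Fill": "11"})

# Ask viewers to rebuild field appearances; without this the new value may
# only show once you click into the field.
if "/AcroForm" in writer._root_object:
    writer._root_object["/AcroForm"][NameObject("/NeedAppearances")] = BooleanObject(True)

with open("Scratch_write.pdf", "wb") as output_stream:
    writer.write(output_stream)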
Related
I'm using Python and PyPDF2 to generate a set of PDFs based on a template with form fields. The PDFs are created and all of the fields are filled correctly, but when I open the PDFs in Adobe Acrobat they show changes made to the file (i.e., the "Save" menu option is enabled, and when I try to close the file Adobe asks if I want to save changes, even if I haven't touched anything).
It's mostly just a slight annoyance, but is there a way to prevent this from happening? From my research it seems like this means (1) there's JavaScript modifying the file (there isn't), or (2) the file is corrupted and Adobe is fixing it.
A simplified version of my code is below. I set /NeedAppearances to True in both the reader and writer because otherwise the values didn't appear in the PDF unless I clicked on the field. I also set the annotations so that the fields are read-only and appear as regular text.
from PyPDF2 import PdfFileReader, PdfFileWriter
from PyPDF2.generic import BooleanObject, NameObject, IndirectObject, NumberObject

data = {'field1': 'Text1', 'field2': 'Text2'}

with open('template.pdf', 'rb') as read_file:
    pdf_reader = PdfFileReader(read_file)
    pdf_writer = PdfFileWriter()

    # Set /NeedAppearances to make field values visible
    try:
        if '/AcroForm' in pdf_reader.trailer['/Root']:
            pdf_reader.trailer['/Root']['/AcroForm'][NameObject('/NeedAppearances')] = BooleanObject(True)
        if '/AcroForm' not in pdf_writer._root_object:
            root = pdf_writer._root_object
            acroform = {NameObject('/AcroForm'): IndirectObject(len(pdf_writer._objects), 0, pdf_writer)}
            root.update(acroform)
        pdf_writer._root_object['/AcroForm'][NameObject('/NeedAppearances')] = BooleanObject(True)
    except Exception:
        print('Warning: Error setting PDF /NeedAppearances value.')

    # Add first page to writer
    pdf_writer.addPage(pdf_reader.getPage(0))
    page = pdf_writer.getPage(0)

    # Update form fields
    pdf_writer.updatePageFormFieldValues(page, data)

    # Make fields read-only
    for i in range(len(page['/Annots'])):
        annot = page['/Annots'][i].getObject()
        annot.update({NameObject('/Ff'): NumberObject(1)})

    # Write PDF
    with open('result.pdf', 'wb') as write_file:
        pdf_writer.write(write_file)
I am new to scraping with Python. After using a lot of useful resources I was able to scrape the content of a page. However, I am having trouble saving this data into a .csv file.
Python:
import csv

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox(executable_path=r'C:\Users\geckodriver.exe')
driver.get("myUrl.jsp")

username = driver.find_element_by_name('USER')
password = driver.find_element_by_name('PASSWORD')
username.send_keys("U")
password.send_keys("P")

main_frame = driver.find_element_by_xpath('//*[@id="Frame"]')
src = driver.switch_to_frame(main_frame)

table = driver.find_element_by_xpath("/html/body/div/div[2]/div[5]/form/div[7]/div[3]/table")
rows = table.find_elements(By.TAG_NAME, "tr")

for tr in rows:
    outfile = open("C:/Users/Scripts/myfile.csv", "w")
    with outfile:
        writers = csv.writer(outfile)
        writers.writerows(tr.text)
Problem:
Only one of the rows gets written to the CSV file. However, when I print tr.text to the console, all the required rows show up. How can I get all the text inside the tr elements written into the CSV file?
Currently your code will open the file, write one line, close it, then on the next row open it again and overwrite the line. Please consider the following code snippet:
# We use 'with' to open the file and auto-close it when done.
# We only need to open the file once, so we open it first,
# then loop through the rows and write everything into the open file.
with open('C:/Users/Scripts/myfile.csv', 'w', newline='') as outfile:
    writers = csv.writer(outfile)
    for tr in rows:
        # tr.text is a single string; write it as one row with one cell
        writers.writerow([tr.text])
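If you want one CSV column per table cell rather than the whole row text in a single cell, a variant along these lines might help (a sketch, assuming the rows contain td elements):
with open('C:/Users/Scripts/myfile.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    for tr in rows:
        # one CSV row per <tr>, one column per <td>
        cells = tr.find_elements(By.TAG_NAME, "td")
        writer.writerow([cell.text for cell in cells])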
Basic Situation
I currently have access to a salesforce page that has a 5000+ list of contacts. However, the page can only be loaded 25 contacts at a time and copying and pasting is unfeasible. Clicking a contact also gives other useful details but the general list is the most important. I don't have access to an admin portal; I only have viewing access to specific content such as contacts.
The link is structured as follows: https://example.force.com/example/_ui/search/ui/UnifiedSearchResults?offset=25&fpg=1cjmlvhdxsqly&str=epsilon-mu&sen=&fen=003&initialViewMode=detail&relatedListId=Contact&aId=_1527023282480&cookieParam=cookieParam1527023882485&tyme=1527023282485
My View
Question
Is there a method (such as a Python or bash script, URL edit, web scraping, etc.) to either download the list (as a .csv or .txt) or make the list populate in its entirety for simple copy and paste?
To get the whole table into Excel I run this script. You will need to add in your login details and the table name.
import csv

from simple_salesforce import Salesforce

"""
Downloading the table
"""

# Logging in to SF and creating the object
sf = Salesforce(
    password='{{password}}', username='{{uName}}',
    security_token='{{token}}',
    client_id='TestingApp')

# Getting the field names
tableInfo = sf.{{tableName}}.describe()
tableFields = []
for x in tableInfo['fields']:
    tableFields.append(x['name'])
hdrs = ', '.join(tableFields)

# Query every record with all fields
output = sf.query_all("SELECT " + hdrs + " FROM {{tableName}}")

# Use the keys of the first record as the CSV header
headline = []
for key in output['records'][0]:
    headline.append(key)

with open('C:\\temp\\Table.csv', 'w', newline='', encoding='utf-8') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=headline)
    writer.writeheader()
    for record in output['records']:
        writer.writerow(record)
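One thing to be aware of (an assumption based on how simple_salesforce normally shapes query_all results): each record also carries an 'attributes' entry with the object type and URL, which would end up as an extra CSV column here. If you don't want it, you could skip it when building the header, for example:
# Skip the 'attributes' metadata entry when building the CSV header,
# and tell DictWriter to ignore it in each record.
headline = [key for key in output['records'][0] if key != 'attributes']

with open('C:\\temp\\Table.csv', 'w', newline='', encoding='utf-8') as csvfile:
    writer = csv.DictWriter(csvfile, fieldnames=headline, extrasaction='ignore')
    writer.writeheader()
    for record in output['records']:
        writer.writerow(record)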
I've encountered an issue with the CSV-writing part of a web-scraping project.
I have data formatted like this:
table = {
    "UR": url,
    "DC": desc,
    "PR": price,
    "PU": picture,
    "SN": seller_name,
    "SU": seller_url
}
I get this table from a loop that analyzes an HTML page and returns it.
The table itself is fine; it changes on every iteration of the loop.
The problem is that when I write every table I get from that loop into my CSV file, it just writes the same thing over and over again.
The only element written is the first one the loop produces, and it is written about 10 million times instead of about 45 times (the number of articles per page).
I tried to do it plainly with the 'csv' library and then with pandas.
So here's my loop:
import os
import requests
from bs4 import BeautifulSoup

if os.path.isfile(file_path) is False:
    open(file_path, 'a').close()
file = open(file_path, "a", encoding="utf-8")

i = 1
while True:
    final_url = website + brand_formatted + "+handbags/?p=" + str(i)
    request = requests.get(final_url)
    soup = BeautifulSoup(request.content, "html.parser")

    articles = soup.find_all("div", {"class": "dui-card searchresultitem"})
    for article in articles:
        table = scrap_it(article)
        write_to_csv(table, file)

    if i == nb_page:
        break
    i += 1

file.close()
and here is my function that writes to the CSV file:
def write_to_csv(table, file):
    import csv
    writer = csv.writer(file, delimiter=" ")
    writer.writerow(table["UR"])
    writer.writerow(table["DC"])
    writer.writerow(table["PR"])
    writer.writerow(table["PU"])
    writer.writerow(table["SN"])
    writer.writerow(table["SU"])
I'm pretty new to writing CSV files and to Python in general, but I can't figure out why this isn't working. I've followed many guides and ended up with more or less the same CSV-writing code.
Edit: here's a screenshot of the output in my CSV file;
you can see that every element is exactly the same, even though my table changes.
EDIT: I worked around the problem by writing a separate file for each article I scrape. That's a lot of files, but apparently it is fine for my project.
This might be the solution you wanted:
import csv

fieldnames = ['UR', 'DC', 'PR', 'PU', 'SN', 'SU']

def write_to_csv(table, file):
    # Write the whole dict as a single CSV row, one column per field
    writer = csv.DictWriter(file, fieldnames=fieldnames)
    writer.writerow(table)
Reference: https://docs.python.org/3/library/csv.html
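For completeness, a sketch of how this could be wired into the loop from the question for a single page of results, assuming the file is opened once with newline='' and a header row is wanted (the writeheader call and open mode are additions of mine, not part of the answer above):
with open(file_path, 'w', newline='', encoding='utf-8') as file:
    writer = csv.DictWriter(file, fieldnames=fieldnames)
    writer.writeheader()                      # column names once at the top
    for article in articles:
        writer.writerow(scrap_it(article))    # one row per scraped article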
I have a ReportLab SimpleDocTemplate that I am returning as a dynamic PDF. I am generating its content based on some Django model metadata. Here's my template setup:
buff = StringIO()
doc = SimpleDocTemplate(buff, pagesize=letter,
                        rightMargin=72, leftMargin=72,
                        topMargin=72, bottomMargin=18)
Story = []
I can easily add textual metadata from the Entry model into the Story list to be built later:
ptext = '<font size=20>%s</font>' % entry.title.title()
paragraph = Paragraph(ptext, custom_styles["Custom"])
Story.append(paragraph)
And then generate the PDF to be returned in the response by calling build on the SimpleDocTemplate:
doc.build(Story, onFirstPage=entry_page_template, onLaterPages=entry_page_template)
pdf = buff.getvalue()
resp = HttpResponse(mimetype='application/x-download')
resp['Content-Disposition'] = 'attachment;filename=logbook.pdf'
resp.write(pdf)
return resp
One metadata field on the model is a file attachment. When those attachments are PDFs, I'd like to merge them into the Story that I am generating, i.e., as a ReportLab "flowable".
I'm attempting to do so using pdfrw, but haven't had any luck. Ideally I'd love to just call:
from pdfrw import PdfReader
pdf = PdfReader(entry.document.file.path)
Story.append(pdf)
and append the pdf to the existing Story list to be included in the generation of the final document, as noted above.
Anyone have any ideas? I tried something similar using pagexobj to create the pdf, trying to follow this example:
http://code.google.com/p/pdfrw/source/browse/trunk/examples/rl1/subset.py
from pdfrw.buildxobj import pagexobj
from pdfrw.toreportlab import makerl
pdf = pagexobj(PdfReader(entry.document.file.path))
But didn't have any luck either. Can someone explain to me the best way to merge an existing PDF file into a reportlab flowable? I'm no good with this stuff and have been banging my head on pdf-generation for days now. :) Any direction greatly appreciated!
I just had a similar task in a project. I used reportlab (open source version) to generate pdf files and pyPDF to facilitate the merge. My requirements were slightly different in that I just needed one page from each attachment, but I'm sure this is probably close enough for you to get the general idea.
from datetime import datetime

from django.conf import settings
from pyPdf import PdfFileReader, PdfFileWriter

def create_merged_pdf(user):
    basepath = settings.MEDIA_ROOT + "/"

    # The following block calls the function that uses reportlab to generate a pdf
    coversheet_path = basepath + "%s_%s_cover_%s.pdf" % (user.first_name, user.last_name, datetime.now().strftime("%f"))
    create_cover_sheet(coversheet_path, user, user.performancereview_set.all())

    # Now use the cover sheet and all of the performance reviews to create a merged pdf
    merged_path = basepath + "%s_%s_merged_%s.pdf" % (user.first_name, user.last_name, datetime.now().strftime("%f"))

    # For the merged file result
    output = PdfFileWriter()

    # For each pdf file to add, open it in a PdfFileReader object and add its page to output
    cover_pdf = PdfFileReader(open(coversheet_path, "rb"))
    output.addPage(cover_pdf.getPage(0))

    # Iterate through attached files and merge. I only needed the first page, YMMV
    for review in user.performancereview_set.all():
        review_pdf = PdfFileReader(open(review.pdf_file.file.name, "rb"))
        output.addPage(review_pdf.getPage(0))  # only first page of attachment

    # Write out the merged file
    outputStream = open(merged_path, "wb")
    output.write(outputStream)
    outputStream.close()
I used the following class to solve my issue. It inserts the PDFs as vector PDF images.
It works great because I needed a table of contents; the flowable object allowed the built-in TOC functionality to work like a charm.
Is there a matplotlib flowable for ReportLab?
Note: If you have multiple pages in the file, you have to modify the class slightly. The sample class is designed to just read the first page of the PDF.
I know the question is a bit old but I'd like to provide a new solution using the latest PyPDF2.
You now have access to PdfFileMerger, which can do exactly what you want: append PDFs to an existing file. You can even merge them at different positions and choose a subset of the pages or all of them!
The official docs are here: https://pythonhosted.org/PyPDF2/PdfFileMerger.html
An example from the code in your question:
import tempfile
import PyPDF2
from django.core.files import File
# Using a temporary file rather than a buffer in memory is probably better
temp_base = tempfile.TemporaryFile()
temp_final = tempfile.TemporaryFile()
# Create document, add what you want to the story, then build
doc = SimpleDocTemplate(temp_base, pagesize=letter, ...)
...
doc.build(...)
# Now, this is the fancy part. Create merger, add extra pages and save
merger = PyPDF2.PdfFileMerger()
merger.append(temp_base)
# Add any extra document, you can choose a subset of pages and add bookmarks
merger.append(entry.document.file, bookmark='Attachment')
merger.write(temp_final)
# Write the final file in the HTTP response
django_file = File(temp_final)
resp = HttpResponse(django_file, content_type='application/pdf')
resp['Content-Disposition'] = 'attachment;filename=logbook.pdf'
if django_file.size is not None:
    resp['Content-Length'] = django_file.size
return resp
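As a side note on the "subset of pages" point above, the merger can also insert a document at a specific position and take a page range. A small hedged sketch (the file name here is just a placeholder; the parameters follow the PdfFileMerger docs linked above):
# Insert only the first two pages of another PDF after page 0 of the merged
# output, with its own bookmark ('appendix.pdf' is a hypothetical file).
merger.merge(position=1, fileobj='appendix.pdf', pages=(0, 2), bookmark='Appendix')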
Use this custom flowable:
class PDF_Flowable(Flowable):
#----------------------------------------------------------------------
def __init__(self,P,page_no):
Flowable.__init__(self)
self.P = P
self.page_no = page_no
#----------------------------------------------------------------------
def draw(self):
"""
draw the line
"""
canv = self.canv
pages = self.P
page_no = self.page_no
canv.translate(x, y)
canv.doForm(makerl(canv, pages[page_no]))
canv.restoreState()
and then, after opening the existing PDF:
pages = PdfReader(BASE_DIR + "/out3.pdf").pages
pages = [pagexobj(x) for x in pages]
for i in range(0, len(pages)):
    F = PDF_Flowable(pages, i)
    elements.append(F)
    elements.append(PageBreak())
Use this code to add the custom flowable to elements[].
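For reference, a minimal sketch of the surrounding setup this snippet assumes, pulling together the imports already used earlier in this thread (the output file name and page size here are placeholders, not part of the original answer):
from reportlab.platypus import SimpleDocTemplate, PageBreak
from reportlab.platypus.flowables import Flowable
from reportlab.lib.pagesizes import letter
from pdfrw import PdfReader
from pdfrw.buildxobj import pagexobj
from pdfrw.toreportlab import makerl

elements = []
doc = SimpleDocTemplate("merged_output.pdf", pagesize=letter)
# ... append Paragraphs / PDF_Flowable instances to elements as shown above ...
doc.build(elements)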