Example:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from reportlab.platypus import SimpleDocTemplate, Table, TableStyle
from reportlab.lib.pagesizes import letter

def testPdf():
    doc = SimpleDocTemplate("testpdf.pdf", pagesize=letter,
                            rightMargin=72, leftMargin=72,
                            topMargin=72, bottomMargin=18)
    elements = []
    datas = []
    for x in range(1, 50):
        datas.append([x, x + 1])
    t = Table(datas)
    tTableStyle = [
        ('SPAN', (0, 0), (0, 37)),
    ]
    t.setStyle(TableStyle(tTableStyle))
    elements.append(t)
    doc.build(elements)

if __name__ == '__main__':
    testPdf()
This code runs successfully because the table fits on one page. If I set the "SPAN" to "(0,0),(0,38)", the error is:
reportlab.platypus.doctemplate.LayoutError: Flowable with cell(0,0) containing '1'(46.24 x 702) too large on page 2 in frame 'normal'(456.0 x 690.0*) of template 'Later'
and if I set it even bigger, the error is:
Traceback (most recent call last):
File "testpdf.py", line 26, in <module>
testPdf()
File "testpdf.py", line 23, in testPdf
doc.build(elements)
File "/usr/local/lib/python2.7/dist-packages/reportlab-2.5-py2.7-linux-x86_64.egg/reportlab/platypus/doctemplate.py", line 1117, in build
BaseDocTemplate.build(self,flowables, canvasmaker=canvasmaker)
File "/usr/local/lib/python2.7/dist-packages/reportlab-2.5-py2.7-linux-x86_64.egg/reportlab/platypus/doctemplate.py", line 880, in build
self.handle_flowable(flowables)
File "/usr/local/lib/python2.7/dist-packages/reportlab-2.5-py2.7-linux-x86_64.egg/reportlab/platypus/doctemplate.py", line 763, in handle_flowable
if frame.add(f, canv, trySplit=self.allowSplitting):
File "/usr/local/lib/python2.7/dist-packages/reportlab-2.5-py2.7-linux-x86_64.egg/reportlab/platypus/frames.py", line 159, in _add
w, h = flowable.wrap(aW, h)
File "/usr/local/lib/python2.7/dist-packages/reportlab-2.5-py2.7-linux-x86_64.egg/reportlab/platypus/tables.py", line 1113, in wrap
self._calc(availWidth, availHeight)
File "/usr/local/lib/python2.7/dist-packages/reportlab-2.5-py2.7-linux-x86_64.egg/reportlab/platypus/tables.py", line 587, in _calc
self._calc_height(availHeight,availWidth,W=W)
File "/usr/local/lib/python2.7/dist-packages/reportlab-2.5-py2.7-linux-x86_64.egg/reportlab/platypus/tables.py", line 553, in _calc_height
spanFixDim(H0,H,spanCons,lim=hmax)
File "/usr/local/lib/python2.7/dist-packages/reportlab-2.5-py2.7-linux-x86_64.egg/reportlab/platypus/tables.py", line 205, in spanFixDim
t = sum([V[x]+M.get(x,0) for x in xrange(x0,x1)])
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
How can I deal with this?
The reason you're facing this problem is exactly what Gordon Worley commented above: there is no way to SPAN across a page break automatically, because the layout algorithm gets confused by the calculated heights and widths.
One approach to tackle this is to format/style your table manually, page by page, using row/column coordinates. Sadly, even the replies from ReportLab suggest we do this manually.
I did split my tables manually and style them separately, which in my opinion is a very ugly approach. I'll look for other alternatives later.
For reference: https://bitbucket.org/ntj/reportlab_imko_table
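For illustration, here is a minimal sketch of that manual approach (my own, not from the post above). It reuses datas and doc from the question's code and builds one Table per page-sized chunk; the 38-row chunk size is an assumption you would tune to your own row heights and margins.
ROWS_PER_PAGE = 38  # assumed rows per page; adjust to your layout

elements = []
for start in range(0, len(datas), ROWS_PER_PAGE):
    chunk = datas[start:start + ROWS_PER_PAGE]
    t = Table(chunk)
    # Each chunk gets its own SPAN, so no span ever crosses a page break.
    t.setStyle(TableStyle([('SPAN', (0, 0), (0, len(chunk) - 1))]))
    elements.append(t)
doc.build(elements)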
I'm trying to get an array of points on a particular path (it's a teapot). I made the path and exported it using the "Inkscape" and "Gimp" software.
I'm trying to parse the SVG file (essentially an XML file) using the library svgpathtools, especially the parse_path function. The normal behavior of parse_path is to, well, parse the "d-string" of the SVG and create a Path object.
However, I get an error:
File (...)\parser.py", line 112, in parse_path
control1 = float(elements.pop()) + float(elements.pop()) * 1j
ValueError: could not convert string to float: 's'
Here are the first few lines of the SVG file:
<path id="Sélection"
fill="none" stroke="black" stroke-width="1"
d="M 1381.00,143.00
C 1382.71,149.01 1394.44,175.21 1397.93,180.00
1400.62,183.69 1402.89,185.74 1405.83,189.00
1405.83,189.00 1429.69,216.00 1429.69,216.00
[...]
1403.00,127.29 1381.00,143.00 1381.00,143.00 Z
M 2296.00,978.00
C 2296.00,978.00 2293.17,942.00 2293.17,942.00
2293.17,942.00 2288.72,891.00 2288.72,891.00
2288.72,891.00 2276.88,838.00 2276.88,838.00
[...]
2315.00,967.85 2296.00,978.00 2296.00,978.00 Z
M 326.00,1040.00" />
The file is 250 lines long.
This is the problematic piece of my code:
path = svgpathtools.parse_path(filepath)
And here the full, unredacted error
Traceback (most recent call last):
File "c:\Users\vikto\.vscode\extensions\ms-python.python-2019.10.44104\pythonFiles\ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "c:\Users\vikto\.vscode\extensions\ms-python.python-2019.10.44104\pythonFiles\lib\python\old_ptvsd\ptvsd\__main__.py", line 432, in main
run()
File "c:\Users\vikto\.vscode\extensions\ms-python.python-2019.10.44104\pythonFiles\lib\python\old_ptvsd\ptvsd\__main__.py", line 316, in run_file
runpy.run_path(target, run_name='__main__')
File "C:\Users\vikto\AppData\Local\Programs\Python\Python37-32\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\vikto\AppData\Local\Programs\Python\Python37-32\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\vikto\AppData\Local\Programs\Python\Python37-32\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "c:\Users\vikto\Desktop\Best_Dossier_ever\Python\TeapotProj\TeapotProject.py", line 34, in <module>
path = svgpathtools.parse_path(fpath)
File "C:\Users\vikto\AppData\Local\Programs\Python\Python37-32\lib\site-packages\svgpathtools\parser.py", line 112, in parse_path
control1 = float(elements.pop()) + float(elements.pop()) * 1j
ValueError: could not convert string to float: 's'
I'm afraid the error might be due to incorrect formatting, since the CubicBezier function has parameters start, control1, control2, end, all in complex a + bj format. It seems here that there are fewer parameters!? Could it be that Inkscape/Gimp doesn't format it well (I doubt that)? Or something else?
Help will be greatly appreciated!!
Got the answer!
The problem here was that I was parsing the whole SVG file and not only the "d-string" part.
To get the actual string:
from xml.dom import minidom

# Parse the SVG/XML document and grab the 'd' attribute of the first <path>.
mydoc = minidom.parse(file_path)
path_tag = mydoc.getElementsByTagName("path")
d_string = path_tag[0].attributes['d'].value

# parse_path now receives only the d-string, as it expects.
Path_elements = svgpathtools.parse_path(d_string)
Here, Path_elements is a Path object made up of the CubicBezier curves.
As for the "fewer" points: the end of one Bézier curve is the start of the next, hence no need for 5 parameters, only 4, plus a starting "M" instruction point!
Shoutout to #Mike'Pomax'Kamermans for the help !
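As a hypothetical follow-up toward the original goal (an array of points on the path), one option is to sample the parsed Path at evenly spaced parameter values; the sample count below is an arbitrary assumption.
import numpy as np

num_points = 500  # assumed number of samples along the whole path
points = [Path_elements.point(t) for t in np.linspace(0, 1, num_points)]
xy = [(p.real, p.imag) for p in points]  # complex values -> (x, y) tuples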
I am trying to get into writing code to scrape stock web pages, and I came across this YouTube video: https://www.youtube.com/watch?v=2BrpKpWwT2A. When I copy and paste the following code (from the video):
import datetime as dt
import matplotlib.pyplot as plt
from matplotlib import style
import pandas as pd
import pandas_datareader.data as web
style.use("ggplot")
start = dt.datetime(2015, 1, 1)
end = dt.datetime.now()
df = web.DataReader("TSLA", "yahoo", start, end)
print(df.head())
I still get the same error (the full traceback is too long to include here), but the last few lines say:
File "/anaconda3/lib/python3.6/site-packages/urllib3/contrib/pyopenssl.py", line 363, in getpeercert
'subjectAltName': get_subj_alt_name(x509)
File "/anaconda3/lib/python3.6/site-packages/urllib3/contrib/pyopenssl.py", line 213, in get_subj_alt_name
ext = cert.extensions.get_extension_for_class(
File "/anaconda3/lib/python3.6/site-packages/cryptography/utils.py", line 170, in inner
result = func(instance)
File "/anaconda3/lib/python3.6/site-packages/cryptography/hazmat/backends/openssl/x509.py", line 127, in extensions
self._backend, self._x509
File "/anaconda3/lib/python3.6/site-packages/cryptography/hazmat/backends/openssl/decode_asn1.py", line 252, in parse
value = handler(backend, ext_data)
File "/anaconda3/lib/python3.6/site-packages/cryptography/hazmat/backends/openssl/decode_asn1.py", line 438, in _decode_subject_alt_name
_decode_general_names_extension(backend, ext)
File "/anaconda3/lib/python3.6/site-packages/cryptography/x509/extensions.py", line 1262, in __init__
self._general_names = GeneralNames(general_names)
File "/anaconda3/lib/python3.6/site-packages/cryptography/x509/extensions.py", line 1217, in __init__
"Every item in the general_names list must be an "
TypeError: Every item in the general_names list must be an object conforming to the GeneralName interface
Any ideas as to what I could be doing wrong?
I tried the above code and got the following output:
I am exploring the Platypus library for multi-objective optimization in Python. It appears to me that Platypus should support variables (optimization parameters) as integers out of the box; however, this simple problem (two objectives, three variables, no constraints, and Integer variables with SMPSO):
from platypus import *

def my_function(x):
    """Some objective function."""
    return [-x[0] ** 2 - x[2] ** 2, x[1] - x[0]]

def AsInteger():
    problem = Problem(3, 2)  # define 3 inputs and 2 objectives (and no constraints)
    problem.directions[:] = Problem.MAXIMIZE
    int1 = Integer(-50, 50)
    int2 = Integer(-50, 50)
    int3 = Integer(-50, 50)
    problem.types[:] = [int1, int2, int3]
    problem.function = my_function

    algorithm = SMPSO(problem)
    algorithm.run(10000)
This results in:
Traceback (most recent call last):
File "D:\MyProjects\Drilling\test_platypus.py", line 62, in
AsInteger()
File "D:\MyProjects\Drilling\test_platypus.py", line 19, in AsInteger
algorithm.run(10000)
File "build\bdist.win-amd64\egg\platypus\core.py", line 405, in run
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 820, in step
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 838, in iterate
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 1008, in _update_velocities
TypeError: unsupported operand type(s) for -: 'list' and 'list'
Similarly, if I try to use another optimization technique in Platypus (CMAES instead of SMPSO):
Traceback (most recent call last):
File "D:\MyProjects\Drilling\test_platypus.py", line 62, in
AsInteger()
File "D:\MyProjects\Drilling\test_platypus.py", line 19, in AsInteger
algorithm.run(10000)
File "build\bdist.win-amd64\egg\platypus\core.py", line 405, in run
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 1074, in step
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 1134, in initialize
File "build\bdist.win-amd64\egg\platypus\algorithms.py", line 1298, in iterate
File "build\bdist.win-amd64\egg\platypus\core.py", line 378, in evaluate_all
File "build\bdist.win-amd64\egg\platypus\evaluator.py", line 88, in evaluate_all
File "build\bdist.win-amd64\egg\platypus\evaluator.py", line 55, in run_job
File "build\bdist.win-amd64\egg\platypus\core.py", line 345, in run
File "build\bdist.win-amd64\egg\platypus\core.py", line 518, in evaluate
File "build\bdist.win-amd64\egg\platypus\core.py", line 160, in call
File "build\bdist.win-amd64\egg\platypus\types.py", line 147, in decode
File "build\bdist.win-amd64\egg\platypus\tools.py", line 521, in gray2bin
TypeError: 'float' object has no attribute '__getitem__'
I get other types of error messages with other algorithms (OMOPSO, GDE3), while the algorithms NSGAIII, NSGAII, SPEA2, etc. appear to work.
Has anyone ever encountered such issues? Maybe I am specifying the problem in the wrong way?
Thank you in advance for any suggestion.
Andrea.
Try changing the way you add the problem type:
problem.types[:] = [Integer(-50, 50), Integer(-50, 50), Integer(-50, 50)]
It could work this way.
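Since the question notes that NSGAII and similar algorithms do work with Integer types, here is a minimal sketch along those lines (my own illustration, not from the original answer; the decode step assumes Platypus's usual encoded representation of Integer variables):
from platypus import NSGAII, Problem, Integer

def my_function(x):
    """Some objective function."""
    return [-x[0] ** 2 - x[2] ** 2, x[1] - x[0]]

problem = Problem(3, 2)  # 3 variables, 2 objectives
problem.directions[:] = Problem.MAXIMIZE
problem.types[:] = [Integer(-50, 50), Integer(-50, 50), Integer(-50, 50)]
problem.function = my_function

algorithm = NSGAII(problem)
algorithm.run(10000)

# Integer variables are stored in encoded form; decode them for inspection.
for solution in algorithm.result:
    values = [problem.types[i].decode(solution.variables[i]) for i in range(3)]
    print(values, solution.objectives)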
I wanted to write code to find the similarity between two sentences, and I ended up writing this code using nltk and gensim. I used tokenization and gensim.similarities.Similarity to do the work, but it isn't serving my purpose.
It works fine until I introduce the last line of code.
import gensim
import nltk
raw_documents = ["I'm taking the show on the road.",
                 "My socks are a force multiplier.",
                 "I am the barber who cuts everyone's hair who doesn't cut their own.",
                 "Legend has it that the mind is a mad monkey.",
                 "I make my own fun."]
from nltk.tokenize import word_tokenize
gen_docs = [[w.lower() for w in word_tokenize(text)]
for text in raw_documents]
dictionary = gensim.corpora.Dictionary(gen_docs)
print(dictionary[5])
print(dictionary.token2id['socks'])
print("Number of words in dictionary:",len(dictionary))
for i in range(len(dictionary)):
    print(i, dictionary[i])
corpus = [dictionary.doc2bow(gen_doc) for gen_doc in gen_docs]
print(corpus)
tf_idf = gensim.models.TfidfModel(corpus)
print(tf_idf)
s = 0
for i in corpus:
    s += len(i)
print(s)
sims = gensim.similarities.Similarity('/usr/workdir/',tf_idf[corpus],
num_features=len(dictionary))
print(sims)
print(type(sims))
query_doc = [w.lower() for w in word_tokenize("Socks are a force for good.")]
print(query_doc)
query_doc_bow = dictionary.doc2bow(query_doc)
print(query_doc_bow)
query_doc_tf_idf = tf_idf[query_doc_bow]
print(query_doc_tf_idf)
sims[query_doc_tf_idf]
It throws this error. I couldn't find the answer for this anywhere on the internet.
Traceback (most recent call last):
File "C:\Python36\lib\site-packages\gensim\utils.py", line 679, in save
_pickle.dump(self, fname_or_handle, protocol=pickle_protocol)
TypeError: file must have a 'write' attribute
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "semantic.py", line 45, in <module>
sims[query_doc_tf_idf]
File "C:\Python36\lib\site-packages\gensim\similarities\docsim.py", line
503, in __getitem__
self.close_shard() # no-op if no documents added to index since last
query
File "C:\Python36\lib\site-packages\gensim\similarities\docsim.py", line
427, in close_shard
shard = Shard(self.shardid2filename(shardid), index)
File "C:\Python36\lib\site-packages\gensim\similarities\docsim.py", line
110, in __init__
index.save(self.fullname())
File "C:\Python36\lib\site-packages\gensim\utils.py", line 682, in save
self._smart_save(fname_or_handle, separately, sep_limit, ignore,
pickle_protocol=pickle_protocol)
File "C:\Python36\lib\site-packages\gensim\utils.py", line 538, in
_smart_save
pickle(self, fname, protocol=pickle_protocol)
File "C:\Python36\lib\site-packages\gensim\utils.py", line 1337, in pickle
with smart_open(fname, 'wb') as fout: # 'b' for binary, needed on
Windows
File "C:\Python36\lib\site-packages\smart_open\smart_open_lib.py", line
181, in smart_open
fobj = _shortcut_open(uri, mode, **kw)
File "C:\Python36\lib\site-packages\smart_open\smart_open_lib.py", line
287, in _shortcut_open
return io.open(parsed_uri.uri_path, mode, **open_kwargs)
Please help figure out where the problem is.
Your query should work if you specify a valid path when you instantiate your Similarity. For the example below, I have created a directory Similarity on my C-drive and have specified the directory path and a name for the file in the function call.
sims = gensim.similarities.Similarity('C:/Similarity/sims',tf_idf[corpus],
num_features=len(dictionary))
print(sims)
print(type(sims))
query_doc = [w.lower() for w in word_tokenize("Socks are a force for good.")]
print(query_doc)
query_doc_bow = dictionary.doc2bow(query_doc)
print(query_doc_bow)
query_doc_tf_idf = tf_idf[query_doc_bow]
print(query_doc_tf_idf)
print('Query result:', sims[query_doc_tf_idf])
Query result: [0. 0.84565616 0. 0.06124881 0. ]
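A portable variant of the same idea, sketched only as an assumption on top of the answer above: build the shard prefix in a temporary directory so the index always has a writable location (the 'sims' file name is an arbitrary choice).
import os
import tempfile

# Hypothetical prefix in the system temp directory; gensim writes its index
# shard files under this path when the index is saved.
index_prefix = os.path.join(tempfile.mkdtemp(), "sims")
sims = gensim.similarities.Similarity(index_prefix, tf_idf[corpus],
                                      num_features=len(dictionary))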
I'm using Python to open an existing Excel file, do some formatting, and save and close the file. My code works fine when the file size is small, but when the Excel file is big (approx. 40 MB) I get a serialization I/O error, and I'm sure it's due to a memory problem or to my code. Kindly help.
System Config:
RAM - 8 GB
32-bit operating system
Windows 7
Code:
import numpy as np
from openpyxl import load_workbook
from openpyxl.styles import colors, Font

dest_loc = '/Users/abdulr06/Documents/Python Scripts/'
np.seterr(divide='ignore', invalid='ignore')

SRC = 'TSYS'
YM1 = '201707'
dest_file = dest_loc + SRC + '_' + '' + YM1 + '.xlsx'
sheetname = [SRC + '' + ' GL-Recon']

# Following code is common for the rest of the source systems
wb = load_workbook(dest_file)
fmtB = Font(color=colors.BLUE)
fmtR = Font(color=colors.RED)

for i in range(len(sheetname)):
    sheet1 = wb.get_sheet_by_name(sheetname[i])
    print(sheetname[i])
    last_record = sheet1.max_row + 1
    for m in range(2, last_record):
        if -30 <= sheet1.cell(row=m, column=5).value <= 30:
            ft = sheet1.cell(row=m, column=5)
            ft.font = fmtB
            ft.number_format = '_(* #,##0.00_);_(* (#,##0.00);_(* "-"??_);_(#_)'
            ft1 = sheet1.cell(row=m, column=6)
            ft1.number_format = '0.00%'
        else:
            ft = sheet1.cell(row=m, column=5)
            ft.font = fmtR
            ft.number_format = '_(* #,##0.00_);_(* (#,##0.00);_(* "-"??_);_(#_)'
            ft1 = sheet1.cell(row=m, column=6)
            ft1.number_format = '0.00%'

wb.save(filename=dest_file)
Exception:
Traceback (most recent call last):
File "<ipython-input-17-fc16d9a46046>", line 6, in <module>
wb.save(filename=dest_file)
File "C:\Users\abdulr06\AppData\Local\Continuum\Anaconda3\lib\site-packages\openpyxl\workbook\workbook.py", line 263, in save
save_workbook(self, filename)
File "C:\Users\abdulr06\AppData\Local\Continuum\Anaconda3\lib\site-packages\openpyxl\writer\excel.py", line 239, in save_workbook
writer.save(filename, as_template=as_template)
File "C:\Users\abdulr06\AppData\Local\Continuum\Anaconda3\lib\site-packages\openpyxl\writer\excel.py", line 222, in save
self.write_data(archive, as_template=as_template)
File "C:\Users\abdulr06\AppData\Local\Continuum\Anaconda3\lib\site-packages\openpyxl\writer\excel.py", line 80, in write_data
self._write_worksheets(archive)
File "C:\Users\abdulr06\AppData\Local\Continuum\Anaconda3\lib\site-packages\openpyxl\writer\excel.py", line 163, in _write_worksheets
xml = sheet._write(self.workbook.shared_strings)
File "C:\Users\abdulr06\AppData\Local\Continuum\Anaconda3\lib\site-packages\openpyxl\worksheet\worksheet.py", line 776, in _write
return write_worksheet(self, shared_strings)
File "C:\Users\abdulr06\AppData\Local\Continuum\Anaconda3\lib\site-packages\openpyxl\writer\worksheet.py", line 263, in write_worksheet
xf.write(worksheet.page_breaks.to_tree())
File "serializer.pxi", line 1016, in lxml.etree._FileWriterElement.__exit__ (src\lxml\lxml.etree.c:141944)
File "serializer.pxi", line 904, in lxml.etree._IncrementalFileWriter._write_end_element (src\lxml\lxml.etree.c:140137)
File "serializer.pxi", line 999, in lxml.etree._IncrementalFileWriter._handle_error (src\lxml\lxml.etree.c:141630)
File "serializer.pxi", line 195, in lxml.etree._raiseSerialisationError (src\lxml\lxml.etree.c:131006)
SerialisationError: IO_WRITE
Why do you allocate a font on each loop iteration?
fmt=Font(color=colors.BLUE)
Or red. Create the two fonts, red and blue, once and then reuse them; every time you allocate a Font you use more memory.
Optimise your code first. Less code -> fewer errors. For example:
mycell = sheet1.cell(row=m, column=5)
if -30 <= mycell.value <= 30:
    mycell.font = redfont
This should ensure that you do not have the issue again (hopefully)
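Putting the two suggestions together, a rough sketch of the inner loop (my own illustration, not the original poster's code; fmtB and fmtR are the two Font objects already created once outside the loop in the question):
NUM_FMT = '_(* #,##0.00_);_(* (#,##0.00);_(* "-"??_);_(#_)'

for m in range(2, last_record):
    value_cell = sheet1.cell(row=m, column=5)
    pct_cell = sheet1.cell(row=m, column=6)
    # Reuse the two existing Font objects instead of creating new ones per row.
    value_cell.font = fmtB if -30 <= value_cell.value <= 30 else fmtR
    value_cell.number_format = NUM_FMT
    pct_cell.number_format = '0.00%'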