Issues extracting elements from StdDevUncertainty (astropy) - python

I am using StdDevUncertainty, and when I convert the list of uncertainties called 'err' to a CSV file using pd.DataFrame and to_csv, each cell in the 'err' column in Excel contains the entire array of uncertainties. I just want to extract the elements so I can save them individually instead of as an entire array for each data point. Any help/advice is much appreciated!
Additionally, when I try to use value() or append() I get an error saying that StdDevUncertainty does not have that attribute.
What I am getting for every single cell:
StdDevUncertainty([1.1489272e-15, 1.0933596e-15, 7.6915864e-16,
1.0783939e-15, 1.2733187e-15, 9.2377510e-16,
1.2482474e-15... and so on
def save_cont_2(self, file, linename, xlow, xhigh, thresh, width, yesplot, saveyes=True):
    thisspec = self.continuum(xlim=[xlow - 50, xhigh + 50], line_find_thresh=thresh,
                              line_width=width, plot=yesplot, return_result='sub')
    savename = 'Continuum_Subtracted_Files/' + file.replace('.csv', '') + '_continuumsub_' + linename + '.csv'
    if saveyes is True:
        thisspec2 = pd.DataFrame({'wl': thisspec.spec.spectral_axis.value,
                                  'flux': thisspec.spec.flux.value,
                                  'err': thisspec.spec.uncertainty.array})
        thisspec2.to_csv(savename, index=False)
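The underlying numbers live in the uncertainty object's .array attribute, which is a plain NumPy array with one element per data point. A minimal sketch, using a stand-in class rather than astropy's actual StdDevUncertainty (which exposes the same .array attribute):

```python
import numpy as np
import pandas as pd

# Stand-in for spec.uncertainty: astropy's StdDevUncertainty keeps its
# values in the .array attribute (a plain numpy array).
class FakeUncertainty:
    def __init__(self, values):
        self.array = np.asarray(values)

uncertainty = FakeUncertainty([1.1489272e-15, 1.0933596e-15, 7.6915864e-16])
df = pd.DataFrame({'err': np.asarray(uncertainty.array)})
print(df['err'].iloc[0])  # one scalar per cell, not a whole array
```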

Related

Pandas TypeError when trying to count NaNs in subset of dataframe column

I'm writing a script to perform LLoD analysis for qPCR assays for my lab. I import the relevant columns from the .csv of data from the instrument using pandas.read_csv() with the usecols parameter, make a list of the unique values of RNA quantity/concentration column, and then I need to determine the detection rate / hit rate at each given concentration. If the target is detected, the result will be a number; if not, it'll be listed as "TND" or "Undetermined" or some other non-numeric string (depends on the instrument). So I wrote a function that (should) take a quantity and the dataframe of results and return the probability of detection for that quantity. However, on running the script, I get the following error:
Traceback (most recent call last):
File "C:\Python\llod_custom.py", line 34, in <module>
prop[idx] = hitrate(val, data)
File "C:\Python\llod_custom.py", line 29, in hitrate
df = pd.to_numeric(list[:,1], errors='coerce').isna()
File "C:\Users\wmacturk\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\frame.py", line 3024, in __getitem__
indexer = self.columns.get_loc(key)
File "C:\Users\wmacturk\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\indexes\base.py", line 3080, in get_loc
return self._engine.get_loc(casted_key)
File "pandas\_libs\index.pyx", line 70, in pandas._libs.index.IndexEngine.get_loc
File "pandas\_libs\index.pyx", line 75, in pandas._libs.index.IndexEngine.get_loc
TypeError: '(slice(None, None, None), 1)' is an invalid key
The idea in the line that's throwing the error (df = pd.to_numeric(list[:,1], errors='coerce').isna()) is to change any non-numeric values in the column to NaN, then get a boolean array telling me whether a given row's entry is NaN, so I can count the number of numeric entries with df.sum() later.
I'm sure it's something that should be obvious to anyone who's worked with pandas / dataframes, but I haven't used dataframes in python before, so I'm at a loss. I'm also much more familiar with C and JavaScript, so something like python that isn't as rigid can actually be a bit confusing since it's so flexible. Any help would be greatly appreciated.
N.B. the conc column will consist of 5 to 10 different values, each repeated 5-10 times (i.e. 5-10 replicates at each of the 5-10 concentrations); the detect column will contain either a number or a character string in each row -- numbers mean success, strings mean failure... For my purposes the value of the numbers is irrelevant, I only need to know if the target was detected or not for a given replicate. My script (up to this point) follows:
import os
import pandas as pd
import numpy as np
import statsmodels as sm
from scipy.stats import norm
from tkinter import filedialog
from tkinter import *

# initialize tkinter
root = Tk()
root.withdraw()

# prompt for data file and column headers, then read those columns into a dataframe
print("In the directory prompt, select the .csv file containing data for analysis")
path = filedialog.askopenfilename()
conc = input("Enter the column header for concentration/number of copies: ")
detect = input("Enter the column header for target detection: ")
tnd = input("Enter the value listed when a target is not detected (e.g. \"TND\", \"Undetected\", etc.): ")
data = pd.read_csv(path, usecols=[conc, detect])

# create list of unique values for quantity of RNA, initialize vectors of same length
# to store probabilities and probit scores for each
qtys = data[conc].unique()
prop = probit = [0] * len(qtys)

# Function to get the hitrate/probability of detection for a given quantity
def hitrate(qty, dataFrame):
    list = dataFrame[dataFrame.iloc[:, 0] == qty]
    df = pd.to_numeric(list[:, 1], errors='coerce').isna()
    return (len(df) - (len(df) - df.sum())) / len(df)

# iterate over quantities to calculate the corresponding probability of detection
# and its associated probit score
for idx, val in enumerate(qtys):
    prop[idx] = hitrate(val, data)
    probit[idx] = norm.ppf(hitrate(val, data))

# create an array of the quantities with their associated probabilities & probit scores
hitTable = vstack([qtys, prop, probit])
sample dataframe can be created with:
d = {'qty':[1,1,1,1,1, 10,10,10,10,10, 20,20,20,20,20, 50,50,50,50,50, 100,100,100,100,100], 'result':['TND','TND','TND',5,'TND', 'TND',5,'TND',5,'TND', 5,'TND',5,'TND',5, 5,6,5,5,'TND', 5,5,5,5,5]}
exData = pd.DataFrame(data=d)
Then just use exData as the dataframe data in the original code
EDIT: I've fixed the problem by tweaking Loic RW's answer slightly. The function hitrate should be
def hitrate(qty, df):
    t_s = df[df.qty == qty].result
    t_s = t_s.apply(pd.to_numeric, args=('coerce',)).isna()
    return (len(t_s) - t_s.sum()) / len(t_s)
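For reference, a quick check of the fixed function against the sample dataframe from the question (here passing errors='coerce' by keyword through apply, which is equivalent to args=('coerce',)):

```python
import pandas as pd

def hitrate(qty, df):
    # coerce non-numeric results ('TND' etc.) to NaN, then count numerics
    t_s = df[df.qty == qty].result
    t_s = t_s.apply(pd.to_numeric, errors='coerce').isna()
    return (len(t_s) - t_s.sum()) / len(t_s)

d = {'qty': [1]*5 + [10]*5 + [20]*5 + [50]*5 + [100]*5,
     'result': ['TND', 'TND', 'TND', 5, 'TND',
                'TND', 5, 'TND', 5, 'TND',
                5, 'TND', 5, 'TND', 5,
                5, 6, 5, 5, 'TND',
                5, 5, 5, 5, 5]}
exData = pd.DataFrame(data=d)
print(hitrate(1, exData))    # 0.2  (1 numeric result out of 5)
print(hitrate(100, exData))  # 1.0
```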
Does the following achieve what you want? I made some assumptions on the structure of your data.
def hitrate(qty, df):
    target_subset = df[df.qty == qty].target
    target_subset = target_subset.apply(pd.to_numeric, args=('coerce',)).isna()
    return 1 - (target_subset.sum() / len(target_subset))
If I run the following:
data = pd.DataFrame({'qty': [1, 2, 2, 2, 3],
                     'target': [.5, .8, 'TND', 'Undetermined', .99]})
hitrate(2, data)
I get:
0.33333333333333337

the cell value is not returned in pandas (where function)

I used the following code to:
map the values from another data frame (the map function is used), and
finalize the values in the column (where() is used).
Requirement: set the VT_Final column to one of these values: V_Team1, V_Team2, or Non-PA.
Issue: VT_Final returns empty cells (blanks).
Please advise.
Code:
Bookings['V_Team1'] = Bookings.Marker1.map(Manpower_1.set_index('Marker1')['Vertical Team'].to_dict())
Bookings['V_Team2'] = Bookings.Marker1.map(Attrition_1.set_index('Marker1')['Vertical Team'].to_dict())
Bookings['VT_Final'] = Bookings['V_Team1']
Bookings['VT_Final'].where(Bookings['V_Team1'] !='')
Bookings['VT_Final'] = Bookings['V_Team2']
Bookings['VT_Final'].where(Bookings['V_Team1'] =='')
Bookings['VT_Final'] = 'Non PA'
Bookings['VT_Final'].where((Bookings['V_Team1'] =='')&(Bookings['V_Team2']==''))
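One way to express this requirement in a single step is numpy.select, which picks the first matching condition per row. A minimal sketch with made-up team names, assuming unmatched markers come back from map() as NaN rather than '':

```python
import numpy as np
import pandas as pd

# Hypothetical Bookings frame as it looks after the two map() calls;
# markers with no match are NaN, not empty strings.
Bookings = pd.DataFrame({
    'V_Team1': ['Retail', np.nan, np.nan],
    'V_Team2': [np.nan, 'Telecom', np.nan],
})

# First matching condition wins; rows matching neither get the default.
Bookings['VT_Final'] = np.select(
    [Bookings['V_Team1'].notna(), Bookings['V_Team2'].notna()],
    [Bookings['V_Team1'], Bookings['V_Team2']],
    default='Non PA',
)
print(Bookings['VT_Final'].tolist())  # ['Retail', 'Telecom', 'Non PA']
```

Note that where() returns a new Series rather than modifying in place, which is why the original chained calls leave VT_Final unchanged.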

Rename a data frame name by adding the iteration value as suffix in a for loop (Python)

I have run the following Python code :
array = ['AEM000', 'AID017']
USA_DATA_1D = USA_DATA10.loc[USA_DATA10['JOBSPECIALTYCODE'].isin(array)]
I run a regression model and extract the log-likelihood value for each item of this array with a for loop:
for item in array:
    USA_DATA_1D = USA_DATA10.loc[USA_DATA10['JOBSPECIALTYCODE'] == item]
    formula = "WEIGHTED_BASE_MEDIAN_FINAL_MEAN ~ YEAR"
    response, predictors = dmatrices(formula, USA_DATA_1D, return_type='dataframe')
    mod1 = sm.GLM(response, predictors, family=sm.genmod.families.family.Gaussian()).fit()
    LLF_NG = {'model': ['Standard Gaussian'],
              'llf_value': mod1.llf}
    df_llf = pd.DataFrame(LLF_NG, columns=['model', 'llf_value'])
Now I would like to rename the dataframe df_llf to df_llf_(name of the item), i.e. df_llf_AEM000 when running the loop on the first item and df_llf_AID017 when running it on the second one.
I need some help figuring out how to do this.
If you want to rename the data frame, you need to use the copy method so that the original data frame does not get altered.
df_llf_AEM000 = df_llf.copy()
If you want to save iteratively several different versions of the original data frame, you can do something like this:
allDataframes = []
for i in range(10):
    df = df_original.copy()
    allDataframes.append(df)
print(allDataframes[0])
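If the goal is one dataframe per item from the loop, a dictionary keyed by the item name is the usual substitute for dynamically named variables. A sketch with a placeholder llf value standing in for mod1.llf:

```python
import pandas as pd

array = ['AEM000', 'AID017']
llf_frames = {}
for item in array:
    # -123.4 is a placeholder for the fitted model's mod1.llf
    df_llf = pd.DataFrame({'model': ['Standard Gaussian'], 'llf_value': [-123.4]},
                          columns=['model', 'llf_value'])
    llf_frames['df_llf_' + item] = df_llf.copy()

print(sorted(llf_frames))  # ['df_llf_AEM000', 'df_llf_AID017']
```

Each per-item result is then reachable as llf_frames['df_llf_AEM000'] and so on.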

why is my data a tuple and how can I change this so I can sort the data

I am using rpy2 to do some statistical analyses in R via python. After importing a data file I want to sort the data and do a couple other things with it in R. Once I import the data and try to sort the data I get this error message:
TypeError: 'tuple' object cannot be interpreted as an index
The last 2 lines of my code are where I am trying to sort my data, and the few lines before that are where I import the data.
root = os.getcwd()
dirs = [os.path.abspath(name) for name in os.listdir(".") if os.path.isdir(name)]
for d in dirs:
    os.chdir(d)
    cwd = os.getcwd()
    files_to_analyze = glob.glob("*.afa")
    for f in files_to_analyze:
        afa_file = os.path.join(cwd + '/' + f)
        readfasta = robjects.r['read.fasta']
        mydatafasta = readfasta(afa_file)
        names = robjects.r['names']
        IDnames = names(mydatafasta)
        substr = robjects.r['substr']
        ID = substr(IDnames, 1, 8)
        #print ID
        readtable = robjects.r['read.table']
        gps_file = os.path.join(root + '/' + "GPS.txt")
        xy = readtable(gps_file, sep="\t")
        #print xy
        order = robjects.r['order']
        gps = xy[order(xy[:,2]),]
I don't understand why my data is a tuple and not a dataframe that I can manipulate further using R. Is there a way to transform this into a workable dataframe that can be used by R?
My xy data look like:
Species AB425882 35.62 -83.4
Species AB425905 35.66 -83.33
Species KC413768 37.35 127.03
Species AB425841 35.33 -82.82
Species JX402724 29.38 -82.2
I want to sort the data alphanumerically by the second column using the order function in R.
There is quite a bit of guesswork here, since the example is not sufficient to reproduce what you have.
In the following, if xy is an R data frame, you will want to use the method dedicated to R-style subsetting to perform R-style subsetting (see the doc):
# Note R indices are 1-based while Python indices are 0-based.
# When using R-style subsetting the indices are 1-based.
gps = xy.rx(order(xy.rx(True, 2)), True)
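Alternatively, if converting the R data frame to pandas is an option (for example via rpy2's pandas2ri), the sort can be done on the Python side. A sketch using the GPS rows from the question, with hypothetical column names:

```python
import pandas as pd

# The GPS table from the question, as a pandas frame (column names assumed).
xy = pd.DataFrame({'species': ['Species'] * 5,
                   'id': ['AB425882', 'AB425905', 'KC413768', 'AB425841', 'JX402724'],
                   'lat': [35.62, 35.66, 37.35, 35.33, 29.38],
                   'lon': [-83.4, -83.33, 127.03, -82.82, -82.2]})

# Alphanumeric sort on the second column, like R's order().
gps = xy.sort_values('id')
print(gps['id'].tolist())
```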

Simulate autofit column in xlsxwriter

I would like to simulate the Excel autofit function in Python's xlsxwriter. According to this url, it is not directly supported:
http://xlsxwriter.readthedocs.io/worksheet.html
However, it should be quite straightforward to loop through each cell on the sheet and determine the maximum size for the column and just use worksheet.set_column(row, col, width) to set the width.
The complications that are keeping me from just writing this are:
That URL does not specify what the units are for the third argument to set_column.
I can not find a way to measure the width of the item that I want to insert into the cell.
xlsxwriter does not appear to have a method to read back a particular cell. This means I need to keep track of each cell width as I write the cell. It would be better if I could just loop through all the cells, that way a generic routine could be written.
[NOTE: as of Jan 2023 xlsxwriter added a new method called autofit. See jmcnamara's answer below]
As a general rule, you want the width of the columns a bit larger than the size of the longest string in the column. The width of 1 unit of the xlsxwriter columns is about equal to the width of one character. So, you can simulate autofit by setting each column to the max number of characters in that column.
For example, I tend to use the code below when working with pandas dataframes and xlsxwriter.
It first finds the maximum width of the index, which is always the left column for a pandas to excel rendered dataframe. Then, it returns the maximum of all values and the column name for each of the remaining columns moving left to right.
It shouldn't be too difficult to adapt this code for whatever data you are using.
def get_col_widths(dataframe):
    # First we find the maximum length of the index column
    idx_max = max([len(str(s)) for s in dataframe.index.values] + [len(str(dataframe.index.name))])
    # Then, we concatenate this to the max of the lengths of column name and its values for each column, left to right
    return [idx_max] + [max([len(str(s)) for s in dataframe[col].values] + [len(col)]) for col in dataframe.columns]

for i, width in enumerate(get_col_widths(dataframe)):
    worksheet.set_column(i, i, width)
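To see what the helper returns, here is a quick check on a toy dataframe (no worksheet needed; the widths are just character counts, and the first entry covers the index):

```python
import pandas as pd

def get_col_widths(dataframe):
    # max length of the index values and the index name
    idx_max = max([len(str(s)) for s in dataframe.index.values] +
                  [len(str(dataframe.index.name))])
    # then the max of column-name length and value lengths, per column
    return [idx_max] + [max([len(str(s)) for s in dataframe[col].values] + [len(col)])
                        for col in dataframe.columns]

df = pd.DataFrame({'name': ['Alice', 'Bartholomew'], 'id': [1, 22]})
print(get_col_widths(df))  # [4, 11, 2] -> index, 'name', 'id'
```

The leading 4 comes from the default index name rendering as 'None'.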
I agree with Cole Diamond. I needed to do something very similar and it worked fine for me, where self.columns is my list of columns:
def set_column_width(self):
    length_list = [len(x) for x in self.columns]
    for i, width in enumerate(length_list):
        self.worksheet.set_column(i, i, width)
That URL does not specify what the units are for the third argument to set_column.
The column widths are given in multiples of the width of the '0' character in the font Calibri, size 11 (that's the Excel standard).
I can not find a way to measure the width of the item that I want to insert into the cell.
In order to get a handle on the exact width of a string, you can use tkinter's ability to measure string lengths in pixels, depending on the font/size/weight/etc. If you define a font, e.g.
reference_font = tkinter.font.Font(family='Calibri', size=11)
you can afterwards use its measure method to determine string widths in pixels, e.g.
reference_font.measure('This is a string.')
In order to do this for a cell from your Excel table, you need to take its format into account (it contains all the information on the used font). That means, if you wrote something to your table using worksheet.write(row, col, cell_string, format), you can get the used font like this:
used_font = tkinter.font.Font(family=format.font_name,
                              size=format.font_size,
                              weight=('bold' if format.bold else 'normal'),
                              slant=('italic' if format.italic else 'roman'),
                              underline=format.underline,
                              overstrike=format.font_strikeout)
and afterwards determine the cell width as
cell_width = used_font.measure(cell_string+' ')/reference_font.measure('0')
The whitespace is added to the string to provide some margin. This way the results are actually very close to Excel's autofit results, so that I assume Excel is doing just that.
For the tkinter magic to work, a tkinter.Tk() instance (a window) has to be open, therefore the full code for a function that returns the required width of a cell would look like this:
import tkinter
import tkinter.font
def get_cell_width(cell_string, format=None):
    root = tkinter.Tk()
    reference_font = tkinter.font.Font(family='Calibri', size=11)
    if format:
        used_font = tkinter.font.Font(family=format.font_name,
                                      size=format.font_size,
                                      weight=('bold' if format.bold else 'normal'),
                                      slant=('italic' if format.italic else 'roman'),
                                      underline=format.underline,
                                      overstrike=format.font_strikeout)
    else:
        used_font = reference_font
    cell_width = used_font.measure(cell_string + ' ') / reference_font.measure('0')
    root.update_idletasks()
    root.destroy()
    return cell_width
Of course you would like to get the root handling and reference font creation out of the function, if it is meant to be executed frequently. Also, it might be faster to use a lookup table format->font for your workbook, so that you do not have to define the used font every single time.
Finally, one could take care of line breaks within the cell string:
pixelwidths = (used_font.measure(part) for part in cell_string.split('\n'))
cell_width = (max(pixelwidths) + used_font.measure(' ')) / reference_font.measure('0')
Also, if you are using the Excel filter function, the dropdown arrow symbol needs another 18 pixels (at 100% zoom in Excel). And there might be merged cells spanning multiple columns... A lot of room for improvements!
xlsxwriter does not appear to have a method to read back a particular cell. This means I need to keep track of each cell width as I write the cell. It would be better if I could just loop through all the cells, that way a generic routine could be written.
If you do not like to keep track within your own data structure, there are at least three ways to go:
(A) Register a write handler to do the job:
You can register a write handler for all standard types. In the handler function, you simply pass on the write command, but also do the bookkeeping wrt. column widths. This way, you only need to read and set the optimal column width in the end (before closing the workbook).
def colWidthTracker(sheet, row, col, value, format):
    # update column width
    sheet.colWidths[col] = max(sheet.colWidths[col], get_cell_width(value, format))
    # forward write command
    if isinstance(value, str):
        if value == '':
            sheet.write_blank(row, col, value, format)
        else:
            sheet.write_string(row, col, value, format)
    elif isinstance(value, int) or isinstance(value, float):
        sheet.write_number(row, col, value, format)
    elif isinstance(value, bool):
        sheet.write_boolean(row, col, value, format)
    elif isinstance(value, datetime) or isinstance(value, timedelta):
        sheet.write_datetime(row, col, value, format)
    else:
        raise TypeError('colWidthTracker cannot handle this type.')

# add worksheet attribute to store column widths
worksheet.colWidths = [0] * number_of_used_columns

# register the write handler for all standard types
for stdtype in [str, int, float, bool, datetime, timedelta]:
    worksheet.add_write_handler(stdtype, colWidthTracker)

# and in the end...
for col in columns_to_be_autofitted:
    worksheet.set_column(col, col, worksheet.colWidths[col])
(B) Use karolyi's answer above to go through the data stored within XlsxWriter's internal variables. However, this is discouraged by the module's author, since it might break in future releases.
(C) Follow the recommendation of jmcnamara: Inherit from and override the default worksheet class and add in some autofit code, like this example: xlsxwriter.readthedocs.io/example_inheritance2.html
I recently ran into this same issue and this is what I came up with:
r = 0
c = 0
for x in list:
    worksheet.set_column('{0}:{0}'.format(chr(c + ord('A'))), len(str(x)) + 2)
    worksheet.write(r, c, x)
    c += 1
In my example r would be the row number you are outputting to, c would be the column number you are outputting to (both 0 indexed), and x would be the value from list that you are wanting to be in the cell.
the '{0}:{0}'.format(chr(c + ord('A'))) piece takes the column number provided and converts it to the column letter accepted by xlsxwriter, so if c = 0 set_column would see 'A:A', if c = 1 then it would see 'B:B', and so on.
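Note that the single-character trick only works up to column Z. A small pure-Python helper that generalizes the index-to-letter conversion (xlsxwriter also ships xl_col_to_name in xlsxwriter.utility for the same purpose):

```python
def col_letter(c):
    """Convert a 0-based column index to Excel column letters (A..Z, AA, AB, ...)."""
    letters = ''
    while True:
        letters = chr(c % 26 + ord('A')) + letters
        c = c // 26 - 1
        if c < 0:
            return letters

print(col_letter(0), col_letter(25), col_letter(26), col_letter(27))  # A Z AA AB
```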
the len(str(x)) + 2 piece determines the length of the string you are trying to output, then adds 2 to it to ensure that the Excel cell is wide enough, as the length of the string does not exactly correlate to the width of the cell. You may want to play with whether you add 2 or possibly more, depending on your data.
The units that xlsxwriter accepts are a little harder to explain. When you are in Excel and you hover over where you can change the column width you will see Width: 8.43 (64 pixels). In this example the unit it accepts is the 8.43, which is the column width measured in characters of the default font rather than a physical unit; Excel does not state the unit explicitly.
Note: I have only tried this answer on excel files that contain 1 row of data. If you will have multiple rows, you will need to have a way to determine which row will have the 'longest' information and only apply this to that row. But if each column will be roughly the same size regardless of row, then this should work fine for you.
Good luck and I hope this helps!
Update from January 2023.
XlsxWriter 3.0.6+ now supports an autofit() worksheet method:
from xlsxwriter.workbook import Workbook
workbook = Workbook('autofit.xlsx')
worksheet = workbook.add_worksheet()
# Write some worksheet data to demonstrate autofitting.
worksheet.write(0, 0, "Foo")
worksheet.write(1, 0, "Food")
worksheet.write(2, 0, "Foody")
worksheet.write(3, 0, "Froody")
worksheet.write(0, 1, 12345)
worksheet.write(1, 1, 12345678)
worksheet.write(2, 1, 12345)
worksheet.write(0, 2, "Some longer text")
worksheet.write(0, 3, "http://ww.google.com")
worksheet.write(1, 3, "https://github.com")
# Autofit the worksheet.
worksheet.autofit()
workbook.close()
Or using Pandas:
import pandas as pd
# Create a Pandas dataframe from some data.
df = pd.DataFrame({
    'Country': ['China', 'India', 'United States', 'Indonesia'],
    'Population': [1404338840, 1366938189, 330267887, 269603400],
    'Rank': [1, 2, 3, 4]})
# Order the columns if necessary.
df = df[['Rank', 'Country', 'Population']]
# Create a Pandas Excel writer using XlsxWriter as the engine.
writer = pd.ExcelWriter('pandas_autofit.xlsx', engine='xlsxwriter')
df.to_excel(writer, sheet_name='Sheet1', index=False)
# Get the xlsxwriter workbook and worksheet objects.
workbook = writer.book
worksheet = writer.sheets['Sheet1']
worksheet.autofit()
# Close the Pandas Excel writer and output the Excel file.
writer.close()
Cole Diamond's answer is awesome. I just updated the subroutine to handle multiindex rows and columns.
def get_col_widths(dataframe):
    # First we find the maximum length of the index columns
    idx_max = [max([len(str(s)) for s in dataframe.index.get_level_values(idx)] + [len(str(idx))])
               for idx in dataframe.index.names]
    # Then, we concatenate this to the max of the lengths of column name and its values for each column, left to right
    return idx_max + [max([len(str(s)) for s in dataframe[col].values] +
                          ([len(str(x)) for x in col] if dataframe.columns.nlevels > 1
                           else [len(str(col))]))
                      for col in dataframe.columns]
There is another workaround to simulate Autofit that I've found on the Github site of xlsxwriter. I've modified it to return the approximate size of horizontal text (column width) or 90° rotated text (row height):
from PIL import ImageFont

def get_cell_size(value, font_name, font_size, dimension="width"):
    """ value: cell content
        font_name: the name of the font in the target cell
        font_size: the size of the font in the target cell """
    font = ImageFont.truetype(font_name, size=font_size)
    (size, h) = font.getsize(str(value))
    if dimension == "height":
        return size * 0.92  # fit value experimentally determined
    return size * 0.13  # fit value experimentally determined
This doesn't address bold text or other format elements that might affect the text size. Otherwise it works pretty well.
To find the width for your columns for autofit:
def get_col_width(data, font_name, font_size, min_width=1):
    """ Assume 'data' to be an iterable (rows) of iterables (columns / cells)
        Also, every cell is assumed to have the same font and font size.
        Returns a list with the autofit-width per column """
    colwidth = [min_width for col in data[0]]
    for x, row in enumerate(data):
        for y, value in enumerate(row):
            colwidth[y] = max(colwidth[y], get_cell_size(value, font_name, font_size))
    return colwidth
My version that will go over the one worksheet and autoset the field lengths:
from typing import Optional

from xlsxwriter.worksheet import (
    Worksheet, cell_number_tuple, cell_string_tuple)

def get_column_width(worksheet: Worksheet, column: int) -> Optional[int]:
    """Get the max column width in a `Worksheet` column."""
    strings = getattr(worksheet, '_ts_all_strings', None)
    if strings is None:
        strings = worksheet._ts_all_strings = sorted(
            worksheet.str_table.string_table,
            key=worksheet.str_table.string_table.__getitem__)
    lengths = set()
    for row_id, colums_dict in worksheet.table.items():  # type: int, dict
        data = colums_dict.get(column)
        if not data:
            continue
        if type(data) is cell_string_tuple:
            iter_length = len(strings[data.string])
            if not iter_length:
                continue
            lengths.add(iter_length)
            continue
        if type(data) is cell_number_tuple:
            iter_length = len(str(data.number))
            if not iter_length:
                continue
            lengths.add(iter_length)
    if not lengths:
        return None
    return max(lengths)

def set_column_autowidth(worksheet: Worksheet, column: int):
    """
    Set the width automatically on a column in the `Worksheet`.
    !!! Make sure you run this function AFTER having all cells filled in
    the worksheet!
    """
    maxwidth = get_column_width(worksheet=worksheet, column=column)
    if maxwidth is None:
        return
    worksheet.set_column(first_col=column, last_col=column, width=maxwidth)
just call set_column_autowidth with the column.
Some of the solutions given here were too elaborate for the rather simple thing that I was looking for: every column had to be sized so that all its values fits nicely. So I wrote my own solution. It basically iterates over all columns, and for each column it gets all string values (including the column name itself) and then takes the longest string as the maximal width for that column.
# Set the width of the columns to the max. string length in that column
# ~ simulates Excel's "autofit" functionality
for col_idx, colname in enumerate(df.columns):
    max_width = max([len(colname)] + [len(str(s)) for s in df[colname]])
    worksheet.set_column(col_idx, col_idx, max_width + 1)  # + 1 to add some padding
Here is a version of the code that supports MultiIndex for rows and columns. It is not pretty, but it works for me. It expands on Cole Diamond's answer:
def _xls_make_columns_wide_enough(dataframe, worksheet, padding=1.1, index=True):
    def get_col_widths(dataframe, padding, index):
        max_width_idx = []
        if index and isinstance(dataframe.index, pd.MultiIndex):
            # Index name lengths
            max_width_idx = [len(v) for v in dataframe.index.names]
            # Index value lengths
            for column, content in enumerate(dataframe.index.levels):
                max_width_idx[column] = max(max_width_idx[column],
                                            max([len(str(v)) for v in content.values]))
        elif index:
            max_width_idx = [
                max([len(str(s))
                     for s in dataframe.index.values] + [len(str(dataframe.index.name))])
            ]
        if isinstance(dataframe.columns, pd.MultiIndex):
            # Take care of columns - headers first.
            max_width_column = [0] * len(dataframe.columns.get_level_values(0))
            for level in range(len(dataframe.columns.levels)):
                values = dataframe.columns.get_level_values(level).values
                max_width_column = [
                    max(v1, len(str(v2))) for v1, v2 in zip(max_width_column, values)
                ]
            # Now content.
            for idx, col in enumerate(dataframe.columns):
                max_width_column[idx] = max(max_width_column[idx],
                                            max([len(str(v)) for v in dataframe[col].values]))
        else:
            max_width_column = [
                max([len(str(s)) for s in dataframe[col].values] + [len(col)])
                for col in dataframe.columns
            ]
        return [round(v * padding) for v in max_width_idx + max_width_column]

    for i, width in enumerate(get_col_widths(dataframe, padding, index)):
        worksheet.set_column(i, i, width)
Openpyxl easily handles this task. Just install the module and insert the code below in your file:
# Importing the necessary modules
try:
    from openpyxl.cell import get_column_letter
except ImportError:
    from openpyxl.utils import get_column_letter

for column_cells in sheet.columns:
    new_column_length = max(len(str(cell.value)) for cell in column_cells)
    new_column_letter = get_column_letter(column_cells[0].column)
    if new_column_length > 0:
        sheet.column_dimensions[new_column_letter].width = new_column_length * 1.23
