I'm trying to convert a directory full of XLSX files to CSV. Everything is working except that I'm running into an issue with the columns containing time information. The XLSX files are being created by another program that I cannot modify, but I want the times that show up when I view an XLSX file in Excel to be the same times that appear once it is converted to CSV and viewed in any text editor.
My code:
import csv
import xlrd
import os
import fnmatch
import Tkinter, tkFileDialog, tkMessageBox

def main():
    root = Tkinter.Tk()
    root.withdraw()
    print 'Starting .xlsx to .csv conversion'
    directory = tkFileDialog.askdirectory()
    for fileName in os.listdir(directory):
        if fnmatch.fnmatch(fileName, '*.xlsx'):
            filePath = os.path.join(directory, fileName)
            saveFile = os.path.splitext(filePath)[0] + ".csv"
            savePath = os.path.join(directory, saveFile)
            workbook = xlrd.open_workbook(filePath)
            sheet = workbook.sheet_by_index(0)
            csvOutput = open(savePath, 'wb')
            csvWriter = csv.writer(csvOutput, quoting=csv.QUOTE_ALL)
            for row in xrange(sheet.nrows):
                csvWriter.writerow(sheet.row_values(row))
            csvOutput.close()
    print '.csv conversion complete'

main()
To add some detail, if I open one file in Excel I see this in a time column:
00:10.3
00:14.2
00:16.1
00:20.0
00:22.0
But after I convert to CSV I see this in the same location:
0.000118981
0.000164005
0.000186227
0.000231597
0.000254861
Thanks to seanmhanson and his answer https://stackoverflow.com/a/25149562/1858351, I was able to figure out that Excel is dumping the times as decimal fractions of a day. While I should try to learn and use xlrd better, as a quick short-term fix I was instead able to convert those fractions into seconds, and then from seconds back into the HH:MM:SS time format seen originally. My (probably ugly) code is below in case anyone might be able to use it:
import csv
import xlrd
import os
import fnmatch
from decimal import Decimal
import Tkinter, tkFileDialog

def is_number(s):
    try:
        float(s)
        return True
    except ValueError:
        return False

def seconds_to_hms(seconds):
    input = Decimal(seconds)
    m, s = divmod(input, 60)
    h, m = divmod(m, 60)
    hm = "%02d:%02d:%02.2f" % (h, m, s)
    return hm

def main():
    root = Tkinter.Tk()
    root.withdraw()
    print 'Starting .xlsx to .csv conversion'
    directory = tkFileDialog.askdirectory()
    for fileName in os.listdir(directory):
        if fnmatch.fnmatch(fileName, '*.xlsx'):
            filePath = os.path.join(directory, fileName)
            saveFile = os.path.splitext(filePath)[0] + ".csv"
            savePath = os.path.join(directory, saveFile)
            workbook = xlrd.open_workbook(filePath)
            sheet = workbook.sheet_by_index(0)
            csvOutput = open(savePath, 'wb')
            csvWriter = csv.writer(csvOutput, quoting=csv.QUOTE_ALL)
            rowData = []
            for rownum in range(sheet.nrows):
                rows = sheet.row_values(rownum)
                for cell in rows:
                    if is_number(cell):
                        seconds = float(cell) * float(86400)
                        hms = seconds_to_hms(seconds)
                        rowData.append(hms)
                    else:
                        rowData.append(cell)
                csvWriter.writerow(rowData)
                rowData = []
            csvOutput.close()
    print '.csv conversion complete'

main()
Excel stores time as a float in terms of days. You will need to use XLRD to determine if a cell is a date, and then convert it as needed. I'm not great with XLRD, but you might want something akin to this, changing the string formatting if you want to keep leading zeroes:
if cell.ctype == xlrd.XL_CELL_DATE:
    try:
        # xldate_as_tuple wants the cell's float value plus the workbook's datemode
        cell_tuple = xlrd.xldate_as_tuple(cell.value, 0)
        return "{hours}:{minutes}:{seconds}".format(
            hours=cell_tuple[3], minutes=cell_tuple[4], seconds=cell_tuple[5])
    except xlrd.xldate.XLDateError:
        pass  # exception handling goes here
The XLRD date to tuple method's documentation can be found here: https://secure.simplistix.co.uk/svn/xlrd/trunk/xlrd/doc/xlrd.html?p=4966#xldate.xldate_as_tuple-function
For a similar issue already answered, see also this question: Python: xlrd discerning dates from floats
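Putting the two pieces together — the ctype check from this answer and the HH:MM:SS output the question wants — a minimal sketch of a converter might look like the following. This is only an illustration (the helper names convert and cell_to_text are made up for this sketch), and note that xldate_as_tuple only yields whole seconds, so the fractional seconds that the asker keeps via the manual division by 86400 are dropped here:

import csv
import xlrd

def cell_to_text(cell, datemode):
    # Date/time cells come back from xlrd as floats (fractions of a day);
    # format them as HH:MM:SS so the CSV matches what Excel displays.
    if cell.ctype == xlrd.XL_CELL_DATE:
        try:
            y, mo, d, h, mi, s = xlrd.xldate_as_tuple(cell.value, datemode)
            return "%02d:%02d:%02d" % (h, mi, s)
        except xlrd.xldate.XLDateError:
            return cell.value  # fall back to the raw float
    return cell.value

def convert(xlsxPath, csvPath):
    workbook = xlrd.open_workbook(xlsxPath)
    sheet = workbook.sheet_by_index(0)
    csvOutput = open(csvPath, 'wb')  # 'wb' as in the Python 2 code above
    csvWriter = csv.writer(csvOutput, quoting=csv.QUOTE_ALL)
    for rownum in range(sheet.nrows):
        # sheet.row() returns Cell objects (which carry ctype), unlike row_values()
        csvWriter.writerow([cell_to_text(c, workbook.datemode) for c in sheet.row(rownum)])
    csvOutput.close()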
I am using python-watchdog's PatternMatchingEventHandler to listen for any files with the .xlsx extension. When an Excel file is loaded into Home_Folder, the script creates two folders, Excel and CSV. In the Excel folder, the loaded Excel files are updated so that the data starts on row 1. In the CSV folder, the transformed Excel files are converted to csv. The code below is working fine; however, I was wondering if there is a way to simplify it. For example, if you notice, I am calling the working directory again in the main function. I am new to OOP and not sure how to simplify this code. Any help is much appreciated!
Thanks in advance!
Python Code
import csv
from pathlib import Path
import openpyxl
from openpyxl import load_workbook, Workbook
import os
import pathlib
import glob
from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler
import time
import logging
from logging.handlers import RotatingFileHandler

def createFolders(HOME_FOLDER):
    folders_name = ['Excel', 'CSV']
    for i in folders_name:
        pathlib.Path(HOME_FOLDER + i).mkdir(parents=True, exist_ok=True)

ALLOWED_EXTENSIONS = set(['xlsx'])

def allowed_file(filename):
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

class FileWatcher(PatternMatchingEventHandler):
    patterns = ["*.xlsx"]  # Matching the files with extension .xlsx

    def process(self, event):
        # event.src_path will be the full file path
        # event.event_type will be 'created', 'moved', etc.
        print("{} noticed: {} on: {} ".format(time.asctime(), event.event_type, event.src_path))

    def on_created(self, event):
        self.process(event)  # Calling above process
        self.main(event)  # Calling below main event
        MyPWD = os.getcwd()  # Is this redundant?
        for filename in os.listdir(MyPWD):  # Is this redundant?
            path = os.path.join(MyPWD, filename)  # Is this redundant?
            if os.path.isfile(path) and allowed_file(filename):  # Is this redundant?
                XLFILE = filename  # Is this redundant?

    def main(self, event):
        def get_first_nonempty_row():
            print('get_first_nonempty_row, mr=', max_row_in_sheet)
            first_nonempty_row = None
            for row in range(1, max_row_in_sheet + 1):
                print('checking row', row)
                for col in range(1, max_col_in_sheet + 1):
                    if sheet.cell(row, col).value is not None:
                        first_nonempty_row = row
                        print('first_nonempty_row =', first_nonempty_row)
                        return first_nonempty_row
            return first_nonempty_row

        def del_rows_before(first_nonempty_row):
            modified = False
            if first_nonempty_row > 1:
                print('del_rows_before', first_nonempty_row)
                print('deleting from 1 to', first_nonempty_row - 1)
                modified = True
                sheet.delete_rows(1, first_nonempty_row - 1)
            return modified

        # Splitting excel sheets into separate excel files
        MyPWD = os.getcwd()  # Is this redundant?
        workbooks = glob.iglob('*.xlsx')  # Is this redundant, since the pattern handler is already matching .xlsx files?
        for filename in os.listdir(MyPWD):
            path = os.path.join(MyPWD, filename)
            if os.path.isfile(path) and allowed_file(filename):
                for workbook in workbooks:
                    print('reading:', workbook)
                    wb2 = openpyxl.load_workbook(workbook)
                    for sheet in wb2.worksheets:
                        new_wb = Workbook()
                        ws = new_wb.active
                        ws.title = sheet.title
                        for row_data in sheet.iter_rows():
                            for row_cell in row_data:
                                ws[row_cell.coordinate].value = row_cell.value
                        f_name = os.path.basename(workbook)
                        new_wb.save("/Excel/" + f_name[:-5] + "-" + sheet.title + "_Updated.xlsx")
                        new_wb.close()

        # Transforming the updated excel files so empty rows are removed and data starts from row 1
        Excel_file_path = "/Excel"
        for file in Path(Excel_file_path).glob('*_Updated.xlsx'):
            wb = load_workbook(file)
            wb_modified = False
            for sheet in wb.worksheets:
                max_row_in_sheet = sheet.max_row
                max_col_in_sheet = sheet.max_column
                print("this is max", max_row_in_sheet)
                sheet_modified = False
                if max_row_in_sheet >= 1:
                    first_nonempty_row = get_first_nonempty_row()  # Function to find the first nonempty row
                    sheet_modified = del_rows_before(first_nonempty_row)  # Function to delete the rows before it
            file_name = os.path.basename(file)
            wb.save("/Excel/" + file_name[:-13] + "_Transformed.xlsx")  # Converting the Updated file to a Transformed file
            wb.close()

        #### Converting files to CSV
        ExcelPath = '/Excel'
        CSV_FILE_PATH = 'CSV/'
        for file in Path(ExcelPath).glob('*_Transformed.xlsx'):  # Getting files ending in _Transformed.xlsx to convert to csv
            wb = load_workbook(file)
            print(file, wb.active.title)
            for sheetname in wb.sheetnames:
                with open(CSV_FILE_PATH + f'{file.stem[:-12]}.csv', 'w', encoding="utf-8-sig") as csvfile:
                    spamwriter = csv.writer(csvfile)
                    for row in wb[sheetname].rows:
                        spamwriter.writerow([cell.value for cell in row])

if __name__ == '__main__':
    logging.basicConfig(handlers=[RotatingFileHandler('./my_log.log', maxBytes=100000, backupCount=10)],
                        level=logging.DEBUG, format="%(message)s")
    HOME_FOLDER = 'Files to be tested/'  # Folder to be watched
    obs = Observer()
    obs.schedule(FileWatcher(), path=HOME_FOLDER)
    print("Monitoring started....")
    createFolders(HOME_FOLDER)
    #main()
    obs.start()  # Start watching
    try:
        while obs.isAlive():
            obs.join()
    finally:
        obs.stop()
        obs.join()
Before File Directory Structure
C:\Desktop\Jupyter Notebook\Files to be tested
After File Directory Structure
C:\Desktop\Jupyter Notebook\Files to be tested <- This path now contains the created Excel and CSV folders; it is also where the excel files are uploaded, and it is the ONLY path the file listener watches. In this example, "Myexcel_file1", "Myexcel_file2" and "Myexcel_file3" are uploaded.
C:\Desktop\Jupyter Notebook\Files to be tested\Excel <- This path contains the updated and transformed excel files.
C:\Desktop\Jupyter Notebook\Files to be tested\CSV <- The "_Transformed.xlsx" files from the Excel folder above are converted to csv and saved to this path.
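As a rough illustration of the kind of simplification the question asks about (not a drop-in replacement for the full script): PatternMatchingEventHandler already filters on *.xlsx, and event.src_path already carries the full path of the new file, so the os.getcwd()/os.listdir() scans and the allowed_file() check can usually be dropped and the handler can work on the triggering file directly. The folder layout and per-sheet CSV naming below are illustrative only, and the sheet-splitting and empty-row-trimming steps are omitted for brevity:

import csv
import os
import time

from openpyxl import load_workbook
from watchdog.events import PatternMatchingEventHandler
from watchdog.observers import Observer

class SimplerFileWatcher(PatternMatchingEventHandler):
    patterns = ["*.xlsx"]  # the handler itself filters on the extension

    def on_created(self, event):
        # event.src_path is the full path of the file that triggered the event,
        # so there is no need to rescan the working directory.
        print(f"{time.asctime()} noticed: {event.event_type} on: {event.src_path}")
        self.to_csv(event.src_path)

    def to_csv(self, xlsx_path):
        csv_dir = os.path.join(os.path.dirname(xlsx_path), "CSV")
        os.makedirs(csv_dir, exist_ok=True)
        wb = load_workbook(xlsx_path)
        base = os.path.splitext(os.path.basename(xlsx_path))[0]
        for sheet in wb.worksheets:
            out_path = os.path.join(csv_dir, f"{base}-{sheet.title}.csv")
            with open(out_path, "w", newline="", encoding="utf-8-sig") as csvfile:
                writer = csv.writer(csvfile)
                for row in sheet.iter_rows(values_only=True):
                    writer.writerow(row)

if __name__ == "__main__":
    obs = Observer()
    obs.schedule(SimplerFileWatcher(), path="Files to be tested/")
    obs.start()
    try:
        while obs.is_alive():
            obs.join(1)
    finally:
        obs.stop()
        obs.join()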
I'm writing a Python script to generate a QR code from the first column in a csv (concatenated with a local name), and that part works well. The csv just has three columns and looks like this:
ID First Last
144 Jerry Seinfeld
491 George Costanza
104 Elaine Benes
99 Cosmo Kramer
And I use my Python script to take that file, append a prefix to the IDs (in this case, 'NBC') and then create QR codes for each record in a new folder. It's a little long but all of this seems to work fine also:
import csv
import qrcode
import os
import shutil
import time
import inquirer

# Identify Timestamp
timestr = time.strftime("%Y%m%d-%H%M%S")

local = 'NBC'

# Load csv
filename = "stackoverflowtest.csv"

# Path to new local folder
localfolder = local
localimagefolder = localfolder + '/image'
localfilefolder = localfolder + '/files'

# Check/create folders based on local
if not os.path.exists(localfolder):
    os.makedirs(localfolder)
if not os.path.exists(localimagefolder):
    os.makedirs(localimagefolder)
if not os.path.exists(localfilefolder):
    os.makedirs(localfilefolder)

# Copy uploaded file to their local's file folder
shutil.copy2(filename, localfilefolder + '/' + local + '-' + timestr + '.csv')  # complete target filename given

# Read csv and generate QR code for local+first column of csv
with open(filename, 'rU') as csvfile:
    next(csvfile, None)  # skip header row
    reader = csv.reader(csvfile, delimiter=',', dialect=csv.excel_tab)
    for i, row in enumerate(reader):
        labeldata = row[0]  # Choose first column of data to create QR codes
        print labeldata
        qr = qrcode.QRCode(
            version=1,
            error_correction=qrcode.constants.ERROR_CORRECT_L,
            box_size=10,
            border=4,
        )
        qr.add_data(local + "-" + labeldata)
        qr.make()
        img = qr.make_image()
        img.save(localimagefolder + "/" + local + "-" + labeldata + ".png".format(i))  # Save image
It creates the NBC folder, copies each csv file in one subfolder, and creates the QR codes for each ID (NBC-144,NBC-491,NBC-104,NBC-99) in another.
The part where I'm running into a problem is opening the csv and writing the filepath/filename back to the csv (or a copy of the csv since from what I've read, I likely can't do it to the same one). Is that possible?
The closest I've come with a script that works is appending the local name to the ID and writing that back to a column, but I can't seem to figure out how to do the same with a variable, let alone a filepath/filename:
import csv
import os
import sys

filename = 'stackoverflowtest.csv'
newfilename = 'stackoverflowtest2.csv'
local = 'NBC'

with open(filename, 'rU') as f:
    reader = csv.reader(f)
    with open(newfilename, 'w') as g:
        writer = csv.writer(g)
        for row in reader:
            new_row = row[0:] + ['-'.join([local, row[0]])]
            writer.writerow(new_row)
Is it possible to write something like that within my existing script to add a column for the filepath and filename? Everything I try breaks -- especially if I attempt to do it in the same script.
EDIT:
This is my closest attempt; it overwrote the existing file:
f = open(newfilename, 'r+')
w = csv.writer(f)
for path, dirs, files in os.walk(path):
    for filename in files:
        w.writerow([newfilename])
Also it's still in a separate script.
Since I can't run the code in your question directly, I had to comment out portions of it in what's below for testing, but I think it does everything you wanted in one loop in one script.
import csv
#import qrcode
import os
import shutil
import time
#import inquirer

# Identify Timestamp
timestr = time.strftime("%Y%m%d-%H%M%S")

local = 'NBC'

# Load csv
filename = "stackoverflowtest.csv"

# Path to new local folder
localfolder = local
localimagefolder = os.path.join(localfolder, 'image')
localfilefolder = os.path.join(localfolder, 'files')

# Check/create folders based on local
if not os.path.exists(localfolder):
    os.makedirs(localfolder)
if not os.path.exists(localimagefolder):
    os.makedirs(localimagefolder)
if not os.path.exists(localfilefolder):
    os.makedirs(localfilefolder)

# Copy uploaded file to their local's file folder
target = os.path.join(localfilefolder, local + '-' + timestr + '.csv')  # Target filename
#shutil.copy2(filename, target)  # Don't need to do this.

# Read csv and generate QR code for local+first column of csv
with open(filename, 'rb') as csvfile, open(target, 'wb') as outfile:
    reader = csv.reader(csvfile, delimiter=',', dialect=csv.excel_tab)
    writer = csv.writer(outfile, delimiter=',', dialect=csv.excel_tab)
    next(reader)  # Skip header row.
    for row in reader:
        id, first, last = row
        # qr = qrcode.QRCode(
        #     version=1,
        #     error_correction=qrcode.constants.ERROR_CORRECT_L,
        #     box_size=10,
        #     border=4,
        # )
        #
        # qr.add_data(local+"-"+id)
        # qr.make()
        #
        # img = qr.make_image()
        imagepath = os.path.join(localimagefolder, local + "-" + id + ".png")
        # img.save(imagepath)  # Save image.
        print "saving img:", imagepath
        writer.writerow(row + [local + '-' + id, imagepath])
Output from sample input data:
144,Jerry,Seinfeld,NBC-144,NBC/image/NBC-144.png
491,George,Costanza,NBC-491,NBC/image/NBC-491.png
104,Elaine,Benes,NBC-104,NBC/image/NBC-104.png
99,Cosmo,Kramer,NBC-99,NBC/image/NBC-99.png
I want to use tkinter to browse an excel sheet and make a drop down menu of the rows of that excel sheet.
I am pretty new to python and do not know how to work it through. The code until now looks like this:
import xlrd
import os
from subprocess import call
import Tkinter, tkFileDialog

root = Tkinter.Tk()
root.withdraw()

filename = tkFileDialog.askopenfiles(title='Choose an excel file')
print(filename)
print type(filename)
#file = str(filename)
file = [filetypes for filetypes in filename if ".xlsx" in filetypes]
workbook = xlrd.open_workbook(filename)
for file in filename:
    sheet = workbook.sheet_by_index(0)
    print(sheet)
    for value in sheet.row_values(0):
        print(value)
This throws an error:
Traceback (most recent call last):
  File "C:/Geocoding/test.py", line 14, in <module>
    workbook = xlrd.open_workbook(filename)
  File "C:\Python27\ArcGIS10.3\lib\site-packages\xlrd\__init__.py", line 394, in open_workbook
    f = open(filename, "rb")
TypeError: coercing to Unicode: need string or buffer, list found
I am not even able to read the excel sheet that the user browses, and I have no idea why I get this error. I would really appreciate it if anybody could help me with it. Am I on the right path?
Thanks
The new code that works:
import xlrd
from Tkinter import *
import Tkinter, tkFileDialog

root = Tkinter.Tk()
root.withdraw()

filename = tkFileDialog.askopenfilename(title='Choose an excel file')
print(filename)
print type(filename)
#file = str(filename)
file = [filetypes for filetypes in filename if ".xlsx" in filetypes]
workbook = xlrd.open_workbook(filename)
#for file in filename:
sheet = workbook.sheet_by_index(0)
print(sheet)
for value in sheet.row_values(0):
    print(value)
    print(type(value))

master = Tk()
variable = StringVar(master)
#variable = sheet.row_values(0)[0]
variable.set(sheet.row_values(0)[0])
#for var in value:
#    variable = StringVar(master)
#    variable.set(value)  # default value
#w = OptionMenu(master, variable, value)
w = apply(OptionMenu, (master, variable) + tuple(sheet.row_values(0)))
w.pack()
mainloop()
You may have more errors along the way but in your code here:
filename = tkFileDialog.askopenfiles(title='Choose an excel file')
the result from that dialog is a list of file objects. So you are passing that list of file objects to open_workbook here:
workbook = xlrd.open_workbook(filename)
Instead what you need to do is pass the name of the file you care about as a string to open_workbook:
workbook = xlrd.open_workbook(filename[0].name) # the name of the first file in the list
Here is a working Python 3 example (sorry, I abandoned Python 2) for using tkinter to properly select filenames:
from tkinter import filedialog
from tkinter import *
root = Tk()
root.withdraw()
filename = filedialog.askopenfiles(title='Choose an excel file')
print(filename) # filename is a list of file objects
print(filename[0].name) # this is the name of the first selected in the dialog that you can pass to xlrd
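To tie that back to the original goal (a drop-down built from the first row of the chosen sheet), here is a minimal Python 3 sketch of the whole flow. It is only an illustration and assumes an xlrd version that can still read .xlsx files (older than 2.0, as used in the question); askopenfilename is used so a single path string comes back:

import xlrd
from tkinter import Tk, StringVar, OptionMenu, filedialog

root = Tk()
root.withdraw()

# askopenfilename returns one path as a string, which is what open_workbook expects
filename = filedialog.askopenfilename(title='Choose an excel file',
                                      filetypes=[('Excel files', '*.xlsx')])

workbook = xlrd.open_workbook(filename)
sheet = workbook.sheet_by_index(0)
header = sheet.row_values(0)  # values of the first row

root.deiconify()  # show the window again so the drop-down is visible
variable = StringVar(root)
variable.set(header[0])
OptionMenu(root, variable, *header).pack()  # *header replaces the Python 2 apply()
root.mainloop()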
I have a dataframe that I'm exporting to Excel, and people want it in .xlsx. I use to_excel, but when I change the extension from .xls to .xlsx, the exporting step takes about 9 seconds as opposed to 1 second. Exporting to a .csv is even faster, which I believe is due to the fact that it's just a specially formatted text file.
Perhaps the .xlsx format just adds a lot more features, so it takes longer to write, but I'm hoping there is something I can do to prevent this.
Pandas defaults to using OpenPyXL for writing xlsx files, which can be slower than the xlwt module used for writing xls files.
Try it instead with XlsxWriter as the xlsx output engine:
df.to_excel('file.xlsx', sheet_name='Sheet1', engine='xlsxwriter')
It should be as fast as the xls engine.
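If you want to verify the difference on your own machine, a rough timing harness like this sketch can show the gap. It assumes openpyxl and XlsxWriter are both installed and uses a random dataframe as a stand-in for your data (the old xls/xlwt engine is left out because recent pandas versions have dropped it):

import time

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(20000, 20))  # stand-in for your dataframe

for path, engine in [('out_openpyxl.xlsx', 'openpyxl'),
                     ('out_xlsxwriter.xlsx', 'xlsxwriter')]:
    start = time.perf_counter()
    df.to_excel(path, sheet_name='Sheet1', engine=engine)
    print(engine, round(time.perf_counter() - start, 2), 'seconds')

start = time.perf_counter()
df.to_csv('out.csv')
print('csv', round(time.perf_counter() - start, 2), 'seconds')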
As per various Python-to-Excel module benchmarks, pyexcelerate has better performance.
The code below dumps sqlite table data into worksheets of an xlsx file. A table is only stored in the xlsx file if its row count is less than 1,000,000 rows; otherwise the data is stored in a compressed csv file.
def passfile(datb, tables):
    """copy to xlsx or csv files tables from query results"""
    import sqlite3
    import pandas as pd
    import timeit
    import csv
    from pyexcelerate import Workbook
    from pathlib import Path
    from datetime import date

    dat_dir = Path("C:/XML")
    db_path = dat_dir / datb
    start_time = timeit.default_timer()
    conn = sqlite3.connect(db_path)  # database connection
    c = conn.cursor()
    today = date.today()
    tablist = []
    with open(tables, 'r') as csv_file:  # tables to be collected file
        csv_reader = csv.DictReader(csv_file)
        for line in csv_reader:
            tablist.append(line['table'])  # column header
    xls_file = "Param" + today.strftime("%y%m%d") + ".xlsx"
    xls_path = dat_dir / xls_file  # xls file path-name
    csv_path = dat_dir / "csv"  # csv path to store big data
    wb = Workbook()  # excelerator file init
    for line in tablist:
        try:
            df = pd.read_sql_query("select * from " + line + ";", conn)  # pandas dataframe from sqlite
            if len(df) > 1000000:  # excel not supported
                print('save to csv')
                csv_loc = line + today.strftime("%y%m%d") + '.csv.gz'  # compressed csv file name
                df.to_csv(csv_path / csv_loc, compression='gzip')
            else:
                data = [df.columns.tolist()] + df.values.tolist()
                data = [[index] + row for index, row in zip(df.index, data)]
                wb.new_sheet(line, data=data)
        except sqlite3.Error as error:  # sqlite error handling
            print('SQLite error: %s' % (' '.join(error.args)))
    print("saving workbook")
    wb.save(xls_path)
    end_time = timeit.default_timer()
    delta = round(end_time - start_time, 2)
    print("Took " + str(delta) + " secs")
    c.close()
    conn.close()

passfile("20200522_sqlite.db", "tablesSQL.csv")
I have a few csv files which I would like to dump as new worksheets in an Excel workbook (xls/xlsx).
How do I achieve this?
Googled and found 'pyXLwriter', but it seems the project has been stopped.
While I'm trying out 'pyXLwriter', I would like to know: are there any alternatives/suggestions/modules?
Many Thanks.
[Edit]
Here is my solution (anyone have a much leaner, more pythonic solution? do comment, thanks):
import glob
import csv
import xlwt
import os

wb = xlwt.Workbook()
for filename in glob.glob("c:/xxx/*.csv"):
    (f_path, f_name) = os.path.split(filename)
    (f_short_name, f_extension) = os.path.splitext(f_name)
    ws = wb.add_sheet(str(f_short_name))
    spamReader = csv.reader(open(filename, 'rb'), delimiter=',', quotechar='"')
    row_count = 0
    for row in spamReader:
        for col in range(len(row)):
            ws.write(row_count, col, row[col])
        row_count += 1

wb.save("c:/xxx/compiled.xls")
print "Done"
Not sure what you mean by "much leaner, much pythonic" but you certainly could spruce it up a bit:
import glob, csv, xlwt, os

wb = xlwt.Workbook()
for filename in glob.glob("c:/xxx/*.csv"):
    (f_path, f_name) = os.path.split(filename)
    (f_short_name, f_extension) = os.path.splitext(f_name)
    ws = wb.add_sheet(f_short_name)
    spamReader = csv.reader(open(filename, 'rb'))
    for rowx, row in enumerate(spamReader):
        for colx, value in enumerate(row):
            ws.write(rowx, colx, value)
wb.save("c:/xxx/compiled.xls")
You'll find all you need in this xlwt tutorial. These libraries (xlrd and xlwt) are the most popular choices for managing Excel interaction in Python. The downside is that, at the moment, they only support the Excel binary format (.xls).
Use xlsxwriter to create and write to an Excel file in Python.
Install it with: pip install xlsxwriter
import xlsxwriter
# Create an new Excel file and add a worksheet.
workbook = xlsxwriter.Workbook('demo.xlsx')
worksheet = workbook.add_worksheet()
# Widen the first column to make the text clearer.
worksheet.set_column('A:A', 20)
# Add a bold format to use to highlight cells.
bold = workbook.add_format({'bold': True})
# Write some simple text.
worksheet.write('A1', 'Hello')
# Text with formatting.
worksheet.write('A2', 'World', bold)
# Write some numbers, with row/column notation.
worksheet.write(2, 0, 123)
worksheet.write(3, 0, 123.456)
# Insert an image.
worksheet.insert_image('B5', 'logo.png')
workbook.close()
I always just write the Office 2003 XML format through strings. It's quite easy to do and much easier to manage than writing and zipping up what constitutes an xlsx document. It also doesn't require any external libraries (though one could easily roll their own).
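For anyone curious what that looks like in practice, here is a rough, minimal sketch of the Office 2003 SpreadsheetML shape written with plain strings. It is an illustration rather than the exact code behind this answer; the file and sheet names are made up, and every cell is emitted as a string:

import csv
from xml.sax.saxutils import escape

HEADER = ('<?xml version="1.0"?>\n'
          '<?mso-application progid="Excel.Sheet"?>\n'
          '<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"\n'
          '          xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet">\n')

def write_sheet_from_csv(csv_path, sheet_name, out):
    # One <Worksheet> per CSV file; every value is written as a String cell.
    out.write(' <Worksheet ss:Name="%s">\n  <Table>\n' % escape(sheet_name))
    with open(csv_path) as f:
        for row in csv.reader(f):
            out.write('   <Row>\n')
            for value in row:
                out.write('    <Cell><Data ss:Type="String">%s</Data></Cell>\n' % escape(value))
            out.write('   </Row>\n')
    out.write('  </Table>\n </Worksheet>\n')

with open('compiled.xml', 'w') as out:
    out.write(HEADER)
    write_sheet_from_csv('first.csv', 'first', out)    # example file names
    write_sheet_from_csv('second.csv', 'second', out)
    out.write('</Workbook>\n')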
Also, Excel supports loading CSV files, either space delimited or character delimited. You can either load one right in, or copy & paste it and then press the Text-To-Columns button in the options. This option has nothing to do with Python, of course.
Also available in GitHub repo "Kampfmitexcel"...
import csv, xlwt, os

def input_from_user(prompt):
    return raw_input(prompt).strip()

def make_an_excel_file_from_all_the_txtfiles_in_the_following_directory(directory):
    wb = xlwt.Workbook()
    for filename in os.listdir(directory):
        if filename.endswith(".csv") or filename.endswith(".txt"):
            ws = wb.add_sheet(os.path.splitext(filename)[0])
            with open('{}\\{}'.format(directory, filename), 'rb') as csvfile:
                reader = csv.reader(csvfile, delimiter=',')
                for rowx, row in enumerate(reader):
                    for colx, value in enumerate(row):
                        ws.write(rowx, colx, value)
    return wb

if __name__ == '__main__':
    path_to_data = input_from_user("Where is the data stored?: ")
    xls = make_an_excel_file_from_all_the_txtfiles_in_the_following_directory(path_to_data)
    xls_name = input_from_user('What do you want to name the excel file?: ')
    xls.save('{}\\{}{}'.format(path_to_data, xls_name, '.xls'))
    print "Your file has been saved in the data folder."
This is based on your own answer, but the reason I am using xlsxwriter is that it accepts more data in the xlsx format, whereas xlwt limits you to 65,536 rows and the xls format.
import xlsxwriter
import glob
import csv

workbook = xlsxwriter.Workbook('compiled.xlsx')
for filename in glob.glob("*.csv"):
    ws = workbook.add_worksheet(str(filename.split('.')[0]))
    spamReader = csv.reader(open(filename, 'rb'), delimiter=',', quotechar='"')
    row_count = 0
    print filename
    for row in spamReader:
        for col in range(len(row)):
            ws.write(row_count, col, row[col])
        row_count += 1
workbook.close()