MySQLdb to Excel - python

I have a Django project which has a mysql database backend. How can I export contents from my db to an Excel (xls, xlsx) format?

phpMyAdmin has an Export tab, and you can export in CSV. This can be imported into Excel.

http://pypi.python.org/pypi/xlwt

If you need an xlsx (Excel 2007) exporter, you can use openpyxl. Otherwise, xlwt is an option.
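A minimal sketch of the openpyxl route; the row data, header names, and file name are placeholders (in Django you might pass in something like `queryset.values_list(...)`):

```python
from openpyxl import Workbook

def export_rows_to_xlsx(rows, header, filename):
    """Write an iterable of row tuples to an .xlsx file."""
    wb = Workbook()
    ws = wb.active
    ws.append(header)          # first row: column names
    for row in rows:
        ws.append(list(row))   # one spreadsheet row per DB row
    wb.save(filename)

# Placeholder rows standing in for database query results.
export_rows_to_xlsx([(1, "widget"), (2, "gadget")], ["id", "name"], "export.xlsx")
```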

Openpyxl is a great choice,
but if you don't want to learn a new library you can simply write your own export function.
For example, you can export rows in CSV format like this:
def csv_export(database_array):
    f_csv = open('mydatabase.csv', 'w')
    for row in database_array:
        f_csv.write('"%s";;;;;"%s"\n' % (row[0], row[1]))
    f_csv.close()
When you open the exported file in Excel, set ";;;;;" as the separator.


python sqlite3 uploading data to DB from excel(xls/xlsx)

I'm trying to create a DB from an excel spreadsheet. I can fetch data from excel and display in the html page, but I am not able to store it in sqlite db.
A few ways you can try:
Save excel as csv. Read csv in python (link) and save in sqlite (link).
Read excel into a pandas dataframe (link), and then save dataframe to sqlite (link).
Read excel directly from python (link) and save data to sqlite.
I used the code below, which worked, but it overwrites the file.
#import pandas software library
import pandas as pd
df = pd.read_excel(r'C:\Users\kmc487\PycharmProjects\myproject\Product List.xlsx')
#Print sheet1
print(df)
df.to_excel("output.xlsx", sheet_name="Sheet_1")
Below are the input file details:
My input file is in .xlsx format, but the output is stored as .xls (I need the code to output .xlsx format)
The file has its headings in the second row (the first row is blank)
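A sketch of option 2 above that also handles headings in the second row (`header=1`) and appends to the database instead of overwriting; the sample data, file name, and table name are all placeholders:

```python
import sqlite3
import pandas as pd

# Build a tiny workbook standing in for "Product List.xlsx": a blank
# first row, then the real headings in row two.
sample = pd.DataFrame([[None, None], ["name", "price"], ["widget", 9.99]])
sample.to_excel("products.xlsx", index=False, header=False)

# header=1 tells pandas the column names are on the second row.
df = pd.read_excel("products.xlsx", header=1)

conn = sqlite3.connect("products.db")
# if_exists="append" adds rows instead of overwriting the table.
df.to_sql("products", conn, if_exists="append", index=False)
rows = conn.execute("SELECT name, price FROM products").fetchall()
conn.close()
```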

CSV file with Arabic characters is displayed as symbols in Excel

I am using Python to extract Arabic tweets from Twitter and save them as a CSV file, but when I open the saved file in Excel the Arabic text displays as symbols. However, in Python, Notepad, or Word it looks fine.
Where is the problem?
This is a problem I face frequently with Microsoft Excel when opening CSV files that contain Arabic characters. Try the following workaround, which I tested on the latest versions of Microsoft Excel on both Windows and macOS:
Open Excel on a blank workbook
Within the Data tab, click on the From Text button (if it is not activated, make sure an empty cell is selected)
Browse and select the CSV file
In the Text Import Wizard, change the File_origin to "Unicode (UTF-8)"
Click Next and, under Delimiters, select the delimiter used in your file, e.g. comma
Finish and select where to import the data
The Arabic characters should show correctly.
Just use encoding='utf-8-sig' instead of encoding='utf-8' as follows:
import csv
data = u"اردو"
with open('example.csv', 'w', encoding='utf-8-sig') as fh:
    writer = csv.writer(fh)
    writer.writerow([data])
It worked on my machine.
The only solution I've found to save Arabic into an Excel file from Python is to use pandas and save to the xlsx extension instead of csv; xlsx handles this far better. Here's the code I put together, which worked for me:
import pandas as pd

def turn_into_csv(data, csver):
    ids = []
    texts = []
    for each in data:
        texts.append(each["full_text"])
        ids.append(str(each["id"]))
    df = pd.DataFrame({'ID': ids, 'FULL_TEXT': texts})
    writer = pd.ExcelWriter(csver + '.xlsx', engine='xlsxwriter')
    df.to_excel(writer, sheet_name='Sheet1', encoding="utf-8-sig")
    # Close the Pandas Excel writer and output the Excel file.
    writer.save()
The fastest way, after saving the file to .csv from Python:
open the .csv file in Notepad++
from the Encoding drop-down menu choose UTF-8-BOM
click Save As and save with the same name and .csv extension (e.g. data.csv), keeping the file type as it is
re-open the file with Microsoft Excel.
Excel is known to have an awful CSV import system. Long story short: if on the same system you import a CSV file that you have just exported, it will work smoothly. Otherwise, the CSV file is expected to use the Windows system encoding and delimiter.
A rather awkward but robust alternative is to use LibreOffice or OpenOffice. Both are far ahead of Excel in CSV handling: they allow you to specify the delimiter and optional quoting characters along with the encoding of the CSV file, and you can then save the resulting file as xlsx.
Although my CSV file's encoding was already UTF-8, explicitly re-saving it with Notepad resolved it.
Steps:
Open your CSV file in Notepad.
Click File --> Save as...
In the "Encoding" drop-down, select UTF-8.
Rename your file using the .csv extension.
Click Save.
Reopen the file with Excel.

pandas - python export as xls instead xlsx - ExcelWriter

I would like to export my pandas dataframe as a xls file and not a xlsx.
I use ExcelWriter.
I have done :
xlsxWriter = pd.ExcelWriter(str(outputName + "- Advanced.xls"))
Unfortunately, nothing is output.
I think I have to change the engine, but I don't know how.
You can use to_excel and pass a file name with the .xls extension:
df.to_excel('file_name_blah.xls')
pandas will use a different module to write the Excel sheet out; note that it will require you to have the prerequisite third-party module (xlwt, for .xls files) installed.
If for some reason you do need to explicitly call pd.ExcelWriter, here's how:
outputName = "xxxx"
xlsWriter = pd.ExcelWriter(str(outputName + "- Advanced.xls"), engine='xlwt')
# Convert the dataframe to an Excel Writer object.
test.to_excel(xlsWriter, sheet_name='Sheet1')
# Close the Pandas Excel writer and output the Excel file.
xlsWriter.save()
It's critical not to forget the save() command. That was your problem.
Note that you can also set the engine directly like so: test.to_excel('test.xls', engine='xlwt')
The easiest way to do this is to install the xlwt package in your active environment:
pip install xlwt
then just simply use the below code:
df.to_excel('test.xls')

Python CSV reading multiple Tabs

Is there a way I can use the csv reader in python to look at multiple tabs in the workbook?
I am using the following to open the file but how could I target python between tab1 and tab2 in the workbook?
working_file = open('X:/test/test/test_file.csv', 'r')
working_file_lines = working_file.read().splitlines()
working_file_reader = csv.reader(working_file_lines)
working_file.close()
I want to read tab1 and then append it to a list in python and then read tab2 and append it to a list as well.
You're thinking of a spreadsheet (Excel format, LibreOffice, etc.). When you export a spreadsheet to CSV, only the currently active worksheet is exported; spreadsheet formats are much more complex than a simple CSV file.
So there is no way to switch worksheets - simply open your CSV with a plain text-editor to see the contents yourself.
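If you still have the original workbook (rather than only the exported CSV), pandas can read every tab in one call; the file and sheet names below are made up for illustration:

```python
import pandas as pd

# Create a small two-tab workbook standing in for the real spreadsheet.
with pd.ExcelWriter("workbook.xlsx") as writer:
    pd.DataFrame({"a": [1, 2]}).to_excel(writer, sheet_name="tab1", index=False)
    pd.DataFrame({"b": [3]}).to_excel(writer, sheet_name="tab2", index=False)

# sheet_name=None returns a dict mapping each tab name to a DataFrame,
# so each tab can be appended to its own list.
sheets = pd.read_excel("workbook.xlsx", sheet_name=None)
tab1_list = sheets["tab1"].values.tolist()
tab2_list = sheets["tab2"].values.tolist()
```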

Copy data from MS Access to MS Excel using Python

I've been spending the better part of the weekend trying to figure out the best way to transfer data from an MS Access table into an Excel sheet using Python. I've found a few modules that may help (execsql, python-excel), but with my limited knowledge and the modules I have to use to create certain data (I'm a GIS professional, so I'm creating spatial data with the ArcGIS arcpy module into an Access table),
I'm not sure what the best approach should be. All I need to do is copy 4 columns of data from Access to Excel and then format the Excel sheet. I have the formatting part solved.
Should I:
Iterate through the rows using a cursor and somehow load the rows into excel?
Copy the columns from access to excel?
Export the whole access table into a sheet in excel?
Thanks for any suggestions.
I eventually found a way to do this. I thought I'd post my code for anyone who may run into the same situation. I use some GIS files, but if you don't, you can set a variable to a directory path instead of using env.workspace and use a cursor search instead of the arcpy.SearchCursor function, then this is doable.
import arcpy, xlwt
from arcpy import env
from xlwt import Workbook

# Set the workspace. Location of feature class or dbf file. I used a dbf file.
env.workspace = r"C:\data"
# Use row objects to get and set field values
cur = arcpy.SearchCursor("SMU_Areas.dbf")
# Set up the workbook and sheets
book = Workbook()
sheet1 = book.add_sheet('Sheet 1')
book.add_sheet('Sheet 2')
# Set row counter
rowx = 0
# Loop through rows in the dbf file.
for row in cur:
    rowx += 1
    # Write each row to the sheet. Set a column index in the sheet for each column in the .dbf
    sheet1.write(rowx, 0, row.ID)
    sheet1.write(rowx, 1, row.SHAPE_Area / 10000)
book.save(r'C:\data\MyExcel.xls')
del cur, row
I currently use the XLRD module to suck in data from an Excel spreadsheet and an insert cursor to create a feature class, which works very well.
You should be able to use a search cursor to iterate through the feature class records and then use the XLWT Python module (http://www.python-excel.org/) to write the records to Excel.
You can use ADO to read the data from Access (here are the connection strings for Access 2007+ (.accdb files) and Access 2003- (.mdb files)) and then use Excel's Range.CopyFromRecordset method (assuming you are using Excel via COM) to copy the entire recordset into Excel.
The best approach might be to not use Python for this task.
You could use the macro recorder in Excel to record the import of the External data into Excel.
After starting the macro recorder click Data -> Get External Data -> New Database Query and enter your criteria. Once the data import is complete you can look at the code that was generated and replace the hard coded search criteria with variables.
Another idea: how important is the formatting part? If you can ditch the formatting, you can output your data as CSV. Excel can open CSV files, and the CSV format is much simpler than the Excel format. It's so simple you can write it directly from Python like a text file, and that way you won't need to mess with Office COM objects.
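The CSV route in that last suggestion can be this simple; the column names and rows below are placeholders standing in for the four Access columns:

```python
import csv

# Four made-up columns standing in for the Access table's fields.
rows = [
    (1, "Area A", 12.5, "2013-05-01"),
    (2, "Area B", 7.25, "2013-05-02"),
]
with open("export.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["id", "name", "area_ha", "date"])  # header row
    writer.writerows(rows)
```

Excel opens the resulting file directly, one spreadsheet row per CSV line.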
