import pandas as pd
df1=pd.read_csv('out.csv')
df2=pd.read_excel('file.xls')
df2['Location']=df1['Location']
df2['Sublocation']=df1['Sublocation']
df2['Zone']=df1['Zone']
df2['Subnet Type']=df1['Subnet Type']
df2['Description']=df1['Description']
newfile = input("Enter a name for the combined xlsx file: ")
print('Saving to new xlsx file...')
writer = pd.ExcelWriter(newfile)
df2.to_excel(writer, index=False)
writer.save()
Basically, it reads a CSV file with 5 columns and an .xls file with existing columns, then makes an .xlsx file where the two files are combined, with the 5 new columns added.
It works, but only for 4999 rows; the last 10 don't have the 5 new columns in the new .xlsx file.
I am a little confused about the problem, so I came up with 2 options:
1. Append df1 to df2
2. Merge df1 into df2 (adds new columns to the existing df)
I think in your case you don't have the same number of rows in the CSV and the Excel file, and for that reason the last 10 rows don't have values in the output.
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.array([
    ['a', 51, 61],
    ['b', 52, 62],
    ['c', 53, 63]]),
    columns=['name', 'attr11', 'attr12'])
df2 = pd.DataFrame(np.array([
    ['a', 31, 41],
    ['b', 32, 42],
    ['c', 33, 43],
    ['d', 34, 44]]),
    columns=['name', 'attr21', 'attr22'])

# option 1: append (note that DataFrame.append() was removed in pandas 2.0;
# pd.concat([df1, df2]) is the modern equivalent)
df3 = df1.append(df2)
print(df3)

# option 2: merge on the common 'name' column
print(pd.merge(df1, df2, on='name', how='right'))
Most likely there is a way to do what you want within pandas, but in case there isn't, you can use lower-level packages to accomplish your task.
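For reference, a pandas-only sketch of that column-wise combine could look like the following (the input file names come from the question, the output name is just an example, and it assumes both files describe the same rows in the same order):
import pandas as pd

df1 = pd.read_csv('out.csv')      # the 5 extra columns
df2 = pd.read_excel('file.xls')   # the existing data

# reset_index() guards against mismatched indexes; axis=1 places the
# CSV columns next to the Excel columns, row by row
combined = pd.concat([df2.reset_index(drop=True),
                      df1.reset_index(drop=True)], axis=1)
combined.to_excel('combined.xlsx', index=False)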
To read the CSV file, use the csv module that comes with Python. The following code loads all the data into a Python list, where each element of the list is a row in the CSV. Note that this code is not as compact as an experienced Python programmer would write. I've tried to strike a balance between being readable for Python beginners and being "idiomatic":
import csv

with open('input1.csv', 'r', newline='') as f:  # text mode with newline='' as the csv module expects
    reader = csv.reader(f)
    csvdata = []
    for row in reader:
        csvdata.append(row)
To read the .xls file, use xlrd, which may already be installed since pandas uses it for .xls files, but you can install it separately if needed. Again, the following code is not the shortest possible, but is hopefully easy to understand:
import xlrd

wb = xlrd.open_workbook('input2.xls')
ws = wb.sheet_by_index(0)  # use the first sheet
xlsdata = []
for rx in range(ws.nrows):
    xlsdata.append(ws.row_values(rx))
Finally, write out the combined data to a .xlsx file using XlsxWriter. This is another package that may already be installed if you've used pandas to write Excel files, but can be installed separately if needed. Once again, I've tried to stick to relatively simple language features. For example, I've avoided zip(), whose workings might not be obvious to Python beginners:
import xlsxwriter

wb = xlsxwriter.Workbook('output.xlsx')
ws = wb.add_worksheet()
assert len(csvdata) == len(xlsdata)  # we expect the same number of rows
for rx in range(len(csvdata)):
    ws.write_row(rx, 0, xlsdata[rx])
    ws.write_row(rx, len(xlsdata[rx]), csvdata[rx])
wb.close()
Note that write_row() lets you choose the destination cell of the leftmost data element. So I've used it twice for each row: once to write the .xls data at the far left, and once more to write the CSV data with a suitable offset.
I think you should append data
import pandas as pd
df1=pd.read_csv('out.csv')
df2=pd.read_excel('file.xls')
df2 = df2.append(df1)  # append returns a new DataFrame, so assign it back
newfile = input("Enter a name for the combined xlsx file: ")
print('Saving to new xlsx file...')
writer = pd.ExcelWriter(newfile)
df2.to_excel(writer, index=False)
writer.save()
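Note that DataFrame.append() returns a new DataFrame rather than modifying df2 in place, and it has been removed in pandas 2.0; a sketch of the same idea with pd.concat:
import pandas as pd

df1 = pd.read_csv('out.csv')
df2 = pd.read_excel('file.xls')

# pd.concat is the replacement for the removed DataFrame.append()
df2 = pd.concat([df2, df1], ignore_index=True)

newfile = input("Enter a name for the combined xlsx file: ")
with pd.ExcelWriter(newfile) as writer:  # the context manager saves the file on exit
    df2.to_excel(writer, index=False)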
Related
I have written some content to an xlsx file using xlsxwriter:
workbook = xlsxwriter.Workbook(file_name)
worksheet = workbook.add_worksheet()
worksheet.write(row, col, value)
workbook.close()  # close() belongs to the Workbook, not the Worksheet
I'd like to append a dataframe after the existing rows in this file using to_excel:
df.to_excel(file_name,
startrow=len(existing_content),
engine='xlsxwriter')
However, this does not seem to work. The dataframe is not inserted into the file. Does anyone know why?
Since the code above is not fully specified, let's take a look at to_excel and XlsxWriter with concrete examples.
using xlsxwriter
import xlsxwriter
# Create a new Excel file and add a worksheet
workbook = xlsxwriter.Workbook('example.xlsx')
worksheet = workbook.add_worksheet()
# Add some data to the worksheet
worksheet.write('A1', 'Language')
worksheet.write('B1', 'Score')
worksheet.write('A2', 'Python')
worksheet.write('B2', 100)
worksheet.write('A3', 'Java')
worksheet.write('B3', 98)
worksheet.write('A4', 'Ruby')
worksheet.write('B4', 88)
# Save the file
workbook.close()
Using the above code, we have saved a table like the one below to an Excel file.
Language | Score
-------- | -----
Python   | 100
Java     | 98
Ruby     | 88
Next, suppose we want to add rows using DataFrame.to_excel:
using to_excel
import pandas as pd

# Load an existing Excel file
existing_file = pd.read_excel('example.xlsx')

# Create a new DataFrame to append
df = pd.DataFrame({
    'Language': ['C++', 'Javascript', 'C#'],
    'Score': [78, 97, 67]
})

# Append the new DataFrame to the existing file
result = pd.concat([existing_file, df])

# Write the combined DataFrame to the existing file
result.to_excel('example.xlsx', index=False)
The reasons for using pandas concat:
1. To append, it is necessary to use pandas.ExcelWriter(), but XlsxWriter does not support append mode in ExcelWriter (the openpyxl engine does; see the sketch below).
2. Although the task could be accomplished with pandas.DataFrame.append(), that method was deprecated and has since been removed (in pandas 2.0), so we use concat instead.
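For reference, if the goal really is to append below the existing rows without rereading the whole file, a sketch using the openpyxl engine could look like this. It assumes a reasonably recent pandas (if_sheet_exists was added in 1.4) and that openpyxl is installed; 'Sheet1' is xlsxwriter's default sheet name, and startrow=4 is chosen to land just under the four rows written above:
import pandas as pd

df = pd.DataFrame({
    'Language': ['C++', 'Javascript', 'C#'],
    'Score': [78, 97, 67]
})

# mode='a' opens the existing workbook with openpyxl; if_sheet_exists='overlay'
# writes into the existing sheet instead of creating a new one
with pd.ExcelWriter('example.xlsx', mode='a', engine='openpyxl',
                    if_sheet_exists='overlay') as writer:
    df.to_excel(writer, sheet_name='Sheet1', startrow=4,
                index=False, header=False)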
The OP is using xlsxwriter in the engine parameter. Per XlsxWriter documentation "XlsxWriter is designed only as a file writer. It cannot read or modify an existing Excel file." (link to XlsxWriter Docs).
Below I've provided a fully reproducible example of how you can go about modifying an existing .xlsx workbook using the openpyxl module (link to Openpyxl Docs).
For demonstration purposes, I'll first create a workbook called test.xlsx using pandas:
import pandas as pd

df = pd.DataFrame({'Col_A': [1, 2, 3, 4],
                   'Col_B': [5, 6, 7, 8],
                   'Col_C': [0, 0, 0, 0],
                   'Col_D': [13, 14, 15, 16]})
df.to_excel('test.xlsx', index=False)
At this point the expected output is a sheet with columns Col_A to Col_D, where Col_C is all zeros.
Using openpyxl, you can load the existing workbook ('test.xlsx') and modify the third column with data from a new dataframe while preserving the other existing data. In this example, for simplicity, I update it with a one-column dataframe, but you could extend this to update or add more data.
from openpyxl import load_workbook
import pandas as pd

df_new = pd.DataFrame({'Col_C': [9, 10, 11, 12]})

wb = load_workbook('test.xlsx')
ws = wb['Sheet1']

# write the new values into column C, skipping the header row
for index, row in df_new.iterrows():
    cell = 'C%d' % (index + 2)  # +2 because Excel rows are 1-based and row 1 is the header
    ws[cell] = row['Col_C']
wb.save('test.xlsx')
The expected output at the end is the same sheet, but with Col_C now containing 9 through 12.
I am trying to take a workbook, loop through specific worksheets, retrieve a dataframe, manipulate it, and essentially paste the dataframe back in the same place without changing any of the other data/sheets in the document. This is what I am trying:
path = '<folder location>.xlsx'
wb = pd.ExcelFile(path)

for sht in ['sheet1', 'sheet2', 'sheet3']:
    df = pd.read_excel(wb, sheet_name=sht, skiprows=607, nrows=11, usecols=range(2, 15))
    # here I manipulate the df, to then save it down in the same place
    df.to_excel(wb, sheet_name=sht, startcol=3, startrow=607)

# Save down file
wb.save(path)
wb.close()
My solution so far just saves down the first sheet with ONLY the data that I manipulated; I lose all the other sheets, as well as the data on that sheet that I wanted to keep, so I end up with just sheet1 containing only the manipulated data.
Would really appreciate any help, thank you
Try using an ExcelWriter instead of an ExcelFile:
path = 'folder location.xlsx'
wb = pd.ExcelFile(path)  # keep reading from the ExcelFile, as in the question

with pd.ExcelWriter(path) as writer:
    for sht in ['sheet1', 'sheet2', 'sheet3']:
        df = pd.read_excel(wb, sheet_name=sht, skiprows=607, nrows=11, usecols=range(2, 15))
        # here I manipulate the df, to then save it down in the same place
        df.to_excel(writer, sheet_name=sht, startcol=3, startrow=607)
I am not sure how it will behave when the file already exists and you only overwrite some of the sheets, though. It might be easier to read everything in first, manipulate the required sheets, and save to a new file, as in the sketch below.
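A rough sketch of that read-everything-first approach (note that only cell values survive a round trip like this, not formatting; the output file name is a placeholder):
import pandas as pd

path = '<folder location>.xlsx'

# sheet_name=None reads every sheet into a dict of DataFrames keyed by sheet name
sheets = pd.read_excel(path, sheet_name=None)

for sht in ['sheet1', 'sheet2', 'sheet3']:
    df = sheets[sht]
    # ... manipulate df here ...
    sheets[sht] = df

# write all sheets, modified and untouched, to a new file
with pd.ExcelWriter('output_copy.xlsx') as writer:
    for name, df in sheets.items():
        df.to_excel(writer, sheet_name=name, index=False)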
I am trying to come up with a script that will read all CSV files larger than 62 bytes, write two of their columns into a separate Excel file, and create a list.
The following is one of the csv files:
FileUUID Table RowInJSON JSONVariable Error Notes SQLExecuted
ff3ca629-2e9c-45f7-85f1-a3dfc637dd81 lng02_rpt_b_calvedets 1 Duplicate entry 'ETH0007805440544' for key 'nosameanimalid' INSERT INTO lng02_rpt_b_calvedets(farmermobile,hh_id,rpt_b_calvedets_rowid,damidyesno,damid,calfdam_id,damtagid,calvdatealv,calvtype,calvtypeoth,easecalv,easecalvoth,birthtyp,sex,siretype,aiprov,othaiprov,strawidyesno,strawid) VALUES ('0974502779','1','1','0','ETH0007805440544','ETH0007805470547',NULL,'2017-09-16','1',NULL,'1',NULL,'1','2','1',NULL,NULL,NULL,NULL,NULL,'0',NULL,NULL,NULL,NULL,NULL,NULL,'0',NULL,'Tv',NULL,NULL,'Et','23',NULL,'5',NULL,NULL,NULL,'0','0')
This is my attempt to solving this problem:
path = 'csvs/'
for infile in glob.glob(os.path.join(path, '*csv')):
    output = infile + '.out'
    with open(infile, 'r') as source:
        readr = csv.reader(source)
        with open(output, "w") as result:
            writr = csv.writer(result)
            for r in readr:
                writr.writerow((r[4], r[2]))
Please help point me in the right direction, or suggest any alternative solution.
pandas does a lot of what you are trying to achieve:
import pandas as pd
# Read a csv file to a dataframe
df = pd.read_csv("<path-to-csv>")
# Filter two columns
columns = ["FileUUID", "Table"]
df = df[columns]
# Combine multiple dataframes
df_combined = pd.concat([df1, df2, df3, ...])
# Output dataframe to excel file
df_combined.to_excel("<output-path>", index=False)
To loop through all CSV files larger than 62 bytes (os.stat().st_size is measured in bytes), you can use glob.glob() and os.stat():
import os
import glob

dataframes = []
for csvfile in glob.glob("<csv-folder-path>/*.csv"):
    if os.stat(csvfile).st_size > 62:
        dataframes.append(pd.read_csv(csvfile))
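Putting those pieces together, a sketch of the whole pipeline could look like this (the folder name comes from the question; the two kept columns and the output file name are just examples):
import glob
import os
import pandas as pd

dataframes = []
for csvfile in glob.glob("csvs/*.csv"):
    if os.stat(csvfile).st_size > 62:            # st_size is in bytes
        df = pd.read_csv(csvfile)
        dataframes.append(df[["FileUUID", "Table"]])  # keep the two columns

df_combined = pd.concat(dataframes, ignore_index=True)
df_combined.to_excel("combined.xlsx", index=False)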
Use the standard csv module. Don't re-invent the wheel.
https://docs.python.org/3/library/csv.html
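In that spirit, a standard-library-only sketch (it mirrors the original attempt, adds the size check, and keeps the same column indexes; the 62-byte threshold comes from the question):
import csv
import glob
import os

path = 'csvs/'
for infile in glob.glob(os.path.join(path, '*.csv')):
    if os.stat(infile).st_size <= 62:        # skip files of 62 bytes or smaller
        continue
    output = infile + '.out'
    with open(infile, 'r', newline='') as source, \
         open(output, 'w', newline='') as result:
        writr = csv.writer(result)
        for r in csv.reader(source):
            writr.writerow((r[4], r[2]))     # same two columns as the original attempt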
I'm new to Python (and programming in general) and am running into a problem when writing data out to sheets in Excel.
I'm reading in an Excel file, performing a sum calculation on specific columns, and then writing the results out to a new workbook. Then at the end, it creates two charts based on the results.
The code works, except every time I run it, it creates new sheets with numbers appended to the end. I really just want it to overwrite the sheet names I provide, instead of creating new ones.
I'm not familiar enough with all the modules to understand all the options that are available. I've researched openpyxl and pandas, but examples similar to what I'm trying to do either aren't easy to find or don't seem to work when I try them.
import pandas as pd
import xlrd
import openpyxl as op
from openpyxl import load_workbook
import matplotlib.pyplot as plt
# declare the input file
input_file = 'TestData.xlsx'
# declare the output_file name to be written to
output_file = 'TestData_Output.xlsx'
book = load_workbook(output_file)
writer = pd.ExcelWriter(output_file, engine='openpyxl')
writer.book = book
# read the source Excel file and calculate sums
excel_file = pd.read_excel(input_file)
num_events_main = excel_file.groupby(['Column1']).sum()
num_events_type = excel_file.groupby(['Column2']).sum()
# create dataframes and write names and sums out to new workbook/sheets
df_1 = pd.DataFrame(num_events_main)
df_2 = pd.DataFrame(num_events_type)
df_1.to_excel(writer, sheet_name = 'TestSheet1')
df_2.to_excel(writer, sheet_name = 'TestSheet2')
# save and close
writer.save()
writer.close()
# dataframe for the first sheet
df = pd.read_excel(output_file, sheet_name='TestSheet1')
values = df[['Column1', 'Column3']]
# dataframe for the second sheet
df = pd.read_excel(output_file, sheet_name='TestSheet2')
values_2 = df[['Column2', 'Column3']]
# create the graphs
events_graph = values.plot.bar(x = 'Column1', y = 'Column3', rot = 60) # rot = rotation
type_graph = values_2.plot.bar(x = 'Column2', y = 'Column3', rot = 60) # rot = rotation
plt.show()
I get the expected results, and the charts work fine. I'd really just like to get the sheets to overwrite with each run.
From the pd.DataFrame.to_excel documentation:
Multiple sheets may be written to by specifying unique sheet_name.
With all data written to the file it is necessary to save the changes.
Note that creating an ExcelWriter object with a file name that already
exists will result in the contents of the existing file being erased.
Try writing to the book like this:
import pandas as pd
df = pd.DataFrame({'col1':[1,2,3],'col2':[4,5,6]})
writer = pd.ExcelWriter('g.xlsx')
df.to_excel(writer, sheet_name = 'first_df')
df.to_excel(writer, sheet_name = 'second_df')
writer.save()
If you inspect the workbook, you will have two worksheets.
Then let's say you wanted to write new data to the same workbook:
writer = pd.ExcelWriter('g.xlsx')
df.to_excel(writer, sheet_name = 'new_df')
writer.save()
If you inspect the workbook now, you will just have one worksheet named new_df
If there are other worksheets in the excel file that you want to keep and just overwrite the desired worksheets, you would need to use load_workbook.
Before you write any data, you could delete the sheets you want to write to with:
std = book.get_sheet_by_name(<sheet_name>)  # in newer openpyxl: std = book[<sheet_name>]
book.remove_sheet(std)                      # in newer openpyxl: book.remove(std)
That will stop the behavior where a number gets appended to the worksheet name once you attempt to write a workbook with a duplicate sheet name.
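In newer pandas (1.4+), there is also a more direct route: open the writer in append mode with the openpyxl engine and let it replace the existing sheets. A sketch, using the file and sheet names from the question and toy dataframes in place of the real sums:
import pandas as pd

# toy stand-ins for the summed dataframes from the question
df_1 = pd.DataFrame({'Column1': ['a', 'b'], 'Column3': [1, 2]})
df_2 = pd.DataFrame({'Column2': ['x', 'y'], 'Column3': [3, 4]})

# mode='a' keeps the workbook's other sheets; if_sheet_exists='replace'
# overwrites TestSheet1/TestSheet2 instead of creating numbered copies
with pd.ExcelWriter('TestData_Output.xlsx', mode='a', engine='openpyxl',
                    if_sheet_exists='replace') as writer:
    df_1.to_excel(writer, sheet_name='TestSheet1')
    df_2.to_excel(writer, sheet_name='TestSheet2')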
[screenshot of the data I am trying to read]
I have tried various ways and still get errors of different types.
import codecs

f = codecs.open('sampledata.xlsx', encoding='utf-8')
for line in f:
    print(repr(line))
The other way I tried is:
f = open(fname, encoding="ascii", errors="surrogateescape")
Still no luck. Any help?
Newer versions of pandas support xlsx.
file_name = 'sampledata.xlsx'  # path to file + file name
sheet = 0  # sheet name or sheet number or list of sheet numbers and names
import pandas as pd
df = pd.read_excel(io=file_name, sheet_name=sheet)
print(df.head(5)) # print first 5 rows of the dataframe
Works great, especially if you're working with many sheets.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_excel.html
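For the many-sheets case, one option worth knowing: passing sheet_name=None makes read_excel return a dict of DataFrames keyed by sheet name. A small sketch, reusing the file name from the question:
import pandas as pd

# sheet_name=None loads every sheet into a dict of DataFrames keyed by sheet name
sheets = pd.read_excel('sampledata.xlsx', sheet_name=None)

for name, df in sheets.items():
    print(name, df.shape)   # sheet name and its dimensions
    print(df.head())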