Here is where I need your help.
I have multiple xlsx files and I am looking for the same column information inside each one. Until now everything worked fine, but some *.xlsx files do not contain the data, so my Python script just stops instead of skipping them and carrying on with the others.
import glob
import pandas as pd
# Setup variables
xlsx_input = 'D:\\script\\bdd\\xlsx\\*.xlsx'
csv_output = 'D:\\script\\bdd\\csv\\'
# Save all file matches: xlsx_files
xlsx_files = glob.glob(xlsx_input, recursive=True)
# Create an empty list: frames
frames = []
# Iterate over xlsx_files
for file in xlsx_files:
    # Read xlsx into a DataFrame
    df = pd.read_excel(file, usecols=['ref_01', 'ref_02', 'ref_03'])
    # Append df to frames
    frames.append(df)
# Concatenate frames into dataframe
excel_output = pd.concat(frames)
# Write CSV file
excel_output.to_csv(csv_output + "bdd_export.csv", encoding='utf-8-sig', index=None)
Any help would be greatly appreciated.
Cheers !
OK, I found how to do it. Just by changing the read call to:
df = pd.read_excel(file, usecols=lambda c: c in ['ref_01', 'ref_02', 'ref_03'])
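With a callable, usecols keeps only the listed columns that actually exist in each file, so workbooks missing one of them no longer stop the script. For context, the full loop might then look like this (an untested sketch using the paths from the original script):
import glob
import pandas as pd

wanted = ['ref_01', 'ref_02', 'ref_03']
frames = []
for file in glob.glob('D:\\script\\bdd\\xlsx\\*.xlsx'):
    # keep only the wanted columns that are present instead of raising when one is missing
    df = pd.read_excel(file, usecols=lambda c: c in wanted)
    if not df.empty:
        frames.append(df)

pd.concat(frames).to_csv('D:\\script\\bdd\\csv\\bdd_export.csv', encoding='utf-8-sig', index=False)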
I am trying to come up with a script that reads all CSV files larger than 62 bytes, prints two of their columns into a separate Excel file, and creates a list.
The following is one of the csv files:
FileUUID Table RowInJSON JSONVariable Error Notes SQLExecuted
ff3ca629-2e9c-45f7-85f1-a3dfc637dd81 lng02_rpt_b_calvedets 1 Duplicate entry 'ETH0007805440544' for key 'nosameanimalid' INSERT INTO lng02_rpt_b_calvedets(farmermobile,hh_id,rpt_b_calvedets_rowid,damidyesno,damid,calfdam_id,damtagid,calvdatealv,calvtype,calvtypeoth,easecalv,easecalvoth,birthtyp,sex,siretype,aiprov,othaiprov,strawidyesno,strawid) VALUES ('0974502779','1','1','0','ETH0007805440544','ETH0007805470547',NULL,'2017-09-16','1',NULL,'1',NULL,'1','2','1',NULL,NULL,NULL,NULL,NULL,'0',NULL,NULL,NULL,NULL,NULL,NULL,'0',NULL,'Tv',NULL,NULL,'Et','23',NULL,'5',NULL,NULL,NULL,'0','0')
This is my attempt at solving the problem:
import csv
import glob
import os

path = 'csvs/'
for infile in glob.glob(os.path.join(path, '*.csv')):
    output = infile + '.out'
    with open(infile, 'r') as source:
        readr = csv.reader(source)
        with open(output, "w") as result:
            writr = csv.writer(result)
            for r in readr:
                writr.writerow((r[4], r[2]))
Please point me in the right direction, or suggest an alternative solution.
pandas does a lot of what you are trying to achieve:
import pandas as pd
# Read a csv file to a dataframe
df = pd.read_csv("<path-to-csv>")
# Filter two columns
columns = ["FileUUID", "Table"]
df = df[columns]
# Combine multiple dataframes
df_combined = pd.concat([df1, df2, df3, ...])
# Output dataframe to excel file
df_combined.to_excel("<output-path>", index=False)
To loop through all CSV files larger than 62 bytes, you can use glob.glob() and os.stat() (st_size is reported in bytes):
import os
import glob

dataframes = []
for csvfile in glob.glob("<csv-folder-path>/*.csv"):
    if os.stat(csvfile).st_size > 62:
        dataframes.append(pd.read_csv(csvfile))
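Putting those pieces together, a rough end-to-end sketch might look like this (untested; it assumes every CSV parses with the default separator and contains the FileUUID and Table columns, and the output filename is made up for illustration):
import glob
import os
import pandas as pd

dataframes = []
for csvfile in glob.glob('csvs/*.csv'):
    if os.stat(csvfile).st_size > 62:  # st_size is in bytes
        df = pd.read_csv(csvfile)
        dataframes.append(df[['FileUUID', 'Table']])  # keep just the two columns

combined = pd.concat(dataframes, ignore_index=True)
combined.to_excel('filtered_output.xlsx', index=False)  # hypothetical output path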
Use the standard csv module. Don't re-invent the wheel.
https://docs.python.org/3/library/csv.html
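If you prefer to stay with the standard library, a bare-bones sketch (untested) that mirrors the original attempt could look like this; it assumes the files are comma-separated and that the two wanted columns sit at fixed positions 4 and 2:
import csv
import glob
import os

path = 'csvs/'
for infile in glob.glob(os.path.join(path, '*.csv')):
    if os.stat(infile).st_size <= 62:  # skip files of 62 bytes or less
        continue
    output = infile + '.out'
    with open(infile, 'r', newline='') as source, open(output, 'w', newline='') as result:
        writr = csv.writer(result)
        for r in csv.reader(source):
            writr.writerow((r[4], r[2]))  # columns 4 and 2, as in the original attempt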
I'm having a hard time loading multiple line-delimited JSON files into a single pandas DataFrame. This is the code I'm using:
import os, json
import pandas as pd
import numpy as np
import glob
pd.set_option('display.max_columns', None)
temp = pd.DataFrame()
path_to_json = '/Users/XXX/Desktop/Facebook Data/*'
json_pattern = os.path.join(path_to_json,'*.json')
file_list = glob.glob(json_pattern)
for file in file_list:
    data = pd.read_json(file, lines=True)
    temp.append(data, ignore_index = True)
It looks like all the files are loading when I look through file_list, but I cannot figure out how to get each file into a DataFrame. There are about 50 files with a couple of lines in each file.
Change the last line to:
temp = temp.append(data, ignore_index = True)
We have to do this because the append does not happen in place: DataFrame.append does not modify the data frame it is called on, it returns a new data frame containing the result of the append.
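A tiny illustration of the difference (a sketch; note that DataFrame.append was deprecated and later removed in newer pandas versions, so this only applies to older releases):
import pandas as pd

temp = pd.DataFrame({'a': [1]})
data = pd.DataFrame({'a': [2]})

temp.append(data, ignore_index=True)         # returned frame is thrown away; temp still has one row
temp = temp.append(data, ignore_index=True)  # returned frame is kept; temp now has two rows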
Edit:
Since writing this answer I have learned that you should never use DataFrame.append inside a loop because it leads to quadratic copying (see this answer).
What you should do instead is first create a list of data frames and then use pd.concat to concatenate them all in a single operation. Like this:
dfs = []  # an empty list to store the data frames
for file in file_list:
    data = pd.read_json(file, lines=True)  # read data frame from json file
    dfs.append(data)  # append the data frame to the list
temp = pd.concat(dfs, ignore_index=True)  # concatenate all the data frames in the list
This alternative should be considerably faster.
If you need to flatten the JSON, Juan Estevez’s approach won’t work as is. Here is an alternative:
import json
import pandas as pd

dfs = []
for file in file_list:
    with open(file) as f:
        json_data = pd.json_normalize(json.loads(f.read()))
    dfs.append(json_data)
df = pd.concat(dfs, sort=False)  # or sort=True depending on your needs
Or if your JSON files are line-delimited (not tested):
import json
import pandas as pd

dfs = []
for file in file_list:
    with open(file) as f:
        for line in f.readlines():
            json_data = pd.json_normalize(json.loads(line))
            dfs.append(json_data)
df = pd.concat(dfs, sort=False)  # or sort=True depending on your needs
from pathlib import Path
import pandas as pd

# read every JSON file in the folder as a Series and stack them into one DataFrame
paths = Path("/home/data").glob("*.json")
df = pd.DataFrame([pd.read_json(p, typ="series") for p in paths])
I combined Juan Estevez's answer with glob. Thanks a lot.
import pandas as pd
import glob
def readFiles(path):
    files = glob.glob(path)
    dfs = []  # an empty list to store the data frames
    for file in files:
        data = pd.read_json(file, lines=True)  # read data frame from json file
        dfs.append(data)  # append the data frame to the list
    df = pd.concat(dfs, ignore_index=True)  # concatenate all the data frames in the list
    return df
Maybe you should state whether the JSON files were created with pandas' pd.to_json() or in another way.
I used data that was not created with pd.to_json(), and I think pd.read_json() is not usable in my case. Instead, I programmed a customized for-each loop to write everything into DataFrames.
I have 300 raw data files (.xlsm) and want to extract the useful data and turn them into CSV files as input for a subsequent neural network. For now I am trying this out with 10 files as an example. I have successfully extracted the information I need, but I don't know how to convert them to CSV files with the same names. For a single file I can use df.to_csv, but what about all the files? With a for loop?
import glob
import pandas as pd
import numpy as np
import csv
import os
excel_files = glob.glob('../../Versuch/Versuche/RohBeispiel/*.xlsm')
directory = '/Beispiel'
for files in excel_files:
    data = pd.read_excel(files)
    # getting the list of rows and columns you need
    list_of_dfs = pd.DataFrame(data.values[0:600, 12:26],
                               columns=data.columns[12:26]).drop(['Sauberkeit', 'Temparatur'], axis=1)
    # converting pandas dataframe columns to numeric: string into float
    cols = ['KonzA', 'KonzB', 'KonzC', 'TempA',
            'TempB', 'TempC', 'Modul1', 'Modul2',
            'Modul3', 'Modul4', 'Modul5', 'Modul6']
    list_of_dfs[cols] = list_of_dfs[cols].apply(pd.to_numeric, errors='coerce', axis=1)
    # Filling down from a column through missing data
    for fec in list_of_dfs[cols]:
        list_of_dfs[fec].fillna(method='ffill', inplace=True)
    csvfilename = files.split('/')[-1].split('.')[0] + '.csv'
    newtempfile = os.path.join(directory, csvfilename)
    print(newtempfile)
    print(list_of_dfs.head(2))
Problem solved.
folder_name = 'Beispiel'
csvfilename = files.split('/')[-1].split('.')[0] + '.csv'  # change into csv files
newtempfile = os.path.join(folder_name, csvfilename)
# Verify if directory exists
if not os.path.exists(folder_name):
    os.makedirs(folder_name)  # If not, create it
print(newtempfile)
list_of_dfs.to_csv(newtempfile, index=False)
The easiest way to do this is to get the filename from the Excel file and then use the os.path.join() method to save it to the directory you want.
directory = "C:/Test"
for files in excel_files:
csvfilename = (os.path.basename(file)[-1]).replace('.xlsm','.csv')
newtempfile=os.path.join(directory,csvfilename)
Since you already have the Excel DataFrame you want to push into the CSV file, just add the above code to the loop and write the output to 'newtempfile', and that should do it.
df.to_csv(newtempfile)
Hope this helps. :)
Updated Code:
cols = ['KonzA', 'KonzB', 'KonzC', 'TempA',
        'TempB', 'TempC', 'Modul1', 'Modul2',
        'Modul3', 'Modul4', 'Modul5', 'Modul6']
directory = "C:/Test"  # as defined above
excel_files = glob.glob('../../Versuch/Versuche/RohBeispiel/*.xlsm')
for file in excel_files:
    data = pd.read_excel(file, usecols=cols)  # import only the columns you need into the dataframe
    csvfilename = os.path.basename(file).replace('.xlsm', '.csv')
    newtempfile = os.path.join(directory, csvfilename)
    # converting pandas dataframe columns to numeric: string into float
    data[cols] = data[cols].apply(pd.to_numeric, errors='coerce', axis=1)
    # fill missing values downwards (assign back; inplace on a slice would not update data)
    data[cols] = data[cols].fillna(method='ffill')
    data.to_csv(newtempfile, index=False)
I have a file called 'workbooks_to_process.xlsx' with a column that contains the following excel files' paths:
**files_paths_2_process** (column header)
c:/work/file01.xlsx
c:/work/file02.xlsx
c:/work/file03.xlsx
………………….
c:/work/file0m.xlsx
On the other hand, in Python pandas:
df_0 = pd.read_excel('workbooks_to_process.xlsx') # No issue
list_of_paths = df_0['files_paths_2_process'].tolist() # No issue
Following is what I want to do (in an iterative process)
itr = list_of_paths[3] # or [0], [1], [n] etc
df_1 = pd.read_excel(itr)
Is there any method to accomplish the above?
Thanks!
For iterating through all files in a folder and all sheets in those files, try this:
import pandas as pd
import os

file_list = [os.path.join(r, file) for r, d, f in os.walk("C:\\Users\\ref_folder\\") for file in f]

for file in file_list:
    f = pd.ExcelFile(file)
    sheet_names = f.sheet_names
    for i in sheet_names:
        dataframe = pd.read_excel(f, i)
This gives you a dataframe for every sheet, and it works for workbooks with only one sheet too.
You can match each filename against the filenames in your Excel column and, if it matches, read that file into a dataframe, as sketched below. I feel this is the most generalized way to iterate through files in a folder and read them as dataframes.
Hope that helps.
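If you only want the workbooks listed in 'workbooks_to_process.xlsx', the matching described above might look roughly like this (an untested sketch; list_of_paths comes from the question and file_list from the walk above):
import os
import pandas as pd

# basenames of the workbooks named in the control file
wanted = {os.path.basename(p) for p in list_of_paths}

dataframes = []
for file in file_list:
    if os.path.basename(file) in wanted:  # only read workbooks that are listed
        f = pd.ExcelFile(file)
        for sheet in f.sheet_names:
            dataframes.append(f.parse(sheet))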
Try this
for itr in range(len(list_of_paths)):
    df_1 = pd.read_excel(list_of_paths[itr])
    ...
    ...
Good morning.
I'm starting with Python and I have a problem.
I need to find all .xls files (they all have the same header) and merge them into a single DataFrame, so the header row of each file after the first should be ignored.
The current code I'm using is this:
os.chdir("file folder path")
fileLista = glob.glob('*.xls')
df = list()
for arquivo in fileLista:
    df = df.append(pd.read_excel(arquivo))
Company = pd.concat(df)
Company.columns = Company.columns.str.strip()
I am using glob to return all files with the .xls extension,
df.append to merge all the returned files into a DataFrame,
pd.concat to form a single frame,
and str.strip to remove the spaces in the column headers.
When I run the code it returns this error:
'NoneType' object is not iterable
Can anyone help me with this mistake?
What about this instead? (The error comes from df = df.append(...): list.append returns None, so df ends up as None instead of a list of frames.)
fileLista = glob.glob('*.xls')
Company = pd.DataFrame()
for arquivo in fileLista:
    df = pd.read_excel(arquivo)
    Company = pd.concat([Company, df])
Company.columns = Company.columns.str.strip()
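As noted earlier in the JSON discussion, concatenating inside the loop copies data over and over; a variant that collects the frames first and concatenates once might look like this (a sketch of the same logic):
import glob
import pandas as pd

fileLista = glob.glob('*.xls')
frames = [pd.read_excel(arquivo) for arquivo in fileLista]

Company = pd.concat(frames, ignore_index=True)
Company.columns = Company.columns.str.strip()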
This should do what you want.
import pandas as pd
import numpy as np
import glob
glob.glob("C:/your_path_here/*.xlsx")
all_data = pd.DataFrame()
for f in glob.glob("C:/your_path_here/*.xlsx"):
    df = pd.read_excel(f)
    all_data = all_data.append(df, ignore_index=True)
print(all_data)
Here is another option to consider.
import pandas as pd
# filenames
excel_names = ["C:/your_path_here/Book1.xlsx", "C:/your_path_here/Book2.xlsx", "C:/your_path_here/Book3.xlsx"]
# read them in
excels = [pd.ExcelFile(name) for name in excel_names]
# turn them into dataframes
frames = [x.parse(x.sheet_names[0], header=None,index_col=None) for x in excels]
# delete the first row for all frames except the first
# i.e. remove the header row -- assumes it's the first
frames[1:] = [df[1:] for df in frames[1:]]
# concatenate them..
combined = pd.concat(frames)
# write it out
combined.to_excel("c.xlsx", header=False, index=False)
# Results go to the default directory if not assigned somewhere else.
# C:\Users\Excel\.spyder-py3