I have lots of Excel files (xlsx format) and want to read and handle them.
For example, the file names are ex201901, ex201902, ..., ex201912.
Each name follows the exYYYYMM format.
Importing these files into pandas one at a time, the usual way, is easy:
import pandas as pd
df201901 = pd.read_excel(r'C:\users\ex201901.xlsx')
df201902 = pd.read_excel(r'C:\users\ex201902.xlsx')
df201903 = pd.read_excel(r'C:\users\ex201903.xlsx')
df201904 = pd.read_excel(r'C:\users\ex201904.xlsx')
....
df201912 = pd.read_excel(r'C:\users\ex201912.xlsx')
However, this is boring and tedious.
In a SAS program I would use macro syntax for this, but in Python I have no idea how to do it.
Can you show me an easy way to handle these repetitive jobs, something like a SAS MACRO()?
Thanks for reading.
Given that you'll probably want to work with all the data frames at once afterwards, it's a smell to even put them into separate local variables; in general, whenever a task feels repetitive because you're doing the same thing over and over again, that calls for introducing a loop of some sort. Since you're planning to use pandas, chances are you'll be iterating again soon (now that you have your files, you'll probably be performing some transformations on their rows), in which case you'll be best off looking into how control flow with loops works in Python (and indeed in pandas) in general; good tutorials are plentiful.
In your particular case, depending on what kind of processing you are planning on doing afterwards, you'd probably benefit from having something like
df2019 = [pd.read_excel(rf'C:\users\ex2019{str(i).zfill(2)}.xlsx') for i in range(1, 13)]
With that, you can access the individual data frames through e.g. df2019[5] to get the data frame corresponding to June, or you can collapse all of them into a single data frame using df = pd.concat(df2019) if that's what suits your need.
If you have less structure in your file names, glob can come in handy. With that, the above could become something like
import glob
df2019 = list(map(pd.read_excel, glob.glob(r'C:\users\ex2019*.xlsx')))
You can use the os module from the Python standard library. It has a function, listdir, which lists all the file names in a folder. Check the code below:
import os, re
import pandas as pd

listDir = os.listdir(FILE_PATH)
dfList = []
for aFile in listDir:
    # match ex201901.xlsx through ex201912.xlsx
    if re.search(r'ex2019[0-9]{2}\.xlsx', aFile):
        tmpDf = pd.read_excel(os.path.join(FILE_PATH, aFile))
        dfList.append(tmpDf)
outDf = pd.concat(dfList)
I'm currently using the following line to read Excel files
df = pd.read_excel(f"myfile.xlsx")
The problem is the enormous slowdown that occurs when I use data from this Excel file, for example in function calls. I think this happens because I'm not reading the file via a context manager. Is there a way of combining a 'with' statement with the pandas 'read' command so the code runs more smoothly? Sorry that this is vague, I'm just learning about context managers.
Edit: Here is an example of a piece of code that does not run...
import pandas as pd
import numpy as np
def fetch_excel(x):
    df_x = pd.read_excel(f"D00{x}_balance.xlsx")
    return df_x

T = np.zeros(3000)
for i in range(0, 3000):
    T[i] = fetch_excel(1).iloc[i+18, 0]

print(fetch_excel(1).iloc[0,0])
...or rather it takes more than 5 minutes, which seems excessive to me. Either way I can't work with a delay like that. If I comment out the for loop, it does work.
Usually the key reason to use standard context managers for reading in files is the convenience of opening and closing the underlying file descriptor. You can create context managers to do anything you'd like, though; they're just functions.
Unfortunately, they aren't likely to solve the problem of slow loading times when reading in your Excel file.
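Just for illustration, here's a minimal sketch of a hand-rolled context manager built with contextlib; open_excel is a made-up helper name, and it buys you nothing performance-wise:

from contextlib import contextmanager
import pandas as pd

@contextmanager
def open_excel(path):
    # Read the file once on entry and hand the DataFrame to the with-block.
    df = pd.read_excel(path)
    try:
        yield df
    finally:
        pass  # nothing to clean up: read_excel already closed the file

# Usage:
# with open_excel("D001_balance.xlsx") as df:
#     print(df.iloc[0, 0])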
You are accessing the HDD and opening, reading and converting the SAME file D001_balance.xlsx 3000 times, just to access a single piece of data each time (a different row, from 18 to 3017). This is pointless, as the data is all in the DataFrame after one read. Just use:
df_x = pd.read_excel("D001_balance.xlsx")
T = np.zeros(3000)
for i in range(0, 3000):
    T[i] = df_x.iloc[i+18, 0]

print(df_x.iloc[0,0])
I have many csv files and I am trying to pass all the data that they contain into a database. For this reason, I found that I could use the glob library to iterate over all csv files in my folder. Following is the code I used:
import requests as req
import pandas as pd
import glob
import json
endpoint = "testEndpoint"
path = "test/*.csv"
for fname in glob.glob(path):
    print(fname)
    df = pd.read_csv(fname)

    for index, row in df.iterrows():
        # print(row['ID'], row['timestamp'], row['date'], row['time'],
        #       row['vltA'], row['curA'], row['pwrA'], row['rpwrA'], row['frq'])
        print(row['timestamp'])

        testjson = {"data":
                    {"installationid": row['ID'],
                     "active": row['pwrA'],
                     "reactive": row['rpwrA'],
                     "current": row['curA'],
                     "voltage": row['vltA'],
                     "frq": row['frq'],
                     }, "timestamp": row['timestamp']}

        payload = {"payload": [testjson]}
        json_data = json.dumps(payload)

        response = req.post(
            endpoint, data=json_data, headers=headers)
This code seems to work fine in the beginning. However, after some time it starts to become really slow (I noticed this because I print the timestamp as I upload the data) and eventually stops completely. What is the reason for this? Is something I am doing here really inefficient?
I can see 3 possible problems here:
memory: read_csv is fast, but it loads the content of a whole file into memory. If the files are really large, you could exhaust physical memory and start using swap, which has terrible performance.
iterrows: you seem to build a dataframe - a data structure optimized for column-wise access - only to then access it by rows. That is already a bad idea, and iterrows is known to have terrible performance because it builds a Series for each row.
one POST request per row: an HTTP request has its own overhead, and furthermore it means that you add rows to the database one at a time. If this is the only interface your database offers, you may have no other choice, but you should check whether it is possible to prepare a batch of rows and load it as a whole (see the sketch below). That often provides a gain of more than an order of magnitude.
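As a rough sketch of that third point, assuming the endpoint accepts more than one record per payload (the field names are taken from the question; batch_size is an arbitrary choice):

import json
import requests as req

def post_in_batches(df, endpoint, headers, batch_size=500):
    # Build one record per row, then send them in chunks instead of one by one.
    records = [
        {"data": {"installationid": row['ID'],
                  "active": row['pwrA'],
                  "reactive": row['rpwrA'],
                  "current": row['curA'],
                  "voltage": row['vltA'],
                  "frq": row['frq']},
         "timestamp": row['timestamp']}
        for row in df.to_dict('records')  # plain dicts, avoids iterrows
    ]
    for start in range(0, len(records), batch_size):
        payload = {"payload": records[start:start + batch_size]}
        req.post(endpoint, data=json.dumps(payload), headers=headers)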
Without more info I can hardly say more, but IMHO the biggest gain is to be found in how the database is fed, so in point 3. If nothing can be done on that point, or if further performance gains are required, I would try to replace pandas with the csv module, which is row-oriented and has a limited memory footprint because it only processes one line at a time, whatever the file size.
Finally, and if it makes sense for your use case, I would try using one thread to read the csv file and feed a queue, plus a pool of threads to send requests to the database. That should help hide the HTTP overhead. But beware: depending on the endpoint implementation, it may not improve much if the database access is really the limiting factor.
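A minimal sketch of that reader-plus-workers idea, using only the standard library's csv, queue and threading modules; build_payload is a placeholder for whatever JSON structure your endpoint expects:

import csv
import glob
import json
import queue
import threading
import requests as req

row_queue = queue.Queue(maxsize=1000)

def reader(path_pattern):
    # Single producer: stream rows from every csv file into the queue.
    for fname in glob.glob(path_pattern):
        with open(fname, newline='') as f:
            for row in csv.DictReader(f):
                row_queue.put(row)
    row_queue.put(None)  # sentinel: no more work

def worker(endpoint, headers):
    # Consumer: pull rows off the queue and POST them.
    while True:
        row = row_queue.get()
        if row is None:
            row_queue.put(None)  # pass the sentinel on to the other workers
            break
        payload = {"payload": [build_payload(row)]}  # build_payload: your own mapping
        req.post(endpoint, data=json.dumps(payload), headers=headers)

# threading.Thread(target=reader, args=("test/*.csv",)).start()
# for _ in range(4):
#     threading.Thread(target=worker, args=(endpoint, headers)).start()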
I'm currently working with functional MRI data in R but I need to import it to Python for some faster analysis. How can I do that in an efficient way?
I currently have, in R, a list of 198135 dataframes. All of them have 5 variables and 84 observations of connectivity between brain regions. I need the same 198135 dataframes in Python for running some specific analyses there (with the same structure as in R: one object that contains all the dataframes separately).
Initially I tried exporting an .RDS file from R and then importing it into Python using pyreadr, but I'm getting empty objects in every attempt with the pyreadr.read_r() function.
My other method was to save every dataframe of the R list as a separate .csv file and then import them into Python. That way I could get what I wanted (I tried it with only 100 dataframes to test the code). The problem with this method is that it is highly inefficient and slow.
I found several answers to similar problems, but most of them were about merging all the dataframes and loading them into Python as a single .csv, which is not the solution I need.
Is there some more efficient way to do this process, without altering the data structure that I mentioned?
Thanks for your help!
# This is the code in R for an example
a <- as.data.frame(cbind(c(1:3), c(1:3), c(4:6), c(7:9)))
b <- as.data.frame(cbind(c(11:13), c(21:23), c(64:66), c(77:79)))
c <- as.data.frame(cbind(c(31:33), c(61:63), c(34:36), c(57:59)))
d <- as.data.frame(cbind(c(12:14), c(13:15), c(54:56), c(67:69)))
e <- as.data.frame(cbind(c(31:33), c(51:53), c(54:56), c(37:39)))
somelist_of_df <- list(a,b,c,d,e)
saveRDS(somelist_of_df, "somefile.rds")
## This is the function I used from pyreadr in Python
import pyreadr
results = pyreadr.read_r('/somepath/somefile.rds')
Well, thanks for the help in the other answers, but it's not exactly what I was looking for (I wanted to export just one file containing the list of dataframes, and then load that single file into Python, keeping the same structure). To use feather you have to decompose the list into the individual dataframes, pretty much like saving separate .csv files, and then load each one of them into Python (or R). That said, it's much faster than the .csv method.
I'm leaving the code that worked for me in a separate answer; maybe it will be useful for other people, since I used a simple loop for loading the dataframes into Python as a list:
## Exporting a list of dataframes from R to .feather files
library(feather) #required package
a <- as.data.frame(cbind(c(1:3), c(1:3), c(4:6), c(7:9))) #Example DFs
b <- as.data.frame(cbind(c(11:13), c(21:23), c(64:66), c(77:79)))
c <- as.data.frame(cbind(c(31:33), c(61:63), c(34:36), c(57:59)))
d <- as.data.frame(cbind(c(12:14), c(13:15), c(54:56), c(67:69)))
e <- as.data.frame(cbind(c(31:33), c(51:53), c(54:56), c(37:39)))
somelist_of_df <- list(a,b,c,d,e)
## With sapply you loop over the list for creating the .feather files
sapply(seq_along(1:length(somelist_of_df)),
function(i) write_feather(somelist_of_df[[i]],
paste0("/your/directory/","DF",i,".feather")))
(Using just a MacBook Air, the code above took less than 5 seconds to run for a list of 198135 DFs)
## Importing .feather files into a list of DFs in Python
import os
import feather
os.chdir('/your/directory')
directory = '/your/directory'
py_list_of_DFs = []
for filename in os.listdir(directory):
    DF = feather.read_dataframe(filename)
    py_list_of_DFs.append(DF)
(This code did the job for me, although it was a bit slow: it took 12 minutes to handle the 198135 DFs.)
I hope this could be useful for somebody.
This package may be of some interest to you
Pandas also implements a direct way to read a .feather file:
pd.read_feather()
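For example, the loading loop from the answer above could use pandas directly; the directory and the DF*.feather naming are placeholders carried over from that answer:

import glob
import pandas as pd

# Read every .feather file in the directory into a list of DataFrames.
py_list_of_DFs = [pd.read_feather(f) for f in glob.glob('/your/directory/DF*.feather')]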
Pyreadr cannot currently read R lists, therefore you need to save the dataframes individually. You also need to save to an RDA file so that you can host multiple dataframes in one file:
# first construct a list with the names of dataframes you want to save
# instead of the dataframes themselves
somelist_of_df <- list("a", "b", "c", "d", "e")
do.call("save", c(somelist_of_df, file="somefile.rda"))
or any other variant as described here.
Then you can read the file in python:
import pyreadr
results = pyreadr.read_r('/somepath/somefile.rda')
The advantage is that there will be only one file with all dataframes.
I cannot comment on the @crlagos0 answer because of reputation, so I want to add a couple of things:
seq_along(list_of_things) is enough; there is no need to do seq_along(1:length(list_of_things)) in R. Also, I want to point out that the official package to read and write feather files in R is called arrow, and you can find its documentation here. In Python it is pyarrow.
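On the Python side, a minimal sketch with pyarrow, mirroring the loop above (the path and the DF*.feather naming are again placeholders):

import glob
import pyarrow.feather as feather

# pyarrow reads each .feather file back into a pandas DataFrame.
py_list_of_DFs = [feather.read_feather(f) for f in glob.glob('/your/directory/DF*.feather')]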
I have hundreds of thousands of data text files to read. As of now, I'm importing the data from the text files every time I run the code. Perhaps the easy solution would be to simply reformat the data into a file that is faster to read.
Anyway, right now every text file I have looks like:
User: unknown
Title : OE1_CHANNEL1_20181204_103805_01
Sample data
Wavelength OE1_CHANNEL1
185.000000 27.291955
186.000000 27.000877
187.000000 25.792290
188.000000 25.205620
189.000000 24.711882
.
.
.
The code where I read and import the txt files is:
# IMPORT DATA
path = 'T2'
if len(sys.argv) == 2:
    path = sys.argv[1]

files = os.listdir(path)
trans_import = []
for index, item in enumerate(files):
    trans_import.append(np.loadtxt(path + '/' + item, dtype=float, skiprows=4, usecols=(0, 1)))
In the variable explorer, the resulting array looks like:
{ndarray} = [[185. 27.291955]\n [186. 27.000877]\n ... ]
I'm wondering how I could speed up this part. It takes a little too long as of now just to import ~4k text files. There are 841 lines (one spectrum) inside every text file, but the output I get with this code is 841 * 2 = 1682 rows. Obviously, it considers the \n as a line...
It would probably be much faster if you had one large file instead of many small ones; this is generally more efficient. Additionally, you might get a speedup from saving the numpy array directly and loading that .npy file instead of reading in a large text file, though I'm not as sure about that part. As always when time is a concern, I would try both of these options and then measure the performance improvement.
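As a sketch of the .npy idea, assuming all spectra have the same shape and with a made-up cache file name (path and the loadtxt arguments are taken from the question):

import os
import numpy as np

cache = 'spectra.npy'
if os.path.exists(cache):
    data = np.load(cache)  # fast binary load on later runs
else:
    arrays = [np.loadtxt(os.path.join(path, f), dtype=float, skiprows=4, usecols=(0, 1))
              for f in os.listdir(path)]
    data = np.stack(arrays)  # shape: (n_files, 841, 2)
    np.save(cache, data)  # cache for next time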
If for some reason you really can't just have one large text file / .npy file, you could also probably get a speedup by using, e.g., multiprocessing to have multiple workers reading in the files at the same time. Then you can just concatenate the matrices together at the end.
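And a minimal sketch of the multiprocessing idea; the worker count is arbitrary and read_one is a made-up helper that mirrors the question's loadtxt call:

import os
from functools import partial
from multiprocessing import Pool
import numpy as np

def read_one(fname, path):
    # Parse a single spectrum file into an (n_rows, 2) array.
    return np.loadtxt(os.path.join(path, fname), dtype=float, skiprows=4, usecols=(0, 1))

if __name__ == '__main__':
    path = 'T2'
    files = os.listdir(path)
    with Pool(processes=4) as pool:
        trans_import = pool.map(partial(read_one, path=path), files)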
Not your primary question but since it seems to be an issue - you can rewrite the text files to not have those extra newlines, but I don't think np.loadtxt can ignore them. If you're open to using pandas, though, pandas.read_csv with skip_blank_lines=True should handle that for you. To get a numpy.ndarray from a pandas.DataFrame, just do dataframe.values.
Let's use pandas.read_csv (with its C-speed parser) instead of numpy.loadtxt. This is a very helpful post:
http://akuederle.com/stop-using-numpy-loadtxt
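A sketch of what that could look like for one of the files above: whitespace-delimited, 4 header rows to skip, with blank lines skipped by pandas by default; the file name and column names are made up for illustration:

import pandas as pd

# Parse one spectrum file with pandas' C parser instead of np.loadtxt.
df = pd.read_csv('T2/spectrum_001.txt', sep=r'\s+', skiprows=4,
                 header=None, names=['wavelength', 'intensity'])
arr = df.values  # back to a plain numpy array if needed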
I have a folder with NetCDF files from 2006-2100, in ten year blocks (2011-2020, 2021-2030 etc).
I want to create a new NetCDF file which contains all of these files joined together. So far I have read in the files:
ds = xarray.open_dataset('Path/to/file/20062010.nc')
ds1 = xarray.open_dataset('Path/to/file/20112020.nc')
etc.
Then merged these like this:
dsmerged = xarray.merge([ds,ds1])
This works, but is clunky and there must be a simpler way to automate this process, as I will be doing this for many different folders full of files. Is there a more efficient way to do this?
EDIT:
Trying to join these files using glob:
for filename in glob.glob('path/to/file/.*nc'):
    dsmerged = xarray.merge([filename])
Gives the error:
AttributeError: 'str' object has no attribute 'items'
This is reading only the text of the filename, and not the actual file itself, so it can't merge it. How do I open, store as a variable, then merge without doing it bit by bit?
If you are looking for a clean way to get all your datasets merged together, you can use some form of list comprehension and the xarray.merge function to get it done. The following is an illustration:
ds = xarray.merge([xarray.open_dataset(f) for f in glob.glob('path/to/file/*.nc')])
In response to the out-of-memory issues you encountered, that is probably because you have more files than the Python process can handle. The best fix for that is to use the xarray.open_mfdataset function, which uses the dask library under the hood to break the data into smaller chunks to be processed. This is usually more memory efficient and will often allow you to bring your data into Python. With this function, you do not need a for-loop; you can just pass it a glob string in the form "path/to/my/files/*.nc". The following is equivalent to the previously provided solution, but more memory efficient:
ds = xarray.open_mfdataset('path/to/file/*.nc')
I hope this proves useful.