How can I convert a large .xlsx file that contains a lot of Unix timestamps in milliseconds (e.g. 1537892885364) into dates and times in Python, and then save it as a new .xlsx file?
I am new to Python, and I tried lots of ways to achieve this today, but I did not find a solution.
Below is the code I used, but it gives me '[Errno 13] Permission denied'. I tried different approaches, which also caused problems.
from __future__ import absolute_import, division, print_function
import os
import pandas as pd
def main(path, filename, absolute_path_organisation_structure):
    absolute_filepath = os.path.join(path, filename)
    # Relevant list formed with 4th, 5th and 6th columns
    df = pd.read_excel(absolute_filepath, header=None, parse_cols=[4,5,6])
    # Transform column 0 and 2 to datetime
    df[0] = pd.to_datetime(df[0])
    df[2] = pd.to_datetime(df[2])
    print(df)

path = open(r'C:\\Users\\****\\data')
MISfile = 'filename.xlsx'
main(path, MISfile, None)
Hope this helps:
# requires the following packages:
# - pandas
# - xlrd
# - openpyxl
import pandas as pd
# reading in the excel file timestamps.xlsx
# this file contains a column 'epoch' with the unix epoch timestamps
df = pd.read_excel('timestamps.xlsx')
# translate epochs into human readable and write into newly created column
# note, your timestamps are in ms, hence the unit
df['timestamp'] = pd.to_datetime(df['epoch'],unit='ms')
# write to excel file 'new_timestamps.xlsx'
# index=False prevents pandas from adding the index as a new column
df.to_excel('new_timestamps.xlsx', index=False)
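If your sheet has no header row and the epoch columns can only be identified by position, as in the question, a variant along these lines should work (a sketch: the column positions are taken from the question's parse_cols and may need adjusting):
import pandas as pd
# Read only the 5th-7th columns of a header-less sheet; parse_cols is long
# deprecated, usecols is the current argument
df = pd.read_excel('filename.xlsx', header=None, usecols=[4, 5, 6])
df.columns = range(len(df.columns))  # relabel to 0, 1, 2 as in the question's code
# The question's columns 0 and 2 hold epoch values in milliseconds
df[0] = pd.to_datetime(df[0], unit='ms')
df[2] = pd.to_datetime(df[2], unit='ms')
df.to_excel('filename_converted.xlsx', index=False)
As an aside, the '[Errno 13] Permission denied' in the question most likely comes from calling open() on a directory: path should be a plain string like r'C:\Users\****\data', not the result of open().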
I have 2 folders with 365 CSV files each. However, I only need certain columns from these CSV files.
I have already solved this with pandas' usecols, but only for one file; I want to automate the whole thing.
The file names contain an incrementing date variable:
f"{date}_sds011_sensor_3659.csv"
I don't know what's smartest, though: loop through everything first and insert it into the database at the end, or insert after each loop iteration?
I've been stuck on this problem for 2 weeks and have tried all possible variants, but I can't find a solution that covers all requirements (automated import + only selected columns + skipping the first line of each file).
Folder names: dht22 and sds011 (the names of the 2 sensors)
Format of the CSV file names: 2020-09-25_sds011_sensor_3659.csv
Start date: 25.09.2020
End date: 24.09.2021
400-700 rows in each file
The sds011 sensor has 12 columns and I need 3 (timestamp, P1, P2 (particulate matter sizes))
The dht22 has 8 columns and I need 3 (timestamp, temperature and humidity)
A possible solution is the following:
import glob
import pandas as pd

# Collect every CSV file in the folder
all_files = glob.glob('folder_name/*.csv')

all_data = []
for file in all_files:
    # header=0 reads the first line as the header, so it is skipped as data;
    # usecols keeps only the named columns
    df = pd.read_csv(file, index_col=None, header=0, usecols=['col1', 'col2'])
    all_data.append(df)

result = pd.concat(all_data, axis=0, ignore_index=True)
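To cover both sensor folders with their different column selections, something like this sketch might work (assumptions: the column names below match the CSV headers, and the files are semicolon-separated as in the common luftdaten sensor archives; adjust sep if yours differ):
import glob
import pandas as pd

# Folder name -> columns to keep (names assumed from the question)
wanted = {
    'sds011': ['timestamp', 'P1', 'P2'],
    'dht22': ['timestamp', 'temperature', 'humidity'],
}

frames = {}
for folder, columns in wanted.items():
    parts = []
    for file in sorted(glob.glob(f'{folder}/*.csv')):
        # header=0 consumes the first line as the header, so it is skipped as data
        parts.append(pd.read_csv(file, sep=';', header=0, usecols=columns))
    frames[folder] = pd.concat(parts, ignore_index=True)
Each combined frame can then be written to the database in one go, for example with DataFrame.to_sql, which is usually simpler and faster than inserting inside the loop.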
I have a folder of TDMS files (they could also be Excel).
These are stored in 5 MB packages, but all contain the same data structure.
Unfortunately there is no absolute time in the rows, and the timestamp is stored somewhat cryptically in the column "TimeStamp" in the following format:
"Tues. 17.11.2020 19:20:15"
But now I would like to load each file and plot them one after the other in the same graph.
For one file this is no problem, because I simply use the index of the file for the x-axis, but if I load several files, the index in each file is the same and the data overlap.
Does anyone have an idea how I can write all the data into one DataFrame with a continuous timestamp, so that the data can be plotted one after the other, or so that I can specify a time period in which I would like to see the data?
My first approach is below. If someone could upload an example with a CSV file (pandas.read_csv) instead of the npTDMS module, that would be just as helpful!
https://nptdms.readthedocs.io/en/stable/
import pandas as pd
import matplotlib.pyplot as plt
from nptdms import TdmsFile
tdms_file = TdmsFile.read("Datei1.tdms")
tdms_groups = tdms_file.groups()
tdms_Variables_1 = tdms_file.group_channels(tdms_groups[0])
MessageData_channel_1 = tdms_file.object('Data', 'Position')
MessageData_data_1 = MessageData_channel_1.data
#MessageData_channel_2 = tdms_file.object('Data', 'Timestamp')
#MessageData_data_2 = MessageData_channel_2.data
df_y = pd.DataFrame(data=MessageData_data_1).append(df_y)
plt.plot(df_y)
Here is an example with CSV. It will first create a bunch of files in the ./data/ folder that should look similar to yours. Then it will read those files back (finding them with glob), use pandas.concat to combine the dataframes into one, and parse the date.
import glob
import os
import random
import pandas
import matplotlib.pyplot as plt

# Create a bunch of test files that look like your data (NOTE: my files aren't 5MB, but 100 rows)
os.makedirs("./data", exist_ok=True)  # make sure the target folder exists
df = pandas.DataFrame([{"value": random.randint(50, 100)} for _ in range(1000)])
df["timestamp"] = pandas.date_range(
    start="17/11/2020", periods=1000, freq="H"
).strftime(r"%a. %d.%m.%Y %H:%M:%S")
chunks = [df.iloc[i : i + 100] for i in range(0, len(df) - 100 + 1, 100)]
for index, chunk in enumerate(chunks):
    chunk[["timestamp", "value"]].to_csv(f"./data/data_{index}.csv", index=False)
# ===============
# Read the files back into a dataframe
dataframe_list = []
for file in glob.glob("./data/data_*.csv"):
    df = pandas.read_csv(file)
    dataframe_list.append(df)

# Combine all individual dataframes into one
df = pandas.concat(dataframe_list)

# Parse the timestamp column into real datetimes
df["timestamp"] = pandas.to_datetime(df["timestamp"], format=r"%a. %d.%m.%Y %H:%M:%S")

# Use the timestamp as the index for the dataframe, and make sure it's sorted
df = df.set_index("timestamp").sort_index()

# Create the plot
plt.plot(df)
plt.show()
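As a side note, the parsing could also happen at read time: in recent pandas versions (2.0 or newer, an assumption about your environment) read_csv accepts a date_format argument, so the to_datetime step becomes
df = pandas.read_csv(file, parse_dates=["timestamp"], date_format=r"%a. %d.%m.%Y %H:%M:%S")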
@Gijs Wobben
Thank you so much! It works perfectly well and it will save me a lot of work!
As a mechanical engineer you don't write code like this very often, so I'm glad when people from other disciplines can help out.
Here is the basic structure of how I did it directly with the TDMS files, because I read afterwards that the npTDMS module offers the possibility to read the data directly as a DataFrame, which I didn't know before:
import glob
import pandas as pd
import matplotlib.pyplot as plt
from nptdms import TdmsFile

# Read the files into a dataframe
dataframe_list = []
for file in glob.glob("*.tdms"):
    tdms_file = TdmsFile.read(file)
    df = tdms_file['Sheet1'].as_dataframe()
    dataframe_list.append(df)

df_all = pd.concat(dataframe_list)

# Parse the timestamp column into real datetimes
df_all["Timestamp"] = pd.to_datetime(df_all["Timestamp"], format=r"%a. %d.%m.%Y %H:%M:%S")

# Use the timestamp as the index for the dataframe, and make sure it's sorted
df_all = df_all.set_index("Timestamp").sort_index()

# Create the plot
plt.plot(df_all)
plt.show()
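With the timestamps as a sorted index, picking out a time period (as asked above) is just a .loc slice. A small sketch reusing df_all from the code above, with made-up times around the question's example:
# Works because the index is a sorted DatetimeIndex
window = df_all.loc["2020-11-17 19:00":"2020-11-17 21:00"]
plt.plot(window)
plt.show()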
I have the file below (file1.xlsx) as input. In total it has 32 columns and almost 2500 rows; just as an example I am mentioning 5 columns in the screenshot.
I want to edit the same file with Python and get the output as (file1.xlsx).
It should be noted I am adding one column named 'short', whose data is a substring of the data in the Name (A) column of the same Excel file, up to the first dot.
Request you to please help.
Regards,
Kawaljeet
Here is what you need...
import pandas as pd
file_name = "file1.xlsx"
df = pd.read_excel(file_name) #Read Excel file as a DataFrame
df['short'] = df['Name'].str.split(".").str[0]  # .str[0] keeps the first piece of each row's split
df.to_excel("file1.xlsx", index=False)
Hello guys, I solved the problem with the code below:
import pandas as pd
import os

def add_column():
    file_name = "cmdb_inuse.xlsx"
    os.chmod(file_name, 0o777)  # make sure the file is writable
    df = pd.read_excel(file_name)  # Read Excel file as a DataFrame
    df['short'] = [x.split(".")[0] for x in df['Name']]
    df.to_excel("cmdb_inuse.xlsx", index=False)
My code:
import numpy as np
import pandas as pd
import time
tic = time.time()
I read a long file with the headers [meter] [daycode] [meter reading in kWh], a time series of over 6,000 meters.
consum = pd.read_csv("data/File1.txt", delim_whitespace=True, encoding = "utf-8", names =['meter', 'daycode', 'val'], engine='python')
consum.set_index('meter', inplace=True)
Because I actually have a total of 6 files of this humongous size, I want to filter out the meters with insufficient information: those whose [meter] values fall under code 3 by category. I can collect the category information from another file. The following is where I extract it.
id_total = pd.read_csv("data/meter_id_code.csv", header = 0, encoding="cp1252")
#print(len(id_total.index))
id_total.set_index('Code', inplace=True)
id_other = id_total.loc[3].copy()
print id_other
And this is where I write to csv to check whether the last line is correctly performed:
id_other.to_csv('data/id_other.csv', sep='\t', encoding='utf-8')
print consum[~consum.index.isin(id_other)]
Output: (of print id_other)
Problem:
I get the following warning. It supposedly doesn't stop the code from working, but mine is affected. I checked the correct directory (earlier I had confused my remote connection to the GPU server with my own hardware) and the csv file was created. It turns out, though, that the meter IDs in the file are not filtered.
How can I fix the last line?
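One hedged guess at a fix: id_other is produced by .loc on a DataFrame, so it is likely still a DataFrame, and iterating a DataFrame (which is what isin does with one) yields its column names rather than the meter IDs. Passing a flat sequence of IDs should behave as intended; 'ID' below is a hypothetical column name, so substitute the real one from meter_id_code.csv:
meter_ids = id_other['ID'].tolist()  # 'ID' is an assumed column name
filtered = consum[~consum.index.isin(meter_ids)]
print(filtered)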
I have a CSV file with data like
data,data,10.00
data,data,11.00
data,data,12.30
I need to update this as
data,data,10.00
data,data,11.00,1.00(11.00-10.00)
data,data,12.30,1.30(12.30-11.00)
Could you help me update the CSV file using Python?
You can use pandas and numpy. pandas reads/writes the csv and numpy does the calculations:
import pandas as pd
import numpy as np
data = pd.read_csv('test.csv', header=None)
col_data = data[2].values     # third column holds the readings
diff = np.diff(col_data)      # row-to-row differences
diff = np.insert(diff, 0, 0)  # prepend 0 so the first row gets a value
data['diff'] = diff
# write data to file
data.to_csv('test1.csv', header=False, index=False)
When you open test1.csv you will find the correct results as described above, with the addition of a zero next to the first data point.
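For what it's worth, the same calculation can be done in pandas alone; a minimal sketch under the same test.csv assumption as above:
import pandas as pd

data = pd.read_csv('test.csv', header=None)
# Series.diff() gives row-to-row differences; the first row becomes NaN,
# and fillna(0) turns it into the zero mentioned above
data['diff'] = data[2].diff().fillna(0)
data.to_csv('test1.csv', header=False, index=False)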
For more info see the following docs:
http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
http://pandas.pydata.org/pandas-docs/version/0.18.1/generated/pandas.DataFrame.to_csv.html