How to convert string into datetime? - python

I'm quite new to Python and I'm encountering a problem.
I have a dataframe where one of the columns is the departure time of flights. These hours are given in the following format: 1100.0, 525.0, 1640.0, etc.
This is a pandas Series which I want to transform into a datetime Series such as: S = [11.00, 5.25, 16.40, ...]
What I have tried already:
Transforming my objects into strings:
S = [str(x) for x in S]
Using datetime.strptime:
S = [datetime.strptime(x, '%H%M.%S') for x in S]
But since they are not all in the same format, it doesn't work.
Using parser from dateutil:
S = [parser.parse(x) for x in S]
I got the error:
'Unknown string format'
Using pandas to_datetime:
S = pd.to_datetime(S)
Doesn't give me the expected result.
Thanks for your answers!

Since it's a column within a dataframe (a Series), keeping it that way while transforming should work just fine.
S = [1100.0, 525.0, 1640.0]
se = pd.Series(S) # Your column
# se:
0 1100.0
1 525.0
2 1640.0
dtype: float64
setime = se.astype(int).astype(str).apply(lambda x: x[:-2] + ":" + x[-2:])
This transforms the floats into correctly formatted strings:
0 11:00
1 5:25
2 16:40
dtype: object
And then you can simply do:
df["your_new_col"] = pd.to_datetime(setime)
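One caveat with the slicing: values shorter than three digits (e.g. 23.0 for 00:23) leave an empty hour part. Zero-padding with zfill before slicing keeps it stable — a minimal sketch along the same lines:

```python
import pandas as pd

se = pd.Series([1100.0, 525.0, 1640.0, 23.0])
# Pad to four digits so "23" becomes "0023" before inserting the colon
padded = se.astype(int).astype(str).str.zfill(4)
setime = padded.str[:2] + ":" + padded.str[2:]
print(setime.tolist())  # ['11:00', '05:25', '16:40', '00:23']
```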

How about this?
(Added an if statement since some entries have 4 digits before the decimal and some have 3; added the 125.0 case to cover this.)
from datetime import datetime

S = [1100.0, 525.0, 1640.0, 125.0]
for x in S:
    # Zero-pad three-digit entries so the hour field has two digits
    if str(x).find(".") == 3:
        x = "0" + str(x)
    print(datetime.strftime(datetime.strptime(str(x), "%H%M.%S"), "%H:%M:%S"))
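A vectorized variant of the same idea, assuming the values always encode hours and minutes as %H%M once zero-padded:

```python
import pandas as pd

se = pd.Series([1100.0, 525.0, 1640.0, 125.0])
# Zero-pad so every value matches the %H%M pattern, then parse in one call
padded = se.astype(int).astype(str).str.zfill(4)
times = pd.to_datetime(padded, format='%H%M').dt.strftime('%H:%M:%S')
print(times.tolist())  # ['11:00:00', '05:25:00', '16:40:00', '01:25:00']
```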

You might give it a go as follows:
# Just initialising a state in line with your requirements
st = ["1100.0", "525.0", "1640.0"]
dfObj = pd.DataFrame(st)
# Casting the string column to float
dfObj_num = dfObj[0].astype(float)
# Getting the hour representation out of the number
df1 = dfObj_num.floordiv(100)
# Getting the minutes
df2 = dfObj_num.mod(100)
# Moving the minutes on the right-hand side of the decimal point
df3 = df2.mul(0.01)
# Combining the two dataframes
df4 = df1.add(df3)
# At this point can cast to other types
Result:
0 11.00
1 5.25
2 16.40
You can run this example to verify the steps for yourself, and you can also wrap it into a function. Vary it slightly if needed to match your precise requirements.
It might be useful to go through this article about pandas Series:
https://www.geeksforgeeks.org/python-pandas-series/
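The steps above, wrapped into a function as suggested (a sketch; the function name is made up):

```python
import pandas as pd

def to_hour_float(s):
    # Hours come from the digits left of the last two; minutes move
    # to the right of the decimal point (1100.0 -> 11.00)
    num = s.astype(float)
    return num.floordiv(100).add(num.mod(100).mul(0.01))

st = pd.Series(["1100.0", "525.0", "1640.0"])
print(to_hour_float(st).tolist())
```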

There must be a better way to do this, but this works for me.
df=pd.DataFrame([1100.0, 525.0, 1640.0], columns=['hour'])
df['hour_dt']=((df['hour']/100).apply(str).str.split('.').str[0]+'.'+
df['hour'].apply((lambda x: '{:.2f}'.format(x/100).split('.')[1])).apply(str))
print(df)
hour hour_dt
0 1100.0 11.00
1 525.0 5.25
2 1640.0 16.40
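A shorter route to the same strings might be divmod on the integer values, which splits hours and minutes in one step (a sketch):

```python
import pandas as pd

df = pd.DataFrame([1100.0, 525.0, 1640.0], columns=['hour'])
# divmod by 100 yields the hour part and the minute part as two Series
h, m = divmod(df['hour'].astype(int), 100)
df['hour_dt'] = h.astype(str) + '.' + m.astype(str).str.zfill(2)
print(df['hour_dt'].tolist())  # ['11.00', '5.25', '16.40']
```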

Related

pandas: convert column with multiple datatypes to int, ignore errors

I have a column with data that needs some massaging. The column may contain strings or floats; some strings are in exponential form. I'd like to format all data in this column as a whole number where possible, expanding any exponential notation to an integer. Here is an example:
df = pd.DataFrame({'code': ['1170E1', '1.17E+04', 11700.0, '24477G', '124601', 247602.0]})
df['code'] = df['code'].astype(int, errors='ignore')
The above code does not seem to do a thing. I know I can convert the exponential notation and decimals simply by using the int function, and I would think the above astype would do the same, but it does not. For example, the following works in plain Python:
int(1170E1), int(1.17E+04), int(11700.0)
> (11700, 11700, 11700)
Any help in solving this would be appreciated. What I'm expecting the output to look like is:
0 '11700'
1 '11700'
2 '11700'
3 '24477G'
4 '124601'
5 '247602'
You may check with pd.to_numeric:
df.code = pd.to_numeric(df.code,errors='coerce').fillna(df.code)
Out[800]:
0 11700.0
1 11700.0
2 11700.0
3 24477G
4 124601.0
5 247602.0
Name: code, dtype: object
Update
df['code'] = df['code'].astype(object)
s = pd.to_numeric(df['code'],errors='coerce')
df.loc[s.notna(),'code'] = s.dropna().astype(int)
df
Out[829]:
code
0 11700
1 11700
2 11700
3 24477G
4 124601
5 247602
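On newer pandas versions, the update step can also be written with the nullable Int64 dtype — a sketch of the same idea as the update above:

```python
import pandas as pd

df = pd.DataFrame({'code': ['1170E1', '1.17E+04', 11700.0, '24477G', '124601', 247602.0]})
# Coerce to numbers (NA where not numeric), then keep the original string
# wherever coercion failed
s = pd.to_numeric(df['code'], errors='coerce').astype('Int64')
df['code'] = s.astype(str).where(s.notna(), df['code'].astype(str))
print(df['code'].tolist())  # ['11700', '11700', '11700', '24477G', '124601', '247602']
```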
BENY's answer should work, although you potentially leave yourself open to coercing and filling values that you don't want to. This will also do the integer conversion you are looking for.
def convert(x):
    try:
        return str(int(float(x)))
    except ValueError:
        return x
df = pd.DataFrame({'code': ['1170E1', '1.17E+04', 11700.0, '24477G', '124601', 247602.0]})
df['code'] = df['code'].apply(convert)
outputs
0 11700
1 11700
2 11700
3 24477G
4 124601
5 247602
where each element is a string.
I will be the first to say, I'm not proud of that triple cast.

How to sum an object column in python

I have a data set represented in a Pandas object, see below:
datetime season holiday workingday weather temp atemp humidity windspeed casual registered count
1/1/2011 0:00 1 0 0 1 9.84 14.395 81 0 3 13 16
1/1/2011 1:00 1 0 0 2 9.02 13.635 80 0 8 32 40
1/1/2011 2:00 1 0 0 3 9.02 13.635 80 0 5 27 32
p_type_1 = pd.read_csv("Bike Share Demand.csv")
p_type_1 = (p_type_1 >>
rename(date = X.datetime))
p_type_1.date.str.split(expand=True,)
p_type_1[['Date','Hour']] = p_type_1.date.str.split(" ",expand=True,)
p_type_1['date'] = pd.to_datetime(p_type_1['date'])
p_hour = p_type_1["Hour"]
p_hour
Now I am trying to take the sum of the Hour column that I created (p_hour):
p_hours = p_type_1["Hour"].sum()
p_hours
and get this error:
TypeError: must be str, not int
so I then tried:
p_hours = p_type_1(str["Hour"].sum())
p_hours
and get this error:
TypeError: 'type' object is not subscriptable
I just want the sum; what gives?
Your dataframe's datatypes are the problem.
Take a closer look at this question:
Convert DataFrame column type from string to datetime, dd/mm/yyyy format
Sample code that should solve your problem; I simplified the CSV:
'''
CSV
datetime,season
1/1/2011 0:00,1
1/1/2011 1:00,1
1/1/2011 2:00,1
'''
import pandas as pd
p_type_1 = pd.read_csv("Bike Share Demand.csv")
p_type_1['datetime'] = p_type_1['datetime'].astype('datetime64[ns]')
p_type_1['hour'] = p_type_1['datetime'].dt.hour  # vectorized; avoids the deprecated iteritems()
print(p_type_1['hour'].sum())
There's quite a bit going on here that's not correct, so I'll try to break down the issues and offer alternatives.
Here:
p_hours = p_type_1(str["Hour"].sum())
p_hours
The issue is that str["Hour"] subscripts the built-in str type itself: your code asks for the item named 'Hour' in the string type, which is why Python reports "'type' object is not subscriptable". That's not what you intended. This crash is unrelated to your core problem and is just a syntax error.
What the problem actually is here, is that your dataframe column has mixed string and integer types together in the same column. The sum operation will concatenate string, or sum numeric types. In a mixed type, it will fail out.
In order to verify that this is the issue however, we would need to see your actual dataframe, as I have a feeling the one you gave may not be the correct one.
As a proof of concept, I created the following example:
import pandas as pd

dta = [str(x) for x in range(20)]
dta.append(12)
frame = pd.DataFrame.from_dict({"data": dta})
print(frame["data"].sum())
>>> TypeError: can only concatenate str (not "int") to str
Note that newer editions of pandas have clearer error messages.

Convert Quarter + Year (Datetime) into String in Pandas

I have some datetime info extracted into columns in Pandas. For example, I got the quarters like this:
df['quarter'] = pd.to_datetime(df['ddate'], format='%Y%m%d', errors='coerce').dt.quarter
I need to take the 'quarter' and 'year' columns and combine them into something like "Q3_2017". I can get this to work fine with a single data point like this:
'Q' + str(df['quarter'].iloc[0]) + '_' + str(df['year'].iloc[0])
But when I try to apply "str()" to a whole column I get bizarre results. For instance:
df['period'] = str(df['quarter'])
Instead of getting the quarter (e.g. "1"), I get something like this:
7222 1\n185579 4\n185580 1\n2129..
What exactly is going on and what's an easy fix?
I found a few previous solutions, but none seem to work specifically with quarters; I can only find how to do this with months or years, for example.
Try:
df['period'] = 'Q' + df['quarter'].astype(str) + '_' + df['year'].astype(str)
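For example, with made-up quarter and year columns:

```python
import pandas as pd

df = pd.DataFrame({'quarter': [3, 1], 'year': [2017, 2018]})
# String concatenation is vectorized once each column is cast to str
df['period'] = 'Q' + df['quarter'].astype(str) + '_' + df['year'].astype(str)
print(df['period'].tolist())  # ['Q3_2017', 'Q1_2018']
```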
With Periods you can access %q for strftime.
import pandas as pd
df = pd.DataFrame({'ddate': pd.date_range('2010-01-01', freq='57D', periods=4)})
df.ddate.dt.to_period('Q').dt.strftime('Q%q_%Y')
0 Q1_2010
1 Q1_2010
2 Q2_2010
3 Q2_2010
Name: ddate, dtype: object
Or just keep the format of to_period (convert to string if you want)
df.ddate.dt.to_period("Q")
0 2010Q1
1 2010Q1
2 2010Q2
3 2010Q2
Name: ddate, dtype: period[Q-DEC]

pandas dataFrame : i'd like to 'uniformize' values

First of all, I couldn't find a proper English way to put my request, so this may have been answered before without my finding it. Please forgive me if there's already an answer for this...
So I have "hours" stored in a pd.DataFrame as follow:
1454
1621
and so on (they are 14:54 and 16:21)
problem:
some of them are 953 (for 09:53).
question:
how could I "autocomplete" these so that they are four digits long, padded with zeroes (I'd like the above to become 0953, and additionally 23 to become 0023)?
I was considering converting the numbers into strings, checking if they have fewer than 4 characters, and adding a 0 at the beginning if so, but surely there must be a more Pythonic way to do this?
Thank you very much for your help and have a nice day!
You'll need to have a string column, and then you can use zfill:
df = pd.DataFrame([1453, 923, 24, 1250], columns=['time'])
df['time'].astype(str).str.zfill(4)
#0 1453
#1 0923
#2 0024
#3 1250
#Name: time, dtype: object
To add 0 at the beginning, the type must be string. If the column names is hours, start with
df.hours = df.hours.astype(str)
Now you can conditionally add a 0 to the beginning of shorter entries:
short = df.hours.str.len() < 4
df.loc[short, 'hours'] = '0' + df.loc[short, 'hours']  # .loc on the frame avoids chained assignment
For example:
df = pd.DataFrame({'hours': [123, 3444, 233]})
df.hours = df.hours.astype(str)
short = df.hours.str.len() < 4
df.loc[short, 'hours'] = '0' + df.loc[short, 'hours']
>>> df
hours
0 0123
1 3444
2 0233
Perhaps this is just me, but I firmly believe all date manipulations should be done through datetime, not strings, so I would recommend something like the following:
df['time'] = pd.to_datetime(df['time'].astype(str).str.zfill(4).apply(lambda x: x[:2] + ':' + x[2:]))
df['time_str'] = df['time'].dt.strftime('%I-%M')
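A runnable sketch of that idea, assuming a numeric time column like the examples above:

```python
import pandas as pd

df = pd.DataFrame({'time': [1454, 1621, 953, 23]})
# Pad to four digits, insert the colon, then parse as real times
s = df['time'].astype(str).str.zfill(4)
df['time'] = pd.to_datetime(s.str[:2] + ':' + s.str[2:], format='%H:%M')
print(df['time'].dt.strftime('%H:%M').tolist())  # ['14:54', '16:21', '09:53', '00:23']
```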

Output of column in Pandas dataframe from float to currency (negative values)

I have the following data frame (consisting of both negative and positive numbers):
df.head()
Out[39]:
Prices
0 -445.0
1 -2058.0
2 -954.0
3 -520.0
4 -730.0
I am trying to change the 'Prices' column to display as currency when I export it to an Excel spreadsheet. The following command I use works well:
df['Prices'] = df['Prices'].map("${:,.0f}".format)
df.head()
Out[42]:
Prices
0 $-445
1 $-2,058
2 $-954
3 $-520
4 $-730
Now my question here is what would I do if I wanted the output to have the negative signs BEFORE the dollar sign. In the output above, the dollar signs are before the negative signs. I am looking for something like this:
-$445
-$2,058
-$954
-$520
-$730
Please note there are also positive numbers as well.
You can use np.where and test whether the values are negative and if so prepend a negative sign in front of the dollar and cast the series to a string using astype:
In [153]:
df['Prices'] = np.where( df['Prices'] < 0, '-$' + df['Prices'].astype(str).str[1:], '$' + df['Prices'].astype(str))
df['Prices']
Out[153]:
0 -$445.0
1 -$2058.0
2 -$954.0
3 -$520.0
4 -$730.0
Name: Prices, dtype: object
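An alternative that keeps the original map-based approach and handles the sign inside the formatting step itself, using abs() (a sketch; works for positive and negative values):

```python
import pandas as pd

df = pd.DataFrame({'Prices': [-445.0, -2058.0, 1500.0]})
# Emit the minus sign first, then format the absolute value as currency
df['Prices'] = df['Prices'].map(lambda x: ('-' if x < 0 else '') + '${:,.0f}'.format(abs(x)))
print(df['Prices'].tolist())  # ['-$445', '-$2,058', '$1,500']
```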
You can use the locale module and the _override_localeconv dict. It's not well documented, but it's a trick I found in another answer that has helped me before.
import pandas as pd
import locale
locale.setlocale( locale.LC_ALL, 'English_United States.1252')
# Made an assumption with that locale. Adjust as appropriate.
locale._override_localeconv = {'n_sign_posn':1}
# Load dataframe into df
df['Prices'] = df['Prices'].map(locale.currency)
This creates a dataframe that looks like this:
Prices
0 -$445.00
1 -$2058.00
2 -$954.00
3 -$520.00
4 -$730.00
