I've been working with data imported from a CSV. Pandas changed some columns to float, so now the numbers in these columns get displayed as floating point numbers! However, I need them to be displayed as integers, or at least without the decimal part. Is there a way to convert them to integers, or to simply not display the decimals?
To modify the float output, do this:
import pandas as pd

df = pd.DataFrame(range(5), columns=['a'])
df.a = df.a.astype(float)
df
Out[33]:
a
0 0.0000000
1 1.0000000
2 2.0000000
3 3.0000000
4 4.0000000
pd.options.display.float_format = '{:,.0f}'.format
df
Out[35]:
a
0 0
1 1
2 2
3 3
4 4
Use the pandas.DataFrame.astype(<type>) function to manipulate column dtypes.
>>> import numpy as np
>>> df = pd.DataFrame(np.random.rand(3,4), columns=list("ABCD"))
>>> df
A B C D
0 0.542447 0.949988 0.669239 0.879887
1 0.068542 0.757775 0.891903 0.384542
2 0.021274 0.587504 0.180426 0.574300
>>> df[list("ABCD")] = df[list("ABCD")].astype(int)
>>> df
A B C D
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
EDIT:
To handle missing values:
>>> df
A B C D
0 0.475103 0.355453 0.66 0.869336
1 0.260395 0.200287 NaN 0.617024
2 0.517692 0.735613 0.18 0.657106
>>> df[list("ABCD")] = df[list("ABCD")].fillna(0.0).astype(int)
>>> df
A B C D
0 0 0 0 0
1 0 0 0 0
2 0 0 0 0
Considering the following data frame:
>>> df = pd.DataFrame(10*np.random.rand(3, 4), columns=list("ABCD"))
>>> print(df)
... A B C D
... 0 8.362940 0.354027 1.916283 6.226750
... 1 1.988232 9.003545 9.277504 8.522808
... 2 1.141432 4.935593 2.700118 7.739108
Using a list of column names, change the type for multiple columns with applymap():
>>> cols = ['A', 'B']
>>> df[cols] = df[cols].applymap(np.int64)
>>> print(df)
... A B C D
... 0 8 0 1.916283 6.226750
... 1 1 9 9.277504 8.522808
... 2 1 4 2.700118 7.739108
Or for a single column with apply():
>>> df['C'] = df['C'].apply(np.int64)
>>> print(df)
... A B C D
... 0 8 0 1 6.226750
... 1 1 9 9 8.522808
... 2 1 4 2 7.739108
To convert all float columns to int
>>> df = pd.DataFrame(np.random.rand(5, 4) * 10, columns=list('PQRS'))
>>> print(df)
... P Q R S
... 0 4.395994 0.844292 8.543430 1.933934
... 1 0.311974 9.519054 6.171577 3.859993
... 2 2.056797 0.836150 5.270513 3.224497
... 3 3.919300 8.562298 6.852941 1.415992
... 4 9.958550 9.013425 8.703142 3.588733
>>> float_col = df.select_dtypes(include=['float64']) # This will select float columns only
>>> # list(float_col.columns.values)
>>> for col in float_col.columns.values:
...     df[col] = df[col].astype('int64')
>>> print(df)
... P Q R S
... 0 4 0 8 1
... 1 0 9 6 3
... 2 2 0 5 3
... 3 3 8 6 1
... 4 9 9 8 3
This is a quick solution for converting several columns of your pandas.DataFrame from float to integer, which also handles the case where the columns contain NaN values.
cols = ['col_1', 'col_2', 'col_3', 'col_4']
for col in cols:
    df[col] = df[col].apply(lambda x: int(x) if x == x else "")
I tried with else x and else None, but the result still contained the float numbers, so I used else "".
Use 'Int64' for NaN support
astype(int) and astype('int64') cannot handle missing values (numpy int)
astype('Int64') (note the capital I) can handle missing values (pandas int)
df['A'] = df['A'].astype('Int64') # capital I
This assumes you want to keep missing values as NaN. If you plan to impute them, you could fillna first as Ryan suggested.
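For instance, a minimal sketch of the impute-then-convert route (assuming 0 is an acceptable fill value for your data):
df['A'] = df['A'].fillna(0).astype(int)   # impute missing values, then cast to a plain numpy int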
Examples of 'Int64' (capital I)
If the floats are already rounded, just use astype:
df = pd.DataFrame({'A': [99.0, np.nan, 42.0]})
df['A'] = df['A'].astype('Int64')
# A
# 0 99
# 1 <NA>
# 2 42
If the floats are not rounded yet, round before astype:
df = pd.DataFrame({'A': [3.14159, np.nan, 1.61803]})
df['A'] = df['A'].round().astype('Int64')
# A
# 0 3
# 1 <NA>
# 2 2
To read int+NaN data from a file, use dtype='Int64' to avoid the need for converting at all:
import io

csv = io.StringIO('''
id,rating
foo,5
bar,
baz,2
''')
df = pd.read_csv(csv, dtype={'rating': 'Int64'})
# id rating
# 0 foo 5
# 1 bar <NA>
# 2 baz 2
Notes
'Int64' is an alias for Int64Dtype:
df['A'] = df['A'].astype(pd.Int64Dtype()) # same as astype('Int64')
Sized/signed aliases are available:

            lower bound                 upper bound
'Int8'      -128                        127
'Int16'     -32,768                     32,767
'Int32'     -2,147,483,648              2,147,483,647
'Int64'     -9,223,372,036,854,775,808  9,223,372,036,854,775,807
'UInt8'     0                           255
'UInt16'    0                           65,535
'UInt32'    0                           4,294,967,295
'UInt64'    0                           18,446,744,073,709,551,615
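For example, a minimal sketch with one of the smaller aliases ('Int16' is just an illustrative choice; the values only need to fit its range):
df = pd.DataFrame({'A': [99.0, np.nan, 42.0]})
df['A'] = df['A'].astype('Int16')   # nullable 16-bit integer, missing value stays as <NA>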
Expanding on Ryan G's use of the pandas.DataFrame.astype(<type>) method, one can use the errors='ignore' argument to convert only those columns that do not produce an error, which notably simplifies the syntax. Obviously, caution should be applied when ignoring errors, but for this task it comes in very handy.
>>> df = pd.DataFrame(np.random.rand(3, 4), columns=list('ABCD'))
>>> df *= 10
>>> print(df)
... A B C D
... 0 2.16861 8.34139 1.83434 6.91706
... 1 5.85938 9.71712 5.53371 4.26542
... 2 0.50112 4.06725 1.99795 4.75698
>>> df['E'] = list('XYZ')
>>> df = df.astype(int, errors='ignore')
>>> print(df)
... A B C D E
... 0 2 8 1 6 X
... 1 5 9 5 4 Y
... 2 0 4 1 4 Z
From pandas.DataFrame.astype docs:
errors : {‘raise’, ‘ignore’}, default ‘raise’
Control raising of exceptions on invalid data for provided dtype.
raise : allow exceptions to be raised
ignore : suppress exceptions. On error return original object
New in version 0.20.0.
The columns that need to be converted to int can also be specified in a dictionary, as below:
df = df.astype({'col1': 'int', 'col2': 'int', 'col3': 'int'})
>>> import pandas as pd
>>> right = pd.DataFrame({'C': [1.002, 2.003], 'D': [1.009, 4.55], 'key': ['K0', 'K1']})
>>> print(right)
C D key
0 1.002 1.009 K0
1 2.003 4.550 K1
>>> right['C'] = right.C.astype(int)
>>> print(right)
C D key
0 1 1.009 K0
1 2 4.550 K1
The question explains that the data comes from a CSV, so I think it is relevant to show options for making the conversion when the data is read, rather than afterwards.
When importing spreadsheets or CSVs into a DataFrame, integer-only columns are commonly converted to float, because Excel stores all numerical values as floats and because of how the underlying libraries work.
When the file is read with read_excel or read_csv, there are a couple of options to avoid the conversion after import:
The dtype parameter lets you pass a dictionary of column names and target types, e.g. dtype={"my_column": "Int64"} (see the sketch after this list).
The converters parameter can be used to pass a function that performs the conversion, for example replacing NaNs with 0: converters={"my_column": lambda x: int(x) if x else 0}.
The convert_float parameter will convert "integral floats to int (i.e., 1.0 -> 1)", but take care with corner cases like NaNs. This parameter is only available in read_excel.
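A minimal sketch of the two read_csv options (the column names and sample data here are made up for illustration):
import io
import pandas as pd

data = "my_column,other\n1,a\n,b\n3,c"

# dtype: keep the missing cell as <NA> in a nullable integer column
df1 = pd.read_csv(io.StringIO(data), dtype={"my_column": "Int64"})

# converters: read_csv hands the converter the raw string, so empty cells become 0
df2 = pd.read_csv(io.StringIO(data), converters={"my_column": lambda x: int(x) if x else 0})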
To make the conversion in an existing DataFrame, several alternatives have been given in other answers, but since v1.0.0 pandas has an interesting function for these cases: convert_dtypes, which will "Convert columns to best possible dtypes using dtypes supporting pd.NA."
As example:
In [3]: import numpy as np
In [4]: import pandas as pd
In [5]: df = pd.DataFrame(
...: {
...: "a": pd.Series([1, 2, 3], dtype=np.dtype("int64")),
...: "b": pd.Series([1.0, 2.0, 3.0], dtype=np.dtype("float")),
...: "c": pd.Series([1.0, np.nan, 3.0]),
...: "d": pd.Series([1, np.nan, 3]),
...: }
...: )
In [6]: df
Out[6]:
a b c d
0 1 1.0 1.0 1.0
1 2 2.0 NaN NaN
2 3 3.0 3.0 3.0
In [7]: df.dtypes
Out[7]:
a int64
b float64
c float64
d float64
dtype: object
In [8]: converted = df.convert_dtypes()
In [9]: converted.dtypes
Out[9]:
a Int64
b Int64
c Int64
d Int64
dtype: object
In [10]: converted
Out[10]:
a b c d
0 1 1 1 1
1 2 2 <NA> <NA>
2 3 3 3 3
Although there are many options here, you can also convert the dtypes of specific columns using a dictionary:
Data = pd.read_csv('Your_Data.csv')
Data_2 = Data.astype({"Column a":"int32", "Column_b": "float64", "Column_c": "int32"})
print(Data_2.dtypes)  # Check the dtypes of the columns
This is a useful and very fast way to change the data types of specific columns for quick data analysis.
Related
How can one idiomatically run a function like get_dummies, which expects a single column and returns several, on multiple DataFrame columns?
With pandas 0.19, you can do that in a single line:
pd.get_dummies(data=df, columns=['A', 'B'])
The columns argument specifies which columns to one-hot encode.
>>> df
A B C
0 a c 1
1 b c 2
2 a b 3
>>> pd.get_dummies(data=df, columns=['A', 'B'])
C A_a A_b B_b B_c
0 1 1.0 0.0 0.0 1.0
1 2 0.0 1.0 0.0 1.0
2 3 1.0 0.0 1.0 0.0
Since pandas version 0.15.0, pd.get_dummies can handle a DataFrame directly (before that, it could only handle a single Series; see below for the workaround):
In [1]: df = DataFrame({'A': ['a', 'b', 'a'], 'B': ['c', 'c', 'b'],
...: 'C': [1, 2, 3]})
In [2]: df
Out[2]:
A B C
0 a c 1
1 b c 2
2 a b 3
In [3]: pd.get_dummies(df)
Out[3]:
C A_a A_b B_b B_c
0 1 1 0 0 1
1 2 0 1 0 1
2 3 1 0 1 0
Workaround for pandas < 0.15.0
You can do it for each column separately and then concat the results:
In [111]: df
Out[111]:
A B
0 a x
1 a y
2 b z
3 b x
4 c x
5 a y
6 b y
7 c z
In [112]: pd.concat([pd.get_dummies(df[col]) for col in df], axis=1, keys=df.columns)
Out[112]:
A B
a b c x y z
0 1 0 0 1 0 0
1 1 0 0 0 1 0
2 0 1 0 0 0 1
3 0 1 0 1 0 0
4 0 0 1 1 0 0
5 1 0 0 0 1 0
6 0 1 0 0 1 0
7 0 0 1 0 0 1
If you don't want the multi-index column, then remove the keys=.. from the concat function call.
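For example, the same list comprehension as above, just without keys:
pd.concat([pd.get_dummies(df[col]) for col in df], axis=1)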
Somebody may have something more clever, but here are two approaches. Assuming you have a dataframe named df with columns 'Name' and 'Year' you want dummies for.
First, simply iterating over the columns isn't too bad:
In [93]: for column in ['Name', 'Year']:
    ...:     dummies = pd.get_dummies(df[column])
    ...:     df[dummies.columns] = dummies
Another idea would be to use the patsy package, which is designed to construct data matrices from R-type formulas.
In [94]: patsy.dmatrix(' ~ C(Name) + C(Year)', df, return_type="dataframe")
Unless I misunderstand the question, this is supported natively in get_dummies by passing the columns argument, as shown below.
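A minimal sketch, assuming df has the 'Name' and 'Year' columns mentioned above:
pd.get_dummies(df, columns=['Name', 'Year'])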
The simple trick I am currently using is a for-loop. First, separate the categorical data from the DataFrame using select_dtypes(include="object"), then apply get_dummies to each column iteratively in a for loop, as shown in the code below:
train_cate=train_data.select_dtypes(include="object")
test_cate=test_data.select_dtypes(include="object")
# vectorize categorical data
for col in train_cate:
    cate1 = pd.get_dummies(train_cate[col])
    train_cate[cate1.columns] = cate1
    cate2 = pd.get_dummies(test_cate[col])
    test_cate[cate2.columns] = cate2
I need some help with a concise and, above all, efficient formulation in pandas of the following operation:
Given a data frame of the format
id a b c d
1 0 -1 1 1
42 0 1 0 0
128 1 -1 0 1
Construct a data frame of the format:
id one_entries
1 "c d"
42 "b"
128 "a d"
That is, the column "one_entries" contains the concatenated names of the columns for which the entry in the original frame is 1.
Here's one way, using a boolean mask and applying a lambda function row-wise.
In [58]: df
Out[58]:
id a b c d
0 1 0 -1 1 1
1 42 0 1 0 0
2 128 1 -1 0 1
In [59]: cols = list('abcd')
In [60]: (df[cols] > 0).apply(lambda x: ' '.join(x[x].index), axis=1)
Out[60]:
0 c d
1 b
2 a d
dtype: object
You can assign the result back with df['one_entries'] = ..., as shown below.
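For example:
df['one_entries'] = (df[cols] > 0).apply(lambda x: ' '.join(x[x].index), axis=1)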
Details of the apply function:
Take the first row.
In [83]: x = df[cols].iloc[0] > 0
In [84]: x
Out[84]:
a False
b False
c True
d True
Name: 0, dtype: bool
x gives you the Boolean values for the row (whether each value is greater than zero). x[x] returns only the True entries, essentially a Series with the matching column names as its index.
In [85]: x[x]
Out[85]:
c True
d True
Name: 0, dtype: bool
x[x].index gives you the column names.
In [86]: x[x].index
Out[86]: Index([u'c', u'd'], dtype='object')
Same reasoning as John Galt's, but a bit shorter, constructing a new DataFrame from a dict (here test_df is the original frame with 'id' as the index).
pd.DataFrame({
    'one_entries': (test_df > 0).apply(lambda x: ' '.join(x[x].index), axis=1)
})
# one_entries
# 1 c d
# 42 b
# 128 a d
I have a pandas dataframe and I want to create a new column, that is computed differently for different groups of rows. Here is a quick example:
import pandas as pd
data = {'foo': list('aaade'), 'bar': range(5)}
df = pd.DataFrame(data)
The dataframe looks like this:
bar foo
0 0 a
1 1 a
2 2 a
3 3 d
4 4 e
Now I am adding a new column and try to assign some values to selected rows:
df['xyz'] = 0
df.loc[(df['foo'] == 'a'), 'xyz'] = df.loc[(df['foo'] == 'a')].apply(lambda x: x['bar'] * 2, axis=1)
The dataframe has not changed. What I would expect is the dataframe to look like this:
bar foo xyz
0 0 a 0
1 1 a 2
2 2 a 4
3 3 d 0
4 4 e 0
In my real-world problem, the 'xyz' column is also computed for the other rows, but using a different function. In fact, I am also using different columns for the computation. So my questions:
Why does the assignment in the above example not work?
Is it necessary to do df.loc[(df['foo'] == 'a')] twice (as I am doing it now)?
You're changing a copy of df (a boolean mask of the DataFrame is a copy, see docs).
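One way to keep your original .loc pattern, and to avoid repeating the selection, is to compute the boolean mask once and reuse it (a minimal sketch):
mask = df['foo'] == 'a'
df.loc[mask, 'xyz'] = df.loc[mask, 'bar'] * 2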
Another way to achieve the desired result is as follows:
In [11]: df.apply(lambda row: (row['bar']*2 if row['foo'] == 'a' else row['xyz']), axis=1)
Out[11]:
0 0
1 2
2 4
3 0
4 0
dtype: int64
In [12]: df['xyz'] = df.apply(lambda row: (row['bar']*2 if row['foo'] == 'a' else row['xyz']), axis=1)
In [13]: df
Out[13]:
bar foo xyz
0 0 a 0
1 1 a 2
2 2 a 4
3 3 d 0
4 4 e 0
Perhaps a neater way is just to:
In [21]: 2 * (df.bar) * (df.foo == 'a')
Out[21]:
0 0
1 2
2 4
3 0
4 0
dtype: int64
I have a dataframe with columns A,B. I need to create a column C such that for every record / row:
C = max(A, B).
How should I go about doing this?
You can get the maximum like this:
>>> import pandas as pd
>>> df = pd.DataFrame({"A": [1,2,3], "B": [-2, 8, 1]})
>>> df
A B
0 1 -2
1 2 8
2 3 1
>>> df[["A", "B"]]
A B
0 1 -2
1 2 8
2 3 1
>>> df[["A", "B"]].max(axis=1)
0 1
1 8
2 3
and so:
>>> df["C"] = df[["A", "B"]].max(axis=1)
>>> df
A B C
0 1 -2 1
1 2 8 8
2 3 1 3
If you know that "A" and "B" are the only columns, you could even get away with
>>> df["C"] = df.max(axis=1)
And you could use .apply(max, axis=1) too, I guess.
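For example:
df["C"] = df[["A", "B"]].apply(max, axis=1)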
#DSM's answer is perfectly fine in almost any normal scenario. But if you're the type of programmer who wants to go a little deeper than the surface level, you might be interested to know that it is a little faster to call numpy functions on the underlying .to_numpy() (or .values for <0.24) array instead of directly calling the (cythonized) functions defined on the DataFrame/Series objects.
For example, you can use ndarray.max() along the first axis.
# Data borrowed from #DSM's post.
df = pd.DataFrame({"A": [1,2,3], "B": [-2, 8, 1]})
df
A B
0 1 -2
1 2 8
2 3 1
df['C'] = df[['A', 'B']].values.max(1)
# Or, assuming "A" and "B" are the only columns,
# df['C'] = df.values.max(1)
df
A B C
0 1 -2 1
1 2 8 8
2 3 1 3
If your data has NaNs, you will need numpy.nanmax:
df['C'] = np.nanmax(df.values, axis=1)
df
A B C
0 1 -2 1
1 2 8 8
2 3 1 3
You can also use numpy.maximum.reduce. numpy.maximum is a ufunc (Universal Function), and every ufunc has a reduce:
df['C'] = np.maximum.reduce(df[['A', 'B']].values, axis=1)
# df['C'] = np.maximum.reduce(df[['A', 'B']], axis=1)
# df['C'] = np.maximum.reduce(df, axis=1)
df
A B C
0 1 -2 1
1 2 8 8
2 3 1 3
np.maximum.reduce and np.max appear to be more or less the same (for most normal sized DataFrames)—and happen to be a shade faster than DataFrame.max. I imagine this difference roughly remains constant, and is due to internal overhead (indexing alignment, handling NaNs, etc).
The graph was generated using perfplot. Benchmarking code, for reference:
import numpy as np
import pandas as pd
import perfplot

np.random.seed(0)
df_ = pd.DataFrame(np.random.randn(5, 1000))
perfplot.show(
setup=lambda n: pd.concat([df_] * n, ignore_index=True),
kernels=[
lambda df: df.assign(new=df.max(axis=1)),
lambda df: df.assign(new=df.values.max(1)),
lambda df: df.assign(new=np.nanmax(df.values, axis=1)),
lambda df: df.assign(new=np.maximum.reduce(df.values, axis=1)),
],
labels=['df.max', 'np.max', 'np.nanmax', 'np.maximum.reduce'],
n_range=[2**k for k in range(0, 15)],
xlabel='N (* len(df))',
logx=True,
logy=True)
To find the overall max among multiple columns:
df[['A','B']].max(axis=1).max(axis=0)
Example:
df =
A B
timestamp
2019-11-20 07:00:16 14.037880 15.217879
2019-11-20 07:01:03 14.515359 15.878632
2019-11-20 07:01:33 15.056502 16.309152
2019-11-20 07:02:03 15.533981 16.740607
2019-11-20 07:02:34 17.221073 17.195145
print(df[['A','B']].max(axis=1).max(axis=0))
17.221073
I need to add columns iteratively to a DataFrame object. This is a simplified version:
>>> x = DataFrame()
>>> for i in 'ps':
...     x = x.append(DataFrame({i: [3, 4]}))
...
>>> x
p s
0 3 NaN
1 4 NaN
0 NaN 3
1 NaN 4
What should I do to get:
p s
0 3 3
1 4 4
?
Your idea of creating the dict first is probably the best way:
>>> from pandas import *
>>> DataFrame({c: [1,2] for c in 'sp'})
p s
0 1 1
1 2 2
(here using dictionary comprehensions, available in Python 2.7). Just for completeness, though, you could -- inefficiently -- use join or concat to get a column-by-column approach to work:
>>> df = DataFrame()
>>> for c in 'sp':
...     df = concat([df, DataFrame({c: [1,2]})], axis=1)
...
>>> print df
s p
0 1 1
1 2 2
>>>
>>> df = DataFrame()
>>> for c in 'sp':
...     df = df.join(DataFrame({c: [1,2]}), how='outer')
...
>>> print df
s p
0 1 1
1 2 2
[You can see the difference in column order.] But your idea of building the dict first and then constructing the DataFrame from it is a much better approach.
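For completeness, a minimal sketch of that dict-first approach with the data from your question (column order follows dict insertion order on modern Python):
from pandas import DataFrame

data = {}
for i in 'ps':
    data[i] = [3, 4]   # collect each column in a plain dict first

x = DataFrame(data)
#    p  s
# 0  3  3
# 1  4  4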