How can one print a multi-index DataFrame such as the one below:
import numpy as np
import tabulate
import pandas as pd
df = pd.DataFrame(np.random.randn(4, 3),
                  index=pd.MultiIndex.from_product([["foo", "bar"],
                                                    ["one", "two"]]),
                  columns=list("ABC"))
so that the two levels of the MultiIndex show as separate columns, much the same way pandas itself prints it:
In [16]: df
Out[16]:
A B C
foo one -0.040337 0.653915 -0.359834
two 0.271542 1.328517 1.704389
bar one -1.246009 0.087229 0.039282
two -1.217514 0.721025 -0.017185
However, tabulate prints it like this:
In [28]: print(tabulate.tabulate(df, tablefmt="github", headers="keys", showindex="always"))
| | A | B | C |
|----------------|------------|-----------|------------|
| ('foo', 'one') | -0.0403371 | 0.653915 | -0.359834 |
| ('foo', 'two') | 0.271542 | 1.32852 | 1.70439 |
| ('bar', 'one') | -1.24601 | 0.0872285 | 0.039282 |
| ('bar', 'two') | -1.21751 | 0.721025 | -0.0171852 |
MultiIndexes are represented by tuples internally, so tabulate is showing you the right thing.
If you want a column-like display, the easiest approach is to reset_index first:
print(tabulate.tabulate(df.reset_index().rename(columns={'level_0': '', 'level_1': ''}),
                        tablefmt="github", headers="keys", showindex=False))
Output:
| | | A | B | C |
|-----|-----|-----------|-----------|-----------|
| foo | one | -0.108977 | 2.03593 | 1.11258 |
| foo | two | 0.65117 | -1.48314 | 0.391379 |
| bar | one | -0.660148 | 1.34875 | -1.10848 |
| bar | two | 0.561418 | 0.762137 | 0.723432 |
Alternatively, you can rework the MultiIndex into a single index:
df2 = df.copy()
# embed '|' in the flattened index so the levels line up as visual columns
df2.index = df.index.map(lambda x: '|'.join(f'{e:>5} ' for e in x))
print(tabulate.tabulate(df2.rename_axis('index'), tablefmt="github", headers="keys", showindex="always"))
Output:
| index | A | B | C |
|------------|-----------|-----------|-----------|
| foo | one | -0.108977 | 2.03593 | 1.11258 |
| foo | two | 0.65117 | -1.48314 | 0.391379 |
| bar | one | -0.660148 | 1.34875 | -1.10848 |
| bar | two | 0.561418 | 0.762137 | 0.723432 |
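For completeness, pandas' own to_markdown method (which delegates to tabulate) can express the same reset_index idea in one call. A minimal sketch, assuming a recent pandas with tabulate installed:

print(df.reset_index().to_markdown(index=False, tablefmt="github"))

Here the two levels appear as ordinary columns named level_0 and level_1, so apply the rename trick from above if you want blank headers.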
Let's say I have a pandas dataframe:
| id1 | id2 | attr1 | combo_id | perm_id |
| --- | --- | --- | --- | --- |
| 1 | 2 | [9606] | [1,2] | AB |
| 2 | 1 | [9606] | [1,2] | BA |
| 3 | 4 | [9606] | [3,4] | AB |
| 4 | 3 | [9606] | [3,4] | BA |
I'd like to aggregate rows with the same combo_id together, and store information from both rows using the perm_id of that row. So the resulting dataframe would look like:
| attr1 | combo_id |
| --- | --- |
| {'AB':[9606], 'BA': [9606]} | [1,2] |
| {'AB':[9606], 'BA': [9606]} | [3,4] |
How would I use groupby and aggregate functions for these operations?
I tried converting attr1 to a dict keyed by perm_id.
df['attr1'] = df.apply(lambda x: {x['perm_id']: x['attr1']})
Then I planned to use something to combine dictionaries in the same group.
df.groupby(['combo_id']).agg({ 'attr1': lambda x: {x**})
But this resulted in KeyError: perm_id
Any suggestions?
Try:
from ast import literal_eval
x = (
    df.groupby(df["combo_id"].astype(str))  # lists aren't hashable, so group on their string form
    .apply(lambda x: dict(zip(x["perm_id"], x["attr1"])))
    .reset_index(name="attr1")
)
# convert combo_id back to list (if needed)
x["combo_id"] = x["combo_id"].apply(literal_eval)
print(x)
Prints:
combo_id attr1
0 [1, 2] {'AB': [9606], 'BA': [9606]}
1 [3, 4] {'AB': [9606], 'BA': [9606]}
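As an aside, the KeyError in the original attempt comes from df.apply defaulting to axis=0, which passes each column to the lambda; passing axis=1 hands it each row instead:

df['attr1'] = df.apply(lambda x: {x['perm_id']: x['attr1']}, axis=1)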
Can anyone help me sort rows into the order in which pages were viewed?
I have a dataframe that I am attempting to sort by the previous page viewed, and I am having a really hard time coming up with an efficient method using Pandas.
For example from this:
+------------+------------------+----------+
| Customer | previousPagePath | pagePath |
+------------+------------------+----------+
| 1051471580 | A | D |
| 1051471580 | C | B |
| 1051471580 | A | exit |
| 1051471580 | B | A |
| 1051471580 | D | A |
| 1051471580 | entrance | C |
+------------+------------------+----------+
To this:
+------------+------------------+----------+
| Customer | previousPagePath | pagePath |
+------------+------------------+----------+
| 1051471580 | entrance | C |
| 1051471580 | C | B |
| 1051471580 | B | A |
| 1051471580 | A | D |
| 1051471580 | D | A |
| 1051471580 | A | exit |
+------------+------------------+----------+
However, it could be millions of rows long for thousands of different customers, so I really need to think about how to make this efficient.
pd.DataFrame({
    'Customer': '1051471580',
    'previousPagePath': ['E', 'C', 'B', 'A', 'D', 'A'],
    'pagePath': ['C', 'B', 'A', 'D', 'A', 'F']
})
Thanks!
What you're trying to do is topological sorting, which can be achieved with networkx. Note that I had to change some values in your dataframe to prevent it from throwing a cycle error (topological sorting only works on acyclic graphs, and your example contains the cycle A → D → A), so I hope the data you work on contains unique values:
import networkx as nx
import pandas as pd

data = [[1051471580, "Z", "D"], [1051471580, "C", "B"], [1051471580, "A", "exit"],
        [1051471580, "B", "Z"], [1051471580, "D", "A"], [1051471580, "entrance", "C"]]
df = pd.DataFrame(data, columns=['Customer', 'previousPagePath', 'pagePath'])

# build a directed graph of page transitions, skipping self-loops
edges = df[df.pagePath != df.previousPagePath].reset_index()
dg = nx.from_pandas_edgelist(edges, source='previousPagePath', target='pagePath', create_using=nx.DiGraph())

# a topological order visits every page before the pages that follow it
order = list(nx.lexicographical_topological_sort(dg))

# reorder the rows; order[:-1] drops the terminal node ('exit'),
# which never appears in previousPagePath
result = df.set_index('previousPagePath').loc[order[:-1], :].dropna().reset_index()
result = result[['Customer', 'previousPagePath', 'pagePath']]
Output:
| | Customer | previousPagePath | pagePath |
|---:|-----------:|:-------------------|:-----------|
| 0 | 1051471580 | entrance | C |
| 1 | 1051471580 | C | B |
| 2 | 1051471580 | B | Z |
| 3 | 1051471580 | Z | D |
| 4 | 1051471580 | D | A |
| 5 | 1051471580 | A | exit |
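Since the real data covers thousands of customers, here is a minimal sketch of running the same sort per customer group; order_pages is a hypothetical helper, and it assumes each customer's path is acyclic, as above:

def order_pages(group):
    # topologically sort one customer's page transitions
    dg = nx.from_pandas_edgelist(group, source='previousPagePath',
                                 target='pagePath', create_using=nx.DiGraph())
    order = list(nx.lexicographical_topological_sort(dg))
    # order[:-1] drops the terminal node, which never appears in previousPagePath
    return group.set_index('previousPagePath').loc[order[:-1], :].reset_index()

result = df.groupby('Customer', group_keys=False).apply(order_pages)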
You can sort your DataFrame by a column like this:
df = pd.DataFrame({'Customer': '1051471580',
                   'previousPagePath': ['E', 'C', 'B', 'A', 'D', 'A'],
                   'pagePath': ['C', 'B', 'A', 'D', 'A', 'F']})
df.sort_values(by='previousPagePath')
You can find the documentation here: pandas.DataFrame.sort_values
Given two datatable Frames, how can one combine (merge) them into one frame?
dt_f_A =
+--------+--------+--------+-----+--------+
| A_at_1 | A_at_2 | A_at_3 | ... | A_at_m |
+--------+--------+--------+-----+--------+
| v_1 | | | | |
+--------+--------+--------+-----+--------+
| ... | | | | |
+--------+--------+--------+-----+--------+
| v_N | | | | |
+--------+--------+--------+-----+--------+
dt_f_B =
+--------+--------+--------+-----+--------+
| B_at_1 | B_at_2 | B_at_3 | ... | B_at_k |
+--------+--------+--------+-----+--------+
| w_1 | | | | |
+--------+--------+--------+-----+--------+
| ... | | | | |
+--------+--------+--------+-----+--------+
| w_N | | | | |
+--------+--------+--------+-----+--------+
The expected result (dt_f_A combined/merged with dt_f_B):
+--------+--------+--------+-----+--------+--------+--------+--------+-----+--------+
| A_at_1 | A_at_2 | A_at_3 | ... | A_at_m | B_at_1 | B_at_2 | B_at_3 | ... | B_at_k |
+--------+--------+--------+-----+--------+--------+--------+--------+-----+--------+
| v_1 | | | | | w_1 | | | | |
+--------+--------+--------+-----+--------+--------+--------+--------+-----+--------+
| ... | | | | | ... | | | | |
+--------+--------+--------+-----+--------+--------+--------+--------+-----+--------+
| v_N | | | | | w_N | | | | |
+--------+--------+--------+-----+--------+--------+--------+--------+-----+--------+
We consider three cases:
Case 1: a) the two frames have exactly the same number of rows, and b) the attributes (column names) are unique.
Case 2: the numbers of rows are different.
Case 3: the attributes are not unique (there is duplication).
@sammywemmy Thank you for the valuable comment.
Case 1: a) the two frames have exactly the same number of rows, and b) the attributes (column names) are unique
1- use cbind: dt_f_A.cbind(dt_f_B)
or
2- use assignment: dt_f_A[:, dt_f_B.names] = dt_f_B
Example:
import datatable as dt
dt_f_A = dt.Frame({"a":[1,2,3,4],"b":['a','b','c','d']})
dt_f_B = dt.Frame({"c":[1.1, 2.2, 3.3, 4.4], "d":['aa', 'bb', 'cc', 'dd']})
dt_f_A.cbind(dt_f_B)
# dt_f_A[:, dt_f_B.names] = dt_f_B  # this works fine as well
print(dt_f_A)
Case 2: The number of rows is different
dt_f_A.cbind(dt_f_B) gives InvalidOperationError: Cannot cbind frame with X rows to a frame with Y rows. (X ≠ Y)
dt_f_A[:, dt_f_B.names] gives ValueError: Frame has X rows, and cannot be used in an expression where Y are expected. (X ≠ Y)
The solution: use dt_f_A.cbind(dt_f_B, force=True)
Example:
import datatable as dt
dt_f_A = dt.Frame({"a":[1, 2, 3, 4, 5,6], "b":['a', 'b', 'c', 'd', 'e','f']})
dt_f_B = dt.Frame({"c":[1.1, 2.2, 3.3, 4.4], "d":['aa', 'bb', 'cc', 'dd']})
dt_f_A.cbind(dt_f_B,force=True)
print(dt_f_A)
The missing values will then be filled with NA.
Case 3: the attributes are not unique (there is a duplication)
dt_f_A.cbind(dt_f_B): it works and gives a warning. It changes the duplicated attribute to a unique one: DatatableWarning: Duplicate column name found, and was assigned a unique name: 'a' -> 'a.0'
dt_f_A[:, dt_f_B.names] = dt_f_B: it doesn't give any error. It eliminates the duplicated column in dt_f_A and keeps the column from dt_f_B.
Example:
import datatable as dt
dt_f_A = dt.Frame({"a":[1,2,3,4],"b":['a','b','c','d']})
dt_f_B = dt.Frame({"a":[1.1, 2.2, 3.3, 4.4], "d":['aa', 'bb', 'cc', 'dd']})
dt_f_A.cbind(dt_f_B) # rename the duplicated columns
#dt_f_A[:, dt_f_B.names] = dt_f_B # keep only the duplicated columns in dt_f_B
print(dt_f_A)
I have 2 dataframes which I need to merge based on a column (Employee code). Please note that the dataframes have about 75 columns, so I am providing a sample dataset to get some suggestions/sample solutions. I am using Databricks, and the datasets are read from S3.
Following are my 2 dataframes:
DATAFRAME - 1
|-----------------------------------------------------------------------------------|
|EMP_CODE |COLUMN1|COLUMN2|COLUMN3|COLUMN4|COLUMN5|COLUMN6|COLUMN7|COLUMN8|COLUMN9|
|-----------------------------------------------------------------------------------|
|A10001 | B | | | | | | | | |
|-----------------------------------------------------------------------------------|
DATAFRAME - 2
|-----------------------------------------------------------------------------------|
|EMP_CODE |COLUMN1|COLUMN2|COLUMN3|COLUMN4|COLUMN5|COLUMN6|COLUMN7|COLUMN8|COLUMN9|
|-----------------------------------------------------------------------------------|
|A10001 | | | | | C | | | | |
|B10001 | | | | | | | | |T2 |
|A10001 | | | | | | | | B | |
|A10001 | | | C | | | | | | |
|C10001 | | | | | | C | | | |
|-----------------------------------------------------------------------------------|
I need to merge the 2 dataframes based on EMP_CODE, basically joining dataframe1 with dataframe2 on emp_code. I am getting duplicate columns when I do a join, and I am looking for some help.
Expected final dataframe:
|-----------------------------------------------------------------------------------|
|EMP_CODE |COLUMN1|COLUMN2|COLUMN3|COLUMN4|COLUMN5|COLUMN6|COLUMN7|COLUMN8|COLUMN9|
|-----------------------------------------------------------------------------------|
|A10001 | B | | C | | C | | | B | |
|B10001 | | | | | | | | |T2 |
|C10001 | | | | | | C | | | |
|-----------------------------------------------------------------------------------|
There is 1 row with emp_code A10001 in dataframe1, and 3 rows in dataframe2. All data should be merged as one record without any duplicate columns.
Thanks much
You can use an inner join:
output = df1.join(df2,['EMP_CODE'],how='inner')
You can also apply distinct at the end to remove duplicates:
output = df1.join(df2,['EMP_CODE'],how='inner').distinct()
If both dataframes have the same columns, you can do that in Scala with a union:
output = df1.union(df2)
First you need to aggregate the individual dataframes.
from pyspark.sql import functions as F
df1 = df1.groupBy('EMP_CODE').agg(F.concat_ws(" ", F.collect_list(df1.COLUMN1)).alias('COLUMN1'))
You have to write this for all columns and for all dataframes; a sketch that loops over the columns follows below.
Then you'll have to use the union function on all dataframes:
df1.union(df2)
and then repeat the same aggregation on the union dataframe.
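Rather than spelling that out by hand for all ~75 columns, here is a minimal sketch that loops over them; aggregate_all is a hypothetical helper, and it assumes every non-key column can be collected the same way:

from pyspark.sql import functions as F

def aggregate_all(df, key='EMP_CODE'):
    # collapse each non-key column into a space-separated string per employee
    aggs = [F.concat_ws(' ', F.collect_list(c)).alias(c)
            for c in df.columns if c != key]
    return df.groupBy(key).agg(*aggs)

output = aggregate_all(aggregate_all(df1).union(aggregate_all(df2)))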
What you need is a union.
If both dataframes have the same number of columns and the columns that are to be "union-ed" are positionally the same (as in your example), this will work:
output = df1.union(df2).dropDuplicates()
If both dataframes have the same number of columns and the columns that need to be "union-ed" have the same name (as in your example as well), this would be better:
output = df1.unionByName(df2).dropDuplicates()
So I have Excel data like:
+---+--------+----------+----------+----------+----------+---------+
| | A | B | C | D | E | F |
+---+--------+----------+----------+----------+----------+---------+
| 1 | Name | 266 | | | | |
| 2 | A | B | C | D | E | F |
| 3 | 0.1744 | 0.648935 | 0.947621 | 0.121012 | 0.929895 | 0.03959 |
+---+--------+----------+----------+----------+----------+---------+
My main labels are on row 2, but I need to delete the first row. I am using the following Pandas code:
import pandas as pd
excel_file = 'Data.xlsx'
c1 = pd.read_excel(excel_file)
How do I make the 2nd row my main label row?
You can use the skiprows parameter to skip the top row. You can read more about the parameters available for read_excel in the pandas documentation.
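A minimal sketch, reusing the filename from the question:

import pandas as pd

excel_file = 'Data.xlsx'
# skip the first physical row so the labels on row 2 become the header
c1 = pd.read_excel(excel_file, skiprows=1)

Equivalently, header=1 tells read_excel to use the second row as the header row.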