I have two dataframes with meaningless indexes but carefully curated order, and I want to merge them while preserving that order. So, for example:
>>> df1
First
a 1
b 3
and
>>> df2
Second
c 2
d 4
After merging, what I want to obtain is this:
>>> Desired_output
First Second
AnythingAtAll 1 2 # <--- Row Names are meaningless.
SeriouslyIDontCare 3 4 # <--- But the ORDER of the rows is critical and must be preserved.
The fact that I've got row-indices "a/b" and "c/d" is irrelevant, but what is crucial is the order in which the rows appear. Every version of "join" I've seen requires me to manually reset indices, which seems really awkward, and I don't trust that it won't screw up the ordering. I thought concat would work, but I get this:
>>> pd.concat( [df1, df2] , axis = 1, ignore_index= True )
0 1
a 1.0 NaN
b 3.0 NaN
c NaN 2.0
d NaN 4.0
# ^ obviously not what I want.
Even when I explicitly declare ignore_index.
How do I "overrule" the indexing and force the columns to be merged with the rows kept in the exact order that I supply them?
Edit:
Note that if I assign another column, the results are all "NaN".
>>> df1["second"]=df2["Second"]
>>> df1
First second
a 1 NaN
b 3 NaN
This was screwing me up but thanks to the suggestion from jsmart and topsail, you can dereference the indices by directly accessing the values in the column:
df1["second"]=df2["Second"].values
>>> df1
First second
a 1 2
b 3 4
^ Solution
This should also work I think:
df1["second"] = df2["second"].values
It would keep the index from the first dataframe, but since you have values in there such as "AnyThingAtAll" and "SeriouslyIdontCare" I guess any index values whatsoever are acceptable.
Basically, we are just adding a the values from your series as a new column to the first dataframe.
Here's a test example similar to your described problem:
# -----------
# sample data
# -----------
df1 = pd.DataFrame({
    'x': ['a', 'b'],
    'First': [1, 3],
})
df1.set_index("x", drop=True, inplace=True)

df2 = pd.DataFrame({
    'x': ['c', 'd'],
    'Second': [2, 4],
})
df2.set_index("x", drop=True, inplace=True)
# ---------------------------------------------
# Add series as a new column to first dataframe
# ---------------------------------------------
df1["Second"] = df2["Second"].values
Result is:
   First  Second
a      1       2
b      3       4
The goal is to combine data based on position (not by Index). Here is one way to do it:
import pandas as pd
# create data frames df1 and df2
df1 = pd.DataFrame(data = {'First': [1, 3]}, index=['a', 'b'])
df2 = pd.DataFrame(data = {'Second': [2, 4]}, index = ['c', 'd'])
# add a column to df1 -- add by position, not by Index
df1['Second'] = df2['Second'].values
print(df1)
First Second
a 1 2
b 3 4
And you could create a completely new data frame like this:
data = {'1st': df1['First'].values, '2nd': df1['Second'].values}
print(pd.DataFrame(data))
1st 2nd
0 1 2
1 3 4
ignore_index controls whether the output keeps the original labels along the concatenation axis. If it is True, the original labels are discarded and replaced with 0 to n-1. Note that this applies to the axis you concatenate along: with axis=1 it is the column headers that became 0 and 1 in your result, while the row index was left untouched.
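To see this, here is a minimal sketch (using the df1 and df2 from your question) showing that with axis=1 it is the column labels, not the row index, that get renumbered:
import pandas as pd

df1 = pd.DataFrame({'First': [1, 3]}, index=['a', 'b'])
df2 = pd.DataFrame({'Second': [2, 4]}, index=['c', 'd'])

# ignore_index=True with axis=1 renumbers the COLUMNS, not the rows;
# the rows are still aligned by their index labels, hence the NaNs
print(pd.concat([df1, df2], axis=1, ignore_index=True))
#      0    1
# a  1.0  NaN
# b  3.0  NaN
# c  NaN  2.0
# d  NaN  4.0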
You can try
out = pd.concat([df1.reset_index(drop=True), df2.reset_index(drop=True)], axis=1)
print(out)
First Second
0 1 2
1 3 4
This may be a simple question, but I cannot figure out how to do this. Let's say that I have two variables as follows.
a = 2
b = 3
I want to construct a DataFrame from this:
df2 = pd.DataFrame({'A':a,'B':b})
This generates an error:
ValueError: If using all scalar values, you must pass an index
I tried this also:
df2 = (pd.DataFrame({'a':a,'b':b})).reset_index()
This gives the same error message.
The error message says that if you're passing scalar values, you have to pass an index. So you can either not use scalar values for the columns -- e.g. use a list:
>>> df = pd.DataFrame({'A': [a], 'B': [b]})
>>> df
A B
0 2 3
or use scalar values and pass an index:
>>> df = pd.DataFrame({'A': a, 'B': b}, index=[0])
>>> df
A B
0 2 3
You may try wrapping your dictionary into a list:
my_dict = {'A':1,'B':2}
pd.DataFrame([my_dict])
A B
0 1 2
You can also use pd.DataFrame.from_records which is more convenient when you already have the dictionary in hand:
df = pd.DataFrame.from_records([{ 'A':a,'B':b }])
You can also set index, if you want, by:
df = pd.DataFrame.from_records([{ 'A':a,'B':b }], index='A')
You need to create a pandas Series first, then convert the Series to a DataFrame.
import pandas as pd
data = {'a': 1, 'b': 2}
pd.Series(data).to_frame()
You can even provide a column name.
pd.Series(data).to_frame('ColumnName')
Maybe Series would provide all the functions you need:
pd.Series({'A':a,'B':b})
A DataFrame can be thought of as a collection of Series, hence you can:
Concatenate multiple Series into one data frame
Add a Series variable into an existing data frame (see the sketch below)
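A minimal sketch of both ideas (the Series names here are made up for illustration):
import pandas as pd

s1 = pd.Series({'A': 2, 'B': 3}, name='first')
s2 = pd.Series({'A': 5, 'B': 7}, name='second')

# concatenate multiple Series side by side into one DataFrame
df = pd.concat([s1, s2], axis=1)
print(df)
#    first  second
# A      2       5
# B      3       7

# or add a Series as a new column to an existing DataFrame
df['third'] = pd.Series({'A': 8, 'B': 9})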
Pandas magic at work; all logic goes out the window.
The error message "ValueError: If using all scalar values, you must pass an index" says you must pass an index.
This does not necessarily mean that passing an index makes pandas do what you want it to do.
When you pass an index, pandas will treat your dictionary keys as column names and the values as what the column should contain for each of the values in the index.
a = 2
b = 3
df2 = pd.DataFrame({'A':a,'B':b}, index=[1])
A B
1 2 3
Passing a larger index:
df2 = pd.DataFrame({'A':a,'B':b}, index=[1, 2, 3, 4])
A B
1 2 3
2 2 3
3 2 3
4 2 3
An index is usually generated automatically by a dataframe when none is given. However, pandas does not know how many rows of 2 and 3 you want. You can, however, be more explicit about it:
df2 = pd.DataFrame({'A':[a]*4,'B':[b]*4})
df2
A B
0 2 3
1 2 3
2 2 3
3 2 3
The default index is 0 based though.
I would recommend always passing a dictionary of lists to the dataframe constructor when creating dataframes. It's easier to read for other developers. Pandas has a lot of caveats; don't make other developers have to be experts in all of them in order to read your code.
You could try:
df2 = pd.DataFrame.from_dict({'a':a,'b':b}, orient = 'index')
From the documentation on the 'orient' argument: If the keys of the passed dict should be the columns of the resulting DataFrame, pass ‘columns’ (default). Otherwise if the keys should be rows, pass ‘index’.
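For example, a quick sketch of what orient='index' produces here:
import pandas as pd

a, b = 2, 3
df2 = pd.DataFrame.from_dict({'a': a, 'b': b}, orient='index')
print(df2)
#    0
# a  2
# b  3

# transpose if you want the keys as columns instead
print(df2.T)
#    a  b
# 0  2  3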
I usually use the following to quickly create a small table from dicts.
Let's say you have a dict where the keys are filenames and the values their corresponding filesizes, you could use the following code to put it into a DataFrame (notice the .items() call on the dict):
files = {'A.txt':12, 'B.txt':34, 'C.txt':56, 'D.txt':78}
filesFrame = pd.DataFrame(files.items(), columns=['filename','size'])
print(filesFrame)
filename size
0 A.txt 12
1 B.txt 34
2 C.txt 56
3 D.txt 78
You need to provide iterables as the values for the Pandas DataFrame columns:
df2 = pd.DataFrame({'A':[a],'B':[b]})
I had the same problem with numpy arrays and the solution is to flatten them:
data = {
'b': array1.flatten(),
'a': array2.flatten(),
}
df = pd.DataFrame(data)
import pandas as pd
a=2
b=3
d = {'A': a, 'B': b}  # avoid naming it "dict", which shadows the builtin
pd.DataFrame(pd.Series(d)).T
# .T transposes the DataFrame (swaps rows and columns)
Result:
A B
0 2 3
To figure out the "ValueError", you need to understand DataFrame and "scalar values".
To create a DataFrame from a dict, at least one value must be array-like.
An array is itself indexed, so if there is an array-like value, there is no need to specify an index.
e.g. the elements of ['a', 's', 'd', 'f'] have the indexes 0, 1, 2, 3 respectively.
df_array_like = pd.DataFrame({
    'col': 10086,
    'col_2': True,
    'col_3': "'at least one array'",
    'col_4': ['one array is arbitrary length', 'multi arrays should be the same length']})
print("df_array_like: \n", df_array_like)
Output:
df_array_like:
col col_2 col_3 col_4
0 10086 True 'at least one array' one array is arbitrary length
1 10086 True 'at least one array' multi arrays should be the same length
As shown in the output, the index of the DataFrame is 0 and 1,
coincidentally the same as the index of the array ['one array is arbitrary length', 'multi arrays should be the same length'].
If you comment out 'col_4', it will raise
ValueError("If using all scalar values, you must pass an index")
because scalar values (integers, bools, and strings) do not have an index.
(Note that pandas will also complain "Index(...) must be called with a collection of some kind" if you pass a scalar as the index.)
Since the index is used to locate all the rows of the DataFrame,
the index should be an array. e.g.
df_scalar_value = pd.DataFrame({
    'col': 10086,
    'col_2': True,
    'col_3': "'at least one array'"
}, index=['fst_row', 'snd_row', 'third_row'])
print("df_scalar_value: \n", df_scalar_value)
Output:
df_scalar_value:
col col_2 col_3
fst_row 10086 True 'at least one array'
snd_row 10086 True 'at least one array'
third_row 10086 True 'at least one array'
I tried transpose() and it worked.
Downside: You create a new object.
testdict1 = {'key1':'val1','key2':'val2','key3':'val3','key4':'val4'}
df = pd.DataFrame.from_dict(data=testdict1,orient='index')
print(df)
print(f'ID for DataFrame before Transpose: {id(df)}\n')
df = df.transpose()
print(df)
print(f'ID for DataFrame after Transpose: {id(df)}')
Output
0
key1 val1
key2 val2
key3 val3
key4 val4
ID for DataFrame before Transpose: 1932797100424
key1 key2 key3 key4
0 val1 val2 val3 val4
ID for DataFrame after Transpose: 1932797125448
The input does not have to be a list of records; it can be a single dictionary as well:
pd.DataFrame.from_records({'a':1,'b':2}, index=[0])
a b
0 1 2
Which seems to be equivalent to:
pd.DataFrame({'a':1,'b':2}, index=[0])
a b
0 1 2
This is because a DataFrame has two intuitive dimensions - the columns and the rows.
You are only specifying the columns using the dictionary keys.
If you only want to specify one dimensional data, use a Series!
If you intend to convert a dictionary of scalars, you have to include an index:
import pandas as pd
alphabets = {'A': 'a', 'B': 'b'}
index = [0]
alphabets_df = pd.DataFrame(alphabets, index=index)
print(alphabets_df)
Although index is not required for a dictionary of lists, the same idea can be expanded to a dictionary of lists:
planets = {'planet': ['earth', 'mars', 'jupiter'], 'length_of_day': ['1', '1.03', '0.414']}
index = [0, 1, 2]
planets_df = pd.DataFrame(planets, index=index)
print(planets_df)
Of course, for the dictionary of lists, you can build the dataframe without an index:
planets_df = pd.DataFrame(planets)
print(planets_df)
Change your 'a' and 'b' values to lists, as follows:
a = [2]
b = [3]
then execute the same code as follows:
df2 = pd.DataFrame({'A':a,'B':b})
df2
and you'll get:
A B
0 2 3
Simplest option is:
import numpy as np
d = {'A': a, 'B': b}  # avoid naming it "dict", which shadows the builtin
df = pd.DataFrame(d, index=np.arange(1))
Another option is to convert the scalars into lists on the fly using a dictionary comprehension:
df = pd.DataFrame(data={k: [v] for k, v in mydict.items()})
The expression {...} creates a new dict whose values are one-element lists, such as:
In [20]: mydict
Out[20]: {'a': 1, 'b': 2}
In [21]: mydict2 = { k: [v] for k, v in mydict.items()}
In [22]: mydict2
Out[22]: {'a': [1], 'b': [2]}
Convert a dictionary to a DataFrame:
col_dict_df = pd.Series(col_dict).to_frame('new_col').reset_index()
Give new names to the columns:
col_dict_df.columns = ['col1', 'col2']
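For example, with a made-up col_dict:
import pandas as pd

col_dict = {'a': 1, 'b': 2}
# the dict keys become the Series index, then reset_index turns them into a column
col_dict_df = pd.Series(col_dict).to_frame('new_col').reset_index()
col_dict_df.columns = ['col1', 'col2']
print(col_dict_df)
#   col1  col2
# 0    a     1
# 1    b     2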
If you have a dictionary you can turn it into a pandas data frame with the following line of code:
pd.DataFrame({"key": d.keys(), "value": d.values()})
Just pass the dict in a list:
a = 2
b = 3
df2 = pd.DataFrame([{'A':a,'B':b}])
I have a dataframe like the following
StructureID Target Val Measured Val
'A' 2 1.8
'A' 2 1.5
'B' 4 4.6
'C' 1 2.5
'C' 1 2.3
'C' 1 2.2
. . .
. . .
But the 'Target Val' column has the wrong target value. So, I need to replace the 'Target Val' column with the correct target value for the right 'StructureID'. I have another excel file where I have the 'Structure ID' and the correct 'Target Val'
StructureID Correct Target Val
'A' 1.5
'B' 3.0
'C' 2.2
'D' 1.0
. .
. .
My question - is there an easy way to match the structure ID with my first dataframe and populate the 'Target Val' col with the 'Correct Target Val' column from my excel file?
I have thought of making a dictionary to hold the correct target value for each structure id and do a for loop, but my approach is not efficient.
A for loop doesn't sound good... my solution is certainly not the most elegant one, but it should do the trick a lot faster.
Delete the column with the wrong values using .drop (note that drop returns a new dataframe unless you reassign it):
df1 = df1.drop(['Target Val'], axis=1)
Merge the original dataframe df1 with the one holding the correct values (df2), using StructureID for joining:
df1 = df1.merge(df2, how='left', on='StructureID')
Check out the documentation for join types, but from your example I think 'left' seems like what you want, since df1 holds multiple rows per StructureID and df2 only holds one row per StructureID.
import pandas as pd
import numpy as np
d1 = {'StructureID': ['A', 'A', 'B', 'C'], 'TargetVal': [2, 2, 4, 1], 'MeasuredVal': [1.8, 1.5, 4.6, 2.5]}
df1 = pd.DataFrame(data=d1)
d2 = {'StructureID': ['A', 'B', 'C'], 'TargetVal': [1.5, 3, 2.2]}
df2 = pd.DataFrame(data=d2)
df1 = df1.drop(['TargetVal'], axis=1)
df1 = df1.merge(df2, how='left', left_on='StructureID', right_on='StructureID')
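For reference, running the above and printing df1 gives:
  StructureID  MeasuredVal  TargetVal
0           A          1.8        1.5
1           A          1.5        1.5
2           B          4.6        3.0
3           C          2.5        2.2
As an aside, the dictionary idea from your question also works without an explicit loop. Instead of drop + merge, you could overwrite the wrong column directly via Series.map (a sketch using the same sample data):
# build a StructureID -> correct-value mapping and apply it row by row
correct = dict(zip(df2['StructureID'], df2['TargetVal']))
df1['TargetVal'] = df1['StructureID'].map(correct)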
I have a pandas DataFrame that I want to add rows to. The DataFrame looks like this:
col1 col2
a 1 5
b 2 6
c 3 7
I want to add rows to the dataframe, but only if they are unique. The problem is that some new rows might have the same index, but different values in the columns. If this is the case, I somehow need to know.
Some example rows to be added and the desired result:
row 1:
col1 col2
a 1 5
desired row 1 result: Not added - it is already in the dataframe
row 2:
col1 col2
a 9 9
desired row 2 result: something like,
print('non-unique entries for index a')
row 3:
col1 col2
d 4 4
desired row 3 result: just add the row to the dataframe.
try this:
# existing dataframe == df
# new rows == df_newrows
# dividing the newrows dataframe into two: one without repeated indexes, one with
df_newrows_usable = df_newrows.loc[~df_newrows.index.isin(df.index)]
df_newrows_discarded = df_newrows.loc[df_newrows.index.isin(df.index)]
print('repeated indexes:', df_newrows_discarded)
# concat df and the new rows without repeated indexes
new_df = pd.concat([df, df_newrows_usable], axis=0)
print('new dataframe:', new_df)
The easy option would be to merge all rows and then keep the unique ones via the dataframe method drop_duplicates.
However, this option doesn't report a warning / error when a duplicate row is appended.
drop_duplicates doesn't consider indexes, so the index must be reset before dropping the duplicates, and set back afterwards:
import pandas as pd
# set up data frame
df = pd.DataFrame({'col1': [1, 2, 3], 'col2': [5, 6, 7]}, index=['a', 'b', 'c'])
# set up row to be appended
row = pd.DataFrame({'col1': [3], 'col2': [7]}, index=['c'])
# append row (don't care if it's a duplicate)
# (DataFrame.append was removed in pandas 2.0, so concat is used here)
df2 = pd.concat([df, row])
# drop duplicates
df2 = df2.reset_index()
df2 = df2.drop_duplicates()
df2 = df2.set_index('index')
If the warning message is an absolute requirement, we can write a function to that effect that checks if a row is a duplicate via a merge operation and appends the row only if it is unique.
def append_unique(df, row):
    d = df.reset_index()
    r = row.reset_index()
    if d.merge(r, on=list(d.columns), how='inner').empty:
        d2 = pd.concat([d, r])
        d2 = d2.set_index('index')
        return d2
    print('non-unique entries for index', list(row.index))
    return df

df2 = append_unique(df2, row)
I have the pandas.DataFrame below:
One of the columns from the Dataframe, pontos, holds a dict for each of the rows.
What I want to do is add one column to the DataFrame for each key from this dict. So the new columns would be, in this example: rodada, mes, etc, and for each row, these columns would be populated with the respective value from the dict.
So far I've tried the following for one of the keys:
df_times["rodada"] = [df_times["pontos"].get('rodada') for d in df_times["pontos"]]
However, as a result I'm getting a new column rodada filled with None values:
Any hints on what I'm doing wrong?
You can create a new dataframe and concat it to the current one like:
Code:
df2 = pd.concat([df, pd.DataFrame(list(df.pontos))], axis=1)
Test Code:
import pandas as pd
df = pd.DataFrame([
['A', dict(col1='1', col2='2')],
['B', dict(col1='3', col2='4')],
], columns=['X', 'D'])
print(df)
df2 = pd.concat([df, pd.DataFrame(list(df.D))], axis=1)
print(df2)
Results:
X D
0 A {'col2': '2', 'col1': '1'}
1 B {'col2': '4', 'col1': '3'}
X D col1 col2
0 A {'col2': '2', 'col1': '1'} 1 2
1 B {'col2': '4', 'col1': '3'} 3 4
You just need a slight change in your comprehension to extract that data.
It should be:
df_times["rodada"] = [d.get('rodada') for d in df_times["pontos"]]
You want the values of the dictionary key 'rodada' to be the basis of your new column, so you iterate over the dictionary entries in the loop (each one is d) and extract the value by key to make the new column.
You can also use join and the pandas apply function:
df = df.join(df['pontos'].apply(pd.Series))
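For instance, a small sketch with made-up data (the column names are illustrative):
import pandas as pd

df = pd.DataFrame({
    'time': ['A', 'B'],
    'pontos': [{'rodada': 1, 'mes': 9}, {'rodada': 2, 'mes': 10}],
})

# apply(pd.Series) expands each dict into its own columns, and join
# attaches them back to the original frame by index
df = df.join(df['pontos'].apply(pd.Series))
print(df)
#   time                    pontos  rodada  mes
# 0    A   {'rodada': 1, 'mes': 9}       1    9
# 1    B  {'rodada': 2, 'mes': 10}       2   10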