Pandas merge returning only null values

I am trying to use pandas to merge a product's packing information with each order record for that product. The DataFrame information is below.
BreakerOrders.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 3774010 entries, 0 to 3774009
Data columns (total 2 columns):
Material object
Quantity float64
dtypes: float64(1), object(1)
memory usage: 86.4+ MB
manh.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1381 entries, 0 to 1380
Data columns (total 4 columns):
Material 1381 non-null object
SUBPACK_QTY 202 non-null float64
PACK_QTY 591 non-null float64
PALLET_QTY 809 non-null float64
dtypes: float64(3), object(1)
memory usage: 43.2+ KB
When attempting the merge using the code below, I get the following table with all NaN values for packaging quantities.
BreakerOrders.merge(manh,how='left',on='Material')
Material Quantity SUBPACK_QTY PACK_QTY PALLET_QTY
HOM230CP 5.0 NaN NaN NaN
QO115 20.0 NaN NaN NaN
QO2020CP 20.0 NaN NaN NaN
QO220CP 50.0 NaN NaN NaN
HOM115CP 50.0 NaN NaN NaN
HOM120 100.0 NaN NaN NaN

I was having the same issue and was able to solve it by just flipping the DataFrames. So instead of:
df2 = df.merge(df1)
try
df2 = df1.merge(df)
It looks silly, but it solved my issue.
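For the frames in the question, the flipped version would look something like the sketch below; a right join from manh keeps every order row, just like the original left join, only with a different column order. The stand-in data here is made up purely to keep the snippet runnable.
import pandas as pd

# Hypothetical stand-ins for BreakerOrders and manh; only the column names
# are taken from the question, the values are invented.
BreakerOrders = pd.DataFrame({"Material": ["HOM230CP", "QO115"],
                              "Quantity": [5.0, 20.0]})
manh = pd.DataFrame({"Material": ["HOM230CP", "QO115"],
                     "SUBPACK_QTY": [None, 10.0],
                     "PACK_QTY": [4.0, 12.0],
                     "PALLET_QTY": [48.0, 144.0]})

# Flipped merge: a right join from manh keeps every order row, which is
# equivalent to BreakerOrders.merge(manh, how='left', on='Material')
# apart from the column order.
merged = manh.merge(BreakerOrders, how="right", on="Material")
print(merged)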

Merge two data frames on three columns in Python

I have two data frames and I would like to merge them on the two columns Latitude and Longitude. The resulting df should include all columns.
df1:
Date Latitude Longitude LST
0 2019-01-01 66.33 17.100 -8.010004
1 2019-01-09 66.33 17.100 -6.675005
2 2019-01-17 66.33 17.100 -21.845003
3 2019-01-25 66.33 17.100 -26.940004
4 2019-02-02 66.33 17.100 -23.035009
... ... ... ... ...
and df2:
Station_Number Date Latitude Longitude Elevation Value
0 CA002100636 2019-01-01 69.5667 -138.9167 1.0 -18.300000
1 CA002100636 2019-01-09 69.5667 -138.9167 1.0 -26.871429
2 CA002100636 2019-01-17 69.5667 -138.9167 1.0 -19.885714
3 CA002100636 2019-01-25 69.5667 -138.9167 1.0 -17.737500
4 CA002100636 2019-02-02 69.5667 -138.9167 1.0 -13.787500
... ... ... ... ... ... ...
I have tried LST_1 = pd.merge(df1, df2, how='inner'), but merging that way I lose several data points that are present in both data frames.
I am not sure whether you want to merge on a specific column; if so, you need to pick one with overlapping identifiers, for instance the "Date" column.
df_ = pd.merge(df1, df2, on="Date")
print(df_)
Date Latitude_x Longitude_x ... Longitude_y Elevation Value
0 01.01.2019 66.33 17.1 ... -138.9167 1.0 -18.300000
1 09.01.2019 66.33 17.1 ... -138.9167 1.0 -26.871429
2 17.01.2019 66.33 17.1 ... -138.9167 1.0 -19.885714
3 25.01.2019 66.33 17.1 ... -138.9167 1.0 -17.737500
4 02.02.2019 66.33 17.1 ... -138.9167 1.0 -13.787500
[5 rows x 9 columns]
<class 'pandas.core.frame.DataFrame'>
Int64Index: 5 entries, 0 to 4
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Date 5 non-null object
1 Latitude_x 5 non-null float64
2 Longitude_x 5 non-null float64
3 LST 5 non-null object
4 Station_Number 5 non-null object
5 Latitude_y 5 non-null int64
6 Longitude_y 5 non-null int64
7 Elevation 5 non-null float64
8 Value 5 non-null object
dtypes: float64(3), int64(2), object(4)
memory usage: 400.0+ bytes
Because both frames have columns with the same names, pandas appends the suffixes _x and _y to Latitude and Longitude.
If you want all the columns and the data in each row is independent of the others, you can use pd.concat instead. However, this will create some NaN values due to missing data.
df_1 = pd.concat([df1, df2])
print(df_1)
Date Latitude Longitude ... Station_Number Elevation Value
0 01.01.2019 66.33 17.1 ... NaN NaN NaN
1 09.01.2019 66.33 17.1 ... NaN NaN NaN
2 17.01.2019 66.33 17.1 ... NaN NaN NaN
3 25.01.2019 66.33 17.1 ... NaN NaN NaN
4 02.02.2019 66.33 17.1 ... NaN NaN NaN
0 01.01.2019 69.56 -138.9167 ... CA002100636 1.0 -18.300000
1 09.01.2019 69.56 -138.9167 ... CA002100636 1.0 -26.871429
2 17.01.2019 69.56 -138.9167 ... CA002100636 1.0 -19.885714
3 25.01.2019 69.56 -138.9167 ... CA002100636 1.0 -17.737500
4 02.02.2019 69.56 -138.9167 ... CA002100636 1.0 -13.787500
df_1.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 10 entries, 0 to 4
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Date 10 non-null object
1 Latitude 10 non-null float64
2 Longitude 10 non-null float64
3 LST 5 non-null object
4 Station_Number 5 non-null object
5 Elevation 5 non-null float64
6 Value 5 non-null object
dtypes: float64(3), object(4)
memory usage: 640.0+ bytes
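If the goal really is to merge on the coordinate columns themselves, as the question asks, and to keep every data point from both frames, a multi-column key with an outer join is one option. A minimal sketch with small hypothetical stand-ins for df1 and df2 (only the column names come from the question):
import pandas as pd

df1 = pd.DataFrame({"Date": ["2019-01-01", "2019-01-09"],
                    "Latitude": [66.33, 66.33],
                    "Longitude": [17.1, 17.1],
                    "LST": [-8.01, -6.675]})
df2 = pd.DataFrame({"Station_Number": ["CA002100636", "CA002100636"],
                    "Date": ["2019-01-01", "2019-01-09"],
                    "Latitude": [69.5667, 69.5667],
                    "Longitude": [-138.9167, -138.9167],
                    "Elevation": [1.0, 1.0],
                    "Value": [-18.3, -26.871429]})

# An outer join on all shared key columns keeps every row from both frames
# and fills NaN where the keys do not line up, so no data points are dropped
# the way an inner join drops them.
LST_all = pd.merge(df1, df2, how="outer", on=["Date", "Latitude", "Longitude"])
print(LST_all)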

TypeError: '<=' not supported between instances of 'str' and 'int'

I am using Python 3 and working with several files where some of my data (AYield & BYield) is missing and treated as NaN. However, when I run the last line of the code below, I get an error. Both the Ask and Bid data frames contain the same rows and columns. Thank you.
Askyield = pd.read_excel("AYield.xlsx",na_values=["NaN"])
Bidyield = pd.read_excel("BYield.xlsx",na_value=["NaN"])
matchedbond_info = pd.read_excel("matched_bonds.xlsx")
Askyield = pd.merge(matchedbond_info, Askyield, on = ['ISIN'])
Bidyield = pd.merge(matchedbond_info, Bidyield, on = ['ISIN'])
date_list = []
for i in range(len(Bidyield.columns)):
    if isinstance(Bidyield.columns[i], dt.datetime):
        date_list.append(Bidyield.columns[i])
matchedbond_info = Bidyield.drop(columns=date_list)
bid_yield.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 1236 entries, 0 to 1235
Columns: 1566 entries, 2019-12-31 00:00:00 to 2014-01-01 00:00:00
dtypes: float64(1566)
bid_yield = Bidyield[date_list]
ask_yield = Askyield[date_list]
bid_yield.head()
2019-12-31 2019-12-30 2019-12-27 ... 2014-01-03 2014-01-02 2014-01-01
0 NaN NaN NaN ... NaN NaN NaN
1 3.119 3.084 3.081 ... NaN NaN NaN
2 NaN NaN NaN ... NaN NaN NaN
3 NaN NaN NaN ... NaN NaN NaN
4 NaN NaN NaN ... NaN NaN NaN
[5 rows x 1566 columns]
bid_yield = bid_yield.mask((bid_yield >0) & (ask_yield <0))
Then I get the following
TypeError: '<' not supported between instances of 'str' and 'int'
I can reproduce this error with this example:
import pandas as pd
df = pd.DataFrame(dict(x=["5", "10"],
                       y=[1, 4]))
df.dtypes
# x object
# y int64
# dtype: object
df[df.x > df.y]
# TypeError: '>' not supported between instances of 'str' and 'int'
You probably need to convert one of the columns to float.
In this example:
df['x'] = df.x.astype("float")
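Applied to the question's frames, one hedged option (assuming the offending cells are literal strings such as "NaN" left over from the read_excel call) is to coerce everything to numeric before the comparison:
import pandas as pd

# Hypothetical stand-in for bid_yield where some cells were read as the
# literal string "NaN" instead of a real missing value.
bid_yield = pd.DataFrame({"2019-12-31": ["NaN", "3.119"],
                          "2019-12-30": ["NaN", "3.084"]})

# Coerce every column to numeric; unparseable strings become NaN,
# so comparisons like (bid_yield > 0) no longer mix str and int.
bid_yield = bid_yield.apply(pd.to_numeric, errors="coerce")
print(bid_yield.dtypes)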

How to add a column name for Timeseries when it is indexed

Input: df.info()
Output:
<class 'pandas.core.frame.DataFrame'>
Index: 100 entries, 2019-01-16 to 2018-08-23 - I want to add this index as the first column for my analysis.
Data columns (total 5 columns):
open 100 non-null float64
high 100 non-null float64
low 100 non-null float64
close 100 non-null float64
volume 100 non-null float64
dtypes: float64(5)
memory usage: 9.7+ KB
df = df.assign(time=df.index)  # copy the date index into a regular 'time' column
df.head()
Out[76]:
1. open 2. high 3. low 4. close 5. volume time
2019-01-16 105.2600 106.2550 104.9600 105.3800 29655851 2019-01-16
2019-01-15 102.5100 105.0500 101.8800 105.0100 31587616 2019-01-15
2019-01-14 101.9000 102.8716 101.2600 102.0500 28437079 2019-01-14
2019-01-11 103.1900 103.4400 101.6400 102.8000 28314202 2019-01-11
2019-01-10 103.2200 103.7500 102.3800 103.6000 30067556 2019-01-10
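Another common route (an alternative sketch, not necessarily what was used above) is to name the index and move it into a regular column with reset_index:
import pandas as pd

# Small stand-in frame with a date index, mirroring the question's layout.
df = pd.DataFrame({"open": [105.26, 102.51], "close": [105.38, 105.01]},
                  index=pd.to_datetime(["2019-01-16", "2019-01-15"]))

df.index.name = "time"   # give the index a name so the new column is labelled
df = df.reset_index()    # move the index into a regular 'time' column
print(df.head())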

Pandas df.fillna(method = 'pad') not working on 28000 row df

I have been trying to replace the NaN values in my DataFrame with the last valid value, however this does not seem to do the job. Just wondering if anyone else has this same issue or what could be causing the problem.
In [16]: ABCW.info()
Out[16]:<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 692 entries, 2014-10-22 10:30:00 to 2015-05-21 16:00:00
Data columns (total 6 columns):
Price 692 non-null float64
Volume 692 non-null float64
Symbol_Num 692 non-null object
Actual Price 577 non-null float64
Market Cap Rank 577 non-null float64
Market Cap 577 non-null float64
dtypes: float64(5), object(1)
memory usage: 37.8+ KB
In [18]: ABCW.fillna(method = 'pad')
In [19]: ABCW.info()
Out [19]: <class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 692 entries, 2014-10-22 10:30:00 to 2015-05-21 16:00:00
Data columns (total 6 columns):
Price 692 non-null float64
Volume 692 non-null float64
Symbol_Num 692 non-null object
Actual Price 577 non-null float64
Market Cap Rank 577 non-null float64
Market Cap 577 non-null float64
dtypes: float64(5), object(1)
memory usage: 37.8+ KB
There is no change in the number of non-null values, and all the preexisting NaN values are still in the data frame.
You are using the 'pad' method. This is basically a forward fill. See examples at http://pandas.pydata.org/pandas-docs/stable/missing_data.html
I am reproducing the relevant example here,
In [33]: df
Out[33]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h NaN -0.706771 -1.039575
In [34]: df.fillna(method='pad')
Out[34]:
one two three
a NaN -0.282863 -1.509059
c NaN 1.212112 -0.173215
e 0.119209 -1.044236 -0.861849
f -2.104569 -0.494929 1.071804
h -2.104569 -0.706771 -1.039575
This method will not do a backfill. You should also consider doing a backfill if you want all your NaNs to go away. Also, inplace=False by default, so you probably want to assign the result of the operation back to ABCW, like so:
ABCW = ABCW.fillna(method = 'pad')
ABCW = ABCW.fillna(method = 'bfill')
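In newer pandas versions the same two steps can be written with the ffill/bfill shortcuts; a small self-contained sketch:
import numpy as np
import pandas as pd

# Stand-in frame with leading and trailing gaps, loosely mirroring ABCW.
ABCW = pd.DataFrame({"Actual Price": [np.nan, 10.5, np.nan, 11.0, np.nan]})

# Forward-fill, then back-fill so leading NaNs are covered too; assign the
# result back, since neither call modifies the frame in place.
ABCW = ABCW.ffill().bfill()
print(ABCW)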

Pandas import excel export HDF5

I am working with pandas and PyTables. I begin by importing a table from Excel containing columns of integers and floats, as well as other columns containing strings and even tuples. There are a limited number of options for the Excel import and, unfortunately, unlike the csv import process, datatypes cannot be specified during the import and must be converted from their inferred types afterwards.
That being said, all non-numeric data is apparently imported as unicode text, which is incompatible with a later export to HDF5. Is there a simple way to convert all unicode columns (as well as all column headings) to an HDF5-compatible string format?
MORE DETAILS:
>>> metaFrame.head()
ProjectName Company ContactName \
LocationID
935 PCS Petaluma High School Site Testco Test Name
937 PCS Casa Grande High School Testco Test Name
3465 FUSD Fowler High School Testco Test Name
3466 FUSD Sutter Middle School Testco Test Name
3467 FUSD Fremont Elementary School Testco Test Name
Contactemail \
LocationID
935 test.address#email.com
937 test.address#email.com
3465 test.address#email.com
3466 test.address#email.com
3467 test.address#email.com
Link Systemsize(kW) \
LocationID
935 https://internal.testco.com/locations/935/syst... NaN
937 https://internal.testco.com/locations/937/syst... 675.39
3465 https://internal.testco.com/locations/3465/sys... 384.30
3466 https://internal.testco.com/locations/3466/sys... 198.90
3467 https://internal.testco.com/locations/3467/sys... 35.10
SystemCheckStartdate SystemCheckActive \
LocationID
935 2013-10-01 00:00:00 True
937 2013-10-01 00:00:00 True
3465 2013-10-01 00:00:00 True
3466 2013-10-01 00:00:00 True
3467 2013-10-01 00:00:00 True
YTDProductionPriortostartdate NumberofInverters/cktsmonitored \
LocationID
935 NaN NaN
937 NaN NaN
3465 NaN NaN
3466 NaN NaN
3467 NaN NaN
InverterMfg InverterModel \
LocationID
935 PV Powered : PVP260KW NaN
937 PV Powered : PVP260KW NaN
3465 Advanced Energy Industries : Solaron 333kW (31... NaN
3466 PV Powered : PVP260KW NaN
3467 PV Powered : PVP35KW-480 NaN
InverterCECefficiency ModuleMfg Modulemodel \
LocationID
935 97.0 NaN NaN
937 97.0 NaN NaN
3465 97.5 NaN NaN
3466 97.0 NaN NaN
3467 96.0 NaN NaN
Moduleirradiancefactor Moduleirradiancefactorslope \
LocationID
935 NaN NaN
937 NaN NaN
3465 NaN NaN
3466 NaN NaN
3467 NaN NaN
StraightLineIntercept ModuleTemp-PwrDerate MeterDK
LocationID
935 NaN 0.005 3291 ...
937 NaN 0.005 11548 ...
3465 NaN 0.005 19248 ...
3466 NaN 0.005 15846 ...
3467 NaN 0.005 15847 ...
[5 rows x 27 columns]
>>> metaFrame.info()
<class 'pandas.core.frame.DataFrame'>
Int64Index: 43 entries, 935 to 5844
Data columns (total 27 columns):
ProjectName 43 non-null values
Company 43 non-null values
ContactName 43 non-null values
Contactemail 43 non-null values
Link 43 non-null values
Systemsize(kW) 42 non-null values
SystemCheckStartdate 37 non-null values
SystemCheckActive 43 non-null values
YTDProductionPriortostartdate 0 non-null values
NumberofInverters/cktsmonitored 2 non-null values
InverterMfg 42 non-null values
InverterModel 8 non-null values
InverterCECefficiency 33 non-null values
ModuleMfg 0 non-null values
Modulemodel 0 non-null values
Moduleirradiancefactor 0 non-null values
Moduleirradiancefactorslope 0 non-null values
StraightLineIntercept 0 non-null values
ModuleTemp-PwrDerate 43 non-null values
MeterDK 43 non-null values
Genfieldname 43 non-null values
WSDK 34 non-null values
WSirradianceField 43 non-null values
WSCellTempField 43 non-null values
MiscDerate 1 non-null values
InverterDKs 37 non-null values
Invertergenfields 37 non-null values
dtypes: bool(1), datetime64[ns](1), float64(9), object(16)
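One way to attempt the conversion the question asks about (a hedged sketch, not a confirmed fix for this dataset) is to cast every object column, and the column labels themselves, to plain str before writing with to_hdf:
import pandas as pd

# Hypothetical stand-in for metaFrame; the real frame is read from Excel
# as described in the question.
metaFrame = pd.DataFrame({"ProjectName": ["PCS Petaluma High School Site"],
                          "Systemsize(kW)": [675.39]})

# Cast text (object) columns to str and do the same for the column labels,
# so no non-string objects remain in labels or text cells.
obj_cols = metaFrame.select_dtypes(include="object").columns
metaFrame[obj_cols] = metaFrame[obj_cols].astype(str)
metaFrame.columns = [str(c) for c in metaFrame.columns]

# Write out via PyTables; the file name and key here are placeholders.
metaFrame.to_hdf("meta.h5", key="metaFrame")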
