pandas tseries conversion not working - python

I am new to Python and I am trying to build a time series from this CSV data. Based on internet and Stack Overflow research, 'result' should have been a
<class 'pandas.tseries.index.DatetimeIndex'>,
but my output is not a converted time series. Why is it not converting, and how do I convert it? Thanks for the help in advance.
import pandas as pd
import numpy as np
import matplotlib.pylab as plt
data = pd.read_csv('somedata.csv')
print data.head()
#selecting specific columns by column name
df1 = data[['a','b']]
#converting the data to time series
dates = pd.date_range('2015-01-01', '2015-12-31', freq='H')
dates #preview
results:
DatetimeIndex(['2015-01-01 00:00:00', '2015-01-01 01:00:00',
...
'2015-12-30 23:00:00', '2015-12-31 00:00:00'],
dtype='datetime64[ns]', length=2161, freq='H')
The above works, however I get the error below:
df1 = Series(df1[:,2], index=dates)
output:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'Series' is not defined
After attempting the pd.Series...
df1 = pd.Series(df1[:,2], index=dates)
Error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/someid/miniconda2/lib/python2.7/site- packages/pandas/core/frame.py", line 1992, in __getitem__
return self._getitem_column(key)
File "/home/someid/miniconda2/lib/python2.7/site- packages/pandas/core/frame.py", line 1999, in _getitem_column
return self._get_item_cache(key)
File "/home/someid/miniconda2/lib/python2.7/site- packages/pandas/core/generic.py", line 1343, in _get_item_cache
res = cache.get(item)
TypeError: unhashable type

You do need pd.Series. However, you were also doing something else wrong. I'm assuming you want to get all rows of the 2nd column of df1 and return a pd.Series with an index of dates.
Solution
df1 = pd.Series(df1.iloc[:, 1], index=dates)
Explanation
df1.iloc is used to return a slice of df1 by row/column position.
[:, 1] selects all rows of the 2nd column.
Also, df1.iloc[:, 1] returns a pd.Series and can be passed into the pd.Series constructor.
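One caveat: when a Series is passed to the pd.Series constructor together with index=dates, pandas aligns it on its existing index rather than simply relabeling it, which can produce NaNs. A minimal sketch that sidesteps this, assuming the hypothetical somedata.csv yields exactly as many rows as dates has entries:
import pandas as pd

data = pd.read_csv('somedata.csv')                 # hypothetical file from the question
df1 = data[['a', 'b']]
dates = pd.date_range('2015-01-01', '2015-12-31', freq='H')
# .values drops the original integer index, so the values are simply relabeled with `dates`
ts = pd.Series(df1.iloc[:, 1].values, index=dates)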

Related

Conversion RGB to xyY with colormath

With colormath I make a conversion from an RGB to an xyY value. It works fine for a single RGB value, but I can't find the right code to do the conversion for multiple RGB values imported from an Excel file. I use the following code:
from colormath.color_objects import sRGBColor, xyYColor
from colormath.color_conversions import convert_color
import pandas as pd
data = pd.read_excel(r'C:/Users/User/Desktop/Color/Fontane/RGB/FontaneHuco.xlsx')
df = pd.DataFrame(data, columns=['R', 'G', 'B'])
#print(df)
rgb = sRGBColor(df['R'],df['G'],df['B'], is_upscaled=True)
xyz = convert_color(rgb, xyYColor)
print(xyz)
But when I run this code I receive the following error:
Traceback (most recent call last):
File "C:\Users\User\PycharmProjects\pythonProject4\Overige\Chroma.py", line 9, in <module>
lab = sRGBColor(df['R'], df['G'], df['B'])
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\colormath\color_objects.py", line 524, in __init__
self.rgb_r = float(rgb_r)
File "C:\Users\User\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\series.py", line 141, in wrapper
raise TypeError(f"cannot convert the series to {converter}")
TypeError: cannot convert the series to <class 'float'>
Does anyone have an idea how to fix this problem?
sRGBColor (and hence convert_color) expects scalar floats, and you're giving it DataFrame columns instead. You need to apply the conversion one row at a time, which can be done as follows:
xyz = df.apply(
    lambda row: convert_color(
        sRGBColor(row.R, row.G, row.B, is_upscaled=True), xyYColor
    ),
    axis=1,
)
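If you then want the components as separate numeric columns rather than a Series of colour objects, one possible follow-up (a sketch, assuming colormath's xyYColor exposes the attributes xyy_x, xyy_y and xyy_Y):
# Unpack each xyYColor object into separate numeric columns
df['x'] = [c.xyy_x for c in xyz]
df['y'] = [c.xyy_y for c in xyz]
df['Y'] = [c.xyy_Y for c in xyz]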

python - Getting error while taking difference between two dates in columns

This is my code. I am trying to get the number of business days between two dates; the result should be saved in a new column 'nd'.
import pandas as pd
import numpy as np
df1 = pd.DataFrame(pd.date_range('2020-01-01',periods=26,freq='D'),columns=['A'])
df2 = pd.DataFrame(pd.date_range('2020-02-01',periods=26,freq='D'),columns=['B'])
df = pd.concat([df1,df2],axis=1)
# Iterate over each row of the DataFrame
for index, row in df.iterrows():
    bc = np.busday_count(row['A'], row['B'])
    df['nd'] = bc
I am getting this error.
Traceback (most recent call last):
File "<input>", line 35, in <module>
File "<__array_function__ internals>", line 5, in busday_count
TypeError: Iterator operand 0 dtype could not be cast from dtype('<M8[us]') to dtype('<M8[D]') according to the rule 'safe'
Is there a way to fix it or another way to get the solution?
busday_count only accepts dates, not datetimes. Change
bc = np.busday_count(row['A'],row['B'])
to
bc = np.busday_count(row['A'].date(), row['B'].date())
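As a possible vectorized alternative (a sketch, which also avoids the separate issue that the loop overwrites the whole 'nd' column with a single scalar on each iteration), the columns can be cast to day precision and passed to busday_count in one call:
import numpy as np
import pandas as pd

df1 = pd.DataFrame(pd.date_range('2020-01-01', periods=26, freq='D'), columns=['A'])
df2 = pd.DataFrame(pd.date_range('2020-02-01', periods=26, freq='D'), columns=['B'])
df = pd.concat([df1, df2], axis=1)

# busday_count accepts arrays of dtype datetime64[D], so cast once and compute every row at once
df['nd'] = np.busday_count(df['A'].values.astype('datetime64[D]'),
                           df['B'].values.astype('datetime64[D]'))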

Error with date_range using datetime subselection

I need to create a vector of dates with pd.date_range, specifying the min and the max of the date values.
The date values come from a subselection performed on a DataFrame object ds.
This is the code I wrote.
Note that the Date values in ds are obtained from:
ds = pd.read_excel("data.xlsx",sheet_name='all') # Read the Excel file
ds['Date'] = pd.to_datetime(ds['Date'], infer_datetime_format=True)
This is the part inside a for loop, where x loops over a list of names:
for x in lofNames:
    date_tmp = ds.loc[ds['Security Name']==x,['Date']]
    mindate = date_tmp.min()
    maxdate = date_tmp.max()
    date = pd.date_range(start=mindate, end=maxdate, freq='D')
This is the error I get:
Traceback (most recent call last):
File "<ipython-input-8-1f56d07b5a74>", line 4, in <module>
date = pd.date_range(start=mindate, end=maxdate, freq='D')
File "/Users/marco/opt/anaconda3/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py", line 1180, in date_range
**kwargs,
File "/Users/marco/opt/anaconda3/lib/python3.7/site-packages/pandas/core/arrays/datetimes.py", line 365, in _generate_range
start = Timestamp(start)
File "pandas/_libs/tslibs/timestamps.pyx", line 418, in pandas._libs.tslibs.timestamps.Timestamp.__new__
File "pandas/_libs/tslibs/conversion.pyx", line 329, in pandas._libs.tslibs.conversion.convert_to_tsobject
TypeError: Cannot convert input [Date 2007-01-09
dtype: datetime64[ns]] of type <class 'pandas.core.series.Series'> to Timestamp
What's wrong?
Thank you.
Here a one-column DataFrame is returned instead of a Series, so the subsequent min and max return a one-item Series instead of a scalar, which is why the error is raised:
date_tmp = ds.loc[ds['Security Name']==x,['Date']]
The correct way is to remove the []:
date_tmp = ds.loc[ds['Security Name']==x,'Date']
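Putting it together, the loop might look like this (a minimal sketch, assuming ds and lofNames are defined as in the question):
for x in lofNames:
    date_tmp = ds.loc[ds['Security Name'] == x, 'Date']        # Series, not a one-column DataFrame
    mindate = date_tmp.min()                                    # now a scalar Timestamp
    maxdate = date_tmp.max()
    date = pd.date_range(start=mindate, end=maxdate, freq='D')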

Python - Pandas - resample issue

I am trying to adapt a pandas Series with a certain frequency to a pandas Series with a different frequency. For this I used the resample function, but it does not recognize, for instance, that 'M' is a sub-period of '3M' and raises an error.
import pandas as pd
idx_1 = pd.period_range('2017-01-01', periods=6, freq='M')
data_1 = pd.Series(range(6), index=idx_1)
data_higher_freq = data_1.resample('3M', kind="Period").sum()
Raises the following exception:
Traceback (most recent call last):
File "/home/mitch/Programs/Infrastructure_software/Sandbox/spyderTest.py", line 15, in <module>
data_higher_freq = data_1.resample('3M', kind="Period").sum()
File "/home/mitch/anaconda3/lib/python3.6/site-packages/pandas/core/resample.py", line 758, in f
return self._downsample(_method, min_count=min_count)
File "/home/mitch/anaconda3/lib/python3.6/site-packages/pandas/core/resample.py", line 1061, in _downsample
'sub or super periods'.format(ax.freq, self.freq))
pandas._libs.tslibs.period.IncompatibleFrequency: Frequency <MonthEnd> cannot be resampled to <3 * MonthEnds>, as they are not sub or super periods
This seems to be due to the pd.tseries.frequencies.is_subperiod function:
import pandas as pd
pd.tseries.frequencies.is_subperiod('M', '3M')
pd.tseries.frequencies.is_subperiod('M', 'Q')
Indeed it returns False for the first command and True for the second.
I would really appreciate any hints about a solution.
Thanks.
Try changing from a PeriodIndex to a DatetimeIndex before resampling:
import pandas as pd
idx_1 = pd.period_range('2017-01-01', periods=6, freq='M')
data_1 = pd.Series(range(6), index=idx_1)
data_1.index = data_1.index.astype('datetime64[ns]')
data_higher_freq = data_1.resample('3M', kind='period').sum()
Output:
data_higher_freq
Out[582]:
2017-01 3
2017-04 12
Freq: 3M, dtype: int64
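An equivalent, slightly more explicit way to do the index conversion (a sketch; to_timestamp is the dedicated PeriodIndex-to-DatetimeIndex method and should give the same result here):
import pandas as pd

idx_1 = pd.period_range('2017-01-01', periods=6, freq='M')
data_1 = pd.Series(range(6), index=idx_1)

# to_timestamp turns the PeriodIndex into a DatetimeIndex, after which '3M' resampling is allowed
data_higher_freq = data_1.to_timestamp().resample('3M', kind='period').sum()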

Pandas Data Frame Merge

I am new to Pandas. I am trying to make a data set with ZIP Code, Population in that ZIP Code, and Number of Counties in the ZIP Code.
I get the data from census website: https://www2.census.gov/geo/docs/maps-data/data/rel/zcta_county_rel_10.txt
I am trying the following code, but it is not working. Could you help me figure out the correct code? I have a hunch that the error is related to the data frame or the data types, but I cannot work out the correct code to make it right. Please let me know your thoughts. Thank you in advance!
import pandas as pd
df = pd.read_csv("zcta_county_rel_10.txt", dtype={'ZCTA5': str, 'STATE': str, 'COUNTY': str}, usecols=['ZCTA5', 'STATE', 'COUNTY', 'ZPOP'])
zcta_pop = df.drop_duplicates(subset={'ZCTA5', 'ZPOP'}).drop(['STATE', 'COUNTY'], 1)
zcta_ct_county = df['ZCTA5'].value_counts()
zcta_ct_county.columns = ['ZCTA5', 'CT_COUNTY']
pre_merge_1 = pd.merge(zcta_pop, zcta_ct_county, on='ZCTA5')[['ZCTA5', 'ZPOP', 'CT_COUNTY']]
Here is my error message:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/python27/lib/python2.7/site-packages/pandas/tools/merge.py", line 58, in merge copy=copy, indicator=indicator)
File "/usr/local/python27/lib/python2.7/site-packages/pandas/tools/merge.py", line 473, in __init__ 'type {0}'.format(type(right)))
ValueError: can not merge DataFrame with instance of type <class 'pandas.core.series.Series'>
SOLUTION
import pandas as pd
df = pd.read_csv("zcta_county_rel_10.txt", dtype={'ZCTA5': str, 'STATE': str, 'COUNTY': str}, usecols=['ZCTA5', 'STATE', 'COUNTY', 'ZPOP'])
zcta_pop = df.drop_duplicates(subset={'ZCTA5', 'ZPOP'}).drop(['STATE', 'COUNTY'], 1)
zcta_ct_county = df['ZCTA5'].value_counts().reset_index()
zcta_ct_county.columns = ['ZCTA5', 'CT_COUNTY']
pre_merge_1 = pd.merge(zcta_pop, zcta_ct_county, on='ZCTA5')[['ZCTA5', 'ZPOP', 'CT_COUNTY']]
I think you need to add reset_index, because the output of value_counts is a Series, and merge needs a DataFrame with 2 columns:
zcta_ct_county = df['ZCTA5'].value_counts().reset_index()
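As an alternative (a sketch; the named-aggregation syntax used here needs pandas 0.25 or later, so it would not run on the Python 2.7 setup in the traceback), the population and the county count can be produced in a single groupby pass, assuming ZPOP is constant within each ZCTA5 as in the relationship file:
import pandas as pd

df = pd.read_csv("zcta_county_rel_10.txt",
                 dtype={'ZCTA5': str, 'STATE': str, 'COUNTY': str},
                 usecols=['ZCTA5', 'STATE', 'COUNTY', 'ZPOP'])

# One output row per ZIP: keep the (constant) population and count the county rows
pre_merge_1 = (df.groupby('ZCTA5')
                 .agg(ZPOP=('ZPOP', 'first'), CT_COUNTY=('COUNTY', 'size'))
                 .reset_index())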
