I have the following data frame (picture attached). I am trying to group the data into rolling date categories based on the received date, and I need the code to work so that when a new piece of data is added to the source, it still falls into the right group: 0-3 months, 3-6 months, 6-9 months, 9-12 months, and 12+ months. Can someone please walk me through how to do this? I have searched high and low for days and I just don't understand how to do it. I have tried grouping the dates based on receive date, but that doesn't account for new dates being added. My ultimate goal is to deploy this in a Dash app: I have a ton of other graphs already deployed, but a handful of them need these datetime buckets, and I can't build them until I figure this part out.
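One way to get rolling buckets is to compute each row's age in days relative to today at run time, then bin with pd.cut; because the ages are recomputed on every run, newly added rows always fall into the correct bucket. This is a minimal sketch with a hypothetical "received" column (the real column name and data will differ), approximating 3 months as 90 days:

```python
import pandas as pd

def age_bucket(received, today=None):
    """Assign each received date to a rolling age bucket relative to `today`."""
    if today is None:
        today = pd.Timestamp.today().normalize()
    received = pd.to_datetime(received)
    age_days = (today - received).dt.days
    # 3-month buckets approximated as 90-day blocks; ages are recomputed
    # against today's date on every run, so new rows land in the right bucket.
    bins = [-1, 90, 180, 270, 365, float("inf")]
    labels = ["0-3 months", "3-6 months", "6-9 months", "9-12 months", "12+ months"]
    return pd.cut(age_days, bins=bins, labels=labels)

# Hypothetical data frame with a "received" column:
df = pd.DataFrame({"received": ["2024-01-15", "2023-06-01"]})
df["bucket"] = age_bucket(df["received"], today=pd.Timestamp("2024-03-01"))
```

In a Dash app you would call this inside the callback that builds the figure, so the buckets refresh on every page load.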
My question is something I haven't encountered anywhere: I've been wondering whether it is possible for a TF model to estimate values between two dates that have real, validated values assigned to them.
Here's an example. Let's take the price of Nickel; here's its chart for the last week:
There is no data for the two following dates, 19/11 and 20/11, but we have the data points before and after. So is it possible to use the data from before and after these two points to estimate the values for the two missing dates?
Thanks a lot!
It would be possible to create a machine learning model to predict the prices given a dataset of previous prices. Take a look at this post for instance. You would have to modify it slightly such that it predicts the prices in the gaps given previous and upcoming prices.
But for the example you gave, assuming the dates are from this year (2022), those are a Saturday and a Sunday; the stock market is closed on weekends, so there is no price for the item on those days. Also note that there are other days in the year when no trading occurs, such as holidays, and then there is no price either.
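If the goal is just to fill those non-trading days with plausible values (for plotting, for instance) rather than to forecast, a full ML model is likely unnecessary: pandas can interpolate across the gap. A minimal sketch with hypothetical prices (the real nickel values will differ):

```python
import pandas as pd

# Hypothetical nickel prices, with the 19/11 and 20/11 weekend missing:
prices = pd.Series(
    [24000.0, 24150.0, 23980.0],
    index=pd.to_datetime(["2022-11-17", "2022-11-18", "2022-11-21"]),
)
# Reindex to a full daily range, then interpolate linearly across the gap.
full = prices.reindex(pd.date_range("2022-11-17", "2022-11-21"))
filled = full.interpolate(method="time")
```

This draws a straight line between the last value before the gap and the first value after it, which is usually the sensible default for a two-day hole.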
I have time series data. How do I get the period?
The graph in the first picture and the first graph in the second picture are the same; please ignore the second graph of the second picture.
We want to measure micro-currents while a person works out. This chart shows a man pressing the sensor with his finger, because we haven't built a good sensor yet. To remove noise, my team set every value below 400 to 0, but we can restore the original data. The signal seems to contain 7 similar periods.
I have tried:

https://github.com/gsubramani/SignalRecognition
but this has an error; the code did not work.

https://github.com/guillaume-chevalier/seq2seq-signal-prediction
My computer has no GPU, so I couldn't test it; it raised errors.

https://github.com/tbnsilveira/STFT_analysis/blob/master/STFT_sinusoidal_signal.ipynb
This doesn't seem to show how to get periods.
I use Python. Any help would be appreciated! Thank you in advance.
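A common way to estimate the period of a roughly repeating signal, without any of the repositories above, is the autocorrelation function: the lag of its first strong peak after lag 0 is the period in samples. A minimal numpy sketch on a synthetic signal (your sensor data and the minimum-lag threshold are assumptions you would adapt):

```python
import numpy as np

def estimate_period(x, min_lag=1):
    """Estimate the dominant period (in samples) via autocorrelation."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    # Autocorrelation for non-negative lags, normalized so acf[0] == 1.
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    # The strongest peak past `min_lag` is the period estimate; min_lag
    # skips the trivial peak at lag 0 and very short noise correlations.
    return min_lag + int(np.argmax(acf[min_lag:]))

# Synthetic example: a sine wave with a known period of 50 samples.
t = np.arange(500)
signal = np.sin(2 * np.pi * t / 50)
period = estimate_period(signal, min_lag=10)
```

For noisy sensor data you would typically smooth the signal first and pick `min_lag` a bit below the shortest period you consider plausible; with 7 periods visible in your chart, a reasonable starting guess is the record length divided by 7.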
I'm trying to find the price change between two dates using yfinance. My code currently shows a matplotlib graph of a selected stock from the start to the end of the Great Recession, but I can't figure out how to get just one day of data and store it in a variable. Is there any way to store the closing price of a certain date, or to get the price change between two dates?
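One approach is to keep the downloaded DataFrame and look up single dates with `Series.asof`, which also handles dates that fall on non-trading days by returning the last close before them. This sketch uses a tiny synthetic frame so it runs without a network call; the commented lines show how the same frame would come from yfinance (the ticker and dates are placeholders):

```python
import pandas as pd

def close_on(data: pd.DataFrame, date: str) -> float:
    """Closing price on `date`, or on the last trading day before it."""
    return float(data["Close"].asof(pd.Timestamp(date)))

def price_change(data: pd.DataFrame, start: str, end: str) -> float:
    return close_on(data, end) - close_on(data, start)

# With yfinance you would fetch `data` roughly like this:
#   import yfinance as yf
#   data = yf.download("^GSPC", start="2007-12-01", end="2009-06-30")
# Here we use a small synthetic frame so the sketch is self-contained.
data = pd.DataFrame(
    {"Close": [100.0, 98.0, 95.0]},
    index=pd.to_datetime(["2007-12-03", "2007-12-04", "2007-12-05"]),
)
```

So `close_on(data, "2007-12-04")` stores one day's close in a variable, and `price_change(data, start, end)` gives the difference between any two dates.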
I'm an environmental engineer trying to make the leap to data science, which interests me more.
I'm new to Python. I work at a company that evaluates air-quality data, and I think that if I automate the analysis I can save some time.
I've imported CSV files with environmental data from the past month, applied some filters just to make sure the data were okay, and did a groupby to analyse the data day by day (I need that for my report to the regulatory agency).
Step by step, here is what I did:
medias = tabela.groupby(by=["Data"]).mean()
display(tabela)
As you can see, there's a column named Data, but when I check the info, Data is not recognized as a column:
print(medias.info())
How can I solve this? I need to plot some graphs with the concentrations of rain and dust per day.
After grouping, do a reset_index:
medias = tabela.groupby(by=["Data"]).mean()
medias = medias.reset_index()
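The reason Data "disappears" is that groupby turns the grouping key into the index, and DataFrame.info() only lists columns; reset_index moves it back into a regular column. A minimal runnable sketch with hypothetical column names standing in for your CSV:

```python
import pandas as pd

# Hypothetical raw data shaped like the question's `tabela`:
tabela = pd.DataFrame({
    "Data": ["2023-01-01", "2023-01-01", "2023-01-02"],
    "dust": [10.0, 14.0, 8.0],
    "rain": [0.0, 2.0, 1.0],
})
medias = tabela.groupby(by=["Data"]).mean()
medias = medias.reset_index()  # "Data" becomes a regular column again
# Now plotting per day works directly, e.g.:
# medias.plot(x="Data", y=["dust", "rain"], kind="bar")
```

An equivalent one-step alternative is `tabela.groupby("Data", as_index=False).mean()`, which keeps the key as a column from the start.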
OK, I'll ask in more detail. I'm updating the question and adding an image as well. I have job-vacancy data by sector, as in the picture. The first column is dates and serves as the index; the other 18 columns are the job-vacancy data for the sectors.
Now my question is this: when I chart calculations such as seasonality and moving averages, I get 18 sets of tables, one per sector, for example healthcare or mining.
I have exactly 18 of these sets of three tables, and by the end of the data-preprocessing stage I will have almost hundreds of tables. I wanted to describe them table by table in the README.md when I upload the project to my GitHub profile, but that won't be feasible. Do you think I'm going about this the right way, or am I making things difficult for myself?
Is there any other way to analyze them? Can't I merge them? I am open to suggestions at this point; this is my first time doing time series analysis.
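Merging is possible: a common pattern is to melt the wide frame (dates × 18 sector columns) into one long table with a `sector` column, so every per-sector calculation becomes a single groupby instead of 18 copies. A sketch with two hypothetical sectors standing in for your 18:

```python
import pandas as pd

# Hypothetical wide frame: date index, one column per sector (you have 18).
wide = pd.DataFrame(
    {"healthcare": [100, 110, 105], "mining": [40, 42, 41]},
    index=pd.to_datetime(["2022-01-01", "2022-02-01", "2022-03-01"]),
)
wide.index.name = "date"

# Melt to long format: one row per (date, sector) pair -> one table, not 18.
long = wide.reset_index().melt(
    id_vars="date", var_name="sector", value_name="vacancies"
)

# Per-sector calculations collapse into one groupby, e.g. a 2-period moving average:
long["moving_avg"] = long.groupby("sector")["vacancies"].transform(
    lambda s: s.rolling(2).mean()
)
```

In the README you can then document one long table and one set of grouped computations, rather than hundreds of per-sector tables; libraries such as seaborn and plotly also plot this long format directly with the sector as the color/facet variable.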