I am trying to get a smooth curve for my data points. Say (lin_space, rms) are my ordered pairs that I need to plot. For the following code:
from scipy.interpolate import UnivariateSpline
import numpy as np
import matplotlib.pyplot as plt

spl = UnivariateSpline(lin_space, rms)
x = np.arange(0, 1001, 0.5)
plt.plot(lin_space, rms, 'k.')
plt.plot(lin_space, spl(lin_space), 'b-')
plt.plot(x, np.sqrt(x), 'r-')
After smoothing with UnivariateSpline I get the blue line, whereas I need something like the red one shown, with no local extrema.
You'll want a more limited class of models.
One option, for the data that you have shown, is to do least squares with a square-root function. That should produce good results.
A running average will be smooth(er), depending on how you weight the terms.
A Gaussian process regression with an RBF + white-noise kernel might be worth looking into, with appropriate a priori bounds on the length scale of the RBF kernel (a sketch follows the code below). On the other hand, your residuals aren't normally distributed, so this model may not work as well for values toward the edges.
Note: if you specifically want a function with no local extrema, you need to select a class of models that has that property, e.g. by fitting a square-root function, as in the snippet below.
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
import sklearn.linear_model

mpl.rcParams['figure.figsize'] = (18, 16)

WINDOW = 30

def ma(signal, window=30):
    # Unweighted running average over `window` consecutive samples.
    return sum([signal[i:-window + i] for i in range(window)]) / window

# Synthetic data: a square root plus noise whose spread grows slowly with X.
X = np.linspace(0, 1000, 1000)
Y = np.sqrt(X) + np.log(np.log(X + np.e)) * np.random.normal(0, 1, X.shape)

# Least squares with sqrt(X) as the single feature.
sqrt_model_X = np.sqrt(X)
model = sklearn.linear_model.LinearRegression()
model.fit(sqrt_model_X.reshape((-1, 1)), Y.reshape((-1, 1)))

plt.scatter(X, Y, c='b', marker='.', s=5)                         # data
plt.plot(X, np.sqrt(X), 'r-')                                     # true curve
plt.plot(X[WINDOW:], ma(Y, window=WINDOW), 'g-.')                 # running average
plt.plot(X, model.predict(sqrt_model_X.reshape((-1, 1))), 'k--')  # least-squares fit
plt.show()
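For completeness, here is a minimal sketch of the Gaussian process option, using scikit-learn's GaussianProcessRegressor with an RBF + WhiteKernel; the length-scale bounds, noise level, and subsample size below are illustrative guesses, not tuned values.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Same synthetic data as above.
X = np.linspace(0, 1000, 1000)
Y = np.sqrt(X) + np.log(np.log(X + np.e)) * np.random.normal(0, 1, X.shape)

# RBF kernel with a priori bounds on its length scale, plus a white-noise term.
kernel = RBF(length_scale=200.0, length_scale_bounds=(50.0, 1000.0)) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

# Fit on a subsample to keep the cubic-cost fit cheap, then predict on the full grid.
idx = np.sort(np.random.choice(len(X), 200, replace=False))
gp.fit(X[idx].reshape(-1, 1), Y[idx])
Y_smooth = gp.predict(X.reshape(-1, 1))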
I am experimenting with Fourier transformations and the built-in NumPy.fft library. I was trying to see the difference between computing just fft2 of an image and fftshift on fft2 of an image. But for some reason, I am not getting the results that I was expecting. I have tried changing images as well but regardless of what I use, I get the same results as below. If someone could help me out here, it would be awesome. This is the code I used:
import numpy as np
import cv2
import matplotlib.pyplot as plt
from scipy import ndimage, fftpack
light = cv2.imread("go_light.jpeg")
dark = cv2.imread("go_dark.jpeg")
g_img = cv2.cvtColor(dark, cv2.COLOR_BGR2GRAY)
di = (np.abs((np.fft.fft2(g_img))))
dm = np.abs(np.fft.fftshift(np.fft.fft2(g_img)))
plt.figure(figsize=(6.4*5, 4.8*5), constrained_layout=False)
plt.subplot(151), plt.imshow(di, "gray"), plt.title("fft");
plt.subplot(152), plt.imshow(dm, "gray"), plt.title("fftshift");
plt.show()
di and dm are floating-point arrays whose values span many orders of magnitude, with the low-frequency terms dominating, so a direct imshow shows almost nothing useful. You could try di.astype(np.int8), but many of the values are out of that range, so you will likely need to scale the array first.
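For example (a sketch of one common way to scale, assuming di and dm from the code above), a log transform compresses the dynamic range enough for imshow to show the spectrum's structure:

import numpy as np
import matplotlib.pyplot as plt

# Log-compress the magnitudes so the non-DC structure becomes visible.
di_scaled = np.log1p(di)
dm_scaled = np.log1p(dm)

plt.subplot(121), plt.imshow(di_scaled, "gray"), plt.title("fft")
plt.subplot(122), plt.imshow(dm_scaled, "gray"), plt.title("fftshift")
plt.show()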
I'm trying to create a grapher using matplotlib.pyplot and want to graph a function that comes like a string
My Code is:
import matplotlib.pyplot as mpl
import numpy as np

def plot2D(*args):
    mpl.grid(1)
    xAxis = np.arange(args[1], args[2], args[3])

    def xfunction(x, input):
        return eval(input)

    print(xfunction(5, args[0]))
    mpl.plot(xAxis, xfunction(xAxis, args[0]))
    mpl.show()

plot2D("1/(x)", -1, 2, 0.1)
I want it to plot the function 1/x, but it looks like this when it should look like this (desmos). Am I converting the string to a function incorrectly, can matplotlib even be used to graph functions like that, or should I use another library? How would I go about graphing a relation like x**2 + y**2 = 1, or functions like sin(x!)?
There's an intrinsic problem with the function 1/x: it's not defined at 0. In your code, one of the values in the range is unfortunately 0, and that messes up the whole plot. All you have to do is change the last line to shift the range slightly so it skips 0, and increase the number of steps to get a more accurate result: plot2D("1/x", -1.01, 2, 0.02). This is the plot:
If you want to eliminate the nasty vertical line in between, you'll have to change the code to split the graph into two branches, as sketched below.
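A minimal sketch of that split (the range reuses the shifted values from above; the two-branch masking is the only new idea):

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-1.01, 2, 0.02)
y = 1 / x

# Plot the negative and positive branches separately so matplotlib
# does not draw a connecting line across the discontinuity at x = 0.
plt.grid(True)
plt.plot(x[x < 0], y[x < 0], 'b-')
plt.plot(x[x > 0], y[x > 0], 'b-')
plt.show()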
I have used the code below to run a Granger causality test on a data frame that I have. The code runs fine and returns the results I would expect; however, I was wondering if it is possible to plot the data in a graph using Python, showing the causality.
Something similar to this:
I have tried using the code below and have been successful in returning data.
print(grangercausalitytests(df[['Number_of_Ethereum_Searches', 'Price_in_USD']], maxlag=1, addconst=True, verbose=True))
If you are trying to inspect this visually, then you should use a cross-correlation diagram. This illustrates the strength of the correlation between two time series at different lags.
Let's illustrate this with an example. Consider the following two variables:
Sunlight hours
Maximum temperature
Ever notice how July/August are the hottest months in the Northern Hemisphere, while the longest day is on June 21? This is due to a time lag, where the effects of maximum sunlight do not produce a maximum temperature until about a month later.
If one were to plot a cross-correlation function to describe this, here is what it would look like (code included).
# Import libraries
import os
import numpy as np
import statsmodels.tsa.stattools
import matplotlib.pyplot as plt

# Set path
path = "directory"
os.chdir(path)

# Variables: one column per series
dataset = np.loadtxt("dataset.csv", delimiter=",")
x = dataset[:, 1]
y = dataset[:, 0]

# Cross-correlation diagram, with lags of up to one year
plt.xcorr(x, y, normed=True, usevlines=True, maxlags=365)
plt.title("Sunlight Hours versus Maximum Temperature")
plt.show()
Cross-Correlation Diagram
The ACF (autocorrelation) and PACF (partial autocorrelation) plots for these can also be plotted.
# Autocorrelation and partial autocorrelation for x
acfx = statsmodels.tsa.stattools.acf(x)
plt.plot(acfx)
plt.title("Autocorrelation Function")
plt.show()

pacfx = statsmodels.tsa.stattools.pacf(x)
plt.plot(pacfx)
plt.title("Partial Autocorrelation Function")
plt.show()

# Autocorrelation and partial autocorrelation for y
acfy = statsmodels.tsa.stattools.acf(y)
plt.plot(acfy)
plt.title("Autocorrelation Function")
plt.show()

pacfy = statsmodels.tsa.stattools.pacf(y)
plt.plot(pacfy)
plt.title("Partial Autocorrelation Function")
plt.show()
Autocorrelation and Partial Autocorrelation plots (Maximum temperature)
Autocorrelation and Partial Autocorrelation plots (Sunlight hours)
Notice how the strength of the correlations for sunlight hours persists for longer than for maximum temperature, which implies that the effects of long sunlight hours persist in influencing temperature (i.e. one is Granger-causing the other).
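If you also want a picture of the Granger test itself, one further option (a sketch based on the dictionary that grangercausalitytests returns, assuming the df from your question) is to run the test over a range of lags and plot the p-values:

from statsmodels.tsa.stattools import grangercausalitytests
import matplotlib.pyplot as plt

# Run the test for lags 1..maxlag and pull out the ssr F-test p-value at each lag.
maxlag = 12
results = grangercausalitytests(df[['Number_of_Ethereum_Searches', 'Price_in_USD']],
                                maxlag=maxlag, verbose=False)
lags = range(1, maxlag + 1)
p_values = [results[lag][0]['ssr_ftest'][1] for lag in lags]

plt.plot(lags, p_values, 'o-')
plt.axhline(0.05, color='r', linestyle='--')  # conventional 5% significance level
plt.xlabel("Lag")
plt.ylabel("p-value (ssr F-test)")
plt.title("Granger causality p-values by lag")
plt.show()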
Hope the above example is helpful. I would suggest looking at both cross-correlations and autocorrelations in order to get a better overview of the nature of Granger causality in your data.
I have a complicated method called plotter() which processes some data and produces a matplotlib plot with several components. Due to its complexity, I simply want to test that the plot appears; this will confirm that all of the data is processed reasonably and that something gets shown without any errors being thrown. I am not looking to run an image comparison, as that's not currently possible for this project.
My function is too complicated to show here, so the following example could be considered instead.
import matplotlib.pyplot as plt
import numpy as np

def plotter():
    x = np.arange(0, 10)
    y = 2 * x
    fig = plt.plot(x, y)

plotter()
plt.show()
Is there a way to use PyTest to simply assert that a figure appears? If not then solutions using other test frameworks would also be greatly appreciated.
(For context I am using Python 3.)
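One minimal way to write such a check (a sketch, assuming pytest, a non-interactive backend, and the plotter() above; the final assertion is just a heuristic for "something was drawn"):

import matplotlib
matplotlib.use("Agg")  # headless backend, so no window is needed during tests
import matplotlib.pyplot as plt

def test_plotter_draws_something():
    plt.close("all")
    plotter()  # the function under test
    axes = plt.gcf().get_axes()
    assert axes, "no axes were created"
    assert any(ax.lines or ax.collections or ax.images for ax in axes)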
I would like to add a scale bar (showing how big a micron is for example) to a mayavi plot I create with mlab.
For example, referencing this question: How to display a volume with non-cubic voxels correctly in mayavi
I can set the voxel size of a plot by using
from enthought.mayavi import mlab
import numpy as np

s = 64
x, y, z = np.ogrid[0:s, 0:s, 0:s // 2]
volume = np.sqrt((x - s / 2)**2 + (y - s / 2)**2 + (2 * z - s / 2)**2)

grid = mlab.pipeline.scalar_field(volume)
grid.spacing = [1.0, 1.0, 2.0]

contours = mlab.pipeline.contour_surface(grid,
                                         contours=[5, 15, 25], transparent=True)
mlab.show()
I would like an automated way of adding some indicator of the scale of the object I am showing. Right now I am adding scale bars by hand in Inkscape to exported images, but there has to be a better way.
A straightforward mayavi way would be most helpful, but if there is anything in vtk that would do it, I can always use mayavi's wrapper.
Something like text3d will let me add text, and then I suppose I could figure out how to draw a line as well and compute the correct scaling by hand, but I am hoping there is an easier way.
Try the following:
mlab.axes()
mlab.outline()
mlab.colorbar()
This reference, http://github.enthought.com/mayavi/mayavi/auto/mlab_reference.html, would help, as would its examples.
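For physical units, a minimal sketch on the example above would be to pass ranges and labels to mlab.axes (the 0.5 micron voxel size here is an assumed example value, not something from the question):

# Label the axes in physical units so the plot carries its own scale.
voxel_size_um = 0.5  # assumed in-plane voxel size
nx, ny, nz = volume.shape
mlab.axes(ranges=[0, nx * voxel_size_um,
                  0, ny * voxel_size_um,
                  0, nz * 2.0 * voxel_size_um],  # z voxels are twice as deep
          xlabel='x (um)', ylabel='y (um)', zlabel='z (um)', nb_labels=5)
mlab.outline()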