I plot a couple of lines on a log scale with a huge number of points. I plot them in black using different line styles/markers, and I use the "markevery" property to decrease the number of markers. The x-values change at even intervals.
The issue I have is that the markers are distributed unevenly: fewer of them near 0, and more near the right end of each line.
Is there any way to get around this issue without handpicking x-values, so that the markers are "evenly" distributed on the log scale?
You can give the indices of the points you want to plot. On a log scale these points should be non-uniformly distributed, and logspace can generate such indices.
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(1, 1e5)

# Normal plot
# plt.plot(x, x, 'o-')

# Log plot: pick 10 indices that are evenly spaced on a log scale
idx = np.logspace(0, np.log10(len(x)), 10).astype(int) - 1
plt.plot(x[idx], x[idx], 'o-')
plt.xscale('log')
plt.yscale('log')
plt.show()
This generates a plot whose markers are evenly spaced on the log scale.
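If you would rather keep every point on the line and only thin out the markers, markevery also accepts an explicit list of indices, so the same logspaced indices can be reused. A small sketch building on the code above:

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(1, 1e5)
# Ten indices that are evenly spaced on a log scale
idx = np.logspace(0, np.log10(len(x)), 10).astype(int) - 1

# The full line is drawn; markers appear only at the chosen indices
plt.plot(x, x, '-o', markevery=list(idx))
plt.xscale('log')
plt.yscale('log')
plt.show()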
I have a plot with several curves that looks like this:
These curves start from the top right corner and finish around the point (0.86, 0.5).
I want to focus attention on the end point. If I zoom in on this region, it is still not very easy to distinguish the different lines because they overlap several times.
My idea is then to add a gradient of opacity, so that the curves would be transparent at their start point and the opacity would increase as we get closer to the end point.
How would you do that with matplotlib?
Currently, for each of the three curves, I basically just do:
plt.plot( r, l )
with r, l being two arrays.
You could always break down your x and y arrays into smaller arrays that you plot separately. This would give you the opportunity to modify alpha for each segment.
See example below:
import numpy as np
import matplotlib.pyplot as plt

N_samp = 1000
x = np.arange(N_samp)
y = np.sin(2 * np.pi * x / N_samp)

# Plot the curve in short segments with increasing alpha; each slice
# overlaps the next by one point so the segments join up
step = 10
for i in range(N_samp // step):
    plt.plot(x[step * i:step * (i + 1) + 1],
             y[step * i:step * (i + 1) + 1],
             alpha=min(0.1 + 0.01 * i, 1),
             color='tab:blue', lw=1)
plt.show()
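For many segments, a single LineCollection is lighter-weight than one Line2D per segment. A sketch of the same fade using per-segment RGBA colors (the hard-coded RGB triple approximating 'tab:blue' is my choice):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

N_samp = 1000
x = np.arange(N_samp)
y = np.sin(2 * np.pi * x / N_samp)

# Build an (N-1, 2, 2) array of consecutive point pairs
points = np.column_stack([x, y]).reshape(-1, 1, 2)
segments = np.concatenate([points[:-1], points[1:]], axis=1)

# One RGBA color per segment, alpha ramping from 0.1 to 1.0
colors = np.zeros((len(segments), 4))
colors[:, :3] = (0.12, 0.47, 0.71)  # roughly 'tab:blue'
colors[:, 3] = np.linspace(0.1, 1.0, len(segments))

fig, ax = plt.subplots()
ax.add_collection(LineCollection(segments, colors=colors, linewidths=1))
ax.set_xlim(x.min(), x.max())
ax.set_ylim(y.min() - 0.1, y.max() + 0.1)
plt.show()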
I previously posted this over at Code Review, but moved it over here as I was told it is a better fit.
Basically, I want to create a color plot of some irregularly sampled data. I've had some success with the interpolation using matplotlib.mlab.griddata. However, when I plot the interpolated data (using matplotlib.pyplot.imshow), the edges of the domain appear to be left blank. This gets better if I increase the grid density (increase N in the code), but doesn't solve the problem.
My code is attached below.
edit: I can post images now, so I've uploaded the plot after the changes proposed by Ajean. Can someone help me out as to what is going wrong?
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.mlab import griddata

# Generate data
X = np.random.random(100)
Y = 2 * np.random.random(100) - 1
Z = X * Y

# Interpolation onto a regular grid
N = 100j
extent = (0, 1, -1, 1)
xs, ys = np.mgrid[extent[0]:extent[1]:N, extent[2]:extent[3]:N]
resampled = griddata(X, Y, Z, xs, ys, interp='nn')

# Plot
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel('X')
ax.set_ylabel('Y')
cplot = ax.imshow(resampled.T, extent=extent)
ticks = np.linspace(-1, 1, 11)
# The colorbar must reference the image created above
cbar = fig.colorbar(cplot, ticks=ticks, orientation='vertical')
cbar.set_label('Value', labelpad=20, rotation=270, size=16)
ax.scatter(X, Y, c='r')
plt.show()
It is because your calls to random don't provide any values at the boundary corners, so there is nothing to interpolate with near the edges. If you change the X and Y definitions to
# Just include the four corners
X=np.concatenate([np.random.random(100),[0,0,1,1]])
Y=np.concatenate([2*np.random.random(100)-1,[-1,1,1,-1]])
You'll fill in the whole thing.
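Note that matplotlib.mlab.griddata has been removed from recent matplotlib releases. A minimal sketch of the same plot using scipy.interpolate.griddata instead (the method='linear' choice is mine):

import numpy as np
from scipy.interpolate import griddata
import matplotlib.pyplot as plt

# Include the four corners so the interpolation covers the full extent
X = np.concatenate([np.random.random(100), [0, 0, 1, 1]])
Y = np.concatenate([2 * np.random.random(100) - 1, [-1, 1, 1, -1]])
Z = X * Y

extent = (0, 1, -1, 1)
xs, ys = np.mgrid[extent[0]:extent[1]:100j, extent[2]:extent[3]:100j]
resampled = griddata((X, Y), Z, (xs, ys), method='linear')

fig, ax = plt.subplots()
ax.set_xlabel('X')
ax.set_ylabel('Y')
# origin='lower' keeps the y axis increasing upward for the mgrid layout
im = ax.imshow(resampled.T, extent=extent, origin='lower')
fig.colorbar(im, ticks=np.linspace(-1, 1, 11), orientation='vertical')
ax.scatter(X, Y, c='r')
plt.show()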
I'm charting the progress of a differential equation solver (boundary value problem). Each iteration yields a complete set of function evaluations f(x), which can then be plotted against x. Each graph is (supposedly) closer to the correct solution than the last until convergence is reached. A sequential colormap is used to make earlier graphs faded and later ones saturated.
This works fine when the number of iterations is predetermined:
import matplotlib.pyplot as plt

ax = plt.subplot(111)
cm = plt.get_cmap('OrRd')
# iter holds the predetermined number of iterations
# (note: it shadows the built-in of the same name)
ax.set_color_cycle([cm(1. * i / (iter + 1)) for i in range(1, iter + 2)])
ax.plot(x, y)
for k in range(iter):
    # iterative solve updates y here
    ax.plot(x, y)
However, if I use a convergence criterion instead of a predetermined number of iterations, I won't be able to set_color_cycle beforehand. And putting that line after the loop doesn't work.
I know that I can store my intermediate results and plot only after convergence is reached, but this strikes me as heavy-handed because I really have no use for all the intermediate results other than to see them on the plot.
So here are my questions:
1. How do I change the colormap of the existing graphs after plotting? (This is easy in MATLAB.)
2. How do I do the same thing with another collection of graphs on the same plot (e.g. from a different initial guess, converging to a different solution) without disturbing the first collection, so that two colormaps distinguish the collections from one another. (This should be obvious with the answer to Question 1, but just in case.)
Many thanks.
You can also use plt.set_cmap, which switches the colormap of the current image after it has been drawn:
import numpy as np
import matplotlib.pyplot as plt

plt.imshow(np.random.random((10, 10)), cmap='magma')
plt.colorbar()
# set_cmap changes the colormap of the current image after plotting
plt.set_cmap('viridis')
plt.show()
Use a helper such as update_colors() below to recolor all existing lines after each plot call:
import numpy as np
import matplotlib.pyplot as plt

cm = plt.get_cmap('OrRd')
x = np.linspace(0, 1, 100)

def update_colors(ax):
    # Spread the colormap over however many lines the axes holds now
    lines = ax.lines
    colors = cm(np.linspace(0, 1, len(lines)))
    for line, c in zip(lines, colors):
        line.set_color(c)

fig, ax = plt.subplots()
for i in range(10):
    ax.plot(x, x**(1 + i * 0.1))
    update_colors(ax)
plt.show()
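For question 2, one option (a sketch of my own, not a built-in matplotlib mechanism) is to tag each line with a gid when plotting and then recolor each group with its own colormap:

import numpy as np
import matplotlib.pyplot as plt

def update_colors_by_group(ax, group, cmap):
    # Recolor only the lines tagged with this group id
    lines = [ln for ln in ax.lines if ln.get_gid() == group]
    colors = cmap(np.linspace(0, 1, len(lines)))
    for line, c in zip(lines, colors):
        line.set_color(c)

x = np.linspace(0, 1, 100)
fig, ax = plt.subplots()
for i in range(8):
    ax.plot(x, x**(1 + 0.1 * i), gid='first')       # first collection
    ax.plot(x, 1 - x**(1 + 0.1 * i), gid='second')  # second collection

update_colors_by_group(ax, 'first', plt.get_cmap('OrRd'))
update_colors_by_group(ax, 'second', plt.get_cmap('PuBu'))
plt.show()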
One trick you could consider: rather than trying to change the colour values after plotting, you can draw a partially opaque overlay after each iteration to "fade" the past plots, e.g. an alpha of 10% would dim each past plot sequentially.
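A rough sketch of that idea. The answer suggests a black overlay, which suits bright lines on a dark background; on matplotlib's default white background a white (background-colored) overlay gives the fading effect, so that is what this assumes:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1, 100)
fig, ax = plt.subplots()
for i in range(10):
    if i > 0:
        # Fade everything drawn so far before adding the next curve;
        # zorder keeps each overlay above all earlier artists only
        ax.add_patch(plt.Rectangle((0, 0), 1, 1, transform=ax.transAxes,
                                   color='white', alpha=0.15,
                                   zorder=2 * i))
    ax.plot(x, x**(1 + 0.1 * i), color='black', zorder=2 * i + 1)
plt.show()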
I am trying to plot data in a histogram or bar plot in Python. The data size (array length) is between 0 and 10000. The data itself (each entry of the array) depends on the input and ranges between 0 and 1e+20 (mostly the data lies within the same range). So I want to do a hist plot with matplotlib, showing how often the data falls into each interval (to illustrate the mean and deviation). Sometimes it works, as in hist1.
But sometimes there is a problem with the interval size, as in hist2.
In that plot I need more bars around 0-100, etc.
Can anyone help me with this?
The plots are just made with:
import matplotlib.pyplot as plt

plt.hist(numbers, bins=100)  # numbers is the data array described above
plt.show()
By default, hist produces a plot whose x range covers the full range of your data.
If you have one outlier at very high x compared with the other values, you will see this "compressed" figure.
If you want to always have the same view, you can fix the limits with xlim.
Alternatively, if you want your distribution always centered and displayed as nicely as possible, you can calculate the mean and the standard deviation of your data and fix the x range accordingly (e.g. mean ± 5 stdev), as in the sketch below.
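A minimal sketch of the mean/stdev suggestion; the lognormal stand-in data and the clipping at the data minimum are assumptions for the demo:

import numpy as np
import matplotlib.pyplot as plt

numbers = np.random.lognormal(mean=3, sigma=2, size=5000)  # stand-in data

mu, sigma = numbers.mean(), numbers.std()
lo = max(mu - 5 * sigma, numbers.min())
hi = mu + 5 * sigma

# Bin and view only the mean +/- 5 stdev window
plt.hist(numbers, bins=np.linspace(lo, hi, 100))
plt.xlim(lo, hi)
plt.show()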
I'm trying to reproduce this plot in Python with little luck:
It's a simple number density contour currently done in SuperMongo. I'd like to drop it in favor of Python, but the closest I can get is:
a plot made using hexbin(). How could I go about getting the Python plot to resemble the SuperMongo one? I don't have enough rep to post images, sorry for the links. Thanks for your time!
Example simple contour plot from a fellow SuperMongo => python sufferer:
import numpy as np
from matplotlib import pyplot as plt

plt.interactive(True)
fig = plt.figure(1)
plt.clf()

# generate input data; you already have that
x1 = np.random.normal(0, 10, 100000)
y1 = np.random.normal(0, 7, 100000) / 10.
x2 = np.random.normal(-15, 7, 100000)
y2 = np.random.normal(-10, 10, 100000) / 10.
x = np.concatenate([x1, x2])
y = np.concatenate([y1, y2])

# calculate the normalized 2D density of the given data
# (density=True replaces the original normed=LogNorm(), which only
# "worked" because any truthy value switched normalization on)
counts, xbins, ybins = np.histogram2d(x, y, bins=100, density=True)

# make the contour plot; transpose counts so rows correspond to y
plt.contour(counts.transpose(),
            extent=[xbins.min(), xbins.max(), ybins.min(), ybins.max()],
            linewidths=3, colors='black', linestyles='solid')
plt.show()
produces a nice contour plot.
The contour function offers a lot of fancy adjustments, for example let's set the levels by hand:
plt.clf()
mylevels = [1.e-4, 1.e-3, 1.e-2]
plt.contour(counts.transpose(), mylevels,
            extent=[xbins.min(), xbins.max(), ybins.min(), ybins.max()],
            linewidths=3, colors='black', linestyles='solid')
plt.show()
producing this plot:
And finally, in SM one can do contour plots on linear and log scales, so I spent a little time trying to figure out how to do this in matplotlib. Here is an example when the y points need to be plotted on the log scale and the x points still on the linear scale:
plt.clf()
# this is our new data, which ought to be plotted on the log scale
ynew = 10**y
# but the binning needs to be done in linear space
counts, xbins, ybins = np.histogram2d(x, y, bins=100, density=True)
mylevels = [1.e-4, 1.e-3, 1.e-2]
# and the plotting needs to be done in the data (i.e., exponential)
# space; explicit bin-edge coordinates make `extent` unnecessary
plt.contour(xbins[:-1], 10**ybins[:-1], counts.transpose(), mylevels,
            linewidths=3, colors='black', linestyles='solid')
plt.yscale('log')
plt.show()
This produces a plot which looks very similar to the linear one, but with a nice vertical log axis, which is what was intended:
Have you checked out matplotlib's contour plot?
Unfortunately I couldn't view your images. Do you mean something like this? It was done with MathGL, a GPL plotting library that also has a Python interface. You can use arbitrary data arrays as input (including NumPy arrays).
You can use numpy.histogram2d to get a number density distribution of your array.
Try this example:
http://micropore.wordpress.com/2011/10/01/2d-density-plot-or-2d-histogram/
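In case the link goes stale, a minimal sketch in that spirit (the bin count and the choice of contourf are mine, not from the linked post):

import numpy as np
import matplotlib.pyplot as plt

x = np.random.normal(size=100000)
y = np.random.normal(size=100000)

# Bin the points, then draw filled contours of the counts
counts, xedges, yedges = np.histogram2d(x, y, bins=60)
plt.contourf(counts.T,
             extent=[xedges.min(), xedges.max(), yedges.min(), yedges.max()])
plt.colorbar(label='counts per bin')
plt.show()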