Warning: I am very new to Python and know very little.
I am trying to graph (x, y) with error bars in both directions. I am able to achieve this using:
from matplotlib import pyplot
pyplot.errorbar(x, y, yerr=y_error, xerr=x_error)
However, sometimes the error in either x or y is zero. In those cases I want the program to draw an upper (x) and lower (y) limit just for those data points. The actual value of the limit doesn't matter; it just needs to be shown as a limit.
I've found some things suggesting I add 'lolims' and 'uplims' to pyplot.errorbar, but it's not working. Do I need a for loop or something to add in the lower/upper limits?
Thanks.
I'd suggest a simple solution like this, assuming x, y, x_error and y_error are numpy arrays (boolean indexing needs arrays).
All the non-limits:
ind1 = y_error * x_error != 0
pyplot.errorbar(x[ind1], y[ind1], yerr=y_error[ind1], xerr=x_error[ind1], fmt='k+')
The x-limits:
ind2 = x_error == 0
pyplot.errorbar(x[ind2], y[ind2], yerr=y_error[ind2], fmt='k<')
The y-limits:
ind3 = y_error == 0
pyplot.errorbar(x[ind3], y[ind3], xerr=x_error[ind3], fmt='kv')
I'll leave it to you to decide what to do with the cases where both x and y are limits.
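For reference, here is a rough sketch of how the lolims/uplims keywords can be combined with the same masking idea; the sample data and the placeholder arrow length are purely illustrative:
import numpy as np
import matplotlib.pyplot as plt

# Illustrative data: a zero error means "this point is only a limit"
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.5, 2.2, 2.9, 3.8])
x_error = np.array([0.1, 0.0, 0.2, 0.1])
y_error = np.array([0.2, 0.1, 0.0, 0.2])

arrow_len = 0.3  # placeholder bar length for the limit arrows; the exact value is arbitrary

plt.errorbar(x, y,
             xerr=np.where(x_error == 0, arrow_len, x_error),
             yerr=np.where(y_error == 0, arrow_len, y_error),
             xuplims=(x_error == 0),  # draw an x upper-limit arrow wherever the x error is zero
             lolims=(y_error == 0),   # draw a y lower-limit arrow wherever the y error is zero
             fmt='k+')
plt.show()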
I have two pink noise signals created with a random generator,
and I put that into a for loop like this:
for i in range(1000):
    input[i] = numpy.random.uniform(-1, 1)
for i in range(1000):
    z[i] = z[i-1] + (1 - b) * (z[i] - input[i-1])
Now I try to convert this via the snntorch library. I already used the rate coding part of this library and want to compare it with the latency coding part. So I want to use snntorch.spikegen.latency() but I don't know how to use it right. I changed all the parameters and got no good result.
Do you have any tips for the Encoding/Decoding part to convert this noise into a spike train and convert it back?
Thanks to everyone!
Can you share how you're currently trying to use the latency() function?
It should be similar to rate() in that you just pass z to the latency function, though there are many more options involved (e.g., normalize=True finds the time constant that ensures all spike times occur within the num_steps time window).
Each element in z will correspond to one spike. So if it is of dimension N, then the output should be T x N.
The value/intensity of the element corresponds to what time that spike occurs. Negative intensities don't make sense here, so either take the absolute value of z before passing it in, or level shift it.
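As a rough sketch (assuming z is a 1-D numpy array, and that the normalize and linear keywords behave as described in the snntorch documentation; num_steps=100 is just an example value):
import torch
from snntorch import spikegen

# Level-shift z into [0, 1] so there are no negative intensities
z_t = torch.as_tensor(z, dtype=torch.float)
z_t = (z_t - z_t.min()) / (z_t.max() - z_t.min())

num_steps = 100
spikes = spikegen.latency(z_t, num_steps=num_steps, normalize=True, linear=True)
print(spikes.shape)  # expected: (num_steps, len(z)) - at most one spike per element

# Rough decoding: the time step of each element's spike encodes its intensity
spike_times = spikes.argmax(dim=0)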
When I build a scatterplot of this data, you can see that the one large value (462) is completely swamping even being able to see some of the other points.
Does anyone know of a specific way to normalize this data, so that the small dots can still be seen, while maintaining a link between the size of the dot and the value? I'm wondering whether either of these would make sense:
(1) Set a minimum value for the size a dot can be.
(2) Do some normalization of the data somehow, but I guess the large data point will always be 462 compared to some of the other points with a value of 1.
Just wondering how other people get around this, so they don't miss seeing some points on the plot that are actually there? Or is the most obvious answer just not to scale the points by size, and instead add a label to each point with its value?
You can clip() (https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.clip.html) the values used for the size parameter.
Full solution below:
import pandas as pd
import numpy as np
import plotly.express as px

df = pd.DataFrame(
    {"Class": np.linspace(-8, 4, 25), "Values": np.random.randint(1, 40, 25)}
).assign(Class=lambda d: "class_" + d["Class"].astype(str))
df.iloc[7, 1] = 462
px.scatter(df, x="Class", y="Values", size=df["Values"].clip(0, 50))
This isn't really a question relating to Python directly, but more to plotting styles. There are several ways to solve the issue in your case:
Split the data into equally sized categories and assign color labels. Your legend would look something like this in this case:
0 - 1: color 1
2 - 20: color 2
...
The way to implement this is to split your data into the sets you want and plot separate scatter plots, each with a new color. See here or here for examples.
The second option that is frequently used is to use the log of the value for the bubble size (a small sketch of this is shown after this list). You would just have to point that out quite clearly in your legend.
The third option is to limit the marker size to an arbitrary value. I personally am not a big fan of this method, since it changes the information shown to a degree that the other alternatives don't, but if you add a data callout, this would still be legitimate.
These options should be fairly easy to implement in code. If you are having difficulties, feel free to post runnable sample code and we could implement an example as well.
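For instance, here is a minimal sketch of the log-size option with made-up data (the scaling factor is purely cosmetic):
import numpy as np
import matplotlib.pyplot as plt

values = np.array([1, 3, 5, 8, 12, 40, 462])  # made-up data with one large outlier
x = np.arange(len(values))

# Scale the marker area by the log of the value so the outlier no longer swamps the rest;
# the +1 keeps a value of 1 visible and the factor of 40 is purely cosmetic
sizes = 40 * np.log10(values + 1)

plt.scatter(x, values, s=sizes)
plt.show()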
I am using Python 2.7 and need to draw a time series using matplotlib library. My y axis data is numeric and everything is ok with it.
The problem is my x-axis data, which is not numeric, and matplotlib does not cooperate in this case. It does not draw the time series, even though this should not affect the correctness of the plot: the x-axis data is already arranged in a given order, and its order does not affect anything logically.
For example let's say the x data is ["i","like","python"] and the y axis data is [1,2,3].
I did not add my code because I've found that the code is OK: it works if I change the data to all-numeric data.
Please explain how I can use matplotlib to draw the time series without making me convert the x values to something numeric.
I've based my matplotlib code on following answers: How to plot Time Series using matplotlib Python, Time Series Plot Python.
Matplotlib requires some way of positioning those labels. See the following example:
import matplotlib.pyplot as plt
x = ["i","like","python"]
y = [1,2,3]
plt.plot(y,y) # y,y because both are numeric (you could equally create an xt = [1,2,3] and plot xt,y)
plt.xticks(y,x) # same here: the second argument gives the labels to place at those positions
plt.show()
That results in a plot with "i", "like", "python" as the x-axis tick labels.
Notice how I've put the labels there but had to somehow say where they are supposed to be.
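In other words, the same idea with an explicit list of positions (pos is just a name chosen here for clarity):
import matplotlib.pyplot as plt

x = ["i", "like", "python"]
y = [1, 2, 3]
pos = range(len(x))  # numeric positions for the labels
plt.plot(pos, y)     # plot the numeric y values at those positions
plt.xticks(pos, x)   # place the string labels at the same positions
plt.show()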
I also think you should post part of your code so that it's easier for other people to make suggestions.
In the graphic below, I want to put in a legend for the calendar plot. The calendar plot was made using ax.plot(...,label='a') and drawing rectangles in a 52x7 grid (52 weeks, 7 days per week).
The legend is currently made using:
plt.gca().legend(loc="upper right")
How do I change this legend into something more like a colorbar? Also, the colorbar should be placed at the bottom of the plot.
EDIT:
Uploaded code and data for reproducing this here:
https://www.dropbox.com/sh/8xgyxybev3441go/AACKDiNFBqpsP1ZttsZLqIC4a?dl=0
Aside - existing bugs
The code you put on the Dropbox doesn't work "out of the box". In particular, you're trying to divide a datetime.timedelta by a numpy.timedelta64 in two places, and that fails.
You do your own normalisation and colour mapping (calling into color_list based on an int() conversion of your normalised value). You subtract 1 from this, but you don't need to - int() already floors the value. As a result you can get an index of -1, which means your very smallest values are incorrectly mapped to the colour for the maximum value. This is most obvious if you plot column 'BIOM'.
I've hacked this by adding a tiny value (0.00001) to the total range of the values that you divide by. It's a hack - I'm not sure that this method of mapping is at all the best use of matplotlib, but that's a different question entirely.
Solution adapting your code
With those bugs fixed, and a last subplot added below all the existing ones (i.e. replacing 3 with 4 in all your calls to subplot2grid()), you can do the following:
Replace your
plt.gca().legend(loc="upper right")
with
# plot an overall colorbar type legend
# Grab the new axes object to plot the colorbar on
ax_colorbar = plt.subplot2grid((4,num_yrs), (3,0),rowspan=1,colspan=num_yrs)
mappableObject = matplotlib.cm.ScalarMappable(cmap = palettable.colorbrewer.sequential.BuPu_9.mpl_colormap)
mappableObject.set_array(numpy.array(df[col_name]))
col_bar = fig.colorbar(mappableObject, cax = ax_colorbar, orientation = 'horizontal', boundaries = numpy.arange(min_val,max_val,(max_val-min_val)/10))
# You can change the boundaries kwarg to make the scale look less boxy (increase 10),
# to get different values on the tick marks, or omit it altogether to let matplotlib choose the boundaries itself
col_bar.set_label(col_name)
ax_colorbar.set_title(col_name + ' color mapping')
I tested this with two of your columns ('NMN' and 'BIOM') on Python 2.7 (I assume you're using Python 2.x, given the print statement syntax).
The finalised code that works directly with your data file is in a gist here
You get a horizontal colorbar below the calendar plots, labelled with the column name.
How does it work?
It creates a ScalarMappable object that matplotlib can use to map values to colours. It sets the array this map is based on to all the values in the column you are dealing with. It then uses Figure.colorbar() to add the colorbar, passing in the mappable object so that the labels are correct. I've added boundaries so that the minimum value is shown explicitly - you can omit that if you want matplotlib to sort it out for itself.
P.S. I've set the colormap to palettable.colorbrewer.sequential.BuPu_9.mpl_colormap, matching your get_colors() function, which gets these colours as a 9-member list. I strongly recommend importing the colormap you want to use under a nice name, to make the use of mpl_colors and mpl_colormap easier to understand, e.g.
from palettable.colorbrewer.sequential import BuPu_9 as color_scale
Then access it as
color_scale.mpl_colormap
That way, you can keep your code DRY and change the colors with only one change.
Layout (in response to comments)
The colorbar may be a little big (certainly too tall) to be aesthetically ideal. There are a few possible ways to deal with that; I'll point you to two:
The "right" way to do it is probably to use a GridSpec (a bare-bones sketch is included at the end of this answer).
You could use your existing approach, but increase the number of rows and have the colorbar still in one row, while the other elements span more rows than they do currently.
I've implemented that with 9 rows, an extra column (so that the month labels don't get lost) and the colorbar on the bottom row, spanning 2 fewer columns than the main figure. I've also used tight_layout with w_pad=0.0 to avoid label clashes. You can play with this to get your exact preferred size. New code here.
This gives a similar figure, now with a shorter, narrower colorbar along the bottom row.
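For completeness, here is a bare-bones sketch of the GridSpec route mentioned above; the axes names and the height ratios are my own, not taken from the original code:
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec

fig = plt.figure()
gs = gridspec.GridSpec(2, 1, height_ratios=[10, 1])  # tall row for the calendar, short row for the colorbar
ax_main = fig.add_subplot(gs[0])
ax_colorbar = fig.add_subplot(gs[1])
# ...draw the calendar on ax_main, then pass cax=ax_colorbar to fig.colorbar() as above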
There are functions to do this in matplotlib.colorbar. With some specific code from your example, I could give you a better answer, but you'll use something like:
myColorbar = matplotlib.colorbar.ColorbarBase(myAxes, cmap=myColorMap,
                                              norm=myNorm,
                                              orientation='vertical')
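For example, a self-contained sketch along those lines (the colormap and the value range are placeholders, not taken from your code):
import matplotlib.colorbar
import matplotlib.colors
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(6, 1))
norm = matplotlib.colors.Normalize(vmin=0, vmax=100)  # placeholder value range
myColorbar = matplotlib.colorbar.ColorbarBase(ax, cmap=plt.cm.BuPu,
                                              norm=norm,
                                              orientation='horizontal')
myColorbar.set_label('value')
plt.show()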
Basically, I want to graph two functions:
g1 = x*cos(x*pi)
g2 = 1 - 0.6x^2
and then plot their intersections. I already have a module that takes inputs close to the two curves' intersections and then converges to those points (there are four of them),
but I want to graph these two functions and their intersections using matplotlib and have no clue how; I've only graphed basic functions. Any help is greatly appreciated.
Assuming you can get as far as plotting one function, with x and g1 as numpy arrays,
pylab.plot(x,g1)
just call plot again (and again) to draw any number of separate curves:
pylab.plot(x,g2)
finally display or save to a file:
pylab.show()
To indicate a special point such as an intersection, just pass in scalars for x and y and ask for a marker such as 'x' or 'o' or whatever else you like.
pylab.plot(x_intersect, y_intersect, 'x', color="#80C0FF")
Alternatively, I often mark a special place along x with a vertical segment by plotting a quick little two-point data set:
pylab.plot( [x_special, x_special], [0.5, 1.9], '-b' )
I may hardcode the y values to look good on a plot for my current project, but obviously this is not reusable for other projects. Note that plot() can take ordinary python lists; no need to convert to numpy arrays.
If you can't get as far as plotting one function (just g1), then you need a basic matplotlib tutorial, which wouldn't make a good answer here; please visit http://matplotlib.org/ and google "matplotlib tutorial" or "matplotlib introduction".
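Putting it together, a short self-contained sketch might look like this (the x range is my own choice, and your converging module will give more precise intersection points than the sign-change test used here):
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 600)
g1 = x * np.cos(x * np.pi)
g2 = 1 - 0.6 * x**2

plt.plot(x, g1, label='g1 = x*cos(x*pi)')
plt.plot(x, g2, label='g2 = 1 - 0.6x^2')

# Rough intersection markers: a sign change in g1 - g2 means the curves cross
# somewhere between neighbouring samples
diff = g1 - g2
crossings = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
plt.plot(x[crossings], g1[crossings], 'o', color="#80C0FF")

plt.legend()
plt.show()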