I am using ReportLab to create some graphs in my PDF reports. I was creating an Area Line Plot and got stuck at a point where I cannot understand why I am not getting the output I would like to see.
Here is the code I had written for my output:
from reportlab.graphics.shapes import Drawing
from reportlab.graphics.charts.lineplots import AreaLinePlot
from reportlab.lib import colors

def standardLinePlot(data, width=200, height=200):
    d = Drawing(width, height)
    lp = AreaLinePlot()
    lp.data = data
    lp.width, lp.height = width, height
    lp.xValueAxis.valueMin = 0
    lp.xValueAxis.valueMax = 36
    lp.xValueAxis.valueSteps = [0, 6, 12, 18, 24, 30, 36]
    lp.yValueAxis.valueMin = 0
    lp.yValueAxis.valueMax = 100
    lp.strokeColor = colors.black
    lp.fillColor = colors.grey
    lp.reversePlotOrder = False
    lp.joinedLines = 1
    d.add(lp)
    return d
The output I am getting is:
My intended output is that the grey color should appear where the red color currently is, i.e. the area under the line plot should be filled grey. The other problem is how to add axis titles to this chart: for example, I need "Months" as my X-axis title and "% of NAV" as my Y-axis title.
To define the color of the lines it seems you need to access... well, the lines :). So, use lp.lines[0].strokeColor = colors.grey instead of lp.strokeColor = colors.grey, as the latter sets the plot background color!
The question about the axis labels is a bit trickier, though... ScatterPlot includes functionality to set labels for the X and Y axes, but that's not the case for AreaLinePlot. Of course, you could derive a class from AreaLinePlot and copy that functionality, if you're going to use it often.
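Alternatively, as a lighter-weight option than subclassing, you could add the axis titles yourself as reportlab.graphics.charts.textlabels.Label objects on the Drawing. This is only a rough sketch: the offsets are illustrative guesses and will need adjusting to the padding around your chart.
from reportlab.graphics.charts.textlabels import Label

def addAxisTitles(drawing, width, height):
    # X-axis title, centred below the plot area (offset is a guess)
    xTitle = Label()
    xTitle.setText('Months')
    xTitle.setOrigin(width / 2, -20)
    drawing.add(xTitle)
    # Y-axis title, rotated 90 degrees, placed left of the plot area (offset is a guess)
    yTitle = Label()
    yTitle.setText('% of NAV')
    yTitle.angle = 90
    yTitle.setOrigin(-25, height / 2)
    drawing.add(yTitle)
    return drawing
In practice you would also make the Drawing a bit larger than the chart so the labels are not clipped.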
Change
lp = AreaLinePlot()
to
lp = LinePlot()
and then set
lp.lines[0].strokeColor = colors.red
lp.lines[0].inFill = True
but the fill color will be the same as the line color.
Credit goes to Ricardo Cárdenes.
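Putting the two answers together, a minimal sketch of the adjusted function (untested; it assumes LinePlot is available from reportlab.graphics.charts.lineplots and keeps the axis set-up from the question) could look like this:
from reportlab.graphics.shapes import Drawing
from reportlab.graphics.charts.lineplots import LinePlot
from reportlab.lib import colors

def filledLinePlot(data, width=200, height=200):
    d = Drawing(width, height)
    lp = LinePlot()                          # LinePlot instead of AreaLinePlot
    lp.data = data
    lp.width, lp.height = width, height
    lp.xValueAxis.valueMin, lp.xValueAxis.valueMax = 0, 36
    lp.xValueAxis.valueSteps = [0, 6, 12, 18, 24, 30, 36]
    lp.yValueAxis.valueMin, lp.yValueAxis.valueMax = 0, 100
    lp.lines[0].strokeColor = colors.grey    # line color; the fill uses the same color
    lp.lines[0].inFill = True                # fill the area under the line
    lp.joinedLines = 1
    d.add(lp)
    return d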
I am trying to create a volume in Gmsh (using the Python API) by cutting some small cylinders out of a bigger one.
When I do that, I expect to get one surface for each cut region; instead, I get the result in the figure. I have highlighted in red the surfaces that give me the problem (some cut regions behave as expected): as you can see, instead of one surface I get two, and sometimes they are not even equal.
In short, gmsh creates more surfaces than expected.
So, my questions are:
Why does gmsh behave like that?
How can I fix this, since I need predictable behavior?
Below is the code I used to generate the geometry.
To work, the code requires some parameters such as core_height, core_inner_radius and core_outer_radius, plus the number of small cylinders and their radius.
gmsh.initialize(sys.argv)
#gmsh.initialize()
gmsh.clear()
gmsh.model.add("circle_extrusion")
inner_cyl_tag = 1
outer_cyl_tag = 2
inner_cyl = gmsh.model.occ.addCylinder(0,0,0, 0, 0, core_height, core_inner_radius, tag = inner_cyl_tag)
outer_cyl = gmsh.model.occ.addCylinder(0,0,0, 0, 0, core_height, core_outer_radius, tag = outer_cyl_tag)
core_tag = 3
cut1 = gmsh.model.occ.cut([(3,outer_cyl)],[(3,inner_cyl)], tag = core_tag)
#create a set of filled cylinders
#set position
angle_vector = np.linspace(0, 2*np.pi, number_of_hp+1)
pos_x = hp_radial_position*np.cos(angle_vector)
pos_y = hp_radial_position*np.sin(angle_vector)
pos_z = 0.0
#cut one cylinder at a time and assign the new core tag
for ii in range(0, len(angle_vector)):
    old_core_tag = core_tag
    heat_pipe = gmsh.model.occ.addCylinder(pos_x[ii], pos_y[ii], pos_z, 0, 0, core_height, hp_outer_radius, tag = -1)
    core_tag = heat_pipe + 1
    core = gmsh.model.occ.cut([(3,old_core_tag)], [(3,heat_pipe)], tag = core_tag)
gmsh.model.occ.synchronize()
#get volume entities and assign physical groups
volumes = gmsh.model.getEntities(dim=3)
solid_marker = 1
gmsh.model.addPhysicalGroup(volumes[0][0], [volumes[0][1]], solid_marker)
gmsh.model.setPhysicalName(volumes[0][0], solid_marker, "solid_volume")
#get surface entities and apply physical groups
surfaces = gmsh.model.getEntities(dim=2)
surface_markers = np.arange(1, len(surfaces)+1, 1)
for ii in range(0, len(surfaces)):
    gmsh.model.addPhysicalGroup(2, [surfaces[ii][1]], tag = surface_markers[ii])
#We finally generate and save the mesh:
gmsh.model.mesh.generate(3)
gmsh.model.mesh.refine()
gmsh.model.mesh.refine()
gmsh.option.setNumber("Mesh.MshFileVersion", 2.2) #save in ASCII 2 format
gmsh.write(mesh_name+".msh")
# Launch the GUI to see the results:
#if '-nopopup' not in sys.argv:
#    gmsh.fltk.run()
gmsh.finalize()
I do not think that you have additional surfaces in the sense of gmsh.model.occ surfaces. To me this looks like your volume mesh is sticking out of your surface mesh, i.e. volume and surface mesh do not fit together.
Here is what I did to check your case:
First, I added the following lines at the beginning of your code to get a minimum working example:
import gmsh
import sys
import numpy as np
inner_cyl_tag = 1
outer_cyl_tag = 2
core_height = 1
core_inner_radius = 0.1
core_outer_radius = 0.2
number_of_hp = 5
hp_radial_position = 0.1
hp_outer_radius = 0.05
What I get with this code is the following:
To visualize it like this, go to "Tools"-->"Options"-->"Mesh" and check "2D element faces", "3D element edges" and "3D element faces".
You can see that there are some purple triangles sticking out of the green/yellowish surface triangles of the inner surfaces.
You could try to visualize your case the same way and toggle "3D element faces" on and off a few times.
So here is the explanation for this behaviour; I did not know that gmsh behaves like this myself. It seems that when you create your mesh and then refine it, the refinement is applied to the 2D surface mesh and the 3D volume mesh separately, which means that the two meshes are no longer connected after the refinement. What I did next was to try what happens if you create the 2D mesh only, refine it, and only then create the 3D mesh, i.e.:
replace:
gmsh.model.mesh.generate(3)
gmsh.model.mesh.refine()
gmsh.model.mesh.refine()
by:
gmsh.model.mesh.generate(2)
gmsh.model.mesh.refine()
gmsh.model.mesh.refine()
gmsh.model.mesh.generate(3)
The result then looks like this:
I hope that this was actually your problem. But in the future it would be good if you could provide a minimal working example of code that we can copy-paste to reproduce the visualization you showed us in your image.
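For reference, here is a condensed sketch of such a minimal working example, combining the parameter values listed above with the geometry code from the question and the corrected 2D-then-3D meshing order. The physical groups and file output are omitted, and the parameter values are just the illustrative ones used above.
import sys
import gmsh
import numpy as np

# Illustrative parameter values (same as the ones used above)
core_height = 1
core_inner_radius = 0.1
core_outer_radius = 0.2
number_of_hp = 5
hp_radial_position = 0.1
hp_outer_radius = 0.05

gmsh.initialize(sys.argv)
gmsh.model.add("circle_extrusion")

# Hollow core: outer cylinder minus inner cylinder
inner_cyl = gmsh.model.occ.addCylinder(0, 0, 0, 0, 0, core_height, core_inner_radius, tag=1)
outer_cyl = gmsh.model.occ.addCylinder(0, 0, 0, 0, 0, core_height, core_outer_radius, tag=2)
core_tag = 3
gmsh.model.occ.cut([(3, outer_cyl)], [(3, inner_cyl)], tag=core_tag)

# Cut the small cylinders out of the core, one at a time
angle_vector = np.linspace(0, 2*np.pi, number_of_hp + 1)
pos_x = hp_radial_position*np.cos(angle_vector)
pos_y = hp_radial_position*np.sin(angle_vector)
for ii in range(len(angle_vector)):
    old_core_tag = core_tag
    heat_pipe = gmsh.model.occ.addCylinder(pos_x[ii], pos_y[ii], 0.0, 0, 0,
                                           core_height, hp_outer_radius, tag=-1)
    core_tag = heat_pipe + 1
    gmsh.model.occ.cut([(3, old_core_tag)], [(3, heat_pipe)], tag=core_tag)
gmsh.model.occ.synchronize()

# Mesh the surfaces first, refine, and only then build the volume mesh
gmsh.model.mesh.generate(2)
gmsh.model.mesh.refine()
gmsh.model.mesh.refine()
gmsh.model.mesh.generate(3)

if '-nopopup' not in sys.argv:
    gmsh.fltk.run()
gmsh.finalize()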
I have this df:
x y term s
0 0.000000 0.132653 matlab 0.893072
1 0.000000 0.142857 matrix 0.905120
2 0.012346 0.153061 laboratory 0.902610
3 0.987654 0.989796 be 0.857932
4 0.938272 0.959184 a 0.861948
The variable s tells us the "distance" of the term from the central line (slope 1).
And I need to make a scatterplot that looks like this:
I have this code so far:
chart = alt.Chart(scatterdata_df).mark_circle().encode(
    x = alt.X('x:Q', axis = alt.Axis(tickMinStep = 0.05)),
    y = alt.Y('y:Q', axis = alt.Axis(tickMinStep = 0.05)),
    color = alt.condition('s:Q', alt.value('red'), alt.value('blue')),
    tooltip = ['term']
).properties(
    width = 500,
    height = 500
)
chart
And that gives me an error.
Javascript Error: Expression parse error: (s:Q)?"red":"blue"
This usually means there's a typo in your chart specification. See the javascript console for the full traceback.
When I just do color = 's' I get this, which is closer:
But again, I need that double gradient of colors. I know that the gradient should follow the s variable, but I'm not sure how to give it two gradients, one for each side of the central line.
s:Q is not a valid conditional statement. But, for example, you could write a condition like this:
color = alt.condition(alt.datum.s < 0, alt.value('red'), alt.value('blue'))
and points with s < 0 would be colored red, and all others would be colored blue.
Alternatively, if you want to encode a continuous color scale by the value of s (rather than deciding between two colors based on a condition), you could do
color = 's:Q'
If you'd like to use a color scheme in this case that's different from the default, you can specify it this way:
color = alt.Color('s:Q', scale=alt.Scale(scheme='redblue'))
where the string passed to the scheme argument is one of the built-in named color schemes, listed at https://vega.github.io/vega/docs/schemes/#reference
For more information on customizing colors in Altair, see https://altair-viz.github.io/user_guide/customization.html#customizing-colors
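As a hedged illustration (not the only way to do it), the chart from the question could be rewritten with a continuous red-blue scale driven by s; the small dataframe below just reuses the rows shown in the question:
import altair as alt
import pandas as pd

# Small sample in the same shape as the dataframe shown above
scatterdata_df = pd.DataFrame({
    'x':    [0.000000, 0.000000, 0.012346, 0.987654, 0.938272],
    'y':    [0.132653, 0.142857, 0.153061, 0.989796, 0.959184],
    'term': ['matlab', 'matrix', 'laboratory', 'be', 'a'],
    's':    [0.893072, 0.905120, 0.902610, 0.857932, 0.861948],
})

chart = alt.Chart(scatterdata_df).mark_circle().encode(
    x=alt.X('x:Q', axis=alt.Axis(tickMinStep=0.05)),
    y=alt.Y('y:Q', axis=alt.Axis(tickMinStep=0.05)),
    # Continuous color scale over s; 'redblue' is one of the built-in schemes
    color=alt.Color('s:Q', scale=alt.Scale(scheme='redblue')),
    tooltip=['term'],
).properties(width=500, height=500)

chart
If you want a hard red/blue split instead of a gradient, swap the color encoding for the alt.condition(alt.datum.s < 0, ...) version shown above.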
I've read the documentation for the Label class in Bokeh but the x and y parameters are quite confusing. Their behavior seems to change if you pass something to the x_units and y_units parameters but I don't understand what the units are supposed to be by default.
More specifically, I have a list of strings that I'm using for my x-axis:
xlab = [
    'COREPCE2',
    'COREPCE3',
    'COREPCE4',
    'COREPCE5',
    'COREPCE6',
    '',
    'T5YIE'
]
p = figure(..., y_range = (0,.04), x_range = xlab)
If I wanted to draw basically anything else on the plot, I could just use those strings. For example I drew some lines like this:
p.line(['COREPCE2', 'T5YIE'], [.02,.02], color = 'black', line_dash = 'dashed')
p.line(['', ''], [0,.04], color = 'black')
And that works fine; this is the full chart:
Here's the issue, though. I want to put a text label at the "COREPCE4" location of the x-axis. If I try just passing that string as the x parameter of the Label class, it simply doesn't work:
section = Label(x = 'COREPCE4', y = .03, text = 'Survey of Professional Forecasters: August 9, 2019')
p.add_layout(section)
It throws an error: ValueError: expected a value of type Real, got COREPCE4 of type str. I don't really know what units it's expecting. Is there a way to make Bokeh recognize that I want to use the x-axis label as my x parameter, in the same way I've done with the other glyphs?
The properties x_units and y_units refer to screen (pixel) versus data-space (axis) units. As of Bokeh 1.3.4, the x and y properties of Label can only be set from floating-point numbers, so they cannot be used directly with categorical coordinates. For now you should use LabelSet, even if you are only showing a single label, since it can work with categorical coordinates.
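For example, a minimal sketch of that workaround using the categorical x_range from the question (the y value and label text are the ones from the question; everything else is illustrative):
from bokeh.models import ColumnDataSource, LabelSet
from bokeh.plotting import figure, show

xlab = ['COREPCE2', 'COREPCE3', 'COREPCE4', 'COREPCE5', 'COREPCE6', '', 'T5YIE']
p = figure(y_range=(0, .04), x_range=xlab)
p.line(['COREPCE2', 'T5YIE'], [.02, .02], color='black', line_dash='dashed')

# LabelSet reads its coordinates from a data source, so categorical x values work
source = ColumnDataSource(data=dict(
    x=['COREPCE4'],
    y=[.03],
    text=['Survey of Professional Forecasters: August 9, 2019'],
))
labels = LabelSet(x='x', y='y', text='text', source=source)
p.add_layout(labels)
show(p)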
I am using the seaborn clustermap function and I would like to make multiple plots where the cell sizes are exactly identical. Also the size of the axis labels should be the same. This means figure size and aspect ratio will need to change, the rest needs to stay identical.
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt

dataFrameA = pd.DataFrame([ [1,2],[3,4] ])
dataFrameB = pd.DataFrame( np.arange(3*6).reshape(3,-1))
Then decide how big the clustermap itself needs to be, something along the lines of:
dpi = 72
cellSizePixels = 150
This decides that dataFrameA should be 300 by 300 pixels. I think those need to be converted to the size units of the figure, which are inches: 1/dpi inches per pixel, i.e. cellSizePixels/dpi ≈ 2.08 inches per cell, so roughly 4.2 by 4.2 inches of heatmap for dataFrameA. Here I am introducing a problem: there is stuff around the heatmap, which will also take up some space, and I don't know exactly how much space it will take.
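To make that arithmetic explicit, a tiny (hypothetical) helper might look like:
dpi = 72
cellSizePixels = 150

def heatmap_size_inches(dataFrame, cellSizePixels=cellSizePixels, dpi=dpi):
    # figsize is given in inches; 1 pixel = 1/dpi inches
    nRows, nCols = dataFrame.shape
    return (nCols * cellSizePixels / dpi, nRows * cellSizePixels / dpi)

# For the 2x2 dataFrameA this gives roughly (4.17, 4.17) inches.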
I tried to parametrize the heatmap function with a guess of the image size using the formula above:
def fixedWidthClusterMap(dpi, cellSizePixels, dataFrame):
    clustermapParams = {
        'square': False  # Tried to set this to True before. Don't: the dendrograms do not scale well with it.
    }
    figureWidth = (cellSizePixels/dpi)*dataFrame.shape[1]
    figureHeight = (cellSizePixels/dpi)*dataFrame.shape[0]
    return sns.clustermap(dataFrame, figsize=(figureWidth, figureHeight), **clustermapParams)

fixedWidthClusterMap(dpi, cellSizePixels, dataFrameA)
plt.show()
fixedWidthClusterMap(dpi, cellSizePixels, dataFrameB)
plt.show()
This yields:
My question: how do I obtain square cells which are exactly the size I want?
This is a bit tricky, because there are quite a few things to take into consideration, and in the end it depends on how "exact" you need the sizes to be.
Looking at the code for clustermap, the heatmap part is designed to have a ratio of 0.8 compared to the axes used for the dendrograms. But we also need to take into account the margins used to place the axes. If one knows the size of the heatmap axes, one should therefore be able to calculate the figure size that would produce the right shape.
dpi = matplotlib.rcParams['figure.dpi']
marginWidth = matplotlib.rcParams['figure.subplot.right'] - matplotlib.rcParams['figure.subplot.left']
marginHeight = matplotlib.rcParams['figure.subplot.top'] - matplotlib.rcParams['figure.subplot.bottom']
Ny, Nx = dataFrame.shape
figWidth = (Nx*cellSizePixels/dpi)/0.8/marginWidth
figHeight = (Ny*cellSizePixels/dpi)/0.8/marginHeight
Unfortunately, it seems matplotlib must adjust things a bit during plotting, because that was not enough to get perfectly square heatmap cells. So I chose to resize the various axes created by clustermap after the fact, starting with the heatmap and then the dendrogram axes.
I think the resulting image is pretty close to what you were trying to get, but my tests sometimes show errors of 1-2 px, which I attribute to rounding errors in the conversions between sizes in inches and pixels.
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

dataFrameA = pd.DataFrame([ [1,2],[3,4] ])
dataFrameB = pd.DataFrame( np.arange(3*6).reshape(3,-1))

def fixedWidthClusterMap(dataFrame, cellSizePixels=50):
    # Calculate the figure size; this gets us close, but not quite to the right place
    dpi = matplotlib.rcParams['figure.dpi']
    marginWidth = matplotlib.rcParams['figure.subplot.right'] - matplotlib.rcParams['figure.subplot.left']
    marginHeight = matplotlib.rcParams['figure.subplot.top'] - matplotlib.rcParams['figure.subplot.bottom']
    Ny, Nx = dataFrame.shape
    figWidth = (Nx*cellSizePixels/dpi)/0.8/marginWidth
    figHeight = (Ny*cellSizePixels/dpi)/0.8/marginHeight

    # do the actual plot
    grid = sns.clustermap(dataFrame, figsize=(figWidth, figHeight))

    # calculate the size of the heatmap axes (as a fraction of the figure)
    axWidth = (Nx*cellSizePixels)/(figWidth*dpi)
    axHeight = (Ny*cellSizePixels)/(figHeight*dpi)

    # resize heatmap
    ax_heatmap_orig_pos = grid.ax_heatmap.get_position()
    grid.ax_heatmap.set_position([ax_heatmap_orig_pos.x0, ax_heatmap_orig_pos.y0,
                                  axWidth, axHeight])

    # resize dendrograms to match
    ax_row_orig_pos = grid.ax_row_dendrogram.get_position()
    grid.ax_row_dendrogram.set_position([ax_row_orig_pos.x0, ax_row_orig_pos.y0,
                                         ax_row_orig_pos.width, axHeight])
    ax_col_orig_pos = grid.ax_col_dendrogram.get_position()
    grid.ax_col_dendrogram.set_position([ax_col_orig_pos.x0, ax_heatmap_orig_pos.y0+axHeight,
                                         axWidth, ax_col_orig_pos.height])

    return grid  # return ClusterGrid object

grid = fixedWidthClusterMap(dataFrameA, cellSizePixels=75)
plt.show()
grid = fixedWidthClusterMap(dataFrameB, cellSizePixels=75)
plt.show()
Not a complete answer (it doesn't deal with pixels), but I suspect the OP has moved on after four years.
def reshape_clustermap(cmap, cell_width=0.02, cell_height=0.02):
    # cell_width/cell_height are fractions of the figure, not pixels
    ny, nx = cmap.data2d.shape
    hmap_width = nx * cell_width
    hmap_height = ny * cell_height
    hmap_orig_pos = cmap.ax_heatmap.get_position()
    cmap.ax_heatmap.set_position(
        [hmap_orig_pos.x0, hmap_orig_pos.y0, hmap_width, hmap_height]
    )
    top_dg_pos = cmap.ax_col_dendrogram.get_position()
    cmap.ax_col_dendrogram.set_position(
        [hmap_orig_pos.x0, hmap_orig_pos.y0 + hmap_height, hmap_width, top_dg_pos.height]
    )
    left_dg_pos = cmap.ax_row_dendrogram.get_position()
    cmap.ax_row_dendrogram.set_position(
        [left_dg_pos.x0, left_dg_pos.y0, left_dg_pos.width, hmap_height]
    )
    if cmap.ax_cbar:
        cbar_pos = cmap.ax_cbar.get_position()
        hmap_pos = cmap.ax_heatmap.get_position()
        cmap.ax_cbar.set_position(
            [cbar_pos.x0, hmap_pos.y1, cbar_pos.width, cbar_pos.height]
        )

cmap = sns.clustermap(dataFrameA)
reshape_clustermap(cmap)
Is it possible to change the colour of an iso-surface depending on the height of the points (in Python / Mayavi)?
I can create an iso-surface visualization with my script, but I don't know how to make the iso-surface change colour along the z-axis, so that it is, say, black at the bottom and white at the top of the plot.
I need this in order to make sense of the visualization when it is viewed from directly above the graph.
If you know any other way to achieve this, please let me know as well.
I only want to show one iso-surface plot.
I managed to do this by combining some code from the examples http://docs.enthought.com/mayavi/mayavi/auto/example_atomic_orbital.html#example-atomic-orbital and http://docs.enthought.com/mayavi/mayavi/auto/example_custom_colormap.html . Basically, you create a surface as in the atomic-orbital example and then make it change colour depending on the coordinate you are interested in (here stored in a point-data array named 'z'), so you must build an array of those coordinate values for the grid points. The relevant part of my code is:
#src.image_data.point_data.add_array(np.indices(list(self.data.shape)[self.nx,self.ny,self.nz])[2].T.ravel())
src.image_data.point_data.add_array(np.indices(list(self.data.shape))[0].T.ravel())
src.image_data.point_data.get_array(1).name = 'z'
# Make sure that the dataset is up to date with the different arrays:
src.image_data.point_data.update()
# We select the 'scalar' attribute, ie the norm of Phi
src2 = mlab.pipeline.set_active_attribute(src, point_scalars='scalar')
# Cut isosurfaces of the norm
contour = mlab.pipeline.contour(src2)
# contour.filter.contours=[plotIsoSurfaceContours]
# contour.filter.contours=[plotIsoSurfaceContours[0]]
min_c = min(contour.filter._data_min * 1.05,contour.filter._data_max)
max_c = max(contour.filter._data_max * 0.95,contour.filter._data_min)
plotIsoSurfaceContours = [ max(min(max_c,x),min_c) for x in plotIsoSurfaceContours ]
contour.filter.contours= plotIsoSurfaceContours
# Now we select the 'angle' attribute, ie the phase of Phi
contour2 = mlab.pipeline.set_active_attribute(contour, point_scalars='z')
# And we display the surface. The colormap is the current attribute: the phase.
# mlab.pipeline.surface(contour2, colormap='hsv')
xxx = mlab.pipeline.surface(contour2, colormap='gist_ncar')
colorbar = xxx.module_manager.scalar_lut_manager
colorbar.reverse_lut = True
lut = xxx.module_manager.scalar_lut_manager.lut.table.to_array()
lut[:,-1] = int(plotIsoSurfaceOpacity * 254)
xxx.module_manager.scalar_lut_manager.lut.table = lut
# mlab.colorbar(title='Phase', orientation='vertical', nb_labels=3)
self.data is my data, and for unknown reasons, if you want to set the opacity of your surface, you must reverse the lut first and then set the opacity. Multiplication by 254 instead of 255 is done to avoid a possible bug in mayavi.
I hope this helps someone.
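For readers without the surrounding class, a hedged, self-contained sketch of the same idea (modelled on the atomic-orbital example linked above; the test function, contour level and colormap are arbitrary choices) might look like this:
import numpy as np
from mayavi import mlab

# Arbitrary test data on a regular grid
x, y, z = np.mgrid[-2:2:50j, -2:2:50j, -2:2:50j]
values = np.exp(-(x**2 + y**2 + z**2))

# Scalar field used to compute the iso-surface
src = mlab.pipeline.scalar_field(values)

# Attach a second point-data array holding the grid index along the height axis
# (depending on your array ordering this may be index 0 or 2)
src.image_data.point_data.add_array(np.indices(values.shape)[2].T.ravel())
src.image_data.point_data.get_array(1).name = 'height'
src.image_data.point_data.update()

# Contour on the original scalars...
scalars = mlab.pipeline.set_active_attribute(src, point_scalars='scalar')
contour = mlab.pipeline.contour(scalars)
contour.filter.contours = [0.5]

# ...then colour the resulting surface by the 'height' array
colored = mlab.pipeline.set_active_attribute(contour, point_scalars='height')
mlab.pipeline.surface(colored, colormap='gist_ncar')
mlab.show()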