I am a mathematician. Recently, I became the editor of the puzzles and problems column for a well-known magazine. Occasionally, I need to create a figure to accompany a problem or solution. These figures mostly relate to 2D (occasionally, 3D) Euclidean geometry (lines, polygons, circles, plus the occasional ellipse or other conic section). The goal is obtaining figures of very high quality (press-ready), with Computer Modern ("TeX") textual labels. My hope is finding (or perhaps helping write!) a relatively high-level Python library that "knows" Euclidean geometry in the sense that natural operations (e.g., drawing a perpendicular line to a given one passing through a given point, bisecting a given angle, or reflecting a figure A on a line L to obtain a new figure A') are already defined in the library. Of course, the ability to create figures after their elements are defined is a crucial goal (e.g., as Encapsulated PostScript).
I know multiple sub-optimal solutions to this problem (some partial), but I don't know of any that is both simple and flexible. Let me explain:
Asymptote (similar to/based on Metapost) allows creating extremely high-quality figures of great complexity, but knows almost nothing about geometric constructions (it is a rather low-level language) and thus any nontrivial construction requires quite a long script.
TikZ with package tkz-euclide is high-level, flexible and also generates quality figures, but its syntax is so heavy that I just cry for Python's simplicity in comparison. (Some programs actually export to TikZ---see below.)
Dynamic Geometry programs, of which I'm most familiar with Geogebra, often have figure-exporting features (EPS, TikZ, etc.), but are meant to be used interactively. Sometimes, what one needs is a figure based on hard specs (e.g., exact side lengths)---defining objects in a script is ultimately more flexible (if correspondingly less convenient).
Two programs, Eukleides and GCLC, are closest to what I'm looking for: they generate figures (in EPS format; GCLC also exports to TikZ). Eukleides has the prettiest, simplest syntax of all the options (see the examples), but it happens to be written in C (with source available, though I'm not sure about the license), is rather limited/non-customizable, and is no longer maintained. GCLC is still maintained, but it is closed-source, its syntax is significantly worse than Eukleides's, and it has certain other unnatural quirks. Besides, it is not available for macOS (my laptop is a Mac).
Python has:
Matplotlib, which produces extremely high-quality figures (particularly of functions or numerical data), but does not seem to know about geometric constructions, and
Sympy, which has a geometry module that does know about geometric objects and constructions, all accessible in delightful Python syntax, but which seems to have no figure-exporting (or even displaying?) capabilities.
Finally, a question: Is there a library, something like "Figures for Sympy/geometry", that uses Python syntax to describe geometric objects and constructions, allowing one to generate high-quality figures (primarily for printing, say EPS)?
If a library with such functionality does not exist, I would consider helping to write one (perhaps an extension to Sympy?). I will appreciate pointers.
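As a proof of concept, the two pieces do already compose: Sympy can compute a construction and matplotlib can render the result to EPS. This is only a sketch of the idea, not an existing library; the construction and all names are mine:

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen; drop this line for interactive use
import matplotlib.pyplot as plt
from sympy import Line, Point, Segment

# Construction (done by sympy): the perpendicular from C to line AB.
A, B, C = Point(0, 0), Point(4, 0), Point(1, 2)
F = Line(A, B).projection(C)  # foot of the perpendicular from C

# Rendering (done by matplotlib): draw the segments and place labels.
fig, ax = plt.subplots()
ax.set_aspect('equal')
ax.axis('off')
for seg in (Segment(A, B), Segment(C, F)):
    ax.plot([float(p.x) for p in seg.points],
            [float(p.y) for p in seg.points], 'k-')
for name, p in zip('ABCF', (A, B, C, F)):
    ax.annotate('$%s$' % name, (float(p.x), float(p.y)),
                textcoords='offset points', xytext=(4, 4))
fig.savefig('perpendicular.eps')  # vector EPS, as wanted for print
```

With matplotlib's usetex option, the labels could even be set in Computer Modern by TeX itself; the missing piece is exactly the high-level wrapper the question asks about.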
There is a way to generate vector images with matplotlib, writing the figure through the io library to a vector image (SVG), with this approach.
I personally tried running the code from that page (which generates a vector histogram) as a Python file, and it worked.
The code:
import numpy as np
import matplotlib.pyplot as plt
import xml.etree.ElementTree as ET
from io import BytesIO
import json

plt.rcParams['svg.fonttype'] = 'none'

# Apparently, this `register_namespace` method is necessary to avoid garbling
# the XML namespace with ns0.
ET.register_namespace("", "http://www.w3.org/2000/svg")

# Fixing random state for reproducibility
np.random.seed(19680801)

# --- Create histogram, legend and title ---
plt.figure()
r = np.random.randn(100)
r1 = r + 1
labels = ['Rabbits', 'Frogs']
H = plt.hist([r, r1], label=labels)
containers = H[-1]
leg = plt.legend(frameon=False)
plt.title("From a web browser, click on the legend\n"
          "marker to toggle the corresponding histogram.")

# --- Add ids to the svg objects we'll modify
hist_patches = {}
for ic, c in enumerate(containers):
    hist_patches['hist_%d' % ic] = []
    for il, element in enumerate(c):
        element.set_gid('hist_%d_patch_%d' % (ic, il))
        hist_patches['hist_%d' % ic].append('hist_%d_patch_%d' % (ic, il))

# Set ids for the legend patches
for i, t in enumerate(leg.get_patches()):
    t.set_gid('leg_patch_%d' % i)

# Set ids for the text patches
for i, t in enumerate(leg.get_texts()):
    t.set_gid('leg_text_%d' % i)

# Save SVG in a fake file object.
f = BytesIO()
plt.savefig(f, format="svg")

# Create XML tree from the SVG file.
tree, xmlid = ET.XMLID(f.getvalue())

# --- Add interactivity ---

# Add attributes to the patch objects.
for i, t in enumerate(leg.get_patches()):
    el = xmlid['leg_patch_%d' % i]
    el.set('cursor', 'pointer')
    el.set('onclick', "toggle_hist(this)")

# Add attributes to the text objects.
for i, t in enumerate(leg.get_texts()):
    el = xmlid['leg_text_%d' % i]
    el.set('cursor', 'pointer')
    el.set('onclick', "toggle_hist(this)")

# Create script defining the function `toggle_hist`.
# We create a global variable `container` that stores the patches id
# belonging to each histogram. Then a function "toggle_element" sets the
# visibility attribute of all patches of each histogram and the opacity
# of the marker itself.
script = """
<script type="text/ecmascript">
<![CDATA[
var container = %s

function toggle(oid, attribute, values) {
    /* Toggle the style attribute of an object between two values.

    Parameters
    ----------
    oid : str
      Object identifier.
    attribute : str
      Name of style attribute.
    values : [on state, off state]
      The two values that are switched between.
    */
    var obj = document.getElementById(oid);
    var a = obj.style[attribute];

    a = (a == values[0] || a == "") ? values[1] : values[0];
    obj.style[attribute] = a;
}

function toggle_hist(obj) {
    var num = obj.id.slice(-1);

    toggle('leg_patch_' + num, 'opacity', [1, 0.3]);
    toggle('leg_text_' + num, 'opacity', [1, 0.5]);

    var names = container['hist_'+num]

    for (var i=0; i < names.length; i++) {
        toggle(names[i], 'opacity', [1, 0])
    };
}
]]>
</script>
""" % json.dumps(hist_patches)

# Add a transition effect
# (Element.getchildren() was removed in Python 3.9; index the tree directly.)
css = tree[0][0]
css.text = css.text + "g {-webkit-transition:opacity 0.4s ease-out;" + \
    "-moz-transition:opacity 0.4s ease-out;}"

# Insert the script and save to file.
tree.insert(0, ET.XML(script))
ET.ElementTree(tree).write("svg_histogram.svg")
Before running it, you need to pip install the libraries imported at the top. It successfully saved an SVG file with the plot (open the file and zoom in on the histogram and you will see no pixels, as the image is generated from mathematical functions).
It (obviously, these days) uses Python 3.
You could then import the SVG image into your TeX document for the publication rendering.
I hope it may help.
Greetings,
Javier.
I have some code that uses Python to read some data from a file and then, still in Python, generates an object in Maya based on the file. Everything is working well and the object comes out looking correct. The problem, however, is that none of the segments that the object is made up of has a correct actual position; the Translate XYZ is set to 0 for all of them, even though they look correct. The problem with this is that when I later import the model into Unity3D I can't interact with the objects properly, as the position is still at 0 while the mesh is where it should be. Is there some correct way to generate objects so that they have a position?
The code calls multiple functions to make different segments (an example of one of these functions is shown below). Then it uses maya.cmds.polyUnite to make it into one object. This is repeated using a for-loop that runs some number of times (specified in the file). This for-loop calls cmds.duplicate(object made above) and moves the new object along the z-axis using cmds.move(0, 0, z, duped object). Due to some bad coding, the code then calls polyUnite to make it all into one big object and then calls polySeparate to split it into small segments again. Is it possible that this is causing the problem?
Each segment is generated something like this:
cmds.polyCube(n='leftWall', w=w, h=h, d=d, sw=w, sh=h, sd=d)
cmds.setAttr('leftWall.translateX', -Bt/float(2)-(THICKNESS/float(2)))
cmds.setAttr('leftWall.translateY', a)
cmds.setAttr('leftWall.rotateZ', -90)
cmds.polyNormal('leftWall', nm=0, unm=0)
cmds.polyCube(n='rightWall', w=w, h=h, d=d, sw=w, sh=h, sd=d)
cmds.setAttr('rightWall.translateX', Bt/float(2)+(THICKNESS/float(2)))
cmds.setAttr('rightWall.translateY', a)
cmds.setAttr('rightWall.rotateZ', 90)
cmds.polyNormal('rightWall', nm=0, unm=0)
cmds.polyUnite('leftWall', 'rightWall', n='walls')
addTexture('walls', 'wall.jpg')
I am struggling with properly organising Python code. My struggle stems from namespaces in Python. More specifically, as I understand, every module has its own specific namespace and if not stated specifically, elements from one namespace will not be available in another namespace.
To give an example of what I mean, consider the file subfile1.py:
def foo():
    print(a)
Now if I want the foo() function to print a from the namespace of another module/script, I will have to set it explicitly for that subfile1 module; e.g. like in main.py:
import subfile1
a = 4
subfile1.a = 4
subfile1.foo()
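For contrast, the usual Python idiom avoids this cross-module wiring altogether: the sub-module receives its data as function parameters instead of reading it from a namespace. A minimal sketch of the same hypothetical foo:

```python
# subfile1.py (hypothetical rewrite): foo receives its data explicitly,
# so nothing needs to be injected into the module's namespace.
def foo(a):
    print(a)

# main.py would then simply do:
#     import subfile1
#     subfile1.foo(4)
foo(4)  # called directly here so the sketch is self-contained
```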
When I code (in R), it's generally to produce data analytics reports. So I have to import data, clean the data and perform analyses on it. My coding projects (to be re-run every couple of months) are then organised as follows: I have a main script from which I run other sub-scripts. I find this very handy. I have separate sub-scripts for:
Setting paths
Loading data
Data cleaning
Specific analyses
Simply looking at the names of the sub-scripts listed in the main script gives me then a good overview of what is being done and where I need to add additional sub-scripts if needed.
With Python I don’t find this an easy way of working because of the namespaces. If I would like to keep this way of working in Python, it seems to me that I would have to import all sub-scripts as modules and set the necessary variables for each module explicitly (either in the module or in the main script as in the example above).
So I thought that maybe my way of working is simply not optimal, at least in a Python setting. Could anyone tell me how best to organise the project I have in mind? Or put differently, what is a good way of organising code if the same data/variables has to be used at various places in the code?
Update:
After reading the comments, I made another example. Now with classes. Any comments on whether this is an appropriate way of structuring code or not would be very welcome. My main aim is to split the code in bits and pieces so a third party can quickly gauge what is going on and make changes to the code where necessary.
I have a master_file.py in which I run the code:
from sub_file_1 import DoSomething
from sub_file_2 import DoSomethingElse
from sub_file_3 import CombineTheStuff
x1 = DoSomething.c #could e.g. be import from a data source and transform the data
x2 = DoSomethingElse.d #could e.g. be import from another data source and transform the data taking into account its particularities
y = CombineTheStuff(x1, x2) # e.g. with the data imported and cleaned (x1 and x2) I can now estimate a particular model
print(x1)
print(x2)
print(y.fin_result())
sub_file_1.py contains (in reality these sub_files are very lengthy)
from param_file import GeneralPar
class DoSomething():
    c = GeneralPar.a + GeneralPar.b
sub_file_2.py contains
from param_file import GeneralPar
class DoSomethingElse():
    d = GeneralPar.a * 4
sub_file_3.py contains
class CombineTheStuff():
    def __init__(self, input1, input2):
        self.input1 = input1
        self.input2 = input2

    def fin_result(self):
        k = self.input1 * self.input2 - 0.01
        return k
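For what it's worth, the same pipeline can also be expressed with plain functions instead of classes whose attributes are computed at import time; each stage's inputs and outputs are then explicit, which makes the master file read like the overview you describe. A sketch with made-up stand-in computations:

```python
def do_something(a, b):
    """Stand-in for sub_file_1: load and transform one data source."""
    return a + b

def do_something_else(a):
    """Stand-in for sub_file_2: load and transform another data source."""
    return a * 4

def combine_the_stuff(x1, x2):
    """Stand-in for sub_file_3: estimate a model from x1 and x2."""
    return x1 * x2 - 0.01

# master_file.py then reads as a sequence of explicit calls; nothing
# depends on module-level state being set from outside.
x1 = do_something(1.0, 2.0)
x2 = do_something_else(1.0)
y = combine_the_stuff(x1, x2)
```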
I am writing a Python3 program to work with AutoCAD.
I use pyautocad and comtypes.
I can take any object on the drawing and get its best interface.
For example, I can explode some block reference and work with new objects the AutoCAD creates:
for NewItem in BlockReference.Explode():
    # NewItem is unusable unknown object here
    NewItem = comtypes.client.GetBestInterface(NewItem)
    # Now NewItem is what it is in Acad (text, line or so on)
    if NewItem.ObjectName == 'AcDbMText':
        ....
The GetBestInterface method is perfect if I want to get 'the best' interface, which supports the methods necessary to interact with the object as a specific Acad object (for example, AcDbMText). But if I want, for example, to explode an MText or a Dimension, I need the methods of AcDbEntity.
So, can anyone please advise me how I can get not 'the best' but a particular interface of an object? And, ideally, a list of the interfaces it supports.
This was only tested with python 2.7:
from pyautocad import Autocad, APoint
from comtypes import IDispatch
from comtypes.client import GetBestInterface
from comtypes.gen.AutoCAD import IAcadEntity, IAcadObject

# Get acad application
acad = Autocad(create_if_not_exists=True)

# Create a new document
doc1 = GetBestInterface(acad.Application.Documents.Add())

# Add a circle in this document and make it visible
circle = GetBestInterface(doc1.ModelSpace.AddCircle(APoint(0.0, 0.0), 1.0))

# To cast to a different interface:
circle = circle.QueryInterface(IDispatch)
circle = circle.QueryInterface(IAcadEntity)
circle = circle.QueryInterface(IAcadObject)
Should work, tho. Stay away from CopyObjects. Just sayin'.
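As for listing the interfaces an object supports: QueryInterface raises for interfaces the object does not implement, so a small helper can probe a list of candidates. A sketch (the helper name and candidate list are mine; on a real COM object the exception will be comtypes.COMError):

```python
def supported_interfaces(com_obj, candidates):
    """Return the candidate COM interfaces that com_obj implements.

    QueryInterface raises when an interface is not supported,
    so we simply try each candidate in turn.
    """
    found = []
    for itf in candidates:
        try:
            com_obj.QueryInterface(itf)
        except Exception:  # comtypes.COMError on a real COM object
            continue
        found.append(itf)
    return found

# e.g.: supported_interfaces(circle, [IAcadObject, IAcadEntity])
```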
So in a nutshell I am needing to export the vertex normals from a character into a text file, or whatever, and then reimport them onto the same character in a different scene.
I have the import/export part working in a way that I think is OK, but actually going through the loop and setting the normal on each vertex takes over twenty minutes and usually overloads the RAM on my machine and crashes Maya.
I guess I am looking for a way to make my code more efficient or just run faster, any advice would be appreciated. Thanks.
def ImoNorms(self):
    ll = mc.ls('head.vtxFace[*][*]')
    input = open('My desktop.txt', 'r')
    spltOne = ll[:len(ll)/2]
    spltTwo = ll[len(ll)/2:]
    i = 0
    for each in spltOne:
        CurrentLine = input.readline()
        kk = re.split(r'\[|\]|\,|\n|\ ', CurrentLine)
        aa = float(kk[1])
        aa = round(aa, 3)
        bb = float(kk[3])
        bb = round(bb, 3)
        cc = float(kk[5])
        cc = round(cc, 3)
        mc.select(each)
        mc.polyNormalPerVertex(xyz=(aa, bb, cc))
        i = i + 1
        if i % 1000 == 0:
            print i

init()
Sorry for the formatting issues, still new to this site.
+1 to using OpenMaya if you want better performance.
Check out MFnMesh.getNormals and MFnMesh.setNormals. I admit I haven't used these methods myself, but if it's anything like MFnMesh.setPoints it should be a significant boost in speed as it's setting the normals all at once. Seems like you don't have to deal with its history either.
Here's an example on its usage that will re-direct all of a sphere's vert normals to point down. (Go to Display->Polygons->Vertex Normals to visualize the normals)
import maya.cmds as cmds
import maya.OpenMaya as OpenMaya

# Create a sphere to change vert normals with
mesh_obj, _ = cmds.polySphere()

# Wrap sphere as MDagPath object
sel = OpenMaya.MSelectionList()
sel.add(mesh_obj)
dag_path = OpenMaya.MDagPath()
sel.getDagPath(0, dag_path)

# Pass sphere to mesh function set
mesh_fn = OpenMaya.MFnMesh(dag_path)

# Create empty vector array
vec_array = OpenMaya.MFloatVectorArray()

# Get sphere normals and stuff data in our vector array
mesh_fn.getNormals(vec_array, OpenMaya.MSpace.kObject)

for i in range(vec_array.length()):
    # Point all normals downwards
    vec_array[i].assign(OpenMaya.MFloatVector(0, -1, 0))

# Apply normals back to sphere
mesh_fn.setNormals(vec_array, OpenMaya.MSpace.kObject)
You may also want to reconsider how you read your file, instead of reading each line one by one. Maybe use json.dump to store the data to a file and json.load to retrieve it. This could potentially speed things up.
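That file-handling change might look like this (a sketch; the normals list and file name are illustrative):

```python
import json

# Write all the normals in one shot instead of formatting lines by hand...
normals = [[0.0, 1.0, 0.0], [0.5, 0.5, 0.0]]  # illustrative (x, y, z) triples
with open('normals.json', 'w') as f:
    json.dump(normals, f)

# ...and read them back as one list -- no per-line regex parsing needed.
with open('normals.json') as f:
    loaded = json.load(f)
```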
What you have will be very slow because you are creating a lot of history on your mesh: if you exit out of the loop after a couple of iterations you'll see that your mesh is accumulating a polyNormalPerVertex node for every iteration.
Unfortunately there's no way to turn off construction history for this command (which seems like an oversight: most commands have a ch flag for exactly this purpose). So the first thing to try is to add an mc.delete(ch=True) after every polyNormalPerVertex call. That will be much faster and it might be enough for what you're doing.
Otherwise you'll need to use the OpenMaya api, which is a bit harder than cmds but will let you do bulk normal operations faster. I would get the cmds version working first and see if it's good enough, it should be much more performant without the history overhead.
UPDATE
Unless you want to learn the API to do this, the right thing is probably:
1. Save out the mesh with the right normals as an MB or MA file.
2. In the other file, load that mesh and use Transfer Attributes to copy the normals across.
3. Delete history and remove the mesh you loaded in (2).
That gets you off the hook both for making your own file format and for the perf issues, plus it gives you pre-made options for cases where the topology does not line up for some reason.
I need to use GDCM to convert DICOM images to PNG format. While this example works, it does not seem to take the LUT into account, and thus I get a mixture of inverted/non-inverted images. While I'm familiar with both C++ and Python, I can't quite grasp the black magic inside the wrapper. The documentation is written purely for C++, and I need some help connecting the dots.
The main task
Convert the following section in the example:
def gdcm_to_numpy(image):
....
gdcm_array = image.GetBuffer()
result = numpy.frombuffer(gdcm_array, dtype=dtype)
....
to something like this:
def gdcm_to_numpy(image):
....
gdcm_array = image.GetBuffer()
lut = image.GetLUT()
gdcm_decoded = lut.Decode(gdcm_array)
result = numpy.frombuffer(gdcm_decoded, dtype=dtype)
....
Now this gives the error:
NotImplementedError: Wrong number or type of arguments for overloaded function 'LookupTable_Decode'.
Possible C/C++ prototypes are:
gdcm::LookupTable::Decode(std::istream &,std::ostream &) const
gdcm::LookupTable::Decode(char *,size_t,char const *,size_t) const
From looking at the GetBuffer definition, bool GetBuffer(char *buffer) const;, I guess the first parameter is the output buffer. I guess that the latter 4-argument version is the one I should aim for. Unfortunately, I have no clue what the size_t arguments should be. I've tried with
gdcm_in_size = sys.getsizeof(gdcm_array)
gdcm_out_size = sys.getsizeof(gdcm_array)*3
gdcm_decoded = lut.Decode(gdcm_out_size, gdcm_array, gdcm_in_size)
also
gdcm_in_size = ctypes.sizeof(gdcm_array)
gdcm_out_size = ctypes.sizeof(gdcm_array)*3
gdcm_decoded = lut.Decode(gdcm_out_size, gdcm_array, gdcm_in_size)
but with no success.
Update - test with the ImageApplyLookupTable according to #malat's suggestion
...
lutfilt = gdcm.ImageApplyLookupTable()
lutfilt.SetInput(image)
if (not lutfilt.Apply()):
    print("Failed to apply LUT")
gdcm_decoded = lutfilt.GetOutputAsPixmap().GetBuffer()
dtype = get_numpy_array_type(pf)
result = numpy.frombuffer(gdcm_decoded, dtype=dtype)
...
Unfortunately I get "Failed to apply LUT" printed and the images are still inverted. See the below image, ImageJ suggests that it has an inverting LUT.
As a simple solution, I would apply the LUT first. In that case you'll need to use ImageApplyLookupTable. It internally calls the gdcm::LookupTable API; see the examples.
Of course the correct solution would be to pass the DICOM LUT and convert it to a PNG LUT.
Update: now that you have posted the screenshot, I understand what is going on on your side. You are not trying to apply the DICOM Lookup Table; you are trying to change the rendering of two DICOM DataSets with different Photometric Interpretations, namely MONOCHROME1 vs. MONOCHROME2.
In this case, you can change that in software via gdcm.ImageChangePhotometricInterpretation. (Technically, this type of rendering is best done by your graphics card, but that is a different story.)