Shift SVG objects towards origin - python

I am creating a bunch of different SVG files in Python using the svgwrite module. I need to present those pictures later on a website.
Since those pictures consist of different mathematical objects, I am working in the whole plane, with both positive and negative numbers. That means I need to shift all the objects so that they are visible while maintaining their structure. Sometimes shifting the objects is not hard, but since I have to do it in almost all my work, I am looking for something more general.
So I am looking for something that will shift all of the objects and maintain their structure, so that everything is visible and the size of the picture (the width/height) is minimal.
I searched the official documentation but haven't been successful so far; maybe I'm just blind.
Update: To make it clearer, I have added an example.
Python example:
import svgwrite

im = svgwrite.drawing.Drawing()
im.add(im.line(start=(-10, -10),
               end=(20, 20),
               stroke='black'))
im.saveas('example.svg')
Then, when I view example.svg in e.g. Firefox, only the "positive" part is shown, that is, the part of the line from [0,0] to [20,20].
Or slightly different example:
import svgwrite

im = svgwrite.drawing.Drawing()
im.add(im.line(start=(10, 10),
               end=(20, 20),
               stroke='black'))
im.saveas('example2.svg')
Now there is an unnecessary gap between [0,0] and the line, which starts at [10,10].
So, to sum it up: imagine a bunch of SVG objects that can be drawn anywhere. I want to find the minimal coordinates x_min, y_min across all the objects and then:
if x_min (resp. y_min) is negative, add |x_min| (resp. |y_min|) to all the x (resp. y) coordinates of all objects, where |a| stands for the absolute value of a;
if x_min (resp. y_min) is positive, subtract x_min (resp. y_min) from all the x (resp. y) coordinates of all objects.
After that, all the objects will be visible and will touch one or both axes at zero. As discussed below, the same result can be achieved by shifting the origin correspondingly.
Note: While I am at it, a similar problem comes up with the Image module.
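For illustration, the two cases above collapse into a single subtraction; a minimal sketch in plain Python (the flat point list and helper name are just for illustration):

```python
def shift_offsets(points):
    # Find the minimal coordinates over all objects; subtracting the
    # minimum covers both cases at once (a negative minimum adds its
    # absolute value, a positive minimum gets subtracted).
    x_min = min(p[0] for p in points)
    y_min = min(p[1] for p in points)
    return -x_min, -y_min

points = [(-10, -10), (20, 20)]
dx, dy = shift_offsets(points)
shifted = [(x + dx, y + dy) for x, y in points]  # -> [(0, 0), (30, 30)]
```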

Assuming you want to shift the origin, what you need to do is surround your graph lines with a group element and apply a transform to it.
I don't know the exact syntax for that library, but going by the documentation, it will be something like the following:
im = svgwrite.drawing.Drawing()
g = svgwrite.container.Group(transform='translate(50,50)')
im.add(g)
g.add(im.line(start=(-10, -10),
              end=(20, 20),
              stroke='black'))
im.saveas('example.svg')

If I understand your clarifications correctly, what you want is for the diagram not to go off the page when rendered. If that's the case, what you need to do is add a viewBox to your SVG.
First, keep track of the minimum and maximum x and y values over the whole diagram. You may have to do that yourself, or the svgwrite library may do it for you.
Once the diagram is finished, add a viewBox attribute to your root SVG element that contains these values.
The viewBox takes this form:
minX minY width height
where minX minY is the top left of the diagram. So, for your first example above, the viewBox would be:
<svg viewBox="-10 -10 30 30" ... >
and for your second example it would be:
<svg viewBox="10 10 10 10" ... >
Then when the picture is rendered in, say, a browser, it will use the picture dimensions given by the viewBox attribute to position and scale the contents to fill the container (the browser window, <div> etc).
If, rather than having it fill the container, you want to specify a default width and height for the diagram (eg. 200px x 200px), add width and height attributes to the SVG. Like this:
<svg width="200px" height="200px" viewBox="-10 -10 30 30">
<line x1="-10" y1="-10" x2="20" y2="20" stroke="black" />
</svg>

Related

Tkinter map mouse coordinates to figure size

I'm using Tkinter, where an ASCII figure is printed on a label. I want it to change depending on where I click with the mouse, but I don't know how to tackle that problem. How would I map the mouse's coordinates so that they are restricted to a range, say [-n, n]? I am printing my mouse's (event.x, event.y), and I need those values restricted to the interval stated earlier. So instead of the values I get by moving my mouse ranging from 300 to 400, how can I map them to range from -15 to 15, for example?
Thank you in advance!
Edit: Here is what the program looks like - the center is the (0, 0) coordinate and I only want to be able to click on the sphere to rotate it. Therefore, I want my interval to range over [-r, r] when clicking with the mouse.
It depends on your intervals. Let's assume your target range is [-15, 15] and the mouse coordinate runs from 0 to 300. The mouse range spans 300 units and the target range spans 30 units, so each pixel of mouse movement corresponds to 30/300 = 0.1 steps in the target scale; scale the mouse coordinate by that factor, then shift it so that the centre of the mouse range maps to 0. Why negative values? You must determine where the (0, 0) of your coordinate system lies, and this also depends on whether you only want to grow the figure or also shrink it. Maybe give some additional information or a code snippet!
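A general way to express this is a linear interpolation from the mouse interval to the target interval (the 300-400 range comes from the question; the helper name is mine):

```python
def map_range(value, src_min, src_max, dst_min, dst_max):
    # Normalise into [0, 1], then rescale into the target interval.
    t = (value - src_min) / (src_max - src_min)
    return dst_min + t * (dst_max - dst_min)

map_range(300, 300, 400, -15, 15)  # -> -15.0
map_range(350, 300, 400, -15, 15)  # -> 0.0
map_range(400, 300, 400, -15, 15)  # -> 15.0
```

In a Tkinter click handler you would feed event.x and event.y through this with the widget's pixel bounds as the source interval.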

Testing if a point lies within a labeled object with scipys ndi.label()

Above is an image that has been put through ndi.label() and displayed with matplotlib, with each coloured region representing a different feature. Plotted on top of the image are red points, each representing a pair of coordinates. All the coordinates are stored, and ndi.label returns the number of features. Does skimage, scipy or ndimage have a function that will test whether a given set of coordinates lies within a labelled feature?
Initially I intended to use the bounding box (left, right, top, bottom) of each feature, but since the regions are not all quadrilateral this won't work.
code to generate the image:
image = io.imread("image path")
labelledImage, featureNumber = ndi.label(image)
plt.imshow(labelledImage)
for i in range(len(list)):
    y, x = list[i]
    plt.scatter(y, x, c='r', s=40)
You can use ndi.map_coordinates to find the value at a particular coordinate (or group of coordinates) in an image:
labels_at_coords = ndi.map_coordinates(
    labelledImage, np.transpose(list), order=0
)
Notes:
The coordinates array needs to be of shape (ndim, npoints) instead of the sometimes more intuitive (npoints, ndim), hence the transpose.
Ideally, you should rename your points list to something like points_list, so that you don't shadow the Python built-in list.
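To show roughly what an order=0 (nearest-neighbour) lookup does, here is a pure-Python stand-in (the grid and point values are made up; the real call should go through ndi.map_coordinates):

```python
def labels_at(label_grid, points):
    # Nearest-neighbour lookup: round each (row, col) coordinate to
    # the nearest cell and read the label stored there.
    return [label_grid[round(r)][round(c)] for r, c in points]

labels = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [2, 2, 0, 0],
]
labels_at(labels, [(0.2, 2.7), (2.0, 1.0)])  # -> [1, 2]
```

A result of 0 means the point fell on the background rather than inside a labelled feature.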

How to move the pivot point of an object to a specific location

I want to animate the creation of a cylinder. That means I want to set the scale to 0 for the first keyframe and to the actual cylinder size for the last keyframe.
First I create a cylinder between two points like this:
import math

import bpy
import mathutils

# p1 is point 1 and p2 is point 2
dx, dy, dz = p2.x - p1.x, p2.y - p1.y, p2.z - p1.z
v_axis = mathutils.Vector((dx, dy, dz)).normalized()
v_obj = mathutils.Vector((0, 0, 1))
v_rot = v_obj.cross(v_axis)
angle = math.acos(v_obj.dot(v_axis))
bpy.ops.mesh.primitive_cylinder_add()
bpy.ops.transform.rotate(value=angle, axis=v_rot)
After this rotation I would like to set the pivot point at the location of p1 in order to be able to manipulate the location and scaling in respect to p1.
I know how to set the pivot point to the 3D cursor from within the blender UI but how can I set the pivot point to a specific location (p1) from within my python script?
I think the approach people use is to first translate the volume so that the desired pivot point is at the origin, then rotate, and then translate it back to the proper position.
You can also see section 6.1 of the following webpage:
http://inside.mines.edu/~gmurray/ArbitraryAxisRotation/
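The translate-rotate-translate-back idea can be sketched in plain Python for a 2D point (the function name is mine; the same composition applies to the 4x4 matrices Blender uses):

```python
import math

def rotate_about(point, pivot, angle):
    # Translate so the pivot sits at the origin, rotate, translate back.
    x, y = point[0] - pivot[0], point[1] - pivot[1]
    c, s = math.cos(angle), math.sin(angle)
    return (pivot[0] + x * c - y * s,
            pivot[1] + x * s + y * c)

rotate_about((2, 1), (1, 1), math.pi / 2)  # -> (1.0, 2.0) up to rounding
```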
What I ended up doing was scaling from 0 to 100% while simultaneously changing the location of the cylinder so that its bottom is always at p1.
But I am still looking for a better solution.
Another technique to try is experimenting with parent objects. I created a cylinder at the origin, then created an Empty object and moved it to <0,-4,0>. I set the Empty as the parent of the cylinder using the Parent field of the Relations subtab of the Object tab in the Properties window. The cylinder's position of <0,0,0> was then interpreted relative to the Empty. I moved the cylinder back to the world origin, but now its Location (which is relative to its parent) was <0,4,0>. I then animated the Scale of the Empty from 1 to 2. The cylinder, being a child, was affected by this scaling, and the Empty provided the origin of the scaling, so the cylinder slid along the y axis as it scaled.

Measuring rectangles at odd angles with a low resolution input matrix (Linear regression classification?)

I'm trying to solve the following problem:
Given an input of, say,
0000000000000000
0011111111110000
0011111111110000
0011111111110000
0000000000000000
0000000111111110
0000000111111110
0000000000000000
I need to find the width and height of all rectangles in the field. The input is actually a single column at a time (think like a scanner moves from left to right) and is continuous for the duration of the program (that is, the scanning column doesn't move, but the rectangles move over it).
In this example, I can 'wait for a rectangle to begin' (that is, watch for zeros changing to ones) and then watch it end (ones back to zeros) and measure the piece in 'grid units'. This works fine for the simple case outlined above, but will fail if the rectangle is tilted at an angle, for example:
0000000000000000
0000011000000000
0000111100000000
0001111111000000
0000111111100000
0000011111110000
0000000111100000
0000000011000000
I had originally thought that the following question would apply:
Dynamic programming - Largest square block
but now I'm not so sure.
I have little to no experience with regression or regression testing, but I think that I could represent this as an input of 8 variables…
Well, to be honest, I'm not sure how I would do this at all. The sizes that this part of the code extracts need to be fitted against rectangles of known sizes (i.e. from a database).
I initially thought I could feed the known data in as training examples and store the positive test results, but I'm really not sure where to go from here.
Thanks for any advice you might have.
Collect the transition points (from a 1 to a 0 or vice versa) as you're scanning, then work out the length and width either directly from them or from the convex hull of each object.
If rectangles can overlap, then you'll have bigger issues.
I'd take the following steps:
get all the columns together in a matrix (this is needed for proper filtering)
now apply a filter (you may need to google for one) to sharpen edges and corners
create some structure to hold the data for the next steps (this can have many different solutions; choose your favourite and/or the optimal one)
scan vertically (column by column), and for each segment of consecutive ones found in a column (a segment meaning you have found its start and end y coordinates):
check whether this segment overlaps some segment in the previous column
if it does not, consider it a new rect: create a rect object, assign its handle to the segment, and update the new rect's metrics (this operation takes just the segment's coordinates - x, ymin, ymax - and will be discussed later)
if it does, assume this is the same rect: take that rect's handle, assign the handle to the current segment, then get the rect by its handle and update its metrics
That's pretty much it. After this you will have a pool of rect objects, each holding the four coordinates of its corners. Do some primitive math to approximate each rect's width and height.
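The per-column segment extraction and the overlap test from the steps above can be sketched like this (the function names are mine):

```python
def column_segments(col):
    # (start, end) index pairs for each run of consecutive ones in a column.
    segments, start = [], None
    for i, v in enumerate(col):
        if v and start is None:
            start = i
        elif not v and start is not None:
            segments.append((start, i - 1))
            start = None
    if start is not None:
        segments.append((start, len(col) - 1))
    return segments

def overlaps(a, b):
    # Two (start, end) segments overlap unless one ends before the other begins.
    return a[0] <= b[1] and b[0] <= a[1]

column_segments([0, 1, 1, 0, 1])  # -> [(1, 2), (4, 4)]
overlaps((1, 2), (2, 5))          # -> True
overlaps((1, 2), (3, 5))          # -> False
```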
So where is the magic? Well, it all happens in the update rect metrics routine.
For each rect we have 13 metrics:
min X => ymin1, ymax1
max X => ymin2, ymax2
min Y => xmin1, xmax1
max Y => xmin2, xmax2
average vertical segment length
First of all we have to determine whether the rect is properly aligned with our scan grid. To do this, we compare the average vertical segment length against max Y - min Y. If they are the same (I'd choose a threshold around 97% and then tune it for the best results), then we assume the following corners for our rect:
(min X, max Y)
(min X, min Y)
(max X, max Y)
(max X, min Y).
Otherwise our rect is rotated, and in that case we take its corners as follows:
(min X, (ymin1+ymax1)/2)
((xmin1+xmax1)/2, min Y)
(max X, (ymin2+ymax2)/2)
((xmin2+xmax2)/2, max Y)
I posed this question to a friend, and he suggested:
When seeing a 1 for the first time, store it as a new shape. Flood fill it to the right, and add those points to the same shape.
Any input pixel that isn't in a shape now is a new shape. Do the same flood fill.
On the next input column, flood again from the original shape's points. Add new pixels to the corresponding shape.
If a flood fill does not add any new pixels for two consecutive columns, you have a completed shape. Move on, and try to determine its dimensions.
This then leaves us with getting the dimensions for a shape we isolated (like in example 2).
For this, we came up with the following:
If the number of leftmost pixels in the shape is below the average number of pixels per column, the piece is probably rotated. In that case, find the corners by taking the outermost pixels, and use the distance formula between all of them: the largest distance is the diagonal, the others are the width and height.
Otherwise, the piece is probably perfectly aligned, so the corners are probably just the top-left-most pixel, the bottom-right-most pixel, etc.
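The distance-formula step for a shape's four corners can be sketched like this (the helper name is mine; corners are (x, y) tuples):

```python
import math

def rect_dims(corners):
    # All pairwise distances between the four corner points: the two
    # largest are the diagonals, the remaining four are the sides.
    dists = sorted(
        math.dist(a, b)
        for i, a in enumerate(corners)
        for b in corners[i + 1:]
    )
    # dists is [w, w, h, h, d, d] with w <= h, so:
    return dists[0], dists[2]

rect_dims([(0, 0), (3, 0), (0, 4), (3, 4)])  # -> (3.0, 4.0)
```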
What do you all think?

How to 'zoom' in on a section of the Mandelbrot set?

I have created a Python file to generate a Mandelbrot set image. The original maths code was not mine, so I do not understand it - I only heavily modified it to make it about 250x faster (Threads rule!).
Anyway, I was wondering how I could modify the maths part of the code to make it render one specific bit. Here is the maths part:
for y in xrange(size[1]):
    for x in xrange(size[0]):
        coords = (uleft[0] + (x/size[0]) * (xwidth), uleft[1] - (y/size[1]) * (ywidth))
        z = complex(coords[0], coords[1])
        o = complex(0, 0)
        dotcolor = 0  # default, convergent
        for trials in xrange(n):
            if abs(o) <= 2.0:
                o = o**2 + z
            else:
                dotcolor = trials
                break  # diverged
        im.putpixel((x, y), dotcolor)
And the size definitions:
size = (500, 500)
n = 64
box = ((-2, 1.25), (0.5, -1.25))
plus = size[1] + size[0]
uleft = box[0]
lright = box[1]
xwidth = lright[0] - uleft[0]
ywidth = uleft[1] - lright[1]
What do I need to modify to make it render a certain section of the set?
The line:
box=((-2,1.25),(0.5,-1.25))
is the bit that defines the area of the coordinate space being rendered, so you just need to change this line. The first coordinate pair is the top left of the area; the second is the bottom right.
To get a new coordinate from the image should be quite straightforward. You've got two coordinate systems, your "image" system 100x100 pixels in size, origin at (0,0). And your "complex" plane coordinate system defined by "box". For X:
X_complex=X_complex_origin+(X_image/X_image_width)*X_complex_width
The key in understanding how to do this is to understand what the coords = line is doing:
coords = (uleft[0] + (x/size[0]) * (xwidth),uleft[1] - (y/size[1]) * (ywidth))
Effectively, the x and y values you are looping through, which correspond to the coordinates of the on-screen pixel, are being translated to the corresponding point on the complex plane being looked at. This means that the (0,0) screen coordinate translates to the upper-left corner of the region being looked at, (-2, 1.25), and (1,0) is the same but moved 1/500 of the distance (assuming a 500-pixel-wide window) between the -2 and 0.5 x-coordinates.
That's exactly what that line is doing - I'll expand just the X-coordinate bit with more illustrative variable names to indicate this:
mandel_x = mandel_start_x + (screen_x / screen_width) * mandel_width
(The mandel_ variables refer to the coordinates on the complex plane, the screen_ variables refer to the on-screen coordinates of the pixel being plotted.)
If you want then to take a region of the screen to zoom into, you want to do exactly the same: take the screen coordinates of the upper-left and lower-right region, translate them to the complex-plane coordinates, and make those the new uleft and lright variables. ie to zoom in on the box delimited by on-screen coordinates (x1,y1)..(x2,y2), use:
new_uleft = (uleft[0] + (x1/size[0]) * (xwidth), uleft[1] - (y1/size[1]) * (ywidth))
new_lright = (uleft[0] + (x2/size[0]) * (xwidth), uleft[1] - (y2/size[1]) * (ywidth))
(Obviously you'll need to recalculate the size, xwidth, ywidth and other dependent variables based on the new coordinates)
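Putting those two formulas into one helper (the function name is mine; the values match the question's default box and a 500x500 window):

```python
def zoom_box(uleft, lright, size, p1, p2):
    # Translate two on-screen corner pixels into complex-plane
    # coordinates, giving the new rendering box.
    xwidth = lright[0] - uleft[0]
    ywidth = uleft[1] - lright[1]
    (x1, y1), (x2, y2) = p1, p2
    new_uleft = (uleft[0] + (x1 / size[0]) * xwidth,
                 uleft[1] - (y1 / size[1]) * ywidth)
    new_lright = (uleft[0] + (x2 / size[0]) * xwidth,
                  uleft[1] - (y2 / size[1]) * ywidth)
    return new_uleft, new_lright

# Zoom into the upper-left quarter of the default view:
zoom_box((-2, 1.25), (0.5, -1.25), (500, 500), (0, 0), (250, 250))
# -> ((-2.0, 1.25), (-0.75, 0.0))
```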
In case you're curious, the maths behind the Mandelbrot set isn't that complicated (just complex).
All it is doing is taking a particular coordinate, treating it as a complex number, and then repeatedly squaring it and adding the original number to it.
For some numbers, doing this causes the result to diverge, growing constantly towards infinity as you repeat the process. For others, it always stays below a certain level (e.g. obviously (0.0, 0.0) never gets any bigger under this process). The Mandelbrot set (the black region) is the set of coordinates that don't diverge. It has been shown that once the magnitude exceeds 2, the sequence is guaranteed to diverge, which is why your code compares against 2.0.
Usually the regions that do diverge are plotted with the number of iterations it takes for them to exceed that value (the trials variable in your code), which is what produces the coloured regions.
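The per-pixel iteration described above, in isolation (this mirrors the trials loop in the question's code, including its convention that a point that never escapes gets colour 0):

```python
def escape_count(c, max_trials=64):
    # Repeatedly square and add the original point; return the trial
    # at which |z| exceeds 2, or 0 if it never escapes (convergent).
    z = 0j
    for trial in range(max_trials):
        if abs(z) > 2.0:
            return trial
        z = z * z + c
    return 0

escape_count(0j)      # -> 0 (in the set)
escape_count(1 + 1j)  # -> 2 (diverges quickly)
```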
