Filling a closed curve in PyX (python) failing

I am clearly misunderstanding how closed curves, and filling them, work in PyX. I'm building a closed curve from four parts, but the filling fails depending on how I define one of the four curves. Below is a minimal working/failing example and the output. Why does the first definition of upArc work, but the second one does not?
from pyx import *
c = canvas.canvas()
cL = canvas.canvas()
cR = canvas.canvas()
upArc = path.path(path.arc(0,3,1,180,0))
right = path.line(1,3,1,0)
downArc = path.path(path.arcn(0,0,1,0,180))
left = path.line(-1,0,-1,3)
p = upArc+right+downArc<<left
cR.fill(p,[color.rgb.blue])
cR.stroke(p)
upArc = path.path(path.arc(0,0,1,180,0)).transformed(trafo.translate(0,3))
right = path.line(1,3,1,0)
downArc = path.path(path.arcn(0,0,1,0,180))
left = path.line(-1,0,-1,3)
p = upArc+right+downArc<<left
cL.fill(p,[color.rgb.blue])
cL.stroke(p)
c.insert(cL,[trafo.translate(-2,0)])
c.insert(cR,[trafo.translate( 2,0)])
c.writePDFfile("minfail")
A picture of the results.

PyX uses the PostScript path model, in which a single path can contain several subpaths. (Think of using a moveto path element within a path.) When filling such paths, effects like the one you observe can occur. Note that an arc contains an implicit moveto at its beginning, which is treated like a lineto when filling, but not when stroking. This is why it makes a difference whether you use the add operator + or the join operator <<. Switching to the join operator resolves your problem.
(You can use
print(p.normpath().normsubpaths)
to see that you end up with several normsubpaths when adding the paths, even though the arc length does not change, since the moveto commands from one normsubpath to the next do not contribute to the arc length.)
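As a minimal sketch of that fix (assuming the same geometry as in the question), the whole outline can be built with the join operator so it becomes a single subpath that fills and strokes consistently:
from pyx import canvas, color, path, trafo

c = canvas.canvas()
upArc = path.path(path.arc(0, 0, 1, 180, 0)).transformed(trafo.translate(0, 3))
right = path.line(1, 3, 1, 0)
downArc = path.path(path.arcn(0, 0, 1, 0, 180))
left = path.line(-1, 0, -1, 3)
# Join all four segments into one subpath; << removes the implicit moveto
# between segments, so filling and stroking now agree.
p = upArc << right << downArc << left
p.append(path.closepath())  # optional: explicitly close the outline
c.fill(p, [color.rgb.blue])
c.stroke(p)
c.writePDFfile("joined")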

win32com LineStyle Excel

Luckily I found this site:
https://www.linuxtut.com/en/150745ae0cc17cb5c866/
(There are many line types defined in the Excel enum XlLineStyle:)
(xlContinuous = 1
xlDashDot = 4
xlDashDotDot = 5
xlSlantDashDot = 13
xlDash = -4115
xlDot = -4118
xlDouble = -4119
xlLineStyleNone = -4142)
I ran a try/except loop roughly +/- 100,000 times, setting lines each time, because I thought there had to be some [index] number somewhere that would put this line into my picture too, but there wasn't. Why not?
How can I set this line?
Why are some line style indexes in such a huge negative range, and not just 1, 2, 3...?
How can I discover things like the "number" needed to do things like that?
Why is it even possible to send data to applications at particular positions? I want to dig a little deeper into that; where can I learn more about this?
(1) You can't find the medium dashed style in the LineStyle enum because there is none. The line that is drawn as the border is a combination of LineStyle and Weight: the LineStyle is xlDash, and the Weight is xlThin for value 03 in your table and xlMedium for value 08.
(2) To figure out how to set something like this in VBA, use the Macro Recorder; it will reveal that LineStyle, Weight (and Color) are set when setting a border.
(3) There are a lot of pages describing all the constants, e.g. have a look at the one #FaneDuru linked to in the comments. They can also be found at Microsoft itself: https://learn.microsoft.com/en-us/office/vba/api/excel.xllinestyle and https://learn.microsoft.com/en-us/office/vba/api/excel.xlborderweight. It seems that someone translated them to Python constants on the linuxTut page.
(4) Don't ask why the enums are not continuous values. I assume that especially the constants with negative numbers serve more than one purpose. Just never use the values directly; always use the defined constants.
(5) You can assume that numeric values that have no defined constant may work, but the results are kind of unpredictable. It's unlikely that there are values without a constant that result in something "new" (e.g. a different border style).
As you can see in the following table, not all combinations give different borders. Setting the Weight to xlHairline will ignore the LineStyle. Setting it to xlThick will also ignore the LineStyle, except for xlDouble. On the other hand, xlDouble will be ignored when the Weight is not xlThick.
Sub border()
    With ThisWorkbook.Sheets(1)
        With .Range("A1:J18")
            .Clear
            .Interior.Color = vbWhite
        End With
        Dim lStyles(), lWeights(), lStyleNames(), lWeightNames
        lStyles() = Array(xlContinuous, xlDash, xlDashDot, xlDashDotDot, xlDot, xlDouble, xlLineStyleNone, xlSlantDashDot)
        lStyleNames() = Array("xlContinuous", "xlDash", "xlDashDot", "xlDashDotDot", "xlDot", "xlDouble", "xlLineStyleNone", "xlSlantDashDot")
        lWeights = Array(xlHairline, xlThin, xlMedium, xlThick)
        lWeightNames = Array("xlHairline", "xlThin", "xlMedium", "xlThick")
        Dim x As Long, y As Long
        For x = LBound(lStyles) To UBound(lStyles)
            Dim row As Long
            row = x * 2 + 3
            .Cells(row, 1) = lStyleNames(x) & vbLf & "(" & lStyles(x) & ")"
            For y = LBound(lWeights) To UBound(lWeights)
                Dim col As Long
                col = y * 2 + 3
                If x = 1 Then .Cells(1, col) = lWeightNames(y) & vbLf & "(" & lWeights(y) & ")"
                With .Cells(row, col).Borders
                    .LineStyle = lStyles(x)
                    .Weight = lWeights(y)
                End With
            Next
        Next
    End With
End Sub
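Since the question itself is about driving Excel from Python via win32com, here is a rough Python sketch of setting the same combination. It is only an illustration: it assumes an Excel installation, uses a hypothetical workbook path, and relies on gencache.EnsureDispatch so that the named constants (xlDash, xlMedium, xlEdgeBottom, ...) are available.
import win32com.client

# EnsureDispatch generates the Excel type library, making the named
# constants available through win32com.client.constants.
excel = win32com.client.gencache.EnsureDispatch("Excel.Application")
const = win32com.client.constants

wb = excel.Workbooks.Open(r"C:\path\to\workbook.xlsx")  # hypothetical path
ws = wb.Worksheets(1)

# "Medium dashed" is not a LineStyle of its own: it is xlDash combined
# with an xlMedium weight, exactly as described above.
border = ws.Range("B2").Borders(const.xlEdgeBottom)
border.LineStyle = const.xlDash
border.Weight = const.xlMedium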

Given a rotated rectangle in Inkscape (svg format), find coordinates in Python

Given the following rectangles in Inkscape .svg format, I want to find the absolute coordinates of all four corners of the second rectangle (in Python), without writing my own matrix transformations or anything really complex.
You'd think there would be a library for this sort of thing. In fact, I found Python SVG Extensions - simpletransform.py, which sounds like it would do it. But it's in the deprecated folder of my installed Inkscape, with this notice:
This directory IS NOT a module path, to denote this we are using a dash in the name and there is no 'init.py'
And it is not really importable as-is. I might just try copy/pasting the code, but I don't have a warm fuzzy feeling that it will work at all.
And there seem to be a lot of questions/articles about "removing transforms", but they all seem to be related to "accidentally" added transforms.
Just to make things more complex, it looks like the x/y coordinates of the second rectangle refer to the corner of the bounding box, not the actual rectangle corner. I still don't really understand Inkscape's funky coordinate system; it seems like the GUI is backwards from the actual objects. When I mouse over the rectangle, its coordinates don't match what I expect to see.
Oh, and all units are set to pixels (I think).
This is a very interesting question. Inkscape transforms (or transforms in computer graphics generally) can be quite complicated. This webpage has some good information on how transforms work in Inkscape extensions:
https://inkscapetutorial.org/transforms.html
For your specific example, the direct answer is that the Inkscape system extensions (after version 1.0) include a Transform class (in the transforms.py module), which has an apply_to_point method that can calculate the absolute coordinates.
More specifically, the following extension (inx and py files, under the menu item Extensions -> Custom -> Transform Element 2) draws the rectangle in your example with the Rectangle class, calculates the 4 corners with the apply_to_point method, and draws a path through those 4 points. The resulting two rectangles overlap each other, so we know the calculation is correct.
Code in transform2.inx file
<?xml version="1.0" encoding="UTF-8"?>
<inkscape-extension xmlns="http://www.inkscape.org/namespace/inkscape/extension">
    <name>Transform Element 2</name>
    <id>user.transform2</id>
    <effect>
        <object-type>all</object-type>
        <effects-menu>
            <submenu name="Custom"/>
        </effects-menu>
    </effect>
    <script>
        <command location="inx" interpreter="python">transform2.py</command>
    </script>
</inkscape-extension>
Code in transform2.py file
import inkex
from inkex import Rectangle, Transform
from inkex import Vector2d


class NewElement(inkex.GenerateExtension):
    container_label = 'transform'
    container_layer = True

    def generate(self):
        self.style = {'fill': 'none', 'stroke': '#000000',
                      'stroke-width': self.svg.unittouu('2px')}
        self.style_red = {'fill': 'none', 'stroke': '#FF0000',
                          'stroke-width': self.svg.unittouu('.5px')}
        rects = self.add_rect()
        for r in rects:
            yield r

    def add_rect(self):
        rect = Rectangle.new(15, 5, 20, 5)
        rect.style = self.style
        tr = Transform('rotate(45)')
        rect.transform = tr
        el = rect
        pt_top_left = tr.apply_to_point(Vector2d(el.left, el.top))
        pt_top_right = tr.apply_to_point(Vector2d(el.right, el.top))
        pt_right_bottom = tr.apply_to_point(Vector2d(el.right, el.bottom))
        pt_left_bottom = tr.apply_to_point(Vector2d(el.left, el.bottom))
        path = inkex.PathElement()
        path.update(**{
            'style': self.style_red,
            'inkscape:label': 'redline',
            'd': 'M ' + str(pt_top_left.x) + ',' + str(pt_top_left.y) +
                 ' L ' + str(pt_top_right.x) + ',' + str(pt_top_right.y) +
                 ' L ' + str(pt_right_bottom.x) + ',' + str(pt_right_bottom.y) +
                 ' L ' + str(pt_left_bottom.x) + ',' + str(pt_left_bottom.y) +
                 ' z'})
        return [rect, path]


if __name__ == '__main__':
    NewElement().run()
Here is the result of the extension run:
Furthermore, the simpletransform.py documentation you referenced in your post was written for the Inkscape system extensions before version 1.0, and the code is written in Python 2.x. Even though you can find a copy of the file that ships with Inkscape 0.92.x, you would need to spend time understanding the code and rewriting it to be Python 3 compatible before you could use it in your program. That is not really recommended.
As for Inkscape extension units, this webpage also has some good information on this topic.
https://inkscapetutorial.org/units-and-coordinate-systems.html
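If you just want to check the numbers outside of an Inkscape extension, a pure rotation about the SVG origin can also be reproduced with plain affine math. The sketch below is only an illustration of what apply_to_point does for rotate(45), with the rectangle geometry from the extension above hard-coded (assuming Rectangle.new(left, top, width, height)):
import math

def rotate_point(x, y, degrees):
    # Rotation about the SVG origin, matching the SVG transform rotate(<degrees>)
    a = math.radians(degrees)
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a))

# Rectangle.new(15, 5, 20, 5): left=15, top=5, width=20, height=5
left, top, width, height = 15, 5, 20, 5
corners = [(left, top), (left + width, top),
           (left + width, top + height), (left, top + height)]
print([rotate_point(x, y, 45) for x, y in corners])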

How to add and/or modify a point in an XY data set in Abaqus Viewer using a Python script

I have a data set, obtained from the .odb file by a script. I would like to change the coordinates of the first point in this data set (from (0, 26.7852) to (0, 0)) within the same script. From the .rpy file I found how to do this for the entire set (see below), but I have no idea how to do it for a single point. Please help.
xQuantity = visualization.QuantityType(type=DISPLACEMENT)
yQuantity = visualization.QuantityType(type=FORCE)
session.xyDataObjects['F_vs_U'].setValues(
    data=((0, 26.7852), (0.3, 26.7852), (0.394435, 35.446), (0.490063, 44.1067),
          (0.581765, 52.7674), (0.675743, 61.4282), (0.770288, 70.0889),
          (0.865283, 78.7497), (0.949015, 87.4104), (1.03486, 96.0711),
          (1.12699, 104.732), (1.21825, 113.393), (1.30867, 122.053),
          (1.38952, 130.714), (1.45982, 139.375), (1.52214, 148.036),
          (1.59321, 156.696), (1.66979, 165.357), (1.75083, 174.018),
          (1.83359, 182.679), (1.90974, 191.339), (1.96586, 200)),
    sourceDescription='Data modified in editor',
    axis1QuantityType=xQuantity,
    axis2QuantityType=yQuantity)
Actually, I found another approach. This array is a combination of two steps. In the first one I apply a small displacement to a rigid body to establish contact between it and a deformable body. Then, in the second step, I apply a certain load to the rigid body in the same direction. Previously I got a CF vs. U graph covering both steps; since there was no CF in the first step, Abaqus extrapolated the first CF value to zero displacement. Instead of this, I created RF vs. U for the first step and CF vs. U for the second one, and then combined them with the command append(A, B).
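For the original question as asked, a single point can also be changed directly by rebuilding the data tuple, since an XYData object exposes its points through its data attribute and accepts a new tuple via setValues. This is only a sketch along the lines of the .rpy snippet above, not tested against a specific Abaqus version:
xy = session.xyDataObjects['F_vs_U']
points = list(xy.data)      # data is a tuple of (x, y) pairs
points[0] = (0.0, 0.0)      # move the first point from (0, 26.7852) to (0, 0)
xy.setValues(data=tuple(points))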

Find neighbouring polygons in Python QGIS

I am using code I found and slightly modified for my purposes. The problem is that it is not doing exactly what I want, and I am stuck on what to change to fix it.
I am searching for all neighbouring polygons that share a common border (a line) with a feature, not just a point.
My goal: 135/12 is a neighbour of 319/2, 135/4 and 317, but not of 320/1.
What I get in my QGIS table after I run my script:
NEIGHBORS are the neighbouring polygons,
SUM is the number of neighbouring polygons.
The code I use also includes 320/1 as a neighbouring polygon. How can I fix it?
from qgis.utils import iface
from PyQt4.QtCore import QVariant

_NAME_FIELD = 'Nr'
_SUM_FIELD = 'calc'
_NEW_NEIGHBORS_FIELD = 'NEIGHBORS'
_NEW_SUM_FIELD = 'SUM'

layer = iface.activeLayer()
layer.startEditing()
layer.dataProvider().addAttributes(
    [QgsField(_NEW_NEIGHBORS_FIELD, QVariant.String),
     QgsField(_NEW_SUM_FIELD, QVariant.Int)])
layer.updateFields()

feature_dict = {f.id(): f for f in layer.getFeatures()}

index = QgsSpatialIndex()
for f in feature_dict.values():
    index.insertFeature(f)

for f in feature_dict.values():
    print 'Working on %s' % f[_NAME_FIELD]
    geom = f.geometry()
    intersecting_ids = index.intersects(geom.boundingBox())
    neighbors = []
    neighbors_sum = 0
    for intersecting_id in intersecting_ids:
        intersecting_f = feature_dict[intersecting_id]
        if (f != intersecting_f and
                not intersecting_f.geometry().disjoint(geom)):
            neighbors.append(intersecting_f[_NAME_FIELD])
            neighbors_sum += intersecting_f[_SUM_FIELD]
    f[_NEW_NEIGHBORS_FIELD] = ','.join(neighbors)
    f[_NEW_SUM_FIELD] = neighbors_sum
    layer.updateFeature(f)

layer.commitChanges()
print 'Processing complete.'
I have found somewhat of a workaround for it. Before using my script, I create a small buffer (for my purposes, 0.01 m was enough) around all joints. Then I use the Difference tool to remove the buffer areas from my main layer, thus removing the unwanted neighbouring polygons. With that, the code now works fine.
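An alternative to the buffer workaround (not from the original answer, just a sketch of a common approach) is to keep the script as it is but only accept a neighbour when the shared geometry is actually a line, i.e. when the intersection of the two polygons has a non-zero length. In the QGIS 2.x / PyQt4 API used above, the inner test could be replaced along these lines:
# Polygons that only touch at a point produce a point intersection with
# zero length, so they are skipped; shared edges have a positive length.
if f != intersecting_f:
    shared = geom.intersection(intersecting_f.geometry())
    if shared is not None and shared.length() > 0:
        neighbors.append(intersecting_f[_NAME_FIELD])
        neighbors_sum += intersecting_f[_SUM_FIELD]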

Using astropy.fits and numpy to apply coincidence corrections to SWIFT fits image

This question may be a little specialist, but hopefully someone might be able to help. I normally use IDL, but for developing a pipeline I'm looking to use python to improve running times.
My fits file handling setup is as follows:
import numpy as numpy
from astropy.io import fits
#Directory: /Users/UCL_Astronomy/Documents/UCL/PHASG199/M33_UVOT_sum/UVOTIMSUM/M33_sum_epoch1_um2_norm.img
with fits.open('...') as ima_norm_um2:
    #Open UVOTIMSUM file once and close it after extracting the relevant values:
    ima_norm_um2_hdr = ima_norm_um2[0].header
    ima_norm_um2_data = ima_norm_um2[0].data
    #Individual dimensions for number of x pixels and number of y pixels:
    nxpix_um2_ext1 = ima_norm_um2_hdr['NAXIS1']
    nypix_um2_ext1 = ima_norm_um2_hdr['NAXIS2']
    #Compute the size of the images (you can also do this manually rather than calling these keywords from the header):
    #Call the header and data from the UVOTIMSUM file with the relevant keyword extensions:
    corrfact_um2_ext1 = numpy.zeros((ima_norm_um2_hdr['NAXIS2'], ima_norm_um2_hdr['NAXIS1']))
    coincorr_um2_ext1 = numpy.zeros((ima_norm_um2_hdr['NAXIS2'], ima_norm_um2_hdr['NAXIS1']))
    #Check that the dimensions are all the same:
    print(corrfact_um2_ext1.shape)
    print(coincorr_um2_ext1.shape)
    print(ima_norm_um2_data.shape)
    # Make a new image file to save the correction factors:
    hdu_corrfact = fits.PrimaryHDU(corrfact_um2_ext1, header=ima_norm_um2_hdr)
    fits.HDUList([hdu_corrfact]).writeto('.../M33_sum_epoch1_um2_corrfact.img')
    # Make a new image file to save the corrected image to:
    hdu_coincorr = fits.PrimaryHDU(coincorr_um2_ext1, header=ima_norm_um2_hdr)
    fits.HDUList([hdu_coincorr]).writeto('.../M33_sum_epoch1_um2_coincorr.img')
I'm looking to then apply the following corrections:
# Define the variables from Poole et al. (2008) "Photometric calibration of the Swift ultraviolet/optical telescope":
alpha = 0.9842000
ft = 0.0110329
a1 = 0.0658568
a2 = -0.0907142
a3 = 0.0285951
a4 = 0.0308063
for i in range(nxpix_um2_ext1 - 1): #do begin
    for j in range(nypix_um2_ext1 - 1): #do begin
        if (numpy.less_equal(i, 4) | numpy.greater_equal(i, nxpix_um2_ext1-4) | numpy.less_equal(j, 4) | numpy.greater_equal(j, nxpix_um2_ext1-4)): #then begin
            #UVM2
            corrfact_um2_ext1[i,j] == 0
            coincorr_um2_ext1[i,j] == 0
        else:
            xpixmin = i-4
            xpixmax = i+4
            ypixmin = j-4
            ypixmax = j+4
            #UVM2
            ima_UVM2sum = total(ima_norm_um2[xpixmin:xpixmax,ypixmin:ypixmax])
            xvec_UVM2 = ft*ima_UVM2sum
            fxvec_UVM2 = 1 + (a1*xvec_UVM2) + (a2*xvec_UVM2*xvec_UVM2) + (a3*xvec_UVM2*xvec_UVM2*xvec_UVM2) + (a4*xvec_UVM2*xvec_UVM2*xvec_UVM2*xvec_UVM2)
            Ctheory_UVM2 = - alog(1-(alpha*ima_UVM2sum*ft))/(alpha*ft)
            corrfact_um2_ext1[i,j] = Ctheory_UVM2*(fxvec_UVM2/ima_UVM2sum)
            coincorr_um2_ext1[i,j] = corrfact_um2_ext1[i,j]*ima_sk_um2[i,j]
The above snippet is where it is messing up, as I have a mixture of IDL syntax and Python syntax. I'm just not sure how to convert certain aspects of IDL to Python. For example, I'm not quite sure how to handle ima_UVM2sum = total(ima_norm_um2[xpixmin:xpixmax,ypixmin:ypixmax]).
I'm also missing the part where it updates the correction factor and coincidence correction image files. If anyone has the patience to go over it with a fine-tooth comb and suggest the necessary changes, that would be excellent.
The original normalised image can be downloaded here: Replace ... in above code with this file
One very important thing about numpy is that it applies every mathematical or comparison function element-by-element. So you probably don't need to loop through the arrays at all.
So maybe start where you convolve your image with a sum filter. For 2D images this can be done with astropy.convolution.convolve or scipy.ndimage.filters.uniform_filter.
I'm not sure exactly what you want, but I think you want a 9x9 sum filter, which would be realized by
from scipy.ndimage.filters import uniform_filter
ima_UVM2sum = uniform_filter(ima_norm_um2_data, size=9)
Since you want to discard any pixels that are at the borders (4 pixels), you can simply slice them away:
ima_UVM2sum_valid = ima_UVM2sum[4:-4,4:-4]
This ignores the first and last 4 rows and the first and last 4 columns (the last ones are selected by making the stop value negative).
Now you want to calculate the corrections:
xvec_UVM2 = ft*ima_UVM2sum_valid
fxvec_UVM2 = 1 + (a1*xvec_UVM2) + (a2*xvec_UVM2**2) + (a3*xvec_UVM2**3) + (a4*xvec_UVM2**4)
Ctheory_UVM2 = - np.log(1-(alpha*ima_UVM2sum_valid*ft))/(alpha*ft)  # IDL's alog() is the natural logarithm, i.e. np.log
These are all arrays, so you still do not need to loop.
But then you want to fill your two images. Be careful: the correction array is smaller (we ignored the first and last 4 rows/columns), so you have to fill the same region in the correction images:
corrfact_um2_ext1[4:-4,4:-4] = Ctheory_UVM2*(fxvec_UVM2/ima_UVM2sum_valid)
coincorr_um2_ext1[4:-4,4:-4] = corrfact_um2_ext1[4:-4,4:-4] *ima_sk_um2
Still no loop, just numpy's mathematical functions. This means it is much faster (MUCH faster!) and does the same thing.
Maybe I have forgotten some slicing, which would yield a "not broadcastable" error; if so, please report back.
Just a note about your loop: Python's first axis is the second FITS axis, and Python's second axis is the first FITS axis. So if you need to loop over the axes, bear that in mind so you don't end up with IndexErrors or unexpected results.
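Putting the pieces of this answer together, a rough end-to-end sketch could look like the following. It keeps the spirit of the variable names from the question, uses a placeholder file name, treats the normalised image itself as the sky image (ima_sk_um2 is not defined in the snippets above), and multiplies the uniform_filter result by the window area because uniform_filter returns a windowed mean rather than the windowed sum that IDL's total() produces.
import numpy as np
from astropy.io import fits
from scipy.ndimage import uniform_filter

# Poole et al. (2008) coefficients, as in the question
alpha, ft = 0.9842000, 0.0110329
a1, a2, a3, a4 = 0.0658568, -0.0907142, 0.0285951, 0.0308063

with fits.open('M33_sum_epoch1_um2_norm.img') as hdul:   # placeholder path
    hdr = hdul[0].header
    ima = hdul[0].data.astype(float)

# Windowed mean times the window area (9*9) mimics IDL's total() over a 9x9 box
ima_sum = uniform_filter(ima, size=9) * 81.0

corrfact = np.zeros_like(ima)
coincorr = np.zeros_like(ima)

valid = ima_sum[4:-4, 4:-4]
xvec = ft * valid
fxvec = 1 + a1*xvec + a2*xvec**2 + a3*xvec**3 + a4*xvec**4
ctheory = -np.log(1 - alpha*valid*ft) / (alpha*ft)   # zero-count pixels will give NaN/inf here

corrfact[4:-4, 4:-4] = ctheory * fxvec / valid
coincorr[4:-4, 4:-4] = corrfact[4:-4, 4:-4] * ima[4:-4, 4:-4]  # assuming ima is the sky image

fits.HDUList([fits.PrimaryHDU(corrfact, header=hdr)]).writeto('corrfact.img', overwrite=True)
fits.HDUList([fits.PrimaryHDU(coincorr, header=hdr)]).writeto('coincorr.img', overwrite=True)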
