DirectWrite not adjusting for diacritics - python

I am currently debugging some DirectWrite code I have written, as I have run into issues when testing with non-English characters, mostly with getting sequences of multiple Unicode codepoints to return the proper glyph indices.
EDIT: After further research, I believe the issue is diacritics; the extra character should be combined somehow. The DWRITE_SHAPING_GLYPH_PROPERTIES field isDiacritic does return 1 for the last Unicode codepoint. However, it doesn't seem like the shaping process takes this into account at all. GetGlyphPlacements returns 0 for both advance and offset for the diacritic glyph. The LSB is around -5, but that's not enough of an offset to reach the correct position. Does anyone know where in the shaping process DirectWrite is supposed to take diacritics into account, and how?
Consider this character: œ̃
It is displayed as one character (through most text editors), but two codepoints: U+0153 U+0303
How do I account for this in GetGlyphs(), since they are separate codepoints? In my code, it is returning two different indices (177, 1123) and a cluster map of (0, 0).
This is what ends up getting rendered:
Which is consistent with both codepoints rendered individually, but not with the combined character. The actual glyph count returned by GetGlyphs() is 2.
My questions are as follows:
Should this be returning one index from GetGlyphs()?
Should I even be getting one index, or is there some magic involved with two different indices, where at some stage in the process they are combined in the glyph run?
If I should be getting one index, at what stage/in which functions are these indices combined? Perhaps there is a bug in my ScriptAnalysis? I am trying to narrow down where the issue may be.
Should I be using the length of the characters and not include codepoints?
I apologize as I am not super knowledgeable about fonts/Unicode and the inner workings of the whole shaping process.
Here is some of my code for the process I use to get the indices and advances:
text_length = len(text.encode('utf-16-le')) // 2
text_buffer = create_unicode_buffer(text, text_length)

self._text_analysis.GenerateResults(self._analyzer, text_buffer, len(text_buffer))

# Formula for the glyph buffer size from Microsoft.
max_glyph_size = int(3 * text_length / 2 + 16)

length = text_length
clusters = (UINT16 * length)()
text_props = (DWRITE_SHAPING_TEXT_PROPERTIES * length)()
indices = (UINT16 * max_glyph_size)()
glyph_props = (DWRITE_SHAPING_GLYPH_PROPERTIES * max_glyph_size)()
actual_count = UINT32()

self._analyzer.GetGlyphs(text_buffer,
                         len(text_buffer),
                         self.font.font_face,
                         False,  # sideways
                         False,  # rtl
                         self._text_analysis.script,  # scriptAnalysis
                         None,  # localeName
                         None,  # numberSub
                         None,  # typo features
                         None,  # feature range length
                         0,  # feature range
                         max_glyph_size,  # max glyph size
                         clusters,  # cluster map
                         text_props,  # text props
                         indices,  # glyph indices
                         glyph_props,  # glyph props
                         byref(actual_count))  # glyph count

advances = (FLOAT * length)()
offsets = (DWRITE_GLYPH_OFFSET * length)()

self._analyzer.GetGlyphPlacements(text_buffer,
                                  clusters,
                                  text_props,
                                  text_length,
                                  indices,
                                  glyph_props,
                                  actual_count,
                                  self.font.font_face,
                                  self.font.font_metrics.designUnitsPerEm,
                                  False, False,
                                  self._text_analysis.script,
                                  self.font.locale,
                                  None,
                                  None,
                                  0,
                                  advances,
                                  offsets)
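For reference, the shaping output right after these calls can be inspected like this (a small debugging sketch; the field names isDiacritic, advanceOffset and ascenderOffset assume the ctypes struct definitions mirror the corresponding DirectWrite C structs):
count = actual_count.value
print("cluster map:", list(clusters))
for g in range(count):
    # One line per glyph: its font index, diacritic flag, advance and offsets
    print("glyph %d: index=%d isDiacritic=%d advance=%.2f offset=(%.2f, %.2f)" % (
        g, indices[g], glyph_props[g].isDiacritic,
        advances[g], offsets[g].advanceOffset, offsets[g].ascenderOffset))
For "œ̃" this shows two glyphs mapped to one cluster, with the second glyph reporting isDiacritic=1 and a zero advance, as described above.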
EDIT: Here is rendering code:
def render_single_glyph(self, font_face, indice, advance, offset, metrics):
    """Renders a single glyph using D2D DrawGlyphRun"""
    glyph_width, glyph_height, lsb, font_advance = metrics

    # Slicing an array turns it into a Python object. Maybe a better way to keep it a ctypes value?
    new_indice = (UINT16 * 1)(indice)
    new_advance = (FLOAT * 1)(advance)

    run = self._get_single_glyph_run(font_face,
                                     self.font._real_size,
                                     new_indice,  # indice
                                     new_advance,  # advance
                                     pointer(offset),  # offset
                                     False,
                                     False)

    offset_x = 0
    if lsb < 0:
        # Negative LSB: we shift the layout rect to the right.
        # Otherwise we will cut the left part of the glyph.
        offset_x = math.ceil(abs(lsb))

    font_height = (self.font.font_metrics.ascent + self.font.font_metrics.descent) * self.font.font_scale_ratio

    # Create a new bitmap.
    self._create_bitmap(int(math.ceil(glyph_width)),
                        int(math.ceil(font_height)))

    # This offsets the characters if needed.
    point = D2D_POINT_2F(offset_x, int(math.ceil(font_height)))

    self._render_target.BeginDraw()
    self._render_target.Clear(transparent)
    self._render_target.DrawGlyphRun(point,
                                     run,
                                     self.brush,
                                     DWRITE_MEASURING_MODE_NATURAL)
    self._render_target.EndDraw(None, None)

    image = wic_decoder.get_image(self._bitmap)

    glyph = self.font.create_glyph(image)
    glyph.set_bearings(self.font.descent, offset_x, round(advance * self.font.font_scale_ratio))  # baseline, lsb, advance
    return glyph

The shaping process is controlled by your input, which is (text, font, locale, script, user features). All of that affects the results you get. To answer your questions specifically:
Should this be returning one index from GetGlyphs()?
That's mostly defined by your font.
Should I even be getting one index, or is there some magic involved with two different indices, where at some stage in the process they are combined in the glyph run?
GetGlyphs() operates on a single run. Glyphs are free to form a cluster according to shaping rules defined per script and to transformations defined in the font.
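If it helps to see what the cluster map means in practice, here is a small illustration (plain Python, no DirectWrite involved) that turns the cluster map returned by GetGlyphs() into ranges of glyphs that belong together; for left-to-right text like yours, the map (0, 0) with two glyphs yields one range covering both glyph indices:
def clusters_to_glyph_ranges(cluster_map, glyph_count):
    # cluster_map[i] is the index of the first glyph generated for text
    # position i; consecutive equal entries mean those UTF-16 code units
    # belong to the same cluster, so their glyphs must be drawn together.
    ranges = []
    i = 0
    while i < len(cluster_map):
        start_glyph = cluster_map[i]
        j = i + 1
        while j < len(cluster_map) and cluster_map[j] == start_glyph:
            j += 1
        end_glyph = cluster_map[j] if j < len(cluster_map) else glyph_count
        ranges.append((start_glyph, end_glyph))
        i = j
    return ranges

print(clusters_to_glyph_ranges([0, 0], 2))  # [(0, 2)]: glyphs 0 and 1 form one cluster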
If I should be getting one index, at what stage/in which functions are these indices combined? Perhaps there is a bug in my ScriptAnalysis? I am trying to narrow down where the issue may be.
Basically, if your input arguments are correct, you get what you get as output; you can't really control the core of it. What you can do is test the output for the same text and font with Uniscribe, with CoreText (macOS), and with Chromium/Firefox (HarfBuzz) to see if they differ.
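For the HarfBuzz comparison, a minimal sketch using the uharfbuzz Python bindings (the font path is hypothetical) could look like this:
import uharfbuzz as hb

with open("SomeFont.ttf", "rb") as f:  # hypothetical font file
    face = hb.Face(f.read())
font = hb.Font(face)

buf = hb.Buffer()
buf.add_str("\u0153\u0303")  # same text as above: œ + combining tilde
buf.guess_segment_properties()
hb.shape(font, buf)

for info, pos in zip(buf.glyph_infos, buf.glyph_positions):
    # glyph id, cluster, advance and offsets as HarfBuzz sees them
    print(info.codepoint, info.cluster, pos.x_advance, pos.x_offset, pos.y_offset)
If HarfBuzz also reports a zero-advance mark glyph in the same cluster (possibly with a non-zero offset), the DirectWrite shaping output is probably fine and the remaining difference lies in how the run is rendered.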
Should I be using the length of the characters and not include codepoints?
I didn't get this one.


win32com LineStyle Excel

Luckily I found this site:
https://www.linuxtut.com/en/150745ae0cc17cb5c866/
There are many line types defined in the Excel enum XlLineStyle:
xlContinuous = 1
xlDashDot = 4
xlDashDotDot = 5
xlSlantDashDot = 13
xlDash = -4115
xlDot = -4118
xlDouble = -4119
xlLineStyleNone = -4142
I ran set-line attempts with try/except roughly 100,000 times, because I thought there should be an [index] number somewhere that would also produce the line shown in my picture, but there wasn't. Why not?
How can I set this line?
Why are some line style indexes in such a huge negative range, and not just 1, 2, 3...?
How can I discover things like the right "number" for doing things like that?
Why is it even possible to send data to applications at particular positions like this? I want to go a little deeper into that; where can I learn more about it?
(1) You can't find the medium dashed line in the LineStyle enum because there is none. The line that is drawn as a border is a combination of LineStyle and Weight. The LineStyle is xlDash; the Weight is xlThin for value 03 in your table and xlMedium for value 08.
(2) To figure out how to set something like this in VBA, use the macro recorder; it will reveal that LineStyle, Weight (and color) are set when setting a border.
(3) There are a lot of pages describing all the constants, e.g. have a look at the one #FaneDuru linked to in the comments. They can also be found at Microsoft itself: https://learn.microsoft.com/en-us/office/vba/api/excel.xllinestyle and https://learn.microsoft.com/en-us/office/vba/api/excel.xlborderweight. It seems that someone translated them to Python constants on the linuxtut page.
(4) Don't ask why the enums are not continuous values. I assume especially the constants with negative numbers serve more than one purpose. Just never use the values directly; always use the defined constants.
(5) You can assume that numeric values that have no defined constant may work, but the results are kind of unpredictable. It's unlikely that there are values without a constant that result in something "new" (e.g. a different border style).
As you can see in the following table, not all combinations give different borders. Setting the Weight to xlHairline will ignore the LineStyle. Setting it to xlThick will also ignore the LineStyle, except for xlDouble. On the other hand, xlDouble will be ignored when the Weight is not xlThick.
Sub border()
    With ThisWorkbook.Sheets(1)
        With .Range("A1:J18")
            .Clear
            .Interior.Color = vbWhite
        End With
        Dim lStyles(), lWeights(), lStyleNames(), lWeightNames
        lStyles() = Array(xlContinuous, xlDash, xlDashDot, xlDashDotDot, xlDot, xlDouble, xlLineStyleNone, xlSlantDashDot)
        lStyleNames() = Array("xlContinuous", "xlDash", "xlDashDot", "xlDashDotDot", "xlDot", "xlDouble", "xlLineStyleNone", "xlSlantDashDot")
        lWeights = Array(xlHairline, xlThin, xlMedium, xlThick)
        lWeightNames = Array("xlHairline", "xlThin", "xlMedium", "xlThick")
        Dim x As Long, y As Long
        For x = LBound(lStyles) To UBound(lStyles)
            Dim row As Long
            row = x * 2 + 3
            .Cells(row, 1) = lStyleNames(x) & vbLf & "(" & lStyles(x) & ")"
            For y = LBound(lWeights) To UBound(lWeights)
                Dim col As Long
                col = y * 2 + 3
                If x = 1 Then .Cells(1, col) = lWeightNames(y) & vbLf & "(" & lWeights(y) & ")"
                With .Cells(row, col).Borders
                    .LineStyle = lStyles(x)
                    .Weight = lWeights(y)
                End With
            Next
        Next
    End With
End Sub
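Since the question is tagged win32com, here is a rough Python equivalent for setting a single border (a sketch only, not tested against your workbook; it assumes Excel is already running with a workbook open, and it spells out the enum values because win32com does not always expose the VBA constants):
import win32com.client as win32

# Excel enum values (the numbers behind the VBA constants)
xlDash = -4115
xlMedium = -4138
xlEdgeBottom = 9

excel = win32.GetActiveObject("Excel.Application")  # attach to the running Excel instance
ws = excel.ActiveWorkbook.Sheets(1)

border = ws.Range("B2").Borders(xlEdgeBottom)
border.LineStyle = xlDash   # dashed...
border.Weight = xlMedium    # ...and medium: together they give the "medium dashed" look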

Convert LineString / MultiLineString geometries to lat lon

I am using this Mapillary endpoint: https://tiles.mapillary.com/maps/vtp/mly1_public/2/{zoom_level}/{x}/{y}?access_token={} and getting such responses back (see photo). Also, here is the Mapillary documentation.
It is not quite clear to me what the nested coordinate lists in the response represent. By the looks of it, I initially thought they might be pixel coordinates, but judging by the context (the API documentation) and the endpoint I am using, I would say that is not the case. Also, I am not sure the JSON response you see in the picture is valid GeoJSON; some online formatters did not accept it as valid.
I would like to find the bounding box of the "sequence". For context, that would be the minimal-area rectangle defined by two lat, lon positions that fully encompasses the geometry of the so-called "sequence"; and a "sequence" is basically a series of photos taken during a vehicle/on-foot trip, together with the metadata associated with the photos (metadata is available using another endpoint, but that is just for context).
My question is: is it possible to turn the coordinates you see in the pictures into (lat, lon), and if so, how? Having those, it would be easy for me to find the bounding box of the sequence. Also, please note that some of the nested lists are of type LineString while others are MultiLineString (I read about the difference here: help.arcgis.com, hope this helps).
Minimal reproducible code snippet:
import json
import requests
import mercantile
import mapbox_vector_tile as mvt

ACCESS_TOKEN = 'XXX'  # can be provided from here: https://www.mapillary.com/dashboard/developers

z6_tiles = list(mercantile.tiles(  # us_west_coast_bbox
    west=-125.066423,
    south=42.042594,
    east=-119.837770,
    north=49.148042,
    zooms=6
))
# pprint(z6_tiles)

vector_tiles_url = 'https://tiles.mapillary.com/maps/vtp/mly1_public/2/{}/{}/{}?access_token={}'

for tile in z6_tiles:
    res = requests.get(vector_tiles_url.format(tile.z, tile.x, tile.y, ACCESS_TOKEN))
    res_json = mvt.decode(res.content)
    with open('idea.json', 'w+') as f:
        json.dump(res_json, f, indent=4)
I think this get_normalized_coordinates is the solution I was looking for. Please take it with a grain of salt, as I have not fully tested it yet; I will try to, and then I will update my answer. Also, please be cautious, because for tiles closer to either the South or the North Pole the Z14_TILE_DMD_HEIGHT constant will not be the one you see here, but something more like 0.0018958715374282065 (the tile height in degrees changes with latitude).
Z14_TILE_DMD_WIDTH = 0.02197265625
Z14_TILE_DMD_HEIGHT = 0.018241950298914844

def get_normalized_coordinates(bbox: mercantile.LngLatBbox,
                               target_lat: int,
                               target_lon: int,
                               extent: int = 4096):  # 4096 is Mapillary's default
    """
    Returns a (lon, lat) tuple representing the real position of a map feature on the world map.
    """
    min_lon, min_lat, _, _ = bbox
    return (min_lon + target_lon / extent * Z14_TILE_DMD_WIDTH,
            min_lat + target_lat / extent * Z14_TILE_DMD_HEIGHT)
And if you are wondering how I came up with the constants you see, I simply iterated over the list of tiles I am interested in and checked that they all have the same width/height (this might not have been the case, keeping in mind what I mentioned above about tiles closer to one of the poles; I think this is called "distortion", but I'm not sure). Also, for context: the tiles I iterated over are within this bbox: (-125.024414, 31.128199, -108.896484, 49.152970) (min_lon, min_lat, max_lon, max_lat; US west coast), which I believe is also why all the tiles have the same width/height.
set_test = set()
for tile in relevant_tiles_set:
    curr_bbox = mercantile.bounds(tile)
    dm_width_diff: float = curr_bbox.east - curr_bbox.west
    dm_height_diff: float = curr_bbox.north - curr_bbox.south
    set_test.add((dm_width_diff, dm_height_diff))
set_test
output:
{(0.02197265625, 0.018241950298914844)}
UPDATE: I forgot to mention that you do not actually need to compute those WIDTH/HEIGHT constants. You can just replace them with (max_lon - min_lon) and (max_lat - min_lat) respectively. What I did with the constants was for testing purposes only.
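Putting the pieces together, here is a small consolidated sketch (my own, untested; it assumes mapbox_vector_tile's default y-up decoding and that you already extracted the LineString/MultiLineString coordinate lists from the decoded tile) that converts tile-local coordinates to lon/lat and then takes the bounding box of a sequence:
import mercantile

def tile_coords_to_lnglat(tile, x, y, extent=4096):
    # Convert tile-local vector-tile coordinates (0..extent) to (lon, lat),
    # using the tile's own bounds instead of hard-coded width/height constants.
    b = mercantile.bounds(tile)  # LngLatBbox(west, south, east, north)
    lon = b.west + x / extent * (b.east - b.west)
    lat = b.south + y / extent * (b.north - b.south)
    return lon, lat

def sequence_bbox(tile, lines):
    # `lines` is a list of coordinate lists: one per LineString, or one per
    # member line of a MultiLineString, e.g. [[(x1, y1), (x2, y2), ...], ...]
    points = [tile_coords_to_lnglat(tile, x, y) for line in lines for x, y in line]
    lons = [p[0] for p in points]
    lats = [p[1] for p in points]
    return min(lons), min(lats), max(lons), max(lats)  # (min_lon, min_lat, max_lon, max_lat)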

Using astropy.fits and numpy to apply coincidence corrections to SWIFT fits image

This question may be a little specialist, but hopefully someone might be able to help. I normally use IDL, but for developing a pipeline I'm looking to use Python to improve running times.
My fits file handling setup is as follows:
import numpy as numpy
from astropy.io import fits

#Directory: /Users/UCL_Astronomy/Documents/UCL/PHASG199/M33_UVOT_sum/UVOTIMSUM/M33_sum_epoch1_um2_norm.img
with fits.open('...') as ima_norm_um2:
    #Open the UVOTIMSUM file once and close it after extracting the relevant values:
    ima_norm_um2_hdr = ima_norm_um2[0].header
    ima_norm_um2_data = ima_norm_um2[0].data
    #Individual dimensions for number of x pixels and number of y pixels:
    nxpix_um2_ext1 = ima_norm_um2_hdr['NAXIS1']
    nypix_um2_ext1 = ima_norm_um2_hdr['NAXIS2']

#Compute the size of the images (you can also do this manually rather than calling these keywords from the header):
#Call the header and data from the UVOTIMSUM file with the relevant keyword extensions:
corrfact_um2_ext1 = numpy.zeros((ima_norm_um2_hdr['NAXIS2'], ima_norm_um2_hdr['NAXIS1']))
coincorr_um2_ext1 = numpy.zeros((ima_norm_um2_hdr['NAXIS2'], ima_norm_um2_hdr['NAXIS1']))

#Check that the dimensions are all the same:
print(corrfact_um2_ext1.shape)
print(coincorr_um2_ext1.shape)
print(ima_norm_um2_data.shape)

# Make a new image file to save the correction factors:
hdu_corrfact = fits.PrimaryHDU(corrfact_um2_ext1, header=ima_norm_um2_hdr)
fits.HDUList([hdu_corrfact]).writeto('.../M33_sum_epoch1_um2_corrfact.img')

# Make a new image file to save the corrected image to:
hdu_coincorr = fits.PrimaryHDU(coincorr_um2_ext1, header=ima_norm_um2_hdr)
fits.HDUList([hdu_coincorr]).writeto('.../M33_sum_epoch1_um2_coincorr.img')
I'm looking to then apply the following corrections:
# Define the variables from Poole et al. (2008) "Photometric calibration of the Swift ultraviolet/optical telescope":
alpha = 0.9842000
ft = 0.0110329
a1 = 0.0658568
a2 = -0.0907142
a3 = 0.0285951
a4 = 0.0308063
for i in range(nxpix_um2_ext1 - 1): #do begin
    for j in range(nypix_um2_ext1 - 1): #do begin
        if (numpy.less_equal(i, 4) | numpy.greater_equal(i, nxpix_um2_ext1-4) | numpy.less_equal(j, 4) | numpy.greater_equal(j, nxpix_um2_ext1-4)): #then begin
            #UVM2
            corrfact_um2_ext1[i,j] == 0
            coincorr_um2_ext1[i,j] == 0
        else:
            xpixmin = i-4
            xpixmax = i+4
            ypixmin = j-4
            ypixmax = j+4
            #UVM2
            ima_UVM2sum = total(ima_norm_um2[xpixmin:xpixmax,ypixmin:ypixmax])
            xvec_UVM2 = ft*ima_UVM2sum
            fxvec_UVM2 = 1 + (a1*xvec_UVM2) + (a2*xvec_UVM2*xvec_UVM2) + (a3*xvec_UVM2*xvec_UVM2*xvec_UVM2) + (a4*xvec_UVM2*xvec_UVM2*xvec_UVM2*xvec_UVM2)
            Ctheory_UVM2 = - alog(1-(alpha*ima_UVM2sum*ft))/(alpha*ft)
            corrfact_um2_ext1[i,j] = Ctheory_UVM2*(fxvec_UVM2/ima_UVM2sum)
            coincorr_um2_ext1[i,j] = corrfact_um2_ext1[i,j]*ima_sk_um2[i,j]
The above snippet is where things go wrong, as I have a mixture of IDL syntax and Python syntax, and I'm just not sure how to convert certain aspects of IDL to Python. For example, I'm not quite sure how to handle ima_UVM2sum = total(ima_norm_um2[xpixmin:xpixmax,ypixmin:ypixmax]).
I'm also missing the part where it will update the correction factor and coincidence correction image files, I would say. If anyone has the patience to go over it with a fine-tooth comb and suggest the necessary changes, that would be excellent.
The original normalised image can be downloaded here: replace ... in the above code with this file.
One very important thing about numpy is that it applies every mathematical or comparison function element-wise, so you probably don't need to loop through the arrays at all.
So maybe start with the step where you convolve your image with a sum filter. For 2D images this can be done with astropy.convolution.convolve or scipy.ndimage.filters.uniform_filter.
I'm not sure exactly what you want, but I think you want a 9x9 sum filter, which can be realized by:
from scipy.ndimage.filters import uniform_filter
# uniform_filter computes the 9x9 mean, so scale by the window area to get the 9x9 sum
ima_UVM2sum = uniform_filter(ima_norm_um2_data, size=9) * 9 * 9
Since you want to discard any pixels that are at the borders (4 pixels), you can simply slice them away:
ima_UVM2sum_valid = ima_UVM2sum[4:-4,4:-4]
This ignores the first and last 4 rows and the first and last 4 columns (the last ones are selected by making the stop value negative).
Now you want to calculate the corrections:
xvec_UVM2 = ft*ima_UVM2sum_valid
fxvec_UVM2 = 1 + (a1*xvec_UVM2) + (a2*xvec_UVM2**2) + (a3*xvec_UVM2**3) + (a4*xvec_UVM2**4)
Ctheory_UVM2 = - np.log(1-(alpha*ima_UVM2sum_valid*ft))/(alpha*ft)  # IDL's alog() is numpy's log()
these are all arrays so you still do not need to loop.
But then you want to fill your two images. Be careful, because the correction array is smaller (we ignored the first and last rows/columns), so you have to take the same region in the correction images:
corrfact_um2_ext1[4:-4,4:-4] = Ctheory_UVM2*(fxvec_UVM2/ima_UVM2sum_valid)
coincorr_um2_ext1[4:-4,4:-4] = corrfact_um2_ext1[4:-4,4:-4] * ima_sk_um2[4:-4,4:-4]
Still no loop, just numpy's mathematical functions. This means it is much faster (MUCH faster!) and does the same thing.
Maybe I have forgotten some slicing, which would yield a "not broadcastable" error; if so, please report back.
Just a note about your loop: Python's first axis is the second axis in FITS, and the second axis is the first FITS axis. So if you need to loop over the axes, bear that in mind so you don't end up with IndexErrors or unexpected results.
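For the missing step of writing the results back out, a rough end-to-end sketch under the same assumptions (the arrays ima_norm_um2_data, ima_norm_um2_hdr and the sky image ima_sk_um2 come from your earlier code, and the output filenames are placeholders) could look like this:
import numpy as np
from astropy.io import fits
from scipy.ndimage import uniform_filter

alpha, ft = 0.9842000, 0.0110329
a1, a2, a3, a4 = 0.0658568, -0.0907142, 0.0285951, 0.0308063

# 9x9 neighbourhood sum (uniform_filter gives the mean, hence the *81),
# with the 4-pixel border sliced away as above
ima_UVM2sum_valid = (uniform_filter(ima_norm_um2_data, size=9) * 81)[4:-4, 4:-4]

xvec = ft * ima_UVM2sum_valid
fxvec = 1 + a1*xvec + a2*xvec**2 + a3*xvec**3 + a4*xvec**4
Ctheory = -np.log(1 - alpha*ima_UVM2sum_valid*ft) / (alpha*ft)

corrfact_um2_ext1 = np.zeros_like(ima_norm_um2_data)
coincorr_um2_ext1 = np.zeros_like(ima_norm_um2_data)
corrfact_um2_ext1[4:-4, 4:-4] = Ctheory * (fxvec / ima_UVM2sum_valid)
coincorr_um2_ext1[4:-4, 4:-4] = corrfact_um2_ext1[4:-4, 4:-4] * ima_sk_um2[4:-4, 4:-4]

# Write the two result images, reusing the original header (paths are placeholders):
fits.PrimaryHDU(corrfact_um2_ext1, header=ima_norm_um2_hdr).writeto('corrfact.img', overwrite=True)
fits.PrimaryHDU(coincorr_um2_ext1, header=ima_norm_um2_hdr).writeto('coincorr.img', overwrite=True)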

Inappropriate argument value (of correct type). in JES (Python/Jython)

Hey, so I am working on some coding homework for my Python class using JES. Our assignment is to take a sound, add some white noise to the background, and add an echo as well. There are a few more specifics, but I believe I am fine with those. There are four different functions that we are making: a main, an echo function based on a user-defined length of time and number of echoes, a white noise generation function, and a function to merge the noises.
Here is what I have so far; I haven't started the merging or the main yet.
#put the following line at the top of your file. This will let
#you access the random module functions
import random

#White noise generation function, requires a sound to match the sound length
def whiteNoiseGenerator(baseSound) :
    noise = makeEmptySound(getLength(baseSound))
    index = 0
    for index in range(0, getLength(baseSound)) :
        sample = random.randint(-500, 500)
        setSampleValueAt(noise, index, sample)
    return noise

def multipleEchoesGenerator(sound, delay, number) :
    endSound = getLength(sound)
    newEndSound = endSound + (delay * number)
    len = 1 + int(newEndSound / getSamplingRate(sound))
    newSound = makeEmptySound(len)
    echoAmplitude = 1.0
    for echoCount in range(1, number) :
        echoAmplitude = echoAmplitude * 0.60
        for posns1 in range(0, endSound):
            posns2 = posns1 + (delay * echoCount)
            values1 = getSampleValueAt(sound, posns1) * echoAmplitude
            values2 = getSampleValueAt(newSound, posns2)
            setSampleValueAt(newSound, posns2, values1 + values2)
    return newSound
I receive this error whenever I try to load it in.
The error was:
Inappropriate argument value (of correct type).
An error occurred attempting to pass an argument to a function.
Please check line 38 of C:\Users\insanity180\Desktop\Work\Winter Sophomore\CS 140\homework3\homework_3.py
That line of code is:
setSampleValueAt (newSound, posns2, values1 + values2)
Does anyone have an idea what might be happening here? Any assistance would be great, since I am hoping to give myself plenty of time to finish coding this assignment. I have gotten a similar error before and it was usually a syntax error; however, I don't see any such errors here.
The sound is created before I run this program, and I defined delay and number as the values 1 and 3 respectively.
Check the arguments to setSampleValueAt; your sample value must be out of bounds (it should be within -32768 to 32767). You need to do some kind of output clamping in your algorithm.
Another possibility (which indeed was the error, according to further input) is that your echo falls outside the range of the sample: for instance, if your sample is 5 seconds long and the echo is 0.5 seconds long, posns1 + delay can land beyond the length of the sample, meaning the length of the new sound is not calculated correctly.
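A minimal sketch of both fixes (my own suggestion, using the same JES functions as above; delay is assumed to be measured in samples, matching how it is used in the loop): clamp every sample you write, and skip writes that would land past the end of the new sound.
def clampSampleValue(value) :
    # JES sample values are 16-bit signed integers
    return max(-32768, min(32767, int(value)))

def safeSetSampleValueAt(sound, index, value) :
    # Only write if the target index is inside the sound
    if index < getLength(sound) :
        setSampleValueAt(sound, index, clampSampleValue(value))
Inside multipleEchoesGenerator you would then call safeSetSampleValueAt(newSound, posns2, values1 + values2) instead of calling setSampleValueAt directly.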

dynamic plotting in wxpython

I have been developing a GUI for reading continuous data from a serial port. After reading the data, some calculations are made and the results are plotted and refreshed (aka dynamic plotting). I use the wx backend provided by matplotlib for this purpose. To do this, I basically use an array to store my results, keep appending to it after each calculation, and replot the whole graph. To make it "dynamic", I just set the x-axis lower and upper limits for each iteration. Something like what is found in:
http://eli.thegreenplace.net/2008/08/01/matplotlib-with-wxpython-guis/
The problem, however, is that since the data is continuous, if I keep plotting it the system memory will eventually run out and the system will crash. Is there any other way I can plot my results continuously?
To do this, I basically use an array to store my results, in which I keep appending it to
Try limiting the size of this array, either by deleting old data or by deleting every n-th entry (the screen resolution will prevent all entries from being displayed anyway). I assume you write all the data to disk, so you won't lose anything.
Also, analyse your code for memory leaks: stuff you use and don't need anymore but that doesn't get garbage-collected because you still hold a reference to it.
I have created such a component with Python's Tkinter. The source is here.
Basically, you have to keep the plotted data somewhere. You cannot keep an infinite amount of data points in memory, so you either have to save it to disk or you have to overwrite old data points.
Data and representation of data are two different things. You might want to store your data to disk if it's important data to be analyzed later, but only keep a fixed period of time or the last N points for display purposes. You could even let the user pick the time frame to be displayed.
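As an illustration of the "last N points" idea (a sketch only, not tied to any particular plotting backend), a collections.deque with a maxlen discards the oldest points automatically:
from collections import deque

MAX_POINTS = 1000  # display buffer size; tune to your screen/plot width

xs = deque(maxlen=MAX_POINTS)
ys = deque(maxlen=MAX_POINTS)

def add_point(x, y, logfile=None):
    # Keep only the newest MAX_POINTS samples for plotting;
    # optionally append every sample to disk so nothing is lost.
    xs.append(x)
    ys.append(y)
    if logfile is not None:
        logfile.write("%s,%s\n" % (x, y))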
I actually ran into this problem (more of a mental block, actually...).
First of all, I copy-pasted some wx plot code from the wx demo code.
What I do is keep a live log of a value and compare it to two markers (min and max, shown as red and green dotted lines), but I will make these two markers optional, hence the optional parameters.
In order to implement the live log, I first wanted to use the deque class, but since the data is in tuple form (x, y coordinates) I gave up and just rebuild the entire list of coordinate tuples each time: see _update_coordinates.
It works just fine for keeping track of the last 100-10,000 plots. I would have also included a screenshot, but I'm too much of a noob at Stack Overflow to be allowed :))
My live parameter is updated every 0.25 seconds over a 115kbps UART.
The trick is at the end, in the custom refresh method!
Here is most of the code:
class DefaultPlotFrame(wx.Frame):
    def __init__(self, ymin=0, ymax=MAXIMUM_PLOTS, minThreshold=None,
                 maxThreshold=None, plotColour='blue',
                 title="Default Plot Frame",
                 position=(10,10),
                 backgroundColour="yellow", frameSize=(400,300)):
        self.minThreshold = minThreshold
        self.maxThreshold = maxThreshold
        self.frame1 = wx.Frame(None, title="wx.lib.plot", id=-1, size=(410, 340), pos=position)
        self.panel1 = wx.Panel(self.frame1)
        self.panel1.SetBackgroundColour(backgroundColour)
        self.ymin = ymin
        self.ymax = ymax
        self.title = title
        self.plotColour = plotColour
        self.lines = [None, None, None]
        # mild difference between wxPython26 and wxPython28
        if wx.VERSION[1] < 7:
            self.plotter = plot.PlotCanvas(self.panel1, size=frameSize)
        else:
            self.plotter = plot.PlotCanvas(self.panel1)
            self.plotter.SetInitialSize(size=frameSize)
        # enable the zoom feature (drag a box around area of interest)
        self.plotter.SetEnableZoom(False)
        # list of (x,y) data point tuples
        self.coordinates = []
        for x_item in range(MAXIMUM_PLOTS):
            self.coordinates.append((x_item, (ymin+ymax)/2))
        self.queue = deque(self.coordinates)
        if self.maxThreshold!=None:
            self._update_max_threshold()
        #endif
        if self.minThreshold!=None:
            self._update_min_threshold()
        #endif
        self.line = plot.PolyLine(self.coordinates, colour=plotColour, width=1)
        self.lines[0] = (self.line)
        self.gc = plot.PlotGraphics(self.lines, title, 'Time', 'Value')
        self.plotter.Draw(self.gc, xAxis=(0, MAXIMUM_PLOTS), yAxis=(ymin, ymax))
        self.frame1.Show(True)

    def _update_max_threshold(self):
        if self.maxThreshold!=None:
            self.maxCoordinates = []
            for x_item in range(MAXIMUM_PLOTS):
                self.maxCoordinates.append((x_item, self.maxThreshold))
            #endfor
            self.maxLine = plot.PolyLine(self.maxCoordinates, colour="green", width=1)
            self.maxMarker = plot.PolyMarker(self.maxCoordinates, colour="green", marker='dot')
            self.lines[1] = self.maxMarker
        #endif

    def _update_live_param(self, liveParam, minParam, maxParam):
        if minParam!=None:
            self.minThreshold = int(minParam)
            self._update_min_threshold()
        #endif
        if maxParam!=None:
            self.maxThreshold = int(maxParam)
            self._update_max_threshold()
        #endif
        if liveParam!=None:
            self._update_coordinates(int(liveParam))
        #endif

    def _update_coordinates(self, newValue):
        newList = []
        for x,y in self.coordinates[1:]:
            newList.append((x-1, y))
        #endfor
        newList.append((x, newValue))
        print "New list", newList
        self.line = (plot.PolyLine(newList, colour=self.plotColour, width=1))
        self.lines[0] = self.line
        self.coordinates = newList

    def _MyLIVE_MAGIC_refresh__(self, liveParam=None, minParam=None, maxParam=None):
        self._update_live_param(liveParam, minParam, maxParam)
        self.gc = plot.PlotGraphics(self.lines, self.title, 'Time', 'Value')
        self.plotter.Draw(self.gc, xAxis=(0, MAXIMUM_PLOTS), yAxis=(self.ymin, self.ymax))
        self.plotter.Refresh()
        self.frame1.Refresh()
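As a usage note (not part of the original code), the refresh method is meant to be called periodically from whatever reads the serial port; with a wx.Timer that could look roughly like this, where read_live_value() stands in for your own UART read:
app = wx.App(False)
frame = DefaultPlotFrame(ymin=0, ymax=100, minThreshold=20, maxThreshold=80)

def on_timer(event):
    # Feed the newest reading into the plot; the min/max markers stay as configured.
    frame._MyLIVE_MAGIC_refresh__(liveParam=read_live_value())

timer = wx.Timer(frame.frame1)
frame.frame1.Bind(wx.EVT_TIMER, on_timer, timer)
timer.Start(250)  # match the 0.25 s update rate mentioned above
app.MainLoop()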
