Hey guys (and of course ladies),
I have this little script which should show me some nice rrd graphs, but it seems I can't get it to work and show me any stats. This is my script:
# Function: Simple ping plotter for rrd
import rrdtool, tempfile, commands, time, sys
from model import hosts

sys.path.append('/home/dirk/devel/python/stattool/stattool/lib')
import nurrd
from nurrd import RRDplot

class rrdPing(RRDplot):
    def __init__(self):
        self.DAY = 86400
        self.YEAR = 365 * self.DAY
        self.rrdfile = 'hostname.rrd'
        self.interval = 300
        self.probes = 5
        self.rrdList = []

    def create_rrd(self, interval):
        ret = rrdtool.create("%s" % self.rrdfile, "--step", "%s" % self.interval,
                             "DS:packets:COUNTER:600:U:U",
                             "RRA:AVERAGE:0.5:1:288",
                             "RRA:AVERAGE:0.5:1:336")

    def getHosts(self, userID):
        myHosts = hosts.query.filter_by(uid=userID).all()
        return myHosts.pop(0)

    def _doPing(self, host):
        for x in xrange(0, self.probes):
            ans, unans = commands.getstatusoutput("ping -c 3 -w 6 %s| grep rtt| awk -F '/' '{ print $5 }'" % host)
            print x
            self.probes -= 1
            self.rrdList.append(unans)
        return self.rrdList

    def plotRRD(self):
        self.create_rrd(self.interval)
        times = self._doPing(self.getHosts(3))
        for x in xrange(0, len(times)):
            loc = times.pop(0)
            rrdtool.update(self.rrdfile, '%d:%d' % (int(time.time()), int(float(loc))))
            print '%d:%d' % (int(time.time()), int(float(loc)))
            time.sleep(5)
        self.graph(60)

    def graph(self, mins):
        ret = rrdtool.graph("%s.png" % self.rrdfile, "--start", "-1", "--end", "+1", "--step", "300",
                            "--vertical-label=Bytes/s",
                            "DEF:inoctets=%s:packets:AVERAGE" % self.rrdfile,
                            "AREA:inoctets#7113D6:In traffic",
                            "CDEF:inbits=inoctets,8,*",
                            "COMMENT:\\n",
                            "GPRINT:inbits:AVERAGE:Avg In traffic\: %6.2lf \\r",
                            "COMMENT: ",
                            "GPRINT:inbits:MAX:Max In traffic\: %6.2lf")

if __name__ == "__main__":
    ping = rrdPing()
    ping.plotRRD()
    info = rrdtool.info('hostname.rrd')
    print info['last_update']
Could somebody please give me some advice or some tips on how to solve this?
(Sorry, the code is a little messy.)
Thanks in advance
Kind regards,
Dirk
Several issues.
Firstly, you appear to only be collecting a single data sample and storing it before you try to generate a graph. You will need at least two samples, separated by about 300s, before you can get a single Primary Data Point and therefore something to graph.
Secondly, you do not post any information as to what data you are actually storing. Are you sure your rrdPing function is returning valid data to store? You are not testing the error status of the write, either.
Thirdly, the data you are collecting seems to be ping times or similar, which is a GAUGE type value. Yet, your RRD DS definition uses a COUNTER type and your graph call is treating it as network traffic data. A COUNTER type assumes increasing values and converts to a rate of change, so if you give it ping RTT data you'll get either unknowns or zeroes stored, which will not show up on a graph.
Fourthly, your call to RRDGraph is specifying a start of -1 and an end of +1. From 1 second in the past to 1 second in the future? Since your step is 300s this is an odd graph. Maybe you should have --end 'now-1' --start 'end-1day' or similar?
You should make your code test the return values for any error messages produced by the RRDTool library -- this is good practice anyway. When testing, print out the values you are updating with to be sure you are giving valid values. With RRDTool, you should collect several data samples at the step interval and store them before expecting to see a line on the graph. Also, make sure you are using the correct data type, GAUGE or COUNTER.
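To make those points concrete, here is a minimal sketch of what the create and graph calls might look like with a GAUGE data source and a sensible time window (the DS name rtt and the RRA size are just placeholders to adapt to your setup):

import rrdtool

rrdfile = 'hostname.rrd'

# GAUGE stores the measured value as-is (appropriate for ping round-trip times);
# COUNTER would convert it to a rate of change, which is wrong here.
rrdtool.create(rrdfile, "--step", "300",
               "DS:rtt:GAUGE:600:U:U",
               "RRA:AVERAGE:0.5:1:288")

# ...update roughly every 300s with the measured RTT, e.g.:
# rrdtool.update(rrdfile, "N:%f" % rtt_value)

# Graph the last day instead of a two-second window
rrdtool.graph("%s.png" % rrdfile,
              "--end", "now", "--start", "end-1day",
              "--vertical-label=RTT (ms)",
              "DEF:rtt=%s:rtt:AVERAGE" % rrdfile,
              "LINE1:rtt#7113D6:ping rtt")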
I am trying to run a simple experiment using Python. I want to present two different types of audio stimuli, a higher and a lower pitch. The higher pitch has a fixed duration of 200ms, while the lower pitch comes in pairs: the first with a fixed duration of 250ms and the second with a variable duration that can take the following values [.4, .6, .8, 1, 1.2]. I need to know at what (machine) time the stimuli start and end, and their duration (precision is not the most important issue, I have a tolerance of ~10ms), so I log this information.
I am using the library audiomath to create and present the stimuli, and I have created several custom functions to manage the other aspects of the task. I have 3 scripts: one in which I define the functions, one in which I set the specific parameters of the experiment for each subject (source), and one with the main().
My problem is that the main() works erratically: it works sometimes, and other times it seems to enter an infinite loop, where a certain sound is presented and never stops playing. This behavior seems to be really random; the problem presents itself at different trials, or not at all, even with the exact same parameters.
This is my code:
source file
#%%imports
from exp_funcs import tone440Hz, tone880Hz
import numpy as np
#%%global var
n_long = 10
n_short = 10
short_duration = .2
long_durations = [.4, .6, .8, 1, 1.2]
#%%calculations
n_tot = n_long + n_long
trial_types = ['short_blink'] * n_short + ['long_blink'] * n_long
sounds = [tone880Hz] * n_short + [tone440Hz] * n_long
np.random.seed(10)
durations = [short_duration] * n_short + [el for el in np.random.choice(long_durations, n_long)]
durations = [.5 if el < .2 else el for el in durations]
cue_duration = [.25] * n_tot
spacing = [1.25] * n_tot
np.random.seed(10)
iti = [el for el in (3 + np.random.normal(0, .25, n_tot))]
functions
import numpy as np
import audiomath as am
import time
import pandas as pd

TWO_PI = 2.0 * np.pi

#am.Synth(fs=22050)
def tone880Hz(fs, sampleIndices, channelIndices):
    timeInSeconds = sampleIndices / fs
    return np.sin(TWO_PI * 880 * timeInSeconds)

#am.Synth(fs=22050)
def tone440Hz(fs, sampleIndices, channelIndices):
    timeInSeconds = sampleIndices / fs
    return np.sin(TWO_PI * 440 * timeInSeconds)

def short_blink(sound, duration):
    p = am.Player(sound)
    init = time.time()
    while time.time() < init + duration:
        p.Play()
    end = time.time()
    p.Stop()
    print(f'start {init} end {end} duration {end - init}')
    return (init, end, end - init)

def long_blink(sound, duration, cue_duration, spacing):
    p = am.Player(sound)
    i_ = time.time()
    while time.time() < i_ + cue_duration:
        p.Play()
    p.Stop()
    time.sleep(spacing)
    init = time.time()
    while time.time() < init + duration:
        p.Play()
    end = time.time()
    p.Stop()
    print(f'start {init} end {end} duration {end - init}')
    return (init, end, end - init)

def run_trial(ttype, sound, duration, cue_duration, spacing):
    if ttype == 'short_blink':
        init, end, effective_duration = short_blink(sound, duration)
    else:
        init, end, effective_duration = long_blink(sound, duration,
                                                   cue_duration, spacing)
    otp_df = pd.DataFrame([[ttype, init, end, effective_duration]],
                          columns=['trial type', 'start', 'stop',
                                   'effective duration'])
    return (otp_df)
main
import pandas as pd
import sys
import getopt
import os
import time
import random
from exp_funcs import run_trial
from pathlib import PurePath

def main(argv):
    try:
        opts, args = getopt.getopt(argv, 'hs:o:', ['help', 'source_file=', 'output_directory='])
    except getopt.GetoptError:
        print('experiment.py -s source file -o output directory')
        sys.exit(2)
    for opt, arg in opts:
        if opt == '-h':
            print('experiment.py -s source file')
            sys.exit()
        elif opt in ("-s", "--source_file"):
            source_file = arg
        elif opt in ("-o", "--output_directory"):
            output_dir = arg

    os.chdir(os.getcwd())
    if not os.path.isfile(f'{source_file}.py'):
        raise FileNotFoundError(f'{source_file} does not exist')
    else:
        source = __import__('source')

    complete_param = list(zip(source.trial_types,
                              source.sounds,
                              source.durations,
                              source.cue_duration,
                              source.spacing,
                              source.iti))

    # shuffle_param = random.sample(complete_param, len(complete_param))
    shuffle_param = complete_param

    dfs = []
    for ttype, sound, duration, cue_duration, spacing, iti in shuffle_param:
        time.sleep(iti)
        df = run_trial(ttype, sound, duration, cue_duration, spacing)
        dfs.append(df)

    dfs = pd.concat(dfs)
    dfs.to_csv(PurePath(f'{output_dir}/{source_file}.csv'), index=False)

if __name__ == "__main__":
    main(sys.argv[1:])
The 3 files are in the same directory; I cd into that directory with the terminal and run the main as follows: python experiment.py -s source -o /whatever/output/directory.
Any help would be more than appreciated.
This is too big/complex a program to hope for help on non-specific "erratic" behavior here on stackoverflow. You need to boil it down into a small reproducible example that behaves unexpectedly. If it works sometimes and not others, systematically home in on the conditions that make it fail. I did make one attempt to run the whole thing, but after fixing a few missing imports there was still the matter of the unspecified "source file" content.
So I don't know specifically what your problem is. However, from the audiomath and general real-time-performance perspectives, I can certainly identify a few things you shouldn't be doing:
Although Player instances are designed to be played, stopped or manipulated at time-critical moments, they are not (by default) designed to be created and destroyed at time-critical moments. If you want to create/destroy them fast, pre-initialize a persistent Stream() instance and pass it as the stream argument when creating the Player, as described towards the end of https://audiomath.readthedocs.io/en/release/auto/Examples.html#play-sounds
If you are using Synth instances, you could take advantage of their .duration attribute instead of checking the clock explicitly in a while loop. For example, you can set tone880Hz.duration = 0.5, and then play the sound synchronously with p.Play(wait=True). The big problem with your clock-watching while loops is that they are currently "busy-wait" loops that will thrash the CPU, likely leading to sporadic disruption to your sound (Python's multithreading is far from perfect). However, before you fix this problem you should know...
The strategy "Play(), wait, sleep, Play()" is never going to achieve precise timing of one stimulus relative to the other anyway. First, whenever you issue a command to play a sound in any software, there will unavoidably be a non-zero (and randomly varying!) latency between the command and the physical onset of the sound. Second, sleep() is unlikely to be as precise as you think it is. This applies both to the sleep() you’ve been using to create a gap, and also to the sleep() that would be used internally by Play(wait=True). Sleep implementations suspend operation for "at least" the specified amount of time but they don't guarantee an upper bound on that. This is very hardware- and OS-dependent; on some Windows systems you may even find that the granularity never gets any better than 10ms.
If you really want to use the Synth approach I suppose you could program the gap procedurally into the function definitions of tone440Hz() and tone880Hz(), accessing cue_duration, duration and spacing as global variables (in fact, while you're at it, why not make frequency a global variable too, and only write one function). But I don't see any great advantage in this, either in performance or in code maintainability.
What I would do instead is pre-initialize the following (once, at the start of your program):
max_duration = 1 # length, in seconds, of the longest continuous tone we'll need
tone440Hz = am.Sound(fs=22050).GenerateWaveform(freq_hz=440, duration_msec=max_duration*1000)
tone880Hz = am.Sound(fs=22050).GenerateWaveform(freq_hz=880, duration_msec=max_duration*1000)
m = am.Stream()
Then compose each "long blink" stimulus as a static Sound using the parameters you want.
This will ensure that the tone and gap durations are precise:
s = tone440Hz[:cue_duration] % spacing % tone440Hz[:duration]
For best real-time performance, you could pre-compute a whole set of these stimuli with different parameters. Or, if it turns out that those composition operations (slicing and splicing) happen fast enough, you might decide you can get away with doing that at trial time, in your long_blink() function.
Either way, when it comes to playing the stimulus at trial time:
p = am.Player(s, stream=m) # to make Player() initialization fast, use a pre-initialized Stream() instance
p.Play(wait=True)
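Putting those pieces together, a revised long_blink() might look something like the following sketch. It assumes the pre-initialized tone Sounds and the Stream m from above, relies on the slicing/splicing operators shown earlier, and keeps the same timing log as the original:

import time
import audiomath as am

def long_blink(sound, duration, cue_duration, spacing):
    # compose cue + silent gap + long tone as one static Sound, so the tone
    # and gap durations come from the waveform itself rather than sleep()
    s = sound[:cue_duration] % spacing % sound[:duration]
    p = am.Player(s, stream=m)   # fast Player creation thanks to the shared Stream
    init = time.time()
    p.Play(wait=True)            # blocks until the whole composed stimulus has played
    end = time.time()
    print(f'start {init} end {end} duration {end - init}')
    return init, end, end - init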
Finally: in implementing this, start from scratch—start simple, and test the performance of a few simple cases before compounding things.
I am currently parsing historic delay data from a public transport network in Sweden. I have ~5700 files (one every 15 seconds) from the 27th of January containing momentary delay data for vehicles on active trips in the network. Unfortunately there is a lot of overhead / duplicate data, so I want to parse out the relevant parts to do visualizations on it.
However, when I try to parse and filter out the relevant delay data on a trip level using the script below, it performs really slowly. It has been running for over 1.5 hours now (on my 2019 MacBook Pro 15") and isn't finished yet.
How can I optimize / improve this Python parser?
Or should I reduce the number of files, i.e. the frequency of the data collection, for this task?
Thank you so much in advance. 💗
from google.transit import gtfs_realtime_pb2
import gzip
import os
import datetime
import csv
import numpy as np

directory = '../data/tripu/27/'
datapoints = np.zeros((0, 3), int)
read_trips = set()

# Loop through all files in directory
for filename in os.listdir(directory)[::3]:
    try:
        # Uncompress and parse protobuff-file using gtfs_realtime_pb2
        with gzip.open(directory + filename, 'rb') as file:
            response = file.read()
        feed = gtfs_realtime_pb2.FeedMessage()
        feed.ParseFromString(response)
        print("Filename: " + filename, "Total entities: " + str(len(feed.entity)))
        for trip in feed.entity:
            if trip.trip_update.trip.trip_id not in read_trips:
                try:
                    if len(trip.trip_update.stop_time_update) == len(stopsOnTrip[trip.trip_update.trip.trip_id]):
                        print("\t", "Adding delays for", len(trip.trip_update.stop_time_update), "stops, on trip_id", trip.trip_update.trip.trip_id)
                        for i, stop_time_update in enumerate(trip.trip_update.stop_time_update[:-1]):
                            # Store the delay data point (arrival difference of two ascending nodes)
                            delay = int(trip.trip_update.stop_time_update[i+1].arrival.delay - trip.trip_update.stop_time_update[i].arrival.delay)
                            # Store contextual metadata (timestamp and edgeID) for the unique delay data point
                            ts = int(trip.trip_update.stop_time_update[i+1].arrival.time)
                            key = int(str(trip.trip_update.stop_time_update[i].stop_id) + str(trip.trip_update.stop_time_update[i+1].stop_id))
                            # Append data to numpy array
                            datapoints = np.append(datapoints, np.array([[key, ts, delay]]), axis=0)
                        read_trips.add(trip.trip_update.trip.trip_id)
                except KeyError:
                    continue
            else:
                continue
    except OSError:
        continue
I suspect the problem here is repeatedly calling np.append to add a new row to a numpy array. Because the size of a numpy array is fixed when it is created, np.append() must create a new array, which means that it has to copy the previous array. On each loop, the array is bigger and so all these copies add a quadratic factor to your execution time. This becomes significant when the array is quite big (which apparently it is in your application).
As an alternative, you could just create an ordinary Python list of tuples, and then if necessary convert that to a complete numpy array at the end.
That is (only the modified lines):
datapoints = []
# ...
datapoints.append((key,ts,delay))
# ...
npdata = np.array(datapoints, dtype=int)
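If you want to see the difference for yourself, here is a quick, illustrative micro-benchmark of the two patterns (timings will vary by machine; the point is the relative gap):

import time
import numpy as np

n = 10000

t0 = time.time()
arr = np.zeros((0, 3), int)
for i in range(n):
    arr = np.append(arr, np.array([[i, i, i]]), axis=0)  # copies the whole array every time
print('np.append in a loop:', round(time.time() - t0, 2), 's')

t0 = time.time()
rows = []
for i in range(n):
    rows.append((i, i, i))  # amortized O(1) per append
arr2 = np.array(rows, dtype=int)
print('list append + np.array:', round(time.time() - t0, 2), 's')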
I still think the parse routine is your bottleneck (even if it did come from Google), but all those '.'s were killing me! (And they do slow down performance somewhat.) Also, I converted your i, i+1 iteration to two iterators zipping through the list of updates; this is a slightly more advanced style of working through a list. Plus, the cur/next_update names helped me keep straight when you wanted to reference one vs. the other. Finally, I removed the trailing "else: continue", since you are at the end of the for loop anyway.
for trip in feed.entity:
    this_trip_update = trip.trip_update
    this_trip_id = this_trip_update.trip.trip_id
    if this_trip_id not in read_trips:
        try:
            if len(this_trip_update.stop_time_update) == len(stopsOnTrip[this_trip_id]):
                print("\t", "Adding delays for", len(this_trip_update.stop_time_update), "stops, on trip_id",
                      this_trip_id)
                # create two iterators to walk through the list of updates
                cur_updates = iter(this_trip_update.stop_time_update)
                nxt_updates = iter(this_trip_update.stop_time_update)
                # advance the nxt_updates iter so it is one ahead of cur_updates
                next(nxt_updates)
                for cur_update, next_update in zip(cur_updates, nxt_updates):
                    # Store the delay data point (arrival difference of two ascending nodes)
                    delay = int(next_update.arrival.delay - cur_update.arrival.delay)
                    # Store contextual metadata (timestamp and edgeID) for the unique delay data point
                    ts = int(next_update.arrival.time)
                    key = "{}/{}".format(cur_update.stop_id, next_update.stop_id)
                    # Append data to numpy array
                    datapoints = np.append(datapoints, np.array([[key, ts, delay]]), axis=0)
                read_trips.add(this_trip_id)
        except KeyError:
            continue
This code should be equivalent to what you posted, and I don't really expect major performance gains either, but perhaps this will be more maintainable when you come back to look at it in 6 months.
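As a side note, protobuf repeated fields can usually be sliced like lists, so the same pairwise walk can also be written without explicit iterators; a sketch of just that part:

updates = this_trip_update.stop_time_update
for cur_update, next_update in zip(updates, updates[1:]):
    delay = int(next_update.arrival.delay - cur_update.arrival.delay)
    ts = int(next_update.arrival.time)
    key = "{}/{}".format(cur_update.stop_id, next_update.stop_id)
    # ...append (key, ts, delay) as before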
(This probably is more appropriate for CodeReview, but I hardly ever go there.)
I'm looking for a solution that allows setting a list of values
[0,1,2]
over a given list of times
[0,1,2]
at once, without a loop, like this:
for frame, value in zip([0,1,2], [0,1,2]):
    cmds.keyframe(node, e=True, vc=value, t=frame)
There are commands
cmds.setKeyframe()
and
cmds.keyframe()
that allow setting animation keys at a given time.
But none of them allows setting a range of values over a range of times (frames) in one call.
The same value can be set across a time range, but that is not what I need:
mel.eval("setKeyframe -e -v %s -t 0 -t 1 -t 2 %s" % (value, node))
I tried to get the attributes of the animation curve node that stores the keys, but got empty output.
node = '...'
types = cmds.listAttr(node)
for t in types:
    if cmds.objExists(node + t):
        try:
            print t, cmds.getAttr(node + t)
        except:
            print 'failed with', t
            continue
...
keyTimeValue [()]
...
Figured it out.
Here is the documentation for the anim curve node:
https://download.autodesk.com/us/maya/2011help/Nodes/animCurveUU.html
As you can see, the keyTimeValue attribute stores no data by itself,
but its child attributes keyTimeValue.keyTime and keyTimeValue.keyValue do.
This command worked as I expected:
def keyframe_range(node, values, id_range):
    eval("cmds.setAttr('%s.ktv[%s].kv', %s, size=%s)" % (
        node, id_range, ','.join([str(v) for v in values]), str(len(values))))

selected_id = cmds.keyframe(sl=True, query=True, iv=True)
index_range = '%s:%s' % (str(selected_id[0]), str(selected_id[-1]))
selected_curve = cmds.keyframe(query=True, name=True)

keyframe_range(selected_curve[0], values, index_range)
But in Python 2.7 a function call is limited to 255 arguments.
Since the values are fed directly into the function,
no more than 255 keys can be processed at a time.
The latest approach does not work in Maya 2019.
For those who stumble upon this, here is the proper code.
# assumes Maya Python API 1.0
import maya.OpenMaya as om
import maya.OpenMayaAnim as oma

def add_keys(plugName, times, values, changeCache=None):
    # Get the plug to be animated.
    sel = om.MSelectionList()
    sel.add(plugName)
    plug = om.MPlug()
    sel.getPlug(0, plug)

    # Create the animCurve.
    animfn = oma.MFnAnimCurve(plug)

    timeArray = om.MTimeArray()
    valueArray = om.MDoubleArray()
    for i in range(len(times)):
        timeArray.append(om.MTime(times[i], om.MTime.uiUnit()))
        valueArray.append(values[i])

    # Add the keys to the animCurve.
    animfn.addKeys(
        timeArray,
        valueArray,
        oma.MFnAnimCurve.kTangentGlobal,
        oma.MFnAnimCurve.kTangentGlobal,
        False,
        changeCache)
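For example, keying three frames would look like this (pCube1.translateX is just a hypothetical plug name; any keyable numeric plug works):

# times are in UI time units (frames), values in the plug's units
add_keys('pCube1.translateX', [1, 10, 20], [0.0, 5.0, 2.5])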
Suppose I already have a Trace object, say trace, and I want the amplitude value at time 30 s.
I think I can do it like below, assuming the begin time is 0 for simplicity:
delta = trace.stats.delta
count = int(30 / delta)
target_value = trace.data[count]
Does ObsPy provide a convenient way to do this?
Something like:
trace.foo(time=30)
Right now we do not have such a convenience method, but I agree that it would be a good addition. You could make a feature request in our GitHub issue tracker: https://github.com/obspy/obspy/issues/new
In the meantime, you could make use of the sample-times convenience functions mixed with some numpy to achieve what you want.
If you're looking for times relative to start of trace:
from obspy import read
tr = read()[0]
my_time = 4 # 4 seconds after start of trace
times = tr.times()
index = times.searchsorted(my_time)
print(times[index])
print(tr.data[index])
4.0
204.817965896
If you're looking for absolute times:
from obspy import read, UTCDateTime
tr = read()[0]
my_time = UTCDateTime("2009-08-24T00:20:12.0Z") # 4 seconds after start of trace
times = tr.times('utcdatetime')
index = times.searchsorted(my_time)
print(times[index])
print(tr.data[index])
2009-08-24T00:20:12.000000Z
156.68994731
If you're not matching the exact time of one sample, you will have to manually interpolate.
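For the linear case, numpy can do that interpolation directly; a minimal sketch reusing the relative-time variant above:

import numpy as np
from obspy import read

tr = read()[0]
my_time = 4.123  # seconds after the start of the trace, between two samples
value = np.interp(my_time, tr.times(), tr.data)  # linear interpolation between neighbouring samples
print(value)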
Hope this gets you started.
Hey, so I am working on some coding homework for my Python class using JES. Our assignment is to take a sound, add some white noise to the background, and add an echo as well. There are a few more specifics, but I believe I am fine with those. There are four different functions that we are making: a main, an echo function based on a user-defined delay and number of echoes, a white noise generation function, and a function to merge the sounds.
Here is what I have so far; I haven't started the merging or the main yet.
#put the following line at the top of your file. This will let
#you access the random module functions
import random

#White noise generation function, requires a sound to match sound length
def whiteNoiseGenerator(baseSound) :
    noise = makeEmptySound(getLength(baseSound))
    index = 0
    for index in range(0, getLength(baseSound)) :
        sample = random.randint(-500, 500)
        setSampleValueAt(noise, index, sample)
    return noise

def multipleEchoesGenerator(sound, delay, number) :
    endSound = getLength(sound)
    newEndSound = endSound + (delay * number)
    len = 1 + int(newEndSound / getSamplingRate(sound))
    newSound = makeEmptySound(len)
    echoAmplitude = 1.0
    for echoCount in range(1, number) :
        echoAmplitude = echoAmplitude * 0.60
        for posns1 in range(0, endSound):
            posns2 = posns1 + (delay * echoCount)
            values1 = getSampleValueAt(sound, posns1) * echoAmplitude
            values2 = getSampleValueAt(newSound, posns2)
            setSampleValueAt(newSound, posns2, values1 + values2)
    return newSound
I receive this error whenever I try to load it in.
The error was:
Inappropriate argument value (of correct type).
An error occurred attempting to pass an argument to a function.
Please check line 38 of C:\Users\insanity180\Desktop\Work\Winter Sophomore\CS 140\homework3\homework_3.py
That line of code is:
setSampleValueAt (newSound, posns2, values1 + values2)
Anyone have an idea what might be happening here? Any assistance would be great, since I am hoping to give myself plenty of time to finish coding this assignment. I have gotten a similar error before, and it was usually a syntax error; however, I don't see any such errors here.
The sound is made before I run this program, and I defined delay and number as 1 and 3 respectively.
Check the arguments to setSampleValueAt; your sample value must be out of bounds (it should be within -32768 to 32767). You need to do some kind of output clamping in your algorithm.
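A minimal way to do that clamping in JES-style Python (a sketch; clamp is just a helper name, and the commented line shows where it would go in your echo loop):

def clamp(sample):
    # keep the combined sample inside the 16-bit range JES expects
    if sample > 32767:
        return 32767
    if sample < -32768:
        return -32768
    return sample

# in multipleEchoesGenerator:
# setSampleValueAt(newSound, posns2, clamp(values1 + values2))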
Another possibility (which indeed was the error, according to further input) is that the echo lands outside the range of the new sound: for example, if your sample is 5 seconds long and the echo is 0.5 seconds long, posns1 + delay can end up beyond the end of the new sound because its length is not calculated correctly.