process_time() function in python not working as expected - python

Can someone help me understand how process_time() works?
My code is
from time import process_time

t = process_time()

def fibonacci_of(n):
    if n in cache:  # Base case
        return cache[n]
    # Compute and cache the Fibonacci number
    cache[n] = fibonacci_of(n - 1) + fibonacci_of(n - 2)  # Recursive case
    return cache[n]

cache = {0: 0, 1: 1}
fib = [fibonacci_of(n) for n in range(1500)]
print(fib[-1])
print(process_time() - t)
And the last print is always 0.0.
My expected result is something like 0.764891862869
Docs at https://docs.python.org/3/library/time.html#time.process_time don't help newbie me :(
I tried some other functions and read the docs, but without success.

I'd assume this is OS dependent. On Linux I can get down to about 5 microseconds with process_time, but other operating systems may not resolve differences this small and will return zero instead.
It's for this reason that Python exposes other timers that are designed to be more accurate over shorter time scales. Specifically, perf_counter is specified as using:
the highest available resolution to measure a short duration
Using this lets me measure down to ~80 nanoseconds, whether I'm using perf_counter or perf_counter_ns.
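For example, re-timing the question's Fibonacci snippet with perf_counter (a minimal sketch reusing fibonacci_of and cache from the question) gives a small but non-zero duration instead of 0.0:
from time import perf_counter

cache = {0: 0, 1: 1}          # reset the memo so the work is actually repeated
t = perf_counter()
fib = [fibonacci_of(n) for n in range(1500)]
print(perf_counter() - t)     # small but non-zero, instead of 0.0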

As documentation suggests:
time.process_time() → float
Return the value (in fractional seconds) of the sum of the system and user
CPU time of the current process. It does not include time
elapsed during sleep. It is process-wide by definition. The reference
point of the returned value is undefined, so that only the difference
between the results of two calls is valid.
Use process_time_ns() to avoid the precision loss caused by the float type.
This last sentence is the most important: it distinguishes the very precise process_time_ns() from the less precise time.process_time(), which is more appropriate for long-running processes.
It turns out that when you measure a couple of nanoseconds (nano means 10**-9) and try to express them in seconds by dividing by 10**9, you can exceed the precision of a 64-bit float and end up with a value that rounds to zero. The float limitations are described in the Python documentation.
To learn more, you can also read a general introduction to precision in floating-point arithmetic and its caveats.
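A minimal sketch of the nanosecond variant (assuming Python 3.7+, where process_time_ns() was added):
from time import process_time_ns

t = process_time_ns()
total = sum(i * i for i in range(100_000))   # some CPU-bound work
elapsed_ns = process_time_ns() - t
print(elapsed_ns)          # integer nanoseconds of CPU time, no float rounding
print(elapsed_ns / 1e9)    # convert to seconds only for display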


Running something in a method takes much longer depending on the data types

Introduction
Today I found some weird behaviour in Python while running experiments with exponentiation, and I was wondering if someone here knows what's happening. I was trying to check which is faster in Python: int**int or float**float. To check that, I ran some small snippets and found a really weird behaviour.
Weird results
My first approach was just to write some for loops and prints to check which one is faster. The snippet I used is this one:
import time

EXPERIMENTS = 1_000_000  # iteration count; the original post does not show this value

# Run powers outside a method
ti = time.time()
for i in range(EXPERIMENTS):
    x = 2**2
tf = time.time()
print(f"int**int took {tf-ti:.5f} seconds")

ti = time.time()
for i in range(EXPERIMENTS):
    x = 2.**2.
tf = time.time()
print(f"float**float took {tf-ti:.5f} seconds")
After running it I got
int**int took 0.03004 seconds
float**float took 0.03070 seconds
Cool, it seems that data types do not affect the execution time. However, since I try to be a clean coder, I refactored the repeated logic into a function power_time:
import time

# Run powers in a method
def power_time(base, exponent):
    ti = time.time()
    for i in range(EXPERIMENTS):
        x = base ** exponent
    tf = time.time()
    return tf - ti

print(f"int**int took {power_time(2, 2):.5f} seconds")
print(f"float**float took {power_time(2., 2.):.5f} seconds")
And what a surprise when I got these results:
int**int took 0.20140 seconds
float**float took 0.05051 seconds
The refactor didn't affect the float case much, but it multiplied the time required for the int case by about 7.
Conclusions and questions
Apparently, running something in a method can slow down your process depending on your data types, and that's really weird to me.
Also, if I run the same experiments but replace ** with * or +, the weird results disappear and all the approaches give more or less the same results.
Does someone know why this is happening? Am I missing something?
Apparently, running something in a method can slow down your process depending on your data types, and that's really weird to me.
It would be really weird if it were not like this! You can write your own class with its own ** operator (by implementing the __pow__(self, other) method), and you could, for example, sleep for 1 s in there. Why should that take as long as raising a float to the power of another?
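For example, a toy class (purely hypothetical, just to show that ** dispatches to whatever __pow__ the left operand's type defines) can make exponentiation arbitrarily slow:
import time

class SlowNumber:
    def __init__(self, value):
        self.value = value

    def __pow__(self, other):
        time.sleep(1)                                  # deliberately expensive
        return SlowNumber(self.value ** other.value)

# 2.0 ** 2.0 finishes in nanoseconds; this takes about a second:
result = SlowNumber(2) ** SlowNumber(2)
print(result.value)                                    # 4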
So, yeah, Python is a dynamically typed language: the operations done on data depend on the type of that data, and different operations can generally take different amounts of time.
In your first example, the difference never shows up, because a) the values most probably get cached: right after parsing it's clear that 2**2 is a constant and does not need to be evaluated on every loop iteration. Even if that's not the case, b) the cost of running a loop in Python is hundreds of times the cost of actually executing the math here; again, dynamically typed, dynamically named.
base**exponent is a whole different story. Nothing about it is constant, so there's actually going to be a calculation every iteration.
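You can check both claims with the dis module (my own quick check, not from the original post): the literal 2**2 is folded into a constant at compile time, while base ** exponent compiles to a real power instruction that runs on every call.
import dis

def literal():
    return 2 ** 2            # constant-folded: the bytecode just loads 4

def variable(base, exponent):
    return base ** exponent  # a real power instruction, executed on every call

dis.dis(literal)             # shows LOAD_CONST 4
dis.dis(variable)            # shows BINARY_POWER / BINARY_OP (**)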
Now, the ** operator (__pow__ in the Python data model) for Python's built-in float type is specified to do float exponentiation (which is implemented in highly optimized C and assembly), as exponentiation can be done elegantly on floating-point numbers. Look for nb_power in CPython's floatobject.c. So, for the float case, the actual calculation is "free" for all that matters; again, your loop is limited by how much effort it takes to resolve all the names, types and functions to call in your loop, not by doing the actual math, which is trivial.
The ** operator on Python's built-in int type is not as neatly optimized. It's a lot more complicated: it needs to do checks like "if the exponent is negative, return a float", and it does not do elementary math that your computer can do with a single instruction; it handles arbitrary-length integers (remember, a Python integer uses as many bytes as it needs; you can store numbers larger than 64 bits in a Python integer!), which comes with allocations and deallocations. (I encourage you to read long_pow in CPython's longobject.c; it has 200 lines.)
All in all, integer exponentiation is expensive in Python because of Python's type system.
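One way to reproduce the gap without the enclosing-function effect is to keep the operands non-constant and let timeit do the looping; a sketch (absolute numbers will vary by machine):
import timeit

int_time = timeit.timeit("base ** exponent",
                         setup="base, exponent = 2, 2",
                         number=10_000_000)
float_time = timeit.timeit("base ** exponent",
                           setup="base, exponent = 2.0, 2.0",
                           number=10_000_000)
print(f"int**int:     {int_time:.3f} s")
print(f"float**float: {float_time:.3f} s")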

Maximum value of kernel statistics - python

Question: What is the maximum value of kernel statistic counters and how can I handle it in python code?
Context: I calculate some statistics based on kernel statistics (e.g. /proc/partitions; it'll be a customized Python iostat version). But I have a problem with overflowed values - negative values. The original iostat code https://github.com/sysstat/sysstat/blob/master/iostat.c comments:
* Counters overflows are possible, but don't need to be handled in
* a special way: The difference is still properly calculated if the
* result is of the same type as the two values.
My language is Python, so I need to handle the overflow myself. It probably also depends on the architecture (32/64-bit). I've tried 2^64-1 (on a 64-bit system), but with no success.
The following function will work for 32-bit counters:
def calc_32bit_diff(old, new):
    return (new - old + 0x100000000) % 0x100000000

print calc_32bit_diff(1, 42)
print calc_32bit_diff(2147483647, -2147483648)
print calc_32bit_diff(-2147483648, 2147483647)
This obviously won't work if the counter wraps around more than once between two consecutive reads (but then no other method would work either, since the information has been irrecoverably lost).
Writing a 64-bit version of this is left as an exercise for the reader. :)
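For what it's worth, a 64-bit analogue would just swap the modulus; a sketch along the same lines (untested against real counters):
def calc_64bit_diff(old, new):
    # wrap-around-safe difference for an unsigned 64-bit kernel counter
    return (new - old + 0x10000000000000000) % 0x10000000000000000

print(calc_64bit_diff(1, 42))            # 41
print(calc_64bit_diff(2**64 - 1, 0))     # 1: the counter wrapped once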

speeding up scipy.integrate.quad()?

I'm trying to speed up the following code, which computes a sum of integrals. To get good accuracy I need to increase L_max, but that also makes the execution time much longer. The specific case below computes 0.999999 of the probability curve and takes about 65 seconds. I have heard of Cython and its ability to speed up code, but I don't know how to use it or how it could help in this case. Any ideas?
import math
from scipy import integrate
import numpy
from decimal import *
import time

start_time=time.time()
getcontext().prec=100
################################
def pt(t):
    first_term=math.exp(-lam*t)*((0.0001*I)**ni)*(t**(ni-1))*math.exp(-(0.0001*I)*t)/(math.factorial(ni-1))
    sum_term=0.0
    i=0
    while i<ni:
        sum_term=sum_term+((0.0001*I)**i)*(t**(i))*math.exp(-(0.0001*I)*t)/(math.factorial(i))
        i=i+1
    sum_term=lam*math.exp(-lam*t)*sum_term
    total=first_term+sum_term
    return total
#################################
def pLgt(t):
    return Decimal(((Decimal((0.0001*O)*t))**Decimal(L))*Decimal(math.exp(-(0.0001*O)*t)))/Decimal((math.factorial(L)))
######################################
def pL_t(t):
    return (pLgt(t))*Decimal(pt(t))
################################
lam=0.0001
beta=0.0001
ni=10
I=5969
O=48170
L_max=300.0
L=0.0
sum_term=0.0
sum_probability=0.0

while L<L_max:
    probability=(integrate.quad(lambda t: pL_t(t),0,800))[0]
    sum_probability=sum_probability+probability
    sum_term=sum_term+L*probability
    L=L+1.0

print time.time()-start_time
print sum_probability
print sum_term
print (sum_term-1)*0.46+6.5
Doing calculations in Decimal is probably slowing you down a lot without providing any benefit. Decimal calculations are much slower than float, ~100x slower, as noted by Kozyarchuk on Stack Overflow here. Using Decimal types in Numpy arrays keeps you from getting the speed benefits from Numpy.
Meanwhile, it's unclear to me that the results from scipy.integrate.quad would actually be of the level of precision you want; if you really do need arbitrary precision, you may have to write your quadrature code from scratch.
If you do need to use Decimal numbers, at least caching the Decimal representations of those numbers will provide you some speed benefit. That is, using
O=Decimal(48170)
L=Decimal(0.0)
and telling pLgt to just use O and L, will be faster.
The pL_t function looks like a sum of gamma distributions, in which case, you should be able to evaluate the integral as a sum of partial incomplete gamma functions: http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.gdtr.html#scipy.special.gdtr
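For instance, a single gamma term can be checked against scipy.special.gdtr like this (my own sketch; I'm reading 0.0001*I as the rate and ni as the shape, which may not match the full pL_t decomposition):
import math
from scipy import integrate, special

rate, shape = 0.0001 * 5969, 10          # my reading of I and ni from the question

# one gamma pdf integrated numerically with quad ...
pdf = lambda t: rate**shape * t**(shape - 1) * math.exp(-rate * t) / math.factorial(shape - 1)
numeric, _ = integrate.quad(pdf, 0, 800)

# ... versus the closed-form incomplete-gamma CDF
closed_form = special.gdtr(rate, shape, 800)

print(numeric, closed_form)              # should agree to many digits, with no quad call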

numpy float: 10x slower than builtin in arithmetic operations?

I am getting really weird timings for the following code:
import numpy as np

s = 0
for i in range(10000000):
    s += np.float64(1)  # replace with np.float32 and built-in float
built-in float: 4.9 s
float64: 10.5 s
float32: 45.0 s
Why is float64 twice as slow as float? And why is float32 five times slower than float64?
Is there any way to avoid the penalty of using np.float64, and have numpy functions return built-in float instead of float64?
I found that using numpy.float64 is much slower than Python's float, and numpy.float32 is even slower (even though I'm on a 32-bit machine). Every time I use various numpy functions such as numpy.random.uniform, I convert the result to float32 (so that further operations are performed at 32-bit precision).
Is there any way to set a single variable somewhere in the program or in the command line, and make all numpy functions return float32 instead of float64?
EDIT #1:
numpy.float64 is 10 times slower than float in arithmetic calculations. It's so bad that even converting to float and back before the calculations makes the program run 3 times faster. Why? Is there anything I can do to fix it?
I want to emphasize that my timings are not due to any of the following:
the function calls
the conversion between numpy and python float
the creation of objects
I updated my code to make it clearer where the problem lies. With the new code, it would seem I see a ten-fold performance hit from using numpy data types:
from datetime import datetime
import numpy as np

START_TIME = datetime.now()

# one of the following lines is uncommented before execution
#s = np.float64(1)
#s = np.float32(1)
#s = 1.0

for i in range(10000000):
    s = (s + 8) * s % 2399232

print(s)
print('Runtime:', datetime.now() - START_TIME)
The timings are:
float64: 34.56s
float32: 35.11s
float: 3.53s
Just for the hell of it, I also tried:
from datetime import datetime
import numpy as np

START_TIME = datetime.now()

s = np.float64(1)
for i in range(10000000):
    s = float(s)
    s = (s + 8) * s % 2399232
    s = np.float64(s)

print(s)
print('Runtime:', datetime.now() - START_TIME)
The execution time is 13.28 s; it's actually 3 times faster to convert the float64 to float and back than to use it as is. Still, the conversion takes its toll, so overall it's more than 3 times slower compared to the pure-python float.
My machine is:
Intel Core 2 Duo T9300 (2.5GHz)
WinXP Professional (32-bit)
ActiveState Python 3.1.3.5
Numpy 1.5.1
EDIT #2:
Thank you for the answers, they help me understand how to deal with this problem.
But I still would like to know the precise reason (based on the source code perhaps) why the code below runs 10 times slow with float64 than with float.
EDIT #3:
I reran the code under Windows 7 x64 (Intel Core i7 930 @ 3.8 GHz).
Again, the code is:
from datetime import datetime
import numpy as np

START_TIME = datetime.now()

# one of the following lines is uncommented before execution
#s = np.float64(1)
#s = np.float32(1)
#s = 1.0

for i in range(10000000):
    s = (s + 8) * s % 2399232

print(s)
print('Runtime:', datetime.now() - START_TIME)
The timings are:
float64: 16.1s
float32: 16.1s
float: 3.2s
Now both np floats (either 64 or 32) are 5 times slower than the built-in float. Still, a significant difference. I'm trying to figure out where it comes from.
END OF EDITS
CPython floats are allocated in chunks
The key problem with comparing numpy scalar allocations to the float type is that CPython always allocates the memory for float and int objects in blocks of size N.
Internally, CPython maintains a linked list of blocks each large enough to hold N float objects. When you call float(1) CPython checks if there is space available in the current block; if not it allocates a new block. Once it has space in the current block it simply initializes that space and returns a pointer to it.
On my machine each block can hold 41 float objects, so there is some overhead for the first float(1) call but the next 40 run much faster as the memory is allocated and ready.
Slow numpy.float32 vs. numpy.float64
It appears that numpy has 2 paths it can take when creating a scalar type: fast and slow. This depends on whether the scalar type has a Python base class to which it can defer for argument conversion.
For some reason numpy.float32 is hard-coded to take the slower path (defined by the _WORK0 macro), while numpy.float64 gets a chance to take the faster path (defined by the _WORK1 macro). Note that scalartypes.c.src is a template which generates scalartypes.c at build time.
You can visualize this in Cachegrind. I've included screen captures showing how many more calls are made to construct a float32 vs. float64:
float64 takes the fast path
float32 takes the slow path
Updated - Which type takes the slow/fast path may depend on whether the OS is 32-bit vs 64-bit. On my test system, Ubuntu Lucid 64-bit, the float64 type is 10 times faster than float32.
Operating with Python objects in a heavy loop like that, whether they are float or np.float32, is always slow. NumPy is fast for operations on vectors and matrices, because all of the operations are performed on big chunks of data by parts of the library written in C, not by the Python interpreter. Code run in the interpreter and/or using Python objects is always slow, and using non-native types makes it even slower. That's to be expected.
If your app is slow and you need to optimize it, you should either convert your code to a vectorized solution that uses NumPy directly, which is fast, or use tools such as Cython to create a fast implementation of the loop in C.
Perhaps that is why you should use NumPy directly instead of loops:
s1 = np.ones(10000000, dtype=np.float)
s2 = np.ones(10000000, dtype=np.float32)
s3 = np.ones(10000000, dtype=np.float64)
np.sum(s1) <-- 17.3 ms
np.sum(s2) <-- 15.8 ms
np.sum(s3) <-- 17.3 ms
The answer is quite simple: the memory allocation might be part of it, but the biggest problem is that arithmetic operations for numpy scalars are done using "ufuncs", which are meant to be fast for several hundred values, not just 1. There is some overhead in choosing the correct function to call and setting up the loops, overhead that is unnecessary for scalars.
It was easier to just have the scalars be converted to 0-d arrays and then passed to the corresponding numpy ufunc than to write separate calculation methods for each of the many different scalar types that NumPy supports.
The intent was that optimized versions of the scalar math would be added to the type-objects in C. This could still happen, but it never has happened because no-one has been motivated enough to do it. Possibly because the work-around is to convert numpy scalars to Python scalars which do have optimized arithmetic.
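A rough, hedged illustration of that work-around (timings are machine-dependent; only the relative gap matters):
import timeit
import numpy as np

stmt = "for _ in range(100000): s = (s + 8) * s % 2399232"

np_time = timeit.timeit(stmt, setup="s = np.float64(1)",
                        globals=globals(), number=10)
py_time = timeit.timeit(stmt, setup="s = float(np.float64(1))",
                        globals=globals(), number=10)
print(f"np.float64 scalar: {np_time:.2f} s   converted to float: {py_time:.2f} s")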
Summary
If an arithmetic expression contains both numpy and built-in numbers, Python's arithmetic works more slowly. Avoiding this implicit conversion removes almost all of the performance degradation I reported.
Details
Note that in my original code:
s = np.float64(1)
for i in range(10000000):
    s = (s + 8) * s % 2399232
built-in numbers and numpy.float64 are mixed in one expression. Perhaps Python had to convert them all to one type? To check, I made every constant an explicit numpy.float64:
s = np.float64(1)
for i in range(10000000):
    s = (s + np.float64(8)) * s % np.float64(2399232)
If the runtime is unchanged (rather than increased), it would suggest that's what Python indeed was doing under the hood, explaining the performance drag.
Actually, the runtime fell by a factor of 1.5! How is that possible? Isn't the worst thing Python could possibly have to do exactly these two conversions?
I don't really know. Perhaps Python had to dynamically check what needs to be converted into what, which takes time, and being told what precise conversions to perform makes it faster. Perhaps, some entirely different mechanism is used for arithmetics (which doesn't involve conversions at all), and it happens to be super-slow on mismatched types. Reading numpy source code might help, but it's beyond my skill.
Anyway, now we can obviously speed things up more by moving the conversions out of the loop:
q = np.float64(8)
r = np.float64(2399232)
for i in range(10000000):
    s = (s + q) * s % r
As expected, the runtime is reduced substantially: by another 2.3 times.
To be fair, we now need to change the float version slightly, by moving the literal constants out of the loop. This results in a tiny (10%) slowdown.
Accounting for all these changes, the np.float64 version of the code is now only 30% slower than the equivalent float version; the ridiculous 5-fold performance hit is largely gone.
Why do we still see the 30% delay? numpy.float64 numbers take the same amount of space as float, so that won't be the reason. Perhaps the resolution of the arithmetic operators takes longer for user-defined types. Certainly not a major concern.
If you're after fast scalar arithmetic, you should be looking at libraries like gmpy rather than numpy (as others have noted, the latter is optimised more for vector operations rather than scalar ones).
I can also confirm the results. I tried to see what it would look like using all numpy types, and the difference persists. My tests were:
from datetime import datetime
import numpy as np

def testStandard(length=100000):
    s = 1.0
    addend = 8.0
    modulo = 2399232.0
    startTime = datetime.now()
    for i in xrange(length):
        s = (s + addend) * s % modulo
    return datetime.now() - startTime

def testNumpy(length=100000):
    s = np.float64(1.0)
    addend = np.float64(8.0)
    modulo = np.float64(2399232.0)
    startTime = datetime.now()
    for i in xrange(length):
        s = (s + addend) * s % modulo
    return datetime.now() - startTime
So at this point, the numpy types are all interacting with each other, but the 10x difference persists (2 sec vs 0.2 sec).
If I had to guess, I would say that there are two possible reasons why the default float types are much faster. The first possibility is that Python performs significant optimizations under the hood for dealing with certain numeric operations or looping in general (e.g. loop unrolling). The second possibility is that the numpy types involve an extra layer of abstraction (i.e. having to read from an address). To look into the effects of each, I did a few extra checks.
One difference could be the result of python having to take extra steps to resolve the float64 types. Unlike compiled languages that generate efficient tables, python 2.6 (and maybe 3) has a significant cost for resolving things that you'd generally think of as free. Even a simple X.a resolution has to resolve the dot operator EVERY time it is called. (Which is why if you have a loop that calls instance.function() you're better off having a variable "function = instance.function" declared outside the loop).
From my understanding, when you use Python's standard operators, these are fairly similar to using the ones from "import operator". If you substitute add, mul, and mod for your +, *, and %, you see a static performance hit of about 0.5 sec versus the standard operators (in both cases). This means that by wrapping the operators, the standard Python float operations get 3x slower. If you go one step further and use operator.add and those variants, that adds approximately another 0.7 sec (over 1M trials, starting from 2 sec and 0.2 sec respectively). That's verging on 5x slowness. So basically, if each of these issues happens twice, you're basically at the 10x slower point.
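If you want to reproduce that effect yourself, a quick (unscientific) sketch with timeit along the lines described above could look like this:
import timeit

plain = timeit.timeit("s = (s + 8.0) * s % 2399232.0",
                      setup="s = 1.0", number=1_000_000)
wrapped = timeit.timeit("s = mod(mul(add(s, 8.0), s), 2399232.0)",
                        setup="from operator import add, mul, mod; s = 1.0",
                        number=1_000_000)
print(plain, wrapped)   # the operator-function version pays extra call overhead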
So let's assume we're the Python interpreter for a moment. Case 1: we do an operation on native types, say a+b. Under the hood, we can check the types of a and b and dispatch our addition to Python's optimized code. Case 2: we have an operation on two other types (also a+b). Under the hood, we check whether they're native types (they're not). We move on to the 'else' case. The else case sends us to something like a.__add__(b). a.__add__ can then dispatch to numpy's optimized code. So at this point we have had the additional overhead of an extra branch, one '.' attribute lookup, and a function call. And we've only gotten into the addition operation. We then have to use the result to create a new float64 (or alter an existing one). Meanwhile, the Python native code probably cheats by treating its types specially to avoid this sort of overhead.
Based on the above examination of the costliness of Python function calls and scoping overhead, it would be pretty easy for numpy to incur a 9x penalty just getting to and from its C math functions. I can entirely imagine this process taking many times longer than a simple math operation call. For each operation, the numpy library has to wade through layers of Python to get to its C implementation.
So in my opinion, the reason for this is probably captured in this effect:
from datetime import datetime

length = 10000000

class A():
    X = 10

startTime = datetime.now()
for i in xrange(length):
    x = A.X
print "Long Way", datetime.now() - startTime

startTime = datetime.now()
y = A.X
for i in xrange(length):
    x = y
print "Short Way", datetime.now() - startTime
This simple case shows a difference of 0.2 sec vs 0.14 sec (short way faster, obviously). I think what you're seeing is mainly just a bunch of those issues adding up.
To avoid this, I can think of a couple of possible solutions that mainly echo what has already been said. The first solution is to try to keep your evaluations inside NumPy as much as possible, as Selinap said. A large amount of the losses are probably due to the interfacing. I would look into ways to dispatch your job into numpy or some other numeric library optimized in C (gmpy has been mentioned). The goal should be to push as much into C at once as possible, then get the result(s) back. You want to do big jobs, not lots of small jobs.
The second solution, of course, would be to do more of your intermediate and small operations in Python if you can. Clearly, using the native objects is going to be faster. They're going to be the first options in all the branch statements and will always have the shortest path to C code. Unless you have a specific need for fixed-precision calculation or other issues with the default operators, I don't see why one wouldn't use the straight Python functions for many things.
Really strange... I can confirm the results on Ubuntu 11.04 32-bit, Python 2.7.1, numpy 1.5.1 (official packages):
import numpy as np

def testfloat():
    s = 0
    for i in range(10000000):
        s += float(1)

def testfloat32():
    s = 0
    for i in range(10000000):
        s += np.float32(1)

def testfloat64():
    s = 0
    for i in range(10000000):
        s += np.float64(1)
%time testfloat()
CPU times: user 4.66 s, sys: 0.06 s, total: 4.73 s
Wall time: 4.74 s
%time testfloat64()
CPU times: user 11.43 s, sys: 0.07 s, total: 11.50 s
Wall time: 11.57 s
%time testfloat32()
CPU times: user 47.99 s, sys: 0.09 s, total: 48.08 s
Wall time: 48.23 s
I don't see why float32 should be 5 times slower than float64.

Get POSIX/Unix time in seconds and nanoseconds in Python?

I've been trying to find a way to get the time since 1970-01-01 00:00:00 UTC in seconds and nanoseconds in Python, and I cannot find anything that will give me the proper precision.
I have tried using the time module, but that precision is only to microseconds, so the code I tried was:
import time
print time.time()
which gave me a result like this:
1267918039.01
However, I need a result that looks like this:
1267918039.331291406
Does anyone know a possible way to express UNIX time in seconds and nanoseconds? I cannot find a way to set the proper precision or get a result in the correct format. Thank you for any help.
Since Python 3.7 it's easy to achieve with time.time_ns()
Similar to time() but returns time as an integer number of nanoseconds since the epoch.
All the new features that include nanoseconds in the Python 3.7 release:
PEP 564: Add new time functions with nanosecond resolution
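A minimal example of splitting that integer into seconds and nanoseconds (Python 3.7+):
import time

ns = time.time_ns()                        # e.g. 1267918039331291406
seconds, nanoseconds = divmod(ns, 1_000_000_000)
print(f"{seconds}.{nanoseconds:09d}")      # e.g. 1267918039.331291406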
Your precision is just being lost due to string formatting:
>>> import time
>>> print "%.20f" % time.time()
1267919090.35663390159606933594
The problem is probably related to your OS, not Python. See the documentation of the time module: http://docs.python.org/library/time.html
time.time()
Return the time as a floating point number expressed in seconds since the epoch, in UTC. Note that even though the time is always returned as a floating point number, not all systems provide time with a better precision than 1 second. While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls.
In other words: if your OS can't do it, Python can't do it. You can multiply the return value by the appropriate order of magnitude in order to get the nanosecond value, though, imprecise as it may be.
EDIT: The return value is a float, so the number of digits after the decimal point will vary, whether your OS has that level of precision or not. You can format it with "%.nf", where n is the number of digits you want, if you want a fixed-point string representation.
It depends on the type of clock and your OS and hardware whether or not you can even get nanosecond precision at all. From the time module documentation:
The precision of the various real-time functions may be less than suggested by the units in which their value or argument is expressed. E.g. on most Unix systems, the clock “ticks” only 50 or 100 times a second.
On Python 3, the time module gives you access to 5 different types of clock, each with different properties; some of these may offer you nanosecond precision timing. Use the time.get_clock_info() function to see what features each clock offers and what precision time is reported in.
On my OS X 10.11 laptop, the features available are:
>>> for name in ('clock', 'monotonic', 'perf_counter', 'process_time', 'time'):
... print(name, time.get_clock_info(name), sep=': ')
...
clock: namespace(adjustable=False, implementation='clock()', monotonic=True, resolution=1e-06)
monotonic: namespace(adjustable=False, implementation='mach_absolute_time()', monotonic=True, resolution=1e-09)
perf_counter: namespace(adjustable=False, implementation='mach_absolute_time()', monotonic=True, resolution=1e-09)
process_time: namespace(adjustable=False, implementation='getrusage(RUSAGE_SELF)', monotonic=True, resolution=1e-06)
time: namespace(adjustable=True, implementation='gettimeofday()', monotonic=False, resolution=1e-06)
so using the time.monotonic() or time.perf_counter() functions would theoretically give me nanosecond resolution. Neither clock gives me wall time, only elapsed time; the values are otherwise arbitrary. They are however useful for measuring how long something took.
It is unlikely that you will actually get nanosecond precision from any current machine.
The machine can't create precision, and displaying significant digits where not appropriate is not The Right Thing To Do.
I don't think there's a platform-independent way (maybe some third party has coded one, but I can't find it) to get the time in nanoseconds; you need to do it in a platform-specific way. For example, this SO question's answer shows how to do it on platforms that supply a librt.so system library for "realtime" operations.
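For illustration, on Linux one platform-specific route is calling clock_gettime through ctypes; a sketch assuming librt.so.1 exposes clock_gettime (on Python 3.3+ you could instead just call time.clock_gettime(time.CLOCK_REALTIME)):
import ctypes

class timespec(ctypes.Structure):
    _fields_ = [("tv_sec", ctypes.c_long), ("tv_nsec", ctypes.c_long)]

CLOCK_REALTIME = 0
librt = ctypes.CDLL("librt.so.1", use_errno=True)

ts = timespec()
if librt.clock_gettime(CLOCK_REALTIME, ctypes.byref(ts)) != 0:
    raise OSError(ctypes.get_errno(), "clock_gettime failed")

print("%d.%09d" % (ts.tv_sec, ts.tv_nsec))   # seconds.nanoseconds since the epoch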
