Reusing module references in Python (Matplotlib)

I think I may have misunderstood something here... But here goes.
I'm using the psd method in matplotlib inside a loop. I'm not making it plot anything; I just want the numerical result, so:
import pylab as pyl
...
psdResults = pyl.psd(inputData, NFFT=512, Fs=sampleRate, window=blackman)
But that's being looped 36 times every time I run the function it's in.
I'm getting a slow memory leak when I run my program over time, so I used 'heapy' to monitor this, and every time I run the function it adds 36 entries to each of these three heap categories:
dict of matplotlib.lines.Line2D
dict of matplotlib.transforms.CompositeAffine2D
dict of matplotlib.path.Path
I can only conclude that each time I use the psd method it adds the result to some dictionary somewhere, whereas I want to effectively wipe the memory each loop, i.e. reset pylab so it doesn't store anything.
I could be misinterpreting heapy, but it seems pretty clear that pylab is growing each loop even though I only want to use its psd method; I don't want it saving the results anywhere itself!
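For reference, I'm watching the heap with guppy's heapy, roughly like this (runMyFunction stands in for the function containing the loop):

from guppy import hpy  # heapy is part of the guppy package

hp = hpy()
hp.setrelheap()    # count only objects allocated after this point
runMyFunction()    # the function containing the 36-iteration psd loop
print(hp.heap())   # shows the per-type growth quoted above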
Cheers

Try this:
from matplotlib import mlab
psdResults = mlab.psd(inputData, NFFT=512, Fs=sampleRate, window=blackman)
Does that improve the situation?
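For context: pylab's psd is pyplot.psd, which draws the spectrum onto the current axes, and the Line2D, transform, and Path objects it creates stay referenced by the figure, which matches the heapy output above. mlab.psd is purely numerical and returns a (Pxx, freqs) tuple without touching any figure state. A minimal sketch of the loop (dataBlocks and sampleRate are placeholders, and np.blackman(512) stands in for the question's blackman):

import numpy as np
from matplotlib import mlab

for inputData in dataBlocks:  # hypothetical iterable of sample blocks
    # Purely numerical: nothing is drawn, so no figure state accumulates.
    Pxx, freqs = mlab.psd(inputData, NFFT=512, Fs=sampleRate,
                          window=np.blackman(512))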

Related

Is there any way to output "monitor" objects' values in real time as text/values in Salabim?

I know we can use AnimateMonitor to show a synchronized real-time view of a monitor object's values as a graph in the built-in display/GUI, but I have a requirement to use those values to plot another graph in a browser. So I need the raw data in a synchronized, real-time manner. Is that possible?
What you could do is start the animation (possibly with a minimized window and no further functionality) and create your own Environment.animation_pre_tick() method that emits the values you want to show in another way.
Something like:
import salabim as sim

class RealTimeEnvironment(sim.Environment):
    def animation_pre_tick(self, t):
        ...  # put your emitting code here at time=t

env = RealTimeEnvironment()
env.animate(True)
env.run(sim.inf)
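For example, to stream a monitor's running mean (a minimal sketch; my_monitor is a hypothetical sim.Monitor populated elsewhere in the model, and print stands in for whatever transport feeds the browser):

import salabim as sim

class RealTimeEnvironment(sim.Environment):
    def animation_pre_tick(self, t):
        # Emit a timestamped value; replace print with a write to a
        # file, socket, or websocket that the browser-side graph reads.
        print(f"{t},{my_monitor.mean()}")

env = RealTimeEnvironment()
my_monitor = sim.Monitor("queue length")  # hypothetical monitor
env.animate(True)
env.run(sim.inf)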

Generating objects with position in Maya using Python

I have some code that uses Python to read some data from a file and then, still in Python, generates an object in Maya based on that file. Everything works and the object comes out looking correct. The problem, however, is that none of the segments the object is made up of has a correct actual position: the Translate XYZ is set to 0 for all of them, even though they look correct. This means that when I later import the model into Unity3D I can't interact with the objects properly, as the position is still at 0 while the mesh is where it should be. Is there a correct way to generate objects so that they have a position?
The code calls multiple functions to make different segments (one example of such a function is shown below). It then uses maya.cmds.polyUnite to make them into one object. This is repeated in a for-loop that runs some number of times (specified in the file). The for-loop calls cmds.duplicate on the object made above and moves the new object along the z-axis using cmds.move(0, 0, z, duplicated object). Due to some bad coding, the code then calls polyUnite to merge everything into one big object and calls polySeparate to split it into small segments again (a sketch of this loop follows the code below). Is it possible that this is causing the problem?
Each segment is generated something like this:
import maya.cmds as cmds

# build the left wall, position and orient it, then flip its normals
cmds.polyCube(n='leftWall', w=w, h=h, d=d, sw=w, sh=h, sd=d)
cmds.setAttr('leftWall.translateX', -Bt/float(2)-(THICKNESS/float(2)))
cmds.setAttr('leftWall.translateY', a)
cmds.setAttr('leftWall.rotateZ', -90)
cmds.polyNormal('leftWall', nm=0, unm=0)
# same for the right wall, mirrored across x
cmds.polyCube(n='rightWall', w=w, h=h, d=d, sw=w, sh=h, sd=d)
cmds.setAttr('rightWall.translateX', Bt/float(2)+(THICKNESS/float(2)))
cmds.setAttr('rightWall.translateY', a)
cmds.setAttr('rightWall.rotateZ', 90)
cmds.polyNormal('rightWall', nm=0, unm=0)
# merge both walls into a single object and texture it
cmds.polyUnite('leftWall', 'rightWall', n='walls')
addTexture('walls', 'wall.jpg')
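And the duplication step described above, reconstructed as a sketch (segmentCount, segmentDepth, and the object names are placeholders, not the actual code):

pieces = ['walls']
for i in range(1, segmentCount):            # count comes from the file
    dup = cmds.duplicate('walls')[0]        # copy the first segment
    cmds.move(0, 0, i * segmentDepth, dup)  # shift the copy along z
    pieces.append(dup)
cmds.polyUnite(*pieces, n='building')       # merge everything...
cmds.polySeparate('building')               # ...then split it again

Note that polyUnite creates a brand-new transform at the origin, which is consistent with the zeroed Translate values described.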

Running same python code multiple times and getting inconsistent results

I am new to Python, so I am not sure whether this problem is due to my inexperience or to a glitch.
I am running this code multiple times on the same data (no random number generation) and getting different results. This has occurred with more than one variable so far, and obviously I cannot proceed with the analysis until I figure out which results are trustworthy. Here is a short sample of the results I have obtained after running the code four times. Why is there such a discrepancy between these outputs? I am puzzled and would greatly appreciate your advice.
Linear Regression
from scipy.stats import linregress
import scipy.stats
from scipy.signal import welch
import matplotlib
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.signal as signal
part_022_o = pd.read_excel(r'C:\Users\Me\Desktop\Behavioral Data Processed\part_022_combined_other.xlsx')
distance_o = part_022_o["distance"]
fs = 200
f, Pwelch_spec = signal.welch(distance_o, fs=fs, window='hanning',nperseg=400, noverlap=200, scaling='density', average='mean')
log_f = np.log(f, where=f>0)
log_pwelch = np.log(Pwelch_spec, where=Pwelch_spec>0)
idx = np.isfinite(log_f) & np.isfinite(log_pwelch)
polynomial_coefficients = np.polyfit(log_f[idx],log_pwelch[idx],1)
print(polynomial_coefficients)
scipy.stats.linregress(log_f[idx], log_pwelch[idx])
Results First Attempt
[ 0.00324568 -2.82962602]
Results Second Attempt
[-2.70137164 6.97117509]
Results Third Attempt
[-2.70137164 6.97117509]
Results Fourth Attempt
[-2.28028005 5.53839502]
The same thing happens when I use scipy.stats.linregress().
Thank you,
Confused
Edit: full code added.
Also, the issue appears to be related to np.log(), since only the values of the "log_f" array seem to change between outputs. It is hard to be certain that nothing else is changing (e.g. log_pwelch), but the differences in output clearly correspond to differences in the first value of the "log_f" array.
Edit: I have narrowed the issue down to np.log(f, where=f>0). The first value in the f array is zero. According to the numpy log documentation, "...Note that if an uninitialized out array is created via the default out=None, locations within it where the condition is False will remain uninitialized." Apparently this means the value is unpredictable and can vary from run to run, which is exactly what I am observing. Given my inexperience with Python, I am not sure what the best solution is (e.g. specifying the out array in the log function, using a random seed, just noting the regression coefficients whenever the value of zero is unchanged after log, etc.).
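If specifying the out array is the right route, a minimal sketch would be to pre-fill it so the masked positions hold NaN instead of leftover memory:

import numpy as np

# Positions where the condition is False keep the out array's value,
# so pre-filling with NaN makes them deterministic; the np.isfinite()
# mask then filters them out exactly as in the code above.
log_f = np.log(f, out=np.full(f.shape, np.nan), where=f > 0)
log_pwelch = np.log(Pwelch_spec, out=np.full(Pwelch_spec.shape, np.nan),
                    where=Pwelch_spec > 0)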
Try to use a random seed to reproduce results. Do this with the following code at the top of your program:
import numpy as np
np.random.seed(123)  # or any number you want
see here for more info: https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.seed.html
A random seed ensures you get repeatable results when some part of your program is generating numbers at random.
Try finding out what the functions (np.polyfit(), np.log()) are actually doing by reading their documentation.
Using a seed value like this is standard practice in scikit-learn and ML.

Using Scipy curve_fit with external program?

Is it possible to use curve_fit with an external program?
For example, I have a function
def test(a, b):
    os.system('./program a b')
    data = numpy.load('outputdata.txt')
    return data
which takes two variables as input for an external program written in C++. This program is a simulation and writes its output to the text file. I then read the text file and return its contents for plotting. The one thing that is immediately noticeable is that it does not need an X variable.
I tried running this function in curve_fit, but it will only give me the initial guess variables as a result with infinite errors.
Is this at all possible? Could the lack of an X variable be the problem? Because curve_fit needs one, I just gave it an array of [1...1], since it is not actually used.
Thx.
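Worth noting: as written, os.system('./program a b') passes the literal characters a and b rather than their values, so the simulation output never changes between evaluations; curve_fit's finite-difference Jacobian is then zero, which would produce exactly the initial guess with infinite errors. A sketch of how the call could be wired up (observedData, the program path, and the output format are assumptions):

import subprocess
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    # Pass the parameter *values* on the command line.
    subprocess.run(['./program', str(a), str(b)], check=True)
    return np.loadtxt('outputdata.txt')  # np.load expects .npy, not text

# curve_fit requires an xdata argument even if the model ignores it;
# its length just has to match the measured data.
x = np.arange(len(observedData))
popt, pcov = curve_fit(model, x, observedData, p0=[1.0, 1.0])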

How to return multiple values using scipy ndimage.generic_filter in Python?

I'm looking for a way to output multiple values using the generic_filter function in scipy.ndimage, like so:
import numpy as np
from scipy import ndimage
a = np.array([range(1,5), range(5,9), range(9,13), range(13,17)])

def summary(a):
    minVal = np.min(a)
    maxVal = np.max(a)
    return [minVal, maxVal]

[arrMin, arrMax] = ndimage.generic_filter(a, summary, footprint=np.ones((3,3)))
But I keep getting the error that a float is expected.
I've played with the 'output' parameter, like so:
arrMin = np.zeros(np.shape(a))
arrMax = np.zeros(np.shape(a))
ndimage.generic_filter(a, summary, footprint=np.ones((3,3)), output = [arrMin, arrMax])
to no avail. I've also tried returning a named tuple, a class, or a dictionary, as per this question, none of which has worked.
Based on the comments, you want to perform multiple filters simultaneously rather than performing them separately.
Unfortunately, I do not think this filter works that way; it expects you to return a single filtered output value for each corresponding input value. I looked for a way to do simultaneous filters with numpy/scipy but couldn't find anything.
If you can manage a data flow that lets you load the image, filter it, process it, and produce some small result in separate parallel paths (one per filter), you may get some benefit from multiprocessing, but used naively it is likely to take more time than doing everything sequentially. If you really have a bottleneck that multiprocessing solves, you should also look into sharing your input array rather than loading it in each process.
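If two separate passes are acceptable, each returning one scalar per window, that version is straightforward; and for min/max specifically, scipy ships dedicated filters that do the same job much faster (a minimal sketch):

import numpy as np
from scipy import ndimage

a = np.array([range(1, 5), range(5, 9), range(9, 13), range(13, 17)])

# One pass per statistic: generic_filter wants a single scalar per call.
arrMin = ndimage.generic_filter(a, np.min, footprint=np.ones((3, 3)))
arrMax = ndimage.generic_filter(a, np.max, footprint=np.ones((3, 3)))

# Equivalent, and much faster, using the dedicated rank filters.
arrMin = ndimage.minimum_filter(a, footprint=np.ones((3, 3)))
arrMax = ndimage.maximum_filter(a, footprint=np.ones((3, 3)))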
