Let's consider a distribution L(t) depending on a float parameter t.
I need to compute a value of t such that the 5%-quantile of the distribution L(t) equals 0.
Are there any solutions in Python for this problem? Generating a new sample for each candidate t and computing its empirical quantile seems too slow and wasteful.
I'm not sure I understand the question completely, but it sounds like you have a distribution with PDF L(x, t) on an interval [a, b] (with a negative), and you want to find t such that the integral from a to 0 over x equals 1/20 (5%), right?
That looks like a root-finding problem to me:
∫_a^0 L(x, t) dx - 1/20 = 0
In Python, the function I usually use for this is scipy.optimize.brentq.
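A minimal sketch of that approach, assuming (since the question does not say) that L(t) is a normal distribution with mean t and unit standard deviation; any scipy.stats family with a `ppf` method would work the same way:

```python
from scipy import stats
from scipy.optimize import brentq

def quantile_at_zero(t, q=0.05):
    # 5%-quantile of L(t); we want the t for which this is exactly 0.
    return stats.norm(loc=t, scale=1.0).ppf(q)

# brentq needs a bracket where the function changes sign;
# [-10, 10] is wide enough here.
t_star = brentq(quantile_at_zero, -10.0, 10.0)
print(t_star)  # for N(t, 1) this is -stats.norm.ppf(0.05), about 1.6449
```

Using `ppf` avoids sampling entirely; only if the quantile has no closed form would you fall back to an empirical estimate inside the root-finder.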
I have the following algorithm that I want to implement in Python:
I'm not sure how to set it up, and especially how to deal with the minimum function. Can anyone help me?
I have computed the norm of the gradient from matrix A and vector b myself, as follows:
import numpy as np

r = b - A @ x              # residual; note @ (matrix product), not #
r_new = np.inner(r, r)
norm = np.sqrt(r_new)
But how do I deal with the minimum function and the overall setup?
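The algorithm itself is not shown in the question, but the residual r = b - A @ x suggests steepest descent with an exact line search on the quadratic f(x) = ½ xᵀAx - bᵀx. Under that assumption, the inner "minimum" over the step size has the closed form α = rᵀr / rᵀAr, so no numerical minimizer is needed; a hedged sketch:

```python
import numpy as np

def steepest_descent(A, b, x0, tol=1e-10, max_iter=1000):
    # Assumes A is symmetric positive definite, so that minimizing
    # f(x) = 0.5 x^T A x - b^T x is equivalent to solving A x = b.
    x = x0.astype(float)
    for _ in range(max_iter):
        r = b - A @ x                      # residual = negative gradient
        if np.sqrt(np.inner(r, r)) < tol:  # the norm from the question
            break
        # Exact line search: alpha minimizes f(x + alpha * r) in closed form.
        alpha = np.inner(r, r) / np.inner(r, A @ r)
        x = x + alpha * r
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = steepest_descent(A, b, np.zeros(2))
print(x)  # should satisfy A @ x ≈ b
```

If the pictured algorithm is a different method (e.g. conjugate gradient), the loop body changes, but the residual and stopping criterion stay the same.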
I have a set of, say, 100 points, and the distance between every pair of points is given. That means I have a 100x100 dataset giving the distance of each of the 100 points from all the others. I want to form clusters from this dataset under the condition that the distance between any two points in a cluster must not be greater than x (where x can be, for example, 25 km).
I am new to clustering and data science. Please guide me on how to solve this problem, and which libraries can solve it most efficiently. Any help will be appreciated. :)
This can be solved using sklearn's agglomerative clustering by setting the affinity to "precomputed".
Refer to this link for the solution.
This is most likely a dumb question, but being a beginner in Python/NumPy I will ask it anyway. I have come across a lot of posts on how to normalize an array/matrix in numpy, but I am not sure about the WHY. Why, and when, does an array/matrix need to be normalized in numpy? When is it used?
Normalization can have multiple meanings in different contexts; my question belongs to the field of data analytics/data science. What does normalization mean in this context? Or, more specifically, in what situations should I normalize an array?
The second part of this question is: what are the different methods of normalization, and can they be used interchangeably in all situations?
The third and final part: can normalization be used for arrays of any dimension?
Links to any reference material (for beginners) will be appreciated.
Consider trying to cluster objects with two numerical attributes, A and B, both equally important. Attribute A can range from 0 to 1000, and attribute B can range from 0 to 5.
If you did not normalize A and B, attribute A would completely overpower attribute B under any standard distance metric.
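A small illustration of that point, using the stated ranges (0-1000 for A, 0-5 for B) as a simple min-max scale. The two example points and their values are made up for illustration:

```python
import numpy as np

# Two hypothetical objects: close in A (difference 10 out of 1000),
# far in B (difference 4 out of 5).
points = np.array([
    [100.0, 1.0],
    [110.0, 5.0],
])

raw_dist = np.linalg.norm(points[0] - points[1])

# Min-max scale each column to [0, 1] using the attributes' stated ranges.
ranges = np.array([1000.0, 5.0])
scaled = points / ranges
scaled_dist = np.linalg.norm(scaled[0] - scaled[1])

print(raw_dist)     # ≈ 10.77, dominated by A's difference of 10
print(scaled_dist)  # ≈ 0.80, now dominated by B's difference of 0.8
```

After scaling, the large-range attribute no longer drowns out the small-range one, which is exactly why distance-based methods (clustering, k-NN) usually want normalized inputs.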
I have a question about the Levenberg-Marquardt optimization method in Python:
Generally, Levenberg-Marquardt is used for deterministic systems. Can I use it on a stochastic model to estimate unknown parameters (the inputs of my model)?
Thanks
The requirement for the Levenberg-Marquardt algorithm is that you need to be able to calculate the Jacobian (the derivative with respect to your parameters).
If that is the case for your problem, then yes; my guess is that it is not.
Perhaps the simplex algorithm is what you are looking for.
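A hedged sketch of that suggestion: the simplex method is available in SciPy as scipy.optimize.minimize(method="Nelder-Mead"). It is derivative-free, so it needs no Jacobian. The objective below is a made-up least-squares misfit standing in for the asker's stochastic model:

```python
import numpy as np
from scipy.optimize import minimize

def objective(p):
    # Hypothetical misfit between a linear model a*x + b and "observed" data.
    a, b = p
    x = np.linspace(0.0, 1.0, 20)
    observed = 2.0 * x + 1.0           # stand-in for real measurements
    predicted = a * x + b
    return np.sum((predicted - observed) ** 2)

res = minimize(
    objective,
    x0=[0.0, 0.0],
    method="Nelder-Mead",              # simplex: no derivatives required
    options={"xatol": 1e-8, "fatol": 1e-8},
)
print(res.x)  # should approach [2.0, 1.0]
```

For a genuinely noisy objective, derivative-free methods like Nelder-Mead tolerate the noise better than gradient-based ones, though averaging repeated model runs per evaluation may still be needed.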
I have an array of values which I generated starting from an observed value y1, assuming that this value has a Poisson distribution:
import numpy as np

array = np.random.poisson(np.real(y1), 10000)
What if I want to extract a random value from array that is Poisson-distributed, and hence has a "most probable value" peaking at y1? How can I do that? Does a simple random extraction work, or does something else need to be specified?
EDIT: trying to be more specific. I have an array whose elements are Poisson-distributed. If I want to randomly extract an element from that array, should I tell the method about the array's distribution, or is that not necessary?
I hope this clarifies things a bit.
Just
import random

randval = random.choice(array)
should work fine for you. Once array has been generated, in order to pick one of its items at random it no longer matters by what process, or according to what distribution, array was populated in the first place.
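A minimal end-to-end sketch of that answer, with y1 = 30.0 as a made-up observed value: random.choice samples the existing elements uniformly, so draws from a Poisson-distributed array are themselves Poisson-distributed, and their mean stays near y1 without telling choice anything about the distribution:

```python
import random
import numpy as np

y1 = 30.0  # hypothetical observed value
array = np.random.poisson(np.real(y1), 10000)

randval = random.choice(array)                       # one random element
sample = [random.choice(array) for _ in range(5000)] # many random elements

print(randval)
print(np.mean(sample))  # close to y1, the Poisson mean
```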