Python Grouping Data

I have a set of data:
(1438672131.185164, 377961152)
(1438672132.264816, 377961421)
(1438672133.333846, 377961690)
(1438672134.388937, 377961954)
(1438672135.449144, 377962220)
(1438672136.540044, 377962483)
(1438672137.172971, 377962763)
(1438672138.24253, 377962915)
(1438672138.652991, 377963185)
(1438672139.069998, 377963285)
(1438672139.44115, 377963388)
What I need to figure out is how to group them. Until now I've used a very simple approach: I diffed consecutive values of the second element of the tuples, and if the diff was bigger than a certain pre-defined threshold I put them into different groups. But that has yielded only unsatisfactory results.
Theoretically, though, I imagine it should be possible to determine whether a value of the second element of a tuple belongs to the same group or not by fitting the points onto one or multiple lines, because I know that the first element of each tuple is strictly monotonic (it's a timestamp from time.time()) and I know that all resulting data sets will be close to linear. Let's say the tuple is (y, x). There are only three options:
Either all data fits the same equation y = mx + c,
or there is only a differing offset c,
or there is a differing offset c and a different slope m.
The above set would form one group only. The following set would resolve into three groups:
(1438672131.185164, 377961152)
(1438672132.264816, 961421)
(1438672133.333846, 477961690)
(1438672134.388937, 377961954)
(1438672135.449144, 962220)
(1438672136.540044, 377962483)
(1438672137.172971, 377962763)
(1438672138.24253, 377962915)
(1438672138.652991, 377963185)
(1438672139.069998, 477963285)
(1438672139.44115, 963388)
group1:
(1438672131.185164, 377961152)
(1438672134.388937, 377961954)
(1438672136.540044, 377962483)
(1438672137.172971, 377962763)
(1438672138.24253, 377962915)
(1438672138.652991, 377963185)
group2:
(1438672132.264816, 961421)
(1438672135.449144, 962220)
(1438672139.44115, 963388)
group3:
(1438672133.333846, 477961690)
(1438672139.069998, 477963285)
Is there a module or an otherwise simple solution that will solve this problem? I've found least-squares in numpy and scipy, but I'm not quite sure how to properly use or apply them. If there is another way besides linear functions, I'm happy to hear about it as well!
EDIT 2
It is a two-dimensional problem unfortunately, not a one-dimensional one. For example
(1439005464, 477961152)
would (assuming for this data a relationship of approximately 1:300) still belong to the first group.
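For what it's worth, here is a minimal sketch of the offset-clustering idea, not a tuned solution: if all groups share one slope m in x = m*t + c, then each point's offset c = x - m*t identifies its group, and a jump larger than some tolerance between sorted offsets marks a group boundary. The helper name group_by_offset, the Theil-Sen-style slope estimate, and the tolerance are assumptions for illustration.

import numpy as np

def group_by_offset(points, m=None, tol=1000.0):
    # points: list of (t, x) tuples; assumes all groups share one slope m
    t = np.array([p[0] for p in points])
    x = np.array([p[1] for p in points])
    if m is None:
        # median of all pairwise slopes (Theil-Sen style); for heavily
        # mixed data a RANSAC-style estimate or a known slope may be safer
        slopes = [(x[j] - x[i]) / (t[j] - t[i])
                  for i in range(len(t))
                  for j in range(i + 1, len(t)) if t[j] != t[i]]
        m = np.median(slopes)
    c = x - m * t                       # offset of every point from the line
    order = np.argsort(c)
    groups, current = [], [order[0]]
    for k in order[1:]:
        # a jump larger than tol in the sorted offsets starts a new group
        if c[k] - c[current[-1]] <= tol:
            current.append(k)
        else:
            groups.append(current)
            current = [k]
    groups.append(current)
    return [[points[i] for i in g] for g in groups]

On the shuffled example above, the offsets fall into three clusters roughly 377 million and 100 million apart, so any tolerance well below those gaps should reproduce the three groups.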

Related

Numpy Divide Arrays With Multiple Out Conditions

I have two two-dimensional arrays (say arrayA and arrayB) that are exactly the same size (2500 in X, 1500 in Y). I am interested in dividing arrayA by arrayB, but I have three conditions that I would like to exclude from the division and instead replace with a specific value. These conditions are:
If arrayB contains zero at point (Bx,By), replace output (Cx,Cy) with (arrayA*arrayA)
If arrayA contains zero at point (Ax,Ay), replace output (Cx,Cy) with 0.50
If both arrayA & B at overlapping points (Ax,Ay & Bx,By) contain 0, replace output (Cx,Cy) with 1
I've found that numpy.divide parameters out and where allow me to define each of these individually, so I've taken the first condition and arranged it as follows:
arrayC = np.divide(arrayA, arrayB, out=(arrayA*arrayA), where=arrayB!=0)
My question is how can I combine the other two conditions and their desired outputs within this operation?
One solution, not sure it is the fastest
# masks for the four cases
za = A == 0
zb = B == 0
case0 = (~za) & ~zb     # both non-zero: normal division
case1 = zb & ~za        # B zero, A non-zero: use A*A
case2 = za & ~zb        # A zero, B non-zero: use 0.5
case3 = za & zb         # both zero: use 1
C = case3 * 1 + case2 * 0.5 + case1 * A * A   # cases 3, 2, 1
C[case0] = A[case0] / B[case0]                # case 0
Could be more compact with fewer intermediate values, but I've chosen clarity.
You could also use a cascade of np.where:
zb = B == 0
C = np.where(A == 0, np.where(zb, 1, 0.5), np.where(zb, A * A, A / B))
Edit: better version (but still not perfect)
zb = B == 0
za = A == 0
C = np.where(za, np.where(zb, 1, 0.5), A * A)
np.divide(A, B, out=C, where=(~zb) & ~za)
It combines np.where with your np.divide where= approach, and it is as fast as the previous solution. It also does not complain about division by zero, since the division occurs only for the cases where it is needed. Nevertheless, it computes the first version of C (the one before np.divide), and particularly A*A, everywhere, even where it is not needed, since those values will be overwritten. So it could probably still be better.
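As a quick hand-checked sanity test of that combined version (toy 2x2 arrays chosen so each of the four cases occurs exactly once):

import numpy as np

A = np.array([[4.0, 0.0], [0.0, 6.0]])
B = np.array([[2.0, 3.0], [0.0, 0.0]])

za, zb = A == 0, B == 0
C = np.where(za, np.where(zb, 1, 0.5), A * A)
np.divide(A, B, out=C, where=(~zb) & ~za)
print(C)
# [[ 2.   0.5]
#  [ 1.  36. ]]
# 4/2 = 2 (plain division), A==0 -> 0.5, both zero -> 1, B==0 -> A*A = 36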

OR-Tools solution to partition the data so that the subset of rows where every feature falls in the corresponding range maximizes the objective function

Cross-posted from https://cs.stackexchange.com/questions/153558/find-a-range-of-values-to-subset-the-rows-to-maximize-the-objective-function?noredirect=1#comment323025_153558.
I have searched around for some time but couldn't find an example similar to my problem.
It looks common enough that I would expect it to have been solved; it lies somewhere between search and optimization/regression.
The goal is to find a range of values for each feature, so that the subset of rows where every feature falls in the corresponding range maximizes the objective function.
Assume we have a matrix with responses Yi and a corresponding set of features Xi (say around 40).
The number of samples is relatively large, 100k+.
Table example
So in this case, over the total data, sum(Y_i) = 73 and mean(Y_i) = 6.0833.
The problem is to:
maximize sum(Y_i) subject to:
mean(Y_i) > 7
sum(i) > 5000
where i runs over the selected row indices, and rows are selected by imposing 2 constraints (a "<" and a ">") for each feature.
I have managed to get a solution using DEoptim in R for 5-6 variables with the two conditions (partitions) "<" and ">". For more features it gets slow or fails to converge.
Seeing the (somewhat) similar question (and answer) here: Pandas find subset of rows minimizing the sum of a column under other column constraint,
I am wondering if there is a way to formulate my problem in OR-Tools as well. I have gone through the documentation on https://developers.google.com/optimization but still struggle to understand how to express my problem.
I would appreciate any pointers as to how to formulate (and solve) this problem in OR-Tools in the general case, where there is a dataset with features plus a response variable, and the objective is to find the splits on the features that maximize (or minimize) the sum (or some other function) of the response variable.
The number of splits should be 2 per feature, as we want the solution to be locally monotonic with respect to the features.
Thanks.
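One possible way to express the row-selection logic with OR-Tools' CP-SAT solver, sketched on toy integer data; the variable names, bounds, and stand-in size constraint are illustrative assumptions, and the mean constraint is linearized as sum(Y_r * sel_r) > 7 * count(selected):

from ortools.sat.python import cp_model

X = [[3, 7], [5, 2], [8, 9], [1, 4]]   # toy feature matrix (integers)
Y = [10, 3, 8, 6]                      # toy response variable
n_rows, n_feat = len(X), len(X[0])

model = cp_model.CpModel()

# two splits (a lower and an upper bound) per feature
lo = [model.NewIntVar(0, 10, 'lo%d' % f) for f in range(n_feat)]
hi = [model.NewIntVar(0, 10, 'hi%d' % f) for f in range(n_feat)]
for f in range(n_feat):
    model.Add(lo[f] <= hi[f])

# sel[r] is true iff every feature of row r falls inside its range
sel = [model.NewBoolVar('sel%d' % r) for r in range(n_rows)]
for r in range(n_rows):
    inside = []
    for f in range(n_feat):
        ge = model.NewBoolVar('')              # lo[f] <= X[r][f]
        model.Add(lo[f] <= X[r][f]).OnlyEnforceIf(ge)
        model.Add(lo[f] > X[r][f]).OnlyEnforceIf(ge.Not())
        le = model.NewBoolVar('')              # X[r][f] <= hi[f]
        model.Add(hi[f] >= X[r][f]).OnlyEnforceIf(le)
        model.Add(hi[f] < X[r][f]).OnlyEnforceIf(le.Not())
        inside += [ge, le]
    model.AddBoolAnd(inside).OnlyEnforceIf(sel[r])
    model.AddBoolOr([b.Not() for b in inside]).OnlyEnforceIf(sel[r].Not())

# mean(Y_i) > 7 over the selected rows, written linearly
model.Add(sum(Y[r] * sel[r] for r in range(n_rows)) > 7 * sum(sel))
model.Add(sum(sel) >= 1)   # stand-in for the real size constraint

model.Maximize(sum(Y[r] * sel[r] for r in range(n_rows)))
solver = cp_model.CpSolver()
if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print([r for r in range(n_rows) if solver.Value(sel[r])])

With 100k rows this exact encoding would be far too large; it only shows the shape of the formulation (box constraints channeled into selection literals plus a linearized mean constraint).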

Sort unknown length array within unknown length 2D array - Python

I have a Python script which ends up creating a 2D array based on user input. Therefore, the length of the 2D array is unknown, and the lengths of the individual arrays within the 2D array are also unknown, until the user has input the information. I would like to sort the individual array pieces based on a value associated with them. An example of a possible output that needs to be sorted appears in the answer's code below.
Basically, each individual array is a failure symptom followed by a list of possible components, each having a "score" associated with it that is the likelihood that this component is causing the failure. My goal is to reorder each array so that the components and their scores are in descending order by score, i.e., each component and its score need to move together. The problem, like I said, is that I do not know the length of anything until user input is given. There could be only 1 failure symptom input, or there could be 9. A failure symptom could contain only 1 component, or maybe 12. I know it will take nested for loops and if statements, but I haven't been able to figure it out for all the possible scenarios. Some possible scenarios I have thought of:
The array is already in order (move to the next failure symptom)
The first component is correct, but the ones after may not be. Or the first two are correct, but the ones after may not be, etc...
The array is completely backwards in order
The array only contains 1 component, therefore there is no need to sort
The array is in some random order, so some positions for some components may already be in the correct spot while some others aren't
Every time I feel like I am making headway, I think of another scenario which wouldn't hold up. Any help is greatly appreciated!
Your problem is a bit special. You don't just want to sort a multidimensional array, which would be rather simple using the default sorting algorithms; you also want to keep the key/value pairs together.
The second problem is that the keys are strings with numbers in them. So simple string comparison wouldn't work, because strings are compared letter by letter, so "test9" > "test11" would be true (the second 1 wouldn't even be considered, because 9 > 1).
The simplest solution I figured out is the following:
# get the failure id of one list
def failureId(value):
    return int(value[0].replace("failure", ""))

# get the id of one component
def componentId(value):
    return int(value.replace("component", ""))

# sort one failure list using bubble sort
def sortFailure(failure):
    # iterating through the array twice (only the keys, ignoring the values)
    for i in range(1, len(failure), 2):
        for j in range(1, i, 2):
            # comparing the component ids
            if componentId(failure[j]) > componentId(failure[j + 2]):
                # swapping keys and values
                failure[j], failure[j + 2] = failure[j + 2], failure[j]
                failure[j + 1], failure[j + 3] = failure[j + 3], failure[j + 1]

# sorting the full list
def sortData(data):
    # sorting the failures using the default sort algorithm
    data.sort(key=failureId)
    # sorting each single failure list itself
    for failure in data:
        sortFailure(failure)

data = [['failure2', 'component2', 0.15, 'component1', 0.85],
        ['failure3', 'component1', 0.95],
        ['failure1', 'component1', 0.05, 'component3', 0.8, 'component2', 0.1, 'component4', 0.05]]
print(data)
sortData(data)
print(data)
The first two functions are required to get the numbers (= ids) out of the strings, as mentioned above. The sortFailure function uses bubble sort to sort a single failure list; it uses a step of 2 in the range function because we want to skip over the score values while comparing components. If two entries are in the wrong order, we swap both the keys and the values. In the sortData function we use the built-in sort for lists to sort the whole list (by failure ids), and then take each sublist and sort it using the other function.
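If the goal is the score-descending order described in the question, a shorter alternative sketch (assuming the flat [failure, component, score, component, score, ...] layout used above) is to zip components with scores, sort the pairs with the built-in sort, and flatten again:

def sortFailureByScore(failure):
    # pair each component with its score: failure[1::2] are the components,
    # failure[2::2] the scores
    pairs = list(zip(failure[1::2], failure[2::2]))
    pairs.sort(key=lambda p: p[1], reverse=True)
    # write the reordered pairs back, keeping the failure name at index 0
    failure[1:] = [item for pair in pairs for item in pair]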

Checking validity of permutations in python

Using Python, I would like to generate all possible permutations of 10 labels (for simplicity, I'll call them a, b, c, ...) and return all permutations that satisfy a list of conditions. These conditions have to do with the ordering of the different labels; for example, let's say I want to return all permutations in which a comes before b and d comes after e. Notably, none of the conditions pertain to any details of the labels themselves, only their relative orderings. I would like to know the most suitable data structure and general approach for dealing with these sorts of problems. For example, I can generate all possible permutations of elements within a list, but I can't see a simple way to verify whether a given permutation satisfies the conditions I want.
"The most suitable data structure and general approach" varies, depending on the actual problem. I can outline three basic approaches to the problem you give (generate all permutations of 10 labels a, b, c, etc. in which a comes before b and d comes after e).
First, generate all permutations of the labels using itertools.permutations, and remove/skip the ones where a comes after b or d comes before e. Given a particular permutation p (represented as a Python tuple) you can check for
p.index("a") < p.index("b") and p.index("d") > p.index("e")
This has the disadvantage that you reject three-fourths of the permutations that are initially generated, and that expression involves four passes through the tuple. But this is simple and short and most of the work is done in the fast code inside Python.
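A minimal sketch of this first approach (kept lazy, since 10!/4 = 907200 permutations survive the filter):

from itertools import permutations

labels = "abcdefghij"
valid = (p for p in permutations(labels)
         if p.index("a") < p.index("b") and p.index("d") > p.index("e"))
print(next(valid))   # first qualifying permutation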
Second, generate all permutations of the locations 0 through 9. Consider these to represent the inverses of your desired permutations. In other words, the number at position 0 is not what will go to position 0 in the permutation but rather shows where label a will go in the permutation. Then you can quickly and easily check your requirements:
p[0] < p[1] and p[3] > p[4]
since a is the 0th label, etc. If the permutation passes this test, then find its inverse and apply that to your labels. Finding the inverse involves one or two passes through the tuple, so it makes fewer passes than the first method. However, this is more complicated and does more work outside the fast innards of Python, so it is very doubtful that this will be faster than the first method.
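A short sketch of that second approach (the helper name apply_inverse is an assumption): since q[i] is the position label i ends up in, building the permutation means writing labels[i] into slot q[i].

from itertools import permutations

def apply_inverse(q, labels="abcdefghij"):
    # q[i] is the position that label i goes to
    perm = [None] * len(q)
    for i, pos in enumerate(q):
        perm[pos] = labels[i]
    return tuple(perm)

valid = (apply_inverse(q) for q in permutations(range(10))
         if q[0] < q[1] and q[3] > q[4])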
Third, generate only the permutations you need. This can be done with these steps.
3a. Note that there are four special positions in the permutations (those for a, b, d, and e). So use itertools.combinations to choose 4 positions out of the 10 total positions. Note I said positions, not labels, so choose 4 integers between 0 and 9.
3b. Use itertools.combinations again to choose 2 of those positions out of the 4 already chosen in step 3a. Place a in the first (smaller) of those 2 positions and b in the other. Place e in the first of the other 2 positions chosen in step 3a and place d in the other.
3c. Use itertools.permutations to choose the order of the other 6 labels.
3d. Interleave all that into one permutation. There are several ways to do that. You could make one pass through, placing everything as needed, or you could use slices to concatenate the various segments of the final permutation.
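Putting steps 3a through 3d together, one possible sketch (the generator name constrained_perms is an assumption):

from itertools import combinations, permutations

def constrained_perms(labels="abcdefghij"):
    rest = [c for c in labels if c not in "abde"]
    n = len(labels)
    for special in combinations(range(n), 4):     # step 3a: pick 4 positions
        for ab in combinations(special, 2):       # step 3b: split them up
            ed = [pos for pos in special if pos not in ab]
            for tail in permutations(rest):       # step 3c: other 6 labels
                perm = [None] * n
                perm[ab[0]], perm[ab[1]] = "a", "b"   # a before b
                perm[ed[0]], perm[ed[1]] = "e", "d"   # e before d
                filler = iter(tail)                   # step 3d: interleave
                for i in range(n):
                    if perm[i] is None:
                        perm[i] = next(filler)
                yield tuple(perm)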
That third method generates only what you need, but the time involved in constructing each permutation is sizable. I do not know which of the methods would be fastest--you could test with smaller sizes of permutations. There are multiple possible variations for each of the methods, of course.

Filter list to remove similar, but not identical, entries

I have a long list containing several thousand names that are all unique strings, but I would like to filter them to produce a shorter list so that if there are similar names only one is retained. For example, the original list could contain:
Mickey Mouse
Mickey M Mouse
Mickey M. Mouse
The new list would contain just one of them; it doesn't really matter which at this moment in time. It's possible to get a similarity score using the code below (where a and b are the strings being compared), so provided I pick an appropriate ratio I have a way of making an include/exclude decision.
difflib.SequenceMatcher(None, a, b).ratio()
What I'm struggling to work out is how to populate the second list from the first one. I'm sure it's a trivial matter, but it's baffling my newbie brain.
I'd have thought something along the lines of this would have worked, but nothing ends up being populated in the second list.
for p in ppl1:
    for pp in ppl2:
        if difflib.SequenceMatcher(None, p, pp).ratio() <= 0.9:
            ppl2.append(p)
In fact, even if that did populate the list, it'd still be wrong. I guess it'd need to compare the name from the first list to all the names in the second list, keep track of the highest ratio scored, and then only add it if the highest ratio was less than the cutoff criterion.
Any guidance gratefully received!
I'm going to risk never getting an accept because this may be too advanced for you, but here's the optimal solution.
What you're trying to do is a variant of agglomerative clustering. A union-find algorithm can be used to solve this efficiently. From all pairs of distinct strings a and b, which can be generated using
def pairs(l):
    for i, a in enumerate(l):
        for j in range(i + 1, len(l)):
            yield (a, l[j])
you filter the pairs that have a similarity ratio >= .9:
similar = ((a, b) for a, b in pairs(l)
           if difflib.SequenceMatcher(None, a, b).ratio() >= .9)
then union those in a disjoint-set forest. After that, you loop over the sets to get their representatives.
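The idea can be sketched end to end like this (the helper name dedupe and the 0.9 cutoff are assumptions; the inner function implements a small union-find with path halving instead of a full disjoint-set library):

import difflib

def dedupe(names, cutoff=0.9):
    parent = list(range(len(names)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if difflib.SequenceMatcher(None, names[i], names[j]).ratio() >= cutoff:
                parent[find(i)] = find(j)   # union the similar pair

    # keep one representative (the root) per disjoint set
    return [names[i] for i in range(len(names)) if find(i) == i]

print(dedupe(["Mickey Mouse", "Mickey M Mouse", "Mickey M. Mouse"]))
# one representative survives for the three variants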
Firstly, you shouldn't modify a list while you're iterating over it.
One strategy would be to go through all pairs of names and, if a pair is too similar, keep only one of the two, then iterate this until no two names are too similar. Of course, the result would then depend on the initial order of the list, but if your data is sufficiently clustered and your similarity metric sufficiently nice, it should produce what you're looking for.
