I created a Sankey diagram using plotly (python) and it looks like this:
As you can see, some links overlap, but this plot can be easily changed (manually) to this:
I think the overlapping comes from the 3rd column of nodes being centered on y. Is there a way for me to align the 3rd column to the top (or bottom) to fix this problem? (Any other fix is also welcome, of course.)
The only thing I've found is setting x and y for the nodes manually, but I can't seem to set only the y, and this would also involve calculating all of those coordinates.
Thank you for the help!
Edit: My code
import plotly.graph_objects as go
sources = [23, 23, 23, 23, 23, 23, 23, 24, 8, 23, 23, 23, 30, 17, 5, 12, 20, 20, 23, 18, 18, 18, 18, 23, 33, 33, 33, 33, 33, 23, 16, 16, 23]
targets = [7, 13, 6, 21, 1, 2, 15, 23, 23, 32, 25, 19, 23, 23, 23, 23, 27, 22, 20, 31, 4, 0, 3, 18, 11, 26, 9, 14, 28, 33, 29, 10, 16]
values = [50.0, 1542.78, 287.44, 2619.76, 1583.26, 722.1, 5133.69, 6544.0, 2563.35, 6476.59, 4314.0, 82.87, 650.0, 1773.68, 16723.0, 32297.7, 81.64, 266.92, 348.56, 388.57, 743.2, 5403.24, 5821.52, 12356.53, 12905.68, 316.12, 497.68, 354.42, 3830.44, 17904.34, 175.95, 1224.46, 1400.41]
fig = go.Figure(data=[go.Sankey(
    node=dict(
        pad=5,
        thickness=10,
        line=dict(color="black", width=0.5),
        label=list(range(len(values))),
        color="blue"
    ),
    link=dict(
        source=sources,
        target=targets,
        value=values
    ))])
fig.update_layout(title_text="Basic Sankey Diagram", font_size=8)
fig.write_html("test.html")
There's an open issue on GitHub: both x and y positions have to be set for manual positioning to work. Does manually adding y coordinates along with the x coordinates address your problem?
In general there are other issues with sankey sorting as well.
I have been working on problems in this area only in plotly.R, so I'm afraid I can't offer specific Python suggestions to modify your code.
If you're also looking for suggestions on calculating the coordinates manually, you can calculate the y of each node as
1 - (cumulative_sum_of_higher_nodes + current_node_size/2)
or
1 - (cumulative_sum_of_all_nodes_including_current_node - current_node_size/2)
assuming y = 0 is at the bottom of the plot area.
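A rough Python sketch of that formula (my assumption here: each node's size is its total flow through the column, normalised so the column fills the plot height):
import numpy as np

def column_y(node_sizes):
    # y centre of each node in one column, listed top to bottom,
    # assuming y = 0 is at the bottom of the plot area
    sizes = np.asarray(node_sizes, dtype=float)
    sizes = sizes / sizes.sum()   # normalise so the column spans [0, 1]
    cum = np.cumsum(sizes)        # cumulative sum including the current node
    return 1 - (cum - sizes / 2)  # the second form of the formula above
These values could then be passed to go.Sankey as node=dict(..., x=..., y=...), together with matching x values per the issue above; pad is not accounted for in this sketch.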
I have speed data in which I need to detect peaks where the threshold is greater than 20 and the valley is greater than 0. I used this code for peak detection, but I am getting an IndexError.
import numpy as np
from scipy.signal import find_peaks, find_peaks_cwt
import matplotlib.pyplot as plt
import pandas as pd
import sys

np.set_printoptions(threshold=sys.maxsize)

# x must be defined before the lines below that use it
x = np.array([1, 9, 18, 24, 26, 5, 26, 25, 26, 16, 20, 16, 23, 5, 1, 27,
              22, 26, 27, 26, 25, 24, 25, 26, 3, 25, 26, 24, 23, 12, 22, 11, 15, 24, 11,
              26, 26, 26, 24, 25, 24, 24, 22, 22, 22, 23, 24])

zero_locs = np.where(x == 0)
search_lims = np.append(zero_locs, len(x))  # limits for search area
diff_x = np.diff(x)
diff_x_mapped = diff_x > 0

peak_locs = []
for i in range(len(search_lims) - 1):
    peak_loc = search_lims[i] + np.where(diff_x_mapped[search_lims[i]:search_lims[i+1]] == 0)[0][0]
    if x[peak_loc] > 20:
        peak_locs.append(peak_loc)

fig = plt.figure(figsize=(10, 4))
plt.plot(x)
plt.plot(np.array(peak_locs), x[np.array(peak_locs)], "x", color='r')
I tried using a peak detection algorithm, but it is not detecting peaks where the peak value is above 20; I need to detect the peaks where x is 0 and the peak value is above 20.
Expected output: the marked peaks have to be detected.
By running the above script I am getting this error:
IndexError: arrays used as indices must be of integer (or boolean) type
How do I get rid of this error? Any suggestions? Thanks in regards.
You found no peaks.
That is, len(peak_locs) is zero.
So you wind up with this array, whose type defaulted to float:
>>> np.array(peak_locs)
array([], dtype=float64)
To fix it?
Find more peaks!
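In the meantime, a minimal guard (a sketch, assuming the question's x and plotting setup) keeps the plot from crashing by forcing an integer dtype:
peak_locs = np.array(peak_locs, dtype=int)  # an empty list otherwise defaults to float64
if peak_locs.size:
    plt.plot(peak_locs, x[peak_locs], "x", color='r')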
I have 3 arrays of the same length:
import numpy as np
weights = np.array([10, 14, 18, 22, 26, 30, 32, 34, 36, 38, 40])
resistances = np.array([15, 16.5, 18, 19.5, 21, 24, 27, 30, 33, 36, 39])
depths = np.array([0,1,2,3,4,5,6,7,8,9,10])
I want to take each item in weights, find the nearest match in resistances that is >= this item, and then use the index of this nearest match to return the corresponding value from depths, i.e. depths[index].
BUT, with the additional condition that if nothing in resistances is >= the item (e.g. for the max value in weights), just return the last value in depths. I then want to populate a list with the results.
Is there a better way than the for-loop approach below? I would like to avoid the loop.
SWP = []
for w in weights:
    if len(depths[w <= resistances]) == 0:
        swp = depths[-1]
    else:
        swp = np.min(depths[w <= resistances])
    SWP.append(swp)
SWP
You can .clip the indices that np.searchsorted produces with len(resistances) - 1:
depths[
    np.searchsorted(resistances, weights).clip(max=len(resistances) - 1)
]
So any index larger than the last one will become the last one.
Alternative idea (but only if your resistances are sorted): clip the weights with the maximum of resistances:
depths[
    np.searchsorted(resistances, weights.clip(max=resistances.max()))
]
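As a quick self-check (a sketch using the arrays from the question), the clipped-index form can be compared against the original loop:
import numpy as np

weights = np.array([10, 14, 18, 22, 26, 30, 32, 34, 36, 38, 40])
resistances = np.array([15, 16.5, 18, 19.5, 21, 24, 27, 30, 33, 36, 39])
depths = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

vectorised = depths[np.searchsorted(resistances, weights).clip(max=len(resistances) - 1)]

# the original loop, for comparison
SWP = []
for w in weights:
    matches = depths[w <= resistances]
    SWP.append(depths[-1] if len(matches) == 0 else np.min(matches))

assert (vectorised == np.array(SWP)).all()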
Usually to do what you're talking about you want to create a function that can be mapped over a list.
import numpy as np
weights = np.array([10, 14, 18, 22, 26, 30, 32, 34, 36, 38, 40])
resistances = np.array([15, 16.5, 18, 19.5, 21, 24, 27, 30, 33, 36, 39])
depth = np.array([0,1,2,3,4,5,6,7,8,9,10])
def evaluate_weight(w):
    depths = depth[resistances <= w]
    return np.max(depths) if len(depths) else 0
SWP = list(map(evaluate_weight, weights))
I have a list with points from a test:
points = [0, 0, 0, 0, 0, 0, 8, 8, 8, 9, 10, 11, 11, 12, 12, 13, 14, 14, 15, 15, 16,
          16, 17, 17, 18, 19, 21, 21, 23, 23, 24, 24, 24, 25, 25, 25,
          26, 27, 27, 28, 29, 29, 29, 29, 30, 30, 30, 31, 31, 32,
          34, 35, 36, 36, 37, 38]
If we assume all participants get full points on the next two tests (80 in total, 40 for each test), what percentage of participants can still attain the mark "A"? The function shall return the percentage in the mathematical sense, so between 0 and 1.
You can get an A if you have more than 88 points.
That's my code so far, and I don't know what to do next.
The answer should look like this:
Potential Top Marks: 89.285714%
Here is a simple solution without numpy:
sum(i + 80 >= 88 for i in points) / len(points) * 100
This returns (in Python 3):
89.28571428571429
edit: simplified thanks to h4z3s's tip.
Use this:
Without numpy:
potential = [p + 80 for p in points]
percentage = sum([1 for i in potential if i>=88]) / float(len(potential)) * 100
89.28571428571429
Using numpy:
import numpy as np
potential = [p + 80 for p in points]
percentage = sum(np.array(potential)>=88) / float(len(potential)) * 100
89.28571428571429
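A slightly shorter numpy variant (a sketch, relying on the fact that a boolean array's mean is the fraction of True values):
import numpy as np
percentage = np.mean(np.array(points) + 80 >= 88) * 100  # 89.2857...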
I'm getting a "TypeError: unsupported operand type(s) for /: 'generator' and 'int'".
The problem arises when I try to calculate the p_value; I'm not sure what I am doing wrong. Forgive me if my question is a bit vague.
import numpy as np
import random
beer = [27, 19, 20, 20, 23, 17, 21, 24, 31, 26, 28, 20, 27, 19, 25, 31, 24, 28, 24, 29, 21, 21, 18, 27, 20]
water = [21, 19, 13, 22, 15, 22, 15, 22, 20, 12, 24, 24, 21, 19, 18, 16, 23, 20]
#running a permutation test
def permutation_test():
    combined = beer + water
    random.shuffle(combined)
    # slice to create 2 groups, the first the same length as the beer test group
    split = len(beer)
    group_one, group_two = combined[:split], combined[split:]  # first 25, last 18
    return np.mean(group_one) - np.mean(group_two)

#monte carlo method to run the permutation test 100 000 times
iterate = [permutation_test() for _ in range(100000)]

#calculating effect size, standard score
effect_size = np.median(beer) - np.median(water)
standard_score = (effect_size - np.mean(iterate)) / np.std(iterate)

#calculating p-value to assess whether the observed effect size is an anomaly
p_value = np.mean(test >= effect_size for test in iterate)  # this line raises the TypeError
print(standard_score, p_value)
Your expression is a generator, not a list comprehension, so np.mean cannot average it.
Use this to solve the problem:
p_value = np.mean([(test >= effect_size) for test in iterate])
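Equivalently (a sketch, assuming iterate is the list built above), you can skip the comprehension and compare the whole array at once:
iterate = np.asarray(iterate)
p_value = np.mean(iterate >= effect_size)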
Lately I've been doing a lot of processing on 8x8 blocks of image-data.
The standard approach has been to use nested for loops to extract the blocks, e.g.
for y in xrange(0, height, 8):
    for x in xrange(0, width, 8):
        d = image_data[y:y+8, x:x+8]
        # further processing on the 8x8 block
I can't help but wonder if there is a way to vectorize this operation or another approach using numpy/scipy that I could use instead? An iterator of some kind?
A MWE¹:
#!/usr/bin/env python
import sys
import numpy as np
from scipy.fftpack import dct, idct
import scipy.misc
import matplotlib.pyplot as plt

def dctdemo(coeffs=1):
    # zig-zag order of the 64 DCT coefficients of an 8x8 block
    unzig = np.array([
         0,  1,  8, 16,  9,  2,  3, 10,
        17, 24, 32, 25, 18, 11,  4,  5,
        12, 19, 26, 33, 40, 48, 41, 34,
        27, 20, 13,  6,  7, 14, 21, 28,
        35, 42, 49, 56, 57, 50, 43, 36,
        29, 22, 15, 23, 30, 37, 44, 51,
        58, 59, 52, 45, 38, 31, 39, 46,
        53, 60, 61, 54, 47, 55, 62, 63])

    lena = scipy.misc.lena()
    width, height = lena.shape

    # reconstructed
    rec = np.zeros(lena.shape, dtype=np.int64)

    # Can this part be vectorized?
    for y in xrange(0, height, 8):
        for x in xrange(0, width, 8):
            d = lena[y:y+8, x:x+8].astype(np.float)
            D = dct(dct(d.T, norm='ortho').T, norm='ortho').reshape(64)
            Q = np.zeros(64, dtype=np.float)
            Q[unzig[:coeffs]] = D[unzig[:coeffs]]
            Q = Q.reshape([8, 8])
            q = np.round(idct(idct(Q.T, norm='ortho').T, norm='ortho'))
            rec[y:y+8, x:x+8] = q.astype(np.int64)

    plt.imshow(rec, cmap='gray')
    plt.show()

if __name__ == '__main__':
    try:
        c = int(sys.argv[1])
    except (IndexError, ValueError):
        sys.exit()
    else:
        if 1 <= c <= 64:
            dctdemo(c)
Footnotes:
¹ Actual application: https://github.com/figgis/dctdemo
There's a function view_as_windows for this in scikit-image:
http://scikit-image.org/docs/dev/api/skimage.util.html#view-as-windows
Unfortunately I will have to finish this answer another time, but you can grab the windows in a form that you can pass to dct with:
from skimage.util import view_as_windows
# your code...
d = view_as_windows(lena.astype(np.float), (8, 8)).reshape(-1, 8, 8)
# the 8x8 pixels sit on the last two axes, so transform those
# (dct(d, axis=0) would transform across windows rather than within them)
D = dct(dct(d, axis=1, norm='ortho'), axis=2, norm='ortho')
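view_as_windows also takes a step argument, so the non-overlapping blocks produced by the question's loop can be had directly (a sketch, assuming the same lena array):
# step=8 makes the 8x8 windows tile the image without overlap,
# matching the question's nested loops
blocks = view_as_windows(lena.astype(np.float), (8, 8), step=8).reshape(-1, 8, 8)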
There is a function called extract_patches in the scikit-learn feature extraction routines. You need to specify a patch_size and an extraction_step. The result is a view on your image as patches, which may overlap. The resulting array is 4D: the first two axes index the patch, and the last two index the pixels of the patch. Try this:
from sklearn.feature_extraction.image import extract_patches
patches = extract_patches(image_data, patch_size=(8, 8), extraction_step=(4, 4))
This gives (8, 8) size patches that overlap by half.
Note that up until now this uses no extra memory, because it is implemented using stride tricks. You can force a copy by reshaping
patches = patches.reshape(-1, 8, 8)
which will basically yield a list of patches.
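If you want the non-overlapping 8x8 blocks from the question instead, an extraction_step equal to the patch size should do it (a sketch under that assumption):
patches = extract_patches(image_data, patch_size=(8, 8), extraction_step=(8, 8))
blocks = patches.reshape(-1, 8, 8)  # the reshape is what forces the copy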