I have a simple, stupid Python problem. Given a graph, I'm trying to sample from a random variable whose distribution is the same as that of the degree distribution of the graph.
This seems like it should be pretty straightforward. Yet somehow I am still managing to mess it up. My code looks like this:
import numpy as np
import scipy as sp
import graph_tool.all as gt
G = gt.random_graph(500, deg_sampler=lambda: np.random.poisson(1), directed=False)
deg = gt.vertex_hist(G,"total",float_count=False)
# Extract counts and values
count = list(deg[0])
value = list(deg[1])
# Generate vector of probabilities for each node
p = [float(x)/sum(count) for x in count]
# Load into a random variable for sampling
x = sp.stats.rv_discrete(values=(value,p))
print x.rvs(1)
However, upon running this it returns an error:
Traceback (most recent call last):
File "temp.py", line 16, in <module>
x = sp.stats.rv_discrete(values=(value,p))
File "/usr/lib/python2.7/dist-packages/scipy/stats/distributions.py", line 5637, in __init__
self.pk = take(ravel(self.pk),indx, 0)
File "/usr/lib/python2.7/dist-packages/numpy/core/fromnumeric.py", line 103, in take
return take(indices, axis, out, mode)
IndexError: index out of range for array
I'm not sure why this is. If in the code above I write instead:
x = sp.stats.rv_discrete(values=(range(len(count)),p))
Then the code runs fine, but it gives a weird result: given how I've specified this distribution, a value of "0" ought to be most common. Yet this code returns "1" with high probability and never returns a "0", so something is getting shifted over somehow.
Can anyone clarify what is going on here? Any help would be greatly appreciated!
I believe the first argument for x.rvs() would be the loc arg. If you make loc=1 by calling x.rvs(1), you're adding 1 to all values.
Instead, you want
x.rvs(size=1)
As an aside, I'd recommend that you replace this:
# Extract counts and values
count = list(deg[0])
value = list(deg[1])
# Generate vector of probabilities for each node
p = [float(x)/sum(count) for x in count]
With:
count, value = deg # automatically unpacks along first axis
p = count.astype(float) / count.sum() # count is an array, so you can divide all elements at once
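As a quick sanity check of the count-to-probability step, here is a standalone sketch with made-up degree counts (`random.choices` stands in for `rv_discrete.rvs(size=n)` here; no graph_tool needed):

```python
import random
from collections import Counter

# Hypothetical degree counts: count[k] = number of vertices with degree k
count = [200, 150, 80, 40, 10]
value = [0, 1, 2, 3, 4]

# Normalize counts into probabilities, as in the question
p = [float(c) / sum(count) for c in count]

# random.choices samples values with the given weights, like rvs(size=n)
random.seed(0)
samples = Counter(random.choices(value, weights=p, k=10000))

# With no off-by-one shift, degree 0 should be the most common draw
print(samples.most_common(1)[0][0])
```

With `size=` (rather than the positional `loc`), the mode of the samples matches the largest count, as expected.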
Related
I'm currently trying to determine the frequency of this plot of position vs time:
where the time and position datasets are lists of floats. I've tried using scipy.signal.find_peaks imported as fp but when I run this code:
peaks,_ = fp(pos)
peak_times = []
for i in range(len(peaks)):
    peak_times.append(t[i])
peak_dists = [current-next for (current,next) in zip(peak_times,peak_times[1:])]
approx_freq = sum(peak_dists)/len(peak_dists)
return approx_freq
I get a type error: TypeError: only integer scalar arrays can be converted to a scalar index
What is going wrong? And how can I fix it?
I spotted a few errors in your code. Here are the suggested corrections:
peaks,_ = fp(pos)
peak_times = []
for i in peaks: # notice correction here! No range or len needed.
    peak_times.append(t[i])
peak_dists = [next-current for (current,next) in zip(peak_times,peak_times[1:])] # swapped next with current for positive result
approx_freq = 1/(sum(peak_dists)/len(peak_dists)) # take the inverse of what you did before
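For intuition, the corrected averaging step can be sketched in plain Python with made-up peak times (no scipy needed):

```python
# Made-up peak times (seconds) for an oscillation with period ~0.5 s
peak_times = [0.12, 0.61, 1.13, 1.62, 2.12]

# Consecutive differences: one period estimate per pair of neighboring peaks
peak_dists = [nxt - cur for cur, nxt in zip(peak_times, peak_times[1:])]

# Frequency is the inverse of the mean period
approx_freq = 1 / (sum(peak_dists) / len(peak_dists))
print(round(approx_freq, 2))  # ~2 Hz
```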
Good luck!
I have a similar problem to this one.
I am working on QGIS. To speed things up, I've created a small selection of my map on which I test my code. It works great. Here is the section that poses a problem later:
layer = qgis.utils.iface.activeLayer()
iter = layer.getFeatures()
dict = {}
# iterate over features
for feature in iter:
    # print feature.id()
    geom = feature.geometry()
    coord = geom.asPolyline()
    ### GET FIRST AND LAST POINTS OF POLY + N ORIENTATION ###
    # Get Objective Orientation
    d = QgsDistanceArea()
    d.setEllipsoidalMode(True)
    points = geom.asPolyline()
    # second way to get Endpoints
    first = points[0]
    last = points[-1]
    r = d.bearing(first, last)
    b = "NorthOrientation= %s" % (math.degrees(r))
    # Assemble Features
    dict[feature.id()] = [first, last]

### KEY = INTERSECTION, VALUES = COMMONPOINTS ###
dictionary = {}
a = dict
for i in a:
    for j in a:
        c = set(a[i]).intersection(set(a[j]))
        if len(c) == 1:
            d = set(a[i]).difference(c)
            c = list(c)[0]
            value = list(d)[0]  # This is where the problem is
            if c in dictionary and value not in dictionary[c]:
                dictionary[c].append(value)
            elif c not in dictionary:
                dictionary.setdefault(c, [])
                dictionary[c].append(value)
            else:
                pass
print dictionary
This code works for the 10 polylines of my small selection (which I've stored in a separate shapefile). But when I try to run it through the 40,000 lines of my original database, I get the following error:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "c:/users/16116/appdata/local/temp/tmp96wd24.py", line 47, in <module>
value = list(d)[0]
IndexError: list index out of range
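Here is a minimal standalone sketch (with made-up endpoints, not my data) showing one way that line can produce this IndexError, e.g. if some polyline were closed so that its first and last points coincide:

```python
# Made-up endpoint pairs; feature 2 is a closed polyline (first == last)
a = {
    1: [(0, 0), (1, 1)],
    2: [(1, 1), (1, 1)],
}

c = set(a[2]).intersection(set(a[1]))   # {(1, 1)}, so len(c) == 1
d = set(a[2]).difference(c)             # set(a[2]) is just {(1, 1)}, so d is empty
try:
    value = list(d)[0]
except IndexError as err:
    print(err)  # list index out of range
```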
A few things:
This code stems from a first question that you can find here. I'm still pretty new to python so to be honest I have a hard time understanding how this exact part of the code works, but I know it does (at least for the small dataset).
The small "test selection"'s structure is identical to the entire database. Only the length has changed.
If anyone has had the same experience or knows why this problem occurs, I would be very grateful for any indications.
I understand that there are similar questions to this here, here, and here. The first one addresses 1D lists, the second is great except it doesn't seem to work, and the third is close, but I still don't quite understand my issue.
Here is what I am trying to do. I need to create a 2D list (a 2D array in Java and C++, which I am much more familiar with) filled with 0's. It needs to be 20 across and 15 down.
Here is what I have tried:
self.grid = [[0 for x in range(GRID_COLUMN_SIZE)] for y in range(GRID_ROW_SIZE)] # where GRID_ROW_SIZE = 15, GRID_COLUMN_SIZE = 20
Note, I tried with the two constants switched (COLUMN first, then ROW), and it broke slightly later. In addition, when I print the 2D list out, it has the wrong dimensions (15 across and 20 down).
Here is my later use of self.grid. Without getting too deep, I am iterating through all the values of the list (grid) and getting the surrounding points.
def populatePaths(self):
    for row in range(len(self.grid)):
        for column in range(len(self.grid[row])):
            if self.isPointAccessible(column, row):
                self.addPaths(column, row)

def addPaths(self, x, y):
    key = Point(x, y)
    print "Each: %s" % (key.toString())
    points = key.getSurroundingPoints()
    self.removeBarriersFromPath(points)
    self.paths[key] = points  # a map from Points to lists of surrounding Points
Basically, I remove points along the path that can't be reached:
def removeBarriersFromPath(self, path):
    for point in list(path):
        print "Surrounding %s" % (point.toString())
        if not self.isPointAccessible(point.x, point.y):
            path.remove(point)
    return path
self.isPointAccessible() is trivial, but this is where it breaks. It checks to see if the value at the (x,y) location is 0: return self.grid[x][y] == 0
I added these print statements (point.toString() returns (x,y)) to show me the points as they happen, and I am able to iterate until x==14, but it breaks at x==15.
I suspect that I am getting the column/row order in the looping incorrect, but I'm not sure when/how.
Let me know if I didn't explain something clearly enough.
Edit Here is the traceback:
Traceback (most recent call last):
File "/home/nu/catkin_ws/src/apriltags_intrude_detector/scripts/sphero_intrude_gui.py", line 70, in start
self.populatePaths()
File "/home/nu/catkin_ws/src/apriltags_intrude_detector/scripts/sphero_intrude_gui.py", line 156, in populatePaths
self.addPaths(column, row)
File "/home/nu/catkin_ws/src/apriltags_intrude_detector/scripts/sphero_intrude_gui.py", line 162, in addPaths
self.removeBarriersFromPath(points)
File "/home/nu/catkin_ws/src/apriltags_intrude_detector/scripts/sphero_intrude_gui.py", line 168, in removeBarriersFromPath
if not self.isPointAccessible(point.x, point.y):
File "/home/nu/catkin_ws/src/apriltags_intrude_detector/scripts/sphero_intrude_gui.py", line 173, in isPointAccessible
return self.grid[x][y] == 0
IndexError: list index out of range
You did not post the whole source for isPointAccessible but from the error message it looks like your return line must be:
return self.grid[y][x] == 0
since y denotes the row number and x is the column.
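To make the ordering concrete, here is a minimal sketch using the dimensions from the question (the corner point is made up):

```python
GRID_ROW_SIZE = 15     # rows, indexed by y
GRID_COLUMN_SIZE = 20  # columns, indexed by x

# The outer comprehension builds the rows, so the first subscript picks
# a row (y) and the second picks a column (x)
grid = [[0 for x in range(GRID_COLUMN_SIZE)] for y in range(GRID_ROW_SIZE)]

x, y = 19, 14        # bottom-right corner of a 20-across, 15-down grid
print(grid[y][x])    # in range: prints 0
# grid[x][y] would raise IndexError: there are only 15 rows, so grid[19] fails
```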
I've been working on a ><> (Fish) interpreter and am stuck on an error I'm getting. The problematic code seems to be here:
import sys
from random import randint
file = sys.argv[1]
code = open(file)
program = code.read()
print(str(program))
stdin = sys.argv[2]
prgmlist = program.splitlines()
length = len(prgmlist)
prgm = {}
for x in range(0,length-1):
    prgm[x+1] = list(prgmlist[x])
The goal here was to take the code and put it into a sort of grid, so that each command could be taken and computed separately. By grid, I mean a map to a list:
{line1: ["code", "code", "code"],
 line2: ["code", "code", "code"],
 line3: ...}
and so on.
However, when I try to retrieve a command using cmd = prgm[y][x] it gives me KeyError: 0.
Any help is appreciated.
Here's a traceback:
Traceback (most recent call last):
File "/Users/abest/Documents/Python/><>_Interpreter.py", line 270, in <module>
cmd = prgm[cmdy][cmdx]
KeyError: 0
And a pastebin of the entire code.
The input is the hello world program from the wiki page:
!v"hello, world"r!
>l?!;o
A few issues -
You are not considering the last line, since your range is for x in range(0,length-1): and the stop argument of range is exclusive, so x never reaches length-1. You actually do not need to get the length or use range; you can simply use for i, x in enumerate(prgmlist):. enumerate() returns the index as well as the current element on each iteration.
for i, x in enumerate(prgmlist, 1):
    prgm[i] = list(x)
Secondly, from your actual code it seems like you are defining cmdx initially as 0, but in the for loop (as given above) the dictionary keys start from 1. So you should define it starting at 1. Example -
stacks, str1, str2, cmdx, cmdy, face, register, cmd = {"now":[]}, 0, 0, 1, 0, "E", 0, None
And you should start cmdy from 0. It seems like you had the two of them reversed.
You'll want to use something like
cmd = prgm[x][y]
the first part, prgm[x], accesses the list that is the value for the x key in the dictionary; then [y] pulls the yth element from that list.
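With a couple of made-up program lines, the lookup order can be sketched like this:

```python
# Made-up ><> source lines
prgmlist = ['!v"hello, world"r!', '>l?!;o']

# Build the 1-based dict of lines described above
prgm = {}
for i, line in enumerate(prgmlist, 1):
    prgm[i] = list(line)

x, y = 2, 0          # x: line number (dict key), y: position within the line
print(prgm[x][y])    # first character of line 2: '>'
```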
I have a problem which people better versed than me in FEniCS can probably solve quickly, and I'd appreciate it very much. I am trying to define a spatially dependent elasticity tensor (C_ijkl). After assembling the tensor, plotting a particular component of it (say C_1100) with the FEniCS plot command works, but if I try to evaluate it at some point within the domain, I get an error. The code is:
import numpy as np
from dolfin import *  # assuming FEniCS/DOLFIN, which provides Mesh, FunctionSpace, etc.

mesh = Mesh("geometry.xml")
cd = MeshFunction('size_t', mesh, "geometry_physical_region.xml")

def readMP2():
    with open('Material.txt', 'r') as f:
        N = ([int(x) for x in f.readline().split()])[0]
        rhoL = []
        for i, line in enumerate(f):
            if i == N-1:
                break
            rhoL.append(([float(x) for x in line.split()])[0])
        rhoL.append(([float(x) for x in line.split()])[0])
        rho = np.asarray(rhoL)
        lmL = []
        for i, line in enumerate(f):
            lmL.append(([float(x) for x in line.split()])[:2])
            if i == N-1:
                break
        lm = np.asarray(lmL)
    return (rho, lm)

r, lm = readMP2()
V0 = FunctionSpace(mesh, 'DG', 0)
M0 = TensorFunctionSpace(mesh, 'DG', 0, shape=(2,2,2,2))
rho, lam, mu = Function(V0), Function(V0), Function(V0)
C = Function(M0)
i = Index()
j = Index()
k = Index()
l = Index()
delta = Identity(2)
rho.vector()[:] = np.choose(np.asarray(cd.array(), dtype=np.int32), r)
lam.vector()[:] = np.choose(np.asarray(cd.array(), dtype=np.int32), lm[:,0])
mu.vector()[:] = np.choose(np.asarray(cd.array(), dtype=np.int32), lm[:,1])
C = as_tensor((lam*(delta[i,j]*delta[k,l]) + mu*(delta[i,k]*delta[j,l] + delta[i,l]*delta[j,k])), [i,j,k,l])
After this the following works:
plot(C[1,1,0,0])
interactive()
But if I try to do the following:
C1=C[0,0,0,0]
print C1(.0001,.0001)
then I get the following error:
ufl.log.UFLException: Expecting dim to match the geometric dimension, got dim=1 and gdim=2.
I feel like I am missing something rather trivial. Any light on this would be much appreciated.