How to find the negative weight between two adjacent nodes in python - python

Hi, I have a big text file with a format like this:
1 3 1
2 3 -1
5 7 1
6 1 -1
3 2 -1
The first column is the starting node, the second column the ending node, and the third column shows the sign between the two nodes. So I have positive and negative signs.
I'm reading the graph with the code below:
import networkx as nx

G = nx.Graph()
G = nx.read_edgelist('network.txt', delimiter='\t', nodetype=int, data=(('weight', int),))
print(nx.info(G))
I also found a function to find the neighbors of a specific node:
list1 = list(G.neighbors(1))
So I have a list with the adjacent nodes of node 1. How can I find the sign between node 1 and each adjacent node? (For example, the edge between 1-3 has sign 1, the edge 1-6 has sign -1, etc.)

An example for node 1:
n_from = 1
for n_to in G.neighbors(n_from):
    sign = G[n_from][n_to]['weight']
    print('edge from {} to {} has sign {}'.format(
        n_from, n_to, sign))
which prints, for the example input you gave:
edge from 1 to 3 has sign 1
edge from 1 to 6 has sign -1
A similar approach, treating G[n_from] as a dict:
n_from = 1
for n_to, e_data in G[n_from].items():
    sign = e_data['weight']
    # then print
You can alternatively use Graph.get_edge_data, like so:
e_data = G.get_edge_data(n_from, n_to)
sign = e_data.get('weight')
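If you want the sign of every edge at once rather than per node, a minimal sketch along the same lines (assuming the same tab-delimited network.txt as above) is to iterate over G.edges(data='weight'):
import networkx as nx

# Read the signed edge list; each edge stores its sign in the 'weight' attribute.
G = nx.read_edgelist('network.txt', delimiter='\t', nodetype=int,
                     data=(('weight', int),))

# data='weight' yields (u, v, sign) tuples for every edge in the graph.
for u, v, sign in G.edges(data='weight'):
    print('edge from {} to {} has sign {}'.format(u, v, sign))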

Related

Truth table with Boolean Functions

I am trying to generate a Truth Table using PANDAS in python.
I have been given a Boolean Network with 3 external nodes (U1,U2,U3) and 6 internal nodes (v1,v2,v3,v4,v5,v6).
I have created a table with all the possible combinations of the 3 external nodes which are 2^3 = 8.
import pandas as pd
import itertools
in_comb = list(itertools.product([0,1], repeat = 3))
df = pd.DataFrame(in_comb)
df.columns = ['U1','U2','U3']
df.index += 1
   U1  U2  U3
1   0   0   0
2   0   0   1
3   0   1   0
4   0   1   1
5   1   0   0
6   1   0   1
7   1   1   0
8   1   1   1
I have also created the same kind of table with all the possible combinations of the 6 internal nodes, which are 2^6 = 64 combinations.
The functions for each node were also given:
v1(t+1) = U1(t)
v2(t+1) = v1(t) and U2(t)
v3(t+1) = v2(t) and v5(t)
v5(t+1) = not U3(t)
v6(t+1) = v5(t) or v3(t)
The truth table has to be built with pandas, and it has to show the result for every possible combination.
For example.
         v1  v2  v3  v4  v5  v6
0 0 0     0   0   0   0   1   0
0 0 1     0   0   0   0   0   0
0 1 0     0   0   0   0   1   0
...
The table above is an example of how the end product should look, where [0 0 0] is the first combination of the external nodes.
I am confused as to how to compute the functions of each node and how to filter the data to end up with a new table like this one.
Here I attach an image of the problem I want to solve:
What you seem to have missed is the fact that you don't only have 3 inputs to your network, as the "old state" is also considered an input - that's what a feedback combinational network does, it turns the old state + input into new state (and often output).
This means that you have 3+6 inputs, for 2^9=512 combinations. Not very easy to understand when printed, but still possible. I modified your code to print this (beware that I'm quite new to pandas, so this code can definitely be improved)
import pandas as pd
import itertools

# list of (u, v) pairs (3 and 6 elements)
# uses bools instead of ints
inputs = list((row[0:3], row[3:]) for row in itertools.product([False, True], repeat=9))

def new_state(u, v):
    # implement the internal nodes
    return (
        u[0],
        v[0] and u[1],
        v[1] and v[4],
        v[2],
        not u[2],
        v[4] or v[2]
    )

new_states = list(new_state(u, v) for u, v in inputs)

# unzip inputs to (u, v), add new_states
raw_rows = zip(*zip(*inputs), new_states)

def format_boolvec(v):
    """Format a tuple of bools like (False, False, True, False) into a string like "0010" """
    return "".join('1' if b else '0' for b in v)

formatted_rows = list(map(lambda row: list(map(format_boolvec, row)), raw_rows))

df = pd.DataFrame(formatted_rows)
df.columns = ['U', "v(t)", "v(t+1)"]
df.index += 1
df
The heart of it is the function new_state that takes the (u, v) pair of input & old state and produces the resulting new state. It's a direct translation of your specification.
I modified your itertools.product line to use bools, produce length-9 results, and split them into 3+6 length tuples. To still print in your format, I added the format_boolvec(v) function. Other than that, it should be very easy to follow, but feel free to comment if you need more explanation.
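If you only want to see the rows for one particular old state, you can filter the DataFrame built above; a small sketch (the state strings "000000" and "010" are just example values):
# Show only the rows whose old internal state is all zeros.
print(df[df["v(t)"] == "000000"])

# Or look up the new state reached from one old state and one input.
row = df[(df["v(t)"] == "000000") & (df["U"] == "010")]
print(row["v(t+1)"].iloc[0])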
To find an input sequence from a given start state to a given end state, you could do it yourself by hand, but it's tedious. I recommend using a graph algorithm, which is easy to implement since we also know the length of the desired path, so we don't need any fancy algorithms like Bellman-Ford or Dijkstra's - we just need to generate all length=3 paths and filter for the endpoint.
# to find desired inputs
# treat each state as a node in a graph
# (think of visual graph transition diagrams)
# and add edges between them labeled with the inputs
# find all l=3 paths, and check the end nodes
nodes = {format_boolvec(prod): {} for prod in itertools.product([False, True], repeat=6)}
for index, row in df.iterrows():
    nodes[row['v(t)']][row['U']] = row['v(t+1)']

# we now built the graph, only need to find a path from start state to end state
def prefix_paths(prefix, paths):
    # aux helper function for all_length_n_paths
    for path, endp in paths:
        yield ([prefix] + path, endp)

def all_length_n_paths(graph, start_node, n):
    """Return all length n paths from a given starting point
    Yield tuples (path, endpoint) where path is a list of strings of the inputs, and endpoint is the end of the path.
    Uses internal recursion to generate paths"""
    if n == 0:
        yield ([], start_node)
        return
    for inp, nextstate in graph[start_node].items():
        yield from prefix_paths(inp, all_length_n_paths(graph, nextstate, n-1))

# just iterate over all length=3 paths starting at 101100 and print it if it ends up at 011001
for path, end in all_length_n_paths(nodes, "101100", 3):
    if end == "011001":
        print(path)
This code should also be easy to follow, except maybe for the iterator syntax.
The result is not just one, but 3 different paths:
['100', '110', '011']
['101', '110', '011']
['111', '110', '011']

How to count the different paths in a graph given a source node and a path length

I have a simple graph with 4 nodes A,B,C,D as well as the following edges:
[A,B]
[B,D]
[B,C]
I want to find paths that start at the node C given a certain length n. For example:
for n = 1 I will only have [C] as a possible path. Result is 1
for n = 2 we only have [C,B]. Result is 1
for n = 3 we have [C,B,C] , [C,B,D], [C,B,A]. Result is 3
etc.
I have written the following (python) code:
dg = {'A': ['B'],
      'B': ['C', 'D', 'A'],
      'D': ['B'],
      'C': ['B']}
beg = ['C']

def makePath(n):
    count = 0
    curArr = beg
    for i in range(n):
        count = len(curArr)
        tmp = []
        for i in curArr:
            tmp.extend(dg[i])
        curArr = tmp
    return count
However, it gets extremely slow above n=12. Is there a better algorithm to solve this and, more importantly, one that can be generalized to any undirected graph (i.e. with up to 20 nodes)?
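A sketch of one possible speedup, using the same level-by-level idea but tracking only the number of paths ending at each node instead of materializing every path, so each step costs O(edges) regardless of how many paths exist:
from collections import defaultdict

def count_paths(graph, start, n):
    # counts[v] = number of paths of the current length that end at node v
    counts = {start: 1}
    for _ in range(n - 1):  # a path of length 1 is just the start node itself
        nxt = defaultdict(int)
        for node, c in counts.items():
            for neigh in graph[node]:
                nxt[neigh] += c
        counts = nxt
    return sum(counts.values())

dg = {'A': ['B'], 'B': ['C', 'D', 'A'], 'D': ['B'], 'C': ['B']}
print(count_paths(dg, 'C', 3))  # 3: C-B-C, C-B-D, C-B-A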

Kosaraju's Algorithm for SCCs, non-recursive

I have an implementation of Kosaraju's algorithm for finding SCCs in Python. The code below contains a recursive (fine on the small test cases) version and a non-recursive one (which I ultimately need because of the size of the real dataset).
I have run both the recursive and non-recursive versions on a few test datasets and get the correct answer. However, running them on the much larger dataset that I ultimately need to use produces the wrong result. Going through the real data by hand is not really an option because it contains nearly a million nodes.
My problem is that I don't know how to proceed from here. My suspicion is that I either missed a certain kind of graph configuration in my test cases, or that I have a more fundamental misunderstanding about how this algorithm is supposed to work.
#!/usr/bin/env python3

import heapq


class Node():
    """A class to represent nodes in a DirectedGraph. It has attributes for
    performing DFS."""
    def __init__(self, i):
        self.id = i
        self.edges = []
        self.rev_edges = []
        self.explored = False
        self.fin_time = 0
        self.leader = 0

    def add_edge(self, edge_id):
        self.edges.append(edge_id)

    def add_rev_edge(self, edge_id):
        self.rev_edges.append(edge_id)

    def mark_explored(self):
        self.explored = True

    def set_leader(self, leader_id):
        self.leader = leader_id

    def set_fin_time(self, fin_time):
        self.fin_time = fin_time


class DirectedGraph():
    """A class to represent directed graphs via the adjacency list approach.
    Each dictionary entry is a Node."""
    def __init__(self, length, list_of_edges):
        self.nodes = {}
        self.nodes_by_fin_time = {}
        self.length = length
        self.fin_time = 1  # counter for the finishing time
        self.leader_count = 0  # counter for the size of leader nodes
        self.scc_heapq = []  # heapq to store the SCCs by size
        self.sccs_computed = False
        for n in range(1, length + 1):
            self.nodes[str(n)] = Node(str(n))
        for n in list_of_edges:
            ns = n[0].split(' ')
            self.nodes[ns[0]].add_edge(ns[1])
            self.nodes[ns[1]].add_rev_edge(ns[0])

    def n_largest_sccs(self, n):
        if not self.sccs_computed:
            self.compute_sccs()
        return heapq.nlargest(n, self.scc_heapq)

    def compute_sccs(self):
        """First compute the finishing times and the resulting order of nodes
        via a DFS loop. Second use that new order to compute the SCCs and order
        them by their size."""
        # Go through the given graph in reverse order, computing the finishing
        # times of each node, and create a second graph that uses the finishing
        # times as the IDs.
        i = self.length
        while i > 0:
            node = self.nodes[str(i)]
            if not node.explored:
                self.dfs_fin_times(str(i))
            i -= 1
        # Populate the edges of the nodes_by_fin_time
        for n in self.nodes.values():
            for e in n.edges:
                e_head_fin_time = self.nodes[e].fin_time
                self.nodes_by_fin_time[n.fin_time].add_edge(e_head_fin_time)
        # Use the nodes ordered by finishing times to calculate the SCCs.
        i = self.length
        while i > 0:
            self.leader_count = 0
            node = self.nodes_by_fin_time[str(i)]
            if not node.explored:
                self.dfs_leaders(str(i))
                heapq.heappush(self.scc_heapq, (self.leader_count, node.id))
            i -= 1
        self.sccs_computed = True

    def dfs_fin_times(self, start_node_id):
        stack = [self.nodes[start_node_id]]
        # Perform depth-first search along the reversed edges of a directed
        # graph. While doing this populate the finishing times of the nodes
        # and create a new graph from those nodes that uses the finishing times
        # for indexing instead of the original IDs.
        while len(stack) > 0:
            curr_node = stack[-1]
            explored_rev_edges = 0
            curr_node.mark_explored()
            for e in curr_node.rev_edges:
                rev_edge_head = self.nodes[e]
                # If the head of the rev_edge has already been explored, ignore
                if rev_edge_head.explored:
                    explored_rev_edges += 1
                    continue
                else:
                    stack.append(rev_edge_head)
            # If the current node has no valid, unexplored outgoing reverse
            # edges, pop it from the stack, populate the fin time, and add it
            # to the new graph.
            if len(curr_node.rev_edges) - explored_rev_edges == 0:
                sink_node = stack.pop()
                # The fin time is 0 if that node has not received a fin time.
                # Prevents dealing with the same node twice here.
                if sink_node and sink_node.fin_time == 0:
                    sink_node.set_fin_time(str(self.fin_time))
                    self.nodes_by_fin_time[str(self.fin_time)] = Node(str(self.fin_time))
                    self.fin_time += 1

    def dfs_leaders(self, start_node_id):
        stack = [self.nodes_by_fin_time[start_node_id]]
        while len(stack) > 0:
            curr_node = stack.pop()
            curr_node.mark_explored()
            self.leader_count += 1
            for e in curr_node.edges:
                if not self.nodes_by_fin_time[e].explored:
                    stack.append(self.nodes_by_fin_time[e])

    ###### Recursive versions below ###################################
    def dfs_fin_times_rec(self, start_node_id):
        curr_node = self.nodes[start_node_id]
        curr_node.mark_explored()
        for e in curr_node.rev_edges:
            if not self.nodes[e].explored:
                self.dfs_fin_times_rec(e)
        curr_node.set_fin_time(str(self.fin_time))
        self.nodes_by_fin_time[str(self.fin_time)] = Node(str(self.fin_time))
        self.fin_time += 1

    def dfs_leaders_rec(self, start_node_id):
        curr_node = self.nodes_by_fin_time[start_node_id]
        curr_node.mark_explored()
        for e in curr_node.edges:
            if not self.nodes_by_fin_time[e].explored:
                self.dfs_leaders_rec(e)
        self.leader_count += 1
To run:
#!/usr/bin/env python3
import utils
from graphs import scc_computation
# data = utils.load_tab_delimited_file('data/SCC.txt')
data = utils.load_tab_delimited_file('data/SCC_5.txt')
# g = scc_computation.DirectedGraph(875714, data)
g = scc_computation.DirectedGraph(11, data)
g.compute_sccs()
# for e, v in g.nodes.items():
# print(e, v.fin_time)
# for e, v in g.nodes_by_fin_time.items():
# print(e, v.edges)
print(g.n_largest_sccs(20))
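The helper utils.load_tab_delimited_file is not shown in the question; judging from DirectedGraph.__init__, which calls n[0].split(' ') on each entry, it presumably returns one small list per line, so a hypothetical stand-in could look like this:
# Hypothetical stand-in for utils.load_tab_delimited_file (not part of the question).
# A line such as "1 5" becomes ['1 5'], which matches n[0].split(' ') above.
def load_tab_delimited_file(path):
    with open(path) as f:
        return [line.rstrip('\n').split('\t') for line in f if line.strip()]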
Most complex test case (SCC_5.txt):
1 5
1 4
2 3
2 11
2 6
3 7
4 2
4 8
4 10
5 7
5 5
5 3
6 8
6 11
7 9
8 2
8 8
9 3
10 1
11 9
11 6
Drawing of that test case: https://imgur.com/a/LA3ObpN
This produces 4 SCCs:
Bottom: Size 4, nodes 2, 8, 6, 11
Left: Size 3, nodes 1, 10, 4
Top: Size 1, node 5
Right: Size 3, nodes 7, 3, 9
Ok, I figured out the missing cases. The algorithm wasn't performing correctly on very strongly connected graphs and duplicated edges. Here is an adjusted version of the test case I posted above with a duplicated edge and more edges to turn the whole graph into one big SCC.
1 5
1 4
2 3
2 6
2 11
3 2
3 7
4 2
4 8
4 10
5 1
5 3
5 5
5 7
6 8
7 9
8 2
8 2
8 4
8 8
9 3
10 1
11 9
11 6
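A quick cross-check sketch (assuming networkx is available and the adjusted edge list above is saved to a file; the file name here is made up) that this adjusted graph really collapses into a single SCC:
import networkx as nx

# 'SCC_5_adjusted.txt' is a made-up name for the adjusted edge list above.
G = nx.read_edgelist('SCC_5_adjusted.txt', create_using=nx.DiGraph, nodetype=int)

# networkx yields one set of nodes per strongly connected component;
# for this adjusted graph it should be a single set containing all 11 nodes.
sccs = list(nx.strongly_connected_components(G))
print(len(sccs), sorted(sccs[0]))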

Incorrect output dictionary from user's input

I need output to be in the form
{0: {1:11,2:13}, 1: {0:11,3:14}}
But it comes out to
{0: {1:['11'],2:['13']}, 1: {0:['11'],3:['14']}}
using this
graph = {}
N, C = map(int, raw_input().split())
# print N, C
for x in range(0, C):
    i, j, w = raw_input().split()
    graph.setdefault(int(i), {}).setdefault(int(j), []).append(w)
print graph
on this INPUT:
1st line: ignore N=4; C=4 is the number of edge lines that follow.
Remaining lines: i, j are vertices, w is the edge weight.
4 4
0 1 11
0 2 13
1 0 11
1 3 14
You are setting lists as values inside your nested dictionary in the following line -
graph.setdefault(int(i), {}).setdefault(int(j),[]).append(w)
This is why you are getting values inside lists. If you are 100% sure that the key/value pairs inside the nested dictionary will always be unique, then you can simply set the value for the key. Example -
graph.setdefault(int(i), {})[int(j)] = int(w)
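An alternative sketch of the same fix using collections.defaultdict (still Python 2, matching the code above), which avoids the repeated setdefault calls:
from collections import defaultdict

graph = defaultdict(dict)  # missing outer keys get an empty inner dict automatically
N, C = map(int, raw_input().split())
for _ in range(C):
    i, j, w = raw_input().split()
    graph[int(i)][int(j)] = int(w)
print dict(graph)  # e.g. {0: {1: 11, 2: 13}, 1: {0: 11, 3: 14}}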

Implement Strongly connected Components for Integers in file

I am working on implementing the strongly connected components program from an input file of numbers. I know the algorithm for how to do this, but I am having a hard time implementing it in Python.
STRONGLY-CONNECTED-COMPONENTS(G)
1. run DFS on G to compute finish times
2. compute G'
3. run DFS on G', but when selecting which node to visit do so
in order of decreasing finish times (as computed in step 1)
4. output the vertices of each tree in the depth-first forest
of step 3 as a separate strongly connected component
The file looks like this:
5 5
1 2
2 4
2 3
3 4
4 5
The first line is the number of nodes and edges. The rest of the lines are two integers u and v separated by a space, which means a directed edge from node u to node v. The output is to be the strongly connected components and the number of these components.
DFS(G)
1 for each vertex u in G.V
2 u.color = WHITE
3 u.π = NIL
4 time = 0
5 for each vertex u in G.V
6 if u.color == WHITE
7 DFS-VISIT(G, u)
DFS-VISIT(G, u)
1 time = time + 1 // white vertex u has just been discovered
2 u.d = time
3 u.color = GRAY
4 for each v in G.adj[u]
5 if v.color == WHITE
6 v.π = u
7 DFS-VISIT(G, v)
8 u.color = BLACK // blacken u; it is finished
9 time = time + 1
10 u.f = time
In the above algorithm, how should I traverse the reverse graph to find the SCCs?
Here it is, implemented in Python.
Please notice that I construct G and G' at the same time. My DFS is also modified: the visited array stores which component each node is in. Also, the DFS receives a sequence argument, which is the order in which the nodes will be tested. In the first DFS we pass xrange(n), but the second time we pass the reversed(order) from the first execution.
The program will output something like:
3
[1, 1, 1, 2, 3]
In that output, we have 3 strongly connected components, with the first 3 nodes in a single component and each of the remaining two nodes in its own component.
def DFSvisit(G, v, visited, order, component):
    visited[v] = component
    for w in G[v]:
        if not visited[w]:
            DFSvisit(G, w, visited, order, component)
    order.append(v)

def DFS(G, sequence, visited, order):
    components = 0
    for v in sequence:
        if not visited[v]:
            components += 1
            DFSvisit(G, v, visited, order, components)

n, m = (int(i) for i in raw_input().strip().split())
G = [[] for i in xrange(n)]
Gt = [[] for i in xrange(n)]
for i in xrange(m):
    a, b = (int(i) for i in raw_input().strip().split())
    G[a-1].append(b-1)
    Gt[b-1].append(a-1)
order = []
components = [0]*n
DFS(G, xrange(n), [0]*n, order)
DFS(Gt, reversed(order), components, [])
print max(components)
print components
class graphSCC:
    def __init__(self, graphlist):
        self.graphlist = graphlist
        self.visitednode = {}
        self.SCC_dict = {}
        self.reversegraph = {}

    def build_reversegraph(self):
        # Renamed from reversegraph() so the method does not clash with the
        # self.reversegraph dictionary attribute.
        for edge in self.graphlist:
            line = edge.split("\t")
            u, v = line[0], line[1].strip("\r\n")
            # reverse the edge u -> v into v -> u
            self.reversegraph.setdefault(v, []).append(u)
            self.reversegraph.setdefault(u, [])
        return self.reversegraph

    def dfs(self):
        count = 0
        for x in self.reversegraph.keys():
            self.visitednode[x] = 0
        for x in self.reversegraph.keys():
            if self.visitednode[x] == 0:
                count += 1
                self.explore(x, count)

    def explore(self, node, count):
        self.visitednode[node] = 1
        for val in self.reversegraph[node]:
            if self.visitednode[val] == 0:
                self.explore(val, count)
        self.SCC_dict.setdefault(count, []).append(node)

length = 0
node = 0
for x in graph.SCC_dict.keys():
    if length < len(graph.SCC_dict[x]):
        length = len(graph.SCC_dict[x])
        node = x
length is the required answer
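A hypothetical driver for the class above (the file name and the readlines-based construction are assumptions; the class expects tab-delimited edge lines because of the split("\t") call):
# Hypothetical usage; 'edges.txt' holding tab-delimited "u<TAB>v" lines is an assumption.
with open('edges.txt') as f:
    graph = graphSCC(f.readlines())
graph.build_reversegraph()
graph.dfs()
# ...after which the loop above computes length, the size of the largest component.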
