python-ldap: Modify phone number

I would like to change mobile phone numbers in an Active Directory with a Python (python-ldap) script.
This is the code I tried to use:
# import needed modules
import ldap
import ldap.modlist as modlist
# Open a connection
l = ldap.initialize("ldap://host/", trace_level=3)
# Bind/authenticate with a user with appropriate rights to modify objects
l.simple_bind_s("user#domain", "pw")
# The dn of our existing entry/object
dn = "CN=common name,OU=Users,OU=x,DC=y,DC=z,DC=e"
# Some place-holders for old and new values
name = 'mobile'
nr1 = '+4712781271232'
nr2 = '+9812391282822'
old = {name: nr1}
new = {name: nr2}
# Convert place-holders for the modify operation using the modlist module
ldif = modlist.modifyModlist(old, new)
# Do the actual modification
l.modify_s(dn, ldif)
# It's nice to the server to disconnect and free resources when done
l.unbind_s()
Unfortunately I get the following error:
ldap.UNWILLING_TO_PERFORM: {'info': u'00000057: LdapErr:
DSID-0C090FC7, comment: Error in attribute conversion operation, data
0, v4563', 'desc': u'Server is unwilling to perform'}
I am able to delete the entry by leaving old empty, but when I try to set it, I get the following:
LDAPError - TYPE_OR_VALUE_EXISTS: {'info': u'00002083: AtrErr:
DSID-031519F7, #5:\n\t0: 00002083: DSID-031519F7, problem 1006
(ATT_OR_VALUE_EXISTS), data 0, Att 150029 (mobile):len 2\n\t1:
00002083: DSID-031519F7, problem 1006 (ATT_OR_VALUE_EXISTS), data 0,
Att 150029 (mobile):len 2\n\t2: 00002083: DSID-031519F7, problem 1006
(ATT_OR_VALUE_EXISTS), data 0, Att 150029 (mobile):len 2\n\t3:
00002083: DSID-031519F7, problem 1006 (ATT_OR_VALUE_EXISTS), data 0,
Att 150029 (mobile):len 2\n\t4: 00002083: DSID-031519F7, problem 1006
(ATT_OR_VALUE_EXISTS), data 0, Att 150029 (mobile):len 2\n', 'desc':
u'Type or value exists'}
Using the command-line tool ldapmodify I was able to perform these two operations:
dn:CN=common name,OU=Users,OU=x,DC=y,DC=z,DC=e
changetype: modify
add: mobile
mobile: +1 2345 6789

dn:CN=common name,OU=Users,OU=x,DC=y,DC=z,DC=e
changetype: modify
delete: mobile
mobile: +1 2345 6789
But I was unable to do this:
dn:CN=common name,OU=Users,OU=x,DC=y,DC=z,DC=e
changetype: modify
replace: mobile
mobile: +1 2345 6789
mobile: +4 567 89012345
It fails with the following error:
ldap_modify: Constraint violation (19)
additional info: 00002081: AtrErr: DSID-03151907, #1:
0: 00002081: DSID-03151907, problem 1005 (CONSTRAINT_ATT_TYPE), data 0, Att 150029 (mobile)
I have been trying for some time now and would really appreciate some help.

Never mind the question; I solved it myself. I replaced:
nr1 = '+4712781271232'
nr2 = '+9812391282822'
old = {name:nr1}
new = {name:nr2}
With:
old = {'mobile':["+4712781271232"]}
new = {'mobile':["+9812391282822"]}
Brackets do the trick ;)
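For reference, a minimal sketch of the same change without modlist, using a direct MOD_REPLACE (assuming the same placeholder DN and credentials; note that mobile is single-valued in the AD schema, which is also why the ldapmodify replace with two values above fails, and that python-ldap on Python 3 expects each value as a list of bytes):

import ldap

l = ldap.initialize("ldap://host/")
l.simple_bind_s("user#domain", "pw")

dn = "CN=common name,OU=Users,OU=x,DC=y,DC=z,DC=e"

# Values must be wrapped in a list; on Python 3 they must also be bytes.
l.modify_s(dn, [(ldap.MOD_REPLACE, 'mobile', [b'+9812391282822'])])

l.unbind_s()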

Related

LLDP module in Scapy produces malformed packets

I am using the scapy.contrib.lldp library to craft an LLDP packet, with the following fields:
Chassis ID (l1)
Port ID (l2)
Time to live (l3)
System name (l4)
Generic Organisation Specific - custom data 1 (l5)
Generic Organisation Specific - custom data 2 (l6)
End of LLDP (l7)
The options for each field come from a CSV file imported as a DataFrame, and I use the library class for each field.
The problem I have is that after I craft the packet (p=ethernet/l1/l2/l3/l4/l5/l6/l7), the l6 field has twice the number of bytes it is supposed to have, based on the read data. I also tried to set a fixed value, but the problem persists.
Below is a sample of the packet in Wireshark (OK packet and malformed packet), the DataFrame, and the relevant code.
Layer | Field | Value
Ethernet | dst | 01:23:00:00:00:01
Ethernet | src | 32:cb:cd:7b:5a:47
Ethernet | type | 35020
LLDPDUChassisID | _type | 1
LLDPDUChassisID | _length | 7
LLDPDUChassisID | subtype | 4
LLDPDUChassisID | family | None
LLDPDUChassisID | id | 00:00:00:00:00:01
LLDPDUPortID | _type | 2
LLDPDUPortID | _length | 2
LLDPDUPortID | subtype | 7
LLDPDUPortID | family | None
LLDPDUPortID | id | 1
LLDPDUTimeToLive | _type | 3
LLDPDUTimeToLive | _length | 2
LLDPDUTimeToLive | ttl | 4919
LLDPDUSystemName | _type | 5
LLDPDUSystemName | _length | 10
LLDPDUSystemName | system_name | openflow:1
LLDPDUGenericOrganisationSpecific | _type | 127
LLDPDUGenericOrganisationSpecific | _length | 16
LLDPDUGenericOrganisationSpecific | org_code | 9953
LLDPDUGenericOrganisationSpecific | subtype | 0
LLDPDUGenericOrganisationSpecific | data | openflow:1:1
LLDPDUGenericOrganisationSpecific | _type | 127
LLDPDUGenericOrganisationSpecific | _length | 20
LLDPDUGenericOrganisationSpecific | org_code | 9953
LLDPDUGenericOrganisationSpecific | subtype | 1
LLDPDUGenericOrganisationSpecific | data | b'`\xafE\x16\t\xa0#5\x02\x7f\xd5\p\xf7\x11A'
LLDPDUEndOfLLDPDU | _type | 0
LLDPDUEndOfLLDPDU | _length | 0
import time
import pandas as pd
from scapy.all import Ether, sendp
from scapy.contrib.lldp import (
    LLDPDUChassisID, LLDPDUPortID, LLDPDUTimeToLive, LLDPDUSystemName,
    LLDPDUGenericOrganisationSpecific, LLDPDUEndOfLLDPDU)

def getlldppack(host_2, ifa):
    lim = 1
    log = "/root/log.log"
    while lim < 5:
        # rebuild the file name on each pass so it tracks lim
        file = "/root/dic_" + str(host_2) + "_" + str(lim) + ".csv"
        try:
            lldp1 = pd.read_csv(file)
        except Exception:
            with open(log, 'a') as lf:
                lf.write("error when reading the packet " + file + " for count " + str(lim) + "\n")
            time.sleep(8)
        else:
            lldp1 = lldp1.iloc[:, 1:]
            e1 = lldp1.loc[(lldp1['Layer'] == 'Ethernet') & (lldp1['Field'] == 'dst')].iloc[0, 2]
            e2 = lldp1.loc[(lldp1['Layer'] == 'Ethernet') & (lldp1['Field'] == 'src')].iloc[0, 2]
            e3 = int(lldp1.loc[(lldp1['Layer'] == 'Ethernet') & (lldp1['Field'] == 'type')].iloc[0, 2])
            e = Ether(dst=e1, src=e2, type=e3)
            a1 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUChassisID') & (lldp1['Field'] == '_type')].iloc[0, 2])
            a2 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUChassisID') & (lldp1['Field'] == '_length')].iloc[0, 2])
            a3 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUChassisID') & (lldp1['Field'] == 'subtype')].iloc[0, 2])
            a4 = lldp1.loc[(lldp1['Layer'] == 'LLDPDUChassisID') & (lldp1['Field'] == 'family')].iloc[0, 2]
            a5 = lldp1.loc[(lldp1['Layer'] == 'LLDPDUChassisID') & (lldp1['Field'] == 'id')].iloc[0, 2]
            b1 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUPortID') & (lldp1['Field'] == '_type')].iloc[0, 2])
            b2 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUPortID') & (lldp1['Field'] == '_length')].iloc[0, 2])
            b3 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUPortID') & (lldp1['Field'] == 'subtype')].iloc[0, 2])
            b4 = lldp1.loc[(lldp1['Layer'] == 'LLDPDUPortID') & (lldp1['Field'] == 'family')].iloc[0, 2]
            b5 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUPortID') & (lldp1['Field'] == 'id')].iloc[0, 2])
            c1 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUTimeToLive') & (lldp1['Field'] == '_type')].iloc[0, 2])
            c2 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUTimeToLive') & (lldp1['Field'] == '_length')].iloc[0, 2])
            c3 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUTimeToLive') & (lldp1['Field'] == 'ttl')].iloc[0, 2])
            d1 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUSystemName') & (lldp1['Field'] == '_type')].iloc[0, 2])
            d2 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUSystemName') & (lldp1['Field'] == '_length')].iloc[0, 2])
            d3 = lldp1.loc[(lldp1['Layer'] == 'LLDPDUSystemName') & (lldp1['Field'] == 'system_name')].iloc[0, 2]
            # e1..e5 are reused here for the first org-specific TLV;
            # harmless, since the Ethernet layer was already built above
            e1 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUGenericOrganisationSpecific') & (lldp1['Field'] == '_type')].iloc[0, 2])
            e2 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUGenericOrganisationSpecific') & (lldp1['Field'] == '_length')].iloc[0, 2])
            e3 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUGenericOrganisationSpecific') & (lldp1['Field'] == 'org_code')].iloc[0, 2])
            e4 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUGenericOrganisationSpecific') & (lldp1['Field'] == 'subtype')].iloc[0, 2])
            e5 = lldp1.loc[(lldp1['Layer'] == 'LLDPDUGenericOrganisationSpecific') & (lldp1['Field'] == 'data')].iloc[0, 2]
            f1 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUGenericOrganisationSpecific') & (lldp1['Field'] == '_type')].iloc[1, 2])
            f2 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUGenericOrganisationSpecific') & (lldp1['Field'] == '_length')].iloc[1, 2])
            f3 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUGenericOrganisationSpecific') & (lldp1['Field'] == 'org_code')].iloc[1, 2])
            f4 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUGenericOrganisationSpecific') & (lldp1['Field'] == 'subtype')].iloc[1, 2])
            f5 = lldp1.loc[(lldp1['Layer'] == 'LLDPDUGenericOrganisationSpecific') & (lldp1['Field'] == 'data')].iloc[1, 2]
            g1 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUEndOfLLDPDU') & (lldp1['Field'] == '_type')].iloc[0, 2])
            g2 = int(lldp1.loc[(lldp1['Layer'] == 'LLDPDUEndOfLLDPDU') & (lldp1['Field'] == '_length')].iloc[0, 2])
            l1 = LLDPDUChassisID(_type=a1, _length=a2, subtype=a3, family=a4, id=a5)
            l2 = LLDPDUPortID(_type=b1, _length=b2, subtype=b3, family=b4, id=str(b5))
            l3 = LLDPDUTimeToLive(_type=c1, _length=c2, ttl=c3)
            auxo = d3
            l4 = LLDPDUSystemName(_type=d1, _length=d2, system_name=auxo)
            auxo = e5
            l5 = LLDPDUGenericOrganisationSpecific(_type=e1, _length=e2, org_code=e3, subtype=e4, data=auxo)
            auxa = f5[2:-1]  # strip the b'...' wrapper from the stringified bytes
            l6 = LLDPDUGenericOrganisationSpecific(_type=f1, _length=f2, org_code=f3, subtype=f4, data=auxa)
            l7 = LLDPDUEndOfLLDPDU(_type=0, _length=0)
            lldpu_layer = l1 / l2 / l3 / l4 / l5 / l6 / l7
            pack = e / lldpu_layer
            flag = False
            sendp(pack, count=1, iface=ifa)
            flag = True
            lim = lim + 1
            with open(log, 'a') as lf:
                lf.write('read packet ' + file + "\n")
I tried changing the data types and also fixed the data in the "data" option of
LLDPDUGenericOrganisationSpecific, but it did not work.
I hope to get a packet with the right length so that it reproduces exactly the non-crafted packet.
The problem was the encoding; it had been wrong ever since I wrote the data into the DataFrame.
The solution was to encode with base64 BEFORE saving the information in the DataFrame; I used this explanation: Convert byte[] to base64 and ASCII in Python
That changed my data from b'`\xafE\x16\t\xa0#5\x02\x7f\xd5\p\xf7\x11A' to b'YK9FFgmgIzUCf9VccPcRQQ=='.
Then, when I had to put it in the field, I removed the b'' characters and did the decoding, as described in the link.
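A minimal sketch of that round trip (using the byte string from above; b64encode/b64decode are from the standard library):

import base64

# the raw bytes as captured; \p in the original is a literal backslash plus 'p'
raw = b'`\xafE\x16\t\xa0#5\x02\x7f\xd5\\p\xf7\x11A'

encoded = base64.b64encode(raw).decode('ascii')  # 'YK9FFgmgIzUCf9VccPcRQQ==', safe to store in a CSV/DataFrame
restored = base64.b64decode(encoded)             # back to the original bytes for the TLV data field
assert restored == raw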

Python: How to retrieve a specific cell value from a CSV file based on other cell values in the same row, without using pandas?

Currently, I have code whereby "users" key in their username and password to log in.
uname = input("Enter uname: ")
pword = input("Enter pword: ")
.
.
.
if row[1] == pword and row[0] == uname:
    LOGIN()
However, I wish to add "update info" and "generate report" functions.
How can I write Python code to retrieve the "e unit price" of a specific row of the CSV file (e.g. uname = donnavan12 and pword = Onwdsna)?
Another question I have: how can I compute the sum of a particular column (e.g. "energy saved") for a given user (e.g. uname = donnavan12 and pword = Onwdsna)?
Sorry that I don't have code for what I have tried, because I don't even know where to begin. I only learned basic Python in the past and used DataFrames, which were much easier, but Pandas is not allowed in this project, so I'm rather stumped. (I also added minimal code because I'm afraid of getting my groupmates flagged for plagiarism. Please let me know if more code is necessary.)
Try using DictReader from the csv module.
Example code:
import csv

# DictReader needs a file object, not a filename
with open(filename, newline='') as f:
    rows = list(csv.DictReader(f))

def get_value(myname, mypass, clm):
    for row in rows:
        if row['uname'] == myname and row['pass'] == mypass:
            return row[clm]

def set_value(myname, mypass, clm, new_value):
    for row in rows:
        if row['uname'] == myname and row['pass'] == mypass:
            row[clm] = new_value

def get_sum(myname, mypass, clm):
    esaved = 0
    for row in rows:
        if row['uname'] == myname and row['pass'] == mypass:
            esaved += int(row[clm])
    return esaved

print('energy saved: ', get_sum(myname, mypass, 'energy saved'))
print('e unit price before: ', get_value(myname, mypass, 'e unit price'))
set_value(myname, mypass, 'e unit price', 201)
print('e unit price after: ', get_value(myname, mypass, 'e unit price'))
Input

uname | pass | e unit price | energy saved
abc   | 123  | 100          | 10
cde   | 456  | 101          | 11
abc   | 123  | 100          | 13
fgh   | 789  | 102          | 12
Output
energy saved: 23
e unit price before: 100
e unit price after: 201
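Note that set_value only updates the in-memory rows list; to persist the change you would have to rewrite the file, for example with csv.DictWriter (a sketch, assuming the same filename and the four columns above):

import csv

# write the (possibly modified) rows back out with the same header
fieldnames = ['uname', 'pass', 'e unit price', 'energy saved']
with open(filename, 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)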

How to search for partial text in a dict?

I have a dictionary and would like to be able to search it with a partial string and return the matching text.
import pandas as pd
from collections import defaultdict

df = pd.read_csv('MY_PATH')
d = defaultdict(lambda: 'Error, input not listed')
d.update(df.set_index('msg')['reply'].to_dict())
d[last_msg()]
last_msg() should be my partial variable, and d is my dictionary.
The index of my dictionary is the column msg from df.
In the column msg I have a sample like Jeff Bezos.
In the column reply I have the matching reply Jeff Bezos is the CEO of Amazon.
How can I search for a partial value in the column msg and return the matching value from the column reply?
I want to search for just Jeff or just Bezos and get the matching reply Jeff Bezos is the CEO of Amazon.
PS: Alternatives to defaultdict may also help improve the code.
EDIT: the last_msg() code extracts text from a Selenium element.
def last_msg():
    try:
        post = driver.find_elements_by_class_name("_12pGw")
        ultimo = len(post) - 1
        texto = post[ultimo].find_element_by_css_selector(
            "span.selectable-text").text
        return texto
    except Exception as e:
        print("Error, input not valid")
When I print(d):
defaultdict(<function <lambda> at 0x0000021F959D37B8>, {'Jeff Bezos': 'Jeff Bezos is the CEO of Amazon', 'Serguey Brin': 'Serguey Brin co-founded Google', nan: nan})
When I print(df):
Unnamed: 0 msg reply
0 0 Jeff Bezos Jeff Bezos is the CEO of Amazon
1 1 Serguey Brin Serguey Brin co-founded Google
2 2 NaN NaN
3 3 NaN NaN
I found a way around my own question using:
d = df.set_index('msg')['reply'].to_dict()
...
try:
    x = next(v for k, v in d.items() if last_msg() in k)
except StopIteration:
    x = 'Error, input not listed'
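Two small refinements on that workaround (a sketch, assuming the same df): dropping the NaN rows before building the dict avoids comparing against float nan keys, and calling last_msg() once avoids re-querying Selenium for every key:

# build the dict without the NaN rows so every key is a string
d = df.dropna(subset=['msg']).set_index('msg')['reply'].to_dict()

msg = last_msg()  # call once, not once per dictionary key
try:
    x = next(v for k, v in d.items() if msg in k)
except StopIteration:
    x = 'Error, input not listed'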

Python: Geocoding and extracting longitude, latitude, and city

I have a dataframe with addresses in a column, and I am using the code below to extract longitude and latitude separately. Is there a way to extract longitude and latitude together, as well as the city, using this same approach?
In the Address column of my "add" dataframe, I have addresses in the following format: "35 Turkey Hill Rd Rt 21 Ste 101,Belchertown, MA 01007"
I am using Python 3, Spyder IDE and Windows 10 desktop.
Sample data:
Address
415 E Main St, Westfield, MA 01085
200 Silver St, Agawam, MA 01001
35 Turkey Hill Rd Rt 21 Ste 101,Belchertown, MA 01007
import sys
from geopy.geocoders import Nominatim, ArcGIS, GoogleV3
#from geopy.exc import GeocoderTimedOut

arc = ArcGIS(timeout=100)
nom = Nominatim(timeout=100)
goo = GoogleV3(timeout=100)
geocoders = [arc, nom, goo]

# Capture longitude
def geocodelng(address):
    i = 0
    try:
        while i < len(geocoders):
            # try to geocode using a service
            location = geocoders[i].geocode(address)
            # if it returns a location
            if location is not None:
                # return those values
                return location.longitude
            else:
                # otherwise try the next one
                i += 1
    except Exception:
        # catch whatever errors, likely timeout, and return null values
        print(sys.exc_info()[0])
        return ['null', 'null']
    # if all services have failed to geocode, return null values
    return ['null', 'null']

# Extract co-ordinates
add['longitude'] = add['Address'].apply(geocodelng)

# Capture latitude
def geocodelat(address):
    i = 0
    try:
        while i < len(geocoders):
            # try to geocode using a service
            location = geocoders[i].geocode(address)
            # if it returns a location
            if location is not None:
                # return those values
                return location.latitude
            else:
                # otherwise try the next one
                i += 1
    except Exception:
        # catch whatever errors, likely timeout, and return null values
        print(sys.exc_info()[0])
        return ['null', 'null']
    # if all services have failed to geocode, return null values
    return ['null', 'null']

# Extract co-ordinates
add['latitude'] = add['Address'].apply(geocodelat)
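Since each helper geocodes the address separately, every address is looked up once per column. A possible consolidation (a sketch, assuming the same geocoders list and pandas imported as pd) returns longitude and latitude from a single call; extracting the city is service-specific, because each geocoder's location.raw dict has a different layout, so it is omitted here:

def geocode_both(address):
    # try each service in turn; return (longitude, latitude) from the
    # first one that answers, or (None, None) if all of them fail
    for geocoder in geocoders:
        try:
            location = geocoder.geocode(address)
        except Exception:
            continue  # likely a timeout; try the next service
        if location is not None:
            return location.longitude, location.latitude
    return None, None

add[['longitude', 'latitude']] = add['Address'].apply(
    lambda a: pd.Series(geocode_both(a)))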

Unhandled exception in py2neo: TypeError

I am writing an application whose purpose is to create a graph from a journal dataset. The dataset was an XML file, which was parsed in order to extract the leaf data. From this list I wrote a py2neo script to create the graph. The file is attached to this message.
As the script ran, an exception was raised:
The debugged program raised the exception unhandled TypeError
"(1676 {"titulo":"reconhecimento e agrupamento de objetos de aprendizagem semelhantes"})"
File: /usr/lib/python2.7/site-packages/py2neo-1.5.1-py2.7.egg/py2neo/neo4j.py, Line: 472
I don't know how to handle this. I think the code is syntactically correct... but...
I don't know if I should post the entire code here, so it is at: https://gist.github.com/herlimenezes/6867518
Here is the code:
#!/usr/bin/env python
#
from py2neo import neo4j, cypher
from py2neo import node, rel

# calls the database service of Neo4j
graph_db = neo4j.GraphDatabaseService("DEFAULT_DOMAIN")

# following Nigel Small's suggestion on http://stackoverflow.com
titulo_index = graph_db.get_or_create_index(neo4j.Node, "titulo")
autores_index = graph_db.get_or_create_index(neo4j.Node, "autores")
keyword_index = graph_db.get_or_create_index(neo4j.Node, "keywords")
dataPub_index = graph_db.get_or_create_index(neo4j.Node, "data")

# to begin, clear the database...
graph_db.clear()  # not sure if this really works... let's check...

# the big list; in the next version this is supposed to be read from a file...
listaBase = [['2007-12-18'], ['RECONHECIMENTO E AGRUPAMENTO DE OBJETOS DE APRENDIZAGEM SEMELHANTES'], ['Raphael Ghelman', 'SWMS', 'MHLB', 'RNM'], ['Objetos de Aprendizagem', u'Personaliza\xe7\xe3o', u'Perfil do Usu\xe1rio', u'Padr\xf5es de Metadados', u'Vers\xf5es de Objetos de Aprendizagem', 'Agrupamento de Objetos Similares'], ['2007-12-18'], [u'LOCPN: REDES DE PETRI COLORIDAS NA PRODU\xc7\xc3O DE OBJETOS DE APRENDIZAGEM'], [u'Maria de F\xe1tima Costa de Souza', 'Danielo G. Gomes', 'GCB', 'CTS', u'Jos\xe9 ACCF', 'MCP', 'RMCA'], ['Objetos de Aprendizagem', 'Modelo de Processo', 'Redes de Petri Colorida', u'Especifica\xe7\xe3o formal'], ['2007-12-18'], [u'COMPUTA\xc7\xc3O M\xd3VEL E UB\xcdQUA NO CONTEXTO DE UMA GRADUA\xc7\xc3O DE REFER\xcaNCIA'], ['JB', 'RH', 'SR', u'S\xe9rgio CCSPinto', u'D\xe9bora NFB'], [u'Computa\xe7\xe3o M\xf3vel e Ub\xedqua', u'Gradua\xe7\xe3o de Refer\xeancia', u' Educa\xe7\xe3o Ub\xedqua']]

pedacos = [listaBase[i:i+4] for i in range(0, len(listaBase), 4)]  # pedacos = chunks

# lists to collect indexed nodes: is it really useful???
# let's think about it when optimizing the code...
dataPub_nodes = []
titulo_nodes = []
autores_nodes = []
keyword_nodes = []

for i in range(0, len(pedacos)):
    # fill dataPub_nodes and titulo_nodes with content
    #dataPub_nodes.append(dataPub_index.get_or_create("data", pedacos[i][0], {"data": pedacos[i][0]}))  # publication date nodes...
    dataPub_nodes.append(dataPub_index.get_or_create("data", str(pedacos[i][0]).strip('[]'), {"data": str(pedacos[i][0]).strip('[]')}))
    # ------------------------------- Exception raised here... --------------------------------
    # The debugged program raised the exception unhandled TypeError
    # "(1649 {"titulo":["RECONHECIMENTO E AGRUPAMENTO DE OBJETOS DE APRENDIZAGEM SEMELHANTES"]})"
    # File: /usr/lib/python2.7/site-packages/py2neo-1.5.1-py2.7.egg/py2neo/neo4j.py, Line: 472
    # ------------------------------ What happened??? ----------------------------------------
    titulo_nodes.append(titulo_index.get_or_create("titulo", str(pedacos[i][1]).strip('[]'), {"titulo": str(pedacos[i][1]).strip('[]')}))  # title node...
    # creates the 'publicacao' relationship
    publicacao = graph_db.get_or_create_relationships(titulo_nodes[i], "publicado_em", dataPub_nodes[i])
    # now processing the autores sublist and collecting into autores_nodes
    for j in range(0, len(pedacos[i][2])):
        # fill the autores_nodes list
        autores_nodes.append(autores_index.get_or_create("autor", pedacos[i][2][j], {"autor": pedacos[i][2][j]}))
        # creates the 'autoria' relationship...
        autoria = graph_db.get_or_create_relationships(titulo_nodes[i], "tem_como_autor", autores_nodes[j])
    # same logic...
    for k in range(0, len(pedacos[i][3])):
        keyword_nodes.append(keyword_index.get_or_create("keyword", pedacos[i][3][k]))
        # creates the 'tem_como_keyword' relationship
        tem_keyword = graph_db.get_or_create_relationships(titulo_nodes[i], "tem_como_keyword", keyword_nodes[k])
The fragment of py2neo that raised the exception:
def get_or_create_relationships(self, *abstracts):
    """ Fetch or create relationships with the specified criteria depending
    on whether or not such relationships exist. Each relationship
    descriptor should be a tuple of (start, type, end) or (start, type,
    end, data) where start and end are either existing :py:class:`Node`
    instances or :py:const:`None` (both nodes cannot be :py:const:`None`).
    Uses Cypher `CREATE UNIQUE` clause, raising
    :py:class:`NotImplementedError` if server support not available.

    .. deprecated:: 1.5
        use either :py:func:`WriteBatch.get_or_create_relationship` or
        :py:func:`Path.get_or_create` instead.
    """
    batch = WriteBatch(self)
    for abstract in abstracts:
        if 3 <= len(abstract) <= 4:
            batch.get_or_create_relationship(*abstract)
        else:
            raise TypeError(abstract)  # this is line 472
    try:
        return batch.submit()
    except cypher.CypherError:
        raise NotImplementedError(
            "The Neo4j server at <{0}> does not support "
            "Cypher CREATE UNIQUE clauses or the query contains "
            "an unsupported property type".format(self.__uri__)
        )
Any help?
I have already fixed it, thanks to Nigel Small. I made a mistake when I wrote the line that creates the relationships. I typed:
publicacao = graph_db.get_or_create_relationships(titulo_nodes[i], "publicado_em", dataPub_nodes[i])
and it must be:
publicacao = graph_db.get_or_create_relationships((titulo_nodes[i], "publicado_em", dataPub_nodes[i]))
By the way, there is also another coding error:
keyword_nodes.append(keyword_index.get_or_create("keyword", pedacos[i][3][k]))
must be
keyword_nodes.append(keyword_index.get_or_create("keyword", pedacos[i][3][k], {"keyword":pedacos[i][3][k]}))
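The inner parentheses matter because get_or_create_relationships takes a variable number of (start, type, end[, data]) tuples, as its source above shows. Without them, the first node is treated as a relationship descriptor on its own, fails the 3 <= len(abstract) <= 4 check, and line 472 raises TypeError(abstract) with the node's repr, which is exactly the error reported. A sketch of the corrected call:

# each (start, type, end) descriptor must be passed as ONE tuple argument
publicacao = graph_db.get_or_create_relationships(
    (titulo_nodes[i], "publicado_em", dataPub_nodes[i])
)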
