Adding new fields conditionally and calculating the length using Python in QGIS

I am developing a new plugin in Python for QGIS. I want to produce an output table summarizing the attribute table of another layer already existing in my project, the layer "SUPPORT".
This is what I've done so far. My code:
`
# imports used by this snippet (it runs inside a plugin method, hence the return below)
import operator
from qgis.PyQt.QtCore import QVariant
from qgis.PyQt.QtWidgets import QMessageBox
from qgis.core import QgsVectorLayer, QgsField, QgsFeature, QgsProject

tableSUPPORT = QgsVectorLayer('None', 'table_SUPPORT', 'memory')
tableSUPPORT.dataProvider().addAttributes(
    [QgsField("Propriétaire", QVariant.String), QgsField("Nb. tronçons", QVariant.Int),
     QgsField("Long. (m)", QVariant.Int)])
tableSUPPORT.updateFields()

dicoSUPPORT = {'FT': (0, 0), 'FREE MOBILE': (0, 0), 'PRIVE': (0, 0)}
for sup in coucheSUPPORT.getFeatures():
    proprietaire = sup['PROPRIETAI']
    if proprietaire in ['FT', 'FREE MOBILE', 'PRIVE']:
        dicoSUPPORT[proprietaire] = tuple(map(operator.add, (1, sup['LGR_REEL']), dicoSUPPORT[proprietaire]))
    else:
        QMessageBox.critical(None, "Problème de support",
                             "Le support %s possède un propriétaire qui ne fait pas partie de la liste." % str(sup['LIBELLE']))
        return None

ligneFT = QgsFeature()
ligneFT.setAttributes(['FT', int(dicoSUPPORT['FT'][0]), int(dicoSUPPORT['FT'][1])])
ligneFREE = QgsFeature()
ligneFREE.setAttributes(['FREE MOBILE', int(dicoSUPPORT['FREE MOBILE'][0]), int(dicoSUPPORT['FREE MOBILE'][1])])
lignePRIVE = QgsFeature()
lignePRIVE.setAttributes(['PRIVE', int(dicoSUPPORT['PRIVE'][0]), int(dicoSUPPORT['PRIVE'][1])])
tableSUPPORT.dataProvider().addFeatures([ligneFT])
tableSUPPORT.dataProvider().addFeatures([ligneFREE])
tableSUPPORT.dataProvider().addFeatures([lignePRIVE])
QgsProject.instance().addMapLayer(tableSUPPORT)
`
and here is the result obtained with this code:
(result screenshot)
but in fact I want a table with these specific rows and columns:
(target table screenshot)
This is my SUPPORT attribute table:
(support attribute table screenshot)
description of each row I want in the result:
`
> row 1 => FT_SOUT: sum of the length ('LGR_REEL') and count of supports if ('PROPRIETAI' = 'FT' and 'TYPE_STRUC' = 'TRANCHEE')
>
> row 2 => FT_AERIEN: sum of the length ('LGR_REEL') if ('PROPRIETAI' = 'FT' and 'TYPE_STRUC' = 'AERIEN')
>
> row 3 => CREATION GC FREE RESEAU: sum of the length ('LGR_REEL') if ('PROPRIETAI' = 'FREE MOBILE' and 'TYPE_STRUC' = 'TRANCHEE')
>
> row 4 => CREATION AERIEN FREE RESEAU: sum of the length ('LGR_REEL') if ('PROPRIETAI' = 'FREE MOBILE' and 'TYPE_STRUC' = 'AERIEN')
>
> row 5 => PRIVE: sum of the length ('LGR_REEL') if ('PROPRIETAI' = 'PRIVE')
`
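Since the five rows group on both 'PROPRIETAI' and 'TYPE_STRUC', one approach is to key the totals on a row label selected by a predicate. Below is a minimal sketch of just that grouping logic, with made-up stand-in feature dicts in place of `coucheSUPPORT.getFeatures()`; in the plugin, each resulting (count, length) pair would then be written into a `QgsFeature` row exactly as in the code above.

```python
# stand-in features; in the plugin these come from coucheSUPPORT.getFeatures()
features = [
    {'PROPRIETAI': 'FT', 'TYPE_STRUC': 'TRANCHEE', 'LGR_REEL': 10},
    {'PROPRIETAI': 'FT', 'TYPE_STRUC': 'AERIEN', 'LGR_REEL': 5},
    {'PROPRIETAI': 'FREE MOBILE', 'TYPE_STRUC': 'TRANCHEE', 'LGR_REEL': 7},
    {'PROPRIETAI': 'PRIVE', 'TYPE_STRUC': 'AERIEN', 'LGR_REEL': 3},
]

# one predicate per desired output row
rows = {
    'FT_SOUT': lambda f: f['PROPRIETAI'] == 'FT' and f['TYPE_STRUC'] == 'TRANCHEE',
    'FT_AERIEN': lambda f: f['PROPRIETAI'] == 'FT' and f['TYPE_STRUC'] == 'AERIEN',
    'CREATION GC FREE RESEAU': lambda f: f['PROPRIETAI'] == 'FREE MOBILE' and f['TYPE_STRUC'] == 'TRANCHEE',
    'CREATION AERIEN FREE RESEAU': lambda f: f['PROPRIETAI'] == 'FREE MOBILE' and f['TYPE_STRUC'] == 'AERIEN',
    'PRIVE': lambda f: f['PROPRIETAI'] == 'PRIVE',
}

# accumulate (count, total length) per row label
totaux = {label: (0, 0) for label in rows}
for f in features:
    for label, pred in rows.items():
        if pred(f):
            n, lg = totaux[label]
            totaux[label] = (n + 1, lg + f['LGR_REEL'])

print(totaux['FT_SOUT'])  # (1, 10)
```

The dictionary of predicates keeps the five row definitions in one place, so adding a sixth row is a one-line change.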
I have asked this on GIS StackExchange but no one has answered me. This is my question:
https://gis.stackexchange.com/questions/448698/adding-new-fields-conditioned-and-calculate-the-length-using-python-qgis

Related

Filter Pandas dataframe with user input

I'm trying to write a function that takes inputs for several variables, applies the corresponding filters, and returns the filtered dataframe. Each input will only ever receive a single value, chosen by the user from a few options, and if an input is empty that filter must return all the data.
I haven't wired up the user input yet because I was testing the function first; however, the function always returns an empty dataframe and I can't find out why. Here is the code I was developing.
I didn't include the real dataframe because it comes from an Excel file, but if necessary I'll put together a sample that fits:
df = pd.DataFrame({"FarolAging": ["Vermelho", "Verde", "Amarelo"],
                   "Dias Pendentes": ["20 dias", "40 dias", "60 dias"],
                   "Produto": ["Prod1", "Prod1", "Prod2"],
                   "Officer": ["Alexandre Denardi", "Alexandre Denardi", "Lucas Fernandes"],
                   "Analista": ["Guilherme De Oliveira Moura", "Leonardo Silva", "Julio Cesar"],
                   "Coord": ["Anna Claudia", "Bruno", "Bruno"]})
FarolAging1 = ['Vermelho']
DiasPendentes = []
Produto = []
Officer = []

def func(FarolAging1, DiasPendentes, Produto, Officer):
    if len(Officer) < 1:
        Officer = df['Officer'].unique()
    if len(FarolAging1) < 1:
        FarolAging1 = df['FarolAging'].unique()
    if len(DiasPendentes) < 1:
        DiasPendentes = df['Dias Pendentes'].unique()
    if len(Produto) < 1:
        Produto = df['Produto'].unique()
    dados2 = df.loc[df['FarolAging'].isin([FarolAging1]) & (df['Dias Pendentes'].isin([DiasPendentes])) & (df['Produto'].isin([Produto])) & (df['Officer'].isin([Officer]))]
    print(dados2)

func(FarolAging1, DiasPendentes, Produto, Officer)
You have to remove the square brackets in isin because you already have lists:
def func(FarolAging1, DiasPendentes, Produto, Officer):
    if len(Officer) < 1:
        Officer = df['Officer'].unique()
    if len(FarolAging1) < 1:
        FarolAging1 = df['FarolAging'].unique()
    if len(DiasPendentes) < 1:
        DiasPendentes = df['Dias Pendentes'].unique()
    if len(Produto) < 1:
        Produto = df['Produto'].unique()
    # Transform .isin([...]) into .isin(...)
    dados2 = (df.loc[df['FarolAging'].isin(FarolAging1)
                     & (df['Dias Pendentes'].isin(DiasPendentes))
                     & (df['Produto'].isin(Produto))
                     & (df['Officer'].isin(Officer))])
    print(dados2)
    return dados2  # don't forget to return something
Output:
>>> func(FarolAging1, DiasPendentes, Produto, Officer)
FarolAging Dias Pendentes Produto Officer Analista Coord
0 Vermelho 20 dias Prod1 Alexandre Denardi Guilherme De Oliveira Moura Anna Claudia
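A minimal illustration of why the extra brackets matter, using a stand-alone Series with the same sample values: `isin` expects the list of values itself, and wrapping the list in another pair of brackets nests it one level deeper, so no scalar element can ever match.

```python
import pandas as pd

s = pd.Series(["Vermelho", "Verde", "Amarelo"])
FarolAging1 = ["Vermelho"]

# correct: pass the list itself, so each element is compared to "Vermelho"
mask = s.isin(FarolAging1)
print(mask.tolist())  # [True, False, False]

# wrong: s.isin([FarolAging1]) would compare each element against the
# inner *list* ["Vermelho"], which no string element can equal
```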

Python Index out of range Error in lib loop issue

Everything's fine? I hope so.
I'm dealing with this issue: list index out of range.
Console output (a pandas FutureWarning printed before the failure):
c:\Users.....\Documents\t.py:41: FutureWarning: As the xlwt package is no longer maintained, the xlwt engine will be removed in a future version of pandas. This is the only engine in pandas that supports writing in the xls format. Install openpyxl and write to an xlsx file instead. You can set the option io.excel.xls.writer to 'xlwt' to silence this warning. While this option is deprecated and will also raise a warning, it can be globally set and the warning suppressed.
read_file.to_excel(planilhaxls, index = None, header=True)
The goal: I need to create a loop that stores a specific line of a worksheet such as sheet_1.csv, the corresponding line in sheet_2.csv, and a third sheet as well, into 3 columns of a sheet_output.csv.
Issue: I'm getting an index-out-of-range error and I don't know what to do about it.
Doubt: Is there any other way I can do this?
The code is below:
(Please, ignore portuguese comments)
import xlrd as ex
import pyautogui as pag
import os
import pyperclip as pc
import pandas as pd
import pygetwindow as pgw
import openpyxl

# Inputs
numerolam = int(input('Escolha o número da lamina: '))
amostra = input('Escoha a amostra: (X, Y, W ou Z): ')
milimetro_inicial = int(input("Escolha o milimetro inicial: "))
milimetro_final = int(input("Escolha o milimetro final: "))
tipo = input("Escolha o tipo - B para Branco & E para Espelho: ")
linha = int(input("Escolha a linha da planilha: "))

# Code conversion
if tipo == 'B':
    tipo2 = 'BRA'
else:
    tipo2 = 'ESP'

# xlsx file
#planilhaxlsx = f'A{numerolam}{amostra}{milimetro_inicial}{tipo2}.xlsx'
#planilhaxls = f'A{numerolam}{amostra}{milimetro_inicial}{tipo2}.xls'
#planilhacsv = f'A{numerolam}{amostra}{milimetro_inicial}{tipo2}.csv'
#planilhacsv_ = f'A{numerolam}{amostra}{milimetro_final}{tipo2}.csv'
#arquivoorigin = f'A{numerolam}{amostra}{milimetro_inicial}{tipo2}.opj'

# Folder
pasta = f'L{numerolam}{amostra}'

while milimetro_inicial < milimetro_final:
    planilhaxlsx = f'A{numerolam}{amostra}{milimetro_inicial}{tipo2}.xlsx'
    planilhaxls = f'A{numerolam}{amostra}{milimetro_inicial}{tipo2}.xls'
    planilhacsv = f'A{numerolam}{amostra}{milimetro_inicial}{tipo2}.csv'
    planilhacsv_ = f'A{numerolam}{amostra}{milimetro_final}{tipo2}.csv'
    arquivoorigin = f'A{numerolam}{amostra}{milimetro_inicial}{tipo2}.opj'

    # Converts the .csv file to .xls and .xlsx
    read_file = pd.read_csv(planilhacsv)
    read_file.to_excel(planilhaxls, index=None, header=True)
    #read_file.to_excel(planilhaxlsx, index=None, header=True)

    # Opens the .xls file with xlrd - the Excel file.
    book = ex.open_workbook(planilhaxls)
    sh = book.sheet_by_index(0)

    # Variable declarations.
    coluna_inicial = 16  # Q - starts at 0
    valor = []
    index = 0

    # Loop that stores the row's values for columns Q-Z in 'valor', index 0..(len-1)
    while coluna_inicial < 25:
        # ERROR ON THE LINE BELOW
        temp = sh.cell_value(linha, coluna_inicial)
        valor.append(temp)  # adds the value
        print(index)
        print(valor[index])
        index += 1
        coluna_inicial += 1

    # Opens the output spreadsheet
    wb = openpyxl.Workbook()
    ws = wb.active

    # Starts the write loop
    colunas = ['A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z']
    idx_colunas = 0
    contador_loop = colunas[idx_colunas]
    linha_loop = 1
    index_out = 0
    s = f'{contador_loop}{linha_loop}'
    print(s)
    while linha_loop < len(valor):
        valor[index_out] = "{}".format(valor[index_out])
        ws[s].value = valor[index_out]
        print(valor[index_out] + ' feito')
        linha_loop += 1
        idx_colunas += 1
        index_out += 1

    # Saves the output spreadsheet
    wb.save("teste.xlsx")
    milimetro_inicial += 1
Your problem is on this line:
temp = sh.cell_value(linha, coluna_inicial)
There are two index params used, 'linha' and 'coluna_inicial'. 'linha' appears to be a static value, so the problem would seem to be with 'coluna_inicial', which gets increased by 1 each iteration:
coluna_inicial += 1
The loop continues while the 'coluna_inicial' value is less than 25. I suggest you check the number of columns in the sheet 'sh' using
sh.ncols
either for debugging or as the preferred upper bound of your loop. If this is less than 25, you will get the index error as soon as 'coluna_inicial' reaches the 'sh.ncols' value.
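To make that bound concrete, here is a sketch using a stand-in sheet object (21 columns, chosen as an example) instead of a real xlrd sheet; the point is clamping the loop's upper bound to `sh.ncols` so the loop can never index a column that does not exist.

```python
class FakeSheet:
    """Stand-in for an xlrd sheet object, exposing ncols and cell_value."""
    def __init__(self, rows):
        self._rows = rows
        self.nrows = len(rows)
        self.ncols = len(rows[0])

    def cell_value(self, r, c):
        return self._rows[r][c]

# a one-row "sheet" with 21 columns (fewer than the hard-coded limit of 25)
sh = FakeSheet([[f"c{c}" for c in range(21)]])

linha = 0
coluna_inicial = 16
valor = []
# clamp the upper bound to the columns that actually exist
while coluna_inicial < min(25, sh.ncols):
    valor.append(sh.cell_value(linha, coluna_inicial))
    coluna_inicial += 1

print(valor)  # columns 16..20 only: ['c16', 'c17', 'c18', 'c19', 'c20']
```

With the clamp in place, a sheet narrower than 25 columns simply yields fewer values instead of raising IndexError.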
<---------------Additional Information---------------->
Since this is an .xls file there is no need for delimiter settings; your code as-is should open it correctly. However, since the workbook to be opened is determined by parameters the user enters at the start (presumably meaning there are several files in the directory to choose from), are you sure you are checking the same .xls file your code run is opening? Also, if there is more than one sheet in the workbook(s), are you opening the correct sheet?
You can print the workbook name to be sure which one is being opened. Also, by adding verbosity to the open_workbook command (level 2 should be high enough), xlrd will, upon opening the book, print to the console details of the available sheets, including the number of rows and columns in each.
print(planilhaxls)
book = ex.open_workbook(planilhaxls, verbosity=2)
sh = book.sheet_by_index(0)
print(sh.name)
E.g.
BOF: op=0x0809 vers=0x0600 stream=0x0010 buildid=14420 buildyr=1997 -> BIFF80
sheet 0('Sheet1') DIMENSIONS: ncols=21 nrows=21614
BOF: op=0x0809 vers=0x0600 stream=0x0010 buildid=14420 buildyr=1997 -> BIFF80
sheet 1('Sheet2') DIMENSIONS: ncols=13 nrows=13
the print(sh.name) as shown checks the name of the sheet that 'sh' is assigned to.

Using a text file to create a dataframe with Python

I am trying to use a text file to create a database that I can easily search with specific criteria, such as "items with ROI > 50%" or "items with monthly sales > 50".
I got as far as reading the file in and creating a list that was split after each 'New Flip!', which indexed each entry into the list, but I don't know how best to name the attributes of each object within the list so they can be referenced later.
I also tried creating a pandas dataframe with each item as a row and each attribute as a column, but again I could only create individual rows containing the entire contents of each index; I don't know how to split up the attributes of each item.
The text file output is set up as:
Donquixote
`New Flip!`
__**Product Details**__
> **Name:** Star Wars
> **ASIN:** B0
__**Analytics**__
> **Rank:** 79,371/7,911,836 (Top 1.00%) **Toys & Games**
> **Monthly Sales:** 150
> **Offer Count #:** 8
__**Calculation**__
> **Sell:** $49.99
> **Buy:** $20.99
> **FBA Fees:** $7.50
> **Pick and Pack Fees:** $3.64
> **Ship to Amazon:** $0.53
> ━━━━━━━━━━━━
> **Profit:** $17.34
> **ROI:** 82%
__**Links**__
> **[Check Restriction](https://sellercentr)**
> **[Amazon](https://www.amazon.com/dp/B)**
> **[Keepa](https://keepa.com/#!product/1-B)**
> **[Bestbuy](https://www.bestbuy.com/si**
https://images-ext-1.discorda
https://images-ext-1.discordapp.net/extern
A︱01/03/22
{Reactions}
💾 keepa barcode 📦 📋
[02-Jan-22 11:23 PM]
{Embed}
Donquixote
I have assumed that your text document is item after item all in the same document (I have added "Batman" as an additional item so that you can see how the code works).
Donquixote
`New Flip!`
__**Product Details**__
> **Name:** Star Wars
> **ASIN:** B0
__**Analytics**__
> **Rank:** 79,371/7,911,836 (Top 1.00%) **Toys & Games**
> **Monthly Sales:** 150
> **Offer Count #:** 8
__**Calculation**__
> **Sell:** $49.99
> **Buy:** $20.99
> **FBA Fees:** $7.50
> **Pick and Pack Fees:** $3.64
> **Ship to Amazon:** $0.53
> ━━━━━━━━━━━━
> **Profit:** $17.34
> **ROI:** 82%
__**Links**__
> **[Check Restriction](https://sellercentr)**
> **[Amazon](https://www.amazon.com/dp/B)**
> **[Keepa](https://keepa.com/#!product/1-B)**
> **[Bestbuy](https://www.bestbuy.com/si**
https://images-ext-1.discorda
https://images-ext-1.discordapp.net/extern
A︱01/03/22
{Reactions}
💾 keepa barcode 📦 📋
[02-Jan-22 11:23 PM]
{Embed}
Donquixote
`New Flip!`
__**Product Details**__
> **Name:** Batman
> **ASIN:** B1
__**Analytics**__
> **Rank:** 79,371/7,911,836 (Top 1.00%) **Toys & Games**
> **Monthly Sales:** 20
> **Offer Count #:** 8
__**Calculation**__
> **Sell:** $45.99
> **Buy:** $20.99
> **FBA Fees:** $7.50
> **Pick and Pack Fees:** $3.64
> **Ship to Amazon:** $0.53
> ━━━━━━━━━━━━
> **Profit:** $17.34
> **ROI:** 56%
__**Links**__
> **[Check Restriction](https://sellercentr)**
> **[Amazon](https://www.amazon.com/dp/B)**
> **[Keepa](https://keepa.com/#!product/1-B)**
> **[Bestbuy](https://www.bestbuy.com/si**
https://images-ext-1.discorda
https://images-ext-1.discordapp.net/extern
A︱01/03/22
{Reactions}
💾 keepa barcode 📦 📋
[02-Jan-22 11:23 PM]
{Embed}
This, messy and rudimentary, code could then be used:
import pandas as pd

database = {}
data = {}
name = ""
# open file
with open("data.txt", "rt", encoding="utf8") as file:
    for line in file:
        # "New Flip!" marks the start of a new item: store the previous one
        if "New Flip!" in line:
            if name:
                database[name] = data
            # start a new item
            data = {}
            continue
        try:
            parts = line.split("**")
            # if this line is the name, use it to key the item's data
            if parts[1] == "Name:":
                name = parts[2].strip()
            # dictionary key names are after the first "**", values after the second
            value = parts[2].strip()
            # convert money values to floats
            if "$" in value:
                value = float(value.strip("$"))
            # convert ROI percentages to floats
            elif parts[1] == "ROI:":
                value = float(value.strip("%"))
            # convert other plain numbers to floats
            elif value.isdigit():
                value = float(value)
            data[parts[1]] = value
        except IndexError:
            # lines without "**" markers carry no key/value pair
            pass
# store the last item, which no further "New Flip!" line follows
if name:
    database[name] = data

# convert database to dataframe and transpose so each row is an item and each column a dictionary key
df = pd.DataFrame(database).T
# filter for items with ROI > 60
df[df["ROI:"] > 60]
# Product Details Name: ASIN: Analytics Rank: Monthly Sales: Offer Count #: Calculation Sell: Buy: FBA Fees: Pick and Pack Fees: Ship to Amazon: Profit: ROI: Links [Check Restriction](https://sellercentr) [Amazon](https://www.amazon.com/dp/B) [Keepa](https://keepa.com/#!product/1-B) [Bestbuy](https://www.bestbuy.com/si
#Star Wars __ Star Wars B0 __ 79,371/7,911,836 (Top 1.00%) 150.0 8.0 __ 49.99 20.99 7.5 3.64 0.53 17.34 82.0 __
Notice that Batman, which has ROI of 56%, is not included in the filtered dataframe.

Generating a table with docx from a dataframe in python

Hello,
I'm currently working on a project in which I have to generate some documents with the docx library in Python. I want to know how to generate a docx table from a dataframe, so that the output contains all the columns and rows of the dataframe I've created. Here is my code, but it's not working correctly because I can't get the final output:
table = doc.add_table(rows=len(detalle_operaciones_total1), cols=5)
table.style = 'Table Grid'
table.rows[0].cells[0].text = 'Nombre'
table.rows[0].cells[1].text = 'Operacion Nro'
table.rows[0].cells[2].text = 'Producto'
table.rows[0].cells[3].text = 'Monto en moneda de origen'
table.rows[0].cells[4].text = 'Monto en moneda local'
for y in range(1, len(detalle_operaciones_total1)):
    Nombre = str(detalle_operaciones_total1.iloc[y, 0])
    Operacion = str(detalle_operaciones_total1.iloc[y, 1])
    Producto = str(detalle_operaciones_total1.iloc[y, 2])
    Monto_en_MO = str(detalle_operaciones_total1.iloc[y, 3])
    Monto_en_ML = str(detalle_operaciones_total1.iloc[y, 4])
    table.rows[y].cells[0].text = Nombre
    table.rows[y].cells[1].text = Operacion
    table.rows[y].cells[2].text = Producto
    table.rows[y].cells[3].text = Monto_en_MO
    table.rows[y].cells[4].text = Monto_en_ML

Unhandled exception in py2neo: Type error

I am writing an application whose purpose is to create a graph from a journal dataset. The dataset was an XML file, which was parsed in order to extract leaf data; using that list I wrote a py2neo script to create the graph. The file is attached to this message.
As the script ran, an exception was raised:
The debugged program raised the exception unhandled TypeError
"(1676 {"titulo":"reconhecimento e agrupamento de objetos de aprendizagem semelhantes"})"
File: /usr/lib/python2.7/site-packages/py2neo-1.5.1-py2.7.egg/py2neo/neo4j.py, Line: 472
I don't know how to handle this. I think the code is syntactically correct... but...
I don't know whether I should post the entire code here, so it is also at: https://gist.github.com/herlimenezes/6867518
Here is the code:
`
#!/usr/bin/env python
#
from py2neo import neo4j, cypher
from py2neo import node, rel
# calls database service of Neo4j
#
graph_db = neo4j.GraphDatabaseService("DEFAULT_DOMAIN")
#
# following nigel small suggestion in http://stackoverflow.com
#
titulo_index = graph_db.get_or_create_index(neo4j.Node, "titulo")
autores_index = graph_db.get_or_create_index(neo4j.Node, "autores")
keyword_index = graph_db.get_or_create_index(neo4j.Node, "keywords")
dataPub_index = graph_db.get_or_create_index(neo4j.Node, "data")
#
# to begin, database clear...
graph_db.clear() # not sure if this really works...let's check...
#
# the big list, next version this is supposed to be read from a file...
#
listaBase = [['2007-12-18'], ['RECONHECIMENTO E AGRUPAMENTO DE OBJETOS DE APRENDIZAGEM SEMELHANTES'], ['Raphael Ghelman', 'SWMS', 'MHLB', 'RNM'], ['Objetos de Aprendizagem', u'Personaliza\xe7\xe3o', u'Perfil do Usu\xe1rio', u'Padr\xf5es de Metadados', u'Vers\xf5es de Objetos de Aprendizagem', 'Agrupamento de Objetos Similares'], ['2007-12-18'], [u'LOCPN: REDES DE PETRI COLORIDAS NA PRODU\xc7\xc3O DE OBJETOS DE APRENDIZAGEM'], [u'Maria de F\xe1tima Costa de Souza', 'Danielo G. Gomes', 'GCB', 'CTS', u'Jos\xe9 ACCF', 'MCP', 'RMCA'], ['Objetos de Aprendizagem', 'Modelo de Processo', 'Redes de Petri Colorida', u'Especifica\xe7\xe3o formal'], ['2007-12-18'], [u'COMPUTA\xc7\xc3O M\xd3VEL E UB\xcdQUA NO CONTEXTO DE UMA GRADUA\xc7\xc3O DE REFER\xcaNCIA'], ['JB', 'RH', 'SR', u'S\xe9rgio CCSPinto', u'D\xe9bora NFB'], [u'Computa\xe7\xe3o M\xf3vel e Ub\xedqua', u'Gradua\xe7\xe3o de Refer\xeancia', u' Educa\xe7\xe3o Ub\xedqua']]
#
pedacos = [listaBase[i:i+4] for i in range(0, len(listaBase), 4)] # pedacos = chunks
#
# lists to collect indexed nodes: is it really useful???
# let's think about it when optimizing code...
dataPub_nodes = []
titulo_nodes = []
autores_nodes = []
keyword_nodes = []
#
#
for i in range(0, len(pedacos)):
    # fill dataPub_nodes and titulo_nodes with content.
    #dataPub_nodes.append(dataPub_index.get_or_create("data", pedacos[i][0], {"data":pedacos[i][0]})) # Publication date nodes...
    dataPub_nodes.append(dataPub_index.get_or_create("data", str(pedacos[i][0]).strip('[]'), {"data": str(pedacos[i][0]).strip('[]')}))
    # ------------------------------- Exception raised here... --------------------------------
    # The debugged program raised the exception unhandled TypeError
    # "(1649 {"titulo":["RECONHECIMENTO E AGRUPAMENTO DE OBJETOS DE APRENDIZAGEM SEMELHANTES"]})"
    # File: /usr/lib/python2.7/site-packages/py2neo-1.5.1-py2.7.egg/py2neo/neo4j.py, Line: 472
    # ------------------------------ What happened??? ----------------------------------------
    titulo_nodes.append(titulo_index.get_or_create("titulo", str(pedacos[i][1]).strip('[]'), {"titulo": str(pedacos[i][1]).strip('[]')}))  # title node...
    # creates the "publicacao" relationship
    publicacao = graph_db.get_or_create_relationships(titulo_nodes[i], "publicado_em", dataPub_nodes[i])
    # now processing the autores sublist and collecting into autores_nodes
    for j in range(0, len(pedacos[i][2])):
        # fill the autores_nodes list
        autores_nodes.append(autores_index.get_or_create("autor", pedacos[i][2][j], {"autor": pedacos[i][2][j]}))
        # creates the "autoria" relationship...
        autoria = graph_db.get_or_create_relationships(titulo_nodes[i], "tem_como_autor", autores_nodes[j])
    # same logic...
    for k in range(0, len(pedacos[i][3])):
        keyword_nodes.append(keyword_index.get_or_create("keyword", pedacos[i][3][k]))
        # creates the 'tem_como_keyword' relationship
        tem_keyword = graph_db.get_or_create_relationships(titulo_nodes[i], "tem_como_keyword", keyword_nodes[k])
`
The fragment of py2neo which raised the exception:
def get_or_create_relationships(self, *abstracts):
    """ Fetch or create relationships with the specified criteria depending
    on whether or not such relationships exist. Each relationship
    descriptor should be a tuple of (start, type, end) or (start, type,
    end, data) where start and end are either existing :py:class:`Node`
    instances or :py:const:`None` (both nodes cannot be :py:const:`None`).
    Uses Cypher `CREATE UNIQUE` clause, raising
    :py:class:`NotImplementedError` if server support not available.
    .. deprecated:: 1.5
        use either :py:func:`WriteBatch.get_or_create_relationship` or
        :py:func:`Path.get_or_create` instead.
    """
    batch = WriteBatch(self)
    for abstract in abstracts:
        if 3 <= len(abstract) <= 4:
            batch.get_or_create_relationship(*abstract)
        else:
            raise TypeError(abstract)  # this is line 472
    try:
        return batch.submit()
    except cypher.CypherError:
        raise NotImplementedError(
            "The Neo4j server at <{0}> does not support " \
            "Cypher CREATE UNIQUE clauses or the query contains " \
            "an unsupported property type".format(self.__uri__)
        )
======
Any help?
I have already fixed it, thanks to Nigel Small. I made a mistake when I wrote the line that creates relationships. I typed:
publicacao = graph_db.get_or_create_relationships(titulo_nodes[i], "publicado_em", dataPub_nodes[i])
but it must be (note the extra parentheses, passing a single tuple):
publicacao = graph_db.get_or_create_relationships((titulo_nodes[i], "publicado_em", dataPub_nodes[i]))
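The function's signature explains the error: each positional argument must itself be a (start, type, end) or (start, type, end, data) tuple. A small stand-in function (plain Python, not py2neo itself) that mimics only the length check makes the difference between the two calls visible:

```python
def get_or_create_relationships(*abstracts):
    """Stand-in mimicking py2neo 1.5: every positional argument must be a
    (start, type, end) or (start, type, end, data) tuple."""
    accepted = []
    for abstract in abstracts:
        if 3 <= len(abstract) <= 4:
            accepted.append(abstract)
        else:
            raise TypeError(abstract)  # the line-472 error in neo4j.py
    return accepted

# WRONG: three separate arguments -- each one (here a plain string standing
# in for a node) is checked on its own and fails the 3 <= len <= 4 test
try:
    get_or_create_relationships("titulo_node", "publicado_em", "dataPub_node")
except TypeError as erro:
    print("TypeError:", erro)

# RIGHT: one (start, type, end) tuple per relationship
print(get_or_create_relationships(("titulo_node", "publicado_em", "dataPub_node")))
```

Passing the three items as one tuple is exactly what the extra pair of parentheses in the corrected line achieves.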
By the way, there is also another coding error:
keyword_nodes.append(keyword_index.get_or_create("keyword", pedacos[i][3][k]))
must be
keyword_nodes.append(keyword_index.get_or_create("keyword", pedacos[i][3][k], {"keyword":pedacos[i][3][k]}))
