What is the invalid syntax error in this def? - python

I am trying to use this Anki add-on written in python.
However, on Anki 2.1 startup, an error is returned.
Could anyone have a quick look at the short code in the link and spot the error? Maybe it is incompatible with a recent Python update?
# Anki 2.0 addon
# Author EJS
# https://eshapard.github.io/
#
# Sets the learning steps sequence of each deck options group.
from anki.hooks import addHook
from aqt import mw
#from aqt.utils import showInfo
#import time
ignoreList = ['Default', 'OnHold', 'Parent Category'] #Deck options groups to ignore
# run this on profile load
def updateLearningSteps():
    #find all deck option groups
    dconf = mw.col.decks.dconf
    #cycle through them one by one
    for k in dconf:
        if dconf[k]["name"] not in ignoreList:
            #showInfo(dconf[k]["name"])
            ease = dconf[k]["new"]["initialFactor"]/1000.0
            #create learning steps
            tempList = [15]
            for i in range(10):
                l = int(1440*ease**i)
                if l < 28800:
                    tempList.append(l)
                else:
                    gradInts = [(l/1440),(l/1440)]
                    break
            #showInfo(str(tempList))
            #showInfo(str(gradInts))
            mw.col.decks.dconf[k]["new"]["delays"] = tempList
            mw.col.decks.dconf[k]["new"]["ints"] = gradInts
            mw.col.decks.save(mw.col.decks.dconf[k])
    mw.reset()
# add hook to 'profileLoaded'
addHook("profileLoaded", updateLearningSteps)
This is the error: https://user-images.githubusercontent.com/52420923/66923073-ba2df980-f017-11e9-8b66-c8799db29850.png

The code you've posted doesn't match the error. The error has def updateLearningSteps() without a colon at the end of the line. That is indeed a syntax error.
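For reference, this is what the corrected definition line should look like (a minimal sketch, not the full add-on; the body is elided):
def updateLearningSteps():   # the trailing colon is what the parser is complaining about
    ...                      # function body as in the code above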

Related

attempt to get argmax of an empty sequence

When I try to execute this code, it shows an 'attempt to get argmax of an empty sequence' error.
code:
import os
import re
import numpy as np
output_directory = './fine_tuned_model'
lst = os.listdir(model_dir)
lst = [l for l in lst if 'model.ckpt-' in l and '.meta' in l]
steps = np.array([int(re.findall(r'\d+', l)[0]) for l in lst])
last_model = lst[steps.argmax()].replace('.meta', '')
last_model_path = os.path.join(model_dir, last_model)
print(last_model_path)
!python /content/models/research/object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path={pipeline_fname} \
--output_directory={output_directory} \
--trained_checkpoint_prefix={last_model_path}
The error is saying exactly what is happening: you are creating steps with no data, so argmax() can't run. Perhaps you need to adjust how the directory is loaded so that data actually ends up in steps; it's hard to say more based on the info provided.
Simplified Example to Demonstrate Issue:
steps = np.array([1,2,3])
#Works fine with data
print(steps.argmax())
#argmax() throws an error since the array is empty
emptySteps = np.delete(steps,[0,1,2])
print(emptySteps.argmax())
Possible Workaround
It appears you are searching a directory. If no matching files are available, you may not want an error; you could add a simple check before calling argmax() to see whether there are files to process:
if steps.size > 0:
    print("Do something with argmax()")
else:
    print("No data in steps array")

Get taxa within an iNaturalist area or project using pyinaturalist

I set up projects on iNaturalist and I want to get a list of the current taxa found in those projects.
I tried:
from pyinaturalist import *
Bad_Durkheim = get_observations(project_id = "bad-durkheim-exkursion")
Species = [get_taxa(obs) for obs in Bad_Durkheim["results"]]
which resulted in
HTTPError: 414 Client Error: Request-URI Too Large for url: https://api.inaturalist.org/v1/taxa?q=%7B%27quality_gr...
I am not sure how to use this API, maybe someone has an explanation?
Also, I wondered why Bad_Durkheim["results"] is a list of 30, when there were more observations. If there is a limit of 30, how can it be changed?
Edit
This appears to get me a step further:
from pyinaturalist import *
Bad_Durkheim = get_observations(project_id = "bad-durkheim-exkursion")
Species = [obs["taxon"]["name"] for obs in Bad_Durkheim["results"]]
When I ran
Species = [obs for obs in Bad_Durkheim["results"]]
Species[0]["taxon"]["name"]
to test it, it worked for Species[0]; however, the loop failed with TypeError: 'NoneType' object is not subscriptable.
Found the problem: Some entries didn't have the requested attributes, since they were listed as "unidentified".
Solution:
import pyinaturalist as pyn
Bad_Durkheim = pyn.get_observations(project_id="bad-durkheim-exkursion",
                                    identified=True)
Species = [obs["taxon"]["name"] for obs in Bad_Durkheim["results"]]
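On the question about only 30 results: the iNaturalist API paginates its responses, and 30 records per page is the default. A hedged sketch, assuming get_observations passes per_page through to the API like its other keyword arguments (the API caps it at 200, so larger projects still need to request further pages):
import pyinaturalist as pyn

Bad_Durkheim = pyn.get_observations(project_id="bad-durkheim-exkursion",
                                    identified=True,
                                    per_page=200)  # API maximum; the default is 30
print(Bad_Durkheim["total_results"], "observations in total,",
      len(Bad_Durkheim["results"]), "returned on this page")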

How do I preserve the colours in a STEP file when modifying the geometry in Open Cascade?

I'm writing a script in python using Open Cascade Technology (using the pyOCCT package for Anaconda) to import STEP files, defeature them procedurally and re-export them. I want to preserve the product hierarchy, names and colours as much as possible. Currently the script can import STEP files, simplify all of the geometry while roughly preserving the hierarchy and re-export the step file. The problem is no matter how I approach the problem, I can't manage to make it preserve the colours of the STEP file in a few particular cases.
Here's the model I pass in to the script:
And here's the result of the simplification:
In this case, the simplification has worked correctly but the colours of some of the bodies were not preserved. The common thread is that the bodies that lose their colours are children of products which only have other bodies as their children (i.e. they don't contain sub-products).
This seems to be related to the way that Open Cascade imports STEP files and translates the product structure.
Alright, now for some code:
from OCCT.STEPControl import STEPControl_Reader, STEPControl_Writer, STEPControl_AsIs
from OCCT.BRepAlgoAPI import BRepAlgoAPI_Defeaturing
from OCCT.TopAbs import TopAbs_FACE, TopAbs_SHAPE, TopAbs_COMPOUND
from OCCT.TopExp import TopExp_Explorer
from OCCT.ShapeFix import ShapeFix_Shape
from OCCT.GProp import GProp_GProps
from OCCT.BRepGProp import BRepGProp
from OCCT.TopoDS import TopoDS
from OCCT.TopTools import TopTools_ListOfShape
from OCCT.BRep import BRep_Tool
from OCCT.Quantity import Quantity_ColorRGBA
from OCCT.ShapeBuild import ShapeBuild_ReShape
from OCCT.STEPCAFControl import STEPCAFControl_Reader, STEPCAFControl_Writer
from OCCT.XCAFApp import XCAFApp_Application
from OCCT.XCAFDoc import XCAFDoc_DocumentTool, XCAFDoc_ColorGen, XCAFDoc_ColorSurf
from OCCT.XmlXCAFDrivers import XmlXCAFDrivers
from OCCT.TCollection import TCollection_ExtendedString
from OCCT.TDF import TDF_LabelSequence
from OCCT.TDataStd import TDataStd_Name
from OCCT.TDocStd import TDocStd_Document
from OCCT.TNaming import TNaming_NamedShape
from OCCT.Interface import Interface_Static
# DBG
def export_step(shape, path):
    writer = STEPControl_Writer()
    writer.Transfer(shape, STEPControl_AsIs)
    writer.Write(path)
# DBG
def print_shape_type(label, shapeTool):
    if shapeTool.IsFree_(label):
        print("Free")
    if shapeTool.IsShape_(label):
        print("Shape")
    if shapeTool.IsSimpleShape_(label):
        print("SimpleShape")
    if shapeTool.IsReference_(label):
        print("Reference")
    if shapeTool.IsAssembly_(label):
        print("Assembly")
    if shapeTool.IsComponent_(label):
        print("Component")
    if shapeTool.IsCompound_(label):
        print("Compound")
    if shapeTool.IsSubShape_(label):
        print("SubShape")
# Returns a ListOfShape containing the faces to be removed in the defeaturing
# NOTE: For conciseness I've simplified this algorithm and as such it *MAY* not produce exactly
# the same output as shown in the screenshots but should still do SOME simplification
def select_faces(shape):
    exp = TopExp_Explorer(shape, TopAbs_FACE)
    selection = TopTools_ListOfShape()
    nfaces = 0
    while exp.More():
        rgb = None
        s = exp.Current()
        exp.Next()
        nfaces += 1
        face = TopoDS.Face_(s)
        gprops = GProp_GProps()
        BRepGProp.SurfaceProperties_(face, gprops)
        area = gprops.Mass()
        surf = BRep_Tool.Surface_(face)
        if area < 150:
            selection.Append(face)
            #log(f"\t\tRemoving face with area: {area}")
    return selection, nfaces
# Performs the defeaturing
def simplify(shape):
    defeaturer = BRepAlgoAPI_Defeaturing()
    defeaturer.SetShape(shape)
    sel = select_faces(shape)
    if sel[0].Extent() == 0:
        return shape
    defeaturer.AddFacesToRemove(sel[0])
    defeaturer.SetRunParallel(True)
    defeaturer.SetToFillHistory(False)
    defeaturer.Build()
    if not defeaturer.IsDone():
        return shape  # TODO: Handle errors
    return defeaturer.Shape()
# Given the label of an entity it finds its displayed colour. If the entity has no defined colour the parents are searched for defined colours as well.
def find_color(label, colorTool):
    col = Quantity_ColorRGBA()
    status = False
    while not status and label != None:
        try:
            status = colorTool.GetColor(label, XCAFDoc_ColorSurf, col)
        except:
            break
        label = label.Father()
    return (col.GetRGB().Red(), col.GetRGB().Green(), col.GetRGB().Blue(), col.Alpha(), status, col)
# Finds all child shapes and simplifies them recursively. Returns true if there were any subshapes.
# For now this assumes all shapes passed into this are translated as "SimpleShape".
# "Assembly" entities should be skipped as we don't need to touch them, "Compound" entities should work with this as well, though the behaviour is untested.
# Use the print_shape_type(shapeLabel, shapeTool) method to identify a shape.
def simplify_subshapes(shapeLabel, shapeTool, colorTool, set_colours=None):
    labels = TDF_LabelSequence()
    shapeTool.GetSubShapes_(shapeLabel, labels)
    #print_shape_type(shapeLabel, shapeTool)
    #print(f"{shapeTool.GetShape_(shapeLabel).ShapeType()}")
    cols = {}
    for i in range(1, labels.Length()+1):
        label = labels.Value(i)
        currShape = shapeTool.GetShape_(label)
        print(f"\t{currShape.ShapeType()}")
        if currShape.ShapeType() == TopAbs_COMPOUND:
            # This code path should never be taken as far as I understand
            simplify_subshapes(label, shapeTool, colorTool, set_colours)
        else:
            ''' See the comment at the bottom of the main loop for an explanation of the function of this block
            col = find_color(label, colorTool)
            #print(f"{name} RGBA: {col[0]:.5f} {col[1]:.5f} {col[2]:.5f} {col[3]:.5f} defined={col[4]}")
            cols[label.Tag()] = col
            if set_colours != None:
                colorTool.SetColor(label, set_colours[label.Tag()][5], XCAFDoc_ColorSurf)'''
            # Doing both of these things seems to result in colours being reset but the geometry doesn't get replaced
            nshape = simplify(currShape)
            shapeTool.SetShape(label, nshape)  # This doesn't work
    return labels.Length() > 0, cols
# Set up XCaf Document
app = XCAFApp_Application.GetApplication_()
fmt = TCollection_ExtendedString('MDTV-XCAF')
doc = TDocStd_Document(fmt)
app.InitDocument(doc)
shapeTool = XCAFDoc_DocumentTool.ShapeTool_(doc.Main())
colorTool = XCAFDoc_DocumentTool.ColorTool_(doc.Main())
# Import the step file
reader = STEPCAFControl_Reader()
reader.SetNameMode(True)
reader.SetColorMode(True)
Interface_Static.SetIVal_("read.stepcaf.subshapes.name", 1) # Tells the importer to import subshape names
reader.ReadFile("testcolours.step")
reader.Transfer(doc)
labels = TDF_LabelSequence()
shapeTool.GetShapes(labels)
# Simplify each shape that was imported
for i in range(1, labels.Length()+1):
    label = labels.Value(i)
    shape = shapeTool.GetShape_(label)
    # Assemblies are just made of other shapes, so we'll skip this and simplify them individually...
    if shapeTool.IsAssembly_(label):
        continue
    # This function call here is meant to be the fix for the bug described.
    # The idea was to check if the TopoDS_Shape we're looking at is a COMPOUND and if so we would simplify and call SetShape()
    # on each of the sub-shapes instead in an attempt to preserve the colours stored in the sub-shape's labels.
    #status, loadedCols = simplify_subshapes(label, shapeTool, colorTool)
    #if status:
    #    continue
    shape = simplify(shape)
    shapeTool.SetShape(label, shape)
    # The code gets a bit messy here because this was another attempt at fixing the problem by building a dictionary of colours
    # before the shapes were simplified and then resetting the colours of each subshape after simplification.
    # This didn't work either.
    # But the idea was to call this function once to generate the dictionary, then simplify, then call it again passing in the dictionary so it could be re-applied.
    #if status:
    #    simplify_subshapes(label, shapeTool, colorTool, loadedCols)
shapeTool.UpdateAssemblies()
# Re-export
writer = STEPCAFControl_Writer()
Interface_Static.SetIVal_("write.step.assembly", 2)
Interface_Static.SetIVal_("write.stepcaf.subshapes.name", 1)
writer.Transfer(doc, STEPControl_AsIs)
writer.Write("testcolours-simplified.step")
There's a lot of stuff here for a minimal reproducible example, but the general flow of the program is that we import the step file:
reader.ReadFile("testcolours.step")
reader.Transfer(doc)
Then we iterate through each label in the file (essentially every node in the tree):
labels = TDF_LabelSequence()
shapeTool.GetShapes(labels)
# Simplify each shape that was imported
for i in range(1, labels.Length()+1):
    label = labels.Value(i)
    shape = shapeTool.GetShape_(label)
We skip any labels marked as assemblies since they contain children and we only want to simplify individual bodies. We then call simplify(shape), which performs the simplification and returns a new shape, and then call shapeTool.SetShape() to bind the new shape to the old label.
The thing that doesn't work here is that, as explained, Component3 and Component4 don't get marked as Assemblies; they are treated as SimpleShapes, and when each is simplified as one shape, the colours are lost.
One solution I attempted was to call a method simplify_subshapes() which would iterate through each of the subshapes and do the same thing as the main loop, simplifying them and then calling SetShape(). This ended up being even worse, as it resulted in those bodies not being simplified at all but still losing their colours.
I also attempted to use the simplify_subshapes() method to make a dictionary of all the colours of the subshapes, then simplify the COMPOUND shape and then call the same method again to this time re-apply the colours to the subshapes using the dictionary (the code for this is commented out with an explanation as to what it did).
col = find_color(label, colorTool)
#print(f"{name} RGBA: {col[0]:.5f} {col[1]:.5f} {col[2]:.5f} {col[3]:.5f} defined={col[4]}")
cols[label.Tag()] = col
if set_colours != None:
    colorTool.SetColor(label, set_colours[label.Tag()][5], XCAFDoc_ColorSurf)
As far as I can see, the issue could be resolved either by getting Open Cascade to import Component3 and Component4 as Assemblies, OR by finding a way to make SetShape() work as intended on subshapes.
Here's a link to the test file:
testcolours.step

Python consumes excessive memory, doesn't complete run even given adequate memory

I am writing code for an information retrieval project. It reads Wikipedia pages in XML format from a file, processes the strings (I've omitted this part for the sake of simplicity), tokenizes them and builds positional indexes for the terms found on the pages. It then saves the indexes to a file using pickle once, and reads them back from that file on later runs to save processing time (I've included the code for those parts, but it's commented out).
After that, I need to fill a 1572 * ~97000 matrix (1572 is the number of Wiki pages, and ~97000 is the number of terms found in them). Each Wiki page is treated as a vector of words, and vectors[i][j] is the number of occurrences of the j'th word of the word set in the i'th Wiki page. (Again this is simplified, but it doesn't matter.)
The problem is that the code takes far too much memory to run, and even then, from some point between the 350th and 400th row of the matrix onwards it stops making progress (it doesn't exit either). I thought the problem was memory, because when usage exceeded my 7.7GiB RAM and 1.7GiB swap, it was killed and printed:
Process finished with exit code 137 (interrupted by signal 9: SIGKILL)
But when I added 6GiB of memory by making a swap file for Python 3.7 (using the script recommended here), the program didn't run out of memory, but instead got stuck with 7.7GiB RAM + 3.9GiB swap occupied, again at a point between the 350th and 400th iteration of i in the loop at the bottom. Instead of Ubuntu 18.04, I tried it on Windows 10; the screen simply went black. I tried it on Windows 7, again to no avail.
Next I thought it was a PyCharm issue, so I ran the file with the python3 file.py command, and it got stuck at the very same point as with PyCharm. I even used the numpy.float16 datatype to save memory, but it had no effect. I asked a colleague about their matrix dimensions; they were similar to mine, but they weren't having problems with it. Is it malware or a memory leak? Or am I doing something wrong here?
import pickle
from hazm import *
from collections import defaultdict
import numpy as np
'''For each word there's one of these. it stores the word's frequency, and the positions it has occurred in on each wiki page'''
class Positional_Index:
    def __init__(self):
        self.freq = 0
        self.title = defaultdict(list)
        self.text = defaultdict(list)
'''Here I tokenize words and construct indexes for them'''
# import xml.etree.ElementTree as ET  # needed if the parsing block below is re-enabled
# tree = ET.parse('Wiki.xml')
# root = tree.getroot()
# index_dict = defaultdict(Positional_Index)
# all_content = root.findall('{http://www.mediawiki.org/xml/export-0.10/}page')
#
# for page_index, pg in enumerate(all_content):
#     title = pg.find('{http://www.mediawiki.org/xml/export-0.10/}title').text
#     txt = pg.find('{http://www.mediawiki.org/xml/export-0.10/}revision') \
#         .find('{http://www.mediawiki.org/xml/export-0.10/}text').text
#
#     title_arr = word_tokenize(title)
#     txt_arr = word_tokenize(txt)
#
#     for term_index, term in enumerate(title_arr):
#         index_dict[term].freq += 1
#         index_dict[term].title[page_index] += [term_index]
#
#     for term_index, term in enumerate(txt_arr):
#         index_dict[term].freq += 1
#         index_dict[term].text[page_index] += [term_index]
#
# with open('texts/indices.txt', 'wb') as f:
#     pickle.dump(index_dict, f)
with open('texts/indices.txt', 'rb') as file:
    data = pickle.load(file)
'''Here I'm trying to keep the number of occurrences of each word on each page'''
page_count = 1572
vectors = np.array([[0 for j in range(len(data.keys()))] for i in range(page_count)], dtype=np.float16)
words = list(data.keys())
word_count = len(words)
const_log_of_d = np.log10(1572)
""" :( """
for i in range(page_count):
    for j in range(word_count):
        vectors[i][j] = (len(data[words[j]].title[i]) + len(data[words[j]].text[i]))
    if i % 50 == 0:
        print("i:", i)
Update: I tried this on a friend's computer; this time it killed the process somewhere between the 1350th and 1400th iteration.
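Two hedged observations on the code above, not a confirmed diagnosis. First, the np.array([[0 ...] ...]) line builds a full nested Python list of roughly 1572 * 97000 ints before NumPy ever sees it, which by itself costs several gigabytes; np.zeros allocates the float16 matrix (about 300 MB) directly. Second, .title and .text are defaultdict(list), so looking up entry.title[i] for a page the word never appears on inserts a new empty list into the index itself, and doing that for roughly 150 million (page, word) pairs grows memory as the loop runs. A sketch avoiding both, assuming data is the pickled index_dict described above:
import numpy as np

page_count = 1572
words = list(data.keys())
word_count = len(words)

# Preallocate the matrix directly instead of via a nested list comprehension.
vectors = np.zeros((page_count, word_count), dtype=np.float16)

for j, word in enumerate(words):
    entry = data[word]
    # Iterate only over the pages the word actually occurs on, so the
    # defaultdicts are never asked for (and never insert) missing keys.
    for i, positions in entry.title.items():
        vectors[i, j] += len(positions)
    for i, positions in entry.text.items():
        vectors[i, j] += len(positions)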

Fatal Python error: (pygame parachute) Bus Error

Here is my code:
from psychopy import visual, event, gui, data, core
import random, os
from random import shuffle
from PIL import Image
import glob
a = glob.glob("DDtest/targetimagelist1/*")
b = glob.glob("DDtest/distractorimagelist1/*")
c = glob.glob("DDtest/targetimagelist2/*")
d = glob.glob("DDtest/distractorimagelist3/*")
e = glob.glob("DDtest/targetimagelist4/*")
shuffle(c)
shuffle(d)
ac = a + c
bd = b + d
indices = random.sample(range(len(ac)),len(ac))
ac = list(map(ac.__getitem__, indices))
bd = list(map(bd.__getitem__, indices))
ace = ac+e
shuffle(ace)
target = ac
distractor = bd
recognition = ace
def studyphase():
    loc = [1, 2]
    location = random.choice(loc)
    if location == 1:
        pos1 = [-.05, -.05]
        pos2 = [.05, .05]
    else:
        pos1 = [.05, .05]
        pos2 = [-.05, -.05]
    win = visual.Window(size=(1920, 1080), fullscr=True, screen=0, monitor='testMonitor', color=[-1,-1,-1])
    distractorstim = visual.ImageStim(win=win, pos=pos1, size=[0.5,0.5])
    distractorstim.autoDraw = True
    targetstim = visual.ImageStim(win=win, pos=pos2, size=[0.5,0.5])
    targetstim.autoDraw = True
    targetstim.image = target[i]
    distractorstim.image = distractor[i]
    win.flip()
    core.wait(.1)
def testphase():
    win = visual.Window(size=(1920, 1080), fullscr=True, screen=0, monitor='testMonitor', color=[-1,-1,-1])
    recognitionstim = visual.ImageStim(win=win, pos=[0,0], size=[0.5,0.5])
    recognitionstim.autoDraw = True
    recognitionstim.image = recognition[k]
    old = visual.TextStim(win, text='OLD', pos=[-0.5,-0.5], font='Lucida Console')
    new = visual.TextStim(win, text='NEW', pos=[0.5,-0.5], font='Lucida Console')
    old.draw()
    new.draw()
    win.flip()
    core.wait(.1)
for i in range(len(ac)):
    studyphase()
for k in range(len(ace)):
    testphase()
What this is supposed to do is take a bunch of pictures and display them in two different phases (study and test). However, when I run this code the program crashes about halfway through the second loop, and I get the following error message:
python(55762,0xa0afe1a8) malloc: *** mach_vm_map(size=8388608) failed (error code=3)
*** error: can't allocate region
*** set a breakpoint in malloc_error_break to debug
Fatal Python error: (pygame parachute) Bus Error
However, if I run either the study loop or the test loop independently they run fine. Anyone know what might be causing this error? Any help will be greatly appreciated. :)
Edit: so apparently, if I move the win command outside the loop, it works.
This issue was raised on the psychopy-users list a few years ago. It is quite likely caused by the images being too large (in pixels, not megabytes), so a solution would be to downscale them to approximately the resolution at which you're going to display them, if possible. I found this by googling the error message.
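A hedged sketch of that downscaling step with Pillow (already imported in the script); the 960x540 target and the _small suffix are my own choices, not from the original post:
from PIL import Image
import glob, os

# Shrink every stimulus to at most half of the 1920x1080 window, writing resized copies.
for path in glob.glob("DDtest/*/*"):
    img = Image.open(path)
    img.thumbnail((960, 540))            # resizes in place, keeps aspect ratio, only shrinks
    root, ext = os.path.splitext(path)
    img.save(root + "_small" + ext)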
You generate a new window and several new stimuli on every trial/presentation, since they are created within the functions and the functions are called in every iteration of the loop(s). Please see my answer to your earlier question for a strategy to create the window/stimuli once and then update only the properties that need to change. This may even solve the problem on its own, since creating new stimuli repeatedly may fill up memory.
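A minimal sketch of that strategy (the names here are illustrative, not the original script): create the window and stimulus once, then only swap the image inside the loop.
from psychopy import visual, core

# Created once, outside any loop or function.
win = visual.Window(size=(1920, 1080), fullscr=True, monitor='testMonitor', color=[-1, -1, -1])
stim = visual.ImageStim(win=win, pos=[0, 0], size=[0.5, 0.5])

for path in recognition:   # the shuffled list from the question
    stim.image = path      # only the property that changes is updated
    stim.draw()
    win.flip()
    core.wait(.1)

win.close()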
