Why does doxygen not show annotated public attributes? - python

I am a newcomer and I am trying to use doxygen to generate documentation. I use type annotations for public attributes of classes to avoid type problems. But for a simple piece of code:
class doxygenchk:
    def __init__(self):
        self.attr: float = 1.
        self.visible_attr = 2.
The result in the documentation is:
Public Member Functions
def __init__ (self)
Public Attributes
visible_attr
The annotated attr attribute does not appear.
Is there a way to force doxygen to recognize annotated attributes?
I use doxygen version 1.8.17.
The output is optimized for Java (OPTIMIZE_OUTPUT_JAVA = YES in the configuration below); the output in doxywizard is:
Searching for include files...
Searching for example files...
Searching for images...
Searching for dot files...
Searching for msc files...
Searching for dia files...
Searching for files to exclude
Searching INPUT for files to process...
Searching for files in directory .../tmpproject
Reading and parsing tag files
Parsing files
Reading .../tmpproject/main.py...
Parsing file .../tmpproject/main.py...
Reading .../tmpproject/tmpguide.py...
Parsing file .../tmpproject/tmpguide.py...
Building group list...
Building directory list...
Building namespace list...
Building file list...
Building class list...
Computing nesting relations for classes...
Associating documentation with classes...
Building example list...
Searching for enumerations...
Searching for documented typedefs...
Searching for members imported via using declarations...
Searching for included using directives...
Searching for documented variables...
Building interface member list...
Building member list...
Searching for friends...
Searching for documented defines...
Computing class inheritance relations...
Computing class usage relations...
Flushing cached template relations that have become invalid...
Computing class relations...
Add enum values to enums...
Searching for member function documentation...
Creating members for template instances...
Building page list...
Search for main page...
Computing page relations...
Determining the scope of groups...
Sorting lists...
Determining which enums are documented
Computing member relations...
Building full member lists recursively...
Adding members to member groups.
Computing member references...
Inheriting documentation...
Generating disk names...
Adding source references...
Adding xrefitems...
Sorting member lists...
Setting anonymous enum type...
Computing dependencies between directories...
Generating citations page...
Counting members...
Counting data structures...
Resolving user defined references...
Finding anchors and sections in the documentation...
Transferring function references...
Combining using relations...
Adding members to index pages...
Correcting members for VHDL...
Generating style sheet...
Generating search indices...
Generating example documentation...
Generating file sources...
Generating file documentation...
Generating docs for file main.py...
Generating docs for file tmpguide.py...
Generating page documentation...
Generating group documentation...
Generating class documentation...
Generating namespace index...
Generating docs for namespace main
Generating docs for namespace tmpguide
Generating docs for compound tmpguide::doxygenchk...
Generating graph info page...
Generating directory documentation...
Generating index page...
Generating page index...
Generating module index...
Generating namespace index...
Generating namespace member index...
Generating annotated compound index...
Generating alphabetical compound index...
Generating hierarchical class index...
Generating member index...
Generating file index...
Generating file member index...
Generating example index...
finalizing index lists...
writing tag file...
Running plantuml with JAVA...
lookup cache used 4/65536 hits=5 misses=4
finished...
*** Doxygen has finished
The configuration is:
# Doxyfile 1.8.17
#---------------------------------------------------------------------------
# Project related configuration options
#---------------------------------------------------------------------------
DOXYFILE_ENCODING = UTF-8
PROJECT_NAME = "My Project"
PROJECT_NUMBER =
PROJECT_BRIEF =
PROJECT_LOGO =
OUTPUT_DIRECTORY = ../tmpproject/docs
CREATE_SUBDIRS = NO
ALLOW_UNICODE_NAMES = NO
OUTPUT_LANGUAGE = English
OUTPUT_TEXT_DIRECTION = None
BRIEF_MEMBER_DESC = YES
REPEAT_BRIEF = YES
ABBREVIATE_BRIEF = "The $name class" \
"The $name widget" \
"The $name file" \
is \
provides \
specifies \
contains \
represents \
a \
an \
the
ALWAYS_DETAILED_SEC = NO
INLINE_INHERITED_MEMB = NO
FULL_PATH_NAMES = YES
STRIP_FROM_PATH =
STRIP_FROM_INC_PATH =
SHORT_NAMES = NO
JAVADOC_AUTOBRIEF = NO
JAVADOC_BANNER = NO
QT_AUTOBRIEF = NO
MULTILINE_CPP_IS_BRIEF = NO
INHERIT_DOCS = YES
SEPARATE_MEMBER_PAGES = NO
TAB_SIZE = 4
ALIASES =
TCL_SUBST =
OPTIMIZE_OUTPUT_FOR_C = NO
OPTIMIZE_OUTPUT_JAVA = YES
OPTIMIZE_FOR_FORTRAN = NO
OPTIMIZE_OUTPUT_VHDL = NO
OPTIMIZE_OUTPUT_SLICE = NO
EXTENSION_MAPPING =
MARKDOWN_SUPPORT = YES
TOC_INCLUDE_HEADINGS = 5
AUTOLINK_SUPPORT = YES
BUILTIN_STL_SUPPORT = NO
CPP_CLI_SUPPORT = NO
SIP_SUPPORT = NO
IDL_PROPERTY_SUPPORT = YES
DISTRIBUTE_GROUP_DOC = NO
GROUP_NESTED_COMPOUNDS = NO
SUBGROUPING = YES
INLINE_GROUPED_CLASSES = NO
INLINE_SIMPLE_STRUCTS = NO
TYPEDEF_HIDES_STRUCT = NO
LOOKUP_CACHE_SIZE = 0
#---------------------------------------------------------------------------
# Build related configuration options
#---------------------------------------------------------------------------
EXTRACT_ALL = YES
EXTRACT_PRIVATE = NO
EXTRACT_PRIV_VIRTUAL = NO
EXTRACT_PACKAGE = NO
EXTRACT_STATIC = NO
EXTRACT_LOCAL_CLASSES = YES
EXTRACT_LOCAL_METHODS = NO
EXTRACT_ANON_NSPACES = NO
HIDE_UNDOC_MEMBERS = NO
HIDE_UNDOC_CLASSES = NO
HIDE_FRIEND_COMPOUNDS = NO
HIDE_IN_BODY_DOCS = NO
INTERNAL_DOCS = NO
CASE_SENSE_NAMES = NO
HIDE_SCOPE_NAMES = NO
HIDE_COMPOUND_REFERENCE= NO
SHOW_INCLUDE_FILES = YES
SHOW_GROUPED_MEMB_INC = NO
FORCE_LOCAL_INCLUDES = YES
INLINE_INFO = YES
SORT_MEMBER_DOCS = YES
SORT_BRIEF_DOCS = NO
SORT_MEMBERS_CTORS_1ST = NO
SORT_GROUP_NAMES = NO
SORT_BY_SCOPE_NAME = NO
STRICT_PROTO_MATCHING = NO
GENERATE_TODOLIST = YES
GENERATE_TESTLIST = YES
GENERATE_BUGLIST = YES
GENERATE_DEPRECATEDLIST= YES
ENABLED_SECTIONS =
MAX_INITIALIZER_LINES = 30
SHOW_USED_FILES = YES
SHOW_FILES = YES
SHOW_NAMESPACES = YES
FILE_VERSION_FILTER =
LAYOUT_FILE =
CITE_BIB_FILES =
#---------------------------------------------------------------------------
# Configuration options related to warning and progress messages
#---------------------------------------------------------------------------
QUIET = NO
WARNINGS = YES
WARN_IF_UNDOCUMENTED = YES
WARN_IF_DOC_ERROR = YES
WARN_NO_PARAMDOC = NO
WARN_AS_ERROR = NO
WARN_FORMAT = "$file:$line: $text"
WARN_LOGFILE =
#---------------------------------------------------------------------------
# Configuration options related to the input files
#---------------------------------------------------------------------------
INPUT = ../tmpproject
INPUT_ENCODING = UTF-8
FILE_PATTERNS = *.c \
*.cc \
*.cxx \
*.cpp \
*.c++ \
*.java \
*.ii \
*.ixx \
*.ipp \
*.i++ \
*.inl \
*.idl \
*.ddl \
*.odl \
*.h \
*.hh \
*.hxx \
*.hpp \
*.h++ \
*.cs \
*.d \
*.php \
*.php4 \
*.php5 \
*.phtml \
*.inc \
*.m \
*.markdown \
*.md \
*.mm \
*.dox \
*.doc \
*.txt \
*.py \
*.pyw \
*.f90 \
*.f95 \
*.f03 \
*.f08 \
*.f \
*.for \
*.tcl \
*.vhd \
*.vhdl \
*.ucf \
*.qsf \
*.ice
RECURSIVE = NO
EXCLUDE =
EXCLUDE_SYMLINKS = NO
EXCLUDE_PATTERNS =
EXCLUDE_SYMBOLS =
EXAMPLE_PATH =
EXAMPLE_PATTERNS = *
EXAMPLE_RECURSIVE = NO
IMAGE_PATH =
INPUT_FILTER =
FILTER_PATTERNS =
FILTER_SOURCE_FILES = NO
FILTER_SOURCE_PATTERNS =
USE_MDFILE_AS_MAINPAGE =
#---------------------------------------------------------------------------
# Configuration options related to source browsing
#---------------------------------------------------------------------------
SOURCE_BROWSER = NO
INLINE_SOURCES = NO
STRIP_CODE_COMMENTS = YES
REFERENCED_BY_RELATION = NO
REFERENCES_RELATION = NO
REFERENCES_LINK_SOURCE = YES
SOURCE_TOOLTIPS = YES
USE_HTAGS = NO
VERBATIM_HEADERS = YES
CLANG_ASSISTED_PARSING = NO
CLANG_OPTIONS =
CLANG_DATABASE_PATH =
#---------------------------------------------------------------------------
# Configuration options related to the alphabetical class index
#---------------------------------------------------------------------------
ALPHABETICAL_INDEX = YES
COLS_IN_ALPHA_INDEX = 5
IGNORE_PREFIX =
#---------------------------------------------------------------------------
# Configuration options related to the HTML output
#---------------------------------------------------------------------------
GENERATE_HTML = YES
HTML_OUTPUT = html
HTML_FILE_EXTENSION = .html
HTML_HEADER =
HTML_FOOTER =
HTML_STYLESHEET =
HTML_EXTRA_STYLESHEET =
HTML_EXTRA_FILES =
HTML_COLORSTYLE_HUE = 220
HTML_COLORSTYLE_SAT = 100
HTML_COLORSTYLE_GAMMA = 80
HTML_TIMESTAMP = NO
HTML_DYNAMIC_MENUS = YES
HTML_DYNAMIC_SECTIONS = NO
HTML_INDEX_NUM_ENTRIES = 100
GENERATE_DOCSET = NO
DOCSET_FEEDNAME = "Doxygen generated docs"
DOCSET_BUNDLE_ID = org.doxygen.Project
DOCSET_PUBLISHER_ID = org.doxygen.Publisher
DOCSET_PUBLISHER_NAME = Publisher
GENERATE_HTMLHELP = NO
CHM_FILE =
HHC_LOCATION =
GENERATE_CHI = NO
CHM_INDEX_ENCODING =
BINARY_TOC = NO
TOC_EXPAND = NO
GENERATE_QHP = NO
QCH_FILE =
QHP_NAMESPACE = org.doxygen.Project
QHP_VIRTUAL_FOLDER = doc
QHP_CUST_FILTER_NAME =
QHP_CUST_FILTER_ATTRS =
QHP_SECT_FILTER_ATTRS =
QHG_LOCATION =
GENERATE_ECLIPSEHELP = NO
ECLIPSE_DOC_ID = org.doxygen.Project
DISABLE_INDEX = NO
GENERATE_TREEVIEW = NO
ENUM_VALUES_PER_LINE = 4
TREEVIEW_WIDTH = 250
EXT_LINKS_IN_WINDOW = NO
FORMULA_FONTSIZE = 10
FORMULA_TRANSPARENT = YES
FORMULA_MACROFILE =
USE_MATHJAX = NO
MATHJAX_FORMAT = HTML-CSS
MATHJAX_RELPATH = https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.5/
MATHJAX_EXTENSIONS =
MATHJAX_CODEFILE =
SEARCHENGINE = YES
SERVER_BASED_SEARCH = NO
EXTERNAL_SEARCH = NO
SEARCHENGINE_URL =
SEARCHDATA_FILE = searchdata.xml
EXTERNAL_SEARCH_ID =
EXTRA_SEARCH_MAPPINGS =
#---------------------------------------------------------------------------
# Configuration options related to the LaTeX output
#---------------------------------------------------------------------------
GENERATE_LATEX = YES
LATEX_OUTPUT = latex
LATEX_CMD_NAME =
MAKEINDEX_CMD_NAME = makeindex
LATEX_MAKEINDEX_CMD = makeindex
COMPACT_LATEX = NO
PAPER_TYPE = a4
EXTRA_PACKAGES =
LATEX_HEADER =
LATEX_FOOTER =
LATEX_EXTRA_STYLESHEET =
LATEX_EXTRA_FILES =
PDF_HYPERLINKS = YES
USE_PDFLATEX = YES
LATEX_BATCHMODE = NO
LATEX_HIDE_INDICES = NO
LATEX_SOURCE_CODE = NO
LATEX_BIB_STYLE = plain
LATEX_TIMESTAMP = NO
LATEX_EMOJI_DIRECTORY =
#---------------------------------------------------------------------------
# Configuration options related to the RTF output
#---------------------------------------------------------------------------
GENERATE_RTF = NO
RTF_OUTPUT = rtf
COMPACT_RTF = NO
RTF_HYPERLINKS = NO
RTF_STYLESHEET_FILE =
RTF_EXTENSIONS_FILE =
RTF_SOURCE_CODE = NO
#---------------------------------------------------------------------------
# Configuration options related to the man page output
#---------------------------------------------------------------------------
GENERATE_MAN = YES
MAN_OUTPUT = man
MAN_EXTENSION = .3
MAN_SUBDIR =
MAN_LINKS = NO
#---------------------------------------------------------------------------
# Configuration options related to the XML output
#---------------------------------------------------------------------------
GENERATE_XML = NO
XML_OUTPUT = xml
XML_PROGRAMLISTING = YES
XML_NS_MEMB_FILE_SCOPE = NO
#---------------------------------------------------------------------------
# Configuration options related to the DOCBOOK output
#---------------------------------------------------------------------------
GENERATE_DOCBOOK = NO
DOCBOOK_OUTPUT = docbook
DOCBOOK_PROGRAMLISTING = NO
#---------------------------------------------------------------------------
# Configuration options for the AutoGen Definitions output
#---------------------------------------------------------------------------
GENERATE_AUTOGEN_DEF = NO
#---------------------------------------------------------------------------
# Configuration options related to the Perl module output
#---------------------------------------------------------------------------
GENERATE_PERLMOD = NO
PERLMOD_LATEX = NO
PERLMOD_PRETTY = YES
PERLMOD_MAKEVAR_PREFIX =
#---------------------------------------------------------------------------
# Configuration options related to the preprocessor
#---------------------------------------------------------------------------
ENABLE_PREPROCESSING = YES
MACRO_EXPANSION = NO
EXPAND_ONLY_PREDEF = NO
SEARCH_INCLUDES = YES
INCLUDE_PATH =
INCLUDE_FILE_PATTERNS =
PREDEFINED =
EXPAND_AS_DEFINED =
SKIP_FUNCTION_MACROS = YES
#---------------------------------------------------------------------------
# Configuration options related to external references
#---------------------------------------------------------------------------
TAGFILES =
GENERATE_TAGFILE =
ALLEXTERNALS = NO
EXTERNAL_GROUPS = YES
EXTERNAL_PAGES = YES
#---------------------------------------------------------------------------
# Configuration options related to the dot tool
#---------------------------------------------------------------------------
CLASS_DIAGRAMS = NO
DIA_PATH =
HIDE_UNDOC_RELATIONS = YES
HAVE_DOT = NO
DOT_NUM_THREADS = 0
DOT_FONTNAME = Helvetica
DOT_FONTSIZE = 10
DOT_FONTPATH =
CLASS_GRAPH = YES
COLLABORATION_GRAPH = YES
GROUP_GRAPHS = YES
UML_LOOK = NO
UML_LIMIT_NUM_FIELDS = 10
TEMPLATE_RELATIONS = NO
INCLUDE_GRAPH = YES
INCLUDED_BY_GRAPH = YES
CALL_GRAPH = NO
CALLER_GRAPH = NO
GRAPHICAL_HIERARCHY = YES
DIRECTORY_GRAPH = YES
DOT_IMAGE_FORMAT = png
INTERACTIVE_SVG = NO
DOT_PATH =
DOTFILE_DIRS =
MSCFILE_DIRS =
DIAFILE_DIRS =
PLANTUML_JAR_PATH =
PLANTUML_CFG_FILE =
PLANTUML_INCLUDE_PATH =
DOT_GRAPH_MAX_NODES = 50
MAX_DOT_GRAPH_DEPTH = 0
DOT_TRANSPARENT = NO
DOT_MULTI_TARGETS = NO
GENERATE_LEGEND = YES
DOT_CLEANUP = YES

Finally I have found a relatively simple solution:
if each object variable is also defined as a class variable and then redefined in __init__(), doxygen shows it. It is a dirty workaround until you find a documentation tool that fulfills all of your needs.
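For example, a minimal sketch of that workaround applied to the class from the question (the class-level annotated declaration is what makes doxygen list attr; the comment style is just illustrative):

class doxygenchk:
    ## Annotated attribute, declared at class level so doxygen picks it up.
    attr: float = 1.

    def __init__(self):
        self.attr = 1.           # redefined per instance in __init__()
        self.visible_attr = 2.   # unannotated attribute, already shown by doxygen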

Related

Python.NET - Sending data to a DataGridView

To give some background, I am attempting to use Python to give some of my users an interface through a third-party program (that already has IronPython embedded) to add, edit, and delete records from a SQL Server database.
I've received good assistance from one member here about Python.NET WinForms, so I am attempting to get some more. My goal is to populate a DataGridView (dgv) with records from a table. Seems pretty simple. However, when I run the code, my dgv is just an empty container.
I could use some assistance or, better yet, a pointer in the right direction to find the answer. I am willing to learn.
import clr
clr.AddReference("System.Windows.Forms")
clr.AddReference("System.Drawing")
clr.AddReference("System.Data")
clr.AddReference("System.Globalization")
from System.Windows.Forms import *
from System.Drawing import *
from System.Data import *
from System.Data.SqlClient import *
from System.Globalization import *
class MyForm(Form):
    def __init__(self):
        # Setup the form
        self.Text = "Calculated Control Limits Form"
        self.StartPosition = FormStartPosition.CenterScreen  # https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.form.startposition?view=net-5.0
        # Create & configure the DataGridView label
        self.setupDataGridViewLabel()
        # Create, configure, and populate data for the DataGridView
        self.setupDataGridView()
        self.PopulateDataGridView("SELECT * FROM SPC_CALCULATED_CONTROL_LIMITS")
        # Create & configure a line seperator
        self.setupSeperator()
        # Create & configure a Label to grab the users Employee ID
        self.setupEmployeeIDLabel()
        # Create & configure a label for the Lower Control Limit
        self.Load_LCL_Label()
        # Create & configure a TextBox for the Lower Control Limit
        self.Load_LCL_TextBox()
        # Create & configure a label for the Target Mean
        self.LoadTargetMeanLabel()
        # Create & configure a TextBox for the Target Mean
        self.LoadTargetMeanTextBox()
        # Create & configure a label for the Upper Control Limit
        self.Load_UCL_Label()
        # Create & configure a TextBox for the Upper Control Limit
        self.Load_UCL_TextBox()
        # Create & configure a label for the Comment
        self.LoadCommentLabel()
        # Create a TextBox for the Comment
        self.LoadCommentTextBox()
        # Create & configure the button
        self.LoadButton()
        # Register the event handler
        self.button.Click += self.button_Click
        self.Size = Size(550, self.button.Bottom + self.button.Height + 15)
        # Load the Controls to the Form
        self.Controls.Add(self.lblCCL)
        self.Controls.Add(self.dgvCCL)
        self.Controls.Add(self.button)
        self.Controls.Add(self.lblSep)
        self.Controls.Add(self.lblEmpID)
        self.Controls.Add(self.lblLCL)
        self.Controls.Add(self.txtLCL)
        self.Controls.Add(self.lblTarget)
        self.Controls.Add(self.txtTarget)
        self.Controls.Add(self.lblUCL)
        self.Controls.Add(self.txtUCL)
        self.Controls.Add(self.lblComment)
        self.Controls.Add(self.txtComment)
    def button_Click(self, sender, args):
        self.Close()

    def setupDataGridView(self):
        self.dgvCCL = DataGridView()
        self.dgvCCL.AllowUserToOrderColumns = True
        self.dgvCCL.ColumnHeadersHeightSizeMode = DataGridViewColumnHeadersHeightSizeMode.AutoSize
        self.dgvCCL.Location = Point(self.lblCCL.Left, self.lblCCL.Bottom + 5)
        self.dgvCCL.Size = Size(500, 250)
        self.dgvCCL.TabIndex = 3
        #self.dgvCCL.ColumnCount = len(headers)
        self.dgvCCL.ColumnHeadersVisible = True

    def setupSeperator(self):
        self.lblSep = Label()
        self.lblSep.Size = Size(500, 2)
        self.lblSep.BackColor = Color.DarkGray
        self.lblSep.BorderStyle = BorderStyle.Fixed3D
        self.lblSep.Location = Point(self.dgvCCL.Left, self.dgvCCL.Bottom + 10)

    def setupDataGridViewLabel(self):
        self.lblCCL = Label()
        self.lblCCL.Text = "Calculated Control Limits View"
        self.lblCCL.Font = Font(self.Font, FontStyle.Bold)
        self.lblCCL.Location = Point(10, 10)
        self.lblCCL.BorderStyle = 0  # BorderStyle.None --> https://learn.microsoft.com/en-us/dotnet/api/system.windows.forms.borderstyle?view=net-5.0
        self.lblCCL.Size = Size(self.lblCCL.PreferredWidth, self.lblCCL.PreferredHeight)

    def setupEmployeeIDLabel(self):
        self.lblEmpID = Label()
        self.lblEmpID.Text = 'login.computeruser.upper()'
        self.lblEmpID.Font = Font(self.Font, FontStyle.Bold)
        self.lblEmpID.Size = Size(self.lblEmpID.PreferredWidth, self.lblEmpID.PreferredHeight)
        self.lblEmpID.Location = Point(self.dgvCCL.Right - self.lblEmpID.Width, self.dgvCCL.Top - 15)
        self.lblEmpID.BorderStyle = 0
    def Load_LCL_Label(self):
        self.lblLCL = Label()
        self.lblLCL.Text = 'Lower Control Limit'
        self.lblLCL.Size = Size(self.lblLCL.PreferredWidth, self.lblLCL.PreferredHeight)
        self.lblLCL.Location = Point(self.lblSep.Left + 65, self.lblSep.Top + 15)
        self.lblLCL.BorderStyle = 0

    def Load_LCL_TextBox(self):
        self.txtLCL = TextBox()
        self.txtLCL.Size = Size(100, 50)
        self.txtLCL.Location = Point(self.lblLCL.Left, self.lblLCL.Bottom + 3)

    def LoadTargetMeanLabel(self):
        self.lblTarget = Label()
        self.lblTarget.Text = 'Target Mean'
        self.lblTarget.Size = Size(self.lblLCL.PreferredWidth, self.lblLCL.PreferredHeight)
        self.lblTarget.Location = Point(self.lblLCL.Left + 125, self.lblSep.Top + 15)
        self.lblTarget.BorderStyle = 0

    def LoadTargetMeanTextBox(self):
        self.txtTarget = TextBox()
        self.txtTarget.Size = Size(100, 50)
        self.txtTarget.Location = Point(self.lblLCL.Left + 125, self.lblLCL.Bottom + 3)

    def Load_UCL_Label(self):
        self.lblUCL = Label()
        self.lblUCL.Text = 'Upper Control Limit'
        self.lblUCL.Size = Size(self.lblUCL.PreferredWidth, self.lblUCL.PreferredHeight)
        self.lblUCL.Location = Point(self.lblTarget.Left + 125, self.lblSep.Top + 15)
        self.lblUCL.BorderStyle = 0

    def Load_UCL_TextBox(self):
        self.txtUCL = TextBox()
        self.txtUCL.Size = Size(100, 50)
        self.txtUCL.Location = Point(self.lblTarget.Left + 125, self.lblLCL.Bottom + 3)

    def LoadCommentLabel(self):
        self.lblComment = Label()
        self.lblComment.Text = 'Comment'
        self.lblComment.Size = Size(self.lblUCL.PreferredWidth, self.lblUCL.PreferredHeight)
        self.lblComment.Location = Point(self.lblSep.Left, 350)
        self.lblComment.BorderStyle = 0

    def LoadCommentTextBox(self):
        self.txtComment = TextBox()
        self.txtComment.Multiline = True
        self.txtComment.Size = Size(500, 50)
        self.txtComment.Location = Point(self.lblSep.Left, self.lblComment.Bottom + 5)

    def LoadButton(self):
        self.button = Button()
        self.button.Size = Size(100, 30)
        self.button.Location = Point(self.lblSep.Left, self.txtComment.Bottom + 10)
        self.button.Text = "Click Me!"
    def PopulateDataGridView(self, selectcommand):
        bs = BindingSource()
        try:
            connectionstring = "YourSQLServerConnectionString"
            # Create a new data adapter based on the specified query.
            da = SqlDataAdapter(selectcommand, connectionstring)
            # Create a command builder to generate SQL update, insert, and delete commands based on selectCommand.
            cb = SqlCommandBuilder(da)
            # Populate a new data table and bind it to the BindingSource.
            dt = DataTable()
            Locale = CultureInfo.InvariantCulture
            da.Fill(dt)
            bs.DataSource = dt
            # Resize the DataGridView columns to fit the newly loaded content.
            self.dgvCCL.AutoResizeColumns(DataGridViewAutoSizeColumnsMode.AllCellsExceptHeader)
            # String connectionString =
            # "Data Source=<SQL Server>;Initial Catalog=Northwind;" +
            # "Integrated Security=True";
            # // Create a new data adapter based on the specified query.
            # dataAdapter = new SqlDataAdapter(selectCommand, connectionString);
            # // Create a command builder to generate SQL update, insert, and
            # // delete commands based on selectCommand.
            # SqlCommandBuilder commandBuilder = new SqlCommandBuilder(dataAdapter);
            # // Populate a new data table and bind it to the BindingSource.
            # DataTable table = new DataTable
            # {
            #     Locale = CultureInfo.InvariantCulture
            # };
            # dataAdapter.Fill(table);
            # bindingSource1.DataSource = table;
            # // Resize the DataGridView columns to fit the newly loaded content.
            # dataGridView1.AutoResizeColumns(
            #     DataGridViewAutoSizeColumnsMode.AllCellsExceptHeader);
        except SqlException:
            MessageBox.Show("Something happened.")


Application.EnableVisualStyles()
Application.SetCompatibleTextRenderingDefault(False)
Application.Run(MyForm())
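One thing that stands out in PopulateDataGridView above is that the filled BindingSource is never attached to the grid. In the WinForms pattern this snippet mirrors, the grid is bound to the BindingSource after the table is filled; a minimal sketch of that missing step (placed after bs.DataSource = dt) would be:

            # Sketch: bind the filled BindingSource to the DataGridView so the rows actually show up.
            self.dgvCCL.DataSource = bs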

Bitcoin verify a single block in python

Currently I am trying to verify Bitcoin block 77504 on my own, but from the Satoshi whitepaper it seems I have more questions than answers about how to do so.
First, the information from the previous block:
### What we know from last block ###
# height = 77503
# id = 00000000000447829abff59b3208a08ff28b3eb184b1298929abe6dd65c3578a
# version = 1
# timestamp = 1283325019
# bits = 459874456
# nonce = 1839166754
# difficulty = 623.3869598689275
# merkle_root = f18107935e8853011e477244241b5d786966495f8c59be46c92ac323c9cc8cde
# tx_count = 6
# size = 1438
# weight = 5752
Then the information from the block I want to verify:
### What we now want to mine ###
# height = 77504
# id = 00000000004582246e63ff7e0760c6f009e5ef5ce1eb5397be6f3eb9d698bda5
# version = 1
# timestamp = 1283326637
# bits = 459874456
# nonce = 191169021
# difficulty = 623.3869598689275
# merkle_root = 59c77dabd9f005c771b23b846c79c7741dc0e70d912f9470eace886b42a0d601
# tx_count = 44
# size = 11052
# weight = 44208
# txids = ["b899c55adb5a9604b72643c0f6cd5bf6c2447bb0fc035c50e13d2e471cbf5aa5","05180e3252c48a54d4d0abe9359621f54f3031fd318a812be96da0f13bfa8bf3","29d641bd4a5d4b01ceee1126af920513d52e088bad500fad1358c96962e25e28","40d52b5aa4be889739410f82f36c71fdda554b999fb14fc12aeab5bb2e6498cb","62d5e84500cc674a5172bea5755a223da974f90f614deb45c160478a8974419c","78de7a104617f58620ae9e7cf58bcd875d8043ee5046d93c9d69224c2ae39a1e","8831ad38deb23e1fbea6d376f1805aec194760b0f334a3c4b623aa0751445c9b","8a6bd0c2d74ea785d886bd6d87b6a4eb4cd35af5fb7ae3a364eb1f76b114c375","90d6da6a4b48e7330ae926cd00623fa8d94fd0a2b9a001475da22cbc49435ff9","d002da9953844c767cf7d42092b81e8c5bb03baf520d79028013fd3400bc8651","d1f8573148126e8d17641276f22ece33b8276311d93794ed2975ebb802b98fc8","d22ed765adba9c7f5fef19ff15cb89559b4148d571fcb40ee2889231ac1b8dea","f32b000adf9ab6d7a66593cb20cba4d3a3e0cbb3453608ce11a780fab532add5","32d2ff811677a8dbed4f317c9fcae4796b491bde944cca4a993734be787b4e79","4b806d44d9aff762601f21ad541c0e99a77d0a14b730774a2d7721dd094d9030","8c5258a8e3f60c9edfa55b86780a9832c8cd5f407dbe25948cd2fd87910ca4c4","bc4fcea23cd93bac13ab75bad8d23576be88a89e72f2c455932f096d6dd2a2da","ca5c53ef34ff5a2f816daf648c8dafb01680502c2c0c98b82b9527392f707e70","f9db6e9a62502dfe8057e7b1c0f3b8f145d354ee4e341233bfe8861fff143822","fc3730bbfa443558c677da6898f106ee7d5516b14e21bf369def7cb6a5bf6b8b","1cfa85d94ebfb9206ad49f421319a6ee99b339e4e8d292b866459bb742731d83","80fa7f38cc02b05b765675adba589d426e6122b1e8158726df0c55cf44c937eb","8c72683585901ff96edd14bde9c87ee91a9d54c187a15aa333e3d6b916399fd2","905e015afa4df7d9dc4a1a80a029e469258045fe9288071b16af49a2f458c2cb","bd8fab0ca0072cd230a4bb0a6efff5964756a023ca53d1f06c3fa22800fe044c","464280d62b8965255c286f1c4c5c457f594db64bdef1c8aaa7ddf776fc4d320e","625b8ec5af9ad2c1506aca8ad61670ce3acf7070fe5aabc2dec06dcda119503a","a2e06f6b0ea68cc2c9bf44d09e54832c830971961ed8ea5ec553918ab7eb48d2","a4de41f56f0970d9b1948f1e386a124860891d790f506c2e3bbe71dd289031d4","11475d2fbbc5e3aee2eff54aa9bf2f83d5f33fffce528cc9804f820e0f6a76e7","5dc019a6397c25d0e7db56f3ed2ccdc1db5642701224d56fb9ad1d1017279e7b","e5d1e0e5a2309cb07ec522a1eb56da5aa5e58ecaea6d49e278a52c1c24230dae","21d192ea46007dbeef7c9673ac158c0f9dbf80e0785380ae562a1fbb10430ae7","8fafe7a8168563c4c186d792b49fc0fa4368c6b2e5a1217f2f98b127ff1cdf87","d2410a45bcc0e4f5b7a8d84e730ffd9744e0dd0d9fb2d7e93fb71e590bf0f1fb","6103334a35171bc5a153b51dd7c94977c62822b1cec2fcac20ea9d0a959129d7","6551831774420989df2d9deeab196e14025f2e5fd502feb86dfc7ccedb917ce0","7c1a188e0c94c7d61aea1ebddb359f508c99fdd0e028887bbf3a3036a1b5bf8a","8b9c989cee69c107697b13aebd677879db48275c089ae206c85eb8db45acf50f","4195c5abf97adb2108de8aeee99cb751e2b4f9698607f60e326b9a67b9127a31","800b308f49fe86ff3323dd6240190212626d052a017dd1cad01540790604c00f","1d2fb37bab59d6f3f83f7596fde128a0b7b0f7ccd8fabc8d2a929923a268a847","8a8149d58791ace6cefd803021b4e870acca5b2c40e2e1415f423e6ec4333e32","7a1eb6b8ee1ff52648cd9a099c7658be53627732b226aa93f56d430c85a52991"]
I have prepared a small script that should calculate it for me, but no matter what, I am not able to reach the target hash 00000000004582246e63ff7e0760c6f009e5ef5ce1eb5397be6f3eb9d698bda5 to verify the mined block. It is also unclear to me where one would have to add their own Bitcoin wallet address to get the block reward.
from hashlib import sha256

def SHA256(text):
    return sha256(text.encode("ascii")).hexdigest()

def mine(block_number, transactions, previous_hash, prefix_zeros):
    prefix_str = '0'*prefix_zeros
    text = str(block_number) + str(transactions) + str(previous_hash) + str(nonce_to_verify)
    new_hash = SHA256(text)
    if new_hash.startswith(prefix_str):
        print(f"Jipiiii! Successfully mined bitcoins with nonce value:{nonce_to_verify}")
        return new_hash
    else:
        new_hash = None
        return new_hash

### normally this is unknown, would be something like range(0,100000000000), I just want to verify a block ###
nonce_to_verify = 191169021
### In what format are transactions presented ? ###
transactions = ["b899c55adb5a9604b72643c0f6cd5bf6c2447bb0fc035c50e13d2e471cbf5aa5","05180e3252c48a54d4d0abe9359621f54f3031fd318a812be96da0f13bfa8bf3","29d641bd4a5d4b01ceee1126af920513d52e088bad500fad1358c96962e25e28","40d52b5aa4be889739410f82f36c71fdda554b999fb14fc12aeab5bb2e6498cb","62d5e84500cc674a5172bea5755a223da974f90f614deb45c160478a8974419c","78de7a104617f58620ae9e7cf58bcd875d8043ee5046d93c9d69224c2ae39a1e","8831ad38deb23e1fbea6d376f1805aec194760b0f334a3c4b623aa0751445c9b","8a6bd0c2d74ea785d886bd6d87b6a4eb4cd35af5fb7ae3a364eb1f76b114c375","90d6da6a4b48e7330ae926cd00623fa8d94fd0a2b9a001475da22cbc49435ff9","d002da9953844c767cf7d42092b81e8c5bb03baf520d79028013fd3400bc8651","d1f8573148126e8d17641276f22ece33b8276311d93794ed2975ebb802b98fc8","d22ed765adba9c7f5fef19ff15cb89559b4148d571fcb40ee2889231ac1b8dea","f32b000adf9ab6d7a66593cb20cba4d3a3e0cbb3453608ce11a780fab532add5","32d2ff811677a8dbed4f317c9fcae4796b491bde944cca4a993734be787b4e79","4b806d44d9aff762601f21ad541c0e99a77d0a14b730774a2d7721dd094d9030","8c5258a8e3f60c9edfa55b86780a9832c8cd5f407dbe25948cd2fd87910ca4c4","bc4fcea23cd93bac13ab75bad8d23576be88a89e72f2c455932f096d6dd2a2da","ca5c53ef34ff5a2f816daf648c8dafb01680502c2c0c98b82b9527392f707e70","f9db6e9a62502dfe8057e7b1c0f3b8f145d354ee4e341233bfe8861fff143822","fc3730bbfa443558c677da6898f106ee7d5516b14e21bf369def7cb6a5bf6b8b","1cfa85d94ebfb9206ad49f421319a6ee99b339e4e8d292b866459bb742731d83","80fa7f38cc02b05b765675adba589d426e6122b1e8158726df0c55cf44c937eb","8c72683585901ff96edd14bde9c87ee91a9d54c187a15aa333e3d6b916399fd2","905e015afa4df7d9dc4a1a80a029e469258045fe9288071b16af49a2f458c2cb","bd8fab0ca0072cd230a4bb0a6efff5964756a023ca53d1f06c3fa22800fe044c","464280d62b8965255c286f1c4c5c457f594db64bdef1c8aaa7ddf776fc4d320e","625b8ec5af9ad2c1506aca8ad61670ce3acf7070fe5aabc2dec06dcda119503a","a2e06f6b0ea68cc2c9bf44d09e54832c830971961ed8ea5ec553918ab7eb48d2","a4de41f56f0970d9b1948f1e386a124860891d790f506c2e3bbe71dd289031d4","11475d2fbbc5e3aee2eff54aa9bf2f83d5f33fffce528cc9804f820e0f6a76e7","5dc019a6397c25d0e7db56f3ed2ccdc1db5642701224d56fb9ad1d1017279e7b","e5d1e0e5a2309cb07ec522a1eb56da5aa5e58ecaea6d49e278a52c1c24230dae","21d192ea46007dbeef7c9673ac158c0f9dbf80e0785380ae562a1fbb10430ae7","8fafe7a8168563c4c186d792b49fc0fa4368c6b2e5a1217f2f98b127ff1cdf87","d2410a45bcc0e4f5b7a8d84e730ffd9744e0dd0d9fb2d7e93fb71e590bf0f1fb","6103334a35171bc5a153b51dd7c94977c62822b1cec2fcac20ea9d0a959129d7","6551831774420989df2d9deeab196e14025f2e5fd502feb86dfc7ccedb917ce0","7c1a188e0c94c7d61aea1ebddb359f508c99fdd0e028887bbf3a3036a1b5bf8a","8b9c989cee69c107697b13aebd677879db48275c089ae206c85eb8db45acf50f","4195c5abf97adb2108de8aeee99cb751e2b4f9698607f60e326b9a67b9127a31","800b308f49fe86ff3323dd6240190212626d052a017dd1cad01540790604c00f","1d2fb37bab59d6f3f83f7596fde128a0b7b0f7ccd8fabc8d2a929923a268a847","8a8149d58791ace6cefd803021b4e870acca5b2c40e2e1415f423e6ec4333e32","7a1eb6b8ee1ff52648cd9a099c7658be53627732b226aa93f56d430c85a52991"]
### Just a check of 5 leading zeros... but why the difficulty 623.3869598689275 how to get to the 11 zeros? ###
difficulty=11
### Last Block (77503) found ###
lastfoundblock = "00000000000447829abff59b3208a08ff28b3eb184b1298929abe6dd65c3578a"
print("start mining")
new_hash = mine(77504,transactions,lastfoundblock, difficulty)
print("finished mining.")
print(f"Found block is: {new_hash} should be the same as 00000000004582246e63ff7e0760c6f009e5ef5ce1eb5397be6f3eb9d698bda5")
Help would be appreciated so that I can verify a single block; even a pointer in the right direction would help me solve my problem.
As no one was able to answer it... here is the code to verify a block's nonce:
import hashlib, struct, binascii
from time import time

def get_target_str(bits):
    # https://en.bitcoin.it/wiki/Difficulty
    exp = bits >> 24
    mant = bits & 0xffffff
    target_hexstr = '%064x' % (mant * (1 << (8 * (exp - 3))))
    print(f'T: {target_hexstr}')
    target_str = bytes.fromhex(target_hexstr)
    return target_str

def verify_nonce(version, prev_block, mrkl_root,
                 timestamp, bits_difficulty, nonce):
    target_str = get_target_str(bits_difficulty)
    header = (struct.pack("<L", version) +
              bytes.fromhex(prev_block)[::-1] +
              bytes.fromhex(mrkl_root)[::-1] +
              struct.pack("<LLL", timestamp, bits_difficulty, nonce))
    hash_result = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return bytes.hex(hash_result[::-1])
    #nonce += 1

test1_version = 0x3fff0000
test1_prev_block = "0000000000000000000140ac4688aea45aacbe7caf6aaca46f16acd93e1064c3"
test1_merkle_root = "422458fced12693312058f6ee4ada19f6df8b29d8cac425c12f4722e0dc4aafd"
test1_timestamp = 0x5E664C76
test1_bits_diff = 0x17110119
test1_nonce1 = 538463288  # (0x20184C38)
test1_block_hash = "0000000000000000000d493c3c1b91c8059c6b0838e7e68fbcf8f8382606b82c"
test1_calc_block_hash = verify_nonce(test1_version,
                                     test1_prev_block,
                                     test1_merkle_root,
                                     test1_timestamp,
                                     test1_bits_diff,
                                     test1_nonce1)
print(f'S: {test1_block_hash}')
print(f'R: {test1_calc_block_hash}')
if test1_block_hash == test1_calc_block_hash:
    print("hashing is correct")
Thanks to https://github.com/razvancazacu/bitcoin-mining-crypto
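Applying the same verify_nonce function to block 77504 from the question (using the header fields quoted there; this assumes those values are accurate) should reproduce its hash:

# Header fields of block 77504, as listed in the question.
version = 1
prev_block = "00000000000447829abff59b3208a08ff28b3eb184b1298929abe6dd65c3578a"
merkle_root = "59c77dabd9f005c771b23b846c79c7741dc0e70d912f9470eace886b42a0d601"
timestamp = 1283326637
bits = 459874456
nonce = 191169021
expected = "00000000004582246e63ff7e0760c6f009e5ef5ce1eb5397be6f3eb9d698bda5"

calculated = verify_nonce(version, prev_block, merkle_root, timestamp, bits, nonce)
print(calculated == expected)  # should print True if the quoted header fields are accurate

Note that get_target_str(459874456) prints a target with ten leading zero hex digits, which is why the block hash also starts with ten zeros; the difficulty value 623.39 is just the ratio between the difficulty-1 target and this target.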

spaCy v3 train NER based on existing model or add custom trained NER to existing model

In spaCy < 3.0 I was able to train the NER component within the trained en_core_web_sm model:
python -m spacy train en model training validation --base-model en_core_web_sm --pipeline "ner" -R -n 10
Specifically, I need the tagger and the parser of the en_core_web_sm model.
spaCy's new version doesn't accept these commands anymore; they need to be set in the config file. According to spaCy's website, these components can be added with the corresponding source and then inserted into frozen_components in the training section of the config file (I will provide my full config at the end of this question):
[components]
[components.tagger]
source = "en_core_web_sm"
replace_listeners = ["model.tok2vec"]
[components.parser]
source = "en_core_web_sm"
replace_listeners = ["model.tok2vec"]
.
.
.
[training]
frozen_components = ["tagger","parser"]
When I'm debugging, the following error occurs:
ValueError: [E922] Component 'tagger' has been initialized with an output dimension of 49 - cannot add any more labels.
When I add the tagger to the disabled components in the nlp section of the config file, or if I delete everything related to the tagger, debugging and training work. However, when applying the trained model to a text loaded into a doc, only the trained NER works and none of the other components do, e.g. the parser predicts everything as ROOT.
I also tried to train the NER model on its own and then add it to the loaded en_core_web_sm model:
MODEL_PATH = 'data/model/model-best'
nlp = spacy.load(MODEL_PATH)
english_nlp = spacy.load("en_core_web_sm")
ner_labels = nlp.get_pipe("ner")
english_nlp.add_pipe('ner_labels')
This leads to the following error:
ValueError: [E002] Can't find factory for 'ner_labels' for language English (en). This usually happens when spaCy calls `nlp.create_pipe` with a custom component name that's not registered on the current language class. If you're using a Transformer, make sure to install 'spacy-transformers'. If you're using a custom component, make sure you've added the decorator `@Language.component` (for function components) or `@Language.factory` (for class components).
Available factories: attribute_ruler, tok2vec, merge_noun_chunks, merge_entities, merge_subtokens, token_splitter, parser, beam_parser, entity_linker, ner, beam_ner, entity_ruler, lemmatizer, tagger, morphologizer, senter, sentencizer, textcat, textcat_multilabel, en.lemmatizer
Does anyone have a suggestion for how I can either train my NER with the en_core_web_sm model or how I could integrate my trained component?
Here's the full config file:
[paths]
train = "training"
dev = "validation"
vectors = null
init_tok2vec = null
[system]
gpu_allocator = null
seed = 0
[nlp]
lang = "en"
pipeline = ["tok2vec","tagger","parser","ner"]
batch_size = 1000
disabled = []
before_creation = null
after_creation = null
after_pipeline_creation = null
tokenizer = {"@tokenizers":"spacy.Tokenizer.v1"}
[components]
[components.tagger]
source = "en_core_web_sm"
replace_listeners = ["model.tok2vec"]
[components.parser]
source = "en_core_web_sm"
replace_listeners = ["model.tok2vec"]
[components.ner]
factory = "ner"
moves = null
update_with_oracle_cut_size = 100
[components.ner.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 64
maxout_pieces = 2
use_upper = true
nO = null
[components.ner.model.tok2vec]
@architectures = "spacy.Tok2VecListener.v1"
width = ${components.tok2vec.model.encode.width}
upstream = "*"
[components.tok2vec]
factory = "tok2vec"
[components.tok2vec.model]
@architectures = "spacy.Tok2Vec.v2"
[components.tok2vec.model.embed]
@architectures = "spacy.MultiHashEmbed.v1"
width = ${components.tok2vec.model.encode.width}
attrs = ["NORM","PREFIX","SUFFIX","SHAPE"]
rows = [5000,2500,2500,2500]
include_static_vectors = false
[components.tok2vec.model.encode]
@architectures = "spacy.MaxoutWindowEncoder.v2"
width = 256
depth = 8
window_size = 1
maxout_pieces = 3
[corpora]
[corpora.dev]
@readers = "spacy.Corpus.v1"
path = ${paths.dev}
max_length = 0
gold_preproc = false
limit = 0
augmenter = null
[corpora.train]
@readers = "spacy.Corpus.v1"
path = ${paths.train}
max_length = 2000
gold_preproc = false
limit = 0
augmenter = null
[training]
dev_corpus = "corpora.dev"
train_corpus = "corpora.train"
seed = ${system.seed}
gpu_allocator = ${system.gpu_allocator}
dropout = 0.1
accumulate_gradient = 1
patience = 1600
max_epochs = 0
max_steps = 20000
eval_frequency = 200
frozen_components = ["tagger","parser"]
before_to_disk = null
[training.batcher]
@batchers = "spacy.batch_by_words.v1"
discard_oversize = false
tolerance = 0.2
get_length = null
[training.batcher.size]
@schedules = "compounding.v1"
start = 100
stop = 1000
compound = 1.001
t = 0.0
[training.logger]
@loggers = "spacy.ConsoleLogger.v1"
progress_bar = false
[training.optimizer]
@optimizers = "Adam.v1"
beta1 = 0.9
beta2 = 0.999
L2_is_weight_decay = true
L2 = 0.01
grad_clip = 1.0
use_averages = false
eps = 0.00000001
learn_rate = 0.001
[training.score_weights]
ents_per_type = null
ents_f = 1.0
ents_p = 0.0
ents_r = 0.0
[pretraining]
[initialize]
vectors = "en_core_web_lg"
init_tok2vec = ${paths.init_tok2vec}
vocab_data = null
lookups = null
before_init = null
after_init = null
[initialize.components]
[initialize.tokenizer]
I provided a longer answer on spaCy's discussion forum here, but in a nutshell, if you want to source and freeze your parser/tagger, use this in the config:
[components.tagger]
source = "en_core_web_sm"
replace_listeners = ["model.tok2vec"]
[components.parser]
source = "en_core_web_sm"
replace_listeners = ["model.tok2vec"]
[components.tok2vec]
source = "en_core_web_sm"
i.e. make sure that the tagger & parser can connect to the correct tok2vec instance they were initially trained on.
You can then create an independent NER component either on top of the sourced (and pretrained) tok2vec, or create a new internal tok2vec component for the NER, or create a second tok2vec component with a distinct name, which you refer to as the upstream argument of the NER's Tok2VecListener.
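As a complement to the config-based route, spaCy v3 also lets you copy a trained component from one loaded pipeline into another via the source argument of add_pipe. This is only a sketch (paths taken from the question, component name hypothetical), and a listener-based NER may additionally need its tok2vec listeners replaced, as discussed above:

import spacy

# Separately trained NER pipeline and the stock English pipeline.
ner_nlp = spacy.load("data/model/model-best")
english_nlp = spacy.load("en_core_web_sm", exclude=["ner"])  # drop the stock NER to avoid two "ner" components

# Copy the trained "ner" component (with its weights) from the source pipeline.
english_nlp.add_pipe("ner", source=ner_nlp, name="custom_ner", last=True)

doc = english_nlp("Some example text.")
print([(ent.text, ent.label_) for ent in doc.ents])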

Issue in creating Tflite model populated with metadata (for object detection)

I am trying to run a tflite model on Android for object detection. To that end, I have successfully trained the model with my set of images as follows:
(a) Training:
!python3 object_detection/model_main.py \
--pipeline_config_path=/content/drive/My\ Drive/Detecto\ Tutorial/models/research/object_detection/samples/configs/ssd_mobilenet_v2_coco.config \
--model_dir=training/
(modifying the config file to point to where my specific TFRecords are located)
(b) Export inference graph
!python /content/drive/'My Drive'/'Detecto Tutorial'/models/research/object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path=/content/drive/My\ Drive/Detecto\ Tutorial/models/research/object_detection/samples/configs/ssd_mobilenet_v2_coco.config \
--output_directory={output_directory} \
--trained_checkpoint_prefix={last_model_path}
(c) Create tflite ready graph
!python /content/drive/'My Drive'/'Detecto Tutorial'/models/research/object_detection/export_tflite_ssd_graph.py \
--pipeline_config_path=/content/drive/My\ Drive/Detecto\ Tutorial/models/research/object_detection/samples/configs/ssd_mobilenet_v2_coco.config \
--output_directory={output_directory} \
--trained_checkpoint_prefix={last_model_path} \
--add_postprocessing_op=true
I have created a tflite model using tflite_convert from the graph file as follows:
!tflite_convert
--graph_def_file=/content/drive/My\ Drive/Detecto\ Tutorial/models/research/fine_tuned_model/tflite_graph.pb
--output_file=/content/drive/My\ Drive/Detecto\ Tutorial/models/research/fine_tuned_model/detect3.tflite
--output_format=TFLITE
--input_shapes=1,300,300,3
--input_arrays=normalized_input_image_tensor
--output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3'
--inference_type=FLOAT
--allow_custom_ops
The above tflite model has been validated independently and works fine (outside of Android).
There is now a requirement to populate the tflite model with metadata so that it can be processed in the sample Android code linked below; otherwise I get an error ("not a valid Zip file and does not have associated files") when running it in Android Studio.
https://github.com/tensorflow/examples/blob/master/lite/examples/object_detection/android/README.md
The sample .TFlite provided as part of the same link is populated with metadata and works fine.
When I try to use the following link:
https://www.tensorflow.org/lite/convert/metadata#deep_dive_into_the_image_classification_example
populator = _metadata.MetadataPopulator.with_model_file('/content/drive/My Drive/Detecto Tutorial/models/research/fine_tuned_model/detect3.tflite')
populator.load_metadata_buffer(metadata_buf)
populator.load_associated_files(['/content/drive/My Drive/Detecto Tutorial/models/research/fine_tuned_model/labelmap.txt'])
populator.populate()
to add metadata (the rest of the code is practically the same, with some changes of the meta description to object detection instead of image classification and specifying the location of labelmap.txt), it gives the following error:
<ipython-input-6-173fc798ea6e> in <module>()
1 populator = _metadata.MetadataPopulator.with_model_file('/content/drive/My Drive/Detecto Tutorial/models/research/fine_tuned_model/detect3.tflite')
----> 2 populator.load_metadata_buffer(metadata_buf)
3 populator.load_associated_files(['/content/drive/My Drive/Detecto Tutorial/models/research/fine_tuned_model/labelmap.txt'])
4 populator.populate()
1 frames
/usr/local/lib/python3.6/dist-packages/tensorflow_lite_support/metadata/metadata.py in _validate_metadata(self, metadata_buf)
540 "The number of output tensors ({0}) should match the number of "
541 "output tensor metadata ({1})".format(num_output_tensors,
--> 542 num_output_meta))
543
544
ValueError: The number of output tensors (4) should match the number of output tensor metadata (1)
The 4 output tensors are the ones mentioned in output_arrays in step 2 (someone may correct me there). I am not sure how to update the output tensor metadata accordingly.
Can anyone who has recently used object detection with a custom model (and then applied it on Android) help? Or help me understand how to update the tensor metadata to 4 outputs instead of 1?
Update on Jun 10, 2021:
See the latest tutorial about Metadata Writer Library on tensorflow.org.
Update:
The Metadata Writer library has been released. It currently supports image classifiers and object detectors, and more supported tasks are on the way.
Here is an example to write metadata for an object detector model:
Install the TFLite Support nightly PyPI package:
pip install tflite_support_nightly
Write metadata to the model using the following script:
from tflite_support.metadata_writers import object_detector
from tflite_support.metadata_writers import writer_utils
from tflite_support import metadata
ObjectDetectorWriter = object_detector.MetadataWriter
_MODEL_PATH = "ssd_mobilenet_v1_1_default_1.tflite"
_LABEL_FILE = "labelmap.txt"
_SAVE_TO_PATH = "ssd_mobilenet_v1_1_default_1_metadata.tflite"
writer = ObjectDetectorWriter.create_for_inference(
writer_utils.load_file(_MODEL_PATH), [127.5], [127.5], [_LABEL_FILE])
writer_utils.save_file(writer.populate(), _SAVE_TO_PATH)
# Verify the populated metadata and associated files.
displayer = metadata.MetadataDisplayer.with_model_file(_SAVE_TO_PATH)
print("Metadata populated:")
print(displayer.get_metadata_json())
print("Associated file(s) populated:")
print(displayer.get_packed_associated_file_list())
---------- Previous answer that writes metadata manually --------
Here is a code snippet you can use to populate metadata for object detection models, which is compatible with the TFLite Android app.
# Imports needed for this snippet (as in the TFLite metadata guide linked in the question).
import os
from tflite_support import flatbuffers
from tflite_support import metadata as _metadata
from tflite_support import metadata_schema_py_generated as _metadata_fb

model_meta = _metadata_fb.ModelMetadataT()
model_meta.name = "SSD_Detector"
model_meta.description = (
"Identify which of a known set of objects might be present and provide "
"information about their positions within the given image or a video "
"stream.")
# Creates input info.
input_meta = _metadata_fb.TensorMetadataT()
input_meta.name = "image"
input_meta.content = _metadata_fb.ContentT()
input_meta.content.contentProperties = _metadata_fb.ImagePropertiesT()
input_meta.content.contentProperties.colorSpace = (
_metadata_fb.ColorSpaceType.RGB)
input_meta.content.contentPropertiesType = (
_metadata_fb.ContentProperties.ImageProperties)
input_normalization = _metadata_fb.ProcessUnitT()
input_normalization.optionsType = (
_metadata_fb.ProcessUnitOptions.NormalizationOptions)
input_normalization.options = _metadata_fb.NormalizationOptionsT()
input_normalization.options.mean = [127.5]
input_normalization.options.std = [127.5]
input_meta.processUnits = [input_normalization]
input_stats = _metadata_fb.StatsT()
input_stats.max = [255]
input_stats.min = [0]
input_meta.stats = input_stats
# Creates outputs info.
output_location_meta = _metadata_fb.TensorMetadataT()
output_location_meta.name = "location"
output_location_meta.description = "The locations of the detected boxes."
output_location_meta.content = _metadata_fb.ContentT()
output_location_meta.content.contentPropertiesType = (
_metadata_fb.ContentProperties.BoundingBoxProperties)
output_location_meta.content.contentProperties = (
_metadata_fb.BoundingBoxPropertiesT())
output_location_meta.content.contentProperties.index = [1, 0, 3, 2]
output_location_meta.content.contentProperties.type = (
_metadata_fb.BoundingBoxType.BOUNDARIES)
output_location_meta.content.contentProperties.coordinateType = (
_metadata_fb.CoordinateType.RATIO)
output_location_meta.content.range = _metadata_fb.ValueRangeT()
output_location_meta.content.range.min = 2
output_location_meta.content.range.max = 2
output_class_meta = _metadata_fb.TensorMetadataT()
output_class_meta.name = "category"
output_class_meta.description = "The categories of the detected boxes."
output_class_meta.content = _metadata_fb.ContentT()
output_class_meta.content.contentPropertiesType = (
_metadata_fb.ContentProperties.FeatureProperties)
output_class_meta.content.contentProperties = (
_metadata_fb.FeaturePropertiesT())
output_class_meta.content.range = _metadata_fb.ValueRangeT()
output_class_meta.content.range.min = 2
output_class_meta.content.range.max = 2
label_file = _metadata_fb.AssociatedFileT()
label_file.name = os.path.basename("label.txt")
label_file.description = "Label of objects that this model can recognize."
label_file.type = _metadata_fb.AssociatedFileType.TENSOR_VALUE_LABELS
output_class_meta.associatedFiles = [label_file]
output_score_meta = _metadata_fb.TensorMetadataT()
output_score_meta.name = "score"
output_score_meta.description = "The scores of the detected boxes."
output_score_meta.content = _metadata_fb.ContentT()
output_score_meta.content.contentPropertiesType = (
_metadata_fb.ContentProperties.FeatureProperties)
output_score_meta.content.contentProperties = (
_metadata_fb.FeaturePropertiesT())
output_score_meta.content.range = _metadata_fb.ValueRangeT()
output_score_meta.content.range.min = 2
output_score_meta.content.range.max = 2
output_number_meta = _metadata_fb.TensorMetadataT()
output_number_meta.name = "number of detections"
output_number_meta.description = "The number of the detected boxes."
output_number_meta.content = _metadata_fb.ContentT()
output_number_meta.content.contentPropertiesType = (
_metadata_fb.ContentProperties.FeatureProperties)
output_number_meta.content.contentProperties = (
_metadata_fb.FeaturePropertiesT())
# Creates subgraph info.
group = _metadata_fb.TensorGroupT()
group.name = "detection result"
group.tensorNames = [
output_location_meta.name, output_class_meta.name,
output_score_meta.name
]
subgraph = _metadata_fb.SubGraphMetadataT()
subgraph.inputTensorMetadata = [input_meta]
subgraph.outputTensorMetadata = [
output_location_meta, output_class_meta, output_score_meta,
output_number_meta
]
subgraph.outputTensorGroups = [group]
model_meta.subgraphMetadata = [subgraph]
b = flatbuffers.Builder(0)
b.Finish(
model_meta.Pack(b),
_metadata.MetadataPopulator.METADATA_FILE_IDENTIFIER)
metadata_buf = b.Output()  # serialized metadata flatbuffer, used by the populator below
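To actually attach this metadata buffer to the model, the same populator steps as in the question can follow. This is a sketch with hypothetical file names; the basename of the label file passed here must match label_file.name registered in the metadata above:

# Hypothetical paths; adjust to your model and label file.
_MODEL_FILE = "detect3.tflite"
_LABEL_FILE = "label.txt"  # basename must match label_file.name set in the metadata above

populator = _metadata.MetadataPopulator.with_model_file(_MODEL_FILE)
populator.load_metadata_buffer(metadata_buf)
populator.load_associated_files([_LABEL_FILE])
populator.populate()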

Compression of Inverted index using blocked compression

I have built an inverted index dictionary of a text collection and need to compress the dictionary using blocked compression with k=8, and, in the postings file, the gaps between docids using gamma encoding.
def createCompressedIndex(dictionary_uncomp_v1):
    for term in dictionary_uncomp_v1.keys():
        entry = dictionary_uncomp_v1.get(term)
        postingList = []
        prevId = 0
        pEntry = PostingEntry(0, 0, 0, 0)
        for pEntry in entry.postingList:
            docId = getGammaCode(pEntry.docId - prevId)
            frequency = getGammaCode(pEntry.termFreq)
            newPEntry = PostingEntry(docId, frequency, 0, 0)
            postingList.extend(newPEntry)
            prevId = pEntry.docId
            ptemp = docId + frequency
        docFrequency = getGammaCode(entry.docFreq)
        entrytemp = ptemp + docFrequency
        totalTermFreq = getGammaCode(entry.totTermFreq)
        compressedEntry = DictEntry(term, docFrequency, totalTermFreq, postingList)
        dictionary_comp_v1[term] = bytearray(entrytemp)
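The snippet above assumes a getGammaCode helper. As a reference point only (an assumption about how such a helper might behave, not the poster's implementation), a minimal Elias gamma encoder for positive integers returning a bit string could look like this:

def getGammaCode(n: int) -> str:
    """Elias gamma code of a positive integer, as a bit string."""
    if n < 1:
        raise ValueError("gamma coding is defined for integers >= 1")
    binary = bin(n)[2:]                      # binary representation without the '0b' prefix
    return "0" * (len(binary) - 1) + binary  # unary length prefix followed by the binary form

# Example: getGammaCode(1) == '1', getGammaCode(5) == '00101'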
