I want to write a Python app that uses GTK (via gi.repository) to display a textual view of a huge amount of data. (Specifically, disassembled instructions from a program, similar to what IDA shows.)
I thought this should be fairly simple: use an ordinary GtkTextView with a custom subclass of GtkTextBuffer which would handle the "give me some text" request, generate some text (disassemble some instructions) and some tags (for colouring, formatting, etc.), and return them.
The issue is I can't find any information on how to subclass GtkTextBuffer in this way, to provide the text myself. I've tried just implementing the get_text and get_slice methods in my subclass, but they never seem to be called. It seems the only thing I can do is use a standard GtkTextBuffer with the set_text method and somehow keep track of the cursor position and number of lines to display, but this seems entirely opposite to how MVC should work. There are potentially millions of lines, so generating all the text in advance is infeasible.
I'm using Python 3.4 and GTK3.
Gtk.TextBuffer is from an external library that isn't written in Python. You've run into one limitation of that situation. With most Python libraries you can subclass their classes or monkeypatch their APIs however you like. GTK's C code, on the other hand, is unaware that it's being used from Python, and as you have noticed, completely ignores your overridden get_text() and get_slice() methods.
GTK's classes additionally have the limitation that you can only override methods that have been declared "virtual". Here's how that translates into Python: you can see a list of virtual methods in the Python GI documentation (example for Gtk.TextBuffer). These methods all start with do_ and are not meant to be called from your program, only overridden. Python GI will make the GTK code aware of these overrides, so that when you override, e.g., do_insert_text() and subsequently call insert_text(), the chain of calls will look something like this:
Python insert_text()
  → C gtk_text_buffer_insert_text()
    → C GtkTextBufferClass->insert_text() (internal virtual method)
      → Python do_insert_text()
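For example, here's a minimal sketch of overriding a virtual method with PyGObject (just the mechanism, not your disassembly view):

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

class MyBuffer(Gtk.TextBuffer):
    def do_insert_text(self, location, text, length):
        print("about to insert %r" % text)
        # chain up so the default insertion still happens
        Gtk.TextBuffer.do_insert_text(self, location, text, length)

buf = MyBuffer()
buf.set_text("hello")   # ends up in do_insert_text() via the chain above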
Unfortunately, as you can see from the documentation that I linked above, get_text() and get_slice() are not virtual, so you can't override them in a subclass.
You might be able to achieve your aim by wrapping one TextBuffer (which contains the entirety of your disassembled instructions) in another (which contains an excerpt and is actually hooked up to the text view). You could set marks in the first buffer to show where the excerpt should begin and end, and connect signals such that when the text in the first buffer changes, or the marks move, the text between the marks is copied to the second buffer.
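A rough sketch of that arrangement (the logic for deciding where to place the marks is omitted, since it depends on your scrolling behaviour):

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk

source = Gtk.TextBuffer()    # holds the full disassembly
excerpt = Gtk.TextBuffer()   # the buffer actually shown in the text view

begin = source.create_mark("excerpt-begin", source.get_start_iter(), True)
end = source.create_mark("excerpt-end", source.get_end_iter(), False)

def sync(*args):
    start = source.get_iter_at_mark(begin)
    stop = source.get_iter_at_mark(end)
    excerpt.set_text(source.get_text(start, stop, True))

source.connect("changed", sync)
source.connect("mark-set", sync)   # emitted whenever a mark moves

view = Gtk.TextView(buffer=excerpt)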
I'm developing a GUI application in Python that stores its documents in an XML-based format. The application is a mathematical model with several pre-defined components which can be drag-and-dropped. I'd also like the user to be able to create custom components by writing a Python function inside an editor provided within the application. My issue is with storing these functions in the XML.
A function might look something like this:
def func(node, timestamp):
    return node.weight * timestamp.day + 4
These functions are wrapped in an object which provides a standard way of calling them (compared to the pre-defined components). If I was to create one from Python directly it would look like this:
parameter = ParameterFunction(func)
The function is then called by the model like this:
parameter.value(node=node, timestamp=timestamp)
The ParameterFunction object has to_xml and from_xml methods which need to serialise/deserialise the object to/from an XML representation.
My question is: how do I store the Python functions in an XML document?
One solution I have thought of so far is to store the function definition as a string, eval() or exec() it for use but keep the string, then store the string in a CDATA block in the XML. Are there any issues with this that I'm not seeing?
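To make that concrete, the round trip I'm imagining looks roughly like this (a sketch only: it assumes lxml, since the standard library's ElementTree has no CDATA support, and a convention that every snippet defines a function named func):

from lxml import etree

class ParameterFunction(object):
    def __init__(self, func, source=None):
        self.func = func
        self._source = source  # the definition string, kept for serialisation

    def value(self, **kwargs):
        return self.func(**kwargs)

    def to_xml(self):
        element = etree.Element("parameterfunction")
        element.text = etree.CDATA(self._source)
        return element

    @classmethod
    def from_xml(cls, element):
        namespace = {}
        exec(element.text, namespace)  # relies on the snippet defining "func"
        return cls(namespace["func"], source=element.text)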
An alternative would be to store all of the Python code in a separate file and have the XML reference just the function names. This could be nice, as the code could be edited easily in an external editor. In which case, what is the best way to import the code? I am envisaging fighting with the Python import path...
I'm aware there will be security concerns with running untrusted code, but I'm willing to make this tradeoff for the freedom it gives users.
The specific application I'm referring to is on github. I'm happy to provide more information if it's needed, but I've tried to keep it fairly generic here. https://github.com/snorfalorpagus/pywr/blob/120928eaacb9206701ceb9bc91a5d73740db1953/pywr/core.py#L396-L402
Nope, you have the easiest and best solution that I can think of. Just keep them as strings, as long as you're not worried about running the untrusted code.
The way I'd deal with external Python scripts containing tiny snippets like yours would be to treat them as plain text files and read them in as strings. This avoids all the problems with importing them. Just read them in and call exec on them; the functions will then exist in scope.
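For example, something like this (a sketch; it assumes the convention that each snippet file defines a function named func, and the file name is made up):

with open("components/my_component.py") as f:
    source = f.read()

namespace = {}
exec(source, namespace)            # the function now exists in namespace["func"]
parameter = ParameterFunction(namespace["func"])
# keep `source` around so it can be written back out to the XML later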
EDIT: I was going to add something on sandboxing Python code, but after a bit of research it seems this would not be an easy task; it would be easier to sandbox the entire program. Another, longer and harder way to restrict the untrusted code would be to create your own tiny interpreter that only performs safe operations (i.e. mathematical operations, calling existing functions, etc.).
This isn't a keyboard remap request. My current dilemma is an inability to edit the goto_definition command (bound to F12 by default). If I could find the .py file for it, this would (hopefully) be a piece of cake.
The larger scope of my project requires me to modify the functionality of goto_definition to more closely resemble the equivalent function in CodeWright. I'm working in ST3, and reverting to ST2 isn't an option.
Let me more clearly elaborate on my hurdles:
Locate the .py file which contains the information that goto_definition uses when it runs.
Modify the nature of that command to be a little more flexible:
Essentially, there are a few methods, EditElementHandleR, MSBsplineCurveCR, GetElementDescrP, GetModelRef, and several others of similar nature.
There are "tags" appended to some of these, and if a method name is to have a tag, it will be one of the following four: CR, CP, R, and P.
There are also methods with these names, sans tags.
CodeWright's behavior in taking the programmer to the definition is to point to the equivalent method name, without the tags, even if the cursor was currently sitting on a method name with tags.
Sublime cannot find the original method if I hit F12 (recall: goto_definition) while the cursor is sitting in the "tagged" method name.
Here's the ideal situation: My cursor is sitting in a method named EditElem|entHandleR (| denotes cursor), and I hit F12. Sublime then takes me to the EditElementHandle definition.
Unfortunately, goto_definition isn't implemented in Python; it's part of Sublime's compiled executable (mostly written in C++), so it's not user-modifiable. However, there are several code intelligence plugins available, including SublimeCodeIntel and Anaconda, that may be more amenable to your needs.

Code intelligence isn't magic: it simply uses (in your case fuzzy) searches to match what's under the cursor against the index of a language library. All you'd have to do is slightly alter the searching logic to check for the possible presence of one of your "tags" and ignore it.

You may even be able to write a wrapper around goto_definition that does this for you, and skip the bother of learning a large codebase. Sublime's API should help, as should a close look at the source of sublime.py and sublime_plugin.py, since there are some undocumented functions.
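A sketch of that wrapper idea (lookup_symbol_in_index() is part of the ST3 API; the command name and tag list here are specific to your case), which you could then bind to F12 in place of goto_definition:

import re
import sublime
import sublime_plugin

TAG = re.compile(r'(?:CR|CP|R|P)$')   # the four possible "tags"

class GotoUntaggedDefinitionCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        window = self.view.window()
        word = self.view.substr(self.view.word(self.view.sel()[0]))
        base = TAG.sub('', word)    # EditElementHandleR -> EditElementHandle
        locations = window.lookup_symbol_in_index(base)
        if locations:
            path, display_path, (row, col) = locations[0]
            window.open_file("%s:%d:%d" % (path, row, col),
                             sublime.ENCODED_POSITION)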
Does anyone know how pydev determines what to use for code completion? I'm trying to define a set of classes specifically to enable code completion. I've tried using __new__ to set __dict__ and also __slots__, but neither seems to get listed in pydev autocomplete.
I've got a set of enums I want to list in autocomplete, but I'd like to set them in a generator, not hardcode them all for each class.
So rather than
class TypeA(object):
    ValOk = 1
    ValSomethingSpecificToThisClassWentWrong = 4

    def __call__(self):
        return 42
I'd like to do something like

def TYPE_GEN(name, val, enums={}):
    def call(self):
        return val
    # (note: also listing the enum names in __slots__, as I'd tried, raises a
    # ValueError, because they conflict with the class attributes set below)
    dct = {"__call__": call}
    for k, v in enums.items():
        dct[k] = v
    return type(name, (), dct)

TypeA = TYPE_GEN("TypeA", 42,
                 {"ValOk": 1,
                  "ValSomethingSpecificToThisClassWentWrong": 4})
What can I do to help the processing out?
edit:
The comments seem to be about questioning what I am doing. Again, a big part of what I'm after is code completion. I'm using python binding to a protocol to talk to various microcontrollers. Each parameter I can change (there are hundreds) has a name conceptually, but over the protocol I need to use its ID, which is effectively random. Many of the parameters accept values that are conceptually named, but are again represented by integers. Thus the enum.
I'm trying to autogenerate a Python module for the library, so the group can specify what they want to change using the names instead of the error-prone numbers. The __call__ method will return the id of the parameter; the enums are the allowable values for the parameter.
Yes, I can generate the verbose version of each class. One line for each type seemed clearer to me, since the point is autocomplete, not viewing these classes.
OK, as pointed out, your code is too dynamic for this... PyDev will only analyze your own code statically (i.e.: code that lives inside your project).
Still, there are some alternatives there:
Option 1:
You can force PyDev to analyze code that's in your library (i.e.: in site-packages) dynamically, in which case it can get that information through a shell.
To do that, you'd have to create a module in site-packages and add it to the 'forced builtins' in your interpreter configuration. See http://pydev.org/manual_101_interpreter.html for details.
Option 2:
Another option would be putting it into your predefined completions (but in this case it also needs to be in the interpreter configuration, not in your code -- and you'd have to make the completions explicit there anyway). See the link above for how to do this too.
Option 3:
Generate the actual code. I believe Cog (http://nedbatchelder.com/code/cog/) is the best alternative for this, as you can write Python code that outputs the contents of the file, and later change the code and rerun Cog to update what's needed. If you want proper completions without having to configure your code as if it were a library in PyDev, I believe this is the best alternative -- and you'd get a better grasp of what you have, since the structure would be explicit.
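For illustration, a Cog block generating the classes from the question might look something like this (the inline PARAMETERS table is just a stand-in for wherever the real definitions live); running Cog with -r on the file replaces everything between ]]] and [[[end]]] with the generated classes:

# [[[cog
# import cog
# PARAMETERS = {"TypeA": (42, {"ValOk": 1,
#                              "ValSomethingSpecificToThisClassWentWrong": 4})}
# for name, (param_id, enums) in sorted(PARAMETERS.items()):
#     cog.outl("class %s(object):" % name)
#     for enum_name, enum_value in sorted(enums.items()):
#         cog.outl("    %s = %d" % (enum_name, enum_value))
#     cog.outl("    def __call__(self):")
#     cog.outl("        return %d" % param_id)
# ]]]
# [[[end]]]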
Note that cog also works if you're in other languages such as Java/C++, etc. So, it's something I'd recommend adding to your tool set regardless of this particular issue.
Fully general code completion for Python isn't actually possible in an "offline" editor (as opposed to in an interactive Python shell).
The reason is that Python is too dynamic; basically anything can change at any time. If I type TypeA.Val and ask for completions, the system would have to know what object TypeA is bound to, what its class is, and what the attributes of both are. All three of those facts can change (and do; TypeA starts undefined and is only bound to an object at some specific point during program execution).
So the system would have to know: at what point in the program run do you want the completions from? And even if there were some unambiguous way of specifying that, there's no general way to know the state of everything in the program at that point without actually running it to that point, which you probably don't want your editor to do!
So what pydev does instead is guess, when it's pretty obvious. If you have a class block in a module foo defining class Bar, then it's a safe bet that the name Bar imported from foo is going to refer to that class. And so you know something about what names are accessible under Bar., or on an object created by obj = Bar(). Sure, the program could be rebinding foo.Bar (or altering its set of attributes) at runtime, or could be run in an environment where import foo is hitting some other file. But that sort of thing happens rarely, and the completions are useful in the common case.
What that means though is that you basically lose completions whenever you use "too much" of Python's dynamic language flexibility. Defining a class by calling a function is one of those cases. It's not ready to guess that TypeA has names ValOk and ValSomethingSpecificToThisClassWentWrong; after all, there's presumably lots of other objects that result from calls to TYPE_GEN, but they all have different names.
So if your main goal is to have completions, I think you'll have to make it easy for pydev and write these classes out in full. Of course, you could use similar code to generate the Python files (textually) if you wanted. It looks like there's actually more "syntactic overhead" in defining these with dictionaries than as a class, though; you're writing "a": b, per item rather than a = b. Unless you can generate these more systematically or parse existing definition files or something, I think I'd find the static class definition easier to read and write than the dictionary driving TYPE_GEN.
The simpler your code, the more likely completion is to work. Would it be reasonable to have this as a separate tool that generates Python code files containing the class definitions like you have above? This would essentially be the best of both worlds. You could even put the name/value pairs in a JSON or INI file or what have you, eliminating the clutter of the method definitions among the name/value pairs. The only downside is needing to run the tool to regenerate the code files when the codes change, but at least that's an automated, simple process.
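As a sketch of such a tool (the JSON layout here is invented for illustration):

import json

def generate(json_path, out_path):
    """Write plain class definitions that PyDev can analyze statically."""
    with open(json_path) as f:
        spec = json.load(f)   # e.g. {"TypeA": {"id": 42, "enums": {"ValOk": 1}}}
    with open(out_path, "w") as out:
        for name, info in sorted(spec.items()):
            out.write("class %s(object):\n" % name)
            for enum_name, enum_value in sorted(info["enums"].items()):
                out.write("    %s = %d\n" % (enum_name, enum_value))
            out.write("    def __call__(self):\n")
            out.write("        return %d\n\n" % info["id"])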
Personally, I would just go with making things more verbose and writing out the classes manually, but that's just my opinion.
On a side note, I don't see much benefit in making the classes callable vs. just having an id class variable. Both require knowing what to type: TypeA() vs TypeA.id. If you want to prevent instantiation, I think throwing an exception in __init__ would make your intentions a bit clearer.
I'm trying to document a python class using Doxygen. The class exposes a set of properties over d-bus, but these have no corresponding public getters/setters in the python class. Instead, they are implemented through a d-bus properties interface (Set/Get/GetAll/Introspect).
What I want to do is to be able to document these properties using something like this:
## @property package::Class::Name description
The whole package::Class works (the same method finds functions, so it finds the right class).
When running doxygen I get the following error:
warning: documented function `package::Class::Name' was not declared or defined.
I can live with a warning, but unfortunately the property also fails to appear in the documentation generated for the class, so it's not just a warning: the property is silently dropped as well.
So, my question is, how, if I can, do I make the non-existing property member appear in the generated docs?
Define the attribute inside an if 0: block:
## @class X
## @brief this is useless
class X:
    if 0:
        ## @brief whatevs is a property that doesn't exist in spacetime
        ##
        ## It is designed to make bunny cry.
        whatevs = property
This will cause it to exist in the documentation (tested with doxygen 1.8.1.2-1 on debian-squeeze). The attribute will never be made to exist at runtime; in fact, it looks like the Python bytecode optimizer eliminates the if statement and its body altogether.
I looked into something similar previously and couldn't find a direct way to coax Doxygen into documenting an undefined member. There are two basic kludges you can use here:
1.) Generate a dummy object (or dummy members) for Doxygen to inventory, which don't actually exist in the live code.
2.) If the adjustments you need are fairly predictable and regular, you could write an INPUT_FILTER for Doxygen which takes your files and converts them before parsing. There are some issues with this method: if you plan on including the code in the documentation and the filter has to add or remove lines from the file, the line numbers it indicates will be off, and any code windows shown with the documentation will be off by that number of lines. You can also set the option to filter the displayed sources to adjust for this, but depending on who the consumer of your documentation is, it may be confusing for the copy in Doxygen not to match what's in the real source.
In our case we use a python script which Doxygen runs from the command-line with the file path as the arg. We read the file indicated and write what we want Doxygen to interpret instead to stdout. If you need the source copies displayed in doxygen to be filtered as well you can set FILTER_SOURCE_FILES to YES.
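As a sketch of such a filter, the script below expands a "## @dbus_property Name description" marker (a convention invented here, not a Doxygen command) into a dummy attribute Doxygen can document; you'd hook it up with INPUT_FILTER = "python filter.py" in the Doxyfile:

import re
import sys

# expand "## @dbus_property Name description" markers into dummy attributes
marker = re.compile(r'(\s*)## @dbus_property (\w+) (.*)')

with open(sys.argv[1]) as f:
    for line in f:
        match = marker.match(line)
        if match:
            indent, name, description = match.groups()
            sys.stdout.write("%s## @brief %s\n" % (indent, description))
            sys.stdout.write("%s%s = property\n" % (indent, name))
        else:
            sys.stdout.write(line)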
I'm wondering how to go about implementing a macro recorder for a python gui (probably PyQt, but ideally agnostic). Something much like in Excel but instead of getting VB macros, it would create python code. Previously I made something for Tkinter where all callbacks pass through a single class that logged actions. Unfortunately my class doing the logging was a bit ugly and I'm looking for a nicer one. While this did make a nice separation of the gui from the rest of the code, it seems to be unusual in terms of the usual signals/slots wiring. Is there a better way?
The intention is that a user can work their way through a data analysis procedure in a graphical interface, seeing the effect of their decisions. Later, the recorded procedure could be applied to other data with minor modification, and without needing to start up the GUI.
You could apply the command design pattern: when your user executes an action, generate a command that represents the changes required. You then implement some sort of command pipeline that executes the commands themselves, most likely just calling the methods you already have. Once the commands are executed, you can serialize them or take note of them the way you want and load the series of commands when you need to re-execute the procedure.
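A minimal sketch of that idea (the command names and the model's methods are invented for illustration):

import json

class Command(object):
    name = None
    def __init__(self, **params):
        self.params = params
    def execute(self, model):
        raise NotImplementedError
    def to_dict(self):
        return {"name": self.name, "params": self.params}

class SmoothCommand(Command):
    name = "smooth"
    def execute(self, model):
        model.smooth(**self.params)   # just calls the method you already have

COMMANDS = {cls.name: cls for cls in [SmoothCommand]}

class Recorder(object):
    def __init__(self, model):
        self.model = model
        self.log = []
    def run(self, command):
        command.execute(self.model)
        self.log.append(command.to_dict())
    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.log, f)
    def replay(self, path):
        with open(path) as f:
            for entry in json.load(f):
                COMMANDS[entry["name"]](**entry["params"]).execute(self.model)

Since the GUI only ever builds Command objects and hands them to the recorder, replaying a saved log needs no GUI at all.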
Thinking in high level, this is what I'd do:
Develop a decorator function with which I'd decorate every event-handling function (a sketch follows this list).
This decorator function would take note of the function called, and its parameters (and possibly return values), in a unified data structure -- taking care, in this data structure, to mark Widget and Control instances as a special type of object. That is because in other runs these widgets won't be the same instances -- you can't even serialize toolkit widget instances, be it Qt or otherwise.
When the time comes to play a macro back, you fill in the gaps, replacing the widget-representing objects with the actual running instances, and simply call the original functions with the remaining parameters.
In toolkits that have a specialized "event" parameter that is passed down to event-handling functions, you will have to take care of serializing and deserializing this event as well.
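Here is a rough proof of concept of the decorator and the placeholder trick (WIDGET_REGISTRY and the "@widget:" convention are inventions for illustration):

import functools

RECORDING = []
WIDGET_REGISTRY = {}   # stable name -> live widget instance, rebuilt each run

def record(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        def encode(value):
            # widgets can't be serialized, so store a stable placeholder instead
            for name, widget in WIDGET_REGISTRY.items():
                if value is widget:
                    return "@widget:" + name
            return value
        RECORDING.append((func.__name__,
                          [encode(a) for a in args],
                          dict((k, encode(v)) for k, v in kwargs.items())))
        return func(*args, **kwargs)
    return wrapper

def replay(log, handlers):
    # handlers: function name -> the real event-handling function
    def decode(value):
        if isinstance(value, str) and value.startswith("@widget:"):
            return WIDGET_REGISTRY[value[len("@widget:"):]]
        return value
    for name, args, kwargs in log:
        handlers[name](*[decode(a) for a in args],
                       **dict((k, decode(v)) for k, v in kwargs.items()))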
I hope this can help. I could come up with some more complete proof-of-concept code for that (although I am in a mood to use tkinter today -- I'd have to read a lot to come up with a Qt4 example).
An example of what you're looking for is in mayavi2. For your purposes, mayavi2's "script record" functionality will generate a Python script that can then be trivially modified for other cases. I hear that it works pretty well.