This isn't a keyboard remap request. My current dilemma is an inability to edit the goto_definition command (bound to F12 by default). If I could find the .py file for it, this would (hopefully) be a piece of cake.
The larger scope of my project requires me to modify the functionality of goto_definition to more closely resemble the equivalent function in CodeWright. I'm working in ST3, and reverting to ST2 isn't an option.
Let me elaborate more clearly on my hurdles:
Locate the .py file which contains the information that goto_definition uses when it runs.
Modify the nature of that command to be a little more flexible:
Essentially, there are a few methods, EditElementHandleR, MSBsplineCurveCR, GetElementDescrP, GetModelRef, and several others of similar nature.
There are "tags" appended to some of these, and if a method name is to have a tag, it will be one of the following four: CR, CP, R, and P.
There are also methods with these names, sans tags.
CodeWright's behavior in taking the programmer to the definition is to point to the equivalent method name, without the tags, even if the cursor was currently sitting on a method name with tags.
Sublime cannot find the original method if I hit F12 (recall: goto_definition) while the cursor is sitting in the "tagged" method name.
Here's the ideal situation: My cursor is sitting in a method named EditElem|entHandleR (| denotes cursor), and I hit F12. Sublime then takes me to the EditElementHandle definition.
Unfortunately, goto_definition isn't implemented in Python; it's part of Sublime's compiled executable (mostly written in C++), so it's not user-modifiable. However, there are several code intelligence plugins available, including SublimeCodeIntel and Anaconda, that may be more amenable to your needs.

Code intelligence isn't magic: it simply uses (in your case fuzzy) searches to match what's under the cursor against the index of a language library. All you'd have to do is slightly alter the searching logic to check for the possible presence of one of your "tags" and ignore it. You may even be able to write a wrapper for goto_definition that does this for you, and not go through the bother of learning a large codebase. Sublime's API should help, as should looking closely at the source of sublime.py and sublime_plugin.py, as there are some undocumented functions.
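The tag-stripping part is simple enough to sketch on its own; here is a minimal, plugin-agnostic helper (the function name is my own illustration, not part of Sublime's API):

```python
import re

def strip_tag(symbol):
    """Strip a trailing tag (CR, CP, R, or P) from a method name, so a
    lookup can fall back to the untagged definition. The leftmost match
    anchored at the end wins, so "CR" is stripped whole rather than
    leaving a dangling "C"."""
    return re.sub(r'(?:CR|CP|R|P)$', '', symbol)
```

In an ST3 wrapper command, the stripped name could then be fed to something like Window.lookup_symbol_in_index() to jump to the untagged definition. One caveat: a method whose real name happens to end in R or P would also be stripped, so a robust wrapper should try the tagged name first and only fall back to the stripped one.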
What do you call this?
I want to organize my code and be able to click that little arrow to collapse certain things under it. Take this, for example: I want to hide it under my comment # Words, like this:
I use visual studio code and I'm working with python.
Some keywords that might be related: Bookmarks, whatever popovers are, and 'ribbons'.
This is called "code folding".
To do it with arbitrary blocks of lines of code, there is built-in functionality for some programming languages, which you can read more about in the VS Code docs on editor folding regions. The general pattern is that you create "marker" comments around the block: one like #region, and one like #endregion. The languages supporting markers at the time of this writing (according to the documentation) are JS, TS, C#, C, C++, F#, PowerShell, and VB.
For other languages, you should be able to achieve that with the maptz.regionfolder extension or other similar ones.
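As an illustration, a Python file folded with an extension such as maptz.regionfolder might use comment markers like these (the exact marker syntax depends on the extension's per-language settings; the region name and functions here are made up):

```python
# region Words
# Everything between the markers can be collapsed with the arrow in the
# gutter once a folding extension recognizes them.
def greet(name):
    return "Hello, " + name

def farewell(name):
    return "Bye, " + name
# endregion
```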
I want to write a Python app that uses GTK (via gi.repository) to display a textual view of a huge amount of data. (Specifically, disassembled instructions from a program, similar to what IDA shows.)
I thought this should be fairly simple: use an ordinary GtkTextView, and a custom subclass of GtkTextBuffer which will handle the "give me some text" request, generate some text (disassemble some instructions) and some tags (for colouring, formatting, etc) and return them.
The issue is I can't find any information on how to subclass GtkTextBuffer in this way, to provide the text myself. I've tried just implementing the get_text and get_slice methods in my subclass, but they seem to never be called. It seems like the only thing I can do is use a standard GtkTextBuffer and the set_text method, and try somehow to keep track of the cursor position and number of lines to display, but this seems entirely opposite to how MVC should work. There are potentially millions of lines, so generating all text in advance is infeasible.
I'm using Python 3.4 and GTK3.
Gtk.TextBuffer is from an external library that isn't written in Python. You've run into one limitation of that situation. With most Python libraries you can subclass their classes or monkeypatch their APIs however you like. GTK's C code, on the other hand, is unaware that it's being used from Python, and as you have noticed, completely ignores your overridden get_text() and get_slice() methods.
GTK's classes additionally have the limitation that you can only override methods that have been declared "virtual". Here's how that translates into Python: you can see a list of virtual methods in the Python GI documentation (example for Gtk.TextBuffer). These methods all start with do_ and are not meant to be called from your program, only overridden. Python GI will make the GTK code aware of these overrides, so that when you override, e.g., do_insert_text() and subsequently call insert_text(), the chain of calls will look something like this:
Python insert_text()
C gtk_text_buffer_insert_text()
C GtkTextBufferClass->insert_text() (internal virtual method)
Python do_insert_text()
Unfortunately, as you can see from the documentation that I linked above, get_text() and get_slice() are not virtual, so you can't override them in a subclass.
You might be able to achieve your aim by wrapping one TextBuffer (which contains the entirety of your disassembled instructions) in another (which contains an excerpt, and is actually hooked up to the text view.) You could set marks in the first text buffer to show where the excerpt should begin or end, and connect signals such that when the text in the first text buffer changes, or the marks change, then the text between the marks is copied to the second text buffer.
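The lazy-excerpt idea can be modeled outside GTK. Here is a minimal sketch (class and method names are my own) of a backing store that renders only the window of lines currently needed, which is what you would copy into the second, view-facing TextBuffer:

```python
class ExcerptSource:
    """Toy model of the two-buffer approach: a huge virtual document,
    of which only a small window is ever materialized as text."""

    def __init__(self, total_lines, render_line):
        self.total_lines = total_lines
        self.render_line = render_line  # called lazily, one line at a time

    def excerpt(self, first, count):
        """Render just the lines [first, first + count) for display."""
        last = min(first + count, self.total_lines)
        return "\n".join(self.render_line(i) for i in range(first, last))
```

On scroll (or when the marks move), a signal handler would call excerpt() and set_text() the result into the view's buffer, so the millions of lines are never generated all at once.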
So I have Emacs 24.3 and with it comes a quite recent python.el file providing a Python mode for editing.
But I keep reading that there is a python-mode.el on Launchpad, and comparing the two files it jumps out to me that the former is under 4000 lines, while the latter is almost 20000. This suggests that the latter is much more feature-rich.
And I can't find any online feature comparison between them, nor documentation, or at least a list of the features of each. Yes, there is syntax highlighting and an embedded interpreter, but what about completion in the shell buffer, completion in the source file buffer, autoindent, reindent, etc.?
So what are the important features of these modes? (Or any other Python mode for Emacs which you recommend.) Please provide detailed answers.
I was a python-mode.el user once but quit using it a year ago because I felt the way it was developed was not well organized. Here is a list from the notes I took at that time. But I should warn you that almost a year has passed since then, so the situation may have changed.
Many copy-and-pasted functions.
A lot of accidentally working code. For example, not passing variables around but relying on implicit binding. This produces many compile errors (and will not work if you change it to lexical scope).
Rough commit granularity. I sent a patch, and it was committed together with unrelated changes.
One thing I like about python-mode.el is that it comes with an automated test set (although I've never run it). python.el does not have a test set yet, but I know the author of python.el is writing one now.
While python.el is compact, that does not mean you get poor functionality. It is more a matter of keeping the core small and letting others extend it through a concise API. The same author of python.el wrote python-django.el to extend python.el for Django projects. I wrote an auto-completion plugin for Python called Jedi.el and an advanced IPython plugin called EIN. Both of them have better support for python.el than for python-mode.el (well, that's because I don't use python-mode.el, though).
There were a few things I missed from python-mode.el at first, but they were quickly fixed in python.el (of course, this probably means I was not using that much of python-mode.el's functionality).
what about completion in shell buffer, completion in source file buffer, autoindent, reindent etc.
completion in shell buffer:
It sort of works in both python.el and python-mode.el, but sometimes it does not work if you have a bad combination of Emacs version and python(-mode).el version. So python.el is probably safer in this regard.
But if you want better solution, use EIN :)
completion in source file buffer:
Just use Jedi.el :)
autoindent/reindent:
I don't know which one is better performance-wise. However, the keybinding for Return differs between the two. In python-mode.el, typing RET gives you autoindent. In python.el, RET does not indent; you should use C-j instead. Actually, C-j for newline-plus-indentation is universal behavior in Emacs, so python.el is better if you also program in other languages.
Having been heavily involved in developing python-mode.el over the last years, my comment is probably biased: I recommend Emacs beginners stay with python.el. Its author also deserves credit for some useful approaches.
python-mode.el is designed to boost editing productivity. It makes it easy to run or execute code via python2 and python3 or IPython shells in parallel.
It reduces the number of keystrokes needed by providing tailored commands, which make editing faster and assist programming by speech, macro-driven input, etc.
It supports Python language features not currently known to python.el:
py-up, py-down - traversing nested blocks
Avoiding typos by fetching forms at point, a clause for example:
py-backward-clause
py-copy-clause
py-down-clause
...
No need to customize when testing different versions:
py-execute-clause-python2
py-execute-clause-python3
py-execute-clause-ipython
...
a notion of finer-grained parts - py-expression, py-minor-expression
commands running versioned and parallel (I)Python executables, with no need to re-define the default Python
largely removing the need to mark an active region beforehand; see py-execute-line and a whole bunch more
To get an overview, have a look at the menu. Directory "doc" lists commands.
As for code quality: a way to compare both modes is probably to check the bugs listed at http://debbugs.gnu.org/. See for example bugs #15510 and #16875, or http://lists.gnu.org/archive/html/help-gnu-emacs/2014-04/msg00250.html
On the "rough commit granularity" point, already commented on above: while tkf is basically right to ask for smaller pieces, circumstances sometimes make me break that rule. Considerable parts are not written by hand but by programs residing in the "devel" directory. They create files used in the development branch first - i.e. components-python-mode. When starting a new feature, it is often not obvious whether the chosen path will be fruitful.
After a hundred would-be commits or so, it might still turn out to be impossible or inadvisable. Instead of posting all the meanders, in these cases I used to keep the experimental branch around for several days and check in once the tests passed.
BTW, I assume tkf refers not to compile errors --which would be looked into instantly-- but to compiler warnings. Unfortunately, Emacs mixes warnings about style preferences with real errors.
I am interested in modifying an existing plugin -- rabbit-eclipse -- that tracks time spent editing different java elements (classes, methods, etc.). The plugin currently tracks Java elements via the org.eclipse.jdt.core.IJavaElement interface. I would like to add the ability to track the different Python elements.
I have installed PyDev in Eclipse and looked through the included JAR files, but I'm unable to figure out which class would be the equivalent to IJavaElement (if it even exists).
What is the PyDev equivalent to IJavaElement?
PyDev doesn't provide an actual replacement for IJavaElement... (i.e.: it does not provide selection based on that).
Still, there may be different approaches which may work... one choice would be listening for regular text selections and doing what's done in org.python.pydev.editor.actions.PyMethodNavigation, which finds the scope of the current location using FastParser.firstClassOrFunction(doc, startLine, searchForward, pyEdit.isCythonFile()) -- would that be enough for what you want?
My project targets a low-cost and low-resource embedded device. I am dependent on a relatively large and sprawling Python code base, of which my use of its APIs is quite specific.
I am keen to prune this library's code back to its bare minimum by executing my test suite within a coverage tool like Ned Batchelder's coverage or figleaf, then scripting the removal of unused code in the various modules/files. This will help not only with understanding the library's internals, but also make writing any patches easier. Ned actually refers to the use of coverage tools to "reverse engineer" complex code in one of his online talks.
My question to the SO community is whether people have experience of using coverage tools in this way that they wouldn't mind sharing? What are the pitfalls if any? Is the coverage tool a good choice? Or would I be better off investing my time with figleaf?
The end-game is to be able to automatically generate a new source tree for the library, based on the original tree, but only including the code actually used when I run nosetests.
If anyone has developed a tool that does a similar job for their Python applications and libraries, it would be terrific to get a baseline from which to start development.
Hopefully my description makes sense to readers...
What you want isn't "test coverage", it is the transitive closure of "can call" from the root of the computation. (In threaded applications, you have to include "can fork").
You want to designate some small set (perhaps only 1) of functions that make up the entry points of your application, and want to trace through all possible callees (conditional or unconditional) of that small set. This is the set of functions you must have.
Python makes this very hard in general (IIRC, I'm not a deep Python expert) because of dynamic dispatch and especially due to "eval". Reasoning about which functions can get called can be pretty tricky for a static analyzer applied to a highly dynamic language.
One might use test coverage as a way to seed the "can call" relation with specific "did call" facts; that could catch a lot of dynamic dispatches (dependent on your test suite coverage). Then the result you want is the transitive closure of "can or did" call. This can still be erroneous, but is likely to be less so.
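The "can or did call" closure is just graph reachability; here is a minimal sketch in Python (the call-graph mapping and all names are illustrative, populated however you obtain static caller/callee facts plus coverage-derived "did call" facts):

```python
from collections import deque

def reachable(call_graph, roots):
    """Transitive closure of the call relation from a set of root
    functions. call_graph maps each function to the functions it can
    (or, per coverage data, did) call. Breadth-first traversal."""
    seen = set(roots)
    queue = deque(roots)
    while queue:
        caller = queue.popleft()
        for callee in call_graph.get(caller, ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen
```

Everything outside reachable(graph, roots) is then a candidate for pruning, subject to the dynamic-dispatch and eval caveats already mentioned.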
Once you get a set of "necessary" functions, the next problem will be removing the unnecessary functions from the source files you have. If the number of files you start with is large, the manual effort to remove the dead stuff may be pretty high. Worse, you're likely to revise your application, and then the answer as to what to keep changes. So for every change (release), you need to reliably recompute this answer.
My company builds a tool that does this analysis for Java packages (with appropriate caveats regarding dynamic loads and reflection): the input is a set of Java files and (as above) a designated set of root functions. The tool computes the call graph, and also finds all dead member variables and produces two outputs: a) the list of purportedly dead methods and members, and b) a revised set of files with all the "dead" stuff removed. If you believe a), then you use b). If you think a) is wrong, then you add the elements listed in a) to the set of roots and repeat the analysis until you think a) is right. To do this, you need a static analysis tool that parses Java, computes the call graph, and then revises the code modules to remove the dead entries. The basic idea applies to any language.
You'd need a similar tool for Python, I'd expect.
Maybe you can stick to just dropping files that are completely unused, although that may still be a lot of work.
As others have pointed out, coverage can tell you what code has been executed. The trick for you is to be sure that your test suite truly exercises the code fully. The failure case here is over-pruning because your tests skipped some code that will really be needed in production.
Be sure to get the latest version of coverage.py (v3.4): it adds a new feature to indicate files that are never executed at all.
BTW: for a first-cut prune, Python provides a neat trick: remove all the .pyc files in your source tree, then run your tests. Files that still have no .pyc file clearly were not executed!
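That trick is easy to script. A minimal sketch, assuming the pre-3.2 bytecode layout this answer relies on (each .pyc sits next to its .py, rather than under __pycache__ as in modern Python 3):

```python
import os

def unexecuted_modules(root):
    """Return .py files under root that have no neighboring .pyc,
    i.e. modules never imported during the test run (pre-__pycache__
    layout only)."""
    missing = []
    for dirpath, _, filenames in os.walk(root):
        names = set(filenames)
        for name in sorted(names):
            if name.endswith(".py") and name + "c" not in names:
                missing.append(os.path.join(dirpath, name))
    return missing
```

Run it after deleting all .pyc files and executing the test suite; the result is a first approximation of whole files that can be dropped.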
I haven't used coverage for pruning out, but it seems like it should do well. I've used the combination of nosetests + coverage, and it worked better for me than figleaf. In particular, I found the html report from nosetests+coverage to be helpful -- this should be helpful to you in understanding where the unused portions of the library are.