Visualize class dependencies in Python

I have started a new job and there are 100k lines of code written in Python 2.7 across four different repos.
The code is sometimes quite nested, with many library imports and a complex class structure, and no documentation.
I want to create a graph of the dependencies in order to understand the code better.
I have not found anything online except https://pypi.org/project/pydeps/, but that is not working for me, for reasons I haven't been able to pin down.
The solution should either query all python files in the four repos automatically, or it should take a single python file with some function call I have saved, and then go through all dependencies and graphically display them.
A good solution would also show which arguments (or keyword arguments) are passed along, or how often a function is called within the 100k lines, to indicate which methods matter most. This is not a hard requirement, however.
If someone could suggest one or more Python libraries (or VS Code extensions), that would be much appreciated.
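For a sense of what such a tool does under the hood, here is a minimal sketch (not a finished solution) that walks one repo, parses every .py file with the standard-library ast module, and prints file-to-import edges. The repo path is a placeholder, the SyntaxError handling is there because a Python 3 parser will reject some Python 2.7 syntax, and you would still need graphviz or networkx to actually draw the graph:

import ast
import os

def import_edges(repo_root):
    """Yield (file_path, imported_module) pairs for every import in the repo."""
    for dirpath, _dirnames, filenames in os.walk(repo_root):
        for filename in filenames:
            if not filename.endswith(".py"):
                continue
            path = os.path.join(dirpath, filename)
            with open(path) as handle:
                source = handle.read()
            try:
                tree = ast.parse(source, filename=path)
            except SyntaxError:
                continue  # e.g. Python 2-only syntax that this parser rejects
            for node in ast.walk(tree):
                if isinstance(node, ast.Import):
                    for alias in node.names:
                        yield path, alias.name
                elif isinstance(node, ast.ImportFrom):
                    yield path, node.module or "."

# Example usage: print the edges for one repo; feed them to graphviz/networkx to draw.
for source_file, imported in import_edges("path/to/one/repo"):
    print(source_file, "->", imported)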

Related

Statically analyse and extract the difference between two versions of the same Python library

I'm doing a research project on detecting breaking changes from Python library upgrades. One of the steps is to extract the difference between two major versions of the same Python library using static analysis (it could be AST-based or not), in order to triage the patterns of change. The detection should not only find differences in .py files, but also in other project files, including config files, resources, etc. Ideally, a scenario such as a .py file moving to another module should also be covered. So I have two questions here:
Is there a tool that can do a similar job and also support flexible configuration for analysis?
If not, what would be the best strategy to search for that kind of difference and identify its category (e.g. variable, function, etc.)?
Sorry if this is a silly question; I'm not coming from a Python background and am really running out of ideas here. Any thoughts, ideas, and input are welcome. Thanks in advance.
Just spitballing some ideas here:
I don't think I'd be so concerned about detecting changes in the source files up front. There are a lot of ways to move code around among files without changing the interface to the module. For example, you can put all of the code in __init__.py, or you can split it up into any number of files and subdirectories. However, the programmatic interface will stay the same.
Instead, you could use the dir() built-in to detect changes in the public classes and methods in the module. This will work well for libraries that use named arguments, but won't work well for functions which just use def func(*args, **kwargs) (this is why that should be avoided, all you former perl programmers!)
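For illustration, a small sketch of that dir()-based approach: snapshot the public names of the installed module under each version, then diff the snapshots. The module name and snapshot file names below are placeholders:

import importlib
import json

def public_names(module_name):
    """Return the sorted public (non-underscore) names exposed by a module."""
    module = importlib.import_module(module_name)
    return sorted(name for name in dir(module) if not name.startswith("_"))

def diff_snapshots(old_path, new_path):
    """Compare two saved snapshots and report added/removed public names."""
    with open(old_path) as f:
        old = set(json.load(f))
    with open(new_path) as f:
        new = set(json.load(f))
    return {"added": sorted(new - old), "removed": sorted(old - new)}

# Step 1 (run with version A of the library installed):
#     json.dump(public_names("somelib"), open("somelib_v1.json", "w"))
# Step 2 (run with version B installed), then:
#     print(diff_snapshots("somelib_v1.json", "somelib_v2.json"))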
If the module uses the new type hinting, you can really get some mileage out of detecting changes in types. If you use some tool that actually parses the Python and infers types, that would work as well. I would guess VS Code probably contains such a library that it uses to give context-sensitive help.

How To Remove Unused Python Functions Automatically [duplicate]

So you've got some legacy code lying around in a fairly hefty project. How can you find and delete dead functions?
I've seen these two references: Find unused code and Tool to find unused functions in php project, but they seem specific to C# and PHP, respectively.
Is there a Python tool that'll help you find functions that aren't referenced anywhere else in the source code (notwithstanding reflection/etc.)?
In Python you can find unused code by using dynamic or static code analyzers. Two examples for dynamic analyzers are coverage and figleaf. They have the drawback that you have to run all possible branches of your code in order to find unused parts, but they also have the advantage that you get very reliable results.
Alternatively, you can use static code analyzers that just look at your code, but don't actually run it. They run much faster, but due to Python's dynamic nature the results may contain false positives.
Two tools in this category are pyflakes and vulture. Pyflakes finds unused imports and unused local variables. Vulture finds all kinds of unused and unreachable code. (Full disclosure: I'm the maintainer of Vulture.)
The tools are available in the Python Package Index https://pypi.org/.
I'm not sure if this is helpful, but you might try using the coverage, figleaf or other similar modules, which record which parts of your source code are executed as you actually run your scripts/application.
Because of the fairly strict way Python code is laid out, would it be that hard to build a list of functions based on a regex looking for def function_name(..)?
Then search for each name and tot up how many times it features in the code. It wouldn't naturally take comments into account, but as long as you're only looking at functions with fewer than two or three occurrences...
It's a bit Spartan but it sounds like a nice sleepy-weekend task =)
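A quick sketch of that regex-and-count idea, with the obvious caveat that it cannot see dynamic calls and will happily match names inside comments and strings (the project path is a placeholder):

import os
import re
from collections import Counter

DEF_PATTERN = re.compile(r"^\s*def\s+(\w+)\s*\(", re.MULTILINE)

def count_function_uses(repo_root):
    """Count how often each defined function name appears anywhere in the code."""
    sources = []
    for dirpath, _dirs, files in os.walk(repo_root):
        for name in files:
            if name.endswith(".py"):
                with open(os.path.join(dirpath, name)) as handle:
                    sources.append(handle.read())
    all_code = "\n".join(sources)
    defined = set(DEF_PATTERN.findall(all_code))
    uses = Counter()
    for func in defined:
        # Subtract 1 so the definition line itself doesn't count as a use.
        uses[func] = len(re.findall(r"\b%s\b" % func, all_code)) - 1
    return uses

# Functions that never appear outside their own definition are dead-code candidates.
for func, n in sorted(count_function_uses("path/to/project").items(), key=lambda kv: kv[1]):
    if n < 1:
        print(func, "appears unused")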
Unless you know that your code uses reflection, as you said, I would go for a trivial grep. Do not underestimate the power of the asterisk in vim either (it searches for the word under your cursor in the file), although this is limited to the file you are currently editing.
Another solution you could implement is to have a very good test suite (which seldom happens, unfortunately) and then wrap the routine with a deprecation warning. If you get the deprecation output, it means the routine was called, so it's still used somewhere. This works even for reflective calls, but of course you can never be sure that you actually triggered the code path in which the routine is called.
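A minimal sketch of that deprecation-wrapper idea using the standard warnings module (the decorated function is just a placeholder):

import functools
import warnings

def deprecated(func):
    """Emit a DeprecationWarning each time the wrapped function is called."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        warnings.warn("%s was called; it is still in use" % func.__name__,
                      DeprecationWarning, stacklevel=2)
        return func(*args, **kwargs)
    return wrapper

@deprecated
def maybe_dead_function(x):
    return x * 2

# Run the test suite (or the application) with warnings visible, e.g.
#     python -W always::DeprecationWarning -m pytest
# Any wrapped function that never produces a warning was never called.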
It's not only about searching function names, but also about finding imported packages that are not in use.
You need to search the code for all the imported packages (including aliases) and the functions actually used, then create a list of the specific imports needed from each package (for example, instead of import os, use from os import listdir, getcwd, ...).

Structure of a Python project that is not a package

If you search the internet for Python project structures, you will find articles about Python package structure. Based on that, what I want to know is whether there are any guidelines for structuring Python projects that aren't packages, that is, projects where the code is the end product itself.
For example, I created a package that handles requests to some specific endpoints. This package serves the main code, which handles the data fetched by the package. The main code is not a package, that is, it doesn't have classes or __init__ files, because at this layer there is no need for code reuse. Instead, the main code relates directly to the end goal itself.
Are there any guidelines for this?
It would be good to see the structure itself instead of reading a description of it; that would help visualize the problem and answer your case properly 😉
projects that aren't packages, that is, projects where the code is the end product itself
In general, I would say you should always structure your code! And by that I mean exactly the work with modules/packages. It is needed mostly to separate responsibilities and to introduce things that can be reused. It also makes it possible to find things more easily and quickly, instead of digging through unstructured piles of code.
Of course, as I said, this is a general point, and as you gain experience you can experiment with the structure to find the best one for the project you are working on. But without any structure, you won't survive a bigger project (or life will be harder than it needs to be).
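For concreteness, a plain, non-package project often ends up looking something like this (the names are purely illustrative):

myproject/
    main.py                # entry point: python main.py
    config.py              # constants and settings
    endpoints/             # the reusable package mentioned in the question
        __init__.py
        client.py
    processing.py          # functions that work on the fetched data
    tests/
        test_processing.py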

Best practices for turning Jupyter notebooks into Python scripts

The Jupyter (IPython) notebook is deservedly known as a good tool for prototyping code and doing all kinds of machine learning work interactively. But when I use it, I inevitably run into the following:
the notebook quickly becomes too complex and messy to be maintained and improved further as a notebook, and I have to turn it into Python scripts;
when it comes to production code (e.g. code that needs to be re-run every day), the notebook is again not the best format.
Suppose I've developed a whole machine learning pipeline in Jupyter that includes fetching raw data from various sources, cleaning the data, feature engineering, and finally training models. What's the best way to turn it into scripts with efficient and readable code? So far I have tackled it in several ways:
Simply convert .ipynb to .py and, with only slight changes, hard-code the whole pipeline from the notebook into one Python script.
'+': quick
'-': dirty, non-flexible, not convenient to maintain
Make a single script with many functions (roughly one function per one or two cells), trying to turn the stages of the pipeline into separate functions, and name them accordingly. Then specify all parameters and global constants via argparse.
'+': more flexible usage; more readable code (if you have properly transformed the pipeline logic into functions)
'-': oftentimes the pipeline is NOT splittable into logically complete pieces that could become functions without quirks in the code. All these functions typically need to be called only once in the script, rather than many times inside loops, maps, etc. Furthermore, each function typically takes the output of all the functions called before it, so one has to pass many arguments to each function.
The same as point (2), but now wrap all the functions inside a class. Now all the global constants, as well as the outputs of each method, can be stored as class attributes.
'+': you don't need to pass many arguments to each method -- all the previous outputs are already stored as attributes
'-': the overall logic of the task is still not captured -- it is a data and machine learning pipeline, not just a class. The only purpose of the class is to be created, have all its methods called sequentially one by one, and then be thrown away. On top of that, classes take quite a while to implement. (A rough sketch of this approach is shown after this list.)
Convert the notebook into a Python module with several scripts. I haven't tried this, but I suspect it is the longest way to deal with the problem.
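As an illustration of option (3), here is a bare-bones sketch of such a pipeline class; the stage names and attributes are invented, and the method bodies are stand-ins for the real notebook code:

class Pipeline(object):
    """Runs the notebook's stages in order, storing each stage's output as an attribute."""

    def __init__(self, raw_path):
        self.raw_path = raw_path

    def fetch(self):
        self.raw = open(self.raw_path).read()

    def clean(self):
        self.cleaned = self.raw.strip().lower()

    def build_features(self):
        self.features = self.cleaned.split()

    def train(self):
        # Stand-in for model training: just count the features.
        self.model = {"n_features": len(self.features)}

    def run(self):
        self.fetch()
        self.clean()
        self.build_features()
        self.train()
        return self.model

# Usage: Pipeline("data/raw.txt").run()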
I suppose this overall setting is very common among data scientists, but surprisingly I cannot find any useful advice on it.
Folks, please, share your ideas and experience. Have you ever encountered this issue? How have you tackled it?
Life saver: as you're writing your notebooks, incrementally refactor your code into functions, writing some minimal assert tests and docstrings.
After that, refactoring from notebook to script is natural. Not only that, but it makes your life easier when writing long notebooks, even if you have no plans to turn them into anything else.
Basic example of a cell's content with "minimal" tests and docstrings:
import os
import zipfile
from contextlib import closing

def zip_count(f):
    """Given zip filename, returns number of files inside.
    str -> int"""
    with closing(zipfile.ZipFile(f)) as archive:
        num_files = len(archive.infolist())
    return num_files

zip_filename = 'data/myfile.zip'

# Make sure `myfile` always has three files
assert zip_count(zip_filename) == 3
# And total zip size is under 2 MB
assert os.path.getsize(zip_filename) / 1024 ** 2 < 2

print(zip_count(zip_filename))
Once you've exported it to bare .py files, your code will probably not be structured into classes yet. But it is worth the effort to have refactored your notebook to the point where it has a set of documented functions, each with a set of simple assert statements that can easily be moved into tests.py for testing with pytest, unittest, or what have you. If it makes sense, bundling these functions into methods for your classes is dead-easy after that.
If all goes well, all you need to do after that is to write your if __name__ == '__main__': block and its "hooks": if you're writing a script to be called from the terminal, you'll want to handle command-line arguments; if you're writing a module, you'll want to think about its API and the __init__.py file, etc.
It all depends on what the intended use case is, of course: there's quite a difference between converting a notebook to a small script vs. turning it into a full-fledged module or package.
Here's a few ideas for a notebook-to-script workflow:
Export the Jupyter Notebook to Python file (.py) through the GUI.
Remove the "helper" lines that don't do the actual work: print statements, plots, etc.
If need be, bundle your logic into classes. The only extra refactoring work required should be writing your class docstrings and attributes.
Write your script's entry points with if __name__ == '__main__'.
Separate your assert statements for each of your functions/methods, and flesh out a minimal test suite in tests.py.
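To illustrate that last step, the asserts from the cell above could move into a tests.py that pytest or unittest can collect (mymodule is a hypothetical module holding the refactored function):

# tests.py
import os

from mymodule import zip_count  # hypothetical module with the refactored function

def test_zip_count():
    assert zip_count('data/myfile.zip') == 3

def test_zip_size_under_two_megabytes():
    assert os.path.getsize('data/myfile.zip') / 1024 ** 2 < 2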
We have a similar issue. However, we use several notebooks for prototyping outcomes, which should eventually become several Python scripts as well.
Our approach is to set aside the code that seems to repeat across those notebooks. We put it into a Python module, which is imported by each notebook and also used in production. We continuously improve this module and add tests for what we find during prototyping.
Notebooks then become more like configuration scripts (which we simply copy into the resulting Python files) plus various prototyping checks and validations, which we do not need in production.
Most of all, we are not afraid of refactoring :)
I made a module recently (NotebookScripter) to help address this issue. It allows you to invoke a Jupyter notebook via a function call. It's as simple to use as
from NotebookScripter import run_notebook
run_notebook("./path/to/Notebook.ipynb", some_param="Provided Externally")
Keyword parameters can be passed to the function call. It's easy to adapt a notebook to be parameterized externally.
Within a .ipynb cell:
from NotebookScripter import receive_parameter
some_param = receive_parameter(some_param="Returns this value by default when no matching keyword is provided by the external caller")
print("some_param={0} within the invocation".format(some_param))
run_notebook() supports .ipynb files or .py files -- allowing one to easily use .py files such as those generated by nbconvert or VS Code's IPython integration. You can keep your code organized in a way that makes sense for interactive use, and also reuse/customize it externally when needed.
You should break the logic down into small steps; that way your pipeline will be easier to maintain. Since you already have a working codebase, you want to keep your code running, so make small changes, test, and repeat.
I'd go this way:
Add some tests to your pipeline. For ML pipelines this is a bit hard, but if your notebook trains a model, you can use performance metrics to test whether your pipeline still works (your test can check that accuracy is about 0.8, but make sure you define a tolerable range, since the number will hardly ever be exactly the same on each run)
Break your single notebook apart into smaller ones; the output of one should be the input of the next. As soon as you create a split, make sure you add a few tests for each notebook individually. To manage this sequential execution, you can use papermill to execute your notebooks, or a workflow management tool such as ploomber, which integrates with papermill, can resolve complex dependencies, and has a hook to run tests upon notebook execution (Disclaimer: I'm ploomber's author). A minimal papermill call is sketched after this list.
Once you have a pipeline composed of several notebooks that passes all your tests, you can decide whether you want to keep using the .ipynb format or not. My recommendation would be to keep as notebooks only the tasks that have rich output (such as tables or plots); the rest can be refactored into Python functions, which are more maintainable.
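For reference, a parameterized papermill run looks roughly like this (the notebook names and parameters are placeholders):

import papermill as pm

# Run the "clean" step, injecting values into the notebook's parameters cell,
# and save the executed copy so its outputs can be inspected later.
pm.execute_notebook(
    "clean.ipynb",
    "output/clean-executed.ipynb",
    parameters={"input_path": "data/raw.csv", "output_path": "data/clean.csv"},
)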

