Parsing class and function dependencies from a project - Python

I'm trying to run some analysis of class and function dependencies in a Python code base. My first step was to create a .csv file for import into Excel using Python's csv module and regular expressions.
The current version of what I have looks like this:
import re
import os
import csv
from os.path import join

class ClassParser(object):
    class_expr = re.compile(r'class (.+?)(?:\((.+?)\))?:')
    python_file_expr = re.compile(r'^\w+[.]py$')

    def findAllClasses(self, python_file):
        """ Read in a python file and return all the class names
        """
        with open(python_file) as infile:
            everything = infile.read()
            class_names = ClassParser.class_expr.findall(everything)
        return class_names

    def findAllPythonFiles(self, directory):
        """ Find all the python files starting from a top level directory
        """
        python_files = []
        for root, dirs, files in os.walk(directory):
            for file in files:
                if ClassParser.python_file_expr.match(file):
                    python_files.append(join(root, file))
        return python_files

    def parse(self, directory, output_directory="classes.csv"):
        """ Parse the directory and spit out a csv file
        """
        with open(output_directory, 'w') as csv_file:
            writer = csv.writer(csv_file)
            python_files = self.findAllPythonFiles(directory)
            for file in python_files:
                classes = self.findAllClasses(file)
                for classname in classes:
                    writer.writerow([classname[0], classname[1], file])

if __name__ == "__main__":
    parser = ClassParser()
    parser.parse("/path/to/my/project/main/directory")
This generates .csv output in the format:
class name, inherited classes (also comma separated), file
class name, inherited classes (also comma separated), file
... etc. ...
I'm at the point where I'd like to start parsing function declaration and definitions in addition to the class names. My question: Is there a better way to get the class names, inherited class names, function names, parameter names, etc.?
NOTE: I've considered using the Python ast module, but I don't have experience with it and don't know how to use it to get the desired information or if it can even do that.
EDIT: In response to Martin Thurau's request for more information - The reason I'm trying to solve this issue is because I've inherited a rather lengthy (100k+ lines) project that has no discernible structure to its modules, classes and functions; they all exist as a collection of files in a single source directory.
Some of the source files contain dozens of tangentially related classes and are 10k+ lines long, which makes them difficult to maintain. I'm starting to analyse the relative difficulty of taking every class and packaging it into a more cohesive structure, using The Hitchhiker's Guide to Packaging as a base. Part of what I care about for that analysis is how intertwined a class is with the other classes in its file and which imported or inherited classes a particular class relies on.

I've made a start on implementing this. Put the following code in a file and run it, passing the name of a file or directory to analyse. It will print out every class it finds, the file it was found in, and the class's bases. It is not intelligent, so if you have two Foo classes defined in your code base it will not tell you which one is being used, but it is a start.
This code uses the Python ast module to examine .py files and find all the ClassDef nodes. It then uses the meta package to print parts of them out - you will need to install that package:
$ pip install -e git+https://github.com/srossross/Meta.git#egg=meta
Example output, run against django-featured-item:
$ python class-finder.py /path/to/django-featured-item/featureditem/
FeaturedField,../django-featured-item/featureditem/fields.py,models.BooleanField
SingleFeature,../django-featured-item/featureditem/tests.py,models.Model
MultipleFeature,../django-featured-item/featureditem/tests.py,models.Model
Author,../django-featured-item/featureditem/tests.py,models.Model
Book,../django-featured-item/featureditem/tests.py,models.Model
FeaturedField,../django-featured-item/featureditem/tests.py,TestCase
The code:
# class-finder.py
import ast
import csv
import meta
import os
import sys

def find_classes(node, in_file):
    if isinstance(node, ast.ClassDef):
        yield (node, in_file)
    if hasattr(node, 'body'):
        for child in node.body:
            # `yield from find_classes(child, in_file)` in Python 3.x
            for x in find_classes(child, in_file):
                yield x

def print_classes(classes, out):
    writer = csv.writer(out)
    for cls, in_file in classes:
        writer.writerow([cls.name, in_file] +
                        [meta.asttools.dump_python_source(base).strip()
                         for base in cls.bases])

def process_file(file_path):
    with open(file_path, 'r') as f:
        root = ast.parse(f.read(), file_path)
    for cls in find_classes(root, file_path):
        yield cls

def process_directory(dir_path):
    for entry in os.listdir(dir_path):
        for cls in process_file_or_directory(os.path.join(dir_path, entry)):
            yield cls

def process_file_or_directory(file_or_directory):
    if os.path.isdir(file_or_directory):
        return process_directory(file_or_directory)
    elif file_or_directory.endswith('.py'):
        return process_file(file_or_directory)
    else:
        return []

if __name__ == '__main__':
    classes = process_file_or_directory(sys.argv[1])
    print_classes(classes, sys.stdout)
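On Python 3.9 or newer, the standard library's ast.unparse can stand in for what the meta package is used for above. As a minimal sketch (handling a single file only, with illustrative names), the base classes could be dumped without the extra dependency like this:

# class-finder-py3.py -- illustrative sketch using ast.unparse (Python 3.9+)
import ast
import csv
import sys

def find_classes(path):
    with open(path) as f:
        tree = ast.parse(f.read(), path)
    # ast.walk visits every node, so nested classes are found too.
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            yield node

if __name__ == '__main__':
    writer = csv.writer(sys.stdout)
    for cls in find_classes(sys.argv[1]):
        writer.writerow([cls.name, sys.argv[1]] +
                        [ast.unparse(base) for base in cls.bases])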

Related

Setting global variables from a dictionary within a function

I am looking to use .yaml to manage several global parameters for a program. I would prefer to manage this from within a function, something like the below. However, it seems globals().update() does not work when included inside a function. Additionally, given the need to load an indeterminate number of variables with unknown names, using the basic global approach is not appropriate. Ideas?
.yaml
test:
  - 12
  - 13
  - 14
  - stuff:
      john
test2: yo
Python
import os
import yaml

def load_config():
    with open(os.path.join(os.getcwd(), {file}), 'r') as reader:
        vals = yaml.full_load(reader)
    globals().update(vals)
Desired output
load_config()
test
---------------
[12,13,14,{'stuff':'john'}]
test2
---------------
yo
What I get
load_config()
test
---------------
NameError: name 'test' is not defined
test2
---------------
NameError: name 'test2' is not defined
Please note: {file} is a placeholder for your benefit; the code is not actually written that way. Also note that I understand the use of global is not normally recommended; however, it is what is required for the answer to this question.
You had {file} in your code, I've assumed that was intended to just be a string of the actual filename. I certainly hope you weren't looking to .format() and then eval() this code? That would be a very bad and unsafe way to run code.
Just return the dictionary vals itself, and access it as needed:
import os
import yaml

def load_config(fn):
    with open(os.path.join(os.getcwd(), fn), 'r') as reader:
        # only returning the value, so doing it in one step:
        return yaml.full_load(reader)

cfg = load_config('test.yaml')

print(cfg)
print(cfg['test2'])
Output:
{'test': [12, 13, 14, {'stuff': 'john'}], 'test2': 'yo'}
yo
You should definitely never just update globals() with content from an external file. Use of globals() is only for very specific use cases anyway.
Getting the exact desired output is just a matter of formatting the contents of the dictionary:
import os
import yaml

def load_config(fn):
    with open(os.path.join(os.getcwd(), fn), 'r') as reader:
        return yaml.full_load(reader)

def print_config(d):
    for k, v in d.items():
        print(f'{k}\n---------------\n{v}\n')

cfg = load_config('test.yaml')
print_config(cfg)
Which gives exactly the output you described.
Note that this is technically superfluous:
os.path.join(os.getcwd(), fn)
By default, file operations are executed on the current working directory, so you'd achieve the same with:
def load_config(fn):
    with open(fn, 'r') as reader:
        return yaml.full_load(reader)
If you wanted to open the file in the same folder as the script itself, consider this instead:
def load_config(fn):
    with open(os.path.join(os.path.dirname(__file__), fn), 'r') as reader:
        return yaml.full_load(reader)

Manage Python module dependency through a clever import

I'm writing a python module that abstracts the use of three different hardware debugging tools for different CPU architectures.
To illustrate my problem, let's assume that I'm writing a configuration database that abstracts the use of XML, YAML, and JSON files to store stuff:
import xml.etree.ElementTree as ET
import json
import yaml

class abstract_data:
    def __init__(self, filename):
        '''
        Loads a file regardless of the type

        Args:
            filename (str): The file that has the data
        '''
        if filename.endswith('.xml'):
            self.data = ET.parse(filename)
        else:
            with open(filename) as f:
                if filename.endswith('.json'):
                    self.data = json.load(f)
                elif filename.endswith('.yaml'):
                    self.data = yaml.load(f, Loader=yaml.FullLoader)

    def do_something(self):
        print(self.data)

def main():
    d = abstract_data("test.yaml")
    d.do_something()

if __name__ == "__main__":
    # execute only if run as a script
    main()
However, I know for a fact that 99% of my users will only use JSON files, and setting up the other two libraries isn't very easy.
Putting the imports at the top of my source code, as PEP 8 recommends, would create a dependency on all three libraries for every user, and I'd like to avoid that.
My (probably bad solution) is to use conditional imports, like so:
class abstract_data:
    def __init__(self, filename):
        '''
        Loads a file regardless of the type

        Args:
            filename (str): The file that has the data
        '''
        if filename.endswith('.xml'):
            import xml.etree.ElementTree as ET
            self.data = ET.parse(filename)
        else:
            with open(filename) as f:
                if filename.endswith('.json'):
                    self.data = json.load(f)
                elif filename.endswith('.yaml'):
                    import yaml
                    self.data = yaml.load(f, Loader=yaml.FullLoader)
While this seems to work on a simple module, is this the best way to handle this problem? Are there any side-effects?
Please note that I'm using XML, JSON, and YAML as an illustrative case of three different imports.
Thank you very much!
If you want to keep your class's implementation as it is now, a common pattern is to set a flag based on the import success, at the top of the module:
try:
    import yaml
except ImportError:
    HAS_YAML = False
else:
    HAS_YAML = True

class UnsupportedFiletypeError(Exception):
    pass
Especially for a larger project, it can be useful to put this into a single module, attempt the import only once, and then use that fact elsewhere. For example, put the try/except block above in _deps.py and use from ._deps import HAS_YAML wherever you need it.
Then later:
# ...
elif filename.endswith('.yaml'):
    if not HAS_YAML:
        raise UnsupportedFiletypeError("You must install PyYAML for YAML file support")
Secondly, if this is an installable Python package, consider using extras_require.
That would let the user do something like:
pip install pkgname[yaml]
Where, if pkgname[yaml] is specified rather than just pkgname, then PyYAML is installed as a dependency.
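As a rough sketch of how that extra might be declared (the package name, version, and layout here are placeholders, not taken from the question):

# setup.py -- illustrative only; 'pkgname' is a placeholder
from setuptools import setup, find_packages

setup(
    name='pkgname',
    version='0.1',
    packages=find_packages(),
    install_requires=[],              # JSON needs nothing beyond the standard library
    extras_require={
        'yaml': ['PyYAML'],           # enables: pip install pkgname[yaml]
    },
)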
Short answer: look at the entrypoints library for discovering loaders and at setuptools for registering them.
One way is to use one individual file per "loading method":
file json_loader.py:

import json

def load(filename):
    with open(filename) as f:
        return json.load(f)

file xml_loader.py:

import xml.etree.ElementTree as ET

def load(filename):
    return ET.parse(filename)
But to know whether one of these is supported you have to try to import them and catch any import errors:
import os
# other imports

loaders = {}
try:
    from json_loader import load as json_load
    loaders["json"] = json_load
except ImportError:
    print("json not supported")

...

# splitext() keeps the leading dot, so strip it to match the keys above
file_ext = os.path.splitext(file_name)[1].lstrip('.')
self.data = loaders[file_ext](file_name)
You could also move the registering code into the modules themselves, so that you only need to catch ImportError in the main script:
file loaders.py:

loaders = {}

file xml_loader.py:

import xml.etree.ElementTree as ET
import loaders

def load(filename):
    return ET.parse(filename)

loaders.loaders["xml"] = load
file main.py:

try:
    import xml_loader
except ImportError:
    pass
There is also the entrypoints library, which does something similar to the solution above but integrated with setuptools:
import entrypoints

loaders = {name: ep.load()
           for name, ep in entrypoints.get_group_named("my_database_configurator.loaders").items()}

...

# entry point names are registered without the dot, so strip it here as well
file_ext = os.path.splitext(file_name)[1].lstrip('.')
self.data = loaders[file_ext](file_name)
To register the entrypoints you need a setup.py for each loader (see e.g. here). You can of course combine this with abstract classes and optional dependencies. One advantage is that you don't need to change anything to support more extensions, just installing a plugin is enough to register it. Having multiple implementations for one extension is also possible, the user just installs the one she wants and it gets automatically used.
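For illustration only, a YAML loader plugin's setup.py might register its entry point roughly like this (the project and module names are made up; the group name matches the one used above):

# setup.py of a hypothetical YAML loader plugin
from setuptools import setup

setup(
    name='my-database-configurator-yaml',
    version='0.1',
    py_modules=['yaml_loader'],
    install_requires=['PyYAML'],
    entry_points={
        'my_database_configurator.loaders': [
            # maps the "yaml" extension to yaml_loader.load
            'yaml = yaml_loader:load',
        ],
    },
)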

Using pytest to ensure a file is created and written to

I'm using pytest and want to test that a function writes some content to a file. So I have writer.py which includes:
MY_DIR = '/my/path/'

def my_function():
    with open('{}myfile.txt'.format(MY_DIR), 'w+') as file:
        file.write('Hello')
        file.close()
I want to test /my/path/myfile.txt is created and has the correct content:
import writer

class TestFile(object):
    def setup_method(self, tmpdir):
        self.orig_my_dir = writer.MY_DIR
        writer.MY_DIR = tmpdir

    def teardown_method(self):
        writer.MY_DIR = self.orig_my_dir

    def test_my_function(self):
        writer.my_function()
        # Test the file is created and contains 'Hello'
But I'm stuck with how to do this. Everything I try, such as something like:
import os
assert os.path.isfile('{}myfile.txt'.format(writer.MY_DIR))
Generates errors which lead me to suspect I'm not understanding or using tmpdir correctly.
How should I test this? (If the rest of how I'm using pytest is also awful, feel free to tell me that too!)
I've got a test to work by altering the function I'm testing so that it accepts a path to write to. This makes it easier to test. So writer.py is:
MY_DIR = '/my/path/'

def my_function(my_path):
    # This currently assumes the path to the file exists.
    with open(my_path, 'w+') as file:
        file.write('Hello')

# guard the example call so importing this module from a test does not execute it
if __name__ == '__main__':
    my_function(my_path='{}myfile.txt'.format(MY_DIR))
And the test:
import writer

class TestFile(object):
    def test_my_function(self, tmpdir):
        test_path = tmpdir.join('testfile.txt')
        writer.my_function(my_path=test_path)
        assert test_path.read() == 'Hello'
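If you would rather keep the original my_function() with no arguments, a sketch using pytest's monkeypatch and tmp_path fixtures (assuming writer.py exactly as in the question) could look like this:

import writer

def test_my_function(tmp_path, monkeypatch):
    # Point the module-level constant at a per-test temporary directory.
    monkeypatch.setattr(writer, 'MY_DIR', str(tmp_path) + '/')
    writer.my_function()
    assert (tmp_path / 'myfile.txt').read_text() == 'Hello'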

Import a module with optional arguments - Python

Currently, I have a file called utils.py where I keep all my functions and another file called main.py.
In my utils file, I have two functions that load from and save to a JSON file, along with a bunch of other functions that edit the data.
def save_league(league_name, records):
    with open('%s.json' % league_name, 'w') as f:
        f.write(json.dumps(records))

def load_league(league_name):
    with open('%s.json' % league_name, 'r') as f:
        content = f.read()
        records = json.loads(content)
    return records
I am trying to add optional arguments for the save_league function by changing the function to:
def save_league(name=league_name, r=records):
    with open('%s.json' % name, 'w') as f:
        f.write(json.dumps(r))
This way the file can be saved just by calling save_league().
However, when I try to import a function with optional arguments into main.py, I get a NameError because the names used as default values are not defined at import time.
NameError: name 'league_name' is not defined
Is it possible to import functions with optional args into another file, or do I have to combine the two files into one?
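For what it's worth, the usual workaround is to use None as the default and resolve the real values inside the function, so nothing has to exist at import time; a minimal sketch (the fallback values are placeholders):

import json

def save_league(league_name=None, records=None):
    # Defaults are resolved at call time, not at import time,
    # so this module can be imported without those names existing.
    if league_name is None:
        league_name = 'default_league'   # placeholder default
    if records is None:
        records = {}
    with open('%s.json' % league_name, 'w') as f:
        f.write(json.dumps(records))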

Access a logfile and return/open all files listed in it

I have a Python class which is supposed to unpack an archive, recursively iterate over the directory structure, and then return the files for further processing. In my case I want to hash those files. I'm struggling with returning the files. Here is my take.
I created an unzip function and a function which creates a log-file with all the paths of the files which were unpacked. Then I want to access this log-file and return ALL of the files so I can use them in another Python class for further processing. This doesn't seem to work yet.
Structure of log-file:
/home/usr/Downloads/outdir/XXX.log
/home/usr/Downloads/outdir/Code/XXX.py
/home/usr/Downloads/outdir/Code/XXX.py
/home/usr/Downloads/outdir/Code/XXX.py
Code of interest:
@staticmethod
def read_received_files(from_log):
    with open(from_log, 'r') as data:
        data = data.readlines()
        for lines in data:
            # This does not seem to work yet
            read_files = open(lines.strip())
            return read_files
I believe that's what you're looking for:
@staticmethod
def read_received_files(from_log):
    files = []
    with open(from_log, 'r') as data:
        for line in data:
            files.append(open(line.strip()))
    return files
You returned inside the loop, which prevented the remaining files from being opened.
Since you are primarily after the metadata and hash of the files stored in the zip file, not the files themselves, there is no need to extract them to the file system.
Instead you can use the ZipFile.open() method to access each file's contents through a file-like object, and gather metadata from the corresponding ZipInfo object. Here's an example that collects the file name and file size as metadata, along with the hash of each file.
import hashlib
import zipfile
from collections import namedtuple

def get_files(archive):
    FileInfo = namedtuple('FileInfo', ('filename', 'size', 'hash'))
    with zipfile.ZipFile(archive) as zf:
        for info in zf.infolist():
            if not info.filename.endswith('/'):  # exclude directories
                f = zf.open(info)
                hash_ = hashlib.md5(f.read()).hexdigest()
                yield FileInfo(info.filename, info.file_size, hash_)

for f in get_files('some_file.zip'):
    print('{}: {} {} bytes'.format(f.hash, f.filename, f.size))
