I am trying to debug a memory leak in a module using Scalene.
Unfortunately, it appears that I can only run scalene script.py, while I need to specify a module so the application runs as it would under python -m mymodule, which I can't seem to do with Scalene.
Is there a way to overcome this? Thank you in advance
From Scalene's documentation:
scalene your_prog.py # full profile (prints to console)
python3 -m scalene your_prog.py # equivalent alternative
You can use the second form with Scalene.
You can use runpy.run_module() to create a wrapper around your module, which you can then profile!
wrapper.py might contain:
from runpy import run_module
run_module('your_module_name', run_name='__main__')
and then you can run scalene wrapper.py!
The run_name argument is needed in order to "trick" the if __name__ == '__main__' clause into executing, if you have one.
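A self-contained sketch of the wrapper trick (the module name demo_module and its contents are made up for illustration; in practice you would just call run_module on your real module):

```python
# Generate a throwaway module on disk standing in for your real module,
# then execute it through runpy exactly as "python -m" would.
import os
import sys
import tempfile
from runpy import run_module

with tempfile.TemporaryDirectory() as d:
    # A stand-in module with a __main__ guard, like your real code.
    with open(os.path.join(d, "demo_module.py"), "w") as f:
        f.write(
            "result = 'imported'\n"
            "if __name__ == '__main__':\n"
            "    result = 'ran as main'\n"
        )
    sys.path.insert(0, d)
    # run_name='__main__' makes the guard fire, as it would under python -m.
    globals_after = run_module("demo_module", run_name="__main__")
    sys.path.remove(d)

print(globals_after["result"])  # -> ran as main
```

run_module returns the module's resulting globals, which is a convenient way to confirm that the __main__ guard actually executed.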
My issue is: how can I execute a setup method that depends on a pytest command-line argument when I run pytest? Basically, I want this setup method to run before any tests.
For simplicity, let's say I have a file 'test_something.py':
import somemodule

def test_some(usefakemodule):
    ...
The 'somemodule' is either installed in the environment or not. If it's already installed, everything works fine. If not, I need to add the path of the fake module by running a method that basically does sys.path.append(XXX).
In short, I want to run 'pytest -v --usefakemodule' to use the fake module directory, or just 'pytest -v' to use the library installed locally (assuming it's installed there).
I tried adding a fixture (scope=session) in conftest.py, but pytest doesn't seem to run it before importing 'test_something.py', which then fails with "no module named 'somemodule'".
I can run a regular method in conftest, but I don't know how to make it depend on the pytest command-line argument 'usefakemodule'.
In your conftest.py, use pytest_addoption combined with pytest_cmdline_main; please refer to the documentation for details.
import pytest

def pytest_addoption(parser):
    parser.addoption(
        "--usefakemodule", action="store_true", default=False, help="Use fake module"
    )

def pytest_cmdline_main(config):
    usefakemodule = config.getoption("--usefakemodule")
    print(usefakemodule)
    if usefakemodule:
        print("OK fake module")
    else:
        print("no fakes")
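Since the actual goal is to extend sys.path before the test modules are imported, a pytest_configure hook runs early enough (after option parsing, before collection). A minimal conftest.py sketch, where FAKE_MODULE_DIR is a hypothetical path of my choosing:

```python
# conftest.py sketch: add a fake-module directory to sys.path before
# collection imports the test modules. FAKE_MODULE_DIR is hypothetical.
import sys

FAKE_MODULE_DIR = "/path/to/fake/modules"

def pytest_addoption(parser):
    parser.addoption(
        "--usefakemodule", action="store_true", default=False,
        help="Append the fake-module directory to sys.path",
    )

def pytest_configure(config):
    # Runs after command-line parsing but before tests are collected.
    if config.getoption("--usefakemodule"):
        sys.path.append(FAKE_MODULE_DIR)
```

With this in place, 'pytest -v --usefakemodule' appends the directory before 'test_something.py' tries to import somemodule, while plain 'pytest -v' leaves sys.path untouched.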
I don't know whether/how you can have pytest accept extra arguments, but here are a couple of other ideas for accomplishing this:
Just try to import the real module, and update the load path if you get an ImportError:
import sys

try:
    import somemodule
except ImportError:
    sys.path.append(XXX)
    import somemodule
Or, use an environment var, and run with USE_FAKE_MODULE=true pytest -v:
import os
import sys

if os.environ.get('USE_FAKE_MODULE'):
    sys.path.append(XXX)
According to the Python docs, the -m flag should do the following:
Search sys.path for the named module and execute its contents as the
__main__ module.
When I run my script simply with the python command, everything works fine. Since I now want to import something from a higher level, I have to run the script with python -m. However, the __name__ == "__main__" check seems to evaluate to False, and the following error is produced:
/home/<name>/anaconda3/bin/python: Error while finding module specification for 'data.generate_dummies.py' (AttributeError: module 'data.generate_dummies' has no attribute '__path__')
I don't understand what the __path__ attribute has to do with this.
The error you get occurs when python tries to look for a package/module that does not exist. As user2357112 mentions, data.generate_dummies.py is treated as a fully specified module path (that does not exist), and an attempt is made to import a submodule py (that is also non-existent).
Invoke your module without the .py suffix when using the -m flag:
python -m data.generate_dummies
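The same failure can be reproduced with importlib: a dotted name whose parent is a plain module (json.decoder here, chosen purely for illustration) fails because the parent has no __path__ attribute, which is exactly what the error message above complains about:

```python
# 'py' is looked up as a submodule of json.decoder, but json.decoder is
# a plain module without a __path__, so the lookup fails.
import importlib.util

try:
    importlib.util.find_spec("json.decoder.py")
    outcome = "found"
except ModuleNotFoundError as exc:
    outcome = str(exc)

print(outcome)
```

The raised error names the missing __path__ on the parent, mirroring what python -m reports for data.generate_dummies.py.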
I would like to know if there is a way to call module functions from the command line. My problem is that I have a package called find_mutations. It requires that the input files are sorted in a specific way, so I made a function to do so. I would like the user to have the option to run the code normally:
$ python -m find_mutations -h
or run the specific sort_files function:
$ python -m find_mutations.sort_files -h
Is something like this possible? If so, does the sort_files function need to be in its own script? Do I need to add something to my __init__.py script? The package installs properly and runs fine; I would just like to add this.
You could add flags to run the function. The following example is not a good way to work with command arguments, but it demonstrates the idea.
import sys

if __name__ == "__main__":
    if '-s' in sys.argv:
        sort_files()
Add a flag to run the specific function, in this case -s to sort the files.
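A more robust variant uses argparse inside the package's __main__.py, which is the file that python -m find_mutations executes. The -s/--sort-only flag and the sort_files body below are illustrative assumptions, not the package's real API:

```python
# Sketch of find_mutations/__main__.py: one entry point, with a flag
# that runs only the sorting step. Names here are illustrative.
import argparse

def sort_files(paths):
    # Stand-in for the real sorting logic.
    return sorted(paths)

def main(argv=None):
    parser = argparse.ArgumentParser(prog="find_mutations")
    parser.add_argument("files", nargs="*", help="input files")
    parser.add_argument("-s", "--sort-only", action="store_true",
                        help="only sort the input files, then exit")
    args = parser.parse_args(argv)
    if args.sort_only:
        return sort_files(args.files)
    # ...otherwise run the full mutation-finding pipeline on args.files...
    return args.files

if __name__ == "__main__":
    main()
```

This keeps a single entry point ('python -m find_mutations -s file1 file2'), avoids needing a separate runnable submodule, and gives you -h output for free.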
I have a fabric script called fwp.py that I run without calling it through fab by using:
if __name__ == '__main__':
    # imports for standalone mode only
    import sys
    import fabric.main

    fabric.main.main(fabfile_locations=[__file__])
The thing is, I then have to call the script as fwp.py. I'd like to rename it to fwp so I can call it as fwp. But doing that results in:
Fatal error: Couldn't find any fabfiles!
Is there a way to make Python/Fabric import this file, despite the lack of a ".py" extension?
To reiterate and clarify:
I'm not using the "fab" utility (e.g. as fab task task:parameter); just calling my script as fwp.py task task:parameter, and would like to be able to call it as fwp task task:parameter.
Update
It's not a duplicate of this question. The question is not "How to run a stand-alone fabric script?" but "How to do so with a script that lacks a .py extension?"
EDIT: Original answer corrected
The fabric.main.main() function automatically adds .py to the end of supplied fabfile locations (see https://github.com/fabric/fabric/blob/master/fabric/main.py#L93). Unfortunately that function also uses Python's import machinery to load the file so it has to look like a module or package. Without reimplementing much of the fabric.main module I don't think it will be possible. You could try monkey-patching both fabric.main.find_fabfiles and fabric.main.load_fabfiles to make it work.
Original answer (wrong)
I can get this to work unaltered on a freshly installed fabric package. The following executes with the filename fwp and executable permission on version 1.10.1, Python 2.7. I would just try upgrading fabric.
#!/usr/bin/env python
from fabric.api import *
import fabric.main

def do():
    local('echo "Hello World"')

if __name__ == '__main__':
    fabric.main.main(fabfile_locations=[__file__])
Output:
$ ./fwp do
Hello World
Done
I would like to check if all modules imported by a script are installed before I actually run the script, because the script is quite complex and is usually running for many hours. Also, it may import different modules depending on the options passed to it, so just running it once may not check everything. So, I wouldn't like to run this script on a new system for few hours only to see it failing before completion because of a missing module.
Apparently, pyflakes and pychecker are not helpful here, correct me if I'm wrong. I can do something like this:
$ python -c "$(cat *.py|grep import|sed 's/^\s\+//g'|tr '\n' ';')"
but it's not very robust; it will break if the word 'import' appears in a string, for example.
So, how can I do this task properly?
You could use ModuleFinder from the standard library's modulefinder module.
Using the example from the docs:
from modulefinder import ModuleFinder

finder = ModuleFinder()
finder.run_script('bacon.py')

print('Loaded modules:')
for name, mod in finder.modules.items():
    print('%s: ' % name, end='')
    print(','.join(list(mod.globalnames.keys())[:3]))

print('-' * 50)
print('Modules not imported:')
print('\n'.join(finder.badmodules.keys()))
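For the original goal of spotting missing imports before a long run, finder.badmodules is the interesting part. A self-contained check against a generated throwaway script (the bogus module name is invented for the demonstration):

```python
# Analyze a generated script without executing it; imports that cannot
# be resolved end up in finder.badmodules.
import os
import tempfile
from modulefinder import ModuleFinder

with tempfile.TemporaryDirectory() as d:
    script = os.path.join(d, "bacon.py")
    with open(script, "w") as f:
        f.write("import json\nimport surely_not_installed_module\n")
    finder = ModuleFinder()
    finder.run_script(script)  # static bytecode analysis; the script is not run
    missing = sorted(finder.badmodules)

print(missing)  # the bogus module name appears here
```

Because ModuleFinder analyzes the code instead of running it, this catches missing modules without spending hours in the actual script.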
You could write a test.py that just contains all the possible imports, for example:
import these
import are
import some
import modules
Run it; if any of the imports fail, Python will raise an ImportError and let you know immediately.
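A variation on this idea, swapping in a different technique: importlib.util.find_spec only checks whether each top-level module can be located, without executing any module code the way a real import would:

```python
# Check availability without importing: find_spec returns None for a
# top-level module that cannot be located.
import importlib.util

required = ["json", "csv", "module_that_does_not_exist"]
missing = [name for name in required if importlib.util.find_spec(name) is None]

print(missing)  # -> ['module_that_does_not_exist']
```

This is handy when some of the optional modules have slow or side-effecting imports that you would rather not trigger just to check availability.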