Is there an equivalent of Ruby's Yard in Python? - python

I use both Python and Ruby and I really love Ruby's YARD documentation server:
http://yardoc.org/ ,
I would like to know if there is an equivalent in the Python world. "pydoc -p" is really old, ugly, and not comfortable to use at all, and it doesn't look like Sphinx or Epydoc support a server mode.
Do you know any equivalent ?
Thank you

Python packages don't really have a convention for where to put documentation. The main documentation of a package may be built with a range of different tools, sometimes based on the docstrings, sometimes not. What you see with pydoc -p is the package contents and the docstrings only, not the main documentation. If this is all you want, you can also use Sphinx for this purpose. Here's sphinx-server, a shell script I just coded up:
#!/bin/sh
# Generate a full Sphinx project (conf.py, Makefile, autodoc stubs) from the package.
sphinx-apidoc -F -o "$2" "$1"
cd "$2"
make html
# Serve the built HTML with the Python 2 stdlib one-liner server.
cd _build/html
python -m SimpleHTTPServer 2345
Call this with the package directory of the package you want information on as the first argument, and the directory in which to build the new documentation as the second argument. Then point your browser to http://localhost:2345/
(Note: You probably want to remove the webserver invocation from the script; it's more for the purpose of demonstration. This assumes Python 2.x.)
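On Python 3, the equivalent of that last line is the http.server module:

python3 -m http.server 2345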

Seems kind of unnecessary to implement a web server just to serve up some HTML. I tend to like the *ix philosophy of each tool doing one small thing, well. Not that a web server is small.
But you could look at http://docs.python.org/library/basehttpserver.html
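For example, here is a minimal sketch using http.server, the Python 3 successor of that module. The docs directory path is just an assumed example, and the directory keyword requires Python 3.7+:

from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

# Serve already-built HTML docs on http://localhost:2345/
# "docs/_build/html" is a placeholder for wherever your docs live.
handler = partial(SimpleHTTPRequestHandler, directory="docs/_build/html")
HTTPServer(("localhost", 2345), handler).serve_forever()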

Related

Can Python 3 react to opened files on my mac?

Is there a way to have a python program react to an opened file? For example, can I get it to do something when I open a text file or another python file?
The short answer is No.
The long answer is: It depends on what you mean by "open"—but for most reasonable definitions, on any modern macOS, it will be doable, but difficult, and will likely break in 10.14 or 10.15.
For example, let's say you're looking to hook every POSIX-level open by any process on the system. The DTrace API provides a way to do that. But if you try to use it:
$ sudo dtruss -t open_noncancel -f -p 1
… if you're on 10.9 or later, you'll see a message like this:
dtrace: system integrity protection is on, some features will not be available
And then, when someone opens a file, you'll either see nothing at all, or, at best, a string of errors like this:
dtrace: error on enabled probe ID 123 (ID 456: syscall::thread_selfid:entry): invalid user access in action #2 at DIF offset 0
You can read about SIP (System Integrity Protection) Runtime Protection here, or on various third-party blog posts like this one, but in recent versions of OS X, there's basically no way to disable it except in recovery mode without some major hackery.
Is there any way to get around it? For specific limited uses, yes. While that dtruss command above doesn't work, you can do this:
$ sudo /usr/bin/filebyproc.d
Or even this:
$ sudo dtrace -n 'syscall::open*:entry { printf("%s %s", execname, copyinstr(arg0)); }'
… and you could replace that printf with code that executes your Python script, instead of trying to run this in a subprocess and parse its output.
And you will get output… but not for all processes.
On 10.13, all processes that are specifically blacklisted by SIP won't show up at all. And sandboxed apps—which includes things like TextEdit, and everything you can install off the App Store—will only show files inside their own sandbox, not files you pass them explicitly. Which makes it a lot less useful.
What about getting around it in general? Well, then you're basically asking how to write a rootkit. Find some exploit in SIP/Darwin/Mach, do a lot of complicated work to take advantage of it, and then when 10.14 comes out, start all over again because Apple closed the exploit.
You can get alerts on create, delete, modify and move of directories/files using tools like inotify, fswatch (for OS X) or watchdog. However, I'm not aware of a way to get an alert on a file open in the general case. You'd probably need to use lsof, or do what lsof does for you: scan through /proc/*/fs. That's polling, not the event-driven approach it sounds like you want.
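For the create/modify/move case, a minimal watchdog sketch might look like this (the watched path is a placeholder; note that it reports writes and renames, not opens, which is exactly the limitation described above):

import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class Handler(FileSystemEventHandler):
    # Called whenever a file or directory under the watched path is modified.
    def on_modified(self, event):
        print("modified:", event.src_path)

observer = Observer()
observer.schedule(Handler(), "/path/to/watch", recursive=True)  # placeholder path
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()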

libsass-python compile a file

I recently found a fantastic Python library that compiles Sass really fast!
libsass-python seems to be very good and really fast.
How can I use it to watch for any change in a Sass folder or file and compile it to CSS?
I do not understand how to pass a file and how to use the --watch option.
Thanks!
You may try Boussole, which works on top of libsass-python with per-project configuration and comes with a "watch" command (using watchdog).
From the parent of your scss source directory, use:
boussole startproject
If needed, change the settings options (in the generated settings.json), then type:
boussole watch
The solution described here (the --watch option) was removed from libsass-python in version 0.13.0 (release notes), released in 2017.
Therefore this solution will no longer work.
As a replacement, you can use Boussole, as advertised in a subsequent answer.
The rest of this post can be ignored unless you are using a version older than 0.13.0.
According to the help instructions (http://hongminhee.org/libsass-python/sassc.html), you can watch a file for modifications simply with:
$ sassc --watch source.scss target.css
Now, I get that you want to watch all the files contained in a folder, and it doesn't seem that the command-line utility provides that.
From what I can tell, I see two possible workarounds.
1: launch several sassc instances, one for each of your files. It's pretty dirty, but it doesn't require any effort, and I guess it's okay if you don't have too many files. Don't forget to terminate all the processes (with killall, for instance).
$ sassc --watch a.scss a.css & sassc --watch b.scss b.css # etc.
This is really not a great way to handle things, but it can be considered a temporary solution if you're in a hurry.
2: use libsass inside a Python program that triggers compilation when a watched file is saved. To that end you can use another library like watchdog or pyinotify.
This seems to be a much better way to handle things.
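A rough sketch of that idea, assuming watchdog is installed (the source folder and the naive .scss-to-.css name mapping are placeholders):

import time

import sass
from watchdog.events import PatternMatchingEventHandler
from watchdog.observers import Observer

class ScssHandler(PatternMatchingEventHandler):
    def on_modified(self, event):
        # Recompile the saved file; adjust output naming/paths to taste.
        css = sass.compile(filename=event.src_path, output_style="compressed")
        with open(event.src_path[:-5] + ".css", "w") as f:
            f.write(css)

observer = Observer()
observer.schedule(ScssHandler(patterns=["*.scss"]), "sass/", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()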
Hope this was helpful, good luck!

Why are there no Makefiles for automation in Python projects?

As a long-time Python programmer, I wonder if a central aspect of Python culture has eluded me for a long time: what do we do instead of Makefiles?
Most Ruby projects I've seen (not just Rails) use Rake; shortly after Node.js became popular, there was Cake. In many other (compiled and non-compiled) languages there are classic Makefiles.
But in Python, no one seems to need such infrastructure. I randomly picked Python projects on GitHub, and they had no automation besides the installation provided by setup.py.
What's the reason behind this?
Is there nothing to automate? Do most programmers prefer to run style checks, tests, etc. manually?
Some examples:
dependencies sets up a virtualenv and installs the dependencies
check calls the pep8 and pylint command-line tools.
the test task depends on dependencies, enables the virtualenv, starts selenium-server for the integration tests, and calls nosetest
the coffeescript task compiles all coffeescripts to minified javascript
the runserver task depends on dependencies and coffeescript
the deploy task depends on check and test and deploys the project.
the docs task calls sphinx with the appropriate arguments
Some of them are just one- or two-liners, but IMHO, they add up. Thanks to the Makefile, I don't have to remember them.
To clarify: I'm not looking for a Python equivalent of Rake; I'm happy with Paver. I'm looking for the reasons.
Actually, automation is useful to Python developers too!
Invoke is probably the closest tool to what you have in mind, for automation of common repetitive Python tasks: https://github.com/pyinvoke/invoke
With invoke, you can create a tasks.py like this one (borrowed from the invoke docs):
from invoke import run, task

@task
def clean(docs=False, bytecode=False, extra=''):
    patterns = ['build']
    if docs:
        patterns.append('docs/_build')
    if bytecode:
        patterns.append('**/*.pyc')
    if extra:
        patterns.append(extra)
    for pattern in patterns:
        run("rm -rf %s" % pattern)

@task
def build(docs=False):
    run("python setup.py build")
    if docs:
        run("sphinx-build docs docs/_build")
You can then run the tasks at the command line, for example:
$ invoke clean
$ invoke build --docs
Another option is to simply use a Makefile. For example, a Python project's Makefile could look like this:
docs:
$(MAKE) -C docs clean
$(MAKE) -C docs html
open docs/_build/html/index.html
release: clean
python setup.py sdist upload
sdist: clean
python setup.py sdist
ls -l dist
Setuptools can automate a lot of things, and for things that aren't built-in, it's easily extensible.
To run unit tests, you can use the setup.py test command after adding a test_suite argument to the setup() call. (documentation)
Dependencies (even ones not available on PyPI) can be handled by adding install_requires/extras_require/dependency_links arguments to the setup() call. (documentation)
To create a .deb package, you can use the stdeb module.
For everything else, you can add custom setup.py commands, as sketched below.
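A minimal sketch tying these pieces together (the project metadata, the example dependency, and the sphinx-build invocation are placeholders, not from this answer):

import subprocess

from setuptools import Command, setup

class DocsCommand(Command):
    """Hypothetical custom command: run as `python setup.py docs`."""
    description = "build the Sphinx documentation"
    user_options = []

    def initialize_options(self):
        pass

    def finalize_options(self):
        pass

    def run(self):
        subprocess.check_call(["sphinx-build", "-b", "html", "docs", "docs/_build/html"])

setup(
    name="example",                  # placeholder metadata
    version="0.1",
    test_suite="tests",              # enables `python setup.py test`
    install_requires=["requests"],   # example dependency
    cmdclass={"docs": DocsCommand},  # registers the custom command
)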
But I agree with S.Lott: most of the tasks you'd wish to automate (except dependency handling, maybe; it's the only one I find really useful) are tasks you don't run every day, so there wouldn't be any real productivity improvement from automating them.
There are a number of options for automation in Python. I don't think there is a culture against automation; there is just not one dominant way of doing it. The common denominator is distutils.
The one closest to your description is buildout. This is mostly used in the Zope/Plone world.
I myself use a combination of the following: Distribute, pip and Fabric. I mostly develop using Django, which has manage.py for automation commands.
It is also being actively worked on in Python 3.3
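As a taste of the Fabric part of that mix, a fabfile.py might look like this (a minimal Fabric 1 style sketch; the task bodies are placeholders):

from fabric.api import local, task

@task
def test():
    # Run the Django test suite on the local machine.
    local("python manage.py test")

@task
def deploy():
    # Placeholder deploy step; substitute your real procedure.
    local("git push production master")

Tasks are then invoked from the shell, e.g. fab test.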
Any decent test tool has a way of running the entire suite in a single command, and nothing is stopping you from using rake, make, or anything else, really.
There is little reason to invent a new way of doing things when existing methods work perfectly well - why re-invent something just because YOU didn't invent it? (NIH).
The make utility is an optimization tool which reduces the time spent building a software image. The reduction in time is obtained when all of the intermediate materials from a previous build are still available, and only a small change has been made to the inputs (such as source code). In this situation, make is able to perform an "incremental build": rebuild only a subset of the intermediate pieces that are impacted by the change to the inputs.
When a complete build takes place, all that make effectively does is to execute a set of scripting steps. These same steps could just be deposited into a flat script. The -n option of make will in fact print these steps, which makes this possible.
A Makefile isn't "automation"; it's "automation with a view toward optimized incremental rebuilds." Anything scripted with any scripting tool is automation.
So, why would Python projects eschew tools like make? Probably because Python projects don't struggle with long build times that they are eager to optimize. And, also, the compilation of a .py to a .pyc file does not have the same web of dependencies as a .c to a .o.
A C source file can #include hundreds of dependent files; a one-character change in any one of these files can mean that the source file must be recompiled. A properly written Makefile will detect when that is or is not the case.
A big C or C++ project without an incremental build system would mean that a developer has to wait hours for an executable image to pop out for testing. Fast, incremental builds are essential.
In the case of Python, probably all you have to worry about is when a .py file is newer than its corresponding .pyc, which can be handled by simple scripting: loop over all the files, and recompile anything newer than its byte code. Moreover, compilation is optional in the first place!
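The standard library even ships that loop as the compileall module:

import compileall

# Recompile every .py under the tree whose byte code is missing or stale.
compileall.compile_dir(".", quiet=1)

or, equivalently, python -m compileall . from the shell.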
So the reason Python projects tend not to use make is that their need to perform incremental rebuild optimization is low, and they use other tools for automation; tools that are more familiar to Python programmers, like Python itself.
The original PEP where this was raised can be found here. Distutils has become the standard method for distributing and installing Python modules.
Why? It just happens that Python is a wonderful language to perform the installation of Python modules with.
Here are few examples of makefile usage with python:
https://blog.horejsek.com/makefile-with-python/
https://krzysztofzuraw.com/blog/2016/makefiles-in-python-projects.html
I think most people are not aware of the "Makefile for Python" case. It could be useful, but the "sexiness ratio" is too small for it to propagate rapidly (just my personal POV).
Is there nothing to automate?
Not really. All but two of the examples are one-line commands.
tl;dr Very little of this is really interesting or complex. Very little of this seems to benefit from "automation".
Due to documentation, I don't have to remember the commands to do this.
Do most programmers prefer to run stylechecks, tests, etc. manually?
Yes.
generating documentation
the docs task calls sphinx with the appropriate arguments
It's one line of code. Automation doesn't help much.
sphinx-build -b html source build/html. That's a script. Written in Python.
We do this rarely. A few times a week. After "significant" changes.
running stylechecks (Pylint, Pyflakes and the pep8-cmdtool).
check calls the pep8 and pylint command-line tools
We don't do this. We use unit testing instead of pylint.
You could automate that three-step process.
But I can see how SCons or make might help someone here.
tests
There might be space for "automation" here. It's two lines: the non-Django unit tests (python test/main.py) and the Django tests (manage.py test). Automation could be applied to run both lines.
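If you did want to fold those two lines into one command, a trivial runner would do (the commands are taken verbatim from above):

import subprocess
import sys

# Run both suites; stop at the first failing one.
for cmd in (["python", "test/main.py"], ["python", "manage.py", "test"]):
    status = subprocess.call(cmd)
    if status != 0:
        sys.exit(status)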
We do this dozens of times each day. We never knew we needed "automation".
dependencies sets up a virtualenv and installs the dependencies
Done so rarely that a simple list of steps is all that we've ever needed. We track our dependencies very, very carefully, so there are never any surprises.
We don't do this.
the test task depends on dependencies, enables the virtualenv, starts selenium-server for the integration tests, and calls nosetest
The start server & run nosetest as a two-step "automation" makes some sense. It saves you from entering the two shell commands to run both steps.
the coffeescript task compiles all coffeescripts to minified javascript
This is something that's very rare for us. I suppose it's a good example of something to be automated. Automating the one-line script could be helpful.
I can see how SCons or make might help someone here.
the runserver task depends on dependencies and coffeescript
Except. The dependencies change so rarely that this seems like overkill. I suppose it can be a good idea if you're not tracking dependencies well in the first place.
the deploy task depends on check and test and deploys the project.
It's an svn co and python setup.py install on the server, followed by a bunch of customer-specific copies from the subversion area to the customer /www area. That's a script. Written in Python.
It's not a general make or SCons kind of thing. It has only one actor (a sysadmin) and one use case. We wouldn't ever mingle deployment with other development, QA or test tasks.

Python IDE that you can highlight a method/class and jump to its definition

I'm trying to use TextMate, but I find it hard to navigate a project with it.
I admit I probably just don't know the IDE well enough.
Is it possible to highlight a class or method and jump to its definition?
I am not sure that I understood your question, but if you are looking for an IDE for Python I would strongly recommend you have a look at PyDev.
It's by far the most feature-rich IDE for Python and it has a really active development team. And did I mention it's free and open source?
Wing IDE is an excellent IDE for python.
I too have been looking for an IDE that makes this sort of thing easy.
About two hours ago I downloaded PyCharm and it has blown me away. This may be the coolest IDE I have ever used, for any language. So far it seems to do everything that big IDEs like Visual Studio or Eclipse do (for many languages), only without the learning curve or resource consumption of those monsters.
It does exactly what you are asking... just right-click on any class, or a method, or pretty much anything else, Select "Go to Implementation" (or Declaration), and up it pops in a new tab.
So many other amazing slick features too... just try it!
There's a 30 day trial and after that it's pretty reasonable (like $29 for academic, $100 for individual, and $200 for a commercial team. Oh and FREE if you have a bona fide OpenSource project that's been actively worked on for at least 3 months.)
(I apologize if this sounds like an advertisement. I can assure you that it's not. I'm just kind of... obsessive... about IDEs, and very frustrated that so few of them meet my standards. I will revise this if I find any "caveats" but so far, so good.)
If you're asking about IntelliJ IDEA, Python is only available for the commercial version.
If you're asking about a Python IDE, IDLE comes with Python already. I can also recommend Boa Constructor.
WingIDE if you can fork out some cash will do what you want all bundled up and with little to no configuration effort.
Otherwise Eclipse with Aptana's pydev is free, and does exactly that, plus a lot more (ctrl+click pretty much anything for redirection and a lot of other useful things like pyc removal etc.).
Navigation problems though are usually symptomatic of more than just lack of tools. A decent structure to your projects and a version control system (even if you work locally and solo) would go a long way helping to address that.
I'm not sure what functionality is available in TextMate, but would a simple search work? I.e., Ctrl+F with the query "def function", including the def part, so you find the definition instead of a call?
I'm a fan of pyscripter http://code.google.com/p/pyscripter/. Has those features and more (and a regex checker!)
Open source of course.
Aptana Studio 3.0
The PyDev team is now operating under the auspices of Aptana, which makes Aptana Studio 3 - an Eclipse customisation - preferable to the 2-step process of first downloading Eclipse, and then installing the PyDev extension.
Aptana comes pre-configured for Python (and others), and in addition features custom support for Django projects [including JavaScript support].
The product is fast and responsive, and features powerful meta-level functionality such as jumping to the definition of a callable, deducing object fields from __init__ initialisation, module browsing, very good code completion, and more...
Thus far, out of Eclipse+PyDev, NetBeans Python Edition and Aptana Studio 3, based on relatively extensive personal testing, AS3 wins hands down.
Here's a small Bundle/Command for TextMate that can accomplish Python Jump to definition for 99% of cases:
FUNC="$TM_CURRENT_WORD"
DIR="$TM_PROJECT_DIRECTORY"
OUTPUT=''
# Define the class or function definition string that we're looking for.
FUNCDEF='(def|class) '$FUNC
# Find all files that contain FUNCDEF
FILES=($(egrep -rl --include='*.py' "$FUNCDEF" "$DIR"))
#
# Look for a function declaration within a file's contents.
#
# Usage: lookup_function <file>
#
function lookup_function {
local line=`nl -b a "$1" | egrep "$FUNCDEF" | awk '{print $1}'`
if [[ "$line" -gt 0 ]]; then
# echo 'Jumping to --> '$1':'$line
mate "$1" -l "$line"
exit 0
fi
}
# Iterate files
for file in "${FILES[@]}"; do
echo $file
lookup_function "$file"
done
# Nothing found
echo 'Function '${FUNC}' was not found within the current project.'

How can I install specialized environments for different Perl applications?

Is there anything equivalent or close in terms of functionality to Python's virtualenv, but for Perl?
I've done some development in Python, and the possibility of having non-system versions of modules installed in a separate environment without creating any mess is a huge advantage. Now I have to work on a new project in Perl, and I'm looking for something like virtualenv, but for Perl. Can you suggest any Perl equivalent or replacement for Python's virtualenv?
I'm trying to set up X different sets of non-system Perl packages for Y different applications to be deployed. Even worse, these applications may require different versions of the same package, so each of them may need to be installed in a separate module/library environment. You may want to do this manually for X < Y < 3, but you should not do this manually for 10 > Y > X.
Ideally what I'm looking should work like this:
perl virtualenv.pl my_environment
. my_environment/bin/activate
wget http://.../foo-0.1.tar.gz
tar -xzf foo-0.1.tar.gz ; cd foo-0.1
perl Makefile.PL
make install # <-- package foo-0.1 gets installed inside my_environment
perl -MCPAN -e 'install Bar' # <-- now package Bar with all its deps gets installed inside my_environment
There's a tool called local::lib that wraps up all of the work for you, much like virtualenv. It will:
Set up @INC in the process where it's used.
Set PERL5LIB and other such things for child processes.
Set the right variables to convince CPAN, MakeMaker, Module::Build, etc. to install libraries and store configuration in a local directory.
Set PATH so that installed binaries can be found.
Print environment variables to stdout when used from the command line, so that you can put eval $(perl -Mlocal::lib)
in your .profile and then mostly forget about it.
I've used schroot for this purpose. It is a bit heavier than virtualenv but you can be sure that nothing will leak in that shouldn't.
Schroot manages a chroot environment for you, but mounts your home directory in the chroot so it appears like a normal shell session, just using the binaries and libraries in the chroot.
I think it may be debian/ubuntu only though.
After setting up the schroot, your script above would look like
schroot -c my_perl_dev
wget ...
See http://www.debian-administration.org/articles/566 for an interesting article about it
Also check out perl-virtualenv; this seems to be a wrapper around local::lib, as suggested by Hobbs, but it creates a bin/activate and bin/deactivate so you can use it just like the Python tool.
I've been using it quite successfully for a month or so without realising it wasn't as standard as perhaps it should be.
It makes it a lot easier to set up a working virtualenv for Perl: while local::lib will tell you which variables you need to set, etc., perl-virtualenv creates an activate script which does it for you.
While investigating, I discovered this and some other pages (this one is too old and misses new technologies, this reddit post is a slight misdirect).
The problem with perlbrew and plenv is that they seem to be replacements for pyenv, not virtualenv. As noted here, pyenv is for managing Python versions, while virtualenv is for managing per-project module versions. So, yes, in some ways similar to local::lib, but with better usability.
I've not seen a proper answer to this question yet, but from what I've read, it looks like the best solution is something along the lines of:
Perl version management: plenv/perlbrew (with most people favouring the more contemporary bash-based plenv over the Perl-based perlbrew, from what I can see)
Module version management: Carton
Module installation: cpan (well, cpanminus anyway, YMMV)
To be honest, this is not an ideal setup, although I'm still learning, so it may yet prove superior. It just doesn't feel right. It certainly isn't a like-for-like replacement for virtualenv.
There are a couple of posts I've found saying "it is possible" but neither has gone any further.
I am not sure whether this is the same as that virtualenv thing you are talking about, but have a look at the @INC special variable in the perlvar manpage.
Programs can modify which directories they check for libraries with use lib. This lib directory can be relative to the current directory. Libraries from these directories will be used before system libraries, as they are placed at the beginning of the @INC array.
I believe cpan can also install libraries to specific directories. Granted, cpan draws from the CPAN site in order to install things, so this may not be the best option.
It looks like you just need to use the INSTALL_BASE configuration for Makefile.PL (or the --install_base option for Build.PL). What exactly do you need the solution to do for you? It sounds like you just need to get the installed module in the right place. You've presented your problem as an XY Problem by specifying what you think the solution is rather than letting us help you with your task.
See How do I keep my own module/library directory? in perlfaq8, for instance.
If you are downloading modules from CPAN, the latest cpan command (in App::Cpan) has a -j switch to allow you to choose alternate CPAN.pm configuration files. In those configuration files you can set the CPAN.pm options to install wherever you like.
Based on your clarification, it sounds like local::lib might work for you in single, simple cases, but I do this for industrial-strength deployments where I set up custom, private CPANs per application, and install directly from those custom CPANs. See my MyCPAN::App::DPAN module, for instance. From that, I use custom CPAN.pm configs that analyze their environment and set the proper values so each application can install everything in a directory just for that application.
You might also consider distributing your application as a Task::. You install it like any other Perl module, but dependencies share that same setup (i.e. INSTALL_BASE).
What I do is start the CPAN shell (cpan) and install my own Perl 5.10 from it
(I believe the command is install perl-5.10). This will ask for various configuration
settings; I make sure to make it point to paths under /usr/local
(or some other installation location other than the default).
Then I put its resulting location in my executable $PATH before the standard perl, and use its CPAN shell to install the modules I need (usually, a lot).
My Perl scripts all start with the line
#!/usr/bin/env perl
Never had a problem with this approach.
