I'm using Pelican to generate a static website. I want to run a custom Python script during the build process, before the website content is generated in the output folder. The script mainly does some text replacement via regex parsing. Can someone explain how this can be achieved (if at all)?
I needed to accomplish the exact same thing: I had to update URLs on some <img> tags before pushing to GitLab Pages. The only thing I could find was to add a command to whatever deployment option you're using (Fabric, make, etc.). For example, I'm using make to "deploy" my site (output the code to a separate directory/repo, which then gets pushed to GitLab). So in the Makefile, I appended to the publish target like this:
publish:
$(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(PUBLISHCONF) $(PELICANOPTS) && find $(OUTPUTDIR)/* -type f -name '*.html' -exec sed -i '' 's/stuff to replace/replacement text/g' {} +
I'm pretty ignorant of how to use make, so there's probably a better way to format that in the Makefile, but it works for me.
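(The empty string after -i is the BSD/macOS spelling of sed's in-place flag; on Linux with GNU sed the equivalent would be a plain -i, e.g. sed -i 's/stuff to replace/replacement text/g'.)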
This seems like a task best suited for a Pelican plugin. The first thing I would do is look at the Pelican Plugin Repository and see if there is an existing plugin that delivers the functionality you want. If not, you might consider finding a plugin that is close enough and modifying it to achieve your desired result. The documentation for Pelican plugins is reasonably extensive, and if you run into trouble, you can most likely get help from the folks in the Pelican IRC channel.
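If nothing in the repository fits, a minimal plugin for regex replacement is only a few lines. The sketch below is an illustration, not an official plugin: the module name, pattern, and replacement are placeholders, and it assumes you want to rewrite the rendered HTML of every article and page via Pelican's content_object_init signal:

# plugins/text_replace.py  (hypothetical plugin name)
import re
from pelican import signals

PATTERN = re.compile(r'stuff to replace')   # placeholder regex
REPLACEMENT = 'replacement text'            # placeholder replacement

def replace_text(instance):
    # _content holds the rendered HTML of an article or page (static files have none)
    content = getattr(instance, '_content', None)
    if content:
        instance._content = PATTERN.sub(REPLACEMENT, content)

def register():
    signals.content_object_init.connect(replace_text)

Enable it by pointing PLUGIN_PATHS at your plugins directory and adding 'text_replace' to PLUGINS in pelicanconf.py.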
I am wondering how it is possible to publish a generated HTML report via GitHub.
I think our use case is quite standard and it looks more or less like this:
1. Start up a Docker image
2. Some Python setup work (install packages from requirements.txt)
3. Run the Python tests (slash run --with-coverage --cov . ./tests --cov-report html)
--> This generates an HTML report showing the test coverage.
4. Publish that generated HTML report so that it can be viewed directly in the browser (without having to download it)
I am stuck at step 4. Even though GitHub Pages exists, it can only publish files that are actually checked in, not reports that get generated during a step in the Actions workflow.
Furthermore, it seems that via GitHub I can only specify a single branch from which to publish. However, I would like to have this functionality on all branches, to see whether coverage actually improves or not.
As mentioned, I don't think this is a rare use case, so I am surprised that I can't find any resources on how to achieve it.
Is there a way to have a Python program react to a file being opened? For example, can I get it to do something when I open a text file or another Python file?
The short answer is No.
The long answer is: It depends on what you mean by "open"—but for most reasonable definitions, on any modern macOS, it will be doable, but difficult, and will likely break in 10.14 or 10.15.
For example, let's say you're looking to hook every POSIX-level open by any process on the system. The DTrace API provides a way to do that. But if you try to use it:
$ sudo dtruss -t open_noncancel -f -p 1
… if you're on 10.9 or later, you'll see a message like this:
dtrace: system integrity protection is on, some features will not be available
And then, when someone opens a file, you'll either see nothing at all, or, at best, a string of errors like this:
dtrace: error on enabled probe ID 123 (ID 456: syscall::thread_selfid:entry): invalid user access in action #2 at DIF offset 0
You can read about SIP (System Integrity Protection) Runtime Protection here, or on various third-party blog posts like this one, but in recent versions of OS X, there's basically no way to disable it except in recovery mode without some major hackery.
Is there any way to get around it? For specific limited uses, yes. While that dtruss command above doesn't work, you can do this:
$ sudo /usr/bin/filebyproc.d
Or even this:
$ sudo dtrace -n 'syscall::open*:entry { printf("%s %s", execname, copyinstr(arg0)); }'
… and you could replace that printf with code that executes your Python script, instead of trying to run this in a subprocess and parse its output.
And you will get output… but not for all processes.
On 10.13, all processes that are specifically blacklisted by SIP won't show up at all. And sandboxed apps—which includes things like TextEdit, and everything you can install off the App Store—will only show files inside their own sandbox, not files you pass them explicitly. Which makes it a lot less useful.
What about getting around it in general? Well, then you're basically asking how to write a rootkit. Find some exploit in SIP/Darwin/Mach, do a lot of complicated work to take advantage of it, and then when 10.14 comes out, start all over again because Apple closed the exploit.
You can get alerts on create, delete, modify, and move of directories/files using tools like inotify, fswatch (for macOS), or watchdog. However, I'm not aware of a way to get an alert on a file open in the general case. You'd probably need to use lsof, or do what lsof does for you: scan through /proc/*/fd (polling, rather than the event-driven approach it sounds like you want).
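For illustration, here is a rough, Linux-only sketch of that polling approach: it diffs the fd symlinks under /proc between polls (run it as root to see other users' processes, and note that it misses files opened and closed between two polls):

import glob
import os
import time

def open_files():
    # Collect (pid, path) pairs for every fd symlink we are allowed to read under /proc
    found = set()
    for fd in glob.glob('/proc/[0-9]*/fd/*'):
        try:
            found.add((fd.split('/')[2], os.readlink(fd)))
        except OSError:
            continue  # process exited, fd closed, or permission denied
    return found

seen = open_files()
while True:
    time.sleep(1)
    current = open_files()
    for pid, path in sorted(current - seen):
        print('pid %s opened %s' % (pid, path))
    seen = current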
I recently found a fantastic Python library for compiling SASS really fast!
libsass-python seems to be very good and really fast.
How can I use it to watch for any change in a Sass folder or file and compile it to CSS?
I do not understand how to pass a file and how to use the --watch option.
Thanks!
You may try Boussole, which works on top of libsass-python with a per-project configuration and comes with a "watch" command (using watchdog).
In the parent directory of your scss sources, use:
boussole startproject
If needed, you can change settings options (in the generated settings.json), then type:
boussole watch
The solution described here (the --watch option) was removed from libsass-python in version 0.13.0 (release notes), released in 2017.
Therefore this solution will no longer work.
As a replacement, you can use boussole, as suggested in another answer.
The rest of this post can be ignored unless you are using versions older than 0.13.0.
According to the help instructions (http://hongminhee.org/libsass-python/sassc.html), you can watch a file for modifications simply with:
$ sassc --watch source.scss target.css
Now, I take it you want to watch all the files contained in a folder, and it doesn't seem that the command-line utility provides that.
From what I can tell, there are two possible workarounds.
1 : launch several sassc instances, one for each of your files. It's pretty dirty, but it doesn't require any effort, and I guess it is okay if you don't have too many files. Don't forget to terminate all the processes (with killall, for instance).
$ sassc --watch a.scss a.css & sassc --watch b.scss b.css # etc.
This is really not a great way to handle things, but it can be considered a temporary solution if you're in a hurry.
2 : use libsass inside a Python program that triggers compilation whenever a watched file is saved. To that end you can use another library such as watchdog or pyinotify.
This seems to be a much better way to handle things.
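For illustration, here is a rough sketch of option 2, using watchdog to trigger libsass-python whenever a .scss file changes (the sass/ and css/ folder names are placeholders):

import time
import sass                                  # libsass-python
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

SASS_DIR = 'sass'    # placeholder: folder with your .scss sources
CSS_DIR = 'css'      # placeholder: folder for the compiled CSS

class SassRecompiler(FileSystemEventHandler):
    def on_any_event(self, event):
        # Recompile the whole source folder whenever any .scss file changes
        if event.src_path.endswith('.scss'):
            sass.compile(dirname=(SASS_DIR, CSS_DIR), output_style='expanded')
            print('Recompiled after change to', event.src_path)

observer = Observer()
observer.schedule(SassRecompiler(), SASS_DIR, recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()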
Hope this was helpful, good luck !
I use both Python and Ruby, and I really love Ruby's Yard documentation server:
http://yardoc.org/
I would like to know if there is an equivalent in the Python world. pydoc -p is really old, ugly, and not comfortable to use at all, and it doesn't look like Sphinx or Epydoc supports a server mode.
Do you know of any equivalent?
Thank you
Python packages don't really have a convention for where to put the documentation. The main documentation of a package may be built with a range of different tools, sometimes based on the docstrings, sometimes not. What you see with pydoc -p is the package contents and the docstrings only, not the main documentation. If this is all you want, you can also use Sphinx for this purpose. Here's sphinx-server, a shell script I just coded up:
#!/bin/sh
# Generate a full Sphinx project (conf.py, Makefile, autodoc stubs) for package $1 in directory $2
sphinx-apidoc -F -o "$2" "$1"
cd "$2"
make html
# Serve the built HTML on port 2345 (Python 2's built-in web server)
cd _build/html
python -mSimpleHTTPServer 2345
Call this with the package directory of the package you want information on as the first argument and the directory in which to build the new documentation as the second argument. Then point your browser to http://localhost:2345/
(Note: You probably want to remove the webserver invocation from the script. It's more for the purpose of demonstration. This is assuming Python 2.x.)
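(On Python 3, the last line of the script would instead be something like python3 -m http.server 2345, which serves the current directory the same way.)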
Seems kind of unnecessary to implement a web server just to serve up some HTML. I tend to like the *ix philosophy of each tool doing one small thing, well. Not that a web server is small.
But you could look at http://docs.python.org/library/basehttpserver.html
Currently I have a bunch of git repos for a Django site I'm looking to deploy. The repos take the form:
sn-static
sn-django
sn-templates
[etc]
I then have a super repo that stores each of these as submodules. In terms of deployment, I want to keep things fairly simple. Would it be a valid method to:
Clone a stable tag of the super repo, and therefore have stable clones of each repo in one place.
As the names are sn-*, I would then symlink them to a friendlier structure, e.g. ln -s /path/to/super-repos/sn-static /home/site/media/
Then my nginx webserver (in the case of static content at least) could simply refer to /home/site/media
Without a great deal of technical knowledge, I'm unsure whether symlinking would have any implications in terms of speed or stability. I'm also wondering if I can get away with this as a deployment method rather than, say, using something like Capistrano (which I have no experience with yet).
An option you should consider is using pip in conjunction with virtualenv to install your packages, especially as pip can install directly from specific branches or tags of a git repository.
That way you can use one requirements file to handle all your dependencies: your own packages as well as apps by other people. (See this post for the big picture.)
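For illustration, a requirements file could pin each of your own repositories to a tag; the URL and tag name here are placeholders:

# requirements.txt (hypothetical)
-e git+https://github.com/you/sn-django.git@v1.0#egg=sn-django
# ...plus any third-party packages your site depends on

Then pip install -r requirements.txt inside the activated virtualenv installs everything in one step.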
And to handle your static media, I'd prefer to use Django's built-in staticfiles app instead of symlinking several dirs, as it seems cleaner and easier to manage.
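A sketch of what that looks like in practice, assuming django.contrib.staticfiles is in INSTALLED_APPS (the paths are placeholders based on the question):

# settings.py
STATIC_URL = '/static/'
STATIC_ROOT = '/home/site/media/'        # nginx keeps serving this path
STATICFILES_DIRS = (
    '/path/to/super-repos/sn-static',    # your sn-static checkout
)

Running python manage.py collectstatic then copies everything into STATIC_ROOT, with no symlinks needed.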
When you reach a release point in your code, tag it (git tag). On your server, clone the repository once; then, each time you do a release, fetch the tags and check out the one you want:
git fetch --tags && git checkout [tag]