I am wondering how it is possible to publish a generated HTML report via GitHub?
I think our use case is quite standard and it looks more or less like this:
1. Start up a Docker image.
2. Do some Python setup work (install packages from requirements.txt).
3. Run the Python tests (slash run --with-coverage --cov . ./tests --cov-report html). This generates an HTML report indicating the test coverage.
4. Publish that generated HTML report so that it can be viewed directly in the browser (without having to download the report).
I am stuck at step 4. Even though there is GitHub Pages, it can only publish files that are actually checked in, not reports that get generated during a step in the Actions workflow.
Furthermore, it seems that I can only tell GitHub Pages to publish from one specific branch. However, I would like to have this functionality on all branches, to see whether coverage actually improves or not.
As mentioned, I don't think this is a rare use case, so I am surprised that I can't find any resources on how to achieve this.
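A rough sketch of the workflow described above (the action versions, container image tag, and job layout are assumptions; step 4 is left as a comment because that is the open question):

name: coverage
on: [push]
jobs:
  tests:
    runs-on: ubuntu-latest
    container: python:3.11            # step 1: run the job inside a Docker image (tag assumed)
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt                                # step 2: Python setup work
      - run: slash run --with-coverage --cov . ./tests --cov-report html    # step 3: generates the HTML coverage report
      # step 4: publish the generated HTML report so it can be viewed in the browser -- the open question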
I want to run a Python script as part of a Jenkins pipeline triggered from a GitHub repo. If I store the script directly in the repo itself, I can just do sh 'python path/to/my_package/script.py', which works perfectly. However, since I want to use this from multiple pipelines in multiple repos, I want to put it in a Jenkins shared library.
I found this question, which suggested storing the Python file in the resources directory and copying it to a temp file before use. That only works if the script is one standalone file; unfortunately, mine is a package with multiple Python files and imports between them, so that's a no-go. I also tried to copy the entire folder containing the Python package, following the answer to this question, which suggests getting the location of the library with
import groovy.transform.SourceURI
import java.nio.file.Path
import java.nio.file.Paths

class ScriptSourceUri {
    @SourceURI
    static URI uri
}
but it gives me the following error:
Scripts not permitted to use staticMethod java.net.URI create java.lang.String. Administrators can decide whether to approve or reject this signature.
It seems that some additional permissions are required, which I don't think I'll be able to acquire (it's a shared machine).
So, does anyone know how I can run a Python package from a Jenkins shared library? Right now the only solution I can think of is to manually recreate the directory structure of the Python package, which is obviously very messy and non-generic.
PS: There is no particular reason for using the Python script over writing the same script in Groovy. It's just that the Python script is well tested, well understood, and well supported. Rewriting the whole thing in Groovy just isn't feasible right now.
You can go to the http://host:8080/jenkins/scriptApproval/ page of your Jenkins installation and approve the pending signature for your scripts. See the Jenkins documentation on in-process script approval for more information.
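For context, the single-file resources approach referenced in the question looks roughly like this (a sketch only; the resource path and file names are hypothetical). It is exactly this pattern that breaks down once the script grows into a multi-file package:

// vars/runMyScript.groovy in the shared library
def call() {
    // load the script bundled under resources/ and materialize it in the workspace (paths are hypothetical)
    def scriptText = libraryResource 'scripts/my_script.py'
    writeFile file: 'my_script.py', text: scriptText
    sh 'python my_script.py'
}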
In Scala I can use the following command (with sbt, the Scala build tool) to get an initial project that has a pretty much standard skeleton:
sbt new scala/scala-seed.g8
This saved me loads of headaches when it comes to clean code and the initial structure of the project.
I want to achieve the same thing with Python. Is there a way, a “seed” I can use, that pretty much sums up the standard skeleton for a Python project? My criteria are:
Config files: any files that sum up the dependencies and the test setup, specifically coverage testing and coverage report generation.
Source: the source folder for source files.
Tests: for unit, integration, and property-based tests.
Manageable build tool: a build tool I can use to create docs, compile, test, and run.
I also asked myself what the big open-source Python projects look like. None, and I mean none, look the same in terms of how the code is structured. I looked at TensorFlow, Scikit, Zulip, and Keras on their GitHub pages.
Cookiecutter should work in this situation. It's a command-line utility that sets up a project from a template. You can install it with pip install --user cookiecutter.
You can use a variety of templates, from a full-blown Python package to a minimal pip-installable project.
Take a look at the documentation to see how to set up tests, CI, coverage reports, generated docs, and so on.
Full documentation: https://cookiecutter.readthedocs.io/en/latest/readme.html
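For example, generating a project skeleton from a template looks roughly like this (the template shown is just one popular, community-maintained example, not the only option):

pip install --user cookiecutter
cookiecutter gh:audreyfeldroy/cookiecutter-pypackage

After answering the prompts, you typically get a source package, a tests/ directory, documentation scaffolding, and tox/CI configuration, which covers the criteria listed in the question.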
I'm using Pelican to generate a static website. I want to run a custom Python script during the build process, before the website content is generated in the output folder. The script primarily does some text replacement via regex parsing. Can someone educate me on how this can be achieved (if at all)?
I needed to accomplish exactly what you're describing: I had to update the URLs on some <img> tags before pushing to GitLab Pages. The only thing I could find was to add a command to whatever deployment option you're using (Fabric, make, etc.). For example, I'm using make to "deploy" my site (output the code to a separate directory/repo, which then gets pushed to GitLab). So in my Makefile, I appended to the publish target like this:
publish:
	$(PELICAN) $(INPUTDIR) -o $(OUTPUTDIR) -s $(PUBLISHCONF) $(PELICANOPTS) && find $(OUTPUTDIR)/* -type f -name '*.html' -exec sed -i '' 's/stuff to replace/replacement text/g' {} +
I'm pretty ignorant of how to use make, so there's probably a better way to format that in Makefile, but it works for me.
This seems like a task best suited for a Pelican plugin. The first thing I would do is look at the Pelican Plugin Repository and see if there is an existing plugin that can deliver the functionality you want. If not, you might consider finding a plugin that is close enough and modifying it to achieve your desired result. The documentation for Pelican plugins is reasonably extensive, and if you run into trouble, you can most likely solicit assistance from the folks in the Pelican IRC channel.
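As a rough illustration of what such a plugin could look like, here is a minimal sketch (the signal choice and the regex are assumptions, not a drop-in solution):

# regex_replace.py -- enable it via PLUGINS in pelicanconf.py
import re
from pelican import signals

def rewrite_content(content):
    # content._content holds the HTML for an article or page before it is written to output
    if getattr(content, '_content', None):
        content._content = re.sub(r'stuff to replace', 'replacement text', content._content)

def register():
    # run once for every article/page object Pelican initializes
    signals.content_object_init.connect(rewrite_content)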
I use Sphinx to document my code, which is a collection of Python modules. The autogenerated documentation that's made by scrubbing my source code is fine, but when I click on the "code" link that links directly to the HTML pages containing my package's sources that Sphinx generates, an older version of my code is shown.
I've tried deleting my Sphinx generated documentation, uninstalling the package from my own site-packages folder, and deleting everything in my build folder. I can find absolutely no files that match Sphinx's output - it's old, and I'm not sure where it's coming from. Does anybody know how to get Sphinx to put my new code in the documentation?
As I said, the autodocumentation works fine, so it's obviously parsing my code on some level. So why is the pure text different from the autodocumentation?
Sphinx caches (pickles) parsed source files. The cache is usually located in a directory called .doctrees under the build directory. To ensure that your source files are reparsed, delete this directory.
See http://www.sphinx-doc.org/en/stable/man/sphinx-build.html#cmdoption-sphinx-build-d.
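In practice that amounts to something like the following (the paths assume the common docs/source and docs/build layout):

rm -rf docs/build/.doctrees                      # drop the pickled doctree cache
sphinx-build -b html docs/source docs/build      # rebuild from freshly parsed sources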
Pass the poorly documented -a option to sphinx-build.
Doing so effectively disables all Sphinx caching by forcing Sphinx to rebuild all output documentation files (e.g., HTML in the docs/build/ subdirectory), regardless of whether the underlying source input files have been directly modified or not. This is my preferred mode of operation when locally authoring Sphinx documentation, because I frankly do not trust Sphinx to safely cache what I expect it to. It never does.
For safety, I also:
Prefer directly calling sphinx-build to indirectly invoking Sphinx through makefiles (e.g., make html).
Pass -W, converting non-fatal warnings into fatal errors.
Pass --keep-going, collecting all warnings before failing (rather than immediately failing on the first warning).
Pass -n, enabling "nit-picky mode" generating one warning for each broken reference (e.g., interdocument or intrasection link).
Pass -j auto, parallelizing the build process across all available CPU cores.
I have trust issues. I'm still shell-shocked from configuring Sphinx for ReadTheDocs three all-nighters ago. Now, I always locally run Sphinx via this command (wrapped in a Bash shell script or alias for convenience):
$ sphinx-build -M html doc/source/ doc/build/ -W -a -j auto -n --keep-going
That's just how we roll, Sphinx. It's Bash or bust.
Is there already a way to integrate one of the Python lint programs (PyLint, PyChecker, PyFlakes, etc.) with the GitHub commit status API? That way, a linter could be called automatically on pull requests to check the code and provide feedback on the code (and its style).
You could use something like Travis CI and run pylint as part of your tests, along the lines of:
language: python
install: "pip install nose pylint"
script: "nosetests && pylint"
Of course, that fails commits for minor stylistic violations; you'd probably want to disable certain messages, or use pylint --errors-only to make it less stringent.
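If you want to report the result through the GitHub commit status API itself (as asked above), a rough sketch in Python could look like this (the package name, token environment variable, and context string are all hypothetical):

import os
import subprocess
import requests

def report_pylint_status(owner, repo, sha):
    # run pylint in errors-only mode; a non-zero exit code means problems were found
    result = subprocess.run(['pylint', '--errors-only', 'yourpackage'])  # 'yourpackage' is a placeholder
    state = 'success' if result.returncode == 0 else 'failure'
    response = requests.post(
        f'https://api.github.com/repos/{owner}/{repo}/statuses/{sha}',
        headers={'Authorization': 'token ' + os.environ['GITHUB_TOKEN']},
        json={'state': state, 'context': 'lint/pylint', 'description': 'pylint check'},
    )
    response.raise_for_status()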
I had the same question, and just found this blog post describing a project called pylint-server that does something similar (though triggered by Travis CI build events, not pull requests).
From the README:
A small Flask app to keep track of pylint reports and ratings on a per-repository basis.
I haven't tried it yet, so I can't comment on its quality. If anyone tries it, please comment and let us know how you like it.