We've got a Django (1.5.5)-based application using FeinCMS (1.7.4).
For one page, originally only the (general) en-based version was configured. Later, specific configurations for en-us and en-ca were added, with different url-names than the en version used. As a consequence, (en-based) links that had been distributed (via marketing channels) before that change no longer worked.
Playing around with the url-names, I noticed that Django/FeinCMS only honours the url-name that was edited last. In other words, only one url-name is ever recognised across all contexts (en, en-us and en-ca): the one that was edited/created last.
Does anyone know a way to fix this? I've tried to find the "responsible" code, but without success.
Creating manual redirects is not an option, as there are too many links to specific stories/articles.
[EDIT 17-10-2016 17:53]
Based on Jonas' comments I investigated the cms_page table in the DB a little. I noticed the following:
There is no row in cms_page that represents the country-specific page configurations (e.g. for en-us and en-ca).
Although the last-edited url-name and title are those of the country-specific configuration (i.e. the one that "works"), they don't show up in the table.
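For reference, here is roughly how the same data can be inspected from a Django shell. The field names (language, translation_of) assume FeinCMS's standard translations extension and may differ in your setup; 'some-story' is a placeholder slug:

# Inspect what FeinCMS stores for each translation of a page.
from feincms.module.page.models import Page

for page in Page.objects.filter(slug__icontains='some-story'):
    print(page.pk, page.language, page.translation_of_id,
          page.slug, page._cached_url)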
I've noticed in various GitHub Actions workflow examples that when calling a pre-defined action (with the uses: syntax), a particular version of that action is often specified. For example:
steps:
  - uses: actions/checkout@v2
  - name: Set up Python
    uses: actions/setup-python@v2
    with:
      python-version: '3.x'
The above workflow specifies @v2 for both actions/checkout and actions/setup-python.
The question is: how does one know that @v2 is the best version to use?
And how will I know when @v3 becomes available?
Even more confusing is the case of the action used to publish to PyPI, pypa/gh-action-pypi-publish. In the examples I have looked at, I have seen at least four different versions specified:
pypa/gh-action-pypi-publish@27b31702a0e7fc50959f5ad993c78deac1bdfc29
pypa/gh-action-pypi-publish@master
pypa/gh-action-pypi-publish@v1
pypa/gh-action-pypi-publish@release/v1
How do I know which one to use?
And in general, how do you know which ones are available, and what the differences are?
How to know which version to use?
When writing a workflow and including an action, I recommend looking at the Releases tab of its GitHub repository. For actions/setup-python, that would be https://github.com/actions/setup-python/releases
On that page, you can see which versions exist and which one is the latest. You want to use the latest version, because that way you can be sure you're not falling behind, and upgrading won't become too painful in the future.
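If you prefer checking from a script instead of the web UI, the same information is available from the GitHub REST API. A minimal sketch (unauthenticated, so subject to rate limits; actions/setup-python is just the example repository):

import json
import urllib.request

# Ask the GitHub REST API for the latest published release of an action.
repo = "actions/setup-python"
url = "https://api.github.com/repos/%s/releases/latest" % repo

with urllib.request.urlopen(url) as response:
    release = json.load(response)

# The tag name is what you would reference in your workflow.
print(release["tag_name"])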
How to reference a version?
By convention, actions are published with specific tags (e.g. v1.0.1) as well as a major tag (e.g. v1). This allows you to reference an action like so: actions/setup-python@v1. As soon as version v1.0.2 is published, you will automatically use that one. This means you benefit from bug fixes and new features, while being protected from pulling in breaking changes.
However, note that this is only a convention. Not every author of an action publishes a major tag and moves it along as new tags are published. Furthermore, an author might introduce a breaking change without bumping the major version.
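You can observe this convention (and whether an author actually follows it) with git ls-remote, which shows which commit each tag currently points to. A small sketch wrapping it in Python (running git ls-remote --tags <repo-url> directly in a shell works just as well):

import subprocess

# List the commits the remote's tags point to. If the convention is
# followed, the major tag (e.g. v1) resolves to the same commit as the
# newest v1.x.y tag, and it moves whenever a new one is published.
result = subprocess.run(
    ["git", "ls-remote", "--tags", "https://github.com/actions/setup-python"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.splitlines():
    sha, ref = line.split("\t")
    # Refs ending in ^{} are "peeled" annotated tags, i.e. the commit
    # the tag object ultimately points at.
    print(sha[:12], ref)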
When to use other formats
As you said, there are other ways to reference an action, such as a specific commit (e.g. actions/setup-python@27b31702a0e7fc50959f5ad993c78deac1bdfc29) and others.
In general, you want to stick to tags as described above. In particular, referencing @main or @master is dangerous, because you'll always get the latest changes, which might break your workflow. If an action advises you to reference its default branch and doesn't publish tags, I recommend opening an issue in its GitHub repository asking for tags to be published.
Using a git hash can be useful if you need a specific version. One use case is testing whether a specific commit fixes a problem, or trying out changes the author has pushed but not tagged yet.
Security
From a security perspective, using a tag (e.g. @v1 or @v1.1.0) or a branch (e.g. @main) is problematic, because the author of that repository could change what it refers to. A malicious author of an action could add malicious code to that branch, or simply not be careful enough when reviewing a PR and thereby introduce a vulnerability (e.g. via a transitive dependency).
By using hashes (e.g. @27b31702a0e7fc50959f5ad993c78deac1bdfc29) you know exactly what you get, and it doesn't change unless you choose to change the version by updating the hash (at which point you can carefully review the changes).
As of early 2022, using hashes instead of tags is not widely adopted, but GitHub, for example, does this for their docs repository. As supply-chain security becomes more important, tools are emerging to help with "pinning" (pointing to a specific version by hash rather than by tag), such as sethvargo/ratchet. Even Dependabot (see below) should be able to update pinned hashes to the latest hash.
How to know when there is a new version?
You can use Dependabot for that: Keeping your actions up to date with Dependabot. Dependabot creates a pull request in your repository as soon as a new version of any of your actions is available, so that you can review the changes and keep your workflow up to date.
Here's a sample Dependabot configuration that keeps your actions up to date by creating PRs:
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "daily"
People should get used to this kind of tag-based release management (Docker is another example), as documented in articles like this.
How does a user know which tag to use? Usually the action's documentation states the recommended version, so 99% of users should follow that. You only need other tags if you want to live on the bleeding edge.
I use tldextract (version 2.2.2) to extract subdomain/domain/suffix from URLs.
I recently noticed a result that I was surprised by:
>>> from tldextract import extract
>>> extract('http://althawrah.ye/archives/597366')
ExtractResult(subdomain='', domain='', suffix='althawrah.ye')
Instead of being picked up as the domain, althawrah is picked up as part of the suffix. Why is this?
Snooping around a bit, I noticed in the Public Suffix List itself that .ye is one of a small number of suffixes that use a leading asterisk, e.g.
// fj : https://en.wikipedia.org/wiki/.fj
*.fj
// ye : http://www.y.net.ye/services/domain_name.htm
*.ye
The implication here is that these suffixes do not allow domain names to be registered directly under the suffix; instead, names must be registered at the third level. However, this is not the case with http://althawrah.ye/; that is, althawrah is not listed as a second-level domain of .ye. So what is going on here?
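Incidentally, the wildcard rule is exactly why tldextract answers the way it does: *.ye tells it that any label directly under .ye is itself a public suffix, so a registrable domain needs at least three labels. A quick illustration (results reflect the list as bundled at the time):

from tldextract import extract

# Under the rule "*.ye", the label directly below .ye is part of the
# suffix, so a two-label name has no registrable domain at all.
print(extract('http://althawrah.ye/archives/597366'))
# ExtractResult(subdomain='', domain='', suffix='althawrah.ye')

# With three labels, the middle one becomes the domain.
print(extract('http://www.yourcompany.com.ye/'))
# ExtractResult(subdomain='www', domain='yourcompany', suffix='com.ye')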
Based on the history of the list and the description of the process for updating, it looks like the Yemen entry is simply wrong or out of date. The entry was added before 2007 (when the list was migrated from CVS to git), while the list guidelines state that:
Changes [for ICANN Domains] need to either come from a representative of the registry (authenticated in a similar manner to below) or be from public sources such as a registry website.
The website linked in the list (which hasn't changed since 2002) gives little detail but does mention URLs of the format www.yourcompany.com.ye, which is presumably where the *.ye rule came from. IANA's root zone database specifies TeleYemen as the current TLD manager, but there is no mention of domain registration on their site. The Wikipedia list of supposed "second level domains" was added in 2008 by a Canadian user linking to a since-deleted website of a company called phpcomet (archived here), which claimed to sell domains under the listed second-level domains. However, a Google search for "site:ye" reveals plenty of sites outside those domains (e.g. press24.ye, ndc.ye) and fails to give any result for many of them (me.ye, co.ye, ltd.ye, plc.ye).
I'm not sure what could be done to update the official list, but I wouldn't be surprised if the correct entry would read something like:
ye
com.ye
edu.ye
gov.ye
org.ye
These changes were merged into publicsuffix/list in pull request 1189, thanks to TeleYemen and the project maintainers.
The list now specifies the second-level domains explicitly and drops the asterisk.
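Assuming your local tldextract suffix-list cache has been refreshed so the corrected entries are in effect, the original example should now come out as expected:

from tldextract import extract

# With the corrected list (ye plus explicit com.ye, edu.ye, ...),
# althawrah.ye is a registrable name directly under .ye again.
print(extract('http://althawrah.ye/archives/597366'))
# Expected: ExtractResult(subdomain='', domain='althawrah', suffix='ye')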
We're evaluating Zenoss and are interested in device access control. We would like to set up the system so that our customers can access Zenoss and see only their own devices and their status. This feature apparently exists only in the enterprise version, as can be seen here.
On the user configuration page there is an "Administered Objects" section, but in the community version it apparently has no practical effect. There is also a roles-and-permissions configuration page available at http://.../zport/manage_access, but I haven't really figured out how to use it for this use case.
Can anyone give me some tips on how we could limit a certain user to certain devices or device groups? Would it require changing a lot of code in the Zenoss core? Can we do that with a ZenPack? Are there any examples of how to do this?
Thanks in advance!
I am working on this right now. Part of the issue is that there are a number of bugs around the Zenoss Administered Objects concept. I have posted some findings on the Zenoss forum at http://community.zenoss.org/message/59100#59100 and have opened a number of tickets with Zenoss (referenced at that URL). If you can add extra supporting information to the tickets, it may get their priority raised. Meanwhile, I am working on my own code fixes / ZenPack workaround, and I almost have something ready for alpha testing if you are interested.
Cheers,
Jane
I have developed an AppEngine/Python/Django application that currently works in Spanish, and I am in the process of internationalizing it with multi-language support. It is basically a dating website in which people can browse other profiles and send messages. Viewing a profile in different languages will result in some of the text (menus, etc.) being displayed in whichever language is selected, but user-generated content (i.e. a user profile or message) will be displayed in the original language in which it was written.
My question is: is it necessary (or a good idea) to use unique URLs for the same page displayed in different languages, or is it OK to overload one URL for all languages? In particular, I am worried that if I use the same URL for multiple languages, some pages might be cached (either by Google or by some proxy I might not be aware of), which could result in the wrong language being displayed to a user.
Does anyone know if this is a legitimate concern, or if I am worrying about something that will not happen?
In principle, you can use the Content-Language and Vary response headers and the Accept-Language request header to control how caches behave and to prevent them from serving the wrong language to users.
In practice, however, Accept-Language is frequently set incorrectly in browsers, which is why most sites don't rely on it, or at least provide a secondary mechanism. Caches may be similarly unreliable about respecting the Vary header, but I'm not sure. Having language-specific URLs is certainly a practical way to do it, and it avoids any potential issues with caching.
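If you do go the header route (or want it as a belt-and-braces measure alongside language-specific URLs), Django has helpers for it. A minimal sketch; the view, template name and context are placeholders, and request.LANGUAGE_CODE assumes LocaleMiddleware is enabled:

from django.shortcuts import render
from django.utils.cache import patch_vary_headers

def profile_view(request, profile_id):
    # Render the page in whatever language the request resolved to.
    response = render(request, 'profiles/detail.html',
                      {'profile_id': profile_id})

    # Declare which language the response body is in.
    response['Content-Language'] = request.LANGUAGE_CODE

    # Tell intermediate caches that the response depends on the
    # Accept-Language request header.
    patch_vary_headers(response, ['Accept-Language'])
    return response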
I don't know how this works in Django, but looking at it from a general web-development perspective, you could do either of the following (a sketch of both follows below):
use a query parameter to determine the language (e.g. /foo/bar/page.py?lang=en)
add the language code to the URL path (e.g. /foo/bar/en/page.py), and optionally use mod_rewrite so that that part of the path gets passed to your script as a query parameter.
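In Django specifically, both options map naturally onto the URLconf. A rough sketch, using a modern Django URLconf for illustration; the view names and URL layout are made up, not from the original app:

from django.http import HttpResponse
from django.urls import re_path

def profile(request, lang, profile_id):
    # Option 2: the language arrives as part of the path.
    return HttpResponse("profile %s in %s" % (profile_id, lang))

def profile_qs(request, profile_id):
    # Option 1: the language arrives as a query parameter.
    lang = request.GET.get('lang', 'es')  # fall back to Spanish
    return HttpResponse("profile %s in %s" % (profile_id, lang))

urlpatterns = [
    # e.g. /en/profiles/42/
    re_path(r'^(?P<lang>[a-z]{2})/profiles/(?P<profile_id>\d+)/$', profile),
    # e.g. /profiles/42/?lang=en
    re_path(r'^profiles/(?P<profile_id>\d+)/$', profile_qs),
]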
I'm building this app in Python with Django. I would like to give parts of the site wiki-like functionality, but I don't know how to go about reliability and security:
Make sure that good content is not ruined
Check for quality
Prevent spam from invading the site
Only a few items require wiki-like functionality: a couple of text fields.
Can anyone help with this one? It would be very much appreciated. :)
You could try Django Wikiapp, which gives you most of the features you want in a wiki, including history and the ability to revert to older versions of an article. I have personally used this app and it's pretty self-explanatory; there is also a bit of documentation at http://code.google.com/p/django-wikiapp/source/browse/trunk/docs.
In terms of spam protection, you can do one of two things, or both: password-protect the pages used for editing the wiki, and use Akismet to filter for spam. I'm working on something similar and this is probably what we'll end up doing.
Assuming there will be a community of users, you can provide good tools for them to spot problems and easily undo damage. The most important of these is a Recent Changes page that summarizes recent edits. Each editable page should also retain its prior versions, which can be used to replace any damaging edit. This makes it easier to undo damage than to cause it.
Then think about how you are going to handle either locking resources or resolving simultaneous edits.
If you can tie edits to users you can provide some administrative functions for undoing all edits by a particular user, and banning that user.
Checking for quality would be tied to the particular data that your application is using.
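As a concrete illustration of the "retain prior versions" idea, a minimal revision model could look like the sketch below; the model and field names are invented for the example, not taken from any particular wiki app:

from django.contrib.auth.models import User
from django.db import models

class WikiText(models.Model):
    # One wiki-editable text field on the site.
    slug = models.SlugField(unique=True)
    current_text = models.TextField()

class Revision(models.Model):
    # An immutable snapshot, created on every edit.
    page = models.ForeignKey(WikiText, on_delete=models.CASCADE,
                             related_name='revisions')
    text = models.TextField()
    author = models.ForeignKey(User, on_delete=models.SET_NULL, null=True)
    created = models.DateTimeField(auto_now_add=True)

    def revert(self):
        # Rolling back is just another edit that restores this snapshot,
        # so the damaging edit itself stays in the history.
        self.page.current_text = self.text
        self.page.save()
        Revision.objects.create(page=self.page, text=self.text,
                                author=self.author)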
Make sure that good content is not ruined = version each edit and allow roll-backs.
Check for quality = get people to help with that.
Prevent spam from invading the site = get people to help with that; require login, add a CAPTCHA if need be, and use nofollow on all links.