Django documentation page - python

I'm busy making a documentation page in my Django project and would like to make it dynamic: there's a sidenav with links, but clicking a nav item should load that page within the "master page" (aka the documentation page) without leaving it,
like so https://www.nbinteract.com/tutorial/tutorial_github_setup.html
I have created the side nav and it's working fine, and I've linked my first URL to a nav item, but clicking the nav item opens that .html file on its own instead of loading it within the main documentation page.
I would like to find a way to do it with Django only and not JavaScript if possible; any guidance would be appreciated.
Yes, this could be a silly question but please don't flame me about learning how to do stuff :)

The site you linked uses JavaScript, and you will need to use JavaScript as well if you want to update only portions of the page in the browser with new data from the database.
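That said, the Django side of such a setup stays plain Python: the JavaScript only has to fetch a fragment of HTML from a view and swap it into the page. A minimal sketch of such a view, where the view name, URL and template paths are hypothetical, not from your project:

# views.py -- a sketch; all names below are placeholders
from django.shortcuts import render

def doc_page(request, slug):
    # For AJAX requests, return only the article fragment so the client
    # script can swap it into the master page; otherwise return the full
    # documentation page with the sidenav.
    # (request.headers needs Django 2.2+; older versions have request.is_ajax())
    context = {"slug": slug}
    if request.headers.get("x-requested-with") == "XMLHttpRequest":
        return render(request, "docs/partials/article.html", context)
    return render(request, "docs/page.html", context)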

Related

Re-render page from JS Django Template

I have an endpoint that renders a template with a list of movies.
On the page there is a button called "Add movie" and an input field for the movie's name. When the button is pressed, I want to add a movie with that name to the list and re-render the page.
Right now I am doing it all in JS with a fetch request, but I can't re-render the page that way.
How could I achieve the effect I want?
You can update the page without reloading/re-rendering the whole webpage by using AJAX calls and changing a specific element inside the HTML DOM. You can do this in raw JavaScript or use a JS library to simplify the code. Here are some libraries to consider if you have a largish project and such requirements might come up again:
jQuery
intercoolerjs.org
unpoly.js
You're rendering on the server side, so in order to re-render, you need to send data back to the server. One way of doing this is creating a form element that posts to the same page when you click the button. Then, on the server side, take the submitted movie, add it to the other movies, and re-render and return the page.
This answer attempts to provide general guidance, since your question was general.
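A rough sketch of that form-based approach, assuming a hypothetical Movie model with a name field and illustrative template and URL names:

# views.py -- a sketch of the post-and-re-render approach
from django.shortcuts import redirect, render

from .models import Movie  # hypothetical model with a `name` field

def movie_list(request):
    if request.method == "POST":
        name = request.POST.get("name", "").strip()
        if name:
            Movie.objects.create(name=name)
        # Redirect after POST so refreshing doesn't resubmit the form
        return redirect("movie_list")
    movies = Movie.objects.all()
    return render(request, "movies/list.html", {"movies": movies})

The template then just needs a form with method="post" (and the CSRF token) whose name input posts back to the same URL.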

Get method from requests library seems to return homepage rather than specific URL

I'm new to Python and object-oriented programming in general. I'm trying to build a simple web scraper to create data frames from NBA contract data on basketball-reference.com. I had planned to use the requests library together with BeautifulSoup. However, the get method seems to be returning the site's homepage rather than the page for the URL I give it.
I give a URL to a team's contracts page (https://www.basketball-reference.com/contracts/IND.html), but when I print the html it looks like it belongs to the homepage.
I haven't been able to find any documentation on the web about anyone else having this problem...
I'm using the Spyder IDE.
# Import library
import requests
# Assign the URL for contract scraping
url = 'https://www.basketball-reference.com/contracts/IND.html'
# Pull contracts page
page = requests.get(url)
# Check that correct page is being pulled
print(page.text)
This seems like it should be very straightforward, so I'm not understanding why the console is displaying html that clearly doesn't pertain to the page I'm trying to point to. I'm not getting any errors, just html from the homepage.
After checking the code on repl.it and visiting the webpage myself, I can confirm you are pulling in the correct page's HTML. The page variable contains the tables of data, as well as their info... and also the page's advertisements, the contact info, the social media buttons and links, the adblock detection scripts, and everything else on the webpage. Your issue isn't that you're getting the wrong page, it's that you're getting the entire page, not just the data.
You'll want to pick out the exact bits you're interested in - maybe by selecting the table and its child elements? The table's HTML id is contracts - that should be a good place to start.
(Try visiting the page in your browser, right-clicking anywhere on the page, and clicking "view page source" - that's what your program is pulling in. There's a LOT more to a webpage than most people realize!)
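To illustrate, here is a minimal BeautifulSoup sketch that isolates that table; the id "contracts" is taken from the page source, but treat the rest as an untested starting point:

# Parse only the contracts table out of the full page HTML
import requests
from bs4 import BeautifulSoup

url = 'https://www.basketball-reference.com/contracts/IND.html'
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')

table = soup.find('table', id='contracts')
for row in table.find_all('tr'):
    # Collect the text of every header/data cell in the row
    cells = [cell.get_text(strip=True) for cell in row.find_all(['th', 'td'])]
    print(cells)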
As a word of warning, though, Sports Reference has a data use policy that precludes web crawlers / spiders on their site. I would recommend checking (and using) one of the free sites they link instead; you risk being IP banned otherwise.
Simply printing the result of the get request in the terminal won't be very helpful, as the HTML content returned is long and the terminal will likely truncate or scroll past most of it. I'm assuming the website reuses parts of the homepage on other pages as well, so the output can look confusingly like the homepage.
I recommend writing the response into a file and then opening the file in the browser. You will see that your code is pulling the right page.
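Something like this (the filename is arbitrary):

# Save the response body to a file, then open it in a browser
import requests

url = 'https://www.basketball-reference.com/contracts/IND.html'
page = requests.get(url)

with open('contracts.html', 'w', encoding='utf-8') as f:
    f.write(page.text)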

Wagtail - extending PageChooserBlock to support external URLs

I'm building a site with Wagtail and using StreamField to build up content on a homepage. I've built a block that allows users to add featured links, which could be internal or external links. Currently the featured links have both a PageChooserBlock and a URLBlock, but I'd like to add a new custom block type that allows a user to specify either an internal page or a URL. I can't see anything in the docs that would help me. Any ideas where to start?
This is not something Wagtail supports yet, but there's a lot of interest in the feature; see issue https://github.com/wagtail/wagtail/issues/3141.
To address this, there is a work-in-progress pull request (see https://github.com/wagtail/wagtail/pull/1645) that aims to unify the link choosers.
Maybe you are able to contribute; I'm sure that would be very welcome!
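Until something like that lands, a common workaround is a StructBlock with two optional fields and template logic that prefers the internal page when it is set. A rough sketch (the import path depends on your Wagtail version, and the block/template names are illustrative):

# blocks.py -- a hand-rolled either/or link block
from wagtail.core import blocks  # wagtail.wagtailcore.blocks in Wagtail 1.x

class FeaturedLinkBlock(blocks.StructBlock):
    title = blocks.CharBlock(required=True)
    page = blocks.PageChooserBlock(required=False, help_text="Internal page")
    external_url = blocks.URLBlock(required=False, help_text="External URL")

    class Meta:
        icon = "link"
        template = "blocks/featured_link.html"

In the template you would render value.page.url when value.page is set and fall back to value.external_url otherwise.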
You can use this library: https://pypi.org/project/wagtail-link-block/
From the docs:
"A link block to use as part of other StructBlocks which lets the user choose a link either to a Page, Document, or external URL, and whether or not they want the link to open in a new window."

How to make a folder landing page look like the initial page of a sub-site in a Plone portal?

As a CMS, Plone receives content and displays it, organized by menus, on an initial "home" page from which you can browse other pages and other content types.
Is it possible to make a menu item point to a second home page in the same Plone portal?
Today I discovered that we can select a page or another content type to replace a folder's landing page. That's almost what I want. If I could make this page show the news from that folder, plus some other well-designed content in a table like the "Initial page", that would be the goal. I believe I would need a portlet, a new kind of content type, or a way to make the 'page' mimic an 'initial page'.
It is different from creating a simple page linked to that menu, which would hold only lists of contents and widgets at most.
The case study is the Plone portal of a government office, and the subsection that wants its own "home index" is the human resources division.
I need a real new 'home' page, as if it were the home for an entire (and important) subsection of my portal.
If it is possible, will I need only administrator skills, or will I have to alter some Python code or a config file?
This is the initial (home) page, the index of the Plone portal
This is the standard page of a menu root, having the content list for its submenu items
This may well be possible. What you are really asking for is a "portal-like view" on another folder in Plone. This doesn't come as standard in Plone, so whatever you've got working on the homepage indicates either:
a) you have an addon such as collective.portletpage or collective.cover (as @kuel suggested), or
b) a Plone developer has set up a custom view for your homepage.
If it is b), then you will probably need help to tweak it to your needs for the HR folder; otherwise you should be able to do most of what you want on your own, just by adding a "Cover" or "Portlet Page" (or ....) type as the default page for the HR folder.
The reason we are struggling to give you a perfect answer is that we don't know which addon or custom view you are using for your homepage. This is why I asked for the body tag. To get that (assuming you are using Firefox) just right click on the page and select "View Page Source", then press Ctrl-F for the Find menu and type body into that field. Just copy 4 or 5 lines around there to let us help you more!
Alternatively (though less certain to work) just click on the "Add New..." menu and list what types are available and we may be able to tell you what type to add.
What you want is really well implemented by the collective.cover addon: https://github.com/collective/collective.cover/#don-t-panic
It's used by many Brazilian federal government websites such as Brasil.gov.br, Planalto.gov.br, and Secom.gov.br, and also by the project https://identidade-digital-de-governo-plone.readthedocs.org/
Also, you should join the PloneGovBR community: https://colab.interlegis.leg.br/wiki/PloneGovBr
Happy Ploning! :)
You can use Products.ContentWellPortlets to assign additional elements above and below the landing page. In case you don't want an element in the middle of the grid-/dashboard-like layout (similar to http://[HOST]/[PLONESITE_ID]/##dashboard), leave all fields of the landing page empty except the required title, and hide the title via CSS.
You should only try this if you have Zope administrator access.
Go to the URL:
/portal_skins/plone_login/logged_in/manage
In there you will see a line of code identifying the user:
member = membership_tool.getAuthenticatedMember()
Copy that line.
Now go to the URL:
/portal_skins/plone_login/login_next/manage
...and click on the Customize button. This will give you an override copy of "login_next" that you can edit. (At any point you can delete this copy and you will revert to the original.)
Now look at the end of the routine where a line begins with:
state.set(came_from=came_from, next=next)
You will see a variable next being set to the URL that the server will go to next. Now if you paste your member = ... line just before that, you can set the variable "next" to the HR homepage URL, for cases where member is in HR.
There is probably a Group to which HR people belong, but that's a slightly different question. Please see docs at: http://docs.plone.org/develop/plone/members/member_basics.html
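Putting that together, the tail of the customized login_next might look roughly like this; the group name 'HR' and the target URL are assumptions to adapt:

# Tail of the customized login_next Script (Python) -- a sketch
membership_tool = context.portal_membership
member = membership_tool.getAuthenticatedMember()
if 'HR' in member.getGroups():
    # Send HR members to their own home page after login
    next = context.portal_url() + '/hr-home'
state.set(came_from=came_from, next=next)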
Remember, just delete your customized copy of login_next to revert to the original.

Is a Web Crawler more suitable?

TL;DR Version:
I have only heard about web crawlers in intellectual conversations I'm not part of. All I want to know is whether they can follow a specific path like:
first page (has a lot of links) --> go to specified links --> go to links (specified, yes again) --> go to a certain link --> reach the final page and download the source.
I have googled a bit and came across Scrapy. But I am not sure I fully understand web crawlers to begin with, or whether Scrapy can help me follow the specific path I want.
Long Version
I wanted to extract some text from a group of static web pages. These pages are very simple, with just basic HTML. I used Python and urllib to access the URL, extract the text and work with it. Pretty soon I realized that I would basically have to visit all of these pages and copy-paste the URLs into my program, which is tiresome. I wanted to know if this is more suitable for a web crawler. I want to access this page. Then select only a few organisms (I have a list of those). Clicking on one of them leads to this page. If you look under the table "MTases active in the genome", there are enzymes which are hyperlinks. Clicking on those leads to this page. On the right-hand side there is a link named Sequence Data. Once clicked, it leads to a page which has a small table on the lower right with yellow headers. Under it there is an entry DNA (FASTA STYLE). Clicking on view leads to the page I'm interested in and want to download the page source from.
I think you are definitely on the right track in looking at a web crawler to help you do this. You could also look at the Norconex HTTP Collector, which I know can let you follow links on a page without storing that page if it is just a listing page to you. That crawler lets you filter out pages after their links have been extracted to be followed. Ultimately, you can configure the right filters so that only the pages matching the pattern you want get downloaded for you to process (whether based on crawl depth, URL pattern, content pattern, etc.).
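For comparison, the path described in the question maps naturally onto Scrapy's chained callbacks. A bare-bones sketch, in which every URL and CSS selector is a placeholder to replace with the real ones:

# A multi-hop Scrapy spider skeleton: list page -> organism -> enzyme
# -> sequence page, saving the final page's source to disk
import scrapy

class EnzymeSpider(scrapy.Spider):
    name = 'enzyme'
    start_urls = ['http://example.com/organisms']  # the first page

    def parse(self, response):
        # Step 1: follow only the organism links you care about
        for href in response.css('a.organism::attr(href)').getall():
            yield response.follow(href, callback=self.parse_organism)

    def parse_organism(self, response):
        # Step 2: follow the enzyme hyperlinks in the MTases table
        for href in response.css('table a.enzyme::attr(href)').getall():
            yield response.follow(href, callback=self.parse_enzyme)

    def parse_enzyme(self, response):
        # Step 3: follow the "Sequence Data" link on the right-hand side
        href = response.css('a.sequence-data::attr(href)').get()
        if href:
            yield response.follow(href, callback=self.parse_sequence)

    def parse_sequence(self, response):
        # Final page: write the raw source to a file
        filename = response.url.split('/')[-1] or 'page.html'
        with open(filename, 'wb') as f:
            f.write(response.body)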
