Scraping data from a webpage based on VIEWSTATES - python

I'm attempting to scrape the details of all documents on this link.
The problem I'm facing is that the site is built with ASP.NET, and the ViewStates won't let me access the data directly. I've tried a mixture of BeautifulSoup, Scrapy and Selenium, but to no avail. The data consists of 12782 documents; for each entry in the returned results I need to follow the redirect to its detail page and extract the PDF download link.
The site also has an API here, but the catch is that it only returns 2000 records per request, so pulling all ~12k records that way is out of the question.
Can someone help me with ANY ONE of the following:
Create a scraper to get the PDF links
Generate a query to get all the data from the API
Find a recurrence relation that generates the query links for the API
Use the request section of the API to have all the records delivered to my email at once
Ideally, a solution in Python would be great, but if you can help me get a CSV file of all the links, that would also work. Thanks in advance!

I ended up solving the problem by using the request functionality located here.
It took a particular query and my email address and sent me the entire data dump I needed. From that data dump, I could extract all the PDF links.
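For readers who don't have such an export option: the usual way around the ViewState wall with plain requests is to fetch the page once, read the hidden ASP.NET state fields, and echo them back in the POST. Below is a minimal, untested sketch of that pattern; the URL, extra form fields, and link selector are hypothetical placeholders, not the real site's values.

import requests
from bs4 import BeautifulSoup

URL = "https://example.gov/search.aspx"  # hypothetical placeholder, not the real site

session = requests.Session()
soup = BeautifulSoup(session.get(URL).text, "html.parser")

def hidden(name):
    # ASP.NET keeps page state in hidden inputs that must be POSTed back
    tag = soup.find("input", {"name": name})
    return tag["value"] if tag else ""

payload = {
    "__VIEWSTATE": hidden("__VIEWSTATE"),
    "__VIEWSTATEGENERATOR": hidden("__VIEWSTATEGENERATOR"),
    "__EVENTVALIDATION": hidden("__EVENTVALIDATION"),
    # plus whatever search fields / __EVENTTARGET the real form expects
}

result = session.post(URL, data=payload)
links = BeautifulSoup(result.text, "html.parser").select("a[href$='.pdf']")
print([a["href"] for a in links])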

Related

I'm trying to scrape stock data; the GET request works but returns no data

First of all, thank you for your help.
I'm trying to build a stock data scraper for the Frankfurt exchange website, specifically the historical prices section.
I inspected the elements with Chrome and found an API in the Network tab.
Here are the response and preview from the Network tab.
I tried to use the code below:
When I run my script:
The response status code looks fine, but nothing comes back.
I also tried BeautifulSoup, but I can't find where the data is located.
Here is what I tried:
Thank you for your time!
The reason the request returns nothing is that the data you want to scrape is rendered by JavaScript.
So first check whether the page's data is rendered by JavaScript; if it is, try Selenium or Puppeteer to get it.
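As an illustration, here is a minimal Selenium sketch of that approach. The URL and the table selector are hypothetical placeholders; you would substitute the actual page and the elements you found in DevTools.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/stock/historical-prices")  # hypothetical URL

# Wait until the JavaScript-rendered table actually appears in the DOM
WebDriverWait(driver, 15).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "table tbody tr"))  # hypothetical selector
)

for row in driver.find_elements(By.CSS_SELECTOR, "table tbody tr"):
    cells = [td.text for td in row.find_elements(By.TAG_NAME, "td")]
    print(cells)

driver.quit()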

Trying to parse all results from z-lib to build a database of book titles

I am trying to scrape a list of all available books in z-library, but results are only provided through a search term, and I want the titles of all books.
Also, queries only return 10 pages of 50 results per page, 500 in total. Doing an empty search using only a space character returns the top 500 most popular books.
I intend to use Selenium and Python, but I can't work out how to access the entire list of books.
https://book4you.org/
Any ideas?
Thanks
You cannot get all the books' data with Selenium or a web scraper on such a site, since these tools will only show you the results the GUI displays for a specific search query.
It might be possible via some API GET request that returns ALL the data from the DB, but we can't know whether that is possible until we know all the API requests the site supports.
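Within that limitation, you can at least collect the 500 titles the GUI does expose by paging through the results with Selenium. A rough sketch; the URL pattern and title selector here are assumptions you would need to verify against the real pages:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
titles = []

# The single-space search exposes 10 pages of 50 results each
for page in range(1, 11):
    driver.get(f"https://book4you.org/s/%20?page={page}")  # URL pattern is an assumption
    for el in driver.find_elements(By.CSS_SELECTOR, "h3 a"):  # hypothetical title selector
        titles.append(el.text)

driver.quit()
print(len(titles))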

How to access a table in MediaWiki

Right now I am using the MediaWiki API and the requests module to pull certain information from a table on a Wikipedia page. As an example, take the song Zombie: there is a 'table' on the right (the infobox) that lists the album, the author, the release date and so forth. The only issue I'm running into is that I don't know how to query this data. I'm using this link as the endpoint: https://en.wikipedia.org/w/api.php?format=json&formatversion=2&action=query&titles=Zombie_(song)&prop=extracts
to try to search for what I need, but it only brings up the text of the page. I've tried the sandbox and had trouble finding what would give me the information I need. I appreciate any advice and input, thanks.
For that sort of metadata you'd be best off using Wikidata. In the sidebar on Wikipedia there's a link to the Wikidata item, and you can use an API query such as https://www.wikidata.org/w/api.php?action=wbgetclaims&entity=Q169298 to get the data in a structured way. For information about what those results mean, see the Wikibase API docs.
[Edit:] To get the entity ID, you can use wbgetentities with a Wikipedia title (titles) and wiki ID (sites); e.g.: https://www.wikidata.org/w/api.php?action=wbgetentities&sites=enwiki&titles=Zombie_(song)
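Putting the two calls together in Python (wbgetentities and wbgetclaims are real Wikidata API actions; which claim properties you then pick out, e.g. the release date, is up to you):

import requests

API = "https://www.wikidata.org/w/api.php"

# Step 1: resolve the Wikipedia title to its Wikidata entity ID
r = requests.get(API, params={
    "action": "wbgetentities",
    "sites": "enwiki",
    "titles": "Zombie (song)",
    "props": "info",
    "format": "json",
}).json()
entity_id = next(iter(r["entities"]))  # e.g. "Q169298"

# Step 2: fetch the structured claims (album, release date, etc.)
claims = requests.get(API, params={
    "action": "wbgetclaims",
    "entity": entity_id,
    "format": "json",
}).json()["claims"]

print(entity_id, "has claims for properties:", sorted(claims)[:10])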

Web Scraping in R / Python

I need to extract data from https://eservices.dha.gov.ae/DHASearch/UIPages/ProfessionalSearch.aspx?PageLang=En. I need five columns: "name", "gender", "Titles", "Hospital Name", and "Contact details". The "Titles" info is shown when you click on a name. Another problem I am facing is extracting info from multiple pages; in total there are 10071 records, and I need all of them. Currently I am using the rvest package in R, but it throws an error. See the code below:
library(rvest)
session = html_session("https://eservices.dha.gov.ae/DHASearch/UIPages/ProfessionalSearch.aspx")
form = html_form(session)[[1]]
Error: Subscript out of bounds
I am open to a solution in Python, but I am a novice with BeautifulSoup. Any help would be highly appreciated!
If you have the right to scrape all this personal information, then the best way to go about it would be to use Selenium in Python with a web driver: navigate the pages by calling the JavaScript function used for each paginated page, and pull the page source for each. This is probably your best bet, seeing as the data is loaded with JavaScript calls.
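Concretely, ASP.NET grids usually paginate through a __doPostBack call that you can invoke from Selenium. A rough sketch under that assumption; the control name and row selector below are hypothetical and need to be read from the real page source (the pager links' href attributes show the actual __doPostBack arguments):

import time
from selenium import webdriver
from bs4 import BeautifulSoup

driver = webdriver.Chrome()
driver.get("https://eservices.dha.gov.ae/DHASearch/UIPages/ProfessionalSearch.aspx?PageLang=En")

records = []
for page in range(2, 7):  # widen the range once the pattern is confirmed
    soup = BeautifulSoup(driver.page_source, "html.parser")
    for row in soup.select("table tr"):  # hypothetical row selector
        records.append([td.get_text(strip=True) for td in row.find_all("td")])
    # The event target below is an assumption, not the site's real control name
    driver.execute_script(
        f"__doPostBack('ctl00$ContentPlaceHolder1$gvResults', 'Page${page}')"
    )
    time.sleep(2)  # crude wait for the postback; a WebDriverWait would be more robust

driver.quit()
print(len(records))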

Can we capture data that comes to a browser from a website?

I am not sure whether we can capture data from a website. For example, if we submit a form, we get some data back in response. How can we capture that data?
Consider a college results website: if we enter a roll number, it shows the results in the browser. I want to know how we can capture that data and store it in a database using a program, instead of just viewing it in the browser.
Thanks in advance
You could use an entirely Python toolchain: Mechanize as a browser and form filler, and an HTML parser like Beautiful Soup to extract the various pieces of information you get. To store your results in a database, you could then use SQLite.
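A minimal sketch of that pipeline, assuming a hypothetical results page with a single form; the URL, field name, and result selector are placeholders, not a real site's values:

import sqlite3
import mechanize
from bs4 import BeautifulSoup

# Submit the form the way a browser would
br = mechanize.Browser()
br.open("http://example.edu/results")  # hypothetical URL
br.select_form(nr=0)                   # first form on the page
br["rollno"] = "12345"                 # hypothetical field name
html = br.submit().read()

# Extract the result from the returned page
soup = BeautifulSoup(html, "html.parser")
result = soup.select_one("#result").get_text(strip=True)  # hypothetical selector

# Store it in SQLite
conn = sqlite3.connect("results.db")
conn.execute("CREATE TABLE IF NOT EXISTS results (rollno TEXT, result TEXT)")
conn.execute("INSERT INTO results VALUES (?, ?)", ("12345", result))
conn.commit()
conn.close()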
