Web crawling: How to obtain URLs which use database information? - python

Here is my problem statement:
I'm trying to retrieve all the well-specific information for a state from http://www.aogc2.state.ar.us/AOGConline/. After a bit of R&D, I figured out that individual well information is stored at paths structured as:
http://www.aogc2.state.ar.us/AOGConline/ED.aspx?KeyName=API_WELLNO&KeyValue=03143100280000&KeyType=STRING&DetailXML=WellDetails.xml
where each KeyValue is unique for every well. I was trying to derive a generic pattern in the KeyValue. For the URL above, in 03143100280000, 03 represents the state (Arkansas) and 143 represents the county, but the remaining number, 100280000, does not necessarily follow a serial pattern, which makes life difficult.
Is there a way to obtain all the KeyValues for the 43K+ wells here (which I presume are coming from a database)? I tried looking at all the JS files being loaded from http://www.aogc2.state.ar.us/AOGConline/, but none points towards a source directory of all KeyValues/well API numbers.
Using Python Scrapy, I've written the following spider, which crawls a few specific well XML URLs. I need to make this generic so as to obtain all 43K+ wells' information, but I haven't been able to figure out all the KeyValues.
from scrapy.spider import Spider
from scrapy.selector import Selector
import codecs

class AogcSpider(Spider):
    name = "aogc"
    allowed_domains = ["aogc2.state.ar.us"]
    start_urls = [
        "http://www.aogc2.state.ar.us/AOGConline/ED.aspx?KeyName=API_WELLNO&KeyValue=03143100280000&KeyType=STRING&DetailXML=WellDetails.xml",
        "http://www.aogc2.state.ar.us/AOGConline/ED.aspx?KeyName=API_WELLNO&KeyValue=03143100290000&KeyType=STRING&DetailXML=WellDetails.xml",
        "http://www.aogc2.state.ar.us/AOGConline/ED.aspx?KeyName=API_WELLNO&KeyValue=03143100300000&KeyType=STRING&DetailXML=WellDetails.xml",
        "http://www.aogc2.state.ar.us/AOGConline/ED.aspx?KeyName=API_WELLNO&KeyValue=03143100310000&KeyType=STRING&DetailXML=WellDetails.xml",
        "http://www.aogc2.state.ar.us/AOGConline/ED.aspx?KeyName=API_WELLNO&KeyValue=03143100320000&KeyType=STRING&DetailXML=WellDetails.xml",
        "http://www.aogc2.state.ar.us/AOGConline/ED.aspx?KeyName=API_WELLNO&KeyValue=03143100330000&KeyType=STRING&DetailXML=WellDetails.xml"
    ]

    def parse(self, response):
        hxs = Selector(response)
        # every ColumnValue cell on the well-details page
        trnodes = hxs.xpath("//td[@class='ColumnValue']")
        # append this well's values, pipe-separated, to the output file
        with codecs.open("aogc_wells", "a", "utf-8-sig") as outfile:
            outfile.write("\n")
            for node in trnodes:
                for txt in node.xpath("text()").extract():
                    outfile.write(txt)
                    outfile.write("|")

Related

Why does Scrapy give an "unable to load" error?

So I am working on a small crawler using Scrapy and Python on this website https://www.theverge.com/reviews. From there I am trying to extract the reviews based on the rules I have set, which should match links that fit this criterion:
example: https://www.theverge.com/22274747/tern-hsd-p9-ebike-review-electric-cargo-bike-price-specs
I am extracting the URL of the review page, the title of the page, the name of whoever wrote the review, and the link to their profile. However, I assume there is something wrong either with my code or with the way I have my files organized, because I get this error when I try to run it:
runspider: error: Unable to load 'spiders/vergespider.py': No module named 'oblig3.oblig3'
My folders look like this.
So my intended results should look something like this. Visiting up to 20 pages, which I don't quite understand how to do through the Scrapy settings, but that is another problem.
authorlink,authorname,title,url
"https://www.theverge.com/authors/cameron-faulkner,https://www.twitter.com/camfaulkner",Cameron
Faulkner,"Gigabyte’s Aorus 15G is great at gaming, but not much
else",https://www.theverge.com/22299226/gigabyte-aorus-15g-review-gaming-laptop-price-specs-features
So my question is: what could be causing the error I am getting, and why am I not getting any CSV output from this code? I am fairly new to Python and Scrapy, so any tips or improvements to the code are appreciated. I would like to keep the solutions within Scrapy and Python, as those are the things I am trying to learn at the moment.
Edit:
This is what I use to run the code: scrapy runspider spiders/vergespider.py -o vergetest.csv -t csv. And this is what I have coded so far.
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from oblig3.items import VergeReview

class VergeSpider(CrawlSpider):
    name = 'verge'
    allowed_domains = ['theverge.com']
    start_urls = ['https://www.theverge.com/reviews']

    rules = [
        Rule(LinkExtractor(allow=r'^(https://www.theverge.com/)(\d+)/([^/]+$)'),
             callback='parse_items', follow=True),
        Rule(LinkExtractor(allow=r'.*'),
             callback='parse_items', cb_kwargs={'is_verge': False})
    ]

    def parse(self, response, is_verge):
        if is_verge:
            verge = VergeReview()
            verge['url'] = response.url
            verge['title'] = response.xpath("//h1/text()").extract_first()
            verge['authorname'] = response.xpath("//span[@class='c-byline__author-name']/text()").extract()
            verge['authorlink'] = response.xpath("//*/span[@class='c-byline__item'][1]/a/@href").extract()
            yield verge
        else:
            # Do something else
            pass
My items file
import scrapy

class VergeReview(scrapy.Item):
    url = scrapy.Field()
    title = scrapy.Field()
    authorname = scrapy.Field()
    authorlink = scrapy.Field()
And my settings file is unchanged, though I should implement CLOSESPIDER_PAGECOUNT = 20 but I don't know how.
The error you have is:
runspider: error ..... No module named 'oblig3.oblig3'
What I can see from your screenshot is that oblig3 is the name of your project.
This is a common error when you try to run your spider using:
scrapy runspider spider_file.py
If you are running your spider this way, you need to change how you run it:
First, make sure that you are in the directory where scrapy.cfg is located
then run
scrapy list
This should give you a list of all the spiders it found.
After that, you should use this command to run your spider.
scrapy crawl <spidername>
If this does not solve your problem, share your code and the details of how you are running your spider.
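As for the CLOSESPIDER_PAGECOUNT = 20 part of the question, which the answer above does not cover: one option is to set it in the project's settings.py, another is to set it per spider through custom_settings. A minimal sketch, reusing the spider from the question:

class VergeSpider(CrawlSpider):
    name = 'verge'
    # stop the crawl after roughly 20 pages have been downloaded
    custom_settings = {
        'CLOSESPIDER_PAGECOUNT': 20,
    }
    ...

This relies on the CloseSpider extension, which Scrapy enables by default.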

Scrapy not following the next parse function

I am trying to write a simple scraping script to scrape Google Summer of Code orgs that use the tech I require. It's a work in progress. My parse function is working fine, but whenever I call back into the org function it doesn't produce any output.
# -*- coding: utf-8 -*-
import scrapy

class GsocSpider(scrapy.Spider):
    name = 'gsoc'
    allowed_domains = ['https://summerofcode.withgoogle.com/archive/2018/organizations/']
    start_urls = ['https://summerofcode.withgoogle.com/archive/2018/organizations/']

    def parse(self, response):
        for href in response.css('li.organization-card__container a.organization-card__link::attr(href)'):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.parse_org)

    def parse_org(self, response):
        tech = response.css('li.organization__tag organization__tag--technology::text').extract()
        #if 'python' in tech:
        yield {
            'name': response.css('title::text').extract_first()
            #'ideas_list': response.css('')
        }
First of all, you are configuring allowed_domains incorrectly; as the documentation specifies:
An optional list of strings containing domains that this spider is
allowed to crawl. Requests for URLs not belonging to the domain names
specified in this list (or their subdomains) won’t be followed if
OffsiteMiddleware is enabled.
Let’s say your target url is https://www.example.com/1.html, then add
'example.com' to the list.
As you can see, you only need to include the domains, and this is a filtering feature (so other domains don't get crawled). It is also optional, so I would actually recommend not including it at all.
Also, your CSS selector for getting tech is incorrect; it should be:
li.organization__tag.organization__tag--technology
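Putting both fixes together, the relevant parts of the spider could look like this (a sketch of the corrections described above, not code from the original answer):

import scrapy

class GsocSpider(scrapy.Spider):
    name = 'gsoc'
    # allowed_domains omitted entirely, as recommended above; if kept, it should
    # contain only the domain, e.g. 'summerofcode.withgoogle.com'
    start_urls = ['https://summerofcode.withgoogle.com/archive/2018/organizations/']

    def parse(self, response):
        for href in response.css('li.organization-card__container a.organization-card__link::attr(href)'):
            yield scrapy.Request(response.urljoin(href.extract()), callback=self.parse_org)

    def parse_org(self, response):
        # note the dot joining the two classes, as pointed out above
        tech = response.css('li.organization__tag.organization__tag--technology::text').extract()
        yield {
            'name': response.css('title::text').extract_first(),
            'tech': tech,
        }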

Wrong XPath in IMDB spider (Scrapy)

Here:
IMDB scrapy get all movie data
response.xpath("//*[#class='results']/tr/td[3]")
returns empty list. I tried to change it to:
response.xpath("//*[contains(#class,'chart full-width')]/tbody/tr")
without success.
Any help please? Thanks.
I did not have time to go through IMDB scrapy get all movie data thoroughly, but I have got the gist of it. The problem statement is to get all movie data from the given site. It involves two things: first, going through all the pages that contain the list of all the movies of that year; second, getting the link to each movie, and from there you do your own magic.
The problem you faced is with getting the XPath for the links to the individual movies. This is most likely due to a change in the website structure (I did not have time to verify what the difference might be). Anyway, the following are the XPaths you would require.
FIRST :
We take the div with class nav as a landmark and find the lister-page-next next-page class among its children.
response.xpath("//div[@class='nav']/div/a[@class='lister-page-next next-page']/@href").extract_first()
This will give the link to the next page, or return None if we are on the last page (since the next-page tag is not present there).
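For illustration only (this is not part of the original answer), the result of that XPath could be followed from inside a parse callback along these lines:

next_href = response.xpath(
    "//div[@class='nav']/div/a[@class='lister-page-next next-page']/@href"
).extract_first()
if next_href:
    # follow the next results page with the same callback
    yield scrapy.Request(response.urljoin(next_href), callback=self.parse_page)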
SECOND :
This is the original doubt by the OP.
# Get the list of the containers having the title, etc.
list = response.xpath("//div[@class='lister-item-content']")
# From the containers, extract the required links
paths = list.xpath("h3[@class='lister-item-header']/a/@href").extract()
Now all you would need to do is loop through each of these path elements and request the page.
Thanks for your answer. I eventually used your XPath like so:
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from crawler.items import MovieItem

IMDB_URL = "http://imdb.com"

class IMDBSpider(CrawlSpider):
    name = 'imdb'

    # in order to move to the next page
    rules = (Rule(LinkExtractor(allow=(), restrict_xpaths=("//div[@class='nav']/div/a[@class='lister-page-next next-page']",)),
                  callback="parse_page", follow=True),)

    def __init__(self, start=None, end=None, *args, **kwargs):
        super(IMDBSpider, self).__init__(*args, **kwargs)
        self.start_year = int(start) if start else 1874
        self.end_year = int(end) if end else 2017

    # generate start_urls dynamically
    def start_requests(self):
        for year in range(self.start_year, self.end_year + 1):
            # movies are sorted by number of votes
            yield scrapy.Request('http://www.imdb.com/search/title?year={year},{year}&title_type=feature&sort=num_votes,desc'.format(year=year))

    def parse_page(self, response):
        content = response.xpath("//div[@class='lister-item-content']")
        paths = content.xpath("h3[@class='lister-item-header']/a/@href").extract()  # list of paths of movies in the current page

        # all movies in this page
        for path in paths:
            item = MovieItem()
            item['MainPageUrl'] = IMDB_URL + path
            request = scrapy.Request(item['MainPageUrl'], callback=self.parse_movie_details)
            request.meta['item'] = item
            yield request

    # make sure that the start_urls are parsed as well
    parse_start_url = parse_page

    def parse_movie_details(self, response):
        pass  # lots of parsing....
Run it with scrapy crawl imdb -a start=<start-year> -a end=<end-year>.

Scrapy Extract ld+JSON

How to extract the name and url?
quotes_spiders.py
import scrapy
import json

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["http://www.lazada.com.my/shop-power-banks2/?price=1572-1572"]

    def parse(self, response):
        data = json.loads(response.xpath('//script[@type="application/ld+json"]//text()').extract_first())
        # how to extract the name and url?
        yield data
Data to Extract
<script type="application/ld+json">{"#context":"https://schema.org","#type":"ItemList","itemListElement":[{"#type":"Product","image":"http://my-live-02.slatic.net/p/2/test-product-0601-7378-08684315-8be741b9107b9ace2f2fe68d9c9fd61a-webp-catalog_233.jpg","name":"test product 0601","offers":{"#type":"Offer","availability":"https://schema.org/InStock","price":"99999.00","priceCurrency":"RM"},"url":"http://www.lazada.com.my/test-product-0601-51348680.html?ff=1"}]}</script>
This line of code returns a dictionary with the data you want:
data = json.loads(response.xpath('//script[@type="application/ld+json"]//text()').extract_first())
All you need to do is to access it like:
name = data['itemListElement'][0]['name']
url = data['itemListElement'][0]['url']
Given that the microdata contains a list, you will need to check that you are referring to the correct product in the list.
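If there can be more than one product in itemListElement, one might loop over the list instead of hard-coding index 0. A sketch building on the answer above (not code from the original post):

def parse(self, response):
    data = json.loads(response.xpath('//script[@type="application/ld+json"]//text()').extract_first())
    # yield one item per product in the ld+json ItemList
    for element in data['itemListElement']:
        yield {
            'name': element['name'],
            'url': element['url'],
        }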
A really easy solution for this would be to use https://github.com/scrapinghub/extruct. It handles all the hard parts of extracting structured data.

Webscrape from 2500 links - courses of action?

I have nearly 2500 unique links against which I want to run BeautifulSoup and gather some text captured in paragraphs on each of the 2500 pages. I could create variables for each link, but having 2500 is obviously not the most efficient course of action. The links are contained in a list like the following:
linkslist = ["http://www.website.com/category/item1","http://www.website.com/category/item2","http://www.website.com/category/item3", ...]
Should I just write a for loop like the following?
for link in linkslist:
    opened_url = urllib2.urlopen(link).read()
    soup = BeautifulSoup(opened_url)
    ...
I'm looking for any constructive criticism. Thanks!
This is a good use case for Scrapy - a popular web-scraping framework based on Twisted:
Scrapy is written with Twisted, a popular event-driven networking
framework for Python. Thus, it’s implemented using a non-blocking (aka
asynchronous) code for concurrency.
Set the start_urls property of your spider and parse the page inside the parse() callback:
class MySpider(scrapy.Spider):
    name = "myspider"
    start_urls = ["http://www.website.com/category/item1","http://www.website.com/category/item2","http://www.website.com/category/item3", ...]
    allowed_domains = ["website.com"]

    def parse(self, response):
        print response.xpath("//title/text()").extract()
How about writing a function that would treat each URL separately?
def processURL(url):
    pass
    # Your code here

map(processURL, linkslist)
This will run your function on each url in your list. If you want to speed things up, this is easy to run in parallel:
from multiprocessing import Pool
list(Pool(processes = 10).map(processURL, linkslist))
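Whichever approach is used, processURL still has to do the actual fetching and parsing described in the question. A minimal sketch of what it could look like with urllib2 and BeautifulSoup (the same libraries the question uses; extracting all paragraph text is just an assumption about what "some text captured in paragraphs" means):

import urllib2
from bs4 import BeautifulSoup

def processURL(url):
    # fetch the page and parse it
    opened_url = urllib2.urlopen(url).read()
    soup = BeautifulSoup(opened_url)
    # gather the text of every paragraph on the page
    return [p.get_text() for p in soup.find_all("p")]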
