I have one function which needs to be run twice with a different request.meta in scrapy
request = scrapy.Request(tournament_url, callback=self.parse_tournament)
request.meta['data'] = team1_data
yield request
request1 = scrapy.Request(tournament_url, callback=self.parse_tournament)
request1.meta['data'] = team2_data
yield request1
As of now, only the first request is working!
You will want to include dont_filter=True in your second Request so that Scrapy's duplicate filter does not drop the already-seen URL:
request1 = scrapy.Request(tournament_url, callback=self.parse_tournament,
                          dont_filter=True)
request1.meta['data'] = team2_data
yield request1
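For reference, the same two requests can also be written more compactly by passing meta and dont_filter straight to the Request constructor (a sketch assuming tournament_url, team1_data and team2_data are defined as in the question):
yield scrapy.Request(tournament_url, callback=self.parse_tournament,
                     meta={'data': team1_data})
yield scrapy.Request(tournament_url, callback=self.parse_tournament,
                     meta={'data': team2_data}, dont_filter=True)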
I have a list of entries in a database, each of which corresponds to a scraping task. I want the spider to continue to the next entry only once the previous one is finished. Here is some pseudocode that gives the idea of what I want, though it is not exactly right because the while loop creates a massive backlog of entries waiting to be processed.
def start_requests(self):
    while True:
        rec = GetDocumentAndMarkAsProcessing()
        if rec is None:
            break
        script = getScript(rec)
        yield SplashRequest(..., callback=self.parse, endpoint="execute",
                            args={'lua_source': script})

def parse(self, response):
    ... store results in database ...
How can I make scrapy work on the next entry only when it has received a response from the previous SplashRequest for the previous entry?
I am not sure if simple callback functions would be enough to do the trick or if I need something more sophisticated.
All I needed to do was explicitly yield another request from the parse function, with parse itself as the callback. So in the end I have something like this:
def start_requests(self):
    rec = GetDocumentAndMarkAsProcessing()
    script = getScript(rec)
    yield SplashRequest(..., callback=self.parse, endpoint="execute",
                        args={'lua_source': script})

def parse(self, response):
    ... store results in database ...
    rec = GetDocumentAndMarkAsProcessing()
    script = getScript(rec)
    yield SplashRequest(..., callback=self.parse, endpoint="execute",
                        args={'lua_source': script})
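One caveat worth adding: once GetDocumentAndMarkAsProcessing() returns no more records, the chained parse above would pass None to getScript, so a small guard helps (a sketch reusing the question's hypothetical helpers, with the SplashRequest arguments elided exactly as in the pseudocode above):
def parse(self, response):
    ... store results in database ...
    rec = GetDocumentAndMarkAsProcessing()
    if rec is None:
        return  # nothing left to process; let the spider finish
    script = getScript(rec)
    yield SplashRequest(..., callback=self.parse, endpoint="execute",
                        args={'lua_source': script})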
I believe you might be able to achieve this by setting CONCURRENT_REQUESTS to 1 in your settings.py. That will make the crawler send only one request at a time, although I admit I am not sure how the timing works for the second request - whether it is sent when the callback has finished executing or when the response has been retrieved.
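For completeness, that is a one-line change in settings.py (CONCURRENT_REQUESTS is a standard Scrapy setting):
# settings.py - only one request in flight at a time
CONCURRENT_REQUESTS = 1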
I've created a Python script using Scrapy to scrape some information from a certain webpage. The problem is that the link I'm working with gets redirected very often. However, when I retry a few times using the requests module, I get the desired content.
With Scrapy, I'm unable to retry the link because it keeps redirecting no matter how many times I try. I can even catch the original url using response.meta.get("redirect_urls")[0], which I intended to reuse recursively within the parse method. However, it always gets redirected, and as a result the callback never runs.
This is my current attempt (the link used within the script is just a placeholder):
import scrapy
from scrapy.crawler import CrawlerProcess

class StackoverflowSpider(scrapy.Spider):
    handle_httpstatus_list = [301, 302]
    name = "stackoverflow"
    start_url = 'https://stackoverflow.com/questions/22937618/reference-what-does-this-regex-mean'

    def start_requests(self):
        yield scrapy.Request(self.start_url, meta={"lead_link": self.start_url}, callback=self.parse)

    def parse(self, response):
        if response.meta.get("lead_link"):
            self.lead_link = response.meta.get("lead_link")
        elif response.meta.get("redirect_urls"):
            self.lead_link = response.meta.get("redirect_urls")[0]

        try:
            if response.status != 200: raise
            if not response.css("[itemprop='text'] > h2"): raise
            answer_title = response.css("[itemprop='text'] > h2::text").get()
            print(answer_title)
        except Exception:
            print(self.lead_link)
            yield scrapy.Request(self.lead_link, meta={"lead_link": self.lead_link}, dont_filter=True, callback=self.parse)

if __name__ == "__main__":
    c = CrawlerProcess({
        'USER_AGENT': 'Mozilla/5.0',
    })
    c.crawl(StackoverflowSpider)
    c.start()
Question: How can I force scrapy to make a callback using the url that got redirected?
As far as I understand, you want to keep scraping a link until it stops redirecting and you finally get HTTP status 200.
If so, then you first have to remove handle_httpstatus_list = [301, 302] from your code.
Then create a CustomMiddleware in your middlewares.py:
import logging

class CustomMiddleware(object):

    def process_response(self, request, response, spider):
        if not response.css("[itemprop='text'] > h2"):
            logging.info('Desired text not found on %s, so re-scraping it' % request.url)
            return request.replace(dont_filter=True)

        if response.status in [301, 302]:
            original_url = request.meta.get('redirect_urls', [response.url])[0]
            logging.info('%s redirected, so re-scraping %s' % (request.url, original_url))
            return request.replace(url=original_url, dont_filter=True)

        return response
Then your spider should look something like this:
class StackoverflowSpider(scrapy.Spider):
    name = "stackoverflow"
    start_url = 'https://stackoverflow.com/questions/22937618/reference-what-does-this-regex-mean'

    custom_settings = {
        'DOWNLOADER_MIDDLEWARES': {
            'YOUR_PROJECT_NAME.middlewares.CustomMiddleware': 100,
        }
    }

    def start_requests(self):
        yield scrapy.Request(self.start_url, meta={"lead_link": self.start_url}, callback=self.parse)

    def parse(self, response):
        answer_title = response.css("[itemprop='text'] > h2::text").get()
        print(answer_title)
If you tell me which site you are scraping, I can help you further; you can also email me at the address on my profile.
You may want to see this.
If you need to prevent redirecting it is possible by request meta:
request = scrapy.Request(self.start_url,meta={"lead_link":self.start_url},callback=self.parse)
request.meta['dont_redirect'] = True
yield request
According to the documentation, this is the way to stop redirects.
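A sketch combining this with the handle_httpstatus_list meta key (the per-request equivalent of the spider attribute already used in the question), so the 301/302 response still reaches the callback instead of being filtered out:
yield scrapy.Request(
    self.start_url,
    meta={
        "lead_link": self.start_url,
        "dont_redirect": True,
        "handle_httpstatus_list": [301, 302],
    },
    callback=self.parse,
)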
I've written a script in Python's Scrapy to make proxied requests using one of the newly generated proxies from the get_proxies() method. I used the requests module to fetch the proxies in order to reuse them in the script. However, the problem is that the proxy my script chooses may not always be a good one, so sometimes it doesn't fetch a valid response.
How can I let my script keep trying with different proxies until there is a valid response?
My script so far:
import scrapy
import random
import requests
from itertools import cycle
from bs4 import BeautifulSoup
from scrapy.http.request import Request
from scrapy.crawler import CrawlerProcess

class ProxySpider(scrapy.Spider):
    name = "sslproxies"
    check_url = "https://stackoverflow.com/questions/tagged/web-scraping"
    proxy_link = "https://www.sslproxies.org/"

    def start_requests(self):
        proxylist = self.get_proxies()
        random.shuffle(proxylist)
        proxy_ip_port = next(cycle(proxylist))
        print(proxy_ip_port)  # Checking out the proxy address
        request = scrapy.Request(self.check_url, callback=self.parse, errback=self.errback_httpbin, dont_filter=True)
        request.meta['proxy'] = "http://{}".format(proxy_ip_port)
        yield request

    def get_proxies(self):
        response = requests.get(self.proxy_link)
        soup = BeautifulSoup(response.text, "lxml")
        proxy = [':'.join([item.select_one("td").text, item.select_one("td:nth-of-type(2)").text]) for item in soup.select("table.table tbody tr") if "yes" in item.text]
        return proxy

    def parse(self, response):
        print(response.meta.get("proxy"))  # Compare this to the earlier one whether they both are the same

    def errback_httpbin(self, failure):
        print("Failure: " + str(failure))

if __name__ == "__main__":
    c = CrawlerProcess({
        'USER_AGENT': 'Mozilla/5.0',
        'DOWNLOAD_TIMEOUT': 5,
    })
    c.crawl(ProxySpider)
    c.start()
PS: My intention is to find a solution along the lines of what I've started here.
As we know, an HTTP response needs to pass through all the middlewares in order to reach the spider's methods.
That means only requests with valid proxies can proceed to the spider's callback functions.
In order to use valid proxies, we need to check ALL proxies first and after that choose only from the valid ones.
When our previously chosen proxy stops working, we mark it as not valid and choose a new one from the remaining valid proxies in the spider's errback.
import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.http.request import Request

class ProxySpider(scrapy.Spider):
    name = "sslproxies"
    check_url = "https://stackoverflow.com/questions/tagged/web-scraping"
    proxy_link = "https://www.sslproxies.org/"
    current_proxy = ""
    proxies = {}

    def start_requests(self):
        yield Request(self.proxy_link, callback=self.parse_proxies)

    def parse_proxies(self, response):
        for row in response.css("table#proxylisttable tbody tr"):
            if "yes" in row.extract():
                td = row.css("td::text").extract()
                self.proxies["http://{}".format(td[0] + ":" + td[1])] = {"valid": False}

        for proxy in self.proxies.keys():
            yield Request(self.check_url, callback=self.parse, errback=self.errback_httpbin,
                          meta={"proxy": proxy,
                                "download_slot": proxy},
                          dont_filter=True)

    def parse(self, response):
        if "proxy" in response.request.meta.keys():
            # As the script reaches this parse method we can mark the current proxy as valid
            self.proxies[response.request.meta["proxy"]]["valid"] = True
            print(response.meta.get("proxy"))
            if not self.current_proxy:
                # The scraper reaches this code line on the first valid response
                self.current_proxy = response.request.meta["proxy"]
                # yield Request(next_url, callback=self.parse_next,
                #               meta={"proxy": self.current_proxy,
                #                     "download_slot": self.current_proxy})

    def errback_httpbin(self, failure):
        if "proxy" in failure.request.meta.keys():
            proxy = failure.request.meta["proxy"]
            if proxy == self.current_proxy:
                # If the current proxy stops working after we start using it,
                # mark it as not valid
                self.proxies[proxy]["valid"] = False

                for ip_port in self.proxies.keys():
                    # and choose a valid proxy from self.proxies
                    if self.proxies[ip_port]["valid"]:
                        failure.request.meta["proxy"] = ip_port
                        failure.request.meta["download_slot"] = ip_port
                        self.current_proxy = ip_port
                        return failure.request
        print("Failure: " + str(failure))

if __name__ == "__main__":
    c = CrawlerProcess({
        'USER_AGENT': 'Mozilla/5.0',
        'COOKIES_ENABLED': False,
        'DOWNLOAD_TIMEOUT': 10,
        'DOWNLOAD_DELAY': 3,
    })
    c.crawl(ProxySpider)
    c.start()
You need to write a downloader middleware that installs a process_exception hook; Scrapy calls this hook when an exception is raised. In the hook you can return a new Request object with the dont_filter=True flag, to let Scrapy reschedule the request until it succeeds.
In the meanwhile, you can verify the response extensively in a process_response hook - check the status code, response content, etc. - and reschedule the request as necessary.
In order to change the proxy easily, you should use the built-in HttpProxyMiddleware instead of tinkering with environment variables:
request.meta['proxy'] = proxy_address
Take a look at this project as an example.
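As a rough illustration of that idea, here is a minimal sketch of such a middleware. It assumes the spider exposes a get_random_proxy() helper returning an "ip:port" string (a hypothetical method, not part of Scrapy), and it would need to be enabled via DOWNLOADER_MIDDLEWARES in settings.py:
class RetryWithNewProxyMiddleware(object):

    def process_exception(self, request, exception, spider):
        # Scrapy calls this when the download raised an exception
        # (timeout, connection refused, etc.); retry with another proxy.
        req = request.replace(dont_filter=True)
        req.meta['proxy'] = "http://{}".format(spider.get_random_proxy())
        return req

    def process_response(self, request, response, spider):
        # Reschedule responses with unwanted status codes as well.
        if response.status != 200:
            req = request.replace(dont_filter=True)
            req.meta['proxy'] = "http://{}".format(spider.get_random_proxy())
            return req
        return response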
I'm trying to scrape a website for broken links; so far I have this code, which successfully logs in and crawls the site, but it only records HTTP status 200 codes:
class HttpStatusSpider(scrapy.Spider):
    name = 'httpstatus'
    handle_httpstatus_all = True

    link_extractor = LinkExtractor()

    def start_requests(self):
        """This method ensures we login before we begin spidering"""
        # Little bit of magic to handle the CSRF protection on the login form
        resp = requests.get('http://localhost:8000/login/')
        tree = html.fromstring(resp.content)
        csrf_token = tree.cssselect('input[name=csrfmiddlewaretoken]')[0].value

        return [FormRequest('http://localhost:8000/login/', callback=self.parse,
                            formdata={'username': 'mischa_cs',
                                      'password': 'letmein',
                                      'csrfmiddlewaretoken': csrf_token},
                            cookies={'csrftoken': resp.cookies['csrftoken']})]

    def parse(self, response):
        item = HttpResponseItem()
        item['url'] = response.url
        item['status'] = response.status
        item['referer'] = response.request.headers.get('Referer', '')
        yield item

        for link in self.link_extractor.extract_links(response):
            r = Request(link.url, self.parse)
            r.meta.update(link_text=link.text)
            yield r
The docs and these answers lead me to believe that handle_httpstatus_all = True should cause scrapy to pass errored requests to my parse method, but so far I've not been able to capture any.
I've also experimented with handle_httpstatus_list and a custom errback handler in a different iteration of the code.
What do I need to change to capture the HTTP error codes scrapy is encountering?
handle_httpstatus_list can be defined at the spider level, but handle_httpstatus_all can only be set at the Request level, by including it in the meta argument.
I would still recommend using an errback for these cases, but if everything is controlled, it shouldn't create new problems.
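Applied to the question's code, that would look roughly like this (a sketch: only the Request creation inside parse changes, with link_text folded into the same meta dict):
for link in self.link_extractor.extract_links(response):
    r = Request(link.url, self.parse,
                meta={'handle_httpstatus_all': True, 'link_text': link.text})
    yield r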
So, I don't know if this is the proper scrapy way, but it does allow me to handle all HTTP status codes (including 5xx).
I disabled the HttpErrorMiddleware by adding this snippet to my scrapy project's settings.py:
SPIDER_MIDDLEWARES = {
    'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': None
}
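For what it's worth, Scrapy also has an HTTPERROR_ALLOW_ALL setting, which should have a similar effect without removing the middleware (an alternative to the snippet above, not something the original answer mentions):
# settings.py - pass responses with any status code through to the spider
HTTPERROR_ALLOW_ALL = True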
I have the parse function as:
def parse(self, response):
    a = list(map(chr, range(97, 123)))
    for i in a:
        yield FormRequest.from_response(
            response,
            formdata={'posted': 'posted', 'LastName': i, 'distance': '0', 'current_page': '2'},
            callback=self.after
        )
Here I am sending requests to the same URL but with a different LastName parameter each time, as shown above. But I am not getting a response for all of my requests; instead it only retrieves the result for the letter 'Q'. How can I force it to visit the same URL with a different parameter each time?
You need to set dont_filter = True on your FormRequest.
yield FormRequest.from_response(
    response,
    formdata={'posted': 'posted', 'LastName': i, 'distance': '0', 'current_page': '2'},
    callback=self.after,
    dont_filter=True
)
See http://doc.scrapy.org/en/latest/topics/request-response.html for more info about it.