Getting a New ASP.NET Session ID after Expiration in Scrapy - python

I have written a scraper in Scrapy 1.5 that successfully navigates to a webpage (running ASP.NET on IIS version 8.5), submits a form, and then gets to scraping. After a few hours, all of the pages start returning blank data. I believe that my ASP.NET session ID is expiring when this happens.

I can never make it through the entire table (several thousand pages) while crawling at a respectful rate, but the table doesn't change from session to session. My approach was to scrape until the pages were returned blank, then go back to the form submission page and resubmit the form. I am keeping track of the page number so that I can pick up where I left off. The problem is that when I resubmit the form, the pages are still returned blank. If I stop the scraper and set the count variable manually to the last page scraped, it works fine when I restart the scraper. Using Fiddler I can see that the only thing that is different is that I have a new ASP.NET session ID.

So my question is: how can I clear out my ASP.NET session ID so that I am given a new one and I can continue scraping? Here is a redacted version of the spider:
class assessorSpider(scrapy.Spider):
    name = 'redacted'
    allowed_domains = ['redacted.redacted']
    start_urls = ['http://redacted.redacted/search.aspx']
    base_url = start_urls[0]
    rows_selector = '#T1 > tbody:nth-child(3) > tr'
    numberOfPages = -1
    count = 1

    def parse(self, response):
        # ASP.NET session ID gets stored in the headers after the initial search
        frmdata = {'id': 'frmSearch', 'SearchField': '%', 'cmdGo': 'Go'}
        yield scrapy.FormRequest(url=self.base_url, formdata=frmdata, callback=self.parse_index)
        self.log('Search submitted')

    def parse_index(self, response):
        self.log('proceeding to next page')
        rows = response.css(self.rows_selector)
        if (len(rows) < 50 and self.count != self.numberOfPages):
            self.log('Deficient rows. Resubmitting')
            yield scrapy.Request(callback=self.parse, url=self.base_url, headers='')
        self.log('Ready to yield value')
        for row in rows:
            value = {
                # a whole bunch of css selectors
            }
            yield value
        if self.numberOfPages == -1:
            self.numberOfPages = response.css('a.button::attr(href)')[2].extract().split('=')[-1]
        self.count = self.count + 1
        if self.count <= self.numberOfPages:
            self.log(self.base_url + '?page=' + str(self.count))
            yield scrapy.Request(callback=self.parse_index, url=self.base_url + '?page=' + str(self.count))
Note: I have read that making a request with an expired ASP.NET session ID should result in a new one being issued (depending on how the site is set up), so it is possible that Scrapy is not accepting the new session ID. I am not sure how to diagnose this issue.
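One thing I could try to diagnose this (just a sketch; COOKIES_DEBUG is a standard Scrapy setting) is turning on cookie logging in settings.py, so I can see in the log whether the server actually issues a new ASP.NET_SessionId and whether Scrapy sends it back on later requests:

# settings.py (diagnostic sketch)
COOKIES_ENABLED = True   # the default, shown here for clarity
COOKIES_DEBUG = True     # log the Cookie / Set-Cookie headers of every request and response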

There are two things that come to mind:
1) Your "start a fresh session" request might be getting dropped by the duplicate filter: by default, Scrapy filters out requests for URLs it has already seen, such as your base URL. Try yield scrapy.Request(callback=self.parse, url=self.base_url, dont_filter=True, headers='') in your "reset the session" request.
2) If that doesn't work (or perhaps in addition to):
I'm pretty new to Scrapy and Python, so there might be a more direct method to "reset your cookies", but specifying a fresh cookiejar should work.
https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#multiple-cookie-sessions-per-spider
The cookiejar is essentially a dict-like object that tracks your current session's cookies. You can specify which cookiejar a request uses via the cookiejar key in its meta.
# Set up a new session if bad news:
if (len(rows) < 50 and self.count != self.numberOfPages):
    self.log('Deficient rows. Resubmitting')
    yield scrapy.Request(
        callback=self.parse,
        url=self.base_url,
        dont_filter=True,
        meta={
            # Since you are already tracking a counter,
            # this might make for a reasonable "next cookiejar id"
            'cookiejar': self.count
        }
    )
Now that you are specifying a new cookiejar, you are in a new session. You must account for this in your other requests by checking whether a cookiejar is set and continuing to pass this value; otherwise you end up back in the default cookiejar. It might be easiest to manage this from the very beginning by defining start_requests:
def start_requests(self):
    return [
        scrapy.Request(
            url,
            dont_filter=True,
            meta={'cookiejar': self.count}
        ) for url in self.start_urls
    ]
Now your other request objects just need to implement the following pattern to "stay in the same session", such as in your parse method:
yield scrapy.FormRequest(
    url=self.base_url,
    formdata=frmdata,
    callback=self.parse_index,
    meta={
        'cookiejar': response.meta.get('cookiejar')
    }
)

Related

How to give scrapy spider a new task only once the current one has received a response from the server

I have a list of entries in a database, each of which corresponds to a scraping task. Only once one is finished do I want the spider to continue to the next one. Here is some pseudocode that gives the idea of what I want to do, though it is not exactly what I want because it uses a while loop, creating a massive backlog of entries waiting to be processed.
def start_requests(self):
    while True:
        rec = GetDocumentAndMarkAsProcessing()
        if rec == None:
            break
        script = getScript(rec)
        yield SplashRequest(..., callback=self.parse, endpoint="execute",
            args={
                'lua_source': script
            }
        )

def parse(self, response):
    ... store results in database ...
How can I make Scrapy work on the next entry only once it has received a response to the SplashRequest for the previous entry?
I am not sure if simple callback functions would be enough to do the trick or if I need something more sophisticated.
All I needed to do was explicitly yield another request in the parse function, with parse itself as the callback. So in the end I have something like this:
def start_requests(self):
    rec = GetDocumentAndMarkAsProcessing()
    script = getScript(rec)
    yield SplashRequest(..., callback=self.parse, endpoint="execute",
        args={
            'lua_source': script
        }
    )

def parse(self, response):
    ... store results in database ...
    rec = GetDocumentAndMarkAsProcessing()
    script = getScript(rec)
    yield SplashRequest(..., callback=self.parse, endpoint="execute",
        args={
            'lua_source': script
        }
    )
I believe you might be able to achieve this by setting CONCURRENT_REQUESTS to 1 in your settings.py. That will make it so the crawler only sends one request at a time, although I admit I am not sure how the timing works on the second request - whether it sends it when the callback is finished executing or when the response is retrieved.
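For reference, a minimal settings.py sketch of that idea (the values shown are just an example, not a requirement of the question):

# settings.py (sketch): force strictly sequential crawling
CONCURRENT_REQUESTS = 1             # only one request in flight at a time
CONCURRENT_REQUESTS_PER_DOMAIN = 1  # also serialise scheduling per domain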

Unable to force scrapy to make a callback using the url that got redirected

I've created a Python script using Scrapy to scrape some information available on a certain webpage. The problem is that the link I'm trying gets redirected very often. However, when I try a few times using requests, I get the desired content.
In the case of Scrapy, I'm unable to reuse the link because I find it redirecting no matter how many times I try. I can even catch the main URL using response.meta.get("redirect_urls")[0], which is meant to be used recursively within the parse method. However, it always gets redirected and as a result the callback does not take place.
This is my current attempt (the link used within the script is just a placeholder):
import scrapy
from scrapy.crawler import CrawlerProcess

class StackoverflowSpider(scrapy.Spider):
    handle_httpstatus_list = [301, 302]
    name = "stackoverflow"
    start_url = 'https://stackoverflow.com/questions/22937618/reference-what-does-this-regex-mean'

    def start_requests(self):
        yield scrapy.Request(self.start_url, meta={"lead_link": self.start_url}, callback=self.parse)

    def parse(self, response):
        if response.meta.get("lead_link"):
            self.lead_link = response.meta.get("lead_link")
        elif response.meta.get("redirect_urls"):
            self.lead_link = response.meta.get("redirect_urls")[0]
        try:
            if response.status != 200: raise
            if not response.css("[itemprop='text'] > h2"): raise
            answer_title = response.css("[itemprop='text'] > h2::text").get()
            print(answer_title)
        except Exception:
            print(self.lead_link)
            yield scrapy.Request(self.lead_link, meta={"lead_link": self.lead_link}, dont_filter=True, callback=self.parse)

if __name__ == "__main__":
    c = CrawlerProcess({
        'USER_AGENT': 'Mozilla/5.0',
    })
    c.crawl(StackoverflowSpider)
    c.start()
Question: How can I force scrapy to make a callback using the url that got redirected?
As far as I understand, you want to keep requesting a link until it stops redirecting and you finally get HTTP status 200.
If so, you first have to remove handle_httpstatus_list = [301, 302] from your code.
Then create a CustomMiddleware in middlewares.py
import logging

class CustomMiddleware(object):
    def process_response(self, request, response, spider):
        # Desired content missing: retry the same request.
        if not response.css("[itemprop='text'] > h2"):
            logging.info('Desired text not found in %s, so re-scraping it' % request.url)
            req = request.copy()
            req.dont_filter = True
            return req
        # Redirect received: re-request the original (pre-redirect) URL.
        if response.status in [301, 302]:
            original_url = request.meta.get('redirect_urls', [response.url])[0]
            logging.info('%s is redirecting to %s, so re-scraping it' % (request._url, request.url))
            request._url = original_url
            request.dont_filter = True
            return request
        return response
Then your spider should look something like this:
class StackoverflowSpider(scrapy.Spider):
    name = "stackoverflow"
    start_url = 'https://stackoverflow.com/questions/22937618/reference-what-does-this-regex-mean'

    custom_settings = {
        'DOWNLOADER_MIDDLEWARES': {
            'YOUR_PROJECT_NAME.middlewares.CustomMiddleware': 100,
        }
    }

    def start_requests(self):
        yield scrapy.Request(self.start_url, meta={"lead_link": self.start_url}, callback=self.parse)

    def parse(self, response):
        answer_title = response.css("[itemprop='text'] > h2::text").get()
        print(answer_title)
If you tell me which site you are scraping, I can help you out; you can also email me at the address on my profile.
You may want to see this.
If you need to prevent redirects, you can do it through the request meta:
request = scrapy.Request(self.start_url,meta={"lead_link":self.start_url},callback=self.parse)
request.meta['dont_redirect'] = True
yield request
According to the documentation, this is the way to stop redirects.
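If you would rather switch off redirect handling for the whole crawl instead of per request, a sketch using Scrapy's built-in settings (not part of the original answer) would be:

# settings.py (sketch): disable the RedirectMiddleware globally
REDIRECT_ENABLED = False
# or alternatively cap how many redirects are followed per request:
# REDIRECT_MAX_TIMES = 1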

Capturing HTTP Errors using scrapy

I'm trying to scrape a website for broken links. So far I have this code, which successfully logs in and crawls the site, but only records HTTP status 200 codes:
import requests
import scrapy
from lxml import html
from scrapy import FormRequest, Request
from scrapy.linkextractors import LinkExtractor
# HttpResponseItem is the project's own Item subclass

class HttpStatusSpider(scrapy.Spider):
    name = 'httpstatus'
    handle_httpstatus_all = True

    link_extractor = LinkExtractor()

    def start_requests(self):
        """This method ensures we login before we begin spidering"""
        # Little bit of magic to handle the CSRF protection on the login form
        resp = requests.get('http://localhost:8000/login/')
        tree = html.fromstring(resp.content)
        csrf_token = tree.cssselect('input[name=csrfmiddlewaretoken]')[0].value
        return [FormRequest('http://localhost:8000/login/', callback=self.parse,
                            formdata={'username': 'mischa_cs',
                                      'password': 'letmein',
                                      'csrfmiddlewaretoken': csrf_token},
                            cookies={'csrftoken': resp.cookies['csrftoken']})]

    def parse(self, response):
        item = HttpResponseItem()
        item['url'] = response.url
        item['status'] = response.status
        item['referer'] = response.request.headers.get('Referer', '')
        yield item

        for link in self.link_extractor.extract_links(response):
            r = Request(link.url, self.parse)
            r.meta.update(link_text=link.text)
            yield r
The docs and these answers lead me to believe that handle_httpstatus_all = True should cause scrapy to pass errored requests to my parse method, but so far I've not been able to capture any.
I've also experimented with handle_httpstatus_list and a custom errback handler in a different iteration of the code.
What do I need to change to capture the HTTP error codes scrapy is encountering?
handle_httpstatus_list can be defined at the spider level, but handle_httpstatus_all can only be defined at the Request level, by including it in the meta argument.
I would still recommend using an errback for these cases, but if everything is controlled, it shouldn't create new problems.
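A rough sketch of both ideas together, as methods inside your spider (the URL is the one from your question; the errback name is just for illustration):

def start_requests(self):
    # handle_httpstatus_all is a Request.meta key, so non-2xx responses
    # are passed to the callback instead of being filtered out.
    yield scrapy.Request(
        'http://localhost:8000/',
        callback=self.parse,
        errback=self.on_error,  # catches failures that never produce a response
        meta={'handle_httpstatus_all': True},
    )

def on_error(self, failure):
    # Hypothetical errback: just log the failure (DNS errors, timeouts, ...)
    self.logger.error(repr(failure))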
So, I don't know if this is the proper scrapy way, but it does allow me to handle all HTTP status codes (including 5xx).
I disabled the HttpErrorMiddleware by adding this snippet to my scrapy project's settings.py:
SPIDER_MIDDLEWARES = {
    'scrapy.spidermiddlewares.httperror.HttpErrorMiddleware': None
}

How to redirect after form post with scrapy_splash package?

I'm using Python, Scrapy, Splash, and the scrapy_splash package to scrape a website.
I'm able to log in using the SplashRequest object in scrapy_splash.
Login creates a cookie which gives me access to a portal page. Up to this point everything works.
On the portal page, there is a form element wrapping a number of buttons. When one is clicked, the action URL gets updated and a form submission is triggered. The form submission results in a 302 redirect.
I tried the same approach with SplashRequest; however, I'm unable to capture the SSO query parameter that is returned with the redirect. I've tried to read the Location header without success.
I've also tried using Lua scripts in combination with the SplashRequest object, but I'm still unable to access the redirect Location object.
Any guidance would be greatly appreciated.
I realize there are other solutions (i.e. selenium) available however the above tech is what we are using across a large number of other scripts and I hesitate to add new tech for this specific use case.
# Lua script to capture cookies and SSO query parameter from 302 redirect
lua_script = """
function main(splash)
    if splash.args.cookies then
        splash:init_cookies(splash.args.cookies)
    end

    assert(splash:go{
        splash.args.url,
        headers=splash.args.headers,
        http_method=splash.args.http_method,
        body=splash.args.body,
        formdata=splash.args.formdata
    })
    assert(splash:wait(0))

    local entries = splash:history()
    local last_response = entries[#entries].response

    return {
        url = splash:url(),
        headers = last_response.headers,
        http_status = last_response.status,
        cookies = splash:get_cookies(),
        html = splash:html(),
    }
end
"""
def parse(self, response):
    yield SplashRequest(
        url='https://members.example.com/login',
        callback=self.portal_page,
        method='POST',
        endpoint='execute',
        args={
            'wait': 0.5,
            'lua_source': self.lua_script,
            'formdata': {
                'username': self.login,
                'password': self.password
            },
        }
    )

def portal_page(self, response):
    yield SplashRequest(
        url='https://data.example.com/portal',
        callback=self.data_download,
        args={
            'wait': 0.5,
            'lua_source': self.lua_script,
            'formdata': {}
        },
    )

def data_download(self, response):
    print(response.body.decode('utf8'))
I updated the question above with a working example.
I changed a few things; however, the problem I was having was directly related to the missing splash:init_cookies(splash.args.cookies) reference.
I also converted from SplashFormRequest to SplashRequest, refactored the splash:go block, and removed a reference to the specific form.
Thanks again @MikhailKorobov for your help.
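For anyone else hitting this, here is a rough sketch of how the cookies returned by the Lua script can be carried forward to the next SplashRequest; when the execute endpoint returns a table, scrapy_splash exposes it as response.data, and splash:init_cookies(splash.args.cookies) picks up whatever is passed in args['cookies']. The method and URL names below are illustrative, matching the example above:

def portal_page(self, response):
    # Forward the session cookies captured by the previous Lua run
    # (response.data is the table returned by the execute endpoint).
    yield SplashRequest(
        url='https://data.example.com/portal',
        callback=self.data_download,
        endpoint='execute',
        args={
            'wait': 0.5,
            'lua_source': self.lua_script,
            'cookies': response.data['cookies'],  # read by splash:init_cookies()
        },
    )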

How to visit URL multiple times with different parameters in Scrapy?

I have the parse function as:
def parse(self, response):
    a = list(map(chr, range(97, 123)))
    for i in a:
        yield FormRequest.from_response(
            response,
            formdata={'posted': 'posted', 'LastName': i, 'distance': '0', 'current_page': '2'},
            callback=self.after
        )
Here I am sending requests to the same URL but with a different LastName parameter each time, as shown above. But it is not returning a response to all my requests; instead it only retrieves the result for the letter 'Q'. How can I force it to visit the same URL with a different parameter each time?
You need to set dont_filter = True on your FormRequest.
yield FormRequest.from_response(
    response,
    formdata={'posted': 'posted', 'LastName': i, 'distance': '0', 'current_page': '2'},
    callback=self.after,
    dont_filter=True
)
See http://doc.scrapy.org/en/latest/topics/request-response.html for more info about it.
