I accidentally dropped my internet connection and got the error below. But why did this line trigger the error?
self.content += tuple(subreddit_posts)
Or perhaps I should ask: why did the following block not lead to a sys.exit()? It seems like it should catch all errors:
try:
    subreddit_posts = self.r.get_content(url, limit=10)
except:
    print '*** Could not connect to Reddit.'
    sys.exit()
Does this mean I am inadvertently hitting Reddit's network twice?
FYI, praw is a Reddit API client, and get_content() fetches a subreddit's posts/submissions as a generator object.
The error message:
Traceback (most recent call last):
  File "beam.py", line 49, in <module>
    main()
  File "beam.py", line 44, in main
    scan.scanNSFW()
  File "beam.py", line 37, in scanNSFW
    map(self.getSub, self.nsfw)
  File "beam.py", line 26, in getSub
    self.content += tuple(subreddit_posts)
  File "/Library/Python/2.7/site-packages/praw/__init__.py", line 504, in get_co
    page_data = self.request_json(url, params=params)
  File "/Library/Python/2.7/site-packages/praw/decorators.py", line 163, in wrap
    return_value = function(reddit_session, *args, **kwargs)
  File "/Library/Python/2.7/site-packages/praw/__init__.py", line 557, in reques
    retry_on_error=retry_on_error)
  File "/Library/Python/2.7/site-packages/praw/__init__.py", line 399, in _reque
    _raise_response_exceptions(response)
  File "/Library/Python/2.7/site-packages/praw/internal.py", line 178, in _raise
    response.raise_for_status()
  File "/Library/Python/2.7/site-packages/requests/models.py", line 831, in rais
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 503 Server Error: Service Unavailable
The script (it's short):
import sys, os, pprint, praw

class Scanner(object):
    ''' A scanner object. '''
    def __init__(self):
        self.user_agent = 'debian.22990.myapp'
        self.r = praw.Reddit(user_agent=self.user_agent)
        self.nsfw = ('funny', 'nsfw')
        self.nsfw_posters = set()
        self.content = ()

    def getSub(self, subreddit):
        ''' Accepts a subreddit. Connects to subreddit and retrieves content.
        Unpacks generator object containing content into tuple. '''
        url = 'http://www.reddit.com/r/{sub}/'.format(sub=subreddit)
        print 'Scanning:', subreddit
        try:
            subreddit_posts = self.r.get_content(url, limit=10)
        except:
            print '*** Could not connect to Reddit.'
            sys.exit()
        print 'Constructing list.',
        self.content += tuple(subreddit_posts)
        print 'Done.'

    def addNSFWPoster(self, post):
        print 'Parsing author and adding to posters.'
        self.nsfw_posters.add(str(post.author))

    def scanNSFW(self):
        ''' Scans all NSFW subreddits. Makes list of posters.'''
        # Get content from all nsfw subreddits
        print 'Executing map function.'
        map(self.getSub, self.nsfw)
        # Scan content and get authors
        print 'Executing list comprehension.'
        [self.addNSFWPoster(post) for post in self.content]

def main():
    scan = Scanner()
    scan.scanNSFW()
    for i in scan.nsfw_posters:
        print i
    print len(scan.content)

main()
It looks like praw fetches objects lazily: the request is only made when you actually consume subreddit_posts, which explains why it blows up on that line rather than inside the try block.
See: https://praw.readthedocs.org/en/v2.1.20/pages/lazy-loading.html
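To make the except clause actually catch the failure, the generator has to be consumed inside the try block. Here is a minimal sketch of getSub along those lines (add import requests at the top of the script; the traceback shows requests is what praw uses underneath):

def getSub(self, subreddit):
    url = 'http://www.reddit.com/r/{sub}/'.format(sub=subreddit)
    print 'Scanning:', subreddit
    try:
        # tuple() consumes the lazy generator, so the network request
        # (and any HTTPError) now happens inside the try block
        posts = tuple(self.r.get_content(url, limit=10))
    except requests.exceptions.HTTPError:
        print '*** Could not connect to Reddit.'
        sys.exit()
    self.content += posts
    print 'Done.'

Catching requests.exceptions.HTTPError instead of using a bare except also keeps sys.exit() from masking unrelated bugs.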
I use the Reddit API clients praw and psraw to extract comments from a subreddit, but I got two errors today after running a few loops:
1. A ValueError ("JSON object could not be decoded", or the object is empty); even though I catch the exception in my code, it still doesn't work.
2. An HTTP request timeout.
Example:
Traceback (most recent call last):
  File "C:/Users/.../subreddit psraw.py", line 20, in <module>
    for comment in submission.comments:
  File "C:\Python27\lib\site-packages\praw\models\reddit\base.py", line 31, in __getattr__
    self._fetch()
  File "C:\Python27\lib\site-packages\praw\models\reddit\submission.py", line 142, in _fetch
    'sort': self.comment_sort})
  File "C:\Python27\lib\site-packages\praw\reddit.py", line 367, in get
    data = self.request('GET', path, params=params)
  File "C:\Python27\lib\site-packages\praw\reddit.py", line 451, in request
    params=params)
  File "C:\Python27\lib\site-packages\prawcore\sessions.py", line 174, in request
    params=params, url=url)
  File "C:\Python27\lib\site-packages\prawcore\sessions.py", line 108, in _request_with_retries
    data, files, json, method, params, retries, url)
  File "C:\Python27\lib\site-packages\prawcore\sessions.py", line 93, in _make_request
    params=params)
  File "C:\Python27\lib\site-packages\prawcore\rate_limit.py", line 33, in call
    response = request_function(*args, **kwargs)
  File "C:\Python27\lib\site-packages\prawcore\requestor.py", line 49, in request
    raise RequestException(exc, args, kwargs)
prawcore.exceptions.RequestException: error with request
HTTPSConnectionPool(host='oauth.reddit.com', port=443): Read timed out. (read timeout=16.0)
Since the subreddit contains 10k+ comments, is there a way to solve this issue? Or is it because the reddit website is having problems today?
My code:
import praw, datetime, os, psraw

reddit = praw.Reddit('bot1')
subreddit = reddit.subreddit('example')

for submission in psraw.submission_search(reddit, subreddit='example', limit=1000000):
    try:
        # get comments
        for comment in submission.comments:
            subid = submission.id
            comid = comment.id
            com_body = comment.body.encode('utf-8').replace("\n", " ")
            com_date = datetime.datetime.utcfromtimestamp(comment.created_utc)
            string_com = '"{0}", "{1}", "{2}"\n'
            formatted_string_com = string_com.format(comid, com_body, com_date)
            indexFile_comment = open('path' + subid + '.txt', 'a+')
            indexFile_comment.write(formatted_string_com)
    except ValueError:
        print ("error")
        continue
    except AttributeError:
        print ("error")
        continue
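The read timeout is raised by the network layer (prawcore), not by JSON decoding, so it arrives as prawcore.exceptions.RequestException rather than the ValueError you are catching. A minimal sketch of one way to ride out transient timeouts, assuming the exception class shown in your traceback, is to retry the comment fetch with a pause:

import time
import prawcore

def fetch_comments_with_retry(submission, max_retries=3):
    # retry transient network failures (e.g. read timeouts) a few
    # times before giving up on this submission
    for attempt in range(max_retries):
        try:
            # accessing submission.comments triggers the lazy fetch
            return list(submission.comments)
        except prawcore.exceptions.RequestException:
            print("request failed, retry %d of %d" % (attempt + 1, max_retries))
            time.sleep(5)  # back off before retrying
    return []  # skip this submission after repeated failures

With that helper, the inner loop becomes for comment in fetch_comments_with_retry(submission): and a flaky connection slows the run down instead of crashing it.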
My program scans a large list of websites for SQLi vulnerabilities by appending a simple string query (') to the end of each URL and looking for database errors in the page source.
It keeps getting stuck on the same website. Here's the error I keep receiving:
[-] http://www.pluralsight.com/guides/microsoft-net/getting-started-with-asp-net-mvc-core-1-0-from-zero-to-hero?status=in-review'
[-] Page not found.
[-] http://lfg.go2dental.com/member/dental_search/searchprov.cgi?P=LFGDentalConnect&Network=L'
[-] http://www.parlimen.gov.my/index.php?lang=en'
[-] http://www.otakunews.com/category.php?CatID=23'
[-] http://plaine-d-aunis.bibli.fr/opac/index.php?lvl=cmspage&pageid=6&id_rubrique=100'
[-] Page not found.
[-] http://www.rvparkhunter.com/state.asp?state=britishcolumbia'
[-] http://ensec.org/index.php?option=com_content&view=article&id=547:lord-howell-british-fracking-policy--a-change-of-direction-needed&catid=143:issue-content&Itemid=433'
[-] URL Timed Out
[-] http://www.videohelp.com/tools.php?listall=1'
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 44, in mapstar
    return list(map(*args))
  File "C:\Users\Brice\Desktop\My Site Hunter\sitehunter.py", line 81, in mp_worker
    mainMethod(URLS)
  File "C:\Users\Brice\Desktop\My Site Hunter\sitehunter.py", line 77, in mainMethod
    tryMethod(req, URL)
  File "C:\Users\Brice\Desktop\My Site Hunter\sitehunter.py", line 48, in tryMethod
    checkforMySQLError(req, URL)
  File "C:\Users\Brice\Desktop\My Site Hunter\sitehunter.py", line 23, in checkforMySQLError
    response = urllib.request.urlopen(req, context=gcontext, timeout=2)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 223, in urlopen
    return opener.open(url, data, timeout)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 532, in open
    response = meth(req, response)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 642, in http_response
    'http', request, response, code, msg, hdrs)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 564, in error
    result = self._call_chain(*args)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 504, in _call_chain
    result = func(*args)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\urllib\request.py", line 753, in http_error_302
    fp.read()
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 462, in read
    s = self._safe_read(self.length)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\http\client.py", line 614, in _safe_read
    raise IncompleteRead(b''.join(s), amt)
http.client.IncompleteRead: IncompleteRead(4659 bytes read, 15043 more expected)
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "sitehunter.py", line 91, in <module>
    mp_handler(URLList)
  File "sitehunter.py", line 86, in mp_handler
    p.map(mp_worker, URLList)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 260, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 608, in get
    raise self._value
http.client.IncompleteRead: IncompleteRead(4659 bytes read, 15043 more expected)

C:\Users\Brice\Desktop\My Site Hunter>
Here's my full source code; I narrow it down to the relevant parts in the next section.
# Start off with imports
import urllib.request
import urllib.error
import socket
import threading
import multiprocessing
import time
import ssl

# Fake a header to get less errors
headers = {'User-agent': 'Mozilla/5.0'}

# Make a class to pass to upon exception errors
class MyException(Exception):
    pass

# Checks for mySQL error responses after putting a string (') query on the end of a URL
def checkforMySQLError(req, URL):
    # gcontext is to bypass a no SSL error from shutting down my program
    gcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
    response = urllib.request.urlopen(req, context=gcontext, timeout=2)
    page_source = response.read()
    page_source_string = page_source.decode(encoding='cp866', errors='ignore')
    # The if statements behind the whole thing. Checks page source for these errors,
    # and returns any that come up positive.
    # I'd like to do my outputting here, if possible.
    if "You have an error in your SQL syntax" in page_source_string:
        print("\t [+] " + URL)
    elif "mysql_fetch" in page_source_string:
        print("\t [+] " + URL)
    elif "mysql_num_rows" in page_source_string:
        print("\t [+] " + URL)
    elif "MySQL Error" in page_source_string:
        print("\t [+] " + URL)
    elif "MySQL_connect()" in page_source_string:
        print("\t [+] " + URL)
    elif "UNION SELECT" in page_source_string:
        print("\t [+] " + URL)
    else:
        print("\t [-] " + URL)

# Attempts to connect to the URL, and passes an error on if it fails.
def tryMethod(req, URL):
    try:
        checkforMySQLError(req, URL)
    except urllib.error.HTTPError as e:
        if e.code == 404:
            print("\t [-] Page not found.")
        if e.code == 400:
            print("\t [+] " + URL)
    except urllib.error.URLError as e:
        print("\t [-] URL Timed Out")
    except socket.timeout as e:
        print("\t [-] URL Timed Out")
    except socket.error as e:
        print("\t [-] Error in URL")

# This is where the magic begins.
def mainMethod(URLList):
    ##### THIS IS THE WORK-AROUND I USED TO FIX THIS ERROR ####
    # URL = urllib.request.urlopen(URLList, timeout=2)
    # Replace any newlines or we get an invalid URL request.
    URL = URLList.replace("\n", "")
    # URLLib doesn't like https, not sure why.
    URL = URL.replace("https://", "http://")
    # Python likes to truncate urls after spaces, so I add a typical %20.
    URL = URL.replace("\s", "%20")
    # The blind sql query that makes the errors occur.
    URL = URL + "'"
    # Requests to connect to the URL and sends it to the tryMethod.
    req = urllib.request.Request(URL)
    tryMethod(req, URL)

# Multi-processing worker
def mp_worker(URLS):
    mainMethod(URLS)

# Multi-processing handler
def mp_handler(URLList):
    p = multiprocessing.Pool(25)
    p.map(mp_worker, URLList)

# The beginning of it all
if __name__ == '__main__':
    URLList = open('sites.txt', 'r')
    mp_handler(URLList)
Here are the important parts of the code, specifically the parts where I read from URLs using urllib:
def mainMethod(URLList):
    ##### THIS IS THE WORK-AROUND I USED TO FIX THIS ERROR ####
    # URL = urllib.request.urlopen(URLList, timeout=2)
    # Replace any newlines or we get an invalid URL request.
    URL = URLList.replace("\n", "")
    # URLLib doesn't like https, not sure why.
    URL = URL.replace("https://", "http://")
    # Python likes to truncate urls after spaces, so I add a typical %20.
    URL = URL.replace("\s", "%20")
    # The blind sql query that makes the errors occur.
    URL = URL + "'"
    # Requests to connect to the URL and sends it to the tryMethod.
    req = urllib.request.Request(URL)
    tryMethod(req, URL)

# Checks for mySQL error responses after putting a string (') query on the end of a URL
def checkforMySQLError(req, URL):
    # gcontext is to bypass a no SSL error from shutting down my program
    gcontext = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
    response = urllib.request.urlopen(req, context=gcontext, timeout=2)
    page_source = response.read()
    page_source_string = page_source.decode(encoding='cp866', errors='ignore')
I got past this error by making a request to read from URLList before making any changes to it (the commented-out line at the top of mainMethod). But that workaround only trades it for another error that looks worse and harder to fix, which is why I included the first error even though I had a fix for it.
Here's the new error I get when I remove the comment from that line of code:
[-] http://www.davis.k12.ut.us/site/Default.aspx?PageType=1&SiteID=6497&ChannelID=6507&DirectoryType=6'
[-] http://www.surreyschools.ca/NewsEvents/Posts/Lists/Posts/ViewPost.aspx?ID=507'
[-] http://plaine-d-aunis.bibli.fr/opac/index.php?lvl=cmspage&pageid=6&id_rubrique=100'
[-] http://www.parlimen.gov.my/index.php?lang=en'
[-] http://www.rvparkhunter.com/state.asp?state=britishcolumbia'
[-] URL Timed Out
[-] http://www.videohelp.com/tools.php?listall=1'
Traceback (most recent call last):
  File "sitehunter.py", line 91, in <module>
    mp_handler(URLList)
  File "sitehunter.py", line 86, in mp_handler
    p.map(mp_worker, URLList)
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 260, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "C:\Users\Brice\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\pool.py", line 608, in get
    raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '<multiprocessing.pool.ExceptionWithTraceback object at 0x0381C790>'. Reason: 'TypeError("cannot serialize '_io.BufferedReader' object",)'

C:\Users\Brice\Desktop\My Site Hunter>
The new error seems worse than the old one, to be honest, which is why I included both. Any information on how to fix this would be greatly appreciated; I've been stuck on it for the past few hours.
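Both tracebacks end inside pool.map. The second one means a worker raised an exception that could not be pickled back to the parent process (it holds a file object, hence "cannot serialize '_io.BufferedReader'"). One sketch of a way around both problems, assuming it is acceptable to treat an incomplete read as just another failed URL, is to catch http.client.IncompleteRead inside the worker so no exception has to cross the process boundary, and to hand the pool a plain list of strings instead of an open file object:

import http.client

# Multi-processing worker: handle IncompleteRead locally so the pool
# never has to serialize the exception back to the parent process
def mp_worker(URLS):
    try:
        mainMethod(URLS)
    except http.client.IncompleteRead:
        print("\t [-] Incomplete read, skipping URL")

# The beginning of it all
if __name__ == '__main__':
    # read the file into a list of strings before handing it to the pool
    with open('sites.txt', 'r') as f:
        URLList = [line.strip() for line in f]
    mp_handler(URLList)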
I'm trying to get the GET parameters from the URL. I have this working in my __init__.py file, but in a different file it's not working.
I tried using with app.app_context():, but I still get the same issue:
def log_entry(entity, type, entity_id, data, error):
    with app.app_context():
        zip_id = request.args.get('id')

RuntimeError: working outside of request context
Any suggestions?
Additional info:
This is using the Flask web framework, which is set up as a service (API).
Example URL the user would hit: http://website.com/api/endpoint?id=1
As mentioned above, zip_id = request.args.get('id') works fine in the main file, but here I am in runners.py (just another file with definitions in it).
Full traceback:
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
  File "/Users/ereeve/.virtualenvs/pi-automation-api/lib/python2.7/site-packages/werkzeug/wsgi.py", line 703, in __next__
    return self._next()
  File "/Users/ereeve/.virtualenvs/pi-automation-api/lib/python2.7/site-packages/werkzeug/wrappers.py", line 81, in _iter_encoded
    for item in iterable:
  File "/Users/ereeve/Documents/TechSol/pi-automation-api/automation_api/runners.py", line 341, in create_agencies
    log_entry("test", "created", 1, "{'data':'hey'}", "")
  File "/Users/ereeve/Documents/TechSol/pi-automation-api/automation_api/runners.py", line 315, in log_entry
    zip_id = request.args.get('id')
  File "/Users/ereeve/.virtualenvs/pi-automation-api/lib/python2.7/site-packages/werkzeug/local.py", line 343, in __getattr__
    return getattr(self._get_current_object(), name)
  File "/Users/ereeve/.virtualenvs/pi-automation-api/lib/python2.7/site-packages/werkzeug/local.py", line 302, in _get_current_object
    return self.__local()
  File "/Users/ereeve/.virtualenvs/pi-automation-api/lib/python2.7/site-packages/flask/globals.py", line 20, in _lookup_req_object
    raise RuntimeError('working outside of request context')
RuntimeError: working outside of request context
The def in the same file that calls the log_entry def:

def create_agencies(country_code, DB, session):
    document = DB.find_one({'rb_account_id': RB_COUNTRIES_new[country_code]['rb_account_id']})
    t2 = new_t2(session)
    log_entry("test", "created", 1, "{'data':'hey'}", "")
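app.app_context() only pushes an application context; request is backed by a request context, which exists only while Flask is actually handling an HTTP request. Since create_agencies runs outside any request, one fix is to read the parameter in the view function, where the request context is active, and pass the plain value down. A minimal sketch (the route and the extra zip_id parameter here are made up for illustration):

from flask import Flask, request

app = Flask(__name__)

def log_entry(entity, type, entity_id, data, error, zip_id=None):
    # zip_id arrives as an ordinary argument; no request context needed
    print 'logging', entity, zip_id

@app.route('/api/endpoint')
def endpoint():
    # request is available here because Flask is handling a real request
    zip_id = request.args.get('id')
    log_entry("test", "created", 1, "{'data':'hey'}", "", zip_id=zip_id)
    return 'ok'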
This problem happens only when SSL is enabled!
I am running a server with bottle version 0.12.6 and CherryPy version 3.2.2, using https.
The client code sends a file to the server, and the server saves it.
When I send a file smaller than 102199 bytes, it is received and saved successfully. However, when I send a file of 102199 bytes or bigger, I get the exception below.
The server code:

from bottle import request, response, static_file, run, server_names
from OpenSSL import crypto, SSL
from bottle import Bottle, run, request, server_names, ServerAdapter

app = Bottle()
app.mount('/test', app)

class MySSLCherryPy(ServerAdapter):
    def run(self, handler):
        from cherrypy import wsgiserver
        server = wsgiserver.CherryPyWSGIServer((self.host, self.port), handler)
        server.ssl_certificate = "./cert"
        server.ssl_private_key = "./key"
        try:
            server.start()
        finally:
            server.stop()

@app.post('/upload')
def received_file():
    file = request.files.file
    # file.save("./newfile")
    file_path = "./newfile"
    with open(file_path, 'w') as open_file:
        open_file.write(file.read())

if __name__ == '__main__':
    server_names['mysslcherrypy'] = MySSLCherryPy
    run(app, host='0.0.0.0', port=4430, server='mysslcherrypy')
    exit(0)
Why does the server fail to receive files above a given size? Is there a limit I need to change?
(I tried setting the constant MEMFILE_MAX inside the function received_file, but it didn't help.)
The problem vanishes if the server uses http instead of https!
The exception in plain text (in case you cannot view the image):
Traceback (most recent call last):
  File "/usr/lib/python2.6/site-packages/bottle.py", line 861, in _handle
    return route.call(**args)
  File "/usr/lib/python2.6/site-packages/bottle.py", line 1727, in wrapper
    rv = callback(*a, **ka)
  File "testser", line 28, in received_file
    file = request.files.file
  File "/usr/lib/python2.6/site-packages/bottle.py", line 165, in get
    if key not in storage: storage[key] = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/bottle.py", line 1106, in files
    for name, item in self.POST.allitems():
  File "/usr/lib/python2.6/site-packages/bottle.py", line 165, in get
    if key not in storage: storage[key] = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/bottle.py", line 1222, in POST
    args = dict(fp=self.body, environ=safe_env, keep_blank_values=True)
  File "/usr/lib/python2.6/site-packages/bottle.py", line 1193, in body
    self._body.seek(0)
  File "/usr/lib/python2.6/site-packages/bottle.py", line 165, in get
    if key not in storage: storage[key] = self.getter(obj)
  File "/usr/lib/python2.6/site-packages/bottle.py", line 1162, in _body
    for part in body_iter(read_func, self.MEMFILE_MAX):
  File "/usr/lib/python2.6/site-packages/bottle.py", line 1125, in _iter_body
    part = read(min(maxread, bufsize))
  File "/usr/lib/python2.6/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 329, in read
    data = self.rfile.read(size)
  File "/usr/lib/python2.6/site-packages/cherrypy/wsgiserver/wsgiserver2.py", line 1052, in read
    assert n <= left, "recv(%d) returned %d bytes" % (left, n)
AssertionError: recv(47) returned 48 bytes
Solution
In the file bottle.py I changed the value of MEMFILE_MAX to 10000000, and this solved the problem. The better way, though, is to do it from your server code by adding the next line:
bottle.BaseRequest.MEMFILE_MAX=30000000000
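Setting the attribute from your own code before run() avoids patching the installed bottle.py. A minimal sketch against the server code above (the 30 MB figure is an arbitrary illustration; pick whatever upper bound you want to allow):

import bottle

if __name__ == '__main__':
    # raise bottle's in-memory request-body limit (in bytes) before serving;
    # requests with bodies above MEMFILE_MAX are what failed over https
    bottle.BaseRequest.MEMFILE_MAX = 30 * 1024 * 1024
    server_names['mysslcherrypy'] = MySSLCherryPy
    run(app, host='0.0.0.0', port=4430, server='mysslcherrypy')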
I am writing a simple OpenERP module for my customer. I use SUDS to connect to the bank and fetch statements.
I wrote the XML request, and it works without problems. I also get a response from the bank that looks OK. The problem is that the response uses a type that is not defined in the WSDL (I wrote to the bank's support that they have a bug).
Traceback (most recent call last):
  File "example3.py", line 112, in <module>
    wiadomosc = client.service.GetStatement(__inject={'msg': xml})
  File "/usr/lib/python2.7/dist-packages/suds/client.py", line 542, in __call__
    return client.invoke(args, kwargs)
  File "/usr/lib/python2.7/dist-packages/suds/client.py", line 773, in invoke
    return self.send(msg)
  File "/usr/lib/python2.7/dist-packages/suds/client.py", line 647, in send
    result = self.succeeded(binding, reply.message)
  File "/usr/lib/python2.7/dist-packages/suds/client.py", line 684, in succeeded
    reply, result = binding.get_reply(self.method, reply)
  File "/usr/lib/python2.7/dist-packages/suds/bindings/binding.py", line 156, in get_reply
    result = self.replycomposite(rtypes, nodes)
  File "/usr/lib/python2.7/dist-packages/suds/bindings/binding.py", line 230, in replycomposite
    sobject = unmarshaller.process(node, resolved)
  File "/usr/lib/python2.7/dist-packages/suds/umx/typed.py", line 66, in process
    return Core.process(self, content)
  File "/usr/lib/python2.7/dist-packages/suds/umx/core.py", line 48, in process
    return self.append(content)
  File "/usr/lib/python2.7/dist-packages/suds/umx/core.py", line 63, in append
    self.append_children(content)
  File "/usr/lib/python2.7/dist-packages/suds/umx/core.py", line 140, in append_children
    cval = self.append(cont)
  File "/usr/lib/python2.7/dist-packages/suds/umx/core.py", line 61, in append
    self.start(content)
  File "/usr/lib/python2.7/dist-packages/suds/umx/typed.py", line 80, in start
    raise TypeNotFound(content.node.qname())
suds.TypeNotFound: Type not found: 'ns29:BkToCstmrStmt'
And the types they provide:
....
ns37:BaselineStatus3Code
ns32:BatchBookingIndicator
ns33:BatchBookingIndicator
ns36:BatchBookingIndicator
ns30:BatchInformation1
ns31:BatchInformation2
ns29:BatchInformation2
ns39:BilateralLimitDetails3
ns27:BkToCstmrCardRptType
ns27:BkTxCdType
ns27:BookgDtType
ns10:BookingDate
ns4:Bool
ns16:Bool
ns2:Bool
ns15:Bool
ns17:BouldingNumber
....
So there is no BkToCstmrStmt. How can I make suds just take the response from the server and not analyse it? Just build the tree?
Thank you.
I still don't know what is wrong with my code, but I solved it this way:
import suds.plugin
from suds.client import Client

class MessageInterceptor(suds.plugin.MessagePlugin):
    def __init__(self, *args, **kwargs):
        self.message = None

    def received(self, context):
        # received xml as a string
        # print "%s bytes received" % len(context.reply)
        self.message = context.reply
        # clean up reply to prevent parsing
        context.reply = ""
        return context

message_interceptor = MessageInterceptor()
client = Client('https://some-adress-to?wsdl', plugins=[message_interceptor])
So now I can call the client method:
xml = Raw("""
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
  <soapenv:Header>
  ...
  </soapenv:Header>
  <soapenv:Body>
    <urn1:GetStatement>
    ...
    </urn1:GetStatement>
  </soapenv:Body>
</soapenv:Envelope>
""")
response = client.service.GetStatement(__inject={'msg': xml})
Now suds thinks it got nothing from the server, but the message we received is stored in:
message_interceptor.message
Now, to get a normal dict object from the message, I do this:
import xmltodict

message_interceptor.message = message_interceptor.message.replace('ns17:', '')
message_interceptor.message = message_interceptor.message.replace('ns40:', '')
message_interceptor.message = message_interceptor.message.replace('soap:', '')
response = xmltodict.parse(message_interceptor.message)['Envelope']['Body']['GetStatementResponse']['Document']
Now I can use response like a normal response from suds.
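If listing every prefix by hand gets tedious, a small variation, assuming the prefixes are always of the form nsNN: or soap:, strips them all in one pass with a regular expression:

import re
import xmltodict

# strip any "nsNN:" or "soap:" namespace prefix in one pass instead of
# calling replace() once per prefix (assumes no other ':' in element names)
message = re.sub(r'\b(?:ns\d+|soap):', '', message_interceptor.message)
response = xmltodict.parse(message)['Envelope']['Body']['GetStatementResponse']['Document']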