iterate the list in python

I have a loop inside a loop and I'm using try/except. Once an error occurs, the try/except works fine, but the loop continues to the next value. What I need is that where the loop breaks, it starts again from the same value instead of continuing to the next one. How can I do that with my code? (In other languages, e.g. C++, it would be i--.)
r = urllib2.urlopen(url)
encoding = r.info().getparam('charset')
html = r.read()
c = td.find('a')['href']
urls = []
urls.append(c)
# collecting urls from the first page, then collecting further info from those urls in the loop below
for abc in urls:
    try:
        r = urllib2.urlopen(abc)
        encoding = r.info().getparam('charset')
        html = r.read()
    except Exception as e:
        last_error = e
        time.sleep(retry_timeout)  # here is the problem: after an error, the loop moves on to the next value
I need a more pythonic way to do this.
Waiting for a reply. Thank you.

Unfortunately, there is no simple way to step an iterator backwards in Python:
http://docs.python.org/2/library/stdtypes.html
This Stack Overflow thread may also interest you:
Making a python iterator go backwards?
For your particular case, I would use a simple while loop:
url = []  # list containing all the urls; it keeps growing because new urls are added every day
i = 0
while i < len(url):
    data = url[i]
    try:
        # get the data from url[i] here
        i += 1
    except Exception as e:
        # show the error and do not increment i,
        # so the next iteration starts from the same position
        print(e)
The problem with handling it this way is that you risk an infinite loop. For example, if a link is broken, r = urllib2.urlopen(abc) will always raise an exception and you will stay at the same position forever. You should consider doing something like this instead:
r = urllib2.urlopen(url)
encoding = r.info().getparam('charset')
html = r.read()
c = td.find('a')['href']
urls = []
urls.append(c)
# collecting urls from the first page, then collecting further info from those urls in the loop below
NUM_TRY = 3
for abc in urls:
    for _ in range(NUM_TRY):
        try:
            r = urllib2.urlopen(abc)
            encoding = r.info().getparam('charset')
            html = r.read()
            # if we reach this line, no error occurred, so there is no need to retry:
            # this is why we break out of the inner loop
            break
        except Exception as e:
            last_error = e
            time.sleep(retry_timeout)  # wait a bit before retrying the same url
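If you also want to react when every attempt has failed, Python's for/else fits naturally here: the else branch runs only when the inner loop finishes without hitting break. A minimal sketch built on the code above (the final print line is just illustrative):
NUM_TRY = 3
for abc in urls:
    for _ in range(NUM_TRY):
        try:
            r = urllib2.urlopen(abc)
            html = r.read()
            break  # success, stop retrying
        except Exception as e:
            last_error = e
            time.sleep(retry_timeout)
    else:
        # reached only when every attempt raised an exception
        print("giving up on %s: %s" % (abc, last_error))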

Related

Why is no data stored in my list in Python?

I have the following code to get some data using Selenium. It goes through a list of ids with a for loop and stores them in my lists (titulos = [] and ids = []). It was working fine until I added the try/except. The code looks like this:
for item in registros:
found = False
ids = []
titulos = []
try:
while true:
#code to request data
try:
error = False
error = #error message
if error is True:
break
except:
continue
except:
continue
try:
found = #if id has data
if found.is_displayed:
titulo = #locator
ids.append(item)
titulos.append(titulo)
except NoSuchElementException:
input.clear()
The first inner try block needs to be indented. Also, error is immediately reassigned to the error-message element, so it will always be truthy regardless of the initial False. Format your code correctly first; the problem will then be much easier to identify.
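A quick standalone illustration of that truthiness point (the message string here is made up):
error = False
error = "element not found"  # immediately reassigned to the message text
print(bool(error))    # True  - any non-empty string is truthy
print(error is True)  # False - an identity check against the bool True never matches a string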

Python Web Scraping error - Reading from JSON - IndexError: list index out of range - how do I ignore it?

I am performing web scraping via Python \ Selenium \ Chrome headless driver. I am reading the results from JSON - here is my code:
CustId = 500
while (CustId <= 510):
    print(CustId)
    # Part 1: Customer REST call:
    urlg = f'https://mywebsite/customerRest/show/?id={CustId}'
    driver.get(urlg)
    soup = BeautifulSoup(driver.page_source, "lxml")
    dict_from_json = json.loads(soup.find("body").text)
    # print(dict_from_json)
    #try:
    CustID = (dict_from_json['customerAddressCreateCommand']['customerId'])
    # Addr = (dict_from_json['customerShowCommand']['customerAddressShowCommandSet'][0]['addressDisplayName'])
    writefunction()
    CustId = CustId + 1
The issue is that sometimes 'addressDisplayName' is present in the result set and sometimes not. When it isn't, the code fails with:
IndexError: list index out of range
Which makes sense, as it doesn't exist. How do I ignore this, so that if 'addressDisplayName' doesn't exist the loop just continues? I've tried using a try but the code still stops executing.
A try..except block should resolve your issue.
CustId = 500
while (CustId <= 510):
    print(CustId)
    # Part 1: Customer REST call:
    urlg = f'https://mywebsite/customerRest/show/?id={CustId}'
    driver.get(urlg)
    soup = BeautifulSoup(driver.page_source, "lxml")
    dict_from_json = json.loads(soup.find("body").text)
    # print(dict_from_json)
    CustID = (dict_from_json['customerAddressCreateCommand']['customerId'])
    try:
        Addr = (dict_from_json['customerShowCommand']['customerAddressShowCommandSet'][0]['addressDisplayName'])
    except:
        Addr = "NaN"
    CustId = CustId + 1
If you get an IndexError (with an index of 0), it means that your list is empty, so the problem is one step earlier in the path (otherwise you'd get a KeyError if 'addressDisplayName' were missing from the dict).
You can check if the list has elements:
if dict_from_json['customerShowCommand']['customerAddressShowCommandSet']:
    # get the data
Otherwise you can indeed use try..except:
try:
    # get the data
except (IndexError, KeyError):
    # handle missing data
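Putting both suggestions together against the asker's JSON path, a rough sketch (command_set is just a local name introduced here, and "NaN" mirrors the fallback used in the other answer):
command_set = dict_from_json['customerShowCommand']['customerAddressShowCommandSet']
if command_set:  # the list has at least one entry
    Addr = command_set[0].get('addressDisplayName', "NaN")
else:
    Addr = "NaN"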

While Not Loop for empty list in python

I am making a request to a server... for whatever reason (beyond my comprehension), the server will give me a status code of 200, but when I use Beautiful Soup to grab a list from the html, nothing is returned. It only happens on the first page of pagination.
To get around a known bug, I have to loop until the list is not empty.
This works, but it's clunky. Is there a better way to do this, given that I have to keep forcing the request until the list contains an item?
# look for attractions
attraction_list = soup.find_all(attrs={'class': 'listing_title'})
while not attraction_list:
    print('the list is empty')
    try:
        t = requests.Session()
        t.cookies.set_policy(BlockAll)
        page2 = t.get(search_url)
        print(page2.status_code)
        soup2 = BeautifulSoup(page2.content, 'html.parser')
        attraction_list = soup2.find_all(attrs={'class': 'listing_title'})
    except:
        pass
I came up with this.
attraction_list = soup.find_all(attrs={'class': 'listing_title'})
while not attraction_list:
    print('the list is empty')
    for q in range(0, 4):
        try:
            t = requests.Session()
            t.cookies.set_policy(BlockAll)
            page2 = t.get(search_url)
            print(page2.status_code)
            soup2 = BeautifulSoup(page2.content, 'html.parser')
            attraction_list = soup2.find_all(attrs={'class': 'listing_title'})
        except Exception as str_error:
            print('FAILED TO FIND ATTRACTIONS')
            time.sleep(3)
            continue
        else:
            break
It'll try up to 4 times to pull the attractions; if attraction_list ends up as a non-empty list, it breaks. Good enough.
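For what it's worth, the same bounded retry can be written with just the for loop, breaking as soon as the list is non-empty. A rough sketch reusing search_url and BlockAll from the question's code:
attraction_list = soup.find_all(attrs={'class': 'listing_title'})
for attempt in range(4):
    if attraction_list:
        break  # got results, stop retrying
    print('the list is empty, retrying')
    try:
        t = requests.Session()
        t.cookies.set_policy(BlockAll)
        page2 = t.get(search_url)
        soup2 = BeautifulSoup(page2.content, 'html.parser')
        attraction_list = soup2.find_all(attrs={'class': 'listing_title'})
    except requests.RequestException:
        time.sleep(3)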

Verify if link starts with http when using lxml and xpath in Python

I am trying to print all the links from multiple pages using the following:
my_page = '//div[@class="product_info"]//table//tr[7]//td[2]//a/@href'
Now, this works for most of the links, but in some cases I get a value like "To follow", which isn't a link.
How can I omit these links? What condition should I use in the following code?
# some more code
EMPTY = ''
my_page = '//div[@class="product_info"]//table//tr[7]//td[2]//a/@href'
for part in dom1.xpath(my_page):
    FINAL_URL = urlparse.urljoin(url, part)
    if part == EMPTY:
        continue
    print part
To filter those links that start with https:// or http://, simply add a condition in your loop:
# some more code
EMPTY = ''
other_links = set()
processed_links = set()
my_page = '//div[@class="product_info"]//table//tr[7]//td[2]//a/@href'
for part in dom1.xpath(my_page):
    if part[:4] == 'http':
        if part not in processed_links:
            processed_links.add(part)
            FINAL_URL = urlparse.urljoin(url, part)
    else:
        other_links.add(part)
I've also added some code so that:
You collect all the other links that are not processed.
If the same (valid) link appears in the page more than once, you only process it once.
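If you want to be strict about the scheme rather than matching any string that merely begins with 'http', str.startswith accepts a tuple of prefixes. A tiny standalone illustration:
parts = ['http://example.com/a', 'https://example.com/b', 'To follow', 'httpfoo']
valid = [p for p in parts if p.startswith(('http://', 'https://'))]
print(valid)  # ['http://example.com/a', 'https://example.com/b']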

Recursive function gives no output

I'm scraping all the URLs of my domain with a recursive function, but it outputs nothing, without any error.
#usr/bin/python
from bs4 import BeautifulSoup
import requests
import tldextract

def scrape(url):
    for links in url:
        main_domain = tldextract.extract(links)
        r = requests.get(links)
        data = r.text
        soup = BeautifulSoup(data)
        for href in soup.find_all('a'):
            href = href.get('href')
            if not href:
                continue
            link_domain = tldextract.extract(href)
            if link_domain.domain == main_domain.domain:
                problem.append(href)
            elif not href == '#' and link_domain.tld == '':
                new = 'http://www.' + main_domain.domain + '.' + main_domain.tld + '/' + href
                problem.append(new)
        return len(problem)
        return scrape(problem)

problem = ["http://xyzdomain.com"]
print(scrape(problem))
When I create a new list, it works, but I don't want to make a list every time for every loop.
You need to structure your code so that it fits the pattern for recursion, which your current code doesn't. You also should not give a variable the same name as a library or the object it came from, e.g. href = href.get('href'), because the original object is no longer reachable once the name is rebound. And as your code currently stands, it will only ever return the len(), because that return is unconditionally reached before return scrape(problem):
def Recursive(Factorable_problem):
    if Factorable_problem is Simplest_Case:
        return AnswerToSimplestCase
    else:
        return Rule_For_Generating_From_Simpler_Case(Recursive(Simpler_Case))
for example:
def Factorial(n):
    """ Recursively Generate Factorials """
    if n < 2:
        return 1
    else:
        return n * Factorial(n - 1)
I've made a non-recursive version of this that appears to get all the links on the same domain.
I tested the code below using the problem list included in the code. Once I'd solved the problems with the recursive version, the next problem was hitting the recursion depth limit, so I rewrote it to run in an iterative fashion; the code and result are below:
from bs4 import BeautifulSoup
import requests
import tldextract

def print_domain_info(d):
    print "Main Domain:{0} \nSub Domain:{1} \nSuffix:{2}".format(d.domain, d.subdomain, d.suffix)

SEARCHED_URLS = []
problem = ["http://Noelkd.neocities.org/", "http://youpi.neocities.org/"]

while problem:
    # Get a link from the stack of links
    link = problem.pop()
    # Check we haven't been to this address before
    if link in SEARCHED_URLS:
        continue
    # We don't want to come back here again after this point
    SEARCHED_URLS.append(link)
    # Try and get the website
    try:
        req = requests.get(link)
    except:
        # If its not working i don't care for it
        print "borked website found: {0}".format(link)
        continue
    # Now we get to this point worth printing something
    print "Trying to parse:{0}".format(link)
    print "Status Code:{0} Thats: {1}".format(req.status_code, "A-OK" if req.status_code == 200 else "SOMTHINGS UP")
    # Get the domain info
    dInfo = tldextract.extract(link)
    print_domain_info(dInfo)
    # I like utf-8
    data = req.text.encode("utf-8")
    print "Lenght Of Data Retrived:{0}".format(len(data))  # More info
    soup = BeautifulSoup(data)  # This was here before so i left it.
    print "Found {0} link{1}".format(len(soup.find_all('a')), "s" if len(soup.find_all('a')) > 1 else "")
    FOUND_THIS_ITERATION = []  # Getting the same links over and over was boring
    found_links = [x for x in soup.find_all('a') if x.get('href') not in SEARCHED_URLS]  # Find me all the links i don't got
    for href in found_links:
        href = href.get('href')  # You wrote this seems to work well
        if not href:
            continue
        link_domain = tldextract.extract(href)
        if link_domain.domain == dInfo.domain:  # JUST FINDING STUFF ON SAME DOMAIN RIGHT?!
            if href not in FOUND_THIS_ITERATION:  # I'ma check you out next time
                print "Check out this link: {0}".format(href)
                print_domain_info(link_domain)
                FOUND_THIS_ITERATION.append(href)
                problem.append(href)
            else:  # I got you already
                print "DUPE LINK!"
        else:
            print "Not on same domain moving on"
    # Count down
    print "We have {0} more sites to search".format(len(problem))
    if problem:
        continue
    else:
        print "Its been fun"
        print "Lets see the URLS we've visited:"
        for url in SEARCHED_URLS:
            print url
Which prints, after a lot of other logging, loads of neocities websites!
What's happening is that the script pops a value off the list of websites yet to visit, then gets all the links on that page which are on the same domain. If those links point to pages we haven't visited, we add them to the list of links to be visited. After that we pop the next page and do the same thing again, until there are no pages left to visit.
I think this is what you're looking for; get back to us in the comments if this doesn't work the way you want, or leave a comment if anyone can improve it.
