For loops with user input in Python

Hello, I'm learning how to parse HTML with BeautifulSoup. I would like to know if it is possible to use user input in a for loop, as:
for (user input) in A
A is a list of links, so the user can choose which link to go to, using an input.
Then I use urllib to open that link and repeat the process.

You can use something like this:
import urllib.request
from bs4 import BeautifulSoup

choice = ''
for url in urls:
    print('Go to {}?'.format(url))
    decision = input('Y/n ')
    if decision == 'Y':
        choice = url
        break

if choice:
    r = urllib.request.urlopen(choice).read()
    soup = BeautifulSoup(r, 'lxml')
    # do something else

It wasn't exactly clear to me whether you really wanted to "open" the link in a browser, so I included some code to do that. Is this maybe what you meant by "digit a position"?
tl;dr
print("Which URL would you like to open?"
      " (Please select an option between 1-{})".format(len(A)))
for index, link in enumerate(A):
    print(index + 1, link)
Full:
from bs4 import BeautifulSoup
import requests
import webbrowser

A = [
    'https://www.google.com',
    'https://www.stackoverflow.com',
    'https://www.xkcd.com',
]

print("Which URL would you like to open?"
      " (Please select an option between 1-{})".format(len(A)))
for index, link in enumerate(A):
    print(index + 1, link)

_input = input()
try:
    option_index = int(_input) - 1
except ValueError:
    print("{} is not a valid choice.".format(_input))
    raise
try:
    selection = A[option_index]
except IndexError:
    print("{} is not a valid choice.".format(_input))
    raise

webbrowser.open(selection)
response = requests.get(selection)
html_string = response.content
# Do parsing...

Thanks for your help. I arrived at a solution for this.
I created two variables: count = int(input()) and position = int(input())
I used the count in a for loop, for _ in range(count), so the process repeats as many times as the user wants (for this assignment, 4).
The position (predefined as 3 for this assignment) I use as a list index into a list of all the URLs. So to open the URL in position 3 I have:
url = links[position - 1] (the -1 is because the user inputs 3, but list indices start at 0 (0, 1, 2, ...))
And then I can use urllib.request.urlopen(url).read()
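A minimal sketch of that approach, with hypothetical URLs and the two inputs hard-coded (4 and 3, as in the assignment) instead of read from input():

```python
links = [  # stand-in URLs; the real list would come from the parsed page
    "http://example.com/a",
    "http://example.com/b",
    "http://example.com/c",
]
count = 4      # would be int(input()) in the real program
position = 3   # would be int(input()) in the real program

for _ in range(count):
    url = links[position - 1]  # -1 because list indices start at 0
    # html = urllib.request.urlopen(url).read()  # then parse with BeautifulSoup
    print(url)
```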

Related

How to search multiple keywords in a web page? This only inputs one keyword

import mechanize
from bs4 import BeautifulSoup
import time
import smtplib

# True by default
while True:
    url = "https://www.google.com"
    browser = mechanize.Browser()
    browser.open(url)
    response = browser.response().read()
    soup = BeautifulSoup(response, "lxml")
    count = 1
    if str(soup).find("English") == -1:
        # wait 60 seconds (change the time (in seconds) as you wish),
        print('Checking - ' + str(count) + 'th Time')
        time.sleep(60)
        count += 1
        # continue with the script
        continue
There are a couple of problems here:
BeautifulSoup provides a get_text() method to extract the text, so you do not need to convert the soup to a string.
The string find() method returns -1 when the value is not found. Are you sure that is what you want?
Why do you use time.sleep()? What is the purpose of pausing the program?
You did not create a loop, which makes count redundant, and continue will raise an error outside a loop.
If you want the number of occurrences of a string, you can use regex's findall() and then take its length, like: len(re.findall("English", soup_text)).
If you want to find multiple keywords, you can create a list of the keywords and then loop through them, like:
for k in ["a", "b", "c"]:
    print(f'{k}: {len(re.findall(k, soup.get_text()))}')
Full example:
from bs4 import BeautifulSoup
import requests  # simple HTTP request
import re  # regex

url = "https://www.google.com"
doc = requests.get(url)
soup = BeautifulSoup(doc.text, "lxml")
soup_text = soup.get_text()

keywords = ["Google", "English", "a"]
for k in keywords:
    print(f'{k}: {len(re.findall(k, soup_text))}')
You are strongly encouraged to study Python thoroughly:
Python: w3school tutorial
BeautifulSoup: Documentation
Regex: w3schools tutorial or RegExr

How to get all emails from a page individually

I am trying to get all emails from a specific page and separate them into an individual variable or even better a dictionary. This is some code.
import requests
import re
import json
from bs4 import BeautifulSoup

page = "http://www.example.net"
info = requests.get(page)
if info.status_code == 200:
    print("Page accessed")
else:
    print("Error accessing page")
code = info.content
soup = BeautifulSoup(code, 'lxml')
allEmails = soup.find_all("a", href=re.compile(r"^mailto:"))
print(allEmails)
sep = ","
allEmailsStr = str(allEmails)
print(type(allEmails))
print(type(allEmailsStr))
j = allEmailsStr.split(sep, 1)[0]
print(j)
Excuse the poor variable names; I put this together quickly so it would run by itself. The output from the example website would be something like
[k, kolyma, location, balkans]
So if I ran the program it would return only
[k
But if I wanted it to return every email on there individually, how would I do that?
To get just the email string, you can try:
emails = []
for email_link in allEmails:
    emails.append(email_link.get("href").replace('mailto:', ''))
print(emails)
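If a dictionary is the goal, the same loop can map each link's text to its address. A small sketch; the sample HTML below is made up for illustration and stands in for the fetched page:

```python
import re
from bs4 import BeautifulSoup

# Hypothetical page content with two mailto links
html = '''
<a href="mailto:k@example.net">k</a>
<a href="mailto:kolyma@example.net">kolyma</a>
'''
soup = BeautifulSoup(html, "html.parser")

emails = {}
for a in soup.find_all("a", href=re.compile(r"^mailto:")):
    # key: visible link text, value: the address with the mailto: prefix stripped
    emails[a.get_text()] = a["href"].replace("mailto:", "")
print(emails)
```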
Based on your expected output, you can use the unwrap function of BeautifulSoup
allEmails = soup.find_all("a", href=re.compile(r"^mailto:"))
for Email in allEmails:
    print(Email.unwrap())  # This will print the whole element along with the tag
    # k

Web crawler does not open all links in a page

I'm trying to build a web crawler using BeautifulSoup and urllib. The crawler is working, but it does not open all the pages on a site. It opens the first link and goes to that link, opens the first link of that page, and so on.
Here's my code:
from bs4 import BeautifulSoup
from urllib.request import urlopen
from urllib.parse import urljoin
import json, sys

sys.setrecursionlimit(10000)

url = input('enter url ')
d = {}
d_2 = {}
l = []
url_base = url
count = 0

def f(url):
    global count
    global url_base
    if count <= 100:
        print("count: " + str(count))
        print('now looking into: ' + url + '\n')
        count += 1
        l.append(url)
        html = urlopen(url).read()
        soup = BeautifulSoup(html, "html.parser")
        d[count] = soup
        tags = soup('a')
        for tag in tags:
            meow = tag.get('href', None)
            if urljoin(url, meow) in l:
                print("Skipping this one: " + urljoin(url, meow))
            elif "mailto" in urljoin(url, meow):
                print("Skipping this one with a mailer")
            elif meow == None:
                print("skipping 'None'")
            elif meow.startswith('http') == False:
                f(urljoin(url, meow))
            else:
                f(meow)
    else:
        return

f(url)
print('\n\n\n\n\n')
print('Scraping Completed')
print('\n\n\n\n\n')
The reason you're seeing this behavior is where the code recursively calls your function. As soon as the code finds a valid link, f gets called again, preventing the rest of the for loop from running until that call returns.
What you're doing is a depth-first search, but the internet is very deep. You want to do a breadth-first search instead.
Probably the easiest way to modify your code to do that is to keep a global list of links to follow. Have the for loop append all the scraped links to the end of this list, and then, outside of the for loop, remove the first element of the list and follow that link.
You may have to change your logic slightly for your max count.
If count reaches 100, no further links will be opened, so I think you should decrease count by one after leaving the for loop. If you do this, count will be something like the current link depth (and 100 the maximum link depth).
If the variable count should instead refer to the number of opened links, then you might want to control the link depth in another way.
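A minimal sketch of that breadth-first idea. To keep it self-contained, a hand-made link graph stands in for the urlopen + BeautifulSoup fetching; the queue discipline is the point:

```python
from collections import deque

# Hypothetical link graph: page -> links found on that page.
# In the real crawler this lookup would be the fetch-and-parse code.
graph = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": ["d", "e"],
    "d": [],
    "e": [],
}

def crawl(start, max_count=100):
    to_visit = deque([start])  # the global list of links to follow
    visited = []
    while to_visit and len(visited) < max_count:
        url = to_visit.popleft()          # remove the first element: breadth first
        if url in visited:
            continue
        visited.append(url)
        to_visit.extend(graph.get(url, []))  # append scraped links to the end
    return visited

# All pages one click away from "a" are visited before anything two clicks away
print(crawl("a"))
```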

Finding information on a website without an external module

I am creating a program in Python where you search up a tv show/movie, and from IMDb, it gives you:
The title, year, rating, age rating, and synopsis of the movie.
I want to use no external modules at all, only the ones that come with Python 3.4.
I know I will have to use urllib, but I do not know where to go from there.
How would I do this?
This is an example taken from here:
import json
from urllib.parse import quote
from urllib.request import urlopen

def search(title):
    API_URL = "http://www.omdbapi.com/?r=json&s=%s"
    title = title.encode("utf-8")
    url = API_URL % quote(title)
    data = urlopen(url).read().decode("utf-8")
    data = json.loads(data)
    if data.get("Response") == "False":
        print(data.get("Error", "Unknown error"))
    return data.get("Search", [])
Then you can do:
>>> search("Idiocracy")
[{'Year': '2006', 'imdbID': 'tt0387808', 'Title': 'Idiocracy'}]
It's maybe too complex, but:
I look at the webpage source, find where the info I want is, and then extract it.
import urllib.request

def search(title):
    html = urllib.request.urlopen("http://www.imdb.com/find?q=" + title).read().decode("utf-8")
    f = html.find("<td class=\"result_text\"> <a href=\"", 0) + 34
    openlink = ""
    while html[f] != "\"":
        openlink += html[f]
        f += 1
    html = urllib.request.urlopen("http://www.imdb.com" + openlink).read().decode("utf-8")
    f = html.find("<meta property='og:title' content=\"", 0) + 35
    titleyear = ""
    while html[f] != "\"":
        titleyear += html[f]
        f += 1
    f = html.find("title=\"Users rated this ", 0) + 24
    rating = ""
    while html[f] != "/":
        rating += html[f]
        f += 1
    f = html.find("<meta name=\"description\" content=\"", 0) + 34
    shortdescription = ""
    while html[f] != "\"":
        shortdescription += html[f]
        f += 1
    print(titleyear, rating, shortdescription)
    return (titleyear, rating, shortdescription)

search("friends")
The number added to f has to be just right: you count the length of the string you are searching for, because find() returns the position of its first character.
It looks bad; is there a simpler way to do it?
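Since the question rules out external modules, the standard library's html.parser is a less fragile alternative to counting characters with find(). A minimal sketch that pulls the og:title meta tag out of a page; the HTML string here is a stand-in for what urllib.request would return:

```python
from html.parser import HTMLParser

class MetaGrabber(HTMLParser):
    """Collects the content of the og:title meta tag while parsing."""
    def __init__(self):
        super().__init__()
        self.title = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("property") == "og:title":
            self.title = attrs.get("content")

# Hypothetical page content standing in for the fetched IMDb HTML
html = '<html><head><meta property="og:title" content="Friends (TV Series 1994-2004)"></head></html>'
parser = MetaGrabber()
parser.feed(html)
print(parser.title)
```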

Recursive function gives no output

I'm scraping all the URLs of my domain with a recursive function.
But it outputs nothing, without any error.
#!/usr/bin/python
from bs4 import BeautifulSoup
import requests
import tldextract

def scrape(url):
    for links in url:
        main_domain = tldextract.extract(links)
        r = requests.get(links)
        data = r.text
        soup = BeautifulSoup(data)
        for href in soup.find_all('a'):
            href = href.get('href')
            if not href:
                continue
            link_domain = tldextract.extract(href)
            if link_domain.domain == main_domain.domain:
                problem.append(href)
            elif not href == '#' and link_domain.tld == '':
                new = 'http://www.' + main_domain.domain + '.' + main_domain.tld + '/' + href
                problem.append(new)
    return len(problem)
    return scrape(problem)

problem = ["http://xyzdomain.com"]
print(scrape(problem))
When I create a new list, it works, but I don't want to make a list every time for every loop.
You need to structure your code so that it meets the pattern for recursion, which your current code doesn't. You also should not rebind a variable to something derived from itself, e.g. href = href.get('href'), because the original object is replaced and can no longer be used. As it currently stands, your code will only ever return the len(), since that return is unconditionally reached before return scrape(problem). The pattern is:
def Recursive(Factorable_problem):
    if Factorable_problem is Simplest_Case:
        return AnswerToSimplestCase
    else:
        return Rule_For_Generating_From_Simpler_Case(Recursive(Simpler_Case))
for example:
def Factorial(n):
    """ Recursively generate factorials """
    if n < 2:
        return 1
    else:
        return n * Factorial(n - 1)
Hello, I've made a non-recursive version of this that appears to get all the links on the same domain.
I've tested the code below using the problem list included in the code. Once I'd solved the problems with the recursive version, the next issue was hitting the recursion depth limit, so I rewrote it to run in an iterative fashion; the code and result are below:
from bs4 import BeautifulSoup
import requests
import tldextract

def print_domain_info(d):
    print("Main Domain:{0} \nSub Domain:{1} \nSuffix:{2}".format(d.domain, d.subdomain, d.suffix))

SEARCHED_URLS = []
problem = ["http://Noelkd.neocities.org/", "http://youpi.neocities.org/"]

while problem:
    # Get a link from the stack of links
    link = problem.pop()
    # Check we haven't been to this address before
    if link in SEARCHED_URLS:
        continue
    # We don't want to come back here again after this point
    SEARCHED_URLS.append(link)
    # Try and get the website
    try:
        req = requests.get(link)
    except:
        # If it's not working I don't care for it
        print("borked website found: {0}".format(link))
        continue
    # Now we get to this point it's worth printing something
    print("Trying to parse:{0}".format(link))
    print("Status Code:{0} Thats: {1}".format(req.status_code, "A-OK" if req.status_code == 200 else "SOMETHING'S UP"))
    # Get the domain info
    dInfo = tldextract.extract(link)
    print_domain_info(dInfo)
    # I like utf-8
    data = req.text.encode("utf-8")
    print("Length Of Data Retrieved:{0}".format(len(data)))  # More info
    soup = BeautifulSoup(data)  # This was here before so I left it.
    print("Found {0} link{1}".format(len(soup.find_all('a')), "s" if len(soup.find_all('a')) > 1 else ""))
    FOUND_THIS_ITERATION = []  # Getting the same links over and over was boring
    found_links = [x for x in soup.find_all('a') if x.get('href') not in SEARCHED_URLS]  # Find me all the links I don't have yet
    for href in found_links:
        href = href.get('href')  # You wrote this; seems to work well
        if not href:
            continue
        link_domain = tldextract.extract(href)
        if link_domain.domain == dInfo.domain:  # JUST FINDING STUFF ON SAME DOMAIN RIGHT?!
            if href not in FOUND_THIS_ITERATION:  # I'ma check you out next time
                print("Check out this link: {0}".format(href))
                print_domain_info(link_domain)
                FOUND_THIS_ITERATION.append(href)
                problem.append(href)
            else:  # I got you already
                print("DUPE LINK!")
        else:
            print("Not on same domain, moving on")
    # Count down
    print("We have {0} more sites to search".format(len(problem)))
    if problem:
        continue
    else:
        print("It's been fun")

print("Let's see the URLs we've visited:")
for url in SEARCHED_URLS:
    print(url)
Which prints, after a lot of other logging, loads of neocities websites!
What's happening is that the script pops a value off the list of websites yet to visit, then gets all the links on that page which are on the same domain. If those links lead to pages we haven't visited, we add them to the list of links to be visited. After that, we pop the next page and do the same thing again until there are no pages left to visit.
I think this is what you're looking for; get back to us in the comments if this doesn't work the way you want, or leave a comment if anyone can improve it.
