Selenium, Firefox, Error checking if an ID is on a page? - python

Sorry if this is kind of vague; I don't know how to explain it well, but basically I am trying to write a function that checks whether an ID is on a page, and I don't know how to do it. Here is what I've attempted so far:
def checkoutpage():
    driver1.find_element_by_id('test')

try:
    if checkoutpage == True:
        print("Working")
    else:
        print("Not working")
except:
    print("ERROR")
It prints Not working no matter whether the ID is on the page or not. Help is appreciated.
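For reference, find_element_by_id raises NoSuchElementException when the id is absent, and checkoutpage == True compares the function object itself instead of calling it. A minimal sketch of the usual pattern, reusing driver1 from the question:

from selenium.common.exceptions import NoSuchElementException

def checkoutpage():
    try:
        driver1.find_element_by_id('test')  # driver1 as defined in your script
        return True
    except NoSuchElementException:
        return False  # the id is not on the page

if checkoutpage():
    print("Working")
else:
    print("Not working")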

Related

Python Selenium 2captcha bypass funcaptcha

I would like to bypass the FunCaptchas on the Outlook login page. At the moment the captcha gets solved and I receive the code, but I don't know how to change the HTML with Python to get any further.
I hope you know what I mean and can help me. Thanks :)
This is the output:
result: {'captchaId': '72351812278', 'code': '81263a01d67793f12.2257185204|r=ap-southeast-1|metabgclr=%23ffffff|maintxtclr=%231B1B1B|mainbgclr=%23ffffff|guitextcolor=%23747474|metaiconclr=%23757575|meta_height=325|meta=7|pk=B7D8911C-5CC8-A9A3-35B0-554ACEE604DA|at=40|ag=101|cdn_url=https%3A%2F%2Fclient-api.arkoselabs.com%2Fcdn%2Ffc|lurl=https%3A%2F%2Faudio-ap-southeast-1.arkoselabs.com|surl=https%3A%2F%2Fclient-api.arkoselabs.com|smurl=https%3A%2F%2Fclient-api.arkoselabs.com%2Fcdn%2Ffc%2Fassets%2Fstyle-manager'}
def solveCaptcha():
    time.sleep(10)
    sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))
    api_key = f'{API_KEY}'
    solver = TwoCaptcha(api_key)
    try:
        result = solver.funcaptcha(sitekey=f'{SITE_KEY}',
                                   url=f'{outlookurl}',
                                   surl='https://client-api.arkoselabs.com')
    except Exception as e:
        sys.exit(e)
    else:
        sys.exit('result: ' + str(result))
Like I said, I want to bypass the FunCaptchas on the Outlook signup page.
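A hedged sketch of the usual next step, assuming Selenium is driving the page: inject the token 2captcha returned into the page's hidden FunCaptcha field and let the page verify it. The element id ('fc-token') and the callback name below are assumptions; inspect the signup page in DevTools to confirm the real ones.

# Assumption: `result` is the dict shown above and `driver` is a Selenium WebDriver.
token = result['code']

# Arkose/FunCaptcha pages typically carry a hidden token input; the id here
# is an assumption -- confirm the real id/name in DevTools.
driver.execute_script(
    "document.getElementById('fc-token').value = arguments[0];", token
)

# Some pages only proceed after a verification callback fires; the function
# name is likewise an assumption to verify on the page:
# driver.execute_script("window.arkoseCallback(arguments[0]);", token)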

While loop not working on element data grab

def Item_Finder():
    item_finder = driver.find_element(By.XPATH, "XPATH HERE").text
    item_finder = re.sub('[%+]', '', item_finder)
    item_finder = float(item_finder)
    return item_finder

while Item_Finder() <= 5:
    driver.find_element(By.XPATH, "XPATH HERE").click()
else:
    Item_Finder()
    print("No item found. Retrying...")
Can't seem to get this while loop working; it only runs once. The code just looks at the item markup on people's listings. First post on here as well, not too sure how to get the indentation to show, but it is there. Any help appreciated; I only recently started learning.
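For what it's worth, Python's while ... else runs the else block once when the loop condition is false, so nothing in it ever retries. A sketch of an explicit retry loop with the same placeholder XPaths (the one-second pause is an assumption):

import time

while True:
    if Item_Finder() <= 5:
        driver.find_element(By.XPATH, "XPATH HERE").click()
        break  # found an item in range, click it and stop
    print("No item found. Retrying...")
    time.sleep(1)  # assumed pause so the listings can change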

Anti-Captcha not working, validation happening before callback - Selenium

So, I'm trying to login in this website with Selenium:
https://carrinho.pontofrio.com.br/Checkout?ReturnUrl=%2fSite%2fMeusPedidos.aspx#login
And I'm using anti-captcha, here's my login code:
my_driver = webdriver.Chrome(executable_path=chrome_path)
wait = WebDriverWait(my_driver, 20)

# Perform the login
def login():
    my_driver.get(url)
    time.sleep(4)
    my_driver.find_element_by_id('Email').send_keys(usuario)
    my_driver.find_element_by_id('Senha').send_keys(senha)
    my_driver.find_element_by_id('Senha').send_keys(Keys.ENTER)
    time.sleep(1)
    solver = recaptchaV2Proxyless()
    solver.set_verbose(1)
    solver.set_key("")
    solver.set_website_url('https://carrinho.pontofrio.com.br/Checkout?ReturnUrl=%2fSite%2fMeusPedidos.aspx#login')
    solver.set_website_key("6LfeX6kZAAAAAIhuSyQ1XRwZdOS26O-r4UJbW3y1")
    # solver.set_data_s('"data-s" token from Google Search results "protection"')
    g_response = solver.solve_and_return_solution()
    if g_response != 0:
        print("g-response: " + g_response)
    else:
        print("task finished with error " + solver.error_code)
    time.sleep(1)
    my_driver.execute_script('document.getElementById("g-recaptcha-response").innerHTML = "%s"' % g_response)
    time.sleep(1)
    my_driver.execute_script(f"callbackCaptcha('{g_response}');")

login()
Website Key is correct, but the website is not accepting my Captcha responses.
So I've tried to check how the login process happens with the developer tools, and it goes like this:
The callback function fires after a function I can't identify, which calls:
https://www.google.com/recaptcha/api2/userverify?k=6LfeX6kZAAAAAIhuSyQ1XRwZdOS26O-r4UJbW3y1
(screenshot: the POST request fired before the callback)
And I'm not able to find a way to simulate this POST request, since Selenium doesn't make POST requests.
Is there any way I can listen to all the JavaScript events (the code being called) while the page runs?
Any help would be much appreciated, thanks!
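One way to watch those requests is Chrome's DevTools performance log; a sketch, assuming Selenium 4 and Chrome (Selenium 3 passes the same goog:loggingPrefs via desired_capabilities instead):

import json
from selenium import webdriver

# Enable the DevTools performance log so every network request the page
# makes (including the userverify POST) can be inspected afterwards.
options = webdriver.ChromeOptions()
options.set_capability("goog:loggingPrefs", {"performance": "ALL"})
driver = webdriver.Chrome(options=options)

driver.get("https://carrinho.pontofrio.com.br/Checkout?ReturnUrl=%2fSite%2fMeusPedidos.aspx#login")

for entry in driver.get_log("performance"):
    message = json.loads(entry["message"])["message"]
    if message["method"] == "Network.requestWillBeSent":
        request = message["params"]["request"]
        if request["method"] == "POST":
            print(request["method"], request["url"])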
I was able to solve the validation problem with the following option:
options.add_argument('--disable-blink-features=AutomationControlled')
But the Anti-Captcha is still giving me a wrong answer :(
I solved the problem
I've finally managed to resolve this myself. In case anyone else is struggling with a similar issue, here was my solution:
1. Open the console and execute the following command: ___grecaptcha_cfg.clients
2. Find the path which has the callback function; in my case it's ___grecaptcha_cfg.clients[0].O.O
3. Use the following code: driver.execute_script(f"___grecaptcha_cfg.clients[0].O.O.callback('{new_token}')") (remember to change the path accordingly)
This article will help you find the ___grecaptcha_cfg.clients of your reCAPTCHA site.
driver.execute_script('document.getElementById("g-recaptcha-response").innerHTML = "{}";'.format(g_response))
driver.execute_script(f"___grecaptcha_cfg.clients[0].O.O.callback('{g_response}')")
So yeah, I've discovered that there were 2 problems, first one being the validation, which I solved with this option:
options.add_argument('--disable-blink-features=AutomationControlled')
And the other one was that the website was generating a new token when I clicked the login button, so I decided to solve the captcha before attempting to log in and then use callbackCaptcha to trigger the login.
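One refinement, since the minified .O.O path can differ between sessions: a hedged sketch that searches ___grecaptcha_cfg.clients at runtime for the object exposing a callback function, instead of hard-coding the path.

# Hedged sketch: walk the client object (depth-limited, errors swallowed)
# until a member with a `callback` function turns up, then fire it with the
# solved token. The structure is minified and may change; verify in DevTools.
find_and_fire = """
var client = ___grecaptcha_cfg.clients[0];
function find(obj, depth) {
    if (!obj || typeof obj !== 'object' || depth > 3) return null;
    if (typeof obj.callback === 'function') return obj;
    for (var key in obj) {
        var hit = null;
        try { hit = find(obj[key], depth + 1); } catch (e) {}
        if (hit) return hit;
    }
    return null;
}
var holder = find(client, 0);
if (holder) holder.callback(arguments[0]);
"""
driver.execute_script(find_and_fire, g_response)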

Extracting like count instagram-post using Python

I am trying to get the like count from someone's latest posts (and also get the Instagram link for that post) using Python but I can't seem to figure it out. I have tried every single method that is used online but none of them seem to work anymore.
My idea was to let Python open a browser tab and go to the www.instagram.com/p/ link but that also doesn't seem to work anymore.
I have no code to upload because it's all a big mess with different strategies so I just deleted it and decided to start over.
Just grab an HTTPS proxy list and here you go; I wrote this a while back and it can easily be edited to your needs.
import requests, sys, time
from random import choice

# Two arguments are required (argv[0] is the script itself), so check for < 3
if len(sys.argv) < 3:
    sys.exit(f"Usage: {sys.argv[0]} <Post Link> <Proxy List>")

Comments = 0  # despite the name, this tracks how many likes have been printed
ProxList = []
ReadProx = open(sys.argv[2], "r").readlines()
for line in ReadProx:
    ProxList.append(line.strip('\n'))

while True:
    try:
        # requests keys proxies by scheme ('https'), not 'https://'
        g = requests.get(sys.argv[1], proxies={'https': 'https://' + choice(ProxList)})
        print(g.json())
        time.sleep(5)
        # the like count is already an int, so compare it directly (no len())
        count = g.json()['data']['shortcode_media']['edge_liked_by']['count']
        if count > Comments:
            print(f"[+] Like | {g.json()['data']['shortcode_media']['edge_liked_by']['edges'][Comments]['node']['username']}")
            Comments += 1
        time.sleep(1.5)
    except KeyboardInterrupt:
        sys.exit("Done")
    except Exception as e:
        print(e)

How to bypass missing link and continue to scrape good data?

I am using Python 2 and Ubuntu 14.04.3.
I am scraping a web page with multiple links to associated data.
Some associated links are missing so I need a way to bypass the missing links and continue scraping.
Web page 1
    part description 1 with associated link
    part description 2 w/o associated link
    more part descriptions with and w/o associated links
Web page n+
    more part descriptions
I tried:
try:
    # Do some things.
    # Error caused by missing link.
except IndexError as e:
    print "I/O error({0}): {1}".format(e.errno, e.strerror)
    break  # to go on to next link.
# Did not work because program stopped to report error!
Since the link is missing from the web page, I cannot use an "if missing link" check.
Thanks again for your help!!!
I corrected my faulty except clause by following the Python 2 documentation. The corrected except skips the missing link on the faulty page and continues scraping data.
The except correction:
except:
    # catch AttributeError: 'exceptions.IndexError' object has no attribute 'errno'
    e = sys.exc_info()[0]
    print "Error: %s" % e
    break
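A bare except also swallows unrelated bugs; a minimal Python 2 sketch that catches only what a missing link raises and moves on (the parts list and the lookup are hypothetical stand-ins for the real scraping code):

for part in parts:  # hypothetical: one scraped row per part description
    try:
        link = part.find('a')['href']  # raises when the link is missing
    except (IndexError, TypeError, AttributeError) as e:
        print "Skipping part without a link: %s" % e
        continue  # go on to the next part instead of stopping
    # ... scrape the associated data behind `link` ...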
I will look into the answer(s) posted to my question.
Thanks again for your help!
Perhaps you are looking for something like this:
import urllib

def get_content_safe(url):
    try:
        # Python 2's urllib has urlopen, not open
        contents = urllib.urlopen(url)
        return contents
    except IOError, ex:
        # Report ex your way
        return None

def scrape():
    # ....
    content = get_content_safe(url)
    if content is None:
        pass  # or continue or whatever
    # ....
Long story short, just like Basilevs said, when you catch the exception your code will not break and will keep executing.
