Loading image intercepting element click. Fluent wait not working - Python

This is my first question, so apologies in advance.
I am trying to automate loading an online form, which you have to do one case at a time.
Some inputs request information from the server and freeze the webpage with a loading spinner GIF; afterwards some fields are autocompleted.
I am having issues at the end of the first entry, where I need to ADD the current info in order to reset everything and start the process again.
An ElementClickInterceptedException is raised in spite of the fluent wait. I have tried several approaches, using XPath and a script executor, but it throws the same error.
Any thoughts?
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait, Select
from selenium.webdriver.support import expected_conditions as EC

# send IDs 1 and 2
driver.find_element(By.ID, 'bodyContent_PRD_EFE_txtNroIDPrestador').send_keys(ID)
driver.find_element(By.ID, 'bodyContent_PRD_PRE_txtNroIDPrestador').send_keys(ID2)

for i in prestaciones.index:  # prestaciones is a pd.DataFrame where I store the data to fill the form
    afi = WebDriverWait(driver, 5).until(
        EC.element_to_be_clickable((By.ID, 'bodyContent_PID_txtNroIDAfiliado')))  # store the input element
    if afi.get_attribute('value') == '':  # check if it's empty and fill it
        afi.send_keys(str(prestaciones['n af'][i]))
    else:
        driver.find_element(By.ID, 'bodyContent_PID_btnNroIDAfiliado_Clean').click()
        afi.send_keys(str(prestaciones['n af'][i]))
    # select something from a drop-down list
    prog_int = Select(driver.find_element(By.ID, 'bodyContent_PV1Internacion_selTipoAdmision'))
    prog_int.select_by_value('P')
    # fill another input
    diag = driver.find_element(By.ID, 'bodyContent_DG1_txtNroIDDiagnostico').get_attribute('value')
    if diag == '':
        driver.find_element(By.ID, 'bodyContent_DG1_txtNroIDDiagnostico').send_keys('I10')
    # select more inputs
    tip_prac = Select(driver.find_element(By.ID, 'bodyContent_PRE_selSistemaCodificacion'))
    tip_prac.select_by_value('1')
    # codigo de prestacion
    prest = driver.find_element(By.ID, 'bodyContent_PRE_txtCodigoPrestacion')
    if prest.get_attribute('value') == '':  # deal with leftover data in the input for the next round of loading
        prest.send_keys(str(prestaciones['codigo'][i]))
    else:
        prest.clear()
        prest.send_keys(str(prestaciones['codigo'][i]))
    # set the amount of items
    cant = driver.find_element(By.ID, 'bodyContent_PRE_txtCantidadTratamientoSolicitados').get_attribute('value')
    if cant == '':
        driver.find_element(By.ID, 'bodyContent_PRE_txtCantidadTratamientoSolicitados').send_keys('1')
    # HERE IS THE DEAL: some fields make a loading GIF appear, and it catches the click.
    # I have tried several approaches; each throws that exception, or a timeout with execute_script.
    aceptar = WebDriverWait(driver, 10).until(
        EC.element_to_be_clickable((By.ID, 'bodyContent_PRE_btnTablaPrestaciones_Add')))
    aceptar.click()
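One approach that often resolves this kind of interception is to wait for the loading overlay itself to become invisible before clicking, instead of (or in addition to) waiting for the button to be clickable. A minimal sketch, assuming the spinner's locator is known; the loadingSpinner id here is hypothetical and must be replaced with whatever the browser's inspector shows while the GIF is on screen:

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Hypothetical locator for the loading overlay; inspect the page to find the real one.
SPINNER = (By.ID, 'loadingSpinner')

# Wait until the overlay is gone, then until the button is clickable.
WebDriverWait(driver, 30).until(EC.invisibility_of_element_located(SPINNER))
aceptar = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, 'bodyContent_PRE_btnTablaPrestaciones_Add')))
aceptar.click()

Note that element_to_be_clickable only checks that the target is visible and enabled; it does not detect another element painted on top of it, which is why the wait alone does not prevent ElementClickInterceptedException.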

Related

How to automate checking whether a Twitter profile has the option of sending a message

I want to check, for a list of certain profiles, whether each profile has the message function or not.
Is this possible with Selenium, with the help of Python and requests?
You can use try/except to find that element and set a value based on the result:
try:
    button = driver.find_element(...)
    send_button = 1
except NoSuchElementException:
    send_button = 0
You can expand this structure into a loop and store the values in a nicer way, like an array or a column; a sketch follows below.
Be sure to verify carefully that the except branch triggers only when there really is no button, and not, for example, because of a loading error.
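A sketch of that loop, assuming the profiles can be visited as twitter.com/<handle> and using a hypothetical CSS selector for the message button (take the real selector from the profile page with the inspector):

from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

results = {}
for handle in handles:  # handles: the list of profile names to check
    driver.get('https://twitter.com/' + handle)
    try:
        # Hypothetical selector for the 'Message' button; replace with the real one.
        driver.find_element(By.CSS_SELECTOR, '[data-testid="sendDMFromProfile"]')
        results[handle] = 1
    except NoSuchElementException:
        results[handle] = 0

Storing the result per handle in a dict makes it easy to turn into a DataFrame column afterwards.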

If XPath doesn't exist, play mp3 - Python

I want to set up an alarm, so when a website changes, it will play a song on my computer.
I'm not sure how to set up the conditional statement for this. Below is what I've written, but obviously it isn't correct. If the element on the website exists, I want the script to end, but if it doesn't exist (the website has changed), I want the mp3 to play as an alarm.
if driver.find_element(By.XPATH, "/html/body/div[1]/div[2]/h1"):
    pass  # element still exists: do nothing
else:
    webbrowser.open(r"C:\Users\Julian Layton\Desktop\Andalusia\I Remember.mp3")
I also want this script to run every 2 minutes. How can I do this? (Using VS Code)
You can do something like this:
if not driver.find_elements(By.XPATH, "/html/body/div[1]/div[2]/h1"):
    webbrowser.open(r"C:\Users\Julian Layton\Desktop\Andalusia\I Remember.mp3")
The find_elements method returns a list of web elements.
If no elements are found, it returns an empty list, which Python interprets as False; that will invoke your command.
In order to run this every 2 minutes you can put this in loop with 2 minutes delay
while True:
    if not driver.find_elements(By.XPATH, "/html/body/div[1]/div[2]/h1"):
        webbrowser.open(r"C:\Users\Julian Layton\Desktop\Andalusia\I Remember.mp3")
        break
    else:
        time.sleep(120)
Here I added a 2-minute sleep between iterations and a break to stop the loop once the music file has been opened. I'm not sure exactly what logic you are looking for, but this seems to match your question.

How to 'save' progress whilst web scraping in Python?

I am scraping some data and making a lot of requests to Reddit's Pushshift API. Along the way I keep encountering HTTP errors, which halt all progress. Is there any way I can continue where I left off if an error occurs?
import json
import time
from urllib.request import urlopen

X = []
for i in ticklist:
    f = urlopen("https://api.pushshift.io/reddit/search/submission/?q={tick}&subreddit=wallstreetbets&metadata=true&size=0&after=1610928000&before=1613088000".format(tick=i))
    j = json.load(f)
    subs = j['metadata']['total_results']
    X.append(subs)
    print('{tick} has been scraped!'.format(tick=i))
    time.sleep(1)
I've so far mitigated the 429 error by waiting a second between requests, but I am still experiencing connection timeouts, and I'm not sure how to proceed efficiently without wasting a lot of time rerunning the code and hoping for the best.
A Python sqlite-db approach (reference: https://www.tutorialspoint.com/sqlite/sqlite_python.htm):
Create a sqlite db.
Create a table with the URLs to be scraped, with a schema like CREATE TABLE COMPANY (url TEXT NOT NULL UNIQUE, status TEXT NOT NULL DEFAULT 'Not started').
Read only the rows whose status is 'Not started'.
Change the status column of a URL to 'Success' once scraping is done.
That way, wherever the script starts, it will only run for the 'Not started' ones; a sketch follows below.
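A minimal sketch of that approach with the built-in sqlite3 module, assuming the tickers in ticklist are the work items; the table name, column names, and the scrape() helper are illustrative:

import sqlite3

conn = sqlite3.connect('progress.db')
conn.execute("""CREATE TABLE IF NOT EXISTS company (
                    url TEXT NOT NULL UNIQUE,
                    status TEXT NOT NULL DEFAULT 'Not started')""")
# Seed the table; INSERT OR IGNORE leaves rows that already exist untouched.
for i in ticklist:
    conn.execute("INSERT OR IGNORE INTO company (url) VALUES (?)", (i,))
conn.commit()

# Process only the rows that are still pending.
pending = [row[0] for row in
           conn.execute("SELECT url FROM company WHERE status = 'Not started'")]
for tick in pending:
    subs = scrape(tick)  # hypothetical helper wrapping the urlopen/json logic above
    conn.execute("UPDATE company SET status = 'Success' WHERE url = ?", (tick,))
    conn.commit()  # commit per item, so a crash loses at most the current row

Because the status is committed per item, rerunning the script after a timeout simply resumes with whatever is still 'Not started'.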

Catching an exception and continuing with the loop to continue a web search with Python

I am running into exceptions when I try to search for data from a list of values in a search bar. I would like to capture these exceptions and continue with the rest of the loop. Is there a way I could do this? I am getting two kinds of exceptions: one appears above the search bar and one below. I am currently using Selenium to log in and get the necessary details.
Error Messages:
Above the search bar:
Your search returned more than 100 results. Only the first 100 results will be displayed. Please select 'Reset' and refine the search criteria for specific results. (29)
Employer Number is not a valid . Minimum length should be 8. (890)
Error Message below the search bar.
No records found...
This is my code:
for i in ids:
    driver.find_element_by_xpath('//*[@id="print_area"]/table/tbody/tr[16]/td[1]/a').click()
    driver.find_element_by_xpath('//*[@id="print_area"]/table/tbody/tr[4]/td[3]/a').click()
    # searching for an id
    driver.find_element_by_xpath('//*[@id="ctl00_ctl00_cphMain_cphMain_txtEmprAcctNu"]').send_keys(i)
    driver.find_element_by_id('ctl00_ctl00_cphMain_cphMain_btnSearch').click()
    driver.find_element_by_xpath('//*[@id="ctl00_ctl00_cphMain_cphMain_grdAgentEmprResults"]/tbody/tr[2]/td[1]/a').click()
    # navigating to the employee details
    driver.find_element_by_xpath('//*[@id="print_area"]/table/tbody/tr[8]/td[3]/a').click()
    driver.find_element_by_xpath('//*[@id="print_area"]/table/tbody/tr[4]/td[1]/a').click()
After the above code runs, if there is an error or a mismatch, I get the exceptions mentioned above and the code shuts down. How do I capture those exceptions and continue with the loop? If I could do something similar to the way I am capturing the date below, that would be really helpful.
# copying and storing the date
subdate = driver.find_element_by_id('ctl00_ctl00_cphMain_cphMain_frmViewAccountProfile_lblSubjectivityDate').text
subjectivitydate.append(subdate)
# exiting the current employee details
driver.find_element_by_id('ctl00_ctl00_cphMain_ULinkButton4').click()
sleep(1)
Edited Code:
for i in ids:
    try:
        driver.find_element_by_xpath('//*[@id="print_area"]/table/tbody/tr[16]/td[1]/a').click()
        driver.find_element_by_xpath('//*[@id="print_area"]/table/tbody/tr[4]/td[3]/a').click()
        # searching for an id
        driver.find_element_by_xpath('//*[@id="ctl00_ctl00_cphMain_cphMain_txtEmprAcctNu"]').send_keys(i)
        driver.find_element_by_id('ctl00_ctl00_cphMain_cphMain_btnSearch').click()
        driver.find_element_by_xpath('//*[@id="ctl00_ctl00_cphMain_cphMain_grdAgentEmprResults"]/tbody/tr[2]/td[1]/a').click()
        # navigating to the employee profile
        driver.find_element_by_xpath('//*[@id="print_area"]/table/tbody/tr[8]/td[3]/a').click()
        driver.find_element_by_xpath('//*[@id="print_area"]/table/tbody/tr[4]/td[1]/a').click()
        # copying and storing the date
        subdate = driver.find_element_by_id('ctl00_ctl00_cphMain_cphMain_frmViewAccountProfile_lblSubjectivityDate').text
        subjectivitydate.append(subdate)
        # exiting the current employee details
        driver.find_element_by_id('ctl00_ctl00_cphMain_ULinkButton4').click()
        sleep(1)
    except:
        continue
How do I restart the loop?
Regards,
Ren
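One way to keep the loop going while still recording which ids failed is to catch the specific Selenium exceptions instead of using a bare except, and to navigate back to a known page before moving on. A sketch under those assumptions; search_page_url is a hypothetical address of the search page, and the exception types should be confirmed from the actual traceback:

from selenium.common.exceptions import NoSuchElementException, TimeoutException

failed = []
for i in ids:
    try:
        # ... the search/navigation/date-capture steps from the edited code above ...
        pass
    except (NoSuchElementException, TimeoutException):
        failed.append(i)  # remember the id so it can be retried later
        driver.get(search_page_url)  # hypothetical: return to a known starting page
        continue

A bare except also swallows KeyboardInterrupt and real bugs, which makes the loop hard to stop and hard to debug; catching only the expected exceptions avoids that.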

Checking if A follows B on twitter using Tweepy/Python

I have a list of a few thousand twitter ids and I would like to check who follows who in this network.
I used Tweepy to get the accounts using something like:
ids = {}
for i in list_of_accounts:
    for page in tweepy.Cursor(api.followers_ids, screen_name=i).pages():
        ids[i] = page
        time.sleep(60)
The values in the dictionary ids form the network I would like to analyze. If I try to get the complete list of followers for each id (to compare to the list of users in the network) I run into two problems.
The first is that I may not have permission to see the user's followers - that's okay and I can skip those - but they stop my program. This is the case with the following code:
connections = {}
for x in user_ids:
    l = []
    for page in tweepy.Cursor(api.followers_ids, user_id=x).pages():
        l.append(page)
    connections[x] = l
The second is that I have no way of telling when my program will need to sleep to avoid the rate limit. If I put a 60-second wait after every page in this query, my program would take too long to run.
I tried to find a simple 'exists_friendship' command that might get around these issues in a simpler way - but I only find things that became obsolete with the change to API 1.1. I am open to using other packages for Python. Thanks.
if api.exists_friendship(userid_a, userid_b):
    print("a follows b")
else:
    print("a doesn't follow b, check separately if b follows a")
