Python/Selenium WebDriver: Adding new input text

I am using the Python 3 unittest library with Selenium WebDriver. My application already contains an existing question. I am trying to add another question, but when I insert text into the text field of the new question, it changes the text in the existing question instead of the new one...
I am using:
wd.find_element_by_id("id_question-1-title").click()
wd.find_element_by_id("id_question-1-title").clear()
wd.find_element_by_id("id_question-1-title").send_keys("ABC")
But this solution requires changing the id to id_question-2-title, id_question-3-title, etc. every time I run the code.

You can check how many questions you have and build the index accordingly:
questions = wd.find_elements_by_css_selector('[id^="id_question-"][id$="-title"]')  # match every question title field
index = len(questions)
wd.find_element_by_id('id_question-' + index + '-title').click()
wd.find_element_by_id('id_question-' + index + '-title').clear()
wd.find_element_by_id('id_question-' + index + '-title').send_keys("ABC")

Guy, thank you very much! I just wrapped len(questions) in str, because the string concatenation requires a str instead of an int:
questions = wd.find_elements_by_css_selector('[id^="id_question-"][id$="-title"]')  # match every question title field
index = str(len(questions))
wd.find_element_by_id('id_question-' + index + '-title').click()
wd.find_element_by_id('id_question-' + index + '-title').clear()
wd.find_element_by_id('id_question-' + index + '-title').send_keys("ABC")
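As an aside, the find_element_by_* helpers used above have been removed in current Selenium 4 releases; a minimal sketch of the same approach with the current API (same ids assumed, wd being the WebDriver instance) would be:
from selenium.webdriver.common.by import By

questions = wd.find_elements(By.CSS_SELECTOR, '[id^="id_question-"][id$="-title"]')
index = len(questions)
field = wd.find_element(By.ID, 'id_question-' + str(index) + '-title')
field.click()
field.clear()
field.send_keys("ABC")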

Related

Use variable to find xpath selenium not working

I am working on web automation with Selenium and I need to find an element by its XPath. This by itself is not a problem, but the code needs to run multiple times, and each time it does, the XPath in the HTML changes. This is also not a problem: I have used simple math to get the new XPath every time.
The XPaths look like this:
1. run: '//*[@id="input-text-3"]'
2. run: '//*[@id="input-text-5"]'
3. run: '//*[@id="input-text-7"]' etc.
I solved this problem using this code:
y = 1
# Chrome browser already defined and on website
while True:
    mathop1 = y*2 + 1
    xxpath = ""'//*[@id="input-text-' + str(mathop1) + '"]'""
    xxpath1 = "'" + str(xxpath) + "'"
    print(xxpath1)
    Bezeichnung = driver.find_element_by_xpath(xxpath1)
    Bezeichnung.send_keys(file1name)
    y = y + 1
What this does is update y every time the program loops so the XPath stays correct. I tried using the printed output of xxpath1 to find the element like you normally would, and that works fine; however, as soon as I use the variable it does not work. Specifically, the problem is this line:
Bezeichnung = driver.find_element_by_xpath(xxpath1)
why does this not work?
First of all, I guess you have to put a wait condition there.
Also, I do not understand why you are nesting quotes inside strings and converting a string to a string again, so I removed that. The literal single quotes that xxpath1 wraps around the XPath are what make find_element_by_xpath fail, since the expression then evaluates to a quoted string literal rather than an element path:
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

y = 1
# Chrome browser already defined and on website
while True:
    mathop1 = y*2 + 1
    xxpath = '//*[@id="input-text-{}"]'.format(mathop1)
    Bezeichnung = WebDriverWait(driver, 20).until(EC.element_to_be_clickable((By.XPATH, xxpath)))
    Bezeichnung.send_keys(file1name)
    y = y + 1
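To see the original failure concretely, here is a minimal sketch (plain Python, no browser needed) of what the two variables actually contain:
mathop1 = 3
xxpath = '//*[@id="input-text-' + str(mathop1) + '"]'
xxpath1 = "'" + xxpath + "'"
print(xxpath)   # //*[@id="input-text-3"]    <- valid XPath
print(xxpath1)  # '//*[@id="input-text-3"]'  <- the literal quotes break find_element_by_xpath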

Getting text from a full XPath, quick question

Relatively new to programming, doing Python with Selenium and XPath.
This is from this website: https://www.marketbeat.com/stocks/NASDAQ/AMZN/earnings/
What I am trying to do is copy the dates of the past earnings.
The problem is that the value is in a table, and the cell tag also carries a data-sort-value attribute.
I have this full XPath:
/html/body/div[1]/main/article/form/div[8]/div[2]/div[2]/div/div[5]/div/table/tbody/tr[2]/td[1]
and upon inspection it shows this:
<td data-sort-value="20201027000000">10/27/2020</td>
I am trying to get the 10/27/2020 out of here, but I don't know how to.
It is easy when it is just <td>$1.53</td>, where giving the full XPath and then calling .text on it gives me the text.
The point is how do I get 10/27/2020? I suspect I have to get past the data-sort-value="" part.
Here is what I have:
stringeEPSxpath = """/html/body/div[1]/main/article/form/div[8]/div[2]/div[2]/div/div[5]/div/table/tbody/tr[""" + row + """]/td[1]"""
Date = stringeEPSxpath.text
(The row part is just me iterating through the website pages in a loop; no problem there.)
For the rest of my code it has been pretty simple:
stringeEPSxpath = """/html/body/div[1]/main/article/form/div[8]/div[2]/div[2]/div/div[5]/div/table/tbody/tr[""" + row + """]/td[3]"""
stringEPSxpath = """/html/body/div[1]/main/article/form/div[8]/div[2]/div[2]/div/div[5]/div/table/tbody/tr[""" + row + """]/td[4]"""
elem = driver.find_element_by_xpath(stringeEPSxpath)
elem2 = driver.find_element_by_xpath(stringEPSxpath)
fix1 = (elem.text).replace(')','')
fix2 = (elem2.text).replace(')','')
eps1 = fix1.replace('(','-')
eps2 = fix2.replace('(','-')
As you can see above, all I had to do was assign the element to a variable and then use .text to get its string. But now that does not work for dates.
The end result is the error AttributeError: 'str' object has no attribute 'text', which I suspect is because of the data-sort-value="20201027000000" part.
In this code, stringeEPSxpath is a string, not a WebElement:
stringeEPSxpath = """/html/body/div[1]/main/article/form/div[8]/div[2]/div[2]/div/div[5]/div/table/tbody/tr[""" + row + """]/td[1]"""
Date = stringeEPSxpath.text
You can try this one:
from selenium.webdriver.common.by import By
element = driver.find_element(By.XPATH,"//td[#data-sort-value='20201027000000']")
Date = element.text
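As a side note, if you ever need the raw sort value rather than the displayed date, get_attribute works on the same element (standard Selenium API, not part of the original answer):
raw = element.get_attribute("data-sort-value")  # returns "20201027000000"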
I forgot driver.find_element_by_xpath(stringeEPSxpath), which is what actually resolves the XPath. Since I copied and pasted from my other program, I guess I deleted too much and completely forgot to add the driver.find_element_by_xpath call back in.
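In other words, a minimal sketch of the presumably corrected flow (row is assumed to be a string, as in the original loop):
stringeEPSxpath = "/html/body/div[1]/main/article/form/div[8]/div[2]/div[2]/div/div[5]/div/table/tbody/tr[" + row + "]/td[1]"
elem = driver.find_element_by_xpath(stringeEPSxpath)  # resolve the XPath to a WebElement first
Date = elem.text  # now .text works, e.g. "10/27/2020"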

Click on JavaScript Link using Selenium Python

I need to click on the "Visualizar cruzeiros" button on this site: https://www.disneytravelcenter.com/MIN-000000000031063/sites++disney-cruise-line++cruises-destinations++alaska++view-sailings++view-sailings/
Unfortunately, I cannot do it by simply using:
visualizar = driver.find_element_by_css_selector('#jb-card-txt-blk-redirect-link-zone-' + zona + 'night-' + str(c)+ '.redirectLink')
visualizar.click()
Or using:
visualizar= driver.find_element_by_class_name('redirectLink')
visualizar.click()
Or using:
visualizar= driver.find_element_by_xpath('//*[#id="jb-card-txt-blk-redirect-link-zone-' + zona + 'night-' + str(c) +'"]')
visualizar.click()
How am I supposed to click on this button then?
Note: zona and str(c) are loop variables, but they don't actually change the result of the code.
Have you tried using the XPath //a[contains(text(),'Visualizar cruzeiros')]? This should return more than one element; you need to get the second one.
Try to write a test case with selenium IDE, it may give you a hint how to access the element.
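For instance, a minimal sketch of that suggestion (assuming the link text is exactly 'Visualizar cruzeiros'):
links = driver.find_elements_by_xpath("//a[contains(text(),'Visualizar cruzeiros')]")
links[1].click()  # the second match; find_elements returns a zero-indexed list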
Found out the answer to my problem:
button = driver.find_elements_by_xpath('//*[#id="jb-card-txt-blk-redirect-link-zone-' + zona + 'night-' + str(c) + '"]')[2]
button.click()

Code working a week ago, now I am getting an error without changing anything in my code

I wrote a small web scraper that was working fine a couple of weeks ago, but now gives me an error without me having changed any part of my code. My code is listed below for reference:
address = driver.find_elements_by_xpath('//h3[@class = "street"]')
price = driver.find_elements_by_xpath('//div[@class = "price"]')
details = driver.find_elements_by_xpath('//div[@class = "details"]')
num_page_items = len(details)
with open('results.csv', 'a') as f:
    for x in range(num_page_items):
        f.write(address[x].text + " , " + price[x].text.replace(",", "") + "," + details[x].text + "\n")
I am using selenium (I omitted the import and setup since that part of the code works fine) and when I run my code I get the following error:
line 25, in <module>
f.write(address[x].text + " , " + price[x].text.replace(",", "") + "," + details[x].text + "\n")
IndexError: list index out of range
I did some research, but when I print len(details) I get 24, which indicates that there are values in the details variable. Since the range is defined and I get a result for the length of the list, why would I get an out-of-range error?
Your code assumes that the three lists are the same length, but that is not guaranteed. Like others have said, reconsider your implementation if the design of the site has changed.
Alternatively, if you want to stop throwing errors, you could look into the built-in zip function: https://docs.python.org/3.3/library/functions.html#zip
zip groups your lists into a sequence of tuples, producing n tuples where n is the length of your smallest list. Consider, though, that if the site has changed its design, the newly zipped rows may no longer be meaningful.
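For example, a minimal sketch of the zip version (same locators as above; zip simply stops at the shortest list instead of raising IndexError):
with open('results.csv', 'a') as f:
    for addr, pr, det in zip(address, price, details):
        f.write(addr.text + " , " + pr.text.replace(",", "") + "," + det.text + "\n")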

Python dictionary too slow for cross-comparison, improvements? [closed]

I am currently struggling with a performance issue when using Python dictionaries. I have a few huge lists of dicts (up to 30k entries each), and I want to do a cross-comparison on these entries: given one entry (its identifier is a key), how many of the other lists contain an entry with the same value for that key? It currently takes up to 5 hours on my machine, but it should finish in a few minutes for my tool to make sense. I already tried removing matched entries to make the search more efficient.
all_cachedata is a list of these lists of dicts; sources is a list with information about the lists in all_cachedata.
appearsin_list = []
# first, get all the cached data
sources = sp.get_sources()
all_cachedata = [0]*len(sources)
for source in sources:
    iscached = source[8]
    sourceid = int(source[0])
    if iscached == "True":
        cachedata, _ = get_local_storage_info(sourceid)
    else:
        cachedata = []
    all_cachedata[sourceid-1] = cachedata

# second, compare cache entries
# iterate over all cached sources
for source in sources:
    sourceid = int(source[0])
    datatype = source[3]
    iscached = source[8]
    if verbose:
        print("Started comparing entries from source " + str(sourceid) +
              " with " + str(len(all_cachedata[sourceid-1])) + " entries.")
    if iscached == "True":
        # iterate over all other cache entries
        for entry in all_cachedata[sourceid-1]:
            # print("Comparing source " + str(sourceid) + " with source " + str(cmpsourceid) + ".")
            appearsin = 0
            for cmpsource in sources:
                cmpsourceid = int(cmpsource[0])
                cmpiscached = cmpsource[8]
                # find entries for same potential threat
                if cmpiscached == "True" and len(all_cachedata[cmpsourceid-1]) > 0 and cmpsourceid != sourceid:
                    for cmpentry in all_cachedata[cmpsourceid-1]:
                        if datatype in cmpentry:
                            if entry[datatype] == cmpentry[datatype]:
                                appearsin += 1
                                all_cachedata[cmpsourceid-1].remove(cmpentry)
                                break
            appearsin_list.append(appearsin)
            if appearsin > 0:
                if verbose:
                    print(entry[datatype] + " appears also in " + str(appearsin) + " more source/s.")
                all_cachedata[sourceid-1].remove(entry)

avg = float(sum(appearsin_list)) / float(len(appearsin_list))
print("Average appearance: " + str(avg))
print("Median: " + str(numpy.median(numpy.array(appearsin_list))))
print("Minimum: " + str(min(appearsin_list)))
print("Maximum: " + str(max(appearsin_list)))
I would be very thankful for some tips on speeding this up.
I think your algorithm can be improved; nested loops are not great in this case. I also think that Python is probably not the best tool for this particular purpose: SQL is made for comparing and searching in large amounts of data. You could use something like sqlite_object to convert your data set into an SQLite db and query it.
If you want to go ahead with pure Python, you can try compiling your script with Cython; you can get some reasonable improvements in speed:
http://docs.cython.org/src/tutorial/pure.html
Then you can improve your code with some static type hinting:
http://docs.cython.org/src/tutorial/pure.html#static-typing
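To illustrate the algorithmic point, here is a minimal sketch (assuming the same sources/all_cachedata layout as in the question, and that the dict values are hashable) that precomputes, for each key, how many sources contain each value; this replaces the quadratic cross-scan with two linear passes. Note the original code also removes entries as they match, so the exact counts can differ slightly:

from collections import defaultdict, Counter

# Pass 1: for every key, count in how many sources it takes each value
# (each source is counted at most once per (key, value) pair).
value_sources = defaultdict(Counter)
for source in sources:
    sourceid = int(source[0])
    if source[8] == "True":
        seen = set()
        for entry in all_cachedata[sourceid-1]:
            for key, val in entry.items():
                seen.add((key, val))
        for key, val in seen:
            value_sources[key][val] += 1

# Pass 2: for each entry, look up how many *other* sources share its value.
appearsin_list = []
for source in sources:
    sourceid = int(source[0])
    datatype = source[3]
    if source[8] == "True":
        for entry in all_cachedata[sourceid-1]:
            if datatype in entry:
                appearsin_list.append(value_sources[datatype][entry[datatype]] - 1)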
