On the main page of my website there is a long list, a <ul>. Each list item represents a model object and shows its attributes (its name and so on), along with buttons to change those attributes. The list is loaded from an SQLite database. The issue is that each button changes data in the database, so to display the changed data the view function reloads the page on every click. That would be fine, except the list is very long and requires scrolling down, and each reload jumps back to the very top of the list. This makes the page almost unusable, or at least very annoying to use.
Can someone recommend a workaround for this problem? Please let me know if my question is not clear.
It looks like what you want to achieve should be done with JavaScript; that will be the simplest way to do it. For instance, you can call the corresponding API endpoint when the user clicks one of the buttons, and if the API returns 'OK' you can update the item in place with the changes you made (if the server says the update succeeded, you can assume the local version of the data is the same as the one on the server). That way the page never reloads and the scroll position is preserved.
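On the server side this only needs an endpoint that applies the change and returns JSON instead of re-rendering the whole page. A minimal sketch, assuming Flask and a hypothetical items table; the route, the column names, and the client-side fetch() call that would consume it are illustrative, not taken from your code:

import sqlite3
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/items/<int:item_id>/rename", methods=["POST"])
def rename_item(item_id):
    new_name = request.form["name"]                      # value sent by the button's JS handler
    with sqlite3.connect("app.db") as conn:              # "app.db" is a placeholder path
        conn.execute("UPDATE items SET name = ? WHERE id = ?", (new_name, item_id))
    # return JSON so the page's script can update just this <li> without reloading
    return jsonify(status="OK", id=item_id, name=new_name)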
Related
What I want to do - I want to crawl content (similar to companies' stock prices) from a website. The value of each element (i.e. a stock price) is updated every second. However, the page is lazy-loaded, so only 5 elements are visible at a time, while I need to collect data from ~200 elements.
What I tried - I use Python Splinter to read the data from each element's div, but only the 5-10 elements around the current viewport appear in the HTML. If I scroll the browser down I can get the next elements (the next companies' stock prices), but the earlier elements are no longer available. This process (scrolling down and reading the new data) is too slow: by the time I finish collecting all 200 elements, the first element's value has already changed several times.
So, can you suggest some approaches to handle this issue? Is there any way to force the browser to load all the content instead of lazy-loading it?
There is no single right way; it depends on how the website works in the background. Normally there are two options for a lazy-loaded page.
Selenium. It executes all JS scripts and "merges" all the background requests into a complete page, like a normal web browser.
Access the API. In this case you don't have to care about the UI and dynamically hidden elements. The API gives you access to all the data on the web page, often more than is displayed.
In your case, an update every second sounds like a streaming connection (maybe a WebSocket). So try to figure out how the website gets its data and then scrape that API endpoint directly.
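If you can find that endpoint in the browser's developer tools (Network tab), polling it directly is usually far faster than driving the UI. A minimal sketch, assuming a hypothetical JSON endpoint; the URL and field names are placeholders, not the real site's API:

import requests

API_URL = "https://example.com/api/quotes"   # replace with the endpoint seen in the Network tab

resp = requests.get(API_URL, timeout=5)
resp.raise_for_status()
for quote in resp.json():                    # assuming the endpoint returns a JSON list
    print(quote["symbol"], quote["price"])   # field names are illustrative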
What page is it?
I was able to use Selenium to log into a scheduling website and click through to the list of clients. Each client can be clicked to gather info about how many appointments they have left. What I want to do now is loop through all the clients, clicking on each one and collecting whatever info I need into an array or similar (a problem for later).
Right now my main question is really just how to click on one client and then the next one until the list is complete. I can figure out the rest later.
How do I go about doing this? In previous questions I see that many people already have a list of URLs ready; here I obviously don't.
You can first fetch all the links you want to click by using the findElements method (find_elements in Python). Then you will need a loop over them.
In Python with Selenium, the loop would look roughly like this:
from selenium.webdriver.common.by import By

links = driver.find_elements(By.CSS_SELECTOR, "a.client")   # use whatever locator matches your client links
for i in range(len(links)):
    # re-locate on every pass so the references are fresh after navigating away and back
    links = driver.find_elements(By.CSS_SELECTOR, "a.client")
    links[i].click()
    # do your work
    driver.back()   # go back to the list page
You may come across a stale element exception here; if you do, you will need to get fresh handles on the page's elements, which is why the loop above re-fetches the links on every iteration.
Hope this helps.
Is it possible to save the current state of a Selenium browser?
To explain, I will provide an example:
Let's say there is a web page. I clicked on a button and found many other buttons. I want to check each of those buttons sequentially. The problem is that each of these buttons needs browser information, such as the referrer, which should be the page reached after the first click, etc.
In this case I would need that information saved in the browser, because if I clicked on the second button the referrer would be the page that was most recently opened. I can't click on a third button right after that; I would have to go back, but some web pages do not allow the browser's 'back'. Another advantage is that I would not have to send a new request to the server.
Something like this:
for but in driver.find_elements_by_class_name('button'):
    state = driver.save_state()
    but.click()
    # do stuff
    driver.load_state()
There is another solution. The web browser usually uses its cache to hold that state, so you can save the cache before clicking each button and then reuse it to get back to the pre-click state.
I already use that solution in my project. It works perfectly as long as nothing is saved to a database or another third party.
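Selenium itself has no save_state()/load_state() calls, so as a practical approximation (not literally saving the cache) you can remember the URL of the starting page and re-open it before each click, re-locating the buttons by index. A minimal sketch under those assumptions, reusing the 'button' class name from the question:

start_url = driver.current_url                                  # the page whose state we want to come back to
count = len(driver.find_elements_by_class_name('button'))

for i in range(count):
    driver.get(start_url)                                       # re-open the "saved" page, often served from cache
    buttons = driver.find_elements_by_class_name('button')      # re-locate to avoid stale element references
    buttons[i].click()
    # do stuff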
I've got a text view and a web view, each inside its own scrolled window, and I'm trying to achieve synchronized scrolling between the two, but I can't seem to get it to work.
The web view basically takes the text from the text view and renders it as marked-up HTML via webview.load_html_string(). I think the problem could be the delay in loading the HTML, as every time the web view is refreshed it scrolls back to the very start.
Right now I call a function every time the content of the text view is changed and then modify the vadjustment.value of the scrolled window containing the web view.
But this doesn't work. Is it because of the delay? I can't think of any way to solve this issue.
Why do you want to sync those scrollbars? You can achieve this by making both scrolled windows use the same Gtk.Adjustment (with its page size set to 0).
I haven't used WebKit much, but it is essentially a widget, so maybe a workaround would be to disconnect the "value-changed" signal from the Gtk.Adjustment until the WebKitView's "load-status" signal reaches WebKit.LoadStatus.FINISHED (if that's the correct syntax).
If that doesn't work, maybe you can use WebKitView.move_cursor() (if I remember the function properly) driven by the Gtk.Adjustment of your text view (using two adjustments this time).
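A minimal sketch of the shared-adjustment idea, assuming PyGObject with GTK 3 and the WebKit 1 bindings the question seems to be using; the surrounding widget setup is illustrative:

import gi
gi.require_version('Gtk', '3.0')
gi.require_version('WebKit', '3.0')
from gi.repository import Gtk, WebKit

text_scroller = Gtk.ScrolledWindow()
text_scroller.add(Gtk.TextView())

web_scroller = Gtk.ScrolledWindow()
web_scroller.add(WebKit.WebView())

# give both scrolled windows the same vertical adjustment so they move together
web_scroller.set_vadjustment(text_scroller.get_vadjustment())

If the web view still jumps to the top after each load_html_string() call, you would additionally restore the adjustment's value once "load-status" reports that loading has finished, as suggested above.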
I have written a script that goes through a bunch of files and snips out a portion of each for further processing. The script creates a new directory and a new file for each snip that is taken out. I now have to evaluate each of the created files to see whether it is what I need. The script also creates an HTML index file with links to each of the snips, so I can click a hyperlink to view the file, make a note in a spreadsheet to indicate whether the file is correct, and then use the browser's back button to return to the index list.
I was sitting here wondering if I could somehow create a delete button in the browser next to each hyperlink. My thought is that I would click the hyperlink, make a judgment about the file, and if it is not one I want to keep, then when I get back to the main page I just press the delete button and it is gone from the directory.
Does anyone have any idea if this is possible? I am writing this in Python, but clearly the question is whether there is a way to create an HTML file with a delete button; I would just use Python to write the commands behind the deletion button.
You could make this even simpler by making it all happen in one main page. Instead of having a list of hyperlinks, just give the main page a single frame that loads one of the auto-created pages. Put a couple of buttons at the bottom - a "Keep this page" and a "Delete this page." When you click either button, the main page refreshes, this time with the next auto-created page in the frame.
You could write this as a CGI script in your favorite scripting language. You can't do this in HTML alone, because an HTML page only does things client-side, and files can only be deleted server-side. As CGI arguments you will probably need the page to show in the frame, and the last page you viewed in case the button click was a "delete".
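As a rough illustration of that idea, here is a minimal sketch of such a CGI script in Python; the "snips" directory, the parameter names, and the way the next page is chosen are assumptions, not taken from the original script:

#!/usr/bin/env python
# minimal CGI sketch of the frame-plus-buttons idea
import cgi, html, os

SNIP_DIR = "snips"                      # assumed location of the generated snip files
pages = sorted(os.listdir(SNIP_DIR))

form = cgi.FieldStorage()
last = form.getfirst("last", "")        # page shown in the frame on the previous view
action = form.getfirst("action", "")    # "keep" or "delete"

idx = pages.index(last) if last in pages else -1
if action == "delete" and idx >= 0:
    os.remove(os.path.join(SNIP_DIR, last))
    del pages[idx]                      # the next page slides into position idx
else:
    idx += 1                            # "keep" (or first visit): move on to the next page

print("Content-Type: text/html")
print("")
if idx < len(pages):
    current = pages[idx]
    print('<iframe src="%s/%s" width="100%%" height="80%%"></iframe>' % (SNIP_DIR, html.escape(current)))
    print('<form method="get">')
    print('<input type="hidden" name="last" value="%s">' % html.escape(current))
    print('<button name="action" value="keep">Keep this page</button>')
    print('<button name="action" value="delete">Delete this page</button>')
    print('</form>')
else:
    print('<p>All snips reviewed.</p>')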
You would have to write the web page in Python. There are many Python web frameworks out there (e.g. Django) that are easy to work with. You could convert your entire scripting framework into a web application that has a worker thread crawling through the HTML pages, saving them to a particular location, indexing them for you to see, and providing a delete button that calls the system's delete function on the particular file.
Rather than having your script output static HTML files, with a little work you could probably adapt it to run as a small web application with the help of something like web.py.
You would start your script and point a browser at http://localhost:8080, for instance. The web browser would be your user interface.
To achieve the 'delete' functionality, all you need to do is write some Python that gets executed when a form is submitted to actually perform the deletion.
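A minimal sketch of what that could look like with web.py; the "snips" directory, the URL layout, and the form parameter name are assumptions:

import os
import web

SNIP_DIR = "snips"                      # assumed location of the generated snip files

urls = ("/", "Index", "/view/(.+)", "View", "/delete", "Delete")
app = web.application(urls, globals())

class Index:
    def GET(self):
        web.header("Content-Type", "text/html")
        rows = []
        for name in sorted(os.listdir(SNIP_DIR)):
            rows.append('<li><a href="/view/%s">%s</a> '
                        '<form method="post" action="/delete" style="display:inline">'
                        '<input type="hidden" name="name" value="%s">'
                        '<button type="submit">Delete</button></form></li>' % (name, name, name))
        return "<ul>%s</ul>" % "".join(rows)

class View:
    def GET(self, name):
        web.header("Content-Type", "text/html")
        with open(os.path.join(SNIP_DIR, os.path.basename(name))) as f:
            return f.read()

class Delete:
    def POST(self):
        name = os.path.basename(web.input().name)        # basename() keeps the delete inside SNIP_DIR
        os.remove(os.path.join(SNIP_DIR, name))
        raise web.seeother("/")                          # back to the index page

if __name__ == "__main__":
    app.run()                                            # serves on http://localhost:8080 by default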
Well, I finally found an answer that achieved what I wanted. I did not want to learn a new language; Python is hard enough given my lack of experience.
def OnDelete(self, event):
    assert self.current, "invalid delete operation"
    try:
        os.remove(os.path.join(self.cwd, self.current))