Please help me with the best/easiest approach for the scenario below:
URL A has 'n' stacks, and under each stack many users are added and removed on a regular basis. I have built a Python script that uses JSON API calls to add all of the stack and host information to URL B.
Now, whenever an event occurs in URL A (adding/removing hosts under stacks), it has to be reflected in URL B. What would be the best approach to accomplish this?
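If URL A cannot push change notifications (for example via webhooks), a common fallback is to poll URL A on a schedule and diff the current state against what was last pushed to URL B. A minimal sketch of that idea, assuming hypothetical endpoint paths and JSON shapes for both sides:

import requests

def fetch_stacks(base_url):
    # Assumption: GET /stacks returns {"stack_name": ["host1", "host2", ...], ...}
    return requests.get(f"{base_url}/stacks").json()

def sync(url_a, url_b, previous):
    current = fetch_stacks(url_a)
    for stack, hosts in current.items():
        added = set(hosts) - set(previous.get(stack, []))
        removed = set(previous.get(stack, [])) - set(hosts)
        for host in added:
            requests.post(f"{url_b}/stacks/{stack}/hosts", json={"host": host})
        for host in removed:
            requests.delete(f"{url_b}/stacks/{stack}/hosts/{host}")
    return current  # becomes "previous" on the next polling cycle

If URL A can call out on events, registering a webhook that hits a small endpoint in front of URL B avoids the polling delay entirely.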
I am in the process of creating a Python script to collect CSV data from a web URL. I've had no issue setting this up with:
from urllib import request

request.urlretrieve("http://www.RandomWebsiteInfo.com", "my output info")
However, one of the web URLs I wish to retrieve includes an edit date in the naming convention. Example:
ftp://ftp.ncdc.noaa.gov/pub/data/swdi/stormevents/csvfiles/StormEvents_details-ftp_v1.0_d2018_c20180418.csv.gz
The portion of the string with "_c20180418" represents the date the file was last updated. Since this changes constantly, and at random intervals, is there a way for Python to search within these URL strings and recognize or match an updated "_cYYYYMMDD" portion?
Worst case, I could create a loop that tries each date between the last known edit and the current date until request.urlretrieve hits a URL that doesn't fail. I'm just not sure that is best practice for the task I'm attempting to accomplish. Before I try that approach, I thought I'd ask the community for any insight. Thanks for your time.
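Since the file lives on an FTP server, one approach that avoids guessing dates entirely is to list the directory and pick out the current filename with a regular expression. A minimal sketch, assuming the NOAA server allows anonymous login and directory listings:

import re
from ftplib import FTP

ftp = FTP("ftp.ncdc.noaa.gov")
ftp.login()  # anonymous
ftp.cwd("pub/data/swdi/stormevents/csvfiles")

# Match the 2018 details file regardless of its "_cYYYYMMDD" edit date
pattern = re.compile(r"StormEvents_details-ftp_v1\.0_d2018_c(\d{8})\.csv\.gz")
matches = [name for name in ftp.nlst() if pattern.match(name)]

if matches:
    latest = max(matches)  # lexicographic order works for YYYYMMDD dates
    with open(latest, "wb") as f:
        ftp.retrbinary("RETR " + latest, f.write)
ftp.quit()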
Edited Question:
I guess I worded my previous question improperly. I actually want to get away from "unit tests" and create automated, modular system tests that build on each other to test the application as a whole. Many parts depend on the previous pages, and subsequent pages cannot be reached without first performing the necessary steps on the previous pages.
For example (and I am sorry I cannot give the actual code), I want to sign into an app, then insert some data, then show that the data was sent successfully. It is more involved than that, but I would like to make the web driver portion 'Module 1.x', the sign-in portion 'Module 2.x', the data portion 'Module 3.x', and the success portion 'Module 4.x'. The goal is to eventually be able to say: "OK, for this test I need it to be a bit more complicated, so let's do IE (Module 1.4), sign in (Module 2.1), add a name (Module 3.1), add an address (Module 3.2), add a phone number (Module 3.3), then check for success (Module 4.1)," and have all of these strung together. (This is extremely simplified and just an example of what I need to occur. Even in the case of the unit tests, I am unable to simply skip to a page and check that its elements are present without completing the required prerequisite information.)

The issue I am running into with the lengthy tests I have created is that each one requires multiple edits when something changes, multiplied by the number of drivers, in this case Chrome, IE, Edge and Firefox (a factor of 4). Maybe my approach is totally wrong, but this is new ground for me, so any advice is much appreciated. Thank you again for your help!
Previous Question:
I have found many answers about creating unit tests; however, I am unable to find any advice on how to make those tests sequential.
I really want to make modular tests that can be reused when the same action is performed repeatedly. I have tried various ways to achieve this, but I have been unsuccessful. Currently I have several lengthy tests that reuse much of the same code, and I have to adjust each one individually with any new changes.
So I would really like to have .py files that contain only a few lines of code for the specific task I am trying to complete, while reusing the same browser instance that is already open and on the page where the previous portion of the test left off. I am hoping to achieve this by 'calling' the smaller/modular test files.
Any help and/or examples are greatly appreciated. Thank you for your time and assistance with this issue.
Respectfully,
Billiamaire
You don't really want your tests to be sequential. That breaks one of the core rules of unit tests: they should be able to run in any order.
You haven't posted any code, so it's hard to know what to suggest, but if you aren't using the page object model, I would suggest that you start. There are a lot of resources on the web for this, but the basics are that you create a single class per page or widget. That class holds all the code and locators that pertain to that page. This helps with the modular aspect of what you are seeking, because in your script you just instantiate the page object and then consume its API. The details of interacting with the page, the logic, etc. all live in the page object and are exposed via the API it provides.
Changes/updates are easy. If the login page changes, you edit the page object for the login page and you're done. If the page objects are properly implemented and the changes to the page aren't severe, many times you won't need to change the scripts at all.
A simple example would be the login page. In the login class for that page, you would have a login() method that takes a username and password. The login() method handles entering the username and password into the appropriate fields, clicking the sign-in button, and so on.
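A minimal sketch of such a page object with Selenium (the URL and locators are placeholders, not from the original post):

from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def load(self):
        self.driver.get("https://example.com/login")  # placeholder URL

    def login(self, username, password):
        # Placeholder locators; use whatever identifies your real fields
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "sign-in").click()

# A test script then just composes page objects, so swapping the driver
# (Chrome, IE, Edge, Firefox) is a one-line change:
driver = webdriver.Chrome()
page = LoginPage(driver)
page.load()
page.login("some_user", "some_password")

If the login page's markup changes, only LoginPage needs editing; every test that calls login() is unaffected.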
How can I generate a random yet valid website link, regardless of language? Actually, the more diverse the languages of the websites it generates, the better.
I've been doing it by using other people's scripts on their webpages. How can I stop relying on these random-site-forwarding scripts and make my own? I've been doing it as such:
import webbrowser
from random import choice

# Redirector pages that each forward to a random website
random_page_generator = ['http://www.randomwebsite.com/cgi-bin/random.pl',
                         'http://www.uroulette.com/visit']

# Open one of them in a new browser tab
webbrowser.open(choice(random_page_generator), new=2)
There are two ways to do this:
Create your own spider that amasses a huge collection of websites, and pick from that collection.
Access some pre-existing collection of websites, and pick from that collection. For example, DMOZ/ODP lets you download their entire database;* Google used to have a customized random site URL;** etc.
There is no other way around it (short of randomly generating and testing valid strings of arbitrary characters, which would be a ridiculously bad idea).
Building a web spider for yourself can be a fun project. Link-driven scraping libraries like Scrapy can do a lot of the grunt work for you, leaving you to write the part you care about.
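A minimal sketch of such a spider with Scrapy (the seed URL is an arbitrary assumption; any link-rich page would do):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class SiteCollector(CrawlSpider):
    name = "site_collector"
    start_urls = ["https://en.wikipedia.org/"]  # assumed seed page
    rules = [Rule(LinkExtractor(), callback="parse_item", follow=True)]

    def parse_item(self, response):
        # Record every URL visited; pick randomly from the output later
        yield {"url": response.url}

Running it with "scrapy runspider site_collector.py -o urls.csv" builds the collection; picking a random line from urls.csv then gives you a random, known-valid site.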
* Note that ODP is a pretty small database compared to something like Google's or Yahoo's, because it's primarily a human-edited collection of significant websites rather than an auto-generated collection of everything anyone has put on the web.
** Google's random site feature was driven by both popularity and your own search history. However, by feeding it an empty search history, you could remove that part of the equation. Anyway, I don't think it exists anymore.
A conceptual explanation, not a code one.
Their scripts are likely very large and comprehensive. If it's a random-website selector, they have a huge list of websites, line by line, and the script just picks one. If it's a random-URL generator, it probably generates a string of letters (e.g. "asljasldjkns"), plugs it in between http:// and .com, checks whether it is a valid URL, and if it is, sends you that URL.
The easiest way to design your own might be to ask to have a look at theirs, though I'm not certain of the success you'd have there.
The best way, as a programmer, is simply to learn the nature of URL structure: practice building strings and testing them, or compile a huge database of them yourself.
As a hybrid, you might try building two things: one script that, while you're away, searches for and tests URLs and adds the valid ones to a database, and another script that randomly selects a line from this database to send you on your way. The longer you run the first, the better the second becomes.
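A minimal sketch of the second script, assuming the first has been appending one verified URL per line to a plain text file (the filename is an assumption):

import webbrowser
from random import choice

# Assumption: urls.txt holds one verified URL per line,
# appended over time by the collector script
with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

webbrowser.open(choice(urls), new=2)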
EDIT: Do Abarnert's thing with spiders; that's much better than my answer.
The other answers suggest building large databases of URLs. There is another method, which I've used in the past and documented here:
http://41j.com/blog/2011/10/find-a-random-webserver-using-libcurl/
The idea is to create a random IP address and then try to grab a site from port 80 of that address. This method is not perfect with modern virtual-hosted sites, and of course it only fetches the top page, but it can be an easy and effective way of getting random sites. The code linked above is C, but it should be easily callable from Python, or the method could easily be adapted to Python.
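A Python adaptation of that idea might look like this minimal sketch (using the requests library; the timeout and attempt count are arbitrary choices, and no attempt is made to skip private or reserved address ranges):

import random
import requests

def random_ip():
    return ".".join(str(random.randint(1, 254)) for _ in range(4))

def find_random_site(attempts=1000):
    for _ in range(attempts):
        ip = random_ip()
        try:
            r = requests.get(f"http://{ip}", timeout=2)
            if r.ok:
                return ip, r.text
        except requests.RequestException:
            continue  # most random IPs won't answer on port 80
    return None

Expect the overwhelming majority of attempts to time out; that low hit rate is what makes this method slow despite being simple.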
I found a skeleton proxy based on Twisted on Stack Overflow and managed to get it working, but I wonder whether it can help me solve my problem:
I would like to be able to track the dependencies between the data that passes through this proxy (I am new to this network behaviour and do not know how to do it). That is, I would like to be able to determine that a particular Ajax response follows a call made by a particular piece of JavaScript, which was itself loaded from a particular URL/HTML page, and so on.
Is this possible?
(As another example, I would like to be able to record/log something similar to the tree structure shown in Firebug, so that the proxy can say: "I was first asked to go to this URL", and then "this particular xyz.js file" and "this azerty.css" were both loaded because they depended on that first URL. Another example: for an Ajax answer/response, being able to know that it is linked to the initial page URL.)
(And so on, on a potentially recursive basis. I mean that I need to determine which external files/data/Ajax responses are loaded afterwards by the initial HTML page passing through the proxy.)
Can I do this kind of tracing from a proxy based on Twisted?
Maybe I am wrong in thinking that a proxy can do this without parsing/understanding the complete initial URL. If so, my trouble will be that I must handle JavaScript, and thus be able to parse and execute it :/
Thanks.
First edit: removed my attempts, as asked. Sorry, I have some difficulty phrasing my question because I am not an expert in internet communication.
EDIT: Following Monoid's comment, is it at least possible to pair a particular Ajax answer with a JavaScript file or call, from the proxy's point of view?
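A proxy cannot see true causal links without actually executing the page's JavaScript, but the HTTP Referer header gives a useful approximation of the dependency tree: requests for xyz.js, azerty.css, or an Ajax endpoint normally carry the page (or script) that triggered them as their referrer. A minimal sketch of logging this from a Twisted proxy, using the standard twisted.web.proxy classes (the port number is an arbitrary choice):

from twisted.internet import reactor
from twisted.web import http, proxy

class LoggingProxyRequest(proxy.ProxyRequest):
    def process(self):
        # The Referer header names the resource that triggered this request,
        # which approximates a parent -> child edge in the dependency tree
        referer = self.getHeader("referer")
        print(f"{self.uri.decode()} <- {referer or '(no referer)'}")
        proxy.ProxyRequest.process(self)

class LoggingProxy(proxy.Proxy):
    requestFactory = LoggingProxyRequest

class LoggingProxyFactory(http.HTTPFactory):
    protocol = LoggingProxy

reactor.listenTCP(8080, LoggingProxyFactory())
reactor.run()

Note that this only observes plain HTTP traffic (HTTPS is tunnelled through a proxy and is opaque to it), and Referer-based pairing is a heuristic: it can tell you an Ajax request came from a given page or script, but not which exact JavaScript call issued it.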
I'm trying to request parameters from the TwiML Voice Requests. I'm especially interested in the Gather verb, which will let me figure out how many users "pressed" a specific digit.
http://www.twilio.com/docs/api/twiml/twilio_request#synchronous-request-parameters
http://www.twilio.com/docs/api/twiml/gather#attributes-action
I've seen that most of the example code is in XML, but I was wondering if there is equivalent Python code for the above.
The Gather verb is created in XML, but you can use a static template such as the one here:
http://www.twilio.com/docs/howto/weatherbyphone#step2
The key things are specifying how many digits you expect and what the next step (action) is.
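If you would rather generate the TwiML from Python than keep a static XML template, the twilio helper library can build the Gather verb for you. A minimal sketch (the action URL is a placeholder, and the API shown is that of the current twilio-python library, which may differ from the version in the linked docs):

from twilio.twiml.voice_response import Gather, VoiceResponse

response = VoiceResponse()
# Ask the caller for one digit, then POST it to /handle-key (placeholder route)
gather = Gather(num_digits=1, action="/handle-key", method="POST")
gather.say("Press 1 for weather, or press 2 for news.")
response.append(gather)
print(str(response))  # the XML document Twilio expects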
Then, after the user presses a digit, you can process the result with Python too. Here's an example from our Howtos:
http://www.twilio.com/docs/howto/weatherbyphone#step3
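On the processing side, Twilio sends the pressed digits as the "Digits" parameter in its request to your action URL. A minimal sketch with Flask (the route name matches the placeholder used above):

from flask import Flask, request
from twilio.twiml.voice_response import VoiceResponse

app = Flask(__name__)

@app.route("/handle-key", methods=["POST"])
def handle_key():
    digit = request.form.get("Digits")  # what the caller pressed
    response = VoiceResponse()
    response.say(f"You pressed {digit}.")
    return str(response)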
Does that make sense?