I'm trying to request parameters from the TwiML Voice Requests. I'm especially interested in the Gather verb, which will allow me to figure out how many users pressed a specific digit.
http://www.twilio.com/docs/api/twiml/twilio_request#synchronous-request-parameters
http://www.twilio.com/docs/api/twiml/gather#attributes-action
I've seen that most of the code is in XML, but I was wondering if there is a Python equivalent of the above.
The Gather verb is created in XML but you can use a static template such as the one here:
http://www.twilio.com/docs/howto/weatherbyphone#step2
The key things are specifying how many digits you expect and what the next step (action) is.
Then, after the user presses the digits, you can process the results with Python too. Here's an example from our Howtos:
http://www.twilio.com/docs/howto/weatherbyphone#step3
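In case it helps, here is a minimal sketch of the same idea in Python, assuming Flask and the current twilio helper library (the route names and prompt text are placeholders, not the exact Howto code):

from flask import Flask, request
from twilio.twiml.voice_response import VoiceResponse, Gather

app = Flask(__name__)

@app.route("/voice", methods=["POST"])
def voice():
    # Build the <Gather> TwiML: expect one digit, then POST it to /handle-digits.
    response = VoiceResponse()
    gather = Gather(num_digits=1, action="/handle-digits", method="POST")
    gather.say("Press 1 for the weather forecast, press 2 to hang up.")
    response.append(gather)
    return str(response)

@app.route("/handle-digits", methods=["POST"])
def handle_digits():
    # Twilio sends the pressed digit(s) in the 'Digits' request parameter.
    digits = request.form.get("Digits")
    response = VoiceResponse()
    response.say("You pressed {}.".format(digits))
    return str(response)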
Does that make sense?
How do I handle multiple redirects to one site with different parameters in the HTTP body? In my code, I have a loop with an HTTP redirect... It is supposed to loop and do redirects with different parameters as many times as I have parameters. But it just does a one-time redirect, goes to the web site, and I end up with only one redirect instead of multiple ones. I'm really only interested in simple sequential redirects, nothing difficult like parallel multi-threading. My code in the view looks like this:
for code in codes:
    print(code)
    base_url = 'https://base_url/'
    code_part = 'code={}'.format(code)
    url = '{}?{}'.format(base_url, code_part)
    return redirect(url)
I thought about wrapping this in a parent-child function that would call itself as many times as there are entries in the list, but I think I would end up with the same result as a normal for loop. I also saw the redirects application, but I am not sure it helps with this exact task. However I implement it, as soon as I call redirect, the program leaves for the external web site and the function stops.
update
I was asked to provide more of the code so it would help to give an answer to my question. But that is the thing: the only relevant code is in my view function, which I included in the question, and I'm still thinking about how to approach the problem, so I don't have any other code at this time. Unfortunately :( Any push in the right direction would be very helpful. Thank you!
update
Unfortunately, the redirects app for Django doesn't suit doing many queries to one site with different parameters. It is meant to handle 404 errors, and it creates a 'moved permanently' link in its table...
The only problem I see in your code is the indentation.
I just rewrote your code so it is a little simpler.
Maybe the problem is in the redirect() method that you are returning at the end of your loop.
for code in codes:
    print(code)
    return redirect(f'https://base_url/?code={code}')
Also, when you return something you actually exit the whole function. This means that whatever function your for loop is part of stops, and the for loop never goes any further than the first iteration.
So if you want the results from each code then you should write this instead:
results = []
for code in codes:
    print(code)
    results.append(redirect(f'https://base_url/?code={code}'))
return results
I'm coming to you with the following issue:
I have a bunch of physical boxes onto which I stick QR codes generated using a Python module named qrcode. In a nutshell, what I would like to do is: every time someone wants to take the object contained in a box, he scans the QR code with his phone, then takes the object and puts it back when he is done, not forgetting to scan the QR code again.
Pretty simple, isn't it?
I already have a django table containing all my objects.
Now my question is related to the design. I suspect the easiest way to achieve this is to have a POST request link in the QR code which will create a new entry in a table with the name of the object that has been picked up or put back, along with the time (I would like to store this information).
If that's the correct way to do it, how would you approach it? I'm not too sure how to make a POST request with a QR code. Would you have any idea?
Thanks.
PS: Another alternative I can think of would be to put a link in the QR code to a form with a dummy button the user would click on. Once clicked, the button would update the database. But I would find a solution without any button more convenient...
The question boils down to a few choices: (a) what data do you want to encode into the QR code; (b) what app will you use to scan the QR code; and (c) how do you want the app to use / respond to the encoded data.
If you want your users to use off-the-shelf QR code readers (like free smartphone apps), then encoding a full URL to the appropriate API on your backend makes sense. Whether this should be a GET or POST depends on the QR code reader. I'd expect most to use GET, but you should verify that for your choice of app. That should be functionally fine, if you don't have any concerns about who should be able to scan the code.
If you want more control, e.g. you'd like to keep track of who scanned the code or other info not available to the server side just from a static URL request, you need a different approach. Something like, store the item ID (not URL) in the QR code; create your own simple QR code scanner app (many good examples exist) and add a little extra logic to that client, like requiring the user to log in with an ID + password, and build the URL dynamically from the item ID and the user ID. Many security variations possible (like JWT token) -- how you do that won't be dictated by the contents of the QR code. You could do a lot of other things in that QR code scanner / client, like add GPS location, ask the user to indicate why or where they're taking the item, etc.
So you can choose between a simple way with no controls, and a more complex way that would allow you to layer in whatever other controls and extra data you need.
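As a concrete illustration of the off-the-shelf route, encoding a full backend URL with the qrcode module already mentioned in the question might look like this (the URL is just a placeholder):

import qrcode

# Encode the backend endpoint for one specific box into a QR image.
url = "https://example.com/api/scan?id=12345"  # placeholder endpoint
img = qrcode.make(url)
img.save("box_12345.png")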
If security is not a big concern: an API with a simple GET endpoint that takes the object ID as an argument; I'll presume you already have the code to make sure that if the object is marked as taken it gets switched to returned.
And why not POST? A POST needs headers that you can't include in a QR code unless you have a dedicated app, so GET, and the ability to use example.com/api/leaseandret?id=12345, is a better alternative that allows for better usage with a QR code.
A summary of the methods*
* A note here: GET is not forbidden from being used to modify data or to send data to a server; it is only from a REST-purist standpoint that GET is exclusively for getting data.
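A rough sketch of what such a GET endpoint could look like in Django (the Box and ScanEvent models and their field names are assumptions, not code you already have):

from django.http import HttpResponse
from django.utils import timezone
from django.views.decorators.http import require_GET

from .models import Box, ScanEvent  # hypothetical models

@require_GET
def lease_and_return(request):
    # The QR code encodes e.g. example.com/api/leaseandret?id=12345
    box = Box.objects.get(pk=request.GET["id"])
    box.is_taken = not box.is_taken  # toggle between taken and returned
    box.save()
    ScanEvent.objects.create(box=box, taken=box.is_taken, scanned_at=timezone.now())
    return HttpResponse("Taken" if box.is_taken else "Returned")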
Please help me with the best/easiest approach for the scenario below:
URL A has 'n' number of stacks, and under each stack many users would be added and removed on a regular basis. I have built a Python script using API calls (JSON) which adds all the stack and host information to URL B.
Now, whenever an event (adding/removing hosts under stacks) occurs in URL A, it has to be reflected in URL B. What would be the best approach to accomplish this?
Recently, I've been trying to get to grips with Scrapy. I feel that if I had a better understanding of the architecture, I'd move a lot faster. The current, concrete problem I have is this: I want to store all of the links that Scrapy extracts in a database, not the responses, just the links. This is for sanity checking.
My initial thought was to use the process_links parameter on a rule and generate items in the function that it points to. However, whereas the callback parameter points to a function that is an item generator, the process_links parameter works more like a filter. In the callback function you yield items and they are automatically collected and put in the pipeline. In the process_links function you return a list of links. You don't generate items.
I could just make a database connection in the process_links function and write directly to the database, but that doesn't feel like the right way to go when Scrapy has built-in asynchronous database transaction processing via Twisted.
I could try to pass items from the process_links function to the callback function, but I'm not sure about the relationship between the two functions. One is used to generate items, and one receives a list and has to return a list.
In trying to think this through, I keep coming up against the fact that I don't understand the control loop within Scrapy. What is the process that is reading the items yielded by the callback function? What's the process that supplies the links to, and receives the links from, the process_links function? The one that takes requests and returns responses?
From my point of view, I write code in a spider which generates items. The items are automatically read and moved through a pipeline. I can create code in the pipeline and the items will be automatically passed into and taken out of that code. What's missing is my understanding of exactly how these items get moved through the pipeline.
Looking through the code I can see that the base code for a spider is hiding away in a corner, as all good spiders should, and going under the name of __init__.py. It contains the start_requests() and make_requests_from_url() functions which, according to the docs, are the starting points. But it's not a controlling loop. It's being called by something else.
Going from the opposite direction, I can see that when I execute the command scrapy crawl... I'm calling crawl.py which in turn calls self.crawler_process.start() in crawler.py. That starts a Twisted reactor. There is also core/engine.py which is another collection of functions which look as though they are designed to control the operation of the spiders.
Despite looking through the code, I don't have a clear mental image of the entire process. I realise that the idea of a framework is that it hides much of the complexity, but I feel that with a better understanding of what is going on, I could make better use of the framework.
Sorry for the long post. If anyone can give me an answer to my specific problem regarding saving links to the database, that would be great. If you were able to give a brief overview of the architecture, that would be extremely helpful.
This is how Scrapy works in short:
You have Spiders which are responsible for crawling sites. You can use separate spiders for separate sites/tasks.
You provide one or more start URLs to the spider. You can provide them as a list or use the start_requests method.
When we run a spider using Scrapy, it takes these URLs and fetches the HTML response. The response is passed to the callback on the spider class. You can explicitly define a callback when using the start_requests method. If you don't, Scrapy will use the parse method as the callback.
You can extract whatever data you need from the HTML. The response object you get in the parse callback allows you to extract the data using CSS selectors or XPath.
If you find the data from the response, you can construct the Items and yield them. If you need to go to another page, you can yield scrapy.Request.
If you yield a dictionary or Item object, Scrapy will send it through the registered pipelines. If you yield a scrapy.Request, the request will be scheduled and fetched, and its response will be fed back to a callback. Again, you can define a separate callback or use the default one.
In the pipelines, your data (dictionary or Item) goes through the pipeline processors. In the pipelines you can store it in a database or do whatever else you want with it.
So in short:
In the parse method, or in any method inside the spider, we extract and yield our data so it is sent through the pipelines.
In the pipelines, you do the actual processing.
Here's a simple spider and pipeline example: https://gist.github.com/masnun/e85b38a00a74737bb3eb
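To tie this back to your link-storing problem, a minimal sketch of that flow might look like this (the item shape, the SQLite storage, and the LinkSpider/SqliteLinkPipeline names are my assumptions, not part of the gist above):

import sqlite3
import scrapy

class LinkSpider(scrapy.Spider):
    name = "links"
    start_urls = ["https://example.com"]

    def parse(self, response):
        # Yield every link on the page as a plain dict, so it goes through the pipelines.
        for href in response.css("a::attr(href)").extract():
            yield {"url": response.urljoin(href)}

class SqliteLinkPipeline(object):
    # Stores each yielded link in a SQLite table.
    def open_spider(self, spider):
        self.conn = sqlite3.connect("links.db")
        self.conn.execute("CREATE TABLE IF NOT EXISTS links (url TEXT)")

    def process_item(self, item, spider):
        self.conn.execute("INSERT INTO links (url) VALUES (?)", (item["url"],))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.conn.close()

You would then enable the pipeline through the ITEM_PIPELINES setting in settings.py.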
I started using Scrapy not so long ago and I had some of your doubts myself (also considering I started with Python overall), but now it works for me, so don’t get discouraged – it’s a nice framework.
First, I would not get too worried at this stage about the details behind the framework, but rather start writing some basic spiders yourself.
Some of the really key concepts are:
start_urls – these define an initial URL (or URLs) where you will further look either for text or for further links to crawl. Let's say you want to start from e.g. http://x.com
parse(self, response) method – this will be the first method that is called, and it gives you the response of http://x.com (basically its HTML markup).
You can use XPath or CSS selectors to extract information from this markup, e.g. a = response.xpath('//div[@class="foo"]/@href') will extract the link to a page (e.g. http://y.com)
If you want to extract the text of the link, so literally "http://y.com", you just yield (return) an item within the parse(self, response) method, so your final statement in this method will be yield item. If you want to go deeper and drill down into http://y.com, your final statement will be yield scrapy.Request(a, callback=self.parse_final) - parse_final being here an example of a callback to the parse_final(self, response) method.
Then you can extract the HTML elements of http://y.com in the parse_final(self, response) method, or keep repeating the process to dig for further links in the page structure.
Pipelines are for processing items. When items get yielded, they are by default just printed on the screen. In pipelines you can redirect them to a CSV file, a database, etc.
The entire process gets more complex, when you start getting more links in each of the methods, based on various conditions you call various callbacks etc. I think you should start with getting this concept first, before going to pipelines. The examples from Scrapy are somewhat difficult to get, but once you get the idea it is really nice and not that complicated in the end.
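A bare-bones sketch of that parse -> parse_final chain (x.com, y.com and the div selector are just the placeholders from the explanation above):

import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["http://x.com"]

    def parse(self, response):
        # First callback: runs on the response for http://x.com.
        for a in response.xpath('//div[@class="foo"]/@href').extract():
            # Follow each extracted link and process it in parse_final.
            yield scrapy.Request(response.urljoin(a), callback=self.parse_final)

    def parse_final(self, response):
        # Second callback: runs on the followed page (e.g. http://y.com).
        yield {"url": response.url, "title": response.xpath("//title/text()").extract_first()}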
I have found on Stack Overflow, and managed to use, a skeleton proxy based on Twisted, but I wonder if it can help me solve my problem:
I would like to be able to remember the dependencies between the data that pass through this proxy (I'm new to this network behaviour and don't know how to do it). That is, I would like to be able to determine that this particular AJAX response follows a call made by this JavaScript file, which was itself loaded from this URL/HTML page, and so on.
Is it possible?
(As another example, I would like to be able to record/log in a similar way to the tree structure that is available when using Firebug, so that the proxy can log and say "I was first asked to go to this URL", then "this particular xyz.js file" and "this azerty.css" were both loaded because they depended on that first URL. Another example: for an AJAX response, being able to know that it is linked to the initial page URL.)
(And so on, on a possibly recursive basis... In other words, I need to determine which external files/data/AJAX responses are loaded afterwards by the initial HTML page passing through the proxy.)
Can I do this kind of tracing from a proxy based on Twisted?
Maybe I am wrong in thinking that a proxy can do this without parsing/understanding the complete initial page. If so, my trouble will be that I must handle JavaScript and thus be able to parse and execute it :/
thanks
First edit: removing my attempts, as asked. Sorry, I have some difficulties phrasing my question because I am not an expert in internet communication.
EDIT: Following Monoid's comment, is it at least possible to pair a particular AJAX response with a JavaScript file or call, from the proxy's point of view?