Getting additional input from SlackClient using Python

One of the issues I'm running into is getting additional data from a command in Slack. I don't want to use slash commands because I can't expose my localhost to the world.
Example:
#mybot do
will return, let's say, "I'm doing something". However, I want to be able to do something like
#mybot do 2
where 2 is a parameter on the back end. Basically, what I'm trying to do is have the user say #mybot do 2 and have it get data from the database where the ID is 2. You could make it 3, 4, 5, etc. and the command would pull that information from the database. I have found how to make it match the exact "do" command, but I can't get it to read the follow-on data. I was following this tutorial. Any help would be great.

You need to use regular expressions to extract arguments from the text. I hope this helps.
import re

def handle_command(command, channel):
    response = "Not sure what you mean. Use the *" + EXAMPLE_COMMAND + \
               "* command with numbers, delimited by spaces."
    match = re.match(r"do (?P<arg>\S+)", command)
    if match:
        arg = match.group('arg')  # group(), not groupdict(): groupdict() returns a dict
        response = "Wow! My argument is: " + arg
    slack_client.api_call("chat.postMessage", channel=channel,
                          text=response, as_user=True)

How to get "additional information"
You will get the complete input string in the text property, e.g. "do 2". All you need to do is split the string into words. I am not a Python developer, but apparently split() will do the job.
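For instance, a minimal sketch (assuming the incoming message text looks like "do 2"):

```python
# Split the raw message text into a keyword and its arguments.
text = "do 2"  # hypothetical incoming message text
parts = text.split()
keyword = parts[0]   # "do"
args = parts[1:]     # ["2"]
```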
Exposing your localhost
I would strongly recommend going ahead and exposing your localhost through a secure tunnel. It makes development so much easier. You can use ngrok to securely expose your localhost to Slack.
"Don't want to use slash commands"
You will always need an app (e.g. a Python script) on an exposed host for any custom functionality to work with Slack. Actually, slash commands are easier to implement than the Events API or RTM, so I would recommend them for your case.

Py3 imaplib: get only immediate body (no reply) of email

There are two pre-existing questions on the site: one for Python, one for Java.
Java: How to remove the quoted text from an email and only show the new text
Python: Reliable way to only get the email text, excluding previous emails
I want to be able to do pretty much exactly the same (in PHP). I've created a mail proxy where two people can have a correspondence together by emailing a unique email address.
The problem I am finding, however, is that when a person receives the email and hits reply, I am struggling to accurately capture the text they have written and discard the quoted text from the previous correspondence.
I'm trying to find a solution that will work for both HTML emails and Plaintext email, because I am sending both.
I also have the ability, if it helps, to insert some <*****RESPOND ABOVE HERE*******> tag if necessary in the emails, meaning that I can discard everything below it.
What would you recommend I do? Always add that tag to the HTML copy and the plaintext copy then grab everything above it?
I would still then be left with the problem of knowing how each mail client formats the response, because, for example, Gmail would do this:
On Wed, Nov 2, 2011 at 10:34 AM, Message Platform <35227817-7cfa-46af-a190-390fa8d64a23@dev.example.com> wrote:
## In replies all text above this line is added to your message conversation ##
Any suggestions or recommendations of best practices?
Or should I just grab the 50 most popular mail clients and start creating custom regexes for each? Then for each of these clients, also a bazillion different locale settings, since I'm guessing the locale of the user will also influence what is added.
Or should I just always remove the preceding line if it contains a date? Etc.
Unfortunately, you're in for a world of hurt if you want to try to clean up emails meticulously (removing everything that's not part of the actual reply email itself). The ideal way would be to, as you suggest, write up regex for each popular email client/service, but that's a pretty ridiculous amount of work, and I recommend being lazy and dumb about it.
Interestingly enough, even Facebook engineers have trouble with this problem, and Google has a patent on a method for "Detecting quoted text".
There are three solutions you might find acceptable:
Leave It Alone
The first solution is to just leave everything in the message. Most email clients do this, and nobody seems to complain. Of course, online message systems (like Facebook's 'Messages') look pretty odd if they have inception-style replies. One sneaky way to make this work okay is to render the message with any quoted lines collapsed, and include a little link to 'expand quoted text'.
Separate the Reply from the Older Message
The second solution, as you mention, is to put a delineating message at the top of your messages, like --------- please reply above this line ----------, and then strip that line and anything below when processing the replies. Many systems do this, and it's not the worst thing in the world... but it does make your email look more 'automated' and less personal (in my opinion).
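A minimal sketch of that delimiter approach in Python (the question is about PHP, but the idea is language-agnostic; the marker string here is just an example):

```python
def strip_below_marker(body, marker="--------- please reply above this line ----------"):
    # Keep only the text above the delimiter; if the marker is missing,
    # return the body unchanged.
    idx = body.find(marker)
    return body[:idx].rstrip() if idx != -1 else body
```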
Strip Out Quoted Text
The last solution is to simply strip out any new line beginning with a >, which is, presumably, a quoted line from the reply email. Most email clients use this method of indicating quoted text. Here's some regex (in PHP) that would do just that:
$clean_text = preg_replace('/(^\w.+:\n)?(^>.*(\n|$))+/mi', '', $message_body);
There are some problems using this simpler method:
Many email clients also allow people to quote earlier emails and preface those quoted lines with > as well, so you'll be stripping out intentional quotes too.
Usually, there's a line above the quoted email with something like On [date], [person] said. This line is hard to remove, because it's not formatted the same among different email clients, and it may be one or two lines above the quoted text you removed. I've implemented this detection method, with moderate success, in my PHP Imap library.
Of course, testing is key, and the tradeoffs might be worth it for your particular system. YMMV.
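A rough Python equivalent of that PHP pattern, as a sketch (it removes an optional attribution line followed by a run of '>'-quoted lines, and assumes plaintext input):

```python
import re

def strip_quoted(body):
    # Remove an optional attribution line (e.g. "On <date>, <person> wrote:")
    # followed by one or more '>'-quoted lines.
    return re.sub(r"(^\w.+:\n)?(^>.*(\n|$))+", "", body, flags=re.M | re.I)
```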
There are many libraries out there that can help you extract the reply/signature from a message:
Ruby: https://github.com/github/email_reply_parser
Python: https://github.com/zapier/email-reply-parser or https://github.com/mailgun/talon
JavaScript: https://github.com/turt2live/node-email-reply-parser
Java: https://github.com/Driftt/EmailReplyParser
PHP: https://github.com/willdurand/EmailReplyParser
I've also read that Mailgun has a service to parse inbound email and POST its content to a URL of your choice. It will automatically strip quoted text from your emails: https://www.mailgun.com/blog/handle-incoming-emails-like-a-pro-mailgun-api-2-0/
Hope this helps!
Possibly helpful: quotequail is a Python library that helps identify quoted text in emails
AFAIK, (standard) emails should quote the whole text by adding a ">" in front of every line, which you could strip using strstr(). Otherwise, did you try to port that Java example to PHP? It's nothing more than regex.
Even sites like GitHub and Facebook have this problem.
Just an idea: you have the text which was originally sent, so you can look for it in the reply and remove it, along with the surrounding noise. This is not trivial, because mail clients add extra line breaks, HTML elements, and ">" characters.
The regex is definitely better if it works, because it is simple and it perfectly cuts the original text, but if you find that it frequently does not work then this can be a fallback method.
I agree that quoted text in a reply is still just text, so there is no fully reliable way to extract it. Anyway, you can use a regexp replace like this:
$filteringMessage = preg_replace('/.*\n\n((^>+\s{1}.*$)+\n?)+/mi', '', $message);
Test
https://regex101.com/r/xO8nI1/2

I cannot calculate a working AWS Signature Version 4 (hexadecimal string) for curl commands to test the REST API

I have never been able to get REST APIs to completely work with AWS. The error messages I have seen have been about the time not being correct or the command not being recognized (e.g., list-users). I verified the "version" was appropriate for the command with AWS's documentation.
I am trying to use curl on Linux to list the users or instances in my AWS account, but I have a problem when I run it. My current error, the one I would like to focus on, is "The request signature we calculated does not match the signature you provided." I went through the process of creating a signature carefully. It wasn't that surprising that it did not work, given the hours of trouble and the many potential pitfalls in the tedious task of creating a signature.
I used this link to generate the hexadecimal string for the signature:
http://docs.aws.amazon.com/general/latest/gr/signature-v4-examples.html#signature-v4-examples-python
I analyzed the output of the signing key using a modification of the Python code from that link. The result is neither hexadecimal nor alphanumeric; it is a mix of non-alphanumeric symbols and letters. I tried to work around this by using import binascii and binascii.hexlify, and with that I was able to get a hexadecimal string while otherwise strictly adhering to the sample Python code from the link. I tend to think my signing key is not right because of this binascii work I had to do. But what did I do wrong? How is that Python code supposed to calculate a signature?
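For reference, here is the key-derivation step as I understand it from that page's Python sample; note that every intermediate key is raw bytes, and only the final signature over the string-to-sign is hex-encoded:

```python
import hashlib
import hmac

def sign(key, msg):
    # HMAC-SHA256 returning raw bytes (not hex).
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

def get_signature_key(secret_key, date_stamp, region, service):
    # Each intermediate key is binary; that is expected.
    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")

# Only the final signature is hex-encoded, e.g.:
# signature = hmac.new(signing_key, string_to_sign.encode("utf-8"),
#                      hashlib.sha256).hexdigest()
```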
Alternatively, are there thorough directions, not written by Amazon, for creating a signing key? The process is not simple and seems error-prone. I could start over with creating a signature if someone can clearly tell me how. Amazon's forums have few postings related to this topic. I'd prefer to create the signature with Python; if someone recommends Ruby (an accessible language for me), I could try that instead.

Specifying doc-id and plot-id (i.e. the URL) for output_server()

I would like to be able to specify the URL where output_server publishes my app in bokeh-server (as an example, I am trying to do this in the animate_widgets.py example that Christine Doig presented at SciPy 2015).
I am already running bokeh-server in a separate terminal. When I run my app with output_server, is there any way to specify the URL where the app will be rendered?
It seems to currently follow the syntax:
http://localhost:5006/bokeh/doc/some-doc-id/some-plot-id
but I don't see the ability to specify those fields <doc-id> and <plot-id> with output_server (documentation for output_server below).
Is there any way to specify where exactly (URL-wise) I want the app to be published?
Note that just entering the string http://localhost:5006/bokeh/doc/some-doc-id/some-plot-id as URL for output_server() does not work.
The short answer is: not really. Those URLs are meant to be unambiguous and to avoid collisions. Letting users choose their own URLs would be fairly unworkable in the general multi-user scenario. But that's OK: what you probably really want is to embed a Bokeh server plot in your own document (as opposed to just linking to the bare page that has the plot and nothing else). This you can accomplish easily with server_session:
https://docs.bokeh.org/en/latest/docs/user_guide/embed.html#bokeh-applications
Edit: I won't say it's actually impossible, but it's far enough outside normal usage that I don't know offhand how you could accomplish it, and even if you could, it would probably not be advisable, for several reasons.

Shortcode SMS Campaigns - Python

I have built an app that handles incoming SMS for a short number. We might have several campaigns running at the same time, each with different mechanics, keywords, etc.
At the moment I am using Django to handle this.
My biggest concern at the moment is that the consumers using our service are in a very low-LSM market in South Africa. The SMSs arrive in weird formats and I get a lot of wrong entries.
I'm trying to rethink the way I interpret the SMS and would like some ideas.
At the moment it basically works like this:
I get the SMS and split it on ' or *, then look first for the KEYWORD. All campaigns have keywords, and I run through a list of live campaign keywords to see if there is a match in the message. Then I go on through the split message and compare or find more words as needed. I need to reply based on the message, e.g. when the KEYWORD is there but the 2nd or 3rd (or whatever) parameter is not.
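The current matching looks roughly like this (a simplified sketch; the keyword set is made up):

```python
import re

LIVE_KEYWORDS = {"WIN", "PLAY"}  # hypothetical live campaign keywords

def parse_sms(text):
    # Split on quotes, asterisks or whitespace, ignoring empty pieces,
    # then look for the first live keyword; everything after it is a parameter.
    parts = [p for p in re.split(r"[\s'*]+", text.strip().upper()) if p]
    for i, part in enumerate(parts):
        if part in LIVE_KEYWORDS:
            return part, parts[i + 1:]
    return None, []
```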
Megan from Twilio here.
You may or may not be using a Twilio short code, but I think your question relates to a little guide we have on How to Build an SMS Keyword Response Application.
The example there is in PHP but you can get started in Python here and your code might look like:
def sms():
    r = twiml.Response()
    body = request.form['Body']
    if KEYWORD in body:
        pass  # do some action

Scripting HTTP more efficiently

Often I want to automate HTTP queries. I currently use Java (and Commons HttpClient), but would prefer a scripting-based approach: something really quick and simple, where I can set a header and go to a page without worrying about setting up the entire OO lifecycle, setting each header, or calling up an HTML parser. I am looking for a solution in any language, preferably a scripting one.
Have a look at Selenium. It generates code for C#, Java, Perl, PHP, Python, and Ruby if you need to customize the script.
Watir sounds close to what you want, although it (like Selenium, linked to in another answer) actually opens up a browser to do its work. You can see some examples here. Another browser-based record-and-playback system is Sahi.
If your application uses WSGI, then paste is a nice option.
Mechanize, linked to in another answer, is a "browser in a library", and there are clones in Perl, Ruby and Python. The Perl one is the original, and it seems to be the way to go if you don't want a browser. The problem with this approach is that none of the front-end code (which might rely on JavaScript) will be exercised.
Mechanize for Python seems easy to use: http://wwwsearch.sourceforge.net/mechanize/
My turn: wget, or Perl with LWP. You'll find examples on the linked page.
If you have simple needs (fetch a page and then parse it), it is hard to beat LWP::Simple and HTML::TreeBuilder.
use strict;
use warnings;
use LWP::Simple;
use HTML::TreeBuilder;
my $url = 'http://www.example.com';
my $content = get($url) or die "Couldn't get $url";
my $t = HTML::TreeBuilder->new_from_content( $content );
$t->eof;
$t->elementify;
# Get first match:
my $thing = $t->look_down( _tag => 'p', id => qr/match_this_regex/ );
print $thing ? $thing->as_text : "No match found\n";
# Get all matches:
my @things = $t->look_down( _tag => 'p', id => qr/match_this_regex/ );
print $_ ? $_->as_text : "No match found" for @things;
I'm testing REST APIs at the moment and found the RESTClient very nice. It's a GUI program, but you can nonetheless save and restore queries as XML files (or have them generated), embed them, write test scripts, and so on. And it's Java-based (not an inherent advantage, but you mentioned Java).
Minus points for recording sessions: the RESTClient is good for stateless "one-shots".
If it doesn't suit your needs, I'd go for the already-mentioned Mechanize (or WWW::Mechanize, as it is called on CPAN).
Depending on exactly what you're doing the easiest solution looks to be bash + curl.
The man page for the latter is available here:
http://curl.haxx.se/docs/manpage.html
You can do POSTs as well as GETs, HTTPS, show headers, work with cookies, use basic and digest HTTP authentication, and tunnel through all sorts of proxies, including NTLM on *nix, amongst other things.
curl is also available as shared library with C and PHP support.
HTH
C.
Python urllib may be what you're looking for.
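For example, a quick sketch with modern Python 3's urllib.request (the URL and header values are placeholders); setting a header and going to a page takes only a few lines:

```python
from urllib import request

# Build a GET request with a custom header; request.urlopen(req)
# would then fetch the page.
req = request.Request("http://example.com/page",
                      headers={"User-Agent": "my-script/1.0"})
```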
Alternatively, PowerShell exposes the full .NET HTTP library in a scripting environment.
Twill is pretty good and made for testing. It can be used as a script, in an interactive session, or within a Python program.
Perl and WWW::Mechanize can make web scraping etc. simple and easy, including easy handling of forms (say you want to go to a login page, fill in a username and password and submit the form, handling cookies/hidden session identifiers just as a browser would...).
Similarly, finding or extracting links from the fetched page is trivial.
If you need to parse stuff out of the resulting pages that WWW::Mechanize can't easily help with, then feed the result to HTML::TreeBuilder to make parsing easy.
What about using PHP+Curl, or just bash?
Some ruby libraries:
httparty: really interesting; its philosophy is appealing.
mechanize: a classic, good-quality web automation library.
scrubyt: puzzling at first glance but fun to use.
