I just tried the basic example of IBM's text to speech with Python:
!pip install ibm_watson
from ibm_watson import TextToSpeechV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator
apikey = 'my API KEY'
url = 'my SERVICE URL'
authenticator = IAMAuthenticator(apikey)
tts = TextToSpeechV1(authenticator=authenticator)
tts.set_service_url(url)
with open('./speech.mp3', 'wb') as audio_file:
    res = tts.synthesize('Hello World!', accept='audio/mp3', voice='en-US_AllisonV3Voice').get_result()
    audio_file.write(res.content)
But I get an error message:
DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode().
DecodeError Traceback (most recent call last)
<ipython-input-5-53b9d398591b> in <module>
1 with open('./speech.mp3', 'wb') as audio_file:
----> 2 res = tts.synthesize('Hello World!', accept='audio/mp3', voice='en-US_AllisonV3Voice').get_result()
3 audio_file.write(res.content)
c:\users\chris\appdata\local\programs\python\python39\lib\site-packages\ibm_watson\text_to_speech_v1.py in synthesize(self, text, accept, voice, customization_id, **kwargs)
275
276 url = '/v1/synthesize'
--> 277 request = self.prepare_request(method='POST',
278 url=url,
279 headers=headers,
c:\users\chris\appdata\local\programs\python\python39\lib\site-packages\ibm_cloud_sdk_core\base_service.py in prepare_request(self, method, url, headers, params, data, files, **kwargs)
295 request['data'] = data
296
--> 297 self.authenticator.authenticate(request)
298
299 # Next, we need to process the 'files' argument to try to fill in
c:\users\chris\appdata\local\programs\python\python39\lib\site-packages\ibm_cloud_sdk_core\authenticators\iam_authenticator.py in authenticate(self, req)
104 """
105 headers = req.get('headers')
--> 106 bearer_token = self.token_manager.get_token()
107 headers['Authorization'] = 'Bearer {0}'.format(bearer_token)
108
c:\users\chris\appdata\local\programs\python\python39\lib\site-packages\ibm_cloud_sdk_core\jwt_token_manager.py in get_token(self)
77 """
78 if self._is_token_expired():
---> 79 self.paced_request_token()
80
81 if self._token_needs_refresh():
c:\users\chris\appdata\local\programs\python\python39\lib\site-packages\ibm_cloud_sdk_core\jwt_token_manager.py in paced_request_token(self)
122 if not request_active:
123 token_response = self.request_token()
--> 124 self._save_token_info(token_response)
125 self.request_time = 0
126 return
c:\users\chris\appdata\local\programs\python\python39\lib\site-packages\ibm_cloud_sdk_core\jwt_token_manager.py in _save_token_info(self, token_response)
189
190 # The time of expiration is found by decoding the JWT access token
--> 191 decoded_response = jwt.decode(access_token, verify=False)
192 # exp is the time of expire and iat is the time of token retrieval
193 exp = decoded_response.get('exp')
c:\users\chris\appdata\local\programs\python\python39\lib\site-packages\jwt\api_jwt.py in decode(self, jwt, key, algorithms, options, **kwargs)
111 **kwargs,
112 ) -> Dict[str, Any]:
--> 113 decoded = self.decode_complete(jwt, key, algorithms, options, **kwargs)
114 return decoded["payload"]
115
c:\users\chris\appdata\local\programs\python\python39\lib\site-packages\jwt\api_jwt.py in decode_complete(self, jwt, key, algorithms, options, **kwargs)
77
78 if options["verify_signature"] and not algorithms:
---> 79 raise DecodeError(
80 'It is required that you pass in a value for the "algorithms" argument when calling decode().'
81 )
DecodeError: It is required that you pass in a value for the "algorithms" argument when calling decode().
The issue may be with the newest version of the PyJWT package (2.0.0).
Use
pip install PyJWT==1.7.1
to downgrade to the previous version; your project should work now. (This worked for me.)
Yes, there is definitely an issue with PyJWT > 2.0. As mentioned earlier, you need to uninstall the newer version and install a more stable one like 1.7.1.
It worked for me.
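Concretely, that is (the -y flag just skips the confirmation prompt):
pip uninstall -y PyJWT
pip install PyJWT==1.7.1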
I was having this problem with PyJWT==2.1.0.
My existing code (note that the payload passed to jwt.encode must be a dict, not a plain string):
import jwt
data = {'some': 'payload'}
key = 'your_secret_key'
encoded_data = jwt.encode(data, key)          # worked
decoded_data = jwt.decode(encoded_data, key)  # did not work
I passed the algorithm for encoding, and afterwards it worked fine.
Solution:
import jwt
data = {'some': 'payload'}
key = 'your_secret_key'
encoded_data = jwt.encode(data, key, algorithm="HS256")
decoded_data = jwt.decode(encoded_data, key, algorithms=["HS256"])
More information is available in the PyJWT docs.
I faced the same error when I was working with speech-to-text using ibm_watson. I solved the issue by installing PyJWT version 1.7.1; to do that, try:
pip install PyJWT==1.7.1
or
python -m pip install PyJWT==1.7.1
Good luck.
For those who want to use the latest version (as of writing, v2.1.0) of PyJWT:
If you don't want to stay on the older version (i.e. PyJWT==1.7.1) and want to upgrade for some reason, you need to use the verify_signature parameter and set it to False (it is True by default). In older versions (< 2.0.0) the parameter was called verify and you could pass it directly, but in the newer version you have to pass it inside the options parameter, which is of type dict:
jwt.decode(...., options={"verify_signature": False})
If you do want signature verification, simply pass the algorithms parameter instead:
jwt.decode(...., algorithms=['HS256'])
This is from the official changelog:
Dropped deprecated verify param in jwt.decode(...)
Use jwt.decode(encoded, key, options={"verify_signature": False}) instead.
Require explicit algorithms in jwt.decode(...) by default
Example: jwt.decode(encoded, key, algorithms=["HS256"]).
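To make both options concrete, here is a minimal self-contained sketch under PyJWT >= 2.0 (the payload and key are placeholders):
import jwt

payload = {'user_id': 42}  # placeholder payload
key = 'secret'             # placeholder key

token = jwt.encode(payload, key, algorithm='HS256')

# Option 1: verify the signature, passing the accepted algorithms explicitly.
print(jwt.decode(token, key, algorithms=['HS256']))

# Option 2: skip signature verification entirely (e.g. just to inspect the claims).
print(jwt.decode(token, options={'verify_signature': False}))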
I had this problem with version 2.3.0 of PyJWT.
I solved it like this:
jwt.decode(token, MY_SECRET, algorithms=['HS256'])
Note that you must use algorithms (plural), not algorithm.
I am trying to query public data from the BigQuery API (the Ethereum dataset) in my Colab notebook.
I have tried this
from google.colab import auth
auth.authenticate_user()
from google.cloud import bigquery
eth_project_id = 'crypto_ethereum_classic'
client = bigquery.Client(project=eth_project_id)
and receive this error message:
WARNING:google.auth._default:No project ID could be determined. Consider running `gcloud config set project` or setting the GOOGLE_CLOUD_PROJECT environment variable
I have also tried using the BigQueryHelper library and receive a similar error message
from bq_helper import BigQueryHelper
eth_dataset = BigQueryHelper(active_project="bigquery-public-data",dataset_name="crypto_ethereum_classic")
Error:
WARNING:google.auth._default:No project ID could be determined. Consider running `gcloud config set project` or setting the GOOGLE_CLOUD_PROJECT environment variable
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
<ipython-input-21-53ac8b2901e1> in <module>()
1 from bq_helper import BigQueryHelper
----> 2 eth_dataset = BigQueryHelper(active_project="bigquery-public-data",dataset_name="crypto_ethereum_classic")
/content/src/bq-helper/bq_helper.py in __init__(self, active_project, dataset_name, max_wait_seconds)
23 self.dataset_name = dataset_name
24 self.max_wait_seconds = max_wait_seconds
---> 25 self.client = bigquery.Client()
26 self.__dataset_ref = self.client.dataset(self.dataset_name, project=self.project_name)
27 self.dataset = None
/usr/local/lib/python3.6/dist-packages/google/cloud/bigquery/client.py in __init__(self, project, credentials, _http, location, default_query_job_config)
140 ):
141 super(Client, self).__init__(
--> 142 project=project, credentials=credentials, _http=_http
143 )
144 self._connection = Connection(self)
/usr/local/lib/python3.6/dist-packages/google/cloud/client.py in __init__(self, project, credentials, _http)
221
222 def __init__(self, project=None, credentials=None, _http=None):
--> 223 _ClientProjectMixin.__init__(self, project=project)
224 Client.__init__(self, credentials=credentials, _http=_http)
/usr/local/lib/python3.6/dist-packages/google/cloud/client.py in __init__(self, project)
176 if project is None:
177 raise EnvironmentError(
--> 178 "Project was not passed and could not be "
179 "determined from the environment."
180 )
OSError: Project was not passed and could not be determined from the environment.
Just to reiterate: I am using Colab. I know how to query the data on Kaggle, but I need to do it in my Colab notebook.
In Colab, you need to authenticate first:
from google.colab import auth
auth.authenticate_user()
That will authenticate your user account to a project.
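If you still see the "No project ID could be determined" warning after authenticating, pass a project of your own explicitly. A sketch, assuming you have a Google Cloud project to bill against (the project ID below is a placeholder) and that the public dataset follows the usual Ethereum schema (the table and column names are my assumption):
from google.colab import auth
from google.cloud import bigquery

auth.authenticate_user()

# The public data lives in the bigquery-public-data project;
# your own project is only used for billing.
client = bigquery.Client(project='my-billing-project')  # placeholder project ID

query = """
    SELECT number, timestamp
    FROM `bigquery-public-data.crypto_ethereum_classic.blocks`
    LIMIT 5
"""
for row in client.query(query).result():
    print(row.number, row.timestamp)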
I am trying to push a local Docker image to ECR using the Docker Python API. As part of that process I need to tag the image a certain way. When I do so on the CLI, it works:
docker tag foo/bar '{user_id}.dkr.ecr.us-east-1.amazonaws.com/foo/bar'
However when I try to do the same thing using the docker.images.Image.tag function in the Docker Python SDK it fails:
import docker
(docker.client.from_env().images.get('foo/bar')
.tag('foo/bar',
'{user-id}.dkr.ecr.us-east-1.amazonaws.com/foo/bar'
)
)
(replace user_id in the code samples above with an AWS user id value, e.g. 717171717171; I've obfuscated it here for the purposes of this question)
With the following error:
In [10]: docker.client.from_env().images.get('foo/bar').tag('foo/bar', '{user_id}.dkr.ecr.us-east-1.amazonaws.com/foo/bar')
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
~/miniconda3/envs/alekseylearn-dev/lib/python3.6/site-packages/docker/api/client.py in _raise_for_status(self, response)
255 try:
--> 256 response.raise_for_status()
257 except requests.exceptions.HTTPError as e:
~/miniconda3/envs/alekseylearn-dev/lib/python3.6/site-packages/requests/models.py in raise_for_status(self)
939 if http_error_msg:
--> 940 raise HTTPError(http_error_msg, response=self)
941
HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.35/images/sha256:afe07035bce72b6c496878a7e3960bedffd46c1bedc79f1bd2b89619e8457194/tag?tag={user_id}.dkr.ecr.us-east-1.amazonaws.com%2Ffoo%2Fbar&repo=foo%2Fbar&force=0
During handling of the above exception, another exception occurred:
APIError Traceback (most recent call last)
<ipython-input-10-5bb015d17409> in <module>
----> 1 docker.client.from_env().images.get('alekseylearn-example/build').tag('foo/bar', '{user_id}.dkr.ecr.us-east-1.amazonaws.com/foo/bar')
~/miniconda3/envs/alekseylearn-dev/lib/python3.6/site-packages/docker/models/images.py in tag(self, repository, tag, **kwargs)
120 (bool): ``True`` if successful
121 """
--> 122 return self.client.api.tag(self.id, repository, tag=tag, **kwargs)
123
124
~/miniconda3/envs/alekseylearn-dev/lib/python3.6/site-packages/docker/utils/decorators.py in wrapped(self, resource_id, *args, **kwargs)
17 'Resource ID was not provided'
18 )
---> 19 return f(self, resource_id, *args, **kwargs)
20 return wrapped
21 return decorator
~/miniconda3/envs/alekseylearn-dev/lib/python3.6/site-packages/docker/api/image.py in tag(self, image, repository, tag, force)
531 url = self._url("/images/{0}/tag", image)
532 res = self._post(url, params=params)
--> 533 self._raise_for_status(res)
534 return res.status_code == 201
535
~/miniconda3/envs/alekseylearn-dev/lib/python3.6/site-packages/docker/api/client.py in _raise_for_status(self, response)
256 response.raise_for_status()
257 except requests.exceptions.HTTPError as e:
--> 258 raise create_api_error_from_http_exception(e)
259
260 def _result(self, response, json=False, binary=False):
~/miniconda3/envs/alekseylearn-dev/lib/python3.6/site-packages/docker/errors.py in create_api_error_from_http_exception(e)
29 else:
30 cls = NotFound
---> 31 raise cls(e, response=response, explanation=explanation)
32
33
APIError: 500 Server Error: Internal Server Error ("invalid tag format")
Why does the CLI command succeed and the Python API command fail?
In detailed Docker API lingo, an image name like 123456789012.dkr.ecr.us-east-1.amazonaws.com/foo/bar:baz is split into a repository (before the colon) and a tag (after the colon). The host-name part of the repository name is a registry. The default tag value, if none is specified, is the literal latest.
In your case, you already have an Image object, so you need to pass the two "halves" of the name as separate arguments: the repository (including the registry host) and the tag:
(docker.client.from_env().images.get('foo/bar')
    .tag('{user-id}.dkr.ecr.us-east-1.amazonaws.com/foo/bar', 'latest'))
(In many practical cases, using the latest tag isn't a great idea; something like a timestamp or source-control commit ID better identifies the image and helps indicate to services like ECS, EKS, or plain Kubernetes that they need to do an update. Also, while ECR image names are impractically long, in a scripting context nothing stops you from using them directly; you can, for example, docker build -t 12345...amazonaws.com/foo/bar:abcdef0 and skip the intermediate docker tag step if you want.)
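Putting it together, a sketch of the full tag-and-push flow with the Python SDK; the account ID, region, and repository name are placeholders, and it assumes you have already authenticated Docker against ECR (e.g. with aws ecr get-login-password):
import docker

client = docker.from_env()
repository = '123456789012.dkr.ecr.us-east-1.amazonaws.com/foo/bar'  # placeholder account ID

image = client.images.get('foo/bar')
# repository and tag are separate arguments; the registry host stays in the repository
image.tag(repository, tag='latest')

# Stream the push output so failures (e.g. auth errors) are visible
for line in client.images.push(repository, tag='latest', stream=True, decode=True):
    print(line)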
I am trying to import some data from Kaggle into a notebook. The error I am receiving is a 401 Unauthorized; however, I have accepted the competition rules, and I am able to download the data from the website.
This is the code I am running:
from kaggle.api.kaggle_api_extended import KaggleApi
api = KaggleApi()
files = api.competition_download_files("twosigmanews")
api.competitions_submit("submission.csv", "my submission message", "twosigmanews")
EDIT: added more of the error below. No matter which Kaggle data I wish to import, I obtain the same error.
ApiException Traceback (most recent call last)
<ipython-input-7-65a92f19da82> in <module>()
2
3 api = KaggleApi()
----> 4 files = api.competition_download_files("twosigmanews")
5 api.competitions_submit("submission.csv", "my submission message", "twosigmanews")
~\Anaconda3\lib\site-packages\kaggle\api\kaggle_api_extended.py in competition_download_files(self, competition, path, force, quiet)
637 quiet: suppress verbose output (default is False)
638 """
--> 639 files = self.competition_list_files(competition)
640 if not files:
641 print('This competition does not have any available data files')
~\Anaconda3\lib\site-packages\kaggle\api\kaggle_api_extended.py in competition_list_files(self, competition)
554 """
555 competition_list_files_result = self.process_response(
--> 556 self.competitions_data_list_files_with_http_info(id=competition))
557 return [File(f) for f in competition_list_files_result]
558
~\Anaconda3\lib\site-packages\kaggle\api\kaggle_api.py in competitions_data_list_files_with_http_info(self, id, **kwargs)
416 _preload_content=params.get('_preload_content', True),
417 _request_timeout=params.get('_request_timeout'),
--> 418 collection_formats=collection_formats)
419
420 def competitions_list(self, **kwargs): # noqa: E501
~\Anaconda3\lib\site-packages\kaggle\api_client.py in call_api(self, resource_path, method, path_params, query_params, header_params, body, post_params, files, response_type, auth_settings, async_req, _return_http_data_only, collection_formats, _preload_content, _request_timeout)
332 response_type, auth_settings,
333 _return_http_data_only, collection_formats,
--> 334 _preload_content, _request_timeout)
335 else:
336 thread = self.pool.apply_async(self.__call_api, (resource_path,
~\Anaconda3\lib\site-packages\kaggle\api_client.py in __call_api(self, resource_path, method, path_params, query_params, header_params, body, post_params, files, response_type, auth_settings, _return_http_data_only, collection_formats, _preload_content, _request_timeout)
163 post_params=post_params, body=body,
164 _preload_content=_preload_content,
--> 165 _request_timeout=_request_timeout)
166
167 self.last_response = response_data
~\Anaconda3\lib\site-packages\kaggle\api_client.py in request(self, method, url, query_params, headers, post_params, body, _preload_content, _request_timeout)
353 _preload_content=_preload_content,
354 _request_timeout=_request_timeout,
--> 355 headers=headers)
356 elif method == "HEAD":
357 return self.rest_client.HEAD(url,
~\Anaconda3\lib\site-packages\kaggle\rest.py in GET(self, url, headers, query_params, _preload_content, _request_timeout)
249 _preload_content=_preload_content,
250 _request_timeout=_request_timeout,
--> 251 query_params=query_params)
252
253 def HEAD(self, url, headers=None, query_params=None, _preload_content=True,
~\Anaconda3\lib\site-packages\kaggle\rest.py in request(self, method, url, query_params, headers, body, post_params, _preload_content, _request_timeout)
239
240 if not 200 <= r.status <= 299:
--> 241 raise ApiException(http_resp=r)
242
243 return r
ApiException: (401)
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Cache-Control': 'private', 'Content-Length': '37', 'Content-Type': 'application/json; charset=utf-8', 'X-MiniProfiler-Ids': '["b1df1310-4d5b-4000-8f43-e5b6f4958a48","b9dcdaa4-64ef-4be1-bbbe-90fe664a81bd","db1868eb-0a12-4217-a89a-5cbb3946b0e7","b8166dda-a74f-4e64-8bd4-fe529e95bf04","205f9250-b5eb-4cfd-b94c-976778be8f17","229360b9-37d4-456f-b030-9e56879d7c84"]', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'strict-origin-when-cross-origin', 'Set-Cookie': 'ARRAffinity=87506ffb959c51b2ba135ec75a7dffc3bc28e2948e5cb4ee012d8d916b147438;Path=/;HttpOnly;Domain=www.kaggle.com', 'Date': 'Sat, 06 Oct 2018 16:23:01 GMT'})
HTTP response body: {"code":401,"message":"Unauthorized"}
I think that the name of the competition is wrong. Try:
from kaggle.api.kaggle_api_extended import KaggleApi
api = KaggleApi()
api.authenticate()  # reads credentials from ~/.kaggle/kaggle.json
files = api.competition_download_files("two-sigma-financial-news")
While looking through the source code, I found the KaggleApi class. I think the notebook doesn't auto-authenticate when you call KaggleApi(), hence you need to call the authenticate() function on the API to connect to the Kaggle API.
Try:
api = KaggleApi()
api.authenticate()
I was able to connect and download the samples after this call.
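For reference, a minimal end-to-end sketch, assuming your kaggle.json is already in ~/.kaggle/ (the competition slug matches the corrected name above):
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads credentials from ~/.kaggle/kaggle.json

# Download all data files for the competition into ./data
api.competition_download_files('two-sigma-financial-news', path='./data')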
Here is another pythonic way to import your data via the Kaggle API. I assume you are working on a cloud instance with a Linux OS.
Here is how I do it:
Get your kaggle.json file from your Kaggle account page: https://www.kaggle.com/<username>/account
Run this code and make sure the kaggle.json file ends up in the right directory:
import json
import os

# os.chdir does not expand '~', so expand it explicitly and create the directory if needed
kaggle_dir = os.path.expanduser('~/.kaggle')
os.makedirs(kaggle_dir, exist_ok=True)

data = {"username": "username", "key": "tokenvalue"}  # copy these values from your kaggle.json file
with open(os.path.join(kaggle_dir, 'kaggle.json'), 'w') as outfile:
    json.dump(data, outfile)
Then, in a terminal, cd to the directory where you want to put your data and run:
kaggle competitions download -c two-sigma-financial-news
This works every time you want to import data via the Kaggle API.
You have not provided any authorization in your code: no username and, most importantly, no authentication key. Kaggle authentication is performed by calling the api.authenticate() function after assigning the Kaggle API to a variable named api.
Your username and key are either not provided or invalid.
Go to https://www.kaggle.com/username/account and create a new API token; a kaggle.json file will be downloaded. Place it in ~/.kaggle/kaggle.json (or C:\Users\User\.kaggle\kaggle.json on Windows).
Also, you have to click "I understand and accept" in the Rules Acceptance section for the data you're going to download.
Colab is also a good way to import Kaggle datasets; the steps are:
! pip install kaggle
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
! kaggle datasets download -d rohanrao/air-quality-data-in-india
Apologies in advance if anyone has asked this before, but I've got some basic questions about a server-side Python script I'm writing that does a nightly upload of CSV files to a folder in our Google Drive account. The folder owner has created a Google API project, enabled the Drive API for this and created a credentials object which I've downloaded as a JSON file. The folder has been shared with the service account email, so I assume that the script will now have access to this folder, once it is authorized.
The JSON file contains the following fields private_key_id, private_key, client_email, client_id, auth_uri, token_uri, auth_provider_x509_cert_url, client_x509_cert_url.
I'm guessing that my script will not need all of these - which are the essential or compulsory fields for OAuth2 authorization?
The example Python script given here
https://developers.google.com/drive/web/quickstart/python
seems to assume that the credentials are retrieved directly from a JSON file:
...
home_dir = os.path.expanduser('~')
credential_dir = os.path.join(home_dir, '.credentials')
if not os.path.exists(credential_dir):
    os.makedirs(credential_dir)
credential_path = os.path.join(credential_dir,
                               'drive-python-quickstart.json')
store = oauth2client.file.Storage(credential_path)
credentials = store.get()
...
but in our setup we are storing them in our own DB and the script accesses them via a dict. How would the authorization be done if the credentials were in a dict?
Thanks in advance.
After browsing the source code, it seems that it was designed to accept only JSON files; it uses simplejson to encode and decode. Take a look at the source code.
This is where it gets the content from the file you provide:
def locked_get(self):
    """Retrieve Credential from file.

    Returns:
        oauth2client.client.Credentials

    Raises:
        CredentialsFileSymbolicLinkError if the file is a symbolic link.
    """
    credentials = None
    self._validate_file()
    try:
        f = open(self._filename, 'rb')
        content = f.read()
        f.close()
    except IOError:
        return credentials

    try:
        credentials = Credentials.new_from_json(content)  # <----!!!
        credentials.set_store(self)
    except ValueError:
        pass

    return credentials
In new_from_json they attempt to decode the content using simplejson.
def new_from_json(cls, s):
    """Utility class method to instantiate a Credentials subclass from a JSON
    representation produced by to_json().

    Args:
        s: string, JSON from to_json().

    Returns:
        An instance of the subclass of Credentials that was serialized with
        to_json().
    """
    data = simplejson.loads(s)
Long story short, it seems you'll have to construct a JSON file from your dict: basically, json.dumps(your_dictionary) and write it to a file.
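A sketch of that workaround; credentials_dict is a hypothetical dict holding the same to_json()-shaped data that your DB stores:
import json
import oauth2client.file

# Hypothetical: credentials_dict holds the JSON that credentials.to_json()
# would produce, retrieved from your database.
with open('/tmp/credentials.json', 'w') as f:
    json.dump(credentials_dict, f)

store = oauth2client.file.Storage('/tmp/credentials.json')
credentials = store.get()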
Actually, it looks like SignedJwtAssertionCredentials may be one answer. Another possibility is:
from oauth2client import service_account
...
credentials = service_account._ServiceAccountCredentials(
service_account_id=client.clid,
service_account_email=client.email,
private_key_id=client.private_key_id,
private_key_pkcs8_text=client.private_key,
scopes=client.auth_scopes,
)
....
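For the SignedJwtAssertionCredentials route, a sketch under the same assumptions (client is the hypothetical object holding your DB-stored fields; the scope shown is the standard Drive scope):
from oauth2client.client import SignedJwtAssertionCredentials

# client is hypothetical: the object through which you read the DB-stored fields.
credentials = SignedJwtAssertionCredentials(
    service_account_name=client.email,
    private_key=client.private_key,
    scope='https://www.googleapis.com/auth/drive',
)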
I'm trying to use Python to search the Facebook Graph for Pages. When I use the Graph API Explorer on the Facebook webpage and I enter:
search?q=aquafresh&type=page
I get the results I'm looking for. When I do the same in Python (after installing the PythonForFacebook module):
post = graph.get_object("search?q=aquafresh&type=post")
I get:
facebook.GraphAPIError: Unsupported get request. Please read the Graph API documentation at https://developers.facebook.com/docs/graph-api
I believe I'm properly identifying with the token, I'm using the same one as the webpage and it works on the webpage. I'm also able to do basic queries in Python (e.g., querying for "me" works fine)
I figured out a way to do this that doesn't involve the Facebook SDK module. Instead, use the Requests library directly:
import requests

token = "your_token"
query = "your_query"
response = requests.get("https://graph.facebook.com/search?access_token=" + token + "&q=" + query + "&type=page")
print(response.json())
You can directly use the search method of GraphAPI:
import facebook

TOKEN = ""  # Your GraphAPI token here.
graph = facebook.GraphAPI(access_token=TOKEN, version="2.7")
posts = graph.search(q="aquafresh", type="post")
print(posts['data'])
Instead of:
$ pip install facebook-sdk
use this:
pip install -e git+https://github.com/mobolic/facebook-sdk.git#egg=facebook-sdk
What works for me is:
graph.request('search', {'q': 'aquafresh', 'type': 'page'})
That’s not exactly what you need, but when I try to search for posts (rather than pages) containing “aquafresh,” I get:
In [13]: graph.request('search', {'q': 'aquafresh', 'type': 'post'})
---------------------------------------------------------------------------
GraphAPIError Traceback (most recent call last)
<ipython-input-13-9aa008df54ba> in <module>()
----> 1 graph.request('search', {'q': 'aquafresh', 'type': 'post'})
/home/telofy/.buildout/eggs/facebook_sdk-0.4.0-py2.7.egg/facebook.pyc in request(self, path, args, post_args)
296 except urllib2.HTTPError, e:
297 response = _parse_json(e.read())
--> 298 raise GraphAPIError(response)
299 except TypeError:
300 # Timeout support for Python <2.6
GraphAPIError: (#11) Post search has been deprecated
(I wonder why deprecation doesn’t result in a mere warning.)
The request method seems to be missing from the documentation, but it’s documented in the code.
This should work:
import facebook

aToken = "your access token"
graph = facebook.GraphAPI(access_token=aToken, version="2.10")
data = graph.search(
    q='aquafresh',
    type='post'
)
print(data['data'])