I've been trying to use the Etherpad Lite API (on the beta instance) via PyEtherpadLite.
My question is: where do I find my API key? According to the documentation, the key is stored in a .txt file in the base directory of the Etherpad installation. But what I actually want is to access a pad hosted by someone else via the API. Where do I find the API key in that case?
In a blog post, I found that the API key for https://beta.etherpad.org/ is EtherpadFTW.
But that doesn't seem to work; I'm getting a "no or wrong API Key" response.
Don't use the beta.etherpad.org API; it's not for public consumption.
The API key is in your Etherpad server's home folder as APIKEY.txt.
You can't use another instance's API unless its operator gives you the API key.
I maintain the beta.etherpad.org site, and I don't see the value in exposing its API key publicly.
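If you do run the server yourself, a call can be sketched without any client library, since the Etherpad API is plain HTTP. The host, port, and pad name below are placeholders, and the path assumes API version 1:

```python
import urllib.parse

def load_api_key(path="APIKEY.txt"):
    """Read the key Etherpad generated in its server home folder."""
    with open(path) as f:
        return f.read().strip()

def etherpad_url(base_url, method, api_key, **params):
    """Build a call to Etherpad's HTTP API (version 1)."""
    query = urllib.parse.urlencode({"apikey": api_key, **params})
    return f"{base_url}/api/1/{method}?{query}"

# Example (server URL and pad name are placeholders):
# url = etherpad_url("http://localhost:9001", "getText",
#                    load_api_key(), padID="test")
```

PyEtherpadLite wraps the same endpoints, so the key you pass to its client is exactly the string from APIKEY.txt.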
The data source is a SaaS server's API endpoints; the aim is to move the data into an AWS S3 bucket using Python (the boto3 library).
API access is granted via an authorized username/password combination and a unique API key, and every session must first request a token before any further data can be fetched.
I have two questions:
1. How should I manage these secrets: save them in a config file (*.ini, *.json, *.yaml) or store them via AWS Secrets Manager?
2. The token handling is a bit challenging. The basic approach is to fetch a new token per endpoint and then make the API call, but that leads to far too many pipelines: if downstream business needs require, say, 100 endpoints, I would have to craft 100 pipelines from a universal template, repeating it 100 times.
I am new to the Python programming world, so feel free to comment and share any use case.
Much appreciated!
I searched and read these showcases:
saving from api to s3 bucket
How to write a file or data to an S3 object using boto3
I found this helpful:
python-decouple summary: store parameters in .ini or .env files.
A few options for managing (hiding) sensitive info:
a. IAM role
b. Store secrets using **Parameter Store**
c. Store secrets using **Secrets Manager** - the method currently recommended by AWS
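A minimal sketch of option (c) combined with token reuse, assuming the secret is stored as a JSON string and the token stays valid for roughly an hour (both assumptions; adjust to your API):

```python
import json
import time

def get_secret(name, region="us-east-1"):
    """Fetch credentials stored in AWS Secrets Manager (option c)."""
    import boto3  # imported lazily so the rest of the sketch runs without AWS
    client = boto3.client("secretsmanager", region_name=region)
    return json.loads(client.get_secret_value(SecretId=name)["SecretString"])

class TokenCache:
    """Fetch a token once and reuse it across many endpoint calls,
    instead of re-authenticating inside each of the 100 pipelines."""

    def __init__(self, fetch_token, ttl_seconds=3300):
        self._fetch = fetch_token
        self._ttl = ttl_seconds
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Refresh only when the cached token has expired.
        if time.time() >= self._expires_at:
            self._token = self._fetch()
            self._expires_at = time.time() + self._ttl
        return self._token
```

One parameterized pipeline can then loop over a list of endpoint names, calling `cache.get()` for each request, rather than duplicating a template 100 times.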
I want to create a local Streamlit app (for UI purposes) that only I will use, i.e. this piece of code will not be given to anyone else.
I have to access a few Azure services using an API key, which I am afraid to do because I don't know how secure Streamlit is.
Specifically, I am unsure whether storing the API key anywhere in my Python file could get it "leaked", as the app might unknowingly send data out (to Streamlit).
By Python file, I mean the one I run with streamlit run python_file.py.
Apologies for asking such a basic question.
Note: I have read about Streamlit's secrets management, but I still don't understand how storing the API key as a secret prevents Streamlit / the Streamlit devs from accessing it.
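For what it's worth, for a purely local app the two usual patterns are (a) Streamlit's secrets file, which lives on your machine at .streamlit/secrets.toml and is read via st.secrets, and (b) an environment variable, which keeps the key out of every file in the project. A sketch of (b), where AZURE_API_KEY is a placeholder name:

```python
import os

def get_api_key(env_var="AZURE_API_KEY"):
    """Read the key from the environment so it never appears in the
    source file or in anything you might later share or commit."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before launching the app")
    return key

# In the Streamlit script (run with: streamlit run python_file.py):
# import streamlit as st
# key = get_api_key()  # then pass it to your Azure client of choice
```

In both patterns the key stays on your machine; it is only transmitted where your own code sends it (to Azure).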
I'm using google api client to call YouTube Data and YouTube Analytics APIs and get video stats. I'm authenticated as the owner of the channel I am querying.
I am interested in getting only the public videos uploaded to the channel. I first query the YouTube Data API to get the list of videos in the uploads playlist, and once I have the list of video IDs I call the YouTube Analytics API for each video to get the stats I need.
The problem I'm having is that when I list all videos in the uploads playlist I get hundreds of thousands (!!!) of unlisted videos, which I don't need.
I cannot afford to download the entire list and then check status to keep only the public videos as the number is too big and I am reaching my daily quota. It would also be a very inefficient way to do it.
Is there a way to list only videos with status public for a specific playlist?
This is the current method I use:
data = service.playlistItems().list(
    part="snippet,status",
    playlistId=playlistID,
    maxResults=50,
).execute()
I couldn't find anything in the YouTube API documentation on how to achieve this.
Have you tried using the PlaylistItems endpoint while not being authenticated, but instead passing only your API key parameter?
I'm assuming here -- though I cannot check it myself -- that when not authenticated, the endpoint's response will contain only videos that are public.
As per the docs, there are two ways a user can access any given API endpoint: using an API key or, otherwise, by way of an OAuth token.
Any user may request an API key from the Google developer's console and pass it to the endpoint of interest as the key parameter. When using the Python API client library, your code will look similar to this:
from googleapiclient.discovery import build
service = build(serviceName = 'youtube', version = 'v3', developerKey = DEVELOPER_KEY)
where DEVELOPER_KEY is the API key string obtained from Google. Please note that such an API key is a user's private information.
That's it: instead of OAuth, use an API key.
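Putting that together with the question's call, a sketch follows. The pagination loop and the client-side status filter are my additions, kept as a safety net in case the key-only response still includes non-public items:

```python
def public_video_ids(items):
    """Keep only playlist items whose privacy status is public."""
    return [
        item["snippet"]["resourceId"]["videoId"]
        for item in items
        if item.get("status", {}).get("privacyStatus") == "public"
    ]

def fetch_public_ids(developer_key, playlist_id):
    """Page through the uploads playlist with an API-key-only client."""
    from googleapiclient.discovery import build  # imported lazily
    service = build("youtube", "v3", developerKey=developer_key)
    ids = []
    request = service.playlistItems().list(
        part="snippet,status", playlistId=playlist_id, maxResults=50)
    while request is not None:
        response = request.execute()
        ids.extend(public_video_ids(response["items"]))
        # list_next returns None when there are no more pages.
        request = service.playlistItems().list_next(request, response)
    return ids
```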
This doc clearly delineates the API's concepts of authentication and authorization.
Basically, I was assuming that when using an API key instead of authenticating via OAuth -- that is, when build receives the parameter developerKey instead of credentials -- the endpoint's response will include only public videos, even when the API key passed as developerKey originates from the same account you initially authenticated with.
I am developing a model to calculate origin-to-destination routes using Python 3. I tried google-maps-services-python from GitHub and got an error.
[API Key already enabled][1]
ApiError: REQUEST_DENIED (This API project is not authorized to use this API. Please ensure this API is activated in the Google Developers Console:)
However, the same key works as an HTTP request in the browser.
The API key does not have any key restrictions applied.
[API key did not apply any key restriction][2]
Any idea what needs to be done?
Solution
To make direct queries from IPython, you need both APIs enabled:
-Google Maps Directions API
-Google Maps Geocoding API
Thanks, problem solved.
Is there a way to use a Simple API Access key (developer key) instead of an OAuth2 key with Google Cloud Endpoints?
Extra fields in your protorpc request object that aren't part of the definition are still stored with the request.
If you wanted to use a key field as a query parameter, you could access it via
request.get_unrecognized_field_info('key')
even if key is not a field in your message definition.
This is done in users_id_token.py (the Auth part of the endpoints library) to allow sending bearer_token or access_token as query parameters instead of as header values.
Unfortunately, the nice quota checking and other associated pieces that a "Simple API Access" key gives are not readily available. However, you could issue your own keys and manually check a key against your list and potentially check against quotas that you have defined.
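A minimal sketch of that manual check against a self-issued key list (ISSUED_KEYS and the surrounding endpoint method are assumptions, not part of the endpoints library):

```python
ISSUED_KEYS = {"abc123", "def456"}  # keys you issued yourself

def is_valid_key(request, issued_keys=ISSUED_KEYS):
    """Read the unrecognized 'key' query parameter the same way
    users_id_token.py reads bearer_token/access_token, then check it
    against your own list."""
    value, _variant = request.get_unrecognized_field_info('key')
    return value in issued_keys

# Inside an endpoint method you would then raise
# endpoints.UnauthorizedException('Invalid API key') when this is False.
```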
For those looking to use @bossylobster's answer in Java, see the SO answer here:
Getting raw HTTP Data (Headers, Cookies, etc) in Google Cloud Endpoints
P.S.
I tried to make this a comment on @bossylobster's answer, but I don't have the reputation to do that. Feel free to clean up this answer so that others can follow the path.