I am working on a Python package that talks to an API over HTTP.
In my package I have a Python configuration file that contains the API URL (hardcoded) and other settings.
The API URL is saved in a global variable so that I can import it and use it in my package modules. For example:
API_URL = "https://api-url.com"
The configuration file is part of the package, which means that when a user installs the package they also get the configuration file, including the hardcoded URL.
The problem is that at some point in the future the API's URL could change, all the functionality of the package will break, and users will have to update the package (e.g. via pip install).
What is the right way to handle hardcoded URLs in a package?
The common pattern here is not to hardcode the base URL, but instead to provide an API class that is instantiated with the corresponding URL. This class can also take additional config or a config file:
import requests
from urllib.parse import urljoin

class MyAPI:
    def __init__(self, config_file: str = DEFAULT_CONFIG, **overrides):
        self.config = build_config(config_file, overrides)
        self.url = self.config.url

    @property
    def value(self):
        # returns the raw Response; call .json() or .text on it as needed
        return requests.get(urljoin(self.url, "./value"))
The user would then use it like this:
api = MyAPI()
print(api.value)
or
api = MyAPI(url="NEW_URL")
print(api.value)
This is, for example, similar to what praw and wikipedia do.
This is an especially good idea if there are different URLs that can be used with the same or a very similar interface (often at least the live product plus a test sandbox).
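For completeness, here is a minimal sketch of what DEFAULT_CONFIG and build_config could look like; the names and the JSON config format are assumptions for illustration, not part of any particular library:

import json
from types import SimpleNamespace

DEFAULT_CONFIG = "config.json"  # hypothetical default config file shipped with the package

def build_config(config_file, overrides):
    # load defaults from the file, then let keyword overrides win
    with open(config_file) as f:
        settings = json.load(f)
    settings.update(overrides)
    return SimpleNamespace(**settings)  # exposes config.url etc.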
You can design your API so that URIs don't change. This might be achieved using versioning, e.g. serving your API at https://api-url.com/v1 instead of https://api-url.com. In this case users won't have to update or change anything, because any changes to the API won't affect them: the changes will exist only in the newer version of your API, served at https://api-url.com/v2.
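As a sketch, versioning can be as simple as building the base URL from a version constant (the names here are illustrative):

API_BASE = "https://api-url.com"
API_VERSION = "v1"  # bump to "v2" only for breaking changes
API_URL = f"{API_BASE}/{API_VERSION}"  # -> https://api-url.com/v1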
Another option is to force library clients to pass the URL they want to use. This will solve the hardcoding issue but won't address the root cause. APIs usually change because something has changed in the understanding of the domain, which in turn means there are likely more changes than URLs alone (which will break your library anyway).
In case you think you simply might want to change the domain name and don't anticipate any other changes that might affect clients, you can use a redirect from the old URL to the new one.
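Note that requests follows HTTP redirects by default, so clients with the old URL hardcoded keep working transparently. A rough illustration (the URLs are hypothetical):

import requests

# a 301 from the old domain to the new one is followed automatically
r = requests.get("https://old-api-url.com/value")
print(r.url)  # final URL after redirects, e.g. https://new-api-url.com/value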
Related
I am new to working with APIs in general, and I am writing Python code that needs to consume/interact with an API someone else has set up. I was wondering if there is any package out there that would build some sort of custom client class to interact with an API, given a file outlining the API in some way (like a JSON file where each available endpoint and HTTP verb is described in terms of things like the allowed payload JSON schema for POSTs, the general params allowed and their types, the expected response JSON schema, the header key/value for a business verb, etc.). It would be helpful if I could have one master file outlining the available endpoints, and then some package uses that to generate a client class we can use to consume the API as described.
In my googling most API packages I have found in python are much more focused on the generation of APIs but this isn't what I want.
Basically, I believe you are looking for the requests package.
import requests

response = requests.get(f'{base_url}{endpoint}',
                        params={'foo': self.bar,
                                'foo_2': self.bar_2},
                        headers={'X-Api-Key': secret})
And from here, you can build your own class, pass the result to a dataframe, or whatever. The requests package has basically everything you need: status handling, exception handling, and so on.
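For example, a minimal sketch of that status and exception handling (the endpoint here is hypothetical):

import requests

try:
    response = requests.get('https://api.example.com/items', timeout=10)
    response.raise_for_status()  # raises requests.HTTPError on 4xx/5xx responses
    data = response.json()
except requests.RequestException as exc:
    # covers connection errors, timeouts and HTTP errors alike
    print('Request failed:', exc)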
Please check the docs.
https://pypi.org/project/requests/
I am new to Cloud Foundry. Here is the use case I want to achieve:
I want to write a Python script which will invoke the API endpoint, go to the corresponding org/space, and then issue a cf push command.
I was able to log in and get the metadata of the orgs using the script below:
import os
from cloudfoundry_client.client import CloudFoundryClient

target_endpoint = 'https://run.api.pivotal.io'
proxy = dict(http=os.environ.get('HTTP_PROXY', ''), https=os.environ.get('HTTPS_PROXY', ''))
client = CloudFoundryClient(target_endpoint, proxy=proxy, skip_verification=True)
client.init_with_user_credentials('abcd@mail.com', 'password')

for organization in client.organizations:
    print(organization['metadata']['guid'])
Please suggest an approach, and share any relevant links.
Assuming you are using this library? https://github.com/cloudfoundry-community/cf-python-client — if not, please clarify, as your question leaves some ambiguity.
The docs state that each entity manager exposes a generic _create method, and the App entity manager does not appear to expose a specific push method. You may be able to use the generic _create and pass a dict defining the application.
But I would suggest looking at the CF CLI or the Java client, which are both maintained by the Cloud Foundry community and are much better documented.
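If you do go the CF CLI route, one simple option is to drive it from Python with the standard subprocess module. A rough sketch, assuming cf is installed and you are already logged in (the org, space and app names are placeholders):

import subprocess

# target the org/space, then push - the equivalent of running the CLI by hand
subprocess.run(["cf", "target", "-o", "my-org", "-s", "my-space"], check=True)
subprocess.run(["cf", "push", "my-app"], check=True)  # raises CalledProcessError on failure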
I am new to Flask.
I have a public api, call it api.example.com.
@app.route('/api')
def api():
    name = request.args.get('name')
    ...
    return jsonify({'address': '100 Main'})
I am building an app on top of my public api (call it www.coolapp.com), so in another app I have:
@app.route('/make_request')
def index():
    params = {'name': 'Fred'}
    r = requests.get('http://api.example.com/api', params=params)
    return render_template('really_cool.jinja2', address=r.text)
Both api.example.com and www.coolapp.com are hosted on the same server. It seems inefficient the way I have it (hitting the http server when I could access the api directly). Is there a more efficient way for coolapp to access the api and still be able to pass in the params that api needs?
Ultimately, with an API powered system, it's best to hit the API because:
It means you're testing the API as a user (even though you're the user, it's the same interface others access);
You can then scale easily - put a pool of API boxes behind a load balancer if you get big.
However, if you're developing on the same box you could make a virtual server that listens on localhost on a random port (1982) and then forwards all traffic to your api code.
To make this easier I'd abstract the API_URL into a setting in your settings.py (or whatever you are loading into Flask) and use:
r = requests.get(app.config['API_URL'], params=params)
This will allow you to make a single change if you find using this localhost method isn't for you or you have to move off one box.
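A minimal sketch of that setup (the file name and key are just conventions, and app is assumed to be your Flask instance):

# settings.py
API_URL = 'http://127.0.0.1:1982/api'  # point at localhost, or the public URL

# at app setup time
app.config.from_pyfile('settings.py')

# in the view
r = requests.get(app.config['API_URL'], params=params)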
Edit
Looking at your comments you are hoping to hit the Python function directly. I don't recommend doing this (for the reasons above - using the API itself is better). I can also see an issue if you did want to do this.
First of all we have to make sure the api package is in your PYTHONPATH. Easy to do, especially if you're using virtualenvs.
We do from api import views and replace our code with r = views.api() so that it calls our api() function.
Our api() function will fail for a couple of reasons:
It uses flask.request to extract the GET arg 'name'. Because we haven't made a request through the Flask WSGI layer, there is no request context to use.
Even if we did manage to pass the request from the front end through to the API, the second problem is jsonify({'address':'100 Main'}). This returns a Response object with the content type set to application/json (not just the JSON itself).
You would have to completely rewrite your function to take into account the Response object and handle it correctly. A real pain if you do decide to go back to an API system again...
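For what it's worth, here is a rough sketch of what a direct call would involve, using Flask's test_request_context to fake the request context; it assumes the api package exposes its Flask app object (a hypothetical layout):

from api import app, views  # hypothetical package layout

with app.test_request_context('/api?name=Fred'):
    response = views.api()                    # flask.request is now populated
    address = response.get_json()['address']  # unwrap the Response object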
Depending on how you structure your code, your database access, and your functions, you can simply turn the other app into a package, import the relevant modules, and call the functions directly.
You can find more information on modules and packages in the Python documentation.
Please note that, as Ewan mentioned, there's some advantages to using the API. I would advise you to use requests until you actually need faster requests (this is probably premature optimization).
Another idea that might be worth considering, depending on your particular code, is creating a library that is used by both applications.
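A rough sketch of that idea: pull the lookup logic into a plain module that knows nothing about Flask, and let both applications call it (all names here are made up):

# addressbook.py - the shared library
def lookup_address(name):
    # stand-in for the real database lookup
    return {'address': '100 Main'}

# api app: a thin HTTP wrapper around the shared function
@app.route('/api')
def api():
    return jsonify(lookup_address(request.args.get('name')))

# coolapp: calls the same function directly, skipping HTTP entirely
from addressbook import lookup_address
address = lookup_address('Fred')['address']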
If I create an application and some controller, by default I will access it using:
http://127.0.0.1/application/controller/function
I want to change the behaviour of the URLs so that I can access any controller without specifying the application part. Currently I access all the controllers of my app like this:
http://127.0.0.1/application/controller/function1
http://127.0.0.1/application/controller2/function2
http://127.0.0.1/application/controller2/function3 (and so on)
What I want to do is remove the need to indicate the application, so that I can access all my controllers like this:
http://127.0.0.1/controller/function1
http://127.0.0.1/controller2/function2
http://127.0.0.1/controller2/function3 (and so on)
Modifying my routes.py:
# routes.py
default_application = 'application'
default_controller = 'controller'
default_function = 'index'
I can access http://127.0.0.1/ and I am redirected to http://127.0.0.1/controller/index
But if I try to access any other function, I need to indicate the application.
I couldn't find a good reference on how routes.py can be configured, and I think this is the file I have to change to get what I want.
Can anyone help me?
Thanks!
The web2py URL rewrite functionality is explained in the book. Note, you have a choice between the newer (and simpler) parameter-based system and an alternative pattern-based system (which provides some additional flexibility for more complex cases). In your case, the parameter-based system would be easiest -- just include the following in your routes.py file:
routers = dict(
    BASE = dict(
        default_application = 'application',
        default_controller = 'controller',
    ),
)
If you need additional help, I would recommend asking on the web2py mailing list.
I've got a website that I wrote in Python using CGI. This was great up until very recently, when the ability to scale became important.
I decided, because it was very simple, to use mod_python. Most of the functionality of my site is stored in a Python module which I call to render the various pages. One of the CGI scripts might look like this:
#!/usr/bin/python
import mysite
mysite.init()
mysite.foo_page()
mysite.close()
and in mysite, I might have something like this:
import os
import Cookie  # Python 2 stdlib

def get_username():
    cookie = Cookie.SimpleCookie(os.environ.get("HTTP_COOKIE", ""))
    sessionid = cookie['sessionid'].value
    ip = os.environ['REMOTE_ADDR']
    # parameterized query in place of the pseudo-SQL; cursor is an
    # already-open DB-API cursor
    cursor.execute("select username from sessions where ip = %s and sessionid = %s",
                   (ip, sessionid))
    return cursor.fetchone()[0]
to fetch the current user's username. The problem is that this depends on os.environ getting populated when os is imported by the script (at the top of the module). Because I'm now using mod_python, the interpreter only loads this module once, and only populates the environment once. I can't read cookies because os.environ holds the environment variables of the server process, not of the remote user's request.
I'm sure there is a way around this, but I'm not sure what it is. I tried re-importing os in the get_username function, but no dice :(.
Any thoughts?
Which version of mod_python are you using? mod_python 3.x includes a separate Cookie class to make this easier (see the mod_python documentation).
Under earlier versions, IIRC, you can get the incoming cookies from the headers_in member of the request object.
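Something along these lines for mod_python 3.x, as a sketch only; it assumes a published handler that receives the request object req:

from mod_python import Cookie

def get_username(req):
    # read cookies from the request object instead of os.environ
    cookies = Cookie.get_cookies(req)
    sessionid = cookies['sessionid'].value
    ip = req.connection.remote_ip  # per-request client address, not the server's env
    # ... session lookup as before ...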