lambda aws python get url id - python

I am currently learning AWS, Lambda, and Python all at once.
It had been going well until I tried to get an id from the browser.
https://[apinumber].execute-api.ap-northeast-1.amazonaws.com/prod?id=1
I think I have put the right settings in AWS: I added the parameter under Resources -> Method Request -> URL Query String Parameters.
What would be the best way to get this id?
I have tried many things but haven't found a solution yet, and I have been stuck on this for the last few days.
# always start with the lambda_handler
def lambda_handler(event, context):
    # get the page id from the event
    page_id = event.get('id')  # .get() avoids a KeyError when 'id' is missing
    if page_id:
        return page_id
    else:
        return 'this page id is empty'
Help would be highly appreciated.

Found the answer.
Needed to add a mapping template under Integration Request -> Mapping Templates (below the page):
{
    "id": "$input.params('id')"
}
The function is now working.
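As an alternative to a mapping template, API Gateway's Lambda proxy integration passes query string parameters to the function directly under event['queryStringParameters']. A minimal sketch, assuming proxy integration is enabled for the method (the response shape below is what proxy integration expects):

```python
import json

def lambda_handler(event, context):
    # With proxy integration, query string parameters arrive under
    # event['queryStringParameters'] (None when no query string was sent).
    params = event.get('queryStringParameters') or {}
    page_id = params.get('id')
    if page_id:
        body = page_id
    else:
        body = 'this page id is empty'
    # Proxy integration requires a response object with statusCode and body.
    return {'statusCode': 200, 'body': json.dumps(body)}
```

With this approach no mapping template is needed, but every response must be wrapped in the statusCode/body envelope shown above.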

Related

Python JIRA jira.issue slow performance

I am trying to retrieve a tree graph of parent-child issues from JIRA
(epic -> story -> task) using python 3.10 and jira 3.1.1.
For a project with 400 issues it takes minutes to retrieve the result.
Is there a way to improve this code for better performance?
The result is displayed in a Flask frontend, and a response time of 2 minutes is not pleasant for users.
jira = JIRA(options={'server': self.url, 'verify': False}, basic_auth=(self.username, self.password))
for singleIssue in jira.search_issues(jql_str=querystring, maxResults=False):
    issue = jira.issue(singleIssue.key)
    links = issue.fields.issuelinks
    for link in links:
        item = {}
        if hasattr(link, 'inwardIssue'):
            item['inward_issue'] = link.inwardIssue.key
        if hasattr(link, 'outwardIssue'):
            item['outward_issue'] = link.outwardIssue.key
        item['typename'] = link.type.name
I realised this is more or less a matter of Flask and long-running query handling.
It should be solved either by asynchronous calls or by queuing.
Limit the fields you are requesting via the "fields=[ 'key' ]" parameter to include only those which you need.
In your example you are fetching the whole issue twice: once from the .search_issues() iterator every time round the loop (as expected), and (unnecessarily) again in the .issue() call on the following line.
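Putting both suggestions together, the loop can iterate over the issues returned by search_issues() directly and restrict the fields it fetches. A sketch under those assumptions (extract_links is a hypothetical helper name, and the commented usage assumes a configured JIRA client named jira):

```python
def extract_links(issue):
    # Build one dict per issue link, mirroring the original loop body.
    items = []
    for link in issue.fields.issuelinks:
        item = {}
        if hasattr(link, 'inwardIssue'):
            item['inward_issue'] = link.inwardIssue.key
        if hasattr(link, 'outwardIssue'):
            item['outward_issue'] = link.outwardIssue.key
        item['typename'] = link.type.name
        items.append(item)
    return items

# Usage sketch:
# for issue in jira.search_issues(jql_str=querystring, maxResults=False,
#                                 fields=['issuelinks']):
#     links = extract_links(issue)   # no second jira.issue() round-trip
```

Dropping the per-issue jira.issue() call alone halves the number of HTTP requests; limiting fields shrinks each response as well.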

How to use Azure DevOps / VSTS to fetch query results in python

Below is my current code. It connects successfully to the organization. How can I fetch the results of a query in Azure like they have here? I know this was solved, but there isn't an explanation, and there's quite a big gap in what they're doing.
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
from azure.devops.v5_1.work_item_tracking.models import Wiql
personal_access_token = 'xxx'
organization_url = 'zzz'
# Create a connection to the org
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)
wit_client = connection.clients.get_work_item_tracking_client()
results = wit_client.query_by_id("my query ID here")
P.S. Please don't link me to the github or documentation. I've looked at both extensively for days and it hasn't helped.
Edit: I've added the results line that successfully gets the query. However, it returns a WorkItemQueryResult class which is not exactly what is needed. I need a way to view the column and results of the query for that column.
So I've figured this out in probably the most inefficient way possible, but hope it helps someone else and they find a way to improve it.
The issue with the WorkItemQueryResult object stored in the variable "results" is that it doesn't expose the contents of the work items.
So the goal is to use the get_work_item method, which requires the id field; you can get that (in a rather roundabout way) through item.target.id from results' work_item_relations. The code below is added on.
for item in results.work_item_relations:
    work_item_id = item.target.id  # renamed to avoid shadowing the built-in id()
    work_item = wit_client.get_work_item(work_item_id)
    fields = work_item.fields
This gets the id from every work item in your result class and then grants access to the fields of that work item, which you can access by fields.get("System.Title"), etc.
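Fetching work items one at a time costs one HTTP round-trip each. The work item tracking client also exposes a batch call, get_work_items, which accepts a list of ids; the service caps a batch at 200 ids, so a small chunking helper is useful. A sketch under those assumptions (the commented usage assumes the results and wit_client variables from the code above):

```python
def chunked(seq, size=200):
    # Split a list of ids into batches the API will accept (max 200 per call).
    return [seq[i:i + size] for i in range(0, len(seq), size)]

# Usage sketch:
# ids = [item.target.id for item in results.work_item_relations if item.target]
# for batch in chunked(ids):
#     for work_item in wit_client.get_work_items(ids=batch):
#         print(work_item.fields.get("System.Title"))
```

For a query returning hundreds of items this reduces the request count from one per item to one per 200 items.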

Building a LoadMore or Show More with Python and React

Application is a React/Redux frontend with an API built with Flask Python.
Goal: I'm building a list that uses a "show more" button instead of pagination. The idea is that on componentDidMount it fetches the first 5 items from the API; then, each time the user clicks "show more", it fetches the next 5 items, and so on until the database is exhausted and there is nothing more to load.
Issue: I'm entirely unsure if this would be the best practice way to build this function. I don't currently see any glaring negative ramifications, but I thought it'd be best to check with the experts here.
Given that this is an issue that a lot of people seem to have, I figured I'd post my approach for others.
How I built it:
API Route in Python
Description: This gets params from the action in redux. It uses "offset" so that it always knows where to start its query in the database. It limits the results returned to 5 to keep the load down. Eventually I'll turn the limit into a parameter as well.
@app.route("/api/items", methods=["GET"])
def get_items():
    # offset tells the query where to start; defaults to 0
    offset = request.args.get('offset', 0, type=int)
    items = Item.query.order_by(Item.name).offset(offset).limit(5)
    items_serialized = []
    for item in items:
        items_serialized.append(item.serialize())
    return jsonify(items=items_serialized)
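The offset/limit pattern the route relies on can be sketched in plain Python to show how successive "show more" calls walk through the data (the page size of 5 matches the route above; the names are illustrative):

```python
def get_page(items, offset, limit=5):
    # Mirrors SQL OFFSET/LIMIT: skip `offset` rows, return at most `limit`.
    return items[offset:offset + limit]

data = list(range(12))               # stand-in for rows ordered by name
first = get_page(data, 0)            # initial componentDidMount fetch
second = get_page(data, len(first))  # "show more" passes the current length
```

Passing the current list length as the offset is what lets the frontend avoid keeping a separate counter.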
Function from my React component:
I have a list inside of my Component that takes listData and prints out all the list items. I won't get into the details on this, but this is how I handle the load more functionality.
Offset - uses listData.length to keep its place. That way I don't have to use a counter.
.Then() - Since I use an Axios Promise in my actionCreators, I realized I could wait for the promise response to return with .then() and set my state from that point [this avoids the error where it's null or pending...].
I then concat my listData with itemsList with all duplicates removed via the function _uniqBy using the id of each item.
componentDidMount() {
    this.updateListData()
}

updateListData = () => {
    const offset = this.state.listData.length;
    this.props.fetchItems(offset).then(() => {
        this.setState(state => {
            const itemsList = _.uniqBy(state.listData.concat(this.props.itemsList), 'id')
            return { listData: itemsList }
        });
    });
}

onLoadMore = () => {
    this.setState({
        loadingMore: true,
    });
    this.updateListData();
};
Actions and Reducers:
Here's a general description of how it works for all the redux using people.
fetchItems does a GET towards the API with a param of offset passed in.
Reducer either returns itemsList or an error to my store state.
I haven't fully cleaned up the code, and I may have made some errors typing out the approach, but hopefully someone can take something away from this, because I was really struggling. In addition, if people can tell me whether this is the right approach, that would be great.

Updated approach to Google search with python

I was trying to use xgoogle, but it has not been updated for 3 years, and I keep getting no more than 5 results even if I set 100 results per page. If anyone uses xgoogle without any problem, please let me know.
Since the only (apparently) available wrapper is xgoogle, the other option is to use some sort of browser automation, like mechanize, but that would make the code entirely dependent on Google's HTML, which they might change a lot.
The final option is to use the Custom Search API that Google offers, but it has a ridiculous 100-requests-per-day limit and pricing after that.
I need help with which direction I should go: what other options do you know of, and what works for you?
Thanks!
It only needs a minor patch.
The function GoogleSearch._extract_result (line 237 of search.py) calls GoogleSearch._extract_description (line 258), which fails, causing _extract_result to return None for most of the results and therefore showing fewer results than expected.
Fix:
In search.py, change Line 259 from this:
desc_div = result.find('div', {'class': re.compile(r'\bs\b')})
to this:
desc_div = result.find('span', {'class': 'st'})
I tested using:
#!/usr/bin/python
#
# This program does a Google search for "quick and dirty" and returns
# 200 results.
#
from xgoogle.search import GoogleSearch, SearchError
class give_me(object):
    def __init__(self, query, target):
        self.gs = GoogleSearch(query)
        self.gs.results_per_page = 50
        self.current = 0
        self.target = target
        self.buf_list = []

    def __iter__(self):
        return self

    def next(self):
        if self.current >= self.target:
            raise StopIteration
        else:
            if not self.buf_list:
                self.buf_list = self.gs.get_results()
            self.current += 1
            return self.buf_list.pop(0)

try:
    sites = {}
    for res in give_me("quick and dirty", 200):
        t_dict = {
            "title": res.title.encode('utf8'),
            "desc": res.desc.encode('utf8'),
            "url": res.url.encode('utf8')
        }
        sites[t_dict["url"]] = t_dict
        print t_dict
except SearchError, e:
    print "Search failed: %s" % e
I think you misunderstand what xgoogle is. xgoogle is not a wrapper; it's a library that fakes being a human user with a browser, and scrapes the results. It's heavily dependent on the format of Google's search queries and results pages as of 2009, so it's no surprise that it doesn't work the same in 2013. See the announcement blog post for more details.
You can, of course, hack up the xgoogle source and try to make it work with Google's current format (as it turns out, they've only broken xgoogle by accident, and not very badly…), but it's just going to break again.
Meanwhile, you're trying to get around Google's Terms of Service:
Don’t misuse our Services. For example, don’t interfere with our Services or try to access them using a method other than the interface and the instructions that we provide.
They've been specifically asked about exactly what you're trying to do, and their answer is:
Google's Terms of Service do not allow the sending of automated queries of any sort to our system without express permission in advance from Google.
And you even say that's explicitly what you want to do:
The final option is to use the Custom Search API that Google offers, but it has a ridiculous 100-requests-per-day limit and pricing after that.
So, you're looking for a way to access Google search using a method other than the interface they provide, in a deliberate attempt to get around their free usage quota without paying. They are completely within their rights to do anything they want to break your code—and if they get enough hits from people doing this kind of thing, they will do so.
(Note that when a program is scraping the results, nobody's seeing the ads, and the ads are what pay for the whole thing.)
Of course nobody's forcing you to use Google. EntireWeb has a free "unlimited" (as in "as long as you don't use too much, and we haven't specified the limit") search API. Bing gives you a higher quota, amortized by month instead of by day. Yahoo BOSS is flexible and super-cheap (and even offers a "discount server" that provides lower-quality results if that's not cheap enough), although I believe you're forced to type the ridiculous exclamation point. If none of them are good enough for you… then pay for Google.

Getting Lat and Long from Google Maps API v3

I'm building a standalone proximity search tool in python 2.7 (the intent is distributing it using py2exe and NSIS) and tkinter that takes a center point address, queries the database, and returns all addresses in the database within a certain range.
This is my first time venturing into the google api, and I am extremely confused about how to make use of it to retrieve this data.
I followed the code here: http://www.libertypages.com/clarktech/?p=315
And received nothing but a 610 error code.
I tried using this URL instead, based on a question here on Stack Overflow, and received an Access Denied error: http://maps.googleapis.com/maps/api/geocode/xml?
I've set up the project in the API console, enabled the Maps service, added both a browser and a server API key, tried them both, and failed.
I've spent all morning poring over the API documentation, and I can find NOTHING that tells me what URL to specify for a simple API information request for Google Maps API v3.
This is the actual code of my function; it's a slightly modified version of what I linked above, with some debugging output mixed in. When I ran it with http://maps.google.com/maps/geo? I received 610,0,0,0:
def get_location(self, query):
    params = {}
    params['key'] = "key from console"  # the actual key, of course, is not provided here
    params['sensor'] = "false"
    params['address'] = query
    params = urllib.urlencode(params)
    print "http://maps.googleapis.com/maps/api/geocode/json?%s" % params
    try:
        f = urllib.urlopen("http://maps.googleapis.com/maps/api/geocode/json?%s" % params)
Everything runs perfectly; I just wind up with "610" and "Fail" as the lat and long for all the addresses in my database. :/
I've tried a server app API key and a browser app API key. An OAuth client ID isn't an option, because I don't want my users to have to allow the access.
I'm REALLY hoping someone will just say "go read this document, you moron" and I can shuffle off with my tail between my legs.
UPDATE: I found this: https://developers.google.com/maps/documentation/geocoding/ implemented the changes it suggests, no change, but it's taking longer to give me the "ACCESS DENIED" response.
The API key was causing it to fail. I took it out of the query parameters dictionary, and it simply works as an anonymous request.
params = { }
params[ 'sensor' ] = "false"
params[ 'address' ] = query
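Putting the fix together, the full request is just the v3 geocoding endpoint plus the two remaining parameters. A minimal Python 3 sketch (the original question used Python 2's urllib; also note that the key-less access that fixed this may not match Google's current policy, which generally requires a key):

```python
from urllib.parse import urlencode

GEOCODE_ENDPOINT = "http://maps.googleapis.com/maps/api/geocode/json"

def build_geocode_url(address, sensor="false"):
    # Assemble the v3 geocoding request URL without an API key,
    # as in the accepted fix above.
    params = urlencode({"sensor": sensor, "address": address})
    return "%s?%s" % (GEOCODE_ENDPOINT, params)

# Usage sketch:
# import json, urllib.request
# data = json.load(urllib.request.urlopen(build_geocode_url("some address")))
# if data["status"] == "OK":
#     loc = data["results"][0]["geometry"]["location"]  # {'lat': ..., 'lng': ...}
```

The lat/long pair lives under results[0].geometry.location in the JSON response when status is "OK".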
