How to get/put Yahoo contacts in bulk - python

I'm adding contacts one by one using the Yahoo REST API and Python. The set of contacts can be relatively large (around 500), but the API doesn't seem to provide any method for adding a larger batch of contacts in a single request (for example, 100 items at once). Does anyone know another way to add multiple contacts at once?
Thanks

Looking at the page linked in your question, the request URL has a count parameter. Have you tried this?
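If the count parameter behaves like a normal query parameter, it would just be appended to the request URL. Here is a minimal sketch using the requests library; the endpoint URL, parameter names, and auth details below are assumptions based on the question, not verified against Yahoo's current documentation:

import requests

# Assumptions: the endpoint URL, the count/start parameters, and the bearer-token
# auth are illustrative only; check the Yahoo Contacts API docs for the exact
# values. OAuth token acquisition is omitted.
url = "https://social.yahooapis.com/v1/user/{guid}/contacts"
params = {"format": "json", "count": 100, "start": 0}
headers = {"Authorization": "Bearer <access-token>"}

response = requests.get(url, params=params, headers=headers)
response.raise_for_status()
print(response.json())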

Related

Google Places API - filter by contact fields

Is there a way to filter Google Places API requests by contact fields?
I can filter by city, country, etc., but what if I want to filter by 'website'? How do I do that?
Note that I know I can filter the results in my own script, but I don't want to waste requests on places that don't have a website.
Thank you in advance!
It is not possible to get filtered results on contact fields (such as "website") through the Places API at this time.
However, you can file a Feature Request for this in Google's Public Issue Tracker: https://issuetracker.google.com
The issue tracker has a dedicated component for the Places API.
Hope this helps!
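If you do end up filtering in your own script, the workaround looks roughly like the sketch below (using the googlemaps Python client; the location, place type, and the assumption that a Place Details lookup returns a website field when one exists are illustrative, not guaranteed):

import googlemaps

# Illustrative only: replace the key, location, and type with your own values.
gmaps = googlemaps.Client(key="YOUR_API_KEY")

search = gmaps.places_nearby(location=(52.52, 13.405), radius=1000, type="restaurant")

places_with_websites = []
for result in search.get("results", []):
    # Search results do not include contact fields, so a Place Details lookup
    # is needed for each candidate.
    details = gmaps.place(place_id=result["place_id"], fields=["name", "website"])
    if details.get("result", {}).get("website"):
        places_with_websites.append(details["result"])

print(places_with_websites)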

How to page through results using Shopify Python API wrapper

I want to page through the results from the Shopify API using the Python wrapper. The API recently (2019-07) switched to "cursor-based pagination", so I cannot just pass a "page" query parameter to get the next set of results.
The Shopify API docs have a page dedicated to cursor-based pagination.
The API response supposedly includes a link in the response headers that includes info for making another request, but I cannot figure out how to access it. As far as I can tell, the response from the wrapper is a standard Python list that has no headers.
I think I could make this work without using the python API wrapper, but there must be an easy way to get the next set of results.
import shopify

shopify.ShopifyResource.set_site("https://example-store.myshopify.com/admin/api/2019-07")
shopify.ShopifyResource.set_user(API_KEY)
shopify.ShopifyResource.set_password(PASSWORD)

products = shopify.Product.find(limit=5)

# This works fine
for product in products:
    print(product.title)

# None of these work for accessing the headers referenced in the docs
print(products.headers)
print(products.link)
print(products['headers'])
print(products['link'])

# This throws an error saying that "page" is not an acceptable parameter
products = shopify.Product.find(limit=5, page=2)
Can anyone provide an example of how to get the next page of results using the wrapper?
As mentioned by @babis21, this was a bug in the Shopify Python API wrapper. The library was updated in January 2020 to fix it.
For anyone stumbling upon this, here is an easy way to page through all results. The same pattern works for other API objects, such as Products.
orders = shopify.Order.find(since_id=0, status='any', limit=250)
for order in orders:
    # Do something with the order
    print(order.id)

while orders.has_next_page():
    orders = orders.next_page()
    for order in orders:
        # Do something with the remaining orders
        print(order.id)
Using since_id=0 will fetch ALL orders because order IDs are guaranteed to be greater than 0.
If you don't want to repeat the code that processes the order objects, you can wrap it all in an iterator like this:
def iter_all_orders(status='any', limit=250):
    orders = shopify.Order.find(since_id=0, status=status, limit=limit)
    for order in orders:
        yield order

    while orders.has_next_page():
        orders = orders.next_page()
        for order in orders:
            yield order

for order in iter_all_orders():
    # Do something with each order
    print(order.id)
If you are fetching a large number of orders or other objects (for offline analysis like I was), you will find that this is slow compared to your other options. The GraphQL API is faster than the REST API, but performing bulk operations with the GraphQL API was by far the most efficient.
You can find the response headers with the code below:
resp_header = shopify.ShopifyResource.connection.response.headers["link"]
Then you can split the Link header string on ',', strip the surrounding '<' and '>', and extract the next-page URL.
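For completeness, here is a rough sketch of that parsing step (it assumes the Link header follows the format shown in Shopify's cursor-pagination docs; the header access is taken from the line above and the regex is illustrative):

import re
import shopify

# Assumes a request has already been made, e.g. shopify.Product.find(limit=5),
# and that the response headers are reachable as in the line above.
link_header = shopify.ShopifyResource.connection.response.headers.get("link", "")

next_url = None
for part in link_header.split(","):
    # Each part looks like: <https://...page_info=xyz>; rel="next"
    match = re.search(r'<([^>]+)>;\s*rel="next"', part)
    if match:
        next_url = match.group(1)

print(next_url)  # None if there is no next page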
I am not familiar with Python, but I think this will work. You can also review the links below:
https://community.shopify.com/c/Shopify-APIs-SDKs/Python-API-library-for-shopify/td-p/529523
https://community.shopify.com/c/Shopify-APIs-SDKs/Trouble-with-pagination-when-fetching-products-API-python/td-p/536910
thanks
@rseabrook
I have exactly the same issue; it seems others do as well, and someone has raised it here: https://github.com/Shopify/shopify_python_api/issues/337
where I see there is an open PR for this: https://github.com/Shopify/shopify_python_api/pull/338
I guess it should be ready soon, so an alternative would be to wait a bit and use the 2019-04 API version (which supports the page parameter for pagination).
UPDATE: It seems this has been released now: https://github.com/Shopify/shopify_python_api/pull/352

Counting Records in Azure Table Storage (Year: 2017)

We have a table in Azure Table Storage that is storing a LOT of data (IoT stuff). We are attempting a simple migration away from Azure Table Storage to our own data services.
I'm hoping to get a rough idea of how much data we are migrating exactly.
EG: 2,000,000 records for IoT device #1234.
The problem I am facing is in getting a count of all the records that are present in the table with some constrains (EG: Count all records pertaining to one IoT device #1234 etc etc).
I did a fair amount of research and found posts saying that this count feature is not implemented in ATS. Those posts, however, are from circa 2010 to 2014.
I assumed (hoped) that this feature has been implemented by now, since it's 2017, and I'm trying to find the docs for it.
I'm using Python to interact with our ATS.
Could someone please post the link to the docs here that show how I can get the count of records using python (or even HTTP / rest etc)?
Or if someone knows for sure that this feature is still unavailable, that would help me move on as well and figure another way to go about things!
Thanks in advance!
Returning the number of entities in a table is definitely not available in the Azure Table Storage SDK and service. You could run a table scan query to return all entities from your table, but if you have millions of entities the query will probably time out; it will also have a pretty big performance impact on your table. Alternatively, you could make segmented queries in a loop until you reach the end of the table.
"Or if someone knows for sure that this feature is still unavailable, that would help me move on as well and figure another way to go about things!"
This feature is still not available; in other words, as of today there is no API that will give you the total number of rows in a table. You would have to write your own code to do so.
"Could someone please post the link to the docs here that show how I can get the count of records using python (or even HTTP / REST etc)?"
For this you would need to list all entities in the table. Since you're only interested in the count, you can reduce the size of the response data by using query projection and fetching just one or two attributes of each entity (for example, PartitionKey and RowKey). Please see my answer here for more details: Count rows within partition in Azure table storage.
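A rough sketch of that approach with the (2017-era) azure-storage Python SDK follows; the package layout and exact keyword arguments vary between SDK versions, so treat the names below as illustrative rather than authoritative:

from azure.storage.table import TableService

# Illustrative values; substitute your own account, key, table name, and filter.
table_service = TableService(account_name='myaccount', account_key='mykey')

# Iterating the result pages through the table (handling continuation tokens
# in recent SDK versions); the 'select' projection keeps the payload small
# since only the count matters here.
entities = table_service.query_entities(
    'iotreadings',
    filter="PartitionKey eq '1234'",
    select='PartitionKey'
)

count = sum(1 for _ in entities)
print(count)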

use standard datastore index or build my own

I am running a webapp on Google App Engine with Python. My app lets users post topics and respond to them, and the website is basically a collection of these posts categorized onto different pages.
Right now I only have around 200 posts and 30 visitors a day, but that is already taking up nearly 20% of my reads and 10% of my writes with the datastore. I am wondering if it is more efficient to use App Engine's built-in get_by_id() function to retrieve posts by their IDs, or if it is better to build my own lookup. For some of the queries I will simply have to use GQL or the built-in query language, because posts are retrieved by more than just an ID, but I wanted to see which approach is better where both are possible.
Thanks!
Are you doing efficient caching? (or any caching at all).
Also, if you're using that many writes for 300 posts, it seems like you might have a problem with your models. Have you looked at the Datastore viewer to see how many writes you use per entity?
You might read the docs on exploding indexes; maybe that's part of your problem?
It's way better to use get_by_id(). It fetches the exact object and costs much less (it counts as a query with only one entity).
I'd suggest using the pre-existing code and building around that instead of re-inventing the wheel.
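To illustrate the cost difference, here is a sketch using the ndb client library; the Post model and its properties are made up for the example:

from google.appengine.ext import ndb

class Post(ndb.Model):
    # Hypothetical model, just for illustration.
    title = ndb.StringProperty()
    category = ndb.StringProperty()

# Cheap: a key lookup that fetches exactly one entity by its ID.
post = Post.get_by_id(12345)

# More expensive: a query, needed when you filter on something other than the ID.
recent_news = Post.query(Post.category == 'news').fetch(10)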

How to optimize for Django's paginator module

I have a question about how Django's paginator module works and how to optimize it. I have a list of around 300 items from information that I get from different APIs on the internet. I am using Django's paginator module to display the list for my visitors, 10 items at a time. The pagination does not work as well as I want it to. It seems that the paginator has to get all 300 items before pulling out the ten that need to be displayed each time the page is changed. For example, if there are 30 pages, then going to page 2 requires my website to query the APIs again, put all the information in a list, and then access the ten that the visitor's browser requests. I do not want to keep querying the APIs for the same information that I already have on each page turn.
Right now, my view has a function that looks at the GET request and queries the APIs for information based on the query. It then puts all that information into a list and passes it to the template. This function runs every time someone turns the page, so the APIs get queried again each time.
How should I fix this?
Thank you for your help.
The paginator will in this case need the full list in order to do its job.
My advice would be to update a cache of the feeds at a regular interval, and then use that cache as the input to the paginator module. Doing an intensive or lengthy task on each and every request is always a bad idea: if not for the page load times your users will experience, think of how vulnerable it leaves your server to attack.
You may want to check out Django's low level cache API which would allow you to store the feed result in a globally accessible place under a key, which you can later use to retrieve the cache and paginate for each page request.
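A minimal sketch of that idea using Django's low-level cache API and the Paginator; fetch_feeds() is a placeholder for whatever code currently queries the external APIs:

from django.core.cache import cache
from django.core.paginator import Paginator
from django.shortcuts import render

def fetch_feeds():
    # Placeholder: replace with the existing code that queries the external APIs.
    return []

def get_feed_items():
    items = cache.get('feed_items')
    if items is None:
        items = fetch_feeds()
        cache.set('feed_items', items, 60 * 15)  # cache the result for 15 minutes
    return items

def feed_view(request):
    paginator = Paginator(get_feed_items(), 10)  # 10 items per page
    page = paginator.page(request.GET.get('page', 1))
    return render(request, 'feed.html', {'page': page})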
ORMs do not load data until the rows are actually accessed:
query_results = Foo.objects.filter(id=1)  # No SQL executed yet, just a lazy QuerySet
foo = query_results[0]                    # Now the query fires
or
for foo in query_results:
    foo.bar()  # SQL fires when the queryset is iterated
If you are using a custom data source that loads all results on initialization, pagination will not work as expected, because every feed is fetched at once. You may want to implement __getitem__ or __iter__ so the actual fetch happens lazily; that matches the way Django expects the results to be loaded. See the sketch below.
Pagination also needs to know how many results there are in order to do things like has_next(). In SQL it is usually inexpensive to get a count(*) with an index, so you will also want a way to know how many results there would be (or perhaps just an estimate, if an exact count is too expensive).
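A rough sketch of that lazy-loading idea: Django's Paginator only needs len() and slicing, so a list-like wrapper that fetches on demand is enough. count_items() and fetch_page() are placeholders for calls to your external APIs:

from django.core.paginator import Paginator

def count_items():
    # Placeholder: a cheap count (or estimate) of how many items are available.
    return 300

def fetch_page(start, stop):
    # Placeholder: request only the items in [start, stop) from the APIs.
    return ["item %d" % i for i in range(start, stop)]

class LazyFeed(object):
    """List-like wrapper that fetches only the slice the paginator asks for."""

    def __len__(self):
        return count_items()

    def __getitem__(self, key):
        if isinstance(key, slice):
            return fetch_page(key.start or 0, key.stop)
        return fetch_page(key, key + 1)[0]

paginator = Paginator(LazyFeed(), 10)
page = paginator.page(2)  # triggers a single fetch_page(10, 20) call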
