Using jira-python, I want to retrieve the entire changelog for a JIRA issue:
issues_returned = jira.search_issues(args.jql, expand='changelog')
I discovered that for issues with more than 100 entries in their changelog, I only receive the first 100.
My question is: how do I specify a startAt and make another call to get subsequent pages of the changelog (using jira-python)?
From this thread at Atlassian I see that API v3 provides an endpoint to get the change log directly:
/rest/api/3/issue/{issueIdOrKey}/changelog
but this doesn't seem to be accessible via jira-python. I'd like to avoid having to do the REST call directly and authenticate separately. Barring a way to do it directly via jira-python, is there a way to make a 'raw' REST API call from jira-python?
In instances where more than 100 results are present, you'll need to edit the 'startAt' parameter when searching issues:
issues_returned = jira.search_issues(args.jql, expand='changelog', startAt=100)
You'll need to set up a check that compares the 'total' and 'maxResults' values, then run another query with a different 'startAt' parameter if the total is higher, and append the results together.
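For example, here is a minimal sketch of that loop, assuming jira is an authenticated jira.JIRA client and args.jql holds the query (search_issues returns a ResultList whose total attribute reports the overall match count):
# Page through the search results by advancing startAt until everything is fetched.
all_issues = []
start_at = 0
while True:
    page = jira.search_issues(args.jql, expand='changelog', startAt=start_at)
    all_issues.extend(page)
    start_at += len(page)
    if len(page) == 0 or start_at >= page.total:
        break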
api.get_user with Tweepy will not give description
I use tweepy (4.8.0) and OAuth 2.0 (bearer_token).
I tried to load user information using get_user(...) like this:
import tweepy

client = tweepy.Client(
    bearer_token=bearer_token
)
result = client.get_user(username="name", user_fields=['created_at'])
I expected the additional data requested via user_fields, but only the basic default data was returned:
Response(data=<User id=1234 name=NAME username=name>, includes={}, errors=[], meta={})
Am I missing something, or did I make a mistake? Save me, plz...
My answer to that question applies here as well.
From the relevant FAQ section in Tweepy's documentation:
Why am I not getting expansions or fields data with API v2 using Client?
If you are simply printing the objects and looking at that output, the string representations of API v2 models/objects only include the default attributes that are guaranteed to exist.
The objects themselves still include the relevant data, which you can access as attributes or by key, like a dictionary.
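For example, using the call from the question, the requested field can be read even though the printed repr hides it:
result = client.get_user(username="name", user_fields=['created_at'])
user = result.data
print(user.created_at)       # attribute access
print(user["created_at"])    # dict-style access also works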
I have deployed a GCP Cloud Function that updates time_created and time_updated fields in Firestore. My front-end app first creates these fields, and my function updates them after processing the documents. The snippet below generates the timestamps, and I use Firestore's update function to write them to the document. In a few instances, the fields end up stored in Firestore as a dictionary with "seconds" and "nanoseconds" keys rather than as a Timestamp. I have been trying to track down where the issue comes from; I suspect datetime.now() sometimes does not generate a proper timestamp value. Help me out if you have an idea or have seen something like this before. The example below shows an instance of the wrongly formatted date returned from Firestore to my front-end.
Documents affected have the field showing as this:
time_created: {'seconds': 1637694047.0, 'nanoseconds': 580592000.0}
from datetime import datetime
update_doc = {
    u"time_created": datetime.now(),
    u"time_updated": datetime.now()
}
Per @mark-tolonen, please don't include images in questions when it's trivial to copy-and-paste the text instead, for various reasons.
I experienced a different issue with Firestore timestamps using the Go SDK. When I read your question, I wondered if the issues were related, but I think not.
That said, you can perform some diagnosis. You can, of course, log the Python datetime.now() values to ensure you know what's being applied.
You could then use the underlying REST API directly to mimic/reproduce the calls that your code is making, to determine whether the error arises in the API itself, the Python SDK, or your code.
Here's projects.databases.documents.patch, which I think underlies the set operation. There's also projects.databases.documents.create. In both cases, the APIs Explorer lets you try the API methods in the browser and will give you e.g. curl equivalents.
NOTE
The API requires a parent parameter of the form projects/{project_id}/databases/{databaseId}/documents. Replace {project_id} with your project ID and use (default) (with the parentheses) for {databaseId}.
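As a hedged sketch of that diagnosis, assuming a hypothetical collection my-collection and document my-doc, you could issue the patch call yourself with a proper RFC 3339 timestampValue and see whether the REST layer stores it correctly:
import datetime
import requests

# Hypothetical identifiers; replace with your own project, collection, and document.
project_id = "my-project"
doc_path = f"projects/{project_id}/databases/(default)/documents/my-collection/my-doc"
access_token = "REPLACE_ME"  # e.g. from gcloud auth print-access-token

# Firestore's REST API expects an RFC 3339 string for timestampValue.
now = datetime.datetime.now(datetime.timezone.utc).isoformat()

body = {"fields": {"time_created": {"timestampValue": now}}}
resp = requests.patch(
    f"https://firestore.googleapis.com/v1/{doc_path}",
    params={"updateMask.fieldPaths": "time_created"},
    headers={"Authorization": f"Bearer {access_token}"},
    json=body,
)
print(resp.status_code, resp.json())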
I am trying to see if we can pull a list of all Salesforce cases that have been deleted, using their API with Python.
The query below returns all Salesforce cases created, but I am trying to see how to retrieve all cases that have been deleted:
SELECT Id FROM Case
I tried the below, but it returned no data, whereas I know there are deleted cases:
SELECT Id FROM Case where isDeleted = true
Queries that include the Recycle Bin need to be issued differently. In Apex you need to add "ALL ROWS".
In the SOAP API it's queryAll vs. the normal query call. In the REST API it's a different service, also "queryAll".
If you're using simple_salesforce, it's supposed to be
query = 'SELECT Id FROM Case LIMIT 10'
sf.bulk.Case.query_all(query)
If you're using another library, you'll need to check its internals: which API it uses and whether it exposes queryAll to you.
(Remember that records that have been purged from the Recycle Bin no longer show up in these queries, and then your only hope is something like the Data Replication API's getDeleted().)
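For example, here is a minimal sketch using simple_salesforce's regular (non-bulk) query_all, whose include_deleted flag routes the request through queryAll (credentials are placeholders):
from simple_salesforce import Salesforce

# Placeholder credentials.
sf = Salesforce(username="user@example.com", password="...", security_token="...")

# include_deleted=True issues a queryAll, so Recycle Bin rows are returned.
result = sf.query_all("SELECT Id, IsDeleted FROM Case WHERE IsDeleted = true",
                      include_deleted=True)
for record in result["records"]:
    print(record["Id"])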
I want to page through the results from the Shopify API using the Python wrapper. The API recently (2019-07) switched to "cursor-based pagination", so I cannot just pass a "page" query parameter to get the next set of results.
The Shopify API docs have a page dedicated to cursor-based pagination.
The API response supposedly includes a link in the response headers that includes info for making another request, but I cannot figure out how to access it. As far as I can tell, the response from the wrapper is a standard Python list that has no headers.
I think I could make this work without using the python API wrapper, but there must be an easy way to get the next set of results.
import shopify
shopify.ShopifyResource.set_site("https://example-store.myshopify.com/admin/api/2019-07")
shopify.ShopifyResource.set_user(API_KEY)
shopify.ShopifyResource.set_password(PASSWORD)
products = shopify.Product.find(limit=5)
# This works fine
for product in products:
    print(product.title)
# None of these work for accessing the headers referenced in the docs
print(products.headers)
print(products.link)
print(products['headers'])
print(products['link'])
# This throws an error saying that "page" is not an acceptable parameter
products = shopify.Product.find(limit=5, page=2)
Can anyone provide an example of how to get the next page of results using the wrapper?
As mentioned by @babis21, this was a bug in the shopify python api wrapper. The library was updated in January 2020 to fix it.
For anyone stumbling upon this, here is an easy way to page through all results. The same pattern also works for other API objects such as Products.
orders = shopify.Order.find(since_id=0, status='any', limit=250)
for order in orders:
    ...  # Do something with the order
while orders.has_next_page():
    orders = orders.next_page()
    for order in orders:
        ...  # Do something with the remaining orders
Using since_id=0 will fetch ALL orders because order IDs are guaranteed to be greater than 0.
If you don't want to repeat the code that processes the order objects, you can wrap it all in an iterator like this:
def iter_all_orders(status='any', limit=250):
    orders = shopify.Order.find(since_id=0, status=status, limit=limit)
    for order in orders:
        yield order
    while orders.has_next_page():
        orders = orders.next_page()
        for order in orders:
            yield order

for order in iter_all_orders():
    ...  # Do something with each order
If you are fetching a large number of orders or other objects (for offline analysis like I was), you will find that this is slow compared to your other options. The GraphQL API is faster than the REST API, but performing bulk operations with the GraphQL API was by far the most efficient.
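As a rough, hedged sketch of that bulk approach (bulkOperationRunQuery is Shopify's Admin GraphQL mutation; the wrapper's shopify.GraphQL client is assumed to be configured like the resources above), you submit a query and later poll for the result file:
import json
import shopify

bulk_mutation = '''
mutation {
  bulkOperationRunQuery(
    query: """
    { orders { edges { node { id createdAt } } } }
    """
  ) {
    bulkOperation { id status }
    userErrors { field message }
  }
}
'''
result = json.loads(shopify.GraphQL().execute(bulk_mutation))
# Poll currentBulkOperation until status is COMPLETED, then download its url.
print(result)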
You can find the response header with the code below:
resp_header = shopify.ShopifyResource.connection.response.headers["link"]
Then you can split that string on ',', strip the '<>' around each URL, and pick out the next-page link. For example, a hedged sketch, assuming a request has already been made and the header follows the '<url>; rel="next"' format from Shopify's docs:
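# Parse the Link header by hand and keep the URL marked rel="next".
link_header = shopify.ShopifyResource.connection.response.headers["link"]
next_url = None
for part in link_header.split(","):
    url, _, rel = part.partition(";")
    if 'rel="next"' in rel:
        next_url = url.strip().strip("<>")
print(next_url)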
I am not familiar with Python, but I think this will work. You can also review the links below:
https://community.shopify.com/c/Shopify-APIs-SDKs/Python-API-library-for-shopify/td-p/529523
https://community.shopify.com/c/Shopify-APIs-SDKs/Trouble-with-pagination-when-fetching-products-API-python/td-p/536910
thanks
@rseabrook I have exactly the same issue; it seems others do as well, and someone has raised it here: https://github.com/Shopify/shopify_python_api/issues/337
where I see there is an open PR for this: https://github.com/Shopify/shopify_python_api/pull/338
I guess it should be ready soon, so an alternative would be to wait a bit and use the 2019-04 API version (which supports the page parameter for pagination).
UPDATE: It seems this has been released now: https://github.com/Shopify/shopify_python_api/pull/352
I have a Django application to log the character sequences from an autocomplete interface. Each time a call is made to the server, the parameters are added to a list and when the user submits the query, the list is written to a file.
Since I am not sure how to preserve the list between subsequent calls, I relied on a global variable, say query_logger. Now I can preserve the list in the following way:
query_logger = None

def log_query(query, completions, submitted=False):
    global query_logger
    if query_logger is None:
        query_logger = list()
    # append takes a single argument, so store the values as a tuple
    query_logger.append((query, completions, submitted))
    if submitted:
        query_logger = None
While this hack works for a single client sending requests, I don't think it is a stable solution when requests come from multiple clients. My question is two-fold:
What is the order of execution of requests: do they follow first-come, first-served (especially if the requests are asynchronous)?
What is a better approach for doing this?
If your Django server is single-threaded, then yes, it will respond to requests as it receives them. If you're using WSGI or another proxy, that becomes more complicated. Regardless, I think you'll want to use a DB to store the information.
I encountered a similar problem and ended up using sqlite to store the data temporarily, because that's super simple and easy to manage. You'll want to use IP addresses or create a unique ID passed as a url parameter in order to identify clients on subsequent requests.
I also scheduled a daily task (using cron on ubuntu) that goes through and removes any incomplete requests that haven't been completed (excluding those started in the last hour).
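As a minimal sketch of that approach (table and column names are illustrative, and the client_id is assumed to arrive as a URL parameter):
import sqlite3

conn = sqlite3.connect("query_log.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS query_log "
    "(client_id TEXT, query TEXT, completions TEXT, submitted INTEGER)"
)

def log_query(client_id, query, completions, submitted=False):
    # Keyed by client_id so entries from different clients stay separate.
    conn.execute(
        "INSERT INTO query_log (client_id, query, completions, submitted) VALUES (?, ?, ?, ?)",
        (client_id, query, str(completions), int(submitted)),
    )
    conn.commit()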
You must not use global variables for this.
The proper answer is to use the session - that is exactly what it is for.
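A minimal sketch of the session approach, assuming Django's sessions middleware is enabled (write_to_file is a hypothetical helper that persists the finished list):
def log_query(request, query, completions, submitted=False):
    logged = request.session.get("query_log", [])
    logged.append((query, completions, submitted))
    if submitted:
        write_to_file(logged)  # hypothetical persistence helper
        request.session["query_log"] = []
    else:
        request.session["query_log"] = logged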
The simplest (bad) solution would be to have a global variable, which means you need some in-memory location or a DB to store this info.