I am trying to create a QuickBooks Category using python-quickbooks package.
However, I cannot seem to find this object in the package. How do I do that?
I recommend the fo_qbo repo. You would use qbs.create("Account", {your_category_blob}).
From the documentation about Category, it looks like you need to use the Item entity with a specific payload, like:
{
  "SubItem": true,
  "Type": "Category",
  "Name": "Cedar",
  "ParentRef": {
    "name": "Trees",
    "value": "29"
  }
}
See more details on this page.
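In python-quickbooks itself, that payload corresponds to the Item object. Here is a minimal sketch, assuming you already have an authenticated QuickBooks client and that the parent category "Trees" exists with Id 29, as in the payload above:
from quickbooks.objects.base import Ref
from quickbooks.objects.item import Item

# Assumes `client` is an already-authenticated quickbooks.QuickBooks instance.

parent = Ref()
parent.type = "Item"
parent.name = "Trees"
parent.value = "29"

category = Item()
category.Name = "Cedar"
category.Type = "Category"
category.SubItem = True
category.ParentRef = parent

category.save(qb=client)  # POSTs an Item payload like the one shown above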
I have a Flask app which runs in a Docker container, and I wanted to use Solr with it for indexing and searching, so I built a container for Solr using the official Solr image and connected it to my app using docker-compose.
In the app I have multiple types of objects that I want to index, for example type1 and type2, and each type has specific fields. So in Solr I have documents with different fields: doc1 could have field1 and field2, while doc2 could have field3, field4 and field5, and each document has a field called type that specifies its type.
I have two types of search. The first one searches for documents of a specific type; here is an example URL for it, used with the requests Python package:
response = requests.get("http://solr:8983/solr/myCollection/select?q=*val*&defType=edismax&fq=type:type1&qf=field1^2&qf=field2^1")
The other is an overall search, where I search for documents of all types; here is its URL example:
response = requests.get("http://solr:8983/solr/myCollection/select?q=*val*&defType=edismax&fq=type:type1||type2&qf=field1^1&qf=field2^1&qf=field3^1&qf=field4^1&qf=field5^1")
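For reference, a minimal sketch of the same overall search using requests' params argument, which URL-encodes special characters such as ^, +, and * automatically; the host, collection, and field names are the ones above, and the filter query uses Solr's standard OR syntax:
import requests

response = requests.get(
    "http://solr:8983/solr/myCollection/select",
    params={
        "q": "*val*",
        "defType": "edismax",
        "fq": "type:(type1 OR type2)",  # Solr OR syntax for filter queries
        "qf": "field1^1 field2^1 field3^1 field4^1 field5^1",  # qf takes a space-separated list
    },
)
print(response.json())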
I have two problems with my work:
I don't get the results that I expected when I run some queries.
Some fields have values with special characters, like (z=x+y*f), and when I try to escape these special characters with '\' it doesn't work.
So, do the queries that I wrote have something wrong, and is there any article or tutorial that could help me? I searched a lot in the documentation and on the internet, but I couldn't find a way to solve my problems.
Note: I didn't change the schema file; I left it as the default.
I've solved the problems by using tokenizers and filters for indexing and querying.
You can configure them through the Schema API that Solr provides.
Here is an example of the JSON data to add tokenizers and filters to a field type:
{
  "replace-field-type": {
    "name": "field_name",
    "class": "solr.TextField",
    "multiValued": true,
    "indexAnalyzer": {
      "tokenizer": {
        "class": "solr.LowerCaseTokenizerFactory"
      },
      "filters": [
        {
          "class": "solr.LowerCaseFilterFactory"
        }
      ]
    },
    "queryAnalyzer": {
      "tokenizer": {
        "class": "solr.WhitespaceTokenizerFactory",
        "rule": "java"
      },
      "filters": [
        {
          "class": "solr.LowerCaseFilterFactory"
        }
      ]
    }
  }
}
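A minimal sketch of posting that command to Solr's Schema API with requests, assuming the same solr host and myCollection collection as in the question:
import requests

# The "replace-field-type" command shown above, as a Python dict.
schema_command = {
    "replace-field-type": {
        "name": "field_name",
        "class": "solr.TextField",
        "multiValued": True,
        "indexAnalyzer": {
            "tokenizer": {"class": "solr.LowerCaseTokenizerFactory"},
            "filters": [{"class": "solr.LowerCaseFilterFactory"}],
        },
        "queryAnalyzer": {
            "tokenizer": {"class": "solr.WhitespaceTokenizerFactory", "rule": "java"},
            "filters": [{"class": "solr.LowerCaseFilterFactory"}],
        },
    }
}

# POST to the collection's Schema API endpoint; json= sets the Content-Type header.
response = requests.post("http://solr:8983/solr/myCollection/schema", json=schema_command)
response.raise_for_status()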
I'm exploring the Dataplex API with Python in the Google documentation; there is documentation to get Lakes, Zones, Assets, etc. I've explored that documentation, but I didn't find anything related to Tag Policies. For example, I need to attach my Tag Template and add a Policy Tag to my BigQuery table via the API.
Is it possible to attach a Tag Template and add a Policy Tag to a BigQuery table via the API?
Here are the links that I've explored:
Dataplex API Service
Dataplex API Metadata Service
Data Catalog API
For attaching tag templates to a BigQuery table, first you will have to look up the entry in Data Catalog using this API:
https://cloud.google.com/python/docs/reference/datacatalog/latest/google.cloud.datacatalog_v1.types.LookupEntryRequest
and then attach it to the table using this API:
https://cloud.google.com/python/docs/reference/datacatalog/latest/google.cloud.datacatalog_v1.types.CreateTagRequest
Here's sample code that creates a tag template and also attaches it to a table in the same code base:
https://cloud.google.com/data-catalog/docs/samples/data-catalog-quickstart
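A minimal sketch of that lookup-then-tag flow in Python, assuming the table and the pre-created tag template used later in this answer (project, location, and template IDs are placeholders):
from google.cloud import datacatalog_v1

client = datacatalog_v1.DataCatalogClient()

# Step 1: look up the Data Catalog entry for the BigQuery table.
entry = client.lookup_entry(
    request={
        "linked_resource": "//bigquery.googleapis.com/projects/myproject/datasets/zz_DataSet/tables/tblOne"
    }
)

# Step 2: attach a tag based on a pre-created tag template.
tag = datacatalog_v1.types.Tag()
tag.template = "projects/myproject/locations/australia-southeast1/tagTemplates/api_call_test_tag_template"
tag.fields["name"] = datacatalog_v1.types.TagField()
tag.fields["name"].string_value = "apitestcall"

created_tag = client.create_tag(parent=entry.name, tag=tag)
print(created_tag.name)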
And to attach a policy, use this API:
https://cloud.google.com/python/docs/reference/datacatalog/latest/google.cloud.datacatalog_v1.types.CreatePolicyTagRequest
Hope this helps.
Again, to simulate the Python API's behaviour, I used the Google Cloud API Explorer to explain in detail; see below.
The entry lookup is there to search for the object(s) you want to attach a tag/tag template to.
Basically, here's how I simulated the API calls using the API Explorer.
To attach a tag to a BigQuery table, the first step is to look the table up using the Data Catalog API below:
LookupEntryRequest
The parameter I passed to get the response below is:
sqlResource: "bigquery.table.myproject.zz_DataSet.tblOne"
The above should give you output like this:
{
  "name": "projects/myproject/locations/australia-southeast2/entryGroups/#bigquery/entries/mykey",
  "type": "TABLE",
  "schema": {
    "columns": [
      {
        "type": "STRING",
        "mode": "NULLABLE",
        "column": "firstname"
      },
      {
        "type": "STRING",
        "mode": "NULLABLE",
        "column": "lastname"
      }
    ]
  },
  "sourceSystemTimestamps": {
    "createTime": "2023-01-16T04:22:49.397Z",
    "updateTime": "2023-01-16T04:22:49.397Z"
  },
  "linkedResource": "//bigquery.googleapis.com/projects/myproject/datasets/zz_DataSet/tables/tblOne",
  "bigqueryTableSpec": {
    "tableSourceType": "BIGQUERY_TABLE"
  },
  "usageSignal": {
    "updateTime": "2023-02-05T07:59:59.928Z",
    "usageWithinTimeRange": {
      "30D": {
        "totalCompletions": 7,
        "totalFailures": 1,
        "totalExecutionTimeForCompletionsMillis": 7385
      }
    }
  },
  "integratedSystem": "BIGQUERY",
  "fullyQualifiedName": "bigquery:myproject.zz_DataSet.tblOne"
}
The search gives you the ability to query multiple tables or attach tags at the dataset level too; see the parameters section on the link above.
This is why I suggest you use the entry lookup first, as it makes for more scalable code.
API call two: this is how I simulated attaching the tag to the resource. Go to the link below:
CreateTagRequest
As an example, I pre-created a tag template from the console and then passed its template-id value as a parameter to the request.
Input:
parent: projects/myproject/locations/australia-southeast2/entryGroups/#bigquery/entries/mykey (from the name element above)
Request body:
{
  "template": "projects/myproject/locations/australia-southeast1/tagTemplates/api_call_test_tag_template",
  "fields": {
    "name": {
      "stringValue": "apitestcall"
    }
  }
}
Output:
Below is the response generated. If you look in the Data Catalog console, you will see the BigQuery table with the tag template attached to it, and the name field set to "apitestcall".
{
  "name": "projects/myproject/locations/australia-southeast2/entryGroups/#bigquery/entries/mykey/tags/tagsKey",
  "template": "projects/myproject/locations/australia-southeast1/tagTemplates/api_call_test_tag_template",
  "fields": {
    "name": {
      "displayName": "name",
      "stringValue": "apitestcall"
    }
  },
  "templateDisplayName": "api-call-test-tag-template"
}
Finally, please do make sure you have all the correct IAM permissions required for this task.
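For the Policy Tag part of the question, here is a minimal sketch using the PolicyTagManager client; the taxonomy resource name and display name are placeholders, and the resulting policy tag is what you then reference from a BigQuery column's schema:
from google.cloud import datacatalog_v1

ptm_client = datacatalog_v1.PolicyTagManagerClient()

# Assumes an existing taxonomy; this resource name is a placeholder.
taxonomy = "projects/myproject/locations/australia-southeast2/taxonomies/my_taxonomy_id"

policy_tag = datacatalog_v1.types.PolicyTag(display_name="PII")
created = ptm_client.create_policy_tag(parent=taxonomy, policy_tag=policy_tag)
print(created.name)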
I need to create a file using the Google Drive API (I'm using v3, the latest at the moment), with Python if it matters.
My code is below:
drive_service.files().create(supportsTeamDrives=True, body={
    'name': 'test-file',
    'mimeType': 'application/vnd.google-apps.spreadsheet',
    'parents': [folder_id],
    'properties': {'locale': 'en_GB',
                   'timeZone': 'Europe/Berlin'}
}).execute()
Following the documentation here, I tried to set the properties key with the locale set to the wanted one, but it keeps creating the file with the default locale of my account.
How can I make it work at creation time? Is there another parameter I can fill in?
Your problem is that you are mixing up two different kinds of "properties".
The properties you are setting are user-defined properties, which are only ever consumed by you yourself. They are of no significance to Google.
The properties you want to set are part of the Spreadsheet API. See https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets#SpreadsheetProperties
The simplest solution is to not use the Drive API to create your spreadsheet. Instead, use the Spreadsheet API as described at https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/create
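A minimal sketch of that approach, assuming an authenticated sheets_service built for the Sheets API v4, plus the drive_service and folder_id from the question (spreadsheets.create itself cannot place the file in a folder, so the move is a second step):
# Create the spreadsheet with the desired locale and time zone.
spreadsheet = sheets_service.spreadsheets().create(body={
    'properties': {
        'title': 'test-file',
        'locale': 'en_GB',
        'timeZone': 'Europe/Berlin',
    }
}).execute()

# The new file lands in the root folder, so move it with the Drive API.
file_id = spreadsheet['spreadsheetId']
previous = drive_service.files().get(fileId=file_id, fields='parents').execute()
drive_service.files().update(
    fileId=file_id,
    addParents=folder_id,
    removeParents=','.join(previous.get('parents', [])),
).execute()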
I just tested this in the APIs Explorer.
Create file request:
POST https://www.googleapis.com/drive/v3/files?key={YOUR_API_KEY}
{
  "properties": {
    "test": "test"
  },
  "name": "Hello"
}
Response
{
  "kind": "drive#file",
  "id": "1CYFI5rootSO5cndBD2gFb1n8SVvJ7_jo",
  "name": "Hello",
  "mimeType": "application/octet-stream"
}
File get request
GET https://www.googleapis.com/drive/v3/files/1CYFI5rootSO5cndBD2gFb1n8SVvJ7_jo?fields=*&key={YOUR_API_KEY}
Response
"kind": "drive#file",
"id": "1CYFI5rootSO5cndBD2gFb1n8SVvJ7_jo",
"name": "Hello",
"mimeType": "application/octet-stream",
"starred": false,
"trashed": false,
"explicitlyTrashed": false,
"parents": [
"0AJpJkOVaKccEUk9PVA"
],
"properties": {
"test": "test"
},
It appears to be working just fine. I suggest you try checking the following:
The file id that is returned in the response from creating the file, to ensure that you are checking the one you just uploaded. Every time you run that, you are going to create a new file.
Also remember to add fields=* with files.get, if that's what you are using to check the result of your properties.
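A minimal sketch of that check, assuming the drive_service object from the question and the file id returned by the create call:
# Fetch every field of the newly created file and inspect the custom properties.
created = drive_service.files().get(
    fileId='1CYFI5rootSO5cndBD2gFb1n8SVvJ7_jo',  # id from the create response
    fields='*',
).execute()
print(created.get('properties'))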
I am playing with the Facebook Graph API. I am interested in getting results like those of a pages search.
I can look for places like this:
search?q=audi&type=place
and I get JSON with the data I want, like website, location, etc.
Example:
{
  "data": [
    {
      "category": "Automotive Parts Store",
      "category_list": [
        {
          "id": "139492049448901",
          "name": "Automotive Parts Store"
        }
      ],
      "location": {
        "city": "Belgrade",
        "country": "Serbia",
        "latitude": 44.8206,
        "longitude": 20.4622,
        "zip": "11000"
      },
But when I want to search pages and get public page data in JSON with a request like this:
search?q=audi&type=page
this is the JSON I get:
{
  "data": []
}
What's happening here? Why can't I do the search, and is there a way to achieve this?
I have been reading the Facebook API documentation for two days now and I can't find anything relevant.
Here is the Python code that I am using (token removed):
import facebook

token = "////////"
graph = facebook.GraphAPI(access_token=token, version="2.7")
events = graph.request("/search?type=page&q=cafe&fields=about,fan_count,website")
print(events)
Use the pages/search endpoint.
Here is the API endpoint; use it with a valid access token:
https://graph.facebook.com/v3.2/pages/search?q=m&fields=id,is_eligible_for_branded_content,is_unclaimed,link,location,name,verification_status
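A minimal sketch of calling that endpoint with requests; the access token is a placeholder and must belong to an app approved for page search:
import requests

ACCESS_TOKEN = "..."  # placeholder: a valid access token

response = requests.get(
    "https://graph.facebook.com/v3.2/pages/search",
    params={
        "q": "m",
        "fields": "id,is_eligible_for_branded_content,is_unclaimed,link,location,name,verification_status",
        "access_token": ACCESS_TOKEN,
    },
)
print(response.json())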
How can I update Elasticsearch data using the elasticsearch-dsl package? Is that possible?
I found the Elasticsearch update API, but it seems a bit difficult. What I am looking for is something like:
searchObj = Search(using=logserver, index=INDEX)
searchObj=searchObj.query("term",attribute=value).update(attribute=new_value)
response = searchObj.execute()
@kingArther's answer is not correct.
elasticsearch-dsl supports updates very well!
By mapping an index to an object (DocType), it allows
you to save and update easily without any JSON REST requests.
You can find examples and the API here.
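A minimal sketch of that pattern, assuming an index called myindex with a text field named attribute (the names come from the question):
from elasticsearch_dsl import Document, Text, connections

# Connection settings are placeholders; point this at your own cluster.
connections.create_connection(hosts=["localhost:9200"])

class MyDoc(Document):
    attribute = Text()

    class Index:
        name = "myindex"

# Fetch a document by id, then apply a partial update in place.
doc = MyDoc.get(id="some-document-id")
doc.update(attribute="new_value")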
It's probably late, but I am leaving this reply for those who are still having the same issue:
elasticsearch-dsl offers an update function that can be called on classes extending the Document class of elasticsearch-dsl. The following is the code:
data = yourIndexClass.get(id=documentIdInIndex)
data.update(key=NewValue)
That is it. Simple. Find details here.
If I'm not wrong, elasticsearch_dsl doesn't have an option for update/bulk update.
So, if you like, you can use the elasticsearch-py package for the same.
Example:
from elasticsearch import Elasticsearch

INDEX = 'myindex'
LOG_HOST = 'myhost:myport'
logserver = Elasticsearch(LOG_HOST)

# Pass the new value via script params; a bare variable name in the
# Painless source would be undefined and fail to compile.
update_body = {
    "query": {
        "bool": {
            "filter": {
                "term": {"attribute": "value"}
            }
        }
    },
    "script": {
        "source": "ctx._source.attribute = params.new_value",
        "lang": "painless",
        "params": {"new_value": "new_value"}
    }
}
update_response = logserver.update_by_query(index=INDEX, body=update_body)
For more information, see this official documentation