SugarCRM response ordered dict key _hash - python

What is the _hash key that is received with the API response?
My request URL:
url = "https://" + sugar_instance + "/rest/v10/Leads"
Is there a unique user id for each Lead/Employee/Module record in SugarCRM? And if so, how can I obtain it using a request? I am using Python.

There are a few different questions within your question. I'll try to answer all of them.
What is _hash?
Have a look at this subset of an API response:
"modified_user_id": "e8b433d5-5d17-456c-8506-fe56452fcce8",
"modified_by_name": "Reisclef",
"modified_user_link": {
"full_name": "Administrator",
"id": "1",
"_acl": {
"fields": [],
"delete": "no",
"_hash": "8e11bf9be8f04daddee9d08d44ea891e"
}
},
"created_by": "1",
"created_by_name": "Administrator",
"created_by_link": {
"full_name": "Administrator",
"id": "1",
"_acl": {
"fields": [],
"delete": "no",
"_hash": "8e11bf9be8f04daddee9d08d44ea891e"
}
},
The "_hash" in the above response is a hash of the related acl record, representing the user's access control limits to the record in question.
We can prove this by looking further down my response. You will notice that the hash changes, but is consistent with each object with the same criteria:
"member_of": {
"name": "",
"id": "",
"_acl": {
"fields": [],
"_hash": "654d337e0e912edaa00dbb0fb3dc3c17"
}
},
"campaign_id": "",
"campaign_name": "",
"campaign_accounts": {
"name": "",
"id": "",
"_acl": {
"fields": [],
"_hash": "654d337e0e912edaa00dbb0fb3dc3c17"
}
},
What we can gather from this is that the _hash is a hash of the _acl object. You can confirm this by looking at include/MetaDataManager/MetaDataManager.php, line 1035.
Therefore, it's not a hash of the user record, it's a hash of the ACL settings of the record.
Is there a unique user_id?
Strictly speaking, no, there won't be a unique user id for every record (unless one user only ever created/edited one record).
If you refer back to my first block of JSON, you'll see there are two user relationships:
modified_user_id and created_by
These contain the unique id of the user record, which we can guarantee to be unique (as far as GUIDs are).
How can I obtain it?
They are technically already in the response, but if you just want to retrieve the created-by and modified-by user ids, you can restrict the fields in the call like this:
https://{INSTANCE}/rest/v10/{MODULE}?fields=created_by,modified_user_id
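For completeness, here is how that call might look from Python with the requests library. This is a sketch, not part of the original answer: it assumes you have already obtained an OAuth token from the /rest/v10/oauth2/token endpoint and stored it in access_token.

import requests

base_url = "https://" + sugar_instance + "/rest/v10"

# SugarCRM's v10 REST API expects the token in the OAuth-Token header
headers = {"OAuth-Token": access_token}

# Ask only for the two user-id fields; "id" is always included
response = requests.get(
    base_url + "/Leads",
    headers=headers,
    params={"fields": "created_by,modified_user_id"},
)
for record in response.json().get("records", []):
    print(record["id"], record["created_by"], record["modified_user_id"])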


Python + ElasticSearch: Mapper Parsing Exceptions for join field

I'm using ElasticSearch 8.3.2 to store some data I have. The data consists of metabolites and several "studies" for each metabolite, with each study in turn containing concentration values. I am also using the Python ElasticSearch client to communicate with the backend, which works fine.
To associate metabolites with studies, I was considering using a join field as described here.
I have defined this index mapping:
INDEXMAPPING_MET = {
    "mappings": {
        "properties": {
            "id": {"type": "keyword"},
            "entry_type": {"type": "text"},
            "pc_relation": {
                "type": "join",
                "relations": {
                    "metabolite": "study"
                }
            },
            "concentration": {
                "type": "nested",
            }
        }
    }
}
pc_relation is the join field here, with metabolites being the parent documents of each study document.
I can create metabolite entries (the parent documents) just fine using the Python client, for example
self.client.index(index="metabolitesv2", id=metabolite, body=json.dumps({
#[... some other fields here]
"pc_relation": {
"name": "metabolite",
},
}))
However, once I try adding child documents, I get a mapper_parsing_exception. Notably, I only get this exception when trying to add the pc_relation field; any other fields work just fine, and I can create documents if I omit the join field. Here is an example of a study document I am trying to create (on the same index):
self.client.index(index="metabolitesv2", id=study, body=json.dumps({
#[... some other fields here]
"pc_relation": {
"name": "study",
"parent": metabolite_id
},
}))
At first I thought there might be some typing issues, but casting everything to a string sadly does not change the outcome. I would really appreciate any help as to where the error could be, since I am not really sure what the issue is. From what I can tell from the official ES documentation and other Python+ES projects, I am not doing anything differently.
Tried: Creating an index with a join field, creating a parent document, creating a child document with a join relation to the parent.
Expectation: Documents get created and can be queried using has_child or has_parent tags.
Result: MappingParserException when trying to create the child document
TL;DR
You need to provide a routing value at indexing time for the child document.
The routing value is mandatory because parent and child documents must be indexed on the same shard
By default the routing value of a document is its _id, so in practice you need to provide the _id of the parent document when indexing the child.
Solution
self.client.index(index="metabolitesv2", id=study, routing=metabolite, body=json.dumps({
#[... some other fields here]
"pc_relation": {
"name": "study",
"parent": metabolite_id
},
}))
To reproduce
PUT 75224800
{
  "settings": {
    "number_of_shards": 4
  },
  "mappings": {
    "properties": {
      "id": {
        "type": "keyword"
      },
      "pc_relation": {
        "type": "join",
        "relations": {
          "metabolite": "study"
        }
      }
    }
  }
}

PUT 75224800/_doc/1
{
  "id": "1",
  "pc_relation": "metabolite"
}

# No routing id, so this is going to fail
PUT 75224800/_doc/2
{
  "id": "2",
  "pc_relation": {
    "name": "study",
    "parent": "1"
  }
}

PUT 75224800/_doc/3
{
  "id": "3",
  "pc_relation": "metabolite"
}

PUT 75224800/_doc/4?routing=3
{
  "id": "4",
  "pc_relation": {
    "name": "study",
    "parent": "3"
  }
}
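Once the children are indexed with the parent's id as routing value, the has_child/has_parent queries from the question's expectation work as intended. A minimal sketch with the 8.x Python client, not part of the original answer (the localhost connection details are assumptions):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # assumed local instance

# All studies belonging to the metabolite with id "1"
studies = client.search(index="metabolitesv2", query={
    "has_parent": {
        "parent_type": "metabolite",
        "query": {"term": {"id": "1"}}
    }
})

# All metabolites that have at least one study attached
parents = client.search(index="metabolitesv2", query={
    "has_child": {
        "type": "study",
        "query": {"match_all": {}}
    }
})
print(studies["hits"]["hits"], parents["hits"]["hits"])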

How to get only specific values from all mongodb documents and return them as json?

I have user documents like this for each individual user:
{
    "_id": {
        "$oid": "638df6dd4774e9573010b138"
    },
    "username": "abc",
    "email": "abc@xyz.com",
    "stats": {
        "ranking": "1",
        "match": "0.214"
    },
    "stats_extra": {
        "pre_ranking": "10",
        "pre_match": "0.290"
    }
}
and I am trying to fetch only "username" and "stats" for each individual user and return them as a JSON API response.
I can print usernames and stats for each individual user like this:
@app.get("/Stats", tags=["userstats"])
def get_stats():
    for doc in app.Users.find():
        print(doc["username"], doc["stats"])
    return { }
but I am struggling to find the right way to send all users' usernames and stats as a JSON response like this:
{"data": [
{"username":"abc", "stats":{"ranking": "1","match": "0.214"}} ,
{"username":"xyz", "stats":{"ranking": "10","match": "0.2104"}} ,
{"username":"ijk", "stats":{"ranking": "12","match": "0.2014"}}]
}
You can use the projection parameter to indicate which fields of the documents have to be returned. Check out this link.
In your case something like this should work:
app.Users.find(projection={"username": 1, "stats": 1})
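Applied to the endpoint from the question, that could look like the sketch below (my own completion, not part of the original answer). Note that _id is returned by default and is excluded here, since an ObjectId is not JSON-serializable:

@app.get("/Stats", tags=["userstats"])
def get_stats():
    # Project only the fields we need; exclude the non-serializable _id
    cursor = app.Users.find(projection={"_id": 0, "username": 1, "stats": 1})
    return {"data": list(cursor)}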

mongodb data retrieval from nested fields using flask pymongo

I'm passing id and category id fields as parameters to get the document. With my code below I'm only able to fetch those fields of the document. How do I get the entire document while passing multiple fields as parameters?
Code:
@reviews.route('/<inp_id>/<cat_id>', methods=['GET'])
def index(inp_id, cat_id):
    collection = mongo_connection.db.db_name
    document = collection.find_one({'id': inp_id}, {'category.id': cat_id})
Result:
{
    "category": {
        "id": "13"
    },
    "_id": "5cdd36cd8a348e81d8995d3b"
}
I want:
{
    "customer": {
        "id": "1",
        "name": "Kit Data"
    },
    "category": {
        "id": "13",
        "name": "TrainKit"
    },
    "review_date": "2019-05-06",
    "phrases": null,
    .....
}
Pass all your filters in the first dict, the second one is for projection.
document = collection.find_one({'id': inp_id, 'category.id': cat_id})
Your original query, collection.find_one({'id': inp_id}, {'category.id': cat_id}), means: return only category.id (plus _id, which is included by default) of a document whose id equals inp_id.
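Putting it together with the route from the question, a minimal sketch (my own completion; the mongo_connection handle and collection name are taken from the question as-is):

from flask import jsonify

@reviews.route('/<inp_id>/<cat_id>', methods=['GET'])
def index(inp_id, cat_id):
    collection = mongo_connection.db.db_name
    # Both conditions go into the filter dict; with no projection given,
    # the whole document is returned
    document = collection.find_one({'id': inp_id, 'category.id': cat_id})
    document['_id'] = str(document['_id'])  # make the ObjectId JSON-serializable
    return jsonify(document)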

Two JSON documents linked by a key

I have a Python server listening to POSTs from an external server. I expect two JSON documents for every incident happening on the external server. One of the fields in the JSON documents is a unique_key which can be used to identify that the two documents belong together. Upon receiving the JSON documents, my Python server inserts them into Elasticsearch. The two documents related to an incident will be indexed in Elasticsearch as follows:
/my_index/doc_type/doc_1
/my_index/doc_type/doc_2
i.e. the documents belong to the same index and have the same document type. But I don't have an easy way to know that these two documents are related. I want to do some processing before inserting into Elasticsearch, where I can use the unique_key on the two documents to link them. What are your thoughts on doing some normalization across the two documents and merging them into a single JSON document? It has to be remembered that I will be receiving a large number of such documents per second, so I need some temporary storage to store and process the JSON documents. Can someone give some suggestions for approaching this problem?
As an update, I am adding the basic structure of the JSON files here.
json_1
{
    "msg": "0",
    "tdxy": "1",
    "data": {
        "Metric": "true",
        "Severity": "warn",
        "Message": {
            "Session": "None",
            "TransId": "myserver.com-14d9e013794",
            "TransName": "dashboard.action",
            "Time": 0,
            "Code": 0,
            "CPUs": 8,
            "Lang": "en-GB",
            "Event": "false"
        },
        "EventTimestamp": "1433192761097"
    },
    "Timestamp": "1433732801097",
    "Host": "myserver.myspace.com",
    "Group": "UndefinedGroup"
}
json_2
{
    "Message": "Hello World",
    "Session": "4B5ABE9B135B7EHD49343865C83AD9E079",
    "TransId": "myserver.com-14d9e013794",
    "TransName": "dashboard.action",
    "points": [
        {
            "Name": "service.myserver.com:9065",
            "Host": "myserver.com",
            "Port": "9065"
        }
    ],
    "Points Operations": 1,
    "Points Exceeded": 0,
    "HEADER.connection": "Keep-Alive",
    "PARAMETER._": "1432875392706"
}
I have updated the code as per the suggestion.
if rx_buffer:
    txid = json.loads(rx_buffer)['TransId']
    if `condition_1`:
        res = es.index(index='its', doc_type='vents', id=txid, body=rx_buffer)
        print(res['created'])
    elif `condition_2`:
        res = es.update(index='its', doc_type='vents', id=txid, body={"f_vent": {"b_vent": rx_buffer}})
I get the following error.
File "/usr/lib/python2.7/site-packages/elasticsearch/transport.py", line 307, in perform_request
status, headers, data = connection.perform_request(method, url, params, body, ignore=ignore, timeout=timeout)
File "/usr/lib/python2.7/site-packages/elasticsearch/connection/http_urllib3.py", line 89, in perform_request
self._raise_error(response.status, raw_data)
File "/usr/lib/python2.7/site-packages/elasticsearch/connection/base.py", line 105, in _raise_error
raise HTTP_EXCEPTIONS.get(status_code, TransportError)(status_code, error_message, additional_info)
RequestError: TransportError(400, u'ActionRequestValidationException[Validation Failed: 1: script or doc is missing;]')
The error above is raised because an update request body must wrap the partial document in a "doc" key (or provide a "script"), which is exactly what the validation message says is missing; the update call further down does this correctly.
The code below assumes you're using the official elasticsearch-py library, but it's easy to transpose it to another library. We'd also probably need to create a specific mapping for your assembled document of type doc_type, but that heavily depends on how you want to query it later on.
Anyway, based on our discussion above, I would then index json1 first
from elasticsearch import Elasticsearch

es_client = Elasticsearch(hosts=[{"host": "localhost", "port": 9200}])

json1 = { ...JSON of the first document you've received... }

# extract the unique id
# note: you might want to only take 14d9e013794 and ditch "myserver.com-" if that prefix is always constant
doc_id = json1['data']['Message']['TransId']

# index the first document
es_client.index(index="my_index", doc_type="doc_type", id=doc_id, body=json1)
At this point json1 is stored in Elasticsearch. Then, when you later get your second document json2 you can proceed like this:
json2 = { ...JSON of the second document you've received... }

# extract the unique id
# note: same remark about keeping only the second part of the id
doc_id = json2['TransId']

# make a partial update of your first document
es_client.update(index="my_index", doc_type="doc_type", id=doc_id, body={"doc": {"SecondDoc": json2}})
Note that SecondDoc can be any name of your choosing here, it's simply a nested field that will contain your second document.
At this point you should have a single document having the id 14d9e013794 and the following content:
{
    "msg": "0",
    "tdxy": "1",
    "data": {
        "Metric": "true",
        "Severity": "warn",
        "Message": {
            "Session": "None",
            "TransId": "myserver.com-14d9e013794",
            "TransName": "dashboard.action",
            "Time": 0,
            "Code": 0,
            "CPUs": 8,
            "Lang": "en-GB",
            "Event": "false"
        },
        "EventTimestamp": "1433192761097"
    },
    "Timestamp": "1433732801097",
    "Host": "myserver.myspace.com",
    "Group": "UndefinedGroup",
    "SecondDoc": {
        "Message": "Hello World",
        "Session": "4B5ABE9B135B7EHD49343865C83AD9E079",
        "TransId": "myserver.com-14d9e013794",
        "TransName": "dashboard.action",
        "points": [
            {
                "Name": "service.myserver.com:9065",
                "Host": "myserver.com",
                "Port": "9065"
            }
        ],
        "Points Operations": 1,
        "Points Exceeded": 0,
        "HEADER.connection": "Keep-Alive",
        "PARAMETER._": "1432875392706"
    }
}
Of course, you can make any processing on json1 and json2 before indexing/updating them.

Where to change the form of json response in django rest framework?

Let's say I have a model:
class MyModel(models.Model):
    name = models.CharField(max_length=100)
    description = models.TextField()
    ...
Then I created a ModelViewSet with a HyperlinkedModelSerializer, so when I call my /api/mymodels endpoint I get responses like this:
{
    "count": 2,
    "next": null,
    "previous": null,
    "results": [
        {"name": "somename", "description": "desc"},
        {"name": "someothername", "description": "asdasd"}
    ]
}
and when I call /api/mymodels/1 I get:
{ "name": "somename", "description": "asdasd"}
but what I would like to get is:
{
    "metadata": {...},
    "results": {"name": "somename", "description": "desc"}
}
And I would like to use this format for all models on my website, so I don't want to change every viewset; I want to implement it in (most likely) one class and then use it for all my viewsets.
So my question is: which renderer or serializer or other class (I'm really not sure) should I alter or create to get this JSON response behavior?
The first response appears to be a paginated response, which is determined by the pagination serializer. You can create a custom pagination serializer that will use a custom format. You are looking for something similar to the following:
class MetadataSerializer(serializers.Serializer):
    count = serializers.Field(source='paginator.count')
    next = pagination.NextPageField(source='*')
    previous = pagination.PreviousPageField(source='*')

class CustomPaginationSerializer(pagination.BasePaginationSerializer):
    metadata = MetadataSerializer(source='*')
This should give you an output similar to the following:
{
    "metadata": {
        "count": 2,
        "next": null,
        "previous": null
    },
    "results": [
        {"name": "somename", "description": "desc"},
        {"name": "someothername", "description": "asdasd"}
    ]
}
The pagination serializer can be set globally through your settings, as described in the documentation.
REST_FRAMEWORK = {
    'DEFAULT_PAGINATION_SERIALIZER_CLASS': 'full.path.to.CustomPaginationSerializer',
}
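Note that pagination serializers belong to older DRF releases; DRF 3.1 replaced them with pagination classes. On a modern version, the equivalent is a sketch like the one below (the class name and module path are placeholders of your choosing), which overrides get_paginated_response:

from collections import OrderedDict
from rest_framework import pagination
from rest_framework.response import Response

class MetadataPagination(pagination.PageNumberPagination):
    def get_paginated_response(self, data):
        # Wrap the usual count/next/previous block in a "metadata" key
        return Response(OrderedDict([
            ('metadata', OrderedDict([
                ('count', self.page.paginator.count),
                ('next', self.get_next_link()),
                ('previous', self.get_previous_link()),
            ])),
            ('results', data),
        ]))

It is then registered globally with 'DEFAULT_PAGINATION_CLASS': 'full.path.to.MetadataPagination' in the REST_FRAMEWORK settings.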
