I am new to Python. I have two lists: one is a list of keys (titles) and the other is a list of value rows.
title = ["Code", "Title", "Value", ...]
value = [["100", "abcd", "100", ...], ["101", "efgh", "200", ...], ...]
data = {}
for sp in value:
    data.setdefault("data", []).append({"code": sp[0], "val": sp[2]})
This code gives me the following result:
{'data': [{'code': '100', 'val': '100'},{'code': '101', 'val': '200'}]}
But I want the result to look like this:
{ "100": { "Title": "abcd", "Value": "100", ............, ............}, "101": { "Title": "efgh", "Value": "200", ............, ............} }
That is, the first column of each row in the value list should become the key, and the remaining items should be paired with the corresponding titles as key-value pairs. How can I generate this JSON structure in Python from those two lists?
Since the size of the lists is not specified, the code below should do the job. I am using Python 3.x.
title = ["Code","Title","Value"]
value = [["100","abcd","100"],["101","efgh","200"]]
dic1 = {}
for i in range(len(value)):
    for j in range(len(title) - 1):
        dic1.setdefault(value[i][0], {}).update({title[j + 1]: value[i][j + 1]})
Output is
{'101': {'Title': 'efgh', 'Value': '200'}, '100': {'Title': 'abcd', 'Value': '100'}}
I hope it is helpful!
You can build a dict from these lists. I made a quick snippet just so you can understand the idea:
title = ["Code","Title","Value"]
value = [['100','abcd','100'],['101','efgh','200']]
data={}
for whatever in value:
    your_titles = {}
    your_titles[title[0]] = whatever[0]  # Code
    your_titles[title[1]] = whatever[1]  # Title
    your_titles[title[2]] = whatever[2]  # Value
    data[whatever[0]] = your_titles      # key the entry by its Code
print(data)
The output:
{'100': {'Code': '100', 'Title': 'abcd', 'Value': '100'}, '101': {'Code': '101', 'Title': 'efgh', 'Value': '200'}}
Please read a tutorial on Python dictionaries and try to make it yourself. This is not the optimal solution for this problem.
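For reference, a more compact sketch of the same idea, assuming the same title and value lists as above and that the first column should only appear as the outer key:
data = {row[0]: dict(zip(title[1:], row[1:])) for row in value}
# {'100': {'Title': 'abcd', 'Value': '100'}, '101': {'Title': 'efgh', 'Value': '200'}}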
Make a DataFrame, set the Code column as the index, and then convert it to JSON:
import pandas as pd

data_frame = pd.DataFrame(columns=title, data=value)
data = data_frame.set_index('Code')
json1 = data.to_json(orient='index')
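With the title and value lists from the question, json1 should come out roughly as:
print(json1)
# {"100":{"Title":"abcd","Value":"100"},"101":{"Title":"efgh","Value":"200"}}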
When I load a JSON file with json.load, it gets converted into a list of dictionaries.
Sample rows from the list of dictionaries are shown below:
config_items_list = [{'hostname': '"abc2164"', 'Status': '"InUse"', 'source': '"excel"', 'port': '"[445]"', 'tech': '"Others"', 'ID': '"123456"'},
{'hostname': '"xyz2164"', 'Status': '"InUse"', 'source': '"web"', 'port': '"[123]"', 'tech': '"Others"', 'ID': '"456789"'},
{'hostname': '"pqr2164"', 'Status': '"NotInUse"', 'source': '"web"', 'port': '"[777]"', 'tech': '"Others"', 'ID': '"123456"'}]
The requirement is to parse this list of dictionaries and extract all rows having a specific value in the ID key. For example, 123456 (which means two rows from the sample above).
Try this solution; it should solve your problem:
def filter_json_with_id(config_items_list, id_value):
    items = []
    for item in config_items_list:
        if item['ID'] == id_value:
            items.append(item)
    return items
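For example, with the config_items_list above (note that the ID values in the data include literal double quotes, so the quotes must be part of the argument):
matches = filter_json_with_id(config_items_list, '"123456"')
print(matches)  # the two dictionaries whose ID is "123456"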
select_id = '"123456"'
[d for d in config_items_list if d["ID"] == select_id]
will produce a list of the dictionaries in config_items_list whose "ID" equals the given select_id.
The solution below does a substring check, so it finds the id even when it is surrounded by other symbols or characters (such as the double quotes in your data):
expectedResult = [d for d in config_items_list if "123456" in d['ID']]
I have a list of dictionaries and I would like to remove duplicates while keeping some information.
For context, these are blockchain transactions. The pieces of information are:
transaction hash,
ID of the NFT,
value of the transaction (it can be buying, selling or minting price).
In the original list below, every action is separate. However, I realized that some entries share the same hash, so they are really one transaction with multiple actions (in this case, minting 2 NFTs in one transaction). So my goal is to have a cleaned list with only the individual transactions (identified by their hash), with the multiple IDs separated by a comma when a transaction has several actions, and the total price.
original_list = [
{
'hash': '12345',
'ID': '355',
'price': 12
},
{
'hash': '12345',
'ID': '356',
'price': 12
},
{
'hash': '635',
'ID': '355',
'price': 30
},
{
'hash': '637',
'ID': '356',
'price': 35
}
]
Here is the end result I want:
clean_list = [
{
'hash': '12345',
'ID': '355, 356',
'price': 12
},
{
'hash': '635',
'ID': '355',
'price': 30
},
{
'hash': '637',
'ID': '356',
'price': 35
}
]
How can I do this?
You can just loop through your list and build a new list, merging the data of entries that share a hash:
def exist_in_list(hash, lst):  # returns the index of the entry with this hash, or -1 if it is not in the list
    for i in range(len(lst)):
        if lst[i]['hash'] == hash:
            return i
    return -1

new_list = []
for entry in original_list:
    index = exist_in_list(entry['hash'], new_list)
    if index < 0:
        new_list.append(entry)
    else:
        new_list[index]['ID'] += f", {entry['ID']}"
You loop through the original list and check whether each entry's hash is already in the new one; if it isn't, you add the dictionary, and if it is, you append the new ID to the existing entry.
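As an aside, a dict keyed by hash avoids the linear search on every insert. A minimal sketch, assuming you want to keep the first price seen per hash (as in the example output) rather than summing them:
merged = {}
for tx in original_list:
    if tx['hash'] in merged:
        merged[tx['hash']]['ID'] += f", {tx['ID']}"  # append the extra NFT ID
    else:
        merged[tx['hash']] = dict(tx)                # copy so the original entries stay untouched
clean_list = list(merged.values())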
Given a list of dictionaries such as:
list_ = [
{ 'name' : 'date',
'value': '2021-01-01'
},
{ 'name' : 'length',
'value': '500'
},
{ 'name' : 'server',
'value': 'g.com'
},
]
How can I access the value where the key name == length?
I want to avoid iteration if possible, and just be able to check if the key called 'length' exists, and if so, get its value.
With iteration, and using next, you could do:
list_ = [
{'name': 'date',
'value': '2021-01-01'
},
{'name': 'length',
'value': '500'
},
{'name': 'server',
'value': 'g.com'
}
]
res = next(dic["value"] for dic in list_ if dic.get("name", "") == "length")
print(res)
Output
500
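Note that next will raise StopIteration if no dictionary matches. You can pass a default as a second argument to avoid that (the None here is just an example fallback):
res = next((dic["value"] for dic in list_ if dic.get("name", "") == "length"), None)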
As an alternative, if the "names" are unique you could build a dictionary to avoid further iterations on list_, as follows:
lookup = {d["name"] : d["value"] for d in list_}
res = lookup["length"]
print(res)
Output
500
Notice that if you need a second key such as "server", you won't need to iterate, just do:
lookup["server"] # g.com
It sure is hard to find an element in a list without iterating through it. That's the first solution I will show:
list(filter(lambda element: element['name'] == 'length', list_))[0]['value']
This filters your list down to the elements whose name is 'length', takes the first element of that filtered list, and then selects its 'value'.
Now, if you had a better data structure, you wouldn't have to iterate. In order to create that better data structure, unfortunately, we will have to iterate the list. A list of dicts with "name" and "value" could really just be a single dict where "name" is the key and "value" is the value. To create that dict:
dict_ = {item['name']:item['value'] for item in list_}
then you can just select 'length'
dict_['length']
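If the key might be missing, dict_.get('length') returns None instead of raising a KeyError:
dict_.get('length')  # '500', or None if no entry has name 'length'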
I'm trying to perform operations on a nested dictionary (data retrieved from a yaml file):
data = {'services': {'web': {'name': 'x'}}, 'networks': {'prod': 'value'}}
I'm trying to modify the above using the inputs like:
{'services.web.name': 'new'}
I converted the above into a list of keys, ['services', 'web', 'name'], but I'm not sure how to perform the operation below in a loop:
data['services']['web']['name'] = new
That way I can modify the data dict. There are other values I plan to change in the above dictionary (it is an extensive one), so I need a solution that also works in cases where I have to change, e.g.:
data['services2']['web2']['networks']['local'].
Is there an easy way to do this? Any help is appreciated.
You may iterate over the keys while moving a reference:
data = {'networks': {'prod': 'value'}, 'services': {'web': {'name': 'x'}}}
modification = {'services.web.name': 'new'}
for key, value in modification.items():
    keyparts = key.split('.')
    to_modify = data
    for keypart in keyparts[:-1]:
        to_modify = to_modify[keypart]
    to_modify[keyparts[-1]] = value
print(data)
Giving:
{'networks': {'prod': 'value'}, 'services': {'web': {'name': 'new'}}}
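If some of the intermediate keys do not exist yet (for example along the 'services2.web2.networks.local' path mentioned in the question), a setdefault-based variant of the same loop creates them on the way down; a sketch under that assumption:
for key, value in modification.items():
    keyparts = key.split('.')
    to_modify = data
    for keypart in keyparts[:-1]:
        to_modify = to_modify.setdefault(keypart, {})  # create missing levels as empty dicts
    to_modify[keyparts[-1]] = value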
I'm inserting documents into elasticsearch and trying to sort on a given field that's present in all documents. However, whenever I update a document, indexing seems to break and I do not get a sorted order. I have created an index by doing:
self.conn = ES(server=url)
self.conn.create_index("test.test")
For instance, I would like to sort on a "_ts" field. Given the following dictionaries and code:
def update_or_insert(doc):
    doc_type = "string"
    index = doc['ns']
    doc['_id'] = str(doc['_id'])
    doc_id = doc['_id']
    self.conn.index(doc, index, doc_type, doc_id)
to_insert = [
    {'_id': '4', 'name': 'John', '_ts': 3, 'ns': 'test.test'},
    {'_id': '5', 'name': 'Paul', '_ts': 2, 'ns': 'test.test'},
    {'_id': '6', 'name': 'George', '_ts': 1, 'ns': 'test.test'},
    {'_id': '6', 'name': 'Ringo', '_ts': 4, 'ns': 'test.test'},
]
for x in to_insert:
    update_or_insert(x)

result = self.conn.search(q, sort={'_ts:desc'})
for it in result:
    print it
I would expect to get an ordering of "Ringo, John, Paul" but instead get an ordering of "John, Paul, Ringo". Any reason why this might be the case? I see there's a bug here:
https://github.com/elasticsearch/elasticsearch/issues/3078
But that seems to affect ES .90.0 and I'm using .90.1.
It should be:
sort={"_ts":"desc"}