dictionary update sequence element #0 has length 3; 2 is required - python

I want to add lines to the object account.bank.statement.line through another object, but I get the following error:
"dictionary update sequence element #0 has length 3; 2 is required"
Here is my code:
def action_account_line_create(self, cr, uid, ids):
    res = False
    cash_id = self.pool.get('account.bank.statement.line')
    for exp in self.browse(cr, uid, ids):
        company_id = exp.company_id.id
        #statement_id = exp.statement_id.id
        lines = []
        for l in exp.line_ids:
            lines.append((0, 0, {
                'name': l.name,
                'date': l.date,
                'amount': l.amount,
                'type': l.type,
                'statement_id': exp.statement_id.id,
                'account_id': l.account_id.id,
                'account_analytic_id': l.analytic_account_id.id,
                'ref': l.ref,
                'note': l.note,
                'company_id': l.company_id.id
            }))
        inv_id = cash_id.create(cr, uid, lines, context=None)
        res = inv_id
    return res
I changed it to the version below, but then I ran into this error:
File "C:\Program Files (x86)\OpenERP 6.1-20121029-003136\Server\server\.\openerp\workflow\wkf_expr.py", line 68, in execute
File "C:\Program Files (x86)\OpenERP 6.1-20121029-003136\Server\server\.\openerp\workflow\wkf_expr.py", line 58, in _eval_expr
File "C:\Program Files (x86)\OpenERP 6.1-20121029-003136\Server\server\.\openerp\tools\safe_eval.py", line 241, in safe_eval
File "C:\Program Files (x86)\OpenERP 6.1-20121029-003136\Server\server\.\openerp\tools\safe_eval.py", line 108, in test_expr
File "<string>", line 0
^
SyntaxError: unexpected EOF while parsing
Code:
def action_account_line_create(self, cr, uid, ids, context=None):
    res = False
    cash_id = self.pool.get('account.bank.statement.line')
    for exp in self.browse(cr, uid, ids):
        company_id = exp.company_id.id
        lines = []
        for l in exp.line_ids:
            res = cash_id.create(cr, uid, {
                'name': l.name,
                'date': l.date,
                'amount': l.amount,
                'type': l.type,
                'statement_id': exp.statement_id.id,
                'account_id': l.account_id.id,
                'account_analytic_id': l.analytic_account_id.id,
                'ref': l.ref,
                'note': l.note,
                'company_id': l.company_id.id
            }, context=None)
    return res

This error is raised because you are trying to update a dict object using a sequence (a list or tuple) with the wrong structure.
cash_id.create(cr, uid, lines, context=None) tries to convert lines into a dict object:
(0, 0, {
    'name': l.name,
    'date': l.date,
    'amount': l.amount,
    'type': l.type,
    'statement_id': exp.statement_id.id,
    'account_id': l.account_id.id,
    'account_analytic_id': l.analytic_account_id.id,
    'ref': l.ref,
    'note': l.note,
    'company_id': l.company_id.id
})
Remove the second zero from this tuple so it can be converted into a dict object properly.
To test it yourself, try this in a Python shell:
>>> l=[(0,0,{'h':88})]
>>> a={}
>>> a.update(l)
Traceback (most recent call last):
File "<pyshell#11>", line 1, in <module>
a.update(l)
ValueError: dictionary update sequence element #0 has length 3; 2 is required
>>> l=[(0,{'h':88})]
>>> a.update(l)

I was getting this error when I was updating the dictionary with the wrong syntax. I had written
lineItem.values.update({attribute, value})
(a set literal) when it should have been
lineItem.values.update({attribute: value})
(a dict literal).
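A minimal sketch of that difference (`d` is my own toy example): `{k, v}` is a set of plain strings, and updating a dict from it fails because each element would have to be a key/value pair of length 2:

```python
d = {}
d.update({'color': 'red'})        # dict literal: adds one key/value pair

try:
    d.update({'colour', 'blue'})  # set literal: elements are plain strings,
except ValueError as e:           # not 2-item pairs
    print(e)  # e.g. "dictionary update sequence element #0 has length 6; 2 is required"
```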

Not really an answer to the specific question, but for others who, like me, get this error in FastAPI and end up here:
it is probably because your route response contains a value that can't be JSON-serialised by jsonable_encoder. For me it was WKBElement: https://github.com/tiangolo/fastapi/issues/2366
Like in the issue, I ended up just removing the value from the output.
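A framework-agnostic sketch of that workaround (the helper name is mine, not from FastAPI): probe each value with the json module and drop anything it can't encode before returning the payload.

```python
import json

def drop_unserialisable(payload):
    """Return a copy of payload without the values json can't encode."""
    clean = {}
    for key, value in payload.items():
        try:
            json.dumps(value)  # probe; raises TypeError for e.g. a WKBElement
            clean[key] = value
        except TypeError:
            pass               # drop the offending value from the response
    return clean

print(drop_unserialisable({'id': 7, 'geom': object()}))  # {'id': 7}
```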

One of the fastest ways to create a dict from two equal-length tuples:
>>> t1 = ('a', 'b', 'c', 'd')
>>> t2 = (1, 2, 3, 4)
>>> dict(zip(t1, t2))
{'a': 1, 'b': 2, 'c': 3, 'd': 4}

I got dictionary update sequence element #0 has length 3; 2 is required when I was trying to convert a dict to a list using .values().
I solved it by using .items():
list(dict(new_row.items()))
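The likely mechanism, as a sketch (`new_row` here is my own toy example): `dict()` accepts an iterable of length-2 pairs, so feeding it `.values()` that contain a 3-item sequence reproduces the exact error, while `.items()` always yields 2-tuples:

```python
new_row = {'key': ('a', 'b', 'c')}   # a value that is a 3-item sequence

try:
    dict(new_row.values())           # elements have length 3, not 2
except ValueError as e:
    print(e)  # dictionary update sequence element #0 has length 3; 2 is required

print(dict(new_row.items()))         # works: items are always 2-tuples
```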

To anyone following Patrick Collins' video[1] on smart contract development who runs into this error: in the brownie-config.yaml file I had an '=' instead of a '-' in the following line(s):
...
compiler:
    solc:
        remappings:
            = '@openzeppelin=OpenZeppelin/openzeppelin-contracts@4.2.0'
The first '=' should be '-'. The second one (inside the string) is as it should be.
reference:
[1]: https://www.youtube.com/watch?v=M576WGiDBdQ

Related

redmine - updating a list type custom field failed

I'm trying to update a list-type custom field, with JSON like:
{'id': xx, "name": "xxxx", "multiple": True,
 'value': ['string1', 'string2', 'string3']}
I sent the request using python-redmine. Our redmine version is 4.1.1.stable.
Python code:
redmine.project.update(xx,
                       custom_fields=[{'id': xx, "name": "xxxx", "multiple": True,
                                       'value': ['new_string1', 'new_string2', 'new_string3']}])
Python gave me the error below, and the value failed to update. I have tried various ways to input the value, but no luck. Thanks in advance for any help!
Traceback (most recent call last):
File "/Users/solidfire/.pyenv/versions/3.8.2/envs/psirt-tools/lib/python3.8/site-packages/redminelib/managers/base.py", line 248, in update
response = self.redmine.engine.request(self.resource_class.http_method_update, url, data=request)
File "/Users/solidfire/.pyenv/versions/3.8.2/envs/psirt-tools/lib/python3.8/site-packages/redminelib/engines/base.py", line 87, in request
return self.process_response(self.session.request(method, url, **kwargs))
File "/Users/solidfire/.pyenv/versions/3.8.2/envs/psirt-tools/lib/python3.8/site-packages/redminelib/engines/base.py", line 182, in process_response
raise exceptions.ValidationError(', '.join(': '.join(e) if isinstance(e, list) else e for e in errors))
redminelib.exceptions.ValidationError: <field name> is not included in the list

Issue with findall over a string (expected string or bytes-like object)

A previous question I asked about this topic was put on hold as off-topic; I made the modifications but it's still on hold, I don't know why. Here is what I asked:
I am currently working on an Amazon scraper and I needed to get the images from a product, for example:
https://www.amazon.com/gp/product/B0711BMXVB?pf_rd_p=1581d9f4-062f-453c-b69e-0f3e00ba2652&pf_rd_r=X7FDBW1DN25C8PM5A01C
What I did was, using the xpath
//script[contains(., "ImageBlockATF")]/text()
Get a bunch of text that, inside, contains all the urls for the 'large' images
Basically this:
P.when('A').register("ImageBlockATF", function(A){
var data = {
'colorImages': { 'initial': [{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/41rNitnJpsL._US40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/41rNitnJpsL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX395_.jpg":[282,395],"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX500_.jpg":[357,500],"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX535_.jpg":[382,535],"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX575_.jpg":[410,575],"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX625_.jpg":[446,625],"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX675_.jpg":[481,675],"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX695_.jpg":[496,695]},"variant":"MAIN","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/41Q1eJ1c1tL._US40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/41Q1eJ1c1tL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY395_.jpg":[395,249],"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY500_.jpg":[500,316],"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY535_.jpg":[535,338],"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY575_.jpg":[575,363],"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY625_.jpg":[625,395],"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY675_.jpg":[675,426],"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY695_.jpg":[695,439]},"variant":"FRNT","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/51%2BKgvmEndL._US40_.jpg","large":"https://images-na.ssl-images-amazon
.com/images/I/51%2BKgvmEndL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY395_.jpg":[395,301],"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY500_.jpg":[500,381],"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY535_.jpg":[535,408],"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY575_.jpg":[575,438],"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY625_.jpg":[625,477],"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY675_.jpg":[675,515],"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY695_.jpg":[695,530]},"variant":"BACK","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/31rBxkzNDgL._US40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/31rBxkzNDgL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX395_.jpg":[146,395],"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX500_.jpg":[185,500],"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX535_.jpg":[198,535],"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX575_.jpg":[213,575],"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX625_.jpg":[231,625],"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX675_.jpg":[250,675],"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX695_.jpg":[257,695]},"variant":"BOTT","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/41qECXntKAL._US40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/41qECXntKAL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UX395_.jpg":[139,395],"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UX500_.jpg":[177,500],"https://images-na.
ssl-images-amazon.com/images/I/8139cgDppVL._UX535_.jpg":[189,535],"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UX575_.jpg":[203,575],"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UX625_.jpg":[221,625],"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UX675_.jpg":[238,675],"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UX695_.jpg":[245,695]},"variant":"TOPP","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/41rT%2B2GI9ZL._US40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/41rT%2B2GI9ZL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX395_.jpg":[186,395],"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX500_.jpg":[235,500],"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX535_.jpg":[252,535],"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX575_.jpg":[271,575],"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX625_.jpg":[294,625],"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX675_.jpg":[318,675],"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX695_.jpg":[327,695]},"variant":"RGHT","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/419Wv4M%2B-bL._US40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/419Wv4M%2B-bL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX395_.jpg":[255,395],"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX500_.jpg":[322,500],"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX535_.jpg":[345,535],"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX575_.jpg":[371,575],"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX625_
.jpg":[403,625],"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX675_.jpg":[435,675],"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX695_.jpg":[448,695]},"variant":"PAIR","lowRes":null}]},
'colorToAsin': {'initial': {}},
'holderRatio': 1.2,
'holderMaxHeight': 700,
'heroImage': {'initial': []},
'heroVideo': {'initial': []},
'spin360ColorData': {'initial': {}},
'spin360ColorEnabled': {'initial': 0},
'spin360ConfigEnabled': false,
'spin360LazyLoadEnabled': false,
'playVideoInImmersiveView':'false',
'tabbedImmersiveViewTreatment':'C',
'totalVideoCount':'0',
'videoIngressATFSlateThumbURL':'',
'mediaTypeCount':'0',
'atfEnhancedHoverOverlay' : true,
'winningAsin': 'B072596K2C',
'weblabs' : {},
'aibExp3Layout' : 1,
'aibRuleName' : 'frank-powered',
'acEnabled' : false
};
A.trigger('P.AboveTheFold'); // trigger ATF event.
return data;
});
I named this string imagesString.
I get this string by doing, with scrapy:
imagesString = (response.xpath('//script[contains(., "ImageBlockATF")]/text()').extract_first())
When you search the previous xpath in the example URL, 2 'blocks' of text pop up. With the extract_first() you get the first one extracted, which is the long string above.
Then I had to obtain the urls so I did this:
images = re.findall('\"large\":\"(https.*?\.jpg)\"', imagesString)
Which gave me a list of all the large images urls.
The issue I'm having is that, at some point while the program is running, I get this error:
Traceback (most recent call last):
File "C:\Users\Manuel\Anaconda3\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
yield next(it)
File "C:\Users\Manuel\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 30, in process_spider_output
for x in result:
File "C:\Users\Manuel\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
return (_set_referer(r) for r in result or ())
File "C:\Users\Manuel\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\Manuel\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\Manuel\Desktop\scrapyProject\genericScraper\genericScraper\spiders\finalClothes_spider.py", line 52, in parse
imagenes = re.findall('\"large\":\"(https.*?\.jpg)\"', imagenesString)
File "C:\Users\Manuel\Anaconda3\lib\re.py", line 223, in findall
return _compile(pattern, flags).findall(string)
TypeError: expected string or bytes-like object
I honestly have no idea what's going on. What I can see is that this error never occurs at the start of the process: if I do this for 30 products it works fine, but when I start scraping more products this happens.
Using the JSON approach with @Maurice Mayer's help:
s = response.xpath('//script[contains(., "ImageBlockATF")]/text()').extract_first()
m = re.search(r'^var data = ({.*};)', s, re.S | re.M)
data = m.groups()[0]
jsonObj = json.loads(data[:-1].replace("'", '"'))
I'm getting this error
File "C:\Users\Manuel\Anaconda3\lib\site-packages\scrapy\utils\defer.py", line 102, in iter_errback
yield next(it)
File "C:\Users\Manuel\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\offsite.py", line 30, in process_spider_output
for x in result:
File "C:\Users\Manuel\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\referer.py", line 339, in <genexpr>
return (_set_referer(r) for r in result or ())
File "C:\Users\Manuel\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\Manuel\Anaconda3\lib\site-packages\scrapy\spidermiddlewares\depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "C:\Users\Manuel\Desktop\scrapyProject\genericScraper\genericScraper\spiders\finalClothes_spider.py", line 59, in parse
data = m.groups()[0]
AttributeError: 'NoneType' object has no attribute 'groups'
EDIT: Added user suggestion and new error
EDIT2: Added Json tag
The JavaScript variable data is a JSON object; it might be easier to treat it as such, and then you can iterate over the object quickly:
import json
import re
s = """P.when('A').register("ImageBlockATF", function(A){
var data = {
'colorImages': { 'initial': [{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/41rNitnJpsL._US40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/41rNitnJpsL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX395_.jpg":[282,395],"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX500_.jpg":[357,500],"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX535_.jpg":[382,535],"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX575_.jpg":[410,575],"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX625_.jpg":[446,625],"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX675_.jpg":[481,675],"https://images-na.ssl-images-amazon.com/images/I/81Qs-sOznzL._UX695_.jpg":[496,695]},"variant":"MAIN","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/41Q1eJ1c1tL._US40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/41Q1eJ1c1tL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY395_.jpg":[395,249],"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY500_.jpg":[500,316],"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY535_.jpg":[535,338],"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY575_.jpg":[575,363],"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY625_.jpg":[625,395],"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY675_.jpg":[675,426],"https://images-na.ssl-images-amazon.com/images/I/71ZLo7ef-GL._UY695_.jpg":[695,439]},"variant":"FRNT","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/51%2BKgvmEndL._US40_.jpg","large":"https://images-na.ssl-images-amazon
.com/images/I/51%2BKgvmEndL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY395_.jpg":[395,301],"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY500_.jpg":[500,381],"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY535_.jpg":[535,408],"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY575_.jpg":[575,438],"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY625_.jpg":[625,477],"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY675_.jpg":[675,515],"https://images-na.ssl-images-amazon.com/images/I/71Fny8%2BI-mL._UY695_.jpg":[695,530]},"variant":"BACK","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/31rBxkzNDgL._US40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/31rBxkzNDgL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX395_.jpg":[146,395],"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX500_.jpg":[185,500],"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX535_.jpg":[198,535],"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX575_.jpg":[213,575],"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX625_.jpg":[231,625],"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX675_.jpg":[250,675],"https://images-na.ssl-images-amazon.com/images/I/71a7BKbdD3L._UX695_.jpg":[257,695]},"variant":"BOTT","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/41qECXntKAL._US40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/41qECXntKAL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UX395_.jpg":[139,395],"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UX500_.jpg":[177,500],"https://images-na.
ssl-images-amazon.com/images/I/8139cgDppVL._UX535_.jpg":[189,535],"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UX575_.jpg":[203,575],"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UX625_.jpg":[221,625],"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UX675_.jpg":[238,675],"https://images-na.ssl-images-amazon.com/images/I/8139cgDppVL._UX695_.jpg":[245,695]},"variant":"TOPP","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/41rT%2B2GI9ZL._US40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/41rT%2B2GI9ZL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX395_.jpg":[186,395],"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX500_.jpg":[235,500],"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX535_.jpg":[252,535],"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX575_.jpg":[271,575],"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX625_.jpg":[294,625],"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX675_.jpg":[318,675],"https://images-na.ssl-images-amazon.com/images/I/81a3uUSxI%2BL._UX695_.jpg":[327,695]},"variant":"RGHT","lowRes":null},{"hiRes":"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UL1500_.jpg","thumb":"https://images-na.ssl-images-amazon.com/images/I/419Wv4M%2B-bL._US40_.jpg","large":"https://images-na.ssl-images-amazon.com/images/I/419Wv4M%2B-bL.jpg","main":{"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX395_.jpg":[255,395],"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX500_.jpg":[322,500],"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX535_.jpg":[345,535],"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX575_.jpg":[371,575],"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX625_
.jpg":[403,625],"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX675_.jpg":[435,675],"https://images-na.ssl-images-amazon.com/images/I/815uXTfk02L._UX695_.jpg":[448,695]},"variant":"PAIR","lowRes":null}]},
'colorToAsin': {'initial': {}},
'holderRatio': 1.2,
'holderMaxHeight': 700,
'heroImage': {'initial': []},
'heroVideo': {'initial': []},
'spin360ColorData': {'initial': {}},
'spin360ColorEnabled': {'initial': 0},
'spin360ConfigEnabled': false,
'spin360LazyLoadEnabled': false,
'playVideoInImmersiveView':'false',
'tabbedImmersiveViewTreatment':'C',
'totalVideoCount':'0',
'videoIngressATFSlateThumbURL':'',
'mediaTypeCount':'0',
'atfEnhancedHoverOverlay' : true,
'winningAsin': 'B072596K2C',
'weblabs' : {},
'aibExp3Layout' : 1,
'aibRuleName' : 'frank-powered',
'acEnabled' : false
};
A.trigger('P.AboveTheFold'); // trigger ATF event.
return data;
});"""
m = re.search(r'^var data = ({.*};)', s, re.S | re.M)
data = m.groups()[0]
jsonObj = json.loads(data[:-1].replace("'", '"')) # remove the last semicolon and replace single quotes!
for img in jsonObj['colorImages']['initial']:
    print (img['large'])
Prints:
https://images-na.ssl-images-amazon.com/images/I/41rNitnJpsL.jpg
https://images-na.ssl-images-amazon.com/images/I/41Q1eJ1c1tL.jpg
https://images-na.ssl-images-amazon.com/images/I/51%2BKgvmEndL.jpg
https://images-na.ssl-images-amazon.com/images/I/31rBxkzNDgL.jpg
https://images-na.ssl-images-amazon.com/images/I/41qECXntKAL.jpg
https://images-na.ssl-images-amazon.com/images/I/41rT%2B2GI9ZL.jpg
https://images-na.ssl-images-amazon.com/images/I/419Wv4M%2B-bL.jpg

tinydb: how to update a document with a condition

Hi, I would like to update some documents that match a query: for each document, I would like to update the field 'parent_id' if and only if the document has an ID greater than, e.g., 6.
for result in results:
    db.update(set('parent_id', current_element_id),
              result.get('id') > current_element_id)
error:
Traceback (most recent call last):
File "debug.py", line 569, in <module>
convertxml=parse(xmlfile, force_list=('interface',))
File "debug.py", line 537, in parse
parser.Parse(xml_input, True)
File "..\Modules\pyexpat.c", line 468, in EndElement
File "debug.py", line 411, in endElement
db.update(set('parent_id', current_element_id), result.get('id') > current_element_id )
File "C:\ProgramData\Miniconda3\lib\site-packages\tinydb\database.py", line 477, in update
cond, doc_ids
File "C:\ProgramData\Miniconda3\lib\site-packages\tinydb\database.py", line 319, in process_elements
if cond(data[doc_id]):
TypeError: 'bool' object is not callable
Example of a document that should be updated:
...,
{'URI': 'http://www.john-doe/',
'abbr': 'IDD',
'affiliation': 'USA',
'closed': False,
'created': '2018-06-01 22:49:02.927347',
'element': 'distrbtr',
'id': 7,
'parent_id': None
},...
In the tinydb documentation I see that I can use set. If I don't use set, db.update(dict) will update all the documents, which I don't want.
According to the docs, using write_back to replace part of a document is better:
>>> docs = db.search(User.name == 'John')
[{name: 'John', age: 12}, {name: 'John', age: 44}]
>>> for doc in docs:
... doc['name'] = 'Jane'
>>> db.write_back(docs) # Will update the documents we retrieved
>>> docs = db.search(User.name == 'John')
[]
>>> docs = db.search(User.name == 'Jane')
[{name: 'Jane', age: 12}, {name: 'Jane', age: 44}]
Applying it to my situation:
for result in results:
    if result['parent_id'] != None:
        result['parent_id'] = current_element_id
db.write_back(results)
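For completeness: the original TypeError happens because `result.get('id') > current_element_id` is evaluated once, to a plain bool, before `db.update()` is called, while tinydb expects a condition it can call per document (`cond(data[doc_id])` in the traceback). A pure-Python sketch of that mechanism (no tinydb needed; `rows` and `process_elements` are my own toy stand-ins):

```python
rows = [{'id': 5, 'parent_id': None}, {'id': 7, 'parent_id': None}]

def process_elements(cond, data):
    """Mimics tinydb's internal loop: the condition is called per document."""
    return [doc for doc in data if cond(doc)]

# A callable condition works:
print(process_elements(lambda doc: doc['id'] > 6, rows))
# [{'id': 7, 'parent_id': None}]

# A pre-evaluated boolean fails exactly like the traceback:
try:
    process_elements(rows[0]['id'] > 6, rows)
except TypeError as e:
    print(e)  # 'bool' object is not callable
```

With real tinydb, the equivalent fix would presumably be a query condition, e.g. `db.update(set('parent_id', current_element_id), Query().id > 6)`; `Query` is from the standard tinydb API, not from the question.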

Elastic search import error using python

Here is the json variable
jsonout = [{"city": "Springfield", "id": 1, "name": "Moes Tavern"}, {"city": "Springfield", "id": 2, "name": "Springfield Power Plant"}, {"city": "Fountain Lakes", "id": 3, "name": "Kath and Kim Pty Ltd"}]
I am using the following command to import the json variable:
es.bulk((es.index_op(doc, id=doc('id')) for doc in jsonout), index='dbmysql', doc_type='person')
The following is the error
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-14-10faf5c5bb89> in <module>()
1 docs = [{'id': 2, 'name': 'Jessica Coder', 'age': 32, 'title': 'Programmer'}, {'id': 3, 'name': 'Freddy Tester', 'age': 29, 'title': 'Office Assistant'}]
----> 2 es.bulk((es.index_op(doc, id=doc('id')) for doc in jsonout), index='dbmysql', doc_type='person')
d:\nvk\USER\Anaconda2\lib\site-packages\pyelasticsearch\client.pyc in decorate(*args, **kwargs)
91 elif k in convertible_args:
92 query_params[k] = kwargs.pop(k)
---> 93 return func(*args, query_params=query_params, **kwargs)
94 return decorate
95 return decorator
d:\nvk\USER\Anaconda2\lib\site-packages\pyelasticsearch\client.pyc in bulk(self, actions, index, doc_type, query_params)
445 response = self.send_request('POST',
446 [index, doc_type, '_bulk'],
--> 447 body='\n'.join(actions) + '\n',
448 query_params=query_params)
449
<ipython-input-14-10faf5c5bb89> in <genexpr>((doc,))
1 docs = [{'id': 2, 'name': 'Jessica Coder', 'age': 32, 'title': 'Programmer'}, {'id': 3, 'name': 'Freddy Tester', 'age': 29, 'title': 'Office Assistant'}]
----> 2 es.bulk((es.index_op(doc, id=doc('id')) for doc in jsonout), index='dbmysql', doc_type='person')
TypeError: 'str' object is not callable
doc is most likely the string that is not callable; a variable named jsonout doesn't sound like it should contain functions.
You are missing something in your question. In the error example you have this code:
1 docs = [{'id': 2, 'name': 'Jessica Coder', ...}, {...}]
2 es.bulk((es.index_op(doc, id=doc('id')) for doc in jsonout), index='dbmysql', doc_type='person')
You have the docs variable on line 1, but jsonout on line 2.
And if you put the docs variable instead of jsonout into the second line, you should get an error like 'dict' object is not callable, because you have doc('id') (instead of doc['id']) and doc is a dictionary.
So I suspect that something is also wrong with your actual jsonout value: it is probably a list of strings instead of a list of dictionaries.
Finally found the solution: we can use json.loads to convert the str object into Python objects:
json.loads(jsonout)
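Both fixes can be seen together in a small sketch (`jsonout` here is a shortened, string-typed version of the question's data): parse the string with `json.loads`, then index with square brackets (`doc['id']`), not call syntax (`doc('id')`):

```python
import json

jsonout = '[{"city": "Springfield", "id": 1, "name": "Moes Tavern"}]'

docs = json.loads(jsonout)           # str -> list of dicts
ids = [doc['id'] for doc in docs]    # subscription: doc['id'], not doc('id')
print(ids)                           # [1]

try:
    [doc('id') for doc in docs]      # call syntax on a dict
except TypeError as e:
    print(e)                         # 'dict' object is not callable
```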

python re extract items within curly brackets

I have a large dataset in my SQL database, with rows such as:
("Successfully confirmed payment - {'PAYMENTINFO_0_TRANSACTIONTYPE': ['expresscheckout'], 'ACK': ['Success'], 'PAYMENTINFO_0_PAYMENTTYPE': ['instant'], 'PAYMENTINFO_0_RECEIPTID': ['1037-5147-8706-9322'], 'PAYMENTINFO_0_REASONCODE': ['None'], 'SHIPPINGOPTIONISDEFAULT': ['false'], 'INSURANCEOPTIONSELECTED': ['false'], 'CORRELATIONID': ['1917b2c0e5a51'], 'PAYMENTINFO_0_TAXAMT': ['0.00'], 'PAYMENTINFO_0_TRANSACTIONID': ['3U4531424V959583R'], 'PAYMENTINFO_0_ACK': ['Success'], 'PAYMENTINFO_0_PENDINGREASON': ['authorization'], 'PAYMENTINFO_0_AMT': ['245.40'], 'PAYMENTINFO_0_PROTECTIONELIGIBILITY': ['Eligible'], 'PAYMENTINFO_0_ERRORCODE': ['0'], 'TOKEN': ['EC-82295469MY6979044'], 'VERSION': ['95.0'], 'SUCCESSPAGEREDIRECTREQUESTED': ['true'], 'BUILD': ['7507921'], 'PAYMENTINFO_0_CURRENCYCODE': ['GBP'], 'TIMESTAMP': ['2013-08-29T09:15:59Z'], 'PAYMENTINFO_0_SECUREMERCHANTACCOUNTID': ['XFQALBN3EBE8S'], 'PAYMENTINFO_0_PROTECTIONELIGIBILITYTYPE': ['ItemNotReceivedEligible,UnauthorizedPaymentEligible'], 'PAYMENTINFO_0_ORDERTIME': ['2013-08-29T09:15:59Z'], 'PAYMENTINFO_0_PAYMENTSTATUS': ['Pending']}", 1L, datetime.datetime(2013, 8, 29, 11, 15, 59))
I use the following regex to pull the data from the first item in the list that is within curly brackets:
paypal_meta_re = re.compile(r"""\{(.*)\}""").findall
This works as expected, but when I try to remove the square brackets from the dictionary values, I get an error.
Here is my code:
paypal_meta = get_paypal(order_id)
paypal_msg_re = paypal_meta_re(paypal_meta[0])
print type(paypal_msg_re), len(paypal_msg_re)
paypal_str = ''.join(map(str, paypal_msg_re))
print paypal_str, type(paypal_str)
paypal = ast.literal_eval(paypal_str)
paypal_dict = {}
for k, v in paypal.items():
    paypal_dict[k] = str(v[0])
if paypal_dict:
    namespace['payment_gateway'] = {'paypal': paypal_dict}
and here is the traceback:
Traceback (most recent call last):
File "users.py", line 383, in <module>
orders = get_orders(user_id, mongo_user_id, address_book_list)
File "users.py", line 290, in get_orders
paypal = ast.literal_eval(paypal_str)
File "/usr/local/Cellar/python/2.7.2/lib/python2.7/ast.py", line 49, in literal_eval
node_or_string = parse(node_or_string, mode='eval')
File "/usr/local/Cellar/python/2.7.2/lib/python2.7/ast.py", line 37, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
File "<unknown>", line 1
'PAYMENTINFO_0_TRANSACTIONTYPE': ['expresscheckout'], 'ACK': ['Success'], 'PAYMENTINFO_0_PAYMENTTYPE': ['instant'], 'PAYMENTINFO_0_RECEIPTID': ['2954-8480-1689-8177'], 'PAYMENTINFO_0_REASONCODE': ['None'], 'SHIPPINGOPTIONISDEFAULT': ['false'], 'INSURANCEOPTIONSELECTED': ['false'], 'CORRELATIONID': ['5f22a1dddd174'], 'PAYMENTINFO_0_TAXAMT': ['0.00'], 'PAYMENTINFO_0_TRANSACTIONID': ['36H74806W7716762Y'], 'PAYMENTINFO_0_ACK': ['Success'], 'PAYMENTINFO_0_PENDINGREASON': ['authorization'], 'PAYMENTINFO_0_AMT': ['86.76'], 'PAYMENTINFO_0_PROTECTIONELIGIBILITY': ['PartiallyEligible'], 'PAYMENTINFO_0_ERRORCODE': ['0'], 'TOKEN': ['EC-6B957889FK3149915'], 'VERSION': ['95.0'], 'SUCCESSPAGEREDIRECTREQUESTED': ['true'], 'BUILD': ['6680107'], 'PAYMENTINFO_0_CURRENCYCODE': ['GBP'], 'TIMESTAMP': ['2013-07-02T13:02:50Z'], 'PAYMENTINFO_0_SECUREMERCHANTACCOUNTID': ['XFQALBN3EBE8S'], 'PAYMENTINFO_0_PROTECTIONELIGIBILITYTYPE': ['ItemNotReceivedEligible'], 'PAYMENTINFO_0_ORDERTIME': ['2013-07-02T13:02:49Z'], 'PAYMENTINFO_0_PAYMENTSTATUS': ['Pending']
^
SyntaxError: invalid syntax
Whereas if I split the string, using
msg, paypal_msg = paypal_meta[0].split(' - ')
paypal = ast.literal_eval(paypal_msg)
paypal_dict = {}
for k, v in paypal.items():
    paypal_dict[k] = str(v[0])
if paypal_dict:
    namespace['payment_gateway'] = {'paypal': paypal_dict}
insert = orders_dbs.save(namespace)
return insert
this works, but I can't use it, because some of the records don't split cleanly, so it's not reliable.
Basically, I want to take the items in the curly brackets and remove the square brackets from the values and then create a new dictionary from that.
You need to include the curly braces in the capture group; your code omits them:
r"""({.*})"""
Note that the parentheses are now around the {...}.
Alternatively, if there is always a message and one dash before the dictionary, you can use str.partition() to split that off:
paypal_msg = paypal_meta[0].partition(' - ')[-1]
or limit your splitting with str.split() to just once:
paypal_msg = paypal_meta[0].split(' - ', 1)[-1]
Also, try to avoid putting Python structures like that into the database; store JSON in a separate column instead of a string dump of the object.
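Putting the answer's pieces together as a sketch (the sample string is a shortened version of the question's row): capture the braces inside the group, parse with `ast.literal_eval`, then unwrap the one-element lists:

```python
import ast
import re

record = ("Successfully confirmed payment - "
          "{'ACK': ['Success'], 'PAYMENTINFO_0_AMT': ['245.40']}")

match = re.search(r"({.*})", record)        # braces inside the capture group
paypal = ast.literal_eval(match.group(1))   # now a valid dict literal

# unwrap the one-element list values
paypal_dict = {k: str(v[0]) for k, v in paypal.items()}
print(paypal_dict)   # {'ACK': 'Success', 'PAYMENTINFO_0_AMT': '245.40'}
```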

Categories

Resources