Bitcoin Core on macOS and bitcoinlib socket error - Python

I have a problem with my studies.
When I run this Python file that uses bitcoinlib:
from bitcoin.rpc import RawProxy
p = RawProxy()
info = p.getblockchaininfo()
print(info['blocks'])
But it shows me this error:
>> python3 rpc_example.py
Traceback (most recent call last):
File "rpc_example.py", line 5, in <module>
info = p.getblockchaininfo()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/bitcoin/rpc.py", line 315, in <lambda>
f = lambda *args: self._call(name, *args)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/bitcoin/rpc.py", line 231, in _call
self.__conn.request('POST', self.__url.path, postdata, headers)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 1026, in _send_output
self.send(msg)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 964, in send
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/http/client.py", line 936, in connect
(self.host,self.port), self.timeout, self.source_address)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/socket.py", line 724, in create_connection
raise err
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/socket.py", line 713, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 61] Connection refused
This is my bitcoin.conf file:
alertnotify=myemailscript.sh "Alert: %s"
server=1
rpcuser=bitcoinrpc
rpcpassword=any_long_random_password
txindex=1
testnet=1
daemon=1
When I start bitcoind with -testnet, I can execute RPC calls like getblockchaininfo or getblockhash from the command line:
bitcoin-cli getblockchaininfo
{
  "chain": "test",
  "blocks": 1863439,
  "headers": 1863439,
  "bestblockhash": "00000000133f751916630063f86f5b27cd6b5871ce70a5fcba7112a111941706",
  "difficulty": 1,
  "mediantime": 1602990284,
  "verificationprogress": 0.999998381271931,
  "initialblockdownload": false,
  "chainwork": "0000000000000000000000000000000000000000000001d9736dcfa64f3ea8bf",
  "size_on_disk": 28355982038,
  "pruned": false,
  "softforks": {
    "bip34": {
      "type": "buried",
      "active": true,
      "height": 21111
    },
    "bip66": {
      "type": "buried",
      "active": true,
      "height": 330776
    },
    "bip65": {
      "type": "buried",
      "active": true,
      "height": 581885
    },
    "csv": {
      "type": "buried",
      "active": true,
      "height": 770112
    },
    "segwit": {
      "type": "buried",
      "active": true,
      "height": 834624
    }
  },
  "warnings": "Warning: unknown new rules activated (versionbit 28)"
}
But I can't do the same from Python...
Please help me!

I solved it!
I had not checked the testnet settings in my bitcoin.conf.
I added these lines to my bitcoin.conf:
alertnotify=myemailscript.sh "Alert: %s"
server=1
rpcuser=bitcoinrpc
rpcpassword=any_long_random_password
txindex=1
testnet=1
daemon=1
[testnet]
rpcallowip=*
rpcport=18332
The two lines below [testnet] were required.
If you run into the same problem, add these lines! Thanks.
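For what it's worth, RawProxy is just doing JSON-RPC over HTTP against the node's RPC port (18332 on testnet, 8332 on mainnet). A minimal sketch of the payload it sends; the helper name and defaults here are my own, not part of python-bitcoinlib:

```python
import json

# Sketch of the JSON-RPC payload RawProxy POSTs to bitcoind.
# build_rpc_request is a hypothetical helper, not a bitcoinlib API;
# 18332 is the default testnet RPC port (mainnet uses 8332).
def build_rpc_request(method, params=None, rpc_port=18332):
    """Return the URL and JSON body for a bitcoind RPC call."""
    body = json.dumps({
        "jsonrpc": "1.0",
        "id": "example",
        "method": method,
        "params": params or [],
    })
    return "http://127.0.0.1:%d" % rpc_port, body

url, body = build_rpc_request("getblockchaininfo")
print(url)  # http://127.0.0.1:18332
```

If the node isn't listening on that port (e.g. because the testnet RPC settings are missing, as above), the TCP connect fails and you get exactly the ConnectionRefusedError shown.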


pytest does not find tests when structlog is configured

I have a Python project where I use pytest for my unit tests.
Normally if I run the following command from my test folder:
pytest --collect-only
I will get all my tests:
...
40 tests collected
...
Now let's say I define a new module structlogconf.py (based on this example):
import logging
import logging.config

import structlog


def configure() -> None:
    timestamper = structlog.processors.TimeStamper(fmt="%Y-%m-%d %H:%M:%S")
    pre_chain = [
        # Add the log level and a timestamp to the event_dict if the log entry
        # is not from structlog.
        structlog.stdlib.add_log_level,
        timestamper,
    ]
    logging.config.dictConfig(
        {
            "version": 1,
            "disable_existing_loggers": False,
            "formatters": {
                "plain": {
                    "()": structlog.stdlib.ProcessorFormatter,
                    "processor": structlog.dev.ConsoleRenderer(colors=False),
                    "foreign_pre_chain": pre_chain,
                },
                "colored": {
                    "()": structlog.stdlib.ProcessorFormatter,
                    "processor": structlog.dev.ConsoleRenderer(colors=True),
                    "foreign_pre_chain": pre_chain,
                },
            },
            "handlers": {
                "default": {
                    "level": "DEBUG",
                    "class": "logging.StreamHandler",
                    "formatter": "colored",
                },
                "file": {
                    "level": "DEBUG",
                    "class": "logging.handlers.WatchedFileHandler",
                    "filename": "test.log",
                    "formatter": "plain",
                },
            },
            "loggers": {
                "": {
                    "handlers": ["default", "file"],
                    "level": "DEBUG",
                    "propagate": True,
                },
            },
        }
    )
    structlog.configure(
        processors=[
            structlog.stdlib.add_log_level,
            structlog.stdlib.PositionalArgumentsFormatter(),
            timestamper,
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
        ],
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.BoundLogger,
        cache_logger_on_first_use=True,
    )
If I now run the pytest collect command again, pytest will not discover any test suite that directly or indirectly imports the configure() function from structlogconf.py. So I now get something like:
...
5 tests collected
...
Does anyone here know how to use a structlog configuration in a way that won't affect pytest's test discovery?
FYI: here is the stack trace when running the collect in the problematic scenario:
...
my remaining 5 tests, which do not indirectly import my configure() function, show up here
...
================================================================== 5 tests collected in 1.03s ===================================================================
Traceback (most recent call last):
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\main.py", line 269, in wrap_session
session.exitstatus = doit(config, session) or 0
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\main.py", line 322, in _main
config.hook.pytest_collection(session=session)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_callers.py", line 60, in _multicall
return outcome.get_result()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\main.py", line 333, in pytest_collection
session.perform_collect()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\main.py", line 634, in perform_collect
self.items.extend(self.genitems(node))
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\main.py", line 811, in genitems
yield from self.genitems(subnode)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\main.py", line 808, in genitems
rep = collect_one_node(node)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\runner.py", line 458, in collect_one_node
rep: CollectReport = ihook.pytest_make_collect_report(collector=collector)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_callers.py", line 55, in _multicall
gen.send(outcome)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 796, in pytest_make_collect_report
out, err = self.read_global_capture()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 718, in read_global_capture
return self._global_capturing.readouterr()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 622, in readouterr
err = self.err.snap()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 479, in snap
self.tmpfile.seek(0)
ValueError: I/O operation on closed file.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\main.py", line 289, in wrap_session
config.notify_exception(excinfo, config.option)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\config\__init__.py", line 1037, in notify_exception
res = self.hook.pytest_internalerror(excrepr=excrepr, excinfo=excinfo)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_callers.py", line 60, in _multicall
return outcome.get_result()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 826, in pytest_internalerror
self.stop_global_capturing()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 693, in stop_global_capturing
self._global_capturing.pop_outerr_to_orig()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 573, in pop_outerr_to_orig
out, err = self.readouterr()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 622, in readouterr
err = self.err.snap()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 479, in snap
self.tmpfile.seek(0)
ValueError: I/O operation on closed file.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\Scripts\pytest.exe\__main__.py", line 7, in <module>
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\config\__init__.py", line 185, in console_main
code = main()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\config\__init__.py", line 162, in main
ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_hooks.py", line 265, in __call__
return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_manager.py", line 80, in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_callers.py", line 60, in _multicall
return outcome.get_result()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_result.py", line 60, in get_result
raise ex[1].with_traceback(ex[2])
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\pluggy\_callers.py", line 39, in _multicall
res = hook_impl.function(*args)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\main.py", line 316, in pytest_cmdline_main
return wrap_session(config, _main)
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\main.py", line 311, in wrap_session
config._ensure_unconfigure()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\config\__init__.py", line 991, in _ensure_unconfigure
fin()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 693, in stop_global_capturing
self._global_capturing.pop_outerr_to_orig()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 573, in pop_outerr_to_orig
out, err = self.readouterr()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 622, in readouterr
err = self.err.snap()
File "C:\Users\MY_USER\.conda\envs\MY_ENV\lib\site-packages\_pytest\capture.py", line 479, in snap
self.tmpfile.seek(0)
ValueError: I/O operation on closed file.
This isn't the best solution, but after commenting out various sections of the configuration, I found that the issue originated from the "colored" section, which is not critical for usability:
import logging
import logging.config

import structlog


def configure() -> None:
    timestamper = structlog.processors.TimeStamper(fmt="%Y-%m-%d %H:%M:%S")
    pre_chain = [
        # Add the log level and a timestamp to the event_dict if the log entry
        # is not from structlog.
        structlog.stdlib.add_log_level,
        timestamper,
    ]
    logging.config.dictConfig(
        {
            "version": 1,
            "disable_existing_loggers": False,
            "formatters": {
                "plain": {
                    "()": structlog.stdlib.ProcessorFormatter,
                    "processor": structlog.dev.ConsoleRenderer(colors=False),
                    "foreign_pre_chain": pre_chain,
                },
                # "colored": {
                #     "()": structlog.stdlib.ProcessorFormatter,
                #     "processor": structlog.dev.ConsoleRenderer(colors=True),
                #     "foreign_pre_chain": pre_chain,
                # },
            },
            "handlers": {
                "default": {
                    "level": "ERROR",
                    "class": "logging.StreamHandler",
                    "formatter": "plain",  # <--- Changed to "plain"
                },
                "file": {
                    "level": "ERROR",
                    "class": "logging.handlers.WatchedFileHandler",
                    "filename": "test.log",
                    "formatter": "plain",
                },
            },
            "loggers": {
                "": {
                    "handlers": ["default", "file"],
                    "level": "ERROR",
                    "propagate": True,
                },
            },
        }
    )
    structlog.configure(
        processors=[
            structlog.stdlib.add_log_level,
            structlog.stdlib.PositionalArgumentsFormatter(),
            timestamper,
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
        ],
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.BoundLogger,
        cache_logger_on_first_use=True,
    )
I must admit I would prefer a solution that allows me to keep the "colored" section...
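One possible way to keep the "colored" formatter for normal runs while avoiding it under pytest (a heuristic of my own, not an official structlog or pytest fix) is to choose the console formatter based on whether pytest has been imported into the process:

```python
import sys

# Pick the formatter name used for the console handler in the
# dictConfig above. Checking sys.modules for "pytest" is a common
# heuristic for detecting a test run; it is an assumption, not a
# documented pytest API.
def console_formatter(running_under_pytest: bool) -> str:
    return "plain" if running_under_pytest else "colored"

formatter = console_formatter("pytest" in sys.modules)
# then in dictConfig: "handlers": {"default": {..., "formatter": formatter}}
```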

Problem in parsing data of a JSON file with Python

This is my ciao.json, from which I need to parse some data:
{
"opcua": [
{
"ip": ciao,
"port": 4840,
"uri": "http://examples.freeopcua.github.io"}
"objects":[
{
object_name: ListaDeiSensori,
variables:[
"Temperature_Sensor",
"Water_Sensor"]
}]
}]
}
To parse the fields of the JSON file I am using this Python script:
import json
with open('ciao.json') as file:
    data = json.load(file)
print(data)
But I get this error:
Traceback (most recent call last):
File "getdata.py", line 11, in <module>
data = json.load(file)
File "/usr/lib/python3.6/json/__init__.py", line 299, in load
parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
File "/usr/lib/python3.6/json/__init__.py", line 354, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.6/json/decoder.py", line 339, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.6/json/decoder.py", line 357, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 4 column 28 (char 72)
Can anyone help me solve this problem?
I do not know how to fix it...
There are some errors in the JSON file.
Here is the updated JSON; I changed the unquoted values to strings:
{
    "opcua": [
        {
            "ip": "ciao",
            "port": 4840,
            "uri": "http://examples.freeopcua.github.io"
        },
        {
            "objects": [
                {
                    "object_name": "ListaDeiSensori",
                    "variables": [
                        "Temperature_Sensor",
                        "Water_Sensor"
                    ]
                }
            ]
        }
    ]
}
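Once the file is valid JSON, json.load works and the fields can be reached by indexing. A quick sketch using the corrected structure inlined as a string so it is self-contained (reading from 'ciao.json' works the same way):

```python
import json

# The corrected ciao.json content, inlined here; with the real file,
# use: data = json.load(open('ciao.json'))
raw = '''
{
    "opcua": [
        {"ip": "ciao", "port": 4840, "uri": "http://examples.freeopcua.github.io"},
        {"objects": [{"object_name": "ListaDeiSensori",
                      "variables": ["Temperature_Sensor", "Water_Sensor"]}]}
    ]
}
'''
data = json.loads(raw)
print(data["opcua"][0]["port"])                     # 4840
print(data["opcua"][1]["objects"][0]["variables"])  # ['Temperature_Sensor', 'Water_Sensor']
```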

ForwardReference NameError when loading a recursive dict in dataclass

I'm using marshmallow-dataclass to load a JSON document that represents a sequence of rules, where each rule is represented by a LogicalGroup that applies a logical operator to its child expressions; an expression can itself be a LogicalGroup.
The input dict follows this structure:
import marshmallow_dataclass
from dataclasses import field
from api_handler import BaseSchema
from typing import Sequence, Union, Literal, Type, List, ForwardRef, TypeVar, Generic
filter_input = {
    "rules": [{
        "groupOperator": "and",
        "expressions": [
            {"field": "xxxxx", "operator": "eq", "value": 'level1'},
            {"field": "xxxxx", "operator": "eq", "value": 'm'},
            {"field": "xxxxx", "operator": "eq", "value": "test"},
            {
                "groupOperator": "or",
                "expressions": [
                    {"field": "xxxx", "operator": "eq", "value": 'level2'},
                    {"field": "xxxx", "operator": "eq", "value": 'm'},
                    {"field": "xxxx", "operator": "eq", "value": "test"}
                ]
            }
        ]
    }]
}
The dataclasses I'm using for this purpose are the following:
@marshmallow_dataclass.dataclass(base_schema=BaseSchema)
class Expression:
    field: str
    operator: str
    value: str


@marshmallow_dataclass.dataclass(base_schema=BaseSchema)
class LogicalGroup:
    group_operator: str
    expressions: List[Union['LogicalGroup', Expression]] = field(default_factory=list)


@marshmallow_dataclass.dataclass(base_schema=BaseSchema)
class Filter:
    rules: List[LogicalGroup] = field(default_factory=list)
The problem is that when I try to load the dict using the Filter dataclass, I get the following error:
filt = Filter.Schema().load(filter_input)
Traceback (most recent call last):
File "/home/adam/billing/billing/filter/filter.py", line 96, in <module>
filt = Filter.Schema().load(filter_input)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow_dataclass/__init__.py", line 628, in load
all_loaded = super().load(data, many=many, **kwargs)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/schema.py", line 725, in load
return self._do_load(
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/schema.py", line 859, in _do_load
result = self._deserialize(
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/schema.py", line 667, in _deserialize
value = self._call_and_store(
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/schema.py", line 496, in _call_and_store
value = getter_func(data)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/schema.py", line 664, in <lambda>
getter = lambda val: field_obj.deserialize(
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/fields.py", line 354, in deserialize
output = self._deserialize(value, attr, data, **kwargs)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/fields.py", line 726, in _deserialize
result.append(self.inner.deserialize(each, **kwargs))
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/fields.py", line 354, in deserialize
output = self._deserialize(value, attr, data, **kwargs)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/fields.py", line 609, in _deserialize
return self._load(value, data, partial=partial)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/fields.py", line 592, in _load
valid_data = self.schema.load(value, unknown=self.unknown, partial=partial)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow_dataclass/__init__.py", line 628, in load
all_loaded = super().load(data, many=many, **kwargs)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/schema.py", line 725, in load
return self._do_load(
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/schema.py", line 859, in _do_load
result = self._deserialize(
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/schema.py", line 667, in _deserialize
value = self._call_and_store(
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/schema.py", line 496, in _call_and_store
value = getter_func(data)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/schema.py", line 664, in <lambda>
getter = lambda val: field_obj.deserialize(
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/fields.py", line 354, in deserialize
output = self._deserialize(value, attr, data, **kwargs)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/fields.py", line 726, in _deserialize
result.append(self.inner.deserialize(each, **kwargs))
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow/fields.py", line 354, in deserialize
output = self._deserialize(value, attr, data, **kwargs)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/marshmallow_dataclass/union_field.py", line 56, in _deserialize
typeguard.check_type(attr or "anonymous", result, typ)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/typeguard/__init__.py", line 655, in check_type
expected_type = resolve_forwardref(expected_type, memo)
File "/home/adam/thanos-envv/lib/python3.9/site-packages/typeguard/__init__.py", line 198, in resolve_forwardref
return evaluate_forwardref(maybe_ref, memo.globals, memo.locals, frozenset())
File "/usr/lib/python3.9/typing.py", line 533, in _evaluate
eval(self.__forward_code__, globalns, localns),
File "<string>", line 1, in <module>
NameError: name 'LogicalGroup' is not defined
I'm guessing the problem comes from declaring LogicalGroup as a forward reference inside the Union type hint, because when I use only
Union['LogicalGroup'] and modify my dict to be a nested dict of LogicalGroups without the Expressions, it works fine.
Does anyone have an idea of the source of the bug? Or maybe a suggestion for addressing this problem in another way?
Thanks in advance!
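The last frame of the traceback is a plain eval() of the string 'LogicalGroup' against namespaces that don't contain it. A minimal stand-alone illustration of that failure mode (not the marshmallow fix itself), using typing.get_type_hints:

```python
from typing import List, get_type_hints

# A self-referential annotation is stored as List[ForwardRef('Node')];
# resolving it eval()s the string 'Node' against whatever namespaces
# the resolver supplies.
class Node:
    children: List["Node"]

# Resolution succeeds when 'Node' is present in the supplied namespace...
hints = get_type_hints(Node, globalns={"Node": Node})
print(hints)  # the forward reference resolves to the Node class

# ...and raises NameError when it is not -- the same failure as in the
# traceback above, where typeguard's namespaces lack 'LogicalGroup'.
try:
    get_type_hints(Node, globalns={})
except NameError as exc:
    print(exc)  # name 'Node' is not defined
```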

Strange behaviour solidity contract returning huge uint array

I use web3.py to read out a large amount of data, and I cannot see what goes wrong here. Both getValue() and getValue2() can be called from remix.ethereum.org without an error, but when I use the Python code I posted, I can only read getValue2(); the function getValue() throws an error, and it looks like it hits a gas limit. But since the function throws no errors when called from Remix, I really don't see why there should be such a gas error:
Solidity Contract:
pragma solidity ^0.4.19;

contract TestContract {
    uint[1000000] val;
    uint[200000] val2;

    function TestContract() {
    }

    function getValue() external view returns (uint[1000000]) {
        return val;
    }

    function getValue2() external view returns (uint[200000]) {
        return val2;
    }
}
Python code:
import json
import web3
from web3 import Web3, HTTPProvider
from web3.contract import ConciseContract
# web3
w3 = Web3(HTTPProvider('https://ropsten.infura.io'))
# Instantiate and deploy contract
contractAddress = '0x37c587c2174bd9248f203947d7272bf1b8f91fa9'
with open('testfactory.json', 'r') as abi_definition:
    abi = json.load(abi_definition)
contract_instance = w3.eth.contract(contractAddress, abi=abi, ContractFactoryClass=ConciseContract)
arr2=contract_instance.getValue2() #works fine
print(len(arr2))
arr=contract_instance.getValue() #throws an error, posted below
print(len(arr))
testfactory.json:
[
    {
        "constant": true,
        "inputs": [],
        "name": "getValue2",
        "outputs": [
            {
                "name": "",
                "type": "uint256[200000]"
            }
        ],
        "payable": false,
        "stateMutability": "view",
        "type": "function"
    },
    {
        "constant": true,
        "inputs": [],
        "name": "getValue",
        "outputs": [
            {
                "name": "",
                "type": "uint256[1000000]"
            }
        ],
        "payable": false,
        "stateMutability": "view",
        "type": "function"
    },
    {
        "inputs": [],
        "payable": false,
        "stateMutability": "nonpayable",
        "type": "constructor"
    }
]
Error in Python:
Traceback (most recent call last):
File "C:\Python36\lib\site-packages\web3\contract.py", line 844, in call_contract_function
output_data = decode_abi(output_types, return_data)
File "C:\Python36\lib\site-packages\eth_abi\abi.py", line 109, in decode_abi
return decoder(stream)
File "C:\Python36\lib\site-packages\eth_abi\decoding.py", line 102, in __call__
return self.decode(stream)
File "C:\Python36\lib\site-packages\eth_utils\functional.py", line 22, in inner
return callback(fn(*args, **kwargs))
File "C:\Python36\lib\site-packages\eth_abi\decoding.py", line 140, in decode
yield decoder(stream)
File "C:\Python36\lib\site-packages\eth_abi\decoding.py", line 102, in __call__
return self.decode(stream)
File "C:\Python36\lib\site-packages\eth_utils\functional.py", line 22, in inner
return callback(fn(*args, **kwargs))
File "C:\Python36\lib\site-packages\eth_abi\decoding.py", line 198, in decode
yield cls.item_decoder(stream)
File "C:\Python36\lib\site-packages\eth_abi\decoding.py", line 102, in __call__
return self.decode(stream)
File "C:\Python36\lib\site-packages\eth_abi\decoding.py", line 165, in decode
raw_data = cls.read_data_from_stream(stream)
File "C:\Python36\lib\site-packages\eth_abi\decoding.py", line 247, in read_data_from_stream
len(data),
eth_abi.exceptions.InsufficientDataBytes: Tried to read 32 bytes. Only got 0 bytes
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:/Users/Sebi/PycharmProjects/web3/test.py", line 18, in <module>
arr=contract_instance.getValue()
File "C:\Python36\lib\site-packages\web3\contract.py", line 805, in __call__
return self.__prepared_function(**kwargs)(*args)
File "C:\Python36\lib\site-packages\web3\contract.py", line 866, in call_contract_function
raise_from(BadFunctionCallOutput(msg), e)
File "C:\Python36\lib\site-packages\web3\utils\exception_py3.py", line 2, in raise_from
raise my_exception from other_exception
web3.exceptions.BadFunctionCallOutput: Could not decode contract function call getValue return data 0x for output_types ['uint256[1000000]']
Any suggestions on what I could do? Is this a bug in web3.py v4.0?
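Note that the node answered the eth_call with empty return data (0x), which is what web3.py then fails to decode; a million-word return is simply more than the node will serve in one call. A common workaround is to read the array in slices. The contract-side getter getValueRange(start, count) below is hypothetical (it is not in the posted ABI and would have to be added), but the client-side chunking is just arithmetic:

```python
# chunk_ranges is a small helper of my own; getValueRange is a
# hypothetical contract function that would need to be added to
# TestContract before this pattern can work.
def chunk_ranges(total: int, chunk: int):
    """Yield (start, count) pairs covering [0, total)."""
    for start in range(0, total, chunk):
        yield start, min(chunk, total - start)

# values = []
# for start, count in chunk_ranges(1000000, 50000):
#     values.extend(contract_instance.getValueRange(start, count))

print(list(chunk_ranges(10, 4)))  # [(0, 4), (4, 4), (8, 2)]
```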

(InvalidChangeBatch) when calling the ChangeResourceRecordSets operation - using boto3 to update resource record

I am trying to update a number of CNAME records to A records hosted on Route 53 using boto3. Here is my function:
def change_resource_record(domain, zone_id, hosted_zone_id, balancer_id):
    print(domain, zone_id, hosted_zone_id, balancer_id)
    client.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={
            "Comment": "Automatic DNS update",
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": domain,
                        "Type": "A",
                        "AliasTarget": {
                            "HostedZoneId": hosted_zone_id,
                            "DNSName": balancer_id,
                            "EvaluateTargetHealth": False
                        }
                    }
                },
            ]
        }
    )
I get this error:
Traceback (most recent call last):
File "load_balancer.py", line 138, in <module>
get_balancer(domain)
File "load_balancer.py", line 135, in get_balancer
change_resource_record(domain, zone_id, hosted_zone_id, balancer_id)
File "load_balancer.py", line 116, in change_resource_record
"EvaluateTargetHealth": False
File "C:\Python36\lib\site-packages\botocore\client.py", line 312, in _api_call
return self._make_api_call(operation_name, kwargs)
File "C:\Python36\lib\site-packages\botocore\client.py", line 601, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.InvalidChangeBatch: An error occurred (InvalidChangeBatch) when calling the ChangeResourceRecordSets operation: RRSet of type A with DNS name suffix.domain.tld. is not permitted because a conflicting RRSet of type CNAME with the same DNS name already exists in zone domain.tld.
What is the correct way to update the record? Should I delete the entry and then re-create it?
Any advice is much appreciated.
I had a missing trailing period in the domain, so changing it to this solved my issue:
"ResourceRecordSet": {
    "Name": domain + '.',
    "Type": "A",
    "AliasTarget": {
        "HostedZoneId": hosted_zone_id,
        "DNSName": balancer_id,
        "EvaluateTargetHealth": False
    }
}
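The fix boils down to giving Route 53 the fully-qualified name with a trailing dot. A tiny helper (my own naming, not a boto3 API) makes that explicit and safe to apply even when the name already ends in a dot:

```python
# fqdn is a hypothetical convenience helper, not part of boto3.
def fqdn(domain: str) -> str:
    """Ensure the record name ends with exactly one trailing dot."""
    return domain.rstrip('.') + '.'

print(fqdn("suffix.domain.tld"))   # suffix.domain.tld.
print(fqdn("suffix.domain.tld."))  # suffix.domain.tld.
```

With this, the record set can be written as `"Name": fqdn(domain)` and the UPSERT is insensitive to whether the caller passed the dot.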
