Problems connecting to Brownie/Rinkeby - python

I'm pretty new to programming and maybe need more practice, but I've been having problems connecting to the Rinkeby testnet and can't seem to see the problem (Windows 10, PowerShell). I'm not sure if it's the .env file or how I set it up, so I'll post the code and the respective files with screenshots to see if someone could illuminate the problem. I copy-pasted the code and added some screenshots of the environment variables. Your help is very much appreciated. For some reason the export commands in the .env file seem as though they are not executing (third attachment).
Thank you
Error:
Brownie v1.18.1 - Python development framework for Ethereum
BrownieSimpleStorageProject is the active project.
Running 'scripts\deploy.py::main'...
File "C:\Users\jorge\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\brownie\_cli\run.py", line 51, in main
return_value, frame = run(
File "C:\Users\jorge\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\brownie\project\scripts.py", line 103, in run
return_value = f_locals[method_name](*args, **kwargs)
File ".\scripts\deploy.py", line 25, in main
deploy_simple_storage()
File ".\scripts\deploy.py", line 5, in deploy_simple_storage
account = get_account()
File ".\scripts\deploy.py", line 21, in get_account
return accounts.add(config["wallets"]["from_key"])
KeyError: 'wallets'
deploy.py:

from brownie import accounts, config, SimpleStorage, network


def deploy_simple_storage():
    account = get_account()
    simple_storage = SimpleStorage.deploy({"from": account})
    # Transaction
    # Call
    stored_value = simple_storage.retrieve()
    print(stored_value)
    transaction = simple_storage.store(15, {"from": account})
    transaction.wait(1)
    updated_value = simple_storage.retrieve()
    print(updated_value)


def get_account():
    if network.show_active() == "development":
        return accounts[0]
    else:
        return accounts.add(config["wallets"]["from_key"])


def main():
    deploy_simple_storage()
Here's the .env file:
export WEB3_INFURA_PROJECT_ID=xxx
export PRIVATE_KEY=xxxx
brownie-config.yaml file:

dotenv: .env
wallets:
  from_key: ${PRIVATE_KEY}
[Screenshots: the environment variables and the .env file exports]

Brownie's config reads from the brownie-config.yaml file in the project's root directory. Make sure the file is located in the root and that its name is spelled correctly in the project.
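For reference, a minimal sketch of the layout Brownie expects (standard file names; the key values are placeholders):

BrownieSimpleStorageProject/
- brownie-config.yaml
- .env
- contracts/
  - SimpleStorage.sol
- scripts/
  - deploy.py

If brownie-config.yaml is missing from the root or misnamed, config["wallets"] raises exactly the KeyError: 'wallets' shown above. Note also that Brownie loads the .env file itself through the dotenv: key; the export lines are parsed, not executed by the shell, which is why the third attachment looks as if they never run.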


Solcx compile source is throwing error - error occurred during execution

I am new to blockchain technology. I am trying to deploy a smart contract, but I always get the error below while compiling the .sol file. It was working, but it suddenly stopped, and I have not made any changes.
This is the error I am getting
Traceback (most recent call last):
File ".\main.py", line 68, in <module>
main()
File ".\main.py", line 57, in main
compiled_sol = compile_source_file('contract.sol')
File ".\main.py", line 20, in compile_source_file
return solcx.compile_source(source)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\solcx\main.py", line 130, in compile_source
allow_empty=allow_empty,
File "C:\Program Files (x86)\Python37-32\lib\site-packages\solcx\main.py", line 277, in _compile_combined_json
combined_json = _get_combined_json_outputs(solc_binary)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\solcx\main.py", line 242, in _get_combined_json_outputs
help_str = wrapper.solc_wrapper(solc_binary=solc_binary, help=True)[0].split("\n")
File "C:\Program Files (x86)\Python37-32\lib\site-packages\solcx\wrapper.py", line 163, in solc_wrapper
stderr_data=stderrdata,
solcx.exceptions.SolcError: An error occurred during execution
> command: `C:\Users\Nikesh\.solcx\solc-v0.8.11\solc.exe --help -`
> return code: `0`
> stdout:
solc, the Solidity commandline compiler.
This program comes with ABSOLUTELY NO WARRANTY. This is free software, and you
are welcome to redistribute it under certain conditions. See 'solc --license'
for details.
Usage: solc [options] [input_file...]
Compiles the given Solidity input files (or the standard input if none given or
"-" is used as a file name) and outputs the components specified in the options
at standard output or in files in the output directory, if specified.
Imports are automatically read from the filesystem, but it is also possible to
remap paths using the context:prefix=path syntax.
Example:
solc --bin -o /tmp/solcoutput dapp-bin=/usr/local/lib/dapp-bin contract.sol
General Information:
--help Show help message and exit.
--version Show version and exit.
--license Show licensing information and exit.
--input-file arg input file
Input Options:
--base-path path Use the given path as the root of the source tree
instead of the root of the filesystem.
--include-path path Make an additional source directory available to the
default import callback. Use this option if you want to
import contracts whose location is not fixed in relation
to your main source tree, e.g. third-party libraries
installed using a package manager. Can be used multiple
times. Can only be used if base path has a non-empty
value.
--allow-paths path(s)
Allow a given path for imports. A list of paths can be
supplied by separating them with a comma.
--ignore-missing Ignore missing files.
--error-recovery Enables additional parser error recovery.
I am using web3 and solcx
This is the contract.sol file
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.11;

contract StoreVar {
    uint8 public _myVar;
    event MyEvent(uint indexed _var);

    function setVar(uint8 _var) public {
        _myVar = _var;
        emit MyEvent(_var);
    }

    function getVar() public view returns (uint8) {
        return _myVar;
    }
}
And this is the main file
from web3 import Web3
from web3.middleware import geth_poa_middleware
# from solcx import compile_source, install_solc
import solcx

"""
pip install py-solc-x
pip install web3
"""

def compile_source_file(file_path):
    solcx.install_solc(version='latest')
    # install_solc(version='latest')
    # install_solc("0.6.0")
    # print(solcx.get_compilable_solc_versions())
    with open(file_path, 'r') as f:
        source = f.read()
    print(source)
    return solcx.compile_source(source)

def deploy_contract(w3, contract_interface):
    # unicorns = w3.eth.contract(address="0x589a1532Aasfgs4e38345b58C11CF4697Ea89A866", abi=contract_interface['abi'])
    # nonce = w3.eth.get_transaction_count('0x589a1532AAaE84e38345b58C11CF4697Eaasasd')
    # unicorn_txn = unicorns.functions.transfer(
    #     '0x589a1532AAaE84e38345b58C11CF4697asdfasd',
    #     1,
    # ).buildTransaction({
    #     'chainId': 1,
    #     'gas': 70000,
    #     'maxFeePerGas': w3.toWei('2', 'gwei'),
    #     'maxPriorityFeePerGas': w3.toWei('1', 'gwei'),
    #     'nonce': nonce,
    # })
    # print(unicorn_txn)
    tx_hash = w3.eth.contract(
        abi=contract_interface['abi'],
        bytecode=contract_interface['bin']).constructor().transact()
    address = w3.eth.get_transaction_receipt(tx_hash)['contractAddress']
    return address

def main():
    w3 = Web3(Web3.HTTPProvider("https://rinkeby.infura.io/v3/c0easgasgas2666cd56ef4e3"))
    w3.middleware_onion.inject(geth_poa_middleware, layer=0)
    # print(w3.eth.get_block('latest'))
    # print(w3.eth.get_balance('0x589a15asdfasdfab58C11CF4697Ea89A866'))
    address = '0x589a1532AAaE8asdfasdf8C11CF4697Ea89A866'
    abi = '[{"inputs":[{"internalType":"address","name":"account","type":"address"},{"internalType":"address","name":"minter_","type":"address"}'
    compiled_sol = compile_source_file('contract.sol')
    contract_id, contract_interface = compiled_sol.popitem()
    # print(contract_id)
    # print(contract_interface)
    address = deploy_contract(w3, contract_interface)
    print(f'Deployed {contract_id} to: {address}\n')

main()
Please help me resolve this.
Update:
solcx.compile_source has an optional output_values argument. If you don't provide it, py-solc-x will parse the solc --help output for it. In solc v0.8.9 and earlier, the return value for solc --help was 1, but that changed to 0 with v0.8.10, which breaks what py-solc-x is expecting.
I have a pull request in with py-solc-x to fix it. In the meantime, you can either use solc <=v0.8.9 or specify your own output_values.
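As a sketch of that second workaround, adapted from the compile_source_file above (which output values to request depends on what you need; 'abi' and 'bin' are what the deployment code here uses):

def compile_source_file(file_path):
    solcx.install_solc(version='latest')
    with open(file_path, 'r') as f:
        source = f.read()
    # Passing output_values explicitly skips the "solc --help" parsing
    # that breaks on solc >= 0.8.10.
    return solcx.compile_source(source, output_values=['abi', 'bin'])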
Original:
I'm not sure what the root of the problem is, but specifying a solc version <= 0.8.9 worked for me. In your example:
def compile_source_file(file_path):
    solcx.install_solc(version='0.8.9')
    solcx.set_solc_version('0.8.9')
    with open(file_path, 'r') as f:
        source = f.read()
    print(source)
    return solcx.compile_source(source)

Python erroneously adding file extension to path string

I'm currently struggling with the following problem:
My folder structure looks like:

master
- resources
  - customFile.fmu
- fileCallingFMU.py
While executing the fileCallingFMU.py I pass a path string like
path = "./resources/customFile.fmu"
My script contains a super() call to which I pass the path variable, but every time I execute the script it trips with an exception:
Exception has occurred: FileNotFoundError
[Errno 2] No such file or directory: b'2021-11-16_./resources/customFile.fmu.txt'
File "[projectFolder]\fileCallingFMU.py", line 219, in __init__
super().__init__(path, config, log_level)
File "[projectFolder]\fileCallingFMU.py", line 86, in <module>
env = gym.make(env_name)
My pressing question is the following: why and how does Python manipulate the path variable, adding a date prefix and a .txt file extension?!
Hope anyone can enlighten me on this one...
EDIT
I'm trying to get the example of ModelicaGym running.
My fileCallingFMU.py contains the following code:
path = "./resources/customFile.fmu"
env_entry_point = 'cart_pole_env:JModelicaCSCartPoleEnv'

config = {
    'path': path,
    'm_cart': m_cart,
    'm_pole': m_pole,
    'theta_0': theta_0,
    'theta_dot_0': theta_dot_0,
    'time_step': time_step,
    'positive_reward': positive_reward,
    'negative_reward': negative_reward,
    'force': force,
    'log_level': log_level
}

from gym.envs.registration import register

env_name = env_name
register(
    id=env_name,
    entry_point=env_entry_point,
    kwargs=config
)
env = gym.make(env_name)
The full code for the entryPoint can be found here.
As jjramsey pointed out, the problem was buried within the ModelicaGym library.
The logger could not create a proper log file, because the model name was not correctly stored in the self.model variable.
The source of this error lay in the line
self.model_name = model_path.split(os.path.sep)[-1]
because os.path.sep is "\\" on Windows, so the split could not separate my path string
"./resources/customFile.fmu"
After changing it to
".\\resources\\customFile.fmu"
everything works as expected.
Thanks again!
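As an aside, a separator-agnostic sketch of that ModelicaGym line would use os.path.basename, which on Windows accepts both "/" and "\\", so the original forward-slash path would have worked unchanged:

import os

model_path = "./resources/customFile.fmu"
# os.path.basename splits on both "/" and "\\" on Windows, unlike a
# manual split on os.path.sep (which is "\\" there).
model_name = os.path.basename(model_path)  # "customFile.fmu"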

Cannot load azure.ml workspace from within a webservice entry_script! Where is the `/var/azureml-app/` located?

I am creating an azure-ml webservice. The following script shows the code for creating the webservice and deploying it locally.
from azureml.core.model import InferenceConfig
from azureml.core.environment import Environment
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()
model = Model(ws, 'textDNN-20News')
ws.write_config(file_name='config.json')

env = Environment(name="init-env")
python_packages = ['numpy', 'pandas']
for package in python_packages:
    env.python.conda_dependencies.add_pip_package(package)

dummy_inference_config = InferenceConfig(
    environment=env,
    source_directory="./source_dir",
    entry_script="./init_score.py",
)

from azureml.core.webservice import LocalWebservice

deployment_config = LocalWebservice.deploy_configuration(port=6789)
service = Model.deploy(
    ws,
    "myservice",
    [model],
    dummy_inference_config,
    deployment_config,
    overwrite=True,
)
service.wait_for_deployment(show_output=True)
As can be seen, the above code deploys the entry script "init_score.py" to my local machine. Within the entry script, I need to load the workspace again to connect to an Azure SQL database. I do it like the following:
from azureml.core import Dataset, Datastore
from azureml.data.datapath import DataPath
from azureml.core import Workspace

def init():
    pass

def run(data):
    try:
        ws = Workspace.from_config()
        # create tabular dataset from a SQL database in datastore
        datastore = Datastore.get(ws, 'sql_db_name')
        query = DataPath(datastore, 'SELECT * FROM my_table')
        tabular = Dataset.Tabular.from_sql_query(query, query_timeout=10)
        df = tabular.to_pandas_dataframe()
        return len(df)
    except Exception as e:
        output0 = "{}:".format(type(e).__name__)
        output1 = "{} ".format(e)
        output2 = f"{type(e).__name__} occured at line {e.__traceback__.tb_lineno} of {__file__}"
        return output0 + output1 + output2
The try/except block is for catching any exception thrown and returning it as the output.
The exception that I keep getting is:
UserErrorException: The workspace configuration file config.json, could not be found in /var/azureml-app or its
parent directories. Please check whether the workspace configuration file exists, or provide the full path
to the configuration file as an argument. You can download a configuration file for your workspace,
via http://ml.azure.com and clicking on the name of your workspace in the right top.
I have actually tried saving the config file by passing an absolute path to the path argument of ws.write_config(path='my_absolute_path'), and likewise when loading it with Workspace.from_config(path='my_absolute_path'), but I got pretty much the same error:
UserErrorException: The workspace configuration file config.json, could not be found in /var/azureml-app/my_absolute_path or its
parent directories. Please check whether the workspace configuration file exists, or provide the full path
to the configuration file as an argument. You can download a configuration file for your workspace,
via http://ml.azure.com and clicking on the name of your workspace in the right top.
It looks like even providing the path does not change the root directory from which the entry script starts locating the file.
I also tried directly saving the file to /var/azureml-app/, but this path was not recognized when I passed it to ws.write_config(path='/var/azureml-app/').
Do you have any idea where exactly /var/azureml-app/ is?
Any idea on how to fix this?
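A minimal diagnostic sketch that may help locate it (plain Python only; nothing Azure-specific is assumed): temporarily return the entry script's working directory and location from run(), then compare them with the /var/azureml-app path in the error.

import os

def run(data):
    # Diagnostic only: show where the entry script actually executes and
    # whether a config.json is visible from there.
    return {
        "cwd": os.getcwd(),
        "script_dir": os.path.dirname(os.path.abspath(__file__)),
        "config_found": os.path.exists(os.path.join(os.getcwd(), "config.json")),
    }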

Boto Exception NoAuthHandlerFound

I'm getting the following error:
File "/Users/tai/Desktop/FlashY/flashy/sniffer/database.py", line 21, in <module>
import dynamoStorage
File "/Users/tai/Desktop/FlashY/flashy/sniffer/dynamoStorage.py", line 37, in <module>
swfTable = Table(decompiled_dynamo_table, connection=dynamoConn)
File "/Library/Python/2.7/site-packages/boto/dynamodb2/table.py", line 107, in __init__
self.connection = DynamoDBConnection()
File "/Library/Python/2.7/site-packages/boto/dynamodb2/layer1.py", line 183, in __init__
super(DynamoDBConnection, self).__init__(**kwargs)
File "/Library/Python/2.7/site-packages/boto/connection.py", line 1073, in __init__
profile_name=profile_name)
File "/Library/Python/2.7/site-packages/boto/connection.py", line 572, in __init__
host, config, self.provider, self._required_auth_capability())
File "/Library/Python/2.7/site-packages/boto/auth.py", line 883, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV4Handler'] Check your credentials
When I had the auth directly in the file, my keys worked, so I know the keys are correct.
I have for awsAccess.py:
#aswAccess holds the names of the bash environment set keys.
#used by other classes to create a connection to aws
aws_access_key_id=os.getenv('AWS_ACCESS_KEY');
aws_secret_access_key=os.getenv('AWS_SECRET_KEY');
aws_dynamo_region=os.getenv('DYANAMO_REGION')
and I have for database.py:

import boto
from boto.s3.connection import S3Connection

import awsAccess

# for connecting to aws
aws_access_key_id = awsAccess.aws_access_key_id
aws_secret_access_key = awsAccess.aws_secret_access_key
aws_dynamo_region = awsAccess.aws_dynamo_region
aws_dynamo_table = "decompiled_swf_text"

conn = S3Connection(aws_access_key_id, aws_secret_access_key)
dynamoConn = boto.connect_dynamodb(aws_access_key_id, aws_secret_access_key)
dTable = dynamoConn.get_table(aws_dynamo_table)
So I know the keys themselves are correct.
I have a .bash_profile file that looks like this (** indicating removed; I also tried with and without quotes):
export AWS_ACCESS_KEY="myAccessKey**"
export AWS_SECRET_KEY="mySecretKey**"
export DYNAMO_REGION="us-east"
I run source ~/.bash_profile and then tried running again, but I get the error. I can't see why importing would alter the effect of the same key string.
A few tips:
- Assert in your code that the values of aws_access_key_id and aws_secret_access_key are not empty. It is likely you do not have them set at the place you think you have.
- Remove the authentication arguments from boto.connect_<something>. This will let boto use its built-in authentication handlers, which try reading the config file, checking environment variables, etc. Your code will be simpler while still supporting all of boto's authentication methods.
- My favourite authentication method is to use an ini file (usually named boto.cfg) and have the BOTO_CONFIG environment variable set to the full path of this file, e.g. "/home/javl/.boto/boto.cfg"; see the sketch after this list.
- Note that if you pass any of the authentication parameters to boto.connect_<something> with the value None, it will still work, as boto will check the other methods to find the value.
- Since about a year ago, there is a profile option, allowing you to refer to a specific profile in the boto config file. This lets me switch to different profiles anywhere in the code.
For more tips and details, see the related SO answer.
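A minimal sketch of that ini-file approach (the credential values are placeholders; [Credentials] is the section boto reads):

# boto.cfg -- point BOTO_CONFIG at its full path, e.g.
#   export BOTO_CONFIG=/home/javl/.boto/boto.cfg
[Credentials]
aws_access_key_id = <your-access-key>
aws_secret_access_key = <your-secret-key>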
I had a problem with Ubuntu Wily (15.10). It ships an updated ConfigParser whose .get takes an additional parameter, which is not supported by boto.pyami.config. See here and here.

Debugging GAE in Python Tools for Visual Studio

I'm able to run my Google App Engine webapp2 app using Python Tools for Visual Studio 2012 without issues after following this tutorial, and I can even step through the server initialization code, but I can't get it to break at get or post methods when the website is loaded, similar to what is shown in this video with the main() method. When I pause the debugger, it always ends up in the following infinite loop in wsgi_server.py:
def _loop_forever(self):
    while True:
        self._select()

def _select(self):
    with self._lock:
        fds = self._file_descriptors
        fd_to_callback = self._file_descriptor_to_callback
    if fds:
        if _HAS_POLL:
            # With 100 file descriptors, it is approximately 5x slower to
            # recreate and reinitialize the Poll object on every call to
            # _select rather than reuse one. But the absolute cost of
            # construction, initialization and calling poll(0) is ~25us so
            # code simplicity wins.
            poll = select.poll()
            for fd in fds:
                poll.register(fd, select.POLLIN)
            ready_file_descriptors = [fd for fd, _ in poll.poll(1)]
        else:
            ready_file_descriptors, _, _ = select.select(fds, [], [], 1)
        for fd in ready_file_descriptors:
            fd_to_callback[fd]()
    else:
        # select([], [], [], 1) is not supported on Windows.
        time.sleep(1)
Is it possible to set breakpoints in a Google App Engine webapp2 app in PTVS, which are triggered when the page is loaded from localhost?
Edit: using cprcrack's settings, I was able to successfully run GAE, but when loading the main page I get this error:
Traceback (most recent call last):
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 3003, in _HandleRequest
self._Dispatch(dispatcher, self.rfile, outfile, env_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 2862, in _Dispatch
base_env_dict=env_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 719, in Dispatch
base_env_dict=base_env_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1797, in Dispatch
self._module_dict)
File "C:\Program Files (x86)\Google\google_appengine\google\appengine\tools\dev_appserver.py", line 1648, in ExecuteCGI
app_log_handler = app_logging.AppLogsHandler()
File "C:\Python\lib\logging\__init__.py", line 660, in __init__
_addHandlerRef(self)
File "C:\Python\lib\logging\__init__.py", line 639, in _addHandlerRef
_releaseLock()
File "C:\Python\lib\logging\__init__.py", line 224, in _releaseLock
_lock.release()
File "C:\Python\lib\threading.py", line 138, in release
self.__count = count = self.__count - 1
File "C:\Python\lib\threading.py", line 138, in release
self.__count = count = self.__count - 1
File "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensions\Microsoft\Python Tools for Visual Studio\2.0\visualstudio_py_debugger.py", line 557, in trace_func
return self._events[event](frame, arg)
File "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensions\Microsoft\Python Tools for Visual Studio\2.0\visualstudio_py_debugger.py", line 650, in handle_line
if filename == frame.f_code.co_filename or (not bound and filename_is_same(filename, frame.f_code.co_filename)):
File "C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensions\Microsoft\Python Tools for Visual Studio\2.0\visualstudio_py_debugger.py", line 341, in filename_is_same
import ntpath
File "C:\Python\lib\ntpath.py", line 8, in <module>
import os
File "C:\Python\lib\os.py", line 120, in <module>
from os.path import (curdir, pardir, sep, pathsep, defpath, extsep, altsep,
ImportError: cannot import name curdir
Is this error occurring because I need to roll back to Python 2.5 to use the old dev_appserver?
UPDATE#2:
gcloud preview is deprecated, so it's back to the original method below.
UPDATE#1:
For gcloud preview (it's newer and simpler), replace this:
General->Startup File:
C:\Program Files\Google\Cloud SDK\google-cloud-sdk\lib\googlecloudsdk\gcloud\gcloud.py
Debug->Script Arguments:
preview app run app.yaml --python-startup-script "pydevd_startup.py" --max-module-instances="default:1"
Everything else is the same as in the original answer below:
ORIGINAL ANSWER:
A.) Create a file to inject the remote debugger
Make a new Python file "pydevd_startup.py" and insert this:

import json
import sys

if ':' not in config.version_id:
    # The default server version_id does not contain ':'
    sys.path.append("lib")
    import ptvsd  # ptvsd.settrace() equivalent
    ptvsd.enable_attach(secret='joshua')
    ptvsd.wait_for_attach()

Save it in the working directory of your app.
For more info, look at the pytools remote debugging documentation mentioned above.
B.) Edit Project Settings in VS 2013
Now open your Project Settings in VS and enter this:
General->Startup File: C:\Cloud SDK\google-cloud-sdk\bin\dev_appserver.py
General->Working Directory: .
Debug->Search Paths: C:\Cloud SDK\google-cloud-sdk\lib
Debug->Script Arguments: --python_startup_script=".\pydevd_startup.py" --automatic_restart=no --max_module_instances="default:1" ".\app.yaml"
You could probably also use . instead of <path-to-your-app> but I wanted to be safe.
C.) Run Debugger
With Ctrl+F5 you run the server without debugging. This sounds weird, but we are actually not debugging right now, just running the dev server, which then starts our script to inject the debugger code and wait for our remote debugger to connect, which will happen in the next step.
D.) Start Remote Debugger
DEBUG->Attach to Process <Ctrl+Alt+P>
Qualifier: tcp://joshua@localhost:5678 <ENTER>
joshua is your secret key. If you want to change it (and you should), you also have to change it in pydevd_startup.py. See the pytools reference for more info.
F.) Be really happy!
You can now remote-debug your application locally (erm, weird). To test this, you should probably set a breakpoint at the start of your own script.
If you have any questions, please ask. In the end it seems really simple, but getting this going was rough. Especially because pytools said they don't support it...
G.) Start Debugging for real!
Open http://localhost:8080 in a browser (or whatever address you configured your app to use). Now it should hit the breakpoint. If you are done and reload the site, it starts all over again. If you really want to end debugging or change some code, you have to restart the server and attach again. Don't forget to close the terminal window with the server running (use <Ctrl+C>).
This is a known issue with Google App Engine for Python: currently, debugging does not work in any debugger. See here, here and here.
There's a workaround, but I don't know about getting it working in Python Tools for VS. In theory it should be possible.
https://groups.google.com/forum/#!topicsearchin/google-appengine/Boa/google-appengine/-m00Qz4Vc7U
You'd probably need this guide to get it working:
https://docs.google.com/document/d/1CCSaRiIWCLgbD3OwmuKsRoHHDfBffbROWyVWWL0ZXN4/edit#heading=h.fj44xnkhr0gr
I'm using the old dev_appserver for debugging and it's working for me in a scenario similar to yours. I also got a bunch of exceptions, but I was able to just skip all of them following the instructions in this link (I also had to add "exceptions" for some ValueError exceptions).
These are my project properties:
General tab:
Startup File: C:\Program Files (x86)\Google\google_appengine\old_dev_appserver.py
Working Directory: ./
Windows Application: (unchecked)
Interpreter: Python 2.7
Debug tab:
Search Paths: C:\Program Files (x86)\Google\google_appengine
Script Arguments: --use_sqlite ./
Interpreter Arguments: (blank)
Interpreter Path: C:\Python27\python.exe
When there is no need for breakpoints I run the project with DEBUG > Execute Project in Python Interactive. This way you don't get the unneeded console window.
