I've used the Swagger Editor to manually generate my Swagger spec file and generated the files for a Python Flask server. Following the README I installed connexion, but when I run python app.py I get the error:
ValueError: need more than 1 value to unpack. Any ideas?
Full stack trace below:
No handlers could be found for logger "connexion.api"
Traceback (most recent call last):
  File "app.py", line 5, in <module>
    app.add_api('swagger.yaml')
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/connexion/app.py", line 144, in add_api
    debug=self.debug)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/connexion/api.py", line 127, in __init__
    self.add_paths()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/connexion/api.py", line 198, in add_paths
    six.reraise(*sys.exc_info())
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/connexion/api.py", line 187, in add_paths
    self.add_operation(method, path, endpoint, path_parameters)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/connexion/api.py", line 160, in add_operation
    resolver=self.resolver)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/connexion/operation.py", line 168, in __init__
    resolution = resolver.resolve(self)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/connexion/resolver.py", line 50, in resolve
    return Resolution(self.resolve_function_from_operation_id(operation_id), operation_id)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/connexion/resolver.py", line 71, in resolve_function_from_operation_id
    return self.function_resolver(operation_id)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/connexion/utils.py", line 106, in get_function_from_name
    module_name, attr_path1 = module_name.rsplit('.', 1)
ValueError: need more than 1 value to unpack
I ran into this as well. From what I can see, the code generated by Swagger assumes you're using Python 3. While connexion supports both Python 2.7 and 3.4+, for Python 2.7 it needs an __init__.py file in the generated python-flask-server/ base directory as well as inside the controllers/ subdirectory (Implicit Namespace Packages were only introduced in Python 3.3). If you create those two empty files after generating the code, things should work. If the Swagger generator wants to support Python 2.7 (since connexion allows for it), it would just need to provide those files as well.
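For example, a minimal way to create the two package markers from the directory containing the generated code (paths assume the generator's default python-flask-server/ layout):

import os

# Mark the generated directories as regular packages so Python 2.7 can import them
open(os.path.join('python-flask-server', '__init__.py'), 'a').close()
open(os.path.join('python-flask-server', 'controllers', '__init__.py'), 'a').close()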
I am trying to push data to GCP Datastore. The code snippet below works fine in a Jupyter Notebook, but it throws an error in VS Code.
from google.cloud import datastore  # assumed import; the snippet is a method of a repository class

def load_data_json(self, kind_name, data_with_qp_ID, qp_id):
    # Convert the DataFrame to JSON for upload into Datastore
    data_with_qp_ID_as_JSON = self.convert_DF_to_JSON(data_with_qp_ID, qp_id)
    # Iterate through the JSON records and upload each one
    for data in data_with_qp_ID_as_JSON.keys():
        with self.client.transaction():
            incomplete_key = self.client.key(kind_name)
            task = datastore.Entity(key=incomplete_key)
            task.update(data_with_qp_ID_as_JSON[data])
            self.client.put(task)
    return 'Ingestion Successful - Data Store Repository'
I have defined the name of the bucket in kind_name, data_with_qp_ID is a pandas DataFrame, and qp_id is the name of a column in that DataFrame. Please see the error message I get below:
Traceback (most recent call last):
  File "/Users/ajaykrishnan/Desktop/Projects/Sprint 3/Data Migration/DataMigration_v1.1/main2.py", line 139, in <module>
    write_datastore_db.load_data_json(ds_kindname, bookmarks_data_with_qp_ID, qp_id)
  File "/Users/ajaykrishnan/Desktop/Projects/Sprint 3/Data Migration/DataMigration_v1.1/pkg/repository/ds_repository.py", line 50, in load_data_json
    self.client.put(task)
  File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/client.py", line 597, in put
    self.put_multi(entities=[entity], retry=retry, timeout=timeout)
  File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/client.py", line 634, in put_multi
    current.put(entity)
  File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/transaction.py", line 315, in put
    super(Transaction, self).put(entity)
  File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/batch.py", line 227, in put
    _assign_entity_to_pb(entity_pb, entity)
  File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/batch.py", line 373, in _assign_entity_to_pb
    bare_entity_pb = helpers.entity_to_protobuf(entity)
  File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/helpers.py", line 208, in entity_to_protobuf
    key_pb = entity.key.to_protobuf()
  File "/opt/anaconda3/lib/python3.9/site-packages/google/cloud/datastore/key.py", line 298, in to_protobuf
    key.path.append(element)
TypeError: Parameter to MergeFrom() must be instance of same class: expected google.datastore.v1.Key.PathElement got PathElement.
My environment is as follows:
macOS Monterey V12.06
Python 3.9.12 (Conda)
I was able to clear this error. It was an issue with the protobuf library my environment was using: downgrading protobuf from 4.x.x to 3.20.1 made it work.
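For anyone hitting the same thing, the downgrade is a one-liner in the affected environment (pin the version the same way in conda if that is how you manage packages):

pip install protobuf==3.20.1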
I am trying to run a preprocessing pipeline using nipype and I get the following error message:
Traceback (most recent call last):
  File "preprocscript.py", line 211, in <module>
    preproc.run('MultiProc', plugin_args={'n_procs': 8})
  File "/sw/anaconda/3/lib/python3.6/site-packages/nipype/pipeline/engine/workflows.py", line 579, in run
    runner = plugin_mod(plugin_args=plugin_args)
  File "/sw/anaconda/3/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 162, in __init__
    initargs=(self._cwd,)
  File "/sw/anaconda/3/lib/python3.6/multiprocessing/pool.py", line 175, in __init__
    self._repopulate_pool()
  File "/sw/anaconda/3/lib/python3.6/multiprocessing/pool.py", line 236, in _repopulate_pool
    self._wrap_exception)
  File "/sw/anaconda/3/lib/python3.6/multiprocessing/pool.py", line 250, in _repopulate_pool_static
    wrap_exception)
  File "/sw/anaconda/3/lib/python3.6/multiprocessing/process.py", line 73, in __init__
    assert group is None, 'group argument must be None for now'
AssertionError: group argument must be None for now
and I am not sure what exactly in my code leads to this, or whether it is an issue with my software.
I am on a Linux system and use Python 3.6.
The module you are using has a ProcessPoolExecutor inside it. Python 3.7 added some additional arguments to that class, namely initargs, which is what the nipype multiprocessing module you are using passes. Unfortunately this is not backward compatible with 3.6, and nipype did not provide an alternative way to use that module.
Your options are to upgrade Python or to not use the multiprocessing portion of nipype.
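If upgrading is not an option, a minimal sketch of the second route is to run the workflow serially with nipype's Linear plugin instead of MultiProc; this avoids the multiprocessing code path entirely, at the cost of the 8-way parallelism:

# Run the pipeline serially instead of through the multiprocessing pool
preproc.run(plugin='Linear')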
Hi, I've installed pynsist in order to turn my Python files into an executable, but I am having some issues. The project consists of two files that I've written: the main program to be run, Filereader.py, and a supplied file called spuriousReq.py, from which Filereader.py uses a function. Currently my installer.cfg file looks like this:
[Application]
name=WFilereader
version=1.0
entry_point=Filereader
console=true

[Python]
version=3.4.0

[Include]
packages = matplotlib
    statistics
    bisect
files = spuriousReq.py
I've moved the installer.cfg file and both Python files to the C:\Python34\Scripts folder in order to access them from the cmd (yes, I am new at this). But I get the following error, which I don't know how to interpret or solve:
C:\Python34\Scripts>"C:\Python34\python.exe" "C:\Python34\Scripts\\pynsist" installer.cfg
Traceback (most recent call last):
  File "C:\Python34\Scripts\\pynsist", line 3, in <module>
    main()
  File "C:\Python34\lib\site-packages\nsist\__init__.py", line 393, in main
    shortcuts = configreader.read_shortcuts_config(cfg)
  File "C:\Python34\lib\site-packages\nsist\configreader.py", line 172, in read_shortcuts_config
    appcfg = cfg['Application']
  File "C:\Python34\lib\configparser.py", line 937, in __getitem__
    raise KeyError(key)
KeyError: 'Application'
As mentioned in the documentation (http://pynsist.readthedocs.io/en/latest/), you need to specify the function in your Filereader.py file that starts the execution of your script. For example, if you have a main function that is the entry point of your script, you need to specify it in your installer.cfg file like below:
[Application]
name=WFilereader
version=1.0
entry_point=Filereader:main  <------ mention your entry-point function here
console=true

[Python]
version=3.4.0

[Include]
packages = matplotlib
    statistics
    bisect
files = spuriousReq.py
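As an aside, you shouldn't need to copy everything into C:\Python34\Scripts. Running pynsist from your own project directory (the one containing installer.cfg and both .py files) should work as well, for example (C:\MyProject is a placeholder, and this assumes your pynsist version supports being run as a module):

C:\MyProject>"C:\Python34\python.exe" -m nsist installer.cfg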
I am starting with the Microsoft Azure SDK for Python (https://github.com/Azure/azure-sdk-for-python), but I have problems.
I am using Scientific Linux, and I installed the SDK for Python 3.4 by running the following (inside the SDK directory):
python setup.py install
After that I created a simple script just to test the connection:
from azure.storage import BlobService

blob_service = BlobService(account_name='thename', account_key='Mxxxxxxx3w==')
blob_service.create_container('testcontainer')

for i in blob_service.list_containers():
    print(i.name)
following this documentation:
http://blogs.msdn.com/b/tconte/archive/2013/04/17/how-to-interact-with-windows-azure-blob-storage-from-linux-using-python.aspx
http://azure.microsoft.com/en-us/documentation/articles/storage-python-how-to-use-blob-storage/#large-blobs
but it is not working; I always receive the same error:
python3 test.py
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/storage/storageclient.py", line 143, in _perform_request
  File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/storage/storageclient.py", line 132, in _perform_request_worker
  File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/http/httpclient.py", line 247, in perform_request
azure.http.HTTPError: The value for one of the HTTP headers is not in the correct format.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test.py", line 21, in <module>
    blob_service.create_container('testcontainer')
  File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/storage/blobservice.py", line 192, in create_container
  File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/__init__.py", line 905, in _dont_fail_on_exist
  File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/storage/blobservice.py", line 189, in create_container
  File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/storage/storageclient.py", line 150, in _perform_request
  File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/storage/__init__.py", line 889, in _storage_error_handler
  File "/usr/local/lib/python3.4/site-packages/azure-0.9.0-py3.4.egg/azure/__init__.py", line 929, in _general_error_handler
azure.WindowsAzureError: Unknown error (The value for one of the HTTP headers is not in the correct format.)
<?xml version="1.0" encoding="utf-8"?><Error><Code>InvalidHeaderValue</Code><Message>The value for one of the HTTP headers is not in the correct format.
RequestId:b37c5584-0001-002b-24b8-c2c245000000
Time:2014-11-19T14:54:38.9378626Z</Message><HeaderName>x-ms-version</HeaderName><HeaderValue>2012-02-12</HeaderValue></Error>
Thanks in advance and best regards.
I have this exact same issue. I believe it's a library bug, but the authors haven't had their say yet.
It looks like the response merely states the version, but it's actually telling you which header is wrong: the x-ms-version value should be "2014-02-14". You can apply the fix shown in https://github.com/Azure/azure-sdk-for-python/pull/289 .
Hopefully this will be fixed and nobody will ever read this answer. Cheers!
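Until a release ships that change, you can apply the same one-line edit to your installed copy. The file and constant name below are taken from that pull request, so double-check them against your installed version:

# in .../site-packages/azure-0.9.0-py3.4.egg/azure/storage/__init__.py
X_MS_VERSION = '2014-02-14'  # was '2012-02-12'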
Is it possible to use the ANTLR4 Python runtime with Python 2.6, or is the minimum required version Python 2.7? I want to use it on CentOS 6.3, which comes with Python 2.6.6. If it is not possible, is it known which Python 2.7 features it uses?
Finally I can answer this myself. It is not possible to use the antlr4-python2 runtime on CentOS 6.3 (which has Python 2.6.6). I installed it and tried this example; the result is as follows:
[lg@localhost py]$ python test.py data.txt
Traceback (most recent call last):
  File "test.py", line 15, in <module>
    main(sys.argv)
  File "test.py", line 10, in main
    tree = parser.r()
  File "/home/lg/antlr/py/HelloParser.py", line 74, in r
    self.enterRule(localctx, 0, self.RULE_r)
  File "/usr/lib/python2.6/site-packages/antlr4/Parser.py", line 367, in enterRule
    self._ctx.start = self._input.LT(1)
  File "/usr/lib/python2.6/site-packages/antlr4/CommonTokenStream.py", line 85, in LT
    self.lazyInit()
  File "/usr/lib/python2.6/site-packages/antlr4/BufferedTokenStream.py", line 209, in lazyInit
    self.setup()
  File "/usr/lib/python2.6/site-packages/antlr4/BufferedTokenStream.py", line 212, in setup
    self.sync(0)
  File "/usr/lib/python2.6/site-packages/antlr4/BufferedTokenStream.py", line 134, in sync
    fetched = self.fetch(n)
  File "/usr/lib/python2.6/site-packages/antlr4/BufferedTokenStream.py", line 146, in fetch
    t = self.tokenSource.nextToken()
  File "/usr/lib/python2.6/site-packages/antlr4/Lexer.py", line 131, in nextToken
    tokenStartMarker = self._input.mark()
AttributeError: 'builtin_function_or_method' object has no attribute 'mark'
[lg@localhost py]$
The example you used was badly written: input is the name of a built-in Python function, and the example hands that builtin to the lexer instead of a character stream, which is why the traceback shows a 'builtin_function_or_method'. Give the lexer a FileStream and it might work.
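For reference, a minimal driver along those lines might look like the sketch below (the Hello* class names and the r start rule are taken from the traceback above):

import sys
from antlr4 import FileStream, CommonTokenStream
from HelloLexer import HelloLexer
from HelloParser import HelloParser

def main(argv):
    # Feed the lexer a real character stream, not the builtin input function
    stream = FileStream(argv[1])
    lexer = HelloLexer(stream)
    tokens = CommonTokenStream(lexer)
    parser = HelloParser(tokens)
    tree = parser.r()  # parse starting from the example grammar's r rule

if __name__ == '__main__':
    main(sys.argv)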