Error writing to Google Cloud Spanner using Google Cloud Functions - Python

I am trying to insert data into a Cloud Spanner table from a Cloud Function, but it throws the error shown below. Reading data from Cloud Spanner works fine, but writing fails with the same error whether I use DML statements or the batch.insert method. I suspect it is some kind of permissions problem, but I don't know how to fix it.
The requirements file contains only google-cloud-spanner==1.7.1
Code running in Cloud Functions:
import json
from google.cloud import spanner
INSTANCE_ID = 'AARISTA'
DATABASE_ID = 'main'
TABLE_NAME = 'userinfo'
dataDict = None
def new_user(request):
    dataDict = json.loads(request.data)  # Data is available in dict format
    if dataDict['USER_ID'] == None:
        return "User id empty"
    elif dataDict['IMEI'] == None:
        return "Imei number empty"
    elif dataDict['DEVICE_ID'] == None:
        return "Device ID empty"
    elif dataDict['NAME'] == None:
        return "Name field is empty"
    elif dataDict['VIRTUAL_PRIVATE_KEY'] == None:
        return "User's private key cant be empty"
    else:
        return insert_data(INSTANCE_ID, DATABASE_ID)

def insert_data(instance_id, database_id):
    spanner_client = spanner.Client()
    instance = spanner_client.instance(instance_id)
    database = instance.database(database_id)

    def insert_user(transaction):
        row_ct = transaction.execute_update(
            "INSERT userinfo (USER_ID, DEVICE_ID, IMEI, NAME, VIRTUAL_PRIVATE_KEY) VALUES ("
            + dataDict['USER_ID'] + ", " + dataDict['DEVICE_ID'] + ", " + dataDict['IMEI'] + ", "
            + dataDict['NAME'] + ", " + dataDict['VIRTUAL_PRIVATE_KEY'] + ")"
        )

    database.run_in_transaction(insert_user)
    return 'Inserted data.'
Error logs on Cloud Functions
Traceback (most recent call last):
File "/env/local/lib/python3.7/site-packages/google/cloud/spanner_v1/pool.py", line 265, in get session = self._sessions.get_nowait()
File "/opt/python3.7/lib/python3.7/queue.py", line 198, in get_nowait return self.get(block=False)
File "/opt/python3.7/lib/python3.7/queue.py", line 167, in get raise Empty _queue.Empty
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/env/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable return callable_(*args, **kwargs)
File "/env/local/lib/python3.7/site-packages/grpc/_channel.py", line 547, in __call__ return _end_unary_response_blocking(state, call, False, None)
File "/env/local/lib/python3.7/site-packages/grpc/_channel.py", line 466, in _end_unary_response_blocking raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: status = StatusCode.INVALID_ARGUMENT details = "Invalid CreateSession request." debug_error_string = "{"created":"#1547373361.398535906","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1036,"grpc_message":"Invalid> CreateSession request.","grpc_status":3}" >
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 297, in run_http_function result = _function_handler.invoke_user_function(flask.request)
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 199, in invoke_user_function return call_user_function(request_or_event)
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 192, in call_user_function return self._user_function(request_or_event)
File "/user_code/main.py", line 21, in new_user return insert_data(INSTANCE_ID,DATABASE_ID)
File "/user_code/main.py", line 31, in insert_data database.run_in_transaction(insert_user)
File "/env/local/lib/python3.7/site-packages/google/cloud/spanner_v1/database.py", line 438, in run_in_transaction with SessionCheckout(self._pool) as session:
File "/env/local/lib/python3.7/site-packages/google/cloud/spanner_v1/pool.py", line 519, in __enter__ self._session = self._pool.get(**self._kwargs)
File "/env/local/lib/python3.7/site-packages/google/cloud/spanner_v1/pool.py", line 268, in get session.create()
File "/env/local/lib/python3.7/site-packages/google/cloud/spanner_v1/session.py", line 116, in create session_pb = api.create_session(self._database.name, metadata=metadata, **kw)
File "/env/local/lib/python3.7/site-packages/google/cloud/spanner_v1/gapic/spanner_client.py", line 276, in create_session request, retry=retry, timeout=timeout, metadata=metadata
File "/env/local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__ return wrapped_func(*args, **kwargs)
File "/env/local/lib/python3.7/site-packages/google/api_core/retry.py", line 270, in retry_wrapped_func on_error=on_error,
File "/env/local/lib/python3.7/site-packages/google/api_core/retry.py", line 179, in retry_target return target()
File "/env/local/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout return func(*args, **kwargs)
File "/env/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.InvalidArgument: 400 Invalid CreateSession request.

I tried to reproduce this but it seems to work for me as a Python 3.7 function. I used the latest google-cloud-spanner library in requirements.txt.
While I am unsure what is causing your error, I did notice a few other things.
It seemed odd to declare a global dataDict, build a local one in new_user, and never pass it along. Instead, I added the dict as a parameter to the insert method.
The spacing of the query and the mix of single and double quotes made it hard to parse visually. Since the function runs on Python 3.7, you can also use f-strings, which would likely make it even more readable.
Here is the code I ran in a function that seemed to work.
import json
from google.cloud import spanner
INSTANCE_ID = 'testinstance'
DATABASE_ID = 'testdatabase'
TABLE_ID = 'userinfo'
def new_user(request):
    data = {'USER_ID': '10', 'DEVICE_ID': '11'}
    return insert_data(INSTANCE_ID, DATABASE_ID, data)

def insert_data(instance_id, database_id, data):
    spanner_client = spanner.Client()
    instance = spanner_client.instance(instance_id)
    database = instance.database(database_id)

    def insert_user(transaction):
        query = f"INSERT {TABLE_ID} (USER_ID,DEVICE_ID) VALUES ({data['USER_ID']},{data['DEVICE_ID']})"
        row_ct = transaction.execute_update(query)

    database.run_in_transaction(insert_user)
    return 'Inserted data.'
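As a side note, building the statement by concatenation is easy to get wrong (and invites SQL injection), so it may be worth sketching the same insert with query parameters. This is only a sketch, assuming USER_ID and DEVICE_ID are STRING columns; execute_update accepts params and param_types, so the inner function could be written as:
    def insert_user(transaction):
        # Named parameters (@user_id, @device_id) avoid hand-quoting the values.
        row_ct = transaction.execute_update(
            f"INSERT {TABLE_ID} (USER_ID, DEVICE_ID) VALUES (@user_id, @device_id)",
            params={'user_id': data['USER_ID'], 'device_id': data['DEVICE_ID']},
            param_types={
                'user_id': spanner.param_types.STRING,    # assumed column type
                'device_id': spanner.param_types.STRING,  # assumed column type
            },
        )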

Related

How to get list of events using the Python kubernetes API?

I am trying to obtain the list of events from a minikube cluster using the Python Kubernetes API with the following code:
from kubernetes import config, client
config.load_kube_config()
api = client.EventsV1beta1Api()
print(api.list_event_for_all_namespaces())
I am getting the following error:
C:\Users\lameg\kubesense>python test.py
Traceback (most recent call last):
File "C:\Users\lameg\kubesense\test.py", line 6, in <module>
print(api.list_event_for_all_namespaces())
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api\events_v1beta1_api.py", line 651, in list_event_for_all_namespaces
return self.list_event_for_all_namespaces_with_http_info(**kwargs) # noqa: E501
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api\events_v1beta1_api.py", line 758, in list_event_for_all_namespaces_with_http_info
return self.api_client.call_api(
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 192, in __call_api
return_data = self.deserialize(response_data, response_type)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 264, in deserialize
return self.__deserialize(data, response_type)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 303, in __deserialize
return self.__deserialize_model(data, klass)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 639, in __deserialize_model
kwargs[attr] = self.__deserialize(value, attr_type)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 280, in __deserialize
return [self.__deserialize(sub_data, sub_kls)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 280, in <listcomp>
return [self.__deserialize(sub_data, sub_kls)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 303, in __deserialize
return self.__deserialize_model(data, klass)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\api_client.py", line 641, in __deserialize_model
instance = klass(**kwargs)
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\models\v1beta1_event.py", line 112, in __init__
self.event_time = event_time
File "C:\Users\lameg\miniforge3\lib\site-packages\kubernetes\client\models\v1beta1_event.py", line 291, in event_time
raise ValueError("Invalid value for `event_time`, must not be `None`") # noqa: E501
ValueError: Invalid value for `event_time`, must not be `None`
Any ideas?
That looks like either a bug in the Python client, or a bug in the OpenAPI specification used to generate the client: clearly, null is a value for eventTime that is supported by the API.
I think the only workaround is to monkey-patch the kubernetes.client module so that it accepts null values. Something like this:
from kubernetes import config, client
config.load_kube_config()
api = client.EventsV1beta1Api()
# This is a descriptor, see https://docs.python.org/3/howto/descriptor.html
class FakeEventTime:
    def __get__(self, obj, objtype=None):
        return obj._event_time

    def __set__(self, obj, value):
        obj._event_time = value

# Monkey-patch the `event_time` attribute of the V1beta1Event class.
client.V1beta1Event.event_time = FakeEventTime()
# Now this works.
events = api.list_event_for_all_namespaces()
The above code runs successfully against my OpenShift instance, whereas previously it would fail as you describe in your question.
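For what it's worth, once the class is patched the result can be iterated as usual. A small sanity check (assuming you only care about each event's name, reason and timestamp) could be:
for ev in events.items:
    # event_time may now legitimately be None for some events
    print(ev.metadata.name, ev.reason, ev.event_time)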

Error connecting to Firestore to store an API result from Cloud Shell with Python

I just want to learn how to store data in Firestore using Python and Google Cloud Platform, so I'm calling an API to fetch some example data.
For this I'm using the requests library and the Firestore client from the google.cloud package.
Here is the code I'm running in Cloud Shell:
import requests
from google.cloud import firestore
url = "https://api.coindesk.com/v1/bpi/currentprice.json"
r = requests.get(url)
resp: str = r.text
if not (resp=="null" or resp=="[]"):
    db = firestore.Client()
    doc_ref = db.collection("CoinData").add(r.json())
When the code tries to connect to Firestore to add the JSON from the API response, I get this error:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/google/api_core/grpc_helpers.py", line 66, in error_remapped_callable
return callable_(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 946, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Invalid resource field value in the request."
debug_error_string = "{"created":"#1634689546.004704998","description":"Error received from peer ipv4:74.125.134.95:443","file":"src/core/lib/surface/call.cc","file_line":1070,"grpc_message":"Invalid resource field value in the request.","grpc_status":3}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/antonyare_93/cloudshell_open/pruebas/data_coin_query.py", line 11, in <module>
doc_ref=db.collection("CoinData").add(r.json())
File "/home/antonyare_93/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/collection.py", line 107, in add
write_result = document_ref.create(document_data, **kwargs)
File "/home/antonyare_93/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/document.py", line 99, in create
write_results = batch.commit(**kwargs)
File "/home/antonyare_93/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/batch.py", line 60, in commit
request=request, metadata=self._client._rpc_metadata, **kwargs,
File "/home/antonyare_93/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/services/firestore/client.py", line 815, in commit
response = rpc(request, retry=retry, timeout=timeout, metadata=metadata,)
File "/usr/local/lib/python3.7/dist-packages/google/api_core/gapic_v1/method.py", line 142, in __call__
return wrapped_func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/google/api_core/retry.py", line 288, in retry_wrapped_func
on_error=on_error,
File "/usr/local/lib/python3.7/dist-packages/google/api_core/retry.py", line 190, in retry_target
return target()
File "/usr/local/lib/python3.7/dist-packages/google/api_core/grpc_helpers.py", line 68, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.InvalidArgument: 400 Invalid resource field value in the request.
Does anyone know how I can fix it?
I've just figured it out.
It was a small mistake: I was missing the project name in the Firestore client call. Here's the working code:
import requests
from google.cloud import firestore
url = "https://api.coindesk.com/v1/bpi/currentprice.json"
r = requests.get(url)
resp: str = r.text
if not (resp=="null" or resp=="[]"):
    db = firestore.Client(project="mytwitterapitest")
    doc_ref = db.collection("CoinData").add(r.json())
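If you prefer not to hard-code the project ID, the client libraries can usually pick it up from the environment instead. A small sketch, assuming GOOGLE_CLOUD_PROJECT has been exported in the Cloud Shell session:
import os
from google.cloud import firestore

# e.g. run `export GOOGLE_CLOUD_PROJECT=mytwitterapitest` before the script
db = firestore.Client(project=os.environ["GOOGLE_CLOUD_PROJECT"])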

Error trying to connect Celery through SQS using STS

I'm trying to use Celery with SQS as the broker. In order to use SQS from my container I need to assume a role, and for that I'm using STS. My code looks like this:
import os

import boto3
from celery import Celery

role_info = {
    'RoleArn': 'arn:aws:iam::xxxxxxx:role/my-role-execution',
    'RoleSessionName': 'roleExecution'
}

sts_client = boto3.client('sts', region_name='eu-central-1')
credentials = sts_client.assume_role(**role_info)

aws_access_key_id = credentials["Credentials"]['AccessKeyId']
aws_secret_access_key = credentials["Credentials"]['SecretAccessKey']
aws_session_token = credentials["Credentials"]["SessionToken"]

os.environ["AWS_ACCESS_KEY_ID"] = aws_access_key_id
os.environ["AWS_SECRET_ACCESS_KEY"] = aws_secret_access_key
os.environ["AWS_DEFAULT_REGION"] = 'eu-central-1'
os.environ["AWS_SESSION_TOKEN"] = aws_session_token

broker = "sqs://"
backend = 'redis://redis-service:6379/0'

celery = Celery('tasks', broker=broker, backend=backend)
celery.conf["task_default_queue"] = 'my-queue'
celery.conf["broker_transport_options"] = {
    'region': 'eu-central-1',
    'predefined_queues': {
        'my-queue': {
            'url': 'https://sqs.eu-central-1.amazonaws.com/xxxxxxx/my-queue'
        }
    }
}
In the same file I have the following task:
@celery.task(name='my-queue.my_task')
def my_task(content) -> int:
    print("hello")
    return 0
When I run the worker I get the following error:
[2020-09-24 10:38:03,602: CRITICAL/MainProcess] Unrecoverable error: ClientError('An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.',)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 921, in create_channel
return self._avail_channels.pop()
IndexError: pop from empty list
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/celery/worker/worker.py", line 208, in start
self.blueprint.start(self)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 369, in start
return self.obj.start()
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 318, in start
blueprint.start(self)
File "/usr/local/lib/python3.6/site-packages/celery/bootsteps.py", line 119, in start
step.start(parent)
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/connection.py", line 23, in start
c.connection = c.connect()
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 405, in connect
conn = self.connection_for_read(heartbeat=self.amqheartbeat)
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 412, in connection_for_read
self.app.connection_for_read(heartbeat=heartbeat))
File "/usr/local/lib/python3.6/site-packages/celery/worker/consumer/consumer.py", line 439, in ensure_connected
callback=maybe_shutdown,
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 422, in ensure_connection
callback, timeout=timeout)
File "/usr/local/lib/python3.6/site-packages/kombu/utils/functional.py", line 341, in retry_over_time
return fun(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 275, in connect
return self.connection
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 823, in connection
self._connection = self._establish_connection()
File "/usr/local/lib/python3.6/site-packages/kombu/connection.py", line 778, in _establish_connection
conn = self.transport.establish_connection()
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 941, in establish_connection
self._avail_channels.append(self.create_channel(self))
File "/usr/local/lib/python3.6/site-packages/kombu/transport/virtual/base.py", line 923, in create_channel
channel = self.Channel(connection)
File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 100, in __init__
self._update_queue_cache(self.queue_name_prefix)
File "/usr/local/lib/python3.6/site-packages/kombu/transport/SQS.py", line 105, in _update_queue_cache
resp = self.sqs.list_queues(QueueNamePrefix=queue_name_prefix)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 337, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 656, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListQueues operation: Access to the resource https://eu-central-1.queue.amazonaws.com/ is denied.
If I use boto3 directly without Celery, I'm able to connect to the queue and retrieve data without this error. I don't know why Celery/Kombu tries to list queues when I specify the predefined_queues configuration, which is supposed to avoid exactly this behavior (from the docs):
If you want Celery to use a set of predefined queues in AWS, and to never attempt to list SQS queues, nor attempt to create or delete them, pass a map of queue names to URLs using the predefined_queue_urls setting
Source here
Does anyone know what is happening? How should I modify my code to make it work? It seems that Celery is not using the credentials at all.
The versions I'm using:
celery==4.4.7
boto3==1.14.54
kombu==4.5.0
Thanks!
PS: I created an issue on GitHub to track whether this could be a library error or not...
I solved the problem by updating the dependencies to the latest versions:
celery==5.0.0
boto3==1.14.54
kombu==5.0.2
pycurl==7.43.0.6
I was able to get celery==4.4.7 and kombu==4.6.11 working by setting the following configuration option:
celery.conf["task_create_missing_queues"] = False

Python - RuntimeError: working outside of request context

I'm trying to get the GET parameters from the URL. I have it working in my __init__.py file, but in a different file it's not working.
I tried using with app.app_context():, but I am still getting the same issue.
def log_entry(entity, type, entity_id, data, error):
    with app.app_context():
        zip_id = request.args.get('id')
RuntimeError: working outside of request context
Any suggestions?
Additional Info:
This is using the Flask web framework, which is set up as a service (API).
Example URL the user would hit http://website.com/api/endpoint?id=1
As mentioned above, zip_id = request.args.get('id') works fine in the main file, but here I am in runners.py (just another file with function definitions).
Full traceback:
Debugging middleware caught exception in streamed response at a point where response headers were already sent.
Traceback (most recent call last):
File "/Users/ereeve/.virtualenvs/pi-automation-api/lib/python2.7/site-packages/werkzeug/wsgi.py", line 703, in __next__
return self._next()
File "/Users/ereeve/.virtualenvs/pi-automation-api/lib/python2.7/site-packages/werkzeug/wrappers.py", line 81, in _iter_encoded
for item in iterable:
File "/Users/ereeve/Documents/TechSol/pi-automation-api/automation_api/runners.py", line 341, in create_agencies
log_entry("test", "created", 1, "{'data':'hey'}", "")
File "/Users/ereeve/Documents/TechSol/pi-automation-api/automation_api/runners.py", line 315, in log_entry
zip_id = request.args.get('id')
File "/Users/ereeve/.virtualenvs/pi-automation-api/lib/python2.7/site-packages/werkzeug/local.py", line 343, in __getattr__
return getattr(self._get_current_object(), name)
File "/Users/ereeve/.virtualenvs/pi-automation-api/lib/python2.7/site-packages/werkzeug/local.py", line 302, in _get_current_object
return self.__local()
File "/Users/ereeve/.virtualenvs/pi-automation-api/lib/python2.7/site-packages/flask/globals.py", line 20, in _lookup_req_object
raise RuntimeError('working outside of request context')
RuntimeError: working outside of request context
The function in the same file that calls log_entry:
def create_agencies(country_code, DB, session):
    document = DB.find_one({'rb_account_id': RB_COUNTRIES_new[country_code]['rb_account_id']})
    t2 = new_t2(session)
    log_entry("test", "created", 1, "{'data':'hey'}", "")

Python Whoosh AttributeError when updating index

I'm trying to update my whoosh search index when I make changes to my database, but keep getting an error that I haven't been able to figure out.
One of my views:
from search import update_index

@view_config(route_name='update_effect_brief', permission='edit')
def update_effect_brief(request):
    name = request.matchdict['pagename']
    brief = request.params['effect_brief']
    effect = Effect.get_by_name(name)  # This is an "effect" object
    effect.update_brief(brief)
    update_index(effect)
    return HTTPFound(location=request.route_url('view_effect', pagename=name))
My search.py file:
from whoosh.index import create_in, open_dir

def update_index(obj):
    '''Update single ingredient, product or effect.'''
    index = open_dir('searchindex')  # searchindex is the name of the directory
    with index.searcher as searcher:  # crashes on this line!
        writer = index.writer()
        update_doc(writer, obj)
Traceback:
Traceback (most recent call last):
File "/home/home/SkinResearch/env/local/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.9-py2.7.egg/pyramid_debugtoolbar/toolbar.py", line 152, in toolbar_tween
response = _handler(request)
File "/home/home/SkinResearch/env/local/lib/python2.7/site-packages/pyramid_debugtoolbar-1.0.9-py2.7.egg/pyramid_debugtoolbar/panels/performance.py", line 55, in resource_timer_handler
result = handler(request)
File "/home/home/SkinResearch/env/local/lib/python2.7/site-packages/pyramid-1.4.5-py2.7.egg/pyramid/tweens.py", line 21, in excview_tween
response = handler(request)
File "/home/home/SkinResearch/env/local/lib/python2.7/site-packages/pyramid_tm-0.7-py2.7.egg/pyramid_tm/__init__.py", line 82, in tm_tween
reraise(*exc_info)
File "/home/home/SkinResearch/env/local/lib/python2.7/site-packages/pyramid_tm-0.7-py2.7.egg/pyramid_tm/__init__.py", line 63, in tm_tween
response = handler(request)
File "/home/home/SkinResearch/env/local/lib/python2.7/site-packages/pyramid-1.4.5-py2.7.egg/pyramid/router.py", line 161, in handle_request
response = view_callable(context, request)
File "/home/home/SkinResearch/env/local/lib/python2.7/site-packages/pyramid-1.4.5-py2.7.egg/pyramid/config/views.py", line 237, in _secured_view
return view(context, request)
File "/home/home/SkinResearch/env/local/lib/python2.7/site-packages/pyramid-1.4.5-py2.7.egg/pyramid/config/views.py", line 377, in viewresult_to_response
result = view(context, request)
File "/home/home/SkinResearch/env/local/lib/python2.7/site-packages/pyramid-1.4.5-py2.7.egg/pyramid/config/views.py", line 493, in _requestonly_view
response = view(request)
File "/home/home/SkinResearch/skinresearch/skinproject/views.py", line 544, in update_effect_brief
update_index(effect)
File "/home/home/SkinResearch/skinresearch/skinproject/search.py", line 37, in start_update
update_index(obj)
File "/home/home/SkinResearch/skinresearch/skinproject/search.py", line 92, in update_index
with index.searcher as searcher:
AttributeError: __exit__
What am I doing wrong?
You need to call the index.searcher() method to create the context manager:
with index.searcher() as searcher:
See the Searcher object section in the Whoosh quickstart, and The Searcher object documentation.
It isn't entirely clear to me why you are creating a searcher only to create a writer inside the block and update the index. Perhaps you meant to use the writer as the context manager here:
with index.writer() as writer:
    update_doc(writer, obj)
and leave the searcher for searching operations.
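Putting that together, update_index could be reduced to something like the sketch below; update_doc is assumed to be the existing helper that writes the object's fields through the writer:
from whoosh.index import open_dir

def update_index(obj):
    '''Update a single ingredient, product or effect.'''
    index = open_dir('searchindex')
    # The writer context manager commits on success and cancels on error.
    with index.writer() as writer:
        update_doc(writer, obj)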
