I am trying to do bulk updates, while at the same time retaining the state of a specific field.
In my code I am either creating a document or adding to the list 'stuff'.
# init bulk
data = [...]
bulkop = col.initialize_ordered_bulk_op()
for d in data:
    bulkop.find({'thing': d}).upsert().update(
        {'$setOnInsert': {'status': 0}, '$push': {'stuff': 'something'}, '$inc': {'seq': 1}})
bulkop.execute()
However, I am getting an error when I try this.
Error: pymongo.errors.BulkWriteError: batch op errors occurred
It works fine without the '$setOnInsert': {'status': 0} part, but I need it to make sure the status field is not overwritten on updates.
I think you are using PyMongo 3.x; the code above works on PyMongo 2.x. Try this:
from pymongo import MongoClient, UpdateMany

client = MongoClient('<connection string>')
db = client['test_db']
col = db['test']

docs = [doc_1, doc_2, doc_3, ..., doc_n]
# $setOnInsert only applies when the upsert inserts a new document;
# $push appends the given values to the 'stuff' array.
requests = [UpdateMany({}, {'$setOnInsert': {'status': 0},
                            '$push': {'stuff': {'$each': docs}}},
                       upsert=True)]
result = col.bulk_write(requests)
print(result.upserted_count, result.modified_count)
Here the empty filter { } means the update applies to all documents.
Hope this helps...
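If the goal is still one upsert per value of 'thing', as in the original loop, the same bulk write can be expressed with one UpdateOne per value. A minimal sketch, reusing the field names and the 'something' value from the question:

from pymongo import MongoClient, UpdateOne

client = MongoClient('<connection string>')  # placeholder connection string
col = client['test_db']['test']

data = [...]  # the values to match on 'thing', as in the question

requests = [
    UpdateOne(
        {'thing': d},
        {'$setOnInsert': {'status': 0},    # only applied when the upsert inserts
         '$push': {'stuff': 'something'},  # appended on every match
         '$inc': {'seq': 1}},
        upsert=True)
    for d in data
]
result = col.bulk_write(requests, ordered=True)
print(result.upserted_count, result.modified_count)

If the bulk write still fails, the BulkWriteError exception carries a details attribute with the underlying write errors, which is usually more informative than the generic "batch op errors occurred" message.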
I am a very inexperienced programmer with no formal education. Details will be extremely helpful in any responses.
I have made several basic python scripts to call SOAP APIs, but I am running into an issue with a specific API function that has an embedded array.
Here is a sample excerpt from a working XML format to show nested data:
<bomData xsi:type="urn:inputBOM" SOAP-ENC:arrayType="urn:bomItem[]">
  <bomItem>
    <item_partnum></item_partnum>
    <item_partrev></item_partrev>
    <item_serial></item_serial>
    <item_lotnum></item_lotnum>
    <item_sublotnum></item_sublotnum>
    <item_qty></item_qty>
  </bomItem>
  <bomItem>
    <item_partnum></item_partnum>
    <item_partrev></item_partrev>
    <item_serial></item_serial>
    <item_lotnum></item_lotnum>
    <item_sublotnum></item_sublotnum>
    <item_qty></item_qty>
  </bomItem>
</bomData>
I have tried three different things to get this to work, to no avail.
I can generate nearly the exact XML from my script, but a key attribute is missing: the 'SOAP-ENC:arrayType="urn:bomItem[]"' shown in the XML example above.
Option 1 was using MessagePlugin, but I get an error because my section is the third element and the plugin always injects into the first element. I have tried body[2], but this throws an error.
Option 2 was trying to create the object(?). I read a lot of Stack Overflow, but I might be missing something for this.
Option 3 looked simple enough, but also failed. I tried setting the values in the JSON directly. I got these examples by converting an XML sample to JSON.
I have also tried several other minor things to get it working, but they are not worth mentioning. Although, if there is a way to somehow do the following, then I'm all ears:
bomItem[]: bomData = {"bomItem"[{...,...,...}]}
Here is a sample of my script:
# for python 3
# using pip install suds-py3
from suds.client import Client
from suds.plugin import MessagePlugin
# Config
#option 1: trying to set it as an array using plugin
class MyPlugin(MessagePlugin):
    def marshalled(self, context):
        body = context.envelope.getChild('Body')
        bomItem = body[0]
        bomItem.set('SOAP-ENC:arrayType', 'urn:bomItem[]')
URL = "http://localhost/application/soap?wsdl"
client = Client(URL, plugins=[MyPlugin()])
transact_info = {
    "username": "",
    "transaction": "",
    "workorder": "",
    "serial": "",
    "trans_qty": "",
    "seqnum": "",
    "opcode": "",
    "warehouseloc": "",
    "warehousebin": "",
    "machine_id": "",
    "comment": "",
    "defect_code": ""
}

# WIP - trying to get bomData below working first
inputData = {
    "dataItem": [
        {
            "fieldname": "",
            "fielddata": ""
        }
    ]
}

# option 2: trying to create the element here and define it as an array
#inputbom = client.factory.create('ns3:inputBOM')
#inputbom._type = "SOAP-ENC:arrayType"
#inputbom.value = "urn:bomItem[]"

bomData = {
    # Option 3: trying to set the type and array type in the JSON
    #"#xsi:type": "urn:inputBOM",
    #"#SOAP-ENC:arrayType": "urn:bomItem[]",
    "bomItem": [
        {
            "item_partnum": "",
            "item_partrev": "",
            "item_serial": "",
            "item_lotnum": "",
            "item_sublotnum": "",
            "item_qty": ""
        },
        {
            "item_partnum": "",
            "item_partrev": "",
            "item_serial": "",
            "item_lotnum": "",
            "item_sublotnum": "",
            "item_qty": ""
        }
    ]
}

try:
    response = client.service.transactUnit(transact_info, inputData, bomData)
    print("RESPONSE: ")
    print(response)
    #print(client)
    #print(envelope)
except Exception as e:
    # handle error here
    print(e)
I appreciate any help and hope it is easy to solve.
I have found the answer I was looking for. At least a working solution.
In any case, option 1 worked out. I read up on it at the following link:
https://suds-py3.readthedocs.io/en/latest/
You can review the 'MessagePlugin' section there.
I found a solution to get message plugin working from the following post:
unmarshalling Error: For input string: ""
A user posted an example how to crawl through the XML structure and modify it.
Here is my modified example to get my script working:
# Using MessagePlugin to modify elements before sending them to the server
class MyPlugin(MessagePlugin):
    # Created a method that can be reused to modify sections with a similar
    # structure/requirements.
    def addArrayType(self, dataType, arrayType, transactUnit):
        # This is the code that is key to crawling through the XML - I get
        # the child of each parent element until I am at the right level for
        # modification.
        data = transactUnit.getChild(dataType)
        if data:
            data.set('SOAP-ENC:arrayType', arrayType)

    def marshalled(self, context):
        # Alter the envelope so that the xsd namespace is allowed.
        context.envelope.nsprefixes['xsd'] = 'http://www.w3.org/2001/XMLSchema'
        body = context.envelope.getChild('Body')
        transactUnit = body.getChild("transactUnit")
        if transactUnit:
            self.addArrayType('inputData', 'urn:dataItem[]', transactUnit)
            self.addArrayType('bomData', 'urn:bomItem[]', transactUnit)
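For completeness, the plugin is registered with the client exactly as in the original script; a short sketch of the wiring (reusing the WSDL URL and the transact_info/inputData/bomData dictionaries from the question) looks like this:

from suds.client import Client

URL = "http://localhost/application/soap?wsdl"
client = Client(URL, plugins=[MyPlugin()])

# marshalled() runs on every outgoing request, so transactUnit calls now carry
# the SOAP-ENC:arrayType attributes on inputData and bomData.
response = client.service.transactUnit(transact_info, inputData, bomData)
print(response)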
I recently found out about sqlalchemy in Python. I'd like to use it for data science rather than website applications.
I've been reading about it and I like that you can translate the sql queries into Python.
The main thing I'm confused about is this:
Since I'm reading data from an already well established schema, I wish I didn't have to create the corresponding models myself.
I am able to get around that by reading the metadata for the tables and then just querying the tables and columns.
The problem is that when I want to join to other tables, reading the metadata takes too long each time, so I'm wondering whether it makes sense to cache it by pickling the object, or whether there's a built-in method for that.
Edit: code included.
It turned out the waiting time was due to an error in the loading function rather than in how the engine is used. I'm still leaving the code here in case people have something useful to comment. Cheers.
The code I'm using is the following:
import os
import pickle as pkl

import pandas as pd
import sqlalchemy as alq


def reflect_engine(engine, update):
    store = f'cache/meta_{engine.logging_name}.pkl'
    if update or not os.path.isfile(store):
        meta = alq.MetaData()
        meta.reflect(bind=engine)
        with open(store, "wb") as opened:
            pkl.dump(meta, opened)
    else:
        # Pickle files must be opened in binary mode for reading as well.
        with open(store, "rb") as opened:
            meta = pkl.load(opened)
    return meta


def begin_session(engine):
    session = alq.orm.sessionmaker(bind=engine)
    return session()
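A minimal usage sketch of the two helpers above, assuming a placeholder database URL:

# Hypothetical database URL; substitute your own connection string.
engine = alq.create_engine('postgresql://user:password@host/dbname')

metadata = reflect_engine(engine, update=False)  # reflected once, then read from the pickle cache
session = begin_session(engine)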
Then I use the metadata object to get my queries...
def get_some_cars(engine, metadata):
    session = begin_session(engine)

    Cars = metadata.tables['Cars']
    Makes = metadata.tables['CarManufacturers']

    cars_cols = [getattr(Cars.c, each_one) for each_one in [
        'car_id',
        'car_selling_status',
        'car_purchased_date',
        'car_purchase_price_car']] + [
        Makes.c.car_manufacturer_name]

    statuses = {
        'selling':  ['AVAILABLE', 'RESERVED'],
        'physical': ['ATOURLOCATION']}

    inventory_conditions = alq.and_(
        Cars.c.purchase_channel == "Inspection",
        Cars.c.car_selling_status.in_(statuses['selling']),
        Cars.c.car_physical_status.in_(statuses['physical']),)

    the_query = (session.query(*cars_cols).
                 join(Makes, Cars.c.car_manufacturer_id == Makes.c.car_manufacturer_id).
                 filter(inventory_conditions).
                 statement)

    the_inventory = pd.read_sql(the_query, engine)
    return the_inventory
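On the built-in side: MetaData.reflect accepts an only argument, so when the joins involve a known handful of tables, reflecting just those tables is another way to keep the load time down without pickling. A rough sketch, assuming only the two tables used above are needed:

# Reflect only the tables actually used, instead of the whole schema.
meta = alq.MetaData()
meta.reflect(bind=engine, only=['Cars', 'CarManufacturers'])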
I am just getting started with the Python BigQuery API (https://github.com/GoogleCloudPlatform/google-cloud-python/tree/master/bigquery) after briefly trying out pandas-gbq (https://github.com/pydata/pandas-gbq) and realizing that pandas-gbq does not support the RECORD type, i.e. no nested fields.
Now I am trying to upload nested data to BigQuery. I managed to create the table with the respective schema; however, I am struggling with the upload of the JSON data.
from google.cloud import bigquery
from google.cloud.bigquery import Dataset
from google.cloud.bigquery import LoadJobConfig
from google.cloud.bigquery import SchemaField

client = bigquery.Client()

SCHEMA = [
    SchemaField('full_name', 'STRING', mode='required'),
    SchemaField('age', 'INTEGER', mode='required'),
    SchemaField('address', 'RECORD', mode='REPEATED', fields=(
        SchemaField('test', 'STRING', mode='NULLABLE'),
        SchemaField('second', 'STRING', mode='NULLABLE')
    ))
]

table_ref = client.dataset('TestApartments').table('Test2')
table = bigquery.Table(table_ref, schema=SCHEMA)
table = client.create_table(table)
When trying to upload a very simple JSON file to BigQuery, I get a rather ambiguous error:
400 Error while reading data, error message: JSON table encountered
too many errors, giving up. Rows: 1; errors: 1. Please look into the
error stream for more details.
Besides the fact that it makes me kind of sad that it's giving up on me :), that error description obviously does not really help...
Please find below how I try to upload the JSON and the sampledata.
job_config = bigquery.LoadJobConfig()
job_config.source_format = bigquery.SourceFormat.NEWLINE_DELIMITED_JSON

with open('testjson.json', 'rb') as source_file:
    job = client.load_table_from_file(
        source_file,
        table_ref,
        location='US',  # Must match the destination dataset location.
        job_config=job_config)  # API request

job.result()  # Waits for table load to complete.

print('Loaded {} rows into {}:{}.'.format(
    job.output_rows, dataset_id, table_id))
This is my JSON object
"[{'full_name':'test','age':2,'address':[{'test':'hi','second':'hi2'}]}]"
A JSON example would be wonderful as this seems the only way to upload nested data if I am not mistaken.
I have been reproducing your scenario using your same code and the JSON content you shared, and I suspect the issue is only that you are defining the JSON content between quotation marks (" or '), while it should not have that format.
The correct format is the one that @ElliottBrossard already shared with you in his answer:
{'full_name':'test', 'age':2, 'address': [{'test':'hi', 'second':'hi2'}]}
If I run your code using that content in the testjson.json file, I get the response Loaded 1 rows into MY_DATASET:MY_TABLE, and the content gets loaded into the table. If, instead, I use the format below (which you are using, according to your question and the comments on the other answer), I get google.api_core.exceptions.BadRequest: 400 Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1; errors: 1. Please look into the error stream for more details.
"{'full_name':'test','age':2,'address':[{'test':'hi','second':'hi2'}]}"
Additionally, you can go to the Jobs page in the BigQuery UI (following the link https://bigquery.cloud.google.com/jobs/YOUR_PROJECT_ID), and there you will find more information about the failed Load job. As an example, when I run your code with the wrong JSON format, this is what I get:
As you will see, here the error message is more relevant:
error message: JSON parsing error in row starting at position 0: Value encountered without start of object
It indicates that it could not find any valid start of a JSON object (i.e. an opening brace { at the very beginning of the object).
TL;DR: remove the quotation marks in your JSON object and the load job should be fine.
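If the file is generated from Python, one way to avoid the stray quotation marks altogether is to serialize each row with json.dumps and write one object per line (newline-delimited JSON). A minimal sketch, reusing the sample record from the question:

import json

rows = [{'full_name': 'test', 'age': 2,
         'address': [{'test': 'hi', 'second': 'hi2'}]}]

# One JSON object per line, with no enclosing quotes or list brackets.
with open('testjson.json', 'w') as f:
    for row in rows:
        f.write(json.dumps(row) + '\n')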
I think your JSON content should be:
{'full_name':'test','age':2,'address':[{'test':'hi','second':'hi2'}]}
(No outer square brackets.) As an example using the command-line client:
$ echo "{'full_name':'test','age':2,'address':[{'test':'hi','second':'hi2'}]}" \
> example.json
$ bq query --use_legacy_sql=false \
"CREATE TABLE tmp_elliottb.JsonExample (full_name STRING NOT NULL, age INT64 NOT NULL, address ARRAY<STRUCT<test STRING, second STRING>>);"
$ bq load --source_format=NEWLINE_DELIMITED_JSON \
tmp_elliottb.JsonExample example.json
$ bq head tmp_elliottb.JsonExample
+-----------+-----+--------------------------------+
| full_name | age | address |
+-----------+-----+--------------------------------+
| test | 2 | [{"test":"hi","second":"hi2"}] |
+-----------+-----+--------------------------------+
I am trying to write a Django app that uses Elasticsearch through the elasticsearch-dsl library for Python. I don't want to write all the switch-case statements and then pass search queries and filters accordingly.
I want a function that does the parsing stuff by itself.
For example, if I pass "some text url:github.com tags:es,es-dsl,django",
the function should output the corresponding query.
I searched for it in elasticsearch-dsl documentation and found a function that does the parsing.
https://github.com/elastic/elasticsearch-dsl-py/search?utf8=%E2%9C%93&q=simplequerystring&type=
However, I don't know how to use it.
I tried s = Search(using=client).query.SimpleQueryString("1st|ldnkjsdb"), but it shows a parsing error.
Can anyone help me out?
You can just plug the SimpleQueryString into the Search object; instead of a dictionary, send the elements as parameters of the object.
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search
from elasticsearch_dsl.query import SimpleQueryString
client = Elasticsearch()
_search = Search(using=client, index='INDEX_NAME')
_search = _search.filter(SimpleQueryString(
    query="this + (that | thus) -those",
    fields=["field_to_search"],
    default_operator="and"
))
Much of elasticsearch_dsl simply turns the dictionary representation into classes and functions, which makes the code look Pythonic and avoids hard-to-read Elasticsearch JSON bodies.
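As a follow-up usage note (the index and field names above are placeholders), the request is only sent once the Search object is executed; a short sketch:

# Send the request and iterate over the matching documents.
response = _search.execute()
for hit in response:
    print(hit.meta.id, hit.meta.score)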
I'm guessing you are asking about using elasticsearch-dsl with a query string, the way you would make a request with JSON data against the Elasticsearch API. If that's the case, this is how you would do it:
Assume you have the query in a query variable like this:
{
"query": {
"query_string" : {
"default_field" : "content",
"query" : "this AND that OR thus"
}
}
}
and now do this:
es = Elasticsearch(
    host=settings.ELASTICSEARCH_HOST_IP,    # Put your ES host IP
    port=settings.ELASTICSEARCH_HOST_PORT,  # Put your ES host port
)
index = settings.MY_INDEX # Put your index name here
result = es.search(index=index, body=query)
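Since the question is specifically about elasticsearch-dsl, the same query_string body can also be expressed with the Q helper; a rough sketch, assuming the same es client and index as above:

from elasticsearch_dsl import Q, Search

# Equivalent of the raw query_string body above, built with elasticsearch-dsl.
s = Search(using=es, index=index).query(
    Q("query_string", query="this AND that OR thus", default_field="content"))
response = s.execute()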
I'm trying to add a new field to an existing document in elasticsearch. I use the elasticsearch python API to do this.
My query is :
addField = { "script" : { "inline" : "ctx._source.langTranslation = test"}}
esclient.bulk(body = addField)
When I try this I get this error :
I have searched for how to avoid this error but haven't found anything. What is wrong with my request?
Thanks for your answers.
I have found a solution: instead of using the bulk function, I use the update function:
esclient.update(index=hit['_index'], doc_type=hit['_type'], id=hit['_id'],
                body={"doc": {"langTranslation": translation}})
And it works; I don't get any error message anymore.
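For reference, the bulk endpoint can work for this too, but it expects action/metadata entries rather than a bare script body; the elasticsearch.helpers.bulk helper builds those for you. A rough sketch, assuming hits is an iterable of search hits and translation holds the value to add:

from elasticsearch.helpers import bulk

# Each action is a partial-document update, mirroring the esclient.update() call above.
actions = (
    {
        "_op_type": "update",
        "_index": hit["_index"],
        "_type": hit["_type"],
        "_id": hit["_id"],
        "doc": {"langTranslation": translation},
    }
    for hit in hits
)
bulk(esclient, actions)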