I have a JSON document in my database that I want to modify frequently from my Python program, once every 25 seconds. I know how to upload a document to the database and read one back, but I do not know how to modify/replace a document.
This link shows the functions offered in the Python module. I see the ReplaceDocument function, but it takes a document link. How can I get the document link? Where am I supposed to look for this information?
Thanks.
It sounds like you have resolved it. As a summary, the code is below.
# Query a document
query = { 'query': 'SELECT * FROM <collection name> ....'}
docs = client.QueryDocuments(coll_link, query)
doc = list(docs)[0]
# Get the document link from attribute `_self`
doc_link = doc['_self']
# Modify the document
.....
# Replace the document via document link
client.ReplaceDocument(doc_link, doc)
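If the write happens every 25 seconds anyway, an upsert can avoid the read-then-replace round trip. A minimal sketch, assuming the same legacy pydocumentdb client and coll_link as above:
# Create the document if it does not exist, replace it otherwise
client.UpsertDocument(coll_link, doc)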
April 2020
If you are reading MS Azure's Quickstart guide and following a supporting git repo, note that there might be some differences.
For example,
from azure.cosmos import exceptions, CosmosClient, PartitionKey
endpoint = 'endpoint'
key = 'key'
db_name = 'cosmos-db-name'
container_name = 'container-name'
client = CosmosClient(endpoint, key)
db = client.create_database_if_not_exists(id=db_name)
container = db.create_container_if_not_exists(id=container_name, partition_key=PartitionKey(path="/.."), offer_throughput=456)
...
# Replace item
container.replace_item(doc_link, doc)
Regarding doc_link and doc in the above case: I encountered an error when I used doc['_self']. By using the primary key (the id) of the doc instead, the doc is updated.
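A minimal sketch of that id-based replace with the v4 SDK; the item id, partition-key value, and field name below are hypothetical:
# Read the item by its id (hypothetical id and partition key value)
doc = container.read_item(item='my-item-id', partition_key='my-partition-value')
# Modify it (hypothetical field name)
doc['state'] = 'updated'
# Replace by primary key (id) instead of the _self link
container.replace_item(item=doc['id'], body=doc)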
I have a Python program that loads an order into a treeview; the order is loaded in the form of documents in a Firestore collection from Firebase. When I press the order that I want to load, I call the function loadPedido with the id needed to filter them. But for some reason they don't load.
This is my code:
def loadPedido(idTarget):
    docs = db.collection(u'slots').where(u'slotId', u'==', idTarget).stream()
    for doc in docs:
        docu = doc.to_dict()
        nombre = docu.get('SlotName')
        entero = docu.get('entero')
        valor = docu.get('slotPrecio')
        print(f'{doc.id} => {nombre}')
        trvPedido.insert("", 'end', iid=doc.id, values=(doc.id, nombre, entero, valor))
idTarget is the id to filter by, and I checked with a print that it arrives correctly.
I tried this: if I write the value of the variable directly in the code, it loads correctly, like so:
...
docs = db.collection(u'slots').where(u'slotId', u'==', u"2996gHQ32CNFMp5vyieu").stream()
...
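Since the hard-coded string works but the variable does not, one thing worth ruling out (an assumption, as the question does not show where idTarget comes from) is a type or whitespace mismatch: Firestore's == filter only matches values of the same type. A quick diagnostic sketch:
# Confirm idTarget really is the exact string stored in Firestore
print(repr(idTarget), type(idTarget))
docs = db.collection(u'slots').where(u'slotId', u'==', str(idTarget).strip()).stream()
print(len(list(docs)))  # 0 means the filter value still does not match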
I am a very inexperienced programmer with no formal education. Details will be extremely helpful in any responses.
I have made several basic python scripts to call SOAP APIs, but I am running into an issue with a specific API function that has an embedded array.
Here is a sample excerpt from a working XML format to show nested data:
<bomData xsi:type="urn:inputBOM" SOAP-ENC:arrayType="urn:bomItem[]">
    <bomItem>
        <item_partnum></item_partnum>
        <item_partrev></item_partrev>
        <item_serial></item_serial>
        <item_lotnum></item_lotnum>
        <item_sublotnum></item_sublotnum>
        <item_qty></item_qty>
    </bomItem>
    <bomItem>
        <item_partnum></item_partnum>
        <item_partrev></item_partrev>
        <item_serial></item_serial>
        <item_lotnum></item_lotnum>
        <item_sublotnum></item_sublotnum>
        <item_qty></item_qty>
    </bomItem>
</bomData>
I have tried 3 different things to get this to work, to no avail.
I can generate nearly the exact XML from my script, but a key attribute missing is the 'SOAP-ENC:arrayType="urn:bomItem[]"' in the above XML example.
Option 1 was using MessagePlugin, but I get an error because my section is the third element and the plugin always injects into the first element. I tried body[2], but this throws an error.
Option 2 was trying to create the object via the factory. I read a lot of Stack Overflow, but I might be missing something here.
Option 3 looked simple enough, but also failed: I tried setting the values in the JSON directly. I got these examples by converting an XML sample to JSON.
I have also tried several other minor things to get it working, but they are not worth mentioning. Although, if there is a way to somehow do the following, then I'm all ears:
bomItem[]: bomData = {"bomItem"[{...,...,...}]}
Here is a sample of my script:
# for python 3
# using pip install suds-py3
from suds.client import Client
from suds.plugin import MessagePlugin

# Config
# option 1: trying to set it as an array using plugin
class MyPlugin(MessagePlugin):
    def marshalled(self, context):
        body = context.envelope.getChild('Body')
        bomItem = body[0]
        bomItem.set('SOAP-ENC:arrayType', 'urn:bomItem[]')

URL = "http://localhost/application/soap?wsdl"
client = Client(URL, plugins=[MyPlugin()])

transact_info = {
    "username": "",
    "transaction": "",
    "workorder": "",
    "serial": "",
    "trans_qty": "",
    "seqnum": "",
    "opcode": "",
    "warehouseloc": "",
    "warehousebin": "",
    "machine_id": "",
    "comment": "",
    "defect_code": ""
}

# WIP - trying to get bomData below working first
inputData = {
    "dataItem": [
        {
            "fieldname": "",
            "fielddata": ""
        }
    ]
}

# option 2: trying to create the element here and define as an array
#inputbom = client.factory.create('ns3:inputBOM')
#inputbom._type = "SOAP-ENC:arrayType"
#inputbom.value = "urn:bomItem[]"

bomData = {
    # option 3: trying to set the type and array type in JSON
    #"#xsi:type": "urn:inputBOM",
    #"#SOAP-ENC:arrayType": "urn:bomItem[]",
    "bomItem": [
        {
            "item_partnum": "",
            "item_partrev": "",
            "item_serial": "",
            "item_lotnum": "",
            "item_sublotnum": "",
            "item_qty": ""
        },
        {
            "item_partnum": "",
            "item_partrev": "",
            "item_serial": "",
            "item_lotnum": "",
            "item_sublotnum": "",
            "item_qty": ""
        }
    ]
}

try:
    response = client.service.transactUnit(transact_info, inputData, bomData)
    print("RESPONSE: ")
    print(response)
    #print(client)
    #print(envelope)
except Exception as e:
    # handle error here
    print(e)
I appreciate any help and hope it is easy to solve.
I have found the answer I was looking for. At least a working solution.
In any case, option 1 worked out. I read up on it at the following link:
https://suds-py3.readthedocs.io/en/latest/
You can review the 'MessagePlugin' section.
I found a solution to get message plugin working from the following post:
unmarshalling Error: For input string: ""
A user posted an example of how to crawl through the XML structure and modify it.
Here is my modified example to get my script working:
# Using MessagePlugin to modify elements before sending to server
class MyPlugin(MessagePlugin):
    # created method that could be reused to modify sections with similar
    # structure/requirements
    def addArrayType(self, dataType, arrayType, transactUnit):
        # this is the code that is key to crawling through the XML - I get
        # the child of each parent element until I am at the right level for
        # modification
        data = transactUnit.getChild(dataType)
        if data:
            data.set('SOAP-ENC:arrayType', arrayType)

    def marshalled(self, context):
        # Alter the envelope so that the xsd namespace is allowed
        context.envelope.nsprefixes['xsd'] = 'http://www.w3.org/2001/XMLSchema'
        body = context.envelope.getChild('Body')
        transactUnit = body.getChild("transactUnit")
        if transactUnit:
            self.addArrayType('inputData', 'urn:dataItem[]', transactUnit)
            self.addArrayType('bomData', 'urn:bomItem[]', transactUnit)
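The plugin is attached the same way as in the original script; a minimal usage sketch, with URL and the argument dicts as defined earlier:
client = Client(URL, plugins=[MyPlugin()])
response = client.service.transactUnit(transact_info, inputData, bomData)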
I'm running into the following error when trying to load a table from google cloud storage:
BadRequest: 400 Load configuration must specify at least one source URI (POST https://www.googleapis.com/bigquery/v2/projects/fansidata/jobs)
Meanwhile my URI is valid (i.e. I can see the object in the GCS web app):
uris = ['gs://my-bucket-name/datastore_backup_analytics_2016_12_21_2_User/1569751766512529035929A5AA9742/output-0']
job_name = 'Load_User'
destinationTable = dataset.table('Transfer')
job = bigquery_client.load_table_from_storage(job_name, destinationTable, uris)
job.begin()
I could be wrong, but it looks like load_table_from_storage in the Python API expects a single string for the third argument as opposed to a list. If you want to match multiple files, you can use a * at the end. For example,
uri = 'gs://my-bucket-name/datastore_backup_analytics_2016_12_21_2_User/1569751766512529035929A5AA9742/output-*'
job_name = 'Load_User'
destinationTable = dataset.table('Transfer')
job = bigquery_client.load_table_from_storage(job_name, destinationTable, uri)
job.begin()
Client.load_table_from_storage takes one or more source URIs (that is what *source_uris means in Python). E.g.:
job = client.load_table_from_storage(
    'load-job-123', my_table_object,
    'gs://my-bucket-name/table_one',
    'gs://my-bucket-name/table_two')
I am trying to get a list of workset names and ids from the active document using the Revit API inside of a Python node in Dynamo. I am trying to access the workset table, but this code returns nothing:
doc = __doc__
workset = ActiveWorkset(doc)
active_id = workset.ActiveWorksetId()
OUT = active_id
For now I was just trying to see if I can get the active workset first, but even that doesn't work.
I haven't tried this in Dynamo, but my trusty RevitPythonShell thinks this should work:
worksetTable = doc.GetWorksetTable()
activeId = worksetTable.GetActiveWorksetId()
workset = worksetTable.GetWorkset(activeId)
this is based on the example from the Revit 2014 API document in the SDK...
The output:
>>> workset
<Autodesk.Revit.DB.Workset object at 0x000000000000002E [Autodesk.Revit.DB.Workset]>
Based on your example, you probably want to do this at the end:
OUT = activeId
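One caveat for the Dynamo side: __doc__ is just the module docstring in Python, so it will not hand you the Revit document. In a Dynamo Python node the active document is usually fetched through RevitServices; a minimal sketch, assuming the standard Dynamo references are available:
import clr
clr.AddReference('RevitServices')
from RevitServices.Persistence import DocumentManager

# the current Revit DB document, as exposed to Dynamo Python nodes
doc = DocumentManager.Instance.CurrentDBDocument
worksetTable = doc.GetWorksetTable()
OUT = worksetTable.GetActiveWorksetId()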
I've been using Django for a couple of days and set up a basic blog from a tutorial with Django comments.
I've got a totally separate Python script that generates screenshots and uploads them to Amazon S3. Now I'd like my Django app to display all the images in the bucket and use a comment system on the images. Preferably I'd do this by just storing the URLs in my sqlite db; currently I have them hard-coded, the app displays all images in the db, and comments are enabled on them.
My model:
(Does this need a foreign key to the django comments or is that just part of the Django Magic?!)
class Image(models.Model):
    imgUrl = models.CharField(max_length=200)
    meta = models.CharField(max_length=300)

    def __unicode__(self):
        return self.imgUrl
My bucket structure:
https://s3-eu-west-1.amazonaws.com/bucket/revision/process/images.png
Almost all the tutorials and packages I'm finding are based on upload/download rather than the simple "for key in bucket" type approach that I want.
One of my problems is understanding how I can integrate my boto functions with Django if I'm using base.html. In an earlier tutorial I had an index page which had a view and could call functions from there. But base.html doesn't need that, so I'm starting to get a little lost.
I haven't looked up whether the boto API changed, but this is how it worked the last time I looked:
from boto.s3.connection import S3Connection
from boto.s3.key import Key
import s3config
conn = S3Connection(s3config.passwd, s3config.secret)
bucket = conn.get_bucket(s3config.bucket)
s3_path = '/some/path/in/your/bucket'
keys = bucket.list(s3_path)
# or if you want all keys:
# keys = bucket.get_all_keys()
for key in keys:
    print key
    # here you can download or do other stuff
    # with the keys like get some metadata
    print key.name
    print key.etag
    print key.size
    print key.last_modified
#s3config.py
passwd = 'BLABALBALABALA'
secret = 'xvdwv3efefefefefef'
bucket = 'name-of-your-bucket'
Update:
Amazon S3 is a key-value store, where the key is a string. So nothing prevents you from putting in keys like:
/this/string/key/looks/like/a/unix/path
/folder/images/fileA.jpg
/folder/images/fileB.jpg
/folder/images/folderX/fileX1.jpg
now bucket.list(prefix="/folder/images/") would yield the latter three.
Look here for further details:
http://readthedocs.org/docs/boto/en/latest/ref/s3.html#boto-s3-bucket
http://docs.amazonwebservices.com/AmazonS3/latest/API/RESTBucketGET.html?r=8270
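To connect this back to the question's Image model, a minimal sketch that stores one row per key under the question's bucket structure (the prefix is taken from the question; generate_url() and the field mapping are illustrative):
from boto.s3.connection import S3Connection
import s3config

conn = S3Connection(s3config.passwd, s3config.secret)
bucket = conn.get_bucket(s3config.bucket)
for key in bucket.list(prefix='revision/process/'):
    # a signed URL valid for an hour; for public objects a plain
    # https URL built from the bucket and key name also works
    url = key.generate_url(expires_in=3600)
    Image.objects.get_or_create(imgUrl=url, defaults={'meta': key.name})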
This is my code to store results from S3 into MySQL using boto and Django.
from demo.models import Movies
import boto
from boto.s3.key import Key
import string
from django.db import connection, transaction
def movietitle(b):
    key = b.get_key('netflix/movie_titles.txt')
    content = key.get_contents_as_string()
    line = content.split('\n')
    args = []
    for imovie in line:
        if len(imovie) > 0:
            imovie = imovie.split(',')
            movieid = imovie[0]
            year = imovie[1]
            title = imovie[2]
            iargs = [string.atoi(movieid), title, year]
            args.append(iargs)
    cursor = connection.cursor()
    sql = "insert into demo_movies(MovieID,MovieName,ReleaseYear) values(%s,%s,%s)"
    cursor.executemany(sql, args)
    transaction.commit_unless_managed()
    cursor.close()