Python to C# Dictionary

I'm trying to convert some python code into C#. The python code uses a dictionary structure and I've created a similar object in C#. I'm hoping to get help on a specific section:
'''
letterHTMLTemplates = {}
cursor = conn.cursor()
cursor.execute("select lower(cast([id] as nvarchar(36))) as [id], [template] from dbo.[lookup.letter]")
for row in cursor:
    letterHTMLTemplates.update({row.id: str(row.template)})

# Replace the <head> tag in all letters, as it contains the confetti nonsense
for letterKey in letterHTMLTemplates:
    startHead = letterHTMLTemplates[letterKey].find("<head>")
    endHead = letterHTMLTemplates[letterKey].find("</head>") + len("</head>")
    beforeHead = letterHTMLTemplates[letterKey][:startHead]
    afterHead = letterHTMLTemplates[letterKey][endHead:]
    newHead = '<head><meta charset="utf-8"></head>'  # Replace the head with this - necessary to render weird characters
    letterHTMLTemplates[letterKey] = beforeHead + newHead + afterHead
'''
I've written this so far in C#, but I'm having trouble with the find part:
'''
public static void parseSlateDocs()
{
    string query = "select lower(cast([id] as nvarchar(36))) as [id], [template] from dbo.[lookup.letter]";
    // [id] comes back as a lowercased 36-character guid string, so the key type is string, not int
    Dictionary<string, string> letterTemplate = new Dictionary<string, string>();

    using (SqlConnection conn = new SqlConnection(GetSlateConnectionString()))
    using (SqlCommand cmd = new SqlCommand(query, conn))
    {
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                letterTemplate.Add(reader.GetString(0), reader.GetString(1));
            }
        }
    }
    // find the "<head>" tag in each template here
}
'''

Here you need the IndexOf(string) method:

string str = "Hello Friends....How are you...";

// "How" is present in the main string,
// so this outputs 17
int i = str.IndexOf("How");
Console.WriteLine("First index of 'How' is " + i);

// "Chair" is not present in the main string,
// so as per the rules IndexOf returns -1
int i1 = str.IndexOf("Chair");
Console.WriteLine("First index of 'Chair' is " + i1);

Use this method (together with Substring) to find the head tag.
Example adapted from GeeksForGeeks.
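For reference, the -1 sentinel matters in the original Python too: `find()` behaves like `IndexOf` and returns -1 when the tag is missing, so it's worth guarding before slicing. A minimal sketch of the head-replacement with that guard (hypothetical helper, not from the question):

```python
def replace_head(html, new_head='<head><meta charset="utf-8"></head>'):
    # find() returns -1 when the tag is absent, just like IndexOf in C#
    start = html.find("<head>")
    end = html.find("</head>")
    if start == -1 or end == -1:
        return html  # no head to replace; leave the template unchanged
    return html[:start] + new_head + html[end + len("</head>"):]
```

The same structure carries over to C# with `IndexOf` and `Substring`.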


How to update a document with a new attribute

I'm using ArangoDB and want to update a document with a new attribute.
My steps are as follows:
Get the document and store it in a variable:
doc = self.db.get_document(document_name="document_name")
Iterate over the doc and add the attribute to it, e.g.:
for idx, movie in enumerate(doc):
    print(movie)
    query = 'UPSERT { movie_id: @movie_id, scene_element: @scene_element } INSERT \
        { movie_id: @movie_id, scene_element: @scene_element, index: @index, \
        _key: @_key, _id: @_id, _rev: @_rev, url_path: @url_path, captions: @captions, ref: @ref, \
        experts: @experts, groundtruth: @groundtruth, base: @base, source: @source, File: @File } UPDATE \
        { index: @index \
        } IN document_name'
    movie["index"] = idx
    self.pdb.aql.execute(query, bind_vars=movie)
And this works. Now I'm trying to use this query differently, without the INSERT command. Why doesn't something like this work:
query = 'UPSERT { movie_id: @movie_id, scene_element: @scene_element } UPDATE \
    { index: @index \
    } IN document_name'
finding the correct movie_id and scene_element and just updating it with the new index?
The error I'm getting is invalid syntax, and it asks me to add the INSERT command.
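That error is expected: AQL's UPSERT grammar always requires both an INSERT and an UPDATE branch. When the document is known to exist already, the matched-update is normally written as FOR + FILTER + UPDATE instead. A hedged sketch (the collection and bind-variable names come from the question above; the commented execute call assumes python-arango and a live connection):

```python
# Plain matched update: no INSERT branch needed, unlike UPSERT.
query = """
FOR d IN document_name
    FILTER d.movie_id == @movie_id AND d.scene_element == @scene_element
    UPDATE d WITH { index: @index } IN document_name
"""
# Only the bind variables the query actually references are required.
bind_vars = {"movie_id": "m_1", "scene_element": "s_1", "index": 0}
# self.pdb.aql.execute(query, bind_vars=bind_vars)  # requires an ArangoDB connection
```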

Firestore - Recursively Copy a Document and all its subcollections/documents

We're using Google's Firestore for embedded machine configuration data. Because this data controls a configurable pageflow and lots of other things, it's segmented up into lots of subcollections. Each machine has its own top-level document in this system. However, it takes forever when we go to add machines to the fleet, because we have to manually copy over all this data in multiple documents. Does anyone know how to recursively copy a Firestore document, all its subcollections, their documents, subcollections, etc. in Python? You'd have a document ref to the top level as well as a name for the new top-level doc.
You can use something like this to recursively read from one collection and write to another:
import logging
from typing import Iterable

from google.cloud import firestore

log = logging.getLogger(__name__)
db_client = firestore.Client()
batch_nr = 0

def read_recursive(
    source: Iterable[firestore.DocumentReference],
    target: firestore.CollectionReference,
    batch: firestore.WriteBatch,
) -> None:
    global batch_nr
    for source_doc_ref in source:
        document_data = source_doc_ref.get().to_dict()
        target_doc_ref = target.document(source_doc_ref.id)
        if batch_nr == 500:
            log.info("committing %s batched operations..." % batch_nr)
            batch.commit()
            batch_nr = 0
        batch.set(
            reference=target_doc_ref,
            document_data=document_data,
            merge=False,
        )
        batch_nr += 1
        for source_coll_ref in source_doc_ref.collections():
            target_coll_ref = target_doc_ref.collection(source_coll_ref.id)
            read_recursive(
                source=source_coll_ref.list_documents(),
                target=target_coll_ref,
                batch=batch,
            )

batch = db_client.batch()
read_recursive(
    source=db_client.collection("src_collection_name").list_documents(),
    target=db_client.collection("target_collection_name"),
    batch=batch,
)
batch.commit()
Writes are done in batches of up to 500 operations (the Firestore limit), and this saves a lot of time (in my case it finished in half the time compared with individual set() calls).
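The batch bookkeeping above (count the queued ops, commit at 500, start a fresh batch) can be factored into a small helper. A hedged sketch in plain Python: `new_batch` is whatever batch factory you pass in (it would be `db_client.batch` with the real Firestore client), so the Firestore calls themselves are assumptions here:

```python
class BatchedWriter:
    """Queue document writes and commit every `limit` ops
    (Firestore caps a WriteBatch at 500 operations)."""

    def __init__(self, new_batch, limit=500):
        self.new_batch = new_batch  # factory, e.g. db_client.batch in real use
        self.limit = limit
        self.batch = new_batch()
        self.count = 0
        self.commits = 0

    def set(self, ref, data):
        self.batch.set(ref, data)
        self.count += 1
        if self.count >= self.limit:
            self.flush()

    def flush(self):
        if self.count == 0:
            return
        self.batch.commit()
        self.commits += 1
        self.batch = self.new_batch()  # a committed batch can't be reused
        self.count = 0
```

The recursive copy would then call `writer.set(ref, data)` for each document and `writer.flush()` once at the very end.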
The question asks for Python, but in my case I needed to do a recursive deep copy of Firestore docs / collections in NodeJS (TypeScript), using a Document as the starting point of the recursion.
(This is a solution based on the Python script by @cristi.)
Function definition
import {
CollectionReference,
DocumentReference,
DocumentSnapshot,
QueryDocumentSnapshot,
WriteBatch,
} from 'firebase-admin/firestore';
interface FirestoreCopyRecursiveContext {
batchSize: number;
/**
* Wrapped Firestore WriteBatch. In firebase-admin@11.0.1, you can't continue
* using the WriteBatch object after you call WriteBatch.commit().
*
* Hence, we need to replace "used up" WriteBatches with new ones.
* We also need to reset the count after committing, and because we
* want all recursive invocations to share the same count + WriteBatch instance,
* we pass this data via object reference.
*/
writeBatch: {
writeBatch: WriteBatch,
/** Num of items in current batch. Reset to 0 when `commitBatch` commits. */
count: number;
};
/**
* Function that commits the batch if it reached the limit or is forced to.
* The WriteBatch instance is automatically replaced with fresh one
* if commit did happen.
*/
commitBatch: (force?: boolean) => Promise<void>;
/** Callback to insert custom logic / write operations when we encounter a document */
onDocument?: (
sourceDoc: QueryDocumentSnapshot | DocumentSnapshot,
targetDocRef: DocumentReference,
context: FirestoreCopyRecursiveContext
) => unknown;
/** Callback to insert custom logic / write operations when we encounter a collection */
onCollection?: (
sourceDoc: CollectionReference,
targetDocRef: CollectionReference,
context: FirestoreCopyRecursiveContext
) => unknown;
logger?: Console['info'];
}
type FirestoreCopyRecursiveOptions = Partial<Omit<FirestoreCopyRecursiveContext, 'commitBatch'>>;
/**
* Copy all data from one document to another, including
* all subcollections and documents within them, etc.
*/
export const firestoreCopyDocRecursive = async (
/** Source Firestore Document Snapshot, descendants of which we want to copy */
sourceDoc: QueryDocumentSnapshot | DocumentSnapshot,
/** Destination Firestore Document Ref */
targetDocRef: DocumentReference,
options?: FirestoreCopyRecursiveOptions,
) => {
const batchSize = options?.batchSize ?? 500;
const writeBatchRef = options?.writeBatch || { writeBatch: firebaseFirestore.batch(), count: 0 }; // firebaseFirestore: your initialized Firestore Admin instance
const onDocument = options?.onDocument;
const onCollection = options?.onCollection;
const logger = options?.logger || console.info;
const commitBatch = async (force?: boolean) => {
// Commit batch only if size limit hit or forced
if (writeBatchRef.count < batchSize && !force) return;
logger(`Committing ${writeBatchRef.count} batched operations...`);
await writeBatchRef.writeBatch.commit();
// Once we commit the batched data, we have to create another WriteBatch,
// otherwise we get error:
// "Cannot modify a WriteBatch that has been committed."
// See https://dev.to/wceolin/cannot-modify-a-writebatch-that-has-been-committed-265f
writeBatchRef.writeBatch = firebaseFirestore.batch();
writeBatchRef.count = 0;
};
const context = {
batchSize,
writeBatch: writeBatchRef,
onDocument,
onCollection,
commitBatch,
};
// Copy the contents of the current docs
const sourceDocData = sourceDoc.data();
await writeBatchRef.writeBatch.set(targetDocRef, sourceDocData, { merge: false });
writeBatchRef.count += 1;
await commitBatch();
// Allow to make additional changes to the target document from
// outside the func after the copy command is enqueued / committed.
await onDocument?.(sourceDoc, targetDocRef, context);
// And try to commit in case user updated the count but forgot to commit
await commitBatch();
// Check for subcollections and docs within them
for (const sourceSubcoll of await sourceDoc.ref.listCollections()) {
const targetSubcoll = targetDocRef.collection(sourceSubcoll.id);
// Allow to make additional changes to the target collection from
// outside the func after the copy command is enqueued / committed.
await onCollection?.(sourceSubcoll, targetSubcoll, context);
// And try to commit in case user updated the count but forgot to commit
await commitBatch();
for (const sourceSubcollDoc of (await sourceSubcoll.get()).docs) {
const targetSubcollDocRef = targetSubcoll.doc(sourceSubcollDoc.id);
await firestoreCopyDocRecursive(sourceSubcollDoc, targetSubcollDocRef, context);
}
}
// Commit all remaining operations
return commitBatch(true);
};
How to use it
const sourceDocRef = getYourFaveFirestoreDocRef(x);
const sourceDoc = await sourceDocRef.get();
const targetDocRef = getYourFaveFirestoreDocRef(y);
// Copy firestore resources
await firestoreCopyDocRecursive(sourceDoc, targetDocRef, {
logger,
// Note: In my case some docs had their doc ID also copied as a field.
// Because the copied documents get a new doc ID, we need to update
// those fields too.
onDocument: async (sourceDoc, targetDocRef, context) => {
const someDocPattern = /^nameOfCollection\/[^/]+?$/;
const subcollDocPattern = /^nameOfCollection\/[^/]+?\/nameOfSubcoll\/[^/]+?$/;
// Update the field that holds the document ID
if (targetDocRef.path.match(someDocPattern)) {
const docId = targetDocRef.id;
context.writeBatch.writeBatch.set(targetDocRef, { docId }, { merge: true });
context.writeBatch.count += 1;
await context.commitBatch();
return;
}
// In a subcollection, I had to update multiple ID fields
if (targetDocRef.path.match(subcollDocPattern)) {
const docId = targetDocRef.parent.parent?.id;
const subcolDocId = targetDocRef.id;
context.writeBatch.writeBatch.set(targetDocRef, { docId, subcolDocId }, { merge: true });
context.writeBatch.count += 1;
await context.commitBatch();
return;
}
},
});

How to store Python output received in Node.js?

I'm invoking a Python script from Node.js. The Python script retrieves data from a REST API, stores it in a dataframe, and then there's a search function based on user input. I'm confused about what type Python sends the data to Node.js as. I've tried to convert it into a string, but in Node.js it says it is an unresolved variable type. Here's the code:
import sys
import json
import requests
from pandas import json_normalize

r = requests.get(url)
data = r.json()
nested = json.loads(r.text)
nested_full = json_normalize(nested)
req_data = json_normalize(nested, record_path='items')
search = req_data.get(["name", "id"])
#search.head(10)
filter = sys.argv[1:]
print(filter)
input = filter[0]
print(input)
result = search[search["requestor_name"].str.contains(input)]
result = result.to_string(index=False)
response = '```' + str(result) + '```'
print(response)
sys.stdout.flush()
Here's the Node.js program that invokes the above Python script. How do I store the output in a format I can pass to another function in Node?
var input = 'robert';
var childProcess = require("child_process").spawn('python', ['./search.py', input], {stdio: 'inherit'})
const stream = require('stream');
const format = require('string-format')
childProcess.on('data', function(data){
    process.stdout.write("python script output", data)
    result += String(data);
    console.log("Here it is", data);
});
childProcess.on('close', function(code) {
    if (code === 1) {
        process.stderr.write("error occured", code);
        process.exit(1);
    }
    else {
        process.stdout.write('done');
    }
});
According to the docs, listen on the child's stdout stream instead of the process object. Also drop the `stdio: 'inherit'` option: with it, the child's output goes straight to the parent's terminal and `childProcess.stdout` is null, so there is nothing to listen to.
childProcess.stdout.on('data', (data) => {
    console.log(`stdout: ${data}`);
});
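On the Python side, it also helps to print one machine-readable JSON document rather than a pandas string dump, so the Node parent can simply `JSON.parse` whatever it captures from stdout. A minimal sketch (the field names are placeholders, not from the question's API):

```python
import json
import sys

def emit(rows):
    """Serialize the result as a single JSON document on stdout."""
    payload = json.dumps({"result": rows})
    sys.stdout.write(payload)
    sys.stdout.flush()
    return payload

emit([{"name": "robert", "id": 1}])
```

The Node side would then accumulate the `data` chunks into a string and call `JSON.parse` on it in the `close` handler.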

Django/JSON: Finding Species in Range, Printing in JSON

I am working on a Species Recommendation tool that will find all of the trees in a GPS range, average out the trees of allowed species, and then return the results in a JSON dict that can be parsed and read out.
I was working with folks from Code for SF who were using the K Nearest Neighbors algorithm, which was really nice, but it kept crashing the application and was only pulling from a CSV I uploaded, rather than the database of trees and planting sites I created.
Is there a way to implement K Nearest Neighbors to search my database that's better than using latitude__range and longitude__range? This is my current view:
def sg_species_guide_latlong_json(request, latitude=1, longitude=1):
    # Build the Decimal from a string, not a float, to avoid representation error
    delta = decimal.Decimal("0.01")
    min_latitude = decimal.Decimal(latitude) - delta
    max_latitude = decimal.Decimal(latitude) + delta
    min_longitude = decimal.Decimal(longitude) - delta
    max_longitude = decimal.Decimal(longitude) + delta
    response = JsonResponse(dict(species_list=list(
        RecommendedSpecies.objects.values(
            "species__id_species",
            "species__trees_of_species__condition__score",
            "species__name_scientific",
        ).filter(
            Q(species__trees_of_species__planting_site__coord_lat__range=[min_latitude, max_latitude]),
            Q(species__trees_of_species__planting_site__coord_long__range=[min_longitude, max_longitude]),
        ).annotate(
            avg_condition=Avg('species__trees_of_species__condition__score'),
            num_trees=Count('species__trees_of_species'),
        ).order_by('-avg_condition')
    )))
    return response
It's VERY slow, especially for what should be an AJAX/JSON call.
In addition, I want to print out the results in a format that contains the species name, number of trees, and average condition. Sorting that is easy in my view, but printing it out in JSON has been really tough. The dicts that JSON returns look like this:
{"species_list": [{"num_trees": 102, "species__trees_of_species__condition__score": 5, "species__id_species": 88, "avg_condition": 5.0, "species__name_scientific": "Arbutus x marina"}, {"num_trees": 828, "species__trees_of_species__condition__score": 4, "species__id_species": 88, "avg_condition": 4.0, "species__name_scientific": "Arbutus x marina"}]}
I would like to parse each object and display the scientific name, average condition, and number of trees.
Currently this is my HTML:
<script type="text/javascript">
function if_gmap_updateInfoWindow()
{
    var lng = gmapmarker.getPosition().lng().toFixed(6)
    var lat = gmapmarker.getPosition().lat().toFixed(6)
    infoWindow.setContent("Longitude: " + lng + "<br>" + "Latitude: " + lat);
    document.getElementById("location-info").innerHTML = 'Lat / Long: ' + lat + ' x ' + lng;
    // Call the model api
    // TODO consts
    $.ajax({
        url: '/species-guide/json/' + lat + '/' + lng,
        dataType: 'json',
        type: 'get',
        success: function(response){
            document.getElementById("species-recommendations").innerHTML = response['species_list'].join('<br>')
            console.log(response)
        }
    })
} // end of if_gmap_bindInfoWindow
</script>
which returns a list of [object Object] instead of keys & values
I would love to have it spit out keys & values using something like this:
var species_recs = response['species_list']
document.getElementById("species-recommendations").innerHTML =
    species_recs.forEach(species => {
        Object.keys(species).forEach(key => {
            '<pre>' + species[key] + '</pre><br>';
        })
    })
but that returns undefined. Is there a better way to get it to yield the data?
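As a side note on the duplicate rows: because the raw condition score is included in values(), each distinct score becomes its own group, which is why "Arbutus x marina" appears twice in the sample payload. If the queryset stays as-is, the rows can be merged per species after the fact; a hedged sketch (plain Python, keyed on the field names from the JSON above):

```python
def collapse(species_list):
    """Merge duplicate per-score rows into one entry per species,
    weighting each average by its tree count."""
    totals = {}
    for row in species_list:
        name = row["species__name_scientific"]
        t = totals.setdefault(name, {"num_trees": 0, "score_sum": 0.0})
        t["num_trees"] += row["num_trees"]
        t["score_sum"] += row["avg_condition"] * row["num_trees"]
    return {
        name: {
            "num_trees": t["num_trees"],
            "avg_condition": t["score_sum"] / t["num_trees"],
        }
        for name, t in totals.items()
    }
```

The cleaner fix server-side would be to drop the score field from values() so the ORM groups by species alone, but the helper above works on the response as currently shaped.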

How to get all attributes for a particular XML node in Qt

Is it possible to get all attributes for a particular node in PyQt?
For example, consider the following node:
<asset Name="3dAsset" ID="5"/>
I want to retrieve the "Name" and "ID" strings.
Is it possible?
Thanks in advance
You can retrieve a particular attribute's value using the function:
QString QDomElement::attribute ( const QString & name, const QString & defValue = QString() ) const
To get all the attributes, use:
QDomNamedNodeMap QDomElement::attributes () const
and then traverse the QDomNamedNodeMap and get the value of each of the attributes. Hope it helps.
Edit: Try this one.
With the QDomNamedNodeMap you have, call
QDomNode QDomNamedNodeMap::item ( int index ) const
which will return a QDomNode for the particular attribute.
Then call
QDomAttr QDomNode::toAttr () const
With the QDomAttr obtained, call
QString name () const
which will return the name of the attribute.
Hope it helps.
How to get the first attribute name/value in PySide/PyQt:
if node.hasAttributes():
    nodeAttributes = node.attributes()
    attributeItem = nodeAttributes.item(0)  # pulls out the first item
    attribute = attributeItem.toAttr()
    attributeName = attribute.name()
    attributeValue = attribute.value()
This just shows how to get one name/value pair, but it is easy enough to extend to a loop using nodeAttributes.length().
This is for C++. I ran into the same problem. You need to convert to QDomAttr. I'm sure the API is the same in Python.
if( Node.hasAttributes() )
{
    QDomNamedNodeMap map = Node.attributes();
    for( int i = 0 ; i < map.length() ; ++i )
    {
        if( !map.item(i).isNull() )
        {
            QDomNode debug = map.item(i);
            QDomAttr attr = debug.toAttr();
            if( !attr.isNull() )
            {
                cout << attr.value().toStdString();
                cout << attr.name().toStdString();
            }
        }
    }
}
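If the XML doesn't have to go through the Qt DOM at all, the Python standard library's ElementTree exposes a node's attributes directly as a dict. A side note rather than a PyQt answer, using the sample node from the question:

```python
import xml.etree.ElementTree as ET

elem = ET.fromstring('<asset Name="3dAsset" ID="5"/>')
# .attrib is a plain dict of attribute name -> value
print(elem.attrib)  # {'Name': '3dAsset', 'ID': '5'}
```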
