I am trying to search a SQL database and confirm whether a corresponding directory exists. If the directory does not exist, the script should send off an email notification. I have attempted to create something, but I am not well versed in PowerShell.
I am able to get all of the data from our SQL server, but I am running into an error with $($Row.[Last Name]). It states that it is unable to find the [Last Name] type, even though it finds Account and IsActive just fine.
Unable to find type [Last Name]: make sure that the assembly containing this type is loaded.
At \cottonwood\users\CB\My Documents\SQLserver-search.ps1:44 char:17
+ $Row.[Last Name] <<<<
+ CategoryInfo : InvalidOperation: (Last Name:String) [], RuntimeException
+ FullyQualifiedErrorId : TypeNotFound
I'm not sure if my question is clear or not. I'm new to Stack Overflow. Any help would be greatly appreciated. Thanks in advance.
Param (
$Path = "\\cottonwood\users\Shared\Pool Acquisitions",
$SMTPServer = "generic-mailserver",
$From = "generic-outbound-email",
#The below commented out line is used to test with just one individual. Be sure to comment out the one with all individuals before troubleshooting.
#$To = @("generic-email"),
$port = "587",
$To = @("generic-inbound-email"),
$Subject = "Folders Added in",
$logname = "\\cottonwood\users\Shared\Loan Documents - Active\logs\New Folders$date.txt",
$date = (Get-Date -Format MMddyyyy),
$SMTPBody = "`nThe following Pool Acquisitions folders have been added in the last 24 hours:`n`n"
)
$SQLServer = "REDWOOD" #use Server\Instance for named SQL instances!
$SQLDBName = "MARS"
$SqlQuery = "select Account, IsActive, [Last Name] FROM vw_loans WHERE LEFT(Account,1)<>'_' ORDER BY Account"
$SqlConnection = New-Object System.Data.SqlClient.SqlConnection
$SqlConnection.ConnectionString = "Server = $SQLServer; Database = $SQLDBName; Integrated Security = True"
$SqlCmd = New-Object System.Data.SqlClient.SqlCommand
$SqlCmd.CommandText = $SqlQuery
$SqlCmd.Connection = $SqlConnection
$SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter
$SqlAdapter.SelectCommand = $SqlCmd
$DataSet = New-Object System.Data.DataSet
$SqlAdapter.Fill($DataSet)
$SqlConnection.Close()
clear
$DataSet.Tables[0]
foreach ($Row in $dataset.Tables[0].Rows) {
write-Output "$($Row)"
write-Output "$($Row.Account)"
write-Output "$($Row.IsActive)"
write-Output "$($Row.[Last Name])"
}
if($Row.IsActive -eq $True){
$ChkPath = "U:\Shared\Loan Documents - Active\$Row.Account - $Row.[Last Name]"
}
else{
$ChkPath = "U:\Shared\Loan Documents - Inactive\$Row.Account - $Row.[Last Name]"
}
$FileExist = Test-Path $ChkPath
$SMTPMessage = @{
To = $To
From = $From
Subject = "$Subject $Path"
Smtpserver = $SMTPServer
Port = $port
}
If($FileExist -eq $True) {Write-Host "Null Response"}
else
{ $SMTPBody = "Loan folder is missing. Please advise."
$LastWrite | ForEach { $SMTPBody += "$($_.FullName)`n" }
UseSSL = $true
}
if($Activity = 'y') should be if($Activity -eq 'y')
In PowerShell, = always assigns a value rather than comparing, so that condition always evaluates to true.
I'm also pretty sure param blocks have to be at the top of a script.
I have an SSHOperator that writes a filepath to stdout. I'd like to get the os.path.basename of that filepath so that I can pass it as a parameter to my next task (which is an sftp pull). The idea is to download a remote file into the current working directory. This is what I have so far:
with DAG('my_dag',
default_args = dict(...
xcom_push = True,
)
) as dag:
# there is a get_update_id task here, which has been snipped for brevity
get_results = SSHOperator(task_id = 'get_results',
ssh_conn_id = 'my_remote_server',
command = """cd ~/path/to/dir && python results.py -t p -u {{ task_instance.xcom_pull(task_ids='get_update_id') }}""",
cmd_timeout = -1,
)
download_results = SFTPOperator(task_id = 'download_results',
ssh_conn_id = 'my_remote_server',
remote_filepath = base64.b64decode("""{{ task_instance.xcom_pull(task_ids='get_results') }}"""),
local_filepath = os.path.basename(base64.b64decode("""{{ task_instance.xcom_pull(task_ids='get_results') }}""").decode()),
operation = 'get',
)
Airflow tells me there's an error on the remote_filepath = line. Investigating this further, I see that the value passed to base64.b64decode is not the xcom value from the get_results task, but is rather the raw string starting with {{.
My feeling is that since these fields are templated, there is some under-the-hood magic that resolves the templated string, whereas os.path.basename gets no such treatment. So would I need to create an intermediate task to get the basename, or is there a way to shorthand this the way I've tried?
I'd appreciate any help on this.
You want to decode the XCOM return value when Airflow renders the remote_filepath property for the Task instance.
This means that the b64decode function must be invoked within the template string.
There is a catch, though: we have to make this function available in the template context, either by providing it as a parameter or by registering it on the DAG level as a user-defined filter or macro.
def basename_b64decode(value):
    return os.path.basename(base64.b64decode(value)).decode()
download_results = SFTPOperator(
    task_id = 'download_results',
    ssh_conn_id = 'my_remote_server',
    remote_filepath = """{{params.b64decode(ti.xcom_pull(task_ids='get_results'))}}""",
    local_filepath = """{{params.basename_b64decode(ti.xcom_pull(task_ids='get_results'))}}""",
    operation = 'get',
    params = {
        'b64decode': base64.b64decode,
        'basename_b64decode': basename_b64decode,
    },
)
For the DAG user-defined macro approach, you can write:
with DAG('my_dag',
         default_args = dict(...
             xcom_push = True,
         ),
         user_defined_macros = dict(
             basename_b64decode = basename_b64decode,
             b64decode = base64.b64decode,
         )
        ) as dag:
    download_results = SFTPOperator(
        task_id = 'download_results',
        ssh_conn_id = 'my_remote_server',
        remote_filepath = """{{b64decode(ti.xcom_pull(task_ids='get_results'))}}""",
        local_filepath = """{{basename_b64decode(ti.xcom_pull(task_ids='get_results'))}}""",
        operation = 'get',
    )
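For intuition, here is roughly what the two template expressions produce once rendering has happened; the encoded path below is just an illustrative value, not real output from the DAG:
import base64
import os

# Illustrative only: pretend the upstream task pushed this base64-encoded path.
encoded = base64.b64encode(b'/home/user/results/output_42.csv')

print(base64.b64decode(encoded).decode())                    # /home/user/results/output_42.csv
print(os.path.basename(base64.b64decode(encoded)).decode())  # output_42.csv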
In this SO question I learnt that I cannot delete a Cosmos DB document using SQL.
Using Python, I believe I need the DeleteDocument() method. This is how I'm getting the document IDs that are required (I believe) to then call the DeleteDocument() method.
# set up the client
client = document_client.DocumentClient()
# use a SQL based query to get a bunch of documents
query = { 'query': 'SELECT * FROM server s' }
result_iterable = client.QueryDocuments('dbs/DB/colls/coll', query, options)
results = list(result_iterable);
for x in range(0, len(results)):
    docID = results[x]['id']
Now, at this stage I want to call DeleteDocument(). Its inputs are document_link and options.
I can define document_link as something like
document_link = 'dbs/DB/colls/coll/docs/'+docID
and successfully call ReadAttachments(), for example, which takes the same inputs as DeleteDocument().
When I call DeleteDocument(), however, I get an error...
The partition key supplied in x-ms-partitionkey header has fewer
components than defined in the collection
...and now I'm totally lost
UPDATE
Following on from Jay's help, I believe I'm missing the partitionKey element in the options.
In this example, I've created a testing database, which looks like this:
So I think my partition key is /testPART
When I include the partitionKey in the options, however, no results are returned (and so print len(results) outputs 0).
Removing partitionKey means that results are returned, but the delete attempt fails as before.
# Query them in SQL
query = { 'query': 'SELECT * FROM c' }
options = {}
options['enableCrossPartitionQuery'] = True
options['maxItemCount'] = 2
options['partitionKey'] = '/testPART'
result_iterable = client.QueryDocuments('dbs/testDB/colls/testCOLL', query, options)
results = list(result_iterable)
# should be > 0
print len(results)
for x in range(0, len(results)):
    docID = results[x]['id']
    print docID
    client.DeleteDocument('dbs/testDB/colls/testCOLL/docs/'+docID, options=options)
    print 'deleted', docID
According to your description, I tried to use the pydocumentdb module to delete a document in my Azure DocumentDB and it works for me.
Here is my code:
import pydocumentdb;
import pydocumentdb.document_client as document_client
config = {
'ENDPOINT': 'Your url',
'MASTERKEY': 'Your master key',
'DOCUMENTDB_DATABASE': 'familydb',
'DOCUMENTDB_COLLECTION': 'familycoll'
};
# Initialize the Python DocumentDB client
client = document_client.DocumentClient(config['ENDPOINT'], {'masterKey': config['MASTERKEY']})
# use a SQL based query to get a bunch of documents
query = { 'query': 'SELECT * FROM server s' }
options = {}
options['enableCrossPartitionQuery'] = True
options['maxItemCount'] = 2
result_iterable = client.QueryDocuments('dbs/familydb/colls/familycoll', query, options)
results = list(result_iterable);
print(results)
client.DeleteDocument('dbs/familydb/colls/familycoll/docs/id1',options)
print 'delete success'
Console Result:
[{u'_self': u'dbs/hitPAA==/colls/hitPAL3OLgA=/docs/hitPAL3OLgABAAAAAAAAAA==/', u'myJsonArray': [{u'subId': u'sub1', u'val': u'value1'}, {u'subId': u'sub2', u'val': u'value2'}], u'_ts': 1507687788, u'_rid': u'hitPAL3OLgABAAAAAAAAAA==', u'_attachments': u'attachments/', u'_etag': u'"00002100-0000-0000-0000-59dd7d6c0000"', u'id': u'id1'}, {u'_self': u'dbs/hitPAA==/colls/hitPAL3OLgA=/docs/hitPAL3OLgACAAAAAAAAAA==/', u'myJsonArray': [{u'subId': u'sub3', u'val': u'value3'}, {u'subId': u'sub4', u'val': u'value4'}], u'_ts': 1507687809, u'_rid': u'hitPAL3OLgACAAAAAAAAAA==', u'_attachments': u'attachments/', u'_etag': u'"00002200-0000-0000-0000-59dd7d810000"', u'id': u'id2'}]
delete success
Please notice that you need to set the enableCrossPartitionQuery property to True in options if your documents are cross-partitioned.
Must be set to true for any query that requires to be executed across
more than one partition. This is an explicit flag to enable you to
make conscious performance tradeoffs during development time.
You can find the above description here.
Update Answer:
I think you misunderstand the meaning of the partitionKey property in the options.
For example, my container is created like this:
My documents are as below:
{
"id": "1",
"name": "jay"
}
{
"id": "2",
"name": "jay2"
}
My partition key is 'name', so here I have two partitions: 'jay' and 'jay2'.
So, here you should set the partitionKey property to 'jay' or 'jay2', not 'name'.
Please modify your code as below:
options = {}
options['enableCrossPartitionQuery'] = True
options['maxItemCount'] = 2
options['partitionKey'] = 'jay'  # please change this value in your code
result_iterable = client.QueryDocuments('dbs/db/colls/testcoll', query, options)
results = list(result_iterable);
print(results)
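Putting the pieces together, a minimal sketch (reusing the testDB/testCOLL names from your update and the 'jay' example value; adjust both to your own data) could look like this:
options = {}
options['enableCrossPartitionQuery'] = True
options['maxItemCount'] = 2
options['partitionKey'] = 'jay'  # the partition key VALUE of the documents, not the path

query = { 'query': 'SELECT * FROM c' }
result_iterable = client.QueryDocuments('dbs/testDB/colls/testCOLL', query, options)

for doc in list(result_iterable):
    # DeleteDocument takes the document link plus the same options carrying the partition key
    client.DeleteDocument('dbs/testDB/colls/testCOLL/docs/' + doc['id'], options)
    print('deleted', doc['id'])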
Hope it helps you.
Using the azure.cosmos library:
Install and import the azure-cosmos package:
from azure.cosmos import exceptions, CosmosClient, PartitionKey
Define a delete-items function - in this case using the partition key in the query:
def deleteItems(deviceid):
    # config holds your endpoint/key settings
    client = CosmosClient(config.cosmos.endpoint, config.cosmos.primarykey)

    # Create the database if it does not exist
    database = client.create_database_if_not_exists(id='azure-cosmos-db-name')

    # Create the container
    # Using a good partition key improves the performance of database operations.
    container = database.create_container_if_not_exists(
        id='container-name',
        partition_key=PartitionKey(path='/your-partition-path'),
        offer_throughput=400)

    # fetch items
    query = f"SELECT * FROM c WHERE c.device.deviceid IN ('{deviceid}')"
    items = list(container.query_items(query=query, enable_cross_partition_query=False))

    for item in items:
        # the second argument must be the partition key value of the item being deleted
        container.delete_item(item, 'partition-key')
usage:
deviceid = 10
deleteItems(deviceid)
github full example here: https://github.com/eladtpro/python-iothub-cosmos
I am trying to upload some data to Dydra from a Sesame triplestore I have on my computer. While the download from Sesame works fine, the triples get mixed up (the s-p-o relationships change: the object of one triple becomes the object of another). Can someone please explain why this is happening and how it can be resolved? The code is below:
from SPARQLWrapper import SPARQLWrapper, JSON
from rdflib import Graph, URIRef
from bs4 import BeautifulSoup
import pprint
import requests

#Querying the triplestore to retrieve all results
sesameSparqlEndpoint = 'http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name'
sparql = SPARQLWrapper(sesameSparqlEndpoint)
queryStringDownload = 'SELECT * WHERE {?s ?p ?o}'
dataGraph = Graph()
sparql.setQuery(queryStringDownload)
sparql.method = 'GET'
sparql.setReturnFormat(JSON)
output = sparql.query().convert()
print output
for i in range(len(output['results']['bindings'])):
    #The encoding is necessary to parse non-English characters
    output['results']['bindings'][i]['s']['value'].encode('utf-8')
    try:
        subject_extract = output['results']['bindings'][i]['s']['value']
        if 'http' in subject_extract:
            subject = "<" + subject_extract + ">"
            subject_url = URIRef(subject)
            print subject_url
        predicate_extract = output['results']['bindings'][i]['p']['value']
        if 'http' in predicate_extract:
            predicate = "<" + predicate_extract + ">"
            predicate_url = URIRef(predicate)
            print predicate_url
        objec_extract = output['results']['bindings'][i]['o']['value']
        if 'http' in objec_extract:
            objec = "<" + objec_extract + ">"
            objec_url = URIRef(objec)
            print objec_url
        else:
            objec = objec_extract
            objec_wip = '"' + objec + '"'
            objec_url = URIRef(objec_wip)
        # Loading the data on a graph
        dataGraph.add((subject_url,predicate_url,objec_url))
    except UnicodeError as error:
        print error
#Print all statements in dataGraph
for stmt in dataGraph:
    pprint.pprint(stmt)
# Upload to Dydra
URL = 'http://dydra.com/login'
key = 'my_key'
with requests.Session() as s:
    resp = s.get(URL)
    soup = BeautifulSoup(resp.text,"html5lib")
    csrfToken = soup.find('meta',{'name':'csrf-token'}).get('content')
    # print csrf_token
    payload = {
        'account[login]':key,
        'account[password]':'',
        'csrfmiddlewaretoken':csrfToken,
        'next':'/'
    }
    # print payload
    p = s.post(URL,data=payload, headers=dict(Referer=URL))
    # print p.text
    r = s.get('http://dydra.com/username/rep_name/sparql')
    # print r.text

dydraSparqlEndpoint = 'http://dydra.com/username/rep_name/sparql'

for stmt in dataGraph:
    queryStringUpload = 'INSERT DATA {%s %s %s}' % stmt
    sparql = SPARQLWrapper(dydraSparqlEndpoint)
    sparql.setCredentials(key,key)
    sparql.setQuery(queryStringUpload)
    sparql.method = 'POST'
    sparql.query()
A far simpler way to copy your data over (apart from using a CONSTRUCT query instead of a SELECT, like I mentioned in the comment) is simply to have Dydra itself directly access your Sesame endpoint, for example via a SERVICE-clause.
Execute the following on your Dydra database, and (after some time, depending on how large your Sesame database is), everything will be copied over:
INSERT { ?s ?p ?o }
WHERE {
SERVICE <http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name>
{ ?s ?p ?o }
}
If the above doesn't work on Dydra, you can alternatively just directly access the RDF statements from your Sesame store by using the URI http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name/statements. Assuming Dydra has an upload-feature where you can provide the URL of an RDF document, you can simply provide it the above URI and it should be able to load it.
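As a small sketch of that second option using requests (the repository URL is the one from the question; the Accept header tells the Sesame HTTP interface which serialization to return):
import requests

statements_url = 'http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name/statements'

# Download every statement in the repository as RDF/XML
resp = requests.get(statements_url, headers={'Accept': 'application/rdf+xml'})
resp.raise_for_status()

with open('rep_name_dump.rdf', 'wb') as f:
    f.write(resp.content)

# The saved file (or the statements URL itself) can then be handed to Dydra's import feature.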
The code above can work if the following changes are made:
Use a CONSTRUCT query instead of SELECT (see the sketch at the end of this answer). Details here -> How to iterate over CONSTRUCT output from rdflib?
Use key as input for both account[login] and account[password]
However, this is probably not the most efficient way. In particular, doing an individual INSERT for every triple is not a good approach. Dydra doesn't record all statements this way (I got only about 30% of the triples inserted). By contrast, using the http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name/statements method as suggested by Jeen enabled me to port all the data successfully.
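A rough sketch of the CONSTRUCT change (assuming the same sesameSparqlEndpoint as in the question): SPARQLWrapper converts an RDF/XML CONSTRUCT result straight into an rdflib Graph, so the manual URIRef handling above goes away.
from SPARQLWrapper import SPARQLWrapper, XML

sparql = SPARQLWrapper(sesameSparqlEndpoint)
sparql.setQuery('CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }')
sparql.setReturnFormat(XML)

# convert() returns an rdflib Graph for CONSTRUCT queries
dataGraph = sparql.query().convert()
print len(dataGraph)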
I have this code in default.py:
form_0 = SQLFORM(db.base_folder, record=db.base_folder(1))
query = db.base_folder.folder != ''
set = db(query)
rows = set.select()
if rows:
    form_0.vars.folder = rows[0]['folder']
and in db.py:
db.define_table(
    'base_folder',
    Field('folder',
          type='string',
          default='You need to set up a directory to backup to !',
    ),
    format='%(folder)s'
)
and unfortunately the form that is displayed also shows:
id: 1
above the field value. This issue disappears when I omit the record option.
How can I avoid that behavior, please? I need to keep the update
functionality.
Thank you,
SQLFORM(db.base_folder, record=db.base_folder(1), showid=False)
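In the controller from the question that would mean something like this (a sketch; only the SQLFORM call changes, and the select is folded into one line to avoid shadowing the built-in set):
# showid=False hides the "id: 1" row, while record=db.base_folder(1) keeps this an update form
form_0 = SQLFORM(db.base_folder, record=db.base_folder(1), showid=False)
query = db.base_folder.folder != ''
rows = db(query).select()
if rows:
    form_0.vars.folder = rows[0]['folder']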
I'm writing a python script to populate a mongodb database, my models look like the following:
from mongoengine import *
from mongoengine.django.auth import User
class TrackGroup(Document):
    name = StringField(required=True)
    users = ListField(ReferenceField('User'))
    devices = ListField(ReferenceField('Device'))

class Device(Document):
    name = StringField(max_length=50, required=True)
    grp = ListField(ReferenceField(TrackGroup))

class User(User):
    first_name = StringField(max_length=50)
    last_name = StringField(max_length=50)
    grp = ListField(ReferenceField(TrackGroup))
And my script goes like this:
#Create a group
gName = 'group name'
g = TrackGroup(name=gName)
g.users = []
g.devices = []
g.save()
#create a user
u = User.create_user(username='name', password='admin', email='mail@ex.com')
gRef = g
u.grp = [gRef, ]
u.first_name = 'first'
u.last_name = 'last'
u.save()
gRef.users.append(u)
gRef.save()
#create a device
dev = Device(name='name').save()
gRef = g
dev.grp = [gRef, ]
dev.save()
gRef.devices.append(dev)
gRef.save() #Problem happens here
The problem happens when I call gRef.save() I get the following error:
raise OperationError(message % unicode(err))
mongoengine.errors.OperationError: Could not save document (LEFT_SUBFIELD only supports Object: users.0 not: 7)
I looked around for a while, and here it says that this means I'm trying to set a field with an empty key, like this (the example is from the link above, not mine):
{
"_id" : ObjectId("4e52b5e08ead0e3853320000"),
"title" : "Something",
"url" : "http://www.website.org",
"" : "",
"tags" : [ "international"]
}
I don't know where such a field can come from, but I opened a mongo shell and looked at my documents from the three collections, and I couldn't find such a field.
Note: If I add the device first, the same error occurs while saving the group after adding the user.
I had the same error, and this trick worked for me:
the_obj_causing_error.reload()
# make some change
the_obj_causing_error.price = 5
the_obj_causing_error.save()
Just try calling reload() on the object before changing it.
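Applied to the script above, that would mean reloading the group document before the second round of changes; a sketch (gRef and dev are the objects from the question):
# refresh gRef from MongoDB so its users/devices lists match what was last saved
gRef.reload()
gRef.devices.append(dev)
gRef.save()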