Hi, I have managed to connect to Azure DevOps with Python and retrieve data from system fields with a WIQL query, but when I run the program I get an error like this:
KeyError: 'Microsoft.VSTS.Common.AcceptanceCriteria'
I do not understand where the problem is, because I get the value for Priority but not for AcceptanceCriteria. What am I missing? Any help would be appreciated.
Every time I execute this query I get this error:
from vsts.vss_connection import VssConnection
from msrest.authentication import BasicAuthentication
import json
from vsts.work_item_tracking.v4_1.models.wiql import Wiql

def emit(msg, *args):
    print(msg % args)

def print_work_item(work_item):
    emit(
        "{0} ,{1},{2},{3},{4}".format(
            work_item.fields["System.WorkItemType"],
            work_item.id,
            work_item.fields["System.Title"],
            work_item.fields["Microsoft.VSTS.Common.AcceptanceCriteria"],
            work_item.fields["Microsoft.VSTS.Common.Priority"]
        )
    )

personal_access_token = 'xbxbxxbxhhdhdjdkdkddkdkdkdkdkdkkdkdkd'
organization_url = 'https://dev.azure.com/YourorgName'

# Create a connection to the org
credentials = BasicAuthentication('', personal_access_token)
connection = VssConnection(base_url=organization_url, creds=credentials)

wiql = Wiql(
    query="""select [System.Id],[Microsoft.VSTS.Common.AcceptanceCriteria],[System.Title],[System.WorkItemType],[Microsoft.VSTS.Common.Priority] From WorkItems """
)

wit_client = connection.get_client('vsts.work_item_tracking.v4_1.work_item_tracking_client.WorkItemTrackingClient')
wiql_results = wit_client.query_by_wiql(wiql).work_items
if wiql_results:
    # WIQL query gives a WorkItemReference with ID only
    # => we get the corresponding WorkItem from id
    work_items = (
        wit_client.get_work_item(int(res.id)) for res in wiql_results
    )
    for work_item in work_items:
        print_work_item(work_item)
As a simple answer, you can update your print function to
def print_work_item(work_item):
    emit(
        "{0} ,{1},{2}".format(
            work_item.fields["System.WorkItemType"],
            work_item.id,
            work_item.fields["System.Title"]
        )
    )
Some fields (like Microsoft.VSTS.Common.AcceptanceCriteria and Microsoft.VSTS.Common.Priority) may not be present on every work item type. Therefore, before printing a field you have to check whether it exists in the fields dictionary.
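For example, a minimal sketch of that check using dict.get() with a placeholder fallback (the placeholder strings are my own, not part of the original answer):

def print_work_item(work_item):
    # .get() returns the fallback instead of raising KeyError
    # when the field is missing on this work item type
    acceptance = work_item.fields.get("Microsoft.VSTS.Common.AcceptanceCriteria", "<no acceptance criteria>")
    priority = work_item.fields.get("Microsoft.VSTS.Common.Priority", "<no priority>")
    emit(
        "{0} ,{1},{2},{3},{4}".format(
            work_item.fields["System.WorkItemType"],
            work_item.id,
            work_item.fields["System.Title"],
            acceptance,
            priority
        )
    )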
Related
I have a Glue script that creates new partitions using create_partition(). The Glue script runs successfully, and I can see the partitions in the Athena console when using SHOW PARTITIONS. For the Glue create_partitions script, I referred to this sample code: https://medium.com/#bv_subhash/demystifying-the-ways-of-creating-partitions-in-glue-catalog-on-partitioned-s3-data-for-faster-e25671e65574
When I try to run an Athena query for a newly added partition, I get no results.
Do I need to trigger the MSCK command, even though I add the partitions using create_partitions? Appreciate any suggestions.
I found the solution myself and wanted to share it with the SO community, so it might be useful to someone. The following code, when run as a Glue job, creates the partitions, and the new partition columns can then be queried in Athena. Please change/add the parameter values (database name, table name, partition columns) as needed. Because batch_create_partition registers the partitions directly in the Glue Data Catalog, there is no need to run MSCK REPAIR TABLE afterwards.
import boto3
import urllib.parse
import os
import copy
import sys
# Configure database / table name and emp_id, file_id from workflow params?
DATABASE_NAME = 'my_db'
TABLE_NAME = 'enter_table_name'
emp_id_tmp = ''
file_id_tmp = ''
# Initialise the Glue client using Boto3
glue_client = boto3.client('glue')

# Get the current table schema for the given database name & table name
def get_current_schema(database_name, table_name):
    try:
        response = glue_client.get_table(
            DatabaseName=database_name,
            Name=table_name
        )
    except Exception as error:
        print("Exception while fetching table info")
        sys.exit(-1)

    # Parse the table info required to create partitions from the table
    table_data = {}
    table_data['input_format'] = response['Table']['StorageDescriptor']['InputFormat']
    table_data['output_format'] = response['Table']['StorageDescriptor']['OutputFormat']
    table_data['table_location'] = response['Table']['StorageDescriptor']['Location']
    table_data['serde_info'] = response['Table']['StorageDescriptor']['SerdeInfo']
    table_data['partition_keys'] = response['Table']['PartitionKeys']

    return table_data
# Prepare the partition input list using table_data
def generate_partition_input_list(table_data):
    input_list = []  # Initializing empty list
    part_location = "{}/emp_id={}/file_id={}/".format(table_data['table_location'], emp_id_tmp, file_id_tmp)
    input_dict = {
        'Values': [
            emp_id_tmp, file_id_tmp
        ],
        'StorageDescriptor': {
            'Location': part_location,
            'InputFormat': table_data['input_format'],
            'OutputFormat': table_data['output_format'],
            'SerdeInfo': table_data['serde_info']
        }
    }
    input_list.append(input_dict.copy())
    return input_list
# Create the partition dynamically using the partition input list
table_data = get_current_schema(DATABASE_NAME, TABLE_NAME)
input_list = generate_partition_input_list(table_data)
try:
    create_partition_response = glue_client.batch_create_partition(
        DatabaseName=DATABASE_NAME,
        TableName=TABLE_NAME,
        PartitionInputList=input_list
    )
    print('Glue partition created successfully.')
    print(create_partition_response)
except Exception as e:
    # Handle exception as per your business requirements
    print(e)
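If it helps, here is a small sketch (my addition, not part of the original answer) for double-checking that the new partition really landed in the Glue Data Catalog; it reuses the glue_client, DATABASE_NAME and TABLE_NAME defined above:

# List the partitions currently registered for the table in the Glue catalog
partitions = glue_client.get_partitions(
    DatabaseName=DATABASE_NAME,
    TableName=TABLE_NAME
)
for partition in partitions.get('Partitions', []):
    print(partition['Values'], partition['StorageDescriptor']['Location'])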
Is it possible to update with the scan() method from Python's elasticsearch-dsl library?
For example:
search = Search(index=INDEX).query(query).params(request_timeout=6000)

print('Scanning Query and updating.')
for hit in search.scan():
    _id = hit.meta.id
    # sql query to get the record from the database
    record = get_record_by_id(_id)
    hit.column1 = record['column1']
    hit.column2 = record['column2']
    # not sure what to use here to update
This can be done by using the elasticsearch client:
from elasticsearch_dsl import connections

# reuse the low-level Elasticsearch client behind the default
# elasticsearch-dsl connection that Search() is already using
elastic_client = connections.get_connection()

for hit in search.scan():
    _id = hit.meta.id
    print(_id)
    # get data from database
    record = get_record_by_id(_id)
    body = dict(record)
    body.pop('_id')
    # update the document in Elasticsearch
    elastic_client.update(
        index=INDEX,
        id=_id,
        body={"doc": body}
    )
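For large result sets, updating one document per round trip can be slow. A rough alternative sketch (my addition, not part of the original answer) uses the bulk helper from the elasticsearch package to send partial updates in batches:

from elasticsearch.helpers import bulk

def generate_updates():
    # Yield one partial-update action per scanned hit
    for hit in search.scan():
        record = get_record_by_id(hit.meta.id)
        body = dict(record)
        body.pop('_id')
        yield {
            "_op_type": "update",
            "_index": INDEX,
            "_id": hit.meta.id,
            "doc": body,
        }

bulk(elastic_client, generate_updates())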
There are campaigns; however, none of them are being returned from this sample script:
nicholas#mordor:~/python$
nicholas#mordor:~/python$ python3 chimp.py
key jfkdljfkl_key
user fdjkslafjs_user
password dkljfdkl_pword
server fjkdls_server
nicholas#mordor:~/python$
nicholas#mordor:~/python$ cat chimp.py
import os
from mailchimp3 import MailChimp
key=(os.environ['chimp_key'])
user=(os.environ['chimp_user'])
password=(os.environ['chimp_password'])
server=(os.environ['chimp_server'])
print ("key\t\t", key)
print ("user\t\t", user)
print ("password\t", password)
print ("server\t\t", server)
client = MailChimp(mc_api=key, mc_user=user)
client.lists.all(get_all=True, fields="lists.name,lists.id")
client.campaigns.all(get_all=True)
nicholas#mordor:~/python$
Do I need to send additional information to get back a list of campaigns? I'm just looking to log some basic responses from Mailchimp.
(Obviously, I've not posted my API key, nor other sensitive info.)
This is what I use and it works for me. Just call the get_campaigns function below with the MailChimp client. I added n_days for my specific needs, but you can delete that part of the code if you do not need it. You can also customize the renames and dropped columns as per your needs.
from typing import Optional, Union, List, Tuple
from datetime import timedelta, date

import pandas as pd  # type: ignore
from mailchimp3 import MailChimp  # type: ignore

default_campaign_fields = [
    'id',
    'send_time',
    'emails_sent',
    'recipients.recipient_count',
    'settings.title',
    'settings.from_name',
    'settings.reply_to',
    'report_summary'
]
def get_campaigns(client: MailChimp, n_days: int = 7, fields: Optional[Union[str, List[str]]] = None) -> pd.DataFrame:
    """
    Gets the statistics for all sent campaigns in the last 'n_days'

    client: (Required) MailChimp client object
    n_days: (int) Get campaigns for the last n_days
    fields: Specific fields to return. Default is None which gets some predefined columns.
    """
    keyword = 'campaigns'
    if fields is None:
        fields = default_campaign_fields

    # If it is a string (single field), convert to List so that the join operation works properly
    if isinstance(fields, str):
        fields = [fields]
    fields = [keyword + '.' + field for field in fields]
    fields = ",".join(fields)

    now = date.today()
    last_ndays = now - timedelta(days=n_days)

    rvDataFrame = pd.json_normalize(
        client.campaigns.all(
            get_all=True,
            since_send_time=last_ndays,
            fields=fields).get(keyword))

    if 'send_time' in rvDataFrame.columns:
        rvDataFrame.sort_values('send_time', ascending=False, inplace=True)

    mapper = {
        "id": "ID",
        "emails_sent": "Emails Sent",
        "settings.title": "Campaign Name",
        "settings.from_name": "From",
        "settings.reply_to": "Email",
        "report_summary.unique_opens": "Opens",
        "report_summary.open_rate": "Open Rate (%)",
        "report_summary.subscriber_clicks": "Unique Clicks",
        "report_summary.click_rate": "Click Rate (%)"
    }

    drops = [
        "recipients.recipient_count",
        "report_summary.opens",
        "report_summary.clicks",
        "report_summary.ecommerce.total_orders",
        "report_summary.ecommerce.total_spent",
        "report_summary.ecommerce.total_revenue"]

    rvDataFrame.drop(columns=drops, inplace=True)
    rvDataFrame.rename(columns=mapper, inplace=True)

    rvDataFrame.loc[:, "Open Rate (%)"] = round(rvDataFrame.loc[:, "Open Rate (%)"] * 100, 2)
    rvDataFrame.loc[:, "Click Rate (%)"] = round(rvDataFrame.loc[:, "Click Rate (%)"] * 100, 2)

    return rvDataFrame
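For completeness, a usage sketch (assuming the same chimp_key / chimp_user environment variables as in the question; the 30-day window is an arbitrary choice of mine):

import os
from mailchimp3 import MailChimp

client = MailChimp(mc_api=os.environ['chimp_key'], mc_user=os.environ['chimp_user'])

# Fetch campaigns sent in the last 30 days and print a summary table
campaigns_df = get_campaigns(client, n_days=30)
print(campaigns_df.to_string(index=False))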
I am trying to get all group members of Active Directory.
I have this code:
from ldap3 import Server, Connection, ALL, core

server = Server(address, get_info=ALL)
ad_conn = Connection(server, dn, password, auto_bind=True)
members = []

AD_GROUP_FILTER = '(&(objectClass=GROUP)(cn={group_name}))'
ad_filter = AD_GROUP_FILTER.replace('{group_name}', group_name)
result = ad_conn.search_s('OU details', ldap3.SCOPE_SUBTREE, ad_filter)
if result:
    if len(result[0]) >= 2 and 'member' in result[0][1]:
        members_tmp = result[0][1]['member']
        for m in members_tmp:
            email = get_email_by_dn(m, ad_conn)
            if email:
                members.append(email)
return members
But I am getting an error
'Connection' object has no attribute 'search_s'
Use search(), specify the attributes you need (it seems you build the email from the user DN, but it may already be present in the directory, e.g. as the mail attribute), and fix the arguments in the function call (the argument order is search base, then filter, then scope, and use the proper constant SUBTREE):
from ldap3 import Server, Connection, ALL, SUBTREE, core

server = Server(address, get_info=ALL)
ad_conn = Connection(server, dn, password, auto_bind=True)
members = []

AD_GROUP_FILTER = '(&(objectClass=GROUP)(cn={group_name}))'
ad_filter = AD_GROUP_FILTER.replace('{group_name}', group_name)

ad_conn.search('OU details', ad_filter, SUBTREE, attributes=['member', 'mail'])
if len(ad_conn.response):
    # To grab data, you might prefer the following - or use ad_conn.entries:
    for entry in ad_conn.response:
        print(entry['dn'], entry['attributes'])
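If you then want to rebuild the members list from the question, a rough sketch (assuming the get_email_by_dn helper from the question is available) could look like this:

members = []
if ad_conn.entries:
    # 'member' holds the DNs of every direct member of the group
    for member_dn in ad_conn.entries[0]['member']:
        email = get_email_by_dn(member_dn, ad_conn)
        if email:
            members.append(email)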
In short, how do I do this:
var="TableName"
models.var.query.all()
Explanation
My goal is to allow the user to change the order of a list of items.
I set up an AJAX call that sends an array of IDs to the API below.
It works if I hard-code the query and make an API view per table.
My problem is that I want "table" to fill in this line
models.table.query.filter_by(id=item).first()
to complete the query.
Here is the API view, which gives me the error "no attribute 'table'":
@app.route('/order/<table>')
def order(table):
    # import pdb;pdb.set_trace()
    sortedItems = request.args.listvalues()[0]
    o = 1
    import pdb;pdb.set_trace()
    for item in sortedItems:
        grab = models.table.query.filter_by(id=item).first()
        grab.order = o
        o = o + 1
    db.session.commit()
    return jsonify(result=sortedItems)
You can use getattr():
>>> var = 'table_name'
>>> table = getattr(models, var)
>>> table.query.filter_by(id=item).first()
getattr() will raise an AttributeError if the attribute you're trying to get does not exist.
Example for your order() function:
@app.route('/order/<table>')
def order(table):
    sortedItems = request.args.listvalues()[0]
    o = 1
    table = getattr(models, table)
    for item in sortedItems:
        grab = table.query.filter_by(id=item).first()
        grab.order = o
        o = o + 1
    db.session.commit()
    return jsonify(result=sortedItems)
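Since the table name comes straight from the URL, you may also want to fail gracefully instead of letting getattr() raise. One possible variant of the view (my addition, not part of the original answer):

@app.route('/order/<table>')
def order(table):
    # getattr with a default avoids an unhandled AttributeError for unknown names
    model = getattr(models, table, None)
    if model is None:
        return jsonify(error="unknown table"), 404

    sortedItems = request.args.listvalues()[0]
    for o, item in enumerate(sortedItems, start=1):
        grab = model.query.filter_by(id=item).first()
        grab.order = o
    db.session.commit()
    return jsonify(result=sortedItems)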