I am trying to migrate some existing blog entries into our Confluence wiki using XML-RPC with Python. It currently works for things like title, content, and space, but not for the created date.
This is what I have attempted so far:
import xmlrpclib

proxy = xmlrpclib.ServerProxy('<my_confluence>/rpc/xmlrpc')
token = proxy.confluence1.login('username', 'password')

page = {
    'title': 'myTitle',
    'content': 'My Content',
    'space': 'myspace',
    'created': sometime
}

proxy.confluence1.storePage(token, page)
sometime is the date I want to set to a time in the past. I have tried using Date objects, various string formats and even the date object returned by a previous save, but no luck.
If you store the existing content as actual blog entries in Confluence, you can use the "publishDate" parameter:
import xmlrpclib
import datetime

proxy = xmlrpclib.ServerProxy('<my_confluence>/rpc/xmlrpc')
token = proxy.confluence1.login('username', 'password')

blogpost = {
    'title': 'myTitle',
    'content': 'My Content',
    'space': 'myspace',
    'publishDate': datetime.datetime(2001, 11, 21, 16, 30)
}

proxy.confluence1.storeBlogEntry(token, blogpost)
The XML-RPC API for pages ignores the "created" parameter.
You can convert the string with strptime, since a plain string will not match the expected type directly. Hope this works.
from datetime import datetime

new_sometime = datetime.strptime(sometime, '%Y-%m-%d')

page = {
    'title': 'myTitle',
    'content': 'My Content',
    'space': 'myspace',
    'created': new_sometime
}
I am trying to figure out how I can tag resources with the Merge operation, like in PowerShell.
Example in PowerShell:
Update-AzTag -ResourceId $s.ResourceId -Tag $mergedTags -Operation Replace
My code in Python:
# Tag the resource groups.
resource_group_client.resource_groups.create_or_update(
    resource_group_name=rg["Resource-group-name"],
    parameters={
        'location': rg['location'],
        'tags': tags_dict,
        'Operation': 'Merge'
    }
)
As you can see, I am trying my luck by putting 'Operation': 'Merge', but it doesn't work...
Any help here, please?
There is no merge option in the Python create_or_update function; have a look at the documentation on merging tags.
We can use resource_group_params.update(tags={'hello': 'world'}) to update all the tags at once.
Below is the code for updating:
resource_group_params = {'location': 'westus'}  # the location parameter is required by create_or_update
resource_group_params.update(tags={'hello': 'world'})  # adding tags (this removes all previous tags and replaces them with the ones passed here)
client.resource_groups.create_or_update('azure-sample-group', resource_group_params)
But the above code from the documentation will remove all the previous tags and replace them with the ones we are passing.
Here our requirement is to append/merge tags, so I have created a Python script that appends new tags to the existing ones:
# Import the needed credential and management objects from the libraries.
from azure.identity import AzureCliCredential
from azure.mgmt.resource import ResourceManagementClient

credential = AzureCliCredential()
subscription_id = "SUBSCRIPTION_ID"   # add your subscription ID
resource_group = "RESOURCE_GRP_ID"    # add your resource group name

resource_client = ResourceManagementClient(credential, subscription_id)
resource_list = resource_client.resource_groups.list()

for resource in resource_list:
    if resource.name == resource_group:
        appendtags = resource.tags or {}   # gather the old tags (may be None if the group has none)
        new_tags = {"Name1": "first"}      # new tags
        appendtags.update(new_tags)        # merge the new tags into the old ones
        print(appendtags)
        resource_client.resource_groups.create_or_update(
            resource_group_name=resource_group,
            parameters={"location": "westus2", "tags": appendtags}
        )
I have fixed it with this code, and it worked perfectly.
You should work with "update_at_scope":
resource_group_client = ResourceManagementClient(credential, subscription_id=sub.subscription_id)

body = {
    "operation": "Merge",
    "properties": {
        "tags": tags_dict,
    }
}

resource_group_client.tags.update_at_scope(rg["Resource-id"], body)
I found this approach in the documentation:
https://elasticsearch-dsl.readthedocs.io/en/latest/index.html
class Article(Document):
    title = Text(analyzer='snowball', fields={'raw': Keyword()})
    body = Text(analyzer='snowball')
    tags = Keyword()
    published_from = Date()
    lines = Integer()
and indexing like
# create and save an article
article = Article(meta={'id': 42}, title='Hello world!', tags=['test'])
article.body = ''' looong text '''
article.published_from = datetime.now()
article.save()
But this approach is not working out for me, because I have very large JSON documents with more than 100 fields, and I have many JSON documents like that. It would be difficult to convert the JSON into this form:
article = Article(meta={'id': 42}, title='Hello world!', tags=['test'])
Any idea how to index JSON directly, without a model-like wrapper?
You should be able to use the dict structure of your JSON without converting everything to the DSL. Just use the normal elasticsearch client; e.g. see the example for the low-level client here: https://github.com/elastic/elasticsearch-py
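For example, a minimal sketch with the low-level client (the index name "articles" is an assumption, and the sample dict stands in for your parsed JSON):

from elasticsearch import Elasticsearch

es = Elasticsearch()

# doc is simply the dict you already get from json.loads(...)
doc = {"title": "Hello world!", "tags": ["test"], "body": ''' looong text '''}

# index the dict directly; no Document subclass needed
# (older elasticsearch-py versions use body=, newer ones use document=)
es.index(index="articles", id=42, body=doc)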
Please do note that if you have a large number of docs, it is very useful to use the bulk-import API, which can be orders of magnitude faster.
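A rough sketch with the bulk helper, assuming the documents are already parsed into a list of dicts (the index name and IDs are made up):

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()

docs = [{"title": "Hello world!"}, {"title": "Another doc"}]  # your parsed JSON dicts

actions = (
    {"_index": "articles", "_id": i, "_source": doc}
    for i, doc in enumerate(docs)
)
helpers.bulk(es, actions)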
I am using the pyArango driver (https://github.com/tariqdaouda/pyArango) for ArangoDB, but I cannot understand how the field validation works. I have set the fields of a collection as in the GitHub example:
import pyArango.Collection as COL
import pyArango.Validator as VAL
from pyArango.theExceptions import ValidationError
import types

class String_val(VAL.Validator):
    def validate(self, value):
        if type(value) is not types.StringType:
            raise ValidationError("Field value must be a string")
        return True

class Humans(COL.Collection):
    _validation = {
        'on_save': True,
        'on_set': True,
        'allow_foreign_fields': True  # allow fields that are not part of the schema
    }

    _fields = {
        'name': COL.Field(validators=[VAL.NotNull(), String_val()]),
        'anything': COL.Field(),
        'species': COL.Field(validators=[VAL.NotNull(), VAL.Length(5, 15), String_val()])
    }
So I was expecting that when I try to add a document to the "Humans" collection, an error would be raised if the 'name' field is not a string. But it didn't seem to work that easily.
This is how I add documents to the collection:
import json

myjson = json.loads(open('file.json').read())
collection_name = "Humans"

bindVars = {"doc": myjson, '@collection': collection_name}
aql = "FOR d IN @doc INSERT d INTO @@collection LET newDoc = NEW RETURN newDoc"
queryResult = db.AQLQuery(aql, bindVars=bindVars, batchSize=100)
So if 'name' is not a string, I actually don't get any error and the document is uploaded into the collection.
Does someone know how I can check whether a document contains the proper fields for that collection, using the built-in validation of pyArango?
I don't see anything wrong with your validator; it's just that if you're using AQL queries to insert your documents, pyArango has no way of knowing the contents prior to insertion.
Validators only work on pyArango documents if you do:
humans = db["Humans"]
doc = humans.createDocument()
doc["name"] = 101
That should trigger the exception because you've defined:
'on_set': True
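For completeness, a small continuation of that snippet (the values are made up; it reuses the ValidationError imported above, and because 'on_save' is also True, validation runs again when the document is saved):

try:
    doc["name"] = 101              # on_set validation should reject a non-string here
except ValidationError as e:
    print("rejected on set:", e)

doc["name"] = "Alice"
doc["species"] = "human being"     # 5-15 characters, so VAL.Length(5, 15) passes
doc.save()                         # on_save validation runs before the insert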
ArangoDB as a document store does not itself enforce schemas, and neither do the drivers.
If you need schema validation, it can be done on top of the driver or inside ArangoDB using a Foxx service (via the joi validation library).
One possible solution is to use JSON Schema with its Python implementation on top of the driver in your application:
from jsonschema import validate
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "species": {"type": "string"},
    },
}
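Validating a loaded document against that schema would then look something like this (the sample document is made up for illustration):

from jsonschema import ValidationError

doc = {"name": 101, "species": "human"}   # 'name' is not a string

try:
    validate(instance=doc, schema=schema)
except ValidationError as e:
    print("Validation failed:", e.message)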
Another real life example using JSON Schema is swagger.io, which is also used to document the ArangoDB REST API and ArangoDB Foxx services.
I don't know yet what was wrong with the code I posted, but it now seems to work. However, I had to convert unicode to UTF-8 when reading the JSON file, otherwise the validator was not able to identify strings. I know ArangoDB itself does not enforce schemas, but the driver I am using has built-in validation.
For those interested in built-in validation for ArangoDB using Python, visit the pyArango GitHub page.
I have created a script that posts to my website when I run it (see code below), and everything works as it should, except that I cannot seem to set the template for the pages that I create.
from wordpress_xmlrpc import Client
from wordpress_xmlrpc.methods import posts, pages
from wordpress_xmlrpc import WordPressPage
client = Client(...)
page = WordPressPage()
page.template = 'template_name.php'
page.parent_id = '206'
page.title = 'Test'
page.slug = 'rrr'
page.content = 'Some text goes here'
page.post_status = 'publish'
page.id = client.call(posts.NewPost(page))
I found this issue posted at the repository (https://github.com/maxcutler/python-wordpress-xmlrpc/issues/44), and I tried using the alternative method:
page.wp_page_template = 'template_name.php'
I've also tried the names of the templates and other variations. I also checked my templates using the GetPageTemplates call:
templates = client.call(pages.GetPageTemplates())
All of my templates are shown there.
Has anyone had any success using this setting?
This seems to be a bug in wordpress_xmlrpc.
wordpress_xmlrpc converts the field 'template' to 'wp_page_template', but that is wrong.
It should be converted to 'page_template' (without the wp_ prefix).
"class-wp-xmlrpc-server.php" then converts the 'page_template' field of the post data to 'wp_page_template'.
So wordpress_xmlrpc should send 'page_template', not 'wp_page_template'.
wordpress_xmlrpc converts 'template' to 'wp_page_template' before posting, so:
Use 'template':
page.template = 'template_name.php'
and patch wordpress.py like this:
*** wordpress.py
--- wordpress.py.orig
***************
*** 126,132 ****
  class WordPressPage(WordPressPost):
      definition = dict(WordPressPost.definition, **{
!         'template': 'page_template',
          'post_type': FieldMap('post_type', default='page'),
      })
--- 126,132 ----
  class WordPressPage(WordPressPost):
      definition = dict(WordPressPost.definition, **{
!         'template': 'wp_page_template',
          'post_type': FieldMap('post_type', default='page'),
      })
It worked.
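If patching the installed library is not an option, the same mapping fix can presumably be applied by subclassing WordPressPage with the corrected field name (an untested sketch based on the patch above):

from wordpress_xmlrpc import WordPressPage

class FixedWordPressPage(WordPressPage):
    # override only the buggy 'template' -> 'wp_page_template' mapping
    definition = dict(WordPressPage.definition, **{
        'template': 'page_template',
    })

page = FixedWordPressPage()
page.template = 'template_name.php'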
I am trying to use Facebook's Ads API to get data about advertising accounts/campaigns/etc. within a specified time range.
Up until now I have managed to get overall information (added below) using the official Python SDK,
but I can't figure out how to insert the time-filter condition.
The answer is probably here under "Filtering results", but I don't understand how to translate what they are doing there into Python...
https://developers.facebook.com/docs/reference/ads-api/adstatistics/v2.2
I would really appreciate any help you can provide,
Thanks!
This is the relevant module (I think) from the official Python SDK project:
https://github.com/facebook/facebook-python-ads-sdk/blob/master/facebookads/objects.py
My current code is:
from facebookads.session import FacebookSession
from facebookads.api import FacebookAdsApi
from facebookads import objects
from facebookads.objects import (
    AdUser,
    AdCampaign,
)

my_app_id = 'APP_ID'
my_app_secret = 'AP_SECRET'
my_access_token = 'ACCESS_TOKEN'

my_session = FacebookSession(my_app_id, my_app_secret, my_access_token)
my_api = FacebookAdsApi(my_session)
FacebookAdsApi.set_default_api(my_api)

me = objects.AdUser(fbid='me')
my_accounts = list(me.get_ad_accounts())
my_account = my_accounts[1]

print(">>> Campaign Stats")
for campaign in my_account.get_ad_campaigns(fields=[AdCampaign.Field.name]):
    for stat in campaign.get_stats(fields=[
        'impressions',
        'clicks',
        'spent',
        'unique_clicks',
        'actions',
    ]):
        print(campaign[campaign.Field.name])
        for statfield in stat:
            print("\t%s:\t\t%s" % (statfield, stat[statfield]))
and the output I get is (All caps and xxxx are mine):
>>> Campaign Stats
CAMPAIGN_NAME1
    impressions:    xxxx
    unique_clicks:  xxxx
    clicks:         xxxx
    actions:        {u'mobile_app_install': xxxx, u'app_custom_event': xxxx, u'app_custom_event.fb_mobile_activate_app': xxx}
    spent:          xxxx
CAMPAIGN_NAME2
    impressions:    xxxx
    unique_clicks:  xxxx
    clicks:         xxxx
    actions:        {XXXX}
    spent:          xxxx
The get_stats() method has an additional parameter named params where you can pass in start_time and/or end_time.
params_data = {
    'start_time': 1415134405,
}

stats = campaign.get_stats(
    params=params_data,
    fields=[
        'impressions',
        'clicks',
        ...
    ],
)

for stat in stats:
    ...
The API accepts a number of different parameters documented here: https://developers.facebook.com/docs/reference/ads-api/adstatistics
More optional reading
The reason for having both a params parameter and a fields parameter requires a bit of explanation. Feel free to ignore this if you're not interested. :)
The implementation for the params parameter basically just constructs the query string for the API call:
params['start_time'] = 1415134405
creates:
?start_time=1415134405
The Ads API endpoints generally accept a fields parameter to define what data you want to return:
?fields=impressions,clicks&start_time=1415134405
which you have correctly defined, but because it's just fields in the query string, you could also technically do this:
params['fields'] = 'impressions,clicks'
The fields parameter in get_stats() (and other read methods) is simply an easy way to define this fields parameter. The implementation looks something like this:
def remote_read(self, params=[], fields=[]):
    ...
    params['fields'] = fields
    ...
The "get_stats" method is deprecated in the V2.4 version of the API.
Instead, "get_insights" method should be used. The parameters for that method are liste on the page below:
https://developers.facebook.com/docs/marketing-api/insights/v2.5
From the page above, the replacement for "start_time" and "end_time" is the "time_ranges" parameter. Example:
"time_ranges": {'since': '2015-01-01', 'until': '2015-01-31'}