Pepper Linux log date format looks meaningless - python

I have a problem reading the date value from the Pepper robot log. I use Python to fetch the log messages from a remote robot. See the example code:
import qi

def onMessage(mess):
    print mess  # mess is a dictionary with all known LogMessage information.

def main():
    app = qi.Application(url="tcp://10.0.11.252:9559")
    app.start()
    logmanager = app.session.service("LogManager")
    listener = logmanager.getListener()
    listener.onLogMessage.connect(onMessage)
    app.run()

if __name__ == "__main__":
    main()
This is what one log message looks like:
{
    'category': 'ALMemory',
    'level': 5,
    'source': ':notify:0',
    'location': '7b5400e2-18b1-48e4-1127-g4e6544d0621b:3107',
    'date': 11112334305291,
    'message': 'notifying module: _WholeBodyLooker for datachange for key: ALTracker/ObjectLookAt',
    'id': 5599547,
    'systemDate': 1533208857670649344,
}
The problem is that I don't know what the date value means. I couldn't find any documentation for it. When I try to convert 11112334305291 to a date, the result is not meaningful: Sunday, February 19, 2322 11:31:45.291 PM.
Does anyone have any idea what this might mean?

Most likely nanoseconds since the robot was turned on (so in your case, about three hours). See the qi clock API in the documentation.
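As a rough sanity check (a minimal sketch, assuming the value really is nanoseconds on the robot's monotonic clock, i.e. time since boot):

date_ns = 11112334305291  # the 'date' field from the log message above

# If this is nanoseconds since boot, it should translate to a plausible uptime
# rather than a calendar date.
uptime_s = date_ns / 1e9
print("uptime: %.0f s (~%.1f hours)" % (uptime_s, uptime_s / 3600.0))
# uptime: 11112 s (~3.1 hours)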

Related

GET request using a Python script and the SonarQube web API

I've recently started using sonar.cloud.whatever.com. In the portfolio I get, by default, the analysis results for the master branch of any given project, so if I need to collect the information for multiple branches in a project I have to select those branches one by one.
For the moment I don't know if there is any simpler way to do this with what Sonar provides, so I started a Python script using the web service API that Sonar exposes.
I started (just to try the result) by collecting all the issues using /api/issues/search:
import json
import sys
import os
import requests

def usage():
    print("hello")

def collectIssues():
    r = requests.get('https://sonar.cloud.sec.NotToMention.com/api/issues/search?componentKeys=project_key&statuses=OPEN')
    print("status code ", r.status_code)
    print(r.url)
    #print(r.headers)
    if r.status_code != 200:
        exit(0)
    data = r.json()
    print(r.headers, "\n\n")
    print(data)
    print(data['issues'])

def main(args):
    collectIssues()

if __name__ == '__main__':
    main(sys.argv)
    exit(0)
If I copy the link and browse to it I get the right result, with a total of 1000 issues, but the result of this script gives total 0 and issues = [].
(Just to note that project_name and NotToMention are not the real values; I replaced them here for security reasons.)
The result of this script is:
status_code : 200
https://sonar.cloud.NotTOMention.com/api/issues/search?componentKeys=project_name&statuses=OPEN
JSON RESULT :
{'total': 0, 'p': 1, 'ps': 100, 'paging': {'pageIndex': 1, 'pageSize': 100, 'total': 0}, 'effortTotal': 0, 'debtTotal': 0, 'issues': [], 'components': [], 'facets': []}
Thanks for any advice.
Best regards

How to use json API information from python requests

I want to be able to read JSON API information and, depending on the information, make something happen. For example:
I get this information from a StreamElements API:
{
    "donation": {
        "user": {
            "username": "StreamElements",
            "geo": "null",
            "email": "streamelements#streamelements.com"
        },
        "message": "This is a test",
        "amount": 100,
        "currency": "USD"
    },
    "provider": "paypal",
    "status": "success",
    "deleted": false,
    "_id": "5c0aab85de9a4c6756a14e0d",
    "channel": "5b2e2007760aeb7729487dab",
    "transactionId": "IMPORTED",
    "createdAt": "2018-12-07T17:19:01.957Z",
    "approved": "allowed",
    "updatedAt": "2018-12-07T17:19:01.957Z"
}
I then want to check if the amount on that specific tip is $10 and if that is the case I want something to happen.
This is what I have so far but I do not know how to get the right variable:
data = json.loads(url.text)
if (data[0]['amount'] == 10):
    DoTheThing();
The amount field is under the inner donation object:
data = json.loads(url.text)

donation_amount = data["donation"]["amount"]

if donation_amount == 10:
    # do something
Verified using the Stream Elements Tips API documentation.
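For context, a minimal end-to-end sketch of how this fits with the request from the question (the URL and auth header here are placeholders, not the documented StreamElements endpoint):

import requests

# Placeholder URL and token: substitute the real StreamElements tips endpoint and credentials.
url = requests.get("https://example.com/streamelements/tips/latest",
                   headers={"Authorization": "Bearer <token>"})

data = url.json()  # equivalent to json.loads(url.text)
donation_amount = data["donation"]["amount"]

if donation_amount == 10:
    DoTheThing()  # whatever should happen for a $10 tip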

Dataflow Streaming using Python SDK: Transform for PubSub Messages to BigQuery Output

I am attempting to use Dataflow to read a Pub/Sub message and write it to BigQuery. I was given alpha access by the Google team and have gotten the provided examples working, but now I need to apply it to my scenario.
Pubsub payload:
Message {
data: {'datetime': '2017-07-13T21:15:02Z', 'mac': 'FC:FC:48:AE:F6:94', 'status': 1}
attributes: {}
}
Big Query Schema:
schema='mac:STRING, status:INTEGER, datetime:TIMESTAMP',
My goal is simply to read the message payload and insert it into BigQuery. I am struggling to get my head around the transformations and how I should map the key/values to the BigQuery schema.
I am very new to this so any help is appreciated.
Current code: https://codeshare.io/ayqX8w
Thanks!
I was able to successfully parse the pubsub string by defining a function that loads it into a json object (see parse_pubsub()). One weird issue I encountered was that I was not able to import json at the global scope. I was receiving "NameError: global name 'json' is not defined" errors. I had to import json within the function.
See my working code below:
from __future__ import absolute_import
import logging
import argparse
import apache_beam as beam
import apache_beam.transforms.window as window

'''Normalize pubsub string to json object'''
# Lines look like this:
# {'datetime': '2017-07-13T21:15:02Z', 'mac': 'FC:FC:48:AE:F6:94', 'status': 1}
def parse_pubsub(line):
    import json
    record = json.loads(line)
    return (record['mac']), (record['status']), (record['datetime'])

def run(argv=None):
    """Build and run the pipeline."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--input_topic', required=True,
        help='Input PubSub topic of the form "/topics/<PROJECT>/<TOPIC>".')
    parser.add_argument(
        '--output_table', required=True,
        help=(
            'Output BigQuery table for results specified as: PROJECT:DATASET.TABLE '
            'or DATASET.TABLE.'))
    known_args, pipeline_args = parser.parse_known_args(argv)

    with beam.Pipeline(argv=pipeline_args) as p:
        # Read the pubsub topic into a PCollection.
        lines = (p
                 | beam.io.ReadStringsFromPubSub(known_args.input_topic)
                 | beam.Map(parse_pubsub)
                 | beam.Map(lambda (mac_bq, status_bq, datetime_bq): {
                     'mac': mac_bq, 'status': status_bq, 'datetime': datetime_bq})
                 | beam.io.WriteToBigQuery(
                     known_args.output_table,
                     schema='mac:STRING, status:INTEGER, datetime:TIMESTAMP',
                     create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
                     write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
                 )

if __name__ == '__main__':
    logging.getLogger().setLevel(logging.INFO)
    run()
Data written to the Python SDK's BigQuery sink should be in the form of a dictionary where each key of the dictionary gives a field of the BigQuery table and the corresponding value gives the value to be written to that field. For a BigQuery RECORD type, the value itself should be a dictionary with corresponding key/value pairs.
I filed a JIRA to improve the documentation of the corresponding Python module in Beam: https://issues.apache.org/jira/browse/BEAM-3090
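For illustration, a minimal sketch of that dictionary shape using the question's schema (the nested RECORD variant is hypothetical, not part of the question's table):

# Row for the flat schema 'mac:STRING, status:INTEGER, datetime:TIMESTAMP':
row = {'mac': 'FC:FC:48:AE:F6:94', 'status': 1, 'datetime': '2017-07-13T21:15:02Z'}

# For a hypothetical RECORD field (say 'device:RECORD' containing 'mac' and 'status'),
# the value is itself a dictionary:
nested_row = {
    'device': {'mac': 'FC:FC:48:AE:F6:94', 'status': 1},
    'datetime': '2017-07-13T21:15:02Z',
}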
I have a similar use case (reading rows as strings from PubSub, converting them to dicts and then processing them).
I am using ast.literal_eval(), which seems to work for me. This command will evaluate the string, but in a safer way than eval() (see here). It should return a dict whose keys are strings, and values are evaluated to the most likely type (int, str, float...). You may want to make sure the values take the correct type, though.
This would give you a pipeline like this:
import ast

lines = (p
         | beam.io.ReadStringsFromPubSub(known_args.input_topic)
         | "JSON row to dict" >> beam.Map(lambda s: ast.literal_eval(s))
         | beam.io.WriteToBigQuery( ... )
         )
I have not used BigQuery (yet), so I cannot help you on the last line, but what you wrote seems correct at first glance.

Flask jsonify print results on new lines

First time using Flask: I have created a very basic app and I am trying to print the results of a recommender system. The first block of code is from my Python function (print_most_similar) and builds a formatted string, in the hope of printing every REC on a new line. The second block is my Flask routing. You can see that the Flask part calls the function, so 'y' is returned.
I believe jsonify will not handle the \n characters.
I have tried using just '\n' in the string formatting, but it just appears as a string, as does '\t'.
for k in range(len(sugg)):
    x = str("REC {}: {}\\n".format(k+1, sugg[k]))
    y += x
return y
@app.route("/getrecomm", methods=['GET', 'POST'])
def getrecomm():
    restname = request.args.get('restname', type=str)
    number = request.args.get('number', type=int)
    i = getBusIndex(restname, names)
    return make_response(jsonify(result=(print_most_similar(rating, names, i, number))), 200)
Currently, the results print like this:
REC 1: Harbor House Cafe & Lounge\nREC 2: Starbucks\nREC 3: McDonald's\nREC 4: Taco Bell\nREC 5: Panda Express\n
I would like them to print like this:
REC 1: Harbor House Cafe & Lounge
REC 2: Starbucks
REC 3: McDonald's
REC 4: Taco Bell
REC 5: Panda Express
I'm using python 3, fyi. Any suggestions would be super appreciated!
Summary
Answer: <br>
Alternative: JSONView Chrome Extension
The only one that gave me good results was <br>:
Example
from flask import Flask, jsonify

app = Flask(__name__)

tasks = [
    {
        '<br>id': 1,
        'title': u'Buy groceries',
        'description': u'Milk, Cheese, Pizza, Fruit, Tylenol',
        'done': False
    },
    {
        '<br>id': 2,
        'title': u'Learn Python',
        'description': u'Need to find a good Python tutorial on the web',
        'done': False
    }
]

@app.route('/todo/api/v1.0/tasks', methods=['GET'])
def get_tasks():
    return jsonify({'tasks': tasks})

if __name__ == '__main__':
    app.run(debug=True)
In your browser the <br> tag will be rendered as HTML and produce a new line.
Result:
"creates" new lines in json
jsonify can't help you here because it serializes the values (integer, boolean, float, etc.) into a JSON string and escapes special characters like \n and \t.
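A quick way to see this (a minimal sketch, not from the original answer):

from flask import Flask, jsonify

app = Flask(__name__)

with app.app_context():
    body = jsonify(result="REC 1: Starbucks\nREC 2: Taco Bell").get_data(as_text=True)
    # The newline is escaped inside the JSON string, so a browser showing the raw
    # response displays the two characters '\n' instead of breaking the line.
    print(body)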
Finally, if you just want a fancy way to visualize JSON files in your browser, you could use JSONView, a Chrome extension that renders JSON files in a more readable way, like this:
rendering with JSONView
Finally I have found a solution.
It seems that the jsonify() function doesn't handle the "new lines" situation.
You can use Response() instead:
from flask import Flask, Response

statement = """
try
try
try
"""

@app.route('/**/api/v1/**', methods=['GET'])
def get_statement():
    return Response(statement, mimetype='text/plain')
I'm also a new learner of Flask; the Response() function works in my app.

Confluence XML-RPC: Set the "creation" date

I am trying to migrate some existing blog entries into our Confluence wiki using XML-RPC with Python. It currently works for things such as title, content, and space, but not for the created date.
This is what I have attempted so far:
import xmlrpclib

proxy = xmlrpclib.ServerProxy('<my_confluence>/rpc/xmlrpc')
token = proxy.confluence1.login('username', 'password')

page = {
    'title': 'myTitle',
    'content': 'My Content',
    'space': 'myspace',
    'created': sometime
}

proxy.confluence1.storePage(token, page)
sometime is the date I want to set to a time in the past. I have tried using Date objects, various string formats and even the date object returned by a previous save, but no luck.
If you store the existing content as actual blog entries in Confluence, you can use the "publishDate" parameter:
import xmlrpclib
import datetime

proxy = xmlrpclib.ServerProxy('<my_confluence>/rpc/xmlrpc')
token = proxy.confluence1.login('username', 'password')

blogpost = {
    'title': 'myTitle',
    'content': 'My Content',
    'space': 'myspace',
    'publishDate': datetime.datetime(2001, 11, 21, 16, 30)
}

proxy.confluence1.storeBlogEntry(token, blogpost)
The XML-API for pages ignores the "created" parameter.
You can use strptime, because the type will not match directly. Hope this works:
from datetime import datetime

new_sometime = datetime.strptime(sometime, '%Y-%m-%d')

page = {
    'title': 'myTitle',
    'content': 'My Content',
    'space': 'myspace',
    'created': new_sometime
}
