I have noticed when using python-gnupg that if I sign some data and save the signed data to a file using pickle, lots of data gets saved along with the signed data. One of these things is a timestamp in Unix time; for example, the following lines are part of a timestamp:
p24
sS'timestamp'
p25
V1347364912
The documentation does not mention any of this, which makes me a little confused. After loading the file back in using pickle, I can't see any mention of the timestamp or any way to return its value. But if pickle is saving it, it must be part of the Python object, so does this mean there should be a way to get at this information in Python? I would also like to use this data, which I could maybe do by reading the file itself, but I am looking for a cleaner way to do it using the gnupg module.
python-gnupg isn't very well documented, but if you inspect it you will see there are attributes besides the ones normally used...
# core
import inspect
import pickle
import datetime
# 3rd party
import gnupg
def depickle():
    """ pull and depickle our signed data """
    f = open('pickle.txt', 'rb')  # binary mode: pickle data is not plain text
    signed_data = pickle.load(f)
    f.close()
    return signed_data

# depickle our signed data
signed_data = depickle()

# inspect the object
for key, value in inspect.getmembers(signed_data):
    print key
One of them is your timestamp... aptly named timestamp. Now that you know it you can use it easily enough...
# use the attribute now that we know it
print signed_data.timestamp
# make it pretty
print datetime.datetime.fromtimestamp(float(signed_data.timestamp))
That felt long-winded, but I thought this discussion would benefit from documenting the use of inspect to identify the undocumented attributes instead of just saying "use signed_data.timestamp".
I have found that some fields of the python-gnupg Sign and Verify classes are not described in the documentation. You will have to look at the python-gnupg source: [PYTHONDIR]/Lib/site-packages/gnupg.py. There is a Sign class with a handle_status() method that fills in all the variables/fields connected with the signature, including the timestamp field.
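That also means the timestamp is available on the result object as soon as you sign, before any pickling is involved. A minimal sketch (assumes a usable default GPG home directory and signing key):

# core
import datetime
# 3rd party
import gnupg

gpg = gnupg.GPG()                 # default keyring; adjust gnupghome as needed
signed_data = gpg.sign('my data')  # returns a Sign object

# handle_status() has already filled in the undocumented fields
print signed_data.timestamp
print datetime.datetime.fromtimestamp(float(signed_data.timestamp))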
I am trying to create a Python script that can take a JSON object and insert it into a headless Couchbase server. I have been able to successfully connect to the server and insert some data. I'd like to be able to specify the path of a JSON file and upsert its contents.
So far I have this:
from couchbase.bucket import Bucket
from couchbase.exceptions import CouchbaseError
import json
cb = Bucket('couchbase://XXX.XXX.XXX?password=XXXX')
print cb.server_nodes
#tempJson = json.loads(open("myData.json","r"))
try:
    result = cb.upsert('healthRec', {'record': 'bob'})
    # result = cb.upsert('healthRec', {'record': tempJson})
except CouchbaseError as e:
    print "Couldn't upsert", e
    raise

print(cb.get('healthRec').value)
I know that the first commented-out line that loads the JSON is incorrect because json.loads() is expecting a string, not a file... Can anyone help?
Thanks!
Figured it out:
with open('myData.json', 'r') as f:
    data = json.load(f)

try:
    result = cb.upsert('healthRec', {'record': data})
except CouchbaseError as e:
    print "Couldn't upsert", e
    raise
I am looking into using cbdocloader, but this was my first step in getting this to work. Thanks!
I know that you've found a solution that works for you in this instance but I thought I'd correct the issue that you experienced in your initial code snippet.
json.loads() takes a string as an input and decodes the json string into a dictionary (or whatever custom object you use based on the object_hook), which is why you were seeing the issue as you are passing it a file handle.
There is actually a method json.load() which works as expected, as you have used in your eventual answer.
You would have been able to use it as follows (if you wanted something slightly less verbose than the with statement):
tempJson = json.load(open("myData.json","r"))
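For contrast, a sketch of what the json.loads() version would have needed, reading the file contents into a string first:

tempJson = json.loads(open("myData.json","r").read())  # json.loads() decodes a string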
As Kirk mentioned, though, if you have a large number of JSON documents to insert, then it might be worth taking a look at cbdocloader, as it will handle all of this boilerplate code for you (with appropriate error handling and other functionality).
This readme covers the uses of cbdocloader and how to format your data correctly to allow it to load your documents into Couchbase Server.
I am confronted with the loss of the alpha channel when I try to send an image to the clipboard. None of the solutions described here worked with the software I am working with, but when I copy and paste PNG files into this software, the alpha channel seems to be preserved.
With this in mind, I want to simulate the Ctrl+C that Windows Explorer performs on files. Using Clipview I found that field 15 (CF_HDROP) is relevant to my goal. I tried to set this field using win32clipboard:
import win32clipboard
win32clipboard.OpenClipboard(0)
file1="C:\\Users\\User\\Desktop\\test.png"
win32clipboard.SetClipboardData(15, file1)
win32clipboard.CloseClipboard()
I don't get any error doing this, but it does not work when I try to use the new clipboard content, because, as described there, a tuple of Unicode filenames must be stored in the CF_HDROP field.
I have no clue how to proceed. I also tried with
file1= (unicode('C:\\Users\\User\\Desktop\\CANEVAS\\test.png'),)
but I got this error:
TypeError: expected a readable buffer object.
The documentation for CF_HDROP says
The data consists of an STGMEDIUM structure that contains a global memory object. The structure's hGlobal member points to a DROPFILES structure.
win32clipboard.GetClipboardData has built-in support for CF_HDROP. It decodes the STGMEDIUM and DROPFILES structures to produce a tuple of file names.
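For example, after copying files in Explorer, something along these lines should hand you back the paths as a tuple:

import win32clipboard

win32clipboard.OpenClipboard(0)
# pywin32 decodes CF_HDROP into a tuple of unicode file names
filenames = win32clipboard.GetClipboardData(win32clipboard.CF_HDROP)
win32clipboard.CloseClipboard()
print filenames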
The documentation does not state that SetClipboardData has the corresponding code to construct the STGMEDIUM and DROPFILES structures from a tuple of file names.
I don't know enough about Python or its FFI to know how straightforward it is to construct the structures and pass them to the SetClipboardData function. Or if there is an existing library that will do this for you.
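For what it's worth, here is a rough, untested sketch of what building the DROPFILES buffer by hand might look like with struct, assuming SetClipboardData will accept a raw byte string for CF_HDROP (set_clipboard_files is a hypothetical helper, not an existing API):

import struct
import win32clipboard

def set_clipboard_files(paths):
    # DROPFILES header: pFiles (offset to name list), pt.x, pt.y, fNC, fWide
    header = struct.pack('<5l', 20, 0, 0, 0, 1)  # 20-byte struct, fWide=1 for unicode
    # each name null-terminated, whole list double-null-terminated, wide chars
    names = (u'\0'.join(paths) + u'\0\0').encode('utf-16-le')
    win32clipboard.OpenClipboard(0)
    win32clipboard.EmptyClipboard()
    win32clipboard.SetClipboardData(win32clipboard.CF_HDROP, header + names)
    win32clipboard.CloseClipboard()

set_clipboard_files([u'C:\\Users\\User\\Desktop\\test.png'])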
I need to pull information on a long list of JIRA issues that live in a CSV file. I'm using the JIRA REST API in Python in a small script to see what kind of data I can expect to retrieve:
#!/usr/bin/python
import csv
import sys
from jira.client import JIRA
*...redacted*
csvfile = list(csv.reader(open(sys.argv[1])))

for row in csvfile:
    r = str(row).strip("'[]'")
    i = jira.issue(r)
    print i.id, i.fields.summary, i.fields.fixVersions, i.fields.resolution, i.fields.resolutiondate
The ID (Key), Summary, and Resolution dates are human-readable as expected. The fixVersions and Resolution fields are resources as follows:
[<jira.resources.Version object at 0x105096b11>], <jira.resources.Resolution object at 0x105096d91>
How do I use the API to get the set of available fixVersions and Resolutions, so that I can populate this correctly in my output CSV?
I understand how JIRA stores these values, but the documentation on the jira-python code doesn't explain how to harness it to grab those base values. I'd be happy to just snag the available fixVersion and Resolution values globally, but the resource info I receive doesn't map to them in an obvious way.
You can use version.name on each entry of fixVersions, and resolution.name, to get the string versions of those values.
User mdoar answered this question in his comment:
How about using version.name and resolution.name?
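Applied to the loop from the question, that could look something like this (a sketch; note that resolution is None for unresolved issues, so the 'Unresolved' fallback here is just a placeholder):

for row in csvfile:
    r = str(row).strip("'[]'")
    i = jira.issue(r)
    fix_versions = ', '.join(v.name for v in i.fields.fixVersions)
    resolution = i.fields.resolution.name if i.fields.resolution else 'Unresolved'
    print i.id, i.fields.summary, fix_versions, resolution, i.fields.resolutiondate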
I am very new to Python and am not very familiar with the data structures in Python.
I am writing an automatic JSON parser in Python; the JSON message is read into a dictionary using UltraJSON:
jsonObjs = ujson.loads(data)
Now, if I try something like:
jsonObjs[param1][0][param2]
it works fine.
However, I need to get the path from an external source (I read it from the DB). We initially thought we would just write in the DB:
myPath = [param1][0][param2]
and then try to access:
jsonObjs[myPath]
But after a couple of failures I realized I'm trying to access:
jsonObjs[[param1][0][param2]]
Is there a way to fix this without parsing myPath?
Many thanks for your help and advice
Store the keys in a format that preserves type information, e.g. JSON, and then use reduce() to perform recursive accesses on the structure.
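A minimal sketch of that idea, with the path stored as a JSON array so that integer indexes survive the round trip (param1/param2 stand in for your real keys, and jsonObjs is the structure from the question):

import json
from functools import reduce  # built-in on Python 2, import needed on Python 3
from operator import getitem

# stored in the DB as '["param1", 0, "param2"]' instead of '[param1][0][param2]'
myPath = json.loads('["param1", 0, "param2"]')

# walk the parsed structure one key/index at a time
value = reduce(getitem, myPath, jsonObjs)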
I'm creating a script to convert a whole lot of data into CSV format. It runs on Google AppEngine using the mapreduce API, which is only relevant in that it means each row of data is formatted and output separately, in a callback function.
I want to take advantage of the logic that already exists in the csv module to convert my data into the correct format, but because the CSV writer expects a file-like object, I'm having to instantiate a StringIO for each row, write the row to the object, then return the content of the object, each time.
This seems silly, and I'm wondering if there is any way to access the internal CSV formatting logic of the csv module without the writing part.
The csv module wraps the _csv module, which is written in C. You could grab the source for it and modify it to not require the file-like object, but poking around in the module, I don't see any clear way to do it without recompiling.
One option could be having your own "file-like" object. Actually, csv.writer only requires the object to have a write method, so:
class PseudoFile(object):
    def write(self, string):
        # Do whatever with your string
        pass

csv.writer(PseudoFile()).writerow(row)
You're skipping a couple steps in there, but maybe it's just what you want.
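For instance, a variant that collects what the writer emits and hands it back per row, which would fit the one-row-per-callback setup described in the question (RowCollector and format_row are hypothetical names for this sketch):

import csv

class RowCollector(object):
    """File-like stub: csv.writer only ever calls write()."""
    def __init__(self):
        self.value = ''
    def write(self, string):
        self.value += string

def format_row(row):
    collector = RowCollector()
    csv.writer(collector).writerow(row)
    return collector.value  # one CSV-formatted line, e.g. 'a,"b,c",1\r\n'

print format_row(['a', 'b,c', 1])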