I am trying to make a simple command line client for accessing shares via the Python bindings of gio (yes, the main requirement is to use gio).
I can see that, compared with its predecessor gnome-vfs, it provides some means to do authentication (subclassing MountOperation), and even some methods quite specific to Samba shares, like set_domain().
But I'm stuck with this code:
import gio
fh = gio.File("smb://server_name/")
If that server needs authentication, I suppose a call to fh.mount_enclosing_volume() is needed, as this method takes a MountOperation as a parameter. The problem is that calling this method does nothing, and the fh.enumerate_children() call that logically comes next (to list the available shares) fails.
Could anybody provide a working example of how this would be done with gio?
The following appears to be the minimum code needed to mount a volume:
def mount(f):
    op = gio.MountOperation()
    op.connect('ask-password', ask_password_cb)
    f.mount_enclosing_volume(op, mount_done_cb)

def ask_password_cb(op, message, default_user, default_domain, flags):
    op.set_username(USERNAME)
    op.set_domain(DOMAIN)
    op.set_password(PASSWORD)
    op.reply(gio.MOUNT_OPERATION_HANDLED)

def mount_done_cb(obj, res):
    obj.mount_enclosing_volume_finish(res)
(Derived from gvfs-mount.)
In addition, you may need a glib.MainLoop running because GIO mount functions are asynchronous. See the gvfs-mount source code for details.
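Putting the pieces together, a complete script would look something like this. It's a minimal sketch assuming the old static PyGTK-era gio/glib bindings; USERNAME, DOMAIN, PASSWORD and the server name are placeholders:

import gio
import glib

USERNAME = 'user'        # placeholder credentials
DOMAIN = 'WORKGROUP'
PASSWORD = 'secret'

loop = glib.MainLoop()

def ask_password_cb(op, message, default_user, default_domain, flags):
    op.set_username(USERNAME)
    op.set_domain(DOMAIN)
    op.set_password(PASSWORD)
    op.reply(gio.MOUNT_OPERATION_HANDLED)

def mount_done_cb(obj, res):
    obj.mount_enclosing_volume_finish(res)
    # Once mounted, listing the shares should succeed.
    for info in obj.enumerate_children('standard::name'):
        print info.get_name()
    loop.quit()

f = gio.File('smb://server_name/')
op = gio.MountOperation()
op.connect('ask-password', ask_password_cb)
f.mount_enclosing_volume(op, mount_done_cb)
loop.run()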
I have a program that interacts with and changes block devices (/dev/sda and such) on linux. I'm using various external commands (mostly commands from the fdisk and GNU fdisk packages) to control the devices. I have made a class that serves as the interface for most of the basic actions with block devices (for information like: What size is it? Where is it mounted? etc.)
Here is one such method querying the size of a partition:
def get_drive_size(device):
    """Returns the maximum size of the drive, in sectors.
    :device the device identifier (/dev/sda and such)"""
    query_proc = subprocess.Popen(["blockdev", "--getsz", device], stdout=subprocess.PIPE)
    # blockdev returns the number of 512B blocks in a drive
    output, error = query_proc.communicate()
    exit_code = query_proc.returncode
    if exit_code != 0:
        raise Exception("Non-zero exit code", str(error, "utf-8"))  # I have custom exceptions, this is slight pseudo-code
    return int(output)  # should always be valid
So this method accepts a block device path, and returns an integer. The tests will run as root, since this entire program will end up having to run as root anyway.
Should I try to test code such as these methods? If so, how? I could try to create and mount image files for each test, but this seems like a lot of overhead, and is probably error-prone itself. The code expects block devices, so I cannot operate directly on image files in the file system.
I could try mocking, as some answers suggest, but this feels inadequate. If I mock the Popen object, it seems like I start to test the implementation of the method rather than its output. Is this a correct assessment of proper unit-testing methodology in this case?
I am using python3 for this project, and I have not yet chosen a unit-testing framework. In the absence of other reasons, I will probably just use the default unittest framework included in Python.
You should look into the mock module (in Python 3.3+ it ships in the standard library as unittest.mock).
It enables you to run tests without depending on any external resources, while giving you control over how the mocks interact with your code.
I would start from the docs on Voidspace.
Here's an example:
import unittest
import mock  # bundled as unittest.mock in Python 3.3+

from path.to.your.module import get_drive_size  # the module under test

class GetDriveSizeTestSuite(unittest.TestCase):
    # Patch Popen where it is looked up: in the module under test.
    @mock.patch('path.to.your.module.subprocess.Popen')
    def test_a_scenario_with_mock_subprocess(self, mock_popen):
        # communicate() returns a (stdout, stderr) tuple.
        mock_popen.return_value.communicate.return_value = (b'1024', b'')
        mock_popen.return_value.returncode = 0
        self.assertEqual(1024, get_drive_size('/dev/sda'))
I've used boto to interact with S3 with no problems, but now I'm attempting to connect to the AWS Support API to pull back info on open tickets, Trusted Advisor results, etc. It seems that the boto library has a different connect method for each AWS service. For example, with S3 it is:
conn = S3Connection(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
According to the boto docs, the following should work to connect to AWS Support API:
>>> from boto.support.connection import SupportConnection
>>> conn = SupportConnection('<aws access key>', '<aws secret key>')
However, there are a few problems I see after digging through the source code. First, boto.support.connection doesn't actually exist. boto.connection does, but it doesn't contain a class SupportConnection. boto.support.layer1 exists, and DOES have the class SupportConnection, but it doesn't accept key arguments as the docs suggest. Instead it takes one argument: an AWSQueryConnection object. That class is defined in boto.connection. AWSQueryConnection takes one argument: an AWSAuthConnection object, a class also defined in boto.connection. Lastly, AWSAuthConnection takes a generic object, with requirements defined in its __init__ as:
class AWSAuthConnection(object):
    def __init__(self, host, aws_access_key_id=None,
                 aws_secret_access_key=None,
                 is_secure=True, port=None, proxy=None, proxy_port=None,
                 proxy_user=None, proxy_pass=None, debug=0,
                 https_connection_factory=None, path='/',
                 provider='aws', security_token=None,
                 suppress_consec_slashes=True,
                 validate_certs=True, profile_name=None):
So, for kicks, I tried creating an AWSAuthConnection by passing keys, followed by AWSQueryConnection(awsauth), followed by SupportConnection(awsquery), with no luck. This was inside a script.
Last item of interest is that, with my keys defined in a .boto file in my home directory and running the Python interpreter from the command line, I can make a direct import and call to SupportConnection() (no arguments) and it works. It is clearly picking up my keys from the .boto file and consuming them, but I haven't analyzed every line of source code to understand how, and frankly, I'm hoping to avoid doing that.
Long story short, I'm hoping someone has some familiarity with boto and connecting to AWS APIs other than S3 (the bulk of the material that exists via Google) to help me troubleshoot further.
This should work:
import boto.support
conn = boto.support.connect_to_region('us-east-1')
This assumes you have credentials in your boto config file or in an IAM Role. If you want to pass explicit credentials, do this:
import boto.support
conn = boto.support.connect_to_region('us-east-1', aws_access_key_id="<access key>", aws_secret_access_key="<secret key>")
This basic incantation should work for all services in all regions. Just import the correct module (e.g. boto.support, boto.ec2, boto.s3, or whatever) and then call its connect_to_region method, supplying the name of the region you want as a parameter.
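Once connected, the same object exposes the Support API operations. A rough sketch (method names as I recall them from boto.support.layer1; check the API reference for exact signatures):

import boto.support

conn = boto.support.connect_to_region('us-east-1')

# Open support cases.
cases = conn.describe_cases()

# Trusted Advisor checks take a language code.
checks = conn.describe_trusted_advisor_checks('en')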
I think this is quite an easy question to answer; I just haven't been able to find anything detailing how to do it.
I'm developing a GAE app.
In my main file I have a few request handlers, for example:
class Query(webapp.RequestHandler):
    def post(self):
        queryDOI = cgi.escape(self.request.get('doiortitle'))
        import queryCosine
        self.response.out.write(queryCosine.cosine(queryDOI))
In that handler I'm importing queryCosine.py, a script which does all of the work. If something in the queryCosine script fails, I'd like to be able to print a message or do a redirect.
Inside queryCosine.py there is just a normal Python function, so obviously doing things like
self.response.out.write("Done")
doesn't work. What should I use instead of self, or what do I need to include within my included file? I've tried using Query.self.response.out.write instead, but that doesn't work.
A much better, more modular approach is to have your queryCosine.cosine function throw an exception if something goes wrong. Then your handler method can output the appropriate response depending on the return value or exception. This avoids unduly coupling the code that calculates whatever it is you're calculating to the webapp that hosts it.
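For example, a sketch of that split (QueryError, the error URL, and the elided internals are illustrative, not from the original code):

# queryCosine.py
class QueryError(Exception):
    """Raised when a query cannot be answered."""

def cosine(doi):
    if not doi:
        raise QueryError('no DOI or title supplied')
    # ... do the real work and return the result ...

# main file
import cgi
from google.appengine.ext import webapp
import queryCosine

class Query(webapp.RequestHandler):
    def post(self):
        queryDOI = cgi.escape(self.request.get('doiortitle'))
        try:
            self.response.out.write(queryCosine.cosine(queryDOI))
        except queryCosine.QueryError:
            self.redirect('/error')  # or write a message instead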
Pass it to the function.
main file:
import second
...
second.somefunction(self.response.out.write)
second.py:
def somefunction(output):
    output('Done')
I'm trying to generate a uuid for a filename, and I'm also using the multiprocessing module. Unpleasantly, all of my uuids end up exactly the same. Here is a small example:
import multiprocessing
import uuid

def get_uuid( a ):
    ## Doesn't help to cycle through a bunch.
    #for i in xrange(10): uuid.uuid4()
    ## Doesn't help to reload the module.
    #reload( uuid )
    ## Doesn't help to load it at the last minute.
    ## (I simultaneously comment out the module-level import).
    #import uuid
    ## uuid1() does work, but it differs only in the first 8 characters
    ## and includes identifying information about the computer.
    #return uuid.uuid1()
    return uuid.uuid4()

def main():
    pool = multiprocessing.Pool( 20 )
    uuids = pool.map( get_uuid, range( 20 ) )
    for id in uuids: print id

if __name__ == '__main__': main()
I peeked into uuid.py's code, and it seems that, depending on the platform, it uses OS-level routines for randomness, so I'm stumped as to a Python-level solution (something like reloading the uuid module or choosing a new random seed). I could use uuid.uuid1(), but only 8 digits differ and I think those are derived exclusively from the time, which seems dangerous especially given that I'm multiprocessing (so the code could be executing at exactly the same time). Is there some Wisdom out there about this issue?
This is the correct way to generate your own uuid4, if you need to do that:
import os, uuid

def make_uuid4():  # wrapper so the return statement has a home
    return uuid.UUID(bytes=os.urandom(16), version=4)
Python should be doing this automatically; this code is straight out of uuid.uuid4, for the case when the native _uuid_generate_random doesn't exist. There must be something wrong with your platform's _uuid_generate_random.
If you have to do this, don't just work around it yourself and let everyone else on your platform suffer; report the bug.
I don't see a way to make this work either. But you could just generate all the uuids in the main thread and pass them to the workers.
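A minimal sketch of that approach, reusing the pool from the question:

import multiprocessing
import uuid

def work(args):
    uid, item = args
    # The worker just consumes the uuid it was handed.
    return '%s -> %s' % (uid, item)

def main():
    # Generate the uuids in the parent, where uuid4() behaves normally.
    tasks = [(uuid.uuid4(), i) for i in range(20)]
    pool = multiprocessing.Pool(20)
    for line in pool.map(work, tasks):
        print line

if __name__ == '__main__': main()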
This works fine for me. Does your Python installation have os.urandom? If not, random number seeding will be very poor and would lead to this problem (assuming there's also no native UUID module, uuid._uuid_generate_random).
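A quick way to check:

import os

# Raises NotImplementedError if no OS-level randomness source exists.
print repr(os.urandom(16))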
Currently I am working on a script which fetches files either from a zip archive or from disk. After fetching, the payload gets pushed to an external tool via a web API.
For performance reasons I used the multiprocessing.Pool.map method, and for the tmp file names uuid looked quite handy. But I ran into the same issue you are asking about here.
First, please check out the official uuid docs. There is a class attribute called is_safe which tells you whether the uuid is multiprocessing-safe or not. In my case it was not.
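For example (is_safe exists since Python 3.7):

import uuid

u = uuid.uuid4()
# One of SafeUUID.safe, SafeUUID.unsafe or SafeUUID.unknown.
print(u.is_safe)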
After some research, I finally changed my strategy and moved from uuid to the process pid and name.
Because I just need the uuid for tmp file naming, the pid and name also work fine. We can access the current worker Process instance via multiprocessing.current_process(). If you really need a uuid, you could potentially integrate the worker pid somehow, as sketched below.
In addition, uuid uses system entropy for the generation (see the uuid source). Because for me it does not matter how the file is named, this solution also avoids draining entropy.
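A sketch of that naming scheme (the helper name is mine):

import multiprocessing

def tmp_name(suffix):
    worker = multiprocessing.current_process()
    # name and pid are unique among the live pool workers.
    return 'tmp_%s_%d_%s' % (worker.name, worker.pid, suffix)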
Looking for help/tutorials/sample code on using Python to listen to distributed notifications from applications on a Mac. I know the PyObjC lib is the bridge between Python and Mac/Cocoa classes, and the Foundation library can be used to add observers, but I'm looking for examples or tutorials on how to use this to monitor iTunes.
If anyone comes by this question: I figured out how to listen; the code below works. However, accessing attributes does not seem to work like standard Python attribute access.
Update: you do not access attributes as you would in Python, i.e. (.x); the code has been updated below and now generates a dict called song_details.
Update3: Updated the code, now subclassing NSObject and with addObserver removed from the class. I will keep the code updated on GitHub; no more updates here.
import Foundation
from AppKit import *
from PyObjCTools import AppHelper

class GetSongs(NSObject):
    def getMySongs_(self, song):
        # The notification's userInfo is an NSDictionary; copy it into
        # a plain Python dict.
        song_details = {}
        ui = song.userInfo()
        for x in ui:
            song_details[x] = ui.objectForKey_(x)
        print song_details

nc = Foundation.NSDistributedNotificationCenter.defaultCenter()
GetSongs = GetSongs.new()
nc.addObserver_selector_name_object_(GetSongs, 'getMySongs:', 'com.apple.iTunes.playerInfo', None)
NSLog("Listening for new tunes....")
AppHelper.runConsoleEventLoop()
The source code for GrowlTunes might give you some clues here. You'd have to translate from Objective-C to PyObjC, but eh, whatever. :)
GrowlTunesController.m
(Or grab the whole Growl source tree and navigate to GrowlTunes so you can see it all in action; here's a link to the directions on how to get the source.)