How to read LV2 ttl file in Python?

I have an LV2 plugin and I want to use Python to extract its metadata - plugin name, description, list of control and audio ports and specification of each port.
With LADSPA the instructions were pretty clear, although a bit difficult to implement in Python: I just needed to call the ladspa_descriptor() function. Now with LV2 there's a .ttl file, simpler to access but more complicated to parse.
Is there any python library that will make this job simple?

The LV2 documentation generation tools use RDFLib. It is probably the most popular RDF interface for Python, though it does much more than just parse Turtle. Unfortunately it is quite slow, so it is a good choice only if performance is not an issue.
If you need to actually instantiate and use plugins, you probably want to use an existing LV2 implementation. As Steve mentioned, Lilv is for this. It is not limited to any static default location, but will look in all the locations in LV2_PATH. You can set this environment variable to whatever you want before calling Lilv and it will only look in those locations. Alternatively, if you want to specifically load just one bundle at a time, there is a function for that: lilv_world_load_bundle().
There are SWIG-based Python bindings included with Lilv, but they stop short of actually allowing you to process data. However there is a project to wrap Lilv that allows processing of audio using scipy arrays: http://pyslv2.sourceforge.net/ (despite the name they are indeed Lilv bindings and not bindings for its predecessor SLV2)
That said, if you only need to get static information from the Turtle files, involving C libraries is probably more trouble than it is worth. One of the big advantages of using standard data files is ease of use with existing tools. To get the number of ports on a plugin, you simply need to count the number of triples that match the pattern (plugin, lv2:port, *). Here is an example Python script that prints the number of ports of a plugin, given the file to read and the plugin URI as command line arguments:
#!/usr/bin/env python
import rdflib
import sys

lv2 = rdflib.Namespace('http://lv2plug.in/ns/lv2core#')

path = sys.argv[1]                   # Turtle file to read
plugin = rdflib.URIRef(sys.argv[2])  # URI of the plugin to inspect

model = rdflib.ConjunctiveGraph()
model.parse(path, format='n3')

# Count the triples matching the pattern (plugin, lv2:port, *)
num_ports = 0
for triple in model.triples((plugin, lv2.port, None)):
    num_ports += 1

print('%s has %u ports' % (plugin, num_ports))

This is how to get the number of ports each plugin supports:
import lilv

w = lilv.World()
w.load_all()
for p in w.get_all_plugins():
    print(p.get_name().as_string(), p.get_num_ports())
At least this is all I got while trying to figure this out.

Related

How can I edit a line in a .tcl file?

I need to run a .tcl file via the command line, invoked from a Python script. However, a single line in that .tcl file needs to change based on input from the user. For example:
info = input("Prompt for the user: ")
Now I need the string contained in info to replace one of the lines in .tcl file.
Rewriting the script is one of the trickier options to pick. It makes things harder to audit and it is tremendously easy to make a mess of. It's not recommended at all unless you take special steps, such as factoring out the bit you set into its own file:
File that you edit, e.g., settings.tcl (simple enough that it is pretty trivial to write, and you can rewrite the whole lot each time without making a mess of it):
set value "123"
Use of that file:
set value 0
if {[file readable settings.tcl]} {
    source settings.tcl
}
puts "value is $value"
More sophisticated versions of that are possible with safe interpreters and language profiling… but they're only really needed when the settings and the code are in different trust domains.
That said, there are other approaches that are usually easier. If you are invoking the Tcl script by running a subprocess, the easiest ways to pass an arbitrary parameter are to use one of the following (a Python-side sketch follows this list):
A command line argument. These can be read on the Tcl side from the $argv global, which holds a list of all arguments after the script name. (The lindex and lassign commands tend to be useful here, e.g., set value [lindex $argv 0].)
An environment variable. These can be read on the Tcl side from the env global array, e.g., set value $env(MyVarName)
On standard input. A line can be read from that on the Tcl side using set line [gets stdin].
In more complex cases, you'd pass values in their own files, or by writing them into something like an SQLite database, or… well, there's lots of options.
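For concreteness, here is a minimal Python-side sketch of those three options. script.tcl and MY_SETTING are placeholder names; subprocess.run needs Python 3.5+, and text= needs 3.7+:
import os
import subprocess

info = input("Prompt for the user: ")

# 1. As a command-line argument: read in Tcl via [lindex $argv 0]
subprocess.run(["tclsh", "script.tcl", info])

# 2. As an environment variable: read in Tcl via $env(MY_SETTING)
subprocess.run(["tclsh", "script.tcl"], env=dict(os.environ, MY_SETTING=info))

# 3. On standard input: read in Tcl via [gets stdin]
subprocess.run(["tclsh", "script.tcl"], input=info + "\n", text=True)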
If on the other hand the Tcl interpreter is in the same process, pass the values by setting the variables in it before asking for the script to run. (Tcl has almost no true globals — environment variables are a special exception, and only because the OS forces it upon us — so everything is specific to the interpreter context.)
Specifically, if you've got a Tcl instance object from tkinter (Tk is a subclass of that) then you can do:
import tkinter
interp = tkinter.Tcl()
interp.call("set", "value", 123)
interp.eval("source program.tcl")
# Or interp.call("source", "program.tcl")
That has the advantage of doing all the quoting for you.

unix tools to parse file on the command line

I have a python script that looks like the following that I want to transform:
import sys
# more imports
''' some comments '''
class Foo:
def _helper1():
etc.
def _helper2():
etc.
def foo1():
d = { a:3, b:2, c:4 }
etc.
def foo2():
d = { a:2, b:2, c:7 }
etc.
def foo3():
d = { a:3, b:2, c:7 }
etc.
etc.
if __name__ == "__main__":
etc.
I'd like to be able to parse JUST the foo*() functions and keep just the ones that have certain attributes, like d = {a:3, b:2}. Obviously keep everything else that is not a foo*() function so the transformed file will still run. The foo*() functions will be well defined, though d may have different keys and values.
Is there some set of unix tools I can use to do this through chaining? I can use grep to identify foo, but how would I scan the next couple of lines to apply the keep-or-reject portion of my logic?
Edit: note, I'm trying to see if it's reasonable to do this with command-line tools before writing a custom parser. I know how to write the parser.
You haven't specified your problem with enough detail to recommend a particular solution, but there are many tools and techniques that will handle this type of problem.
As I understand this, you want to
Identify the boundaries of your class
Identify the methods within the class
Remove the methods lacking certain textual features
My general approach to this would be a script with logic based on "open old and new files; write everything you read from the old file, unless you are inside a method you have decided to drop."
You can blithely write things until you get into the class (one flag) and start finding methods (another flag). The one slightly tricky part here is the buffering: you need to keep the text of each method until you know whether it contains the target text. You can either read in the entire method (minor parsing task) and search that for the target, or simply hold lines of text until you find the target (then return to your write-it-all mode) or run off the end (empty the buffer without writing).
This is simple enough that you could cobble together a script in any handy language to handle the problem. UNIX provides a variety of tools; in that paradigm I'd use awk. However, I recommend a read-friendly tool, such as Python or Perl. If you want to move formally into the world of parsing, I suggest a trivial Lex-YACC couplet: you can have very simple tokens (perhaps even complete lines, depending on your coding style) and actions (write line, hold line, set status flag, flush buffer, etc.). For example, in Python:
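Here is a rough sketch of that hold-and-flush approach; the regular expressions are assumptions based on the question's example layout, so adjust them to your actual coding style:
import re
import sys

# Pattern marking a method we want to keep, and a pattern for any def line.
TARGET = re.compile(r'd\s*=\s*\{\s*a\s*:\s*3\s*,\s*b\s*:\s*2')
DEF = re.compile(r'^\s+def\s+(\w+)\s*\(')

held = []     # lines of the foo*() method currently being buffered
keep = False  # did the buffered method contain the target?

def flush():
    global held, keep
    if keep:
        sys.stdout.writelines(held)
    held, keep = [], False

for line in sys.stdin:
    m = DEF.match(line)
    if m:
        flush()                     # decide the fate of any held method
        if m.group(1).startswith('foo'):
            held.append(line)       # start holding a new foo*() method
        else:
            sys.stdout.write(line)  # non-foo methods pass straight through
    elif held and (not line.strip() or line.startswith(' ')):
        held.append(line)           # still inside the held method's body
        if TARGET.search(line):
            keep = True
    else:
        flush()                     # dedent to top level ends the method
        sys.stdout.write(line)
flush()
Run it as a filter: python filter_foos.py < old.py > new.py.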
Is that enough to get you moving?

How to unit test program interacting with block devices

I have a program that interacts with and changes block devices (/dev/sda and such) on Linux. I'm using various external commands (mostly commands from the fdisk and GNU fdisk packages) to control the devices. I have made a class that serves as the interface for most of the basic actions on block devices (for information like: What size is it? Where is it mounted? etc.)
Here is one such method querying the size of a partition:
import subprocess

def get_drive_size(device):
    """Returns the maximum size of the drive, in sectors.

    :device the device identifier (/dev/sda and such)"""
    query_proc = subprocess.Popen(["blockdev", "--getsz", device],
                                  stdout=subprocess.PIPE)
    # blockdev returns the number of 512B blocks in a drive
    output, error = query_proc.communicate()
    exit_code = query_proc.returncode
    if exit_code != 0:
        # I have custom exceptions; this is slight pseudo-code
        raise Exception("Non-zero exit code", str(error, "utf-8"))
    return int(output)  # should always be valid
So this method accepts a block device path, and returns an integer. The tests will run as root, since this entire program will end up having to run as root anyway.
Should I try and test code such as these methods? If so, how? I could try and create and mount image files for each test, but this seems like a lot of overhead, and is probably error-prone itself. It expects block devices, so I cannot operate directly on image files in the file system.
I could try mocking, as some answers suggest, but this feels inadequate. It seems like I start to test the implementation of the method, if I mock the Popen object, rather than the output. Is this a correct assessment of proper unit-testing methodology in this case?
I am using python3 for this project, and I have not yet chosen a unit-testing framework. In the absence of other reasons, I will probably just use the default unittest framework included in Python.
You should look into the mock module (in Python 3 it is part of the standard library as unittest.mock).
It enables you to run tests without needing to depend on any external resources, while giving you control over how the mocks interact with your code.
I would start from the docs on Voidspace.
Here's an example:
import unittest2 as unittest
import mock

class GetDriveSizeTestSuite(unittest.TestCase):
    @mock.patch('path.to.original.file.subprocess.Popen')
    def test_a_scenario_with_mock_subprocess(self, mock_popen):
        # Pretend blockdev printed 1024 sectors and exited cleanly.
        # (Assumes get_drive_size has been imported from the module under test.)
        mock_popen.return_value.communicate.return_value = ('1024', '')
        mock_popen.return_value.returncode = 0
        self.assertEqual(1024, get_drive_size('some device'))

Python: Windows System File

In Python, how can I identify a file that is a "Windows system file"? From the command line I can do this with the following command:
ATTRIB "c:\file_path_name.txt"
If the output includes the "S" character, then it's a Windows system file. I cannot figure out the equivalent in Python. A few examples of similar queries look like this:
Is a file writeable?
import os

filePath = r'c:\testfile.txt'
if os.access(filePath, os.W_OK):
    print 'writable'
else:
    print 'not writable'
another way...
import os
import stat

filePath = r'c:\testfile.txt'
attr = os.stat(filePath)[0]
if not attr & stat.S_IWRITE:
    print 'not writable'
else:
    print 'writable'
But I can't find a function or enum to identify a windows system file. Hopefully there's a built in way to do this. I'd prefer not to have to use win32com or another external module.
The reason I want to do this is that I am using os.walk to copy files from one drive to another. If there were a way to walk the directory tree while ignoring system files, that might work too.
Thanks for reading.
Here are the solutions I came up with based on the answer:
Using win32api:
import win32api
import win32con

filePath = r'c:\test_file_path.txt'
if not win32api.GetFileAttributes(filePath) & win32con.FILE_ATTRIBUTE_SYSTEM:
    print filePath, 'is not a windows system file'
else:
    print filePath, 'is a windows system file'
and using ctypes:
import ctypes
import ctypes.wintypes as types

# From pywin32
FILE_ATTRIBUTE_SYSTEM = 0x4

kernel32dll = ctypes.windll.kernel32

class WIN32_FILE_ATTRIBUTE_DATA(ctypes.Structure):
    _fields_ = [("dwFileAttributes", types.DWORD),
                ("ftCreationTime", types.FILETIME),
                ("ftLastAccessTime", types.FILETIME),
                ("ftLastWriteTime", types.FILETIME),
                ("nFileSizeHigh", types.DWORD),
                ("nFileSizeLow", types.DWORD)]

def isWindowsSystemFile(pFilepath):
    GetFileExInfoStandard = 0
    GetFileAttributesEx = kernel32dll.GetFileAttributesExA
    GetFileAttributesEx.restype = ctypes.c_int
    # I can't figure out the correct args here
    #GetFileAttributesEx.argtypes = [ctypes.c_char, ctypes.c_int, WIN32_FILE_ATTRIBUTE_DATA]
    wfad = WIN32_FILE_ATTRIBUTE_DATA()
    GetFileAttributesEx(pFilepath, GetFileExInfoStandard, ctypes.byref(wfad))
    return wfad.dwFileAttributes & FILE_ATTRIBUTE_SYSTEM

filePath = r'c:\test_file_path.txt'
if not isWindowsSystemFile(filePath):
    print filePath, 'is not a windows system file'
else:
    print filePath, 'is a windows system file'
I wonder if pasting the constant "FILE_ATTRIBUTE_SYSTEM" into my code is legit, or whether I can get its value using ctypes as well?
But I can't find a function or enum to identify a windows system file. Hopefully there's a built in way to do this.
There is no such thing. Python's file abstraction doesn't have any notion of a "system file", so it doesn't give you any way to get at it. Also, Python's stat is a very thin wrapper around the stat or _stat functions in Microsoft's C runtime library, which doesn't have any notion of "system file" either. The reason is that both Python's file abstraction and Microsoft's C library are designed to be "pretty much like POSIX".
Of course Windows also has a completely different abstraction for files. But this one isn't exposed by the open, stat, etc. functions; rather, there's a completely parallel set of functions like CreateFile, GetFileAttributes, etc. And you have to call those if you want that information.
I'd prefer not to have to use win32com or another external module.
Well, you don't need win32com, because this is just Windows API, not COM.
But win32api is the easiest way to do it. It provides a nice wrapper around GetFileAttributesEx, which is the function you want to call.
If you don't want to use an external module, you can always call Windows API functions via ctypes instead. Or use subprocess to run command-line tools (like ATTRIB—or, if you prefer, like DIR /S /A-S to let Windows do the recursive-walk-skipping-system-files bit for you…).
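For example, a hedged sketch of the subprocess route; the column handling is an assumption about ATTRIB's output format (attribute letters appear before the file name), so treat it as a starting point:
import subprocess

def is_system_file(path):
    # ATTRIB prints attribute letters in the columns before the file name
    out = subprocess.check_output(['attrib', path]).decode('mbcs', 'replace')
    flags = out[:out.upper().find(path.upper())]
    return 'S' in flags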
The ctypes docs show how to call Windows API functions, but it's a little tricky the first time.
First you need to go to the MSDN page to find out what DLL you need to load (kernel32), whether your function has separate A and W variants (it does), and what values to pass for any constants (you have to follow a link to another page, and know how C enums work, to find out that GetFileExInfoStandard is 0), and then you need to figure out how to define any structs necessary. In this case, something like this:
from ctypes import *

kernel = windll.kernel32
GetFileExInfoStandard = 0
GetFileAttributesEx = kernel.GetFileAttributesEx
GetFileAttributesEx.restype = c_int
GetFileAttributesEx.argtypes = # ...
If you really want to avoid using win32api, you can do the work to finish the ctypes wrapper yourself. Personally, I'd use win32api.
Meanwhile:
The reason I want to do this is because I am using os.walk to copy files from one drive to another. If there was a way to walk the directory tree while ignoring system files that may work too.
For that case, especially given your complaint that checking each file was too slow, you probably don't want to use os.walk either. Instead, use FindFirstFileEx, and do the recursion manually. You can distinguish files and directories without having to stat (or GetFileAttributesEx) each file (which os.walk does under the covers), you can filter out system files directly inside the find function instead of having to stat each file, etc.
Again, the options are the same: use win32api if you want it to be easy, use ctypes otherwise.
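As a rough illustration of the win32api route: win32api.FindFiles wraps FindFirstFile/FindNextFile and hands back the attributes with each entry, so nothing has to be stat'd twice. The tuple indices below assume pywin32's WIN32_FIND_DATA layout (item 0 is the attribute mask, item 8 is the file name):
import os
import win32api
import win32con

def walk_skipping_system(root):
    for info in win32api.FindFiles(os.path.join(root, '*')):
        attrs, name = info[0], info[8]
        if name in ('.', '..') or attrs & win32con.FILE_ATTRIBUTE_SYSTEM:
            continue
        path = os.path.join(root, name)
        if attrs & win32con.FILE_ATTRIBUTE_DIRECTORY:
            for sub in walk_skipping_system(path):
                yield sub
        else:
            yield path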
But in this case, I'd take a look at Ben Hoyt's betterwalk, because he's already done 99% of the ctypes-wrapping, and 95% of the rest of the code, that you want.

What's the official way of storing settings for Python programs?

Django uses real Python files for settings, Trac uses a .ini file, and some other pieces of software use XML files to hold this information.
Is any one of these approaches blessed by Guido and/or the Python community more than the others?
Depends on the predominant intended audience.
If it is programmers who change the file anyway, just use Python files like settings.py.
If it is end users, then think about .ini files.
As many have said, there is no "offical" way. There are, however, many choices. There was a talk at PyCon this year about many of the available options.
Don't know if this can be considered "official", but it is in the standard library: ConfigParser — Configuration file parser.
This is, obviously, not a universal solution, though. Just use whatever feels most appropriate to the task, without any unnecessary complexity (and — especially — Turing-completeness! Think about automatic or GUI configurators).
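A minimal sketch of that parser (the module is configparser in Python 3, ConfigParser in Python 2; settings.ini and its [database] section are assumptions for illustration):
import configparser

config = configparser.ConfigParser()
config.read('settings.ini')
host = config.get('database', 'host', fallback='localhost')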
I use a shelf (http://docs.python.org/library/shelve.html):
import shelve

shelf = shelve.open(filename)
shelf["users"] = ["David", "Abraham"]
shelf.sync()  # save
Just one more option: PyQt. Qt has a platform-independent way of storing settings with the QSettings class. Under the hood, on Windows it uses the registry and on Linux it stores the settings in a hidden conf file. QSettings works very well and is pretty seamless.
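A minimal sketch, assuming PyQt5 (the organization and application names are placeholders):
from PyQt5.QtCore import QSettings

settings = QSettings("MyCompany", "MyApp")  # registry on Windows, conf file on Linux
settings.setValue("window/width", 1024)
width = int(settings.value("window/width", 800))  # 800 is the fallback default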
There is no blessed solution as far as I know. There is no right or wrong way to store app settings either; XML, JSON, or any type of file is fine as long as you are comfortable with it. For Python I personally use pypref; it's very easy, cross-platform, and straightforward.
pypref is very useful, as one can store both static and dynamic settings and preferences:
from pypref import Preferences
# create singleton preferences instance
pref = Preferences(filename="preferences_test.py")
# create preferences dict
pdict = {'preference 1': 1, 12345: 'I am a number'}
# set preferences. This would automatically create preferences_test.py
# in your home directory. Go and check it.
pref.set_preferences(pdict)
# Let's update the preferences. This would automatically update the
# preferences_test.py file; you can verify that.
pref.update_preferences({'preference 1': 2})

# Let's get some preferences. This would return the value of the preference
# if it is defined, or the default value if it is not.
print pref.get('preference 1')

# In some cases we must use raw strings. This is most likely needed when
# working with paths on Windows systems or when a preference includes
# special characters. That's how to do it ...
pref.update_preferences({'my path': " r'C:\Users\Me\Desktop' "})

# Sometimes preferences need to change dynamically or be evaluated in real
# time. This can be done using the dynamic property. In this example a
# password generator preference is set using the uuid module. The dynamic
# dictionary must list all module names that must be imported upon
# evaluating a dynamic preference.
pre = {'password generator': "str(uuid.uuid1())"}
dyn = {'password generator': ['uuid',]}
pref.update_preferences(preferences=pre, dynamic=dyn)

# Let's pull the 'password generator' preference twice and notice how the
# passwords differ at every pull.
print pref.get('password generator')
print pref.get('password generator')

# Those preferences can be accessed later. Let's simulate that by creating
# another Preferences instance, which will automatically detect the
# existence of a preferences file and connect to it.
newPref = Preferences(filename="preferences_test.py")

# Let's print the 'my path' preference.
print newPref.get('my path')
I am not sure that there is an 'official' way (it is not mentioned in the Zen of Python :) ). I tend to use the ConfigParser module myself, and I think you will find that pretty common. I prefer it over the Python-file approach because you can write back to it and dynamically reload if you want.
One of the easiest ways is to use the json module.
Save the file as config.json with the details shown below.
Saving data in the json file:
{
    "john" : {
        "number" : "948075049",
        "password" : "thisisit"
    }
}
Reading from json file:
import json

# Open the config.json file
with open('config.json') as f:
    mydata = json.load(f)

# Now mydata is a python dictionary
print("username is", mydata.get('john').get('number'),
      "password is", mydata.get('john').get('password'))
It depends largely on how complicated your configuration is. If you're doing a simple key-value mapping and you want the capability to edit the settings with a text editor, I think ConfigParser is the way to go.
If your settings are complicated and include lists and nested data structures, I'd use XML or JSON and create a configuration editor.
For really complicated things where the end user isn't expected to change the settings much, or is more trusted, just create a set of Python classes and evaluate a Python script to get the configuration.
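For that last case, a minimal sketch, assuming a trusted settings.py that defines plain constants (exec runs arbitrary code, so never feed it untrusted input):
# Evaluate a Python file as configuration; only for trusted input.
config = {}
with open('settings.py') as f:
    exec(f.read(), config)
print(config.get('DATABASE_HOST'))  # assumes settings.py defines DATABASE_HOST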
For web applications I like using OS environment variables: os.environ.get('CONFIG_OPTION')
This works especially well for settings that vary between deploys. You can read more about the rationale behind using env vars here: http://www.12factor.net/config
Of course, this only works for read-only values because changes to the environment are usually not persistent. But if you don't need write access they are a very good solution.
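A small sketch of that pattern (the variable names are placeholders):
import os

# Optional setting with a default, and a required one that fails fast.
DEBUG = os.environ.get('DEBUG', 'false').lower() == 'true'
SECRET_KEY = os.environ['SECRET_KEY']  # raises KeyError when unset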
It is more a matter of convenience. There is no official way per se. But using XML files would make sense, as they can be manipulated by various other applications/libraries.
Not an official one, but this way works well for all my Python projects.
pip install python-settings
Docs here: https://github.com/charlsagente/python-settings
You need a settings.py file with all your defined constants like:
# settings.py
DATABASE_HOST = '10.0.0.1'
Then you need to either set an env variable (export SETTINGS_MODULE=settings) or manually call the configure method:
# something_else.py
from python_settings import settings
from . import settings as my_local_settings
settings.configure(my_local_settings) # configure() receives a python module
The utility also supports lazy initialization for heavy-to-load objects, so when you run your Python project it loads faster, since a settings variable is only evaluated when it is needed.
# settings.py
from python_settings import LazySetting
from my_awesome_library import HeavyInitializationClass # Heavy to initialize object
LAZY_INITIALIZATION = LazySetting(HeavyInitializationClass, "127.0.0.1:4222")
# LazySetting(Class, *args, **kwargs)
Just configure once, and then call your variables wherever needed:
# my_awesome_file.py
from python_settings import settings
print(settings.DATABASE_HOST) # Will print '10.0.0.1'
Why would Guido bless something that is out of his scope? No, there is nothing in particular that is blessed.

Categories

Resources