I am writing a Vim plugin to set iBus engines and input methods. So far I can change the engines with the code below:
function! im#setEngine(name)
python << EOF
try:
    import ibus, vim
    bus = ibus.Bus()
    ic = ibus.InputContext(bus, bus.current_input_contxt())
    name = vim.eval("a:name")
    engines = bus.get_engines_by_names([name])
    size = len(engines)
    if size <= 0:
        print "Could not find engine %s" % name
    else:
        engine = engines[0]
        ic.set_engine(engine)
except Exception, e:
    print "Failed to connect to iBus"
    print e
EOF
endfunction
function! im#listEngines()
let l:engines = []
python << EOF
try:
    import ibus, dbus, vim
    bus = ibus.Bus()
    names = []
    for engine in bus.list_engines():
        names.append(str(engine.name))
    vim.command("let l:engines = %s" % names)
except Exception, e:
    print "Failed to connect to iBus"
    print e
EOF
return l:engines
endfunction
Now I am trying to also set the input method for the engines, but I am unable to find out how to do this. The iBus documentation is lacking in detail so far.
Can anyone provide pointers or examples on how to programmatically (in Python) change the iBus input method? Also, a way to get a list of supported input methods for each engine would be great.
====
From this point I will try to provide more context on the problem I am trying to solve. Skip if you are not interested.
I implemented this plugin, vim-im, to disable input methods when entering Vim normal mode. This is important because Vim normal mode is unusable if iBus is set to a non-ASCII input method. If you use Vim to write in Japanese, Chinese, Korean, etc., you may understand the issue.
The problem is that since iBus 1.5 the enable/disable methods my plugin depends on are deprecated. So my plugin works in Ubuntu <= 13.04 but not in Debian Jessie, and it possibly won't work on future Ubuntu versions either.
The only way I see to keep similar functionality is to define a default iBus engine and input method and switch iBus to those every time Vim enters normal mode.
Reading the ibus library code I found an acceptable solution:
function! im#setInputMode(mode)
python << EOF
try:
    import ibus, dbus, vim
    bus = ibus.Bus()
    conn = bus.get_dbusconn().get_object(ibus.common.IBUS_SERVICE_IBUS, bus.current_input_contxt())
    ic = dbus.Interface(conn, dbus_interface=ibus.common.IBUS_IFACE_INPUT_CONTEXT)
    mode = vim.eval("a:mode")
    ic.PropertyActivate("InputMode." + mode, ibus.PROP_STATE_CHECKED)
except Exception, e:
    print "Failed to connect to iBus"
    print e
EOF
endfunction
This method allows me to change the input method of iBus by passing it a name like:
call im#setInputMode("Hiragana")
Unfortunately the input method name depends on the engine being used. For example, for mozc I need to set it to "Direct", while for anthy I have to use "WideLatin" in order to get correct input in Vim normal mode.
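As a stop-gap I am considering a small per-engine lookup table, roughly like the sketch below (the engine-name keys are assumptions and would have to match whatever bus.list_engines() actually reports):
# stop-gap sketch: map each engine to the InputMode that produces plain ASCII input;
# the keys are assumptions and must match the names returned by bus.list_engines()
ASCII_INPUT_MODE = {
    "mozc-jp": "Direct",
    "anthy": "WideLatin",
}

def ascii_input_mode(engine_name):
    # returns None for engines we do not know about yet
    return ASCII_INPUT_MODE.get(engine_name)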
If someone knows a way to query the iBus engines for a list of supported InputMode values, that would be great. A way to query the engine for the currently set InputMode would also help.
I am accessing an InterSystems Caché 2017.1.xx instance through a Python process to get various attributes about the database in order to monitor it.
One of the items I want to monitor is license usage. I wrote an ObjectScript script in a Terminal window to get license usage by user:
s Rset=##class(%ResultSet).%New("%SYSTEM.License.UserListAll")
s r=Rset.Execute()
s ncol=Rset.GetColumnCount()
While (Rset.Next()) {f i=1:1:ncol w !,Rset.GetData(i)}
But I have been unable to determine how to convert this script into a Python equivalent. I am using the intersys.pythonbind3 module for connecting to and accessing the Caché instance. I have been able to write Python functions that access most everything else in the instance, but I cannot figure out how to translate this one piece to Python (3.7).
The following should work (based on the documentation):
query = intersys.pythonbind3.query(database)
query.prepare_class("%SYSTEM.License", "UserListAll")
query.execute()
# Fetch each row in the result set, and print the
# name and value of each column in a row:
while 1:
    cols = query.fetch([None])
    if len(cols) == 0:
        break
    print(str(cols[0]))
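For completeness, the database handle used above can be obtained roughly like this (a sketch following the pythonbind samples; host, port, namespace and credentials are placeholders to replace with your own):
import intersys.pythonbind3

# placeholder connection details -- adjust to your instance
url = "localhost[1972]:%SYS"
conn = intersys.pythonbind3.connection()
conn.connect_now(url, "_SYSTEM", "SYS", None)
database = intersys.pythonbind3.database(conn)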
Also, note that InterSystems IRIS, the successor to Caché, now has Python as an embedded language. See more in the docs.
It turns out the noted query "UserListAll" is not defined correctly in the library (it is not an SqlProc). Resolving this would require an ObjectScript routine wrapping the query, plus the use of a result set or similar in Python to get the results. So I am marking this as resolved.
Not sure which Python interface you're using for Caché/IRIS, but this open-source third-party one is worth investigating for the kind of things you're trying to do:
https://github.com/chrisemunt/mg_python
I'd like to set up a "timeout" for the ldap library (python-ldap-2.4.15-2.el7.x86_64) with Python 2.7.
I'm forcing my /etc/hosts to resolve a non-existing IP address in order to trigger the timeout.
I've followed several examples and took a look at the documentation or questions like this one without luck.
So far I've tried forcing global timeouts before the initialize:
ldap.set_option(ldap.OPT_NETWORK_TIMEOUT, 1)
ldap.set_option(ldap.OPT_TIMEOUT, 1)
ldap.protocol_version=ldap.VERSION3
Forced the same values at object level:
ldap_obj = ldap.initialize("ldap://%s:389" % LDAPSERVER ,trace_level=9)
ldap_obj.set_option(ldap.OPT_NETWORK_TIMEOUT, 1)
ldap_obj.set_option(ldap.OPT_TIMEOUT, 1)
I've also tried with:
ldap.network_timeout = 1
ldap.timelimit = 1
And I used both methods, search and search_st (the synchronous form with a timeout).
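With search_st the call looks roughly like this (a sketch with the same base and filter as below; the timeout argument is in seconds):
# synchronous search with a per-call timeout (seconds)
ldap_result = ldap_obj.search_st("ou=people,c=this,o=place",
                                 ldap.SCOPE_SUBTREE,
                                 "cn=user",
                                 timeout=1)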
Finally this is the code:
def testLDAPConex(LDAPSERVER):
    """
    Check ldap
    """
    ldap.set_option(ldap.OPT_NETWORK_TIMEOUT, 1)
    ldap.set_option(ldap.OPT_TIMEOUT, 1)
    ldap.protocol_version = ldap.VERSION3
    ldap.network_timeout = 1
    ldap.timelimit = 1
    try:
        ldap_obj = ldap.initialize("ldap://%s:389" % LDAPSERVER, trace_level=9)
        ldap_result_id = ldap_obj.search("ou=people,c=this,o=place", ldap.SCOPE_SUBTREE, "cn=user")
I've printed the object's OPT_NETWORK_TIMEOUT and OPT_TIMEOUT constants, and the values were assigned correctly.
The execution time is 56 seconds every time, and I'm not able to control the number of seconds for the timeout.
Btw, the same code in python3 does work as intended:
real 0m10,094s
user 0m0,072s
sys 0m0,013s
After some testing I've decided to roll back the VM to a previous state.
The networking team was making changes across the network configuration, which might have changed the behaviour of the interface and caused some kind of bug when the packets were being sent over the wire in:
ldap_result_id = ldap_obj.search("ou=people,c=this,o=place", ldap.SCOPE_SUBTREE, "cn=user")
After restoring the VM, which included a restart, the error of not being able to reach LDAP is raised as expected.
Why does shelve raise an error when I try to open a file that was just created by shelve?
import shelve
info_file_name = "/Users/bacon/myproject/temp/test.info"
info_file = shelve.open(info_file_name)
info_file['ok'] = 'wass'
info_file.close()
info_file = shelve.open(info_file_name) # raises exception: db type could not be determined
info_file.close()
I'm running Python 2.5, in case it's relevant.
The precise error it raises is:
db type could not be determined, raised by the anydbm.py open method.
I know it's using gdbm. I checked the whichdb.py file, and it tries to identify gdbm files with this:
# Read the start of the file -- the magic number
s16 = f.read(16)
s = s16[0:4]
# Convert to 4-byte int in native byte order -- return "" if impossible
(magic,) = struct.unpack("=l", s)
# Check for GNU dbm
if magic == 0x13579ace:
    return "gdbm"
But the "magic" number in my file is 324508367 (0x13579acf) (only the last digit change!!)
I tried opening the file with another language (Ruby) and I was able to open it without any problem, so this seems to be a bug in whichdb.py when it tries to identify the correct dbm.
As explained in the question, this error was due to a bug in whichdb that is not able to identify some newer gdbm files; more information is in this bug report: https://bugs.python.org/issue13007
The best solution is to force the db by defining a function that loads the shelve with gdbm instead of trying to guess the dbm:
import shelve

def gdbm_shelve(filename, flag="c"):
    # open the file with gdbm directly instead of letting anydbm/whichdb guess
    mod = __import__("gdbm")
    return shelve.Shelf(mod.open(filename, flag))
And then use it instead of shelve.open:
info_file = gdbm_shelve(info_file_name)
I use a Raspberry Pi to collect sensor data and set digital outputs. To make it easy for other applications to set and get values, I'm using a socket server. But I am having some problems finding an elegant way of making all the data available on the socket server without having to write a function for each data type.
Some examples of values and methods I have that I would like to make available on the socket server:
do[2].set_low() # set digital output 2 low
do[2].value=0 # set digital output 2 low
do[2].toggle() # toggle digital output 2
di[0].value # read value for digital input 0
ai[0].value # read value for analog input 0
ai[0].average # get the average calculated value for analog input 0
ao[4].value=255 # set analog output 4 to byte value 255
ao[4].percent=100 # set analog output 4 to 100%
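For context, each channel object behind the calls above is roughly of this shape (a hypothetical sketch; the real classes do more than this):
# hypothetical sketch of a digital output channel exposing the calls listed above
class DigitalOutput(object):
    def __init__(self):
        self.value = 0

    def set_low(self):
        self.value = 0

    def toggle(self):
        self.value = 0 if self.value else 1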
I've tried eval() and exec():
self.request.sendall(str.encode(str(eval('item.' + recv_string)) + '\n'))
eval() works unless I am using an equals sign (=), but I'm not too happy about that solution because of the dangers involved. exec() does the job but does not return any value, and it is also dangerous.
I've also tried getattr():
recv_string = bytes.decode(self.data).lower().split(';')
values = getattr(item, recv_string[0])
self.request.sendall(str.encode(str(values[int(recv_string[1])].value) + '\n'))
This works for getting my attributes, and the above example works for getting the value of the attribute I fetch with getattr(). But I cannot figure out how to use getattr() on the value attribute as well.
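In rough terms, what I would like the read path to do is something like this sketch (the three-field command "di;0;value" is just an example format):
# sketch of the read path for a command like "di;0;value"
parts = bytes.decode(self.data).strip().split(';')
channel = getattr(item, parts[0])[int(parts[1])]  # e.g. item.di[0]
value = getattr(channel, parts[2])                # e.g. .value
self.request.sendall(str.encode(str(value) + '\n'))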
The semicolon (;) is used to split the incoming command. I've experimented with multiple ways of formatting the commands:
# unit means that I want to talk to a I/O interface module,
# and the name specified which one
unit;unit_name;get;do;1
unit;unit_name;get;do[1]
unit;unit_name;do[1].value
I am free to choose the format since I am also writing the software that uses these commands. I have not yet found a good format which covers all my needs.
Any suggestions on an elegant way of accessing and returning the data above? Preferably without having to add new methods to the socket server every time a new value type or method is added to my I/O ports.
Edit: This is not public, it's only available on my LAN.
Suggestions
Make your API all methods so that eval can always be used:
def value_m(self, newValue=None):
    if newValue is not None:
        self.value = newValue
    return self.value
Then you can always do
result = str(eval(message))
self.request.sendall(str.encode(result + '\n'))
For your messages, I would suggest formatting them to include the exact syntax of the command, so that they can be eval'ed as-is, e.g.
message = 'do[1].value_m()' # read a value, alternatively...
message = 'do[1].value_m(None)'
or to write
message = 'do[1].value_m(0)' # write a value
This will make it easy to keep your messages up-to-date with your API: because they must match exactly, you won't have a second DSL to deal with. You really don't want to have to maintain a second API on top of your I/O one.
This is a very simple scheme, suitable for a home project. I would suggest some error handling in evaluation, like so:
import traceback

try:
    result = str(eval(message))
except Exception:
    result = traceback.format_exc()
self.request.sendall(str.encode(result + '\n'))
This way your caller will receive a printout of the exception traceback in the returned message. This will make it much, much easier to debug bad calls.
NOTE If this is public-facing, you cannot do this. All input must be sanitised. You will have to parse each instruction and compare it to the list of available (and desirable) commands, and verify input validity and validity ranges for everything. For such a scenario you are better off simply using one of the input validation systems used for web services, where this problem receives a great deal of attention.
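If this ever does need to be hardened, one minimal direction is an explicit whitelist dispatcher instead of eval, along these lines (the attribute names and command format are only illustrative):
# illustrative whitelist dispatcher -- only the listed attributes/methods are reachable
ALLOWED = {'value', 'average', 'percent', 'toggle', 'set_low'}

def dispatch(item, command):
    # expects commands like "do;2;toggle" or "ao;4;value;255"
    parts = command.split(';')
    group, index, name = parts[0], int(parts[1]), parts[2]
    if name not in ALLOWED:
        raise ValueError("command not allowed: %s" % name)
    channel = getattr(item, group)[index]
    target = getattr(channel, name)
    if callable(target):
        return target()                        # e.g. toggle(), set_low()
    if len(parts) > 3:
        setattr(channel, name, int(parts[3]))  # write, e.g. value=255
    return getattr(channel, name)              # read back the attribute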
I'm having quite a bit of trouble getting started with scripting in Mule. To be quite honest, I'm falling at the first hurdle - I can't find anything in the documentation which tells me how to access the payload or how to return data to my flow.
I'm using Jython 2.5 and Mule 3.4.
My flow is extremely simple: it takes some text from an Ajax source and simply echoes it. At the moment the Python script does nothing (as I cannot figure out how to get it to do something with the payload).
<flow name="Python Script" doc:name="Python Script">
<ajax:connector name="connector-ajax" serverUrl="http://192.168.0.1:8000" resourceBase="C:\mule\workspace\scripting\src\main\app\docroot" doc:name="Ajax" />
<scripting:component doc:name="Python">
<scripting:script engine="jython" file="C:\mule\workspace\scripting\src\main\app\python\myscript.py"/>
</scripting:component>
<echo-component doc:name="Echo"/>
</flow>
I have read through the Script Component Reference and the Scripting Module Reference - the module reference appears to have some relevant information but I can't figure out how to use it in Python.
I have also read through an article about 'Mule Punching' which seems like it would have answered my question if I was running version 2 of Mule. I attempted to use the same techniques in my Mule 3 project but it did not work.
Edit 24/07/2013
Using @ppiixx's response, I have got a little bit further with Python scripting.
Just having a single line of code, for example return len(payload), causes the Jython interpreter to throw an error, as return cannot exist outside of a function. Fair enough, that's standard.
However, with the code
def main():
    return len(payload)

main()
I get an error saying that 'No serializer can be found for class org.mule.transport.NullPayload'.
The log is below:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+ Started app 'python-test' +
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
INFO 2013-07-24 09:16:08,821 [[python-test].flow.stage1.02] org.mule.component.simple.LogComponent:
********************************************************************************
* Message received in service: flow. Content is: '{NullPayload}' *
********************************************************************************
ERROR 2013-07-24 09:16:08,849 [[python-test].flow.stage1.02] org.mule.exception.CatchMessagingExceptionStrategy:
********************************************************************************
Message : No serializer found for class org.mule.transport.NullPayload and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationConfig.Feature.FAIL_ON_EMPTY_BEANS) ) (org.codehaus.jackson.map.JsonMappingException)
Code : MULE_ERROR--2
--------------------------------------------------------------------------------
Exception stack is:
1. No serializer found for class org.mule.transport.NullPayload and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationConfig.Feature.FAIL_ON_EMPTY_BEANS) ) (org.codehaus.jackson.map.JsonMappingException)
org.codehaus.jackson.map.ser.impl.UnknownSerializer:52 (null)
2. No serializer found for class org.mule.transport.NullPayload and no properties discovered to create BeanSerializer (to avoid exception, disable SerializationConfig.Feature.FAIL_ON_EMPTY_BEANS) ) (org.codehaus.jackson.map.JsonMappingException) (org.mule.api.transformer.TransformerException)
org.mule.module.json.transformers.ObjectToJson:107 (http://www.mulesoft.org/docs/site/current3/apidocs/org/mule/api/transformer/TransformerException.html)
This leads me to think that using return on its own isn't enough to return data to the flow.
I've read through the scripting module reference, and while it gives examples in Groovy, it does not give examples in Python, so I'm not sure where I'm going wrong.
Have a look at the 'Script Context Bindings' section here.
Basically, a number of variables are available in the script context, including message, payload and log.
To return data using the python engine you set the result variable.
result = len(payload)
There is an example in the Mule GitHub.
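Putting that together, a minimal myscript.py could look like the sketch below (payload and result are the bindings described in the Script Context Bindings documentation; everything else is just an example):
# myscript.py -- minimal Jython script component body
# 'payload' and 'result' are provided by Mule's script context bindings
result = "payload length: %d" % len(str(payload))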