I am using pyusb, and according to the docs it runs on any one of three backends: libusb01, libusb10, and openusb. I have all three backends installed. How can I tell which backend it is using, and how can I switch to a different one?
I found the answer by looking inside the usb.core source file.
You do it by importing the backend and then passing it via the backend parameter of usb.core.find(). Like so:
import usb.backend.libusb1 as libusb1
import usb.backend.libusb0 as libusb0
import usb.backend.openusb as openusb
and then any one of:
devices = usb.core.find(find_all=1, backend=libusb1.get_backend())
devices = usb.core.find(find_all=1, backend=libusb0.get_backend())
devices = usb.core.find(find_all=1, backend=openusb.get_backend())
This assumes you are using pyusb-1.0.0a3. For 1.0.0a2 the modules are called libusb10, libusb01, and openusb. Of course, you'd only need to import the one you want.
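To tell which backend is actually available (and therefore which one usb.core.find() can pick up), you can check what each get_backend() call returns: it returns None when the corresponding native library cannot be loaded. A minimal sketch, using the imports above:

for name, module in [('libusb1', libusb1), ('libusb0', libusb0), ('openusb', openusb)]:
    # get_backend() returns None if the underlying library is missing
    backend = module.get_backend()
    print(name, 'available' if backend is not None else 'unavailable')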
I'm making an API that returns all the versions currently available. The versions are structured like this: 22.12a
22 is the year, 12 is the month, and a goes up by a letter every time we launch another version; it resets every month.
My problem is that I need to sort the versions so that they are in release order, like this:
["22.12b","22.12a","22.11a","22.9a"]
But I have no idea how to do it.
You can use natsort.natsorted():
from natsort import natsorted
versions = ['22.12b', '22.12a', '22.11a', '22.9a']
natsorted(versions)
#['22.9a', '22.11a', '22.12a', '22.12b']
It can also be done via packaging.version.parse():
from packaging.version import parse
versions = ['22.12b', '22.12a', '22.11a', '22.9a']
versions.sort(key=parse)
#['22.9a', '22.11a', '22.12a', '22.12b']
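Note that both approaches sort in ascending order, while the desired output in the question is newest-first; both natsorted() and list.sort() accept reverse=True:

natsorted(versions, reverse=True)
#['22.12b', '22.12a', '22.11a', '22.9a']
versions.sort(key=parse, reverse=True)
# same order, sorted in place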
I'm trying to use the azure-mgmt-kusto package for some Kusto cluster operations, using KustoManagementClient. This client requires a TokenCredential in its constructor. For my own scenario, I would like to use my own AAD credentials, preferably via interactive login or IWA (Integrated Windows Authentication). The closest I have come is the following code:
from azure.identity import DefaultAzureCredential
import azure.mgmt.kusto

creds = DefaultAzureCredential(exclude_interactive_browser_credential=False).get_token('')
kusto_client = azure.mgmt.kusto.KustoManagementClient(credential=creds, subscription_id='<>')
but this produces an error on the second line:
Expected type 'TokenCredential', got 'AccessToken' instead
which I couldn't find any way around!
Any suggestions on how to resolve this, or other methods to use?
Actually, after simply running it despite the PyCharm warning, this worked:
from azure.identity import DefaultAzureCredential
from azure.mgmt.kusto import KustoManagementClient

subId = "<subscription-id>"  # your subscription ID
credential = DefaultAzureCredential()
kusto_management_client = KustoManagementClient(credential, subId)

The key difference is passing the credential object itself rather than the result of get_token(): the client calls get_token() internally whenever it needs a token, whereas get_token() returns an AccessToken, which is not a TokenCredential.
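If you specifically want the interactive browser flow rather than the whole DefaultAzureCredential chain, azure-identity also exposes that credential type directly; a minimal sketch along the same lines:

from azure.identity import InteractiveBrowserCredential
from azure.mgmt.kusto import KustoManagementClient

subId = "<subscription-id>"  # placeholder
# Pass the credential object itself; the client calls get_token() on it
# internally when it needs a token.
credential = InteractiveBrowserCredential()
kusto_client = KustoManagementClient(credential, subId)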
I am using the RasaNLUHttpInterpreter as stated here to start my server. I pass the class the four required parameters (model name, token, server name, and project name). However, I always get an error saying that I am apparently handing over five arguments (which I am not).
The error has occurred since I updated Rasa Core and Rasa NLU to the latest versions. However, as far as I can tell from the docs, I am using the method correctly. Does anyone have an idea what I am doing wrong, or what's happening here?
Here is my run-server.py where I use the RasaNLUHttpInterpreter:
import os
from os import environ as env
from gevent.pywsgi import WSGIServer
from server import create_app
from rasa_core import utils
from rasa_core.interpreter import RasaNLUHttpInterpreter

utils.configure_colored_logging("DEBUG")

user_input_dir = "/app/nlu/" + env["RASA_NLU_PROJECT_NAME"] + "/user_input"
if not os.path.exists(user_input_dir):
    os.makedirs(user_input_dir)

nlu_interpreter = RasaNLUHttpInterpreter(
    'model_20190702-103405', None, 'http://rasa-nlu:5000', 'test_project')

app = create_app(
    model_directory=env["RASA_CORE_MODEL_PATH"],
    cors_origins="*",
    loglevel="DEBUG",
    logfile="./logs/rasa_core.log",
    interpreter=nlu_interpreter)

http_server = WSGIServer(('0.0.0.0', 5005), app)
http_server.serve_forever()
I am using:
rasa_nlu~=0.15.1
rasa_core==0.14.5
As already mentioned here, I have analyzed the problem in detail.
First of all, the method calls and the given link belong to a Rasa version that is deprecated. After updating to the latest Rasa version, which splits core and NLU into separate packages, the project was refactored to fit the corresponding documentation.
After rebuilding the bot with the exact same setup, no errors were thrown and the bot worked as expected.
We came to the conclusion that this must have been a problem particular to threxx's workstation.
If someone else reaches this point, they are welcome to post here so that we can help.
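For reference, the "five arguments" message is consistent with a changed constructor signature: Python counts self, so four positional arguments show up as five when the method no longer accepts them. On recent Rasa versions the interpreter takes a single EndpointConfig instead; a hedged sketch (import paths and signature are assumptions to verify against your installed release):

from rasa.core.interpreter import RasaNLUHttpInterpreter
from rasa.utils.endpoints import EndpointConfig

# One EndpointConfig replaces the old (model_name, token, server, project_name)
# positional parameters; verify against your Rasa version.
nlu_interpreter = RasaNLUHttpInterpreter(EndpointConfig(url="http://rasa-nlu:5000"))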
I need help selecting all elements of a ListView using send_message. I want this to work in RDP disconnected mode, hence I am using pywinauto's send_message API.
My code:
from pywinauto import win32defines, win32structures
from pywinauto.application import Application

app = Application().connect(path=pathToAppEXE)
lvitem = win32structures.LVITEMW()
lvitem.mask = win32defines.LVIF_STATE
lvitem.state = 1
lvitem.stateMask = win32defines.LVIS_SELECTED
app.window_(title_re="Net Position.*").ListView.send_message(
    win32defines.LVM_SETITEMSTATE, -1, lvitem)
It does nothing. Maybe I am not setting the LVM flags correctly. I need assistance fixing the code.
The .get_item(...) method (see docs) should return a _listview_item object with several available methods; some of them don't involve a real click.
Maybe the Remote Execution Guide is also useful.
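A minimal sketch along those lines, assuming the wrapper exposes item_count() and that _listview_item.select() is message-based (worth verifying on your pywinauto version):

lv = app.window_(title_re="Net Position.*").ListView
for i in range(lv.item_count()):
    # select() works via messages rather than real mouse input, so it
    # should also work in a disconnected RDP session
    lv.get_item(i).select()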
If scipy.weave.inline is called inside a massively parallel MPI-enabled application that runs on a cluster with a home directory common to all nodes, every instance accesses the same catalog of compiled code: $HOME/.pythonxx_compiled. This is bad for obvious reasons and leads to many error messages. How can this problem be circumvented?
As per the scipy docs, you can store the compiled data in a directory that isn't on the NFS share (such as /tmp or /scratch, or whatever is available on your system); then you won't have to worry about the conflicts. You just need to point the PYTHONCOMPILED environment variable somewhere else.
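For example (the path under /tmp is just an assumption; use whatever node-local storage your cluster provides):

import os

cache_dir = os.path.join('/tmp', os.environ.get('USER', 'user'), 'pythoncompiled')
try:
    os.makedirs(cache_dir)
except OSError:
    pass  # already exists
# weave reads PYTHONCOMPILED when it builds its catalog paths, so set it
# before anything triggers a compile
os.environ['PYTHONCOMPILED'] = cache_dir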
My previous thoughts about this problem:
Either scipy.weave.catalog has to be enhanced with a proper locking mechanism to serialize access to the catalog, or every instance has to use its own catalog.
I chose the latter. The scipy.weave.inline function uses a catalog bound to the module-level name function_catalog of the scipy.weave.inline_tools module. This can be discovered by looking into the code of that module (https://github.com/scipy/scipy/tree/v0.12.0/scipy/weave).
The simplest solution is to monkeypatch this name to something else at the beginning of the program:
from mpi4py import MPI
import numpy as np
import scipy.weave.inline_tools
import scipy.weave.catalog
import os
import os.path

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# one catalog directory per MPI rank (some_path is a placeholder for a
# directory of your choice)
catalog_dir = os.path.join(some_path, 'rank' + str(rank))
try:
    os.makedirs(catalog_dir)
except OSError:
    pass

# monkeypatch the catalog
scipy.weave.inline_tools.function_catalog = scipy.weave.catalog.catalog(catalog_dir)
Now inline works smoothly: each instance has its own catalog inside the common NFS directory. Of course this naming scheme breaks down if two distinct parallel tasks run at the same time, but that would also be the case if the catalog were in /tmp.
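As a quick sanity check, any small inline call will now compile into the per-rank catalog; this is just standard weave usage, reusing the rank variable from the snippet above:

from scipy import weave

# the first call compiles the C snippet into this rank's private catalog;
# subsequent calls reuse the cached binary
weave.inline('printf("hello from rank %d\\n", (int)rank);', ['rank'])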
Edit: As mentioned in a comment above, this procedure still has problems if multiple independent jobs are run in parallel. This can be remedied by adding a random UUID to the pathname:
import uuid

u = None
if rank == 0:
    u = str(uuid.uuid4())
# distribute the same UUID to every rank
u = comm.scatter([u] * size, root=0)

catalog_dir = os.path.join('/tmp/<username>/pythoncompiled', u + '-' + str(rank))
os.makedirs(catalog_dir)

# monkeypatch the catalog
scipy.weave.inline_tools.function_catalog = scipy.weave.catalog.catalog(catalog_dir)
Of course it would be nice to delete those files after the computation:

import shutil
shutil.rmtree(catalog_dir)
Edit: There were some additional problems. The intermediate directory where the .cpp and .o files are stored also had some trouble due to simultaneous access from different instances, so the above method has to be extended to that directory as well:
basetmp = some_path  # placeholder for a directory of your choice
catalog_dir = os.path.join(basetmp, 'pythoncompiled', u + '-' + str(rank))
intermediate_dir = os.path.join(basetmp, 'pythonintermediate', u + '-' + str(rank))
os.makedirs(catalog_dir, mode=0o700)
os.makedirs(intermediate_dir, mode=0o700)

# monkeypatch the catalog and the intermediate_dir
scipy.weave.inline_tools.function_catalog = scipy.weave.catalog.catalog(catalog_dir)
scipy.weave.catalog.intermediate_dir = lambda: intermediate_dir

# ... calculations here ...

shutil.rmtree(catalog_dir)
shutil.rmtree(intermediate_dir)
One quick workaround is to use a local directory on each node (e.g. /tmp, as Wesley said), but run only one MPI task per node, if you have the capacity (with Open MPI, for example, something like mpirun -npernode 1; the exact flag depends on your MPI implementation).