I want to use pySerial's serial.tools.list_ports.comports() to list available COM ports.
Reading the documentation:
The function returns an iterable that yields tuples of three strings:
port name as it can be passed to serial.Serial or serial.serial_for_url()
description in human readable form
sort of hardware ID. E.g. may contain VID:PID of USB-serial adapters.
I'm particularly interested in the third string, since I want to search for a specific USB-serial adapter by its VID:PID pair. Ideally, I would like it to work on Windows XP and later, Mac OS X, and Linux. I've tried it with pySerial 2.7 on Ubuntu 13.10 and Windows 7 and it works like a charm, but the docs also say:
Also note that the reported strings are different across platforms
and operating systems, even for the same device.
Note: Support is limited to a number of operating systems. On some
systems description and hardware ID will not be available.
Do you have any real-world experience with these ambiguities? More detailed info? Any non-working examples? Variations in the hardware ID strings across systems?
Thanks a lot!
I guess if you want a counter-example of it working not as expected, here's what I get:
>>> serial.tools.list_ports.comports()
[('/dev/tty.Bluetooth-Incoming-Port', '/dev/tty.Bluetooth-Incoming-Port', '/dev/tty.Bluetooth-Incoming-Port'), ('/dev/tty.Bluetooth-Modem', '/dev/tty.Bluetooth-Modem', '/dev/tty.Bluetooth-Modem'), ('/dev/tty.usbserial-A1024XBO', '/dev/tty.usbserial-A1024XBO', '/dev/tty.usbserial-A1024XBO')]
where an FTDI USB-serial adapter is plugged in. Which is to be expected, because here's the comports() function:
def comports():
    """scan for available ports. return a list of device names."""
    devices = glob.glob('/dev/tty.*')
    return [(d, d, d) for d in devices]
which is the same for cygwin, BSD, NetBSD, IRIX, HP-UX, Solaris/SunOS, AIX…
How can that result happen? Well, because my pyserial is version 2.6, which is only six months old :-)
After upgrading to the latest version (2.7) from PyPI, here's what I get:
>>> serial.tools.list_ports.comports()
[['/dev/cu.Bluetooth-Incoming-Port', 'n/a', 'n/a'], ['/dev/cu.Bluetooth-Modem', 'n/a', 'n/a'], ['/dev/cu.usbserial-A1024XBO', 'FT232R USB UART', 'USB VID:PID=403:6001 SNR=A1024XBO']]
So basically, add a version check for the latest version of pyserial in your setup.py, or you may run into problems. Support is still not added for other Unix flavors, though. It looks like the VID:PID string is produced by parsing OS-specific data and formatting it into a generic string, so you can extract the values with something like vid, pid = sp[2].split(' ')[1].split('=')[-1].split(':') (which is quite silly: why parse values to build a string that has to be parsed again afterwards? They do szHardwareID_str = 'USB VID:PID=%s:%s SNR=%s' % (m.group(1), m.group(2), m.group(4)) when we couldn't be happier with just a tuple!)
And finally, pyserial looks inconsistent with its documentation, which says On some systems description and hardware ID will not be available (None)., whereas it actually returns 'n/a'. I guess that will be fixed in pyserial 2.8 :-)
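If you go the parsing route, a slightly more defensive sketch built on the 2.7 format shown above might look like this (the target VID/PID pair is just an example; substitute your adapter's values):

import re

import serial.tools.list_ports

TARGET_VID, TARGET_PID = 0x0403, 0x6001   # example FTDI pair, not from the question

for port, desc, hwid in serial.tools.list_ports.comports():
    # pyserial 2.7 reports USB info as e.g. 'USB VID:PID=403:6001 SNR=A1024XBO'
    m = re.search(r'VID:PID=([0-9A-Fa-f]+):([0-9A-Fa-f]+)', hwid)
    if m and (int(m.group(1), 16), int(m.group(2), 16)) == (TARGET_VID, TARGET_PID):
        print("Found adapter on", port)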
It's been some time since my original question, but current versions of pyserial (3.0+, I believe) have solved this in a neat way. No more clever parsing.
serial.tools.list_ports.comports(...) now returns a list containing ListPortInfo objects.
ListPortInfo objects contain vid and pid attributes (integer) as well as other useful USB-related attributes (see docs) which "are all None if it is not an USB device (or the platform does not support extended info)" and this seems to be supported on the main 3 platforms ("Under Linux, OSX and Windows, extended information will be available for USB devices").
So you can do something like the following:
for port in serial.tools.list_ports.comports():
    if port.vid is not None and port.pid is not None:
        # It's a USB port on a platform that supports the extended info.
        # Do something with it.
        print("Port={},VID={:#06x},PID={:#06x}".format(port.device, port.vid, port.pid))
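And to address the original goal of locating one specific adapter, a short follow-up sketch (the VID/PID pair here is a hypothetical example):

import serial.tools.list_ports

wanted = (0x0403, 0x6001)   # hypothetical VID/PID of the adapter you are looking for
matches = [p.device for p in serial.tools.list_ports.comports()
           if (p.vid, p.pid) == wanted]
print(matches)   # e.g. ['/dev/cu.usbserial-A1024XBO'] or ['COM5']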
Good day,
I'm working on a simple Python project and I need to print a receipt to an Epson TM-T82X thermal printer. I have checked the python-escpos library and it has good documentation.
An example is this one:
from escpos.printer import Usb
""" Seiko Epson Corp. Receipt Printer (EPSON TM-T88III) """
p = Usb(0x04b8, 0x0202, 0, profile="TM-T88III")
p.text("Hello World\n")
p.image("logo.gif")
p.barcode('1324354657687', 'EAN13', 64, 2, '', '')
p.cut()
My problem is where to get the two Usb parameters '0x04b8' and '0x0202'. I know that they are the vendor and product IDs. Checking the documentation further, it says the IDs can be acquired by checking the printer's Device Instance Path or Hardware ID. I've checked that as well, and it gives something like this:
SWD\PRINTENUM\{67FDD9C0-3ADC-4191-9B80-1711BCA4B9DF}
I am running on Windows 10 and Windows 11. Please help. Thank you.
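For what it's worth, here is a minimal sketch that enumerates USB vendor/product IDs, assuming pyusb and a libusb backend are available (on Windows the printer may additionally need a libusb-compatible driver, e.g. one installed with Zadig):

import usb.core   # pip install pyusb

# Print the vendor/product IDs of every USB device the libusb backend can see;
# an Epson printer will typically show up with vendor ID 0x04b8.
for dev in usb.core.find(find_all=True):
    print("VID=0x{:04x} PID=0x{:04x}".format(dev.idVendor, dev.idProduct))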
Below is a simple solution that works well on Windows for IPC via shared memory, without having to use networking / sockets (which have annoying limits on Windows).
The only problem is that it's not portable to Linux:
Avoiding the use of the tag parameter will assist in keeping your code portable between Unix and Windows.
Question: is there a simple way built-in in Python, without having a conditional branch "if platform is Windows, if platform is Linux" to have a shared-memory mmap?
Something like
mm = sharedmemory(size=2_000_000_000, name="id1234") # 2 GB, id1234 is a global
# id available for all processes
mm.seek(1_000_000)
mm.write(b"hello")
that would internally default to mmap.mmap(..., tagname="id1234") on Windows and use /dev/shm on Linux (or maybe even a better solution that I don't know?), and probably something else on Mac, but without having to handle this manually for each different OS.
Working Windows-only solution:
# server
import mmap, time

mm = mmap.mmap(-1, 1_000_000_000, tagname="foo")
while True:
    mm.seek(500_000_000)
    mm.write(str(time.time()).encode())
    mm.flush()
    time.sleep(1)

# client
import mmap, time

mm = mmap.mmap(-1, 1_000_000_000, tagname="foo")
while True:
    mm.seek(500_000_000)
    print(mm.read(128))
    time.sleep(1)
The easiest way is to use Python 3.8 or later, which added a built-in abstraction for shared memory that works on both Windows and Linux:
https://docs.python.org/3.10/library/multiprocessing.shared_memory.html
The code will look something like this:
Process #1:
from multiprocessing import shared_memory
# create=True creates a new shared memory block; if one already exists
# with the same name, an exception is raised
shm_a = shared_memory.SharedMemory(name="example", create=True, size=10)
shm_a.buf[:3] = bytearray([1, 2, 3])
while True:
    do_smt()
shm_a.close()
Process #2:
from multiprocessing import shared_memory
# create=False (the default): attach to the existing block
shm_a = shared_memory.SharedMemory(name="example", size=10)
print(bytes(shm_a.buf[:3]))
# b'\x01\x02\x03'
while True:
    do_smt()
shm_a.close()
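One detail worth adding to the sketch above: when the data is no longer needed, exactly one process (usually the creator) should also release the block, e.g.:

shm_a.unlink()   # free the underlying shared memory block (after both sides have called close())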
Otherwise, I think there are no common good solutions and you will need to reinvent the wheel :)
Personally, this has worked well for me.
Option 1: http://www.inspirel.com/yami4/
The YAMI4 suite for general computing is a multi-language and multi-platform package.
Several operating systems: Microsoft Windows, POSIX (Linux, Mac OS X, FreeBSD, ...), QNX (with native IPC messaging), FreeRTOS, ThreadX, TI-RTOS.
Programming languages: C++, Ada, Java, .NET, Python, Wolfram.
Option 2: ZeroMQ https://zeromq.org/
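As a rough idea of what ZeroMQ usage looks like, a minimal request/reply sketch assuming pyzmq is installed (the endpoint and port number are arbitrary examples):

import zmq  # pip install pyzmq

ctx = zmq.Context()

# One process would normally hold the REP ("server") socket...
server = ctx.socket(zmq.REP)
server.bind("tcp://127.0.0.1:5555")      # port number is an arbitrary example

# ...and another the REQ ("client") socket; both fit here only for brevity.
client = ctx.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:5555")

client.send(b"ping")
print(server.recv())    # b'ping'
server.send(b"pong")
print(client.recv())    # b'pong'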
Per this question and answer -- Python multiprocessing.cpu_count() returns '1' on 4-core Nvidia Jetson TK1 -- the output of Python's multiprocessing.cpu_count() function on certain systems reflects the number of CPUs actively in use, as opposed to the number of CPUs actually usable by the calling Python program.
A common Python idiom is to use the return-value of cpu_count() to initialize the number of processes in a Pool. However, on systems that use such a "dynamic CPU activation" strategy, that idiom breaks rather badly (at least on a relatively quiescent system).
Is there some straightforward (and portable) way to get at the number of usable processors (as opposed to the number currently in use) from Python?
Notes:
This question is not answered by the accepted answer to How to find out the number of CPUs using python, since as noted in the question linked at the top of this question, printing the contents of /proc/self/status shows all 4 cores as being available to the program.
To my mind, "portable" excludes any approach that involves parsing the contents of /proc/self/status, whose format may vary from release to release of Linux, and which doesn't even exist on OS X. (The same goes for any other pseudo-file.)
I don't think you will get any truly portable answers, so I will give a correct one.
The correct* answer for Linux is len(os.sched_getaffinity(pid)), where pid may be 0 for the current process. This function is exposed in Python 3.3 and later; if you need it on earlier versions, you'll have to do some fancy cffi coding.
Edit: you might try to see whether you can use the C function int omp_get_num_procs(); if it exists, it is the only meaningful answer I found for this question, but I haven't tried it from Python.
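For older Pythons, a rough ctypes sketch (rather than cffi) of the same Linux call might look like this; the libc name and the 1024-bit cpu_set_t size are glibc assumptions:

import ctypes

libc = ctypes.CDLL("libc.so.6", use_errno=True)

# cpu_set_t is a fixed-size bit mask; 1024 bits is the usual glibc CPU_SETSIZE.
CPU_SETSIZE = 1024
NWORDS = CPU_SETSIZE // (8 * ctypes.sizeof(ctypes.c_ulong))
mask = (ctypes.c_ulong * NWORDS)()

# int sched_getaffinity(pid_t pid, size_t cpusetsize, cpu_set_t *mask);
if libc.sched_getaffinity(0, ctypes.sizeof(mask), ctypes.byref(mask)) == 0:
    print(sum(bin(word).count("1") for word in mask))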
Use psutil:
from the doc https://psutil.readthedocs.io/en/latest/:
>>> import psutil
>>> psutil.cpu_count()
4
>>> psutil.cpu_count(logical=False) # Ignoring virtual cores
2
This is portable.
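Note, though, that cpu_count() reports the CPUs present in the machine; to get the ones the current process is actually allowed to use, psutil also exposes the affinity mask on platforms that support it, e.g.:

import psutil

# CPUs the current process is allowed to run on
# (psutil documents this for Linux, Windows and FreeBSD; it is not available on macOS)
print(len(psutil.Process().cpu_affinity()))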
Here's an approach that gets the number of available CPU cores for the current process on systems that implement sched_getaffinity, and Windows:
import ctypes
import ctypes.wintypes
import os
from platform import system

def num_available_cores() -> int:
    if hasattr(os, 'sched_getaffinity'):
        return len(os.sched_getaffinity(0))
    elif system() == 'Windows':
        kernel32 = ctypes.WinDLL('kernel32')
        DWORD_PTR = ctypes.wintypes.WPARAM
        PDWORD_PTR = ctypes.POINTER(DWORD_PTR)
        GetCurrentProcess = kernel32.GetCurrentProcess
        GetCurrentProcess.restype = ctypes.wintypes.HANDLE
        GetProcessAffinityMask = kernel32.GetProcessAffinityMask
        GetProcessAffinityMask.argtypes = (ctypes.wintypes.HANDLE, PDWORD_PTR, PDWORD_PTR)
        mask = DWORD_PTR()
        if not GetProcessAffinityMask(GetCurrentProcess(), ctypes.byref(mask), ctypes.byref(DWORD_PTR())):
            raise Exception("Call to 'GetProcessAffinityMask' failed")
        return bin(mask.value).count('1')
    else:
        raise Exception('Cannot determine the number of available cores')
On Linux and any other systems that implement sched_getaffinity, we use Python's built-in wrapper for it.
On Windows we use ctypes to call GetProcessAffinityMask.
As far as I know there are no user APIs or tools to get/set the CPU affinity on macOS. In most cases os.cpu_count() will work fine, but if you truly need the number of available cores you may be out of luck.
I want to be able to take a device name (eg: /dev/disk2) and determine where (if anywhere) it's mounted (eg: /mnt/cdrom or /Volumes/RANDLABEL) in Python.
One way I can do this is to run df or mount and then parse the output, but this seems pretty cheesy and unreliable. For example, mount uses " on " as the delimiter between the device and the mountpoint. While very unlikely, either of these could potentially include that very string, making the output ambiguous.
On Linux I could read /proc/mounts, but this won't work on Mac OS X, for example.
So I'm looking for a way to find the mountpoint for a device in a way that's reliable (ie: can deal with arbitrary (legal) device/mountpoint names) and is "as portable as possible". (I'm guessing that portability to Windows might not be possible -- I'm not sure if it even has an analogous concept of device mountpoints.) I particularly want something that will work on both Linux and OS X.
There really isn't a portable way to do this so you'll need to deal with platform-specific code.
On OS X, the simplest and most reliable way to get disk volume information at the command level is to use the -plist option for diskutil list. The output can then be processed directly in Python with the plistlib module. For example:
diskutil list -plist | \
python -c 'import sys,plistlib,pprint; pprint.pprint(plistlib.readPlist(sys.stdin))'
{'AllDisks': ['disk0', 'disk0s1', 'disk0s2', 'disk0s3', 'disk1'],
 'AllDisksAndPartitions': [{'Content': 'GUID_partition_scheme',
                            'DeviceIdentifier': 'disk0',
                            'Partitions': [{'Content': 'EFI',
                                            'DeviceIdentifier': 'disk0s1',
                                            'Size': 209715200},
                                           {'Content': 'Apple_CoreStorage',
                                            'DeviceIdentifier': 'disk0s2',
                                            'Size': 499248103424},
                                           {'Content': 'Apple_Boot',
                                            'DeviceIdentifier': 'disk0s3',
                                            'Size': 650002432,
                                            'VolumeName': 'Recovery HD'}],
                            'Size': 500107862016},
                           {'Content': 'Apple_HFSX',
                            'DeviceIdentifier': 'disk1',
                            'MountPoint': '/',
                            'Size': 499097100288,
                            'VolumeName': 'main'}],
 'VolumesFromDisks': ['main'],
 'WholeDisks': ['disk0', 'disk1']}
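On current Python 3 (plistlib.readPlist was removed in 3.9), roughly the same lookup can be done from within a script; the helper name below is made up for illustration:

import plistlib
import subprocess

def mount_point(device_identifier):
    """Return the MountPoint diskutil reports for e.g. 'disk1', or None."""
    out = subprocess.run(["diskutil", "list", "-plist"],
                         check=True, capture_output=True).stdout
    info = plistlib.loads(out)
    for disk in info["AllDisksAndPartitions"]:
        for entry in [disk] + disk.get("Partitions", []):
            if entry.get("DeviceIdentifier") == device_identifier:
                return entry.get("MountPoint")
    return None

print(mount_point("disk1"))  # e.g. '/'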
I don't think this works in OS X, but one way on Linux to find out programmatically whether a device is mounted, and at which paths, is through the dbus org.freedesktop.UDisks.Device interface:
import sys, dbus

device_name = sys.argv[1]

bus = dbus.SystemBus()
ud_manager_obj = bus.get_object("org.freedesktop.UDisks", "/org/freedesktop/UDisks")
ud_manager = dbus.Interface(ud_manager_obj, 'org.freedesktop.UDisks')
device = bus.get_object('org.freedesktop.UDisks',
                        '/org/freedesktop/UDisks/devices/{0}'.format(device_name))
device_properties = dbus.Interface(device, dbus.PROPERTIES_IFACE)

if device_properties.Get('org.freedesktop.UDisks.Device', 'DeviceIsMounted'):
    for mount_path in device_properties.Get('org.freedesktop.UDisks.Device', 'DeviceMountPaths'):
        print(mount_path)
(From my comment above: mtab is the standard Linux way. It doesn't exist on FreeBSD, Mac OS X or Solaris. The former two have the getfsstat(2) and getmntinfo(2) system calls; on Solaris you can use getmntent(3C). Unfortunately the list of currently-mounted filesystems is not defined by POSIX AFAIK, so it is wildly different on different platforms.)
There's the experimental mount module in the PSI package from PyPI, which appears to attempt to bundle all the platform-specific methods into a simple abstraction, and which is advertised as working on Mac OS X (Darwin), AIX, Linux and Solaris. The Darwin module probably works on *BSD.
What about reading /etc/mtab and /etc/fstab?
I don't know OSX, but that's the standard Unix way to know what is mounted where. mtab should list all mounted filesystems, fstab should list all predefined mountpoints (which may or may not actually be mounted).
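As a rough Linux-only sketch of that approach (the function name is made up; /etc/mtab is usually a symlink to /proc/self/mounts on modern systems, and fields containing spaces are stored as octal escapes):

def mount_point_from_mtab(device, mtab="/etc/mtab"):
    """Return the first mount point listed for `device`, or None."""
    def unescape(field):
        # mtab stores spaces, tabs, newlines and backslashes as octal escapes
        return (field.replace("\\040", " ").replace("\\011", "\t")
                     .replace("\\012", "\n").replace("\\134", "\\"))

    with open(mtab) as f:
        for line in f:
            fields = line.split()
            if len(fields) >= 2 and unescape(fields[0]) == device:
                return unescape(fields[1])
    return None

print(mount_point_from_mtab("/dev/sda1"))  # e.g. '/' or None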
From within a Python application, how can I get the total amount of RAM of the system and how much of it is currently free, in a cross-platform way?
Ideally, the amount of free RAM should consider only physical memory that can actually be allocated to the Python process.
Have you tried SIGAR - System Information Gatherer And Reporter?
After installing it:
import os, sigar

sg = sigar.open()
mem = sg.mem()
sg.close()
print(mem.total() / 1024, mem.free() / 1024)
Hope this helps
psutil would be another good choice, though it also requires installing a third-party library.
>>> import psutil
>>> psutil.virtual_memory()
vmem(total=8374149120L, available=2081050624L, percent=75.1,
used=8074080256L, free=300068864L, active=3294920704,
inactive=1361616896, buffers=529895424L, cached=1251086336)
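The total and available fields are usually what you want; available is psutil's estimate of how much memory can still be given to processes without swapping:

import psutil

mem = psutil.virtual_memory()
print(mem.total)      # physical RAM in bytes
print(mem.available)  # memory that can still be allocated to processes without swapping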
For the free memory part, there is a function in the wx library:
wx.GetFreeMemory()
Unfortunately, this only works on Windows. Linux and Mac ports either return "-1" or raise a NotImplementedError.
You can't do this with just the standard Python library, although there might be some third party package that does it. Barring that, you can use the os package to determine which operating system you're on and use that information to acquire the info you want for that system (and encapsulate that into a single cross-platform function).
On Windows I use this method. It's kind of hacky, but it works using only the standard os library:
import os

process = os.popen('wmic memorychip get capacity')
result = process.read()
process.close()

totalMem = 0
for m in result.split(" \r\n")[1:-1]:
    totalMem += int(m)

print(totalMem / (1024**3))