I am trying to communicate with an AlphaLabs GM-2 Gaussmeter (https://www.alphalabinc.com/product/gm2/) over its USB port using serial in Python. The gaussmeter is a pretty straightforward device that only displays a digital value of the measured magnetic field on the front panel. We hope to get to the point where we can read the measurement and plot it versus time.
For now, we are having issues communicating with the device and would love some help! I've been trying to follow the Data Acquisition Manual for their systems (https://www.alphalabinc.com/wp-content/uploads/2018/02/alphaapp_comm_protocol.pdf)... but alas, we are definitely hitting a big roadblock.
According to the manual, if I would like to send the device the ID_METER_PROP command, I need to feed the device the command byte 0x01 followed by 'five bytes whose contents doesn't matter'. This should give us an ASCII block followed by either a terminate byte or a byte signaling that there is more data.
From our code we can get one ASCII block followed by this 'acknowledgement byte' (indicating there is more data to be sent from the gaussmeter...), but we can't seem to get our program to receive said data. Once we call this program it freezes the gaussmeter... like it's trying to send more data but just can't.
Thanks for any advice!
I've tried contacting tech support at Alpha Labs, but sadly they couldn't offer any coding help outside of their premade GUI.
'''python
# Define the command to send to the device
command = serial.to_bytes([0x01, 0x03, 0x03, 0x03, 0x03, 0x03])
#print(command)
# Send command to device and save its return
ret=gaussmeter.getIdentification(command)
print(ret) # print return variable
#-----
#Defined Function getIdentification for reference
#-----
def getIdentification(self, command):
    time.sleep(self.DEFAULT_SLEEP_TIME)
    self.port.write(command)
    identification = self.port.read(self.DEFAULT_READ_SIZE)
    test = self.port.read(self.DEFAULT_READ_SIZE)
    return identification, test
'''
The code above outputs: (b':METER_NAME=GM2_GAUS\x08', b'')
The '\x08' is the 'acknowledgement byte' described above and in the manual.
Calling this code freezes the gaussmeter device and the only way to reset it is to unplug it and plug it back in.
We'd expect to see more ASCII device settings as described in the manual, and we definitely do not expect the device to freak out.
First of all, I believe you should call sleep after calling write.
And the documentation says you should repeat the process if you get that 'acknowledgement byte'.
So: send the command, read, and if the acknowledgement byte is received, repeat.
Your code is not sending an ACKNOWLEDGE (0x08) back to the gaussmeter when an ACKNOWLEDGE (0x08) arrives as byte 21. The gaussmeter therefore freezes, waiting to receive an ACKNOWLEDGE before sending more information.
This code worked for me:
def getIdentification(self):
    time.sleep(self.DEFAULT_SLEEP_TIME)
    self.send_cmd(self.ID_METER_PROP)
    identification = self.port.read(self.DEFAULT_READ_SIZE)
    if len(identification) == 21:
        print(identification[0:20])
        more_to_read = (identification[20] == self.ACKNOWLEDGE)
        while more_to_read:
            self.send_cmd(self.ACKNOWLEDGE)
            identification = self.port.read(self.DEFAULT_READ_SIZE)
            print(identification[0:20])
            more_to_read = (identification[20] == self.ACKNOWLEDGE)
    else:
        print("Error reading from gaussmeter port")
I wrote a script a few years ago in Python 2.x and am trying to port it to run in 3.x.
Here are the basics of what I am doing:
NetworkDevices is a file that contains host and access credentials for a list of devices.
DebugFile is an output file I dump stuff to as I go (depending on the debug level), so I can see when a device sent something I wasn't expecting.
NetworkDevices = open(WorkingDir + DeviceFilename,"r")
DebugFile = open(WorkingDir + DebugFilename,"w",1)
try:
    # Create instance of SSHClient object
    ssh_pre = paramiko.SSHClient()
    # Automatically add untrusted hosts (make sure okay for security policy in your environment)
    ssh_pre.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # initiate SSH connection
    ssh_pre.connect(InputVars[0], username=InputVars[1], password=InputVars[2], timeout=10)
    # Use invoke_shell to establish an 'interactive session'
    Conn = ssh_pre.invoke_shell()
except:
    # SSH connection failed, let's try a telnet connection.
    Conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # Open Telnet Connection to device
        Conn.connect((InputVars[0], 23))
    except:
        # Print some error message saying you couldn't connect, blah blah
        pass
    else:
        output = str(Conn.recv(10000))
else:
    output = str(Conn.recv(10000))
# After I establish my connection via object Conn, I can always just pull more data from
# the same connection object regardless of whether it's ssh or telnet.
if DebugVar > 3:
    DebugFile.write("Here is the initial response after connecting.\n")
    DebugFile.write("<<<1>>>\n" + output + "\n<<<1>>>\n")
Then do a bunch of stuff depending on what is to be collected, device type, etc.
The output I get is this:
Here is the initial response after connecting.
<<<1>>>
b'\r\nUShaBTC-DCTNW-CORE01#'
<<<1>>>
I am trying to get this:
Here is the initial response after connecting.
<<<1>>>
UShaBTC-DCTNW-CORE01#
<<<1>>>
I've tried using formatted string literals, but that keeps leading me down the path of using string slicing, which I am trying to avoid. It's a really simple thing I'm trying to do: receive a "chunk" (to use a generic term) of data, look in that data for a certain string, keyword, or structure, then send something to the connection, and get another chunk of data.
Yes, what I'm looking for is in what I receive either way. I can still search on it, but at some point I need to write what I received out to a file that is readable, and I'm not having much luck doing that.
Hey one more question....
Apparently Python 3 doesn't support unbuffered file output. As I have shown, I dump data to a debug file (depending on the debug level my script runs at). In a crash, it's pretty important to get the last bit of data that was received in order to figure out why it crashed, so I always wrote received data immediately, with no buffering, to my debug file, so I could figure out later where it crashed and why. Is there a way to write to a file in Python 3 without buffering? I dropped back to a buffer of 1 (instead of 0) so my script will run in Python 3.x, but I still might not see the last bit of output (1 byte, I guess; I haven't actually been able to find out what the units are for that buffer size).
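For what it's worth, a minimal sketch of the options Python 3 does leave you, reusing the file names from the snippet above: buffering=0 is only accepted in binary mode, while in text mode buffering=1 means line buffering (not a one-byte buffer) and an explicit flush() pushes each write out immediately.

# Option 1: truly unbuffered, but binary mode only, so encode each write
DebugFile = open(WorkingDir + DebugFilename, "wb", buffering=0)
DebugFile.write(("<<<1>>>\n" + output + "\n<<<1>>>\n").encode("utf-8"))

# Option 2: stay in text mode, line buffered, and force each debug record out
DebugFile = open(WorkingDir + DebugFilename, "w", buffering=1)
DebugFile.write("<<<1>>>\n" + output + "\n<<<1>>>\n")
DebugFile.flush()  # optionally also os.fsync(DebugFile.fileno()) if the OS cache matters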
Appreciate any help.
Just call decode on the byte string:
output = Conn.recv(10000).decode('utf-8')
Though in general, you should not use SSHClient.invoke_shell. Use SSHClient.exec_command. See What is the difference between exec_command and send with invoke_shell() on Paramiko?
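For example, a rough sketch of that route, assuming the ssh_pre client from the question is already connected (the command string is only a placeholder):

stdin, stdout, stderr = ssh_pre.exec_command("show version")
output = stdout.read().decode('utf-8')  # bytes -> str, same decode as above
print(output)  # plain text now, no b'...' wrapper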
I have a standard server-client TCP setup. The basic idea is a chat system. Looking at only the client's side of the conversation, the client prompts the user for input with:
sys.stdout.write('<%s> ' % username)
sys.stdout.flush()
using the following logic:
while True:
    socket_list = [sys.stdin, s]
    read_sockets, write_sockets, error_sockets = select.select(socket_list, [], [])
    for sock in read_sockets:
        if sock == s:
            data = sock.recv(4096)
            if data:
                output('\a\r%s' % data)  # output incoming message
                sys.stdout.write('<%s> ' % username)  # prompt for input
                sys.stdout.flush()
            else:
                raise SystemExit
        else:
            msg = getASCII(sys.stdin.readline())  # returns only the ascii
            if msg:
                s.send(msg)
                sys.stdout.write('<%s> ' % username)
                sys.stdout.flush()
(Note: truncated snippet. The full code was linked here, but the linked code has since been updated and is no longer relevant.)
The problem is that when the user is typing and the client gets an incoming message from the server, the client outputs the message and prompts for input again. The message that was being typed is still in the stdin buffer but is gone from the screen. If the user presses enter to send the message, the entire message will be sent, including what was in the buffer, but on the user's screen only the second part of the message, the part after the interruption, will be displayed.
I have a possible solution, which is that when I prompt for input, I check if there's anything in the buffer and output that along with the prompt, but I have no idea how to implement it. Any help is appreciated.
To implement your solution, you will have to read from stdin in an unbuffered way. readline() and read() block until an EOL or EOF. You need the data from stdin BEFORE the return key is pressed. To achieve that, this might prove helpful: http://code.activestate.com/recipes/134892-getch-like-unbuffered-character-reading-from-stdin/ When you are about to write data, you could then read from stdin, store it somewhere and output it again after outputting the message. As select won't be called for stdin, make a separate read-thread that reads stdin. Use locks for accessing stdin's data so far.
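For reference, a minimal Unix-only sketch of that kind of unbuffered, one-character read (the ActiveState recipe linked above also includes a Windows msvcrt variant):

import sys
import termios
import tty

def getch():
    # Read a single character from stdin without waiting for Enter (Unix only).
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setcbreak(fd)  # or tty.setraw(fd) to also bypass signal handling
        ch = sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
    return ch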
As an alternative to implementing your own editing line input function as discussed in the question's comments, consider this approach: Change the scrolling region to leave out the screen's bottom line (the user input line) and enter the scrolling region only temporarily to output incoming server messages. That answer contains an example.
The problem seems to be that you are letting the messages from the other user interrupt the typing. I recommend you either only listen to one thing at a time (when the user is typing, let them finish and press enter before listening for remote messages) or listen for the user's input one key at a time and build up your own buffer (see Polling the keyboard (detect a keypress) in python). The downside to the latter approach is that you need to implement key editing, etc. There may be a library that accomplishes this for you.
Note that in most chat programs, the area where you type is in a separate window/screen region from where you see the messages. All messages (yours as well as others') show up in this message area when complete. Perhaps you can just display messages (independent of input) somewhere else on the screen.
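As a rough illustration of that idea (ANSI/VT100 terminals only; the choice of row is arbitrary), incoming messages can be painted onto a reserved row while the cursor stays on the input line:

import sys

MESSAGE_ROW = 1  # screen row reserved for incoming messages

def show_incoming(text):
    sys.stdout.write('\x1b7')                            # save cursor position
    sys.stdout.write('\x1b[%d;1H\x1b[2K' % MESSAGE_ROW)  # jump to the message row and clear it
    sys.stdout.write(text)
    sys.stdout.write('\x1b8')                            # restore cursor to the input line
    sys.stdout.flush()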
I'm trying to use PyVisa to control an Agilent 4156C using its FLEX command set. The communication seems to be working OK, as I can query the instrument with *IDN? and read the status byte. I also think I'm setting up my voltage sweep properly now, as I don't see any errors on the screen of the 4156 when I execute the Python script. My problem is that when I try to read the measurement data using the RMD? command, the instrument doesn't respond, and the program errors out with a timeout. Here is my current program:
import visa
rm = visa.ResourceManager()
inst = rm.open_resource('GPIB0::17::INSTR')
print(inst.query('*IDN?'))
inst.timeout = 6000
print(inst.write('US'))
print(inst.write('FMT 1,1'))
# Set short integration time
print(inst.write('SLI 1'))
# Enable SMU 3
print(inst.write('CN 3'))
# Set measurement mode to sweep (2) on SMU 3
print(inst.write('MM 2,3'))
# Setup voltage sweep on SMU 3
#print(inst.write('WV 3,3,0,0.01,0.1,0.01'))
print(inst.write('WV 3,3,0,-0.1,0.1,0.01,0.01,0.001,1'))
# Execute
print(inst.write('XE'))
# Query output buffer
print("********** Querying RMD **********")
print(inst.write('RMD? 0'))
print(inst.read())
print("********** Querying STB **********")
print(inst.query('*STB?'))
The program always hangs when I try to read after writing 'RMD? 0', or if I query that command. I feel like I am missing something simple, but I am just not able to find it in the available Agilent or PyVisa documentation. Any help would be greatly appreciated. I'm using the standard NI VISA that comes with LabVIEW (I mention that because I came across this post).
I encountered the same problem and solved it.
The XE command launches the execution of current/voltage measurements on the Agilent 4156C: it is thus not possible to send any additional GPIB command during execution. Even "*STB?" does not work.
The only way I found to check the status byte and measurement completion is to continuously poll the "inst.stb" property, which is updated by the VISA driver.
Hope this will help other users.
My code:
import time
import visa

class Agilent4156C:
    def __init__(self, address):
        try:
            rm = visa.ResourceManager(r'C:\Windows\system32\visa32.dll')
            self.com = rm.open_resource(address)
            self.com.read_termination = '\n'
            self.com.query_delay = 0.0
            self.com.timeout = 5000
        except:
            print("** error while connecting to B1500 **")

    def execution(self):
        self.com.write("XE")
        while self.com.stb != 17:
            time.sleep(0.5)
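A hypothetical usage of the class above, reusing the sweep setup from the question (the GPIB address is the one from the question and may differ on your system):

inst = Agilent4156C('GPIB0::17::INSTR')
inst.com.write('US')
inst.com.write('FMT 1,1')
inst.com.write('SLI 1')
inst.com.write('CN 3')
inst.com.write('MM 2,3')
inst.com.write('WV 3,3,0,-0.1,0.1,0.01,0.01,0.001,1')
inst.execution()                  # returns once the status byte reports completion
data = inst.com.query('RMD? 0')   # the output buffer can now be read without timing out
print(data)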
I am currently writing an AI program that receives input from Dragon NaturallySpeaking (using Natlink), processes it, and returns a spoken output. I was able to come up with a Receiver GrammarBase that captures all input from Dragon and sends it to my parser.
class Receiver(GrammarBase):
    gramSpec = """ <start> exported = {emptyList}; """

    def initialize(self):
        self.load(self.gramSpec, allResults = 1)
        self.activateAll()

    def gotResultsObject(self, recogType, resObj):
        if recogType == 'reject':
            inpt, self.best_guess = [], []
        else:
            inpt = extract_words(resObj)
            inpt = process_input(inpt)  # Forms a list of possible interpretations
            self.best_guess = resObj.getWords(0)
        self.send_input(inpt)

    def send_input(self, inpt):
        send = send_to_parser(inpt)  # Sends first possible interpretation to parser
        try:
            while True:
                send.next()  # Sends the next possible interpretation if the first is rejected
        except StopIteration:  # If all interpretations are rejected, try sending the input to Dragon
            try:
                recognitionMimic(parse(self.best_guess))
            except MimicFailed:  # If that fails too, execute all_failed
                all_failed()
This code works as expected, but there are several problems:
Dragon processes the input before sending it to my program. For example, if I were to say "Open Google Chrome.", it would open Google Chrome, and then send the input to Python. Is there a way to send the input to Python without first processing it?
When I call waitForSpeech(), a message box pops up, stating that the Python interpreter is waiting for input. Is it possible (for aesthetics and convenience) to prevent the message box from showing up, and instead terminate the speech collecting process after a significant pause from the user?
Thank you!
With respect to your first question, it turns out that DNS uses the "Open ..." utterance as part of its internal command-resolving process. This means that DNS resolves the speech and executes the command well before natlink has a chance at it. The only way around this is to change the utterance from "Open ..." to "Trigger ..." in your natlink grammar (or to some other utterance that DNS is not already using).
Some of the natlink developers hang out at speechcomputing.com. You may get better responses there.
Good luck!
I have a rare bug that seems to occur when reading from a socket.
It seems that, during reading of data, I sometimes get only 1-3 bytes of a data package that is bigger than that.
As I learned from pipe programming, there I always get at least 512 bytes, as long as the sender provides enough data.
Also, my sender transmits at least 4 bytes any time it transmits anything -- so I was thinking that at least 4 bytes would be received at once at the beginning (!!) of the transmission.
In 99.9% of all cases my assumption seems to hold... but there are really rare cases when fewer than 4 bytes are received. It seems ridiculous to me; why would the networking system do this?
Does anybody know more?
Here is the reading-code I use:
mySock, addr = masterSock.accept()
mySock.settimeout(10.0)
result = mySock.recv(BUFSIZE)
# 4 bytes are needed here ...
...
# read remainder of datagram
...
The sender sends the complete datagram with one call of send.
Edit: the whole thing is working on localhost -- so no complicated network applications (routers etc.) are involved. BUFSIZE is at least 512 and the sender sends at least 4 bytes.
I assume you're using TCP. TCP is a stream-based protocol with no idea of packets or message boundaries.
This means that when you do a read you may get fewer bytes than you requested. If your data is 128k, for example, you may only get 24k on your first read, requiring you to read again to get the rest of the data.
For an example in C:
int read_data(int sock, int size, unsigned char *buf) {
    int bytes_read = 0, len = 0;
    while (bytes_read < size &&
           ((len = recv(sock, buf + bytes_read, size - bytes_read, 0)) > 0)) {
        bytes_read += len;
    }
    if (len == 0 || len < 0) doerror();
    return bytes_read;
}
As far as I know, this behaviour is perfectly reasonable. Sockets may, and probably will, fragment your data as they transmit it. You should be prepared to handle such cases by applying appropriate buffering techniques.
On the other hand, if you are transmitting the data on localhost and you are indeed getting only a few bytes, it probably means you have a bug somewhere else in your code.
EDIT: An idea: try firing up a packet sniffer and see whether the transmitted packet is full or not; this might give you some insight into whether your bug is in your client or in your server.
The simple answer to your question, "Read from socket: Is it guaranteed to at least get x bytes?", is no. Look at the doc strings for these socket methods:
>>> import socket
>>> s = socket.socket()
>>> print s.recv.__doc__
recv(buffersize[, flags]) -> data
Receive up to buffersize bytes from the socket. For the optional flags
argument, see the Unix manual. When no data is available, block until
at least one byte is available or until the remote end is closed. When
the remote end is closed and all data is read, return the empty string.
>>>
>>> print s.settimeout.__doc__
settimeout(timeout)
Set a timeout on socket operations. 'timeout' can be a float,
giving in seconds, or None. Setting a timeout of None disables
the timeout feature and is equivalent to setblocking(1).
Setting a timeout of zero is the same as setblocking(0).
>>>
>>> print s.setblocking.__doc__
setblocking(flag)
Set the socket to blocking (flag is true) or non-blocking (false).
setblocking(True) is equivalent to settimeout(None);
setblocking(False) is equivalent to settimeout(0.0).
From this it is clear that recv() is not required to return as many bytes as you asked for. Also, because you are calling settimeout(10.0), it is possible that some, but not all, data is received near the expiration time for the recv(). In that case recv() will return what it has read, which will be less than you asked for (but consistently < 4 bytes does seem unlikely).
You mention a datagram in your question, which implies that you are using (connectionless) UDP sockets (not TCP). The distinction is described here. The posted code does not show socket creation, so we can only guess here; however, this detail can be important. It may help if you could post a more complete sample of your code.
If the problem is reproducible you could disable the timeout (which incidentally you do not seem to be handling) and see if that fixes the problem.
This is just the way TCP works. You aren't going to get all of your data at once. There are just too many timing issues between sender and receiver, including the sender's operating system, NIC, routers, switches, the wires themselves, the receiver's NIC, OS, etc. There are buffers in the hardware and in the OS.
You can't assume that the TCP network is the same as an OS pipe. With a pipe, it's all software, so there's no cost in delivering the whole message at once for most messages. With a network, you have to assume there will be timing issues, even in a simple network.
That's why recv() can't give you all the data at once; it may just not be available, even if everything is working right. Normally, you will call recv() and catch the output. That should tell you how many bytes you've received. If it's less than you expect, you need to keep calling recv() (as has been suggested) until you get the correct number of bytes. Be aware that in C, recv() returns -1 on error, so check for that and check your documentation for errno values. EAGAIN in particular seems to cause people problems; you can read about it on the internet for details, but if I recall, it means that no data is available at the moment and you should try again. (In Python, a failed recv() raises an exception rather than returning -1.)
Also, it sounds from your post like you're sure the sender is sending the data you need sent, but just to be complete, check this:
http://beej.us/guide/bgnet/output/html/multipage/advanced.html#sendall
You should be doing something similar on the recv() end to handle partial receives. If you have a fixed packet size, you should read until you get the amount of data you expect. If you have a variable packet size, you should read until you have the header that tells you how much data was sent, then read that much more data.
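For the variable-size case, here is a minimal Python sketch of that pattern, assuming a 4-byte big-endian length header (the header format is just an example, not something your sender necessarily uses):

import struct

def recv_exactly(sock, n):
    # Keep calling recv() until exactly n bytes have arrived, or fail if the peer closes early.
    data = bytearray()
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise EOFError("connection closed after %d of %d bytes" % (len(data), n))
        data.extend(chunk)
    return bytes(data)

def recv_message(sock):
    # Read the length header first, then exactly that many payload bytes.
    (length,) = struct.unpack("!I", recv_exactly(sock, 4))
    return recv_exactly(sock, length)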
From the Linux man page of recv http://linux.about.com/library/cmd/blcmdl2_recv.htm:
The receive calls normally return any data available, up to the requested amount, rather than waiting for receipt of the full amount requested.
So, if your sender is still transmitting bytes, the call will only give what has been transmitted so far.
If the sender sends 515 bytes, and your BUFSIZE is 512, then the first recv will return 512 bytes, and the next will return 3 bytes... Could this be what's happening?
(This is just one case amongst many which will result in a 3-byte recv from a larger send...)
If you are still interested, patterns like this:
# 4 bytes are needed here ......
# read remainder of datagram...
may create the silly window syndrome.
Check this out
Use the recv_into(...) method from the socket module.
Robert S. Barnes wrote the example in C.
But you can do the same in Python 2.x with the standard Python libraries:
import struct

def readReliably(s, n):
    buf = bytearray(n)
    view = memoryview(buf)
    sz = s.recv_into(view, n)
    return sz, buf

while True:
    sk, skfrom = s.accept()
    sz, buf = readReliably(sk, 4)
    a = struct.unpack("4B", buf)
    print repr(a)
    ...
Notice that the sz returned by the readReliably() function may be less than n.