I want to communicate with a phone via the serial port. After writing a command to the phone, I used ser.read(ser.inWaiting()) to read its response, but I always got a total of 1020 bytes, while the expected response is over 50 KB.
I have tried ser.read(50000), but then the interpreter hangs.
How can I expand the input buffer to get the whole response at once?
If you run your code on Windows, you simply need to add one line to your code:
from serial import Serial
ser = Serial(port='COM1', baudrate=115200, timeout=1, writeTimeout=1)
ser.set_buffer_size(rx_size = 12800, tx_size = 12800)
Here 12800 is an arbitrary number I chose. You can make the receiving (rx) and transmitting (tx) buffers as big as 2147483647 (that is, 2^31 - 1).
This will let you expand the input buffer and get the whole response at once.
Be aware that this works only with some drivers, since the call is only a recommendation: the driver may ignore it and stick with its original buffer size.
I have had exactly the same problem, including the 1020-byte buffer size, and haven't found a way to change it. My solution has been to implement a loop like:
in_buff = b''
while mbed.inWaiting():
    in_buff += mbed.read(mbed.inWaiting())  # read the contents of the buffer
    time.sleep(0.11)  # depending on your hardware, it can take time to refill the buffer
I would be very pleased if someone can come up with a buffer-resize solution!
I'm guessing that you are reading 1020 bytes because that is all there is in the buffer, which is what ser.inWaiting() is reporting. Depending on the baud rate, 50 KB may take a while to transfer, or the phone may be expecting something different from you. Handshaking?
Inspect the value of ser.inWaiting(), and then the contents of what you are receiving, for hints.
pySerial uses the native OS drivers for serial receiving. On Windows, the size of the input buffer depends on the device driver.
You may be able to increase the size in your Device Manager settings, but ultimately you just need to read the data fast enough.
I am trying to set up communication between an STM32 and a laptop.
I am trying to receive data over serial, sent by the STM32. The actual data I am sending is 0x08 0x09 0x0A 0x0B.
I checked on the oscilloscope and I am indeed sending the correct values in the correct order.
What I actually receive is:
b'\n\x0b\x08\t'
I assume that Python is not reading an input greater than 3 bits in size, but I cannot figure out why.
Please find my code below:
import serial

ser = serial.Serial('COM3', 115200, bytesize=8)
while 1:
    if ser.inWaiting() != 0:
        print(ser.read(4))
If someone could help, it would be nice! :)
Check your UART baud rate; keep the Python serial rate the same as on the STM32.
What comes to mind when looking at the pySerial library is that when you initialize your COM port:
you are not providing a read timeout parameter;
you are awaiting 4 bytes of data from the serial port.
The problem with this approach is that your code will wait forever until it gets those 4 bytes. Since you are sending the same array repeatedly from the STM32, it looks like you received 0x0A, 0x0B from an older packet and 0x08, 0x09 from a newer one; the Python code printed those 4 bytes, while the remaining 0x0A, 0x0B of the newer packet sit in the buffer until 2 more bytes arrive and the read is allowed to return data to print.
Setting a timeout on read and limiting the read argument to a single byte might solve your problem.
Also, for further development: if you want regular communication between a microcontroller and a computer, I would suggest moving the received bytes into a separate buffer and recognizing individual packets in a separate parser thread. Note that building more complex serial communication in Python can be painful, as even with a separate thread it will be quite slow.
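A minimal sketch of the timeout-plus-single-byte approach (the StubPort class here is mine, standing in for serial.Serial so the logic can be shown without hardware; with pySerial you would open the port with, e.g., serial.Serial('COM3', 115200, timeout=0.5)):

```python
class StubPort:
    """Mimics Serial.read(1) with a timeout: yields queued bytes, then b''."""
    def __init__(self, data):
        self.data = bytearray(data)

    def read(self, n=1):
        if not self.data:
            return b''               # pySerial returns b'' on timeout
        out = bytes(self.data[:n])
        del self.data[:n]
        return out

def read_packet(port, size=4):
    """Collect one packet byte by byte; give up on timeout instead of hanging."""
    packet = bytearray()
    while len(packet) < size:
        b = port.read(1)
        if not b:                    # timeout: no complete packet available
            return None
        packet += b
    return bytes(packet)

# A misaligned stream: the tail of an old packet followed by a full new one,
# which reproduces the rotated bytes seen in the question.
port = StubPort(b'\x0a\x0b' + b'\x08\x09\x0a\x0b')
print(read_packet(port))  # b'\n\x0b\x08\t'
```

On a real port, the timeout is what lets you notice a partial packet, discard it, and resynchronize on the next burst of data.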
I'm working on an embedded system that sends commands via UART.
The UART works at 115200 baud.
On PC side I want to read these commands, parse them and execute the related action.
I choose python as language to build a script.
This is a typical command received from the embedded system:
S;SEND;40;{"ID":"asg01","T":1,"P":{"T":180}};E
Each message starts with S and ends with E.
The command associated with the message is "SEND" and the payload length is 40.
My idea is read the bytes coming from the UART and:
check if the message starts with S
check if the message ends with E
if the above assumptions are true, split the message in order to find the command and the payload.
What is the best way to parse all the bytes coming from an asynchronous UART?
My concern is losing messages due to wrong (or slow) parsing.
Thanks for the help!
BR,
Federico
In my day job, I wrote the software for an embedded system and a PC communicating with each other by a USB cable, using the UART protocol at 115,200 baud.
I see that you tagged your post with PySerial, so you already know about Python's most popular package for serial port communication. I will add that if you are using PyQt, there's a serial module included in that package as well.
115,200 baud is not fast for a modern desktop PC. I doubt that any parsing you do on the PC side will fail to keep up. I parse data streams and plot graphs of my data in real time using PyQt.
What I have noticed in my work with communication between an embedded system and a PC over a UART is that some data gets corrupted occasionally. A byte can be garbled, repeated, or dropped. Also, even if no bytes are added or dropped, you can occasionally perform a read while only part of a packet is in the buffer, and the read will terminate early. If you use a fixed read length of 40 bytes and trust that each read will always line up exactly with a data packet as you show above, you will frequently be wrong.
To solve these kinds of problems, I wrote a FIFO class in Python which consumes serial port data at the head of the FIFO, yields valid data packets at the tail, and discards invalid data. My FIFO holds 3 times as many bytes as my data packets, so if I am looking for packet boundaries using specific sequences, I have plenty of signposts.
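A minimal sketch of that idea for the message format in this question (the class and method names are mine, not the answerer's actual implementation; it frames purely on the S;...;E markers, so a robust version would also use the length field in case a JSON payload can itself contain ';E'):

```python
# Bytes from the port are appended at one end of the buffer, and complete
# "S;...;E" frames are carved off the other; leading garbage is discarded.
class FrameBuffer:
    def __init__(self):
        self.buf = bytearray()

    def feed(self, data):
        """Append raw bytes as they arrive from the serial port."""
        self.buf += data

    def frames(self):
        """Yield complete b'S;...;E' frames, dropping invalid prefixes."""
        while True:
            start = self.buf.find(b'S;')
            if start < 0:
                self.buf.clear()        # nothing useful: discard
                return
            end = self.buf.find(b';E', start)
            if end < 0:
                del self.buf[:start]    # keep the partial frame for later
                return
            yield bytes(self.buf[start:end + 2])
            del self.buf[:end + 2]

fb = FrameBuffer()
fb.feed(b'garbageS;SEND;40;{"ID":"asg01"};ES;SEN')  # one frame + a partial one
print(list(fb.frames()))  # [b'S;SEND;40;{"ID":"asg01"};E']
```

The partial trailing frame stays in the buffer, so the next feed() can complete it; this is what makes reads that split a packet across two calls harmless.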
A few more recommendations: work in Python 3 if you have the choice; it's cleaner. Use bytes and bytearray objects. Don't use str, because you will find yourself converting back and forth between Unicode and ASCII.
This format is almost parseable as CSV, but not quite, because the fourth field is JSON, and you may not be able to guarantee that the JSON doesn't contain any strings with embedded semicolons. So I think you probably want to just use string (or, rather, bytes) manipulation functions:
import json

def parsemsg(buf):
    s, cmd, length, rest = buf.split(b';', 3)
    j, _, e = rest.rpartition(b';')
    if s != b'S' or e != b'E':
        raise ValueError('must start with S and end with E')
    return cmd.decode('utf-8'), int(length), json.loads(j)
Then:
>>> parsemsg(b'S;SEND;40;{"ID":"asg01","T":1,"P":{"T":180}};E')
('SEND', 40, {'ID': 'asg01', 'T': 1, 'P': {'T': 180}})
The actual semicolon-parsing part takes 602 ns on my laptop; the decode and int raise that to 902 ns. The json.loads, on the other hand, takes 10 µs. So if you're worried about performance, the JSON part is really the only part that matters (of the third-party JSON libs I happen to have installed, the fastest is still 8.1 µs, which isn't much better). You might as well keep everything else simple and robust.
Also, considering that you're reading this at 115200 baud, you can't receive these messages any faster than one every few milliseconds, so spending 11 µs parsing them is not even close to a problem in the first place.
First time poster.
Before I start, I just want to say that I am a beginner programmer so bear with me, but I can still follow along quite well.
I have a wireless device called a Pololu Wixel, which can send and receive data wirelessly. I'm using two of them. One to send and one to receive. It's USB so it can plug straight into my Raspberry Pi or PC, so all I have to do is connect to the COM port through a terminal to read and write data to it. It comes with a testing terminal program that allows me to send 1-16 bytes of info. I've done this and I've sent and received 2 bytes (which is what I need) with no problem.
Now here's my problem: when I open up the Ubuntu terminal and use pySerial to connect to the sending Wixel's COM port and write a value larger than 255, my receiving COM port, connected to another terminal instance also using pySerial, doesn't read the right value. So I think I am able to read and write only one byte, not two. After some reading in the pySerial documentation, I believe (but don't know) that pySerial can only read and write 5, 6, 7, or 8 bits at a time.
I hope my problem is obvious now. How can I write 2 bytes' worth of data to the COM port and send it to the other device, which needs to read those 2 bytes, all using pySerial?
I hope this all makes sense, and I would greatly appreciate any help.
Thanks
UPDATE
Okay, I think I've got this going now. I did:
import serial

s = serial.Serial(3)  # device #1 at COM port 4 (sending)
r = serial.Serial(4)  # device #2 at COM port 5 (receiving)
s.timeout = 1
r.timeout = 1
s.write('0x80')
r.readline()
# Output: '0x80'
s.write('hh')
r.readline()
# Output: 'hh'
Honestly, I think this solves my problem. Maybe there never was a problem to begin with. Maybe I can take my 16-bit binary data from the program, for example "1101101011010101", turn it into characters (I've seen something called chr(); I think that's it), then use s.write('WHATEVER'), then use r.readline() and convert back to binary.
You'll likely need to pull your number apart into multiple bytes and send the pieces in little-endian or big-endian order.
EG:
low_byte = number % 256
high_byte = number // 256
That should get you up to 65535. You can reconstruct the number on the other side with high_byte * 256 + low_byte.
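A runnable sketch of that split and reconstruction (the struct lines at the end show the standard-library equivalent):

```python
import struct

number = 0b1101101011010101   # example 16-bit value from the question (56021)
low_byte = number % 256        # 0b11010101 = 213
high_byte = number // 256      # 0b11011010 = 218

# Little endian: send the low byte first, e.g. with
# s.write(bytes([low_byte, high_byte])) on the pySerial port.
payload = bytes([low_byte, high_byte])

# Receiving side: rebuild the number from the two bytes.
received = payload[0] + payload[1] * 256
print(received)  # 56021

# The struct module does the same packing/unpacking in one call.
assert struct.pack('<H', number) == payload
assert struct.unpack('<H', payload)[0] == number
```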
I am reading data from a microcontroller via serial at a baud rate of 921600. I'm reading a large amount of ASCII CSV data, and since it comes in so fast, the buffer gets filled and the rest of the data is lost before I can read it. I know I could manually edit the pySerial source code for serialwin32 to increase the buffer size, but I was wondering if there is another way around it?
I can only estimate the amount of data I will receive, but it is somewhere around 200kB of data.
Have you considered reading from the serial interface in a separate thread, started before sending the command that tells the uC to send its data?
This would remove some of the delay between the write command and the start of the read. Other SO users have had success with this method, granted they weren't having buffer overruns.
If this isn't clear, let me know and I can throw something together to show it.
EDIT
Thinking about it a bit more: if you're trying to read from the buffer and write the data out to the file system, even the standalone thread might not save you. To minimize the processing time, you might consider reading, say, 100 bytes at a time with serial.read(size=100) and pushing that data into a Queue, then processing it all after the transfer has completed.
Pseudo Code Example
def thread_main_loop(myserialobj, data_queue):
    data_queue.put_nowait(myserialobj.read(size=100))

def process_queue_when_done(data_queue):
    while True:
        if not data_queue.empty():
            popped_data = data_queue.get_nowait()
            # Process the data as needed
        else:
            break
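A runnable version of that pattern (io.BytesIO stands in for the serial port here so the sketch works without hardware; with pySerial you would pass a Serial object opened with a timeout instead):

```python
import io
import queue
import threading

def reader(port, data_queue):
    """Drain the port in 100-byte chunks so the OS buffer never overflows."""
    while True:
        chunk = port.read(100)
        if not chunk:              # timeout / end of stream: stop reading
            break
        data_queue.put_nowait(chunk)

def drain(data_queue):
    """Reassemble everything once the transfer has completed."""
    chunks = []
    while not data_queue.empty():
        chunks.append(data_queue.get_nowait())
    return b''.join(chunks)

# Stand-in data source: 350 bytes, read in 100-byte chunks.
port = io.BytesIO(b'x' * 350)
q = queue.Queue()
t = threading.Thread(target=reader, args=(port, q))
t.start()
t.join()
data = drain(q)
print(len(data))  # 350
```

The key point is that the reader thread does nothing but move bytes into the queue; all parsing and file I/O happens later, so the port is always being drained.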
There's a "Receive Buffer" slider that's accessible from the com port's Properties Page in Device Manager. It is found by following the Advanced button on the "Port Settings" tab.
More info:
http://support.microsoft.com/kb/131016 under heading Receive Buffer
http://tldp.org/HOWTO/Serial-HOWTO-4.html under heading Interrupts
Try knocking it down a notch or two.
You do not need to manually change the pySerial code.
If you run your code on Windows, you simply need to add one line:
ser.set_buffer_size(rx_size=12800, tx_size=12800)
Here 12800 is an arbitrary number I chose. You can make the receiving (rx) and transmitting (tx) buffers as big as 2147483647 (2^31 - 1).
See also:
https://docs.python.org/3/library/ctypes.html
https://msdn.microsoft.com/en-us/library/system.io.ports.serialport.readbuffersize(v=vs.110).aspx
You might be able to set up the serial port from the DLL:
// Setup serial
mySerialPort.BaudRate = 9600;
mySerialPort.PortName = comPort;
mySerialPort.Parity = Parity.None;
mySerialPort.StopBits = StopBits.One;
mySerialPort.DataBits = 8;
mySerialPort.Handshake = Handshake.None;
mySerialPort.RtsEnable = true;
mySerialPort.ReadBufferSize = 32768;
Property Value
Type: System.Int32
The buffer size, in bytes. The default value is 4096; the maximum value is that of a positive int, or 2147483647
And then open and use it in Python
I am somewhat surprised that nobody has yet mentioned the correct solution to such problems (when available), which is effective flow control, either software (XON/XOFF) or hardware (RTS/CTS), between the microcontroller and its sink. The issue is well described by this web article.
It may be that the source device doesn't honour such protocols, in which case you are stuck with solutions that delegate the problem upwards, to where more resources are available (from the UART buffer to the driver and on up towards your application code). If you are losing data, it would certainly seem sensible to try a lower data rate if that's a possibility.
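In pySerial, flow control is enabled with constructor flags; a configuration sketch (the port name is an example, and the device on the other end must actually honour whichever scheme you pick):

```python
import serial

# rtscts enables hardware (RTS/CTS) flow control; xonxoff enables
# software (XON/XOFF) flow control. Use whichever your device supports.
ser = serial.Serial(
    port='COM1',
    baudrate=115200,
    rtscts=True,     # hardware flow control (needs the lines wired through)
    xonxoff=False,   # set True instead for software flow control
    timeout=1,
)
```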
For me the problem was that the buffer was overflowing when receiving data from the Arduino.
All I had to do was call mySerialPort.flushInput() and it worked.
I don't know why mySerialPort.flush() didn't work; flush() must only flush the outgoing data?
All I know is that mySerialPort.flushInput() solved my problem.
I am writing a program in Python that will act as a server and accept data from a client. Is it a good idea to impose a hard limit on the amount of data, and if so, why?
More info:
Certain chat programs limit the amount of text one can send per send (i.e. each time the user presses send), so the question comes down to: is there a legitimate reason for this, and if yes, what is it?
Most likely you've seen code which protects against "extra" incoming data. This is often due to the possibility of buffer overruns, where the extra data being copied into memory overruns the pre-allocated array and overwrites executable code with attacker code. Code written in languages like C typically has a lot of length checking to prevent this type of attack. Functions such as gets, and strcpy are replaced with their safer counterparts like fgets and strncpy which have a length argument to prevent buffer overruns.
If you use a dynamic language like Python, your arrays resize so they won't overflow and clobber other memory, but you still have to be careful about sanitizing foreign data.
Chat programs likely limit the size of a message for reasons such as database field size. If 80% of your incoming messages are 40 characters or less, 90% are 60 characters or less, and 98% are 80 characters or less, why make your message text field allow 10k characters per message?
What is your question, exactly?
When you do a receive on a socket, the data currently available in the socket buffer is returned immediately. If you give receive (or read, I guess) a huge buffer size, such as 40000, it will likely never return that much data at once. If you give it a tiny buffer size like 100, it will return the 100 bytes it has immediately and still have more available. Either way, you're not imposing a limit on how much data the client is sending you.
I don't know what your actual application is; however, setting a hard limit on the total amount of data that a client can send could be useful in reducing your exposure to denial-of-service attacks, e.g. a client that connects and sends 100 MB of data, which could load your application unacceptably.
But it really depends on what your application is. Are you after a per-line limit, a total per-connection limit, or what?
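A sketch of such a per-connection cap (MAX_BYTES and the function name are mine; it is demonstrated with socket.socketpair so it runs without a real server):

```python
import socket

MAX_BYTES = 1024  # hard per-connection limit (the value is an example)

def recv_limited(conn, limit=MAX_BYTES):
    """Read until EOF, but drop the connection once the limit is exceeded."""
    received = b''
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            return received
        received += chunk
        if len(received) > limit:
            conn.close()          # too much data: stop serving this client
            raise ValueError('client exceeded %d byte limit' % limit)

# Demonstration with a local socket pair instead of a real server:
a, b = socket.socketpair()
a.sendall(b'hello')
a.close()                         # EOF, so recv_limited returns
print(recv_limited(b))  # b'hello'
```

The check happens on the running total, not on any single recv call, which is what actually bounds a client's total traffic per connection.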