I am developing a simple web server that runs on a Raspberry Pi Zero. It lights up an LED when a request is received on the POST route (with the color, intensity, blink timing and other information contained in the request data) and shuts it off when a request is received on the DELETE route.
I want a sort of backup of the requests I make to the server, so that they can be "redone" (in whatever order) when the server restarts and the LEDs turn back on without me having to redo all of the requests by hand.
Right now (since it was the easiest and fastest way to build a proof of concept), every time I make a POST request I save the color in a dict, keyed by the serial of the LED, and then write the dict to a JSON file.
When I receive a DELETE request I read the file, delete the entry and write the file back with whatever other entries it contains (if more than one LED was connected). If the server loses power or gets shut down and restarts, it reads the file and restores the LED statuses.
I was wondering what the best way to build a system like this would be (using a file, a DB or some other solution) while using as little RAM as possible, since I already have other services running on the Pi that use quite a bit of it.
Depending on how many LEDs there are, it sounds like what you are doing will produce a JSON file of only a few bytes, right? There are ways you could compress that, but unless you have a huge number of LEDs I doubt the saving would be significant compared to everything else.
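If you stick with the JSON file, a minimal sketch of the save/restore pattern you describe might look like this (the file path and the per-LED fields are assumptions). Writing to a temporary file and renaming it means a power loss mid-write can't leave a truncated JSON file behind:

import json, os, tempfile

STATE_FILE = "/home/pi/led_state.json"  # placeholder path

def save_state(state):
    # Write to a temp file in the same directory, then atomically
    # rename it over the old file.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(STATE_FILE))
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, STATE_FILE)

def load_state():
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

# POST:   state[led_serial] = {"color": "#ff0000", "intensity": 0.5}
# DELETE: state.pop(led_serial, None)
# ...then call save_state(state) in both handlers.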
I wrote a program that saves sensor data in a log file (server side).
This file is stored in a temporary directory (a RAM disk).
Each line contains a timestamp and a JSON string.
The update rate depends on the sensor data, but the fastest is every 0.5 s.
What I want to do is stream every update to this file to a client application.
I have several approaches in mind:
maybe a shared folder on the server side (Samba) with a script on the client side that just checks the file every 0.5 s
maybe another server program running on the server, checking for updates (but I don't want to do this, because the Raspberry Pi is slow)
Has anyone done something like this before and can share some ideas? Is there already a Python module for this (something that opens a file like a stream and emits whatever gets appended to it)? Is it smart to check a file for updates constantly?
To stream the log file to an application you can use
tail -n 1000000 -f logfile | application
(This prints the existing lines, then keeps checking the file for new lines and streams them to the application, waiting again until more lines appear.)
But this will of course put load on your server, as the polling for new lines has to run on the Raspberry Pi itself. A small program (written in C, with a decent sleep) running on the server might in fact put less load on it than querying for new lines over the network.
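If you'd rather stay in Python than shell out to tail, a rough equivalent is to seek to the end of the file and poll for new lines; the path is a placeholder and the half-second sleep matches your fastest update rate:

import time

def follow(path, poll_interval=0.5):
    with open(path) as f:
        f.seek(0, 2)  # jump to the end of the file
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(poll_interval)

# for entry in follow("/tmp/sensor.log"):
#     handle(entry)  # handle() is whatever your client does with a line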
I'm doing something like that.
I have a server running on my Raspberry Pi plus a client that parses the output of the server and sends it to another server on the web.
What I do is have the local server program write its data in chunks.
Every time it writes the data (also to tmpfs, by the way), it writes to a different file, so I don't get errors from trying to parse a file while something else is still writing to it.
After it writes a file, it starts the client program to parse and send the data (using subprocess with the name of the file as a parameter).
Works great for me.
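A hedged sketch of that chunk-per-file approach; the directory, the file naming and the client command line are all assumptions:

import os, subprocess, time

CHUNK_DIR = "/tmp/chunks"  # on tmpfs, as described above

def write_chunk(data):
    # Each chunk gets its own file, so the parser never sees a
    # half-written file.
    name = os.path.join(CHUNK_DIR, "chunk-%d" % time.time_ns())
    with open(name, "w") as f:
        f.write(data)
    # Hand the finished file to the client program to parse and upload.
    subprocess.Popen(["python3", "client.py", name])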
I am reading data from a microcontroller over serial at a baud rate of 921600. It is a large amount of ASCII CSV data, and since it comes in so fast, the buffer gets filled and the rest of the data is lost before I can read it. I know I could manually edit the pyserial source code for serialwin32 to increase the buffer size, but I was wondering if there is another way around it?
I can only estimate the amount of data I will receive, but it is somewhere around 200 kB.
Have you considered reading from the serial interface in a separate thread that is already running before you send the command telling the microcontroller to send its data?
This would remove some of the delay between the write command and the start of the read. Other SO users have had success with this method, granted they weren't having buffer overruns.
If this isn't clear let me know and I can throw something together to show this.
EDIT
Thinking about it a bit more: if you're trying to read from the buffer and write it out to the file system, even the standalone thread might not save you. To minimize the processing time, you might consider reading, say, 100 bytes at a time with serial.read(size=100) and pushing that data into a Queue, then processing it all after the transfer has completed.
Pseudo Code Example
import queue

def thread_main_loop(myserialobj, data_queue, stop_event):
    # Read small chunks and queue the raw bytes; processing happens
    # later, so the reads stay fast enough to keep draining the buffer.
    while not stop_event.is_set():
        data = myserialobj.read(size=100)
        if data:
            data_queue.put_nowait(data)

def process_queue_when_done(data_queue):
    while True:
        try:
            popped_data = data_queue.get_nowait()
        except queue.Empty:
            break
        # Process popped_data as needed
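Hypothetical wiring of the two helpers above; the port name and baud rate are placeholders:

import queue
import threading

import serial

ser = serial.Serial("COM4", 921600, timeout=0.1)
data_queue = queue.Queue()
stop_event = threading.Event()

reader = threading.Thread(target=thread_main_loop,
                          args=(ser, data_queue, stop_event),
                          daemon=True)
reader.start()
# ... send the command that triggers the microcontroller here ...
# and once the transfer is done:
stop_event.set()
reader.join()
process_queue_when_done(data_queue)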
There's a "Receive Buffer" slider on the COM port's Properties page in Device Manager: on the "Port Settings" tab, click the Advanced button.
More info:
http://support.microsoft.com/kb/131016 under heading Receive Buffer
http://tldp.org/HOWTO/Serial-HOWTO-4.html under heading Interrupts
Try knocking it down a notch or two.
You do not need to manually change pyserial code.
If you run your code on Windows, you simply need to add a line to your code:
ser.set_buffer_size(rx_size=12800, tx_size=12800)
where 12800 is an arbitrary number I chose. You can make the receive (rx) and transmit (tx) buffers as big as 2147483647.
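For example (the port name, baud rate and buffer sizes are placeholders):

import serial

ser = serial.Serial("COM3", baudrate=921600, timeout=1)
ser.set_buffer_size(rx_size=12800, tx_size=12800)  # Windows-only call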
See also:
https://docs.python.org/3/library/ctypes.html
https://msdn.microsoft.com/en-us/library/system.io.ports.serialport.readbuffersize(v=vs.110).aspx
You might be able to set up the serial port from the DLL:
// Setup serial
mySerialPort.BaudRate = 9600;
mySerialPort.PortName = comPort;
mySerialPort.Parity = Parity.None;
mySerialPort.StopBits = StopBits.One;
mySerialPort.DataBits = 8;
mySerialPort.Handshake = Handshake.None;
mySerialPort.RtsEnable = true;
mySerialPort.ReadBufferSize = 32768;
Property Value
Type: System.Int32
The buffer size, in bytes. The default value is 4096; the maximum value is that of a positive int, or 2147483647.
And then open and use it in Python
I am somewhat surprised that nobody has yet mentioned the correct solution to such problems (when it is available): effective flow control, either software (XON/XOFF) or hardware (RTS/CTS), between the microcontroller and its sink. The issue is well described by this web article.
It may be that the source device doesn't honour such protocols, in which case you are stuck with a series of solutions that delegate the problem upwards to where more resources are available (moving it from the UART buffer to the driver and on up towards your application code). If you are losing data, it would certainly seem sensible to try a lower data rate if that's a possibility.
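In pyserial, flow control is just a pair of constructor flags, assuming the device honours it; the port name is a placeholder:

import serial

ser = serial.Serial(
    "/dev/ttyUSB0",
    baudrate=921600,
    xonxoff=True,   # software (XON/XOFF) flow control
    # rtscts=True,  # or hardware (RTS/CTS), if the lines are wired
    timeout=1,
)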
For me the problem was that the buffer was overflowing when receiving data from the Arduino.
All I had to do was call mySerialPort.flushInput() and it worked.
I don't know why mySerialPort.flush() didn't work; perhaps flush() only flushes the outgoing data?
All I know is mySerialPort.flushInput() solved my problems.
So here's the deal: I have a server with a ton of clients. I need a way to send them all a Python script, and once they receive it they must immediately execute it.
Said script will create a file which I then need to download back to the server.
The only thing I have to start with is a file listing the client IP addresses (though without too much effort I could change that to client "names" if that would make the code easier).
As of right now it does not matter whether this is accomplished through POST, FTP or any other file-transfer service you can think of; the only goal is that it is fast.
The script that is being executed on all the clients is a simple key generator, which I can provide if need be.
As said before, the main goal is speed; any help that can be provided would be appreciated.
I'm part of a project trying to "map" the internet. I was picked not because of any networking skills (of which I have none) but because I was the only candidate who knew any Python. Right now I'm just trying to establish a connection with all the (1000+) clients we are using and get the SSH (RSA) keys from them. Later I will be telling the clients to send traceroutes.
EDIT ---- provided additional information to make the question clearer
Have a look at Fabric. It is a tool that fits exactly what you need, though I don't know how fast it is.
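For what it's worth, with Fabric's 2.x API the whole push/run/pull loop might look something like the sketch below; the host-list format, file names and remote command are all assumptions:

import os
from fabric import Connection

os.makedirs("keys", exist_ok=True)

with open("clients.txt") as f:
    hosts = [line.strip() for line in f if line.strip()]

for host in hosts:
    conn = Connection(host)
    conn.put("keygen.py", remote="/tmp/keygen.py")        # push the script
    conn.run("python3 /tmp/keygen.py")                    # execute it
    conn.get("/tmp/key.out", local="keys/%s.key" % host)  # pull the result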
Need some direction on this.
I'm writing a chat-room browser application, however there is a subtle difference.
These are collaboration chats where one person types and the other person can see, live, every keystroke entered by the other person as they type.
Also, the chat space is not a single line but a textarea, like the one used here (SO) to enter a question.
All keystrokes, including tabs/spaces/enter, should be visible live to the other person. And only one person can type at a time (I guess locking should be trivial).
I haven't written a multi-chatroom application before. A simple client/server where both communicate over a port is something I've written.
So here are the questions
1.) How is a multi-chatroom application written? Is it also port based?
2.) Showing the other person's every keystroke as they type is, I guess, possible through Ajax. Is there any other mechanism available?
Note: I'm going to use a Python framework (web2py), but I don't think the framework matters here.
Any suggestions are welcome, thanks!
The Wikipedia entry for Comet (programming) has a pretty good overview of different approaches you can take on the client (assuming that your client's a web browser), and those approaches suggest the proper design for the server (assuming that the server's a web server).
One thing that's not mentioned on that page, but that you're almost certainly going to want to think about, is buffering input on the client. I don't think it's premature optimization to consider that a multi-user application in which every user's keystroke hits the server is going to scale poorly. I'd consider having user keystrokes go into a client-side buffer, and only sending them to the server when the user hasn't typed anything for 500 milliseconds or so.
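In the browser that buffer would be JavaScript, but the logic is just a debounce; here is a sketch of the idea in Python, with the 500 ms idle threshold assumed from above:

import threading

class KeystrokeBuffer:
    def __init__(self, send, idle_seconds=0.5):
        self._send = send        # callable that ships the buffered text
        self._idle = idle_seconds
        self._buf = []
        self._timer = None

    def on_keystroke(self, char):
        self._buf.append(char)
        # Restart the idle timer on every keystroke; it only fires
        # once the user has paused for the full idle period.
        if self._timer is not None:
            self._timer.cancel()
        self._timer = threading.Timer(self._idle, self._flush)
        self._timer.start()

    def _flush(self):
        self._send("".join(self._buf))
        self._buf.clear()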
You absolutely don't want to use ports for this. That's putting application-layer information in the transport layer, and it pushes application-level concerns (the application's going to create a new chat room) into transport-level concerns (a new port needs to be opened on the firewall).
Besides, a port's just a 16-bit field in the packet header. You can do the same thing in the design of your application's messages: put a room ID and a user ID at the start of each message, and have the server sort it all out.
The thing that strikes me as a pain about this is figuring out what should be sent when a client requests an update. The naive solution is to retain a buffer for each user in a room, and to maintain, as part of each user's state, an index into every other user's buffer; that way, when user A requests an update, the server can send down everything that users B, C, and D have typed since A's last request. This raises all kinds of issues about memory usage and persistence that don't have obvious simple solutions.
The right answers to the problems I've discussed here are going to depend on your requirements. Make sure those requirements are defined in great detail. You don't want to find yourself asking questions like "should I batch together keystrokes?" while you're building this thing.
You could try doing something like IRC, where the current room is sent from the client to the server before the text (/PRIVMSG #room-name Hello World), delimited by a space. For example, you could send ROOMNAME Sample text from the browser to the server.
Using AJAX would be the most reasonable option. I've never used web2py, but I'm guessing you could just use JSON to pass the data between the browser and the server, if you wanted to be fancy.
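For instance, the server-side JSON handling could be as simple as the following; the field names are made up:

import json

def handle_message(raw):
    # e.g. raw = '{"room": "lobby", "user": "alice", "text": "hi"}'
    msg = json.loads(raw)
    return msg["room"], msg["user"], msg["text"]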
I'm writing a custom FTP client to act as a gatekeeper for incoming multimedia content from subcontractors hired by one of our partners. I chose twisted because it allows me to parse the file contents before writing the files to disk locally, and I've been looking for an occasion to explore twisted anyway. I'm using twisted.protocols.ftp.FTPClient.retrieveFile to get the file, passing the escaped path to the file and a protocol to the retrieveFile method. I want to be absolutely sure that the entire file has been retrieved, because the event handler in the callback is going to write the file to disk locally and then delete the remote file from the FTP server, à la the -E switch in the lftp client. My question is: do I really need to worry about this, or can I assume that an errback will happen if the file is not fully retrieved?
There are a couple unit tests for behavior in this area.
twisted.test.test_ftp.FTPClientTestCase.test_failedRETR is the most directly relevant one. It covers the case where the control and data connections are lost while a file transfer is in progress.
It seems to me that test coverage in this area could be significantly improved. There are no tests covering the case where just the data connection is lost while a transfer is in progress, for example. One thing that makes this tricky, though, is that FTP is not a very robust protocol: the end of a file transfer is signaled by the data connection closing. To be safe, you have to check whether you received as many bytes as you expected. The only way to perform this check is to know the file size in advance or to ask the server for it using LIST (FTPClient.list).
Given all this, I'd suggest that when a file transfer completes, you always ask the server how many bytes you should have gotten and make sure it agrees with the number of bytes delivered to your protocol. You may sometimes get an errback on the Deferred returned from retrieveFile, but this will keep you safe even in the cases where you don't.
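A hedged sketch of that size check using twisted.protocols.ftp.FTPClient; the bytes_received attribute on the consumer protocol is an assumption (your own protocol would count the bytes handed to its dataReceived()):

from twisted.internet import defer
from twisted.protocols.ftp import FTPFileListProtocol

@defer.inlineCallbacks
def retrieve_and_verify(ftp_client, path, file_protocol):
    # Ask the server for a listing first, so we know the expected size.
    listing = FTPFileListProtocol()
    yield ftp_client.list(path, listing)
    expected = listing.files[0]["size"]

    # Fetch the file, then compare the byte counts.
    yield ftp_client.retrieveFile(path, file_protocol)
    if file_protocol.bytes_received != expected:
        raise IOError("short transfer: got %d of %d bytes"
                      % (file_protocol.bytes_received, expected))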