I made a program that saves sensor data to a log file (server side).
This file is stored in a temporary directory (RAM disk).
Each line contains a timestamp and a JSON string.
The update rate depends on the sensor data, but the fastest is every 0.5 s.
What I want to do is stream every update to this file to a client application.
I have several approaches in mind:
maybe a shared folder on the server side (Samba), with a script on the client side that just checks the file every 0.5 s
maybe another server program running on the server that checks for updates (but I'd rather not do this, because the Raspberry Pi is slow)
Has anyone done something like this before and can share some ideas? Is there already a Python module for this (one that opens a file like a stream and emits whatever gets appended)? Is it smart to check a file constantly for updates?
To stream the log file to an application you can use
tail -n 1000000 -f /path/to/logfile | application
(This will continuously check the file for new lines, stream them to the application, and then wait again until new lines appear.)
But this will of course put some load on your server, since the check for new lines is ultimately executed on the Raspberry Pi. A small program running on the Pi itself (written in C, with a decent sleep) might in fact put less load on it than polling for new lines over the network.
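If you'd rather do the same thing in Python on the client side (for example over the Samba share), a minimal polling sketch could look like this; the log path is a placeholder:

import time

def follow(path, poll_interval=0.5):
    # Yield lines appended to the file, polling every poll_interval seconds.
    with open(path) as f:
        f.seek(0, 2)  # jump to the end of the file, like tail -f
        while True:
            line = f.readline()
            if line:
                yield line
            else:
                time.sleep(poll_interval)

# Example usage (the path is an assumption):
# for entry in follow('/mnt/ramdisk/sensor.log'):
#     print(entry, end='')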
I'm doing something like that.
I have a server running on my Raspberry Pi plus a client that parses the server's output and sends it to another server on the web.
What I'm doing is that the local server program writes its data in chunks.
Every time it writes data (also to tmpfs, by the way) it writes to a different file, so I don't get errors from trying to parse a file while something else is still writing to it.
After it writes the file, it starts the client program to parse and send the data (using subprocess with the file name as a parameter).
Works great for me.
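Roughly, the pattern looks like this (the directory, file naming and client invocation here are assumptions, not the actual code):

import json
import subprocess
import time

CHUNK_DIR = '/tmp/sensor_chunks'  # assumed tmpfs location

def write_chunk_and_send(samples):
    # Each chunk goes to its own file, so the client never reads a half-written file.
    filename = '%s/chunk_%d.json' % (CHUNK_DIR, int(time.time() * 1000))
    with open(filename, 'w') as f:
        json.dump(samples, f)
    # Hand the finished file to the client program for parsing and upload.
    subprocess.Popen(['python3', 'client.py', filename])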
Python newbie here.
We have a script that downloads data from an API (in the form of JSON), converts it to CSV, and uploads it via FTP using ftplib.FTP(); script below:
import ftplib

session = ftplib.FTP('ftpserver', 'username', 'password')
file = open('filename.csv', 'rb')
session.storbinary('STOR filename.csv', file)  # upload the CSV
file.close()
session.quit()
This whole process is set to occur every 11 minutes using:
schedule.every(11).minutes.do(download_json_from_api)
while True:
    schedule.run_pending()
    time.sleep(1)
The script runs with no problem until, every now and then (with no pattern or set time), we receive an EOFError that looks like this:
A quick Google suggested this may have something to do with the FTP server connection dropping out; would that be correct? What can I add to my existing script to prevent this from happening? For example, can I write something that immediately opens a new connection every time the connection gets terminated? Or something that tells Python to close the current console and run the script again in a new console (because that seems to fix the issue)?
Guidance is much appreciated!
Thank you.
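One approach that usually helps is to open a fresh connection for each upload and retry when it drops. A minimal sketch, assuming the upload is wrapped in a helper function and reusing the placeholder server and credentials from the question:

import ftplib
import time

def upload_with_retry(local_path, remote_name, attempts=3):
    # ftplib.all_errors already includes EOFError, so a dropped
    # connection simply triggers another attempt after a short pause.
    for attempt in range(attempts):
        try:
            with ftplib.FTP('ftpserver', 'username', 'password') as session:
                with open(local_path, 'rb') as f:
                    session.storbinary('STOR ' + remote_name, f)
            return
        except ftplib.all_errors:
            if attempt == attempts - 1:
                raise
            time.sleep(5)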
I am developing a simple web server that runs on a Raspberry Pi Zero and lights up an LED when a request is received on the POST route (with a given color, intensity, blink timing and other information contained in the request data) and shuts it down when a request is received on the DELETE route.
I wanted to have a sort of backup of the requests I make to the server, so that they can be "redone" (in whatever order) when the server restarts and the LEDs turn back on without my having to redo all of them by hand.
Right now (since it was the easiest and fastest way to do it as a proof of concept), every time I make a POST request I save the color in a dict, using the LED's serial as the key, and then write the dict to a JSON file.
When I receive a DELETE request I read the file, delete the entry, and write it again with whatever other entries it may contain (if more than one LED is connected). If the server loses power or gets shut down and restarts, it reads the file and restores the LED statuses.
I was wondering what the best way would be to build a system like this (a file, a DB, or some other solution) that uses the lowest amount of RAM possible, since I already have other services running on the Pi that use quite a bit of it.
Depending on how many LEDs there are, it sounds like what you are doing will produce a JSON file of only a few bytes, right? There are ways you could compress that, but unless you have a huge number of LEDs I doubt it will be a significant saving compared to everything else.
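For reference, the approach from the question is already about as small as it gets; a minimal sketch (the file path and function names are assumptions) that also replaces the file atomically, so a power loss mid-write can't corrupt it:

import json
import os

STATE_FILE = '/home/pi/led_state.json'  # assumed location

def save_state(state):
    # Write to a temp file first, then atomically replace the real one.
    tmp = STATE_FILE + '.tmp'
    with open(tmp, 'w') as f:
        json.dump(state, f)
    os.replace(tmp, STATE_FILE)

def load_state():
    # Return the saved dict, or an empty one if there is no (valid) file yet.
    try:
        with open(STATE_FILE) as f:
            return json.load(f)
    except (FileNotFoundError, ValueError):
        return {}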
I'm currently working on a gateway with embedded Linux and a web server. The goal of the gateway is to retrieve data from electrical devices over an RS485/Modbus line and display it on a server.
I'm using Nginx and Django, and the web front-end is delivered as "static" files. A JavaScript file repeatedly makes AJAX calls that send CGI requests to Nginx. These CGI requests are answered with JSON responses thanks to Django. The responses are mostly data that has been read from the appropriate Modbus device.
The exact path is the following:
Randomly timed CGI call -> urls.py -> ModbusCGI.py (which imports another script, ModbusComm.py) -> ModbusComm.py creates a Modbus client and immediately tries to read with it.
Alongside that, I wanted to implement a data logger to store data in a DB at regular intervals. I made a script that also imports ModbusComm.py, but it doesn't work: sometimes multiple Modbus frames are sent at the same time (the data logger and the CGI scripts call the same function in ModbusComm.py at the same time), which results in an error.
I'm sure this problem would also occur if there were a lot of users on the server (CGI requests sent at the same time). Or would it? (Is there already a queue for CGI requests? I'm a bit lost.)
So my goal would be a queue system that can handle calls from several Python scripts: make them wait while it's not their turn, call the function with the right arguments when it is (actually using the Modbus line), and send the response back to the calling script so it can generate the JSON response.
I really don't know how to achieve that, and I'm sure there are better ways to do this.
If I'm not clear enough, don't hesitate to let me know :)
I had the same problem when I had to allow multiple processes to read some Modbus (and not only Modbus) data through a serial port. I ended up with a standalone process (a "serial port server") that works with the serial port exclusively. All other processes talk to that port through this standalone process via some inter-process communication mechanism (we used Unix sockets).
This way, when an application wants to read a Modbus register it connects to the "serial port server", sends its request, and receives the response. All the actual serial port communication is done by the "serial port server" sequentially, to ensure consistency.
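A very rough outline of such a "serial port server" (the socket path, wire format, and read_register callback are all assumptions; a real implementation also needs request framing and error handling):

import os
import socket
import threading

SOCKET_PATH = '/tmp/modbus_server.sock'  # assumed location
modbus_lock = threading.Lock()           # only one Modbus transaction at a time

def handle_client(conn, read_register):
    # Receive one request, do the Modbus read under the lock, send the reply.
    with conn:
        request = conn.recv(1024)
        with modbus_lock:
            response = read_register(request)  # placeholder for the real Modbus call
        conn.sendall(response)

def serve(read_register):
    if os.path.exists(SOCKET_PATH):
        os.remove(SOCKET_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    server.listen(5)
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle_client, args=(conn, read_register),
                         daemon=True).start()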
I'm developing a script with pySerial to send data to a microcontroller; the microcontroller then reads the data, processes it, and sends some debug information back to my Python script.
My Python script worked without any issue when I was just reading data from the microcontroller. Now that I need to send data to the microcontroller and start reading right after that, the data I read is incomplete.
I should receive something like [TAG1],10,11,12,[TAG1],13,14,15\n but sometimes I don't receive the beginning of the data, only the end, like 1,12,[TAG1],13,14,15\n
I'm basically doing:
serial.write(dataOut)
dataIn = serial.read(sizeOfTheDataToReceive)
The issue does not come from the microcontroller, I'm sure of that: if I use PuTTY to send/receive my data, I always see the full data.
I tried adding a delay in my microcontroller code so it sends the data 10 s after receiving the data from Python, but it still doesn't work every time.
Do you have any idea what can cause this? The COM port is opened when the Python script starts and closed at the end of the script.
You need to clear your read and write buffers:
serial.flushInput()
serial.flushOutput()
Then read the data byte-wise:
serial.write(dataOut)
time.sleep(0.3)            # give the microcontroller time to answer

s = b''
while serial.inWaiting() > 0:
    b = serial.read(1)     # read one byte at a time
    # time.sleep(0.1)
    s += b
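If you are on pySerial 3.x, another option (a sketch only; the port name, baud rate and terminator are assumptions) is to open the port with a read timeout and read up to the line terminator instead of polling inWaiting():

import serial

ser = serial.Serial('/dev/ttyUSB0', 115200, timeout=2)  # assumed port and baud rate
ser.reset_input_buffer()        # drop any stale bytes before writing
ser.write(dataOut)              # dataOut as in the question
dataIn = ser.read_until(b'\n')  # blocks until the newline arrives or the timeout expires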
Is there any way to access a file being uploaded over HTTP using a CGI script before the upload finishes? For example, say a 10-megabyte file is being uploaded and is exactly 10% done, meaning the server has 1 megabyte of data. Is it possible to read that 1 megabyte of data without waiting for the upload to finish?
My understanding of HTTP uploads is that the server won't call the CGI script handling the upload until all of the data has been received, but I'm hoping there's some way around that. I'm using Python to handle CGI requests, if that makes any difference.
Thanks in advance for any help.
CGI is the specification of communication between the web server and the external application. It does not allow for this.
In fact, most web servers won't do anything with an upload until it finishes. There's no reason you couldn't write or modify one (or MAYBE find one, though I don't know which it would be) to allow access, but you're still not going to do it via CGI.
http://www.ietf.org/rfc/rfc3875