A bit of background on my project: I am building a robot that can be controlled over the internet with a live video feed. It has a Raspberry Pi as an on-board computer that handles the internet connection, video streaming, motor control, and so on. I've successfully controlled it through a socket connection in Python, with a public VPS relaying commands (for NAT traversal).
Now I want to implement some sort of "idle state" like you have on a mobile phone: the robot would stay idle and consume almost no power, and when I initiate a call from my computer it would wake up, stream the video, and so on; when I hang up, it would go back to sleep. Essentially, I want my own version of Apple's FaceTime, tweaked to my liking. The emphasis is on consuming as little energy as possible, since the robot is battery powered.
I can find almost no information on the internet about such an implementation (or maybe I just don't know the right keywords). Can anyone point me in the right direction?
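One keyword worth searching for is "wake-on-demand" (or "wake-on-LAN" for the wired case). Since you already have a socket connection to the VPS, the simplest first step is to replace any polling with a blocking read: a blocked `recv()` sleeps in the kernel and uses essentially no CPU until the relay sends data. This is only a sketch of that idea — the `WAKE` token and framing are made up, not part of any real protocol — and note that real battery savings need more than an idle CPU (e.g. powering down the camera and motor drivers, or gating the Pi's supply with an external microcontroller, since the Pi itself has no true suspend mode):

```python
import socket

def wait_for_wake(sock, token=b"WAKE"):
    """Block until the relay server sends the wake token.

    A blocking recv() parks the process in the kernel, so the CPU
    stays idle -- no polling loop needed. Token and framing here are
    illustrative assumptions, not an established protocol.
    """
    while True:
        data = sock.recv(64)
        if not data:
            raise ConnectionError("relay closed the connection")
        if token in data:
            return  # wake up: start video streaming, motors, etc.
```

On wake you would start the streaming processes, and on hang-up kill them and fall back into `wait_for_wake()` again.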
Good morning :)
I have some questions regarding an Icecast setup.
We have our own Nextcloud server sitting in our church. It works perfectly fine, and since public church services here in Germany were already banned once during all this Covid-19 business, we want a streaming setup. I managed to set up an Icecast instance on our server, and we use my old laptop with Rocket Broadcaster and a Steinberg CI2 as the source for the stream. That all works as intended. We have already used it once, because we stopped public services for two weeks after one of our members tested positive following a week abroad.
Our operator at the PA desk doesn't want yet another display there, which would distract him from listening to the sermon.
My project: I have an RPi 4 and a Behringer U-Phoria UMC202HD. The input at the moment is a standard mic connected to the interface.
The Pi is configured with DarkIce and uses its own Icecast installation while I test everything at home. Since I started the streaming project, we have switched Rocket Broadcaster after the service to use VLC and a folder of old service recordings (mastered, in MP3 format) as a source people can listen to at work or while travelling. This option gets used pretty regularly and I want to keep it going.
My plan is a little box with an LED level meter showing the input level; that should be doable with a small Python script. I also want to add two buttons that load presets for the two setups. Button 1: kill the current source and start the DarkIce livestream. Button 2: kill the current source and start playback of the old recordings. Both options should give the operator visual feedback. The Raspberry has to run headless, without needing a VNC or SSH connection for normal usage.
My problem: I tried:
sudo killall darkice && /home/pi/darkice.sh
This command will change, because I will probably have to use ices for the MP3 playback. So basically it kills DarkIce and starts the ices playback (for now it only restarts the stream in a blink), and vice versa.
The bash file exists and is executed at reboot via a cron job; that works well. When I run the killall command above, Icecast resumes the broadcast almost instantly, but the stream stops on the clients: everyone has to restart it.
Is there a way to change the setup so that I can switch between the two sources without everyone having to reconnect?
My plan was to put all of this into a bash script and trigger it via GPIO input and Python code.
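The button-to-script glue described above can be sketched in a few lines of Python — script paths and process names here are taken from the post or assumed, and the gpiozero wiring is shown only as a comment since it needs a real Pi:

```python
import subprocess

def switch_source(kill_targets, start_cmd):
    """Stop the current stream source(s) and launch a replacement.

    kill_targets: process names to kill, e.g. ["darkice"] or ["ices"]
    start_cmd:    argv list for the new source, e.g. ["/home/pi/darkice.sh"]
                  (paths are assumptions from the post).
    """
    for name in kill_targets:
        # killall exits non-zero when nothing matched; ignore that case
        subprocess.run(["killall", name], check=False)
    # Popen, not run(), so the button callback returns immediately
    return subprocess.Popen(start_cmd)

# Wiring sketch (needs gpiozero on the Pi; pins are examples):
# from gpiozero import Button
# Button(17).when_pressed = lambda: switch_source(["ices"], ["/home/pi/darkice.sh"])
# Button(27).when_pressed = lambda: switch_source(["darkice"], ["/home/pi/ices.sh"])
```

As for listeners being disconnected: that part is usually solved on the Icecast side rather than in the script — look at Icecast's `<fallback-mount>` together with `<fallback-override>`, which move connected clients to a backup mount while a source is down and back again when it returns, so nobody has to press play twice.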
Thanks in advance!
Is there a way to hack an Xbox controller to send custom signals, or to clone its protocol with a USB dongle?
Basically, a way to use my PC as a controller for an Xbox console.
I am developing an AI that plays FIFA, with all the processing done on the PC. I can't figure out how to send the signal corresponding to the action the AI has decided on to the Xbox console.
Thank you in advance
No; the Xbox controller's wireless protocol is proprietary and, as far as I know, nobody has reverse-engineered it yet.
To use a real Xbox wireless controller on a PC (the opposite of what you want), you need an adapter, which is sold separately, and most of the published USB Xbox code is dedicated to making those adapters work on Linux. The actual wireless protocol between the adapter and the controller is a mystery. It is not normal Wi-Fi, so you can't use an off-the-shelf Wi-Fi adapter to mimic it.
Another option is to crack open an Xbox controller and wire its components (via an Arduino, maybe?) directly to the PC.
There's a workaround for your problem: the Xbox app for Windows lets you play games by streaming over a local network. The quality is imperfect, but if you have access to the console's actual video output you can set the stream quality to the lowest setting to minimize latency.
Streaming via the Xbox app will let you emulate a controller using any of a number of tools that convert various signals to X-input. I have personally tested this with a DualShock 4 controller and InputMapper; for your purposes, you may want to try EventGhost.
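If you go the streaming route and want to drive the virtual controller directly from your Python AI rather than through a mapping tool, one option is the third-party `vgamepad` package (Windows-only; it needs the ViGEm bus driver installed). This is a rough sketch, with the import deferred so the file still loads on machines without the driver:

```python
def press_a_button():
    """Tap the A button on a virtual Xbox 360 pad for ~50 ms.

    vgamepad only works on Windows with the ViGEm bus driver
    installed; everything here is a sketch of that assumption.
    """
    import time
    import vgamepad as vg

    pad = vg.VX360Gamepad()
    pad.press_button(button=vg.XUSB_BUTTON.XUSB_GAMEPAD_A)
    pad.update()    # push the updated report to the virtual device
    time.sleep(0.05)
    pad.release_button(button=vg.XUSB_BUTTON.XUSB_GAMEPAD_A)
    pad.update()
```

Your AI could call helpers like this for each in-game action instead of translating through DualShock/EventGhost mappings.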
People shouldn't automatically throw shade because you're designing a bot. There are plenty of respectable reasons for doing so, particularly academic ones. I hope you're not just trying to cheat, but if you are, it's an impressive way of going about it, at least.
I am working on a student project involving a drone which runs on the Pixhawk platform but has a 'companion computer' in the form of a Raspberry Pi. The Pi runs its own Python software and uses DroneKit (and therefore MAVLink?) to communicate with the Pixhawk via USB - giving it commands, transferring data and so on. Additionally, we have a 'ground station' laptop running ArduPilot Mission Planner which can view and interact with the aircraft remotely and also view its telemetry. I noticed a 'Messages' tab which essentially acts like a remote console, showing 'logged' messages from the Pixhawk - this is what the question is referring to.
For debugging and information purposes, I would like to be able to add my own entries there from Python on the Pi. I assumed this would be easy through DroneKit, but it does not seem trivial; send_mavlink and message_factory looked hopeful, but I have found nobody else trying to do this specifically.
How can I easily redirect my 'console messages' from Python to the ground station? I realise there are alternative methods but going through the Pixhawk's existing telemetry system seems a much better option.
Thanks
One thing you can do is create a bridge (proxy) between the Pixhawk and the GCS with your RPi, similar to this question.
Then, in the middle of that, you can inject your own text messages with:
gcs_conn.mav.statustext_send(mavutil.mavlink.MAV_SEVERITY_INFO, "your message here")
Be careful not to block the telemetry stream for too long, otherwise the GCS's connection to the drone may become intermittent.
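Wrapped up with pymavlink, the call above looks roughly like this — the connection string is a placeholder for wherever your GCS listens, and the import is deferred so the file loads without pymavlink installed:

```python
def send_gcs_message(text, conn_str="udpout:192.168.1.50:14550"):
    """Send a STATUSTEXT that Mission Planner shows in its Messages tab.

    conn_str is an assumed placeholder -- point it at your GCS.
    """
    from pymavlink import mavutil

    gcs = mavutil.mavlink_connection(conn_str)
    # STATUSTEXT's text field is limited to 50 bytes, so truncate
    gcs.mav.statustext_send(mavutil.mavlink.MAV_SEVERITY_INFO,
                            text.encode()[:50])
```

Severity levels other than `MAV_SEVERITY_INFO` (e.g. `MAV_SEVERITY_WARNING`) change how the GCS highlights the message.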
Let's say I have a Raspberry Pi and I want to write a Python script that turns on a light as soon as an I2C signal reaches the Pi and some pin goes high. I do not want to use polling for this, as it could slow down the process
(I know it only slows things a bit, but it's bad practice, puts load on the CPU, and so on, so I don't want to sit in a loop permanently asking for the input state).
Is there any kind of server or callback function I could use to do this from a Python script? Which library would give me that behaviour?
My first ideas are environment variables or the I2C interface in the Linux system, which I could somehow listen on constantly and hook to make it do what I want.
As I see it there's no need to use Python, but I don't see the whole picture, so I don't know if this will help you; this is just regarding this part of the question:
Is there any kind of server or callback function...
rpio.poll(pin, callback[, direction]);
Watch pin for changes and execute the callback callback() on events.
callback() takes a single argument, the pin which triggered the callback.
The optional direction argument can be used to watch for specific
events:
rpio.POLL_LOW: poll for falling edge transitions to low.
rpio.POLL_HIGH: poll for rising edge transitions to high.
rpio.POLL_BOTH: poll for both transitions (the default).
Complete documentation - this is the npm module documentation
My example of configuring node.js server
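Since the question asked for Python specifically: the same callback pattern exists in the RPi.GPIO library via edge detection. A minimal sketch, assuming RPi.GPIO is installed on the Pi (the import is deferred so this loads elsewhere too, and BCM pin 17 is just an example):

```python
def watch_pin(pin=17):
    """Run a callback on a rising edge instead of polling.

    add_event_detect registers an edge interrupt, so the process
    sleeps between events instead of spinning on the CPU.
    """
    import RPi.GPIO as GPIO

    def on_edge(channel):
        print(f"pin {channel} went high -- turn the light on here")

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
    # bouncetime (ms) debounces mechanical switches
    GPIO.add_event_detect(pin, GPIO.RISING, callback=on_edge, bouncetime=50)
```

One caveat: this watches a GPIO pin, not the I2C bus itself. It works for the described setup only if the I2C device exposes a separate interrupt line (many sensors do) that you wire to a GPIO pin.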
Under Windows XP I have seen commercial software that protects the computer with a USB device: the screensaver activates after a period of inactivity as usual, but deactivating it requires not only a passphrase but also a specific USB device plugged in. The device contains certificates and must be verified before the screensaver unlocks.
I am looking for a way to implement such a feature in Python. I searched the Ubuntu Software Center and found BlueProximity, which comes closest to my purpose but is still different: it monitors a certain Bluetooth device and uses its presence to periodically simulate user activity, preventing the screensaver from activating.
Surely I could write a similar program that periodically checks for a certain USB disk's presence, validates the certificate it contains, and, if everything is OK, pokes the screensaver as if there were user activity, otherwise locks the screen.
However, this is not immediate. Suppose someone has stolen my passphrase but not the USB disk: he can still unlock the screen, and it stays unlocked until my program reacts. Even with a short checking period like 0.1 s, there is always a window - the polling interval plus the screensaver's slow fade-in (usually nearly 1 s) - between one lock-up and the next.
So is there a better solution, such as an API through which my program can tell the screensaver to refuse to unlock at all?
You might want to take a look at PAM (Pluggable Authentication Modules). The solution will be more generic and robust, and it can be applied to any program that relies on PAM for authentication.
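One concrete PAM-based route is the third-party pam_usb module, which authenticates against a registered USB flash drive (serial number plus rotating one-time pads). A sketch of the setup — package and file names are from Debian/Ubuntu, so verify them against your distro — where the unlock dialog then requires both the key and the password, closing the window described above:

```
# install and register the USB key and the user
sudo apt install libpam-usb pamusb-tools
sudo pamusb-conf --add-device my-key
sudo pamusb-conf --add-user myuser

# /etc/pam.d/common-auth -- "required" on both lines means the
# screensaver's PAM-based unlock needs the key AND the password
auth    required    pam_usb.so
auth    required    pam_unix.so
```

Because screen lockers like gnome-screensaver and xscreensaver authenticate through PAM, the refusal happens inside the unlock dialog itself - no polling loop, no race window.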