On the Raspberry Pi, the camera.shutter_speed I set does not match the camera.exposure_speed I query back.
The PiCamera API documentation states:
exposure_speed
Retrieves the current shutter speed of the camera.
When queried, this property returns the shutter speed currently being used
by the camera. If you have set shutter_speed to a non-zero value, then
exposure_speed and shutter_speed should be equal. However, if
shutter_speed is set to 0 (auto), then you can read the actual shutter
speed being used from this attribute. The value is returned as an integer
representing a number of microseconds. This is a read-only property.
Despite the description above, after I set shutter_speed to 10 seconds, exposure_speed returns 0; the two variables are not equal, as can be seen in my code below:
import time
from time import sleep
from fractions import Fraction
from picamera import PiCamera

with PiCamera(resolution=(1024, 768), framerate=Fraction(1, 6), sensor_mode=3) as camera:
    exp_sec = 10
    camera.shutter_speed = exp_sec * 10**6  # microseconds
    sleep(30)
    print('camera_shutter_speed=' + str(camera.shutter_speed))
    print('camera_exposure_speed:' + str(camera.exposure_speed))
    camera.iso = 1600  # 100-1600
    camera.exposure_mode = 'off'  # lock all setting parameters
    fn_png = str(time.strftime("%Y-%m-%d-%H-%M-%S")) + '.png'
    camera.capture(fn_png, format='png')
The output:
>>>
===== RESTART: /home/pi/Documents/test_scripts/cap_one_image.py =====
made new direc
it is time to take a shot
0
camera_shutter_speed=9999959
camera_exposure_speed= 0
The last two are not equal which does not make any sense. Thoughts?
IIRC, the camera.exposure_speed attribute does not update until after you've taken an image at the requested shutter_speed setting.
If you try printing the settings after capture, does that work?
exp_sec = 10
camera.shutter_speed = exp_sec * 10**6  # microseconds
sleep(30)
print('camera_shutter_speed=' + str(camera.shutter_speed))
print('camera_exposure_speed:' + str(camera.exposure_speed))
camera.iso = 1600  # 100-1600
camera.exposure_mode = 'off'  # lock all setting parameters
fn_png = str(time.strftime("%Y-%m-%d-%H-%M-%S")) + '.png'
camera.capture(fn_png, format='png')

# Query the settings again after the capture:
print('camera_shutter_speed=' + str(camera.shutter_speed))
print('camera_exposure_speed:' + str(camera.exposure_speed))
I'm working on a project in MicroPython using an OpenMV camera and blob detection to determine the orientation of an object. My problem is that when the check is executed, I get the error "arilY is not defined", because the object isn't in the camera's view yet (it is moving on a conveyor). How can I implement a path in my code that skips the check and just prints that there is no object, then begins the loop again and checks for the object? I have tried to implement a break with if/else but can't seem to get the code right.
import sensor, image, time, math
from pyb import UART

sensor.reset()                       # Reset and initialize the sensor.
sensor.set_pixformat(sensor.RGB565)  # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)    # Set frame size to QVGA (320x240)
sensor.skip_frames(time=2000)        # Wait for settings to take effect.
#sensor.set_auto_gain(False)         # must be turned off for color tracking
#sensor.set_auto_whitebal(False)     # must be turned off for color tracking

threshold_seed = (7, 24, -8, 4, -3, 9)
threshold_aril = (33, 76, -14, 6, 17, 69)
threshold_raphe = (36, 45, 28, 43, 17, 34)
thresholds = [threshold_seed, threshold_aril, threshold_raphe]

clock = time.clock()  # Create a clock object to track the FPS.
uart = UART(3, 9600)

arilY = None
seedY = None

# These two functions print info to the serial monitor and send it over UART.
def func_pass():
    result = "Pass"
    print(result)
    print("%d\n" % aril.cx(), end='')
    uart.write(result)
    uart.write("%d\n" % aril.cx())

def func_fail():
    result = "Fail"
    print(result)
    print("%d\n" % aril.cx(), end='')
    uart.write(result)
    uart.write("%d\n" % aril.cx())

def func_orientation(seedY, arilY):
    if seedY and arilY:
        check = seedY - arilY
        if check > 0:  # (the condition was lost in my paste; pass when the seed sits below the aril)
            func_pass()
        else:
            func_fail()

while True:  # draw 3 blobs for each fruit
    clock.tick()
    img = sensor.snapshot()
    for seed in img.find_blobs([threshold_seed], pixels_threshold=200, area_threshold=200, merge=True):
        img.draw_rectangle(seed[0:4])
        img.draw_cross(seed.cx(), seed.cy())
        img.draw_string(seed.x() + 2, seed.y() + 2, "seed")
        seedY = seed.cy()
    for aril in img.find_blobs([threshold_aril], pixels_threshold=300, area_threshold=300, merge=True):
        img.draw_rectangle(aril[0:4])
        img.draw_cross(aril.cx(), aril.cy())
        img.draw_string(aril.x() + 2, aril.y() + 2, "aril")
        arilY = aril.cy()
    for raphe in img.find_blobs([threshold_raphe], pixels_threshold=300, area_threshold=300, merge=True):
        img.draw_rectangle(raphe[0:4])
        img.draw_cross(raphe.cx(), raphe.cy())
        img.draw_string(raphe.x() + 2, raphe.y() + 2, "raphe")
        rapheY = raphe.cy()
    func_orientation(seedY, arilY)
Something you could do is preemptively define arilY and seedY as None before the while loop, then enclose the check in an if arilY is not None and seedY is not None: block.
If you want to avoid using None, you could instead have an additional boolean that you set to True when arilY is detected, then enclose the check in a test for that boolean.
But the bigger question here is: why are your assignments inside the inner loops? You redefine seedY and arilY on each iteration, which means each will always equal the .cy() of the last blob in its list, so all assignments prior to the last one were useless.
If you move the assignments outside the loop, there shouldn't be a problem.
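A minimal sketch of the None-guard approach, using the names from the question (the drawing calls are omitted for brevity):

while True:
    img = sensor.snapshot()
    seedY = None  # reset each frame so a stale detection
    arilY = None  # from a previous frame can't trigger the check
    for seed in img.find_blobs([threshold_seed], pixels_threshold=200, area_threshold=200, merge=True):
        seedY = seed.cy()
    for aril in img.find_blobs([threshold_aril], pixels_threshold=300, area_threshold=300, merge=True):
        arilY = aril.cy()
    if seedY is not None and arilY is not None:
        func_orientation(seedY, arilY)  # both blobs seen: run the check
    else:
        print("no object")              # nothing in view yet; loop again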
I am using the split_on_silence function in Python with the parameters shown in the method below.
I am not getting a single chunk in return; the length of chunks is 0.
What am I doing wrong?
from pydub import AudioSegment
from pydub.silence import split_on_silence

song = AudioSegment.from_wav(my_file)

# split track where silence is 0.5 seconds
# or more and get chunks
chunks = split_on_silence(song,
                          # must be silent for at least 0.5 seconds
                          # or 500 ms. adjust this value based on user
                          # requirement. if the speaker stays silent for
                          # longer, increase this value. else, decrease it.
                          min_silence_len=500,
                          # consider it silent if quieter than -16 dBFS
                          # adjust this per requirement
                          silence_thresh=-16
                          )
>>> song.dBFS
-22.4691372540001
The docs say: silence_thresh - (in dBFS) anything quieter than this will be considered silence.
Notice the average loudness of your sound is -22.4 dBFS, which is quieter than the default silence_thresh of -16 dBFS, so the entire track is considered silence and no chunks are returned.
Change the value to silence_thresh = -23 (it already worked from silence_thresh = -21). See the pydub documentation for further details.
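If you'd rather not hard-code the number, here is a minimal sketch that derives the threshold from the track's average loudness (the 6 dB offset is an assumption to tune, not a pydub default):

from pydub import AudioSegment
from pydub.silence import split_on_silence

song = AudioSegment.from_wav(my_file)

# treat anything 6 dB quieter than the track's average loudness as silence
chunks = split_on_silence(song,
                          min_silence_len=500,
                          silence_thresh=song.dBFS - 6)
print(len(chunks))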
I currently have an array of desired position vs. time for an object in my plant. I am using an inverse dynamics controller to drive the object to these desired positions, but I'm experiencing some difficulties. Here is how I am doing it:
I created the controller system
ID_cont = InverseDynamicsController(robot=controller_plant, kp=np.array([0.5]), ki=np.array([0.3]), kd=np.array([0.4]), has_reference_acceleration=False)
ID_controller = builder.AddSystem(ID_cont)
I got the controller input and output ports
control_estimated_state_input_port = ID_controller.get_input_port(0)
control_desired_state_input_port = ID_controller.get_input_port(1)
control_output_port = ID_controller.get_output_port(0)
I added a constant state source (likely wrong to do) and a state interpolator
constant_state_source = ConstantVectorSource(np.array([0.0]))
builder.AddSystem(constant_state_source)
position_to_state = StateInterpolatorWithDiscreteDerivative(controller_plant.num_positions(),
controller_plant.time_step())
builder.AddSystem(position_to_state)
I wired the controller to the plant
builder.Connect(constant_state_source.get_output_port(), position_to_state.get_input_port())
builder.Connect(position_to_state.get_output_port(), control_desired_state_input_port)
builder.Connect(plant.get_state_output_port(model_instance_1), control_estimated_state_input_port)
builder.Connect(control_output_port, plant.get_actuation_input_port(model_instance_1))
Next, I am trying to create a while loop that advances the simulation and changes the 'constant vector source' so I can feed in my position vs. time values. I'm unsure whether this isn't working because the whole approach is wrong, or whether the approach is right and I just have a few details wrong:
diagram_context = diagram.CreateDefaultContext()
sim_time_temp = diagram_context.get_time()
time_step = 0.1
while sim_time_temp < duration:
    ID_controller_context = diagram.GetMutableSubsystemContext(ID_controller, diagram_context)
    simulator.AdvanceTo(sim_time_temp)
    sim_time_temp = sim_time_temp + time_step
I added a constant state source (likely wrong to do) and a state interpolator
As you suspected, this is not the best way to go if you already have a desired sequence of positions and times that you want the system to track. Instead, you should use a TrajectorySource. Since you have a set of positions samples, positions (num_times x num_positions array), that you'd like the system to hit at specified times (num_times x 1 array), PiecewisePolynomial.CubicShapePreserving is a reasonable choice for building the trajectory.
desired_position_trajectory = PiecewisePolynomial.CubicShapePreserving(times, positions)
desired_state_source = TrajectorySource(desired_position_trajectory,
                                        output_derivative_order=1)
builder.AddSystem(desired_state_source)
The output_derivative_order=1 argument makes desired_state_source output a [position, velocity] vector rather than just a position vector. You can connect desired_state_source directly to the controller, without an interpolator.
With this setup, you can advance the simulation all the way to duration without the need for a while loop.
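Putting it together, a minimal sketch of the rewiring, reusing the port variables from your snippet (the TrajectorySource replaces both the constant source and the interpolator):

builder.Connect(desired_state_source.get_output_port(),
                control_desired_state_input_port)

# build the rest of the diagram as before, then simply:
simulator.AdvanceTo(duration)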
I need help getting an actual value from a Gravity sound level meter with a Raspberry Pi.
I have a Python program to read the raw values:
import sys
sys.path.append('../')
import time
from DFRobot_ADS1115 import ADS1115

ADS1115_REG_CONFIG_PGA_6_144V = 0x00  # 6.144V range = Gain 2/3
ADS1115_REG_CONFIG_PGA_4_096V = 0x02  # 4.096V range = Gain 1
ADS1115_REG_CONFIG_PGA_2_048V = 0x04  # 2.048V range = Gain 2 (default)
ADS1115_REG_CONFIG_PGA_1_024V = 0x06  # 1.024V range = Gain 4
ADS1115_REG_CONFIG_PGA_0_512V = 0x08  # 0.512V range = Gain 8
ADS1115_REG_CONFIG_PGA_0_256V = 0x0A  # 0.256V range = Gain 16

ads1115 = ADS1115()

while True:
    # Set the I2C address
    ads1115.setAddr_ADS1115(0x48)
    # Set the gain and input voltage range
    ads1115.setGain(ADS1115_REG_CONFIG_PGA_6_144V)
    # Get the digital value of the analog input on each channel
    adc0 = ads1115.readVoltage(0)
    time.sleep(0.2)
    adc1 = ads1115.readVoltage(1)
    time.sleep(0.2)
    adc2 = ads1115.readVoltage(2)
    time.sleep(0.2)
    adc3 = ads1115.readVoltage(3)
    print("A0:%dmV A1:%dmV A2:%dmV A3:%dmV" % (adc0['r'], adc1['r'], adc2['r'], adc3['r']))
It's displaying values like
A0:0mv A1:1098mV A2:3286mV A3:498mV
But I don't know how to convert these readings into an actual sound level in decibels.
You can find the documentation here:
https://wiki.dfrobot.com/Gravity__Analog_Sound_Level_Meter_SKU_SEN0232
To answer your question, the product wiki says:
For this product, the decibel value is linear with the output voltage. When the output voltage is 0.6V, the decibel value should be 30dBA. When the output voltage is 2.6V, the decibel value should be 130dBA. The calibration is done before leaving the factory, so you don't need to calibrate it. So we can get this relation: Decibel Value(dBA) = Output Voltage(V) × 50
So you need to check which connector (A0, A1, A2 or A3) your sound level meter is wired to. Take that value (which is in mV), convert it to V and multiply by 50.
Or simply divide your millivolt value by 20.
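A minimal sketch of the conversion, assuming the meter is wired to A0 (readVoltage returns a dict whose 'r' field is in mV, as in your script):

adc0 = ads1115.readVoltage(0)
voltage_v = adc0['r'] / 1000.0  # millivolts -> volts
decibels = voltage_v * 50       # Decibel Value(dBA) = Output Voltage(V) x 50
print("Sound level: %.1f dBA" % decibels)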
I am writing a script to convert a picture into MIDI notes based on the RGBA values of the individual pixels. However, I cannot seem to get the last step working, which is to actually output the notes to a file.
I have tried using the MIDIUtil library, but its documentation is not the greatest and I can't seem to figure it out.
If anyone could tell me how to sequence the notes (so that they don't all begin at the beginning) it would be greatly appreciated.
Looking at the sample, something like this should work:
from midiutil.MidiFile import MIDIFile
# create your MIDI object
mf = MIDIFile(1) # only 1 track
track = 0 # the only track
time = 0 # start at the beginning
mf.addTrackName(track, time, "Sample Track")
mf.addTempo(track, time, 120)
# add some notes
channel = 0
volume = 100
pitch = 60 # C4 (middle C)
time = 0 # start on beat 0
duration = 1 # 1 beat long
mf.addNote(track, channel, pitch, time, duration, volume)
pitch = 64 # E4
time = 2 # start on beat 2
duration = 1 # 1 beat long
mf.addNote(track, channel, pitch, time, duration, volume)
pitch = 67 # G4
time = 4 # start on beat 4
duration = 1 # 1 beat long
mf.addNote(track, channel, pitch, time, duration, volume)
# write it to disk
with open("output.mid", 'wb') as outf:
    mf.writeFile(outf)
I know this is an old post, but I'm the author of the library, and I wanted to mention that Python 2 and 3 support have now been unified. With the demise of Google Code, the code is now hosted on GitHub and can be installed via pip:
pip install MIDIUtil
Documentation is available at Read The Docs.
(Tried to comment but I lacked the experience points.)
The end-of-track message is created automatically when the file is written to disk.