The program displays web cam footage on a monitor. I am trying to add a webserver to the program via Flask. When I try to run the Flask app it throws a GPIO "device busy" error. I previously had the webserver running separately from this code.
https://github.com/hall488/lockedin
Terminal Connected to Coral Dev Board
mendel@elusive-mole:/mnt/home/mendel/lockedin$ python3 -m edgetpuvision.detect \
> --source /dev/video1:YUY2:864x480:20/1 \
> --model ${TEST_DATA}/ssd_mobilenet_v2_face_quant_postprocess_edgetpu.tflite
* Serving Flask app 'edgetpuvision.apps' (lazy loading)
* Environment: development
* Debug mode: on
* Running on all addresses (0.0.0.0)
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://127.0.0.1:3000
* Running on http://128.46.192.239:3000 (Press CTRL+C to quit)
* Restarting with stat
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/periphery/gpio.py", line 623, in
_reopen
fcntl.ioctl(self._chip_fd, CdevGPIO._GPIO_GET_LINEHANDLE_IOCTL, request)
OSError: [Errno 16] Device or resource busy
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/mnt/home/mendel/lockedin/edgetpuvision/detect.py", line 50, in <module>
in1 = GPIO("/dev/gpiochip2", 9, "out") #pin 17
File "/usr/local/lib/python3.7/dist-packages/periphery/gpio.py", line 496, in
__init__
self._open(path, line, direction, edge, bias, drive, inverted, label)
File "/usr/local/lib/python3.7/dist-packages/periphery/gpio.py", line 547, in
_open
self._reopen(direction, edge, bias, drive, inverted)
File "/usr/local/lib/python3.7/dist-packages/periphery/gpio.py", line 625, in
_reopen
raise GPIOError(e.errno, "Opening output line handle: " + e.strerror)
periphery.gpio.GPIOError: [Errno 16] Opening output line handle: Device or resource busy
mendel@elusive-mole:/mnt/home/mendel/lockedin$
detect.py file where I am trying to implement the webserver:
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A demo which runs object detection on camera frames.
export TEST_DATA=/usr/lib/python3/dist-packages/edgetpu/test_data
Run face detection model:
python3 -m edgetpuvision.detect \
--model ${TEST_DATA}/mobilenet_ssd_v2_face_quant_postprocess_edgetpu.tflite
Run coco model:
python3 -m edgetpuvision.detect \
--model ${TEST_DATA}/mobilenet_ssd_v2_coco_quant_postprocess_edgetpu.tflite \
--labels ${TEST_DATA}/coco_labels.txt
"""
import argparse
import colorsys
import itertools
import time
import sys
from flask import Flask, render_template, request
app = Flask(__name__)
from pycoral.adapters import detect
from pycoral.utils import edgetpu
from periphery import GPIO
from periphery import PWM
from . import svg
from . import utils
from .apps import run_app
#google "Coral GPIO"
in1 = GPIO("/dev/gpiochip2", 9, "out") #pin 17
in2 = GPIO("/dev/gpiochip4", 10, "out") #pin 18
pwm1 = PWM(0, 0) #pin32
in3 = GPIO("/dev/gpiochip0", 7, "out") #pin 29
in4 = GPIO("/dev/gpiochip0", 8, "out") #pin 31
pwm2 = PWM(1, 0) #pin33
CSS_STYLES = str(svg.CssStyle({'.back': svg.Style(fill='black',
stroke='black',
stroke_width='0.5em'),
'.bbox': svg.Style(fill_opacity=0.0,
stroke_width='0.1em')}))
def size_em(length):
return '%sem' % str(0.6 * (length + 1))
def color(i, total):
return tuple(int(255.0 * c) for c in colorsys.hsv_to_rgb(i / total, 1.0, 1.0))
def make_palette(keys):
return {key : svg.rgb(color(i, len(keys))) for i, key in enumerate(keys)}
def make_get_color(color, labels):
if color:
return lambda obj_id: color
if labels:
palette = make_palette(labels.keys())
return lambda obj_id: palette[obj_id]
return lambda obj_id: 'white'
def overlay(title, objs, get_color, labels, inference_time, inference_rate, layout):
x0, y0, width, height = layout.window
font_size = 0.03 * height
defs = svg.Defs()
defs += CSS_STYLES
doc = svg.Svg(width=width, height=height,
viewBox='%s %s %s %s' % layout.window,
font_size=font_size, font_family='monospace', font_weight=500)
doc += defs
# if len(objs) == 0:
# in1.write(False)
# in2.write(False)
# pwm1.frequency = 1e3
# pwm1.duty_cycle = .75
# pwm1.enable()
# in3.write(False)
# in4.write(False)
# pwm2.frequency = 1e3
# pwm2.duty_cycle = .75
# pwm2.enable()
for obj in objs:
color = get_color(obj.id)
inference_width, inference_height = layout.inference_size
bbox = obj.bbox.scale(1.0 / inference_width, 1.0 / inference_height).scale(*layout.size)
x, y, w, h = bbox.xmin, bbox.ymin, bbox.width, bbox.height
percent = int(100 * obj.score)
if labels:
caption = '%d%% %d %d %s' % (percent, bbox.xmin, bbox.ymin, labels[obj.id])
else:
caption = '%d %d' % (x + w/2, y + h/2)
motor_IO(x, y, w, h)
doc += svg.Rect(x=x, y=y, width=w, height=h,
style='stroke:%s' % color, _class='bbox')
doc += svg.Rect(x=x, y=y+h ,
width=size_em(len(caption)), height='1.2em', fill=color)
t = svg.Text(x=x, y=y+h, fill='black')
t += svg.TSpan(caption, dy='1em')
doc += t
ox = x0 + 20
oy1, oy2 = y0 + 20 + font_size, y0 + height - 20
# Title
if title:
doc += svg.Rect(x=0, y=0, width=size_em(len(title)), height='1em',
transform='translate(%s, %s) scale(1,-1)' % (ox, oy1), _class='back')
doc += svg.Text(title, x=ox, y=oy1, fill='white')
# Info
lines = [
'Objects: %d' % len(objs),
'Inference time: %.2f ms (%.2f fps)' % (inference_time * 1000, 1.0 / inference_time)
]
for i, line in enumerate(reversed(lines)):
y = oy2 - i * 1.7 * font_size
doc += svg.Rect(x=0, y=0, width=size_em(len(line)), height='1em',
transform='translate(%s, %s) scale(1,-1)' % (ox, y), _class='back')
doc += svg.Text(line, x=ox, y=y, fill='white')
return str(doc)
def motor_IO(x, y, w, h):
if x + w/2 > 400 :
in1.write(True)
in2.write(False)
pwm1.frequency = 1e3
pwm1.duty_cycle = .75
pwm1.enable()
else :
in1.write(False)
in2.write(True)
pwm1.frequency = 1e3
pwm1.duty_cycle = .75
pwm1.enable()
if y + h/2 > 400 :
in3.write(True)
in4.write(False)
pwm2.frequency = 1e3
pwm2.duty_cycle = .75
pwm2.enable()
else :
in3.write(False)
in4.write(True)
pwm2.frequency = 1e3
pwm2.duty_cycle = .75
pwm2.enable()
def print_results(inference_rate, objs):
print('\nInference (rate=%.2f fps):' % inference_rate)
for i, obj in enumerate(objs):
print(' %d: %s, area=%.2f' % (i, obj, obj.bbox.area))
def render_gen(args):
fps_counter = utils.avg_fps_counter(30)
interpreters, titles = utils.make_interpreters(args.model)
assert utils.same_input_image_sizes(interpreters)
interpreters = itertools.cycle(interpreters)
interpreter = next(interpreters)
labels = utils.load_labels(args.labels) if args.labels else None
filtered_labels = set(l.strip() for l in args.filter.split(',')) if args.filter else None
get_color = make_get_color(args.color, labels)
draw_overlay = True
width, height = utils.input_image_size(interpreter)
yield width, height
output = None
while True:
tensor, layout, command = (yield output)
inference_rate = next(fps_counter)
if draw_overlay:
start = time.monotonic()
edgetpu.run_inference(interpreter, tensor)
inference_time = time.monotonic() - start
objs = detect.get_objects(interpreter, args.threshold)[:args.top_k]
if labels and filtered_labels:
objs = [obj for obj in objs if labels[obj.id] in filtered_labels]
objs = [obj for obj in objs \
if args.min_area <= obj.bbox.scale(1.0 / width, 1.0 / height).area <= args.max_area]
if args.print:
print_results(inference_rate, objs)
title = titles[interpreter]
output = overlay(title, objs, get_color, labels, inference_time, inference_rate, layout)
else:
output = None
if command == 'o':
draw_overlay = not draw_overlay
elif command == 'n':
interpreter = next(interpreters)
def add_render_gen_args(parser):
parser.add_argument('--model',
help='.tflite model path', required=True)
parser.add_argument('--labels',
help='labels file path')
parser.add_argument('--top_k', type=int, default=50,
help='Max number of objects to detect')
parser.add_argument('--threshold', type=float, default=0.1,
help='Detection threshold')
parser.add_argument('--min_area', type=float, default=0.0,
help='Min bounding box area')
parser.add_argument('--max_area', type=float, default=1.0,
help='Max bounding box area')
parser.add_argument('--filter', default=None,
help='Comma-separated list of allowed labels')
parser.add_argument('--color', default=None,
help='Bounding box display color'),
parser.add_argument('--print', default=False, action='store_true',
help='Print inference results')
##app.route("/")
def main():
app.run(host='0.0.0.0', port=3000, debug=True)
run_app(add_render_gen_args, render_gen)
#templateData = {
#}
# Pass the template data into the template main.html and return it to the user
#return render_template('main.html', **templateData)
##app.route("/<changePin>/<action>")
#def action(changePin, action):
# print("yo")
if __name__ == '__main__':
main()
It looks like Flask's multithreading interferes with Periphery's GPIO interactions. If you create one Python process for managing GPIO via Periphery and a second Python process for the webserver, then you shouldn't hit this error.
Around 2020, Flask started defaulting its development server to multithreaded. Older versions were synchronous and single-threaded, but allowed multiple processes for handling multiple clients. This older configuration might not interfere with Periphery either.
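Below is a minimal sketch of that two-process split, assuming the web process only needs to hand simple commands to the process that owns the GPIO lines; the queue, route, and command names here are illustrative, not part of the linked project.

import multiprocessing as mp
from flask import Flask
from periphery import GPIO

def web_process(command_queue):
    # This process never touches GPIO, so Flask's threading does not matter here.
    app = Flask(__name__)

    @app.route("/motor/<state>")
    def motor(state):
        command_queue.put(state)        # hand the request off to the GPIO process
        return "queued: " + state

    app.run(host="0.0.0.0", port=3000)  # no debug/reloader in this sketch

def gpio_process(command_queue):
    in1 = GPIO("/dev/gpiochip2", 9, "out")   # only this process opens the line
    while True:
        state = command_queue.get()
        in1.write(state == "on")

if __name__ == "__main__":
    q = mp.Queue()
    mp.Process(target=web_process, args=(q,), daemon=True).start()
    gpio_process(q)

If you would rather approximate the older single-threaded behaviour in one process, the development server should also accept app.run(threaded=False), though the two-process split keeps GPIO ownership unambiguous.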
Related
I have been trying to understand the error reported when I run an example supplied with some hardware I purchased.
I have tried googling around, but every answer I get is a bit beyond my comprehension. I think what is going wrong is that the script, or one of the imported scripts, is written for Python 2 and I am trying to run it in Python 3.
When I try to run it in Python 2 I get a whole host of other problems, so I have been trying to make it work with 3.
The hardware I purchased is the Enviro+ sensor suite for the Raspberry Pi sold by Pimoroni
Hardware Link
Github Library
Pimoroni Tutorial
#!/usr/bin/env python
import time
import colorsys
import os
import sys
import ST7735
import ltr559
from bme280 import BME280
from pms5003 import PMS5003
from enviroplus import gas
from subprocess import PIPE, Popen
from PIL import Image
from PIL import ImageDraw
from PIL import ImageFont
print("""all-in-one.py - Displays readings from all of Enviro plus' sensors
Press Ctrl+C to exit!
""")
# BME280 temperature/pressure/humidity sensor
bme280 = BME280()
# PMS5003 particulate sensor
pms5003 = PMS5003()
# Create ST7735 LCD display class
st7735 = ST7735.ST7735(
port=0,
cs=1,
dc=9,
backlight=12,
rotation=270,
spi_speed_hz=10000000
)
# Initialize display
st7735.begin()
WIDTH = st7735.width
HEIGHT = st7735.height
# Set up canvas and font
img = Image.new('RGB', (WIDTH, HEIGHT), color=(0, 0, 0))
draw = ImageDraw.Draw(img)
path = os.path.dirname(os.path.realpath(__file__))
font = ImageFont.truetype(path + "/fonts/Asap/Asap-Bold.ttf", 20)
message = ""
# The position of the top bar
top_pos = 25
# Displays data and text on the 0.96" LCD
def display_text(variable, data, unit):
# Maintain length of list
values[variable] = values[variable][1:] + [data]
# Scale the values for the variable between 0 and 1
colours = [(v - min(values[variable]) + 1) / (max(values[variable])
- min(values[variable]) + 1) for v in values[variable]]
# Format the variable name and value
message = "{}: {:.1f} {}".format(variable[:4], data, unit)
print(message)
draw.rectangle((0, 0, WIDTH, HEIGHT), (255, 255, 255))
for i in range(len(colours)):
# Convert the values to colours from red to blue
colour = (1.0 - colours[i]) * 0.6
r, g, b = [int(x * 255.0) for x in colorsys.hsv_to_rgb(colour,
1.0, 1.0)]
# Draw a 1-pixel wide rectangle of colour
draw.rectangle((i, top_pos, i+1, HEIGHT), (r, g, b))
# Draw a line graph in black
line_y = HEIGHT - (top_pos + (colours[i] * (HEIGHT - top_pos)))\
+ top_pos
draw.rectangle((i, line_y, i+1, line_y+1), (0, 0, 0))
# Write the text at the top in black
draw.text((0, 0), message, font=font, fill=(0, 0, 0))
st7735.display(img)
# Get the temperature of the CPU for compensation
def get_cpu_temperature():
process = Popen(['vcgencmd', 'measure_temp'], stdout=PIPE)
output, _error = process.communicate()
return float(output[output.index('=') + 1:output.rindex("'")])
# Tuning factor for compensation. Decrease this number to adjust the
# temperature down, and increase to adjust up
factor = 0.8
cpu_temps = [0] * 5
delay = 0.5 # Debounce the proximity tap
mode = 0 # The starting mode
last_page = 0
light = 1
# Create a values dict to store the data
variables = ["temperature",
"pressure",
"humidity",
"light",
"oxidised",
"reduced",
"nh3",
"pm1",
"pm25",
"pm10"]
values = {}
for v in variables:
values[v] = [1] * WIDTH
# The main loop
try:
while True:
proximity = ltr559.get_proximity()
# If the proximity crosses the threshold, toggle the mode
if proximity > 1500 and time.time() - last_page > delay:
mode += 1
mode %= len(variables)
last_page = time.time()
# One mode for each variable
if mode == 0:
variable = "temperature"
unit = "C"
cpu_temp = get_cpu_temperature()
# Smooth out with some averaging to decrease jitter
cpu_temps = cpu_temps[1:] + [cpu_temp]
avg_cpu_temp = sum(cpu_temps) / float(len(cpu_temps))
raw_temp = bme280.get_temperature()
data = raw_temp - ((avg_cpu_temp - raw_temp) / factor)
display_text(variable, data, unit)
if mode == 1:
variable = "pressure"
unit = "hPa"
data = bme280.get_pressure()
display_text(variable, data, unit)
if mode == 2:
variable = "humidity"
unit = "%"
data = bme280.get_humidity()
display_text(variable, data, unit)
if mode == 3:
variable = "light"
unit = "Lux"
if proximity < 10:
data = ltr559.get_lux()
else:
data = 1
display_text(variable, data, unit)
if mode == 4:
variable = "oxidised"
unit = "kO"
data = gas.read_all()
data = data.oxidising / 1000
display_text(variable, data, unit)
if mode == 5:
variable = "reduced"
unit = "kO"
data = gas.read_all()
data = data.reducing / 1000
display_text(variable, data, unit)
if mode == 6:
variable = "nh3"
unit = "kO"
data = gas.read_all()
data = data.nh3 / 1000
display_text(variable, data, unit)
if mode == 7:
variable = "pm1"
unit = "ug/m3"
data = pms5003.read()
data = data.pm_ug_per_m3(1.0)
display_text(variable, data, unit)
if mode == 8:
variable = "pm25"
unit = "ug/m3"
data = pms5003.read()
data = data.pm_ug_per_m3(2.5)
display_text(variable, data, unit)
if mode == 9:
variable = "pm10"
unit = "ug/m3"
data = pms5003.read()
data = data.pm_ug_per_m3(10)
display_text(variable, data, unit)
# Exit cleanly
except KeyboardInterrupt:
sys.exit(0)
When I try to run the code I get the following results:
Traceback (most recent call last):
File "all-in-one.py", line 135, in <module>
cpu_temp = get_cpu_temperature()
File "all-in-one.py", line 89, in get_cpu_temperature
return float(output[output.index('=') + 1:output.rindex("'")])
TypeError: argument should be integer or bytes-like object, not 'str'
Please forgive me if I have not filled this help request out correctly - I am very new to forums (I hardly ever post in them, although I read them a lot for help), and I am also very new to Python and Linux.
Any help and support from the community would be massively appreciated - thank you in advance...
SW
According to the Python 3 documentation for subprocess.communicate(), the type of output and _error can be either str (what you want) or bytes. If you were getting str back, you wouldn't have this problem, but the TypeError message you're getting is exactly what you get when you try to call index() on a bytes object with a str argument.
Demonstrably:
>>> output = "temperature = '88 C'".encode('utf-8') #this is of type bytes
>>> output
b"temperature = '88 C'"
>>> output.index('=')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: argument should be integer or bytes-like object, not 'str'
>>> output.index(ord('='))
12
So you should replace output.index('=') with output.index(ord('=')) and output.rindex("'") with output.rindex(ord("'")).
EDIT
I realized this much later, but you can circumvent using ord() by prepending your search string with a b.
output.index(b'=')
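Either way, here is a minimal sketch of the fixed function, assuming vcgencmd prints something like temp=42.8'C; decoding the bytes once is an alternative to switching the search arguments to bytes:

from subprocess import PIPE, Popen

def get_cpu_temperature():
    process = Popen(['vcgencmd', 'measure_temp'], stdout=PIPE)
    output, _error = process.communicate()
    output = output.decode('utf-8')  # bytes -> str, so '=' and "'" work as search arguments again
    return float(output[output.index('=') + 1:output.rindex("'")])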
I know there are already answers, but I am new to Python and don't understand how to fix the problem.
I was creating a QR code scanner and I copied the code from:
https://pastebin.com/C4r2uNCC
And I get these errors.
I have a webcam plugged in; that's not the problem.
[ 2076.921537] usb 1-4: Product: USB2.0 Camera
[ 2077.018998] USB Video Class driver (1.1.1)
My code:
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
# Author : Jimmyromanticdevil
# QRbar-cv
#
# Proc of consept :
# this is just simple code from part of my work for QRcode & Barcode scanner with webcame stuff
# i am using Opencv for realtime track the image & zbar for decode the image
#
# Dependency :
#
#
# Some of Good Refrensi stuff :
# https://github.com/jayrambhia/Install-OpenCV
# http://nwlinux.com/install-qtqr-in-ubuntu-10-04-lucid-using-apt-get/
# http://zbar.sourceforge.net/
import cv2 as cv # Use OpenCV-2.4.3
import zbar
def scanner_procces(frame, set_zbar):
set_width = 100.0 / 100
set_height = 90.0 / 100
coord_x = int(frame.width * (1 - set_width) / 2)
coord_y = int(frame.height * (1 - set_height) / 2)
width = int(frame.width * set_width)
height = int(frame.height * set_height)
get_sub = cv.GetSubRect(frame, (coord_x + 1, coord_y + 1, width - 1, height - 1))
cv.Rectangle(frame, (coord_x, coord_y), (coord_x + width, coord_y + height), (255, 0, 0))
cm_im = cv.CreateImage((get_sub.width, get_sub.height), cv.IPL_DEPTH_8U, 1)
cv.ConvertImage(get_sub, cm_im)
image = zbar.Image(cm_im.width, cm_im.height, 'Y800', cm_im.tostring())
set_zbar.scan(image)
for symbol in image:
print('\033[1;32mResult : %s symbol "%s" \033[1;m' % (symbol.type, symbol.data))
cv.ShowImage("QR Koodi Skanneri", frame)
# cv.ShowImage("webcame2", get_sub)
cv.WaitKey(10)
if __name__ == "__main__":
# set up our stuff
cv.namedWindow("QR Koodi Scanneri", cv.WINDOW_AUTOSIZE)
capture = cv.VideoCapture(-1)
set_zbar = zbar.ImageScanner()
while True:
frame = capture.read()
scanner_procces(frame, set_zbar)
And Logs:
Traceback (most recent call last):
File "kassakone_scanner.py", line 56, in <module>
scanner_procces(frame, set_zbar)
File "kassakone_scanner.py", line 26, in scanner_procces
coord_x = int(frame.width * (1 - set_width) / 2)
AttributeError: 'tuple' object has no attribute 'width'
You are getting a tuple back, which means its values are accessed by index. capture.read() returns a tuple rather than a bare frame, so you have to pick the frame out of it (or unpack the tuple) before you can use attributes of the image.
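As a minimal sketch of what that unpacking looks like with the cv2 capture API (illustrative, not the project's exact code):

import cv2

capture = cv2.VideoCapture(-1)
ret, frame = capture.read()          # read() returns (success_flag, image_array)
if ret:
    height, width = frame.shape[:2]  # the numpy array exposes .shape, not .width/.height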
These days I have tried to generate an application using the pythonapp template from the GitHub project Jumpstarter (https://github.com/aldebaran/robot-jumpstarter) to do localization with Pepper. My basic idea is to combine the LandmarkDetector module with the generated app „Lokalisierung" (German for "localization").
You can read the whole code of "LandmarkDetector.py", "main.py" and "MainLandmarkDetection.py" here:
"LandmarkDetector.py":
#! /usr/bin/env python
# -*- encoding: UTF-8 -*-
"""Example: Demonstrates a way to localize the robot with
ALLandMarkDetection"""
import qi
import time
import sys
import argparse
import math
import almath
class LandmarkDetector(object):
"""
We first instantiate a proxy to the ALLandMarkDetection module
Note that this module should be loaded on the robot's naoqi.
The module output its results in ALMemory in a variable
called "LandmarkDetected".
We then read this ALMemory value and check whether we get
interesting things.
After that we get the related position of the landmark compared to robot.
"""
def __init__(self, app):
"""
Initialisation of qi framework and event detection.
"""
super(LandmarkDetector, self).__init__()
app.start()
session = app.session
# Get the service ALMemory.
self.memory = session.service("ALMemory")
# Connect the event callback.
# Get the services ALMotion & ALRobotPosture.
self.motion_service = session.service("ALMotion")
self.posture_service = session.service("ALRobotPosture")
self.subscriber = self.memory.subscriber("LandmarkDetected")
print "self.subscriber = self.memory.subscriber(LandmarkDetected)"
self.subscriber.signal.connect(self.on_landmark_detected)
print "self.subscriber.signal.connect(self.on_landmark_detected)"
# Get the services ALTextToSpeech, ALLandMarkDetection and ALMotion.
self.tts = session.service("ALTextToSpeech")
self.landmark_detection = session.service("ALLandMarkDetection")
# print "self.landmark_detection" is repr(self.landmark_detection)
self.motion_service = session.service("ALMotion")
self.landmark_detection.subscribe("LandmarkDetector", 500, 0.0 )
print "self.landmark_detection.subscribe(LandmarkDetector, 500, 0.0 )"
self.got_landmark = False
# Set here the size of the landmark in meters.
self.landmarkTheoreticalSize = 0.06 #in meters 0 #.05 or 0.06?
# Set here the current camera ("CameraTop" or "CameraBottom").
self.currentCamera = "CameraTop"
def on_landmark_detected(self, markData):
"""
Callback for event LandmarkDetected.
"""
while markData == [] : # empty value when the landmark disappears
self.got_landmark = False
self.motion_service.moveTo(0, 0, 0.1 * math.pi)
if not self.got_landmark: # only speak the first time a landmark appears
self.got_landmark = True
#stop.motion_service.moveTo
print "Ich sehe eine Landmarke! "
self.tts.say("Ich sehe eine Landmarke! ")
# Retrieve landmark center position in radians.
wzCamera = markData[1][0][0][1]
wyCamera = markData[1][0][0][2]
# Retrieve landmark angular size in radians.
angularSize = markData[1][0][0][3]
# Compute distance to landmark.
distanceFromCameraToLandmark = self.landmarkTheoreticalSize / ( 2 * math.tan( angularSize / 2))
# Get current camera position in NAO space.
transform = self.motion_service.getTransform(self.currentCamera, 2, True)
transformList = almath.vectorFloat(transform)
robotToCamera = almath.Transform(transformList)
# Compute the rotation to point towards the landmark.
cameraToLandmarkRotationTransform = almath.Transform_from3DRotation(0, wyCamera, wzCamera)
# Compute the translation to reach the landmark.
cameraToLandmarkTranslationTransform = almath.Transform(distanceFromCameraToLandmark, 0, 0)
# Combine all transformations to get the landmark position in NAO space.
robotToLandmark = robotToCamera * cameraToLandmarkRotationTransform *cameraToLandmarkTranslationTransform
# robotTurnAroundAngle = almath.rotationFromAngleDirection(0, 1, 1, 1)
# print "robotTurnAroundAngle = ", robotTurnAroundAngle
print "x " + str(robotToLandmark.r1_c4) + " (in meters)"
print "y " + str(robotToLandmark.r2_c4) + " (in meters)"
print "z " + str(robotToLandmark.r3_c4) + " (in meters)"
def run(self):
"""
Loop on, wait for events until manual interruption.
"""
# Wake up robot
self.motion_service.wakeUp()
# Send robot to Pose Init
self.posture_service.goToPosture("StandInit", 0.5)
# Example showing how to get a simplified robot position in world.
useSensorValues = False
result = self.motion_service.getRobotPosition(useSensorValues)
print "Robot Position", result
# Example showing how to use this information to know the robot's diplacement.
useSensorValues = False
# initRobotPosition = almath.Pose2D(self.motion_service.getRobotPosition(useSensorValues))
# Make the robot move
for i in range(1, 12, 1):
self.motion_service.moveTo(0, 0, 0.1 * math.pi)
print "self.motion_service.moveTo(0, 0, (0.1)*math.pi)"
print "Starting LandmarkDetector"
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
print "Interrupted by user, stopping LandmarkDetector"
self.landmark_detection.unsubscribe("LandmarkDetector")
#stop
sys.exit(0)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--ip", type=str, default="10.0.0.10",
help="Robot IP address. On robot or Local Naoqi: use
'10.0.0.10'.")
parser.add_argument("--port", type=int, default=9559,
help="Naoqi port number")
args = parser.parse_args()
try:
# Initialize qi framework.
connection_url = "tcp://" + args.ip + ":" + str(args.port)
app = qi.Application(["LandmarkDetector", "--qi-url=" + connection_url])
except RuntimeError:
print ("Can't connect to Naoqi at ip \"" + args.ip + "\" on port " + str(args.port) +".\n"
"Please check your script arguments. Run with -h option for help.")
sys.exit(1)
landmark_detector = LandmarkDetector(app)
landmark_detector.run()
"main.py":
""" A sample showing how to make a Python script as an app. """
version = "0.0.8"
copyright = "Copyright 2015, Aldebaran Robotics"
author = 'YOURNAME'
email = 'YOUREMAIL@aldebaran.com'
import stk.runner
import stk.events
import stk.services
import stk.logging
class Activity(object):
"A sample standalone app, that demonstrates simple Python usage"
APP_ID = "com.aldebaran.lokalisierung"
def __init__(self, qiapp):
self.qiapp = qiapp
self.events = stk.events.EventHelper(qiapp.session)
self.s = stk.services.ServiceCache(qiapp.session)
self.logger = stk.logging.get_logger(qiapp.session, self.APP_ID)
def on_touched(self, *args):
"Callback for tablet touched."
if args:
self.events.disconnect("ALTabletService.onTouchDown")
self.logger.info("Tablet touched: " + str(args))
self.s.ALTextToSpeech.say("Yay!")
self.stop()
def on_start(self):
"Ask to be touched, waits, and exits."
# Two ways of waiting for events
# 1) block until it's called
self.s.ALTextToSpeech.say("Touch my forehead.")
self.logger.warning("Listening for touch...")
while not self.events.wait_for("FrontTactilTouched"):
pass
# 2) explicitly connect a callback
if self.s.ALTabletService:
self.events.connect("ALTabletService.onTouchDown", self.on_touched)
self.s.ALTextToSpeech.say("okay, now touch my tablet.")
# (this allows us to simultaneously speak and watch an event)
else:
self.s.ALTextToSpeech.say("touch my tablet ... oh. " + \
"I don't haave one.")
self.stop()
def stop(self):
"Standard way of stopping the application."
self.qiapp.stop()
def on_stop(self):
"Cleanup"
self.logger.info("Application finished.")
self.events.clear()
if __name__ == "__main__":
stk.runner.run_activity(Activity)
"MainLandmarkDetection.py":
#! /usr/bin/env python
# -*- encoding: UTF-8 -*-
"""A sample showing how to make a Python script as an app to localize
the robot with ALLandMarkDetection"""
version = "0.0.8"
copyright = "Copyright 2015, Aldebaran Robotics"
author = 'YOURNAME'
email = 'YOUREMAIL@aldebaran.com'
import stk.runner
import stk.events
import stk.services
import stk.logging
import time
import sys
import math
import almath
class Activity(object):
"A sample standalone app, that demonstrates simple Python usage"
APP_ID = "com.aldebaran.lokalisierung"
def __init__(self, qiapp):
self.qiapp = qiapp
self.events = stk.events.EventHelper(qiapp.session)
self.s = stk.services.ServiceCache(qiapp.session)
self.logger = stk.logging.get_logger(qiapp.session, self.APP_ID)
self.qiapp.start()
session = qiapp.session
# Get the service ALMemory.
self.memory = session.service("ALMemory")
# Connect the event callback.
# Get the services ALMotion & ALRobotPosture.
self.motion_service = session.service("ALMotion")
self.posture_service = session.service("ALRobotPosture")
self.subscriber = self.memory.subscriber("LandmarkDetected")
print "self.subscriber = self.memory.subscriber(LandmarkDetected)"
self.subscriber.signal.connect(self.on_landmark_detected)
print "self.subscriber.signal.connect(self.on_landmark_detected)"
# Get the services ALTextToSpeech, ALLandMarkDetection and ALMotion.
self.tts = session.service("ALTextToSpeech")
self.landmark_detection = session.service("ALLandMarkDetection")
# print "self.landmark_detection" is repr(self.landmark_detection)
self.motion_service = session.service("ALMotion")
self.landmark_detection.subscribe("Activity", 500, 0.0)
print "self.landmark_detection.subscribe(Activity, 500, 0.0 )"
self.got_landmark = False
# Set here the size of the landmark in meters.
self.landmarkTheoreticalSize = 0.06 # in meters 0 #.05 or 0.06?
# Set here the current camera ("CameraTop" or "CameraBottom").
self.currentCamera = "CameraTop"
def on_landmark_detected(self, markData):
"""
Callback for event LandmarkDetected.
"""
while markData == []: # empty value when the landmark disappears
self.got_landmark = False
# self.motion_service.moveTo(0, 0, 0.1 * math.pi)
if not self.got_landmark: # only speak the first time a landmark appears
self.got_landmark = True
# stop.motion_service.moveTo
print "Ich sehe eine Landmarke! "
# self.tts.say("Ich sehe eine Landmarke! ")
# Retrieve landmark center position in radians.
wzCamera = markData[1][0][0][1]
wyCamera = markData[1][0][0][2]
# Retrieve landmark angular size in radians.
angularSize = markData[1][0][0][3]
# Compute distance to landmark.
distanceFromCameraToLandmark = self.landmarkTheoreticalSize / (2 * math.tan(angularSize / 2))
# Get current camera position in NAO space.
transform = self.motion_service.getTransform(self.currentCamera, 2, True)
transformList = almath.vectorFloat(transform)
robotToCamera = almath.Transform(transformList)
# Compute the rotation to point towards the landmark.
cameraToLandmarkRotationTransform = almath.Transform_from3DRotation(0, wyCamera, wzCamera)
# Compute the translation to reach the landmark.
cameraToLandmarkTranslationTransform = almath.Transform(distanceFromCameraToLandmark, 0, 0)
# Combine all transformations to get the landmark position in NAO space.
robotToLandmark = robotToCamera * cameraToLandmarkRotationTransform * cameraToLandmarkTranslationTransform
# robotTurnAroundAngle = almath.rotationFromAngleDirection(0, 1, 1, 1)
# print "robotTurnAroundAngle = ", robotTurnAroundAngle
print "x " + str(robotToLandmark.r1_c4) + " (in meters)"
print "y " + str(robotToLandmark.r2_c4) + " (in meters)"
print "z " + str(robotToLandmark.r3_c4) + " (in meters)"
def run(self):
"""
Loop on, wait for events until manual interruption.
"""
# Wake up robot
self.motion_service.wakeUp()
# Send robot to Pose Init
self.posture_service.goToPosture("StandInit", 0.5)
# Example showing how to get a simplified robot position in world.
useSensorValues = False
result = self.motion_service.getRobotPosition(useSensorValues)
print "Robot Position", result
# Example showing how to use this information to know the robot's diplacement.
useSensorValues = False
# initRobotPosition = almath.Pose2D(self.motion_service.getRobotPosition(useSensorValues))
# Make the robot move
for i in range(1, 20, 1):
self.motion_service.moveTo(0, 0, 0.1 * math.pi)
print "self.motion_service.moveTo(0, 0, (0.1)*math.pi)"
print "Starting Activity"
try:
while True:
time.sleep(1)
except KeyboardInterrupt:
print "Interrupted by user, stopping Activity"
self.landmark_detection.unsubscribe("Activity")
# stop
sys.exit(0)
def stop(self):
"Standard way of stopping the application."
self.qiapp.stop()
def on_stop(self):
"Cleanup"
self.logger.info("Application finished.")
self.events.clear()
if __name__ == "__main__":
stk.runner.run_activity(Activity)
landmark_detector = Activity()
landmark_detector.run()
This is how it worked in cmd.exe:
And I have a question about the parameter of "landmark_detector = Activity()" in line 157, almost at the end of the program, because of the error shown in the image: what should I pass there? After reading the answers to the similar Stack Overflow question Python: TypeError: __init__() takes exactly 2 arguments (1 given), I am still confused. I think it should be the value that is given to "qiapp", but what value?
Best regards,
Frederik
Actually, when you call
stk.runner.run_activity(Activity)
... it will instantiate that activity object for you, with the right parameters; you don't need landmark_detector = Activity() etc. in MainLandmarkDetection.py.
And if this object has a method called on_start, that method will be called once everything is ready (so you may only need to rename your run() method to on_start()).
Note also that instead of calling sys.exit(), you can just call self.stop() or self.qiapp.stop(), which is a bit cleaner in terms of letting the cleanup code in on_stop be called if you need to unsubscribe from things etc.
See here for some documentation on stk.runner.
Note also that instead of doing
self.motion_service = session.service("ALMotion")
(...)
transform = self.motion_service.getTransform(self.currentCamera, 2, True)
you can directly do
transform = self.s.ALMotion.getTransform(self.currentCamera, 2, True)
... which (in my opinion) makes it easier to see what is being done exactly (and reduces the number of variables).
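Putting those points together, here is a minimal sketch of the shape MainLandmarkDetection.py could take; the landmark handling itself is elided, and the service calls are just examples of the ServiceCache style shown above:

import stk.runner
import stk.services

class Activity(object):
    APP_ID = "com.aldebaran.lokalisierung"

    def __init__(self, qiapp):
        self.qiapp = qiapp
        self.s = stk.services.ServiceCache(qiapp.session)

    def on_start(self):
        # called automatically by stk.runner once everything is ready;
        # this replaces the old run() method
        self.s.ALMotion.wakeUp()
        transform = self.s.ALMotion.getTransform("CameraTop", 2, True)
        # ... landmark detection logic from the question goes here ...
        self.stop()                      # instead of sys.exit(0)

    def stop(self):
        self.qiapp.stop()

    def on_stop(self):
        # unsubscribe / clean up here
        pass

if __name__ == "__main__":
    stk.runner.run_activity(Activity)    # no manual Activity() instantiation needed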
I like what the turtle module does in Python and I'd like to output the entire animation of it drawing the shape. Is there a way to do this? GIF/MP4/anything that shows the animation. Note, I know that an external screen recorder will do the job, but I'm looking for a way for the turtle module to do this itself.
Make an animated GIF from Python turtle using Preview on OSX
1) Start with a working program
As obvious as that seems, don't be debugging your code while trying to generate the animated GIF. It should be a proper turtle program with no infinite loops that ends with mainloop(), done(), or exitonclick().
The program I'm going to use for this explanation is one I wrote for Programming Puzzles & Code Golf that draws an Icelandic flag using turtle. It's intentionally minimalist, as it was written for PPCG:
from turtle import *
import tkinter as _
_.ROUND = _.BUTT
S = 8
h = 18 * S
color("navy")
width(h)
fd(25 * S)
color("white")
width(4 * S)
home()
pu()
goto(9 * S, -9 * S)
lt(90)
pd()
fd(h)
color("#d72828")
width(S + S)
bk(h)
pu()
home()
pd()
fd(25 * S)
ht()
done()
2) Have your program save snapshots on a timed basis
Repackage your program with draw(), save() and stop() timed events roughly as follows:
from turtle import *
import tkinter as _
_.ROUND=_.BUTT
def draw():
S = 8
h = 18 * S
color("navy")
width(h)
fd(25 * S)
color("white")
width(4 * S)
home()
pu()
goto(9 * S, -9 * S)
lt(90)
pd()
fd(h)
color("#d72828")
width(S + S)
bk(h)
pu()
home()
pd()
fd(25 * S)
ht()
ontimer(stop, 500) # stop the recording (1/2 second trailer)
running = True
FRAMES_PER_SECOND = 10
def stop():
global running
running = False
def save(counter=[1]):
getcanvas().postscript(file = "iceland{0:03d}.eps".format(counter[0]))
counter[0] += 1
if running:
ontimer(save, int(1000 / FRAMES_PER_SECOND))
save() # start the recording
ontimer(draw, 500) # start the program (1/2 second leader)
done()
I'm using 10 frames per second (FPS) as that will match what Preview uses in a later step.
3) Run your program; quit after it completes.
Create a new, empty directory and run it from there. If all goes to plan, it should dump a series of *.eps files into the directory.
4) Load all these *.eps files into Preview
Assuming Preview is my default previewer, in Terminal.app I would simply do:
open iceland*.eps
5) Select-All the PDF (were EPS) files in the Preview sidebar and File/Export... (not Export as PDF) as GIF
Set the export type under the Options button, save them into our temporary directory. You need to hold down the Option key when selecting a format to see the GIF choice. Pick a good screen resolution. We should now have *.gif files in our temporary directory.
Quit Preview.
6) Load all the *.gif files into Preview
open iceland*.gif
7) Merge all but first GIF file into the first GIF file
Select All the GIF files in Preview's sidebar. Unselect (Command-click) the first GIF file, e.g. iceland001.gif. Drag the selected GIF files onto the unselected GIF file. This will modify it and its name. Use File/Export... to export the modified first GIF file to a new GIF file, e.g. iceland.gif
8) This is an animated GIF!
Convince yourself by loading it into Safari, e.g.:
open -a Safari iceland.gif
9) Converting to a repeating animated GIF
For a repeating animated GIF, you'll need some external tool like ImageMagick or Gifsicle to set the loop value:
convert -loop 0 iceland.gif iceland-repeating.gif
And again convince yourself that it works:
open -a Safari iceland-repeating.gif
10) Animated GIF result. Good luck!
Main concept
Here is my solution; the steps are as below (a minimal sketch of the whole pipeline follows this list):
Get the frames (you need to use turtle.ontimer -> turtle.getcanvas().postscript(file=output_file)).
Convert each EPS to PNG (turtle.getcanvas().postscript returns EPS, so you need to use PIL to convert EPS to PNG).
For this you need to download Ghostscript: https://www.ghostscript.com/download/gsdnld.html
Make a GIF from your PNG list (use PIL's Image.save(output_path, format='gif', save_all=True, append_images=..., duration=..., loop=...)).
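A minimal sketch of that pipeline with PIL, assuming a directory of frame_0001.eps, frame_0002.eps, ... already exists and Ghostscript is installed (file names here are illustrative):

from pathlib import Path
import PIL.Image

# EPS -> raster (needs Ghostscript), collected in frame order
frames = [PIL.Image.open(str(p)).convert('RGB')
          for p in sorted(Path('.').glob('frame_*.eps'))]
first, rest = frames[0], frames[1:]
first.save('out.gif', format='gif', save_all=True, append_images=rest,
           duration=100, loop=0)   # 100 ms per frame, loop forever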
Script
Here is my script (maybe I will publish to PyPI if I have time...)
import turtle
import tkinter
from typing import Callable, List
from pathlib import Path
import re
import os
import sys
import functools
import PIL.Image
from PIL.PngImagePlugin import PngImageFile
from PIL.ImageFile import ImageFile
from PIL import EpsImagePlugin
def init(**options):
# download ghostscript: https://www.ghostscript.com/download/gsdnld.html
if options.get('gs_windows_binary'):
EpsImagePlugin.gs_windows_binary = options['gs_windows_binary'] # install ghostscript, otherwise->{OSError} Unable to locate Ghostscript on paths
# https://anzeljg.github.io/rin2/book2/2405/docs/tkinter/cap-join-styles.html
# change the default style of the line that made of two connected line segments
tkinter.ROUND = tkinter.BUTT # default is ROUND # https://anzeljg.github.io/rin2/book2/2405/docs/tkinter/create_line.html
def make_gif(image_list: List[Path], output_path: Path, **options):
"""
:param image_list:
:param output_path:
:param options:
- fps: Frame Per Second. Duration and FPS, choose one to give.
- duration milliseconds (= 1000/FPS ) (default is 0.1 sec)
- loop # int, if 0, then loop forever. Otherwise, it means the loop number.
:return:
"""
if not output_path.parent.exists():
raise FileNotFoundError(output_path.parent)
if not output_path.name.lower().endswith('.gif'):
output_path = output_path / Path('.gif')
image_list: List[ImageFile] = [PIL.Image.open(str(_)) for _ in image_list]
im = image_list.pop(0)
fps = options.get('fps', options.get('FPS', 10))
im.save(output_path, format='gif', save_all=True, append_images=image_list,
duration=options.get('duration', int(1000 / fps)),
loop=options.get('loop', 0))
class GIFCreator:
__slots__ = ['draw',
'__temp_dir', '__duration',
'__name', '__is_running', '__counter', ]
TEMP_DIR = Path('.') / Path('__temp__for_gif')
# The time gap between capturing one image and the next while recording. If the value is low you get more source images, so your GIF has higher quality.
DURATION = 100 # millisecond. # 1000 / FPS
REBUILD = True
def __init__(self, name, temp_dir: Path = None, duration: int = None, **options):
self.__name = name
self.__is_running = False
self.__counter = 1
self.__temp_dir = temp_dir if temp_dir else self.TEMP_DIR
self.__duration = duration if duration else self.DURATION
if not self.__temp_dir.exists():
self.__temp_dir.mkdir(parents=True) # True, it's ok when parents is not exists
@property
def name(self):
return self.__name
@property
def duration(self):
return self.__duration
@property
def temp_dir(self):
if not self.__temp_dir.exists():
raise FileNotFoundError(self.__temp_dir)
return self.__temp_dir
def configure(self, **options):
gif_class_members = (_ for _ in dir(GIFCreator) if not _.startswith('_') and not callable(getattr(GIFCreator, _)))
for name, value in options.items():
name = name.upper()
if name not in gif_class_members:
raise KeyError(f"'{name}' does not belong to {GIFCreator} members.")
correct_type = type(getattr(self, name))
# type check
assert isinstance(value, correct_type), TypeError(f'{name} type need {correct_type.__name__} not {type(value).__name__}')
setattr(self, '_GIFCreator__' + name.lower(), value)
def record(self, draw_func: Callable = None, **options):
"""
:param draw_func:
:param options:
- fps
- start_after: milliseconds. While waiting, blank pictures are continuously generated and used as the leading frames of the GIF.
- end_after:
:return:
"""
if draw_func and callable(draw_func):
setattr(self, 'draw', draw_func)
if not (hasattr(self, 'draw') and callable(getattr(self, 'draw'))):
raise NotImplementedError('subclasses of GIFCreatorMixin must provide a draw() method')
regex = re.compile(fr"""{self.name}_[0-9]{{4}}""")
def wrap():
self.draw()
turtle.ontimer(self._stop, options.get('end_after', 0))
wrap_draw = functools.wraps(self.draw)(wrap)
try:
# https://blog.csdn.net/lingyu_me/article/details/105400510
turtle.reset() # Does a turtle.clear() and then resets this turtle's state (i.e. direction, position etc.)
except turtle.Terminator:
turtle.reset()
if self.REBUILD:
for f in [_ for _ in self.temp_dir.glob(f'*.*') if _.suffix.upper().endswith(('EPS', 'PNG'))]:
[os.remove(f) for ls in regex.findall(str(f)) if ls is not None]
self._start()
self._save() # init start the recording
turtle.ontimer(wrap_draw,
t=options.get('start_after', 0)) # start immediately
turtle.done()
print('convert_eps2image...')
self.convert_eps2image()
print('make_gif...')
self.make_gif(fps=options.get('fps'))
print(f'done:{self.name}')
return
def convert_eps2image(self):
"""
image extension (PGM, PPM, GIF, PNG) is all compatible with tk.PhotoImage
.. important:: you need to use ghostscript, see ``init()``
"""
for eps_file in [_ for _ in self.temp_dir.glob('*.*') if _.name.startswith(self.__name) and _.suffix.upper() == '.EPS']:
output_path = self.temp_dir / Path(eps_file.name + '.png')
if output_path.exists():
continue
im: PIL.Image.Image = PIL.Image.open(str(eps_file))
im.save(output_path, 'png')
def make_gif(self, output_name=None, **options):
"""
:param output_name: basename `xxx.png` or `xxx`
:param options:
- fps: for GIF
:return:
"""
if output_name is None:
output_name = self.__name
if not output_name.lower().endswith('.gif'):
output_name += '.gif'
image_list = [_ for _ in self.temp_dir.glob(f'{self.__name}*.*') if
(_.suffix.upper().endswith(('PGM', 'PPM', 'GIF', 'PNG')) and _.name.startswith(self.__name))
]
if not image_list:
sys.stderr.write(f'There is no image on the directory. {self.temp_dir / Path(self.__name + "*.*")}')
return
output_path = Path('.') / Path(f'{output_name}')
fps = options.get('fps', options.get('FPS'))
if fps is None:
fps = 1000 / self.duration
make_gif(image_list, output_path,
fps=fps, loop=0)
os.startfile('.') # open the output folder
def _start(self):
self.__is_running = True
def _stop(self):
print(f'finished draw:{self.name}')
self.__is_running = False
self.__counter = 1
def _save(self):
if self.__is_running:
# print(self.__counter)
output_file: Path = self.temp_dir / Path(f'{self.__name}_{self.__counter:04d}.eps')
if not output_file.exists():
turtle.getcanvas().postscript(file=output_file) # 0001.eps, 0002.eps ...
self.__counter += 1
turtle.ontimer(self._save, t=self.duration) # trigger only once, so we need to set it again.
USAGE
init(gs_windows_binary=r'C:\Program Files\gs\gs9.52\bin\gswin64c')
def your_draw_function():
turtle.color("red")
turtle.width(20)
turtle.fd(40)
turtle.color("#00ffff")
turtle.bk(40)
...
# method 1: pass the draw function directly.
gif_demo = GIFCreator(name='demo')
# gif_demo.configure(duration=400) # Optional
gif_demo.record(your_draw_function)
# method 2: use class
# If you want to create a class, just define your draw function, and then record it.
class MyGIF(GIFCreator):
DURATION = 200 # optional
def draw(self):
your_draw_function()
MyGIF(name='rectangle demo').record(
# fps=, start_after=, end_after= <-- optional
)
demo
init(gs_windows_binary=r'C:\Program Files\gs\gs9.52\bin\gswin64c')
class TaiwanFlag(GIFCreator):
DURATION = 200
# REBUILD = False
def __init__(self, ratio, **kwargs):
"""
ratio: 0.5 (40*60) 1 (80*120) 2 (160*240) ...
"""
self.ratio = ratio
GIFCreator.__init__(self, **kwargs)
def show_size(self):
print(f'width:{self.ratio * 120}\nheight:{self.ratio * 80}')
@property
def size(self): # w, h
return self.ratio * 120, self.ratio * 80
def draw(self):
# from turtle import *
# turtle.tracer(False)
s = self.ratio # scale
pu()
s_w, s_h = turtle.window_width(), turtle.window_height()
margin_x = (s_w - self.size[0]) / 2
home_xy = -s_w / 2 + margin_x, 0
goto(home_xy)
pd()
color("red")
width(80 * s)
fd(120 * s)
pu()
goto(home_xy)
color('blue')
goto(home_xy[0], 20 * s)
width(40 * s)
pd()
fd(60 * s)
pu()
bk((30 + 15) * s)
pd()
color('white')
width(1)
left(15)
begin_fill()
for i in range(12):
fd(30 * s)
right(150)
end_fill()
rt(15)
pu()
fd(15 * s)
rt(90)
fd(8.5 * s)
pd()
lt(90)
# turtle.tracer(True)
begin_fill()
circle(8.5 * s)
end_fill()
color('blue')
width(2 * s)
circle(8.5 * s)
# turtle.tracer(True)
turtle.hideturtle()
taiwan_flag = TaiwanFlag(2, name='taiwan')
turtle.Screen().setup(taiwan_flag.size[0] + 40, taiwan_flag.size[1] + 40) # margin = 40
# taiwan_flag.draw()
taiwan_flag.record(end_after=2500, fps=10)
I'm writing a roguelike with libtcodpy. It works, but when I run this listing: http://kooneiform.wordpress.com/2009/03/29/241/ (the full listing is at the bottom of that page), and a few others I've tried, I get errors such as this:
FYI I'm on Windows and do have the libtcodpy.py, SDL.dll, libtcod-mingw.dll files and they do work when following the tutorial that is most popular for libtcodpy.
For the listing above I receive this specific error:
$ python roguelike_practice2.py
Traceback (most recent call last):
File "roguelike_practice2.py", line 165, in <module>
draw()
File "roguelike_practice2.py", line 98, in draw
libtcod.console_set_foreground_color(0, libtcod.white)
AttributeError: 'module' object has no attribute 'console_set_foreground_color'
In that same program I also encounter the exact same issue with console_set_background_color and console_print_left. None of them work, all with the same error.
For other listings such as this one:
#!/usr/bin/python
###imports###
import os
import libtcodpy as libtcod
###utility functions###
def get_key(key):
if key.vk == libtcod.KEY_CHAR:
return chr(key.c)
else:
return key.vk
###global constants and variables###
window_width = 46
window_height = 20
first = True
fov_px = 9
fov_py = 10
fov_recompute = True
fov_map = None
fov_colors = {
'dark wall' : libtcod.Color(0, 0, 100),
'light wall' : libtcod.Color(130, 110, 50),
'dark ground' : libtcod.Color(50, 50, 150),
'light ground' : libtcod.Color(200, 180, 50)
}
fov_init = False
fov_radius = 4
do = {
'up' : (0, -1),
'down' : (0, 1),
'right' : (1, 0),
'left' : (-1, 0)
}
keys = {
'i' : do['up'],
'k' : do['down'],
'j' : do['left'],
'l' : do['right'],
libtcod.KEY_UP : do['up'],
libtcod.KEY_KP8 : do['up']
}
smap = ['##############################################',
'####################### #################',
'##################### # ###############',
'###################### ### ###########',
'################## ##### ####',
'################ ######## ###### ####',
'############### #################### ####',
'################ ###### ##',
'######## ####### ###### # # # ##',
'######## ###### ### ##',
'######## ##',
'#### ###### ### # # # ##',
'#### ### ########## #### ##',
'#### ### ########## ###########=##########',
'#### ################## ##### #####',
'#### ### #### ##### #####',
'#### # #### #####',
'######## # #### ##### #####',
'######## ##### ####################',
'##############################################',
]
###drawing###
def draw():
global fov_px, fov_py, fov_map, first
global fov_init, fov_recompute, smap
if first:
wh = window_height
ww = window_width
first = False
libtcod.console_clear(0)
libtcod.console_set_fore(0, ww, wh, libtcod.white)
libtcod.console_print_left(0, 1, 1, libtcod.BKGND_NONE,
"IJKL : move around")
libtcod.console_set_fore(0, ww, wh, libtcod.black)
libtcod.console_put_char(0, fov_px, fov_py, '#',
libtcod.BKGND_NONE)
for y in range(window_height):
for x in range(window_width):
if smap[y][x] == '=':
libtcod.console_put_char(0, x, y,
libtcod.CHAR_DHLINE,
libtcod.BKGND_NONE)
if not fov_init:
fov_init = True
fov_map = libtcod.map_new(window_width, window_height)
for y in range(window_height):
for x in range(window_width):
if smap[y][x] == ' ':
libtcod.map_set_properties(fov_map, x, y, True, True)
elif smap[y][x] == '=':
libtcod.map_set_properties(fov_map, x, y, True, False)
if fov_recompute:
fov_recompute = False
libtcod.map_compute_fov(fov_map, fov_px, fov_py, fov_radius, True)
for y in range(window_height):
for x in range(window_width):
affect, cell = 'dark', 'ground'
if libtcod.map_is_in_fov(fov_map, x, y):
affect = 'light'
if (smap[y][x] == '#'):
cell = 'wall'
color = fov_colors['%s %s' % (affect, cell)]
libtcod.console_set_back(0, x, y, color, libtcod.BKGND_SET)
###game state updates###
def update(key):
global fov_py, fov_px, fov_recompute, smap
key = get_key(key)
if key in keys:
dx, dy = keys[key]
if smap[fov_py+dy][fov_px+dx] == ' ':
libtcod.console_put_char(0, fov_px, fov_py, ' ',
libtcod.BKGND_NONE)
fov_px = fov_px + dx
fov_py = fov_py + dy
libtcod.console_put_char(0, fov_px, fov_py, '#',
libtcod.BKGND_NONE)
fov_recompute = True
###initialization and main loop###
font = os.path.join('fonts', 'arial12x12.png')
libtcod.console_set_custom_font(font, libtcod.FONT_LAYOUT_TCOD | libtcod.FONT_TYPE_GREYSCALE)
libtcod.console_init_root(window_width, window_height, 'Python Tutorial', False)
while not libtcod.console_is_window_closed():
draw()
libtcod.console_flush()
key = libtcod.console_wait_for_keypress(True)
update(key)
if key.vk == libtcod.KEY_ESCAPE:
break
I receive the following errors; again, I have all the needed files in the folder and I am on Windows.
Error for listing 2:
Traceback (most recent call last):
File "roguelike_practice1.py", line 167, in <module>
draw()
File "roguelike_practice1.py", line 100, in draw
libtcod.console_set_fore(0, ww, wh, libtcod.white)
File "c:\Users\cshenkan\CloudStation\Programming\Libtcod\Project 2\libtcodpy.p
y", line 764, in console_set_fore
_lib.TCOD_console_set_fore(con, x, y, col)
File "c:\Python27\lib\ctypes\__init__.py", line 378, in __getattr__
func = self.__getitem__(name)
File "c:\Python27\lib\ctypes\__init__.py", line 383, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: function 'TCOD_console_set_fore' not found
I run into this TCOD_console_set_fore error a bunch. If I comment that out, I get the same error with another function such as TCOD_console_set_back, and others.
Not sure why I'm getting these errors. I'm using Python 2.7.9 32-bit, and libtcod 1.5.1 I believe, running Windows 7 64-bit. Keep in mind I can get other programs to run that don't require any of those set_foreground variants, the print_left function, or whatever other functions aren't working. But I'm sure it's one or two issues affecting all the functions that won't work.
If anyone has any ideas, I've spent a ton of time looking online to no avail for info. And the forum for libtcod takes days for administrator approval to join - lame.
Anyway thanks in advance! Ask me any questions or if you need clarification.
TCOD 1.5.1 renamed some functions, which is why your two listings are crashing.
Version 1.5.1 renamed console_set_foreground_color to console_set_default_foreground, console_set_background_color to console_set_default_background, and console_set_fore and console_set_back to console_set_char_foreground and console_set_char_background respectively, and console_wait_for_keypress has been replaced with sys_wait_for_event.
Also, console_print_left has been replaced by console_print_ex, which has an extra 'alignment' parameter between the background flag and the string to print.
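As a hedged sketch, the failing calls from the first listing would become something like the following under those 1.5.1 names (libtcod.LEFT is the alignment constant assumed here, and the root console is assumed to be initialised already):

import libtcodpy as libtcod

def draw_banner(ww, wh):
    libtcod.console_set_default_foreground(0, libtcod.white)     # was console_set_foreground_color
    libtcod.console_print_ex(0, 1, 1, libtcod.BKGND_NONE, libtcod.LEFT,
                             "IJKL : move around")                # was console_print_left
    for y in range(wh):
        for x in range(ww):
            libtcod.console_set_char_background(0, x, y,
                                                libtcod.Color(50, 50, 150),
                                                libtcod.BKGND_SET)  # was console_set_back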
It appears that those functions were deprecated in 1.5.1. I can find them in 1.5.0, but neither in 1.5.1 nor in 1.5.2. I think you would have to use console_print_ex or console_print_rect_ex instead.
Otherwise you could of course switch back to 1.5.0.