When trying to use pyttsx3 I can only use English voices. I would like to be able to use Dutch as well.
I have already installed the text-to-speech language package in the Windows settings menu, but I can still only use the default English voice.
How can I fix this?
If you want to change the language, you need to switch to another "voice" that supports it.
To see which voices/languages are installed you can list them like this:
import pyttsx3
engine = pyttsx3.init()
for voice in engine.getProperty('voices'):
    print(voice)
Now you can switch to your favorite voice like this:
engine.setProperty('voice', voice.id)
I personally use this helper function (which I have also posted elsewhere):
# language : en_US, de_DE, ...
# gender : VoiceGenderFemale, VoiceGenderMale
def change_voice(engine, language, gender='VoiceGenderFemale'):
    for voice in engine.getProperty('voices'):
        if language in voice.languages and gender == voice.gender:
            engine.setProperty('voice', voice.id)
            return True
    raise RuntimeError("Language '{}' for gender '{}' not found".format(language, gender))
And finally, you can use it like this (if language and gender are installed):
import pyttsx3
engine = pyttsx3.init()
change_voice(engine, "nl_BE", "VoiceGenderFemale")
engine.say("Hello World")
engine.runAndWait()
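Note that on Windows the SAPI5 driver often reports `voice.languages` as an empty list, so the helper above may never find a match there. A workaround is to match on the voice name instead. This is only a sketch: the voice names and ids below are stand-in data, and the Dutch voice name is an assumption that depends on which language pack you installed.

```python
from collections import namedtuple

# Stand-ins shaped like pyttsx3 voice objects; real ids and names will differ.
Voice = namedtuple('Voice', ['id', 'name'])

def find_voice(voices, keyword):
    """Return the first voice whose name contains keyword (case-insensitive),
    or None. Useful when voice.languages is empty, as with SAPI5 on Windows."""
    keyword = keyword.lower()
    return next((v for v in voices if keyword in v.name.lower()), None)

# Demo with fake data; with a real engine, pass engine.getProperty('voices'):
voices = [
    Voice('id-david', 'Microsoft David Desktop - English (United States)'),
    Voice('id-frank', 'Microsoft Frank - Dutch (Netherlands)'),
]
match = find_voice(voices, 'dutch')
print(match.id if match else 'not found')  # id-frank
```

With a real engine you would then call `engine.setProperty('voice', match.id)` before `engine.say(...)`.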
Installing another language on Windows is not enough. By default, newly installed Windows "speakers" are accessible only to official Windows programs. You need to expose them for use from Python code, which is done by changing a couple of registry entries (don't do this without a backup, or if you are not sure what you are doing).
Here is a small example of using another language, Hebrew in this case. For instructions on changing the registry entries, see the machine_buddy.get_all_voices(ack=True) documentation; it is written in the comments.
import wizzi_utils as wu  # pip install wizzi_utils

def tts():
    # pip install pyttsx3  # needed
    machine_buddy = wu.tts.MachineBuddy(rate=150)
    all_voices = machine_buddy.get_all_voices(ack=True)
    print('\taudio test')
    for i, v in enumerate(all_voices):
        machine_buddy.change_voice(new_voice_ind=i)
        machine_buddy.say(text=v.name)
        if 'Hebrew' in str(v.name):
            t = 'שלום, מה קורה חברים?'
            machine_buddy.say(text=t)
    return

def main():
    tts()
    return

if __name__ == '__main__':
    wu.main_wrapper(
        main_function=main,
        seed=42,
        ipv4=False,
        cuda_off=False,
        torch_v=False,
        tf_v=False,
        cv2_v=False,
        with_pip_list=False,
        with_profiler=False
    )
output:
<Voice id=HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices\Tokens\TTS_MS_EN-US_DAVID_11.0
name=Microsoft David Desktop - English (United States)
languages=[]
gender=None
age=None>
<Voice id=HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices\Tokens\MSTTS_V110_heIL_Asaf
name=Microsoft Asaf - Hebrew (Israel)
languages=[]
gender=None
age=None>
<Voice id=HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices\Tokens\TTS_MS_EN-US_ZIRA_11.0
name=Microsoft Zira Desktop - English (United States)
languages=[]
gender=None
age=None>
audio test
The language property "zh" denotes Mandarin Chinese. You can loop through all the voices included in pyttsx3 and at the time of this writing there are 3 supported Chinese voices (on a MacBook): Taiwanese, Hong Kong, and Chinese (mainland).
engine = pyttsx3.init()
voices = engine.getProperty('voices')
for voice in voices:
    engine.setProperty('voice', voice.id)
    if "zh" in voice.id:
        print(voice.id)
I'm using a Mac, so the results may be different for you. But here is my result:
com.apple.voice.compact.zh-TW.Meijia
com.apple.voice.compact.zh-HK.Sinji
com.apple.voice.compact.zh-CN.Tingting
Here is an example code to audibly speak Chinese:
engine = pyttsx3.init()
voices = engine.getProperty('voices')
engine.setProperty('voice', "com.apple.voice.compact.zh-CN.Tingting")
engine.say('炒菜的时候,也可以放')
engine.runAndWait()
If you want a Chinese voice, for example, on a Windows computer, it is not included by default. You will have to first manually add one. Then you can run through the first code sample above and find the voice with "zh" in it. I did a test on a Windows machine and the name of the voice.id was "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices\Tokens\TTS_MS_ZH-CN_HUIHUI_11.0"
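Putting the two platforms together: both id styles contain "zh" (lowercase in the macOS ids, uppercase inside the Windows registry path), so a case-insensitive check covers both. A small sketch, using the ids quoted above as sample data:

```python
def chinese_voice_ids(ids):
    """Filter voice ids that look like Chinese voices; case-insensitive so it
    matches macOS ids (zh-CN) and Windows registry ids (ZH-CN) alike."""
    return [i for i in ids if 'zh' in i.lower()]

sample_ids = [
    'com.apple.voice.compact.zh-CN.Tingting',
    r'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices\Tokens\TTS_MS_ZH-CN_HUIHUI_11.0',
    r'HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Voices\Tokens\TTS_MS_EN-US_DAVID_11.0',
]
print(chinese_voice_ids(sample_ids))  # the first two ids, not the English one
```

With a real engine, you would pass `[v.id for v in engine.getProperty('voices')]` instead of the sample list.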
I wish to have a voice assistant in Hindi. Can engine.say() accept translated text? My code is below. It speaks the English text, but not the Hindi text. No error occurs, yet no voice is heard either (my laptop volume is at 100%).
Can anyone help?
import pyttsx3
from translate import Translator
engine = pyttsx3.init()
translator= Translator(from_lang="english",to_lang="hindi")
translation = translator.translate("Today is Tuesday")
print(translation)
engine.say("My first code on text-to-speech")
engine.say(translation)
engine.runAndWait()
Your code ran fine after I installed its upstream dependencies:
pip3 install translate pyttsx3
When I first ran it, I got this error:
OSError: libespeak.so.1: cannot open shared object file: No such file or directory
So then I installed libespeak1; I am on Ubuntu, so the command is:
sudo apt-get install libespeak1
Then the Python code below executed correctly and I could hear the translation:
from translate import Translator
import pyttsx3
translator= Translator(from_lang="english",to_lang="hindi")
# translation = translator.translate("Today is Tuesday")
translation = translator.translate("Today is Tuesday and its the day before Wednesday")
engine = pyttsx3.init()
engine.setProperty("languages", 'hi')  # note: 'languages' is not a documented pyttsx3 property; on some drivers this may be a no-op
engine.say(translation)
engine.runAndWait()
Using pyttsx3 (tried versions 2.5 through current) in Visual Studio Code on Windows 10 with Python 3.10.0.
My problem is that the code runs through, but no audio is output. While debugging, there is no pause when stepping into or over the code (for the parts involving pyttsx3). I made sure my audio is on and working. I tried a different TTS library, gTTS, and the audio worked, but I am trying to work offline. I also tried this exact code from VS Code in PyCharm and had the same problem, again with no errors or warnings.
import speech_recognition as sr
import pyttsx3

listener = sr.Recognizer()
engine = pyttsx3.init(driverName='sapi5')
#voices = engine.getProperty('voices')
#engine.setProperty('voice', voices[0].id)
engine.say("Testing, audio")
engine.runAndWait
try:
    with sr.Microphone() as source:
        print('Listening...')
        voice = listener.listen(source)
        command = listener.recognize_google(voice)
        print(command)
        engine.say(command)
        engine.runAndWait
except:
    pass
print('Hello')
I also tried this block of code with no driver name, and the same problem persists:
import speech_recognition as sr
import pyttsx3

listener = sr.Recognizer()
engine = pyttsx3.init()
#voices = engine.getProperty('voices')
#engine.setProperty('voice', voices[0].id)
engine.say("Testing, audio")
engine.runAndWait
try:
    with sr.Microphone() as source:
        print('Listening...')
        voice = listener.listen(source)
        command = listener.recognize_google(voice)
        print(command)
        engine.say(command)
        engine.runAndWait
except:
    pass
print('Hello')
The two commented lines on both programs didn't change anything for me either.
I also tested the program with just pyttsx3.
import pyttsx3
engine = pyttsx3.init(driverName='sapi5')
engine.say("Testing, audio")
engine.runAndWait
and tested using 'Testing, audio' instead of "Testing, audio" and I tried single words as well.
Any help would be amazing! Thank you!
In the meantime I will try to test this program (translated to work with Linux) on Linux, to see if my OS is the issue.
I will also try an older version of Python to see if that is the issue, along with Python 2.
My biggest assumption is that pyttsx3 needs an earlier version of Python to work, but I could also be 100% wrong about that.
You forgot to put the parentheses on engine.runAndWait. Do this: engine.runAndWait()
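For context, `engine.runAndWait` without parentheses is valid Python: the expression just evaluates to the bound method object and discards it, so nothing runs and no error is raised. A minimal illustration of the difference, using a plain function instead of the engine:

```python
calls = []

def run_and_wait():
    calls.append('ran')

run_and_wait        # bare reference: evaluates to the function object, does nothing
print(calls)        # []
run_and_wait()      # the parentheses actually perform the call
print(calls)        # ['ran']
```

This is why the original script ran to completion silently with no warning.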
So I want to make a command for my Discord bot that translates things. I'm using repl.it, and the googletrans pip install won't work for some reason. I also tried pip install googletrans==3.1.0a0 in the shell, but it won't work either. Is there currently a Google Translate package that works in Python, or an updated one that works on repl.it?
Here's my current code (when I try the command, it doesn't respond):
@client.command(aliases=['tr'])
async def translate(ctx, lang_to, *args):
    lang_to = lang_to.lower()
    if lang_to not in googletrans.LANGUAGES and lang_to not in googletrans.LANGCODES:
        raise commands.BadArgument("Invalid language detected. Make sure to check if it is spelled correctly and that it is a real language.")
    text = ' '.join(args)
    translator = googletrans.Translator()
    text_translated = translator.translate(text, dest=lang_to).text
    await ctx.send(text_translated)
Hey, try installing googletrans version 4.0.0rc1:
pip install googletrans==4.0.0rc1
My code:
@bot.command()
async def trans(ctx, lang, *, args):
    t = Translator()
    a = t.translate(args, dest=lang)
    tembed = discord.Embed(title=f'Translating Language....', description=f'Successfully translated the text below :point_down: \n \n**{a.text}**', color=discord.Colour.random())
    await ctx.send(embed=tembed)
print(googletrans.LANGUAGES)
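As a side note, the two lookup tables the first snippet checks are just dicts: `googletrans.LANGUAGES` maps code to name and `googletrans.LANGCODES` maps name to code, so you can accept either form from users. A sketch with a small stand-in dict instead of the real tables:

```python
# Stand-in shaped like googletrans.LANGUAGES; the real dict is much larger.
LANGUAGES = {'en': 'english', 'es': 'spanish', 'hi': 'hindi'}
LANGCODES = {name: code for code, name in LANGUAGES.items()}

def normalize_lang(lang):
    """Accept a code ('es') or a name ('Spanish') and return the code."""
    lang = lang.lower()
    if lang in LANGUAGES:
        return lang
    if lang in LANGCODES:
        return LANGCODES[lang]
    raise ValueError("Unknown language: {}".format(lang))

print(normalize_lang('Spanish'))  # es
print(normalize_lang('hi'))       # hi
```

In the bot command you would replace the stand-in dicts with the real `googletrans` ones and pass the result as `dest=`.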
I am trying to use ALDialog module to have a virtual conversation with the Choregraphe simulated NAO6 robot. I have the below script:
import qi
import argparse
import sys

def main(session):
    """
    This example uses ALDialog methods.
    It's a short dialog session with two topics.
    """
    # Getting the service ALDialog
    ALDialog = session.service("ALDialog")
    ALDialog.setLanguage("English")
    # writing topics' qichat code as text strings (end-of-line characters are important!)
    topic_content_1 = ('topic: ~example_topic_content()\n'
                       'language: enu\n'
                       'concept:(food) [fruits chicken beef eggs]\n'
                       'u: (I [want "would like"] {some} _~food) Sure! You must really like $1 .\n'
                       'u: (how are you today) Hello human, I am fine thank you and you?\n'
                       'u: (Good morning Nao did you sleep well) No damn! You forgot to switch me off!\n'
                       'u: ([e:FrontTactilTouched e:MiddleTactilTouched e:RearTactilTouched]) You touched my head!\n')
    topic_content_2 = ('topic: ~dummy_topic()\n'
                       'language: enu\n'
                       'u:(test) [a b "c d" "e f g"]\n')
    # Loading the topics directly as text strings
    topic_name_1 = ALDialog.loadTopicContent(topic_content_1)
    topic_name_2 = ALDialog.loadTopicContent(topic_content_2)
    # Activating the loaded topics
    ALDialog.activateTopic(topic_name_1)
    ALDialog.activateTopic(topic_name_2)
    # Starting the dialog engine - we need to type an arbitrary string as the identifier
    # We subscribe only ONCE, regardless of the number of topics we have activated
    ALDialog.subscribe('my_dialog_example')
    try:
        raw_input("\nSpeak to the robot using rules from both the activated topics. Press Enter when finished:")
    finally:
        # stopping the dialog engine
        ALDialog.unsubscribe('my_dialog_example')
        # Deactivating all topics
        ALDialog.deactivateTopic(topic_name_1)
        ALDialog.deactivateTopic(topic_name_2)
        # now that the dialog engine is stopped and there are no more activated topics,
        # we can unload all topics and free the associated memory
        ALDialog.unloadTopic(topic_name_1)
        ALDialog.unloadTopic(topic_name_2)

if __name__ == "__main__":
    session = qi.Session()
    try:
        session.connect("tcp://desktop-6d4cqe5.local:9559")
    except RuntimeError:
        print("\nCan't connect to Naoqi at IP desktop-6d4cqe5.local (port 9559).\n"
              "Please check your script's arguments. Run with -h option for help.\n")
        sys.exit(1)
    main(session)
My simulated robot has desktop-6d4cqe5.local as its IP address and its NAOqi port is running on 63361. I want to run the dialogs outside of Choregraphe in a Python script, using the dialog box within Choregraphe only to test it. When I ran the above Python file I got:
Traceback (most recent call last):
File "C:\Users\...\Documents\...\choregraphe_codes\Welcome\speak.py", line 6, in <module>
import qi
File "C:\Python27\Lib\site-packages\pynaoqi\lib\qi\__init__.py", line 93
async, PeriodicTask)
^
SyntaxError: invalid syntax
I couldn't figure out the problem, as there are not many resources online and the robot's documentation is a bit hard to understand.
Please help, thank you.
You are running the script with Python 3.7 or newer, which now treats async as a reserved keyword.
NAOqi only supports Python 2.
Try running your script with python2 explicitly.
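If you want the script itself to fail with a readable message instead of the SyntaxError from inside the qi package, a small version guard is one option. This is only a sketch; the threshold reflects that async became a reserved keyword in Python 3.7, so the import breaks on any Python 3 the SDK does not support:

```python
import sys

def needs_python2(version_info=None):
    """True when the interpreter is Python 3, which cannot import the
    NAOqi 'qi' package (its code uses 'async' as an identifier)."""
    if version_info is None:
        version_info = sys.version_info
    return version_info[0] >= 3

print(needs_python2((2, 7, 18)))  # False
print(needs_python2((3, 10, 0)))  # True
```

Placed before `import qi`, a check like this lets you exit with "run this with python2" instead of a cryptic traceback.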
I wondered if anybody knows whether, and how, Stanford OpenIE can be set up in Google Colab?
I've followed the colab tutorial for the CoreNLP client before and that seems to be working.
I get the following error when running the example from their github (https://github.com/philipperemy/Stanford-OpenIE-Python):
---------------------------------------------------------------------------
PermanentlyFailedException Traceback (most recent call last)
<ipython-input-2-01d7100eb03f> in <module>()
4 text = 'Barack Obama was born in Hawaii. Richard Manning wrote this sentence.'
5 print('Text: %s.' % text)
----> 6 for triple in client.annotate(text):
7 print('|-', triple)
8
3 frames
/usr/local/lib/python3.6/dist-packages/stanfordnlp/server/client.py in ensure_alive(self)
135 time.sleep(1)
136 else:
--> 137 raise PermanentlyFailedException("Timed out waiting for service to come alive.")
138
139 # At this point we are guaranteed that the service is alive.
PermanentlyFailedException: Timed out waiting for service to come alive.
Any advice is appreciated :-)
I have found a workaround for Stanford CoreNLP (OpenIE) using stanza. It is now suggested to use stanza for all annotators.
The problem when I ran this was that Colab couldn't find the path to the stanford-corenlp-full-2018-10-05.zip file.
First, download the file below and upload it to your Google Drive:
https://nlp.stanford.edu/software/stanford-corenlp-full-2018-10-05.zip
After uploading it to Drive, mount your drive in Colab (refer to the Colab docs for this):
from google.colab import drive
drive.mount('/content/drive')
You should see the stanford-corenlp-full-2018-10-05.zip file in Colab's file explorer under the content folder. Copy this path.
Install stanza:
!pip install stanza
Set CORENLP_HOME to the path you copied (the path of the file in Google Drive):
import os
os.environ["CORENLP_HOME"] = '/content/drive/MyDrive/stanford-corenlp-full-2018-10-05'
Run the following code.
import stanza
# Import client module
from stanza.server import CoreNLPClient
client = CoreNLPClient(timeout=150000000, be_quiet=True, annotators=['openie'],
                       endpoint='http://localhost:9001')
client.start()
import time
time.sleep(10)
Make sure you have set the timeout and the annotators to openie in the code above.
Check that Java is running:
# Print background processes and look for java
# You should be able to see a StanfordCoreNLPServer java process running in the
#background
!ps -o pid,cmd | grep java
Run the code with your text to get triples, as follows:
text = "Albert Einstein was a German-born theoretical physicist. He developed the theory of relativity."
document = client.annotate(text, output_format='json')
triples = []
for sentence in document['sentences']:
    for triple in sentence['openie']:
        triples.append({
            'subject': triple['subject'],
            'relation': triple['relation'],
            'object': triple['object']
        })
print(triples)
Try this before starting the server:
%env NO_PROXY='localhost'
%env no_proxy='localhost'
I tested it with stanza CoreNLP, and it solved the time-out problem.
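If you prefer to avoid IPython magics, the plain-Python equivalent is to set the environment variables before constructing the client (a sketch):

```python
import os

# Must happen before CoreNLPClient(...) is created, so the client's HTTP
# requests to localhost bypass any configured proxy.
os.environ['NO_PROXY'] = 'localhost'
os.environ['no_proxy'] = 'localhost'
print(os.environ['NO_PROXY'])  # localhost
```

This also works in a plain script outside Colab, where `%env` is unavailable.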