Speech synthesis (or Text to Speech) is the computer-generated simulation of human speech. It converts human language text into human-like speech audio. In this tutorial, you will learn how you can convert text to speech in Python.
Please note that I will use text-to-speech or speech synthesis interchangeably in this tutorial, as they're essentially the same thing.
In this tutorial, we won't be building neural networks and training a model from scratch, as that is complex and hard to do well. Instead, we will use APIs, engines, and pre-trained models that offer this functionality out of the box.
More specifically, we will use three different techniques for text-to-speech: the gTTS API, the offline pyttsx3 library, and the pre-trained SpeechT5 transformer model.
To make things clear, this tutorial is about converting text to speech, not the other way around. If you want to convert speech to text instead, check this tutorial.
To get started, let's install the required modules:
$ pip install gTTS pyttsx3 playsound soundfile transformers datasets sentencepiece
As you may guess, gTTS stands for Google Text-to-Speech. It is a Python library that interfaces with Google Translate's text-to-speech API; it requires an Internet connection and is pretty easy to use.
Open up a new Python file and import:
import gtts
from playsound import playsound
It's pretty straightforward to use this library: you just need to pass the text to the gTTS object, which is an interface to Google Translate's text-to-speech API:
# make request to google to get synthesis
tts = gtts.gTTS("Hello world")
Up to this point, we have sent the text to the API and retrieved the actual speech audio. Let's save it to a file:
# save the audio file
tts.save("hello.mp3")
Awesome! You'll see a new file appear in the current directory. Let's play it using the playsound module we installed previously:
# play the audio file
playsound("hello.mp3")
And that's it! You'll hear a robot saying what you just told it to say!
It isn't available only in English; you can use other languages as well by passing the lang parameter:
# in spanish
tts = gtts.gTTS("Hola Mundo", lang="es")
tts.save("hola.mp3")
playsound("hola.mp3")
If you don't want to save the audio to a file and just want to play it directly, you can use tts.write_to_fp(), which accepts an io.BytesIO() object to write into; check this link for more information.
To get the list of available languages, use this:
# all available languages along with their IETF tag
print(gtts.lang.tts_langs())
Here are the supported languages:
{'af': 'Afrikaans', 'sq': 'Albanian', 'ar': 'Arabic', 'hy': 'Armenian', 'bn': 'Bengali', 'bs': 'Bosnian', 'ca': 'Catalan', 'hr': 'Croatian', 'cs': 'Czech', 'da': 'Danish', 'nl': 'Dutch', 'en': 'English', 'eo': 'Esperanto', 'et': 'Estonian', 'tl': 'Filipino', 'fi': 'Finnish', 'fr': 'French', 'de': 'German', 'el': 'Greek', 'gu': 'Gujarati', 'hi': 'Hindi', 'hu': 'Hungarian', 'is': 'Icelandic', 'id': 'Indonesian', 'it': 'Italian', 'ja': 'Japanese', 'jw': 'Javanese', 'kn': 'Kannada', 'km': 'Khmer', 'ko': 'Korean', 'la': 'Latin', 'lv': 'Latvian', 'mk': 'Macedonian', 'ml': 'Malayalam', 'mr':
'Marathi', 'my': 'Myanmar (Burmese)', 'ne': 'Nepali', 'no': 'Norwegian', 'pl': 'Polish', 'pt': 'Portuguese', 'ro': 'Romanian', 'ru': 'Russian', 'sr': 'Serbian', 'si': 'Sinhala', 'sk': 'Slovak', 'es': 'Spanish', 'su': 'Sundanese', 'sw': 'Swahili', 'sv': 'Swedish', 'ta': 'Tamil', 'te': 'Telugu', 'th': 'Thai', 'tr': 'Turkish', 'uk': 'Ukrainian', 'ur': 'Urdu', 'vi': 'Vietnamese', 'cy': 'Welsh', 'zh-cn': 'Chinese (Mandarin/China)', 'zh-tw': 'Chinese (Mandarin/Taiwan)', 'en-us': 'English (US)', 'en-ca': 'English (Canada)', 'en-uk': 'English (UK)', 'en-gb': 'English (UK)', 'en-au': 'English (Australia)', 'en-gh': 'English (Ghana)', 'en-in': 'English (India)', 'en-ie': 'English (Ireland)', 'en-nz': 'English (New Zealand)', 'en-ng': 'English (Nigeria)', 'en-ph': 'English (Philippines)', 'en-za': 'English (South Africa)', 'en-tz': 'English (Tanzania)', 'fr-ca': 'French (Canada)', 'fr-fr': 'French (France)', 'pt-br': 'Portuguese (Brazil)', 'pt-pt': 'Portuguese (Portugal)', 'es-es': 'Spanish (Spain)', 'es-us': 'Spanish (United States)'}
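If you know a language's name but not its tag, you can search the dictionary returned by gtts.lang.tts_langs(). Here is a sketch using a small hard-coded subset of that dictionary, so it works offline (the full mapping may change between gTTS versions):

```python
# a small subset of the mapping returned by gtts.lang.tts_langs()
langs = {"af": "Afrikaans", "ar": "Arabic", "en": "English",
         "es": "Spanish", "fr": "French"}

def find_lang_code(name, langs):
    """Return the IETF tag for a language name, or None if unsupported."""
    for code, lang_name in langs.items():
        if lang_name.lower() == name.lower():
            return code
    return None

print(find_lang_code("Spanish", langs))  # es
```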
Now you know how to use Google's API, but what if you want to use text-to-speech technologies offline?
Well, the pyttsx3 library comes to the rescue. It is a text-to-speech conversion library in Python that looks for the TTS engines pre-installed on your platform and uses them: SAPI5 on Windows, NSSpeechSynthesizer on macOS, and eSpeak on Linux and other platforms.
The main advantage of pyttsx3 is that it works entirely offline, and it lets you tweak properties such as the speaking rate, volume, and voice.
Note: If you're on a Linux system and the voice output is not working with this library, then you should install espeak, FFmpeg and libespeak1:
$ sudo apt update && sudo apt install espeak ffmpeg libespeak1
To get started with this library, open up a new Python file and import it:
import pyttsx3
Now we need to initialize the TTS engine:
# initialize Text-to-speech engine
engine = pyttsx3.init()
Now, to convert some text to speech, we need to use the say() and runAndWait() methods:
# convert this text to speech
text = "Python is a great programming language"
engine.say(text)
# play the speech
engine.runAndWait()
The say() method adds an utterance to the event queue, while the runAndWait() method runs the event loop until all queued commands are processed. So you can call the say() method multiple times and then call runAndWait() once at the end to hear the synthesis. Try it out!
This library provides us with some properties that we can tweak based on our needs. For instance, let's get the details of speaking rate:
# get details of speaking rate
rate = engine.getProperty("rate")
print(rate)
Output:
200
Alright, let's change it to 300 to make the speaking rate much faster:
# setting new voice rate (faster)
engine.setProperty("rate", 300)
engine.say(text)
engine.runAndWait()
Or slower:
# slower
engine.setProperty("rate", 100)
engine.say(text)
engine.runAndWait()
Another useful property is voices, which allows us to get details of all the voices available on the machine:
# get details of all voices available
voices = engine.getProperty("voices")
print(voices)
Here is the output in my case:
[<pyttsx3.voice.Voice object at 0x000002D617F00A20>, <pyttsx3.voice.Voice object at 0x000002D617D7F898>, <pyttsx3.voice.Voice object at 0x000002D6182F8D30>]
As you can see, my machine has three voices; let's use the second one, for example:
# set another voice
engine.setProperty("voice", voices[1].id)
engine.say(text)
engine.runAndWait()
You can also save the audio to a file using the save_to_file() method, instead of playing it aloud with say():
# saving speech audio into a file
engine.save_to_file(text, "python.mp3")
engine.runAndWait()
A new MP3 file will appear in the current directory, check it out!
In this section, we will use the 🤗 Transformers library to load a pre-trained text-to-speech transformer model. More specifically, we will use the SpeechT5 model that is fine-tuned for speech synthesis on LibriTTS. You can learn more about the model in this paper.
To get started, let's install the required libraries (if you haven't already):
$ pip install soundfile transformers datasets sentencepiece
Open up a new Python file named tts_transformers.py and import the following:
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
from datasets import load_dataset
import torch
import random
import string
import soundfile as sf
device = "cuda" if torch.cuda.is_available() else "cpu"
Let's load everything:
# load the processor
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
# load the model
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts").to(device)
# load the vocoder, that is the voice encoder
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan").to(device)
# we load this dataset to get the speaker embeddings
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
The processor is the tokenizer for the input text, whereas the model is the actual model that converts the text to speech. The vocoder (short for voice encoder) is the component that turns the model's acoustic output into an audible waveform; it is responsible for the final production of the audio file.
In our case, the SpeechT5 model transforms the input text into a sequence of mel-filterbank features (a spectrogram-like representation of sound). These acoustic features are widely used in speech and audio processing and are derived from a Fourier transform of the signal.
The HiFi-GAN vocoder we're using takes these representations and synthesizes them into actual audible speech.
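To make the "mel" part concrete: the mel scale maps frequency in hertz onto a perceptual pitch scale that is roughly linear below 1 kHz and logarithmic above it. A common formula (the exact constants vary slightly between toolkits) is mel = 2595 * log10(1 + f / 700); here is a quick sketch:

```python
import math

def hz_to_mel(f):
    """Convert a frequency in Hz to mels (O'Shaughnessy formula)."""
    return 2595 * math.log10(1 + f / 700)

def mel_to_hz(m):
    """Inverse of hz_to_mel: convert mels back to Hz."""
    return 700 * (10 ** (m / 2595) - 1)

# roughly linear below 1 kHz, compressed above it
print(round(hz_to_mel(1000)))  # ~1000
print(round(hz_to_mel(8000)))  # ~2840
```

A mel filterbank is simply a set of overlapping triangular filters spaced evenly on this scale, applied to the Fourier spectrum of each audio frame.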
Finally, we load a dataset that provides speaker embeddings (voice vectors) so we can synthesize speech with various speakers. Here are the speakers:
# speaker ids from the embeddings dataset
speakers = {
'awb': 0, # Scottish male
'bdl': 1138, # US male
'clb': 2271, # US female
'jmk': 3403, # Canadian male
'ksp': 4535, # Indian male
'rms': 5667, # US male
'slt': 6799 # US female
}
Next, let's make our function that does all the speech synthesis for us:
def save_text_to_speech(text, speaker=None):
# preprocess text
inputs = processor(text=text, return_tensors="pt").to(device)
if speaker is not None:
# load xvector containing speaker's voice characteristics from a dataset
speaker_embeddings = torch.tensor(embeddings_dataset[speaker]["xvector"]).unsqueeze(0).to(device)
else:
# random vector, meaning a random voice
speaker_embeddings = torch.randn((1, 512)).to(device)
# generate speech with the models
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
if speaker is not None:
# if we have a speaker, we use the speaker's ID in the filename
output_filename = f"{speaker}-{'-'.join(text.split()[:6])}.mp3"
else:
# if we don't have a speaker, we use a random string in the filename
random_str = ''.join(random.sample(string.ascii_letters+string.digits, k=5))
output_filename = f"{random_str}-{'-'.join(text.split()[:6])}.mp3"
# save the generated speech to a file with 16KHz sampling rate
sf.write(output_filename, speech.cpu().numpy(), samplerate=16000)
# return the filename for reference
return output_filename
The function takes the text and the speaker (optional) as arguments and does the following:
- It preprocesses the text with the processor to get the input IDs.
- If a speaker is passed, it loads the xvector containing that speaker's voice characteristics from the embeddings dataset. Otherwise, it generates a random 512-dimensional vector with torch.randn(), although I do not think that's a reliable way of making a random voice.
- It calls the model.generate_speech() method to generate the speech tensor, passing the input IDs, the speaker embeddings, and the vocoder.
- It saves the generated audio to a file with a 16kHz sampling rate and returns the filename for reference.
Let's use the function now:
# generate speech with a US female voice
save_text_to_speech("Python is my favorite programming language", speaker=speakers["slt"])
This will generate speech with the US female voice (my favorite among all the speakers). Omitting the speaker argument generates speech with a random voice:
# generate speech with a random voice
save_text_to_speech("Python is my favorite programming language")
Let's now call the function with all the speakers so you can compare them:
# a challenging text with all speakers
text = """In his miracle year, he published four groundbreaking papers.
These outlined the theory of the photoelectric effect, explained Brownian motion,
introduced special relativity, and demonstrated mass-energy equivalence."""
for speaker_name, speaker in speakers.items():
output_filename = save_text_to_speech(text, speaker)
print(f"Saved {output_filename}")
# random speaker
output_filename = save_text_to_speech(text)
print(f"Saved {output_filename}")
Output:
Saved 0-In-his-miracle-year,-he-published.mp3
Saved 1138-In-his-miracle-year,-he-published.mp3
Saved 2271-In-his-miracle-year,-he-published.mp3
Saved 3403-In-his-miracle-year,-he-published.mp3
Saved 4535-In-his-miracle-year,-he-published.mp3
Saved 5667-In-his-miracle-year,-he-published.mp3
Saved 6799-In-his-miracle-year,-he-published.mp3
Saved lz7Rh-In-his-miracle-year,-he-published.mp3
You can listen to 6799-In-his-miracle-year,-he-published.mp3 to hear the result.
Great, that's it for this tutorial. I hope it helps you build your application, or maybe even your own virtual assistant in Python!
To conclude, we have used three different methods for text-to-speech:
If you want reliable synthesis, you can go with the Google TTS API or any other dependable API of your choice. If you want a reliable but offline method, you can use the SpeechT5 transformer. And if you just want something that works quickly without an Internet connection, the pyttsx3 library is the way to go.
Here is the documentation for used libraries:
Finally, if you're a beginner and want to learn Python, I suggest you take the Python For Everybody Coursera course, in which you'll learn a lot about Python. You can also check our resources and courses page to see the Python resources I recommend!
Related: How to Play and Record Audio in Python.
Happy Coding ♥