Voice cloning, the ability to faithfully reproduce a human voice from an audio sample, has gone in eighteen months from laboratory curiosity to accessible commercial tool. With only 30 seconds of reference audio, current models generate a synthetic voice that 78% of human listeners cannot tell from the original. For businesses, the legitimate uses are numerous; so are the risks. This analysis covers both.
Legitimate uses in business
Proprietary brand voice
Creating a unique synthetic voice for a company's AI agents is the most widespread and least controversial use. The company records a voice actor (under an explicit rights-transfer agreement), builds a voice model from that recording, and obtains a fully proprietary voice for its agents, interactive voice response (IVR) systems, and audio advertisements. Cost: €2,000 to €8,000 depending on the recording duration. Advantage: total brand consistency and no legal risk.
Accessibility and multilingual content
A publishing group can clone an author's voice (with the author's contractual consent) to narrate audiobooks in 40 languages, without the author having to record each one. A trainer can create multilingual versions of e-learning courses in their own cloned voice. These documented, consented uses are legally sound.
Customer voice personalization
Some companies are experimenting with advanced personalization: the AI agent subtly adapts its regional accent or language register to the customer's profile. Strictly speaking, this is not voice cloning but a fine-tuning of speech synthesis parameters, which produces a similar sense of familiarity; the sketch below illustrates the idea.
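A minimal sketch of profile-driven synthesis tuning, in Python. The parameter names (speaking_rate, pitch_shift, style_preset) and the profile fields are illustrative assumptions, not any particular TTS engine's API; real engines expose different knobs, but the principle is the same.

```python
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    region: str      # e.g. "sud-ouest"; used to pick a generic accent preset
    formality: str   # "formal" or "casual"

# Accent presets are generic styles, kept behind an allow-list so the agent
# varies its delivery without ever imitating a specific person.
ALLOWED_ACCENT_PRESETS = {"sud-ouest", "nord"}

def synthesis_params(profile: CustomerProfile) -> dict:
    """Derive TTS settings from a customer profile."""
    params = {"speaking_rate": 1.0, "pitch_shift": 0.0, "style_preset": "neutral"}
    if profile.formality == "casual":
        params["speaking_rate"] = 1.05            # slightly brisker, friendlier pace
        params["style_preset"] = "conversational"
    if profile.region in ALLOWED_ACCENT_PRESETS:
        params["style_preset"] = f"accent_{profile.region}"
    return params

print(synthesis_params(CustomerProfile(region="sud-ouest", formality="casual")))
# {'speaking_rate': 1.05, 'pitch_shift': 0.0, 'style_preset': 'accent_sud-ouest'}
```

The allow-list is the design point: the agent adjusts delivery, never identity.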
Poorly managed risks
Internal vocal deepfake
Several incidents documented in 2025 involved cybercriminals using cloned executive voices to authorize fraudulent transfers over the phone. A cloned CEO voice ordering an "urgent confidential transfer" is convincing enough to deceive an unprepared employee. Companies must implement out-of-band verification protocols for any urgent financial request received by phone.
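A minimal sketch of such an out-of-band rule, with illustrative thresholds and a hypothetical internal directory. The essential property is that verification happens over a channel other than the one the request arrived on, using contact details the attacker cannot supply.

```python
# Internal directory numbers come from a trusted source, never from the
# incoming call itself.
KNOWN_NUMBERS = {"ceo": "+33 1 00 00 00 01"}

def requires_out_of_band_check(amount_eur: float, channel: str, urgent: bool) -> bool:
    """Flag requests that must be confirmed via a second channel."""
    return channel == "phone" and (urgent or amount_eur >= 10_000)

def handle(request: dict) -> str:
    if requires_out_of_band_check(request["amount"], request["channel"], request["urgent"]):
        # Call back on the directory number and require a second approver;
        # never trust a number or "confirmation" offered during the original call.
        return f"HOLD: call back {KNOWN_NUMBERS[request['from']]} and obtain dual approval"
    return "PROCEED"

print(handle({"from": "ceo", "amount": 250_000, "channel": "phone", "urgent": True}))
```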
Liability in case of misuse
If you deploy a cloned voice for your customer service and a customer is misled about the artificial nature of the conversation, you may be held liable. The European AI Act requires that AI-generated content be clearly identified as such in interactions with consumers; its transparency obligations (Article 50) take effect in August 2026.
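One way to satisfy this kind of transparency obligation is to enforce the disclosure in code rather than in the agent's prompt. A minimal sketch, with a hypothetical session class and illustrative wording (not a vetted legal formula):

```python
class VoiceAgentSession:
    DISCLOSURE = "Hello, you are speaking with an automated AI assistant."

    def __init__(self) -> None:
        self.disclosed = False
        self.transcript: list[str] = []

    def say(self, text: str) -> None:
        # The disclosure is enforced here, in code, so no prompt wording,
        # interruption, or configuration mistake can skip it.
        if not self.disclosed:
            self.transcript.append(self.DISCLOSURE)
            self.disclosed = True
        self.transcript.append(text)

session = VoiceAgentSession()
session.say("How can I help you today?")
print("\n".join(session.transcript))
```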
The legal framework in 2026
In Europe, three texts govern voice cloning:
- GDPR: a voice recording processed to uniquely identify a person qualifies as biometric data (Article 9). Cloning someone's voice without an explicit legal basis (consent, contract) is a GDPR violation.
- AI Act: voice synthesis systems deployed in interactions with consumers must carry an audible or readable transparency marking (Article 50, applicable from August 2026).
- French general law (droit commun): the voice is protected as an attribute of personality, on the same basis as image and private life. Using someone's voice without permission may constitute an infringement of personality rights or an invasion of privacy.
What contracts should include
If you use the voice of an actor or employee to create a voice model, sign a rights-transfer contract specifying the authorized uses (AI agents, advertising, e-learning), the duration (limited or perpetual), the territory, and the conditions for revocation. Without such a contract, the person can demand removal of the model, plus damages, at any time.
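These contract terms can also be made machine-checkable, so the synthesis pipeline itself refuses uses that were never transferred. A minimal sketch, with illustrative field names:

```python
from datetime import date

# The signed contract, mirrored as metadata that lives next to the model.
VOICE_MODEL_LICENSE = {
    "contract_ref": "rights-transfer-2026-017",
    "authorized_uses": {"ai_agent", "e_learning"},  # advertising was NOT transferred
    "valid_until": date(2029, 12, 31),              # limited duration, not perpetual
    "territory": {"FR", "BE", "CH"},
    "revoked": False,                               # flipped if the actor revokes
}

def use_is_licensed(use: str, territory: str, on: date) -> bool:
    """Check a synthesis request against the contract terms."""
    lic = VOICE_MODEL_LICENSE
    return (not lic["revoked"]
            and use in lic["authorized_uses"]
            and territory in lic["territory"]
            and on <= lic["valid_until"])

print(use_is_licensed("advertising", "FR", date.today()))  # False: never transferred
```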
"The voice is identity. Companies that treat voice cloning as a mere technical asset without legal dimension take considerable risks." — Lawyer specializing in digital law, Parisian firm
Best practices for responsible deployment
- Always work with voices created by paid actors who have signed an explicit contract.
- Clearly inform customers that they are interacting with an AI agent (and not a human).
- Never use the voice of an executive or employee without their written consent.
- Regularly audit the use of your voice model to detect misuse (a sketch follows this list).
- Train financial teams on the risks of vocal deepfake for transfer verifications.
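For the audit point above, a minimal sketch of a periodic scan over synthesis logs; the log fields, business hours, and allow-list are illustrative assumptions:

```python
# Services allowed to call the synthesis endpoint; anything else is flagged.
ALLOWED_SERVICES = {"support-agent", "elearning-pipeline"}

def audit(logs: list[dict]) -> list[dict]:
    """Return log entries worth a human review."""
    suspicious = []
    for entry in logs:
        off_hours = not (8 <= entry["hour"] < 20)
        unknown_caller = entry["service"] not in ALLOWED_SERVICES
        if off_hours or unknown_caller:
            suspicious.append(entry)
    return suspicious

logs = [
    {"service": "support-agent", "hour": 14, "chars": 300},
    {"service": "marketing-experiment", "hour": 3, "chars": 12_000},  # flagged twice over
]
for entry in audit(logs):
    print("review:", entry)
```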