Voice cloning is the single feature that makes Star Singer feel magical. It is also the feature that carries the most responsibility. Here is how we think about it, and how our system actually behaves under the hood.
Your voice belongs to you
When you record your 30-second sample, we build a compact voice model and store it against your account. That model is encrypted at rest and never shared with third parties. We never sell voice data, period. You can delete your voice model from Settings at any time — it is gone from our servers within 30 days, and from backups within 90.
We also do not train foundation models on your voice. Your model is only used to generate content you request.
We only clone your own voice
You can only clone a voice from audio you recorded in-app or that you uploaded with explicit consent. The app will reject obvious attempts to clone celebrities — we run speaker recognition against a deny list of public figures and block matches. If you try to upload a podcast clip of a politician and turn it into a song, it will not work.
We are not perfect here. Speaker recognition is hard and adversaries are creative. If you see your own likeness or voice used without consent on Star Singer, email hello@starsinger.ai and we will remove the content and suspend the creator within hours.
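The deny-list check above boils down to comparing a voiceprint embedding of your sample against stored embeddings of public figures. Here is a minimal sketch of that comparison using cosine similarity; the function names, the 0.85 threshold, and the plain-list embeddings are illustrative assumptions, not our production model.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def blocked_by_deny_list(
    sample_embedding: list[float],
    deny_list: dict[str, list[float]],
    threshold: float = 0.85,  # hypothetical match threshold
) -> bool:
    # deny_list maps public-figure names to voiceprint embeddings.
    # Any match at or above the threshold blocks the clone request.
    return any(
        cosine_similarity(sample_embedding, emb) >= threshold
        for emb in deny_list.values()
    )
```

In practice a system like this would run a trained speaker-recognition model to produce the embeddings; the threshold is a tradeoff between false blocks of ordinary users and missed matches.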
What we refuse to generate
- Non-consensual likeness or voice of a real person. This is the hard line.
- Sexual content involving minors and any CSAM. We scan every generation against known material and use human review on anything that trips our filters.
- Hate speech, incitement to violence, and targeted harassment. We run text moderation on lyrics and visual moderation on rendered frames.
- Fraud and impersonation. Songs that pretend to be statements by a real named person are not allowed.
How moderation actually works
Every generation runs through three layers. First, your input prompt and lyrics go through a text safety classifier. Second, the audio stream goes through a fingerprint check against known commercial recordings so we know when you are doing a cover versus an original. Third, the final video goes through a visual safety classifier plus random human review on a fraction of outputs.
If any layer flags content, the generation stops and you get a message explaining why. Creators who repeatedly trigger the hard rules get their accounts restricted or banned.
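The layered behavior above (run each check in order, stop at the first failure, surface the reason to the creator) can be sketched as a short pipeline. The function names are hypothetical stand-ins for the real classifiers.

```python
from typing import Callable

# Each layer returns (passed, reason). The first failing layer stops
# the generation and its reason is what the creator sees.
Layer = Callable[[], tuple[bool, str]]

def run_moderation(
    lyrics_check: Layer,            # text safety classifier on prompt/lyrics
    audio_fingerprint_check: Layer, # match against known commercial recordings
    visual_check: Layer,            # visual classifier on rendered frames
) -> tuple[bool, str]:
    for layer in (lyrics_check, audio_fingerprint_check, visual_check):
        passed, reason = layer()
        if not passed:
            return False, reason  # stop generation, explain why
    return True, "ok"
```

Random human review on a fraction of passing outputs would sit after this pipeline, not inside it, since it cannot block a generation that already completed.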
The cover song question
You can cover a copyrighted song on Star Singer for personal, non-commercial use. Most jurisdictions tolerate this kind of personal use, and it mirrors what millions of people already do on TikTok and YouTube every day. Commercial distribution of a cover requires a mechanical license from the rights holder; we flag obvious commercial uploads and require attestation.
If a rights holder asks us to take down a specific cover, we do that promptly. We are building a direct licensing portal that will let labels set preferences for their catalog at the song level.
Age and minors
Star Singer is 13+ in most regions and 16+ in the EU and UK. For users under 18, we apply stricter moderation around likeness use, require a guardian to agree to the Terms on their behalf, and disable public publishing by default. You cannot clone a minor's voice unless you are the parent or guardian of record.
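The age gates above reduce to two small rules: a region-dependent signup minimum, and a stricter default profile for anyone under 18. A sketch, with a deliberately abbreviated region set and hypothetical setting names:

```python
# Regions with a 16+ minimum (EU and UK); abbreviated for illustration.
SIXTEEN_PLUS_REGIONS = {"AT", "BE", "DE", "ES", "FR", "GB", "IE", "IT", "NL"}

def signup_allowed(age: int, region: str) -> bool:
    # 16+ in the EU and UK, 13+ elsewhere.
    minimum = 16 if region in SIXTEEN_PLUS_REGIONS else 13
    return age >= minimum

def default_restrictions(age: int) -> dict:
    # Under-18 accounts: guardian consent, stricter likeness
    # moderation, and public publishing off by default.
    if age < 18:
        return {
            "public_publishing": False,
            "strict_likeness_moderation": True,
            "guardian_consent_required": True,
        }
    return {
        "public_publishing": True,
        "strict_likeness_moderation": False,
        "guardian_consent_required": False,
    }
```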
The honest part
We are not going to claim a perfect record. AI systems are new, adversaries will find edge cases, and some harmful content will slip through every provider in this space. What we commit to is transparency about what we block, fast action when something goes wrong, and continuous improvement of the moderation stack. If you see something that should not be on Star Singer, tell us. We will respond.
Build creatively. Respect consent. That is the whole ethos.