gardnr a day ago

The model weighs 1.5GB [1] (the q4 quant is ~500MB)

The demo is impressive. It uses reference audio at inference time, and it looks like the training code is mostly available [2][3] with a reference dataset [4] as well.
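
For anyone who wants to poke at the weights locally, here's a minimal download sketch (assuming the huggingface_hub package; repo id taken from [1], local path arbitrary):

  # Sketch: pull the NeuTTS Air weights from the Hugging Face repo in [1].
  # Assumes `pip install huggingface_hub`; the local directory is arbitrary.
  from huggingface_hub import snapshot_download

  local_dir = snapshot_download(
      repo_id="neuphonic/neutts-air",  # full-precision weights, ~1.5GB
      local_dir="./neutts-air",
  )
  print(f"Model files in {local_dir}")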

From the README:

> NeuTTS Air is built off Qwen 0.5B

1. https://huggingface.co/neuphonic/neutts-air/tree/main

2. https://github.com/neuphonic/neutts-air/issues/7

3. https://github.com/neuphonic/neutts-air/blob/feat/example-fi...

4. https://huggingface.co/datasets/neuphonic/emilia-yodas-engli...

joshstrange a day ago

This is really neat. I cloned my voice and can generate speech from text, but I can't seem to generate longer clips. The README.md says:

> Context Window: 2048 tokens, enough for processing ~30 seconds of audio (including prompt duration)

But it's cutting off for me before even that point. I fed it a paragraph of text and it gets part of the way through it before skipping a few words ahead, saying a few words more, then cutting off at 17 seconds. Another test just cut off after 21 seconds (no skipping).

Lastly, I'm on an MBP M3 Max with 128GB running Sequoia. I'm following all the "Guidelines for minimizing Latency" but generating a 4.16-second clip takes 16.51s for me. Not sure what I'm doing wrong, or how you would use this in practice since it's not realtime and the limit is so low (and unclear). Maybe you are supposed to cut your text into smaller chunks and run them in parallel/sequence to get around the limit?
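
A rough sketch of that chunking workaround: split the paragraph into sentence-sized pieces that stay well under the ~30-second budget, synthesize each one, and concatenate the audio. The `synthesize` call below is a hypothetical stand-in for whatever inference function the repo actually exposes, and the 24 kHz output sample rate is an assumption:

  # Chunking workaround sketch: keep each request well under the context limit,
  # then stitch the resulting audio together. `synthesize(text) -> np.ndarray`
  # is a hypothetical stand-in for the model's real inference call, and the
  # 24 kHz output sample rate is an assumption.
  import re
  import numpy as np
  import soundfile as sf

  def chunk_text(text, max_chars=300):
      """Greedily pack whole sentences into chunks of at most max_chars."""
      sentences = re.split(r"(?<=[.!?])\s+", text.strip())
      chunks, current = [], ""
      for sentence in sentences:
          if current and len(current) + len(sentence) + 1 > max_chars:
              chunks.append(current)
              current = sentence
          else:
              current = f"{current} {sentence}".strip()
      if current:
          chunks.append(current)
      return chunks

  paragraph = "..."  # the long input that was getting cut off
  audio = np.concatenate([synthesize(chunk) for chunk in chunk_text(paragraph)])
  sf.write("out.wav", audio, 24000)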

nopelynopington 4 days ago

If this lives up to the demo, it's a huge development for anyone looking to do realistic TTS without paying to use an API.

  • kristopolous 21 hours ago

    There are quite a number of pretty low-overhead models around that do that in realtime these days.

    • MarsIronPI 6 hours ago

      But how many of them support voice cloning?

      (Genuine question; I haven't seen any other than this one.)

mrklol 11 hours ago

The model card says it only supports English; the demos on their page for other languages seem to use an older model, as the quality is worse.

But the current one seems really good; I tested it quite a bit with multiple kinds of input.

ks2048 a day ago

Every couple of weeks I see a new TTS model showcased here and it’s always difficult to see how they differ from one another. Why don’t they describe the architecture and details of the training data?

My cynical side thinks people just take the state-of-the-art open-source model, use an LLM to alter the source, do minimal fine-tuning to change the weights, and then claim “we built our own state-of-the-art TTS”.

I know it’s open source, so I can dig into the details myself, but are there any good high-level overviews of modern TTS comparing/contrasting the top models?

  • popalchemist 18 hours ago

    The special sauce here is that it is built on a very small LLM (Qwen 0.5B), which means it can run CPU-only, or even on small devices like a Raspberry Pi or a mobile phone.

    Architecturally it's similar to other LLM-based TTS models (like OuteTTS), but the underlying LLM's license lets them release it under Apache 2.
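
    For a sense of scale, the ~500MB q4 quant mentioned in the top comment should load comfortably on CPU with llama-cpp-python. A sketch (the GGUF filename is hypothetical, and this only loads the LM backbone, not the codec that turns tokens into audio):

      # Illustrative only: loading a q4 GGUF export of the ~0.5B backbone on CPU.
      # Assumes `pip install llama-cpp-python`; the filename is hypothetical, and
      # the codec/decoder that turns predicted tokens into audio is not shown.
      from llama_cpp import Llama

      backbone = Llama(
          model_path="./neutts-air-q4.gguf",  # ~500MB quant per the top comment
          n_ctx=2048,      # matches the context window quoted in the README
          n_threads=4,     # modest CPU budget, e.g. a Raspberry Pi class board
      )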

  • DecoPerson 21 hours ago

    Without the resources to run a proper study of whether the quality is actually better or worse than the alternatives, these open TTS models have to be judged on what you think of their output. (That is, do your own study.)

    I've found some of them to be surprisingly good. I keep a list of them, as I have future project ideas that might need a good one, and each has its own merits.

    I've yet to find one that does good informal spoken Chinese. I'd appreciate it if anyone can suggest one!

miki123211 20 hours ago

> Install espeak (required dependency)

This means using this TTS in a commercial project is very dicey due to the GPL-3 license.
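
For context, espeak is the usual grapheme-to-phoneme front end for models like this, e.g. via the phonemizer package, which is why it ends up as a runtime dependency rather than a build-time one. A sketch of that typical pattern (not necessarily this repo's exact code):

  # Sketch of the usual espeak-backed G2P step (not necessarily this repo's code):
  # text is converted to phonemes before being fed to the model, so espeak is
  # needed at runtime, not just at build time.
  from phonemizer import phonemize  # pip install phonemizer; needs espeak-ng installed

  phones = phonemize(
      "NeuTTS Air runs on CPU.",
      language="en-us",
      backend="espeak",
      strip=True,
  )
  print(phones)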

  • mlla 13 hours ago

    If only English support is required, eSpeak could be replaced with MisakiSwift, which is under Apache 2.0: https://github.com/mlalma/MisakiSwift

    • diggan 12 hours ago

      Unfortunately it seems to be Mac/iPhone only. Any cross-platform alternatives?

kanwisher 12 hours ago

Need to hook this up to Home Assistant.

aitchnyu 15 hours ago

Tangential: how easy is it to verify the watermark with a smartphone, and how easy is it to erase it?

baby 17 hours ago

BTW, I was looking to train a TTS on my voice. What's the best way to do that locally today?

curioussquirrel a day ago

Could we finally get a decent open-source TTS app for Android? This project is very cool.

  • hsjdbsjeveb a day ago

    SherpaTTS?

    On F-Droid.

    • deknos 15 hours ago

      I thought this uses Coqui, which is not really open source?

oidar 21 hours ago

I really wish these voice-cloning TTS models would incorporate some sort of prosody control.