The second item on the menu is Generators, a set of AI tools for generating assets such as images, textures, animations, and sound. Unlike Assistant, these rely on both third-party models – such as those from Scenario, Inc. and Layer AI, which are built on Stable Diffusion, FLUX, Bria, and GPT-Image foundation models – and Unity’s own first-party AIs.

When it comes to those third-party models, Unity notes that while “Partner Model providers” do not train their models with your developer data, Unity does send them your anonymized data, including prompts, reference assets, and more, “for the sole purpose of running the services.”

It’s also worth pointing out that some models block prompts likely to generate IP/copyright-infringing content, producing either a null response, a blank image, or a message asking you to modify your prompt. This suggests that copyrighted materials were indeed used to train the models Unity now employs, but that AI developers have implemented safeguards to keep such material from surfacing in the generated outputs.

Lastly, there’s a new Inference Engine, which replaces Sentis and lets you run AI models locally in the Unity Editor or on end-user devices at runtime. According to the devs, Inference Engine doesn’t come with built-in models but lets you import your own pre-trained models or ones obtained from model repositories like Hugging Face.
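In practice, loading an imported model and running inference on it might look roughly like the C# sketch below. This is an illustration only: the class names (`ModelAsset`, `ModelLoader`, `WorkerFactory`, `BackendType`) follow Unity’s earlier Sentis API, which Inference Engine supersedes, so the exact namespace and signatures in current releases may differ.

```csharp
// Hypothetical sketch based on Unity's earlier Sentis API;
// Inference Engine's actual namespace and method names may differ.
using UnityEngine;
using Unity.Sentis; // assumption: Inference Engine may expose a different namespace

public class ModelRunner : MonoBehaviour
{
    // An .onnx model imported into the project,
    // e.g. one downloaded from Hugging Face.
    public ModelAsset modelAsset;
    IWorker worker;

    void Start()
    {
        Model model = ModelLoader.Load(modelAsset);
        // Pick a backend: GPU compute on capable devices, CPU elsewhere.
        worker = WorkerFactory.CreateWorker(BackendType.GPUCompute, model);
    }

    public void Run(float[] input, int width)
    {
        // Wrap raw floats in a tensor, run the model, and peek at the result.
        using var tensor = new TensorFloat(new TensorShape(1, width), input);
        worker.Execute(tensor);
        var output = worker.PeekOutput() as TensorFloat;
        // Reading values back to the CPU requires an extra readback call
        // whose name varies between versions, so it is omitted here.
    }

    void OnDestroy() => worker?.Dispose();
}
```

The backend choice is the key design decision this workflow exposes: the same imported model can run on GPU compute in the Editor and fall back to CPU on less capable end-user devices.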



