OpenAI is developing a text-to-music tool trained with Juilliard students

OpenAI is reportedly developing a new tool that could redefine how music is created. The model, still in development, is designed to generate original music from text or audio prompts, enabling users to create compositions or soundtracks without traditional instruments or production tools.

According to early reports, the tool will allow users to describe what they want, such as a guitar backing for a vocal take or a cinematic score for a short film, and the model will produce the corresponding audio.

To train the system, OpenAI is collaborating with students from the Juilliard School in the United States, who are helping annotate musical scores. The annotated data is intended to teach the model to interpret musical structure, style, and emotional tone more effectively than raw, uncurated datasets would.

This isn’t OpenAI’s first experiment in music generation. The company previously released MuseNet and Jukebox, which demonstrated the potential of AI in composition but were never made widely accessible. In recent years, OpenAI’s focus has shifted to speech-related models, such as those used for transcription and voice synthesis, giving it a foundation to expand into broader sound and music applications.

Competing in an increasingly crowded AI music market

OpenAI’s entry into music generation comes at a time when the space is already seeing significant activity from players such as Google DeepMind, with models like MusicLM and Lyria, and start-ups Suno and Udio. Both Suno and Udio are currently facing lawsuits from the Recording Industry Association of America (RIAA) over the alleged use of copyrighted recordings in their training data, cases that could have far-reaching implications for how AI music systems are developed and regulated.

By contrast, OpenAI’s collaboration with Juilliard signals an effort to build its training process around licensed or educational data. This approach could help the company position itself as a more compliant and ethically grounded player in the field.

Expanding OpenAI’s creative ecosystem

The music tool is expected to be part of OpenAI’s broader push into creative media, following launches such as ChatGPT Atlas, an AI-enabled browser, and Sora, its text-to-video generation platform. The company has not yet confirmed whether the new model will be a standalone product or integrated within existing services like ChatGPT or Sora.

If integrated, the tool could enable users to generate video and music together in a single workflow, a direction that aligns with OpenAI’s goal of supporting multimodal creativity.

While details around the launch remain unclear, the implications for musicians, producers, and creators could be significant. Text-to-music generation tools are increasingly seen as production aids that can speed up workflows, support idea development, and expand access to music creation beyond traditional studio environments.
