For the first time, Google’s
machine-learning system, Magenta, has shown that it can compose music, producing a
90-second clip.
“To start, Magenta is being developed
by a small team of researchers from the Google Brain team. If you’re a
researcher or a coder, you can check out our alpha-version code. Once we have a
stable set of tools and models, we’ll invite external contributors to check in
code to our GitHub. If you’re a musician or an artist, we hope you’ll try using
these tools to make some noise or images or videos… or whatever you like,” said
a blog post from Google.
“Our goal is to build a community
where the right people are there to help out. If the Magenta tools don’t work
for you, let us know. We encourage you to join our discussion list and shape
how Magenta evolves. We’d love to know what you think of our work, as an
artist, musician, researcher, coder, or just an aficionado. You can follow our
progress and check out some of the music and art Magenta helps create right here
in this blog. As we begin accepting code from community contributors, the blog
will also be open to posts from these contributors, not just Google Brain team
members.”
The Magenta project runs on top of
TensorFlow, Google’s open-source AI engine. While it might sound a little odd at first
that Google is opening this not-so-simple source code for anyone to use, it reflects
the company’s hope that open-sourcing its AI engine will help it evolve
and grow in a way that people are comfortable with.
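The snippet below is a minimal, hypothetical sketch of the kind of system Magenta builds on: a recurrent neural network, written here in plain TensorFlow/Keras, that predicts the next note of a melody and is sampled one step at a time. It is not Magenta’s actual code; the names (VOCAB, generate_melody, the seed notes) are illustrative assumptions.

# Toy next-note melody generator -- an illustrative sketch, not Magenta's code.
import numpy as np
import tensorflow as tf

VOCAB = 128  # toy note vocabulary: one symbol per MIDI pitch

# A tiny recurrent model that maps a sequence of notes to a distribution over
# the next note. It is left untrained here (random weights), which is enough to
# show the sampling loop; Magenta's models are trained on real music.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 32),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(VOCAB, activation="softmax"),
])

def generate_melody(seed, length=32):
    """Sample `length` notes by repeatedly predicting the next pitch."""
    notes = list(seed)
    for _ in range(length):
        context = np.array([notes[-16:]])                     # last 16 notes as context
        probs = model.predict(context, verbose=0)[0].astype("float64")
        probs /= probs.sum()                                  # renormalize before sampling
        notes.append(int(np.random.choice(VOCAB, p=probs)))
    return notes

print(generate_melody(seed=[60, 62, 64, 65]))                 # start from C, D, E, F

A trained model of roughly this shape, fed a few seed notes, is how such systems turn step-by-step next-note prediction into a finished clip like the 90-second piece described above.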