Why not call it Slopify? Humans are the new vinyl.
https://flippa.com/12100071 - I was wondering why SmashHaus was for sale. (no affiliation) Peak value. It's only downhill from here for outsourced music.
I've tried and couldn't make it sound like Angine de Poitrine; it completely ignores the microtones and sounds more like Polyphia. It does look like AdP is the answer to AI... or we just haven't trained the models on enough microtonal music, likely due to the Western bias in the training data.
Wow, such hate for anything AI. I haven't used it and don't care to, but I have teenage kids and I've seen them hang out and tinker with tech and AI. They seem to love it.
Sloppy humans create sloppy output. The AI is just an amplifier; it has no motive.
> Sloppy humans create sloppy output. The AI is just an amplifier; it has no motive.
Yes! Go tell teenage kids they can harmonize a melody for more than four instruments by typing, rather than editing a sheet score, without any music theory background or ear training.
I'm sure you're right that AI-augmented workflows can (and do?) produce beautiful works that I would call art... it's just that the overwhelming majority of AI 'art' I experience on the internet is slop.
Seems to be where Udio and Suno were two years ago, but with a better initial UI than they had; Suno has since pulled significantly ahead. This isn't another Songsmith, but it's behind the curve right now. I'm sure Google will discontinue this in a year or two.
Agreed - I have been checking to verify whether this truly is a Google service or just something that links out to generic Google ToS and Support pages. It looks suspicious.
It can't remix; even ComfyUI on my desktop can remix. I've used Udio, Suno, ComfyUI with the music generation models, and one other site whose name I can't remember, since I used it through a friend.
They all kinda suck; you do have to run generation many times unless you're very lucky.
My friend and I wrote 10 albums between 1997 and 2007. We went solo for geographical reasons, and I stopped writing music altogether in 2017 or so, only doing arrangements, mashups, and mastering. I can't use this Google product to make a song. Source for credentials: my YouTube, SoundClick, and SoundCloud accounts, e.g. https://www.youtube.com/watch?v=HXro-e0e7aA
And now I don't even know if I ever want to get back into music, because I can rapidly generate a "good enough for the moment" track, like when the South Korean president attempted a coup: https://soundcloud.com/djoutcold/i-aint-even-writing-music-a... Oh, by the way, the lyrics are in Lojban except for "... and the people were pissed".
The oldest stuff on those three sites I mentioned is all hand-written by me over the years.
I've noticed that all of these music generators suffer from something like "mean" collapse (as opposed to mode collapse: you do get variance, but all results are tightly centered around similar-sounding songs).
The music is all just very average, it sounds like the most average song with the most average chord progression/drum pattern per genre.
I guess that makes sense if these are most likely next-audio-token predictors... but it'd be cool if there were a way to inject some kind of creativity/novelty into these, or at least tune up the temperature.
Everything so far just sounds like stock library music to me.
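For what it's worth, "tuning up the temperature" is a sampling knob, not a training change: it rescales the model's output distribution before each pick. A minimal sketch with made-up logits, assuming a next-token sampler (whether any of the hosted music products actually expose this knob is unknown):

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Pick a token index from temperature-scaled logits.

    T < 1 sharpens the distribution (safer, more 'average' picks);
    T > 1 flattens it (riskier, more surprising picks).
    """
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                          # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [5.0, 2.0, 1.0, 0.5]   # toy example: one dominant "most average" token

for t in (0.5, 1.0, 1.5):
    picks = [sample_with_temperature(logits, t, rng) for _ in range(1000)]
    print(f"T={t}: token 0 chosen {picks.count(0) / 10:.0f}% of the time")
```

At T=0.5 the dominant token wins nearly every time, which matches the "most average song" behavior described above; at T=1.5 the alternatives start showing up.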
It's good. I have a song I generated last year with Suno that stuck with me, and I just tried having Flow generate a variant; it was acceptable. Sometimes the lyrics get modified for no reason. It would be better if you could control emphasis by specifying tags or something along those lines, but it seems fun to play around with. If there were an intermediate step where a symbolic or partial representation of your input was shown for tweaking, it would be immensely powerful.
One of the key issues I encountered after a few song generations is that it feels very rushed, like it's constrained to a 3-minute limit per song, so it forces every section to conform to a very specific structure. I tried increasing the limit to 4 minutes, but it still gave me 3-minute songs.
Honestly, I feel like this product is showing up a bit late to the party, and it doesn't feel particularly innovative. There's nothing egregiously bad about it, but it doesn't add anything new or special that I could notice.
I don't have a microphone hooked up, so I can't try the voice interface, but it would be really fun if you could sing to it in order to iteratively compose a song. It could clean up your voice a bit and add music. Or you could hum out a beat that it converts into a track you slowly build up. Can anyone check whether those capabilities exist in the current product?
Overall, I'm not sure if a chat interface is the best way to produce a song. It feels very restrictive to have full songs as the primary iteration mechanism. In a text file and in code you can inspect or modify different sections or components very easily. I think a more human-focused tool would provide an on-ramp towards full music production, where you can focus on the parts that you care about and enjoy, while the AI tool fills the other parts with sausage. Right now you can chat with the tool but it appears to be quite limited in the kind of changes that it can make.
This is their recent acquisition (producer.ai) rebranded.
Before the purchase, the quality of generations had been going down for a while (IMO; subjective and anecdotal). I tested multiple iterations of their chat interface and was never thrilled with its ability to actually understand or adhere to prompts. I had liked their previous (Suno/Udio-like) iteration (Riffusion).
Curious to hear how it performs for people now and whether anything has improved.
Ok, someone explain the use case for this? Jingles? Making a song about your friend / sig other? Are people thinking they are going to sell these songs and create an AI artist?
I've summarized Supreme Court cases into Broadway musicals. The thing about memory is that novel input increases retention. So now I know about grouse hunting and explosives that fall off trains and their constitutional implications.
Another was a set of songs that helped me emotionally regulate on the drive home after couples therapy. The lyrics contained grounding exercises and mindfulness practices that helped me maintain awareness and presence.
Both did their job, but they were also music for utility, not necessarily for artistic enjoyment. So it's not entirely an apples-to-apples comparison.
Also, probably someone will game an algorithm to get revenue from a bajillion tracks of lofi slop.
It can generate something well produced, but it's really bad at applying taste or direction in the way a human does.
The workflow feels wrong. It should be closer to a DAW with chat, where the model outputs stems, samples, and arrangement parts instead of one finished track. Then you could target a specific sound, section, or idea and actually develop it.
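To make that concrete, here is one hypothetical shape such a tool could return instead of a single bounced file. This is a sketch only; all names are invented for illustration and don't correspond to any existing product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Stem:
    """One isolated instrument track that can be re-rolled on its own."""
    name: str          # e.g. "drums", "keys", "lead guitar"
    prompt: str        # the text that produced it, kept for targeted retries
    audio_path: str    # rendered audio for just this stem

@dataclass
class Section:
    """A named slice of the arrangement: verse, chorus, breakdown, ..."""
    label: str
    start_bar: int
    end_bar: int
    stems: list[Stem] = field(default_factory=list)

@dataclass
class Arrangement:
    bpm: float
    key: str
    sections: list[Section] = field(default_factory=list)

    def retry_stem(self, section_label: str, stem_name: str, new_prompt: str) -> None:
        """Re-prompt one stem of one section, leaving the rest of the song alone."""
        for section in self.sections:
            if section.label != section_label:
                continue
            for stem in section.stems:
                if stem.name == stem_name:
                    stem.prompt = new_prompt
                    # a real tool would call the model here and refresh audio_path
```

With output shaped like this, "make the drums more subdued in the chorus" becomes an edit to one stem of one section rather than a fresh roll of the entire track.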
Really bad at prompt adherence. I was trying to get it to compose a solo old-time banjo piece.
Couldn't get it to stop adding backing instrumentals at all, and the result sounded too much like bluegrass.
"solo banjo instrumental, strictly no other instruments" ... ten seconds later: drums, a fiddle, and a guitar join in.
Here's my personal take on what I'll call the new realm of "AI art". Whether it's prompting a music model or an image model, there is a huge space for creative output, limited only by the human imagination. Sure, tossing in a single prompt and letting the model crap out something will produce "slop". But if you pour your heart into exploring the high-dimensional landscape of the model, you can find truly amazing stuff. This is no different than exploring the creative landscape of music, photography, and other forms of art in the pre-LLM era.
I find that people who rush to negative judgement of LLM-generated art are not going far enough in the creative process to properly judge just how much juice there is to be squeezed out of those 50-billion-dimensional spaces.
Given how much Google loves to mash their offerings together and then sunset them, I live in fear of them killing YouTube Music (or whatever it's called) in favor of folding its functionality into this, and having my music cycle between my actual library and bespoke AI-generated stuff.
I can see the appeal if this ends up being good at iteration rather than just first-pass generation. A lot of AI music products look impressive for five minutes, but the real test is whether they help someone get closer to a specific thing they actually wanted to make.
If it's anything like Suno, it probably takes 30 to 40 attempts to dial in what you were looking for. (And don't get me wrong, the results can be great with Suno; there's just a lot of trial and error and dice rolling.)
I don't know anything about these AI tools, but it seems to me that the yield rates of all these AI media generators are right in the range of lootbox games. Kids "pull" them like slot machines for a given prompt, keeping no more than 1% of outputs. The rest is just thrown away, potentially useful only as negative data. So 600 per month total is probably just a couple of usable tracks per month.
Odds are for every 200 AI songs you generate, 2 or 3 are decent.
That’s a huge amount of messing around to get that handful of songs, then. If only 1% are good, you’re pulling the lever 100x more than you should need to.
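The back-of-the-envelope math behind this subthread, taking the figures above at face value (the 1% keep rate, the 600-per-month quota, and the 2-or-3-per-200 estimate are all guesses from this thread, not published numbers):

```python
keep_rate = 0.01        # "keeping no more than 1% of outputs"
monthly_quota = 600     # "600 per month total"

print(monthly_quota * keep_rate)   # 6.0 expected usable tracks per month
print(1 / keep_rate)               # 100.0 average pulls per keeper (geometric, 1/p)
print(2 / 200, 3 / 200)            # 0.01 0.015 -- the "2 or 3 per 200" estimate agrees
```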
Big tech companies are 50 companies in a trench coat; there isn't some great aligning directive. This feels like some random side project some employees felt like making.
They're a music store; they sell music, both to own and by renting their vast library out.
Google should learn not to shit where they eat.
Welcome to 2026's reality: most new music is already AI-generated. I don't like it, but it is what it is. YT Music is already full of AI slop; those tools aren't changing that.
If anything it gives Google control of the entire production->sale->delivery process.
I'm honestly not seeing a downside for Google here, can you elaborate?
Most new music by what definition? I'm certain more stuff is being churned out by these automated tools than by genuine human creativity, but that doesn't make it economically relevant if the only use it's seeing is random high school kids' YouTube channels. It's not seeing streams on services, and it's not bringing in revenue once created.
I just keep reporting AI slop videos (incl. music) on YT, and sometimes the videos or even entire channels vanish. I hope I'm contributing to keeping YT safe, but I'm just one guy, and they probably have a much bigger effort internally.
The downside for Google is, ultimately, the death of the company. Nobody wants AI slop; people go out of their way to actively avoid it and punish companies that promote it. Google is already running a huge risk by pushing Gemini into every service and permanently burning customers and users with it.
Microsoft is already seeing the downside of trying to Copilot everything. Their software is now partly slop, shit randomly breaks, and companies are cancelling Azure/Office subscriptions and moving to on-prem, FOSS, etc. They've pumped the brakes quite a lot, but the damage may be too great to mitigate now.
If Google wants to lose money in the long run, then by all means, please continue.
Once you have that particular brand of cancer, it's too late to save the company without drastic measures.
The people in charge here don’t give a fuck about the long term.
Reap as much profit for yourself as you can before everything inevitably collapses - that’s the prevailing trend.
Let the lizard brain take over and just feel good in the moment, why worry about the future.
Tried prompting some instrumental progressive metal with keyboard and guitar unison solos and some back-and-forth call-and-response riffing, and eventually it just kinda forgot there were even supposed to be keyboards in the song. Basically a slop knockoff of Liquid Tension Experiment.
The sound of the guitar is good, but the keyboard sounded really awful, just like a Casio toy keyboard pretending to be a piano. Truly awful sounding, which is when I prompted the AI to try to fix the tone, and it basically just removed the keyboard.
The drums were also waaaaay too prominent, so I asked it to make them a bit more subdued in the mix, and it just ended up slowing everything down to the point that it kinda sounded like generic radio alt-rock instead.
But basically, once the keyboards were forgotten, no amount of prompting could “convince” it to bring them back.
I tried Suno a few months ago out of morbid curiosity, and it was waaaay better than this. It actually got something that made my musician friends kinda nervous.
What I really hate about all of this, whether it’s music, images, video or anything else, is how much they all use the word “create.” As in, you can create the music you’ve always imagined.
You. Are. Not. Creating. Anything.
You are prompting. Then tweaking, changing, adjusting, etc. The tech is incredible, don’t get me wrong, but it’s advertised so blatantly as the user doing the creating.
Use it as a creativity tool, but don’t get caught up in the false belief that what it spits out is something you created.
Old man yells at cloud. Going back to my cave now.
It's worse than that: it extinguishes the creativity and originality I put into my prompts, and instead churns out unoriginal, formulaic crap. The crap sounds exquisite and realistic, though.
The models are primitive right now, but we’re clearly heading toward “AI as sound synthesis, human as artist” - much like how producers currently use a DAW to assemble premade loops and sounds from Splice, but with the producer now able to prompt any sound, filter, or effect they can imagine into existence and then rearrange them into a song.
See for example Suno Studio, which is not very good in my opinion, but shows the direction they’re going.
I’m a photographer; I have almost a quarter million photos in my archives. When I take a photo, 90% of it is composition, during which I move around and analyze lighting, background, aperture, shutter speed, exposure, and a whole lot of “what do I want to capture here?”
The other 10% is editing, which for me involves minor color adjustments, highlights, shadows, cropping, etc. I make all the decisions.
AI can generate an image based on a prompt, and that’s fine, but I would never, never claim to have created that output myself.
Does the guy who tells the composer "write a song" create?
No.
The line is somewhere in the middle.
So the only difference is the number of decisions and the amount of iteration. If someone spends 5 hours iterating with AI vs. 5 minutes on a photo, which one has the better claim to being a creative work?
If someone spends 5 hours communicating with an artist they're commissioning vs 5 minutes on a sketch on a napkin, I think the napkin has a stronger claim to creativity.
It is not necessary to draw a sharp line that clearly divides everything before saying “this is too far” about something that has, in fact, gone too far.
Good question. Maybe not cook, but consider someone who picked just the right ingredients and preparation for a sandwich. Combining flavors and textures in novel ways that are as surprising as they are delicious. I would ascribe more of the creative credit to that person vs. the one cutting the bread.
This is the type of thing that really doesn't interest me. Algorithmic junk that sounds "good", similar to the kind of writing an LLM generates. Sure, you can do it, but the main use cases are AI slop (IMO). Slopify is a great name.
I wanted it to make a "PAMS" or "JAM Creative" style radio jingle, like the jingles of '70s radio stations, but for my website. It failed miserably.
Really not a fan of the ChatGPT-style UI; did they even look at something like Suno at all? This seems a little silly to me.
The music sounds decent, but I feel like it's missing some things; to be fair, Suno still doesn't know what a Puerto Rican guiro is. I assume a lot of these AI platforms will take many iterations.
Something Suno needs to figure out, and maybe Google now too, is how to let someone pick a specific voice and get a genuinely unique one. I've heard a few songs in Suno with voices similar to those in my own songs, and it's kind of weird.
I do love making the songs as a hobby, so not a big deal. All in all, AI music is really fun to toy with, especially blending genres together.
One very noticeable difference from Suno is that Google Flow Music lets you make music videos, which I have yet to test. I wonder if I can use my Suno songs to make music videos for them; not sure I'm vibing with Google's music AI yet.
Aside: it makes me chuckle a little, since "Flow Music" is a reggaeton catchphrase from Arcangel; even though it was always called "Flow Factory", he would call it "Flow Music".
Edit:
There are some awkward factors Google will need to work out. While the instruments and voices sound nice and clear, the rhythm feels weirdly off for some songs; it's like the voices aren't matching the genre mix. It's also missing some nuances I've asked for; I assume it does not know what "wobble bass" means. Suno lets you describe nuanced, specific sounds or instruments and uses them how you describe.
I told it to have a dubstep breakdown in the middle, but it keeps the artist singing/rapping, which is bizarre; that's not how a breakdown would work...
Suno takes great care to make sure the voice always matches whatever is going on with the beat, including humming the beat / bass / brass / whatever instruments are being played.
Glad Suno is going to have some real competition, I just hope Google doesn't kill Suno with its bigger wallet, would be a shame.
Edit:
My final verdict: it feels less polished than Suno in terms of music, but it has more features. Suno lacks music video creation, which still annoys me; they let you make a lyric video in one single orientation/resolution, and you have zero control over it otherwise.
There's a "workspace builder" that you prompt, and it builds a web app that lets you create songs and whatnot. I'm not sure what all of its features are, but it is interesting as well.
If they get this more on par with Suno, they might take money from me for the first time since I left the Android/Google platform many moons ago.
Like a strange form of Gell-Mann amnesia: all AI output is probably this bad, but if we don't know any better, we don't know just how bad it is.
> My bad—I forgot to hook up the sound system.
And then it started playing jazz, which I'm not mad about. Nice to see Google trying fun stuff.
I especially love the glitchy UI sounds, although I suspect that's hardly intentional.
I could understand if this was an API that people built products around, but it seems to be geared directly at consumers.
Anyway. UMG will probably force you to sign over training rights in future record deals.
The models still can't rap. It sounds like you asked someone who didn't know what rap was to read a script.
Nice