# The $50 Challenge
A few days ago I got accepted into the MiniMax developer program. The email was short and direct: here's a $50 API voucher, we're curious to see what you'll build. That's it. No strings, no required deliverable, just fifty dollars and a question.
Some context on me: I've spent 15 years building backends, data platforms, and pipelines. I'm comfortable with databases, APIs, infrastructure — the stuff nobody sees. What I've never done is build something consumer-facing and try to get people to use it. Never marketed a product, never asked anyone to pay for something I made. That whole muscle is atrophied, if it ever existed.
So I made a bet with myself. I'd use the MiniMax credits to build a community feature for my personal site — an AI-powered jukebox where visitors could generate music tracks and interact with each other's creations. Then I'd try to sustain it through crowd-funding. If I can't convince a handful of people that this is worth keeping alive, there's probably no point trying anything more ambitious on the commercial side.
The other piece of context: I didn't build this alone. Claude Code was my coding partner through the entire process. I'm going to be transparent about that from the start because the human+AI dynamic is a big part of this story. What I directed, what I caught, where Claude surprised me, where it fell short — that's all in here. This isn't a hype piece about AI-assisted development. It's a report from someone figuring it out in real time.
Here's how four sessions across one night and one morning turned a $50 voucher into a live community feature.
## From Zero to Jukebox
Session one started around 11pm on a weeknight. I had a rough idea: visitors type a prompt describing a song, MiniMax generates it, the track shows up in a public feed. No accounts, no logins, just show up and make music.
I described the vision to Claude and it scaffolded the full architecture in one pass. The first commit touched 30 files: 11 UI components, 5 API routes, a Supabase edge function for music generation, database migrations, and 106 tests. One commit.
The core generation flow uses a fire-and-forget pattern. A visitor submits a prompt, the API creates a pending track in Supabase, then triggers an edge function that calls MiniMax's music-2.5+ model. When generation completes (usually 30-60 seconds), the edge function updates the track status and stores the audio URL. The visitor's browser polls for updates.
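That polling loop can be sketched as a small standalone function. This is a simplified illustration, not the site's actual code — the status-fetching callback and the status names are assumptions standing in for whatever the real track endpoint returns:

```typescript
// Possible statuses for a track while MiniMax works on it (illustrative names).
type TrackStatus = {
  status: "pending" | "generating" | "complete" | "failed";
  audioUrl?: string;
};

// Poll until generation settles, one way or the other. The fetcher is
// injected so the loop stays decoupled from any particular API route.
async function pollTrack(
  fetchStatus: () => Promise<TrackStatus>,
  intervalMs = 2000,
  maxAttempts = 60,
): Promise<TrackStatus> {
  for (let i = 0; i < maxAttempts; i++) {
    const track = await fetchStatus();
    // Stop as soon as the track is done — success or failure.
    if (track.status === "complete" || track.status === "failed") return track;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  // Give up after the attempt budget; the UI can offer a retry.
  return { status: "failed" };
}
```

In the real app the fetcher would hit the track's status endpoint; the interval and attempt cap bound how long a visitor's browser keeps asking.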
One decision I made early: no user accounts. Visitors interact anonymously, identified only by a daily-rotating hash of their IP and user agent. This keeps things frictionless while still enabling per-visitor rate limiting. It was also a deliberate privacy stance — I don't want to know who my visitors are, and I don't want their email addresses. I need just enough identity to prevent spam and enable reactions, not one bit more. A daily-rotating hash gives me exactly that. It means fire reactions reset every day, which I initially saw as a bug but now see as a feature: tracks have to earn their fires fresh each day.
Here's the visitor hash — it rotates daily so there's no persistent tracking:
```typescript
import { createHash } from "node:crypto";

// getDailySalt() is a helper (defined elsewhere) that returns a
// server-side secret which changes once per day.
export function getVisitorHash(request: Request): string {
  const forwarded = request.headers.get("x-forwarded-for");
  const ip = forwarded?.split(",")[0]?.trim() ?? "unknown";
  const ua = request.headers.get("user-agent") ?? "unknown";
  const salt = getDailySalt();
  return createHash("sha256")
    .update(ip + "|" + ua + "|" + salt)
    .digest("hex")
    .slice(0, 16);
}
```

The UI followed the site's "Ember" branding — dark background, warm cream text, that #c75c2c accent color. Claude nailed the terminal aesthetic without me having to micro-manage component styles. The track cards show a waveform visualization, playback controls, and the original prompt. It even matched the monospace vibe of the rest of the site without being told to. It looks like it belongs, which is more than I expected from a first pass.
Around 12:30am I generated the first track. I typed "lo-fi jazz for debugging at midnight" and waited. Thirty seconds later, a piano riff with brushed drums started playing through my laptop speakers. It sounded... good? Like, genuinely good. I sat there for a minute just listening, slightly stunned that this worked on the first try. The apartment was dead quiet except for this warm little piano loop bleeding out of my laptop, and I remember thinking: I should be asleep, but I don't want to stop this.
That feeling wore off quickly. Because the next thing I had to do was actually review what had just been committed.
## The Vibe Coding Reality Check
Thirty files in one commit. Let's sit with that for a second.
I didn't write those files. I described what I wanted, reviewed the output, asked for adjustments, and approved the result. But the actual keystrokes, the architectural decisions at the function level, the naming conventions, the error handling patterns — those came from Claude. My role was more like a tech lead doing a very fast code review than a developer writing code.
This is the part of AI-assisted development that doesn't get talked about enough. The speed is real. But the speed comes with a specific cost: you're now responsible for code you didn't write and don't have muscle memory for. You can read it, understand it, even approve it — but you didn't think it into existence line by line. That gap matters when something breaks at 2am.
I did double-check certain things. The Supabase Row Level Security policies got a careful read — that's where data leaks happen. The rate limiting logic got scrutinized. The API route handlers got a pass for obvious injection vectors. But did I trace every component's render path? No. Did I verify every edge case in the 106 tests? Also no.
And about those 106 tests — Claude wrote those too. They pass, they cover the main flows, but when I actually sat down to read through them, I found the coverage was thinner than it looked. There were twelve tests on the track card component that all tested slight variations of rendering props, but not a single test for what happens when the audio URL comes back null from MiniMax — which is a real failure mode listed in their API docs. Green checkmarks, blind spots. A test suite that gives you confidence without earning it is worse than no tests at all, because at least with no tests you know you're flying blind.
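To make the blind spot concrete, here's the kind of guard those twelve rendering tests never exercised — a sketch with hypothetical names, showing how an edge function can map a null audio URL to a failed status instead of silently storing a broken track:

```typescript
// Shape of the relevant part of a generation result (illustrative, not
// MiniMax's actual response type).
type GenerationResult = { audioUrl: string | null };

// A null audio URL is a real failure mode per the API docs — treat it as a
// failed generation rather than marking the track complete.
function resolveTrackStatus(result: GenerationResult): "complete" | "failed" {
  return result.audioUrl ? "complete" : "failed";
}
```

One two-line function, two one-line tests — and the failure mode the prop-variation tests never touched is covered.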
Vibe coding has a debt that comes due when something breaks. If you don't understand the code well enough to debug it without AI assistance, you haven't saved time — you've borrowed it.
I spent about 45 minutes after that first commit just reading. Not fixing anything, not even taking notes — just building a mental map. I traced the generation flow from form submission through the edge function and back. I found one place where an error in the MiniMax callback would silently swallow the failure, leaving a track stuck in "generating" forever. I flagged it, Claude fixed it in one shot. That 45 minutes probably saved me a 2am debugging session later. That's the tax. It's real, it's unavoidable, and anyone telling you AI-assisted development is "10x faster" is probably not counting it.
## Making It Shareable
Session two kicked off around 1am. The jukebox worked, but I realized something obvious: if people can't share individual tracks, the whole community angle falls apart. Nobody's going to copy-paste a URL to the main jukebox page and say "scroll down to the third track." Each track needed its own page, its own URL, its own preview.
This meant dynamic OG images. When you share a track link on Twitter or Discord or iMessage, you need a 1200x630 PNG that shows the track title, the prompt, and something visually interesting. I used Satori with Next.js ImageResponse to generate these on the fly.
The fun part was making each track's background unique. Instead of a static template, I hash the track ID into a gradient — deterministic, so the same track always gets the same colors, but every track looks different.
Here's the gradient hash — same input always produces the same visual:
```typescript
function hashGradient(id: string): string {
  let hash = 0;
  for (let i = 0; i < id.length; i++) {
    hash = (hash << 5) - hash + id.charCodeAt(i);
    hash |= 0; // clamp to a 32-bit integer on each iteration
  }
  const hue1 = Math.abs(hash % 360);
  const hue2 = (hue1 + 40 + Math.abs((hash >> 8) % 60)) % 360;
  const angle = Math.abs((hash >> 16) % 360);
  // Each track ID produces a unique, deterministic gradient
  return "linear-gradient(" + angle + "deg, "
    + "hsl(" + hue1 + ", 40%, 15%), "
    + "hsl(" + hue2 + ", 35%, 10%))";
}
```

The share button detects context: on mobile it triggers the native share sheet via the Web Share API; on desktop it copies the link to the clipboard with a brief toast confirmation. Small thing, but it's the kind of detail that determines whether anyone actually shares.
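A sketch of that context detection — the navigator surface is injected as a parameter so the logic stays testable; the type and function names are illustrative, not from the actual codebase:

```typescript
// The slice of the browser's navigator object this function cares about.
type ShareSurface = {
  share?: (data: { title: string; url: string }) => Promise<void>;
  clipboard: { writeText: (text: string) => Promise<void> };
};

async function shareTrack(
  nav: ShareSurface,
  url: string,
  title: string,
): Promise<"shared" | "copied"> {
  if (nav.share) {
    // Mobile browsers expose the Web Share API; hand off to the native sheet.
    await nav.share({ title, url });
    return "shared";
  }
  // Desktop fallback: copy the link and let the caller show a toast.
  await nav.clipboard.writeText(url);
  return "copied";
}
```

In the browser you'd call `shareTrack(navigator, trackUrl, trackTitle)` and show the toast when the result is `"copied"`.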
I also added a sponsor CTA on each track page. The idea is simple: show the running cost transparently. At current MiniMax pricing, 10 tracks per day costs roughly $1.50. That's about $45 a month. The jukebox runs as long as there's credit. I'm not hiding behind vague "support us" language — the math is right there. Here's what it costs, here's how long the credits last, here's how to add more. And that 10-per-day cap isn't a server capacity limit — it's a sustainability decision. Each generation should feel intentional, not disposable, and the burn rate needs to stay manageable while I figure out whether this thing has legs.
By 2am, every track had a shareable page with a unique OG image, a working audio player, and a clear path to supporting the project. I went to bed.
## Community Reactions
Session three started around 9am with coffee and a fresh perspective. The jukebox worked. You could generate tracks and share them. But it was still fundamentally passive — you listen, maybe you share, and that's it. There was no way for visitors to interact with each other's music.
I wanted something lightweight. Not comments (too much moderation), not ratings (too judgmental), not likes (too generic). I landed on a fire reaction. One emoji per visitor per track, toggle on/off. No downvotes, no negative reactions at all. That last part was deliberate — I want people to feel good about creating, not anxious about being judged. If you don't like a track, you just move on. The whole point of this thing is to encourage people to type weird prompts and see what happens. A downvote button kills that impulse. A fire button rewards it.
Before writing any code, I wrote a spec — about 200 words in a markdown file. This was a deliberate choice: I've found that describing a feature in plain language before handing it to Claude produces significantly better results than trying to iterate through prompts. The spec covered the data model, the API contract, the UI behavior, and the edge cases (what happens if someone fires a track while offline? What about race conditions on the count?).
The core of the reaction system is a Postgres function that handles the toggle atomically. No race conditions, no stale counts — insert or delete in one transaction with the denormalized count updated in the same call:
```sql
CREATE OR REPLACE FUNCTION toggle_fire(
  p_track_id UUID, p_visitor_hash TEXT, p_action TEXT
) RETURNS TABLE(fired BOOLEAN, fire_count INTEGER) AS $$
BEGIN
  IF p_action = 'fire' THEN
    INSERT INTO track_fires (track_id, visitor_hash)
    VALUES (p_track_id, p_visitor_hash)
    ON CONFLICT (track_id, visitor_hash) DO NOTHING;
    IF FOUND THEN
      UPDATE jukebox_tracks
      SET fire_count = jukebox_tracks.fire_count + 1
      WHERE id = p_track_id;
    END IF;
  ELSIF p_action = 'unfire' THEN
    DELETE FROM track_fires
    WHERE track_id = p_track_id AND visitor_hash = p_visitor_hash;
    IF FOUND THEN
      UPDATE jukebox_tracks
      SET fire_count = GREATEST(jukebox_tracks.fire_count - 1, 0)
      WHERE id = p_track_id;
    END IF;
  END IF;
  RETURN QUERY SELECT
    EXISTS(SELECT 1 FROM track_fires tf
           WHERE tf.track_id = p_track_id
             AND tf.visitor_hash = p_visitor_hash),
    jt.fire_count
  FROM jukebox_tracks jt WHERE jt.id = p_track_id;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;
```

On the client side, the fire button uses optimistic updates — the UI flips immediately and syncs with the server in the background. If the request fails, it rolls back. The visitor's fired state persists in localStorage so it survives page refreshes without needing a server round-trip on every load.
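That optimistic flow can be sketched as a plain function. Here `render` stands in for a React state setter and `apiToggle` for a hypothetical wrapper around the toggle endpoint — neither name is from the actual codebase:

```typescript
type FireState = { fired: boolean; count: number };

async function optimisticToggle(
  current: FireState,
  render: (s: FireState) => void,
  apiToggle: () => Promise<FireState>,
): Promise<void> {
  // Flip immediately so the button feels instant.
  render({
    fired: !current.fired,
    count: current.count + (current.fired ? -1 : 1),
  });
  try {
    // The server's answer is authoritative; reconcile once it arrives.
    render(await apiToggle());
  } catch {
    // Request failed: roll back to the pre-toggle state.
    render(current);
  }
}
```

The important property is that the visitor sees a state change on click, and a failed request quietly restores the truth instead of leaving a phantom fire.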
I also added a trending sort. The algorithm is borrowed from Hacker News: fire_count / (hours_since_creation + 2)^1.5. This means a track with 5 fires that was created an hour ago ranks higher than a track with 20 fires from yesterday. It rewards recent engagement without completely burying older tracks. The +2 in the denominator prevents brand-new tracks from having an infinite score.
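For concreteness, the scoring function is tiny — a direct transcription of the formula above:

```typescript
// Trending score: fire_count / (hours_since_creation + 2)^1.5
function trendingScore(fireCount: number, hoursSinceCreation: number): number {
  // The +2 keeps brand-new tracks from dividing by ~zero and blowing up.
  return fireCount / Math.pow(hoursSinceCreation + 2, 1.5);
}
```

Plugging in the example from the paragraph: 5 fires at one hour old scores about 0.96, while 20 fires at a day old scores about 0.15 — recency wins.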
By 10am, the community layer was live. Visitors could fire tracks, see what was trending, and the whole thing felt... social. Like a tiny corner of the internet where strangers were collectively deciding which AI-generated music was worth listening to. I fired one of the overnight tracks myself — someone (me, hours earlier, but it felt like someone else) had prompted "ambient rain on a tin roof" and the result was genuinely atmospheric. This was the first time the jukebox felt like more than a tech demo.
## The Morning After
Session four, 10:47am. I opened Vercel to check how the overnight deployment was holding up, and the OG image functions were returning 500 errors. Every single one. The track pages worked fine, but share previews were completely broken.
The culprit was font loading. Claude had used readFileSync with process.cwd() to load the Geist Mono font file. This works perfectly in local development where the filesystem is a normal filesystem. On Vercel's serverless functions, process.cwd() points to a read-only deployment artifact where the file layout doesn't match your local project structure.
Here's the before and after — the fix tells the bundler to include the font as a dependency:
```typescript
// BEFORE: works locally, crashes on Vercel
import { readFileSync } from "node:fs";
import { join } from "node:path";

const font = readFileSync(
  join(process.cwd(), "src/app/fonts/GeistMono-Regular.ttf")
);
```

```typescript
// AFTER: tells the bundler to include the font file
import { readFile } from "node:fs/promises";
import { fileURLToPath } from "node:url";

const fontUrl = new URL(
  "../../../../fonts/GeistMono-Regular.ttf",
  import.meta.url
);
const font = await readFile(fileURLToPath(fontUrl));
```

But that was only the first problem. Once the font loaded, Satori started complaining about CSS properties it doesn't support. Then there were JSX structure issues. Then rendering quirks. It took five commits to get the OG images working correctly in production:
| Fix | Problem | Solution |
|---|---|---|
| 1 | Font loading crashes serverless | Switch to import.meta.url for bundler inclusion |
| 2 | Satori CSS limitations | Replace -webkit-line-clamp with JS truncation |
| 3 | Multi-child node errors | Fix JSX structure |
| 4 | CSS border triangles | Use unicode ▶ instead |
| 5 | Layout composition | Move play button to album art overlay |
Five commits, five deploys, probably 40 minutes total. Each fix was small. The pattern was the same every time: check Vercel logs, identify the error, describe it to Claude, review the fix, deploy, check again.
The punchline here is important: AI wrote the OG image code. AI helped fix the OG image code. But I had to notice it was broken. If I hadn't checked Vercel that morning, those share previews would have been broken for who knows how long. Every link shared on social media would have shown a generic fallback instead of the beautiful per-track gradients.
Nobody's monitoring your AI-generated code for you. That part is still your job.
And there's a specific feeling to debugging code you didn't write. When I write a function and it breaks, I can retrace my thought process — what I was trying to do, what assumptions I made, what shortcuts I took. With AI-generated code, you don't have that context. You're reading the code cold, inferring intent from structure. It's more like getting paged for an incident at a new job than debugging your own work. But that's actually a skill worth developing: reading with intent rather than recall. Looking at a function and asking "what was this trying to accomplish?" instead of "what was I thinking?" It's the same skill you use reviewing pull requests from teammates, but applied to your own codebase.
This is the broader shift in failure modes that I keep coming back to. It's no longer "I wrote a bug." It's "I shipped a bug I didn't review carefully enough." The accountability is the same — it's your name on the deploy — but the path to the bug is different, and so is the path to finding it.
## The Bet Is On
The jukebox is live at datagobes.dev/community/jukebox. You can go there right now, type a prompt, and have an AI-generated track playing in under a minute. You can fire the tracks you like. You can share them with anyone.
Here's the math: I started with roughly $50 in MiniMax credits. At the current rate of about 10 tracks per day, that's around $1.50 daily — call it 33 days of runway. By the time you're reading this, some of that runway is already burned. The clock is ticking, which is kind of the point.
The experiment is straightforward. Can a small community feature, built transparently and funded openly, sustain itself? I'm not trying to build a business here. I'm trying to answer a simpler question: if you make something people enjoy and you're honest about what it costs, will enough people care to keep it going? That's the most human part of this whole project — a bet on behavior that no model can predict. I'm betting that transparency plus a thing worth using adds up to sustainability. I'm genuinely not sure if I'm right.
I have ideas for where this could go next. Playlist curation, collaborative prompting, maybe genre challenges. But none of that matters if the basic premise — that people will show up, create, react, and occasionally contribute — doesn't hold. So I'm starting with the minimum and seeing what happens.
If you've read this far, go try the jukebox. Generate something weird. Fire a track that makes you smile. And if you want the music to keep playing, the sponsor link is right there on the page — the math is transparent and so is the ask.
This whole thing took about 12 hours across four sessions. A backend engineer with no consumer product experience, an AI coding partner, and $50 in API credits. I built something I'm genuinely proud of, which — after 15 years of building things that live behind dashboards nobody sees — feels like a small but real shift. I'm not sure where it goes from here. But for the first time, I built something and actually wanted to show it to people. That counts for something.