The importance of taste-making in the era of personal agents

Published: November 3, 2025

I've been working on honing my preferences and wants.

There’s been a lot of discourse about how engineers train AI systems with high-level objectives and allow them to simulate countless micro-decisions to achieve them. I believe that we, as consumers and end users, are moving toward that same abstraction in our own lives, though it’s been less discussed.

As the AI systems we’re all beginning to use at a growing pace get better at execution, we’re forced to get better ourselves at defining what it is we really want. Otherwise, the prompt runs its course and the agent does its thing. “Preference tuning” and even “prompt engineering” are really just tasks that pose the same questions we’ve been asking our whole lives: What do I like? What do I want?

The problem is that it’s getting harder to know. Our wants have become entangled with the systems built to study us. Cohort models and recommendation engines have blurred the line between genuine and engineered preference. We’ve been grouped into sameness, and the signal of real taste is barely audible beneath the constant noise of nudges.

Brian Eno said in a recent podcast that understanding what we like is difficult - that our sense of preference is slippery and constantly shifting. We rarely spend time tuning our tastes, though I think we have to now, given how much autonomy we’re handing over to AI agents.

Music happens to be an interesting medium that makes preference especially clear. It triggers pleasure and emotion, revealing taste in ways that are both true and increasingly measurable. I expect us to move toward a kind of mind reading that can detect what we want more accurately (perhaps even through audio itself...) so we’re no longer reliant on faulty, after-the-fact human self-assessment.

Like many, I’m okay abstracting away uninteresting tasks so long as I can clearly understand what I’m optimizing for in a given day and in this life. The decisions I make then become much more cerebral and high-level than before, and I’m fine with that. I choose the KPIs, define the beliefs beneath them, and act accordingly - or let my agent handle the rest.

The challenge now isn’t just offloading small decisions, but understanding and defining for ourselves the higher-order goals that may soon be fully executed on our behalf. And even if we do manage to isolate what it is we want, there’s another layer of uncertainty: is what we feel we want actually what we should want?

This, I think, will be the next frontier. I expect to see simulation-based systems and world models that help us explore that question - tools that can model scenarios and reveal which choices align with our higher-level, even philosophical, objectives.

Given this expectation, I think it’s worth intentionally honing our preferences now. We’ll need to strengthen our sensitivity to them: notice how things make us feel, and sharpen our awareness of resonance and rejection. Not to so predictably bring this back to music, but that’s partly why I really like playlisting, DJing, and other forms of taste-making as a concept. It’s the practice of taste in its purest form - deliberately finding things you like enough to share them with the world. Tastemakers exercise the muscle of preference well.

And that’s what we’ll all have to do. Because as agents learn to execute flawlessly, our job will be to make sure they’re optimizing for something we’re truly aligned with, whatever that means.

I’m interested in learning from people building systems that help consumers uncover their truest preferences - or model what those preferences should be.