What to capture at TPS 2026.

For two days, the most concentrated gathering of the global podcasting industry takes place in London. ~241 sessions across 9 stages. 10,000 attendees from 53 countries. 1,000+ brand and agency people in one room with 2,000+ creators. The industry doesn't gather like this anywhere else.

That density is the asset. Captured well, it becomes the raw material for everything TPS could become in the eleven months between shows — a year-round publication, a searchable archive, an industry intelligence product, a talent network, fuel for international expansion. The growth comes from unlocking what already happens in the room.

Six plausible directions for the eleven months between shows.

What follows is initial thinking, not a research-backed roadmap. In an ideal world, a proper audience study — talking to attendees, brands, sponsors, agencies — would tell us which of these directions the market actually wants and would pay for. These are six plausible places to start, drawn from what's known about the show and the industry around it. They're not mutually exclusive; most share inputs, which is the leverage argument for capturing widely now.

A year-round content engine

TPS as a publication, not just a festival. Weekly clips, monthly editorial pieces, a newsletter the industry actually opens. Keeps the brand present in the eleven months of silence and feeds the next year's pass sales without spending a penny on advertising.

A searchable session archive

Every session, by topic, by speaker, by stage. Access could either be folded into the pass — turning a £205 ticket into a year-long resource — or sold as a standalone subscription to people who want the archive without the live show. It solves the "I missed three sessions I wanted to see" problem either way.

An industry intelligence product

TPS is uniquely positioned to package what the industry is actually saying — themes that keep recurring across stages, questions speakers can't stop asking, shifts in tone year-on-year. Brands, agencies, platforms, and investors might pay for that synthesis. Nobody else has the raw material to produce it.

A speaker and talent network

If we know what every conversation, panel, and discussion across every stage was actually about, we can find ways to connect people who might never otherwise meet. The transcript and video corpus becomes a kind of introduction engine — matching the brand-side person who spent an hour discussing audio strategy with the agency lead who spent an hour discussing exactly that. It's TPS Connects taken to a whole other level.

Sponsor and partner case-study material

Brands that activated on the floor want proof their investment landed. Capturing the floor properly — what their stand looked like, who came through, what conversations happened — turns next year's sales conversations from "trust us" into "look what happened last year for X."

Fuel for international expansion

If TPS goes to a second market, proof-of-format material from London 2026 is what sells the proposition to local partners and sponsors. On top of the atmospheric layer — photography, audience-reaction footage — testimonials from named speakers and recordings of the sessions themselves would help prove that the substance in the room matched the scale of the room.

Five of those directions need the same raw material — what happened on stage.

Why now.

Two years ago, capturing every session would have been a hard-drive-full-of-stuff problem — you'd record everything, watch a fraction, and most of it would gather dust. Now it isn't.

Thanks to AI, we can transcribe the whole corpus, search across it, surface the through-lines between speakers, and suggest clips automatically — at scale, almost for free.

We need two things.

Material for AI to analyse, and humans to tell us what mattered most. Both are cheap; together they turn two days into a year's worth of content.

Capture everything we can.

The point isn't broadcast — it's a corpus. Once it exists, AI handles transcripts, search, clip suggestions, and the connections between speakers on different stages.

  • Multi-cam on the marquee stages where you've already got it.
  • Single locked-off camera on every other stage you can — back of the room, framed on the speakers, audio off the PA.
  • Audio-only recording where all else fails — transcripts alone still feed the archive and the search.
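To make the "searchable corpus" claim concrete: once the transcripts exist as plain-text files, even a few lines of standard Python can find every session that touched a topic. This is a minimal sketch, assuming one `.txt` transcript per session; the directory layout and filenames are illustrative, not a proposed system.

```python
import re
from pathlib import Path

def search_transcripts(corpus_dir: str, term: str) -> list[tuple[str, str]]:
    """Return (filename, matching line) pairs for a term across a
    directory of plain-text session transcripts."""
    hits = []
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    for path in sorted(Path(corpus_dir).glob("*.txt")):
        for line in path.read_text(encoding="utf-8").splitlines():
            if pattern.search(line):
                hits.append((path.name, line.strip()))
    return hits
```

Real tooling would layer semantic search and clip suggestion on top, but the point stands: the expensive part is capturing the room, not processing it afterwards.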

Tell us what was good.

~241 sessions are too many to comb through after the fact, and AI can't yet tell you what was great. Two small layers cover it: the people running the room, and the people watching.

  • Producer debriefs — five lines from each stage producer at the end of each day: what landed, what surprised, the moment they'd point to.
  • Audience reaction survey — a one-question poll within 48 hours of close: what's the one thing you'll remember from TPS 2026? Open-text, aggregated.

Cheap to think about now, expensive to ignore.

Rights are the failure point.

Whatever gets captured is only as useful as the rights to use it. If speaker, partner, or attendee contracts haven't pre-authorised re-use, the asset is effectively locked.

Locked-down session metadata.

AI does the heavy lifting on transcripts and speaker attribution, but only if it knows who was on stage. A clean, final list of panellists, moderators, and running order is what turns generic transcripts into something you can actually search by name later.
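What "clean metadata" means in practice is one structured record per session. A minimal sketch, where every field name and value is an assumption for illustration, not a confirmed TPS schema:

```python
# One illustrative record per session. All names and values here are
# hypothetical, invented for the sketch.
session = {
    "session_id": "2026-stage3-1400",
    "title": "Audio strategy for brands",
    "stage": "Stage 3",
    "start": "2026-05-27T14:00",
    "moderator": "Jane Example",                 # hypothetical name
    "panellists": ["A. Speaker", "B. Speaker"],  # hypothetical names
}

def people_on_stage(record: dict) -> set[str]:
    """Everyone named in the record: the list that lets AI attribute
    transcript passages and makes the archive searchable by name."""
    return {record["moderator"], *record["panellists"]}
```

If each stage producer fills in one such record per slot before doors open, name-level search across the whole archive comes nearly for free.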

Ownership.

Who's ultimately responsible for making sure the recording, interviews, metadata, and rights all happen as planned? Without one person owning it end-to-end, half of this becomes somebody-else's-problem.


Charlie Palmer · Red Slash Studio