
Ah, scheduling: one of those (pain in the anatomy) problems in computing. This isn't my first rodeo: I'd buy all-access passes for Philly film festivals (R.I.P. PIGLFF!) and feel cheated if I didn't fill every slot over the two full weeks and five screens. Web Summit is on a different scale, though. After opening night tonight, it's 30+ tracks and 300+ hours of content delivered across dozens of stages over three days. Yikes! Events overlap constantly, with no synchronized start/end times across tracks. So I rolled up my sleeves, opened a new Google Sheet, and logged into the event website.
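(For the curious: the underlying problem here is plain interval-overlap detection. A minimal Python sketch, using made-up event titles and times rather than the actual Web Summit schedule:)

```python
from datetime import datetime

# Hypothetical events: (title, start, end). Times are invented for illustration.
events = [
    ("AI keynote",    "2025-11-11 09:00", "2025-11-11 09:45"),
    ("Startup pitch", "2025-11-11 09:30", "2025-11-11 10:15"),
    ("Cloud panel",   "2025-11-11 10:30", "2025-11-11 11:00"),
]

FMT = "%Y-%m-%d %H:%M"

def parse(event):
    title, start, end = event
    return title, datetime.strptime(start, FMT), datetime.strptime(end, FMT)

def find_conflicts(events):
    """Return pairs of event titles whose time ranges overlap."""
    # Sort by start time so each event only needs comparing forward.
    parsed = sorted((parse(e) for e in events), key=lambda t: t[1])
    conflicts = []
    for i, (title1, _start1, end1) in enumerate(parsed):
        for title2, start2, _end2 in parsed[i + 1:]:
            if start2 >= end1:
                break  # sorted by start, so no later event can overlap either
            conflicts.append((title1, title2))
    return conflicts

print(find_conflicts(events))  # [('AI keynote', 'Startup pitch')]
```

With a few hundred events this brute-ish pass is instant; the real work, as the rest of this post attests, is deciding which of the conflicting events you actually want to attend.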
The Web Summit web *site* is, ironically, not all that. Truth is, I think that's deliberate, to force you to use the app. Not duplicating developer effort would be the kinder interpretation, but that's not my default mode. The app was a bit better for scan-and-pick, but I still ended up doing most of that on the website. Another truth is that AI is dominating this Web Summit, because of course it is.
This was a good excuse to try Comet, Perplexity's agentic browser. Once I'd made all my yes/no/maybe choices, I brought up the website in Comet and asked it to help me resolve scheduling conflicts. At first it just gave me a spreadsheet (yay, right track, I thought) for opening night: a single event. Then I had a bunch of back-and-forths to help it understand the problem, the categorization, and how the information was spread across tabs and subsections by day. I did finally get a spreadsheet that was close to what I'd have ad-hoc'ed. However…
The app displays the schedule in a daily view similar to Google Calendar. I'm guessing that's part of Google's UI toolkit. (I can never remember its name, but my AI editor Plex reminded me: Material Design.) It didn't differentiate between "Going" and "Interested" on the grid, but clicking through showed a Going toggle and a bookmark for Interested. The state diagrams aren't quite the same across the website and the app, but it's close enough that pressing a few buttons confirmed what was what.

I also worked with the schedule in both the website and the app. Some things were just easier on the website, like seeing the event descriptions and doing rapid click-click-clicks on a page showing my schedule so far for the whole day. The refresh rate was pretty good, but I did end up hitting the refresh button or pulling down the screen pretty frequently.
So, the bottom line is that AI gave a satisfactory answer, but the app and website together were vastly easier and more effective. Just generating the spreadsheet took a good chunk of time because of all the page traversals, and that didn't include any actual schedule resolution on my part or updating the app/website. Agentic browsing just wasn't a great fit for this use case. The app is actually good on its own, but I expect the integration with actually being at the conference is where it will really shine.
It made me think of a talk MJD gave a thousand years ago at Philly Perl Mongers about how the visual system is one of the oldest parts of the brain and has been honed for millions of years, while language processing has not. The case back then was that the whitespace and layout of a makefile are much easier to process because of the well-evolved visual processing center. Something like XML in (ugh) Maven, not so much. Gradle is better, since most programming languages use a layout that helps visual processing. And of course there is Python, with its fetish for whitespace and looking like pseudo-code. Looks like the same holds for GUI versus LLM. Old brain parts for the win!
I am here, in part, to get a better sense from people of what's really happening with AI AND to drink in the jargon and fever dreams it has spawned. Looks like I got a head start before the conference even started!
