Issue #014 of Start Select Reset went out by snail mail yesterday to my supporters (£5/month+).
Start Select Reset is delivered straight to your snail mailbox 📬 four times a year!
Issue #014 – You Can Just Do Things
Issue #014 of SSRZ is about my firm belief that ‘you can just do things’.
The essay reflects on the conclusion of 301, and how it's taught me that the distance between 'idea' and 'upload' should be as short as possible.
I also talk about the difference between knowing and capital-K Knowing, and how you really don't need to ask anyone's permission to create something and share it online.
The 'dream' and the 'doing' of creative work are for you. The 'done' is for everyone else.
As I've said a couple of times since I had the realisation that the 301 format would conclude at Episode 301, Start Select Zine will now begin to play a much bigger role in my creative life, in a dual orbit with whatever the podcast is going to become over the next few years. I must admit that right now I have a much clearer vision for what I want SSRZ to be than I do for the show! But I'm excited to make and create and post it online, and send it in the post.
When I posted the last issue of the zine I mentioned that I'm thinking of starting a small press / label called 'Family of Giants' to house SSRZ under, and I think that is going to go ahead. I'd like it to be a collective of people with fairly low stakes involved, just an association under a logo. If you have anything that you are working on, or are thinking about putting out in the near future, get in touch and let's talk!
Very few bits of software have achieved the kind of lasting cultural immortality that Clippy has.
To examine its status as an OG “little guy,” we must consider three separate things: his vision, execution, and location. All three still have downstream effects on AI Agent design today.
Nevertheless, his development still offers many useful concepts for people working on, developing, and thinking about agents today.
Before we get to the great Paperclip himself, we have to start with the context he was developed in, and the research that he came out of.
I should also disclose that he is a personal acquaintance of mine, who once accompanied me as my plus-one to a party when he was down on his luck back in the early 2010s.
It is now very hard to imagine a world where computing isn't just an ambient fact of life. But back in the early 90s, the personal computing revolution was still unfolding at a rapid pace. Desktop machines (PCs) were making their way into homes and offices, and many of the people interacting with them were using computers for the very first time.
I'm 40, and a computer with a mouse is just 'something that has been around' in my life. Whilst I don't remember a time before computers, I do have some vivid memories of people around me, growing up in the 1990s, being scared of them.
Which, of course, was a big problem for Microsoft! Consumers being scared of your product isn’t good for business.
Here’s an example of the kind of consumer hurdles they were up against:
Behind a one-way mirror in the bowels of Microsoft’s Redmond campus, Karen Fries watched yet another volunteer cry.
The wife of a colleague had offered to test Microsoft Publisher, the desktop application that debuted in 1991. Back then, the company still leaned on friends and family as guinea pigs for its products. Managers and developers eyed subjects like lab investigators from behind the glass, observing their every cursor move. Only about 15 percent of households owned a personal computer, or PC. Even the people closest to the geeks actually building the machines feared the technology. “They’d be afraid to even move the mouse,” recalls Fries. Sometimes, they’d tear up.
Something needed to be done.
The Media Equation
In 1996, Stanford University professors Byron Reeves and Clifford Nass published The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places.
Detailing experiments conducted throughout the 1980s and early 90s, they showed that, on a fundamental level, people unconsciously apply the same social expectations and rules to 'interactive media' as they would to people in their everyday lives in the real world.
The central thesis of the book is basically: “media interaction = social and natural.”
Very similar to Animating Anthropomorphism, their findings showed that if you put a talking rectangle onscreen in front of someone, the user would expect the same social cues from it as they would from actual humans. They also found that people would score their opinions of a laptop higher if they were required to enter them on the laptop they were reviewing, but if asked to provide their feedback in another room, they would score it lower. They didn't want to offend the laptop, or hurt its feelings. People also naturally ascribe a gender to synthetic voices, and flinch when a VR avatar leans in too close. And so on.
This work was massively influential at Microsoft Research throughout the early 90s, and led to a top-secret project codenamed “Utopia”, which was released in March 1995 as Microsoft Bob.
Microsoft Bob
Under the influence of the CASA (“Computers Are Social Actors”) paradigm, Microsoft reasoned that if people treat computers like social beings, then giving a computer a literal social “face” should make it friendlier and easier to use.
An early example of success was the invention of the installation wizard.
Despite a clear step-by-step textual interface with buttons like “Next” and “Finish”, first-time users still really struggled with the process, simply because they didn't understand basic interface grammar, or elements like menus and buttons.
So, drawing from CASA, designers began experimenting with more intuitive onscreen guidance. One such experiment replaced the text-based wizard interface with a cartoon owl delivering instructions, step by step, in a speech bubble. This was so successful that it led to MS abandoning printed and textual onboarding manuals entirely, in favour of character-led help systems.
MS Bob jettisoned the standard desktop for a cartoon house! Users launched programs by clicking on real-world objects, like a pen and paper for the word processor. Throughout the interface, users were guided by animated “Personal Guides”.
Note: The lead marketing and product manager for Microsoft Bob was Bill Gates' then-fiancée Melinda French!
If you think a fully cartoonified operating system sounds like a disaster, then you would be right!
Priced at ~$100 ($210+ in today's money), and demanding system requirements so high that only the super affluent could afford the machines (specs that would also have run the early metaverse/MMO Active Worlds), Bob was derided as “childish” and “patronising”. Its UI was even more cumbersome than the Windows 3.1 interface it was meant to replace, leading to its discontinuation within a year. A total disaster.
However, overall the CASA-inspired UX paradigm of this era was very productive. Any child of the 90s raised on Utopian Scholastic dreams will be intimately familiar with CASA-influenced design, as it was foundational to CD-ROM experiences, and even DVD menus.
You can see its influence all over Microsoft Encarta. With its prompts and pop-up tips that often spoke directly to the user: “Would you like to explore…?” or “Try this MindMaze quiz.”
This is design built upon 'turn taking', inspired by human dialogue, or board games, which is why CASA is so important when thinking about what's gone on with LLMs in the last… 3 years. Turn taking (chat) was the technology's first major UX unlock.
After MS Bob crashed and burned, Microsoft took its learnings about “personal guides” and decided to port them into Microsoft Office as they had clear utility.
Project Lumiere
Back in the 1990s, Eric Horvitz, Microsoft's current Chief Scientific Officer, was 'Human Computer Interface Lead' in the 'Decision Theory & Adaptive Systems Group', and was in charge of Project Lumiere: a parallel project to Microsoft Bob's Personal Guides, with similar ambitions.
The Lumiere Project centers on harnessing probability and utility to provide assistance to computer software users. We review work on Bayesian user models that can be employed to infer a user’s needs by considering a user’s background, actions, and queries. Several problems were tackled in Lumiere research, including (1) the construction of Bayesian models for reasoning about the time-varying goals of computer users from their observed actions and queries, (2) gaining access to a stream of events from software applications, (3) developing a language for transforming system events into observational variables represented in Bayesian user models, (4) developing persistent profiles to capture changes in a user’s expertise, and (5) the development of an overall architecture for an intelligent user interface.
This all sounds an awful lot like Microsoft's ambitions for their current Windows Copilot and Recall products, doesn't it?
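The abstract above is dense, but the core mechanism is just Bayes' rule applied to UI events. Here's a toy sketch of that kind of updating; the goals, events, and likelihood numbers are entirely made up for illustration (the real Lumiere models considered far richer evidence, like user background and query history):

```python
# P(event | goal): how likely each observed UI event is under each
# hypothesised user goal. These numbers are invented for illustration.
LIKELIHOODS = {
    "writing_letter": {"typed_dear": 0.8, "opened_chart_menu": 0.05},
    "making_chart":   {"typed_dear": 0.05, "opened_chart_menu": 0.9},
}

def update_beliefs(priors, event):
    """One step of Bayes' rule: P(goal | event) ∝ P(event | goal) · P(goal)."""
    unnormalised = {
        goal: priors[goal] * LIKELIHOODS[goal].get(event, 0.01)
        for goal in priors
    }
    total = sum(unnormalised.values())
    return {goal: p / total for goal, p in unnormalised.items()}

# Start agnostic, then watch the user type letter-like things twice.
beliefs = {"writing_letter": 0.5, "making_chart": 0.5}
for event in ["typed_dear", "typed_dear"]:
    beliefs = update_beliefs(beliefs, event)

print(beliefs)  # heavily weighted towards "writing_letter"
```

The key idea is that confidence accumulates across observations over time, rather than a single action triggering a canned response, which matters for the Clippy story below.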
Anyways, here's a demo of Lumiere working, from 1995:
In Part 1, we see an interface and UX behaviour that also very much feels like what Gemini in Google Docs looks and feels like in 2025. It’s also remarkable how well all this appears to work.
But more importantly for our story, in the video's concluding section Horvitz discusses the feasibility of driving non-traditional “social user interfaces” with Lumiere's inferences inside. He shows off a demo of their tool which, if you squint, sort of looks like what we got with Clippy, but way better.
If you are designing AI agent interfaces in 2025, you really should check out Part 2. In the few short minutes onscreen it shows lots of fun UI patterns that could serve as inspiration to be re-explored.
Microsoft’s Research site says that Lumiere’s prototypes served as “the basis for components of the Office Assistant in Microsoft Office”.
Lumiere combined with a CASA inspired interface was apparently an extremely compelling product! But all that real-time Bayesian inference needed high system resources, and that was a problem.
This original Clippy learned from the user, it was trainable, so it was truly Bayesian. By all accounts, it worked really well. However, for the commercial version that appeared in Office 97 and thereafter, upper management insisted that the Bayesian heart of Clippy be replaced by a rule based system that could not learn from the user. The reason Clippy was crippled was not out of palace intrigue or corporate malice but simply that Office 97 already occupied too much space and the Office designers had to choose between including a full fledged Clippy or including some new, mundane word processing features, but not both, and they chose the latter. Hence, the original, by many accounts brilliant Clippy, was lobotomized before it was first released to the public.
What could have been!
Before we move on to Mr Paperclip himself, I just want to note that there are a lot of articles online from the early 2010s discussing Project Lumiere and the failings of Clippy that reference a 2009 post called “The Lumiere project: The origins and science behind Microsoft’s Office Assistant”, cross-posted to either Robotzeitgeist.com or machinelearningagents.com, with the permalink: Lumiere-project-origins-and-science.htm
Both websites are now dead, and their direct links are not saved on the Wayback Machine. However! I dug through the Wayback Machine and found the article on page 13 of the earliest crawl of the blog's archives!
You are very welcome, future internet traveller who finds their way here looking for that article!
Clippy
With the ashes of Microsoft Bob smouldering, and a brilliant, if resource-hungry, Project Lumiere deemed too beefy for the average 90s PC, a piss-poor compromise was half-assed together and bundled into Office 97, officially called the “Office Assistant”.
But why a paperclip?
You might assume that a character as divisive and universally hated as Clippy was a top-down decision, made without user input.
But the reality, as told by his creator Kevan Atteberry in this video is far more surprising.
The search for Office's default Assistant was exhaustive and data-driven. Over 260 characters were created, then tested by the CASA group at Stanford. Focus groups rated each on trustworthiness, likability, and engagement. The list was cut to ten finalists, and the clear winner—according to the data—was Clippy.
As Atteberry recalls, “There were people… not happy that Clippy kept making it through every level.” The public loved the friendly paperclip; some insiders feared he’d irritate users. But in the end, the data won and Clippy became the face of Microsoft Office.
So What Went Wrong?
But how did this cute, data-approved little guy become the most despised character in computing history? The failure was twofold: the initial technical lobotomy and the resultant social incompetence from that decision.
Continuing a now 16-year-long tradition of referencing that Robotzeitgeist post when talking about Clippy’s implementation, the intelligent core of Project Lumiere was ripped out. What shipped was a hollow shell that:
Had no memory: The Assistant couldn’t build a persistent user profile. It treated you like a clueless beginner every single time.
Had no real context: It could only see your most recent actions, so its advice was often wildly out of sync with what you were actually doing.
Had no chill: Most damningly, because the intelligent system for deciding when to offer help was replaced with a simple, hard-coded ruleset, it became a pest.
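To make the difference concrete, here's a toy contrast between what shipped and what was cut. The function names, events, and trigger logic are all invented for illustration; the real systems were obviously far more involved:

```python
def rule_based_clippy(event, history):
    # What shipped: a hard-coded rule with no memory or context.
    # It fires every single time, no matter what you've done before.
    if event == "typed_dear":
        return "It looks like you're writing a letter!"
    return None

def learning_clippy(event, history):
    # What was cut: the same trigger, gated by a (crudely sketched)
    # memory of how often this user has already dismissed the tip.
    dismissals = history.count("dismissed_letter_tip")
    if event == "typed_dear" and dismissals < 2:
        return "It looks like you're writing a letter!"
    return None  # the user has told us twice already: stay quiet

history = ["dismissed_letter_tip", "dismissed_letter_tip"]
print(rule_based_clippy("typed_dear", history))  # still interrupts
print(learning_clippy("typed_dear", history))    # has learned to back off
```

Even this crude dismissal counter is a world away from Lumiere's Bayesian profiles, but it shows why stripping out memory turned a helper into a heckler.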
Clippy Was An Annoying Social Actor
As researcher Luke Swartz identified in his 2003 thesis, Why People Hate the Paperclip, despite being grounded in CASA principles, the version of Clippy that shipped to users embodied all the traits of the most annoying person you've ever met.
Because he…
Was intrusive and lacked etiquette. He’d constantly get in your way, popping up uninvited to offer help with the letter you’ve written a thousand times. He broke the cardinal social rule: “Don’t make the same mistake twice”.
Was a know-it-all who lowered your status. For advanced users, his constant interruptions were “patronising” and felt “offensively paternal”. For beginners, rather than being helpful, he often just served as a constant reminder of “how much I don’t know,” making them feel stupid.
Was endlessly distracting. His idle animations, like tapping on the screen, were designed to make him feel alive but instead just made him a constant, annoying distraction when you were trying to focus.
This last one is interesting. The animations that in testing made him feel cute and approachable, when combined with an overbearing, clueless personality, resulted in women in particular finding him “creepy”.
Even in 2007, when I was working at the bookshop, he'd pop up onscreen at work and someone would say “why is he so creepy?” Lol
Luke Swartz's thesis comes to a number of conclusions about interface agent design.
The following are some design conclusions that would apply not only to a redesign of the Office Assistant, but to designing any user interface agent:
Consider the agents’ task in its social element (for example, beginners may want to rely on more experienced users for help and guidance—how can one facilitate this?).
Agents should obey human rules of etiquette as much as possible (if one doesn’t like a person who disobeys these rules, one will especially dislike a computer agent that disobeys them!).
Explore ways to use the agent to teach users skills to make them more self-sufficient (thus allowing users to retain a sense of control over the program).
Carefully introduce the agent so as to realistically showcase its best features—and be sure that the appearance and behavior are consistent with that introduction (for example, if one calls the agent “fun,” there should be something fun about it!).
Study whether it is beneficial to use characters or agents at all (in some cases, a less anthropomorphic agent, or no agent at all, may provide the same benefits with less costs).
If one wished to draw a single lesson from this research, it might be that designing effective user interface agents is hard. Many factors—task, situation, behavior, appearance, label—influence users’ responses. However, there seem to be sufficient benefits to using such agents to justify continued research to explore how these factors work. Moreover, by better understanding how we interact with agents, we may better understand how we interact with each other.
What Type of ‘Little Guy’?
The ghost of Clippy haunts almost all the little guys that pop up in our software today: your bank's chat support features, product-information chat on Amazon, and every single agent being crammed into software everywhere. Some are given avatars, some aren't, and that is a very important design decision.
The answer (unsurprising, coming from someone who thinks of all techno-social systems as worlds) is: where they live ontologically inside the code space.
In my work recently, I’ve been dividing ‘little guys’ into kinds of digital agents by where they live: the inhabitant and the interloper.
A Petz cat is an example of a pure inhabitant. It exists inside a self-contained environment or world. The user is an external force, interacting with it.
Clippy, on the other hand, is an interloper. He doesn't 'live inside' the Word document, but he doesn't fully live inside Microsoft Word's interface either. He's a sort of meta-entity: not fully part of the world/document he was observing, nor of the software's 'frame'.
The direct descendants of Clippy are today’s copilots and other kinds of embedded assistants. But some are more ‘in the world’ than others.
The other question that arises after 'taxonomising' agents into interlopers and inhabitants is: “Does the agent act on its own, or does it wait to be called?”
I’ve been thinking of this axis as Proactive vs. Reactive.
Which gives us this 2×2:
This framework gives us four fundamental classes of little guys, each with its own design challenges:
The Proactive Inhabitant is The Companion. This is Petz. A character that lives in its own world but has agency. Its challenge is creating a believable and engaging persona.
The Proactive Interloper is The Assistant. This is Clippy. An agent that watches you work and butts in to help. Its primary challenge is etiquette.
The Reactive Interloper is The Tool. This is the AI you summon inside an app (like Gemini in Google Docs). It waits to be called, and its challenge is pure capability.
The Reactive Inhabitant is The Oracle. This is a destination AI like ChatGPT. It exists in its own space and waits for you to ask it questions. Its challenge is maintaining or understanding context.
Understanding which quadrant an agent lives in is the first step to understanding what you are dealing with or designing.
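The 2×2 above can be sketched as code. The class name, fields, and archetype labels are just my framing from this post, not any established framework:

```python
from dataclasses import dataclass

@dataclass
class LittleGuy:
    name: str
    habitat: str     # "inhabitant" (lives in its own world) or "interloper"
    initiative: str  # "proactive" (acts on its own) or "reactive" (waits)

    def archetype(self):
        # Map each quadrant of the 2x2 to its class of little guy.
        return {
            ("proactive", "inhabitant"): "The Companion",
            ("proactive", "interloper"): "The Assistant",
            ("reactive",  "interloper"): "The Tool",
            ("reactive",  "inhabitant"): "The Oracle",
        }[(self.initiative, self.habitat)]

# The four examples from the post, one per quadrant.
quadrants = [
    LittleGuy("Petz cat",       "inhabitant", "proactive"),
    LittleGuy("Clippy",         "interloper", "proactive"),
    LittleGuy("Gemini in Docs", "interloper", "reactive"),
    LittleGuy("ChatGPT",        "inhabitant", "reactive"),
]

for guy in quadrants:
    print(f"{guy.name}: {guy.archetype()}")
```

Two binary axes, four quadrants: the sketch makes it easy to see where a third axis (like local vs cloud) would double the space to eight.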
I'll have more to say about this 2×2 in future, as it really needs an extra dimension: 'Running Locally vs in The Cloud'. But for now, it's all complicated enough and this post has gotten too long as it is.
I don't know why I am pulling a 🤔 face in all my photos with Clippy from 2012?
I went home to The Chalk this weekend. My brother has been home from Taiwan for a wedding this week, and it's my Dad's birthday on Wednesday, so it was wonderful to be around the whole family.
Went for a walk along Palm Bay into Margate on the first evening, and I had to make this little image macro with this quote from JMW Turner: “The skies over Thanet are the loveliest in all Europe.”
When I still had my alt/lit poetry blog on Tumblr in the early 2010s, this was the sort of thing I used to make. LOL. But… I mean, he was not wrong though, was he?
Also, I bought a cool hat!
Though Eve tells me that the full stop/period feels quite aggressive lol.
Today we had a family outing which involved sitting in an enormous traffic jam on the M25, and then dropping my brother off at Heathrow for his flight home. It's been a bit of a whirlwind weekend.
I wrote a bit about the Coinbase advert that, since posting, has been banned from TV here in the UK for being ‘too political’. The post has done quite well on socials; you should read it.
The Coinbase campaign is effective because it’s speaking to the public’s justified anger, and explicitly names the rot that we have all been told to ignore: a nation that’s been economically frozen since 2008. But it channels us away from collective political questions and towards individual financial risk.
As good as “Everything is Fine” is, the ad’s satirical despair is basically a marketing funnel for its preferred solution: “exit through the casino.” It reframes systemic failure as an opportunity for a personal gamble.
Finished the restructure of Part IV in SLOP MACHINES. Editing this has been vexing me for weeks and I think I finally cracked it!
Sent some more experience.computer emails and now I'm trying to organise dates.
Going on the Neomania pod next week; will post about it when it's out.
I think I'll also be back on wolf pod at some point soon, probably recording later this month, W/C 18th. Gonna talk about finishing 301, that whole process, what I've learnt, where I think social media is, and what's next.
Nearly finished this post about CLIPPY. I need to get this history out so I can just blog about things that are coming out, using these terms and ideas for reference.
Continued to 'Degoogle' my monthly 'scratch pad' documents, moving them into Obsidian, filed by the date the entries were created. Did a lot on the train on Friday, copy-pasting.
Finished up the design and layout for a marketing PDF for the game I've been working on at work. I can finally move on to other things and other projects.
Terminal Access
This article on how AI CAPEX is propping up the US economy is really worth reading. I think it speaks to why we aren't going to see any serious regulation any time soon, and also why there's a big race: as soon as the build-out is done, all this spend/economic activity is gonna fall off the GDP books and into day-to-day OPEX.
I have no idea what's going to happen next. But if AI investment is so massive that it's actually helping to prop up the US economy in a time of growing stress, what happens if the AI stool does get kicked out from under it all?
Personally, I have just started thinking about the huge AI build-out as an enormous economic stimulus, or a bailout of the country by Silicon Valley, which is probably why they were all at the inauguration.
Let me just start by saying that Claude Code (with the latest Sonnet 4 and Opus 4 models) is genuinely good at writing code. It’s certainly not a top 1% programmer, but I would say that Claude’s outputs are significantly better than those of the average developer.
Fans can sustain careers if they start from the right place and if the fandom infrastructure is strong enough, but they can’t add exposure. They can’t do the work of the still-important middlemen.
Gen Z is rejecting traditional advertising which paints a picture of a happier, more beautiful, successful life. For them, it’s about identity. If your content doesn’t make Gen Z say “That’s so me,” then it’s not worth their time. This could be a video of someone fake-smiling through a Zoom meeting while their laptop teeters on a stack of laundry, a meme about overthinking a simple text message for 15 minutes, or a skit that dramatizes the emotional rollercoaster of getting ghosted after a job interview. The goal isn’t polished perfection—its emotional accuracy, humor, and the unfiltered truth of everyday moments.
There’s something fascinating – and slightly uncanny – about hearing your own voice say something you didn’t actually record. This wasn’t just a robotic reading. It had intonation, pacing, and nuance that made it feel personal. And that’s the thing: it was personal, even if I didn’t perform it myself.
In the meantime, do what you can: save yourselves with individual actions. Partner early, move in together and cut your rent in half. Vote in your own interest. Invest as early as you can in equities and a pension. And frankly, move abroad.
I finished reading two books this week: Tight Hip, Twisted Core by Christine Koth is an interesting look at the trunk and core muscles, in particular the iliacus in the pelvis. It points out that many long-term issues around the body (shoulder pain, knee pain, etc.) are biomechanically downstream of the core. For example, down in the foot, almost all bunions are caused by a tight muscle somewhere way up in the hip/abdomen.
I listened to The Silent King by Guy Haley, and burnt through it mostly on train commutes. It's Book 9 in the Dawn of Fire series. I can't wait to see how they bring this slightly eclectic series together in the last book, out late this year.
Moving on from those two books, I've started reading The Universal Christ: How a Forgotten Reality Can Change Everything We See, Hope For and Believe by Richard Rohr. I've listened to so many lectures by Rohr on YouTube, and podcast interviews over the years, that this book has been on my list for ages. However, I only feel ready to read it now.
Moving on from the twisted core book, I've started reading The Mindbody Prescription: Healing the Body, Healing the Pain by John E. Sarno, the classic from the late 90s. Only just started it, so no opinions yet.
After I finished the Warhammer book, I was doing some tidying up in my Audible app and found No Bad Parts: Healing Trauma and Restoring Wholeness with the Internal Family Systems Model by Richard C. Schwartz. I've read a bunch of IFS books, and don't remember buying this one with a credit on Audible? So I'm listening to it now. The foreword is by Alanis Morissette, fun!
It came up in my Bandcamp recommendations on Monday and wow it’s just incredible.
I have fallen head over heels in love with this album by Japanese jazz composer Misaki Umei. It just has so much stuff in it! Jazz, pop, breakbeats, soundscapes, classical, punk. But not one element feels out of place at all, nor does it feel like genre hopping. It's all integrated into the vision of the album.
I think everyone should give this a spin, regardless of what genre of music you like.
Remember Kids:
Hatred thrives on familiarity and intimacy, and struggles to grow in less fertile hearts. He wasn’t a figure of loathing and lies, cackling at the notion of genocide for bloodshed’s own sake. He was merely a man, one we scarcely knew, who turned our talents to unwholesome ends.