🟦 The Only Acceptable Price Point

 

I started writing this grab bag of quotes and thoughts about local AI models a couple of weeks ago, but was inspired to finish it after playing with Meta’s newest nano model, Llama 3.1 8B, on my machine over the last 24 hours.

It is truly amazing to me that a model with capability equivalent to OpenAI’s GPT-3.5 – which blew the world away back in November 2022 – is running locally on my year-old MacBook Pro using less RAM than a web browser.

Open As It Gets ¯\_(ツ)_/¯

Meta’s decision to release the Llama model architecture as open source is an extremely important move – one often missed by commentators on AI. I’ll acknowledge that there is a great deal of hand-wringing about the ‘actual licence’ of the Llama architecture and how ‘open’ their ‘open source’ licence really is. The licence is one of Meta’s own devising and not approved by the Open Source Initiative. It is, nevertheless, about as open as it gets.

Various estimates put Meta’s AI capital expenditure for 2024 at $35–40 billion, with $16–18 billion already spent on GPUs and data centres thus far. Either way, the Llama 3 model stable – in particular the Llama 3.1 frontier model – is the fruit of this enormous spend.

I highly recommend this 30-minute interview with Zuckerberg on Meta’s AI strategy and the release of Llama 3.1 as he makes some interesting points:

Reading between the lines, Zuckerberg intends Meta’s open models to stand as a kind of bulwark against the other tech giants’ closed products. His points about the future monetisation strategy are interesting too.

I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here.

These models – for all intents and purposes, unless you have 700 million users – are released to the public: free to download and use, retrain, hack on, modify, and extend however you want. OpenAI, Anthropic, and Google, meanwhile, have set the price point for large models at $20 a month. But, as per the title of this post, if I were them I’d be worried.

Even if Meta were to never release another model of this kind again, Llama 3.1 still represents a significant piece of computing architecture released into the wild. A gift to the world(?).

It’s not going away.

Also, the largest, most capable frontier model released yesterday – big Llama 3.1 405B – is (as I understand it) ‘just about’ runnable on top-end consumer hardware, though most folks will be running it in the cloud for the foreseeable future.

While reporting and writing in the media about AI tends to focus on the super-large frontier models that require city-scale industrial compute in sheds just off the M4, I think readers of this blog should be interested in the much smaller nano models.

Like I said, I literally have one running on my M2 right now, and it runs great.

Intelligence Inside

Last month, I made an episode of my podcast about Little Computer People and said that we are going to see ‘maximal intelligence at all levels’.

A small model like Google’s Nano, plus new optimisations and increased RAM in whatever Pixel hardware they announce soon, may mean we see a performant language model running locally on a phone this year. An AI developer recently said to me that the goal is ‘maximal intelligence at all levels’ – on your device, in software, and in the cloud. If your phone thinks an instruction is ‘too big or complex’, it will push it up to a bigger model in the cloud.
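The ‘push it up to a bigger model’ idea can be sketched as a tiny router. Everything here – the token threshold, the keyword heuristic, the tier names – is a hypothetical illustration of the pattern, not how any vendor actually routes requests:

```python
# Sketch of local-first routing: try the on-device model, escalate to the
# cloud only when a request looks too long or too complex. All thresholds
# and heuristics below are made-up illustrations.

LOCAL_CONTEXT_LIMIT = 2048   # rough token budget for the on-device model
COMPLEX_KEYWORDS = {"summarise", "analyse", "translate", "plan"}

def route(prompt: str) -> str:
    """Return which tier should handle the prompt: 'on-device' or 'cloud'."""
    tokens = prompt.split()  # crude whitespace token estimate
    too_long = len(tokens) > LOCAL_CONTEXT_LIMIT
    too_complex = sum(w.lower().strip(".,") in COMPLEX_KEYWORDS for w in tokens) >= 2
    return "cloud" if (too_long or too_complex) else "on-device"

print(route("Set a timer for ten minutes"))            # simple: stays local
print(route("Summarise this report, then analyse it"))  # complex: escalates
```

The real systems will be far more sophisticated, but the shape – a cheap local check gating an expensive remote call – is the whole idea.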

And that is exactly what Apple announced a few weeks later with onboard intelligence.

Apple Intelligence is designed to protect your privacy at every step. It’s integrated into the core of your iPhone, iPad, and Mac through on-device processing. So it’s aware of your personal information without collecting your personal information. And with groundbreaking Private Cloud Compute, Apple Intelligence can draw on larger server-based models, running on Apple silicon, to handle more complex requests for you while protecting your privacy.

The only thing we need for lil models to be running everywhere is more RAM in new consumer devices. And as these two graphs show, there’s a lot of headroom:

Via @dschaub

Via @dschaub

Here’s a comment on the above from Gruber:

Apple silicon Mac with 8 GB RAM performs as well under memory constraints as an Intel-based Mac with 16 GB. But base model consumer Macs have been stuck at 8 GB for a long time, and it’s impossible to look at Schaub’s charts and not see that regular increases in base RAM effectively stopped when Tim Cook took over as CEO. Apple silicon efficiency notwithstanding, more RAM is better, and certainly more future-proof. And it’s downright bizarre to think that come this fall, all iPhone 16 models will sport as much RAM as base model Macs.

Just a ‘small bump’ in minimum specs is going to open up local nano models over the next few years: in our phones, laptops, tablets, TVs. The fact I have one running on a mid-spec M2 right now clearly means they will be inside everything soon.
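As a rough sketch of why a ‘small bump’ in RAM is all it takes: the weights of a quantised model fit in surprisingly few gigabytes. A back-of-the-envelope calculator (ignoring the KV cache and activations that real runtimes also need, so treat these as illustrative lower bounds):

```python
# Back-of-the-envelope RAM needed just to hold a quantised model's weights.
# Real inference needs extra memory for KV cache and activations, so these
# figures are illustrative lower bounds, not vendor specs.

def weight_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate gigabytes required to store the weights alone."""
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# An 8B model at common quantisation levels:
for bits in (16, 8, 4):
    print(f"8B @ {bits}-bit ≈ {weight_gb(8, bits):.0f} GB")
```

At 4-bit quantisation an 8B model’s weights come in around 4 GB – which is exactly why a base-spec 8 GB Mac is suddenly an AI machine, and why the iPhone RAM bump matters.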

Which is what Apple are already doing, of course. I think it’s pretty instructive that they aren’t launching a chatbot, nor is there any kind of conversational UX wrapped around their models. Critics who say that AI is nothing but ‘spicy autocomplete’ will largely be vindicated, tbh. As the UI on general-purpose intelligence improves, we will barely notice the models at all. Behind the interface, they will help with spell check, arranging lists, creating to-do lists, tidying up wonky copied-and-pasted text, pruning speech-to-text voice notes, and more. A ‘Super Siri’.

Siri draws on Apple Intelligence for all-new superpowers. With an all-new design, richer language understanding, and the ability to type to Siri whenever it’s convenient for you, communicating with Siri is more natural than ever. Equipped with awareness of your personal context, the ability to take action in and across apps, and product knowledge about your devices’ features and settings, Siri will be able to assist you like never before.

When it launches (unlike my thoughts on the OpenAI conversational demo), I really don’t think many people are going to be thinking ‘WOW, THERE’S AN AI IN MY PHONE NOW.’ Most are just going to think ‘my phone’s a bit smarter, there’s some useful new stuff’ and go back to texting their families on WhatsApp, scrolling TikTok, or whatever people use their phones for most of the time.

The thing is, your Xbox uses more energy. These small models aren’t the energy-hungry, water-guzzling industrial compute everyone is worried about. They are going to be little guys in your phone.

The Only Acceptable Price Point is Free

Matt Webb asked the following back in October 2023:

If future AI models will be more and more intelligent (per watt, or per penny, or per cubic foot, whatever we choose to measure) then we can equivalently say that, in the future, today’s AI models will become cheaper and more abundant.

What happens when intelligence is too cheap to meter?

Too cheap to meter: a commodity so inexpensive that it is cheaper and less bureaucratic to simply provide it for a flat fee or even free.

Last month I also wondered about the falling cost of running state-of-the-art LLMs and what this might mean over the long term.

Right now it costs about 60 bucks a day to house a state of the art little computer person powered by an LLM in a virtual world. People are already doing it, and the idea has been around for decades. But what will the world be like when it costs 60p? 

With Llama 3.1 8B running locally on my machine, I totally misjudged how quickly it was going to happen. It REALLY IS worth us asking the question: what happens when intelligence is too cheap to meter?
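The ‘60 bucks a day versus 60p’ framing is just arithmetic on token prices. A minimal sketch, with all the rates and prices made-up assumptions rather than any provider’s real numbers:

```python
# Rough arithmetic for 'too cheap to meter': what an always-on agent costs
# per day at a given per-token price. All numbers are illustrative
# assumptions, not real provider pricing.

def daily_cost(tokens_per_minute: int, price_per_million_tokens: float) -> float:
    """Cost (in the price's currency) for 24 hours of continuous generation."""
    tokens_per_day = tokens_per_minute * 60 * 24
    return tokens_per_day / 1_000_000 * price_per_million_tokens

# A chatty little computer person generating ~300 tokens a minute:
print(f"${daily_cost(300, 10.0):.2f}/day at $10 per million tokens")
print(f"${daily_cost(300, 0.10):.2f}/day at $0.10 per million tokens")
```

A 100x drop in per-token price turns a meaningful daily bill into pocket change – and a model running on your own battery rounds it down to effectively zero.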

Right now the price point for the frontier models has settled at $20 a month. But big Llama 3.1 (as I have already mentioned) is just about runnable on consumer hardware. And as Alberto Romero over at The Algorithmic Bridge recently said, this price point is a question of value:

I lurk in alpha AI bubbles. Here’s the most common take I’ve heard in 2024: “Why do people still use the free version of ChatGPT when for a few bucks you have access to substantially better tools like GPT-4, Gemini Advanced, and Claude 3”? (This changed after GPT-4o became the default model but remains a valid question.)

It feels true: No one I know pays for these tools. No one I know online who’s not in the bubble pays for them either. I’d even wager, without proof, that most users haven’t noticed ChatGPT was replaced by a more powerful version.

It’s true—not as an opinion but as a verifiable fact—that you can get surprising amounts of performance improvements (I’m talking about the you-can’t-believe-how-much-better-this-is-until-you-try-it kind) worth much more than twenty dollars. I condemn the use we’re giving these tools but when used for intimate purposes instead of deceptively making money, they’re a bargain if you take the time to learn.

The thing is, until very recently the free version of ChatGPT people were using had the same level of capability as the model I have running on my machine.

Romero again:

Those are the facts. Here’s the big picture of the current trend—making AI models (up to) 100x smaller instead of larger:

  • The first AI models were seriously underoptimized. It was a matter of time before they got tiny, fast, and cheap without compromising quality. In other words, they should’ve never been that expensive.

  • A few companies own the best models—private, open-source, large, and now also tiny—which means they control the entire market.

  • A few companies own the distribution channels, which means AI isn’t a democratization force but a new element of the same oligopoly.

LLMs are seven years old – realistically just three or four. Optimisation is going to continue, and we are going to keep seeing all sorts of innovation: 1-bit models are being explored, etc. The cost of training frontier models is only going to increase, so the pace of innovation at the bottom end is going to be intense. And as with the ecosystem around Stable Diffusion, much of it is going to happen at the hobbyist level.

Meta has basically created a new market – one totally separate from, and outside of, the API paywalls of Google, OpenAI, and the rest. As per the strategy:

I believe the Llama 3.1 release will be an inflection point in the industry where most developers begin to primarily use open source, and I expect that approach to only grow from here.

And this is the thing: with the ecosystem that has already sprung up around older Llama models, these new models’ capabilities, and market pressure, it may well be that free becomes the only acceptable price.

I have some misgivings around Meta’s strategy, and I also wonder – like with their metaverse project – just how many billions they are going to burn on this over the long term. But for now I am glad they are.

I’m going to keep playing with the smaller nano models, seeing what they can do. I’m looking into training a LoRA on my journal, or maybe my blog.
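The reason a LoRA is hobbyist-trainable at all is that it only learns two small low-rank matrices instead of updating the full weight matrix. A toy numpy sketch of the idea – shapes and hyperparameters are illustrative, not what any real fine-tune would use:

```python
# Toy sketch of the LoRA idea: freeze the pretrained weights W and train
# only two small matrices A and B. The effective weights are
# W + (alpha / r) * B @ A. Shapes here are deliberately tiny.
import numpy as np

d, r, alpha = 512, 8, 16          # model dim, LoRA rank, scaling factor
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))   # frozen pretrained weights
A = rng.standard_normal((r, d))   # trainable down-projection
B = np.zeros((d, r))              # trainable up-projection (starts at zero,
                                  # so the adapter is a no-op before training)

W_eff = W + (alpha / r) * (B @ A)  # effective weights after adaptation

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params:,} vs {full_params:,} "
      f"({lora_params / full_params:.1%} of full fine-tuning)")
```

Training a few percent (or less) of the parameters is the difference between needing a data-centre GPU and getting away with a laptop.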

The only thing these smaller nano models require is battery power. They aren’t burning down the rainforest, using the power of a small city, and the water of a small country. They are just lil’ guys in your phone and everyone is going to end up with devices that have ‘Intelligence Inside’.

We are already in the era of intelligence too cheap to meter. The real question is: what are we going to use it for?

I cut several other sections out of this post which I’ll come back to, as they really should be whole posts by themselves:

  1. Observing that it’s private actors, and not nation states, burning all this cash building these things.
  2. At some point soon we may run out of road with training data, and/or the training of open-source AI models becomes so enormously expensive – costing hundreds of billions of dollars for incremental improvements – that their training becomes a generational civilisational project.
    • Funded and worked on at a global scale as an engineering project, similar to global coordination around climate change?
  3. The first sparks of Silica Anima might get confirmed soon. At which point, the entirety of ‘post-Enlightenment Western civilisation and theory of mind‘ comes tumbling down and we have a much bigger crisis rendering almost everything we know about AI / philosophy of computer science moot.
  4. The railways, telegraph, telephone, internet backbone, and social media companies all lost billions and billions of dollars laying the rails and fibre. If the bubble pops and everyone goes broke, we’ll still have Llama 3.1 and a shit ton of GPU capacity that we can find other uses for.
  5. Riffing on the idea of Little Computer People and local agents as Tamagotchis, I think Matt Webb’s recent post on the AI landscape is missing a whole category of ‘art games’ and Clippy pets.
  6. Evals

Prefer Email? 📨

Subscribe to receive my Weeknotes + Podcast releases directly to your inbox. Stay updated without overload!

Subscribe 📥

Or subscribe to my physical zine mailing list from £5 a month

The post 🟦 The Only Acceptable Price Point appeared first on thejaymo.

thejaymo

24 Jul 2024 at 19:43

⬛ Experience.Computer Season 2 | Coming Soon!

I just posted the launch trailer for Season 2 of Experience.Computer, my podcast about aphantasia, creativity, and the imagination, over on Substack.

Details below:

Experience.Computer is slow radio about high tech.
An interview show about aphantasia, creativity, and the imagination

https://experience.computer/rss

Apple Podcasts | Spotify | PocketCasts

SEASON 2 – COMING NEXT WEEK

I have another great set of guests lined up who share their unique perspectives and insights with me.

The first episode of Season 2 will be launching very soon, so stay subscribed!
New episodes will then be released monthly.

Experience.Computer hopes to bring you interesting interviews with creative people about imagination and the tools and technology they use to create.

This has been an 🟧 RSS Club post.
Syndicated to you, really simply, from thejaymo.net

The post ⬛ Experience.Computer Season 2 | Coming Soon! appeared first on thejaymo.

thejaymo

23 Jul 2024 at 14:02

My Little Spot

 

It’s that time in the summer when there are a lot of birthdays. I’ve had a great week: drinks in the park and a 40th birthday. Mine is this coming week; my parents are coming up on Saturday, and then it’s drinks and dinner with friends.

Hope everyone is enjoying the sun!

  1. My Little Spot
  2. Solarpunk Means Dreaming Green
  3. Permanently Moved
  4. Photo 365
  5. The Ministry Of My Own Labour
  6. Terminal Access
  7. Dipping the Stacks
  8. Reading
  9. Music
  10. Remember Kids:


My Little Spot

“Write what you want to read”.

This was the advice that caused me to write a fairytale for 301 this week. I opened the show by talking about how that advice is annoying – mostly because I want to read weird stuff. Whilst there’s a lot of weird stuff online – Substack is starting to become a very wyrd place – none of it is my exact ‘flavour’ of weird.

I am actually a pretty weird guy. But what am I supposed to do? Actually fly a freak flag rather than trying my best to just go about my day like a Gray Man?

I wanna talk about time loops, encounters with place, visions, magic, literal quests I’ve been sent on… I mean, maybe. I’ll think about it.

It’s my birthday this coming week, and I sort of wonder if 39 → 40 is a good year to be unhinged. Lay out the runway a little for the traditional ’40th birthday coming out as a wizard’ party. Maybe I’ll just get weird over on Substack – despite my misgivings about the whole platform. I dunno.

All that aside, I think “Write what you want to read” is actually really freeing advice. I’m going to take it to heart. After a false start, I think I’ll be posting to the RSS Club on the blog more. This is my little spot on the internet; after 15 years here I should probably *actually* lean into the medium.

Solarpunk Means Dreaming Green

The other week I posted that the audio for my talk on Solarpunk I gave in Lisbon had been put online.

Just after I posted it, Gordon White encouraged me to get into the habit of re-recording talks like this so I can have a ‘canon’ version of my own. This week I sat down and re-recorded the full 40-minute talk. I think it’s one of the best talks I’ve given on Solarpunk.

You can watch it here.

I really enjoyed the process of sitting down, now weeks after the event and doing the talk again, editing it all together with green screen elements etc. Might do more of it.

Permanently Moved

Hole of Tears

Ivan, lost in the chaos of modern life, takes a walk by the river seeking solace. A modern fairy story.

Full Show Notes: https://www.thejaymo.net/2024/07/21/2416-hole-of-tears/

Support the show! 
Subscribe to my zine
Watch on Youtube

Permanently moved is a personal podcast 301 seconds in length, written and recorded by @thejaymo

Support 💪

£5 MONTHLY

Includes Handmade Zine ✉️

Subscribe

Apple Podcasts | Spotify | PocketCasts | YouTube | Overcast | Audible | RSS

Or wherever you get your podcasts

Photo 365

193/365/2024

The Ministry Of My Own Labour

  • Read a bunch of the ‘classic’ essays on building a SaaS business, took lots of notes.
    • I must admit, I’ve come away with more questions than answers – but in a good way
  • Recorded an interview with EcoGradia Pod
  • Recorded with Eddie Rathke for Wolf Pod
  • Call with Novara Media
  • Heavy focus on getting things in order.
    • Got some business cards LMAO
  • More work on the sekret project

Terminal Access

For those reading this via email, I reviewed Paul Czege’s current Kickstarter for his newest book on DIY journalling games: Inscapes: How the Worlds We Make Make Us Who We Are

Dipping the Stacks

Mind of the Creator

It’s fascinating to imagine the engineers working on diagnosis and recovery from the memory failure. Having to build a mental model of the system, built by someone else. It has brought back memories of reading through the 6502 code Steve Wozniak wrote for driving the Disk II from the CPU. I had to reverse engineer this as a 13-yr-old learning to copy Apple II games that didn’t want to be copied. Understanding it all was beyond my abilities, but I felt very close.

Carbon fiber hewn structural batteries heralded as ‘massless’ solution for lighter devices | Tom’s Hardware

When your laptop’s case is also its battery, it’s bound to be more lightweight.

Social-Media Influencers Aren’t Getting Rich—They’re Barely Getting By – WSJ

Last year, 48% of creator-earners made $15,000 or less, according to NeoReach, an influencer marketing agency. Only 13% made more than $100,000.

Collaborating on the Computer with William S. Burroughs – RealityStudio

‘It’s a very interesting piece that shows Burroughs’ relevance in the Cyberpunk world of computer art,’ Goddard said

the attention cottage

I’m trying to build my way back to that balance, through how I organize the space in which I live and how I apportion my attention. Systolic, diastolic; inhale, exhale. Balance. Almost everything I write, including my newsletter, is meant to help people rebalance their attention — to give them another piece of furniture for their attention cottage.

Reading

I finished reading Herzog’s Every Man for Himself and God Against All. What a memoir, what a life. It’s very inspiring. One of the things I take away from the book is his pure comfort with being uninformed about things that don’t interest him.

I started reading Joan Didion’s Slouching Towards Bethlehem. I asked Huw Lemmey where I should start and he said this. So here I am. One essay a night before bed. I’ve been finishing one short essay and then flipping over to…

James Hollis’ Finding Meaning in the Second Half of Life. I’m about halfway done. This book feels like exactly what it is: a second run at the subject of midlife, after the success of The Middle Passage about a decade before. This book, whilst not any more prescriptive, offers a lot more signposting. Paths out, potential routes forward.

I picked up FRIENDLY AMBITIOUS NERD by Visakan Veerasamy. One of the blurbs reads: “sort of a Marcus Aurelius Meditations but for Twitter addicts”. I don’t think that’s an unreasonable description of what the book is. It’s a book that’s spread fully by word of mouth at this point; it’s been out for a couple of years. Check it out if the (great) title interests you.

Music

Spotify Playlist

CombatWoundedVeteran – I Know a Girl Who Develops Crime Scene Photos (Instrumental)

In honour of National Karaoke Day and the 25th anniversary of one of the best albums of all time, absurdist / outsider-art grindcore band CombatWoundedVeteran have put out an instrumental version of I Know a Girl Who Develops Crime Scene Photos.

It’s grindcore. It’s not for everyone, but I really enjoyed listening to one of my favourite albums in a new way. In fact, I suspect the instrumental versions might be a bit more palatable.

I can’t even begin to tell you how much influence this album has had on my life. An absolute, uncompromising banger. Inspiring.

I have the third pressing of this album in hot pink (one of 713); it has the rarer blue cover, and I still listen to it all the time. I bought it from a travelling distro at a punk show.

Here’s the same song *with* vocals:

Remember Kids:

In order for there to be a mirror of the world, it is necessary that the world have a form

Umberto Eco | The Name of the Rose


The post My Little Spot appeared first on thejaymo.

thejaymo

21 Jul 2024 at 20:14

Hole of Tears | 2416

 


Ivan, lost in the chaos of modern life, takes a walk by the river seeking solace. A modern fairy story.

Full Show Notes: https://www.thejaymo.net/2024/07/21/2416-hole-of-tears/



Hole of Tears

The other day someone said to me ‘Write what you want to read’. And as a piece of advice it’s really annoyed me. Because the only thing I want to read right now are fairy stories. So I’ve written one.

In the heart of the city, amidst the hustle and bustle, lived a man named Ivan. Ivan was like many of us, entangled in the web of modern life, his days consumed by messages, notifications, and endless TikToks. Despite this, each night as he lay in bed, he felt a quiet yearning for something more real.

One afternoon, after a particularly exhausting day at work, Ivan decided he needed a change. As the sky turned shades of orange and pink, he chose to step away from the noise and go for a walk. Checking his pockets—phone, keys, wallet—he left his flat and headed toward the river. It was a wonderful evening, with the water murmuring gently and the setting summer sun casting a magical glow.

By the river’s edge, he paused, the city’s noise fading into the background. On a whim, he stepped into the river. The water filled his shoes, and his jeans grew heavy and wet, but he continued, drawn by an inexplicable urge. As he waded deeper, he felt a strange sense of calm.

Suddenly, a small boat drifted toward him. In it sat an old man, his hair as white as snow and his eyes as deep as the river was long.

“Evening,” the old man nodded, his voice gentle yet strong. “What are you doing in the river?”

“I needed peace,” Ivan replied, feeling a strange familiarity with the old man. “I just wanted to escape the noise.”

“Well,” the old man said, “Wade a little further in, and I’ll tell you a story.”

Ivan hesitated, then followed the old man, the cool water now lapping at his waist. He touched his phone through his jeans, reassuring himself it was still there.

“Once,” the old man began, “in a town much like this one, there lived a very busy man named Robert: a cartographer obsessed with creating the perfect map.

He pored over his charts day and night and worked tirelessly, convinced that with one more correction he would know where he was and where he was going. But the paths on his map always needed changing.”

The river’s water was now at Ivan’s chest, but he was so absorbed in the story that he barely noticed.

“Robert became frustrated, his mind a maze of confusion. He wandered the streets, eyes glued to his map, ignoring the world around him. The people he passed became like ghosts, their voices mere echoes. Then one night, under a moonless sky, Robert, looking at his map, not looking where he was going, fell down a hole.”

Ivan shivered as the old man’s voice wove a spell around him.

“Trapped in a deep hole, with only the distant sound of the city above, Robert shouted until his voice grew hoarse, but no one came. So, he sat down and cried, and his tears began pooling at the bottom of the hole.”

Ivan’s mind filled with the image of Robert, alone and desperate.

“Robert cried for what felt like days,” the old man continued, “and his tears filled the hole. Slowly at first, but then higher and higher, and just as he began to fear drowning, he remembered his map. Folding it into the shape of a paper boat, he placed it on the pool of tears. Climbing onto the makeshift vessel, he let it carry him upward, buoyed by his own sorrow.”

Ivan felt the weight of his own phone, like an anchor dragging him down.

“When Robert reached the top, he stepped out of the boat and onto dry land. Just as he reached to pluck the map-boat from the water, it sank. He realised he didn’t need it anymore. He had missed the world—the world with the hole that wasn’t on his map.”

Ivan suddenly realised the river was up to his neck. His eyes widened with fear, and he reached out for the old man’s boat, but it was gone. His phone felt like a heavy leaden weight dragging him down. He panicked and with a final, desperate breath, he pulled it from his pocket and let it fall into the murky water. Instantly, he shot up like a cork, as if a burden had been lifted from his soul.

The water now seemed to support him, a comforting presence. He kicked his legs and swam toward the shore. The cool water invigorated him until his feet found solid ground. Stumbling onto the shore, he was breathless but exhilarated.

Turning back to the river, Ivan thought he saw a rowboat disappearing around the bend. “Thank you,” he whispered, knowing the old man had given him a gift far greater than a simple story.

From that day on, Ivan walked through the city with his eyes up, not glued to his phone. He savoured the sights and sounds around him, smiling at strangers and noticing things he’d never seen before.

He did get a new phone, but he rarely looked at it. As he walked around the city, he watched where he was going, careful to avoid any potholes, thinking of Robert and his map. After all, what was he going to do? Although his new phone might be waterproof, it definitely didn’t float. 


The post Hole of Tears | 2416 appeared first on thejaymo.

thejaymo

21 Jul 2024 at 13:20


