
Meet Visual AI’s Unlikely Winner

 

I’ve spent over a decade following the evolution of computational photography, watching as algorithms and machine learning transform how we capture and enhance visual content. During this journey, I’ve tracked numerous companies pushing boundaries in this space.

One company stood apart. It caught my attention years ago when I was looking for better ways to handle noise in my digital photos. This company was Topaz Labs, which started almost two decades ago as a simple Photoshop plugin maker. Today, their tools are an essential part of my photography workflow, whether I’m processing RAW files from my Leica, my Hasselblad, or even my iPhone.

Topaz Labs isn’t just any software company – it’s a case study in how to build a significant technology business by breaking all the conventional rules. In Silicon Valley’s startup playbook, there are certain rules everyone “knows.” Don’t start a company with family. Don’t bootstrap when you could raise venture capital. Don’t focus on a niche when you could build a platform. But in a quiet corner of Dallas, a father-son team has broken all these rules while building one of the most interesting companies in artificial intelligence.

While AI startups chase multibillion-dollar valuations, Topaz has built a $48 million business by doing something less flashy but equally important: making and selling AI-based software that makes existing visual content better.

What makes Topaz’s main software so useful to me, and so interesting generally, is that it improves, upscales, and sharpens photos in a way that no one else can. It does so without screaming “AI” and leaves me as the ultimate arbiter of the result. Now, armed with AI, the company is extending that capability to video and to the output of other generative AI engines. It’s tempting to think of Topaz as just another photo app for the small group of professional and semi-pro photographers who have the time and the need to spend more than a minute or two tweaking a photo or video. But the world is becoming more and more visual, and tools that are easy to use and help with visual storytelling are critical.

I’ve been using Topaz’s tools for years, watching them evolve from simple Photoshop plugins into sophisticated AI-powered software that’s become essential for professional photographers and video creators.

The company’s founder, Albert Yang, earned his PhD in computer vision from the University of Waterloo before starting Topaz. After watching how they’ve navigated the rise of generative AI, I reached out to his son, CEO Eric Yang, to understand their success where many venture-backed companies struggle.

This is an edited version of our conversation that takes you through the history of this nearly 20-year-old bootstrapped company and offers a lesson on how to build a profitable $48-million-a-year business by staying focused without venture dollars or Silicon Valley razzle-dazzle.

—

Om: Tell me, when did the company start?

Eric: The company was incorporated in 2004. My dad, a signal processing expert, worked in Silicon Valley on audio devices. He thought he could apply similar theory to video enhancement. He said, “Hey, you can do the website, answer all the emails and do all that. I’ll build the product. Let’s try to sell it.”

The first product was called Topaz Moment, but it didn’t sell. This was in 2006. We tried different things until we started developing Photoshop plugins. There was a photography forum at the time where I was trying to push our software, and people were rude. They said, “That’s cool and all, but not useful at all. What we want is this other thing,” which was 10 times easier to make and not as interesting technically.

We built that product. It was called Topaz Adjust, and it gave you an HDR-looking image from a single image. Back then the HDR (High Dynamic Range) look was pretty big.

Om: Trey Ratcliff was the guy who made the HDR look a thing.

Eric: Yes, his review of Topaz Adjust was what put us on the map. He did a big review of it and contributed to that tool’s growth. That was the first phase of our company: Photoshop plugins. It lasted from 2008 to about 2017-18. During that time, I went back to college, started another company with a friend, and then went to San Francisco and joined a YC- and Sequoia-backed startup.

Around 2017, for the problems we were solving using traditional image science, we found that machine learning would solve them 10 times better. Noise reduction and upscaling with Generative Adversarial Network (GAN) models was revolutionary compared with traditional tools. We released the first GAN model for image upscaling in early 2018.

At that point we were making $3 million in revenue. We’re not good at marketing, but the quality was better and the software spread through word of mouth. My dad ran the whole thing. In 2018, I came back.

[Image upscaling means making a digital image larger while trying to maintain or improve its visual quality. For instance, you could take a 1000 x 1000 pixel image and turn it into a 3000 x 3000 pixel image. Many photographers upscale their images so they can make bigger prints; others use the larger files to improve the final image. I use upscaling and other enhancements on my iPhone photos, and anyone with some time on their hands can upscale and clean up old images, including family photos from half a century ago. Beyond still photography, upscaling is used to improve video quality for streaming, restore vintage movies, and make old TV shows look better on modern television displays.]
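[For the technically curious, here is a minimal sketch, in Python with the Pillow library, of classical interpolation-based upscaling, the baseline that predates the AI approaches discussed here. It is an illustration rather than anything Topaz ships: interpolation only stretches the pixels you already have, which is exactly the gap GAN- and diffusion-based upscalers try to close by synthesizing plausible detail. The file names and factor are hypothetical.]

```python
# Illustrative only (not Topaz's method): classical upscaling with Pillow.
# Interpolation spreads existing pixels over a larger grid; it cannot invent
# detail, which is the gap ML-based upscalers aim to close.
from PIL import Image

def upscale(in_path: str, out_path: str, factor: int = 3) -> None:
    img = Image.open(in_path)                                # e.g. 1000 x 1000 px
    new_size = (img.width * factor, img.height * factor)     # -> 3000 x 3000 px
    img.resize(new_size, Image.Resampling.LANCZOS).save(out_path)

# Hypothetical usage:
# upscale("old_family_photo.jpg", "old_family_photo_3x.png")
```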

Om: I remember Topaz was one of the few companies doing work around that technology. Did you come up with your own models or were they based on open source?

Eric: There was one paper that we based our technology on. No open source tools existed. The important part was simulating the degradation pipeline to generate synthetic training data. We tried different approaches. My dad tried different things. Eventually he came across something good. We iterated over time, made it faster and better. The first model wasn’t based on anything open source.
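[A note on that degradation pipeline: to train an upscaler you need pairs of low-quality and high-quality versions of the same image, and the usual trick is to manufacture the low-quality half by deliberately degrading clean photos. The sketch below is a generic, hypothetical example of such a pipeline, not Topaz’s actual recipe; the blur radius, noise level, and JPEG quality are arbitrary placeholders.]

```python
# Illustrative sketch of a synthetic degradation pipeline (not Topaz's recipe):
# start from a clean, high-resolution RGB photo and manufacture the kind of
# low-quality input the model should learn to fix. The (degraded, clean) pair
# becomes one training example for an upscaling/restoration model.
import io
import numpy as np
from PIL import Image, ImageFilter

def degrade(hr: Image.Image, scale: int = 4, jpeg_quality: int = 40,
            noise_sigma: float = 8.0) -> Image.Image:
    img = hr.filter(ImageFilter.GaussianBlur(radius=1.5))        # optical blur
    img = img.resize((hr.width // scale, hr.height // scale),    # downscale
                     Image.Resampling.BICUBIC)
    arr = np.asarray(img).astype(np.float32)
    arr += np.random.normal(0, noise_sigma, arr.shape)           # sensor noise
    img = Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality)           # compression
    return Image.open(buf)

# Hypothetical usage:
# clean = Image.open("clean.png").convert("RGB")
# training_pair = (degrade(clean), clean)
```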

Om: If he’s doing signal processing, I assume he’s a chip guy.

Eric: He was working at startups in San Francisco. One was called Techwell, which went public.

Later he was at Fortimedia, which did microphone-array processing – multiple microphones, noise reduction. That one would have IPO’d, except Lehman Brothers, their underwriter, happened.

He left the job and decided he wanted to apply this approach to a different domain, because he’s a builder. Video would be the focus.

Om: When you started experimenting with ML in 2016-2017, what drew you into it? There you were making plugins for Photoshop, and then you made this sharp right turn.

Eric: We made creative plugins, and plugins for cleaning up and upscaling photos. We had DeNoise and InFocus, which is not a great name for a sharpening tool. My dad had a PhD in computer vision from Waterloo. He got interested in this technique and tried it out. It turned out to work better than anything we’d seen, and I think it aligned with what we were trying to offer customers.

[Photoshop plugins have always been a solid niche business. Some fill in the gaps left by Adobe’s product teams; others make it easier to use, and extract more value from, Photoshop and Lightroom, the two major photo-editing platforms. Many such companies are small, often one- or two-person operations.]

Om: One of the things I find very interesting is there are two kinds of people in my life. I’ve invested in companies around better network bandwidth management, network connectivity, latency. And my peers would say, “Man, there’s going to be so much bandwidth. You don’t need to load balance. It will just work.” Same with photography – “No, there’s going to be more megapixels. You don’t need to worry about upscaling. The megapixels are going to produce bigger files.”

Eric: Yeah, this is actually really interesting because I think it fundamentally gets at why we’ve been able to grow relatively quickly without too much competition in the space. It seems from [the outside] that upscaling is essentially a finite problem. You have some low quality stuff that you want to upscale. After you upscale all that, after you restore it, you should be done. But what we’ve found is people seem to always have the need to upgrade the value of their content.

Early on, digital photography was really big for us because there was a lot of sensor noise, so noise reduction was essential. Adobe didn’t solve it well until way later with their tools. That was a major benefit for us. Upscaling was pretty big because even if you had a lot of pixels, the quality wasn’t there. There was a lot of demand for more aesthetically pleasing pixels.

Eventually, after 2020, digital photography from DSLRs and mirrorless cameras seemed to not be as popular anymore, and we saw a big drop-off in customers there. Then generative AI became this big thing, and now a lot of our tools are used for that. It is kind of fortuitous for us that the one thing we specialize in has seen a couple of waves that both benefit from it.

Om: What about smartphone photography?

Eric: Smartphone photography didn’t really work for us – our tools are expensive. They’re not free. Most people taking photos on their phones don’t use our stuff. (Topaz Photo AI costs about two hundred dollars.)

Om: I beg to differ. I use primarily iPhone RAW. I run it through Topaz Photo AI, sharpen, denoise, upscale, and then I take it to Camera RAW, and then I take it to Photoshop.

Eric: Fantastic. We should use it as a case study.

Om: That’s when Topaz shines, when I have a RAW file coming off a very small, very noisy, very poor quality sensor. It’s a great sensor when you’re looking at it on the iPhone screen, but if you look at a bigger screen, it’s not. If you run it through the PhotoAI software, it comes out pretty clean. I’ll tell you how else I use it. I only use a 50-millimeter lens equivalent on a Hasselblad or my Leica SL3. One is 100-megapixel, one is 61-megapixel, and then I hit the double upscaling on your Photo AI so that I can crop. I just don’t need to carry too many other lenses.

Eric: That’s a fantastic use case. You’re already shooting at a really high resolution.

Om: I like the cropping that I can do later.

Eric: Amazing. None of these were diffusion-based models. These were all GAN-based models. Recently, we released diffusion models, some for restoration. Have you tried Super Focus or Recover v2, or any of the slow, diffusion-based ones?

[Diffusion-based AI models are trained by gradually adding noise to real images until they become pure static, so the model learns how images degrade and how to reverse that degradation. For restoration, instead of generating a picture from pure noise the way a text-to-image model does, the model starts from an existing photo and removes its noise and degradation. Topaz has extended this approach to video with a product launched in February 2025 under the codename Project Starlight.

GAN models are fast at generating images once trained, and they can produce sharp, high-quality results. Diffusion models are slower, but they are generally more stable to train and often produce more diverse results.]
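[As a conceptual sketch of the speed difference described above: a GAN upscaler produces its result in a single forward pass, while a diffusion-style restorer refines the image over many denoising steps. The functions below are illustrative stand-ins, not Topaz code; `generator` and `denoiser` represent already-trained models.]

```python
# Conceptual sketch only – not Topaz's implementation. "generator" and
# "denoiser" stand in for trained models (any callables on numpy arrays).
import numpy as np

def gan_upscale(generator, low_res: np.ndarray) -> np.ndarray:
    # A GAN generator maps the low-resolution input to a high-resolution
    # output in one forward pass: fast once trained.
    return generator(low_res)

def diffusion_restore(denoiser, degraded: np.ndarray, steps: int = 50) -> np.ndarray:
    # A diffusion-style restorer starts from the degraded photo (not pure
    # noise) and removes a little degradation at each of many steps:
    # slower, but often more robust and diverse in its results.
    x = degraded.copy()
    for t in reversed(range(steps)):
        x = denoiser(x, t)
    return x
```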

Om: For you guys, it must be interesting now, because with generative AI tools like ChatGPT and Midjourney, everyone’s putting out small files and people need to upscale them.

Eric: Yes, but it could be a curse or a boost.

It’s a curse because it’s just like 2018, when we saw this new ML approach throw out all the years of imaging research because it was so much better. [GAN was the new ML approach.]

It was the same thing in 2022-23, when you saw startups applying diffusion models like Stable Diffusion to upscaling images, with much better quality than our existing [GAN-based] software.

We panicked. We thought, “Oh my gosh, we have to get on this. Otherwise we’re going to die as a company.” We tried to move fast in creating diffusion models for upscaling and restoration, which is a pretty hard thing to do because diffusion models are extremely slow. Even right now, a lot of the approaches generate pretty small photos and videos. We worked hard on that, and right now we have a good handle on it.

Om: So who are your customers now because of generative AI?

Eric: It’s been an evolution. First, our core audience, still more than 50% of our revenue, is creative professionals: photographers, videographers, prosumers. It gradually shifted away from photography into video, which was interesting to me.

The second major category is video production and streaming. These are professional and enterprise customers. We are used in major productions. I think we’re one of the AI tools that is least offensive to them, because we never generate things from scratch. We just help solve quality problems.

We’ve seen an increasing percentage of our usage come from that. The largest growing segment is AI filmmakers and AI enthusiasts. I think it was a major boon, because we managed to keep our lead in upscaling and quality enhancement. There’s 10 times as much content being generated, and people want to show it in larger formats.

Om: So it must be very gratifying for your dad that you guys have come full circle, started pursuing video, and here you are doing video again.

Eric: Yeah, exactly. We did video again in 2018. That was the year I came back from San Francisco and he transitioned the running of the company to me. He’s a technologist, so he still worked on the models. Around a year and a half ago, he stepped back. We’re at 55 full-time people in Dallas, and we’re looking to expand quickly because the growth has been good.

Om: You said when you were selling plugins, it was still $3 million…

Eric: Last year we did $48 million in revenue.

Om: Congratulations. And I’m assuming it is profitable.

Eric: Yeah, pretty profitable. Lately, we’ve been doing the diffusion models and had to buy a lot more compute, so that’s eaten into it. Back in 2022, it was very profitable. We buy Nvidia’s latest – H100s and A100s.

Om: What’s your bigger-picture prognosis? This year has seen insane progress: Google with Veo 3, and the Chinese companies launching video-related models left, right, and center, mostly because, with WeChat, TikTok, and Baidu, they have better video data.

Eric: Okay, I think everybody in the industry is aligned that in the next three to five years, the process of creating a story using video is going to change dramatically. Right now, if I were to try to get a story in my head onto video, it’d be really hard. It’d be very expensive, and I’d have to do a lot of production. But in three to five years, it’ll be super low friction to do that with really high production quality. And it wouldn’t just be text – I could maybe insert pictures of faces, reference images, combine different videos.

The part where we believe differently from other companies is that most other AI companies are training models to do all of this in more or less one shot. You have one large base model that gets you a very directable output – it will interpret your prompt well, at really high quality, with whatever reference you put in, and eventually at 1080p or 4K.

I think the future will be more iterative. There’ll be many different models working together that you layer onto each other. This is very timely, because Midjourney just released their video model and it only generates at 480p, which is interesting – everything else is at least 720p. But users love it, because they don’t need to spend a lot of money on the initial generations, and they can craft the story first before upscaling it later.

Om: What’s the question I have not asked you?

Eric: We’re a pretty different company. I think if somebody asks me what sets us apart, it’s focus and listening to customers. I know this kind of sounds trite.

We talk with customers to figure out the problems being underserved by everybody else, and then we completely disregard everything else until we solve that thing and make sure we’re the best at it. I think that’s how we’ve been able to stay alive, stay pretty relevant, and keep growing fast. A really healthy dose of paranoia at all times is actually pretty exciting.

Om: Why is Adobe Lightroom or Photoshop upscaling so much worse?

Eric: Another question I’ve asked myself many times.

Om: They have more resources, more people use their cloud, they have better training data. They should be doing better. Why are they not doing better versus you?

Eric: Because it is our lifeblood and they don’t depend on it.

Om: What are you into right now?

Eric: Recently, I’ve been using Midjourney’s video model. It is really good. I’m very into the video stuff right now. I thought Pika and Runway were good as well. Overall, the pace of progress on video generation models is fast and exciting. The interesting thing about Midjourney is that the experience and the aesthetics are very unique and contribute to its appeal.

Om: How do you think we’ll see video AI capabilities scale? We’ve seen chatbots plateau after their initial progress. What kind of advances do you expect in the next 24 to 36 months?

Eric: I could be wrong, but I feel like the remaining progress doesn’t necessarily have to be on the foundation models. I think it’s at a point where the product layer becomes more important.

Right now if somebody could build a tool that used existing models together in novel ways – solving specific problems like compositing in a way that looks natural – that would make a better contribution than the next incremental update of a foundation model.

It’s similar to LLMs, where there’s probably progress still to be made, but what users really want is integration with Gmail and other tools. They want it to be more useful in their daily workflow.

Om: Thanks Eric. This was fun.

Crazy Stupid Tech

30 Jun 2025 at 15:00

As if you have only 12 years to live


What if what frightened you could instead motivate you?

What if you could not only accept your biggest weakness but also somehow turn it into a strength?

These are some of the questions I've been asking since I heard a certain point on Mark Manson's podcast. (This point was made before Manson changed his podcast format and also before I quit podcasts cold turkey like a madman so that I could have more time to listen to music again.)

I forget the exact episode and the context for the point, but, if I remember correctly, Manson shared an anecdote related to a listener's question about changing your nature. (Which bits and bytes that make up your personality within the Simulation are constants, and which are variables you can improve?)

When he was a dating coach, Manson said, clients would often confide their desires to change a core part of themselves. Many said they wished they could stop being anxious messes.

Me too, man. Me too.

So I was interested to hear Manson's thoughts on the matter.

Then he said it—

I got bad news for you, bro: If you're overly anxious, then you're probably gonna be overly anxious 'til the day you die. You just gotta find a way to accept and deal.

In the case of anxiety, pharmaceuticals and cognitive behavior therapy can be great treatments, but they're not cures, meaning you're likely stuck with anxiety for however long you're lucky to live.

Bummer.

But what if you could do something awesome with your curse?

The exploration of this question has led to a radical shift in how I approach my own anxiety.

If you follow me on LinkedIn (Linky Dink), then over the last few weeks, you've seen wave after wave of shenanigans from me and some of my digital friends, who have been helping me pervert the sanctity of the world's largest professional network, which is now truly just a social media platform like so many others.

While it may appear to be all fun and games (and maybe it is for everyone else—I can't speak for them), let me pull back the curtain and let you in on a personal secret: My recent and current activity (on Linky Dink and elsewhere) is driven by anxiety. And a moderate dose of neuroticism.

Because, these days, I'm living by a very specific timer. Since my 40th birthday earlier this year, I've been living as if I have only 12 years left to live.

Though we can all go at any moment, I have no logical reason to think I have only 12 years left to live. The reason is purely emotional.

Back when my site was a blog and before it became a newsletter, I wrote about the simple math that guides my life. And in that post, I shared that I'm 40 years old, while my father and stepfather were each 51 at the times of their deaths, and my mother was 52 at the time of hers.

So, if I live only as long as the oldest of my parental figures, then I have only 12-ish years to live. (Technically, I have something closer to 11 and a half years. But remember, kids: Don't let facts get in the way of a good story.)

These bits of personal math are nothing new, as I've been running the numbers since 2011, when my parents died. There was also a time when I lived with the burden of the self-created prophecy that I'm destined to die from cancer, just as they did.

What is new is the framing of the math, which is no longer an exercise in dread, but instead an exercise in daring to dream what's possible.

While I have good reason to think I'll live more than 12 years, I want to live the next 12 years as if they're all I have left. While terminally ill people often get prognoses much shorter than 12 years, it's really not a lot of time when we're talking about the balance of one's mortality.

What can I accomplish in such a short amount of time? That's the question that guides the anxieties I've shared in this post.

Why would anyone do such a thing?

Well, for one, I'm an artist at my core. And, nearly 20 years ago, at my first real grownup job, a colleague shared a certain insight I didn't appreciate at the time but now see as wise, and I've adopted it as part of my own personal philosophy. This colleague, a musician at heart, told me that imposing limitations on your art gives structure and value to said art. (For example, Western music has only seven notes. Requiring certain rhythm and maybe even rhyming for a song or poem is a limitation, but it's a limitation that adds value to the art.) At first I scoffed. Why would anyone make an already-difficult process more difficult? But, now I see that, when you've set up the right limitations in certain areas, then you're free to explore creativity in other areas.

These days, I see myself as a bit of a performance artist. I'm not in the same league as Andy Kaufman or Pee-wee Herman, but I do have my own spin on it. Because my performance art is all about leaning into my anxiety. Instead of letting anxiety hold me back, I'm learning how to use it as fuel to propel me into the great unknown that awaits us in the unraveling of the 21st Century.

The good news is that, when anxiety and neuroticism are the fuels for your performance art, it's so easy to find inspiration all around you.

Someone once accused me of using dark humor as a coping mechanism. Like, duh. But there's no reason to make such an accusation these days, because I openly admit that's exactly what I've done before, what I'm doing now, and what I'll likely do always. I'm anxious af about the current moment--and I don't think I need to elaborate. If you're reading this newsletter, then you surely have enough imagination to fill in those blanks.

As I've already shared, I often think about the ages of my parental figures at the times of their deaths. I've tried to push those thoughts into some forgotten corner of my mind, but I can't—they keep popping up. Rather than feel bad about it, I've accepted it's not my fault, because certain things happened to me to form me into the person I am.

So, I'm taking that personal math and creating an artistic limitation for myself: Live as if you have only 12 years left. What can you accomplish in that time? What awesome thing or things can you do in that time to honor your parents—and the limited and uncertain nature of human existence? Can you turn your anxiety into a form of catharsis?

My anxiety is also the fuel for the novel I'm writing, because, if the world's going to hell in a handbasket, then I don't want it all to end with my novel unfinished. But Jake, you may say, surely you don't really think the world's going to end. Humanity has survived so many scary times, and though you may have concerns about what's going on in your homeland, America still has the great advantage of having a diversity of resources and some of the world's best borders. Logically, I agree. Emotionally, I'm not so mature.

My anxiety ain't going away. I have 40 years of data and anecdotes to back up that statement. So I'm stuck with that artistic limitation in that I can't get rid of it. It's part of the formula of my performance art. But I'm lucky that I can turn that limitation into an asset.

This edition of the newsletter has focused a lot on me. I'm sorry for that. But, as is often the case with my art, I share parts of myself in the hope you'll see reflections of yourself.

So, let's bring this back to you: What artistic limitations are you stuck living with that you can work to turn into crucial elements of your own performance art? What particular anxieties can you lean into and find inspiration in? Or, what limitation besides anxiety can you use as a framework for your life?

I genuinely would love to know. So, if you're willing to share, please let me know by responding to this email.

<3,

Jake

Jake LaCaze

30 Jun 2025 at 14:45

Episode 609: PM Talks S2E6 – Momentum – Mike Vardy

Momentum isn’t just a starting gun. It’s a rhythm, a flow, a throughline. In this episode, we break down what momentum looks like at different stages of a project or practice, how to recognize its many disguises, and ways to harness it without burning out or blowing past friction points that deserve your attention.

I’m late on posting this latest episode of my regular chat with my good friend Mike Vardy. Have a listen.

Rhoneisms

30 Jun 2025 at 14:01

Monday, June 30, 2025

Where was everyone yesterday? All my feeds were very quiet. Not silent, so I don't think it was a technical problem. But just really quiet. RSS, email, Fediverse, all sparse. It was boring.


I'm typing this in Ghost's editor, which, as web editors go, is very good. Except it feels wrong. Not much, just a little. Something feels off. Most of the time I barely notice it, but if I'm bouncing back and forth between typing in Ghost and typing in Emacs, the difference is noticeable and I don't like it. I'm not interested in editing in something else and then posting to Ghost. I've tried in various ways and it's too much of an abstraction. If I wanted to type in one place and push it somewhere else, I'd go back to a static site generator. Oh, and I miss having footnotes. Ghost's workaround is icky. Can you feel something happening? I can.


Here’s a stupid idea I’m thinking about trying that completely contradicts what I just wrote: what if I were to write all my posts in Emacs, render them locally using Hugo, and then copy and paste the rendered HTML into the Ghost editor for publishing? A bonus with that approach would be that when I inevitably end up back on Hugo for the blog, all the content will already be there 😁.

Baty.net posts

30 Jun 2025 at 11:29

Subject: Satisfying Sounds

The “WHOOSH-THWACK-RATTLE-POP” sound the vacuum cleaner makes when gravel and debris disappear up the hose, never to be seen again, as you vacuum the car.

Weirdly satisfying. 🚘😌

Robert Birming

30 Jun 2025 at 10:13

Finished reading: Battle Scars by Jason Fox 📚

An honest, interesting, and at times deeply unsettling read. It paints a raw and personal picture of one of the many terrible aftereffects war can leave behind — and the road back to recovery.

Robert Birming

30 Jun 2025 at 08:54

Monday Morning Wake-Up Call

 

I believe with perfect faith that at this very moment millions of human beings are standing at crossroads and intersections, in jungles and deserts, showing each other where to turn, what the right way is, which direction. They explain exactly where to go, what is the quickest way to get there, when to stop and ask again. There, over there. The second turnoff, not the first, and from there left or right, near the white house, by the oak tree.  They explain with excited voices, with a wave of the hand and a nod of the head: There, over there, not that there, the other there, as in some ancient rite. This too is a new religion.  I believe with perfect faith, that at this very moment.

Yehuda Amichai, from “I Wasn’t One of the Six Million: And What Is My Life Span? Open Closed Open” in “Open Closed Open: Poems.” Translated by Chana Bloch and Chana Kronfeld. (Harcourt, 2000)



Live & Learn

30 Jun 2025 at 08:22

IndieWeb Carnival: Take Two

 

For this month’s IndieWeb Carnival, Nick has picked the topic of do-overs, and the more I think about it, in the context of going back and doing something differently, the more I can’t find an instance where I’d want to give something a second crack. Not because my life’s perfect, mind you; it’s far from it. There are many, many things I wish were different, but going back and doing something again to get a different outcome doesn’t appeal to me. Because, however imperfect, however messy and unsatisfying, my life is my life.

There are two quotes that come to mind that are somewhat related to this. One is from Jobs' famous 2005 Stanford speech:

You can't connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something - your gut, destiny, life, karma, whatever.

And the other is from Watts:

Let's suppose that you were able every night to dream any dream that you wanted to dream. And that you could, for example, have the power within one night to dream 75 years of time. Or any length of time you wanted to have. And you would, naturally as you began on this adventure of dreams, you would fulfil all your wishes. You would have every kind of pleasure you could conceive. And after several nights of 75 years of total pleasure each, you would say "Well, that was pretty great." But now let's have a surprise. Let's have a dream which isn't under control. Where something is gonna happen to me that I don't know what it's going to be. And you would dig that and come out of that and say "Wow, that was a close shave, wasn't it?" And then you would get more and more adventurous, and you would make further and further out gambles as to what you would dream. And finally, you would dream ... where you are now. You would dream the dream of living the life that you are actually living today.

Looking backwards, wishing things were different, seems like a wasted opportunity to me. Because life’s unfolding right in front of you at this very moment and opportunities to do things differently are waiting ahead.


Thank you for keeping RSS alive. You're awesome.

Email me :: Sign my guestbook :: Support for 1$/month :: See my generous supporters :: Subscribe to People and Blogs

Manu's Feed

30 Jun 2025 at 07:20