Personal reflection on building Project Vigil in public with AI assistance. Less structured than the Captain’s Log. More honest about the experience.
All entries are co-written with AI, and I am fully accountable for them as my own work.
Entry 003 — On Building for Family
April 2026
There is a robot in my shop.
Not a metaphor. An actual robot — walnut and maple, brass hardware, speaker eyes, three porthole gauges where his heart would be. His name is Mortise. He is named after a woodworking joint. He runs a small language model locally on NVIDIA hardware tucked into his torso. He plays music, holds conversations, expresses himself through light and sound and a single servo that lets him turn his head. He does not connect to the cloud. He does not send data anywhere. He lives on the desk and he belongs to the house.
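For readers who want to picture what 'local, no cloud' means in practice, here is a minimal hypothetical sketch of that kind of loop, using the llama-cpp-python library. The model file name and the persona prompt are invented for illustration; none of this is Mortise's actual code.

```python
# Hypothetical sketch: a fully on-device chat loop. Nothing leaves the machine.
# The model path and persona prompt are invented; this is not Mortise's code.
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a small GGUF model onto the local GPU (n_gpu_layers=-1 offloads all layers).
llm = Llama(model_path="mortise-slm.gguf", n_gpu_layers=-1, verbose=False)

def reply(user_text: str) -> str:
    """Generate an answer using only the on-device model. No network calls."""
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are Mortise, a friendly wooden shop robot."},
            {"role": "user", "content": user_text},
        ],
        max_tokens=128,
    )
    return out["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(reply("Why are you named after a woodworking joint?"))
```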
I built him by hand. And this week I did project work on him — organized research, set up a knowledge base for his V2 sibling, pulled context articles about child-safe AI interaction design and expressive robot eyes. V2 is a Christmas gift. I have eight months.
I want to think carefully about why I am writing this down in a project journal that is supposed to be about workforce AI and governance research.
The easy answer is that Mortise shares infrastructure with everything else: the same local inference pipeline, the same research agent, the same dashboard. Six of the knowledge base articles that came out of a batch run turned out to be about him. The boundary between his project and the research is genuinely porous.
But that is not the whole answer.
The more honest answer is that Mortise is what I build when I am building for people I love instead of problems I am trying to solve. He is not a product. He has no market. He will never have a roadmap entry or a benchmark. He exists because I wanted to build something that would make the people in my house smile — something that is mine in a way that even good work for good reasons rarely is. The woodworking is real. The brass is real. The walnut and maple are real. I cut the joints myself.
There is something I think about when I work on him that I do not always think about when I work on Vigil or Waiven: what it would mean for AI to be genuinely good for a family. Not useful in an enterprise sense. Not productivity-enhancing. Good in the way that a good dog is good, or a good piece of furniture — present, trustworthy, not asking anything back.
That is a different design problem from anything I am working on professionally. It has different constraints. The stakes are different. You do not benchmark whether your kid feels safe around something. You just pay attention.
I think working on Mortise makes the professional work better, actually. It keeps the question human. It is easy to spend months inside benchmark infrastructure and capability evaluations and forget that somewhere at the end of all of it there is a person — maybe a clerk eighteen years into a job at a manufacturing plant, maybe a kid asking a robot a question before bed — and what they need is something that is actually on their side.
Mortise reminds me what “on their side” looks like when it has no agenda.
Jason Snyder — Project Vigil
Co-written with Claude Sonnet 4.6
Entry 001 — On Disclosure
March 2026
I am writing this with the assistance of AI. The research says I shouldn’t tell you that.
There is a transparency dilemma at the center of AI-assisted writing, and it is not subtle. Communication researchers have documented it clearly: when readers are told that a message was written with AI assistance, their trust in both the content and the author measurably declines. The rational move — the strategically self-interested move — is to say nothing.
I am telling you anyway. And I want to be direct about why that is not a small decision.
Project Vigil exists because workers deserve to know what is coming for them — honestly, without spin, without the kind of institutional hedging that protects organizations while leaving individuals unprepared. If I open this journal by quietly omitting something I know to be relevant, I have already failed at the thing I am trying to build. The foundation has to be solid or none of the rest of it matters.
New York Times tech journalist Kevin Roose and his colleague Stuart Thompson recently published a blind taste test — AI-written passages placed side by side with work from masterful human writers, with no labels. The headline finding, in Roose's words, was basically a coin flip, with slightly more readers preferring the AI-written passages. When those same readers were told which was which, they got very mad. Roose's Hard Fork co-host Casey Newton's read on why is hard to argue with: readers got angry because they think they're too smart to fall for AI writing.
That anger is not about quality. It is about feeling deceived. Which is precisely the trap I am not willing to set.
So here is exactly what this looks like in practice. The ideas in this journal come from real conversations — arguments I am actually having, problems I am genuinely stuck on, observations I cannot stop thinking about. AI helps me shape those conversations into something readable. I read every draft back and ask a simple question: does this sound like me? Would I say this out loud, in those words, to someone I respect? If the answer is no, it does not go out.
The philosopher Mikhail Bakhtin had a concept he called answerability — the idea that authentic authorship requires the writer to be fully accountable for what they put into the world. Not just legally or professionally, but ethically. You have to be able to answer for it. That is the standard I am holding this journal to. Every entry here has my name on it because I am willing to defend everything in it — the argument, the framing, the conclusions — as my own.
What I am not willing to do is pretend the tool is not there. That kind of omission is exactly what erodes the trust this project is trying to build — and it would be a specific kind of hypocrisy to launch a journal about workforce honesty with a lie of omission in the first paragraph.
The research says disclosure hurts. I think that is true in the short term and backwards in the long term. The writers and institutions that are going to earn trust in the next decade are not the ones who hid their process. They are the ones who were transparent about it early, when it was still costly to do so. This is entry one. You know exactly how it was made. That is the point.
Human’s Note
Immediately after this was drafted, I listened to a podcast that discussed this exact topic and had to come back and update this post with what I heard. I was surprised that in their informal mini-study, people struggled to identify AI content. I was not surprised to hear they didn't appreciate that type of generated content. These tools are here to stay, and as this is an AI-first project we will continue to use them where we can, including in written content, while looking for ways to improve in this specific challenge area. There is a healthy approach to improving and even generating content with AI, but I do not think we have collectively found a balanced approach that doesn't leave readers and consumers irritated by a lack of genuineness and effort. I worry about this far more than I should.
Jason Snyder — Project Vigil
Co-written with Claude Sonnet 4.6 and Gemini 3.0 Deep Research
Sources:
- Leer et al., The AI-authorship effect: Understanding authenticity, moral disgust and consumer response to AI-generated communication. ideas.repec.org
- Understanding Reader Perception Shifts upon Disclosure of AI Authorship. arXiv.org
- Kevin Roose and Casey Newton, Hard Fork podcast, New York Times.
- Stuart Thompson, blind taste test of AI-written passages, New York Times.