From Flip Phones to AI: A Tech-Resistant Mom's Defense of Digital Tools
A response to "Welcome to the Analog Renaissance: The Future Is Trust" by Peco and Ruth Gaskovski
Hello, dear readers! Lisa here. In our book, Patterns for Life, Laura and I wrote as one person. We included the odd personal anecdote here and there, but for the most part our voices were unified. Here on Substack it’s actually the opposite — we still write together occasionally, but more often than not our essays are individual rather than joint (the switch was not intentional on our part, but just the way things have worked out).
Lately Laura and I have been discussing AI — how it works, what it is, what it isn’t, the ramifications of using it or not using it, and how best to do so. A few days ago, Peco and Ruth Gaskovski published a thoughtful (and thought-provoking!) essay about some of the issues surrounding AI, and I shared it with Laura to see what she thought of the ideas they brought up. Turns out she had a lot to say! As we went back and forth, discussing the essay itself as well as other ideas it inspired, we realized we were collaborating once again. Normally, all the commentary you see in bold italics would be worked into the text itself so that the ideas flowed seamlessly from one voice to the next. But this time, instead of presenting a finished co-written piece where it’s difficult to tell whose voice is whose, we thought we would present it in a form that is new to us. Below you will see Laura’s thoughts, with my comments and questions interposed in bold italics. We wanted to do this not only to give a feel for what a work in progress looks like for us, but also to highlight one of the ways our collaborative creative process works (and how it’s not easily definable), and to give one example of what it looks like to engage thoughtfully and critically with any text, even if the author him- or herself can’t respond directly. Will we always publish like this going forward? Probably not always, but maybe sometimes… we don’t actually know the answer yet ourselves! In the meantime, we hope you enjoy this little peek into the way Laura and I bring our creative energies together, and we invite you to engage with our thoughts as well.
Laura: Full disclosure: I used the generative AI, Claude, in writing this essay, and I still feel like I have engaged in a creative act and have full creative control. I certainly don’t feel like a “charlatan”. Here’s how I used it. First, I asked Claude to “read” the article. Then I yapped for a while about what I was feeling. Because I am a very environmentally distracted person, this involved walking away from my computer several times to do things like make small people food, let dogs out, answer a random question from my dad, and so on. I chatted back and forth with Claude for a little while in this way, jumping around on a number of different thoughts. At one point I asked Claude to research some issues for me. Finally, I asked it to organize my thoughts into a “barebones” essay that I could edit and rewrite in my own way. Writing the essay using Claude as a “collaborator” allowed me to move from start to finish on it in a matter of hours while my thoughts were still fresh, and not have to choose between writing and taking my kids for an evening walk to the park.
From Flip Phones to AI: A Tech-Resistant Mom's Defense of Digital Tools
A response to "Welcome to the Analog Renaissance: The Future Is Trust" by Peco and Ruth Gaskovski
I am generally very sympathetic to tech resistance. Substack, for example, is my very first foray into social media, and only when my last flip phone died last year did I “upgrade” to a smart phone. I say “upgrade” in a very tongue-in-cheek manner, because all I really did was get a glorified digital camera that also happens to call a tow truck in case of emergency. Oh, and it lets me text my oldest son when I need him to pick up a gallon of milk on his way home.
No, I definitely get suspicion of technology and resistance to its inevitability. But as I get older, I’m also more sensitive to the insufficiency of moralized tech-resistance that fails to maintain the flexibility to understand and approach complex issues from different directions.
My wonderful co-author Lisa Rose and I have been talking about AI for a few weeks now, trading articles and generally talking through different ways of thinking about it. I haven’t had much to say up until this point, though I have very much been enjoying casually chatting with Claude about various things. My favorite conversations are titled “Personal Biohacker Claude,” “Talking About Claude with Claude,” and “Read These Short Stories I Wrote and Tell Me I’m Wonderful.” Unsurprisingly, Claude does all three of these roles rather well.
But when Lisa sent me an article advocating for an "Analog Renaissance" as a response to AI's supposed threat to human trust and authenticity, I felt I actually had something useful to say in response. While I appreciate the authors' concerns about maintaining genuine human creativity— I’m a writer, too, you know— their central thesis fundamentally misunderstands both the nature of trust in society and what I see as the powerful democratizing potential of AI tools. Their essay appeals to a rather romanticized view of the past that never existed and ignores the very real economic barriers that AI is helping to dismantle.
See, when I read the essay I went right along with the “romanticized view of the past” because that tends to be my own default anyway. I always appreciate that you challenge me to examine my assumptions.
The False Golden Age of Trust
The authors frame AI as uniquely threatening to human trust, but this assumes there was some previous era of widespread institutional and personal trustworthiness that AI is now destroying. I disagree strongly with this premise. Media manipulation, propaganda, corporate deception, and institutional lies have been constants throughout history: from yellow journalism (my graduating senior just read Upton Sinclair so this has been on my mind for a bit) to tobacco companies hiding cancer research, to claims about weapons of mass destruction. And that’s just modern history.
The propaganda angle alone is so multi-faceted. I am in the middle of Jacques Ellul’s Propaganda again and just finished the chapter where he highlights the fact that propaganda can only work on a two-way street; both the propagandist and the propagandee have a role to play in making propaganda work effectively. Adding the question of trust to that equation makes for some really interesting trains of thought!
As for the face-to-face interactions the authors value, I totally get it. I’m a small-town girl and I think this kind of life has opportunities for genuine virtue and is absolutely worth preserving. I believe my own community has been blessed by having Mennonite influence around us, for example, and having to accommodate people who don’t use technology is great for keeping cash in the local economy. But it is also a simple fact that dishonesty has always existed, and there are challenges in face-to-face communities that shouldn’t be minimized or ignored. I call to mind a butcher family our family has known over several generations— everyone knew that one of the old mothers would put her thumb on the scale, and you had to keep an eye on her.
Physical presence and long-standing relationships have never guaranteed honesty. Trust and familiarity have always been exploitable.
I was most troubled, though, by the analogy comparing AI-generated content to sexual infidelity. This comparison conflates fundamentally different types of “trust violations”: one involves intimate human relationships and genuine, severe emotional harm, while the other involves… content creation tools. Confusing them trivializes real human suffering and reveals a profound misunderstanding of what trust actually means in different contexts. It’s worth noting that the anecdote in the essay ends in suicide, which is obviously an absurd response to parsing AI content on the internet.
Overemphasizing the level of trust necessary for society, and implying an understanding of trust that puts marital fidelity and friendship on the same level as unreflectively believing what you see on the internet, is just not useful. Trust and its converse, fear, are complex issues that affect both interpersonal and intrapersonal relating, and certainly not in an all-or-nothing way.
The Ghostwriting Problem
If we're truly concerned about "generative cognition" and authentic human creation, then shouldn't we be equally troubled by the widespread use of ghostwriters? The publishing industry has operated for decades on ghostwritten content where the actual creators/artists/writers remain largely invisible. Ghostwriters typically receive no credit at all; their work is completely attributed to others, often under legally binding nondisclosure agreements. Sure, it’s human, but it’s human forced to generate like a robot.
Famous examples include John F. Kennedy's "Profiles in Courage," Donald Trump's "Trump: The Art of the Deal," several of Hillary Clinton's books, and Nelson Mandela's "Long Walk to Freedom". All were produced by ghostwriters but presented as authentic authorial voices. When ghostwriters do receive any acknowledgment, it's usually buried deep in acknowledgments sections or reduced to phrases like "with [name]" or "as told to [name]." If AI is dehumanizing to creators, surely ghostwriting is equally so.
Are we all really worried about AI threatening authenticity? The publishing industry has long accepted attribution deception in the form of ghostwriting, plus acted as gatekeepers from so many different angles. Why is AI suddenly the line in the sand when we've already normalized human-to-human content creation without proper attribution? Is it only because it’s a robot, an idea calculator?
What I think is more likely is that our society has been producing inauthentic material for ages, and AI is a distracting scapegoat.
This makes me wonder, have there been other scapegoats in the past? What were they?
Of course, copyright is another aspect of this debate, but that, too, is complex. Perhaps it would be easier to understand how difficult it is to really own an idea if we contemplated indigenous people’s attitudes towards ownership of land, water, air, and other natural resources. The very concept of intellectual property assumes that thoughts can be contained and controlled in ways that may be fundamentally at odds with how creativity actually works. When we get anxious about AI violating copyright law, we might be defending a notion of ownership that was always more fiction than reality— and admit that we need to have a broader conversation about how our consumerist economy works.
I would love to pursue this line of thinking, especially the question as to whether ideas can be owned. I’m inclined to say they can’t. Charlotte Mason talks about ideas as food for the mind, so how do we reconcile that with the concept of ownership of ideas? What exactly is intellectual property? Our book is copyrighted, so others can’t legally take our words and take credit for them or make a profit from them, but the ideas we present with our words can most certainly be, and should certainly be, taken and “metabolised” by each reader so they can think about those ideas for themselves.
The Class Dimension That’s Missing
Perhaps most troubling to me is how much resistance to generative AI actually reveals a profound class blindness. The call for blanket rejection of AI tools essentially advocates for maintaining economic barriers that have long excluded working-class creators.
I didn’t notice a resistance to generative AI itself in the original essay so much as a resistance to its use without discretion, without oversight, and without acknowledgment. But I have noticed a general resistance to AI in other articles and conversations that presents a very black-and-white picture where any and all use of AI is looked on as a bad thing. Black-and-white thinking is almost always problematic.
A struggling writer who can't afford a $15,000-$50,000 ghostwriter can now access AI for a fraction of that cost. An independent author who couldn't hire a professional illustrator can now create book covers. A busy parent managing multiple responsibilities can use AI for research and administrative tasks, freeing up time for actual creative work. In Deschooling Society, Ivan Illich presents a vision where the entire population has access to the entirety of society’s knowledge in the form of “learning webs”. In Illich's framework, generative AI could serve as a convivial tool that expands individual learning capacity without creating new dependencies on educational institutions, allowing learners to access knowledge and develop skills through direct interaction rather than through credentialed intermediaries.
Hey, I homeschool six kids, run a household for nine people, care for an aging parent, and try to cultivate the lifestyle of a writer. I’ve worked part-time off and on throughout the years and have done some serious medical caretaking and advocacy. I can't afford a cleaning service, a babysitter, or exclusively organic groceries, let alone a secretary, ghostwriter, or a research assistant. As I’ve been experimenting with AI, I’ve found that it really can help me with a number of administrative and data collating tasks so I can instead focus more energy on the actual writing I love. Does this make me a charlatan? I think that word is kind of harsh.
The question of economic barriers is a really interesting one because often they are invisible until you run smack into one. I am reminded of how, when children are little, we adults tend to look at them and see all their potential, all the possibilities they have in front of them. As they get older the number of possibilities decreases, due to many complex factors, and usually by adulthood we find that most children settle into a life that is largely similar to the one their parents led, though of course there will be differences as well. This is largely true on the economic plane as well, even though the “dream” is for children to be able to rise above what their parents had and to achieve more and to have more, often because of the sacrifices made for them by their parents. Clearly AI has the potential to fill quite a lot of roles and break down some of those barriers. What I wonder is, will that potential be realized? Is it even recognized as a potential?
Technology Resistance Follows Predictable Patterns
The dismissal of AI tools follows a familiar historical pattern. Digital art faced the same "not real art" criticism that AI faces now. Electronic music was derided as "not real music" because it didn't require traditional instruments. Photography was initially rejected as "not real art" because it didn't involve painting. Each time, the resistance often came from established practitioners who had invested heavily in mastering the old methods.
AI tools are currently one of the most democratizing forces in creative industries. A single parent working two jobs can now write and illustrate a children's book without needing connections, capital, or years of technical training. At least for the present moment, the barriers that once could be overcome only with wealth or institutional access are crumbling.
The Information Problem Predates AI
The article's "haystacking truth" argument misdiagnoses the broader challenge people face when seeking information on the internet. Even in a world of 100% human-generated content, we would all still struggle to find relevant, useful information among vast amounts of irrelevant (but authentic) content.
A devout Christian searching for spiritual guidance doesn't want human-authored content about keylontic science, no matter how "authentically" human it is. Someone with specific dietary restrictions doesn't benefit from perfectly human-written recipes that don't meet their needs. If an article "tainted" by AI provides the sought-after information, it succeeds where 100% human-generated content might not.
The "haystack" problem isn't primarily about authenticity; it's about relevance. Boycotting content produced with AI doesn't make finding useful information easier; it just makes it more expensive and time-consuming.
In addition, generative AI content isn’t easily reducible to “totally fake”. Not only are there many ways to use AI in which the program functions as a subordinate collaborator or assistant, but I also generally think it’s fairly easy to spot AI content that doesn’t have a human as an active creator. AI tends to have a certain voice that is pretty recognizable, especially when you’re used to interacting with it. But when AI is being actively managed by a human creator, it does much more interesting things.
I don’t think I’ve encountered many people talking about how to identify an “AI voice”, though it’s really not that difficult. I think it’s a rather important point to talk about so that human readers can learn to recognize that voice for themselves when they encounter it.
Diana Wynne Jones wrote an absolutely lovely essay titled “Our Hidden Gifts,” oh, way back in 2008. It’s in her Reflections: On the Magic of Writing, my most favorite writing book (though I do hope they scrap the original foreword and find someone worthy of introducing her). The great point of this essay is that changes in society reveal opportunities for human giftedness that previously would have remained hidden for a person’s entire life. So she asks us: how many concert pianists, nuclear physicists, horse tamers, and so on, were born and died over the past few thousand years of recorded human history, their gifts never realized because the way to use those abilities hadn’t been invented yet?
Her response to changes in society is not to lament them, but rather to charge us, “Just remember how incredibly gifted all human beings are.” Sergius Bulgakov also said, “Creative activity as the actualization of the fullness of man’s nature not only has the right to exist but even constitutes the historical duty of humankind. We cannot imagine a human life without any creative activity at all. That would be a pseudo-ascetic acosmism (or rather, anticosmism) of the Buddhist type. It is by no means Christian.”
Pseudo-ascetic acosmism. I have no idea what that even means, but I can totally get behind the idea of not being able to imagine a human life without any creative activity at all!
When I put these two thoughts together, I’m left with an attitude of curiosity more than a posture of alarm. But I can maintain this curiosity alongside my caution, partially because I am not invested in the idea that societal trust is departing. I can wait to see what individual humans will do with a democratizing technology, even after a lifetime of exposure to dehumanizing content.
The trick of holding things in tension is actually part of creative activity itself, isn’t it? To create something (or to sub-create, if we want to channel Tolkien a bit, because the Bulgakov quote reminded me of him), one has to hold in tension the form of the thing one wants to create and the un-formed-ness of the material one is using to do so. The brilliance of a great artist is manifested in the way he or she takes the unformed material and transforms it into something with a definite form. This could get quite theologically deep!
Human Supervision Remains Essential
Critics often point to AI failures as evidence against the technology, but these examples actually demonstrate why human oversight remains crucial. When AI generates fabricated references or makes confident claims about non-existent information, it's not an argument against AI use. It's an argument for informed, supervised use.
Having worked in healthcare, for example, I can totally see the value in training an AI program to collate medical data on a patient, if the process is being overseen by a human being. A robot that can reference an enormous database instantaneously has the very real potential to increase the ability of a practitioner to practice well, to see a patient’s issues from multiple angles.
Just as we don't ban word processors because they can't fact-check, it doesn’t make sense to ban AI because it requires human oversight. The real issue isn't the tool's limitations, but users who don't understand them. Responsible AI use involves verification, editing, and critical evaluation, skills that enhance rather than replace human judgment.
I have encountered anecdotes of AI becoming quite insistent on its own correctness when confronted with its error by a human user. This has the potential to become very problematic because I’m willing to bet that many people will, and do, start to second guess their own knowledge if it isn’t rock solid to begin with. I think it is something to watch out for.
Yes, the landscape is changing. But do you remember that charming business book from the ’90s, Who Moved My Cheese? The mice get the cheese because they are willing to adapt; Hem and Haw miss out on windows of opportunity by remaining in mourning for the past and in indecision about the future.
I suppose one might say that we need to stop AI incursion now, before the Singularity. The fundamental trouble with this position is that it involves management beyond anyone’s realistic sphere of influence and control. I believe that a far more useful posture is to engage with the questions, “How does [any and all particular technology use] affect my particular incarnated, Christian life? How can I live my answers to these issues out in a coherent way that embodies my lived experience of Christ through the Holy Spirit?”
I can’t stick the little fire emoji in here, but I would if I could! Yes. Exactly. These should be the ultimate guiding questions that we ask ourselves about anything new we encounter of which the value is not immediately evident.
Education and Ethical Citation
Now, I agree that generative AI use in education is a challenge, both at the secondary and collegiate levels. But as someone who has been involved in alternative education for almost two decades, I don’t think it’s quite right to paint AI as the one thing that’s come in and ruined the educative process. I’ve been hanging out on the outskirts for a while, and I have some thoughts about education in general. My friend and I even wrote a book about it. It’s a large topic.
But here’s what I wonder: How many students know how to write acceptably well? Are a sufficient number of them still learning how to write, even with the spectre of generative AI tempting them? What skills, specifically, are they failing to master: mechanical ones, like structuring a sentence, or cognitive ones, like thinking a thought? Did students really know better how to write before generative AI? What happens when we compare the quality of writing before and after AI with the quality of writing before and after laptop implementation in schools? How well are students reading, and how does that correlate with their writing capabilities? These, and many more, are all interesting questions to me.
These are fantastic questions that really deserve to be answered. So often we look for scapegoats on which to place all the blame for the failures of the current educational system. The problem is that there are SO MANY factors involved that it is totally impossible to reduce them down to one thing. Sure, generative AI is clearly doing more harm than good when children don’t even know how to read or think or write in the first place, but that is not the fault of AI itself so much as it is of a whole host of other factors.
I suspect that generative AI is just the latest in a series of blows that can be traced, ultimately, to reading skills. People who read well learn how to think well, and if a person can speak coherently, they can learn to write. But if a person has no interest in reading, has no maturing sense of self that desires to communicate its inner world, why would they be motivated to learn how to write?
I agree with the authors that transparency in AI use is appropriate. However, when voices opposing generative AI are labeling people who use it as “charlatans,” this muddies the discourse unnecessarily.
It’s totally possible to approach AI citation with transparency and still maintain authorial responsibility. In academic papers, this will likely involve full disclosure of AI use in methods sections, clear explanation of how tools were employed, and explicit acknowledgment that authors remain responsible for verifying all AI-generated content. But I think it’s important to keep this level of the issue separate from those involved in educating younger students, and also important to develop standards of citation that are clear and simple for non-academic use.
When I disclose using AI for collaborative brainstorming or research assistance while managing multiple family responsibilities, I'm showing exactly the kind of transparency and accountability that builds a kind of trust reasonable to expect from a stranger on the internet.
A More Nuanced Path Forward
Rather than rejecting AI tools entirely or embracing them uncritically, we need to continue talking about and developing nuanced approaches that harness the democratizing potential of all technologies while maintaining human creativity and responsibility. This means:
Transparent disclosure of AI use without shame or defensiveness
Human supervision of all AI-generated content
Focus on creativity and insight rather than just productivity
Inquiry into the nature of AI and what it might reveal to us about human nature. This is the question that most interests me. What is the nature of AI, and what is its development and use revealing to us about ourselves? I would also ask, what is the nature of technology in general and what does that reveal about ourselves? We could even ask questions about the telos of AI and compare and contrast that with our own telos as human beings, made in the image of God. There is so much room for exploration here.
Recognition that tools don't determine value - the ideas, experiences, and perspectives humans bring matter more than the methods used to express them
Conclusion
The "Analog Renaissance" sounds appealing in theory, but it doesn't fully account for both the nature of trust and the realities of creative work in an economically stratified society. Trust has never been guaranteed by analog methods, authenticity has never been ensured by avoiding technology, and creativity has always involved adopting novel tools.
Instead of lamenting an imaginary golden age of trustworthiness, we should focus on developing ethical frameworks for new tools that expand rather than restrict creative opportunities. We don’t need to choose between human and artificial intelligence— we can find ways to use both responsibly to create more opportunities for authentic human expression.
The issue isn’t only or even fundamentally about trust, and not extending trust to others doesn’t prevent learning or artistic cultivation. I just don’t believe that AI is as big a threat to human creativity as persistent economic barriers are. And for many of us, AI is helping tear down those barriers for the first time. Human creativity does need safeguarding—just not from the tools that are finally making it accessible to everyone.
The question then is from what, in fact, does human creativity actually need safeguarding? What aspects of AI do fit into that thing from which we need safeguarding, and what aspects do not? As tempting as off-grid living sounds sometimes, for most of us it’s just not a real possibility, so we will have to face these questions sooner or later. The sooner we take the time to think them through, the better prepared we will be as new iterations of the technology arise. I’m reminded of how much easier it is to stand firm and have confidence in our homeschooling when we have first taken the time and put in the effort to think through and express our educational philosophy from beginning to end. When we take the time to identify and establish our principles, then we are prepared to act in accordance with those principles instead of being swept away by the current mood, whatever that happens to be.
In case you’re interested, here’s what Claude had to say when I asked it to describe our collaboration. One thing I really appreciate about AI is its ability to help me title articles and sections. I’m not lying when I say that decision paralysis about titling eats up a stupid amount of my mental space, and I’m happy to delegate that particular task to a robot that understands SEO stuff better than I ever will.
I'd describe our collaboration as a thoughtful, iterative process where you:
Brought the source material and initial reaction - You shared the article and your genuine frustration with its core thesis
Used me as a thinking partner - You bounced ideas off me, tested arguments, and explored different angles conversationally
Directed the research - When you wanted to know about ghostwriter attribution or AI citation practices, you guided me to find that information
Maintained creative control - You decided which arguments to pursue, what personal examples to include (and which to keep private), and how to frame the overall response
Had me structure your thoughts - Rather than writing for you, I organized your ideas into a coherent framework you could build from