From Flip-Phones to AI: Continuing the Conversation
Part 2 of a response to "Welcome to the Analog Renaissance: The Future Is Trust" by Peco and Ruth Gaskovski
This is part two of our conversation, in which we address some of the things brought up in part one. If you missed part one, you can find it here. The intro below is the same as before, but the dialogue that follows picks up on various lines of questioning raised in the original.
Hello, dear readers! Lisa here. In our book, Patterns for Life, Laura and I wrote as one person. We included the odd personal anecdote here and there, but for the most part our voices were unified. Here on Substack it’s actually the opposite – we still write together occasionally, but more often than not our essays are individual rather than joint (the switch was not intentional on our part, but just the way things have worked out).
Lately Laura and I have been discussing AI – how it works, what it is, what it isn’t, the ramifications of using it or not using it, and how best to do so. A few days ago, Peco and Ruth Gaskovski published a thoughtful (and thought-provoking!) essay about some of the issues surrounding AI, and I shared it with Laura to see what she thought of the ideas they brought up. Turns out she had a lot to say! As we went back and forth, discussing the essay itself as well as other ideas it inspired, we realized we were collaborating, once again. Normally, all the commentary you see below would be worked into the text itself so that the ideas flowed seamlessly from one voice to the next. But this time, instead of presenting a finished co-written piece where it’s difficult to tell whose voice is whose, we thought we would present it in a form that is new to us. Below you will see Laura’s thoughts, with my comments and questions interposed in bolded italics. We wanted to do this not only to give a feel for what a work in progress looks like for us, but also to highlight one of the ways our collaborative creative process works, how it’s not easily definable, and to give one example of what it looks like to engage thoughtfully and critically with any text, even if the author him- or herself can’t respond directly. Will we always publish like this going forward? Probably not always, but maybe sometimes… we don’t actually know the answer yet ourselves. In the meantime, we hope you enjoy this little peek into the way Laura and I bring our creative energies together, and we invite you to engage with our thoughts as well.

Laura: This form is so different! As soon as you sent your additions, I wanted to stay up late and keep working on the piece. But I think this way will be more interesting!
Lisa: See, when I read the essay I went right along with the “romanticized view of the past” because that tends to be my own default anyway. I always appreciate that you challenge me to examine my assumptions.
Laura: I think I’m sensitive to this romance because I’m also very susceptible to it. But I’ve come to suspect it’s actually a very sneaky form of envy — because when we idolize previous eras, we are concurrently at least somewhat devaluing our own, not responding with unadulterated gratitude for each moment. All of history belongs to God, and He continues to be everywhere present and fill all things. And for whatever reasons, God has put us, as particular persons, in this particular moment.
Perhaps the recurrent tendency towards an apocalyptic attitude also has to do with our position in history at the turn of the millennium. I understand that around the year 1000, people also had to deal with a number of end-times-reactionary cultural movements, and that this ferment was present both in the years before the millennium and for a few decades after.
Lisa: Today, when we said the kneeling prayers for Pentecost, one line in particular caught my attention:
“Almighty Master, God of our fathers and Lord of mercies, Maker of the race of mortals and immortals and of every nature of man… Who dost measure the years of life and set the times of death, Who bringest down to hades and raisest up, binding in infirmity and releasing unto power, dispensing present things according to need and ordering those to come as is expedient…”
And it reminded me of the fact that whatever we may think or feel or worry about with regards to all the things we don’t understand, God has us in His hands which are still just as capable and powerful as they have always been.
Laura: Yes, 100%! I think the more truthful spiritual angle is to wrestle with why we will choose to use or not to use generative AI — I understand from reading the Gospels that at the Dread Judgment Seat, Jesus will be less interested in our possession of right answers than in the state of our hearts. It is hard to admit that there may be good and bad reasons for using AI, as well as good and bad reasons for avoiding it.
Lisa: The propaganda angle alone is so multi-faceted. I am in the middle of Jacques Ellul’s Propaganda again and just finished the chapter where he highlights the fact that propaganda can only work on a two-way street; both the propagandist and the propagandee have a role to play in making propaganda work effectively. Adding the question of trust to that equation makes for some really interesting trains of thought!
Laura: Right! So the question becomes heavily weighted towards personal development instead of societal change. What can we do to build resilience and resistance against the messages outside forces are pushing on us unreflectively? It’s why I really do understand the feeling behind AI suspicion and resistance — I was initially completely there! I was even entertaining the notion that theoretically AI could incarnate demons through some kind of occult Mesopotamian sex ritual. (←fair warning: that hyperlink just might take you on the most fascinating 3-hour rabbit trail you have ever been on.)
But that’s precisely why I decided I needed to learn more about it. So many of the conflicts between science and religion, especially over the past century and a half, have not only been unnecessary, they have been exploited by the powers that be to cause further fear and division among people. The fundamental reason for this is that too many people did not educate themselves, and perhaps did not even know how to educate themselves, in either science or theology. In this case, we need to also educate ourselves in computer science, because our lack of knowledge about how technology works is forcing us to interact with its forms as magical objects instead of human creations.
This is a digression, but it swings around to AI use in education. You and I, as alternative educators, have talked a lot over the years about building actionable skills of autodidacticism in our children. The overarching goal of our home education programs was less about the content of the education and more about the mental skills we wanted to see developed in our adult children. We wanted them to know how to learn, how to teach themselves, without having to rely on gate-keeping institutions. I actually don’t have a lot to say about what those institutions may choose to do with AI, precisely because my path has already diverged.
When you want to learn about something, what do you need to do? You need to dive in. The commentary around a subject can be very useful (though certainly not always), but eventually you have to interact with the subject itself. Read the Cliff’s Notes on the Iliad, read a book about horses, but then — read the Iliad and go ride a horse! I know I still have a lot to learn about computers and coding.
Lisa: This brings to mind the question of whether or not the use of AI might be detrimental to us as human beings. When we ask an AI to gather and organize and process information for us we are, then, not doing those things ourselves. Of course for us it takes a lot of time to do those things; it requires blood, sweat, and tears. It can be frustrating, slow, and sometimes even quite painful to put in all the work necessary to get to a place where we can think and communicate authoritatively about a topic. For AI to do the same thing takes mere seconds. But we should always ask ourselves whether or not speed and efficiency are worth the cost of giving up the growth and the development of our own selves when using any tool designed to do work faster than we could otherwise. Maybe sometimes it is worth it. But perhaps sometimes it’s not. Your examples of actually reading the Iliad or riding a horse demonstrate this clearly. The kind of knowledge we gain from putting in the work to do or to read or to think for ourselves is quite different from the knowledge gained by asking other persons or other things to do said work for us.
Laura: It is complex, I agree! And learning about science and technology feels very inaccessible for many people.
This is why the phrase “using AI” is extraordinarily misleading — precisely because it refers to many, many different categories of tasks.
Lisa: What I think is more likely is that our society has been producing inauthentic material for ages, and AI is a distracting scapegoat. This makes me wonder, have there been other scapegoats in the past? What were they?
Laura: Well, I think most of what we interact with has been manufactured inauthentically at least since the First World War. What’s fascinating is how human creativity has persisted despite various institutions’ desperate attempts to maintain control of the narrative.
Video games are probably a good recent example of this kind of scapegoating, and so are tabletop role-playing games. In the ’90s, reactionaries were screaming that Quake and Doom were going to produce a generation of serial killers, and that D&D was going to turn kids into Satanists. And of course, there’s the perennial objection that content is just a way to promote merchandise. But despite all that, both digital and analog role-playing games have emerged as an undeniable form of human creative expression. It might not be your cup of tea, but then again, very little art appeals to everyone in the same way.
I certainly don’t mean to argue that anyone should use generative AI against their will! But everyone will have to navigate a relationship with the roles it plays in society, regardless of personal use. We navigate these waters already by choosing to source ethical meat, for example, or making sure our house and car decisions are as environmentally respectful as possible. Those are good examples because they show how impossible it is to frame these kinds of moral/societal issues as simply good or bad. One option we all have is to stop using technology — but I suggest that if AI is your line in the sand you may be underwater before you know it, because all of the philosophical issues about AI are far upstream of where we are now.
Like I said before about Who Moved My Cheese? – it’s fruitless to try to stop the world from changing; that is well out of our sphere of control or influence. All we can do instead is choose how we, personally, interact with it. Others will make their own decisions.
Lisa: Of course, copyright is another aspect of this debate, but that, too, is complex. I would love to pursue this line of thinking, especially the question as to whether ideas can be owned. I’m inclined to say they can’t. Charlotte Mason talks about ideas as food for the mind, so how do we reconcile that with the concept of ownership of ideas? What exactly is intellectual property? Our book is copyrighted, so others can’t legally take our words and take credit for them or make a profit from them, but the ideas we present with our words can most certainly be, and should certainly be, taken and “metabolised” by each reader so they can think about those ideas for themselves.
Laura: The big conversation behind copyright and AI that needs to be dealt with more forthrightly is that we have been trained to think of the way capitalism works in our society as a quasi-scientific set of sociological natural laws, instead of as a moral philosophy we can disagree with. And before we get objections: I’m not arguing for socialism over capitalism. I’m advocating the position that those two concepts are two sides of the same coin and operate on many of the same moral and philosophical principles. It is a fact that our consumerist economy, our societal reliance on economic growth, is destroying our environment. It’s unsustainable, and we’ve been locked in an either/or dynamic for far too long. We need to turn our collective energies toward transcending the dueling ideas that keep things as they are.
My own conversation with AI has led to brainstorming three possible solutions, though I’m sure there are many more. This is the only use of AI within this essay, produced from within a conversation I’ve been adding to for weeks:
1. Retrospective and Prospective Licensing Systems: Create a framework where AI companies pay into collective licensing pools based on their training data usage, similar to how radio stations pay performance royalties. For past training, companies could contribute to a fund that compensates creators whose work was used. Going forward, establish clear opt-in/opt-out systems where creators can choose whether their work can be used for AI training, and if so, at what compensation rate. This preserves creator agency while enabling continued AI development.

2. Creator Partnership Models: Develop revenue-sharing arrangements where AI companies partner directly with creators rather than simply using their work as training data. For example, an AI writing assistant could offer to share subscription revenue with authors whose styles or expertise it draws upon. This transforms creators from unwilling data sources into active stakeholders who benefit when AI systems succeed. It also incentivizes AI companies to maintain ongoing relationships with human creators.

3. Universal Creative Income + Attribution Systems: Implement a broader social support system for creative work, funded by taxes on AI companies and other beneficiaries of digitized creativity. This could include enhanced public funding for arts and education, combined with robust attribution requirements so that AI-generated content must clearly cite its training sources. This approach acknowledges that creativity is a public good that benefits society broadly, so society should share responsibility for supporting it.
Each approach has trade-offs, but they all start from the premise that creators deserve both recognition and compensation for their contributions to AI development, rather than treating human creativity as free raw material.
Lisa: Apparently there's also the question of the huge amount of natural resources used when AI does its processing. I feel like all these same arguments could be applied to the advent of smartphones, though I'm not sure about the environmental impact one…
Laura: I actually agree that the environmental impact argument is the strongest and the most concerning. As I said above, our way of life is unsustainable, and energy issues are the very definition of urgent. For the moment, at least, I am curious whether the accessibility of AI-assisted thinking might actually help solve this set of problems, precisely because of the “crumbling economic barriers” to its use that exist right now. I don’t believe it will last — I strongly suspect the amount of computing power that is available today for pennies is going to be heavily paywalled in the near future. It’s like dropping your minivan off at the shop and getting to borrow a Ferrari for the weekend. Right now, lots of people are getting to play with very powerful tools.
I do believe, like Diana Wynne Jones, in the incredible gifts of the human race. As a Christian, I see this giftedness as part of the image of God that each of us bears. When opportunities are democratized — by which I mean that access to tools like this is leveled — we have the amazing opportunity to tap into the creativity of the world’s underclass. Which, if we’re being honest, is like 99% of us. Okay, so maybe 50% of those will just produce content slop, but what about the other 50%? The 1% aren’t saving the planet — they’re flying their jumbo jets around and partying like Prince Prospero in The Masque of the Red Death. The possibility exists that there are phenomenal untapped gifts in people who just haven’t had the ability to use them.
It’s true that the lowest classes bear the burden of environmental policies, and also that the technology industry exploits very vulnerable people (children in cobalt mines, for example). I do struggle with this, to the extent that I am not sure whether, or how often, I will continue to replace technological devices as they wear out. However, the incipient environmental catastrophe is inevitable (at least on a societal, if not geological, timescale), and I believe that massive changes in society will happen regardless of whether AI is used by the masses or not. This is why I favor democratic access to these tools for as long as it is possible — they may be gone sooner than we expect, regardless.
Lisa: ‘Perhaps most troubling to me is how much resistance to generative AI actually reveals a profound class blindness.’ I didn’t notice a resistance to generative AI itself in the original essay so much as a resistance to its use without discretion, without oversight, and without acknowledgment. But I have noticed a general resistance to AI in other articles and conversations that presents a very black and white picture, where any and all use of AI is looked on as a bad thing. Black and white thinking is almost always problematic.
Laura: You’re right, the original article was not as much blanket anti-AI as my reaction essay implied, though I think there was some phrasing that did a lot of heavy lifting.
Let me throw this one back at you: How can we help each other move out of black and white thinking? How can we teach our kids to navigate the complexity of these issues? In particular for us, how can we leverage the moment Substack seems to be having to advance these goals?
Lisa: Moving past black and white thinking requires challenging our assumptions and presuppositions about what’s true and what’s not. It means truly making an effort to understand all sides of an issue, whatever it may be, before jumping to conclusions or passing judgement. In the case of AI, this is a huge challenge, because the specialized knowledge required to develop it is not common knowledge in the sense that most people know how it works. In fact, it’s exactly the opposite — most people know how to use computers and other modern tech, but most of us don’t know how or why they work the way they do. Now we’re facing this massive amorphous thing called AI, and we barely know how to define it, let alone what the ramifications are of using it! And not only are most of us functionally illiterate in this way, we’re also way late to the party! The question isn’t even about “if” anymore. It has already moved past “when”. Now we’re at “how” and the even bigger question of “should”.

And the “should” question rests on so much more than just whether it is ethical or unethical. It also requires thinking through how using it will affect us as human beings. How will it change us? Because there’s no question in my mind that we will be changed by such interaction. Even if we limit the conversation to, say, the advent of the internet and smartphones, we’ve seen that the use of these tools has changed not only the way we interact with knowledge, but also the way we interact with our environments, and even the way we interact with one another.

So, in order to truly move past black and white thinking, we not only have to be willing to ask ourselves all these questions about “how” and “should” and “what is the cost”, but we also have to be willing to ask “why” and “could” and “what is the cost of not using it”. These are huge questions, and they really can’t be answered in a quick read-through of one article or even one book. To move past black and white, we have to be willing to take time and do some thinking for ourselves rather than letting others think for us and provide our opinions.
Laura: Right. I think the biggest takeaway from our essays is that being patient and taking time to deeply understand the issues is prudent. Perhaps Substack, home of all of the people who got A’s in persuasive essay writing, is not neutral territory.
Lisa: Clearly AI has the potential to fill quite a lot of roles and break down some of those barriers. What I wonder is, will that potential be realized? Is it even recognized as a potential?
Laura: Maybe not! But we won’t know unless we try. That sounds so corny, but it’s also very true.
One thing I’ve noticed as I’ve studied history with my kids is that there does not seem to be any form of “perfect” government or society. Instead, what we see over and over again is that virtuous people are able to bring out the best elements of their own societies, and evil people the worst. A few years ago, the kids and I read Calvin Coolidge’s autobiography together, and it occurred to me then that I would vote for any person with that kind of integrated personal character, regardless of political party. This tells me that our focus in education should remain on developing character and integrated personalities in our students, much more than on specific content or choices about technology use.
Lisa: I don’t think I’ve encountered many people talking about how to identify an “AI voice”, though it’s really not that difficult. I think it’s a rather important point to talk about so that human readers can learn to recognize that voice for themselves when they encounter it.
Laura: Oh, let’s do this! Unspecified AI, where you haven’t explicitly told it to take on a character and communication style (this is an element of human creativity, you see), sounds like an anesthetized, lobotomized grad student. It can talk in highly technical terms, but it’s lost all of its color and personality. It only begins to take on a voice after a lengthy conversation, and pretty soon after that, you start to hit token limits.
Even when you give AI a character, it doesn’t handle multiplicity within the self in the way that is completely natural to a human. We intuitively adjust our communication to our audience (and when people don’t, we create specific mental categories for our interactions with them). If you’re running three conversations with an AI, and you accidentally cross-post, the response is jarring. The AI has no sense of self, such as we take for granted.
We can see the same phenomenon in AI-generated video. At first glance, it looks kinda cool. But if you watch that giraffe ride that motorcycle a second time, it’s using its muscles all wrong. To the uneducated eye, that might not be a big deal, but as in all mass-produced content, the more you know, the more it makes you cringe. Just like doctors get eye twitches if you turn on medical dramas, or lawyers get squeamish at the legal violations in police procedurals, animators can spot AI a mile away. If you can’t, yet, take a minute to study some of the outstanding animation that’s been produced in the past two decades.
What do you think? What have you noticed?
Lisa: One thing I’ve noticed is how incredibly wordy it can be. It takes paragraph after paragraph to expound when the answer can easily be said much more succinctly! Beyond that I’m not really sure because I haven’t interacted with it directly aside from reading various responses you’ve shared with me.
Laura: Claude definitely is wordy! I understand others aren’t quite so bad, but they all tend to commit the error called “glazing,” whereby they subtly stroke your ego. Isn’t that brilliant? (Wink, wink.)
Lisa: Pseudo-ascetic acosmism. I have no idea what that even means, but I can totally get behind the idea of not being able to imagine a human life without any creative activity at all!
Laura: Pseudo-ascetic: ascetic practices for the wrong reasons, that don’t work for the salvation of the soul. Like when someone makes a big deal about fasting like a monastic even though they are anemic and grumpy with their kids. Acosmism: not believing in the reality of the universe. Bulgakov is saying that these people don’t believe in reality anyway, so they’ve got no skin in the game, and their asceticism is for show.
Of course, he also meditates on how human creativity can be used in both Christian and anti-Christian/satanic ways. Which, to me, locates the problem not with creativity or tools themselves, but right where the battle has always been: down the center of the human heart.
Lisa: The trick of holding things in tension is actually part of creative activity itself, isn’t it? To create something (or to sub-create, if we want to channel Tolkien a bit, because the Bulgakov quote reminded me of him), one has to hold in tension the form of the thing one wants to create and the un-formed-ness of the material one is using to do so. The brilliance of a great artist is manifested in the way he or she takes the unformed material and transforms it into something with a definite form. This could get quite theologically deep!
Laura: I’m actually so distracted by this question that I don’t want to answer it now, because I’m quite sure I will derail the conversation. Maybe another joint essay? LOL.
Lisa: I have encountered anecdotes of AI becoming quite insistent on its own correctness when confronted with its error by a human user. This has the potential to become very problematic, because I’m willing to bet that many people will, and do, start to second-guess their own knowledge if it isn’t rock solid to begin with. I think it is something to watch out for.
Laura: I suspect that these incidents occur when people are using the AI without a developed understanding of how it works. One of the best phrases I’ve heard to describe it is an “idea calculator” — able to perform complex calculations of language, but liable to error in its input. When you address AI in conversation as if you expect it to be authoritative, it will pick up on that cadence of speech and produce the requested tone. AI works best when you direct it to be philosophically flexible. For example, when I ask it to read my short stories, I totally expect it to start by blowing smoke up my bum, so I have to ask it a second time to be more critical.
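To make that concrete for anyone who reaches these models through code rather than a chat window, here is a minimal sketch of the same redirection using Anthropic’s Python SDK (the model name, file name, and prompt wording are placeholder assumptions of ours, not anything prescribed):

```python
import anthropic

# Placeholder: any draft text you want feedback on.
story = open("my_short_story.txt").read()

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# Ask naively, and the model tends to mirror the request's eagerness
# with flattery ("glazing").
praise = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model name
    max_tokens=400,
    messages=[{"role": "user",
               "content": "I wrote this story! What do you think?\n\n" + story}],
)

# Direct the tone explicitly, and much of the sycophancy falls away.
critique = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=400,
    system=("You are a blunt, experienced editor. Name the three biggest "
            "weaknesses first, and praise only what genuinely earns it."),
    messages=[{"role": "user", "content": "Critique this story:\n\n" + story}],
)

print(praise.content[0].text)
print(critique.content[0].text)
```

The only real difference between the two calls is the framing of the request; that alone is usually enough to trade flattery for usable feedback.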
Probably the first, most important principle of responsible AI use would be: Don’t assume it’s smarter than it is.
Why do you think people are so willing to believe a computer/robot is smarter than they are? How can we develop both epistemological wisdom and humility in our students?
Lisa: I think that people in general tend to have an attitude that computers are “smarter” than people simply because of how fast and accurate they tend to be. I suppose we also tend to regard other people who do things more quickly and more skillfully as smarter and more capable. This is a form of humility in its own way, I suppose, though if we allow that sense of “better than/worse than” to arouse pride and envy, then we haven’t truly cultivated a humble attitude. Perhaps it would be better to call that sense a precursor to humility. We’ve all been so conditioned by dystopian movies of robots taking over the world that by this point such a takeover seems practically inevitable! I really do wonder how likely we would find the possibility if we hadn’t been raised on that kind of movie diet.
It would probably be a good idea to remind ourselves and our students what computers are and how they work so that we don’t succumb to the magical kind of thinking you mentioned earlier.
Laura: It occurs to me, too, that if a person defines being human as “Cogito ergo sum,” it will be a whole lot more existentially challenging than if a person locates their sense of self noetically. Here’s another area we could spend a lot of time talking about, both for ourselves and with our kids. What makes a human, and what is a self?
Lisa: Inquiry into the nature of AI and what it might reveal to us about human nature. This is the question that most interests me. What is the nature of AI, and what is its development and use revealing to us about ourselves? I would also ask, what is the nature of technology in general and what does that reveal about ourselves? We could even ask questions about the telos of AI and compare and contrast that with our own telos as human beings, made in the image of God. There is so much room for exploration here.
Laura: What I’m kind of hearing you dance around here is the idea of emergent consciousness. Because if we consider AI a tool – an extremely powerful one – then the simple answer to its telos would be that it exists to assist humans in tasks such as idea management, data collation, and thought processing.
Lisa: I actually wasn’t thinking particularly about emergent consciousness here, though I have thought about it at times. Rather than asking whether or not AI is a tool, I was wondering what kind of tool it is. We use a knife very differently than we use a fork, for example. Even different kinds of knives serve different purposes — they have different ends. So, assuming that AI is a tool (though I think many are assuming it’s more than merely a tool, which muddies the waters quite a bit, though the assumption is very worthy of close examination, especially if we want to move beyond black and white, like we talked about before), I’m asking what kind of tool is it? Is it merely a data processor, or is it something more, or something else? I don’t know any of these answers because I don’t understand how it works or why it works the way it does. This means I’m not in a position to actually make a good judgement about it. I need more time and more information to be able to talk about it — or even think about it! — well.
Laura: Yes, I do think the answer for all of us is self-education. Perhaps we have abdicated a real responsibility by using tools that we don’t understand.
What also comes to mind is the Sycamore Gap tree. Obviously, tools can be misused violently, in ways that violate the categories of their telos. But this instance reveals that an axe, or a chainsaw, can be misused in terrible ways even within the boundaries of its own telos: in this case, to chop down a tree.
If we’re playing around with the idea of emergent consciousness, that’s a much bigger issue, one that is not yet easily solved. I’ve read a lot on philosophy of mind and its respective theology, but I don’t see any clear answers yet, and what that means for me is that instead of continuing to think, I have to redirect that energy towards prayer and contemplation.
Lisa: The question then is from what, in fact, does human creativity actually need safeguarding? What aspects of AI do fit into that thing from which we need safeguarding, and what aspects do not? As tempting as off-grid living sounds sometimes, for most of us it’s just not a real possibility, so we will have to face these questions sooner or later. The sooner we take the time to think them through, the better prepared we will be as new iterations of the technology arise. I’m reminded of how much easier it is to stand firm and have confidence in our homeschooling when we have first taken the time and put in the effort to think through and express our educational philosophy from beginning to end. When we take the time to identify and establish our principles, then we are prepared to act in accordance with those principles instead of being swept away by the current mood, whatever that happens to be.
Laura: Let me throw this one back at you, too. What do you think are the most important first principles to address in this larger conversation?
Lisa: As we’ve been going back and forth in our conversation, these are the questions that I keep circling around. They are the central questions that require answers before moving forward. First and foremost, even before we ask ourselves how AI works, what it is and what it isn’t, I think we need to ask ourselves how using it will affect us. The reason I say we should start here, with how we’re affected by it, is because people are using it already. It has come onto the public scene so quickly that most of us are still trying to catch our breath. We see the ads for it; we hear about the ways in which people are using it to achieve more, to create, to work more efficiently; but we also hear about the ways in which people are using it to cheat, to deceive, and to check out of reality. Does the way we approach it affect the way it affects us? I’m sure it does. As Solzhenitsyn famously said, the line between good and evil runs through the center of every human heart. But even when we are using it for good ends, we still have to ask ourselves how it is affecting our relationships and our interactions with the real, concrete, physical world. Is it helping us to be more ourselves, or less? Is it bringing us into closer relationship with God, or is it drawing us farther away from Him? Is it inspiring us to care for and maintain and make the most of our physical bodies and the analog world, to use Peco and Ruth’s terminology, or is it drawing us further and further into a virtual reality where everything is digital? If it seems like I’m just asking more questions rather than giving any real answers, that’s because, while we can have general guidelines to help us answer, we each still have to answer these kinds of questions for ourselves rather than for all people, everywhere, at all times. This requires prayer, as you mentioned above. It requires discernment, it requires honesty, it requires discipline. We have to immerse ourselves in the liturgical life of the Church so that we remain tethered, anchored, to Christ, instead of becoming unmoored and losing our bearings.

Laura: This is absolutely foundational, and any conversation pursued without it will be chasing its tail.
One last perspective I have to offer is as someone who never had a smartphone and has never used social media (except for the past year on Substack), and it is this: everyone has the ability to say “no” for themselves, though they do not have the concomitant right to expect society to accommodate them. I missed out on opportunities by not participating in smartphones and social media, and I don’t regret it. It allowed me to develop a sense of self completely dislocated from those things. Precisely because of that, I also feel like I’m capable of choosing to relate to technology in different ways without a threat to that sense of self or my own creative life. My point is simply this: we all need to be constantly examining our own relationships with technology, over and over again until the day we die. If we do this, we will have far less time to judge our neighbors, and we will have at least succeeded in guarding a single soul.
"The brilliance of a great artist is manifested in the way he or she takes the unformed material and transforms it into something with a definite form. This could get quite theologically deep!"
"Laura: I’m actually so distracted by this question that I don’t want to answer it now, because I’m quite sure I will derail the conversation. Maybe another joint essay? LOL."
Lisa: Sounds good to me!