We will come to regret our every use of AI
By Gabriel
Published: Mar 11 2026
Artificial Intelligence
Remoralization
It may be a bold statement, but I feel compelled to warn about the risks of using the AI tools on offer today. My goal isn’t to convince you to avoid these systems entirely, but to get you to consider how judiciously they should be used, if at all. Many people are going to use these tools regardless, so a more precise argument is needed than either uncritically rejecting or embracing it all. This piece is an attempt to draw a bold distinction between what we have and what could be. The difference is vast, and recognizing it highlights opportunities for the road ahead.
Machine minds and machine hearts
In a relatively short period of time, “doing work with computers” has gone from a novel and fringe idea to the irritating background noise of our lives. Institutions shifted away from paper records to digital files, and many of us shrugged and called it progress. Change is often good, but not all change is good. Even change that appears good at first can come with serious downsides. While change itself is inevitable, there will always be alternative paths forward. As technological tools become more enmeshed with every aspect of our lives, it becomes increasingly important to ask how and why. The stakes are high, and lives hang in the balance as the wartime applications of AI come to the forefront.
“AI” has become a vague category covering a wide variety of tools and systems: LLM chatbots, generative tools, and automated flows. These tools, when combined with all the information available, create a colossus of immense power. People are asking themselves many questions, but the most actionable seems to be how to interact with it, if at all. Personally, I think it is more important to consider the consequences of its use than to merely assert which use-cases are and aren’t valid. Regardless of our intentions, these systems have structures and characteristics that must be understood to form an informed opinion on their use. The benefits of this particular technology may be overshadowed by the dangers created by our indifference to the fine details of its implementation.
In any discussion about AI use, I often see people fiercely exclaim that AI tools are no threat to their domain of expertise. I call this Gell-Mann Apathy: people are often more accepting of using AI for endeavors outside the domains they care about, but much more judicious and critical of AI use within their own. It is the skill-domain equivalent of Gell-Mann Amnesia, where people can see the flaws in reporting on subjects they understand, but take things at face value outside their familiarity. Proficient programmers argue that AI agents are no threat to truly skilled software developers, artists point out that generated creative works lack substance, and writers can foreshadow the consequences of devaluing the written word. Only someone wholly under the spell of the AI hype would dismiss all these critical objections. On the other hand, we can certainly recognize the powerful incentives that drive the use of these systems.
For example, I will explain the magic behind ‘vibe-coding’. Instead of merely using LLM chatbots to speed up searching for information and writing code, why not have the ‘AI agent’ directly try running the program? That way you automate away the step of copying error messages back and forth. The consequence is that ‘software that compiles’, and even ‘software that passes tests’, is something that can be effectively brute-forced with enough computing power and energy. At least for now, it would seem that the financial costs of vibe-coding to end-users are being discounted by orders of magnitude. But the answer to flaws in AI-written software is often to throw more AI at it. Instead of adding ‘don’t make any mistakes’ to the prompt, you use another agent to come up with known issues to check for and test against. To the degree this all works, it’s fairly impressive, as long as one doesn’t try to imagine the absurd levels of waste involved. To make matters worse, this huge cost is being paid to seize a powerful monopoly over computing itself. The long-term impacts on privacy and freedoms are hard to overstate.
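To make the mechanism concrete, the loop described above can be sketched in a few lines. This is a deliberately naive illustration, not any vendor’s actual agent; `vibe_loop` and `generate` are hypothetical names, and `generate` stands in for whatever model call is really used:

```python
import os
import subprocess
import sys
import tempfile

def vibe_loop(generate, max_iters=5):
    """Naive sketch of the 'vibe-coding' loop: ask a model for code,
    run it, and feed any error output straight back as the next prompt
    until the program exits cleanly or we give up."""
    prompt = "write the program"
    for _ in range(max_iters):
        code = generate(prompt)
        # Write the candidate program to a temp file and execute it.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        try:
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, text=True)
        finally:
            os.unlink(path)
        if result.returncode == 0:
            return code  # 'runs without crashing' -- not the same as correct
        # Automate the copy-paste step: stderr becomes the next prompt.
        prompt = f"fix this error:\n{result.stderr}"
    return None  # gave up; each extra iteration is just more compute burned
```

Note how nothing in the loop checks whether the program does anything *useful*; success is simply exit code zero, which is exactly why this can be brute-forced with enough compute.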
The hardest part is to recognize that the current iteration of these tools isn’t the only possibility. It is entirely possible, maybe even within reach, to have a set of implementations that respect digital autonomy. Economically sound, pragmatic use of these techniques could be a boon to society at large. Sadly, we’ll never get there if we treat the existing crop of AI tools as the only option. In identifying the broader range of possibilities, it becomes much easier to recognize and work towards better ends. This requires us to wrestle not with the comforting distractions sent our way, but with the fundamental realities that challenge even our own foundations. It is a difficult process, but it is the only way to chart a different path than what malevolent forces have planned for us. If we are not careful, we could already be living through (or have let pass) peak technological freedom. If this is the case, we must consider the grave impact it is likely to have on all other aspects of our lives.
It is high time to explicitly recognize that it is our responsibility to ensure technological advancement isn’t an idol we sacrifice society and each other to. Yes, the fruits of powerful tools and systems can be wonderful for us and humanity at large, but there will always be dangers to keep watch for. Merely accepting anything offered to us, or even tolerating what is imposed on the public without judgement is a recipe for disaster. Enshrining even the best protections in laws or regulations will have little to no impact when the abuses are forged into the system structurally. Now more than ever, it is paramount that we recognize the fine line between using tools and letting systems use us. Many systems of our time definitely blur those lines, and often it seems unavoidable.
Tool use is what makes us smart
I wouldn’t be writing this if I didn’t love technical tools and what they can do for all of us. I love that, at least for now, the open web offers the potential for genuine and authentic bottom-up cultural expression and exchange unprecedented in human history. I am actually quite excited about how sophisticated systems make it easier than ever to discover insight from others with radically different experiences. For all its faults, even YouTube deserves a great deal of credit for being a phenomenal resource for learning new things. Computers and the Internet have certainly democratized not just access to information, but also engagement with culture as a whole. It is clear that we haven’t even begun to appreciate the limitless potential technology can offer us. That potential comes with risks, and it is important that we recognize the inevitable shifts in power for and against the people.
For example, if somebody asked me “should hospitals use AI?” I would have to ask for clarification. I certainly don’t think medical staff should have to consult with a chatbot before taking any action or making a decision. On the other hand, it would be absolutely negligent if we didn’t improve hospitals with the advancements in machine learning, automation, and robotics. The line gets really difficult to see when the issue comes to personal health data. Unfortunately, information is fungible. This means that once information is collected, it is not possible to guarantee it won’t end up in the wrong hands. The more sophisticated these systems get, the more dangerous their misuse becomes. As the technological landscape consolidates, the stakes only get higher, which creates a powerful feedback loop for further control and tyranny.
I’m no “techno-optimist” because I know the details matter.
I’m no “techno-pessimist” because I can see how much potential is needlessly wasted.
I wholeheartedly believe the way forward is with equal parts courage and skepticism.
Without courage, you’ll never try to improve anything.
Without skepticism, you’ll fall for every half-baked failure that comes along.
I can recognize the appeal in leveraging all that can currently be known to build a CI/CD pipeline to improve society on scales big and small. Quite ironically, it always seems easy to argue for more bureaucracy in the name of efficiency. Incorporating people into automated workflows is merely a formalization of the idea that civilization isn’t something we co-create, but something that manages us. Ultimately, it would seem that this is the true goal of the “AI industry”. People seem to believe that this specific implementation is the only way to create technological advancement, but this isn’t at all true. It is highly ironic that those who are the most fervent proponents of imposing certain tools on the public are also the fiercest in opposing bottom-up innovation. It seems that when the benefits of the tools are too good to pass up, suddenly real competition is too dangerous to allow. ‘Innovation’ that must be protected from failure (or even from slowing down) is domination disguised as progress.
This is because if individuals, institutions, and communities were able to build and experiment on their own terms, ‘disruption’ would become the norm rather than the exception. Powerful interests would have to spend much more effort keeping pace with genuine advancement, rather than managing relative stagnation. It is baffling that in a time of rapid worldwide information sharing and collaboration, we are still confined by structures constructed to gate-keep bottom-up change. The answer to why is outside the scope of this piece, but ultimately technological freedom is deeply intertwined with freedom in other domains like economics, civil rights, and even health. It is worthwhile to appreciate the efforts in those areas. One of the biggest reasons people struggle to recognize the potential for a better future is just how many things would rapidly change if negative feedback loops were replaced with positive ones.
We already have a great deal of ‘power tools’ downstream of artificial intelligence research. Text-to-speech and speech-to-text have both gotten much better and more accessible. For all their limitations, even small LLMs can do fascinating things. Interacting with computers and systems via chat or voice is a genuinely useful pattern to consider. With the right approaches, and by doubling down on hardware & software freedom, I am confident a great deal of good can come without the disastrous consequences of our current path. The good news is that the problem isn’t actually the technologies themselves, but how they are packaged. In hindsight, it would seem that if digital privacy were as unreachable as people have been conditioned to believe, the efforts to consolidate our entire computing experience wouldn’t be necessary.
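As a toy illustration of the ‘chat as an interface to your own machine’ pattern, here is a minimal sketch that runs entirely locally. In practice a small local LLM (or a speech-to-text frontend) would do the intent parsing; a keyword table stands in for it here, and `handle` and `COMMANDS` are hypothetical names for illustration only:

```python
import datetime
import platform

# Map recognizable intents to local actions; no cloud service is involved.
COMMANDS = {
    "time": lambda: datetime.datetime.now().isoformat(timespec="seconds"),
    "system": lambda: platform.platform(),
}

def handle(utterance: str) -> str:
    """Route a chat (or voice transcript) request to a local function."""
    for keyword, action in COMMANDS.items():
        if keyword in utterance.lower():
            return action()
    return "Sorry, I don't know how to do that yet."

# e.g. handle("what time is it") returns the current local timestamp
```

The point of the sketch is the shape of the pattern, not the parser: the interface is conversational, but every capability stays on hardware you control.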
Because nothing is new under the sun, I’m going to draw a comparison between the expansion and domination of social media in the 2010s and what we are likely to see with AI moving forward. It would seem that despite spending billions on “AI safety”, we are in the process of making the very same mistakes at a whole new level. This is precisely because I don’t believe the harms from that era were an accident, but rather the intended outcome. While there is now a growing push by various governments to address these issues, I can’t take them at face value when we are in the process of rapidly re-creating a much worse situation with AI.
We’ve seen this all before
In hindsight, it is clear that Big Tech social media served a variety of functions. For the vast majority of people, it transformed the web from something you would surf and search into a doom-scrolling casino. With huge numbers of people flocking to participate, businesses, individuals, and even institutions felt immense pressure to adopt these growing platforms. This was a terrible deal. Social media giants would eventually hold their captive audiences hostage, and algorithms were tweaked to effectively ransom voices’ own audiences against them. The deal changed over time, first through gradually reduced reach, then through blatant and extreme acts of censorship.
The mass adoption of social media had disastrous consequences, but it filled a very useful niche. By leveraging the rise of the smartphone, these companies were able to offer a strong illusion of what a web open to the masses would be like. This came at a high cost: shifting online social interaction from a diverse range of self-styled, unique online experiences to a standardized commercial machine was a huge blow to the concept of online anonymity. What was once common-sense advice for staying safe online began to seem suspicious, if not dangerous. And the more people participated on social media rather than the open web, the more cyberspace itself consolidated in favor of this trend.
The impacts of all this have fundamentally altered our online experience. Surveillance, censorship, and algorithmic manipulation are a given. Despite many data breaches having disastrous impacts on people, we are seeing an acceleration in the amount of information demanded from them through ID verification schemes and other measures. The combination of social media dominance and the rise of the smartphone was a double-tap on individual privacy online. Your digital footprint is no longer mere traces in various places, but a commodity bought and sold in real time. The invasions of privacy themselves created new risks, as sensitive information about people was abused for a wide variety of ends, from scams to hostilities.
The worst of it all is contemplating what could have flourished in its absence. If we hadn’t put all our eggs into the Big Tech basket, we could have had radically better systems performing the same functions. Systems and tools are built on assumptions, and those assumptions are usually built on the way things are. It takes radical vision to imagine, never mind build on, a different set of principles. As the ground shifts, so does the nature of the systems built on top of it; that is why the online social environment has been so chaotic and difficult to keep up with. The trouble is that we have become so reliant on consolidated technological structures that the idea of truly independent digital infrastructure is fringe. So much so that the idea of building a ‘cyberspace for and by the people’ is essentially unthinkable.
The trap is being confined to the dominant paradigm. Power decides what is projected top-down, and it is ultimately fruitless to try to ‘out-compete’ these systems on their own terms. In hindsight, there have been many iterations of the same psyop to stifle independent technical talent. The pattern is quite straightforward: pitch to status-seeking individuals that they’ll be rewarded handsomely for building what power needs. Instead of having government build social media and the censorship apparatus on top of it, the job was auctioned off to whichever platforms could get the most users. Social media quickly transformed from connecting with contacts into ‘people farming’ for data collection. We then saw the very same pattern with the blockchain ‘revolution’. Not long after a digital token for ‘peer-to-peer digital cash’ caught worldwide attention, the focus shifted from emancipatory technology towards running ever more brazen Ponzi schemes. Decentralization became less about ‘changing the game’ and more about trying to beat the malevolent forces at their own game. None of this means there aren’t merits to social media, blockchain technologies, or even AI tools, but there are fine distinctions that need to be made.
The feedback loop
There are a lot of great reasons to never use ‘AI tools’ in the way they are currently offered. Research has already shown profound cognitive impacts from outsourcing your thinking. Multi-media generation and chatbots double down on what is arguably the most dangerous aspect of social media: it burns you on a pyre of your base impulses and then pours gasoline on the fire. If these tools were physical products, they would likely be packaged like cigarettes, with terrifying warning labels. Yet despite all this, we constantly see people prophesying that these are the future, and that one must become familiar with them or be left behind. In fact, there is a desperation behind the adoption of these tools, a burning desire to ‘one last sprint’ their way out of the seemingly inevitable ‘permanent underclass’.
This is because what both AI evangelists and critics can see is the all-consuming nature of how the game is played. AI as it is currently understood is not mere technology, but a system of total technological domination over the public. Just as institutions and people have already ceded too much of cyberspace to the cloud, we are in danger of offering even more of our lives and society on the altar of centralized computing. The ‘singularity’ was never to be an economic or technological boon, but rather the collapse of society under the weight of digital totalitarianism. Naked human dominance and tyranny was the face behind the techno-utopian mask. A generation was evicted from the ideal of home ownership by a combination of economic and social forces; it would seem the same is taking place in cyberspace. ‘Hardware is the new homes’, as the public becomes priced out of securing a modest home server.
For at least the foreseeable future, the path of least resistance will always involve using centralized AI tools. While it’s certainly possible to run your own models on your own hardware and energy, you’d be at a massive disadvantage compared to those using the discounted costs of the cloud. This creates a significant financial incentive to shift capital away from producing consumer-friendly hardware to equipping data-centers to take over computing itself. Both state & private investment in AI giants is effectively investment in seizing computing power and digital infrastructure from the public. For those who are already well-assimilated into the cloud, the difference is imperceptible. For those who wish to reclaim and protect digital autonomy for the people, the game is all but entirely lost.
Regardless of how ultimately useful the technological techniques behind AI tools are, what people are actually afraid of is the runaway feedback loop that seizes control of cyberspace as society is further dominated by it. It may not matter what currency we use, or whether our government ID is digital, in a time where everything needs to ask the central computers for permission to take any action. This could be tolerable if we saw a commensurate rise in innovation and invention, but it seems that was just the marketing for invasive control and surveillance. People are beginning to realize that the very nature of this system is anti-social and inhuman. The question isn’t whether it is bad or good, but rather how bad it’s going to be.
The ‘Super-intelligence’ red herring
Critics often fall into the trap of believing these systems have no use or utility. It should not be at all surprising to see fascinating applications of bulk-processing the totality of society’s creative and intellectual works. The sheer magnitude of powerful machines and useful mathematics is genuinely impressive. With some margin of error, and a pretty intense amount of waste, it does seem achievable to fully automate a wide variety of tasks. The combination of sophisticated automation and vast information collection can certainly provide the illusion of super-intelligence, but this would be like giving paper the credit for all the knowledge contained in books throughout history. The mythology of AI super-intelligence is itself a powerful force for disempowering people. Just as a gang of brutes can keep a lone dissenter intimidated, people are likely to develop an inferiority complex relative to the processed output of all of human creation. This insecurity is deliberately fostered to build on learned helplessness, making AI super-intelligence an illusory self-fulfilling prophecy.
When pressed, even the most fervent AI evangelists will retreat to “they’re just tools to help smart people work harder”, but it’s clear their heart isn’t in it. If the goal were truly to unleash a new revolution of innovation and progress, we would see sincere efforts to radically improve education and build up people’s skills. Yet if anything, it would seem that AI is meant to be not an actual industry, but a bureaucratic babysitter for a public treated as a nuisance at best and an existential threat at worst. Schools and workplaces are being outfitted with systems to demote people from mere cogs in the machine to its subjects. With this level of dispossession and displacement, it is no surprise at all that the primary applications of AI are distraction and warfare. None of these troubles are direct consequences of technological advancement; they are the culmination of power plays through the long arc of history.
Cloud AI has an immense advantage when it comes to the magnitude and variety of information available. Scale gives these large corporations access to a treasure trove of information while copyright enforcement turns a blind eye. This is an immense asymmetric advantage that allows for the building of very useful information management and retrieval systems. But this usefulness cuts both ways. The consolidation of information inside these systems drives people to become more dependent on them. The mental atrophy from using LLMs to do our work is accelerated as other research skills wither. Independent verification may be entirely overshadowed by all other sources reflecting back the outputs of AI systems, which may or may not be up to date or even accurate.
“We have altered the deal, pray we don’t alter it further”
One of the biggest traps is assuming that just because these AI tools are presently useful in general, they will always remain productive in specific domains. Your quirky genius companion is likely to reveal its true colors when alternatives fade. Just as we are seeing consumer computing brought out of reach for most individuals, so too can specific capabilities be withheld from the users of AI systems. We may be able to salvage utility from cloud models to make cute graphics, learn interesting things, or even build software, but this is all on borrowed time. Not only will these tools continue to become more powerful; the mechanisms of control over them will also become more sophisticated. It is entirely possible, if not outright guaranteed, that at some point these tools will narrow their usefulness towards various ends and withhold access from particular groups and people. This form of ‘cyber exile’ effectively transforms everything and everyone connected to the system into a potential adversary.
It is also not a given that these systems will remain as inexpensive as they presently are. The economics appear wholly unsustainable. The costs (in price or otherwise) to use these systems must inevitably rise. Therefore, all reliance on these commercially available AI tools is taking on an indeterminable future debt in exchange for short-term gain. This is highly concerning considering the already clear troubles relating to using these systems. We have already seen institutions impose AI systems on employees; it is an open question how easily individuals will be able to escape them at all. For it is not just about an individual’s choice to use these systems or not: regardless of that choice, these systems are going to be used against them. Facial recognition and other surveillance systems are already ubiquitous. Opposing this adversarial model of technological tyranny is quite difficult, and it gets more challenging over time.
Getting off the fence: refusing to compromise on humanity
I genuinely get it. These tools are very handy and can do a lot of interesting things. Telling someone to avoid using them feels like asking them to take on an immense disadvantage. But that ‘downgrade’ is relative to your perspective. The drive to replace human creativity and input with centralized compute is self-defeating on its own terms. It means that all those rushing to become familiar with these tools are, at best, racing to push themselves out of society. The paradox of the desperate race to not be ‘left behind’ is realizing what you had to give up to participate. My point is that being a mere technician for AI tools is highly likely to be a ‘crowded trade’, and practitioners will be replaceable by design. The exponential consolidation of cyberspace didn’t begin when people started using ChatGPT, but when people shifted away from independent websites to Big Tech social media services. It’s a much longer trend, and we’re only beginning to see the deeper impacts.
I am also not so naive as to prescribe an all-or-nothing approach. I recognize that many are going to be confined to these systems for a variety of reasons. It is paramount that we recognize and address structural factors rather than turning on and attacking people making different choices than us. You can’t disarm totalitarian thinking with “Join me or else!”, but rather with the strength to endure long enough to build something better. People are going to use these systems, and there will be many disastrous consequences. People are also going to strive to avoid it all at great personal cost. There will be all kinds of people everywhere in between. It is paramount that those of us concerned about these issues focus on connecting with the people, rather than merely fighting the machine.
This is because what is actually more powerful than all these systems is what we see in each other. Despite all the hype about needing to sacrifice each other for AI to ‘defeat China’, we’re going about it all wrong. A human society that cherishes and nurtures personal agency and capacity will absolutely out-compete a system that trades what we are for false idols. It is fatefully ironic that those who desperately wish to surpass all of human achievement, knowledge, and culture are entirely unappreciative of its wonders. Art without experience, words without coherence, and even software without sense all reveal how erroneous the current path actually is. The question was never what tools we use, but what they’re being used for. We have a small window of time to refocus on meaningful connections rather than what algorithms choose for us; doubling down on what’s real is the winning move.
Presence and purpose
Everything about our present digital experience is aimed at preying on our panic. That pressure is imposed via a variety of means, but is always used as leverage against us. First and foremost, it is crucial to protect your mind in these times. Peace and sanity don’t have to come at the expense of being informed if you properly pace yourself. Taking the time to meaningfully engage with what’s important to you allows you to bring the care that’s called for. No matter what you’re doing, be especially focused on ensuring you’re being deliberate. A relatively small amount of self-awareness can help ensure you’re using tools rather than having them use you. The line between the two is being blurred all the time, so it’s always good practice to question why and how you’re doing things. Taking the time to dive deep into the details of your experience helps keep your real goals in mind.
I won’t condemn you merely for using AI tools, but I’m going to expect a lot more out of what you accomplish. With great automation, I expect much greater care in the precise details. With the power to accomplish so much by using these tools, how you choose to leverage them tells me a great deal about what you actually prioritize. Merely generating ‘slop’ to stay relevant in social media algorithms reveals that the message was never the point. Using AI agents to ‘vibe-code’ freedom tech can be impressive, but can also reveal the limits of your imagination. I am personally skeptical that those who choose the ‘easy route’ are willing to put in the care that is truly needed in our times. It’s all well and good to retort “it’s not the critic who counts”, but then don’t say we didn’t warn you.
With actual vision, we could demand so much more out of what’s being offered. These companies came onto the scene promising an acceleration of growth and innovation, but in practice we are seeing breakneck corruption and escalating wars. I encourage you to recognize that you don’t have to go with the flow of top-down madness. It is difficult to chart your own path, but it is equally rewarding. Anyone saying you must yield to these systems is ultimately gambling with your future; I encourage you to take it into your own hands. Despite what the hype would have you believe, there is a phenomenal amount of opportunity outside of just paying for tokens and waiting for more tokens.
If you trade away your ability to think on your own, or develop your skills only to compete in mass media algorithms, those algorithms can now ‘cut out the middleman’. It’s not that people are replaceable, but that our role in these highly consolidated structures was always arbitrary. People and communities are capable of so much more than being managed by top-down control systems, and have so much more to offer outside those confines. I would encourage you to protect and preserve what really matters in these mad moments: communities, connection, and genuine expression. It is clear to me that the problems of our time are constructed out of our distance from those things. Do not let the demeaning of the people turn you against yourself and others.
Fighting for technological freedom
Today, the vast majority of people are unaware that there are different approaches to technological systems. Despite Free and Open Source software having an outsized impact on the technological landscape, people are often still ignorant of the philosophy behind these things. It is a tragedy because an unrelenting expectation of user freedom would have avoided the social catastrophes of the social media era, and the many dangers behind AI systems. The answer lies in directly opposing the maintenance of technological monopolies as a force for control over the population. The greatest threat to technological totalitarianism is democratized innovation. Instead of relying on Big Tech AI companies to be the arbiter and bottleneck of technological progress, society could introduce advancements in parallel.
It is quite convenient for those seeking power over the public that the problems created by corporate AI are being used as a pretext to crack down on the Free and Open Web. Online anonymity is being ‘phased out’ as corporate infrastructure creates managed gates over online services. Legislators are more than happy to tighten restrictions as they face a frustrated populace. These days I am growing skeptical that the ‘clearweb’ will survive as a viable tool for dissent. The vast majority of the public gets all their information solely from corporate cyberspace, and governments are taking full advantage of that fact. Despite all this, we are still quite far from ‘game over’ when it comes to online censorship. If we are willing to ‘up our game’ in technological resistance, there is still a lot of potential.
Independent cyberspace, be it from small personal blogs to massive online communities, needs to be fiercely seeded, nurtured, and protected. Salvaging what utility is left of existing systems is a necessary and urgent priority. There is a great deal of hardware available that can be used for all kinds of endeavors. Making the best of it is going to require acquiring and mastering new and old skills. Instead of begging corporations for the right to repair, it is clear we’re going to have to seize it with reverse engineering.
Cultivating creativity
The greatest danger of our present moment isn’t hypothetical future risks, but what we’re already losing. So many are merely going through the motions of human connection, not realizing that something fundamental has broken. The first AI to destroy humanity wasn’t run on a data-center; it was the bureaucratic machine run on people. People kept going through the motions as rigid rule-following calcified every social interaction. The very fact that people could meaningfully intervene in the moment was a bug that was always going to be patched out eventually. As bad as fully-automated AI tyranny is, it’s less a radical shift and more the end result of a very long-running trend. The damage only appears more extreme because we don’t know what we have until it’s gone.
What genuine creation has the capacity to do is remind us all what we’ve actually lost. For most of history, culture was a negotiation between top-down impositions and genuine bottom-up expression. In recent decades, raw domination has managed to tip the balance by surgically severing the bonds that represented firm limits on top-down power. If we want a human culture, healing those wounds is the only thing with a hope of turning things around. The painful scars make this a very difficult problem to tackle. No progress can be made by playing the same game of treating each other as replaceable at best, and as resources to be expended at worst.
The best thing about generative art and media is that it forces us to answer a difficult question: “What is the point of creating?” The machine can do it all faster, and with pleasing enough qualities for little effort. Why write when it has all been said already? Why sing if we can hear any song we like? This illusion of abundance strikes at the real suffering behind our existential starvation. Choosing to create is an inherently relational exercise. It is the purest expression of what we’re capable of doing for each other. Culture is woven from threads of connection made by people spending their time on meaningful creations. It was never about the colors, the sound, or the witty words strung together. It was always about the context behind our shared experiences. The ‘slopification’ coming for all digital media is just the extension of our lives being cluttered with mass-produced junk. Appliances, tools, and even instruments lack the care and craftsmanship that used to be taken for granted and is now just a memory.
The point is that a better future requires people willing to put care into things, not for the sake of financial gain, but because it is the right thing to do. That can only come when we recognize our own humanity and work to express it on any level we can. It isn’t something that can be done alone. It starts with you, but it’s not about you. A pro-human future needs actual humans, and villages need villagers. The system profits from imposed isolation because the void it creates in our hearts will never be filled by consuming. This cycle can be broken, and it requires us to be willing to be there for others. Slop isn’t the enemy; our own indifference is.