The State of Cyberspace: Twilight Hours
By Gabriel
Published: Jan 07 2026
Tags: Technocracy, Government, Censorship, Surveillance, State of Cyberspace
This last year has been a difficult and chaotic time for those interested in preserving and advancing the pillars of digital autonomy: privacy, independence, and security. Governments are working overtime to weaponize cyberspace to gain dominance over their subjects, and corporations are constructing the machinery to enable it. To make matters worse, much of the public is either unaware of or entirely apathetic to the looming threats, except in ways that can be used as a pretext for expanded top-down control. These concerns are far from purely hypothetical or speculative; rather, they are the consequences of many powerful and long-running trends. You can listen to Keonne Rodriguez outline the consequences of trying to take these forces head-on on the Watchman Privacy Podcast. If that seems too far removed from an ordinary person’s difficulties, you can read this account of how a person was locked out of their entire digital life by Apple. These are two concrete but highly significant and related instances, both downstream of the consolidation of full-spectrum technological dominance over the public.
The cause is not at all hopeless, but the hour is quite late. Many knowledgeable cyber rebels are wondering whether this is ‘game over’ for a free cyberspace. I can certainly empathize with the terror, and can absolutely admit to feeling it first-hand. In times like these it is crucial to be aware of problems, but not petrified by them. The antidote to fear-based paralysis is proactive effort taken on sustainably. The good news is that there is a great deal that both the technologically skilled and those who excel in other domains can do to meaningfully address these challenges. This is because the terrors in the technological dimension have origins outside that realm. A broader base of knowledge is absolutely required, and so is a much greater emphasis on responsibility and seriousness. As I outline these dangers, know that hardly any of them exist in a vacuum. The interrelated dynamic of these problems is what seriously raises the bar for comprehension and correction. This is an attempt to give a glimpse of the ‘bigger picture’ as I currently understand it.
Novel assaults via cyberspace
The main lesson that should be learned from the mid-2010s to the early 2020s is that mass human misery is a force multiplier for all the predatory aspects of our present digital experience. This factor creates a tectonic divide between those who are vulnerable to all kinds of harmful psychological manipulation, and those who may be spared from it for a variety of reasons. This couldn’t be more clear in the rapidly escalating ‘Chatbot Psychosis’ phenomenon, where lonely and/or wounded individuals are easy prey for advanced neurological traps. To make matters much worse, these ‘soft’ dangers are amplified by shocks of economic instability and rising precarity. Those shocks are in part being accelerated by the rapid deployment of massive data-centers, which are ostensibly for further AI deployment. This alone is a significantly powerful feedback loop for the devastation of the public.
One of the more troubling developments when it comes to the digitization of everything is its impact on the workforce and institutions. An extreme potential example would be the person who came forward claiming that ridesharing apps use a scoring mechanism to flag desperate drivers so that they can be paid as little as possible. This may sound like far-fetched speculation, but there are already individualized pricing schemes being tried out by retailers, and it has been a feature of online sales for some time. These tactics raise questions about what possibilities there are in the future for a “multi-tiered” society to leverage all kinds of incentives and penalties to shape behavior. This potential system is far more opaque than a mere state-run “social credit system” because it would be a mesh of various business and institutional interests, making it very hard to even pin down who should be held accountable for specific abuses. To make matters worse, many of these excesses are effectively normalized today. It can be very difficult to draw the line between acceptable business decisions and what is very clearly organized collusion.
One of the more important developing trends is the wars and rumors of wars that threaten to upend what little stability remains. ‘War footing’ is a powerful pretext for corruption and for the erosion, if not outright suspension, of civil liberties. It should not be ignored that as countries begin to rearm, cyberspace is becoming far more overtly tyrannical in favor of top-down control. It would certainly appear that we are in the early stages of a ‘whole of society’ preparation to eliminate dissent in advance. It is anyone’s guess what that preparation is actually for, but we can fully expect the many ongoing top-down threats to digital freedom to be rapidly escalated as this continues. As such, it is our duty to be proactively focused on the wide array of methods by which our technological experience can be leveraged against the public to nullify political, social, and economic dissent.
At minimum, it should be expected that a great deal of the data collection over the last decade has been very fruitful in precisely tuning psychological manipulation tactics. Social media giants have been enabled to collect vast amounts of mental health data about individuals in even the most sensitive contexts; it would seem that physical health is next. Health monitors and various ‘wearables’ are not necessarily new, but with the rise of AI technology there is a desire to rapidly expand real-time bio-surveillance of individuals, and of the public at large. If the mass abuse of the people’s mental health information can be any guide, we can fully expect the further digitization of health to be a similar-scale disaster, if not a much worse one. Just like real identities online, we can see biometric surveillance move from fringe to trendy, and eventually to all-but-mandatory. It is clear that the politicization and commodification of health will reach exponentially new heights in the relatively near future. When the public is ‘feeling the pinch’ from all directions, there is an understandable impulse to latch on to any readily available ‘solution’ to make the pain go away.
There are many concrete examples of this already. We had the extreme and overt ‘biosecurity’ tyranny of the covid years, where people were denied access to employment or even society generally due to injection status. During the crisis, governments and corporations tested the limits of what they could impose on people. Platforms brazenly suppressed debate on state-imposed measures, and institutions harshly punished internal dissent. Those seemingly ’temporary’ or ’emergency’ impositions are certainly not as unthinkable now as they would have been prior to 2020. Far more damaging than the measures themselves is the erosion, if not outright eradication, of medical ethics that has not been repaired. This means that any health information is almost certainly going to be weaponized as much as people’s social media information has been, if not more so. This is escalating at a time when the youth face extreme pressures to leverage any ‘quick fix’ to thrive in a predatory economic and social environment. This can be seen plainly in the ’looksmaxxing’ phenomenon, where many either popularize or aspire to biohacking their way to an improved appearance at potentially high costs. For some of these people, transhumanism isn’t a hypothetical sci-fi concept, but rather a game they are learning to play. There are already people ordering experimental peptides directly for a wide variety of purposes. I speculate that the state’s “monopoly on medicine” is being phased out in favor of DIY human experimentation managed by Big Tech.
State and corporate conquest of cyberspace
What can sound outlandish or absurd to those not immersed in these spaces can still have an outsized impact on people’s real lives. Far too often, extreme toxic dynamics are underestimated because the damage is relegated to neglected youth confined to a hostile cyberspace. Social media is already very proficient at leveraging their emotional weaknesses for self-destructive and highly profitable schemes. These can range from self-destructive subcultures to gambling, and naturally to predator-dominated environments. This dynamic severely victimizes vulnerable people, and the carnage builds demand for radical solutions to resolve the problems. Unfortunately, these problems are complex and require more than simplistic knee-jerk responses. Instead, the (well-deserved) political capital to address them ends up being redirected toward seizing more control over cyberspace. Age verification and social media bans in the name of protecting youth do very little to address root causes, and are hardly sufficient to hold negligent or malevolent platforms accountable.
The pattern of redirecting justified outrage stemming from complex problems appears to be a constant feature of our time. What is most conspicuous is the repeated pattern in which technological systems that exacerbate particular problems are granted free rein in the name of ‘innovation’ despite predictable hazards. Once the damage accumulates to the point of crisis, the answer always seems to be a readily prepared package that chips away at what little room for independence yet exists. All this despite the fact that the bulk of the problem was inflated by the top-down consolidation of cyberspace, not mitigated by it. This familiar pattern is very clearly seen in the almost religious fervor with which governments and institutions are pushing for rapid AI integration throughout every aspect of operations, despite loudly voicing concerns about the dangers of AI. Predictably, there wasn’t, and still largely isn’t, a mainstream discussion about the dangers of the massive power consolidation in cyberspace.
The formula is irritatingly simple:
- Prop up a system that nobody really asked for: chatbots in everything, serfdom as a service replacing previously stable employment, and more recently X users getting Grok to undress people
- Leverage the resulting frustration as an impetus to carry out pre-planned policy prescriptions: more censorship, more surveillance, and even more bureaucratic control
- Because the top-down control that instigated the issue remains in place, the cycle repeats in an endless feedback loop of more of the same
What is just as frustrating as the simplicity of this formula is how well it rhetorically traps objectors. Just as anyone critiquing the unprecedented and irresponsible covid lockdown measures was derided as an ‘anti-science nutter’, those who point to the irresponsible and ham-fisted deployment of these systems can be similarly dismissed as AI denialists. This works because the complexity or hypothetical promise of the domain is used both to dismiss the critical concerns of outsiders and as sufficient justification for not addressing harms. What’s notable is that this ‘sleight of hand’ is only acceptable when it is used to consolidate power, never to protect civil liberties or to decentralize, regardless of overall merit. For example, the hypothetical benefits of giving children access to AI tools are presumed to outweigh the risks, but the benefits of people having anonymous access to the same material the bots are trained on (the web) are considered too dangerous to allow much longer.
We are currently in the midst of governments finalizing negotiations with corporations over the terms under which they will dictate cyberspace to the public. What is good for the public, never mind what the people actually want, is irrelevant to this process. The presumption is that a state’s “digital sovereignty” and corporate “innovation” are vastly more important than the interests of the people. This is why all ‘safety’ reforms for cyberspace will inevitably fail to actually increase safety: they’re not really intended to. In fact, the most likely outcome of these ‘reforms’ is to be highly dangerous to the public in more insidious and difficult-to-quantify ways. The fact that governments across the world are highly concerned with controlling any information the public can potentially access, or communicate to each other, should be alarming. It is highly unlikely that such unparalleled and unprecedented power over the population will be used for benevolent ends, and there are many if not countless malevolent machinations for it.
2025 has been a big year for the expansion of ‘age verification’ on various platforms. This too is a very difficult rhetorical trap. Serious and pressing online safety issues are the justification for drastic measures. Unfortunately, these schemes are unlikely to meaningfully protect anyone. It’s security theater for anxious parents. Politicians get to show how seriously they take the issues, corporate compliance firms collect their fees, and we all get to pretend there aren’t root issues to address. It would seem that the so-called ‘anxious generation’ isn’t actually the children using social media, but rather the parents and institutions that are beginning to grow concerned about the consequences of neglecting a generation. The firm hand of mass censorship and the smothering watchful eye of mass surveillance work together to ensure that nothing disturbs the false harmony that keeps the illusion going. Because of this, we can fully expect more scapegoating of digital freedom for issues that people would much rather not look too closely at.
Cyberwars and ‘splinternets’
They say war never changes, but it certainly has expanded into new realms. Advanced autonomous systems have driven drone warfare to ‘change the game’ on the battlefield, while cyberwars introduce new challenges for both military and civilian systems as a whole. The idealistic vision of the world wide web as a unified shared resource for humanity is certainly in peril. As our countries transition to wartime economies, ‘Digital Sovereignty’ becomes the slogan for total digital dominance over the public. Restricting access to the rest of the world virtually is just common sense to those who wish to have uncontested information control over the public. It turns out that between foreign propaganda and scams, there are endless justifications for doing exactly that. It is absolutely vital that those of us who do understand the dangers of total information control help others understand the grave threats involved. This dynamic is almost guaranteed to fracture the world wide web into a set of ‘splinternets’ separated via ‘great firewalls’ or even physically disconnected. This vastly complicates the future for not just internet freedom, but civil rights and freedom generally.
As it currently exists, the web is quite consolidated already. Cloudflare outages bring down a huge portion of the sites people actually use. The vast majority of people’s experience of the Internet is limited to a handful of smartphone apps. This means that for a great many people, manipulating and constraining their access to information is relatively straightforward. State initiatives and corporate propaganda can enjoy uncontested authority when any dissent is considered too ‘high risk’ to propagate. As the web becomes more controlled, the people are no longer using the Internet, but become passive subjects of it. The ‘information superhighway’ is relegated to being “broadcast television 2.0”. You can be sure that the digital distraction tools will always be online in some form or another, but that’s not the same as the World Wide Web continuing to live on. If the encryption wars are any guide, we can fully expect information exchange across the world to be seen as too dangerous to permit.
This is not to say there aren’t serious cybersecurity threats. It is just a genuine tragedy that actual security always seems to take a back seat to the need for control. Instead of building up the people to make the best of the challenges and opportunities ahead, it seems that the ’nations of the free world’ would rather keep their citizens docile and compliant. Despite ’lawful access’ backdoor systems being infiltrated by foreign attackers, there seems to be little interest in building up a secure digital foundation for the public from the ground up. It would seem that powers over the population are considered much more critical than resilience to foreign threats. As an ordinary person, you’re essentially caught in the crossfire of titans who are wholly indifferent to your well-being. This means that it is up to individuals and communities to build up digital independence, while that’s still possible.
The War against general-purpose computing
The last year brought one of my principal concerns to mainstream attention: the war on (public access to) general-purpose computing. I wholeheartedly believe that a great deal of the scapegoating of technology is directly aimed at manufacturing consent for the ‘cyber disarmament’ of the people. What thrust this important fight into the public’s consciousness was the rapid spike in memory prices, largely caused by OpenAI’s purchase of silicon wafers, allegedly to ‘stall competition’. In addition to computer RAM, there are concerns about other components like storage becoming much more expensive. Given that storage is one of the primary resources in cyberspace, there are pretty serious implications to the public losing access to it over time. As we have seen, corporations would much rather you store your entire digital life on their cloud servers than privately on your own machines. This then creates the opportunity for governments to dominate the information landscape by controlling these online services. To make matters worse, this is not limited to just storage capacity.
Originally, GPU prices rose because they were being bought up for Bitcoin mining farms. GPUs (aka “gaming cards”) are very useful for parallel processing, which makes them great for the number-crunching required for Bitcoin mining and for running various AI tools. This double-punch has turned what used to be a moderately expensive component into one that will likely be out of reach for a great many people. Not only that: in theory, GPU mining of cryptocurrency reinforced the potential for decentralized systems via distributed Proof of Work. GPUs being hoarded for centralized data systems, on the other hand, have no such upside. What’s worth recognizing is that the price hikes alone effectively lower the ceiling on what computing power can be run independently. Computing power is also a primary resource in cyberspace, arguably one of the more critical ones. It is very difficult to decentralize compute in general due to its heavy reliance on energy. On some level computing power will always be inherently centralized to a degree, but the question is whether the public will be allowed to have independent computing at all.
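The decentralization upside of Proof of Work mentioned above comes from two asymmetries: mining is embarrassingly parallel (every candidate nonce can be checked independently, which is exactly what GPU cores are good at), while verification costs a single hash that anyone can perform. A minimal Python sketch of the idea, heavily simplified from Bitcoin's actual scheme (which uses double SHA-256 against a numeric difficulty target; the function names here are illustrative):

```python
import hashlib

def mine(data: str, difficulty: int) -> int:
    """Brute-force a nonce so the hash has `difficulty` leading hex zeros.

    Each candidate nonce is checked independently of the others, which is
    why this search parallelizes so well across GPU cores.
    """
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(data: str, nonce: int, difficulty: int) -> bool:
    """Anyone can check the work with one hash, no trust required."""
    digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# Expected work grows 16x per extra hex digit of difficulty.
nonce = mine("example block", 4)
print(nonce, verify("example block", nonce, 4))
```

That difficulty knob is what keeps block production steady as more hash power joins the network, and it is also why the fight over who gets to own GPUs is a fight over who gets to participate.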
People vastly underestimate the importance of general-purpose computing, and can’t fathom how much worse things can get when it’s gone. Imagine all the (real) economic benefits since the invention of the transistor being directed by a command economy. Liberties we take for granted on already flawed and compromised systems could be eliminated entirely. A non-trivial part of this is that one could argue personal computing has always been artificially cheap. It’s possible that these price hikes better represent the ‘fair’ value of these machines. That said, I have a feeling that if people paid the true value of computing, they would have zero tolerance for surveillance or any lack of software and hardware freedom. It just so happens that the artificial subsidy is being removed after enough dependency on various technological systems has been established. This is at minimum a form of technological austerity, where the opportunities derived from advances in technology are stifled or reversed.
Another troubling form of ’technological austerity’ would be the AI bubble itself. While so many conversations about AI are entirely contrived, there are important details to consider. Economists argue that opposing automation or technological development is bad and irrational because (in theory) everyone benefits from the efficiency gains. This can certainly be true, but it doesn’t always have to be. Just because a technological tool or system is involved doesn’t necessarily mean there are efficiency gains. Every tool is going to have trade-offs, especially when the explicit goal is to replace actual people with automated workflows. Those trade-offs can be net-positive or net-negative depending on the circumstances, implementation, and complexity involved. For a long time, it was safe to assume all technological advances would be net-positive. As our tools become more sophisticated, this can change in ways that are hard for many to grasp.
Most people accept the argument that job losses and disruption are a fair trade for life improving overall. If fancy technology can make hospitals save more lives, move goods around faster, and create great new opportunities for people, then it’s all worth it. But this time seems to be different. People are beginning to recognize that the actual effects of how “AI” is currently being deployed treat human displacement as an end in itself, rather than a mere byproduct of increased efficiency. They notice that participation in society is being disrupted even at the cost of efficiency. This can be seen everywhere from large bureaucratic institutions to warehouse operations. This fundamentally violates the ‘social contract’ behind automation: even if the gains aren’t shared equally throughout society, there should still be a net gain to the public at large.
It is my position that this is what the “AI bubble” actually is. It’s not about any particular stock or even sector being overvalued. It’s about the recognition that particular tools and systems are being treated as ends in themselves because of real power dynamics, rather than sound economic activity. Anyone interested in ‘calling the top’ of the AI bubble should be warned that the market can remain irrational longer than you can stay solvent. This is to say that we can expect the imposition of net-negative efficiency tools to continue far longer than raw economic analysis would explain. This is why discussions around AI technology so often involve rhetorical traps. Critics of what can be plainly seen as destructive economic warfare are easily dismissed as ignorant luddites, while the promoters of the scheme lean on economic arguments that do not even apply in this circumstance.
It would be very different if these systems and tools had to truly compete on a level playing field. I would go as far as to argue that the actual technological benefits of many artificial intelligence tools are being under-utilized despite the hype. This is because the social imperatives driving so-called “AI adoption” are divorced from the economic factors. For example, it’s a lot harder for ’the market’ to invest in machine efficiency in situations where the economic equilibrium is maintained by exploitation. Enterprises like farms and factories are much less likely to mechanize if the political environment is perfectly fine with supplying a grey or black market of workers with no bargaining power. This in turn devalues labor as a whole, impacting even ‘high skilled’ workers over time. Much worse, the problem eventually becomes a structural dependency, and unraveling the economic distortion itself comes to be seen as too costly. Factors like this are corrosive to actual wealth creation. The opportunities of a skilled and competitive workforce using the latest and greatest tools are traded away for politically connected ‘insiders’ maintaining their ‘moat’.
The point here is that the “AI bubble” is actually a lot worse than a mere misallocation of capital and resources. It is a scheme that is not only transforming the future of cyberspace, but also being weaponized against the public themselves. These assaults are being used to manufacture consent for the digital disarmament of the people by restricting the general public’s access to computing. A big part of this is the ubiquitous availability of, and scaremongering over, anti-social uses of various AI tools. Simply telling the public “tools aren’t good or bad” will fall on deaf ears when they can plainly see the technology being used in evil and destructive ways, while the benefits of the technology are either abstract or entirely hoarded away from them. The problem is that the public has very little understanding that consolidation of technology itself represents a dire threat to them, and it’s getting much worse over time. Due to the powerful nature of these sophisticated technological tools, there is a desperate race to seize a monopoly on computing itself. As such, a true ‘free market’ of technological innovation can’t actually be permitted to exist. Instead of allowing entrepreneurs to focus on innovating ways to improve society, a bottleneck will be needed to ensure ’thinking machines’ don’t upset the many delicate rackets that maintain power over the public. While it’s unthinkable today, if this power grab goes unchecked I can imagine that something as simple as a calculator won’t be permitted in public hands without the watchful eye of an AI system supervising its use. This is far beyond the mere destruction of knowledge; it is total control over information management itself.
Darkest before the dawn
Despite all the above, I think there is a great deal of excitement and opportunity in learning to forge a new path. For those who are interested in developing technical skills, there is a real chance to build truly transformative things. For those who have passions outside of technology, there are also real opportunities to shine. It is absolutely critical that those who want to build a better digital future learn from those who have skills to teach and culture to share. For far too long, the idea of being ‘good in tech’ simply meant being the lubricant that allows Big Tech to seep into all aspects of our lives. In these uncertain times, things are dynamic enough that even relatively small efforts can have a huge impact.
For the year ahead, I am encouraging you to focus less on the small details. Instead of fighting over minute technological choices like which messaging app is best or what social media to use, stay focused on the real fight. What is the better digital future we’re hoping to pass on to future generations? What do we need to do to make small steps towards that? What do we need to genuinely foster not just a world wide web, but an entire technological landscape that supports human society rather than undermines and extracts from it? These certainly aren’t simple questions with easy answers, but they are the foundation for meaningful decisions.
There are going to be many technological difficulties that pose all kinds of difficult problems. People are going to be stuck in traps that may look simple on the surface but are difficult to disentangle. Above all, I hope we remember that we’re trying to fix cyberspace for people, rather than ‘fixing’ people for cyberspace. As broader societal problems take on even more of a technological dimension, it is genuinely difficult to remind ourselves that the problems don’t start and end with the devices we touch. The good news is that this means there is a great deal that can be done by those outside of the technological realm to make the job of improving things much easier.
With all this bleakness out of the way, you can look forward to me sharing how we can meaningfully act on these bigger fronts. Of course, I won’t be able to hold back from dropping a few ‘hot takes’ on the fine details as they come up as well. I’m wishing you an excellent 2026, and looking forward to building a better cyberspace with you.
Gabriel
Libre Solutions Network