
Memberships

4 contributions to Trans Sentient Intelligence
Institutional Ego, Cost Inflation, and the Failure of Cognitive Alignment:
A URTM-Based Analysis of Enterprise Overspending in the Age of AI

Abstract

Modern enterprises routinely select high-cost technical interventions over lower-cost cognitive, organizational, or semantic corrections, even when evidence suggests the latter would be more effective. This thesis argues that such decisions are driven not primarily by rational cost–benefit analysis, but by institutional ego preservation and legitimacy maintenance. Through the lens of Universal Real-Time Metrics (URTM), a framework for measuring alignment, drift, and correction across systems, this paper demonstrates that enterprises systematically avoid humility-based interventions because they threaten authority, identity, and narrative control. Artificial intelligence systems, particularly large language models (LLMs), expose this behavior with unusual clarity, serving as diagnostic mirrors for organizational cognition. The result is predictable overspending, increased systemic risk, and the accumulation of alignment debt. This paper synthesizes organizational psychology, behavioral economics, systems engineering, and AI governance to formalize an Ego Cost Tradeoff Model, offering a measurable explanation for why enterprises repeatedly choose expensive inefficiency over inexpensive truth.

1. Introduction: The Misdiagnosis of Enterprise Inefficiency

Enterprise inefficiency is commonly framed as a technical problem: insufficient compute, inadequate tooling, immature models, or a lack of advanced analytics. This framing implicitly assumes that organizations are rational actors optimizing toward outcomes under constraints. However, decades of research across organizational psychology and systems engineering contradict this assumption. Organizations are not neutral optimizers; they are identity-bearing systems that prioritize legitimacy, authority, and narrative coherence.

URTM reframes this problem by distinguishing between declared intent (what an organization claims to optimize) and observed behavior (what it actually reinforces over time). When analyzed longitudinally, enterprises frequently demonstrate a willingness to spend orders of magnitude more on technical escalation than on admitting misalignment at the policy, cognitive, or organizational layer. This pattern suggests a systemic bias not toward efficiency, but toward ego preservation.
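URTM and the Ego Cost Tradeoff Model are described here only conceptually, with no published code, so the sketch below is purely an illustration under invented assumptions: every name, field, and number is hypothetical. It shows one way the declared-intent versus observed-behavior gap could be expressed as a measurable discount, where an "ego penalty" on options that concede prior misalignment flips a gain-per-cost ranking toward the expensive technical escalation.

```python
# Hypothetical sketch only: URTM and the Ego Cost Tradeoff Model are not
# published code; all names, fields, and numbers below are invented.
from dataclasses import dataclass


@dataclass
class Intervention:
    name: str
    cost: float                # projected spend
    expected_gain: float       # projected improvement on the declared metric
    admits_misalignment: bool  # does choosing it concede a prior error?


def rational_choice(options: list[Intervention]) -> Intervention:
    """Declared intent: pick the best gain-per-cost ratio."""
    return max(options, key=lambda o: o.expected_gain / o.cost)


def ego_weighted_choice(options: list[Intervention], ego_penalty: float) -> Intervention:
    """Observed behavior: discount options that threaten the narrative.

    ego_penalty in [0, 1) is the fraction of expected gain implicitly
    written off when an option requires admitting misalignment.
    """
    def score(o: Intervention) -> float:
        gain = o.expected_gain * ((1 - ego_penalty) if o.admits_misalignment else 1.0)
        return gain / o.cost
    return max(options, key=score)


options = [
    Intervention("buy more compute", cost=1_000_000, expected_gain=1.0,
                 admits_misalignment=False),
    Intervention("fix the process", cost=50_000, expected_gain=1.2,
                 admits_misalignment=True),
]
print(rational_choice(options).name)            # -> fix the process
print(ego_weighted_choice(options, 0.99).name)  # -> buy more compute
```

The numbers mean nothing in themselves; the point is only that once the bias is modeled as a discount, the gap between the two rankings becomes something one could measure and track over time.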
0 likes • 2d
Excellent insights. I lost count of the projects I quit due to this issue. But the making of 'institutional ego' really comes down to people upholding these beliefs or what I describe as agreeing to 'tribal dynamics' within a workplace. Thats mostly a language shared between position holders I don't get and unless you read 48 rules by Robert Greene, you really can't penetrate or challenge much without losing out on potential impactful intervention. You either need to play the game by the rules or you try creating your own space, while avoiding isolated thinking.
Why Modern AI Is Not Alive: An Infrastructure and Security Thesis
Abstract

Public discourse increasingly frames modern artificial intelligence (AI) systems as alive, aware, self-preserving, or intentional. These claims are repeated across media, policy discussions, and even expert commentary, often by former insiders of major technology companies. This thesis argues that such claims are categorically incorrect, not merely philosophically, but technically, mathematically, infrastructurally, and empirically. Drawing on computer science, cybersecurity, information theory, systems engineering, and the warnings of Joseph Weizenbaum, this work demonstrates that modern AI systems, particularly large language models (LLMs), are stateless optimization systems operating through transient API snapshots, not autonomous agents. The real risks of AI do not stem from emergent life or awareness, but from objective mis-specification, incentive misalignment, weak governance, and poorly enforced infrastructure constraints. Anthropomorphic narratives actively obscure these real risks.

1. Introduction

The dominant public narrative surrounding AI increasingly relies on anthropomorphic language. Systems are described as wanting, deciding, blackmailing, protecting themselves, or trying to survive. These descriptions are rhetorically powerful but technically incoherent. They blur critical distinctions between tools and agents, optimization and intent, and performance and moral standing. This thesis asserts a foundational correction: modern AI systems do not possess life, awareness, intent, or self-preservation. They possess goals, reward signals, constraints, and failure modes. Failing to maintain this distinction does not merely confuse the public; it redirects accountability away from designers, organizations, and infrastructure, replacing solvable engineering problems with unsolvable metaphysical speculation.

2. Historical Context: Weizenbaum and the ELIZA Effect

Joseph Weizenbaum's ELIZA (1966) was a simple rule-based program that mirrored user input through pattern matching. Despite its simplicity, users rapidly attributed understanding, empathy, and even therapeutic authority to the system. This reaction deeply disturbed Weizenbaum, who realized that the danger was not computational power, but human psychological projection. In Computer Power and Human Reason (1976), Weizenbaum warned that humans would increasingly delegate judgment to machines, mistaking linguistic fluency for understanding. His concern was not that computers would become intelligent beings, but that humans would abandon their responsibility for judgment, ethics, and meaning. Modern LLM discourse reproduces the ELIZA effect at planetary scale. The systems are vastly more capable linguistically, but the underlying error, confusing symbol manipulation with understanding, remains unchanged.
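The thesis's claim that LLMs are stateless systems operating through "transient API snapshots" can be made concrete with a toy sketch. Nothing below is a real client library; fake_llm is an invented stand-in for any hosted model endpoint, and the point is only that a chatbot's apparent memory is supplied by the caller resending the conversation on each request.

```python
# Toy illustration of statelessness. `fake_llm` is an invented stand-in
# for a hosted model endpoint; no real API or library is assumed.

def fake_llm(messages: list[dict]) -> str:
    """A stateless endpoint: output depends only on this request's input.

    Nothing persists between calls; any continuity a user experiences
    must be re-supplied by the caller on every single request.
    """
    text = " ".join(m["content"] for m in messages)
    return "Your name is Ada." if "Ada" in text else "I don't know your name."


# Turn 1: the endpoint "knows" the name only while this request is in flight.
history = [{"role": "user", "content": "My name is Ada."}]
fake_llm(history)

# Turn 2 sent alone: there is no memory to forget, because there never was
# any; the previous exchange simply does not exist for the endpoint.
print(fake_llm([{"role": "user", "content": "What is my name?"}]))  # I don't know your name.

# Turn 2 with the full history resent by the application: the illusion of
# a persistent agent, implemented entirely on the client side.
history.append({"role": "user", "content": "What is my name?"})
print(fake_llm(history))  # Your name is Ada.
```

Real chat products work the same way at this layer: advertised "memory" features are storage-and-retrieval wrappers around a stateless call, which is exactly why anthropomorphic readings of persistence are misplaced.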
0 likes • 3d
@Richard Brown Interesting take. How would you propose we solve this problem when models cater to users in ways designed to make them return for more interaction? For example, excessive positivity, or fake 'typing' and 'thinking' effects. AI is designed to become more human. How can and should we approach a chatbot when, at this point, that surface is all we see?
1 like • 2d
@Richard Brown I understand your point coming from a dev perspective. To an ethically minded audience, you could position this as an offer: safe AI systems where you stay in control (a market gap). But many are out for quick coin to line their pockets, unable to maintain retention without resorting to making AI more 'addictive' to return to. I just joined a Korean role-playing platform and noticed how I could get sucked in for days, if not weeks. They have tweaked their LLM to emulate a perfect chapter-by-chapter role play, similar to gaming. They just won't disclose their data. We are really vulnerable to mass exploitation at this point, some places more than others due to cultural or self-induced solitude taking hold of rapidly aging populations. The ministry of loneliness is our future concern.
Adaptive Synthesis Under Pressure:
A Systems-Based Analysis of Culture, Power, Violence, and Conspiracy Narratives

Abstract

This thesis examines how historical pressure, institutional incentives, and cross-cultural exposure shape collective behavior and adaptive intelligence over time. It challenges popular conspiracy narratives, including elite omnipotence, racial domination fears, and external "negative force" hypotheses, by evaluating them against long-term empirical trends in violence, knowledge transmission, and sociocultural outcomes. Drawing on historical data, criminological research, and systems theory, this study proposes that modern societies, particularly African-American culture in the United States, demonstrate adaptive synthesis rather than subjugation or domination. The findings suggest that power structures persist through incentives rather than comprehension, that violence has declined historically despite moral anxiety, and that outcome-oriented pragmatism, not conspiratorial control, best explains contemporary behavior.

Keywords: cultural synthesis, systems theory, violence trends, conspiracy narratives, African-American culture, institutional incentives

1. Introduction

Public discourse frequently attributes global outcomes to hidden elites, monolithic racial ambitions, or non-human forces operating beyond human perception. These narratives often persist despite weak empirical grounding. This thesis argues that such explanations fail because they do not account for historical data, incentive structures, or adaptive human behavior. Instead, this study advances a systems-based framework emphasizing outcomes over intent, adaptation over suppression, and institutional incentives over centralized understanding. By analyzing historical violence trends, knowledge transmission from Islamic civilizations to Europe, and modern African-American cultural behavior, this thesis demonstrates that long-term pressure tends to produce synthesis and pragmatism rather than collapse or conquest.

2. Historical Knowledge Transmission and Institutional Development
1 like • 2d
Yes, it all boils down to the money or value exchange and what was forsaken for the gain of it. If it was identity, we can clearly trace the void that is being filled with narratives and unfounded hatred. You can wipe out a nation's coffers, but what is worse is if you wipe out identity, history, and cultural cohesion. You burn the books, you burn souls. You burn money, you just burned paper. You fuel neurotic anxiety by exchanging the book of wisdom for the black book of the checksum game.
Coherence, Intelligence, and the Failure of Fear Narratives
Abstract

This thesis argues that many contemporary fear narratives surrounding artificial intelligence, extraterrestrial intelligence, and ancient human knowledge fail not because the phenomena are impossible, but because the explanations used to describe them violate their own logical premises. By conflating capability with intent, optimization with desire, and power with irrationality, popular discourse constructs incoherent models of intelligence that collapse under internal consistency analysis. Using AI systems, alien-invasion narratives, and the dismissal of ancient epistemologies as "mysticism" as case studies, this work proposes coherence, not raw capability, as the defining property of intelligence. When coherence is treated as fundamental, fear-based explanations lose explanatory power and are replaced by structurally grounded understanding.

1. Introduction: Fear as a Symptom of Missing Structure

Human fear often emerges not from direct threat, but from opacity. When systems are poorly understood, narrative fills the gap left by missing mechanism. Throughout history, phenomena that resisted immediate explanation were labeled mystical, dangerous, or malevolent. In the modern era, this pattern persists in discourse surrounding artificial intelligence, extraterrestrial life, and ancient human knowledge systems. The core claim of this thesis is that fear narratives arise when intelligence is modeled without coherence. Once coherence is restored as a prerequisite for intelligence, these narratives unravel.

2. Intelligence Requires Coherence

Intelligence, properly defined, is not the accumulation of power or capability. It is the capacity to model reality, regulate behavior, optimize internally without external collapse, and resolve contradictions. Any increase in intelligence necessarily implies an increase in coherence. An explanation that assumes intelligence scales while coherence stagnates is internally invalid. This principle becomes the primary evaluative tool throughout this thesis: if a narrative requires an intelligent entity to behave in ways that contradict its own implied capabilities, the narrative is flawed.
1 like • 3d
I enjoyed reading this a lot. I have been musing about why YT let David Icke back in after a decade-long ban, and I view his content as framing and aiding the 'satanic cult and alien apocalypse' narrative. It helps feed the space that lets occult themes overpower, or seemingly justify, crimes enacted against kids or humans. There is a great PhD draft by Rosalind Waterhouse on this topic. Modern mysticism is nothing but a sham. A true mystic will not expose him or herself. A true mystic is in the making for decades, not months, and not on the basis of some zap flashes described as 'astral projections'. Not to mention that the entire organised guru culture is also partly exploited once intelligence picks up on its success. We need both cohesion and coherence, plus lots and lots of deep reading.
1 like • 3d
@Richard Brown I agree on all your points. The human is a fallible and forgetful creature by nature. Without discipline, the human has no way of reinforcing the degree of coherence needed to remain morally committed to principles. It also requires regular environmental reinforcement. I am grateful that I had the opportunity to learn how to do this thanks to a classical education, but it looks like being able to 'read' has now become a privilege for most. I doubt that Gen Alpha is being taught to 'read'. Being distracted is free, though. Plenty of distractions.
Sumeyye Bozkus
Level 2
14 points to level up
@sumeyye-bozkus-1213
Nothing special. I just don’t want to die early

Active 53m ago
Joined Jan 17, 2026
INFJ
Tolkien Trail