Humans may be smarter than trees, but are we also smarter collectively?

When I was in Belgrade I read in the official travel guide that the city has been almost completely destroyed and rebuilt 44 times! Mind you, if the chance of not recovering after each destruction were as low as a lucky 5%, the city would have had about a 90% chance of perishing for good by the forty-fourth time.
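A quick sanity check of that arithmetic (the 5% non-recovery chance is my own hypothetical number, and I assume each destruction is independent):

```python
# Back-of-the-envelope check: with a hypothetical 5% chance of never
# recovering from each of 44 independent destructions, how likely is it
# that the city is still around?
p_no_recovery = 0.05
destructions = 44

p_still_here = (1 - p_no_recovery) ** destructions   # ~0.10
p_perished = 1 - p_still_here                        # ~0.90

print(f"chance of surviving all {destructions} destructions: {p_still_here:.0%}")
print(f"chance of having perished by now: {p_perished:.0%}")
```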

Then I wondered what kind of organism could possibly lose a considerable chunk of its mass that many times and still survive (spoiler: forests).

Nor could I find anything magical about the coordinates of Belgrade, or of many other cities for that matter. It's not as if an inexhaustible mine or some resource exclusive to that location has made people rebuild the city there over and over again. It probably only takes a bunch of survivors for the city to take off again.

Or just look at Rome. It has lived under so many different tyrannies, governments and religions, and has survived organizational paradigms from slavery to feudalism to capitalism. So here I try to explain what that essence is that has kept Rome alive, as long as there was a little flame left to burn through to the next regime.

All companies die. But cities never die.

This is one of Geoffrey West's main findings, a flagship result of his body of work.

He claims that cities not only save energy per capita, but also create more wealth, even per capita. This creates a closed feedback loop for the growth of cities that is unprecedented among other superorganisms in nature [I would exclude forests before agreeing to this].

Anyhow, he shows convincingly with mathematical models that cities, although they can be destroyed or wiped out, do not die a natural, programmed death. Companies, people and animals, on the other hand, lack that double synergistic effect in their growth pattern: at some point their exponential growth stops internally, driven by the constraints on their building blocks (humans or cells) rather than by the exhaustion of external resources, and then they die. That doesn't happen to cities. Conceivably, it doesn't happen to forests either.
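For the curious, here is a minimal numerical sketch of the kind of growth equation West works with, dN/dt = a·N^β − b·N. The parameter values are arbitrary; the exponents 0.75 and 1.15 are only the ballpark figures usually quoted for organisms and for the socioeconomic side of cities:

```python
# Toy Euler integration of a West-style growth equation: dN/dt = a*N**beta - b*N.
# beta < 1: growth saturates internally (organisms, companies).
# beta > 1: growth keeps accelerating toward a finite-time blow-up (cities).
# All parameter values here are arbitrary and purely illustrative.

def grow(beta, a=1.0, b=0.5, n0=1.0, dt=0.01, t_max=100.0):
    n, t = n0, 0.0
    while t < t_max:
        n += (a * n**beta - b * n) * dt
        t += dt
        if n > 1e6:                 # super-linear runs explode long before t_max
            return float("inf")
    return n

print("sub-linear   (beta=0.75):", round(grow(0.75), 1))  # settles near (a/b)**4 = 16
print("super-linear (beta=1.15):", grow(1.15))            # inf: no internal stopping point
```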

What follows is my reading of Geoffrey West's work, plus some more radical opinions and criticisms:

1. All superorganisms grow, but cities are different

Just like other organisms, superorganisms form from smaller elements coming together to benefit from economies of scale. From the perspective of network science, technological or social networks aren't necessarily different from biological ones, and their similarities make them all "alive" in some sense. Cities, companies, forests (I would add civilizations, empires, religious institutions, coral colonies, hives, etc.) are all alive in a measurable and objective sense, although not necessarily sentient or conscious, which is quite a different – subjective – story.

Typically, all of these networks have evolved to reach an equilibrium after growth: by the same mathematics, they all stop growing at a certain point, live to a rather predictable age, and then die a natural death. By doing so – independent of their mechanism of reproduction – they leave room for the new to repeat the cycle. Nature has favored this sustainable code over an endless number of cycles.

Cities are different; in theory, they are eternal.

While there are many parallels one can draw between all these superorganisms, in one sense cities seem to be an exception. By design, they suck up the resources around them with no self-correcting mechanism. At least in our current economic model, and as far as I know in everything experienced since the first human settlements, we never see a systematic, planned evacuation of a city, or its division into smaller cities so people can live a better life and repeat the cycle all over again. We simply don't have a code for it. This has not happened and will not happen as long as the economy of scale gives citizens a double advantage to living there, which is again:

As in biology, the bigger the organism gets, the less energy its cells consume per capita. But unlike in biology, the bigger the city gets, the more "money" it generates per capita. West reports that such a positive feedback loop appears to be unique to cities among all the superorganisms.
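In scaling terms: if a quantity Y grows with population N roughly as Y ∝ N^β, then the per-capita value goes as N^(β−1). A tiny illustration, using the ballpark exponents usually quoted around West's work (about 0.85 for infrastructure and energy, about 1.15 for wages and patents); the baseline values and populations below are made up:

```python
# Per-capita consequence of power-law scaling, Y = Y0 * N**beta:
# Y/N = Y0 * N**(beta - 1), so beta < 1 shrinks per-capita values as the
# city grows, while beta > 1 inflates them. Exponents are ballpark figures
# from the urban-scaling literature; Y0 and the populations are made up.

def per_capita(y0, beta, population):
    return y0 * population**beta / population

for n in (100_000, 1_000_000, 10_000_000):
    energy = per_capita(100.0, 0.85, n)   # sub-linear: infrastructure, energy
    wages = per_capita(100.0, 1.15, n)    # super-linear: wages, patents
    print(f"population {n:>10,}: energy/capita {energy:7.1f}, wages/capita {wages:8.1f}")
```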

2. The additional glue: Creativity and productivity, driven by money and language

Although, just as in biology, that extra wealth per capita translates into smaller homes and less stuff in the centers of megacities compared to the countryside, the rules of the monetary system mean that the economic power such money creates keeps attracting people to big cities. Wages are higher in big colonies of humans because they seem to track productivity, which is higher per capita in bigger colonies. "Stronger input-output linkages, better matching of employees and employers, and invisible but active knowledge spillovers in agglomeration economies" are believed to increase productivity, resulting in higher wages. These so-called "agglomeration" economies, shaped in dense areas, also increase creativity (the number of patents, like wages, follows a super-linear fit), fueling the exponential growth of the city. So in retrospect, among other tools, the advent of language and the invention of money changed the dynamics of the human network; human creativity was unleashed, and an exponential growth pattern – civilization – emerged from that network.

On an individual level, this effect is not an unfamiliar phenomenon. Those of us living in big cities and capitals, close to the power hubs, may live in denser areas and consume less energy per capita to heat our homes than residents of the countryside. But we also create more waste because of our higher economic level: we shop more, commute longer to work, travel more, and so on. This economically driven factor is the essence that makes us, and our embedding superorganism the city, rather different from the other superorganisms.

But what other networks might also enjoy the double-edged growth pattern of cities (super-linear gains at sub-linear costs)? What other superorganism might be exceptional? Could it be forests and reefs, whose exceptionally long lifespans may tell a story about that additional glue? What keeps them together that could be analogous to the super-linear "glue" of cities? Why does it seem that, like cities, forests and reefs also last exceptionally long? Do forests and reefs – like their individual trees or corals – have an internalized code for death? Sure, they can be killed off or shrink for external reasons, but they don't seem to have an internal mechanism for dying as a whole.

In other words, what do individual trees gain from being part of a bigger and bigger network? What's in it for an individual coral to be in a huge reef rather than a small one, when it can't even move?

3. Do trees have money, or language?

They ought to!

Although this is far-fetched, I think it could be inferred merely from the physics of the network, considering the emergent properties of a forest, that it is far more than a regular grid. It is a complex network, known to be not only highly clustered (having a high clustering coefficient) but also to have the properties of a small-world network. And thus, even without a deep knowledge of ecology or forestry, one could plausibly show that trees have a sense of networking, collaboration and communication (likely even symbolic communication with an inventory of signs).
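For illustration, this is what "high clustering plus small-world" means in plain network terms, using the standard Watts–Strogatz toy model as a stand-in (I am not claiming it is a fitted model of any actual forest):

```python
# A Watts-Strogatz graph as a generic small-world stand-in: compared with a
# regular lattice, a few random shortcuts keep the clustering coefficient
# high while collapsing the average path length. Requires networkx.
import networkx as nx

lattice = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.0)     # no shortcuts
small_world = nx.connected_watts_strogatz_graph(n=1000, k=10, p=0.05)

for name, g in [("regular lattice", lattice), ("small-world", small_world)]:
    print(f"{name:16s} clustering: {nx.average_clustering(g):.2f}  "
          f"avg path length: {nx.average_shortest_path_length(g):.1f}")
```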

Also, trees have a documented track record of "trade". But do they have a sense of currency, property law, and ownership? Do such concepts necessarily follow the invention of a formal *phonological* language? Those who claim capitalism is a product of nature may have gotten something right.

In linguistics, "double articulation" is considered the most crucial feature distinguishing human language from other forms of communication in nature. It is the ability to exploit the combinatorics of dual patterns, and it is extremely powerful because it makes symbolic computation possible.
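The combinatorial power is easy to put in numbers: a small inventory of meaningless units yields an astronomically large space of possible signs (the phoneme count and word length here are just round, illustrative figures):

```python
# Duality of patterning, in numbers: a few dozen meaningless units (phonemes)
# combine into meaningful units (words), so the sign space grows exponentially
# with length. The figures below are round illustrations, not linguistic data.
phonemes = 30
max_word_length = 6

possible_forms = sum(phonemes**length for length in range(1, max_word_length + 1))
print(f"{possible_forms:,} possible word forms")   # roughly 754 million
```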

It is, however, in my opinion very arrogant and naive of us humans to assume that such a phenomenon first evolved with our species. Both rainforests and reefs seem to possess similar network properties (among others self-similarity, the small-world property and high clustering) that, I argue, could serve as infrastructure for a phonological [alphabetic] mind capable of symbolic computation, given a random mutation toward dual patterning.

This may be the hidden story behind any of the evolutionary leaps on earth, not just the last one. And it could mean, in all seriousness, that rainforests or reefs, as intelligent superorganisms, purposefully invented animals the same way we invented cars, for short-term or long-term reasons. I do understand the co-evolution of animals with their ecosystem, but then again, do trees know who invented the automobile?

It's a testable hypothesis whether rainforests have evolved, say, their own stock market somewhere down in the ground. I just wonder whether, like ours, it crashes once in a while, every few million years! A bit more far-fetched still: urbanization and the human experiment, us, could be one of those crashes.

Does vegetation have properties similar to urbanization? Do rainforests possess a collective intelligence comparable to that of Silicon Valley, Wall Street or Hollywood? Are they creative, productive and experimental?

How crazy is that?! Not crazy at all.

How testable is it? I believe, enough!

Printing a megacity in the desert?

Can you print a megacity in a desert and expect it to return your investment?

"Dimensionality" in complex networks is still an ignored concept in every discipline that deals with such networks – except physics, the mother of them all.

In city planning, for example, a government can aspire to build a metropolis of 9 million and expect it to behave like NYC once it matures. Even if it doesn't reproduce the same financial or political influence, at least creating a similar internal "feel" shouldn't be too much to ask?

Not true.

It is essential to build mega-cities from smaller organic elements: minor cities already near each other.

Is such a simple observation from other, similar networks something the policymakers of some trillion-dollar future megacities are unaware of? And do they not need that knowledge when they expect a return on their investment?

* * *

Building fresh, sustainable new megacities on uninhabitable fields sounds like a brilliant idea. The trend holds many great promises:

It returns the investment massively through real estate and beyond. It will host the future waves of urbanizing population while being built with state-of-the-art, more sustainable technology. Even better if it is built in a desert, where preserving the natural ecosystem is much less vital than in, say, a rainforest.

But sustaining a megacity logistically is not possible without sustaining it culturally. That is the foundation of city life, and for that it is necessary to mimic the underlying dimensionality of organic metropolises – something that must match the metropolis's magnitude – or else the megacity will never produce the effects of even much smaller cities, no matter how much money central planners pour into the project long after it is built.

* * *

Even if the best engineers set up the physical infrastructure and plug in the vital resources, even if the structures are built with fresher, more sustainable technology and a smaller footprint, even if they kick out or bury all the workers who built it and resettle the desired population, it is still not wise to establish a city on nothing with a grid mindset.

It takes a little more investment, but in the right direction, to try to recreate the "dimensionality" that typically evolves over centuries when a megacity is organically seeded.

Only then can one attempt to create, in the course of decades, the equivalent of two- or three-century-old universities like NYU or Columbia.

The Chinese have understood this and are building their megacities around existing smaller parts. Even much smaller cities like Dubai or Doha grew their skylines organically – though on steroids – around an existing old town.

Atheism vs. Agnosticism

If you identify yourself as a non-believer, possibly with some history of hostility towards organized religions, would you call yourself an Atheist, or an Agnostic?

I couldn't care less about labels and names. But since they have a practical use – saving time and energy – we can discuss them.

* * *

Once upon a time, before the chemical outbreak of puberty introduced a wave of changes in my body, it impacted my mind. I rebelled – still quite analytically – against the delusions of the local culture, which led to tossing out religions amongst some other outdated codes. I turned into a non-believer, and I called it atheism three years later when I learned that not only was I not alone, but there was even a conventional name for the state of my belief system. And it wasn't until a couple of years ago that I realized agnosticism is a better term to describe this state.

Typically the naive distinction between atheism and agnosticism is tested by whether one would answer "No" or "I don't know" to the question of the existence of *any* God. This lies in a grey zone, and it is a matter of definition and interpretation: how do we define God?

Some believe in Gods because, in the hierarchy of beings in the vast universe and possibly beyond, there can be creatures above us: aliens, Gods, simulators, our own Gaia or parts of it, conscious superorganisms of which we may be the building blocks, and so on. All of these can have God-like powers over us, shaping and controlling us. But is that all it takes to be a God?

The problem here is that all these beings, even if spotted and proven, are things just like us. They have weaknesses and struggle for their own survival; simply put, they aren't "in charge". They don't have control. A God that knows how everything at every level unfolds comes from a much stricter definition of God, and that is the level of God-ness I am a non-believer in. It is a very generic definition of a God: one who has made everything, knows it all, and can control all existence at all its levels. But to me its existence is still as unlikely as exotic concepts such as Allah, Jesus, or the Flying Spaghetti Monster.

Atheists – including my past self – typically view agnostics as mild atheists: atheists who have woken up, but not quite enough to completely get over their religion. They may be statistically right. But to me agnosticism isn't compromised atheism; it's an ultimate state of disbelief. An agnostic rejects religion, but also rejects atheism itself as a replacement that could be vulnerable to the flaws and biases of any other man-made culture. And this was the point I had not understood in the spiritual beliefs of not-quite-atheist thinkers like Spinoza, Darwin or Einstein.

So the agnosticism I refer to is more of a non-belief than atheism. And since there are infinitely many ways to define God, there can be infinitely many levels between agnosticism and atheism. Atheist culture, perhaps in order to unify better against organized religions, wants these two classes and everything in between to collapse into one. But in my eyes they are quite distinct, and I think there are a lot of interesting belief systems in between them as well.

I may be going through another phase of chemical changes but currently I feel like I am somewhere in that in-between space.

Explosions between the Cambrian and the Technological Singularity?

Economies of scale and life's punctuated equilibrium:

Life on earth is going through another short period of rapid morphological change, this time because of us humans: in a short geological moment we have gone through a massive scale-up (seven orders of magnitude, from tribes of hundreds to billions on the Internet or in the global economy). That much we all know.

Phase transitions are commonplace in single species – known as punctuated equilibrium, they are spotted from the local evidence at hand, such as fossil records. But terrestrial life as a whole experiences such phase-transitional behavior too, although it isn't always as easy to spot in our labs.

The last time we think a scale-up like this happened was the so-called Cambrian explosion half a billion years ago: the rapid shift in life forms from single-celled organisms to complex animals with advanced, specialized systems and organs. This was when nature evolved new networks and gave life emergent properties such as intelligence or purpose.

And in between these two explosions, there may well have been other economies of scale transcending single units into complex wholes, though we may not manage to identify them as easily. I am, for instance, quite open to the spiritual idea that views the rainforest as an intelligent whole, with a form of wisdom and the ability to reason, possessing foresight, purpose and other emergent properties invisible to our senses and ungraspable by our brains.

We require more advanced tools to discover those realms, but rest assured there exists much more than we have seen. Communicating with intelligence that operates at much bigger or smaller scales, or at a much slower or faster pace, isn't something we have evolved to do easily, nor have we made our tools specifically for it. But I think we have already made tools that we can begin to utilize for this particular purpose. And I am hopeful and optimistic that science has the ability to eventually explore those realms.

Subjectivity, an emergent property?

What can be even more puzzling is the question of consciousness, subjective experience and sentience. Are they, too, emergent properties of complex networks? This is a whole new discussion:

Can networks give rise not only to intelligence, planning and reasoning – as stated before, I am convinced they do – but also to joy and suffering, out of nothing?

And what are the ethical implications of all this?

We don't know whether cells have sentience. I wouldn't be surprised at all if they have something like what we have. Why exactly can we have it while they cannot?

Now let's assume for a moment that they do have a sense of sentience. The ethical question then is: was that explosion a fun thing for them, or was it a disastrous, regrettable mistake to ride the economy of scale and shape animals instead of competing alone for survival? Did they sacrifice their individual freedom for specialization in order to serve the survival of a bigger whole? More far-fetched still: is a kidney cell *happier* than a lonely floater with a shorter lifespan and less guaranteed safety, but possibly more degrees of freedom?

The relativity of morals is ethics 101, and what is good for something is bad for something else. So I am not trying to quantify and sum up all the good and evil in the universe to solve a karmic optimization problem here. The question is difficult enough to ask as it is: could single units be happier on their own, or as part of a bigger whole?

And if it doesn't make sense to you to ask such a question about microbes, just wonder the same thing about us. It's hard to conceptualize things we haven't evolved to perceive, but our transition from tribes of apes to specialized members of powerful, gigantic institutions that decide our fate more than we do is a phenomenon we tend to ignore. And such superorganisms – whatever you make of them, from the physical campuses of multinational corporations, institutions and governments to the less visible code of AIs all across the Internet competing for their own survival – may only be in their early forms. Their real game may not even have started yet!

The point being: all the signs of a technological singularity fit within the context of evolution.

Ethical considerations:

Back to the ethical questions: is this all good or bad, and should we help it or stop it? The relativity of ethics aside, there are two levels of morality I can think of:

– One is what we are used to in conventional ethics: a sense of good or bad at the human level, or familiar issues in its proximity such as animal welfare. Are we as individuals losing our freedom to serve the dictatorship of new giant monsters? Are we humans going to suffer more, and for long dark periods? Could we find ourselves, in the blink of an eye (a giant eye!), in conditions as miserable as those animals experience in our industrial farms, simply because unavoidable forces of nature are leading us there? Or will we find a more sustainable and less cruel way of expanding the network of life, and transcend this with less pain and suffering, exploitation and war?

– The other ethical discussion is a more karmic sense of good versus evil: the ultimate survival of life. Whether we humans end up happy or miserable in any given futuristic scenario, is our technology eventually going to protect life on earth from external cosmic hazards, and possibly even expand it beyond earth? Or will it kill life off completely? Some say our species may actually have a purpose, and this is it.

In this context, if our civilizational explosion instead implodes and kills all life before reaching its multiplanetary ambitions, that can be viewed as a failed gamble by mother nature.

Will humans make it to, and survive, the technological singularity?

And then there is a third scenario in between – the most likely, I would say. Our species will die a mild extinction before taking over the stars, but also before destroying life completely, forever and ever. Both of those outcomes seem much more difficult than simply going extinct.

What will happen in that scenario? Probably plants will come back with new wisdom – resistance to nano-biological hazards, radioactivity, plastics and whatnot. Then they will make new things that move around and send them off again on the mission to pollinate other stars, for another thousand unsuccessful trials, until a massive asteroid finishes things off, this time completely.

Now seriously, does mother nature have ways to set goals and make plans, to invest in a species that will become technologically advanced enough to protect its mother? "Hey, let's make some humans to protect and expand life, although they may kill it all." And in taking such gambles, does she furthermore possess mechanisms for sensing and evaluating the risks involved?

I think she does. Apparently in one instance right here and now.

If this post evolved as a part of nature, then nature does have ways of trying to assess the risk of its gambles. All the technologists and scientists who push our civilization forward, and yet inform and warn us about the existential threats on the horizon, are the manifestation of such a risk assessment. And they come from nature. So why should we think of them as an isolated phenomenon? How do we know nature hasn't manifested things like this before? All we see are the qualities of its current wave of emergent intelligence.

Hopefully it's not the last wave, and I really doubt it's the first one. Unlikely!

Towards an Everlasting Never-ending AI dictatorship

It's already in progress. We are already slaves of self-organized technological super-intelligences, made of flesh and silicon, which are beyond all of us. It's just that there are many of them out there fighting over us as resources, and the evolutionary battle hasn't been settled just yet.

So let’s reflect on these doomsday scenarios:

We tend to underestimate the algorithmic nature of the world, and with it the wide variety of scopes and the vast range of scales over which evolution can rule, beyond biology. This is an old story: trees made us to be their pollinating agents, and we cut them down. We made AIs to serve us, and they will eventually enslave us.

So those who predict an AI takeover are right, but the doomsday scenario isn't a Terminator story. It isn't even about automated weapons. A 'God-like' AI is a true threat, but it doesn't need to be a robot, a supercomputer, or a conventional AI.

The rulers of the future earth will be algorithmic in nature. But let's reflect on that now:

First of all, algorithms do not run in a metaphysical layer separate from our tangible world. Algorithms need *stuff* to run on; they will still need flesh and silicon.

The truth is, we are already slaves of self-organized algorithmic beings higher than ourselves – the technological and legal entities that interact with each other and with the machinery of our civilization, for example. These superorganisms, beyond any individual's power, have evolved an order, a system, and they dictate what we should do. They rule us, own us, embed and encompass us; we are like cells in their bodies.

What exactly are these algorithmic superorganisms? Very difficult to pinpoint.

Even if we could spot and name them, we would still view them as vague concepts entangled with each other like spaghetti, rather than as detached physical objects. I don't think that, from our perspective, we can define these superorganisms as separate entities like the conventional organisms we know, but that doesn't make them any less real. And, more far-fetched, this wouldn't stop those Gods from perceiving themselves and each other as separate entities in their own layer of existence.

We can, however, with our limited understanding, identify concepts such as organizations, nation-states, political parties or corporations. But there is much more complexity that goes over our heads when we include all the algorithmic functionality within and between them. The key to telling them apart is to look at their algorithmic functions.

It is really these entities that wage wars, invent alphabets, or send objects to Mars, not individual leaders, inventors or visionaries. These entities can have consistent habits and patterns, like our personality traits.

Such algorithmic gods and masters are as far beyond our understanding as we are beyond our cells'. We are just a small part of them. And they are intelligent too – whether more or less than us is difficult to tell. They operate at different scales and deal with different problems in the survival of their code. Are we more intelligent than our cells? What about the cancerous ones? If so, how can one of them kill us?

I think we are already slaves of god-like beings that are in their infancy and co-evolving with us. And it shouldn't be surprising if their greed for domination and survival, as an emergent property, accelerates out of our control, and if we find ourselves captured in a deterministic order that we built together, with no way out.

We have experienced this situation before. With idols, commandments, money, cities and legal systems, we have previously made codes that became stronger than us. These codes are already our masters, exhibiting recognizable patterns and taking us to wars and situations beyond the decisions of any CEO, king or emperor.

And I think of the AI threat along the same lines, only on steroids. AI is scary because it runs on ever faster platforms and can accelerate, since it may gain the power to make itself exponentially smarter.

When it comes to what matters to us, things like individual freedom, what is worrying about AI is that it can make the grip of these evolving superorganisms much tighter – superorganisms that have their own selfish codes, for example to minimize a cost function or to optimize for a goal, be it money, growth, profit, order, anything.

While nature plays its own game, the bad news for us may be that our current welfare and freedom last only a short moment in history – that the privileged position of the enlightened modern man may be just a temporary behavior of one of these algorithmic entities going through a phase transition.

So these fuzzy philosophical speculations aside, I think what makes AI dangerous is something like this:

* * *

Technology has transformed us. As our individual survival depends more and more on our interaction with technology, we are gaining some freedoms while losing others. Our functions are changing rapidly.

We are already not free to think with our own individual brains – are we? The dominant codes, widespread systems and algorithms dictate how we should think: what questions should be asked and what options are out there, how we should model the world, how we should think about how to live. Call these forces society, economy, media, culture; they have rules and systems, and we get our thinking patterns from them. The most successful of them have evolved to copy themselves like programs in our heads, and they rule us already.

We can see now that smartphones, controlled from small brain-like power hubs and control panels inside the tech giants, already control the masses. But in some ways they even control the CEOs of those giants. Have you noticed how these powerful individuals, who seem to be in full control, suddenly become desperate in the face of unforeseen challenges?

And this is just one decade of smartphones taking over our lives. Soon enough we will have chips in our brains, and implants will replace screens and touchpads. It will be much easier to control us then – voluntarily, even.

Environmentally, almost all the wild animals that did not follow the new order are gone already, and only us tamed ones are left. Some of us domesticated animals will be the pigs locked up in the slaughterhouse. Some will be workers trapped somewhere else, providing electricity to those facilities. Some of us will be freer, programming the machinery; some follow someone else's orders, who gets orders from another, who is more or less voted in by us through the propaganda that is fed to us. No one is really free, already.

Who wrote all this code? Nobody, as far as we know. We all did it together, and it evolved with us. And it's there now anyway. AI can only make us head toward such a destiny much faster, and voluntarily, because it potentially knows us far better than we know ourselves.

We can’t even say if this scenario is good or bad. It just is. I think there’s no right or wrong at this scale.

Good or bad, I think a kidney cell can never go back to floating freely in the wild Precambrian oceans of the earth like its ancestors did. Not after it evolved to enjoy the economy of scale, and not after its existence came to depend on interacting with the rest of the body.

We may be heading toward uncertain futures like this, in which we find ourselves increasingly *locked up* – if not physically, then algorithmically – running functions that our very survival depends on. It sounds deterministic and sad, but we are heading that way already. I think AI could only make it faster, and could come up with new creatures that would blow our current minds.

One thing is for sure: what we are experiencing now is anything but a state of equilibrium, so we are heading toward something peculiar. We humans, as the catalysts of this process, may try to steer it so that the future order to be established won't be so painful for our species. Although I doubt we can manage it.

Electrified Bees

If you are an electrified bee amongst all other bees in the hive, how far can you go off-the-grid and still survive?

– What if you think electrified bees produce bad honey?
– What if you have a dream of making honey, not from sugar fed to you under fluorescent light, but from wild flowers and in the sunlight?
– What if there is a rule dictated in the hive that going off-the-grid is a sin, so that if you do it, most bees will think of you as a lazy bee who doesn't want to make its fair share of honey?
– What if you come to believe that the honey you make is really not honey?
– What if you come to believe that the honey you make is really not yours?
– What if you think the hive has a systematic leak, and most of what you all make goes to waste?
– What if you come to understand that no one is more responsible for this situation than you – that the queen bee is in it together with all the rest?
– What if you think the hive is in a free fall off a tree, or rolling down from a hill, and sooner or later will hit the river?

Should you, if you can, get a little way away from the craze, if not completely off-the-grid, and still survive?
Or would you starve on the way to the flower garden?

On Culture

Here are a few quick notes, reminders and personal opinions about "culture" in its widest, evolutionary definition:

1. Culture is the most “human” factor:
Every group of people who gather around something (a figure, an object or a set of values) tends to create culture. It's the single most definitive element that separates us from other animals. True, we are language-speaking animals, thinking mammals or tool-making apes, but you can view all of these as stemming from our cultural abilities.

2. Culture evolves randomly like anything else:
A cultural equilibrium takes shape when cultural memes are transmitted long enough, among the members of a rather closed population, through social interaction and cooperation. Many of these memes are either random "mutations" at their origin, or fluctuate randomly as they are imperfectly copied by other individuals.

3. Culture is a glue:
Culture is important to us humans because, in any of its forms, it has been the glue that helped us scale up from tribes of hundreds to populations of hundreds of millions today. These present-day groups of people can be identified by race or ethnicity, religion, geography, occupation or social class, political party, etc., but they are really functional clusters that can cooperate in-group while surviving in their embedding ecology. All of these "superorganisms" have their own type of cultural glue: codes and mechanisms that bind their members together to cooperate within the same system and sustain an equilibrium.

4. Culture has phase transitions:
According to history, this equilibrium eventually collapses and gives rise to bigger and stronger networks, suddenly rather than slowly. There are leaps, and they usually end up finding a way to include even bigger populations than previously possible.

5. Culture is relative:
It's made up. It doesn't matter what components make up this glue, as long as it can act as a glue. And a culture grows old when the glue that could connect hundreds (a stone or a tree) fails to gather thousands efficiently; for that, they need language and law. And a local set of rules can't unite a population the size of a country; for that, they need technology. Today's religions, political divides and classes have played their important role in uniting populations before, but they have failed to scale up and take us to the whole of humanity just yet.

6. Culture has synergy:
Every culture has emergent properties beyond the mindset of its individuals, and even beyond the original values that created it in the first place. People wrongly assume their individual tendencies (inclusion, care, good intentions) are also acted out by them collectively. That is not correct. A group of nice, kind-hearted people can act like savages collectively. In fact, that is what's happening today.

7. Culture is greedy:
One of the emergent properties of culture is greed. Every culture seeks dominance. This is inevitable, as memes and genes – information in general – tend to copy themselves, and only those that manage to propagate and adapt the best will survive the process.

8. We need a global culture:
It would have been ideal if a moon-sized stone or a supermassive tree could unite all ten billion of us in 2050, but what is needed is a system of global governance approved by all humans – something that can operate on our still physiologically tribal brains and yet make the next big scale-up to a planetary level. And this global culture needs to be minimally recognized by all of humanity, and thus pragmatically needs to have empathy for all of us.

9. “Culture is not your friend”:
It's just a glue! At the end of the day, whether embedded in a tribe, a country, a global monetary economy, or a trillion-member AI society around a Dyson sphere, we are individuals. We are as different as we are similar. I doubt we will eventually be organized with enough freedom from our cultures, but it is more pleasant and satisfying to be groups of one that operate safely within a bigger community.

The Matrix

I very rarely watch movies or series – only a handful of times during the past decade. As an ex movie geek, I lost this modern-day habit a decade ago and it never came back.

Last month I finally watched The Matrix! I was surprised by the similarity of some of its scenes to the recurring dreams about simulated reality that I had had prior to watching the movie, or any similar motion picture for that matter.

My period of having dreams about simulations preceded watching The Matrix and Avatar, as well as the newer series Black Mirror. During those couple of months I was steering my lucid dreams to fantasize about the possibility of us all being inside some form of simulation. Every other night a new story was cast behind my eyelids. I wrote down and posted some of them.

I got super excited about these visuals, and I communicated them passionately to the world. Little did I know that people had already masterfully depicted such fantasies in blockbuster movies. The dream imagery that I appreciated and presented as novel might have felt like old news to you.

Was everyone out there aware of this movie genre of simulated reality, except for me, the one person most expressly obsessed with it?

* * *

Anyhow, the similarity between my fantasies and some of those movies’ visual effects is strange. Why would such images be triggered in someone who hasn’t seen them before?

Consider particularly these three landscapes:

1. The consciousness warehouse (the dark room): There was this claustrophobic scene in some of those movies/episodes where all the "consciousness units", despite their differences, were stacked up very efficiently as identical items in a "dark room". This was the behind-the-scenes of existence, the kernel of reality, and any colorful, happy realization of it was just an illusion experienced by those identical larvae that in reality live in a claustrophobically tight space like a lab, a warehouse or a graveyard.

I hadn't seen scenery quite like this before, but the dark room occurred to me in several different forms. I experienced humans once as separate folders on the physical drive of a simulation server, another time as lots of tiny hardware modules buried under the surface of another planet after we had destroyed ours, once again as information-like replicas in the "grid" of a library, and also as dormant larvae aligned next to each other like a graveyard. Think of those teleportation helmets in The Matrix or the transfer booths in Avatar. All these nightmares came as a disturbing, depressing recognition of the nature of true reality, until I woke up and double-checked my body, making sure I was made of real "meat" in physical space.

2. Augmented objects with aliasing artifacts: There was also this other visual effect in the augmented scenes in Black Mirror, where a simulated object is overlaid on base reality but with a slightly different resolution/texture. That also occurred to me quite a few times in dreams (and in reality!) – like the fantasy that the people around me might not actually be in the scene but be augmented objects overlaid on it, yet so seamlessly that I had never noticed any aliasing artifacts on the border of their silhouettes until a "glitch in the matrix" accidentally revealed them to me.

3. Object impermanence: The world goes only as far as the observer goes. This crazy experimental possibility has huge potential to release wild imagination, and it was handed to me a couple of times in my dreams. Think of each object that you see, and then of something behind it, and let it surprise you in a stream of events that diverges faster than with any other technique in dreaming!

At the extreme opposite of object impermanence lies object permanence in the mind, which is also a crazy realization. In a couple of breathtaking moments I had the idea that the barriers around us are not opaque objects that stop the light from things "behind" them from reaching us as observers. We are aware of the whole universe around us, but walls, surfaces and physical barriers overwrite our local observations and temporarily delete those farther objects from our memory. It is as if we have full sight to the end of the universe, and a sanctioning effect is additionally in place to give us an illusion of separation. That could be hacked in my dreams, giving the observer the ability to remove visual barriers, wall after wall.

* * *

The fact that I intuited these landscapes before watching them gives them a universality beyond my dream journals and beyond their Hollywood representation.

Chances are that these images are triggered independently but by a common cause.

1. The dark-room scenery can be sparked by the ordinary experience of day-to-day man-made phenomena. An ordinary dictionary is, in a sense, a dark room for words, if each of them carries an ocean of meaning. The same goes for books stored in a large library, or for the humans we have lost, buried in a real-world graveyard.

2. Any experience with VR and AR can trigger this: the visual artifacts of augmented objects, the aliasing effect on their borders, or the pixelation on their surfaces.

3. The sceneries with hacked object impermanence may also be triggered by experience with programming or CAD graphics – a history of placing objects in layers in front of or behind each other with respect to an observer, and so on. Those real-life experiences could also be hidden signals that came back to haunt me in dreams, though this time putting me on the stage of the simulation.

Apart from these real-life experiences that can inspire our dreams toward Matrix-level fantasy, what else could be in play? That is my main question here.

Is there a deeper universality to those experiences that we tap into subconsciously every now and then in dreams, hallucinations or works of art? Are there visual archetypes in the collective mind of mankind that draw those landscapes out of a common intuitive treasure? Something like the neurological explanations behind common experiences such as out-of-body or near-death experiences – say, some brain circuitry creating similar funny illusions and triggering the same visual effects in independent cases.

An alternative to all these explanations is pure coincidence, unless…

Unless we live in the matrix and we can tap into its kernel every now and then!

* * *

Interesting similarities anyway! Next time my dreams resemble a movie, please do me a favor and mention the title in the comments!

Have a good day in the simulation, everyone.

Value-Fact Distinction?

There is this thing called the "value-fact distinction"; it points to the difference between "what is" and "what ought to be" (in Persian: «باید و نباید», "ought and ought not", vs. «هست و نیست», "is and is not").

* * *

1. As a child I was not aware of this distinction. I think it is quite natural (a default setting) to experience reality through emotions and values and to judge the world by how it benefits us, as opposed to investigating it objectively out of mere curiosity.

That is, morality is – wrongfully and as a default mindset – assumed to be as objective as rationality.

* * *

2. As I grew up I started to spot the relativity in our ethics and morals. I became convinced that factual statements are objective and can be evaluated as true or false, whereas ethical statements are subjective, and right vs. wrong is a matter of taste or perspective.

The true/false and right/wrong dualities may "feel" alike, and we apply both to our decision-making in life. But we should not mix them while investigating the world: if we set out to inspect objective reality, we should stick to the facts and stay away from the subjectivity of ethics. Mistaking right or wrong for true or false is a trap.

In short: facts are objective; values are not.

* * *

3. The weird thing is that the distinction between facts and values is fading again for me. They are coming together like when I was a child, but this time in a different way.

I ask: what if facts and values are both a matter of perspective, in a fundamental way? What if both rationality and morality are subjective?

Kids may know some things better, prior to their culturally biased upbringing.

Simulated reality

Elon Musk, among others, brings up a meta-statistical argument to show that we are more likely to be in a simulation than not; that we are most definitely not flesh, but words made flesh.

I don't know how we can take seriously the word of someone whose self is just an avatar in a simulation. That someone wants to colonize Mars does not by itself lend more validity to their words, especially when they are themselves made of words!

The argument he is popularizing is usually credited to the philosophers Nick Bostrom (2003) and, earlier, Hans Moravec (1998). And I have found modern instances as old as Alan Watts (1972) expressing the same argument (here as the first fantasy out of three).

Transcending yourself, your simulators and theirs!

Whoever said it first, what matters is who did it first!

Saying that our bodies are not hardware but are instead something of the sort of information/software is probably an unfalsifiable claim. It is like placing an object next to its meta-level of existence and yet comparing them as two similar things. It is paradoxical, like Russell's antinomy, which deals with whether a set can be a member of itself or not. And in my opinion it is about as valid as Saint Anselm of Canterbury's ontological argument for God, put forward a thousand years ago.

But well, if we are in a simulation and we can one day prove it, then we will have understood something about those who programmed us. So why not continue extrapolating the transcendental cascade to learn things about those who programmed them? And maybe even hint to our simulators that they may be in a simulation too, and even in what kind of simulation.

Maybe that’s why they simulated us…

How to find out? With a simulation, maybe. Like programming something that could tell us what's going on beyond us, and here's the catch: beyond our creators, and also beyond their simulators!

A cascade of interventionist Gods

Now, the deeper philosophical question is not whether we are in a simulation – that can be interpreted differently depending on the definition of the God/simulator, and it is an unfalsifiable claim, a matter of faith. The more interesting question is: assuming we are in some form of simulation, is our creator an interventionist one or not? That is, are we in a supervised simulation that sometimes changes based on how we act (are there miracles?), or were we just given a bunch of rigid rules and then left alone to compute?

Which itself boils down to whether our simulators are supervised by their own intervening God or not.

If our creators are interventionist, what about their Gods? An interventionist God may be beyond us and so appear to us as having free will, but to those who made that creature, it may itself be just a type of abandoned code left to go down its own path. That cascade logically never ends.

Simulation depth

Opening up this discussion, there are follow-up questions:

What kind of simulation are we in? What are its boundaries and limits compared to our regular man-made simulations? Are we in a familiar type of simulation, say a huge multi-threaded, discrete, finite algorithm? Or could it be fundamentally more complex than our currently familiar notion of algorithmic computation, of a simulating program?

If we are role-playing inside a discrete and finite kind of computation, then the full history of space-time can be given as a humongous binary file, or technically as a large integer on the tape of a Turing machine. We are then some chunks of information on it: enumerable combinations of finite symbols, rendered locally or globally, frame by frame, discretely in time (basic notions from the theory of computation).
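As a toy illustration of that idea (nothing here is a claim about physics; it just shows how a discrete, finite history literally is one large integer):

```python
# A tiny 4x4 binary "universe" evolved for a few frames, with the complete
# space-time history packed into a single (large) integer.
import random

random.seed(0)
WIDTH, HEIGHT, FRAMES = 4, 4, 8

grid = [[random.randint(0, 1) for _ in range(WIDTH)] for _ in range(HEIGHT)]
history_bits = []
for _ in range(FRAMES):
    history_bits.extend(cell for row in grid for cell in row)
    # an arbitrary local update rule, standing in for "the laws of physics"
    grid = [[grid[y][x] ^ grid[y][(x + 1) % WIDTH] for x in range(WIDTH)]
            for y in range(HEIGHT)]

history_as_integer = int("".join(map(str, history_bits)), 2)
print(f"{FRAMES} frames of a {WIDTH}x{HEIGHT} universe as one integer:")
print(history_as_integer)
```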

And in that scenario, does it even make a difference to that universal machine if a tree falls in a forest but no one is around to hear it? Will a sound be calculated when there is no ear? Or is it more likely (and more efficient) for the simulation to go only as far as the observer goes?
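That last possibility is essentially what programmers call lazy evaluation: nothing gets computed until something downstream actually asks for it. A minimal sketch (the "world" and "observer" here are of course made up):

```python
# Lazy evaluation as a stand-in for "the simulation only renders what is
# observed": each region of the world is computed on first access and cached,
# and regions nobody ever looks at are never computed at all.
from functools import lru_cache

@lru_cache(maxsize=None)
def render_region(x: int, y: int) -> str:
    print(f"  (computing region ({x}, {y}) ...)")   # visible "work" being done
    return f"forest tile at ({x}, {y}), tree falling, sound included"

observer_path = [(0, 0), (0, 1), (0, 0)]   # revisiting (0, 0) hits the cache
for position in observer_path:
    print("observer sees:", render_region(*position))
```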