Explosions between the Cambrian and the Technological Singularity

Economy of scale and life’s punctuated equilibrium:

Life on Earth is going through another short period of rapid morphological change, this time because of us humans: in a short geological moment we have gone through a massive scale-up (seven orders of magnitude, from tribes of hundreds to the billions on the Internet and in the global economy). That much we all know.
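As a rough back-of-the-envelope check of that figure, assuming a tribe is on the order of hundreds (about 10^2) of people and today's connected population is on the order of billions (about 10^9):

\[
\log_{10}\!\left(\frac{10^{9}}{10^{2}}\right) = 9 - 2 = 7 \;\text{orders of magnitude}
\]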

Phase transitions are commonplace in single species – known there as punctuated equilibrium, they are spotted from the local evidence at hand, such as the fossil record. But terrestrial life as a whole goes through such phase-transition behavior too, although it isn't always as easy to spot in our labs.

The last time we think a scale-up like this happened was the so-called Cambrian explosion, half a billion years ago: the rapid shift from single-celled organisms to complex animals with advanced, specialized systems and organs. This was when nature evolved new networks and gave life emergent properties such as intelligence and purpose.

And in between these two explosions there may have been other economies of scale that transcended single units into complex wholes, though we may not identify them as easily. I am, for instance, quite open to the spiritual idea that views a rainforest as an intelligent whole, with a form of wisdom and the ability to reason, possessing foresight, purpose, and other emergent properties invisible to our senses and ungraspable by our brains.

We require more advanced tools to discover those realms, but rest assured there is much more out there than we have seen. Communicating with intelligence that operates at much larger or smaller scales, or at a much slower or faster pace, is not something we have evolved to do, nor have we built our tools specifically for it. But I think we have already made tools that we can begin to use for this particular purpose, and I am hopeful and optimistic that science will eventually be able to explore those realms.

Subjectivity, an emergent property?

What can be even more puzzling is the question of consciousness, subjective experience, and sentience. Are they, too, emergent properties of complex networks? This is a whole new discussion:

Can networks give rise not only to intelligence, planning, and reasoning – as stated before, I am convinced they do – but also to joy and suffering, out of nothing?

And what are the ethical implications of all this?

We don’t know whether cells have sentience. I wouldn’t be surprised at all if they have something like we do. Why exactly should we have it and they not?

Now let’s assume for a moment that they do have a sense of sentience. The ethical question then becomes: was that explosion a fun thing for them, or was it a disastrous, regrettable mistake to ride the economy of scale and build animals instead of competing alone for survival? Did they sacrifice their individual freedom for specialization in order to serve the survival of a bigger whole? More far-fetched still: is a kidney cell *happier* than a lonely floater with a shorter life span and less guaranteed safety, but possibly more degrees of freedom?

Relativity of morals is ethics 101: what is good for one thing is bad for something else. So I am not trying to quantify and sum up all the good and evil in the universe to solve a karmic optimization problem here. The question is difficult enough to ask as it is: could single units be happier on their own, or as part of a bigger whole?

And if it doesn’t make sense to you to ask such a question about microbes, just wonder the same thing about us. It’s hard to conceptualize things we haven’t evolved to perceive, but our transition from tribes of apes to specialized members of powerful, gigantic institutions that decide our fate more than we do is a phenomenon we tend to ignore. And such super-organisms – whatever form you imagine them in, from the physical campuses of multinational corporations, institutions, and governments to the less visible AI code all across the Internet competing for its own survival – may only be in their early forms. Their real game may not even have started yet!

The point being: all the signs of a technological singularity fit within the context of evolution.

Ethical considerations:

Back to the ethical questions: is all of this good or bad, and should we help it or stop it? Relativity of ethics aside, there are two levels of morality I can think of:

– One is what we are used to in our conventional ethics: a sense of good or bad at the human level, or in familiar issues close to it such as animal welfare. Are we as individuals losing our freedom to serve the dictatorship of new giant monsters? Are we humans going to suffer more, and for long, dark periods? Could we catch ourselves, in the blink of an eye (a giant eye!), in conditions as miserable as those animals experience in our industrial farms, simply because unavoidable forces of nature are leading us there? Or will we find a more sustainable and less cruel way of expanding the network of life, and transcend this with less pain and suffering, exploitation and war?

– The other ethical discussion is a more karmic sense of good versus evil: the ultimate survival of life. Whether or not we humans end up happy or miserable in any given future scenario, is our technology eventually going to protect life on Earth from external cosmic hazards and possibly even expand it beyond Earth? Or will it kill life off completely? Some say our species may actually have a purpose, and this is it.

In this context, if our civilization’s explosion instead implodes and kills all life before it reaches its multi-planetary ambitions, then that can be viewed as a failed gamble by Mother Nature.

Will humans make it to, and survive, the technological singularity?

And then there is a third scenario, in between – the most likely, I would say. Our species dies a mild extinction before taking over the stars, but also before destroying life completely, forever and ever. Both of those seem much more difficult than simply going extinct.

What will happen in that scenario? Probably plants will come back with new wisdom – resistance to nano-biological hazards, radioactivity, plastics and whatnot. Then they will make new things that move around and send them out again on the mission to pollinate other stars, for thousands more unsuccessful trials, until a massive asteroid finishes it all off, this time completely.

Now seriously, does Mother Nature have ways to set goals and make plans – to invest in a species so that it becomes technologically advanced enough to protect its mother? Hey, let’s make some humans to protect and expand life, even though they might kill it all. And in taking such gambles, does she also possess mechanisms for sensing and evaluating the risks involved?

I think she does. Apparently in one instance right here and now.

If this post evolved as a part of nature, then nature does have ways of trying to assess the risk of its gambles. All the technologists and scientists who push our civilization forward, yet inform and warn us about the existential threats appearing on the horizon, are the manifestation of such a risk assessment. And they come from nature. So why should we think of them as an isolated phenomenon? How do we know nature hasn’t manifested things like this before? All we see are the qualities of its current wave of emergent intelligence.

Hopefully it’s not the last wave, and I really doubt it’s the first one. Unlikely!
