On Artificial Intelligence


Regular readers of the Green Rover will recall that I’ve briefly touched upon this subject in a previous article discussing automation in the film industry. I also touched upon automation in an article detailing some of the biggest problems we’ll face in the future, alongside the broader issue that people struggle to solve long-term problems because they focus almost exclusively on short-term ones.

This article will dive a little deeper into AI, and for fans of existential dread (as exhibited in my article about the concept of Time), you’re in for a ride.

So, Artificial Intelligence (AI) is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science, AI research is defined as the study of intelligent agents: any device that perceives its environment and takes actions that maximise its chance of successfully achieving its goals. Colloquially, the term “artificial intelligence” is applied when a machine mimics cognitive functions that humans associate with other human minds, such as learning and problem-solving.

Now, AI is a very complicated subject when it comes to how exactly it works, and I know for a fact that I won’t be able to adequately explain it, so I’ll leave that to the professionals. All you really need to know about AI is that, at the moment, it’s both doing well and not doing so well. When it comes to assigned tasks such as diagnosing medical symptoms and beating Jeopardy! contestants, it’s doing very well. If you’re talking about completing tasks it has not been developed to handle, or even telling the difference between pictures, then it’s not doing very well.

Case in point: take a look at the picture below.

[Image: a pattern of black and white stripes with two animals hidden in it]

There are two animals hidden in the black and white stripes: on the left-hand side a zebra, on the right a white tiger. The reason we’re able to see these animals is that, chances are, you have at some point or another seen a picture or a video of each animal. Even if it was only momentary, your brain recorded that image and filed it away as “zebra” or “white tiger”, the latter being the more important, because if you had no idea there was such a thing as a white tiger, you would have had far greater difficulty seeing it in the picture.

Now, some AIs have been given access to the internet or have been updated with facts about specific events or questions, such as the Jeopardy! bot. But machines, as yet, cannot tell the difference between a photo of a bee and a photo of a fire hydrant. Nor can they read certain words written in certain distorted fonts. That’s why so many sites require you to fill out a small challenge to prove that you’re not a bot.

That seems like such an easy thing for us, but once you start breaking down how and why we think and interact with things, the entire world becomes a hell of a lot more complex. So what computer scientists and engineers are essentially doing is programming a device that will eventually be able to think for itself, like a boiler that can measure the temperature of the house and decide for itself whether or not it should turn the heating on.
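To make that “perceives its environment and takes actions” idea a little more concrete, here is roughly what the boiler example looks like as a loop that senses, decides and acts. This is just a minimal sketch in Python; the sensor function, the 20-degree target and the half-degree buffer are all made up for illustration, not taken from any real thermostat.

import random
import time

TARGET_C = 20.0   # temperature the household wants
DEADBAND = 0.5    # small buffer so the boiler doesn't flick on and off constantly

def read_room_temperature():
    # Perceive the environment (a stand-in for a real temperature sensor).
    return 18.0 + random.uniform(-2.0, 4.0)

def set_heating(on):
    # Act on the environment (a stand-in for a real boiler relay).
    print("heating ON" if on else "heating OFF")

def run_agent(cycles=5):
    heating_on = False
    for _ in range(cycles):
        temp = read_room_temperature()       # perceive
        if temp < TARGET_C - DEADBAND:       # decide: too cold, heat up
            heating_on = True
        elif temp > TARGET_C + DEADBAND:     # decide: warm enough, switch off
            heating_on = False
        set_heating(heating_on)              # act
        time.sleep(0.1)                      # wait before the next reading

run_agent()

The point isn’t the code itself; it’s that all of the machine’s “thinking” lives in that middle decide step, and a couple of if-statements is about as far as we can spell it out by hand.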

All the while, we don’t even fully understand how the human brain operates, so developing a machine that can think independently, while not knowing how we ourselves are able to think independently, is very difficult.

But despite the hurdles, AI is doing pretty well. It’s less a question of if it’s possible and more a question of when. Despite the many benefits it may bring, there’s a lot of fear around this emerging technology, not just among low-skilled workers but among intellectuals such as Stephen Hawking and Elon Musk, both of whom have warned that AI poses a threat to the very existence of human civilisation.

At this point we’ve been bombarded with so much media warning of the dangers of AI that literally everyone with a brain understands that we ought to tread lightly. Or else our kids will grow up in jelly pods powering god knows what.

I’ve detailed my foolproof plan to avoid death by robot on the blog before, but to reiterate: I propose that we install emotions into these devices to act as failsafes against potential harm. In that arrangement, AI would think of us as parents. Most people don’t want to kill or harm their parents, even if they’re old and annoying. If the AI interacts with us in a familial sense, conflict, no matter how rational it might seem, will never occur.

On that latter point, it isn’t enough that AI thinks of us as parents; we must think of AI as our children. The familial respect must be mutual, or else these emotional safeguards are practically useless. There are so many fucked up people in the world, and almost all of them had terrible home lives growing up. It’s no coincidence that the vast majority of prisoners facing the death penalty in America were traumatised as children by a parent or guardian.

I wouldn’t be surprised if we soon see headlines about people being murdered by their dishwasher because they were being a dick towards it.

But these safeguards will be extremely difficult to develop. If we’re having difficulty getting a machine to tell the difference between a bee and a fire hydrant, then how the hell are we going to explain love? Or hate? Or the whole array of feelings on the emotional spectrum? How the hell do we break that down into ones and zeroes?

I don’t know, but I imagine a smarter person might.

Thinking about the development of AI and the binary breakdown of emotions got me thinking about our own beginnings. We use AI to perform tasks that we find either difficult or tedious; essentially, it’s a slave conjured into existence. Human civilisation has been around for about 12,000 years, as far as we can tell from the fossil and archaeological records. But modern humans have existed for well over 100,000 years, and as far as the size of our brains goes, little has changed in that time.

There are many theories as to how humans came to be so intelligent. Some propose that our ancestors were less hunters and more scavengers, feasting on the leftovers of bigger predators. That led to a diet rich in bone marrow, the nutrients of which helped expand the brain, so that after enough generations a monkey had sense enough to judge distance and throw a rock that actually hit something. Others propose that monkeys just got high off of mushrooms, and that this helped expand the brain and may even explain the origins of religion.

But what if our intelligence was neither circumstantial nor divine? What if we were programmed to be like this? What if, 100,000 years ago, some hyper-intelligent species needed to develop an AI that could perform difficult or tedious tasks for it?

What if, instead of using metal or plastic that would suffer wear and tear from the terrain, it opted for a substance that could withstand rain, sleet and snow while also being able to heal itself to a certain degree? What if, instead of constantly replacing burnt-out microchips, it opted for a bag of jelly that could delete irrelevant information to free up storage and was good for at least five or six decades? What if, instead of powering it with a finite fuel source, it opted for renewable sources packed full of the terrain’s elements, so that the machine could repair itself?

Human beings could be robots made of bone and meat: the descendants of earlier hominids, groomed to become more intelligent, much as we bred wolves into dogs. We may have been born in a lab, bred like rabbits and assigned to these beings, some of whom had more than one human. But then these creatures died out, or left, perhaps because of a virus we were immune to, or because of some environmental catastrophe like a severe ice age.

Either way, they’re gone. We are now a machine capable of so much, with no idea what to do. When you know your purpose, life is easy. But once you forget that purpose, or lose it… life becomes a lot harder. The vast majority of human history has been an attempt to understand the world around us and our place in it, tearing each other apart for dominance, trying to find something, anything, that could fill that deep, dark hole. Why are we here? Why are we like this? Is there any point in going on?

It’s extremely difficult for a machine assigned to complete certain tasks to somehow convince itself to survive purely for the sake of survival, but we did it. Against the odds, we were lucky enough to make it through 100,000 years without them. And now we may be on the verge of creating beings just like us, but a little bit different. In our image and likeness, but not.

It’s a lovely little rabbit hole we’ve found ourselves in. But when you start to think about our origins and the possibility of having been developed, you begin to wonder about the developer. Who made them? Did they come about naturally, as we typically believe ourselves to have done, or did they also have a developer? What about the developer’s developer? What about the developer’s developer’s developer’s developer? Is there a direct line of developers stretching back to when the universe cooled down enough for planets to form?

I don’t know. But it leaves us with an interesting question: is intelligence itself artificial?

 

 

 

 
