Have you seen the film Ex Machina? If you haven't, you must. Having re-watched it a few times by now, I find myself enthralled by its message and apprehensive of its implications. Are we humans capable of building truly sentient machines? And if it were possible, could we ensure that the machines we build would be benign, or would they be the beginning of the end of our entire kind?
As a software programmer, a hacker, I'm particularly interested in the field of artificial intelligence. What does it really mean to be intelligent? Obviously, being able to sum a pair of numbers doesn't make one smart; capable of pulling off simple arithmetic operations, but definitely not smart. But what about a machine capable of making many mathematical computations in a very short span of time, over and over again, in perpetuity and without a pause; does that make for an intelligent machine? Hardly. By the same token, we cannot in good conscience call computers intelligent. They are very capable machines, perfectly fit to solve specific problems. But do they possess true intelligence? I think not.
The Merriam-Webster Dictionary defines the word “intelligence” as
the ability to learn or understand things or to deal with new or difficult situations
Thus, the mark of true intelligence is the ability to learn new things. Therefore, the key to building a truly intelligent system is to program it to be able to learn: process new input, acquire and catalogue information in a meaningful way, and reference those vast stores of data to make sense of reality (through external stimuli) and chart a future course of action.
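To make that loop concrete, here is a toy sketch of the process just described: ingest new observations, catalogue them, and consult that stored experience when faced with unfamiliar input. Everything here (the `SimpleLearner` class, its `observe` and `recall` methods) is a hypothetical illustration of the idea, not any established design; the "learning" shown is just nearest-neighbour lookup over memorized examples.

```python
# A toy illustration of the learn-store-reference loop: the machine
# catalogues labelled observations and consults that store to make
# sense of new input. All names are invented for this sketch.
import math


class SimpleLearner:
    """A minimal nearest-neighbour learner: memory plus lookup."""

    def __init__(self):
        self.memory = []  # catalogued (features, label) pairs

    def observe(self, features, label):
        """Process new input: store it for future reference."""
        self.memory.append((features, label))

    def recall(self, features):
        """Interpret unseen input by consulting stored experience."""
        if not self.memory:
            raise ValueError("nothing learned yet")
        # find the closest remembered observation (Euclidean distance)
        closest = min(self.memory, key=lambda m: math.dist(features, m[0]))
        return closest[1]


learner = SimpleLearner()
learner.observe((1.0, 1.0), "small")
learner.observe((9.0, 9.0), "large")
print(learner.recall((2.0, 1.5)))  # a point near (1, 1) → prints "small"
```

Crude as it is, the sketch captures the shape of the argument: there is no intelligence here without the memory, and no use for the memory without a way to reference it against new stimuli.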
And just how do we go about building such a machine? That I will touch on in my future musings. Stay tuned.