The idea of Artificial Intelligence, or AI, has been around for a long time. It has been the topic of many books and movies, and it also features in philosophy and ethics. However, though the term might conjure up images of a program that appears sentient, the reality is that contemporary AI is nowhere near that level of sophistication; contemporary AI systems are actually programs based on a field of study within the broader field of Artificial Intelligence in Computer Science, namely Machine Learning. So if there's no "intelligence" involved in contemporary AI, what's with the sensationalist title? You've seen the category, you know what's coming; let's get into it.

Machine Learning (ML) is the study of computer algorithms that automatically improve through experience and/or data. Currently, the extremely trendy ML technique is the Artificial Neural Network (ANN, or simply Neural Net (NN)). It is trendy for good reason: it has proven to be extremely effective. The NN is not a new idea, but it was not feasible to work with when first conceived (~1940s) because the training phase is very compute-intensive, and the technology just wasn't available. As computers became more powerful and GPUs were introduced, training a NN became much more realistic, and the power of the NN became evident. A specialization of the NN is the Deep Neural Network (DNN). The "Deep" in the name means that there are many, many layers between the input data and the output prediction. This is where the "Deep" in "Deep Fakes" and Google's DeepDream comes from.

Understanding the path to the apocalypse first requires understanding (at a high level) how a NN works. A NN might look something like this:

                 ┌──────┐            ┌──────┐
                 │      ├───────────►│      │
        ┌───────►│      │            │      ├─────────────┐
        │        │      ├──┐  ┌─────►│      │             │
        │        └──────┘  │  │      └──────┘             │
        │                  │  │                           │
    ┌───┴──┐               │  │                       ┌───▼──┐
    │      │               │  │                       │      │
    │      │               │  │                       │      │
    │      │               │  │                       │      │
    └───┬──┘               └──┼──┐                    └───▲──┘
        │                     │  │                        │
        │        ┌──────┐     │  │   ┌──────┐             │
        │        │      ├─────┘  └──►│      │             │
        └───────►│      │            │      ├─────────────┘
     Input       │      ├───────────►│      │          Output
     Layer       └──────┘            └──────┘          Layer
                 Hidden               Hidden
                 Layer                Layer
                 1                    2

The nodes are grouped vertically, and each grouping is called a layer. A DNN could have hundreds of hidden layers, with each layer having hundreds of nodes. Each connection between nodes has a weight associated with it. During a single training run, the input value is fed through the net, and the resulting output value is compared with the expected output value from the training data. The result of the comparison is then communicated back through the NN via a process called Back Propagation, which adjusts the weights in the NN. The more training data one has, the better the NN gets. At least that's the theory1.
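The loop described above (feed input forward, compare with the expected output, push the error back through the weights) can be sketched in miniature. This is a toy illustration only, not any real ML library: a hypothetical 2-input, 2-hidden-node, 1-output network with hand-rolled backpropagation, trained on the OR function. All the names and sizes here are made up for the example.

```python
import math
import random

random.seed(0)  # make the "random" starting weights repeatable

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Weights live on the connections between layers (plus a bias per node).
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
b2 = 0.0

def forward(x):
    # Feed the input through the net, layer by layer.
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, y

# Training data: input/output pairs (here, the logical OR function).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

lr = 1.0  # learning rate: how hard each error nudges the weights
for epoch in range(5000):
    for x, target in data:
        h, y = forward(x)
        # Back Propagation: push the output error backwards,
        # nudging each weight in the direction that shrinks the error.
        d_y = (y - target) * y * (1 - y)
        for j in range(2):
            d_h = d_y * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * d_y * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_y

# After training, the predictions should round to the targets: [0, 1, 1, 1]
print([round(forward(x)[1]) for x, _ in data])
```

The point to notice: after training, the "knowledge" is nothing but the numbers sitting in w1, b1, w2, and b2. Real DNNs do the same thing with matrix math on GPUs (frameworks like TensorFlow and PyTorch automate the backpropagation), at a scale of millions or billions of weights instead of nine.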

Even without knowing all the details of exactly how NNs work, one can start to see a big problem. The NN is just a bunch of connected nodes with weights. Training involves using data (in input/output pairs) and essentially tweaking the weights in the NN until its predictions get good. Once a NN has been trained, it's given a nice coat of paint and bam! An AI application is let loose on the world. What does this actually mean? It means that though AI (read ML (read DNN)) is starting to be used everywhere, from facial recognition to providing recommendations on streaming services to filtering job applicants, there's no real understanding of how it makes a decision. It is just a giant network of numbers that spits out an output based on an input; it does not provide any insight into why an input translates to a particular output.

However, the most fundamental and insidious problem with NNs isn't even a characteristic of the NN. It has to do with us, its creators and users. We seem to be completely okay with the fact that we don't understand how these programs make decisions. Sure, there are people out there trying to tackle this problem, but the point is that we saw the practical applications and charged head first into the brave new world without requiring that explainability be treated as just as important as capability. We are already starting to depend on AI programs such as Google Assistant, Siri, Alexa, and the like. We, as a species, are offloading more and more responsibilities to AI. We're starting to implicitly accept its suggestions. These unexplainable AIs will soon drive our cars, order our groceries, deliver our packages, secure our homes, help us code, and help us learn. What happens when we actually stumble upon true Artificial Intelligence?

The AI train cannot be stopped. The practical applications are undeniable. AI assistants for doctors help lower cognitive load, allowing doctors to help more patients. AI image recognition allows for more seamless personal banking. AI assistants for chefs might help unlock new flavours and dishes. AI Natural Language Processing means better computer translations and text-to-speech/speech-to-text, allowing travelers and those suffering from speech disorders to communicate more easily. The list could go on. However, while these truly are incredible innovations in our history, we must always be aware of their limitations; we should never allow ourselves to become permanently dependent on tools we cannot explain, and therefore cannot control.

1. I've heavily glossed over basically every detail of the process, but if you're interested in understanding how Neural Nets work, I highly recommend 3blue1brown's series on the topic. For an even simpler explanation, check out andycyca's writeup An oversimplified explanation of Neural Networks.
