Why AI is the most significant thing to happen in technology in over 40 years
Terms like artificial intelligence, deep learning and neural networks are frequently spoken of as buzzwords and lumped together with other emerging technologies such as virtual reality and the internet of things. However, I feel that AI is much more than that.
43 years ago, Bill Gates and Paul Allen released the first version of Microsoft BASIC for the Altair. Yes, there were breakthroughs decades prior to this, but in my opinion, Microsoft BASIC was the first true step in aligning human beings with technology. For those unaware, BASIC is a programming language which takes human-readable instructions and interprets them as machine code. Of course, before BASIC there were other languages which could (to some degree) take human-readable instructions and compile them to machine code, such as Fortran, LISP and COBOL. However, Microsoft BASIC was the first step in bringing programming out of the laboratory and into the hands of real users. This is why I believe Microsoft BASIC was as significant as it was. Here is the classic snippet of BASIC code:
10 PRINT "HELLO WORLD"
20 GOTO 10
So simple, right?
43 years have passed since Microsoft BASIC was released to home users, and we are now entering an era where creating deep neural networks is as accessible to regular people in their bedrooms as BASIC once was. What exactly does this mean? Deep learning is data-driven: the machine learns from the data rather than from explicit instructions. This is huge. We are looking at a possible shift away from meticulously giving a machine instructions line by line, towards curating information, providing it to the machine, and letting it figure the rest out. Think computer vision, self-driving cars and language translation.
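To make the contrast concrete, here is a minimal sketch of what "learning from data" means. It uses plain Python and a simple perceptron rather than a real deep learning framework, and the data, learning rate and epoch count are all illustrative assumptions: instead of hand-coding the rule, we give the machine labelled examples and let it find the weights itself.

```python
# Illustrative sketch: the machine learns a rule from examples, not from
# hand-written if/else instructions. All numbers here are arbitrary choices.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for a binary classifier from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # how wrong was the current rule?
            w[0] += lr * err * x1       # nudge the weights towards the data
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The logical AND function, expressed purely as data -- no rules given.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The point is the shape of the workflow: we curated four examples and the machine derived the rule, which is exactly the shift described above, just at toy scale.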
This effectively opens up computer science to people with domain knowledge and ideas, without necessarily requiring technical skill. Theoretically, a dentist with thousands of images of periodontitis could create a system that analyses photos of patients' teeth and figures out how urgently they need to be seen. An architect with reams of data on soil conditions and building materials could mitigate risk in new projects and save millions.
As it stands, creating the actual architecture for a system to learn from data still requires technical skill. However, I believe this will not be the case for much longer. Google’s Brain team is working on a system called AutoML, which figures out the architecture from the data itself. Take computer vision as an example: AutoML sifted through the millions of images in the ImageNet repository and produced a best-fit architecture for the task, called NASNet. NASNet outperformed the human-designed neural network architectures that preceded it. This is a huge step.
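To give a flavour of the idea, here is a toy sketch of architecture search in plain Python. It is illustrative only, not how AutoML actually works: real systems search vastly larger spaces with far smarter strategies. The candidate "architectures" here are just hidden-layer widths, the task is XOR, and every number is an assumption for the example.

```python
# Toy "architecture search": instead of a human fixing the model structure,
# try several candidate structures on the data and keep whichever fits best.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# XOR expressed as data: not linearly separable, so some hidden structure
# is genuinely needed -- a width-0 model cannot fit it.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def make_model(hidden_size):
    """A fixed random hidden layer (random threshold features) of this width."""
    return [[random.uniform(-1, 1) for _ in range(3)] for _ in range(hidden_size)]

def features(hidden, x1, x2):
    # Hidden units are random threshold features of the raw inputs,
    # plus the raw inputs themselves.
    return [1.0 if h[0] * x1 + h[1] * x2 + h[2] > 0 else 0.0
            for h in hidden] + [x1, x2]

def train_and_score(hidden):
    """Train a perceptron on top of the hidden features; return training errors."""
    n = len(features(hidden, 0, 0))
    w, b = [0.0] * n, 0.0
    for _ in range(50):
        for (x1, x2), label in DATA:
            f = features(hidden, x1, x2)
            pred = 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0
            err = label - pred
            for i in range(n):
                w[i] += 0.1 * err * f[i]
            b += 0.1 * err
    errors = 0
    for (x1, x2), label in DATA:
        f = features(hidden, x1, x2)
        pred = 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0
        errors += pred != label
    return errors

# The "search": score each candidate width and keep the best-scoring one.
candidates = [0, 2, 4, 8]
scores = {size: train_and_score(make_model(size)) for size in candidates}
best = min(scores, key=scores.get)
print(scores, "-> best width:", best)
```

The interesting part is that no human decided the structure: the data did, via the scores. Scale that loop up enormously and you have the spirit of what architecture search systems do.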
The summary here is that grouping deep learning into the family of emerging tech such as VR and AR really does not do it justice. Deep learning's potential to democratise human-computer interaction is immense. Just as BASIC brought the ability to give instructions to a machine into the home, hardware breakthroughs and deep learning accessibility will diminish the reliance on technical skills to create groundbreaking solutions.