Tuesday, March 31, 2015

How Deep Learning Will Change Every Aspect Of Our Modern World -- Very Soon



The concept of artificial intelligence is nothing new -- it has been part of popular culture for decades. Hollywood seemingly makes a movie every couple of years about technology superseding human capabilities, often leading to dystopian futures in which we are taken over by our own creation. Until the recent explosion in Big Data and data analytics, however, combined with the steady growth in computing power that Moore's Law delivers each year, artificial intelligence remained in the realm of science fiction.

Today, deep learning already exists, and the giants of the technology industry (Google, Microsoft, Apple, and others) are all vying for the most qualified and experienced machine learning scientists, buying up talent wherever they find it. IBM's Deep Blue defeated the world chess champion, and its Watson supercomputer went on to beat the best human players at Jeopardy! -- games we had traditionally believed computers could never master. The underlying idea dates back to the 1950s, when Arthur Samuel set out to teach a computer to beat him at checkers. Instead of programming the computer in the traditional way, by inputting each instruction he wanted it to carry out, he had the computer learn the game on its own, much the way a human learns to become good at checkers. He ran thousands and thousands of simulated games, the computer learned from them, and before long it was beating him easily. This was one of the earliest demonstrations of machine learning. But if the technique dates back to the 1950s, why should we expect anything to change drastically in artificial intelligence now?
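To make Samuel's idea concrete, here is a minimal sketch in the same spirit -- a toy illustration, not his actual program. It learns the simple stick game Nim (players alternate taking 1-3 sticks; whoever takes the last stick wins) purely by playing thousands of games against itself, nudging its estimate of each move's win rate toward the observed results:

```python
import random

random.seed(0)

# Q[(sticks, take)] = estimated win rate for the player making this move
Q = {}

def choose(sticks, eps):
    """Pick a move: mostly the best known, sometimes a random one to explore."""
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < eps:
        return random.choice(moves)
    return max(moves, key=lambda m: Q.get((sticks, m), 0.5))

def play_game(eps=0.2, alpha=0.05):
    """One full self-play game; afterwards, nudge every move's value toward the result."""
    sticks = 10
    history = [[], []]          # moves made by player 0 and player 1
    player = 0
    while sticks > 0:
        m = choose(sticks, eps)
        history[player].append((sticks, m))
        sticks -= m
        if sticks == 0:
            winner = player     # taking the last stick wins
        player = 1 - player
    for p in (0, 1):
        result = 1.0 if p == winner else 0.0
        for key in history[p]:
            old = Q.get(key, 0.5)
            Q[key] = old + alpha * (result - old)

# "Thousands and thousands of simulated games"
for _ in range(50000):
    play_game()

# The learned policy should leave the opponent a multiple of 4 sticks
best = max((1, 2, 3), key=lambda m: Q[(10, m)])
print("best first move from 10 sticks:", best)
```

After training, the greedy policy rediscovers the known winning strategy for this game (from 10 sticks, take 2, leaving a multiple of 4) without ever being told the rules of good play -- only the results of its own games.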

Google has been the first company to apply machine learning at a truly massive scale. It has built sophisticated algorithms that can interpret and understand many different kinds of input from data alone, categorizing and organizing them much as a human would. This is fundamentally different from how we have traditionally programmed computers: rather than specifying each task the computer must perform, we feed it large volumes of data and let it form its own understanding of what it is being asked to do. We no longer code the computer to do specific things; we let it make sense of the material itself. And we are already exposed to the byproducts of machine learning almost daily, in the form of personalized recommendations and advertising built from our online footprints. For example, about a year ago I was searching for a certain type of basketball shoe on eBay, and soon afterward I was being fed advertisements for that exact shoe every time I logged into Facebook. Eerie.
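The contrast between coding instructions and feeding data can be shown with the simplest possible learner: a nearest-neighbour classifier. This is not Google's algorithm -- just an illustration of the shift. We hand the program labeled examples rather than rules, and it classifies new inputs by analogy to what it has already seen:

```python
def nearest_neighbor(train, point):
    """Classify `point` with the label of its closest training example."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(train, key=lambda ex: dist2(ex[0], point))[1]

# Hypothetical labeled data: 2D points standing in for any kind of example
examples = [((1, 1), "A"), ((1, 2), "A"), ((5, 5), "B"), ((6, 5), "B")]

print(nearest_neighbor(examples, (2, 1)))  # → "A"
print(nearest_neighbor(examples, (5, 6)))  # → "B"
```

Notice that nowhere did we write a rule saying what makes something an "A" or a "B"; the program generalizes from the data itself, which is the essence of the shift the paragraph above describes.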

The difference between the present day and the 1950s, when machine learning was first conceived, is that the other tools needed to advance the technology have since been improved and refined. That progress comes from many different fields: vastly larger datasets, brain imaging and a better understanding of neural networks, and far more powerful processors. Together, these advances have enabled companies like Google to use sophisticated algorithms that interpret huge databases in a way that loosely resembles how the human brain learns.
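The brain-inspired learning mentioned above can be illustrated at miniature scale with a single artificial neuron. The sketch below is a standard textbook perceptron (not any company's system): it adjusts its weights from examples until it reproduces the logical AND function, a tiny instance of "learning from data" rather than being programmed:

```python
def step(x):
    """Threshold activation: the neuron fires (1) only if its input is positive."""
    return 1 if x > 0 else 0

def train_perceptron(data, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights toward each mistake's correction."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Labeled examples of the AND function -- the only "instruction" we give it
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)

preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in and_data]
print(preds)  # → [0, 0, 0, 1]
```

Modern deep learning stacks millions of such units in many layers and trains them with more sophisticated update rules, but the core loop -- predict, compare to data, adjust -- is the same.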

This has enabled computers to, in a sense, think. Think about that for a second. To illustrate the potential applications, IBM's Watson supercomputer has already read and analyzed hundreds of thousands of published scientific papers -- a task that would take even the most capable humans an extremely long and tedious effort. More striking still, after digesting all of these papers, Watson was able to generate new hypotheses and theories from what it had read, and the vast majority of those it produced held up when researchers examined them. It even determined that, in cancer, the cancerous cell itself is not the only important factor: the surrounding cells are critical in determining whether or not the cancer becomes active.

It only gets worse. Computers can now even see better than the human eye. In 2011, an algorithm was demonstrated in a competition to recognize traffic signs about twice as accurately as a human, and the capabilities keep improving with each passing day. By 2014, the best systems were down to roughly a 6% error rate at recognizing all kinds of images on the web -- approaching human performance -- while analyzing images far faster than any human could.

Oh, and computers can also interpret material we previously considered to require uniquely human sophistication: complex sentences, and even abstract qualities like humor, can now be understood at nearly human capacity, thanks to an algorithm developed at Stanford.

As you might expect by now, they can also write. A computer can take random images from the web and describe the content of those pictures in a coherent, understandable way, allowing previously unlabeled data to be labeled.


The Implications of This Technology's Future Impact

In a different post, I mentioned an idea Ray Kurzweil raised in one of his speeches: that we should not be apprehensive about technology displacing human jobs, because history is full of similar concerns about technology disrupting the economy, and each time new kinds of work have emerged from those breakthroughs that we could not previously have conceived.

However, I am increasingly coming to believe that this change may be fundamentally different from those of the past, and that there may be valid cause for concern. In his TED talk (link below), Jeremy Howard shows a graph of countries whose labor forces are primarily service-oriented. In the United States, as in many other developed countries, over 80% of the workforce provides services. And services are precisely what computers have just learned to do more effectively and efficiently than humans. Not only that: while human performance improves slowly and gradually, deep learning improves at an exponential rate. And that rate only accelerates -- as computers become more intelligent and capable, they will be able to build even better and more capable computers.

We may well be creating something that surpasses us in every aspect. Now is the time to think about how we will shift our perceptions of society and the economy to adapt to this impending change, and to begin planning a course of action that allows for a smooth transition. Time is the only true limiting factor.

Change is inevitable and constant, but that in no way means preparing for it is not essential -- because it is.


Larry Page -- "Where's Google going next?" 





Jeremy Howard: The wonderful and terrifying implications of computers that can learn



