November 2021
What could the next decade of AI look like?
Source: https://www.linkedin.com/pulse/artificial-intelligence-26-what-could-next-decade-ai-look-ajit-jaokar/
Ajit Jaokar, Course Director: Artificial Intelligence: Cloud and Edge Implementations, University of Oxford
This week, I also shared our vision on #digitaltwins at the MathWorks site: Digital Twins and the Evolution of Model-Based Design.
Last week, I discussed my misgivings about the idea of data-driven vs. model-driven AI. This week, we extend that discussion and ask a much broader question: "Will AI breakthroughs this decade be based on the last decade's AI developments, or will we see new directions of AI research in this decade?"
Undoubtedly, the last decade, 2010 to 2020, was a game-changer for AI research. It was based on the current deep learning models, characterised by parallelised networks of relatively simple neurons with non-linear activations that learn by adjusting the strengths of their connections. Using this model, we find the rules for a function f(x) that maps inputs x to outputs y, when those rules are hierarchical and the data is (typically) unstructured. To do this, we need a large number of labelled examples. The labels are at a higher level of abstraction (e.g., whether the image is a cat or a dog). The algorithm can then discern the features that comprise the object (e.g., a cat has fur, whiskers, etc.). This is the essence of deep learning, known as representation learning, and is common knowledge for data scientists.
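To make this concrete, below is a minimal sketch in PyTorch (my choice of framework; the data, shapes, and layer sizes are illustrative stand-ins, not taken from any system discussed here) of exactly this setup: a small network learns the mapping f(x) -> y from labelled examples alone.

    # Representation learning in miniature: the network derives the
    # rules mapping x to y purely from labelled examples.
    import torch
    import torch.nn as nn

    # Stand-in data: 256 samples of 784-dim inputs (e.g., flattened
    # 28x28 images), each labelled 0 ("cat") or 1 ("dog").
    X = torch.randn(256, 784)
    y = torch.randint(0, 2, (256,))

    # Hierarchical layers of simple neurons with non-linear activations:
    # earlier layers pick up low-level features (edges, textures), later
    # layers compose them into higher-level abstractions (fur, whiskers).
    model = nn.Sequential(
        nn.Linear(784, 128), nn.ReLU(),
        nn.Linear(128, 32), nn.ReLU(),
        nn.Linear(32, 2),              # logits for the two labels
    )

    optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Training adjusts the strengths of the connections (weights) so the
    # learned representation maps each x to its high-level label y.
    for _ in range(10):
        optimiser.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimiser.step()

Everything the model knows about cats and dogs is extracted from the labelled data; no rule is supplied by a human. That property is the crux of the argument that follows.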
Especially in the latter part of that decade, three developments accelerated this model:
a) Transformer-based models (a minimal sketch of the attention mechanism at their core follows this list),
b) Reinforcement learning, and
c) Generative adversarial networks.
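For the first of these, the core computational primitive is self-attention. The sketch below is my illustration, not any particular model: the sequence length, embedding size, and random projection matrices are all assumptions. It shows scaled dot-product attention, the mechanism that lets every token draw context from every other token.

    # Scaled dot-product attention, the building block of
    # transformer-based models.
    import torch
    import torch.nn.functional as F

    seq_len, d_model = 8, 64              # 8 tokens, 64-dim embeddings
    x = torch.randn(seq_len, d_model)     # stand-in token embeddings

    # Learned projections (random here) give queries, keys, and values.
    W_q, W_k, W_v = (torch.randn(d_model, d_model) for _ in range(3))
    Q, K, V = x @ W_q, x @ W_k, x @ W_v

    # Each token attends to all tokens; the softmax weights encode how
    # much context each token draws from the rest of the sequence.
    scores = Q @ K.T / (d_model ** 0.5)
    weights = F.softmax(scores, dim=-1)
    output = weights @ V                  # context-aware representations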
And they continue to amaze us daily. For example: DeepMind's AlphaFold, which predicts protein folding; DeepMind's meta-algorithm, "the one algorithm to rule them all", i.e., a deep learning model that can learn how to emulate any algorithm, generating an algorithm-equivalent model that can work with real-world data; DeepMind's AI that predicts almost precisely when and where it is going to rain; and now Megatron-Turing NLG from NVIDIA and Microsoft, a large language model that claims to exceed GPT-3.
Impressive as these are, all the examples above share a crucial property: all the intelligence (the rules) is derived from the data alone.
The other extreme is where the rules are symbolic, i.e., hand-crafted by humans. That approach characterised the early days of AI and ultimately led to the AI winter.
However, I believe that this decade will be all about techniques that can inject expert knowledge into the algorithm (which is not the same as the symbolic approach of AI's early days).
One such interesting case is some work by Max Welling. I have referred to this work before, but here I am referring specifically to the use of generative models to build counterfactual worlds. This could work where we have a problem domain with too many exceptions, i.e., a very long tail of situations that do not show up in the dataset used to model the problem.
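As a purely illustrative sketch of the idea, and emphatically not Welling's actual method: a small variational autoencoder learns a generative model of the data, and sampling its latent prior can then synthesise examples unlike anything in the training set, which are candidates for populating the long tail. All shapes and the training data below are stand-ins.

    # A tiny VAE as a generative model; samples from its latent prior
    # act as synthetic "counterfactual" examples for rare situations.
    import torch
    import torch.nn as nn

    class TinyVAE(nn.Module):
        def __init__(self, d_in=16, d_z=2):
            super().__init__()
            self.enc = nn.Linear(d_in, 2 * d_z)  # mean and log-variance
            self.dec = nn.Linear(d_z, d_in)

        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            return self.dec(z), mu, logvar

    X = torch.randn(512, 16)                 # stand-in for real data
    vae = TinyVAE()
    opt = torch.optim.Adam(vae.parameters(), lr=1e-3)

    for _ in range(200):
        recon, mu, logvar = vae(X)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        loss = (recon - X).pow(2).sum(-1).mean() + kl
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Decode draws from the latent prior: synthetic examples, including
    # some unlike anything observed, to augment the long tail.
    with torch.no_grad():
        counterfactuals = vae.dec(torch.randn(32, 2))

The design point is that the generative model, rather than the raw dataset, becomes the source of training examples for the rare cases, which is one way expert knowledge about plausible-but-unseen situations can be injected into the algorithm.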