
Reflections on a Decade of AI


Recently, I was asked to serve as General Co-Chair for the IEEE International Conference on Connected Vehicles (ICCVE). Founded a decade ago with academic roots, the 2022 edition extended beyond the academic model with a significant industry and regulatory emphasis and concurrent physical locations in Shanghai, Pune, and Munich. The decade-long backdrop of the conference, and the worldwide activity it assembled, provided an opportunity to reflect on progress beyond the noise of the tactical day-to-day headlines.

Where do we really stand with AI technology? How did we get here? The next few articles will offer some reflections on the past, present, and future of this technology.

Historically, the fundamental technology and approach behind “intelligence” has been a shifting paradigm. In the early days of AI, intelligence was derived through symbolic manipulation of objects in the era of “Symbolic AI.” Somewhat akin to the difference between algebra (symbols) and arithmetic (numbers), these systems had an engine that manipulated symbols until a goal was reached. Many interesting problems could be solved with this methodology, but only problems of limited size.

The next wave was the age of “Expert Systems,” in which a set of human-developed rules was crafted to solve a particular problem and an underlying engine sifted through the rules to reach a solution. The introduction of human-developed rules removed many of the size limitations of the previous approach, but it also created a heavy dependence on human-developed data. Both approaches significantly advanced the state of the art, but unfortunately both fell victim to the hype cycle, with “AI Winters” as the result.
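To make the rules-plus-engine pattern concrete, here is a minimal sketch of a forward-chaining rules engine in Python. The facts and rules are hypothetical illustrations, not drawn from any particular expert system.

```python
# Minimal sketch of an expert-system-style rules engine (illustrative only).
# Each rule pairs a human-written condition with a conclusion; the engine
# repeatedly sifts through the rules, adding conclusions until nothing new fires.

facts = {"engine_cranks": False, "battery_voltage_low": True}  # hypothetical inputs

rules = [
    (lambda f: f.get("battery_voltage_low"), "battery_is_weak"),
    (lambda f: f.get("battery_is_weak") and not f.get("engine_cranks"), "replace_battery"),
]

changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        if condition(facts) and not facts.get(conclusion):
            facts[conclusion] = True
            changed = True

print(facts)  # derived conclusions, e.g. "replace_battery": True
```

The intelligence here lives entirely in the human-written rules; the engine merely applies them until no new conclusions emerge, which is both the strength and the limitation of the approach.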

In the last decade, “intelligence” has been redefined again with terms such as deep learning. Recognizing the inherent limitation of a process that relies on human “programming,” deep learning is built around a “learning” paradigm. In this paradigm, there is a training period in which the AI machine “learns” from data to build its own rules. Learning is defined using traditional mathematical optimization algorithms that try to minimize some notion of error. Getting past the machinery, the thought process behind the model itself is fairly profound and somewhat unique.
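To illustrate the “learning as optimization” idea, here is a minimal sketch that fits a single parameter to toy data by repeatedly nudging it to reduce a squared-error measure. The data values, parameter name, and learning rate are illustrative assumptions, not taken from any specific system.

```python
# Minimal sketch of learning as error minimization (toy example, not a framework).
# Fit y = w * x by repeatedly adjusting w in the direction that reduces the error.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # hypothetical (x, y) pairs

w = 0.0             # the model's single "rule", learned rather than programmed
learning_rate = 0.01

for step in range(500):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad   # nudge w to reduce the error

print(f"learned w = {w:.2f}")   # close to 2, the slope implied by the data
```

The essential point is that the resulting “rule” (the value of w) is never written by a human; it emerges from the data and the error-minimization loop, which is exactly what distinguishes this paradigm from the earlier ones.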

The process of simply giving data to an engine and automatically developing an algorithm has the feel of a magic elixir. However, the technique does seem to “work” in areas such as vision and natural language processing, where conventional solutions have been difficult. This contrast between conventional deterministic computer science and machine learning was explored in the article “Is Machine Learning The Quantum Physics Of Computer Science?”, which draws an analogy to the relationship between conventional Einsteinian physics and quantum physics. As with quantum physics, a collective understanding of the power as well as the limitations of machine learning techniques is still being developed. Table 1 below summarizes some of the lessons learned after a decade of contrasting these two approaches to algorithm development.

After a decade, where do we stand?

ML algorithms are seen as the critical new tool that can enable the next wave of electronics-driven innovation. There are unquestioned successes in fields such as search, recommendation systems, and, increasingly, language processing. Significant investment has been expended in the area of autonomy, with some success. However, several open questions remain:

  1. Safety-Critical Systems: Described with words such as non-deterministic, not analyzable, and lacking theory, how can one build and, most importantly, verify safety-critical systems?
  2. Computational Complexity: If one cannot evaluate computational complexity, how can one be assured of converging on a solution? Said another way: will Tesla’s Dojo significantly move the state of the art, or is it a white elephant dependent on the wrong fundamental algorithm?
  3. Underlying Computational Fabric: Tesla’s Dojo and many other efforts have centered on the Nvidia computational model. Is this the right computational structure, or is it limiting progress? Are there interesting alternatives?

Over the last decade, the answers to some of these questions have been coming into focus; they will be explored in subsequent articles.



