
Automated Machine Learning (AutoML) Is Hot In AI, But Getting A Cooler Reception For Self-Driving Cars


Suppose you could develop an AI application without having to lift a finger.

To some degree, that is the goal of Automated Machine Learning, known as AutoML, which aims to automatically build a Machine Learning application on your behalf, requiring minimal by-hand effort on your part.

Just sit yourself down in front of a computer, make some selections on a few screens, and voila, out pops a Machine Learning app that does whatever it is you dreamed up.

Well, that’s the idea behind the AutoML movement, though please be aware that life is never that easy, so do not set your expectations quite that high if embarking upon using the latest and greatest in Automated Machine Learning.

Nonetheless, AutoML can still provide a lot of heavy lifting for those crafting an AI application, and serve as a kind of over-the-shoulder buddy that can double-check your work.

Let’s back up and consider what it takes to make use of Machine Learning tools, which are programs that essentially do pattern matching on data; you can then deploy those programs to do fieldwork as part of an overall AI system.

For those of you that have never tried to build an ML-based application, the closest that you might have come to doing the same thing would involve having used a statistical package to do a statistical analysis.

Perhaps in college, you had to do a multiple regression statistical run on data about the relationship between the heights of basketball players and their weights. The effort probably was not especially enjoyable, and you might remember having to collect a bunch of data, prepare the data for input, run the statistics package, interpret the results, and possibly do the whole thing over depending upon how the reports came out.
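
For the curious, here is a minimal sketch of that kind of regression run, using made-up height and weight numbers purely for illustration (the data and the single-predictor simplification are my assumptions, not from any real dataset):

```python
# Ordinary least squares for a single predictor: weight = slope * height + intercept.
# The heights (inches) and weights (pounds) below are hypothetical.

def least_squares(xs, ys):
    """Fit y = slope * x + intercept by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

heights = [70, 72, 75, 78, 80]       # hypothetical player heights
weights = [180, 190, 205, 220, 235]  # hypothetical player weights

slope, intercept = least_squares(heights, weights)
print(f"weight ~= {slope:.2f} * height + {intercept:.2f}")
```

The statistical package does this fitting (and much more) for you; the point is simply that data collection, preparation, fitting, and interpretation are distinct chores.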

That is a pretty good overall perspective on the steps taken to craft a modern-day Machine Learning application.

Indeed, anyone that has attempted to make use of today’s Machine Learning building tools is familiar with the difficulties associated with making an AI application that relies upon Machine Learning as a core element.

There are a series of steps that you customarily need to undertake.

The typical set of steps includes:

·        Identify the data that will be used for the ML training and testing

·        Ascertain the feature engineering aspects such as feature selection and extraction

·        Prepare the data so that it can be used by the ML tool

·        Do preliminary analyses of the data and get it ready for the ML effort

·        Choose an ML model that applies to the matter at hand, including neural networks and Deep Learning (DL)

·        Set up the hyperparameters associated with the ML model chosen

·        Use the ML model for initial training and inspect the results

·        Modify the hyperparameters as needed

·        Potentially reexamine the data in light of the ML model results

·        Rejigger the data and/or the ML model

·        Loop back to re-selecting the ML model if so needed

·        Undertake testing of the final ML model

·        Ready the ML for use and deployment

·        Over time make sure to monitor the ML and re-adjust

·        Other
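
The core of those steps can be sketched as a single train/tune/test loop. This is a deliberately toy illustration (synthetic one-feature data and a simple threshold "model" are my stand-ins, not a real ML pipeline):

```python
# A highly simplified sketch of the train/tune/test loop in the steps above,
# using synthetic (feature, label) data and a threshold "model" for illustration.

import random

random.seed(0)

# Identify and prepare the data (here: synthetic pairs where label = x > 0.5).
data = [(x, int(x > 0.5)) for x in (random.random() for _ in range(200))]
random.shuffle(data)
train, test = data[:150], data[150:]  # training vs. held-out testing split

def accuracy(threshold, rows):
    """Fraction of rows where the threshold model's prediction matches the label."""
    return sum(int(x > threshold) == y for x, y in rows) / len(rows)

# Choose a model family and sweep its "hyperparameter", keeping the best on training data.
candidates = [i / 20 for i in range(1, 20)]
best = max(candidates, key=lambda t: accuracy(t, train))

# Undertake testing of the final model on the held-out data.
print(f"best threshold {best:.2f}, test accuracy {accuracy(best, test):.2f}")
```

An AutoML tool essentially automates this sweep-and-evaluate loop across many model families at once, plus much of the data preparation surrounding it.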

If you skip a step, the odds are that your budding AI application is going to be a mess.

If you badly perform a step, the chances are that your aspiring AI application is going to be faulty.

Even if you do a good job of undertaking the prerequisite steps, you could inadvertently make a goof, perhaps forgetting to do something or doing the wrong thing by accident, and yet would have an AI application that might falsely seem to be okay on the surface though it has some rotten apples in its core.

With the ongoing rush toward pushing AI applications out-the-door as quickly as possible and doing so with great fanfare, the “developers” doing this kind of Machine Learning work are no longer the prior insider core that it once was.

It used to be that you had to have a strong AI and computer programming related background to do Machine Learning. Also, you likely had a hefty dose of statistics under your belt, and you were in many ways a Data Scientist, which is the newer terminology used to refer to someone that has expertise in tinkering with data.

Nowadays, just about anyone can claim to be a Machine Learning guru.

As mentioned, in many respects the ML technologies are akin to a statistical package that does pattern matching. In that sense, you usually do not need to develop raw code in an obtuse programming language. The main task involves running a package and making sure that you do so with some (hopefully) appropriate aplomb.

With larger and larger masses of people opting to toy with ML, the dangerous aspect is that they are wielding a jackhammer but do not know the proper way to use it.

Others around them might be clueless too, unaware that the person they have hired or sought out to make the ML is also clueless.

This leads to the scary potential that the resulting ML application will not be in suitable shape for real-world use, though no one along this chain of “makers” realizes they are doing things wrongly.

What can happen?

An AI application based on a sour or poorly crafted ML core can contain inherent biases (see my indication at this link here). Perhaps the AI app is intended to identify those that should be approved to get a car loan. It could be that the underlying ML pattern matching uses gender or race as a key factor in ascertaining whether the loan will be granted.
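
One of the most basic checks for that kind of bias is simply comparing approval rates across a sensitive attribute (often called demographic parity). The records below are entirely made up for illustration; a real audit would go much deeper:

```python
# A minimal sketch of a demographic-parity check on loan decisions.
# The records are hypothetical, purely to show the shape of the check.

loan_decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records, group):
    """Approval rate among records belonging to the given group."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = approval_rate(loan_decisions, "A")
rate_b = approval_rate(loan_decisions, "B")
gap = abs(rate_a - rate_b)
print(f"approval gap between groups: {gap:.2f}")  # a large gap warrants scrutiny
```

A gap by itself does not prove bias, but a developer who never runs even this kind of check will never notice the problem at all.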

You might be thinking that wouldn’t it be obvious that the AI app has such a foul underbelly?

The answer is no.

The biases might be deeply hidden within the guts of the ML portion.

It got in there because the “developer” of the ML app was not on the prowl to find such biases. It got in there too because the “developer” did not do sufficient testing. They did not do the needed data pre-screening. They did not do the expected assessment of which ML techniques would be the best fit, and so on.

In short, for many of today’s AI apps and the use of ML, it is the blind leading the blind.

Someone that does not properly know how to use ML is asked or paid to craft an ML-based application. Those making the request do not know how to judge that the ML is working prudently. In any case, deadlines must be met, and the AI app has to hit the ground quickly to keep up with the competition or to try and leapfrog those presumed lead-footed competitors not yet using AI.

In one sense, having an AutoML can provide handy-dandy guidance to those that are not especially versed in using ML. The AutoML does some crucial handholding and can offer keen advice about the data and the ML techniques being selected.

That’s good.

The unfortunate side of that coin is that it can encourage even more neophytes to take a blind shot at doing ML and further widen an already opened can of worms.

That’s bad.

Some argue that ML experts are essentially elite and that the use of AutoML will democratize the capability of leveraging Machine Learning. Rather than having ML capabilities only found within the hands of a few, the power of ML can be spread among experts and non-experts alike.

Historically, this same kind of debate has occurred in other facets of the computer field.

For example, writing code in conventional programming languages has always been subject to the same kind of expert versus non-expert criticisms. There have been numerous attempts at so-called fourth and fifth-generation programming languages, often indicated as 4GL and 5GL, trying to make programming easier for those that want to create applications.

Thus, this latest notion of putting something on top of Machine Learning tools to make things easier or more productive when using ML is not a wholly new idea or approach.

Those in the AI Ethics realm are worried that the ML add-ons that offer AutoML might undercut their call for being attentive to key principles underlying the stewardship of trustworthy AI.

The OECD has proffered these five foundational precepts as part of AI efforts:

1)     AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.

2)     AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.

3)     There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.

4)     AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.

5)     Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

Similarly, the Vatican has provided akin precepts and so has the U.S. DoD (see my discussion about the Vatican’s AI Ethics statement and the U.S. DoD AI Ethics statement at this link here).

Will the use of AutoML spur attention to those precepts, allowing those that are making ML-based apps the needed time and capabilities to do so, or will the pell-mell ad hoc use of AutoML simply allow people to dodge or forgo those precepts?

Time will tell.

Some fervently clamor that any AutoML worth its salt ought to be enforcing those kinds of AI Ethics precepts.

In other words, if the AutoML is “shallow” and just provides the surface-level accouterments to make ML applications, it is likely more dangerous than it is good, while if the AutoML embraces fully and implements added capabilities to provide insight for the AI Ethics precepts it is hopefully going to do more good than harm.

How far the AutoML offerings will go in trying to imbue and showcase the AI Ethics guidelines and suggest or even “enforce” them upon the end-users of AutoML is yet to be seen.

In any case, the presence of AutoML is opening wide the possibilities of utilizing Machine Learning in nearly any domain, encompassing the use of AI/ML for medical, healthcare, financial, real estate, retail, and agricultural purposes, among others.

At this juncture, the AutoML is still in its infancy and some would say that the ML apps being crafted via AutoML are more so prototypes and pilot efforts, rather than full-fledged and robust ones (this is arguable, of course, and some AutoML tools providers would readily disagree with such an assessment).

What about in a domain that has already received intense focus on the use of Machine Learning?

For example, the emergence of today’s state-of-the-art self-driving cars can be greatly attributed to advances already made in the crafting of AI and Machine Learning capabilities.

Here’s how AI/ML comes to play in self-driving cars.

When a self-driving car is driving down a street, the sensors on-board the car are collecting vast amounts of data from the cameras, radar, LIDAR, ultrasonic, thermal imaging, and the rest, and then using Machine Learning apps that have been forged to analyze the data trove in real-time. The AI driving the car then uses the ML-based interpretations to gauge what the street scene consists of. This in turn enables the AI to make choices about whether to start to use the brakes or perhaps instead hit the gas and what direction to steer the vehicle.
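
That sense-interpret-decide flow can be sketched at a drastically simplified level. Everything below (the detection structure, the distance threshold, the action names) is a hypothetical stand-in of my own; real driving stacks are vastly more complex:

```python
# A toy sketch of the sense -> interpret -> decide pipeline described above.
# This only illustrates the shape of the flow, not any real system.

from dataclasses import dataclass

@dataclass
class Detection:
    kind: str          # e.g. "pedestrian", "vehicle"
    distance_m: float  # estimated distance ahead, in meters

def interpret(sensor_frames):
    """Stand-in for the ML perception step: turn raw frames into detections."""
    # In a real system this would run trained models over camera/radar/LIDAR data.
    return [Detection(kind=f["kind"], distance_m=f["distance"]) for f in sensor_frames]

def decide(detections, safe_gap_m=30.0):
    """Stand-in for the planning step: pick a coarse action from detections."""
    if any(d.distance_m < safe_gap_m for d in detections):
        return "brake"
    return "maintain_speed"

frames = [{"kind": "pedestrian", "distance": 12.0}]
print(decide(interpret(frames)))  # -> brake
```

The ML lives in the `interpret` stage; the quality of everything downstream depends on how well that stage was trained.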

Without the existent advances in ML, we would not nearly be as far along in the advent of self-driving cars as we are today.

Consider this intriguing question: Will AI-based true self-driving cars be seeing much benefit from AutoML in the effort to craft AI/ML driving systems?

Let’s unpack the matter and see.

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered a Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at a Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out, see my indication at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, it is important that the public be forewarned about a disturbing aspect that’s been arising lately, namely that despite those human drivers that keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that the driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And AutoML

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

As earlier pointed out, the use of Machine Learning is a crucial element to the advent of self-driving cars.

Partially due to the maturity of using ML already, there is not yet much rapt attention going toward using AutoML for self-driving cars, at least not by those that have already made such advanced progress.

Why so?

The AutoML being provided today is usually suited more so for trying to explore a new domain that you’ve not previously tackled with ML. This can be very handy since you can use the AutoML to quickly try out a multitude of different ML models and parameter settings.
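
Conceptually, that search over a multitude of models and settings is just an enumerate-score-pick loop. The "models" below are trivial stand-ins of my own devising, meant only to show the shape of what an AutoML tool automates:

```python
# A bare-bones sketch of what an AutoML search does conceptually:
# enumerate candidate (model, settings) pairs, score each on held-out data,
# and report the winner. The models here are trivial illustrative stand-ins.

def make_threshold_model(t):
    return lambda x: int(x > t)

def make_band_model(lo, hi):
    return lambda x: int(lo < x < hi)

# Candidate search space: a label for each candidate plus the built model.
search_space = [
    ("threshold(0.3)", make_threshold_model(0.3)),
    ("threshold(0.5)", make_threshold_model(0.5)),
    ("band(0.4,0.9)", make_band_model(0.4, 0.9)),
]

# Hypothetical labeled validation data: label is 1 when x > 0.5.
validation = [(x / 10, int(x / 10 > 0.5)) for x in range(10)]

def score(model, rows):
    """Fraction of validation rows the model classifies correctly."""
    return sum(model(x) == y for x, y in rows) / len(rows)

best_name, best_model = max(search_space, key=lambda nm: score(nm[1], validation))
print(best_name, score(best_model, validation))
```

Commercial AutoML tools do this at a far grander scale, across real model families and hyperparameter grids, but the enumerate-and-compare logic is the same.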

For self-driving cars, much of that kind of work has already come and gone, and the crafting of ML has significantly evolved. At this juncture, the emphasis tends to be on pushing ML models to greater lengths. Unless you are starting a self-driving car effort from scratch, the AutoML of today is not going to buy you much.

That being said, some enterprising experts are reshaping AutoML to provide specific functions for particular domains. If you want to make an ML for a medical domain, for example, the AutoML will have a pre-specified approach already included for dealing with medical-related data and such.

Some are doing likewise by adding or detailing AutoML for self-driving car uses.

Whether this will be sought out by groups already well along with their self-driving car activity is still open to question.

It could be that the AutoML might be used for more ancillary aspects of self-driving cars. The primary focus of AI/ML is naturally on the driving task, but there are lots of other ways that self-driving cars are likely to use AI. One area that is still being figured out involves the interaction with riders or passengers that are inside a self-driving car.

Those with a much too narrow view are seemingly thinking that riders will merely state their desired destination and no other conversation with the in-car Natural Language Processing (NLP) will take place. I have repeatedly exhorted that this is nonsense in that riders are going to want to converse robustly with the AI driving system. Imagine being inside a self-driving car and the likelihood that you want the AI to take a particular shortcut that you know or prefer, or you want to have the AI pick up a friend that is a few blocks over, or you want to get a quick bite to eat by having the AI go to the drive-thru.
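
At its simplest, handling those kinds of rider requests boils down to mapping utterances to intents. The keyword matcher below is a toy of my own construction; real in-car NLP would use far more sophisticated language models:

```python
# A toy sketch of mapping rider utterances to intents, purely illustrative.
# The intent names and trigger phrases are hypothetical.

INTENTS = {
    "shortcut": ["shortcut", "faster way", "back way"],
    "pickup": ["pick up", "swing by", "get my friend"],
    "drive_thru": ["drive-thru", "drive thru", "grab food"],
}

def detect_intent(utterance):
    """Return the first intent whose trigger phrases appear in the utterance."""
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(p in text for p in phrases):
            return intent
    return "unknown"

print(detect_intent("Can you take the shortcut through the park?"))  # -> shortcut
print(detect_intent("Let's grab food at the drive-thru"))            # -> drive_thru
```

The hard part is not the matching itself but covering the open-ended variety of things riders will actually say, which is exactly where ML (and possibly AutoML) comes in.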

This is an aspect that can use AI/ML, and for which the AutoML might be of applicability.

Conclusion

Do you think that AutoML is going to be a boon for making available Machine Learning apps on a wider basis and improve our lives accordingly?

Or, are you of the mind that AutoML is a Pandora’s box that is going to allow every knucklehead to generate a Machine Learning app and swamp us with ill-advised, ill-prepared AI apps that eat our lunch?

Those that are versed in ML are already eyeing AutoML with concerted qualms, worried that the potential dumbing down of ML is going to be an adverse slippery slope; meanwhile, they welcome well-crafted AutoML that can bolster professional work on Machine Learning.

In these days of being worried about AI putting people out of a job, you might be thinking that some of the AI/ML experts are perhaps furtively worried that AutoML is going to put them out of a job. So far, that does not seem to be the case, and the worry generally is that those without the proper training and mindset are going to poison the societal elation for ML by churning out garbage ML with the ease of AutoML.

We could see the surge of excitement about ML suddenly shift into Machine Learning being the scourge of AI and needing to be banned. That’s decidedly not an outcome that it seems anyone wants, though if you see AutoML as having Frankenstein-like potential, there is certainly a chance of wanton desolation and we should be keeping careful watch for any such onset.

That’s up to us humans to do.
