Back in the days when “Bubbles,” the liquid-cooled Cray 2, was the fastest supercomputer in the world and before LISP was the programming language of choice in Marvin Minsky’s new AI Lab at the Massachusetts Institute of Technology, the pioneers of artificial intelligence (AI) had lofty goals.
They believed that AI would eventually give machines the same thinking capabilities as humans. While some question whether machines will ever be able to think in exactly the same way as humans, machine learning and other AI techniques already enable machines to assist humans with complex tasks such as forecasting and reduce the need for people to undertake trivial repetitive tasks.
Endowing computers with human-like intelligence has been the holy grail of computer experts since the dawn of electronic computing. Although the term artificial intelligence was not coined until 1956, the roots of the field go back to at least the 1940s, and the idea of AI was crystalized in Alan Turing’s famous 1950 paper, “Computing Machinery and Intelligence.”
In the 1980s these ideas spawned a boom in AI research and a raft of first-generation AI companies, including many specializing in so-called ‘expert systems.’ Within a decade, however, funding for AI research had mostly dried up, in part because expert systems didn’t deliver on their overhyped promise, but also because ‘general’ AI research problems proved more difficult than anticipated, and others proved insurmountable with the technologies available at the time.
Despite this, the leading business application vendors of the time, including SAP, Oracle, and Siebel, integrated some of the techniques developed for special-purpose “expert” systems, such as ‘rule engines,’ into their business application suites, making it easier to incorporate complex, volatile, and critical business logic.
This more pragmatic, customer-centric approach to AI became even more apparent in the wake of the victory of IBM’s chess-playing computer Deep Blue over Garry Kasparov in 1997. When research in AI began to accelerate again at the turn of the century, researchers focused more on sub-problems of AI such as machine learning and the application of AI to real-world problems such as image recognition and medical diagnosis.
Progress since then has been driven by two main factors: the advent of low-cost, high-performance GPUs (graphics processing units), coupled with the expanded availability of big data, which provides the raw material for dramatically improved machine learning approaches and algorithms. Machine learning enables computers to spot patterns and relationships by analyzing large quantities of data without being explicitly programmed.
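The idea of learning patterns from data rather than programming explicit rules can be illustrated with a minimal sketch: a nearest-centroid classifier. The feature values and labels below are invented for illustration; no real dataset is implied.

```python
# Minimal illustration of learning from examples instead of hand-written
# rules: a nearest-centroid classifier built from toy, made-up data.

def train_centroids(samples):
    """Compute the mean feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Toy "training data": [height_cm, weight_kg] for two invented classes.
data = [([30.0, 4.0], "cat"), ([33.0, 5.0], "cat"),
        ([60.0, 25.0], "dog"), ([65.0, 30.0], "dog")]
model = train_centroids(data)
print(predict(model, [32.0, 4.5]))  # → cat
```

Nothing in the code encodes what a “cat” looks like; the decision boundary is derived entirely from the examples, which is the essence of the approach.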
Narrow AI (Narrow Artificial Intelligence)
In recent years, machines have surpassed humans in the performance of certain specific tasks, or “narrow” AI tasks, such as some aspects of image recognition. For example, humans have a five percent error rate in one popular image recognition challenge, but the best AI result has improved from a 26 percent error rate in 2011 to 3.5 percent in 2015.
“Remarkable progress has been made on narrow AI, which addresses specific application areas such as playing strategic games, language translation, self-driving vehicles, and image recognition,” said the Preparing for the Future of Artificial Intelligence report issued by the U.S. Administration earlier this month.
As the report, prepared by the National Science and Technology Council’s (NSTC’s) Subcommittee on Machine Learning and Artificial Intelligence, added: “Narrow AI underpins many commercial services such as trip planning, shopper recommendation systems, and ad targeting, and is finding important applications in medical diagnosis, education, and scientific research.”
Simultaneously, industry has been increasing its investment in AI with Microsoft, Salesforce and Google all recently announcing major AI and machine learning initiatives. As Sundar Pichai, Google’s Chief Executive Officer, said recently, “Machine learning is a core, transformative way by which we’re rethinking how we’re doing everything. We are thoughtfully applying it across all our products, be it search, ads, YouTube, or Play. And we’re in early days, but you will see us—in a systematic way— apply machine learning in all these areas.”
SAP has also invested heavily in AI research, especially machine learning, and has made it a strategic priority for the company, with plans to make all enterprise applications ‘intelligent’ and widely available by building machine learning capabilities into the SAP HANA Cloud Platform.
Driven by the need to help customers navigate the transition to becoming digital businesses, SAP sees significant opportunities to use machine learning, big data analytics, and other AI techniques to help enterprise customers make better, faster, and more informed decisions.
For example, the latest SAP SuccessFactors software uses machine learning to detect bias in the recruitment process. Machine learning can also significantly reduce the time needed to shortlist the best CVs for a particular job, and it works in both directions: identifying the best candidates for a given job and the best job position for a given candidate.
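The two-way nature of that matching can be sketched with one symmetric similarity score, here Jaccard overlap between skill sets. The skill sets and names are invented, and a real system would use far richer features (experience, free-text analysis, and so on); this is only a hypothetical illustration.

```python
# Hedged sketch of two-way candidate/job matching via skill-set overlap
# (Jaccard similarity). All names and skill sets are invented.

def jaccard(a, b):
    """Overlap between two skill sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_candidates(job_skills, candidates):
    """Best candidates for a given job, highest overlap first."""
    return sorted(candidates, key=lambda c: jaccard(job_skills, c[1]),
                  reverse=True)

def rank_jobs(candidate_skills, jobs):
    """Best jobs for a given candidate: the same score, roles reversed."""
    return sorted(jobs, key=lambda j: jaccard(candidate_skills, j[1]),
                  reverse=True)

jobs = [("data engineer", {"sql", "python", "spark"}),
        ("web developer", {"javascript", "css", "html"})]
candidates = [("A", {"python", "sql", "excel"}),
              ("B", {"javascript", "html", "photoshop"})]

print(rank_candidates(jobs[0][1], candidates)[0][0])  # → A
print(rank_jobs(candidates[1][1], jobs)[0][0])        # → web developer
```

Because one score drives both rankings, ranking candidates for a job and jobs for a candidate are literally the same computation with the arguments swapped.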
Only 44 percent of executives feel that their company is managing sales effectively, and only 22 percent of companies are able to make accurate sales predictions. SAP is using machine learning to predict which opportunities will close and to recommend the best possible actions for sales reps to move a deal forward, based on data in SAP Hybris Cloud for Customer and unstructured text from emails and the Web.
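One common way to score deals like this is logistic regression: fit weights to historical won/lost outcomes so the model outputs a win probability for each open opportunity. The sketch below is a toy version trained by gradient descent; the features, data, and scaling are invented, and this is not SAP's actual model.

```python
# Toy win-probability scoring for sales opportunities via logistic
# regression with gradient descent. All data and features are invented.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(rows, labels, lr=0.5, epochs=1000):
    """Fit weights and a bias so sigmoid(w·x + b) tracks the win rate."""
    n = len(rows[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = sigmoid(sum(w[i] * x[i] for i in range(n)) + b)
            err = p - y                      # gradient of log loss
            for i in range(n):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def win_probability(model, x):
    w, b = model
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Invented features per opportunity, scaled to [0, 1]:
# [customer engagement score, recency of last contact].
X = [[0.9, 0.9], [0.8, 0.8], [0.2, 0.2], [0.1, 0.1]]
y = [1, 1, 0, 0]                             # 1 = won, 0 = lost
model = train(X, y)
hot, cold = win_probability(model, [0.85, 0.9]), win_probability(model, [0.15, 0.1])
print(hot > 0.5 > cold)  # → True
```

The probability itself, not just the ranking, is what makes such a score useful for recommending next actions: reps can focus effort where a deal is winnable but not yet won.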
Other practical examples of the application of AI/ML include matching payments to invoices and managing social media. Manually matching payments to invoices is one of the most labor-intensive processes in accounting, but machine learning can significantly increase automatic matching rates. Similarly, social media community managers and support agents are often overwhelmed by posting volumes from Twitter and Facebook. Using machine learning, incoming posts can be automatically tagged and clustered, and responses suggested.
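The payment-matching case can be sketched with a simple scoring rule: rate each (payment, invoice) pair on amount agreement and fuzzy similarity between the payment reference and the invoice number, then accept the best pair above a threshold. The field names, weights, and threshold below are invented for illustration; production systems learn such scores from historical clearing data rather than hard-coding them.

```python
# Hedged sketch of automatic payment-to-invoice matching. Weights,
# threshold, and record layouts are invented for this illustration.
from difflib import SequenceMatcher

def match_score(payment, invoice):
    """Combine exact amount agreement with fuzzy reference similarity."""
    amount_ok = abs(payment["amount"] - invoice["amount"]) < 0.01
    text_sim = SequenceMatcher(None, payment["reference"].lower(),
                               invoice["number"].lower()).ratio()
    return (1.0 if amount_ok else 0.0) * 0.6 + text_sim * 0.4

def best_match(payment, invoices, threshold=0.7):
    """Return the best-scoring invoice, or None if no match is confident."""
    top = max(invoices, key=lambda inv: match_score(payment, inv))
    return top if match_score(payment, top) >= threshold else None

invoices = [{"number": "INV-2016-0042", "amount": 1250.00},
            {"number": "INV-2016-0043", "amount": 980.50}]
payment = {"reference": "payment for inv-2016-0042", "amount": 1250.00}
print(best_match(payment, invoices)["number"])  # → INV-2016-0042
```

The threshold matters as much as the score: a confident match clears automatically, while anything below it falls back to a human accountant, which is how higher automatic matching rates coexist with low error rates.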
Such examples underscore that, while it is very unlikely that machines will exhibit broadly applicable intelligence comparable to or exceeding that of humans in the next 20 years, rapid progress in the field of specialized AI will continue, with machines reaching and exceeding human performance on an increasing number of tasks. Not your mother’s AI perhaps, but still pretty exciting.