Will Machines Take Over Finance?



A lot of “knowledge work” today is really just the routine application of a set of rules, with some ability to identify situational errors and exceptions.

The (smart) machines are coming.

I’ve been interested in artificial intelligence (AI) – the concept that humans can design a machine that thinks for itself without being given explicit instructions (or a “program”) — since the early 1970s. As an undergraduate math major, I wrote a program designed to learn to play the game of Monopoly (checkers was too easy, chess was too hard). The program “knew” the objective, the layout of the board, the rules of play, and the content of the three stacks of cards that drive the game. It could handle up to 8 players.

It wasn’t very smart, but it had one great advantage — it could play thousands of games against itself, remember the outcomes, and analyze why it had won (someone always wins) or lost, using some simple algorithms. After a few weeks it was virtually unbeatable against one or more humans. I had stumbled onto a primitive form of “machine learning.”
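
That original program is long gone, but the core loop is easy to illustrate. Here is a minimal sketch in Python (a hypothetical reconstruction of the idea, not the original code) applied to a much simpler game: the “subtraction game,” where players alternately take 1 to 3 sticks and whoever takes the last stick wins.

```python
import random
from collections import defaultdict

# A minimal sketch of the self-play idea described above: play many games
# against yourself, remember the outcomes, and prefer the moves that won.
# All names and parameters here are illustrative, not the original program.

wins = defaultdict(int)   # (sticks_remaining, move) -> games won after this move
plays = defaultdict(int)  # (sticks_remaining, move) -> games where move was tried

def choose_move(sticks, explore=0.1):
    """Pick a move: mostly the best-scoring one so far, occasionally random."""
    legal = list(range(1, min(3, sticks) + 1))
    if random.random() < explore:
        return random.choice(legal)
    return max(legal, key=lambda m: wins[(sticks, m)] / (plays[(sticks, m)] or 1))

def self_play_game(start=21):
    """Play one game against itself; return each player's moves and the winner."""
    history = {0: [], 1: []}
    sticks, player = start, 0
    while sticks > 0:
        move = choose_move(sticks)
        history[player].append((sticks, move))
        sticks -= move
        player = 1 - player
    return history, 1 - player  # the player who took the last stick wins

for _ in range(20_000):  # "thousands of games against itself"
    history, winner = self_play_game()
    for player, moves in history.items():
        for state_move in moves:
            plays[state_move] += 1
            if player == winner:
                wins[state_move] += 1

# After enough self-play the learner tends to discover the winning strategy:
# always leave the opponent a multiple of 4 sticks.
print(choose_move(21, explore=0.0))  # expected: 1 (leaving 20)
```

The program never learns “why” multiples of four matter; it simply accumulates statistics about which moves led to wins, which is essentially what the Monopoly program was doing at a much larger scale.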

As my sequence of careers developed, I monitored the development of various (and varied) strands of AI and looked for additional opportunities to apply the field’s expanding set of ideas and techniques to real-world problems: medical diagnosis in the 1980s; corporate tax compliance in the 1990s; automatic code generation, fraud detection, and credit risk assessment in the 2000s; commercial insurance underwriting in this decade. Some things worked really well; some did not. It wasn’t always clear why, and in every case the “experts” hated the idea that they could be “replaced” by software, even when there was no intention to do so.

Meanwhile the AI field went through several cycles of interest and advances. In 1997, IBM’s Deep Blue started winning championship-level chess games; there was a lot of excitement (and some public interest) but relatively little commercialization of the capability. In 2011, IBM’s Watson beat human champions at Jeopardy; more public interest and, this time, some progress in commercialization, although as yet Watson seems to be better at ingesting information and answering questions based on its data store (finding existing needles in haystacks) than at creating genuinely new insights. It’s very knowledgeable and very fast, but it doesn’t “think” – at least not yet – and it’s definitely not “conscious.” Calling what Watson does “cognitive computing” is probably generous.

More recently, Siri, Cortana, and Alexa have “consumerized” the idea that humans can interact with software in an “intelligent” way. AI applied to the user experience is getting quite good, although humans are still doing the majority of the “thinking.”

Then in 2016, Alphabet’s (Google’s) DeepMind unit (a “moonshot” effort that Google acquired for about $400 million in 2014) announced that it had used “machine learning” (which is still a major thread of AI research) to develop a computer capable of defeating the European champion at Go, the ancient Chinese game of strategy. Go is a much more complex “problem” than chess, even though both games are, in principle, exhaustively computable — with enough computing power you could play every possible game and simply choose winning sequences from the universe of all possible games. Go’s universe is much larger than chess’s, and no existing or planned computational capacity (with the possible exception of quantum computing, should it ever be realized) can solve either game that way, so the software has to be “smart” and apply effective strategies to a smaller sequence of moves, always trying to improve its position relative to its opponent.
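
A quick back-of-envelope calculation shows why exhaustive search is hopeless. Using commonly cited rough figures (about 35 legal moves per chess position over roughly 80 plies, versus about 250 moves over roughly 150 plies for Go), the game trees compare like this; the numbers are order-of-magnitude estimates, not exact counts:

```python
# Order-of-magnitude game-tree sizes from commonly cited rough figures:
# chess: ~35 legal moves per position, ~80 plies per game
# Go:    ~250 legal moves per position, ~150 plies per game
import math

chess_log10 = 80 * math.log10(35)    # log10 of 35**80
go_log10 = 150 * math.log10(250)     # log10 of 250**150

print(f"chess game tree: roughly 10^{chess_log10:.0f}")  # ~10^124
print(f"Go game tree:    roughly 10^{go_log10:.0f}")     # ~10^360

# For scale, the observable universe contains on the order of 10^80 atoms,
# so brute force is out of reach for both games, and absurdly so for Go.
```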

Alphabet intends that its program (called AlphaGo — the details of how it works were published in Nature in January 2016) will now play a higher-ranked Go champion sometime in March (the initial win was against “only” the 633rd-ranked player in the world — impressive, but not yet beating the best). If AlphaGo can continue to win against better and better human experts (and eventually against better software systems), another cognitive milestone will have been achieved with AI, and it will have been reached roughly a decade faster than the leading AI researchers had generally predicted.

In this case, the progress is unexpected enough that nobody, including the research team that achieved it, knows quite what it means. This isn’t a criticism of Alphabet’s efforts; rather, turning what seems to be a technology breakthrough in a very narrow domain (Go) into a more general-purpose tool can be very hard, if not impossible, because a useful outcome requires other problems to be solved and the results combined (it took roughly 80 years to get from the discovery of semiconductor effects to the invention of the transistor). It’s likely that DeepMind’s technology will be tested outside of board games, in areas like climate forecasting, medical diagnostics, and economics, which share some of the same characteristics. That will, it is hoped, illustrate what else we need to solve to get useful answers.

All of which raises some interesting questions about the future of many kinds of work. A lot of “knowledge work” today is really just the routine application of a set of rules, with some ability to identify situational errors (and fix them if possible) or exceptions (circumstances the rules can’t handle and which need to be routed to better qualified people to address). We’re getting close to the point where the various forms of AI can take over the more routine aspects of knowledge work — and in many cases do a better job of identifying (if not yet fixing or handling) errors and exceptions.
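
To make “routine knowledge work” concrete, here is a minimal sketch in Python of the pattern just described: apply the rules, fix the errors you can, and route the exceptions to a better-qualified human. It is illustrative only; the claim categories, thresholds, and rule names are all made up.

```python
from dataclasses import dataclass

# A toy "knowledge work" pipeline: routine rule application, with automatic
# error correction where possible and exception routing where not.
# Every rule and threshold below is hypothetical.

@dataclass
class Claim:
    amount: float
    category: str

def process(claim: Claim) -> str:
    # Situational error: fixable automatically, so fix it and continue.
    if claim.amount < 0:
        claim.amount = abs(claim.amount)  # e.g., a sign-entry mistake

    # Exception: the rules don't cover this; route to a qualified person.
    if claim.category not in {"auto", "home", "health"}:
        return "routed to human specialist"

    # Routine rule application: the bulk of the "knowledge work."
    return "approved" if claim.amount <= 10_000 else "escalated for review"

print(process(Claim(amount=-250.0, category="auto")))    # approved
print(process(Claim(amount=50_000.0, category="home")))  # escalated for review
print(process(Claim(amount=100.0, category="marine")))   # routed to human specialist
```

Everything the rules settle cleanly is a candidate for automation; the remaining human value concentrates in the branches the rules cannot handle.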

The corporate world’s endless drive for productivity will discover and adopt much of this sooner or later, looking to replace knowledge workers with software wherever it can. There will still be a need both for “relationship” or “interaction” skills that, so far, only humans can demonstrate, and for exception handlers and error-correction “specialists,” but a lot of other “routine” jobs could be endangered by the relentless advance of software-mediated self-service and task automation.

While this makes microeconomic sense, the macroeconomic impacts need more thought. What careers remain as software expands its reach? How many people are really needed to run a business (or a society)? How do you train people to be experts when all the easier tasks are done by software? We can’t all be STEM superstars, or even domain experts with deep experience when there is no straightforward way to gain that experience. For a glimpse of the dystopian future this could portend, try reading Kurt Vonnegut’s “Player Piano.”

Also look out for Tom Davenport and Julia Kirby’s forthcoming book, “Only Humans Need Apply: Winners and Losers in the Age of Smart Machines,” from Harper Collins. [Full disclosure: Both Tom and Julia are ex-colleagues of mine, but I get nothing from recommending their work.] Whether you agree or disagree with these ideas, AI in some form is coming. It will probably show up first as “augmentation” rather than automation (as in the IBM Watson TV ads: helping you do better rather than replacing you). However, like almost every step in the commercialization of research, each advance builds toward a very different future. We may not be able to see exactly what that future looks like, but we should definitely be paying attention.

Source: John Parkinson
