The Singularity, Be Damned. Machines Should Replace Humans ASAP.


I have no idea what’s taking so long. There are all kinds of areas where machines already perform better than humans – and lots more in the queue, if we only gave them a chance to show us what they can do. Why are we so scared of smart machines? Why aren’t we screaming to be replaced? Or do we actually enjoy doing taxes, buying insurance, being misdiagnosed, losing and filing paperwork – everything? Or is it ultimately about the jobs that would be eliminated?

Let’s Stipulate 

First, let’s stipulate that not every problem can be solved with AI and machine learning, though the line blurs more every day. But let’s also stipulate that repetitive problems in well-bounded areas are tailor-made for automation. Income taxes, for example, are a perfect candidate:

“Machine learning (ML) is killing all sorts of processes and entire business models.  Areas like tax planning, preparation, reporting and documentation that are well-bounded and deductive – what those in AI call ‘narrow AI’ – are ready for supervised machine learning.  What’s taking so long?  Everything’s already digital.  Your personal taxes require preparation, submission and calculation, all pretty easy for smart machines (that don’t need to be all that smart to do your taxes).”
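To see just how well-bounded and deductive that problem is, here is a minimal sketch of the core of personal tax preparation – the bracket arithmetic. The brackets and rates below are invented for illustration; they are not any real tax code:

```python
# Minimal sketch of the deductive core of tax preparation.
# Brackets and rates are hypothetical, for illustration only.
BRACKETS = [
    (0, 10_000, 0.10),            # (lower bound, upper bound, rate)
    (10_000, 40_000, 0.20),
    (40_000, float("inf"), 0.30),
]

def income_tax(taxable_income: float) -> float:
    """Progressive tax: each slice of income is taxed at its bracket's rate."""
    tax = 0.0
    for lower, upper, rate in BRACKETS:
        if taxable_income > lower:
            tax += (min(taxable_income, upper) - lower) * rate
    return tax

print(income_tax(50_000))  # 10k*0.10 + 30k*0.20 + 10k*0.30 = 10000.0
```

If the whole task reduces to table lookups and arithmetic like this, a machine doesn’t need to be very smart at all to do it – which is the point.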

Help Wanted

So what can machines do better than humans?  

Healthcare is ready and able, but not so willing.  How about this?

“Researchers at an Oxford hospital have developed artificial intelligence (AI) that can diagnose scans for heart disease and lung cancer.  The systems will save billions of pounds by enabling the diseases to be picked up much earlier.  The heart disease technology will start to be available to NHS hospitals for free this summer.  The government's healthcare tsar, Sir John Bell, has told BBC News that AI could ‘save the NHS’.”

Or this?

“Technology developed using artificial intelligence (AI) could identify people at high risk of a fatal heart attack at least five years before it strikes, according to new research funded by the British Heart Foundation (BHF).  The findings are being presented at the European Society of Cardiology (ESC) Congress in Paris and published in the European Heart Journal.”

There’s so much more here – medical imaging, diagnosis, drug discovery platforms, radiation treatment planning and genomics, among other areas. Humans should select and deploy the applications with the greatest impact and lowest cost – now.
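That selection step is itself simple enough to automate. A toy sketch of impact-per-cost triage – the application names are from the list above, but the impact and cost scores are entirely invented:

```python
# Hypothetical impact/cost triage for AI deployment candidates.
# Scores (1-10 impact, relative cost units) are invented for illustration.
candidates = {
    "medical imaging":    {"impact": 9, "cost": 2},
    "drug discovery":     {"impact": 8, "cost": 7},
    "radiation planning": {"impact": 6, "cost": 2},
    "genomics":           {"impact": 7, "cost": 5},
}

def prioritize(candidates: dict) -> list[str]:
    """Rank candidates by impact per unit of cost, highest first."""
    return sorted(candidates,
                  key=lambda c: candidates[c]["impact"] / candidates[c]["cost"],
                  reverse=True)

print(prioritize(candidates))
# ['medical imaging', 'radiation planning', 'genomics', 'drug discovery']
```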

The legal profession?

“Twenty US-trained lawyers, with decades of legal experience ranging from law firms to corporations, were asked to issue-spot legal issues in five standard NDAs. They competed against a LawGeex AI system that has been developed for three years and trained on tens of thousands of contracts.  The research was conducted with input from academics, data scientists, and legal and machine-learning experts, and was overseen by an independent consultant and lawyer ... following extensive testing, the LawGeex Artificial Intelligence achieved an average 94% accuracy rate, ahead of the lawyers who achieved an average rate of 85%.”
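The task itself is easy to caricature in code. A real system like LawGeex is trained on tens of thousands of contracts; the toy sketch below just flags clauses by keyword, purely to show what “issue-spotting in a well-bounded domain” means. The issue names and keywords are invented, not taken from any real system:

```python
# Toy NDA issue-spotter: flags clauses by keyword matching.
# The issue catalog is invented; a production system is ML-trained.
ISSUES = {
    "non-compete": ["non-compete", "not compete"],
    "perpetual term": ["in perpetuity", "perpetual"],
    "one-way obligation": ["sole obligation of the receiving party"],
}

def spot_issues(contract_text: str) -> list[str]:
    """Return the names of issues whose keywords appear in the contract."""
    text = contract_text.lower()
    return [issue for issue, keywords in ISSUES.items()
            if any(k in text for k in keywords)]

nda = "Receiving Party shall hold the information in perpetuity and shall not compete..."
print(spot_issues(nda))  # ['non-compete', 'perpetual term']
```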

Finance?

“A research team from the University of Erlangen-Nuremberg in Germany has developed a number of algorithms that use historical data from markets to replicate real-time investments.  One of the models allowed for a 73% return on investment annually from 1992 to 2015, taking into account transaction costs. This compares with a real market return of 9% per year.  Profits were particularly high during the market shocks of 2000 (a 545% yield) and 2008 (a 681% yield), which proved the increased efficiency of quantitative algorithms during periods of high volatility, when emotions dominate the markets.”
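The mechanics behind results like these are a backtest: replay historical prices through a trading rule and compound the returns. Here is a minimal sketch with a naive momentum rule and made-up prices – nothing like the Erlangen-Nuremberg models, whose details the quote doesn’t specify:

```python
# Minimal backtest sketch: naive momentum rule over a made-up price series.
# Hold the asset only on days that follow an up day; otherwise sit in cash.
prices = [100, 102, 101, 105, 110, 108, 112]

def backtest_momentum(prices: list[float]) -> float:
    """Compound return of holding only after an up day. Ignores costs."""
    equity = 1.0
    for i in range(2, len(prices)):
        if prices[i - 1] > prices[i - 2]:        # yesterday was an up day
            equity *= prices[i] / prices[i - 1]  # hold today
    return equity - 1.0                          # total return

print(f"{backtest_momentum(prices):.1%}")
```

Real research adds transaction costs, out-of-sample validation and safeguards against overfitting – which is exactly where most spectacular backtested returns quietly evaporate.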

War?

Well, we already know that drones are the weapons of choice in many situations. We also know that robots are being groomed to replace human soldiers, and that tactical and strategic planning will be automated. You name it.

Other tasks?

How about menial tasks – customer service, data entry, reading, translation, phone management, manufacturing, retail, security, employee onboarding and all forms of transportation (note that the Los Angeles Times predicts that “self-driving trucks could replace 1.7 million American truckers in the next ten years”)? It’s not just “likely” but necessary that machines solve our expertise and employment problems.

Why – & Why Not?

In many areas – not every area – machines are smarter than humans. They’re also cost-effective – they don’t talk back or take vacations – and they’re fine with danger. Mostly good, right? (We could argue the ethics of autonomous weaponry all day, because drones can and do miss their targets.)

So, what’s wrong? What’s slowing all this down? When we look at performance data where machines clearly outperform humans, we must wonder why deployment has been so slow – even avoided. As the tax filing example suggests, a multi-billion-dollar tax preparation industry stands in the way of automation that many other countries already enjoy. There are also legitimate testing and validation reasons for cautious deployment, as well as legal issues, such as liability when autonomous vehicles malfunction. But perhaps the biggest problem is found in the title of this post: we’re not nearly as desperate for help as we should be, and we’re not obsessed with replacing ourselves with machines that are smarter, faster, cheaper and better than we are. Instead, we’re worried and threatened. But in time the machines will win. We all know this.

Postscript:  The Singularity

What about all the fuss about “the singularity” (loosely defined as follows)?

“The technological singularity – or simply the singularity – is a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization.  According to the most popular version of the singularity hypothesis, called intelligence explosion, an upgradable intelligent agent will eventually enter a ‘runaway reaction’ of self-improvement cycles, each new and more intelligent generation appearing more and more rapidly, causing an ‘explosion’ in intelligence and resulting in a powerful superintelligence that qualitatively far surpasses all human intelligence.”

Predictions about the arrival of this event vary, of course, but I have another perspective, one that essentially welcomes super-intelligence into our world. Is it horrific and threatening when the best doctors are machines, when machines provide universal healthcare (in the US), or when drug discoveries and genomic research are enabled by machines? Of course these “machines” can be abused, but if the singularitarians are right, the machines will self-correct – as superior intellects do – and avoid some of the major disasters that have plagued humans for centuries. Without getting overly philosophical, why shouldn’t humans welcome and partner with expanded intelligence regardless of its form? Especially when, according to subscribers to the singularity hypothesis, we have no choice. So until the machine intelligence explosion occurs, perhaps we should worry more about the explosions that have killed millions of humans over the years, and look to smart machines to help us live longer and better lives. If they can free us from tedium, help us stay healthier and propel us toward a safer future, I don’t think we should fear them – at least not until they become as lethal as the “intelligence” we face today.
