We've Got It Backwards: The Future Needs to Be More Human (and Less Machine)
Toby Ruckert
Founder and CEO, Unified Inbox Pte. Ltd. (UIB)
For the past several decades, we have created computer programming languages and pushed entire generations across the globe into becoming engineers and learning how to code.
And the result?
We have succeeded beyond our wildest dreams (we’ve created machines that can now learn on their own), and we have failed beyond our worst nightmares (we’ve created “black box” Artificial Intelligence (AI) which we don’t — and can’t — understand).
It’s time for us to rethink the future we’re so effectively creating. I’m concerned by this trend of pouring our mental energy into machines, rather than teaching them to understand us.
And I’m not alone.
Industry and thought leaders across a wide array of disciplines have advocated for having machines understand our languages, instead of having people understand machines’ languages. This argument has recently moved beyond code. The uncharted new territory of machine learning has made it virtually impossible to understand why machines make the decisions they do.
How machines talk to each other is worth a deeper discussion, especially since Google’s Neural Machine Translation system (https://techcrunch.com/2016/11/22/googles-ai-translation-tool-seems-to-have-invented-its-own-secret-internal-language/) went live last year, and more recently, Microsoft researchers discovered that their AI has invented its own language (https://www.newscientist.com/article/mg23331144-500-ai-learns-to-write-its-own-code-by-stealing-from-other-programs/) to write code, a language it could use to talk to machines, and to other AIs, without us even understanding it. Ultimately, this means technology can work out certain decisions by itself.
Skynet is already here.
If you're not concerned about AI safety, you should be. Vastly more risk than North Korea.
— Elon Musk (@elonmusk) August 12, 2017
Another worry of various technology leaders is that we are rapidly letting these intelligent machines assume responsibility for our daily lives. Driving our cars, screening our health, monitoring our children, pampering our pets: these machines are not just evolving our relationships with technology but changing culture and society, and our place in it. In the case of self-driving cars, for instance, insurers may decide that human drivers pose an unacceptably high risk compared to autonomous cars, trucks, and trains (http://mashable.com/2017/06/16/french-driverless-trains).
Right now, two of the most popular digital apps use Machine Learning to make suggestions. Facebook uses Machine Learning to decide for us what news we see (https://qz.com/913114/bill-gates-says-filter-bubbles-are-a-serious-problem-with-news/) — and don’t see (https://www.theguardian.com/technology/2017/may/16/facebook-fake-news-tools-not-working) — in our feeds. And Google Photos uses Machine Learning to identify people in photos.
And then there are MADCOMs (https://medium.com/artificial-intelligence-policy-laws-and-ethics/artificial-intelligence-chatbots-will-overwhelm-human-speech-online-the-rise-of-madcoms-e007818f31a1), or machine-driven communications tools; you may be more familiar with the term chatbots. These AI-driven bots are quickly coming to outnumber the humans communicating online. They browse the internet to understand people by autonomously gathering data; you have likely seen them in the form of AI-created advertising. With hundreds of thousands of chatbots already active, we can expect to soon see a world where machines are no longer learning from human content, but from other machine-generated content.
These trends will not only affect how we think, communicate, and see ourselves and each other; they will also limit our ability to learn and grow, as the selection, variety, and exposure of content come to be driven by machine culture rather than human culture.
At the same time, machines have moved beyond performing a fixed set of operations and tasks into the more subjective field of “art.”
The art of music is complicated enough for humans, who have debated tastes, styles, and what “makes good music tick” for centuries. And while it’s never going to be a Mozart, we already have AI that creates music on its own (https://futurism.com/the-worlds-first-album-composed-and-produced-by-an-ai-has-been-unveiled/), and many people actually seem to like it. All of this will affect the very fabric of our society, its arts and culture, even more than the internet and social media already have.
Smart devices and AI are increasingly taking a critical role in our lives. It’s up to us to decide whether we want our future to be more human or more machine. What makes us human are our languages, and the cultures and thinking patterns that come with them.
Therefore, it is my wish that, for this and future generations, we not just teach ourselves to code or to understand machines, but rather focus on teaching ourselves how to think, and on teaching our machines to learn and to understand our culture better.
About the Author
Perpetual inventor, classical pianist, and successful serial entrepreneur Toby Ruckert is the Founder and CEO of leading intelligent Internet of Things (IoT) messaging company Unified Inbox Pte. Ltd. (UIB).
Born in Germany, Toby studied music in Stuttgart while founding two IT companies. In 2003, he immigrated to Waiheke Island in New Zealand before moving to Singapore in 2014 where he now lives. Toby is passionate about leveraging technology to empower people to regain control of their digital lives. He believes new innovative solutions are needed to help us to realize the importance of our attention and then manage it to achieve our goals and reach our full potential.
Toby is a popular speaker at leading technology conferences around the world on topics including IoT, Artificial Intelligence, entrepreneurship, business philosophy and creating a culture of innovation.