September 2012
When Worse is Better
For the internet of things, it is essential to keep your systems' interfaces small enough that they can be truly understood.
Toby Considine |
A common theme has popped up this week in several internet groups I
watch, for XML development, for ontology, and for SCADA security, in
discussions asking when a worse solution is better than a good one.
The XML groups are interested in the interfaces between systems, that
is, in the information models that accompany distributed
architectures. The ontology groups look to identify the meaning in
system information, and to ensure that the same meaning is applied
everywhere. The SCADA security groups are pondering how to make systems
that allow only the right people to do the right things.
I would call it humility vs. arrogance, but those words are unfairly
loaded. As we create the internet of things, in which most systems may
interact with dozens or even hundreds of other systems, the engineered
world, the world of buildings and systems and sensors, will have to
come to terms with this dichotomy.
Systems in the internet of things are always resource constrained. They
may demand constraints on power use to stay inside a battery budget.
They may require constraints on memory or processing to stay inside a
hardware budget. One way or another, the developer or engineer is
rewarded for precise understanding, precise specification, and precise
control. Those who build these systems naturally try to extend these
same approaches to the world of interconnected systems.
The conversation arose on an XML list because XML is used to connect
systems. There is no point in putting XML inside a small system that
communicates only with itself. The purpose of XML is to exchange
understandable messages between systems. This requires that the
messages be minimal. If the message is as complex as the entire system,
then it can only be understood, fully understood, by someone who
understands that system as well as its designer.
In the internet of things, a single device, or type of device, may
communicate with thousands of other systems. If every one of those must
fully understand that device, then by symmetry, that small device must
fully understand all of them. In the internet of things, devices may
last for a decade or more, which means that they will interact
with systems not yet invented when they are deployed, and must understand
them as well.
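A minimal sketch of such an exchange (the device and element names here are hypothetical, invented for illustration): the message carries only the few facts a partner system needs, not the device's internal model.

    # Hypothetical minimal status message for a small device.
    import xml.etree.ElementTree as ET

    def build_status_message(device_id: str, temperature_c: float) -> bytes:
        # Deliberately small: an identity and one reading, nothing else.
        root = ET.Element("Status")
        ET.SubElement(root, "DeviceId").text = device_id
        ET.SubElement(root, "TemperatureC").text = f"{temperature_c:.1f}"
        return ET.tostring(root, encoding="utf-8")

    def read_status_message(payload: bytes) -> dict:
        # A partner system needs to understand only these two fields.
        root = ET.fromstring(payload)
        return {"device_id": root.findtext("DeviceId"),
                "temperature_c": float(root.findtext("TemperatureC"))}

    message = build_status_message("zone-3-thermostat", 21.4)
    print(message.decode())
    print(read_status_message(message))

A system written a decade from now can still read such a message without knowing anything else about the device that sent it.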
Ontologists strive for completeness, to establish common meaning across
many systems. Consistent ontology is the key to consistent business
rules, and consistent policy across an enterprise or across an
industry. A well-made ontology defines how systems can interact, and
establishes consistency across complex multi-participant environments.
Yet each new definition can be a barrier to entry, preventing a new
value proposition or a new system from joining. With fewer users, an
ontology is less useful, and its adoption less compelling. An ontology
must be useful enough for people to want to break any barrier, so wide
adoption and simplicity are essential.
Such problems can only be managed by understanding less, by creating
the smallest information exchange you can devise, and then making it
smaller. Many partners and a long lifetime create a huge burden of
diversity. Such diversity can only be managed by hiding it. If this is
a worse interface, and a worse interaction, then worse is better.
Security for engineered systems presents similar dilemmas. Many systems
are not secured at all, but those that are carefully list
scenarios and sequences and create rules. A typical rule states “only
authorized people are allowed to invoke sequence 35”. The SCADA
security designer must define not only sequences, but sequences of
sequences, and secure each of them. This approach can recurse without
limit, and the engineering time available to define it all will not grow to match.
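As a minimal sketch (the roles and sequence names are hypothetical, invented for illustration), a per-sequence authorization rule is easy enough to write; the dilemma is that every new sequence, and every meaningful combination of sequences, needs its own entry, so the table never stops growing.

    # Hypothetical per-sequence authorization table.
    AUTHORIZED = {
        "sequence_35": {"operator", "supervisor"},
        "sequence_36": {"supervisor"},
        # ... one entry per sequence, and per combination worth securing
    }

    def may_invoke(role: str, sequence: str) -> bool:
        # Allow only roles explicitly listed for that sequence.
        return role in AUTHORIZED.get(sequence, set())

    print(may_invoke("operator", "sequence_35"))   # True
    print(may_invoke("operator", "sequence_36"))   # False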
Combinations of sequences create different security problems than do
the identified sequences. "If an infinite number of monkeys on infinite
SCADA control consoles execute an infinite set of sequences then they
will create an indeterminate number of failure scenarios". I remember
one of the first executive information systems, an advanced
product of Digital Equipment Corp in the early 80s. The development
team boasted that it allowed managers to query comparative information
without ever being able to find personally identifiable information. At
the internal roll-out, a clever manager walked up and queried, "What is
the average salary of female VPs?" (There was one.) So much for "no
unauthorized individual information."
In the famed Aurora demonstration, simple routine controls were
repeatedly executed in ways that caused a large dynamo to destroy
itself and rip itself off its mountings. The systems susceptible to the
attack functioned exactly as expected. One interpretation is that the
system exposed too much
information, and too many controls. If given many options, bad guys can
always figure out one more combination, one never anticipated by the
designer. It is far better to give them fewer choices, and a smaller
interface, one so small that the designing engineer really can
anticipate all scenarios.
By analogy, we could talk about the flexibility of C++, and how it led to buffer overruns, but that is another discussion.
System access should be limited to actions, rather than to low-level
control. Then, with a limited, simple interface, the system should
notice and record when the SCADA monkeys start trying unusual sequences
of sequences. This would bring attention to Aurora scenarios, and
possibly to Stuxnet scenarios. It is, however, a different level of
monitoring: noticing changes in patterns of routine, legal events, and
recognizing that the system is moving into unanticipated territory.
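A minimal sketch of that idea (the actions and the baseline of "normal" pairs are hypothetical, invented for illustration): the interface exposes only a few named actions, and it flags legal actions that arrive in an order not seen in routine operation.

    from collections import deque

    # Hypothetical action-level interface for a pump controller.
    ALLOWED_ACTIONS = {"start", "stop", "report_status"}

    # Pairs of consecutive actions observed in normal operation (made up here).
    NORMAL_PAIRS = {("start", "report_status"), ("report_status", "stop"),
                    ("stop", "start"), ("report_status", "report_status")}

    class PumpInterface:
        def __init__(self):
            self.recent = deque(maxlen=2)

        def invoke(self, action: str) -> str:
            if action not in ALLOWED_ACTIONS:
                return f"rejected: '{action}' is not an exposed action"
            self.recent.append(action)
            if len(self.recent) == 2 and tuple(self.recent) not in NORMAL_PAIRS:
                # Legal actions in an unusual order: record them for a human to review.
                print(f"ALERT: unusual sequence {tuple(self.recent)}")
            return f"executed: {action}"

    pump = PumpInterface()
    for action in ["start", "report_status", "stop", "start", "stop", "start", "stop"]:
        print(pump.invoke(action))

Rapid start-stop cycling of the kind used in the Aurora demonstration shows up here as repeated alerts, even though every individual action is legal.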
There is a legend about a Chinese emperor who loved maps. He loved maps
so much that he had his subjects build a scale model of his kingdom at
a scale of one inch to the mile. The map was beautiful. Tiny little
mountains towered over tiny little rivers. The emperor enjoyed it so
much that he ordered another map; his entire kingdom at a scale of one
foot to the mile. The map took years to complete, and was an absolute
wonder. The rivers of China were represented by rivers of liquid
mercury, running slowly through delicately sculpted countryside. Each
tiny little house was visible on the map, carved by skilled sculptors
of cameos and jade. The map nearly bankrupted the kingdom.
The map filled an entire island in the river that ran beside the
imperial city. The emperor spent months walking on the island,
observing his realm. But of course, it was not enough. Soon he
conceived the idea that he could have a map that was better still… if
only he could see the people of China, each one inch tall. And so
he ordered another map.
The emperor’s ministers counseled him that the new map would bankrupt
the country. The emperor would hear no opposition. Surely the newer,
more detailed map would be even more wonderful than the one that he
already had. He ordered his workers to proceed.
Within two weeks the emperor was dead, seized by a sudden illness. Some
say that his son and heir had poisoned him, but there was never any
proof. He died because he could not recognize when his solution was
good enough.
For the internet of things, it is essential to keep your systems'
interfaces small enough that they can be truly understood. Expose few
enough functions that you really can define the security scenarios. And
even then, every system that exposes an interface should know enough to
know when it is being asked to do the unusual.