September 2022
Toby Considine
https://www.linkedin.com/in/tobyconsidine/
http://automatedbuildings.com/editors/tconsidine.htm
Actors for Buildings
Fifty-five years ago, we chose the wrong path. Small code that could run on small CPUs was recognized as a better approach than large, probably single-threaded code. These small elements of code could be proven correct, unlike the large programs that characterized the mainframe era. The software industry reached a fork in the road and, led by the siren song of Intel CPUs, took the branch that kept the mainframe model of programming.
Nowhere was software hurt more by this than in building systems and IoT. These systems have a natural cadence built around independent actors for each mechanical subsystem, communicating in a service mesh. The lure of the large program you wrote last year being twice as fast next year due to Moore's Law, even without rewriting, was too enticing.
Somewhere around a decade ago, Moore's Law hit the wall. We all pretend that it did not, based on renaming the old CPUs as "cores" and packing a number of them onto a chip. A well-threaded program can take advantage of all these cores, and a few well-known utility programs do. If you look at the per-core performance on a laptop, it can look like a well-balanced threaded system. But too many of those cores are busy running consumer apps, a waste of processing power on a control system, giving us only the illusion of computational density. There is a better way.
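The claim above, that a well-threaded program can take advantage of every core, can be sketched in a few lines. This is a minimal illustration using Python's standard multiprocessing module; the per-zone workload is a hypothetical stand-in, not a real building computation.

```python
# A minimal sketch of spreading CPU-bound work across every core,
# using Python's standard multiprocessing module.
from multiprocessing import Pool, cpu_count

def simulate_zone(zone_id):
    # Stand-in for a per-subsystem computation (hypothetical workload).
    total = 0
    for i in range(100_000):
        total += (zone_id * i) % 7
    return zone_id, total

if __name__ == "__main__":
    # One worker per core; each zone's work runs in parallel.
    with Pool(processes=cpu_count()) as pool:
        results = pool.map(simulate_zone, range(cpu_count()))
    for zone_id, total in results:
        print(f"zone {zone_id}: {total}")
```

A program structured this way scales with the core count instead of idling all but one core.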
Software in the cloud has been moving toward swarms of smaller bits of code for some time. These actors are independent and can be arranged as needed to meet business purposes and adapt to changing requirements without a rewrite. This style of programming is known as cloud-native computing (https://www.cncf.io/). Properly done, it increases computational density for any system you have. It need not live only in "the cloud".
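The actor style described above can be sketched very simply: each actor owns a mailbox and reacts to messages on its own thread of control. The names here (Actor, the handler signature, the AHU example) are illustrative assumptions, not any particular framework's API.

```python
# A minimal sketch of the actor style: each actor owns a mailbox and
# processes messages independently on its own thread.
import queue
import threading

class Actor:
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def send(self, msg):
        # Other actors interact only by dropping messages in the mailbox.
        self.mailbox.put(msg)

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg is None:          # poison pill shuts the actor down
                break
            self.handler(msg)

    def stop(self):
        self.mailbox.put(None)
        self.thread.join()

# Each mechanical subsystem gets its own actor; because they share no
# state, they can be rearranged or replaced without a rewrite.
readings = []
ahu = Actor(lambda msg: readings.append(("AHU", msg)))
ahu.send({"supply_temp_c": 13.5})
ahu.stop()
print(readings)   # [('AHU', {'supply_temp_c': 13.5})]
```

The key design point is that the message channel, not a shared memory model, is the only coupling between subsystems.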
Industry thought leaders such as Alper Uzmeller have long advocated a more object-oriented approach to building controls. Actors have the independence and interoperability that objects promise, while offering better manageability.
A simulation actor can be subscribed as a digital twin to the same message channel as the live system. This lets one continuously compare the results of the actual system to those of the twin, whether for predictive maintenance or to detect cyberphysical security breaches. Small AI or ML actors can watch both twins continuously to create new insights.
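A watcher of the kind described above can be sketched as a small comparison actor. The point names, tolerance, and on_pair callback here are illustrative assumptions, not any specific product's API; in practice the matched readings would arrive over the shared message channel.

```python
# A sketch of a watcher actor subscribed to both the live system and
# its digital twin, flagging divergence that could indicate a fault or
# a cyberphysical intrusion. Names and thresholds are illustrative.
def agree(live, twin, tolerance):
    """Return True when live and twin readings agree within tolerance."""
    return abs(live - twin) <= tolerance

class TwinWatcher:
    def __init__(self, tolerance=0.5):
        self.tolerance = tolerance
        self.alerts = []

    def on_pair(self, point, live, twin):
        # Called each time matched readings arrive on the shared channel.
        if not agree(live, twin, self.tolerance):
            self.alerts.append((point, live, twin))

watcher = TwinWatcher(tolerance=0.5)
watcher.on_pair("chw_supply_temp", live=6.7, twin=6.5)   # agrees, no alert
watcher.on_pair("chw_supply_temp", live=9.9, twin=6.5)   # diverges, alert
print(watcher.alerts)   # [('chw_supply_temp', 9.9, 6.5)]
```

Because the watcher is itself just another subscriber, adding it requires no change to the live control code.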
For now, this cloud-native computing style is mostly confined to the big cloud, although premises that run their own on-site cloud can use the same code there. The cloud is moving to smaller and smaller virtual machines for the actors (https://dapr.io/). Soon, not yet, but soon, real-time actors will be assigned their own cores to inhabit in multi-core CPUs on-site. The processing density of these multi-core systems will skyrocket.
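Assigning an actor its own core is already possible today on Linux through CPU affinity. This is a minimal sketch, assuming a Linux host where os.sched_setaffinity is available; the core number is an illustrative choice, and a real deployment would also isolate the core from the general scheduler.

```python
# A sketch of pinning a real-time actor process to one dedicated core.
# Assumes Linux, where os.sched_setaffinity is available.
import os

def pin_to_core(core):
    # Restrict the calling process to a single CPU, so the actor has
    # that core to itself (subject to what else the OS schedules there).
    os.sched_setaffinity(0, {core})

if hasattr(os, "sched_setaffinity"):
    pin_to_core(0)
    print(os.sched_getaffinity(0))   # the allowed-CPU set, now {0}
```

The same idea, one actor per core with no time-slicing against neighbors, is what would let on-site multi-core controllers reach the processing density described above.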