Argumentation-based Coordination in IoT: a Speaking Objects Proof-of-Concept


Stefano Mariani

DISMI, Università di Modena e Reggio Emilia


IDCS 2019 - Napoli, Italy - 10/10/2019

Outline

  1. Why argumentation-based coordination in IoT?
  2. The Speaking Objects vision
  3. Real world prototype
    • PoC deployment
    • Feasibility

IoT systems today

Common designs treat devices as simple providers of services:

  • sensing services => raw data
  • actuating services => commands

Likewise, most designs adopt a centralised, cloud-based perspective:

  1. raw sensor data is collected at a central control point
  2. there it is analysed to feed decision-making algorithms
  3. finally, actuator commands are generated and sent

IoT systems tomorrow

Devices becoming smarter by embedding AI algorithms:

  • sensors: raw data => situations => state of affairs
  • actuators: commands => rules => goals

Deployments moving to Fog/Edge, massive scale:

  • distributed sensing and control
  • decentralised coordination
  • local actions => global effects

Paradigm shift

Coordination => arguing about current and future “state of affairs”:

  • sensors => "speaking" objects
  • actuators => "hearing" objects
  • protocols => argumentation frameworks
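
A minimal sketch of the kind of framework meant here: a Dung-style abstract argumentation framework, with acceptance computed under grounded semantics. The Python encoding and the argument names are illustrative assumptions, not the PoC implementation.

    # Dung-style abstract argumentation: a set of arguments + an attack relation.
    def grounded_extension(arguments, attacks):
        """Iterate the characteristic function from the empty set: an argument
        is accepted once every one of its attackers is counter-attacked by an
        already accepted argument."""
        accepted = set()
        while True:
            new = {a for a in arguments
                   if all(any((c, b) in attacks for c in accepted)
                          for b in arguments if (b, a) in attacks)}
            if new == accepted:
                return accepted
            accepted = new

    # Generic example: c is unattacked, so it defends b against a.
    print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("c", "a")}))
    # -> {'b', 'c'} (set printing order may vary)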

Speaking Objects: core idea

The core idea behind the Speaking Objects vision

Speaking Objects: coordination

Interaction is the focal point (vs. Smart Objects)

  • Speaking-to-Speaking => shared understanding
  • Speaking-to-Hearing => planning
  • Hearing-to-Hearing => joint deliberation

Dialogue types to frame conversational coordination
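
One way to make dialogue types concrete in code: the type inventory below follows Walton and Krabbe's classical classification, while the message structure and its field names are assumptions made for illustration.

    from dataclasses import dataclass
    from enum import Enum

    class DialogueType(Enum):
        # Walton & Krabbe's classification, widely used in agent dialogues
        INFORMATION_SEEKING = "information-seeking"
        INQUIRY = "inquiry"            # Speaking-to-Speaking: shared understanding
        DELIBERATION = "deliberation"  # Hearing-to-Hearing: deciding what to do
        PERSUASION = "persuasion"
        NEGOTIATION = "negotiation"

    @dataclass
    class Utterance:
        sender: str             # device id, e.g. "noise-sensor-1"
        dialogue: DialogueType
        move: str               # e.g. "claim", "why", "concede"
        content: str            # the state of affairs argued about

    u = Utterance("noise-sensor-1", DialogueType.INQUIRY, "claim",
                  "indoor_noise_above_threshold")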

Speaking Objects: example

An exemplary Speaking Objects smart home scenario

From theory to practice

  • Target scenarios:
    • detect home invasion, act accordingly
    • detect energy saving opportunities, act accordingly
  • Requirements:
    1. affordable devices, easy acquisition => scalability, time to market
    2. distributed deployment => scalability, faithful to Speaking Objects vision
    3. p2p interaction => scalability, faithful to Speaking Objects vision
    4. basic argumentation => coordination, faithful to Speaking Objects vision

Issues: context vs. commonsense

Contextual knowledge is:

  • highly dynamic
  • situated in time and space
  • tied to specific goals and functions
  • e.g. data streams from sensors

Commonsense knowledge is:

  • quasi-static
  • non situated
  • general purpose
  • e.g. relative concepts such as short vs. tall, hot vs. cold
  • e.g. basic cause => effect laws such as light on => illumination higher

Where to put it? Context at the Edge vs. commonsense in the Cloud?
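
The contrast, rendered as data; fact and rule names are made up for the sake of the example.

    import time

    # Contextual knowledge: a dynamic, situated stream of observations.
    context = [
        {"sensor": "indoor-noise", "value": 78, "unit": "dB",
         "where": "living-room", "when": time.time()},
    ]

    # Commonsense knowledge: quasi-static, non-situated, general purpose.
    commonsense = {
        ("light", "on"): ("illumination", "higher"),  # cause => effect law
        "hot": lambda celsius: celsius > 28,          # relative concept
    }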

Issues: abstraction gap

Goal-orientation vs. commands:

  • commands => no processing => cheap Edge devices
  • reactive rules => little processing => Edge/Fog devices?
  • planning => medium processing => Edge/Fog devices + AI?
  • goals => heavy processing => Cloud only?

Agents at the Edge?

Situations vs. perceptions:

  • raw data => no processing => cheap Edge devices
  • aggregate information => little processing => Edge/Fog devices?
  • information fusion => medium processing => Edge/Fog devices + AI?
  • situation recognition => heavy processing => Cloud only?

Machine learning at the Edge?
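
For the "little processing" tiers, weak agency can be as simple as condition-action rules. A sketch of what fits on an ESP-class device (plain Python here, portable to MicroPython); thresholds and percept names are assumed for illustration.

    RULES = [
        # (condition over the current percepts, command to emit)
        (lambda p: p["noise_db"] > 70 and p["night"], "query_accelerometer"),
        (lambda p: p["lux"] < 50 and p["person_present"], "lights_on"),
    ]

    def react(percepts):
        """Purely reactive step: fire every matching rule; no planning, no goals."""
        return [cmd for cond, cmd in RULES if cond(percepts)]

    print(react({"noise_db": 78, "night": True,
                 "lux": 120, "person_present": False}))
    # -> ['query_accelerometer']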

PoC: architecture

Speaking Objects proof-of-concept architecture

  • Cloud (laptop PC):
    arbitration,
    goal setting

  • Fog (Arduino / ESP):
    embedded AI,
    arguments generation,
    planning

  • Edge (sensors / actuators):
    raw data,
    commands
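
A sketch of the Fog tier's "arguments generation" step, lifting raw Edge readings into a claim that carries its own support; the data structure is an assumption, not the PoC's exact encoding.

    def generate_argument(readings, threshold=70):
        """Turn raw sensor readings into a claim plus explicit support."""
        evidence = [r for r in readings if r["db"] > threshold]
        if evidence:
            return {"claim": "indoor_noise_above_threshold",
                    "support": evidence}   # the evidence travels with the claim
        return None

    print(generate_argument([{"sensor": "mic-1", "db": 78}]))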

PoC: feasibility

List of devices deployed to realise the PoC

ESP modules are the key enablers of p2p interaction:

  • Wi-Fi connection to the local network for any device
  • basic data processing and rule-based reasoning
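
A sketch of the p2p exchange over the local network. The slides name no protocol, so broker-based MQTT (paho-mqtt 1.x API), the broker address and the topic layout are all assumptions.

    import json
    import paho.mqtt.client as mqtt

    BROKER = "192.168.1.10"             # hypothetical local broker address
    TOPIC = "speaking-objects/claims"   # hypothetical topic layout

    def on_message(client, userdata, msg):
        claim = json.loads(msg.payload)
        print("peer claim:", claim["argument"], "from", claim["device"])

    client = mqtt.Client()              # paho-mqtt 1.x style constructor
    client.on_message = on_message
    client.connect(BROKER)
    client.subscribe(TOPIC)
    client.publish(TOPIC, json.dumps({"device": "esp-noise-1",
                                      "argument": "indoor_noise_above_threshold"}))
    client.loop_forever()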

Home invasion

  1. Night-time, window closed
  2. Indoor noise > threshold => ask accelerometer
  3. Force > threshold => ask humidity
  4. Humidity + Outdoor noise > threshold => thunderstorm

Argument no_intrusion has stronger support than intrusion
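
The four steps replayed as an argument graph, reusing grounded_extension from the earlier sketch; the attack structure is a plausible reading of the slide, not the PoC's exact encoding.

    arguments = {"intrusion", "no_intrusion", "thunderstorm"}
    attacks = {
        ("intrusion", "no_intrusion"),  # noise + force back an intrusion claim
        ("no_intrusion", "intrusion"),  # the two conclusions conflict
        ("thunderstorm", "intrusion"),  # humidity + outdoor noise explain it away
    }
    print(grounded_extension(arguments, attacks))
    # -> {'thunderstorm', 'no_intrusion'}: no_intrusion prevails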

Energy saving

Final argumentation graph in the energy saving scenario

  • person + sunset => lights on
  • TV on + preference => lights off
  • no movement => asleep
  • asleep => lights off


Rules are commonsense, perceptions are context

logic facts + ConceptNet knowledge base
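
A sketch of pulling cause => effect edges from the public ConceptNet 5 REST API; the endpoint and parameters follow the documented API, while the chosen concept is just an example and may return few edges.

    import requests

    resp = requests.get("http://api.conceptnet.io/query",
                        params={"start": "/c/en/turn_on_light",
                                "rel": "/r/Causes", "limit": 5}).json()
    for edge in resp["edges"]:
        print(edge["start"]["label"], "=>", edge["end"]["label"])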

Lessons learnt

  • Key technologies ready for Speaking Objects
  • Unclear where / how to deploy commonsense knowledge
  • Weak agency feasible at the Edge / Fog
  • Need AI benchmarks assessing where the line between Edge and Cloud is

Thanks

for your attention



Questions?


Stefano Mariani

Università di Modena e Reggio Emilia