eHealth

The eHealth use case infers patient outcomes from their data. Sensing and data vector preparation are done by Healthentia, an eClinical platform used for collecting data in decentralized clinical trials. Inference on patient outcomes is a new service of the platform, part of its transformation towards a Digital Therapeutics (DTx) solution.

The traditional cloud-based implementation of online inference in Healthentia is an inference service exposing a single API endpoint, which expects the input vectors to infer upon and the name of the model to be used. The use case demonstrates an alternative approach using a PHYSICS function deployed from a Node-RED flow. The function expects exactly the same parameters as the API endpoint. At the heart of the function is a Python script that performs the inference.
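The actual Healthentia inference script is not reproduced here; the following is a minimal sketch of what such a function body could look like, assuming an OpenWhisk-style main(params) entry point and hypothetical parameter names (model, vectors) mirroring the API contract described above.

```python
# Minimal sketch of an OpenWhisk-style inference action (hypothetical names,
# not the actual Healthentia script).
import pickle

MODEL_DIR = "/models"   # assumed location of serialized models
_model_cache = {}       # reused across warm executions of the function

def _load_model(name):
    """Load a serialized model by name, caching it for warm invocations."""
    if name not in _model_cache:
        with open(f"{MODEL_DIR}/{name}.pkl", "rb") as f:
            _model_cache[name] = pickle.load(f)
    return _model_cache[name]

def main(params):
    """Entry point: expects the model name and the input vectors to infer upon."""
    model_name = params["model"]
    vectors = params["vectors"]                    # list of input vectors
    model = _load_model(model_name)
    predictions = model.predict(vectors).tolist()  # scikit-learn-style interface assumed
    return {"model": model_name, "predictions": predictions}
```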

Utilised RAMP Artefacts
  • OW Skeleton Interface for Node-RED flows as functions
  • Request Aggregator
  • Branch Join
  • Split Join Pattern

Figure: Inference flow for the eHealth use case, as created and tested using the PHYSICS Design Environment.

Benefits

The transformation to a FaaS architecture enables more dynamic, real-time applications, allowing customers to obtain arbitrary estimations on demand. In the traditional approach, data are collected every 4 hours and inference is performed on the whole batch, so an interested party has to wait until the results for the entire batch are ready. In the new model, predictions can be launched and managed on demand with far fewer resources than a traditional threadpool server, while exploiting core features of a FaaS platform such as queue-based load levelling and warm executions. This can lead to a more responsive and interactive application for end users.
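As an illustration only, an on-demand prediction could be requested with a single HTTP call to the function's web endpoint; the URL and payload field names below are hypothetical and simply mirror the parameters described above (model name plus input vectors).

```python
# Illustrative on-demand invocation of the inference function (hypothetical endpoint).
import requests

FUNCTION_URL = "https://example.org/api/v1/web/guest/healthentia/infer"  # placeholder URL

payload = {
    "model": "outcome-predictor",        # hypothetical model name
    "vectors": [[72, 6.5, 5400, 0.83]],  # a single input vector, requested on demand
}

response = requests.post(FUNCTION_URL, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["predictions"])
```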

Furthermore, the solution benefits from:

  • Abstraction from the specifics of creating, compiling, deploying and executing a function
  • Two resource optimizations over the traditional approach (sketched after this list):
      • Large arrays of input vectors are automatically split and processed for inference by different instances of the function.
      • Small arrays of input data are automatically concatenated together for a single function invocation.
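In PHYSICS these optimizations are applied automatically by the platform (via the Split Join and Request Aggregator artefacts); the sketch below only illustrates the underlying idea, using hypothetical helper names and an assumed batch size.

```python
# Illustrative sketch of the two optimizations (not the PHYSICS implementation itself).
BATCH_SIZE = 100  # assumed target size for a single function invocation

def split_large_request(vectors, batch_size=BATCH_SIZE):
    """Split a large array of input vectors into chunks, one chunk per function instance."""
    return [vectors[i:i + batch_size] for i in range(0, len(vectors), batch_size)]

def aggregate_small_requests(requests):
    """Concatenate several small input arrays into one batch for a single invocation.

    `requests` is a list of (request_id, vectors) pairs; the returned `origins`
    records how many results belong to each request, so the joined output can be
    split back to the callers afterwards.
    """
    batch, origins = [], []
    for request_id, vectors in requests:
        batch.extend(vectors)
        origins.append((request_id, len(vectors)))
    return batch, origins
```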

Owner: Innovation Sprint

PoC: Dr. Aristodemos Pnevmatikakis

License: Apache License v2.0
