Four Streams:
Co-creation, Business Modelling, Responsible Innovation & AI Machine Learning


The monitoring of human activity within HAAL is unique and is performed through various AAL services largely relying on Information and Communication Technologies (ICTs) and Internet of Things (IoT) solutions, whose main aim is to obtain information about users’ daily activity, typically contact-free, while neglecting, however, the subjective aspect of their well-being. Within HAAL, we will analyse the variability of the various sensor data that could be associated with the onset of pathology, but also with behavioural changes occurring after the prescription of a new therapy.
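As a purely illustrative sketch of this idea (the sensor values, function names and threshold below are hypothetical and do not represent HAAL's actual algorithms), a sustained shift in the day-to-day variability of a single activity signal could be flagged with a simple baseline comparison:

```python
import statistics

# Hypothetical daily activity counts from a contact-free motion sensor
# (events per day). A sustained rise in day-to-day variability may hint
# at a behavioural change, e.g. after a new therapy is prescribed.
baseline_week = [42, 45, 40, 44, 43, 41, 46]   # stable routine
recent_week   = [44, 30, 55, 28, 58, 25, 60]   # erratic routine

def variability_shift(baseline, recent, threshold=2.0):
    """Flag when the recent standard deviation exceeds the baseline's
    by a multiplicative threshold (an illustrative heuristic only)."""
    return statistics.stdev(recent) > threshold * statistics.stdev(baseline)

print(variability_shift(baseline_week, recent_week))  # → True
```

In practice, such a rule would be one small building block: a deployed system would combine several sensor streams and more robust change-detection methods, and any flag would still require clinical interpretation.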

Machine Learning (ML) techniques and appropriate algorithms will be used to extract the health status of the PwD from behaviour changes, but also to detect the onset and monitor the progression of some age-related diseases and disorders. To maximize success and market fit, HAAL will be developed using an iterative methodology with four streams – i.e. co-creation, business modelling, responsible innovation & AI machine learning – with special attention to their coherence. All stakeholders – i.e. homecare nurses, informal caregivers, clients and care organizations, but also the developers of the technological solution – are involved in sessions focused on these streams and in pilots in all four participating countries. Unique to HAAL is the iterative assessment and balancing of the various interests of users, among themselves, but also against the commercial interests of the developing parties – which are equally important for developing a viable business model around the HAAL bundle. Different types of users may also have conflicting interests, for instance when a particular feature (e.g. algorithmic predictions of the health status) is beneficial for the PwD but poses challenges to the caregiver (e.g. in terms of understanding the decisions made by HAAL).

Besides, while technological partners may have an interest in gathering and learning from data about users and their interaction with the solutions for innovation purposes, users may have conflicting needs and demands around the gathering and use of their data in terms of privacy. Responsible intelligent technology may also require a certain degree of openness about algorithmic procedures, including towards users, who may require the ability to assess how HAAL makes decisions and to understand and control its practices. At the same time, however, algorithmic processing may simply be too complex for (some) users to understand the conditions to which they ‘agree’ when using a technology, and we cannot simply assume that individual users are savvy enough to fend for themselves when it comes to protecting their digital rights and interests. In this sense, during the meaningful try-out, co-design and deployment of HAAL, the right trade-offs need to be made to ensure that the interests and values of different stakeholders are balanced. The coherence between the objectives and outcomes of the co-creation, business modelling, responsible innovation and AI Machine Learning streams (see figure), as well as the balancing of conflicting interests within and between these streams, is considered essential for this project, and for the successful adoption and scaling of HAAL solutions in general. The technological development of HAAL cannot be viewed separately from its resulting revenue models, ethics, regulations, and local differences in users’ values, needs and interests; these develop together.