The recent wave of Artificial Intelligence (AI) technologies based on Machine Learning (ML) has had a
huge societal and economic impact, with AI being (often silently) embedded in many of our everyday
experiences (such as virtual assistants, tracking devices, social media, recommender systems). The
research community (and society in general) has already realized that the current centralized approach
to AI, whereby our personal data are centrally collected and processed through opaque ML systems
(black boxes), is neither an acceptable nor a sustainable model in the long run. We posit that the next
wave of ML-driven AI should be (i) human-centric, (ii) explainable, and (iii) more distributed and
decentralized (i.e., not centrally controlled). These principles address the societal and ethical
expectations for trustworthy, privacy-respectful AI.
Our project SAI will develop the scientific foundations for novel ML-based AI systems ensuring (i)
individuation: in SAI, each individual is associated with their own Personal AI Valet (PAIV), which acts as
the individual's proxy in a complex ecosystem of interacting PAIVs; (ii) personalization: PAIVs process
individuals' data via explainable AI models tailored to the specific characteristics of their human twins;
(iii) purposeful interaction: PAIVs interact with each other, to build global AI models and/or come up
with collective decisions starting from the local (i.e., individual) models; (iv) human-centricity: novel AI
algorithms and the interaction between PAIVs are driven by (quantifiable) models of the individual and
social behavior of their human users; (v) explainability: explainable ML techniques are extended through
quantifiable human behavioral models and network science analysis to make both local and global AI
models explainable-by-design.
The ultimate goal of SAI is to provide the foundational elements enabling a decentralized collective of
explainable PAIVs to evolve local and global AI models, whose processes and decisions are transparent,
explainable and tailored to the needs and constraints of individual users.
The project brings together seven internationally leading institutions. The role of our group at the Central
European University will be to study the PAIV network, a complex network of humans and AI units: to
characterize its structure and to uncover emergent phenomena on it, such as the segregation of opinions.