Poster: An Explainable Neural Network Model for Recommender Systems
Session: Posters
Event Type: Poster
Description: Recent years have seen explosive growth in the amount of digital information and in the number of users who interact with that information through various platforms. This increase in information and users has naturally led to information overload, which inherently limits users' capacity to discover and find what they need among the staggering array of options available at any given time. Online services have handled this information overload by using algorithmic filtering tools that suggest relevant, personalized information to users. These filtering methods, known as Recommender Systems (RS), have become essential for recommending relevant options in diverse domains. Most research on RS has focused on developing and evaluating models that make predictions efficiently and accurately, without taking into account other desiderata, such as fairness and transparency, that are becoming increasingly important for establishing trust with users. For this reason, researchers have recently been pressed to develop recommender systems that can explain why a recommendation is given to a user.
Recent state-of-the-art techniques for recommendation include neural networks and Deep Learning (DL), which have achieved unprecedented levels of accuracy. Unfortunately, neural network methods are notorious for producing black-box models that cannot provide an explanation along with their predictions. The objective of our research is to design an explainable neural network RS that makes recommendations that are both accurate and explainable.
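
The abstract does not specify the model architecture. Purely as an illustrative sketch, and not the authors' method, the snippet below shows one common way to couple a neural recommender with built-in explanations: an attention layer over item "aspect" features, whose learned weights can be surfaced as the reason an item was recommended. All class names, dimensions, and the aspect vocabulary are assumptions made for demonstration.

# Illustrative sketch only: an attention-based neural recommender whose attention
# weights over item "aspect" features can be read out as a simple explanation.
# This is NOT the authors' model; architecture, dimensions, and aspect ids are
# assumptions made for demonstration.
import torch
import torch.nn as nn


class ExplainableRecommender(nn.Module):
    def __init__(self, n_users, n_items, n_aspects, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.aspect_emb = nn.Embedding(n_aspects, dim)   # e.g. "price", "genre", ...
        self.score = nn.Linear(dim, 1)

    def forward(self, user_ids, item_ids, item_aspects):
        # user_ids, item_ids: (batch,);  item_aspects: (batch, k) aspect ids per item
        u = self.user_emb(user_ids)                      # (batch, dim)
        i = self.item_emb(item_ids)                      # (batch, dim)
        a = self.aspect_emb(item_aspects)                # (batch, k, dim)

        # Attention of the user over the item's aspects: these weights act as the
        # "explanation" -- they indicate which aspects drove the predicted score.
        attn_logits = torch.einsum("bd,bkd->bk", u, a)   # (batch, k)
        attn = torch.softmax(attn_logits, dim=-1)
        aspect_context = torch.einsum("bk,bkd->bd", attn, a)

        pred = self.score(u * i + aspect_context).squeeze(-1)  # predicted rating
        return pred, attn


if __name__ == "__main__":
    model = ExplainableRecommender(n_users=100, n_items=500, n_aspects=10)
    users = torch.tensor([3])
    items = torch.tensor([42])
    aspects = torch.tensor([[0, 2, 5]])                  # aspect ids attached to item 42
    rating, weights = model(users, items, aspects)
    print("predicted rating:", rating.item())
    print("aspect attention (explanation):", weights.tolist())

In a sketch like this, the attention weights returned alongside each predicted rating indicate which item aspects contributed most to the score, providing a simple, human-readable rationale for the recommendation.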