An On-line system for automated recognition of human activities

Tommaso Magherini, [1] Alessandro Fantechi, [2] Christopher D. Nugent, [3] Alessandro Pinzuti, [4] Enrico Vicario [5]

Cite as: Magherini, T., Fantechi, A., Nugent, C. D., Pinzuti, A., Vicario, E., 'An On-line system for automated recognition of human activities', European Journal of Law and Technology, Vol. 4, No. 2, 2013

Abstract

The recognition of activities performed by humans is a key challenge in several research areas. In the Ambient Assisted Living (AAL) context, automated activity recognition systems may improve the quality of life of elderly people by monitoring their behaviours and preventing dangerous situations, while in the Surveillance context they may support human operators in the detection of malicious situations. Advances in technology allow the introduction of information sources into everyday objects; however, the data gathered from these pervasive devices may contain sensitive user information, raising privacy and security issues. Moreover, a semantic gap exists between these data and the information needed for the recognition process. We propose ARA (Automated Recogniser of ADLs), a system for the automated and real-time recognition of daily human activities in smart environments that guarantees the privacy of user information and the security of data. ARA is based on a model checking engine that detects human activities, represented through temporal logic formulae, over the information flow gathered from the sensorised environment.

Index Terms - Human activity recognition, model checking, real-time processing, temporal logic, ambient assisted living, smart surveillance.

1. Introduction

Automated recognition of human activities can be broadly formulated as the problem of identifying whether a sequence of data, gathered from a sensorised environment, can be interpreted as the footprint of a set of activities carried out by a person (Cook et al., 2007). This is essential for various application contexts (Turaga et al., 2008), including Ambient Assisted Living (AAL), real-time monitoring of humans, surveillance, forensic information retrieval, and human-computer interaction. From a purely technological perspective, advances have allowed the adoption of the Pervasive Computing paradigm, where information sources are integrated into everyday objects (Weiser, 1991), within the process of activity recognition. In particular, the introduction of automated techniques for the recognition of human activities in the Surveillance context offers the theoretical basis for Smart Surveillance systems (Hampapur et al., 2003). These systems exploit data gathered from the environment to generate real-time alerts upon the detection of activities belonging to a predefined set (e.g., a car is entering a residential car park) or deviating from the norm (e.g., a car has left a car park during a specified time period in the night). The detection of such activities allows a prompt evaluation of the situation and a response within an acceptable delay. In addition, the automated tracking of activity patterns by Smart Surveillance systems supports information retrieval for legal purposes (e.g., selection of video frames wherein a suspect appears) and situation and context awareness (e.g., the orientation of cameras may change on the basis of detected activities).

Activity recognition is becoming a predominant challenge in the AAL context, which aims to improve the quality of life of elderly people and those suffering from long-term or chronic health conditions. In particular, the automated recognition of Activities of Daily Living (ADLs) (Katz et al., 1970), which represent daily self-care tasks performed by a user (e.g., meal preparation, dressing, and drinking), provides the opportunity to investigate a number of interesting practical applications, among which are the evaluation of independence loss in elderly people (e.g., tracking the differing stages of Alzheimer's disease along with its symptoms and behavioural changes), the long-term monitoring of human behaviours (e.g., evaluation of the actual quality of life of elderly people and therapeutic applications), and the detection of abnormal or dangerous activity patterns (e.g., non-compliance with prescribed medication).

A rich technological layer may be used to support the acquisition of data from a variety of information sources integrated into an environment, such as: camera-based systems (Mihailidis et al., 2004), Wireless Sensor Networks (WSNs) (Hong and Nugent, 2009), wired sensor networks (Zhang et al., 2011), and Radio Frequency IDentification (RFID) devices (Rashidi and Cook, 2009). However, the extracted information can be vague and not sufficiently expressive, jeopardising the activity recognition process. The effective deployment of applications in the area of activity recognition, therefore, still appears to be limited by a semantic gap between the data gathered from the environment and the high-level concepts required for the recognition process. Moreover, the information collected by sensing devices must be properly managed to guarantee privacy and security. The former, which concerns the protection of sensitive user data, may be seriously compromised during data collection and reasoning, while the latter, which concerns user authentication and data integrity, is vulnerable to denial-of-service and man-in-the-middle attacks during data collection and management (Atzori et al., 2010), (Cavoukian et al., 2010).

Organisation of responsibilities and separation of concerns, according to layered architectures (Das et al., 2002), (Coutaz et al., 2005), may provide the means to address the semantic gap while satisfying privacy and security requirements. In the current work, without loss of generality, we define a three-layer structure: the upper layer, referred to as Recognition, is responsible for the detection of activities, which represent tasks performed by a user to achieve an intended goal (e.g., the user drinks a glass of water); the middle layer, referred to as Perception, is responsible for the extraction of actions, which represent a single step of the task being considered (e.g., the user moves the glass); and the third and lowest layer, referred to as Sensing, is responsible for the management of sensing devices and the acquisition of observations, which represent the data gathered from these devices (e.g., the acceleration value obtained by the sensor integrated into the glass).
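
As an illustration of this separation of concerns, the following sketch (in Python) shows how the three layers might be organised in code. All class names, sensor identifiers and thresholds are illustrative assumptions and are not part of the ARA implementation.

    # Illustrative sketch of the three-layer structure; names and thresholds
    # are assumptions, not part of ARA.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Observation:      # Sensing layer: raw data gathered from a device
        sensor_id: str
        timestamp: float
        value: float

    @dataclass
    class Action:           # Perception layer: a single step of a task
        name: str
        timestamp: float

    @dataclass
    class Activity:         # Recognition layer: a goal-directed task
        name: str
        start: float
        end: float

    def extract_actions(observations: List[Observation]) -> List[Action]:
        """Perception: map low-level observations to actions, e.g. a spike in the
        glass accelerometer becomes the action 'move_glass'."""
        return [Action("move_glass", o.timestamp)
                for o in observations
                if o.sensor_id == "glass_accelerometer" and o.value > 1.5]

    def recognise(actions: List[Action]) -> List[Activity]:
        """Recognition: combine actions into activities; ARA performs this step
        with temporal logic formulae and a model checker (Section 3)."""
        if any(a.name == "move_glass" for a in actions):
            return [Activity("drink_water", actions[0].timestamp, actions[-1].timestamp)]
        return []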

The literature presents several approaches to activity recognition, which can be classified into specification-based and learning-based techniques. The former use expert knowledge to characterise user activities as a set of timed sequences of steps (Ghanem et al., 2004), (Philipose et al., 2004), while the latter exploit machine learning and data mining algorithms for the recognition process (Ye et al., 2011). The majority of these approaches address the pattern classification task (i.e., detecting an activity occurrence in the observation stream) in an off-line fashion, while only rare exceptions (Lv et al., 2004) permit the real-time recognition of human activities. In the current work, we present a context-independent system for the automated and real-time recognition of ADLs within a sensorised environment, referred to as ARA (Automated Recogniser of ADLs). Our system is based on a layered architecture and exploits propositional temporal logic (Clarke et al., 1999) and a model checking engine (Baier and Katoen, 2008). We show how the logic has sufficient expressive power to represent realistic patterns, and how the model checking engine is capable of verifying temporal formulae in a real-time manner, while addressing privacy and security requirements. The rest of the paper is organised as follows. Section 2 describes an application scenario and the privacy and security issues of the activity recognition domain, while Section 3 presents the ARA system. Conclusions are finally drawn in Section 4.

2. Application Domain

2.1 Scenario

Sara is worried about her elderly father, Mark, who lives alone in a house far from Sara's home. Although Mark is self-sufficient, Sara still prefers to supervise and monitor her father's behaviours. With the participation of the Healthcare Provider from the city hospital, Mark's house has been transformed into a Smart Home (SH) by installing and integrating a sensor network and a camera system. The data gathered from these devices are used by the SH to recognise human activities with the aim of preventing dangerous situations and supervising Mark's behaviours. Mark spends his afternoons in the garage, where a small carpentry facility has been installed. During the afternoon, Sara tries to call Mark; however, he is working with the band saw and does not hear the telephone ringing. Sara is anxious and uses the web interface made available by the SH to retrieve information on her father. To protect Mark's privacy, only the sensor network is activated, while the cameras are switched off. Using the raw data gathered from the sensor devices, the SH detects the presence of a person in the garage; however, it is not able to infer the user's identity. Sara, using the privacy privileges of her SH user account, switches on the camera placed in the garage and observes her father working with the saw. She is now reassured and decides to call him later.

A few hours later, Mark starts to prepare the meal for dinner. The SH monitors Mark while he cooks soup in order to prevent any potentially dangerous situations. In this case, the system recognises that the user has correctly executed the meal preparation activity. Nevertheless, Mark forgets to switch off the stove. The SH detects that the user is eating while the stove is still turned on and subsequently informs Mark of the situation, who immediately switches off the stove. After dinner Mark goes out for a walk and forgets to take his prescribed medication. The SH recognises that Mark did not take the medication and tries to inform him of the forgotten activity. Using the raw data gathered from the sensor devices, the SH automatically checks whether anybody is in the house. With the aim of locating Mark, the SH switches on the cameras and automatically processes the video streams; however, the system is still not able to detect Mark's position.

The SH manages critical situations by adopting an escalation policy, which sequentially informs users on the basis of their privileges and responsibilities. Regarding the missed medication intake activity, the system firstly calls Mark on his mobile; however, the device is switched off. Subsequently, the SH sends a message, via SMS, to Sara regarding the detection of a critical situation. Unfortunately, Sara is already sleeping and does not read the SMS. After fifteen minutes, the system has not yet received feedback from Sara. It therefore sends a message to the Healthcare Provider regarding the detected situation. The Provider automatically assigns a green priority to this situation and activates a standard protocol, which requires sending human operators to the user's house if Mark does not return home within 2 hours of receipt of the warning message. Mark then returns from the walk and the system informs him about the missed activity. It also sends a message to the other users informing them that the critical situation has been resolved. Finally, Mark goes to sleep and the system activates a security protocol, locking the front doors, back doors and windows, and switching off the lights and the gas. Furthermore, the SH sends a log containing the user activities recorded during the day to Sara and the Healthcare Provider.
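
The escalation behaviour of the scenario can be summarised by a simple policy: notify the next contact in the chain only when no acknowledgement arrives within a timeout. The following sketch (in Python) illustrates this idea; the contact chain, the timeout value and the notification functions are assumptions made for illustration and do not describe the actual SH implementation.

    # Illustrative escalation policy: contacts are notified in order of
    # responsibility, escalating when no feedback arrives within the timeout.
    ESCALATION_CHAIN = [
        ("patient_mobile", "call"),         # 1. call Mark
        ("family_mobile", "sms"),           # 2. SMS to Sara
        ("healthcare_provider", "message"), # 3. message to the Healthcare Provider
    ]
    ACK_TIMEOUT_S = 15 * 60                 # fifteen minutes without feedback

    def notify(recipient: str, channel: str, event: str) -> None:
        print(f"[{channel}] -> {recipient}: {event}")

    def wait_for_ack(recipient: str, timeout_s: float) -> bool:
        # Placeholder: a real deployment would poll a feedback queue here.
        return False

    def escalate(event: str) -> None:
        for recipient, channel in ESCALATION_CHAIN:
            notify(recipient, channel, event)
            if wait_for_ack(recipient, ACK_TIMEOUT_S):
                return      # a user has taken charge of the situation
        # Chain exhausted: the Healthcare Provider applies its own protocol.

    escalate("missed medication intake")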

2.2 Privacy and Security in the activity recognition process

The aim of an activity recognition system is the analysis of a continuous observation stream, gathered from a sensorised environment, detecting relevant activities and rejecting portions of the observation stream that represent unknown or idle activities (Turaga et al., 2008). Nevertheless, there are several privacy and security issues regarding the interaction between the human user and the monitoring software. As previously mentioned, privacy and security are two of the most relevant challenges in the activity recognition context. The former concerns the operational policies, procedures and regulations implemented within an information system to manage personal information held in any format. The latter addresses the various components of an information system that safeguard data and associated infrastructures from unauthorised activities (Friedewald et al., 2007). Regarding the AAL domain, the development of a care service system requires, on the one hand, access to the user's health information with the aim of supervising the patient's behaviours and detecting emergency situations; on the other hand, the system must protect highly sensitive information that users may not agree to share with third parties (Bohn et al., 2004).

Figure 1: Graphical representation of privacy and security critical points in a sensing environment.

Fig. 1 shows the three primary points where privacy and security should be considered (Cavoukian et al., 2010):

  • User
  • Technology
  • Storage and Processing

An activity recognition system has to: a) respect user privacy during data acquisition; b) guarantee the security of sensing devices and data transmission; c) protect the data stored on the server through privacy access policies.

A user perceives a privacy violation when a natural, social, spatial or temporal, or transient border is crossed (Marx, 2001). A natural border describes a physical restriction of observability (e.g., a wall, a door, or darkness). A social border describes the confinement of confidential information to a subset of people (e.g., the sensitive information shared with a doctor or a lawyer). A spatial or temporal border describes a part of the life of a user which is considered to be isolated from the other parts (e.g., hobbies do not influence the working sphere, and vice versa). A transient border describes actions and declarations that a user hopes will be forgotten (e.g., old letters and photos put in the rubbish). In the Pervasive Computing context, the presence of sensing devices, which are invisible, proactive and ubiquitous, may preclude the user's choices regarding the authorisation to record and store sensitive data. The introduction of suitable authorisation policies and privacy settings in the devices may overcome these drawbacks.

Figure 2: Examples of smart technologies

Figure 2 Examples of smart technologies: a) a WSN is a set of cooperating sensors spatially distributed over an area with the aim of monitoring physical or environmental conditions; b) RFID is a battery-free chip that exploits radio-frequency electromagnetic fields for the automatic identification and tracking of tagged objects; and c) a video camera.

There are many technologies (see Fig. 2) involved in the Pervasive Computing context, each with different privacy and security issues. The use of sensor devices, such as WSNs or RFID, protects user privacy by extracting only low-level information; however, these technologies are vulnerable to physical and networking attacks. The sensors are often placed in unattended areas and are characterised by low energy and computational power, hampering the adoption of complex security schemes for the transmission of data over a Wi-Fi network (Cook et al., 2007). The use of camera systems allows the extraction of relevant user information and guarantees a more secure transmission using wired networks; nevertheless, cameras may breach several privacy borders. These technologies generate a huge volume of data, which are stored and processed to infer relevant information. The adoption of minimization (Wright et al., 2009) and data masking (Kapadia et al., 2008) policies allows a suitable management of sensitive personal data. The former states that the system has to collect only the data necessary for the specific services and to retain them only for the service execution time. The latter obscures highly sensitive data elements while maintaining data integrity. Nevertheless, an excessive amount of minimization and data masking may prejudice the performance of the recognition process by reducing the volume and the relevance of the data. The data are characterised by different levels of privacy; consequently, the system should introduce appropriate policies to manage this information, making sensitive data available only to the users with the necessary privileges. For example, the video streams of a camera placed in the bedroom should be available to the family of the patient and not to the Healthcare Provider, which can watch the video only when a critical situation is automatically detected by the monitoring system, while the temperature and humidity values of the patient's home should be available to both users.
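
To make the two policies concrete, the following sketch (in Python) shows one possible reading of minimization and data masking applied to a single sensor record; the field names, the retention window and the masking rule are illustrative assumptions, not prescriptions of a specific standard or of ARA.

    # Illustrative minimization and data-masking policies; field names,
    # retention window and masking rule are assumptions.
    import hashlib, time

    NEEDED_FIELDS = {"room", "timestamp", "motion"}  # minimization: collect only what the service uses
    RETENTION_S = 24 * 3600                          # minimization: retain only for the service execution time

    def minimise(record: dict) -> dict:
        """Drop every field the recognition service does not need."""
        return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

    def mask(record: dict) -> dict:
        """Obscure the sensitive room identifier while preserving integrity:
        equal rooms map to equal tokens, so reasoning over the data remains possible."""
        masked = dict(record)
        masked["room"] = hashlib.sha256(record["room"].encode()).hexdigest()[:8]
        return masked

    def expired(record: dict, now: float) -> bool:
        """Retention check: records older than the retention window are deleted."""
        return now - record["timestamp"] > RETENTION_S

    raw = {"room": "bathroom", "timestamp": time.time(), "motion": True, "video_frame": b"..."}
    stored = mask(minimise(raw))   # 'video_frame' is never stored; 'room' is masked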

A care service system may include several processes with different requirements. For example, a critical situation recogniser may require highly sensitive data in an on-line manner, while an off-line behaviour analyser may require only low-level information. Consequently, the system should satisfy the needs of each process while guaranteeing the privacy of the user and the security of data access. A common approach to data storage is the adoption of a remote system, composed of a database and a data processor. However, this model requires complex access policies for the protection of highly sensitive information and secure data transfer protocols. Alternatively, information processing and storage may be conducted in situ, using a local system or integrating microprocessors with sufficient power into the sensing devices (Hampapur et al., 2003), allowing the transmission of only relevant data to the server. Nevertheless, local systems are vulnerable to physical attacks.

3. ARA System

We have developed ARA (Automated Recogniser of ADLs) (Magherini et al., 2013), a context-independent system for the automated recognition of ADLs based on a temporal logic (Clarke et al., 1999), which is a set of rules and symbols used to represent and reason about knowledge over time, and an on-line model checker (Baier and Katoen, 2008), which is a formal method to verify whether a model satisfies a given specification in a real-time fashion. Regarding the scenario presented in Section 2.1, the model represents the sensorised home on the basis of the data obtained from the sensor network and the processed camera streams, while the specifications represent the monitored activities through temporal logic formulae.

The users interacting with ARA can be classified into three classes:

  • The Patient is the monitored person and interacts with the system only passively, through sensing technologies.
  • The Family supervises the Patient using the system facilities, receiving information on activities detected by the system in a real-time fashion. Moreover, this user has sufficient privacy privileges to directly interact with the devices (e.g., camera).
  • The Healthcare Provider (HP) evaluates the changes in the Patient's behaviours and manages emergency situations detected by the system while guaranteeing the Patient's privacy. The behavioural monitoring is based on the comparison of the daily logs of the Patient's activities, while the detection of emergency situations requires access to highly sensitive information in order to provide a quasi real-time response. Moreover, this user is responsible for the translation of the monitored activities from natural language descriptions into temporal logic formulae.

Figure 3: Overview of the ARA system

Figure 3 Overview of the ARA system, which is composed of three main components: the Formula Editor helps the HP to represent ADLs through temporal logic formulae, the Model Manager creates, updates, and modifies the model, and the Model Checker verifies whether the model satisfies the logic formulae. Three classes of user interact with ARA: the Patient is the monitored user, the Family supervises the Patient's activities, and the Healthcare Provider evaluates the Patient's behavioural changes and manages critical situations.

ARA is composed of three main components: Model Manager, Formula Editor, and Model Checker (refer to Fig. 3).

  • The Model Manager creates, updates, and modifies the model, using the set of actions and observations gathered from the environment, while guaranteeing the protection of user privacy and data security through appropriate access protocols. The model is represented through a linear labelled directed graph G = (V, E), where V is a set of nodes and E is a set of edges. Each node is characterised by a time instant and the set of actions performed by the user within a predefined time interval, while the edges link consecutive nodes. Furthermore, the model is periodically updated by adding a node containing the actions recognised in the previous time interval and by deleting the oldest one (see the first sketch after this list).
  • The Formula Editor helps the HP to represent ADLs using temporal logic formulae, by means of a graphical user interface (Rugnone et al., 2007). Each activity is characterised by a set of actions and their quantitative timing and sequencing constraints. For example, following our scenario, the medication intake activity is composed of three ordered actions: the user takes a pill, the user pours water into a cup, and the user swallows the pill drinking the water. These actions have to be executed within a predefined maximum time and in the given order. Such action patterns can be conveniently captured using a suitable variant of a Propositional Temporal Logic, which is a formalism composed of atomic propositions, and Boolean and temporal operators, for representing and reasoning about events whose truth values depend on time. For example, the logic formula G (request → E reply) states that every time (Globally operator, G) the system receives a request, then (implication operator, →), sometime in the future (Eventually operator, E), the system will send a reply.
  • The Model Checker is the core of the system and recognises the activities performed by the Patient. This component verifies whether the model, received from the Model Manager, satisfies the set of formulae received from the Formula Editor (see the second sketch after this list). In addition, it publishes a daily log of the detected activities for off-line processing.
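
As a first sketch (in Python), the linear model maintained by the Model Manager can be pictured as a bounded, ordered sequence of nodes, each labelled with a time instant and the set of actions recognised in that interval; the window length and the action names below are illustrative assumptions.

    # Illustrative sketch of the linear model G = (V, E): nodes are kept in
    # temporal order, so consecutive nodes are implicitly linked by an edge.
    from collections import deque
    from dataclasses import dataclass, field
    from typing import Deque, Set

    @dataclass
    class Node:
        time: int                                        # time instant of the interval
        actions: Set[str] = field(default_factory=set)   # actions recognised in the interval

    class ModelManager:
        def __init__(self, window: int = 60):
            self.nodes: Deque[Node] = deque(maxlen=window)

        def update(self, time: int, actions: Set[str]) -> None:
            """Periodic update: append the newest node; once the window is full
            the oldest node is discarded automatically."""
            self.nodes.append(Node(time, actions))

    manager = ModelManager(window=3)
    manager.update(0, {"take_pill"})
    manager.update(1, {"pour_water"})
    manager.update(2, {"drink"})
    manager.update(3, set())          # the node at time 0 is dropped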
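
The second sketch shows one possible encoding of the medication intake pattern and its verification over such a model; the formula reading, the bound, and the evaluation strategy are assumptions made for illustration, and do not reproduce ARA's actual logic or model checking engine (Magherini et al., 2013).

    # Illustrative check of an ordered, time-bounded pattern, roughly
    # G( take_pill -> E<=bound ( pour_water and E<=bound drink ) ).
    from typing import List, Set

    Model = List[Set[str]]      # the linear model: one set of actions per interval

    def eventually_within(model: Model, start: int, action: str, bound: int) -> int:
        """First index in (start, start+bound] where `action` holds, or -1."""
        for i in range(start + 1, min(start + bound, len(model) - 1) + 1):
            if action in model[i]:
                return i
        return -1

    def medication_intake(model: Model, bound: int = 10) -> bool:
        """Every pill intake must be followed, in order and within the bound,
        by pouring water and then drinking."""
        for t, actions in enumerate(model):
            if "take_pill" in actions:
                t_water = eventually_within(model, t, "pour_water", bound)
                if t_water == -1 or eventually_within(model, t_water, "drink", bound) == -1:
                    return False
        return True

    model: Model = [{"take_pill"}, set(), {"pour_water"}, {"drink"}]
    print(medication_intake(model))   # True: the pattern occurs within the bound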

The automated, real-time and context-independent capabilities of our system can be conveniently exploited in the field of activity recognition.

  • The automated recognition capability avoids human intervention during the detection process, overcoming human attention degradation (Green, 1999), and guarantees a higher level of privacy for sensitive user information, given that the extracted data are managed by a computer instead of a human being (Cavoukian et al., 2010). Most current video surveillance systems require human operators to constantly monitor the data stream. Unfortunately, the effectiveness and responsiveness of these systems are largely determined by the vigilance of the human operators during the evaluation process. For example, the experiments presented in (Green, 1999) show how, after 20 minutes of video evaluation, the recognition capability of a human being is reduced to an unacceptable level. Furthermore, the number of cameras and the area under surveillance are limited by the personnel available. Following our scenario, no healthcare assistants are required to evaluate the video streams, given that our system autonomously provides automatic detection of dangerous situations, reducing the cost of home care services and protecting the user's privacy.
  • The real-time recognition capability permits the detection of potentially dangerous activities with an acceptable delay, allowing the adoption of well-timed countermeasures. Furthermore, this feature guarantees a better protection of the user's privacy, as the extracted information is stored only when dangerous situations are detected, avoiding problems related to the long-term management of the user's sensitive data (Cavoukian et al., 2010). Following our scenario, whenever ARA recognises a potentially dangerous situation, it generates a real-time alert allowing the countermeasures previously specified by the HP and the Family (e.g., for the missed medication intake activity). During real-time recognition, ARA collects the detected activities in a log file allowing for off-line analysis in order to recognise relevant changes in the Patient's behaviours.
  • The context-independent capability makes the recognition process independent of the adopted sensing technologies, enabling the integration of several kinds of devices. Following our scenario, the HP centre may freely decide to add new cameras and to install RFID sensors inside the Patient's house. In particular, new cameras may monitor the house courtyard, whereas the RFID sensors may monitor the rooms where video cameras cannot be employed given that a high level of privacy is required (e.g., the bathroom).

Furthermore, ARA guarantees data protection by adopting minimization and data masking policies. The system collects and processes only the data necessary for the purposes of activity detection, neglecting data stemming from unoccupied areas within the house. For example, during the preparation of a meal, ARA processes and stores only the data gathered from the kitchen, ignoring the data streams coming from the other rooms. The protection of stored data is achieved by introducing masking techniques for sensitive information and adopting strict access policies. Following our scenario, the privileges of the classes of user that actively interact with ARA are different: the Family can switch cameras on and off and watch video in a real-time fashion; however, it cannot access the Patient's clinical data. The Healthcare Provider can only access the user's clinical information. Nevertheless, when ARA recognises a dangerous activity, such as a fall of the Patient, the HP gains the privacy privileges to watch the video streams in order to manage the situation.
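
A minimal sketch of such an access policy is given below (in Python); the role names, the resources and the escalation rule are assumptions made to illustrate the behaviour described above, not the actual ARA policy engine.

    # Illustrative role-based access policy with privilege escalation on emergencies.
    BASE_PRIVILEGES = {
        "family": {"camera_control", "live_video"},   # no access to clinical data
        "healthcare_provider": {"clinical_data"},     # no access to video by default
    }

    def allowed(role: str, resource: str, emergency_active: bool = False) -> bool:
        if resource in BASE_PRIVILEGES.get(role, set()):
            return True
        # Escalated privilege: the HP may watch the video streams only while a
        # dangerous activity (e.g., a fall) detected by ARA is still unresolved.
        return role == "healthcare_provider" and resource == "live_video" and emergency_active

    print(allowed("family", "clinical_data"))                 # False
    print(allowed("healthcare_provider", "live_video"))       # False
    print(allowed("healthcare_provider", "live_video", True)) # True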

4. Conclusions

Automated recognition of human activities within sensorised environments is assuming a growing relevance in several application contexts. In particular, in the AAL domain, these techniques are introduced to monitor and supervise the activities and behaviours of elderly people and those suffering from long-term or chronic conditions with the aim of improving their quality of life. In the Smart Surveillance domain, instead, automated activity recognition improves the detection of malicious situations and supports forensic data retrieval. Advances in technology allow the integration of information sources into everyday objects; however, a semantic gap still exists between the information gathered from pervasive devices and the automated recognition process. Furthermore, the management of the extracted data presents several challenges from the privacy and security perspectives.

We developed a context-independent system, referred to as ARA (Automated Recogniser of ADLs), for the automated and real-time recognition of ADLs based on a temporal logic and a model checking engine. Our system is independent from the adopted sensing technologies and can be ported to several domains. The automated and real-time character of the recognition process avoids human intervention and allows well-timed countermeasures, respectively. The adoption of minimization and escalation policies, combined with the aforementioned ARA features, guarantees an acceptable level of user privacy and data security.

References

Atzori, L, Iera, A, and Morabito, G (2010), 'The internet of things: A survey', Computer Networks 54.

Baier, C and Katoen, J P (2008), Principles of Model Checking (Cambridge: The MIT Press).

Bohn, J, Coroam, V, Langheinrich, M, Mattern, F, and Rohs, M (2004), 'Social, economic, and ethical implications of ambient intelligence and ubiquitous computing', In Springer - Ambient Intelligence

Cavoukian, A, Mihailidis, A, and Boger, J (2010), 'Sensors and in-home collection of health data: A privacy by design approach', Canada Information and Privacy Commissioner

Clarke, E M, Grumberg, O, and Peled, D A (1999). Model Checking (Cambridge: The MIT Press)

Cook, D J, Augusto, J C, and Jakkula, V R (2007), 'Ambient intelligence: Technologies, applications, and opportunities', Pervasive and Mobile Computing

Coutaz, J, Crowley, J L, Dobson, S, and Garlan, D (2005), 'Context is key', Communications of the ACM

Das, S K, Cook, D J, Bhattacharya, A, Heierman III, E O, and Lin, T Y (2002), 'The role of prediction algorithms in the MavHome smart home architecture', IEEE Wireless Communications

Friedewald, M, Vildjiounaite, E, Punie, Y, and Wright, D (2007), 'Privacy, identity and security in ambient intelligence: A scenario analysis', Telematics and Informatics 24(1)

Ghanem, N, DeMenthon, D, Doermann, D, and Davis, L (2004), 'Representation and recognition of events in surveillance video using petri nets', Computer Vision and Pattern Recognition Workshop 2004 CVPRW '04

Green, M W (1999), 'The Appropriate and Effective Use of Security Technologies in US Schools A Guide for Schools and Law Enforcement Agencies', National Institute of Justice Tech. Rep.

Hampapur, A, Brown, L, Connell, J, Pankanti, S, Senior, A, and Tian, Y (2003), 'Smart surveillance: Applications, technologies and implications', IEEE Pacific-Rim Conference On Multimedia

Hong, X and Nugent, C D (2009), 'Partitioning time series sensor data for activity recognition', 9th International Conference on Information Technology and Applications in Biomedicine 2009 - ITAB 2009

Kapadia, A, Triandopoulos, N, Cornelius, C, Peebles, D, and Kotz, D (2008), 'Anonysense: Opportunistic and privacy preserving context collection', 6th International Conference on Pervasive Computing 2008

Katz, S, Downs, T D, Cash, H R, and Grotz, R C (1970), 'Progress in the development of the index of ADL', Gerontologist 10

Lv, F, Kang, J, Nevatia, R, Cohen, I, and Medioni, G (2004), 'Automatic tracking and labeling of human activities in a video sequence' 6th IEEE International Workshop on Performance Evaluation of Tracking and Surveillance -PETS04

Magherini, T, Fantechi, A, Nugent, C D , and Vicario, E (2013), 'Using temporal logic and model checking in automated recognition of human activities for Ambient Assisted Living', IEEE Transactions on Human-Machine Systems (THMS), paper under review.

Marx, G T (2001), 'Murky conceptual waters: The public and the private', Ethics and Information technology 3

Mihailidis, A, Carmichael, B, and Boger, J (2004), 'The use of computer vision in an intelligent environment to support aging-in-place, safety, and independence in the home', IEEE Transactions on Information Technology in Biomedicine.

Philipose, M, Fishkin, K, Perkowitz, M, Patterson, D, Fox, D, Kautz, H, and Hahnel, D (2004), 'Inferring activities from interactions with objects', Pervasive Computing

Rashidi, P and Cook, D J (2009), 'Keeping the Resident in the Loop: Adapting the Smart Home to the User', IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans

Rugnone, A, Vicario, E, Nugent, C, Donnelly, M, Craig, D, Paggetti, C, and Tamburini, E (2007), 'HomeTL: A visual formalism, based on temporal logic, for the design of home based care', IEEE International Conference on Automation Science and Engineering, 2007 - CASE 2007

Turaga, P K, Chellappa, R, Subrahmanian, V S, and Udrea, O (2008), 'Machine recognition of human activities: A survey', IEEE Trans. Circuits and Systems for Video Technology

Weiser, M (1991), 'The computer for the 21st century', Scientific American

Wright, D, Gutwirth, S, Friedewald, M, Hert, P D, Langheinrich, M, and Moscibroda, A (2009), 'Privacy, trust and policy-making: Challenges and responses', Computer Law & Security Report

Ye, J, Dobson, S, and McKeever, S (2011), 'A review of situation identification techniques in pervasive computing' Pervasive and Mobile Computing

Zhang, X, Chen, X, Li, Y, Lantz, V, Wang, K, and Yang, J (2011), 'A framework for hand gesture recognition based on accelerometer and emg sensors', IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans



[1] Tommaso Magherini is with Dipartimento di Ingegneria dell'Informazione, Università di Firenze, Firenze, Italy (tommaso.magherini@unifi.it)

[2] Alessandro Fantechi is with Dipartimento di Ingegneria dell'Informazione, Università di Firenze, Firenze, Italy (fantechi@dsi.unifi.it)

[3] Christopher D. Nugent is with the School of Computing and Mathematics, University of Ulster, Belfast, UK (cd.nugent@ulster.ac.uk)

[4] Alessandro Pinzuti is with Dipartimento di Ingegneria dell'Informazione, Università di Firenze, Firenze, Italy (alessandro.pinzuti@unifi.it)

[5] Enrico Vicario is with Dipartimento di Ingegneria dell'Informazione, Università di Firenze, Firenze, Italy (enrico.vicario@unifi.it)