UOC researchers propose a novel way to evaluate and optimize algorithms for detecting cyberattacks in homes.
The number of smart homes, full of internet-connected devices, is growing.
According to Eurostat data, in the European Union, more than 70% of the population has some type of connected device in their home, not counting computers or smartphones. Televisions, audio and game systems, virtual assistants and home automation systems are the most common.
All of these devices offer convenience and efficiency, but they also open the door to new cybersecurity risks. However, detecting anomalies in smart home systems – such as those caused by cyberattacks – is fraught with challenges stemming largely from the design of these detection algorithms themselves.
The research is led by Helena Rifà Pous, member of the K-ryptography and Information Security for Open Networks (KISON) group, attached to the Centre for Research in Ethical Technologies and Connectivity for Humanity (UOC-TECH), and associate professor at the Faculty of Economics and Business and the Faculty of Computer Science, Multimedia and Telecommunications of the Open University of Catalonia (UOC), together with Juan Ignacio Iturbe Araya, researcher at UOC-TECH and at the Department of Computer Engineering of the Universidad de Santiago de Chile. Together they have proposed a new approach to optimizing these algorithms.
Objective: to correct the imbalance of the algorithms
Traditional methods of attack detection have long been insufficient in the face of the increasing variety and volume of threats faced by smart home systems.
These models, which require the system to know in advance each type of attack and the patterns that identify it, are being displaced by so-called unsupervised learning techniques, capable of identifying anomalous behavior without the need for prior threat data. However, these techniques also have a weak point.
The performance of these systems depends heavily on how their internal parameters – the values against which they judge behavior as anomalous – are tuned. Choosing these values incorrectly can reduce the system's ability to detect new or infrequent attacks. The problem is especially pronounced in environments with unbalanced data, such as the home, where there is far more normal traffic than anomalous traffic, and where each type of anomaly may occur with a very different frequency.
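To illustrate why the choice of that internal threshold matters so much on imbalanced home traffic, here is a minimal, hypothetical sketch. The anomaly scores and threshold values below are invented for illustration; they do not reproduce the detectors evaluated in the study.

```python
# Hypothetical sketch: an unsupervised detector assigns each traffic sample an
# anomaly score and flags anything above a threshold. On imbalanced data, the
# threshold decides whether rare attacks are detected at all.
normal_scores = [0.05, 0.10, 0.12, 0.08, 0.11, 0.09, 0.07, 0.13, 0.06, 0.10]
attack_scores = [0.35, 0.90]  # one subtle attack, one blatant one

def detections(threshold):
    """Return (false_alarms, attacks_caught) at a given threshold."""
    false_alarms = sum(s > threshold for s in normal_scores)
    attacks_caught = sum(s > threshold for s in attack_scores)
    return false_alarms, attacks_caught

# A conservative threshold produces no false alarms but misses the subtle attack,
fa_hi, hit_hi = detections(0.50)   # (0, 1)
# while a threshold tuned to this data catches both with no extra alarms.
fa_lo, hit_lo = detections(0.20)   # (0, 2)
```

The point of automatic optimization, as the study frames it, is to pick such parameters systematically rather than by hand.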
"Our work suggests that, even if unsupervised methods of anomaly detection are used, these methods can work better if we automatically optimize the system configuration," explains Helena Rifà Pous. "The study looks at how the selection of optimization metrics impacts the subsequent performance of those unsupervised learning models. And it concludes that metrics based on the Matthews correlation coefficient (a statistic that scores classification quality even on imbalanced data) give the best results, as they allow systems to be more generalizable, balanced and robust," she adds.
For the UOC researcher, the result of the study (published in the Journal of Network and Systems Management) underlines the importance of using more balanced metrics to move towards more reliable and effective security systems. "The change in criteria suggested by our research will allow us to create more flexible anomaly detection systems, which can be better adapted to the needs of individual users who do not have cybersecurity or computer skills. In essence, it will ensure that the products that reach the market are able to better detect real and rare attacks, and that they are not just good at confirming that traffic is normal," she says.
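The quote above captures why the Matthews correlation coefficient (MCC) is preferred over plain accuracy on imbalanced home traffic. A short sketch with invented confusion-matrix counts (not figures from the study) shows the effect: a detector that simply labels everything "normal" scores 99% accuracy yet has an MCC of zero.

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# Imbalanced scenario: 990 normal flows, 10 attacks.
# Detector A labels everything "normal" and misses all attacks.
acc_a = accuracy(tp=0, tn=990, fp=0, fn=10)   # 0.99 — looks excellent
mcc_a = mcc(tp=0, tn=990, fp=0, fn=10)        # 0.0  — reveals it is useless

# Detector B catches 8 of 10 attacks at the cost of 20 false alarms.
acc_b = accuracy(tp=8, tn=970, fp=20, fn=2)   # 0.978 — "worse" by accuracy
mcc_b = mcc(tp=8, tn=970, fp=20, fn=2)        # ≈0.47 — far better by MCC
```

Optimizing a detector's configuration against MCC therefore rewards catching rare attacks, rather than rewarding confirmation that most traffic is normal.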
Challenges to strengthen domestic cybersecurity
The study proposes a new approach to the development of robust and optimized models that strengthen the security of connected homes. However, the researchers note that applying this approach to widely consumed commercial services also entails three major challenges:
Availability of real household data. Obtaining a large volume of data from households that have actually suffered cyberattacks, which is needed to properly validate detection systems, is costly and complicated.
Future reliability. Normal household traffic changes over time for reasons such as the purchase of a new device or a change in consumption habits. Therefore, it is difficult to ensure that anomaly detection systems developed today will maintain their efficiency in the future.
Portability and standardization. Implementing an optimized model across different smart home and Internet of Things (IoT) platforms can be tricky, and it won't always be possible to maintain the performance of the proposed model.
"Our research is focused on finding what other mechanisms we can use so that anomaly detection models for smart homes adapt to their environments and can be used by people with little to no technical knowledge. We are looking for models that are not only accurate, but also autonomous and transparent," explains the researcher. "Our next step is to see how explainable artificial intelligence techniques can help us understand why these models fail or become obsolete," concludes Rifà Pous.

