Virtualized Test Infrastructure
14.04.2022

Context


Truminds focuses on delivering solutions that solve business problems for the long term, and today we will discuss one such solution. Our client is a Telecommunications Network Equipment Provider that develops cloud-managed wireless edge networking equipment, such as routers and gateways, along with the software to provide a wireless Wide Area Network (WAN). Given the inherent complexity of the domain, it is critically important to monitor network quality in labs and predict degradation or failure in service quality or hardware. Identifying which routers are underperforming or likely to fail is vital to maintaining and providing reliable service.


Challenges


The biggest challenge in this case is implementing an optimal level of data collection: enough data must be available for proper diagnostics, without clogging the network and degrading the very service being monitored. To strike this balance, we collect only minimal logging data from the routers, so that network performance is unaffected while the data gathered remains effective for diagnostics. However, the network is large enough that we still end up with a high volume of data to analyse. This leads to the next issue the solution needs to address: analysing the logging data to derive relevant insights and rules for predicting network issues (most commonly, router failure) is still very much a human exercise. It requires time and effort from SMEs (Subject Matter Experts), and as one can imagine, having SMEs wade through such volumes of data is not the best use of their time.
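As a rough illustration of what "minimal logging" can look like in practice, the sketch below samples only a small fraction of the router fleet each collection cycle and caps the payload fetched per router. The sampling fraction, byte cap, and fetch_logs helper are all hypothetical stand-ins, not the client's actual collector.

```python
import random

# Hypothetical sketch: poll a small sample of routers per cycle and cap the
# per-router log payload, so diagnostic data flows without clogging the WAN.
SAMPLE_FRACTION = 0.05            # assumed tunable: poll 5% of the fleet per cycle
MAX_BYTES_PER_ROUTER = 64 * 1024  # assumed cap on each router's log payload

def collect_cycle(router_ids, fetch_logs):
    """fetch_logs(router_id, max_bytes) is a stand-in for the real collector."""
    sample_size = max(1, int(len(router_ids) * SAMPLE_FRACTION))
    sampled = random.sample(router_ids, sample_size)
    return {rid: fetch_logs(rid, MAX_BYTES_PER_ROUTER) for rid in sampled}
```

Rotating the sample across cycles keeps per-cycle traffic bounded while still covering the whole fleet over time.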


Solution


To remedy this situation and minimise the human time needed, we focused on reducing the time experts spend classifying and analysing data. We applied AI/ML algorithms that analyse and learn from the data itself in order to classify (label) it. The experts then review these labels to ensure accuracy. Over time, as more data is processed and more expert feedback is incorporated, the algorithms become better at classifying the data, further increasing accuracy and reducing the need for human involvement.
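One common way to structure such a human-in-the-loop workflow is uncertainty sampling, sketched below: the model labels every log line, and only the lines it is least confident about are routed to an SME for review. This is a minimal illustration using scikit-learn under assumed choices (TF-IDF features, a fixed review budget), not the client's actual pipeline.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def active_learning_round(labelled_texts, labels, unlabelled_texts, review_budget=20):
    """One round: train on SME-labelled logs, auto-label the rest,
    and flag the least confident predictions for expert review."""
    vec = TfidfVectorizer()
    X_train = vec.fit_transform(labelled_texts)
    clf = LogisticRegression(max_iter=1000).fit(X_train, labels)

    X_pool = vec.transform(unlabelled_texts)
    proba = clf.predict_proba(X_pool)
    confidence = proba.max(axis=1)                       # model confidence per log line
    uncertain_idx = np.argsort(confidence)[:review_budget]  # least confident first

    auto_labels = clf.predict(X_pool)
    return auto_labels, uncertain_idx  # SMEs review only the `uncertain_idx` lines
```

Each round, the reviewed labels are folded back into the training set, so the model's confidence rises and the review budget can shrink, which is exactly the diminishing-human-involvement behaviour described above.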


The beauty of this solution is that it can be applied to any business domain where there is a large volume of data to be processed but few SMEs available. A major advantage is that these algorithms are context-aware and, once trained by humans, can deal with business jargon as well. For example, when dealing with healthcare data, the model can learn that a “positive test result” may actually be a negative outcome. This further boosts the relevance and usefulness of the solution. And since the amount of real data available for analysis can be limited, it is also possible to run simulations and generate synthetic data for deeper analysis, as the sketch below illustrates. We can confidently say that the next era of deriving effective and efficient insights from business data with minimal human intervention is truly here.
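On the synthetic-data point, here is a toy sketch of template-based generation. The templates, field names, and labels are invented purely for illustration; a real simulation would be driven by the domain's actual log formats and failure modes.

```python
import random

# Illustrative only: when real failure data is scarce, synthesise plausible
# labelled log records from templates so the classifier has more to learn from.
TEMPLATES = [
    ("router {rid}: link flap on wan0, retries={n}", "degraded"),
    ("router {rid}: heartbeat missed, last_seen={n}s ago", "failing"),
    ("router {rid}: tx/rx nominal, uptime={n}h", "healthy"),
]

def synthesise_logs(count):
    records = []
    for _ in range(count):
        template, label = random.choice(TEMPLATES)
        text = template.format(rid=random.randint(1, 500), n=random.randint(1, 120))
        records.append((text, label))
    return records
```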
