For most people the verb ‘to search’ has been effectively replaced by a name that, like ‘Hoover’, has become a verb. We now ‘Google’ things, and very effective it is too. But the notion of searching can be, and is being, taken a great deal further.
Indeed, it is becoming an important tool for a number of deeply technical issues in cloud computing, such as the migration of applications to the cloud and, further out, the forensic analysis of data, where deep conceptual connections can be made that help drive tasks such as big data analytics.
It will also provide a major underpinning for one of the big business opportunities that cloud is opening up – the aggregation of applications and services into more focussed and targeted business services designed to meet the specific needs of a vertical industry sector or a horizontal business process.
These are areas of development now starting to emerge at London-based software house Ontology. The company is already building a track record in the development and application of semantic searching, applying it particularly to areas of technology where even the technologists are unsure what is happening.
The best examples of this, according to Chief Technology Officer Leo Zancani, can often be found in the telecoms world. Here, a major problem is identifying customer service delivery problems. Failing to do so costs the telco money in lost revenue, and money in tracking down the fault (which, given the complexity of connections, routers and switches that can be involved, can take several weeks to identify). Last, but not least, is the damage done to its business reputation.
By searching and linking at the data level using Ontology’s search systems, telcos can identify the causes of faults from data such as system self-reporting messages.
This process can then be extended to identify potential problems before they develop. “For example,” Zancani said, “it becomes possible to map out the chain of dependencies associated with a particular unit, such as a router. This is a real-life big data issue, for the service provider needs to know what will happen if a specific router is taken out of service for maintenance. It needs to know which communications processes will be affected, and when they might be affected most. And most important of all, it needs to know which customers are likely to be adversely affected, so that they can be informed.”
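Ontology has not published how its engine represents these relationships, but the dependency tracing Zancani describes can be illustrated with a minimal sketch. Everything below is hypothetical: the graph, the element names and the customers are invented for illustration, and a simple breadth-first walk stands in for whatever traversal the real product performs over its extracted data.

```python
from collections import deque

# Hypothetical dependency graph extracted from network data.
# Edges point from an element to the things that depend on it,
# e.g. "router-7" -> "switch-3" means switch-3 depends on router-7.
dependents = {
    "router-7":   ["switch-3", "switch-4"],
    "switch-3":   ["circuit-12"],
    "switch-4":   ["circuit-15"],
    "circuit-12": ["customer:AcmeCo"],
    "circuit-15": ["customer:BetaLtd", "customer:GammaPlc"],
}

def impacted(element):
    """Breadth-first walk of everything downstream of `element`."""
    seen, queue = set(), deque([element])
    while queue:
        node = queue.popleft()
        for dep in dependents.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

# Who is affected if router-7 is taken out of service for maintenance?
affected = impacted("router-7")
customers = sorted(n for n in affected if n.startswith("customer:"))
print(customers)  # ['customer:AcmeCo', 'customer:BetaLtd', 'customer:GammaPlc']
```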
This model, however, can be made to work equally well in a number of other areas, and as the tools are available as cloud-delivered SaaS as well as on-premise, these are expected to be the major volume markets for Ontology in the long term.
One of the early targets in which Zancani is starting to find traction is application migration, particularly the migration of existing applications to the cloud. It is a fundamental characteristic of the cloud that applications operating within it are only loosely coupled, rather than relying on the hard-wired integration techniques used in on-premise infrastructures. This means that dependencies between applications can differ between the cloud and an on-premise installation, for a wide variety of reasons.
The issue here is that, in a loosely coupled environment, the actions or processes of one application may inadvertently affect those of another, perhaps inhibiting an action or process, or triggering an unwanted one. The key task in migrating applications is therefore to identify those dependencies so they can be engineered out.
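The article does not describe how such dependencies are discovered. One plausible approach, sketched below purely as an assumption, is to infer them from observed call records and then compute the transitive closure for an application being considered for migration; the application names and records are invented.

```python
from collections import defaultdict

# Hypothetical call records, e.g. harvested from middleware or access
# logs. Each tuple is (caller, callee).
calls = [
    ("billing", "customer-db"),
    ("billing", "rating-engine"),
    ("crm", "customer-db"),
    ("rating-engine", "tariff-service"),
]

# Build a dependency map: app -> set of apps it depends on directly.
depends_on = defaultdict(set)
for caller, callee in calls:
    depends_on[caller].add(callee)

def migration_closure(app):
    """Everything `app` depends on, directly or transitively: the set
    that must remain reachable after `app` moves to the cloud."""
    result, stack = set(), [app]
    while stack:
        for dep in depends_on.get(stack.pop(), ()):
            if dep not in result:
                result.add(dep)
                stack.append(dep)
    return result

print(sorted(migration_closure("billing")))
# ['customer-db', 'rating-engine', 'tariff-service']
```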
“It is already being used for infrastructure planning processes in the network area,” Zancani said, “and we expect to see it playing a similar role in applications migration, where it will be able to identify infrastructure planning redundancies.”
What he does see following on from this is the use of Ontology’s technology in the growth of service aggregation. Here, the potential dilemma for service providers is that there will be applications offering ideal individual services for an aggregated service targeting a specific market sector, but engineering them together may create unexpected dependencies that lead to unwanted processes being triggered, or to process failures for reasons that are neither obvious nor easy to identify. The system will be able to identify those dependencies before they are implemented, which should both shorten service development times and create far more robust and reliable services.
“The system works from the data level up, which makes it possible to take real data and work up to what an aggregated service can deliver,” Zancani said. “In this way, it will allow service aggregators to play ‘what if’ games in building their services. This is already happening in internal IT departments, but we see it exploding when it comes to cloud service providers offering aggregated services, especially for the SME marketplace.”
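Again, Ontology’s own mechanics are not described here, but the ‘what if’ idea can be sketched under an assumption: suppose a catalogue records which events each candidate service emits and reacts to. Pairing emitters with reactors across a proposed bundle then surfaces the cross-service couplings before anything is deployed. The services and events below are invented.

```python
# Hypothetical catalogue: for each candidate service, the events it
# emits and the events it reacts to.
services = {
    "invoicing":  {"emits": {"invoice.created"},  "reacts_to": {"order.closed"}},
    "dunning":    {"emits": {"payment.reminder"}, "reacts_to": {"invoice.created"}},
    "fulfilment": {"emits": {"order.closed"},     "reacts_to": {"order.placed"}},
}

def what_if(bundle):
    """List the cross-service triggers a proposed bundle would create,
    so unexpected couplings surface before the bundle is built."""
    couplings = []
    for producer in bundle:
        for consumer in bundle:
            if producer == consumer:
                continue
            shared = services[producer]["emits"] & services[consumer]["reacts_to"]
            for event in sorted(shared):
                couplings.append((producer, event, consumer))
    return couplings

# 'What if' we aggregate all three into one SME offering?
for producer, event, consumer in what_if(["invoicing", "dunning", "fulfilment"]):
    print(f"{producer} --{event}--> {consumer}")
# invoicing --invoice.created--> dunning
# fulfilment --order.closed--> invoicing
```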
The ability of the tools to identify and map out the relationships between different elements of an application is also opening up new approaches to forensic examination, helping to create a richer understanding, at a conceptual level, of what applications are actually doing and what they might be capable of achieving.
Perhaps the most useful aspect of the Ontology tools is that, while the key component is the Modeller, in which all the data transformations and data flows are described, the user interface looks and acts like a search engine, so working with it builds on very familiar concepts.
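To see why the search-engine metaphor feels so familiar, here is a toy sketch, not Ontology’s implementation, of keyword search over modelled entities via a small inverted index; the entities and their descriptions are invented.

```python
# Hypothetical modelled entities with free-text descriptions, standing
# in for whatever the Modeller holds. The familiar part is the search
# behaviour itself: tokenise, index, look up.
entities = {
    "router-7": "core router, Manchester POP, carries wholesale traffic",
    "switch-3": "edge switch fed by router-7, serves business circuits",
    "billing":  "billing application, reads customer-db nightly",
}

# Build a toy inverted index: token -> set of entity ids.
index = {}
for entity_id, text in entities.items():
    for token in text.lower().replace(",", " ").split():
        index.setdefault(token, set()).add(entity_id)

def search(query):
    """Return entities matching every token in the query."""
    hits = [index.get(t, set()) for t in query.lower().split()]
    return sorted(set.intersection(*hits)) if hits else []

print(search("router manchester"))  # ['router-7']
```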