|InfoVis.net > Magazine > message nº 131 || Published 2003-10-13|
The digital magazine of InfoVis.net
The key point of the Semantic Web is the conversion of the current web, a store of data that only human beings can interpret (because only they can put the data into context), into a store of information.
To convert data into information we have to put it into context by adding metadata: data that carries the semantics, the explanation, of the data it refers to; in the end, the context.
The Semantic Web is based on two fundamental concepts:
- The description of the meaning of the contents, through concepts bound together in ontologies.
- The automatic manipulation of those contents, through inference engines.
All these concepts belong to the field of Knowledge Representation.
Basically, the Semantic Web pursues a universal way of representing the relationships between data, and between data and its meaning, so that an automatic system can follow the structure of those relationships and reach its own conclusions about a query or the object of a search. Moreover, this type of relationship can be successfully represented in visual form, as we saw in issue number 62.
For example, let's suppose, following the example in Paul Ford's article "August 2009: How Google beat Amazon and Ebay to the Semantic Web", that Jim's web page says that Jim is a friend of Paul. The logic we learnt at school says that
If A is a friend of B, then B is a friend of A
Then, in a semantic web, a search for the friends of Paul would find Jim, even though no statement about Jim appears on Paul's own page. But for this to work, sentences of this type have to be encoded in a machine-readable format such as RDF (Resource Description Framework) and passed through an inference engine that applies logic to the search.
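The reasoning above can be sketched in a few lines of code. This is a minimal toy, not real RDF: triples are plain tuples rather than URIs, the predicate name friendOf is illustrative (not an actual vocabulary term), and the "inference engine" is a single hand-rolled rule for symmetric properties.

```python
# Toy knowledge base: each statement is a (subject, predicate, object) triple,
# mimicking the RDF model without any RDF library.
triples = {
    ("Jim", "friendOf", "Paul"),  # stated only on Jim's page
}

def infer_symmetric(triples, predicate):
    """Inference rule: if (A, p, B) holds and p is symmetric, add (B, p, A)."""
    inferred = {(o, p, s) for (s, p, o) in triples if p == predicate}
    return triples | inferred

kb = infer_symmetric(triples, "friendOf")

# A search for the friends of Paul now finds Jim, even though nothing
# was ever asserted on Paul's page.
friends_of_paul = {o for (s, p, o) in kb if s == "Paul" and p == "friendOf"}
print(friends_of_paul)  # {'Jim'}
```

In a real Semantic Web stack, the symmetry of the friendship relation would be declared once in an ontology, and a generic inference engine would apply it to every friendOf statement it encounters, instead of the rule being hard-coded as here.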
The promise is that relevant information will become far easier to find.
But how has it evolved over the last three years? If we look at two of the most relevant sites at a general level, the World Wide Web Consortium (W3C) and the Semantic Web organisation, we can see a huge amount of activity, especially on standardisation and at an academic level. There is a considerable proliferation of projects defining ontology editors, like Protégé and many others, and inference engines like those you can find on the Semantic Web site.
The list of tool and service providers for the Semantic Web has grown noticeably, although they still number only a few dozen. (See, for example, one of the existing lists.)
You can find "intelligent" (but not free) browsers, like Amblit, that use the Semantic Web to link information more or less wisely.
Nevertheless, the potential impact of all this technology has not substantially changed life on the web yet. Standards progress slowly, and the effort of designing interoperable ontologies and, above all, of entering the huge amount of metadata needed to properly index the existing data makes its evolution a slow process.