Also available in Spanish


The digital magazine of InfoVis.net

The Semantic Web, today
by Juan C. Dürsteler [message nº 131]

Nearly three years ago, in number 26, we commented on the promise of the semantic web to convert the Net into a self-navigable and self-understandable space. Where are we today?

The key point of the semantic web is the conversion of the web's current structure as a data store (interpretable only by human beings, who are able to put the data into context) into a structure that stores information.

In order to convert data into information we have to put it into context by adding metadata: data that carries the semantics, the explanation of the data it refers to; in short, the context.

Metadata is data about data: data explaining the nature of other data. Note that the distinction depends on the application; what is data for one application can be metadata for another. As an example of metadata, suppose the stored data is an image of a book's ISBN reference plus its price. The image itself makes no sense to a machine, not even to a barcode reader, until the metadata states what type of reference it is.
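The data/metadata distinction can be sketched in a few lines of Python. The record below is hypothetical (the ISBN string, field names, and `describe` helper are illustrative assumptions, not from the article): the raw value is meaningless to a machine until metadata supplies its context.

```python
# A minimal sketch of the data/metadata distinction (hypothetical record).
# To a machine, the raw value is just a string of digits; the metadata
# turns that context-free data into information.

raw_data = "8449324483"  # meaningless digits without context

metadata = {
    "type": "ISBN",         # what kind of reference the digits are
    "encoding": "ISBN-10",  # how to interpret them
    "currency": "EUR",      # context for an associated price field
}

def describe(value, meta):
    """Attach metadata to a value, turning data into information."""
    return f"{meta['type']} ({meta['encoding']}): {value}"

print(describe(raw_data, metadata))  # → ISBN (ISBN-10): 8449324483
```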

The Semantic Web is based on two fundamental concepts:

  • The description of the meaning of the content in the Web.

  • The automatic manipulation of these meanings.

The description of the meaning requires concepts bound to

  • Semantics, understood as meaning susceptible to being processed by machines. 

  • Metadata, as containers of semantic information on the data. 

  • Ontologies, a set of terms and the relationship between them that describe a particular application domain.
[Figure: Ontology. Snapshot of the Protégé ontology editor with an example ontology for the structure of a newspaper. Protégé uses an object-oriented approach that expresses ontologies as classes and subclasses.]
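Protégé's object-oriented view of an ontology, classes and subclasses whose hierarchy a machine can traverse, can be sketched in plain Python. The newspaper-domain fragment below is a hypothetical illustration, not Protégé's actual example ontology:

```python
# A sketch of an ontology as a class hierarchy, in the object-oriented
# spirit of Protégé (hypothetical newspaper-domain fragment).

class Content:
    """Root class of the newspaper ontology."""
    def __init__(self, title):
        self.title = title

class Article(Content):
    """Subclass: an Article is a kind of Content."""
    def __init__(self, title, author):
        super().__init__(title)
        self.author = author

class Editorial(Article):
    """Subclass: an Editorial is a kind of Article."""
    pass

piece = Editorial("On the Semantic Web", author="J. Dürsteler")

# The hierarchy itself encodes relationships a machine can follow:
print(issubclass(Editorial, Content))  # → True
```

The point is that the subclass relation, like the relations in a real ontology, is explicit structure a program can query rather than context only a human reader supplies.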

The automatic manipulation of the contents is done through 

  • Mathematical logic, which allows rules to be established for processing the semantic content. 

  • Inference engines, which allow existing knowledge to be combined to derive new conclusions, i.e. new knowledge.

All these concepts belong to the field of Knowledge Representation.

Basically, the semantic web pursues a universal way of representing the relationships between data, and between data and its meaning, so that an automatic system can follow the structure of those relationships and reach its own conclusions regarding the query or the object of the search. Moreover, this type of relationship can be successfully represented in visual form, as we saw in issue number 62. 

For example, let's suppose, following the example in Paul Ford's article “August 2009: How Google beat Amazon and Ebay to the Semantic Web”, that Jim's web page states that Jim is a friend of Paul. The logic we learnt at school says that

If A is a friend of B, then B is a friend of A

Then, in a semantic web, a search for the friends of Paul would find Jim even though no statement about Jim is available on Paul's web page. But in order to do this, such sentences have to be encoded in a machine-readable format like RDF (Resource Description Framework) and passed through an inference engine that applies logic to the search.
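A toy inference engine makes the friendship example concrete. The sketch below stores the single stated fact as a subject-predicate-object triple (the names and the `isFriendOf` predicate are illustrative, not actual RDF syntax), applies the symmetry rule, and then a query for Paul's friends finds Jim:

```python
# Toy inference over subject-predicate-object triples.
# Only one fact is asserted (the statement from Jim's web page); the rule
# "if A is a friend of B, then B is a friend of A" derives the rest.

facts = {("Jim", "isFriendOf", "Paul")}  # the only stated fact

def apply_symmetry(triples, predicate="isFriendOf"):
    """For every (A, p, B) with the symmetric predicate, add (B, p, A)."""
    inferred = set(triples)
    for s, p, o in triples:
        if p == predicate:
            inferred.add((o, p, s))
    return inferred

knowledge = apply_symmetry(facts)

def friends_of(person, triples):
    """Return everyone stated or inferred to be a friend of `person`."""
    return {s for s, p, o in triples if p == "isFriendOf" and o == person}

print(friends_of("Paul", knowledge))  # → {'Jim'}, though Paul's page says nothing
```

Real inference engines work over RDF graphs and richer rule sets, but the principle is the same: new triples are derived mechanically from stated ones, so the search succeeds without Paul ever mentioning Jim.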

The promise is that relevant information becomes enormously easier to find, in a potentially very simple way.

But how has it evolved over the last three years? If we take a look at possibly the two most relevant sites at a general level, the World Wide Web Consortium (W3C) and the Semantic Web organisation, we can see a huge amount of movement, especially on standardisation and at the academic level. There is a considerable proliferation of projects defining ontology editors, like Protégé and many others, and inference engines like those you can find on the semantic web site.

The list of tool and service providers for the semantic web has grown noticeably, although they still number just a few dozen. (See, for example, one of the existing lists.)

You can find “intelligent” (but not free) browsers like Amblit that use the semantic web to link information more or less wisely.

Nevertheless, the potential impact of all this technology hasn't substantially changed life on the web yet. Standards progress slowly, and the effort of designing interoperable ontologies and, above all, of entering the huge amount of metadata needed to properly index the existing data makes its evolution a slow process.

Links of this issue:

http://www.infovis.net/printMag.php?num=26&lang=2   Article 26 "The Semantic Web"
http://www.infovis.net/printMag.php?num=62&lang=2   Article 62 "Visualising the Semantic Web"
http://www.ftrain.com/google_takes_all.html   Article "August 2009: How Google beat Amazon and Ebay to the Semantic Web" by Paul Ford
http://www.w3.org/2001/sw/   W3C Consortium. Page about the semantic web
http://www.semanticweb.org/   Semantic Web Org
http://protege.stanford.edu/   Protégé web site
http://www.semanticweb.org/knowmarkup.html   List of ontology editors
http://www.semanticweb.org/inference.html   List of inference engines
http://business.semanticweb.org/staticpages/index.php?page=20021016230045730   Firms related to the semantic web
http://www.amblit.com/   Amblit, Intelligent browser
© Copyright InfoVis.net 2000-2018