The Semantic Web In Breadth

by Aaron Swartz (with much assistance)

Note: This piece speaks about the different parts of the Semantic Web and how they fit together. For a high-level introduction, take a look at Sandro Hawke's The Semantic Web (Put Simply). On the other hand, if you're a Web developer who's interested in building Semantic Web Sites or Semantic Web Services, check out The Semantic Web (for Web Developers). Now back to your regularly scheduled article.

Identifiers: Uniform Resource Identifier (URI)

If I want to discuss something, I must first identify it. How else will you know what I'm referring to? I might do this in an indirect manner: "The North Star." "The strange man at the grocery store." "Those really sour candies that Bob always eats." I might also choose to be more direct: "Polaris." "Jonathan Roberts." "Mega Warheads."

To identify items on the Web, we also use identifiers. Because we use a uniform system of identifiers, and because each item identified is considered a "resource," we call these identifiers "Uniform Resource Identifiers" or URIs for short. We can give a URI to anything, and anything that has a URI can be said to be "on the Web": you, the book you bought last week, the fly that keeps buzzing in your ear and anything else you can think of -- they all can have a URI.

The URI is the foundation of the Web. While nearly every other part of the Web can be replaced, the URI cannot: it holds the rest of the Web together.

You're probably already familiar with one form of URI: the URL or Uniform Resource Locator. A URL is an address that lets you visit a webpage, such as: http://www.w3.org/Addressing/. If you break it down, you can see that a URL tells your computer where to find a specific resource (in this case, the W3C's Addressing website). Unlike most other forms of URIs, a URL both identifies and locates. Contrast this with a "mid:" URI. A "mid:" URI identifies an email message, but it isn't able to locate a copy of the message for you.

Because the Web is far too large for any one organization to control it, URIs are decentralized. No one person or organization controls who makes them or how they can be used. While some URI schemes (such as http:) depend on centralized systems (such as DNS), other schemes (such as freenet:) are completely decentralized.

This means that you don't need anyone's permission to create a URI. You can even create URIs for things you don't own. While this flexibility makes URIs powerful, it brings with it more than a few problems. Because anyone can create a URI, we will inevitably end up with multiple URIs representing the same thing. Worse, there will be no way to figure out whether two URIs refer to exactly the same resource. Thus, we'll never be able to say with certainty exactly what a given URI means. But these are tradeoffs that must be made if we are to create something as enormous as the Semantic Web.

A common practice for creating URIs is to begin with a Web page. The page describes the object to be identified and explains that the URL of the page is the URI for that object. For example, I wanted to create a URI for my copy of "Weaving the Web" by Tim Berners-Lee. First, I created a Web page that describes my copy. Second, I noted on that page that the URL for the page serves as the URI for my copy of the book. By doing this, I've associated that URI (http://logicerror.com/myWeavingTheWeb) with my copy of "Weaving the Web." Creating a URI can be just this simple.

You may have noticed that in this instance the URI (http://logicerror.com/myWeavingTheWeb) is doing double duty: it represents both the physical book and the Web page that describes it. This is an area of ongoing debate -- it's known as the Semantic Web Identification Problem, a recurring topic among Semantic Web workers.

This is an important fact to understand. A URI is not a set of directions telling your computer how to get to a specific file on the Web (though it may also do this). It is a name for a "resource" (a thing). This resource may or may not be accessible over the Internet. The URI may or may not provide a way for your computer to get more information about that resource. Yes, a URL is a type of URI that does provide a way to get information about a resource, or perhaps to retrieve the resource itself, and other methods for providing information about URIs and the resources they identify are under development. It is also true that the ability to say things about URIs is an important part of the Semantic Web. But you should not assume that a URI does anything more than provide an identifier for a resource.

Documents: Extensible Markup Language (XML)

XML was designed to be a simple way to send documents across the Web. It allows anyone to design their own document format and then write a document in that format. These document formats can include markup to enhance the meaning of the document's content. This markup is "machine-readable," that is, programs can read and understand it. By including machine-readable meaning in our documents, we make them much more powerful.

Consider a simple example: if a document contains certain words that are marked as "emphasized," the way those words are rendered can be adapted to the context. A Web browser might simply display them in italics, whereas a voice browser (which reads Web pages aloud) might indicate the emphasis by changing the tone or the volume of its voice. Each program can respond appropriately to the meaning encoded in the markup. In contrast, if I simply marked the words as "in italics", the computer has no way of knowing why those words are in italics. Is it for emphasis or simply for a visual effect? How should the voice browser convey this effect?

Here's an example of a document in plain text:

I just got a new pet dog.

As far as your computer is concerned, this is just text. It has no particular meaning to the computer. But now consider this same passage marked up using an XML-based markup language (we'll make one up for this example):

<sentence>
<person>I</person> just got a new pet <animal>dog</animal>.
</sentence>

Notice that this has the same content, but that parts of that content are labeled. Each label consists of two "tags": an opening tag (e.g., <sentence>) and a closing tag (e.g., </sentence>). The name of the tag ("sentence") is the label for the content enclosed by the tags. We call this collection of tags and content an "element." Thus, the sentence element in the above document contains the sentence, "I just got a new pet dog." This tells the computer that "I just got a new pet dog" is a "sentence," but -- importantly -- it does not tell the computer what a sentence is. Still, the computer now has some information about the document, and we can put this information to use.

Similarly, the computer now knows that "I" is a "person" (whatever that is) and that "dog" is an "animal."

Sometimes it is useful to provide more information about the content of an element than we can provide with the name of the element alone. For example, the computer knows that "I" in the above sentence represents a "person," but it does not know which person. We can provide this sort of information by adding attributes to our elements. An attribute has both a name and a value. For example, we can rewrite our example thus:

<sentence>
<person href="http://aaronsw.com">I</person> just got a new pet <animal type="dog" href="http://aaronsw.com/myDog">dog</animal>.
</sentence>

As you might have guessed already, there is a problem here. I've used the words "sentence," "person," and "animal" in my markup language. But these are pretty common words. What if others have used these same words in their own markup languages? What if those words have different meanings in those languages? Perhaps "sentence" in another markup language refers to the amount of time that a convicted criminal must serve in a penal institution. How is my computer to keep these straight?

To prevent confusion, I must uniquely identify my markup elements. And what better way to identify them than with a Uniform Resource Identifier? So I assign a URI to each of my elements and attributes. I do this using something called XML Namespaces. This way, anyone can create their own tags and mix them with tags made by others. A namespace is just a way of identifying a part of the Web (space) from which we derive the meaning of these names. I create a "namespace" for my markup language by creating a URI for it. (As we discussed earlier, I'll probably create a Web page to describe my markup language and use the URL of my Web page as the URI for my namespace.)

Since everyone's tags have their own URIs, we don't have to worry about tag names conflicting. XML, of course, lets us abbreviate and set default URIs so we don't have to type them out each time:

<sentence
    xmlns="http://example.org/xml/documents/"
    xmlns:c="http://animals.example.net/xmlns/"
><c:person c:href="http://aaronsw.com/">I</c:person> just got a new pet <c:animal>dog</c:animal>.</sentence>

Note that "http://example.org/xml/documents/" is the default namespace for my document. That is where all elements and attributes not preceded by c: are defined (in this instance "sentence" is the only element thus defined).

Statements: Resource Description Framework (RDF)

Now we start to get into the meat of the Semantic Web. It's wonderful that we can create URIs and talk about them on our Web pages. However, it'd be even better if we could talk about them in a way that computers could begin to process what we're saying. For example, it's one thing to say "I really like 'Weaving the Web.'" on a Web discussion forum. But what would this mean to a computer?

RDF gives you a way to make statements that are machine-processable. Now the computer can't actually "understand" what you said, of course, but it can deal with it in a way that makes it seem like it does. For example, I could search the Web for all book reviews and create an average rating for each book. Then, I could put that information back on the Web. Another website could take that information (the list of book rating averages) and create a "Top Ten Highest Rated Books" page.

RDF is really quite simple. An RDF statement is a lot like a simple sentence, except that almost all the words are URIs. Each RDF statement has three parts: a subject, a predicate and an object. Let's look at a simple RDF statement:

<http://aaronsw.com/> <http://love.example.org/terms/reallyLikes> <http://www.w3.org/People/Berners-Lee/Weaving/> .

Can you guess what this says? The first URI is the subject. In this instance, the subject is me. The second URI is the predicate. It relates the subject to the object. In this instance, the predicate is "reallyLikes." The third URI is the object. Here, the object is Tim Berners-Lee's book "Weaving the Web." So the RDF statement above says that I really like "Weaving the Web."

You may notice that RDF statements can say practically anything, and that it doesn't matter who says them. There is no one official website that says everything about Weaving the Web, or about me. This leads us to an important RDF principle, namely "anything can say anything about anything". Information is spread across the Web, and two people can even say contradictory things -- Bob can say that Aaron loves Weaving the Web and John can say that Aaron hates it. This is the freedom that the Web provides.
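John's contradictory claim would simply be another statement sitting somewhere else on the Web. As a sketch (the hate.example.org vocabulary is made up for illustration), it might look like this:

<http://aaronsw.com/> <http://hate.example.org/terms/reallyHates> <http://www.w3.org/People/Berners-Lee/Weaving/> .

Nothing in RDF itself prevents both statements from existing at once; it is up to the systems reading them to decide which to believe.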

These statements are written in N-Triples, a language that allows you to write simple RDF statements. However, the official RDF specification defines an XML representation of RDF, which is a bit more complicated, but says the same thing as our first statement:

<rdf:RDF
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:love="http://love.example.org/terms/">
    <rdf:Description rdf:about="http://aaronsw.com/">
        <love:reallyLikes rdf:resource="http://www.w3.org/People/Berners-Lee/Weaving/" />
    </rdf:Description>
</rdf:RDF>

Now, to write RDF like that is not the easiest thing in the world, and it seems unlikely that everyone will start speaking this strange new language anytime soon. So where do we expect all this RDF information to come from? The most likely source is databases.

In the world there are thousands of databases, most containing interesting machine-processable information. Governments store arrest records in databases; companies store part and inventory information in a database; most computerized address books store people's names and phone numbers in -- you guessed it! -- a database. When information is stored in a database, it's very easy to ask the computer certain questions about the data: "Show me everyone that was arrested in the past 6 months." "Print a list of all parts we're running low on." "Get me the phone numbers of the people whose last name is Jones."

RDF is ideally suited for publishing these databases to the Web. And when we put them on the Web, we give everything in the database a URI, so that other people can talk about it too. Now, intelligent programs can begin to fit the data together. Using the available information, the computer can begin to connect the Bob Jones whose phone number is in your address book with the Bob Jones who was arrested last week and the Bob Jones who just ordered 100,000 widgets. Now, we can ask questions of all these databases at once: "Get me the phone number of everyone who ordered more than 1,000 widgets and was arrested in the last 6 months."
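As a sketch of what this might look like (all of the example.com and example.gov URIs here are invented), a few records from those databases, published as RDF statements, might read:

<http://addressbook.example.com/people/BobJones> <http://terms.example.com/phoneNumber> "555-1234" .
<http://court.example.gov/arrests/1999> <http://terms.example.com/arrestee> <http://addressbook.example.com/people/BobJones> .
<http://store.example.com/orders/42> <http://terms.example.com/orderedBy> <http://addressbook.example.com/people/BobJones> .
<http://store.example.com/orders/42> <http://terms.example.com/quantity> "100000" .

Once each database refers to Bob Jones by URI, a program can join their statements together much as a database joins tables. (In practice, different databases might well mint different URIs for him, and further statements would be needed to connect them.)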

Schemas and Ontologies: RDF Schemas, DAML+OIL, and WebOnt

All the work on databases assumes that the data is nearly perfect. Few (if any) database systems are ready for the messiness of the Web. Any system that is "hard-coded" to understand certain terms will likely go out of date, or at least have limited usefulness, as new terms are invented and defined. What if someone comes up with a new system that rates books on a scale of 1 to 10, instead of just saying that someone "reallyLikes" them? Programs built on the old system won't be able to process the new information.
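For instance, such a new rating vocabulary might look like this (the rating.example.org namespace is invented for illustration):

    <http://www.w3.org/People/Berners-Lee/Weaving/> <http://rating.example.org/terms/score> "8" .

A program that only knows about "reallyLikes" has no way to make use of this score -- unless something tells it how the two vocabularies relate.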

Worse, there's no way for a computer or human to figure out what a specific term means, or how it should be used. All these URIs are useless if we never describe what they mean. This is where schemas and ontologies come in. A schema and an ontology are ways to describe the meaning and relationships of terms. This description (in RDF, of course) helps computer systems use terms more easily, and decide how to convert between them.

Two closely related systems, RDF Schemas and the DARPA Agent Markup Language with Ontology Inference Layer (DAML+OIL), have been developed to solve this problem. For example, a schema might state that:

    @prefix dc: <http://purl.org/dc/elements/1.1/> .
    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
 
    # A creator is a kind of contributor:
    dc:creator rdfs:subPropertyOf dc:contributor .

(Here we're using Notation3, a superset of N-Triples that allows us to use more abbreviations.)

This says that creator is a sub-property of contributor: anyone who is the creator of something also counts as one of its contributors. Of what use is this? Well, let's say for example that you build a program to collect the creators and contributors to various documents. Your program uses this vocabulary (dc:creator and dc:contributor) to understand the information it finds. One day, a vast influx of newbies from AOL start creating RDF documents. None of them know about dc:creator, so they make up their own term: ed:hasAuthor.

    # The old way:
    <http://logicerror.com/semanticWeb-long> dc:creator <http://aaronsw.com/> .
 
    # The new way:
    <http://logicerror.com/semanticWeb-long> ed:hasAuthor <http://aaronsw.com/> .

Normally, your program would simply ignore these new statements, since it can't understand them. However, one kind soul was smart enough to bridge the gap between these two worlds, by providing information on how to convert between them:

    # [X ed:hasAuthor Y] means the same as [X dc:creator Y]
    ed:hasAuthor daml:equivalentTo dc:creator .

This tells your program that ed:hasAuthor means the same thing as dc:creator. That means that all your program has to do is change ed:hasAuthor to dc:creator wherever it finds it. Since your program understands DAML ontologies, it can now take this information and use it to process all of the hasAuthor statements it couldn't understand before.
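A program built around an N3 inference engine (such as CWM, an experimental Semantic Web processor) might even apply the conversion automatically, as a rewrite rule. Here is a hedged sketch: the log: vocabulary and the ?doc/?person variable syntax are CWM conventions rather than part of DAML, and the ed: namespace URI is invented, since the example above never gave one:

    @prefix log: <http://www.w3.org/2000/10/swap/log#> .
    @prefix dc: <http://purl.org/dc/elements/1.1/> .
    @prefix ed: <http://education.example.org/terms/> .

    # Whenever a document has an author, that author is also its creator:
    { ?doc ed:hasAuthor ?person } log:implies { ?doc dc:creator ?person } .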

As of 2001-09-03, the W3C is preparing to start a Web Ontology (WebOnt) Working Group (which some call the WOW-G). This group is chartered to prepare a Web Ontology language that builds upon the work done by RDF Schema and DAML+OIL. It will be interesting to see what this group develops.

Logic

From this point on, I'll be discussing parts of the Semantic Web that haven't been developed yet. Unlike the discussion above, I'm not describing specific systems, but rather general concepts that could become (and are becoming) many different systems.

While it's nice to have systems that understand these basic concepts (subclass, inverse, etc.) it would be even better if we could state any logical principle and permit the computer to reason (by inference) using these principles.

Here's an example: let's say a company decides that if someone sells more than 100 of its products, then that person is a member of the Super Salesman club. A smart program can now follow this rule to make a simple deduction: "John has sold 102 things, therefore John is a member of the Super Salesman club."
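Experimental systems can already express rules like this. Here is a hedged sketch in Notation3, using the log: and math: vocabularies understood by the experimental CWM processor (the company: namespace is invented):

    @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
    @prefix log: <http://www.w3.org/2000/10/swap/log#> .
    @prefix math: <http://www.w3.org/2000/10/swap/math#> .
    @prefix company: <http://company.example.com/terms/> .

    # Anyone who has sold more than 100 units is a Super Salesman:
    { ?x company:unitsSold ?n . ?n math:greaterThan "100" }
        log:implies { ?x rdf:type company:SuperSalesman } .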

Proof

Once we begin to build systems that follow logic, it makes sense to use them to prove things. People all around the world could write logic statements. Then your machine could follow these Semantic "links" to construct proofs.

Example: Corporate sales records show that Jane has sold 55 widgets and 66 sprockets. The inventory system states that widgets and sprockets are both company products. The built-in math rules state that 55 + 66 = 121 and that 121 is more than 100. And, as we know, someone who sells more than 100 products is a member of the Super Salesman club. The computer puts all these logical rules together into a proof that Jane is a Super Salesman.

[Diagram: the Semantic Web "bus", courtesy of Tim Berners-Lee]

While it's very difficult to create these proofs (it can require following thousands, or perhaps millions, of the links in the Semantic Web), it's very easy to check them. In this way, we begin to build a Web of information processors. Some of them merely provide data for others to use. Others are smarter, and can use this data to build rules. The smartest are "heuristic engines", which follow all these rules and statements to draw conclusions, and kindly place their results back on the Web as proofs, as well as plain old data.

Trust: Digital Signatures and Web of Trust

Now you've probably been thinking that this whole plan is great, but rather useless if anyone can say anything. Who would trust such a system? So you don't let me into your site? OK, I just say I'm the King of the World and that I have permission. Because of the "anything can say anything about anything" rule, who's to stop me?

That's where digital signatures come in. Based on work in mathematics and cryptography, digital signatures provide proof that a certain person wrote (or agrees with) a document or statement. Aha! So I digitally sign all of my RDF statements. That way, you can be sure that I wrote them (or at least that I vouch for them). Now, you simply tell your program whose signatures to trust and whose not to. Each person can set their own levels of trust (or paranoia), and the computer can decide how much of what it reads to believe.

Now it's highly unlikely that you'll trust enough people to make use of most of the things on the Web. That's where the "Web of Trust" comes in. You tell your computer that you trust your best friend, Robert. Robert happens to be a rather popular guy on the Net, and trusts quite a number of people. And of course, all the people he trusts, trust another set of people. Each of those people trust another set of people, and so on. As these trust relationships fan out from you, they form a "Web of Trust." And each of these relationships has a degree of trust (or distrust) associated with it.
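There is no standard trust vocabulary yet, but as a sketch (the trust: namespace and the names are invented), the statements your computer works from might look something like this:

    @prefix : <http://example.org/people#> .
    @prefix trust: <http://trust.example.org/terms/> .

    :Me trust:trusts :Robert .
    :Robert trust:trusts :Wendy .
    :Me trust:distrusts :Mallory .

By following the trust:trusts links outward from :Me, a program could estimate how far to believe a statement signed by :Wendy, even though you have never stated an opinion about her directly.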

Note that distrust can be as useful as trust. Suppose that your computer discovers a document that no one explicitly trusts, but that no one explicitly distrusts either. Most likely, your computer will trust this document more than it trusts one that has been explicitly labeled as untrustworthy.

The computer takes all these factors into account when deciding how trustworthy a piece of information is. It can also make this process as transparent or opaque as you desire. For example, you might be happy with a simple "thumbs up/thumbs down" display. Someone else might insist on a complex explanation, including a description of some or all of the trust factors involved in the decision.

Tim Berners-Lee has proposed an "Oh, yeah?" button, that when clicked would have your computer attempt to provide reasons to trust the data. But whether you decide for yourself or leave it up to your computer, the information necessary to make an informed decision is available to you via the Web of Trust.

Conclusion: The Grand Vision

One of the best things about the Web is that it's so many different things to so many different people. The coming Semantic Web will multiply this versatility a thousandfold. For some, the defining feature of the Semantic Web will be the ease with which your PDA, your laptop, your desktop, your server, and your car will communicate with each other. For others, it will be the automation of corporate decisions that previously had to be laboriously hand-processed. For still others, it will be the ability to assess the trustworthiness of documents on the Web and the remarkable ease with which we'll be able to find the answers to our questions -- a process that is currently fraught with frustration.

Whatever the reason, almost everyone can find a reason to support this grand vision of the Semantic Web. Sure, it's a long way from here to there -- and there's no guarantee we'll make it -- but we've made quite a bit of progress so far. The possibilities are endless, and even if we don't ever achieve all of them, the journey will most certainly be its own reward.

Acknowledgements

Thanks to Sean B. Palmer for looking over a first draft of this article. He has since published The Semantic Web: An Introduction, which builds upon this piece. Jim Hendler and I published a paper based on this piece as "Swartz, A. and Hendler, J. The Semantic Web: A Network of Content for the Digital City, Proceedings Second Annual Digital Cities Workshop, Kyoto, Japan, October, 2001." Charles F. Munat kindly cleaned up my writing; this latest version is updated with more and newer information.


The Semantic Web: An Introduction

This document is designed to be a simple but comprehensive introduction for anybody trying to get into the Semantic Web: from beginners through to long-time hackers. Recommended pre-reading: the Semantic Web in Breadth.

Table Of Contents

  1. What Is The Semantic Web?
  2. Simple Data Modelling: Schemata
  3. Ontologies, Inferences, and DAML
  4. The Power Of Semantic Web Languages
  5. Trust and Proof
  6. Ambient Information and SEM
  7. Evolution
  8. Does It Work? What Semantic Web Applications Are There?
  9. What Now? Further Reading

What Is The Semantic Web?

The Semantic Web is a mesh of information linked up in such a way as to be easily processable by machines, on a global scale. You can think of it as being an efficient way of representing data on the World Wide Web, or as a globally linked database.

The Semantic Web was thought up by Tim Berners-Lee, inventor of the WWW, URIs, HTTP, and HTML. There is a dedicated team of people at the World Wide Web Consortium (W3C) working to improve, extend and standardize the system, and many languages, publications, tools and so on have already been developed. However, Semantic Web technologies are still very much in their infancy, and although the future of the project in general appears to be bright, there seems to be little consensus about the likely direction and characteristics of the early Semantic Web.

What's the rationale for such a system? Data that is generally hidden away in HTML files is often useful in some contexts, but not in others. The problem with the majority of data on the Web in this form at the moment is that it is difficult to use on a large scale, because there is no global system for publishing data in such a way that it can be easily processed by anyone. For example, just think of information about local sports events, weather information, plane times, Major League Baseball statistics, and television guides... all of this information is presented by numerous sites, but all in HTML. The problem with that is that, in some contexts, it is difficult to use this data in the ways that one might want to.

So the Semantic Web can be seen as a huge engineering solution... but it is more than that. We will find that as it becomes easier to publish data in a repurposable form, more people will want to publish data, and there will be a knock-on or domino effect. We may find that a large number of Semantic Web applications can be used for a variety of different tasks, increasing the modularity of applications on the Web. But enough subjective reasoning... on to how this will be accomplished.

The Semantic Web is generally built on syntaxes which use URIs to represent data, usually in triple-based structures: i.e. many triples of URI data that can be held in databases, or interchanged on the World Wide Web using a set of particular syntaxes developed especially for the task. These syntaxes are called "Resource Description Framework" syntaxes.

URI - Uniform Resource Identifier

A URI is simply a Web identifier: like the strings starting with "http:" or "ftp:" that you often find on the World Wide Web. Anyone can create a URI, and the ownership of them is clearly delegated, so they form an ideal base technology on which to build a global Web. In fact, the World Wide Web is such a thing: anything that has a URI is considered to be "on the Web".

The syntax of URIs is carefully governed by the IETF, who published RFC 2396 as the general URI specification. The W3C maintains a list of URI schemes.

RDF - Resource Description Framework

A triple can simply be described as three URIs. A language which utilises three URIs in such a way is called RDF: the W3C have developed an XML serialization of RDF, the "Syntax" in the RDF Model and Syntax recommendation. RDF XML is considered to be the standard interchange format for RDF on the Semantic Web, although it is not the only format. For example, Notation3 (which we shall be going through later on in this article) is an excellent plain text alternative serialization.

Once information is in RDF form, it becomes easy to process it, since RDF is a generic format, which already has many parsers. XML RDF is quite a verbose specification, and it can take some getting used to (for example, to learn XML RDF properly, you need to understand a little about XML and namespaces beforehand...), but let's take a quick look at an example of XML RDF right now:-

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" 
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:foaf="http://xmlns.com/foaf/0.1/" >
    <rdf:Description rdf:about="">
        <dc:creator rdf:parseType="Resource">
            <foaf:name>Sean B. Palmer</foaf:name>
        </dc:creator>
        <dc:title>The Semantic Web: An Introduction</dc:title>
    </rdf:Description>
</rdf:RDF>

This piece of RDF basically says that this article has the title "The Semantic Web: An Introduction", and was written by someone whose name is "Sean B. Palmer". Here are the triples that this RDF produces:-

<> <http://purl.org/dc/elements/1.1/creator> _:x0 .
<> <http://purl.org/dc/elements/1.1/title> "The Semantic Web: An Introduction" .
_:x0 <http://xmlns.com/foaf/0.1/name> "Sean B. Palmer" .

This format is actually a plain text serialization of RDF called "Notation3", which we shall be covering later on. Note that some people actually prefer using XML RDF to Notation3, but it is generally accepted that Notation3 is easier to use, and is of course convertible to XML RDF anyway.

Why RDF?

When people are confronted with XML RDF for the first time, they usually have two questions: "why use RDF rather than XML?", and "do we use XML Schema in conjunction with RDF?".

The answer to "why use RDF rather than XML?" is quite simple, and is twofold. Firstly, the benefit that one gets from drafting a language in RDF is that the information maps directly and unambiguously to a model, a model which is decentralized, and for which there are many generic parsers already available. This means that when you have an RDF application, you know which bits of data are the semantics of the application, and which bits are just syntactic fluff. And not only do you know that, everyone knows that, often implicitly without even reading a specification because RDF is so well known. The second part of the twofold answer is that we hope that RDF data will become a part of the Semantic Web, so the benefits of drafting your data in RDF now draws parallels with drafting your information in HTML in the early days of the Web.

The answer to "do we use XML Schema in conjunction with RDF?" is almost as brief. XML Schema is a language for restricting the syntax of XML applications. RDF already has a built in BNF that sets out how the language is to be used, so on the face of it the answer is a solid "no". However, using XML Schema in conjunction with RDF may be useful for creating datatypes and so on. Therefore the answer is "possibly", with a caveat that it is not really used to control the syntax of RDF. This is a common misunderstanding, perpetuated for too long now.

Screen Scraping, and Forms

For the Semantic Web to reach its full potential, many people need to start publishing data as RDF. Where is this information going to come from? A lot of it can be derived from many data publications that exist today, using a process called "screen scraping". Screen scraping is the act of literally getting the data from a source into a more manageable form (i.e. RDF) using whatever means come to hand. Two useful tools for screen scraping are XSLT (an XML transformations language), and RegExps (in Perl, Python, and so on).

However, screen scraping is often a tedious solution, so another way to approach it is to build proper RDF systems that take input from the user and then store it straight away in RDF. Data such as what you enter when signing up for a new mail account, buying some CDs online, or searching for a used car can all be stored as RDF and then used on the Semantic Web.

Notation3: RDF Made Easy

As you will have seen above, XML RDF can be rather difficult, but thankfully, there are simpler teaching forms of RDF. One of these is called "Notation3", and was developed by Tim Berners-Lee. There is some documentation covering N3, including a specification, and an excellent Primer.

The design criteria behind Notation3 were fairly simple: design a simple, easy-to-learn, scribblable RDF format that is easy to parse and build larger applications on top of. In Notation3, we can simply write out the URIs in a triple, delimiting them with "<" and ">" symbols. For example, here's a simple triple consisting of three URIs:-

<http://xyz.org/#a> <http://xyz.org/#b> <http://xyz.org/#c> .

To use literal values, simply enclose the value in double quote marks, thus:-

<http://xyz.org/#Sean> <http://xyz.org/#name> "Sean" .

If you don't want to give a URI for something that you are talking about, then there is a concept for that too (this is like saying "there is someone called..." without giving them a URI). You simply use an underscore and a colon, and then put a little label there:-

_:a1 <http://xyz.org/#name> "Sean" .

This may be read as "there is something that has the name Sean", or "a1 has the name Sean, for some value of a1". These things are called anonymous nodes, because they don't have a URI, and are sometimes referred to as existentially quantified nodes.
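For instance, a sketch of the data captured from a mail-account sign-up form (the terms.example.com URIs are invented) might use an anonymous node for the new user:

_:u1 <http://terms.example.com/username> "sbp" .
_:u1 <http://terms.example.com/fullName> "Sean B. Palmer" .

This says "there is someone with the username 'sbp' and the full name 'Sean B. Palmer'", without committing to a URI for that person.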

Note how in one of the examples above, we used the URI "http://xyz.org/#" three times, with only the last character changing each time? Notation3 gives us an excellent way to abbreviate this: by giving parts of URIs aliases, and using those aliases instead. This is how you declare an alias in Notation3:-

@prefix xyz: <http://xyz.org/#> .

Note that you must always declare an alias before you can use it. To use an alias, simply use the "xyz:" bit instead of the URI, and don't wrap the resulting term in the "<" and ">" delimiters. For example, instead of writing:-

<http://xyz.org/#a> <http://xyz.org/#b> <http://xyz.org/#c> .

We can instead do:-

@prefix xyz: <http://xyz.org/#> .
xyz:a xyz:b xyz:c .

Note that it doesn't matter what alias you use for a URI, as long as you use the same one throughout that document. You can also declare many aliases. The following bits of code are both equivalent to the piece of code above:-

@prefix blargh: <http://xyz.org/#> .
blargh:a blargh:b blargh:c .

@prefix blargh: <http://xyz.org/#> .
@prefix xyz: <http://xyz.org/#> .
blargh:a xyz:b blargh:c .

However, it should be noted that a few aliases are used pretty much as standard, so that when Semantic Web developers exchange code in plain text, they can just leave the prefixes out and people can guess what they're talking about. (Programs, however, should not rely on this convention: in real code, the prefixes must always be declared.) Here is an example of some "standard" aliases:-

@prefix : <#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix daml: <http://www.daml.org/2001/03/daml+oil#> .
@prefix log: <http://www.w3.org/2000/10/swap/log#> .
@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

The empty alias ":" is often used to denote a new namespace that the author has not yet created a URI for (tut, tut). We use it in this introduction.

Notation3 does have many other little constructs including contexts, DAML lists, and alternative ways of representing anonymous nodes, but we need not concern ourselves with them here. Note that a syntax was devised to be an even simpler subset of Notation3, called N-Triples, but it doesn't use prefixes, and hence many of the examples in this article are not valid N-Triples, but are valid Notation3.

Dan Connolly once called Notation3 a "poor-man's RDF authoring tool" (source: RDF IG logs, 2001-06-01 03:55:12). Apparently, it is called Notation3 because "RDF M&S was the first, the RDF strawman was the second and this is the third" (source: RDF IG F2F 2001-02-26).

CWM: An XML RDF And Notation3 Inference Engine

Although we won't be discussing inference engines until later on in this article, we should note at this point that much RDF and Semantic Web processing (albeit often only experimental or demonstrative, at this stage) is done using a Python program called CWM or "Closed World Machine". More information can be found on the SWAP site.

At the moment, the best demonstration of its use is how it can convert from XML RDF into Notation3 and vice versa. To convert "a.n3" into "a.rdf", simply use the following command line:-

python cwm.py a.n3 -rdf > a.rdf

CWM is a very powerful Semantic Web toolkit, and we shall be referring to it occasionally throughout this article.

Simple Data Modelling: Schemata

The first "layer" of the Semantic Web above the syntax discussed above is the simple datatyping model. A "schema" (plural "schemata") is simply a document or piece of code that controls a set of terms in another document or piece of code. It's like a master checklist, or definition grammar.

RDF Schema

RDF Schema (also: RDF Schema Candidate Recommendation) was designed to be a simple datatyping model for RDF. Using RDF Schema, we can say that "Fido" is a type of "Dog", and that "Dog" is a sub class of animal. We can also create properties and classes, as well as doing some slightly more "advanced" stuff such as creating ranges and domains for properties.

All of the terms for RDF Schema start with "http://www.w3.org/2000/01/rdf-schema#", which you may have noticed is in our list of "standard" aliases above. The alias "rdfs:" is often used for RDF Schema, and we continue that tradition here.

The three most important concepts that RDF and RDF Schema give us are the "Resource" (rdfs:Resource), the "Class" (rdfs:Class), and the "Property" (rdf:Property). These are all "classes", in that terms may belong to these classes. For example, all terms in RDF are types of resource. To declare that something is a "type" of something else, we just use the rdf:type property:-

rdfs:Resource rdf:type rdfs:Class .
rdfs:Class rdf:type rdfs:Class .
rdf:Property rdf:type rdfs:Class .
rdf:type rdf:type rdf:Property .

This simply says that "Resource is a type of Class, Class is a type of Class, Property is a type of Class, and type is a type of Property". These are all true statements.

It is quite easy to make up your own classes. For example, let's create a class called "Dog", which contains all of the dogs in the world:-

:Dog rdf:type rdfs:Class .

Now we can say that "Fido is a type of Dog":-

:Fido rdf:type :Dog .

We can also create properties quite easily by saying that a term is a type of rdf:Property, and then use those properties in our RDF:-

:name rdf:type rdf:Property .
:Fido :name "Fido" .

Why have we said that Fido's name is "Fido"? Because the term ":Fido" is a URI, and we could quite easily have chosen any URI for Fido, including ":Squiggle" or ":n508s0srh". We just happened to use the URI ":Fido" because it's easier to remember. However, we still have to tell machines that his name is Fido, because although people can guess that from the URI (even though they probably shouldn't), machines can't.

RDF Schema also has a few more properties that we can make use of: rdfs:subClassOf and rdfs:subPropertyOf. These allow us to say that one class or property is a sub class or sub property of another. For example, we might want to say that the class "Dog" is a sub class of the class "Animal". To do that, we simply say:-

:Dog rdfs:subClassOf :Animal .

Hence, when we say that Fido is a Dog, we are also saying that Fido is an Animal. We can also say that there are other sub classes of Animal:-

:Human rdfs:subClassOf :Animal .
:Duck rdfs:subClassOf :Animal .

And then create new instances of those classes:-

:Bob rdf:type :Human .
:Quacky rdf:type :Duck .

And then we can invent another property, use that, and build up even more information...

:owns rdf:type rdf:Property .
:Bob :owns :Fido .
:Bob :owns :Quacky .
:Bob :name "Bob Fleming" .
:Quacky :name "Quakcy" .

And so on. You can see that RDF Schema is very simple, and yet allows one to build up knowledge bases of data in RDF very quickly.

The next important concepts that RDF Schema provides are ranges and domains. Ranges and domains let us say what classes the subject and object of each property must belong to. For example, we might want to say that the property ":bookTitle" must always apply to a book, and have a literal value:-

:Book rdf:type rdfs:Class .
:bookTitle rdf:type rdf:Property .
:bookTitle rdfs:domain :Book .
:bookTitle rdfs:range rdfs:Literal .
:MyBook rdf:type :Book .
:MyBook :bookTitle "My Book" .

rdfs:domain always says what class the subject of a triple using that property belongs to, and rdfs:range always says what class the object of a triple using that property belongs to.

RDF Schema also contains a set of properties for annotating schemata, providing comments, labels, and the like. The two properties for doing this are rdfs:label and rdfs:comment, and an example of their use is:-

:bookTitle rdfs:label "bookTitle"; 
   rdfs:comment "the title of a book" .

It is good practice to always label and comment your new properties, classes, and other terms.

Ontologies, Inferences, and DAML

DAML is a language created by DARPA as an ontology and inference language based upon RDF. DAML takes RDF Schema a step further, by giving us more in-depth properties and classes. DAML allows one to be even more expressive than with RDF Schema, and brings us back on track with our Semantic Web discussion by providing some simple terms for creating inferences.

DAML+OIL

DAML provides us with a method of saying things such as inverses, unambiguous properties, unique properties, lists, restrictions, cardinalities, pairwise disjoint lists, datatypes, and so on. We shall run through a couple of these here, but armed with the knowledge that you've already gotten from this introduction (assuming that you haven't skipped any of it!), it should be just as beneficial going through the DAML+OIL Walkthru.

One DAML construct that we shall run through is the daml:inverseOf property. Using this property, we can say that one property is the inverse of another. The rdfs:range and rdfs:domain values of daml:inverseOf are both rdf:Property. Here is an example of daml:inverseOf being used:-

:hasName daml:inverseOf :isNameOf .
:Sean :hasName "Sean" .
"Sean" :isNameOf :Sean .

The second useful DAML construct that we shall go through is the daml:UnambiguousProperty class. Saying that a Property is a daml:UnambiguousProperty means that if the object of the property is the same, then the subjects are equivalent. For example:-

foaf:mbox rdf:type daml:UnambiguousProperty .
:x foaf:mbox <mailto:person@example.org> .
:y foaf:mbox <mailto:person@example.org> .

implies that:-

:x daml:equivalentTo :y .

Don't worry if this is all getting a bit too much... it's not essential to learning about the Semantic Web, but it is useful, since many Semantic Web applications now involve DAML. However, DAML is only one of a series of languages and systems being used.

Inference

The principle of "inference" is quite a simple one: being able to derive new data from data that you already know. In a mathematical sense, querying is a form of inference (being able to infer some search results from a mass of data, for example). Inference is one of the driving principles of the Semantic Web, because it will allow us to create SW applications quite easily.

To demonstrate the power of inference, we can use some simple examples. Let's take the simple car example: we can say that:-

:MyCar de:macht "160KW" .

Now, a German Semantic Web processor may well have the term ":macht" built into it, but an English processor, even if it has an equivalent term built in somewhere, will not understand code containing a term it has never seen. Here, then, is a piece of inference data that makes things clearer to the English processor:-

de:macht daml:equivalentTo en:power .

We have used the DAML "equivalentTo" property to say that "macht" in the German system is equivalent to "power" in the English system. Now, using an inference engine, a Semantic Web client could successfully determine that:-

:MyCar en:power "160KW" .

This is only a very simple example of inference, but you can see immediately how easily the system could scale up. Merging databases simply becomes a matter of recording in RDF somewhere that "Person Name" in your database is equivalent to "Name" in my database, and then throwing all of the information together and getting a processor to think about it.
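Concretely, the bridging statement might be nothing more than a single triple (both database namespaces here are invented):

@prefix daml: <http://www.daml.org/2001/03/daml+oil#> .
@prefix yourdb: <http://yourcompany.example.com/db#> .
@prefix mydb: <http://mycompany.example.com/db#> .

yourdb:personName daml:equivalentTo mydb:name .

An inference engine given this triple, plus dumps of both databases in RDF, can then treat the two fields interchangeably.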

Indeed, this is already possible with a Semantic Web tool that we have at our disposal today: CWM. Unfortunately, great levels of inference can only be provided using "First Order Predicate Logic" languages, and DAML is not entirely a FOPL language.

Logic

For the Semantic Web to become expressive enough to help us in a wide range of situations, it will become necessary to construct a powerful logical language for making inferences. There is a raging debate as to how and even whether this can be accomplished, with people pointing out that RDF lacks the power to quantify, and that the scope of quantification is not well defined. Predicate logic is better discussed in John Sowa's excellent Mathematical Background (Predicate Logic).

In particular, Pat Hayes is hard at work on a model for RDF that may ease the situation (2001-09), but there is still a great amount of uncertainty at this time. Of course, this does not stop us from using a Webized version of KIF or somesuch as a logical language on the Semantic Web.

At any rate, we already have a great range of tools with which to build the Semantic Web: assertions (i.e. "and"), and quoting (reification) in RDF, classes, properties, ranges and documentation in RDF Schema, disjoint classes, unambiguous and unique properties, datatypes, inverses, equivalencies, lists, and much more in DAML+OIL.
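For example, daml:UniqueProperty (the counterpart of the daml:UnambiguousProperty we met earlier) says that a subject may have only one value for the property, so two values for the same subject must be the same thing. A sketch:

:hasBirthMother rdf:type daml:UniqueProperty .
:Joe :hasBirthMother :Amy .
:Joe :hasBirthMother :Liz .

implies that:-

:Amy daml:equivalentTo :Liz .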

Note that Notation3 introduces a "context" construct, enabling one to group statements together and quantify over them using a specially designed logic vocabulary. Using this vocabulary, for example, one can express "or", using NANDs:-

{ { :Joe :loves :TheSimpsons } a log:Falsehood .
  { :Joe :is :Nuts } a log:Falsehood .
} a log:Falsehood .

Which can be read as "it is not true that Joe does not love The Simpsons and is not nuts". I resisted the temptation to make Joe a universally quantified variable.

Note that the above example does not serialize "properly" into XML RDF, because XML RDF does not have the context construct as denoted by the curly brackets in the example above. However a similar effect can be achieved using reification and containers.

The Power Of Semantic Web Languages

The main power of Semantic Web languages is that anyone can create one, simply by publishing some RDF that describes a set of URIs, what they do, and how they should be used. We have already seen that RDF Schema and DAML are very powerful languages for creating languages.

Because we use URIs for each of the terms in our languages, we can publish the languages easily without fear that they might get misinterpreted or stolen, and with the knowledge that anyone in the world that has a generic RDF processor can use them.

The Principle Of Least Power

The Semantic Web works on a principle of least power: the fewer rules, the better. This means that the Semantic Web is essentially very unconstraining in what it lets one say, and hence it follows that anyone can say anything about anything. When you look at what the Semantic Web is trying to do, it becomes very obvious why this level of power is necessary... if we started constraining people, they wouldn't be able to build a full range of applications, and the Semantic Web would therefore become useless to some people.

How Much Is Too Much?

However, it has been pointed out that this power will surely be too much... won't people be trying to process their shopping lists on an inference engine, and suddenly come up with a plan for world peace, or some strange and exciting new symphony?

The answer is (perhaps unfortunately!) no. Although the basic parts of the Semantic Web (RDF and the concepts behind it) are very minimally constraining, applications that are built on top of the Semantic Web will be designed to perform specific tasks, and as such will be very well defined.

For example, take a simple server log program. One might want to record some server logs in RDF, and then build a program that can gather statistics from the logs that pertain to the site: how many visitors it had in a week, and so forth. That doesn't mean that it'll turn your floppy disc drive into a toaster or anything; it'll just process server logs. The power that you get from publishing your information in RDF is that once published in the public domain, it can be repurposed (used for other things) much more easily. Because RDF uses URIs, it is fully decentralized: you don't have to beg for some central authority to publish a language and all your data for you... you can do it yourself. It's Do It Yourself data management.

The Pedantic Web

Unfortunately, there is an air of academia and corporate thinking lingering in the Semantic Web community, which has led to the term "Pedantic Web" being coined, and a lot of mis/disinformation and unnecessary hype being disseminated. Note that this very document was devised to help clear up some common misconceptions that people may have about the Semantic Web.

For example, almost all beginners to RDF go through a sort of "identity crisis" phase, where they confuse people with their names, and documents with their titles. For example, it is common to see statements such as:-

<http://example.org/> dc:creator "Bob" .

However, Bob is just a literal string, so how can a literal string write a document? What the author really means is:-

<http://example.org/> dc:creator _:b .
_:b foaf:name "Bob" .

i.e., that example.org was created by someone whose name is "Bob". Tips like these are being slowly collected, and some of them are being displayed in the SWTips guide, a collection of Semantic Web hints and tips maintained as a collaborative development project.

Education And Outreach

The move away from the "Pedantic Web", to some extent, is all part of a movement to bring the power of the Semantic Web to the people. This is a well documented need:-

[...] the idea that the above URIs reveal a schema that somehow fully describes this language and that it is so simple (only two {count 'em 2} possible "statements"), yet looks like the recipe for flying to Mars is a bit daunting. Its very simplicity enables it to evaluate and report on just about anything - from document through language via guidelines! It is a fundamental tool for the Semantic Web in that it gives "power to the people" who can say anything about anything.

- EARL for dummies, William Loughborough, May 2001

RDF Schema and DAML+OIL are generally languages that need to be learned, however, so what is being done to accommodate people who have neither the time nor patience to read up on these things, and yet want to create Semantic Web applications? Thankfully, many Semantic Web applications will be lower-end applications, so you'll no more need a knowledge of RDF than Amaya requires one to have a knowledge of (X)HTML.

Trust and Proof

The next step in the architecture of the Semantic Web is trust and proof. Very little is written about this layer, which is a shame since it will become very important in the future.

In stark reality, the simplest way to put it is: if one person says that x is blue, and another says that x is not blue, doesn't the whole Semantic Web fall apart?

The answer is of course not, because a) applications on the Semantic Web at the moment generally depend upon context, and b) applications in the future will generally contain proof-checking mechanisms and digital signatures.

Context

Applications on the Semantic Web will generally depend on context to let people know whether or not they trust the data. If I get an RDF feed from a friend about some movies that he's seen, and how highly he rates them, I know that I trust that information. Moreover, I can then use that information and safely trust that it came from him, and then leave it down to my own judgement just how much I trust his critiques of the films that he has reviewed.

Groups of people also operate on shared context. If one group is developing a Semantic Web depiction service, cataloguing who people are, what their names are, and where pictures of those people are, then my trust of that group is dependent upon how much I trust the people running it not to make spurious claims.

So context is a good thing because it lets us operate on local and medium scales intuitively, without having to rely on complex authentication and checking systems. However, what happens when there is a party that we know, but we don't know how to verify that a certain heap of RDF data came from them? That's where digital signatures come in.

Digital Signatures

Digital signatures are simply little bits of code that one can use to unambiguously verify that one wrote a certain document. Many people are probably familiar with the technology: it's the same key-based PGP-style thing that people use to encrypt and sign messages. We simply apply that technology to RDF.

For example, let's say I have some information in RDF that contains a link to a digital signature:-

this :signature <http://example.org/signature> .
:Jane :loves :Mary .

To ascertain whether or not we trust that Jane really loves Mary, we can feed the RDF into a trust engine (an inference engine that has a little digital signature checker built into it), and get it to work out if we trust the source of the information.

Proof Languages

A proof language is simply a language that lets us prove whether or not a statement is true. An instance of a proof language will generally consist of a list of inference "items" that have been used to derive the information in question, and the trust information for each of those items that can then be checked.

For example, we may want to prove that Joe loves Mary. The way that we came across the information is that we found two documents on a trusted site, one of which said that ":Joe :loves :MJS", and another of which said that ":MJS daml:equivalentTo :Mary". We also got the checksums of the files in person from the maintainer of the site.

To check this information, we can list the checksums in a local file, and then set up some FOPL rules that say "if file 'a' contains the information Joe loves Mary and has the checksum md5:0qrhf8q3hfh, then record SuccessA", "if file 'b' contains the information MJS is equivalent to Mary, and has the checksum md5:0892t925h, then record SuccessB", and "if SuccessA and SuccessB, then Joe loves Mary".

An example of this in Notation3 can be found in some of the author's proof example experiments, but here is the rules file:-

@prefix : <http://infomesh.net/2001/proofexample/#> .
@prefix p: <http://www.w3.org/2001/07/imaginary-proof-ontology#> .
@prefix log: <http://www.w3.org/2000/10/swap/log#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
 
p:ProvenTruth rdfs:subClassOf log:Truth .
 
# Proof
 
{ { <a.n3> p:checksum <md5:blargh>; 
           log:resolvesTo [ log:includes { :Joe :loves :MJS } ] }
log:implies
{ :Step1 a p:Success } } a log:Truth .
 
{ { <b.n3> p:checksum <md5:test>; 
          log:resolvesTo [ log:includes { :MJS = :Mary } ] }
log:implies
{ :Step2 a p:Success } } a log:Truth .
 
{ { :Step1 a p:Success . :Step2 a p:Success }
log:implies 
{ { :Joe :loves :Mary } a p:ProvenTruth } } a log:Truth .

The file speaks for itself, and when processed using CWM, does indeed work, producing the intended output. CWM doesn't have the capability to automatically check file checksums or digital signatures, but it is only a matter of time before a proper Semantic Web trust engine is written.

Ambient Information and SEM

The scope of information was discussed a little, but let's take into consideration what it really means to have a "local" and a "global" system.

In general, there are small and large scale systems, and interactions between the two will most likely form a huge part of the transactions that occur on the Semantic Web. Let's define what we mean by large, medium, and small scale systems.

Large Scale

An example of a large scale system is two companies that are undergoing a merger needing to combine their databases. Another example would be search engines compiling results based upon a huge range of data. Large scale Semantic Web systems generally involve large databases, and heavy duty inference rules and processors are required to handle the databases.

Medium Scale

Medium scale Semantic Web systems attempt to make sense out of the larger scale Semantic Web systems, or are examples of small scale Semantic Web systems joined together. An example of the former is a company trying to partially understand two large scale invoice formats enough to use them together. An example of the latter is two address book language groups trying to create a super-address book language.

Small Scale

Small scale Semantic Web systems are less widely discussed. By small scale Semantic Web systems, we mean languages that will be used primarily offline, or piles of data that will only be transferred with a limited scope, perhaps between friends, departments, or even two companies.

Sharing data on a local level is a very powerful example of how the Semantic Web can be useful in a myriad of situations. In the next section on evolution we shall be finding out how interactions between the different sized systems will form a key part of the Semantic Web.

SEM - SEmantic Memory

The concept of a SEmantic Memory was first proposed by Seth Russell, who suggested that personal database dumps of the RDF that one has collected from the "rest" of the Semantic Web (a kind of Semantic Cloud) would be imperative for maintaining a coherent view of data. For example, a SEM would most likely be partitioned into: data which is inherent to the whole Semantic Web (i.e., the schemata for the major languages such as XML RDF, RDF Schema, DAML+OIL, and so on); local data which is important for any Semantic Web applications that may be running (e.g. information about the logic namespace for CWM, which is currently built in); and data that the person has personally been using, is publishing, or has otherwise entered into the root context of the SEM.

The internal structure of a SEM will most likely go well beyond the usual triples structure of RDF, perhaps as far as quads or even pents. The extra fields are for contexts (an StID), and perhaps sequences; in other words, they are ways of grouping information within the SEM, for easy maintenance and updating. For example, it should become simple to delete all of the triples that were added in a certain context, just by removing every triple with that particular StID.

A lot of work on the Semantic Web has concentrated on making data stores (i.e. SEMs) interoperable, which is good, but that has led to less work being conducted on what actually happens within the SEM itself, which is not good, because the representation of quads and pents in RDF is therefore still up in the air. Obviously, statements can be modelled as they would be for reification:-

rdf:Statement rdfs:subClassOf :Pent .
_:s1 rdf:type :Pent .      # the statement itself
_:s1 rdf:subject :x .      # the usual reification fields:
_:s1 rdf:predicate :y .    #   subject, predicate, and object
_:s1 rdf:object :z .
_:s1 :context :p .         # plus a context (the StID)
_:s1 :seq "0" .            # and a sequence number

But clearly a dedicated pentuples format is always going to be more efficient, and avoid the perils of reification:-

:x :y :z :p "0" . # subject, predicate, object, context, sequence

This language also needs a default context flag that indicates the root context of the document. The root context of a document is the space into which the (non-quoted) assertions are parsed: the conceptual information space in which all of the assertions are taken to be true. Any quoted information in the document (for example, using the Notation3 context syntax) would be in a different (possibly anonymous) context from the root context of the document.
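As a rough sketch in Notation3 (all of the terms here are hypothetical), the distinction looks like this: the first statement below is asserted in the root context of the document, whereas the quoted formula is merely mentioned, in its own (here anonymous) context:-

@prefix : <http://example.org/contexts#> .

:Joe :loves :Mary .                        # asserted in the root context
:someReport :says { :Mary :loves :Bob } . # quoted: a separate, unasserted context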

TimBL appears, gauging from the CWM source code, to be using "...#_formula" as the root context for a document, which (if true) is a bit of a nasty hack... what if one creates a property with the same URI? Maintaining interoperability at this level of the Semantic Web is an important thing for Semantic Web developers to be investigating at this stage.

Evolution

A very important concept on the Semantic Web is that of evolution: going from one system to another. Two key parts of evolvability are partial understanding and transformability. We will find out next how these manifest themselves naturally when changing the scale of a system.

Partial Understanding: Large Scale to Medium Scale

The concept of partial understanding is a very important one on the Semantic Web, and can often be found in older documents that came out about the same time as the Semantic Web was first being theorized.

An example of partial understanding, when moving from a large scale system to a medium scale system, is a company trying to make sense out of two invoices, one from Company A and one from Company B. It is well known that both of the companies use similar fields in their invoices, so the company trying to make sense out of the invoices can easily compile a master list of expenditures by simply scraping the data from the two invoice languages. Neither Company A nor Company B need know that this is going on.

Indeed, TimBL included this example in his XML 2000 keynote:-

[...] what we'll end up doing in the future is converting things, so for example [...] in the Semantic Web we will have a relationship between two languages so that if you get an invoice in a language you don't understand, and you have... some business software which can pay invoices... by following links across the Semantic Web, your machine will be able to automatically convert it from one language to another, and so process it.

- Tim Berners-Lee
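As a sketch of what such a conversion might look like in Notation3 (invoiceA:, invoiceB:, and master: being entirely hypothetical namespaces, and using the old-style CWM universal quantification), a couple of rules would suffice to scrape both invoice formats into a single master list of expenditures:-

@prefix log: <http://www.w3.org/2000/10/swap/log#> .
@prefix invoiceA: <http://example.org/companyA/invoice#> .
@prefix invoiceB: <http://example.org/companyB/invoice#> .
@prefix master: <http://example.org/master#> .
@prefix : <http://example.org/vars#> .

this log:forAll :item, :cost .

# whatever Company A or Company B call it, record it as an expenditure
{ :item invoiceA:amountDue :cost } log:implies { :item master:expenditure :cost } .
{ :item invoiceB:total :cost } log:implies { :item master:expenditure :cost } .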

Transformability: Small Scale to Medium Scale

An example of small scale Semantic Web systems being joined together to make a medium sized Semantic Web system could be two groups that have published address book formats wanting to make a larger and better address book format by merging the two current formats. Anyone using one of the old address book formats could probably convert their data into the new format, and hence there would be a greater sense of interoperability. That's generally what happens when one goes from a small scale Semantic Web system to a medium scale Semantic Web system, although this is often not without some disadvantages and incompatibilities. The Semantic Web takes the sting out of it by automating 99% of the process (it can convert field A into field B, but it can't fill in any new data for you... of course, new fields can always be left empty for a while).
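As a sketch of how the merger itself might be recorded (abA:, abB:, and super: being entirely hypothetical namespaces for the two old formats and the merged one), the new format could simply publish a set of equivalences, which a processor could then use to convert old data automatically:-

@prefix daml: <http://www.daml.org/2001/03/daml+oil#> .
@prefix abA: <http://example.org/addressbookA#> .
@prefix abB: <http://example.org/addressbookB#> .
@prefix super: <http://example.org/superaddressbook#> .

# field A converts into field B, automatically
abA:phone daml:equivalentTo super:telephone .
abB:telephoneNumber daml:equivalentTo super:telephone .
abA:zip daml:equivalentTo super:postalCode .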

Facilitating Evolvability

How do we document the evolution of languages? This is a very important and indeed urgent question, and one which TimBL summarized quite neatly:-

Where for example a library of congress schema talks of an "author", and a British Library talks of a "creator", a small bit of RDF would be able to say that for any person x and any resource y, if x is the (LoC) author of y, then x is the (BL) creator of y. This is the sort of rule which solves the evolvability problems. Where would a processor find it? [...]

- Semantic Web roadmap, Tim Berners-Lee
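That small bit of RDF is essentially a one-rule Notation3 file (loc: and bl: here being hypothetical stand-ins for the two libraries' real schema URIs):-

@prefix log: <http://www.w3.org/2000/10/swap/log#> .
@prefix loc: <http://example.org/loc-schema#> .
@prefix bl: <http://example.org/bl-schema#> .
@prefix : <http://example.org/vars#> .

this log:forAll :x, :y .

# if x is the (LoC) author of y, then x is the (BL) creator of y
{ :y loc:author :x } log:implies { :y bl:creator :x } .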

One possible answer is: third party databases. Very often it is not practical to have (in TimBL's example) either the LoC or the BL record the fact that two of their fields are the same, so this information will have to be recorded by a reputable third party.

One such "third party" that was set up to investigate this is SWAG, the Semantic Web Agreement Group. Co-founded by Seth Russell, Sean B. Palmer, Aaron Swartz, and William Loughborough, the group aims to ensure interoperability on the Semantic Web. They set up what is possibly the first ever third party Semantic Web dictionary, the WebNS SWAG Dictionary.

Intertwingling: Difficult, But Important

Although the Semantic Web as a whole is still very much at a grassroots kind of level, people are starting to take notice; they're starting to publish information using RDF, and thereby making it fit for the Semantic Web.

However, not enough is being done to link information together... in other words, the "Semantic" part of the "Semantic Web" is coming along nicely, but where's the "Web"? People are not using other people's terms effectively; when they do use other terms, it is often because they're aimlessly trying to help, and they just generate noise in the process. If you're going to use other people's data, try to find out beforehand what the advantage of doing so actually is. For example, just because you use the term "dc:title" in your RDF rather than a home brewed ":title", does that mean that suddenly a Dublin Core application is going to be able to "understand" your code? Of course not. What it does mean, however, is that if information may need to be repurposed from the "dc:title" property in your instance in the near future, then you may gain some advantage: because "dc:title" is such a commonly used term, you may be able to modify a current rules file, or whatever.
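For example, the two alternatives look like this (mySchema: being the hypothetical home brew; the Dublin Core namespace is the real one):-

@prefix dc: <http://purl.org/dc/elements/1.1/> .
@prefix mySchema: <http://example.org/mySchema#> .

<http://example.org/somedoc> dc:title "An Example Document" .       # the commonly used term
<http://example.org/somedoc> mySchema:title "An Example Document" . # the home brew

Nothing magical happens either way; the gain is simply that far more existing rules files and applications already know about dc:title than about mySchema:title.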

Another part of the problem may be similar to one that the early World Wide Web experienced: why bother publishing a Web site when there is no one else's site to link to or be linked from? Why bother publishing a Web site when so few people have browsers? Why bother writing a browser when there are so few Web sites? Some people have to make the leaps for it all to happen, and that's a slow process.

What can be done about the situation? Well, it may hopefully sort itself out. Another well-known principle that applies very well to Semantic Web applications is that there is no point in reinventing the wheel; viz., if someone has already invented a schema which contains a comprehensive, well understood, and well used set of terms that you also need in your application, then there is no point in trying to redo the work that they have done. At some point this may lead to a form of "schema war", but survival of the fittest should see to it that a core of the best schemata are put to the most use. This is probably what TimBL means when he says that terms will just "emerge" out of the Semantic Cloud: when people keep using the term "zip", rather than just recording that my term "zip" is equivalent to your term "zip" which is equivalent to someone else's term "zip", we'll all just use the same URI, and hence interoperability will be vastly improved.

Does It Work? What Semantic Web Applications Are There?

I addressed this in a previous article: The Semantic Web: Taking Form, but it does bear repeating: the Semantic Web already works, and people are using it.

Semantic Web Applications

Unfortunately, the Semantic Web is dissimilar in many ways to the World Wide Web, not least in that you can't just point people to a Web site for them to realise how it works and what it is. However, a number of small scale Semantic Web applications have been written up. One of the best is Dan Connolly's arcs and nodes diagrams experiment:-

One of the objectives of the advanced development component of the Semantic Web activity is to demonstrate how RDF and Semantic Web technologies can be applied to the W3C Process to increase efficiency, reliability, etc. In the early stages of developing an RDF model of the W3C process, the tools I was using to visualize the model while working on it started working well enough that I started applying them to all sorts of stuff.

- Circles and arrows diagrams using stylesheet rules, Dan Connolly.

Of course, this is a rather demonstration-oriented Semantic Web project, but it does illustrate that applications can feasibly and easily be built using Semantic Web toolkits.

Another good example of the Semantic Web at work is Dan Brickley et al.'s RDFWeb. RDFWeb is an RDF database driven hypermedia blogspace: a site where all information is stored as RDF, and that RDF is then used to render XHTML. Plans are underway to incorporate more advanced Semantic Web principles into the site.

What Can I Do To Help?

There are many ways in which one can contribute to creating the Semantic Web. Here are a few of them:-

There are many other ways in which one can help as well: ask in the community for more details.

What Now? Further Reading

As of 2001-09, the amount of Semantic Web Education and Outreach material available can only really be described as "pitiful" (hence this introduction, for a start). Here's a short list of some of the good primers and materials currently available, in no particular order:-

For more information and all the latest news, Dave Beckett's Resource Description Framework (RDF) Resource Guide is absolutely brilliant.

Many Semantic Web and RDF developers hang out on the RDF IG IRC chatroom, on irc.openprojects.net, #rdfig.

Acknowledgements

Many thanks to Aaron Swartz, Dan Brickley, and Danny Ayers for reviewing this, and pointing out many important little details that I missed.

Sean B. Palmer, 2001-09